The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I reserve the right to refuse a contribution or article that attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
An interesting address?
UFO'S OR UAP'S, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANCIENT HISTORY, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but around the whole world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that carry out in-depth research, although they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter is not only a former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Watch out for a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up to Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventure among the stars!
Do you have questions or want to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
10-03-2022
ROUNDWORM WITH MINI-BRAIN MAKES REMARKABLY SMART DECISIONS
Jean-Paul Keulen
A worm with a decidedly small brain deploys its bite in a considered way when guarding its food supply.
At first glance, the feeding behaviour of the roundworm Pristionchus pacificus (P. pacificus) seems extremely simple. All this worm of about a millimetre long does is bite. Does it encounter bacteria it can eat? Chomp. Does it encounter a larva of the worm Caenorhabditis elegans (C. elegans)? Chomp. Does it encounter an adult C. elegans? Chomp.
But, as neuroscientists Kathleen Quach and Sreekanth Chalasani of the Salk Institute for Biological Studies have established in their lab, in reality quite a few trade-offs appear to lie behind the bites of P. pacificus. And that while the animal has to make do with a brain containing only about three hundred neurons, where our brains have some 86 billion.
Six hours of biting
What makes the situation between P. pacificus and C. elegans complex is that C. elegans plays a double role. This worm is both prey for P. pacificus and a competitor: it feeds on the same bacteria as P. pacificus, but does so one and a half times as fast.
Moreover, it is mainly the larvae of C. elegans that are suitable as prey. P. pacificus kills those with a single bite, after which the meal can begin. If P. pacificus wants to tackle an adult C. elegans, it is a different story altogether: six hours of biting before the prey/competitor finally succumbs is no exception, Quach and Chalasani's experiments show. But a bitten C. elegans does retreat after a single bite - and will therefore eat fewer of the bacteria that P. pacificus also has its eye on.
A C. elegans worm (right) flees from a biting P. pacificus.
In short, the biting behaviour of P. pacificus can serve two goals: either eating (bacteria or larvae), or chasing away competitors to protect the food supply.
To eat or to chase away
Now you might think that P. pacificus simply sinks its teeth into anything that looks like food. If that turns out to be a clump of bacteria or a larva, it immediately has something to eat. If it accidentally bites an adult C. elegans, it is initially out of luck - it cannot easily kill such a large worm - but in the longer term that still pays off in terms of the amount of bacteria available to it.
It turns out to be more complex than that, however. When there are few or no bacteria around, P. pacificus largely leaves adult C. elegans worms alone. The same goes when bacteria are abundant. Only when bacteria are scarce do adult C. elegans need to watch out: then P. pacificus starts biting. Moreover, in that situation P. pacificus appears to move faster and actively hunt down C. elegans worms that are after 'its' bacteria.
Its small brain notwithstanding, P. pacificus thus appears to operate quite cleverly. Its bite can be meant for feeding or for chasing away, depending on the circumstances.
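The observed pattern can be summarised as a simple decision rule: ignore adult competitors when bacteria are absent or abundant, and bite only when bacteria are scarce. The toy sketch below illustrates that rule; the threshold values are invented for illustration, and the worm's actual neural computation is precisely what the researchers are still trying to uncover.

```python
# Toy model of the observed biting pattern of P. pacificus toward adult
# C. elegans. The density thresholds are hypothetical, chosen only to
# illustrate the "bite only when food is scarce" rule from the experiments.

def bite_adult(bacteria_density, scarce_low=0.1, scarce_high=0.5):
    """Return True if biting an adult competitor is 'worth it':
    only when the bacterial food supply is scarce but not absent."""
    return scarce_low <= bacteria_density <= scarce_high

# No food worth defending -> leave adults alone
assert not bite_adult(0.0)
# Scarce food -> chase competitors away
assert bite_adult(0.3)
# Abundant food -> plenty for everyone, no need to bite
assert not bite_adult(0.9)
```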
Willingness to bite
Quach and Chalasani are far from done with their worms and have plenty of plans for follow-up research. So far they only looked at how much energy bacteria or worms yielded for P. pacificus, not at specific nutrients or toxins. The 'willingness to bite' also varies from worm to worm; the researchers would like to understand why some P. pacificus individuals are so much more aggressive than others.
But, they write, their ultimate goal is to find out how that handful of neurons makes such complex decisions.
Advances in flow cytometry drive small bioparticle research
For researchers exploring the nature of small bioparticles, like extracellular vesicles or artificial nanoparticles, flow cytometry has largely been out of reach. No longer.
During the process of exocytosis, cells release membrane-bound vesicles (shown) into the extracellular space. Such bioparticles facilitate cell-to-cell signalling.
Credit: Meletios Verras/Shutterstock
In science, some of the most valuable discoveries hide in plain sight. Such was the case for extracellular vesicles (EVs). The small lipid-bilayer compartments are released from cells and contain nucleic acids, proteins and lipids. For decades, most researchers considered them insignificant. Many referred to them simply as 'platelet dust' [1].
In 2006, a series of published papers detailed the roles of EVs in intercellular communication. The findings spurred a wave of research. Between 2010 and 2019, published mentions of EVs grew more than tenfold, from around 400 to more than 5,000 [2]. Researchers now believe that EVs, which can be characterized into distinct subtypes, are vital in cell-to-cell signalling, and could serve as drug-delivery vectors and disease biomarkers.
“Extracellular vesicles are a really hot area of research right now,” says Stephanie Brunelle, a molecular biologist and senior product manager for flow cytometry at the biotechnology company Luminex in Seattle. “Many think they could be the next big biomarker.”
The challenge, Brunelle says, is analyzing and quantifying them.
While a number of methods exist, one of the most logical ones, flow cytometry, was until recently out of reach [3]. Flow cytometry is a bench-standard technique for cell sorting and quantification, and lends itself to high-throughput methods. But it was not sensitive enough to assess EVs or any other small bioparticles, whether artificial nanoparticles or small bacterial cells. Advances in imaging, assays and software are now enabling small-particle flow cytometry, and will almost certainly drive an even bigger wave of published EV research in the years ahead.
A better flow
Researchers have long used flow cytometry to count and characterize cells as microfluidics guide them over a detector. But human cells can be relatively large, up to 150 micrometres in diameter. EVs are decidedly smaller. One particularly interesting subtype of EV, exosomes, has diameters of 30 to 100 nanometres, three orders of magnitude smaller than the average cell [4].
Particles that small often emit signals too dim for standard flow cytometers to reliably detect, pushing researchers and companies to improve them. For example, flow cytometers traditionally used a photomultiplier tube (PMT) as a sensor, but more modern devices incorporate more sensitive avalanche photodiodes or even CCD cameras, which can be five to 10 times as sensitive as PMTs. Luminex, for instance, makes a camera-based system.
“This technology is really great for detecting small particles,” Brunelle says.
Likewise, researchers have developed improved assays and detection algorithms, and their efforts have made the latest flow cytometers and techniques well suited to analysing and quantifying small bioparticles.
Work is now ongoing to detect even dimmer signals. Many researchers are now interested in specific molecules inside EVs or carried on their surface, which can yield important clues about EVs’ purpose and mechanisms. But signals from those molecules can be between 10,000 and one million times dimmer than standard cells.
“This is the crux of why it's been hard to apply flow cytometry, which works so well in cells, for these small particles,” says John Nolan, a biochemist at the Scintillon Institute, a research organization in San Diego, California, and CEO of Cellarcus Biosciences. At Cellarcus, Nolan and others have developed a membrane stain that causes EVs to fluoresce brightly enough for a camera to detect [5]. The company also uses fluorescent-tagged antibodies, which they validate to make sure they’re selectively binding to the desired surface molecules.
While researchers could develop similar tools themselves, the availability of a simple kit can be a force multiplier for research. “It’s a hard measurement to make, and you have to do about a dozen things correctly,” Nolan says. “You don’t want it to be a physics project. You want it to be a clinical test at some point.”
The right signal
Gains in the sensitivity of flow cytometry are welcome, but they also can increase noise, whether from debris in the sample, autofluorescence in the buffer, or other factors. Researchers need to manage that risk to get reproducible results.
Perhaps the most important consideration, Brunelle says, is to run controls to calibrate the equipment, as well as to validate the EV sample. Researchers need to look at the buffer solution first by itself and then with fluorescent stain or antibodies added in order to calibrate their equipment. That way, when they make measurements on actual samples, those will be comparable to measurements made at a different time or on a different sample.
Likewise, Brunelle recommends performing incremental dilutions on a sample. By gradually reducing the concentration, researchers can determine which mixtures emit too much signal, saturating the detector, and which yield too little to be seen.
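The dilution-series idea can be sketched in a few lines of code: generate a serial dilution from a stock sample and keep only the concentrations that fall inside the detector's usable window. All numbers below (stock concentration, detection floor, saturation ceiling) are hypothetical placeholders, not values from any particular instrument.

```python
# Sketch of an incremental (serial) dilution plan for a small-particle
# flow cytometry run. Concentrations below the floor are too dim to see;
# concentrations above the ceiling saturate the detector. All limits and
# the stock concentration are invented for illustration.

def dilution_series(stock_particles_per_ml, dilutions=8, factor=2.0):
    """Return concentrations of a serial dilution series (two-fold by default)."""
    return [stock_particles_per_ml / factor**i for i in range(dilutions)]

def usable(concentrations, lower=1e6, upper=1e9):
    """Keep only concentrations inside the assumed detector dynamic range."""
    return [c for c in concentrations if lower <= c <= upper]

series = dilution_series(1e10)   # hypothetical stock: 1e10 particles/mL
print(usable(series))            # dilutions worth actually measuring
```

In practice the researcher would run each dilution and check the measured event rate, but even this back-of-the-envelope plan shows why a single undiluted measurement can be meaningless: the stock may sit far above saturation.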
Using established standards and protocols is also important. Because small-particle flow cytometry is still new, not all the standards have been set. But the International Society for Extracellular Vesicles publishes a series of guidelines laying out the controls and protocols scientists should follow to make sure they have a well-validated particle population.
“Not all researchers are aware of this because EV research is still a little bit like the wild west, where people are kind of doing whatever they want,” Brunelle says. “But how can you be so sure that what you're seeing is true and real without using all the proper controls, especially something that's so technically challenging because it's so small?”
Software, too, can help reduce noise by picking out weak signals. Luminex has an algorithm that can determine that a dim streak across the field of view of the camera is a signal from a single particle moving across the detector. It will then integrate that into a stronger signal.
More work remains. Nolan acknowledges that some of the smallest particles are still at the edge of reliable detection for flow cytometry. Also, researchers have found a surprising heterogeneity in EVs. It would be useful to sort small particles into different subgroups as they pass through the flow cytometer, as is done commonly with cell types. That could help researchers pair their work with further analysis, such as mass spectrometry. One possibility, Nolan says, could be to attach magnetic beads to antibodies, but those would then need to be removed somehow, and unlike cells, which can proliferate after sorting, it’s not clear how to get a large enough volume of EVs.
Almost certainly, these incremental improvements will come. “This is building on 20, 25 years of quantitative flow cytometry, and these concepts are well established for quantitative cell analysis,” Nolan says. “We are largely just adapting it down to this new, dim regime.”
To learn more about flow cytometry instruments and assays suitable for small bioparticle research, visit our website.
References
1. Hargett, L. A. & Bauer, N. N. On the origin of microparticles: from "platelet dust" to mediators of intercellular communication. Pulm. Circ. 3, 329–340 (2013). https://doi.org/10.4103/2045-8932.114760
2. Veziroglu, E. M. & Mias, G. I. Characterizing extracellular vesicles and their diverse RNA contents. Front. Genet. 11, 700 (2020). https://doi.org/10.3389/fgene.2020.00700
3. Welsh, J. A. et al. MIFlowCyt-EV: a framework for standardized reporting of extracellular vesicle flow cytometry experiments. J. Extracell. Vesicles 9, 1713526 (2020). https://doi.org/10.1080/20013078.2020.1713526
4. van Niel, G., D'Angelo, G. & Raposo, G. Shedding light on the cell biology of extracellular vesicles. Nat. Rev. Mol. Cell Biol. 19, 213–228 (2018). https://doi.org/10.1038/nrm.2017.125
5. Crooks, E. T. et al. Engineering well-expressed, V2-immunofocusing HIV-1 envelope glycoprotein membrane trimers for use in heterologous prime-boost vaccine regimens. PLoS Pathog. 17, e1009807 (2021). https://doi.org/10.1371/journal.ppat.1009807
Scientists discover that the brains of former COVID patients have shrunk
People who were infected with the coronavirus show more signs of brain ageing afterwards than people who never had COVID or who went through a different respiratory infection, researchers at the University of Oxford have discovered. The brains of former COVID patients have shrunk slightly and show damage mainly in brain regions connected to the olfactory centre.
The scientists report their findings in the journal Nature. People who had COVID also turned out to take slightly longer, on average, over a simple puzzle such as 'connect the dots'. "That says something about the brain's processing speed and executive functions, in other words a person's ability to carry out a complex task," says Professor Gwenaëlle Douaud of the faculty of neuroscience at the University of Oxford.
Douaud and her team studied the brain scans of around eight hundred Britons between 50 and 80 years old who take part in an ongoing long-term medical cohort study. Half of them had had COVID in the meantime, the other half had not, which allowed the researchers to compare the situation before and after infection.
These are very subtle differences, which moreover vary from person to person, the scientist emphasizes. On average, the study found 0.2 to 2 percentage points of additional decline on top of the damage people accumulate with age anyway. Overall, the brain regions affected by the coronavirus are about ten years 'older' than they would otherwise have been, the study finds. It is not yet clear whether this loss of brain mass recovers. "That is now one of the big questions," says Douaud.
Remarkably, the differences are also visible in patients who were able to recover at home. Most other studies into the long-term effects of COVID concern patients who were severely ill in hospital.
Loss of smell
The scientists suspect that the damage is related to the loss of smell that until recently was a hallmark symptom of COVID - olfactory disorders are less common with the Omicron variant. The virus itself may enter the brain via the olfactory centre, but the damage could also be the result of an inflammatory response. Another possibility is that the damage simply stems from patients not using their olfactory centre for a while: unused areas of the brain often begin to shrink of their own accord.
Our lives really DO flash before us: Scientists record the brain activity of an 87-year-old man at the moment he died, revealing a rapid 'memory retrieval' process
Researchers recorded brain activity of 87-year-old as he died from a heart attack
Brain waves indicated rapid memory retrieval process occurred at time of death
Findings suggest our life does flash before our eyes through 'memory retrieval'
What happens in the brain as we die has been a source of mystery for centuries, but a new study suggests our lives really do flash before our eyes in our final moments.
Neuroscientists inadvertently recorded a dying brain when, while they were using electroencephalography (EEG) to detect and treat seizures in an 87-year-old man, he suffered a cardiac arrest.
It was the first time ever that scientists had recorded the activity of a dying human brain, according to the team.
Rhythmic brain wave patterns were observed to be similar to those occurring during memory retrieval, as well as dreaming and meditation.
This supports a theory known as 'life recall' – that we relive our entire life in the space of seconds like a flash of lightning just prior to death.
In fact, the brain may remain active and coordinated during and after the transition to death, and may even be programmed to 'orchestrate the whole ordeal', according to the researchers.
The team recorded a dying brain while they were using electroencephalography (EEG) to detect and treat seizures in an 87-year-old man and the patient suffered a heart attack. Pictured is EEG output over a 900 second period encompassing a seizure (S), suppression of left cerebral hemisphere activity (LS), suppression of bilateral cerebral hemisphere activity (BS), and cardiac arrest (CA). Point of death is CA, coinciding with changes in EEG patterns. FP1, F7, T3 and so on refer to different electrodes of the EEG which are attached or contact different regions on the scalp of the patient. Left indicates left brain hemisphere, right indicates right brain hemisphere
Scientists have recorded the brain activity of an 87-year-old male epilepsy patient while he was dying from a heart attack. Pictured are CT scans of the patient, whose identity was not disclosed. A and B show effects of subdural hematoma - a serious condition where blood collects between the skull and the surface of the brain - with a larger mass effect on the left side. C and D show the same scan sequences after decompressive craniotomy - a surgery to treat the condition
THE LIFE RECALL THEORY
Imagine reliving your entire life in the space of seconds.
Like a flash of lightning, you are outside of your body, watching memorable moments you lived through.
This process, known as 'life recall', can be similar to what it's like to have a near-death experience.
What happens inside your brain during these experiences and after death are questions that have puzzled neuroscientists for centuries.
The patient, who is unnamed, was admitted to the Vancouver General Hospital in British Columbia, where neurosurgeon Dr Ajmal Zemmar was working at the time.
The researchers took EEG recordings from his brain before he eventually underwent a fatal cardiac arrest.
EEG is a method of recording electrical activity of the brain that involves electrodes placed along the scalp.
'We measured 900 seconds of brain activity around the time of death and set a specific focus to investigate what happened in the 30 seconds before and after the heart stopped beating,' said Dr Zemmar, now based at the University of Louisville, Kentucky.
'Just before and after the heart stopped working, we saw changes in a specific band of neural oscillations, so-called gamma oscillations, but also in others such as delta, theta, alpha and beta oscillations.'
Brain oscillations (more commonly known as 'brain waves') are patterns of rhythmic brain activity normally present in living human brains.
The different types of oscillations, including gamma, are involved in high-cognitive functions, such as concentrating, dreaming, meditation, memory retrieval, information processing, and conscious perception, just like those associated with memory flashbacks.
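The oscillation bands named here are conventionally defined by frequency range. A minimal sketch of that mapping is below; the exact cutoffs vary slightly between sources, and the values used here are one common convention chosen for illustration.

```python
# Conventional EEG frequency bands (boundaries vary slightly by source;
# these cutoffs are one common convention, used here for illustration).
BANDS = [
    ("delta", 0.5, 4.0),
    ("theta", 4.0, 8.0),
    ("alpha", 8.0, 12.0),
    ("beta", 12.0, 30.0),
    ("gamma", 30.0, 100.0),
]

def classify(freq_hz):
    """Map an oscillation frequency in Hz to its conventional band name."""
    for name, lo, hi in BANDS:
        if lo <= freq_hz < hi:
            return name
    return "out of range"

print(classify(2.0))    # delta
print(classify(10.0))   # alpha
print(classify(40.0))   # gamma
```

Gamma, the band the researchers highlight around the moment of death, sits at the fast end of this range, which is why it is associated with demanding cognitive work such as memory retrieval.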
'Through generating oscillations involved in memory retrieval, the brain may be playing a last recall of important life events just before we die, similar to the ones reported in near-death experiences,' Zemmar said.
'These findings challenge our understanding of when exactly life ends and generate important subsequent questions, such as those related to the timing of organ donation.'
While this study is the first of its kind to measure live brain activity during the process of dying in humans, similar changes in gamma oscillations have been previously observed in rats kept in controlled environments.
This means it is possible that, during death, the brain organises and executes a biological response that could be conserved across species.
Electroencephalography (EEG) is a method of recording electrical activity of the brain that involves electrodes placed along the scalp
These measurements are, however, based on a single case and stem from the brain of a patient who had suffered injury, seizures and swelling.
This complicates the interpretation of the data, although Dr Zemmar said he hopes to investigate more cases in future.
'As a neurosurgeon, I deal with loss at times. It is indescribably difficult to deliver the news of death to distraught family members,' he said.
'Something we may learn from this research is: although our loved ones have their eyes closed and are ready to leave us to rest, their brains may be replaying some of the nicest moments they experienced in their lives.'
An electroencephalogram (EEG) is a recording of brain activity which was originally developed for clinical use.
During the test, small sensors are attached to the scalp to pick up the electrical signals produced when brain cells send messages to each other.
In the medical field, EEGs are typically carried out by a highly trained specialist known as a clinical neurophysiologist.
These signals are recorded by a machine and are analysed by a medical professional to determine whether they're unusual.
An EEG can be used to help diagnose and monitor a number of conditions that affect the brain.
It may help identify the cause of certain symptoms, such as seizures or memory problems.
More recently, technology companies have used the technique to create brain-computer interfaces, sometimes referred to as 'mind-reading' devices.
This has led to the creation and design of a number of futuristic sounding gadgets.
These have ranged from a machine that can decipher words from brainwaves without them being spoken to a headband design that would let computer users open apps using the power of thought.
It’s a cliché that everyone has heard when a person tells of being in danger or having a near-death experience: “I saw my life flash before my eyes.” Could this really happen? An 87-year-old man with epilepsy was connected to a brain-scanning monitor tracking seizures when he suffered a heart attack and died … with the monitor recording his brain activity until it stopped. His doctors now had an image of his thoughts before death. What, if anything, flashed before his eyes? Will it happen to all of us?
“We measured 900 seconds of brain activity around the time of death and set a specific focus to investigate what happened in the 30 seconds before and after the heart stopped beating. Just before and after the heart stopped working, we saw changes in a specific band of neural oscillations, so-called gamma oscillations, but also in others such as delta, theta, alpha and beta oscillations.”
In a study published in the journal Frontiers in Aging Neuroscience, research leader Dr Ajmal Zemmar, a neurosurgeon at the University of Louisville, explains that the unnamed man was on a continuous electroencephalography (EEG) machine while doctors tried to capture his brain waves during a seizure in an attempt to diagnose and treat his problem. The sudden heart attack leading to death allowed them to inadvertently record the activity of a dying human brain for the first time. Those brain waves answered the question.
“Brain oscillations (more commonly known as ‘brain waves’) are patterns of rhythmic brain activity normally present in living human brains. The different types of oscillations, including gamma, are involved in high-cognitive functions, such as concentrating, dreaming, meditation, memory retrieval, information processing, and conscious perception, just like those associated with memory flashbacks.”
The man showed the same brain waves a person has during memory flashbacks. In addition, the waves showed signs of concentration, memory retrieval and information processing – exactly the activities a brain would perform when tasked with organizing the facts, images and memories of a person’s life. Zemmar sounds confident that this is what the EEG recorded.
“Through generating oscillations involved in memory retrieval, the brain may be playing a last recall of important life events just before we die, similar to the ones reported in near-death experiences.”
This new information affects both the science and ethics of death. This brain activity impacts determining the moment of death for organ donations. It also impacts how family, hospice providers and others present at the deathbed react to what they are seeing – while the loved one may be still, their mind may be racing though many decades of memories, which would dictate a bedside manner that allows it to finish and perhaps even aids in the activity.
Will this happen to all of us eventually?
Before drawing any conclusions, the press release reminds us that this is a single case and the patient had an epileptic brain that was injured. Nonetheless, this type of activity has been observed in a controlled rodent study. Taken together, it “suggests that the brain may pass through a series of stereotyped activity patterns during death.” In other words … we may all see our lives pass before our eyes at the time of death.
What is humanity? Do our minds set us apart from the rest of nature and from the rest of Earth? Or does Earth have a collective mind of its own, and we’re simply part of that mind? On the literal face of it, that last question might sound ridiculous.
But a new thought experiment explores it more deeply, and while there’s no firm conclusion about humanity and a planetary mind, just thinking about it invites minds to reconsider their relationship with nature.
Overcoming our challenges requires a better understanding of ourselves and nature, and the same is true for any other civilizations that make it past the Great Filter.
Humanity is pretty proud of itself sometimes. We’ve built a more-or-less global civilization, we’ve wiped out deadly diseases, and we’ve travelled to the Moon. We’re so smart we’re taking steps to protect Earth from the type of calamitous impact that wiped out Earth’s previous tenants, the dinosaurs. But that’s just one perspective.
Another perspective says that we’re still primitive. That billions of us are in the grip of ancient superstitions. That nuclear war haunts us like a spectre. That tribalism still drives us to do horrible animalistic things to one another. That we’re not wise enough to manage our own technological advancement.
Both perspectives are equally valid. All that can really be said is that we’re not as primitive as we used to be, but we’re nowhere near as mature as we need to be if we hope to persist beyond the Great Filter.
The Juno spacecraft took this image of Earth during a gravity assist flyby of our planet in 2013. The fact that we can make a spacecraft take a picture of our home planet is a sign of intelligence. But how intelligent are we really? Credit: NASA/JPL-Caltech/SwRI/MSSS/Kevin M. Gill.
Can we come up with a way to explain what stage we’re at in our development? The authors of a new article think they can. And they think we can only do that if we take into account Earth’s planetary history, the collective mind, and the state of our technology.
This trio of scientists wrote the new article in the International Journal of Astrobiology. It’s titled “Intelligence as a planetary scale process.” The authors are Adam Frank from the University of Rochester, David Grinspoon from the Planetary Science Institute, and Sara Walker from Arizona State University. The article is a thought experiment based on our scientific understanding of Earth alongside questions about how life has altered and continues to alter the planet.
Humans tend to think of intelligence as a property belonging to individuals. But it’s also a property belonging to collectives. Social insects use their collective intelligence to make decisions. The authors take the idea of intelligence even further: from individual intelligence to collective intelligence, to planetary intelligence. “Here, we broaden the idea of intelligence as a collective property and extend it to the planetary scale,” the authors write. “We consider the ways in which the appearance of technological intelligence may represent a kind of planetary-scale transition, and thus might be seen not as something which happens on a planet but to a planet, much as some models propose the origin of life itself was a planetary phenomenon.”
We’ve divided Earth’s life forms into species. We recognize that evolution drove the development of all these species. But are we missing something in our urge to classify? Is it more correct to view life as planetary rather than as individual species? After all, species didn’t suddenly appear; each one appeared in an ongoing chain of evolution. (Except for the original species, whose origins remain clouded in mystery.) And all species are linked together in the biosphere. It’s often pointed out that Earth is a bacterial world and the rest of us are only here because of bacteria.
It’s worthwhile to recall the work of Vladimir Vernadsky. Vernadsky was an important founder of biogeochemistry. Wikipedia defines biogeochemistry as “… the scientific discipline that involves the study of the chemical, physical, geological, and biological processes and reactions that govern the composition of the natural environment (including the biosphere, the cryosphere, the hydrosphere, the pedosphere, the atmosphere, and the lithosphere).”
Vernadsky saw that the biosphere system is strongly linked to the Earth’s non-living systems. It’s difficult to understand the biosphere without looking at how it’s linked with other systems like the atmosphere. The linkage allows the biosphere to shape Earth’s other “spheres.”
Vernadsky wrote: “Activated by radiation, the matter of the biosphere collects and redistributes solar energy and converts it ultimately into free energy capable of doing work on Earth. A new character is imparted to the planet by this powerful cosmic force. The radiations that pour upon the Earth cause the biosphere to take on properties unknown to lifeless planetary surfaces, and thus transform the face of the Earth.”
In their article, the authors point out how organisms changed Earth’s biosphere. When the ability to photosynthesize appeared in lifeforms, individual lifeforms used it to great benefit. But collectively, they oxygenated Earth’s atmosphere in the Great Oxygenation Event (GOE). The photosynthesizers opened a pathway for their own continuation and for more complex life to develop. It not only changed the course of evolution, but it also changed the very geology and geochemistry of the planet. The authors liken the collective activity of photosynthetic organisms to collective intelligence.
This figure from the article illustrates multi-level networks as a property of planetary-scale operation of intelligence. Each layer of the coupled planetary systems constitutes its own network of chemical and physical interactions. Specific nodes in each layer represent links connecting the layers. Thus, the geosphere contains chemical/physical networks associated with processes such as atmospheric circulation, evaporation, condensation and weathering. These are modified by the biosphere via additional networks of processes such as microbial chemical processing and leaf transpiration. The technosphere adds an additional layer of networked processes such as industrial-scale agriculture, manufacturing byproducts and energy generation. Image Credit: Frank et al. 2022.
“Making sense of how a planet’s intelligence might be defined and understood helps shine a little light on humanity’s future on this planet—or lack thereof,” they write. “If we ever hope to survive as a species, we must use our intelligence for the greater good of the planet,” said Adam Frank.
That won’t come as a shock to Universe Today readers.
The authors point out how collective activity changes the planet. They base their experiment partly on the Gaia hypothesis, which says that the Earth’s non-biological systems—geochemistry, plate tectonics, the atmosphere, the oceans—interact with living systems to maintain the entire planet in a habitable state. Without the “collective intelligence” of the biological world, the Earth wouldn’t be habitable.
The authors use an example from forests to illustrate the point.
Earth’s great forests couldn’t exist without the network of mycorrhizal fungi that live below ground. Tree roots interact with the network and the network moves nutrients around in the forest. The fungi get carbon in return. Without this network, the trees couldn’t survive, and no great forests would emerge.
Mycorrhizal fungi are in a symbiotic relationship with plants. The relationship is usually mutualistic, the fungus providing the plant with water and minerals from the soil and the plants providing the fungus with photosynthesis products. Parasitic organisms are also part of the network. Image Credit: By Charlotte Roy, Salsero35, Nefronus – Adapted from https://commons.wikimedia.org/wiki/File:R%C3%A9seau_mycorhizien.svg, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=92921450
As schoolchildren, we learn that plants produce the oxygen we need to breathe. Without photosynthetic organisms, we couldn’t survive. So the collective activity of the plant world (and algae, etc.) changes the planet to a place hospitable for humanity and other complex life. But now in our short time on Earth, we’ve developed technology, which is the most powerful expression of our collective planetary intelligence. What does that mean for Earth?
The authors talk about four stages of Earth’s development and how we can understand the idea of collective planetary intelligence as those stages evolve.
“Planets evolve through immature and mature stages, and planetary intelligence is indicative of when you get to a mature planet.”
Adam Frank, co-author, “Intelligence as a planetary scale process.”
The first stage is an immature biosphere. Billions of years ago the Earth was an immature biosphere. The only lifeforms were bacteria, which couldn’t exert much force on Earth’s planetary systems. Because of this, there was no important global feedback between life and the planet. There was no collective intelligence.
The second stage was a mature biosphere. This was about 2.5 billion to 540 million years ago. Photosynthesis appeared and then plants. Photosynthesis oxygenated Earth’s atmosphere and an ozone layer developed. Life was making the Earth more stable and hospitable for itself. This is the collective planetary intelligence the authors are talking about.
Earth’s immature biosphere and mature biosphere stages. The mature biosphere stage was only possible once photosynthetic organisms created feedback with Earth’s non-biological processes, oxygenating the atmosphere and creating an ozone layer. Image Credit: University of Rochester illustration / Michael Osadciw
The third stage is where we’re at now, according to the authors. We live in an immature technosphere of our own creation. Our communication, transportation, electrical, and governmental networks are increasingly linked into a technosphere. A quick scan of headlines in consumer tech media shows how excited we can get about what we’ve created as a species (Meta, anyone?). But it’s wise not to get too excited. Why?
Because our technosphere is not linked with natural systems. Our immature technosphere largely ignores its impact on the Earth’s atmosphere, oceans, and the biosphere in general. We extract fossil fuels and push carbon into the atmosphere in an unregulated way. The danger is that this technological immaturity will force the Earth’s systems into a state that imperils the technosphere itself. The immature technosphere is working against itself and the biosphere that supports it.
The fourth stage represents a workable future. It’s the mature technosphere, and in a mature technosphere, our technological intelligence benefits the Earth. For example, renewable energy sources like solar energy will displace fossil fuels and help the climate regulate itself and maintain its habitability. Technological agriculture will strengthen the Earth’s soil systems rather than degrade them. We’ll use our technology to build cities that co-exist with natural systems rather than dominating them. But there are a lot of unknowns.
Earth’s immature technosphere and mature technosphere stages. The mature technosphere stage will be possible when we use our technology to maintain Earth’s life-supporting systems rather than to degrade them. Image Credit: University of Rochester illustration / Michael Osadciw
“Planets evolve through immature and mature stages, and planetary intelligence is indicative of when you get to a mature planet,” Frank says in a press release. “The million-dollar question is figuring out what planetary intelligence looks like and means for us in practice because we don’t know how to move to a mature technosphere yet.”
In a mature technosphere, systems would interact in mutually beneficial ways, like the trees and the mycorrhizal network in forests. A network of feedback loops both technological and natural would work intelligently to maintain habitability. This would be an entirely new arrangement, and the complexity would allow new capabilities to emerge. The emerging capabilities are one hallmark of a mature technosphere. Another is self-maintenance.
This figure from the article is a schematic representation of the evolution of coupled planetary systems in terms of degrees of planetary intelligence. The authors propose five possible properties required for a world to show cognitive activity operating across planetary scales (i.e. planetary intelligence). These are: (1) emergence, (2) dynamics of networks, (3) networks of semantic information, (4) appearance of complex adaptive systems, (5) autopoiesis. Different degrees of these properties appear as a world evolves from abiotic (geosphere) to biotic (biosphere) to technologic (technosphere). Image Credit: Frank et al. 2022.
“The biosphere figured out how to host life by itself billions of years ago by creating systems for moving around nitrogen and transporting carbon,” Frank says. “Now we have to figure out how to have the same kind of self-maintaining characteristics with the technosphere.”
There are some signs that we’re groping towards a mature technosphere, but they’re mostly crisis-driven. In 1987, we banned the ozone-harming class of chemicals called chlorofluorocarbons (CFCs) after scientists found a hole in the ozone layer. Acid rain is caused by sulphur dioxide and nitrogen oxides, and we developed international agreements to limit them after scientists found that acid rain damages soil, trees, fish, and other aquatic animals. DDT was used to kill pests and malarial mosquitoes, but many countries banned its use when scientists found that it persisted in the environment and caused population declines in birds of prey, among other biosphere-harming effects.
This figure from the article shows timescales for interventions at different proposed levels of planetary intelligence. For so-called ‘mature biospheres’, feedbacks or interventions occur across a range of timescales, from decades (DMS (dimethyl sulphide) ocean temperature regulation) to millions of years for CH4 climate regulation. For ‘immature technospheres’, where the feedbacks or interventions are inadvertent, timescales run from decades to centuries. For ‘mature technospheres’, interventions are intentional and designed to maintain the sustainability of both the biosphere and the technosphere as a coupled system. Ozone replenishment and climate mitigation would occur on decadal to century timescales, while intentional changes in stellar evolution (if possible) would define the longest timescales at tens to hundreds of millions of years. Image Credit: Frank et al. 2022.
So there’s been some progress towards planetary intelligence. But those successes are mostly corrections to previous bad behaviour. Can we be more proactive?
We might be starting to. We’re developing systems to detect, catalogue, and deflect dangerous asteroids that pose a collision hazard with Earth. If we can do that, we can protect the entire biosphere from calamity, along with our own civilization. NASA and the ESA are working on planetary defence, and NASA launched a technology demonstration mission in 2021. If we can use technology to protect the entire planet, that must constitute a step toward a mature technosphere.
Some of these efforts are heartening, but we have a long way to go, and this thought experiment can help us think more clearly about it. “We don’t have planetary intelligence or a mature technosphere yet,” Frank said. “But the whole purpose of this research is to point out where we should be headed.”
Are the development of planetary intelligence and a mature technosphere hallmarks of civilizations that make it past a “Great Filter?” Maybe. That idea dovetails with Frank’s other work in the search for alien technosignatures on distant exoplanets.
“We’re saying the only technological civilizations we may ever see—the ones we should expect to see—are the ones that didn’t kill themselves, meaning they must have reached the stage of a true planetary intelligence,” he says. “That’s the power of this line of inquiry: it unites what we need to know to survive the climate crisis with what might happen on any planet where life and intelligence evolve.”
For we lifeforms on Earth at this time, Anthropogenic Global Warming is the biggest threat to a sustainable biosphere. While we can debate what it is about our species that drives us to want more stuff, consume more stuff and create more pollution, the debate about AGW itself is over. It’s happening and we’re causing it.
There are some glimmers of planetary intelligence flickering on the horizon. But we’ve got a long way to go yet. Will we become intelligent enough to make it past the climatic Great Filter?
A new process that turns atmospheric carbon dioxide, desorbed from an absorbent, into dry ice reduces the energy input needed for carbon capture.
Cryo-DAC, a new technology for capturing carbon dioxide from the air, can use existing port infrastructure for ships that transport liquefied natural gas, as well as the infrastructure used to prepare city gas.
Carbon capture is playing an increasingly prominent role in plans to combat climate change. A new process for direct air capture, which involves capturing carbon dioxide (CO2) from the atmosphere, promises to greatly enhance the efficiency of the technology.
“Direct air capture has great potential for removing CO2 from the atmosphere on massive scales,” says Soichiro Masuda at the R&D/Digital Division of the Japanese energy-provider Toho Gas. “And it has evolved rapidly in the past several years.”
Direct air capture complements other technologies that capture carbon from industrial emissions, but the lower levels of CO2 in atmospheric air make it considerably more challenging. “Efficiency has continued to be a challenge for direct air capture, as the steps that isolate CO2 from atmospheric air require the input of energy,” says Masuda. “Burning fossil fuel to provide the energy input ends up creating more carbon emission for the sake of capturing carbon.”
“Direct air capture technology is a key part of our corporate strategy to reach carbon neutrality by 2050,” says Masuda. Now, Toho Gas and Nagoya University have started research and development aimed at realizing carbon neutrality, and have devised an improved direct air capture technology, called Cryo-DAC, that largely overcomes this problem.
Diagram depicting the carbon cycle (left) of Cryo-DAC (right), the direct air capture technology developed by researchers at Toho Gas and Nagoya University.
A key advantage of recycling carbon by Cryo-DAC is that it can use existing infrastructure such as ports for ships that transport liquefied natural gas, along with the associated infrastructure used to prepare city gas for industrial and household use. Natural gas is imported in liquefied form at about −162 degrees Celsius. Japan is one of the world’s major importers of liquefied natural gas, accounting for nearly 20% of global imports.
“Ever since Japan first imported natural gas in 1969, we’ve been exploring ways to exploit the cold energy of liquid natural gas,” explains Masuda. “We think we’ve finally found a solution.” Liquefied natural gas is vaporized by exchanging heat with seawater; the cold energy generated in this exchange is used for industrial purposes such as liquefying industrial gases. Much of the cold energy, however, was wasted.
Cryo-DAC uses cold energy, thereby minimizing the thermal energy needed for the process. Of the various types of direct air capture being developed worldwide, Cryo-DAC employs a method that captures and isolates CO2 with chemical absorbents. “The scalability of the chemical absorption method is well suited for collecting massive amounts of CO2,” says Masuda. “This involves collecting atmospheric air, absorbing CO2 in a solvent, and then isolating the CO2 from the solvent. This last step, however, requires large amounts of heat, creating carbon emission.”
Using dry ice to create a vacuum
The research team designed a new process with a chamber in which CO2 is sublimated into dry ice using the cold energy of liquefied natural gas. This chamber is connected to another in which CO2 is absorbed in a solvent; the phase change from gaseous CO2 to dry ice lowers the pressure inside, which causes the solvent and the absorbed CO2 to evaporate. “As a result, CO2 can be recovered from the solvent at near room temperature, minimizing the thermal energy needed,” explains Yoshito Umeda, a professor at Nagoya University.
Schematic diagram of the cryopump used in Cryo-DAC.
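A rough sketch of why the sublimation chamber acts as a cryopump: the equilibrium vapour pressure of CO2 over dry ice falls off exponentially with temperature, so at LNG temperatures nearly all the CO2 freezes out and the gas pressure collapses. The Clausius-Clapeyron estimate below is anchored at the known sublimation point (1 atm at 194.7 K) and assumes a constant enthalpy of sublimation of about 26.1 kJ/mol; it is an order-of-magnitude illustration, not a figure from the Cryo-DAC design.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def co2_sublimation_pressure(T, dH_sub=26100.0, T_ref=194.7, P_ref=101325.0):
    """Estimate the equilibrium vapour pressure of CO2 over dry ice (Pa)
    via the Clausius-Clapeyron relation, anchored at the known sublimation
    point of 1 atm at 194.7 K (-78.5 °C). dH_sub is the enthalpy of
    sublimation (J/mol), treated as constant over the temperature range."""
    return P_ref * math.exp(-(dH_sub / R) * (1.0 / T - 1.0 / T_ref))

# At the LNG import temperature of about -162 °C (111 K), CO2's vapour
# pressure over dry ice drops to well under a pascal, so freezing CO2 out
# in one chamber pulls a near-vacuum on the connected absorber chamber.
p_lng = co2_sublimation_pressure(111.0)
print(f"CO2 vapour pressure at 111 K: {p_lng:.2f} Pa")
```

That exponential collapse in vapour pressure is the "free" pumping work the LNG cold energy provides, which is why the solvent can be regenerated near room temperature instead of being boiled.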
The output of Cryo-DAC is high-pressure CO2 gas. Toho Gas plans to use the captured CO2 as a raw material for city gases that the company provides to its customers. “High-pressure CO2 is needed to produce methane, the main component of city gas, that can be obtained by reacting CO2 and hydrogen. While CO2 for methanation is typically prepared with compressors, Cryo-DAC has the potential to separate CO2 from air and generate high-pressure CO2 at low cost. Although city gas leaves a carbon footprint when burned, direct air capture with Cryo-DAC could offset these emissions,” says Masuda. “The International Energy Agency predicts that the demand for natural gas will continue to increase until 2050, unlike other major fossil fuels like oil or coal. We thus see Cryo-DAC as a key part of future gas infrastructure with net-zero carbon emission.”
The research is now a part of Japan’s Moonshot Research and Development Program, the Cabinet Office’s initiative to fund high-risk, high-impact research projects. The team includes collaborators at Tokyo University of Science, Chukyo University and the University of Tokyo, who are enhancing the materials and processes used in Cryo-DAC. The group is currently developing a solvent with higher absorption capabilities, as well as trying to achieve a continuous flow from CO2 sublimation to the output of high-pressure CO2. The aim is to establish the core technology by 2022 so that the system can operate continuously with a capacity of 1 tonne of CO2 per year in 2024. The group also aspires to design equipment for commercial use, and create detailed plans for implementing the system in a real-world setting by 2029.
“By using existing infrastructure for gas-consuming appliances and pipelines, we expect to transition smoothly to carbon neutrality without imposing a significant burden on our customers or the wider society,” says Masuda.
Darwin’s Natural Selection Theory May Not Be True, Gene Study Says
Researchers from Ghana and the University of Haifa, Israel, have published a study in the journal Genome Research that questions the role of randomness in Darwin’s theory of natural selection and could reshape our picture of human evolutionary history. According to the researchers, mutations have long been attributed to randomness, and this assumption has been the backbone of the theory of evolution until now. Instead, the researchers provide evidence of non-random mutations by showing “a long-term direct mutational response to environmental pressure.”
For over 160 years, the scientific community has followed Darwin’s theory of natural selection, which holds that new mutations arise entirely at random and that nature then selects among them. But a new study suggests that some mutations are not random at all, and instead arise in response to environmental pressures.
Non-Randomness Versus Accidental Natural Selection Theory
This is in direct contradiction to Darwin’s longstanding theory of natural selection, which argues that all genetic mutations are random and accidental, with beneficial traits then being passed on through generations of breeding. This has long been a key tenet of neo-Darwinism, but the researchers argue that at least one helpful genetic mutation was not random at all: the human haemoglobin S (HbS) mutation that protects against malaria.
Lead researcher Professor Adi Livnat, from the University of Haifa, Israel, said:
“For over a century, the leading theory of evolution has been based on random mutations. The results show that the HbS mutation is not generated at random but instead originates preferentially in the gene and in the population where it is of adaptive significance. We hypothesize that evolution is influenced by two sources of information: external information that is natural selection, and internal information that is accumulated in the genome through the generations and impacts the origination of mutations.”
Professor Livnat is referring to the unique approach adopted by his team, in which the HbS mutation was isolated to distinguish between mutation origination and natural selection. The team tracked “de novo” mutations: mutations that appear in an offspring but were not inherited from either parent, reported The Daily Mail.
Interestingly, the HbS mutation was found to occur more frequently in populations where malaria is endemic, i.e., Africa, suggesting that certain mutations arise more frequently where they are of adaptive significance. The scientists behind the latest study hypothesize that evolution is influenced both by external information (natural selection), and internal information (generational genetic pools).
For over 160 years, based on Darwin’s natural selection theory, we have been taught that evolution through mutation is random and accidental, but the latest study shows that this isn’t true for malaria.
Lamarckism, Environmental Pressures and De Novo Mutations
This new thinking about natural selection has actually been around for a long time, but the recent study demonstrates it for the human haemoglobin malaria mutation.
Many scientists have argued that complex and impressive adaptations in the eyes, brain, or hands cannot be attributed to randomness alone. Neither can the entire process be explained by Lamarckism, which posits that all beneficial adaptations come from direct environmental pressure. The HbS mutation itself illustrates the trade-off: it provides protection against malaria for people with one copy, but causes sickle cell anemia in those with two copies, reported Salon.
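That one-copy/two-copy trade-off is the textbook case of heterozygote advantage, and classical population genetics predicts the allele frequency it settles at. The sketch below uses illustrative selection coefficients, not values measured in the study:

```python
def hbs_equilibrium(s, t):
    """Equilibrium HbS allele frequency under heterozygote advantage.
    Relative fitnesses: AS carriers = 1, AA homozygotes = 1 - s
    (malaria mortality), SS homozygotes = 1 - t (sickle cell anemia).
    The classic balancing-selection result is q* = s / (s + t)."""
    return s / (s + t)

def next_generation(q, s, t):
    """Allele frequency of HbS after one generation of selection."""
    p = 1.0 - q
    w_bar = p * p * (1.0 - s) + 2.0 * p * q + q * q * (1.0 - t)
    return (p * q + q * q * (1.0 - t)) / w_bar

# Illustrative (not measured) selection coefficients:
s, t = 0.1, 0.8
q = 0.01  # start from a rare allele
for _ in range(500):
    q = next_generation(q, s, t)
print(f"simulated equilibrium: {q:.4f}, analytic: {hbs_equilibrium(s, t):.4f}")
```

Under these placeholder coefficients the allele settles near s/(s + t), roughly 0.11, which is why HbS can persist in malaria-endemic regions despite the cost to homozygotes. The new study is about something different again: how often the mutation *originates*, not just how selection maintains it.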
“This shows empirically for the first time a directional response of mutation to a specific long-term environmental pressure. This sort of result cannot be explained by Neo-Darwinism, which is limited to explaining minor, gross-level effects on average mutation rates, not responses of specific mutations to specific environmental pressures. Therefore, the implications are that here there is an empirical finding that Neo-Darwinism really cannot explain, which challenges the notion of random mutation on a fundamental level,” added Dr Livnat.
Dr Livnat and his lab manager, Dr Daniel Melamed, traced the de novo emergence of the HbS mutation to its origins, showing that the malaria-protective mutation actually arises de novo more frequently in sub-Saharan Africans, a population that has been exposed to centuries of malarial selection pressure, than in Europeans. A truly random mutation would have an equal chance of appearing in both populations, as per the Darwinian postulate, but that is not what happened.
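A claim like that comes down to asking whether the de novo mutation counts observed in the two populations are consistent with a single underlying rate. Here is a minimal sketch of such a comparison using made-up counts; the paper’s actual data and statistical methods differ:

```python
from math import comb

def binom_pmf(k, n, p):
    """Binomial probability of k successes in n trials."""
    return comb(n, k) * p**k * (1.0 - p)**(n - k)

def rate_ratio_pvalue(k_a, n_a, k_b, n_b):
    """Exact conditional test of equal de novo mutation rates.
    Given k = k_a + k_b total events, under the null hypothesis k_a
    follows Binomial(k, n_a / (n_a + n_b)); the two-sided p-value sums
    all outcomes at least as unlikely as the observed one."""
    k, p = k_a + k_b, n_a / (n_a + n_b)
    obs = binom_pmf(k_a, k, p)
    return sum(binom_pmf(i, k, p) for i in range(k + 1)
               if binom_pmf(i, k, p) <= obs + 1e-12)

# Hypothetical counts (NOT the paper's data): de novo HbS events found
# in equal numbers of screened genomes from each population.
p_val = rate_ratio_pvalue(k_a=9, n_a=1000, k_b=1, n_b=1000)
print(f"two-sided p-value under equal-rate null: {p_val:.4f}")
```

With these invented counts the null of a single shared rate would be rejected at the 5% level; the point of the sketch is only that a lopsided split of de novo events between populations is exactly what a "random, population-independent mutation" model cannot easily accommodate.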
“Mutations defy traditional thinking. The results suggest that complex information that is accumulated in the genome through the generations impacts mutation, and therefore mutation-specific origination rates can respond in the long-term to specific environmental pressures. Mutations may be generated nonrandomly in evolution after all, but not in the way previously conceived. We must study the internal information and how it affects mutation, as it opens the door to evolution being a far bigger process than previously conceived,” Livnat concluded.
Previous studies using Lamarckism as a theoretical base looked for immediate mutational adaptations to environmental stressors. Other studies, which found Lamarckism too limited in its scope, used only Darwinian natural selection and looked for random internal genetic mutations.
The current study gives scientists motive to reconsider current practices “of measuring mutation rates as averages across a multitude of positions on the genome.” This also opens up the field to study mutations other than HbS to see if the story of human evolution is actually random or smart by design!
Top image: This arc of five hominin skulls has been used for over 100 years to illustrate evolution driven by random mutation and natural selection, but a new study shows this picture to be incomplete for at least one malaria-related mutation. Source: Smithsonian
It looks like I'm on somebody's list in the government. I'm on Wikileaks, a site that discloses secret documents and data of the US government so that the public can learn about it. It concerns a UFO article on the Before Its News site a few years back, and the email is addressed to a US government spy (think tank) news site that pretends to gather world news for the public, but really gathers it for US government intelligence. I know this because this news site, https://www.stratfor.com, appears in a lot of documents on Wikileaks, including ones to Hillary Clinton and several US presidents. So... big brother is watching. Let's hope those poor saps are learning from all this UFO and alien intel they are gathering, so that change will come from within the government. Scott C. Waring
The global climate is warming, and Earth’s polar regions are feeling the effects. A new study of the South Orkney Islands shows that the region has warmed significantly since the 1950s, at a rate that exceeds the global average.
As the islands warm, plant life is spreading.
The South Orkney Islands lie about 600 km (375 miles) northeast of the Antarctic Peninsula’s tip. Britain and Argentina both lay claim to the group of islands. Both nations maintain research stations in the South Orkneys: Argentina has one on Laurie Island and Britain has one on Signy Island.
A study based on Signy Island data going back to the 1950s shows that the climate is warming and that the spread of vascular plants in the warming conditions is turning more of the island green, especially since 2009. The study is “Acceleration of climate warming and plant dynamics in Antarctica,” published in the journal Current Biology. The lead author is Nicoletta Cannone from the Università degli Studi dell’Insubria, Dip. Scienza e Alta Tecnologia, Italy.
While the South Orkneys are separated from Antarctica by about 600 km, they still have a polar climate. About 90% of the islands were glaciated as of 2009, and the summers are very short and very cold. Ice-covered seas surround the South Orkneys from late April to November.
But the new study shows that things are changing on these remote islands. According to the paper, the two species of vascular plants on Signy Island have responded to the acceleration of climate change with a “striking advance.”
“This is the first evidence in Antarctica for accelerated ecosystem responses to climate warming, confirming similar observations in the Northern Hemisphere.”
From “Acceleration of climate warming and plant dynamics in Antarctica” by Cannone et al. 2022
The warming hasn’t been a continuous trend. There was one period of pronounced cooling in the years since the study began. The study points out that “… a short but intense cooling occurred from the Antarctic Peninsula to the South Orkney Islands…” between 1999 and 2016.
But air temperature warming resumed in 2012 on Signy Island, accelerating the expansion of the two vascular plant species. “We also hypothesize that the ‘pulse’ climatic event of the strong air cooling detected in 2012 did not appear to influence the vegetation community dynamics on this island,” the authors write. “The lack of negative impacts of the strong pulse cooling event in 2012 on both species could be explained by their ability to perform photosynthesis at low ambient temperatures.”
This figure from the study shows the Summer Air Temperature at Signy Island. Blue dots are SAT between 1960 and 2011, and orange dots are SAT between 2012 and 2018. Image Credit: Cannone et al. 2022.
Other research shows that the same type of accelerated ecosystem responses from climate warming occurs in the Arctic. A 2018 research article reported that plants are increasing their northern range in the Arctic and getting taller. A 2020 paper showed that the warming climate creates terrestrial algae blooms in Antarctica. But the authors of this paper say theirs is the first research to document the advance of vascular plants in the Antarctic. They also say that ongoing climate change will significantly affect the region.
“This is the first evidence in Antarctica for accelerated ecosystem responses to climate warming, confirming similar observations in the Northern Hemisphere,” they explain in their paper. “Our findings support the hypothesis that future warming will trigger significant changes in these fragile Antarctic ecosystems.”
There are two species of vascular plants native to Signy Island. One is D. antarctica, a flowering plant known as Antarctic Hair Grass. The other is C. quitensis, another flowering plant that’s also called Antarctic Pearlwort.
This figure from the study shows how climate warming resumed at Signy Island after the 2012 cooling and accelerated the expansion of D. antarctica and C. quitensis. D1 through D5 represent plant sites of increasing density. D1 is the least dense site, and D5 is the densest site. Image Credit: Cannone et al. 2022.
“In the almost six decades up to 2018, D. antarctica exhibited a very large increase in the number of sites of occurrence, which doubled between 1960 and 2009 and then again between 2009 and 2018,” the authors write. C. quitensis expanded even more. “Colobanthus quitensis also showed a large expansion, even more so than D. antarctica in the last decade, involving both the number of sites of occurrence and their extent…” the paper says.
The number of sites with D. antarctica doubled between 1960 and 2009. Then it doubled again between 2009 and 2018. C. quitensis expanded even more than D. antarctica in the last decade.
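Those two doublings imply a sharply accelerating expansion: the first took 49 years, the second only 9. A back-of-the-envelope conversion of the doubling times into continuous annual growth rates (a simple illustration, not a calculation from the paper):

```python
import math

def annual_growth_rate(doubling_time_years):
    """Continuous exponential growth rate r satisfying exp(r * T) = 2."""
    return math.log(2.0) / doubling_time_years

# D. antarctica site counts doubled over 1960-2009 and again over
# 2009-2018, per the paper.
r_early = annual_growth_rate(2009 - 1960)  # first doubling: 49 years
r_late = annual_growth_rate(2018 - 2009)   # second doubling: 9 years
print(f"early: {r_early:.3f}/yr, late: {r_late:.3f}/yr, "
      f"speed-up: {r_late / r_early:.1f}x")
```

Under this simple exponential model, the expansion rate in the last decade was roughly five times the rate of the preceding half-century.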
This figure from the study illustrates the spread of both vascular plants native to Signy Island going back to 1960. The top row is D. antarctica, and the bottom row is C. quitensis. From the paper: “Distribution of D. antarctica and C. quitensis in 1960 (yellow dots) (A and D), 2009 (B and E), and 2018 (C and F) (green and magenta dots) in relation with the patterns of Holocene deglaciation and glacier boundaries and indicating the occurrence (magenta dots) or absence (green dots) of marine vertebrate disturbance in 2009 and 2018. Legend: dark blue, glacier boundaries as recorded in 2016; blue, glacier boundaries during the Little Ice Age; pale blue, terrains deglaciated between 6600 years BP and the Little Ice Age; white, terrains deglaciated before 6600 years BP.” Image Credit: Cannone et al. 2022.
The warming climate isn’t the only factor in this study. The image above shows areas of marine vertebrate disturbance. That refers to fur seals that inhabit the island. “In the last decade, the impact of fur seal disturbance on both species decreased, becoming almost negligible,” the authors explain. “During the last decade, both species expanded in response to air temperature warming and release from the limitation of animal disturbance.”
Climate change doubters might think they’ve found ammunition here. Some people might want to emphasize the reduction in animal disturbance as a factor in plant spread and downplay the effect of climate warming. The researchers don’t discount reduced animal disturbance, but it’s a secondary factor. “We also hypothesize that the accelerated population expansion of D. antarctica and C. quitensis could result from a combination of climate warming and the recently reduced impacts of animal disturbance. This hypothesis is compatible with observations in the Northern Hemisphere, in particular in Europe, where land-use change correlates with vegetation change but, as here, the primary driver of these responses was climate warming,” they write.
These are the two vascular plants native to Signy Island. On the left is D. antarctica and on the right is C. quitensis. Image Credit: L: By Lomvi2 – Own work, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10372682. Image Credit: R: By Liam Quinn – Flickr: Antarctic Pearlwort, CC BY-SA 2.0, https://commons.wikimedia.org/w/index.php?curid=15525940
What does increased native plant growth due to a warming climate mean for the future of the Antarctic? On the face of it, there could be some benefits. Increased plant growth removes more CO2 from the atmosphere through photosynthesis. “Such climate warming may benefit some and possibly many native Antarctic terrestrial species and communities in isolation…” the researchers write in their paper’s summary.
But native plants won’t be the only beneficiaries. They’ve exploited their cold niche for a long time, and other plant species haven’t gained a foothold on Signy Island. That could change, and the change could be disruptive.
Climate warming “… will also lead to increased risks from non-native species establishment. These may outcompete native species and trigger irreversible biodiversity loss and changes to these fragile and unique ecosystems,” they say in conclusion.
Sea Level to Rise up to a Foot by 2050, Interagency Report Finds
Coastal cities like Miami, shown, already experience high-tide flooding. But a new federal interagency report projects an uptick in the frequency and intensity of such events in the coming decades because of rising seas. Credits: B137 (CC-BY)
NASA, NOAA, USGS, and other U.S. government agencies project that the rise in ocean height in the next 30 years could equal the total rise seen over the past 100 years.
Coastal flooding will increase significantly over the next 30 years because of sea level rise, according to a new report by an interagency sea level rise task force that includes NASA, the National Oceanic and Atmospheric Administration (NOAA), and other federal agencies. Titled Global and Regional Sea Level Rise Scenarios for the United States, the Feb. 15 report concludes that sea level along U.S. coastlines will rise 10 to 12 inches (25 to 30 centimeters) on average above today’s levels by 2050.
The report – an update to a 2017 report – forecasts sea level to the year 2150 and, for the first time, offers near-term projections for the next 30 years. Agencies at the federal, state, and local levels use these reports to inform their plans on anticipating and coping with the effects of sea level rise.
“This report supports previous studies and confirms what we have long known: Sea levels are continuing to rise at an alarming rate, endangering communities around the world. Science is indisputable and urgent action is required to mitigate a climate crisis that is well underway,” said NASA Administrator Bill Nelson. “NASA is steadfast in our commitment to protecting our home planet by expanding our monitoring capabilities and continuing to ensure our climate data is not only accessible but understandable.”
The task force developed their near-term sea level rise projections by drawing on an improved understanding of how the processes that contribute to rising seas – such as melting glaciers and ice sheets as well as complex interactions between ocean, land, and ice – will affect ocean height. “That understanding has really advanced since the 2017 report, which gave us more certainty over how much sea level rise we’ll get in the coming decades,” said Ben Hamlington, a research scientist at NASA’s Jet Propulsion Laboratory in Southern California and one of the update’s lead authors.
NASA’s Sea Level Change Team, led by Hamlington, has also developed an online mapping tool to visualize the report’s state-of-the-art sea level rise projections on a localized level across the U.S. “The hope is that the online tool will help make the information as widely accessible as possible,” Hamlington said.
The Interagency Sea Level Rise Task Force projects an uptick in the frequency and intensity of high-tide coastal flooding, otherwise known as nuisance flooding, because of higher sea levels. It also notes that if greenhouse gas emissions continue to increase, global temperatures will climb further, making it more likely that sea level rise by the end of the century will exceed the projections in the 2022 update.
“It takes a village to make climate predictions. When you combine NASA’s scenarios of global sea level rise with NOAA’s estimates of extreme water levels and the U.S. Geological Survey’s impact studies, you get a robust national estimate of the projected future that awaits American coastal communities and our economic infrastructure in 20, 30, or 100 years from now,” said Nadya Vinogradova Shiffer, who directs the NASA Sea Level Change Team at NASA Headquarters in Washington.
“This is a global wake-up call and gives Americans the information needed to act now to best position ourselves for the future,” said NOAA Administrator Rick Spinrad, Ph.D. “As we build a Climate Ready Nation, these updated data can inform coastal communities and others about current and future vulnerabilities in the face of climate change and help them make smart decisions to keep people and property safe over the long run.”
Building on a Research Legacy
The Global and Regional Sea Level Rise report incorporates sea level projections from the latest Intergovernmental Panel on Climate Change (IPCC) assessment, released by the United Nations in August 2021. The IPCC reports, issued every five to seven years, provide global evaluations of Earth’s climate and use analyses based on computer simulations, among other data.
A separate forthcoming report known as the Fifth National Climate Assessment, produced by the U.S. Global Change Research Program, is the latest in a series summarizing the impacts of climate change on the U.S., and it will in turn use the results from the Global and Regional Sea Level Rise report in its analysis. The Climate Assessment is slated to publish in 2023.
NASA sea level researchers have years of experience studying how Earth’s changing climate will affect the ocean. Their work includes research forecasting how much coastal flooding U.S. communities will experience in 10 years, helping to visualize IPCC data on global sea level rise using an online visualization tool, and launching satellites that contribute data to a decades-long record of global sea surface height.
Fluoride (in yellow bucket) to be added to drinking water at a treatment plant in California.
Michael Macor/Hearst/SFC via Getty
Michael Connett had been preparing for this moment for four years. The California-based attorney was headed to court, where he would be suing the US Environmental Protection Agency (EPA). Connett was slated to appear at the San Francisco federal courthouse on behalf of several individuals and advocacy groups. His contention: that supplemental water fluoridation is unsafe and should be halted. Immediately.
On the first day of the hearing, Connett woke up at 3.30 a.m. to put the finishing touches to his opening presentation. He downed a cup of coffee and an energy bar, then walked the two blocks from the hotel to his office, where he sat down, signed into Zoom and prepared to give his opening statement. The date was 8 June 2020, and the court had been closed to in-person business since March because of the COVID-19 pandemic. There was no bailiff, no audience sitting in the gallery. Instead of 50 onlookers in a physical courtroom, more than 500 people had signed in to view the virtual proceedings. They watched as Connett enumerated issues that have been bubbling up in the world of fluoride research.
The bulk of public opinion, based on decades of dental-health research, is against him — at least in the United States, where more than 63% of people have access to fluoridated water. One study after another, from the 1940s through to the 1970s, has pointed to fluoride as an important factor in preventing tooth decay, also known as caries. The mineral has become part of public-health lore, and has been hailed by the US Centers for Disease Control and Prevention as one of the ten greatest public-health achievements of the twentieth century. Most people who live in areas with fluoridated water on tap take the benefits for granted and view with suspicion those who question the supplementation.
Part of Nature Outlook: Oral health
Yet research over the past 50 years has sown a seed of doubt. Rates of tooth decay in some high-income countries with no fluoridation have declined at a pace similar to that seen in fluoridated US communities. And an increasing number of studies are indicating that fluoride — which occurs naturally in soil and therefore also in groundwater — might be a developmental neurotoxin, even at the level that the US Public Health Service has declared optimal for fluoridation.
Some toxicologists and epidemiologists are now questioning whether even low doses of fluoride can have systemic effects, including causing a dip in IQ in children who were exposed to it in utero. The first indications of this came from studies that compared unfluoridated villages and communities with fluoridated ones (where fluoride is either naturally occurring or added to water), followed by better-controlled studies that measured fluoride in individuals. In the United States, each new study was met with extreme criticism, ridicule and anger that, at times, threatened the careers of those involved.
Many dentists, having seen what life was like before fluoridation, have no interest in returning to the pre-fluoridation era of widespread cavities, abscesses, dentures and people in pain. But toxicologists worry that dental-health gains have come at a cost. Today, despite a shared goal of protecting public health, researchers on opposing sides of the fluoridation debate have trouble finding common ground.
Landmark in oral health
Fluoride has, without doubt, improved oral health and decreased rates of dental caries. Community water fluoridation has its roots in the 1940s, when a handful of trials were conducted after it was noticed that some communities with naturally fluoridated groundwater had a lower-than-average incidence of tooth decay. The first of these trials began in Michigan, New York state and Ontario, Canada, in 1945. In Michigan, researchers compared rates of tooth decay in Grand Rapids, where fluoride was added to the community water supply, and in Muskegon, where it was not1. When the five-year data were analysed and formally reviewed, the results were so striking that Muskegon abandoned the trial and began adding the mineral to its water, too. Over the following five decades, fluoridation was introduced in communities around the United States.
The practice remains common not only in the United States but elsewhere, including Australia (where 90% of municipal water supplies are fluoridated), New Zealand (47%), and Canada (39%), and has strong proponents in the United Kingdom (10%), where many dentists and public-health officials have been exerting pressure to start fluoridating the water in more communities.
Dental practitioners who remember the time before fluoridation know well what impact it has had. “My first practice was on the border of Birmingham, which was fluoridated, and Sandwell, which wasn’t,” says Nigel Carter, a paediatric dentist and chief executive of the Oral Health Foundation in Rugby, UK. It was clear from their charts, he says, that children with extensive tooth decay were almost always from Sandwell. In 1987, Sandwell began fluoridating its water, making it one of the most recent UK communities to do so. “Within five years, it went from the bottom ten, in terms of oral health, to the top five, purely due to fluoride being introduced in the water,” Carter says.
A resident of Birmingham, UK, shown in 1964 lining up bottles so that they can be filled with unfluoridated water.
Credit: Hulton Archive/Getty
Yet as research pushed forward in the late 1970s and 1980s, it became clear that the common understanding of how fluoride works was wrong. For decades, it was thought that fluoride was most effective at strengthening teeth when it was consumed, and that this would benefit a fetus exposed to fluoride during gestation. But it turns out that although fluoride is incorporated into developing teeth in utero, it is protective against dental caries only after the teeth have emerged from the gums2.
In the mouth, fluoride ions incorporate themselves into plaque, a biofilm on teeth. When the environment becomes too acidic, the ions are released from the plaque and help pull minerals from the saliva to remineralize enamel surfaces and slow down tooth decay3. Fluoride ions can get into the mouth either by applying them directly to the teeth — with topical products such as toothpaste and varnish — or by ingesting fluoridated water and foods. The latter results in a tiny amount being constantly secreted in saliva. About 50% of ingested fluoride is absorbed and retained in bones and teeth, and the rest is excreted in urine; ingesting too much causes weakened bones and joints, in a condition known as skeletal fluorosis.
As research showing that topical fluoride was at least as effective as systemic doses piled up4, fluoridated toothpastes flooded the market. Children in primary schools were given fluoride tablets and told to swish and spit. Dentists incorporated fluoride varnishes and lacquers into their patients’ twice-yearly cleanings. And the incidence of dental caries in the United States and around the world continued to fall5.
Despite widespread adoption of topical fluoride, tap-water fluoridation continued. If topical fluoride has proven so effective, and rates of dental caries around the world have dropped without water fluoridation, then why is fluoride still being added to water supplies, opponents ask. Connett thinks it shouldn’t be. Others say that the answer is not so simple, and point to knottier issues of health inequities and environmental justice.
First, do no harm
Most of the research into water fluoridation’s protective effects was done before 1975, meaning that few studies directly address whether the widespread use of fluoridated rinses and toothpastes has made systemic fluoride unnecessary. But there are some clues that suggest this might be the case. Even in countries with no water fluoridation, such as Denmark, tooth decay has declined at rates comparable to those seen in US communities with fluoridation. That alone is enough to convince some researchers that adding fluoride to water is not necessary for cavity prevention, at least in societies with comprehensive public-health measures in place.
“We’re talking about a simple, highly electronegative anion. That’s it. That’s all fluoride is,” says Pamela Den Besten, a paediatric dentist who studies fluorosis and enamel formation at the University of California, San Francisco.
Den Besten has spent her career trying to work out the systemic effects of swallowing this anion. The fact that fluoride can affect ameloblasts, the cells that produce and deposit tooth enamel, suggests that it could affect other cells of the body. In fact, she notes, studies in animals and humans show that, in addition to fluorosis, cellular effects of fluoride also include inflammation and altered neurodevelopment. That, in turn, suggests that it could make its way into the brain. Den Besten says that means researchers should be looking into whether fluoride has potential effects on the central nervous system. “It should be a high priority to answer these questions. And yet, it’s not.” These potential effects of fluoride are important for individuals at all ages, she says.
The possibility of neurological effects is part of what Connett is trying to draw attention to in his lawsuit against the EPA. The finding that has garnered the most attention is a 2019 study in JAMA Pediatrics6, in which researchers compared the IQ of children who were born to women living in fluoridated areas and non-fluoridated areas. The data, which came from 512 mother–child pairs in 6 cities in Canada, indicated that, depending on how fluoride intake was assessed, exposure during fetal development was associated with as much as a five-point drop in IQ. A second study, led by public-health physician and epidemiologist Howard Hu at the Keck School of Medicine at the University of Southern California in Los Angeles, found a correlation between increased maternal urinary fluoride and decreased IQ in children born in Mexico City7.
“It’s not disputed that fluoride is toxic at high levels,” says Christine Till, a neuropsychologist at York University in Toronto, Canada, and lead researcher of the JAMA Pediatrics study. But what happens at lower levels, such as the 0.7 milligrams of fluoride per litre recommended in US fluoridation, is contested. That’s what Till and her colleagues have been working to tease out. “You have some weaker studies saying there’s no effect. And then you have our study, and the Mexico study, that are high quality, saying there is an effect,” she says.
On the basis of these two studies, Philippe Grandjean, a physician and environmental medicine researcher at the University of Southern Denmark in Odense, put together a benchmark-dose study on fluoride to document concentrations at which fluoride begins to have detectable adverse effects on IQ. According to the report, published in June8, that level is 0.2 milligrams per litre. That’s less than one-third of the recommended level for US water supplementation and one-twentieth of the US maximum allowable level of 4 milligrams per litre (a level originally intended to prevent skeletal fluorosis). These numbers are just the beginning. More cohort studies are under way, and toxicologists and epidemiologists hope they’ll help to bring clarity to the fraught debate.
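The ratios quoted here are plain arithmetic and can be checked directly; this short snippet (values taken from the article; it is a sanity check, not a reimplementation of the benchmark-dose analysis) does so:

```python
# Fluoride levels discussed in the text, all in milligrams per litre.
benchmark = 0.2    # level at which adverse IQ effects become detectable, per the June report
recommended = 0.7  # US Public Health Service fluoridation target
maximum = 4.0      # US maximum allowable level (set to prevent skeletal fluorosis)

# The benchmark is less than one-third of the recommended level...
print(f"benchmark / recommended = {benchmark / recommended:.2f}")  # ~0.29

# ...and exactly one-twentieth of the maximum allowable level.
print(f"benchmark / maximum     = {benchmark / maximum:.2f}")      # 0.05
```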
Earlier in his career, Grandjean had worked to prove the dangers of mercury exposure, and of lead exposure before that. Bruce Lanphear, an environmental neurotoxicologist at Simon Fraser University in Burnaby, Canada, was also involved in the lead toxicity studies and worked with Till on the Canadian fluoridation study. Both Lanphear and Grandjean testified during Connett’s lawsuit, noting that the data from their fluoride analyses are comparable to those used to limit the use of mercury and lead.
Over the past 30 years, researchers have shown that the developing brain is uniquely vulnerable to lead, mercury and other neurotoxins. “Low-level lead was contentious, but it doesn’t match up to fluoride,” Lanphear says. “I don’t think people have been sceptical enough about the benefits or the safety of [systemic] fluoride.”
Hard benefits
Some public-health dentists think the issue isn’t quite so clear cut. E. Angeles Martinez Mier, who studies dental public health at Indiana University’s School of Dentistry in Indianapolis, agrees that fluoride safety is worth investigating but says there’s not yet enough evidence to convince her that the risks outweigh the benefits. “Fluoridated water works for caries prevention,” says Martinez Mier, whose laboratory did the fluoride analysis on both the Canadian and Mexico cohorts, and who is an author of both papers.
But the magnitude of this benefit could be modest. Comparing fluoridated and non-fluoridated US communities, dentists see about one fewer cavity in baby teeth in fluoridated areas, and about 0.3 fewer cavities on average in adults9. “The size of the effect is not as much as people might think,” Till says.
Crystals of sodium fluoride.
Credit: NIH/SPL
Still, that benefit means something to those who cannot afford dental care, or cannot afford to miss school or work because of poor oral health. “It’s not realistic, given the system that we have, that we’ll be able to reach every child with topical fluoride,” Martinez Mier says. “A lot of public-health dentists are adamant that fluoridated water is the only thing we have that reaches the public, regardless of access to care, regardless of public health.” If fluoridated water can help prevent so much hardship, public-health dentists argue, why wouldn’t people want it?
They also point out that although rates of tooth decay have gone down across the world, many of the countries studied have government-funded universal health-care programmes that educate citizens on the proper care of teeth and gums. The United States does not. “We are not Scandinavia. We are not Canada. Our public-health system, our infrastructure, is very different than those countries,” Martinez Mier says. “In Scandinavia, many countries have nurses who visit you at home, teach you how to brush, and you have access to fluoride through universal health care.” In the United States, she says, “it’s not realistic that we’ll be able to reach every child with topical fluoride.” Fluoridated water, however, reaches anyone who drinks or cooks with treated tap water. That’s insurance Martinez Mier is not yet willing to give up.
Hard questions
“If we’re looking at a practice that affects so many people, we want it to be scrutinized. We need transparency in the science,” says Brittany Seymour, a dentist who studies oral-health policy and epidemiology at the Harvard School of Dental Medicine in Cambridge, Massachusetts. She thinks there are some who are so fixed in their views of fluoridation that they will not reassess their stance no matter what the latest research might show. But she also thinks that the questions Till, Lanphear and others are asking are important.
Seymour, who is also a spokesperson for the American Dental Association, studies online health misinformation and has seen all the ways in which fluoride has been demonized. For now, at least, she thinks it’s too early to consider revising a programme that has clearly made a difference to children’s oral health, especially when the data are limited to just a few cohorts. And while tooth decay might be down globally, she doesn’t think it’s because of fluoridated toothpaste alone. She points to two cities — Juneau in Alaska10, and Calgary in Canada11 — where the ending of water fluoridation seems to be directly correlated with a rise in dental caries. “If we remove something that we know has a protective benefit, we’re trading that for another problem,” she says.
Martinez Mier agrees. “It’s too early to be reactive and to cease water fluoridation without understanding the full scope of what that would mean for a community,” she says. If something designed to protect people’s oral health is removed, then new protective measures need to be put in place, she says.
It is difficult to ignore the importance of equity in these arguments. On the one hand, dentists think that fluoridated water most benefits those who lack access to dental services, oral-health education, or a steady supply of fluoridated toothpaste — the very people who are most susceptible to poor oral health and who experience the greatest financial hardship when dental problems strike. On the other hand, toxicologists worry about any impact of fluoridated water on IQ, especially in populations that are already vulnerable because of exposure to high rates of air pollution and elevated poverty rates, for example. And even if such populations are aware of the potential risks of fluoridation, they are least likely to be able to afford bottled water to use when formula-feeding infants, for instance.
“A couple of cavities and a couple of IQ points are both serious when you think about a population. If you’re in a place of privilege, and luck and environment is with you, and you have a child testing in the high percentile, a few IQ points may not be of great impact. But for others, in different conditions, it can be.” And, she says, “At a population level, it’s a big shift. Being in a disadvantaged position cuts across domains — health, economics, education, exposure. The most vulnerable populations are most vulnerable to a lot of things, not just dental caries and neurotoxicants.”
Back in the virtual federal courtroom, Connett closed his case. One scientist after another, specializing in epidemiology, toxicology and risk assessment, took to the virtual stand and testified that there was consistent evidence pointing to fluoride being a developmental neurotoxin. And Connett informed the judge of a draft report from the US National Toxicology Program (NTP), which reached the same conclusion in early 2019. Although the report wasn’t entered as evidence, Connett says, “its presence loomed large.” Today, the case is still open. Before the judge commits to a ruling, he wants to know the NTP’s conclusion — the third and final draft of the report is expected early in 2022.
Till is not holding her breath. “I don’t think they’ll ever come up with a consensus,” she says, noting that she doesn’t anticipate a scenario that will please dentists and toxicologists alike, at least not without the courts being involved. It has become a circular argument: The two groups can’t convince each other because they’re having different conversations, each siloed in their respective fields of study. “We’re in this odd situation where dental public health is in tension with environmental public health, and it’s really a dispute within the family,” Hu says.
Hu sees two big problems with how the dental public-health community has reacted. The first, he says, is that most of those in the dental community who are critiquing his and Till’s conclusions are doing so without a deep understanding of how they got them. “From the environmental epidemiology perspective, the methods employed in the most recent studies of prenatal fluoride exposure and neurodevelopment are exceptionally rigorous,” he says, and were put through stringent peer review. The second problem is a misplaced idea that decades of research on fluoride prove it is safe. “They are ignoring the fact that almost none of these ‘decades’ of research have focused on the very specific issue of prenatal fluoride exposure and neurodevelopment. The unfortunate result is that the two sides — environmental health and dental public health — keep talking past each other.” What they need, he says, is a neutral forum in which experts can dispassionately discuss and debate the evidence.
The other thing they need is more data. “There hasn’t been a single US study of fluoridation, prenatal exposure and natal development,” Hu says. He and his collaborators are starting one now, using data from past studies, and they aim to have answers in the next two years. Whether that study, or the anticipated revision of the NTP report, end up casting fluoride in a positive or negative light, their very existence will at least push the conversation forwards.
Fusion Scientists Say They Just Made a Major Breakthrough
"We've demonstrated that we can create a mini star inside of our machine and hold it..."
Image by JET
Scientists at the UK-based Joint European Torus (JET) lab have smashed a fusion energy record for the first time in 25 years, producing 59 megajoules of energy over five seconds, the BBC reports. That works out to an average power of nearly 12 megawatts, enough energy to boil about 60 kettles’ worth of water, or roughly the energy released by 30 pounds of TNT.
The test more than doubles the previous record of just 21.7 megajoules, set in 1997 at the same facility.
The team behind the experiment say it’s a major breakthrough, and one that inches closer to a green form of energy that doesn’t run the risk of ending in a nuclear meltdown.
“The JET experiments put us a step closer to fusion power,” Joe Milnes, the head of operations at JET, told the BBC. “We’ve demonstrated that we can create a mini star inside of our machine and hold it there for five seconds and get high performance, which really takes us into a new realm.”
“These landmark results have taken us a huge step closer to conquering one of the biggest scientific and engineering challenges of them all,” Ian Chapman, the chief executive of the UK Atomic Energy Authority, said in a statement. “It’s clear we must make significant changes to address the effects of climate change, and fusion offers so much potential.”
The test involved heating up ionized gases to roughly ten times the temperature of the Sun’s core. In these conditions, atomic nuclei fuse and release copious amounts of energy.
The difficult part is producing more energy than has to be put in to kickstart the reaction, which remains the holy grail of fusion energy. The JET facility achieved a Q value (the ratio of fusion power output to the heating power put in) of just 0.33. A value of one would mean the facility produced as much energy as it used.
That may not sound awfully impressive in and of itself, but the fact that it sustained such a value over five seconds represents a major leap in the field. The 1997 record may have achieved a Q value of 0.7 — but it did so for less than 4 billionths of a second, as Nature points out.
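The arithmetic behind these figures is easy to verify; the quick sketch below (values from the article, standard conversion constants) reproduces the headline numbers:

```python
# Back-of-the-envelope check of the JET figures quoted above.
energy_j = 59e6     # 59 megajoules of fusion energy
duration_s = 5.0    # sustained for five seconds

# Average power = energy / time.
avg_power_w = energy_j / duration_s
print(f"average power: {avg_power_w / 1e6:.1f} MW")   # average power: 11.8 MW

# By convention 1 kg of TNT releases 4.184 MJ; 1 kg is about 2.205 lb.
tnt_lb = energy_j / 4.184e6 * 2.205
print(f"TNT equivalent: {tnt_lb:.0f} lb")             # TNT equivalent: 31 lb

# Q compares fusion power out with heating power in; Q = 1 is scientific breakeven.
q_2022 = 0.33   # this experiment, held for five seconds
q_1997 = 0.70   # the 1997 record, held only for billionths of a second
```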
Still, the JET reactor won’t be powering homes any time soon.
“Five seconds doesn’t sound like much, but if you can burn it for five seconds, presumably you could keep it stable and keep it burning for many minutes, hours, or days, which is what you are going to need for a proper fusion power plant,” Mark Wenman, a nuclear materials research fellow at Imperial College London, told The Guardian.
“It’s the proof of that concept that they have achieved,” he added.
The landmark experiment sets the stage for the much larger ITER, a multi-billion dollar fusion reactor being built in France. JET uses the same deuterium-tritium fuel mix that ITER will be using as well.
While it’s a notable moment in the development of fusion energy, scientists still have a long way to go until we can use fusion reactors as a sustainable form of energy.
But, for now, it’s important to celebrate a step in the right direction.
“We didn’t jump up and down and hug each other — we were at 2 metres distance — but it was very exciting,” Fernanda Rimini, a plasma scientist at the Culham Centre for Fusion Energy (CCFE) where JET is based, told Nature.
Levels of methane in the atmosphere are 'growing dangerously fast', scientists have warned, and global warming itself could be driving the rapid increase.
A report, published in Nature, was compiled by an international team that examines data gathered by the US National Oceanic and Atmospheric Administration (NOAA) throughout 2021.
Methane is a dangerously powerful greenhouse gas, with sources ranging from natural wetlands to human activities such as livestock farming.
In the new study, the team found that methane in the atmosphere had raced past 1,900 parts per billion, nearly triple the levels found before the industrial revolution.
This 'grim new milestone' could be linked to global warming causing a rise in wetland areas, which then produce higher levels of methane, the team said.
Methane growth started to slow down around 2000, but there was a 'mysterious uptick' around 2007, which caused researchers at the time to worry global warming was creating a 'feedback mechanism'.
As a greenhouse gas, methane is 28 times as potent as CO2, according to scientists, who said that if rising temperatures are causing more methane emissions, this will lead to ever greater, and faster, increases in global average temperatures.
'Methane levels are growing dangerously fast,' Euan Nisbet, an Earth scientist at Royal Holloway, University of London, in Egham, UK told Nature.
He said the emissions, which have been accelerating, are now a major threat to global efforts to limit global warming to 3.6°F (2°C) above pre-industrial levels.
Because of its potency, researchers have used aircraft and satellites to track levels of methane in the atmosphere, and built computer models to understand what is driving the increase.
Potential explanations include direct human activities, such as the expanding use of oil and gas, emissions from landfill and larger livestock herds, as well as increasing microbial activity in wetlands.
Trends have proved to be 'enigmatic', said atmospheric chemist, Alex Turner, from the University of Washington, adding that there are no conclusive answers.
METHANE: A POTENT GREENHOUSE GAS
In 2019, methane (CH4) accounted for about 10 per cent of all US greenhouse gas emissions from human activities.
Methane trapped in ice bubbles
Human activities emitting methane include leaks from natural gas systems and the raising of livestock.
Methane is also emitted by natural sources such as natural wetlands.
In addition, natural processes in soil and chemical reactions in the atmosphere help remove methane (CH4) from the atmosphere.
Methane's lifetime in the atmosphere is much shorter than carbon dioxide (CO2), but CH4 is more efficient at trapping radiation than CO2.
Pound for pound, the comparative impact of CH4 is 25 times greater than CO2 over a 100-year period.
Globally, 50-65 per cent of total CH4 emissions come from human activities.
Methane is emitted from energy, industry, agriculture, land use, and waste management activities.
SOURCE: EPA
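The "pound for pound" comparison above is usually applied via the 100-year global-warming potential (GWP100). A minimal sketch of converting methane emissions into CO2-equivalents, using the GWP of 25 quoted in the EPA box (other inventories use 28 to 30, as elsewhere in this article):

```python
# CO2-equivalent conversion using a 100-year global-warming potential (GWP100).
GWP100_CH4 = 25  # EPA figure quoted above; other inventories use 28-30

def co2_equivalent(ch4_tonnes: float) -> float:
    """Tonnes of CO2 with the same 100-year warming impact as the given CH4."""
    return ch4_tonnes * GWP100_CH4

# Example: 100,000 tonnes of methane per year (the figure later attributed
# to a group of southwestern US oil and gas facilities).
print(co2_equivalent(100_000))  # 2,500,000 tonnes CO2-equivalent per year
```

This is why relatively small methane sources can carry an outsized warming impact.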
There are some clues, including the isotopic signature of methane molecules, which normally contain carbon-12 but sometimes contain the heavier isotope carbon-13.
Scientists found that methane produced by microbes that have consumed carbon in the mud of a wetland, or in the gut of a cow, contains less carbon-13 than methane produced by heat and pressure inside the planet and released during fossil fuel extraction.
They compared this to the methane seen in the atmosphere, as well as methane trapped centuries ago in ice cores, or accumulated in snow.
For the two centuries after the start of the Industrial Revolution, the proportion of methane containing carbon-13 increased, but that trend reversed in 2007.
This was the year methane levels began to rise rapidly again, and scientists discovered the proportion of carbon-13 started to fall.
Researchers have put this down to an increase in microbial sources of methane over the past 15 years - which could be from livestock or more productive wetlands.
Xin Lan, from the NOAA Global Monitoring Laboratory in Colorado, told Nature that this was a 'powerful signature' suggesting human activities alone aren't to blame.
They used the carbon-13 in the atmospheric methane to estimate that microbes are responsible for 85 per cent of methane emission growth over the past 15 years.
The rest is down to fossil fuel extraction, through natural gas and oil recovery.
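The attribution described above can be illustrated with a simple two-end-member mixing model. The delta-carbon-13 end-member values below are typical textbook figures (in per mil), not the values used in the NOAA analysis, so treat them as assumptions for illustration:

```python
# Two-end-member isotope mixing: what fraction of new methane is microbial?
# End-member delta-13C values are illustrative textbook figures (per mil),
# not the ones used in the NOAA study.
D13C_MICROBIAL = -60.0  # wetlands, livestock guts, landfill
D13C_FOSSIL = -40.0     # thermogenic methane from oil and gas extraction

def microbial_fraction(d13c_mix: float) -> float:
    """Solve d13c_mix = f*microbial + (1 - f)*fossil for the fraction f."""
    return (d13c_mix - D13C_FOSSIL) / (D13C_MICROBIAL - D13C_FOSSIL)

# A hypothetical mixture at -57 per mil comes out 85% microbial,
# matching the headline attribution in the article.
print(round(microbial_fraction(-57.0), 2))  # 0.85
```

The real analysis is far more involved, but the principle is the same: the lighter the mix, the larger the microbial share.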
After comparing the types of methane, they then had to discover which environmental system the microbes came from - wetlands, livestock or landfill.
This is still an unanswered question, according to the Nature report, but if it is coming from tropical wetlands, which have become more productive due to increasing global temperatures, then we could be in a feedback mechanism.
The warmer it gets, the more productive the wetlands get and the more methane they produce, which leads to more warming, more productive wetlands and more methane.
However, uncovering the source is a 'challenging problem', according to Lan, whose team are running new atmospheric models to try to trace the methane to its source.
'Is warming feeding the warming? It's an incredibly important question,' Nisbet told Nature, adding that 'as yet, no answer, but it very much looks that way.'
Even if a feedback mechanism is at play in increasing methane levels, humans aren't completely free of blame, said Lan, who estimates that human sources such as livestock, agricultural waste, landfill and fossil fuels account for 62 per cent of all methane emissions from 2007 to 2016.
To limit the impact of any feedback mechanism, scientists say more needs to be done to reduce overall methane emission levels.
This could be done through reductions in livestock activities, fewer fossil fuel extractions and finding alternative uses for agricultural waste.
More than 100 countries signed the Global Methane Pledge at COP26 in Glasgow, with the target of cutting emissions by 30 per cent from 2020 levels by 2030.
Riley Duren, leader of the non-profit Carbon Mapper, which tracks sources of methane, said the focus should be on cutting emissions in the global south, particularly in low- and middle-income countries.
THE ENVIRONMENTAL IMPACT OF FARMING COWS
Livestock are notorious for producing large amounts of methane, which is a major contributor to global warming.
Each cow produces the equivalent of three tonnes of carbon dioxide per year, and livestock numbers are increasing with the growing need to feed a booming population.
Methane is one of the most potent greenhouse gases, trapping 30 times more heat than the same amount of carbon dioxide.
Scientists are investigating how feeding them various diets can make cattle more climate-friendly.
They believe feeding seaweed to dairy cows may help and are also using a herb-rich foodstuff called the Lindhof sample.
Researchers found a cow's methane emissions were reduced by more than 30 per cent when they ate ocean algae.
In research conducted by the University of California, in August, small amounts of it were mixed into the animals' feed and sweetened with molasses to disguise the salty taste.
As a result, methane emissions dropped by almost a third.
'I was extremely surprised when I saw the results,' said Professor Ermias Kebreab, the animal scientist who led the study.
'I wasn't expecting it to be that dramatic with a small amount of seaweed.'
The team now plans to conduct a further six-month study of a seaweed-infused diet in beef cattle, starting this month.
Tropical wetlands, such as the Pantanal in Brazil, are a major source of methane emissions.
Credit: Carl De Souza/AFP via Getty
Methane concentrations in the atmosphere raced past 1,900 parts per billion last year, nearly triple preindustrial levels, according to data released in January by the US National Oceanic and Atmospheric Administration (NOAA). Scientists say the grim milestone underscores the importance of a pledge made at last year’s COP26 climate summit to curb emissions of methane, a greenhouse gas at least 28 times as potent as CO2.
The growth of methane emissions slowed around the turn of the millennium, but began a rapid and mysterious uptick around 2007. The spike has caused many researchers to worry that global warming is creating a feedback mechanism that will cause ever more methane to be released, making it even harder to rein in rising temperatures.
“Methane levels are growing dangerously fast,” says Euan Nisbet, an Earth scientist at Royal Holloway, University of London, in Egham, UK. The emissions, which seem to have accelerated in the past few years, are a major threat to the world’s goal of limiting global warming to 1.5–2 °C over pre-industrial temperatures, he says.
Source: NOAA
Enigmatic patterns
For more than a decade, researchers have deployed aircraft, taken satellite measurements and run models in an effort to understand the drivers of the increase (see ‘A worrying trend’)1,2. Potential explanations range from the expanding exploitation of oil and natural gas and rising emissions from landfill to growing livestock herds and increasing activity by microbes in wetlands3.
“The causes of the methane trends have indeed proved rather enigmatic,” says Alex Turner, an atmospheric chemist at the University of Washington in Seattle. And despite a flurry of research, Turner says he is yet to see any conclusive answers emerge.
One clue is in the isotopic signature of methane molecules. The majority of carbon is carbon-12, but methane molecules sometimes also contain the heavier isotope carbon-13. Methane generated by microbes — after they consume carbon in the mud of a wetland or in the gut of a cow, for instance — contains less 13C than does methane generated by heat and pressure inside Earth, which is released during fossil-fuel extraction.
Scientists have sought to understand the source of the mystery methane by comparing this knowledge about the production of the gas with what is observed in the atmosphere.
By studying methane trapped decades or centuries ago in ice cores and accumulated snow, as well as gas in the atmosphere, they have been able to show that for two centuries after the start of the Industrial Revolution the proportion of methane containing 13C increased4. But since 2007, when methane levels began to rise more rapidly again, the proportion of methane containing 13C began to fall (see ‘The rise and fall of methane’). Some researchers believe that this suggests that much of the increase in the past 15 years might be due to microbial sources, rather than the extraction of fossil fuels.
Source: Sylvia Michel, University of Colorado Institute of Arctic and Alpine Research
Back to the source
“It’s a powerful signal,” says Xin Lan, an atmospheric scientist at NOAA’s Global Monitoring Laboratory in Boulder, Colorado, and it suggests that human activities alone are not responsible for the increase. Lan’s team has used the atmospheric 13C data to estimate that microbes are responsible for around 85% of the growth in emissions since 2007, with fossil-fuel extraction accounting for the remainder5.
The next — and most challenging — step is to try to pin down the relative contributions of microbes from various systems, such as natural wetlands or human-raised livestock and landfills. This may help determine whether warming itself is contributing to the increase, potentially via mechanisms such as increasing the productivity of tropical wetlands. To provide answers, Lan and her team are running atmospheric models to trace methane back to its source.
“Is warming feeding the warming? It’s an incredibly important question,” says Nisbet. “As yet, no answer, but it very much looks that way.”
Regardless of how this mystery plays out, humans are not off the hook. Based on their latest analysis of the isotopic trends, Lan’s team estimates that anthropogenic sources such as livestock, agricultural waste, landfill and fossil-fuel extraction accounted for about 62% of total methane emissions from 2007 to 2016 (see ‘Where is methane coming from?’).
SOURCE: Ref. 5.
Global Methane Pledge
This means there is plenty that can be done to reduce emissions. Despite NOAA’s worrying numbers for 2021, scientists already have the knowledge to help governments take action, says Riley Duren, who leads Carbon Mapper, a non-profit consortium in Pasadena, California, that uses satellites to pinpoint the source of methane emissions.
Last month, for instance, Carbon Mapper and the Environmental Defense Fund, an advocacy group in New York City, released data revealing that 30 oil and gas facilities in the southwestern United States have collectively emitted about 100,000 tonnes of methane for at least the past three years, equivalent to the annual warming impact of half a million cars. These facilities could easily halt those emissions by preventing methane from leaking out, the groups argue.
At COP26 in Glasgow, UK, more than 100 countries signed the Global Methane Pledge to cut emissions by 30% from 2020 levels by 2030, and Duren says the emphasis must now be on action, including in low- and middle-income countries across the global south. “Tackling methane is probably the best opportunity we have to buy some time”, he says, to solve the much bigger challenge of reducing the world’s CO2 emissions.
doi: https://doi.org/10.1038/d41586-022-00312-2
UPDATES & CORRECTIONS
Correction 08 February 2022: An earlier version of this story said that bacteria generate methane in wetlands and the guts of cows. Methane is emitted by microbes in these places.
References
Nisbet, E. et al. Phil. Trans. R. Soc. A https://doi.org/10.1098/rsta.2021.0112 (2021).
MILITARY DESPERATELY TRYING TO RECOVER $100 MILLION STEALTH JET FROM BOTTOM OF OCEAN
WHOOPS.
$100 Million Rescue
Think you had a bad day at work? Just remember there’s a US Navy pilot out there who crashed a multimillion dollar jet — causing it to sink to the bottom of the ocean.
The US military is frantically searching for a $100 million F-35C fighter jet in the South China Sea after its pilot crashed into the USS Carl Vinson aircraft carrier while attempting to land, The Associated Press reports.
And that’s not exactly surprising, given the fact that the F-35, manufactured by Lockheed Martin, is the most expensive weapons system ever built, with an estimated lifetime cost of $1.6 trillion.
Luckily, the pilot was able to hit the eject button and safely yeet himself away from the crash.
His jet, however, ended up sinking.
“The US Navy is making recovery operations arrangements for the F-35C aircraft involved in the mishap aboard USS Carl Vinson in the South China Sea,” LT Nicholas Lingo said in a statement seen by The Independent.
Race Against Russia
The F-35C Lightning II jet reportedly carried advanced radar and stealth tech, which makes it a pretty big target for US adversaries. It’s now up to the US military to make sure countries such as Russia and China don’t beat them to it.
However, we still don’t know if other countries — most notably China, given the proximity — are actually looking for it. Lingo added in the statement that the military could not “speculate on what the People’s Republic of China intentions are on this matter.”
The jet is designated as a NATO Joint Strike Fighter. That means Moscow would likely love to get its hands on it, considering the saber rattling it’s doing against NATO forces outside of Ukraine. So the US may also be hoping to get there before Putin snatches it up.
The US Navy admitted to an F-35 crash while attempting to land on a US Navy aircraft carrier last week. The black box transponder has a ten-day life, and the Navy says it's a race now to recover the jet before the Chinese do. It has the newest high-tech devices in it, things highly classified and still top secret. Then...the US Navy announced it across all news media agencies around the world! Wait...they did what? Yeah...that gave it away right there. They wanted the news agencies to scream out the news so that China would go and recover the jet...thus satisfying China temporarily with the fake crash of the craft, which, by the way...just happens to be 99.99% intact minus the cockpit pilot chair and canopy. Coincidence? I think not.
Do you remember back in April of 2001, when a US Navy EP-3E Aries II spy plane on a routine surveillance mission near southern China was intercepted by several Chinese fighter jets and told to land in China or be shot down? So they flew to China...with the most high-tech spy plane the US had and handed them the keys, saying it's yours, but we want it back in a few months...China gave it back...in boxes, since they took everything apart and copied every single item before eventually returning most of it, minus software, computer chips, memory storage and so on. That was strange...like it was a gift all wrapped up for China to open. Not even a protest from the pilots before landing in China. That's not normal for a US military pilot. I'm a USAF vet and I worked on many B-1 bombers, and I can tell you, pilots are cocky and ready for a fight, win or lose. They want it. They don't give in that easily in the US military unless ordered to do so.
So the US is making it look like it's an accident to keep Taiwan safe from China, since there must be a secret agreement with payment of aircraft every twenty years or so. But mostly it's made to look like an accident to keep the US public out of it...keep the public in the dark so there are no protests, activists, or anger over the whole thing.
I totally get why the US is doing this, and honestly appreciate it since I'm living in Taiwan now, but...it does concern me...and makes me wonder...how many other secrets are US presidents giving away to the communists?
Thousands of Crows Take Flight Over Washington, Bad or Good Omen, Video, UFO Sighting News.
Date of video: January 25, 2022
Location of event: Bothell, Washington, USA
Two different eyewitness videos of thousands of crows were recorded yesterday. Crows are often seen as a symbol of death or doom. If you see a crow, it is a sign that someone close to you may soon die. A bad omen, if you will. Maybe so, maybe there is something to superstition...a grain of truth that keeps it passed down. But let's face it, birds are sensitive creatures that feel things far more easily than we do. If something were about to happen, something big, life-changing, dramatic and frightening...it's highly possible some animal species would know beforehand. This is what it looked like in Bothell, Washington this week, when thousands of crows took flight over the city. It may very well be a warning of something to come. Only time will tell. Scott C. Waring - Taiwan
An artist’s rendering shows Radian’s reusable space plane. (Radian Aerospace Illustration)
More than five years after its founding, Renton, Wash.-based Radian Aerospace is emerging from stealth mode and reporting a $27.5 million seed funding round to support its plans to build an orbital space plane.
The round was led by Boston-based Fine Structure Ventures, with additional funding from EXOR, The Venture Collective, Helios Capital, SpaceFund, Gaingels, The Private Shares Fund, Explorer 1 Fund, Type One Ventures and other investors.
Radian has previously brought in pre-seed investments, but the newly announced funding should accelerate its progress.
One of the company’s investors and strategic advisers, former Lockheed Martin executive Doug Greenlaw, said Radian was going after the “Holy Grail” of space access with a fully reusable system that would provide for single-stage-to-orbit (SSTO) launches.
“What we are doing is hard, but it’s no longer impossible thanks to significant advancements in materials science, miniaturization and manufacturing technologies,” Livingston Holder, Radian’s co-founder and chief technology officer, said today in a news release.
Holder was part of the U.S. Air Force’s Manned Spaceflight Engineer program in the 1980s — and went on to become a program manager at Boeing, focusing on reusable space systems. The design for Radian’s space plane was inspired by Boeing’s 1970s-era concept for a Reusable Aerodynamic Space Vehicle, or RASP.
For the past few years, Radian has been working on rocket engine development at its Renton headquarters and at a testing facility near Bremerton, Wash. Ars Technica reported that the liquid-fueled engine is designed to provide about 200,000 pounds of thrust, and that the space plane would be powered by three of the engines. The current design would support carrying up to five people and 5,000 pounds of cargo into orbit, Ars Technica reported.
Radian says its space plane, called Radian One, would make sled-assisted takeoffs and airplane-like runway landings, with a turnaround time of as little as 48 hours between missions.
“Over time, we intend to make space travel nearly as simple and convenient as airliner travel,” said Richard Humphrey, Radian’s CEO and co-founder. “We are not focused on tourism, we are dedicated to missions that make life better on our own planet, like research, in-space manufacturing and terrestrial observation, as well as critical new missions like rapid global delivery right here on Earth.”
The company hasn’t announced a timetable for development or operations, but its founders hope to have the plane available to service commercial space stations that could be in orbit by the 2030s. Radian says it already has launch service agreements with commercial space station ventures as well as in-space manufacturers, satellite operators and cargo companies, plus agreements with the U.S. government and “selected foreign governments.”
For what it’s worth, one of Radian’s early-stage investors is Dylan Taylor, chairman and CEO of Voyager Space Holdings. Voyager Space is one of the partners in a commercial space station project known as Starlab.
“On demand space operations is a growing economy, and I believe Radian’s technology can deliver on the right-sized, high-cadence operations that the market opportunity is showing,” Taylor said. “I am confident in the team working at Radian and look forward to cheering them along in this historical endeavor.”
Update for 12:15 p.m. PT Jan. 20: In an emailed response to GeekWire’s questions, Radian CEO Richard Humphrey confirmed that there are 18 full-time employees in Renton, and that the latest funding round brings total investment to $32 million.
“This funding will primarily be used to support our next series of risk-reducing milestones that include main engine testing, composite tank testing, design maturation, aero analysis and customer development,” he said.
Humphrey said Radian is “planning to undergo an upgrade” at the Bremerton engine testing facility and expects to resume increased testing by midyear.
He declined to be more specific about Radian’s partners or potential customers. “Nearly all of our agreements are subject to NDA [non-disclosure agreements] so we are not able to share the specific names or values,” he said. “Notable is that we have a number of mission sets that we are focused on that include habitation, Earth observation, in-space servicing, downmass, launch and delivery, and over a dozen companies have signed on across all those areas.”
Dear visitor, if you have ever witnessed a strange sighting yourself, please let Frederick Delaere know by email at www.ufomeldpunt.be. These researchers will handle your report in complete anonymity and with full respect for your privacy. They are critical and objective but open-minded, and will always give you an explanation for your sighting! SO DON'T HESITATE: IF YOU WANT AN ANSWER TO YOUR QUESTIONS, CONTACT FREDERICK. THANKS IN ADVANCE...
Thanks in advance for all your visits and your comments. Have a pleasant day!
About me
I am Pieter, and I sometimes use the pseudonym Peter2011.
I am a man, I live in Linter (Belgium), and I am retired.
I was born on 18/10/1950, so I am now 74 years young.
My hobbies are ufology and other esoteric subjects.
Under articles on this blog you will find my own work. My thanks also go to André, Ingrid, Oliver, Paul, Vincent, Georges Filer and MUFON for their contributions to the various categories...
Enjoy reading, and let me know what you think of this blog.