The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but around the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, though they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you won't want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only the former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then don't hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
01-03-2023
Scientists Are Trying To Use Lab-Grown Mini Brains To Create Powerful Biocomputers
For years now, scientists have been raising ethical concerns about the creation and use of lab-grown mini brains.
At the same time, other scientists are plowing full steam ahead, creating these brain organoids and trying to find ways to put them to good use.
Now, a group of scientists that fall into the latter category are trying to develop something called “organoid intelligence.”
They shared their research in a recent edition of the journal Frontiers in Science.
Essentially, they want to use these lab-grown mini brains as biological hardware for new biocomputers, LiveScience reports.
“While silicon-based computers are certainly better with numbers, brains are better at learning,” said one of the scientists, Thomas Hartung of Johns Hopkins University. “For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience these many games.”
But… brains? In a computer? Why?
Fluorescent images illustrating cell types in brain organoids.
In a press release about their research, the scientists wrote, “Brains are not only superior learners, they are also more energy efficient. For instance, the amount of energy spent training AlphaGo is more than is needed to sustain an active adult for a decade.”
Hartung added: “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100 billion neurons linked through over 10¹⁵ connection points. It’s an enormous power difference compared to our current technology.”
In parallel, the authors are also developing technologies to communicate with the organoids: in other words, to send them information and read out what they’re ‘thinking’. The authors plan to adapt tools from various scientific disciplines, such as bioengineering and machine learning, as well as engineer new stimulation and recording devices.
“We developed a brain-computer interface device that is a kind of an EEG cap for organoids, which we presented in an article published last August. It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it,” said Hartung.
But what about all those sticky ethical questions about creating mini-brains just to do tasks for us humans?
Creating human brain organoids that can learn, remember, and interact with their environment raises complex ethical questions. For example, could they develop consciousness, even in a rudimentary form? Could they experience pain or suffering? And what rights would people have concerning brain organoids made from their cells?
The authors are acutely aware of these issues.
“A key part of our vision is to develop OI in an ethical and socially responsible manner,” Hartung said. “For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves.”
A couple of years before this research, scientists worried that the mini brains they grew in the lab might be sentient and able to feel pain.
Is hooking these mini brains up to a computer really a great idea? What if they get access to the internet and start secretly communicating with self-healing superhuman robots?
What are 'minibrains'? Everything to know about brain organoids
Nicoletta Lanese
In the past decade, lab-grown blobs of human brain tissue began making news headlines, as they ushered in a new era of scientific discovery and raised a slew of ethical questions.
These blobs — scientifically known as brain organoids, but often called "minibrains" in the news — serve as miniature, simplified models of full-size human brains. These organoids can potentially be useful in basic research, drug development and even computer science.
However, as scientists make these models more sophisticated, there's a question as to whether they could ever become too similar to human brains and thus gain consciousness, in some form or another.
How are minibrains made?
Scientists grow brain organoids from stem cells, a type of immature cell that can give rise to any cell type, whether blood, skin, bowel or brain.
The stem cells used to grow organoids can either come from adult human cells, or more rarely, human embryonic tissue, according to a 2021 review in the Journal of Biomedical Science. In the former case, scientists collect adult cells and then expose them to chemicals in order to revert them into a stem cell-like state. The resulting stem cells are called "induced pluripotent stem cells" (iPSC), which can be made to grow into any kind of tissue.
To give rise to a minibrain, scientists embed these stem cells in a protein-rich matrix, a substance that supports the cells as they divide and form a 3D shape. Alternatively, the cells may be grown atop a physical, 3D scaffold, according to a 2020 review in the journal Frontiers in Cell and Developmental Biology.
To coax the stem cells to form different tissues, scientists introduce specific molecules and growth factors — substances that spur cell growth and replication — into the cell culture system at precise points in their development. In addition, scientists often place the stem cells in spinning bioreactors as they grow into minibrains. These devices keep the growing organoids suspended, rather than smooshed against a flat surface; this helps the organoids absorb nutrients and oxygen from the well-stirred solution surrounding them.
Brain organoids grow more complex as they develop, similar to how human embryos grow more and more complex in the womb. Over time, the organoids come to contain multiple kinds of cells found in full-size human brains; mimic specific functions of human brain tissue; and show similar spatial organization to isolated regions of the brain, though both their structure and function are simpler than that of a real human brain, according to the Journal of Biomedical Science review.
Why are scientists growing minibrains?
Minibrains can be used in a variety of applications. For example, scientists are using the blobs of tissue to study early human development.
To this end, scientists have grown brain organoids with a set of eye-like structures called "optic cups;" in human embryos in the womb, the optic cup eventually gives rise to the light-sensitive retina at the back of the eye. Another group grew organoids that generate brain waves similar to those seen in preterm babies, and another used minibrains to help explain why a common drug can cause birth defects and developmental disorders if taken during pregnancy. Models like these allow researchers to glimpse the brain as it appears in early pregnancy, a feat that would be both difficult and unethical in humans.
Minibrains can also be used to model conditions that affect adults, including infectious diseases that affect the brain, brain tumors and neurodegenerative disorders like Alzheimer's and Parkinson's disease, according to the Frontiers in Cell and Developmental Biology review. In addition, some groups are developing minibrains for drug screening, to see if a given medication could be toxic to human patients' brains, according to a 2021 review in the journal Frontiers in Genetics.
Such models could complement or eventually replace research conducted with cells in lab dishes and in animals; even studies in primates, whose brains closely resemble humans', can't reliably capture exactly what happens in human disease. For now, though, experts agree that brain organoids are not advanced enough to partially or fully replace established cell and animal models of disease. But someday, scientists hope these models will lead to the development of new drugs and reduce the need for animal research; some researchers are even testing whether it could be feasible to repair the brain by "plugging" injuries with lab-grown human minibrains.
Image: a cross-section of a rat brain (red) with a clump of human stem-cell-derived organoid tissue (green) transplanted into it.
Beyond medicine and the study of human development, minibrains can also be used to study human evolution. Recently, scientists used brain organoids to study which genes allowed the human brain to grow so large, and others have used organoids to study how human brains differ from those of apes and Neanderthals.
Finally, some scientists want to use brain organoids to power computer systems. In an early test of this technology, one group recently crafted a minibrain out of human and mouse brain cells that successfully played "Pong" after being hooked up to a computer-controlled electrode array.
And in a recent proposal published in the journal Frontiers in Science, scientists announced their plans to grow large brain organoids, containing tens of thousands to millions of cells, and link them together to create complex networks that can serve as the basis for future biocomputers.
Could minibrains ever be sentient?
Although sometimes called "minibrains," brain organoids aren't truly miniaturized human brains. Rather, they are roughly spherical balls of brain tissue that mimic some features of the full-size human brain. For example, cerebral organoids, which contain cell types found in the cerebral cortex, the wrinkled outer surface of the brain, contain several layers of tissue, as a real cortex would.
Similarly, brain organoids can generate chemical messages and brain waves similar to what's seen in a full-size brain, but that doesn't mean they can "think," experts say. That said, one sticking point in this discussion is the fact that neuroscientists don't have an agreed-upon definition of consciousness, nor do they have standardized ways to measure the phenomenon, Nature reported in 2020.
The National Academies of Sciences, Engineering, and Medicine assembled a committee to tackle these quandaries and released a report in 2021, outlining some of the potential ethical issues of working with brain organoids.
At the time, the authors concluded that "In the foreseeable future, it is extremely unlikely that [brain organoids] would possess capabilities that, given current understanding, would be recognized as awareness, consciousness, emotion, or the experience of pain. From a moral perspective, neural organoids do not differ at present from other in vitro human neural tissues or cultures. However, as scientists develop significantly more complex organoids, the possible need to make this distinction should be revisited regularly."
When Rohit Bhattacharya began his PhD in computer science, his aim was to build a tool that could help physicians to identify people with cancer who would respond well to immunotherapy. This form of treatment helps the body’s immune system to fight tumours, and works best against malignant growths that produce proteins that immune cells can bind to. Bhattacharya’s idea was to create neural networks that could profile the genetics of both the tumour and a person’s immune system, and then predict which people would be likely to benefit from treatment.
But he discovered that his algorithms weren’t up to the task. He could identify patterns of genes that correlated with immune response, but that wasn’t sufficient [1]. “I couldn’t say that this specific pattern of binding, or this specific expression of genes, is a causal determinant in the patient’s response to immunotherapy,” he explains.
Bhattacharya was stymied by the age-old dictum that correlation does not equal causation — a fundamental stumbling block in artificial intelligence (AI). Computers can be trained to spot patterns in data, even patterns that are so subtle that humans might miss them. And computers can use those patterns to make predictions — for instance, that a spot on a lung X-ray indicates a tumour [2]. But when it comes to cause and effect, machines are typically at a loss. They lack a common-sense understanding of how the world works that people have just from living in it. AI programs trained to spot disease in a lung X-ray, for example, have sometimes gone astray by zeroing in on the markings used to label the right-hand side of the image [3]. It is obvious, to a person at least, that there is no causal relationship between the style and placement of the letter ‘R’ on an X-ray and signs of lung disease. But without that understanding, any differences in how such markings are drawn or positioned could be enough to steer a machine down the wrong path.
For computers to perform any sort of decision making, they will need an understanding of causality, says Murat Kocaoglu, an electrical engineer at Purdue University in West Lafayette, Indiana. “Anything beyond prediction requires some sort of causal understanding,” he says. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”
Incorporating models of cause and effect into machine-learning algorithms could also help mobile autonomous machines to make decisions about how they navigate the world. “If you’re a robot, you want to know what will happen when you take a step here with this angle or that angle, or if you push an object,” Kocaoglu says.
In Bhattacharya’s case, it was possible that some of the genes that the system was highlighting were responsible for a better response to the treatment. But a lack of understanding of causality meant that it was also possible that the treatment was affecting the gene expression — or that another, hidden factor was influencing both. The potential solution to this problem lies in something known as causal inference — a formal, mathematical way to ascertain whether one variable affects another.
Computer scientist Rohit Bhattacharya (back) and his team at Williams College in Williamstown, Massachusetts, discuss adapting machine learning for causal inference.
Credit: Mark Hopkins
Causal inference has long been used by economists and epidemiologists to test their ideas about causation. The 2021 Nobel prize in economic sciences went to three researchers who used causal inference to ask questions such as whether a higher minimum wage leads to lower employment, or what effect an extra year of schooling has on future income. Now, Bhattacharya is among a growing number of computer scientists who are working to meld causality with AI to give machines the ability to tackle such questions, helping them to make better decisions, learn more efficiently and adapt to change.
A notion of cause and effect helps to guide humans through the world. “Having a causal model of the world, even an imperfect one — because that’s what we have — allows us to make more robust decisions and predictions,” says Yoshua Bengio, a computer scientist who directs Mila – Quebec Artificial Intelligence Institute, a collaboration between four universities in Montreal, Canada. Humans’ grasp of causality supports attributes such as imagination and regret; giving computers a similar ability could transform their capabilities.
Climbing the ladder
The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.
In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning [4]. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.
A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
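A minimal sketch of that idea in Python (our illustration, with an invented toy model, not anything from the article): the same structural causal model is simulated as-is and then under interventions that force the treatment variable, which is exactly the ‘doing’ rung described above.

```python
# Toy structural causal model: severity -> treatment, (severity, treatment) -> recovery.
# Passing do_treatment overrides the natural mechanism: a do() intervention.
import random

def simulate(n=100_000, do_treatment=None):
    rows = []
    for _ in range(n):
        severity = random.random()                  # confounder
        if do_treatment is None:
            treatment = 1 if severity > 0.5 else 0  # sicker patients get treated
        else:
            treatment = do_treatment                # intervention: set it directly
        p_recover = 0.5 - 0.3 * severity + 0.3 * treatment
        rows.append((treatment, random.random() < p_recover))
    return rows

def recovery_rate(rows, t):
    group = [recovered for treated, recovered in rows if treated == t]
    return sum(group) / len(group)

obs = simulate()
# The observed gap (~ +0.15) understates the true effect, because the
# treated group is sicker to begin with...
print(recovery_rate(obs, 1) - recovery_rate(obs, 0))
# ...while comparing two intervened worlds recovers the true effect (~ +0.30).
print(recovery_rate(simulate(do_treatment=1), 1)
      - recovery_rate(simulate(do_treatment=0), 0))
```

Each call to `simulate` is one of the ‘many models’ described above: the mechanisms stay fixed while the value of one variable is altered.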
Yoshua Bengio (front) directs Mila – Quebec Artificial Intelligence Institute in Montreal, Canada.
Credit: Mila-Quebec AI Institute
Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs [5] — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.
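A heavily simplified sketch of that propose-and-score loop (ours, not Bengio's actual network): with just two variables there are only three candidate graphs, and data gathered under interventions on X is enough to rank them.

```python
# Propose candidate causal graphs, score them against interventional data,
# and keep the best-fitting one. Ground truth (hidden from the scorer): X -> Y.
import random

def world(n=20_000, do_x=None):
    pairs = []
    for _ in range(n):
        x = do_x if do_x is not None else random.gauss(0, 1)
        pairs.append((x, 2 * x + random.gauss(0, 0.5)))   # Y = 2X + noise
    return pairs

def mean_y(pairs):
    return sum(y for _, y in pairs) / len(pairs)

# Each candidate graph predicts whether forcing X should shift Y's mean.
candidates = {"X -> Y": True, "Y -> X": False, "no edge": False}

shift = abs(mean_y(world(do_x=1.0)) - mean_y(world(do_x=-1.0)))
saw_shift = shift > 0.5       # true shift is about 4; noise alone stays near 0

best = [g for g, predicts in candidates.items() if predicts == saw_shift]
print(best)                   # ['X -> Y']
```

Real systems score full probabilistic fits over many variables and feed the scores back as a training signal, but the loop has the same shape: generate graphs, check them against data, prefer the ones that fit.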
Face the changes
A key benefit of causal reasoning is that it could make AI more able to deal with changing circumstances. Existing AI systems that base their predictions only on associations in data are acutely vulnerable to any changes in how those variables are related. When the statistical distribution of learnt relationships changes — whether owing to the passage of time, human actions or another external factor — the AI will become less accurate.
For instance, Bengio could train a self-driving car on his local roads in Montreal, and the AI might become good at operating the vehicle safely. But export that same system to London, and it would immediately break for a simple reason: cars are driven on the right in Canada and on the left in the United Kingdom, so some of the relationships the AI had learnt would be backwards. He could retrain the AI from scratch using data from London, but that would take time, and would mean that the software would no longer work in Montreal, because its new model would replace the old one.
A causal model, on the other hand, allows the system to learn about many possible relationships. “Instead of having just one set of relationships between all the things you could observe, you have an infinite number,” Bengio says. “You have a model that accounts for what could happen under any change to one of the variables in the environment.”
Humans operate with such a causal model, and can therefore quickly adapt to changes. A Canadian driver could fly to London and, after taking a few moments to adjust, could drive perfectly well on the left side of the road. The UK Highway Code means that, unlike in Canada, right turns involve crossing traffic, but it has no effect on what happens when the driver turns the wheel or how the tyres interact with the road. “Everything we know about the world is essentially the same,” Bengio says. Causal modelling enables a system to identify the effects of an intervention and account for it in its existing understanding of the world, rather than having to relearn everything from scratch.
Judea Pearl, director of the Cognitive Systems Laboratory at the University of California, Los Angeles, won the 2011 A.M. Turing Award.
Credit: UCLA Samueli School of Engineering
This ability to grapple with changes without scrapping everything we know also allows humans to make sense of situations that aren’t real, such as fantasy movies. “Our brain is able to project ourselves into an invented environment in which some things have changed,” Bengio says. “The laws of physics are different, or there are monsters, but the rest is the same.”
Counter to fact
The capacity for imagination is at the top of Pearl’s hierarchy of causal reasoning. The key here, Bhattacharya says, is speculating about the outcomes of actions not taken.
Bhattacharya likes to explain such counterfactuals to his students by reading them ‘The Road Not Taken’ by Robert Frost. In this poem, the narrator talks of having to choose between two paths through the woods, and expresses regret that they can’t know where the other road leads. “He’s imagining what his life would look like if he walks down one path versus another,” Bhattacharya says. That is what computer scientists would like to replicate with machines capable of causal inference: the ability to ask ‘what if’ questions.
Imagining whether an outcome would have been better or worse if we’d taken a different action is an important way that humans learn. Bhattacharya says it would be useful to imbue AI with a similar capacity for what is known as ‘counterfactual regret’. The machine could run scenarios on the basis of choices it didn’t make and quantify whether it would have been better off making a different one. Some scientists have already used counterfactual regret to help a computer improve its poker playing [6].
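A minimal sketch of the regret-matching update at the heart of such poker systems (a standard textbook toy, not the cited work): after each round the learner asks how much better every unchosen action would have done, and shifts probability toward actions with high accumulated regret.

```python
# Regret matching for rock-paper-scissors against a uniformly random opponent.
import random

ACTIONS = range(3)                     # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if a beats b, 0 on a tie, -1 if a loses."""
    return (a - b + 4) % 3 - 1

regret = [0.0, 0.0, 0.0]
counts = [0, 0, 0]
for _ in range(100_000):
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    probs = [p / total for p in positive] if total > 0 else [1 / 3] * 3
    a = random.choices(ACTIONS, probs)[0]
    b = random.randrange(3)            # the opponent's move
    gained = payoff(a, b)
    for alt in ACTIONS:                # counterfactual: what if I had played alt?
        regret[alt] += payoff(alt, b) - gained
    counts[a] += 1

print([c / sum(counts) for c in counts])   # drifts toward (1/3, 1/3, 1/3)
```

The counterfactual step is the loop over `alt`: the machine scores the choices it did not make, which is precisely the ‘what if’ reasoning described above.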
The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can’t learn about them. A person driving a car can imagine an occurrence they’ve never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
Using causality to program imagination into a computer could even lead to the creation of an automated scientist. During a 2021 online summit sponsored by Microsoft Research, Pearl suggested that such a system could generate a hypothesis, pick the best observation to test that hypothesis and then decide what experiment would provide that observation.
Right now, however, this remains a way off. The theory and basic mathematics of causal inference are well established, but the methods for AI to realize interventions and counterfactuals are still at an early stage. “This is still very fundamental research,” Bengio says. “We’re at the stage of figuring out the algorithms in a very basic way.” Once researchers have grasped these fundamentals, algorithms will then need to be optimized to run efficiently. It is uncertain how long this will all take. “I feel like we have all the conceptual tools to solve this problem and it’s just a matter of a few years, but usually it takes more time than you expect,” Bengio says. “It might take decades instead.”
Bhattacharya thinks that researchers should take a leaf from machine learning, the rapid proliferation of which was in part because of programmers developing open-source software that gives others access to the basic tools for writing algorithms. Equivalent tools for causal inference could have a similar effect. “There’s been a lot of exciting developments in recent years,” Bhattacharya says, including some open-source packages from tech giant Microsoft and from Carnegie Mellon University in Pittsburgh, Pennsylvania. He and his colleagues also developed an open-source causal module they call Ananke. But these software packages remain a work in progress.
Bhattacharya would also like to see the concept of causal inference introduced at earlier stages of computer education. Right now, he says, the topic is taught mainly at the graduate level, whereas machine learning is common in undergraduate training. “Causal reasoning is fundamental enough that I hope to see it introduced in some simplified form at the high-school level as well,” he says.
If these researchers are successful at building causality into computing, it could bring AI to a whole new level of sophistication. Robots could navigate their way through the world more easily. Self-driving cars could become more reliable. Programs for evaluating the activity of genes could lead to new understanding of biological mechanisms, which in turn could allow the development of new and better drugs. “That could transform medicine,” Bengio says.
Even something such as ChatGPT, the popular natural-language generator that produces text that reads as though it could have been written by a human, could benefit from incorporating causality. Right now, the algorithm betrays itself by producing clearly written prose that contradicts itself and goes against what we know to be true about the world. With causality, ChatGPT could build a coherent plan for what it was trying to say, and ensure that it was consistent with facts as we know them.
When he was asked whether that would put writers out of business, Bengio says that could take some time. “But how about you lose your job in ten years, but you’re saved from cancer and Alzheimer’s,” he says. “That’s a good deal.”
The US National Ignition Facility has reported that it has achieved the phenomenon of ignition.
Credit: Jason Laurea/Lawrence Livermore National Laboratory
Scientists at the world’s largest nuclear-fusion facility have for the first time achieved the phenomenon known as ignition — creating a nuclear reaction that generates more energy than it consumes. News of the breakthrough at the US National Ignition Facility (NIF), made on 5 December and announced today by US President Joe Biden’s administration, has excited the global fusion-research community. That research aims to harness nuclear fusion — the phenomenon that powers the Sun — to provide a source of near-limitless clean energy on Earth. Researchers caution that, despite this latest success, a long path remains to achieving that goal.
“It’s an incredible accomplishment,” says Mark Herrmann, the deputy programme director for fundamental weapons physics at Lawrence Livermore National Laboratory in California, which houses the fusion laboratory. The landmark experiment follows years of work by multiple teams on everything from lasers and optics to targets and computer models, Herrmann says. “That is of course what we are celebrating.”
A flagship experimental facility of the US Department of Energy’s nuclear-weapons programme, designed to study thermonuclear explosions, NIF originally aimed to achieve ignition by 2012 and has faced criticism for delays and cost overruns. In August 2021, NIF scientists announced that they had used their high-powered laser device to achieve a record reaction that crossed a key threshold in achieving ignition, but efforts to replicate that experiment failed. Ultimately, scientists scrapped efforts to replicate that shot, and rethought the experimental design — a choice that paid off last week.
“There were a lot of people who didn’t think it was possible, but I and others who kept the faith feel somewhat vindicated,” says Michael Campbell, former director of the laser energetics laboratory at the University of Rochester in New York and an early proponent of NIF while at Lawrence Livermore lab. “I’m having a cosmo to celebrate.”
Nature looks at NIF’s latest experiment and what it means for fusion science.
What did NIF achieve?
The facility used its set of 192 lasers to deliver 2.05 megajoules of energy onto a pea-sized gold cylinder containing a frozen pellet of the hydrogen isotopes deuterium and tritium. The laser’s pulse of energy caused the capsule to collapse, reaching temperatures only seen in stars and thermonuclear weapons, and the hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory’s analysis suggests that the reaction released some 3.15 MJ of energy — roughly 54% more than went into the reaction, and more than double the previous record of 1.3 MJ.
“Fusion research has been going on since the early 1950s, and this is the first time in the laboratory that fusion has ever produced more energy than it consumed,” says Campbell.
However, although the fusion reactions produced more than 3 MJ of energy — more than was delivered to the target — NIF’s lasers consumed 322 MJ of energy in the process. Still, the experiment qualifies as ignition, a benchmark criterion for fusion reactions.
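To put those numbers in perspective (our arithmetic, using only the figures quoted in this article), compare the gain measured at the target with the gain measured at the facility's wall plug:

$$Q_{\text{target}} = \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.54, \qquad Q_{\text{facility}} = \frac{3.15\ \text{MJ}}{322\ \text{MJ}} \approx 0.0098$$

The first ratio is the ‘roughly 54% more’ quoted above; the second shows why the yield must still grow by about two orders of magnitude before net electricity production is even conceivable.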
“It’s a big milestone, but NIF is not a fusion-energy device,” says David Hammer, a nuclear-energy engineer at Cornell University in Ithaca, New York.
Herrmann acknowledges as much, saying that there are many steps on the path to laser fusion energy. “NIF was not designed to be efficient,” he says. “It was designed to be the biggest laser we could possibly build to give us the data we need for the [nuclear] stockpile research programme.”
NIF scientists made multiple changes before the latest laser shot, based in part on analysis and computer modelling of previous experiments. In addition to boosting the laser’s power by around 8%, scientists reduced the number of imperfections in the target and adjusted how they delivered the laser energy to create a more spherical implosion. Operating at the cusp of fusion ignition, the scientists knew that “little changes can make a big difference”, Herrmann says.
Why are these results significant?
On one level, it’s about proving what is possible, and many scientists have hailed the result as a milestone in fusion science. But the results carry particular significance at NIF: the facility was designed to help nuclear-weapons scientists study the intense heat and pressures inside explosions, and that is possible only if the laboratory produces high-yield fusion reactions.
It took more than a decade, “but they can be commended for reaching their goal”, says Stephen Bodner, a physicist who formerly headed the laser plasma branch of the US Naval Research Laboratory in Washington DC. Bodner says the big question now is what the Department of Energy will do next: double down on weapons research at the NIF or pivot to a laser programme geared towards fusion-energy research.
What does this mean for fusion energy?
The latest results have already renewed buzz about a future powered by clean fusion energy, but experts warn that there is a long road ahead.
NIF was not designed with commercial fusion energy in mind — and many researchers doubt that laser-driven fusion will be the approach that ultimately yields fusion energy. Nevertheless, Campbell thinks that its latest success could boost confidence in the promise of laser fusion power and spur a programme focused on energy applications. “This is absolutely necessary to have the credibility to sell an energy programme,” he says.
Lawrence Livermore National Laboratory director Kim Budil described the achievement as a proof of concept. “I don’t want to give you a sense that we’re going to plug the NIF into the grid: that is definitely not how this works,” she said during a press conference in Washington DC. “But this is the fundamental building block of an inertial confinement fusion power scheme.”
There are many other experiments worldwide that are trying to achieve fusion for energy applications using different approaches. But engineering challenges remain, including the design and construction of plants that extract the heat produced by the fusion and use it to generate significant amounts of energy to be turned into usable electricity.
“Although positive news, this result is still a long way from the actual energy gain required for the production of electricity,” said Tony Roulstone, a nuclear-energy researcher at the University of Cambridge, UK, in a statement to the Science Media Centre in London.
Still, “the NIF experiments focused on fusion energy absolutely are valuable on the path to commercial fusion power”, says Anne White, a plasma physicist at the Massachusetts Institute of Technology in Cambridge.
What are the next major milestones in fusion?
To demonstrate that the type of fusion studied at NIF can be a viable way of producing energy, the efficiency of the yield — the energy released compared to the energy that goes into producing the laser pulses — needs to grow by at least two orders of magnitude.
Researchers will also need to dramatically increase the rate at which the laser’s pulses can be produced and how quickly they can clear the target chamber to prepare for another burn, says Tim Luce, head of science and operation at the international nuclear-fusion reactor ITER, which is under construction in St-Paul-lès-Durance, France.
“Sufficient fusion-energy-producing events at repeated performance would be a major milestone of interest,” says White.
The US$22-billion ITER project — a collaboration between China, the European Union, the United Kingdom, India, Japan, South Korea, Russia and the United States — aims to achieve self-sustaining fusion, meaning that the energy from fusion produces more fusion, through a different technique from NIF’s ‘inertial confinement’ approach. ITER will keep a plasma of deuterium and tritium confined in a doughnut-shaped vacuum chamber, known as a tokamak, and heat it up until the nuclei fuse. Once the reactor starts working towards fusion, currently planned for 2035, it will aim to reach ‘burning’ stage, “where the self-heating power is the dominant source of heating”, Luce explains.
What does it mean for other fusion experiments?
NIF and ITER use only two of many fusion-technology concepts being pursued worldwide. The approaches include magnetic confinement of plasma, using tokamaks and devices called stellarators; inertial confinement, used by NIF; and hybrids of the two.
The technology required to generate electricity from fusion is largely independent of the concept, says White, and this latest milestone won’t necessarily lead researchers to abandon or consolidate their concepts.
The engineering challenges faced by NIF are different from those at ITER and other facilities. But the symbolic achievement could have widespread effects. “A result like this will bring increased interest in the progress of all types of fusion, so it should have a positive impact on fusion research in general,” says Luce.
Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F and NL)
This Plane Will Change Travel Forever
Before experts settled on the modern aircraft design, a ton of bizarre flying contraptions were proposed, but just because we now have designs that work doesn't stop people from imagining new ways to explore the sky.
Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F and NL)
21-02-2023
How real is 3D holographic blue beam technology?
Much has been written about the infamous Project Blue Beam, which is said to be intended for staging a fake alien invasion.
Now there are rumors that this blue beam technology will soon be rolled out in order to convince people that there is a threat from outer space.
Of course there are secret government projects that use certain technologies, including powerful satellites and ground-based systems that can project holograms. They have been developing and perfecting this holographic blue beam technology for decades, and it looks real.
But creating a fake alien invasion would require an enormous amount of resources and would be very difficult to keep secret. Additionally, such an event would likely have major ethical and political implications that the US government would not want to risk, so it is unlikely that the technology would be used to create a false-flag event, but you never know.
Watch the video below, of a large fire-breathing dragon flying around at the opening of a baseball game at Happy Dream Park in South Korea in 2019, streamed live on sports broadcasting channels, and it is not hard to imagine what is possible using 3D holographic blue beam technology.
Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F and NL)
20-02-2023
How will AI change mathematics? Rise of chatbots highlights discussion
Machine learning tools already help mathematicians to formulate new theories and solve tough problems. But they’re set to shake up the field even more.
AI tools have allowed researchers to solve complex mathematical problems.
Credit: Fadel Senna/AFP/Getty
As interest in chatbots spreads like wildfire, mathematicians are beginning to explore how artificial intelligence (AI) could help them to do their work. Whether it’s assisting with verifying human-written work or suggesting new ways to solve difficult problems, automation is beginning to change the field in ways that go beyond mere calculation, researchers say.
“We’re looking at a very specific question: will machines change math?” says Andrew Granville, a number theorist at the University of Montreal in Canada. A workshop at the University of California, Los Angeles (UCLA), this week explored this question, aiming to build bridges between mathematicians and computer scientists. “Most mathematicians are completely unaware of these opportunities,” says one of the event’s organizers, Marijn Heule, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Akshay Venkatesh, a 2018 winner of the prestigious Fields Medal who is at the Institute for Advanced Study in Princeton, New Jersey, kick-started a conversation on how computers will change maths at a symposium in his honour in October. Two other recipients of the medal, Timothy Gowers at the Collège de France in Paris and Terence Tao at UCLA, have also taken leading roles in the debate.
“The fact that we have people like Fields medallists and other very famous big-shot mathematicians interested in the area now is an indication that it’s ‘hot’ in a way that it didn’t used to be,” says Kevin Buzzard, a mathematician at Imperial College London.
AI approaches
Part of the discussion concerns what kind of automation tools will be most useful. AI comes in two major flavours. In ‘symbolic’ AI, programmers embed rules of logic or calculation into their code. “It’s what people would call ‘good old-fashioned AI’,” says Leonardo de Moura, a computer scientist at Microsoft Research in Redmond, Washington.
The other approach, which has become extremely successful in the past decade or so, is based on artificial neural networks. In this type of AI, the computer starts more or less from a clean slate and learns patterns by digesting large amounts of data. This is called machine-learning, and it is the basis of ‘large language models’ (including chatbots such as ChatGPT), as well as the systems that can beat human players at complex games or predict how proteins fold. Whereas symbolic AI is inherently rigorous, neural networks can only make statistical guesses, and their operations are often mysterious.
2018 Fields Medal winner Akshay Venkatesh (centre) has spoken about how computers will change mathematics.
Credit: Xinhua/Shutterstock
De Moura helped symbolic AI to score some early mathematical successes by creating a system called Lean. This interactive software tool forces researchers to write out each logical step of a problem, down to the most basic details, and ensures that the maths is correct. Two years ago, a team of mathematicians succeeded at translating an important but impenetrable proof — one so complicated that even its author was unsure of it — into Lean, thereby confirming that it was correct.
The researchers say the process helped them to understand the proof, and even to find ways to simplify it. “I think this is even more exciting than checking the correctness,” de Moura says. “Even in our wildest dreams, we didn’t imagine that.”
As well as making solitary work easier, this sort of ‘proof assistant’ could change how mathematicians work together by eliminating what de Moura calls a “trust bottleneck”. “When we are collaborating, I may not trust what you are doing. But a proof assistant shows your collaborators that they can trust your part of the work.”
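To give a flavour of what ‘writing out each logical step’ means, here is a minimal Lean 4 sketch (our toy example, not a fragment of the proof mentioned above): each step must be justified by a lemma the kernel already accepts, or the file simply does not compile.

```lean
-- Commutativity of addition on the naturals, proved by induction.
-- Every rewrite names the library lemma that licenses it.
theorem my_add_comm (a b : Nat) : a + b = b + a := by
  induction b with
  | zero => simp                               -- a + 0 = 0 + a
  | succ n ih =>                               -- ih : a + n = n + a
    rw [Nat.add_succ, ih, Nat.succ_add]
```

Checking a research-level proof is the same activity scaled up enormously, which is why translating one into Lean both certifies it and forces a line-by-line understanding of it.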
Sophisticated autocomplete
At the other extreme are chatbot-esque, neural-network-based large language models. At Google in Mountain View, California, former physicist Ethan Dyer and his team have developed a chatbot called Minerva, which specializes in solving maths problems. At heart, Minerva is a very sophisticated version of the autocomplete function on messaging apps: by training on maths papers in the arXiv repository, it has learnt to write down step-by-step solutions to problems in the same way that some apps can predict words and phrases. Unlike Lean, which communicates using something similar to computer code, Minerva takes questions and writes answers in conversational English. “It is an achievement to solve some of these problems automatically,” says de Moura.
Minerva shows both the power and the possible limitations of this approach. For example, it can accurately factor integer numbers into primes — numbers that can’t be divided evenly into smaller ones. But it starts making mistakes once the numbers exceed a certain size, showing that it has not ‘understood’ the general procedure.
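For contrast, here is the deterministic procedure that factoring follows, as a short Python sketch (our illustration of what ‘understanding the general procedure’ would mean; it has nothing to do with Minerva's internals). Trial division never starts guessing as the numbers grow; it only gets slower.

```python
def factorize(n: int) -> list[int]:
    """Return the prime factors of n in nondecreasing order."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:       # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                   # whatever remains is itself prime
        factors.append(n)
    return factors

print(factorize(60))            # [2, 2, 3, 5]
print(factorize(600851475143))  # [71, 839, 1471, 6857]
```

A system that had internalized this rule, rather than the statistics of worked examples, would not hit an accuracy cliff at some number size.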
Still, Minerva’s neural network seems to be able to acquire some general techniques, as opposed to just statistical patterns, and the Google team is trying to understand how it does that. “Ultimately, we’d like a model that you can brainstorm with,” Dyer says. He says it could also be useful for non-mathematicians who need to extract information from the specialized literature. Further extensions will expand Minerva’s skills by studying textbooks and interfacing with dedicated maths software.
Dyer says the motivation behind the Minerva project was to see how far the machine-learning approach could be pushed; a powerful automated tool to help mathematicians might end up combining symbolic AI techniques with neural networks.
Maths v. machines
In the longer term, will programs remain part of the supporting cast, or will they be able to conduct mathematical research independently? AI might get better at producing correct mathematical statements and proofs, but some researchers worry that most of those would be uninteresting or impossible to understand. At the October symposium, Gowers said that there might be ways of teaching a computer some objective criteria for mathematical relevance, such as whether a small statement can embody many special cases or even form a bridge between different subfields of maths. “In order to get good at proving theorems, computers will have to judge what is interesting and worth proving,” he said. If they can do that, the future of humans in the field looks uncertain.
Computer scientist Erika Abraham at RWTH Aachen University in Germany is more sanguine about the future of mathematicians. “An AI system is only as smart as we program it to be,” she says. “The intelligence is not in the computer; the intelligence is in the programmer or trainer.”
Melanie Mitchell, a computer scientist and cognitive scientist at the Santa Fe Institute in New Mexico, says that mathematicians’ jobs will be safe until a major shortcoming of AI is fixed — its inability to extract abstract concepts from concrete information. “While AI systems might be able to prove theorems, it’s much harder to come up with interesting mathematical abstractions that give rise to the theorems in the first place.”
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ‘60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
M3GAN’s creepy yet delightful dance has captivated the internet.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
Boston Dynamics regularly stages elaborate bot performances.
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
A performance of “Amelia and the Machine” co-choreographed by Kate Sicchio at Virginia Commonwealth University.
ANTHONY JOHNSON
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how it quickly becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
Sicchio enjoys this unnerving performance from Ex Machina.
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
The “Amelia and the Machine” piece uses a relatively simple robot, which Sicchio says works well for performance.
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use simpler devices; a lot use big industrial arms. It's almost become a trope, the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program. So they never perform them live; you only see the edited videos. They’re a huge production that takes several days to film to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky; that center of gravity thing is really hard — it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, it will fall over. So you can imagine that getting a big humanoid robot to jump and land is very difficult.
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning, creating new algorithms for robots to move, and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those onto the robot, and whether we can get it to move with new qualities or in ways that normal programming hasn’t thought of.
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches of responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
10-02-2023
Metal robot can melt its way out of tight spaces to escape
A millimetre-sized robot made from a mix of liquid metal and microscopic magnetic pieces can stretch, move or melt. It could be used to fix electronics or remove objects from the body
A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.
Robots that are soft and malleable enough to work in narrow, delicate spaces like those in the human body already exist, but they can’t make themselves sturdier and stronger when under pressure or when they must carry something heavier than themselves. Carmel Majidi at Carnegie Mellon University in Pennsylvania and his colleagues created a robot that can not only shape-shift but also become stronger or weaker by alternating between being a liquid and a solid.
They made the millimetre-sized robot from a mix of the liquid metal gallium and microscopic pieces of a magnetic material made of neodymium, iron and boron. When solid, the material was strong enough to support an object 30 times its own mass. To make it soften, stretch, move or melt into a crawling puddle as needed for different tasks, the researchers put it near magnets. The magnets’ customised magnetic fields exerted forces on the tiny magnetic pieces in the robot, moving them and deforming the surrounding metal in different directions.
For instance, the team stretched a robot by applying a magnetic field that pulled these granules in multiple directions. The researchers also used a stronger field to yank the particles upwards, making the robot jump. When Majidi and his colleagues used an alternating magnetic field – one whose shape changes predictably over time – electrons in the robot’s liquid metal formed electric currents. The coursing of these currents through the robot’s body heated it up and eventually made it melt.
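For a sense of scale, melting such a tiny machine takes very little energy. Here is a back-of-the-envelope sketch in Python, assuming a one-gram robot starting at room temperature (our illustrative figures, not values from the study); the gallium constants are standard handbook values.

```python
# Back-of-the-envelope heat budget for the induction-melting step. The 1 g
# mass and 20 C starting temperature are illustrative assumptions; the
# gallium constants are standard handbook values.
MASS_G = 1.0                   # assumed robot mass, grams
C_GA = 0.37                    # specific heat of gallium, J/(g*K)
L_FUSION = 80.0                # latent heat of fusion of gallium, J/g
T_START, T_MELT = 20.0, 29.8   # Celsius; 29.8 C is roughly 85.6 F

warm_up = MASS_G * C_GA * (T_MELT - T_START)  # raise it to the melting point
melt = MASS_G * L_FUSION                      # the phase change itself
print(f"about {warm_up + melt:.0f} J in total; {melt:.0f} J is the phase change")
```

On these assumptions the whole transition costs well under 100 joules, which is why a modest alternating field is enough to liquefy the robot in place.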
“No other material I know of is this good at changing its stiffness this much,” says Majidi.
Credit: Wang and Pan et al.
Exploiting this flexibility, the team made two robots carry and solder a small light bulb onto a circuit board. When they reached their target, the robots simply melted over the light bulb’s edges to fuse it to the board. Electricity could then run through their liquid metal bodies and light the light bulb.
In an experiment inside an artificial stomach, the researchers applied another set of magnetic fields to make the robot approach an object, melt over it and drag it out. Finally, they shaped the robot like a Lego minifigure, then helped it escape from a cage by liquefying it and making it flow out between the bars. Once the robot puddle dribbled into a mould, it set back into its original, solid shape.
Credit: Wang and Pan et al.
These melty robots could be used for emergency fixes in situations where human or traditional robotic hands become impractical, says Li Zhang at the Chinese University of Hong Kong. For example, a liquefied robot might replace a lost screw on a spacecraft by flowing into its place and then solidifying, he says. However, to use them inside living stomachs, researchers must first develop methods for precisely tracking the position of the robot at every step of the procedure to ensure the safety of the patient, says Zhang.
In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn’t always run smoothly, but the final manuscripts were easier to read — and the fees were modest, at less than US$0.50 per document.
This assistant, as Greene and Pividori reported in a preprint[1] on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the much-hyped generative AI chatbot-style tools that can churn out convincingly fluent text, whether asked to produce prose, poetry, computer code or — as in the scientists’ case — to edit research papers (see ‘How an AI chatbot edits a manuscript’ at the end of this article).
The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easily accessible. Other generative AIs can produce images or sounds.
“I’m really impressed,” says Pividori, who works at the University of Pennsylvania in Philadelphia. “This will help us be more productive as researchers.” Other scientists say they now regularly use LLMs not only to edit manuscripts, but also to help them write or check code and to brainstorm ideas. “I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.
LLMs form part of search engines, code-writing assistants and even a chatbot that negotiates with other companies’ chatbots to get better prices on products. ChatGPT’s creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free). And tech giant Microsoft, which had already invested in OpenAI, announced a further investment in January, reported to be around $10 billion. LLMs are destined to be incorporated into general word- and data-processing software. Generative AI’s future ubiquity in society seems assured, especially because today’s tools represent the technology in its infancy.
But LLMs have also triggered widespread concern — from their propensity to return falsehoods, to worries about people passing off AI-generated text as their own. When Nature asked researchers about the potential uses of chatbots such as ChatGPT, particularly in science, their excitement was tempered with apprehension. “If you believe that this technology has the potential to be transformative, then I think you have to be nervous about it,” says Greene, at the University of Colorado School of Medicine in Aurora. Much will depend on how future regulations and guidelines might constrain AI chatbots’ use, researchers say.
Fluent but not factual
Some researchers think LLMs are well-suited to speeding up tasks such as writing papers or grants, as long as there’s human oversight. “Scientists are not going to sit and write long introductions for grant applications any more,” says Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden, who has co-authored a manuscript[2] using GPT-3 as an experiment. “They’re just going to ask systems to do that.”
Tom Tumiel, a research engineer at InstaDeep, a London-based software consultancy firm, says he uses LLMs every day as assistants to help write code. “It’s almost like a better Stack Overflow,” he says, referring to the popular community website where coders answer each other’s queries.
But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
This unreliability is baked into how LLMs are built. ChatGPT and its competitors work by learning the statistical patterns of language in enormous databases of online text — including any untruths, biases or outmoded knowledge. When LLMs are then given prompts (such as Greene and Pividori’s carefully structured requests to rewrite parts of manuscripts), they simply spit out, word by word, any way to continue the conversation that seems stylistically plausible.
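To make that loop concrete, here is a toy sketch of word-by-word sampling. The “model” here is just an invented probability table; a real LLM computes such probabilities with a neural network over a vocabulary of tens of thousands of tokens.

```python
# Toy illustration of the word-by-word generation loop. The probability
# table is invented for illustration; a real LLM computes these values
# with a neural network conditioned on the whole preceding text.
import random

toy_model = {
    "The results were": [("significant", 0.4), ("mixed", 0.3),
                         ("surprising", 0.2), ("fabricated", 0.1)],
}

def next_word(context: str) -> str:
    words, weights = zip(*toy_model[context])
    # Sample in proportion to probability: stylistically plausible,
    # with no check anywhere that the continuation is true.
    return random.choices(words, weights=weights, k=1)[0]

print("The results were", next_word("The results were"))
```

Nothing in this loop consults the world; it only consults the statistics of past text.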
The result is that LLMs easily produce errors and misleading information, particularly for technical topics that they might have had little data to train on. LLMs also can’t show the origins of their information; if asked to write an academic paper, they make up fictitious citations. “The tool cannot be trusted to get facts right or produce reliable references,” noted a January editorial on ChatGPT in the journal Nature Machine Intelligence[3].
With these caveats, ChatGPT and other LLMs can be effective assistants for researchers who have enough expertise to directly spot problems or to easily verify answers, such as whether an explanation or suggestion of computer code is correct.
But the tools might mislead naive users. In December, for instance, Stack Overflow temporarily banned the use of ChatGPT, because site moderators found themselves flooded with a high rate of incorrect but seemingly persuasive LLM-generated answers sent in by enthusiastic users. This could be a nightmare for search engines.
Can shortcomings be solved?
Some search-engine tools, such as the researcher-focused Elicit, get around LLMs’ attribution issues by using their capabilities first to guide queries for relevant literature, and then to briefly summarize each of the websites or documents that the engines find — so producing an output of apparently referenced content (although an LLM might still mis-summarize each individual document).
Companies building LLMs are also well aware of the problems. In September last year, Google subsidiary DeepMind published a paper[4] on a ‘dialogue agent’ called Sparrow, which the firm’s chief executive and co-founder Demis Hassabis later told TIME magazine would be released in private beta this year; the magazine reported that Google aimed to work on features including the ability to cite sources. Other competitors, such as Anthropic, say that they have solved some of ChatGPT’s issues (Anthropic, OpenAI and DeepMind declined interviews for this article).
For now, ChatGPT is not trained on sufficiently specialized content to be helpful in technical topics, some scientists say. Kareem Carr, a biostatistics PhD student at Harvard University in Cambridge, Massachusetts, was underwhelmed when he trialled it for work. “I think it would be hard for ChatGPT to attain the level of specificity I would need,” he says. (Even so, Carr says that when he asked ChatGPT for 20 ways to solve a research query, it spat back gibberish and one useful idea — a statistical term he hadn’t heard of that pointed him to a new area of academic literature.)
Some tech firms are training chatbots on specialized scientific literature — although they have run into their own issues. In November last year, Meta — the tech giant that owns Facebook — released an LLM called Galactica, which was trained on scientific abstracts, with the intention of making it particularly good at producing academic content and answering research questions. The demo was pulled from public access (although its code remains available) after users got it to produce inaccuracies and racism. “It’s no longer possible to have some fun by casually misusing it. Happy?” Meta’s chief AI scientist, Yann LeCun, tweeted in response to critics. (Meta did not respond to a request, made through their press office, to speak to LeCun.)
Safety and responsibility
Galactica had hit a familiar safety concern that ethicists have been pointing out for years: without output controls, LLMs can easily be used to generate hate speech and spam, as well as racist, sexist and other harmful associations that might be implicit in their training data.
Besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor. Because the firms that are creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.
OpenAI tried to skirt many of these issues when deciding to openly release ChatGPT. It restricted its knowledge base to 2021, prevented it from browsing the Internet and installed filters to try to get the tool to refuse to produce content for sensitive or toxic prompts. Achieving that, however, required human moderators to label screeds of toxic text. Journalists have reported that these workers are poorly paid and some have suffered trauma. Similar concerns over worker exploitation have also been raised about social-media firms that have employed people to train automated bots for flagging toxic content.
OpenAI’s guardrails have not been wholly successful. In December last year, computational neuroscientist Steven Piantadosi at the University of California, Berkeley, tweeted that he’d asked ChatGPT to develop a Python program for whether a person should be tortured on the basis of their country of origin. The chatbot replied with code that invited the user to enter a country, and printed “This person should be tortured” if that country was North Korea, Syria, Iran or Sudan. (OpenAI subsequently closed off that kind of question.)
Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources. The team involved also made its training data fully open (unlike OpenAI). Researchers have urged big tech firms to responsibly follow this example — but it’s unclear whether they’ll comply.
Some researchers say that academics should refuse to support large commercial LLMs altogether. Besides issues such as bias, safety concerns and exploited workers, these computationally intensive algorithms also require a huge amount of energy to train, raising concerns about their ecological footprint. A further worry is that by offloading thinking to automated chatbots, researchers might lose the ability to articulate their own thoughts. “Why would we, as academics, be eager to use and advertise this kind of product?” wrote Iris van Rooij, a computational cognitive scientist at Radboud University in Nijmegen, the Netherlands, in a blogpost urging academics to resist their pull.
A further confusion is the legal status of some LLMs, which were trained on content scraped from the Internet with sometimes less-than-clear permissions. Copyright and licensing laws currently cover direct copies of pixels, text and software, but not imitations in their style. When an AI generates those imitations after being trained by ingesting the originals, this introduces a wrinkle. The creators of some AI art programs, including Stable Diffusion and Midjourney, are currently being sued by artists and photography agencies; OpenAI and Microsoft (along with its subsidiary tech site GitHub) are also being sued for software piracy over the creation of their AI coding assistant Copilot. The outcry might force a change in laws, says Lilian Edwards, a specialist in Internet law at Newcastle University, UK.
Enforcing honest use
Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”
At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of Nature) have said that scientists should disclose the use of LLMs in research papers (see also Nature 613, 612; 2023); and teachers have said they expect similar behaviour from their students. The journal Science has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper[5].
One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, the central idea being to use LLMs themselves to spot AI-generated text.
Last December, for instance, Edward Tian, a computer-science undergraduate at Princeton University in New Jersey, published GPTZero. This AI-detection tool analyses text in two ways. One is ‘perplexity’, a measure of how familiar the text seems to an LLM. Tian’s tool uses an earlier model, called GPT-2; if it finds most of the words and sentences predictable, then the text is likely to have been AI-generated. The tool also examines variation in text, a measure known as ‘burstiness’: AI-generated text tends to be more consistent in tone, cadence and perplexity than text written by humans.
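Here is a minimal sketch of those two measures, using the openly available GPT-2 model through the Hugging Face transformers library. It illustrates the idea only; it is not GPTZero’s actual code.

```python
# Sketch of perplexity and burstiness scoring with open GPT-2
# (pip install torch transformers). Illustrative, not GPTZero's code.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How unsurprising the text is to GPT-2; lower = more predictable."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return float(torch.exp(loss))

def burstiness(sentences: list[str]) -> float:
    """Spread of per-sentence perplexity; human writing tends to vary more."""
    scores = torch.tensor([perplexity(s) for s in sentences])
    return float(scores.std())

sentences = ["The results were consistent with prior work.",
             "Frankly, the third trial surprised everyone."]
print(round(burstiness(sentences), 2))
```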
Many other products similarly aim to detect AI-written content. OpenAI itself had already released a detector for GPT-2, and it released another detection tool in January. For scientists’ purposes, a tool that is being developed by the firm Turnitin, a developer of anti-plagiarism software, might be particularly important, because Turnitin’s products are already used by schools, universities and scholarly publishers worldwide. The company says it’s been working on AI-detection software since GPT-3 was released in 2020, and expects to launch it in the first half of this year.
However, none of these tools claims to be infallible, particularly if AI-generated text is subsequently edited. Also, the detectors could falsely suggest that some human-written text is AI-produced, says Scott Aaronson, a computer scientist at the University of Texas at Austin and guest researcher with OpenAI. The firm said that in tests, its latest tool incorrectly labelled human-written text as AI-written 9% of the time, and only correctly identified 26% of AI-written texts. Further evidence might be needed before, for instance, accusing a student of hiding their use of an AI solely on the basis of a detector test, Aaronson says.
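Those two figures matter together: plugging them into Bayes’ rule shows why a lone detector hit is weak evidence. In the sketch below, the prior fraction of AI-written text is an assumption chosen for illustration; the detection rates are the ones OpenAI reported.

```python
# Bayes' rule with the detector rates quoted above: 26% of AI-written texts
# flagged (true positives), 9% of human-written texts flagged (false
# positives). The priors are assumptions made for illustration only.
def p_ai_given_flag(prior_ai: float, tpr: float = 0.26, fpr: float = 0.09) -> float:
    flagged = tpr * prior_ai + fpr * (1 - prior_ai)  # overall flag rate
    return tpr * prior_ai / flagged

for prior in (0.05, 0.20, 0.50):
    print(f"prior {prior:.0%} -> P(AI-written | flagged) = {p_ai_given_flag(prior):.0%}")
```

Even on the generous assumption that half of all submissions were AI-written, a flag alone would leave roughly a one-in-four chance of accusing a human, which is Aaronson’s point about needing further evidence.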
A separate idea is that AI content would come with its own watermark. Last November, Aaronson announced that he and OpenAI were working on a method of watermarking ChatGPT output. It has not yet been released, but a 24 January preprint[6] from a team led by computer scientist Tom Goldstein at the University of Maryland in College Park, suggested one way of making a watermark. The idea is to use random-number generators at particular moments when the LLM is generating its output, to create lists of plausible alternative words that the LLM is instructed to choose from. This leaves a trace of chosen words in the final text that can be identified statistically but are not obvious to a reader. Editing could defeat this trace, but Goldstein suggests that edits would have to change more than half the words.
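A toy sketch of that scheme, loosely following the preprint’s description: seed a random-number generator with the previous word, let it mark a random half of a (here, tiny and invented) vocabulary as “green”, prefer green words while generating, and detect by counting how often green words appear.

```python
# Toy version of the watermark described above. The eight-word vocabulary
# and the choice of generator are invented stand-ins for illustration.
import random
import zlib

VOCAB = ["results", "findings", "data", "figures",
         "values", "numbers", "outcomes", "measurements"]

def green_list(prev_word: str) -> set:
    """Seed an RNG with the previous word; mark a random half of the vocab."""
    rng = random.Random(zlib.crc32(prev_word.encode()))
    return set(rng.sample(VOCAB, len(VOCAB) // 2))

def pick_word(prev_word: str, plausible: list) -> str:
    """Generation step: prefer a 'green' word whenever one is plausible."""
    greens = [w for w in plausible if w in green_list(prev_word)]
    return (greens or plausible)[0]

def green_fraction(words: list) -> float:
    """Detection step: watermarked text shows far more than ~50% green words."""
    hits = sum(w in green_list(prev) for prev, w in zip(words, words[1:]))
    return hits / max(len(words) - 1, 1)
```

Unwatermarked text lands near 50% green by chance, so a long passage scoring far above that is a strong statistical signal, while a reader sees nothing unusual.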
An advantage of watermarking is that it rarely produces false positives, Aaronson points out. If the watermark is there, the text was probably produced with AI. Still, it won’t be infallible, he says. “There are certainly ways to defeat just about any watermarking scheme if you are determined enough.” Detection tools and watermarking only make it harder to deceitfully use AI — not impossible.
Meanwhile, LLM creators are busy working on more sophisticated chatbots built on larger data sets (OpenAI is expected to release GPT-4 this year) — including tools aimed specifically at academic or medical work. In late December, Google and DeepMind published a preprint about a clinically focused LLM they called Med-PaLM[7]. The tool could answer some open-ended medical queries almost as well as the average human physician could, although it still had shortcomings and unreliabilities.
Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, says he hopes that, in the future, AIs that include LLMs might even aid diagnoses of cancer, and the understanding of the disease, by cross-checking text from academic literature against images of body scans. But this would all need judicious oversight from specialists, he emphasizes.
The computer science behind generative AI is moving so fast that innovations emerge every month. How researchers choose to use them will dictate their, and our, future. “To think that in early 2023, we’ve seen the end of this, is crazy,” says Topol. “It’s really just beginning.”
Nature 614, 214-216 (2023)
doi: https://doi.org/10.1038/d41586-023-00340-6
UPDATES & CORRECTIONS
Correction 08 February 2023: This News feature misrepresented Scott Aaronson’s views on the accuracy of watermarking in identifying AI-produced text. Human-produced text might also be flagged as having a watermark, but the probability is extremely low.
28-01-2023
This robot can shape-shift like the Terminator
Scientists have developed minuscule robots that can change shape. As an homage to the T-1000 from the 'Terminator' films, they let one escape from a cage.
In the 1991 film 'Terminator 2: Judgment Day', the T-1000 liquefies itself to walk through metal bars, and this sci-fi scene has now been recreated by a real-world robot.
A video of a shape-shifting robot shows it trapped in a cage, melting and then sliding through the bars where it reforms on the outside.
Researchers led by The Chinese University of Hong Kong created the new phase-shifting material by embedding magnetic particles in gallium, a metal with a very low melting point of 85 degrees Fahrenheit (about 30 degrees Celsius).
While the team does not see the innovation threatening humanity like in the Terminator movie, they foresee it removing foreign objects from the body or delivering drugs on demand.
Scientists tested the robot through a series of 'obstacles.' One saw a person-shaped robot inside a cage.
As well as being able to shape-shift, the engineers say their robots are magnetic and can also conduct electricity.
The robots were tested in obstacle courses of mobility and shape-morphing.
The terrifying dystopia of shapeshifting metal assassins seen in Terminator 2 may not have been as far-fetched as once thought.
Researchers from China created droplets of liquid metal that move through obstacle courses and Petri dishes by 'eating' flakes of aluminum.
Team leader Doctor Chengfeng Pan explained that where traditional robots are hard-bodied and stiff, 'soft' robots have the opposite problem; they are flexible but weak, and their movements are difficult to control.
'Giving robots the ability to switch between liquid and solid states endows them with more functionality,' said Pan.
Senior author Professor Carmel Majidi, a mechanical engineer at Carnegie Mellon University in Pennsylvania, said: 'The magnetic particles here have two roles.
'One is that they make the material responsive to an alternating magnetic field, so you can, through induction, heat up the material and cause the phase change.
'But the magnetic particles also give the robots mobility and the ability to move in response to the magnetic field.'
He explained that the process is in contrast to existing phase-shifting materials that rely on heat guns, electrical currents, or other external heat sources to induce solid-to-liquid transformation.
Prof Majidi says the new material also boasts an 'extremely fluid' liquid phase compared to other phase-changing materials, whose 'liquid' phases are considerably more viscous.
Before exploring potential applications, the team tested the material's mobility and strength in various scenarios.
The robot seems to pull inspiration from Terminator 2: Judgment Day. In the 1991 film, the T-1000 liquefies itself to walk through metal bars.
The robot liquefies and slides through the bars. This is because of magnetic particles embedded in gallium, a metal with a very low melting point of 85 degrees Fahrenheit.
With the aid of a magnetic field, the robots jumped over moats, climbed walls, and even split in half to cooperatively move other objects around before coalescing back together.
'Now, we're pushing this material system in more practical ways to solve some very specific medical and engineering problems,' Pan said.
The team also used the robots to remove a foreign object from a model stomach and to deliver drugs on-demand into the same stomach.
The robot can be heated and an external magnet pulls it in a specific direction
Once on the outside of the cage, the robot reforms back into its solid shape
The innovation may also work as smart soldering robots for wireless circuit assembly and repair and as a universal mechanical 'screw' for assembling parts in hard-to-reach spaces.
Prof Majidi added: 'Future work should further explore how these robots could be used within a biomedical context.
'What we're showing are just one-off demonstrations, proofs of concept, but much more study will be required to delve into how this could actually be used for drug delivery or for removing foreign objects.'
21-01-2023
A ROBOT CHOREOGRAPHER REVEALS WHY M3GAN — AND ALL ROBOTS — SHOULD DANCE
MOLLY GLICK JAN 19 2023
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ‘60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how quickly it becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use simpler devices — a lot use big industrial arms. It's almost become a trope, the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program. So they never do them live; you only see the edited videos. They’re a huge production that takes several days to film to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky, that center of gravity thing is really hard — it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, it will fall over. So you can imagine, with a big humanoid robot, getting it to jump and land is very difficult.
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning and creating new algorithms for robots to move and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those to the robot and see if we can get it to move with new qualities or in ways that normal programming hasn't thought of.
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches to responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy.
Credit: Ted Hsu/Alamy
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December[1]. Researchers are divided over the implications for science.
“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.
The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.
Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint[2] and an editorial[3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.
The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.
Under the radar
The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.
“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”
Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.
But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.
Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.
The authors suggest that those evaluating scientific communications, such as research papers and conference proceedings, should put policies in place to stamp out the use of AI-generated texts. If institutions choose to allow use of the technology in certain cases, they should establish clear rules around disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a large AI conference that will be held in Honolulu, Hawaii, in July, announced that it has banned papers written by ChatGPT and other AI language tools.
Solaiman adds that in fields where fake information can endanger people’s safety, such as medicine, journals may have to take a more rigorous approach to verifying information as accurate.
Narayanan says that the solutions to these issues should not focus on the chatbot itself, “but rather the perverse incentives that lead to this behaviour, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact”.
Nature 613, 423 (2023)
doi: https://doi.org/10.1038/d41586-023-00056-7
References
Gao, C. A. et al. Preprint at bioRxiv https://doi.org/10.1101/2022.12.23.521610 (2022).
A laser beam (green) shoots into the sky alongside the 124-metre-high telecommunications tower on Säntis mountain in the Swiss Alps.
Credit: TRUMPF/Martin Stollberg
A rapidly firing laser can divert lightning strikes, scientists have shown for the first time in real-world experiments1. The work suggests that laser beams could be used as lightning rods to protect infrastructure, although perhaps not any time soon.
“The achievement is impressive given that the scientific community has been working hard along this objective for more than 20 years,” says Stelios Tzortzakis, a laser physicist at the University of Crete, Greece, who was not involved in the research. “If it’s useful or not, only time can say.”
Metal lightning rods are commonly used to divert lightning strikes and safely dissipate their charge. But the rods’ size is limited, meaning that so, too, is the area they protect.
Physicists have wondered whether lasers could enhance protection, because they can reach higher into the sky than a physical structure and can point in any direction. But despite successful laboratory demonstrations, researchers have never before succeeded in field campaigns, says Tzortzakis.
Bolt from the blue
To change that, a group of roughly 25 researchers set up the Laser Lightning Rod project, which trialled a specially created €2 million (US$2 million) high-power laser in the Swiss Alps. The scientists placed the laser next to the Säntis telecommunications tower, which is hit frequently by lightning. “This is one of those projects that everyone was waiting for the results of,” says Valentina Shumakova, a laser physicist at the University of Vienna.
A sufficiently intense laser beam can create a conductive path for lightning to travel down, just as a metal wire can. Physicists think that it does this by shifting the properties of air so that the beam focuses into a thin, intense filament. This rapidly heats the air, reducing its density and creating a favourable path for lightning. “It’s like drilling a hole through the air with the laser,” says Aurélien Houard, a physicist at the Laboratory of Applied Optics in Paris, who led the project.
Rather than try to divert lightning from the tower, the Säntis experiments were designed to show that the laser could guide a strike’s path through the structure’s lightning rod. In future use, similar beams would guide strikes away from sensitive installations and onto a distant lightning rod, says Houard.
Guided lightning
Over 10 weeks of observation, the team spotted the laser channelling 4 lightning events during 6 hours of thunderstorms. A high-speed camera clearly showed one strike following the straight line of the laser beam, rather than taking a branching path.
“For 100% of the strikes where the laser was present, we measured an effect of the laser,” says Houard. But Tzortzakis notes that the laser was also active for many hours without channelling strikes. This suggests that although the laser diverted lightning, it did not force thunderclouds to discharge, which would be a better protection strategy, he says.
The latest effort succeeded where others had failed, says Tzortzakis, because previous attempts had used lasers that fired just a few pulses per second. This team used a specialist laser that fires 1,000 high-energy pulses per second, which would have boosted its chance of intercepting the lightning.
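The arithmetic behind that advantage is simple. A rough sketch, assuming each pulse leaves a conductive channel alive for roughly a millisecond (an illustrative figure of ours, not one from the paper):

```python
# Rough duty-cycle arithmetic for the repetition-rate advantage. The ~1 ms
# conductive-channel lifetime is an illustrative assumption, not a figure
# from the paper.
def channel_duty(pulses_per_second: float, channel_lifetime_s: float = 1e-3) -> float:
    return min(pulses_per_second * channel_lifetime_s, 1.0)

for rate in (10, 1000):  # earlier attempts vs the Säntis laser
    print(f"{rate:>4} pulses/s -> channel present {channel_duty(rate):.0%} of the time")
```

On that assumption, a few pulses per second leaves a guiding channel in place around 1% of the time, whereas a kilohertz laser keeps one continuously available whenever a leader starts to form.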
However, the fact that the project’s laser is one of a kind is also its biggest limitation, because it will take time to shrink the system and make it cheaper and more practical, says Houard.
doi: https://doi.org/10.1038/d41586-023-00080-7
References
Houard, A. et al. Nature Photon. https://doi.org/10.1038/s41566-022-01139-z (2023).
Somitogenesis is the process by which segmented body structures like vertebrae form in embryos. While the process is well understood in animals like mice or zebrafish, it is difficult to study in humans.
But now, researchers have created a model embryo from pluripotent stem cells, called an axioloid, that is capable of undergoing somitogenesis.
The researchers hope that this new platform will allow them to better study human development and the diseases that can arise when it is disrupted.
As we enter 2023, what can we expect? At Inverse, we aren't in the business of fortune-telling, but the innovations we saw in the last 12 months can help us predict what might be in store for the next — from driver-free transportation to commercial space exploration to (finally) clean energy for all.
5. CHEAPER EVS AND DRIVER-FREE SHIPPING
Cheaper options like the 2024 Chevrolet Equinox EV could make electric cars available to broader swaths of the population.
Credit: Chevrolet
This year will usher in more affordable EVs, allowing a bigger chunk of the population to drive sustainably. For example, GM is rolling out cheaper models that run for around $30,000, expanding the choices for drivers on a budget. Tesla’s least expensive offering, the Model 3, starts at $46,990 — while it’s currently the best-selling electric car in the United States, some of these new models could knock the Model 3 off its throne.
If you don’t feel like driving, it may soon get easier to hail an autonomous car. In 2023, Uber plans to launch a fully driverless service, and GM’s robotaxi division (which now operates in San Francisco, Phoenix, and Austin) aims to enter a “large number of markets.”
Cars aren’t the only mode of transportation to ditch drivers. Autonomous semi-trucks could surge ahead in 2023 and, soon enough, forever change the way we get our goods.
In the coming months, self-driving trucks are planned to hit Texas highways. Companies like Aurora Innovation and TuSimple will start to test their wheels without any human backup drivers — which has concerned some safety advocates, Reuters reported. Driverless semis have already been tested in Arizona and Arkansas, but Texas is particularly attractive for autonomous truck companies to set up hubs because it sits in the middle of one of the country’s busiest freight routes.
4. COMMERCIAL SPACE FIRSTS
If all goes well, SpaceX’s Starship could finally take off for an orbital test.
Credit: SpaceX
Just as in 2022, space magnates are still shooting for the Moon. But before SpaceX can take on lunar landings, it needs to send Starship on its first orbital test flight. Chris Impey, a professor of astronomy at the University of Arizona, thinks that this is the year. SpaceX “will have its first successful orbital flight of the Starship, a game-changing rocket in the effort to get astronauts to the Moon and Mars within a decade,” he tells Inverse.
While it may be a few years before people step foot on the Moon again, uncrewed commercial landers could touch down within a few months. In December, the Japanese firm ispace launched a lunar lander that’s scheduled to touch down in March. If things work out, ispace will become the first private company to land on the Moon — that is, if it isn’t beaten by landers from the U.S.-based companies Astrobotic and Intuitive Machines, which are slated to arrive around the same time.
In another victory for private space, SpaceX’s Polaris Dawn mission could accomplish the first-ever commercial spacewalk. It’s scheduled to take off no earlier than March 2023 at NASA's Kennedy Space Center. Four passengers, including billionaire mission funder Jared Isaacman, will travel to a maximum orbit of around 745 miles above Earth — the highest of any crewed vehicle since the Apollo missions.
Polaris Dawn will also offer crucial data to scientists on the ground: For example, the astronauts will wear smart contact lenses with tiny sensors that measure eye pressure while in microgravity (past NASA missions have revealed that space travel affects people’s vision). They’ll also receive a brain scan just hours after splashing down to Earth to examine how microgravity impacts the brain.
Another potential breakthrough: The first methane-powered rocket could reach space this year if United Launch Alliance’s Vulcan Centaur rocket aces its first orbital test (which was originally planned for 2020). Methane is more stable than the liquid hydrogen powering most rockets today. It can also be stored at more moderate temperatures than the super-cold ones required for liquid hydrogen. In fact, astronauts could even make methane fuel while on Mars for the journey back home.
3. U.S. WIND FARMS TAKE OFF
The Vineyard Wind 1 project off of Massachusetts is planned to go online this year.
Credit: GE Renewable Energy
Bringing offshore wind to the U.S. hasn’t exactly been a breeze, but this year wind energy could finally have its moment: The energy company Avangrid Renewables plans to take the country’s first commercial-scale offshore wind project online in 2023. Its Vineyard Wind 1 project, which sits over 15 miles off the coast of Massachusetts, will offer a capacity of 800 megawatts. Plenty of other wind farms are in the works, including potential projects off the coasts of California, New Jersey, North Carolina, Connecticut, Maryland, and Virginia.
We can also expect a huge win for nuclear energy. The nuclear waste company Posiva will begin operating the world’s first storage facility for spent nuclear fuel in Olkiluoto, an island off the coast of Finland. The facility will hold up to around 7,000 tons of radioactive uranium, which will be put into copper canisters and buried over 1,300 feet underground. Fortunately for the people living above, the waste will sit guarded for millennia.
2. A DIFFERENT LOOK AT VIRTUAL REALITY
Companies will likely start to market VR and AR headsets for uses beyond gaming, like working from home and exercising.
Credit: Meta
If 2022 was the year of Metaverse fails, 2023 could herald its comeback — and improvements in VR and AR tech as a whole.
“I believe we will see virtual reality technology's continued refinement,” Christopher Ball, an assistant professor of augmented and virtual reality at the University of Illinois at Urbana-Champaign, tells Inverse.
The Meta Quest 3 headset will be announced later this year, and it will likely be more affordable than the Meta Quest Pro. But the new Quest could pack some advanced features now found exclusively in the Meta Quest Pro, according to Ball.
He also predicts that virtual reality companies may focus less on gaming and ramp up promotion of other uses to consumers, like working from home, exercising, and socializing. For example, the recent partnership between Meta and Microsoft will bring Office 365 apps to VR. And Meta is currently trying to buy Within, a VR company with a popular exercise app called Supernatural — against the wishes of the FTC.
“Hopefully, we will also learn more about Apple’s long-gestating mixed-reality headset. Apple has a strong record of refining consumer technologies with improved software integration,” Ball says. “Therefore, many observers are eagerly anticipating Apple’s entrance into the mixed-reality space, as they may become the trendsetters for extended reality technology and software over the next decade.”
1. A BIOTECH BREAKTHROUGH COULD GO MAINSTREAM
This year, CRISPR gene-editing therapy could finally be delivered to patients.
Credit: Shutterstock
After the miraculous success of the Covid-19 mRNA vaccines from BioNTech and other pharmaceutical giants, scientists have doubled down on developing more mRNA jabs to protect against a range of potentially deadly diseases. In 2023, BioNTech plans to begin human trials for shots against tuberculosis, malaria, and genital herpes, as reported by Nature.
Another buzzy technology could make inroads this year. The Swiss-American biotechnology company CRISPR Therapeutics could make history by receiving the first-ever regulatory approval for a CRISPR gene-editing therapy in the U.S. and Europe. CRISPR Therapeutics is seeking FDA approval for a treatment for two genetic blood diseases — sickle cell disease and beta thalassaemia. If all goes well, it could even hit the market in the coming months.
THE INVERSE ANALYSIS
Of course, there's no telling how exactly 2023 will play out. But if recent years are any indication, developments that have been decades in the making could finally start to take off. After all, scientists did just manage to bombard hydrogen with lasers long enough to create some mystical fusion energy.
16-11-2022
BREAKING: Large Hadron Collider Just Discovered Three New Exotic Particles
On Tuesday, the European nuclear research center CERN said that scientists using the upgraded Large Hadron Collider (LHC) had found three particles that had never been seen before.
The world’s biggest and most powerful particle collider started up again after a three-year break for improvements. Researchers can look at twenty times more collisions now that the LHC has been updated.
Researchers at CERN found a “pentaquark” and the first pair of “tetraquarks” with the help of the improved collider.
What does it mean that particles have been found?
Chris Parkes, a spokesman for the LHCb experiment, which was designed to find out what happened after the Big Bang, says that this discovery will help theorists make a unified model of exotic hadrons, the nature of which is mostly unknown.
“We are in a time of discovery similar to the 1950s, when a ‘particle zoo’ of hadrons started to be found, which led to the quark model of normal hadrons in the 1960s. We’re making ‘particle zoo 2.0’,” said Niels Tuning, who is in charge of physics at the LHCb.
A quark is an elementary particle that can’t be broken up into smaller pieces. Hadrons, like the protons and neutrons that make up atomic nuclei, are made when two or three quarks bind together. Before the LHC’s upgrade, these particles were hard to find because they often break apart quickly.
The upgraded Large Hadron Collider will run for about four years at 13.6 trillion electronvolts.
Tuning said, “The more analyses we do, the more kinds of strange hadrons we find.”
Researchers want to learn more about “dark matter”, which has never been directly observed. CERN also wants to find out more about how the subatomic particles that make up matter and antimatter are made and how they break down.
Researchers developed a new type of solar energy harvesting system that breaks the efficiency record of all existing technologies.
(CREDIT: Creative Commons)
Photovoltaic cells, which convert sunlight directly into electricity, have made enormous progress. Yet for all the research, history, and science behind them, there are limits to how much solar power can be harvested and used, because generation is restricted to daylight hours.
(CREDIT: University of Houston)
Bo Zhao, Kalsi Assistant Professor of mechanical engineering at the University of Houston, and his doctoral student Sina Jafari Ghalekohneh have created a new architecture that pushes the efficiency of solar energy harvesting to its thermodynamic limit. No less important, it clears the way to using solar power around the clock.
Figure: (a) traditional STPV versus (b) nonreciprocal STPV. In a traditional STPV, the absorber radiates back towards the sun; in the nonreciprocal design, the back emission of the intermediate layer is suppressed and more of the incoming energy is directed towards the cell. The nonreciprocal behavior of the intermediate layer can be made wavelength selective.
Traditional solar thermophotovoltaics (STPV) rely on an intermediate layer to tailor sunlight for better efficiency. The front side of that layer (the side facing the sun) is designed to absorb all incoming photons, so solar energy is converted into heat and the layer's temperature rises.
But the thermodynamic efficiency limit of STPVs, which has long been understood to be the blackbody limit (85.4%), is still far lower than the Landsberg limit (93.3%), the ultimate efficiency limit for solar energy harvesting.
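Both numbers are easy to reproduce from textbook formulas. The sketch below is a minimal Python illustration, not code from the Houston team, and it assumes the standard idealizations: a 6000 K sun, a 300 K ambient, and a Carnot conversion stage behind a blackbody intermediate layer.

```python
import numpy as np

T_SUN = 6000.0  # effective temperature of the sun (K), standard assumption
T_AMB = 300.0   # ambient / cell temperature (K)

def landsberg_limit(Ts=T_SUN, T0=T_AMB):
    """Landsberg limit: eta = 1 - (4/3)(T0/Ts) + (1/3)(T0/Ts)^4."""
    x = T0 / Ts
    return 1.0 - (4.0 / 3.0) * x + (1.0 / 3.0) * x**4

def blackbody_stpv_limit(Ts=T_SUN, T0=T_AMB, n=200_000):
    """Blackbody limit of a reciprocal STPV: an intermediate blackbody layer
    at temperature T absorbs sunlight but also radiates back towards the sun;
    the cell side is idealized as a Carnot engine between T and T0.
    Scan T numerically for the best trade-off."""
    T = np.linspace(T0 + 1.0, Ts - 1.0, n)
    eta = (1.0 - (T / Ts) ** 4) * (1.0 - T0 / T)
    best = np.argmax(eta)
    return eta[best], T[best]

print(f"Landsberg limit:      {landsberg_limit():.1%}")  # ~93.3%
eta, T_opt = blackbody_stpv_limit()
print(f"Blackbody STPV limit: {eta:.1%} at T ~ {T_opt:.0f} K")  # ~85.4%
```

The roughly eight-point gap between the two printed numbers is exactly the deficit Zhao attributes to the intermediate layer's back emission towards the sun.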
Zhao explained, “In this work, we show that the efficiency deficit is caused by the inevitable back emission of the intermediate layer towards the sun resulting from the reciprocity of the system. We propose nonreciprocal STPV systems that utilize an intermediate layer with nonreciprocal radiative properties. Such a nonreciprocal intermediate layer can substantially suppress its back emission to the sun and funnel more photon flux towards the cell.”
“We show that, with such improvement, the nonreciprocal STPV system can reach the Landsberg limit, and practical STPV systems with single-junction photovoltaic cells can also experience a significant efficiency boost,” he added.
Besides improved efficiency, STPVs promise compactness and dispatchability, that is, electricity that can be delivered on demand to match market needs.
In one important application scenario, STPVs can be coupled with an economical thermal energy storage unit to generate electricity 24/7.
“Our work highlights the great potential of nonreciprocal thermal photonic components in energy applications. The proposed system offers a new pathway to improve the performance of STPV systems significantly. It may pave the way for nonreciprocal systems to be implemented in practical STPV systems currently used in power plants,” said Zhao.
***
As an intellectual exercise, this is elegant work that shows where to look for more efficiency, and it makes a strong case for nonreciprocal solar thermophotovoltaics. But such systems haven't been designed and engineered yet.
Perhaps this work will trigger some progress. An efficiency above 93% is definitely something to keep chasing, and that “economical thermal energy storage unit” will need some work as well.
14-11-2022
Most Useful Machines That Do Incredible Things
In today’s world, technology is evolving faster than ever, and humans are powering it. Brilliant minds around the world innovate day and night to produce the most advanced machines and equipment, making our lives easier and our work more efficient. Sure, technology can get terrifying if you think about what it can do, such as tearing down entire forests. But it's also pretty amazing: we use machines to build bridges that humans simply couldn't build on their own. Stick around to learn more about the top 12 most useful machines that help humans do incredible things!