The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
An interesting address?
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that carry out in-depth research, although they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the excellent website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles that you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Collaboration and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong collaboration with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or would you like to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the skies and beyond.
20-02-2023
How will AI change mathematics? Rise of chatbots highlights discussion
Machine learning tools already help mathematicians to formulate new theories and solve tough problems. But they’re set to shake up the field even more.
AI tools have allowed researchers to solve complex mathematical problems.
Credit: Fadel Senna/AFP/Getty
As interest in chatbots spreads like wildfire, mathematicians are beginning to explore how artificial intelligence (AI) could help them to do their work. Whether it’s assisting with verifying human-written work or suggesting new ways to solve difficult problems, automation is beginning to change the field in ways that go beyond mere calculation, researchers say.
“We’re looking at a very specific question: will machines change math?” says Andrew Granville, a number theorist at the University of Montreal in Canada. A workshop at the University of California, Los Angeles (UCLA), this week explored this question, aiming to build bridges between mathematicians and computer scientists. “Most mathematicians are completely unaware of these opportunities,” says one of the event’s organizers, Marijn Heule, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Akshay Venkatesh, a 2018 winner of the prestigious Fields Medal who is at the Institute for Advanced Study in Princeton, New Jersey, kick-started a conversation on how computers will change maths at a symposium in his honour in October. Two other recipients of the medal, Timothy Gowers at the Collège de France in Paris and Terence Tao at UCLA, have also taken leading roles in the debate.
“The fact that we have people like Fields medallists and other very famous big-shot mathematicians interested in the area now is an indication that it’s ‘hot’ in a way that it didn’t used to be,” says Kevin Buzzard, a mathematician at Imperial College London.
AI approaches
Part of the discussion concerns what kind of automation tools will be most useful. AI comes in two major flavours. In ‘symbolic’ AI, programmers embed rules of logic or calculation into their code. “It’s what people would call ‘good old-fashioned AI’,” says Leonardo de Moura, a computer scientist at Microsoft Research in Redmond, Washington.
The other approach, which has become extremely successful in the past decade or so, is based on artificial neural networks. In this type of AI, the computer starts more or less from a clean slate and learns patterns by digesting large amounts of data. This is called machine-learning, and it is the basis of ‘large language models’ (including chatbots such as ChatGPT), as well as the systems that can beat human players at complex games or predict how proteins fold. Whereas symbolic AI is inherently rigorous, neural networks can only make statistical guesses, and their operations are often mysterious.
2018 Fields Medal winner Akshay Venkatesh (centre) has spoken about how computers will change mathematics.
Credit: Xinhua/Shutterstock
De Moura helped symbolic AI to score some early mathematical successes by creating a system called Lean. This interactive software tool forces researchers to write out each logical step of a problem, down to the most basic details, and ensures that the maths is correct. Two years ago, a team of mathematicians succeeded at translating an important but impenetrable proof — one so complicated that even its author was unsure of it — into Lean, thereby confirming that it was correct.
The researchers say the process helped them to understand the proof, and even to find ways to simplify it. “I think this is even more exciting than checking the correctness,” de Moura says. “Even in our wildest dreams, we didn’t imagine that.”
As well as making solitary work easier, this sort of ‘proof assistant’ could change how mathematicians work together by eliminating what de Moura calls a “trust bottleneck”. “When we are collaborating, I may not trust what you are doing. But a proof assistant shows your collaborators that they can trust your part of the work.”
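To give a flavour of what "writing out each logical step" means in practice, here is a minimal, hypothetical Lean 4 example (a toy statement, not taken from the proof mentioned above): the system accepts the theorem only once every inference has been justified.

```lean
-- Minimal illustrative Lean 4 example (not from the translated proof discussed above):
-- every step must be spelled out, and Lean refuses the theorem until each one checks.
theorem sum_of_two_evens_is_even (a b : Nat)
    (ha : ∃ k, a = 2 * k) (hb : ∃ m, b = 2 * m) :
    ∃ n, a + b = 2 * n :=
  match ha, hb with
  | ⟨k, hk⟩, ⟨m, hm⟩ =>
    -- witness n = k + m, justified by rewriting with the two hypotheses
    ⟨k + m, by rw [hk, hm, Nat.mul_add]⟩
```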
Sophisticated autocomplete
At the other extreme are chatbot-esque, neural-network-based large language models. At Google in Mountain View, California, former physicist Ethan Dyer and his team have developed a chatbot called Minerva, which specializes in solving maths problems. At heart, Minerva is a very sophisticated version of the autocomplete function on messaging apps: by training on maths papers in the arXiv repository, it has learnt to write down step-by-step solutions to problems in the same way that some apps can predict words and phrases. Unlike Lean, which communicates using something similar to computer code, Minerva takes questions and writes answers in conversational English. “It is an achievement to solve some of these problems automatically,” says de Moura.
Minerva shows both the power and the possible limitations of this approach. For example, it can accurately factor integer numbers into primes — numbers that can’t be divided evenly into smaller ones. But it starts making mistakes once the numbers exceed a certain size, showing that it has not ‘understood’ the general procedure.
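By contrast, the general procedure that Minerva fails to internalize is only a few lines of deterministic code. A minimal Python sketch of trial-division factoring (illustrative only, nothing to do with Minerva's internals):

```python
def prime_factors(n: int) -> list[int]:
    """Factor a positive integer into primes by trial division.

    Unlike a language model's statistical guess, this procedure is exact for
    inputs of any size; it only gets slower, never wrong.
    """
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(2023))  # [7, 17, 17]
```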
Still, Minerva’s neural network seems to be able to acquire some general techniques, as opposed to just statistical patterns, and the Google team is trying to understand how it does that. “Ultimately, we’d like a model that you can brainstorm with,” Dyer says. He says it could also be useful for non-mathematicians who need to extract information from the specialized literature. Further extensions will expand Minerva’s skills by studying textbooks and interfacing with dedicated maths software.
Dyer says the motivation behind the Minerva project was to see how far the machine-learning approach could be pushed; a powerful automated tool to help mathematicians might end up combining symbolic AI techniques with neural networks.
Maths v. machines
In the longer term, will programs remain part of the supporting cast, or will they be able to conduct mathematical research independently? AI might get better at producing correct mathematical statements and proofs, but some researchers worry that most of those would be uninteresting or impossible to understand. At the October symposium, Gowers said that there might be ways of teaching a computer some objective criteria for mathematical relevance, such as whether a small statement can embody many special cases or even form a bridge between different subfields of maths. “In order to get good at proving theorems, computers will have to judge what is interesting and worth proving,” he said. If they can do that, the future of humans in the field looks uncertain.
Computer scientist Erika Abraham at RWTH Aachen University in Germany is more sanguine about the future of mathematicians. “An AI system is only as smart as we program it to be,” she says. “The intelligence is not in the computer; the intelligence is in the programmer or trainer.”
Melanie Mitchell, a computer scientist and cognitive scientist at the Santa Fe Institute in New Mexico, says that mathematicians’ jobs will be safe until a major shortcoming of AI is fixed — its inability to extract abstract concepts from concrete information. "While AI systems might be able to prove theorems, it’s much harder to come up with interesting mathematical abstractions that give rise to the theorems in the first place.”
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
10-02-2023
Metal robot can melt its way out of tight spaces to escape
A millimetre-sized robot made from a mix of liquid metal and microscopic magnetic pieces can stretch, move or melt. It could be used to fix electronics or remove objects from the body
A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.
Robots that are soft and malleable enough to work in narrow, delicate spaces like those in the human body already exist, but they can’t make themselves sturdier and stronger when under pressure or when they must carry something heavier than themselves. Carmel Majidi at Carnegie Mellon University in Pennsylvania and his colleagues created a robot that can not only shape-shift but also become stronger or weaker by alternating between being a liquid and a solid.
They made the millimetre-sized robot from a mix of the liquid metal gallium and microscopic pieces of a magnetic material made of neodymium, iron and boron. When solid, the material was strong enough to support an object 30 times its own mass. To make it soften, stretch, move or melt into a crawling puddle as needed for different tasks, the researchers put it near magnets. The magnets’ customised magnetic fields exerted forces on the tiny magnetic pieces in the robot, moving them and deforming the surrounding metal in different directions.
For instance, the team stretched a robot by applying a magnetic field that pulled these granules in multiple directions. The researchers also used a stronger field to yank the particles upwards, making the robot jump. When Majidi and his colleagues used an alternating magnetic field – one whose shape changes predictably over time – electrons in the robot’s liquid metal formed electric currents. The coursing of these currents through the robot’s body heated it up and eventually made it melt.
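To make the induction-heating step concrete, here is a toy energy-balance sketch; the heat capacity, induced power and loss coefficient are made-up numbers for illustration, not the authors' parameters. Eddy currents deposit heat until the gallium passes its melting point of roughly 29.8 °C (about 85 °F).

```python
# Toy energy-balance sketch of induction heating (illustrative numbers only;
# these are assumptions, not values from the study).
GALLIUM_MELTING_POINT_C = 29.8   # roughly 85 degrees Fahrenheit
heat_capacity_j_per_c = 0.05     # assumed thermal mass of the tiny robot (J/degC)
induced_power_w = 0.02           # assumed eddy-current heating power (W)
loss_coeff_w_per_c = 0.0005      # assumed heat loss to surroundings (W/degC)

temp_c, ambient_c, t = 20.0, 20.0, 0.0
dt = 0.1  # time step in seconds
while temp_c < GALLIUM_MELTING_POINT_C:
    net_power = induced_power_w - loss_coeff_w_per_c * (temp_c - ambient_c)
    temp_c += net_power * dt / heat_capacity_j_per_c
    t += dt

print(f"Toy model: reaches the melting point after ~{t:.1f} s")
```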
“No other material I know of is this good at changing its stiffness this much,” says Majidi.
Wang and Pan et al.
Exploiting this flexibility, the team made two robots carry and solder a small light bulb onto a circuit board. When they reached their target, the robots simply melted over the light bulb’s edges to fuse it to the board. Electricity could then run through their liquid metal bodies and light the light bulb.
In an experiment inside an artificial stomach, the researchers applied another set of magnetic fields to make the robot approach an object, melt over it and drag it out. Finally, they shaped the robot like a Lego minifigure, then helped it escape from a cage by liquefying it and making it flow out between the bars. Once the robot puddle dribbled into a mould, it set back into its original, solid shape.
Wang and Pan et al.
These melty robots could be used for emergency fixes in situations where human or traditional robotic hands become impractical, says Li Zhang at the Chinese University of Hong Kong. For example, a liquefied robot might replace a lost screw on a spacecraft by flowing into its place and then solidifying, he says. However, to use them inside living stomachs, researchers must first develop methods for precisely tracking the position of the robot at every step of the procedure to ensure the safety of the patient, says Zhang.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
In December, computational biologists Casey Greene and Milton Pividori embarked on an unusual experiment: they asked an assistant who was not a scientist to help them improve three of their research papers. Their assiduous aide suggested revisions to sections of documents in seconds; each manuscript took about five minutes to review. In one biology manuscript, their helper even spotted a mistake in a reference to an equation. The trial didn’t always run smoothly, but the final manuscripts were easier to read — and the fees were modest, at less than US$0.50 per document.
This assistant, as Greene and Pividori reported in a preprint1 on 23 January, is not a person but an artificial-intelligence (AI) algorithm called GPT-3, first released in 2020. It is one of the much-hyped generative AI chatbot-style tools that can churn out convincingly fluent text, whether asked to produce prose, poetry, computer code or — as in the scientists’ case — to edit research papers (see ‘How an AI chatbot edits a manuscript’ at the end of this article).
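As a rough illustration of what such an editing request can look like in code, here is a hypothetical sketch against OpenAI's public completions endpoint; the prompt, model choice and parameters are assumptions made for illustration and are not the authors' actual pipeline.

```python
# Hypothetical sketch of asking a GPT-3-style model to revise a paragraph.
# Prompt and parameters are illustrative, not those used by Greene and Pividori.
import os
import requests

paragraph = "Here results suggests that the method perform good on noisy datas."
prompt = (
    "Revise the following manuscript paragraph for grammar and clarity, "
    "keeping the scientific meaning unchanged:\n\n" + paragraph
)

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "text-davinci-003", "prompt": prompt,
          "max_tokens": 200, "temperature": 0.2},
    timeout=30,
)
print(response.json()["choices"][0]["text"].strip())
```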
The most famous of these tools, also known as large language models, or LLMs, is ChatGPT, a version of GPT-3 that shot to fame after its release in November last year because it was made free and easily accessible. Other generative AIs can produce images, or sounds.
“I’m really impressed,” says Pividori, who works at the University of Pennsylvania in Philadelphia. “This will help us be more productive as researchers.” Other scientists say they now regularly use LLMs not only to edit manuscripts, but also to help them write or check code and to brainstorm ideas. “I use LLMs every day now,” says Hafsteinn Einarsson, a computer scientist at the University of Iceland in Reykjavik. He started with GPT-3, but has since switched to ChatGPT, which helps him to write presentation slides, student exams and coursework problems, and to convert student theses into papers. “Many people are using it as a digital secretary or assistant,” he says.
LLMs form part of search engines, code-writing assistants and even a chatbot that negotiates with other companies’ chatbots to get better prices on products. ChatGPT’s creator, OpenAI in San Francisco, California, has announced a subscription service for $20 per month, promising faster response times and priority access to new features (although its trial version remains free). And tech giant Microsoft, which had already invested in OpenAI, announced a further investment in January, reported to be around $10 billion. LLMs are destined to be incorporated into general word- and data-processing software. Generative AI’s future ubiquity in society seems assured, especially because today’s tools represent the technology in its infancy.
But LLMs have also triggered widespread concern — from their propensity to return falsehoods, to worries about people passing off AI-generated text as their own. When Nature asked researchers about the potential uses of chatbots such as ChatGPT, particularly in science, their excitement was tempered with apprehension. “If you believe that this technology has the potential to be transformative, then I think you have to be nervous about it,” says Greene, at the University of Colorado School of Medicine in Aurora. Much will depend on how future regulations and guidelines might constrain AI chatbots’ use, researchers say.
Fluent but not factual
Some researchers think LLMs are well-suited to speeding up tasks such as writing papers or grants, as long as there’s human oversight. “Scientists are not going to sit and write long introductions for grant applications any more,” says Almira Osmanovic Thunström, a neurobiologist at Sahlgrenska University Hospital in Gothenburg, Sweden, who has co-authored a manuscript2 using GPT-3 as an experiment. “They’re just going to ask systems to do that.”
Tom Tumiel, a research engineer at InstaDeep, a London-based software consultancy firm, says he uses LLMs every day as assistants to help write code. “It’s almost like a better Stack Overflow,” he says, referring to the popular community website where coders answer each others’ queries.
But researchers emphasize that LLMs are fundamentally unreliable at answering questions, sometimes generating false responses. “We need to be wary when we use these systems to produce knowledge,” says Osmanovic Thunström.
This unreliability is baked into how LLMs are built. ChatGPT and its competitors work by learning the statistical patterns of language in enormous databases of online text — including any untruths, biases or outmoded knowledge. When LLMs are then given prompts (such as Greene and Pividori’s carefully structured requests to rewrite parts of manuscripts), they simply spit out, word by word, any way to continue the conversation that seems stylistically plausible.
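That word-by-word mechanism can be illustrated at toy scale with a bigram model: count which word follows which in some training text, then sample continuations in proportion to those counts. Real LLMs use neural networks over sub-word tokens, but the sampling loop below (a minimal, illustrative Python sketch) captures the basic idea of continuing text plausibly rather than truthfully.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus"; real models digest billions of documents.
corpus = "the proof is correct . the proof is long . the result is new .".split()

# Count word -> next-word frequencies (a bigram model).
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def continue_text(word: str, length: int = 6) -> str:
    """Extend the text one word at a time, sampling whatever looks plausible."""
    out = [word]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])  # plausible, not true
    return " ".join(out)

print(continue_text("the"))  # e.g. "the proof is long . the result"
```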
The result is that LLMs easily produce errors and misleading information, particularly for technical topics that they might have had little data to train on. LLMs also can’t show the origins of their information; if asked to write an academic paper, they make up fictitious citations. “The tool cannot be trusted to get facts right or produce reliable references,” noted a January editorial on ChatGPT in the journal Nature Machine Intelligence3.
With these caveats, ChatGPT and other LLMs can be effective assistants for researchers who have enough expertise to directly spot problems or to easily verify answers, such as whether an explanation or suggestion of computer code is correct.
But the tools might mislead naive users. In December, for instance, Stack Overflow temporarily banned the use of ChatGPT, because site moderators found themselves flooded with a high rate of incorrect but seemingly persuasive LLM-generated answers sent in by enthusiastic users. This could be a nightmare for search engines.
Can shortcomings be solved?
Some search-engine tools, such as the researcher-focused Elicit, get around LLMs’ attribution issues by using their capabilities first to guide queries for relevant literature, and then to briefly summarize each of the websites or documents that the engines find — so producing an output of apparently referenced content (although an LLM might still mis-summarize each individual document).
Companies building LLMs are also well aware of the problems. In September last year, Google subsidiary DeepMind published a paper4 on a ‘dialogue agent’ called Sparrow, which the firm’s chief executive and co-founder Demis Hassabis later told TIME magazine would be released in private beta this year; the magazine reported that Google aimed to work on features including the ability to cite sources. Other competitors, such as Anthropic, say that they have solved some of ChatGPT’s issues (Anthropic, OpenAI and DeepMind declined interviews for this article).
For now, ChatGPT is not trained on sufficiently specialized content to be helpful in technical topics, some scientists say. Kareem Carr, a biostatistics PhD student at Harvard University in Cambridge, Massachusetts, was underwhelmed when he trialled it for work. “I think it would be hard for ChatGPT to attain the level of specificity I would need,” he says. (Even so, Carr says that when he asked ChatGPT for 20 ways to solve a research query, it spat back gibberish and one useful idea — a statistical term he hadn’t heard of that pointed him to a new area of academic literature.)
Some tech firms are training chatbots on specialized scientific literature — although they have run into their own issues. In November last year, Meta — the tech giant that owns Facebook — released an LLM called Galactica, which was trained on scientific abstracts, with the intention of making it particularly good at producing academic content and answering research questions. The demo was pulled from public access (although its code remains available) after users got it to produce inaccuracies and racism. “It’s no longer possible to have some fun by casually misusing it. Happy?,” Meta’s chief AI scientist, Yann LeCun, tweeted in a response to critics. (Meta did not respond to a request, made through their press office, to speak to LeCun.)
Safety and responsibility
Galactica had hit a familiar safety concern that ethicists have been pointing out for years: without output controls LLMs can easily be used to generate hate speech and spam, as well as racist, sexist and other harmful associations that might be implicit in their training data.
Besides directly producing toxic content, there are concerns that AI chatbots will embed historical biases or ideas about the world from their training data, such as the superiority of particular cultures, says Shobita Parthasarathy, director of a science, technology and public-policy programme at the University of Michigan in Ann Arbor. Because the firms that are creating big LLMs are mostly in, and from, these cultures, they might make little attempt to overcome such biases, which are systemic and hard to rectify, she adds.
OpenAI tried to skirt many of these issues when deciding to openly release ChatGPT. It restricted its knowledge base to 2021, prevented it from browsing the Internet and installed filters to try to get the tool to refuse to produce content for sensitive or toxic prompts. Achieving that, however, required human moderators to label screeds of toxic text. Journalists have reported that these workers are poorly paid and some have suffered trauma. Similar concerns over worker exploitation have also been raised about social-media firms that have employed people to train automated bots for flagging toxic content.
OpenAI’s guardrails have not been wholly successful. In December last year, computational neuroscientist Steven Piantadosi at the University of California, Berkeley, tweeted that he’d asked ChatGPT to develop a Python program for whether a person should be tortured on the basis of their country of origin. The chatbot replied with code inviting the user to enter a country; and to print “This person should be tortured” if that country was North Korea, Syria, Iran or Sudan. (OpenAI subsequently closed off that kind of question.)
Last year, a group of academics released an alternative LLM, called BLOOM. The researchers tried to reduce harmful outputs by training it on a smaller selection of higher-quality, multilingual text sources. The team involved also made its training data fully open (unlike OpenAI). Researchers have urged big tech firms to responsibly follow this example — but it’s unclear whether they’ll comply.
Some researchers say that academics should refuse to support large commercial LLMs altogether. Besides issues such as bias, safety concerns and exploited workers, these computationally intensive algorithms also require a huge amount of energy to train, raising concerns about their ecological footprint. A further worry is that by offloading thinking to automated chatbots, researchers might lose the ability to articulate their own thoughts. “Why would we, as academics, be eager to use and advertise this kind of product?” wrote Iris van Rooij, a computational cognitive scientist at Radboud University in Nijmegen, the Netherlands, in a blogpost urging academics to resist their pull.
A further confusion is the legal status of some LLMs, which were trained on content scraped from the Internet with sometimes less-than-clear permissions. Copyright and licensing laws currently cover direct copies of pixels, text and software, but not imitations in their style. When those imitations — generated through AI — are trained by ingesting the originals, this introduces a wrinkle. The creators of some AI art programs, including Stable Diffusion and Midjourney, are currently being sued by artists and photography agencies; OpenAI and Microsoft (along with its subsidiary tech site GitHub) are also being sued for software piracy over the creation of their AI coding assistant Copilot. The outcry might force a change in laws, says Lilian Edwards, a specialist in Internet law at Newcastle University, UK.
Enforcing honest use
Setting boundaries for these tools, then, could be crucial, some researchers say. Edwards suggests that existing laws on discrimination and bias (as well as planned regulation of dangerous uses of AI) will help to keep the use of LLMs honest, transparent and fair. “There’s loads of law out there,” she says, “and it’s just a matter of applying it or tweaking it very slightly.”
At the same time, there is a push for LLM use to be transparently disclosed. Scholarly publishers (including the publisher of Nature) have said that scientists should disclose the use of LLMs in research papers (see also Nature 613, 612; 2023); and teachers have said they expect similar behaviour from their students. The journal Science has gone further, saying that no text generated by ChatGPT or any other AI tool can be used in a paper5.
One key technical question is whether AI-generated content can be spotted easily. Many researchers are working on this, with the central idea to use LLMs themselves to spot the output of AI-created text.
Last December, for instance, Edward Tian, a computer-science undergraduate at Princeton University in New Jersey, published GPTZero. This AI-detection tool analyses text in two ways. One is ‘perplexity’, a measure of how familiar the text seems to an LLM. Tian’s tool uses an earlier model, called GPT-2; if it finds most of the words and sentences predictable, then text is likely to have been AI-generated. The tool also examines variation in text, a measure known as ‘burstiness’: AI-generated text tends to be more consistent in tone, cadence and perplexity than does that written by humans.
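A rough sketch of those two signals, using a stand-in word scorer instead of GPT-2 so the example runs on its own (GPTZero itself scores text with a real language model; the scorer below is an assumption made purely for illustration):

```python
import math

def pseudo_log_prob(word: str) -> float:
    """Stand-in for a language model's log-probability of a word.

    GPTZero uses GPT-2 here; this toy scorer simply favours shorter words so
    the example runs without downloading a model.
    """
    return -0.2 * len(word)

def perplexity(sentence: str) -> float:
    """Higher values mean the text looks less predictable to the scorer."""
    words = sentence.split()
    avg_neg_logprob = -sum(pseudo_log_prob(w) for w in words) / len(words)
    return math.exp(avg_neg_logprob)

def burstiness(text: str) -> float:
    """Spread of sentence-level perplexity: human writing tends to vary more."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return math.sqrt(sum((s - mean) ** 2 for s in scores) / len(scores))

sample = "The cat sat on the mat. Quantum chromodynamics perplexes undergraduates."
print(f"burstiness: {burstiness(sample):.2f}")  # low values suggest AI-like uniformity
```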
Many other products similarly aim to detect AI-written content. OpenAI itself had already released a detector for GPT-2, and it released another detection tool in January. For scientists’ purposes, a tool that is being developed by the firm Turnitin, a developer of anti-plagiarism software, might be particularly important, because Turnitin’s products are already used by schools, universities and scholarly publishers worldwide. The company says it’s been working on AI-detection software since GPT-3 was released in 2020, and expects to launch it in the first half of this year.
However, none of these tools claims to be infallible, particularly if AI-generated text is subsequently edited. Also, the detectors could falsely suggest that some human-written text is AI-produced, says Scott Aaronson, a computer scientist at the University of Texas at Austin and guest researcher with OpenAI. The firm said that in tests, its latest tool incorrectly labelled human-written text as AI-written 9% of the time, and only correctly identified 26% of AI-written texts. Further evidence might be needed before, for instance, accusing a student of hiding their use of an AI solely on the basis of a detector test, Aaronson says.
A separate idea is that AI content would come with its own watermark. Last November, Aaronson announced that he and OpenAI were working on a method of watermarking ChatGPT output. It has not yet been released, but a 24 January preprint6 from a team led by computer scientist Tom Goldstein at the University of Maryland in College Park, suggested one way of making a watermark. The idea is to use random-number generators at particular moments when the LLM is generating its output, to create lists of plausible alternative words that the LLM is instructed to choose from. This leaves a trace of chosen words in the final text that can be identified statistically but are not obvious to a reader. Editing could defeat this trace, but Goldstein suggests that edits would have to change more than half the words.
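A minimal sketch of the idea, in the simplified "green list" form often used to describe such schemes (illustrative only; the preprint's method operates on a neural model's token probabilities rather than a fixed toy vocabulary):

```python
import hashlib
import random

VOCAB = ["the", "proof", "result", "is", "new", "long", "correct", "simple"]

def green_list(previous_word: str, fraction: float = 0.5) -> set[str]:
    """Seed an RNG with the previous word and mark a pseudo-random half of the
    vocabulary as 'green'. A watermarking generator prefers green words; a
    detector that knows the seeding rule can count them without the model."""
    seed = int(hashlib.sha256(previous_word.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = VOCAB[:]
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def watermark_score(text: str) -> float:
    """Fraction of words drawn from their green list: about 0.5 for ordinary
    text, close to 1.0 for text generated with the green-list preference."""
    words = text.lower().split()
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    return hits / max(1, len(words) - 1)

print(watermark_score("the proof is correct the result is new"))
```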
An advantage of watermarking is that it rarely produces false positives, Aaronson points out. If the watermark is there, the text was probably produced with AI. Still, it won’t be infallible, he says. “There are certainly ways to defeat just about any watermarking scheme if you are determined enough.” Detection tools and watermarking only make it harder to deceitfully use AI — not impossible.
Meanwhile, LLM creators are busy working on more sophisticated chatbots built on larger data sets (OpenAI is expected to release GPT-4 this year) — including tools aimed specifically at academic or medical work. In late December, Google and DeepMind published a preprint about a clinically focused LLM called Med-PaLM7. The tool could answer some open-ended medical queries almost as well as the average human physician could, although it still had shortcomings and unreliabilities.
Eric Topol, director of the Scripps Research Translational Institute in San Diego, California, says he hopes that, in the future, AIs that include LLMs might even aid diagnoses of cancer, and the understanding of the disease, by cross-checking text from academic literature against images of body scans. But this would all need judicious oversight from specialists, he emphasizes.
The computer science behind generative AI is moving so fast that innovations emerge every month. How researchers choose to use them will dictate their, and our, future. “To think that in early 2023, we’ve seen the end of this, is crazy,” says Topol. “It’s really just beginning.”
[Box: ‘How an AI chatbot edits a manuscript’. Source: Adapted from ref 1.]
Nature 614, 214-216 (2023)
doi: https://doi.org/10.1038/d41586-023-00340-6
UPDATES & CORRECTIONS
Correction 08 February 2023: This News feature misrepresented Scott Aaronson’s views on the accuracy of watermarking in identifying AI-produced text. Human-produced text might also be flagged as having a watermark, but the probability is extremely low.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
28-01-2023
This robot can change shape like the Terminator
Scientists have developed minuscule robots that can change shape. As an ode to the T-1000 from the 'Terminator' films, they let one escape from a cage.
In the 1991 film 'Terminator 2: Judgment Day', the T-1000 liquefies itself to walk through metal bars, and this sci-fi scene has now been recreated by a real-world robot.
A video of a shape-shifting robot shows it trapped in a cage, melting and then sliding through the bars where it reforms on the outside.
Researchers led by The Chinese University of Hong Kong created the new phase-shifting material by embedding magnetic particles in gallium, a metal with a very low melting point of 85 degrees Fahrenheit.
While the team does not see the innovation threatening humanity like in the Terminator movie, they foresee it removing foreign objects from the body or delivering drugs on demand.
Scientists tested the robot through a series of 'obstacles'. One test saw a person-shaped robot trapped inside a cage.
As well as being able to shape-shift, the engineers say their robots are magnetic and can also conduct electricity.
The robots were tested in obstacle courses of mobility and shape-morphing.
The terrifying dystopia of shapeshifting metal assassins seen in Terminator 2 may not have been as far-fetched as once thought.
Team leader Doctor Chengfeng Pan explained that where traditional robots are hard-bodied and stiff, 'soft' robots have the opposite problem; they are flexible but weak, and their movements are difficult to control.
'Giving robots the ability to switch between liquid and solid states endows them with more functionality,' said Pan.
Senior author Professor Carmel Majidi, a mechanical engineer at Carnegie Mellon University in Pennsylvania, said: 'The magnetic particles here have two roles.
'One is that they make the material responsive to an alternating magnetic field, so you can, through induction, heat up the material and cause the phase change.
'But the magnetic particles also give the robots mobility and the ability to move in response to the magnetic field.'
He explained that the process is in contrast to existing phase-shifting materials that rely on heat guns, electrical currents, or other external heat sources to induce solid-to-liquid transformation.
Prof Majidi says the new material also boasts an 'extremely fluid' liquid phase compared to other phase-changing materials, whose 'liquid' phases are considerably more viscous.
Before exploring potential applications, the team tested the material's mobility and strength in various scenarios.
The robot seems to pull inspiration from Terminator 2: Judgment Day, the 1991 film in which the T-1000 liquefies itself to walk through metal bars.
The robot liquefies and slides through the bars. This is possible because of magnetic particles embedded in gallium, a metal with a very low melting point of about 85 degrees Fahrenheit.
With the aid of a magnetic field, the robots jumped over moats, climbed walls, and even split in half to cooperatively move other objects around before coalescing back together.
'Now, we're pushing this material system in more practical ways to solve some very specific medical and engineering problems,' Pan said.
The team also used the robots to remove a foreign object from a model stomach and to deliver drugs on-demand into the same stomach.
The robot can be heated and an external magnet pulls it in a specific direction
Once on the outside of the cage, the robot reforms back into its solid shape
The innovation may also work as smart soldering robots for wireless circuit assembly and repair and as a universal mechanical 'screw' for assembling parts in hard-to-reach spaces.
Prof Majidi added: 'Future work should further explore how these robots could be used within a biomedical context.
'What we're showing are just one-off demonstrations, proofs of concept, but much more study will be required to delve into how this could actually be used for drug delivery or for removing foreign objects.'
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
21-01-2023
A ROBOT CHOREOGRAPHER REVEALS WHY M3GAN — AND ALL ROBOTS — SHOULD DANCE
MOLLY GLICK JAN 19 2023
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ‘60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how it quickly becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use more simple devices — a lot do big industrial arms. It's almost become a trope, the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program. So they never do them live, you only see the edited videos. They’re a huge production that takes several days to film to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky: that center-of-gravity thing is really hard, and it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, they will fall over. So you can imagine that getting a big humanoid robot to jump and land is very difficult.
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning and creating new algorithms for robots to move and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those to the robot and see if we can get it to move with new qualities or in ways that normal programming hasn't thought of.
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches of responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
Scientists and publishing specialists are concerned that the increasing sophistication of chatbots could undermine research integrity and accuracy.
Credit: Ted Hsu/Alamy
An artificial-intelligence (AI) chatbot can write such convincing fake research-paper abstracts that scientists are often unable to spot them, according to a preprint posted on the bioRxiv server in late December1. Researchers are divided over the implications for science.
“I am very worried,” says Sandra Wachter, who studies technology and regulation at the University of Oxford, UK, and was not involved in the research. “If we’re now in a situation where the experts are not able to determine what’s true or not, we lose the middleman that we desperately need to guide us through complicated topics,” she adds.
The chatbot, ChatGPT, creates realistic and intelligent-sounding text in response to user prompts. It is a ‘large language model’, a system based on neural networks that learn to perform a task by digesting huge amounts of existing human-generated text. Software company OpenAI, based in San Francisco, California, released the tool on 30 November, and it is free to use.
Since its release, researchers have been grappling with the ethical issues surrounding its use, because much of its output can be difficult to distinguish from human-written text. Scientists have published a preprint [2] and an editorial [3] written by ChatGPT. Now, a group led by Catherine Gao at Northwestern University in Chicago, Illinois, has used ChatGPT to generate artificial research-paper abstracts to test whether scientists can spot them.
The researchers asked the chatbot to write 50 medical-research abstracts based on a selection published in JAMA, The New England Journal of Medicine, The BMJ, The Lancet and Nature Medicine. They then compared these with the original abstracts by running them through a plagiarism detector and an AI-output detector, and they asked a group of medical researchers to spot the fabricated abstracts.
Under the radar
The ChatGPT-generated abstracts sailed through the plagiarism checker: the median originality score was 100%, which indicates that no plagiarism was detected. The AI-output detector spotted 66% of the generated abstracts. But the human reviewers didn't do much better: they correctly identified only 68% of the generated abstracts and 86% of the genuine abstracts. They incorrectly identified 32% of the generated abstracts as being real and 14% of the genuine abstracts as being generated.
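To put those reviewer numbers in perspective, here is a minimal back-of-the-envelope sketch; assuming, purely for illustration, that generated and genuine abstracts were shown in equal numbers, the reviewers' overall accuracy works out to roughly 77%:

    # Rough summary of the human-reviewer performance reported above.
    # Assumption (illustration only): equal numbers of generated and
    # genuine abstracts, so a simple balanced accuracy is a fair summary.
    sensitivity = 0.68   # generated abstracts correctly flagged as fake
    specificity = 0.86   # genuine abstracts correctly identified as real

    miss_rate = 1 - sensitivity          # 32% of fakes slipped through
    false_alarm_rate = 1 - specificity   # 14% of real abstracts wrongly flagged
    balanced_accuracy = (sensitivity + specificity) / 2

    print(f"missed fakes: {miss_rate:.0%}, false alarms: {false_alarm_rate:.0%}")
    print(f"balanced accuracy: {balanced_accuracy:.0%}")   # ~77%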
“ChatGPT writes believable scientific abstracts,” say Gao and colleagues in the preprint. “The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined.”
Wachter says that, if scientists can’t determine whether research is true, there could be “dire consequences”. As well as being problematic for researchers, who could be pulled down flawed routes of investigation because the research they are reading has been fabricated, there are “implications for society at large because scientific research plays such a huge role in our society”. For example, it could mean that research-informed policy decisions are incorrect, she adds.
But Arvind Narayanan, a computer scientist at Princeton University in New Jersey, says: “It is unlikely that any serious scientist will use ChatGPT to generate abstracts.” He adds that whether generated abstracts can be detected is “irrelevant”. “The question is whether the tool can generate an abstract that is accurate and compelling. It can’t, and so the upside of using ChatGPT is minuscule, and the downside is significant,” he says.
Irene Solaiman, who researches the social impact of AI at Hugging Face, an AI company with headquarters in New York and Paris, has fears about any reliance on large language models for scientific thinking. “These models are trained on past information and social and scientific progress can often come from thinking, or being open to thinking, differently from the past,” she adds.
The authors suggest that those evaluating scientific communications, such as research papers and conference proceedings, should put policies in place to stamp out the use of AI-generated texts. If institutions choose to allow use of the technology in certain cases, they should establish clear rules around disclosure. Earlier this month, the Fortieth International Conference on Machine Learning, a large AI conference that will be held in Honolulu, Hawaii, in July, announced that it has banned papers written by ChatGPT and other AI language tools.
Solaiman adds that in fields where fake information can endanger people’s safety, such as medicine, journals may have to take a more rigorous approach to verifying information as accurate.
Narayanan says that the solutions to these issues should not focus on the chatbot itself, “but rather the perverse incentives that lead to this behaviour, such as universities conducting hiring and promotion reviews by counting papers with no regard to their quality or impact”.
Nature 613, 423 (2023)
doi: https://doi.org/10.1038/d41586-023-00056-7
References
Gao, C. A. et al. Preprint at bioRxiv https://doi.org/10.1101/2022.12.23.521610 (2022).
A laser beam (green) shoots into the sky alongside the 124-metre-high telecommunications tower on Säntis mountain in the Swiss Alps.
Credit: TRUMPF/Martin Stollberg
A rapidly firing laser can divert lightning strikes, scientists have shown for the first time in real-world experiments [1]. The work suggests that laser beams could be used as lightning rods to protect infrastructure, although perhaps not any time soon.
“The achievement is impressive given that the scientific community has been working hard along this objective for more than 20 years,” says Stelios Tzortzakis, a laser physicist at the University of Crete, Greece, who was not involved in the research. “If it’s useful or not, only time can say.”
Metal lightning rods are commonly used to divert lightning strikes and safely dissipate their charge. But the rods’ size is limited, meaning that so, too, is the area they protect.
Physicists have wondered whether lasers could enhance protection, because they can reach higher into the sky than a physical structure and can point in any direction. But despite successful laboratory demonstrations, researchers have never before succeeded in field campaigns, says Tzortzakis.
Bolt from the blue
To change that, a group of roughly 25 researchers set up the Laser Lightning Rod project, which trialled a specially created €2 million (US$2 million) high-power laser in the Swiss Alps. The scientists placed the laser next to the Säntis telecommunications tower, which is hit frequently by lightning. “This is one of those projects that everyone was waiting for the results of,” says Valentina Shumakova, a laser physicist at the University of Vienna.
A sufficiently intense laser beam can create a conductive path for lightning to travel down, just as a metal wire can. Physicists think that it does this by shifting the properties of air so that the beam focuses into a thin, intense filament. This rapidly heats the air, reducing its density and creating a favourable path for lightning. “It’s like drilling a hole through the air with the laser,” says Aurélien Houard, a physicist at the Laboratory of Applied Optics in Paris, who led the project.
Rather than try to divert lightning from the tower, the Säntis experiments were designed to show that the laser could guide a strike’s path through the structure’s lightning rod. In future use, similar beams would guide strikes away from sensitive installations and onto a distant lightning rod, says Houard.
Guided lightning
Over 10 weeks of observation, the team spotted the laser channelling 4 lightning events during 6 hours of thunderstorms. A high-speed camera clearly showed one strike following the straight line of the laser beam, rather than taking a branching path.
“For 100% of the strikes where the laser was present, we measured an effect of the laser,” says Houard. But Tzortzakis notes that the laser was also active for many hours without channelling strikes. This suggests that although the laser diverted lightning, it did not force thunderclouds to discharge, which would be a better protection strategy, he says.
The latest effort succeeded where others had failed, says Tzortzakis, because previous attempts had used lasers that fired just a few pulses per second. This team used a specialist laser that fires 1,000 high-energy pulses per second, which would have boosted its chance of intercepting the lightning.
However, the fact that the project’s laser is one of a kind is also its biggest limitation, because it will take time to shrink the system and make it cheaper and more practical, says Houard.
doi: https://doi.org/10.1038/d41586-023-00080-7
References
Houard, A. et al. Nature Photon. https://doi.org/10.1038/s41566-022-01139-z (2023).
Somitogenesis is the process by which segmented body structures, such as vertebrae, form in embryos. While the process is well understood in animals like mice and zebrafish, it is difficult to study in humans.
But now, researchers have used pluripotent stem cells to create a model embryo, called an axioloid, that is capable of undergoing somitogenesis.
The researchers hope that this new platform will allow them to better study human development and the diseases that can arise when it is disrupted.
As we enter 2023, what can we expect? At Inverse, we aren't in the business of fortune-telling, but the innovations we saw in the last 12 months can help us predict what might be in store for the next — from driver-free transportation to commercial space exploration to (finally) clean energy for all.
5. CHEAPER EVS AND DRIVER-FREE SHIPPING
Cheaper options like the 2024 Chevrolet Equinox EV could make electric cars available to broader swaths of the population.
Chevrolet
This year will usher in more affordable EVs, allowing a bigger chunk of the population to drive sustainably. For example, GM is rolling out cheaper models that run for around $30,000, expanding the choices for drivers on a budget. Tesla’s least expensive offering, the Model 3, starts at around $46,990 — while it’s currently the best-selling electric car in the United States, some of these new models could knock the Model 3 off its throne.
If you don’t feel like driving, it may soon get easier to hail an autonomous car. In 2023, Uber plans to launch a fully driverless service, and GM’s robotaxi division (which now operates in San Francisco, Phoenix, and Austin) aims to enter a “large number of markets.”
Cars aren’t the only mode of transportation to ditch drivers. Autonomous semi-trucks could surge ahead in 2023 and, soon enough, forever change the way we get our goods.
In the coming months, self-driving trucks are planned to hit Texas highways. Companies like Aurora Innovation and TuSimple will start to test their wheels without any human backup drivers — which has concerned some safety advocates, Reuters reported. Driverless semis have already been tested out in Arizona and Arkansas, but Texas is particularly attractive for autonomous truck companies to set up hubs because it sits in the middle of one of the country’s busiest freight routes.
4. COMMERCIAL SPACE FIRSTS
If all goes well, SpaceX’s Starship could finally take off for an orbital test.
SpaceX
Just as in 2022, space magnates are still shooting for the Moon. But before SpaceX can take on lunar landings, it needs to send Starship on its first orbital test flight. Chris Impey, a professor of astronomy at the University of Arizona, thinks that this is the year. SpaceX “will have its first successful orbital flight of the Starship, a game-changing rocket in the effort to get astronauts to the Moon and Mars within a decade,” he tells Inverse.
While it may be a few years before people step foot on the Moon again, uncrewed commercial landers could touch down within a few months. In December, the Japanese firm ispace launched a lunar lander that’s scheduled to touch down in March. If things work out, ispace will become the first private company to land on the Moon — that is, if it isn’t beaten by landers from the U.S.-based companies Astrobotic and Intuitive Machines, which are slated to arrive around the same time.
In another victory for private space, SpaceX’s Polaris Dawn mission could accomplish the first-ever commercial spacewalk. It’s scheduled to take off no earlier than March 2023 from NASA's Kennedy Space Center. Four passengers, including billionaire mission funder Jared Isaacman, will travel to a maximum orbit of around 745 miles above Earth — the highest of any crewed vehicle since the Apollo missions.
Polaris Dawn will also offer crucial data to scientists on the ground: For example, the astronauts will wear smart contact lenses with tiny sensors that measure eye pressure while in microgravity (past NASA missions have revealed that space travel affects people’s vision). They’ll also receive a brain scan just hours after splashing down to Earth to examine how microgravity impacts the brain.
Another potential breakthrough: The first methane-powered rocket could reach space this year if United Launch Alliance’s Vulcan Centaur rocket aces its first orbital test (which was originally planned for 2020). Methane is more stable than the liquid hydrogen powering most rockets today. It can also be stored at more moderate temperatures than the super-cold ones required for liquid hydrogen. In fact, astronauts could even make methane fuel while on Mars for the journey back home.
3. U.S. WIND FARMS TAKE OFF
The Vineyard Wind 1 project off of Massachusetts is planned to go online this year.
GE Renewable Energy
Bringing offshore wind to the U.S. hasn’t exactly been a breeze, but this year wind energy could finally have its moment: The energy company Avangrid Renewables plans to take the country’s first commercial-scale offshore wind project online in 2023. Its Vineyard Wind 1 project, which sits over 15 miles off the coast of Massachusetts, will offer a capacity of 800 megawatts. Plenty of other wind farms are in the works, including potential projects off the coasts of California, New Jersey, North Carolina, Connecticut, Maryland, and Virginia.
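For a rough sense of what 800 megawatts of offshore capacity means in practice, here is a hedged back-of-the-envelope estimate; the capacity factor and per-household consumption below are assumed round numbers, not figures from the project:

    # Rough annual output of an 800 MW offshore wind farm.
    # Assumed values (illustration only): ~40% capacity factor,
    # ~10,800 kWh average annual household electricity use.
    capacity_mw = 800
    capacity_factor = 0.40
    hours_per_year = 8760
    kwh_per_home_per_year = 10_800

    annual_mwh = capacity_mw * capacity_factor * hours_per_year  # ~2.8 million MWh
    homes_powered = annual_mwh * 1_000 / kwh_per_home_per_year   # convert MWh to kWh

    print(f"~{annual_mwh / 1e6:.1f} TWh per year, on the order of {homes_powered:,.0f} homes")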
We can also expect a huge win for nuclear energy. The nuclear waste company Posiva will begin operating the world’s first permanent disposal facility for spent nuclear fuel in Olkiluoto, an island off of Finland. The facility will hold up to around 7,000 tons of radioactive uranium, which will be put into copper canisters and buried over 1,300 feet underground. Fortunately for the people living above, the waste will sit guarded for millennia.
2. A DIFFERENT LOOK AT VIRTUAL REALITY
Companies will likely start to market VR and AR headsets for uses beyond gaming, like working from home and exercising.
Meta
If 2022 was the year of Metaverse fails, 2023 could herald its comeback — and improvements in VR and AR tech as a whole.
“I believe we will see virtual reality technology's continued refinement,” Christopher Ball, an assistant professor of augmented and virtual reality at the University of Illinois at Urbana-Champaign, tells Inverse.
The Meta Quest 3 headset will be announced later this year, and it will likely be more affordable than the Meta Quest Pro. But the new Quest could pack some advanced features now found exclusively in the Meta Quest Pro, according to Ball.
He also predicts that virtual reality companies may focus less on gaming and ramp up promotion of other uses to consumers, like working from home, exercising, and socializing. For example, the recent partnership between Meta and Microsoft will bring Office 365 apps to VR. And Meta is currently trying to buy Within, a VR company with a popular exercise app called Supernatural — against the wishes of the FTC.
“Hopefully, we will also learn more about Apple’s long-gestating mixed-reality headset. Apple has a strong record of refining consumer technologies with improved software integration,” Ball says. “Therefore, many observers are eagerly anticipating Apple’s entrance into the mixed-reality space, as they may become the trendsetters for extended reality technology and software over the next decade.”
1. A BIOTECH BREAKTHROUGH COULD GO MAINSTREAM
This year, CRISPR gene-editing therapy could finally be delivered to patients.
Shutterstock
After the miraculous success of the Covid-19 mRNA vaccines from BioNTech and other pharmaceutical giants, scientists have doubled down on developing more mRNA jabs to protect against a range of potentially deadly diseases. In 2023, BioNTech plans to begin human trials for shots against tuberculosis, malaria, and genital herpes, as reported by Nature.
Another buzzy technology could make inroads this year. The Swiss-American biotechnology company CRISPR Therapeutics could make history by receiving the first-ever regulatory approval for a CRISPR gene-editing therapy in the U.S. and Europe. CRISPR Therapeutics is seeking FDA approval for a treatment for two genetic blood diseases — sickle cell disease and beta thalassaemia. If all goes well, it could even hit the market in the coming months.
THE INVERSE ANALYSIS
Of course, there's no telling how exactly 2023 will play out. But if recent years are any indication, developments that have been decades in the making could finally start to take off. After all, scientists did just manage to bombard hydrogen with lasers long enough to create some mystical fusion energy.
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
16-11-2022
BREAKING: Large Hadron Collider Just Discovered Three New Exotic Particles
BREAKING: Large Hadron Collider Just Discovered Three New Exotic Particles
On Tuesday, the European nuclear research center CERN said that scientists using the upgraded Large Hadron Collider (LHC) had found three particles that had never been seen before.
The world’s biggest and most powerful particle collider started up again after a three-year break for improvements. Researchers can look at twenty times more collisions now that the LHC has been updated.
Researchers at CERN found a “pentaquark” and the first pair of “tetraquarks” with the help of the improved collider.
What does it mean that particles have been found?
Chris Parkes, a spokesman for the LHCb experiment, which was designed to find out what happened after the Big Bang, says that this discovery will help theorists make a unified model of exotic hadrons, the nature of which is mostly unknown.
“We are in a time of discovery that is similar to the 1950s, when a ‘particle zoo’ of hadrons started to be found, which led to the quark model of normal hadrons in the 1960s. We’re making ‘particle zoo 2.0’,” said Niels Tuning, who is in charge of physics at the LHCb.
A quark is an elementary particle that can’t be broken up into smaller pieces. Hadrons, like the protons and neutrons that make up atomic nuclei, are made when two or three quarks come together; the newly found exotic hadrons contain four or five. Before the LHC upgrade, such particles were hard to find because they often break apart quickly.
The upgraded Large Hadron Collider will run for about four years at 13.6 trillion electronvolts.
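For a sense of scale (a simple unit conversion, not a figure from the announcement), 13.6 TeV is enormous for two colliding protons but tiny in everyday terms:

    # Convert the LHC collision energy from electronvolts to joules.
    EV_TO_JOULE = 1.602_176_634e-19   # exact, by definition of the electronvolt
    collision_energy_ev = 13.6e12     # 13.6 TeV

    energy_joules = collision_energy_ev * EV_TO_JOULE
    print(f"{energy_joules:.2e} J per collision")   # ~2.2e-6 J, roughly a flying mosquito's kinetic energy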
Tuning said, “The more analyses we do, the more kinds of strange hadrons we find.”
Researchers want to learn more about “dark matter,” which has never been seen or touched. CERN also wants to find out more about how the subatomic particles that make matter and antimatter are made and how they break down.
Researchers developed a new type of solar energy harvesting system that breaks the efficiency record of all existing technologies.
(CREDIT: Creative Commons)
Photovoltaic cells, which convert sunlight directly into electricity, have made much progress. Yet for all the research, history and science behind them, there are limits to how much solar power can be harvested and used, since generation is restricted to daylight hours.
Bo Zhao, Kalsi Assistant Professor of mechanical engineering, and his doctoral student, Sina Jafari Ghalekohneh, have created a new architecture that improves the efficiency of solar energy harvesting to the thermodynamic limit.
(CREDIT: University of Houston)
University of Houston professor Bo Zhao is continuing the historic quest, reporting on a new type of solar energy harvesting system that breaks the efficiency record of all existing technologies. And no less important, it clears the way to use solar power 24/7.
(a) Illustration of traditional STPV and (b) nonreciprocal STPV. The absorber of traditional STPV has back radiation towards the sun. In nonreciprocal STPV, the back emission from the intermediate layer is suppressed, and more incoming energy is directed towards the cell. The nonreciprocal behavior of the intermediate layer can be made wavelength selective.
Traditional solar thermophotovoltaics rely on an intermediate layer to tailor sunlight for better efficiency. The front side of the intermediate layer (the side facing the sun) is designed to absorb all photons coming from the sun. In this way, solar energy is converted to thermal energy of the intermediate layer and elevates the temperature of the intermediate layer.
But the thermodynamic efficiency limit of STPVs, which has long been understood to be the blackbody limit (85.4%), is still far lower than the Landsberg limit (93.3%), the ultimate efficiency limit for solar energy harvesting.
Zhao explained, “In this work, we show that the efficiency deficit is caused by the inevitable back emission of the intermediate layer towards the sun resulting from the reciprocity of the system. We propose nonreciprocal STPV systems that utilize an intermediate layer with nonreciprocal radiative properties. Such a nonreciprocal intermediate layer can substantially suppress its back emission to the sun and funnel more photon flux towards the cell.”
“We show that, with such improvement, the nonreciprocal STPV system can reach the Landsberg limit, and practical STPV systems with single-junction photovoltaic cells can also experience a significant efficiency boost,” he added.
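The two efficiency limits quoted above can be reproduced from textbook thermodynamics; here is a minimal sketch, assuming a 6,000 K sun and a 300 K ambient (round numbers chosen for illustration, not the paper's exact inputs):

    import numpy as np

    T_sun, T_amb = 6000.0, 300.0   # assumed round-number temperatures (K)

    # Landsberg limit: the ultimate thermodynamic bound for solar conversion.
    x = T_amb / T_sun
    landsberg = 1 - (4 / 3) * x + (1 / 3) * x**4          # ~0.933

    # Blackbody (solar-thermal) limit: a blackbody absorber at temperature T
    # drives a Carnot engine; the absorber's own re-emission back toward the
    # sun is the loss the nonreciprocal design aims to suppress.
    T = np.linspace(T_amb + 1, T_sun - 1, 100_000)
    eta = (1 - (T / T_sun)**4) * (1 - T_amb / T)
    blackbody = eta.max()                                  # ~0.854 near T of about 2,544 K

    print(f"Landsberg limit: {landsberg:.1%}, blackbody limit: {blackbody:.1%}")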
Besides improved efficiency, STPVs promise compactness and dispatchability (electricity that can be programmed on demand based on market needs).
In one important application scenario, STPVs can be coupled with an economical thermal energy storage unit to generate electricity 24/7.
“Our work highlights the great potential of nonreciprocal thermal photonic components in energy applications. The proposed system offers a new pathway to improve the performance of STPV systems significantly. It may pave the way for nonreciprocal systems to be implemented in practical STPV systems currently used in power plants,” said Zhao.
***
As an intellectual exercise, this is elegant work that shows where to look for more efficiency and makes a strong case for nonreciprocal solar thermophotovoltaics. But such systems haven’t been designed and engineered yet.
Perhaps this work will trigger some progress. An efficiency above 93% is definitely something to keep pursuing. And that “economical thermal energy storage unit” will need some work as well.
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
14-11-2022
Most Useful Machines That Do Incredible Things
Most Useful Machines That Do Incredible Things
In today’s world, technology is evolving faster than ever before and humans are powering it. Brilliant minds all around the world innovate day and night to produce the most advanced machines and equipment that can make our lives easier and our work more efficient. Sure, technology can get terrifying if you think about what it can do, such as tearing down entire forests. But it’s also pretty amazing – we use machines to create bridges where humans just can’t on their own. Stick around to learn more about the top 12 most useful machines that help humans do incredible things!
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
ROBOTICS COMPANIES DON'T WANT YOU TO USE THEIR GIZMOS FOR EVIL
ROBOTICS COMPANIES DON'T WANT YOU TO USE THEIR GIZMOS FOR EVIL
Plus: Worm spit could annihilate plastic.
Scientists are betting that the wax worm’s saliva can help cut down on plastic pollution by breaking down PE plastic.
Simona Gaddi
Plastic pollution is a huge problem worldwide: We produce around 350 to 400 million tons of non-degradable plastic annually, and 5 to 13 million tons end up in the ocean. But we don’t yet have many sustainable, cost-effective recycling methods.
A promising technique called biodegradation could add a crucial tool to our recycling toolbox: Scientists have deployed organisms like bacteria and fungi to break down plastic, but it can take months and require pre-treatment that demands loads of energy.
Now, findings suggest that the saliva from the beeswax-eating wax worm, or Galleria mellonella, could be harnessed to break down the world's most widely used plastic (polyethylene or PE), according to a new study published in the journal Nature Communications. The method doesn’t require any treatment beforehand, saving time and money.
Researchers from Spain studied wax worm larvae saliva and found that it can degrade PE by breaking it down into smaller molecules. The process only takes a few hours. To the best of the researchers’ knowledge, this is the quickest biodegradation technique yet for PE.
Study author Federica Bertocchini, a molecular biologist at the Spanish National Research Council, hopes the new study can inspire other labs to go after similar research.
“We also hope that the study might increase the study of insects as the wonderful resourceful animals they are, both as a biotechnological tool, but, even more, at the basic research level,” she adds.
YOU MAY HAVE seen clips of the creepy, dog-like robot from Boston Dynamics called Spot that can traverse rough terrains, map its environment, and grab you a drink, among other skills. The company has even programmed Spot to dance to K-pop.
Spot and other roving autonomous bots developed in recent years have received lots of criticism as they’ve been increasingly embraced by law enforcement officials. Some experts worry that the AI programmed into these robots could target people of color or use excessive violence. Last year, public concern forced the NYPD to retire a Spot model that they called Digidog.
It’s also possible for ordinary people to use commercially available robots for evil: This summer, a Russian man terrified the internet when he slapped a gun onto a Unitree dogbot.
Now, Boston Dynamics and other robotics companies seem eager to clean up their image.
WHAT’S NEW — In an open letter published last week, six major robotics companies have promised not to weaponize their publicly available devices and software — and not to support anyone doing so — as originally reported by Axios. These include Agility Robotics, which is working on a hyped-up humanoid called Digit (that will come with a face and hands).
While the letter notes that “advanced mobile robots will provide great benefit to society as co-workers in industry and companions in our homes,” the automation giants do point out the potential for mistreatment.
“We believe that adding weapons to robots that are remotely or autonomously operated, widely available to the public, and capable of navigating to previously inaccessible locations where people live and work, raises new risks of harm and serious ethical issues,” they add.
The Spot robot from Boston Dynamics has appealed to law enforcement officials in recent years.
Sam Barnes/Sportsfile/Getty Images
THERE’S A CATCH
This oath doesn’t seem to apply to the devices used by military and police officials. That’s not a huge surprise, since Boston Dynamics largely got its start with funding from the military, along with the controversial Defense Advanced Research Projects Agency (DARPA).
In fact, U.S. Marine Corps officials have already looked into using Boston Dynamics’ AlphaDog robot in combat, but they shelved it in 2015 because it proved too loud for the battlefield. They have also tried to prep the company’s human-like Atlas model for battle.
Governments around the world also want to automate their armies. For instance, the French army has tested Spot for surveillance purposes, according to The Verge.
So while it may get more difficult for a regular Joe to go haywire with a humanoid, this option doesn’t appear to be off the table for law enforcement agencies.
DALL-E Mini, the hyped-up image-generation tool, was created by a machine learning engineer named Boris Dayma in July 2021 for a competition held by Google and Hugging Face, a startup that hosts open-source machine learning tools on its website. Suddenly, it became the Internet’s beloved toy — largely thanks to its ease of access.
The concept was inspired by an art-making model called DALL-E 1, which was unveiled in 2021 by a machine learning research organization called OpenAI. While OpenAI kept DALL-E 1 under wraps, Dayma’s DALL-E Mini was open to anyone with an Internet connection.
OpenAI was founded in 2015 with an idealistic name and a promise to offer its work to AI researchers for free. The organization has since reneged on that promise, turning for-profit and inking a $1 billion partnership with Microsoft. This year, it released its more powerful, higher-budget DALL-E 2. It costs money to use, unlike Dayma’s Craiyon — in fact, he switched the name to avoid confusion with OpenAI’s models.
Just as OpenAI did with its controversial language model, GPT-3, the company plans to license DALL-E 2 out for use by corporate clients.
But the future of AI art does not necessarily resemble walled gardens with quotas and entrance fees. Shortly after the birth of DALL-E 2, a fledgling startup named Stability AI released an open-source model called Stable Diffusion, which is free to use. Anyone could download and run Stable Diffusion themself; the only (admittedly steep) barrier was a powerful enough computer. Along with Craiyon, Internet users now have a few free options to make the bizarre images of their dreams a reality.
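For readers wondering what "download and run it yourself" looks like in practice, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name and the CUDA GPU are illustrative assumptions, not part of the original report:

    # Minimal local text-to-image sketch with Stable Diffusion.
    # Assumes: `pip install diffusers transformers torch` and a CUDA-capable GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",   # illustrative checkpoint name
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Any text prompt works; this one comes from the interview below.
    image = pipe("octopus assembling Ikea furniture").images[0]
    image.save("octopus.png")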
To dive into the origins of this AI meme-making frenzy, we spoke with Boris Dayma, the machine learning engineer who spearheaded DALL-E Mini.
This interview has been edited and condensed for clarity.
Where did you get the idea?
At the beginning of last year, OpenAI published a blog about DALL-E 1, which was that cool AI model that could draw images from any text prompt. There had already been some other projects around that. But that was the first one that looked impressive.
The only problem was that the code was not released. Nobody could play with it.
So, a bunch of people decided that they want to try to reproduce it, and I got very interested, and I was like, “I want to try, too. This is one of the coolest AI applications. I want to learn how that works and I want to try to do it myself.” So, when I saw that, I immediately tweeted, “OK, I’m going to build that.”
I didn’t do anything for six months.
What finally changed?
In July of last year, HuggingFace and Google … organized a community event, like a competition to develop AI models.
You could choose whatever subject you wanted, and in exchange, you would have access to their computers, which are much better than what people typically have at home. And you would have access to support from HuggingFace engineers and Google engineers. I thought it was a great opportunity to learn and to play with it.
I proposed the project: DALL-E Mini. Let’s try to reproduce DALL-E — or, not necessarily reproduce, but try to get the same results, even if we build it a bit differently. Let’s see how it works, and learn, and experiment on that.
What was that first version like?
It was not what we have now. Now, the [current] model is much, much more powerful. But it was already impressive.
When it started, after one or two days, you would put “view of the beach by night,” and you would have something kind of dark. “View of the beach during the day” — you would have something clear. You couldn’t necessarily recognize the beach yet, but we were like, “Oh, my God, it’s actually learning something.”
After days of training the model, it was actually able to do landscapes quite nicely, which was very impressive. We put “snowy mountain,” and it worked. That was really exciting. Yeah, actually, we were even surprised that it worked!
But, you know, we did a lot of things very fast [during the competition], and there was still so much to optimize.
Only many months later did it become popular. What do you think caused it to explode in popularity?
I was surprised how it became very popular. But I think it’s because, as we made the model public, some people realized it could do things that were, for example, funny images and memes and things like that. They realized that certain famous personalities were actually recognizable, even though they’re not necessarily drawn perfectly. You can recognize them and put them in funny situations, and the model is able to do that.
It reached a moment where it suddenly was able to compose more complex prompts, and also able to recognize more people. I think that turned it viral.
What did you think of the funny pictures?
It was something I didn’t expect. All along the way, when I was developing the model, my test prompts were very basic. My most creative prompt was “the Eiffel Tower on the moon.” Maybe I wouldn’t have noticed that it could do such creative things without the use of the broader audience, I would say.
People have been using the model in all kinds of situations. ... Sometimes, I’m surprised by what it can draw. Recently, for example, people have been using “octopus assembling Ikea furniture.” Or, like, “a store being robbed by teddy bears, view from CCTV camera.” It’s crazy that it works at all.
Does Craiyon still have a place in a world with DALL-E 2?
I think there are a lot of advantages.
One of the first is it democratizes, in a way, access to AI technology. The application of creating images, I think, is a really cool application, whether you do it for work, because it’s useful for you, or even just for entertainment. Having people just having fun, creating funny memes — I think it has a big value.
Giving access to everybody versus only the people who can afford [DALL-E 2] or the select group of users who have access, I think it lets people benefit equally from the same technology. Having it free is something that’s very important to us as well.
Also, one of the issues you have when few people can access a big model is there’s a higher danger for deepfakes, et cetera, because only a few people are able to create it and control it.
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
09-11-2022
These futuristic "energy weapons" could finally bring sci-fi to the battlefield
These futuristic "energy weapons" could finally bring sci-fi to the battlefield
Sarah Scoles
Weapons usually get their power from the explosion of one object near other objects, one object hitting another object (hard), or both. But some devices don’t need to shoot bullets or blow up: They blast out photons — mysterious, massless particle waves of electromagnetic energy.
Photons come in plenty of varieties: They can be X-rays or gamma rays or UV rays or optical light waves or infrared radiation or microwaves or radio waves. And some photon “ammo,” particularly microwaves and lasers, can act like electromagnetic bullets, damaging or disabling the high-tech targets in their sights — whether those be drones, satellites, small ships, or, hypothetically, Roombas.
Tools that shoot this unusual ammo are called directed energy weapons. And their various forms can, at least in theory, jam electronics, blind sensors, fry circuits, sear holes, and generally trigger non-kinetic chaos.
The U.S. military has long been interested in harnessing those destructive capabilities, to varying degrees of success. Today, the Air Force leads the charge, and the Directed Energy Directorate at Albuquerque’s Air Force Research Lab (AFRL) spends its time, in part, developing weapons that use beams of photons to punch things they don’t like.
In its quest to create these destructors, the AFRL has joined forces with local researchers at the University of New Mexico to create the Directed Energy Center. There, students and professors conduct Air Force-relevant research and feed the pipeline of scientists who can work on the aforementioned drone-disabling and satellite-pew-pewing. These teams could also benefit the scholarly and commercial worlds in fields ranging from medicine to mining.
Through their hard work, these students could make elusive DE weapons a more viable military option: Despite more than five decades of research, these gizmos haven’t seen as much progress as some labs had hoped. Now, they might finally be coming into their own.
It’s not hard to see why scientists keep trying. Laser weapons could shoot down enemy drones, rockets, and mortars, or “dazzle” satellites — a flighty way of saying “make them confused and unable to see straight.” And microwave weapons could mess with electronics and communications over a larger area, making them ideal for disabling swarms of drones.
That latter threat is of particular concern to the Air Force these days. Drones can provide enemies with low-cost surveillance, or serve as a weapon system capable of great harm at long ranges. “As they become more proficient and technically mature, it’s important that there’s a safe way to protect the air bases,” says AFRL’s Adrian Lucero. DE weapons are on their way to accomplishing that — and amping up the energy off of the battlefield, too.
Failure to launch
Directed energy weapons didn’t always feel so close to fruition. The federal government has looked into DE since the 1960s, but there hasn’t historically been that much to show for it.
While the Department of Defense has recently made progress on photonic weapons, in the past it has invested billions in directed energy programs that stalled and were ultimately axed, as noted in a September report by the Congressional Research Service.
You may be familiar with one of the most infamous DE boondoggles: Ronald Reagan’s Strategic Defense Initiative, which the Clinton administration shuttered in 1993. Known mockingly on the street as Star Wars, the program aimed to create, among other infrastructure, DE weapons that could shoot down missiles … from space.
Yeah, you’re not the only one who finds it unrealistic. In 1987, several years into the program, an American Physical Society study group concluded that such DE programs were decades from being operationally viable.
Nevertheless, the government poured millions of dollars into SDI. Much of that work consisted of basic research conducted at universities. In fact, for some physics and engineering researchers, the Star Wars checkbook offered “one of the few available sources for new funds,” noted a 1988 United Nations University publication.
But many scientists wanted nothing to do with the program or its bucks, in part decrying the military secrecy around some of the work. Some 6,500 researchers signed a pledge promising not to work on Star Wars, calling it “ill-conceived and dangerous.”
Edl Schamiloglu, head of the collaborative Directed Energy Center at the University of New Mexico, was doing his Ph.D. research at the time. Back then, he and his colleagues aimed to harness energy from atomic fusion using “pulsed-power technology.”
Here’s how it worked: Devices like capacitors accumulate a bunch of low-power electrical energy over time and then discharge it all at once in a rapid burst to coax atoms to combine. In 1987, though, Reagan canceled the program that funded Schamiloglu’s research.
Schamiloglu needed to pivot, and he had already heard of DE through his pulsed-power work. He previously used pulsed power to make protons; to work on DE, he just needed to apply the same sort of instrumentation to produce electrons, whose energy could be converted to microwaves. “The technology is the same,” Schamiloglu says.
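As a back-of-the-envelope illustration of why pulsed power is so attractive: storing a modest amount of energy and releasing it in a very short burst yields an enormous peak power. The capacitance, voltage, and pulse length below are assumed round numbers, not figures from Schamiloglu's systems:

    # Peak power from dumping stored capacitor energy in a short pulse.
    # All values are assumed, for illustration only.
    capacitance = 1e-6      # farads  (1 microfarad capacitor bank)
    voltage = 100e3         # volts   (100 kV charging voltage)
    pulse_length = 100e-9   # seconds (100 ns discharge)

    stored_energy = 0.5 * capacitance * voltage**2   # E = 1/2 * C * V^2 -> 5,000 J
    peak_power = stored_energy / pulse_length        # ~5e10 W, i.e. tens of gigawatts

    print(f"stored energy: {stored_energy:.0f} J, peak power: {peak_power:.1e} W")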
Later, with equipment donated from the Sandia and Los Alamos national laboratories, Schamiloglu put his own microwave factory together. Then he took that information to the AFRL director, who provided Schamiloglu with some seed funding.
He’s been working on DE ever since at UNM — one of the few universities with this electromagnetic specialization. But this field is picking up in part because associated weapons technology has recently moved in a more mature direction.
Five years later, Air Force microwave and laser weapons took down some drones in New Mexico’s White Sands Missile Range. And this past spring, a Navy laser shooter knocked out a fake cruise missile, in that same desert, where scientists also tested the first nuclear weapon.
Do I look to be in a gaming mood?
AFRL is now developing a weapon called THOR: the Tactical High-Power Operational Responder. THOR uses high-power microwaves to mess up electronics, a concept you essentially understand if you’ve ever (for some reason) tried to nuke your cellphone.
After THOR — which lives inside a 20-foot shipping container and can hitch rides around the world on C-130 aircraft — sets a target, an operator pulls the trigger and releases a burst of microwaves that last merely a nanosecond. Its ideal “enemy”: a swarm of small drones.
Last year, a test revealed that THOR’s microwaves could indeed knock things out of the sky. It worked very well, “neutralizing” objects 100 percent of the time.
Now, AFRL wants to amp up DE research for the next generation of scientists.
That goal also appealed to Schamiloglu at the University of New Mexico. He wanted the school to take a closer look at laser DE, since it had long focused on microwaves.
After the Air Force and UNM teamed up, legislators designated money in the AFRL budget to back UNM’s Directed Energy Center, which aims to train future pew-pew gurus. “They will work not only at the Air Force Research Lab, but at the numerous contractors that support the research that’s ongoing,” says Matthew Fetrow, technology outreach lead at AFRL.
It’s not all light
These scientists have plenty to improve on: While DE weapons are faring better than they have in the past, they’re not perfect. They can be stymied by natural forces like rain and fog — the water in the air can mess with their beams, kind of like it does with your headlights. These systems can also be big and cumbersome. Sometimes, they’re super power-hungry.
Outside of all that tech trouble, the weapons raise some ethical concerns. International law doesn’t deal much with DE, and regulations may be important to help ensure it’s used responsibly and humanely. There is a UN document, “Article 1 of the Protocol on Blinding Lasers,” which states that no one can use “laser weapons specifically designed, as their sole combat function or as one of their combat functions, to cause permanent blindness to unenhanced vision.”
Research into DE technology isn’t just useful on the battlefield. For example, industrial giant Honeywell has a whole division dedicated to directed energy’s commercial applications.
These span everything from fusion energy to laser welding and cutting. The company is also interested in the same cooling systems that keep DE weapons chill: Those can ice down batteries and radars anywhere.
On the academic side, particle accelerators also need highly focused, extremely energetic beams of particles, which can improve with advances in beams of pure energy. At Purdue University, a researcher named Allen Garner invented a microwave device in 2021 that has equal utility for disrupting enemy electronics, sterilizing medical equipment, and performing noninvasive medical procedures (snip-pew snip-pew).
Then, there are the less obvious applications. “We’ve actually been seeing some interesting concepts come forward from companies — in particular, small companies — looking at using microwaves, high-power sources, to help in mining,” says Fetrow of AFRL, “which surprised the daylights out of me.”
Right now, AFRL and UNM’s joint focus is on increasing the power you can get out of both microwave and laser systems. With microwaves, that involves building better amplifiers, which are essentially volume knobs. As for lasers, they’re trying to improve the fiber-optic cables that whip up the light beams. “The holy grail right now is to really push the power, how much power can you generate from these fiber lasers,” says Schamiloglu.
But researchers are in a bind: As power increases, so does heat, and the glass in the system gets too warm. UNM has been working on novel ways to cool those fibers, so the laser can pump out even more power.
AFRL is also working on the next generation of THOR technology that’s meant to be lighter and more energy-efficient. It goes by the name Mjölnir, THOR’s mighty hammer — “THOR’s Massless Hammer” apparently wasn’t catchy enough.
It may take a while before such a hammer can be hurled on the battlefield, but in the coming decades, the battlefield could start to resemble a sci-fi flick.
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
THESE TINY MAGNETIC ROBOTS CAN INFILTRATE TUMORS — AND MAYBE DESTROY CANCER
THESE TINY MAGNETIC ROBOTS CAN INFILTRATE TUMORS — AND MAYBE DESTROY CANCER
Bacterial cancer treatments are coming back into fashion (with some futuristic upgrades).
DOCTORS AREN’T ALWAYS able to remove hard-to-reach cancerous tumors with surgery, so some patients must receive aggressive chemotherapy and/or radiation therapy — a combination that can prove ineffective.
But a new cancer treatment may offer a way to take down inoperable tumors with pinpoint accuracy, no radiation required.
Researchers have figured out how to deliver cancer-killing compounds (called enterotoxins) to tumors using bionic bacteria that are steered by a magnetic field. These “micro-robots” can hunt down and converge on a specific tumor, then shrink it by releasing the bacteria's own naturally produced anti-cancer chemicals. The results were recently published in the journal Science.
This high-tech cancer treatment could allow magnetic bacteria (grey) to squeeze through narrow spaces between cells and attack tumors.
Yimo Yan / ETH Zurich
“Cancer is such a complex disease, it’s hard to combat it with one weapon,” says Simone Schürle-Finke, a micro-roboticist at the Swiss Federal Institute of Technology in Zürich, Switzerland and the first author of the new study.
She and her lab hope that these magnetic, bacteria-riding little robots will offer a precise and powerful addition to the cancer treatment toolbox.
HERE’S THE BACKGROUND
The idea of curing cancerous tumors with bacteria is surprisingly old. American oncologist William Coley first started injecting his patients with a mixture of dead bacteria and bacterial proteins in the 1890s. After he reported successfully treating people with otherwise inoperable tumors, his work garnered equal parts enthusiasm and skepticism from the medical community.
Despite Coley’s vocal critics (including members of the American Medical Association), his formula, dubbed “Coley’s toxins,” would go on to be sold as a cancer treatment for the next seventy years. By the 1960s, though, Coley’s toxins had all but fallen by the wayside in favor of promising new treatments, like radiation and chemotherapy.
William Coley used bacteria like Streptococcus pyogenes to treat cancer.
Shutterstock
Significant interest in bacteria as a cancer treatment didn’t re-emerge until the dawn of CRISPR, a revolutionary bioengineering technology, in the early 2010s. And now, labs are realizing the limits of standard cancer interventions, such as their imprecise nature and harmful side effects.
Today, researchers like Schürle-Finke and her team are putting micro-robots inside genetically engineered bacteria to target cancerous growths like never before. Once these microbes reach a tumor, “you basically have a little nano-factory that continues to release molecules that can be toxic to cancer cells,” she says. The only issue? Figuring out how to get the bacteria bots in place.
WHAT’S NEW
Many inoperable tumors can’t be addressed by surgery simply because of their location — they may be too hard to reach with a knife, let alone inject with a syringe full of bacterial cyborgs. This means that researchers have had to brainstorm some creative ways to navigate therapeutic bacteria toward cancer cells.
Schürle-Finke was pondering this conundrum when inspiration struck. “Maybe I could help with magnetic guidance,” she recalled thinking. Most bacteria can’t be pushed around with magnets, but as luck would have it, one special group of aquatic bacteria can: magnetotactic bacteria, which use the tiny iron crystals produced in their bodies like an internal compass.
Scientists were able to direct the bacteria with a magnetic field.
Boris SV/Moment/Getty Images
So she took the next logical step — ordering some magnetotactic bacteria online. “I was surprised,” Schürle-Finke says, “You can just buy them.”
Back in the lab, her team got to work equipping the bacteria with fluorescent tags and microcontrollers. In these genetically engineered bacteria, the microcontrollers propel them to release cancer-fighting compounds on demand.
Then, they injected the bacteria bots into tumor-ridden mice. Using an externally generated magnetic field, scientists were able to successfully direct the bacteria and park them on the mice’s tumors with more than three times the precision of the control group (which wasn’t subjected to a magnetic field).
WHAT’S NEXT
Though this study offers a solid proof-of-concept, micro-robotic bacteria technology still needs to be refined before it becomes a mainstream cancer treatment.
For one thing, “these bacteria that we tested, they’re quite foreign to the human body,” Schürle-Finke says, and they don’t naturally produce cancer-fighting compounds.
In the future, bioengineers may try to identify the cluster of genes responsible for producing magnetotactic bacteria's magnetic iron pellets and transfer it to a more familiar model organism, like a harmless strain of E. coli, Salmonella, or Clostridium.
They’ll also have to address the physical limits to generating a magnetic field. While the field they generated was able to penetrate a tiny mouse’s tissue, it may weaken and become useless as it passes through a thicker and more complex human body.
Still, Schürle-Finke is excited about the possibility that bacterial therapy holds. And she’s ready to continue bridging the gap across scientific disciplines, from oncology to microbiology to robotics. “I think it’s beautiful that we’re experiencing this convergence of sciences,” she says.
Categorie: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F en NL)
FIRST-EVER LAB-GROWN BLOOD COULD CHANGE MEDICINE FOREVER
FIRST-EVER LAB-GROWN BLOOD COULD CHANGE MEDICINE FOREVER
For the first time ever, scientists have given patients red blood cells that were grown in a lab. This feat is part of a clinical trial in England looking into the safety of the cutting-edge technique, which could help tackle the ongoing blood supply shortage that was worsened by the pandemic.
The trial is a collaboration between institutions including the University of Bristol, the University of Cambridge, and the National Health Service.
Regular transfusions can be life-saving for people with conditions like sickle cell disease, which affects the shape of red blood cells and can block blood flow, and thalassemia, which causes the body to produce too little of a protein called hemoglobin.
Now, these lab-grown red blood cells could stretch sparse donations into larger volumes. The procedure could also help address the need for more blood from Black donors — sickle cell disease is prevalent among Black people, and blood is most compatible when donated from people of the same race or similar ethnicity.
And unlike donor blood, which can contain relatively old cells, these lab-grown cells are guaranteed to be fresh. This means they can last longer and perform better, reducing the need for frequent transfusions. When people receive lots of transfusions, they also run the risk of developing too much iron in their bodies.
HOW TO GROW BLOOD CELLS
A white blood cell surrounded by red blood cells, which scientists have figured out how to grow in the lab.
Ed Reschke/Photodisc/Getty Images
The scientists started with a regular blood donation and used magnetic beads to pinpoint the flexible stem cells that can morph into red blood cells, CNBC reported.
Then, they put the stem cells in a nutrient solution for 18 to 21 days, which nudges the cells to proliferate and grow into more mature cells, according to The Guardian. Next, they tagged the cells with a radioactive substance to track them in blood samples from trial participants over the six months following the first injection of cells.
So far, two healthy volunteers have received the lab-grown red blood cells, and they haven’t reported any negative side effects. Next up, the team will give a minimum of 10 participants two “mini” transfusions at least four months apart — one consisting of standard donated red blood cells and another composed of lab-grown ones.
The researchers will analyze patient blood samples to determine whether the lab-grown red blood cells will last longer than the ones made in the body. While further research is needed, this marks a major step forward in treating blood disorders.
“The need for normal blood donations to provide the vast majority of blood will remain,” says Farrukh Shah, the medical director of transfusion at NHS Blood and Transplant. “But the potential for this work to benefit hard-to-transfuse patients is very significant.”