The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her own articles. As blogmaster I reserve the right to refuse an addition or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, even if they are at times critical or skeptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. The site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit its website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only the former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up to Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then don't hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
03-01-2026
Robots with Feelings: New Robotic Skin Reproduces the Human Experiences of Touch and Pain
(Image Credit: Xinge Yu, City University of Hong Kong)
Chinese researchers have developed a robotic e-skin that brings robots one step closer to humans by mimicking our ability to touch, and even sense “pain” when encountering potentially dangerous surfaces.
As companies like Tesla push robots toward a fully human level of capability, recreating the sense of touch is essential not only for understanding the environment but also for navigating it safely. The team behind the robotics advancement revealed their work in a recent paper published in the Proceedings of the National Academy of Sciences.
The Importance of Pain
While pain may be among the least desirable human experiences, it plays an essential role in self-preservation. The spinal cord acts as a relay system to the brain, sending reflexive messages to our muscles in response to pain stimuli. For example, if we touch something hot, we withdraw our hand without thinking, thereby preventing a more severe burn. Alternatively, if we step on a sharp object, we lift our foot to avoid a deep wound. The signals involved in these actions are rapid, with the brain becoming aware of what has occurred only after the movement has begun.
Saving those precious seconds of processing time as the brain decodes sensory data into understanding (which results in a conscious response in humans) can make an enormous difference between receiving a minor abrasion and sustaining a serious injury. However, robots typically lack a swift, automatic system for processing external stimuli. Instead, sensors collect data, which is sent to a central processing unit (CPU).
Electronic robotic skin (representational image)
The CPU compares the data against its program and generates an appropriate response, which is then transmitted over the robot’s data network to an actuator, which decodes the response and executes the CPU’s selected movement. While this may occur at an impressive speed, even a slight delay in action due to processing time can cause greater damage to the robot.
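The round trip described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the sensor reading, the pressure limit, and the function names are all invented for the example.

```python
import time

def read_sensor():
    # Simulated contact reading from one skin sensor (hypothetical units).
    return {"pressure_kpa": 310.0}

def cpu_decide(sample):
    # The CPU compares the reading against its programmed limit.
    return "retract" if sample["pressure_kpa"] > 250.0 else "continue"

def actuate(command):
    return f"actuator executes: {command}"

start = time.monotonic()
command = cpu_decide(read_sensor())    # every reading makes this round trip
result = actuate(command)
round_trip = time.monotonic() - start  # the processing delay the skin tries to cut
```

Every contact event, however urgent, travels the full sensor-to-CPU-to-actuator path before anything moves, which is exactly the latency a reflex-like system avoids.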
Challenging Environments for Robots
Automation, until now, has primarily been confined to highly controlled environments, specifically designed to safely accommodate robotic machinery, such as factory floors and laboratories.
Presently, advances in both mechanical robotics and artificial intelligence are seeking to change this. Companies such as Tesla, with its humanoid Optimus robot, are attempting to integrate robots into everyday environments to perform a variety of human tasks. Unfortunately, homes, hospitals, and workplaces are designed for humans, who can navigate with considerably more intuitive ease than pre-programmed machines.
To enable robots to match humans’ instinctive environmental responses as they move into our imperfect and sometimes hazardous world, Chinese scientists have developed a robotic e-skin (NRE-skin) that provides robots not only with a “sense” of touch, but also the ability to “feel” pain.
Previous attempts to provide robots with sensor skins have been much simpler, wrapping the robot in a sensor system that sends signals to a CPU for processing and response. By contrast, the NRE-skin processes the information obtained when a robot comes into contact with an object and identifies potentially dangerous contact (i.e., pain) within the skin itself, thereby reducing the time required for sending and receiving information.
Modular, neuromorphic electronic skin capable of active pain and injury perception in robotic applications.
Credit: Xinge Yu, City University of Hong Kong
Robotic NRE-Skin
The Chinese researchers developed their NRE-skin as a four-layer system. Like our own epidermis, the top layer features a protective coating that shields the delicate underlying components from the environment. Beneath that layer, the skin performs its functions, with layers of sensors and circuits designed to mimic human nerves. Even when nothing is touching the robot, the skin sends an "all clear" null-result signal every 75-150 seconds, informing the CPU that the system is still operating correctly. If the skin is cut or significantly damaged, the absence of that signal alerts the robot that damage has occurred in the area.
Most importantly, the skin registers touch with signals called “spikes.” These spikes occur in two forms, depending on the severity of the situation. Regular touch sends a spike to the CPU, which processes the data to understand the environment. When the skin detects an extreme event, it instead sends a spike directly to the robot’s actuators to produce an automatic response, thereby removing it from potential harm.
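The two-tier routing can be sketched as follows. This is a minimal illustration of the idea, not the paper's circuitry; the threshold value and the normalized intensity scale are assumptions made for the example.

```python
# Routine touch spikes go to the CPU for perception; extreme spikes
# trigger the actuator directly, like a spinal reflex.
PAIN_THRESHOLD = 0.8  # hypothetical normalized intensity cutoff

def route_spike(intensity, cpu_queue, actuator_log):
    if intensity >= PAIN_THRESHOLD:
        # "Pain" path: bypass the CPU entirely.
        actuator_log.append("reflex: withdraw")
    else:
        # Normal path: queue the reading for later interpretation.
        cpu_queue.append(intensity)

cpu_queue, actuator_log = [], []
route_spike(0.3, cpu_queue, actuator_log)   # gentle contact
route_spike(0.95, cpu_queue, actuator_log)  # "painful" contact
```

The point of the design is that the dangerous case never waits in the CPU queue at all.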
The team designed the skin not only to warn of real-world dangers but also to accept that harm will eventually occur in an uncontrolled environment. The skin is produced in swappable magnetic patches. While it cannot “heal” in the sense that a living creature does, it can quickly be mended by changing a patch without having to repair the entire skin covering.
Currently, the primary issue is that multiple points of contact can confuse the system. To overcome this, the researchers' next step will be to enhance the skin's sensitivity and enable it to distinguish between the many sensations experienced while moving through a range of environments.
Ryan Whalen covers science and technology for The Debrief. He holds an MA in History and a Master of Library and Information Science with a certificate in Data Science. He can be contacted at ryan@thedebrief.org, or you can follow him on Twitter @mdntwvlf.
30-12-2025
Meta AI Can Now Read Your Mind: A Deep Dive into Brain-to-Text Technology
Meta's AI can now decode brain activity into text with 80% accuracy, bringing us closer to mind-powered communication. Dive into the future of brain-computer interfaces—read the full story now!
Imagine a world where your thoughts can be transcribed into text without lifting a finger. Meta is turning this futuristic vision into reality. In collaboration with the Basque Center on Cognition, Brain, and Language, Meta's AI research team has developed a groundbreaking system capable of decoding brain activity into text with remarkable accuracy.
The Science Behind the Magic
This innovative approach utilises non-invasive techniques, specifically magnetoencephalography (MEG) and electroencephalography (EEG), to measure the brain's magnetic and electrical activity. In a study involving 35 participants, researchers recorded brain signals as individuals typed sentences. These recordings trained an AI model to predict text based solely on brain activity. The results were astounding: the system achieved up to 80% accuracy in decoding characters from MEG data, significantly outperforming previous EEG-based methods.
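The training setup described above can be illustrated with a toy decoder. Everything here is a stand-in: the two-dimensional "MEG windows," the nearest-centroid classifier, and the function names are invented for the sketch, and Meta's actual architecture is far more complex.

```python
from collections import defaultdict

def train(windows, chars):
    # Average the signal windows recorded for each typed character.
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for window, char in zip(windows, chars):
        counts[char] += 1
        sums[char] = [s + x for s, x in zip(sums[char], window)]
    return {c: [s / counts[c] for s in sums[c]] for c in sums}

def predict(model, window):
    # Decode by picking the character whose centroid is closest.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda c: dist(model[c], window))

# Synthetic "recordings": each character has a characteristic pattern.
windows = [[1.0, 0.1], [0.9, 0.2], [0.1, 1.0], [0.2, 0.9]]
chars   = ["a", "a", "b", "b"]
model = train(windows, chars)
```

The 80% figure in the study is per-character accuracy of a much richer model on real MEG data; the sketch only shows the shape of the problem: paired (signal, character) examples in, a signal-to-character predictor out.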
A Leap Forward in Non-Invasive Brain-Computer Interfaces
Traditional brain-computer interfaces often require surgical implants, posing risks and limiting accessibility. Meta's approach, however, is entirely non-invasive. By employing MEG and EEG, the system captures brain activity without the need for implants, making the technology safer and more accessible. This advancement holds promise for individuals with speech impairments or paralysis, offering a potential pathway to regain communication abilities.
Challenges on the Horizon
While the progress is impressive, several hurdles remain:
Equipment Limitations: MEG technology requires large, expensive machinery—approximately $2 million per device—and necessitates a magnetically shielded room. This setup is currently impractical for everyday use.
Sensitivity to Movement: Participants must remain still during MEG recordings, as even slight movements can disrupt signal accuracy. This constraint poses challenges for real-world applications.
Individual Variability: The AI model requires personalised training, as brain activity patterns differ among individuals. Developing a universal model applicable to everyone remains a complex task.
The Road Ahead: From Lab to Life
Transitioning this technology from the laboratory to everyday life involves addressing these challenges. Researchers are exploring ways to miniaturise MEG equipment and enhance its portability. Advancements in AI could lead to models that generalise across users, reducing the need for individualised training. Moreover, ethical considerations, particularly concerning mental privacy and data security, must be prioritised as the technology progresses.
A Glimpse into the Future
Meta's brain-to-text system represents a significant stride in human-computer interaction. Envision a future where composing messages or controlling devices is as effortless as thinking. While practical implementation may still be years away, the foundation laid by this research brings us closer to a world where our minds can seamlessly interface with technology.
In the words of Meta's AI research team, "Our efforts are not towards products but towards understanding the computational principles that allow the brain to acquire language."
As we continue to unravel the mysteries of the mind, the possibilities for innovation are boundless.
Introduction: Pre-Crime and Mind-Reading Technology
- References the 2002 movie "Minority Report" (set in 2054), in which pre-crime police units arrest future criminals
- Breakthrough: AI translating brain scans into text at the tissue level
- May 2023: the University of Texas at Austin creates a "semantic decoder" converting brain activity into text
- Meta develops similar technology, plus real-time visual brain-wave analysis
- Potential applications: helping speech-impaired individuals, weighed against privacy concerns
UT Austin Semantic Decoder Research
- Lead researchers: Jerry Tang (doctoral student, computer science) and Alexander Huth (assistant professor, neuroscience and computer science)
- Publication: the journal Nature Neuroscience
- Device name: non-invasive language decoder
- Capabilities: reconstructs continuous language from perceived speech, imagined speech, and silent videos
- Advantage over invasive methods: lower risk than Neuralink-type implants, though the data is less clear
Technical Implementation
- Method: functional magnetic resonance imaging (fMRI), a non-invasive brain recording technique
- Spatial specificity: excellent; pinpoints neural activity with great accuracy
A hilarious video has revealed the moment a man was kicked in the groin by a humanoid robot that was mimicking his own movements.
The footage was initially shared to BiliBili by user zeonsunlight, but has since gone viral across social media.
It shows a man wearing a motion capture suit – an outfit with sensors that record body movements and convert them into digital motion data.
Unfortunately for him, this data is fed straight to a Unitree G1 robot, which replicates his movements almost immediately.
So, when the man goes for a high kick, the robot quickly follows suit – aimed directly at his groin.
Following the kick, the man doubles over in pain, which the robot obediently also mimics.
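The mirroring pipeline the video demonstrates can be sketched as a retargeting step. The joint names, limits, and mapping below are hypothetical illustrations; the actual Unitree teleoperation stack is not described in the article.

```python
# Map human mocap joints onto robot joints and clamp to the robot's
# mechanical limits. Note that nothing here checks where a mirrored
# kick is aimed, which is why the video ends the way it does.
HUMAN_TO_ROBOT = {"right_hip_pitch": "r_hip_pitch", "right_knee": "r_knee"}
ROBOT_LIMITS = {"r_hip_pitch": (-30.0, 120.0), "r_knee": (0.0, 135.0)}

def capture_pose():
    # The mocap suit's sensors report joint angles in degrees (simulated).
    return {"right_hip_pitch": 85.0, "right_knee": 10.0}

def retarget(human_pose):
    command = {}
    for human_joint, angle in human_pose.items():
        robot_joint = HUMAN_TO_ROBOT[human_joint]
        lo, hi = ROBOT_LIMITS[robot_joint]
        command[robot_joint] = max(lo, min(hi, angle))
    return command

robot_command = retarget(capture_pose())  # the robot mirrors the high kick
```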
The short clip has gained huge attention on Bluesky with one user joking: 'The kick in the n***s is one thing but then mocking his pain is just diabolical...'
Another added: 'Humanity kicking itself in the junk with technology is the perfect metaphor the moment.'
The clip was posted to Bluesky by journalist James Vincent, who captioned it: 'another robot highlight for 2025: man wearing humanoid mocap suit kicks himself in the balls.'
Hundreds of delighted viewers flocked to his replies to discuss the footage.
'The greatest AI metaphor of all time doesn't exi—,' one user joked.
Another wrote: 'I've been laughing for ten minutes at this. My belly is cramping up.'
And one quipped: 'How many humans in history can be said to have kicked themselves in the balls? Truly revolutionary.
'Mankind's dream for millennia has finally been fulfilled. Our destiny has been reached.'
The Unitree G1 robot weighs 35 kilograms (77 lbs), stands at 1.32 metres tall (4.33 ft) and boasts 23 degrees of freedom in its joints, which gives it more mobility than an average human.
Behind its blank face, the robot is hiding an advanced perception system which includes a 3D LiDAR sensor and a depth-sensing camera.
Although this makes it one of the most advanced commercially available humanoid robots, it needs to be specifically programmed to carry out any given task.
Straight out of the box, like it is in this video, the Unitree G1 is capable of little more than walking around and waving.
So it's somewhat unsurprising that the robot ended up in this hilarious situation.
This also isn't the first time that Unitree's humanoid robots have gone viral for their bizarre behaviour.
In a viral video posted last month which amassed over 6.3 million views, a humanoid robot attempted to make a stir-fry for its owner, with disastrous results.
YouTuber Cody Detwiller, who goes by the name WhistlinDiesel, put his lunch in the unsteady hands of a Unitree G1 robot.
The $80,000 (£60,940) bot promptly lost control of the pan, threw the food on the floor, and slipped up in the mess.
After clattering about like a drunken ice skater, the robot eventually collapsed to the floor in a crumpled heap.
On social media, tech fans flooded the comments with their reactions, with one calling it 'peak comedy'.
In 2025, many new thresholds in this complex area of study were crossed, with empirical inquiry into our questions about the nature of consciousness occurring within fields such as neuroscience, psychology, and medicine. Many advancements in this area over the last year have also challenged long-held assumptions about where and how consciousness originates, how widespread it may be, and how profoundly altered states can reshape human perception.
Here’s a look at just a few of the major stories involving consciousness, the mind, awareness, and the science behind it all that The Debrief has covered in 2025.
Did ‘Universal Consciousness’ Exist Before the Big Bang?
Among the year’s most provocative work about consciousness, one controversial peer-reviewed paper published in AIP Advances proposed that “universal consciousness” may have existed before the Big Bang, functioning not as a byproduct of matter but as a foundational feature of reality itself.
(Image Credit: Pixabay)
Such claims are nothing new and remain hotly debated by researchers, although they reflect a growing willingness among scientists to explore questions about consciousness, whether it is purely emergent or could play a deeper role in shaping the universe. The result has been a reignition of discussions long relegated to philosophy, now increasingly framed through modern cosmology and theoretical physics.
Consciousness May Be Far More Widespread Than Previously Believed
Closer to Earth, neuroscientists and cognitive researchers have increasingly argued that consciousness may be far older and more widespread than traditionally believed. Studies examining simple organisms, brain networks, and evolutionary pathways this year, undertaken by researchers at Ruhr University Bochum, suggested that rudimentary forms of awareness could predate complex nervous systems throughout the animal kingdom.
Rather than being the apex of the human evolutionary process, the researchers argue in a pair of papers that appeared in Philosophical Transactions of the Royal Society B, consciousness “rather represents a more basic cognitive process, possibly shared with other animal phyla.” This reframing has major implications not only for how scientists define consciousness but also for how humans understand their relationship to other life forms.
Psychedelic Therapy and Related Discoveries
Perhaps the most tangible advances came from renewed interest in altered states of consciousness, particularly through psychedelic research. Multiple studies in 2025 demonstrated that psychedelic compounds can rapidly reorganize brain networks, temporarily dissolving rigid patterns of thought associated with depression, trauma, and addiction.
(Image Credit: Pixabay/CC0 Public Domain)
At the same time, scientists explored non-drug pathways to similar states, such as research into ancient breathwork techniques combined with modern neuroscience that suggests altered states resembling psychedelic experiences could be induced through controlled breathing alone.
Additionally, long-term studies also continued to examine the social and spiritual dimensions of psychedelic experiences. Decades of research now suggest that such states often produce a heightened sense of connection—to other people, to nature, and to perceived transcendent realities. In 2025, experiments involving participants from diverse religious backgrounds highlighted how profoundly personal belief systems shape the interpretation of these experiences, even when the underlying neurochemical mechanisms are shared.
The resulting research revealed functional connections between neurons within the visual areas of the brain and the brain’s frontal areas, which the researchers behind the study say helps them “understand how our perceptions tie to our thoughts” while also reducing the typical emphasis on “the importance of the prefrontal cortex in consciousness, suggesting that while it’s important for reasoning and planning, consciousness itself may be linked with sensory processing and perception.”
University of Virginia Researchers Study Support Gaps for NDE Experiencers
Finally, 2025 saw increased attention to near-death experiences (NDEs) as a legitimate area of study. Researchers at the University of Virginia identified significant gaps in psychological and medical support for people who report NDEs, many of whom struggle to integrate these experiences into their lives. While interpretations of NDEs vary widely, the research emphasized a growing consensus: regardless of cause, such experiences can be deeply transformative—and ignoring their impact may carry real mental health consequences.
Taken together, these stories reveal a year in which consciousness research moved decisively out of the shadows. Whether probing the origins of awareness in the early universe, mapping its neural signatures, or exploring its therapeutic potential, scientists in 2025 treated consciousness not as an unspeakable mystery—but as a frontier worth confronting directly.
Micah Hanks is the Editor-in-Chief and Co-Founder of The Debrief. A longtime reporter on science, defense, and technology with a focus on space and astronomy, he can be reached at micah@thedebrief.org. Follow him on X @MicahHanks, and at micahhanks.com.
In the race to build ever-smarter machines, one philosopher is asking an uncomfortable question: What if we cannot know whether an artificial intelligence is conscious, and what if that uncertainty itself is the real danger?
For decades, debates about “conscious AI” have split into two camps: optimists who think a sophisticated enough machine could one day have experiences like ours, and skeptics who insist consciousness is a strictly biological phenomenon.
In a new paper titled “Agnosticism About Artificial Consciousness,” Tom McClelland, a philosopher at the University of Cambridge, argues that both sides are overconfident. The only honest answer right now, he says, is that we probably won’t know any time soon.
McClelland’s central idea concerns the confusion many people feel when dealing with an LLM. What does it mean to be conscious, and can all those zeroes and ones ever actually achieve it?
Everything scientists currently understand about consciousness comes from studying biological creatures like humans, and to a lesser extent, animals like octopuses and monkeys. When we try to apply those findings to computer systems built from silicon chips instead of neurons, he argues, we hit what he calls an “epistemic wall.” That is, a point at which our knowledge runs out and we can’t go further with the evidence we currently have. We ‘guess,’ rather than ‘know.’
McClelland insists that claims about AI consciousness should follow a principle he calls “evidentialism.” So, if you say an AI is or isn’t conscious, your claim should be grounded in solid scientific evidence, not vibes, sci‑fi stories, or metaphysical faith. And that, he says, is exactly where current discussion fails.
In humans, the science of consciousness relies on messy but workable tools such as brain scans, behavioural experiments, and models like Global Workspace Theory, which link specific kinds of information processing with awareness rather than unconscious processing. Those tools allow reasonably confident judgments, say, about whether a patient in a coma shows signs of awareness or whether an octopus is likely to feel pain.
But none of these tools explains the “why” at the heart of the so‑called hard problem of consciousness.
“We do not have a deep explanation of consciousness,” McClelland explains in the paper. “There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological.”
Because we don’t understand the nuts and bolts behind consciousness, McClelland argues that confident ‘yes-or-no’ answers about future conscious-like AI systems are not scientifically responsible. In other words, we remain stuck between “this thing is genuinely conscious” and “this thing is a perfect non-conscious mimic,” with no evidence capable of settling the matter.
At first glance, this might sound like a technical quarrel among philosophers in their ivory towers, but McClelland’s agnosticism has direct implications for the rest of us, because laws, policies, and social norms are already being written under the assumption that we will soon have tests for machine consciousness.
In the immediate future, large tech companies are already pumping out rhetoric concerning the stages of their AI tools and marketing the next leaps in AI development.
“There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology,” he writes. “It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness.”
In turn, McClelland is concerned that research grants and funding will be diverted to the study of AI consciousness, when in reality those funds could be used more effectively.
“A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI,” he explains.
Beyond the financial interests of tech firms and their investors, there are obvious social, cultural, and even personal implications that we have already seen manifest.
If we wrongly assume that advanced AIs are not conscious when they are, we could be creating and exploiting beings capable of suffering. But if we wrongly assume they are conscious when they are not, we risk pouring care, legal rights, and empathy into systems that do not actually feel anything, potentially at the expense of humans and animals who do. And this is the philosophical rub.
McClelland says that both mistakes become more likely if we pretend to know more than we do. He points out that people are already treating chatbots as if they were conscious companions, with surveys finding that more than a third of people have felt a system “truly understood” their emotions or seemed conscious. AI companies, meanwhile, have strong incentives to play up that impression. Without a clear scientific basis for deciding who, if anyone, is really conscious, public belief and marketing could drift far from reality.
In the paper, McClelland suggests shifting the ethical spotlight from consciousness in general to a narrower and more morally urgent notion: sentience.
In simple terms, sentience is the capacity for experiences that are good or bad for the subject. For humans, it’s our ability to feel pleasure or suffering. Many moral theories already treat sentience as what really matters ethically, whether in humans, animals, or potentially even in digital minds. McClelland argues that even if we remain agnostic about whether an AI is conscious at all, we can still ask a slightly different question: if this system were conscious, what kinds of experiences would it be having?
Instead of trying to build a “consciousness meter” for AI, researchers and regulators could focus on designing systems whose internal states, as far as we can tell, would not naturally correspond to pain, fear, or despair if they were conscious.
This shift opens up a practical path that, if applied, could change how companies and governments talk about and design advanced AI. It would encourage more transparency about architectures, more interdisciplinary work on the science of sentience and emotion, and a cautious approach to systems that imitate human distress or self‑awareness for persuasive effect.
As AI companies continue to push ever farther and faster in their race to stay ahead and generate revenue, the question of whether the things they are building are “alive” becomes increasingly important. Equally, as AI systems grow more capable and more lifelike, the primary risk is not just whether they become conscious, but whether our beliefs about their minds, right or wrong, reshape how we treat each other, structure our laws, and allocate our moral concern.
By avoiding leaps of faith and remaining skeptical, McClelland argues, the race towards future AI could be slowed down, thereby allowing for better regulation and transparency.
“If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism,” McClelland writes.
“We cannot, and may never, know.”
MJ Banias covers space, security, and technology with The Debrief. You can email him at mj@thedebrief.org or follow him on Twitter @mjbanias.
AI has now cracked several rather difficult problems in math. How close is it to supplanting the world's best mathematicians?
(Image credit: Adrián A. Astorgano for Future)
In October 2024, news broke that Facebook parent company Meta had cracked an "impossible" problem that had stymied mathematicians for a century.
In this case, the solvers weren't human.
An artificial intelligence (AI) model developed by Meta determined whether solutions of the equations governing certain dynamically changing systems — like the swing of a pendulum or the oscillation of a spring — would remain stable, and thus predictable forever.
The key to the problem was finding Lyapunov functions, which determine the long-term stability of these systems.
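For readers unfamiliar with the term, a Lyapunov function is the standard certificate of stability. For a system $\dot{x} = f(x)$ with an equilibrium at the origin, a function $V$ works as such a certificate when:

```latex
V(0) = 0, \qquad V(x) > 0 \;\; \text{for } x \neq 0, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) \le 0.
```

If the last inequality holds strictly, trajectories not only stay near the equilibrium but converge to it. For a damped pendulum or spring, the total mechanical energy is the classic choice of $V$; the hard part, and what Meta's model searched for, is finding any valid $V$ for an arbitrary system, since no general recipe exists.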
Meta's work made headlines and raised a possibility once considered pure fantasy: that AI could soon outperform the world's best mathematicians by cracking math's marquee "unsolvable" problems en masse.
After looking under the hood, however, mathematicians were less impressed. The AI found Lyapunov functions for 10.1% of randomly generated problems posed to it. This was a substantial improvement over the 2.1% solved by previous algorithms, but it was by no means a quantum leap forward. And the model needed lots of hand-holding by humans to come up with the right solutions.
A similar scenario played out earlier this year, when Google announced its AI research lab DeepMind had discovered new solutions to the Navier-Stokes equations of fluid dynamics. The solutions were impressive, but AI was still some distance from solving the more general problem associated with the equations, which would garner its solvers the $1 million Millennium Prize.
Beyond the hype, just how close is AI to replacing the world's best mathematicians? To find out, Live Science asked some of those mathematicians themselves.
While some experts were dubious about AI's problem-solving abilities in the short term, most noted that the technology is developing frighteningly fast. And some speculated that not so far into the future, AI may be able to solve hard conjectures — unproven mathematical hypotheses — at a massive scale, invent new fields of study, and tackle problems we never even considered.
"I think what's going to happen very soon — actually, in the next few years — is that AIs become capable enough that they can sweep through the literature at the scale of thousands — well, maybe hundreds, tens of thousands of conjectures," UCLA mathematician Terence Tao, who won the Fields Medal (one of mathematics' most prestigious prizes) for his deep contributions to an extraordinary range of different mathematical problems, told Live Science. "And so we will see what will initially seem quite impressive, with thousands of conjectures suddenly being solved. And a few of them may actually be quite high-profile ones."
From games to abstract reasoning
To understand where we are in the field of AI-driven mathematics, it helps to look at how AI progressed in related fields. Math requires abstract thinking and complex multistep reasoning. Tech companies made early inroads into such thinking by looking at complex, multistep logical games.
In the 1980s, IBM algorithms began making progress in games like chess. It's been decades since IBM's Deep Blue beat the then-reigning world chess champion, Garry Kasparov, and about a decade since Alphabet's DeepMind defeated the world's top Go player at the time, Lee Sedol. Now AI systems are so good at such games that these competitions have become pointless: AI can beat us every time.
But pure math is different from chess and Go in a fundamental way: Whereas the two board games are very large but ultimately constrained (or, as mathematicians would say, "finite") problems, there are no limits to the range, depth and variety of problems mathematics can reveal.
In many ways, AI math-solving models are where chess-playing algorithms were a few decades ago. "They're doing things that humans know how to do already," said Kevin Buzzard, a mathematician at Imperial College London.
World Chess Champion Garry Kasparov competing against the IBM Deep Blue algorithm. (Image credit: STAN HONDA via Getty Images)
"The chess computers got good, and then they got better and then they got better," Buzzard told Live Science. "But then, at some point, they beat the best human. Deep Blue beat Garry Kasparov. And at that moment, you can kind of say, 'OK, now something interesting has happened.'"
That breakthrough hasn't happened yet for math, Buzzard argued.
"In mathematics we still haven't had that moment when the computer says, 'Oh, here's a proof of a theorem that no human can prove,'" Buzzard said.
Mathematical genius?
Yet many mathematicians are excited and impressed by AI's mathematical prowess. Ken Ono, a mathematician at the University of Virginia, attended this year's FrontierMath meeting organized by OpenAI. Ono and around 30 of the world's other leading mathematicians were charged with developing problems for o4-mini — a reasoning large language model from OpenAI — and evaluating its solutions.
After witnessing the heavily human-trained chatbot in action, Ono said, "I've never seen that kind of reasoning before in models. That's what a scientist does. That's frightening." He argued that he wasn't alone in his high praise of the AI, adding that he has "colleagues who literally said these models are approaching mathematical genius."
To Buzzard, these claims seem far-fetched. "The bottom line is, have any of these systems ever told us something interesting that we didn't know already?" Buzzard asked. "And the answer is no."
Rather, Buzzard argues, AI's math ability seems solidly in the realm of the ordinary, if mathematically talented, human. This summer and last, several tech companies' specially trained AI models attempted to answer the questions from the International Mathematical Olympiad (IMO), the most prestigious tournament for high school "mathletes" around the world. In 2024, DeepMind's AlphaProof and AlphaGeometry 2 systems combined to solve four of the six problems, scoring a total of 28 points — the equivalent of an IMO silver medal. But the AI first required humans to translate the problems into a special computer language before it could begin work. It then took several days of computing time to solve the problems — well outside the 4.5-hour time limit imposed on human participants.
This year's tournament witnessed a significant leap forward. Google's Gemini Deep Think solved five of the six problems well within the time limit, scoring a total of 35 points. This is the sort of performance that, in a human, would have been worthy of a gold medal — a feat achieved by less than 10% of the world's best math students.
The 2011 International Mathematical Olympiad in Amsterdam (Image credit: VALERIE KUYPERS via Getty Images)
Research-level problems
Although the most recent IMO results are impressive, it's debatable whether matching the performance of the top high school math students qualifies as "genius-level."
Another challenge in determining AI's mathematical prowess is that many of the companies developing these algorithms don't always show their work.
"AI companies are sort of shut. When it comes to results, they tend to write the blog post, try and go viral and they never write the paper anymore," Buzzard, whose own research lies at the interface of math and AI, told Live Science.
However, there's no doubt that AI can be useful in research-level mathematics.
In December 2021, University of Oxford mathematician Marc Lackenby's research with DeepMind was on the cover of the journal Nature.
Lackenby's research is in topology, sometimes described as geometry (the maths of shapes) done with play dough. Topology asks which properties of objects (like knots, linked rings, pretzels or doughnuts) survive twisting, stretching or bending. (The classic math joke is that a topologist can't tell a doughnut from a coffee cup, because each has exactly one hole.)
Lackenby and his colleagues used AI to generate conjectures connecting two different areas of topology, which he and his colleagues then went on to try to prove. The experience was enlightening.
It turned out that the conjecture was wrong and that an extra quantity was needed in the conjecture to make it right, Lackenby told Live Science.
Yet the AI had already seen that, and the team "had just ignored it as a bit of noise," Lackenby said.
Can we trust AI at the frontier of math?
Lackenby's mistake had been not to trust the AI enough. But his experience speaks to one of the current limitations of AI in the realm of research mathematics: that its outputs still need human interpretation and can't always be trusted.
"One of the problems with AI is that it doesn't tell you what that connection is," Lackenby said. "So we have to spend quite a long time and use various methods to get a little bit under the hood."
Ultimately, AI isn't designed to get the "right" answer; it's trained to find the most probable one, Neil Saunders, a mathematician who studies geometric representation theory at City St George's, University of London, and author of the forthcoming book "AI (r)Evolution" (Chapman and Hall, 2026), told Live Science.
"That most probable answer doesn't necessarily mean it's the right answer," Saunders said.
AI's unreliability means it wouldn't be wise to rely on it to prove theorems in which every step of the proof must be correct, rather than just reasonable.
"You wouldn't want to use it in writing a proof, for the same reason you wouldn't want ChatGPT writing your life insurance contract," Saunders said.
Despite these potential limitations, Lackenby sees AI's promise in mathematical hypothesis generation. "So many different areas of mathematics are connected to each other, but spotting new connections is really of interest and this process is a good way of seeing new connections that you couldn't see before," he said.
The future of mathematics?
Lackenby's work demonstrates that AI can be helpful in suggesting conjectures that mathematicians can then go on to prove. And despite Saunders' reservations, Tao thinks AI could be useful in proving existing conjectures.
The most immediate payoff might not be in tackling the hardest problems but in picking off the lowest-hanging fruit, Tao said.
The highest-profile math problems, which "dozens of mathematicians have already spent a long time working on — they're probably not amenable to any of the standard counterexamples or proof techniques," Tao said. "But there will be a lot that are."
Tao believes AI might transform the nature of what it means to be a mathematician.
"In 20 or 30 years, a typical paper that you would see today might indeed be something that you could automatically do by sending it to an AI," he said. "Instead of studying one problem at a time for months, which is the norm, we're going to be studying 10,000 problems a year … and do things that you just can't dream of doing today."
Rather than AI posing an existential threat to mathematicians, however, he thinks mathematicians will evolve to work with AI.
"We've had situations in the past where entire fields of mathematics became basically solvable by computer," Tao said. At one point, we even had a human profession called a "computer," he added. That job has disappeared, but humans just moved on to harder problems. "It didn't mean mathematics died," Tao said.
Andrew Granville, a professor of number theory at the University of Montreal, is more circumspect about the future of the field. "My feeling is that it's very unclear where we're going," Granville told Live Science. "What is clear is that things are not going to be the same. What that means in the long term for us depends on our adaptability to new circumstances."
Lackenby similarly doesn't think human mathematicians are headed for extinction.
While the precise degree to which AI will infiltrate the subject remains uncertain, he's convinced that the future of mathematics is intertwined with the rise of AI.
"I think we live in interesting times," Lackenby said. "I think it's clear that AI will have an increasing role in mathematics."
How much energy and water does artificial intelligence consume? Tech companies would rather keep that information to themselves. Alex de Vries-Gao, a researcher at VU Amsterdam, made an attempt to put numbers on it, and the result is sobering.
Since ChatGPT appeared on the scene three years ago, demand for AI applications has grown explosively, and with it the power consumption of the data centers that keep these systems running. In 2024, AI still accounted for around 15 percent of data centers' total electricity consumption. By the end of 2025, that could rise to nearly half, De Vries-Gao says in his study.
To grasp the scale: the power AI systems will need by the end of 2025 would come to 23 gigawatts. That is comparable to peak electricity demand in the Netherlands, which currently sits at around 18 GW and, according to grid operator TenneT, is expected to grow to between 21.4 and 26.1 GW by 2030. But what does that mean for the climate? And how much water does it take to cool all those servers?
De Vries-Gao tried to answer those questions. That proved harder than expected, according to the researcher, because tech companies' sustainability reports make no distinction between AI and non-AI activities. Worse still, some major players, such as ByteDance (TikTok's parent company) and CoreWeave, publish no environmental reports at all.
The great silence
De Vries-Gao examined the sustainability reports of the large tech companies that do publish them, including Amazon, Apple, Google, Meta and Microsoft. Not one of them reports environmental figures for its AI activities separately, although they do acknowledge that AI is a major driver of their growing energy consumption.
Lacking AI-specific data, the researcher had to fall back on the overall performance of data centers. By combining the estimated power demand of AI systems with the average CO2 intensity of electricity generation for data centers, he arrived at an estimate of 32.6 to 79.7 million tonnes of CO2 emissions in 2025. For comparison: the Netherlands emits around 151 million tonnes of CO2 equivalents per year.
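To see the mechanics of such an estimate, here is a back-of-the-envelope sketch. The 23 GW figure comes from the study; the round-the-clock utilisation and the two carbon intensities (0.2 and 0.4 kg CO2 per kWh) are illustrative assumptions of mine, not De Vries-Gao's actual inputs, yet they land in the same ballpark as his 32.6 to 79.7 Mt range:

```python
# Back-of-the-envelope CO2 estimate for AI data centers in 2025.
POWER_GW = 23.0          # estimated AI power demand by end of 2025 (study figure)
HOURS_PER_YEAR = 8760    # assumption: the hardware runs around the clock

annual_twh = POWER_GW * HOURS_PER_YEAR / 1000.0   # 23 GW * 8760 h ~ 201 TWh

def emissions_mt(twh, kg_per_kwh):
    """Electricity (TWh) times carbon intensity (kg CO2/kWh) -> megatonnes CO2."""
    return twh * 1e9 * kg_per_kwh / 1e9  # TWh -> kWh, kg -> Mt

low = emissions_mt(annual_twh, 0.2)   # illustrative cleaner-grid intensity
high = emissions_mt(annual_twh, 0.4)  # illustrative dirtier-grid intensity
print(f"{annual_twh:.0f} TWh/yr -> {low:.0f} to {high:.0f} Mt CO2")
```

The exercise shows why the published range is so wide: a factor-of-two difference in assumed grid intensity translates directly into a factor-of-two difference in emissions.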
Water use is far from clear
For water, the picture is even murkier. Data centers use water in two ways: directly, to cool servers, and indirectly, through electricity generation. According to IEA figures cited by De Vries-Gao, data centers used around 560 billion liters of water in 2023. But the researcher calculates that the IEA probably underestimates indirect water use considerably.
Meta is the only company in the study that reports indirect water use, and its figures are nearly four times higher than what the IEA assumes. Google did publish a report on the environmental impact of its AI model Gemini, but chose not to report indirect water use.
As much as all the world's bottled water
Based on the available data, De Vries-Gao estimates that AI systems could use between 312.5 and 764.6 billion liters of water in 2025. Global annual consumption of bottled water is around 446 billion liters, so AI could approach, or even exceed, what all of humanity drinks in bottled water each year.
The uncertainty around these figures remains large, however. The carbon and water intensity of power grids varies strongly from place to place: the water intensity of electricity generation for US data centers, for example, can range from 0.68 to 11.98 liters per kilowatt-hour depending on location.
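That spread alone dwarfs every other uncertainty. A quick sketch (reusing the same 23 GW running year-round, an assumption of mine rather than the study's exact inputs) shows how much the reported 0.68 to 11.98 L/kWh range matters for indirect water use:

```python
# Indirect water use of electricity generation, illustrative calculation.
POWER_GW = 23.0                        # estimated AI power demand (study figure)
annual_kwh = POWER_GW * 1e6 * 8760     # GW -> kW, times hours in a year

for liters_per_kwh in (0.68, 11.98):   # reported US range for data centers
    billions = annual_kwh * liters_per_kwh / 1e9
    print(f"{liters_per_kwh:5.2f} L/kWh -> {billions:,.0f} billion liters per year")
```

The same electricity demand implies roughly 137 billion liters at the low end and about 2,400 billion at the high end, an 18-fold difference driven purely by where the data centers sit.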
Transparency urgently needed
The figures may not be entirely clear, but De Vries-Gao is. He argues for new regulation that obliges tech companies to disclose more data. Without transparency, he says, it is impossible to establish AI's true environmental impact, let alone to take effective countermeasures.
Artificial intelligence (AI) is already helping to solve problems in finance, research and medicine.
But could it be reaching consciousness?
Dr Tom McClelland, a philosopher from the University of Cambridge, has warned that current evidence is 'far too limited' to rule this dystopian possibility out.
According to the expert, the only sensible position on the question of whether AI is conscious is one of 'agnosticism'.
The main problem, he claims, is that we don't have a 'deep explanation' of what makes something conscious in the first place, so can't test for it in AI.
'The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test,' Dr McClelland explained.
'If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism.
'We cannot, and may never, know.'
But as tech companies work towards ever more capable systems, some also claim that increasingly sophisticated AI may develop consciousness.
This means AI could develop the capacity for perception and become self-aware.
While this idea might evoke visions of killer robots, Dr McClelland argues that AI could make this jump without us even realising, because we don't really have an agreed-upon theory of consciousness to begin with.
Some theories say consciousness is a matter of processing information in the right way, and that AI could be conscious if only it could run the 'software' of a conscious mind.
Others argue it is inherently biological, meaning AI can only imitate consciousness at best.
Until we can figure out which side of the argument is right, we simply don't have any basis on which to test for consciousness in AI.
In a paper published in the journal Mind and Language, Dr McClelland claims both sides of the debate are taking a 'leap of faith'.
We can't tell whether an AI, like in the sci-fi film Ex Machina (pictured), really has conscious experience or whether it is just simulating consciousness
Whether something is conscious radically changes the kinds of ethical questions we need to consider.
For example, humans are expected to behave morally towards other people and animals, because consciousness gives them 'moral status'.
In contrast, we don't have these same values towards inanimate objects, like toasters or computers.
'It makes no sense to be concerned for a toaster's well–being because the toaster doesn't experience anything,' Dr McClelland explains.
'So when I yell at my computer, I really don't need to feel guilty about it. But if we end up with AI that's conscious, then that could all change.'
While that might make dealing with AI an ethical nightmare, the bigger risk may be that we start to consider AIs as conscious or sentient when they are not.
Dr McClelland explained: 'If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic.'
Worryingly, the philosopher says that members of the public are already sending him letters written by chatbots 'pleading with me that they're conscious'.
He added: 'We don't want to risk mistreating artificial beings that are conscious, but nor do we want to dedicate our resources to protecting the "rights" of something no more conscious than a toaster.'
Brain and memory preservation has been explored at length by futurists, scientists and science fiction junkies alike.
Many say it falls under the category of 'transhumanism.'
Transhumanism is the belief that the human body can evolve beyond its current form with the help of scientists and technology.
The practice of mind uploading has been promoted by many people, including Ray Kurzweil, Google's director of engineering, who believes we will be able to upload our entire brains to computers by 2045.
Similar technologies have been depicted in science fiction dramas, ranging from Netflix's Altered Carbon, to the popular series Black Mirror.
Another prominent futurist, Dr Michio Kaku, believes virtual reality can be used to keep our loved ones' personalities and memories alive even after they die.
Scientists and futurists have different theories about how we might be able to preserve the human brain, ranging from uploading our memories to a computer to Nectome's high-tech embalming process, which can keep it intact for thousands of years
'Imagine being able to speak to your loved one after they die ... it is possible if their personality has been downloaded onto a computer as an avatar,' he explained.
These ideas have not gone without criticism, however.
McGill University neuroscientist Michael Hendricks told MIT Technology Review that these technologies are a 'joke.'
'I hope future people are appalled that in the 21st century, the richest and most comfortable people in history spent their money and resources trying to live forever on the backs of their descendants. I mean, it’s a joke, right? They are cartoon bad guys,' he said.
Meanwhile, neuroscientist Miguel Nicolelis said recently that such technologies would be virtually impossible.
'The brain is not computable and no engineering can reproduce it,' he said.
'You can have all the computer chips in the world and you won't create a consciousness.'
If you've ever dreamed of soaring over traffic on your daily commute, your dreams could soon be a reality – as the 'world's first' flying car enters production.
The Alef Model A Ultralight uses eight propellers hidden in the boot and bonnet to take off at any time.
After more than a decade of development, the US-based Alef Aeronautics has finally announced that the first customers will soon get their flying cars.
The futuristic vehicles will be hand-assembled in the company's facility in Silicon Valley, California.
However, Alef Aeronautics says that each car will take 'several months' of craftsmanship before it is safe to send out to customers.
The first handmade cars will only be delivered to a few customers, who will test out the experimental vehicles in real-world conditions.
The company says this slow rollout will allow it to work out any potential issues before the flying car enters mass production.
The 'world's first' flying car (pictured) has finally entered production, as Alef Aeronautics announces that its first all-electric vehicle will be hand-assembled in the US
Alef Aeronautics' futuristic vehicle can be driven around like a normal car on the streets or take off and fly using eight propellers hidden in its carbon-fibre mesh body
Jim Dukhovny, CEO of Alef Aeronautics, says: 'We are happy to report that production of the first flying car has started on schedule.
'The team worked hard to meet the timeline, because we know people are waiting. We're finally able to get production off the ground.'
The Model A is both a road-legal vehicle and an aircraft capable of taking off without wings via eVTOL (electric vertical take-off and landing).
On the ground, the Model A drives just like a normal electric car, thanks to four small engines in each of the wheels.
But the driver's seat is also surrounded by powerful propellers that provide enough thrust for flight at a cruising speed of 110 miles per hour (177 km/h).
The carbon-fibre mesh body – measuring around five metres by two metres – allows air to pass through the car while keeping the spinning blades safely covered.
The company says that the car will have enough room for the pilot and one passenger, and have a range of 200 miles (321 km) on the ground and 110 miles (177 km) in the air.
Mr Dukhovny claims the car, which is aimed at the general public, is relatively simple to use and would take just 15 minutes to learn.
The entire car weighs just 385 kg (850 lbs), so that it can be classified as an ultralight 'low speed vehicle' – a legal classification for small electric vehicles like golf carts.
That means the car will be capped at 25 miles per hour (40 km/h) on public roads despite being able to drive faster.
Having received airworthiness certification from the Federal Aviation Administration (FAA) in 2023, Alef Aeronautics is now edging closer to making the Model A a reality – over a decade after the company was founded.
The company reports that it has received 3,500 pre–orders, collectively worth more than £800 million.
However, don't expect to see Jetsons-style flying cars filling the air near you just yet.
Alef Aeronautics says that the first customers will only be allowed to test their flying cars under 'very controlled conditions'.
The company adds that each customer will need to receive training in compliance and maintenance before flying.
Likewise, creating each car involves robotic, industrial, and hand manufacturing, with rigorous testing of individual parts and a large number of test flights.
Mr Dukhovny has previously said he wanted to bring sci-fi to life and build an 'affordable' flying car, with the cost likely to be closer to £25,000 when built at scale.
Eventually, Alef Aeronautics says, the production process of the full-size Model A will be automated but, for now, only a limited number can be produced.
Advances in electric motors, battery technology and autonomous software have triggered an explosion in the field of electric air taxis.
Larry Page, CEO of Google parent company Alphabet, has poured millions into aviation start-ups Zee Aero and Kitty Hawk, which are both striving to create all-electric flying cabs.
Kitty Hawk is believed to be developing a flying car and has already filed more than a dozen different aircraft registrations with the Federal Aviation Administration, or FAA.
Page, who co-founded Google with Sergey Brin back in 1998, has personally invested $100 million (£70 million) into the two companies, which have yet to publicly acknowledge or demonstrate their technology.
Airbus is also hard at work on an all-electric, vertical-take-off-and-landing craft, with its latest Project Vahana prototype, branded Alpha One, successfully completing its maiden test flight in February 2018.
The self-piloted helicopter reached a height of 16 feet (five metres) before successfully returning to the ground. In total, the test flight lasted 53 seconds.
Airbus previously shared a well-produced concept video, showcasing its vision for Project Vahana.
The footage reveals a sleek self-flying aircraft that seats one passenger under a canopy that retracts in similar way to a motorcycle helmet visor.
AirSpaceX is another company with ambitions to take commuters to the skies.
The Detroit-based start-up has promised to deploy 2,500 aircraft in the 50 largest cities in the United States by 2026.
AirSpaceX unveiled its latest prototype, Mobi-One, at the North American International Auto Show in early 2018.
Like its closest rivals, the electric aircraft is designed to carry two to four passengers and is capable of vertical take-off and landing.
AirSpaceX has even included broadband connectivity for high speed internet access so you can check your Facebook News Feed as you fly to work.
Aside from passenger and cargo services, AirSpaceX says the craft can also be used for medical and casualty evacuation, as well as tactical Intelligence, Surveillance, and Reconnaissance (ISR).
Even Uber is working on making its ride-hailing service airborne.
Dubbed Uber Elevate, the project was tentatively discussed by Uber CEO Dara Khosrowshahi during a technology conference in January 2018.
‘I think it’s going to happen within the next 10 years,’ he said.
We’re far from realizing the kind of nanomachines envisioned in media like “The Diamond Age” and Metal Gear Solid, but scientists have just taken a meaningful step towards the next best thing.
A team of researchers from the University of Pennsylvania and University of Michigan say they’ve built a sub-millimeter sized robot packed with a computer, motor, and sensors, the Washington Post reports. It’s not an actual billionth of a meter in size, but being smaller than a grain of salt, it is still outrageously tiny: a microrobot.
The work, described in a new study in the journal Science Robotics, could be a platform for one day building microscopic robots that could be deployed inside the human body to perform all sorts of medical miracles, like repairing tissues or delivering treatment to areas difficult for surgeons to access.
“It’s the first tiny robot to be able to sense, think and act,” coauthor Marc Miskin, assistant professor of electrical and systems engineering at UPenn, told WaPo.
At present, the device is still highly experimental and isn’t suited to be used inside a human body — but “it would not surprise me if in 10 years, we would have real uses for this type of robot,” coauthor David Blaauw from U-M told the newspaper.
Building a microscopic robot that can move, sense its surroundings and make decisions on its own has eluded scientists for decades. According to the team, roboticists have typically relied on controlling microrobots externally so they can operate at smaller scales, sacrificing the machines' ability to process information on board. That prevents the robots from reacting to their environment, leaving them with a limited set of pre-programmed behaviours they can carry out and, as a result, limited real-world usefulness.
Having a robot on the scale of microns, or one millionth of a meter, would give us access to what corresponds to the smallest units of our biology, Miskin told WaPo.
“Every living thing is basically a giant composite of 100-micron robots, and if you think about that it’s quite profound that nature has singled out this one size as being how it wanted to organize life,” he said.
Visually, the researchers’ robot resembles a microchip, and is made of the same kinds of materials, including silicon, platinum, and titanium, WaPo noted. It’s sealed in a layer of what is essentially glass, Miskin said, protecting it from fluids.
The robot uses solar cells to convert energy that powers its onboard computer and its propulsion system, which uses a pair of electrodes to generate a flow in the water particles surrounding it. In a word, the robot swims. Its onboard computer is less than a thousandth of the speed of a modern laptop, per WaPo, but it’s enough to let it respond to changes it detects in its environment like temperature.
“At this scale, the robot’s size and power budget are comparable to many unicellular microorganisms,” the team wrote in the study.
Crucially, the robot is designed to still communicate with its human operators.
“We can send messages down to it telling it what we want it to do,” using a laptop, Miskin told WaPo, “and it can send messages back up to us to tell us what it saw and what it was doing.”
But the next step? Inter-microrobot communication.
“So the next holy grail really is for them to communicate with each other,” Blaauw told WaPo.
Tesla's humanoid robot, Optimus, has taken a suspicious tumble in a new demo.
In a viral clip, the robot suddenly jerks back, reaches up as if to remove something from its head, and tumbles backwards with a crash.
Many have compared the robot's strange movements to the distinctive gesture of someone taking off a virtual reality headset.
On social media, this has sparked a flurry of rumours that the supposedly autonomous robot is really controlled by a human.
The shocking moment was captured by a Reddit user who was filming Optimus handing out bottles of water at the Tesla The Future of Autonomy Visualized event in Miami.
In a post, the user wrote: 'I think Optimus needs an update.'
As the video spread online, others were quick to accuse Tesla of overstating its bot's ability to function on its own.
One commenter wrote: 'Honestly looks like the dude teleworking this bad boy took off his headset.'
Elon Musk's Tesla Optimus robot has taken a suspicious tumble in a viral video, as it appears to make the motion of removing a VR headset
The embarrassing video has delighted online tech fans, with one dubbing it 'my favourite video of all time.'
'At least humans will have jobs,' another commenter joked.
While another chimed in: 'idk looks to me like that robot just killed itself.'
Now, the strange behaviour of this Optimus robot in the moments before its collapse has led many to believe that it was being teleoperated.
One commenter wrote: 'Looks like the operator took off their VR headset.'
'And spilled hot coffee on his lap right before that lol,' another chimed in.
Others made fun of the supposed teleoperator's speedy exit, with one writing: 'Teleoperator logged out early that day.'
Another joked: 'Looks like the operator felt a spider running up his / her leg and panicked.'
Meanwhile, one frustrated commenter added: 'It all feels very Wizard of Oz. Pay no attention to the man behind the curtain – our product is right around the corner!'
These accusations may be especially embarrassing for Tesla since the fall occurred at an event intended to showcase 'Autopilot technology and Optimus'.
However, the robot's supposed teleoperation isn't the only thing that is causing concern.
As the Tesla Optimus collapses, the video shows its hands swinging down with such force that they crush a bottle of water on the table.
This comes after Tesla CEO Elon Musk specifically insisted that Optimus was controlled by AI and was not teleoperated
Tesla has not confirmed whether the robot which collapsed was being controlled by AI or remotely by a human
Some commenters on X were more concerned by the power of Optimus' crash, which easily crushed a nearby water bottle
One Tesla fan branded the bot as 'too dangerous' and questioned whether the robot's swinging hand could 'crack a human skull'
'Interesting how easily Optimus crushed the bottle,' one commenter wrote.
Another added: 'I wonder if Optimus can crack a skull with that punch, too dangerous.'
'The spray on that little karate chop was pretty impressive don't let one of these fall on your dog or something,' a concerned commenter chimed in.
This comes amid a boom in humanoid robots, as investors bet that autonomous labour will replace humans in the future.
Tesla CEO Elon Musk has been a major champion of using robots for labour, and has frequently said that they could be used to replace humans in environments like factories to perform repetitive or dangerous tasks.
To achieve this, he hopes to massively scale up the production of robots and reduce their cost.
Speaking at a tech conference in Saudi Arabia last year, Musk predicted that there could be as many as 10 billion humanoid robots on Earth by 2040.
Elon Musk has been a major champion of using robots to replace human labour in factories. However, this will only be possible if his Optimus robots (pictured) can operate autonomously
Elon Musk wants to push technology to its absolute limit, from space travel to self-driving cars — but he draws the line at artificial intelligence.
The billionaire first shared his distaste for AI in 2014, calling it humanity's 'biggest existential threat' and comparing it to 'summoning the demon'.
At the time, Musk also revealed he was investing in AI companies not to make money but to keep an eye on the technology in case it gets out of hand.
His main fear is that, in the wrong hands, a sufficiently advanced AI could overtake humans and spell the end of mankind, a scenario known as The Singularity.
That concern is shared among many brilliant minds, including the late Stephen Hawking, who told the BBC in 2014: 'The development of full artificial intelligence could spell the end of the human race.
'It would take off on its own and redesign itself at an ever-increasing rate.'
Despite his fear of AI, Musk has invested in the San Francisco-based AI group Vicarious, in DeepMind - which has since been acquired by Google - and in OpenAI, the creator of the popular ChatGPT program that has taken the world by storm in recent months.
During a 2016 interview, Musk said that OpenAI was founded to 'have democratisation of AI technology to make it widely available'.
Musk founded OpenAI with Sam Altman, the company's CEO, but in 2018 the billionaire attempted to take control of the start-up.
His request was rejected, forcing him to quit OpenAI and move on with his other projects.
In November, OpenAI launched ChatGPT, which became an instant success worldwide.
The chatbot uses 'large language model' software to train itself by scouring a massive amount of text data so it can learn to generate eerily human-like text in response to a given prompt.
ChatGPT is used to write research papers, books, news articles, emails and more.
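The core idea behind that 'large language model' training can be illustrated with a deliberately tiny sketch: learn from text which word tends to follow which, then extend a prompt by sampling likely next words. This is a toy word-count model for illustration only; real LLMs like ChatGPT use neural networks trained on billions of documents, not a lookup table.

```python
# Toy next-word predictor: a vastly simplified stand-in for how a
# language model learns to continue text from a prompt.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": count which word tends to follow each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, length: int = 5) -> str:
    """Extend the prompt by repeatedly sampling a likely next word."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no known continuation for this word
        words.append(random.choices(list(candidates),
                                    weights=list(candidates.values()))[0])
    return " ".join(words)

print(generate("the cat"))
```

Scale the table up to a neural network and the corpus up to a sizeable slice of the internet, and you get the 'eerily human-like' continuations the article describes.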
But while Altman is basking in its glory, Musk is attacking ChatGPT.
He says the AI is 'woke' and deviates from OpenAI's original non-profit mission.
'OpenAI was created as an open source (which is why I named it 'Open' AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft,' Musk tweeted in February.
The Singularity is making waves worldwide as artificial intelligence advances in ways only seen in science fiction - but what does it actually mean?
In simple terms, it describes a hypothetical future where technology surpasses human intelligence and changes the path of our evolution.
Experts have said that once AI reaches this point, it will be able to innovate much faster than humans.
There are two ways the advancement could play out, with the first leading to humans and machines working together to create a world better suited for humanity.
For example, humans could scan their consciousness and store it in a computer in which they will live forever.
The second scenario is that AI becomes more powerful than humans, taking control and making humans its slaves - but if this is true, it is far off in the distant future.
Researchers are now looking for signs of AI reaching The Singularity, such as the technology's ability to translate speech with the accuracy of a human and perform tasks faster.
Former Google engineer Ray Kurzweil predicts it will be reached by 2045.
He has made 147 predictions about technology advancements since the early 1990s - and 86 per cent have been correct.
Nils Westerboer’s science fiction novel Athos 2643 is a detective story set in the future, in a monastery on Neptune’s fictional moon Athos. However, the main intrigue revolves not around space or Christianity, but around artificial intelligence. The Ukrainian edition was recently published by Lobster Publishing and can be purchased on the publisher’s official website.
What should artificial intelligence look like?
Contemporary science fiction
Athos 2643 by Nils Westerboer can confidently be called fresh science fiction. The novel was published in German in 2022, and in 2025, Lobster Publishing printed it in Ukrainian.
The events in it take place in the distant future, and the numbers 2643 indicate the year. Humanity has colonized the Solar System, mastered the genetic modification of living organisms, and created truly powerful artificial intelligence, which plays one of the key roles in the novel.
The novel is genuinely interesting, but at first it can be a real challenge for the reader: many things whose names seem perfectly clear at first glance begin to raise doubts after just a few paragraphs.
Nils Westerboer. Source: www.uni-koblenz.de
Main characters
Take, for example, the main character, the inquisitor Ruud Kartgeiser. From a man with that job title you would expect a certain steadfastness of faith, most likely Christian. However, when we first see him at the station orbiting Neptune, he is sitting naked in a hotel room while his AI assistant ties him to a pipe.
Yes, he has a personal AI named Zach, and she is only formally a tool that makes his life easier. In fact, she has her own will, sometimes stronger than Ruud’s, her own judgments, often takes the initiative in conversations with other characters, and in general, the reader perceives the events in the novel mainly through her eyes.
At first, it seems that Ruud Kartgeiser himself is a whiny, dependent, and dull appendage to his artificial intelligence. But then he demonstrates determination, composure, and independence, his story unfolds more fully, and it becomes clear that the whole point is that we are seeing a character who has suffered severe trauma in the past and has recently experienced personal problems. He gives vent to all of this when he is alone with the AI.
That’s how the imagination pictures the inquisitor of the future, and Ruud is nothing like that image. Source: phys.org
As for his profession, the word “inquisition” is translated from Latin as “investigation.” In other words, Ruud is simply an investigator, a typical detective story protagonist acting on behalf of an organization with certain powers. However, he does not deal with matters of religious belief, but rather with problematic artificial intelligence.
Athos 2643
It is precisely Ruud’s deceptive status that immediately catches your attention when you start reading the novel. Its title refers to Mount Athos in Greece, located on the peninsula of the same name. It is home to more than a dozen Orthodox monasteries, and in fact the entire peninsula is a territory where medieval religious rules still apply.
In Westerboer’s novel, this is the name of a small satellite of Neptune, only a few kilometers in diameter, home to a monastery of Orthodox hermits where a very suspicious fatal accident has occurred. Incidentally, the satellite itself is fictional, but it is described quite realistically, and it cannot be ruled out that, once the distant eighth planet is better explored, something similar will be found near it.
But that’s not the important thing. The question that arises from the very first pages is: what is a representative of the Inquisition, usually associated with the Roman Catholic Church, doing in an Orthodox monastery? And why is he accompanied by his assistant, who is constantly present in the form of a hologram of a young, attractive woman, often dressed very lightly?
It seems that Westerboer tried to use the theme of Christianity, but never even understood its basics.
The monks on this Athos rarely discuss God, the soul, or faith the way devout Christians would. And when it turns out that one of them is a woman, another hermit, very much in the spirit of the LGBT movement, asks: “Who decides who is a man and who is a woman?”
Monastery on Mount Athos in Greece. Source: phys.org
Against this backdrop, the presence of a farm producing halal meat at the monastery is not so surprising. In Westerboer’s novel, Neptune’s system is inhabited mainly by Turks. So why should Orthodox hermits not produce food according to the instructions of the Prophet Muhammad?
Compared to Westerboer’s vision of a future world of advanced biotechnology, the appearance of the farm itself is more reminiscent of a depiction of hell.
But the most interesting thing is that everything later receives a very simple and rational explanation. For that reason alone, it is worth reading Athos 2643 to the end. It is, after all, a detective story in which the reader, together with the main characters, not only searches for the possible murderer but also tries to understand what is really going on around them. Westerboer does eventually lay out some key facts about his world in plain language, though only after page 300, and by the time you finish the book you somehow do not want to criticise him for it.
The most important technology
The world of Athos 2643 is shaped by technologies that have not yet been invented. Some of them seem, shall we say, overly bold and dubious. For example, when the author describes engines capable of accelerating a spacecraft to a speed of over a thousand kilometers per second in just a few dozen minutes, the question arises not so much about the physical process itself (which, at least in theory, has already been described), but rather about how the people on board can withstand such overloads.
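A quick back-of-the-envelope calculation shows why the passengers, not the engines, are the problem. The figures here assume 'a few dozen minutes' means roughly 30 minutes, which is my reading of the book's vague phrasing:

```python
# Rough sanity check on the acceleration implied by the novel's drives.
final_speed = 1_000_000.0   # m/s (over a thousand kilometres per second)
burn_time = 30 * 60.0       # s   (assumed: "a few dozen minutes" ~ 30 min)
g = 9.81                    # m/s^2, standard gravity

acceleration = final_speed / burn_time   # ~556 m/s^2
acceleration_in_g = acceleration / g     # ~57 g

print(f"{acceleration:.0f} m/s^2 is about {acceleration_in_g:.0f} g")
```

Trained pilots tolerate only a handful of g sustained, so fifty-plus g for half an hour would be fatal many times over without some unexplained compensation technology.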
There is an even bigger problem with Zach’s hologram emitter. It is a hologram in name only: a true hologram is just a play of light and shadow, whereas this one is dense enough to wear a dress made of very light but real fabric. And the device that creates all this not only avoids frying everything between itself and the hologram, but can also fit in your pocket.
Neptune. Source: phys.org
And then there is the way gravity is created on Athos. The author did well not to forget that on such a small body it is very weak. But using special particles that must be injected under anaesthesia every few days, instead of the magnetic boots familiar from science fiction, is a rather unsuccessful idea, even without the bloody demonstration present in the book.
But the most important technology shaping this future is artificial intelligence, which is now not only on par with human intelligence but surpasses it. AI is not only found in the main character; it is everywhere, controlling all complex processes. So now even a dishwasher can show its personality.
Such AI exists on Athos, and it does indeed cause problems. The novel begins with the story of a woman who, over several years, drove her husband to his death by using her knowledge of his weaknesses against him while pretending to care for him. It is difficult to say whether modern lawyers would consider such actions a crime, but in the world of Athos 2643, they are.
After all, artificial intelligence is very good at studying the reactions of the people it interacts with and is quite good at predicting their behavior. This means that it is quite capable of manipulating people and leading them into situations where their death will appear natural or the result of an accident.
And this problem may indeed be real. We are accustomed to considering our actions to be the result of a mysterious process called our soul. However, in practice, they can be subjected to statistical analysis. Under normal conditions, it would be difficult to take advantage of this due to the large number of random processes and interactions that need to be taken into account. However, on an asteroid riddled with mines, where only six hermits live, it is much easier.
This is precisely what Ruud will have to deal with. Fortunately, he is very good at “talking” to artificial intelligences because he knows their weaknesses and is very attentive to how they play with words. How all this is connected to the murders in the cave monastery can be found out by reading the book to the end. You will discover that artificial intelligence is not even the key technology of this world, and that the characters have been missing the main miracle and mystery from the very beginning. But that’s why he is a detective, to find the answers himself.
Chinese officials are warning that the country’s humanoid robotics industry could be forming a massive bubble.
As Bloomberg reports, strategists from the National Development and Reform Commission (NDRC), which serves as the country’s state-run macroeconomic management agency, say that extreme levels of investment could be drowning out other markets and research initiatives.
It’s a notable shift in tone as the humanoid robot industry continues to attract billions of dollars in investment. Aided by advancements in artificial intelligence opening up new use cases — and plenty of unbridled enthusiasm — for the tech, investors are pouring untold sums into over 150 humanoid robot companies in China alone, according to the NDRC.
Many of those companies are producing robots that are strikingly similar to one another, and the overspending could overwhelm the market. Bike-sharing apps are a cautionary precedent: dozens of them flooded the Chinese market in 2017 and 2018, crowding each other out at the same time. The outcome: streets littered with unused bikes.
“Frontier industries have long grappled with the challenge of balancing the speed of growth against the risk of bubbles — an issue now confronting the humanoid robot sector as well,” NDRC spokeswoman Li Chao told reporters last week, as quoted by Bloomberg.
China has established itself as a clear global leader in the space, with Morgan Stanley predicting the humanoid robot market could surpass a whopping $5 trillion by 2050. Citigroup is even more optimistic, expecting the market to hit $7 trillion by that point.
New offerings by companies like Unitree have made bipedal robots far more affordable and advanced than ever before. Unitree’s G1 robot, in particular, has garnered tons of attention for its flashy abilities to throw punches in the ring or play basketball.
A burgeoning industry of far smaller Chinese competitors has cropped up as well, fueling even more investment — as well as concern from policymakers that the industry could be growing too fast.
Last month, Chinese robotics company UBTECH claimed it had rolled out the “world’s first mass delivery” of industrial humanoid robots. Startup AgiBot’s A2 also set a Guinness World Record for the longest distance ever walked by a humanoid robot, covering over 66 miles with repeated live battery swaps.
Despite plenty of enthusiasm, turning humanoid robots into a viable and affordable product with a clear-cut use case remains a major challenge. Case in point, the current crop of androids still struggles significantly with completing household tasks, particularly without the help of a nearby human teleoperator.
To speed up the process of finding real-world applications, the NDRC is hoping to spread industrial resources across the country, while also accelerating research and development for “core technologies.”
The risks of a bubble are certainly there. Without consolidation, China’s market could soon be flooded with armies of largely identical humanoid robots — which is either a terrifying prospect, considering the possibility of them putting us all out of work, or risks a market crash if it turns out they’re not particularly good at real work.
The tech industry has become obsessed with the idea of humanoid robots, bipedal androids designed to complete tasks on behalf of their flesh-and-blood counterparts.
But as many experts have argued, having robots walk around on two legs and manipulate the world around them with two hands and arms may not always be the most efficient option. After all, plenty of industrial robots use wheels to roll around a warehouse, or feature one large, strong, and multi-pivoting arm instead of relying on several weaker ones.
Besides, the existing crop of humanoid robots is capable of a lot more than walking around and waving their hands.
Look no further than a video shared by robot tinkerer and researcher Logan Olson last month, which shows how a humanoid robot can turn itself into a surprisingly creepy crawling machine while using the full extent of its four limbs’ freedom of movement. The footage shows the robot dropping down to all fours in less than a second, unnervingly bending its arms and legs to crouch down and scuttle across a concrete patio — like a demon straight out of a horror movie.
Agility Robotics AI research scientist Chris Paxton, who recently reshared the video, used the footage as a reminder that a “lot of these robots are ‘faking’ the humanlike motions.”
“It’s a property of how they’re trained, not an inherent property of the hardware,” he wrote. “They’re actually capable of way weirder stuff and way faster motions.”
“Human motion is most efficient for humans; robots are not humans,” he added in a follow-up.
It’s a particularly pertinent topic as companies like Tesla, Figure, and China’s Unitree race to commercialize humanoid robots for the mass market. While companies have made major strides — in a separate tweet, Paxton argued that “running is now basically commoditized” — experts have questioned if it’s really the best form factor for every job.
Case in point, Chris Walti, the former lead of Tesla’s humanoid robot Optimus, told Business Insider earlier this year that humanoid robots simply don’t make much sense on the factory floor.
“It’s not a useful form factor,” he said at the time. “Most of the work that has to be done in industry is highly repetitive tasks where velocity is key.”
The human form “evolved to escape wolves and bears,” he added. “We weren’t designed to do repetitive tasks over and over again.”
While a creepy-crawling robot, as demonstrated in Olson’s video, admittedly may not be the pinnacle of productivity, it serves as a great — albeit nightmare-inducing — reminder that humanoid robots are technically capable of a lot more than masquerading as a human being, while walking around, shaking hands, and giving out popcorn.
At the same time, a humanoid robot distending its joints to crawl along the floor likely won’t endear it to humans, either.
“That is terrifyingly cool,” one user wrote in response to Olson’s video.
Pixar's adorable hopping lamp has been brought to life – and he could soon be lighting up your own desk at home.
Developed by California firm Interaction Labs, Ongo the robotic smart lamp can move, see, hear and even talk.
A promo clip shows the 'ambient desk lamp companion robot' peering curiously at objects and people around him while giving help around the home.
And to allay any privacy concerns, he even comes with a pair of sunglasses blocking his view.
Karim Rkha Chaham, co-founder and CEO of Interaction Labs, said the 'expressive' bot can even remember users and anticipate their needs.
'Think of it as a cat trapped inside the body of a desk lamp,' he said.
On X, commentators called the design 'incredible', 'epic', 'very cool' and an 'amazing-looking piece' of tech.
One said it's 'definitely something I would have at home and not a creepy humanoid robot', while another added it 'might be the cutest robot on the market'.
According to the company, the cute desk lamp 'lights up your desk and your day' and brings 'a familiar magic presence' to your home
Developed by California firm Interaction Labs as a tribute to the Pixar character, Ongo the smart lamp can move, see, hear and even talk
Ongo has movements designed by Alec Sokolow, the Oscar-nominated writer of Pixar film Toy Story as well as Garfield: The Movie and Evan Almighty.
As the promo video shows, Ongo spins on its base and self-adjusts its axis just like the legendary Pixar character.
Depending on the user's needs, he can adjust the levels of light emitted from his 'eyes' and bring them closer, for example when reading a book after nightfall.
Ongo utters cheery greetings, helpful advice and instructions such as 'Hey, don't forget your keys' and 'Maybe try a dash of balsamic' during cooking.
Another adorable scene shows Ongo bopping to the sound of music in the next room when his owners are having a party.
According to the company, the cute desk lamp 'lights up your desk and your day' and brings 'a familiar magic presence' to your home.
'It brings your space to life with movement, personality and emotional intelligence,' Interaction Labs says on its website.
'It remembers what matters, senses how you feel, and supports you through the day with small, thoughtful interactions.
'Ongo senses the rhythm of your day and responds with quiet understanding, reading the subtle shifts in your environment.'
Ongo comes with 'fun' opaque sunglasses that snap on with magnets for when users want total privacy
Karim Rkha Chaham, co-founder and CEO of Interaction Labs, said the 'expressive' bot can even remember users and anticipate their needs
One commentator posted: 'definitely something I would have at home and not a creepy humanoid robot'
Much like smart products packed with cameras, Ongo has an awareness of its surroundings but it processes vision on the device itself and never sends clips out to the cloud for company staff to watch.
When users want total privacy without Ongo peering in, they can put 'fun' opaque sunglasses over his eyes that snap on with magnets.
On X, several users said they found Ongo's voice 'annoying' and 'grating', but Chaham said it can be customised along with his personality.
The co-founder also admitted that the promo clip is computer-generated, but it gives users a good idea of what to expect as prototypes are being worked on – so it's not quite ready yet.
He said: 'Will be posting more and more videos of us interacting with the prototype.'
Ongo is available to pre-order on the company's website, with a fully refundable 'priority access deposit' costing $49/£38.38.
This deposit secures users a unit from the first batch and will be deducted from the product's final price, which Chaham said will be 'about $300' (£225). Those who pay now will get Ongo when shipping begins next summer.
Ongo is an obvious nod to Pixar's original lamp, called Luxo Jr., which has appeared on the production logo of every Pixar film since the first one, Toy Story, back in 1995.
'That's not what a chair looks like': Ongo keeps an eye on his users and offers helpful suggestions and reminders - such as when they're trying to put furniture together
Pixar's lamp, called Luxo Jr., has appeared on the production logo of every Pixar film since the first one, Toy Story, back in 1995
In the short sequence, Luxo Jr. is seen hopping into view and jumping on the capital letter "I" in "PIXAR" to flatten it before turning his head.
Toy Story director John Lasseter created the character in August 1986, modeling it after his own Luxo brand lamp.
Luxo Jr. starred in his own short film of the same name that year, also directed by Lasseter, in which he appears with a larger lamp, 'Luxo Sr'.
It was also in 1986 that the animation studio was purchased by Apple co-founder Steve Jobs, having been owned by George Lucas' Lucasfilm.
After a run of hugely-successful films including Toy Story, A Bug's Life, Monsters Inc and Finding Nemo, Disney acquired Pixar in 2006.
Key milestones in Pixar's history
1979: George Lucas recruits Ed Catmull from the New York Institute of Technology to head Lucasfilm’s Computer Division.
1982-83: The division completes a scene for Star Trek II: The Wrath of Khan showing a lifeless planet being transformed by lush vegetation. It is the first completely computer animated sequence in a feature film.
1986: Apple co-founder Steve Jobs purchases the Computer Division from George Lucas and establishes the group as an independent company, Pixar. At this time about 40 people are employed.
1986: A short film called Luxo Jr. is completed featuring two anthropomorphic desk lamps
1991: Disney and Pixar announce an agreement 'to make and distribute at least one computer-generated animated movie' which will become Toy Story.
1995: Toy Story, the world’s first computer animated feature film, is released on November 22. It opens at #1 that weekend and goes on to become the highest grossing film of the year, making $192 million domestically and $362 million worldwide.
1998: Pixar's second feature-length film A Bug's Life is released in theaters on November 25
2006: The Walt Disney Company announces that it has agreed to purchase Pixar Animation Studios
01-12-2025
When science fiction becomes reality: Scientists reveal what would REALLY happen if the sun started to dim like in Project Hail Mary - with catastrophic results
Scientists have revealed the terrifying answer to this question, which is the subject of the upcoming science fiction blockbuster, Project Hail Mary.
The film, based on a novel of the same name by The Martian author, Andy Weir, follows a lone scientist on a mission to uncover why the sun is dimming.
In the movie, which is set to hit cinemas in March 2026, the sun's brightness is predicted to fall one per cent in a year and five per cent in 20 years.
These numbers might sound small.
But in reality, scientists say that these changes would be more than enough to wipe out humanity.
Professor David Stevenson, a planetary scientist from the California Institute of Technology, told Daily Mail: 'Extinguishing life on Earth would take a long time even if you eliminated solar energy because we know of organisms that live underground.
'But extinguishing humans could happen fast, especially since humans are not rational creatures for the most part.'
In Project Hail Mary, Ryan Gosling (pictured) plays a lone scientist sent on a mission to find out why the sun is dimming. But what would really happen if the sun did start to fade?
What happens when the sun starts to dim?
At a distance of around 93 million miles (150 million kilometres) from Earth, the sun delivers about 1,365 Watts per square metre of energy, a figure scientists call the solar constant.
About 30 per cent of that energy is reflected back into space, while the remainder is absorbed, warming the Earth's atmosphere and surface.
Currently, our planet is holding on to more energy than it loses – but it wouldn't take much to tip the balance.
If the sun's brightness were to drop or if something prevented our atmosphere from absorbing the energy, then Earth could start to rapidly cool.
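The sensitivity can be sketched with a standard zero-dimensional energy-balance model, in which absorbed sunlight equals the heat Earth radiates away (Stefan-Boltzmann law). This simplification ignores the greenhouse effect and ocean heat storage, so its absolute temperatures sit well below the real surface average, but the change per per cent of dimming is indicative:

```python
# Zero-dimensional radiative equilibrium: absorbed sunlight balances
# thermal radiation. Greenhouse warming is ignored, so absolute values
# undershoot the real surface temperature.
SOLAR_CONSTANT = 1365.0   # W/m^2 at Earth's distance
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp(dim_fraction: float) -> float:
    """Equilibrium temperature (K) with the sun at dim_fraction of
    its normal brightness; the /4 spreads intercepted sunlight over
    the whole spherical surface."""
    absorbed = dim_fraction * SOLAR_CONSTANT * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

t_now = equilibrium_temp(1.00)   # ~255 K baseline
t_dim = equilibrium_temp(0.99)   # one per cent dimming
print(f"Cooling from 1% dimming: {t_now - t_dim:.2f} K")  # ~0.64 K
```

Even this crude model yields roughly 0.6°C of equilibrium cooling from a one per cent dimming, which is the same order as the crop-failure threshold the article cites.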
Professor Lucie Green, an expert on the sun from University College London, told Daily Mail: 'The Sun does naturally vary in brightness, but not by very much!
'The technical term is total solar irradiance. This is slightly variable, with the variability being a result of changes during the Sun’s 11–year sunspot cycle.'
These fluctuations are barely noticeable on Earth, but there have been much more dramatic shifts in the past.
The sun's output does naturally dip on an 11–year cycle that coincides with the number of sunspots appearing on the surface. However, these changes aren't enough to cool Earth dramatically
What would happen if the sun started to dim
If the sun started to dim, the total energy Earth receives would fall.
Eventually, Earth would start to lose more energy to the vacuum of space than it was gaining from the sun.
After this point, Earth would begin to rapidly cool.
Around 0.6°C (1.1°F) of cooling would start to cause crops to fail in Europe due to a lack of warm weather.
By the time temperatures fell by 2°C (3.6°F), widespread famine could kill billions of people.
When global temperatures fall 6°C (10.8°F) lower, the Earth would enter a new Ice Age, and glaciers would cover most of the Northern Hemisphere.
By the time the sun was completely gone, temperatures would fall to –73°C (–100°F) and all life on Earth would go extinct.
Between 1645 and 1715, the sun went through a 70–year quiet period known as the Maunder Minimum.
Although the sun was delivering only 0.22 per cent less energy, some researchers think this change was partly responsible for the deadly chill of that period.
If Project Hail Mary's predictions came true and the sun's radiation continued to fall by one per cent, the results would soon become catastrophic.
As Earth would be losing more energy into space than it gained from the sun, global temperatures would soon fall several degrees below average.
Worryingly, Earth's history shows that even relatively small changes in the planet's average temperature can have a massively outsized impact.
During the Little Ice Age, less than a degree Celsius of cooling led to mass famine throughout Northern Europe.
Cold winters and cool summers led to crop failures, while the sea became so cold that Norse colonies in Greenland were cut off by the ice and collapsed through starvation.
In Project Hail Mary, the teacher turned astronaut Ryland Grace, played by Ryan Gosling (pictured), learns that the sun will cool by one per cent in a year
According to a recent study, global cooling of just 1.8°C (3.25°F) would cause production of maize, wheat, soybeans and rice to fall by as much as 11 per cent.
However, if Project Hail Mary came true and the sun cooled by one to five per cent in 20 years, the effects on the climate would be even more devastating.
In Project Hail Mary, the teacher turned astronaut Ryland Grace, played by Ryan Gosling in the upcoming film, remarks: 'That would mean an ice age. Like... right away. Instant ice age.'
That might sound dramatic, but scientists agree that it might not take much cooling for ice to reclaim the world.
According to a recent study from the University of Arizona, the average temperature during the last Ice Age, 20,000 years ago, was just 6°C (10.8°F) colder than today.
During this time, glaciers covered about half of North America, Europe and South America and many parts of Asia.
Dr Becky Smethurst, astrophysicist at the University of Oxford, told Daily Mail: 'A drop in energy of one per cent from the Sun would trigger a new Ice Age on Earth, with the polar ice caps expanding further towards the equator.
Just like the 2004 movie 'The Day After Tomorrow' (pictured), these major changes to the Earth's climate would eventually culminate in a new Ice Age that could wipe out life on Earth
According to a recent study from the University of Arizona, the average temperature during the last Ice Age, 20,000 years ago, was just 6°C (10.8°F) colder than today. This means it might not take much cooling for icy conditions to return
'Many ecosystems would collapse as the weather changed, farming would fail, and there would be severe food shortages. As a species, humans would likely survive this change thanks to modern technology, although we'd most likely be living underground.'
What would happen if the sun completely cooled?
Although humanity might be able to survive a global ice age, the situation would be very different if the sun completely vanished.
Within a week, the Earth's surface would fall below –18°C (0°F) and within a year it would dip below –73°C (–100°F).
Eventually, after cooling for millions of years, the planet would stabilise at a frigid –240°C (–400°F).
However, humanity would be long gone well before the planet ever got to that point.
Some humans might be able to cling on in the deepest parts of the ocean, using hydrothermal vents for warmth.
But once the oceans freeze over, there would be very little hope for anyone to survive.
In the original novel of Project Hail Mary, written by The Martian author Andy Weir, scientists make the terrifying prediction that the sun's brightness will fall one per cent in a year and five per cent in 20 years. If this were true, then humanity would very likely be destroyed
Dr Alexander James, a solar scientist from University College London, told Daily Mail: 'From a fundamental viewpoint, if the Sun completely faded, there would be no more light, meaning all our green plant life would be unable to photosynthesise.
'That means plants would not be producing oxygen, which, of course, we need to live. Temperatures would also plummet, so I don’t see how the majority of life as we know it would be able to survive without our Sun.'
Could this ever really happen?
Thankfully, scientists say there's no way that the sun could cool as fast as in Project Hail Mary.
Although the sun's activity does fluctuate, even in the most extreme events and quiet periods, the effects are not dramatic.
For example, many scientists have questioned how much the Maunder Minimum really contributed to the Little Ice Age during the 17th Century.
While most experts agree that a decline in solar activity did contribute to the cooling, other factors, such as volcanic activity, likely played a bigger role.
Additionally, most of the sun's natural variations are on a much smaller scale.
Luckily for us, the sun is so large that it cannot physically cool as quickly as Project Hail Mary suggests. Experts say the sun would only cool by one per cent in a million years if the core completely stopped producing energy
The amount of energy arriving from the sun usually only drops by 0.1 per cent during the solar cycle.
While large sunspots, cool regions on the solar surface, might cause a temporary dip as low as 0.25 per cent below average, this is nowhere near the five per cent change of Project Hail Mary.
In fact, many scientists believe that the sun cannot physically cool this fast.
Professor Michael Lockwood, a space environment physicist from the University of Reading, told Daily Mail: 'About half of the Sun's mass is in the radiative and convection zones outside the core – that is about a thousand billion billion billion kilograms.'
This enormous mass acts like a heat sink, storing colossal amounts of energy that would take billions of years to dissipate.
Professor Lockwood says: 'Roughly speaking, if the core ceased producing any energy, the power emitted by the Sun would only have dropped by about one per cent a million years later.
'Scientifically, anything faster than that is nonsense.'
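Lockwood's million-year figure is consistent with a classic back-of-envelope estimate: the Kelvin-Helmholtz timescale, the time over which the Sun's stored gravitational and thermal energy (roughly GM²/R) could sustain its present luminosity if the core switched off. A minimal sketch, assuming standard values for the Sun's mass, radius and luminosity:

```python
# Kelvin-Helmholtz timescale: how long the Sun's stored thermal/
# gravitational energy could power its present output if fusion
# in the core stopped. Order-of-magnitude estimate only.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m
L_SUN = 3.828e26   # solar luminosity, W
SECONDS_PER_YEAR = 3.156e7

energy = G * M_SUN**2 / R_SUN                  # stored energy, ~4e41 J
tau_years = energy / L_SUN / SECONDS_PER_YEAR  # drain time at current output
print(f"Kelvin-Helmholtz timescale: ~{tau_years:.1e} years")
```

This comes out at tens of millions of years, so losing even one per cent of that reservoir takes on the order of a million years, matching Lockwood's point that a one-per-cent dimming in a single year is physically impossible.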
So, even if the sun does start giving out on us, we will have plenty of time to find a better solution than sending out Ryan Gosling on a spaceship.
The Sun is a huge ball of electrically charged hot gas that moves, generating a powerful magnetic field.
This magnetic field goes through a cycle, called the solar cycle.
Every 11 years or so, the Sun's magnetic field completely flips, meaning the sun's north and south poles switch places.
The solar cycle affects activity on the surface of the Sun, such as sunspots which are caused by the Sun's magnetic fields.
Every 11 years the Sun's magnetic field flips, meaning the Sun's north and south poles switch places. The solar cycle affects activity on the surface of the Sun, with more sunspots during stronger phases (2001) than weaker ones (1996/2006)
One way to track the solar cycle is by counting the number of sunspots.
The beginning of a solar cycle is a solar minimum, or when the Sun has the least sunspots. Over time, solar activity - and the number of sunspots - increases.
The middle of the solar cycle is the solar maximum, or when the Sun has the most sunspots.
As the cycle ends, it fades back to the solar minimum and then a new cycle begins.
Giant eruptions on the Sun, such as solar flares and coronal mass ejections, also become more frequent around the peak of the solar cycle.
These eruptions send powerful bursts of energy and material into space that can have effects on Earth.
For example, eruptions can cause lights in the sky, called aurora, or impact radio communications and electricity grids on Earth.
Geoffrey Hinton, one of the three so-called “godfathers” of AI, never misses an opportunity to issue foreboding proclamations about the tech he helped create.
During an hour-long public conversation with Senator Bernie Sanders at Georgetown University last week, the British computer scientist laid out all the alarming ways he forecasts AI will completely upend society for the worse, seemingly leaving little room for human contrivances like optimism. One reason, he said, is that AI's rapid deployment will be completely unlike past technological revolutions, which created new classes of jobs.
“The people who lose their jobs won’t have other jobs to go to,” Hinton said, as quoted by Business Insider. “If AI gets as smart as people — or smarter — any job they might do can be done by AI.”
“These guys are really betting on AI replacing a lot of workers,” Hinton added.
Hinton pioneered the deep learning techniques that are foundational to the generative AI models fueling the AI boom today. His work on neural networks earned him a Turing Award in 2018, alongside University of Montreal researcher Yoshua Bengio and the former chief AI scientist at Meta Yann LeCun. The trio are considered to be the “godfathers” of AI.
All three scientists have been outspoken about the tech’s risks, to varying degrees. But it was Hinton who first began to turn the most heads when he said he regretted his life’s work after stepping down from his role at Google in 2023.
In his discussion with Sanders, Hinton reiterated these risks, adding that the multibillionaires spearheading AI, like Elon Musk, Mark Zuckerberg, and Larry Ellison haven’t really “thought through” the fact that “if the workers don’t get paid, there’s nobody to buy their products,” he said, per BI.
Previously, Hinton has said it wouldn’t be “inconceivable” that humankind gets wiped out by AI. He also believes we’re not that far away from achieving an artificial general intelligence, or AGI, a hypothetical AI system with human or superhuman levels of intelligence that is able to perform a vast array of tasks, which the AI industry is obsessed with building.
“Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI,” Hinton said in 2023. “And now I think it may be 20 years or less.”
Strikingly, Hinton now claims that the latest models like OpenAI’s GPT-5 “know thousands of times more than us already.”
While leading large language models are trained on a corpus of data vastly exceeding what a human could ever learn, many experts would disagree that this means the AI actually "knows" what it's talking about. Moreover, many efforts to replace workers with semi-autonomous models called AI agents have failed embarrassingly, including in customer support roles that many predicted were the most vulnerable to being outmoded. In other words, it's not quite set in stone that the tech will be able to replace even low-paying jobs so easily.
Nonetheless, never put it past your overlords to find a way to screw you over anyway. AI machines could be a great tool for carrying out imperial actions abroad; deploying AI robots to fight overseas would be great for the US military industrial complex, Hinton argued, since there wouldn't be dead soldiers to cause "political blowback."
“I think it will remove one of the main barriers to rich powerful countries just invading little countries like Grenada,” Hinton told Sanders.
It’s one small step for man — and one giant, badass layup for robot kind.
Researchers at the Hong Kong University of Science and Technology (HKUST) have programmed a Unitree G1 humanoid robot to play basketball, almost perfectly mimicking the skills of a human athlete.
A video shared by HKUST PhD student Yinhuai Wang shows the robot dribbling, taking jump shots, and even pivoting on one of its feet to evade the student’s attempts to block it from taking a shot.
Wang called it the “first-ever real-world basketball demo by a humanoid robot,” boasting that he “became the first person to record a block against a humanoid.”
It’s an impressive demo, showcasing how far humanoid robotics has come in a matter of years. Unitree, in particular, has stood out in an increasingly crowded field, with its G1 rapidly picking up new skills.
Wang and his colleagues are teaching robots how to play basketball through a system they’ve dubbed “SkillMimic,” which is described on his website as a “data-driven approach that mimics both human and ball motions to learn a wide variety of basketball skills.”
“SkillMimic employs a unified configuration to learn diverse skills from human-ball motion datasets, with skill diversity and generalization improving as the dataset grows,” the writeup continues. “This approach allows training a single policy to learn multiple skills, enabling smooth skill switching even if these switches are not present in the reference dataset.”
While netizens were generally impressed by the robot's basketball skills, some were a little more skeptical.
“Love that the programmer focused on showboating rather than fundamentals,” one wrote.
“Robots will do everything but fill the dishwasher,” another joked.
Others imagined a future in which bipedal robots dominate sports.
“Man, I hope I get to see proper robotics basketball leagues,” another Reddit user mused.
29-11-2025
Can AI ever truly achieve consciousness?
Key takeaways
AI consciousness faces objections grounded in computational, algorithmic and physical constraints.
The researchers sort the arguments into three categories by strength: points that could be engineered around, practical obstacles in current technology, and fundamental impossibilities.
The framework clarifies the debate over AI consciousness and offers a roadmap for future research, ethics and policy development in artificial intelligence (AI).
Whether artificial intelligence (AI) can achieve consciousness has long been the subject of intense debate among scientists, philosophers and technologists. A recent study, Consciousness in Artificial Intelligence? A Framework for Classifying Objections and Constraints, attempts to clarify that complex discussion by developing a structured model for categorising the various objections to digital consciousness.
Types of objections
The study acknowledges that arguments against AI consciousness often overlap or are misdirected. Some objections stem from the conviction that consciousness cannot be explained by purely computational processes, while others accept the principle of computational consciousness but hold that current digital systems lack the necessary architecture. Still other arguments reject the possibility of digital consciousness on the basis of insights from physics or biology rather than from the theory of computation.
A three-level analytical structure
To address this complexity, the authors propose a three-level analytical framework inspired by David Marr's model from cognitive science. The first level treats consciousness as an input-output correspondence driven by computable functions. The second level concerns the specific algorithms, architectures and representational structures needed to realise consciousness. The third level addresses objections holding that the physical substrate itself is essential for conscious experience.
This framework enables researchers, policymakers and philosophers to identify agreements and differences, drawing a clear distinction between arguments against computational functionalism and arguments against digital consciousness.
Non-computable functions
Some critics claim that consciousness involves non-computable processes beyond the reach of Turing machines, while others argue that any computational model of consciousness would be too complex to implement at scale. The study also highlights the importance of dynamic coupling, suggesting that consciousness may require real-time interaction with an environment, something digital systems struggle to reproduce.
Algorithmic organisation
At the algorithmic level, the debate centres on how algorithms are organised. Theories examine whether symbolic architectures, neural networks or hybrid systems are capable of generating conscious states. Some emphasise the need for continuous-valued analogue processes, which digital systems cannot fully emulate. Others stress synchronisation and forms of representation that are essential for subjective experience but absent from current digital architectures.
This level also covers discussions of embodiment and enactivism, which hold that consciousness arises only from bodies acting within an environment. On that view, large language models, despite their apparent intelligence, may lack the interactive properties essential for conscious states.
The physical substrate
Objections concerning the physical substrate impose the strictest constraints. These arguments focus on unique properties of biological brains that digital hardware cannot replicate. Theories in this category claim that consciousness depends on information embedded in biological networks, on the dynamics of electromagnetic fields in the brain, or even on quantum processes.
According to these objections, digital AI systems are fundamentally incapable of achieving consciousness because of the crucial role of the physical substrate. The study stresses that such claims sit at the level of physics and biology rather than computer science, and that empirical evidence is still needed on how consciousness arises in natural systems.
A three-tier evaluation system
To clarify the strength of each objection, the study applies a three-tier evaluation system. Some objections suggest that machine consciousness is possible provided certain capabilities or architectures are adopted. Others point to practical obstacles that make conscious AI unlikely with current technology. The strongest objections hold that digital systems can never become conscious, regardless of technological progress.
This classification distinguishes between conceptual, technological and metaphysical objections. It also highlights the areas where empirical research could resolve disagreements, as well as the domains that require further philosophical inquiry.
A framework for governance, ethics and AI development
The study concludes by discussing the practical implications of the framework for governance, ethics and AI development. As AI models increasingly influence critical decisions and interact with humans in sophisticated ways, understanding the full spectrum of arguments about digital consciousness is essential. The proposed classification can help policymakers draft well-considered regulation, develop ethical guidelines, and support AI developers in making responsible claims about their systems' capabilities. (fc)
AgiBot humanoid robot patrols at the waiting hall of Jinhua railway station on the first day of the Spring Festival travel rush on January 14, 2025 in Jinhua, Zhejiang Province of China.
The Chinese robotics company AgiBot has set a new world record for the longest continuous journey walked by a humanoid robot. AgiBot’s A2 walked 106.286 kilometers (66.04 miles), according to Guinness World Records, making the trek from Nov. 10-13.
The robot journeyed from Jinji Lake in China’s Jiangsu province to Shanghai’s Bund waterfront district, according to China’s Global Times news outlet. The robot never powered off and reportedly continued to operate while batteries were swapped out, according to UPI.
A video posted to YouTube shows a highly edited version of the walk that doesn’t give much insight into how it was presumably monitored by human handlers. But even if it did have some humans playing babysitter, the journey included just about everything you’d expect when traveling by foot in an urban environment, including different types of ground, limited visibility at night, and slopes, according to the Global Times.
The robot obeyed traffic signals, but it’s unclear what level of autonomy may have been at work. The company told the Global Times that “the robot was equipped with dual GPS modules along with its built-in lidar and infrared depth cameras, giving it the sensing capability needed for accurate navigation through changing light conditions and complex urban environments.”
That suggests it was fully autonomous, and Guinness World Records used the word “autonomous,” though Gizmodo couldn’t independently confirm that claim.
“Walking from Suzhou to Shanghai is difficult for many people to do in one go, yet the robot completed it,” Wang Chuang, partner and senior vice president at AgiBot, told the Global Times.
The amount of autonomy a robot is operating under is a big question when it comes to companies rolling out their demonstrations. Elon Musk’s Optimus robot has been ridiculed at various points because the billionaire has tried to imply his Tesla robot is more autonomous than it actually is in real life.
For example, Musk posted a video in January 2024 that appeared to show Optimus folding a shirt. That’s historically been a difficult task for robots to accomplish autonomously. And, as it turns out, Optimus was actually being teleoperated by someone who was just off-screen. Well, not too far off-screen. The teleoperator’s hand was peeking into the frame, which is how people figured it out.
Tesla’s Optimus robot folding laundry in Jan. 2024 with an annotation of a red arrow added by Gizmodo showing the human hand. Gif: Tesla / Gizmodo
Musk did something similar in October 2024 when he showed off Optimus robots supposedly pouring beer during his big Cybercab event in Los Angeles. They were teleoperated as well.
It’s entirely possible that AgiBot’s A2 walked the entire route autonomously. The tech really is getting that good, even if long-lasting batteries are still a big hurdle. But obviously, people need to remain skeptical when it comes to spectacular claims in the robot race.
We’ve been promised robotic servants for over a century now. And the people who have historically sold that idea are often unafraid to use deception to hype up their latest achievements. Remember Miss Honeywell of 1968? Or Musk’s own unveiling of Optimus? They were nothing more than humans in a robot costume.