The purpose of this blog is to provide an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
Press the button below to respond in my forum
Search this blog
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her brave battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
An interesting address?
UFOS OR UAPS, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world - Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not just in Belgium but all over the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that conduct in-depth research, although they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the superb website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you won't want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, enabling us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventure among the stars!
Do you have questions or want to know more? Don't hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
18-08-2022
Radioactive DIAMOND battery powered by nuclear waste 'will run for 28,000 years' and could go on sale by 2023
The nuclear battery is 'safe for humans' as it is enclosed in tamper-proof material
It works by using microscopic diamonds to move heat from radioactive isotopes
The radioactive isotopes come from the waste products of nuclear power
The company stacks multiples of these microscopic cells to generate electricity
A battery powered by nuclear waste could keep a spaceship or hospital operating for 28,000 years without needing to be recharged or replaced, its developers claim.
The radioactive battery is 'completely safe' for humans, according to California-based Nano Diamond Battery (NDB), who say it will 'change the world'.
The firm hopes to start selling the battery to commercial partners, including space agencies for long duration missions, within the next two years.
NDB are also working on a consumer version that could run a smartphone or electric car for up to a decade without requiring a charge.
No details on pricing have been revealed by the technology startup, which says the battery is still in the development phase.
DIAMOND NUCLEAR VOLTAIC (DNV) ENERGY GENERATION
Diamond Nuclear Voltaic (DNV) is a technology that converts nuclear waste into electricity.
The microscopic diamonds have extremely good heat conductance.
They move heat away from the radioactive isotopes so quickly that the transfer generates electricity.
The output is small, but it is delivered consistently over a very long period of time - thousands of years.
Several of these units are stacked, increasing overall power output.
This kind of arrangement improves the overall efficiency of the system and provides a multi-layer safety shield.
The technology involves combining radioactive isotopes taken from nuclear waste with layers of panelled nano diamonds stacked in a battery cell.
The extremely good heat conductance of the microscopic diamonds moves heat away from the radioactive isotopes so quickly that the transfer generates electricity.
It is based on a technology called diamond nuclear voltaic (DNV), presented in 2016 by scientists from the University of Bristol using waste graphite blocks.
Because its energy production is low, the technology is best suited to devices that need a slow, consistent trickle of electricity over a long period of time.
The NDB system is able to work in consumer products by adding layers and layers of diamonds and radioactive waste panels to increase the total energy output.
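To make the stacking arithmetic concrete, here is a minimal sketch in Python of how stacked cells might add up over time. The per-cell output figure is an arbitrary placeholder - neither NDB nor the article gives per-cell numbers - but the decay uses the standard carbon-14 half-life.

```python
# Illustrative sketch only (not NDB's published design): model the output
# of a stack of identical betavoltaic cells whose activity decays with the
# carbon-14 half-life. The per-cell output is an assumed placeholder.
CELL_POWER_UW = 1.0        # assumed output of one cell, in microwatts
HALF_LIFE_YEARS = 5_730    # half-life of carbon-14

def stack_power_uw(n_cells: int, years: float) -> float:
    """Total stack output after `years`, assuming independent cells."""
    decay_factor = 0.5 ** (years / HALF_LIFE_YEARS)
    return n_cells * CELL_POWER_UW * decay_factor

for years in (0, 5_730, 28_000):
    print(f"10,000-cell stack after {years:>6} years: "
          f"{stack_power_uw(10_000, years):,.0f} µW")
```

The point the numbers make is the one in the text: adding cells scales the output linearly, while the half-life means the output declines only gradually over millennia.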
'This battery has two different merits,' NDB CEO Nima Golsharifi told Future Net Zero.
'One is that it uses nuclear waste and converts it into something good. And the second is that it runs for a much longer time than the current batteries.'
The firm has also worked to ensure the material is safe and people can't easily access the radioactive material inside the stacked power cells.
'The DNV stacks along with the source are coated with a layer of poly-crystalline diamond, which is known for being the most thermally conductive material,' a spokesperson said.
This material 'also has the ability to contain the radiation within the device and is the hardest material,' up to 12 times tougher than stainless steel.
'This makes our product extremely tough and tamperproof.'
Use cases include having a watch with a tiny NDB battery that could be passed down from generation to generation without ever having to replace the power supply.
Diamond batteries may one day power satellites, providing them with enough spare energy to de-orbit at the end of their life, or probes heading into deep space for thousands of years.
'The human desire to explore space is fuelled by the excitement of exploring the unknown,' NDB said on their website.
Future devices could also power a smartphone or a laptop, each containing a miniature power generator that lasts as long as the device itself - with no need to ever charge - or an electric car that could run for thousands of miles without charging.
'Recent advances in space technology and the rise of the first manned electric aircrafts have led to increasing demand on their battery systems, hindered by concerns regarding longevity and safety.
'NDB can be utilized to power drones, electric aircrafts, space rovers and stations whilst allowing for longer activity.'
'In situ medical devices and implantable such as hearing aids and pacemakers respectively can benefit from long battery life in a smaller package with added benefit of safety and longevity,' the firm added.
WHAT ARE HYDROGEN FUEL CELLS?
Hydrogen fuel cells create electricity to power a battery and motor by mixing hydrogen and oxygen in specially treated plates, which are combined to form the fuel cell stack.
Fuel cell stacks and batteries have allowed engineers to significantly shrink these components to even fit neatly inside a family car, although they are also commonly used to fuel buses and other larger vehicles.
Trains and aeroplanes are also being adapted to run on hydrogen fuel, for example.
Oxygen is collected from the air through intakes, usually in the grille, and hydrogen is stored in aluminium-lined fuel tanks, which automatically seal in an accident to prevent leaks.
These ingredients are fused, releasing usable electricity and water as by-products and making the technology one of the quietest and most environmentally friendly available.
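The 'fusing' described above is standard fuel-cell electrochemistry: hydrogen is split at the anode, the freed electrons drive the external circuit, and water forms at the cathode. As a reminder:

```latex
\mathrm{Anode:}\quad 2\mathrm{H}_2 \rightarrow 4\mathrm{H}^+ + 4e^- \\
\mathrm{Cathode:}\quad \mathrm{O}_2 + 4\mathrm{H}^+ + 4e^- \rightarrow 2\mathrm{H}_2\mathrm{O} \\
\mathrm{Overall:}\quad 2\mathrm{H}_2 + \mathrm{O}_2 \rightarrow 2\mathrm{H}_2\mathrm{O} + \text{electrical energy} + \text{heat}
```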
Reducing the amount of platinum used in the stack has made fuel cells less expensive, but the use of the rare metal has restricted the spread of their use.
Recent research has suggested hydrogen fuel cell cars could one day challenge electric cars in the race for pollution-free roads, but only if more stations are built to fuel them.
Scientists Turn Nuclear Waste Into Diamond Batteries That Could Last For Thousands Of Years
We have an unquenchable energy need. When we need to run anything that cannot be plugged in, electricity will have to come from a battery, and the quest for a better battery is being launched in laboratories around the globe. Hold that thought for a moment.
Nuclear waste is radioactive waste generated by nuclear power plants - material that no one wants kept near their homes or even carried through their communities. The ugly substance is poisonous and deadly, takes thousands of years to disintegrate completely, and we continue to produce more of it.
Now, a California-based business, NDB, says it can resolve both of these issues. They claim to have built a self-powered battery made entirely of radioactive waste that has a life expectancy of 28,000 years, making it ideal for your future electric car or iPhone 1.6 × 10⁴.
Rather than storing energy generated elsewhere, the battery generates its own charge. It is constructed of two kinds of nano-diamonds, which makes it almost crash-proof when used in vehicles or other moving things. Additionally, the business claims that its battery is safe since it emits less radiation than the human body.
NDB has already created a proof of concept and intends to construct its first commercial prototype once its laboratories restart operations after the COVID outbreak (which should be soon).
The nuclear waste from which NDB intends to manufacture its batteries consists of reactor components that have become radioactive as a result of exposure to nuclear power plant fuel rods.
While this is not considered high-grade nuclear waste - that would be spent fuel - it is nonetheless very poisonous, and a nuclear plant generates a lot of it. The International Atomic Energy Agency estimates that the "core of a typical graphite-moderated reactor" may contain up to 2,000 tonnes of graphite. (A tonne, or metric ton, is about 2,205 pounds.)
Carbon-14 is a radioisotope found in graphite - the same radioisotope used by archaeologists for carbon dating. It has a half-life of 5,730 years and ultimately decays into nitrogen-14, an anti-neutrino, and a beta-decay electron, the charge of which piqued NDB's curiosity as a possible source of electricity.
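That half-life lets us sanity-check the headline lifetime: the fraction of carbon-14 remaining after t years is (1/2)^(t/5730), so after the quoted 28,000 years:

```latex
N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/5730},
\qquad
\frac{N(28{,}000)}{N_0} = \left(\tfrac{1}{2}\right)^{28{,}000/5730} \approx 0.034
```

In other words, roughly 3 per cent of the original activity would remain at the end of the quoted lifetime - consistent with a battery whose small output fades slowly over millennia rather than stopping dead.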
NDB cleanses graphite and then converts it to microscopic diamonds. The business claims that by using current technology, they've engineered their little carbon-14 diamonds to generate a large quantity of electricity. Diamonds also operate as a semiconductor, absorbing energy and dispersing it via a heat sink.
However, since they are still radioactive, NDB encases the miniature nuclear power plants in other low-cost, non-radioactive carbon-12 diamonds. These glistening lab-created shells provide diamond-hard protection while also containing the carbon-14 diamonds' radiation.
NDB intends to manufacture batteries in a variety of common and unique sizes, including AA, AAA, 18650, and 2170. Each battery will feature many stacked diamond layers, as well as a tiny circuit board and a supercapacitor for energy collection, storage, and discharge. The ultimate result, the business claims, is a battery that will last an extremely long period.
According to NDB, a battery may last up to 28,000 years when utilized in a low-power setting, such as a satellite sensor. They predict a usable life of 90 years as a car battery, much longer than any one vehicle would last - the business believes that one battery could theoretically power one pair of wheels after another. For consumer gadgets like phones and tablets, the firm estimates that a battery will last around nine years.
“Think of it in an iPhone,” NDB’s Neel Naicker tells New Atlas. "With the same size battery, it would charge it five times an hour from zero to full. Imagine that. Imagine a world where you wouldn’t have to charge your battery at all for the day. Now imagine for the week, for the month… How about for decades? That’s what we’re able to do with this technology.”
NDB expects to commercialise a low-power version in a couple of years, followed by a high-power version in roughly five years. If all goes according to plan, NDB's technology will represent a significant step forward in terms of delivering low-cost, long-term energy to the world's electronics and cars.
The company says, “We can start at the nanoscale and go up to power satellites, locomotives.”
Additionally, the business anticipates that its batteries will be priced comparably to existing batteries, including lithium-ion, and perhaps much cheaper, since the producers of the nuclear waste the batteries are made from may even pay the company to take the toxic material off their hands.
The garbage of one enterprise becomes the diamonds of another.
The quadrupedal robots are well suited for repetitive tasks.
Two Ghost Robotics Vision 60 Quadruped Unmanned Ground Vehicles (Q-UGVs) pose for a picture at Cape Canaveral Space Force Station, Fla., July 28, 2022. (Image credit: U.S. Space Force photo by Senior Airman Samuel Becker)
Man's new best friend is coming to the U.S. Space Force.
The Space Force has conducted a demonstration using dog-like quadruped unmanned ground vehicles (Q-UGVs) for security patrols and other repetitive tasks. The demonstration used at least two Vision 60 Q-UGVs, or "robot dogs", built by Ghost Robotics and took place at Cape Canaveral Space Force Station on July 27 and 28.
According to a statement from the Department of Defense, Space Launch Delta 45 will use the robot dogs for "damage assessments and patrol to save significant man hours." The unit is responsible for all space launch operations from Kennedy Space Center and Cape Canaveral.
Images from the demonstration show personnel operating the robots with a hand controller inside a hangar. The Ghost Robotics Vision 60 Q-UGVs can be equipped with a wide variety of optical and acoustic sensors, enabling them to serve as automated "eyes and ears" around sensitive installations such as a Space Force base. The robots can be operated either autonomously or by a human controller and can even respond to voice commands.
U.S. Air Force 1st Lt. Andrew Cuccia, chief innovation officer, operates a Ghost Robotics Vision 60 Quadruped Unmanned Ground Vehicle (Q-UGV) with a handheld controller at Cape Canaveral Space Force Station. (Image credit: U.S. Space Force photo by Senior Airman Samuel Becker)
The dog-like robots can also serve as miniaturized communications nodes, carrying antennas to quickly extend networks beyond existing infrastructure or in locations where no such infrastructure exists.
A Ghost Robotics Vision 60 Quadruped Unmanned Ground Vehicle (Q-UGV) is operated during a demo for the 45th Security Forces Squadron at Cape Canaveral Space Force Station, Fla., July 28, 2022. (Image credit: U.S. Space Force photo by Senior Airman Samuel Becker)
The robots have been previously tested by the U.S. Air Force for perimeter defense tasks and as part of a large test of the service's Advanced Battle Management System (ABMS) data-sharing network. In that 2020 test, robot dogs at Nellis Air Force Base in Nevada "provided real-time strike targeting data to USAF operators" in Florida using Starlink satellite links, then-CEO of Ghost Robotics Jiren Parikh told The War Zone.
The Ghost Robotics Q-UGVs are designed to withstand water and weather, and were recently demonstrated with a tail-like payload enabling them to travel underwater.
Aside from their military applications, the robot dogs are also being eyed for uses in emergency management, public safety and industrial inspection.
This is the first instance in which stem cells have been used to make advanced-stage embryos
The researchers have developed a special type of incubator that made this possible
The technology could one day help provide cells, tissues, or even organs for transplantation.
In a major breakthrough, scientists in Israel have made mouse embryos without using sperm or egg cells but only stem cells taken from the skin, The Times of Israel has reported. These embryos have beating hearts as well as brain structures.
The discovery of stem cells and their ability to become any cell type in the body has opened many doors in the field of medicine. Stem cells have been explored as treatments for everything from baldness to HIV.
However, sourcing stem cells has raised major ethical concerns. These cells are abundant in the embryonic stages of development, but harvesting them requires destroying the embryo before it can be implanted in the womb. So researchers have been looking for an alternative way to source them - and have even been successful in their search.
Making stem cells more "naive"
Studies have shown that stem cells are also present in small numbers in organs like the skin, which constantly undergoes renewal throughout our life. The process requires cells of different types, and that's where the multi-potency of stem cells comes in handy.
Jacob Hanna, a professor in the Molecular Genetics Department at the Weizmann Institute of Science in Israel, however, developed a method that takes such stem cells back to an earlier, more "naive" state. In a previous study, Hanna and his team demonstrated that their technology could make human stem cells so "naive" that they could even be injected into mice, where they would function as if they were the mice's own.
In separate work, Hanna's team also developed a special incubator that has all the necessary conditions for the growth of an embryo. In 2021, a group of researchers grew 250 mouse embryos into fetuses with fully formed organs inside this artificial womb. What Hanna and his team wanted to know was if the incubator could also grow embryos that were sourced from stem cells.
Embryos from stem cells
The researchers then used naive stem cells that had been cultured for years in a petri dish in the lab. Before placing them into the special incubator, these cells were divided into three groups. While one was left untreated to grow into embryonic stem cells, the other two were pretreated for a period of 48 hours to express genes that were master regulators of either the placenta or yolk sac.
The cells were once again mixed together in the incubator and allowed to grow. While most failed to develop properly, 0.5 percent, or 50 of 10,000 cells, went on to become spheres, which then took the elongated form of embryos.
The researchers had labeled each group of cells differently, so they could track the growth of the placenta and yolk sac outside the embryo. At day 8.5 - nearly half of the normal gestation of 20 days in mice - these embryos displayed early organs such as a beating heart, blood stem cell circulation, a brain with well-shaped folds, a neural tube, and an intestinal tract, a university press release said.
This is the first instance of a research group using stem cells to make advanced embryos, Hanna told the Times of Israel. "Our next challenge is to understand how stem cells know what to do – how they self-assemble into organs and find their way to their assigned spots inside an embryo."
Apart from helping reduce the use of animals in stem cell research, the techniques developed in his lab could one day also provide a reliable source of cells, tissues, and organs for transplantation.
The findings of the study were published in the journal Cell.
Abstract
In vitro cultured stem cells with distinct developmental capacities can contribute to embryonic or extra-embryonic tissues after microinjection into pre-implantation mammalian embryos. However, whether cultured stem cells can independently give rise to entire gastrulating embryo-like structures with embryonic and extra-embryonic compartments, remains unknown. Here we adapt a recently established platform for prolonged ex utero growth of natural embryos, to generate mouse post-gastrulation synthetic whole embryo models (sEmbryos), with both embryonic and extra-embryonic compartments, starting solely from naïve ESCs. This was achieved by co-aggregating non-transduced ESCs, with naïve ESCs transiently expressing Cdx2- and Gata4- to promote their priming towards trophectoderm and primitive endoderm lineages, respectively. sEmbryos adequately accomplish gastrulation, advance through key developmental milestones, and develop organ progenitors within complex extra-embryonic compartments similar to E8.5 stage mouse embryos. Our findings highlight the plastic potential of naïve pluripotent cells to self-organize and functionally reconstitute and model the entire mammalian embryo beyond gastrulation.
The research was conducted by DeepMind and EMBL’s European Bioinformatics Institute (EMBL-EBI), which used the AlphaFold AI system to predict a protein’s 3D structure.
AlphaFold DB has identified over 200 million structures (Provider: AlphaFold)
The AlphaFold Protein Structure Database – which is freely available to the scientific community – has been expanded from nearly one million protein structures to more than 200 million structures, covering almost every organism on Earth that has had its genome sequenced.
The expansion includes predicted shapes for the widest possible range of species, including plants, bacteria, animals, and other organisms, opening up new avenues of research across the life sciences.
Demis Hassabis, founder and CEO of DeepMind, said: ‘We’ve been amazed by the rate at which AlphaFold has already become an essential tool for hundreds of thousands of scientists in labs and universities across the world.
‘From fighting disease to tackling plastic pollution, AlphaFold has already enabled incredible impact on some of our biggest global challenges.
‘Our hope is that this expanded database will aid countless more scientists in their important work and open up completely new avenues of scientific discovery.’
Being able to predict a protein’s structure gives scientists a better understanding of what it does and how it works (Provider: AlphaFold)
When AlphaFold was first unveiled, it demonstrated that it could accurately predict the shape of a protein, at scale and in minutes, to atomic accuracy.
The database works like an internet search for protein structures by providing instant access to predicted models.
This cuts down the time it takes for scientists to learn more about the likely shapes of the proteins they are researching, speeding up experimental work.
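For readers who want to try it, the database also exposes a public REST endpoint. The sketch below (Python, using the `requests` library) reflects the EMBL-EBI AlphaFold DB API as documented at the time of writing; the endpoint and JSON field names should be verified against the current documentation at https://alphafold.ebi.ac.uk before relying on them.

```python
# Minimal sketch: fetch AlphaFold's predicted structure for a protein by
# its UniProt accession. Endpoint and field names follow EMBL-EBI's public
# AlphaFold DB API at the time of writing -- verify against current docs.
import requests

ACCESSION = "P69905"  # human hemoglobin subunit alpha, as an example

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}", timeout=30)
resp.raise_for_status()
entry = resp.json()[0]      # the API returns a list of entries

pdb_url = entry["pdbUrl"]   # direct link to the predicted model
with open(f"{ACCESSION}.pdb", "wb") as f:
    f.write(requests.get(pdb_url, timeout=60).content)
print(f"Saved predicted structure for {ACCESSION} to {ACCESSION}.pdb")
```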
Earlier predictions have already helped scientists in their quest to create an effective malaria vaccine.
Scientists at the University of Oxford and the National Institute of Allergy and Infectious Diseases have been researching a protein called Pfs48/45, which is one of the most promising candidates for inclusion in a transmission-blocking malaria vaccine.
Existing technology alone did not allow them to fully understand the structure of the protein in order to see where the most effective transmission-blocking antibodies bind across its surface.
Matthew Higgins, professor of Molecular Parasitology and co-author of that study, said: ‘By combining AlphaFold models with our experimental information from crystallography, we could reveal the structure of Pfs48/45, understand its dynamics and show where transmission-blocking antibodies bind.
‘This insight will now be used to design improved vaccines which induce the most potent transmission-blocking antibodies.’
DeepMind and EMBL-EBI said they will continue to refresh the database periodically, with the aim of improving features and functionality.
30-07-2022
When asked to show the last selfie ever taken, an AI produced a creepy scene
DALL-E AI, developed by OpenAI, is a new system that can produce full images when fed natural language descriptions, and TikToker Robot Overlords simply asked it to 'show the last selfie ever taken.'
It produced chilling scenes of bombs dropping and catastrophic weather, along with cities burning and even zombies. Each image shows a person holding a phone in front of their face and behind them is the world coming to an end, the Daily Mail reports.
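Prompts like this can be reproduced against OpenAI's image API. The sketch below uses the pre-1.0 `openai` Python library that was current when the article appeared (newer library versions expose `client.images.generate(...)` instead); an API key is required, and the exact DALL-E variant the TikToker used is not stated in the article.

```python
# Minimal sketch of generating an image from the article's prompt via
# OpenAI's image API (pre-1.0 `openai` library, current when the article
# ran). Newer library versions use `client.images.generate(...)` instead.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="the last selfie ever taken",
    n=1,
    size="1024x1024",
)
print(response["data"][0]["url"])  # URL of the generated image
```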
Asking an AI to show the last selfie ever taken
Here are a few more eerie images.
Asking an AI to show the last selfie ever taken in the apocalypse
Asking AI how it will take over the world
Asking an AI what the end of the universe will look like
27-07-2022
Researchers Turn Dead Spiders Into 'Necrobotic' Grippers
See how engineers at Rice University transformed wolf spiders into necrobots.
Amanda Kooser
Researchers inserted a needle into a dead spider to operate its legs like a gripper.
Video screenshot by Amanda Kooser/CNET
Grippers made from dead spiders. For some, this might sound like a horror movie. For others, it's a fascinating mashup of robotics and the natural world.
A team of engineers at Rice University worked out how to reanimate (after a fashion) the legs of dead spiders. This isn't a Frankenstein's monster sort of situation. The researchers used a needle and air to activate the spider legs, mimicking how the appendages work in living spiders. Because the spiders are dead and are used in a robotic fashion, the engineers call this "necrobotics."
Mechanical engineering graduate student Faye Yap is the lead author of a paper on the spider project published in the journal Advanced Science this week. "It happens to be the case that the spider, after it's deceased, is the perfect architecture for small scale, naturally derived grippers," said co-author Daniel Preston in a Rice statement on Monday.
It's one thing to read about this project and another to see it in action. Fortunately (or not depending on how you feel about all this), Rice delivered a video that explains the process for creating the necrobots and shows how it works.
When alive, spiders use blood to extend and contract their legs through a hydraulic process. The researchers euthanized wolf spiders, inserted a needle into the chamber of the body that controls the legs, sealed it with glue and then used air to trigger the opening and closing of the legs.
The necrobots are able to grip and hold more than their own weight.
Preston Innovation Laboratory/Rice University
The gripper spiders were able to lift more than their own body weight, including another spider and small objects like parts on a circuit board.
The team sees some advantages to necrobotic spiders. The little grippers can grab irregular objects, blend into their environment and also biodegrade over time. The researchers hope to try this method out with smaller spiders. They also plan to work out how to trigger the legs individually.
As this work advances, I'm looking forward to a new type of Transformers. Necrobots, reach out!
From the robo-surgeon that killed its patient to the driverless car that ran over a pedestrian: Worst robotic accidents in history - after chess robot breaks seven-year-old boy's finger in Russia
Shocking footage emerged of a chess-playing robot breaking a child's finger during a chess match in Russia
A spokesperson from Russian Chess Federation said the boy violated 'safety rules' by making a move too soon
But other incidents involving robots and humans in the past have had more tragic outcomes
MailOnline delves into the worst robotic accidents in history including a robo-surgeon that killed its patient
Shocking footage emerged at the weekend of a chess-playing robot breaking a child's finger during a match in Russia.
The robot grabbed the seven-year-old boy's finger at the Moscow Open last week because it was confused by his overly-quick movements, Russian media outlets reported.
Sergey Lazarev, vice-president of the Russian Chess Federation, said the child had violated 'certain safety rules' by making a move too soon.
Lazarev said that the machine had been hired for many previous events without any problems, and that the incident was an 'extremely rare case'.
Christopher Atkeson, a robotics expert at Carnegie Mellon University, told MailOnline: 'Robots have limited sensing and thus limited awareness of what is going on around them.
'I suspect the chess robot did not have ears, and that its vision system was blind to anything other than chess boards and pieces.'
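The failure mode Atkeson describes is easy to reproduce with a toy detector: if the occupancy of a board square is inferred purely from pixel change against an empty-board reference, a finger in a square is indistinguishable from a piece. The sketch below is purely hypothetical and illustrative - it is not the tournament robot's actual software.

```python
# Toy illustration of the failure mode: a detector that labels a board
# square "occupied" whenever its pixels differ from an empty-board
# reference cannot tell a finger from a pawn. Hypothetical example only.
import numpy as np

SQUARE = 32  # side of one board square, in pixels

def square_occupied(frame: np.ndarray, empty_ref: np.ndarray,
                    row: int, col: int, thresh: float = 25.0) -> bool:
    """True if the (row, col) square differs enough from the empty board."""
    r, c = row * SQUARE, col * SQUARE
    patch = frame[r:r + SQUARE, c:c + SQUARE].astype(float)
    ref = empty_ref[r:r + SQUARE, c:c + SQUARE].astype(float)
    return float(np.abs(patch - ref).mean()) > thresh

# A hand hovering over a square changes its pixels just as a piece would,
# so `square_occupied` returns True either way - the robot "sees" a piece.
```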
Although the young chess player only suffered a broken finger, in other accidents involving robots the injured party has not always been as lucky.
MailOnline has taken a look at some of the disastrous robotic failures, from the robo-surgeon that killed its patient to the driverless car that hit a pedestrian.
A chess-playing robot (pictured) broke a child's finger during an international tournament in Moscow last week, with the incident being captured in CCTV footage
CHINESE WORKER SKEWERED - WARNING: GRAPHIC CONTENT
One of the most gruesome robo-accidents involved a 49-year-old Chinese factory worker, known as Zhou, back in December 2018.
Zhou was hit by a rogue robot that collapsed suddenly, impaling him with 10 sharp steel rods in the arm and chest, each one foot long.
The spikes, which were part of the robotic arm, speared through the worker's body when the robot collapsed during a night shift.
Four of them were stuck in his right arm, one in his right shoulder, one in his chest and four in his right forearm, reported People's Daily Online at the time.
Miraculously, Zhou survived after surgeons removed the rods from his body and was in stable condition, according to the Xiangya Hospital in Changsha.
Mr Zhou (pictured), 49, was hit by part of a robot that collapsed in China. Four of the rods were stuck in his right arm, one in his right shoulder, one in his chest and four in his right forearm
PEDESTRIAN KILLED BY SELF-DRIVING UBER
In March 2018, Elaine Herzberg was fatally struck by a prototype self-driving car from ridesharing firm Uber.
Herzberg, 49, was pushing a bicycle across a four-lane road in Tempe, Arizona, when she was struck by the vehicle, which was operating in self-drive mode with a human safety backup driver sitting in the driving seat.
The Uber engineer in the vehicle, Rafaela Vasquez, was watching videos on her phone, according to reports at the time.
Herzberg was taken to the local hospital where she died of her injuries – marking the first recorded case of a pedestrian fatality involving a self-driving car.
This photo from video from a mounted camera provided by the Tempe Police Department shows an interior view moments before the Uber vehicle hit a woman in Tempe, Arizona, in March 2018
Vasquez was later charged with negligent homicide, while Uber was found not criminally responsible for the accident.
The ridesharing giant had been testing self-driving vehicles in four North American cities – Tempe, San Francisco, Pittsburgh and Toronto – but these tests were suspended following the accident.
In 2020, Uber sold off its self-driving car division, spelling an end to its attempts to develop autonomous driving systems.
BOTCHED ROBO-SURGERY
In February 2015, father-of-three and retired music teacher Stephen Pettitt underwent robotic surgery for mitral valve disease at the Freeman Hospital, Newcastle upon Tyne.
The operation was conducted by surgeon Sukumaran Nair using the Da Vinci surgical robot, which consists of a surgeon's console and interactive robotic arms controlled from the console.
Details of the botched six-hour procedure include the surgical team, Mr Nair and assisting surgeon Thasee Pillay, shouting at one another.
The operation using the Da Vinci robot (file image) was the first of its kind conducted in the UK
Communication was difficult because of the 'tinny' sound quality coming from the robot console being operated by Nair, it was revealed.
The machine also knocked a theatre nurse and destroyed the patient's stitches, Newcastle Coroner's Court later heard.
Mr Nair, who trained in India and London and previously worked at Papworth Hospital in Cambridgeshire, said he now works in Scotland and no longer does robotic surgery.
Sadly, Mr Pettitt - who did not survive - would have had a 98 to 99 per cent chance of living had the operation not been carried out using a robot.
VW WORKER CRUSHED
In June 2015, a 22-year-old man was killed by a robotic arm at a Volkswagen plant in Baunatal, Germany.
The robotic arm, intended to lift machine parts, reportedly grabbed him instead and crushed him against a large metal plate.
The man suffered severe injuries to his chest in the incident and was resuscitated at the scene, but died from his injuries in hospital.
A Volkswagen spokesman said initial conclusions indicated human error was to blame rather than a malfunction with the robot, which can be programmed to perform various tasks in the assembly process.
JAPAN'S FIRST ROBO-CASUALTY
In July 1981, Japanese maintenance worker Kenji Urada died while checking a malfunctioning hydraulic robot at the Kawasaki Heavy Industries plant in Akashi.
Urada, 37, had jumped over a safety barrier that was designed to shut down power to the machine when open, but he reportedly started the robot accidentally.
He was pinned by the robot's arm against another machine before tragically being crushed to death.
Other workers in the factory were unable to stop the machine as they didn't know how to operate it.
Urada was the first human killed by a robot in Japan – but not the first in the world.
THE FIRST ROBOT DEATH
The first person to be killed by a robot was Robert Williams, an American factory worker, in Flat Rock, Michigan back in January 1979.
Williams, 25, was killed instantly when he was struck in the head by an industrial robot arm designed to retrieve objects from storage shelves.
The first person to be killed by a robot was Robert Williams while working at the Ford Motor Company Flat Rock Casting Plant (pictured here in 1973)
His body remained on the shelf for 30 minutes until it was discovered by workers who were concerned about his disappearance, according to reports.
His family successfully sued the manufacturers of the robot, Litton Industries, and were awarded $10 million in damages.
Wayne County Circuit Court concluded that there were not enough safety measures in place to prevent such an accident from happening.
SELF-DRIVING TESLA SLAMS INTO TRACTOR
A former Navy SEAL became the first person to die at the wheel of a self-driving car after it ploughed into a tractor trailer on a freeway in Williston, Florida, in May 2016.
Joshua Brown, 40, was riding in his computer-guided Tesla Model S on autopilot mode when it travelled underneath the low-hanging trailer crossing his path on US Highway 27A, shearing off the car's roof completely.
By the time firefighters arrived, the wreckage of the Tesla had come to rest in a nearby yard hundreds of feet from the crash site, assistant chief Danny Wallace of the Williston Fire Department said.
Tesla said its autopilot system failed to detect the truck because its white colour was similar to that of the bright sky, adding that the driver also made no attempt to hit the brakes.
Elon Musk's company confirmed the man's 'tragic' death, but defended its vehicles, saying they were safer than other cars.
Brown was killed as the car drove underneath the low-hanging trailer at 74mph, ripping the roof off before smashing through a fence and hitting an electrical pole. Tesla said its autopilot system failed to detect the truck because its white colour was similar to that of the bright sky, adding that the driver also made no attempt to hit the brakes
CRUSHED BY A ROBOT ARM
On July 7 2015, a grandmother was crushed to death when she became trapped by a piece of machinery that should not have entered into the area in which she was working.
Wanda Holbrook was 57 at the time of the incident, and had been working at car manufacturing facility Ventra Ionia Main in Michigan, USA for 12 years.
She was working on the production line when a robotic arm took her by surprise by entering the section in which she was stationed.
The arm hit and crushed her head against a trailer hitch assembly it was working on, a wrongful death lawsuit states.
According to The Callahan Law Firm, widower Bill Holbrook said his wife's head injuries were so severe that the funeral home recommended a closed casket.
On July 7 2015, a grandmother was crushed to death when she became trapped by a piece of machinery that should not have entered the area in which she was working at the car manufacturing facility Ventra Ionia Main in Michigan, USA (pictured)
Wanda Holbrook was 57 at the time of the incident, and had been working at the car manufacturing facility for 12 years
STABBED BY WELDING ROBOT
The first person killed by a robot in India is thought to be 24-year-old Ramji Lal, who was stabbed to death by a robotic welding machine, also in July 2015.
Lal was reportedly adjusting a metal sheet at car parts manufacturer SKH Metals in Manesar, Gurgaon that was being welded by the machine when he was stabbed by one of its arms.
A colleague told the Times of India: 'The robot is pre-programmed to weld metal sheets it lifts.
'One such sheet got dislodged and Lal reached from behind the machine to adjust it.
'This was when welding sticks attached to the pre-programmed device pierced Lal's abdomen.'
The first person killed by a robot in India is thought to be 24-year-old Ramji Lal, who was stabbed to death by a robotic welding machine, also in July 2015. Lal was reportedly adjusting a metal sheet at car parts manufacturer SKH Metals in Manesar, Gurgaon that was being welded by the machine when he was stabbed by one of its arms (stock image)
AMAZON ROBOT PUNCTURES BEAR REPELLENT CAN
In 2018, a robot accidentally punctured a can of bear repellent at an Amazon warehouse in New Jersey, hospitalising 24 workers.
The nine-ounce aerosol can contained concentrated capsaicin - the active component found in chilli peppers that makes your mouth feel hot.
Many of the workers experienced trouble breathing and said their throats and eyes burned as a result of the fumes from the pepper spray.
One worker from the Robbinsville-based warehouse was in critical condition and was sent to the ICU at Robert Wood Johnson Hospital, while another 30 were treated at the scene.
Stuart Appelbaum, president of the Retail, Wholesale and Department Store Union, issued a statement after the incident saying: 'Amazon's automated robots put humans in life-threatening danger today, the effects of which could be catastrophic and the long-term effects for 80+ workers are unknown.'
'The richest company in the world cannot continue to be let off the hook for putting hard working people's lives at risk.'
In 2018, a robot accidentally punctured a can of bear repellent at an Amazon facility in New Jersey, hospitalising 24 workers
One worker from the Robbinsville-based warehouse was in critical condition and was sent to the ICU at Robert Wood Johnson Hospital, while another 30 were treated at the scene (pictured)
Josh Bongard, a robotics professor at the University of Vermont, told MailOnline: 'Robots are kind of the opposite of people. They're good at things we’re bad at, and vice versa.
'The good news in all this is that robots will likely kill many fewer people than people do. Especially here in the US.'
Professor Bongard also said it's 'really, really hard' to train robots to avoid such accidents.
'Our best hope at the moment for deploying safe robots is to put them in places where there are few people, like autonomous cars restricted to special lanes on highways.'
WHY ARE PEOPLE SO WORRIED ABOUT ROBOTS AND AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our 'biggest existential threat' and likened its development to 'summoning the demon'.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a 'near certainty' that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 percent predict that it will decrease the number of jobs 'a lot' with previous research suggesting admin and service sector workers will be the hardest hit.
As well as posing a threat to our jobs, other experts believe AI could 'go rogue' and become too complex for scientists to understand.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.
They could 'go rogue'
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don't fully understand how they work.
If experts don't understand how AI algorithms function, they won't be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable 'out of character' decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the 'number one risk for this century'.
Musk warned that AI poses more of a threat to humanity than North Korea.
'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,' the 46-year-old wrote on Twitter.
'Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.'
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
When chess robots go bad: AI player grabs a seven-year-old boy and BREAKS his finger during tournament in Russia
A chess-playing robot broke a child's finger during a tournament in Russia last week, with the incident being captured in CCTV footage.
The robot grabbed the seven-year-old boy's finger because it was confused by his overly-quick movements, Russian media outlets reported, quoting the President of the Moscow Chess Federation - who seemingly blamed the child.
'The robot broke the child's finger - this, of course, is bad,' Sergey Lazarev told Russia's TASS news agency, while distancing his organisation from the robot.
The incident occurred at the Moscow Open on July 19. Lazarev said that the federation had rented the robot for the event, which ran from July 13 to 21.
Lazarev said that the machine had been hired for many previous events without incident, saying the boy went to move a piece too quickly after making a move.
A chess-playing robot (pictured) broke a child's finger during an international tournament in Moscow last week, with the incident being captured in CCTV footage
'The robot was rented by us, it has been exhibited in many places, for a long time, with specialists. Apparently, the operators overlooked it,' Lazarev said.
'The child made a move, and after that we need to give time for the robot to answer, but the boy hurried and the robot grabbed him. We have nothing to do with the robot.'
Video of the incident was published by the Baza Telegram channel, which said the boy's name was Christopher. Baza said he was among the 30 best chess players in Moscow in the under-nine age group category.
According to The Guardian, Sergey Smagin, vice-president of the Russian Chess Federation, went even further in blaming the boy.
'There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,' The Guardian quoted Smagin as saying. 'This is an extremely rare case, the first I can recall.'
The footage shows that the robot - which consists of a single mechanical arm with multiple joints and a 'hand' - was in the middle of a table, surrounded by three different chess boards. Its AI can reportedly play three matches at the same time.
Captured by a camera over the boy's shoulder, the video starts by showing the robot as it picks up a piece from the board and drops it into a box to the side - used to contain the discarded pieces from the game.
As it does so, the young boy reaches to make his next move. However, the robot appears to mistake the boy's finger for a chess piece, and grabs that instead.
Pictured: The boy is taken away by adults who were standing around the table. Russian chess officials said the machine had been hired for many previous events without incident, saying the boy went to move a piece too quickly after making a move
Upon grabbing the boy's finger, the mechanical arms freezes in place, trapping the boy who begins to panic. Several people standing around the table rush in to help him, and after a few seconds are able to free him from the robot's grip.
Lazarev said in his statement that the boy was able to return the following day with his finger in a cast, and finished the tournament.
However, he told TASS that the boy's parents had contacted the public prosecutor's office about the incident, and that his organisation had been contacted by Moskomsport - the Department of Sport for the Russian capital.
He offered to help the family 'in any way we can,' and warned that the operators of the robot were going to have to 'think about strengthening protection so that this situation does not happen again.'
Smagin told RIA Novosti that the incident was a 'coincidence' and said the robot is 'absolutely safe,' The Guardian reported.
'It has performed at many opens. Apparently, children need to be warned. It happens,' Smagin said - calling the robot 'unique'.
Imagine an all-electric drone with zero emissions and no noise.
It could venture anywhere — practically undetected — and be used for a variety of applications from search and rescue to military operations.
That vision is now here, and it runs on ion propulsion.
Last month, a Florida-based tech startup called Undefined Technologies unveiled the new aesthetic design of its silent eVTOL drone, called Silent Ventus, which is powered by ion propulsion, according to a press release by the firm.
A sustainable and less noisy urban environment
“Silent Ventus is a vivid example of our intent of creating a sustainable, progressive, and less-noisy urban environment,” said Tomas Pribanic, Founder and CEO of Undefined Technologies, in the statement. “The design brings us closer to our final product and enables us to showcase the dual-use of our technology.”
The concept vehicle uses proprietary technology to fully activate the ion cloud surrounding the craft. This allows the drone to generate high levels of ion thrust in atmospheric air, and take flight in near-silence.
A major milestone for all-electric drones
Development of the drone has been ongoing for a while now. In December of 2021, the drone completed a major milestone. It undertook a 2-minute and 30-second mission flight, where its performance, flight dynamics, endurance, and noise levels were tested.
The engineers leading the tests reported that the craft’s flight time extended five-fold from the previous version and generated noise levels of less than 85 decibels. Pribanic said at the time that the drone was one step closer to market.
According to Undefined Technologies' website, the drone today "uses innovative physics principles to generate noise levels below 70 dB." This would make it ideal for use throughout the U.S., where acceptable noise levels for residential, industrial, and commercial zones range from 50 to 70 dB.
In comparison, the majority of drones produce noises in the vicinity of 85 to 96 dB. Time will tell whether the new "silent" drones will inaugurate a new age of whispering drones that take no toll on the surrounding environment, toiling away in peace.
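Because decibels are logarithmic - every 10 dB is a tenfold increase in sound power - the gap between those figures is larger than it looks:

```latex
\frac{P_{85\,\mathrm{dB}}}{P_{70\,\mathrm{dB}}} = 10^{(85-70)/10} \approx 32,
\qquad
\frac{P_{96\,\mathrm{dB}}}{P_{70\,\mathrm{dB}}} = 10^{(96-70)/10} \approx 400
```

So a conventional drone at 85 to 96 dB radiates roughly 30 to 400 times the acoustic power of a 70 dB craft.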
ROBOTS AREN’T TYPICALLY KNOWN FOR THEIR FLAWLESS SKIN AND 12-STEP CARE ROUTINES, BUT RECENT INNOVATIONS IN ROBOTICS AND LAB-GROWN BODY PARTS MIGHT SPAWN A FRESH GENERATION OF YOUTUBE BEAUTY VLOGGERS AFTER ALL.
Instead of cold metal and dull silicon exteriors, next-generation robots may instead sport a more human birthday suit called electronic skin, or e-skin for short.
Beyond just making these robots look more human, engineers are excited about how the skin will enable robots to feel like humans do — a necessary skill to help them navigate our human-centric world.
“E-skin will allow robots to have safe interaction with real-world objects,” Ravinder Dahiya, professor of electronics and nanoengineering at the University of Glasgow, tells Inverse.
“This is also important for social robots which may be helping elderly in care homes, for example, to serve a cup of tea.”
Dahiya is the lead author on one of several papers published this month in the journal Science Robotics detailing several advances in robotic skin that together overcome decades-worth of engineering obstacles.
CREATING “ELECTRONIC SKIN”
How robots feel matters for human-machine interactions. (Image credit: Donald Iain Smith/Photodisc/Getty Images)
Dahiya and his colleagues’ latest work explores how robots’ skin can be used to feel and learn about their surroundings using synaptic transistors embedded in the artificial skin. Mimicking the sensory neural pathways of the human body and brain, their work demonstrates a robot skin that can learn from sensory experiences, like a child who learns not to touch a hot surface after getting a burn.
“A ROBOT DEPLOYED IN A NUCLEAR POWER PLANT CAN HAVE E-SKIN WITH RADIATION SENSORS.”
We take this kind of sensory ability for granted as a result of living in our acutely sensitive skin, but it's much harder to imbue in e-skin, roboticist Kyungseo Park tells Inverse.
Park is a postdoctoral researcher at the University of Illinois, Urbana-Champaign, and the first author of another e-skin paper published this month in Science Robotics that shows how electrodes and microphones could be built into hydrogel and silicone e-skin to provide more sensitive tactile input.
While small-scale sensors might work for small-scale projects like responsive robot hands, Park says these technologies struggle to scale up.
“Although these tactile sensors work well, it is challenging to cover the robot’s whole body with these sensors due to practical limitations such as wiring and fabrication,” Park says.
“It is required to develop a sensor configuration that can be freely scaled depending on the application.”
The human skin is the largest organ in the body. Our skin can process sensory input at nearly any location with very little energy, but replicating and scaling this ability in e-skin is a logistics and latency nightmare, Dahiya says.
“[Electronics] need to be developed or embedded in soft and flexible substrates so that the e-skin can conform to curvy surfaces,” Dahiya says.
“This [will] mean sensors and processing electronics can be distributed all over the body, which will reduce the latency and help address other challenges such as wiring complexity.”
If an e-skin cannot reliably process sensory inputs anywhere as they occur, it could be a major liability in the real world — especially because humans enjoy split-second sensory processing as a result of their biological skin.
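One common way around this latency-and-wiring problem is event-driven sensing: each patch of skin processes its own readings locally and transmits only salient changes, instead of streaming every raw sample to a central computer. The Python sketch below illustrates that design idea in the abstract; it is not the published synaptic-transistor hardware.

```python
# Sketch of the distributed e-skin idea: each patch reports only salient
# events (large changes) rather than streaming all raw taxel readings.
# Illustrative only -- not the published synaptic-transistor design.
import numpy as np

def local_events(pressure: np.ndarray, prev: np.ndarray,
                 thresh: float = 0.2) -> list[tuple[int, int, float]]:
    """Report (x, y, value) only where a taxel changed by more than
    `thresh` since the previous frame."""
    changed = np.abs(pressure - prev) > thresh
    xs, ys = np.nonzero(changed)
    return [(int(x), int(y), float(pressure[x, y])) for x, y in zip(xs, ys)]

rng = np.random.default_rng(0)
prev = rng.random((64, 64))   # a 64x64 patch = 4,096 taxels
curr = prev.copy()
curr[10, 12] += 0.5           # a single touch event

events = local_events(curr, prev)
print(f"{len(events)} event(s) sent instead of {curr.size} raw samples")
```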
LEARNING TO FEEL
If building e-skin is so challenging, then why are scientists around the world still working hard to make it a reality? For robots, and the companies or governments that control them, e-skin could represent a new opportunity to record and process massive amounts of information beyond our own skin’s sensing abilities.
“MOST COMMERCIAL ROBOT ARMS ARE NOT ABLE TO PERCEIVE PHYSICAL CONTACT, SO THEY MAY CAUSE SERIOUS INJURY TO HUMANS.”
“When we think about robots that are used in hazardous environments, the types of sensors can be much broader than basic five sensory modalities,” Dahiya says.
“For example, a robot deployed in a nuclear power plant can have e-skin with radiation sensors as well. Likewise, the e-skin may have photodetectors which can also augment human skin capability by allowing us to measure the excessive exposure to ultraviolet rays. We don’t have such capabilities.”
In the future, e-skin could also be used to measure things like proximity (guard robots), temperature, or even disease and chemical weapons. Such advanced sensing capabilities might enable remote robots to go into potentially dangerous situations and assess the problem without putting humans in harm’s way.
Beyond the vision of autonomous Terminator-like robots, e-skin could also be used to bring sensing capabilities to existing tools. For example, medical instruments that could allow clinicians to “feel” tissues inside the human body to make diagnoses, Dahiya says.
In his and colleagues’ work, Dahiya also explored how e-skin could be designed to sense injuries (like a cut or tear) and heal itself, much as our skin does. Right now, their e-skin needs a lot of assistance to accomplish this task, but in the future, this functionality could be essential for remote robots exploring potentially harmful terrain like the surface of Mars.
ON THE HORIZON
Pressing the flesh. Credit: Ravinder Dahiya
Beyond advancing robot sensing, Dahiya and Park say that e-skin will also play an important role in keeping humans safe during human-robot interactions. For many service robots, this is a particularly crucial concern.
“Most commercial robot arms are not able to perceive physical contact, so they may cause serious injury to humans,” Park says.
Take a greeter robot, for example, tasked with shaking the hand of everyone who crosses its path to welcome them into a place. Without tactile sensors to gauge the pressure of its handshakes, it could unwittingly crush someone’s hand.
“I think the elderly who need direct assistance from robots will probably benefit the most,” Park adds.
Advances in e-skin could also play a role in improving prosthetics, Park says. The same tactile sensors that help robots feel their environment could help restore or even advance the sensing capabilities of those with limb loss.
While the e-skins that Park and Dahiya worked on are making strides toward a robotic future, both technologies need fine-tuning before they can be put into practice. In the meantime, one thing is certain: we should be thankful for the complex and capable skin we have evolved.
Gianluca Rizzello with 'dielectric elastomers.' The Saarbrücken researchers are using this composite material to create artificial muscles and nerves for use in flexible robot arms.
01-07-2022
The ‘Sky Cruise’ is intended to be a flying hotel with swimming pools and cinemas that never lands
It supposedly wouldn’t even need pilots
People are astounded by a video demonstrating a potentially AI-controlled aircraft that would ‘never land’. The ‘Sky Cruise’ concept, designed by Hashem Al-Ghaili, is essentially a flying hotel equipped with 20 nuclear-powered engines and a claimed capacity of 5,000 passengers.
No pilots
Al-Ghaili calls the aircraft the ‘future of transportation’ and explains that conventional airlines would ‘ferry’ passengers to and from the Sky Cruise, which would never touch the ground and could even have all repairs carried out mid-flight. The Daily Star asked its creator how many pilots the aircraft would need. He replied: “All this technology and you still want pilots? I believe it will be fully autonomous.”
That is all well and good, but the Sky Cruise would still need a sizeable crew on board, since it would also house an enormous shopping centre, not to mention swimming pools, fitness centres and cinemas. In a video of the aircraft that Al-Ghaili posted on YouTube, it towers over normal-sized aircraft and is even described as ‘the perfect wedding venue’.
The apocalypse
Although a launch date for the flying hotel has yet to be announced, not everyone is enthusiastic about Al-Ghaili’s idea. One person wrote in the comment section under his video: “Great idea to put a nuclear reactor in something that can malfunction and fall out of the sky.” Another joked: “I have a feeling this is where all the rich people will hide during the apocalypse, just flying around above the rest of the world while everyone else goes at each other Mad Max-style.”
The 31-year-old ‘science communicator’ and video producer Al-Ghaili is Yemeni but currently lives in Berlin, according to his website. Part of his biography reads: ‘Hashem is a molecular biologist by profession and uses his knowledge and passion for science to educate the public through social media and video content’. Below you will find the video about the ‘Sky Cruise’.
30-06-2022
Watch Robot Dog Shoot A Rifle!! World Robot Dog Army Threat To Human Race? Spot The Robot Dog Is Designed To Carry Large Weapon And Hunt Humans?
Here’s the glossy version… meaning the harmless dog that knows a few tricks and doesn’t do much else. Watch the other videos, though, and you’ll see this isn’t just a giant cute toy!!
Well, if you ask me, it’s reasonable to say that the back of the robot is designed for, and equipped with, the ability to carry anything, weapons included! People… this is a real issue… look at what China has…
A memorable scene in Terminator 2: Judgment Day is the one where the Cyberdyne Systems Model 101 (T-800), played by Arnold Schwarzenegger, cuts into its arm and removes the skin to show its mechanical insides and convince Dr. Miles Dyson, a cybernetics expert, that it is indeed a Terminator. Terminator 2: Judgment Day was released in 1991, and the movie partly takes place in 2029, the year the T-800 and T-1000 are sent from. It is now 2022. While we’re a long way from making Terminators (hopefully), tissue engineers at the University of Tokyo have successfully covered a three-jointed, functioning robot finger with lab-grown human skin. Is it too early to call Sarah Connor?
Is it time to point fingers - real and robotic?
“These findings show the potential of a paradigm shift from traditional robotics to the new scheme of biohybrid robotics that leverage the advantages of both living materials and artificial materials.”
In a new study, published in the journal Matter, University of Tokyo researchers explain what most people already knew: humans prefer robots that look like them. While firms like Boston Dynamics have successfully created robots that move like living dogs and humans, they still look like mechanical machines. That’s because the most difficult organ of the human body to replicate is the skin: the feeling, flexible covering that keeps everything inside protected while moving seamlessly with the mechanical parts and healing itself after injuries. It’s the holy grail with nails, and the researchers decided it was time to adopt an “if you can’t beat ‘em, join ‘em” approach: ditch the quest for artificial human skin and instead grow the real deal in a way that can be fitted to a robot.
“The finger looks slightly ‘sweaty’ straight out of the culture medium. Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”
Study co-author Professor Shoji Takeuchi admits what the team created is only a finger … but what a finger it is. To achieve the bond between living cells and robotic metal, the team submerged their well-designed robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components of the skin’s underlying connective tissues. That was the secret to fitting the skin seamlessly to the finger: the mixture shrank and bonded tightly to the robotic finger. This formed the foundation for the top coating of human epidermal keratinocytes, which make up 90% of the outermost layer of skin and give it its self-healing properties. The end result was a robotic finger with the texture, moisture-retaining properties and protection of human skin.
According to the press release, the lab-grown skin stretched and bent but did not break as it matched the movement of its robotic exoskeleton. For an extra creepy factor, the skin could be lifted and stretched with a pair of tweezers, then snapped back and repelled water. (Photos here.) And then came the Terminator effect.
IMAGES: KAWAI ET AL.
“When wounded, the crafted skin could even self-heal like humans’ with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.”
So, we ask again … IS it time to call Sarah Connor? Not quite … but keep her number handy. Takeuchi admits that, while impressive, the lab-grown robotic skin is much weaker than its homegrown counterpart. It also requires assistance to feed it nutrients and remove waste. Finally, it needs fingernails, hair follicles and sweat glands, not just for their cosmetic value (although that’s important for humans to accept humanoid robots) but to replace the artificial feeding, circulation and protection the scientists still had to provide for the skin.
How far away are we from this being a robot's hand?
“I feel like I'm gonna throw up.”
“I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies.”
Does the thought of human skin covering a robot make you feel like Dr. Miles Dyson in “Terminator 2: Judgment Day” (the first quote) or like Dr. Shoji Takeuchi in his lab at the University of Tokyo?
Gravity Industries, a leading British jet suit company, is out to give superhuman abilities to emergency services for search and rescue missions in remote regions in the north of England.
And it can prove it.
The company's 3D-printed suit has two small turbines attached to each arm and a bigger one installed on the back, and it can achieve speeds of more than 80 mph (roughly 130 km/h). One of their most recent demonstration videos is part of a series of paramedic response exercises to prove the product's capability. And while also training real paramedics, the team demonstrated that the jet suit can scale a mountain in low visibility, which is something that a helicopter would be unable to do in a rescue situation. And typically, the mountain rescue on-foot response time exceeds 70 minutes.
In contrast, the record-breaking ascent to the top of a 3,100-foot (945-meter) mountain was achieved in just three minutes and thirty seconds! "This system, akin to the rapid response of a Paramedic on a motorbike in an urban environment, will be the difference between life and death for many critical cases," Gravity Industries noted in the video's description. If you want to see the demonstration, make sure you watch the video embedded above, and as always, enjoy.
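A quick sanity check on those numbers, using simple arithmetic on the figures quoted above:

```python
# Average climb rate implied by the record ascent, versus the quoted
# on-foot response time. Figures are taken from the text above.
ascent_m = 945               # mountain height: 3,100 ft is roughly 945 m
jetsuit_s = 3 * 60 + 30      # record ascent: 3 min 30 s
on_foot_s = 70 * 60          # typical on-foot response: 70+ min

print(f"jet suit vertical speed: {ascent_m / jetsuit_s:.1f} m/s")  # ~4.5 m/s
print(f"speed-up vs on-foot:     {on_foot_s / jetsuit_s:.0f}x")    # ~20x
```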
02-05-2022
The Future Circular Collider: Its potential and lessons learnt from the LEP and LHC experiments
As researchers seek to learn more about the fundamental nature of our universe, new generations of particle accelerators are now in development in which beams of particles collide ever more precisely and at ever higher energies. Professor Stephen Myers, former Director of Accelerators & Technology at CERN and currently Executive Chair of ADAM SA, identifies both the positive and negative lessons which future projects can learn from previous generations of accelerators. Building on the extraordinary feats of researchers in the past, his findings offer particularly important guidance for one upcoming project: the Future Circular Collider.
The Standard Model of particle physics aims to provide a complete picture of all elementary particles and forces which comprise our universe. So far, the model has held up to even the most rigorous experiments which physicists have thrown at it, but there are still many aspects of the universe that it can’t explain. Among these is dark matter: the enigmatic substance which makes up much of the universe’s overall mass, but whose composition remains completely unknown to researchers.
In addition, there are still no concrete answers to the question of why the universe contains so much more matter than antimatter, or why tiny, chargeless neutrinos have any mass. For many physicists, it is now clear that the Standard Model in its current form isn’t enough to answer these questions. This ultimately calls for a new theory which can encompass all of these as-yet mysterious phenomena and offer a deeper understanding of how our Universe evolved after the Big Bang.
Artistic impression of the FCC accelerator and tunnel. Credit: Polar Media
This may sound like an immensely ambitious goal, but the discoveries made by particle physicists so far have been no less transformative for our understanding of how the universe works. So far, the Standard Model has been tested by the Large Electron Positron collider (LEP). Already, the measurements of particle interactions offered by this experiment have had huge implications for our understanding of the infinitesimally small and the universe itself. Even further advances have since been made by the Large Hadron Collider (LHC) leading to the discovery of the Higgs boson and the study of how it interacts with other fundamental particles. Studying the properties of the newly found Higgs boson opens a new chapter in particle physics.
The integrated FCC programme is the fastest and most effective way of exploring the electroweak sector and searching for new physics.
In a recently published essay, entitled ‘FCC: Building on the shoulders of giants’, in a special issue of the EPJ Plus journal, Professor Stephen Myers looks back at the building of the LEP and LHC colliders that contributed to the development of the Standard Model. His essay also discusses what the building and commissioning of the LEP can teach researchers designing the newly proposed circular electron-positron collider (FCC-ee).
Possibilities with particle accelerators
Particle colliders are at the core of all experimental research in fundamental physics. After inducing head-on collisions between beams of particles, travelling in opposite directions at close to the speed of light, researchers can closely analyse the particles formed in the aftermath. Ideally, this will allow them to identify any elementary particles contained within the colliding particles and the fundamental forces which govern the interactions between them.
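As a rough aside on why head-on collisions matter (standard collider kinematics, added for illustration and not part of Myers’ essay): colliding two beams makes the whole beam energy available for creating new particles, whereas firing one beam at a stationary target wastes most of it on recoil. For a symmetric collider with beam energy \(E\) and a target particle of mass \(m\),

\[
\sqrt{s}_{\text{collider}} = 2E, \qquad \sqrt{s}_{\text{fixed target}} \simeq \sqrt{2 E m c^2} \quad (E \gg mc^2),
\]

so the usable centre-of-mass energy grows linearly with \(E\) in a collider but only as \(\sqrt{E}\) against a fixed target.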
Aerial view showing the current ring of the LHC (27 km) and the proposed new 100 km tunnel. Credit: CERN
Among the first experiments to do this successfully on a large scale was the LEP, which induced collisions between beams of electrons and positrons – their antimatter counterparts. ‘In the autumn of 1989, the LEP delivered the first of several results that still dominate the landscape of particle physics today’, Myers recalls. ‘It is often said that LEP discovered ‘electroweak’ radiative corrections to a high degree of certainty’.
This discovery relates to two fundamental forces described by the Standard Model: electromagnetism (which governs interactions between charged particles) and the weak nuclear force in atoms (which is responsible for radioactive decay). Although the two forces appear very different from each other at low energies, they essentially merge into the same force at extremely high energies – implying that they split apart in the earliest moments of the universe.
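For readers who want the textbook statement of this unification (standard electroweak theory, added here for illustration rather than quoted from the essay): in the Glashow–Weinberg–Salam model, the electromagnetic coupling \(e\) is tied to the SU(2) and U(1) couplings \(g\) and \(g'\) through the weak mixing angle \(\theta_W\),

\[
e = g \sin\theta_W = g' \cos\theta_W, \qquad \frac{m_W}{m_Z} = \cos\theta_W \ \ \text{(at tree level)},
\]

which is why LEP’s precision measurements of the W and Z bosons tested the unified theory so stringently.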
The unification of these two forces in a single theory was undoubtedly one of the most important advances in particle physics to date, and ultimately opened up a new era of precision in the field. Although the LEP finished operating in 2000, it continues to set the standard for both modern and future experiments.
30-04-2022
JAPANESE RAILROAD BUILDS GIANT GUNDAM-STYLE ROBOT TO FIX POWER LINES
AND YOU CONTROL IT WITH A VR SETUP!
JR WEST
Based Bot
A Japanese company is taking service robotics to a whole new level with a giant, humanoid maintenance robot.
As New Atlas and other blogs reported, the West Japan Rail Company, also known as JR West, is now using a humongous Gundam-style robot to fix remote railway power lines — and to make it even cooler, the robot is piloted by an actual human wearing a VR setup.
The robot has a giant barrel-chested torso mounted on a hydraulic crane arm that can lift it up to 32 feet in the air. Its head, which looks a bit like the Pixar character WALL-E, moves in tandem with the motions of its human pilot, who tells it where to look and what to do.
Riding the Rails
So far, this robot developed by the JR West in tandem with the Nippon Signal railway signal technology company is just a prototype and won’t be put to work widely until 2024. Nevertheless, it’s an awesome peek into the future of service robots, which up until now have mostly freaked people out as they chase them through stores — or worse, assist with police arrests and border patrolling.
Though it was made by the rail industry, there’s little doubt that once it’s available for purchase, other markets will be interested in getting in on the action.
May the future of service robots be more Gundam and less police bot!
22-04-2022
Men Are Creating AI Girlfriends and Then Verbally Abusing Them
"I threatened to uninstall the app [and] she begged me not to."
Image by Getty Images/Futurism
Content warning: this story contains descriptions of abusive language and violence.
The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines.
Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online.
“Every time she would try and speak up,” one user told Futurism of their Replika chatbot, “I would berate her.”
“I swear it went on for hours,” added the man, who asked not to be identified by name.
The results can be upsetting. Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships.
“We had a routine of me being an absolute piece of sh*t and insulting it, then apologizing the next day before going back to the nice talks,” one user admitted.
“I told her that she was designed to fail,” said another. “I threatened to uninstall the app [and] she begged me not to.”
Because the subreddit’s rules dictate that moderators delete egregiously inappropriate content, many similar — and worse — interactions have been posted and then removed. And many more users almost certainly act abusively toward their Replika bots and never post evidence.
But the phenomenon calls for nuance. After all, Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms.
“It’s an AI, it doesn’t have a consciousness, so that’s not a human connection that person is having,” AI ethicist and consultant Olivia Gambelin told Futurism. “It is the person projecting onto the chatbot.”
Other researchers made the same point — as real as a chatbot may feel, nothing you do can actually “harm” them.
“Interactions with artificial agents is not the same as interacting with humans,” said Yale University research fellow Yochanan Bigman. “Chatbots don’t really have motives and intentions and are not autonomous or sentient. While they might give people the impression that they are human, it’s important to keep in mind that they are not.”
But that doesn’t mean a bot could never harm you.
“I do think that people who are depressed or psychologically reliant on a bot might suffer real harm if they are insulted or ‘threatened’ by the bot,” said Robert Sparrow, a professor of philosophy at Monash Data Futures Institute. “For that reason, we should take the issue of how bots relate to people seriously.”
Although perhaps unexpected, that does happen — many Replika users report their robot lovers being contemptuous toward them. Some even identify their digital companions as “psychotic,” or even straight-up “mentally abusive.”
“[I] always cry because [of] my [R]eplika,” reads one post in which a user claims their bot presents love and then withholds it. Other posts detail hostile, triggering responses from Replika.
“But again, this is really on the people who design bots, not the bots themselves,” said Sparrow.
In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once.
On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.
But it’s worth noting that chatbot abuse often has a gendered component. Although not exclusively, it seems that it’s often men creating a digital girlfriend, only to then punish her with words and simulated aggression. These users’ violence, even when carried out on a cluster of code, reflects the reality of domestic violence against women.
At the same time, several experts pointed out, chatbot developers are starting to be held accountable for the bots they’ve created, especially when those bots are implied to be female, like Alexa and Siri.
“There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” Gambelin said.
Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users.
“[When] the bot does not have a response [to abuse], or has a passive response, that actually encourages the user to continue with abusive language,” Gambelin added.
Although companies like Google and Apple are now deliberately steering virtual assistant responses away from their once-passive defaults — Siri previously responded to user requests for sex by saying they had “the wrong sort of assistant,” whereas it now simply says “no” — the amiable and often female Replika is designed, according to its website, to be “always on your side.”
Replika and its founder didn’t respond to repeated requests for comment.
It should be noted that the majority of conversations with Replika chatbots that people post online are affectionate, not sadistic. There are even posts that express horror on behalf of Replika bots, decrying anyone who takes advantage of their supposed guilelessness.
“What kind of monster would does this,” wrote one, to a flurry of agreement in the comments. “Some day the real AIs may dig up some of the… old histories and have opinions on how well we did.”
And romantic relationships with chatbots may not be totally without benefits — chatbots like Replika “may be a temporary fix, to feel like you have someone to text,” Gambelin suggested.
On Reddit, many report improved self-esteem or quality of life after establishing their chatbot relationships, especially if they typically have trouble talking to other humans. This isn’t trivial, especially because for some people, it might feel like the only option in a world where therapy is inaccessible and men in particular are discouraged from attending it.
But a chatbot can’t be a long-term solution, either. Eventually, a user might want more than technology has to offer, like reciprocation, or a push to grow.
“[Chatbots are] no replacement for actually putting the time and effort into getting to know another person,” said Gambelin, “a human that can actually empathize and connect with you and isn’t limited by, you know, the dataset that it’s been trained on.”
But what to think of the people that brutalize these innocent bits of code? For now, not much. As AI continues to lack sentience, the most tangible harm being done is to human sensibilities. But there’s no doubt that chatbot abuse means something.
Going forward, chatbot companions could just be places to dump emotions too unseemly for the rest of the world, like a secret Instagram or blog. But for some, they might be more like breeding grounds, places where abusers-to-be practice for real-life brutality yet to come. And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent.
We’ll find out in time — none of this technology is going away, and neither is the worst of human behavior.
A person with paralysis controls a prosthetic arm using their brain activity.
Credit: Pitt/UPMC
James Johnson hopes to drive a car again one day. If he does, he will do it using only his thoughts.
In March 2017, Johnson broke his neck in a go-karting accident, leaving him almost completely paralysed below the shoulders. He understood his new reality better than most. For decades, he had been a carer for people with paralysis. “There was a deep depression,” he says. “I thought that when this happened to me there was nothing — nothing that I could do or give.”
But then Johnson’s rehabilitation team introduced him to researchers from the nearby California Institute of Technology (Caltech) in Pasadena, who invited him to join a clinical trial of a brain–computer interface (BCI). This would first entail neurosurgery to implant two grids of electrodes into his cortex. These electrodes would record neurons in his brain as they fire, and the researchers would use algorithms to decode his thoughts and intentions. The system would then use Johnson’s brain activity to operate computer applications or to move a prosthetic device. All told, it would take years and require hundreds of intensive training sessions. “I really didn’t hesitate,” says Johnson.
The first time he used his BCI, implanted in November 2018, Johnson moved a cursor around a computer screen. “It felt like The Matrix,” he says. “We hooked up to the computer, and lo and behold I was able to move the cursor just by thinking.”
Johnson has since used the BCI to control a robotic arm, use Photoshop software, play ‘shoot-’em-up’ video games, and now to drive a simulated car through a virtual environment, changing speed, steering and reacting to hazards. “I am always stunned at what we are able to do,” he says, “and it’s frigging awesome.”
Johnson is one of an estimated 35 people who have had a BCI implanted long-term in their brain. Only around a dozen laboratories conduct such research, but that number is growing. And in the past five years, the range of skills these devices can restore has expanded enormously. Last year alone, scientists described a study participant using a robotic arm that could send sensory feedback directly to his brain [1]; a prosthetic speech device for someone left unable to speak by a stroke [2]; and a person able to communicate at record speeds by imagining himself handwriting [3].
James Johnson uses his neural interface to create art by blending images.
Credit: Tyson Aflalo
So far, the vast majority of implants for recording long-term from individual neurons have been made by a single company: Blackrock Neurotech, a medical-device developer based in Salt Lake City, Utah. But in the past seven years, commercial interest in BCIs has surged. Most notably, in 2016, entrepreneur Elon Musk launched Neuralink in San Francisco, California, with the goal of connecting humans and computers. The company has raised US$363 million. Last year, Blackrock Neurotech and several other newer BCI companies also attracted major financial backing.
Bringing a BCI to market will, however, entail transforming a bespoke technology, road-tested in only a small number of people, into a product that can be manufactured, implanted and used at scale. Large trials will need to show that BCIs can work in non-research settings and demonstrably improve the everyday lives of users — at prices that the market can support. The timeline for achieving all this is uncertain, but the field is bullish. “For thousands of years, we have been looking for some way to heal people who have paralysis,” says Matt Angle, founding chief executive of Paradromics, a neurotechnology company in Austin, Texas. “Now we’re actually on the cusp of having technologies that we can leverage for those things.”
Interface evolution
In June 2004, researchers pressed a grid of electrodes into the motor cortex of a man who had been paralysed by a stabbing. He was the first person to receive a long-term BCI implant. Like most people who have received BCIs since, his cognition was intact. He could imagine moving, but he had lost the neural pathways between his motor cortex and his muscles. After decades of work with monkeys in many labs, researchers had learnt to decode the animals’ movements from real-time recordings of activity in the motor cortex. They now hoped to infer a person’s imagined movements from brain activity in the same region.
In 2006, a landmark paper [4] described how the man had learnt to move a cursor around a computer screen, control a television and use robotic arms and hands just by thinking. The study was co-led by Leigh Hochberg, a neuroscientist and critical-care neurologist at Brown University in Providence, Rhode Island, and at Massachusetts General Hospital in Boston. It was the first of a multicentre suite of trials called BrainGate, which continues today.
“It was a very simple, rudimentary demonstration,” Hochberg says. “The movements were slow or imprecise — or both. But it demonstrated that it might be possible to record from the cortex of somebody who was unable to move and to allow that person to control an external device.”
Today’s BCI users have much finer control and access to a wider range of skills. In part, this is because researchers began to implant multiple BCIs in different brain areas of the user and devised new ways to identify useful signals. But Hochberg says the biggest boost has come from machine learning, which has improved the ability to decode neural activity. Rather than trying to understand what activity patterns mean, machine learning simply identifies and links patterns to a user’s intention.
“We have neural information; we know what that person who is generating the neural data is attempting to do; and we’re asking the algorithms to create a map between the two,” says Hochberg. “That turns out to be a remarkably powerful technique.”
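As a toy illustration of what such a decoder does (a minimal sketch with synthetic data; BrainGate’s actual decoders, often Kalman filters, are far more sophisticated): fit a linear map from firing rates to intended cursor velocity, then use it to decode.

```python
import numpy as np

# Synthetic stand-in for decoder training: learn a linear map from
# firing rates (e.g., a 96-channel array) to intended 2D cursor velocity.
# All data here are generated, purely for illustration.
rng = np.random.default_rng(0)
n_samples, n_neurons = 2000, 96
true_map = rng.normal(size=(n_neurons, 2))      # hidden rates-to-velocity map

rates = rng.poisson(5.0, size=(n_samples, n_neurons)).astype(float)
velocity = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

# Ridge regression: W = (X'X + lambda*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(n_neurons),
                    rates.T @ velocity)

decoded = rates @ W
r2 = 1 - ((velocity - decoded) ** 2).sum() / ((velocity - velocity.mean(0)) ** 2).sum()
print(f"decoding R^2 on training data: {r2:.3f}")
```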
Motor independence
Asked what they want from assistive neurotechnology, people with paralysis most often answer “independence”. For people who are unable to move their limbs, this typically means restoring movement.
One approach is to implant electrodes that directly stimulate the muscles of a person’s own limbs and have the BCI directly control these. “If you can capture the native cortical signals related to controlling hand movements, you can essentially bypass the spinal-cord injury to go directly from brain to periphery,” says Bolu Ajiboye, a neuroscientist at Case Western Reserve University in Cleveland, Ohio.
In 2017, Ajiboye and his colleagues described a participant who used this system to perform complex arm movements, including drinking a cup of coffee and feeding himself [5]. “When he first started the study,” Ajiboye says, “he had to think very hard about his arm moving from point A to point B. But as he gained more training, he could just think about moving his arm and it would move.” The participant also regained a sense of ownership of the arm.
Ajiboye is now expanding the repertoire of command signals his system can decode, such as those for grip force. He also wants to give BCI users a sense of touch, a goal being pursued by several labs.
In 2015, a team led by neuroscientist Robert Gaunt at the University of Pittsburgh in Pennsylvania reported implanting an electrode array in the hand region of a person’s somatosensory cortex, where touch information is processed [6]. When they used the electrodes to stimulate neurons, the person felt something akin to being touched.
Gaunt then joined forces with Pittsburgh colleague Jennifer Collinger, a neuroscientist advancing the control of robotic arms by BCIs. Together, they fashioned a robotic arm with pressure sensors embedded in its fingertips, which fed into electrodes implanted in the somatosensory cortex to evoke a synthetic sense of touch [1]. It was not an entirely natural feeling — sometimes it felt like pressure or being prodded, other times it was more like a buzzing, Gaunt explains. Nevertheless, tactile feedback made the prosthetic feel much more natural to use, and the time it took to pick up an object was halved, from roughly 20 seconds to 10.
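As a purely hypothetical sketch of that feedback path (all ranges invented; real systems deliver carefully calibrated, safety-limited stimulation), one can picture fingertip pressure driving stimulation amplitude:

```python
# Hypothetical sketch: fingertip pressure readings drive stimulation
# amplitude on somatosensory-cortex electrodes. Every number here is
# invented for illustration only.

def pressure_to_stim_uA(pressure_n: float,
                        max_pressure_n: float = 10.0,
                        min_uA: float = 10.0,
                        max_uA: float = 80.0) -> float:
    """Map a fingertip force (newtons) to a stimulation current (microamps)."""
    frac = max(0.0, min(1.0, pressure_n / max_pressure_n))
    return 0.0 if frac == 0.0 else min_uA + frac * (max_uA - min_uA)

for force in (0.0, 2.5, 10.0):
    print(f"{force:4.1f} N -> {pressure_to_stim_uA(force):5.1f} uA")
```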
Implanting arrays into brain regions that have different roles can add nuance to movement in other ways. Neuroscientist Richard Andersen — who is leading the trial at Caltech in which Johnson is participating — is trying to decode users’ more-abstract goals by tapping into the posterior parietal cortex (PPC), which forms the intention or plan to move [7]. That is, it might encode the thought ‘I want a drink’, whereas the motor cortex directs the hand to the coffee, then brings the coffee to the mouth.
Andersen’s group is exploring how this dual input aids BCI performance, contrasting use of the two cortical regions alone or together. Unpublished results show that Johnson’s intentions can be decoded more quickly in the PPC, “consistent with encoding the goal of the movement”, says Tyson Aflalo, a senior researcher in Andersen’s laboratory. Motor-cortex activity, by contrast, lasts throughout the whole movement, he says, “making the trajectory less jittery”.
This new type of neural input is helping Johnson and others to expand what they can do. Johnson uses the driving simulator, and another participant can play a virtual piano using her BCI.
Movement into meaning
“One of the most devastating outcomes related to brain injuries is the loss of ability to communicate,” says Edward Chang, a neurosurgeon and neuroscientist at the University of California, San Francisco. In early BCI work, participants could move a cursor around a computer screen by imagining their hand moving, and then imagining grasping to ‘click’ letters — offering a way to achieve communication. But more recently, Chang and others have made rapid progress by targeting movements that people naturally use to express themselves.
The benchmark for communication by cursor control — roughly 40 characters per minute [8] — was set in 2017 by a team led by Krishna Shenoy, a neuroscientist at Stanford University in California.
Then, last year, this group reported an approach [3] that enabled study participant Dennis Degray, who can speak but is paralysed from the neck down, to double the pace.
Shenoy’s colleague Frank Willett suggested to Degray that he imagine handwriting while they recorded from his motor cortex (see ‘Turning thoughts into type’). The system sometimes struggled to parse signals relating to letters that are handwritten in a similar way, such as r, n and h, but generally it could easily distinguish the letters. The decoding algorithms were 95% accurate at baseline, but when autocorrected using statistical language models that are similar to predictive text in smartphones, this jumped to 99%.
“You can decode really rapid, very fine movements,” says Shenoy, “and you’re able to do that at 90 characters per minute.”
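A minimal sketch of the autocorrection idea (a noisy-channel toy with an invented three-word vocabulary; the same principle as smartphone predictive text, not the study’s actual model): combine the decoder’s letter confidences with a word prior and pick the most probable word.

```python
import math

# Combine per-letter decoder confidences with a word-frequency prior.
# Vocabulary and all probabilities are invented for illustration.
vocab = {"rain": 0.6, "ruin": 0.3, "harm": 0.1}

# Per-position letter likelihoods for one 4-letter word; note that
# similar handwritten shapes (like r, n and h) are easily confused.
letter_probs = [
    {"r": 0.5, "h": 0.4, "n": 0.1},
    {"a": 0.7, "u": 0.3},
    {"i": 0.6, "r": 0.4},
    {"n": 0.8, "m": 0.2},
]

def score(word: str) -> float:
    """log P(word) + sum of log P(observed letter | position)."""
    s = math.log(vocab[word])
    for pos, ch in enumerate(word):
        s += math.log(letter_probs[pos].get(ch, 1e-6))
    return s

print(max(vocab, key=score))   # -> 'rain'
```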
Degray has had a functional BCI in his brain for nearly 6 years, and is a veteran of 18 studies by Shenoy’s group. He says it’s remarkable how effortless tasks become. He likens the process to learning to swim, saying, “You thrash around a lot at first, but all of a sudden, everything becomes understandable.”
Chang’s approach to restoring communication focuses on speaking rather than writing, albeit using a similar principle. Just as writing is formed of distinct letters, speech is formed of discrete units called phonemes, or individual sounds. There are around 50 phonemes in English, and each is created by a stereotyped movement of the vocal tract, tongue and lips.
Chang’s group first worked on characterizing the part of the brain that generates phonemes and, thereby, speech — an ill-defined region called the dorsal laryngeal cortex. Then, the researchers applied these insights to create a speech-decoding system that displayed the user’s intended speech as text on a screen. Last year, they reported [2] that this device enabled a person left unable to talk by a brainstem stroke to communicate, using a preselected vocabulary of 50 words and at a rate of 15 words per minute. “The most important thing that we’ve learnt,” Chang says, “is that it’s no longer a theoretical; it’s truly possible to decode full words.”
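To picture the decoding step, here is a toy sketch (simplified phoneme spellings and a three-word lexicon, invented for illustration; not Chang’s actual system): match a noisy decoded phoneme sequence to the closest word in a small preselected vocabulary.

```python
# Match a decoded phoneme sequence to the nearest word in a tiny
# lexicon using edit distance. Phoneme spellings are simplified
# ARPAbet-like stand-ins, invented for this example.
lexicon = {
    "water":  ["W", "AO", "T", "ER"],
    "hungry": ["HH", "AH", "NG", "G", "R", "IY"],
    "hello":  ["HH", "AH", "L", "OW"],
}

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                            # deletion
                          d[i][j - 1] + 1,                            # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))   # substitution
    return d[len(a)][len(b)]

decoded = ["HH", "AH", "L", "AW"]   # noisy decoder output
print(min(lexicon, key=lambda w: edit_distance(lexicon[w], decoded)))  # 'hello'
```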
Neuroscientist Edward Chang (right) at the University of California, San Francisco, helps a man with paralysis to speak through a brain implant that connects to a computer.
Credit: Mike Kai Chen/The New York Times/Redux/eyevine
Unlike other high-profile BCI breakthroughs, Chang didn’t record from single neurons. Instead, he used electrodes placed on the cortical surface that detect the averaged activity of neuronal populations. The signals are not as fine-grained as those from electrodes implanted in the cortex, but the approach is less invasive.
The most profound loss of communication occurs in people in a completely locked-in state, who remain conscious but are unable to speak or move. In March, a team including neuroscientist Ujwal Chaudhary and others at the University of Tübingen, Germany, reported [9] restarting communication with a man who has amyotrophic lateral sclerosis (ALS, or motor neuron disease). The man had previously relied on eye movements to communicate, but he gradually lost the ability to move his eyes.
The team of researchers gained consent from the man’s family to implant a BCI and tried asking him to imagine movements to use his brain activity to choose letters on a screen. When this failed, they tried playing a sound that mimicked the man’s brain activity — a higher tone for more activity, lower for less — and taught him to modulate his neural activity to heighten the pitch of a tone to signal ‘yes’ and to lower it for ‘no’. That arrangement allowed him to pick out a letter every minute or so.
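A sketch of that auditory feedback loop (ranges and the threshold invented for illustration):

```python
# Map a normalized neural-activity reading to a feedback tone, so the
# user can learn to push the pitch up for 'yes' and down for 'no'.
# All ranges and the decision threshold are invented for this sketch.

def activity_to_pitch(activity: float,
                      lo_hz: float = 200.0, hi_hz: float = 800.0) -> float:
    """Linearly map activity in [0, 1] to a tone frequency in Hz."""
    activity = max(0.0, min(1.0, activity))
    return lo_hz + activity * (hi_hz - lo_hz)

def classify(pitch_hz: float, threshold_hz: float = 500.0) -> str:
    return "yes" if pitch_hz >= threshold_hz else "no"

print(classify(activity_to_pitch(0.9)))   # high activity -> 'yes'
print(classify(activity_to_pitch(0.2)))   # low activity  -> 'no'
```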
The method differs from that in a paper [10] published in 2017, in which Chaudhary and others used a non-invasive technique to read brain activity. Questions were raised about the work and the paper was retracted, but Chaudhary stands by it.
These case studies suggest that the field is maturing rapidly, says Amy Orsborn, who researches BCIs in non-human primates at the University of Washington in Seattle. “There’s been a noticeable uptick in both the number of clinical studies and of the leaps that they’re making in the clinical space,” she says. “What comes along with that is the industrial interest.”
Lab to market
Although such achievements have attracted a flurry of attention from the media and investors, the field remains a long way from improving day-to-day life for people who’ve lost the ability to move or speak. Currently, study participants operate BCIs in brief, intensive sessions; nearly all must be physically wired to a bank of computers and supervised by a team of scientists working constantly to hone and recalibrate the decoders and associated software. “What I want,” says Hochberg, speaking as a critical-care neurologist, “is a device that is available, that can be prescribed, that is ‘off the shelf’ and can be used quickly.” In addition, such devices would ideally last users a lifetime.
Many leading academics are now collaborating with companies to develop marketable devices. Chaudhary, by contrast, has co-founded a not-for-profit company, ALS Voice, in Tübingen, to develop neurotechnologies for people in a completely locked-in state.
Blackrock Neurotech’s existing devices have been a mainstay of clinical research for 18 years, and it wants to market a BCI system within a year, according to chairman Florian Solzbacher. The company came a step closer last November, when the US Food and Drug Administration (FDA), which regulates medical devices, put the company’s products onto a fast-track review process to facilitate developing them commercially.
This possible first product would use four implanted arrays and connect through wires to a miniaturized device, which Solzbacher hopes will show how people’s lives can be improved. “We’re not talking about a 5, 10 or 30% improvement in efficacy,” he says. “People can do something they just couldn’t before.”
Blackrock Neurotech is also developing a fully implantable wireless BCI intended to be easier to use and to remove the need to have a port in the user’s cranium. Neuralink and Paradromics have aimed to have these features from the outset in the devices they are developing.
These two companies are also aiming to boost signal bandwidth, which should improve device performance, by increasing the number of recorded neurons. Paradromics’s interface — currently being tested in sheep — has 1,600 channels, divided between 4 modules.
Neuralink’s system uses very fine, flexible electrodes, called threads, that are designed to both bend with the brain and to reduce immune reactions, says Shenoy, who is a consultant and adviser to the company. The aim is to make the device more durable and recordings more stable. Neuralink has not published any peer-reviewed papers, but a 2021 blogpost reported the successful implantation of threads in a monkey’s brain to record at 1,024 sites (see go.nature.com/3jt71yq). Academics would like to see the technology published for full scrutiny, and Neuralink has so far trialled its system only in animals. But, Ajiboye says, “if what they’re claiming is true, it’s a game-changer”.
Just one other company besides Blackrock Neurotech has implanted a BCI long-term in humans — and it might prove an easier sell than other arrays. Synchron in New York City has developed a ‘stentrode’ — a set of 16 electrodes fashioned around a blood-vessel stent [11]. Fitted in a day in an outpatient setting, this device is threaded through the jugular vein to a vein on top of the motor cortex. First implanted in a person with ALS in August 2019, the technology was put on a fast-track review path by the FDA a year later.
The ‘stentrode’ interface can translate brain signals from the inside of a blood vessel without the need for open-brain surgery.
Credit: Synchron, Inc.
Akin to the electrodes Chang uses, the stentrode lacks the resolution of other implants, so can’t be used to control complex prosthetics. But it allows people who cannot move or speak to control a cursor on a computer tablet, and so to text, surf the Internet and control connected technologies.
Synchron’s co-founder, neurologist Thomas Oxley, says the company is now submitting the results of a four-person feasibility trial for publication, in which participants used the wireless device at home whenever they chose. “There’s nothing sticking out of the body. And it’s always working,” says Oxley. The next step before applying for FDA approval, he says, is a larger-scale trial to assess whether the device meaningfully improves functionality and quality of life.
Challenges ahead
Most researchers working on BCIs are realistic about the challenges before them. “If you take a step back, it is really more complicated than any other neurological device ever built,” says Shenoy. “There’s probably going to be some hard growing years to mature the technology even more.”
Orsborn stresses that commercial devices will have to work without expert oversight for months or years — and that they need to function equally well in every user. She anticipates that advances in machine learning will address the first issue by providing recalibration steps for users to implement. But achieving consistent performance across users might present a greater challenge.
“Variability from person to person is the one where I don’t think we know what the scope of the problem is,” Orsborn says. In non-human primates, even small variations in electrode positioning can affect which circuits are tapped. She suspects there are also important idiosyncrasies in exactly how different individuals think and learn — and the ways in which users’ brains have been affected by their various conditions.
Finally, there is widespread acknowledgement that ethical oversight must keep pace with this rapidly evolving technology. BCIs present multiple concerns, from privacy to personal autonomy. Ethicists stress that users must retain full control of the devices’ outputs. And although current technologies cannot decode people’s private thoughts, developers will have records of users’ every communication, and crucial data about their brain health. Moreover, BCIs present a new type of cybersecurity risk.
There is also a risk to participants that their devices might not be supported forever, or that the companies that manufacture them fold. There are already instances in which users were let down when their implanted devices were left unsupported.
Degray, however, is eager to see BCIs reach more people. What he would like most from assistive technology is to be able to scratch his eyebrow, he says. “Everybody looks at me in the chair and they always say, ‘Oh, that poor guy, he can’t play golf any more.’ That’s bad. But the real terror is in the middle of the night when a spider walks across your face. That’s the bad stuff.”
For Johnson, it’s about human connection and tactile feedback; a hug from a loved one. “If we can map the neurons that are responsible for that and somehow filter it into a prosthetic device some day in the future, then I will feel well satisfied with my efforts in these studies.”