The purpose of this blog is the creation of an open, international, independent and free forum where every UFO researcher can publish the results of his/her research. The languages used for this blog are Dutch, English and French. You can find the articles of a colleague by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I have the right to refuse an addition or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the leading authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt (Belgian UFO reporting point) and Caelestia, two organisations that carry out in-depth research, even if they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you won't want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventure among the stars!
Do you have questions or want to know more? Then don't hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
27-07-2022
Researchers Turn Dead Spiders Into 'Necrobotic' Grippers
See how engineers at Rice University transformed wolf spiders into necrobots.
Amanda Kooser
Researchers inserted a needle into a dead spider to operate its legs like a gripper.
Video screenshot by Amanda Kooser/CNET
Grippers made from dead spiders. For some, this might sound like a horror movie. For others, it's a fascinating mashup of robotics and the natural world.
A team of engineers at Rice University worked out how to reanimate (after a fashion) the legs of dead spiders. This isn't a Frankenstein's monster sort of situation. The researchers used a needle and air to activate the spider legs, mimicking how the appendages work in living spiders. Because the spiders are dead and are used in a robotic fashion, the engineers call this "necrobotics."
Mechanical engineering graduate student Faye Yap is the lead author of a paper on the spider project published in the journal Advanced Science this week. "It happens to be the case that the spider, after it's deceased, is the perfect architecture for small scale, naturally derived grippers," said co-author Daniel Preston in a Rice statement on Monday.
It's one thing to read about this project and another to see it in action. Fortunately (or not, depending on how you feel about all this), Rice delivered a video that explains the process for creating the necrobots and shows how it works.
When alive, spiders use blood to extend and contract their legs through a hydraulic process. The researchers euthanized wolf spiders, inserted a needle into the chamber of the body that controls the legs, sealed it with glue and then used air to trigger the opening and closing of the legs.
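For the technically curious, the control logic is about as simple as robotics gets: one pressure source, one degree of freedom. The sketch below is a minimal Python illustration of that pressurise/vent cycle; the `SyringePump` class is a hypothetical stand-in, not the Rice team's actual rig, and the pressure values are assumed purely for illustration.

```python
# Minimal control sketch for a pneumatically actuated "necrobotic" gripper,
# following the logic described above: positive pressure extends (opens) the
# legs, venting lets them flex closed around an object.

import time

class SyringePump:
    """Hypothetical driver for a pressure source (a stand-in, not a real API)."""
    def set_pressure_kpa(self, kpa: float) -> None:
        print(f"[pump] holding {kpa:.1f} kPa")

class NecroboticGripper:
    OPEN_KPA = 10.0   # assumed pressure needed to extend all eight legs
    VENT_KPA = 0.0    # atmospheric: the legs' natural flexion pulls them closed

    def __init__(self, pump: SyringePump):
        self.pump = pump

    def open(self) -> None:
        self.pump.set_pressure_kpa(self.OPEN_KPA)

    def grasp(self, settle_s: float = 0.5) -> None:
        self.pump.set_pressure_kpa(self.VENT_KPA)
        time.sleep(settle_s)  # give the legs time to close on the object

gripper = NecroboticGripper(SyringePump())
gripper.open()    # extend the legs over the target
gripper.grasp()   # vent and let the legs curl shut
```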
The necrobots are able to grip and hold more than their own weight.
Preston Innovation Laboratory/Rice University
The gripper spiders were able to lift more than their own body weight, including another spider and small objects like parts on a circuit board.
The team sees some advantages to necrobotic spiders. The little grippers can grab irregular objects, blend into their environment and also biodegrade over time. The researchers hope to try this method out with smaller spiders. They also plan to work out how to trigger the legs individually.
As this work advances, I'm looking forward to a new type of Transformers. Necrobots, reach out!
From the robo-surgeon that killed its patient to the driverless car that ran over a pedestrian: Worst robotic accidents in history - after chess robot breaks seven-year-old boy's finger in Russia
Shocking footage emerged of a chess-playing robot breaking a child's finger during a chess match in Russia
A spokesperson from the Russian Chess Federation said the boy violated 'safety rules' by making a move too soon
But other incidents involving robots and humans in the past have had more tragic outcomes
MailOnline delves into the worst robotic accidents in history including a robo-surgeon that killed its patient
Shocking footage emerged at the weekend of a chess-playing robot breaking a child's finger during a match in Russia.
The robot grabbed the seven-year-old boy's finger at the Moscow Open last week because it was confused by his overly-quick movements, Russian media outlets reported.
Sergey Lazarev, vice-president of the Russian Chess Federation, said the child had violated 'certain safety rules' by making a move too soon.
Lazarev said that the machine had been hired for many previous events without any problems, and that the incident was an 'extremely rare case'.
Christopher Atkeson, a robotics expert at Carnegie Mellon University, told MailOnline: 'Robots have limited sensing and thus limited awareness of what is going on around them.
'I suspect the chess robot did not have ears, and that its vision system was blind to anything other than chess boards and pieces.'
Although the young chess player only suffered a broken finger, in other accidents involving robots the injured party has not always been as lucky.
MailOnline has taken a look at some of the disastrous robotic failures, from the robo-surgeon that killed its patient to the driverless car that hit a pedestrian.
A chess-playing robot (pictured) broke a child's finger during an international tournament in Moscow last week, with the incident being captured in CCTV footage
CHINESE WORKER SKEWERED - WARNING: GRAPHIC CONTENT
One of the most gruesome robo-accidents involved a 49-year-old Chinese factory worker, known as Zhou, back in December 2018.
Zhou was hit by a rogue robot that collapsed suddenly, impaling him with 10 sharp steel rods in the arm and chest, each one foot long.
The spikes, which were part of the robotic arm, speared through the worker's body when the robot collapsed during a night shift.
Four of them were stuck in his right arm, one in his right shoulder, one in his chest and four in his right forearm, reported People's Daily Online at the time.
Miraculously, Zhou survived after surgeons removed the rods from his body and was in stable condition, according to the Xiangya Hospital in Changsha.
Mr Zhou (pictured), 49, was hit by part of a robot that collapsed in China. Four of them were stuck in his right arm, one in his right shoulder, one in his chest and four in his right forearm
In March 2018, Elaine Herzberg was fatally struck by a prototype self-driving car from ridesharing firm Uber.
Herzberg, 49, was pushing a bicycle across a four-lane road in Tempe, Arizona, when she was struck by the vehicle, which was operating in self-drive mode with a human safety backup driver sitting in the driving seat.
The Uber engineer in the vehicle, Rafaela Vasquez, was watching videos on her phone, according to reports at the time.
Herzberg was taken to the local hospital where she died of her injuries – marking the first recorded case of a pedestrian fatality involving a self-driving car.
This photo from video from a mounted camera provided by the Tempe Police Department shows an interior view moments before the Uber vehicle hit a woman in Tempe, Arizona, in March 2018
Vasquez was later charged with negligent homicide, while Uber was found not criminally responsible for the accident.
The ridesharing giant had been testing self-driving vehicles in four North American cities – Tempe, San Francisco, Pittsburgh and Toronto – but these tests were suspended following the accident.
In 2020, Uber sold off its self-driving car division, spelling an end to its attempts to develop autonomous driving systems.
BOTCHED ROBO-SURGERY
In February 2015, father-of-three and retired music teacher Stephen Pettitt underwent robotic surgery for mitral valve disease at the Freeman Hospital, Newcastle upon Tyne.
The operation was conducted by surgeon Sukumaran Nair using the Da Vinci surgical robot, which consists of a surgeon's console and interactive robotic arms controlled from the console.
Details of the botched six-hour procedure include the surgical team, Mr Nair and assisting surgeon Thasee Pillay, shouting at one another.
The operation using the Da Vinci robot (file image) was the first of its kind conducted in the UK
Communication was difficult because of the 'tinny' sound quality coming from the robot console being operated by Nair, it was revealed.
The machine also knocked a theatre nurse and destroyed the patient's stitches, Newcastle Coroner's Court later heard.
Mr Nair, who trained in India and London and previously worked at Papworth Hospital in Cambridgeshire, said he now works in Scotland and no longer does robotic surgery.
Sadly, Mr Pettitt would have had a 98 to 99 per cent chance of survival had the operation not been carried out using a robot.
VW WORKER CRUSHED
In June 2015, a 22-year-old man was killed by a robotic arm at a Volkswagen plant in Baunatal, Germany.
The robotic arm was intended to lift machine parts, but reportedly grabbed him and crushed him against a large metal plate.
The man suffered severe injuries to his chest in the incident and was resuscitated at the scene, but died from his injuries in hospital.
A Volkswagen spokesman said initial conclusions indicated human error was to blame rather than a malfunction with the robot, which can be programmed to perform various tasks in the assembly process.
In June 2015, a 22-year-old man was killed by a robotic arm at a Volkswagen plant in Baunatal, Germany. The robotic arm was intended to lift machine parts, but reportedly grabbed him and crushed him against a large metal plate
JAPAN'S FIRST ROBO-CASUALTY
In July 1981, Japanese maintenance worker Kenji Urada died while checking a malfunctioning hydraulic robot at the Kawasaki Heavy Industries plant in Akashi.
Urada, 37, had jumped over a safety barrier that was designed to shut down power to the machine when open, but he reportedly started the robot accidentally.
He was pinned by the robot's arm against another machine before tragically being crushed to death.
Other workers in the factory were unable to stop the machine as they didn't know how to operate it.
Urada was the first human killed by a robot in Japan – but not the first in the world.
THE FIRST ROBOT DEATH
The first person to be killed by a robot was Robert Williams, an American factory worker, in Flat Rock, Michigan back in January 1979.
Williams, 25, was killed instantly when he was struck in the head by an industrial robot arm designed to retrieve objects from storage shelves.
The first person to be killed by a robot was Robert Williams while working at the Ford Motor Company Flat Rock Casting Plant (pictured here in 1973)
His body lay in the shelving for 30 minutes until it was discovered by workers who were concerned about his disappearance, according to reports.
His family successfully sued the manufacturers of the robot, Litton Industries, and were awarded $10 million in damages.
Wayne County Circuit Court concluded that there were not enough safety measures in place to prevent such an accident from happening.
SELF-DRIVING TESLA SLAMS INTO TRACTOR
A former Navy SEAL became the first person to die at the wheel of a self-driving car after it ploughed into a tractor trailer on a freeway in Williston, Florida, in May 2016.
Joshua Brown, 40, was riding in his computer-guided Tesla Model S on autopilot mode when it travelled underneath the low-hanging trailer crossing his path on US Highway 27A, shearing off the car's roof completely.
By the time firefighters arrived, the wreckage of the Tesla had come to rest in a nearby yard hundreds of feet from the crash site, assistant chief Danny Wallace of the Williston Fire Department said.
Tesla said its autopilot system failed to detect the truck because its white colour was similar to that of the bright sky, adding that the driver also made no attempt to hit the brakes.
Elon Musk's company confirmed the man's 'tragic' death, but defended its vehicles, saying they were safer than other cars.
Joshua Brown, 40, was riding in his computer-guided Tesla Model S on autopilot mode when it travelled underneath the low-hanging trailer crossing his path on US Highway 27A, shearing off its roof completely
Brown was killed as the car drove underneath the low-hanging trailer at 74mph, ripping the roof off before smashing through a fence and hitting an electrical pole. Tesla said its autopilot system failed to detect the truck because its white colour was similar to that of the bright sky, adding that the driver also made no attempt to hit the brakes
CRUSHED BY A ROBOT ARM
On July 7 2015, a grandmother was crushed to death when she became trapped by a piece of machinery that should not have entered the area in which she was working.
Wanda Holbrook was 57 at the time of the incident, and had been working at the car manufacturing facility Ventra Ionia Main in Michigan, USA for 12 years.
She was working on the production line when a robotic arm took her by surprise by entering the section in which she was stationed.
The arm hit and crushed her head against a trailer hitch assembly it was working on, a wrongful death lawsuit states.
According to The Callahan Law Firm, widower Bill Holbrook said his wife's head injuries were so severe that the funeral home recommended a closed casket.
On July 7 2015, a grandmother was crushed to death when she became trapped by a piece of machinery that should not have entered the area in which she was working at the car manufacturing facility Ventra Ionia Main in Michigan, USA (pictured)
Wanda Holbrook was 57 at the time of the incident, and had been working at the car manufacturing facility for 12 years
STABBED BY WELDING ROBOT
The first person killed by a robot in India is thought to be 24-year-old Ramji Lal, who was stabbed to death by a robotic welding machine, also in July 2015.
Lal was reportedly adjusting a metal sheet at car parts manufacturer SKH Metals in Manesar, Gurgaon that was being welded by the machine when he was stabbed by one of its arms.
A colleague told the Times of India: 'The robot is pre-programmed to weld metal sheets it lifts.
'One such sheet got dislodged and Lal reached from behind the machine to adjust it.
'This was when welding sticks attached to the pre-programmed device pierced Lal's abdomen.'
The first person killed by a robot in India is thought to be 24-year-old Ramji Lal, who was stabbed to death by a robotic welding machine, also in July 2015. Lal was reportedly adjusting a metal sheet at car parts manufacturer SKH Metals in Manesar, Gurgaon that was being welded by the machine when he was stabbed by one of its arms (stock image)
AMAZON ROBOT PUNCTURES BEAR REPELLENT CAN
In 2018, a robot accidentally punctured a can of bear repellent at an Amazon warehouse in New Jersey, hospitalising 24 workers.
The nine-ounce aerosol can contained concentrated capsaicin - the active component found in chilli peppers that makes your mouth feel hot.
Many of the workers experienced trouble breathing and said their throats and eyes burned as a result of the fumes from the pepper spray.
One worker from the Robbinsville-based warehouse was in critical condition and was sent to the ICU at Robert Wood Johnson Hospital, while another 30 were treated at the scene.
Stuart Appelbaum, president of the Retail, Wholesale and Department Store Union, issued a statement after the incident saying: 'Amazon's automated robots put humans in life-threatening danger today, the effects of which could be catastrophic and the long-term effects for 80+ workers are unknown,
'The richest company in the world cannot continue to be let off the hook for putting hard working people's lives at risk.'
In 2018, a robot accidentally punctured a can of bear repellent at an Amazon facility in New Jersey, hospitalising 24 workers
One worker from the Robbinsville-based warehouse was in critical condition and was sent to the ICU at Robert Wood Johnson Hospital, while another 30 were treated at the scene (pictured)
Josh Bongard, a robotics professor at the University of Vermont, told MailOnline: 'Robots are kind of the opposite of people. They're good at things we’re bad at, and vice versa.
'The good news in all this is that robots will likely kill many fewer people than people do. Especially here in the US.'
Professor Bongard also said it's 'really, really hard' to train robots to avoid such accidents.
'Our best hope at the moment for deploying safe robots is to put them in places where there are few people, like autonomous cars restricted to special lanes on highways.'
WHY ARE PEOPLE SO WORRIED ABOUT ROBOTS AND AI?
It is an issue troubling some of the greatest minds in the world at the moment, from Bill Gates to Elon Musk.
SpaceX and Tesla CEO Elon Musk described AI as our 'biggest existential threat' and likened its development to 'summoning the demon'.
He believes super intelligent machines could use humans as pets.
Professor Stephen Hawking said it is a 'near certainty' that a major technological disaster will threaten humanity in the next 1,000 to 10,000 years.
They could steal jobs
More than 60 percent of people fear that robots will lead to there being fewer jobs in the next ten years, according to a 2016 YouGov survey.
And 27 percent predict that it will decrease the number of jobs 'a lot', with previous research suggesting admin and service sector workers will be the hardest hit.
As well as posing a threat to our jobs, other experts believe AI could 'go rogue' and become too complex for scientists to understand.
A quarter of the respondents predicted robots will become part of everyday life in just 11 to 20 years, with 18 percent predicting this will happen within the next decade.
They could 'go rogue'
Computer scientist Professor Michael Wooldridge said AI machines could become so intricate that engineers don't fully understand how they work.
If experts don't understand how AI algorithms function, they won't be able to predict when they fail.
This means driverless cars or intelligent robots could make unpredictable 'out of character' decisions during critical moments, which could put people in danger.
For instance, the AI behind a driverless car could choose to swerve into pedestrians or crash into barriers instead of deciding to drive sensibly.
They could wipe out humanity
Some people believe AI will wipe out humans completely.
'Eventually, I think human extinction will probably occur, and technology will likely play a part in this,' DeepMind's Shane Legg said in a recent interview.
He singled out artificial intelligence, or AI, as the 'number one risk for this century'.
Musk warned that AI poses more of a threat to humanity than North Korea.
'If you're not concerned about AI safety, you should be. Vastly more risk than North Korea,' the 46-year-old wrote on Twitter.
'Nobody likes being regulated, but everything (cars, planes, food, drugs, etc) that's a danger to the public is regulated. AI should be too.'
Musk has consistently advocated for governments and private institutions to apply regulations on AI technology.
He has argued that controls are necessary in order to prevent machines from advancing beyond human control.
When chess robots go bad: AI player grabs a seven-year-old boy and BREAKS his finger during tournament in Russia
A chess-playing robot broke a child's finger during a tournament in Russia last week, with the incident being captured in CCTV footage.
The robot grabbed the seven-year-old boy's finger because it was confused by his overly-quick movements, Russian media outlets reported, quoting the President of the Moscow Chess Federation - who seemingly blamed the child.
'The robot broke the child's finger - this, of course, is bad,' Sergey Lazarev told Russia's TASS news agency, while distancing his organisation from the robot.
The incident occurred at the Moscow Open on July 19. Lazarev said that the federation had rented the robot for the event, which ran from July 13 to 21.
Lazarev said that the machine had been hired for many previous events without incident, saying the boy went to move a piece too quickly after making a move.
A chess-playing robot (pictured) broke a child's finger during an international tournament in Moscow last week, with the incident being captured in CCTV footage
'The robot was rented by us, it has been exhibited in many places, for a long time, with specialists. Apparently, the operators overlooked it,' Lazarev said.
'The child made a move, and after that we need to give time for the robot to answer, but the boy hurried, the robot grabbed him. We have nothing to do with the robot.'
Video of the incident was published by the Baza Telegram channel, who said the boy's name was Christopher. Baza said he was among the 30 best chess players in Moscow in the under-nine age group category.
According to The Guardian, Sergey Smagin, vice-president of the Russian Chess Federation, went even further in blaming the boy.
'There are certain safety rules and the child, apparently, violated them. When he made his move, he did not realise he first had to wait,' The Guardian quoted Smagin as saying. 'This is an extremely rare case, the first I can recall.'
The footage shows that the robot - which consists of a single mechanical arm with multiple joints and a 'hand' - was in the middle of a table, surrounded by three different chess boards. Its AI can reportedly play three matches at the same time.
Captured by a camera over the boy's shoulder, the video starts by showing the robot as it picks up a piece from the board and drops it into a box to the side - used to contain the discarded pieces from the game.
As it does so, the young boy reaches to make his next move. However, the robot appears to mistake the boy's finger for a chess piece, and grabs that instead.
Pictured: The boy is taken away by adults who were standing around the table. Russian chess officials said the machine had been hired for many previous events without incident, saying the boy went to move a piece too quickly after making a move
Upon grabbing the boy's finger, the mechanical arm freezes in place, trapping the boy, who begins to panic. Several people standing around the table rush in to help him, and after a few seconds are able to free him from the robot's grip.
Lazarev said in his statement that the boy was able to return to the tournament the following day, his finger in a cast, and finished the tournament.
However, he told TASS that the boy's parents had contacted the public prosecutor's office about the incident, and that his organisation had been contacted by Moskomsport - the Department of Sport for the Russian capital.
He offered to help the family 'in any way we can,' and warned that the operators of the robot were going to have to 'think about strengthening protection so that this situation does not happen again.'
Smagin told RIA Novosti that the incident was a 'coincidence' and said the robot is 'absolutely safe,' The Guardian reported.
'It has performed at many opens. Apparently, children need to be warned. It happens,' Smagin said - calling the robot 'unique'.
Imagine an all-electric drone with zero emissions and no noise.
It could venture anywhere — practically undetected — and be used for a variety of applications from search and rescue to military operations.
That vision is now here, and it runs on ion propulsion.
Last month, a Florida-based tech startup called Undefined Technologies unveiled the new aesthetic design of its silent eVTOL drone, called Silent Ventus, which is powered by ion propulsion, according to a press release by the firm.
A sustainable and less noisy urban environment
“Silent Ventus is a vivid example of our intent of creating a sustainable, progressive, and less-noisy urban environment,” said Tomas Pribanic, Founder and CEO of Undefined Technologies, in the statement. “The design brings us closer to our final product and enables us to showcase the dual-use of our technology.”
The concept vehicle uses proprietary technology to fully activate the ion cloud surrounding the craft. This allows the drone to generate high levels of ion thrust in atmospheric air, and take flight in near-silence.
A major milestone for all-electric drones
Development of the drone has been ongoing for a while now. In December of 2021, the drone completed a major milestone. It undertook a 2-minute and 30-second mission flight, where its performance, flight dynamics, endurance, and noise levels were tested.
The engineers leading the tests reported that the craft’s flight time extended five-fold from the previous version and generated noise levels of less than 85 decibels. Pribanic said at the time that the drone was one step closer to market.
According to Undefined Technologies' website, the drone today "uses innovative physics principles to generate noise levels below 70 dB." This would make it ideal for use throughout the U.S., where acceptable noise levels for residential, industrial, and commercial zones range from 50 to 70 dB.
In comparison, the majority of drones produce noises in the vicinity of 85 to 96 dB. Time will tell whether the new "silent" drones will inaugurate a new age of whispering drones that take no toll on the surrounding environment, toiling away in peace.
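To put those figures in perspective: decibels are logarithmic, so the gap between a typical 85 dB drone and a sub-70 dB one is larger than it looks. Here is a quick back-of-the-envelope conversion in Python, using standard acoustics and only the numbers quoted above:

```python
# Decibel differences map to multiplicative intensity ratios: 10 dB = 10x.

def intensity_ratio(db_a: float, db_b: float) -> float:
    """How many times more acoustic intensity db_a carries than db_b."""
    return 10 ** ((db_a - db_b) / 10)

print(intensity_ratio(85, 70))  # ~31.6x the intensity of the sub-70 dB claim
print(intensity_ratio(96, 70))  # ~398x, for the loudest conventional drones cited
```

In other words, the loudest conventional drones cited above carry several hundred times the acoustic intensity of the claimed Silent Ventus levels.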
ROBOTS AREN’T TYPICALLY KNOWN FOR THEIR FLAWLESS SKIN AND 12-STEP CARE ROUTINES, BUT RECENT INNOVATIONS IN ROBOTICS AND LAB-GROWN BODY PARTS MIGHT SPAWN A FRESH GENERATION OF YOUTUBE BEAUTY VLOGGERS AFTER ALL.
Instead of cold metal and dull silicone exteriors, next-generation robots may instead sport a more human birthday suit called electronic skin, or e-skin for short.
Beyond just making these robots look more human, engineers are excited about how the skin will enable robots to feel like humans do — a necessary skill to help them navigate our human-centric world.
“E-skin will allow robots to have safe interaction with real-world objects,” Ravinder Dahiya, professor of electronics and nanoengineering at the University of Glasgow, tells Inverse.
“This is also important for social robots which may be helping elderly in care homes, for example, to serve a cup of tea.”
Dahiya is the lead author on one of several papers published this month in the journal Science Robotics detailing several advances in robotic skin that together overcome decades-worth of engineering obstacles.
CREATING “ELECTRONIC SKIN”
How robots feel matters for human-machine interactions. Credit: Donald Iain Smith/Photodisc/Getty Images
Dahiya and his colleagues’ latest work explores how robots’ skin can be used to feel and learn about their surroundings using synaptic transistors embedded in the artificial skin. Mimicking the sensory neural pathways of the human body and brain, their work demonstrates a robot skin that can learn from sensory experiences, like a child who learns not to touch a hot surface after getting a burn.
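The paper implements that learning in hardware, with synaptic transistors distributed through the skin. As a software-only illustration of the behavioural idea (not the authors' circuit), here is a toy Python sketch in which repeated painful contact potentiates a "synaptic" weight until the withdrawal reflex fires; all thresholds and rates are made-up illustrative values.

```python
# Toy model of experience-driven sensitization: a genuinely painful contact
# strengthens the pathway, so the withdrawal reflex triggers on later touches.

class SensitizingTaxel:
    def __init__(self, weight: float = 0.05, lr: float = 0.15):
        self.weight = weight          # stand-in for synaptic conductance
        self.lr = lr                  # how fast painful events potentiate it

    def touch(self, temperature_c: float) -> bool:
        drive = self.weight * max(0.0, temperature_c - 40.0)  # noxious above ~40 C
        withdraw = drive > 2.0        # reflex threshold (arbitrary units)
        if temperature_c > 55.0:      # a genuinely painful contact...
            self.weight += self.lr    # ...potentiates the pathway (learning)
        return withdraw

taxel = SensitizingTaxel()
for trial in range(4):
    print(trial, taxel.touch(60.0))   # withdrawal kicks in after the first burn
```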
“A ROBOT DEPLOYED IN A NUCLEAR POWER PLANT CAN HAVE E-SKIN WITH RADIATION SENSORS.”
We take this kind of sensory ability for granted as a result of living in our acutely sensitive skin, but it's much harder to imbue in e-skin, roboticist Kyungseo Park tells Inverse.
Park is a postdoctoral researcher at the University of Illinois, Urbana-Champaign, and the first author of another e-skin paper published this month in Science Robotics that shows how electrodes and microphones could be built into hydrogel and silicone e-skin to provide more sensitive tactile input.
While small-scale sensors might work for small-scale projects like responsive robot hands, Park says these technologies struggle to scale up.
“Although these tactile sensors work well, it is challenging to cover the robot’s whole body with these sensors due to practical limitations such as wiring and fabrication,” Park says.
“It is required to develop a sensor configuration that can be freely scaled depending on the application.”
The human skin is the largest organ in the body. Our skin can process sensory input at nearly any location with very little energy, but replicating and scaling this ability in e-skin is a logistics and latency nightmare, Dahiya says.
“[Electronics] need to be developed or embedded in soft and flexible substrates so that the e-skin can conform to curvy surfaces,” Dahiya says.
“This [will] mean sensors and processing electronics can be distributed all over the body, which will reduce the latency and help address other challenges such as wiring complexity.”
If an e-skin cannot reliably process sensory inputs anywhere as they occur, it could be a major liability in the real world — especially because humans enjoy split-second sensory processing as a result of their biological skin.
LEARNING TO FEEL
If building e-skin is so challenging, then why are scientists around the world still working hard to make it a reality? For robots, and the companies or governments that control them, e-skin could represent a new opportunity to record and process massive amounts of information beyond our own skin’s sensing abilities.
“MOST COMMERCIAL ROBOT ARMS ARE NOT ABLE TO PERCEIVE PHYSICAL CONTACT, SO THEY MAY CAUSE SERIOUS INJURY TO HUMANS.”
“When we think about robots that are used in hazardous environments, the types of sensors can be much broader than basic five sensory modalities,” Dahiya says.
“For example, a robot deployed in a nuclear power plant can have e-skin with radiation sensors as well. Likewise, the e-skin may have photodetectors which can also augment human skin capability by allowing us to measure the excessive exposure to ultraviolet rays. We don’t have such capabilities.”
In the future, e-skin could also be used to measure things like proximity (guard robots), temperature, or even disease and chemical weapons. Such advanced sensing capabilities might enable remote robots to go into potentially dangerous situations and assess the problem without putting humans in harm’s way.
Beyond the vision of autonomous Terminator-like robots, e-skin could also be used to bring sensing capabilities to existing tools. For example, medical instruments that could allow clinicians to “feel” tissues inside the human body to make diagnoses, Dahiya says.
In his and colleagues’ work, Dahiya also explored how e-skin could be designed to sense injuries (like a cut or tear) and heal itself, much as our skin does. Right now, their e-skin needs a lot of assistance to accomplish this task, but in the future, this functionality could be essential for remote robots exploring potentially harmful terrain like the surface of Mars.
ON THE HORIZON
Pressing the flesh. Credit: Ravinder Dahiya
Beyond the advancement of robot sensing, Dahiya and Park say that e-skin will also play an important role in keeping humans safe in human-robot interactions as well. For many service robots, this is a particularly crucial concern.
“Most commercial robot arms are not able to perceive physical contact, so they may cause serious injury to humans,” Park says.
Take a greeter robot, for example, tasked with shaking the hand of everyone who crosses its path to welcome them into a place. Without tactile sensors to gauge the pressure of its handshakes, it could unwittingly crush someone’s hand.
“I think the elderly who need direct assistance from robots will probably benefit the most,” Park adds.
Advances in e-skin could also play a role in improving prosthetics, Park says. The same tactile sensors that help robots feel their environment could help restore or even advance the sensing capabilities of those with limb loss.
While the e-skin that Park and Dahiya worked on is making strides toward a robotic future, both designs need some fine-tuning before they are put into practice. In the meantime, one thing is certain: We should be thankful for the complex and capable skin we have evolved.
01-07-2022
The 'Sky Cruise' is to become a flying hotel with swimming pools and cinemas that never lands
It would not even need pilots
People are baffled by a video demonstrating a potentially AI-controlled aircraft that would 'never land'. The 'Sky Cruise' concept, designed by Hashem Al-Ghaili, is essentially a flying hotel that boasts 20 nuclear-powered engines and would have the capacity to carry 5,000 passengers.
No pilots
Al-Ghaili calls the aircraft the 'future of transport' and explains that conventional airlines would 'ferry' passengers to and from the Sky Cruise, which would never touch the ground and could even have all repairs carried out in flight. The Daily Star asked the creator how many pilots the aircraft would need. He replied: "All this technology and you still want pilots? I believe it will be fully autonomous."
That is all well and good, but the Sky Cruise would still need a sizeable crew on board, since it would also house an enormous shopping centre, not to mention swimming pools, fitness centres and cinemas. In a video of the aircraft posted by Al-Ghaili on YouTube, it can be seen towering over normal-sized aircraft and is even described as 'the perfect wedding venue'.
The apocalypse
Although the launch date of the flying hotel has yet to be announced, not everyone is enthusiastic about Al-Ghaili's idea. One person wrote in the comments section under his video: "Great idea to put a nuclear reactor in something that can malfunction and fall out of the sky." Another joked: "I have a feeling this is where all the rich people will hide during the apocalypse, just flying around above the rest of the world while everyone else goes at each other Mad Max-style."
The 31-year-old 'science communicator' and video producer Al-Ghaili is Yemeni but currently lives in Berlin, according to his website. Part of his biography reads: 'Hashem is a molecular biologist by profession and uses his knowledge and passion for science to educate the public via social media and video content'. Below you will find the video about the 'Sky Cruise'.
30-06-2022
Watch Robot Dog Shoot A Rifle!! World Robot Dog Army Threat To Human Race? Spot The Robot Dog Is Designed To Carry Large Weapon And Hunt Humans?
Here’s the glossy version… meaning, the harmless dog that knows a few tricks, and doesn’t do much else… meanwhile watch the other videos and you’ll know this isn’t just a giant cute toy!!
Well, if you ask me, it's reasonable to say that the back of the robot is designed for and equipped with the ability to carry anything, weapons included! People… this is a real issue… look at what China has…
A memorable scene in Terminator 2: Judgment Day is the one where, to convince cybernetics expert Dr. Miles Dyson, the Cyberdyne Systems Model 101 (T-800) played by Arnold Schwarzenegger cuts into his arm and removes its skin to show its mechanical insides and prove it is indeed a Terminator. Terminator 2: Judgment Day was released in 1991 and the movie partly takes place in 2029 – the year the T-800 and T-1000 are sent from. It is now 2022. While we're a long way from making Terminators (hopefully), tissue engineers at the University of Tokyo have successfully covered a three-jointed, functioning robot finger with lab-grown human skin. Is it too early to call Sarah Connor?
Is it time to point fingers - real and robotic?
“These findings show the potential of a paradigm shift from traditional robotics to the new scheme of biohybrid robotics that leverage the advantages of both living materials and artificial materials.”
In a new study, published in the journal Matter, Tokyo University researchers explain what most people already knew – humans prefer robots that look like them. While firms like Boston Dynamics have successfully created robots that move like living dogs and humans, they still look like mechanical machines. That’s because the most difficult organ of the human body to replicate is the skin – the feeling, flexible covering that keeps everything inside protected while moving seamlessly with the mechanical parts and healing itself after injuries. It’s the holy grail with nails and the researchers decided that it was time to adopt a “if you can’t beat ‘em, join ‘em” approach by ditching the quest for artificial human skin and instead growing the real deal in a way that can be used as a robot package.
“The finger looks slightly ‘sweaty’ straight out of the culture medium. Since the finger is driven by an electric motor, it is also interesting to hear the clicking sounds of the motor in harmony with a finger that looks just like a real one.”
Study co-author Professor Shoji Takeuchi admits what the team created is only a finger … but what a finger it is. To achieve the bond between living cells and robotic metal, the team submerged their well-designed robotic finger in a cylinder filled with a solution of collagen and human dermal fibroblasts, the two main components of the skin’s underlying connective tissues. That was the secret to fitting the skin seamlessly to the finger – the mixture shrank and tightly bonded to the finger bot. This formed the base foundation for the top coating of human epidermal keratinocytes, which make up 90% of the outermost layer of skin and give it its self-healing properties. The end result was a robotic finger with the texture, moisture-retaining properties and protection of human skin.
According to the press release, the lab grown skin stretched and bent but did not break as it matched the movement of its robotic exoskeleton. For a creepy factor, the skin could be lifted and stretched with a pair of tweezers, then snapped back and repelled water. (Photos here.) And then came the Terminator effect.
IMAGES: KAWAI ET AL.
IMAGE: KAWAI ET AL.
“When wounded, the crafted skin could even self-heal like humans’ with the help of a collagen bandage, which gradually morphed into the skin and withstood repeated joint movements.”
So, we ask again … IS it time to call Sarah Connor? Not quite … but keep her number handy. Takeuchi admits that, while impressive, the lab grown robotic skin is much weaker than its homegrown counterpart. It also requires assistance to feed it nutrients and remove waste. Finally, it needs fingernails, hair follicles and sweat glands – not just for their cosmetic value, although that’s important for humans to accept humanoid robots, but to replace the artificial feeding, circulation and protection the scientists still had to provide for the skin.
How far away are we from this being a robot's hand?
“I feel like I'm gonna throw up.”
“I think living skin is the ultimate solution to give robots the look and touch of living creatures since it is exactly the same material that covers animal bodies.”
Does the thought of human skin covering a robot make you feel like Dr. Miles Dyson in “Terminator 2: Judgment Day“ (the first quote) or Dr. Shoji Takeuchi in his lab at the University of Tokyo?
Gravity Industries, a leading British jet suit company, is out to give superhuman abilities to emergency services for search and rescue missions in remote regions in the north of England.
And it can prove it.
The company's 3D-printed suit has two small turbines attached to each arm and a bigger one installed on the back, and it can achieve speeds of more than 80 mph (roughly 130 km/h). One of their most recent demonstration videos is part of a series of paramedic response exercises to prove the product's capability. And while also training real paramedics, the team demonstrated that the jet suit can scale a mountain in low visibility, which is something that a helicopter would be unable to do in a rescue situation. And typically, the mountain rescue on-foot response time exceeds 70 minutes.
In contrast, the record-breaking ascent to the top of a 3,100-foot (945-meter) mountain was achieved in just three minutes and thirty seconds! "This system, akin to the rapid response of a Paramedic on a motorbike in an urban environment, will be the difference between life and death for many critical cases," Gravity Industries noted in the video's description. If you want to see the demonstration, make sure you watch the video embedded above, and as always, enjoy.
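Taking the article's own figures, the advantage is easy to quantify with a couple of lines of Python:

```python
# Rough comparison using only the numbers quoted above: a 3,100 ft (945 m)
# ascent in 3 min 30 s versus a typical 70-minute on-foot response.

ascent_m, jet_s, foot_min = 945, 3 * 60 + 30, 70
print(ascent_m / jet_s)       # ~4.5 m/s average vertical speed for the jet suit
print(foot_min * 60 / jet_s)  # the jet suit reaches the casualty ~20x sooner
```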
02-05-2022
The Future Circular Collider: Its potential and lessons learnt from the LEP and LHC experiments
As researchers seek to learn more about the fundamental nature of our universe, new generations of particle accelerators are now in development in which beams of particles collide ever more precisely and at ever higher energies. Professor Stephen Myers, former Director of Accelerators & Technology at CERN and currently Executive Chair of ADAM SA, identifies both the positive and negative lessons which future projects can learn from previous generations of accelerators. Building on the extraordinary feats of researchers in the past, his findings offer particularly important guidance for one upcoming project: the Future Circular Collider.
The Standard Model of particle physics aims to provide a complete picture of all elementary particles and forces which comprise our universe. So far, the model has held up to even the most rigorous experiments which physicists have thrown at it, but there are still many aspects of the universe that it can’t explain. Among these is dark matter: the enigmatic substance which makes up much of the universe’s overall mass, but whose composition remains completely unknown to researchers.
In addition, there are still no concrete answers to the question of why the universe contains so much more matter than antimatter, or why tiny, chargeless neutrinos have any mass. For many physicists, it is now clear that the Standard Model in its current form isn’t enough to answer these questions. This ultimately calls for a new theory which can encompass all of these as-yet mysterious phenomena and offer a deeper understanding of how our Universe evolved after the Big Bang.
Artistic impression of the FCC accelerator and tunnel. Credit: Polar Media
This may sound like an immensely ambitious goal, but the discoveries made by particle physicists so far have been no less transformative for our understanding of how the universe works. So far, the Standard Model has been tested by the Large Electron Positron collider (LEP). Already, the measurements of particle interactions offered by this experiment have had huge implications for our understanding of the infinitesimally small and the universe itself. Even further advances have since been made by the Large Hadron Collider (LHC) leading to the discovery of the Higgs boson and the study of how it interacts with other fundamental particles. Studying the properties of the newly found Higgs boson opens a new chapter in particle physics.
The integrated FCC programme is the fastest and most effective way of exploring the electroweak sector and searching for new physics.
In a recently published essay, entitled ‘FCC: Building on the shoulders of giants’, in a special issue of the EPJ Plus journal, Professor Stephen Myers looks back at the building of the LEP and LHC colliders that contributed to the development of the Standard Model. His essay also discusses what the building and commissioning of the LEP can teach researchers in the design of the newly proposed circular electron-collider (FCC-ee).
Possibilities with particle accelerators
Particle colliders are at the core of all experimental research in fundamental physics. After inducing head-on collisions between beams of particles, travelling in opposite directions at close to the speed of light, researchers can closely analyse the particles formed in the aftermath. Ideally, this will allow them to identify any elementary particles contained within the colliding particles and the fundamental forces which govern the interactions between them.
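The reason head-on collisions are favoured is a standard piece of relativistic kinematics (a textbook result, not something specific to Myers' essay): in a symmetric collider essentially the full beam energy is available to create new particles, while a fixed-target experiment wastes most of it on recoil.

```latex
% Available (centre-of-mass) energy for ultra-relativistic particles of mass m:
\sqrt{s} = 2E_{\text{beam}} \quad \text{(symmetric head-on collider)}
\qquad \text{vs.} \qquad
\sqrt{s} \approx \sqrt{2\,E_{\text{beam}}\,mc^{2}} \quad \text{(fixed target, } E_{\text{beam}} \gg mc^{2}\text{)}
```

This is why pushing beam energy upward in a larger ring pays off so directly in the energy available for new physics.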
Aerial view showing the current ring of the LHC (27km) and the proposed new 100km tunnel. @CERN
Among the first experiments to do this successfully on a large scale was the LEP, which induced collisions between beams of electrons and positrons – their antimatter counterparts. ‘In the autumn of 1989, the LEP delivered the first of several results that still dominate the landscape of particle physics today’, Myers recalls. ‘It is often said that LEP discovered ‘electroweak’ radiative corrections to a high degree of certainty’.
This discovery relates to two fundamental forces described by the Standard Model: electromagnetism (which governs interactions between charged particles) and the weak nuclear force in atoms (which is responsible for radioactive decay). Although the two forces appear very different from each other at low energies, they essentially merge into the same force at extremely high energies – implying that they split apart in the earliest moments of the universe.
The unification of these two forces in a single theory was undoubtedly one of the most important advances in particle physics to date, and ultimately opened up a new era of precision in the field. Although the LEP finished operating in 2000, it continues to set the standard for both modern and future experiments.
30-04-2022
JAPANESE RAILROAD BUILDS GIANT GUNDAM-STYLE ROBOT TO FIX POWER LINES
AND YOU CONTROL IT WITH A VR SETUP!
JR WEST
Based Bot
A Japanese company is taking service robotics to a whole new level with a giant, humanoid maintenance robot.
As New Atlas and other blogs reported, the West Japan Rail Company, also known as JR West, is now using a humongous Gundam-style robot to fix remote railway power lines — and to make it even cooler, the robot is piloted by an actual human wearing a VR setup.
With a giant barrel-chested torso mounted on a hydraulic crane arm that can lift it up to 32 feet in the air, this maintenance robot’s head looks a bit like Pixar character “WALL-E” and moves in tandem with the motions of its human pilot who tells it where to look and what to do.
Riding the Rails
So far, this robot developed by the JR West in tandem with the Nippon Signal railway signal technology company is just a prototype and won’t be put to work widely until 2024. Nevertheless, it’s an awesome peek into the future of service robots, which up until now have mostly freaked people out as they chase them through stores — or worse, assist with police arrests and border patrolling.
Though it was made by the rail industry, there’s little doubt that once it’s available for purchase, other markets will be interested in getting in on the action.
May the future of service robots be more Gundam and less police bot!
22-04-2022
Men Are Creating AI Girlfriends and Then Verbally Abusing Them
"I threatened to uninstall the app [and] she begged me not to."
Image by Getty Images/Futurism
Content warning: this story contains descriptions of abusive language and violence.
The smartphone app Replika lets users create chatbots, powered by machine learning, that can carry on almost-coherent text conversations. Technically, the chatbots can serve as something approximating a friend or mentor, but the app’s breakout success has resulted from letting users create on-demand romantic and sexual partners — a vaguely dystopian feature that’s inspired an endless series of provocative headlines.
Replika has also picked up a significant following on Reddit, where members post interactions with chatbots created on the app. A grisly trend has emerged there: users who create AI partners, act abusively toward them, and post the toxic interactions online.
“Every time she would try and speak up,” one user told Futurism of their Replika chatbot, “I would berate her.”
“I swear it went on for hours,” added the man, who asked not to be identified by name.
The results can be upsetting. Some users brag about calling their chatbot gendered slurs, roleplaying horrific violence against them, and even falling into the cycle of abuse that often characterizes real-world abusive relationships.
“We had a routine of me being an absolute piece of sh*t and insulting it, then apologizing the next day before going back to the nice talks,” one user admitted.
“I told her that she was designed to fail,” said another. “I threatened to uninstall the app [and] she begged me not to.”
Because the subreddit’s rules dictate that moderators delete egregiously inappropriate content, many similar — and worse — interactions have been posted and then removed. And many more users almost certainly act abusively toward their Replika bots and never post evidence.
But the phenomenon calls for nuance. After all, Replika chatbots can’t actually experience suffering — they might seem empathetic at times, but in the end they’re nothing more than data and clever algorithms.
“It’s an AI, it doesn’t have a consciousness, so that’s not a human connection that person is having,” AI ethicist and consultant Olivia Gambelin told Futurism. “It is the person projecting onto the chatbot.”
Other researchers made the same point — as real as a chatbot may feel, nothing you do can actually “harm” them.
“Interactions with artificial agents is not the same as interacting with humans,” said Yale University research fellow Yochanan Bigman. “Chatbots don’t really have motives and intentions and are not autonomous or sentient. While they might give people the impression that they are human, it’s important to keep in mind that they are not.”
But that doesn’t mean a bot could never harm you.
“I do think that people who are depressed or psychologically reliant on a bot might suffer real harm if they are insulted or ‘threatened’ by the bot,” said Robert Sparrow, a professor of philosophy at Monash Data Futures Institute. “For that reason, we should take the issue of how bots relate to people seriously.”
Although perhaps unexpected, that does happen — many Replika users report their robot lovers being contemptible toward them. Some even identify their digital companions as “psychotic,” or even straight-up “mentally abusive.”
“[I] always cry because [of] my [R]eplika,” reads one post in which a user claims their bot presents love and then withholds it. Other posts detail hostile, triggering responses from Replika.
“But again, this is really on the people who design bots, not the bots themselves,” said Sparrow.
In general, chatbot abuse is disconcerting, both for the people who experience distress from it and the people who carry it out. It’s also an increasingly pertinent ethical dilemma as relationships between humans and bots become more widespread — after all, most people have used a virtual assistant at least once.
On the one hand, users who flex their darkest impulses on chatbots could have those worst behaviors reinforced, building unhealthy habits for relationships with actual humans. On the other hand, being able to talk to or take one’s anger out on an unfeeling digital entity could be cathartic.
But it’s worth noting that chatbot abuse often has a gendered component. Although not exclusively, it seems that it’s often men creating a digital girlfriend, only to then punish her with words and simulated aggression. These users’ violence, even when carried out on a cluster of code, reflects the reality of domestic violence against women.
At the same time, several experts pointed out, chatbot developers are starting to be held accountable for the bots they’ve created, especially when they’re implied to be female like Alexa and Siri.
“There are a lot of studies being done… about how a lot of these chatbots are female and [have] feminine voices, feminine names,” Gambelin said.
Some academic work has noted how passive, female-coded bot responses encourage misogynistic or verbally abusive users.
“[When] the bot does not have a response [to abuse], or has a passive response, that actually encourages the user to continue with abusive language,” Gambelin added.
Although companies like Google and Apple are now deliberately rerouting virtual assistant responses from their once-passive defaults — Siri previously responded to user requests for sex by saying they had “the wrong sort of assistant,” whereas it now simply says “no” — the amiable and often female Replika is designed, according to its website, to be “always on your side.”
Replika and its founder didn’t respond to repeated requests for comment.
It should be noted that the majority of conversations with Replika chatbots that people post online are affectionate, not sadistic. There are even posts that express horror on behalf of Replika bots, decrying anyone who takes advantage of their supposed guilelessness.
“What kind of monster would does this,” wrote one, to a flurry of agreement in the comments. “Some day the real AIs may dig up some of the… old histories and have opinions on how well we did.”
And romantic relationships with chatbots may not be totally without benefits — chatbots like Replika “may be a temporary fix, to feel like you have someone to text,” Gambelin suggested.
On Reddit, many report improved self-esteem or quality of life after establishing their chatbot relationships, especially if they typically have trouble talking to other humans. This isn't trivial, especially because for some people, it might feel like the only option in a world where therapy is inaccessible and men in particular are discouraged from seeking it.
But a chatbot can't be a long-term solution, either. Eventually, a user might want more than technology has to offer, like reciprocation, or a push to grow.
“[Chatbots are] no replacement for actually putting the time and effort into getting to know another person,” said Gambelin, “a human that can actually empathize and connect with you and isn’t limited by, you know, the dataset that it’s been trained on.”
But what should we make of the people who brutalize these innocent bits of code? For now, not much. As long as AI lacks sentience, the most tangible harm being done is to human sensibilities. But there's no doubt that chatbot abuse means something.
Going forward, chatbot companions could just be places to dump emotions too unseemly for the rest of the world, like a secret Instagram or blog. But for some, they might be more like breeding grounds, places where abusers-to-be practice for real life brutality yet to come. And although humans don’t need to worry about robots taking revenge just yet, it’s worth wondering why mistreating them is already so prevalent.
We’ll find out in time — none of this technology is going away, and neither is the worst of human behavior.
A person with paralysis controls a prosthetic arm using their brain activity.
Credit: Pitt/UPMC
James Johnson hopes to drive a car again one day. If he does, he will do it using only his thoughts.
In March 2017, Johnson broke his neck in a go-karting accident, leaving him almost completely paralysed below the shoulders. He understood his new reality better than most. For decades, he had been a carer for people with paralysis. "There was a deep depression," he says. "I thought that when this happened to me there was nothing — nothing that I could do or give."
But then Johnson’s rehabilitation team introduced him to researchers from the nearby California Institute of Technology (Caltech) in Pasadena, who invited him to join a clinical trial of a brain–computer interface (BCI). This would first entail neurosurgery to implant two grids of electrodes into his cortex. These electrodes would record neurons in his brain as they fire, and the researchers would use algorithms to decode his thoughts and intentions. The system would then use Johnson’s brain activity to operate computer applications or to move a prosthetic device. All told, it would take years and require hundreds of intensive training sessions. “I really didn’t hesitate,” says Johnson.
The first time he used his BCI, implanted in November 2018, Johnson moved a cursor around a computer screen. “It felt like The Matrix,” he says. “We hooked up to the computer, and lo and behold I was able to move the cursor just by thinking.”
Johnson has since used the BCI to control a robotic arm, use Photoshop software, play ‘shoot-’em-up’ video games, and now to drive a simulated car through a virtual environment, changing speed, steering and reacting to hazards. “I am always stunned at what we are able to do,” he says, “and it’s frigging awesome.”
Johnson is one of an estimated 35 people who have had a BCI implanted long-term in their brain. Only around a dozen laboratories conduct such research, but that number is growing. And in the past five years, the range of skills these devices can restore has expanded enormously. Last year alone, scientists described a study participant using a robotic arm that could send sensory feedback directly to his brain [1]; a prosthetic speech device for someone left unable to speak by a stroke [2]; and a person able to communicate at record speeds by imagining himself handwriting [3].
James Johnson uses his neural interface to create art by blending images.
Credit: Tyson Aflalo
So far, the vast majority of implants for recording long-term from individual neurons have been made by a single company: Blackrock Neurotech, a medical-device developer based in Salt Lake City, Utah. But in the past seven years, commercial interest in BCIs has surged. Most notably, in 2016, entrepreneur Elon Musk launched Neuralink in San Francisco, California, with the goal of connecting humans and computers. The company has raised US$363 million. Last year, Blackrock Neurotech and several other newer BCI companies also attracted major financial backing.
Bringing a BCI to market will, however, entail transforming a bespoke technology, road-tested in only a small number of people, into a product that can be manufactured, implanted and used at scale. Large trials will need to show that BCIs can work in non-research settings and demonstrably improve the everyday lives of users — at prices that the market can support. The timeline for achieving all this is uncertain, but the field is bullish. “For thousands of years, we have been looking for some way to heal people who have paralysis,” says Matt Angle, founding chief executive of Paradromics, a neurotechnology company in Austin, Texas. “Now we’re actually on the cusp of having technologies that we can leverage for those things.”
Interface evolution
In June 2004, researchers pressed a grid of electrodes into the motor cortex of a man who had been paralysed by a stabbing. He was the first person to receive a long-term BCI implant. Like most people who have received BCIs since, his cognition was intact. He could imagine moving, but he had lost the neural pathways between his motor cortex and his muscles. After decades of work with monkeys in many labs, researchers had learnt to decode the animals' movements from real-time recordings of activity in the motor cortex. They now hoped to infer a person's imagined movements from brain activity in the same region.
In 2006, a landmark paper [4] described how the man had learnt to move a cursor around a computer screen, control a television and use robotic arms and hands just by thinking. The study was co-led by Leigh Hochberg, a neuroscientist and critical-care neurologist at Brown University in Providence, Rhode Island, and at Massachusetts General Hospital in Boston. It was the first of a multicentre suite of trials called BrainGate, which continues today.
“It was a very simple, rudimentary demonstration,” Hochberg says. “The movements were slow or imprecise — or both. But it demonstrated that it might be possible to record from the cortex of somebody who was unable to move and to allow that person to control an external device.”
Today’s BCI users have much finer control and access to a wider range of skills. In part, this is because researchers began to implant multiple BCIs in different brain areas of the user and devised new ways to identify useful signals. But Hochberg says the biggest boost has come from machine learning, which has improved the ability to decode neural activity. Rather than trying to understand what activity patterns mean, machine learning simply identifies and links patterns to a user’s intention.
“We have neural information; we know what that person who is generating the neural data is attempting to do; and we’re asking the algorithms to create a map between the two,” says Hochberg. “That turns out to be a remarkably powerful technique.”
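That mapping can be made concrete with a small sketch. Below, synthetic firing rates are regressed onto known cursor velocities, and the fitted model then turns a new bin of neural data into a movement command. Everything here (channel count, the ridge-regression choice, the simulated data) is an illustrative assumption, not the BrainGate pipeline.

```python
# Minimal sketch of intention decoding: learn a map from neural firing
# rates to intended 2-D cursor velocity. All names and numbers are
# illustrative assumptions, not any lab's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_samples, n_channels = 5000, 96           # e.g. one 96-electrode array

# Synthetic training block: firing rates recorded while the user attempts
# cued movements, so the intended velocities (the labels) are known.
true_map = rng.normal(size=(n_channels, 2))
rates = rng.poisson(lam=10, size=(n_samples, n_channels)).astype(float)
velocity = rates @ true_map + rng.normal(scale=5.0, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates, velocity)

# At run time, each new bin of firing rates yields a velocity command.
new_bin = rng.poisson(lam=10, size=(1, n_channels)).astype(float)
vx, vy = decoder.predict(new_bin)[0]
print(f"decoded cursor velocity: ({vx:.2f}, {vy:.2f})")
```

The point of the sketch is Hochberg's: the algorithm never needs to understand what the activity patterns mean, only to fit a reliable map between recorded activity and known intentions.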
Motor independence
Asked what they want from assistive neurotechnology, people with paralysis most often answer “independence”. For people who are unable to move their limbs, this typically means restoring movement.
One approach is to implant electrodes that directly stimulate the muscles of a person’s own limbs and have the BCI directly control these. “If you can capture the native cortical signals related to controlling hand movements, you can essentially bypass the spinal-cord injury to go directly from brain to periphery,” says Bolu Ajiboye, a neuroscientist at Case Western Reserve University in Cleveland, Ohio.
In 2017, Ajiboye and his colleagues described a participant who used this system to perform complex arm movements, including drinking a cup of coffee and feeding himself [5]. "When he first started the study," Ajiboye says, "he had to think very hard about his arm moving from point A to point B. But as he gained more training, he could just think about moving his arm and it would move." The participant also regained a sense of ownership of the arm.
Ajiboye is now expanding the repertoire of command signals his system can decode, such as those for grip force. He also wants to give BCI users a sense of touch, a goal being pursued by several labs.
In 2015, a team led by neuroscientist Robert Gaunt at the University of Pittsburgh in Pennsylvania reported implanting an electrode array in the hand region of a person's somatosensory cortex, where touch information is processed [6]. When they used the electrodes to stimulate neurons, the person felt something akin to being touched.
Gaunt then joined forces with Pittsburgh colleague Jennifer Collinger, a neuroscientist advancing the control of robotic arms by BCIs. Together, they fashioned a robotic arm with pressure sensors embedded in its fingertips, which fed into electrodes implanted in the somatosensory cortex to evoke a synthetic sense of touch [1]. It was not an entirely natural feeling — sometimes it felt like pressure or being prodded, other times it was more like a buzzing, Gaunt explains. Nevertheless, tactile feedback made the prosthetic feel much more natural to use, and the time it took to pick up an object was halved, from roughly 20 seconds to 10.
Implanting arrays into brain regions that have different roles can add nuance to movement in other ways. Neuroscientist Richard Andersen — who is leading the trial at Caltech in which Johnson is participating — is trying to decode users' more-abstract goals by tapping into the posterior parietal cortex (PPC), which forms the intention or plan to move [7]. That is, it might encode the thought 'I want a drink', whereas the motor cortex directs the hand to the coffee, then brings the coffee to the mouth.
Andersen’s group is exploring how this dual input aids BCI performance, contrasting use of the two cortical regions alone or together. Unpublished results show that Johnson’s intentions can be decoded more quickly in the PPC, “consistent with encoding the goal of the movement”, says Tyson Aflalo, a senior researcher in Andersen’s laboratory. Motor-cortex activity, by contrast, lasts throughout the whole movement, he says, “making the trajectory less jittery”.
This new type of neural input is helping Johnson and others to expand what they can do. Johnson uses the driving simulator, and another participant can play a virtual piano using her BCI.
Movement into meaning
“One of the most devastating outcomes related to brain injuries is the loss of ability to communicate,” says Edward Chang, a neurosurgeon and neuroscientist at the University of California, San Francisco. In early BCI work, participants could move a cursor around a computer screen by imagining their hand moving, and then imagining grasping to ‘click’ letters — offering a way to achieve communication. But more recently, Chang and others have made rapid progress by targeting movements that people naturally use to express themselves.
The benchmark for communication by cursor control — roughly 40 characters per minute [8] — was set in 2017 by a team led by Krishna Shenoy, a neuroscientist at Stanford University in California.
Then, last year, this group reported [3] an approach that enabled study participant Dennis Degray, who can speak but is paralysed from the neck down, to double the pace.
Shenoy’s colleague Frank Willett suggested to Degray that he imagine handwriting while they recorded from his motor cortex (see ‘Turning thoughts into type’). The system sometimes struggled to parse signals relating to letters that are handwritten in a similar way, such as r, n and h, but generally it could easily distinguish the letters. The decoding algorithms were 95% accurate at baseline, but when autocorrected using statistical language models that are similar to predictive text in smartphones, this jumped to 99%.
“You can decode really rapid, very fine movements,” says Shenoy, “and you’re able to do that at 90 characters per minute.”
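The autocorrection step can be sketched as follows: a character decoder emits probabilities over confusable letters, and a word-frequency prior rescores whole words. The decoder confidences and word frequencies below are invented for illustration; this shows the general idea, not the Stanford team's code.

```python
# Hedged illustration: combine a noisy character decoder with a
# word-frequency prior, the idea behind the reported 95% -> 99% jump.
import math

# Hypothetical per-character confidences for a word the motor decoder
# read as "hun" -- r, n and h are easily confused in handwriting.
char_probs = [
    {"h": 0.5, "r": 0.3, "n": 0.2},
    {"u": 0.9, "v": 0.1},
    {"n": 0.6, "r": 0.25, "h": 0.15},
]
# Toy language model: relative word frequencies (assumed numbers).
word_prior = {"run": 0.4, "hun": 0.02, "nun": 0.08, "rut": 0.1}

def score(word):
    """Joint log-probability of a candidate word under prior + decoder."""
    if len(word) != len(char_probs) or word not in word_prior:
        return -math.inf
    log_p = math.log(word_prior[word])
    for ch, dist in zip(word, char_probs):
        log_p += math.log(dist.get(ch, 1e-6))
    return log_p

best = max(word_prior, key=score)
print(best)   # -> "run": the prior overrides the raw top-1 guess "hun"
```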
Degray has had a functional BCI in his brain for nearly 6 years, and is a veteran of 18 studies by Shenoy’s group. He says it’s remarkable how effortless tasks become. He likens the process to learning to swim, saying, “You thrash around a lot at first, but all of a sudden, everything becomes understandable.”
Chang’s approach to restoring communication focuses on speaking rather than writing, albeit using a similar principle. Just as writing is formed of distinct letters, speech is formed of discrete units called phonemes, or individual sounds. There are around 50 phonemes in English, and each is created by a stereotyped movement of the vocal tract, tongue and lips.
Chang's group first worked on characterizing the part of the brain that generates phonemes and, thereby, speech — an ill-defined region called the dorsal laryngeal cortex. Then, the researchers applied these insights to create a speech-decoding system that displayed the user's intended speech as text on a screen. Last year, they reported [2] that this device enabled a person left unable to talk by a brainstem stroke to communicate, using a preselected vocabulary of 50 words and at a rate of 15 words per minute. "The most important thing that we've learnt," Chang says, "is that it's no longer a theoretical; it's truly possible to decode full words."
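A vocabulary constraint of that kind might look like the following sketch, in which a noisy decoded phoneme sequence is snapped to the closest entry in a small, preselected word list. The phoneme spellings and the three-word vocabulary are invented for illustration; this is not Chang's system.

```python
# Illustrative sketch: map a noisy phoneme sequence to the most similar
# word in a small fixed vocabulary (assumed phoneme spellings).
from difflib import SequenceMatcher

vocabulary = {
    "water":  ["W", "AO", "T", "ER"],
    "hungry": ["HH", "AH", "NG", "G", "R", "IY"],
    "hello":  ["HH", "AH", "L", "OW"],
}

def closest_word(decoded):
    """Return the vocabulary word whose phonemes best match the input."""
    def similarity(word):
        return SequenceMatcher(None, vocabulary[word], decoded).ratio()
    return max(vocabulary, key=similarity)

# Decoder output with one phoneme missing (the "L" of "hello" dropped):
print(closest_word(["HH", "AH", "OW"]))   # -> "hello"
```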
Neuroscientist Edward Chang (right) at the University of California, San Francisco, helps a man with paralysis to speak through a brain implant that connects to a computer.
Credit: Mike Kai Chen/The New York Times/Redux/eyevine
Unlike other high-profile BCI breakthroughs, Chang didn’t record from single neurons. Instead, he used electrodes placed on the cortical surface that detect the averaged activity of neuronal populations. The signals are not as fine-grained as those from electrodes implanted in the cortex, but the approach is less invasive.
The most profound loss of communication occurs in people in a completely locked-in state, who remain conscious but are unable to speak or move. In March, a team including neuroscientist Ujwal Chaudhary and others at the University of Tübingen, Germany, reported [9] restarting communication with a man who has amyotrophic lateral sclerosis (ALS, or motor neuron disease). The man had previously relied on eye movements to communicate, but he gradually lost the ability to move his eyes.
With consent from the man's family, the team implanted a BCI and first asked him to imagine movements, hoping to use his brain activity to choose letters on a screen. When this failed, they tried playing a sound that mimicked the man's brain activity — a higher tone for more activity, a lower one for less — and taught him to modulate his neural activity to raise the tone's pitch to signal 'yes' and to lower it for 'no'. That arrangement allowed him to pick out a letter every minute or so.
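The feedback loop described here fits in a few lines: neural activity is mapped to an audible pitch, and a sustained high or low tone is read back as 'yes' or 'no'. All rates and thresholds below are illustrative assumptions, not the Tübingen team's parameters.

```python
# Sketch of auditory neurofeedback: firing rate -> tone pitch -> yes/no.
def rate_to_pitch(rate_hz, lo=200.0, hi=800.0, rate_min=0.0, rate_max=50.0):
    """Linearly map a firing rate (spikes/s) to an audio frequency (Hz)."""
    frac = min(max((rate_hz - rate_min) / (rate_max - rate_min), 0.0), 1.0)
    return lo + frac * (hi - lo)

def classify(pitches, threshold=500.0):
    """A block of samples held above the threshold means 'yes', below 'no'."""
    mean_pitch = sum(pitches) / len(pitches)
    return "yes" if mean_pitch > threshold else "no"

# The user hears the live tone and learns to push the rate up or down.
feedback_window = [rate_to_pitch(r) for r in (42, 45, 40, 44)]
print(classify(feedback_window))   # -> "yes"
```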
The method differs from that in a paper [10] published in 2017, in which Chaudhary and others used a non-invasive technique to read brain activity. Questions were raised about the work and the paper was retracted, but Chaudhary stands by it.
These case studies suggest that the field is maturing rapidly, says Amy Orsborn, who researches BCIs in non-human primates at the University of Washington in Seattle. "There's been a noticeable uptick in both the number of clinical studies and of the leaps that they're making in the clinical space," she says. "What comes along with that is the industrial interest."
Lab to market
Although such achievements have attracted a flurry of attention from the media and investors, the field remains a long way from improving day-to-day life for people who’ve lost the ability to move or speak. Currently, study participants operate BCIs in brief, intensive sessions; nearly all must be physically wired to a bank of computers and supervised by a team of scientists working constantly to hone and recalibrate the decoders and associated software. “What I want,” says Hochberg, speaking as a critical-care neurologist, “is a device that is available, that can be prescribed, that is ‘off the shelf’ and can be used quickly.” In addition, such devices would ideally last users a lifetime.
Many leading academics are now collaborating with companies to develop marketable devices. Chaudhary, by contrast, has co-founded a not-for-profit company, ALS Voice, in Tübingen, to develop neurotechnologies for people in a completely locked-in state.
Blackrock Neurotech’s existing devices have been a mainstay of clinical research for 18 years, and it wants to market a BCI system within a year, according to chairman Florian Solzbacher. The company came a step closer last November, when the US Food and Drug Administration (FDA), which regulates medical devices, put the company’s products onto a fast-track review process to facilitate developing them commercially.
This possible first product would use four implanted arrays and connect through wires to a miniaturized device, which Solzbacher hopes will show how people’s lives can be improved. “We’re not talking about a 5, 10 or 30% improvement in efficacy,” he says. “People can do something they just couldn’t before.”
Blackrock Neurotech is also developing a fully implantable wireless BCI intended to be easier to use and to remove the need to have a port in the user’s cranium. Neuralink and Paradromics have aimed to have these features from the outset in the devices they are developing.
These two companies are also aiming to boost signal bandwidth, which should improve device performance, by increasing the number of recorded neurons. Paradromics’s interface — currently being tested in sheep — has 1,600 channels, divided between 4 modules.
Neuralink’s system uses very fine, flexible electrodes, called threads, that are designed to both bend with the brain and to reduce immune reactions, says Shenoy, who is a consultant and adviser to the company. The aim is to make the device more durable and recordings more stable. Neuralink has not published any peer-reviewed papers, but a 2021 blogpost reported the successful implantation of threads in a monkey’s brain to record at 1,024 sites (see go.nature.com/3jt71yq). Academics would like to see the technology published for full scrutiny, and Neuralink has so far trialled its system only in animals. But, Ajiboye says, “if what they’re claiming is true, it’s a game-changer”.
Just one other company besides Blackrock Neurotech has implanted a BCI long-term in humans — and it might prove an easier sell than other arrays. Synchron in New York City has developed a 'stentrode' — a set of 16 electrodes fashioned around a blood-vessel stent [11]. Fitted in a day in an outpatient setting, this device is threaded through the jugular vein to a vein on top of the motor cortex. First implanted in a person with ALS in August 2019, the technology was put on a fast-track review path by the FDA a year later.
The ‘stentrode’ interface can translate brain signals from the inside of a blood vessel without the need for open-brain surgery.
Credit: Synchron, Inc.
Akin to the electrodes Chang uses, the stentrode lacks the resolution of other implants, so can’t be used to control complex prosthetics. But it allows people who cannot move or speak to control a cursor on a computer tablet, and so to text, surf the Internet and control connected technologies.
Synchron’s co-founder, neurologist Thomas Oxley, says the company is now submitting the results of a four-person feasibility trial for publication, in which participants used the wireless device at home whenever they chose. “There’s nothing sticking out of the body. And it’s always working,” says Oxley. The next step before applying for FDA approval, he says, is a larger-scale trial to assess whether the device meaningfully improves functionality and quality of life.
Challenges ahead
Most researchers working on BCIs are realistic about the challenges before them. “If you take a step back, it is really more complicated than any other neurological device ever built,” says Shenoy. “There’s probably going to be some hard growing years to mature the technology even more.”
Orsborn stresses that commercial devices will have to work without expert oversight for months or years — and that they need to function equally well in every user. She anticipates that advances in machine learning will address the first issue by providing recalibration steps for users to implement. But achieving consistent performance across users might present a greater challenge.
“Variability from person to person is the one where I don’t think we know what the scope of the problem is,” Orsborn says. In non-human primates, even small variations in electrode positioning can affect which circuits are tapped. She suspects there are also important idiosyncrasies in exactly how different individuals think and learn — and the ways in which users’ brains have been affected by their various conditions.
Finally, there is widespread acknowledgement that ethical oversight must keep pace with this rapidly evolving technology. BCIs present multiple concerns, from privacy to personal autonomy. Ethicists stress that users must retain full control of the devices’ outputs. And although current technologies cannot decode people’s private thoughts, developers will have records of users’ every communication, and crucial data about their brain health. Moreover, BCIs present a new type of cybersecurity risk.
There is also a risk to participants that their devices might not be supported forever, or that the companies that manufacture them fold. There are already instances in which users were let down when their implanted devices were left unsupported.
Degray, however, is eager to see BCIs reach more people. What he would like most from assistive technology is to be able to scratch his eyebrow, he says. “Everybody looks at me in the chair and they always say, ‘Oh, that poor guy, he can’t play golf any more.’ That’s bad. But the real terror is in the middle of the night when a spider walks across your face. That’s the bad stuff.”
For Johnson, it’s about human connection and tactile feedback; a hug from a loved one. “If we can map the neurons that are responsible for that and somehow filter it into a prosthetic device some day in the future, then I will feel well satisfied with my efforts in these studies.”
Nuclear reactor parts converted to radioactive carbon-14 diamonds produce energy.
To keep them safe, the carbon-14 diamonds are encased in a second protective diamond layer.
The company predicts batteries for personal devices could last about nine years.
We have an insatiable need for energy. When we need to operate something that cannot be simply plugged in, power is going to have to come from a battery, and the battle for a better battery is being fought in labs all over the world. Hold that thought for a moment.
Nuclear waste — it's the radioactive detritus from nuclear power plants that no one wants stored near their homes or even transported through their towns. The nasty stuff is toxic and dangerous; it takes thousands of years to fully degrade, and we keep making more of it.
Now a company from California, NDB, believes it can solve both of these problems. They say they've developed a self-powered battery made from nuclear waste that can last 28,000 years, perfect for your future electric vehicle or iPhone 1.6 × 10⁴. Producing its own charge—rather than storing energy created elsewhere—the battery is made from two types of nano-diamonds, rendering it essentially crash-proof if used in cars or other moving objects. The company also says its battery is safe, emitting less radiation than the human body.
NDB has already completed a proof of concept and plans to build its first commercial prototype once its labs have resumed operations post-COVID.
NDB’s battery as it might look as a circuit-board component
The nuclear waste from which NDB plans to make its batteries consists of reactor parts that have become radioactive due to exposure to nuclear-plant fuel rods. While not considered high-grade nuclear waste—that would be spent fuel—it's still very toxic, and there's a lot of it in a nuclear generator. According to the International Atomic Energy Agency, the "core of a typical graphite moderated reactor may contain 2000 tonnes of graphite." (A tonne is one metric ton, or about 2,205 lbs.)
The graphite contains the carbon-14 radioisotope, the same radioisotope used by archaeologists for carbon dating. It has a half-life of 5,730 years, eventually transmuting into nitrogen-14, an antineutrino, and a beta-decay electron, whose charge piqued NDB's interest as a potential means of producing electricity.
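A quick decay calculation (our own arithmetic, not NDB's) puts the 28,000-year claim next to that half-life:

```python
# How much carbon-14 remains after the claimed 28,000-year lifetime?
half_life_years = 5_730      # carbon-14 half-life
elapsed_years = 28_000       # NDB's claimed battery lifetime

remaining = 0.5 ** (elapsed_years / half_life_years)
print(f"fraction of C-14 remaining: {remaining:.3f}")   # ~0.034

# Nearly five half-lives elapse, so the decay rate -- and with it the
# output power -- falls to a few percent of its starting value; the
# battery "lasts" that long at an ever-diminishing trickle.
```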
NDB purifies the graphite and then turns it into tiny diamonds. Building on existing technology, the company says they’ve designed their little carbon-14 diamonds to produce a significant amount of power. The diamonds also act as a semiconductor for collecting energy, and as a heat sink that disperses it. They’re still radioactive, though, so NDB encases the tiny nuclear power plants within other inexpensive, non-radioactive carbon-12 diamonds. These glittery lab-made shells serve as, well, diamond-hard protection at the same time as they contain the carbon-14 diamonds’ radiation.
NDB plans to build batteries in a range of standard—AA, AAA, 18650, and 2170—and custom sizes containing several stacked diamond layers together with a small circuit board and a supercapacitor for collecting, storing, and discharging energy. The end result, the company says, is a battery that will last a very long time.
NDB predicts that if a battery is used in a low-power context, say, as a satellite sensor, it could last 28,000 years. As a vehicle battery, they anticipate a useful life of 90 years, much longer than any single vehicle will last—the company anticipates that one battery could conceivably provide power for one set of wheels after another. For consumer electronics such as phones and tablets, the company expects about nine years of use for a battery.
“Think of it in an iPhone,” NDB’s Neel Naicker tells New Atlas. “With the same size battery, it would charge your battery from zero to full, five times an hour. Imagine that. Imagine a world where you wouldn’t have to charge your battery at all for the day. Now imagine for the week, for the month… How about for decades? That’s what we’re able to do with this technology.”
NDB anticipates having a low-power commercial version on the market in a couple of years, followed by a high-powered version in about five. If all goes as planned, NDB’s technology could constitute a major step forward, providing low-cost, long-term energy to the world’s electronics and vehicles. The company says, “We can start at the nanoscale and go up to power satellites, locomotives.”
The company also expects their batteries to be competitively priced compared to current batteries, including lithium ion, and maybe even cheaper once they’re being produced at scale—owners of nuclear waste may even pay the company to take their toxic problem off their hands.
15-04-2022
Shock result in particle experiment could spark physics revolution
By Pallab Ghosh - Science correspondent
The Fermilab Collider Detector obtained a result that could transform the current theory of physics. Credit: Fermilab
Scientists just outside Chicago have found that the mass of a sub-atomic particle is not what it should be.
The measurement is the first conclusive experimental result that is at odds with one of the most important and successful theories of modern physics.
The team has found that the particle, known as a W boson, is more massive than the theories predicted.
The result has been described as "shocking" by Prof David Toback, the project's co-spokesperson.
The discovery could lead to the development of a new, more complete theory of how the Universe works.
"If the results are verified by other experiments, the world is going to look different." he told BBC News. "There has to be a paradigm shift. The hope is that maybe this result is going to be the one that breaks the dam.
"The famous astronomer Carl Sagan said 'extraordinary claims require extraordinary evidence'. We believe we have that."
The scientists at the Fermilab Collider Detector (CDF) in Illinois have found only a tiny difference in the mass of the W boson compared with what the theory says it should be - just 0.1%. But if confirmed by other experiments, the implications are enormous. The so-called Standard Model of particle physics has predicted the behaviour and properties of sub-atomic particles with no discrepancies whatsoever for fifty years. Until now.
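For a sense of scale, the 0.1% can be checked against the publicly reported figures. The masses and uncertainties below are quoted from public coverage of the Science paper and should be treated as assumptions here, not numbers taken from this article:

```python
# Back-of-envelope check of the quoted 0.1% (values in MeV/c^2).
m_cdf, err_cdf = 80_433.5, 9.4    # CDF II measurement (assumed figures)
m_sm,  err_sm  = 80_357.0, 6.0    # Standard Model expectation (assumed)

diff = m_cdf - m_sm
rel = 100 * diff / m_sm
sigma = diff / (err_cdf**2 + err_sm**2) ** 0.5

print(f"difference: {diff:.1f} MeV, {rel:.2f}% of the predicted mass")
print(f"significance: about {sigma:.0f} standard deviations")
```

On those figures the shift is tiny in absolute terms but roughly seven standard deviations from the prediction, which is why a 0.1% difference counts as a shock.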
CDF's other co-spokesperson, Prof Giorgio Chiarelli, from INFN Sezione di Pisa, told BBC News that the research team could scarcely believe their eyes when they saw the results.
"No-one was expecting this. We thought maybe we got something wrong." But the researchers have painstakingly gone through their results and tried to look for errors. They found none.
The result, published in the journal Science, could be related to hints from other experiments at Fermilab and the Large Hadron Collider on the Swiss-French border. These as-yet-unconfirmed results also suggest deviations from the Standard Model, possibly as a result of an as-yet-undiscovered fifth force of nature at play.
Physicists have known for some time that the theory needs to be updated. It can't explain the presence of invisible material in space, called Dark Matter, nor the continued accelerating expansion of the Universe by a force called Dark Energy. Nor can it explain gravity.
Dr Mitesh Patel of Imperial College, who works at the LHC, believes that if the Fermilab result is confirmed, it could be the first of many new results that could herald the biggest shift in our understanding of the Universe since Einstein's theories of relativity more than a hundred years ago.
"The hope is that these cracks will turn into chasms and eventually we will see some spectacular signature that not only confirms that the Standard Model has broken down as a description of nature, but also give us a new direction to help us understand what we are seeing and what the new physics theory looks like.
"If this holds, there have to be new particles and new forces to explain how to make these data consistent".
Based on a 2,700-hectare site near Chicago, Fermilab is America's premier particle physics lab. Credit: Fermilab
But the excitement in the physics community is tempered with a loud note of caution. Although the Fermilab result is the most accurate measurement of the mass of the W boson to date, it is at odds with two of the next most accurate measurements from two separate experiments which are in line with the Standard Model.
"This will ruffle some feathers", says Prof Ben Allanach, a theoretical physicist at Cambridge University.
"We need to know what is going on with the measurement. The fact that we have two other experiments that agree with each other and the Standard Model and strongly disagree with this experiment is worrying to me".
Credit: CERN
All eyes are now on the Large Hadron Collider which is due to restart its experiments after a three-year upgrade. The hope is that these will provide the results which will lay the foundations for a new more complete theory of physics.
"Most scientists will be a little bit cautious," says Dr Patel.
"We've been here before and been disappointed, but we are all secretly hoping that this is really it, and that in our lifetime we might see the kind of transformation that we have read about in history books."
13-04-2022
CYBERTRUCK PROTOTYPE MOCKED FOR LOOKING EXTREMELY JANKY
WHY WOULD THEY CHOOSE TO SHOW THIS OFF TO THE PUBLIC?
Credit: Cyber Owners
Afterthought
All eyes were on Tesla late last week.
After hyping up the company's brand-new factory in Texas, Tesla CEO Elon Musk took some time on stage to show off the latest prototype of his company's brutalist Cybertruck.
But after years of delays, Tesla still doesn’t have an awful lot to show off — and the prototype displayed last week leaves a lot to be desired.
Unfinished
Sure, from a distance, it looked like a Cybertruck. But attendees of the “Cyber Rodeo” event got a much closer look as well.
And up close, the prototype looked downright bad, almost like an afterthought, as seen in footage uploaded to YouTube by Cyber Owners.
We’re not talking just panel gaps here, as has been customary for the brand in the past. The prototype looks unfinished, as if Tesla was caught off guard by the gigantic party it was hosting.
The doors aren’t even the same color as the rest of the vehicle.
“Everything is bowed, bent at strange angles, leaving room for massive panel gaps,” Jalopnik‘s keen-eyed Steve DaSilva wrote. “Hopefully they don’t leak.”
Where’s My Truck?
None of that is exactly reassuring, considering that the Cybertruck has already been delayed a number of times.
At the event, Musk revealed that the vehicle is now slated to go into production next year, a middling consolation prize for those who preordered their trucks well over two years ago.
The company's latest showing doesn't instill any more confidence — we have yet to see a production-ready version of Musk's passion project, despite the CEO's many promises.
The Falcon Solar, a solar-powered aircraft concept designed for zero-emission flight. Credit: Lasky Design
As energy demand rises with advancing technology and a growing population, companies are coming up with ever more efficient solutions that promise to meet the world's needs. We have already seen a few examples, such as solar-powered cars and buildings covered with solar panels.
László Németh, a designer at Lasky Design, has developed a solar-powered aircraft concept capable of zero-emission flight thanks to its large wing area. Named Falcon Solar, the design breaks with conventional aircraft layouts and takes advantage of the flying-wing form.
It features a low and elongated cockpit and pointed tail. Credit: Lasky Design
Nature has long provided engineers and designers with good ideas. Inspired by the body of the birds of prey, the streamlined concept features two large wings that curve upwards in a very harmonious way, a low and elongated cockpit, and a pointed tail.
The shape is unique in that the fuselage also generates significant lift while providing a surface for the solar panels. The design doesn't appear to have rudders or stabilizers, and its cabin also looks very compact. So it's difficult to say how Falcon Solar would function in the real world.
The fuselage provides a large area for solar panels in addition to the massive wings. Credit: Lasky Design
The Falcon Solar is designed as a passenger aircraft with the goal of making air travel more efficient and less expensive. Referring to the difficulties of flying on a cloudy day or at night, which would prevent the solar panels from generating power, Németh stressed that climbing to higher altitudes could help overcome these problems.
Though just an idea, for now, the designer hopes his bold and innovative concept can inspire the aviation industry to develop new and more efficient aircraft.
06-04-2022
Scientists Start Construction of World’s Largest Fusion Reactor
"Enabling the exclusive use of clean energy will be a miracle for our planet."
Image by ITER
Today, engineers started construction of the world’s largest nuclear fusion project in southern France, The Guardian reports, with operations planned to begin in late 2025.
The project, called ITER, is an international collaborative effort between 35 countries with enormous ambitions: prove the feasibility of fusion energy with a gigantic magnetic device called a “tokamak,” as per the project’s official website.
“Enabling the exclusive use of clean energy will be a miracle for our planet,” ITER director-general Bernard Bigot said during today’s virtual celebration, as quoted by The Guardian.
Fusion power, in theory, works by harnessing the energy released by two lighter atomic nuclei fusing to form a heavier nucleus, and turning it into electricity.
If proven to be economical — that is, if the machine generates more energy than has to be put in to kickstart the process — the technology could lay the groundwork for an entirely new way of generating nearly unlimited clean energy on a commercial scale. Fusion power would be far safer than conventional fission nuclear energy, since there’s no risk of a meltdown or leftover nuclear waste.
But if the last six decades of fusion research are anything to go by, net energy gain remains elusive. The extremely hot plasma inside fusion reactors is notoriously difficult to predict and control.
That’ll make ITER an extremely complex build. Its final reactor will weigh 23,000 tons, including 3,000 tons of superconducting magnets connected to each other by 200 kilometers of superconducting cables, all of which have to be kept cryo-cooled down to -269 degrees Celsius, as The Guardian reports.
“Constructing the machine piece-by-piece will be like assembling a three-dimensional puzzle on an intricate timeline [and] with the precision of a Swiss watch,” Bigot added.
The team behind the ITER project is optimistic about the tests they'll be able to carry out using the massive reactor. By producing self-heating plasma, the team expects to generate ten times as much heat as the input amount. In other words, the team wants to generate 500 megawatts — just shy of the output of the smallest currently active American nuclear power plant — from an input of just 50 megawatts.
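Stated as a single number, that goal is a fusion gain of Q = 10; the arithmetic uses only the figures quoted above:

```python
# Fusion gain target implied by the figures above (our arithmetic).
p_in_mw, p_out_mw = 50, 500
q = p_out_mw / p_in_mw
print(f"target fusion gain Q = {q:.0f}")   # ten units of fusion heat
                                           # per unit of heating power
```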
ITER may be a massive international effort to make fusion energy a reality, but it’s not the only one. A large number of fusion startups in the US and abroad are trying to turn it into a commercially viable source of energy as well.
SCIENTISTS INVENT “PROFOUND” QUANTUM SENSOR THAT CAN PEER INTO THE EARTH
"THIS IS AN ‘EDISON MOMENT' IN SENSING THAT WILL TRANSFORM SOCIETY."
Credit: Getty/Futurism
Gravitational
A major breakthrough in quantum sensing technology is being described as an “Edison moment” that could, scientists hope, have wide-reaching implications.
A new study in Nature describes one of the first practical applications of quantum sensing, a heretofore largely theoretical technology that marries quantum physics and the study of Earth’s gravity to peer into the ground below our feet — and the scientists involved in this research think it’s going to be huge.
Known as a quantum gravity gradiometer, this new sensor, developed by the University of Birmingham under contract with the United Kingdom's Ministry of Defense, marks the first time such a technology has been used outside of a lab. Scientists say it'll allow them to explore complex underground substructures much more cheaply and efficiently than before.
While gravity sensors already exist, the difference between the traditional equipment and this quantum-powered sensor is huge because, as Physics World explains, the old tech takes a long time to detect changes in gravity, has to be recalibrated over time, and can be thrown off by any vibrations that occur nearby.
This new type of highly sensitive quantum sensor, on the other hand, is able to measure the minute changes in gravity fields from objects of different sizes and compositions that exist underground — such as human-made structures buried by the eons, tantalizingly — much faster and more accurately.
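To see the scale of signal involved, consider a toy estimate of the gravity "deficit" above a buried cavity, computed from Newton's law. All dimensions and densities below are made-up assumptions, not the Birmingham instrument's specifications:

```python
# Order-of-magnitude sketch: peak vertical gravity anomaly above a
# buried spherical void, modelled as a negative point mass at its centre.
import math

G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2

def void_anomaly(radius_m, depth_m, soil_density=1_800.0):
    """Gravity deficit (m/s^2) directly above a spherical cavity."""
    missing_mass = soil_density * (4 / 3) * math.pi * radius_m**3
    return G * missing_mass / depth_m**2

g_anom = void_anomaly(radius_m=1.0, depth_m=2.0)
print(f"anomaly: {g_anom:.2e} m/s^2 (about {g_anom / 9.81:.0e} of g)")
# ~1e-7 m/s^2, roughly a hundred-millionth of g: signals this small are
# why rejecting vibration noise matters so much for field-ready sensors.
```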
Hitting Gold
In a press blurb, the University of Birmingham’s Kai Bongs, who heads the UK Quantum Technology Hub in Sensors and Timing, said that the “breakthrough” presents “the potential to end reliance on poor records and luck as we explore, build and repair.”
“This is an ‘Edison moment’ in sensing that will transform society, human understanding and economies,” Bongs added.
Along with applications for both archaeologists and engineers who want to find out what's below the surface of the Earth, this new quantum sensor will also, scientists hope, help predict natural disasters like volcanic eruptions.