The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her brave battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs or UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITY, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt (Belgian UFO reporting point) and Caelestia, two organisations that carry out in-depth research, although they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the superb website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles that you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, just like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Don't hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
31-08-2020
Japan Successfully Tested a Flying Car
Japanese company SkyDrive Inc. says it envisions a society where flying cars are an "accessible and convenient means of transportation."
SCREENSHOT FROM A VIDEO OF THE SD-03 FLYING CAR MODEL TEST FLIGHT BY SKYDRIVE.
PHOTO: SCREENSHOT/SKYDRIVE
Looks like futuristic fantasies of flying cars zipping through the sky just became closer to reality.
Japanese company SkyDrive Inc. announced on Friday, August 28, that it had successfully conducted a public test flight for its new SD-03 flying car model—billed as the first demonstration of its kind in Japan.
The SD-03 was tested at 10,000-square-meter (approximately 2.5-acre) Toyota Test Field, one of the largest test fields in Japan, the company said in a statement.
The single-seater aircraft-car mashup was manned by a pilot who circled the field for about four minutes before landing. The company said that the pilot was backed up by technical staff at the field who monitored conditions to ensure flight stability and safety.
SkyDrive CEO Tomohiro Fukuzawa said the company hopes to see its technological experiment become a reality by 2023.
“We are extremely excited to have achieved Japan’s first-ever manned flight of a flying car in the two years since we founded SkyDrive in 2018 with the goal of commercializing such an aircraft,” Fukuzawa said.
“We want to realize a society where flying cars are an accessible and convenient means of transportation in the skies and people are able to experience a safe, secure, and comfortable new way of life,” he added.
Designed to be the world’s smallest electric Vertical Take-Off and Landing (eVTOL) model, the flying car measures two meters high by four meters wide (six feet high by 13 feet wide). It takes as much space on the ground as two parked cars.
“We believe that this vehicle will play an active role as your travel companion, a compact coupe flying in the sky,” said Takumi Yamamoto, the company’s design director. “As a pioneer of a new genre, we would like to continue designing the vehicles that everyone dreams of.”
The company has not yet listed a price for the aircraft, though executives feel confident that sci-fi enthusiasts and busy commuters alike will take to the new mode of transportation. According to the company’s timeline, it envisions the SD-03 operating with “full autonomy” by 2030.
“The company hopes that its aircraft will become people’s partner in the sky rather than merely a commodity and it will continue working to design a safe sky for the future,” the company said in its statement.
Elon Musk has put a new spin on the expression “guinea pig” by trotting out a live pig to perform in his much-anticipated “Neuralink” demonstration. This was a real porker, not a rodent, and Musk played the ‘rat’ in the demo by touting it as a major breakthrough and attempting to recruit human volunteers while comparing the whole thing to the dystopian science fiction series, “Black Mirror.” Is Musk electrically driving us into a real-life Twilight Zone?
“In a lot of ways, it’s kind of like a Fitbit in your skull, with tiny wires. I could have a Neuralink right now and you wouldn’t know. Maybe I do.”
Fitbit not in your skull … yet
Wannabe comedian Musk tried to warm up the audience with a pseudo Joe Rogan interview as he introduced a group of pigs. (Watch the entire presentation/demonstration here.) One was said to have had a ‘Link’ implanted and later removed, to demonstrate that the process is safe (for pigs, at least). Before you start thinking that this doesn’t sound too bad, the Link is about 23 millimeters (0.9 inches) by 8 millimeters (0.3 inches) and …
“Getting a Link requires opening a piece of skull, removing a coin-sized piece of skull; a robot inserts the electrodes and the device replaces the portion of skull, which is closed up with super glue.”
If getting sawed open, probed and superglued by a so-called “sewing” robot is on your bucket list, the line starts at the company’s headquarters in San Francisco. However, you may want to talk to a former employee first. In the run-up to the demonstration, some of them told STAT that the company’s Muskian philosophy of “move fast and break things” has many employees “completely overwhelmed” – which turns them into ex-employees and explains why Musk used the pig demonstration to appeal for more workers … not pigs, of course. He’s more likely looking for engineers who don’t want to be left behind, but instead want to be part of his weird wide world where memories will be unloaded, downloaded, off-loaded and more.
“You could upload, you could basically store your memories as a backup, and restore the memories, and ultimately you could potentially download them into a new body or a robot body. The future’s going to be weird.”
Disappointingly, most of Musk’s ‘demonstration’ consisted of videos and gonna-be-great commentary and predictions – like that the Neuralink could potentially be used for gaming or summoning your Tesla. If you’re interested in upping your game or your Tesla summoning, volunteers need to meet one more criterion, according to The Verge:
“The first clinical trials will be in a small number of patients with severe spinal cord injuries, to make sure it works and is safe. Last year, Musk said he hoped to start clinical trials in people in 2020. Long term, Musk said they will be able to restore full motion in people with those types of injuries using a second implant on the spine.”
There you go – you knew Musk had to have a noble cause hidden among the boasts of “general anesthesia,” “30 minutes or less” (If it takes longer, is it free? Asking for a friend), “like a Fitbit in your skull” and “Black Mirror.” Speaking of that last one, Musk likes the comparison because “I guess they’re pretty good at predicting.”
So were George Orwell and Rod Serling. Speaking of Orwell, do you think the pigs on “Animal Farm” would stand in line on their two legs to get a Fitbit in their brains from Elon Musk?
27-08-2020
Microscopic, Injectable Robots Could Soon Run In Your Veins
“What should I do, doc? Take two microrobots and call me in the morning.” Transhumanism is the ultimate merging of technology with the human, and the drive to do so is relentless. Their endgame is life-extension and then immortality, with a dose of omniscience along the way. ⁃ Technocracy News and Trends Editor Patrick Wood
Scientists have created an army of microscopic four-legged robots too small to see with the naked eye that walk when stimulated by a laser and could be injected into the body through hypodermic needles, a study said Wednesday.
Microscopic robotics are seen as having an array of potential uses, particularly in medicine, and US researchers said the new robots offer “the potential to explore biological environments”.
One of the main challenges in the development of these cell-sized robots has been combining control circuitry and moving parts in such a small structure.
The robots described in the journal Nature are less than 0.1 millimetre wide — around the width of a human hair — and have four legs that are powered by on-board solar cells.
By shooting laser light into these solar cells, researchers were able to trigger the legs to move, causing the robot to walk around.
The study’s co-author Marc Miskin, of the University of Pennsylvania, told AFP that a key innovation of the research was that the legs — its actuators — could be controlled using silicon electronics.
“Fifty years of shrinking down electronics has led to some remarkably tiny technologies: you can build sensors, computers, memory, all in very small spaces,” he said. “But, if you want a robot, you need actuators, parts that move.”
The researchers acknowledged that their creations are currently slower than other microbots that “swim”, less easy to control than those guided by magnets, and do not sense their environment.
The robots are prototypes that demonstrate the possibility of integrating electronics with the parts that help the device move around, Miskin said, adding they expect the technology to develop quickly.
“The next step is to build sophisticated circuitry: can we build robots that sense their environment and respond? How about tiny programmable machines? Can we make them able to run without human intervention?”
Miskin said he envisions biomedical uses for the robots, or applications in materials science, such as repairing materials at the microscale.
“But this is a very new idea and we’re still trying to figure out what’s possible,” he added.
01-08-2020
Totally New: “Drawn-on-Skin Electronics” with an Ink Pen Can Monitor Physiological Information
A team of researchers led by Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston, has developed a new form of electronics known as “drawn-on-skin electronics,” allowing multifunctional sensors and circuits to be drawn on the skin with an ink pen.
The advance, the researchers report in Nature Communications, allows for the collection of more precise, motion artifact-free health data, solving the long-standing problem of collecting precise biological data through a wearable device when the subject is in motion.
Credit: University of Houston.
The imprecision may not be important when your FitBit registers 4,000 steps instead of 4,200, but sensors designed to check heart function, temperature and other physical signals must be accurate if they are to be used for diagnostics and treatment.
The drawn-on-skin electronics are able to seamlessly collect data, regardless of the wearer’s movements.
They also offer other advantages, including simple fabrication techniques that don’t require dedicated equipment.
“It is applied like you would use a pen to write on a piece of paper,” said Yu. “We prepare several electronic materials and then use pens to dispense them. Coming out, it is liquid. But like ink on paper, it dries very quickly.”
Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering, led a team reporting a new form of electronics known as “drawn-on-skin electronics,” which allows multifunctional sensors and circuits to be drawn on the skin with an ink pen.
Credit: University of Houston
Wearable bioelectronics – in the form of soft, flexible patches attached to the skin – have become an important way to monitor, prevent and treat illness and injury by tracking physiological information from the wearer. But even the most flexible wearables are limited by motion artifacts, or the difficulty that arises in collecting data when the sensor doesn’t move precisely with the skin.
The drawn-on-skin electronics can be customized to collect different types of information, and Yu said it is expected to be especially useful in situations where it’s not possible to access sophisticated equipment, including on a battleground.
The electronics are able to track muscle signals, heart rate, temperature and skin hydration, among other physical data, he said. The researchers also reported that the drawn-on-skin electronics have demonstrated the ability to accelerate healing of wounds.
Faheem Ershad, a doctoral student in the Cullen College of Engineering, served as first author for the paper.
Credit: University of Houston
In addition to Yu, researchers involved in the project include Faheem Ershad, Anish Thukral, Phillip Comeaux, Yuntao Lu, Hyunseok Shim, Kyoseung Sim, Nam-In Kim, Zhoulyu Rao, Ross Guevara, Luis Contreras, Fengjiao Pan, Yongcao Zhang, Ying-Shi Guan, Pinyi Yang, Xu Wang and Peng Wang, all from the University of Houston, and Jiping Yue and Xiaoyang Wu from the University of Chicago.
The drawn-on-skin electronics are actually comprised of three inks, serving as a conductor, semiconductor and dielectric.
“Electronic inks, including conductors, semiconductors, and dielectrics, are drawn on-demand in a freeform manner to develop devices, such as transistors, strain sensors, temperature sensors, heaters, skin hydration sensors, and electrophysiological sensors,” the researchers wrote.
This research is supported by the Office of Naval Research and National Institutes of Health.
28-07-2020
WATCH BOSTON DYNAMICS’ ROBODOGS INVADE THIS FORD PRODUCTION PLANT
These robots act like the best-behaved dogs you've ever seen. Plus they have five cameras.
Boston Dynamics is heading to the Midwest. The perpetually viral robotics company, known across the world for videos of robots blowing people’s minds, has signed a deal with Ford Motor Company. Ford will be leasing two robots from the company in order to better scan their factories for retooling.
"WOW, IT'S, IT'S ACTUALLY DOGLIKE."
WHAT ARE THESE ROBOTS?
The robots, which are officially named Spot but have been nicknamed Fluffy by Ford, are four-legged walkers that can take 360-degree camera scans, handle 30-degree grades and climb stairs for extended periods of time. At 70 pounds with five cameras, they’re nimble, and Boston Dynamics wanted to make sure they had a dog-like quality as they save clients money.
As digital engineering manager at Ford’s Advanced Manufacturing Center, Mark Goderis was already quite familiar with the animal-like robots that have made Boston Dynamics famous.
But when he finally saw them in person, he tells Inverse, “I was like, wow, it's, it's actually doglike. I was really shocked at how an animal or dog like it really is. But then you start to think oh my god it is a robot. It was a moment of shock.”
One place that real dogs have the robots beat is speed: these bots can only go 3 MPH, a safety feature. But with handler Paula Wiebelhaus, who gave Fluffy its nickname in the first place, these robots will scan plant floors and give engineers a helping hand in updating Computer Aided Designs (CAD), which are used to help improve workplaces.
Paula Wiebelhaus taking Fluffy for a walk.
Ford
Wiebelhaus can control Fluffy with a device that's only somewhat bigger than a PlayStation 4 controller.
Ford
Even engineering experts at Ford were surprised by how dog-like Fluffy can be.
Ford
WHY DOES FORD NEED THEM?
Although plants generally don’t change that much over the years, Goderis says, smaller changes take place over time and eventually become noticeable to those who work in them every day.
“It's like when you get up in the dark to do something in your house. You know how to walk through your house. But say you’ve moved something, a rocking chair. You kick it in the middle of the night because it's dark,” Goderis says.
The changes can be “as small as if you took a trash can and moved it from one location to another. But then we release a new trim level addition (used by car manufacturers to track the variety of special features on each car model), so you get a new part content on the line. And you literally just slide that into a workstation.
When you're adjusting in the facility, after production starts on a new vehicle, a lot of the time the process kind of smooths out. And as it smooths out, and you move things around, and the CAD images don't get updated as accurately as they should.”
Fluffy can climb stairs for hours.
Ford
HOW WILL THEY SAVE FORD MONEY?
The problem is that old, manual methods of updating CAD images are pricey and time-consuming. Before the Boston Dynamics robots, one would need to “walk around with a tripod,” Goderis says.
“So think about a camera mounted on top of a tripod and you're posing for a family picture, but instead of having a camera we have a laser scanner on top of it. So we walk into a facility that's roughly 3 million square feet, and you would walk around with that tripod.”
That time-consuming process can work for family portraits, but it’s no good when it comes to car manufacturing. Even walking around at 3 MPH, Ford expects robotic Fluffy to cut down their camera times by half. That means faster designs, faster turnaround, and engineering teams getting plant designs faster. All of that means cars coming out faster.
And on top of that, the cameras will allow Fluffy’s video feed to be viewed remotely, meaning Ford engineers can, hypothetically, study plants thousands of miles away.
For now, Fluffy will start at a single plant, the Van Dyke Transmission Plant. But more dogs are likely in the company’s future.
Fabien Cousteau, the grandson of legendary ocean explorer Jacques Cousteau, wants to build the equivalent of the International Space Station (ISS) — but on the ocean floor deep below the surface, as CNN reports.
All images: Courtesy Proteus/Yves Béhar/Fuseproject
With the help of industrial designer Yves Béhar, Cousteau unveiled his bold ambition: a 4,000 square foot lab called Proteus that could offer a team of up to 12 researchers from all over the world easy access to the ocean floor. The plan is to build it in just three years.
The most striking design element of their vision is a number of bubble-like protruding pods, extending from two circular structures stacked on top of each other. Each pod is envisioned to be assigned a different purpose, ranging from medical bays to laboratories and personal quarters.
“We wanted it to be new and different and inspiring and futuristic,” Béhar told CNN. “So [we looked] at everything from science fiction to modular housing to Japanese pod [hotels].”
The team claims Proteus will feature the world’s first underwater greenhouse, intended for growing food for whoever is stationed there.
Power will come from wind, thermal, and solar energy.
“Ocean exploration is 1,000 times more important than space exploration for — selfishly — our survival, for our trajectory into the future,” Cousteau told CNN. “It’s our life support system. It is the very reason why we exist in the first place.”
Space exploration gets vastly more funding than its oceanic counterpart, according to CNN, despite the fact that humans have only explored about five percent of the Earth’s oceans — and mapped only 20 percent.
The Proteus would only join one other permanent underwater habitat, the Aquarius off the coast of Florida, which has been used by NASA to simulate the lunar surface.
19-07-2020
You Might Have Never Seen Machines Doing These Kinds Of Incredible Things
In today’s world, technology is evolving faster than ever before and humans are powering it. Brilliant minds all around the world innovate day and night to produce the most advanced machines and equipment that can make our lives easier and our work more efficient. Sure, technology can get terrifying if you think of what it can do, such as tearing down entire forests. But it’s also pretty amazing – we use machines to build bridges where humans just can’t on their own. Stick around to learn more about the top 12 most useful machines that help humans do incredible things!
By tinkering with the genetics of human cells, a team of scientists gave them the ability to camouflage.
To do so, they took a page out of the squid’s playbook, New Atlas reports. Specifically, they engineered the human cells to produce a squid protein known as reflectin, which scatters light to create a sense of transparency or iridescence.
Not only is it a bizarre party trick, but figuring out how to gene-hack specific traits into human cells gives scientists a new avenue to explore how the underlying genetics actually works.
Invisible Man
It would be fascinating to see this research pave the way to gene-hacked humans with invisibility powers — but sadly that’s not what this research is about. Rather, the University of California, Irvine biomolecular engineers behind the study think their gene-hacking technique could give rise to new light-scattering materials, according to research published Tuesday in the journal Nature Communications.
Or, even more broadly, the research suggests scientists investigating other genetic traits could mimic their methodology, presenting a means to use human cells as a sort of bioengineering sandbox.
Biological Sandbox
That sandbox could prove useful, as the Irvine team managed to get the human cells to fully integrate the structures producing the reflectin proteins. Basically, the gene-hack fully took hold.
“Through quantitative phase microscopy, we were able to determine that the protein structures had different optical characteristics when compared to the cytoplasm inside the cells,” Irvine researcher Alon Gorodetsky told New Atlas, “in other words, they optically behaved almost as they do in their native cephalopod leucophores.”
At the end of April, the artificial intelligence development firm OpenAI released a new neural net, Jukebox, which can create mashups and original music in the style of over 9,000 bands and musicians.
Alongside it, OpenAI released a list of sample tracks generated with the algorithm that bend music into new genres or even reinterpret one artist’s song in another’s style — think a jazz-pop hybrid of Ella Fitzgerald and Céline Dion.
It’s an incredible feat of technology, but Futurism’s editorial team was unsatisfied with the tracks OpenAI shared. To really kick the tires, we went to CJ Carr and Zack Zukowski, the musicians and computer science experts behind the algorithmically-generated music group DADABOTS, with a request: We wanted to hear Frank Sinatra sing Britney Spears’ “Toxic.”
And boy, they delivered.
An algorithm that can create original works of music in the style of existing bands and artists raises unexplored legal and creative questions. For instance, can the artists that Jukebox was trained on claim credit for the resulting tracks? Or are we experiencing the beginning of a brand-new era of music?
“There’s so much creativity to explore there,” Zukowski told Futurism.
Below is the resulting song, in all its AI-generated glory, followed by Futurism’s lightly-edited conversation with algorithmic musicians Carr and Zukowski.
Futurism: Thanks for taking the time to chat, CJ and Zack. Before we jump in, I’d love to learn a little bit more about both of you, and how you learned how to do all this. What sort of background do you have that lent itself to AI-generated music?
Zack Zukowski: I think we’re both pretty much musicians first, but also I’ve been involved in tech for quite a while. I approached my machine learning studies from an audio perspective: I wanted to extend what was already being done with synthesis and music technology. It seemed like machine learning was obviously the path that was going to make the most gains, so I started learning about those types of algorithms. SampleRNN is the tool we most like to use — that’s one of our main tools that we’ve been using for our livestreams and our Bandcamp albums over the last couple years.
CJ Carr: Musician first, motivated in computer science to do new things with music. DADABOTS itself comes out of hackathon culture. I’ve done 65 hackathons, and Zack and I together have won 15 or so. That environment inspires people to push what they’re doing in some new way, to do something provocative. That’s the spirit DADABOTS came out of in 2012, and we’ve been pushing it further and further as the tech has progressed.
Why did you make the decision to step up from individual hackathons and stick with DADABOTS? Where did the idea come from for your various projects?
CJ: When we started it, we were both interns at Berklee College of Music working in music tech. When I met Zack — for some reason it felt like I’ve known Zack my whole life. It was a natural collaboration. Zack knew more about signal processing than I did, I knew more about programming, and now we have both brains.
What’s your typical approach? What’s going on behind the scenes?
CJ: SampleRNN has been our main tool. It’s really fast to train — we can train it in a day or two on a new artist. One of the main things we love to do is collaborating with artists, when an artist says “hey I’d love to do a bot album.” But recently, Jukebox trumped the state of the art in music generation. They did a really good job.
SampleRNN and Jukebox, they’re similar in that they’re both sequence generators. It’s reading a sequence of audio at 44.1k or 16k sample rate, and then it’s trying to predict what the next sample is going to be. This net is making a decision at a fraction of a millisecond to come up with the next sample. This is why it’s called neural synthesis. It’s not copying and pasting audio from the training data, it’s learning to synthesize.
What’s different about them is that SampleRNN uses “Long Short Term Memory” (LSTM) architecture, whereas the jukebox uses a transformer architecture. The transformer has attention. This is a relatively new thing that’s come to popularity in deep learning, after RNN, after LSTM. It especially took over for language models. I don’t know if you remember fake news generators like GPT-2 and Grover. They use transformer architecture. Many of the language researchers left LSTM behind. No one had really applied it to audio music yet — that’s the big enhancement for Jukebox. They’re taking a language architecture and applying it to music.
They’re also doing this extra thing, called a “Vector-Quantized Variational AutoEncoder” (VQ-VAE). They’re trying to turn audio into language. They train a model that creates a codebook, like an alphabet. And they take this alphabet, which is a discrete set of 2048 symbols — each symbol is something about music — and then they train their transformer models on it.
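To make the two ideas CJ describes a bit more concrete, here is a minimal, hypothetical Python sketch: an autoregressive loop that predicts audio one sample at a time, and a VQ-VAE-style codebook that maps short audio frames onto one of 2048 discrete symbols. The toy predictor, frame size and random codebook are illustrative placeholders, not OpenAI's actual Jukebox implementation.

```python
import numpy as np

# --- Idea 1: autoregressive generation, one audio sample at a time ---
# A stand-in "model" predicts the next sample from a context window.
# (SampleRNN / Jukebox use trained neural nets here; this shows only the control flow.)
def predict_next_sample(context):
    return 0.9 * context[-1]          # toy predictor: a decaying echo of the last sample

def generate_audio(seed, n_samples=16000):   # roughly one second at a 16 kHz sample rate
    audio = list(seed)
    for _ in range(n_samples):
        audio.append(predict_next_sample(audio[-1024:]))
    return np.array(audio)

wave = generate_audio([0.0, 1.0])
print(wave.shape)                             # (16002,) samples of "audio"

# --- Idea 2: a VQ-VAE-style codebook turns audio frames into "letters" ---
# 2048 code vectors act like an alphabet; each frame is replaced by the index of its
# nearest code vector, giving a discrete sequence a transformer can model like text.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(2048, 64))        # 2048 symbols, 64-dim embeddings (made up)

def quantize(frames):                         # frames: array of shape (n_frames, 64)
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)               # one symbol index per frame

frames = rng.normal(size=(10, 64))            # pretend these are encoded audio frames
print(quantize(frames))                       # the "text" a transformer learns to continue
```

In Jukebox the encoder and codebook are learned from data rather than random, and the transformer is then trained to predict the next symbol in these discrete sequences, much as a language model predicts the next word.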
What does that alphabet look like? What is that “something about music?”
CJ: They didn’t do that analysis at all. We’re really curious. For instance, can we compose with it?
Zack: We have these 2048 characters, and so we wonder which ones are commonly used. Like in the alphabet we don’t use Zs too much. But what are the “vowels?” Which symbols are used frequently? It would be really interesting to see what happens when you start getting rid of some of these symbols and see what the net can do with what remains. The way we have the language of music theory with chords and scales, maybe this is something that we can compose with beyond making deepfakes of an artist.
What can that language tell us about the underlying rules and components of music, and how can we use these as building blocks themselves? They’re much higher-level than chords — maybe they’re genre-related. We really don’t know. It would be really cool to do that analysis and see what happens by using just a subset of the language.
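The symbol-frequency question Zack raises can be sketched in a few lines of Python: take the discrete code sequences extracted from a corpus of audio, count how often each of the 2048 symbols occurs, and inspect the most and least used ones. The token sequences below are random placeholders standing in for real Jukebox codes.

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(1)
# Placeholder data: pretend each array holds the codebook indices extracted from one song.
token_sequences = [rng.integers(0, 2048, size=5000) for _ in range(20)]

counts = Counter()
for seq in token_sequences:
    counts.update(seq.tolist())

total = sum(counts.values())
most_common = counts.most_common(10)                    # the "vowels" of the learned alphabet
unused = [s for s in range(2048) if counts[s] == 0]     # symbols this sample never needs

print("top symbols (index, share):",
      [(sym, round(c / total, 4)) for sym, c in most_common])
print(len(unused), "of 2048 symbols never appear in this sample")
```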
CJ: They’ve come up with a new music theory.
Well, it sounds like the three of us have a lot of the same questions about all this. Have you started tinkering with it to learn what’s going on?
CJ: We’ve just got the code running. The first example is this Sinatra thing. But as we use this more, the philosophical implications here are that as musicians, we know intuitively that music is very language-like. It’s not just waves and noise, which is what it looks like at a small scale, but when we’re playing we’re communicating with each other. The bass and the drummer are in step, strings and vocals can be doing call-and-response. And OpenAI was just like “Hey, what if we treated music like language?”
If the sort of alphabet this algorithm uses could be seen as a new music theory, do you think this will be a tool for you two going forward? Or is it more of an oddity to play around with?
CJ: Maybe I should correct myself. Instead of being a music theory, these models can train music theory.
Zack: The theory isn’t something that we can explain right now. We can’t say “This value means this.” It’s not quite as human interpretable, I guess.
CJ: The model just learns probabilistic patterns, and that’s what music theory is. It’s that these notes tend to have these patterns and produce these feelings. And those were human-invented. What if we just have a machine try to discover that on its own, and then we ask it to make music? And if it’s good at it, probably it’s learned a good quote-unquote “music theory.”
Zack: An analogy we thought of: Back in the days of Bach, and these composers who were really interested in having counterpoint — many voices moving in their own direction — they had a set of rules for this. The first melodic line the composer builds off is called cantus firmus. There was an educational game new composers would play — if you could follow the notes that were presented in the cantus firmus and guess what harmonizing notes were next, you’d be correct based on the music of the day.
We’re thinking this is kind of the machine version of that, in some ways. Something that can be used to make new music in the style of music that has been heard before.
I know it’s early days and that this is speculative, but do you have any predictions for how people might use Jukebox? Will it be more of these mashups, or do you think people will develop original compositions?
CJ: On the one hand, you have the fear of push-button art. A lot of people think push-button art is very grotesque. But I think push-button art, when a culture can achieve this — it’s a transcendent moment for that culture. It means the communication of that culture has achieved its capacity. Think about meme generators — I can take a picture of Keanu Reeves, put in some inside joke and send it to my friends, and then they can understand and appreciate what I’m communicating. That’s powerful. So it is grotesque, but it’s effectual.
On the other side, you’ll have these virtuosos — these creators — who are gonna do overkill and try to create a medium of art that’s never existed before. What interests us are these 24/7 generators, where it can just keep generating forever.
Zack: I think it’s an interesting tool for artists who have worked on a body of albums. There are artists who don’t even know they can be generated on Jukebox. So, I think many of them would like to know what can be generated in their likeness. It can be a variation tool, it can recreate work for an artist through a perspective they haven’t even heard. It can bend their work through similar artists or even very distantly-stylized artists. It can be a great training tool for artists.
You said you’d heard from some artists who approached you to generate music already — is that something you can talk about?
CJ: When bands approach us, they’ve mostly been staying within the lane of “Hey, use just my training data and let’s see what comes out — I’m really interested.”
Fans though, on YouTube, are like “Here’s a list of my four favorite bands, please make me something out of it.”
So, let’s talk about the actual track you made for us. For this new song, Futurism suggested Britney Spears’ “Toxic” as sung by Frank Sinatra. Did the technical side of pulling that together differ from your usual work?
CJ: This is different. With SampleRNN, we’re retraining it from scratch on usually one artist or one album. And that’s really where it shines — it’s not able to do these fusions very well. What OpenAI was able to do — with a giant multimillion-dollar compute budget — they were able to train these giant neural nets. And they trained them on over 9,000 artists in over 300 genres. You need a mega team with a huge budget just to make this generalizable net.
Zack: There are two options. There’s lyrics and no lyrics. No lyrics is sort of like how SampleRNN has worked. With lyrics it tries to get them all in order, but sometimes it loops or repeats. But it tries to go beginning to end and keep the flow going. If you have too many lyrics, it doesn’t understand. It doesn’t understand that if you have a chorus repeating, the music should repeat as well. So we find that these shorter compositions work better for us.
But you had lyrics in past projects that used SampleRNN, like “Human Extinction Party.” How did that differ?
CJ: That was smoke and mirrors.
Zack: That was kind of an illusion. The album we trained it on had vocals, so some of those made it through too. We had a text generator that made up lyrics whenever it heard a sound.
In a lot of these Jukebox mashups, I’ve noticed that the voice sounds sort of strained. Is that just a matter of the AI-generated voice being forced to hit a certain note, or does it have something more to do with the limitations of the algorithm itself?
Zack: Your guess sounds similar to what I’d say. It was probably just really unlikely that those lyrics or the phonemes, the sounds themselves of the words, showed up in a similar way to how we were forcing it to generate those syllables. It probably heard a lot more music that isn’t Frank Sinatra, so it can imagine some things that Frank Sinatra didn’t do. But it just comes down to being somewhat different from any of the original Frank Sinatra texts.
When you were creating this rendition of Toxic, did you hit any snags along the way? Or was it just a matter of giving the algorithm enough time to do its work?
CJ: Part of it is we need a really expensive piece of hardware that we need to rent on Amazon Cloud at three dollars per hour. And it takes — how long did it take to generate, Zack?
Zack: The final one I had generated took about a day, but I had been doing it over and over again for a week. You have so little control that sometimes you just gotta go again. It would get a few phrases and then it would lose track of the lyrics. Sometimes you’d get two lines but not the whole chorus in a row. It came down to luck — waiting for the right one to come along.
It could loop a line, or sometimes it could go into seemingly different songs. It would completely lose track of where it was. There are some pretty wild things that can happen. One time I was generating Frank Sinatra, and it was clearly a chorus of men and women together. It wasn’t even the right voice. It can get pretty ghostly.
Do you know if there are any legal issues involved in this kind of music? The capability to generate new music in the style or voice of an artist seems like uncharted territory, but are there issues with the mashups that use existing lyrics? Or are those more acceptable under the guise of fair use, sort of like parody songs?
CJ: We’re not legal people, we haven’t studied copyright issues. The vibe is that there’s a strong case for fair use, but artists may not like people creating these deepfakes.
Zack: I think it comes down to intention, and whatever the law decides they’ll decide. But as people using this tool, artists, there’s definitely a code of ethics that people should probably respect. Don’t piss people off. We try our best to cite the people who worked on the tech, the people who it was trained on. It all just depends how you’re putting it out and how respectful you’re being of people’s work.
Before I let you go, what else are you two working on right now?
CJ: Our long-term research is trying to make these models faster and cheaper so bedroom producers and 12-year-olds can be making music no one’s ever thought of. Of course, right now it’s very expensive and it takes days. We’re in a privileged position of being able to do it with the rented hardware.
Specifically, what we’re doing right now — there’s the list of 9,000-plus bands that the model currently supports. But what’s interesting is the bands weren’t asked to be a part of this dataset. Some machine learning researchers on Twitter were debating the ethics of that. There are two sides of that, of course, but we really want to reach out to those bands. If anyone knows these bands, if you are these bands, we will generate music for you. We want to take this technology, which we think is capable of brand-new forms of creativity, and give it back to artists.
A team of researchers from the Higher School of Economics University and Open University in Moscow, Russia claim they have demonstrated that an artificial intelligence can make accurate personality judgments based on selfies alone — more accurately than some humans.
The researchers suggest the technology could be used to help match people up in online dating services or help companies sell products that are tailored to individual personalities.
That’s apropos, because two co-authors listed on a paper about the research published today in Scientific Reports — a journal run by Nature — are affiliated with a Russian AI psychological profiling company called BestFitMe, which helps companies hire the right employees.
As detailed in the paper, the team asked 12,000 volunteers to complete a questionnaire that they used to build a database of personality traits. To go along with that data, the volunteers also uploaded a total of 31,000 selfies.
The questionnaire was based around the “Big Five” personality traits, five core traits that psychological researchers often use to describe subjects’ personalities, including openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism.
After training a neural network on the dataset, the researchers found that it could accurately predict personality traits based on “real-life photographs taken in uncontrolled conditions,” as they write in their paper.
While accurate, the precision of their AI leaves something to be desired. They found that their AI “can make a correct guess about the relative standing of two randomly chosen individuals on a personality dimension in 58% of cases.”
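That 58% figure is a pairwise-ranking accuracy: pick two random people, check whether the model orders them on a trait the same way their questionnaire scores do, and average over many pairs, with 50% being pure chance. Here is a small illustrative Python sketch of how such a number is computed, using made-up scores rather than the study's data:

```python
import random

def pairwise_accuracy(true_scores, predicted_scores, n_pairs=100_000, seed=0):
    """Fraction of random pairs whose ordering the prediction gets right (chance = 0.5)."""
    rng = random.Random(seed)
    n, correct, counted = len(true_scores), 0, 0
    for _ in range(n_pairs):
        i, j = rng.randrange(n), rng.randrange(n)
        if true_scores[i] == true_scores[j]:
            continue                              # ties carry no ordering information
        counted += 1
        correct += (true_scores[i] > true_scores[j]) == (predicted_scores[i] > predicted_scores[j])
    return correct / counted

# Made-up example: a deliberately noisy predictor of one Big Five trait for 1,000 people.
rng = random.Random(42)
truth = [rng.gauss(0, 1) for _ in range(1000)]
noisy = [t + rng.gauss(0, 3.5) for t in truth]    # weak signal, so accuracy lands only a little above 0.5
print(round(pairwise_accuracy(truth, noisy), 3))
```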
That result isn’t exactly groundbreaking — but it’s a little better than just guessing, which is vaguely impressive.
Strikingly, the researchers claim their AI is better at predicting the traits than humans. While rating personality traits by human “close relatives or colleagues” was far more accurate than when rated by strangers, they found that the AI “outperforms an average human rater who meets the target in person without any prior acquaintance,” according to the paper.
Considering the woeful accuracy, and the fact that some of the authors listed on the study are working on commercializing similar tech, these results should be taken with a hefty grain of salt.
Neural networks have generated some impressive results, but any research that draws self-serving conclusions — especially when they require some statistical gymnastics — should be treated with scrutiny.
Patent showing laser decoy system
US NAVY PATENT
The U.S. Navy has patented technology to create mid-air images to fool infrared and other sensors. This builds on many years of laser-plasma research and offers a game-changing method of protecting aircraft from heat-seeking missiles. It may also provide a clue about the source of some recent UFO sightings by military aircraft.
The U.S. developed the first Sidewinder heat-seeking missile back in the 1950s, and the latest AIM-9X version is still in frontline service around the world. This type of sensor works so well because hot jet engine exhausts shine out like beacons in the infrared, making them easy targets. Pilots under attack can eject decoy flares to lure a missile away from the launch aircraft, but these only provide a few seconds of protection. More recently, laser infrared countermeasures have been fielded which dazzle the infrared seeker.
A sufficiently intense laser pulse can ionize the air, producing a burst of glowing plasma. The Laser Induced Plasma Effects program uses single plasma bursts as flash-bang stun grenades; a rapid series of such pulses can even be modulated to transmit a spoken message (video here). In 2011 Japanese company Burton Inc demonstrated a rudimentary system that created moving 3D images in mid-air with a series of rapidly-generated plasma dots (video here).
Video (1:34): ‘Talking lasers and endless flashbangs: Pentagon develops plasma tech’ (‘Military Times’, YouTube)
Video (1:53): ‘True 3D Display in the Mid-Air Using Laser Plasma Technology’ (‘Deepak Gupta’, YouTube)
A more sophisticated approach uses an intense, ultra-short, self-focusing laser pulse to create a glowing filament or channel of plasma, an effect discovered in the 1990s. Known as laser-induced plasma filaments (LIPFs), these can be created tens or hundreds of meters away from the laser. Because LIPFs conduct electricity, they have been investigated as a means of triggering lightning or creating a lightning gun.
US Army 'lightning gun' experiment with laser-generated plasma channel
US ARMY
One of the interesting things about LIPFs is that with suitable tuning they can emit light of any wavelength: visible, infrared, ultraviolet or even terahertz waves. This technology underlies the Navy project, which uses LIPFs to create phantom images with infrared emissions to fool heat-seeking missiles.
The Navy declined to discuss the project, but the work is described in a 2018 patent: “wherein a laser source is mounted on the back of the air vehicle, and wherein the laser source is configured to create a laser-induced plasma, and wherein the laser-induced plasma acts as a decoy for an incoming threat to the air vehicle.”
The patent goes on to explain that the laser creates a series of mid-air plasma columns, which form a 2D or 3D image by a process of raster scanning, similar to the way old-style cathode ray TV sets display a picture.
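In other words, the projector sweeps a grid of possible plasma points row by row and only "lights" the ones belonging to the decoy shape, redrawing the whole pattern fast enough that a sensor sees one solid image. Here is a rough conceptual Python sketch of that firing order; nothing in it comes from the patent, and the grid, mask and loop are invented purely for illustration:

```python
# Conceptual raster scan of a 2D decoy image: visit the grid row by row and fire a
# plasma "pixel" wherever the mask is set. The grid, mask and frame loop are invented
# for illustration; the patent does not publish such parameters.
decoy_mask = [
    "....XX....",
    "..XXXXXX..",
    "XXXXXXXXXX",
    "..XXXXXX..",
    "....XX....",
]

def raster_scan(mask):
    """Yield (row, col) firing positions in raster order, like a CRT sweep."""
    for row, line in enumerate(mask):
        for col, cell in enumerate(line):
            if cell == "X":
                yield row, col

for frame in range(3):                      # redraw the whole silhouette every "frame"
    for row, col in raster_scan(decoy_mask):
        pass                                # here: steer the laser to (row, col) and trigger a burst

print(len(list(raster_scan(decoy_mask))), "plasma points per frame")
```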
A single decoy halves the chances of an incoming missile picking the right target, but there is no reason to stop at one: “There can be multiple laser systems mounted on the back of the air vehicle with each laser system generating a ‘ghost image’ such that there would appear to be multiple air vehicles present.”
Unlike flares, the LIPF decoy can be created instantly at any desired distance from the aircraft, and can be moved around at will. Equally importantly, it moves with the aircraft, rather than dropping away rapidly like a flare, providing protection for as long as needed.
The aircraft carrying the laser projector could also project decoys to cover other targets: “The potential applications of this LIP flare/decoy can be expanded, such as using a helicopter deploying flares to protect a battleship, or using this method to cover and protect a whole battle-group of ships, a military base or an entire city.”
The lead researcher in the patent is Alexandru Hening. A 2017 piece in the Navy’s own IT magazine says that Dr. Hening has been working on laser-induced plasma at Space and Naval Warfare Systems Center Pacific since 2012.
“If you have a very short pulse you can generate a filament, and in the air that can propagate for hundreds of meters, and maybe with the next generation of lasers you could produce a filament of even a mile,” Dr. Hening told the magazine, indicating that it should be possible to create phantoms at considerable distances.
Phantom aircraft that can move around at high speed and appear on thermal imagers may ring some bells. After months of debate, in April the Navy officially released infra-red videos of UFOs encountered by their pilots, although the Pentagon prefers to call them “unidentified aerial phenomena.” The objects in the videos appear to make sudden movements impossible for physical aircraft, rotate mid-air and zip along at phenomenal speed: all maneuvers which would be easy to reproduce with a phantom projected image.
It is unlikely the Pentagon would release videos of their own secret weapon in a bizarre double bluff. But other nations may have their own version. In the early 1990s the Russians claimed that they could produce glowing ‘plasmoids’ at high altitude using high-power microwave or laser beams; these were intended to disrupt the flight of ballistic missiles, an answer to the planned American ‘Star Wars’. Nothing came of the project, but the technology may have been refined for other applications in the subsequent decades.
Heat-seeking missiles will no doubt evolve ways to distinguish the plasma ghosts from real jets, leading to further refinement of the decoy technology, and so on. Whether humans also get smart enough to recognize such fakes remains to be seen.
Researchers say they’ve created a proof-of-concept bionic eye that could surpass the sensitivity of a human one.
“In the future, we can use this for better vision prostheses and humanoid robotics,” researcher Zhiyong Fan, at the Hong Kong University of Science and Technology, told Science News.
The eye, as detailed in a paper published in the prestigious journal Nature today, is in essence a three dimensional artificial retina that features a highly dense array of extremely light-sensitive nanowires.
The team, led by Fan, lined a curved aluminum oxide membrane with tiny sensors made of perovskite, a light-sensitive material that’s been used in solar cells.
Wires that mimic the brain’s visual cortex relay the visual information gathered by these sensors to a computer for processing.
The nanowires are so sensitive they could surpass the optical wavelength range of the human eye, allowing it to respond to 800 nanometer wavelengths, the threshold between visible light and infrared radiation.
That means it could see things in the dark when the human eye can no longer keep up.
“A human user of the artificial eye will gain night vision capability,” Fan told Inverse.
The researchers also claim the eye can react to changes in light faster than a human one, allowing it to adjust to changing conditions in a fraction of the time.
Each square centimeter of the artificial retina can hold about 460 million nanosize sensors, dwarfing the estimated 10 million cells in the human retina. This suggests that it could surpass the visual fidelity of the human eye.
Fan told Inverse that “we have not demonstrated the full potential in terms of resolution at this moment,” promising that eventually “a user of our artificial eye will be able to see smaller objects and further distance.”
Other researchers who were not involved in the project pointed out that plenty of work still has to be done to eventually be able to connect it to the human visual system, as Scientific American reports.
But some are hopeful.
“I think in about 10 years, we should see some very tangible practical applications of these bionic eyes,” Hongrui Jiang, an electrical engineer at the University of Wisconsin–Madison who was not involved in the research, told Scientific American.
Imagine you are out at an outdoor event, perhaps a BBQ or camping trip and a bug keeps flying by your face. You try to ignore it at first, perhaps lazily swat at it, but it keeps coming back for more. This is nothing unusual, as bugs have a habit of ruining the outdoors for people, but then it lands on your arm. Now you can see it doesn’t exactly look like a regular fly, something is off about it. You lean in, peer down at the little insect perched upon your arm, and that is when you notice that it is peering right back at you, with a camera in place of eyes. Welcome to the future of drone technology, with robotic flies and more, and it is every bit as weird as it sounds.
Everyone is familiar with drones nowadays. They seem to be everywhere, and they are getting smaller and cooler as time goes on, but how small can they really get, some may wonder. Well, looking at the trends in the technology these days, it seems that they can get very small, indeed. One private research team called Animal Dynamics has been working on tiny drones that use the concept of biomechanics, that is, mimicking the natural movements of insects and birds in nature. After all, what better designer is there than hundreds of millions of years of evolution? A prime example of this is one of their drones that aims to copy the shape and movements of a dragonfly, a drone called the “Skeeter.” The drone is launched by hand; its design allows it to maintain flight in high winds of more than 20 knots (23 mph or 37 km/h) thanks to its close approximation of an actual dragonfly, and its multiple wings give it deft movement control. One of the researchers who helped design it, Alex Caccia, has said of its biomechanical design:
The way to really understand how a bird or insect flies is to build a vehicle using the same principles. And that’s what we set up Animal Dynamics to do. Small drones often have problems maneuvering in heavy wind. Yet dragonflies don’t have this problem. So we used flapping wings to replicate this effect in our Skeeter. Making devices with flapping wings is very, very hard. A dragonfly is an awesome flyer. It’s just insane how beautiful they are, nothing is left to chance in that design. It has very sophisticated flight control.
In addition to its small size and sophisticated controls, the Skeeter also can be equipped with a camera and communications links, using the type of miniaturized tech found in mobile smartphones. Currently the Skeeter measures around 8 inches long, but of course the team is working on smaller, lighter versions. As impressive as it is, Skeeter is not even the smallest insect drone out there. Another model designed by a team at the Delft University of Technology is called the “Delfly,” and weighs less than 50 grams. The Delfly is meant to copy the movements of a fruit fly, and has advanced software that allows it to autonomously fly about and avoid obstacles on its four cutting edge wings, fashioned from ultra-light transparent foil. The drone has been designed for monitoring agricultural crops, and is equipped with a minuscule camera. The team behind the Delfly hope to equip it with dynamic AI that will allow it to mimic the way an insect erratically flies about and avoids objects, and it seems very likely someone could easily mistake it for an actual fly. The only problem it faces at the moment is that it is so small that it has limited battery life, only able to stay aloft for 6 to 9 minutes at a time.
Indeed, this is the challenge that any sophisticated technology faces; the limitations of battery life. There is only so small you can make a battery before its efficiency is compromised, no matter how light and small the equipment, and it is a problem we are stuck with until battery technology is seriously upgraded. In fact, many of the prototype insect drones currently rely on being tethered to an external power source for the time being. But what if your drone doesn’t need batteries at all? That is the idea behind another drone designed by engineers at the University of Washington, who have created a robotic flying insect, which they call the RoboFly, that does not rely on any battery or external power source at all. Instead, the drone, which is about the same weight as a toothpick, rides about on a laser beam. This beam is invisible, and is aimed at a photovoltaic cell on the drone, which is then amplified with a circuit and is enough to power its wings and other components. However, even with such a game changing development, the RoboFly, and indeed all insect-sized unmanned aerial vehicles (UAVs), which are usually referred to as micro aerial vehicles (MAVs), still face some big challenges going ahead. Sawyer Fuller, leader of the team that created the RoboFly and director of the slightly ominous sounding Autonomous Insect Robotics Laboratory, has said of this:
A lot of the sensors that have been used on larger robots successfully just aren’t available at fly size. Radar, scanning lasers, range finders — these things that make the perfect maps of the world, that things like self-driving cars use. So we’re going to have to use basically the same sensor suite as a fly uses, a little camera.
However, great progress is being made, and these little drones are becoming more sophisticated in leaps and bounds, with the final aim being a fully autonomous flying insect robot that can more or less operate on its own or with only minimal human oversight. Fuller is very optimistic about the prospects, saying, “For full autonomous I would say we are about five years off probably.” Such a MAV would have all manner of applications, including surveillance, logistics, agriculture, taking measurements in hostile environments that a traditional drone can’t fit into or operating in hazardous environments, finding victims of earthquakes or other natural disasters, planetary exploration, and many others. Many readers might be thinking about now whether the military has any interest in all of this, and the answer is, of course they do.
The use of these MAVs is seen as very promising to the military, and the U.S. government has poured over a billion dollars of funding into such research. Indeed, Animal Dynamics has been courted by the military with funding, and the creators of the RoboFly have also received generous funding for their research. The U.S. government’s own Defense Advanced Research Projects Agency (DARPA) has been pursuing the technology for years, as have other countries. On the battlefield MAVs have obvious applications, such as spying and reconnaissance, but they are also seen as having other uses, such as attaching to enemies to serve as tracking devices or very literal “bugs,” attaching tags to enemy vehicles to make targeting easier, taking DNA samples, or even administering poisons or dangerous chemical or biological agents. There are quite a few world governments who are actively pursuing these insect drones, and one New Zealand-based strategic analyst, Paul Buchanan, has said of this landscape:
The work on miniaturization began decades ago during the Cold War, both in the USA and USSR, and to a lesser extent the UK and China. The idea then and now was to have an undetectable and easily expendable weapons delivery or intelligence collection system. Nano technologies in particular have seen an increase in research on miniaturized UAVs, something that is not exclusive to government scientific agencies, but which also has sparked significant private sector involvement. That is because beyond the military, security and intelligence applications of miniaturized UAVs, the commercial applications of such platforms are potentially game changing. Within a few short years the world will be divided into those who have them and those who do not, with the advantage in a wide range of human endeavor going to the former.
While all of this is still at the prototype stage, and as far as we know there are no working models in the field yet, some conspiracy theorists believe the technology is not something for the future at all, but has already been perfected and is being used against an unsuspecting populace at this very moment. For instance, the Washington Post reported in 2007 that several witnesses at an anti-war rally claimed to have seen tiny drones like dragonflies or bumblebees darting about. One of these witnesses said:
I look up and I’m like, ‘what the hell is that?’ They looked like dragonflies or little helicopters, but I mean, those are not insects. They were large for dragonflies and I thought, ‘is that mechanical or is that alive?’
Such supposed sightings of these tiny drones have increased in recent years, fueling the idea that the technology is already being used to spy on us, though of course the governments and research institutes behind it all insist that working models are still a thing of the future. It is nonetheless a scary thought, scary enough to instill paranoia, which reports like these only feed. One famous meme that caused a lot of panic in 2019 was a post from a Facebook user in South Africa, showing an eerily mosquito-like robot perched on a human finger, accompanied by the text:
Is this a mosquito? No. It’s an insect spy drone for urban areas, already in production, funded by the US government. It can be remotely controlled and is equipped with a camera and a microphone. It can land on you, and may have the potential to take a DNA sample or leave RFID tracking nanotechnology on your skin. It can fly through an open window, or it can attach to your clothing until you take it home.
The post went viral, with rampant speculation about whether it was true. The debunking site Snopes concluded that the photo was fake and the post just a fictional meme, but others are not so sure, reigniting the debate over whether this is, or ever will be, a reality, and whether it ever should be. Regardless of the ethical and privacy concerns of having insect-sized spy drones flying around, with all of the money and effort being poured into this technology, the question of whether we will really have mosquito-sized robots buzzing about seems to be not one of if, but of when. Perhaps they are even here already. So the next time you are out at a BBQ and that annoying fly keeps buzzing past your head, you might just want to take a closer look. Just in case.
06-05-2020
Tom Cruise is Literally Going to Outer Space to Shoot an Action Movie with Elon Musk’s SpaceX [Update]
Update: NASA administrator Jim Bridenstine says that this project will involve the International Space Station.
Jim Bridenstine (@JimBridenstine):
NASA is excited to work with @TomCruise on a film aboard the @Space_Station! We need popular media to inspire a new generation of engineers and scientists to make @NASA’s ambitious plans a reality.
The global superstar is set to literally leave the globe to star in a new movie which will be shot in space – and he’s teaming up with Elon Musk‘s SpaceX company to make it happen.
Deadline reports that this new Tom Cruise space movie is not a Mission: Impossible project, and that no studio is involved yet because it’s still early in development. But Cruise and SpaceX are working on the action/adventure project with NASA, and if it actually happens, it will be the first narrative feature film to be shot in outer space.
This is not the first time Cruise has flirted with leaving the Earth to make a movie. Twenty years ago (context: the same year Mission: Impossible II came out), none other than James Cameron approached Cruise and asked if he’d be interested in heading to the great unknown to make a movie together.
“I actually talked to [Cruise] about doing a space film in space, about 15 years ago,” Cameron said in 2018. “I had a contract with the Russians in 2000 to go to the International Space Station and shoot a high-end 3D documentary there. And I thought, ‘S—, man, we should just make a feature.’ I said, ‘Tom, you and I, we’ll get two seats on the Soyuz, but somebody’s gotta train us as engineers.’ Tom said, ‘No problem, I’ll train as an engineer.’ We had some ideas for the story, but it was still conceptual.”
Obviously that project never came together, but it sounds like Cameron may have planted a seed that some other filmmaker might get to harvest.
The fact that Musk, who is often the butt of jokes about how it seems like he could be a villain in a James Bond movie, is involved here (or at least his company is, so one assumes he will at least get an executive producer credit) is almost too perfect. Remember Moonraker? Bond went to space in that one. It’s…pretty bad. Fingers crossed this will turn out much, much better.
My favorite thing about Cruise is that he is in constant pursuit of perfection. He doesn’t always achieve it – see: Mummy, The – but by God, the dude is willing to lay it all on the line to entertain worldwide audiences, and he’s really effin’ good at it. Here’s hoping this actually comes together, and I’m extremely curious if this will end up being another Cruise/Christopher McQuarrie collaboration or if Cruise trusts any other director to lead him to these unprecedented heights.
A cutting-edge implant has allowed a man to feel and move his hand again after a spinal cord injury left him partially paralyzed, Wired reports.
According to a press release, it’s the first time both motor function and sense of touch have been restored using a brain-computer interface (BCI), as described in a paper published in the journal Cell.
After severing his spinal cord a decade ago, Ian Burkhart had a BCI developed by researchers at Battelle, a private nonprofit specializing in medical tech, implanted in his brain in 2014.
The injury completely cut off the electrical signals traveling from Burkhart’s brain through the spinal cord to his hands. But the researchers figured they could bypass the spinal cord and hook Burkhart’s primary motor cortex up to his hands through a relay.
A port in the back of his skull sends signals to a computer. Special software decodes the signals and splits them into those corresponding to motion and those corresponding to touch. Both streams are then sent out to a sleeve of electrodes around Burkhart’s forearm.
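As a rough illustration of that decode-and-split step, the sketch below separates a window of recorded neural samples into a "motion" estimate and a "touch" estimate and routes each to its output. The feature choices, thresholds, and function names are invented for illustration only; Battelle's actual decoder is far more sophisticated than this.

```python
import numpy as np

# Minimal, hypothetical sketch of a decode-and-route step for a BCI:
# split decoded intent into a motion command and a touch (haptic) signal.
# Features and thresholds are invented; this is not Battelle's algorithm.

def decode_brain_signals(samples: np.ndarray) -> dict:
    """Split a window of neural samples into motion and touch estimates."""
    # Pretend rapid fluctuations encode intended grip force (motion)...
    motion_level = float(np.abs(np.diff(samples)).mean())
    # ...and overall amplitude encodes sub-perceptual touch.
    touch_level = float(np.abs(samples).mean())
    return {"motion": motion_level, "touch": touch_level}

def route_outputs(decoded: dict, motion_threshold=0.5, touch_threshold=0.2):
    """Send motion to the forearm electrode sleeve, touch to the haptic feedback."""
    commands = []
    if decoded["motion"] > motion_threshold:
        commands.append("stimulate forearm electrodes (close hand)")
    if decoded["touch"] > touch_threshold:
        commands.append("drive wearable haptic vibration (object contact)")
    return commands or ["no action"]

# Usage with synthetic data standing in for the skull-port recording:
window = np.random.randn(256) * 0.4
print(route_outputs(decode_brain_signals(window)))
```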
But making sense of these signals is extremely difficult.
“We’re separating thoughts that are occurring almost simultaneously and are related to movements and sub-perceptual touch, which is a big challenge,” Patrick Ganzer, lead researcher at Battelle, told Wired.
The team saw some early successes regarding movement — the initial goal of the BCI — allowing Burkhart to press buttons along the neck of a “Guitar Hero” controller.
But returning touch to his hand was a much more daunting task. By using a simple vibration device or “wearable haptic system,” Burkhart was able to tell if he was touching an object or not without seeing it.
“It’s definitely strange,” Burkhart told Wired. “It’s still not normal, but it’s definitely much better than not having any sensory information going back to my body.”
Science fiction has always been a medium for futuristic imagination, and while differently colored aliens and intergalactic travel have yet to be discovered, there is an array of technologies that are no longer figments of the imagination thanks to the world of science fiction. Some of the creative inventions that appeared in family-favorite movies like "Back to the Future" and "Total Recall" are now at the forefront of modern technology. Here are a few of our favorite technologies that went from science fiction to reality.
1. The mobile phone
The communicator was often used to call back to the USS Enterprise.
From: "Star Trek: The Original Series"
It's something that almost everyone has in their pockets. Mobile phones have become a necessity in modern life with a plethora of remarkable features. The first mobile phone was invented in 1973, the Motorola DynaTAC. It was a bulky thing that weighed 2.4 lbs. (1.1 kilograms) and had a talk time of about 35 minutes. It also cost thousands of dollars.
The Motorola DynaTAC was invented by Martin Cooper, who led a team that created the phone in just 90 days. A long-standing rumor was that Cooper got his inspiration from an episode of Star Trek where Captain Kirk used his hand-held communications device. However, Cooper stated in a 2015 interview that the original inspiration was from a comic strip called Dick Tracy, in which the character used a "wrist two-way radio."
2. The universal translator
Star Trek characters would often come across alien life with different languages. (Image credit: Paramount Pictures/CBS Studios)
From: "Star Trek: The Original Series"
While exploring space, characters such as Captain Kirk and Spock would come across alien life forms that spoke different languages. To understand these galactic foreigners, the Star Trek characters used a device that instantly translated the aliens' unfamiliar speech. Star Trek's universal translator was first seen on screen when Spock tampered with it in order to communicate with a non-biological entity (Season 2, Episode 9, "Metamorphosis").
Although the idea in Star Trek was to communicate with intelligent alien life, a device capable of breaking down language barriers would revolutionize real-time communication. Now, products such as Sourcenext's Pocketalk and Skype's new voice translation service are capable of providing instantaneous translation between languages. Flawless real-time communication is far off, but the technological advancements over the last decade mean this feat is within reach.
3. Teleportation
The transporter is an iconic feature of the original Star Trek series. (Image credit: Paramount/AF archive/Alamy Stock Photo)
From: "Star Trek: The Original Series"
The idea behind "beaming" someone up was that a person could be broken down into an energy form (dematerialization) and then converted back into matter at their destination (rematerialization). Transporting people this way on Star Trek's USS Enterprise had been around since the very beginning of the series, debuting in the pilot episode.
Scientists haven't figured out how to teleport humans yet, but they can teleport particles of light known as photons. In this case, teleportation is based on a phenomenon known as quantum entanglement. This refers to a condition in quantum mechanics where two entangled particles may be very far from one another yet remain connected, so that actions performed on one affect the other, regardless of distance. Experiments have shown that if any signal were passing between the entangled photons, it would have to travel at least 10,000 times faster than the speed of light.
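For readers curious how "teleporting" a quantum state works on paper, the short simulation below walks through the textbook quantum teleportation protocol (a shared entangled pair, a Bell-basis measurement, two classical bits, and corrective gates) using plain NumPy state vectors. It is a standard textbook sketch, not a model of the photon experiments mentioned above.

```python
import numpy as np

# Textbook teleportation of one qubit state, simulated with state vectors.
# Qubit 0: the state to send; qubits 1 and 2: a shared entangled (Bell) pair.

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# Random single-qubit state |psi> = a|0> + b|1> to teleport.
psi = np.random.randn(2) + 1j * np.random.randn(2)
psi /= np.linalg.norm(psi)

# Bell pair (|00> + |11>)/sqrt(2) shared between sender (qubit 1) and receiver (qubit 2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(psi, bell)  # 3-qubit state; qubit 0 is the most significant bit

# CNOT with qubit 0 as control and qubit 1 as target (qubit 2 untouched).
cnot_01 = np.zeros((8, 8), dtype=complex)
for i in range(8):
    b0, b1, b2 = (i >> 2) & 1, (i >> 1) & 1, i & 1
    j = (b0 << 2) | ((b1 ^ b0) << 1) | b2
    cnot_01[j, i] = 1
state = cnot_01 @ state
state = np.kron(np.kron(H, I), I) @ state  # Hadamard on qubit 0

# Measure qubits 0 and 1 (all four outcomes are equally likely).
probs = np.array([np.sum(np.abs(state[m * 2: m * 2 + 2]) ** 2) for m in range(4)])
m = int(np.random.choice(4, p=probs / probs.sum()))
m0, m1 = (m >> 1) & 1, m & 1
collapsed = state[m * 2: m * 2 + 2].copy()
collapsed /= np.linalg.norm(collapsed)  # receiver's qubit after the measurement

# Receiver applies X^m1 then Z^m0 to recover |psi>, using the two classical bits.
recovered = np.linalg.matrix_power(Z, m0) @ np.linalg.matrix_power(X, m1) @ collapsed
print(f"Fidelity with the original state: {abs(np.vdot(psi, recovered)):.6f}")  # ~1.0
```

Note that the two classical measurement bits still have to be sent by an ordinary channel, which is why teleportation does not transmit information faster than light.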
4. Holograms
This hologram of Princess Leia features the iconic line, "Help me, Obi-Wan Kenobi, you're my only hope." (Image credit: Lucasfilm/AF archive/Alamy Stock Photo)
From: "Star Wars: Episode IV — A New Hope"
Not long into the first Star Wars movie, Obi-Wan Kenobi receives a holographic message. By definition, a hologram is a 3D image created from the interference of laser light beams recorded on a 2D surface, and it can only be seen from one angle.
In 2018, researchers from Brigham Young University in Provo, Utah, created a real hologram. Their technique, called volumetric display, works like an Etch-A-Sketch toy, but uses particles moving at high speeds. With lasers, the researchers trap particles and move them into a designated shape, while another set of lasers emits red, green and blue light onto the particles to create an image. So far, however, this only works at extremely small scales.
5. Bionic prosthetics
Even though prosthetics had been in common use for a long time, Star Wars sparked the idea of bionic prosthetics. (Image credit: Disney/Lucasfilm)
From: "Star Wars: Episode V — The Empire Strikes Back"
Imagine getting your hand chopped off by your own father and falling to the bottom of a floating building to then have your long-lost sister come and pick you up. It's unlikely in reality, but not in the Star Wars movies. After losing his hand, Luke Skywalker receives a bionic version that has all the functions of a normal hand. This scenario is now more feasible than the previous one.
Researchers from the Georgia Institute of Technology in Atlanta, Georgia, have been developing a way for amputees to control each of their prosthetic fingers using an ultrasonic sensor. In the movie, Skywalker's prosthesis uses electromyogram sensors attached to his muscles. The sensors can be switched into different modes and are controlled by the flexing or contracting of his muscles. The prosthesis created by the Georgia Tech researchers, however, uses machine learning and ultrasound signals to detect fine finger-by-finger movement.
6. Digital Billboards
In Blade Runner, digital billboards were used to decorate the dystopian metropolis of Los Angeles. (Image credit: Warner Bros./courtesy Everett Collection/Alamy Stock Photo)
From: "Blade Runner"
Director Ridley Scott presents a landscape shot of futuristic Los Angeles in the movie "Blade Runner." As the camera scans the skyscrapers, a huge, digital, almost cinematic billboard appears on one of the buildings. This pre-internet concept sparked the imagination of Andrew Phipps Newman, the CEO of DOOH.com. DOOH, which stands for Digital Out Of Home, is a company dedicated to providing live, dynamic advertisements through the use of digital billboards. The company is now at the forefront of advertising, offering a more enticing form of it, one that makes people stop and stare.
Digital billboards have come a long way since DOOH was founded in 2013, taking advantage of crowded cities such as London and New York to deploy this unique advertising tactic. Perhaps the more recent "Blade Runner 2049" will bring us even more new technologies.
The "Blade Runner" story heavily revolves around the idea of synthetic humans, which require artificial intelligence (AI). Some people might be worried about the potential fallout of giving computers intelligence, which has had disastrous consequences in many science-fiction works. But AI has some very useful applications in reality. For instance, astronomers have trained machines to find exoplanets using computer-based learning techniques. While sifting through copious amounts of data collected by missions such as NASA's Kepler and TESS missions, AI can identify the telltale signs of an exoplanet lurking in the data.
8. Space stations
The interior design of the spacecraft in 2001: A Space Odyssey bears an uncanny resemblance to the ISS. (Image credit: MGM/The Kobal Collection)
From: "2001: A Space Odyssey"
Orbiting Earth in "2001: A Space Odyssey" is Space Station V, a large establishment located in low-Earth orbit where astronauts can bounce around in microgravity. Does this sound familiar?
Space Station V provided inspiration for the International Space Station (ISS), which has been orbiting the Earth since 1998 and currently accommodates up to six astronauts at a time. Although Space Station V appears far more luxurious, the ISS has accomplished much more science and has been fundamental to microgravity research since its construction began.
Space Station V wasn't just an out-of-this-world holiday experience; it also served as a pit stop before traveling to the Moon and other long-duration space destinations. The proposed Deep Space Gateway would be a station orbiting the Moon that would serve a similar purpose.
9. Tablets
Tablets today are capable of recognizing the fingerprints and even the facial features of their owner for better security. (Image credit: Metro-Goldwyn-Mayer/AF archive/Alamy Stock Photo)
From: "2001: A Space Odyssey"
Tablets are wonderful handheld computers that can be controlled at the press of a finger. These handy devices are used by people across the globe, and even farther up, aboard the ISS. Apple claims to have invented the tablet with the release of its iPad. However, Samsung made an extremely interesting case in court that Apple was wrong: Stanley Kubrick and Sir Arthur C. Clarke did, by including the device in 2001: A Space Odyssey, released in 1968.
In the film, Dr. David Bowman and Dr. Frank Poole watch news updates from their flat-screen computers, which they called "newspads." Samsung claimed that these "newspads" were the original tablet, featured in a film over 40 years before the first iPad arrived in 2010. This argument was not successful though, as the judge ruled that Samsung could not utilize this particular piece of evidence.
10. Hoverboards
Marty McFly was able to hover over any surface, even water, with the hoverboard. (Image credit: Universal Pictures/AF archive/Alamy Stock Photo)
From: "Back to the Future Part II"
The Back to the Future trilogy is a highly enjoyable trio of time-traveling adventures, but it is Part II that presents the creators' vision of 2015. The film predicted a far more outlandish 2015 than what actually happened just five years ago, but it got one thing correct: hoverboards, just like the one Marty McFly "borrows" to make a quick escape.
Although they aren't as widespread as the film envisions, hoverboards now exist. The first real one was created in 2015 by Arx Pax, a company based in California, which invented the Magnetic Field Architecture (MFA™) used to levitate the board. The board generates a magnetic field, which induces an eddy current in a conductive surface below, which in turn creates an opposing magnetic field. The two fields repel each other, providing lift over a copper surface such as the company's purpose-built "hoverpark."
11. Driverless cars
Johnny Cab wasn't able to move unless he had the destination, ultimately leading to his demise. (Image credit: TriStar Pictures)
From: "Total Recall"
In the 1990 film, set in 2084, Total Recall's main protagonist Douglas Quaid (played by Arnold Schwarzenegger) finds himself in the middle of a sci-fi showdown on Mars. In one scene Quaid is on the run from the bad guys and jumps into a driverless car. In the front is "Johnny Cab," which is the car's on-board computer system. All Johnny needs is an address to take the car to its intended destination.
Although the driverless car isn't seen in action for long before the protagonist yells profanities and takes over the driving, the idea of a car that takes you to your destination using its onboard navigation has become increasingly popular. The company at the forefront of driverless cars is Waymo, which wants to eliminate the human error and inattention that lead to dangerous and fatal accidents.
In 2017, NASA stated its intention to help in the development of driverless cars, as the same technologies would improve robotic vehicles operating on extraterrestrial surfaces such as the Moon or Mars.
A great name for a band, an intriguing plot for a movie … but a scary thing to find out your military has developed. Yes, they claim it’s for bomb-sniffing … just like gunpowder was originally developed for medicinal purposes. What could possibly go wrong? Let’s find out.
“Finally, we developed a minimally-invasive surgical approach and mobile multi-unit electrophysiological recording system to tap into the neural signals in a locust brain and realize a biorobotic explosive sensing system. In sum, our study provides the first demonstration of how biological olfactory systems (sensors and computations) can be hijacked to develop a cyborg chemical sensing approach.”
“Hijacked to develop a cyborg chemical sensing approach” means that the brains of locusts – the insects made famous by so many plagues – are wired via implanted electrodes to tiny packs on their backs. The packs transmit the sensory signals picked up by the locusts’ antennae to a computer, which monitors them and detects when the grasshoppers smell the vapors of one of many different explosives, a process that takes a mere few hundred milliseconds thanks to the roughly 50,000 olfactory neurons in those sensitive antennae. That’s a brief summary of “Explosive sensing with insect-based biorobots,” a non-peer-reviewed paper published this week on bioRxiv describing four years of research by a team led by Baranidharan Raman, associate professor of biomedical engineering in the School of Engineering and Applied Science at Washington University in St. Louis.
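To make that "few hundred milliseconds" detection step a little more concrete, the sketch below shows, in very simplified form, how short windows of multi-channel neural recordings might be reduced to firing-rate features and classified as "explosive vapor" or "background." The channel count, window length, and linear read-out are illustrative placeholders, not the Washington University team's actual signal chain.

```python
import numpy as np

# Highly simplified, hypothetical sketch: classify short windows of multi-unit
# olfactory recordings as "explosive vapor" vs "background". Channel counts,
# window length and the classifier are illustrative, not the published method.

N_CHANNELS = 16      # assumed number of recording electrodes
WINDOW_MS = 300      # "a few hundred milliseconds" of neural activity
SAMPLE_RATE = 1000   # samples per second (assumed)

def spike_rate_features(window: np.ndarray, threshold: float = 2.0) -> np.ndarray:
    """Count upward threshold crossings per channel as a crude firing-rate feature."""
    crossings = (window[:, 1:] > threshold) & (window[:, :-1] <= threshold)
    return crossings.sum(axis=1).astype(float)

def classify(features: np.ndarray, weights: np.ndarray, bias: float) -> str:
    """Linear read-out: a positive score means the odor pattern matches a target vapor."""
    score = float(features @ weights + bias)
    return "explosive vapor detected" if score > 0 else "background"

# Synthetic stand-in data: one window of noisy multi-channel activity.
rng = np.random.default_rng(0)
samples = int(WINDOW_MS * SAMPLE_RATE / 1000)
window = rng.normal(size=(N_CHANNELS, samples))

# Pretend we already trained a linear read-out on labelled sniffs.
weights = rng.normal(size=N_CHANNELS)
print(classify(spike_rate_features(window), weights, bias=-5.0))
```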
If you refuse to let them go, I will bring locusts into your country tomorrow. They will cover the face of the ground so that it cannot be seen. They will devour what little you have left after the hail, including every tree that is growing in your fields. They will fill your houses and those of all your officials and all the Egyptians—something neither your fathers nor your forefathers have ever seen from the day they settled in this land till now. — Exodus 10:3–6
They will come out of the graves with downcast eyes, like an expanding swarm of locusts. — Quran, 54:7 (The Moon)
And as when beneath the onrush of fire locusts take wing to flee unto a river, and the unwearied fire burneth them with its sudden oncoming, and they shrink down into the water; even so before Achilles was the sounding stream of deep-eddying Xanthus filled confusedly with chariots and with men. — The Iliad
Yeah, yeah, yeah … they know all about your biblical, Quranic and Homeric locust plagues, but that didn’t stop the U.S. Office of Naval Research from investing $750,000 back in 2016 to develop cyborg locusts … or was that the reason for the investment? While it’s accepted that cheap bomb detectors are in demand by police, the military and airport security for dealing with terrorists, and by governments like Vietnam and South Korea that still have active minefields left over from not-so-recent wars, it’s easy to imagine other uses for these bionic bugs, whose main claim to fame is not bomb detection but swarming crop devastation of the plague kind. Back in 2016, Raman already saw other uses for the cyborg locusts as well.
“But the real key, he says, is the relative simplicity of the locust’s brain. That’s what allows it to be hijacked, which, if all goes right, will allow for remote explosive sensing. Raman believes that eventually cyborg locusts could be used for other sniff-centric tasks, even medical diagnoses that rely on smell.”
Send them to countries with bombs and mines right before the crops are ripe. Have them do their jobs detecting the explosives, then reward them by letting the cyborg locusts detect the wheat fields and let the destruction begin.
What could possibly go wrong?
“Something neither your fathers nor your forefathers have ever seen from the day they settled in this land till now.”
Scientists from Tufts University, the University of Vermont, and the Wyss Institute at Harvard have developed tiny, living organisms that can be programmed. Called "xenobots," these robots were made with frog stem cells.
The research, published in the scientific journal Proceedings of the National Academy of Sciences, is meant to aid development of soft robots that can repair themselves when damaged.
Ultimately, the hope is these xenobots will be useful in cleaning up microplastics, digesting toxic materials, or even delivering drugs inside our bodies.
What happens when you cross stem cells from a frog heart and frog skin? Not much—that is, until you program those cells to move. In that case, you've created a xenobot, a new type of organism that's part robot, part living thing.
And we've never seen anything like it before.
Researchers from Tufts University, the University of Vermont, and Harvard University have created the first xenobots from frog embryos after designing them with computer algorithms and physically shaping them with surgical precision. The skin-heart embryos are just one millimeter in size, but can accomplish some remarkable things for what they are, like physically squirming toward targets.
"These are novel living machines," Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research, said in a press statement. "They're neither a traditional robot nor a known species of animal. It's a new class of artifact: a living, programmable organism."
By studying these curious organisms, researchers hope to learn more about the mysterious world of cellular communication. Plus, these kinds of robo-organisms could possibly be the key to drug delivery in the body or greener environmental cleanup techniques.
"Most technologies are made from steel, concrete, chemicals, and plastics, which degrade over time and can produce harmful ecological and health side effects," the authors note in a research paper published in the scientific journal Proceedings of the National Academy of Sciences. "It would thus be useful to build technologies using self-renewing and biocompatible materials, of which the ideal candidates are living systems themselves."
Building Xenobots
Xenobots borrow their name from Xenopus laevis, the scientific name for the African clawed frog from which the researchers harvested the stem cells. To create the little organisms, which scoot around a petri dish a bit like water bears—those tiny microorganisms that are pretty much impossible to kill—the researchers scraped living stem cells from frog embryos. These were separated into single cells and left to incubate.
They differentiated the stem cells into two different kinds: heart and skin cells. The heart cells are capable of expanding and contracting, which ultimately aids the xenobot in locomotion, and the skin cells provide structure. Next, using tiny forceps and an even smaller electrode, the scientists cut the cells and joined them together under a microscope in designs that were specified by a computer algorithm.
Interestingly, the two different kinds of cells did merge together well and created xenobots that could explore their watery environment for days or weeks. When flipped like a turtle on its shell, though, they could no longer move.
Other tests showed whole groups of xenobots are capable of moving in circles and pushing small items to a central location all on their own, without intervention. Some were built with holes in the center to reduce drag and the researchers even tried using the hole as a pouch to let the xenobots carry objects. Bongard said it's a step in the right direction for computer-designed organisms that can intelligently deliver drugs in the body.
Evolutionary Algorithms
On the left, the anatomical blueprint for a computer-designed organism, discovered on a UVM supercomputer. On the right, the living organism, built entirely from frog skin (green) and heart muscle (red) cells. The background displays traces carved by a swarm of these new-to-nature organisms as they move through a field of particulate matter. (Image credit: Sam Kriegman, UVM)
While these xenobots are capable of some spontaneous movement, they can't accomplish any coordinated efforts without the help of computers. Really, xenobots couldn't fundamentally exist without designs created through evolutionary algorithms.
Just as natural selection dictates which members of a species live and which die off—based on certain favorable or unfavorable attributes and ultimately influencing the species' characteristics—evolutionary algorithms can help find beneficial structures for the xenobots.
A team of computer scientists created a virtual world for the xenobots and then ran evolutionary algorithms to see which candidate designs could help them move or accomplish some other goal. The algorithm scored how well each design performed at a given task in a given configuration, and then "bred" the designs considered fit enough to survive this simulated natural selection with one another to produce the next generation.
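That paragraph describes the core loop: simulate candidate body plans, score how well each one moves, keep the fittest, and breed and mutate them into the next generation. The toy sketch below applies that loop to a tiny grid "body plan" of passive skin cells and active heart cells. The fitness function, grid size, and parameters are simplified stand-ins for the UVM team's far richer physics simulation, not their actual code.

```python
import random

# Toy evolutionary algorithm over tiny "body plans": a 4x4 grid where 0 is a
# passive skin cell and 1 is an active (contracting) heart cell. The fitness
# function is a stand-in for a physics-based locomotion simulator.

GRID = 4
POP_SIZE = 30
GENERATIONS = 40
MUTATION_RATE = 0.05

def random_design():
    return [[random.randint(0, 1) for _ in range(GRID)] for _ in range(GRID)]

def fitness(design):
    # Pretend designs move best with active cells concentrated on the bottom
    # rows (a crude proxy for "legs"); real fitness = distance travelled in sim.
    bottom_active = sum(design[GRID - 1]) + sum(design[GRID - 2])
    total_active = sum(sum(row) for row in design)
    return bottom_active - 0.3 * total_active

def mutate(design):
    return [[cell ^ 1 if random.random() < MUTATION_RATE else cell
             for cell in row] for row in design]

def crossover(a, b):
    # Row-wise crossover: take each row from one of the two parents.
    return [random.choice([ra, rb]) for ra, rb in zip(a, b)]

population = [random_design() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]  # "fit" designs survive selection
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("Best evolved body plan (1 = heart cell, 0 = skin cell):")
for row in best:
    print(" ".join(map(str, row)))
```

In the real project, only the most promising evolved designs were then handed to biologists to build out of actual cells, which is the step described next.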
In the video above, for example, you can see a simulated version of the xenobot, which is capable of forward movement. The final organism takes on a similar shape to this design and is capable of (slowly) getting around. The red and green squares at the bottom of the structure are active cells, in this case the heart stem cells, while the blueish squares represent the passive skin stem cells.
(Video credit: Douglas Blackiston)
All of this design work was completed over the course of a few months on the Deep Green supercomputer cluster at the University of Vermont. After a few hundred runs of the evolutionary algorithm, the researchers filtered out the most promising designs. Then, biologists at Tufts University assembled the real xenobots in vitro.
What's the Controversy?
Anything dealing with stem cells is bound to catch at least some flak, because detractors take issue with the entire premise of using stem cells, which are harvested from developing embryos.
That's compounded with other practical ethics questions, especially relating to safety and testing. For instance, should the organisms have protections similar to animals or humans when we experiment on them? Could we, ourselves, eventually require protection from the artificially produced creatures?
"When you’re creating life, you don’t have a good sense of what direction it’s going to take," Nita Farahany, who studies the ethical ramifications of new technologies at Duke University and was not involved in the study, told Smithsonian Magazine. "Any time we try to harness life … [we should] recognize its potential to go really poorly."
Michael Levin, a biophysicist and co-author of the study from Tufts University, said that fear of the unknown in this case is not reasonable:
"When we start to mess around with complex systems that we don't understand, we're going to get unintended consequences," he said in a press statement. "If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules."
At its heart, the study is a "direct contribution to getting a handle on what people are afraid of, which is unintended consequences," Levin said.
22-01-2020
Meet the Chinese robot worm that could crawl into your brain
Scientists in Shenzhen are developing a machine that sounds like a form of ancient black magic because it could enter the brain and send signals to the neurons
Magnetically controlled device could be used to deliver drugs or interact directly with computers
The device could be sent into the brain and transmit electric pulses to the neurons.
Photo: Shutterstock
According to an ancient southern Chinese form of black magic known as Gu, a small poisonous creature similar to a worm could be grown in a pot and used to control a person’s mind.
Now a team of researchers in Shenzhen have created a robot worm that could enter the human body, move along blood vessels and hook up to the neurons.
“In a way it is similar to Gu,” said Xu Tiantian, a lead scientist for the project at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences.
“But our purpose is not developing a biological weapon. It’s the opposite,” she added.
In recent years, science labs around the world have produced many micro-bots, but mostly they have only been capable of performing simple tasks.
But a series of videos released by the team alongside a study in Advanced Functional Materials earlier this month shows that the tiny intelligent robots – which they dubbed iRobots – can hop over a hurdle, swim through a tube, or squeeze through a gap half their body width.
The 1mm by 3mm robotic worms are not powered by computer chips or batteries, but by an external magnetic field generator.
A video from the team showed the robots performing a range of manoeuvres.
Photo: Handout
Changing the magnetic fields allows the researchers to twist the robot’s body in many different ways and achieve a wide range of movements such as crawling, swinging and rolling, according to their paper.
They can also squeeze through gaps by using infrared radiation to contract their bodies by more than a third.
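To illustrate what "changing the magnetic fields" to produce crawling, swinging, or rolling might look like in practice, the sketch below generates simple time-varying field commands for an external coil driver. The waveforms, field strengths, frequencies, and gait names are invented for illustration; they are not the Shenzhen team's actual control signals.

```python
import math

# Hypothetical sketch: generate time-varying magnetic field commands (Bx, By)
# for an external coil driver to produce different gaits in a soft magnetic
# robot. Field strengths, frequencies and gait shapes are invented examples.

def field_command(gait: str, t: float, amplitude_mT: float = 10.0, freq_hz: float = 1.0):
    """Return (Bx, By) in millitesla at time t seconds for a named gait."""
    phase = 2 * math.pi * freq_hz * t
    if gait == "rolling":
        # Rotating field: the body tumbles end over end.
        return amplitude_mT * math.cos(phase), amplitude_mT * math.sin(phase)
    if gait == "swinging":
        # Oscillating field along one axis: the body rocks back and forth.
        return amplitude_mT * math.sin(phase), 0.0
    if gait == "crawling":
        # Sawtooth-like pull with a small lift component: slow drag, quick reset.
        forward = amplitude_mT * ((t * freq_hz) % 1.0)
        return forward, 0.3 * amplitude_mT
    raise ValueError(f"unknown gait: {gait}")

# Print a short command table for the coil driver (every 0.25 s for one cycle).
for gait in ("rolling", "swinging", "crawling"):
    samples = [field_command(gait, t / 4) for t in range(5)]
    print(gait, [(round(bx, 1), round(by, 1)) for bx, by in samples])
```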
The worm’s body is also capable of changing colour in different environments because it is made from a transparent, temperature-responsive hydrogel; the video shows that when the robots are added to a cup of water at room temperature, they become almost invisible.
It also has a “head” made from a neodymium iron-boron magnet and a “tail” constructed from a special composite material.
Xu believes they will prove particularly useful for doctors in the future, for example by being injected into the body and delivering a package of drugs to a targeted area such as a tumour.
This would limit the effect of the drug to the areas where it is needed and reduce the risk of side effects, and the robot worm could exit the body once its task is complete.
The patient would need to lie in an MRI style machine that generates the magnetic field needed to control the robots during the procedure.
The robot worms are controlled by electromagnetic signals.
Photo: Handout
It could also be implanted into the brain because its high mobility and ability to transform means it can survive in this harsh environment where there are rapid blood flows and tiny blood vessels.
Currently, brain implants can only be inserted via a surgical procedure and have a limited capability to integrate with the neurons, which means they can only perform a few simple tasks.
But Xu said the new robots could “work as an implant for brain-computer interface” that would make it possible to communicate directly with a computer without needing a keyboard or even a screen.
She believes this would work by having the robot carry a transmitter that converts external signals into electric pulses and connects with brain cells to stimulate activity in ways that are not possible using current technology.
Xu admitted that it may be possible to misuse the technology by turning it into a weapon, but said there were still some major barriers to making this effective.
For instance, the controller would need to build a powerful electric field generator with a long effective range to operate the robot worms.
It would also be very difficult to send the microbots to their designated locations without the cooperation of the person they are implanted in, because that person has to sit or lie down and stay perfectly still while the robots move through the body.
But improvements in hardware may overcome these obstacles, so Xu could not rule out the possibility that the technology could be weaponised one day, adding: “We just hope that day will never come.”
“You see, their young enter through the ears and wrap themselves around the cerebral cortex. This has the effect of rendering the victim extremely susceptible to suggestion… Later, as they grow, follows madness and death…” – Khan Noonien Singh
Anyone who has ever seen Star Trek II: The Wrath Of Khan, the second movie in the series, can still remember the horror of Khan releasing larvae of Ceti eels into the ears of Reliant officers Commander Pavel Chekov and Captain Clark Terrell, where they wormed their way into their brains, wrapping themselves around the cerebral cortex to cause brain control, pain, madness and eventual death. It’s nice to know that’s pure fiction, right? RIGHT?
“Once you consume them, they can move throughout your body — your eyes, your tissues and most commonly your brain. They leave doctors puzzled in their wake as they migrate and settle to feed on the body they’re invading; a classic parasite, but this one can get into your head.”
According to CNN, in 2013 a British man of Chinese descent was found to have a tapeworm moving inside his brain – a parasite known as Spirometra erinaceieuropaei. It’s extremely rare and found mostly in Asia – the adult parasite lives in dog and cat intestines, but the eggs can be spread via fecal matter, particularly in water, which appears to be how the man contracted it. In 2018, a man in India died after his brain, brain stem, and cerebellum were infected by the tapeworm Taenia solium. It’s a good thing these worms are rare and no one is trying to make robotic versions of them, right? RIGHT?
“It could also be implanted into the brain because its high mobility and ability to transform means it can survive in this harsh environment where there are rapid blood flows and tiny blood vessels.”
The South China Morning Post reports that scientists in Shenzhen have developed a tiny robot worm that can enter the human body, swim along blood vessels and hook up to neurons in the brain. The 1mm-by-3mm (0.04 in by 0.12 in) robots are powered externally by a magnetic field generator and use infrared radiation to contract their size by up to a third to squeeze through tight spots. On the noble-cause side, Xu Tiantian – lead scientist for the project at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences – says the robot worms will allow doctors to deliver drugs directly to a specific tumor and then exit the body when done.
“But our purpose is not developing a biological weapon. It’s the opposite.”
Needless to say, using the robot worm as a weapon is entirely possible as soon as a more powerful electric field generator with a longer effective range is available and the robot worms obtain the ability to move while the host human is in motion – they currently have to be lying perfectly still. If he were around today, robot designer Khan Noonien Singh might say: “Piece of cake.”
Xu agrees.
“We just hope that day will never come.”
Or is it already here? Mr. Chekov, care to comment?