The purpose of this blog is the creation of an open, international, independent and free forum, where every UFO researcher can publish the results of his or her research. The languages used for this blog are Dutch, English and French. You can find the articles of a colleague by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I have the right to refuse an addition or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOS OR UAPS, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the Rest of the World. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, although they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only the former president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Don't hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
23-07-2020
SUBNAUTICA IRL
Check Out This Amazing Design for an Underwater “Space Station”
"Ocean exploration is 1,000 times more important than space exploration."
Fabien Cousteau, the grandson of legendary ocean explorer Jacques Cousteau, wants to build the equivalent of the International Space Station (ISS) — but on the ocean floor deep below the surface, as CNN reports.
All images: Courtesy Proteus/Yves Béhar/Fuseproject
With the help of industrial designer Yves Béhar, Cousteau unveiled his bold ambition: a 4,000-square-foot lab called Proteus that could offer a team of up to 12 researchers from all over the world easy access to the ocean floor. The plan is to build it in just three years.
The most striking design element of their vision is a number of bubble-like protruding pods, extending from two circular structures stacked on top of each other. Each pod is envisioned to be assigned a different purpose, ranging from medical bays to laboratories and personal quarters.
“We wanted it to be new and different and inspiring and futuristic,” Béhar told CNN. “So [we looked] at everything from science fiction to modular housing to Japanese pod [hotels].”
The team claims Proteus will feature the world’s first underwater greenhouse, intended for growing food for whoever is stationed there.
Power will come from wind, thermal, and solar energy.
“Ocean exploration is 1,000 times more important than space exploration for — selfishly — our survival, for our trajectory into the future,” Cousteau told CNN. “It’s our life support system. It is the very reason why we exist in the first place.”
Space exploration gets vastly more funding than its oceanic counterpart, according to CNN, despite the fact that humans have only explored about five percent of the Earth’s oceans — and mapped only 20 percent.
Proteus would join only one other permanent underwater habitat, the Aquarius off the coast of Florida, which NASA has used to simulate the lunar surface.
19-07-2020
You Might Have Never Seen Machines Doing These Kind Of Incredible Things
In today’s world, technology is evolving faster than ever before, and humans are powering it. Brilliant minds all around the world innovate day and night to produce the most advanced machines and equipment that can make our lives easier and our work more efficient. Sure, technology can get terrifying if you think of what it can do, such as tear down entire forests. But it’s also pretty amazing – we use machines to build bridges where humans just can’t on their own. Stick around to learn more about the top 12 most useful machines that help humans do incredible things!
By tinkering with the genetics of human cells, a team of scientists gave them the ability to camouflage.
To do so, they took a page out of the squid’s playbook, New Atlas reports. Specifically, they engineered the human cells to produce a squid protein known as reflectin, which scatters light to create a sense of transparency or iridescence.
Not only is it a bizarre party trick, but figuring out how to gene-hack specific traits into human cells gives scientists a new avenue to explore how the underlying genetics actually works.
Invisible Man
It would be fascinating to see this research pave the way to gene-hacked humans with invisibility powers — but sadly that’s not what this research is about. Rather, the University of California, Irvine biomolecular engineers behind the study think their gene-hacking technique could give rise to new light-scattering materials, according to research published Tuesday in the journal Nature Communications.
Or, even more broadly, the research suggests scientists investigating other genetic traits could mimic their methodology, presenting a means to use human cells as a sort of bioengineering sandbox.
Biological Sandbox
That sandbox could prove useful, as the Irvine team managed to get the human cells to fully integrate the structures producing the reflectin proteins. Basically, the gene-hack fully took hold.
“Through quantitative phase microscopy, we were able to determine that the protein structures had different optical characteristics when compared to the cytoplasm inside the cells,” Irvine researcher Alon Gorodetsky told New Atlas, “in other words, they optically behaved almost as they do in their native cephalopod leucophores.”
At the end of April, the artificial intelligence development firm OpenAI released a new neural net, Jukebox, which can create mashups and original music in the style of over 9,000 bands and musicians.
Alongside it, OpenAI released a list of sample tracks generated with the algorithm that bend music into new genres or even reinterpret one artist’s song in another’s style — think a jazz-pop hybrid of Ella Fitzgerald and Céline Dion.
It’s an incredible feat of technology, but Futurism’s editorial team was unsatisfied with the tracks OpenAI shared. To really kick the tires, we went to CJ Carr and Zack Zukowski, the musicians and computer science experts behind the algorithmically-generated music group DADABOTS, with a request: We wanted to hear Frank Sinatra sing Britney Spears’ “Toxic.”
And boy, they delivered.
An algorithm that can create original works of music in the style of existing bands and artists raises unexplored legal and creative questions. For instance, can the artists that Jukebox was trained on claim credit for the resulting tracks? Or are we experiencing the beginning of a brand-new era of music?
“There’s so much creativity to explore there,” Zukowski told Futurism.
Below is the resulting song, in all its AI-generated glory, followed by Futurism’s lightly-edited conversation with algorithmic musicians Carr and Zukowski.
Futurism: Thanks for taking the time to chat, CJ and Zack. Before we jump in, I’d love to learn a little bit more about both of you, and how you learned how to do all this. What sort of background do you have that lent itself to AI-generated music?
Zack Zukowski: I think we’re both pretty much musicians first, but also I’ve been involved in tech for quite a while. I approached my machine learning studies from an audio perspective: I wanted to extend what was already being done with synthesis and music technology. It seemed like machine learning was obviously the path that was going to make the most gains, so I started learning about those types of algorithms. SampleRNN is the tool we most like to use — that’s one of our main tools that we’ve been using for our livestreams and our Bandcamp albums over the last couple years.
CJ Carr: Musician first, motivated in computer science to do new things with music. DADABOTS itself comes out of hackathon culture. I’ve done 65 hackathons, and Zack and I together have won 15 or so. That environment inspires people to push what they’re doing in some new way, to do something provocative. That’s the spirit DADABOTS came out of in 2012, and we’ve been pushing it further and further as the tech has progressed.
Why did you make the decision to step up from individual hackathons and stick with DADABOTS? Where did the idea come from for your various projects?
CJ: When we started it, we were both interns at Berklee College of Music working in music tech. When I met Zack — for some reason it felt like I’ve known Zack my whole life. It was a natural collaboration. Zack knew more about signal processing than I did, I knew more about programming, and now we have both brains.
What’s your typical approach? What’s going on behind the scenes?
CJ: SampleRNN has been our main tool. It’s really fast to train — we can train it in a day or two on a new artist. One of the main things we love to do is collaborating with artists, when an artist says “hey I’d love to do a bot album.” But recently, Jukebox trumped the state of the art in music generation. They did a really good job.
SampleRNN and Jukebox, they’re similar in that they’re both sequence generators. It’s reading a sequence of audio at 44.1k or 16k sample rate, and then it’s trying to predict what the next sample is going to be. This net is making a decision at a fraction of a millisecond to come up with the next sample. This is why it’s called neural synthesis. It’s not copying and pasting audio from the training data, it’s learning to synthesize.
What’s different about them is that SampleRNN uses “Long Short Term Memory” (LSTM) architecture, whereas Jukebox uses a transformer architecture. The transformer has attention. This is a relatively new thing that’s come to popularity in deep learning, after RNN, after LSTM. It especially took over for language models. I don’t know if you remember fake news generators like GPT-2 and Grover. They use transformer architecture. Many of the language researchers left LSTM behind. No one had really applied it to audio music yet — that’s the big enhancement for Jukebox. They’re taking a language architecture and applying it to music.
They’re also doing this extra thing, called a “Vector-Quantized Variational AutoEncoder” (VQ-VAE). They’re trying to turn audio into language. They train a model that creates a codebook, like an alphabet. And they take this alphabet, which is a discrete set of 2048 symbols — each symbol is something about music — and then they train their transformer models on it.
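For readers who want a concrete picture of what Carr is describing, here is a minimal sketch of the two ideas: a VQ codebook that turns chunks of audio into discrete symbols, and a model trained to predict the next symbol. Everything here is a toy stand-in (random data, a smoothed bigram table instead of OpenAI's transformer); it illustrates the pipeline, not Jukebox's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: 2048 codebook vectors (as in Jukebox) and 1,000 "audio" frames.
K, D = 2048, 16
codebook = rng.standard_normal((K, D))
frames = rng.standard_normal((1000, D))

# VQ step: each frame becomes the index of its nearest codebook vector,
# turning raw audio into a sequence of discrete symbols ("letters").
d2 = (frames**2).sum(1, keepdims=True) - 2 * frames @ codebook.T + (codebook**2).sum(1)
codes = d2.argmin(axis=1)                      # shape (1000,), values in 0..2047

# Language-model step: learn next-symbol statistics, then sample a new sequence.
# Jukebox trains a transformer here; a smoothed bigram table shows the same idea.
bigram = np.ones((K, K))                       # add-one smoothing
for a, b in zip(codes[:-1], codes[1:]):
    bigram[a, b] += 1
bigram /= bigram.sum(axis=1, keepdims=True)

sym, generated = codes[0], [codes[0]]
for _ in range(100):
    sym = rng.choice(K, p=bigram[sym])
    generated.append(sym)

# Decoding: map each generated symbol back to its codebook vector (toy "audio").
audio_out = codebook[np.array(generated)].ravel()
```

The key design point is the middle step: once audio is a string of symbols, any sequence model that works on text can be pointed at music.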
What does that alphabet look like? What is that “something about music?”
CJ: They didn’t do that analysis at all. We’re really curious. For instance, can we compose with it?
Zack: We have these 2048 characters, and so we wonder which ones are commonly used. Like in the alphabet we don’t use Zs too much. But what are the “vowels?” Which symbols are used frequently? It would be really interesting to see what happens when you start getting rid of some of these symbols and see what the net can do with what remains. The way we have the language of music theory with chords and scales, maybe this is something that we can compose with beyond making deepfakes of an artist.
What can that language tell us about the underlying rules and components of music, and how can we use these as building blocks themselves? They’re much higher-level than chords — maybe they’re genre-related. We really don’t know. It would be really cool to do that analysis and see what happens by using just a subset of the language.
CJ: They’ve come up with a new music theory.
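Zukowski's "which symbols are the vowels" question maps onto a very small experiment. A sketch, assuming you already have a long sequence of encoder output codes (random placeholders stand in for them here):

```python
from collections import Counter
import numpy as np

rng = np.random.default_rng(0)
codes = rng.integers(0, 2048, size=10_000)     # placeholder for real encoder output

freq = Counter(codes.tolist())
print("most used symbols:", freq.most_common(5))
print("never used:", 2048 - len(freq), "of 2048 symbols")
```

On real Jukebox codes, a skewed histogram here would be the first evidence that some symbols act like common "letters" of music and others are rare.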
Well, it sounds like the three of us have a lot of the same questions about all this. Have you started tinkering with it to learn what’s going on?
CJ: We’ve just got the code running. The first example is this Sinatra thing. But as we use this more, the philosophical implications here are that as musicians, we know intuitively that music is very language-like. It’s not just waves and noise, which is what it looks like at a small scale, but when we’re playing we’re communicating with each other. The bass and the drummer are in step, strings and vocals can be doing call-and-response. And OpenAI was just like “Hey, what if we treated music like language?”
If the sort of alphabet this algorithm uses could be seen as a new music theory, do you think this will be a tool for you two going forward? Or is it more of an oddity to play around with?
CJ: Maybe I should correct myself. Instead of being a music theory, these models can train music theory.
Zack: The theory isn’t something that we can explain right now. We can’t say “This value means this.” It’s not quite as human interpretable, I guess.
CJ: The model just learns probabilistic patterns, and that’s what music theory is. It’s these notes tend to have these patterns and produce these feelings. And those were human-invented. What if we just have a machine try to discover that on its own, and then we ask it to make music? And if it’s good at it, probably it’s learned a good quote-unquote “music theory.”
Zack: An analogy we thought of: Back in the days of Bach, and these composers who were really interested in having counterpoint — many voices moving in their own direction — they had a set of rules for this. The first melodic line the composer builds off is called cantus firmus. There was an educational game new composers would play — if you could follow the notes that were presented in the cantus firmus and guess what harmonizing notes were next, you’d be correct based on the music of the day.
We’re thinking this is kind of the machine version of that, in some ways. Something that can be used to make new music in the style of music that has been heard before.
I know it’s early days and that this is speculative, but do you have any predictions for how people might use Jukebox? Will it be more of these mashups, or do you think people will develop original compositions?
CJ: On the one hand, you have the fear of push-button art. A lot of people think push-button art is very grotesque. But I think push-button art, when a culture can achieve this — it’s a transcendent moment for that culture. It means the communication of that culture has achieved its capacity. Think about meme generators — I can take a picture of Keanu Reeves, put in some inside joke and send it to my friends, and then they can understand and appreciate what I’m communicating. That’s powerful. So it is grotesque, but it’s effectual.
On the other side, you’ll have these virtuosos — these creators — who are gonna do overkill and try to create a medium of art that’s never existed before. What interests us are these 24/7 generators, where it can just keep generating forever.
Zack: I think it’s an interesting tool for artists who have worked on a body of albums. There are artists who don’t even know they can be generated on Jukebox. So, I think many of them would like to know what can be generated in their likeness. It can be a variation tool, it can recreate work for an artist through a perspective they haven’t even heard. It can bend their work through similar artists or even very distantly-stylized artists. It can be a great training tool for artists.
You said you’d heard from some artists who approached you to generate music already — is that something you can talk about?
CJ: When bands approach us, they’ve mostly been staying within the lane of “Hey, use just my training data and let’s see what comes out — I’m really interested.”
Fans though, on YouTube, are like “Here’s a list of my four favorite bands, please make me something out of it.”
So, let’s talk about the actual track you made for us. For this new song, Futurism suggested Britney Spears’ “Toxic” as sung by Frank Sinatra. Did the technical side of pulling that together differ from your usual work?
CJ: This is different. With SampleRNN, we’re retraining it from scratch on usually one artist or one album. And that’s really where it shines — it’s not able to do these fusions very well. What OpenAI was able to do — with a giant multimillion-dollar compute budget — they were able to train these giant neural nets. And they trained them on over 9,000 artists in over 300 genres. You need a mega team with a huge budget just to make this generalizable net.
Zack: There are two options. There’s lyrics and no lyrics. No lyrics is sort of like how SampleRNN has worked. With lyrics it tries to get them all in order, but sometimes it loops or repeats. But it tries to go beginning to end and keep the flow going. If you have too many lyrics, it doesn’t understand. It doesn’t understand that if you have a chorus repeating, the music should repeat as well. So we find that these shorter compositions work better for us.
But you had lyrics in past projects that used SampleRNN, like “Human Extinction Party.” How did that differ?
CJ: That was smoke and mirrors.
Zack: That was kind of an illusion. The album we trained it on had vocals, so some made it through. We had a text generator that made up lyrics whenever it heard a sound.
In a lot of these Jukebox mashups, I’ve noticed that the voice sounds sort of strained. Is that just a matter of the AI-generated voice being forced to hit a certain note, or does it have something more to do with the limitations of the algorithm itself?
Zack: Your guess sounds similar to what I’d say. It was probably just really unlikely that those lyrics or the phonemes, the sounds themselves of the words, showed up in a similar way to how we were forcing it to generate those syllables. It probably heard a lot more music that isn’t Frank Sinatra, so it can imagine some things that Frank Sinatra didn’t do. But it just comes down to being somewhat different from any of the original Frank Sinatra texts.
When you were creating this rendition of Toxic, did you hit any snags along the way? Or was it just a matter of giving the algorithm enough time to do its work?
CJ: Part of it is we need a really expensive piece of hardware that we need to rent on Amazon Cloud at three dollars per hour. And it takes — how long did it take to generate, Zack?
Zack: The final one I had generated took about a day, but I had been doing it over and over again for a week. You have so little control that sometimes you just gotta go again. It would get a few phrases and then it would lose track of the lyrics. Sometimes you’d get two lines but not the whole chorus in a row. It came down to luck — waiting for the right one to come along.
It could loop a line, or sometimes it could go into seemingly different songs. It would completely lose track of where it was. There are some pretty wild things that can happen. One time I was generating Frank Sinatra, and it was clearly a chorus of men and women together. It wasn’t even the right voice. It can get pretty ghostly.
Do you know if there are any legal issues involved in this kind of music? The capability to generate new music in the style or voice of an artist seems like uncharted territory, but are there issues with the mashups that use existing lyrics? Or are those more acceptable under the guise of fair use, sort of like parody songs?
CJ: We’re not legal people, we haven’t studied copyright issues. The vibe is that there’s a strong case for fair use, but artists may not like people creating these deepfakes.
Zack: I think it comes down to intention, and whatever the law decides they’ll decide. But as people using this tool, artists, there’s definitely a code of ethics that people should probably respect. Don’t piss people off. We try our best to cite the people who worked on the tech, the people who it was trained on. It all just depends how you’re putting it out and how respectful you’re being of people’s work.
Before I let you go, what else are you two working on right now?
CJ: Our long-term research is trying to make these models faster and cheaper so bedroom producers and 12-year-olds can be making music no one’s ever thought of. Of course, right now it’s very expensive and it takes days. We’re in a privileged position of being able to do it with the rented hardware.
Specifically, what we’re doing right now — there’s the list of 9,000-plus bands that the model currently supports. But what’s interesting is the bands weren’t asked to be a part of this dataset. Some machine learning researchers on Twitter were debating the ethics of that. There are two sides of that, of course, but we really want to reach out to those bands. If anyone knows these bands, if you are these bands, we will generate music for you. We want to take this technology, which we think is capable of brand-new forms of creativity, and give it back to artists.
A team of researchers from the Higher School of Economics University and Open University in Moscow, Russia, claim they have demonstrated that an artificial intelligence can make accurate personality judgments based on selfies alone — more accurately than some humans.
The researchers suggest the technology could be used to help match people up in online dating services or help companies sell products that are tailored to individual personalities.
That’s apropos, because two co-authors listed on a paper about the research published today in Scientific Reports — a journal run by Nature — are affiliated with a Russian AI psychological profiling company called BestFitMe, which helps companies hire the right employees.
As detailed in the paper, the team asked 12,000 volunteers to complete a questionnaire that they used to build a database of personality traits. To go along with that data, the volunteers also uploaded a total of 31,000 selfies.
The questionnaire was based around the “Big Five” personality traits, five core traits that psychological researchers often use to describe subjects’ personalities, including openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism.
After training a neural network on the dataset, the researchers found that it could accurately predict personality traits based on “real-life photographs taken in uncontrolled conditions,” as they write in their paper.
While accurate, the precision of their AI leaves something to be desired. They found that their AI “can make a correct guess about the relative standing of two randomly chosen individuals on a personality dimension in 58% of cases.”
That result isn’t exactly groundbreaking — but it’s a little better than just guessing, which is vaguely impressive.
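The 58% figure is a pairwise ranking accuracy: how often the model puts two randomly chosen people in the right order on a trait, where coin-flipping scores 50%. A sketch of how such a number is computed, using synthetic data (the noise level of 3.9 is an illustrative assumption tuned to land near the paper's result, not a figure from the study):

```python
import numpy as np

def pairwise_accuracy(y_true, y_pred, n_pairs=100_000, seed=0):
    """Chance of correctly ordering two random people on a trait (chance = 50%)."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(y_true), n_pairs)
    j = rng.integers(0, len(y_true), n_pairs)
    keep = y_true[i] != y_true[j]              # ignore exact ties
    i, j = i[keep], j[keep]
    correct = (y_true[i] > y_true[j]) == (y_pred[i] > y_pred[j])
    return correct.mean()

# Noisy predictions of one trait for 12,000 people, a stand-in for the study:
rng = np.random.default_rng(1)
truth = rng.standard_normal(12_000)
pred = truth + 3.9 * rng.standard_normal(12_000)   # weak signal
print(pairwise_accuracy(truth, pred))               # lands near 0.58
```

As the sketch makes tangible, 58% corresponds to a fairly weak correlation between prediction and truth.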
Strikingly, the researchers claim their AI is better at predicting the traits than humans are. While ratings by “close relatives or colleagues” were far more accurate than ratings by strangers, they found that the AI “outperforms an average human rater who meets the target in person without any prior acquaintance,” according to the paper.
Considering the woeful accuracy, and the fact that some of the authors listed on the study are working on commercializing similar tech, these results should be taken with a hefty grain of salt.
Neural networks have generated some impressive results, but any research that draws self-serving conclusions — especially when they require some statistical gymnastics — should be treated with scrutiny.
Patent illustration showing the laser decoy system (U.S. Navy patent)
The U.S. Navy has patented technology to create mid-air images to fool infrared and other sensors. This builds on many years of laser-plasma research and offers a game-changing method of protecting aircraft from heat-seeking missiles. It may also provide a clue about the source of some recent UFO sightings by military aircraft.
The U.S. developed the first Sidewinder heat-seeking missile back in the 1950s, and the latest AIM-9X version is still in frontline service around the world. This type of sensor works so well because hot jet engine exhaust shines out like a beacon in the infrared, making aircraft easy targets. Pilots under attack can eject decoy flares to lure a missile away from the launch aircraft, but these provide only a few seconds’ protection. More recently, laser infrared countermeasures that dazzle the infrared seeker have been fielded.
A sufficiently intense laser pulse can ionize the air, producing a burst of glowing plasma. The Laser Induced Plasma Effects program uses single plasma bursts as flash-bang stun grenades; a rapid series of such pulses can even be modulated to transmit a spoken message (video here). In 2011 Japanese company Burton Inc demonstrated a rudimentary system that created moving 3D images in mid-air with a series of rapidly-generated plasma dots (video here).
Video (1:34): ‘Talking lasers and endless flashbangs: Pentagon develops plasma tech’ (‘Military Times’, YouTube)
Video (1:53): ‘True 3D Display in the Mid-Air Using Laser Plasma Technology’ (‘Deepak Gupta’, YouTube)
A more sophisticated approach uses an intense, ultra-short, self-focusing laser pulse to create a glowing filament or channel of plasma, an effect discovered in the 1990s. Known as laser-induced plasma filaments (LIPFs), these can be created at some distance from the laser, extending for tens or hundreds of meters. Because LIPFs conduct electricity, they have been investigated as a means of triggering lightning or creating a lightning gun.
US Army ‘lightning gun’ experiment with a laser-generated plasma channel (US Army)
One of the interesting things about LIPFs is that with suitable tuning they can emit light of any wavelength: visible, infrared, ultraviolet or even terahertz waves. This technology underlies the Navy project, which uses LIPFs to create phantom images with infrared emissions to fool heat-seeking missiles.
The Navy declined to discuss the project, but the work is described in a 2018 patent: “wherein a laser source is mounted on the back of the air vehicle, and wherein the laser source is configured to create a laser-induced plasma, and wherein the laser-induced plasma acts as a decoy for an incoming threat to the air vehicle.”
The patent goes on to explain that the laser creates a series of mid-air plasma columns, which form a 2D or 3D image by a process of raster scanning, similar to the way old-style cathode-ray TV sets displayed a picture.
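As a rough illustration of that raster-scanning idea, the sketch below traces a small bitmap of a decoy shape row by row, the way the patent likens the process to a CRT drawing a picture. The shape, spacing, and repeat count are invented for illustration; the patent does not publish such parameters.

```python
import numpy as np

decoy = np.array([
    [0, 1, 1, 0],
    [1, 1, 1, 1],
    [0, 1, 1, 0],
])                                   # 1 = place a plasma burst at this cell

def raster_points(bitmap, spacing_m=1.0):
    """Scan the bitmap row by row, like a CRT, yielding (x, y) aim points."""
    for row in range(bitmap.shape[0]):
        for col in range(bitmap.shape[1]):
            if bitmap[row, col]:
                yield (col * spacing_m, -row * spacing_m)

for frame in range(3):               # redraw fast enough and a sensor sees one solid image
    for x, y in raster_points(decoy):
        pass                         # here: steer the laser to (x, y) and fire a pulse
```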
A single decoy halves the chances of an incoming missile picking the right target, but there is no reason to stop at one: “There can be multiple laser systems mounted on the back of the air vehicle with each laser system generating a ‘ghost image’ such that there would appear to be multiple air vehicles present.”
Unlike flares, the LIPF decoy can be created instantly at any desired distance from the aircraft, and can be moved around at will. Equally importantly, it moves with the aircraft, rather than dropping away rapidly like a flare, providing protection for as long as needed.
The aircraft carrying the laser projector could also project decoys to cover other targets: “The potential applications of this LIP flare/decoy can be expanded, such as using a helicopter deploying flares to protect a battleship, or using this method to cover and protect a whole battle-group of ships, a military base or an entire city.”
The lead researcher in the patent is Alexandru Hening. A 2017 piece in the Navy’s own IT magazine says that Dr. Hening has been working on laser-induced plasma at Space and Naval Warfare Systems Center Pacific since 2012.
“If you have a very short pulse you can generate a filament, and in the air that can propagate for hundreds of meters, and maybe with the next generation of lasers you could produce a filament of even a mile,” Dr. Hening told the magazine, indicating that it should be possible to create phantoms at considerable distances.
Phantom aircraft that can move around at high speed and appear on thermal imagers may ring some bells. After months of debate, in April the Navy officially released infra-red videos of UFOs encountered by their pilots, although the Pentagon prefers to call them “unidentified aerial phenomena.” The objects in the videos appear to make sudden movements impossible for physical aircraft, rotate mid-air and zip along at phenomenal speed: all maneuvers which would be easy to reproduce with a phantom projected image.
It is unlikely the Pentagon would release videos of their own secret weapon in a bizarre double bluff. But other nations may have their own version. In the early 1990s the Russians claimed that they could produce glowing ‘plasmoids’ at high altitude using high-power microwave or laser beams; these were intended to disrupt the flight of ballistic missiles, an answer to the planned American ‘Star Wars’. Nothing came of the project, but the technology may have been refined for other applications in the subsequent decades.
Heat-seeking missiles will no doubt evolve ways to distinguish the plasma ghosts from real jets, leading to further refinement of the decoy technology, and so on. Whether humans also get smart enough to recognize such fakes remains to be seen.
Researchers say they’ve created a proof-of-concept bionic eye that could surpass the sensitivity of a human one.
“In the future, we can use this for better vision prostheses and humanoid robotics,” researcher Zhiyong Fan, at the Hong Kong University of Science and Technology, told Science News.
The eye, as detailed in a paper published in the prestigious journal Nature today, is in essence a three-dimensional artificial retina that features a highly dense array of extremely light-sensitive nanowires.
The team, led by Fan, lined a curved aluminum oxide membrane with tiny sensors made of perovskite, a light-sensitive material that’s been used in solar cells.
Wires that mimic the brain’s visual cortex relay the visual information gathered by these sensors to a computer for processing.
The nanowires are so sensitive that the artificial eye could surpass the optical wavelength range of the human eye, responding to wavelengths up to 800 nanometers, the threshold between visible light and infrared radiation.
That means it could see things in the dark when the human eye can no longer keep up.
“A human user of the artificial eye will gain night vision capability,” Fan told Inverse.
The researchers also claim the eye can react to changes in light faster than a human one, allowing it to adjust to changing conditions in a fraction of the time.
Each square centimeter of the artificial retina can hold about 460 million nanosize sensors, dwarfing the estimated 10 million cells in the human retina. This suggests that it could surpass the visual fidelity of the human eye.
Fan told Inverse that “we have not demonstrated the full potential in terms of resolution at this moment,” promising that eventually “a user of our artificial eye will be able to see smaller objects and further distance.”
Other researchers who were not involved in the project pointed out that plenty of work still has to be done to eventually be able to connect it to the human visual system, as Scientific American reports.
But some are hopeful.
“I think in about 10 years, we should see some very tangible practical applications of these bionic eyes,” Hongrui Jiang, an electrical engineer at the University of Wisconsin–Madison who was not involved in the research, told Scientific American.
Imagine you are out at an outdoor event, perhaps a BBQ or camping trip and a bug keeps flying by your face. You try to ignore it at first, perhaps lazily swat at it, but it keeps coming back for more. This is nothing unusual, as bugs have a habit of ruining the outdoors for people, but then it lands on your arm. Now you can see it doesn’t exactly look like a regular fly, something is off about it. You lean in, peer down at the little insect perched upon your arm, and that is when you notice that it is peering right back at you, with a camera in place of eyes. Welcome to the future of drone technology, with robotic flies and more, and it is every bit as weird as it sounds.
Everyone is familiar with drones nowadays. They seem to be everywhere, and they are getting smaller and cooler as time goes on, but how small can they really get, some may wonder. Well, looking at the trends in the technology these days, it seems that they can get very small, indeed. One private research team called Animal Dynamics has been working on tiny drones that use the concept of biomechanics, that is, mimicking the natural movements of insects and birds in nature. After all, what better designer is there than hundreds of millions of years of evolution? A prime example of this is one of their drones that aims to copy the shape and movements of a dragonfly, a drone called the “Skeeter.” The drone is launched by hand; its design allows it to maintain flight in winds of more than 20 knots (23 mph or 37 km/h) due to its close approximation of an actual dragonfly, and its multiple wings give it deft movement control. One of the researchers who helped design it, Alex Caccia, has said of its biomechanical design:
The way to really understand how a bird or insect flies is to build a vehicle using the same principles. And that’s what we set up Animal Dynamics to do. Small drones often have problems maneuvering in heavy wind. Yet dragonflies don’t have this problem. So we used flapping wings to replicate this effect in our Skeeter. Making devices with flapping wings is very, very hard. A dragonfly is an awesome flyer. It’s just insane how beautiful they are, nothing is left to chance in that design. It has very sophisticated flight control.
In addition to its small size and sophisticated controls, the Skeeter also can be equipped with a camera and communications links, using the type of miniaturized tech found in mobile smartphones. Currently the Skeeter measures around 8 inches long, but of course the team is working on smaller, lighter versions. As impressive as it is, Skeeter is not even the smallest insect drone out there. Another model designed by a team at the Delft University of Technology is called the “Delfly,” and weighs less than 50 grams. The Delfly is meant to copy the movements of a fruit fly, and has advanced software that allows it to autonomously fly about and avoid obstacles on its four cutting edge wings, fashioned from ultra-light transparent foil. The drone has been designed for monitoring agricultural crops, and is equipped with a minuscule camera. The team behind the Delfly hope to equip it with dynamic AI that will allow it to mimic the way an insect erratically flies about and avoids objects, and it seems very likely someone could easily mistake it for an actual fly. The only problem it faces at the moment is that it is so small that it has limited battery life, only able to stay aloft for 6 to 9 minutes at a time.
Indeed, this is the challenge that any sophisticated technology faces; the limitations of battery life. There is only so small you can make a battery before its efficiency is compromised, no matter how light and small the equipment, and it is a problem we are stuck with until battery technology is seriously upgraded. In fact, many of the prototype insect drones currently rely on being tethered to an external power source for the time being. But what if your drone doesn’t need batteries at all? That is the idea behind another drone designed by engineers at the University of Washington, who have created a robotic flying insect, which they call the RoboFly, that does not rely on any battery or external power source at all. Instead, the drone, which is about the same weight as a toothpick, rides about on a laser beam. This beam is invisible, and is aimed at a photovoltaic cell on the drone, which is then amplified with a circuit and is enough to power its wings and other components. However, even with such a game changing development, the RoboFly, and indeed all insect-sized unmanned aerial vehicles (UAVs), which are usually referred to as micro aerial vehicles (MAVs), still face some big challenges going ahead. Sawyer Fuller, leader of the team that created the RoboFly and director of the slightly ominous sounding Autonomous Insect Robotics Laboratory, has said of this:
A lot of the sensors that have been used on larger robots successfully just aren’t available at fly size. Radar, scanning lasers, range finders — these things that make the perfect maps of the world, that things like self-driving cars use. So we’re going to have to use basically the same sensor suite as a fly uses, a little camera.
However, great progress is being made, and these little drones are becoming more sophisticated in leaps and bounds, with the final aim being a fully autonomous flying insect robot that can more or less operate on its own or with only minimal human oversight. Fuller is very optimistic about the prospects, saying, “For full autonomous I would say we are about five years off probably.” Such a MAV would have all manner of applications, including surveillance, logistics, agriculture, taking measurements in hostile environments that a traditional drone can’t fit into or operating in hazardous environments, finding victims of earthquakes or other natural disasters, planetary exploration, and many others. Many readers might be thinking about now whether the military has any interest in all of this, and the answer is, of course they do.
The use of these MAVs is seen as very promising by the military, and the U.S. government has poured over a billion dollars into such research. Indeed, Animal Dynamics has been courted by the military with funding, and the creators of the RoboFly have also received generous funding for their research. The U.S. government’s own Defense Advanced Research Projects Agency (DARPA) has been pursuing the technology for years, as have other countries. On the battlefield MAVs have obvious applications, such as spying and reconnaissance, but they are also seen as having other uses, such as attaching to enemies to serve as tracking devices or very literal “bugs,” attaching tags to enemy vehicles to make targeting easier, taking DNA samples, or even administering poisons or dangerous chemical or biological agents. There are quite a few world governments actively pursuing these insect drones, and one New Zealand-based strategic analyst, Paul Buchanan, has said of this landscape:
The work on miniaturization began decades ago during the Cold War, both in the USA and USSR, and to a lesser extent the UK and China. The idea then and now was to have an undetectable and easily expendable weapons delivery or intelligence collection system. Nano technologies in particular have seen an increase in research on miniaturized UAVs, something that is not exclusive to government scientific agencies, but which also has sparked significant private sector involvement. That is because beyond the military, security and intelligence applications of miniaturized UAVs, the commercial applications of such platforms are potentially game changing. Within a few short years the world will be divided into those who have them and those who do not, with the advantage in a wide range of human endeavor going to the former.
While so far all of this is in the prototype stages and there are no working models in the field yet as far as we know, some conspiracy theorists believe that this is not even something for down the line in the future, but that the technology is already perfected and being used against an unsuspecting populace at this very moment. For instance, there was a report in 2007 in the Washington Post of several witnesses at an anti-war rally who claimed to have seen tiny drones like dragonflies or bumblebees darting about. One of these witnesses would say:
I look up and I’m like, ‘what the hell is that?’ They looked like dragonflies or little helicopters, but I mean, those are not insects. They were large for dragonflies and I thought, ‘is that mechanical or is that alive?’
Such supposed sightings of these tiny drones have increased in recent years, leading to the idea that the technology is already being used to spy on us, but of course the government and research institutes behind it all insist that working models are still a thing of the future. Yet it is still a scary thought, scary enough to instill paranoia, which is only fueled by these reports and others like them. One famous recent meme that caused a lot of panic in 2019 was a post from a Facebook user in South Africa, which shows an eerily mosquito-like robot perched on a human finger, accompanied by the text:
Is this a mosquito? No. It’s an insect spy drone for urban areas, already in production, funded by the US government. It can be remotely controlled and is equipped with a camera and a microphone. It can land on you, and may have the potential to take a DNA sample or leave RFID tracking nanotechnology on your skin. It can fly through an open window, or it can attach to your clothing until you take it home.
The post went viral, with rampant speculation on whether it was true or not. The debunking site Snopes came to the conclusion that the photo was fake and it was just a fictional meme, but others are not so sure, igniting the debate again on whether this is or will be a reality, or whether it ever should be. Regardless of the ethical and privacy concerns of having insect sized spy drones flying around, with all of the money and effort being put into this technology, the question of whether we will really have mosquito sized robots buzzing about seems to be not one of if, but of when. Perhaps they are even here already. So the next time you are out at a BBQ and that annoying fly keeps buzzing past your head, you might just want to take a closer look. Just in case.
06-05-2020
Tom Cruise is Literally Going to Outer Space to Shoot an Action Movie with Elon Musk’s SpaceX [Update]
Update: NASA administrator Jim Bridenstine says that this project will involve the International Space Station.
Jim Bridenstine (@JimBridenstine):
NASA is excited to work with @TomCruise on a film aboard the @Space_Station! We need popular media to inspire a new generation of engineers and scientists to make @NASA’s ambitious plans a reality.
The global superstar is set to literally leave the globe to star in a new movie which will be shot in space – and he’s teaming up with Elon Musk‘s SpaceX company to make it happen.
Deadline reports that this new Tom Cruise space movie is not a Mission: Impossible project, and that no studio is involved yet because it’s still early in development. But Cruise and SpaceX are working on the action/adventure project with NASA, and if it actually happens, it will be the first narrative feature film to be shot in outer space.
This is not the first time Cruise has flirted with leaving the Earth to make a movie. Twenty years ago (context: the same year Mission: Impossible II came out), none other than James Cameron approached Cruise and asked if he’d be interested in heading to the great unknown to make a movie together.
“I actually talked to [Cruise] about doing a space film in space, about 15 years ago,” Cameron said in 2018. “I had a contract with the Russians in 2000 to go to the International Space Station and shoot a high-end 3D documentary there. And I thought, ‘S—, man, we should just make a feature.’ I said, ‘Tom, you and I, we’ll get two seats on the Soyuz, but somebody’s gotta train us as engineers.’ Tom said, ‘No problem, I’ll train as an engineer.’ We had some ideas for the story, but it was still conceptual.”
Obviously that project never came together, but it sounds like Cameron may have planted a seed that some other filmmaker might get to harvest.
The fact that Musk, who is often the butt of jokes about how it seems like he could be a villain in a James Bond movie, is involved here (or at least his company is, so one assumes he will at least get an executive producer credit) is almost too perfect. Remember Moonraker? Bond went to space in that one. It’s…pretty bad. Fingers crossed this will turn out much, much better.
My favorite thing about Cruise is that he is in constant pursuit of perfection. He doesn’t always achieve it – see: Mummy, The – but by God, the dude is willing to lay it all on the line to entertain worldwide audiences, and he’s really effin’ good at it. Here’s hoping this actually comes together, and I’m extremely curious if this will end up being another Cruise/Christopher McQuarrie collaboration or if Cruise trusts any other director to lead him to these unprecedented heights.
A cutting-edge implant has allowed a man to feel and move his hand again after a spinal cord injury left him partially paralyzed, Wired reports.
According to a press release, it’s the first time both motor function and sense of touch have been restored using a brain-computer interface (BCI), as described in a paper published in the journal Cell.
After severing his spinal cord a decade ago, Ian Burkhart had a BCI developed by researchers at Battelle, a private nonprofit specializing in medical tech, implanted in his brain in 2014.
The injury completely cut off the electrical signals that once traveled from Burkhart’s brain through the spinal cord to his hands. But the researchers figured they could bypass the spinal cord entirely, hooking up Burkhart’s primary motor cortex to his hands through a relay.
A port in the back of his skull sends signals to a computer. Special software decodes the signals and splits them between signals corresponding to motion and touch respectively. Both of these signals are then sent out to a sleeve of electrodes around Burkhart’s forearm.
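Battelle has not published its decoder, but the data flow described above, one stream of decoded neural features split into a motion estimate and a touch estimate, can be caricatured with two linear readouts. Every number here (channel count, output dimensions, weights) is an invented placeholder purely to show the one-input, two-output split:

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder for one decoding window of neural features from the implant.
# The real Battelle decoder and its trained weights are proprietary.
features = rng.standard_normal(96)

W_motor = rng.standard_normal((5, 96)) * 0.1   # -> 5 hand/wrist movement commands
W_touch = rng.standard_normal((1, 96)) * 0.1   # -> 1 touch-intensity estimate

motor_cmd = W_motor @ features                 # routed to the forearm electrode sleeve
touch_level = W_touch @ features               # routed to the wearable haptic device
print(motor_cmd, touch_level)
```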
But making sense of these signals is extremely difficult.
“We’re separating thoughts that are occurring almost simultaneously and are related to movements and sub-perceptual touch, which is a big challenge,” lead researcher at Battelle Patrick Ganzer told Wired.
The team saw some early successes regarding movement — the initial goal of the BCI — allowing Burkhart to press buttons along the neck of a “Guitar Hero” controller.
But returning touch to his hand was a much more daunting task. By using a simple vibration device or “wearable haptic system,” Burkhart was able to tell if he was touching an object or not without seeing it.
“It’s definitely strange,” Burkhart told Wired. “It’s still not normal, but it’s definitely much better than not having any sensory information going back to my body.”
Science fiction has always been a medium for futuristic imagination, and while different colored aliens and intergalactic travel are yet to be discovered, there is an array of technologies that are no longer figments of the imagination thanks to the world of science fiction. Some of the creative inventions that have appeared in family-favorite movies like "Back to the Future" and "Total Recall" are now at the forefront of modern technology. Here are a few of our favorite technologies that went from science fiction to reality.
1. The mobile phone
The communicator was often used to call back to the USS Enterprise.
From: "Star Trek: The Original Series"
It's something that almost everyone has in their pockets. Mobile phones have become a necessity in modern life with a plethora of remarkable features. The first mobile phone was invented in 1973, the Motorola DynaTAC. It was a bulky thing that weighed 2.4 lbs. (1.1 kilograms) and had a talk time of about 35 minutes. It also cost thousands of dollars.
The Motorola DynaTAC was invented by Martin Cooper, who led a team that created the phone in just 90 days. A long-standing rumor was that Cooper got his inspiration from an episode of Star Trek where Captain Kirk used his hand-held communications device. However, Cooper stated in a 2015 interview that the original inspiration was from a comic strip called Dick Tracy, in which the character used a "wrist two-way radio."
2. The universal translator
Star Trek characters would often come across alien life with different languages. (Image credit: Paramount Pictures/CBS Studios)
From: "Star Trek: The Original Series"
While exploring space, characters such as Captain Kirk and Spock would come across alien life that spoke different languages. To understand the galactic foreigners, the Star Trek characters used a device that instantly translated the aliens' unusual languages. Star Trek's universal translator was first seen on screen when Spock tampered with it in order to communicate with a non-biological entity (Season 2, Episode 9, "Metamorphosis").
Although the idea in Star Trek was to communicate with intelligent alien life, a device capable of breaking down language barriers would revolutionize real-time communication. Now, products such as Sourcenext's Pocketalk and Skype's new voice translation service are capable of providing instantaneous translation between languages. Flawless real-time communication is far off, but the technological advancements over the last decade mean this feat is within reach.
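Products like these all chain the same three stages: speech recognition, machine translation, and speech synthesis. A minimal pipeline sketch with hypothetical stub functions; no real product's API is reproduced here:

```python
# Hypothetical stubs: stand-ins for whatever ASR/MT/TTS services a product wires together.

def speech_to_text(audio: bytes, lang: str) -> str:
    """Stage 1: automatic speech recognition (stub)."""
    raise NotImplementedError

def translate(text: str, src: str, dst: str) -> str:
    """Stage 2: machine translation (stub)."""
    raise NotImplementedError

def text_to_speech(text: str, lang: str) -> bytes:
    """Stage 3: speech synthesis (stub)."""
    raise NotImplementedError

def universal_translator(audio: bytes, src: str, dst: str) -> bytes:
    """Chain the three stages, as real-time translator devices do."""
    text = speech_to_text(audio, src)
    return text_to_speech(translate(text, src, dst), dst)
```

The hard engineering problems live inside the stubs; the chaining itself is trivial, which is why latency and recognition errors, not architecture, limit today's devices.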
3. Teleportation
The transporter is an iconic feature of the original Star Trek series. (Image credit: Paramount/AF archive/Alamy Stock Photo)
From: "Star Trek: The Original Series"
The idea behind "beaming" someone up was that a person could be broken down into an energy form (dematerialization) and then converted back into matter at their destination (rematerialization). Transporting people this way on Star Trek's USS Enterprise had been around since the very beginning of the series, debuting in the pilot episode.
Scientists haven't figured out how to teleport humans yet, but they can teleport particles of light known as photons. In this case, teleportation is based on a phenomenon known as quantum entanglement. This refers to a condition in quantum mechanics where two entangled particles may be very far from one another, yet remain connected so that actions performed on one affect the other, regardless of distance. Experiments have shown these correlations taking hold at least 10,000 times faster than light could travel between the particles, although the protocol still requires an ordinary, slower-than-light signal, so no usable information travels faster than light.
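The protocol itself is simple enough to simulate on a laptop. The sketch below teleports one qubit's state using a shared entangled pair and two classical bits, with plain NumPy linear algebra; note that Bob cannot recover the state until Alice's two measured bits reach him over a classical channel, which is exactly why teleportation does not send information faster than light.

```python
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit gates and projectors.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P0 = np.diag([1, 0]).astype(complex)
P1 = np.diag([0, 1]).astype(complex)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# The state Alice wants to teleport: alpha|0> + beta|1>.
alpha, beta = 0.6, 0.8j

# Qubit 0: Alice's message. Qubits 1 and 2: a shared Bell pair (|00>+|11>)/sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
state = np.kron(np.array([alpha, beta], dtype=complex), bell)  # 8 amplitudes

# Alice's circuit: CNOT (qubit 0 controls qubit 1), then Hadamard on qubit 0.
CNOT01 = kron3(P0, I2, I2) + kron3(P1, X, I2)
state = kron3(H, I2, I2) @ (CNOT01 @ state)

# Alice measures qubits 0 and 1: two classical bits she must send to Bob.
amps = state.reshape(2, 2, 2)                  # indices: qubit0, qubit1, qubit2
p_outcome = (np.abs(amps) ** 2).sum(axis=2).ravel()
outcome = rng.choice(4, p=p_outcome)
m0, m1 = divmod(outcome, 2)

# Bob's qubit collapses to the branch Alice measured...
bob = amps[m0, m1] / np.sqrt(p_outcome[outcome])
# ...and he repairs it using the two classical bits: X if m1, then Z if m0.
if m1:
    bob = X @ bob
if m0:
    bob = Z @ bob

print("teleported:", bob)   # matches (alpha, beta) up to a global phase
```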
4. Holograms
This hologram of Princess Leia features the iconic line, "Help me Obi-Wan Kenobi, you're my only hope." (Image credit: Lucasfilm/AF archive/Alamy Stock Photo)
From: "Star Wars: Episode IV — A New Hope"
Not long into the first Star Wars movie, Obi-Wan Kenobi receives a holographic message. By definition, a hologram is a 3D image created from the interference of laser light beams on a 2D surface, and it can only be seen from one angle.
In 2018, researchers from Brigham Young University in Provo, Utah, created a real hologram. Their technique, called volumetric display, works like an Etch-A-Sketch toy, but uses particles at high speeds. With lasers, researchers can trap particles and move them into a designated shape while another set of lasers emit red, green and blue light onto the particle and create an image. But so far, this can only happen on extremely small scales.
5. Bionic prosthetics
Even though prosthetics had been in common use for a long time, Star Wars sparked the idea of bionic prosthetics. (Image credit: Disney/Lucasfilm)
From: "Star Wars: Episode V — The Empire Strikes Back"
Imagine getting your hand chopped off by your own father and falling to the bottom of a floating building to then have your long-lost sister come and pick you up. It's unlikely in reality, but not in the Star Wars movies. After losing his hand, Luke Skywalker receives a bionic version that has all the functions of a normal hand. This scenario is now more feasible than the previous one.
Researchers from the Georgia Institute of Technology in Atlanta, Georgia, have been developing a way for amputees to control each of their prosthetic fingers using an ultrasonic sensor. In the movie, Skywalker's prosthesis uses electromyogram sensors attached to his muscles. The sensors can be switched into different modes and are controlled by the flexing or contracting of his muscles. The prosthesis created by the Georgia Tech researchers, however, uses machine learning and ultrasound signals to detect fine finger-by-finger movement.
6. Digital Billboards
In Blade Runner, digital billboards were used to decorate the dystopian metropolis of Los Angeles. (Image credit: Warner Bros./courtesy Everett Collection/Alamy Stock Photo)
From: "Blade Runner"
Director Ridley Scott presents a landscape shot of futuristic Los Angeles in the movie "Blade Runner." As the camera scans the skyscrapers, a huge, digital, almost-cinematic billboard appears on one of the buildings. This pre-internet concept sparked the imagination of Andrew Phipps Newman, the CEO of DOOH.com. DOOH — which stands for Digital Out Of Home — is a company dedicated to providing live, dynamic advertisements through the use of digital billboards. The company is now at the forefront of advertising, offering a more enticing format, one that makes people stop and stare.
Digital billboards have come a long way since DOOH was founded in 2013. They have taken advantage of crowded cities, such as London and New York, to utilize this unique advertising tactic. Perhaps the more recent "Blade Runner 2049" will bring us even more new technologies.
The "Blade Runner" story heavily revolves around the idea of synthetic humans, which require artificial intelligence (AI). Some people might be worried about the potential fallout of giving computers intelligence, which has had disastrous consequences in many science-fiction works. But AI has some very useful applications in reality. For instance, astronomers have trained machines to find exoplanets using computer-based learning techniques. While sifting through copious amounts of data collected by missions such as NASA's Kepler and TESS missions, AI can identify the telltale signs of an exoplanet lurking in the data.
8. Space stations
The interior design of the spacecraft in "2001: A Space Odyssey" bears an uncanny resemblance to the ISS. (Image credit: MGM/THE KOBAL COLLECTION)
From: "2001: A Space Odyssey"
Orbiting Earth in "2001: A Space Odyssey" is Space Station V, a large establishment located in low-Earth orbit where astronauts can bounce around in microgravity. Does this sound familiar?
Space Station V provided inspiration for the International Space Station (ISS), which has been orbiting Earth since construction began in 1998 and currently accommodates up to six astronauts at a time. Although Space Station V appears much more luxurious, the ISS has accomplished far more science and has been fundamental to microgravity research from the start.
Space Station V wasn't just an out-of-this-world holiday destination; it also served as a pit stop before travel to the Moon and other long-duration space destinations. The proposed Deep Space Gateway, a station orbiting the Moon, would serve a similar purpose.
Tablets today are capable of recognizing fingerprints and even facial features of their owner for better security. (Image credit: Metro-Goldwyn-Mayer/AF archive/Alamy Stock Photo)
From: "2001: A Space Odyssey"
Tablets are wonderful handheld computers that can be controlled at the press of a finger. These handy devices are used by people across the globe, and even farther up, aboard the ISS. Apple claims to have invented the tablet with the release of its iPad, but Samsung made an extremely interesting case in court that Apple did not: Stanley Kubrick and Sir Arthur C. Clarke got there first, by including the device in "2001: A Space Odyssey," released in 1968.
In the film, Dr. David Bowman and Dr. Frank Poole watch news updates from their flat-screen computers, which they called "newspads." Samsung claimed that these "newspads" were the original tablet, featured in a film over 40 years before the first iPad arrived in 2010. This argument was not successful though, as the judge ruled that Samsung could not utilize this particular piece of evidence.
10. Hoverboards
Marty McFly was able to hover over any surface, even water, with the hoverboard. (Image credit: Universal Pictures/AF archive/Alamy Stock Photo)
From: "Back to the Future Part II"
The Back to the Future trilogy is a highly enjoyable trio of time-traveling adventures, but it is Part II that presents the creators' vision of 2015. The film predicted a far more outlandish 2015 than what actually happened just five years ago, but it got one thing correct: hoverboards, just like the one Marty McFly "borrows" to make a quick escape.
Although they aren't as widespread as the film portrays, hoverboards now exist. The first real one was created in 2015 by Arx Pax, a company based in California, which invented the Magnetic Field Architecture (MFA™) that provides the levitation. The board generates a magnetic field, which induces an eddy current in a conductive surface below it, and that current in turn creates an opposing magnetic field. The two fields repel each other, which is what gives the board lift above the copper surface of the company's "hoverpark."
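A back-of-the-envelope sketch of that Lenz's-law chain (toy numbers and toy scaling, not Arx Pax's MFA model) shows how each step feeds the next:

```python
import numpy as np

def lift_estimate(b_field_T, area_m2, freq_hz, sheet_conductance_S, length_m=0.1):
    """Order-of-magnitude chain only: a field oscillating at freq_hz over a
    conductor induces an EMF (Faraday's law), the EMF drives an eddy
    current, and that current feels an opposing force in the field
    (F ~ B*I*L). Real eddy-current levitation needs a full field solution."""
    emf = b_field_T * area_m2 * (2 * np.pi * freq_hz)   # volts
    eddy_current = emf * sheet_conductance_S            # amperes
    return b_field_T * eddy_current * length_m          # newtons, roughly

# e.g. a 0.5 T magnet array over 0.01 m^2 of copper, driven at 50 Hz
print(f"~{lift_estimate(0.5, 0.01, 50, 200):.0f} N of repulsion (toy figure)")
```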
11. Driverless cars
Johnny Cab wasn't able to move unless he had the destination, ultimately leading to his demise. (Image credit: TriStar Pictures)
From: "Total Recall"
In the 1990 film "Total Recall," set in 2084, protagonist Douglas Quaid (played by Arnold Schwarzenegger) finds himself in the middle of a sci-fi showdown on Mars. In one scene, Quaid is on the run from the bad guys and jumps into a driverless car. Up front is "Johnny Cab," the car's on-board computer system. All Johnny needs is an address to take the car to its intended destination.
Although the driverless car is barely seen in action before the protagonist yells profanities and takes over the driving, the idea of a car that takes you to your destination using its onboard satellite navigation has become increasingly popular. The company at the forefront of driverless cars is Waymo, which wants to eradicate the human error and inattention that lead to dangerous and fatal accidents.
In 2017, NASA stated its intention to help in the development of driverless cars, since the same technologies would improve robotic vehicles on extraterrestrial surfaces such as the Moon or Mars.
A great name for a band, an intriguing plot for a movie … but a scary thing to find out your military has developed. Yes, they claim it’s for bomb-sniffing … just like gunpowder was originally developed for medicinal purposes. What could possibly go wrong? Let’s find out.
“Finally, we developed a minimally-invasive surgical approach and mobile multi-unit electrophysiological recording system to tap into the neural signals in a locust brain and realize a biorobotic explosive sensing system. In sum, our study provides the first demonstration of how biological olfactory systems (sensors and computations) can be hijacked to develop a cyborg chemical sensing approach.”
“Hijacked to develop a cyborg chemical sensing approach” means the brains of locusts – insects made famous by so many plagues – are wired with electrodes connected to tiny packs on their backs. The packs transmit the sensory signals picked up by the locusts’ antennae to a computer, which monitors them and registers when the insects detect the vapors of one of many different explosives, a process that takes a mere few hundred milliseconds, thanks to the roughly 50,000 olfactory neurons in those sensitive antennae. That’s a brief summary of “Explosive sensing with insect-based biorobots,” a non-peer-reviewed paper published this week on bioRxiv describing four years of research by a team led by Baranidharan Raman, associate professor of biomedical engineering in the School of Engineering and Applied Science at Washington University in St. Louis.
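The decoding step the paper describes, reading odor identity out of neural activity within a few hundred milliseconds, is in spirit a classification problem. The sketch below uses invented numbers and a generic classifier purely to illustrate the shape of that problem, not the team's actual methods.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
ODORS = ["TNT vapor", "DNT vapor", "clean air (control)"]

# Stand-in data: 300 sniff trials, each a vector of spike counts from 32
# recording channels in the first few hundred milliseconds after exposure.
X = rng.poisson(5.0, size=(300, 32)).astype(float)
y = rng.integers(0, len(ODORS), size=300)
X[y == 0, :8] += 4.0   # pretend the first channels respond to TNT-like vapor

decoder = LogisticRegression(max_iter=1000).fit(X, y)
print(ODORS[int(decoder.predict(X[:1])[0])])   # decode one incoming trial
```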
If you refuse to let them go, I will bring locusts into your country tomorrow. They will cover the face of the ground so that it cannot be seen. They will devour what little you have left after the hail, including every tree that is growing in your fields. They will fill your houses and those of all your officials and all the Egyptians—something neither your fathers nor your forefathers have ever seen from the day they settled in this land till now. — Exodus 10:3–6
They will come out of the graves with downcast eyes like an expanding swarm of locusts. — The Quran, 54:7 (The Moon)
And as when beneath the onrush of fire locusts take wing to flee unto a river, and the unwearied fire burneth them with its sudden oncoming, and they shrink down into the water; even so before Achilles was the sounding stream of deep-eddying Xanthus filled confusedly with chariots and with men. — The Iliad
Yeah, yeah, yeah … they know all about your biblical, Quranic and Homeric locust plagues, but that didn’t stop the U.S. Office of Naval Research from investing $750,000 back in 2016 to develop cyborg locusts … or was that the reason for the investment? Cheap bomb detectors are certainly in demand, by police, the military and airport security for dealing with terrorists, and by governments like Vietnam and South Korea that still have active minefields left over from not-so-recent wars. Still, it’s easy to imagine other uses for these bionic bugs, whose main claim to fame is not bomb detection but swarming crop devastation of the plague kind. Back in 2016, Raman saw other uses for the cyborg locusts as well.
“But the real key, he says, is the relative simplicity of the locust’s brain. That’s what allows it to be hijacked, which, if all goes right, will allow for remote explosive sensing. Raman believes that eventually cyborg locusts could be used for other sniff-centric tasks, even medical diagnoses that rely on smell.”
Send them to countries with bombs and mines right before the crops are ripe. Have them do their jobs detecting the explosives, then reward them by letting the cyborg locusts detect the wheat fields and let the destruction begin.
What could possibly go wrong?
“Something neither your fathers nor your forefathers have ever seen from the day they settled in this land till now.”
Scientists from Tufts University, the University of Vermont, and the Wyss Institute at Harvard have developed tiny, living organisms that can be programmed. Called "xenobots," these robots were made with frog stem cells.
The research, published in the scientific journal Proceedings of the National Academy of Sciences, is meant to aid development of soft robots that can repair themselves when damaged.
Ultimately, the hope is these xenobots will be useful in cleaning up microplastics, digesting toxic materials, or even delivering drugs inside our bodies.
What happens when you cross stem cells from a frog heart and frog skin? Not much—that is, until you program those cells to move. In that case, you've created a xenobot, a new type of organism that's part robot, part living thing.
And we've never seen anything like it before.
Researchers from Tufts University, the University of Vermont, and Harvard University have created the first xenobots from frog embryos, designing them with computer algorithms and physically shaping them with surgical precision. The skin-and-heart-cell constructs are just one millimeter in size, but can accomplish some remarkable things for what they are, like physically squirming toward targets.
"These are novel living machines," Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research, said in a press statement. "They're neither a traditional robot nor a known species of animal. It's a new class of artifact: a living, programmable organism."
By studying these curious organisms, researchers hope to learn more about the mysterious world of cellular communication. Plus, these kinds of robo-organisms could possibly be the key to drug delivery in the body or greener environmental cleanup techniques.
"Most technologies are made from steel, concrete, chemicals, and plastics, which degrade over time and can produce harmful ecological and health side effects," the authors note in a research paper published in the scientific journal Proceedings of the National Academy of Sciences. "It would thus be useful to build technologies using self-renewing and biocompatible materials, of which the ideal candidates are living systems themselves."
Building Xenobots
Xenobots borrow their name from Xenopus laevis, the scientific name for the African clawed frog from which the researchers harvested the stem cells. To create the little organisms, which scoot around a petri dish a bit like water bears—those tiny microorganisms that are pretty much impossible to kill—the researchers scraped living stem cells from frog embryos. These were separated into single cells and left to incubate.
They differentiated the stem cells into two different kinds: heart and skin cells. The heart cells are capable of expanding and contracting, which ultimately aids the xenobot in locomotion, and the skin cells provide structure. Next, using tiny forceps and an even smaller electrode, the scientists cut the cells and joined them together under a microscope in designs that were specified by a computer algorithm.
Interestingly, the two different kinds of cells did merge together well and created xenobots that could explore their watery environment for days or weeks. When flipped like a turtle on its shell, though, they could no longer move.
Other tests showed whole groups of xenobots are capable of moving in circles and pushing small items to a central location all on their own, without intervention. Some were built with holes in the center to reduce drag and the researchers even tried using the hole as a pouch to let the xenobots carry objects. Bongard said it's a step in the right direction for computer-designed organisms that can intelligently deliver drugs in the body.
Evolutionary Algorithms
On the left, the anatomical blueprint for a computer-designed organism, discovered on a UVM supercomputer. On the right, the living organism, built entirely from frog skin (green) and heart muscle (red) cells. The background displays traces carved by a swarm of these new-to-nature organisms as they move through a field of particulate matter.
SAM KRIEGMAN, UVM
While these xenobots are capable of some spontaneous movement, they can't accomplish any coordinated efforts without the help of computers. Really, xenobots couldn't fundamentally exist without designs created through evolutionary algorithms.
Just as natural selection dictates which members of a species live and which die off—based on certain favorable or unfavorable attributes and ultimately influencing the species' characteristics—evolutionary algorithms can help find beneficial structures for the xenobots.
A team of computer scientists created a virtual world for the xenobots and then ran evolutionary algorithms to see which potential designs could help them move or accomplish some other goal. The algorithm looked for xenobots that performed well at those particular tasks while in a given configuration, and then bred those designs with other candidates that were considered "fit" enough to survive this simulated natural selection.
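In code, that loop is surprisingly small. Here is a minimal, toy stand-in for the approach (not the UVM pipeline itself): candidate bodies are grids of passive skin cells (0) and active heart cells (1), a fake "simulator" scores each layout, and the fittest half is mutated into the next generation.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_displacement(body):
    """Toy fitness standing in for a soft-body physics simulation:
    layouts with active cells low in the body score higher."""
    row_weight = np.linspace(0.0, 1.0, body.shape[0])[:, None]
    return float((body * row_weight).sum())

def evolve(pop_size=50, shape=(8, 8), generations=100, mutation_rate=0.05):
    population = rng.integers(0, 2, size=(pop_size, *shape))
    for _ in range(generations):
        fitness = np.array([simulate_displacement(b) for b in population])
        survivors = population[np.argsort(fitness)[-pop_size // 2:]]
        mutants = survivors.copy()
        flips = rng.random(mutants.shape) < mutation_rate
        mutants[flips] ^= 1                    # flip skin <-> heart cells
        population = np.concatenate([survivors, mutants])
    scores = [simulate_displacement(b) for b in population]
    return population[int(np.argmax(scores))]

print(evolve())   # 1s = heart cells, 0s = skin cells, ready for the biologists
```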
In one simulation video released by the team, for example, you can see a virtual version of the xenobot, which is capable of forward movement. The final organism takes on a similar shape to this design and is capable of (slowly) getting around. The red and green squares at the bottom of the structure are active cells, in this case the heart cells, while the bluish squares represent the passive skin cells.
DOUGLAS BLACKISTON
All of this design work was completed over the course of a few months on the Deep Green supercomputer cluster at the University of Vermont. After a few hundred runs of the evolutionary algorithm, the researchers filtered out the most promising designs. Then, biologists at Tufts University assembled the real xenobots in vitro.
What's the Controversy?
Anything dealing with stem cells is bound to catch at least some flak, because detractors take issue with the entire premise of using stem cells, which are harvested from developing embryos.
That's compounded with other practical ethics questions, especially relating to safety and testing. For instance, should the organisms have protections similar to animals or humans when we experiment on them? Could we, ourselves, eventually require protection from the artificially produced creatures?
"When you’re creating life, you don’t have a good sense of what direction it’s going to take," Nita Farahany, who studies the ethical ramifications of new technologies at Duke University and was not involved in the study, told Smithsonian Magazine. "Any time we try to harness life … [we should] recognize its potential to go really poorly."
Michael Levin, a biophysicist and co-author of the study from Tufts University, said that fear of the unknown in this case is not unreasonable:
"When we start to mess around with complex systems that we don't understand, we're going to get unintended consequences," he said in a press statement. "If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules."
At its heart, the study is a "direct contribution to getting a handle on what people are afraid of, which is unintended consequences," Levin said.
22-01-2020
Meet the Chinese robot worm that could crawl into your brain
Scientists in Shenzhen are developing a machine that sounds like a form of ancient black magic because it could enter the brain and send signals to the neurons
Magnetically controlled device could be used to deliver drugs or interact directly with computers
The device could be sent into the brain and transmit electric pulses to the neurons.
Photo: Shutterstock
According to an ancient southern Chinese form of black magic known as Gu, a small poisonous creature similar to a worm could be grown in a pot and used to control a person’s mind.
Now a team of researchers in Shenzhen have created a robot worm that could enter the human body, move along blood vessels and hook up to the neurons.
“In a way it is similar to Gu,” said Xu Tiantian, a lead scientist for the project at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences.
“But our purpose is not developing a biological weapon. It’s the opposite,” she added.
In recent years, science labs around the world have produced many micro-bots, but most of them were capable of performing only simple tasks.
But a series of videos released by the team alongside a study in Advanced Functional Materials earlier this month shows that the tiny intelligent robots – which they dubbed iRobots – can hop over a hurdle, swim through a tube or squeeze through a gap half their body width.
The 1mm by 3mm robotic worms are not powered by computer chips or batteries, but by an external magnetic field generator.
A video from the team showed the robots performing a range of manoeuvres.
Photo: Handout
Changing the magnetic fields allows the researchers to twist the robot’s body in many different ways and achieve a wide range of movements such as crawling, swinging and rolling, according to their paper.
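A hypothetical sketch of that control idea (invented parameters, not the Shenzhen team's controller): because the worm's magnetic head aligns with the external field, slowly rotating the field vector rolls the body over.

```python
import numpy as np

def rolling_gait_fields(strength_mT=10.0, steps_per_turn=8):
    """Field vectors for one full rotation in the x-z plane; stepping
    through them in order rotates the worm's magnetic head, and the
    body follows."""
    angles = np.linspace(0.0, 2.0 * np.pi, steps_per_turn, endpoint=False)
    return [(strength_mT * np.cos(a), 0.0, strength_mT * np.sin(a))
            for a in angles]

for bx, by, bz in rolling_gait_fields():
    # A real driver would set three orthogonal coil currents to realize
    # the field vector (bx, by, bz) around the patient.
    pass
```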
They can also squeeze through gaps by using infrared radiation to contract their bodies by more than a third.
The worm’s body is also capable of changing colour in different environments, because it is made from a transparent, temperature-responsive hydrogel. In one video, the robots become almost invisible when added to a cup of water at room temperature.
It also has a “head” made from a neodymium iron-boron magnet and a “tail” constructed from a special composite material.
Xu believes the robots will prove particularly useful for doctors in the future, for example by being injected into the body to deliver a package of drugs to a targeted area such as a tumour.
This would limit the effect of the drug to the areas where it is needed and reduce the risk of side effects, and the robot worm could exit the body once its task is complete.
The patient would need to lie in an MRI style machine that generates the magnetic field needed to control the robots during the procedure.
The robot worms are controlled by electromagnetic signals.
Photo: Handout
It could also be implanted into the brain, because its high mobility and ability to transform mean it can survive in that harsh environment of rapid blood flows and tiny blood vessels.
Currently, brain implants can only be inserted via a surgical procedure and have a limited capability to integrate with the neurons, which means they can only perform a few simple tasks.
But Xu said the new robots could “work as an implant for brain-computer interface” that would make it possible to communicate directly with a computer without needing a keyboard or even a screen.
She believes this would work by having the robot carry a transmitter that converts external signals into an electric pulse and connects with brain cells, stimulating activity that is not possible using current technology.
Xu admitted that it may be possible to misuse the technology by turning it into a weapon, but said there were still some major barriers to making this effective.
For instance, the controller would need a powerful field generator with a long effective range to operate the robot worms.
It would also be very difficult to send the microbots to their designated locations without the cooperation of the person they are implanted in, because the host has to sit or lie down and stay perfectly still while the robots move through the body.
But improved hardware may overcome these obstacles, so Xu could not rule out the possibility that the technology could be weaponised one day, adding: “We just hope that day will never come.”
“You see, their young enter through the ears and wrap themselves around the cerebral cortex. This has the effect of rendering the victim extremely susceptible to suggestion… Later, as they grow, follows madness and death…” – Khan Noonien Singh
Anyone who has ever seen Star Trek II: The Wrath Of Khan, the second movie in the series, can still remember the horror of Khan releasing larvae of Ceti eels into the ears of Reliant officers Commander Pavel Chekov and Captain Clark Terrell, where they wormed their way into their brains, wrapping themselves around the cerebral cortex to cause brain control, pain, madness and eventual death. It’s nice to know that’s pure fiction, right? RIGHT?
“Once you consume them, they can move throughout your body — your eyes, your tissues and most commonly your brain. They leave doctors puzzled in their wake as they migrate and settle to feed on the body they’re invading; a classic parasite, but this one can get into your head.”
According to CNN, in 2013 a British man of Chinese descent was found to have a tapeworm moving inside his brain – a parasite known as Spirometra erinaceieuropaei. It’s extremely rare and found mostly in Asia – the adult parasite lives in dog and cat intestines, but the eggs can be spread via fecal matter, particularly in water, which appears to be how the man contracted it. In 2018, a man in India died after his brain, brain stem, and cerebellum were infected by the tapeworm Taenia solium. It’s a good thing these worms are rare and no one is trying to make robotic versions of them, right? RIGHT?
“It could also be implanted into the brain because its high mobility and ability to transform means it can survive in this harsh environment where there are rapid blood flows and tiny blood vessels.”
The South China Morning Post reports that scientists in Shenzhen have developed a tiny robot worm that can enter the human body, swim along blood vessels and hook up to neurons in the brain. The 1mm-by-3mm (.04 in by .12 in) robots are powered externally by a magnetic field generator and use infrared radiation to contract their bodies by up to a third to squeeze through tight spots. On the noble-cause side, Xu Tiantian – lead scientist for the project at the Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences – says the robot worms will allow doctors to deliver drugs directly to a specific tumor and then exit the body when done.
“But our purpose is not developing a biological weapon. It’s the opposite.”
Needless to say, using the robot worm as a weapon becomes entirely possible as soon as a more powerful field generator with a longer effective range is available and the robot worms gain the ability to move while the host human is in motion – for now, the host has to be lying perfectly still. If he were around today, Khan Noonien Singh might say: “Piece of cake.”
Xu agrees.
“We just hope that day will never come.”
Or is it already here? Mr. Chekov, care to comment?
17-01-2020
‘PigeonBot’ brings flying robots closer to real birds
Try as they might, even the most advanced roboticists on Earth struggle to recreate the effortless elegance and efficiency with which birds fly through the air. The “PigeonBot” from Stanford researchers takes a step towards changing that by investigating and demonstrating the unique qualities of feathered flight.
On a superficial level, PigeonBot looks a bit, shall we say, like a school project. But a lot of thought went into this rather haphazard-looking contraption. It turns out the way birds fly is really not very well understood, as the relationship between the dynamic wing shape and the positions of individual feathers is super complex.
Mechanical engineering professor David Lentink challenged some of his graduate students to “dissect the biomechanics of the avian wing morphing mechanism and embody these insights in a morphing biohybrid robot that features real flight feathers,” taking as their model the common pigeon — the resilience of which Lentink admires.
As he explains in an interview with the journal Science:
The first Ph.D. student, Amanda Stowers, analyzed the skeletal motion and determined we only needed to emulate the wrist and finger motion in our robot to actuate all 20 primary and 20 secondary flight feathers. The second student, Laura Matloff, uncovered how the feathers moved via a simple linear response to skeletal movement. The robotic insight here is that a bird wing is a gigantic underactuated system in which a bird doesn’t have to constantly actuate each feather individually. Instead, all the feathers follow wrist and finger motion automatically via the elastic ligament that connects the feathers to the skeleton. It’s an ingenious system that greatly simplifies feather position control.
In addition to finding that the individual control of feathers is more automatic than manual, the team found that tiny microstructures on the feathers form a sort of one-way Velcro-type material that keeps them forming a continuous surface rather than a bunch of disconnected ones. These and other findings were published in Science, while the robot itself, devised by “the third student,” Eric Chang, is described in Science Robotics.
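The "simple linear response" finding means feather positions can be modeled without per-feather actuators. A sketch of that idea, with made-up coefficients standing in for the ligament response the team measured:

```python
import numpy as np

N_FEATHERS = 40   # 20 primary + 20 secondary flight feathers per wing
rng = np.random.default_rng(4)

# Stand-in gain matrix: each feather's angle is a fixed linear function
# of just two actuated joint angles (the real map was measured on pigeons).
A = rng.uniform(0.2, 1.0, size=(N_FEATHERS, 2))

def feather_angles(wrist_deg, finger_deg):
    """All 40 feather angles from only 2 controlled degrees of freedom."""
    return A @ np.array([wrist_deg, finger_deg])

print(feather_angles(10.0, 5.0)[:5])   # first five feather angles, degrees
```

Two inputs driving forty outputs through one fixed map is exactly what "underactuated" means here.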
Using 40 actual pigeon feathers and a super-light frame, Chang and the team made a simple flying machine that doesn’t derive lift from its feathers — it has a propeller on the front — but uses them to steer and maneuver using the same type of flexion and morphing as the birds themselves do when gliding.
Studying the biology of the wing itself, then observing and adjusting the PigeonBot systems, the team found that the bird (and bot) used its “wrist” when the wing was partly retracted, and “fingers” when extended, to control flight. But it’s done in a highly elegant fashion that minimizes the thought and the mechanisms required.
PigeonBot’s wing. You can see that the feathers are joined by elastic connections so moving one moves others.
It’s the kind of thing that could inform improved wing design for aircraft, which currently rely in many ways on principles established more than a century ago. Passenger jets, of course, don’t need to dive or roll on short notice, but drones and other small craft might find the ability extremely useful.
“The underactuated morphing wing principles presented here may inspire more economical and simpler morphing wing designs for aircraft and robots with more degrees of freedom than previously considered,” write the researchers in the Science Robotics paper.
Up next for the team is observation of more bird species to see if these techniques are shared with others. Lentink is working on a tail to match the wings, and separately on a new bio-inspired robot inspired by falcons, which could potentially have legs and claws as well. “I have many ideas,” he admitted.
On the left, the anatomical blueprint for a computer-designed organism, discovered on a UVM supercomputer. On the right, the living organism, built entirely from frog skin (green) and heart muscle (red) cells. The background displays traces carved by a swarm of these new-to-nature organisms as they move through a field of particulate matter.
(Credit: Sam Kriegman, UVM)
A book is made of wood. But it is not a tree. The dead cells have been repurposed to serve another need.
Now a team of scientists has repurposed living cells—scraped from frog embryos—and assembled them into entirely new life-forms. These millimeter-wide "xenobots" can move toward a target, perhaps pick up a payload (like a medicine that needs to be carried to a specific place inside a patient)—and heal themselves after being cut.
"These are novel living machines," says Joshua Bongard, a computer scientist and robotics expert at the University of Vermont who co-led the new research. "They're neither a traditional robot nor a known species of animal. It's a new class of artifact: a living, programmable organism."
The new creatures were designed on a supercomputer at UVM—and then assembled and tested by biologists at Tufts University. "We can imagine many useful applications of these living robots that other machines can't do," says co-leader Michael Levin who directs the Center for Regenerative and Developmental Biology at Tufts, "like searching out nasty compounds or radioactive contamination, gathering microplastic in the oceans, traveling in arteries to scrape out plaque."
The results of the new research were published January 13 in the Proceedings of the National Academy of Sciences.
Bespoke living systems
People have been manipulating organisms for human benefit since at least the dawn of agriculture, genetic editing is becoming widespread, and a few artificial organisms have been manually assembled in the past few years—copying the body forms of known animals.
But this research, for the first time ever, "designs completely biological machines from the ground up," the team writes in their new study.
With months of processing time on the Deep Green supercomputer cluster at UVM's Vermont Advanced Computing Core, the team—including lead author and doctoral student Sam Kriegman—used an evolutionary algorithm to create thousands of candidate designs for the new life-forms. Attempting to achieve a task assigned by the scientists—like locomotion in one direction—the computer would, over and over, reassemble a few hundred simulated cells into myriad forms and body shapes. As the programs ran—driven by basic rules about the biophysics of what single frog skin and cardiac cells can do—the more successful simulated organisms were kept and refined, while failed designs were tossed out. After a hundred independent runs of the algorithm, the most promising designs were selected for testing.
Then the team at Tufts, led by Levin and with key work by microsurgeon Douglas Blackiston, transferred the in silico designs into life. First they gathered stem cells, harvested from the embryos of African frogs, the species Xenopus laevis. (Hence the name "xenobots.") These were separated into single cells and left to incubate. Then, using tiny forceps and an even tinier electrode, the cells were cut and joined under a microscope into a close approximation of the designs specified by the computer.
Assembled into body forms never seen in nature, the cells began to work together. The skin cells formed a more passive architecture, while the once-random contractions of heart muscle cells were put to work creating ordered forward motion as guided by the computer's design, and aided by spontaneous self-organizing patterns—allowing the robots to move on their own.
These reconfigurable organisms were shown to be able to move in a coherent fashion—and explore their watery environment for days or weeks, powered by embryonic energy stores. Turned over, however, they failed, like beetles flipped on their backs.
Later tests showed that groups of xenobots would move around in circles, pushing pellets into a central location—spontaneously and collectively. Others were built with a hole through the center to reduce drag. In simulated versions of these, the scientists were able to repurpose this hole as a pouch to successfully carry an object. "It's a step toward using computer-designed organisms for intelligent drug delivery," says Bongard, a professor in UVM's Department of Computer Science and Complex Systems Center.
A manufactured quadruped organism, 650-750 microns in diameter—a bit smaller than a pinhead.
(Credit: Douglas Blackiston, Tufts University.)
Living technologies
Many technologies are made of steel, concrete or plastic. That can make them strong or flexible. But they also can create ecological and human health problems, like the growing scourge of plastic pollution in the oceans and the toxicity of many synthetic materials and electronics. "The downside of living tissue is that it's weak and it degrades," says Bongard. "That's why we use steel. But organisms have 4.5 billion years of practice at regenerating themselves and going on for decades." And when they stop working—death—they usually fall apart harmlessly. "These xenobots are fully biodegradable," says Bongard, "when they're done with their job after seven days, they're just dead skin cells."
Your laptop is a powerful technology. But try cutting it in half. Doesn't work so well. In the new experiments, the scientists cut the xenobots and watched what happened. "We sliced the robot almost in half and it stitches itself back up and keeps going," says Bongard. "And this is something you can't do with typical machines."
University of Vermont professor Josh Bongard.
(Photo: Joshua Brown)
Cracking the Code
Both Levin and Bongard say the potential of what they've been learning about how cells communicate and connect extends deep into both computational science and our understanding of life. "The big question in biology is to understand the algorithms that determine form and function," says Levin. "The genome encodes proteins, but transformative applications await our discovery of how that hardware enables cells to cooperate toward making functional anatomies under very different conditions."
To make an organism develop and function, there is a lot of information sharing and cooperation—organic computation—going on in and between cells all the time, not just within neurons. These emergent and geometric properties are shaped by bioelectric, biochemical, and biomechanical processes, "that run on DNA-specified hardware," Levin says, "and these processes are reconfigurable, enabling novel living forms."
The scientists see the work presented in their new PNAS study, "A scalable pipeline for designing reconfigurable organisms," as one step in applying insights about this bioelectric code to both biology and computer science. "What actually determines the anatomy towards which cells cooperate?" Levin asks. "You look at the cells we've been building our xenobots with, and, genomically, they're frogs. It's 100% frog DNA—but these are not frogs. Then you ask, well, what else are these cells capable of building?"
"As we've shown, these frog cells can be coaxed to make interesting living forms that are completely different from what their default anatomy would be," says Levin. He and the other scientists in the UVM and Tufts team—with support from DARPA's Lifelong Learning Machines program and the National Science Foundation—believe that building the xenobots is a small step toward cracking what he calls the "morphogenetic code," providing a deeper view of the overall way organisms are organized—and how they compute and store information based on their histories and environment.
Future Shocks
Many people worry about the implications of rapid technological change and complex biological manipulations. "That fear is not unreasonable," Levin says. "When we start to mess around with complex systems that we don't understand, we're going to get unintended consequences." A lot of complex systems, like an ant colony, begin with a simple unit—an ant—from which it would be impossible to predict the shape of their colony or how they can build bridges over water with their interlinked bodies.
"If humanity is going to survive into the future, we need to better understand how complex properties, somehow, emerge from simple rules," says Levin. Much of science is focused on "controlling the low-level rules. We also need to understand the high-level rules," he says. "If you wanted an anthill with two chimneys instead of one, how do you modify the ants? We'd have no idea."
"I think it's an absolute necessity for society going forward to get a better handle on systems where the outcome is very complex," Levin says. "A first step towards doing that is to explore: how do living systems decide what an overall behavior should be and how do we manipulate the pieces to get the behaviors we want?"
In other words, "this study is a direct contribution to getting a handle on what people are afraid of, which is unintended consequences," Levin says—whether in the rapid arrival of self-driving cars, changing gene drives to wipe out whole lineages of viruses, or the many other complex and autonomous systems that will increasingly shape the human experience.
"There's all of this innate creativity in life," says UVM's Josh Bongard. "We want to understand that more deeply—and how we can direct and push it toward new forms."
When we get to a point where literally just about everything can be done more cheaply and more efficiently by robots, the elite won’t have any use for the rest of us at all. For most of human history, the wealthy have needed the poor to do the work that is necessary to run their businesses and make them even wealthier. In this day and age we like to call ourselves “employees”, but in reality we are their servants. Some of us may be more well paid than others, but the vast majority of us are expending our best years serving their enterprises so that we can pay the bills. Unfortunately, that paradigm is rapidly changing, and many of the jobs that humans are doing today will be done by robots in the not too distant future. In fact, millions of human workers have already been displaced, and as you will see below experts are warning that the job losses are likely to greatly accelerate in the years to come.
Competition with technology is one of the reasons why wage growth has been so stagnant over the past couple of decades. The only way it makes sense for an employer to hire you is if you can do a job less expensively than some form of technology can do it.
As a result, close to two-thirds of the jobs that have been created in the United States over the past couple of decades have been low wage jobs, and the middle class is being steadily hollowed out.
But as robots continue to become cheaper and more efficient, even our lowest paying jobs will be vanishing in enormous numbers.
For example, it is being reported that executives at Walmart plan to greatly increase the size of their “robot army”…
Walmart Inc.’s robot army is growing. The world’s largest retailer will add shelf-scanning robots to 650 more U.S. stores by the end of the summer, bringing its fleet to 1,000. The six-foot-tall Bossa Nova devices, equipped with 15 cameras each, roam aisles and send alerts to store employees’ handheld devices when items are out of stock, helping to solve a vexing problem that costs retailers nearly a trillion dollars annually, according to researcher IHL Group.
The new robots, designed by San Francisco-based Bossa Nova Robotics Inc., join the ranks of Walmart’s increasingly automated workforce which also includes devices to scrub floors, unload trucks and gather online-grocery orders.
Walmart is testing out a new employee structure within its stores in an attempt to cut down the size of its store management staff.
The nation’s biggest employer is looking to see if it can have fewer midlevel store managers overseeing workers, with these managers seeing both their responsibilities and their pay increase.
So the employees that survive will get a “pay increase” to go with a huge increase in responsibility, but what about all the others that are having their jobs eliminated?
Don’t worry, because in an interview about this new initiative one Walmart executive assured us that their employees “like smaller teams”…
“Associates like smaller teams, and they like having a connection with a leader. They want something they can own and to know if they are winning or losing every day. And today that does not always happen,” Drew Holler, U.S. senior vice president of associate experience, said in an interview.
Today, Walmart is the largest employer in the United States by a wide margin.
But these coming changes will ultimately mean a lot more robot workers and a lot less human workers.
Of course countless other heartless corporations are implementing similar measures. And considering the fact that one recent survey found that 97 percent of U.S. CFOs believe that a recession is coming in 2020, we are likely to see a “thinning of the ranks” in company after company as this year rolls along.
Sadly, even if there was no economic downturn coming we would continue to lose jobs to robots. According to one study, a whopping 45 percent of our current jobs “can be automated”…
Here’s the truth: Robots are already starting to take jobs from hourly human workers, and it’s going to continue. Research from McKinsey found that 45% of current jobs can be automated. We need to stop avoiding the situation and create real solutions to help displaced workers.
In this day and age, no worker is safe.
I know someone that gave his heart and soul to a big corporation for many years, and then one day he was called into the office when he arrived for work and he was out of a job by lunch.
He hadn’t done anything wrong at all. It is just that his heartless corporate bosses had decided to eliminate his position throughout the entire company.
If you think that they actually care about you, then you are just fooling yourself.
Unfortunately, the job losses are just going to keep accelerating. In fact, it is being projected that approximately 20 million manufacturing jobs around the globe could be taken over by robots by the year 2030…
Robots could take over 20 million manufacturing jobs around the world by 2030, economists claimed Wednesday.
According to a new study from Oxford Economics, within the next 11 years there could be 14 million robots put to work in China alone.
And as wealthy executives lay off low wage workers in staggering numbers, that will make the growing gap between the rich and the poor even worse…
“As a result of robotization, tens of millions of jobs will be lost, especially in poorer local economies that rely on lower-skilled workers. This will therefore translate to an increase in income inequality,” the study’s authors said.
The good news is that the full extent of this ominous scenario is not likely to completely play out. The bad news is that this is because our society is rapidly moving toward complete and utter collapse.
I wish that there was an easy solution to this growing problem.
In a free market system, should anyone be trying to mandate that employers must hire human workers?
But if millions upon millions of men and women can’t feed their families because they don’t have jobs, that will create the sort of social nightmare that we cannot even imagine right now.
This is something that all of the 2020 presidential candidates should be talking about, because this is a crisis that is spinning out of control, and it is getting worse with each passing day.
About the Author: I am a voice crying out for change in a society that generally seems content to stay asleep. My name is Michael Snyder and I am the publisher of The Economic Collapse Blog, End Of The American Dream and The Most Important News, and the articles that I publish on those sites are republished on dozens of other prominent websites all over the globe. I have written four books that are available on Amazon.com including The Beginning Of The End, Get Prepared Now, and Living A Life That Really Matters. (#CommissionsEarned) By purchasing those books you help to support my work. I always freely and happily allow others to republish my articles on their own websites, but due to government regulations I need those that republish my articles to include this “About the Author” section with each article. In order to comply with those government regulations, I need to tell you that the controversial opinions in this article are mine alone and do not necessarily reflect the views of the websites where my work is republished. This article may contain opinions on political matters, but it is not intended to promote the candidacy of any particular political candidate. The material contained in this article is for general information purposes only, and readers should consult licensed professionals before making any legal, business, financial or health decisions. Those responding to this article by making comments are solely responsible for their viewpoints, and those viewpoints do not necessarily represent the viewpoints of Michael Snyder or the operators of the websites where my work is republished. I encourage you to follow me on social media on Facebook and Twitter, and any way that you can share these articles with others is a great help.
13-01-2020
Scientists Create "Lifelike" Motile Material That's Powered By Its Own Metabolism
Cornell professor of biological and environmental engineering Dan Luo and research associate Shogo Hamada have created a DNA material capable of metabolism, in addition to self-assembly and organization.
A group of engineers at Cornell University have constructed a new type of biomaterial using artificial DNA as its base. Their approach has given the material a number of lifelike properties, such as a metabolism and the ability to self-assemble and self-organize.
The artificial metabolism is particularly interesting. The material was programmed to move, and this movement was powered by its metabolism. As reported in Science Robotics, the material can autonomously grow and decay. It was created using DASH (DNA-based Assembly and Synthesis of Hierarchical materials).
“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism," senior author Dan Luo, professor of biological and environmental engineering, said in a statement. "We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before.”
The material is equipped with DNA instructions that give it its metabolism and allow it to regenerate autonomously. The material began its life as nanoscale building blocks in a reaction solution. It then arranged itself into polymer strands, which in turn formed shapes measuring just a few millimeters in length. The reaction solution was then fed through a microfluidic device, which provided a liquid flow of energy and the right building blocks for biosynthesis (the production of complex molecules in living things) to occur.
At that point, the researchers witnessed the material growing at the end facing the flow of energy and degrading at the other. This growth and degradation allowed the material to move forward against the flow in a way reminiscent of how slime molds move. The team was then able to make different sets of the material compete against each other in a race. The winners and losers were decided by the randomness of the system rather than by intrinsic advantages of particular shapes.
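The growth-at-the-front, decay-at-the-back locomotion can be captured in a toy one-dimensional model (an illustration of the described behavior, not Cornell's implementation):

```python
import numpy as np

track = np.zeros(60, dtype=bool)
track[10:20] = True                      # initial strip of DASH material

for _ in range(25):
    occupied = np.flatnonzero(track)
    front, back = occupied[-1], occupied[0]
    if front + 1 < track.size:
        track[front + 1] = True          # biosynthesis at the upstream end
    track[back] = False                  # degradation at the trailing end

# The strip keeps its length but has crept forward against the "flow".
print("".join("#" if cell else "." for cell in track))
```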
“The designs are still primitive, but they showed a new route to create dynamic machines from biomolecules. We are at a first step of building lifelike robots by artificial metabolism,” lead author Shogo Hamada, lecturer and research associate in Cornell's Luo lab, explained. “Even from a simple design, we were able to create sophisticated behaviors like racing. Artificial metabolism could open a new frontier in robotics.”
The team is now interested in creating a material that can respond to stimuli like light and perhaps even detect danger. The use of synthetic DNA means there's a possibility that the material will self-evolve, creating better and better versions of itself. The approach could be employed to detect pathogens, create new nanomaterials, produce proteins, and maybe even act as a base for biocomputers.
03-01-2020
Soft Robotic Insect Survives Being Flattened By A Fly Swatter
Researchers at EPFL have developed an ultra-light robotic insect that uses its soft artificial muscles to move at 3 cm per second across different types of terrain. It can be folded or crushed and yet continue to move.
Credit: EPFL
Imagine swarms of robotic insects moving around us as they perform various tasks. It might sound like science fiction, but it’s actually more plausible than you might think.
Researchers at EPFL’s School of Engineering have developed a soft robotic insect, propelled at 3 cm per second by artificial muscles.
The team developed two versions of this soft robot, dubbed DEAnsect. The first, tethered using ultra-thin wires, is exceptionally robust. It can be folded, hit with a fly swatter or squashed by a shoe without impacting its ability to move. The second is an untethered model that is fully wireless and autonomous, weighing less than 1 gram and carrying its battery and all electronic components on its back. This intelligent insect is equipped with a microcontroller for a brain and photodiodes as eyes, allowing it to recognize black and white patterns, enabling DEAnsect to follow any line drawn on the ground.
DEAnsect is equipped with dielectric elastomer actuators (DEAs), a type of hair-thin artificial muscle that propels it forward through vibrations. These DEAs are the main reason why the insect is so light and quick. They also enable it to move over different types of terrain, including undulating surfaces.
An untethered model that is fully wireless and autonomous, weighing less than 1 gram and carrying its battery and all electronic components on its back.
Credit: EPFL
The artificial muscles consist of an elastomer membrane sandwiched between two soft electrodes. The electrodes are attracted to one another when a voltage is applied, compressing the membrane, which returns to its initial shape when the voltage is turned off. The insect has such muscles fitted to each of its three legs. Movement is generated by switching the voltage on and off very quickly – over 400 times per second.
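A minimal sketch of that drive signal (illustrative values, not EPFL's driver; real DEAs need carefully conditioned high-voltage electronics):

```python
import numpy as np

def dea_drive(duration_s=0.01, switch_hz=450, v_on=450.0, sample_hz=100_000):
    """Square-wave voltage for one leg's muscle: v_on compresses the
    membrane, 0 V lets it spring back, repeated over 400 times per second."""
    t = np.arange(0.0, duration_s, 1.0 / sample_hz)
    on = (t * switch_hz) % 1.0 < 0.5            # 50% duty cycle
    return t, np.where(on, v_on, 0.0)

t, v = dea_drive()
switches = int(np.count_nonzero(np.diff(v)))    # on/off transitions
print(f"{switches} switches in {t[-1] * 1000:.0f} ms window")
```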
The team used nanofabrication techniques to enable the artificial muscles to work at relatively low voltages, by reducing the thickness of the elastomer membrane and by developing soft, highly conductive electrodes only a few molecules thick. This clever design allowed the researchers to dramatically reduce the size of the power source. “DEAs generally operate at several kilovolts, which required a large power supply unit,” explains LMTS director Herbert Shea. “Our design enabled the robot, which itself weighs just 0.2 gram, to carry everything it needs on its back.” “This technique opens up new possibilities for the broad use of DEAs in robotics, for swarms of intelligent robotic insects, for inspection or remote repairs, or even for gaining a deeper understanding of insect colonies by sending a robot to live amongst them.”
A video explaining the main concepts and results of this study.
Credit: Ji et al., Sci. Robot. 4, eaaz6451 (2019)
“We’re currently working on an untethered and entirely soft version with Stanford University,” says Shea. “In the longer term, we plan to fit new sensors and emitters to the insects so they can communicate directly with one another.”
Untethered DEAnsect (soft robot) autonomously navigates a figure-8 path, then stops at the end.
Credit: Ji et al., Sci. Robot. 4, eaaz6451 (2019)
DEAnsect the robotic ant: how it works and what it can do
Contacts and sources:
Herbert Shea
Soft Transducers Laboratory (LMTS)
Ecole polytechnique fédérale de Lausanne (EPFL)
Other videos about robotics and drone systems, peter2011
Dear visitor, if you have ever witnessed something strange yourself, please report it by email to Frederick Delaere at www.ufomeldpunt.be. These researchers will handle your report in complete anonymity and with full respect for your privacy. They are critical and objective but open-minded, and they will always give you an explanation for your sighting! SO DON'T HESITATE: IF YOU WANT AN ANSWER TO YOUR QUESTIONS, CONTACT FREDERICK. THANKS IN ADVANCE...
Thanks in advance for all your visits and your comments. Have a nice day!
About me
I am Pieter, and I sometimes use the pseudonym Peter2011.
I am a man, I live in Linter (Belgium), and I am retired.
I was born on 18/10/1950, so I am now 74 years young.
My hobbies are ufology and other esoteric subjects.
On this blog you will find, among the articles, work of my own. My thanks also go to André, Ingrid, Oliver, Paul, Vincent, Georges Filer and MUFON for their contributions to the various categories...
Enjoy reading, and feel free to share your opinion of this blog.