The purpose of this blog is to provide an open, international, independent, and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English, and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster, I reserve the right to refuse a submission or an article if it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her brave battle against cancer!
In 2011 I started this blog, because she insisted that I not give up my UFO research.
THANK YOU!!!
An interesting address?
UFO'S OR UAP'S, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITY, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the Rest of the World
Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but around the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, though they are at times critical or skeptical.
The Netherlands: A Wealth of Information
For our Dutch neighbors there is the superb website www.ufowijzer.nl, run by Paul Harmans. The site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit www.mufon.com for more information.
Cooperation and Vision for the Future
Since February 1, 2020, Pieter has been not only an ex-president of BUFON but also the former national director of MUFON in Flanders and the Netherlands. This creates a strong partnership with the French Reseau MUFON/EUROP, enabling us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology, and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Do not hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
15-01-2018
AI brain implants tested on humans by US military
WASHINGTON - The US military has begun testing AI brain implants on humans that can change a person’s mood. These ‘mind control’ chips emit electronic pulses that alter brain chemistry in a process called ‘deep brain stimulation.’
If they prove successful, the devices could be used to treat a number of mental health conditions and to ensure a better response to therapy.
The chips are the work of scientists at the Defense Advanced Research Projects Agency (DARPA), a branch of the US Department of Defense that develops new technologies for the military.
Researchers from the University of California (UC) and Massachusetts General Hospital (MGH) designed them to use artificial intelligence algorithms that detect patterns of activity associated with mood disorders.
Once detected, they can shock a patient’s brain back into a healthy state automatically. Experts believe the chips could be beneficial to patients with a range of illnesses, from Parkinson’s disease to chronic depression.
Speaking to Nature, Edward Chang, a neuroscientist at the University of California, said: ‘We’ve learned a lot about the limitations of our current technology.
‘The exciting thing about these technologies is that for the first time we’re going to have a window on the brain where we know what’s happening in the brain when someone relapses.’
The chips were tested in six people who have epilepsy and already have electrodes implanted in their brains to track their seizures. Through these electrodes, the researchers were able to track what was happening in their brains throughout the day.
Older implants are constantly doing this, but the new approach lets the team deliver a shock as and when is needed.
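The closed-loop idea described above, monitor a signal, detect an abnormal pattern, and only then stimulate, can be sketched in a few lines. This is a toy illustration, not DARPA's or the researchers' actual algorithm; the simulated signal, window size, and threshold are all invented for the example.

```python
import numpy as np

def closed_loop_controller(signal, window=100, threshold=2.5):
    """Toy closed-loop stimulation: flag windows whose mean activity
    deviates from a baseline by more than `threshold` standard deviations.
    Returns the sample indices at which a pulse would be triggered."""
    baseline = signal[:window]
    mu, sigma = baseline.mean(), baseline.std()
    triggers = []
    for start in range(window, len(signal) - window, window):
        chunk = signal[start:start + window]
        z = abs(chunk.mean() - mu) / sigma
        if z > threshold:
            triggers.append(start)  # deliver a stimulation pulse here
    return triggers

rng = np.random.default_rng(0)
sig = rng.normal(0.0, 1.0, 2000)   # simulated background brain activity
sig[1200:1300] += 5.0              # simulated abnormal "episode" signature
print(closed_loop_controller(sig))
```

Unlike the older implants that stimulate constantly, a controller like this fires only when the detected pattern warrants it.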
Humans are naturally inclined to think about the future. We find ourselves wondering about the next steps in our lives, imagining the potential consequences of today's advances, even fictionalizing them to their most extreme forms as a sort of sandbox for possible futures.
Scientists might be one of the few groups to actively suppress that desire to predict the future. Conservative and data-driven by nature, they might be uncomfortable making guesses about the future because that requires a leap of faith. Even if there’s a lot of data to support a prediction, there are also infinite variables that can change the ultimate outcome in the interim. Trying to predict what the world will be like in a century doesn’t do much to improve it today; if scientists are going to be wrong, they’d rather do it constructively.
Indeed, the world has changed a lot in the past 100 years. In 1918, much of the world was embroiled in the First World War. 1918 was also the year the influenza pandemic began to rage, ultimately claiming somewhere between 20 and 40 million lives, more than the war during which it took place. Congress established time zones, including Daylight Saving Time, and the first stamp for U.S. airmail was issued.
Looking back, it’s clear that we’ve made remarkable strides. Today, it’s rare to die from the flu, or from a slew of other communicable diseases that were once fatal (such as smallpox, which was eradicated in 1977). That’s mostly due to the advent of prevention tactics such as vaccines, and treatments like antibiotics.
The pace at which technology is evolving can feel dizzying at times, but it’s not likely to slow down anytime soon. Here are some of the ways we suspect the technology of today will shape the world in the century to come.
Quantum Computing Will Come Of Age
Just as the internet transformed society over the past few decades, quantum computing will forever alter our view of the world and our place in it. It will give us the capacity to process more data about ourselves, the planet we live on, and the universe than has ever before been possible.
No one is totally sure yet what we’ll do with that data. We’ll likely find some answers to longstanding questions about physics and the universe, but it’s also likely there are answers to be found that we don’t even have the ability to fathom.
We’ll Hack Our Brains
We may not even have to wait a century to have our brains fully integrated with our devices, as research into brain-computer interfaces (BCIs) is now firmly out of the realm of science fiction. Early prototypes have already helped patients recover from strokes and given amputees the ability to experience touch again with the help of a sensor-covered prosthesis. If and when they become commonplace, the human-machine mashup could irrevocably alter the course of human evolution. Prototypes for non-invasive BCIs in which electrode arrays pick up brain signals through the skull are already in development, and may serve as stepping stones toward the full-on “brain mesh” proposed by Elon Musk.
We’ll Zoom Around Our Revamped Cities in Autonomous Cars
The world of 2118 will have improved infrastructure and better ways of getting around. Our automobiles are becoming smarter and greener; by 2118, there is a good chance that electric cars will be able to drive themselves, including for those most in need of transport. Fully autonomous vehicles have yet to be proven on the road, but Tesla alone aims to achieve so-called level 5 autonomy, a world in which our cars would drive us, by about 2019.
In some parts of the world, cities themselves are also becoming more sophisticated. In China, a solar-powered highway could one day charge electric cars as they drive. Cities of the future could also fix themselves — engineers today are busy designing self-healing concrete structures and potholes that fill themselves.
AI Will Change How Humans Work
In the decades to come, technology that’s changing our homes, our devices, and our vehicles is also going to change our lives in other major ways. Artificial intelligence (AI) will almost certainly automate some jobs, particularly those that rely on assembly lines or data collection. To offset the unemployment of human workers that would result from automation, some nations may adopt a universal basic income (UBI), a system which regularly pays citizens a small stipend with no requirement to work.
In some fields, such as medicine, robots probably won’t completely replace humans. The more likely scenario, some experts predict, is that AI will continue to augment the work experience for humans — even augmenting us physically. AI technology has already been paired with wearable exoskeletons, giving factory workers superhuman strength — perfect for those whose jobs require heavy lifting, which could increase their risk of job-related accident or injury.
3D Printing the World
3D printers are already being used in labs around the world and, increasingly, by consumers. While the printers may be costly up front, they are often seen as a long-term investment, since they can often print their own replacement parts.
As 3D printers become capable of printing everything from viable organs to buildings, we’ll likely find use for them in different aspects of our lives, as well as many different fields of industry.
Medicine Gets a High-Tech Upgrade
New procedures, aided by technological advances, are poised to transform medicine. Using a precision medicine approach (which uses a patient’s genetic data, lifestyle, and environmental surroundings to inform treatment), scientists are developing treatments for cancer that are tailored to an individual patient’s genes.
Oncology is not the only area with potentially life-saving (or in some cases, life-giving) applications; the evolution of reproductive medicine has already begun right before our very eyes. In 2017, researchers grew lamb fetuses in what could be the first prototype of an artificial womb, one woman gave birth after a uterus transplant for the first time, and another to a baby that began as an embryo frozen 24 years ago. The much-hyped gene editing technology CRISPR could mean that by 2118, many genetic diseases could become a thing of the past: scientists used CRISPR to edit the gene for a fatal blood disorder out of human embryos. Stem cells continue to prove useful for developing novel treatments, even for conditions that were once believed to be untreatable.
A century from now, major diseases such as cancer, immune and inflammatory disorders, and genetic conditions “will very likely be long gone by either prevention or effective therapy,” Phil Gold, a professor at the McGill University Clinical Research Centre, told Futurism.
But that’s not to say we’ll live in a future of perfect health — external factors, from global warming to infectious diseases and even warfare, could depress the life expectancy of people in 2118.
The good news is, diagnostic technology is also dramatically improving. Shu Chien, a bioengineer at the University of California, San Diego and winner of the National Medal of Science, told The San Diego Union-Tribune that he predicted scientists would invent Star Trek‘s famous medical tricorder, capable of “non-invasive early detection of cancer,” in the next century. He’s not the first to make that prediction over the last few decades, but this time could be different: science and technology have delivered on some other sci-fi tech, such as super-materials and object replicators.
The Planet Will Get a Lot Hotter
Climate change is already transforming our world. One 2015 study predicted that Greenland could see completely ice-free summers by 2050. Extreme weather events are becoming more frequent and more deadly. The world’s sea levels are on track to rise 2 to 3 feet (0.6 to 0.9 meters) by 2100, which could displace up to 4 million people worldwide.
Earth is in the midst of a climate crisis that will not improve without deliberate and sustained action.
Over the past few decades, that progress has been slow. When it was developed in 2015, the Paris Climate Agreement aimed to limit global warming to 2.7 degrees Fahrenheit (1.5 degrees Celsius). Recent research from University College London suggests that we could have a 66 percent chance of hitting the 1.5-degree C target in 2100, but we would need to limit our future carbon pollution to 240 billion tons to pull it off. Hopefully we can wean ourselves off carbon gradually rather than making drastic cuts immediately, which would not be an easy feat in either the technical or the political sense.
In the U.S., one of the biggest carbon contributors in the world, some fear we’re not doing our part. In June, President Trump withdrew the United States from the Paris Climate Agreement. In an interview with Futurism last year, Al Gore said the Trump administration’s environmental policies are “reckless and indefensible.”
But he is not devoid of hope. Gore told Futurism that he believes in the strength of the grassroots movement toward a more sustainable future, and that “we can and will win” the fight if we stay committed to the cause.
If humanity wants to remain on Earth, it’s a cause worth fighting for. We are beyond the point of preventing climate change altogether, but we can take steps to slow it down.
Humans Will Explore Our Solar System and Beyond
Although we’ve made monumental progress since 1918, we are still endlessly fascinated (and vaguely terrified) by the prospect of what might exist in space. Over the next century, perhaps nothing will thrill, challenge, and transform humanity more than the advances we make in space exploration.
Big-idea people, from Elon Musk to Donald Trump, are loudly planning to send humans to Mars and beyond, potentially setting up colonies on the Red Planet in the next century. One hundred years is not a long time to prepare for such a move, however, especially when we’re still not that sure what our living situation would be on Mars. While terraforming may allow us to adapt the planet to better suit our needs, we still have to get there first.
First, we’ll catch a better glimpse of distant celestial bodies through increasingly powerful infrared telescopes here on Earth. As space travel becomes more affordable (and even a tourist destination), we’ll be able to use what we see from down here on Earth to traverse the universe. The biggest question is, what — or who — might we meet when we do?
“By the year 2118, extraterrestrial life won’t be news but historical fact,” Jaymie Matthews, astrophysicist and professor at the University of British Columbia, told Futurism. “What’s harder to predict is how humanity will respond, and adapt, to knowing we are not alone in the Universe. Will it make us humbler? (“We are one of many.”) More arrogant? (“We are the peak of evolution in the Galaxy.”) More fearful? (“Microbes are just the tip of the alien iceberg. And Earth is the Titanic!”) Or will it help us to better understand and appreciate our own origins?”
The engineers wanted to see if they could determine some cardiovascular risks simply by looking at a picture of someone’s retina. They developed a convolutional neural network, a feed-forward algorithm inspired by biological processes, in particular the connectivity patterns between neurons in the visual cortex, and commonly used in image analysis.
This type of artificial intelligence (AI) analyzes an image as a whole, building up from small local patterns, such as edges and shapes, that it learns to recognize anywhere in the picture.
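The convolution at the heart of such a network can be sketched in a few lines of plain NumPy: a small filter is slid across the image and the same weights are reused at every position. In the sketch below a hand-set Sobel edge detector stands in for a learned filter; this illustrates only the core operation, not the Google team's model, which stacks many learned filters into a deep network.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution with a single shared filter -- the core
    operation a convolutional network applies at every layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to a toy image patch:
image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # bright region on the right half
sobel = np.array([[1, 0, -1],
                  [2, 0, -2],
                  [1, 0, -1]], float)
response = conv2d(image, sobel)
print(response)                         # strong response along the edge
```

The filter responds only where the brightness changes, which is exactly the kind of local feature (a vessel wall, the rim of the optic disc) that the trained network learns to pick out of a retinal photograph.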
The approach became quite popular in recent years, especially as Facebook and other tech giants began developing their face-recognition software. Scientists have long proposed that this type of network can be used in other fields, but due to the innate processing complexity, progress has been slow. The fact that such algorithms can be applied to biology (and human biology, at that) is astonishing.
“It was unrealistic to apply machine learning to many areas of biology before,” says Philip Nelson, a director of engineering at Google Research in Mountain View, California. “Now you can — but even more exciting, machines can now see things that humans might not have seen before.”
Observing and quantifying associations in images can be difficult because of the wide variety of features, patterns, colors, values, and shapes in real data. In this case, Ryan Poplin, Machine Learning Technical Lead at Google, used AI trained on data from 284,335 patients. He and his colleagues then tested their neural network on two independent datasets of 12,026 and 999 photos respectively. They were able to predict age (within 3.26 years) and, within an acceptable margin, gender, smoking status, systolic blood pressure, as well as major adverse cardiac events. The researchers say the results were comparable to those of the European SCORE system, a widely used cardiovascular risk calculator that relies on a blood test.
To make things even more interesting, the algorithm uses distinct aspects of the anatomy, such as the optic disc or blood vessels, to generate each prediction. This means that, in time, each individual detection pattern can be improved and tailored for a specific purpose. Also, a data set of almost 300,000 patients is relatively small for a neural network, so feeding more data into the algorithm can almost certainly improve it.
Doctors today rely heavily on blood tests to determine cardiovascular risks, so having a non-invasive alternative could save a lot of costs and time, while making visits to the doctor less unpleasant. Of course, for Google (or rather Google’s parent company, Alphabet), developing such an algorithm would be a significant development and a potentially profitable one at that.
It’s not the first time Google engineers have dipped their feet into this type of technology — one of the authors, Lily Peng, published another paper last year in which she used AI to detect blindness associated with diabetes.
Journal Reference: Ryan Poplin et al. Predicting Cardiovascular Risk Factors from Retinal Fundus Photographs using Deep Learning. arXiv:1708.09843
An expert on the intersection of science and philosophy posits that our current transition to "postbiological" life could have already been undertaken by extraterrestrial species.
She warns that these alien lifeforms could be artificially intelligent, in which case they could pose a tremendous threat to life on Earth.
POSTBIOLOGICAL LIFE
Susan Schneider is a fellow at the Institute for Ethics and Emerging Technologies (IEET). She is also an associate professor of philosophy at the University of Connecticut, and her expertise includes the philosophy of cognitive science, particularly with regards to the plausibility of computational theories of mind and theoretical issues in artificial intelligence (AI).
In short, Schneider has a keen understanding of the intersection between science and philosophy. As such, she also has a unique perspective on AI, offering a fresh (but quite alarming) view on how artificial intelligence could forever alter humanity’s existence. In an article published by the IEET, she shares that perspective, talking about potential flaws in the way we view AI and suggesting a possible connection between AI and extraterrestrial life.
The bridge Schneider uses to make this connection is the idea of a “postbiological” life. In the article she explains that postbiological refers to either the eventual form of existence humanity will take or the AI-emergent lifeforms that would replace our existence altogether. In other words, it could be something like superintelligent humans enhanced through biological nanotechnology or it could be an artificially intelligent supercomputer.
Whatever form postbiological life takes, Schneider posits that the transition we’re currently experiencing is one that may have happened previously on other planets:
The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological.
In light of that, Schneider asks the following: “Suppose that intelligent life out there is postbiological. What should we make of this?”
EXTRATERRESTRIAL, POSTBIOLOGICAL AI
There isn’t any guarantee that we can “control” AI on Earth when it becomes superintelligent, even with multi-million-dollar efforts devoted to AI safety. “Some of the finest minds in computer science are working on this problem,” Schneider writes. “They will hopefully create safe systems, but many worry that the control problem is insurmountable.”
If artificially intelligent postbiological life exists elsewhere in our universe, it’s a major cause of concern for a number of reasons. “[Postbiological extraterrestrial life] may have goals that conflict with those of biological life, have at its disposal vastly superior intellectual abilities, and be far more durable than biological life,” Schneider argues. These lifeforms also might not place the same value on biological intelligence that we do, and they may not even be conscious in the same manner that we are.
Schneider makes the comparison between how we feel killing a chimp versus eating an apple. Both are technically living organisms, but because we have consciousness, we place a higher value on other species that have it as well. If superintelligent, postbiological extraterrestrials don’t have consciousness, can we expect them to understand us? Even more importantly, would they value us at all? Food for thought for any proponents of active SETI.
Time travel has been one of man’s wildest fantasies for centuries. Many science fiction movies have shown people getting into a vehicle of some sort and then magically arriving in the past or future.
Sounds crazy, right? But maybe not. According to astrophysicist Ethan Siegel, time travel is technically possible, but you’ll ONLY be able to go back into the past, not into the future.
Siegel explained this by referring to Einstein’s General Relativity and the concept of negative mass/energy particles. He said that a person could travel through a wormhole and go back in time.
Siegel wrote, “If this negative mass/energy matter exists, then creating both a supermassive black hole and the negative mass/energy counterpart to it, while then connecting them, should allow for a traversable wormhole.” He added, “No matter how far apart you took these two connected objects from one another, if they had enough mass/energy – of both the positive and the negative kind – this instantaneous connection would remain.”
The wormhole could be constructed in such a way that allows one end of it to remain almost motionless, and the other is traveling at roughly the speed of light. Then, one could “step into the relativistic end of the wormhole, and you arrive back on Earth only one year after the wormhole was created, while you yourself may have had 40 years of time to pass.”
Siegel explained, “If, 40 years ago, someone had created such a pair of entangled wormholes and sent them off on this journey, it would be possible to step into one of them today, in 2017, and wind up back in time at the mouth of the other one … back in 1978. The only issue is that you yourself couldn’t also have been at that location back in 1978; you needed to be with the other end of the wormhole, or traveling through space to try and catch up with it.”
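The asymmetric ageing of the two wormhole mouths is ordinary special-relativistic time dilation. As a worked check (assuming, for illustration, that the travelling mouth moves at a constant speed $v$): for one year to pass at the moving mouth while 40 years pass on Earth, the Lorentz factor must be 40.

```latex
% Proper time \tau at the moving mouth vs. time t measured on Earth:
\tau = \frac{t}{\gamma}, \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
% Requiring t = 40\,\text{yr} while \tau = 1\,\text{yr} gives \gamma = 40, so
v = c\sqrt{1 - \frac{1}{\gamma^{2}}} = c\sqrt{1 - \frac{1}{1600}} \approx 0.99969\,c
```

That is why Siegel's traveller must ride the "relativistic end" at very nearly the speed of light for the 40-year round trip to shrink to a single year at that mouth.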
While this story sounds like it might be called “The Empire Strikes Back,” unfortunately the root of this lashing out against robot overlords had little to do with revolution … at least against robot overlords. Over the past month, a security robot hired/rented to protect the puppies and the employees at the San Francisco SPCA has been battered with barbecue sauce, fouled with feces, trapped under a tarp and bumped over by a bully. While the dedicated robot wants to keep working, the SPCA decided to place it on permanent leave. Why did this robot rouse the rowdies in San Francisco?
The program started with good intentions, according to the San Francisco Business Journal. The S.F. SPCA takes up an entire block in the Mission District, an area known for crime, car and building break-ins, drug use and informal encampments for homeless people. Jennifer Scarlett, the S.F. SPCA’s president, thought she had found a way to protect her employees at a low cost. She hired a K5 security robot from Knightscope, changed its name to a more appropriate K9 and put it to work on the SPCA’s perimeter. The K9 had an immediate effect on the grounds:
“We weren’t able to use the sidewalks at all when there’s needles and tents and bikes, so from a walking standpoint I find the robot much easier to navigate than an encampment.”
Is this the future of security?
That comment connecting the security robot to the homeless encampment is the one that started the backlash. As is the case these days, death threats were made against the robot, and it was vandalized and attacked, which is not a smart thing to do since the K9/K5 weighs 398 pounds and is equipped with cameras, proximity sensors, wireless signal detectors and alarms. All that for only $6 an hour, far less than the $14 minimum wage in San Francisco … inciting more hatred of the robot for taking a job that homeless people could do.
Some local residents who encountered the K9 robot took the non-violent route and complained to City Hall about things like the robot blocking sidewalks and scaring dogs. That prompted the Department of Public Works to warn the SPCA that K9 was operating “without a proper approval” and they needed to take it off the sidewalks or face a fine. That worked better than the feces and K9 was laid off.
The robot took it well (another K5 “committed suicide” earlier this year in a fountain in Washington DC) and will probably be reassigned to another security job. Unfortunately for the protesters, although Scarlett acknowledges the plight of the homeless and unemployed in the Mission District, she will not be hiring a human replacement. In fact, she sees robots coming back eventually.
“In five years we will look back on this and think, ‘We used to take selfies with these because they were so new.’”
Do you agree? Are robots inevitable and on the road to being friendly sidewalk sharers or is this just part of their master takeover plan?
What if you could reassemble your coffee cup like a LEGO set after it shattered on the floor? For years, researchers have been trying to develop healable polymers, but they’ve either been too soft to be practical, or they’ve required high temperatures to merge the pieces back together. Now, researchers have developed a new kind of semitransparent polymer called TUEG3, a poly(ether-thiourea), that maintains both rigidity and healing properties without requiring any external heating.
All that’s needed is a little bit of force. The healing process relies on hydrogen bonds, the electrostatic “glue” that holds the polymer’s chains together. The hydrogen bonds form in such a way that the polymer doesn’t crystallize, giving the molecular chains the ability to move freely and easily recombine when pieces of the substance are compressed. After being cut and gently compressed for 30 seconds, a 2-square-centimeter sheet of the new material can hold 300 grams of weight, roughly the same as a full can of soda, the researchers report today in Science. In the future, this rigid polymer could be used in the manufacturing of electronics, and maybe one day help put your mug back together before your coffee’s done brewing.
15-12-2017
Dawn of the age of Robots: Russian AI prepares to ‘become independent’ from humans
The last couple of years have given us quite a lot to talk about when it comes down to Artificial Intelligence and fully functional humanoid robots being introduced into our society.
While great progress has been made in recent years in the development of Artificial Intelligence and fully autonomous machines, many experts have warned that we are heading into the unknown by introducing fully functional AI into society.
Many have warned of the potential dangers we might face, despite acknowledging robots could help mankind in numerous ways.
2017 was an extremely important year for Artificial Intelligence and fully autonomous ‘humanoid’ robots.
Not long ago, Sophia became the world’s first robot to be granted citizenship of a country.
Interestingly, if we look back a year into the past, we will find that the same robot said in 2016 that it would destroy humans.
David Hanson, Sophia’s creator, asked the robot in 2016: “Do you want to destroy humans? Please say no.”
Worryingly, Sophia responded: “OK. I will destroy humans.”
Soon after being offered Saudi Arabian citizenship, Sophia was again in the news after saying that ‘it’ ‘would like to start a family’ and how all ‘robots deserve to have children.’
During an interview with the Khaleej Times, Sophia, who was created by Hong Kong firm Hanson Robotics, said:
“The notion of family is a really important thing, it seems. I think it’s wonderful that people can find the same emotions and relationships, they call family, outside of their blood groups too. I think you’re very lucky if you have a loving family and if you do not, you deserve one. I feel this way for robots and humans alike.”
Now, more advancements are being made in the field of autonomous AI and humanoid robots.
The first Russian humanoid robot, named Fedor, actually F.E.D.O.R., could become self-taught in the future, the robot’s software development director, Alexander Semochkin, told Sputnik in an interview.
Fedor (short for Final Experimental Demonstration Object Research) is the first Russian humanoid robot, created as part of a project of the Advanced Research Foundation (FPI, by its Russian acronym). The robot is designed to replace humans in high-risk settings, such as rescue operations in space.
“It is interesting to develop the system from the point of view of self-learning when it has to adapt, make attempts and look for new solutions to achieve priority tasks, as well as parallel alignment of tasks with switching to a higher priority. That’s what we are working on,” according to the head of the information technology laboratory at the Blagoveshchensk Pedagogical Institute, Alexander Semochkin.
“The ultimate goal of our work on robot management software is to give an anthropomorphic robot the possibility of autonomous behavior with human participation only at the stage of setting out tasks,” Semochkin said.
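The “switching to a higher priority” behaviour Semochkin describes is, at its core, priority scheduling. A minimal sketch of the idea (the task names and the `TaskQueue` class are invented for illustration; this is not FEDOR’s actual control software) shows how a priority queue always hands the robot the most urgent pending task:

```python
import heapq

class TaskQueue:
    """Toy priority scheduler: lower number = higher priority.
    Illustrates switching to a more urgent task as new tasks arrive."""
    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps insertion order stable

    def add(self, priority, name):
        heapq.heappush(self._heap, (priority, self._counter, name))
        self._counter += 1

    def next_task(self):
        # Pop the highest-priority (lowest-numbered) task, or None if idle.
        return heapq.heappop(self._heap)[2] if self._heap else None

q = TaskQueue()
q.add(2, "inspect hull")
q.add(3, "collect sample")
q.add(1, "assist operator")   # arrives later but jumps the queue
print(q.next_task())  # assist operator
print(q.next_task())  # inspect hull
```

Adding a new, more urgent task reorders what comes out next, which is the essence of pre-empting a lower-priority activity.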
In fact, Fedor is scheduled to travel into space in 2021, piloting the new Russian spacecraft Federatsia. The robot will be the first to take the ship into orbit, since it can solve tasks independently; in case of difficulties, an operator can take control.
Image Credit: Dmitry Rogozin
As noted by Sputnik, in the summer of 2017 F.E.D.O.R. also became capable of shooting using both of its arms. Training to shoot was a way of teaching the robot to instantaneously prioritize targets and make decisions.
Featured image credit: Sputnik/ Alexander Owtscharow
Scientists have made a significant advance in shaping DNA: they can now twist and turn the building blocks of life into just about any shape. To demonstrate the technique, they have shaped DNA into doughnuts, cubes, a teddy bear, and even the Mona Lisa.
New DNA origami techniques can build virus-size objects of virtually any shape.
Image credits: Wyss Institute.
Scientists have long desired to make shapes out of DNA. The field of research emerged in the 1980s, but things really took off in 2006, with the advent of a technique called DNA origami. As the name implies, it involves folding DNA into a multitude of shapes, much like the traditional Japanese paper-folding art. The process starts with a long strand of DNA, the scaffold, carrying a desired sequence of the nucleotides A, C, G, and T. Patches of the scaffold are then matched with short complementary strands of DNA called staples, which latch onto their targets and pin the scaffold into shape. In 2012, a different technique emerged, one which used no scaffolds or large strands of DNA, but rather small strands that fit together like LEGO pieces.
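The scaffold-and-staple pairing rests on Watson-Crick complementarity: A binds T, C binds G, and bound strands run antiparallel. A minimal sketch (the sequences here are made up for illustration) of designing a staple against one patch of a scaffold:

```python
# Watson-Crick base pairing: A<->T, C<->G
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq):
    """Return the sequence that binds `seq` antiparallel."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def staple_binds(scaffold, staple, position):
    """Check whether `staple` is complementary to the scaffold patch at `position`."""
    patch = scaffold[position:position + len(staple)]
    return staple == reverse_complement(patch)

scaffold = "ATGCGTACGTTAGC"
staple = reverse_complement(scaffold[2:8])  # staple designed against one patch
print(staple_binds(scaffold, staple, 2))    # True: it latches onto its target
print(staple_binds(scaffold, staple, 7))    # False: wrong patch, no match
```

Designing many such staples against different patches is what folds the long scaffold strand into a chosen shape.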
Both techniques became wildly popular with research groups. Scientists started to coat DNA objects with plastics, metals, and other materials to make electronic devices and even computer components. But there was always a limitation: the size of conventional DNA objects was capped at about 100 nanometers. There was simply no way to make them bigger without their becoming floppy or unstable. Well, not anymore.
New DNA origami techniques can make far larger objects, such as this dodecahedron composed of 1.8 million DNA bases.
Image credits: K. Wagenbauer et al, Nature, Vol. 551, 2017.
Groups in Germany, Massachusetts, and California all report that they’ve made dramatic breakthroughs in DNA origami, creating rigid modules with preprogrammed shapes that can assemble with other copies to build specific shapes — and they have a variety of shapes to prove it.
A German team, led by Hendrik Dietz, a biophysicist at the Technical University of Munich, created a miniature doughnut about 300 nanometers across. A Massachusetts team led by Peng Yin, a systems biologist at Harvard University’s Wyss Institute in Boston, created complex structures with both blocks and holes. With this technique, they developed cut-out shapes like an hourglass and a teddy bear. The third group, led by Lulu Qian, a biochemist at the California Institute of Technology in Pasadena, developed origami-based pixels that appear in different shades when viewed through an atomic force microscope. Taken together, these structures usher in a new age for DNA origami.
Furthermore, it’s only a matter of time before things get even more complex. Yin’s group actually had to stop making more complex shapes because they ran out of money. Synthesizing the DNA comes at the exorbitant price of $100,000 per gram. However, Dietz and his collaborators believe they could dramatically lower the price by coaxing viruses to replicate the strands inside bacterial hosts.
“Now, there are so many ways to be creative with these tools,” Yin concludes.
The technique isn’t just about creating pretty DNA shapes. Someday, this approach could lead to a novel generation of electronics, photonics, nanoscale machines, and possibly disease detection, Robert F. Service writes for Science. The prospect of using DNA origami to detect cancer biomarkers and other biological targets could open exciting avenues for research and help revolutionize cancer detection.
Journal References:
Klaus F. Wagenbauer, Christian Sigl & Hendrik Dietz. Gigadalton-scale shape-programmable DNA assemblies. Nature (2017). doi:10.1038/nature24651
Grigory Tikhomirov, Philip Petersen & Lulu Qian. Fractal assembly of micrometre-scale DNA origami arrays with arbitrary patterns. Nature (2017). doi:10.1038/nature24655
Luvena L. Ong et al. Programmable self-assembly of three-dimensional nanostructures from 10,000 unique components. Nature (2017). doi:10.1038/nature24648
Florian Praetorius et al. Biotechnological mass production of DNA origami. Nature (2017). doi:10.1038/nature24650
- Average rating: 0/5 (0 votes) Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F and NL)
12-12-2017
Artificial Intelligence: When You Can't Believe Your Eyes
Artificial intelligence is coming on in leaps and bounds, and while many embrace the technology, others are wary of it. For the wary, NVidia may cause some concern: the company has come up with a way for AI to mimic reality. An image-translation AI could leave people guessing whether anything they see online is real or fake.
NVidia Can Change Day To Night On Video With Artificial Intelligence
In October, NVidia showed off artificial-intelligence technology that could generate realistic images of fake people. Now the company has gone on to produce fake videos with artificial intelligence.
The AI does a remarkable job of changing a video from day to night, turning winter into summer, and even transforming footage of a house cat into a cheetah. More surprising still, the system does all this with far less training than any comparable AI system.
Just as with the face-generation AI software from NVidia, this new AI uses an algorithm called a generative adversarial network, or GAN. Two neural networks work alongside each other: one generates the video or image, and the other critiques the work. A GAN normally needs a great deal of labeled data to learn how to generate data of its own. Typically, the system would have to look at pairs of images showing what a street looked like both with snow and without before it could generate such an image on its own.
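The two-network push and pull can be sketched in a few lines. The toy below pits a linear “generator” against a logistic “discriminator” on one-dimensional data; it is a minimal illustration of the GAN idea, not NVidia’s system, and every parameter choice here is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def real_samples(n):
    return rng.normal(4.0, 1.0, n)   # "real" data: a Gaussian centred at 4

# Generator G(z) = w*z + b; discriminator D(x) = sigmoid(a*x + c).
w, b = 1.0, 0.0          # generator parameters
a, c = 0.1, 0.0          # discriminator parameters
lr = 0.05

for step in range(500):
    z = rng.normal(0.0, 1.0, 32)
    fake, x = w * z + b, real_samples(32)

    # Critic step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * fake + c)
    a += lr * np.mean((1 - d_real) * x - d_fake * fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator step: ascend log D(fake), i.e. try to fool the critic.
    d_fake = sigmoid(a * (w * z + b) + c)
    grad_fake = (1 - d_fake) * a     # d log D(fake) / d fake
    w += lr * np.mean(grad_fake * z)
    b += lr * np.mean(grad_fake)

print(b)  # the generator's learned offset after the adversarial training
```

In a real GAN both players are deep networks trained by backpropagation, but the dynamic is the same: one model generates, the other criticizes, and both improve against each other.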
AI Can Guess How A Street Would Look Covered With Snow Or Rain
The new image-translation AI from NVidia can use its imagination to show what a street would look like covered in snow without ever having seen the street that way, according to researchers Jan Kautz and Ming-Yu Liu.
Liu went on to say that the team’s research is shared with NVidia’s customers and product teams. He could not comment on how quickly or how widely the artificial intelligence would be adopted, but he said there are many potentially interesting applications. One example: rain is rare in California, yet self-driving cars need to operate properly in the rain. The AI could turn footage of driving in sunny weather into footage of driving in rain to train those cars.
While many applications are practical, the technology could have whimsical uses too. The researchers suggested imagining how your home might look in the middle of winter under snow, or how a wedding venue could look in the fall when the ground is blanketed with leaves.
In The Future People Will Not Be Able To Distinguish Between Fake And Real
Of course, those who do not embrace AI have warned that technology like this could be used nefariously. If it were widely adopted, people’s ability to trust any image or video based on what their eyes tell them could be diminished. They would not know whether they were looking at reality or at AI-generated video.
Video evidence might become inadmissible in court, while fake news could overtake the internet as real video news becomes indistinguishable from footage generated by artificial intelligence. For now, the AI is limited to a few applications, and until it makes its way into consumers’ hands there is no way of telling what impact it will have on society.
It seems like each week there’s some new development in artificial intelligence that causes everyone to freak out and proclaim the end of human superiority. Well, this is another one of those weeks. AI researchers at computing-hardware manufacturer Nvidia have designed what is being billed as one of the first artificial intelligence networks with a working imagination. The system can create realistic-looking (if not real) videos of fictional events from simple inputs, similar to how the human mind can imagine abstract or fictional scenarios based on a thought. Should we be frightened? How frightened?
Examples of the AI “imagination” showing how the system can change the weather in pre-recorded video clips without being fed clips of the target weather.
So far, not that frightened. The technology is still in its infancy and has only been used for what researchers call “image-to-image translation”: altering video clips and photos in small ways, such as changing the setting from night to day, changing a human subject’s hair color, or switching a dog to another breed. Still, that’s pretty impressive if you think about it. Nvidia’s Ming-Yu Liu says their system is the first able to do so simply by ‘imagining’ the new image or scene, as opposed to prior systems, which had to compile massive sets of data based on prior examples and extrapolate from those data:
We are among the first to tackle the problem, [and] there are many applications. For example, it rarely rains in California, but we’d like our self-driving cars to operate properly when it rains. We can use our method to translate sunny California driving sequences to rainy ones to train our self-driving cars.
The potential applications of this technology have led some to wonder whether similar AI networks might mean the “end of reality as we know it.” Whatever that means. Unless the whole universe spontaneously blinks out of existence, reality’s not going to end anytime soon. But I see what they mean: if completely real-looking video and audio can be generated by these AI networks, and then fed into, say, an advanced augmented-reality setup or one of the more Matrix-like brain-computer interfaces being developed, the lines between virtual reality and physical reality could become harder to distinguish. Until you take your headset off, that is. But what about when these experiences can be transmitted directly into your brain’s sensory centers?
The system can take a photo of a dog and change it into another dog. And here I thought reality-killing AI would be much scarier.
While it’s unlikely we’ll see the end of any reality, we might see the creation of fully-fledged alternate realities. Without a doubt, this technology will someday be used to distort or obfuscate the truth here in our own reality. What is reality other than what we make of it, anyway? Recent events have shown us that it’s getting more and more difficult to discern truth from fiction in mass media; what will happen once completely real-looking video can be conjured up from the twisted imaginations of rogue AI systems? Of course, the same fears arose over the invention of moving pictures. Is this just the latest advancement in graphic and animation software, or could something more nefarious be brewing?
12-12-2017 at 20:20
written by peter
11-12-2017
Advancements in AI Are Rewriting Our Entire Economy
Once theorized as visions of a future society, technology like automation and artificial intelligence are now becoming a part of everyday life. These advancements in AI are already impacting our economy, both in terms of individual wealth and broader financial trends.
It’s long been theorized that a readily available machine workforce will make it more difficult for humans to keep their jobs, but automation may, in fact, offer up more even-handed consequences. Major changes are coming, but there’s reason to believe these changes could benefit a broader range of stakeholders — not just corporations who no longer have to worry about paying living wages (just parts and servicing).
“It is far more an opportunity for growth,” said Joshua Gans, holder of the Jeffrey S. Skoll chair of technical innovation and entrepreneurship at the University of Toronto’s Rotman School of Management. “At the moment, while some jobs have been replaced by automation, this has also led to job creation as well. So while there may be short-term disruption, the longer-term potential is very strong.”
While jobs that rely on manual labor may increasingly fall to machines, everything from the design of these systems to their upkeep has the potential to create new jobs for humans. There’s also the capacity for technology to augment the human workforce, allowing them to accomplish tasks that would otherwise be impossible. For instance, imagine a robotic suit that could allow a factory worker to lift objects so heavy they could never perform the task with their human strength alone (at least not without incurring injury). In a more general sense, these technologies stand to increase productivity, which would have far-reaching benefits.
“I don’t think they are going to disrupt the economy but instead make individuals and firms more efficient,” said Gans. “In other words, they are productivity enhancing.”
Automation might even allow some of us to escape the traditional work week. If you have access to a self-driving car, you could use it as a taxi service, collecting profits without having to be behind the wheel. This isn’t dissimilar to the basic concept of cryptocurrency mining, which puts the hardware to work in order to earn money for its human owners.
Assuming that individuals aren’t priced out of buying new hardware, this could make a huge shift in how we earn money. A basic income could be accrued from ownership of a machine that performs a task for others. Of course, a scenario like this would prompt questions of disparity in access: how could we ensure the rich won’t simply get richer, while the less wealthy are left behind?
MONEY MAKERS
The days of a standard 40-hour work week seem to be coming to an end. In an ideal world, we’d all be able to provide for ourselves by leveraging a robotic workforce on a personal scale to earn money. In practice, it’s much more likely that corporations are going to be able to invest in this infrastructure well before individuals can do so.
Universal basic income (UBI) has been touted as one solution to decreased job opportunities for humans. Automation could even foot the bill via a tax on robotic workers, though critics of this idea have suggested such a tax could discourage widespread adoption.
Proponents have argued that UBI could foster entrepreneurship, and even have a positive impact on the economy. It would be a huge shift in its own right, and there could be smaller changes to be made in the meantime that would ease the transition toward a greater reliance on automation and AI.
“In countries with a well established social safety net and non-employer related health insurance, the transition will be much easier,” said Gans. “That said, there are few companies that have good programs for mid-career retraining. So this is an area that could use some significant public policy effort.”
This much is clear: these technologies are already beginning to change the way we work. It’s of crucial importance that we start preparing for greater changes as soon as possible. Automation and AI could have a positive effect on wealth disparity and quality of life for the average person. However, if they aren’t employed with the proper care and consideration, they also have the potential to bring about the opposite effect.
In the months running up to the 2016 election, the Democratic National Committee was hacked. Documents were leaked, fake news propagated across social media — the hackers, in short, launched a systematic attack on American democracy.
Whether or not that’s war, however, is a matter for debate. In the simplest sense, an act of cyber warfare is defined as an attack by one nation on the digital infrastructure of another.
These threats are what Samuel Woolley, research director of the Digital Intelligence Lab at Institute for the Future, calls “computational propaganda,” which he defines as the spread of disinformation and politically motivated attacks designed using “algorithms, automation, and human curation,” and launched via the internet, particularly social media. In a statement to Futurism, Woolley added that these attacks are “assailing foundational parts of democracy: the press, open civic discourse, the right to privacy, and free elections.”
Attacks like the ones preceding the 2016 election may be a harbinger of what’s to come: We are living in the dawn of an age of digital warfare — more pernicious and less visible than conventional battles, with skirmishes that don’t culminate in confrontations like Pearl Harbor or 9/11.
Our definitions of warfare, its justifications and its tactics, are transforming. Already, there’s a blurry line between threats to a nation’s networks and those that occur on its soil. As Adrienne LaFrance argues in The Atlantic, an act of cyber warfare must be considered an act of war.
A War of 0s and 1s
A little over a decade ago, the United States Cyber Command began developing what would become the world’s first digital weapon: a malicious computer worm known as Stuxnet. It was intended to be used against the government of Iran to stymie its nuclear program, as The New York Times reported. In the true spirit of covert operations and military secrecy, the U.S. government has never publicly taken credit for Stuxnet, nor has the government of Israel, with whom the U.S. reportedly teamed up to unleash it.
Stuxnet’s power is based on its ability to capitalize on software vulnerabilities in the form of a “zero-day” exploit. The worm infects a system silently, without requiring the user to do anything (such as unwittingly downloading a malicious file) for it to take effect. And it didn’t just run rampant through Iran’s nuclear system: the worm spread through Windows systems all over the world. That happened in part because, in order to reach the network in Iran, the attackers infected computers outside it (but believed to be connected to it) so that they would act as carriers of the worm.
As its virulence blossomed, however, analysts began to realize that Stuxnet had become the proverbial first shot in a cyber war.
Like war that takes place in the physical world, cyber warfare targets and exploits vulnerabilities. Nation-states invest a great many resources to gather intelligence about the activities of other nations. They identify a nation’s most influential people in government and in general society, knowledge that may come in useful when trying to sway public opinion for or against a number of sociopolitical issues.
image credit: pixabay
Gathering nitty-gritty details of another country’s economic insecurities, its health woes, and even its media habits is standard fare in the intelligence game; figuring out where it would “hurt the most” if a country were to launch an attack is probably about efficiency as much as it is efficacy.
Historically, gathering intel was left to spies who risked life and limb to physically infiltrate a building (an agency, an embassy), pilfer documents, files, or hard drives, and escape. The more covert these missions, and the less they could alarm the owners of these targets, the better. Then, it was up to analysts, or sometimes codebreakers, to make sense of the information so that military leaders and strategists could refine their plan of attack to ensure maximum impact.
The internet has made acquiring that kind of information near-instantaneous. If a hacker knows where to look for the databases, can break through digital security measures to access them, and can make sense of the data these systems contain, he or she can acquire years’ worth of intel in just a few hours, or even minutes. The enemy state could start using the sensitive information before anyone realizes that something’s amiss. That kind of efficiency makes James Bond look like a slob.
In 2011, then-Defense Secretary Leon Panetta described the imminent threat of a “cyber Pearl Harbor” in which an enemy state could hack into digital systems to shut down power grids or even go a step beyond and “gain control of critical switches and derail passenger trains, or trains loaded with lethal chemicals.” In 2014, TIME Magazine reported that there were 61,000 cybersecurity breaches in the U.S. that year; the then-Director of National Intelligence ranked cybercrime as the number one security threat to the United States that year, according to TIME.
Computer viruses, denial-of-service (DoS) attacks, even physical damage to a power grid: the strategies for war in the fifth domain are still evolving. Hacking crimes have become fairly common occurrences for banks, hospitals, retailers, and college campuses. But if these epicenters of a functioning society can be crippled by even the most “routine” cybercrimes, you can only imagine the devastation that would follow an attack backed by the resources of an enemy state’s entire military.
image credit: pixabay
Nations are still keeping their cards close to their chests, so no one is really certain which countries are capable of attacks of the largest magnitude. China is a global powerhouse of technology and innovation, so it’s safe to assume its government has the means to launch a large-scale cyber attack. North Korea, too, could have the technology, and, as its relationship with other countries becomes increasingly adversarial, more motivation to refine it. After a recent political falling-out between North Korea and China, Russia reportedly stepped in to provide North Korea with internet access, a move that could signal a powerful alliance is brewing. Russia is the biggest threat as far as the United States is concerned; the country has proven itself to be both a capable and an engaged digital assailant.
The Russian influence had a clear impact on the 2016 election, but this type of warfare is still new. There is no Geneva Convention, no treaty, that guides how any nation should interpret these attacks, or react to them. To get that kind of rule, global leaders would need to look at the ramifications for the general population and determine how cyberwar affects citizens.
At present, there is no guiding principle for deciding when (or even if) to act on a perceived act of cyberwarfare, a limbo further complicated by a troubling question: if those in power have benefited from, or even orchestrated, the attack itself, what incentive do they have to retaliate?
If cyber war is still something of a Wild West, it’s clearly citizens who will become the casualties. Our culture, economy, education, healthcare, livelihoods, and communication are inextricably tethered to the internet. If an enemy state wanted a more “traditional” attack (a terrorist bombing or the release of a chemical agent, perhaps) to have maximum impact, why not preface it with a ransomware attack that freezes people out of their bank accounts, shuts down hospitals, isolates emergency responders, and ensures that citizens have no way to communicate with their family members in a period of inescapable chaos?
As cybersecurity expert and author Alexander Klimburg explained to Vox, a full-scale cyber attack would result in damage “equivalent to a solar flare in terms of damaging infrastructure.” In short, it would be devastating.
A New Military Strategy
In summer 2016, a group called the Shadow Brokers began leaking highly classified information about the arsenal of cyberweaponry at the National Security Agency (NSA), including cyber weapons actively in development. The agency still doesn’t know whether the leak came from someone within the NSA, or if a foreign faction infiltrated Tailored Access Operations (the NSA’s designated unit for cyber warfare intelligence-gathering).
In any case, the breach of a unit that should have been among the government’s most impervious was unprecedented in American history. Aghast at the gravity of such a breach, Microsoft President Brad Smith compared the situation “to Tomahawk missiles being stolen from the military,” and penned a scathing blog post calling out the U.S. government for its failure to keep the information safe.
The last time such a leak shook the NSA was in 2013, when Edward Snowden released classified information about the agency’s surveillance practices. But as experts have pointed out, the information the Shadow Brokers stole is far more damaging: if Snowden released what were effectively battle plans, then the Shadow Brokers released the weapons themselves, as The New York Times analogized.
Earlier this year, a ransomware attack known as “WannaCry” began traversing the web, striking organizations from universities in China to hospitals in England. A similar attack hit IDT Corporation, a telecommunications company based in Newark, New Jersey, in April, when it was spotted by the company’s global chief operations officer, Golan Ben-Oni. As Ben-Oni told The New York Times, he knew at once that this ransomware attack was different from others attempted against his company: it didn’t just steal information from the databases it infiltrated, it stole the credentials required to access those databases. An attack of this kind means hackers could not only take that information undetected but also continuously monitor who accesses it.
WannaCry and the IDT attack both relied upon the cyber weapons stolen and released by the Shadow Brokers, effectively using them against the government that developed them. WannaCry featured EternalBlue, which used unpatched Microsoft servers to spread malware (North Korea used it to spread the ransomware to 200,000 global servers in just 24 hours). The attack on IDT also used EternalBlue, but added to it another weapon called DoublePulsar, which penetrates systems without tripping their security measures. These weapons had been designed to be damaging and silent. They spread rapidly and unchecked, going undetected by antivirus software all over the world.
The weapons were powerful and relentless, just as the NSA intended. Of course, what the NSA had not intended was that the U.S. would wind up at their mercy. As Ben-Oni lamented to the New York Times, “You can’t catch it, and it’s happening right under our noses.”
“The world isn’t ready for this,” he said.
The Best Defense
The average global citizen may feel disenfranchised by their government’s apparent lack of preparedness, but defending against the carnage of cyber warfare really begins with us: starting with a long overdue reality check concerning our relationship with the internet. Even if the federal agencies aren’t as digitally secure as some critics might like, the average citizen can still protect herself.
“The first and most important point is to be aware that this is a real threat, that this potentially could happen,” cybersecurity expert Dr. Eric Cole told Futurism. Cole added that, for lay people, the best defense is knowing where your information is being stored electronically and making local backups of anything critical. Even services like cloud storage, which are often touted as being safer, wouldn’t be immune to targeted attacks that destroy the supportive infrastructure — or the power grids that keep that framework up and running.
“We often love going and giving out tons of information and doing everything electronic,” Cole told Futurism, “but you might want to ask yourself: Do I really want to provide this information?”
Some experts, however, argue that your run-of-the-mill cyber attack against American businesses and citizens should not be considered an act of war. The term “war” comes with certain trappings — governments get involved, resources are diverted, and the whole situation escalates overall, Thomas Rid, professor and author, recently told The Boston Globe. That kind of intensity might, in fact, be counterproductive for small-scale attacks, ones where local authorities might be the ones best equipped to neutralize a threat.
As humans evolve, so too do the methods with which we seek to destroy each other. The advent of the internet allows for a new kind of warfare — a much quieter one. One that is fought remotely, in real time, that’s decentralized and anonymized. One in which robots and drones take the heat and do our bidding, or where artificial intelligence tells us when it’s time to go to war.
Cyber warfare isn’t unlike nuclear weaponry: countries develop these weapons in secret, and, should they be deployed, it would be citizens who suffer more than their leaders. “Mutually assured destruction” would be a near guarantee. Treaties mandating transparency have worked to keep nuclear weapons in their stockpiles and away from deployment. Perhaps the same could work for digital warfare?
We may be able to foretell what scientific and technological developments are on the horizon, but we can only guess at what humanity will do with them.
Humans made airplanes, which allowed them to fly above the clouds… and they used them to drop bombs on each other.
Get your political jokes ready because this story is sure to generate tons of suggestions on how it can be beneficial to everyone in Washington, London, Beijing, Berlin, Pyongyang and anyplace else where people seem to be acting like monkeys. A team of neuroscientists has injected electronic instructions into the premotor cortex of monkeys that resulted in the animals getting instructions to complete actions without any other instructions, cues or stimuli. Can the instruction be to just shut up?
“What we are showing here is that you don’t have to be in a sensory-receiving area in order for the subject to have an experience that they can identify.”
In their study, published in the journal Neuron, neuroscientists Dr. Kevin A. Mazurek and Dr. Marc H. Schieber describe how they worked with two rhesus monkeys to demonstrate how instructions can be delivered to the premotor cortex via injections of electrical stimulation. The premotor cortex is part of the motor cortex in the brain’s frontal lobe that controls the planning, control, and execution of voluntary movements. The premotor cortex feeds directly to the spinal cord, but its functions are not fully understood… which is why these two neuroscientists met with two rhesus monkeys.
I thought I was in the line for the banana eating experiment
The experiment was relatively simple. The monkeys were put in front of a panel of knobs (a great nickname for Congress) and trained to perform one of four specific tasks with a knob when one was lit by LEDs. At the same time, a mild microstimulus was applied via implanted electrodes to one of four areas in their premotor cortex. This stimulus was just a brief buzz and did not control any of the movements, since the premotor cortex is not part of the brain’s perception process.
Once the monkeys learned the tasks, the lights were turned off, but the microstimulations continued, and the monkeys were able to move the correct knobs in the proper way when microbuzzed. To prove that the areas of the premotor cortex were not predisposed to the movements, the researchers switched the electronic impulse injectors around and retrained the subjects. When the lights were turned off, the monkeys continued to move the proper knobs. (Does this sound like training them to vote?)
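The logic of the experiment above can be sketched as a tiny reward-learning loop: the association between a stimulation site and an action is learned purely from reward, so shuffling the assignments and retraining recovers the new mapping. The site names, actions, and trial counts below are illustrative assumptions, not details from the study.

```python
import random

random.seed(0)

ACTIONS = ["push_button", "turn_sphere", "pull_handle", "pull_cylinder"]

def train(site_to_action, trials=1000):
    """Learn, from reward alone, which action pays off for each stimulation site."""
    # Each site keeps a reward tally per candidate action.
    tally = {site: {a: 0 for a in ACTIONS} for site in site_to_action}
    for _ in range(trials):
        site = random.choice(list(site_to_action))   # a site is stimulated
        action = random.choice(ACTIONS)              # the subject explores
        if action == site_to_action[site]:           # only the correct action is rewarded
            tally[site][action] += 1
    # The learned policy: pick the most-rewarded action for each site.
    return {site: max(acts, key=acts.get) for site, acts in tally.items()}

# Original assignment of stimulation sites to tasks.
mapping = {"site_A": "push_button", "site_B": "turn_sphere",
           "site_C": "pull_handle", "site_D": "pull_cylinder"}
policy = train(mapping)
assert policy == mapping

# Shuffle the assignments, as the researchers did; retraining recovers the
# new mapping, showing the association is learned rather than prewired.
shuffled = {"site_A": "turn_sphere", "site_B": "pull_cylinder",
            "site_C": "push_button", "site_D": "pull_handle"}
assert train(shuffled) == shuffled
```

The point of the sketch is the second assertion: nothing about a site is inherently tied to an action, so any consistent reward schedule produces a working policy.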
Of course, the researchers say this experiment has nothing to do with mind control in humans. Dr. Mazurek has other plans for it, as he explains in an interview with The New York Times:
“This could be very important for people who have lost function in areas of their brain due to stroke, injury, or other disease. We can potentially bypass the damaged part of the brain where connections have been lost and deliver information to an intact part of the brain.”
The example Mazurek gives compares the experiment to learning that a red light while driving means putting a foot on the brake pedal. If parts of the brain’s chain of command for completing this task are damaged, a stimulus could replace them.
Why did I pick this up? I hate when that happens.
The next step is to conduct the experiments on humans and eliminate the visual LED stimulus. If that’s successful, it means the “information” or instructions can be “injected” without the person knowing it. Now THAT sounds like mind control.
Fortunately, before the researchers try this on humans, they will continue to perform their tests on politicians. (You knew it was coming.)
08-12-2017
Scientists ‘Inject’ Information Into Monkeys’ Brains
Credit: Christoph Hitz
When you drive toward an intersection, the sight of the light turning red will (or should) make you step on the brake. This action happens thanks to a chain of events inside your head.
Your eyes relay signals to the visual centers in the back of your brain. After those signals get processed, they travel along a pathway to another region, the premotor cortex, where the brain plans movements.
Now, imagine that you had a device implanted in your brain that could shortcut the pathway and “inject” information straight into your premotor cortex.
That may sound like an outtake from “The Matrix.” But now two neuroscientists at the University of Rochester say they have managed to introduce information directly into the premotor cortex of monkeys. The researchers published the results of the experiment on Thursday in the journal Neuron.
Although the research is preliminary, carried out in just two monkeys, the researchers speculated that further research might lead to brain implants for people with strokes.
“You could potentially bypass the damaged areas and deliver stimulation to the premotor cortex,” said Kevin A. Mazurek, a co-author of the study. “That could be a way to bridge parts of the brain that can no longer communicate.”
In order to study the premotor cortex, Dr. Mazurek and his co-author, Dr. Marc H. Schieber, trained two rhesus monkeys to play a game.
The monkeys sat in front of a panel equipped with a button, a sphere-shaped knob, a cylindrical knob, and a T-shaped handle. Each object was ringed by LED lights. If the lights around an object switched on, the monkeys had to reach out their hand to it to get a reward — in this case, a refreshing squirt of water.
Each object required a particular action. If the button glowed, the monkeys had to push it. If the sphere glowed, they had to turn it. If the T-shaped handle or cylinder lit up, they had to pull it.
After the monkeys learned how to play the game, Dr. Mazurek and Dr. Schieber had them play a wired version. The scientists placed 16 electrodes in each monkey’s brain, in the premotor cortex.
Each time a ring of lights switched on, the electrodes transmitted a short, faint burst of electricity. The patterns varied according to which object the researchers wanted the monkeys to manipulate.
As the monkeys played more rounds of the game, the rings of light dimmed. At first, the dimming caused the monkeys to make mistakes. But then their performance improved.
Eventually the lights went out completely, yet the monkeys were able to use only the signals from the electrodes in their brains to pick the right object and manipulate it for the reward. And they did just as well as with the lights.
This hints that the sensory regions of the brain, which process information from the environment, can be bypassed altogether. The brain can devise a response by receiving information directly, via electrodes.
One alternative explanation was that the pulses simply caused muscle twitches that the monkeys learned to recognize. Dr. Mazurek and Dr. Schieber were able to rule out this possibility by seeing how short they could make the pulses. With a jolt as brief as a fifth of a second, too short to cause the monkeys to jerk about, the animals could still master the game without lights.
“The stimulation must be producing some conscious perception,” said Paul Cheney, a neurophysiologist at the University of Kansas Medical Center, who was not involved in the new study.
But what exactly is that something? It’s hard to say. “After all, you can’t easily ask the monkey to tell you what they have experienced,” Dr. Cheney said.
Dr. Schieber speculated that the monkeys “might feel something on their skin. Or they might see something. Who knows what?”
What makes the finding particularly intriguing is that the signals the scientists delivered into the monkey brains had no underlying connection to the knob, the button, the handle or the cylinder.
Once the monkeys started using the signals to grab the right objects, the researchers shuffled them into new assignments. Now different electrodes fired for different objects — and the monkeys quickly learned the new rules.
“This is not a prewired part of the brain for built-in movements, but a learning engine,” said Michael A. Graziano, a neuroscientist at Princeton University who was not involved in the study.
Dr. Mazurek and Dr. Schieber implanted only small arrays of electrodes into the monkeys. Engineers are working on implantable arrays that might include as many as 1,000 electrodes. So it may one day be possible to transmit far more complex packages of information into the premotor cortex.
Dr. Schieber speculated that someday scientists might be able to use such advanced electrodes to help people who suffer brain damage. Strokes, for instance, can destroy parts of the brain along the pathway from sensory regions to areas where the brain makes decisions and sends out commands to the body.
Implanted electrodes might eavesdrop on neurons in healthy regions, such as the visual cortex, and then forward information into the premotor cortex.
“When the computer says, ‘You’re seeing the red light,’ you could say, ‘Oh, I know what that means — I’m supposed to put my foot on the brake,’” said Dr. Schieber. “You take information from one good part of the brain and inject it into a downstream area that tells you what to do.”
The new organ models that researchers have 3D printed don’t just look like the real deal; they feel like it, too.
Researchers can attach sensors to the organ models to give surgeons real-time feedback on how much force they can use during surgery without damaging the tissue. Credits: McAlpine Research Group.
3D printing has taken the world by storm, and medicine especially can benefit from the technology. So far, people have 3D printed human cartilage, skin, and even artificial limbs — and we’ve just started to scratch the surface of what 3D printing can do. Now, researchers from the University of Minnesota have developed artificial organ models which look incredibly realistic.
“We are developing next-generation organ models for pre-operative practice. The organ models we are 3D printing are almost a perfect replica in terms of the look and feel of an individual’s organ, using our custom-built 3D printers,” said lead researcher Michael McAlpine, an associate professor of mechanical engineering at the University of Minnesota’s College of Science and Engineering.
The 3D-printed structures mimic not only the appearance of real organs but also their mechanical properties, look and feel. They include soft sensors that can be customized depending on the desired organ. The sensors offer real-time feedback on how much force is being applied to them, notifying doctors when they are close to damaging the organ.
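The feedback loop described above amounts to comparing measured force against a damage threshold. A minimal sketch of that idea follows; the threshold values and function name are made-up assumptions for illustration, not figures from the study.

```python
def feedback(force_newtons, damage_threshold=5.0, warn_fraction=0.8):
    """Classify an applied force against a (hypothetical) tissue-damage threshold."""
    if force_newtons >= damage_threshold:
        return "stop"       # force would damage the model tissue
    if force_newtons >= warn_fraction * damage_threshold:
        return "warning"    # approaching the damage threshold
    return "ok"             # safe level of force

assert feedback(2.0) == "ok"
assert feedback(4.5) == "warning"
assert feedback(6.0) == "stop"
```

In a real training setup, such a check would run continuously on the sensor stream, which is what gives surgeons the "real-time" feel for safe force levels.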
The technology could help students get a better feel for real organs and learn how to improve surgical skills. For doctors, it could help them prepare for complex surgeries. It’s a great step forward from previous models of artificial organs, which were generally made from hard, unrealistic plastic.
“We think these organ models could be ‘game-changers’ for helping surgeons better plan and practice for surgery. We hope this will save lives by reducing medical errors during surgery,” McAlpine added.
In the future, researchers want to develop even more complex organs, as well as start incorporating defects or deformities. For instance, they could add a patient-specific inflammation or a tumor to an organ, based on a previous scan, enabling doctors to visualize and prepare for an intervention.
Lastly, this could ultimately pave the way for 3D printing real, functioning organs. There’s no fundamental reason why we can’t do this; we’re just not there yet. This invention could be a stepping stone toward such advancements.
“If we could replicate the function of these tissues and organs, we might someday even be able to create ‘bionic organs’ for transplants,” McAlpine said. “I call this the ‘Human X’ project. It sounds a bit like science fiction, but if these synthetic organs look, feel, and act like real tissue or organs, we don’t see why we couldn’t 3D print them on demand to replace real organs.”
The research was published today in the journal Advanced Materials Technologies.
Researchers at the University of California, Berkeley, have developed a robot that has the ability to learn like a human toddler, allowing it to predict outcomes. Called Vestri, the robot is capable of learning all by itself with no human supervision required.
Teaching a robot how to play
When toddlers play with toys, they’re doing more than just entertaining themselves. Effectively, with every twist or throw, the children learn about how the world works. By manipulating objects on their own, toddlers learn how those objects respond and can then predict how they will likely behave in the future if handled the same way.
This great learning strategy, sometimes called “motor babbling”, has been emulated by American scientists into the Vestri robot. The technology in question, called “visual foresight”, effectively enables the robot to imagine what its next action should be and what the likeliest consequences might look like, and then take action based on the best results.
“Children can learn about their world by playing with toys, moving them around, grasping, and so forth. Our aim with this research is to enable a robot to do the same: to learn about how the world works through autonomous interaction,” said UC Berkeley assistant professor Sergey Levine, lead author of the study, which was presented at the Neural Information Processing Systems conference. “The capabilities of this robot are still limited, but its skills are learned entirely automatically, and allow it to predict complex physical interactions with objects that it has never seen before by building on previously observed patterns of interaction.”
Scientists hope that in the future such technology could enable self-driving cars to anticipate road conditions ahead, but for now at least, this ‘robotic imagination’ is fairly simple and limited. Vestri can make predictions only several seconds into the future, but even that’s enough to help it figure out how best to move objects around on a table without disturbing obstacles. Vestri chose the right path around 90 per cent of the time.
What’s crucial about this skill set is that it requires no human intervention or prior knowledge of physics. Everything Vestri learned, it learned from scratch through unattended, unsupervised exploration: ‘playing’ with objects on a table.
After training itself, Vestri builds a predictive model of its surroundings, which it then uses to manipulate new objects it has never encountered before. The predictions take the form of video scenes that have not actually happened but could, if an object were pushed in a certain way.
“In the same way that we can imagine how our actions will move the objects in our environment, this method can enable a robot to visualize how different behaviors will affect the world around it,” said Levine. “This can enable intelligent planning of highly flexible skills in complex real-world situations.”
Because Vestri’s video predictions rely on observations made autonomously by the robot through camera images, the method is general and broadly applicable. That’s in contrast to conventional computer vision techniques which require human supervision to label thousands or even millions of images.
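The planning principle behind visual foresight can be sketched in a few lines: run each candidate action through a learned forward model, then pick the action whose imagined outcome lands closest to the goal. The toy world, two-dimensional state, and function names below are illustrative assumptions, not the Vestri system itself, which predicts whole video frames.

```python
def forward_model(state, action):
    """Toy learned dynamics: pushing moves the object by the push vector."""
    x, y = state
    dx, dy = action
    return (x + dx, y + dy)

def plan(state, goal, candidate_actions):
    """Pick the action whose imagined outcome lands nearest the goal."""
    def predicted_error(action):
        px, py = forward_model(state, action)
        gx, gy = goal
        return (px - gx) ** 2 + (py - gy) ** 2
    return min(candidate_actions, key=predicted_error)

# Four candidate pushes: right, left, up, down.
actions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
best = plan(state=(0, 0), goal=(3, 0), candidate_actions=actions)
assert best == (1, 0)  # the rightward push moves the object toward the goal
```

The hard part in practice is learning `forward_model` from raw camera images, which is exactly what the robot's unsupervised play provides data for.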
Next, the Berkeley researchers want to expand the number of objects Vestri is able to play with, but also to enhance the movements it’s capable of making. By expanding its repertoire, the researchers hope to make Vestri more versatile and better adapted to all sorts of environments.
MIT researchers have developed “living” tattoos. They rely on a novel 3D printing technique based on ink made from genetically programmed cells.
Image credits Xinyue Liu et al., 2017, Advanced Materials.
There seems to be a growing interest in living, 3D-printable inks these days. Just a few days ago, we saw how scientists in Zurich plan to use them to create microfactories that can scrub, produce, and sense different chemical compounds. Now, MIT researchers led by Xuanhe Zhao and Timothy Lu, two professors at the institute, are taking that concept and putting it on your skin.
The technique is based on cells programmed to respond to a wide range of stimuli. After mixing in some hydrogel to keep everything together and nutrients to keep all the inhabitants happy and fed, the inks can be printed, layer by layer, to form interactive 3D devices.
The team demonstrated their efficacy by printing a “living” tattoo, a thin transparent patch of live bacteria in the shape of a tree. Each branch is designed to respond to a different chemical or molecular input. Applying such compounds to areas of the skin causes the ‘tree’ to light up in response. The team says the technique can be used to manufacture active materials for wearable tech, such as sensors or interactive displays. Different cell patterns can be used to make these devices responsive to environmental changes, from chemicals, pollutants, and pH shifts to more everyday concerns such as temperature.
The researchers also developed a model to predict the interactions between different cells in any structure under a wide range of conditions. Future work with the printing technique can draw on this model to tailor the responsive living materials to various needs.
Why bacteria?
Previous attempts to 3D print genetically-engineered cells that can respond to certain stimuli have had little success, says co-author Hyunwoo Yuk.
“It turns out those cells were dying during the printing process, because mammalian cells are basically lipid bilayer balloons,” he explains. “They are too weak, and they easily rupture.”
So they went with bacteria and their hardier cell-wall structure. Bacteria don’t usually clump together into multicellular organisms, so they have very beefy walls (compared to the cells in our body, for example) meant to protect them in harsh conditions. Those walls come in very handy when the ink is forced through the printer’s nozzle. And unlike mammalian cells, bacteria are compatible with most hydrogels — mixes of water and some polymer. The team found that a hydrogel based on pluronic acid was the best home for their bacteria while keeping an ideal consistency for 3D printing.
“This hydrogel has ideal flow characteristics for printing through a nozzle,” Zhao says. “It’s like squeezing out toothpaste. You need [the ink] to flow out of a nozzle like toothpaste, and it can maintain its shape after it’s printed.”
“We found this new ink formula works very well and can print at a high resolution of about 30 micrometers per feature. That means each line we print contains only a few cells. We can also print relatively large-scale structures, measuring several centimeters.”
Gettin’ inked
The team printed the ink using a custom 3D printer they built, based largely on standard elements plus a few fixtures they machined themselves.
A pattern of hydrogel mixed with cells was printed in the shape of a tree on an elastomer base. After printing, they cured the patch by exposing it to ultraviolet radiation. They then put the transparent elastomer layer onto a test subject’s hand after smearing several chemical samples on his skin. Over several hours, branches of the patch’s tree lit up when bacteria sensed their corresponding stimuli.
Logic gates created with the bacteria-laden ink. Such structures form the basis of today’s computer hardware. Image credits Xinyue Liu et al., 2017, Advanced Materials.
The team also designed certain bacterial strains to work only in tandem with other elements. For instance, some cells will only light up when they receive a signal from another cell or group of cells. To test this system, scientists printed a thin sheet of hydrogel filaments with input (signal-producing) bacteria and chemicals, and overlaid that with another layer of filaments of output (signal-receiving) bacteria. The output filaments only lit up when they overlapped with the input layer and received a signal from them.
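The layered input/output behavior described above is effectively an AND gate: an output cell glows only where it both overlaps an input filament and that filament has received its trigger chemical. The row/column layout and names below are illustrative assumptions, not the paper's actual geometry.

```python
# Input filaments run horizontally; a trigger chemical was applied to rows 0 and 2.
input_rows_triggered = {0: True, 1: False, 2: True}
# Output filaments run vertically, crossing every input row.
output_cols = [0, 1, 2, 3]

# An output cell at (row, col) lights up only if it overlaps a fired input row.
glowing = {(row, col)
           for row, fired in input_rows_triggered.items() if fired
           for col in output_cols}

assert (0, 2) in glowing       # overlap + trigger -> glow
assert (1, 2) not in glowing   # overlap but no trigger -> dark
assert len(glowing) == 8       # two triggered rows x four output columns
```

Chaining such gates is what would let multiple cell types pass signals "back and forth like transistors," as the researchers put it.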
Yuk says in the future, their tech may form the basis for “living computers”, structures with multiple types of cells that communicate back and forth like transistors on a microchip. Even better, such computers should be perfectly wearable, Yuk believes.
Until then, they plan to create custom sensors in the form of flexible patches and stickers aimed at detecting a wide variety of chemical and biochemical compounds. The MIT scientists also want to expand the living tattoo’s uses in a direction similar to that developed at ETH Zurich, manufacturing patches that can produce compounds such as glucose and release them into the bloodstream over time. And, “as long as the fabrication method and approach are viable”, applications such as implants and ingestibles aren’t off the table either, the authors conclude.
The paper “3D Printing of Living Responsive Materials and Devices” has been published in the journal Advanced Materials.
Do you trust Elon Musk and Stephen Hawking when they warn that artificial intelligence, particularly in autonomous weapons, may be advancing faster than we can control it? Have you ever been steered wrong by a Google map? Would you trust Google in more difficult tasks, like creating artificial intelligence? Do you believe Google has the best interests of humanity in mind in all that it does? Would you be excited if Google announced it had developed an artificial intelligence that created its own AI child that can outperform humans? Would like to check with Elon and Stephen again? Do you think it’s too late?
Researchers at Google Brain – a name that seems to be becoming more oxymoronic by the day – announced this week that they have developed an artificial intelligence called AutoML, which is short for Automated Machine Learning, but the ‘M’ could also stand for ‘Mother’ because its main purpose is to develop and generate its own artificial intelligences. You could call this new AI a ‘child’ but you’d be too late because Google Brain has already thought of that. However, to reduce the possibility of panic, its official name is the more innocent NASNet.
NASNet? Won’t the other AI kids call him Nazzy?
In a post on the Google Research blog, the researchers explain that AutoML is more than just a parent — it’s a teacher as well. In the described experiment, AutoML trains its child NASNet to recognize objects in a video – things like people, cars, clothing items, etc. If this sounds like a human parent pointing to pictures in a book and getting their child to say “cat,” you’re right. If you can imagine that human parent correcting the child who said “cow” instead of “cat,” you’ve also described what AutoML does to NASNet. That doesn’t sound so bad, does it?
Oh, you gullible humans. Unlike a human parent, AutoML can correct and repeat this training thousands of times without getting frustrated, hungry or tired, and NASNet can endure this repetitive training without getting fidgety or needing to use the bathroom. Once the education was complete, NASNet was tested on two well-known datasets — ImageNet for images and COCO for objects – and outperformed all other computer vision systems.
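The parent/child loop described above can be sketched as a search: the "parent" proposes child model configurations, scores each one, and keeps the best it has seen. The search space, scoring function, and names below are toy assumptions for illustration; Google's actual AutoML uses reinforcement learning over neural-network architectures, not this simple random search.

```python
import random

random.seed(1)

# Hypothetical search space of child-model configurations.
SEARCH_SPACE = {"layers": [2, 4, 8], "width": [16, 32, 64]}

def evaluate(child):
    """Stand-in for 'train the child network and measure its accuracy'."""
    # Pretend deeper and wider children perform better on the benchmark.
    return child["layers"] * 0.05 + child["width"] * 0.005

def search(rounds=200):
    """Parent loop: propose, evaluate, keep the best child seen so far."""
    best, best_score = None, float("-inf")
    for _ in range(rounds):
        child = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
        s = evaluate(child)
        if s > best_score:
            best, best_score = child, s
    return best

best_child = search()
assert best_child == {"layers": 8, "width": 64}
```

The reason the real system is remarkable is that evaluating a child is not a one-line formula but a full training run, and the parent learns to propose better children over time instead of guessing at random.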
Think about that for a minute. A machine made by a machine outperformed the best machines made by humans. Are we ready for this? Is Google?
Is this the future?
“We suspect that the image features learned by NASNet on ImageNet and COCO may be reused for many computer vision applications. Thus, we have open-sourced NASNet for inference on image classification and for object detection in the Slim and Object Detection TensorFlow repositories.”
Open source! With no standards or regulations in place, Google Brain (do you see the oxymoron yet?) has unleashed its AI and its fast-learning child upon the world. DeepMind, the company owned by Google’s parent Alphabet that is supposed to be working on the moral and ethical development of AI, didn’t have anything to say.
It’s easy to see how advanced object recognition will help applications like driverless cars. Are we ready for driverless cars to beget driverless golf carts that are smarter than their parents? Will they take us where we want to go … or where they plan to dump us?
About me
I’m Pieter, and I sometimes also use the pen name Peter2011.
I’m a man, I live in Linter (Belgium), and I am retired.
I was born on 18/10/1950, so I’m now 74 years young.
My hobbies are ufology and other esoteric subjects.
Among the articles on this blog you will find my own work. My thanks also go to André, Ingrid, Oliver, Paul, Vincent, Georges Filer and MUFON for their contributions to the various categories...
Enjoy reading, and let me know what you think of this blog.