The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organisations that carry out in-depth research, even if they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, run by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which enables us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Watch out for a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or would you like to know more? Then don't hesitate to contact us! Together we will unravel the mystery of the skies and beyond.
24-02-2019
China’s Gene-Edited Twins Might Have Accidentally Been Given Super Intelligence
In November 2018, a Chinese biophysicist named He Jiankui came into the international spotlight after reports that he had used the CRISPR gene editing technology to create the first genetically modified humans. The genetically modified twins, known by the pseudonyms Lulu and Nana, were born on November 8, 2018, to widespread international condemnation of He’s actions, seen by a large part of the scientific community as reckless tinkering. He Jiankui says the genetic modifications done to the twin baby girls were to make them immune to the HIV virus. How could anyone be mad about that? Well, it turns out that besides HIV immunity, He Jiankui’s tinkering might have “accidentally” given the twins super intelligence, enhancing their cognition, memory, and ability to learn. Accidentally.
According to the MIT Technology Review, the HIV immunity and enhanced intelligence are inseparable from one another. To make Lulu and Nana HIV immune, He used CRISPR to delete a gene known as CCR5. HIV needs the CCR5 gene to infect blood cells. But there’s another interesting part of the CCR5 gene: it’s been known since 2016 that removing the gene from mice enhances their memories. What’s more, people who naturally lack CCR5 seem to recover better from strokes and perform better in school. A recent journal article names CCR5 as a “suppressor of memories and synaptic connections.”
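To make the targeting step concrete: Cas9 is steered to its cut site by a roughly 20-letter guide sequence that must sit immediately next to an "NGG" PAM motif in the DNA. Here is a toy Python sketch of that site search, using an invented sequence rather than the real CCR5 gene:

```python
import re

# Toy illustration of how a Cas9 target site is chosen: scan DNA for a
# 20-letter "protospacer" immediately followed by an NGG PAM motif.
# The sequence below is invented for the example; it is NOT CCR5.
dna = "GCTAGCTAGCTAGCTAGCTA" + "TGG" + "ACGTACGTACGTACGTACGT"

# Lookahead regex so overlapping candidate sites are all reported.
for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
    protospacer, pam = m.group(1), m.group(2)
    print(f"position {m.start():2d}  target {protospacer}  PAM {pam}")
```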
CRISPR gene editing technology first came to prominence in 2015.
While there is no evidence yet that this was He Jiankui’s true intention all along, it does seem awfully suspicious at a time when a new biotechnology race between the U.S. and China seems to be starting up. He Jiankui apparently reached out to other scientists around the world for advice and support, but there is no record of him asking any questions about the link between CCR5 and intelligence. Maybe it’s just a happy accident, after all. But it’s also likely that asking those sorts of questions might get you in a bit of hot water before you have your mad-scientist fun.
It is certain that He Jiankui at least knew about the research on CCR5 and cognition. He addressed it at a conference, dismissing it as needing “more independent verification.” Questioned again on a separate occasion, He stated that he was against genetic modification for enhancements.
One of the authors of a new paper on the link between the CCR5 gene and intelligence, Alcino J. Silva, a neurobiologist at the University of California, Los Angeles, doesn’t buy it. When the news of the twins’ birth was announced on November 25, Silva immediately suspected that cognitive enhancement was the true aim of He Jiankui’s experiments:
“I suddenly realized—Oh, holy shit, they are really serious about this bullshit.
My reaction was visceral repulsion and sadness.”
Silva sees these kinds of genetic experiments as irresponsible and morally repugnant. While we have evidence of what removing this gene does to mice, we have no idea what it will do to humans, and even less of an idea what unchecked genetic tinkering would do to human societies. Silva says:
“Could it be conceivable that at one point in the future we could increase the average IQ of the population? I would not be a scientist if I said no. The work in mice demonstrates the answer may be yes. But mice are not people. We simply don’t know what the consequences will be in mucking around. We are not ready for it yet.”
Regardless of He Jiankui’s intentions, the proverbial cat has exited the proverbial bag, and only time will tell what the results of this “mucking around” will be.
18-02-2019
Printed Sensors Could Simplify NASA’s Extraterrestrial Scanning
It’s no secret that remotely scanning extraterrestrial environments requires quite a lot of state-of-the-art technology. Aside from the space travel tech, there is the problem of building the actual sensors that will be picking up faint traces of water vapor and gases, or registering temperature changes. Luckily, however, NASA is looking to develop 3D printed sensors that are lighter and more compact than ever. The sensors will serve as the basis for a potentially revolutionary, nanomaterial-based detector platform.
Mahmooda Sultana is the lead technologist for the project, having won funding to advance this concept through a $2 million technology development award. Potentially, the system will be capable of sensing everything from minute concentrations of gases and vapor to atmospheric pressure and temperature. It will then transmit all this data, using a wireless antenna, back to NASA’s ground controllers.
What’s most impressive about the project is that it could do all this from a single, self-contained platform. It’s also a marvel that the platform could measure just two by three inches in size. The potential for miniaturisation that printed sensors provide is a major boon to simplifying NASA’s extraterrestrial terrain scanning capabilities.
Currently, the team is busy determining which set-up is best for the design. This requires working out which combination of materials can best measure minute (down to parts-per-billion) concentrations of water, ammonia, methane and hydrogen.
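The article doesn't describe NASA's actual calibration pipeline, but the generic recipe for turning a raw sensor response into a gas concentration is a calibration curve: record the response at known concentrations, fit it, and invert the fit for unknown samples. A minimal sketch with invented numbers:

```python
import numpy as np

# Generic illustration (not NASA's pipeline): calibrate a gas sensor by
# fitting its response at known concentrations, then invert the fit to
# read an unknown sample. All numbers here are invented.
known_ppb = np.array([0.0, 50.0, 100.0, 200.0, 400.0])  # calibration gases
response = np.array([0.00, 0.11, 0.21, 0.43, 0.85])     # sensor output (a.u.)

slope, intercept = np.polyfit(known_ppb, response, 1)   # linear fit

def to_ppb(reading):
    """Invert the calibration line to estimate concentration in ppb."""
    return (reading - intercept) / slope

print(f"{to_ppb(0.30):.0f} ppb")  # roughly 140 ppb for a 0.30 reading
```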
Miniaturization & Space Exploration
The miniaturization of technologies is a crucial aspect of modern space travel. Compact, lighter equipment allows for the economisation of space and fuel costs. Thus, it’s been on NASA’s mind for a while, and 3D printing is definitely playing a part in multiple ways. The approach the team at NASA’s Goddard Space Flight Center is working with could simplify both the production and the packaging of these essential platforms.
The project is looking into nanomaterials, like carbon nanotubes and graphene, as the basis. Another unique aspect of the proposed method is that it will print all the necessary sensors on the same substrate using a single process. The team is even looking into printing part of the wireless communication circuitry needed for the platform and the printed sensors to relay the data to ground controllers.
Nanomaterials, such as carbon nanotubes, graphene, molybdenum disulfide and others, possess useful physical properties. They display high sensitivity and can remain stable in extreme conditions, which makes them ideal candidates. As one would imagine, they are also lightweight, resistant to radiation and require less power.
Once finalised, Northeastern University will use their Nanoscale Offset Printing System to apply the nanomaterials. Sultana’s group, meanwhile, will functionalize individual sensors by depositing additional layers of nanoparticles, enhancing their sensitivity. They will also integrate the sensors with readout electronics and package the entire platform.
Featured image courtesy of NASA.
About the author | Rawal Ahmed is a freelance journalist and politics correspondent with an avid interest in futurism, science and technology.
The latest terrifying artificial intelligence development to foreshadow humanity’s future under the cold, steel boots of ruthless robots comes by way of OpenAI, a San Francisco-based research institute funded in part by Elon Musk. OpenAI has reportedly created an AI capable of generating realistic-but-fake news stories that are credible enough to fool most human readers. In fact, the AI is so good at what it does that its own creators believe it’s too dangerous to release. How much longer until one of these systems is let loose on an unsuspecting public?
And how long until it decides humanity is a plague?
OpenAI’s newest hellish creation is called GPT2. The program is essentially a text generator which can analyze existing text and then produce its own based on what it expects might come after it. What separates GPT2 from other natural language bots is the fact that it can produce realistic texts in perfect prose – and that’s where the danger comes in.
Jack Clark, policy director at OpenAI, says that because the program writes such realistic-looking text, it could be easily used to fool or mislead readers with fake news stories. “We started testing it, and quickly discovered it’s possible to generate malicious-esque content quite easily,” Clark told the MIT Technology Review. “It’s very clear that if this technology matures—and I’d give it one or two years—it could be used for disinformation or propaganda. We’re trying to get ahead of this.”
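For readers curious about the mechanism behind the alarm: GPT2 is at heart a next-token predictor, and smaller versions of the model were eventually released publicly. A minimal generation sketch using the Hugging Face transformers library (our choice of tooling, not something the article mentions):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small, publicly released version of GPT-2.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Scientists announced today that"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate a continuation one token at a time: at every step the model
# scores all possible next tokens given the text so far and samples one.
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,   # sample instead of always taking the top token
    top_k=50,         # ...but only from the 50 most likely tokens
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```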
Real-life Bond villain Elon Musk is also trying to get ahead of the dangers of GPT2 by distancing himself from OpenAI altogether. Musk left the company this week, citing his commitments to his other endeavors. While Musk is without a doubt a busy man, many suspect his departure might be due to the terrifying possibilities GPT2 foreshadows.
As artificial intelligence networks continue to get better at fooling humans, the line between what is real and what is fake is beginning to blur. Already, sophisticated AI programs can produce perfectly real-looking video and audio content depicting people saying or doing things which never actually happened. What’s going to happen when these start flooding the news cycle? Are we destined to lose our ability to tell what is real and true?
“Shhh. Too many questions. Just sit back and smash that ‘Subscribe’ button. It’ll all be over soon.”
Perhaps we already have. Many technologists and historians believe we may already be controlled by AI. Could that explain the geopolitical high strangeness of the last few years? Is it all a carefully curated illusion designed to manipulate the minds of the masses?
Kill your TV before it’s too late. And your phone and computer while you’re at it. Better yet, just gouge your own eyes out and rip your ears off. It’s not as difficult as it may sound. It’s the only real way to avoid the hellish nightmare the future is turning out to be.
15-02-2019
SEE A ROBOT MELT ITS OWN BONES TO AVOID OBSTACLES
COLORADO STATE UNIVERSITY
KRISTIN HOUSER
Adapt. React. Readapt. Act.
Cheetahs are the fastest animals on land, and they owe their speed in part to the design of their skeletons — the tibia and fibula in their legs are fused, helping them maintain stability while sprinting after prey.
However, this unique characteristic also prevents cheetahs from being effective climbers like many other cats. If it could somehow separate its leg bones at will, the animal would be far more formidable.
Alas, the cheetah is stuck with the skeleton evolution gave it. But a new robot out of Colorado State University (CSU) doesn’t suffer from the same limitation. It can melt and solidify its bones on the fly — changing its skeleton to best suit whatever task it currently faces.
Mighty Morphing Robo Joints
In a new paper published in the journal IEEE Robotics & Automation Letters, the CSU team describes how it gave its robot the ability to adapt to different challenges by equipping it with “shape morphing joints.”
Each of these joints starts out rigid, but when heated up with electricity, it becomes pliable within about 10 seconds. Stop the flow of electricity, and the joint once again becomes rigid.
In a video, the researchers demonstrate how their robot can use its SMJs to lower itself enough to slink below an obstacle it would otherwise hit.
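The paper's actual control code isn't described here, so the following is a purely hypothetical sketch (no real robot API) of the heat, soften, reshape, cool cycle the article describes:

```python
# Hypothetical sketch (no real robot API) of the cycle described above:
# run current to heat a shape-morphing joint until it turns pliable in
# about 10 seconds, reshape the limb, then cut power so it re-stiffens.
SOFTEN_TIME_S = 10.0

class ShapeMorphingJoint:
    def __init__(self):
        self.heated_for_s = 0.0

    def heat(self, seconds):
        """Run current through the joint's heating element."""
        self.heated_for_s += seconds

    def cool(self):
        """Cut the current; the joint material re-solidifies."""
        self.heated_for_s = 0.0

    @property
    def rigid(self):
        return self.heated_for_s < SOFTEN_TIME_S

joint = ShapeMorphingJoint()
joint.heat(SOFTEN_TIME_S)    # about 10 s of current...
assert not joint.rigid       # ...and the joint is pliable: reshape now
joint.cool()                 # stop the electricity
assert joint.rigid           # the skeleton locks into its new pose
```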
Next Steps
The CSU team plans to work on building a robot capable of more than just one type of locomotion next — a bot that can both swim and walk, for example, or one that can walk and fly. However, it already sees a number of potential uses for its technology as is.
“Our morphing technique is ideal for robots that are small but need to perform different tasks or adapt to different environments,” researcher Jianguo Zhao told IEEE Spectrum. “Those robots can be used for a wide range of applications including environmental monitoring, military surveillance, as well as search and rescue in disaster areas.”
The year is 2030. You’ve just received an email: The dream job in Japan is yours. You start making phone calls, looking up the rent on Tokyo apartments, and getting ready to make the career move of a lifetime. There’s just one problem: Can your Siri get a visa?
It’s a potential roadblock that’s less farfetched than you’d think. In November 2018, Maltese government minister Silvio Schembri announced an initiative to grapple with questions like how many robots to let into the country at one time. Malta.ai is aimed at making Malta one of the top 10 countries in the world when it comes to readiness for advanced A.I. One of its first tasks is to explore, along with SingularityNET, how to institute a kind of citizenship test for robots. SingularityNET CEO Ben Goertzel elaborated on the idea a few days after the announcement in a blog post. His goal is to make sure that, as robots and A.I. continue to become more sophisticated and autonomous, they will still know how to follow and respect the laws of the land.
“I know what it means to be a citizen of the U.S. or Europe,” Goertzel tells Inverse. “If you’re a naturalized citizen of the U.S., you take a simple test on constitution and government and so forth. That’s what I was thinking, what tests can be given to an A.I., or robot controlled by an A.I., to make it reasonable to consider making that A.I. a citizen.”
Does Siri need citizenship?
Why Futuristic Siris May Need a Passport
The initiative strikes to the heart of humanity’s relationship with machines. Laws are designed to accommodate humans and organizations, the only ones capable of taking responsibility. But as our computers move from dumb servants to sophisticated setups capable of passing the Turing test, legislators worldwide will need to consider how these pseudo-people function in legal systems designed for yesteryear. Benoît Hamon made taxing robots a key plank of his run for the French presidency in 2017, and Andrew Yang is running for the American presidency on a “basic income” platform to offset the job losses from automation. The European Parliament has called for ethical standards to guide the development of such machines and, in the United States, the billionaire philanthropist Bill Gates has called for a robot tax as well.
But as the line between simple tool and thinking entity continues to blur, the legal designations separating life and artificiality will have to evolve.
In Goertzel’s opinion, this means developing an A.I. that can understand the laws of a country, correctly answer questions about said laws, and apply those regulations to real-life situations. However, he readily admits that the task force will have to refine these ideas — and it may need to work through them fast.
“Artificial Intelligence is being seen in many quarters as the most transformative technology since the invention of electricity,” Francois Piccione, policy advisor for the Maltese government, tells Inverse. “To realize that such a revolution is taking place and not do one’s best to prepare for it would be irresponsible.”
Issues surrounding liability are already starting to emerge with autonomous cars. Current systems request users remain alert at all times, but once a computer can take full control, it raises a number of questions.
“Autonomy, inevitably, raises questions about responsibility and liability,” Piccione says. “To illustrate the point, if a driverless car causes an accident, who or what is liable? It could be the manufacturer, or the user of the system, or other intermediaries. But liability could also be attributed directly and solely to the robot or system itself.”
Maltese robots would not even be the first to gain citizenship. Sophia, the humanoid robot powered by SingularityNET, was granted honorary citizenship by Saudi Arabia in October 2017. The stunt was supposed to spark a conversation about robots in society. Instead, press attention focused on how Sophia seemed to enjoy more rights in Saudi Arabia than actual human women, as she didn’t need a male guardian in public.
Sophia the robot.
Which of course raises an even more complex question: In a world where human rights are far from a settled issue, it seems somewhat tone-deaf to begin discussing robot privileges for machines that haven’t even been invented yet. But Goertzel has stood by the initiative as “a genuinely forward-thinking and positive act on the part of the Saudi government.”
A Marketing Play?
Other experts in the field remain unconvinced. David Gunkel, a Northern Illinois University professor whose book Robot Rights considers the ethics of granting such benefits to machines, tells Inverse that Sophia’s citizenship was “mainly about marketing,” aimed at attracting the tech industry to the country’s Future Investment Summit. After all, it was only an honorary citizenship, basically akin to an honorary university degree.
“I have yet to see a well-reasoned and/or persuasive argument for granting A.I. or robots citizenship,” Gunkel says. “I do see good reasons to consider questions of legal personality for A.I.s and robots, but that is an entirely different set of questions.”
The problem of Siri’s citizenship, then, actually encompasses two distinct debates. The first concerns what happens when an A.I. does something wrong, a debate already occurring around autonomous cars. But the second is much more complicated: whether Siri and its kin come to command respect to the point where society starts to consider granting them such rights “just.”
“Neither of these questions require that A.I./robots be citizens,” Gunkel says. “In fact, we have already addressed and answered these questions for another class of artificial entity — the multinational corporation. Corporations are legal persons for the purpose of making them subjects of and subject to national and international law. This has and can be done without granting the corporation citizenship.”
Goertzel, however, suggests that even corporate personhood has its issues. What if a decentralized autonomous organization, for example a cryptocurrency, wants to register itself as a corporation? Does it need a human to finish the task?
“The focus is on how to provide certification in Malta to these systems, which would also include limited rights and obligations,” Piccione says. “Taking this route would not, in fact, be a new concept as today companies and other registered entities carry liability but also have rights, for example to own property. This could be the same mechanism used for ‘robots’ or other A.I. systems including autonomous vehicles.”
Will the autonomous car need a passport?
Should Citizenship Imply Non-Legal Rights, Too?
Corporate personhood can only answer so many questions. Gunkel says that we are living in a “robot invasion” where machines “are now everywhere and doing virtually everything.” As they move from simple tools to actors in society, consigning them to the status of human-run entities seems ill-fitting.
“I believe we will need to consider — and in fact have already begun considering — the question of moral and legal personhood for A.I. and robots apart from issues having to do with citizenship,” Gunkel says. “And what is perhaps worse, I worry that speculation about ‘robot citizenship’ might eclipse the more immediate questions regarding the moral and legal standing of A.I./robots.”
Goertzel predicts that a human-level artificial intelligence could emerge as early as 2029. If that prediction holds true, it means something halfway human-like could launch as soon as 2025. That only leaves around six years before legislators will have to consider how to treat entities with close to a regular citizen’s intelligence.
Whether the answer is citizenship itself is less clear, but one thing’s for certain: the line between man and machine is about to look a lot blurrier. Films like Her and Ex Machina explore the relationships that arise between humans and human-seeming systems. Even if we solve all of Siri’s visa issues, the boundaries may remain unsettled in more ways than just the legal one.
Singularity Sophia Interview from UN A.I. For Good Project
05-02-2019
THE US ARMY IS EQUIPPING SOLDIERS WITH POCKET-SIZED RECON DRONES
JON CHRISTIAN
Recon Drones
The U.S. Army has placed a $39 million order for tiny reconnaissance drones, small enough to fit in a soldier’s pocket or palm.
The idea behind the drones, which are made by FLIR Systems and look like tiny menacing helicopters, is that soldiers will be able to send them into the sky over the battlefield in order to gain a “lethal edge” during combat, according to Business Insider.
Battlefield View
FLIR Systems is currently delivering its “nano-unmanned aerial vehicles,” which it calls Black Hornet Personal Reconnaissance Systems, according to a press release that says the Army is starting an “initial integration” of the drones.
“This contract represents a significant milestone with the operational large-scale deployment of nano-UAVs into the world’s most powerful Army,” said Jim Cannon, the CEO of FLIR Systems, in the press release.
02-02-2019
WATCH A SUPER-FAST 3D PRINTER SCIENTISTS CALL THE “REPLICATOR”
NATURE/UNIVERSITY OF CALIFORNIA
VICTOR TANGERMANN
Fabrication Station
3D printers work by laboriously printing objects layer by layer. For larger objects, that process can take hours or even days.
But now scientists at the University of California, Berkeley have found a shortcut: a printer that can fabricate objects in one shot using light — and which could, potentially, revolutionize rapid manufacturing technology.
The Replicator
The research, published in the journal Science yesterday, describes a printer the researchers nicknamed “the replicator” in a nod to “Star Trek.”
It works more like a computed tomography (CT) scan than a conventional 3D printer. It builds a 3D image by scanning an object from multiple angles, then projects it into a tube of synthetic resin that solidifies when exposed to certain intensities of light. In two minutes, for instance, the team was able to fabricate a tiny figurine of Auguste Rodin’s famous “The Thinker” statue.
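The tomography analogy can be made concrete in a few lines. Below is a sketch using the scikit-image library (our choice, not the Berkeley team's code): the Radon transform turns a 2D object into one projection per angle, and filtered back-projection recovers the object from those projections, which is the step the printer effectively performs with light in resin:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Forward step: the Radon transform turns a 2D object into one
# projection per angle (a "sinogram"), which is what a CT scanner records.
image = shepp_logan_phantom()                    # 400x400 test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)             # object to projections

# Reverse step: filtered back-projection recovers the object from its
# projections; the printer effectively performs this step with light.
reconstruction = iradon(sinogram, theta=theta)

print(sinogram.shape)                            # (400, 180)
print(f"mean error: {np.abs(reconstruction - image).mean():.4f}")
```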
3D Printing 2.0
The replicator might have groundbreaking implications, but it does have some inherent limitations as well: the objects it produces are small, and require special synthetic resin to produce.
But it’s an exciting new technology — and one that could lead to a “Star Trek” future.
The eggs laid by a group of “pampered” hens in the U.K. contain something special in their whites.
After researchers from the University of Edinburgh spliced a human gene into the chickens’ DNA, the animals began laying eggs boasting a significant amount of two proteins used to treat diseases including cancer in humans — and the process, they say, is far cheaper than current methods of protein production.
“Production from chickens can cost anywhere from 10 to 100 times less than the factories,” researcher Lissa Herron told the BBC. “So hopefully we’ll be looking at at least 10 times lower overall manufacturing cost.”
Protein Packed
The human body naturally produces the proteins found in the new hen eggs — IFNalpha2a and macrophage-CSF, if you’re wondering — and they each play an important role in the immune system. Drugs containing both proteins are widely used by doctors to treat cancers and other diseases, but producing the proteins in the lab is difficult and expensive.
For their study, published in the journal BMC Biotechnology, the Edinburgh researchers inserted the gene that produces the proteins in humans into the part of the chickens’ DNA that handles the production of the white of their eggs. When they tested the hens’ eggs, they found that just three eggs contained a dose’s worth of the proteins.
The genetically modified chickens, which live “pampered” lives in large pens, are none the wiser either, according to Herron. “As far as the chicken knows, it’s just laying a normal egg,” she told the BBC. “It doesn’t affect its health in any way, it’s just chugging away, laying eggs as normal.”
Multiple Baskets
Though their research yielded promising results, the team believes it could take between 10 and 20 years before regulatory agencies approve any drugs developed from their genetically modified chickens for human use. But the proteins in the animals’ eggs could serve a purpose in the interim.
“We are not yet producing medicines for people,” researcher Helen Sang told the BBC, “but this study shows that chickens are commercially viable for producing proteins suitable for drug discovery studies and other applications in biotechnology.”
To help cut down the horrifically long donor-organ waitlist, some scientists are looking up to outer space.
Several doctors have tried to 3D print organs in the lab, with mixed results — organs with complex internal structures, like hearts and lungs, tend to collapse under their own weight.
Now, instead of supporting them with complex scaffolding systems, some scientists are wondering if it’d be better to send the 3D printer up to the zero-gravity environment of the International Space Station (ISS) in order to print hearts in space, according to BBC News — a convergence of space and medicine that could either prove a grim folly or shape the future of surgery.
My Sides Are In Orbit
A number of scientists have already explored the idea of microgravity 3D printing. Next up will be a startup called Techshot, which has partnered with NASA to develop a biological 3D printer that it plans to send to the ISS this coming May.
First, the company plans to spend about a year on experiments that will determine how well the printer works in space. At that point, the company will largely focus on developing cardiac tissues, according to BBC News.
Open Source
“After our test protocols have been completed, we’ll open the program up to outside researchers who want to use our device,” Techshot VP Rich Boling told BBC News. After all those tests are done, Boling explained that the company will modify and optimize its printer before sending it back into space to fabricate even more complex tissues.
Once everything is up and running, Techshot hopes to manufacture hearts and other complex organs for people in need, Boling told BBC News. Not only could printed organs cut waitlist times, but Techshot also hopes that printing organs using the recipient’s stem cells will improve the odds of a person’s body accepting the new organ.
Defense News reports that the U.S. Navy is planning to unleash unmanned surface combatants — military robot warships, basically — to accompany other boats that are controlled by a human crew.
The move may come in response to China and Russia’s heavy investment in similar technologies that could put U.S. aircraft carriers at risk, according to Defense News’ analysis. Naval superiority is a priority for the Chinese military — which the Pentagon wants to challenge with artificial intelligence and automation investments.
Sea Hunters
Last year’s naval National Defense Strategy — when it was announced at the beginning of 2018 — was focused on backing up existing aircraft carriers and bolstering peacekeeping efforts. The new focus differs: smaller surface combatants, many of which will be unmanned, and equipped with state-of-the-art sensors.
The idea is to overwhelm the enemy and make it difficult for them to track a large number of smaller ships. Having a larger number of autonomous ships will also make sensor data collection more reliable and accurate.
“We want everything to be only as big as it needs to be. You make it smaller and more distributable, given all dollars being about equal,” U.S. Navy Surface Warfare Director Ronald Boxall told Defense News in a December interview. “And when I look at the force, I think: ‘Where can we use unmanned so that I can push it to a smaller platform?'”
One such autonomous warship has already made headlines in the past: the Defense Advanced Research Projects Agency’s (DARPA) Sea Hunter is a submarine-hunting warship that can operate without humans on board for 60 to 90 days straight. Details about the Sea Hunter have become sparse since the Navy recently classified information about its future.
iPhone Warships
The U.S. Navy is also moving to update the way it builds warships, and how computers and sensors on board function. The Navy wants all modern warships to be built around a single combat system that runs on every ship.
“For us to get faster, we either have to keep going with the model we had where we upgrade our flip phones, or we cross over the mentality to where it says: ‘I don’t care what model of iPhone you have — 7 or X or whatever you have — it will still run Waze or whatever [applications] you are trying to run,’” Boxall told Defense News.
One of the latest projects to come out of DARPA and the Pentagon’s emerging technologies unit sounds like something straight out of a long-lost Michael Crichton manuscript. According to a call for proposals issued last week, DARPA wants to explore methods of creating “conscious” robots and technologies using insect brains. Don’t these guys watch or read science fiction? Maybe they’re reading and watching too much science fiction.
The project is being called “Microscale Bio-mimetic Robust Artificial Intelligence Networks”, or μBRAIN (pronounced “microBRAIN”). According to its synopsis, the project seeks to combine the latest in artificial intelligence and machine learning technologies with the incredible feats of physiology and cognition performed by insects, some of the smallest members of the animal kingdom:
The Defense Advanced Research Projects Agency (DARPA) is issuing an Artificial Intelligence Exploration (AIE) opportunity inviting submissions of innovative basic research concepts exploring new computational frameworks and strategies drawn from the impressive computational capabilities of very small flying insects for whom evolutionary pressures have forced scale/size/energy reduction without loss of performance.
The proposal states that while AI systems are advancing exponentially, the amount of hardware and software required to create and maintain these systems makes most modern AI systems too large and unwieldy for many uses in the field. A computer smarter than the whole human race combined is great, but if you have to house it in a super-cooled hangar in the desert, how could it possibly wreak havoc on other nations’ electrical grids or spy on the CEOs of Chinese telecom firms?
That’s where tiny insects come in. Or maybe tiny cyborg insects reporting back to their shadowy black budget handlers. According to a document in the Pentagon’s proposal, studying the tiny-but-advanced brains of insects could open up new possibilities in form factor and function for AI systems, creating new avenues for solving problems in tiny packages:
Nature has forced on these small insects drastic miniaturization and energy efficiency, some having only a few hundred neurons in a compact form-factor, while maintaining basic functionality. Furthermore, these organisms are possibly able to display increased subjectivity of experience, which extends simple look-up table responses to potentially AI-relevant problem solving.
Of course, the science fiction fan in me makes me wonder what would happen if we start transplanting insect brains or neural networks into advanced machines. What happens if one gets loose and trains an army of insects to do its bidding? Didn’t these Pentagon spooks see the latest iteration of the Planet of the Apes franchise?
Of course, they’re not talking about taking a honeybee’s brain, dropping it into a tiny drone, opening the window, and pointing it in the direction of an Iranian nuclear facility. Studying neural networks is one thing; creating advanced insect cyborg spies is another. Still, as mathematician Ian Malcolm notes, nature finds a way, even when we try to tamper with it – maybe especially when we try to tamper with it. If DARPA and the Pentagon are involved, there are military or intelligence applications here, meaning the stakes are high if something goes awry. Like all things AI, though, we won’t know the end result until it’s too late.
Light is the fastest thing in the universe, so trying to catch it on the move is necessarily something of a challenge. We’ve had some success, but a new rig built by Caltech scientists pulls down a mind-boggling 10 trillion frames per second, meaning it can capture light as it travels along — and they have plans to make it a hundred times faster.
Understanding how light moves is fundamental to many fields, so it isn’t just idle curiosity driving the efforts of Jinyang Liang and his colleagues — not that there’d be anything wrong with that either. But there are potential applications in physics, engineering, and medicine that depend heavily on the behavior of light at scales so small, and so short, that they are at the very limit of what can be measured.
You may have heard about billion- and trillion-FPS cameras in the past, but those were likely “streak cameras” that do a bit of cheating to achieve those numbers.
A light pulse as captured by the T-CUP system.
If a pulse of light can be replicated perfectly, then you could send one every millisecond but offset the camera’s capture time by an even smaller fraction, like a handful of femtoseconds (a trillion times shorter). You’d capture one pulse when it was here, the next one when it was a little further, the next one when it was even further, and so on. The end result is a movie that’s indistinguishable in many ways from if you’d captured that first pulse at high speed.
This is highly effective — but you can’t always count on being able to produce a pulse of light a million times the exact same way. Perhaps you need to see what happens when it passes through a carefully engineered laser-etched lens that will be altered by the first pulse that strikes it. In cases like that, you need to capture that first pulse in real time — which means recording images not just with femtosecond precision, but only femtoseconds apart.
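A toy simulation makes the trick concrete. Assuming a perfectly repeatable pulse, sampling once per repetition with a slowly advancing offset reconstructs the pulse profile; all numbers below are illustrative:

```python
import numpy as np

# Toy model of the pulse-replication trick: sample an identical pulse
# once per repetition, advancing the capture instant by 10 femtoseconds
# each time (all numbers are illustrative).
def pulse(t_s):
    """A repeatable Gaussian pulse centered at 500 fs, about 100 fs wide."""
    return np.exp(-(((t_s - 500e-15) / 100e-15) ** 2))

period = 1e-3    # one pulse every millisecond
step = 10e-15    # capture offset grows by 10 fs per repetition

frames = []
for k in range(100):
    t_capture = k * period + k * step    # absolute capture time of frame k
    t_in_pulse = t_capture - k * period  # same instant, relative to pulse k
    frames.append(pulse(t_in_pulse))

# Stitched together, the 100 frames trace the pulse profile with 10 fs
# spacing even though successive captures were a whole millisecond apart.
print(f"peak {max(frames):.2f} at frame {int(np.argmax(frames))}")  # frame 50
```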
Simple, right?
That’s what the T-CUP method does. It combines a streak camera with a second static camera and a data collection method used in tomography.
“We knew that by using only a femtosecond streak camera, the image quality would be limited. So to improve this, we added another camera that acquires a static image. Combined with the image acquired by the femtosecond streak camera, we can use what is called a Radon transformation to obtain high-quality images while recording ten trillion frames per second,” explained co-author of the study Lihong Wang. That clears things right up!
At any rate the method allows for images — well, technically spatiotemporal datacubes — to be captured just 100 femtoseconds apart. That’s ten trillion per second, or it would be if they wanted to run it for that long, but there’s no storage array fast enough to write ten trillion datacubes per second to. So they can only keep it running for a handful of frames in a row for now — 25 during the experiment you see visualized here.
Those 25 frames show a femtosecond-long laser pulse passing through a beam splitter — note how at this scale the time it takes for the light to pass through the lens itself is nontrivial. You have to take this stuff into account!
This level of precision in real time is unprecedented, but the team isn’t done yet.
“We already see possibilities for increasing the speed to up to one quadrillion (10¹⁵) frames per second!” enthused Liang in the press release. Capturing the behavior of light at that scale and with this level of fidelity is leagues beyond what we were capable of just a few years ago and may open up entire new fields or lines of inquiry in physics and exotic materials.
27-12-2018
‘Photon chip’ computes ultrafast
Computer chips could be made a great deal faster if part of their signal transmission were handled by light.
The binary information generated in the transistors on a computer chip is transported via electrons to other chips, and possibly to processors as well. This transmission is already very fast, but according to computer scientists not yet fast enough. They are looking not only for ways to make chips even smaller (and thus processors even faster and memories more powerful) but also for ways to raise the speed of this signal transfer.
A team of American, British and French engineers has now made a start on a breakthrough, by developing a way to transfer electronic information onto beams of light. Photons, after all, have the advantage that they travel much faster (for now) than electrons. Unfortunately, the signal transfer between electrons and photons (and vice versa) has until now caused problems.
The researchers appear to have solved that problem by using a new kind of metamaterial: materials that can radically alter the properties of light. Until recently these were only used to bend and deflect light beams, but the researchers have now deployed them to rapidly convert light to another colour (frequency). That makes for smoother transmission between electrons and light particles.
If the signal transfer can take place via photons, chips will no longer need to be fitted with minuscule copper wires, which means they can also be made a good deal smaller.
26-12-2018
The Terrafugia Transition could end the long wait for flying cars
The car-plane hybrid goes on sale in 2019, but experts say air taxis might steal its thunder.
Terrafugia's Transition vehicle looks like a cross between a car and a miniature airplane — and can fit right in your garage.
Courtesy Terrafugia
By Tom Metcalfe
It might not be the most elegant-looking thing on the road or in the sky, but an automobile-airplane hybrid that’s being called the world's first practical flying car is almost ready to spread its wings.
The two-passenger Transition will go on sale in the U.S. next year at an estimated price of $400,000, according to Terrafugia, the Woburn, Massachusetts-based firm that makes it.
The Transition has four wheels, folding wings and a rear-mounted “pusher” propeller. Powered by a four-cylinder hybrid-electric engine, it can fly 100 miles an hour at altitudes of up to 9,000 feet, with a flying range of 400 miles. There are controls for both flying and driving: for the roads, conventional brake and accelerator pedals and a steering wheel; for flying, the usual yoke and rudder pedals.
The vehicle converts from driving to flying mode in less than a minute, according to Terrafugia. But don’t expect it to get you out of a traffic jam. Though it’s the first vehicle certified to drive on U.S. roads and fly in U.S. skies, it can take off and land only at airfields — and you’ll need a pilot's license.
Many flying car prototypes have been built in recent decades, but none has proven practical enough to become a full-fledged production vehicle. The Transition is designed mainly for light aircraft owners who don’t want to get stuck when bad weather makes flying impossible, or who want to avoid airfield parking fees and fuel costs, according to Terrafugia. The vehicle runs on ordinary premium gasoline and can be kept in a garage at home.
But the company, now owned by Chinese car maker Geely, also hopes the Transition will attract people who are new to private aviation. “We would like people who never thought of becoming a pilot before to consider it because, hey, it’s a flying car,” Carl Dietrich, Terrafugia’s founder, told Smithsonian Air and Space magazine.
Terrafugia was founded in 2006 with a plan for a “flying SUV” that earned Dietrich a prestigious technology prize. But with recent advances in autonomous passenger drones — electrically powered “air taxis” that take off and land vertically without a runway — experts say vehicles like the Transition are likely to be a niche product.
“The world changed while they were working on that vehicle,” says Richard Anderson, an aerospace engineering professor at Embry-Riddle Aeronautical University in Daytona Beach, Florida.
Investors worldwide are pouring billions of dollars into the development of air taxis, with manufacturers like Airbus and Germany’s Volocopter claiming to be well on the way to bringing the vehicles to market. “The business case is absolutely crystal clear, and the technology is here,” Anderson said.
Terrafugia said it is developing its own vertical-takeoff-and-landing passenger vehicle, dubbed the TF-2, that could take to the skies as a piloted aircraft in 2023. That’s likely to be several years before the first self-piloted air taxis get approval from the Federal Aviation Administration, Anderson said.
“These vehicles are things that were never seen before, so there's a learning process,” Anderson said of the FAA. “Even if they are willing to embrace the technology, they have to understand it before they're going to let it fly over our heads in a city.”
15-12-2018
Real life 'shrink ray' can reduce 3D structures to one thousandth of their original size - and could be used to make the next generation of miniature robots
The 'shrink ray' can reduce 3D structures to one thousandth of their original size
Scientists can put all kinds of materials in the polymer before they shrink it
This could include a variety of materials such as metals, quantum dots or DNA
These tiny structures could be used in many fields, including in robotics
MIT researchers have created a real life 'shrink ray' that can reduce 3D structures to one thousandth of their original size.
Scientists can put all kinds of useful materials in the polymer before they shrink it, including metals, quantum dots, and DNA.
The process - called implosion fabrication - is essentially the opposite of expansion microscopy, which is widely used by scientists to create 3D visualisations of microscopic cells.
Instead of making things bigger, scientists introduce molecules that block the negative charges between molecules so they no longer repel one another, which makes the structure contract.
Experts say that making such tiny structures could be useful in many fields, including in medicine and for creating nanoscale robotics.
MIT researchers have created a real life 'shrink ray' that can reduce 3D structures (pictured) to one thousandth of their original size
'It's a way of putting nearly any kind of material into a 3-D pattern with nanoscale precision,' said Edward Boyden, an associate professor of biological engineering and of brain and cognitive sciences at MIT.
Using the new technique, researchers can create any shape and structure they want, according to the paper published in Science.
The method can create lots of different shapes, ranging from tiny hollow spheres to microscopic chains.
After attaching useful materials to the polymer 'scaffold', they shrink it, generating structures one thousandth the volume of the original.
The researchers shrank hollow linked cubes and an Alice in Wonderland etching using the method.
Scientists say the technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it.
Currently, scientists are able to directly print 3D nanoscale objects.
However, this is only possible with specialised materials like polymers and plastics which have limited applications.
After attaching useful materials to the polymer 'scaffold', they shrink it, generating structures one thousandth the volume of the original. The researchers shrank hollow linked cubes (pictured) using this method
Researchers shrank an Alice in Wonderland etching using the method. Scientists say the technique uses equipment that many biology and materials science labs already have, making it widely accessible for researchers who want to try it
To overcome this, researchers decided to adapt a technique that was developed a few years ago for high-resolution imaging of brain tissue.
This technique, known as expansion microscopy, involves embedding tissue into a hydrogel and then expanding it.
Hundreds of research groups in biology and medicine are now using expansion microscopy as it enables 3D visualisation of cells and tissues with ordinary hardware.
The new technique involves reversing the process.
By doing this, scientists could create large-scale objects embedded in expanded hydrogels and then shrink them to the nanoscale.
They call this approach 'implosion fabrication.'
Just like they did in expansion microscopy, the researchers used a very absorbent material made of polyacrylate. This is a plastic commonly found in nappies.
Scientists can put all kinds of useful materials in the polymer before they shrink it such as metals, quantum dots and DNA. Pictured is the machine used to shrink objects
The polyacrylate forms the scaffold over which other materials can be attached.
It is then bathed in a solution that contains molecules of fluorescein, which attach to the scaffold when they are activated by laser light.
Then, they use two-photon microscopy to target points deep within the structure.
They attach fluorescein molecules to these specific locations within the gel.
These act as anchors that bind to other types of molecules that are in the structure.
'You attach the anchors where you want with light, and later you can attach whatever you want to the anchors,' Dr Boyden said.
'It could be a quantum dot, it could be a piece of DNA, it could be a gold nanoparticle.'
Researchers think these nanobjects could be used to create better lenses for cell phone cameras, microscopes (stock image), or endoscopes
Once the desired molecules are attached in the right locations, the researchers shrink the entire structure by adding an acid.
The acid blocks the negative charges in the polyacrylate gel so that they no longer repel each other, causing the gel to contract.
Using this technique, researchers can shrink the objects 10-fold in each dimension (for an overall 1,000-fold reduction in volume).
This ability to shrink not only allows for increased resolution, but also makes it possible to assemble materials in a low-density scaffold.
This means the scaffold can be easily modified, and the material becomes a dense solid once it is shrunk.
Researchers think these nanobjects could be used to create better lenses for cell phone cameras, microscopes, or endoscopes.
Farther in the future, researchers say that this approach could be used to build nanoscale electronics or robots.
WILL GLOBAL WARMING CAUSE SPECIES TO SHRINK?
A study conducted by the University of British Columbia (UBC) in Canada found that over the last century, the beetles in the region have shrunk.
By looking at eight species of beetle and measuring the animals from past and present they found that some beetles were adapting to a reduced body size.
The data also showed that the larger beetles were shrinking, but the smaller ones were not.
Around 50 million years ago the Earth warmed by three degrees Celsius (5.4°F) and as a result, animal species at the time shrank by 14 per cent.
Another warming event around 55 million years ago - called the Paleocene-Eocene Thermal Maximum (PETM) - warmed the earth by up to eight degrees Celsius (14.4°F).
In this instance, animal species of the time shrunk by up to a third.
Woolly mammoths were a victim of warming climate, shrinking habitat and increased hunting from a growing early-human population which drove them to extinction - along with many large animals
Shrinking in body size is seen from several global warming events.
With the global temperatures set to continue to rise, it is expected the average size of most animals will decrease.
As well as global warming, the world has seen a dramatic decrease in the amount of large animals.
So-called 'megafauna' - large animals - are especially vulnerable to extinction. With long life-spans and relatively small population numbers, they are less able to adapt to rapid change than smaller animals that reproduce more often.
Often hunted for trophies or for food, large animals like the mastodon, mammoths and the western black rhino, which was declared extinct in 2011, have been hunted to extinction.
Scientists are able to shrink objects
For the first time, researchers have produced nano-objects by shrinking. First, they assembled 3D objects in a special hydrogel, then an acid caused the gel and its contents to shrink. The 3D design thus became an object ten to a thousand times smaller - without distortions or defects. The big advantage: this "implosion fabrication" method is feasible with conventional technology and enables completely new nanoconstructs, as the researchers report in the specialist journal "Science".
Many research labs are already stocked with the equipment required for this kind of fabrication. Credit: The researchers
Team invents method to shrink objects to the nanoscale
Researchers at the Massachusetts Institute of Technology (MIT) have developed a method that, for the first time, produces detailed 3D objects on a nanoscale - by shrinking. To do this, they first position the components of the object in a larger precursor version. Then they shrink the whole thing, producing the desired object in nano format.
This so-called "implosion fabrication" is made possible by a special hydrogel made of polyacrylate/polyacrylamide. If, for example, this gel is exposed to an acid, the water content and the chemical bonds change so that the entire gel contracts evenly.
Complex 3D structures on the nanoscale - produced by shrinking. A 3D pattern created using implosion fabrication. Credit: MIT/Daniel Oran
The new method considerably expands the existing possibilities of nano-fabrication, as the researchers emphasize:
"With implosion fabrication, we can produce all kinds of structures, gradients, unconnected shapes or objects from several materials"
The big advantage is that these 3D structures can be assembled and designed before shrinking with a precision that is hardly possible in nano size.
Basic lab equipment can produce minuscule 3D-printed objects
Ed Boyden and colleagues
Alice in Wonderland created using implosion fabrication, before and after shrinking. But Boyden thinks it can go much smaller: in a handful of tests, the team was able to expand and shrink structures by 8,000 times.
DeepMind is at the forefront of artificial intelligence (A.I.). The computer system it developed, known as AlphaZero, amazed (and terrified) the world in 2017 when it was able to defeat chess masters at their own game, despite having learned the game only four hours before the matches. The machine has since been tested numerous times by even more chess grandmasters, and now people are seeing it do something not yet seen in machines – it is improvising.
Not only was AlphaZero a master at chess, it has also taught itself games such as shogi, commonly called Japanese chess, and Go. In each attempt, AlphaZero was able to beat the previous world champions of the games, who were all human. On DeepMind’s website, developers say they are “thrilled” to see the program developing improvisation and intuition skills, which are not previously known to be in machines.
In a paper published in Science magazine, the authors state that the machine’s mastery of the complicated game of Go, and its defeat of the world champion, demonstrated the power of its “deep convolutional neural networks”: it built up a massive knowledge of the game simply by playing it repeatedly, to the point that the paper’s authors credit it with “superhuman performance” in the game.
Computers have been beating humans at chess since 1997, but the addition of shogi, which is far more complicated than chess, and Go, which relies on practice and intuition, shows AlphaZero is able to not only defeat humans at their own games, but ultimately learn how to do it in better and more efficient ways.
When pitted against another chess engine, Stockfish, AlphaZero won 155 of 1,000 matches, lost six, and drew the remaining 839. Unlike most chess-playing A.I.s, however, AlphaZero does not try to preserve its pieces, opting instead to sacrifice them for longer-term advantage.
This ability comes from what the developers describe as a "neural network with millions of different tunable parameters, each learning its own rules of what is good in chess." With all of these variables, the machine, much like a human, can look at a position and judge the best course of action.
AlphaZero began as a blank slate, developing strategies and tactics from nothing but the basic rules of the games it plays. Its human-like style of play emerged purely from its own experience.
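To make the "blank slate plus self-play" idea concrete, here is a minimal toy sketch in Python. It is emphatically not DeepMind's code: AlphaZero pairs a deep neural network with Monte Carlo tree search, while this toy plays tic-tac-toe with a simple lookup table, and every name in it (choose, self_play, and so on) is our own invention. What it shares with AlphaZero is only the learning loop: start knowing nothing but the rules, play against yourself, and update position evaluations from the outcomes.

```python
import random
from collections import defaultdict

# Toy illustration of the self-play idea behind AlphaZero: start with
# no knowledge beyond the rules, play games against yourself, and
# update position values from the final outcomes. AlphaZero itself
# couples a deep neural network with Monte Carlo tree search; this
# tabular tic-tac-toe version only sketches the learning loop.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

value = defaultdict(float)  # estimated value of a position, from X's viewpoint
visits = defaultdict(int)   # how often each position has been seen

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def choose(board, player, explore=0.2):
    """Mostly pick the move whose successor position looks best;
    sometimes explore at random, as self-play training must."""
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < explore:
        return random.choice(moves)
    sign = 1 if player == "X" else -1  # O wants to minimise X's value
    return max(moves, key=lambda m: sign * value["".join(
        player if i == m else c for i, c in enumerate(board))])

def self_play():
    """Play one game against ourselves and credit every visited
    position with the final result (win/loss/draw for X)."""
    board, history, player = ["."] * 9, [], "X"
    while True:
        board[choose(board, player)] = player
        history.append("".join(board))
        w = winner(board)
        if w or "." not in board:
            result = {"X": 1.0, "O": -1.0, None: 0.0}[w]
            for pos in history:  # incremental average of outcomes
                visits[pos] += 1
                value[pos] += (result - value[pos]) / visits[pos]
            return
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play()
print("positions evaluated:", len(value))
```

Even this crude version displays the hallmark of the approach: with no opening book and no hand-coded heuristics, sensible play emerges purely from accumulated self-play outcomes.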
While many prominent thinkers, such as Elon Musk, have warned against A.I., citing the possibility that such mechanical minds could ultimately lead to human extinction, DeepMind's researchers believe that studying how this machine learns to play games can help crack real problems, such as why proteins become misfolded in Parkinson's and Alzheimer's disease. That protein-folding conundrum is the ultimate goal of A.I.s like AlphaZero.
Artificial intelligence is already beginning to spiral out of our control, a new report from top researchers warns. Not so much in a Skynet kind of sense, but more in a ‘technology companies and governments are already using AI in ways that amp up surveillance and further marginalize vulnerable populations’ kind of way.
On Thursday, the AI Now Institute, which is affiliated with New York University and is home to top AI researchers with Google and Microsoft, released a report detailing, essentially, the state of AI in 2018, and the raft of disconcerting trends unfolding in the field. What we broadly define as AI—machine learning, automated systems, etc.—is currently being developed faster than our regulatory system is prepared to handle, the report says. And it threatens to consolidate power in the tech companies and oppressive governments that deploy AI while rendering just about everyone else more vulnerable to its biases, capacities for surveillance, and myriad dysfunctions.
The report contains 10 recommendations for policymakers, all of which seem sound, as well as a diagnosis of the most potentially destructive trends. "Governments need to regulate AI," the first recommendation exhorts, "by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain." One massive Department of AI that attempts to regulate the field writ large won't cut it, the researchers warn; the report suggests regulators follow examples like the one set by the Federal Aviation Administration and tackle AI as it manifests, field by field.
But it also conveys a succinct assessment of the key problem areas in AI as they stand in 2018. As detailed by AI Now, they are:
The accountability gap between those who build AI systems (and profit off them) and those who stand to be impacted by those systems (you and me) is growing. Don't like the idea of being subjected to artificially intelligent systems that harvest your personal data or determine various outcomes for you? Too bad! The report finds that the recourse most citizens have against the systems that may impact them is shrinking, not growing.
AI is being used to amplify surveillance, often in horrifying ways. If you think the surveillance capacities of facial recognition technology are disturbing, wait till you see its even less scrupulous cousin, affect recognition. The Intercept’s Sam Biddle has a good write-up of the report’s treatment of affect recognition, which is basically modernized phrenology, practiced in real time.
The government is embracing autonomous decision software in the name of cost savings, but these systems are often a disaster for the disadvantaged. From systems that purport to streamline benefits applications online to those that claim to determine who is eligible for housing, so-called ADS systems can encode bias and erroneously reject applicants on baseless grounds. As Virginia Eubanks details in her book Automating Inequality, the people these systems fail are those least able to muster the time and resources needed to challenge them.
AI testing “in the wild” is rampant already. “Silicon Valley is known for its ‘move fast and break things’ mentality,” the report notes, and that is leading to companies testing AI systems in the public sector—or releasing them into the consumer space outright—without substantial oversight. The recent track record of Facebook—the original move fast, break thingser and AI evangelist—alone is example enough of why this strategy can prove disastrous.
Technological fixes to biased or problematic AI systems are proving inadequate. Google made waves when it announced it was tackling the ethics of machine learning, but efforts like these are already proving too narrow and technically oriented. Engineers tend to think they can fix engineering problems with, well, more engineering. But what is really required, the report argues, is a much deeper understanding of the history and social contexts of the datasets AI systems are trained on.
The full report is well worth reading, both for a tour of the myriad ways AI entered the public sphere—and collided with the public interest—in 2018, and for a detailed recipe for how our institutions might stay on top of this ever-complicating situation.
An MIT Media Lab team has built a plant cyborg. Its name is Elowan, and it can move around.
Image credits: Harpreet Sareen, Elbert Tiao / MIT Media Lab.
For most people, the word ‘cyborg’ doesn’t bring images of plants to mind — but it does at MIT’s Media Lab. Researchers in Harpreet Sareen’s lab at MIT have combined a plant with electronics to allow it to move. The cyborg — Elowan — relies on the plant’s sensory abilities to detect light and an electric motor to follow it.
Our photosynthesizing overlords
Plants are actually really good at detecting light. Sunflowers are a great example: you can actually see them move to follow the sun on its heavenly trek. Prior research has shown that plants accomplish this through the use of several natural sensors and response systems — among others, they keep track of humidity, temperature levels, and the amount of water in the soil.
However, plant’s aren’t very good at moving to a different place even if their ‘sensor and response systems’ tell them conditions aren’t very great. The MIT team wanted to fix that. They planned to give one plant more autonomy by fitting its pot with wheels, an electric motor, and assorted electrical sensors.
The way the cyborg works is relatively simple. The sensors pick up on the electrical signals generated by the plant and generate commands for the motor and wheels based on them. The result is, in effect, a plant that can move closer to light sources. The researchers proved this by placing the cyborg between two table lamps and then turning them on or off. The plant moved itself, with no prodding, toward the light that was turned on.
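To make that loop concrete, here is a hypothetical Python sketch of a sensor-in, motor-out controller. This is not the MIT team's code: read_plant_signal, read_light, and drive are invented stand-ins for a real hardware interface, and the trigger threshold is purely an assumption.

```python
import random
import time

# Hypothetical control loop in the spirit of Elowan: the plant's
# electrical activity gates movement, and light sensors steer the
# wheels toward the brighter side. All hardware calls are faked.

SIGNAL_THRESHOLD_MV = 2.5  # assumed trigger level, in millivolts

def read_plant_signal():
    """Stand-in for electrodes on the plant; returns a fake
    bioelectric reading in millivolts."""
    return random.uniform(0.0, 5.0)

def read_light(side):
    """Stand-in for a photoresistor on the 'left' or 'right'
    of the pot; returns a fake brightness value."""
    return random.uniform(0.0, 100.0)

def drive(direction):
    """Stand-in for the motor driver moving the wheeled pot."""
    print(f"driving {direction}")

def control_step():
    # Only move when the plant's own signal crosses the threshold,
    # mirroring the idea that the plant's responses trigger motion.
    if read_plant_signal() < SIGNAL_THRESHOLD_MV:
        return
    left, right = read_light("left"), read_light("right")
    # Steer toward the brighter side, like Elowan rolling to the lit lamp.
    drive("left" if left > right else "right")

if __name__ == "__main__":
    for _ in range(5):
        control_step()
        time.sleep(0.5)
```

Swapping the fakes for real sensor and motor-driver calls would be the whole of the port; the loop structure is the point.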
While undeniably funny, the research is practical, too. Elowan could be modified to move solar panels on a roof to maximize their light exposure. Alternatively, additional sensors and control units would allow a similar cyborg to maintain optimal temperature and humidity levels in, say, an office. With this in mind, the team plans to continue the research, extending it to more plant species in order to draw on their unique evolutionary adaptations.
While we all tacitly agree not to think about the many secret Island of Doctor Moreau-style genetic research labs hidden throughout the world, creating horrible genetic hybrids and superhumans as we speak, last week's very public announcement of the successful birth of twins whose genes were altered using CRISPR/Cas9 shocked the world with its brazenness. According to a video posted to YouTube, a Chinese researcher named He Jiankui claims to have modified the genomes of the twin baby girls in order to make them completely resistant to HIV.
What could go wrong?
Within hours, the scientific community and journalists around the world began to criticize the experiment as reckless and dangerous, some going so far as to call He the "Chinese Frankenstein." David Liu of Harvard and MIT, one of the inventors of CRISPR techniques, called the experiment "an appalling example of what not to do about a promising technology that has great potential to benefit society," adding that he hopes "it never happens again." Who knew tampering with the genetic makeup of living human beings would be so controversial? Everyone, as it turns out. Criticism aside, the story of He Jiankui and the genetically modified twins took a turn for the strange this week when the researcher went mysteriously missing. Where could He be?
Probably making Christmas lights in a Chinese prison.
There are conflicting reports about He’s whereabouts and Chinese news outlets are predictably tight-lipped about the matter. When asked about the geneticist, a spokeswoman for the Southern University of Science and Technology where He was an employee gave a rather enigmatic statement:
Right now nobody’s information is accurate, only the official channels are. We cannot answer any questions regarding the matter right now, but if we have any information, we will update it through our official channels.
Of course, those official channels have not been updated. In the meantime, He's laboratory has been shut down by Chinese authorities, who stated that "clinical procedures of gene-editing on human embryos for reproduction purposes are explicitly banned in China." Is He merely in hiding to avoid the negative press and criticism from the scientific community, or has he been disappeared, in classically Chinese fashion, for causing the Middle Kingdom to lose face? Unless and until this "Chinese Frankenstein" resurfaces, the case will remain a mystery.
Ah, who are we kidding? He’s organs have already been harvested in the back of a mobile execution van. Such is the price of scientific “progress.”