The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOS OR UAPS, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITIES, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world
Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt and Caelestia, two organizations that conduct in-depth research, even if they are at times critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, enabling us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organization. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
17-11-2018
Exploring Sophia’s multiple intelligences
Sophia The Robot, Hanson Robotics Limited's David Hanson, and SingularityNET's Ben Goertzel at RISE 2018.
Have you ever wondered whether a robot can have many different intelligences? This session with Ben Goertzel and David Hanson explores whether robots can become as broadly intelligent as humans.
An Interview With AI Robot Sophia
Piers Morgan FLIRTS with a robot called Sophia
The Dangers of Artificial Intelligence - Robot Sophia makes fun of Elon Musk - A.I. 2018
My Greatest Weakness is Curiosity | Sophia the Robot at Brain Bar
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
Be Aware! Xinhua's first English Artificial Intelligence Anchor makes debut!
The idea of super intelligent machines may sound like the plot of "The Terminator" or "The Matrix," but many experts say the idea isn't far-fetched. Some even think the singularity — the point at which artificial intelligence can match, and then overtake, human smarts — might happen in just 16 years.
The question is, could we evolve ourselves out of existence, being gradually replaced by the machines?
Many experts believe that our future society will be built on effective human-machine collaboration. But a lack of trust remains the single most important factor stopping this from happening.
AI’s decision-making process is usually too difficult for most people to understand. And interacting with something we don’t understand can cause anxiety and make us feel like we’re losing control.
Many people are also simply not familiar with many instances of AI actually working, because it often happens in the background. Instead, they are acutely aware of instances where AI goes terribly wrong.
Intelligent robots may one day overtake humans, and a striking example of the trend is Xinhua's first English AI anchor, which made its debut at the World Internet Conference in Wuzhen in 2018.
China’s state-run Xinhua News Agency unveiled a deeply creepy artificial intelligence news anchor at the government’s World Internet Conference tech expo.
The AI newscaster is a pure virtual mouthpiece in a country with tight controls on press freedom, and its first report this week featuring fawning coverage of China’s trade show felt distinctly dystopian.
“I will work tirelessly to keep you informed as texts will be typed into my system uninterrupted,” the AI anchor said in its debut video at the conference.
The virtual anchor’s features are based on those of a real-life Xinhua host named Zhang Zhao.
But as with most attempts at computer-generated humanity, the virtual anchor is both impressively realistic and yet unsettlingly soulless.
Its limited range of motion and expressions becomes repetitive after a short time, while its gray crisp suit and perfectly coiffed hair are even more rigid than human cable news hosts.
According to HuffPost, newsrooms have increasingly implemented AI technology in recent years, with outlets including The Washington Post using AI to write short reports on such topics as the outcomes of sporting events or to send news alerts.
Newscasters, cable news show anchors and news readers are finding their jobs in precarious positions for a number of reasons, but those in Western countries face no threat to their employment quite as big as the one facing their counterparts in China, where media providers are testing virtual newsreaders created by combining images and voices of humans with artificial intelligence (AI) technology.
Mr. President! Mr. President! Robbie Gort here from AI 2001 News. What's your stance on robot rights?
“AI anchors have officially become members of the Xinhua News Agency reporting team. They will work with other anchors to bring you authoritative, timely and accurate news information in both Chinese and English.”
Xinhua News Agency is the official state-run news provider of the People’s Republic of China. It’s the largest media organization in China and the largest news agency in the world in number of correspondents (10,000) worldwide. That last figure, according to South China Morning Post, is why the organization is turning to AI.
“Celebrity anchors are regarded as important assets at major news networks in the US. The highest paid news anchor, CNN’s Anderson Cooper, is reportedly paid US$100 million a year, while Diane Sawyer at ABC and Sean Hannity at Fox News earn US$80 million each. Celebrity anchors in China are generally paid a lot less because they work for state-run TV stations but they often earn extra money from product endorsements and book sales.”
Let’s take a look at the weather map
Not only do Xinhua’s lifelike AI anchors work cheap, they never need bathroom breaks, lunch hours, sleep, vacations, huge paychecks or ego-stroking. As long as human editors keep feeding them news items (and those people could be eliminated soon as many news agencies are testing automated news aggregators and story writers), the AI anchors will keep staring into the camera and talking — 24 hours a day, seven days a week … or at least until viewers get tired of staring at the same face, which may not be a problem (or at least one that anyone will feel safe complaining about) since Xinhua is the state news agency. (See a video of the first AI anchor here.)
The AI news anchors were announced at the recent World Internet Conference in Wuzhen and developed by Sogou, China’s second-largest search engine operator. It has proprietary technology in natural language processing – a branch of AI dealing with how computers understand and interpret human language – so it is undoubtedly working on improvements to give the AI newsreaders more realistic-looking speech, lip movements and facial expressions.
Who is in control?
What about questioning authorities or telling the truth? There has been ongoing controversy in the U.S. over local TV news staffs being forced to read copy written by the station’s owners and management without question and without changing any words. Artificial intelligence hasn’t reached the point yet where an AI anchor can get disgusted and walk off the set while yelling, “I’m mad as Hell and I’m not going to take this anymore!”
Can it? Should it? Will other media providers see this as something to fight or a means to greater profits?
What do you think … while you can still think for yourself? Anderson?
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
08-11-2018
Male Pleasure Robots Will Replace Men For Good
Men might be soon replaced by male love robots offering an ultimate experience in pleasure.
Men may soon be made redundant in the bedroom, as they might be replaced by male love robots that offer the ultimate experience in pleasure. Pleasure dolls have come on in leaps and bounds during the last couple of years; they are no longer the plastic-looking blow-up dolls they once were. Real Doll is one company that has changed how people think about pleasure dolls, and now a company is creating a male love doll.
Male Love Doll Can Respond Verbally
Real Doll said that the male love doll is just as real as a living partner, with the doll being able to respond to verbal communication. This may mean that along with the doll being able to provide pleasure in the bedroom it will also be able to have a conversation afterward instead of turning over to fall asleep.
The male love doll from Real Doll offers the ultimate pleasure experience. The doll, named “Gabriel”, can “go all night” if needed, as it does not tire nor suffer from performance issues as the majority of men do. This means it might take over from real men in the bedroom.
The male dolls will be programmed with unique conversations, and each will have their own stories. Women do not have to worry about going unfulfilled in the bedroom as the robot goes on and on. Women will also not have to worry about making any effort to get pleasure in the bedroom as the doll satisfies every whim.
Women Say They Cannot Tell Lifelike Male Dolls Apart from Real Men
Realistic-looking pleasure robots are becoming the in-thing; today they look more lifelike than ever before. Love dolls today not only look real but also feel far more real than the older blow-up dolls made from plastic. Modern love dolls are typically made from silicone, which gives a skin-like feel.
Video Reveals How Male Pleasure Robots Are Intricately Made
A teaser video has been released showing the male robot. However, it might be a bit strange at first waking up at the side of a robot, as seen in the video.
The robots are made from silicone and look extremely lifelike after being painted at Sinthetics. Each doll is customized to meet the requirements of the customer, including the head, body, and gender. The dolls are hand painted with a variety of details, including freckles, birthmarks and scars.
The private parts for the male robots are all based on real-life ones and come in various sizes. Gone are the times when the person who bought a love doll was known as the creepy man. Today the love doll industry is huge, selling to both men and women from all walks of life.
Inspired by the natural world, researchers have designed a microdrone that can pull objects up to 40 times its weight. By anchoring itself to various surfaces using adhesive action inspired by geckos and wasps, the tiny aerial vehicle is able to lift cameras, water bottles, and even pull door handles, while the drone itself is as light as a bar of soap.
FlyCroTugs’ multimodal operation allows them to combine small size, high mobility in cluttered and unstructured environments, and forceful manipulation.
Credit: Science Robotics.
The FlyCroTugs drone was developed by Stanford’s Mark Cutkosky and Dario Floreano at the École Polytechnique Fédérale de Lausanne in Switzerland. Other similar drones demonstrated previously could only lift twice their own weight using aerodynamic forces alone. To drastically improve their tiny aerial vehicle’s towing power, the researchers turned to one of the most feared predators in the insect world: the wasp.
When a wasp captures prey too big to transport by flight, it chooses to drag it using different attachment options. Researchers studied the various ways wasps choose to transport prey and computed the ratio of flight-related muscle to total mass that determines whether the predator flies or drags its prey.
“When you’re a small robot, the world is full of large obstacles,” said Matthew Estrada, a graduate student at Stanford and lead author of the new study published in Science Robotics. “Combining the aerodynamic forces of our aerial vehicle along with interaction forces that we generate with the attachment mechanisms resulted in something that was very mobile, very forceful and micro as well.”
When encountering a smooth surface, FlyCroTugs uses gecko-inspired grippers that grip through intermolecular forces between the adhesive and the surface. For rough surfaces, the tiny flying robot sticks its 32 microspines into the small pits of the surface, latching onto it.
Fitting all this hardware inside a robot with only twice the weight of a golf ball was no easy feat, but the team was up to the challenge. What they wound up with was a fast, small, and maneuverable flying robot capable of moving very large loads up to 40 times its own weight.
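To put those numbers in perspective, here is a quick back-of-the-envelope check in Python. It uses only figures quoted above (a body mass of roughly two golf balls, a 40x anchored load ratio, and the 2x ratio for aerodynamic forces alone); the exact specifications may differ.

```python
# Back-of-the-envelope check of the FlyCroTugs figures quoted in the article.
# Assumptions: mass ~ twice a golf ball and the 40x / 2x load ratios above.

GOLF_BALL_G = 45.9           # mass of a regulation golf ball, grams
drone_mass_g = 2 * GOLF_BALL_G
anchored_ratio = 40          # towing capacity relative to own weight, anchored
aero_ratio = 2               # typical capacity using aerodynamic forces alone

anchored_load_g = drone_mass_g * anchored_ratio
aero_load_g = drone_mass_g * aero_ratio

print(f"Drone mass:            {drone_mass_g:.0f} g")
print(f"Anchored towing load:  {anchored_load_g / 1000:.1f} kg")
print(f"Aerodynamic-only load: {aero_load_g / 1000:.2f} kg")
# ~3.7 kg anchored vs ~0.18 kg in free flight: anchoring buys a ~20x gain.
```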
“People tend to think of drones as machines that fly and observe the world, but flying insects do many other things – such as walking, climbing, grasping, building – and social insects can even cooperate to multiply forces,” said Floreano in a statement. “With this work, we show that small drones capable of anchoring to the environment and collaborating with fellow drones can perform tasks typically assigned to humanoid robots or much larger machines.”
FlyCroTugs represents a paradigm shift away from drones occupying a single niche. Not only does it show that drones are excellent for navigating remote locations, but they can also be used to interact with the physical world. In tests, FlyCroTugs flew atop a crumbling structure from where it hauled up a camera and even opened a door with the help of another drone (see the video).
In the future, the team hopes to develop an autonomous system that enables them to maneuver and coordinate multiple FlyCroTugs at once.
“The tools to create vehicles like this are becoming more accessible,” said Estrada. “I’m excited at the prospect of increasingly incorporating these attachment mechanisms into the designer’s tool belt, enabling robots to take advantage of interaction forces with their environment and put these to useful ends.”
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
20-10-2018
THE BOSTON DYNAMICS ROBOT DOG HAS LEARNED TO DANCE TO UPTOWN FUNK BY BRUNO MARS
The American robotics company Boston Dynamics is well known for its creations.
Users across the web were especially impressed by a video in which the Boston Dynamics robot dog opens a door, even resisting a person trying to hold it back.
The well-mannered four-legged machine SpotMini had already proved that it can easily open a door and walk through it, even with a man armed with a hockey stick trying to stop it.
Now the robot dog Boston Dynamics has learned to dance to the song Uptown Funk by American musician Mark Ronson, recorded with the singer Bruno Mars.
One user commented that this is how the robots will dance after their victory over humanity.
Another resented the fact that huge sums were invested in a robotic all-terrain machine just to get it to dance funk and shake its rear on screen.
Another viewer suggested this is what dancing will look like in the year 2594, while yet another conceded that the tin man dances better than we cuts of meat do.
Another user claims that since Boston Dynamics was bought by the Japanese, the robot merely reproduces the wonderful mechanics of living beings but has no “brain”: everything it shows is the result of an operator's work behind the scenes. As an example, he cites the Japanese robot ASIMO.
And the “brain”, the AI, matters far more than a chassis with a high-quality gyroscope and a powerful processor that merely repeats commands. Users doubt that the robot has AI of sufficient scale to dance on its own.
WHEN SOPHIA THE ROBOT first switched on, the world couldn’t get enough. It had a cheery personality, it joked with late-night hosts, it had facial expressions that echoed our own. Here it was, finally — a robot plucked straight out of science fiction, the closest thing to true artificial intelligence that we had ever seen.
There’s no doubt that Sophia is an impressive piece of engineering. Parents-slash-collaborating-tech-companies Hanson Robotics and SingularityNET equipped Sophia with sophisticated neural networks that give Sophia the ability to learn from people and to detect and mirror emotional responses, which makes it seem like the robot has a personality. It didn’t take much to convince people of Sophia’s apparent humanity — many of Futurism’s own articles refer to the robot as “her.” Piers Morgan even decided to try his luck for a date and/or sexually harass the robot, depending on how you want to look at it.
“Oh yeah, she is basically alive,” Hanson Robotics CEO David Hanson said of Sophia during a 2017 appearance on Jimmy Fallon’s Tonight Show. And while Hanson Robotics never officially claimed that Sophia contained artificial general intelligence — the comprehensive, life-like AI that we see in science fiction — the adoring and uncritical press that followed all those public appearances only helped the company grow.
But as Sophia became more popular and people took a closer look, cracks emerged. It became harder to believe that Sophia was the all-encompassing artificial intelligence that we all wanted it to be. Over time, articles that might have once oohed and ahhed about Sophia’s conversational skills became more focused on the fact that they were partially scripted in advance.
Ben Goertzel, CEO of SingularityNET and Chief Scientist of Hanson Robotics, isn’t under any illusions about what Sophia is capable of. “Sophia and the other Hanson robots are not really ‘pure’ as computer science research systems, because they combine so many different pieces and aspects in complex ways. They are not pure learning systems, but they do involve learning on various levels (learning in their neural net visual systems, learning in their OpenCog dialogue systems, etc.),” he told Futurism.
But he’s interested to find that Sophia inspires a lot of different reactions from the public. “Public perception of Sophia in her various aspects — her intelligence, her appearance, her lovability — seems to be all over the map, and I find this quite fascinating,” Goertzel said.
Hanson finds it unfortunate when people think Sophia is capable of more or less than she really is, but also said that he doesn’t mind the benefits of the added hype. Hype which, again, has been bolstered by the two companies’ repeated publicity stunts.
Highly-publicized projects like Sophia convince us that true AI — human-like and perhaps even conscious — is right around the corner. But in reality, we’re not even close.
The true state of AI research has fallen far behind the technological fairy tales we’ve been led to believe. And if we don’t treat AI with a healthier dose of realism and skepticism, the field may be stuck in this rut forever.
NAILING DOWN A TRUE definition of artificial intelligence is tricky. The field of AI, constantly reshaped by new developments and changing goalposts, is sometimes best described by explaining what it is not.
“People think AI is a smart robot that can do things a very smart person would — a robot that knows everything and can answer any question,” Emad Mousavi, a data scientist who founded a platform called QuiGig that connects freelancers, told Futurism. But this is not what experts really mean when they talk about AI. “In general, AI refers to computer programs that can complete various analyses and use some predefined criteria to make decisions.”
Among the ever-distant goalposts for human-level artificial intelligence (HLAI) are the ability to communicate effectively — chatbots and machine learning-based language processors struggle to infer meaning or to understand nuance — and the ability to continue learning over time. Currently, the AI systems with which we interact, including those being developed for self-driving cars, do all their learning before they are deployed and then stop forever.
“They are problems that are easy to describe but are unsolvable for the current state of machine learning techniques,” Tomas Mikolov, a research scientist at Facebook AI, told Futurism.
Right now, AI doesn’t have free will and certainly isn’t conscious — two assumptions people tend to make when faced with advanced or over-hyped technologies, Mousavi said. The most advanced AI systems out there are merely products that follow processes defined by smart people. They can’t make decisions on their own.
In machine learning, which includes deep learning and neural networks, an algorithm is presented with boatloads of training data — examples of whatever it is that the algorithm is learning to do, labeled by people — until it can complete the task on its own. For facial recognition software, this means feeding thousands of photos or videos of faces into the system until it can reliably detect a face from an unlabeled sample.
Our best machine learning algorithms are generally just memorizing and running statistical models. To call it “learning” is to anthropomorphize machines that operate on a very different wavelength from our brains. Artificial intelligence is now such a big catch-all term that practically any computer program that automatically does something is referred to as AI.
If you train an algorithm to add two numbers, it will just look up or copy the correct answer from a table, Mikolov, the Facebook AI scientist, explained. But it can’t generalize a better understanding of mathematical operations from its training. After learning that five plus two equals seven, you as a person might be able to figure out that seven minus two equals five. But if you ask your algorithm to subtract two numbers after teaching it to add, it won’t be able to. The artificial intelligence, as it were, was trained to add, not to understand what it means to add. If you want it to subtract, you’ll need to train it all over again — a process that notoriously wipes out whatever the AI system had previously learned.
“It’s actually often the case that it’s easier to start learning from scratch than trying to retrain the previous model,” Mikolov said.
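To make Mikolov's point concrete, here is a minimal self-contained sketch (not the systems he describes) using a small scikit-learn network: fit on addition, it can only add, and retraining the same weights on subtraction overwrites the addition mapping.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(5000, 2))
y_add = X[:, 0] + X[:, 1]                # labels for the addition task
y_sub = X[:, 0] - X[:, 1]                # labels for the subtraction task

# warm_start=True makes each .fit() continue from the current weights,
# which is what exposes the "forgetting" below.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     warm_start=True, random_state=0)

model.fit(X, y_add)
probe = np.array([[5.0, 2.0]])
print("trained to add,   (5, 2) ->", model.predict(probe))  # close to 7
# There is no way to ask this model to subtract: it computes the one
# input-to-output mapping it was fit on, nothing more.

model.fit(X, y_sub)                      # retrain the same weights on subtraction
print("after retraining, (5, 2) ->", model.predict(probe))  # close to 3; 7 is gone
```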
These flaws are no secret to members of the AI community. Yet, all the same, these machine learning systems are often touted as the cutting edge of artificial intelligence. In truth, they’re actually quite dumb.
Take, for example, an image captioning algorithm. A few years back, one of these got some wide-eyed coverage because of the sophisticated language it seemed to generate.
“Everyone was very impressed by the ability of the system, and soon it was found that 90 percent of these captions were actually found in the training data,” Mikolov told Futurism. “So they were not actually produced by the machine; the machine just copied what it did see that the human annotators provided for a similar image so it seemed to have a lot of interesting complexity.” What people mistook for a robotic sense of humor, Mikolov added, was just a dumb computer hitting copy and paste.
“It’s not some machine intelligence that you’re communicating with. It can be a useful system on its own, but it’s not AI,” said Mikolov. He said that it took a while for people to realize the problems with the algorithm. At first, they were nothing but impressed.
Image Credit: Victor Tangermann
WHERE DID WE GO so off course? The problem is when our present-day systems, which are so limited, are marketed and hyped up to the point that the public believes we have technology that we have no goddamn clue how to build.
“I am frequently entertained to see the way my research takes on exaggerated proportions as it progresses through the media,” Nancy Fulda, a computer scientist working on broader AI systems at Brigham Young University, told Futurism. The reporters who interview her are usually pretty knowledgeable, she said. “But there are also websites that pick up those primary stories and report on the technology without a solid understanding of how it works. The whole thing is a bit like a game of ‘telephone’ — the technical details of the project get lost and the system begins to seem self-willed and almost magical. At some point, I almost don’t recognize my own research anymore.”
Some researchers themselves are guilty of fanning this flame. And then the reporters who don’t have much technical expertise and don’t look behind the curtain are complicit. Even worse, some journalists are happy to play along and add hype to their coverage.
Other problem actors: people who make an AI algorithm present the back-end work they did as that algorithm’s own creative output. Mikolov calls this a dishonest practice akin to sleight of hand. “I think it’s quite misleading that some researchers who are very well aware of these limitations are trying to convince the public that their work is AI,” Mikolov said.
That’s important because the way people think AI research is going will depend on whether they want money allocated to it. This unwarranted hype could be preventing the field from making real, useful progress. Financial investments in artificial intelligence are inexorably linked to the level of interest (read: hype) in the field. That interest level — and corresponding investments — fluctuate wildly whenever Sophia has a stilted conversation or some new machine learning algorithm accomplishes something mildly interesting. That makes it hard to establish a steady, baseline flow of capital that researchers can depend on, Mikolov suggested.
Mikolov hopes to one day create a genuinely intelligent AI assistant — a goal that he told Futurism is still a distant pipedream. A few years ago, Mikolov, along with his colleagues at Facebook AI, published a paper outlining how this might be possible and the steps it might take to get there. But when we spoke at the Joint Multi-Conference on Human-Level Artificial Intelligence held in August by Prague-based AI startup GoodAI, Mikolov mentioned that many of the avenues people are exploring to create something like this are likely dead ends.
One of these likely dead ends, unfortunately, is reinforcement learning. Reinforcement learning systems, which teach themselves to complete a task through trial and error-based experimentation instead of using training data (think of a dog fetching a stick for treats), are often oversold, according to John Langford, Principal Researcher for Microsoft AI. Almost anytime someone brags about a reinforcement-learning AI system, Langford said, they actually gave the algorithm some shortcuts or limited the scope of the problem it was supposed to solve in the first place.
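For readers unfamiliar with the technique, the sketch below shows tabular Q-learning, the textbook form of reinforcement learning, on a deliberately tiny toy world of our own (not one of Langford's examples). Note how much the problem must be shrunk for trial and error to work at all, which is exactly the kind of scope-limiting he describes.

```python
import numpy as np

N_STATES, GOAL = 6, 5          # a corridor of 6 cells with a reward at the end
ACTIONS = (-1, +1)             # step left, step right
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(0)

for episode in range(300):
    s = 0
    while s != GOAL:
        # epsilon-greedy: mostly exploit the current Q table, sometimes explore
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0                          # reward only at the goal
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # temporal-difference update
        s = s2

# Learned policy: action index 1 (step right) in states 0-4; state 5 is terminal.
print(Q.argmax(axis=1))
```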
The hype that comes from these sorts of algorithms helps the researcher sell their work and secure grants. Press people and journalists use it to draw audiences to their platforms. But the public suffers — this vicious cycle leaves everyone else unaware as to what AI can really do.
There are telltale signs, Mikolov says, that can help you see through the misdirection. The biggest red flag is whether or not you as a layperson (and potential customer) are allowed to demo the technology for yourself.
“A magician will ask someone from the public to test that the setup is correct, but the person specifically selected by the magician is working with him. So if somebody shows you the system, then there’s a good likelihood you are just being fooled,” Mikolov said. “If you are knowledgeable about the usual tricks, it’s easy to break all these so-called intelligent systems. If you are at least a little bit critical, you will see that what [supposedly AI-driven chatbots] are saying is very easy to distinguish from humans.”
Mikolov suggests that you should question the intelligence of anyone trying to sell you the idea that they’ve beaten the Turing Test and created a chatbot that can hold a real conversation. Again, think of Sophia’s prepared dialogue for a given event.
“Maybe I should not be so critical here, but I just can’t help myself when you have these things like the Sophia thing and so on, where they’re trying to create the impression that they are communicating with the robot and so on,” Mikolov told Futurism. “Unfortunately, it’s quite easy for people to fall for these magician tricks and fall for the illusion, unless you’re a machine learning researcher who knows these tricks and knows what’s behind them.”
Unfortunately, so much attention to these misleading projects can stand in the way of progress by people with truly original, revolutionary ideas. It’s hard to get funding to build something brand new, something that might lead to AI that can do what people already expect it to be able to do, when venture capitalists just want to fund the next machine learning solution.
If we want those projects to flourish, if we ever want to take tangible steps towards artificial general intelligence, the field will need to be a lot more transparent about what it does and how much it matters.
“I am hopeful that there will be some super smart people who come with some new ideas and will not just copy what is being done,” said Mikolov. “Nowadays it’s some small, incremental improvement. But there will be smart people coming with new ideas that will bring the field forward.”
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
WATCH THE BOSTON DYNAMICS ROBODOG TWERK AND MOONWALK TO “UPTOWN FUNK”
By Kristin Houser | Filed under: Robots & Machines
The last time we saw Boston Dynamics’s dog-like robot SpotMini, it was on the job, prancing around a construction site to demonstrate its surveying skills.
But all work and no play makes SpotMini a dull bot. Sometimes, a robodog just wants to dance.
Just Watch
On Tuesday, Boston Dynamics released a video of SpotMini dancing to Mark Ronson’s “Uptown Funk.”
In the clip, the bot demonstrates a bevy of surprisingly slick moves. It twerks. It moonwalks. It suggestively gyrates its robot hips while seemingly staring directly into your soul.
Clearly, if things don’t work out in the construction industry, SpotMini could always pursue a career as a backup dancer. And that’s great because even a robodog needs to have options.
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
15-10-2018
Scientists Have Created Shape Shifting Liquid Metal That Can Be Programmed
This metal can be applied to many operations in the future, including soft robotics and even flexible computer displays.
By Danielle De La Bastide
In a terrifying breakthrough similar to the metal morphing villain in Terminator 2, scientists at the University of Sussex and Swansea University have discovered a way to apply electrical charges to liquid metal and coax it into 3D shapes such as letters and even a heart.
This discovery has been called an “extremely promising” new kind of material that can be programmed to alter its shape.
Yutaka Tokuda, the Research Associate, working on this project at the University of Sussex, says: “This is a new class of programmable materials in a liquid state which can dynamically transform from a simple droplet shape to many other complex geometry in a controllable manner.
“While this work is in its early stages, the compelling evidence of detailed 2D control of liquid metals excites us to explore more potential applications in computer graphics, smart electronics, soft robotics and flexible displays.”
The scientists used electric fields to shape the liquid; these areas are created by a computer meaning that both the position and form of the liquid metal can be manipulated dynamically.
Professor Sriram Subramanian, head of the INTERACT Lab at the University of Sussex, said: “Liquid metals are an extremely promising class of materials for deformable applications; their unique properties include voltage-controlled surface tension, high liquid-state conductivity and liquid-solid phase transition at room temperature.
One of the long-term visions of us and many other researchers is to change the physical shape, appearance, and functionality of any object through digital control to create intelligent, dexterous and useful objects that exceed the functionality of any current display or robot.”
The research was presented last month at the ACM Interactive Surfaces and Spaces 2017 conference in Brighton.
Carnegie Mellon Metal Alloy
This development is not the only one to emerge from the science community. Research engineers at Carnegie Mellon University have created a new metal alloy that exists in a liquid state at room temperature and could enable liquid metal transistors, flexible circuitry and perhaps even self-repairing circuits in the far-flung future.
Created at the Soft Machines Lab at Carnegie Mellon by researchers Carmel Majidi, Michael Dickey, and James Wissman, this alloy is the result of marrying indium and gallium. It would take only two drops of this liquid metal to form or break a circuit, thereby opening or closing it much like a traditional transistor. Better yet, it requires a voltage of only 1-10 volts.
Electronic Blood
These molten metals, or “electronic blood”, are set to completely change computing for upcoming generations. IBM has also been developing its own form of electric sustenance since 2013: REPCOOL, or redox flow electrochemistry for power delivery and cooling, is a project modeled after the structure and power supply of the brain, where our blood capillary system both cools and supplies energy to this vital organ. With electronic blood, the researchers believe the same effect can be applied to overheated computers.
"Compared to today's top computers, however, the human brain is roughly 10,000 times denser and 10,000 times more energy-efficient. The research team believes that their approach could reduce the size of a computer with a performance of 1 petaflop/s from the dimensions of a school classroom to that of an average PC, or in other words to a volume of about 10 liters," said Dr. Bruno Michel from IBM Research.
The first applications of this Electronic Blood should take place in 2030.
Robots and artificial intelligence are rapidly spreading to more and more areas of our daily lives in a sometimes uneasy alliance. Of all these developments, though, the one which most strongly hints at the strange-but-inevitable future in which humans and robots share the Earth as equals is the rise of sex robots. I guess it was inevitable that as soon as we were able to make walking, talking robots, people would begin finding ways to do naughty things with them. Our reproductive drives are among the strongest in the human experience, and taking the human element out of the equation somewhat simplifies things.
Emphasis on “somewhat.”
Of course, there are dangers and downsides to sexbots. Like any technology, they can malfunction or be hacked, and sex robots also open the possibility that people will use them for acts which are illegal or dangerous with real humans. Nevertheless, many groups including the Church of Satan believe that sex robots will be a positive development for humankind, offering the “freedom of choice to satisfy your most secret desires with no-one to be bothered” or harmed by whatever you happen to do once your clothes come off. Or stay on, you do you.
QUERY: WAS THE INTERACTION EQUALLY ACCEPTABLE FOR BOTH PARTIES?
While ethicists, roboticists, and psychologists debate the merits and negatives of sex with robots, interesting questions have arisen. For one: if you asked your sex robot to whip you, choke you, pour hot wax on you, or engage in other acts of BDSM play (bondage, discipline, sadism, or masochism), would the robots be able to comply? Should they be able to comply? After all, chief among Isaac Asimov’s Three Laws of Robotics is the rule that “a robot may not injure a human being or, through inaction, allow a human being to come to harm.” The second law states that “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” Put these in the context of BDSM sexbots, and it’s easy to see how complicated things could get.
While Asimov’s laws aren’t necessarily followed by everyone in the robotics industry, they have for decades served as a sort of baseline for how we conceptualize human-robot interactions. Thus, these laws can serve as food for thought for how we might shape laws concerning all sorts of robotic companions, including sex robots. This week, Gizmodo surveyed a group of lawyers, ethicists, computer scientists, and philosophers whether or not (hypothetical) BDSM sex robots would violate Isaac Asimov’s Three Laws of Robotics, and the answers, as you might imagine, were mixed and thought-provoking.
Most, however, seemed to center on the fact that “harm” and the types of pain inflicted by BDSM practices aren’t quite the same thing, and that harm inflicted by a BDSM robot would presumably be welcomed by the human user. Should robots be able to whip us if we ask them to? I mean, why not? With a few bucks’ worth of parts from the local hardware store and some duct tape, you could rig up your electric drill to do the same thing. Why not give the drill a human face and natural language processing to make the whole thing more exciting?
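As a thought experiment, here is a deliberately naive rendering of the first two laws in Python. Every name in it is illustrative rather than drawn from any real robotics framework; the point is only that once “harm” is a bare boolean, consent has nowhere to enter the decision.

```python
# A naive sketch of Asimov's first two laws as code. Once "harm" is a crude
# yes/no flag, a robot that follows the First Law can never comply with a
# consensual BDSM request, regardless of what the Second Law says about
# obeying orders. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Order:
    action: str
    causes_harm: bool      # the First Law's notion of harm: a bare boolean
    consented: bool        # the nuance the Three Laws never mention

def may_comply(order: Order) -> bool:
    if order.causes_harm:          # First Law: never injure a human
        return False               # consent is invisible at this level
    return True                    # Second Law: obey otherwise

print(may_comply(Order("fetch coffee", causes_harm=False, consented=True)))  # True
print(may_comply(Order("hot wax",      causes_harm=True,  consented=True)))  # False
```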
While these types of questions might seem silly, commercial sex robots are already available for sale, and recent developments suggest it will be only a few years before full-on robot brothels start popping up around the world. We’ll soon have to grapple with questions of BDSM robots, suicide-by-robot, and other ethical gray areas which will appear as robots rise up to take their place alongside humankind.
Scientists have built an amoeba-like robot with 32 individually controlled legs in a bid to find the perfect combination of stability and control.
The theory for the unusual robot comes from previous experience with robot building that found a robot with more legs is often easier to control.
The robot is called Mochibot and is based on a shape called a rhombic triacontahedron - a polyhedron with 32 vertices and 30 faces made of rhombuses (or rhombi).
The ideal shape for a robot that can travel in any direction at any moment is a sphere.
However, a sphere is flawed because it relies on only a single point of contact with the floor, making the machine unstable.
Mochibot is based on a sphere but the team made some improvements to make it easier to control.
Its deformability allows it to adapt to the terrain and how much ground contact it has.
The innovative design moves by retracting the arms in the direction of motion and simultaneously extending the arms on the other side.
To stop, it flattens itself out parallel to the ground, as per IEEE Spectrum.
The deformability comes from the individually telescoping legs as each one is made of three sliding rails (which behave like linear actuators).
This allows them to extend to just over half a meter in length or contract to less than a quarter meter.
In practice, these extremes are not used; the maximum diameter of Mochibot is about a meter (40 inches).
It weighs 10 kilograms (22 pounds) including the batteries, and has plenty of room inside for a payload.
A variety of cameras, sensors, or sampling devices could be integrated into the arms.
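As an illustration of how such a rolling gait might be commanded, here is a small sketch based on our reading of the description above; it is not the Mochibot team's actual controller. Legs aligned with the travel direction retract while the legs opposite extend, with the length limits taken from the article.

```python
# Hypothetical set-point computation for Mochibot-style telescoping legs:
# retract legs pointing along the desired direction of motion, extend the
# ones opposite, and leave sideways legs at the neutral length.
import numpy as np

MIN_LEN, MAX_LEN = 0.25, 0.5          # leg length limits in metres (from the article)
NEUTRAL = 0.5 * (MIN_LEN + MAX_LEN)
AMPLITUDE = 0.5 * (MAX_LEN - MIN_LEN)

def leg_setpoints(leg_dirs: np.ndarray, motion_dir: np.ndarray) -> np.ndarray:
    """leg_dirs: (N, 3) unit vectors from the hub to each vertex;
    motion_dir: (3,) unit vector of desired travel."""
    alignment = leg_dirs @ motion_dir          # cos(angle) per leg, in [-1, 1]
    return NEUTRAL - AMPLITUDE * alignment     # aligned legs shorten, rear legs extend

# Three example legs: along +x, along -x, and sideways along +y.
dirs = np.array([[1.0, 0, 0], [-1.0, 0, 0], [0, 1.0, 0]])
print(leg_setpoints(dirs, np.array([1.0, 0, 0])))   # [0.25, 0.5, 0.375]
```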
It’s somewhat similar to one of the tensegrity robots that Nasa has been working on for a while but Mochibot isn’t squishy in the same way that a tensegrity robot is.
The Nasa robot can survive being thrown from a roof; Mochibot is not quite as durable.
Tensegrity robots locomote by flopping over themselves in a series of motions so complex that it’s taken machine-learning algorithms to figure out how to get them to move efficiently.
This makes them unfathomably complicated and very difficult to steer in a particular direction.
Mochibot’s big advantage over a tensegrity robot is that it can move smoothly and continuously in any direction you like by altering its shape.
Nasa said in a report on its tensegrity robots: 'Nasa desires lightweight, deployable and reliable devices.
'Thus, the long term goal of this work is to develop actively controlled tensegrity structures and devices, which can be deployed from a small volume and used in a variety of applications including limbs used for grappling and manipulating the environment or used as a stabilizing and balancing limb during extreme terrain access.'
The plethora of legs gives the urchin-like robot an advantage over wheeled exploration robots as well, since all directions can be optimal for movement rather than just forward and backward.
The designers also suggest that Mochibot is better at dealing with deformable terrain like sand or loose rock, because its method of rolling locomotion is much less traction dependent.
For applications like planetary exploration or disaster response, Mochibot has the potential to be a versatile platform.
It’s highly mobile, very redundant, and looks like it could manage a hefty science payload.
The next step is to experiment with different kinds of terrain, making sure that Mochibot can roll itself up and down slopes and over rocks and gullies without any problems.
Robots are an integral part of life and they continue to seize the public's attention with the ever-widening guises and uses they come in.
Boston Dynamics' robo-dog went viral when a video surfaced of the machine climbing stairs and opening doors.
Since then the company has developed a humanoid robot called Atlas.
According to the company, Atlas is a 'high mobility, humanoid robot designed to negotiate outdoor, rough terrain'.
Atlas measures 1.5m (4.9ft) tall and weighs 75kg (11.8st).
WHAT IS BOSTON DYNAMICS' ATLAS HUMANOID ROBOT?
Atlas is the most human-like robot in Boston Dynamics' line-up.
It was first unveiled to the public on 11 July 2013.
The humanoid walks on two legs, leaving its arms free to lift, carry, and manipulate objects in its environment.
Atlas is able to hold its balance when it is jostled or pushed by an external force. Should it fall over, the humanoid robot is capable of getting up again on its own
Stereo vision, range sensing and other sensors allow Atlas to walk across rough terrain and keep its balance.
'In extremely challenging terrain, Atlas is strong and coordinated enough to climb using hands and feet, to pick its way through congested spaces,' Boston Dynamics claims.
Atlas is designed to help emergency services in search and rescue operations.
The robot will be used to shut off valves, open doors and operate powered equipment in environments where human rescuers could not survive.
The US Department of Defence said it has no interest in using Atlas in warfare.
The aviation industry, born only a little over a century ago, is on the brink of an enormous change. While electric cars and electric scooters are already dotting city streets, planes are preparing to join the emissions-free club.
On October 1, HES Energy Systems announced plans to craft the first regional hydrogen-electric passenger plane in the world. The company aims for the four-passenger aircraft, named Element One, to take to the skies in 2025.
“We are looking at innovative business models and exploring collaboration with companies such as Wingly,” Taras Wankewycz, CEO of HES Energy Systems, told Inverse in an email. Wingly, a flight-sharing startup, sees a perfect pairing between Element One and France’s underused airfields.
The zero-emission plane boasts a range of 500 km to 5,000 km in service, thanks to its lightweight hydrogen fuel cell technology.
What Makes Today the Moment to Go Electric?
The title of first electric plane to take flight actually goes to Heino Brditschka, an airplane manufacturer who made it 300 meters into the air for about 10 minutes in 1973. But the electric aircraft industry only took off in earnest over the past nine years, spurred on mostly by start-ups and new players in aviation, according to consulting firm Roland Berger in a Financial Times report. That’s helped drive more innovation: companies like Siemens, with its record-breaking 200-plus mile per hour electric 330LE, as well as projects to give the Boeing 737 an electric face, are working on similar initiatives.
Aside from competition, the recent push to electric flight is chiefly motivated by environmental concerns. Aviation makes up 3 percent of global carbon emissions, according to the EU’s Clean Sky 2 initiative. And with air travel projected to increase threefold by 2050, the industry is trying to avoid contributing to the problem of climate change any more than it already is.
See also: NASA Is Developing A Supersonic Plane That Is (Hopefully) Super Quiet
In the context of rising emissions, this makes a plane like Element One — designed to produce zero emissions — absolutely transformative. The aircraft would use ultra-light hydrogen fuel cells (stored either as a gas or a liquid) to tackle the industry-wide challenge that battery energy density does not match traditional fuel density (in other words, the weight of batteries needed to power an aircraft could be overwhelming). The Element One will also take only 10 minutes to refuel, and may eventually use solar or wind energy to recharge mid-flight. Although the prototype fits four passengers, the aircraft could scale up to 10-20 passengers or more, according to Wankewycz. Innovations like these allow Element One to outperform other battery-electric airplanes, reaching a range of 500 km (a little longer than the Grand Canyon) to 5,000 km (a little over the distance from L.A. to New York).
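A rough worked comparison shows why fuel-cell designers make this trade. The figures below are generic textbook values (battery and hydrogen specific energy, fuel-cell efficiency, tank mass overhead) assumed for illustration, not HES's proprietary numbers.

```python
# Rough mass comparison of energy storage for the same usable on-board energy.
# All figures are generic approximations, not Element One specifications.
LI_ION_WH_PER_KG = 250          # decent current lithium-ion pack, approx.
H2_WH_PER_KG = 33_300           # specific energy of hydrogen (LHV), approx.
FUEL_CELL_EFF = 0.55            # typical PEM fuel-cell efficiency, approx.
TANK_MASS_FACTOR = 10           # pressurized tank weighing ~10x the H2 inside

energy_needed_wh = 500_000      # say 500 kWh usable, for a small aircraft

battery_kg = energy_needed_wh / LI_ION_WH_PER_KG
h2_kg = energy_needed_wh / (H2_WH_PER_KG * FUEL_CELL_EFF)
h2_system_kg = h2_kg * (1 + TANK_MASS_FACTOR)

print(f"battery pack:     {battery_kg:,.0f} kg")   # ~2,000 kg
print(f"H2 fuel + tank:   {h2_system_kg:,.0f} kg") # ~300 kg
```

Even with a heavy tank, the hydrogen system comes out nearly an order of magnitude lighter, which is the weight argument the paragraph above describes.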
Current challenges include certification and testing that faces every new aircraft, but Wankewycz is confident in preparing the Element One for success.
Hydrogen fuel cells could be swapped in as little as 10 minutes, and may eventually be recharged using other forms of renewable energy.
And with the expanded range of Element One, new promising opportunities for regional travel open. Wingly, a French flight-sharing startup collaborating with HES Energy Systems, found the perfect opportunity in France’s unused airfields.
“We analyzed the millions of destination searches made by the community of 200,000 pilots and passengers on our platform and confirm there is a tremendous need for inter-regional transport between secondary cities,” says Emeric de Waziers, CEO of Wingly in a press release. “By combining autonomous emission free aircraft such as Element One, digital community-based platforms like Wingly and the existing high density network of airfields, we can change the paradigm. France alone offers a network of more than 450 airfields but only 10% of these are connected by regular airlines. We will simply connect the remaining 90%.”
In today’s paradigm, small, short-distance flights like the ones de Waziers describes are a luxury of the terrifically rich. But at the intersection of hydrogen-electric technology and forward thinkers of startups like Wingly, passengers from diverse economic backgrounds may soon have a quieter, greener (and sleeker) reason to clap at the end of their flight.
“Star Wars,” “Her,” and “I, Robot.” What do all these movies have in common? The artificial intelligence (AI) depicted in them is crazy-sophisticated. These robots can think creatively, continue learning over time, and maybe even pass for conscious.
Real-life artificial intelligence experts have a name for AI that can do this — it’s Artificial General Intelligence (AGI). For decades, scientists have tried all sorts of approaches to create AGI, using techniques such as reinforcement learning and machine learning. No approach has proven to be much better than any other, at least not yet.
Indeed, there’s a catch here: despite all the excitement, we have no idea how to build AGI.
Either way, most experts think it’s coming — sooner rather than later. In a poll of conference attendees, AI research companies GoodAI and SingularityNet found that 37 percent of respondents think people will create HLAI within 10 years. Another 28 percent think it will take 20 years. Just two percent think HLAI will never exist.
Almost every expert who had an opinion hedged their bets — most responses to the question were peppered with caveats and “maybes.” There’s a lot that we don’t know about the path towards HLAI, such as questions over who will pay for it or how we’re going to combine our algorithms that can think or reason but can’t do both.
Futurism caught up with a number of AI researchers, investors, and policymakers to get their perspective on when HLAI will happen. The following responses come from panels and presentations from the conference and exclusive interviews.
Irakli Beridze, Head of the Centre for Artificial Intelligence and Robotics, UNICRI, United Nations
At the moment, there is absolutely no indication that we are anywhere near AGI. And no one can say with any kind of authority or conviction that this would happen within a certain time frame. Or even worse, no one can say this can even happen period. We may never have AGI, so we need to take that into account when we are discussing anything.
Seán Ó hÉigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk
There’s still a lot of work to be done; there are still many things we don’t understand. Given we have this understanding, maybe it’s possible that it happens within 50 years.
I think we should enjoy the technology while it advances. We should be looking out for where to go in the future. But on the other hand, it’s not like we have human-level AI right now and I don’t think it’s going to happen very quickly. I think that if I’m lucky it’ll happen in my lifetime.
A worm’s level of intelligence is actually pretty doable. If you try to look at vision and planning, this is kind of narrowly doable. The integration of planning and learning, planning as its own thing is pretty well solved. But planning in a way which works with [machine learning] is not very well solved.
I think we are almost there. I am not predicting we will have general AI in three years, 30 years. But I am confident it can happen any day.
Ben Goertzel, CEO at SingularityNET and Chief Scientist at Hanson Robotics
I don’t think we need fundamentally new algorithms. I think we do need to connect our algorithms in different ways than we do now. If I’m right, then we already have the core algorithms that we need… I believe we are less than ten years from creating human-level AI.
I don’t think we’re almost there in the technology for General AI. I think general AI is almost a branding for a very general idea. Lifelong learning is an example of that — it’s a very particular type of AI. We know the theoretical foundation of that already, we know how nature does it, and it’s very well defined. There is a very clear direction, there is a metric. I think we can reach it in a close time.
On the last day of the conference, a number of participants took part in a lightning round of sorts. Almost entirely for fun, these experts were encouraged to throw out a date by which they expected us to figure out how to make HLAI. The following answers, some of which were given by the same people who already answered the question, should be taken with an entire shaker of salt — some were meant as jokes and others are total guesses.
John Langford
Maybe 20 [years]?
Marek Rosa
I really have no idea which year, but if I have to say one year I'd say ten years in the future. The reason is it's kind of vague, you know; anything can happen in ten years.
A sluggish, yet precise robot designed by Japanese engineers demonstrates what construction sites might look like in the future.
Credit: AIST.
The prototype developed at Japan’s National Institute of Advanced Industrial Science and Technology was recently featured in a video picking up a piece of plasterboard and screwing it into a wall.
The robot, called HRP-5P, is much less productive than a human worker. However, its motions are very precise, meaning this prototype could evolve into a rugged model that's apt for real-life applications in demanding fields such as construction.
While most manufacturing fields are being disrupted by automation, with robots doing most of the work in microchip plants or car assembly lines, supervised by human personnel, the same can’t be said about construction. This field is way too dynamic — with every project being unique — and filled with all sorts of obstacles that are too challenging for today’s robots to navigate. HRP-5P, however, suggests that automation could one day become feasible in construction works as well.
For Japan, construction bots are not meant to put people out of jobs, but rather to supplement a dwindling workforce. There’s a great shortage of manual labor in the island state, which is suffering from declining birthrates and an aging population.
Previously, a New York-based company demonstrated a mason-bot capable of laying down 3,000 bricks a day, six times faster than the average human, and more cheaply too. Elsewhere, such as at MIT, researchers are experimenting with 3D printing entire homes in one go.
The Center for Process Innovation, a British technology research company, thinks it has the next big step in air travel figured out. It wants to remove the windows from passenger planes and replace them with OLED touchscreens that extend along the plane's entire length and display the view outside through cameras mounted on the plane's exterior.
According to the company, windows are one of the greatest sources of unnecessary weight in passenger planes. Solid walls are stronger and can also be built thinner. The OLED screens replacing the windows would display the view outside and let passengers select entertainment or call for cabin service.
The technology does have its detractors, however – some are concerned about light pollution inside the cabin, and the panoramic view probably won’t do much to help those who are afraid of flying.
16-09-2018
Scientists are pushing the limits of 3D printing with these shape-shifting materials
The tech could be used to create magnetically controlled implants or "soft robots."
by Tom Metcalfe
MIT researchers are developing 3D-printed materials that can change their shape in response to changes in magnetic fields.
Ben Gruber / Reuters file
Three-dimensional printing has been used to create all sorts of things, from car parts and experimental rocket engines to entire houses. Now scientists at MIT have found a way to 3D print objects that can change shape almost instantaneously in response to magnetic fields.
So far the researchers have created a few demonstration objects with the new technology, which uses plastic “3D ink” infused with tiny iron particles and an electromagnet-equipped printing nozzle. These include a plastic sheet that folds and unfolds, a star-shaped object that can catch a ball and a six-pointed structure that scrunches up and crawls like a spider.
But the researchers see broad applications for small shape-changing devices — what some call "soft robots" — especially in medicine.
“You can imagine this technology being used in minimally invasive surgeries,” said Xuanhe Zhao, a professor of engineering at MIT and a member of the team that developed the 3D-printed shape-shifting technology. “A self-steering catheter inside a blood vessel, for example — now you can use external magnetic fields to accurately steer the catheter.”
Other uses could include magnetically controlled implants to control the flow of blood, and devices that could be guided by magnets through the body to take pictures, clear a blockage or deliver drugs to a specific location, Zhao said.
The technology might one day make it possible to 3D print entire soft robots, Zhao said. These could have information stored as magnetic data directly inside their structural materials, instead of needing additional electronics.
“The MIT soft robotics development is very cool ... It's an important step in terms of being able to control materials,” said Jim McGuffin-Cawley, an engineering and materials science professor at Case Western Reserve University in Cleveland, who was not involved with the MIT project. He noted that the technology allows researchers to make precise changes to the shape-shifting materials by using magnetic fields to control very small moving parts inside the materials themselves.
MIT is releasing free software and a recipe for its magnetic ink so that other scientists around the world can use the technology and print their own shape-shifting materials, Zhao said.
“With these three components they can design their own untethered, fast-transforming soft robots,” he said. “We hope this method can find very important applications in the fields of soft robotics [and] materials.”
14-09-2018
Thousands are embedding microchips under their skin to 'make their daily lives easier'
Thousands of Swedish people are having microchips embedded under their skin to replace ID cards and key cards, and even to pay for train tickets.
3,000 Swedes Have Microchips Installed
An estimated 3,000 people in Sweden have had a microchip, smaller than a single grain of rice, installed under their skin. The number of installations has been rising over the last three years; the technology was first used in Sweden in 2015.
Ulrika Celsing said the microchip in her hand has replaced the need to carry around many daily necessities, including her office key card and gym card. When the 28-year-old arrives at work, she simply waves her hand close to a small box, enters a code, and the door opens.
Rail Line Scans Passengers' Hands to Take Fares
SJ, the state-owned rail line, has begun scanning the hands of microchipped passengers to collect fares on board its trains. The chips could reportedly also be used to make purchases, in much the same way as a contactless credit card, but so far no one has tested this.
The procedure to insert the microchip is much like getting a piercing: the chip is injected by syringe into the person's hand. Celsing said she had her chip installed during a work event and felt only a slight sting. Ben Libberton, a microbiologist at the MAX IV laboratory in Sweden, cautioned that the implants could cause infections or reactions in the body's immune system.
Group Micro-chipping Is Becoming the In-thing
Then there is biohacking, the modification of the body using technology, which is said to be on the increase as more people use devices such as Fitbits and Apple Watches. Bionyfiken, a biohacking group from Sweden, began organizing implant parties, at which groups of people have chips inserted en masse, in the US, UK, Germany, France, and Mexico.
Fifty employees at a vending machine company in Wisconsin had microchips inserted into their hands, which they could use to purchase snacks, log into their computers and operate other office equipment.
Sweden, a country of 10 million, has proven unusually willing to share personal details, which are recorded by the country's social-security system and readily available; people can reportedly find out one another's salaries by calling the public tax authority. Many people in Sweden do not believe the technology is at risk of being hacked, and one microbiologist said the data collected and shared is so limited that there should be little fear of hacking.
The Body Might be the Next Big Platform for Technology
It has been estimated that the human body will become the next big platform for technology, with much of today's wearable tech implanted in the body within the next five to ten years. Who wants to carry around a clumsy smartphone or smartwatch when the same technology can be installed in the body?
When you return to school after summer break, it may feel like you forgot everything you learned the year before. But if you learned like an AI system does, you actually would have — as you sat down for your first day of class, your brain would take that as a cue to wipe the slate clean and start from scratch.
AI systems' tendency to forget what they previously learned when they take in new information is called catastrophic forgetting.
That's a big problem. See, cutting-edge algorithms learn, so to speak, by analyzing countless examples of what they're expected to do. A facial recognition AI system, for instance, will analyze thousands of photos of people's faces, likely photos that have been manually annotated, so that it will be able to detect a face when it pops up in a video feed. But because these AI systems don't actually comprehend the underlying logic of what they do, teaching them to do anything else, even something pretty similar, like, say, recognizing specific emotions, means training them all over again from scratch. Once an algorithm is trained, it's done; we can't update it anymore.
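To see the failure mode in miniature, here is a hedged sketch in PyTorch: a tiny classifier is trained on one toy task, then on a second task whose decision rule conflicts with the first, and its accuracy on the first task typically collapses. The tasks, network size, and numbers are all illustrative assumptions, not anything from a deployed system.

```python
# A minimal sketch of catastrophic forgetting (illustrative only).
# A tiny classifier learns toy task A, then toy task B; because the two
# tasks demand opposite decision rules, training on B overwrites A.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(center_x, flip):
    """Two Gaussian blobs separated along the second feature.
    `flip` inverts the labels so the tasks conflict."""
    x0 = torch.randn(200, 2) * 0.5 + torch.tensor([center_x, -2.0])
    x1 = torch.randn(200, 2) * 0.5 + torch.tensor([center_x, 2.0])
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(200), torch.ones(200)]).long()
    return x, (1 - y if flip else y)

def train(model, x, y, steps=300):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, x, y):
    return (model(x).argmax(dim=1) == y).float().mean().item()

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
xa, ya = make_task(-4.0, flip=False)   # task A
xb, yb = make_task(+4.0, flip=True)    # task B, conflicting rule

train(model, xa, ya)
print(f"after A: acc on A = {accuracy(model, xa, ya):.2f}")
train(model, xb, yb)
print(f"after B: acc on B = {accuracy(model, xb, yb):.2f}, "
      f"acc on A = {accuracy(model, xa, ya):.2f}  # typically collapses")
```

In most runs the final line reports near-perfect accuracy on task B and near-zero accuracy on task A: the second round of training has overwritten the weights that encoded the first task.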
For years, scientists have been trying to figure out how to work around the problem. If they succeed, AI systems would be able to learn from a new set of training data without overwriting most of what they already knew in the process. Basically, if the robots should someday rise up, our new overlords would be able to conquer all life on Earth and chew bubblegum at the same time.
But still, catastrophic forgetting is one of the major hurdles preventing scientists from building an artificial general intelligence (AGI) — AI that’s all-encompassing, empathetic, and imaginative, like the ones we see in TV and movies.
In fact, a number of AI experts who attended The Joint Multi-Conference on Human-Level Artificial Intelligence last week in Prague said, in private interviews with Futurism or during panels and presentations, that the problem of catastrophic forgetting is one of the top reasons they don’t expect to see AGI or human-level AI anytime soon.
Catastrophic forgetting is one of the top reasons experts don’t expect to see human-level AI anytime soon.
But Irina Higgins, a senior research scientist at Google DeepMind, used her presentation during the conference to announce that her team had begun to crack the code.
She had developed an AI agent — sort of like a video game character controlled by an AI algorithm — that could think more creatively than a typical algorithm. It could “imagine” what the things it encountered in one virtual environment might look like elsewhere. In other words, the neural net was able to disentangle certain objects that it encountered in a simulated environment from the environment itself.
This isn’t the same as a human’s imagination, where we can come up with new mental images altogether (think of a bird — you can probably conjure up an image of what a fictional spherical, red bird might look like in your mind’s eye.) The AI system isn’t that sophisticated, but it can imagine objects that it’s already seen in new configurations or locations.
“We want a machine to learn safe common sense in its exploration so it’s not damaging itself,” said Higgins in her speech at the conference, which had been organized by GoodAI. She had published her paper on the preprint server arXiv earlier that week and also penned an accompanying blog post.
Let’s say you’re walking through the desert (as one does) and you come across a cactus. One of those big, two-armed ones you see in all the cartoons. You can recognize that this is a cactus because you have probably encountered one before. Maybe your office bought some succulents to liven up the place. But even if your office is cactus-free, you could probably imagine what this desert cactus would look like in a big clay pot, maybe next to Brenda from accounting’s desk.
Now Higgins’ AI system can do pretty much the same thing. With just five examples of how a given object looks from various angles, the AI agent learns what it is, how it relates to the environment, and also how it might look from other angles it hasn’t seen or in different lighting. The paper highlights how the algorithm was trained to spot a white suitcase or an armchair. After its training, the algorithm can then imagine how that object would look in an entirely new virtual world and recognize the object when it encounters it there.
“We run the exact setup that I used to motivate this model, and then we present an image from one environment and ask the model to imagine what it would look like in a different environment,” Higgins said. Again and again, her new algorithm excelled at the task compared to AI systems with entangled representations, which could predict fewer qualities and characteristics of the objects.
Image Credit: Emily Cho
In short, the algorithm is able to note differences between what it encounters and what it has seen in the past. Like most people but unlike most other algorithms, the new system Higgins built for Google can understand that it hasn’t come across a brand new object just because it’s seeing something from a new angle. It can then use some spare computational power to take in that new information; the AI system updates what it knows about the world without needing to be retrained and re-learn everything all over again. Basically, the system is able to transfer and apply its existing knowledge to the new environment. The end result is a sort of spectrum or continuum showing how it understands various qualities of an object.
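A note on what "disentangled" means here: Higgins is also the lead author of the beta-VAE, a variational autoencoder whose KL term is up-weighted by a factor beta greater than 1 so that each latent variable tends to capture one independent factor of the scene. The sketch below shows only that generic objective; the layer sizes, beta value, and toy batch are assumptions for illustration, not the model from her talk.

```python
# A generic beta-VAE objective (illustrative sketch, not DeepMind's code).
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, x_dim=64, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, z_dim)
        self.logvar = nn.Linear(32, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(),
                                 nn.Linear(32, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    """Reconstruction error plus a beta-weighted KL term; beta > 1 is
    the pressure that encourages disentangled latent factors."""
    recon = ((x - x_hat) ** 2).sum(dim=1).mean()
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon + beta * kl

x = torch.rand(32, 64)            # a toy batch of flattened "images"
vae = TinyVAE()
x_hat, mu, logvar = vae(x)
print(beta_vae_loss(x, x_hat, mu, logvar).item())
```

Once a model is trained this way, sweeping a single latent variable while holding the others fixed changes one interpretable property of the output, which is what lets an agent re-imagine a known object in a new setting.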
Higgins’ model alone won’t get us to AGI, of course. But it marks an important first step towards AI algorithms that can continuously update as they go, learning new things about the world without losing what they already had.
“I think it’s very crucial to reach anything close to artificial general intelligence,” Higgins said.
“I think it’s very crucial to reach anything close to artificial general intelligence.”
And this work is all still in its early stages. These algorithms, like many other object recognition AI tools, excel at a rather narrow task with a constrained set of rules, such as looking at a photo and picking out a face among many things that are not faces. But Higgins' new AI system is doing a narrow task in such a way that more closely resembles creativity and some digital simulation of an imagination.
And even though Higgins' research didn't immediately bring about the era of artificial general intelligence, her new algorithm already has the ability to improve the existing AI systems we use all the time. For instance, Higgins tried her new AI system on a major set of data used to train facial recognition software. After analyzing the thousands and thousands of headshots found in the dataset, the algorithm could create a spectrum of any quality with which those photos have been labeled. As an example, Higgins presented the spectrum of faces ranked by skin tone.
Higgins then revealed that her algorithm was able to do the same for the subjective qualities that also find their way into these datasets, ultimately teaching human biases to facial recognition AI. Higgins showed how images that people had labeled as "attractive" created a spectrum that pointed straight towards the photos of young, pale women. That means any AI system trained on these photos, and there are many of them out there, now holds the same racist view as the people who labeled the photos in the first place: that white people are more attractive.
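To make the "spectrum" idea concrete, here is a hedged sketch of one way such a bias audit could work, reusing the toy TinyVAE above. The images, labels, and trained model are hypothetical stand-ins, not Higgins' actual dataset or code: encode every photo, find the latent dimension most correlated with the subjective label, and sort the photos along it; the two ends of the sorted list then show what the labelers actually rewarded.

```python
# Hypothetical bias-audit sketch built on the TinyVAE defined above.
import torch

images = torch.rand(500, 64)           # stand-in for flattened headshots
labels = torch.randint(0, 2, (500,))   # stand-in 0/1 "attractive" tags
vae = TinyVAE()                        # in reality: a model trained on `images`

with torch.no_grad():
    _, mu, _ = vae(images)             # latent means, shape (500, 8)

lab = labels.float()
mu_c = mu - mu.mean(dim=0)             # center each latent dimension
lab_c = lab - lab.mean()
# Pearson correlation of every latent dimension with the label
corr = (mu_c * lab_c[:, None]).mean(dim=0) / (mu_c.std(dim=0) * lab_c.std() + 1e-8)
dim = corr.abs().argmax()              # the most label-aligned dimension
order = mu[:, dim].argsort()
# images[order] now run from one extreme of the labeled quality to the
# other; inspecting both ends reveals what the human labelers rewarded.
```

With random stand-ins the resulting "spectrum" is meaningless, of course; the point is only the mechanics of surfacing a labeled bias from a disentangled latent space.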
This creative new algorithm is already better than we are when it comes to finding new ways to detect human biases in other algorithms so engineers can go in and remove them.
So while it can’t replace artists quite yet, Higgins’ team’s work is a pretty big step towards getting AI to imagine more like a human and less like an algorithm.
Here's a recipe for freaking out Twitter: Borrow a video of a realistic humanoid robot strolling up a driveway. Post it on Twitter. Wait for world famous mentalist Derren Brown to retweet it. Gather nearly 5 million video views. Enjoy the comment fallout as people question whether it's real.
Brown's retweet of a short robot video on Saturday helped spread the footage across Twitter, whose users described it as "scary," "creepy," "terrifying" and "my worst nightmare." It helped that Brown wrote, "WE ARE ALL GOING TO DIE" in his tweet caption.
The robot in the clip is "Adam," a CGI character from director Neill Blomkamp's Oats Studios, which renders its short films in the Unity game engine. Blomkamp didn't have anything to do with the driveway video, though he chimed in on Brown's Twitter thread, writing, "Props to the artist who implemented Adam into live action footage."
The artist behind the video appears to be 3D artist Maxim Sullivan, who originally posted the video to Twitter on Aug. 12 with the message, "Glitchy test of Adam from the @oatsstudios Unity film, going for a walk."
Sullivan answered some questions in his own Twitter thread, saying the creation is indeed CGI. While Sullivan's original tweeted video has almost 13,000 views, the version tweeted out of context by another Twitter user (and retweeted by Brown) has almost 5 million.
While Adam isn't real, there are plenty of actual robots out there that can give you the willies. Boston Dynamics' running robot Atlas is a top candidate that should slot nicely into your robo-fear nightmares.
There are fears that tend to come up when people talk about futuristic artificial intelligence — say, one that could teach itself to learn and become more advanced than anything we humans might be able to comprehend. In the wrong hands, perhaps even on its own, such an advanced algorithm might dominate the world’s governments and militaries, impart Orwellian levels of surveillance, manipulation, and social control over societies, and perhaps even control entire battlefields of autonomous lethal weapons such as military drones.
But some artificial intelligence experts don't think those fears are well-founded. In fact, highly advanced artificial intelligence could be better at managing the world than humans have been. These fears themselves are the real danger, because they may hold us back from making that potential a reality.
“Maybe not achieving AI is the danger for humanity.”
As a species, humans are pretty terrible at making choices that are good for us in the long term, explained Tomas Mikolov, a research scientist at Facebook AI. People have carved away rainforests and other ecosystems to harvest raw materials, unaware of (or uninterested in) how they were contributing to the slow, maybe-irreversible degradation of the planet overall.
But a sophisticated artificial intelligence system might be able to protect humanity from its own shortsightedness.
“We as humans are very bad at making predictions of what will happen in some distant timeline, maybe 20 to 30 years from now,” Mikolov added. “Maybe making AI that is much smarter than our own, in some sort of symbiotic relationship, can help us avoid some future disasters.”
Granted, Mikolov may be in the minority in thinking a superior AI entity would be benevolent. Throughout the conference, many other speakers expressed these common fears, mostly about AI used for dangerous purposes or misused by malicious human actors. And we shouldn’t laugh off or downplay those concerns.
We don’t know for sure whether it will ever be possible to create artificial general intelligence, often considered the holy grail of sophisticated AI that’s capable of doing pretty much any cognitive task humans can, maybe even doing it better.
The future of advanced artificial intelligence is promising, but it comes with a lot of ethical questions. We probably don’t know all the questions we’ll have to answer yet.
But most of the panelists at the HLAI conference agreed that we still need to decide on the rules before we need them. The time to create international agreements, ethics boards, and regulatory bodies across governments, private companies, and academia? It's now. Putting these institutions and protocols in place would reduce the odds that a hostile government, unwitting researcher, or even a cackling mad scientist would unleash a malicious AI system or otherwise weaponize advanced algorithms. And if something nasty did get out there, then these systems would ensure we'd have ways to handle it.
With these rules and safeguards in place, we will be much more likely to usher in a future in which advanced AI systems live harmoniously with us, or perhaps even save us from ourselves.