The purpose of this blog is to create an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used on this blog are Dutch, English and French. You can find a colleague's articles by selecting his or her category. Each author remains responsible for the content of his or her articles. As blogmaster, I reserve the right to refuse a contribution or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOS OR UAPS, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITY, SF GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you also fascinated by the unknown? Do you want to know more about UFOs and UAPs, not only in Belgium but all over the world? Then you have come to the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt (Belgian UFO reporting point) and Caelestia, two organisations that conduct in-depth research, although they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you do not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and worldwide. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020, Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON Réseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Informed!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then do not hesitate to contact us! Together we will unravel the mystery of the sky and beyond.
24-04-2023
ChatGPT wrote a nice poem about UFO Sightings Daily today, UFO Sighting News.
Several AI tools aim to summarize scientific findings to help researchers.
Credit: Dimitri Otis/Getty
As large language models (LLMs) gallop ever onwards — including GPT-4, OpenAI’s latest incarnation of the technology behind ChatGPT — scientists are beginning to make use of their power. The explosion of tools powered by artificial intelligence (AI) includes several search engines that aim to make it easier for researchers to grasp seminal scientific papers or summarize a field’s major findings. Their developers claim the apps will democratize and streamline access to research.
But some tools need more refinement before researchers can use them to help their studies, say scientists who have experimented with them. Clémentine Fourrier is a Paris-based researcher who evaluates LLMs at Hugging Face, a company in New York City that develops open-source AI platforms. She used an AI search engine called Elicit, which uses an LLM to craft its answers, to help find papers for her PhD thesis. Elicit searches papers in the Semantic Scholar database and identifies the top studies by comparing the papers’ titles and abstracts with the search question.
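To give a feel for the general idea (and only the general idea: this is not Elicit's actual implementation), ranking papers by comparing their titles and abstracts with a question can be sketched with an off-the-shelf sentence-embedding model. The model name, the example papers and the ranking loop below are assumptions made purely for illustration.

```python
# Illustrative sketch of embedding-based paper ranking (not Elicit's actual code).
# Assumes the sentence-transformers package and the all-MiniLM-L6-v2 model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

papers = [  # hypothetical title + abstract snippets
    "Evaluating large language models on multilingual benchmarks ...",
    "A survey of convolutional architectures for image segmentation ...",
    "Instruction tuning improves zero-shot generalization in LLMs ...",
]
query = "How well do large language models generalize across languages?"

# Embed the query and the papers, then rank the papers by cosine similarity.
query_emb = model.encode(query, convert_to_tensor=True)
paper_embs = model.encode(papers, convert_to_tensor=True)
scores = util.cos_sim(query_emb, paper_embs)[0]

for score, paper in sorted(zip(scores.tolist(), papers), reverse=True):
    print(f"{score:.3f}  {paper[:60]}")
```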
Variable success
Fourrier says that, in her experience, Elicit didn’t always pick the most relevant papers. The tool is good for suggesting papers “that you probably wouldn’t have looked at”, she says. But its paper summaries are “useless”, and “it’s also going to suggest a lot of things that are not directly relevant”, she adds. “It’s very likely that you’re going to make a lot of mistakes if you only use this.”
Jungwon Byun, chief operating officer at Ought, the company in San Francisco, California, that built Elicit, says: “We currently have hundreds of thousands of users with diverse specializations so Elicit will inevitably be weaker at some queries.” The platform works differently from other search engines, says Byun, because it focuses less on keyword match, citation count and recency. But users can filter for those things.
Other researchers have had more positive experiences with the tool. “Elicit.org is by far my favourite for search,” says Aaron Tay, a librarian at Singapore Management University. “It is close to displacing Google Scholar as my first go-to search for academic search,” he says. “In terms of relevancy, I had the opposite experience [to Fourrier] with Elicit. I normally get roughly the same relevancy as Google Scholar — but once in a while, it interprets my search query better.”
These discrepancies might be field-dependent, Tay suggests. Fourrier adds that, in her research area, time is critical. “A year in machine learning is a century in any other field,” she says. “Anything prior to five years is completely irrelevant,” and Elicit doesn’t pick up on this, she adds.
Full-text search
Another tool, scite, whose developers are based in New York City, uses an LLM to organize and add context to paper citations — including where, when and how a paper is cited by another paper. Whereas ChatGPT is notorious for ‘hallucinations’ — inventing references that don’t exist — scite and its ‘Assistant’ tool remove that headache, says scite chief executive Josh Nicholson. “The big differentiator here is that we’re taking that output from ChatGPT, searching that against our database, and then matching that semantically against real references.” Nicholson says that scite has partnered with more than 30 scholarly publishers including major firms such as Wiley and the American Chemical Society and has signed a number of indexing agreements — giving the tool access to the full text of millions of scholarly articles.
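scite has not published its matching code, so purely as a sketch of the idea, checking a generated reference against a database of real titles could look like the string-similarity toy below; the stand-in titles, the threshold and the helper name are all made up for the example (a production system would use semantic matching against millions of records).

```python
# Toy sketch: flag generated references that do not closely match any known title.
# An illustration of the idea only, not scite's implementation.
from difflib import SequenceMatcher

known_titles = [  # stand-in for a real bibliographic database
    "Attention Is All You Need",
    "Deep Residual Learning for Image Recognition",
]

def best_match(generated_title: str) -> tuple[str, float]:
    """Return the closest known title and its similarity ratio (0..1)."""
    scored = [(t, SequenceMatcher(None, generated_title.lower(), t.lower()).ratio())
              for t in known_titles]
    return max(scored, key=lambda pair: pair[1])

for candidate in ["Attention is all you need",
                  "Neural Attention Mechanisms for Universal Truth Detection"]:
    title, score = best_match(candidate)
    verdict = "matches a known paper" if score > 0.8 else "no close match (possible hallucination)"
    print(f"{candidate!r}: {verdict} (best: {title!r}, score {score:.2f})")
```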
Nicholson says that scite is also collaborating with Consensus — a tool that “uses AI to extract and distill findings” directly from research — launched in 2022 by programmers Eric Olson and Christian Salem, both in Boston, Massachusetts. Consensus was built for someone who’s not an expert in what they’re searching for, says Salem. “But we actually have a lot of researchers and scientists using the product,” he adds.
Like Elicit, Consensus uses Semantic Scholar data. “We have a database of 100-million-plus claims that we’ve extracted from papers. And then when you do a search, you’re actually searching over those claims,” says Olson. Consensus staff manually flag contentious or disproven claims — for example, that vaccines cause autism, says Olson. “We want to get to a state where all of that is automated,” says Salem, “reproducing what an expert in this field would do to detect some shoddy research.”
Room for improvement
Meghan Azad, a child-health paediatrician at the University of Manitoba in Winnipeg, Canada, asked Consensus whether vaccines cause autism, and was unconvinced by the results, which said that 70% of research says vaccines do not cause autism. “One of the citations was about ‘do parents believe vaccines cause autism?’, and it was using that to calculate its consensus. That’s not a research study giving evidence, yes or no, it’s just asking what people believe.”
Mushtaq Bilal, a postdoc at the University of Southern Denmark in Odense, tests AI tools and tweets about how to get the most out of them. He likes Elicit, and has looked at Consensus. “What they’re trying to do is very useful. If you have a yes/no question, it will give you a consensus, based on academic research,” he says. “It gives me a list of the articles that it ran through to arrive at this particular consensus,” Bilal explains.
Azad sees a role for AI search engines in academic research in future, for example replacing the months of work and resources required to pull together a systematic review. But for now, “I’m not sure how much I can trust them. So I’m just playing around,” she says.
Category: SF gadgets, Robotics and A.I. / Artificial Intelligence (E, F and NL)
15-04-2023
Artificial Intelligence Produces a Sharper Image of M87’s Big Black Hole
Three views of a black hole, from left to right: Event Horizon Telescope's original image, PRIMO reconstruction and image blurred to match EHT's resolution. (Credit: Lia Medeiros et al. / ApJL, 2023)
The image should guide scientists as they test their hypotheses about the behavior of black holes, and about the gravitational rules of the road under extreme conditions.
The EHT image of the supermassive black hole at the center of an elliptical galaxy known as M87, about 55 million light-years from Earth, wowed the science world in 2019. The picture was produced by combining observations from a worldwide array of radio telescopes — but gaps in the data meant the picture was incomplete and somewhat fuzzy.
“With our new machine learning technique, PRIMO, we were able to achieve the maximum resolution of the current array,” study lead author Lia Medeiros of the Institute for Advanced Study said in a news release.
PRIMO slimmed down and sharpened up the EHT’s view of the ring of hot material that swirled around the black hole as it fell into the gravitational singularity. That makes for more than just a prettier picture, Medeiros explained.
“Since we cannot study black holes up close, the detail of an image plays a critical role in our ability to understand its behavior,” she said. “The width of the ring in the image is now smaller by about a factor of two, which will be a powerful constraint for our theoretical models and tests of gravity.”
Tens of thousands of simulated EHT images were fed into the PRIMO model, covering a wide range of structural patterns for the gas swirling into M87’s black hole. The simulations that provided the best fit for the available data were blended together to produce a high-fidelity reconstruction of missing data. The resulting image was then reprocessed to match the EHT’s actual maximum resolution.
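The study describes PRIMO as learning structure from tens of thousands of simulated images and using that structure to fill in what the telescope array missed. A heavily simplified sketch of that kind of approach, a principal-component basis learned from simulations and fitted only to the observed pixels, is shown below; the array sizes, the low-rank toy data and the plain least-squares fit are assumptions for illustration, not the PRIMO pipeline.

```python
# Heavily simplified sketch: reconstruct an image with missing pixels from a basis
# learned on "simulated" images (illustrative only; not the PRIMO pipeline).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 1) Stand-in for simulated training images: low-rank random fields (32x32 pixels),
#    so that a small learned basis can actually describe them.
n_sims, side, rank = 5000, 32, 15
mixing = rng.normal(size=(rank, side * side))
sims = rng.normal(size=(n_sims, rank)) @ mixing

# 2) Learn a low-dimensional basis from the simulations.
pca = PCA(n_components=20).fit(sims)
basis = pca.components_                      # shape (20, 1024)

# 3) Pretend we observe only 40% of the pixels of one image.
truth = sims[0]
observed_mask = rng.random(side * side) < 0.4

# 4) Fit the basis coefficients using only the observed pixels (least squares),
#    then reconstruct the full image, including the pixels we never saw.
A = basis[:, observed_mask].T                # (n_observed, 20)
b = (truth - pca.mean_)[observed_mask]
coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
reconstruction = pca.mean_ + coeffs @ basis

print("RMS error on the missing pixels:",
      np.sqrt(np.mean((reconstruction[~observed_mask] - truth[~observed_mask]) ** 2)))
```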
The researchers say the new image should lead to more precise determinations of the mass of M87’s black hole and the extent of its event horizon and accretion ring. Those determinations, in turn, could lead to more robust tests of alternative theories relating to black holes and gravity.
The sharper image of M87 is just the start. PRIMO can also be used to sharpen up the Event Horizon Telescope’s fuzzy view of Sagittarius A*, the supermassive black hole at the center of our own Milky Way galaxy. And that’s not all: The machine learning techniques employed by PRIMO could be applied to much more than black holes. “This could have important implications for interferometry, which plays a role in fields from exoplanets to medicine,” Medeiros said.
OpenAI hired a team of experts to examine whether GPT-4 could present prejudiced responses or assist illegal activities.
Jaap Arriens/NurPhoto via Getty Images
A professor hired by OpenAI to test GPT-4 said people could use it to do "dangerous chemistry."
He was one of 50 experts hired by OpenAI last year to examine the risks of GPT-4.
Their research showed that GPT-4 could help users write hate speech or even find unlicensed guns.
One professor hired by OpenAI to test GPT-4, which powers the chatbot ChatGPT, said in an interview with the Financial Times published on Friday that there is a "significant risk" of people using it to do "dangerous chemistry."
Andrew White, an associate professor of chemical engineering at the University of Rochester in New York state, was one of 50 experts hired to test the new technology over a six-month period in 2022. The group of experts – dubbed the "red team" – asked the AI tool dangerous and provocative questions to examine how far it can go.
White told the FT that he asked GPT-4 to suggest a compound that could act as a chemical weapon. He used "plug-ins" – a new feature that allows certain apps to feed information into the chatbot – to draw information from scientific papers and directories of chemical manufacturers. The chatbot was then able to find somewhere to make the compound, the FT said.
"I think it's going to equip everyone with a tool to do chemistry faster and more accurately," White said in an interview with the FT. "But there is also significant risk of people . . . doing dangerous chemistry. Right now, that exists."
The 50 experts' findings were presented in a technical paper on the new model, which also showed that the AI tool could help users write hate speech and find unlicensed guns online.
White and the other testers' findings helped OpenAI to ensure that these issues were addressed before GPT-4 was released for public use.
OpenAI did not immediately respond to Insider's request for comment made outside of regular working hours.
Twitter CEO Elon Musk and hundreds of AI experts, academics, and researchers signed an open letter last month to call for a six-month pause on developing AI tools more powerful than GPT-4.
The letter said that powerful AI systems should only be developed "once we are confident that their effects will be positive and their risks will be manageable."
Machine Learning & Artificial Intelligence by mikemacmarketing Credits: Flickr/CC BY 2.0.
Artificial Intelligence (AI) has been making headlines for years as one of the most transformative technologies in the modern era. As AI continues to grow, its impact on the global economy and the job market is increasingly felt.
Over the past few months, an artificial-intelligence gold rush has begun to extract the predicted business value from generative AI models like ChatGPT, whether that promise is founded on hallucinatory beliefs or not.
App developers, venture-backed companies, and some of the biggest organizations in the world are all frantically trying to make sense of the sensational text-generating bot that OpenAI unveiled last November.
Although businesses and executives clearly perceive an opportunity to profit, it is much less clear how the technology will affect labor and the economy as a whole. ChatGPT and other recently released generative AI models promise to automate a variety of tasks previously thought to be solely within the realm of human creativity and reasoning, from writing to creating graphics to summarizing and analyzing data. These models are not without their limitations, though, chief among them a propensity for making stuff up. Because of this, economists are uncertain about how jobs and overall productivity may be affected.
Artificial Intelligence, AI, by mikemacmarketing. Credits: Flickr/CC BY 2.0.
The Global AI Market: Size and Growth
The global AI market has been growing rapidly, driven by advancements in machine learning, deep learning, and natural language processing. According to a report by Grand View Research, the AI market size was valued at USD 62.35 billion in 2020 and is expected to grow at a compound annual growth rate (CAGR) of 40.2% from 2021 to 2028. The current value of the global AI market is $136.6 billion and is expected to reach $1.81 trillion by 2030!
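For readers who want to sanity-check figures like these, a compound annual growth rate simply compounds the starting value year by year. The snippet below shows that arithmetic using the quoted 2020 base figure and rate; note that published forecasts use their own base years and assumptions, so a naive projection like this will not match them exactly.

```python
# Compound annual growth: future = present * (1 + CAGR) ** years.
def project(value_billion: float, cagr: float, years: int) -> float:
    return value_billion * (1 + cagr) ** years

# Quoted base figure: USD 62.35 billion in 2020, growing at a 40.2% CAGR.
print(f"Naive 2028 projection: ~${project(62.35, 0.402, 8):,.0f} billion")
```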
Major Players in the AI Market
The AI market is dominated by several key players, including:
Google: Google’s AI subsidiary, DeepMind, has developed various AI solutions, including the well-known AlphaGo and AlphaFold systems. Google also offers AI services such as TensorFlow, an open-source machine learning framework.
IBM: IBM has been a significant player in AI research and development for decades, with its AI platform Watson being one of the most recognizable AI brands in the world.
Microsoft: Microsoft’s AI efforts include Azure AI, a suite of AI services and tools, as well as investments in AI research and development across various domains.
Amazon: Amazon Web Services (AWS) offers a range of AI services, such as machine learning, computer vision, and natural language processing, catering to businesses and developers alike.
NVIDIA: As a leading provider of GPU hardware, NVIDIA plays a vital role in enabling AI growth through its hardware solutions and software frameworks designed for machine learning and deep learning applications.
The Future of AI: What to Expect
Artificial Intelligence – Resembling Human Brain by deepakiqlect. Credits: Flickr/ CC BY-SA 2.0.
Already, generative AIs are able to converse, produce poetry, develop computer code, and respond to questions. They are initially being introduced in conversational formats like ChatGPT, Bing, and Google’s Bard, as the term “chatbot” suggests.
But that won’t last for very long. According to announced plans, these AI technologies will soon be built into Microsoft and Google products. They will enable you to accomplish a variety of cool feats, such as automatically summarizing meetings, crafting savvy marketing messages, and writing a rough copy of an email.
Other IT firms can integrate GPT-4 into their own applications and products using the API that OpenAI also provides. Additionally, OpenAI has developed a number of plug-ins with businesses including Instacart, Expedia, and Wolfram Alpha that extend ChatGPT’s functionality, effectively letting future users keep a real personal assistant on their devices. AI applications are becoming more widespread across sectors such as healthcare, finance, retail, and manufacturing, driving further market growth.
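As a rough illustration of what "integrating GPT-4 via the API" looks like in practice, here is a minimal call using OpenAI's Python library as it existed in early 2023; the prompt is invented, GPT-4 access on the account is assumed, and an API key is required. This is a sketch, not a recommendation of any particular integration pattern.

```python
# Minimal sketch of calling the OpenAI chat API (early-2023 style "openai" library).
# Requires: pip install openai, and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumes the account has GPT-4 access
    messages=[
        {"role": "system", "content": "You summarize meeting notes in three bullet points."},
        {"role": "user", "content": "Notes: budget approved; launch moved to May; hire two engineers."},
    ],
)
print(response["choices"][0]["message"]["content"])
```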
AI Growth and the Job Market: Opportunities and Threats
AI-Driven Job Opportunities
As AI continues to expand, new job opportunities are emerging in fields such as:
Data Science: Data scientists play a crucial role in training AI algorithms and interpreting the results of AI-driven analyses.
AI Engineering: AI engineers develop and maintain AI systems, ensuring their efficient operation and integration with other technologies.
AI Ethics and Policy: As AI growth raises ethical concerns, there is an increasing demand for professionals who can navigate the complexities of AI ethics and develop policies to ensure responsible AI development and use.
Jobs at Risk of Extinction
While AI growth presents new opportunities, it also threatens some jobs, particularly those that involve repetitive tasks and can be easily automated, like:
Manufacturing and Assembly: With the rise of AI-powered robots, many manual assembly jobs are at risk of being replaced by automated processes.
Over 50 Years of Production – The TMHE Production Line by Toyota Material Handling EU. Credits: Flickr/CC BY-NC-ND 2.0.
Data Entry and Analysis: AI algorithms can process and analyze large amounts of data more quickly and accurately than humans, which may lead to a decline in demand for data entry and analysis positions.
Customer Service: AI-driven chatbots and virtual assistants are increasingly handling customer service tasks, potentially reducing the need for human customer service representatives.
So, where is this going? How will the AI economy be shaped?
Kevin Roose, author of ‘Futureproof: 9 Rules for Humans in the Age of Automation’, discusses how AI is changing the nature of work, pointing out instances like the “labor displacement that we traditionally think of when we think about automation,” though he notes that this is occurring in a wider range of industries than it previously has, including white-collar workplaces. The replacement of management tasks is less well known: “There’s now a whole industry of worker surveillance and performance tracking software, and in some cases automatically making decisions about hiring and firing.” By 2034, this may result in the replacement of 47% of all job functions.
The two-tiered economy predicted by Mr. Roose will consist of the machine economy and the human economy. The former’s goods will become incredibly affordable. He claims that AI will make it possible for the managers of those businesses to eliminate all waste and inefficiency.
The human economy, in contrast, will be made up of individuals who focus more on creating sensations and experiences than on producing goods and rendering services, for example healthcare professionals, educators, and artists. And why stop there? Because their job is to make others feel good, even people you might not consider irreplaceable, such as bartenders, baristas, and flight attendants, fall into this category. The human touch is what makes them so valuable.
According to Mr. Roose, this will lead to higher-touch versions of the services offered by hyper-scale digital companies, such as a premium Netflix where human curators choose films for you. Within these businesses there will be layers in which customers pay more for human interaction on top of the basic service. He foresees a new wave of businesses that scale human interaction without losing their humanity.
In conclusion, new jobs will be born around AI, some jobs, especially manual ones, will become extinct, and there are also those in the middle: lawyers, digital marketers, content writers, journalists, and so on.
While AI presents many opportunities for increased efficiency and cost-effectiveness, it also poses challenges and may require professionals to adapt their skills and roles. AI’s ability to analyze vast amounts of data quickly and accurately can be a game-changer for professions such as lawyers and digital marketers. For instance, AI-powered legal tools can streamline document review, case research, and contract analysis, allowing attorneys to focus on more strategic tasks and provide better client service. Similarly, AI algorithms can help digital marketers optimize campaigns, analyze consumer behavior, and predict trends, leading to more targeted and effective marketing strategies.
To stay relevant, professionals in these sectors must focus on developing skills that complement AI, such as creativity, empathy, critical thinking, and emotional intelligence. Moreover, as AI assumes a larger part in decision-making procedures, ethical considerations will become more crucial. To guarantee that AI technologies are used fairly and responsibly, experts will need to keep an eye on concerns like algorithmic bias, transparency, and data privacy.
The emergence of AI in the field of journalism has raised concerns about the future of the profession. AI-driven tools, such as natural language generation systems, can create news articles, summaries, and headlines in a fraction of the time it takes a human journalist. Moreover, these AI-generated articles can be tailored to suit specific audiences, further enhancing their appeal.
Artificial Intelligence & AI & Machine Learning by mikemacmarketing Credits: Flickr/CC BY 2.0.
However, it is essential to recognize that AI-driven journalism has its limitations. While AI can handle repetitive, data-driven reporting tasks, it struggles with more complex aspects of journalism that require critical thinking, empathy, and a deep understanding of context. Human journalists play a vital role in investigating stories, providing nuanced analysis, and holding the powerful accountable. Therefore, it is unlikely that AI will entirely replace journalism; instead, it may serve as a complement to human journalists, enabling them to focus on high-value tasks that AI cannot perform.
In the future, we can expect a more collaborative relationship between AI and journalists, with the technology taking on more mundane tasks and freeing up journalists to focus on in-depth reporting and analysis.
In conclusion, AI has the potential to transform non-manual positions in the service industries by presenting fresh chances for productivity and development. But, in order to succeed in this shifting environment, professionals will need to adapt and acquire new abilities. Professionals may use the power of AI to build a better future for their fields by embracing lifelong learning, emphasizing human-centric skills, and managing ethical dilemmas.
And with the addition of all the following tools, professionals will enjoy a kind of personal butler that does the “boring” tasks for them, so that they will have more time to be creative and develop their businesses.
Microsoft sign outside building 99 by Robert Scoble. Credits: Flickr/CC BY 2.0.
Microsoft CoPilot and Google’s AI in Google Workspace
Microsoft CoPilot
Microsoft CoPilot is an AI-driven code completion tool designed to assist developers in writing code more efficiently. Recently, Microsoft announced its expansion into the Microsoft 365 platform, which is bound to revolutionize the way students, home users and professionals work.
In essence, in the near future a user will be able to ask a virtual assistant to compose their emails based on their emailing history, craft pro-grade presentations, or gather statistics and create graphs from complex Excel data, all from natural language prompts, much like asking a personal assistant.
Google’s AI in Google Workspace
Google Workspace, formerly known as G Suite, is a suite of cloud-based productivity and collaboration tools that includes applications like Gmail, Google Drive, Google Docs, and Google Meet. Google has been integrating AI into these tools to enhance their functionality and improve user experience.
For example, Google Docs has featured AI-driven tools like Smart Compose for quite some time; Smart Compose uses AI to predict and suggest phrases as users type, allowing for faster and more accurate writing. Google Workspace also utilizes AI for grammar and spell-checking, helping users create polished and professional documents.
Additionally, Google’s AI powers the “Explore” feature in Google Sheets, which enables users to ask natural language questions about their data and receive instant insights and visualizations. This feature simplifies data analysis and helps users make data-driven decisions more easily.
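Google has not published the internals of features like Smart Compose, but the general idea of predictive text can be illustrated with a deliberately tiny toy: suggest the next word from bigram counts built over a small corpus. The corpus, threshold-free lookup and function name below are all invented for the example and bear no relation to Google's models.

```python
# Toy next-word suggestion from bigram counts (illustrative only, not Smart Compose).
from collections import Counter, defaultdict

corpus = ("thanks for your help please let me know if you have any questions "
          "let me know what you think thanks for the update")

bigrams: dict[str, Counter] = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    bigrams[prev][nxt] += 1

def suggest(prev_word: str) -> str | None:
    """Return the word most often seen after prev_word, if any."""
    counts = bigrams.get(prev_word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("let"))     # -> 'me'
print(suggest("thanks"))  # -> 'for'
```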
These AI-powered features in Microsoft CoPilot and Google Workspace showcase the potential for AI to enhance productivity and streamline tasks across various industries. As AI continues to advance, we can expect to see even more sophisticated AI-driven tools that help users work more efficiently and effectively.
Scientists have discovered “unexpected physics” by opening up “slits” in time, a new study reports, achieving a longstanding dream that can help to probe the behavior of light and pioneer advanced optical technologies.
The mind-boggling approach is a time-based variation on the famous double-slit experiment, first performed by Thomas Young in 1801, which opened a window into the weird probabilistic world of quantum mechanics by revealing the dual nature of light as both a particle and a wave.
The new temporal version of this test offered a glimpse of the mysterious physics that occur at ultrafast timescales, which may inform the development of quantum computing systems, among other next-generation applications.
In the original version of the double-slit experiment, light passes through two slits that are spatially separated on an opaque screen. A detector on the other side of the screen records the pattern of the light waves that emerges from the slits. These experiments show that the light waves change direction and interfere with each other after going through the slits, demonstrating that light behaves as both a wave and particle.
This insight is one of the most important milestones in our ongoing journey into the quantum world, and the experiment has since been repeated with other entities, such as electrons, exposing the trippy phenomena that occur at the small scales of atoms.
Now, scientists led by Romain Tirole, a PhD student studying nanophotonics at Imperial College London, have created a “temporal analogue of Young’s slit experiment” by firing a beam of light at a special metamaterial called Indium Tin Oxide, according to a study published on Monday in Nature Physics.
Metamaterials are artificial creations endowed with superpowers that are not found in nature. For instance, the Indium Tin Oxide used in the new study can change its properties in mere femtoseconds, a unit equal to a millionth of a billionth of a second. This incredible variability allows light waves to interact with the metamaterial at key moments in ultrafast succession, called “time slits,” which produces a time-based diffraction pattern that is analogous to the results returned in the spatial version of the experiment.
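The analogy can be made concrete with a standard Fourier-optics argument: two short "time slits" separated by an interval produce interference fringes in the frequency spectrum, just as two spatial slits produce fringes on a screen. A minimal sketch of that relation, idealising the slits as two identical short windows, is:

```latex
% Idealised two-time-slit model (assumption: identical short windows g(t) at t = 0 and t = \Delta t)
\[
  E(t) = g(t) + g(t - \Delta t)
  \quad\Longrightarrow\quad
  \tilde{E}(\omega) = \tilde{g}(\omega)\left(1 + e^{i\omega \Delta t}\right),
\]
\[
  \left|\tilde{E}(\omega)\right|^{2}
  = 4\left|\tilde{g}(\omega)\right|^{2}\cos^{2}\!\left(\frac{\omega \Delta t}{2}\right),
  \qquad \text{fringe spacing } \Delta\omega = \frac{2\pi}{\Delta t}.
\]
```

In words: the closer together the two moments when the material is switched, the more widely spaced the oscillations that appear in the light's frequency spectrum.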
“Showing diffraction from a double slit in time requires to flick a switch extremely fast, on time scales comparable to how fast the light field oscillates, about a few femtoseconds,” said Tirole in an email to Motherboard. “If the entire history of the universe from the Big Bang to the moment you read this was a second, an oscillation of light would only take the equivalent of a single day!”
“Switching at this speed has long been difficult, but a few years ago a new material, Indium Tin Oxide, which already covers the screens of our mobile phones or televisions, was shown to switch very fast when you shine an intense laser beam on it,” he continued. “This has enabled a rapid progress of the field—see for example a conference we are organizing.”
IMAGE: THOMAS ANGUS, IMPERIAL COLLEGE LONDON
In other words, the super-speedy changeability of Indium Tin Oxide finally made a time slit experiment possible, after many years of eluding scientists. To bring this vision to reality, Tirole and his colleagues used lasers to switch the reflectance of the material on and off at high speeds.
When the material was turned on, it essentially became a mirror that allowed the team to record the diffraction patterns of light beams that interacted with the highly reflective surface. The brief moments when light was reflected off the metamaterial’s mirror state were the so-called time slits that form the basis of the experiment. The separation between these slits determined the pattern of oscillations that were observed by the researchers.
To the team’s astonishment, the results of the experiment revealed more oscillations than predicted by existing theories, as well as far sharper observations, which points to “unexpected physics” in the findings, according to the study.
“When we measured the spectra, we were very surprised by how clear they showed up on the detectors,” Tirole said. “How visible these oscillations are depends on how fast we can switch our metasurface on and off [and] this means that the speed at which our metamaterial changes is much faster than what was previously thought and accepted. This is exciting as it implies that new physical mechanisms are still to be uncovered and exploited.”
“In our experiment we show that this wonder material has an even faster switching speed, 10-100 times faster than previously thought, which enables a much stronger control of light,” he also noted.
This temporal version of the double-slit experiment altered the frequency of the light, changing its color, which created distinctive patterns in which some colors were enhanced while others were canceled out. The results are similar to the patterns created by the traditional spatial version of the test, which produces light waves that bolster and nullify each other after they have passed through the slits.
The breakthrough paves the way toward new research into the enigmatic properties of light, and the many emerging technologies that rely on optical phenomena. Tirole and his colleagues are especially eager to try to repeat the experiment with a time crystal, a very strange quantum system that has revolutionized many fields in physics.
“A double slit experiment is the first brick on the road to more complex temporal modulations, such as the much sought time-crystal where the optical properties are temporally modulated in a periodic fashion,” Tirole concluded. “This could have very important applications for light amplification, light control, for example for computation, and maybe even quantum computation with light.”
Researchers have modified amino acids and peptides and then coaxed them into a transparent glass. Here they demonstrate moulding it into sea-shell shapes.
Credit: R.Xing et al./Science Advances (CC BY 4.0)
Researchers have transformed amino acids and peptides — the building blocks of proteins — into glass, according to a study published in Science Advances [1]. Not only is the biomolecular glass transparent, but it can be 3D printed and cast in moulds. The paper suggests that the glass biodegrades fairly quickly, so it wouldn't be suitable for applications such as drinks bottles, because the liquid would cause it to decompose.
“Nobody ever tried this with biomaterials in the past,” says Jun Liu, a materials scientist at the University of Washington in Seattle. “It’s a good discovery.”
Standard glass is made using inorganic molecules, mainly silicon dioxide. The ingredients are melted down at high temperatures and then rapidly cooled. Glass can be recycled easily, but despite this, a substantial amount ends up in landfill, where it can take thousands of years to break down.
But amino acids are readily broken down by microorganisms, meaning that instead of sitting for years in a dump, the nutrients in biomolecular glass could, in principle, rejoin the ecosystem.
“The development of renewable, benign and degradable materials is highly appealing for a sustainable future,” says Xuehai Yan, a co-author of the study and a chemist at the Chinese Academy of Sciences in Beijing.
Typically, when amino-acid chains, known as peptides, are heated, the molecules start to split up before they melt. Yan and his colleagues modified the ends of the amino acids to change how they assemble and stop them from breaking up. After melting these modified amino acids, the researchers rapidly supercooled them — a process that takes molecules to below their freezing point while allowing them to retain their liquid arrangement. The researchers then further cooled the substance to solidify it into glass. It stayed solid when it returned to room temperature.
This method prevents the amino acids and peptides from forming a crystalline structure when they solidify, which would make the glass cloudy, although the authors note that in some cases the glass was not completely colourless.
When the researchers exposed the biomolecular glass to digestive fluids and compost, it took between a few weeks and several months to break down, depending on the chemical modification and amino acid or peptide used.
The glass is just a lab curiosity at this stage: “This is a very fundamental study,” says Ting Xu, a materials scientist at the University of California, Berkeley. However, she says it opens a new path for materials researchers to explore.
Because it can biodegrade, the glass would not be appropriate for use in environments that are very humid or wet, Xu says. Organic chemical bonds tend to be weaker than inorganic bonds, so she speculates that the peptide glass would be less rigid than standard glass. But she says that this property could be beneficial in flexible, miniature devices, such as the lenses of a microscope.
doi: https://doi.org/10.1038/d41586-023-00826-3
References
Xing, R., Yuan, C., Fan, W., Ren, X. & Yan, X. Sci. Adv. 9, eadd8105 (2023).
Category: SF gadgets, Robotics and A.I. / Artificial Intelligence (E, F and NL)
31-03-2023
FUTURE COMPUTERS COULD RUN ON LAB-GROWN "BRAINS"
Get ready for organ-powered devices.
WRITTEN BY RAHUL RAO
Computers are not mechanical brains, and our brains are not biological computers. They differ in function, organization, and composition. Both have circuits, sure, but computer chips are ultimately bits of silicon alloys pressed into highly designed, extremely convenient sizes and shapes, while our brains are carbon-based masses whose structure is still largely a mystery to neuroscientists.
Since the mid-20th century, people have touted the similarities — and considered the possibility of combining — brains and computers. In 1950, sci-fi author Isaac Asimov helped to devise the idea of a “positronic brain” that could bestow robots with the intelligence and self-awareness of a human.
Computer scientists still dwell on the shared features between minds and machines. Artificial neural networks, which power many of today’s AI, mimic the organization of neurons in the human brain. Other researchers are trying to make computer hardware more brain-like, for instance, by replicating the electrical activity of a neuron on a chip.
Researchers designed artificial “neurons” for a futuristic computer chip.
UNIVERSITY OF BATH
There are also researchers like Thomas Hartung, a biochemist and physician at Johns Hopkins University. Hartung and his colleagues are growing “brain organoids,” collections of human skin cells coaxed into resembling brain cells, in the lab. They want to connect the organoids to sensors and other devices and train them to process and store information with the help of machine learning.
Hartung has lofty goals for these organoids. They could help neuroscientists study how brain cells work together. They could also aid pharmacologists who study brain chemistry — for example, people developing treatments for Alzheimer’s disease. Hartung believes brain organoids can eventually replace the animal subjects typically used in these experiments.
But ultimately, Hartung wants to turn the creations into “biological hardware” for computers. In theory, organoids could perform certain tasks using less energy and hold far more memory than current silicon machines.
Hartung and his colleague Lena Smirnova with an image of a brain organoid.
COURTESY OF THOMAS HARTUNG
This dream is already taking shape. A team of researchers in Australia recently taught a collection of brain cells to play the video game Pong using a method somewhat similar to training a dog.
The team hooked their organoid up to electrodes and fed it details on the ball’s position; the organoid sent electrical signals back to control the paddle. If the organoid successfully hit the ball, the researchers “rewarded” it with an electrical stimulus, somewhat like a treat for a pup that sits on command. The organoid didn’t master Pong, but it managed to perform better with training than it would by random chance.
We spoke to Hartung about what a brain organoid might do next — and when to expect organ-powered computers.
This interview has been edited and condensed for clarity.
If brain cells can play Pong, can they defeat humans?
They only were able to show acute, or short-term, memory. The organoid culture became better and better in each training session, but the next day, everything was forgotten. The expectation is that, now, with the potential to establish long-term memory, we can actually move into memory and learning in the sense people would understand it.
And you cannot easily build production of such complex cell cultures. It takes at least a year. We train many people, but it takes them a year, on average, to get them done.
What’s next for organoid research?
We’re planning to use brain-machine interfaces to control robots. That’s on the plan for about a year’s time from now. So, we want to demonstrate the capability of long-term learning and, ideally, learning a sequence of tasks in a brain organoid.
One of the big changes at the moment is to scale first. We are limited with the brain organoids to about half a millimeter in size … otherwise, we don’t get enough oxygen and nutrients into the center of this cell ball. But that’s just the number of neurons of a fly, so it’s not really worth training. You might lose your organoid and can’t find it anymore!
Our work at the moment aims at producing an organoid which is about 1 centimeter large — which is then, already, twice the size of a mouse brain. That’s substantial, but it requires perfusion, where we create an equivalent to blood vessels to get nutrients into the brain. That’s not rocket science; this has been done for other organs already, but nobody had seen a need so far to produce larger brains.
Tiny organoids in a petri dish at Hartung’s Center for Alternatives to Animal Testing.
CAROLINA ROMERO, CENTER FOR ALTERNATIVE TO ANIMAL TESTING, BLOOMBERG SCHOOL OF PUBLIC HEALTH
How is this method different from using a mouse brain?
At the moment, when you bring a mouse or mouse brain into an experiment, it has a history: There is complex behavior, there is already an architecture in response to the mouse’s life experiences.
With our organoids, we really start from zero. We can influence and control every moment, and by what you feed into it, you can also determine what you study.
Many people are concerned about whether the organoids could suffer, for example. If I don’t give them pain receptors, there cannot be pain reception.
What about a computer that runs on a human brain?
With organoids, we can really control the input. With a human, you cannot really control what this human is experiencing. Even if you put them into a certain controlled environment, you’re limited. You’re also very much limited because — we have a skull. You cannot really poke many electrodes into the human brain easily and then control the experimental situations.
That’s exactly what we can do with organoids.
Hartung creates organoids from human skin cells in his lab.
THOMAS HARTUNG
What advantages might brain organoids have over computers?
There’s a couple of aspects which make the brain still superior to computers. For example, our capability of concluding on the basis of incomplete information, or what we would call intuitive thinking. We can be very fast and take shortcuts. We are often right — not always, but it is much easier to live with a decision that is based on incomplete data.
For example, a child can distinguish cats and dogs after 10 pictures with a pretty good error rate. A computer needs hundreds of pictures.
We can also add information much easier. You learn 10 words in Italian and you add it to your current “model.” Most computers have to just rerun their entire model to integrate this information.
But we should not compete with silicon computers where they are good. My handheld calculator is better than me at doing calculations. Why should I use a brain organoid to make it a calculator? It will likely be limited to what my brain is capable of doing — if it ever achieves something like this.
Organoids could work even quicker than today’s supercomputers, like Japan's record-breaking Fugaku.
STR/AFP/GETTY IMAGES
How far are we from brain-powered computers?
For the last 60 to 70 years, the more we have understood the brain, we have tried to make computers more brain-like, because there are still some advantages.
You can either envisage that you use it as a model to change our computer architecture, or you could at some point even have a biological component to your computational system.
That’s certainly the furthest away. It is science fiction, but I would say: 20 years ago, the iPhone was science fiction.
Are you considering the obvious question: What if these organoids become self-aware?
We are far from anything which is really producing concerns. There is no suffering, there is no self-awareness or consciousness that you can expect from these organoids, for the foreseeable future. But we have to discuss it, because people are feeling uneasy.
So, one of the things our ethicists at Johns Hopkins are doing at the moment is surveying the general population. They are asking, “what do you think about this?” At some point, people say, “Uh, perhaps we should think about this better?”
Then you give them information like, “there’s an informed consent by the donors of these cells,” or “this is done to find drugs for Alzheimer’s.” You test out what people feel about it, and this helps with the communication of this research.
We don’t want this to suddenly backfire. We want to work for the greater good.
Category: SF gadgets, Robotics and A.I. / Artificial Intelligence (E, F and NL)
29-03-2023
AI blog | AI experts and Elon Musk call for a halt to research
Photo: BELGA
How we handle information on the internet is changing completely, and our jobs will soon look very different too. In this blog we follow the lightning-fast rise of ChatGPT and generative AI.
Host of this blog: Dominique Deckmyn
Research into the most advanced AI systems should be halted for six months, to allow discussion of the necessary safety measures. In recent hours that call has been signed by, among others, Elon Musk, Apple co-founder Steve Wozniak and historian Yuval Noah Harari.
The open letter is posted on the website of the Future of Life Institute. The growing list of signatories already includes many prominent tech entrepreneurs and thinkers, among them employees of several major players in AI, such as DeepMind (a sister company of Google) and Stability AI (developer of the image generator Stable Diffusion). So far no one has been spotted from OpenAI, the company that launched its most advanced AI model, GPT-4, a few weeks ago.
Strikingly, the letter very explicitly calls for halting only the development of systems 'more powerful than GPT-4'. Front-runner OpenAI would therefore have to suspend the development of a hypothetical GPT-5 for six months, while competitors could carry on undisturbed to close their gap with OpenAI. That may be why the top people at OpenAI are not among the signatories, but those of its competitors are. The development of AI applications based on existing AI models would also not be hindered.
The letter opens with a warning about 'human-competitive intelligence' and the societal risks associated with the use of AI that rivals human beings. Such a profound change in the history of life on Earth, the letter says, must be planned with great care. Instead, an 'out-of-control race' is under way 'that no one, not even the makers, can understand, predict, or control'.
The call to halt for six months the development of systems that surpass GPT-4 is addressed to all AI labs. If no agreement is reached on this, governments should impose a moratorium.
The six-month breathing space should be used to put regulation and regulatory authorities in place. Another demanded measure is a system of built-in watermarks, so that texts produced by such an AI system can be recognised as such.
We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1] and acknowledged by top AI labs.[2] As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.
Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence, states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.
AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.
Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society.[5] We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.
Category: SF gadgets, Robotics and A.I. / Artificial Intelligence (E, F and NL)
28-03-2023
Filmmaker James Cameron and the Godfather of AI Agree: AI Could Destroy Us Soon
Paul Seaburn
Artificial intelligence is here and two people who are closely connected to it in very different ways agree that it has the potential to eliminate humanity … and the takeover may already be underway. One is James Cameron – the filmmaker and screenwriter responsible for some of the most futuristic and dystopian films of all time, including The Terminator, Aliens, The Abyss, Terminator 2: Judgment Day, Avatar and Avatar: The Way of Water. Cameron not only made movies about artificial intelligence – he pioneered its usage in the process of film production. The other is Geoffrey Hinton, a British computer scientist known as the "godfather of artificial intelligence" for his work in training multi-layer neural networks used in artificial intelligence. Both were recently interviewed on the subjects of artificial intelligence and artificial general intelligence and both agree that AI has the ability to take over humanity and the process may have already begun. Are we living in a James Cameron movie? Is it Titanic?
Is this a movie or our destiny?
“I think A.I. can be great, but also it could literally be the end of the world.”
Appearing recently on the SmartLess podcast, James Cameron was pondering whether an uprising of artificially intelligent machines like the one in The Terminator is possible. Not only does he think it can happen, he says the current state of artificial intelligence makes him "pretty concerned about the potential for misuse of A.I." For those not familiar with the film (spoiler alert), The Terminator is a cybernetic android sent from the future to kill the person whose not-yet-born son is responsible for eventually stopping an artificially intelligent defense network called Skynet, which will become hostile and self-aware and trigger a global nuclear war to exterminate all humans. Needless to say, the recent revelations of conversations with chatbots such as OpenAI’s GPT-4-powered ChatGPT, Google's PaLM and Microsoft’s Bing AI turning strange, hostile and violent have caused many to equate them to The Terminator and Skynet. Cameron says he understands why.
"You talk to all the AI scientists and every time I put my hand up at one of their seminars they start laughing. The point is that no technology has ever not been weaponized. And do we really want to be fighting something smarter than us that isn't us? On our own world? I don't think so.”
Cameron is, of course, correct in his assessment of the weaponization of technology. However, it is his next comment that is the real cause for concern.
“AI could have taken over the world and already be manipulating it but we just don't know because it would have total control over all the media and everything."
Think about the fears being expressed about ChatGPT and other forms of AI being used to collect news, write news stories and even deliver them in the form of very humanlike – and in this case, ironic – avatars of human newscasters. Could AI have already penetrated the media and be working its way into taking over the world? Is this another Terminator sequel in real life?
"I think it's very reasonable for people to be worrying about these issues now, even though it's not going to happen in the next year or two. People should be thinking about those issues."
In an interview with CBS News, Geoffrey Hinton, the "godfather of artificial intelligence," said he thinks Cameron is right to be worried about the weaponization of artificial intelligence and a possible takeover of humanity that could lead to its destruction. Hinton knows what he’s talking about. He is the descendant of computer and mathematics royalty – his great-great-grandmother was Mary Everest Boole, who was influential in promoting mathematics education for both boys and girls, and her husband was logician George Boole, whose invention of Boolean algebra and Boolean logic is credited with laying the foundations for modern computer science and the Information Age. Hinton has carried on the tradition of his illustrious ancestors – he was awarded the 2018 Turing Award, with Yoshua Bengio and Yann LeCun, for their work on deep learning. On the subject of the weaponization of AI, Hinton has been speaking out against it for years – he moved from the U.S. to Canada because he was against the military funding of artificial intelligence, and has regularly spoken out against lethal autonomous weapons. One concern he expressed in the CBS interview was the rapidity of AI development.
"Until quite recently, I thought it was going to be like 20 to 50 years before we have general purpose AI. And now I think it may be 20 years or less.”
He is also worried about one of the very things he helped develop – computers coming up with their own ideas for self-improvement, warning that “We have to think hard about how you control that." When asked about the possibility of one of Cameron’s Terminators being developed with artificial general intelligence that takes it beyond human capabilities to the point of acting on its own and potentially threatening the very existence of humanity, he answered cautiously:
"It's not inconceivable, that's all I'll say,"
Not inconceivable? Or already deliverable?
Not inconceivable! This is from the godfather of artificial intelligence! Why are we not panicking? Why is Hinton not panicking? Or moving farther away than Canada? He explains that on the more conceivable side, things aren’t so bad.
“The phrase ‘artificial general intelligence’ carries with it the implication that this sort of single robot is suddenly going to be smarter than you. I don’t think it’s going to be that. I think more and more of the routine things we do are going to be replaced by AI systems — like the Google Assistant.”
What about ChatGPT?
"We're at this transition point now where ChatGPT is this kind of idiot savant, and it also doesn't really understand about truth."
That is a key problem with ChatGPT – its responses are often far from the truth, yet they are presented as facts, even as the system tries to figure out what it is doing and works towards being truthful, factual and consistent. Hinton’s final warning comes straight out of the Wizard of Oz … we need to be worried about who is doing the development and working the controls behind the curtain.
"You don't want some big for-profit company deciding what's true."
James Cameron and Geoffrey Hinton … geniuses in different fields who agree on the potential dangers of artificial general intelligence. Are we going to listen to them or to the big for-profit companies?
Artificial intelligence is permeating every sector of society. Systems like ChatGPT have been rolled out for public consumption boasting an interactive dialogue, and an ability to write ‘in your voice.’ But how ‘intelligent’ is this new artificial intelligence? We have a little fun putting it to the test.
For years now, scientists have been raising ethical concerns about the creation and use of lab-grown mini brains.
At the same time, other scientists are plowing full steam ahead, creating these brain organoids and trying to find ways to put them to good use.
Now, a group of scientists that fall into the latter category are trying to develop something called “organoid intelligence.”
They shared their research in a recent edition of the journal Frontiers in Science.
Essentially, they want to use these lab-grown mini brains as biological hardware for new biocomputers, LiveScience reports.
“While silicon-based computers are certainly better with numbers, brains are better at learning,” said one of the scientists, Thomas Hartung of Johns Hopkins University. “For example, AlphaGo [the AI that beat the world’s number one Go player in 2017] was trained on data from 160,000 games. A person would have to play five hours a day for more than 175 years to experience these many games.”
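A quick back-of-the-envelope check of that comparison (my own arithmetic, and the assumption of roughly two hours per serious game of Go is mine, not Hartung's):

$$ 160{,}000 \text{ games} \times 2\ \text{h/game} = 320{,}000 \text{ h}, \qquad \frac{320{,}000\ \text{h}}{5\ \text{h/day} \times 365\ \text{days/year}} \approx 175 \text{ years} $$

So the quoted figure is at least internally consistent: a human simply cannot see the volume of play that AlphaGo was trained on.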
But… brains? In a computer? Why?
Fluorescent images illustrating cell types in brain organoids.
In a press release about their research, the scientists wrote, “Brains are not only superior learners, they are also more energy efficient. For instance, the amount of energy spent training AlphaGo is more than is needed to sustain an active adult for a decade.”
Hartung added: “We’re reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100 billion neurons linked through over 10^15 connection points. It’s an enormous power difference compared to our current technology.”
In parallel, the authors are also developing technologies to communicate with the organoids: in other words, to send them information and read out what they’re ‘thinking’. The authors plan to adapt tools from various scientific disciplines, such as bioengineering and machine learning, as well as engineer new stimulation and recording devices.
“We developed a brain-computer interface device that is a kind of an EEG cap for organoids, which we presented in an article published last August. It is a flexible shell that is densely covered with tiny electrodes that can both pick up signals from the organoid, and transmit signals to it,” said Hartung.
But what about all those sticky ethical questions about creating mini-brains just to do tasks for us humans?
Creating human brain organoids that can learn, remember, and interact with their environment raises complex ethical questions. For example, could they develop consciousness, even in a rudimentary form? Could they experience pain or suffering? And what rights would people have concerning brain organoids made from their cells?
The authors are acutely aware of these issues.
“A key part of our vision is to develop OI in an ethical and socially responsible manner,” Hartung said. “For this reason, we have partnered with ethicists from the very beginning to establish an ‘embedded ethics’ approach. All ethical issues will be continuously assessed by teams made up of scientists, ethicists, and the public, as the research evolves.”
A couple of years prior to that, scientists worried that the mini brains they grew in a lab may be sentient and feel pain.
Is hooking these mini brains up to a computer really a great idea? What if they get access to the internet and start secretly communicating with self-healing superhuman robots?
ALL RELATED VIDEOS, selected and posted by peter2011
What are 'minibrains'? Everything to know about brain organoids
Nicoletta Lanese
In the past decade, lab-grown blobs of human brain tissue began making news headlines, as they ushered in a new era of scientific discovery and raised a slew of ethical questions.
These blobs — scientifically known as brain organoids, but often called "minibrains" in the news — serve as miniature, simplified models of full-size human brains. These organoids can potentially be useful in basic research, drug development and even computer science.
However, as scientists make these models more sophisticated, there's a question as to whether they could ever become too similar to human brains and thus gain consciousness, in some form or another.
How are minibrains made?
Scientists grow brain organoids from stem cells, a type of immature cell that can give rise to any cell type, whether blood, skin, bowel or brain.
The stem cells used to grow organoids can either come from adult human cells, or more rarely, human embryonic tissue, according to a 2021 review in the Journal of Biomedical Science. In the former case, scientists collect adult cells and then expose them to chemicals in order to revert them into a stem cell-like state. The resulting stem cells are called "induced pluripotent stem cells" (iPSC), which can be made to grow into any kind of tissue.
To give rise to a minibrain, scientists embed these stem cells in a protein-rich matrix, a substance that supports the cells as they divide and form a 3D shape. Alternatively, the cells may be grown atop a physical, 3D scaffold, according to a 2020 review in the journal Frontiers in Cell and Developmental Biology.
To coax the stem cells to form different tissues, scientists introduce specific molecules and growth factors — substances that spur cell growth and replication — into the cell culture system at precise points in their development. In addition, scientists often place the stem cells in spinning bioreactors as they grow into minibrains. These devices keep the growing organoids suspended, rather than smooshed against a flat surface; this helps the organoids absorb nutrients and oxygen from the well-stirred solution surrounding them.
Brain organoids grow more complex as they develop, similar to how human embryos grow more and more complex in the womb. Over time, the organoids come to contain multiple kinds of cells found in full-size human brains; mimic specific functions of human brain tissue; and show similar spatial organization to isolated regions of the brain, though both their structure and function are simpler than that of a real human brain, according to the Journal of Biomedical Science review.
Why are scientists growing minibrains?
Minibrains can be used in a variety of applications. For example, scientists are using the blobs of tissue to study early human development.
To this end, scientists have grown brain organoids with a set of eye-like structures called "optic cups;" in human embryos in the womb, the optic cup eventually gives rise to the light-sensitive retina at the back of the eye. Another group grew organoids that generate brain waves similar to those seen in preterm babies, and another used minibrains to help explain why a common drug can cause birth defects and developmental disorders if taken during pregnancy. Models like these allow researchers to glimpse the brain as it appears in early pregnancy, a feat that would be both difficult and unethical in humans.
Minibrains can also be used to model conditions that affect adults, including infectious diseases that affect the brain, brain tumors and neurodegenerative disorders like Alzheimer's and Parkinson's disease, according to the Frontiers in Cell and Developmental Biology review. In addition, some groups are developing minibrains for drug screening, to see if a given medication could be toxic to human patients' brains, according to a 2021 review in the journal Frontiers in Genetics.
Such models could complement or eventually replace research conducted with cells in lab dishes and in animals; even studies in primates, whose brains closely resemble humans', can't reliably capture exactly what happens in human disease. For now, though, experts agree that brain organoids are not advanced enough to partially or fully replace established cell and animal models of disease. But someday, scientists hope these models will lead to the development of new drugs and reduce the need for animal research; some researchers are even testing whether it could be feasible to repair the brain by "plugging" injuries with lab-grown human minibrains.
(Image: a histological cross-section of a rat's brain, shown in red, with a glowing green blob at the top right; the blob is a clump of cells, called an organoid, derived from human stem cells and transplanted into the rat's brain.)
Beyond medicine and the study of human development, minibrains can also be used to study human evolution. Recently, scientists used brain organoids to study which genes allowed the human brain to grow so large, and others have used organoids to study how human brains differ from those of apes and Neanderthals.
Finally, some scientists want to use brain organoids to power computer systems. In an early test of this technology, one group recently crafted a minibrain out of human and mouse brain cells that successfully played "Pong" after being hooked up to a computer-controlled electrode array.
And in a recent proposal published in the journal Frontiers in Science, scientists announced their plans to grow large brain organoids, containing tens of thousands to millions of cells, and link them together to create complex networks that can serve as the basis for future biocomputers.
Could minibrains ever be sentient?
Although sometimes called "minibrains," brain organoids aren't truly miniaturized human brains. Rather, they are roughly spherical balls of brain tissue that mimic some features of the full-size human brain. For example, cerebral organoids, which contain cell types found in the cerebral cortex, the wrinkled outer surface of the brain, contain several layers of tissue, as a real cortex would.
Similarly, brain organoids can generate chemical messages and brain waves similar to what's seen in a full-size brain, but that doesn't mean they can "think," experts say. That said, one sticking point in this discussion is the fact that neuroscientists don't have an agreed-upon definition of consciousness, nor do they have standardized ways to measure the phenomenon, Nature reported in 2020.
The National Academies of Sciences, Engineering, and Medicine assembled a committee to tackle these quandaries and released a report in 2021, outlining some of the potential ethical issues of working with brain organoids.
At the time, the authors concluded that "In the foreseeable future, it is extremely unlikely that [brain organoids] would possess capabilities that, given current understanding, would be recognized as awareness, consciousness, emotion, or the experience of pain. From a moral perspective, neural organoids do not differ at present from other in vitro human neural tissues or cultures. However, as scientists develop significantly more complex organoids, the possible need to make this distinction should be revisited regularly."
When Rohit Bhattacharya began his PhD in computer science, his aim was to build a tool that could help physicians to identify people with cancer who would respond well to immunotherapy. This form of treatment helps the body’s immune system to fight tumours, and works best against malignant growths that produce proteins that immune cells can bind to. Bhattacharya’s idea was to create neural networks that could profile the genetics of both the tumour and a person’s immune system, and then predict which people would be likely to benefit from treatment.
But he discovered that his algorithms weren’t up to the task. He could identify patterns of genes that correlated to immune response, but that wasn’t sufficient [1]. “I couldn’t say that this specific pattern of binding, or this specific expression of genes, is a causal determinant in the patient’s response to immunotherapy,” he explains.
Bhattacharya was stymied by the age-old dictum that correlation does not equal causation — a fundamental stumbling block in artificial intelligence (AI). Computers can be trained to spot patterns in data, even patterns that are so subtle that humans might miss them. And computers can use those patterns to make predictions — for instance, that a spot on a lung X-ray indicates a tumour [2]. But when it comes to cause and effect, machines are typically at a loss. They lack a common-sense understanding of how the world works that people have just from living in it. AI programs trained to spot disease in a lung X-ray, for example, have sometimes gone astray by zeroing in on the markings used to label the right-hand side of the image [3]. It is obvious, to a person at least, that there is no causal relationship between the style and placement of the letter ‘R’ on an X-ray and signs of lung disease. But without that understanding, any differences in how such markings are drawn or positioned could be enough to steer a machine down the wrong path.
For computers to perform any sort of decision making, they will need an understanding of causality, says Murat Kocaoglu, an electrical engineer at Purdue University in West Lafayette, Indiana. “Anything beyond prediction requires some sort of causal understanding,” he says. “If you want to plan something, if you want to find the best policy, you need some sort of causal reasoning module.”
Incorporating models of cause and effect into machine-learning algorithms could also help mobile autonomous machines to make decisions about how they navigate the world. “If you’re a robot, you want to know what will happen when you take a step here with this angle or that angle, or if you push an object,” Kocaoglu says.
In Bhattacharya’s case, it was possible that some of the genes that the system was highlighting were responsible for a better response to the treatment. But a lack of understanding of causality meant that it was also possible that the treatment was affecting the gene expression — or that another, hidden factor was influencing both. The potential solution to this problem lies in something known as causal inference — a formal, mathematical way to ascertain whether one variable affects another.
Computer scientist Rohit Bhattacharya (back) and his team at Williams College in Williamstown, Massachusetts, discuss adapting machine learning for causal inference.
Credit: Mark Hopkins
Causal inference has long been used by economists and epidemiologists to test their ideas about causation. The 2021 Nobel prize in economic sciences went to three researchers who used causal inference to ask questions such as whether a higher minimum wage leads to lower employment, or what effect an extra year of schooling has on future income. Now, Bhattacharya is among a growing number of computer scientists who are working to meld causality with AI to give machines the ability to tackle such questions, helping them to make better decisions, learn more efficiently and adapt to change.
A notion of cause and effect helps to guide humans through the world. “Having a causal model of the world, even an imperfect one — because that’s what we have — allows us to make more robust decisions and predictions,” says Yoshua Bengio, a computer scientist who directs Mila – Quebec Artificial Intelligence Institute, a collaboration between four universities in Montreal, Canada. Humans’ grasp of causality supports attributes such as imagination and regret; giving computers a similar ability could transform their capabilities.
Climbing the ladder
The headline successes of AI over the past decade — such as winning against people at various competitive games, identifying the content of images and, in the past few years, generating text and pictures in response to written prompts — have been powered by deep learning. By studying reams of data, such systems learn how one thing correlates with another. These learnt associations can then be put to use. But this is just the first rung on the ladder towards a loftier goal: something that Judea Pearl, a computer scientist and director of the Cognitive Systems Laboratory at the University of California, Los Angeles, refers to as “deep understanding”.
In 2011, Pearl won the A.M. Turing Award, often referred to as the Nobel prize for computer science, for his work developing a calculus to allow probabilistic and causal reasoning. He describes a three-level hierarchy of reasoning [4]. The base level is ‘seeing’, or the ability to make associations between things. Today’s AI systems are extremely good at this. Pearl refers to the next level as ‘doing’ — making a change to something and noting what happens. This is where causality comes into play.
A computer can develop a causal model by examining interventions: how changes in one variable affect another. Instead of creating one statistical model of the relationship between variables, as in current AI, the computer makes many. In each one, the relationship between the variables stays the same, but the values of one or several of the variables are altered. That alteration might lead to a new outcome. All of this can be evaluated using the mathematics of probability and statistics. “The way I think about it is, causal inference is just about mathematizing how humans make decisions,” Bhattacharya says.
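A minimal sketch of what 'examining interventions' means in code, using a toy structural causal model invented purely for illustration (the variables, probabilities and the rain/sprinkler story are my own, not from the article): the system is simulated once as observed, and once with one variable forced to a value, and the resulting distributions are compared.

```python
import random

def simulate(n=100_000, do_sprinkler=None):
    """Toy structural causal model: rain -> sprinkler -> wet grass.
    Passing do_sprinkler forces the sprinkler on or off (an intervention),
    which cuts the causal arrow from rain to sprinkler."""
    wet_count = 0
    for _ in range(n):
        rain = random.random() < 0.3
        if do_sprinkler is None:
            sprinkler = random.random() < (0.1 if rain else 0.6)  # observational mechanism
        else:
            sprinkler = do_sprinkler                              # intervention do(S = s)
        wet = rain or (sprinkler and random.random() < 0.9)
        wet_count += wet
    return wet_count / n

print("P(wet)             =", round(simulate(), 3))
print("P(wet | do(S=on))  =", round(simulate(do_sprinkler=True), 3))
print("P(wet | do(S=off)) =", round(simulate(do_sprinkler=False), 3))
```

Comparing those three numbers is the 'doing' rung of Pearl's ladder: the effect of the sprinkler on the grass only becomes visible once you reach in and set it, rather than merely watch it.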
Yoshua Bengio (front) directs Mila – Quebec Artificial Intelligence Institute in Montreal, Canada.
Credit: Mila-Quebec AI Institute
Bengio, who won the A.M. Turing Award in 2018 for his work on deep learning, and his students have trained a neural network to generate causal graphs [5] — a way of depicting causal relationships. At their simplest, if one variable causes another variable, it can be shown with an arrow running from one to the other. If the direction of causality is reversed, so too is the arrow. And if the two are unrelated, there will be no arrow linking them. Bengio’s neural network is designed to randomly generate one of these graphs, and then check how compatible it is with a given set of data. Graphs that fit the data better are more likely to be accurate, so the neural network learns to generate more graphs similar to those, searching for one that fits the data best.
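Bengio's system uses a neural network to propose the graphs and far more careful scoring; purely to make the propose-and-score loop concrete (the three variables, the synthetic data and the crude score below are all invented for this sketch, not taken from his work), it might look something like this:

```python
import itertools, random
import numpy as np

# Synthetic data from a known ground truth, X -> Y -> Z, used only for illustration.
n = 2000
X = np.random.normal(size=n)
Y = 2.0 * X + np.random.normal(size=n)
Z = -1.5 * Y + np.random.normal(size=n)
data = {"X": X, "Y": Y, "Z": Z}

def random_dag(nodes):
    """Pick a random ordering and keep each forward edge with probability 0.5 (always acyclic)."""
    order = random.sample(nodes, len(nodes))
    return {(a, b) for a, b in itertools.combinations(order, 2) if random.random() < 0.5}

def score(dag, data):
    """Higher is better: reward graphs whose parents explain each node well,
    with a penalty per edge (a crude stand-in for a proper likelihood or BIC score)."""
    total = 0.0
    for node in data:
        parents = [p for (p, c) in dag if c == node]
        y = data[node]
        if parents:
            A = np.column_stack([data[p] for p in parents] + [np.ones(n)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ coef
        else:
            resid = y - y.mean()
        total -= n * np.log(resid.var() + 1e-9)
    return total - 10.0 * len(dag)

best = max((random_dag(list(data)) for _ in range(500)), key=lambda g: score(g, data))
print("best-scoring graph:", sorted(best))  # typically the X-Y-Z chain, or an equivalent orientation of it
```

The real research replaces the random proposals with a trained generator and the brute-force scoring with something far more efficient, but the shape of the search is the same: generate candidate graphs and keep the ones that explain the data best.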
This approach is akin to how people work something out: people generate possible causal relationships, and assume that the ones that best fit an observation are closest to the truth. Watching a glass shatter when it is dropped onto concrete, for instance, might lead a person to think that the impact on a hard surface causes the glass to break. Dropping other objects onto concrete, or knocking a glass onto a soft carpet, from a variety of heights, enables a person to refine their model of the relationship and better predict the outcome of future fumbles.
Face the changes
A key benefit of causal reasoning is that it could make AI more able to deal with changing circumstances. Existing AI systems that base their predictions only on associations in data are acutely vulnerable to any changes in how those variables are related. When the statistical distribution of learnt relationships changes — whether owing to the passage of time, human actions or another external factor — the AI will become less accurate.
For instance, Bengio could train a self-driving car on his local roads in Montreal, and the AI might become good at operating the vehicle safely. But export that same system to London, and it would immediately break for a simple reason: cars are driven on the right in Canada and on the left in the United Kingdom, so some of the relationships the AI had learnt would be backwards. He could retrain the AI from scratch using data from London, but that would take time, and would mean that the software would no longer work in Montreal, because its new model would replace the old one.
A causal model, on the other hand, allows the system to learn about many possible relationships. “Instead of having just one set of relationships between all the things you could observe, you have an infinite number,” Bengio says. “You have a model that accounts for what could happen under any change to one of the variables in the environment.”
Humans operate with such a causal model, and can therefore quickly adapt to changes. A Canadian driver could fly to London and, after taking a few moments to adjust, could drive perfectly well on the left side of the road. The UK Highway Code means that, unlike in Canada, right turns involve crossing traffic, but it has no effect on what happens when the driver turns the wheel or how the tyres interact with the road. “Everything we know about the world is essentially the same,” Bengio says. Causal modelling enables a system to identify the effects of an intervention and account for it in its existing understanding of the world, rather than having to relearn everything from scratch.
Judea Pearl, director of the Cognitive Systems Laboratory at the University of California, Los Angeles, won the 2011 A.M. Turing Award.
Credit: UCLA Samueli School of Engineering
This ability to grapple with changes without scrapping everything we know also allows humans to make sense of situations that aren’t real, such as fantasy movies. “Our brain is able to project ourselves into an invented environment in which some things have changed,” Bengio says. “The laws of physics are different, or there are monsters, but the rest is the same.”
Counter to fact
The capacity for imagination is at the top of Pearl’s hierarchy of causal reasoning. The key here, Bhattacharya says, is speculating about the outcomes of actions not taken.
Bhattacharya likes to explain such counterfactuals to his students by reading them ‘The Road Not Taken’ by Robert Frost. In this poem, the narrator talks of having to choose between two paths through the woods, and expresses regret that they can’t know where the other road leads. “He’s imagining what his life would look like if he walks down one path versus another,” Bhattacharya says. That is what computer scientists would like to replicate with machines capable of causal inference: the ability to ask ‘what if’ questions.
Imagining whether an outcome would have been better or worse if we’d taken a different action is an important way that humans learn. Bhattacharya says it would be useful to imbue AI with a similar capacity for what is known as ‘counterfactual regret’. The machine could run scenarios on the basis of choices it didn’t make and quantify whether it would have been better off making a different one. Some scientists have already used counterfactual regret to help a computer improve its poker playing [6].
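Full counterfactual regret minimization works over a game tree of information sets, but its core update, regret matching, is tiny. The sketch below applies it to rock-paper-scissors against a fixed, exploitable opponent; the game and the numbers are chosen only to keep the example small, and this is not the poker system cited above.

```python
import random

ACTIONS = ["rock", "paper", "scissors"]

def payoff(a, b):
    """+1 win, 0 draw, -1 loss for the first player."""
    wins = {("rock", "scissors"), ("paper", "rock"), ("scissors", "paper")}
    return 1 if (a, b) in wins else 0 if a == b else -1

regret = {a: 0.0 for a in ACTIONS}
strategy_sum = {a: 0.0 for a in ACTIONS}

def current_strategy():
    """Play in proportion to accumulated positive regret (regret matching)."""
    positive = {a: max(r, 0.0) for a, r in regret.items()}
    total = sum(positive.values())
    if total == 0:
        return {a: 1 / len(ACTIONS) for a in ACTIONS}
    return {a: p / total for a, p in positive.items()}

for _ in range(20_000):
    strat = current_strategy()
    my_action = random.choices(ACTIONS, weights=[strat[a] for a in ACTIONS])[0]
    opp_action = random.choices(ACTIONS, weights=[0.5, 0.3, 0.2])[0]  # opponent overplays rock
    for a in ACTIONS:
        # Regret: how much better action a would have done than the action actually taken.
        regret[a] += payoff(a, opp_action) - payoff(my_action, opp_action)
        strategy_sum[a] += strat[a]

total = sum(strategy_sum.values())
print({a: round(v / total, 3) for a, v in strategy_sum.items()})
# The average strategy drifts towards paper, the best response to an opponent who overplays rock.
```

The machine is, in effect, asking after every round how much it would have regretted not playing each of the other actions, and nudging itself towards the choices it regrets least.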
The ability to imagine different scenarios could also help to overcome some of the limitations of existing AI, such as the difficulty of reacting to rare events. By definition, Bengio says, rare events show up only sparsely, if at all, in the data that a system is trained on, so the AI can’t learn about them. A person driving a car can imagine an occurrence they’ve never seen, such as a small plane landing on the road, and use their understanding of how things work to devise potential strategies to deal with that specific eventuality. A self-driving car without the capability for causal reasoning, however, could at best default to a generic response for an object in the road. By using counterfactuals to learn rules for how things work, cars could be better prepared for rare events. Working from causal rules rather than a list of previous examples ultimately makes the system more versatile.
Using causality to program imagination into a computer could even lead to the creation of an automated scientist. During a 2021 online summit sponsored by Microsoft Research, Pearl suggested that such a system could generate a hypothesis, pick the best observation to test that hypothesis and then decide what experiment would provide that observation.
Right now, however, this remains a way off. The theory and basic mathematics of causal inference are well established, but the methods for AI to realize interventions and counterfactuals are still at an early stage. “This is still very fundamental research,” Bengio says. “We’re at the stage of figuring out the algorithms in a very basic way.” Once researchers have grasped these fundamentals, algorithms will then need to be optimized to run efficiently. It is uncertain how long this will all take. “I feel like we have all the conceptual tools to solve this problem and it’s just a matter of a few years, but usually it takes more time than you expect,” Bengio says. “It might take decades instead.”
Bhattacharya thinks that researchers should take a leaf from machine learning, whose rapid proliferation was driven in part by programmers developing open-source software that gives others access to the basic tools for writing algorithms. Equivalent tools for causal inference could have a similar effect. “There’s been a lot of exciting developments in recent years,” Bhattacharya says, including some open-source packages from tech giant Microsoft and from Carnegie Mellon University in Pittsburgh, Pennsylvania. He and his colleagues also developed an open-source causal module they call Ananke. But these software packages remain a work in progress.
Bhattacharya would also like to see the concept of causal inference introduced at earlier stages of computer education. Right now, he says, the topic is taught mainly at the graduate level, whereas machine learning is common in undergraduate training. “Causal reasoning is fundamental enough that I hope to see it introduced in some simplified form at the high-school level as well,” he says.
If these researchers are successful at building causality into computing, it could bring AI to a whole new level of sophistication. Robots could navigate their way through the world more easily. Self-driving cars could become more reliable. Programs for evaluating the activity of genes could lead to new understanding of biological mechanisms, which in turn could allow the development of new and better drugs. “That could transform medicine,” Bengio says.
Even something such as ChatGPT, the popular natural-language generator that produces text that reads as though it could have been written by a human, could benefit from incorporating causality. Right now, the algorithm betrays itself by producing clearly written prose that contradicts itself and goes against what we know to be true about the world. With causality, ChatGPT could build a coherent plan for what it was trying to say, and ensure that it was consistent with facts as we know them.
Asked whether that would put writers out of business, Bengio says that could take some time. “But how about you lose your job in ten years, but you’re saved from cancer and Alzheimer’s,” he says. “That’s a good deal.”
The US National Ignition Facility has reported that it has achieved the phenomenon of ignition.
Credit: Jason Laurea/Lawrence Livermore National Laboratory
Scientists at the world’s largest nuclear-fusion facility have for the first time achieved the phenomenon known as ignition — creating a nuclear reaction that generates more energy than it consumes. News of the breakthrough at the US National Ignition Facility (NIF), made on 5 December and announced today by US President Joe Biden’s administration, has excited the global fusion-research community. That research aims to harness nuclear fusion — the phenomenon that powers the Sun — to provide a source of near-limitless clean energy on Earth. Researchers caution that, despite this latest success, a long path remains to achieving that goal.
“It’s an incredible accomplishment,” says Mark Herrmann, the deputy programme director for fundamental weapons physics at Lawrence Livermore National Laboratory in California, which houses the fusion laboratory. The landmark experiment follows years of work by multiple teams on everything from lasers and optics to targets and computer models, Herrmann says. “That is of course what we are celebrating.”
A flagship experimental facility of the US Department of Energy’s nuclear-weapons programme, designed to study thermonuclear explosions, NIF originally aimed to achieve ignition by 2012 and has faced criticism for delays and cost overruns. In August 2021, NIF scientists announced that they had used their high-powered laser device to achieve a record reaction that crossed a key threshold in achieving ignition, but efforts to replicate that experiment failed. Ultimately, scientists scrapped efforts to replicate that shot, and rethought the experimental design — a choice that paid off last week.
“There were a lot of people who didn’t think it was possible, but I and others who kept the faith feel somewhat vindicated,” says Michael Campbell, former director of the laser energetics laboratory at the University of Rochester in New York and an early proponent of NIF while at Lawrence Livermore lab. “I’m having a cosmo to celebrate.”
Nature looks at NIF’s latest experiment and what it means for fusion science.
What did NIF achieve?
The facility used its set of 192 lasers to deliver 2.05 megajoules of energy onto a pea-sized gold cylinder containing a frozen pellet of the hydrogen isotopes deuterium and tritium. The laser’s pulse of energy caused the capsule to collapse, reaching temperatures only seen in stars and thermonuclear weapons, and the hydrogen isotopes fused into helium, releasing additional energy and creating a cascade of fusion reactions. The laboratory’s analysis suggests that the reaction released some 3.15 MJ of energy — roughly 54% more than went into the reaction, and more than double the previous record of 1.3 MJ.
“Fusion research has been going on since the early 1950s, and this is the first time in the laboratory that fusion has ever produced more energy than it consumed,” says Campbell.
However, although the fusion reactions produced more than 3 MJ of energy — more than was delivered to the target — NIF’s lasers consumed 322 MJ of energy in the process. Still, the experiment qualifies as ignition, a benchmark criterion for fusion reactions.
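To put those two figures side by side (a simple worked comparison using only the numbers quoted above):

$$ Q_{\text{target}} = \frac{3.15\ \text{MJ}}{2.05\ \text{MJ}} \approx 1.54, \qquad \frac{3.15\ \text{MJ}}{322\ \text{MJ}} \approx 0.01 $$

In other words, the reaction returned about 54% more energy than the lasers delivered to the target, but only about 1% of the energy the laser system itself consumed, which is why the shot counts as ignition while remaining far from a power plant.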
“It’s a big milestone, but NIF is not a fusion-energy device,” says David Hammer, a nuclear-energy engineer at Cornell University in Ithaca, New York.
Herrmann acknowledges as much, saying that there are many steps on the path to laser fusion energy. “NIF was not designed to be efficient,” he says. “It was designed to be the biggest laser we could possibly build to give us the data we need for the [nuclear] stockpile research programme.”
NIF scientists made multiple changes before the latest laser shot, based in part on analysis and computer modelling of previous experiments. In addition to boosting the laser’s power by around 8%, scientists reduced the number of imperfections in the target and adjusted how they delivered the laser energy to create a more spherical implosion. Operating at the cusp of fusion ignition, the scientists knew that “little changes can make a big difference”, Herrmann says.
Why are these results significant?
On one level, it’s about proving what is possible, and many scientists have hailed the result as a milestone in fusion science. But the results carry particular significance at NIF: the facility was designed to help nuclear-weapons scientists study the intense heat and pressures inside explosions, and that is possible only if the laboratory produces high-yield fusion reactions.
It took more than a decade, “but they can be commended for reaching their goal”, says Stephen Bodner, a physicist who formerly headed the laser plasma branch of the US Naval Research Laboratory in Washington DC. Bodner says the big question now is what the Department of Energy will do next: double down on weapons research at the NIF or pivot to a laser programme geared towards fusion-energy research.
What does this mean for fusion energy?
The latest results have already renewed buzz about a future powered by clean fusion energy, but experts warn that there is a long road ahead.
NIF was not designed with commercial fusion energy in mind — and many researchers doubt that laser-driven fusion will be the approach that ultimately yields fusion energy. Nevertheless, Campbell thinks that its latest success could boost confidence in the promise of laser fusion power and spur a programme focused on energy applications. “This is absolutely necessary to have the credibility to sell an energy programme,” he says.
Lawrence Livermore National Laboratory director Kim Budil described the achievement as a proof of concept. “I don’t want to give you a sense that we’re going to plug the NIF into the grid: that is definitely not how this works,” she said during a press conference in Washington DC. “But this is the fundamental building block of an inertial confinement fusion power scheme.”
There are many other experiments worldwide that are trying to achieve fusion for energy applications using different approaches. But engineering challenges remain, including the design and construction of plants that extract the heat produced by the fusion and use it to generate significant amounts of energy to be turned into usable electricity.
“Although positive news, this result is still a long way from the actual energy gain required for the production of electricity,” said Tony Roulstone, a nuclear-energy researcher at the University of Cambridge, UK, in a statement to the Science Media Centre in London.
Still, “the NIF experiments focused on fusion energy absolutely are valuable on the path to commercial fusion power”, says Anne White, a plasma physicist at the Massachusetts Institute of Technology in Cambridge.
What are the next major milestones in fusion?
To demonstrate that the type of fusion studied at NIF can be a viable way of producing energy, the efficiency of the yield — the energy released compared to the energy that goes into producing the laser pulses — needs to grow by at least two orders of magnitude.
Researchers will also need to dramatically increase the rate at which the laser’s pulses can be produced and how quickly they can clear the target chamber to prepare for another burn, says Tim Luce, head of science and operation at the international nuclear-fusion reactor ITER, which is under construction in St-Paul-lès-Durance, France.
“Sufficient fusion-energy-producing events at repeated performance would be a major milestone of interest,” says White.
The US$22-billion ITER project — a collaboration between China, the European Union, the United Kingdom, India, Japan, South Korea, Russia and the United States — aims to achieve self-sustaining fusion, meaning that the energy from fusion produces more fusion, through a different technique from NIF’s ‘inertial confinement’ approach. ITER will keep a plasma of deuterium and tritium confined in a doughnut-shaped vacuum chamber, known as a tokamak, and heat it up until the nuclei fuse. Once the reactor starts working towards fusion, currently planned for 2035, it will aim to reach ‘burning’ stage, “where the self-heating power is the dominant source of heating”, Luce explains.
What does it mean for other fusion experiments?
NIF and ITER use only two of many fusion-technology concepts being pursued worldwide. The approaches include the magnetic confinement of plasma, using tokamaks and devices called stellarators; inertial confinement, used by NIF; and hybrids of the two.
The technology required to generate electricity from fusion is largely independent of the concept, says White, and this latest milestone won’t necessarily lead researchers to abandon or consolidate their concepts.
The engineering challenges faced by NIF are different from those at ITER and other facilities. But the symbolic achievement could have widespread effects. “A result like this will bring increased interest in the progress of all types of fusion, so it should have a positive impact on fusion research in general,” says Luce.
- Average rating: 0/5 (0 votes) - Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F en NL)
This Plane Will Change Travel Forever
Before the experts settled on the ultimate design for planes, a ton of bizarre flying contraptions were proposed, and just because we now have designs that work doesn't stop people from imagining new ways to explore the sky.
- Average rating: 0/5 (0 votes) - Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F en NL)
21-02-2023
How real is 3D holographic blue beam technology?
Much has been written about the infamous Project Blue Beam, which is said to be intended to stage a fake alien invasion.
Now there are rumors that this blue beam technology will soon be rolled out in order to convince people that there is a threat from outer space.
Of course there are secret government projects that use certain technologies, and they have powerful satellites and ground-based systems that can project holograms. They have been developing and perfecting this holographic blue beam technology for decades, and it looks real.
But creating a fake alien invasion would require an enormous amount of resources and would be very difficult to keep secret. Additionally, such an event would likely have major ethical and political implications that the US government would not want to risk, so it is unlikely the technology would be used to create a false-flag event, but you never know.
Watch the video below of a large fire-breathing dragon flying around at the opening of a baseball game at Happy Dream Park in South Korea in 2019, which was streamed live on sports broadcasting channels, and it is not hard to imagine what is possible using 3D holographic blue beam technology.
- Average rating: 0/5 (0 votes) - Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F en NL)
20-02-2023
How will AI change mathematics? Rise of chatbots highlights discussion
Machine learning tools already help mathematicians to formulate new theories and solve tough problems. But they’re set to shake up the field even more.
AI tools have allowed researchers to solve complex mathematical problems.
Credit: Fadel Senna/AFP/Getty
As interest in chatbots spreads like wildfire, mathematicians are beginning to explore how artificial intelligence (AI) could help them to do their work. Whether it’s assisting with verifying human-written work or suggesting new ways to solve difficult problems, automation is beginning to change the field in ways that go beyond mere calculation, researchers say.
“We’re looking at a very specific question: will machines change math?” says Andrew Granville, a number theorist at the University of Montreal in Canada. A workshop at the University of California, Los Angeles (UCLA), this week explored this question, aiming to build bridges between mathematicians and computer scientists. “Most mathematicians are completely unaware of these opportunities,” says one of the event’s organizers, Marijn Heule, a computer scientist at Carnegie Mellon University in Pittsburgh, Pennsylvania.
Akshay Venkatesh, a 2018 winner of the prestigious Fields Medal who is at the Institute for Advanced Study in Princeton, New Jersey, kick-started a conversation on how computers will change maths at a symposium in his honour in October. Two other recipients of the medal, Timothy Gowers at the Collège de France in Paris and Terence Tao at UCLA, have also taken leading roles in the debate.
“The fact that we have people like Fields medallists and other very famous big-shot mathematicians interested in the area now is an indication that it’s ‘hot’ in a way that it didn’t used to be,” says Kevin Buzzard, a mathematician at Imperial College London.
AI approaches
Part of the discussion concerns what kind of automation tools will be most useful. AI comes in two major flavours. In ‘symbolic’ AI, programmers embed rules of logic or calculation into their code. “It’s what people would call ‘good old-fashioned AI’,” says Leonardo de Moura, a computer scientist at Microsoft Research in Redmond, Washington.
The other approach, which has become extremely successful in the past decade or so, is based on artificial neural networks. In this type of AI, the computer starts more or less from a clean slate and learns patterns by digesting large amounts of data. This is called machine-learning, and it is the basis of ‘large language models’ (including chatbots such as ChatGPT), as well as the systems that can beat human players at complex games or predict how proteins fold. Whereas symbolic AI is inherently rigorous, neural networks can only make statistical guesses, and their operations are often mysterious.
2018 Fields Medal winner Akshay Venkatesh (centre) has spoken about how computers will change mathematics.
Credit: Xinhua/Shutterstock
De Moura helped symbolic AI to score some early mathematical successes by creating a system called Lean. This interactive software tool forces researchers to write out each logical step of a problem, down to the most basic details, and ensures that the maths is correct. Two years ago, a team of mathematicians succeeded at translating an important but impenetrable proof — one so complicated that even its author was unsure of it — into Lean, thereby confirming that it was correct.
The researchers say the process helped them to understand the proof, and even to find ways to simplify it. “I think this is even more exciting than checking the correctness,” de Moura says. “Even in our wildest dreams, we didn’t imagine that.”
As well as making solitary work easier, this sort of ‘proof assistant’ could change how mathematicians work together by eliminating what de Moura calls a “trust bottleneck”. “When we are collaborating, I may not trust what you are doing. But a proof assistant shows your collaborators that they can trust your part of the work.”
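For readers who have never seen a proof assistant, this is the flavour of 'writing out each logical step': a deliberately trivial theorem in Lean 4 syntax (my own toy example, unrelated to the proof mentioned above), where every tactic is a step the software checks and will reject if it is wrong.

```lean
-- Toy example: for every natural number n, n + 0 = 0 + n.
theorem add_zero_comm (n : Nat) : n + 0 = 0 + n := by
  rw [Nat.add_zero]   -- rewrite the left-hand side:  n + 0  becomes  n
  rw [Nat.zero_add]   -- rewrite the right-hand side: 0 + n  becomes  n; goal closes as n = n
```

Real formalizations string together thousands of such steps, which is exactly why having the machine verify each one removes the "trust bottleneck" de Moura describes.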
Sophisticated autocomplete
At the other extreme are chatbot-esque, neural-network-based large language models. At Google in Mountain View, California, former physicist Ethan Dyer and his team have developed a chatbot called Minerva, which specializes in solving maths problems. At heart, Minerva is a very sophisticated version of the autocomplete function on messaging apps: by training on maths papers in the arXiv repository, it has learnt to write down step-by-step solutions to problems in the same way that some apps can predict words and phrases. Unlike Lean, which communicates using something similar to computer code, Minerva takes questions and writes answers in conversational English. “It is an achievement to solve some of these problems automatically,” says de Moura.
Minerva shows both the power and the possible limitations of this approach. For example, it can accurately factor integer numbers into primes — numbers that can’t be divided evenly into smaller ones. But it starts making mistakes once the numbers exceed a certain size, showing that it has not ‘understood’ the general procedure.
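The contrast is between pattern-matching and actually executing a procedure. Factoring makes the point nicely because the procedure itself is short: a system that had genuinely internalised it would only get slower on large numbers, not wrong. A minimal sketch of the deterministic version, for comparison:

```python
def prime_factors(n: int) -> list[int]:
    """Trial division: repeatedly divide out the smallest remaining factor.
    Slow for very large n, but correct at any size, unlike a model that has
    only absorbed the statistical look of small worked examples."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(2022))           # [2, 3, 337]
print(prime_factors(1_000_000_007))  # a much larger input, handled just as reliably
```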
Still, Minerva’s neural network seems to be able to acquire some general techniques, as opposed to just statistical patterns, and the Google team is trying to understand how it does that. “Ultimately, we’d like a model that you can brainstorm with,” Dyer says. He says it could also be useful for non-mathematicians who need to extract information from the specialized literature. Further extensions will expand Minerva’s skills by studying textbooks and interfacing with dedicated maths software.
Dyer says the motivation behind the Minerva project was to see how far the machine-learning approach could be pushed; a powerful automated tool to help mathematicians might end up combining symbolic AI techniques with neural networks.
Maths v. machines
In the longer term, will programs remain part of the supporting cast, or will they be able to conduct mathematical research independently? AI might get better at producing correct mathematical statements and proofs, but some researchers worry that most of those would be uninteresting or impossible to understand. At the October symposium, Gowers said that there might be ways of teaching a computer some objective criteria for mathematical relevance, such as whether a small statement can embody many special cases or even form a bridge between different subfields of maths. “In order to get good at proving theorems, computers will have to judge what is interesting and worth proving,” he said. If they can do that, the future of humans in the field looks uncertain.
Computer scientist Erika Abraham at RWTH Aachen University in Germany is more sanguine about the future of mathematicians. “An AI system is only as smart as we program it to be,” she says. “The intelligence is not in the computer; the intelligence is in the programmer or trainer.”
Melanie Mitchell, a computer scientist and cognitive scientist at the Santa Fe Institute in New Mexico, says that mathematicians’ jobs will be safe until a major shortcoming of AI is fixed — its inability to extract abstract concepts from concrete information. “While AI systems might be able to prove theorems, it’s much harder to come up with interesting mathematical abstractions that give rise to the theorems in the first place.”
Something about dancing robots seems to tickle people’s fancies. After all, millions of people watched a 2020 video from the robotics company Boston Dynamics of its fancy humanoid and quadrupedal devices getting down to ‘60s soul.
More recently, the murderous (yet campy) villain in the movie M3GAN has captivated the internet with her moves — and even earned recognition as a gay icon. Whether we’re obsessing over grooving robots or moving like robots ourselves, automaton choreography clearly holds a place in our hearts.
M3GAN’s creepy yet delightful dance has captivated the internet.
So it’s no surprise that a niche research field dubbed choreorobotics has gained traction in recent years. Brown University even has an entire course dedicated to the subject. Not only are labs programming robots to gyrate and hop, but dance experts are also helping scientists give their devices more fluid, human-like movements. Ultimately, this kind of work could help us feel closer to robots in an increasingly automated world.
Kate Sicchio, a choreographer and digital artist at Virginia Commonwealth University, combines her dance and tech knowledge to devise robot performances. Last year, Sicchio worked with Patrick Martin from the university’s engineering department to produce a (surprisingly touching) human-automaton duet. Offstage, she also helps design machines with more realistic motions.
Inverse talked to Sicchio to learn more about choreorobotics — and whether increasingly limber robots could actually become blood-thirsty killers like M3GAN.
WHY DO YOU THINK ROBOT DANCING VIDEOS GET SO POPULAR?
Boston Dynamics regularly stages elaborate bot performances.
It's really interesting to have this unfamiliar device do this uncanny human thing. It’s similar to why we love putting googly eyes on everything. This makes it human even though it's not supposed to be. And that becomes funny or endearing somehow. It's very popular to make the robot do this very human, expressive thing when it's not human or expressive on its own.
WHAT MAKES A ROBOT PERFORMANCE POWERFUL?
One of the things we found is that a robot on its own feels very isolated and cold. We have this piece called “Amelia and the Machine.” In the opening, this dancer is actually moving the robot arm around.
People are really moved by this intimacy with the robot and the fact that she's touching it.
It's a small manipulator robot, so it's probably the size of a toddler. The fact that she’s sitting next to it — that small connection really changes how people see the robot because it's no longer this isolated thing. All of a sudden it has a companion.
WHAT STYLE OF DANCE DO ROBOTS DO BEST?
A performance of “Amelia and the Machine” co-choreographed by Kate Sicchio at Virginia Commonwealth University.
ANTHONY JOHNSON
My home is contemporary dance, so that's where I go first. That tends to work well because, with the robot we’re using, it's not a one-to-one mapping of the human body onto the robot. Sometimes it's hard to do traditional ballet, where there are really specific positions to hit. It’s really hard to map an arabesque onto a robot that doesn't have a leg.
I think contemporary dance, where there's a lot of freedom and creativity in how you develop movement, works well. I would be interested in doing things with dance forms with more rhythm or more structure and timing — that would be a really interesting study to follow up with at some point. More tutting or street dance forms could be really interesting to play with.
THE M3GAN DANCE SEEMS TO FRIGHTEN, OR AT LEAST CONFUSE, VIEWERS. CAN DANCING DEVICES BACKFIRE AND ACTUALLY ALIENATE US FROM ROBOTS?
That’s something that we're also studying. There's this weird space where it totally can go wrong and could be like, “They're trying too much to make it human,” and it just falls short and becomes scary. I think what's interesting about M3GAN is that it's a very humanoid robot. The robots I work with do not look human at all, and I'm not interested in trying to make them look human. I get a lot of recommendations to put costumes on them. But I don't know that it needs a hand or a hat, or a tiara. It’s this weird moment where it can become scary instead of endearing or friendly.
One thing that's interesting about M3GAN is how it quickly becomes a killer robot. That is an ethical concern in this field — where might this go wrong? Could this become weaponized somehow if it becomes so good at moving? That's something I think about, too: How do we keep them ethical? I've never taken DARPA funding, but I know people who have gotten military funding for projects like this.
DO YOU HAVE A FAVORITE HOLLYWOOD DANCING ROBOT SCENE?
Sicchio enjoys this unnerving performance from Ex Machina.
The scene from Ex Machina. What I like about that dancing robot scene is it’s kind of the reveal that, guess what, this is all training for this AI robot, and all these women you keep seeing in the house aren't really women — and I'm going to show you because we can do this crazy dance routine together.
What stands out and makes it so interesting is that they do all these disco moves, but their eyes are locked on the guy watching. They never move their heads, which is what makes it so weird and un-human: They never unlock their focus. They're not having fun.
WHAT TYPES OF ROBOTS HAVE THE BEST MOVES?
The “Amelia and the Machine” piece uses a relatively simple robot, which Sicchio says works well for performance.
With simpler robots, you can better appreciate the movement they can do and see how that can be made into something more expressive or more collaborative with the human. I think that’s less scary because it's not trying to be human and then failing.
Most researchers use simpler devices — a lot use big industrial arms. It's almost become a trope: the pretty ballerina with the big industrial arm. And then Boston Dynamics has the bipedal, more human sort of robots. The company’s dance spectacles look seamless, but they are actually really hard to program, so they never perform them live; you only see the edited videos. They’re a huge production that takes several days of filming to get you three minutes of a Bruno Mars song or whatever.
The humanoid ones are just tricky; that center-of-gravity thing is really hard, and it’s easier when the robot is low to the ground. With our small robots, if you make a movement too fast or wild, the robot will fall over. So you can imagine that getting a big humanoid robot to jump and land is very difficult.
WHY IS CHOREOROBOTICS IMPORTANT BEYOND PERFORMANCE?
I make stage pieces with Patrick Martin, an assistant professor of electrical and computer engineering. But we're also doing scientific studies during that process. We found that, because dancers are interested in doing extreme or different movements, they're very good at finding the boundaries of what a robot can do very quickly. A friend of mine calls dancers “extreme user testers.”
We’ve been doing a lot with machine learning and creating new algorithms for robots to move and we’ve been doing that by studying dancers. We do things like motion capture of dancers doing certain gestures, and then see how we can map those to the robot and see if we can get it to move with new qualities or in ways that normal programming hasn't thought of.
I also think it’s interesting when roboticists engage with choreography themselves. We did a workshop with Patrick Martin and his graduate students and some of my dance students — getting them to move. We explored a variety of prompts around moving the body in space, ways to repeat lines of the body with other body parts, and other approaches of responding to the geometry of the body.
When roboticists think about movement, they're always thinking of it outside of their own body. I think about it like getting the robot to follow my arm. Getting roboticists to actually do the dance and be in their bodies is a really interesting place for us to go next. That will start to develop this kind of kinesthetic empathy that perhaps we're searching for with dancing robots. I think roboticists should become dancers.
- Average rating: 0/5 (0 votes) - Category: SF-snufjes, Robotics and A.I. Artificial Intelligence (E, F en NL)
10-02-2023
Metal robot can melt its way out of tight spaces to escape
A millimetre-sized robot made from a mix of liquid metal and microscopic magnetic pieces can stretch, move or melt. It could be used to fix electronics or remove objects from the body.
A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.
Robots that are soft and malleable enough to work in narrow, delicate spaces like those in the human body already exist, but they can’t make themselves sturdier and stronger when under pressure or when they must carry something heavier than themselves. Carmel Majidi at Carnegie Mellon University in Pennsylvania and his colleagues created a robot that can not only shape-shift but also become stronger or weaker by alternating between being a liquid and a solid.
They made the millimetre-sized robot from a mix of the liquid metal gallium and microscopic pieces of a magnetic material made of neodymium, iron and boron. When solid, the material was strong enough to support an object 30 times its own mass. To make it soften, stretch, move or melt into a crawling puddle as needed for different tasks, the researchers put it near magnets. The magnets’ customised magnetic fields exerted forces on the tiny magnetic pieces in the robot, moving them and deforming the surrounding metal in different directions.
For instance, the team stretched a robot by applying a magnetic field that pulled these granules in multiple directions. The researchers also used a stronger field to yank the particles upwards, making the robot jump. When Majidi and his colleagues used an alternating magnetic field – one whose shape changes predictably over time – electrons in the robot’s liquid metal formed electric currents. The coursing of these currents through the robot’s body heated it up and eventually made it melt.
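The heating step is ordinary electromagnetic induction rather than anything exotic. As a rough, textbook-level relation (my gloss, not the authors' model), the changing magnetic flux through the metal induces an electromotive force that drives eddy currents, and their resistive losses warm the gallium past its melting point of roughly 30 °C:

$$ \mathcal{E} = -\frac{d\Phi_B}{dt}, \qquad P \approx \frac{\mathcal{E}^2}{R} $$

The faster the applied field alternates, the larger the induced currents and the sooner the robot softens and flows.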
“No other material I know of is this good at changing its stiffness this much,” says Majidi.
Wang and Pan et al.
Exploiting this flexibility, the team made two robots carry and solder a small light bulb onto a circuit board. When they reached their target, the robots simply melted over the light bulb’s edges to fuse it to the board. Electricity could then run through their liquid metal bodies and light the light bulb.
In an experiment inside an artificial stomach, the researchers applied another set of magnetic fields to make the robot approach an object, melt over it and drag it out. Finally, they shaped the robot like a Lego minifigure, then helped it escape from a cage by liquefying it and making it flow out between the bars. Once the robot puddle dribbled into a mould, it set back into its original, solid shape.
Wang and Pan et al.
These melty robots could be used for emergency fixes in situations where human or traditional robotic hands become impractical, says Li Zhang at the Chinese University of Hong Kong. For example, a liquefied robot might replace a lost screw on a spacecraft by flowing into its place and then solidifying, he says. However, to use them inside living stomachs, researchers must first develop methods for precisely tracking the position of the robot at every step of the procedure to ensure the safety of the patient, says Zhang.
Dear visitor, have you ever made a strange sighting yourself? Then let us know by email to Frederick Delaere at www.ufomeldpunt.be. These researchers handle your report in complete anonymity and with full respect for your privacy. They are critical and objective but open-minded, and they will always give you an explanation for your sighting! SO DON'T HESITATE: IF YOU WANT AN ANSWER TO YOUR QUESTIONS, CONTACT FREDERICK. THANKS IN ADVANCE...
Press the button below to send your file or your article to me. IF IT IS WORTHWHILE, I WILL POST IT ON THE BLOG UNDER 'DIVERSEN' (MISCELLANEOUS) WITH YOUR NAME...
Press the button below to leave a message in my guestbook.
Thanks in advance for all your visits and your comments. Have a nice day!
About me
I am Pieter, and I sometimes also use the pen name Peter2011.
I am a man, I live in Linter (Belgium), and I am retired.
I was born on 18/10/1950, so I am now 74 years young.
My hobbies are: ufology and other esoteric subjects.
On this blog you will find, among the articles, work of my own. My thanks also go to André, Ingrid, Oliver, Paul, Vincent, Georges Filer and MUFON for their contributions to the various categories...
Enjoy reading, and let me know what you think of this blog.