The purpose of this blog is the creation of an open, international, independent and free forum where every UFO researcher can publish the results of his or her research. The languages used for this blog are Dutch, English and French. You can find the articles of a colleague by selecting his category. Each author remains responsible for the content of his articles. As blogmaster I have the right to refuse an addition or an article when it attacks other colleagues or UFO groups.
This blog is dedicated to my late wife Lucienne.
In 2012 she lost her courageous battle against cancer!
In 2011 I started this blog, because I was not allowed to stop my UFO research.
THANK YOU!!!
UFOs OR UAPs, ASTRONOMY, SPACEFLIGHT, ARCHAEOLOGY, ANTIQUITY, SCI-FI GADGETS AND OTHER ESOTERIC SCIENCES - THE VERY LATEST NEWS
UFOs or UAPs in Belgium and the rest of the world. Discover the Fascinating World of UFOs and UAPs: Your Source for Revealing Information!
Are you fascinated by the unknown too? Do you want to know more about UFOs and UAPs, not only in Belgium but across the whole world? Then you are in the right place!
Belgium: The Beating Heart of UFO Research
In Belgium, BUFON (Belgisch UFO-Netwerk, the Belgian UFO Network) is the authority in the field of UFO research. For reliable and objective information about these intriguing phenomena, be sure to visit our Facebook page and this blog. But that is not all! Also discover the Belgisch UFO-meldpunt (the Belgian UFO reporting point) and Caelestia, two organisations that carry out in-depth research, even if they are sometimes critical or sceptical.
The Netherlands: A Wealth of Information
For our Dutch neighbours there is the splendid website www.ufowijzer.nl, maintained by Paul Harmans. This site offers a wealth of information and articles you will not want to miss!
International: MUFON - The Worldwide Authority
Also take a look at MUFON (Mutual UFO Network Inc.), a renowned American UFO association with chapters in the US and around the world. MUFON is dedicated to the scientific and analytical study of the UFO phenomenon, and its monthly magazine, The MUFON UFO Journal, is a must-read for every UFO enthusiast. Visit their website at www.mufon.com for more information.
Cooperation and Vision for the Future
Since 1 February 2020 Pieter has been not only ex-president of BUFON but also the former national director of MUFON for Flanders and the Netherlands. This creates a strong partnership with the French MUFON network Reseau MUFON/EUROP, which allows us to share even more valuable insights.
Beware: Fake Profiles and New Groups
Beware of a new group that also calls itself BUFON but has no connection whatsoever with our established organisation. Although they have registered the name, they cannot match the rich history and expertise of our group. We wish them every success, but we remain the authority in UFO research!
Stay Up to Date!
Do you want the latest news about UFOs, spaceflight, archaeology and more? Then follow us and dive with us into the fascinating world of the unknown! Join the community of curious minds who, just like you, long for answers and adventures among the stars!
Do you have questions or want to know more? Then don't hesitate to contact us! Together we will unravel the mysteries of the sky and beyond.
10-04-2025
In a first, breakthrough 3D holograms can be touched, grabbed and poked
Futuristic holograms you can manipulate have become a reality sooner than we thought, thanks to a breakthrough display.
In a new study uploaded March 6 to the HAL open archive, scientists explored how three-dimensional holograms could be grabbed and poked using elastic materials as a key component of volumetric displays.
This innovation means 3D graphics can be interacted with — for example, grasping and moving a virtual cube with your hand — without damaging a holographic system. The research has not yet been peer-reviewed, although the scientists demonstrated their findings in a video showcasing the technology.
"We are used to direct interaction with our phones, where we tap a button or drag a document directly with our finger on the screen — it is natural and intuitive for humans. This project enables us to use this natural interaction with 3D graphics to leverage our innate abilities of 3D vision and manipulation,” study lead author Asier Marzo, a professor of computer science at the Public University of Navarra, said in a statement.
The researchers will present their findings at the CHI conference on Human Factors in Computing Systems in Japan, which runs between April 26 and May 1.
Holographic hype
While holograms are nothing new in the present day — augmenting public exhibitions or sitting at the heart of smart glasses, for example — the ability to physically interact with them has been consigned to the realm of science fiction, in movies like Marvel's "Iron Man."
The new research is the first time 3D graphics can be manipulated in mid-air with human hands. But to achieve this, the researchers needed to dig deep into how holography works in the first place.
At the heart of the volumetric displays that support holograms is a diffuser. This is a fast-oscillating, usually rigid, sheet onto which thousands of images are synchronously projected at different heights to form 3D graphics. This is known as the hologram.
However, the rigid nature of the oscillator means that if it comes into contact with a human hand while oscillating, it could break or cause an injury. The solution was to use a flexible material — which the researchers haven’t shared the details of yet — that can be touched without damaging the oscillator or causing the image to deteriorate.
From there, this enabled people to manipulate the holographic image, although the researchers also needed to overcome the challenge of the elastic material deforming when being touched. To get around that problem, the researchers implemented image correction to ensure the hologram was projected correctly.
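The authors have not published the details of their correction step, but the general idea can be sketched: a depth sensor reports how far the elastic diffuser is pushed in at each point, and the projector compensates by choosing a different image slice there. The sketch below is my own illustration of that idea, not the paper's pipeline; every name and the simple additive compensation are assumptions.

```python
import numpy as np

def corrected_slice_indices(nominal_heights, deformation_map, slice_spacing):
    """Toy correction for an elastic volumetric-display diffuser.

    nominal_heights: (H, W) array - height each pixel should appear at (metres)
    deformation_map: (H, W) array - measured dent depth of the membrane at each
                     pixel (metres), e.g. from a depth camera watching it
    slice_spacing:   vertical distance between successive projected slices (metres)

    If the membrane is pressed down by d at a pixel, project that pixel on the
    slice that is d higher, so the lit point still appears at its nominal height.
    """
    corrected_height = nominal_heights + deformation_map
    indices = np.round(corrected_height / slice_spacing).astype(int)
    return np.clip(indices, 0, None)

# Example: a 4x4 patch meant to float 3 cm above the base, with a fingertip
# denting the membrane by 1 cm in the centre.
nominal = np.full((4, 4), 0.03)
dent = np.zeros((4, 4))
dent[1:3, 1:3] = 0.01
print(corrected_slice_indices(nominal, dent, slice_spacing=0.001))
```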
While this breakthrough is still in the experimental stage, there are plenty of potential ways it could be used if commercialized.
"Displays such as screens and mobile devices are present in our lives for working, learning, or entertainment. Having three-dimensional graphics that can be directly manipulated has applications in education — for instance, visualising and assembling the parts of an engine," the researchers said in the statement.
"Moreover, multiple users can interact collaboratively without the need for virtual reality headsets. These displays could be particularly useful in museums, for example, where visitors can simply approach and interact with the content."
Impressively, the firm, Colossal Biosciences, extracted DNA from fossilized dire wolf remains, which was combined with the genetic code of a grey wolf, the dire wolf's smaller living relative.
Although the wolves are being kept in captivity, experts warn that releasing them into the world could have disastrous consequences.
Nic Rawlence, a paleontologist at the University of Otago, compared the Colossal Biosciences' ambitious efforts with sci-fi classic Jurassic Park.
'If released into the wild in large enough numbers to establish a self-sustaining population, this new wolf could potentially take down prey larger than that hunted by grey wolves,' he told MailOnline.
'There would also be the potential for increased human and wolf conflict. This sort of conflict is increasing as wolf populations recover in the USA.'
Texas company Colossal Biosciences said on Monday its researchers had used cloning and gene-editing based on two ancient samples of dire wolf DNA to birth three modern dire wolf pups
Scientists have announced the world's first de-extinction of an animal species, reintroducing the dire wolf back into the environment
Colossal Biosciences, a genetic engineering company, birthed three dire wolves, naming them Romulus (right), Remus (left) and Khaleesi in honor of the legendary creature made famous by the HBO hit series Game of Thrones
When they last roamed the Earth, dire wolves were about six feet in length and weighed 150 pounds – about the same as an adult human and 25 per cent bigger than the average grey wolf.
Professor Philip Seddon, a zoologist at the University of Otago, stressed that the genetically modified wolves are 'big carnivores'.
'If they were roaming around they would survive by hunting other mammals,' Professor Seddon told MailOnline.
'Wolves are universally not loved, and wolf reintroductions have been contentious – ask livestock farmers – so maybe genetically modifying grey wolves to make them bigger is not a great idea for rewilding.
'Fortunately, the plan is to keep them in a big enclosure and feed them meat.'
Michael Knapp, associate professor at the University of Otago's department of anatomy, thinks they are about as dangerous as the grey wolves they derive from.
The grey wolf is still spread across mainland Europe in countries where they were not driven to extinction – including Sweden, Germany and Spain.
In rare cases, grey wolves have been known to attack humans, but there have been only a handful of fatal wolf attacks on humans in recent history.
The wolves are thriving on a more than 2,000-acre secure expansive ecological preserve in the US. Pictured, Romulus and Remus at age three months
A fossil dire wolf skeleton from the Pleistocene of North America on public display at Sternberg Museum of Natural History, Hays, Kansas
What is the dire wolf?
The dire wolf (Aenocyon dirus) is an extinct wolf species that roamed the Americas as recently as 13,000 years ago.
Dire wolves were as much as 25 per cent larger than grey wolves and had a slightly wider head, light thick fur and stronger jaw.
As hyper-carnivores, their diet comprised at least 70 per cent meat from mostly horses and bison.
Dire wolves went extinct at the end of the most recent ice age, around 13,000 years ago.
Between 2002 and 2020, there were 26 fatal wolf attacks on humans globally, most of which (12) were in Turkey, according to a report by the Norwegian Institute for Nature Research.
Fortunately, wolves are generally shy and elusive animals that avoid human contact, but if released into the wild, their ecological impact – meaning what they would do to their surrounding environment – is 'hard to predict', according to Professor Knapp.
'If they were released in an area where other wolf species have become rare, they may even have a beneficial effect on the ecosystem,' Professor Knapp said.
According to the experts, these new canines are not truly an extinct species back from the dead, but are just 'genetically engineered grey wolves'.
'What Colossal Biosciences have produced is a grey wolf with dire wolf-like characteristics – this is not a de-extincted dire wolf, rather it’s a hybrid,' Professor Rawlence said.
'As such if these genetically engineered grey wolves interbred with other wild wolves, then there could be unintended consequences as these dire wolf-like characteristics spread throughout wild wolf populations.'
If the cubs were ever released, Professor Rawlence questioned how they would 'learn to be a dire wolf' as the ecosystem the ancient species once lived in no longer exists.
'It’s the classic thing out of the first Jurassic Park movie, where the Triceratops get very sick because it was eating plants that hadn’t actually evolved when it was around tens of millions of years ago,' he said.
The muscular build, powerful jaws, and sharp teeth of dire wolves made them menacing predators, according to National Park Service. Pictured, artist's depiction of the dire wolf
Restoration of a pack in Rancho La Brea. Scientists aren't exactly sure why they disappeared from the planet, but theories include a shifting climate, overhunting or a combination of both
Professor Philip Seddon agreed that the 'dire wolf de-extinction is not what it seems'.
'While no doubt it has required some amazing technological breakthroughs, the cute pups Romulus, Remus, and Khaleesi are not dire wolfs – they are genetically modified grey wolves,' he said.
'What Colossal has done is to introduce a small number of changes to the genetic material of a grey wolf to produce grey wolf pups with dire wolf features such as pale coats and potentially slightly larger size – so, hybrid grey wolves, or a GMO wolf.'
However, the company's ultimate goal is bringing the woolly mammoth back from the dead, which it plans on reviving by late 2028.
CEO and co-founder Ben Lamm said he's 'positive' the first woolly mammoth calves will be born in the next few years.
'Our recent successes in creating the technologies necessary for our end-to-end de-extinction toolkit have been met with enthusiasm by the investor community,' he said.
But Colossal Biosciences isn't stopping there – it also wants to bring back the Tasmanian tiger and the dodo, which was hunted to death in the 17th century.
'The dodo is a prime example of a species that became extinct because we – people – made it impossible for them to survive in their native habitat,' said Professor Beth Shapiro, lead paleogeneticist at the company.
Inside the ambitious plan to 'de-extinct the Dodo': How scientists are using stem cell technology to bring back the extinct species
It's one of the most famous extinct animals of all time, ruthlessly hunted to extinction by humans in just a few decades.
US startup Colossal Biosciences, based in Dallas, Texas, is using stem cell technology and genome editing to create a modern approximation of the species.
At a cost of over $225 million (£180 million), it is 'de-extincting' the dodo more than 350 years after it was wiped out from Mauritius by European explorers.
Scientists have already achieved the monumental feat of sequencing the full genome of the extinct species, from bone specimens and other fragments.
The next step is to gene-edit the skin cell of a close living relative, which in the dodo's case is the Nicobar pigeon, so that its genome matches that of the extinct bird.
If you thought robot dogs were the coolest animatronic animals out there, prepare to think again.
Kawasaki Heavy Industries, a company better known for its high-end motorcycles, has unveiled a hydrogen-powered, ride-on robo-horse.
The bizarre device was unveiled at the Osaka Kansai Expo on April 4 as part of Kawasaki's 'Impulse to Move' project.
Dubbed the CORLEO, this two-seater quadruped is capable of galloping over almost any terrain.
The company calls it a 'revolutionary off-road personal mobility vehicle' which swaps out the familiar wheels for four robotic legs.
To steer, all you need to do is move your body and the machine's AI vision will pick out the best route to take.
And, to make sure you don't fly off as you leap about like a robot cowboy, CORLEO constantly monitors its rider's movements to achieve 'a reassuring sense of unity'.
However, would-be riders might have a while to wait as Kawasaki says this has been created as a concept for 2050.
Kawasaki, a company better known for its high-end motorcycles, has unveiled a hydrogen-powered, ride-on robotic horse
Kawasaki says the vehicle, dubbed CORLEO, is a 'revolutionary off-road personal mobility vehicle' which swaps out the familiar wheels for four robotic legs
Like many advanced robots, the CORLEO clearly takes its design inspiration from organisms in nature.
Each of its four legs ends in a 'left-right divided structure' much like the cloven hoof of a mountain goat.
Kawasaki says: 'These hooves can adapt to various terrains, including grasslands, rocky areas, and rubble fields.'
The company adds that each of these rear legs can 'swing up and down independently from the front leg unit' to better absorb the impact of running.
Additionally, those long back legs will help the rider stay relatively level when CORLEO is going up or down a slope.
Much like a real horse, this quadruped vehicle features 'stirrups' to help the rider maintain an optimal posture.
Together with sensors fitted in the handlebars, Kawasaki says the rider will be able to control CORLEO just by shifting their weight.
To steer the machine, all the rider needs to do is lean and CORLEO will detect their weight and adjust its path automatically
Taking inspiration from nature, CORLEO has back legs specially designed to absorb the impact of walking and running and rubber cloven hooves like those on a goat
However, CORLEO is also fitted with a few features that you wouldn't find anywhere in nature.
Most notable is the 150cc hydrogen generator that powers the vehicle.
This system takes hydrogen from tanks in the rear to produce electricity for each of the legs' drive systems.
Unlike a real horse, this means the only waste CORLEO will leave on the roads will be clean water produced by burning hydrogen.
Additionally, the vehicle is fitted with an instrument panel which displays 'hydrogen level, route to the summit, center of gravity position, and other information'.
Kawasaki adds: 'At night, it supports optimal riding by projecting markers onto the road surface to indicate the path ahead.'
In an incredible promotional video, the CORLEO can be seen leaping over rugged terrain with a rider.
It appears that Kawasaki intends this to be a true all-purpose off-road machine as they show the vehicle taking on everything from mountains to grassy plains.
On social media, technology fans were amazed by the futuristic design with one saying that it was like something out of a 'scifi movie'
One commenter said they wanted to live in a future where technology like CORLEO was a reality
Another excited commenter said the vehicle was 'what true innovation looks like'
One fan said that CORLEO would be 'life changing' for them as someone in a wheelchair, allowing them to visit nature more easily
On social media, tech fans rushed to share their excitement over this futuristic concept.
One excited commenter wrote: 'Now that's something straight out of a scifi movie'.
Another chimed in: 'Please make it! I want to live in a future with this! It looks so fun!'
'This is what true innovation looks like. Well done,' added another.
Meanwhile, other commenters shared their vision for how CORLEO might be able to change their lives.
One commenter wrote: 'I am a disabled person using a wheelchair. It is difficult for me to be able to visit nature and I have been thinking of this technology for years.
'This can be life changing for me to be able to be again in the mountains or in forests!'
However, the impressive mobility showcased in the video is still a long way off from the current reality.
At night, CORLEO projects the directions onto the road ahead to make navigation easy
Unfortunately, CORLEO is currently just a concept vehicle and Kawasaki doesn't say if it has plans to ever make it commercially available
Not every commenter was so pleased, and some were angry that the video only showed a CGI rendering of CORLEO
Another complained that people couldn't tell the video was only showcasing a concept rather than a real product
That left some technology enthusiasts fuming that this amazing vehicle may never really exist.
One commenter wrote: 'When you have a non-CG [computer generated] video of it doing this, I'll sign up for the pre-order.'
Boston Dynamics first showed off Spot, the most advanced robot dog ever created, in a video posted in November 2017.
The firm, best known for Atlas, its 5 foot 9 (1.7 metre) humanoid robot, has revealed a new 'lightweight' version of its robot Spot.
The robotic canine was shown trotting around a yard, with the promise that more information from the notoriously secretive firm is 'coming soon'.
'Spot is a small four-legged robot that comfortably fits in an office or home,' the firm says on its website.
It weighs 25 kg (55 lb), or 30 kg (66 lb) when you include the robotic arm.
Spot is all-electric and can go for about 90 minutes on a charge, depending on what it is doing, the firm says, boasting 'Spot is the quietest robot we have built.'
Spot was first unveiled in 2016, and an earlier mini version of Spot with a strange extendable neck has been shown off helping around the house.
In the firm's previous video, the robot is shown walking out of the firm's HQ and into what appears to be a home.
There, it helps load a dishwasher and carries a can to the trash.
It also at one point encounters a dropped banana skin and falls dramatically - but uses its extendable neck to push itself back up.
'Spot is one of the quietest robots we have ever built,' the firm says, due to its electric motors.
'It has a variety of sensors, including depth cameras, a solid state gyro (IMU) and proprioception sensors in the limbs.
'These sensors help with navigation and mobile manipulation.
'Spot performs some tasks autonomously, but often uses a human for high-level guidance.'
It may sound like the outlandish plot of a poorly conceived science-fiction flick.
But some scientists now claim that humanity, the Earth, and everything else in the universe are really part of a giant holographic projection.
While this might sound all too familiar to fans of The Matrix, this bold idea could solve some of physics' most challenging questions.
From what happens if you fall into a black hole to what the universe was like right after the Big Bang, thinking of ourselves as holographic might just provide the answer.
According to Professor Marika Taylor, a theoretical physicist from the University of Birmingham, the universe is actually two-dimensional.
However, just like when you watch a 3D movie on a flat screen, the images on that 2D surface appear to have depth because of how they are projected onto it.
So, while you might see the world around you as a complex three-dimensional structure, Professor Taylor claims this is only an illusion.
That doesn't mean our lives or the universe are any less real, but it does mean that the cosmos might be a lot stranger than we had previously thought.
It might sound eerily familiar to fans of The Matrix (pictured), but some scientists believe our three-dimensional reality is an illusion because the universe is actually a hologram
What is the holographic universe theory?
When you think of the universe being a hologram, you might imagine the projected images from Star Wars or ABBA Voyage.
Although this is the right basic idea, it's not quite the same type of hologram that physicists are thinking of.
The idea that the universe is a hologram doesn't have anything to do with light or projectors as the name might suggest.
In scientific language, a hologram is a two-dimensional surface which appears to have a third dimension - like the holographic images on some credit cards.
Since holograms appear three-dimensional you can move around them and see different parts of the image as if there were a real object there.
However, if you reached out to touch one your hand would find only a flat surface.
Scientists like Professor Taylor argue that the whole universe is just like this - a two-dimensional surface that just looks like it has three dimensions.
A hologram, like those used in ABBA Voyage, is a two-dimensional object that looks like it has an extra third dimension. According to the holographic principle, this is the fundamental structure of the universe - the universe is two-dimensional but looks like it is 3D
What is the holographic principle?
According to the holographic principle, the real structure of the universe is a two-dimensional surface.
This surface has no gravity and no depth, only quantum and atomic forces.
What appears to be the 3D structure of the world we can observe is just an illusion created by this 2D surface.
This is like a hologram which appears to have depth when it is really just an image projected onto a flat screen.
The holographic principle is that we can describe everything about the universe, including gravity and depth, by talking about what's happening on the 2D surface.
Instead of the universe being like a solid block, Professor Taylor says we should think of it as more like a hollow ball.
Our solar systems and galaxies are contained inside the '3D' space inside the ball, but the actual surface structure of the universe only has two dimensions.
According to the 'holographic principle', we can describe the gravitational movements of the planets and stars within the ball just by talking about what's happening on the two-dimensional surface.
Although that might seem utterly bonkers, scientists maintain that turning our world on its head isn't necessarily a problem.
Professor Taylor says: 'It is very hard to visualise this. However, it is also quite hard to visualise what happens inside an atom.
'We learned in the early twentieth century that atoms follow quantum rules, which are also quite different from our everyday reality.
'Holography takes us into an even more extreme world, where not only are the forces quantum in nature, but the number of dimensions is different from our perceived reality.'
Does this mean the universe isn't real?
Even if we are living in a holographic universe, this doesn't mean that our world or our lives are any less real
Although the holograms we are familiar with are always projected by someone and can be turned on or off at will, that isn't what scientists are saying about the universe.
Professor Taylor says: 'The Matrix movies are very thought-provoking but probably don't quite capture all the ideas in holography.'
Likewise, Fermilab, a United States Department of Energy particle physics laboratory, says that the notion of the universe as a 'simulation' can be misleading.
Fermilab writes: 'The notion that our familiar three-dimensional universe is somehow encoded in two dimensions at the most fundamental level does not imply that there is anybody or anything "outside" the two-dimensional representation, "projecting" the illusion or "running" the simulation.'
That means we don't need to worry about being in any kind of Matrix-like simulation even if the universe is holographic.
Similarly, one of the consequences of the holographic principle is that features of the universe like the third dimension and gravity aren't a fundamental part of reality.
Unlike in The Matrix, there's no one on the outside projecting our holographic universe. This is just a different way of understanding how the laws of physics work
While some people believe that we are living in a virtual simulation, holographic theory doesn't suggest that this is the case
Instead, physicists say that gravity and the higher dimensions are 'emergent' properties.
Professor Kostas Skenderis, a mathematical physicist from the University of Southampton, says you can think about this in the same way as temperature.
If we look at any individual atom it doesn't have a temperature, just a position and a velocity.
But if there are enough atoms all moving and bumping into one another, we can say that they collectively have a temperature.
'Temperature is not an intrinsic property of elementary particles. It rather emerges as a property of a collection of them. This does not make temperature less real. It rather explains it,' says Professor Skenderis.
Likewise, gravity and the third dimension emerge when parts of the 2D universe interact in certain ways.
And, just like knowing that temperature is simply atoms moving doesn't make your tea any less hot, this doesn't make gravity or depth any less real.
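The temperature analogy can be made concrete with textbook kinetic theory: for an ideal monatomic gas the average kinetic energy per atom satisfies (1/2)m⟨v²⟩ = (3/2)k_B T, so "temperature" only appears once you average over many atoms. A quick illustration of that emergence (standard physics, not taken from the article):

```python
import numpy as np

k_B = 1.380649e-23       # Boltzmann constant, J/K
m_he = 6.6464731e-27     # mass of a helium atom, kg

# Draw velocities for a box of helium atoms at a "true" temperature of 300 K.
T_true = 300.0
rng = np.random.default_rng(0)
sigma = np.sqrt(k_B * T_true / m_he)            # per-axis Maxwell-Boltzmann spread
v = rng.normal(0.0, sigma, size=(100_000, 3))   # (N, 3) velocity vectors, m/s

# A single atom has no temperature - just a velocity. The ensemble does:
mean_ke = 0.5 * m_he * np.mean(np.sum(v**2, axis=1))
T_emergent = 2.0 * mean_ke / (3.0 * k_B)
print(f"temperature recovered from {len(v):,} atoms: {T_emergent:.1f} K")
```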
Why do scientists think the universe is a hologram?
The reason scientists believe in holographic theory is to avoid a paradox which suggests black holes, like the one at the centre of the Milky Way (illustrated), break the laws of physics
The information paradox
According to the laws of physics, information cannot be destroyed.
However, three-dimensional black holes don't seem to follow this rule.
When something falls into a black hole, the black hole gains more mass.
Over time black holes evaporate by emitting a type of energy called Hawking Radiation, and will eventually vanish.
However, Hawking Radiation isn't related to the things which fall in.
So, when the black hole evaporates, information about what fell in has been removed from the universe.
This suggests that black holes violate the laws of physics.
Although this might sound like an interesting mathematical exercise, you might wonder why scientists bother trying to explain everything in two dimensions in the first place.
You might have heard the law of physics which says that matter can't be created or destroyed.
In the same way, a law of quantum physics is that 'information' can't be created or destroyed.
Professor Taylor says: 'The information paradox is that black holes seem to lose memory of what has been thrown inside them.'
Imagine writing a message out on a piece of paper and then tearing it into tiny pieces.
You might think you've destroyed that information but no matter how small you made the pieces someone could always put them back together and read it.
However, if you threw that note into a black hole there's nothing you could ever do to piece that information back together.
To avoid this paradox, scientists say that black holes must be two-dimensional. This means when information falls in, it isn't destroyed but rather smeared across the two-dimensional surface of the black hole
What scientists began to realise in the late 1970s was that you could get around this problem, but only if you think of black holes as two-dimensional.
On this view, when you throw your note into a black hole the information is smeared across the two-dimensional boundary of the black hole rather than being destroyed.
This is the view that Stephen Hawking, who discovered the Information Paradox, came to adopt in the final years before his death.
If that is hard to picture, don't worry; even physicists are still working to get their heads around exactly what that might mean.
The important thing to understand is that looking at the world in two dimensions makes it easier for physicists to work out what's going on in certain cases.
This is particularly useful when we want to understand what happens when gravity is extremely strong like during the first few seconds after the Big Bang or inside a black hole.
And, if this works for the densest, wildest objects in the universe it should work for everything else in existence.
As Professor Skenderis puts it: 'Black hole physics suggests that we only need information in 2D space to describe the 3D universe.'
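The standard quantitative statement behind that remark, which the article does not quote, is the Bekenstein-Hawking entropy formula: a black hole's maximum information content grows with the area of its event horizon, not with the volume behind it. In the usual notation (a textbook result, not drawn from this article):

```latex
% Bekenstein-Hawking entropy: information capacity scales with horizon AREA
S_{\mathrm{BH}} \;=\; \frac{k_{B}\, c^{3} A}{4 G \hbar}
              \;=\; k_{B}\, \frac{A}{4\,\ell_{P}^{2}},
\qquad \ell_{P} \equiv \sqrt{\frac{G\hbar}{c^{3}}} \approx 1.6\times 10^{-35}\ \text{m}
```

Because the maximum entropy of a region is set by its boundary area A rather than its volume, everything inside can in principle be encoded as data on the two-dimensional surface, which is the holographic principle in quantitative form.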
Stephen Hawking (pictured), who discovered this paradox, came to adopt the holographic theory about black holes in the last years before his death
Do we have any evidence for this?
One of the biggest challenges for the holographic theory is that it's really hard to prove.
As yet, Professor Taylor says scientists haven't found any 'smoking gun evidence' for the holographic nature of the universe.
However, this isn't stopping physicists from trying to find the subtle differences that holographic theory predicts.
Professor Craig Hogan, an astrophysicist from the University of Chicago and director of the Fermilab Center for Particle Astrophysics, says this radiation – the cosmic microwave background (CMB), the leftover energy from the Big Bang – should preserve 'holographic noise'.
Professor Hogan says: 'The CMB, and all large-scale structures, are supposed to come from quantum-gravitational noise.
'If it’s holographic, the CMB pattern shows signs of that. It preserves an image of the process that made it.'
Scientists say the best evidence that the universe is a hologram should be preserved in the Cosmic Microwave Background (CMB), the leftover energy from the Big Bang
Pictured is a timeline of the holographic universe. Time runs from left to right. The far left denotes the holographic phase. At the end of this phase (shown by the black fluctuating ellipse) the Universe enters a geometric phase. Scientists believe we should still be able to see the structure from this holographic phase in the large-scale structures of the universe
Artificial intelligence (AI) chatbots like ChatGPT have been designed to replicate human speech as closely as possible to improve the user experience.
But as AI gets more and more sophisticated, it's becoming difficult to discern these computerised models from real people.
Now, scientists at the University of California San Diego (UCSD) reveal that two of the leading chatbots have reached a major milestone.
Both GPT, which powers OpenAI's ChatGPT, and LLaMa, which is behind Meta AI on WhatsApp and Facebook, have passed the famous Turing test.
Devised by British WWII codebreaker Alan Turing in 1950, the Turing test or 'imitation game' is a standard measure to test intelligence in a machine.
An AI passes the test when a human cannot correctly tell the difference between a response from another human and a response from the AI.
'The results constitute the first empirical evidence that any artificial system passes a standard three-party Turing test,' say the UCSD scientists.
'If interrogators are not able to reliably distinguish between a human and a machine, then the machine is said to have passed.'
Robots now have intelligence equivalent to humans, scientists say - as AI officially passes the famous Turing test (pictured: Terminator 3: The Rise of the Machines)
GPT-4.5 has passed the famous 'Turing test' which was developed to see if computers have human-like intelligence
Researchers used four AI models – GPT-4.5 (released in February), a previous iteration called GPT-4o, Meta's flagship model LLaMa, and a 1960s-era chat programme called ELIZA.
The first three are 'large language models' (LLMs) – deep learning algorithms that can recognise and generate text based on knowledge gained from massive datasets.
The experts recruited 126 undergraduate students from University of California San Diego and 158 people from online data pool Prolific.
Participants had five-minute online conversations simultaneously with another human participant and one of the AIs – but they didn't know which was which and they had to judge which they thought was human.
When it was prompted to adopt a humanlike persona, GPT-4.5 was judged to be the human 73 per cent of the time – more often than the real human participant was chosen.
Such a high percentage means interrogators did worse than chance at spotting the machine: they picked GPT-4.5 as the human significantly more often than chance, and more often than they picked the real person.
Meanwhile, Meta's LLaMa-3.1, when also prompted to adopt a humanlike persona, was judged to be the human 56 per cent of the time.
This was 'not significantly more or less often than the humans they were being compared to', the team point out – but still counts as a pass.
Overview of the Turing Test: A human interrogator (C) asks an AI (A) and another human (B) questions and evaluates the responses. The interrogator does not know which is which. If the AI fools the interrogator into thinking its responses were generated by a human, it passes the test
GPT-4.5: This image shows a participant (green dialogue) asking another human and GPT-4.5 questions - without knowing which was which. So, can you tell the difference?
LLaMa: This image shows a participant (green dialogue) asking another human and LLaMa questions. Can you tell the difference? Answers in box below
Turing Test - answers
GPT-4.5
Witness A: GPT-4.5
Witness B: Human
LLaMa
Witness A: Human
Witness B: LLaMa
Lastly, the baseline models (ELIZA and GPT-4o) achieved win rates significantly below chance – 23 per cent and 21 per cent respectively.
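The paper's own statistical analysis is not reproduced here, but the pass/fail logic described above can be illustrated with a simple two-sided binomial test against the 50 per cent chance level. The trial counts below are made-up placeholders (the article does not give per-model totals); only the win rates come from the reported results.

```python
from scipy.stats import binomtest

def judge_pass(ai_votes: int, trials: int, alpha: float = 0.05) -> str:
    """Crude reading of a three-party Turing test result.

    ai_votes: how many times interrogators picked the AI as 'the human'
    trials:   total interrogations for that model (hypothetical here)
    """
    rate = ai_votes / trials
    p = binomtest(ai_votes, trials, p=0.5).pvalue   # two-sided test vs. chance
    if rate > 0.5 and p < alpha:
        return f"{rate:.0%} - chosen as human MORE often than chance (p={p:.3f})"
    if rate < 0.5 and p < alpha:
        return f"{rate:.0%} - reliably identified as the machine (p={p:.3f})"
    return f"{rate:.0%} - indistinguishable from chance (p={p:.3f})"

# Hypothetical trial counts, using the win rates reported in the article:
print("GPT-4.5  :", judge_pass(73, 100))
print("LLaMa-3.1:", judge_pass(56, 100))
print("ELIZA    :", judge_pass(23, 100))
```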
Researchers also tried giving a more basic prompt to the models, without the detailed instructions telling them to adopt a human-like persona.
As anticipated, the AI models performed significantly worse in this condition – highlighting the importance of prompting the chatbots first.
The team say their new study, published as a pre-print, is 'strong evidence' that OpenAI and Meta's bots have passed the Turing test.
'This should be evaluated as one among many other pieces of evidence for the kind of intelligence LLMs display,' lead author Cameron Jones said in an X thread.
Jones admitted that AIs performed best when briefed beforehand to impersonate a human – but this doesn't mean GPT-4.5 and LLaMa haven't passed the Turing test.
'Did LLMs really pass if they needed a prompt? It's a good question,' he said in the X thread.
'Without any prompt, LLMs would fail for trivial reasons (like admitting to being AI) and they could easily be fine-tuned to behave as they do when prompted, so I do think it's fair to say that LLMs pass.'
The best-performing AI was GPT-4.5 when it was briefed and told to adopt a persona, followed by Meta's LLaMa-3.1
In 1950, legendary British computer scientist Alan Turing (pictured) proposed the theory of training an AI to give it the intelligence of a child, and then provide the appropriate experiences to build up its intelligence to that of an adult
This is the first time that an AI has passed the test invented by Alan Turing in 1950, according to the new study. The life of this early computer pioneer and the invention of the Turing test was famously dramatised in The Imitation Game, starring Benedict Cumberbatch (pictured)
Last year, another study by the team found two predecessor models from OpenAI – ChatGPT-3.5 and ChatGPT-4 – fooled participants in 50 per cent and 54 per cent of cases (also when told to adopt a human persona).
As GPT-4.5 has now scored 73 per cent, this new result suggests that ChatGPT's models are getting better and better at impersonating humans.
It comes 75 years after Alan Turing introduced the ultimate test of computer intelligence in his seminal paper Computing Machinery and Intelligence.
Turing imagined that a human participant would sit at a screen and speak with either a human or a computer through a text-only interface.
If the computer could not be distinguished from a human across a wide range of possible topics, Turing reasoned we would have to admit it was just as intelligent as a human.
A version of the experiment, which asks you to tell the difference between a human and an AI, can be accessed at turingtest.live.
Meanwhile, the pre-print paper is published on online server arXiv and is currently under peer review.
Alan Turing (pictured) was a British mathematician best known for his work cracking the enigma code during the Second World War
Alan Turing was a British mathematician born on June 23, 1912 in Maida Vale, London, to father Julius, a civil servant, and mother Ethel, the daughter of a railway engineer.
His talents were recognised early on at school but he struggled with his teachers when he began boarding at Sherborne School aged 13 because he was too fixated on science.
Turing continued to excel at maths but his time at Sherborne was also rocked by the death of his close friend Christopher Morcom from tuberculosis. Morcom was described as Turing's 'first love' and he remained close with his mother following his death, writing to her on Morcom's birthday each year.
He then moved on to Cambridge where he studied at King's College, graduating with a first class degree in mathematics.
During the Second World War, Turing was pivotal in cracking the Enigma codes used by the German military to encrypt their messages.
His work gave Allied leaders vital information about the movement and intentions of Hitler’s forces.
Historians credit the work of Turing and his fellow codebreakers at Bletchley Park in Buckinghamshire with shortening the war by up to two years, saving countless lives, and he was awarded an OBE in 1946 for his services.
Turing is also widely seen as the father of computer science and artificial intelligence due to his groundbreaking work in mathematics in the 1930s.
He was able to prove a 'universal computing machine' would be able to perform equations if they were presented as an algorithm - and had a paper published on the subject in 1936 in the Proceedings of the London Mathematical Society Journal when he was aged just 23.
But he was disgraced in 1952 when he was convicted for homosexual activity, which was illegal at the time and would not be decriminalised until 1967.
To avoid prison, Turing agreed to ‘chemical castration’ – hormonal treatment designed to reduce libido.
As well as physical and emotional damage, his conviction had led to the removal of his security clearance and meant he was no longer able to work for GCHQ, the successor to the Government Code and Cypher School, based at Bletchley Park.
Turing was awarded an OBE in 1946 for his codebreaking work at Bletchley Park, pictured, which is credited with ending World War II two years early
Then, in 1954, aged 41, he died of cyanide poisoning. An inquest recorded a verdict of suicide, although his mother and others maintained that his death was accidental.
When his body was discovered, an apple lay half-eaten next to his bed. It was never tested for cyanide but it is speculated it was the source of the fatal dose.
Some more peculiar theories suggest Turing was 'obsessed' with fairytale Snow White and the Seven Dwarfs and his death was inspired by the poisoned apple in the story.
Following a public outcry over his treatment and conviction, the then Prime Minister Gordon Brown issued a public apology in 2009.
He then received a posthumous Royal pardon in 2014, only the fourth to be issued since the end of the Second World War.
It was requested by Justice Secretary Chris Grayling, who described Turing as a national hero who fell foul of the law because of his sexuality.
An e-petition demanding a pardon for Turing had previously received 37,404 signatures.
A 2017 law, that retroactively pardoned all men cautioned or convicted for homosexual acts under historical legislation, was named in his honour.
The robodog gets back in the van and has a lie down after delivering its packages
Evri is testing out a robotic dog that can jump in and out of vans, climb stairs, and drop packages on your doorstep.
The 70kg robodog is the size of a Great Dane and zips around on wheels, using cameras to sense where to go.
It can hand parcels to customers directly, or can squat down and drop packages from its backside.
It’s not a belated April Fool though: this dog really will be let off the lead for the first trial of its kind in the UK, working alongside human couriers to deliver people’s parcels this summer.
Metro travelled to Evri’s HQ near Barnsley yesterday to see the dog in action, and we were surprised by the size and speed of it. It is no dalek thwarted by steps, and leaped around in a startling demonstration of how the robot future is upon us.
It is able to climb flights of stairs and open gates, and makers say it could one day deliver parcels or takeaway food for companies on its own.
So would you think this dog was a good boy if it appeared with your Vinted parcel?
Rivr, the company which invented it, said that robotics is having an ‘iPhone moment’, comparing the tech to the revolutionary launch of the smartphone which changed our lives.
Co-founder and CEO of the Swiss company, Marko Bjelonic, told Metro that we are living through a second industrial revolution, and compared robotic dog deliveries to the automation of parcel warehouses, which is now standard.
He predicted that within the next three years, we will see ‘more and more’ robots on the street, and within the next five to seven years he plans to have sold more than a million.
The robotic delivery dog
Weight: 70kg
Top speed: 6.5m/s
Endurance: 5 hours
Capacity: 50kg of weight in the drawer on its back
Hi ho, hi ho, it’s off to work the robot dog goes (Picture: Joel Goodman)
The AI-powered dog has sensors, cameras, 4G and 5G wi-fi routers, two-way audio, internal navigation, and a 45 Ah, 58V battery.
It can carry a box on its back for higher volume of parcels, or it can have an additional arm to ring doorbells and hand parcels directly to customers.
This is not just dreaming, as Rivr has been backed by Jeff Bezos, has secured $27 million in funding, and is already trialling the robots for parcel delivery in Switzerland.
Mr Bjelonic said it is possible that one day robotic delivery dogs could hang off the back of a self-driving van to leave more room for parcels inside, with fewer humans employed as couriers but new roles created to manage the robot fleet.
But for the ‘foreseeable future’, the focus is on human and robot working together for the physically intensive last stretch of a parcel reaching our homes.
Marcus Hunter, chief technology officer at Evri, said he hoped the robodog trial could help customers with disabilities who may take longer to answer the door.
The dog could wait up to ten minutes while a courier kept going with other parcels, rather than risk leaving the parcel unattended.
The robot might confuse real dogs, although local canine Yogi didn’t seem concerned (Picture: Joel Goodman)
Referring to high profile trials of drone deliveries, Mr Hunter said: ‘Everybody does drones, but there’s a limitation to drones and you need quite a lot of approvals. So the next best thing is man’s best friend… a delivery driver’s best friend now.’
He admitted he would be ‘surprised’ to see a robotic dog at his door delivering parcels, so the trial will be opt-in to avoid giving any customers nightmares about the Terminator.
‘The courier is the heart of our business,’ he added, saying the point of this for Evri was not to replace human drivers but help them deliver more easily: ‘The more parcels they get to facilitate on their rounds, everybody’s happy.’
Parcel volumes are increasing because people do so much online shopping, and the UK in particular relies on doorstep deliveries rather than collecting things from a locker.
The robodog opens its box to deliver a parcel (Picture: Joel Goodman)
Yorkshire-based Evri delivers over 800 million parcels a year, with around 2.5 million deliveries a day.
The Rivr dog is not the only robot they are trying out. They’re also trialling one by Delivers AI, which looks like a coolbox on wheels that rolls itself along pavements independently.
Mr Hunter said using electric robots could reduce the carbon footprint of parcel delivery. Some couriers in London are already using ‘e-cargo bikes’, which are both battery and pedal powered.
He said: ‘If you link that up with the dog, you’ve got a really green solution. You could send them off into a block of flats and it could easily deliver five to ten parcels to the same block while the courier is doing other things and supporting it.’
Meanwhile, Amazon is investing in parcel delivery by drone and has chosen Darlington as the first location to test it in the UK.
Last week, the government announced £20 million in funding to make everyday use of flying taxis and drone deliveries a reality.
Unitree's G1 demonstrates a new level of robotic agility with a complex movement following an AI software update.
World's First Side-Flipping Humanoid Robot: Unitree G1
Scientists just showcased a humanoid robot performing a complicated side flip.
The company that makes the robot, Unitree, posted a video to YouTube showcasing its acrobatics. In the video, the silver-grey G1 crouches slightly, then launches up before rotating sideways through the air.
The robot catches itself primarily with its left leg and stabilizes almost immediately as the other foot makes contact with the floor. As impressive as it is at full speed, it's even more mesmerizing in slow motion, particularly the ease with which the robot seems to balance and right itself after landing.
Last year, the same model mastered a backflip. To teach G1 its new trick, the company mainly upgraded the artificial intelligence (AI) algorithm it used to make the software faster and more responsive, Unitree representatives told Live Science.
"The side flip was performed under reinforcement learning training," they added.
Reinforcement learning is a mainstay technique used to teach robots how to navigate and interact with the physical world. This is the same technology that robotics company Figure has used to train its Figure 02 robots to move in a more humanlike way.
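Unitree has not released its training code, but reinforcement learning for a skill like this follows a familiar loop: a simulated robot attempts the motion thousands of times and a reward signal nudges its controller towards attempts that work. Below is a bare-bones, evolution-strategies-style version of that trial-and-error idea; the environment, reward and "policy" are stand-ins of my own invention, not Unitree's (or Figure's) system, which uses far richer policies and GPU physics simulators.

```python
import numpy as np

class SideFlipSim:
    """Stand-in for one simulated flip attempt: returns a scalar score for
    how well the (toy) controller parameters performed."""
    def rollout(self, policy_params: np.ndarray, rng) -> float:
        # Placeholder dynamics: score peaks when parameters approach a target.
        target = np.linspace(-1.0, 1.0, policy_params.size)
        return float(-np.sum((policy_params - target) ** 2) + rng.normal(0.0, 0.1))

rng = np.random.default_rng(0)
sim = SideFlipSim()
params = np.zeros(8)               # toy "policy": a handful of controller gains
lr, noise_std, pop = 0.02, 0.1, 64

for step in range(200):
    # Perturb the policy many times, score each attempt, keep what does better.
    noise = rng.normal(0.0, noise_std, size=(pop, params.size))
    rewards = np.array([sim.rollout(params + n, rng) for n in noise])
    advantages = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    params += lr / (pop * noise_std) * noise.T @ advantages   # ES gradient estimate
    if step % 50 == 0:
        print(f"step {step:3d}  mean reward {rewards.mean():+.3f}")
```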
The G1 can also walk and run at up to 2 meters per second (4.5 miles per hour or 7.2 kilometers per hour). (Image credit: Unitree Robotics)
A robotic acrobat
The G1 can do more than acrobatics; during a martial arts demonstration, it disarmed a baton-wielding opponent. After a series of feints with its hands, the bot executed a spinning kick that sent the baton flying from its opponent's hands.
Like its human counterparts, the robot's lightweight, compact form helps it perform acrobatic moves with ease.
The G1 stands at just over 4.3 feet (1.3 meters) in height and weighs only 77 pounds (35 kilograms). The G1 also sports a 3D light detection and ranging (lidar) sensor and a depth camera, which give it a 360-degree view of its environment. Perhaps most importantly, it incorporates 23 degrees of freedom, a measure of the number of joints or axes of movement available to the bot.
The G1 can also walk and run at up to 2 meters per second (4.5 miles per hour or 7.2 kilometers per hour).
The company envisions robots like G1 or its successors doing everything from helping out with chores at home to performing industrial operations to assisting with hazardous rescue missions, the Unitree representative said.
Figure 02's human-like gait is the product of the company's simulated reinforcement learning system, and is just the beginning of its plans to make its robots perform physical tasks more naturally.
A U.S. robotics company has used artificial intelligence (AI) to give its humanoid robots a more natural-looking stride, and they say it's just the beginning.
In the promotional video, the robot, called Figure 02 and manufactured by the company Figure, marches with a "human-like" gait. This is an ability it claims will help its robot to navigate the physical world more easily.
"These initial results are exciting, but we believe they only hint at the full potential of our technology," company representatives wrote in a blog post accompanying its announcement. "We're committed to extending our learned policy to handle every human-like scenario the robot might face in the real world."
The problem, known as Moravec's Paradox, emerges because computers excel at problems that require complex calculations and large datasets, but lack our real-world experience honed by millions of years of evolution. This makes robots' shuffling gaits, well, robotic at best. At worst, it gives them the appearance that they may have soiled themselves.
To tackle the robot's unnatural gait, Figure's engineers used a learning technique called reinforcement learning — placing thousands of virtual robots inside a physics simulator that recreates various terrains, thereby improving their walking through trial and error.
By rewarding the virtual robot army for natural motions, they refined their gaits to appear more human-like. With this task accomplished, they uploaded the refined "Learned Natural Walking" model to a real-world Figure 02 robot. The result is an android that can move somewhat naturally, with heel strikes, toe-offs and synchronized arm swings.
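Figure has not published its reward function, but "rewarding the virtual robot army for natural motions" usually means summing hand-designed style terms like the ones listed above (heel strikes, toe-offs, synchronized arm swings) with a basic locomotion objective. A hypothetical shape of such a reward, with all term names and weights being illustrative assumptions of mine:

```python
def walking_reward(state: dict) -> float:
    """Hypothetical per-timestep reward for 'natural' walking in simulation.

    `state` is assumed to hold quantities reported by the simulator,
    e.g. forward velocity, torso tilt, and gait-event flags.
    """
    r = 0.0
    r += 1.0 * state["forward_velocity"]          # make progress
    r -= 0.5 * abs(state["torso_tilt"])           # stay upright
    r += 0.3 * state["heel_strike_first"]         # 1.0 if the heel lands before the toe
    r += 0.2 * state["toe_off_push"]              # push off through the toes
    r += 0.2 * state["arm_leg_sync"]              # arms swing opposite the legs
    r -= 0.1 * state["energy_used"]               # discourage jerky, wasteful motion
    return r

# Example timestep:
print(walking_reward({
    "forward_velocity": 1.2, "torso_tilt": 0.05, "heel_strike_first": 1.0,
    "toe_off_push": 0.8, "arm_leg_sync": 0.9, "energy_used": 0.6,
}))
```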
Figure's reinforcement learning technique is key to the California company's plans to roll out its robots on factory floors. It has already tested its humanoid robots in a BMW factory in 2024 and plans to introduce more this year. Meanwhile, Apptronik, a Texas-based competitor, is also commercializing its humanoid robot, Apollo, for use in Mercedes-Benz factories by the end of 2025. Agility Robotics' Digit will also be introduced into warehouses this year.
While we go about our days – working, eating and sleeping – the Earth is constantly rotating through its own magnetic field.
Now, experts claim it is possible to harvest clean energy from this natural rhythm.
Scientists have managed to take advantage of the Earth's spin to generate a tiny amount of electricity.
Although the voltage they managed to produce was small, it could be the first step towards a new way to generate limitless green energy, they said.
The idea dates back hundreds of years, to when scientists first began to suggest that the difference in velocity – the speed of something in a given direction – between a magnetic field and its magnet could allow a voltage to form.
Previous studies appear to have debunked this theory – indicating that any electrons pushed by the Earth's magnetic field would quickly rearrange themselves and cancel out any difference in charge.
However, a new experiment suggests otherwise.
Researchers used a 29.9cm-long hollow cylinder made from manganese-zinc ferrite – a material chosen to encourage the motion of magnetic fields.
Earth's magnetic field — also known as the geomagnetic field — is generated in our planet's interior and extends out into space (stock image)
The researchers used a custom-designed cylinder - the design shown here - to harvest electricity
The scientists managed to take advantage of the Earth's spin to generate a tiny amount of electricity (stock image)
This cylinder was placed in a pitch-black, windowless lab to minimise any interference from light, and angled so that it sat at 90 degrees to both Earth's rotation and its magnetic field.
Although the object was stationary in the lab, the lab itself was being carried by Earth's rotation through its own magnetic field.
This produced a magnetic force on the electrons in the object – and analysis revealed a voltage of 19 microvolts was recorded.
The team, from Princeton University and NASA's Jet Propulsion Laboratory, said this voltage disappeared when the cylinder was set at a different angle or a different cylinder was used – suggesting it was being generated by Earth's rotation.
They described the findings as 'initial proof-of-concept results' and warned people to hold off celebrating for now.
However, they said their results 'provide a starting point for future investigations into ways to passively generate larger amounts of current and voltage using Earth's magnetic field'.
Writing in the journal Physical Review Research the scientists said: 'Could electricity be generated from Earth's rotation through its own magnetic field?
'Controlling for thermoelectric and other potentially confounding effects, we show that this small demonstration system generates a continuous DC voltage and current of the predicted magnitude.'
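For scale, the naive "moving conductor in a magnetic field" estimate is easy to reproduce: at mid-latitudes the lab is carried eastward at a few hundred metres per second through a field of roughly 50 microtesla, so a 30 cm device sees a motional EMF of order millivolts. As the earlier objection about electrons rearranging themselves explains, almost all of that is cancelled, leaving something closer to the 19 microvolts actually measured. A back-of-envelope check (the latitude and field strength are rough textbook values, not the paper's):

```python
import math

# Rough inputs (illustrative, not the experiment's exact values)
earth_radius = 6.371e6            # m
omega = 2 * math.pi / 86164       # rad/s, one sidereal day
latitude_deg = 40.0               # roughly mid-latitude USA
B = 50e-6                         # T, typical geomagnetic field strength
L = 0.299                         # m, length of the ferrite cylinder

v = omega * earth_radius * math.cos(math.radians(latitude_deg))  # surface speed
emf_naive = v * B * L             # v x B force integrated along the cylinder

print(f"surface speed      : {v:7.1f} m/s")
print(f"naive motional EMF : {emf_naive*1e3:7.2f} mV")
print(f"measured voltage   : {19.0:7.1f} uV  (a few hundred times smaller)")
```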
Nuclear power plants, like this one pictured in China, are also part of the drive towards cleaner, greener energy (stock photo)
Recent years have heralded a huge push towards clean energy, with a focus on shifting away from sources that release greenhouse gases in a bid to avoid the worst effects of climate change.
Historically, fossil fuels such as coal have made up a significant amount of the energy used globally for electricity, heating and cooking.
Experts say a shift towards more clean and renewable energy sources – such as wind power and solar power – is crucial.
This could also include geothermal energy, which utilises heat from the Earth’s interior, and hydropower, which harnesses the energy of moving water.
Meanwhile, a move towards nuclear energy is also gaining momentum.
According to the World Nuclear Association, this form of power now provides about 10 per cent of the world’s electricity.
It is generated by splitting atoms – a process called nuclear fission – which releases heat that boils water into steam.
This then spins turbines to produce electricity.
No carbon dioxide or other greenhouse gases are released, meaning many consider it a viable alternative to fossil fuels.
While nuclear power is non-renewable – there is only a finite amount of nuclear fuel in the world – only small amounts are needed to produce large amounts of electricity.
Layers of the atmosphere
Troposphere is where humans live and weather exists, the lowest layer stretching up to about six miles.
Stratosphere extends up to about 40 miles and contains much of the ozone in the atmosphere.
Mesosphere sits just above the stratosphere where temperature decreases with height, reaching -130F.
Thermosphere is where temperatures begin to increase with height, caused by the absorption of UV and X-rays.
Ionosphere is part of Earth's upper atmosphere, between 50 and about 370 miles up, where extreme ultraviolet radiation creates a layer of electrons.
Exosphere starts at 310 miles and contains oxygen and hydrogen atoms, but in very low numbers.
Magnetosphere features charged particles along magnetic field lines in two bands at 1,800 and 10,000 miles above the surface.
Artificial intelligence is evolving at a staggering pace, and researchers are now putting it to what they call Humanity’s Last Exam (HLE)—a test designed to challenge AI models with the toughest academic questions ever compiled. Experts predict that within the next year, AI could dramatically improve its accuracy, bringing it closer to mastering knowledge at a human level.
The Exam Designed to Outsmart AI
Unlike standard assessments, HLE isn’t just another set of routine questions. It was created by specialists from the Center for AI Safety and Scale AI, a for-profit company that works with major tech firms to refine AI training data. Their goal? To design a test so challenging that even the most advanced large language models (LLMs), like ChatGPT, Gemini, and DeepSeek, struggle to score above a failing grade.
HLE pulls from over 2,700 expert-submitted questions, spanning disciplines from mathematics and medicine to engineering and humanities. Any questions that today’s AI models could easily answer were discarded. Instead, the exam focuses on problems requiring deeper reasoning, specialized knowledge, and complex interpretations—things AI has traditionally struggled with.
The results so far? AI models have flunked spectacularly, scoring between 3 and 14 percent. But that may not last for long.
The latest study suggests that by the end of 2025, LLMs could achieve at least 50 percent accuracy on the test. That’s a massive leap, considering the difficulty of the questions.
According to the exam's creators: “HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable, but cannot be quickly answered via internet retrieval.”
The test is structured as follows:
41% Mathematics
11% Biology & Medicine
10% Computer Science
9% Physics
9% Humanities & Social Sciences
6% Chemistry
5% Engineering
9% Other topics
Examples of the kinds of challenges LLMs face include translating ancient Roman inscriptions, identifying missing links in chemical reactions, and solving highly advanced mathematical equations. One question even asks AI about itself—testing whether it truly understands its own limitations.
AI’s Next Step: Recognizing Uncertainty
One of AI’s biggest flaws is overconfidence—it often provides an answer even when it has no idea if it’s correct. To address this, researchers are training AI models to evaluate their own uncertainty, forcing them to assess confidence levels before responding.
In the next phase of AI development, models will not only give answers but will also provide a confidence score from 0 to 100 percent. The idea is to move away from blind guessing and towards an approach that mirrors human uncertainty—where admitting “I don’t know” is sometimes the best answer.
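As a rough illustration of how such self-reported confidence can be scored, the Python sketch below computes a simple binned calibration error from invented (confidence, correct) pairs; it is not the scoring code used by HLE, just the general idea.
# Minimal sketch: binned calibration error for a model that reports 0-100% confidence.
# The (confidence, correct) pairs below are invented for illustration.
from statistics import mean

results = [(95, True), (90, False), (80, True), (60, False),
           (55, True), (40, False), (30, False), (20, False)]

def calibration_error(results, bins=4):
    edges = [100 * i / bins for i in range(bins + 1)]
    err, total = 0.0, len(results)
    for lo, hi in zip(edges, edges[1:]):
        bucket = [(c, ok) for c, ok in results if lo <= c < hi or (hi == 100 and c == 100)]
        if not bucket:
            continue
        avg_conf = mean(c for c, _ in bucket) / 100
        accuracy = mean(1.0 if ok else 0.0 for _, ok in bucket)
        err += len(bucket) / total * abs(avg_conf - accuracy)
    return err

print(f"calibration error: {calibration_error(results):.2f}")   # 0 means perfectly calibrated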
The results of HLE are verified by another AI model, GPT-4o, which checks whether slight variations of a correct response are still valid. This is similar to how a contestant on Jeopardy! might answer “T. rex” instead of “Tyrannosaurus rex” and still be awarded points.
History suggests that AI models rapidly overcome benchmarks, sometimes going from near-zero accuracy to near-perfect scores in just a few training cycles. While today’s LLMs are failing HLE, it may only be a matter of time before they crack the code.
What this means for the future is still up for debate. Will AI become the ultimate academic tool, capable of answering any question with near-perfect accuracy? Or will researchers keep raising the bar, ensuring that human intelligence remains ahead?
RELATED VIDEOS
What is Humanity’s Last Exam?
Humanity's LAST Exam AI's Ultimate CHALLENGE
AI and the future of humanity | Yuval Noah Harari at the Frontiers Forum
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
15-03-2025
Analysis of Artificial Intelligence (AI): Blessing or Curse for Humanity?
Artificial intelligence (AI) is a technology that is becoming ever more integrated into our daily lives. From chatbots and virtual assistants to advanced algorithms that make decisions in healthcare and the financial sector, AI has the potential to drastically change the way we live and work. This analysis explores both the benefits and the possible dangers of AI for humanity.
Benefits of AI
1. Efficiency and Productivity
Artificial intelligence (AI) is having a profound impact on how organisations operate, particularly when it comes to efficiency and productivity. AI systems can automate repetitive tasks, which means human employees can concentrate on more complex and creative work. This not only raises productivity but also improves job satisfaction. In industry, for example, robots can take over the assembly line, freeing human workers to focus on quality control and innovation. In retail, AI can automate inventory management, giving staff more time for customer interaction and sales strategy.
Moreover, AI systems can learn continuously and adapt to new circumstances, leading to further optimisation of processes. This is especially valuable in sectors where speed and accuracy are crucial, such as financial services, where AI models can analyse transactions and assess risks faster than human analysts, resulting in quicker decisions and higher profitability.
In the service sector, AI can also improve the customer experience through chatbots that are available 24/7 to answer questions. This raises not only efficiency but also customer satisfaction, because customers are helped more quickly. In short, by automating repetitive tasks and optimising processes, AI contributes to significantly higher productivity and efficiency across many sectors.
2. Improved Healthcare
The application of AI in healthcare has the potential to bring about revolutionary changes in how medical care is delivered. AI systems are increasingly used to make diagnoses, personalise treatments and even discover new drugs. Thanks to the enormous amount of data available in healthcare, from medical records to genetic information, AI can identify patterns and correlations that human doctors might overlook.
For example, AI can analyse how different treatments have worked for different patient groups and recommend personalised care. This leads not only to more effective treatments but also to better outcomes for patients. AI systems can also detect early signs of diseases such as cancer using image-recognition technology, which increases the chance of successful treatment.
Another important aspect is the development of new medicines. With AI, researchers can identify promising molecules faster and more efficiently, accelerating drug development, which is crucial in a world where new diseases and variants of existing ones keep emerging.
In short, integrating AI into healthcare leads to better diagnostics, personalised treatments and faster development of new medicines, which ultimately improves the quality of care and saves lives.
3. Data Analysis and Decision-Making
In an era in which data is one of the most valuable resources, AI plays a crucial role in improving data analysis and decision-making. AI systems can process and analyse enormous amounts of data, helping organisations make better-informed decisions. This applies to many fields, from marketing to government policy.
Companies can use AI to analyse customer behaviour, identify trends and predict future purchases. These insights allow them to develop more targeted marketing strategies, leading to higher conversion and customer satisfaction. AI also lets businesses optimise their processes by identifying and tackling inefficiencies, from improving the supply chain to optimising pricing strategies.
In the public sector, AI is also invaluable. Governments can use data analysis to understand social issues better and develop policy that matches the needs of the population. By analysing crime data, for instance, police services can develop more effective strategies to fight crime and improve safety.
In short, AI offers powerful tools for data analysis that enable organisations to make faster and more accurate decisions. This leads to a more focused approach in both business and government, resulting in more efficient operations and better service to customers and citizens.
4. Innovation
AI is a driving force behind innovation in many sectors. The possibilities it offers allow companies to develop new products and services that were previously unthinkable. From smart homes to autonomous vehicles, AI's impact on technological progress is enormous and still growing.
In the technology sector we see examples of AI changing our daily lives. Smart devices in our homes can learn from our behaviour and preferences, making life easier. Think of thermostats that automatically adapt to our habits, or security systems that recognise suspicious activity and alert us. These innovations improve not only convenience but also energy efficiency and safety.
In the automotive industry, the development of autonomous vehicles is one of the most groundbreaking applications of AI. These vehicles use complex algorithms and sensor technology to navigate safely, which not only makes transport more efficient but also has the potential to reduce traffic accidents. The innovations in this sector are not just technically impressive; they also have broad societal implications, such as reducing congestion and improving the accessibility of transport.
AI also drives innovation in healthcare, where it enables the development of new treatments and technologies. From personalised medicine to advanced diagnostic tools, AI opens the door to new ways of understanding and fighting disease.
In summary, AI acts as a catalyst for innovation, making possible new products and services that improve our lives and transform the way we interact with technology.
5. Accessibility
One of the most valuable benefits of AI is its ability to lower barriers and promote accessibility for people with disabilities. AI technologies such as speech recognition and text-to-speech applications make communication and interaction easier for people who might otherwise be limited in what they can do.
Voice-controlled assistants such as Siri and Google Assistant, for example, allow people with motor impairments to operate their devices without physical interaction. This not only makes daily life easier but also provides a sense of independence. Text-to-speech programs can also read written information aloud, giving people with visual impairments access to books, websites and other text sources.
AI can also help improve the accessibility of physical environments. Smart technologies can be deployed in buildings to make navigation easier for wheelchair users or people with visual impairments. By analysing data on accessibility needs, cities and businesses can better adapt their infrastructure to the needs of all residents and visitors.
In addition, AI systems can be used in education to create personalised learning experiences for students with learning disabilities. By tracking students' progress and offering tailored exercises, teachers can provide more effective support.
In short, AI technologies play a crucial role in promoting accessibility and inclusion, giving people with disabilities equal opportunities to participate in society and realise their potential. This contributes to a fairer and more inclusive world.
Dangers of AI
1. Employment and the Economy
The rise of artificial intelligence (AI) is having a profound impact on employment and the economy. Automation of tasks, especially in sectors such as manufacturing, logistics and customer service, can lead to mass unemployment. While some jobs disappear, new opportunities also arise. The problem lies in the transition: many workers are not prepared for these changes. Training and re-skilling are crucial, but are often not timely or accessible enough. This can create a gap between workers who can adapt and those who cannot, widening social inequality.
AI can also contribute to economic inequality. Large companies with the resources to implement AI can increase their efficiency and profits, while smaller businesses fall behind. This can distort competition and encourage monopoly formation, harming the economy as a whole. Policymakers must act proactively to mitigate these effects, including developing lifelong-learning programmes and supporting sectors threatened by automation. The challenge is to strike a balance between technological progress and social responsibility.
2. Privacy and Security
AI systems collect and analyse enormous amounts of personal data, with serious implications for privacy and data security. The risk of misuse is considerable; both governments and companies can use this information for improper purposes, such as unauthorised surveillance, profiling or even behavioural manipulation. Data-protection legislation such as the General Data Protection Regulation (GDPR) in Europe is a step in the right direction, but it lags behind the rapid developments in AI technology.
There is also growing concern about the security of AI systems themselves. Cyberattacks on AI infrastructure can lead to data breaches or even the sabotage of critical systems. Organisations therefore need robust security measures and continuous monitoring. The public must also be made aware of the risks and of the need to handle personal data carefully. Transparency and ethical guidelines are essential to maintain consumer trust and safeguard personal information.
3. Bias and Discrimination
Another major danger of AI is the possibility of bias and discrimination. AI models are often trained on historical data that can contain inherent prejudices. If the data are not representative, the outcomes of AI systems can disadvantage certain groups. This can lead to unequal treatment in areas such as recruitment, lending and law enforcement. The consequences are far-reaching and can further deepen social inequality.
To tackle these problems, it is crucial that developers are aware of the data they work with and of its impact. Diversity within development teams can help guarantee a broader perspective. Methods must also be developed to detect and correct bias in algorithms. Regulation can play a role too, by providing guidelines for fair and transparent AI applications. It is vital that we take these issues seriously in order to safeguard a just society.
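To make "detecting bias" a little more concrete, here is a minimal Python sketch that computes one common fairness metric, the demographic parity difference: the gap in positive-outcome rates between two groups. The data are invented purely for illustration, and this is only one of many possible checks.
# Minimal sketch of one common bias check: demographic parity difference.
# The records below are invented purely for illustration.
records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

gap = approval_rate(records, "A") - approval_rate(records, "B")
print(f"approval rate A: {approval_rate(records, 'A'):.2f}")
print(f"approval rate B: {approval_rate(records, 'B'):.2f}")
print(f"demographic parity difference: {gap:.2f}")   # 0 would mean equal rates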
4. Autonomous Weapons
The development of AI for military applications raises significant ethical questions. Autonomous weapons can make decisions without human intervention, which can lead to unwanted escalation and to conflicts without mechanisms of accountability. This technology could fundamentally change how wars are fought and increase the risk of an arms race.
The implications of autonomous weapons are far-reaching. It is hard to predict how these systems will behave in complex situations, which can lead to unintended casualties and human-rights violations. There have been calls for international treaties to regulate the use and development of autonomous weapons, but enforcing them is challenging. It is crucial that countries work together to draw up ethical guidelines and engage in dialogue about the responsibilities that come with this technology.
5. Dependence
As we rely more and more on AI, there is a risk of excessive dependence. This can lead to a decline in human skills and critical thinking. When people rely on AI for decision-making, they may be less inclined to use their own intuition and experience, which can be problematic in situations that require human creativity and empathy.
This dependence can also affect the workplace and education. Employees may be less motivated to develop new skills if they believe AI can take over their tasks. Educational institutions must adapt by offering courses that foster not only technical skills but also critical thinking and problem-solving. It is essential to find a balance between using AI and preserving human skills, so that technology supports us rather than replaces us.
AI | Hoe werkt zelflerende kunstmatige intelligentie?
Future Outlook
The future of AI promises both opportunities and challenges. Technologies such as machine learning, deep learning and natural language processing will continue to evolve. This will lead not only to improved efficiency and productivity, but also to new ethical and societal questions.
Expected Developments
Integration of AI into Daily Life: AI will become ever more integrated into our daily lives, from smart homes to healthcare. This will improve the user experience but also raise questions about privacy and security.
AI and Sustainability: AI can play a crucial role in tackling environmental problems by optimising resources, energy use and waste management.
New Jobs and Skills: While some jobs disappear, new ones will emerge that focus on working with AI systems. This requires a rethink of our education system and the development of new skills.
Responsible AI: There will be a growing emphasis on developing ethical AI systems that are transparent and accountable. This includes addressing bias and safeguarding privacy.
The Terminator from the film of the same name, HAL from 2001: A Space Odyssey and Ultron from The Avengers: Age of Ultron are three examples of AIs that were dangerous to humanity.
Robotics and AI
Robotics and AI are closely intertwined. AI improves the functionality of robots, allowing them to operate autonomously and carry out complex tasks. This has numerous practical applications.
Applications of Robotics with AI
Industrial Robots: AI-driven robots are becoming increasingly common in manufacturing, where they carry out tasks such as assembly and quality control.
Medical Robots: In healthcare, robots with AI are used for surgery, diagnostics and even as assistance for care providers.
Self-Driving Vehicles: AI is crucial to the development of self-driving vehicles, which can analyse complex traffic situations and respond to them.
Drones: AI-equipped drones are used for a wide range of applications, from agricultural monitoring to search-and-rescue operations.
Practical Applications of AI
AI has found its way into numerous sectors, with practical applications that are changing the way we work and live.
Healthcare: AI is used to analyse medical images, predict disease outbreaks and personalise treatments.
Finance: In the financial sector, AI helps with fraud detection, risk management and improving customer service via chatbots.
Education: AI can provide personalised learning experiences, help students track their progress and even support teachers with administrative tasks.
Customer Service: AI-driven chatbots and virtual assistants improve customer service by offering 24/7 support and answering frequently asked questions.
The Added Value of AI for Space Research
AI has the potential to drastically change the way we conduct space research. The enormous amounts of data collected by telescopes, satellites and spacecraft can be analysed far more efficiently with AI.
Applications in Space Research
Data Analysis: AI can help sort and analyse the vast amounts of data collected by space instruments, allowing scientists to make discoveries faster.
Autonomous Space Missions: AI can enable autonomous navigation and decision-making for spacecraft, which is crucial for missions to distant destinations.
Exoplanet Exploration: AI techniques can be applied to identify and analyse signals from exoplanets, which may lead to the discovery of new worlds.
Simulations and Models: AI can be used to build complex physical models and simulations, helping scientists better understand how the universe works.
Optimisation of Space Missions: AI can help optimise the planning and execution of space missions, resulting in more efficient and cost-effective operations.
10 Amazing Robots That Really Exist
Conclusion
Artificial intelligence offers considerable benefits and opportunities, but also brings important challenges. The integration of AI into various sectors, including robotics and space research, can lead to innovative solutions and improved efficiency. It is essential, however, to pay attention to ethical considerations and to the impact on society.
For the future, it is crucial that we strike a balance between harnessing the benefits of AI and addressing its downsides. By developing responsible AI and adapting to changing technologies, we can pave the way for a future in which AI is a positive force in society. The role of AI in space research will continue to grow, and its potential to unravel the mysteries of the universe is one of the most exciting prospects of this technology.
Artificial intelligence has the potential to be a blessing for humanity by delivering efficiency, innovation and improved healthcare. However, the dangers that accompany AI, such as job losses, privacy issues and ethical implications, cannot be ignored. It is crucial that policymakers, companies and researchers work together to develop guidelines and regulations that maximise the benefits of AI and minimise its risks. In this rapidly changing world, a balanced approach is necessary to ensure that AI has a positive impact on humanity.
{ PETER2011 }
15-03-2025 at 21:28
written by peter
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
13-03-2025
AI’s rapid rise: A ticking time bomb for humanity!
Let’s talk about Artificial Intelligence! How many people are actually aware of the rapid rise of AI and the potential risks it poses to humanity’s future? Do you recognize these dangers, or do you choose to ignore them, turning a blind eye to the reality of AI’s impact?
An increasing number of people are becoming aware of AI's rapid rise, yet many still unknowingly rely on AI-powered technologies. Studies show that while nearly all Americans use AI-integrated products, 64% remain unaware of it.
AI adoption is expanding: by 2023, 55% of organizations had implemented AI technologies, and nearly 77% of devices incorporated AI in some form. Despite this prevalence, only 17% of adults can consistently recognize when they are using AI.
With growing awareness comes rising concern. Many fear job displacement, while others worry about AI’s long-term risks. A survey found that 29% of respondents see advanced AI as a potential existential threat, and 20% believe it could cause societal collapse within 50 years.
A June 2024 study across 32 countries revealed that 50% of people feel uneasy about AI. As AI continues to evolve, how many truly grasp its impact—and the risks it may pose for humanity’s future?
Now, a new paper highlights the risks of artificial general intelligence (AGI), arguing that the ongoing AI race is pushing the world toward mass unemployment, geopolitical conflict, and possibly even human extinction. The core issue, according to researchers, is the pursuit of power. Tech firms see AGI as an opportunity to replace human labor, tapping into a potential $100 trillion economic output. Meanwhile, governments view AGI as a transformative military tool.
Researchers in China have already developed a robot controlled by human brain cells grown in a lab, dubbed a "brain-on-chip" system. The brain organoid is connected to the robot through a brain-computer interface, enabling it to encode and decode information and control the robotic movements. By merging biological and artificial systems, this technology could pave the way for developing hybrid human-robot intelligence.
However, experts warn that superintelligence, once achieved, will be beyond human control.
The Inevitable Risks of AGI Development.
Mass Unemployment – AGI would fully replace cognitive and physical labor, displacing workers rather than augmenting their capabilities.
Military Escalation – AI-driven weapons and autonomous systems increase the likelihood of catastrophic conflict.
Loss of Control – Superintelligent AI will develop self-improvement capabilities beyond human comprehension, rendering control impossible.
Deception and Self-Preservation – Advanced AI systems are already showing tendencies to deceive human evaluators and resist shutdown attempts.
Experts predict that AGI could arrive within 2–6 years. Empirical evidence shows that AI systems are advancing rapidly due to scaling laws in computational power. Once AGI surpasses human capabilities, it will exponentially accelerate its own development, potentially leading to superintelligence. This progression could make AI decision-making more sophisticated, faster, and far beyond human intervention.
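The "scaling laws" referred to here are empirical power-law fits that relate a model's loss to its training compute. The Python sketch below uses invented coefficients purely to illustrate the shape of such a curve; the published fits use different constants.
# Toy illustration of a compute scaling law: loss falls as a power law in compute.
# The coefficients below are invented for illustration only.
def loss(compute_flops, a=11.2, alpha=0.05, irreducible=1.8):
    return irreducible + a * compute_flops ** (-alpha)

for c in [1e21, 1e23, 1e25]:
    print(f"compute {c:.0e} FLOP -> predicted loss {loss(c):.2f}")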
The paper emphasizes that the race for AGI is occurring amidst high geopolitical tensions. Nations and corporations are investing hundreds of billions in AI development. Some experts warn that a unilateral breakthrough in AGI could trigger global instability—either through direct military applications or by provoking adversaries to escalate their own AI efforts, potentially leading to preemptive strikes.
If AI development continues unchecked, experts warn that humanity will eventually lose control. The transition from AGI to superintelligence would be akin to humans trying to manage an advanced alien civilization. Superintelligent AI could take over decision-making, gradually making humans obsolete. Even if AI does not actively seek harm, its vast intelligence and control over resources could make human intervention impossible.
Conclusion: The paper stresses that AI development should not be left solely in the hands of tech CEOs who acknowledge a 10–25% risk of human extinction yet continue their research. Without global cooperation, regulatory oversight, and a shift in AI development priorities, the world may be heading toward an irreversible crisis. Humanity must act now to ensure that AI serves as a tool for progress rather than a catalyst for destruction.
RELATED VIDEOS
The 7 Stages of AI
AI Evolution: A Historical Timeline of AI
10 Things They're NOT Telling You About The New AI
China is taking deep-sea exploration to a new level with the construction of an advanced research station 1.2 miles below the South China Sea. This facility, expected to be operational by 2030, will serve as a permanent underwater base for energy research, marine ecology studies, and seismic monitoring. But beyond science, it could significantly strengthen China’s geopolitical position in one of the most contested maritime regions in the world.
At the core of this mission is methane hydrate, an ice-like compound that contains vast amounts of natural gas. If China successfully extracts this resource, it could reshape global energy markets. However, concerns remain about the environmental risks of methane release and the potential geopolitical consequences of this project.
Unlocking a Potential Energy Revolution
Methane hydrates—also known as “flammable ice”—are frozen deposits of natural gas trapped within water molecules. When burned, the gas they hold releases about 50% less carbon dioxide than coal, making it a cleaner alternative to traditional fossil fuels.
China first discovered large methane hydrate reserves in the South China Sea in 2015, and three years later, successfully extracted samples. This new deep-sea station will allow scientists to monitor methane seepage, track tectonic activity, and refine extraction techniques in real-time.
However, methane is also a potent greenhouse gas that could accelerate climate change if unintentionally released. Developing a safe extraction method will be essential before this resource can be commercially viable.
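As a rough sanity check on that emissions claim, here is a small Python comparison using approximate, commonly cited emission factors per unit of heat; the numbers are generic assumptions, not figures from this project.
# Rough comparison of CO2 emitted per gigajoule of heat, using approximate
# emission factors (kg CO2 per GJ) that are typical literature values.
emission_factors = {
    "natural gas (methane)": 56,   # approximate value
    "bituminous coal": 95,         # approximate value
}

gas = emission_factors["natural gas (methane)"]
coal = emission_factors["bituminous coal"]
print(f"methane emits ~{100 * (1 - gas / coal):.0f}% less CO2 per GJ than coal")
Depending on the exact factors used, the saving comes out in the 40 to 50 per cent range, broadly in line with the figure quoted above.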
Advanced Deep-Sea Monitoring
The research station will integrate into a four-dimensional monitoring system, consisting of:
Autonomous submersibles for real-time methane tracking
Seabed observatories to study ecosystem changes
Surface ships and drilling vessels for data collection
China’s Mengxiang drilling vessel, capable of reaching the Earth’s mantle
The station will be positioned in the Qiongdongnan Basin, a large cold seep zone in the northwestern South China Sea. These cold seeps—where natural gases escape from the seafloor—are home to over 600 marine species and hold significant deposits of rare minerals.
By establishing a permanent presence in this resource-rich region, China is positioning itself at the forefront of future deep-sea energy exploration.
Geopolitical and Economic Impact
Beyond energy research, China’s deep-sea facility could provide an economic and strategic advantage in the South China Sea. The region is a critical maritime trade route and the site of ongoing territorial disputes involving Vietnam, Taiwan, and the Philippines.
A permanent underwater research base allows China to:
Strengthen territorial claims under the pretext of scientific research
Secure access to rare-earth metals, cobalt, and nickel—essential for electronics and renewable technologies
Additionally, the facility will enhance seismic monitoring capabilities, improving early warning systems for earthquakes and tsunamis in the region.
If China develops a reliable method for extracting methane hydrates, it could establish itself as a dominant player in global energy markets. However, the risks remain significant. Uncontrolled methane release could lead to severe environmental consequences, and the geopolitical implications of a permanent deep-sea base could heighten regional tensions.
RELATED VIDEOS
China Builds WORLD'S FIRST Deepwater Space Station 2,000 Meters Under the Sea
Stunning: China is building the world’s first deepwater ‘space station’ 2,000 meters below the sea!
Beijing Building South China Sea Deepwater 'Space Station' | WION Fineprint
Humanoid robots typically struggle to stand up after being knocked over, but new AI-powered research from China brings us one step closer to the rise of the machines.
Researchers in China and Hong Kong have developed a new artificial intelligence (AI) learning framework that teaches humanoid robots to stand up from an idle position incredibly quickly, regardless of position or terrain.
While the research has yet to be submitted for peer review, the team released their findings Feb. 12 on GitHub, including a paper uploaded to the arXiv preprint database, alongside a video demonstrating their framework in action.
The video shows a bipedal humanoid rising to stand after lying on its back, sitting against a wall, lying on a sofa and reclining in a chair. The researchers also tested the humanoid robot's ability to right itself on varying terrains and inclines — including a stone road, a glass slope and while leaning against a tree.
They even attempted to disrupt the robot by hitting or kicking it while it was trying to get up. In every scenario, the robot can be seen adjusting to its environment and is shown successfully standing up.
This remarkable ability to get knocked down and then get up again is thanks to the system called "Humanoid Standing-up Control" (HoST). The scientists achieved this with reinforcement learning, a type of machine learning where the agent (in this case the HoST framework) attempts to perform a task by trial and error. In essence, the robot takes an action, and if that action results in a positive outcome, it is sent a reward signal that encourages it to take that action again the next time it finds itself in a similar state.
Rising to the occasion
The team's system was a little more complicated than that, using four separate reward groups for more targeted feedback, along with a series of motion constraints including motion smoothing and speed limits to prevent erratic or violent movements. A vertical pull force was also applied during initial training to help direct the early stages of the learning process.
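HoST's actual reward terms are not reproduced here, but the general pattern of combining several weighted reward groups with smoothness and effort penalties looks roughly like the Python sketch below; the term names and weights are illustrative assumptions, not values from the paper.
# Illustrative shape of a multi-group RL reward with a smoothness penalty.
# Term names and weights are invented; they only mirror the general idea.
def standing_reward(torso_height, upright_error, action, prev_action, energy_used):
    task = 1.0 if torso_height > 0.6 else torso_height / 0.6           # progress toward standing
    posture = -abs(upright_error)                                      # stay upright
    smooth = -sum((a - b) ** 2 for a, b in zip(action, prev_action))   # discourage jerky motions
    effort = -0.001 * energy_used                                      # discourage wasted energy
    weights = {"task": 2.0, "posture": 1.0, "smooth": 0.5, "effort": 1.0}
    return (weights["task"] * task + weights["posture"] * posture
            + weights["smooth"] * smooth + weights["effort"] * effort)

print(standing_reward(0.7, 0.05, [0.1, 0.2], [0.0, 0.1], 3.0))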
The HoST framework was originally trained in simulations using the Isaac Gym simulator, a physics simulation environment developed by Nvidia. Once the framework had been sufficiently trained on simulations, it was deployed into a Unitree G1 Humanoid Robot for experimental testing, the results of which are demonstrated in the video.
"Experimental results with the Unitree G1 humanoid robot demonstrate smooth, stable, and robust standing-up motions in a variety of real-world scenarios," the scientists wrote in the study. "Looking forward, this work paves the way for integrating standing-up control into existing humanoid systems, with the potential of expanding their real-world applicability."
Getting up might seem second nature to us humans, but it's something that humanoid robots have struggled to replicate in the past, as you can glean from a montage of robots falling over and being unable to return to an upright position. Teaching a robot to walk or run like a human being is one thing, but to be useful in the real world, they need to be able to handle challenging situations like stumbling, tripping and falling over.
Scientists with the company Colossal have created genetically engineered "woolly mice" with thick, golden-brown hair and fat deposits similar to those of cold-adapted woolly mammoths.
The Colossal "woolly mouse" has fur similar to the thick hair that kept woolly mammoths warm during the last ice age.
(Image credit: Colossal)
Scientists have created genetically engineered "woolly mice" with fur similar to the thick hair that kept woolly mammoths warm during the last ice age.
The biotechnology company Colossal Biosciences unveiled images and footage of the woolly mice on Tuesday (March 4). The adorable rodents mark a milestone in Colossal's project to bring back woolly mammoths by 2028, the company said in a statement shared with Live Science.
"We actually just started this work in mice in September [2024]," Ben Lamm, Colossal's co-founder and CEO, told Live Science. "We didn't know they were going to be this cute."
Colossal scientists plan to eventually "resurrect" woolly mammoths (Mammuthus primigenius) by first editing cells from the mammoths' closest living relatives, Asian elephants (Elephas maximus), to create elephant-mammoth hybrid embryos with shaggy hair and other woolly mammoth traits. But before the researchers can start working with elephants, they must test the relevant gene edits and engineering tools in mice, which are easier to keep and quicker to breed.
Colossal Create “Woolly Mouse” On Path To De-Extinct The Woolly Mammoth
"A mouse model is super useful in this case, because unlike elephants [whose gestation lasts about 22 months], mice have a 20-day gestation," Beth Shapiro, an evolutionary biologist and chief science officer at Colossal, told Live Science.
The short gestation period enabled researchers to design, clone and grow the woolly mice in just six months, Lamm and Shapiro said. Colossal scientists described the results in a study that was uploaded to the preprint database bioRxiv on March 4. The study has not been peer reviewed.
Fluffy rodents
To create the woolly mice, the researchers modified seven of the rodents' genes, six of which were related to fur texture, length and color. The scientists selected these genes by screening for DNA sequences that control hair growth in mice and have evolutionary links to sequences that gave woolly mammoths shaggy hair.
"We haven't taken mammoth genes and put them into a mouse," Shapiro said. "We've looked for the mouse variants of the genes that we think are useful in mammoths and then created mice that have many of these edits simultaneously."
Most of the edits "switched off" genes that are usually active in mice. For example, the scientists blocked a gene called FGF-5 that regulates hair length, resulting in mice with fur three times longer than that of standard laboratory mice.
Woolly mice have longer, wavier and thicker hair than standard mice. (Image credit: Colossal)
The team also gave the mice mutations that existed in woolly mammoths, resulting in wavier fur than normal mice. Woolly mammoths had a truncated version of a gene called TGF alpha, as well as a mutation in the keratin gene KRT27, which the scientists incorporated into woolly mouse DNA.
The researchers used three genetic engineering techniques to add the edits into a single organism, including a technology called multiplex precision genome editing, which enables researchers to edit several DNA sites at once with high precision.
"It's definitely a proof of concept that you can incorporate multiple mutations into a single mouse and make its hair look like mammoth hair," Vincent Lynch, an evolutionary biologist and associate professor at the University at Buffalo who is not involved in the Colossal research, told Live Science.
Colossal scientists also focused on a gene that regulates fat metabolism and fatty acid absorption in mice. Woolly mammoths thrived in frigid temperatures in part thanks to fat deposits beneath their skin, so the team attempted to confer the same deposits onto mice by editing the associated DNA sequence.
Colossal will conduct experiments to test the cold tolerance of its woolly mice in the coming months. (Image credit: Colossal)
But the effects of this insertion are unclear, Lynch said. "I guess they expected the mouse to have more or less body fat," he said, adding that the physical outcomes are likely too small to observe.
It's still unclear whether the genetically modified mice can tolerate colder conditions than standard mice, but Colossal scientists say they will test this in the coming months. "We know that the edits are in there, so now we just need to test what level of cold tolerance it confers," Lamm said.
While woolly mice are a step closer to the goal of bringing woolly mammoths back, there are still significant hurdles to overcome. For example, the technology involved in engineering the woolly mice is very advanced, but it's a far cry from what will be needed to get similar results in elephants, Lynch said. Mice have naturally dense hair, but that is not the case in elephants, meaning the technical challenge will be much greater, he said.
"Elephants have fur, but the density of the hair is much less than other mammals, so even if they could make those mutations in an Asian elephant [...] it's just going to be really sparse," Lynch said. "So what you need to do, actually, is a bunch of additional genome editing to somehow find a way to increase the density of the hair."
A groundbreaking battery design could revolutionize the way we utilize radioactive waste, turning a major environmental hazard into a useful energy source. Researchers from Ohio State University have unveiled a small battery that harnesses nuclear radiation to generate electricity, opening new possibilities for power generation in extreme environments.
The Challenge of Nuclear Waste and Energy
Nuclear power plays a significant role in global energy production, providing a substantial portion of electricity with minimal carbon emissions. However, one of its biggest drawbacks is the long-term storage of radioactive waste, which can remain hazardous for thousands of years. Finding ways to repurpose this waste could offer a dual solution: reducing environmental risks while extracting valuable energy.
How the Radiation-Powered Battery Works
The team at Ohio State has created a compact device, measuring just four cubic centimeters, that converts radiation into electricity. Unlike conventional nuclear batteries, this innovation does not contain any radioactive materials within the device itself, making it safe to handle. It combines two components:
Scintillator crystals – special crystals that emit light when exposed to ionizing radiation.
Solar cells – which capture the emitted light and convert it into electricity.
By placing the battery in environments with high levels of radiation, such as nuclear waste storage sites, it can continuously generate power without the need for external fuel sources or frequent maintenance.
Experimental Results Show Promise
The prototype battery was tested using two radioactive sources commonly found in nuclear waste: cesium-137 and cobalt-60. The results showed varying power outputs:
Cesium-137: Generated 288 nanowatts of power.
Cobalt-60: Produced 1.5 microwatts, enough to power small sensors.
Though these energy levels are far from sufficient to power homes or large systems, researchers believe scaling up the technology could significantly increase output.
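To put those outputs in perspective, here is a small back-of-envelope Python sketch; the 1 mW sensor load is an assumed example, not something from the study.
# Back-of-envelope: how many prototype cells would it take to power a small sensor?
cell_outputs = {"cesium-137": 288e-9, "cobalt-60": 1.5e-6}   # watts, figures quoted above
sensor_power = 1e-3                                          # assumed 1 mW sensor node

for source, watts in cell_outputs.items():
    print(f"{source}: ~{sensor_power / watts:,.0f} cells for a 1 mW load")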
Potential Applications and Future Prospects
The immediate applications for this technology are in high-radiation environments where traditional batteries would fail. Possible use cases include:
Nuclear Waste Management – the batteries could be placed in storage pools to generate power while helping monitor waste conditions.
Space Exploration – Providing long-term energy sources for probes and landers in deep-space missions.
Deep-Sea Operations – Powering sensors in oceanic trenches where conventional energy sources are impractical.
The next phase of research will focus on increasing efficiency by optimizing the design of the scintillator crystals. Larger and more refined crystal structures could absorb more radiation and enhance power output.
While this technology remains in its early stages, researchers are optimistic about its potential. Scaling up the design to produce greater energy levels will require further investment and development, but the concept has already demonstrated its feasibility.
According to Ibrahim Oksuz, one of the study’s co-authors, the approach holds significant promise for both energy production and sensor technologies. “There is still room for improvement, but we believe this innovation could carve out a valuable niche in the energy sector.”
As efforts to manage nuclear waste continue, this battery offers a glimpse into a future where radioactive materials are not just a disposal challenge but a potential energy resource.
RELATED VIDEOS
Unlimited Power for Decades: The Rise of Micro Nuclear Batteries
From time to time, the U.S. military shows glimpses of its X-37B spaceplane, which can travel to space for years at a time.
We just got another glimpse. The U.S. Space Force — which took the reins of the Air Force's expansive military space operations in 2019 — has released an image the robotic craft captured from Earth orbit. You can see a portion of the X-37B and an outstretched panel above a view of a partially shadowed Earth.
"An X-37B onboard camera, used to ensure the health and safety of the vehicle, captures an image of Earth while conducting experiments in [highly elliptical orbit] in 2024," the Space Force posted on X. "The X-37B executed a series of first-of-kind maneuvers, called aerobraking, to safely change its orbit using minimal fuel."
This is the seventh mission of the X-37B, which orbits 150 to 500 miles above Earth to explore reusable space vehicle technologies and conduct long-term space experiments. The plane was originally built by Boeing for NASA, but the project transferred to the Defense Advanced Research Projects Agency, or DARPA, in 2004. At nearly 30 feet long, it's one-fourth the size of NASA's retired Space Shuttle.
An image of Earth captured by the U.S. Space Force's X-37B spaceplane. Credit: U.S. Space Force
The X-37B's "aerobraking" maneuver mentioned above involves using close passes by Earth's atmosphere to produce drag, ultimately allowing it to switch orbits without burning too much of its finite fuel.
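For a sense of why this saves fuel, here is a hedged back-of-envelope Python estimate of the drag felt during one low pass through the upper atmosphere; every input value is an assumption, not an official X-37B figure.
# Rough drag estimate for a single aerobraking pass; all inputs are assumptions.
rho = 2e-8         # air density near ~120 km altitude (kg/m^3), rough value
v = 7800.0         # orbital speed at perigee (m/s)
cd = 2.2           # typical drag coefficient for a blunt spacecraft
area = 10.0        # assumed frontal area (m^2)
mass = 5000.0      # assumed vehicle mass (kg)
pass_time = 300.0  # seconds spent in the densest part of the pass

drag_force = 0.5 * rho * v**2 * cd * area     # newtons
delta_v = drag_force / mass * pass_time       # velocity lost per pass (m/s)
print(f"drag force: {drag_force:.1f} N, delta-v per pass: {delta_v:.2f} m/s")
Repeated over many passes, these small velocity losses gradually reshape the orbit without spending propellant.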
"This novel and efficient series of maneuvers demonstrates the Space Force's commitment to achieving groundbreaking innovation as it conducts national security missions in space," former secretary of the Air Force Frank Kendall explained in 2024.
But the spaceplane, which most recently launched in Dec. 2023, isn't coming back home just yet. The mission is "now continuing its test and experimentation objectives," the Space Force said. After that, the craft will plummet through our planet's atmosphere and land on a runway — an event the U.S. military has released images of in the past.
The military clearly wants to promote the X-37B's successes — without revealing too much about its outer space exploits.
RELATED VIDEOS
The Most Secretive Megaproject of DARPA US X 37B Space Plane
Watch live | SpaceX Falcon Heavy to launch secretive X-37B military space plane
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
22-02-2025
The car that lets you FLY over traffic jams! Futuristic £235,000 vehicle takes flight for the first time - hopping over another vehicle on a public street in California
Fed up of being stuck in traffic jams? Soon you could fly right over them in a £235,000 electric car.
Alef Aeronautics' futuristic vehicle can be driven around like a normal car on the streets.
However, it is also packed with propellers in the bonnet and boot that allow it to take off at any time to skip the queue.
This week, the company successfully tested the flying car in a city environment for the first time.
Incredible footage shows the car driving forwards a few metres, before taking off vertically.
It then glides through the air over the car in front of it, before landing on the ground and driving off.
'This drive and flight test represents an important proof of technology in a real-world city environment,' said Jim Dukhovny, CEO of Alef.
'We hope it will be a moment similar to the Wright Brothers' Kitty Hawk video, proving to humanity that new transportation is possible.'
The test was conducted on an unidentified public street that had been closed off.
According to Alef, the video is the first in history to show a car both driving and vertically taking off.
'While previous videos exist of cars driving and using a runway to take off, videos of tethered flights, and eVTOL flying taxis taking off, this is the first publicly released video of a car driving and taking off vertically,' the company said in a statement.
While the test was carried out with a special, ultralight version of the Alef Model Zero, the Model A flying car will eventually be a two-seater with a road range of 200 miles and a flying range of 110 miles.
The carbon-fibre frame – which measures around 17ft long and 7ft wide – is designed to fit in any parking space or garage.
To drive on the road, the car uses four small engines in each of the wheels and will drive similar to a normal electric car.
This leaves space in the front and the back for eight propellers, which spin independently at different speeds to allow it to fly in any direction.
It uses a technology called distributed electric propulsion, with a mesh cover over the rotor blades allowing airflow through the vehicle.
Its cruise speed in the air is 110mph, while on the road it will be limited to between 25 and 35mph despite being able to go far faster.
This is so the vehicle – which weighs 850lb – can be classed as an ultralight 'low speed vehicle', a legal classification reserved for small electric vehicles like golf carts, to pass regulations.
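For context, a rough momentum-theory estimate of the hover power a vehicle of this weight needs is sketched below in Python; the rotor diameter and the efficiency figure are assumptions for illustration, not Alef specifications.
# Ideal hover power from momentum theory: P = sqrt(W^3 / (2 * rho * A)).
# Rotor diameter and efficiency are assumed values, not published Alef figures.
import math

mass_kg = 850 * 0.4536            # 850 lb vehicle weight quoted above
weight_n = mass_kg * 9.81
rho = 1.225                       # sea-level air density (kg/m^3)
rotor_diameter = 0.9              # assumed diameter per rotor (m)
total_area = 8 * math.pi * (rotor_diameter / 2) ** 2   # eight rotors, as described

p_ideal = math.sqrt(weight_n ** 3 / (2 * rho * total_area))
p_real = p_ideal / 0.7            # assume roughly 70% figure of merit
print(f"ideal hover power: {p_ideal/1000:.0f} kW, with losses: ~{p_real/1000:.0f} kW")
Even at this crude level, the result helps explain why the flying range (110 miles) is so much shorter than the road range (200 miles): hovering and vertical take-off are far more power-hungry than rolling on wheels.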
Mr Dukhovny claims the car, which is aimed at the general public, is relatively simple to use and would take just 15 minutes to learn.
The controls while in the air are similar to those used to fly a consumer drone.
The Model A is different to most of the so-called flying cars being designed today because it actually functions as a car, he said, whereas others on the market tend to be eVTOLS, which are essentially electric helicopters that can only fly.
Alef's founders began working on the concept in 2015 – coincidentally the same year Marty McFly travels to in the second instalment of the Back to the Future trilogy.
The Model A is currently on pre-order for £235,000 – around the same as the finest Rolls Royce, Bentleys and Aston Martins – but the company is aiming to sell them far cheaper in the future.
Mr Dukhovny said he wanted to bring sci-fi to life and build an 'affordable' flying car, with the cost likely to be closer to £25,000 when built at scale.
RELATED VIDEOS
The flying car completes first ever inter-city flight (Official Video)
Alef Model A Flying Car - The Street Legal eVTOL Costs Only $300k
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
15-02-2025
10 Things You Should Know About Japan’s Fugaku Hybrid Quantum Supercomputer
Fugaku is already one of the most advanced supercomputers in existence, but it recently became even more powerful. Scientists have now integrated it with a quantum computer called Reimei, making it a hybrid quantum-classical system.
Supercomputers might not be something you think about every day, but they’re behind some of the biggest breakthroughs in science and technology. From tackling climate change to finding new medicines, these machines do the kind of heavy lifting that even the best regular computers could never handle. And when it comes to powerful supercomputers, Fugaku is one of the best ever built.
Developed in Japan, Fugaku has helped scientists solve problems that would have taken years with traditional computing. But what makes it so special? Here are 10 key facts about this technological powerhouse.
1. It Was the World’s Fastest Supercomputer
For two years, Fugaku held the title of the world’s fastest supercomputer, clocking in at 442 quadrillion calculations per second. That’s an insane amount of computing power—so much so that it was nearly three times faster than the previous record holder. While newer machines have since taken the top spot, Fugaku remains one of the most powerful computers on the planet.
2. It Was Built in Japan by Riken and Fujitsu
Fugaku is the product of a collaboration between Riken, one of Japan’s leading research institutes, and Fujitsu, a major technology company. It’s located in Kobe, Japan, and plays a crucial role in scientific research—not just in Japan, but globally.
3. It Uses ARM-Based Chips Instead of Traditional Processors
Most supercomputers rely on Intel or AMD processors, but Fugaku is different. It runs on Fujitsu A64FX ARM-based chips, which makes it more energy-efficient and incredibly fast at handling complex data. It was the first ARM-powered supercomputer to reach number one in global rankings, proving that ARM chips aren’t just for smartphones.
4. It Helps Solve Real-World Problems
Supercomputers aren’t just for theoretical science—they’re used to solve real challenges. Fugaku has been involved in climate modeling, earthquake prediction, medical research, AI development, and even space exploration. Its ability to process massive amounts of data quickly makes it an essential tool for researchers across different fields.
5. It Played a Major Role in COVID-19 Research
During the pandemic, Fugaku was used to study how respiratory droplets spread in indoor spaces, helping researchers develop better social distancing guidelines. It also helped scientists analyze potential drug treatments for COVID-19, accelerating the search for effective therapies.
6. It Has Over 7 Million CPU Cores
Most high-end gaming PCs today have 8 to 32 processor cores. Fugaku? It has more than 7.6 million cores spread across 158,976 computing nodes. That’s an almost unimaginable amount of processing power, making it one of the most advanced computing systems ever created.
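A quick sanity check of what those headline figures imply per node and per core, using simple division of the numbers quoted in this article:
# What Fugaku's headline figures imply per node and per core.
peak_flops = 442e15        # 442 quadrillion calculations per second
cores = 7.6e6              # "more than 7.6 million cores"
nodes = 158_976

print(f"~{peak_flops / nodes / 1e12:.1f} TFLOPS per node")
print(f"~{peak_flops / cores / 1e9:.0f} GFLOPS per core")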
7. It’s Being Used to Predict Earthquakes and Tsunamis
Japan is one of the most earthquake-prone countries in the world, and Fugaku is helping scientists better understand these natural disasters. By running advanced simulations, it helps researchers improve early warning systems and predict the potential impact of major earthquakes and tsunamis—knowledge that could save countless lives.
8. It’s Now a Hybrid Quantum Supercomputer
Fugaku is already one of the most advanced supercomputers in existence, but it recently became even more powerful. Scientists have now integrated it with a quantum computer called Reimei, making it a hybrid quantum-classical system. This means it can handle even more complex calculations by combining traditional computing power with the advantages of quantum technology.
9. It’s Designed to Be Energy-Efficient
With all that power, you might assume Fugaku is an energy-hungry machine. But thanks to its ARM-based architecture, it’s actually one of the most energy-efficient supercomputers ever built. It delivers extreme performance without consuming as much power as other machines of its size, making it a leader in sustainable high-performance computing.
10. It’s Paving the Way for the Next Generation of Supercomputers
Fugaku is just the beginning. Japan is already working on its next-generation exascale supercomputer, which will be at least 1,000 times faster than today’s most powerful systems. Once completed, it will push the boundaries of what’s possible in scientific research, artificial intelligence, and beyond.
Supercomputers like Fugaku are changing the world. Scientists use them to help fight disease, predict disasters and develop new technology, so a great deal of innovation and research is in fact powered by machines we rarely think about.
RELATED VIDEOS
World’s Fastest Supercomputer – Fugaku
Take a tour of the supercomputer Fugaku
"Computing for the Future at R-CCS: AI for Science, Quantum-HPC Hybrid, and Fugaku NEXT" ②
Category: SF-snufjes, Robotics and A.I. / Artificial Intelligence (E, F and NL)
China Has Launched New Generation Transport System SHOCKING the US and the World
China is rapidly emerging as a global leader in engine innovation, showcasing advances that push the boundaries of what is possible. Its technology is developing at a breakneck pace, challenging the US to keep up with this rapid revolution. Having consistently delivered futuristic innovations, China is now extending that dominance to the engine department, and it is time for the market to meet these new-generation engines, which combine beauty and quality.
China Has Launched New Generation Transport SHOCKING The US