How babies can teach AI to understand classical and quantum physics (via Qpute.com)

How babies can teach AI to understand classical and quantum physics

A team of MIT researchers has recently harnessed the incredible potential of the human brain to develop an AI model that understands intuitive physics the way some humans do. And by some humans, we mean three-month-old babies.

This may not seem like much, but by the age of three months, an infant has a basic understanding of how physical objects behave. They grasp concepts such as solidity and permanence (objects do not usually pass through one another, and do not vanish), and they can predict movement. To study this, researchers show infants videos of objects that behave as they should, for example passing behind an occluder and re-emerging, alongside others that apparently violate the laws of physics.

Scientists have learned that babies register varying levels of surprise when objects do not behave as they should.

Kevin Smith, a researcher at MIT, said:

By the age of three months, infants have the notion that objects do not wink in and out of existence, and cannot teleport or move through each other. We wanted to capture and formalize that knowledge to build infant cognition into artificial intelligence agents. We are now getting near human-like in the way models can pick apart basic plausible and implausible scenes.

https://www.youtube.com/watch?v=95HlF9nCca4

The main idea for the MIT team was to train AIs to recognize whether a physical event should be considered surprising or not, and then to express that surprise in their outputs. From the MIT press release:

Coarse object descriptions are fed into a physics engine, software that simulates the behavior of physical systems, such as rigid or fluid bodies, and is commonly used for films, video games, and computer graphics. The researchers' physics engine "pushes the objects forward in time" (in the words of co-author Tomer Ullman). This creates a range of predictions, or a "belief distribution," for what will happen to those objects in the next frame.
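The idea of a belief distribution can be sketched as follows: perturb the current object state with a little noise, push each noisy copy one frame forward through a (here greatly simplified) physics step, and collect the resulting predictions. This is a minimal illustration, not the MIT team's actual engine; the names `step_physics` and `belief_distribution` and the ballistic-motion simplification are invented for this sketch.

```python
import random

def step_physics(state, dt=1 / 30):
    """Advance one object by one frame under simple ballistic motion.
    A toy stand-in for a full physics engine (hypothetical simplification)."""
    x, y, vx, vy = state
    g = -9.8  # gravitational acceleration, m/s^2
    return (x + vx * dt, y + vy * dt, vx, vy + g * dt)

def belief_distribution(observed_state, n_samples=100, noise=0.05):
    """Push noisy copies of the observed state forward in time, producing
    a set of predictions (a "belief distribution") for the next frame."""
    beliefs = []
    for _ in range(n_samples):
        jittered = tuple(v + random.gauss(0.0, noise) for v in observed_state)
        beliefs.append(step_physics(jittered))
    return beliefs

# State is (x, y, vx, vy): an object at height 1 m moving right at 2 m/s.
beliefs = belief_distribution((0.0, 1.0, 2.0, 0.0))
```

Because each sample starts from a slightly different jittered state, the predictions spread out, which is what lets the model express graded expectations rather than a single point forecast.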

Then, the model observes the actual next frame. Once again, it captures the object representations, which it then aligns with the predicted object representations from its belief distribution. If the object obeyed the laws of physics, there will not be much mismatch between the two representations. On the other hand, if the object did something implausible, say it disappeared behind a wall, there will be a major mismatch.
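The matching step above can be sketched as a simple surprise score: the distance from the observed next-frame state to the closest prediction in the belief distribution. This is an illustrative sketch under a strong simplification (Euclidean distance on raw state vectors, one-dimensional states); the `surprise` function is invented here, not the paper's actual metric.

```python
import math

def surprise(observed, beliefs):
    """Score surprise as the distance from the observed object state to the
    closest prediction in the belief distribution: small when physics was
    obeyed, large when the object did something implausible."""
    def dist(a, b):
        return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    return min(dist(observed, b) for b in beliefs)

# Toy belief distribution predicting the object's x position near 1.0.
beliefs = [(1.0 + 0.01 * i,) for i in range(10)]
plausible = surprise((1.05,), beliefs)     # object roughly where predicted
implausible = surprise((100.0,), beliefs)  # object appears to have "teleported"
```

A plausible observation lands inside the cloud of predictions and scores near zero, while an implausible one sits far from every belief and scores high, mirroring the graded surprise reported in infants.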

Classical physics is difficult. The myriad predictions and calculations needed to determine what will happen in a sequence of events are incredibly complex, and demand enormous amounts of computation from non-AI systems. Unfortunately, even AI systems are beginning to show diminishing returns under conventional computing paradigms. To move forward, we will likely have to abandon the current method of brute-forcing data through a black box of hundreds, if not thousands, of parallel processing units, then tuning the network of artificial neurons and harvesting whatever useful outputs emerge.

Some experts believe we need a quantum solution, one capable of, in effect, time travel, or of arriving at several outputs at once and then settling on a response autonomously, much like the human brain. This puts us in a bit of a catch-22, because our understanding of the human brain, artificial neural networks, and quantum physics all remains incomplete. The hope is that ongoing research in all three areas will act as a rising tide that lifts all ships.

For the moment, scientists hope that artificial curiosity and the codification of “surprise” will help bridge the gap between the human brain and artificial neural networks. Ultimately, this new exploration-based learning method could be combined with quantum computing technology to create the basis for “thinking” machines.

We may have a long way to go before that happens, but the current research is a first baby step toward human-level AI. To dig deeper into the MIT team's work, see their conference paper here.

This is a syndicated post. Read the original post at the source link.