Machine learning on classical computers has made great progress in the past five years; computer translation of speech and text is just one example. In Leiden, some researchers expect that machine learning, empowered by quantum systems, even ones containing only a few dozen qubits, can give this and other types of artificial intelligence a big boost.
Machine learning is nowadays often done by neural networks: computers that, to some extent, mimic how our brains work. Data – the pixels of an image, for instance – go in at the front end and get processed by a first layer of virtual neurons – thousands of them – which pass the information on to the next layer, and so on. The last layer outputs the answer: ‘this is a picture of a dog’. The intelligence of the system lies entirely in the ‘weights’, the adjustable strengths of the data connections between neurons in successive layers. These weights get adjusted by training the neural network on a large number of pictures of dogs and other subjects.
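The layered network described above can be sketched in a few lines of code. Everything here – the toy data, the layer sizes, the learning rate – is an illustrative assumption, not something from the article; it only shows how data flows through layers and how the weights get adjusted during training.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: classify 2-D points (stand-ins for image pixels)
# into two classes ("dog" vs. "not dog").
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# The adjustable weights: input -> hidden layer, hidden -> output.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

lr = 1.0
for step in range(1000):
    # Forward pass: each layer passes its output to the next.
    h = sigmoid(X @ W1)      # hidden-layer activations
    out = sigmoid(h @ W2)    # the network's answer
    loss = np.mean((out - y) ** 2)

    # Training step: nudge the weights to reduce the error.
    d_out = 2 * (out - y) * out * (1 - out) / len(X)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)
```

After training, the network classifies points it has never seen before – the weights, not any explicit formula, encode what it has learned.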
How would you simulate a network of thousands of neurons on today’s quantum systems? It seems impossible, because today, the largest functioning quantum system, Google’s Sycamore, has just 53 qubits. But according to Vedran Dunjko, assistant professor at the Leiden Institute of Advanced Computer Science (LIACS), even a few dozen qubits should be able to perform amazing feats of machine learning.
Dunjko’s research extends to Noisy Intermediate-Scale Quantum (NISQ) computing: computing with quantum systems that are too small, and whose qubits are too noisy, to make up a real quantum computer. A full-fledged quantum computer must be able to perform exact, error-free calculations. While in a perfect world even a few hundred qubits would be enough to outperform all existing classical computers, in the real world far more qubits are needed – perhaps hundreds of thousands – to compensate for the noise that infiltrates the system.
Dunjko is part of a group of physicists, chemists, computer scientists and mathematicians at Leiden called <aQa>. Together, they hope to exploit the noise resilience of quantum machine learning.
Dunjko: ‘We have quantum computation systems we can control, but only to a certain degree. And this offers a chance, because in a sense, machine learning is designed to extract signal from noise.’
In the example above, a neural network learns ‘the idea of a dog’ from a large number of noisy pictures, which enables it to correctly identify a dog in a picture it has never seen before. But the network does not have a formula for a dog, nor does any exact calculation take place.
According to Dunjko, quantum researchers, including himself, have not yet thought deeply enough about this essential difference: ‘Most of quantum computation research is about making it faster than classical computing. Quantum machine learning can be completely different.’ Even the behaviour of a small quantum circuit of entangled qubits with feedback loops is so complicated that it cannot be simulated on a classical computer. ‘These quantum circuits will be able to learn entirely new things,’ Dunjko expects. Interestingly, there are similarities between this and advances in quantum chemistry explored by Tomas O’Brien and other colleagues in <aQa>.
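The article does not give a concrete circuit, but the core idea behind NISQ-style quantum machine learning – a small circuit with tunable rotation ‘weights’ and an entangling gate – can be sketched by simulating two qubits classically. (That is only possible at this toy scale; as Dunjko notes, larger circuits escape classical simulation.) The gate choices and the two-qubit size here are illustrative assumptions.

```python
import numpy as np

def ry(theta):
    # Single-qubit rotation: the tunable "weight" of the circuit.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT gate (control = qubit 0): entangles the two qubits.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def circuit(thetas):
    # Start both qubits in |0>, apply parameterised rotations,
    # then entangle. Basis order: |00>, |01>, |10>, |11>.
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(ry(thetas[0]), ry(thetas[1])) @ state
    return CNOT @ state

def expectation_z0(state):
    # Measured output: <Z> on qubit 0 = P(q0=0) - P(q0=1).
    probs = np.abs(state) ** 2
    return (probs[0] + probs[1]) - (probs[2] + probs[3])
```

Sweeping the rotation angles changes the measured expectation value smoothly, just as adjusting a neural network’s weights changes its output – which is what makes such circuits trainable. At `thetas = [pi/2, 0]` the circuit prepares an entangled Bell state, something with no classical-network analogue.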
And while a first universal quantum computer might take another ten years to build, Dunjko expects concrete results from NISQ within three to five years: ‘It will not have an economic impact in three years, but if it doesn’t happen in 15 years, we probably did something wrong.’