Quantum computing: a new tool ensures that qubits do not lie (via Qpute.com)



The promise of quantum computers is that they could solve problems that would take classical computers thousands of years to crack, but that prospect raises a practical question: who, or rather what, can check the answers provided by qubits?

With this problem in mind, researchers at the University of Warwick began thinking of ways to verify the results of quantum computations, and they have just published their findings in the form of a “verification protocol.”


The new tool addresses the root cause of error in quantum computing: noise. Caused by random factors such as manufacturing defects or temperature fluctuations, noise is the nemesis of any quantum physicist, because it is the reason quantum computers are so error-prone.


By verifying that the noise affecting a computer is below a certain threshold, scientists can be confident that its calculations are being carried out accurately.

The new protocol developed at Warwick proposes to do exactly that. To understand how it works, explained the paper's lead author, Samuele Ferracin, imagine a quantum computation as a complicated circuit made of gates, wires, measurements, and so on.

The tool he developed with his team generates several alternative versions of a given circuit, which resemble the original computation but can all be simulated on a classical computer.

In other words, the protocol creates easier computations, called “trap circuits,” that nevertheless reflect the noise occurring within the original quantum circuit.

Classical computers can therefore establish the accuracy of the results generated by the trap circuits, giving researchers a basis for determining how accurately the quantum computer will solve the “harder” computation given to it.
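As a rough illustration of this idea, the toy Python sketch below models each trap circuit as having a classically precomputed correct output, with hardware noise occasionally corrupting a run; counting mismatches over many traps then estimates the noise level. The function names and the single-bit noise model are hypothetical simplifications for illustration, not the Warwick protocol itself.

```python
import random

def run_trap_circuit(correct_output: int, noise_rate: float) -> int:
    """Toy model of one trap-circuit run: the hardware returns the
    known-correct output, except that noise flips it with some probability."""
    if random.random() < noise_rate:
        return 1 - correct_output  # noise corrupted this run
    return correct_output

def estimate_noise(num_traps: int, noise_rate: float, seed: int = 0) -> float:
    """Run many traps whose correct outputs a classical computer can
    precompute, and return the observed fraction of wrong answers."""
    random.seed(seed)
    errors = 0
    for _ in range(num_traps):
        correct = random.randint(0, 1)            # classically simulable answer
        observed = run_trap_circuit(correct, noise_rate)
        if observed != correct:
            errors += 1
    return errors / num_traps

print(estimate_noise(num_traps=10_000, noise_rate=0.03))  # close to 0.03
```

With enough traps, the observed error fraction converges on the true noise rate, which is the quantity the verifier needs to bound.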

“By hiding the larger computation behind several smaller circuits,” Ferracin said, “we can verify things that we cannot simulate on a classical computer.”

The classical computers flag the result of each alternative computation as “correct” or “potentially incorrect,” which gives researchers an indication of where the quantum computer sits on the noise scale.

The test even produces two percentages to refine the verification: how close the quantum computer is believed to be to the right result, and how confident a user can be in that estimate.
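As a hedged sketch of how two such figures might be derived, the Python fragment below treats the observed pass rate of the trap circuits as the “closeness” estimate and uses a standard Hoeffding concentration bound for the confidence figure. The actual statistical bounds in the Warwick paper are more involved; every name and parameter here is illustrative.

```python
import math

def verification_figures(passed: int, total: int, slack: float = 0.02):
    """From trap-circuit outcomes, derive two figures: an estimate of how
    close the machine is to the right result (the observed pass rate), and
    how confident a user can be that the true pass rate lies within `slack`
    of that estimate (via a two-sided Hoeffding bound)."""
    closeness = passed / total
    confidence = 1 - 2 * math.exp(-2 * total * slack ** 2)
    return closeness, max(confidence, 0.0)

closeness, confidence = verification_figures(passed=9_700, total=10_000)
print(f"closeness = {closeness:.0%}, confidence > {confidence:.1%}")
```

The design point is that both numbers improve as more trap circuits are run: the estimate tightens and the confidence grows, without ever needing to simulate the full quantum computation classically.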

Using trap circuits to verify the accuracy of a quantum computation is not an innovation in itself, Ferracin noted. Simulating smaller operations on classical computers is already a common approach, but the method has its limits.

Because of the complexity of quantum computations, current simulations can only be performed by creating traps that are larger than the original circuit.

Verification is therefore feasible for small quantum circuits but does not scale much further. By contrast, the traps in the new protocol require no more qubits or gates than the original circuit being tested.

“The circuits implemented in our protocol are not larger than the circuit whose output we want to verify,” Ferracin said. “This had not been done before, and it means that the test is practical and scalable.”


He estimated that the test would take only a few minutes to run on Google's 53-qubit quantum computer, for example.

The technology giant claims that its hardware reached quantum supremacy by taking 20 minutes to perform a calculation that would have taken classical computers 10,000 years to complete.

The new test, however, remains just a protocol for now. Ferracin said the research team is working with experimentalists to see how the tool performs in practice and to keep improving it.


This is a syndicated post. Read the original post at Source link.