Early on in D-Wave’s history, the company made bold claims about its quantum annealer outperforming algorithms run on traditional CPUs. Those claims turned out to be premature, as improvements to these algorithms pulled the traditional hardware back in front. Since then, the company has been far more circumspect about its performance claims, even as it brought out newer generations of hardware.
But in the run-up to the latest hardware, the company apparently became a bit more interested in performance again. And it recently got together with Google scientists to demonstrate a significant boost in performance compared to a classical algorithm, with the gap growing as the problem became more complex—although the company’s scientists were very upfront about the prospect that someone may yet find a way to boost the classical hardware’s performance further. Still, there are a lot of caveats even beyond that, so it’s worth taking a detailed look at what the company did.
Magnets, how do they flip?
D-Wave’s system is based on a large collection of quantum devices that are connected to some of their neighbors. Each device can have its state set separately, and the devices are then given the chance to influence their neighbors as the system moves through different states and individual devices change their behavior. These transitions are the equivalent of performing operations. And because of the quantum nature of these devices, the hardware seems to be able to “tunnel” to new states, even if the only route between them involves high-energy states that would be inaccessible to a classical system.
In the end, if the system is operated properly, the final state of the devices can be read out as an answer to the calculation performed by the operations. And because of the quantum effects, it can potentially provide solutions that a classical computer might find difficult to reach.
Validating that idea, however, has proven challenging, as noted above. Where the system has done best is in modeling quantum systems that look a lot like the quantum annealing hardware itself. And that’s what the D-Wave/Google team has done here. The problem can be described as an array of quantum magnets, with the orientation of each magnet influencing that of its neighbors. The system is in the lowest energy state when all of a magnet’s neighbors have the opposite orientation. Depending on the precise configuration of the array, however, that might not be possible to satisfy.
Now, imagine that you start the system in a configuration where the magnets aren’t in a stable state—there are too many cases where neighboring magnets have the same orientation. Magnets will start flipping to lower the system’s energy, but in the process, they may cause their neighbors to flip. The whole thing may work through a variety of intermediate configurations to make its way toward stability. Because of the quantum nature of the device’s components, the progression through different states may involve some steps that are, to our non-quantum brains, difficult to understand.
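The “might not be possible to satisfy” case—physicists call it frustration—is easy to see in a toy model. The sketch below (an illustrative three-magnet example, not D-Wave’s actual hardware layout) enumerates every configuration of three magnets that all want to point opposite their neighbors, and shows that at least one pair always ends up matching:

```python
import itertools

# Toy antiferromagnetic model: three magnets, each coupled to the other two.
# A bond contributes +1 to the energy when its two magnets point the same way
# and -1 when they point opposite ways, so if every bond could be satisfied
# at once, the minimum energy would be -3.
bonds = [(0, 1), (1, 2), (0, 2)]

def energy(spins):
    # Sum s_i * s_j over all bonds (spins are +1 or -1).
    return sum(spins[i] * spins[j] for i, j in bonds)

# Check all 2^3 configurations. Frustration means the best any of them can
# do is satisfy two of the three bonds, for an energy of -1 -- never -3.
best = min(energy(s) for s in itertools.product([-1, 1], repeat=3))
print(best)  # -1: at least one bond is always left unsatisfied
```

Scale that triangle up to thousands of coupled devices and you get the kind of rugged energy landscape, with many competing near-stable states, that the D-Wave hardware is built to explore.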
Quantum Monte Carlo
This system is interesting for a couple of reasons: it’s an approachable way to examine complicated quantum behaviors, and other interesting problems can be mapped onto its behavior. So researchers have figured out how to look at its behavior using computer algorithms. The one the research team says shows the highest performance is what’s called Path-Integral Monte Carlo. “Path-integral” simply indicates that there are multiple valid paths between a starting state and a low-energy state, and the software looks at a subset of them, since there are so many. “Monte Carlo” is an indication that the paths it does sample are chosen randomly.
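To make the “Monte Carlo” half of that name concrete, here’s a minimal sketch of random-sampling relaxation on a small chain of magnets. This is ordinary classical Metropolis sampling, not the path-integral variant the researchers benchmarked, and the sizes and temperature are arbitrary assumptions—but it shows the core idea: propose random flips, accept them probabilistically, and let the system wander toward low energy.

```python
import math
import random

# Classical Monte Carlo on a 1-D antiferromagnetic chain (a simplification
# of the frustrated 2-D arrays in the paper). Each step proposes flipping
# one randomly chosen magnet and accepts the flip with the Metropolis rule.
random.seed(0)
N, J, T = 16, 1.0, 0.5  # number of magnets, coupling, temperature (assumed)
spins = [random.choice([-1, 1]) for _ in range(N)]

def delta_e(i):
    # Energy change from flipping magnet i (open chain, antiferromagnetic J).
    left = spins[i - 1] if i > 0 else 0
    right = spins[i + 1] if i < N - 1 else 0
    return -2 * J * spins[i] * (left + right)

for step in range(20000):
    i = random.randrange(N)
    dE = delta_e(i)
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]

# At low temperature, the chain should settle near the alternating ground
# state, leaving few neighboring pairs with matching orientation.
unsatisfied = sum(spins[i] == spins[i + 1] for i in range(N - 1))
print(unsatisfied)
```

The classical cost comes from needing many such random samples per configuration; the D-Wave claim is essentially that its hardware performs the equivalent exploration natively.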
But the D-Wave system itself looks a lot like an array of quantum magnets, so it can be configured to behave much like the system being modeled. Set up properly, the D-Wave machine could potentially recapitulate that behavior very efficiently.
This is what the team tried for the paper, but it found out there was a little problem. With the traditional computing algorithm, it’s easy to essentially stop the system and look at how it’s evolving. With the D-Wave system, things moved so quickly that it ended up carrying on to the final state before it could be sampled. Instead, the researchers had to arrange some fairly tortured configurations to slow the D-Wave hardware down long enough to have a look at what was going on.
The performance measurement the team cared about isn’t the final state; instead, it’s how long a given configuration of magnets takes to reach a stable, equilibrium state.
For generating this measure, the researchers found that the D-Wave hardware could outperform the x86 CPU they were using (a hyperthreading Xeon with 26 cores). And the advantage grew larger as the research team increased the complexity of the magnets’ arrangement, reaching up to 3 million times faster. And while the entire D-Wave system didn’t behave as a single quantum object, there were quantum interactions that were larger than the smallest groups of magnets in the D-Wave hardware (linked groups of four).
To start with, the gap in performance is between a single Xeon and a chip that requires a cabinet-sized cooling system with some pretty hefty energy use. Should the classical algorithm scale with additional processors, it would be relatively simple to run it on a cluster and take a big chunk out of D-Wave’s speed advantage. But Ars’ own Chris Lee notes that even on the simpler problems, the 26-core Xeon was already struggling with any increase in complexity. This might be a sign that there are only limited gains we can expect from throwing more processors at the issue.
That said, D-Wave was also not operating at its full advantage. While it recently introduced a new generation of processors, the work was done on an experimental processor that was part of the development of the new generation. This had the same hardware layout—same number and connections among the quantum devices—as the previous generation of hardware. But it was made with a new manufacturing process that lowered the noise in the system and was put into full use in the latest generation of chips.
In addition, the new generation more than doubles the quantum devices on the chip and boosts the connectivity among them. These advances should allow the system to model larger and more complicated magnet arrays, potentially extending D-Wave’s advantage further.
Finally, the team behind the work emphasizes that there may be ways to optimize the performance of the classical algorithm as well, saying, “Our study does not constitute a demonstration of superiority over all possible classical methods.” How this all shakes out will undoubtedly come with additional work, so we may not have an update on where performance stands for a couple of years.
Still, it’s interesting that D-Wave has become so interested in performance again. The company recently announced that it had adapted its control software so that a specific type of operation (a quadratic unconstrained binary optimization) could be both used by a D-Wave machine and sent to the Qiskit software package that would allow it to run on IBM’s quantum computers. This makes sense for the company’s user base; a large percentage of the base is made up of companies that are simply trying to make sure they’re ready for any disruptive computing technologies, so they are looking at all the quantum hardware on the market. But in the press release announcing the data, the company says this “opens the door to performance comparisons.”
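For readers unfamiliar with the term, a quadratic unconstrained binary optimization (QUBO) problem asks for the binary vector x that minimizes xᵀQx for some coefficient matrix Q. The sketch below uses a small hypothetical instance solved by brute force; real D-Wave or Qiskit workflows build the same kind of coefficient matrix but hand it off to quantum hardware rather than enumerating every answer.

```python
import itertools

# Hypothetical 3-variable QUBO: minimize sum of Q[i,j] * x[i] * x[j]
# over binary vectors x. Diagonal entries act as linear terms (x*x == x);
# off-diagonal entries penalize turning certain pairs on together.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -2.0,  # linear rewards for x_i = 1
    (0, 1): 2.0, (1, 2): 2.0,                  # penalties on adjacent pairs
}

def qubo_energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2^3 binary vectors -- feasible here, but the search
# space doubles with every variable, which is where annealers come in.
best = min(itertools.product([0, 1], repeat=3), key=qubo_energy)
print(best, qubo_energy(best))  # (1, 0, 1) with energy -3.0
```

Because the problem statement is just a coefficient matrix, the same QUBO can be dispatched to a D-Wave annealer or translated for gate-based machines via Qiskit, which is what makes the cross-platform comparison the company describes possible.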