Delivering quantum-like capabilities to the desktop to overcome combinatorial optimisation problems



This is a contributed article by Kosuke Tatsumura, Chief Research Scientist, Toshiba.

Speed and scale are key to success in business today, yet combinatorial optimisation problems can often prove a stumbling block for organisations trying to achieve such operational efficiency. These problems are economically valuable to solve but computationally hard, and they arise across a number of different sectors.

Even the brute force of supercomputers is impractical when each added variable increases the complexity of a problem exponentially. As the scale of an issue grows, the total number of possible combinations grows exponentially, and determining optimal combinations – the ones with maximum intended effect – becomes increasingly difficult. This combinatorial explosion is what makes these problems notoriously hard to solve.

A popular example that makes the scale of these challenges easier to understand is the “Traveling Salesman Problem”. The task is to determine the shortest possible route for a salesman who needs to visit multiple cities, each only once, before returning home. Even with just 10 cities, there are 181,440 possible routes to consider. With 50 cities, the number of routes exceeds 10^62, putting exhaustive search beyond the reach of regular computers. Beyond supply chain management, such calculations impact organisations in a myriad of ways today, from molecular modelling to risk assessment. Take trading, for example, where industry innovations like cryptocurrencies are rapidly making calculations ever more complicated.
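For readers who want to check those figures: fixing the starting city, and counting each route and its reverse only once, leaves (n − 1)!/2 distinct routes. A short Python sketch (the helper name is ours, for illustration only):

```python
from math import factorial

def tsp_route_count(n_cities: int) -> int:
    """Number of distinct round-trip routes visiting each city once.

    Fixing the starting city and treating a route and its reverse as
    identical leaves (n - 1)! / 2 possibilities.
    """
    return factorial(n_cities - 1) // 2

print(tsp_route_count(10))  # 181440, as quoted above
print(tsp_route_count(50))  # roughly 3 x 10^62
```

Checking a candidate route takes microseconds; it is the sheer number of candidates that defeats brute force.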

The role of Ising machines

Ising machines have to date played a crucial role in overcoming this issue. These specialised computers first convert combinatorial optimisation problems into ground-state search problems of the Ising spin model, a mathematical model describing the up-and-down spins of magnetic materials interacting with each other. The spins can then be used to represent a combinatorial problem, and finding the optimal solution becomes equivalent to finding the ground-state spin configuration of the model. The scalability of an Ising machine, as well as its speed, determines its practical applicability.
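To make the ground-state search concrete, here is a brute-force sketch over a toy three-spin instance. The energy function E(s) = −Σ J_ij s_i s_j − Σ h_i s_i is the standard Ising form; the coupling values below are purely illustrative:

```python
from itertools import product

def ising_energy(spins, J, h):
    """E(s) = -sum_{i<j} J[i][j]*s_i*s_j - sum_i h[i]*s_i."""
    n = len(spins)
    e = -sum(h[i] * spins[i] for i in range(n))
    e -= sum(J[i][j] * spins[i] * spins[j]
             for i in range(n) for j in range(i + 1, n))
    return e

# Toy 3-spin instance (couplings chosen for illustration only);
# J is stored as an upper-triangular matrix, h is the external field.
J = [[0, 1, -1],
     [0, 0, 1],
     [0, 0, 0]]
h = [0, 0, 0]

# Exhaustive ground-state search: feasible only for tiny n,
# since the search space doubles with every added spin.
ground = min(product((-1, 1), repeat=3), key=lambda s: ising_energy(s, J, h))
print(ground, ising_energy(ground, J, h))
```

An Ising machine's job is to find such a minimum-energy configuration without enumerating all 2^n candidates.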

Ising machines have been developed using a variety of different approaches. These include quantum computers based on quantum annealing or quantum adiabatic optimisation, coherent Ising machines (CIMs) implemented with laser pulses, and Ising machines based on simulated annealing implemented with digital circuits such as field-programmable gate arrays (FPGAs).

However, these machines often come with their own limitations – requiring special hardware or offering limited scalability and connectivity. Furthermore, those reliant on quantum computing bring additional concerns, whether related to cost or to security, given the risks associated with the quantum threat. For this reason, a new development has been pioneered which uses FPGAs to deliver quantum-inspired algorithms on traditional workstations – offering new levels of scalability and speed without the costs and complexities associated with quantum computers or supercomputers.

The arrival of simulated bifurcation algorithms

Specifically, this development comes via the arrival of more advanced Ising machines using simulated bifurcation algorithms. These quantum-inspired algorithms can be up to ten times faster than competing technologies, as is the case with Toshiba’s Simulated Bifurcation Machine™ (SBM). SBMs apply a cyber-physical approach to Ising problems to help eliminate the barriers alluded to earlier, such as limited scalability. This is achieved by numerically simulating adiabatic evolutions of classical nonlinear Hamiltonian systems exhibiting bifurcation phenomena, where the two branches of the bifurcation in each nonlinear oscillator correspond to the two states of each Ising spin. This model can also be simulated using current digital computers, a significant advantage over quantum-based systems – enabling the treatment of large-scale problems with dense connectivity, which are challenging for current quantum computers.
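As a rough illustration of the idea (a sketch of the ballistic variant, not Toshiba's actual implementation), each spin becomes a nonlinear oscillator whose position x_i is driven through a bifurcation toward +1 or −1 as a pump term a(t) ramps up. All parameter values and the toy ferromagnetic instance below are assumptions made for the example:

```python
import math
import random

def simulated_bifurcation(J, steps=2000, dt=0.1, a0=1.0, c0=0.5, seed=1):
    """Simplified ballistic simulated bifurcation (bSB) sketch.

    x[i] is a continuous oscillator position whose sign encodes spin i;
    the pump a(t) ramps from 0 to a0, pushing each oscillator through a
    bifurcation toward x = -1 or x = +1.
    """
    rng = random.Random(seed)
    n = len(J)
    x = [rng.uniform(-0.1, 0.1) for _ in range(n)]
    y = [0.0] * n  # conjugate momenta
    for t in range(steps):
        a = a0 * t / steps
        # The coupling forces for all spins depend only on the previous
        # positions, so every variable can be updated simultaneously -
        # the parallelism exploited on FPGAs and GPUs.
        force = [sum(J[i][j] * x[j] for j in range(n)) for i in range(n)]
        for i in range(n):
            y[i] += (-(a0 - a) * x[i] + c0 * force[i]) * dt
            x[i] += a0 * y[i] * dt
            if abs(x[i]) > 1.0:  # perfectly inelastic walls at +/-1
                x[i] = math.copysign(1.0, x[i])
                y[i] = 0.0
    return [1 if xi >= 0 else -1 for xi in x]

# Toy ferromagnetic instance: all couplings +1, so the ground states are
# the two fully aligned configurations with energy -6.
n = 4
J = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
spins = simulated_bifurcation(J)
energy = -sum(J[i][j] * spins[i] * spins[j]
              for i in range(n) for j in range(i + 1, n))
print(spins, energy)
```

Because the whole update is ordinary floating-point arithmetic, it runs on any digital computer; the hardware question is only how many oscillators can be advanced in parallel per step.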

Real world application

Applying this to financial services, tests demonstrated that the algorithm can detect and execute optimal arbitrage opportunities from among a huge number of currency transaction paths in real time, and issue a trade order in microseconds. Using the bifurcation phenomenon, adiabatic process and ergodic process in classical mechanics, the new algorithm searches instantly for solutions with high accuracy – with the likelihood of finding the most profitable arbitrage opportunities enhanced to more than 90 per cent. This showcases another vital element of simulated bifurcation when compared to other models – the abundance of parallelism within the algorithm. From a computational perspective, simulated bifurcation and simulated annealing are not substantially different, but the parallelism of the bifurcation algorithm allows all variables to be updated simultaneously at each time-evolution step. This enables a significant increase in operational speed through massively parallel processing on FPGAs and graphics processing units (GPUs).

No one, until now, has been able to employ combinatorial optimisation in high-frequency trading. But with SBM, testing is taking place for an end-to-end arbitrage system that can easily be installed in a forex trading centre for calculations with foreign currency exchanges. The total response time, including input, calculation and output, is just 30 microseconds. This is much shorter than the time required for an exchange to execute a trade, which is around one millisecond.
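The underlying arbitrage question is easy to state, even though SBM answers it at a scale and speed far beyond this toy: multiply the exchange rates around a closed cycle of currencies, and any product greater than 1 is a profitable loop. A brute-force sketch with made-up rates (the real system maps this search onto an Ising problem rather than enumerating paths):

```python
from itertools import permutations

# Illustrative exchange rates (units of the second currency per unit of
# the first); all values are made up for this example.
rates = {
    ("USD", "EUR"): 0.93,    ("EUR", "USD"): 1.07,
    ("USD", "JPY"): 150.0,   ("JPY", "USD"): 0.00664,
    ("EUR", "JPY"): 162.0,   ("JPY", "EUR"): 0.00610,
}

def cycle_profit(cycle):
    """Multiply rates around a closed currency cycle; > 1 means arbitrage."""
    profit = 1.0
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        profit *= rates[(a, b)]
    return profit

# Fix USD as the home currency and enumerate every round trip.
cycles = [("USD",) + rest for rest in permutations(["EUR", "JPY"])]
best = max(cycles, key=cycle_profit)
print(best, cycle_profit(best))  # here: USD -> EUR -> JPY -> USD, ~1.0004
```

With tens of currencies the number of cycles explodes factorially, which is exactly why the search has to be reformulated as an optimisation problem rather than an enumeration.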

Scaling out SBM

The end-to-end arbitrage system was based on the single-FPGA simulated bifurcation technology, so the capability of the transaction machine was limited to fully connected 16-asset problems (a market graph with 16 nodes and 256 directed edges). Yet more assets can be taken into account through cutting-edge scale-out technology, which supports continued increases in computing speed and scale. At the heart of this is a partitioned version of the simulated bifurcation algorithm, in which multiple FPGAs exchange spin information with neighbouring chips, triggering an autonomous synchronisation mechanism. This enhances the processing speed of the cluster while avoiding the performance saturation caused by communication overheads. Such an approach is called scale-out in computer architecture, and the relationship between computing speed and the number of FPGAs has been demonstrated to be linear.

This means that simulated bifurcation with a cluster of eight FPGAs achieves computational throughput 5.4 times higher than an SBM with a single FPGA, while solving problems up to 16 times larger. This method has successfully demonstrated the world’s first simultaneous scale-out of computing speed and problem size for all-to-all connected combinatorial optimisation problems.

Where is all this heading?

SBM is one example of a cyber-physical approach to solving combinatorial optimisation problems. Cyber-physical systems are defined as those which collect real-world data and analyse it within cyber or digital environments – using technologies like Artificial Intelligence (AI) – before then applying the learnings within the physical world to create added value. This methodology is being applied today across a wide range of sectors to help organisations build transformative, more sustainable solutions in a more effective way.

Kosuke Tatsumura is a Chief Research Scientist at the Corporate R&D Center at Toshiba Corporation, based in Japan. His focus is on cutting-edge FPGA computing technology and its applications to innovative industrial systems, including ultra-high-speed custom processors for combinatorial optimisation problems, financial trading machines, AI-enhanced fabs, and swarm robotics. At the company, he leads a research group called Domain-Specific Computing, mainly developing FPGA-based accelerators for innovative industrial systems. He is also in charge of creating new businesses based on cutting-edge technology.

