When it comes to computation, the modern approach seems to involve an enormous bucket of bits, vigorous shaking, and not a lot of explanation of how it all works. If you ever wondered how Excel became such an abomination, now you know.

We don’t seem to have a problem creating and filling enormous buckets of bits, but shaking them up is energy-intensive and slow. Modern processors, as good as they are, simply don’t cope well with some problems. A light-based, highly parallel processor may just be the (rather bulky) co-processor that we’ve been looking for to handle these tasks.

## Solutions are downhill

One way to compute a solution to a problem is called annealing. I’ve written a lot about annealing in the context of quantum computing, but annealing works for classical computers as well. The essential idea is that a problem is recast so that the solution is the lowest energy state of an energy landscape. The landscape determines how strongly the value of one bit affects the value of the surrounding bits.

We start with all the bit values set randomly, then shake. As we shake, the bits have a chance to flip, which can also induce neighboring bits to flip. A flip that reduces the total energy is always more likely than one that increases it. Over time, the total energy falls until the system reaches its lowest possible state. The values of the bits then represent the solution to your problem.
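The shaking-and-cooling process above can be sketched in a few lines of code. This is a minimal, illustrative simulation of classical annealing on a ring of coupled bits; the coupling choice, the cooling schedule, and all names are my own assumptions, not anything from the article.

```python
# A toy classical annealer: N spins on a ring, each preferring to agree
# with its two neighbors. "Shaking" is random bit flips, accepted more
# readily when they lower the total energy.
import math
import random

random.seed(0)

N = 32
J = 1.0  # coupling strength: neighboring bits that agree lower the energy
spins = [random.choice([-1, 1]) for _ in range(N)]

def energy(s):
    # Total energy: each agreeing neighbor pair contributes -J.
    return -J * sum(s[i] * s[(i + 1) % N] for i in range(N))

T = 5.0  # "temperature": how hard we shake
for step in range(20000):
    i = random.randrange(N)
    # Energy change if spin i flips (only its two neighbors matter).
    dE = 2 * J * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
    # Metropolis rule: downhill flips always accepted; uphill flips
    # accepted with probability exp(-dE / T).
    if dE <= 0 or random.random() < math.exp(-dE / T):
        spins[i] = -spins[i]
    T = max(0.01, T * 0.9995)  # gradually stop shaking

print("final energy:", energy(spins))
```

The gradual lowering of the temperature is the "annealing" part: early on, the system can climb out of bad configurations; late in the run, only downhill moves survive, and the bits settle toward the energy minimum (all spins aligned, for this toy ring).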

Annealing is a bit niche, though, because a modern processor can’t flip bits or count up the total amount of energy without laboriously churning through them one by one (or in small blocks). Even with multiple processors, the coupling between bits means that one processor spends a fair bit of time waiting for results from its neighbors. In a lot of cases, it just isn’t worth it.

## Do everything at the same time

This is where our new optical processor comes in: everything happens in parallel. I should note that the sort of optical processor that I’m about to describe is not entirely new. This may be a case, however, where revisiting an earlier idea with new technology gives optical computing a new lease on life.

To compute, you need a pixelated light source. At each pixel, you can vary two properties, the phase and the amplitude, both set by a spatial light modulator. The amplitude (or brightness) of a pixel controls how strongly the light from that pixel interferes with light from all the other pixels, and that interaction strength encodes the problem to be solved. The answer lies in the phase of each pixel: the spatial light modulator can switch the phase of the light between two values, representing logical zero and logical one.

How do you know when the computer has the right answer? You image the output beam. If the image is a single bright point, then you’ve reached the energy minimum and the answer is awaiting you. To get to the energy minimum, you flip the phase of each pixel between logical one and logical zero and check if the image on the camera is closer to the desired point. You just keep iterating and shaking bit values until a shiny bright point of light is achieved. Once you have that (and can see again), you can read out the bit value of each pixel to obtain your answer.
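A toy simulation makes the flip-and-check loop concrete. In this sketch (my own illustration, not the researchers' code), the "camera reading" is the brightness at the target point, which for binary phases works out to the square of a weighted sum of the bits, so brightening the spot is equivalent to lowering an Ising-type energy with couplings set by the pixel amplitudes. All names and the choice of random amplitudes are assumptions.

```python
# Toy version of the optical feedback loop. Each pixel j has a fixed
# amplitude a[j] (the problem) and a binary phase s[j] = +1 or -1 (the
# bits). The brightness at the target point on the camera is
# |sum_j a[j]*s[j]|^2, so maximizing the spot's brightness minimizes an
# Ising-type energy with couplings J_ij = a[i]*a[j].
import random

random.seed(1)

N = 64
a = [random.uniform(-1, 1) for _ in range(N)]   # amplitudes encode the problem
s = [random.choice([-1, 1]) for _ in range(N)]  # phases encode the bits

def brightness(s):
    field = sum(ai * si for ai, si in zip(a, s))
    return field * field

for sweep in range(10):
    for j in range(N):
        before = brightness(s)
        s[j] = -s[j]                 # flip one pixel's phase...
        if brightness(s) <= before:  # ...check the camera...
            s[j] = -s[j]             # ...and undo the flip if the spot dimmed

print("final brightness:", brightness(s))
```

For this particular (rank-one) coupling pattern, the loop converges to the brightest possible spot: every pixel's phase ends up matched to the sign of its amplitude. Real problems have richer couplings and a bumpier landscape, which is where the random shaking earns its keep.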

As far as computers go, this is pretty simple stuff. Using off-the-shelf components, the researchers were able to compute with about 40,000 bits, which is a pretty good start. It takes the researchers about 1,400 iterations to obtain the solution to their chosen problem. The problem itself is not important for us, but it was a classic physics magnetism problem (the 2D Ising model).

The researchers claim that with better equipment, it would take about a millisecond or so to set new bit values, which works out to about 1.4s per Ising model. Other problems may take longer to reach a solution, but probably not much longer, since a wide range of hard optimization problems can be recast as an Ising model.
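To see what such a recasting looks like, here is one standard example (my own illustration, not from the article): the MAX-CUT problem, splitting a graph's vertices into two groups so that as many edges as possible cross between them, becomes an Ising energy minimization once each vertex is assigned a spin.

```python
# Illustrative mapping: MAX-CUT as an Ising problem. Assigning each
# vertex a spin of +1 or -1 splits the vertices into two groups; the
# Ising energy E = sum over edges of s[u]*s[v] is minimized exactly
# when the number of crossing edges is maximized.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # a small 4-vertex graph

def ising_energy(s):
    return sum(s[u] * s[v] for u, v in edges)

def cut_size(s):
    return sum(1 for u, v in edges if s[u] != s[v])

# Brute force over all spin assignments (fine for 4 vertices; an
# annealer, optical or otherwise, would search this space instead).
best = min(product([-1, 1], repeat=4), key=ising_energy)
print(cut_size(best))  # → 4, the largest cut this graph allows
```

An annealer never sees "MAX-CUT" at all; it only ever minimizes the Ising energy, and the translation back to the original problem happens when you read out the spins.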

## More bits, same speed

However, the key here is scalability. The solver can easily be expanded to hold more pixels—the researchers’ own equipment could manage a million pixels, while 10-million-pixel spatial light modulators are available. And there is no time penalty for adding more pixels: the time it takes to set the pixels on a spatial light modulator is determined by how fast each pixel’s liquid crystals rotate. The pixel voltages are set well before the liquid crystals have finished rotating, meaning that additional pixels do not add time to set bit values.

The big issue is the setup you need to do before performing any calculations. As usual with optical setups, everything has to be carefully aligned and nothing may move during a computation. Under ideal circumstances, you also want to make sure that the number and size of the pixels on the camera matches the pixels used in the spatial light modulator. Otherwise, you can’t ensure that the minimum energy is found.

More specifically, the light pattern from a spatial light modulator is given by a set of spatial modes. The number of modes is limited by the number and density of its pixels. At the other end, the detector can only detect a limited set of modes, also set by the number and density of its pixels. If the detector is not at least as good as the spatial light modulator, then you cannot minimize the energy of your problem and obtain the solution. Alternatively, you have to limit yourself to problems that only involve modes the detector can resolve.

That means that for very large problems, you need an exceptionally good detector and a gigantic spatial light modulator. Nevertheless, if the researchers can show how to turn this into something robust, parallel light computers will turn up in specialized applications as a sort of co-processor.

You’ll have to provide your own darkened room, though.

Physical Review Letters, 2019, DOI: 10.1103/PhysRevLett.122.213902
