/Cutting Corners on Numbers | Tufts Now (via Qpute.com)
A stream of digital zeros and ones. Tufts mathematicians find that the way we represent numbers in computers can lead to large computational errors, especially for modeling dynamical systems.


Researchers at Tufts University and University College London have made a discovery about a fundamental approach to computing that suggests we could be way off on many important calculations in science and engineering, in some cases introducing errors of up to 15 percent.

The so-called “floating-point” standard, by which real numbers are represented on digital computers in binary form (using 1s and 0s), has long been regarded as one of the great success stories of computational science and engineering. It serves as a basis for important simulations of real-world phenomena such as tracking hurricanes, predicting climate change, and designing aircraft. The idea has even been popularized in movies like The Matrix, in which an entire world is simulated in code.

What is less commonly known beyond the world of scientists and engineers is that this binary code may only approximate the numbers we are familiar with in daily life. Now, a study by Tufts professor of mathematics Bruce Boghosian, his graduate student Hongyan Wang, G19, and their collaborator Peter Coveney, professor of chemistry and director of the Centre for Computational Science at University College London, reveals that those approximations can wreak havoc on certain complex simulations.

There are two main sources of error. The first is called rounding error. A simple explanation can be drawn by analogy. We know the number pi has a precise value obtained by dividing the circumference of a circle by its diameter—3.14159265 and so on, in a non-ending sequence of numbers. Any written form of the number is an imprecise representation—at some point, we have to stop writing digits to make it readable. The same thing happens in the binary world. In fact, the simple fraction 0.1—in our familiar base 10 system—has a non-ending sequence in the binary world:

0.1 (decimal) = 0.000110011001100110011... (binary), with the block 0011 repeating forever.
If our computer lets us use only fifty-two binary digits, we have to cut that sequence short, and if we do that and convert it back to decimal, this is what we get:

0.1000000000000000055511151231257827021181583404541015625
In other words, not exactly what we started with, but usually too small of a difference to affect ordinary calculations.
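This rounding is easy to see in Python, whose float type is a 64-bit IEEE 754 double (a minimal sketch; any language with doubles behaves the same way):

```python
from decimal import Decimal

# Decimal(0.1) reveals the exact binary fraction the computer stores
# in place of one tenth.
print(Decimal(0.1))
# -> 0.1000000000000000055511151231257827021181583404541015625

# The tiny discrepancy accumulates: adding 0.1 ten times misses 1.0.
total = sum([0.1] * 10)
print(total == 1.0)  # False
print(total)         # 0.9999999999999999
```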

While also related to rounding, the second source of error—and the focus of this study—“manifests itself when floating-point numbers are used to represent the state of chaotic dynamical systems, where it can introduce substantial inaccuracies in computation,” said Boghosian.

This comes from the fact that floating-point numbers are distributed unevenly: there are as many between 1/8 and 1/4 as between 1/4 and 1/2, and as many again between 1/2 and 1, and so on. “If this does not correspond to the way that the states of the dynamical system are naturally distributed, roundoff errors can effectively be amplified,” said Boghosian.
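The uneven spacing is easy to measure with Python's math.ulp, which gives the gap between a float and its nearest neighbor (a small sketch):

```python
import math

# The gap between adjacent doubles doubles at each power of two, so
# every interval [2**k, 2**(k+1)) contains the same count of floats.
for x in (0.125, 0.25, 0.5, 1.0):
    print(x, math.ulp(x))

# Each binade's gap is exactly twice the previous one.
print(math.ulp(1.0) == 2 * math.ulp(0.5) == 4 * math.ulp(0.25))  # True
```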

“It has long been believed that the rounding errors are not problematic, especially if we use double-precision floating-point numbers—binary numbers using sixty-four bits, instead of thirty-two,” said Boghosian. “But in our study, we have demonstrated a problem that is due to the uneven distribution of the fractions represented by the floating-point numbers, and that is not likely to disappear merely by increasing the number of bits.”
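One way to see that extra bits merely postpone the trouble is to run the same chaotic iteration in single and double precision and watch the two trajectories part company. This sketch uses the logistic map x → 4x(1 − x) as a generic stand-in chaotic system (not the map analyzed in the study), with Python's struct module emulating 32-bit rounding:

```python
import struct

def f32(x):
    """Round a Python float to the nearest IEEE 754 single-precision value."""
    return struct.unpack('f', struct.pack('f', x))[0]

def max_gap(x0, steps):
    # Iterate the chaotic logistic map in 32-bit and 64-bit arithmetic
    # and record the largest separation between the two trajectories.
    single, double = x0, x0
    gap = 0.0
    for _ in range(steps):
        single = f32(4.0 * single * (1.0 - single))
        double = 4.0 * double * (1.0 - double)
        gap = max(gap, abs(single - double))
    return gap

# Single- and double-precision runs disagree at order one within
# about a hundred iterations.
print(max_gap(0.3, 100))
```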

Boghosian and colleagues looked at dynamical simulations, which often involve an extremely large number of calculations, each depending on the results of previous ones, and they observed a “butterfly effect”—a tiny error at the beginning can have a very large effect on the end result.
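The butterfly effect itself is easy to reproduce (a sketch, again using the logistic map as a generic chaotic system): perturb the starting point in its fifteenth decimal place, and the two runs soon disagree completely.

```python
def diverge(a, b, steps):
    # Iterate the chaotic logistic map x -> 4x(1-x) from two nearby
    # starting points and report their largest separation.
    gap = 0.0
    for _ in range(steps):
        a = 4.0 * a * (1.0 - a)
        b = 4.0 * b * (1.0 - b)
        gap = max(gap, abs(a - b))
    return gap

# A perturbation of one part in 10**15 grows to order one.
print(diverge(0.3, 0.3 + 1e-15, 100))
```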

This much was expected, but the group went on to find that even the statistical properties of the dynamical system “could be rendered seriously inaccurate by the floating-point discretization,” Boghosian said. The study, published today in Advanced Theory and Simulations, concludes that digital computers may not reliably reproduce the behavior of chaotic dynamical systems, such as those used in meteorology. This fundamental constraint could have implications across the board in science and engineering, and even in the emerging field of artificial intelligence.
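A striking classic illustration of statistics going wrong (offered as a sketch, not as the study's exact computation) is the doubling map x → 2x mod 1. The true map is chaotic and visits the whole interval uniformly, but because multiplying by 2 is exact in binary, every floating-point trajectory collapses to zero:

```python
def doubling_orbit(x, steps):
    # The doubling map x -> 2x mod 1 has a uniform invariant
    # distribution.  But in binary floating point, multiplying by 2 is
    # exact: each step just shifts the binary expansion left one place,
    # and a double's expansion is finite, so the orbit reaches exactly 0.
    for _ in range(steps):
        x = (2.0 * x) % 1.0
    return x

print(doubling_orbit(0.123456789, 100))  # 0.0 -- the dynamics die out
```

The simulated statistics (everything at zero) bear no resemblance to the true uniform distribution, no matter how many bits of precision are used.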

These problematic results would persist even if double-precision floating-point numbers were used, explained Boghosian, and they would also persist under a different system of computation, such as ternary trits (based on the three values 0, 1, and 2) rather than binary bits. Not even quantum computing can rescue us from this inherent computational flaw if real numbers are still represented in floating-point format.

“We are now aware that this simplification—representation by floating-point numbers—may not accurately represent the complexity of chaotic dynamical systems, and this is a problem for such simulations on all current and future digital computers,” said Boghosian.

“Our work shows that the behavior of the chaotic dynamical systems is richer than any digital computer can capture,” said Coveney. “Chaos is more commonplace than many people may realize, and even for very simple chaotic systems, numbers used by digital computers can lead to errors that are not obvious, but can have a big impact. Ultimately, computers can’t simulate everything.”

More research is needed, Boghosian said, to examine the extent to which the use of floating-point arithmetic is causing problems in everyday computational science and modeling and, if errors are found, how to correct them.

Mike Silver can be reached at [email protected]. Kalimah Redd Knight can be reached at [email protected].
