Uncertainty Isn’t Always a Problem—It Can Be the Solution

In 1978 my wife and I, with our two sons, then aged 2 and 4, were traveling home after a year in Connecticut. We had eight suitcases plus carry-on bags, and were flying from Hartford to New York on TWA, connecting to a flight to Toronto on Air Canada. There we planned to stay for a week before heading on to the UK. Air Canada had been on strike, but we were reliably informed that the strike had ended. No mobiles or internet in those days, and our phone had been disconnected because we were leaving. So, when we got to New York, it turned out that the strike was back on, and we were stranded. (Why TWA let us board the connecting flight I have no idea.) Our bags had also gone missing. Eventually, after I threw a tantrum, the airline found them. Not wanting to wait three days for a standby flight, we piled into a cab and headed to the Greyhound Terminal in downtown Manhattan. The cab dropped us at the wrong end of the terminal building, so we had to ferry all the bags, plus kids, a few hundred yards. Then wait six hours for a crowded Greyhound. After several other traumatic events, we finally got on the bus, and arrived in Toronto at 6 am, 12 hours late.

We made it, but all of our plans, carefully put in place over several weeks, had gone out the window. We live in an uncertain world, and even when you think you know what’s going to happen, the universe can bite you.


Excerpted from Do Dice Play God?: The Mathematics of Uncertainty, by Ian Stewart, emeritus professor of mathematics at the University of Warwick.

On the other hand, uncertainty isn’t always bad. We like surprises, as long as they’re pleasant ones. Many of us enjoy a flutter on the horses, and most sports would be pointless if we knew at the start who was going to win. Some prospective parents are keen not to be told the sex of the baby. Most of us, I suspect, would prefer not to know in advance the date of our own death, let alone how it will occur. But those are exceptions. Life is a lottery. Uncertainty often breeds doubt, and doubt makes us feel uncomfortable, so we want to reduce, or better still eliminate, uncertainty. We worry about what will happen. We look out for the weather forecast, even though we know that weather is notoriously unpredictable and the forecast is often wrong.

Human affairs have always been messy, but even in science, the old idea of nature obeying exact laws has given way to a more flexible view. We can find rules and models that are approximately true (in some areas “approximate” means “to 10 decimal places”; in others it means “between 10 times as small and 10 times as large”) but they’re always provisional, to be displaced if and when fresh evidence comes along. Chaos theory tells us that even when something does obey rigid rules, it may still be unpredictable. Quantum theory tells us that deep down at its smallest levels, the universe is inherently unpredictable.

Uncertainty isn’t just a sign of human ignorance; it’s what the world is made of.

And sometimes uncertainty can actually be useful. Many areas of technology deliberately create controlled amounts of uncertainty, in order to make devices and processes work better. Mathematical techniques for finding the best solution to an industrial problem use random disturbances to avoid getting stuck on strategies that are the best compared to near neighbors, but not as good as more distant ones. Random changes to recorded data improve the accuracy of weather forecasts. Space missions exploit chaos to save expensive fuel.

Random numbers—more precisely, the pseudorandom numbers that computers can be told to generate—can be really useful. SatNav uses streams of pseudorandom numbers to avoid problems with electrical interference. Medical trials use them to randomize patients and treatments, to avoid human bias. In fact, a bit of uncertainty can often act to our advantage. So although uncertainty is usually seen as a problem, it can also be a solution, though not always to the same problem.


Take computers. Digital computers are deterministic. Give one a program, and it will carry out every instruction to the letter. But this determinism also makes it hard for computers to behave randomly.

There are three main solutions. You can engineer in some nondigital component that behaves unpredictably; you can provide inputs from some unpredictable real-world process such as radio noise; or you can set up instructions to generate pseudorandom numbers. These are sequences of numbers that appear to be random, despite being generated by a deterministic mathematical procedure. They’re simple to implement and have the advantage that you can run exactly the same sequence again when you’re debugging your program.

The general idea is to start by telling the computer a single number, the “seed.” An algorithm then transforms the seed mathematically to get the next number in the sequence, and the process is repeated. If you know the seed and the transformation rule, you can reproduce the sequence. If you don’t, it may be hard to find out what the procedure is. SatNav (the Global Positioning System) in cars makes essential use of pseudorandom numbers. GPS requires a series of satellites sending out timing signals, which are received by the gadget in your car and analyzed to work out where your car is. To avoid interference, the signals are sequences of pseudorandom numbers, and the gadget can recognize the right signals. By comparing how far along the sequence the message arriving from each satellite has got, it computes the relative time delays between all the signals. That gives the relative distances of the satellites, from which your position can be found using old-fashioned trigonometry.
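The seed-and-transform recipe fits in a few lines. Here is a minimal sketch using a linear congruential generator; the constants are a standard textbook choice, not what GPS or any cryptographic system actually uses:

```python
def lcg(seed, n):
    """Generate n pseudorandom numbers in [0, 1) from a seed.

    Each step applies the same deterministic transformation:
    state -> (a * state + c) mod m.  Anyone who knows the seed
    and the rule can reproduce the whole sequence exactly.
    """
    a, c, m = 1664525, 1013904223, 2**32  # classic textbook constants
    state = seed
    out = []
    for _ in range(n):
        state = (a * state + c) % m
        out.append(state / m)
    return out

# The same seed always reproduces the same "random" sequence --
# the debugging property mentioned above.
assert lcg(42, 5) == lcg(42, 5)
```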


Now, suppose you want to generate a sequence, and you want to be convinced that it truly is random. Maybe you’re setting up the key to some encryption scheme, for example. A pseudorandom sequence can never give you that assurance, because a deterministic rule lies behind it. In 2018 Peter Bierhorst and coworkers published a paper showing that you can get round this restriction using quantum mechanics. Essentially, the idea is that quantum indeterminacy can be translated into specific sequences, with a physical guarantee that they’re random. That is, no potential enemy can deduce the mathematical algorithm that creates them—because there isn’t one.

It might seem that the security of a random number generator can be guaranteed only if it satisfies two conditions. The user must know how the numbers are generated, otherwise they can’t be sure that it generates truly random numbers. And the enemy must be unable to deduce the internal workings of the random number generator. However, there’s no way to satisfy the first condition in practice using a conventional random number generator, because whatever algorithm it implements, it might go wrong. Keeping an eye on its internal workings might do the trick, but that’s usually not practical. The second condition violates a basic principle of cryptography called Kerckhoffs’ principle: You must assume that the enemy knows how the encoding system works. Just in case they do. Walls have ears. (What you hope they don’t know is the decoding system.)

Quantum mechanics leads to a remarkable idea. Assuming no deterministic hidden-variable theory exists, you can create a quantum-mechanical random number generator that is provably secure and random, such that the two conditions above both fail. Paradoxically, the user doesn’t know anything about how the random number generator works, but the enemy knows this in complete detail.

The device uses entangled photons, a transmitter, and two receiving stations. Generate pairs of entangled photons with highly correlated polarizations. Send one photon from each pair to one station, and the other to a second station. At each station, measure the polarization. The stations are far enough apart that no signal can travel between them while they make this measurement, but by entanglement the polarizations they observe must be highly correlated.

Now comes the nifty footwork. Relativity implies that the photons can’t be used as a faster-than-light communicator. This implies that the measurements, though highly correlated, must be unpredictable. The rare occasions on which they disagree must therefore be genuinely random.

Random numbers (I’ll drop the “pseudo” now) are used in a huge variety of applications. Innumerable problems in industry and related areas involve optimizing some procedure to produce the best possible result. For example, an airline may wish to timetable its routes so that it uses the smallest number of aircraft, or to use a given number of aircraft to cover as many routes as possible. Or, more precisely, to maximize the profit that arises. A factory may wish to schedule maintenance of its machines to minimize the “down time.” Doctors might want to administer a vaccine so that it can be most effective.

Mathematically, this kind of optimization problem can be represented as locating the maximum value of some function. Geometrically, this is like finding the highest peak in a landscape. The landscape is usually multidimensional, but we can understand what’s involved by thinking of a standard landscape, which is a two-dimensional surface in three-dimensional space. The optimum strategy corresponds to the position of the highest peak. How can we find it? The simplest approach is hill climbing. Start somewhere, chosen however you wish. Find the steepest upward path and follow it. Eventually you’ll reach a point where you can’t go any higher. This is the peak. Well, maybe not. It’s a peak, but it need not be the highest one. If you’re in the Himalayas and climb the nearest mountain, it probably isn’t Everest.
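The climber’s predicament is easy to reproduce. Below is a minimal sketch on an invented one-dimensional landscape with two peaks; the climber starts near the low one and duly gets stuck on it:

```python
import math

def landscape(x):
    # An invented two-peak terrain: a low peak near x = -1,
    # a higher one near x = 2.
    return math.exp(-(x + 1)**2) + 2 * math.exp(-(x - 2)**2)

def hill_climb(f, x, step=0.01):
    """Keep moving to the higher neighbor until neither side is higher."""
    while True:
        best = max(x - step, x, x + step, key=f)
        if best == x:        # neither neighbor improves: we're on a peak
            return x
        x = best

# Starting at x = -1.5, the climber stops on the low peak near -1
# and never discovers the higher peak near 2.
peak = hill_climb(landscape, -1.5)
print(peak, landscape(peak))
```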

Hill climbing works well if there’s only one peak, but if there are more, the climber can get trapped on the wrong one. It always finds a local maximum (nothing else nearby is higher), but maybe not a global one (nothing else is higher). One way to avoid getting trapped is to give the climber a kick every so often, teleporting them from one location to another one. If they’re stuck on the wrong peak, this will get them climbing a different one, and they’ll climb higher than before if the new peak is higher than the old one, and they don’t get kicked off it too soon. This method is called simulated annealing, because of a metaphorical resemblance to the way atoms in liquid metal behave as the metal cools down, and finally freezes to a solid. Heat makes atoms move around randomly, and the higher the temperature, the more they move. So the basic idea is to use big kicks early on, and then reduce their size, as if the temperature is cooling down. When you don’t initially know where the various peaks are, it works best if the kicks are random. So the right kind of randomness makes the method work better. Most of the cute mathematics goes into choosing an effective annealing schedule—the rule for how the size of kicks decreases.
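The textbook form of simulated annealing replaces literal teleports with a temperature-dependent chance of accepting a downhill move, which has the same effect: early on the search roams freely, later it settles. A minimal sketch, on the same kind of invented two-peak landscape as before:

```python
import math, random

def landscape(x):
    # Invented terrain: a low peak near x = -1, a high peak near x = 2.
    return math.exp(-(x + 1)**2) + 2 * math.exp(-(x - 2)**2)

def anneal(f, x, steps=5000, start_temp=3.0):
    random.seed(1)                            # fixed seed for a reproducible run
    best = x
    for i in range(steps):
        temp = start_temp * (1 - i / steps)   # the annealing schedule: kicks shrink
        candidate = x + random.gauss(0, 0.1 + temp)
        downhill = f(candidate) < f(x)
        # Always accept uphill moves; accept downhill ones with a
        # probability that falls as the temperature drops.
        if not downhill or random.random() < math.exp((f(candidate) - f(x)) / max(temp, 1e-9)):
            x = candidate
        if f(x) > f(best):
            best = x
    return best

# Plain hill climbing from -1.5 stalls on the low peak; the random
# kicks let the search escape it and settle near the high peak at 2.
print(anneal(landscape, -1.5))
```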

Another related technique, which can solve many different kinds of problem, is to use genetic algorithms. These take inspiration from Darwinian evolution, implementing a simple caricature of the biological process. Alan Turing proposed the method in 1950 as a hypothetical learning machine. The evolutionary caricature goes like this. Organisms pass on their characteristics to their offspring, but with random variation (mutation). Those that are fitter to survive in their environment live to pass on their characteristics to the next generation, while the less fit ones don’t (survival of the fittest or natural selection). Continue selecting for enough generations, and the organism gets very fit indeed—close to optimal.

Evolution can be modeled rather coarsely as an optimization problem: A population of organisms wanders randomly around a fitness landscape, climbing the local peaks, and the ones that are too low down die out. Eventually the survivors cluster around a peak. Different peaks correspond to different species. It’s far more complicated, but the caricature is sufficient motivation. Biologists make a big song and dance about evolution being inherently random. By this they mean (entirely sensibly) that evolution doesn’t start out with a goal and aim toward it. It didn’t decide millions of years ago to evolve human beings, and then keep choosing apes that got closer and closer to that ideal until it reached perfection in us. Evolution doesn’t know ahead of time what the fitness landscape looks like. In fact, the landscape itself may change over time as other species evolve, so the landscape metaphor is somewhat strained. Evolution finds out what works better by testing different possibilities, close to the current one but randomly displaced. Then it keeps the better ones and continues the same procedure. So the organisms keep improving, step by tiny step. That way, evolution simultaneously constructs the peaks of the fitness landscape, finds out where they are, and populates them with organisms. Evolution is a stochastic hill-climbing algorithm, implemented in wetware.

A genetic algorithm mimics evolution. It starts with a population of algorithms that try to solve a problem, randomly varies them, and selects those that perform better than the rest. Do this again for the next generation of algorithms, and repeat until you’re happy with the performance. It’s even possible to combine algorithms in a parody of sexual reproduction, so that two good features, one from each of two different algorithms, can be united in one. This can be seen as a kind of learning process, in which the population of algorithms discovers the best solution by trial and error. Evolution can be seen as a comparable learning process, applied to organisms instead of algorithms.
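A toy version of the recipe can be sketched directly; everything here (bit-string organisms, a fitness that counts matches against an arbitrary target) is an invented illustration, not a real application:

```python
import random

def evolve(target, pop_size=40, mutation_rate=0.02, seed=0):
    """Toy genetic algorithm: evolve random bit strings toward `target`."""
    random.seed(seed)
    n = len(target)
    fitness = lambda ind: sum(a == b for a, b in zip(ind, target))
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for generation in range(500):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == n:                 # perfect match found
            return generation
        survivors = pop[: pop_size // 2]         # "survival of the fittest"
        children = []
        while len(survivors) + len(children) < pop_size:
            mum, dad = random.sample(survivors, 2)
            cut = random.randrange(1, n)         # crossover: splice two parents
            child = mum[:cut] + dad[cut:]
            child = [b ^ (random.random() < mutation_rate) for b in child]  # mutation
            children.append(child)
        pop = survivors + children
    return None                                  # no perfect match within 500 generations

gens = evolve([1, 0] * 16)                       # an arbitrary 32-bit goal
print(f"solved in {gens} generations")
```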


Randomness comes in many forms, and chaos theory tells us that a butterfly flap can radically change the weather. We’ve discussed the sense in which this is true: “Change” really means “redistribute and modify.” To redistribute a hurricane, find the right butterfly.

We can’t actually do this for a hurricane or a tornado. Not even a light drizzle. But we can do it for the electrical waves in a heart pacemaker, and it’s widely used to plan fuel-efficient space missions when time isn’t vital. In both cases, the main mathematical effort goes into selecting the right butterfly. That is, sorting out how, when, and where to interfere very slightly with the system to obtain the desired result. Edward Ott, Celso Grebogi, and James Yorke worked out the basic mathematics of chaotic control in 1990. Chaotic attractors typically contain huge numbers of periodic trajectories, but these are all unstable: any slight deviation from one of them grows exponentially. Ott, Grebogi, and Yorke wondered whether controlling the dynamical system in the right way can stabilize such a trajectory. These embedded periodic trajectories are typically saddle points, so that some nearby states are initially attracted towards them, but others are repelled. Almost all of the nearby states eventually cease to be attracted and fall into the repelling regions, hence the instability. The Ott–Grebogi–Yorke method of chaotic control repeatedly changes the system by small amounts. These perturbations are chosen so that every time the trajectory starts to escape, it’s recaptured: not by giving the state a push, but by modifying the system and moving the attractor, so the state finds itself back on the in-set of the periodic trajectory.
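The full Ott–Grebogi–Yorke method locates a periodic orbit and its stable and unstable directions from data; the sketch below keeps only the core trick, applied to the chaotic logistic map (my choice of toy system, not theirs). Do nothing while the state is far from the unstable fixed point; when it wanders close, nudge the parameter r by a small, clipped amount so the next iterate lands back on the fixed point:

```python
def control_logistic(r=3.9, x0=0.5, steps=5000, window=0.05, max_kick=0.02):
    """Pin the chaotic logistic map x -> r*x*(1-x) onto its unstable
    fixed point using tiny, occasional parameter nudges."""
    x_star = 1 - 1 / r                 # the unstable fixed point of the map
    x, orbit = x0, []
    for _ in range(steps):
        dr = 0.0
        if abs(x - x_star) < window:
            # Choose dr so that (r + dr) * x * (1 - x) = x_star exactly,
            # then clip it so the nudge stays small.
            dr = (x_star - r * x * (1 - x)) / (x * (1 - x))
            dr = max(-max_kick, min(max_kick, dr))
        x = (r + dr) * x * (1 - x)
        orbit.append(x)
    return orbit, x_star

orbit, x_star = control_logistic()
# After a chaotic transient, the orbit locks onto the formerly
# unstable fixed point and stays there.
print(max(abs(x - x_star) for x in orbit[-100:]))
```

Without the control (set `window=0`), the same map wanders chaotically forever; with it, a perturbation of at most half a percent in r tames the chaos.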

A human heart beats fairly regularly, a periodic state, but sometimes fibrillation sets in and the heartbeat becomes seriously irregular—enough to cause death if not stopped quickly. Fibrillation occurs when the regular periodic state of the heart breaks up to give a special type of chaos: spiral chaos, in which the usual series of circular waves traveling across the heart falls apart into a lot of localized spirals.

A standard treatment for irregular heartbeats is to fit a pacemaker, which sends electrical signals to the heart to keep its beats in sync. The electrical stimuli supplied by the pacemaker are quite large. In 1992 Alan Garfinkel, Mark Spano, William Ditto, and James Weiss reported experiments on tissue from a rabbit heart.

They used a chaotic control method to convert spiral chaos back into regular periodic behavior, by altering the timing of the electrical pulses making the heart tissue beat. Their method restored regular beats using voltages far smaller than those in conventional pacemakers. In principle, a less disruptive pacemaker might be constructed along these lines, and some human tests were carried out in 1995.

Chaotic control is now common in space missions. The dynamical feature that makes it possible goes right back to Poincaré’s discovery of chaos in three-body gravitation. In the application to space missions, the three bodies might be the sun, a planet, and one of its moons. The first successful application, proposed by Edward Belbruno in 1985, involved the sun, Earth, and the moon. As Earth circles the sun and the moon circles Earth, the combined gravitational fields and centrifugal forces create an energy landscape with five stationary points where all the forces cancel out: two peaks and three saddles. These are called Lagrange points. One of them, L1, sits between the moon and Earth, where their gravitational fields and the centrifugal force of Earth circling the sun cancel out. Near this point, the dynamics is chaotic, so the paths of small particles are highly sensitive to small perturbations.

A space probe counts as a small particle here. In 1985 the International Sun–Earth Explorer ISEE-3 had almost completely run out of the fuel used to change its trajectory. If it could be transported to L1 without using up much fuel, it would be possible to exploit the butterfly effect to redirect it to some distant objective, still using hardly any fuel. This method allowed the satellite to rendezvous with comet Giacobini–Zinner. In 1990 Belbruno urged the Japanese space agency to use a similar technique on their probe, Hiten, which had used up most of its fuel completing its main mission. So they parked it in a lunar orbit and then redirected it to two other Lagrange points to observe trapped dust particles. This kind of chaotic control has been used so often in unmanned space missions that it’s now a standard technique when fuel efficiency and lower costs are more important than speed.

For all that, we’re still children “playing on the seashore,” as Newton put it, “finding a smoother pebble or a prettier shell than ordinary, whilst the great ocean of truth lay all undiscovered before [us].” Many deep questions remain unanswered. We don’t really understand the global financial system, even though everything on the planet depends on it. Our medical expertise lets us spot most disease epidemics early on, so we can take steps to mitigate their effects, but we can’t always predict how they spread. Every so often new diseases appear, and we’re never sure when and where the next one will strike. We can make exquisitely accurate measurements of earthquakes and volcanoes, but our track record of predicting them is as shaky as the ground beneath our feet.

The more we find out about the quantum world, the more hints there are that some deeper theory can make its apparent paradoxes more reasonable. Physicists have given mathematical proofs that quantum uncertainty can’t be resolved by adding a deeper layer of reality. But proofs involve assumptions, which are open to challenge, and loopholes keep turning up. New phenomena in classical physics have uncanny similarities to quantum puzzles, and we know that their workings have nothing to do with irreducible randomness. If we’d known about them, or about chaos, before discovering quantum weirdness, today’s theories might have been very different. Or perhaps we’d have wasted decades looking for determinism where none exists.


Excerpted from Do Dice Play God?: The Mathematics of Uncertainty by Ian Stewart. Copyright © 2019. Available from Basic Books, an imprint of Hachette Book Group, Inc.
