Chapter 3

The Biggest Video Game in the World: Understanding Uncertainty

“I don’t know if we each have a destiny, or if we’re all just floating around accidental-like on a breeze. But I think maybe it’s both.” – Forrest Gump, resolver of the oldest quandary in philosophy

The fundamental question of what passes for popular debate about global warming is the familiar one so many people want an answer to: what’s going to happen? The problem with discussions rooted in that question is that they all fall into yawning gaps of improbability. Asking what is ultimately going to happen with climate is like arguing about who’s going to win the World Series in 2050.

Someone is going to win, we know that much for sure, but we have no idea who, and the MVP might be in middle school right now. Thirty years out, we don’t even know what Major League Baseball will look like; the 2050 World Series might be the crowning achievement of a dynasty team from Saskatoon or something.

That’s how uncertainty works. The farther out you get, the more unknown questions you have, and the wider the range of possibilities.

To take an example closer to home, consider your daily commute to work/school/whatever. Let’s say it usually takes you about twenty minutes.* Some days, if you hit the stoplights just right, it may take only fifteen. Other days, if you have displeased the traffic gods, there’s a crash on the freeway, and the lights suck, and all of a sudden you’re at thirty minutes or worse.

(*If you have a much longer commute than this and your forehead vein just popped out, I apologize.)

This holds true for taking a bus, riding a train, pedaling a bike, or any other method of regularly getting from one place to another. Accidents, breakdowns, unpredictable weather, and other factors beyond your control influence your routine. That’s uncertainty.

Here’s the wrinkle:* uncertainty increases with distance. Your next commute is a short, well-defined span that’s going to happen very soon. So you can say with a lot of confidence that Friday’s trip is going to be a lot like Monday’s. But a year from now, who knows? Maybe they put in a new traffic light. Maybe they take out a traffic light and replace it with a traffic circle. Maybe the place you get your coffee changes hands and you no longer like it enough to stop in every day.

(*In time, if you will.)

If you stretch the time horizon out to two years, or five years, or ten years or more, uncertainty piles up until you can’t know much about your commute at all. Maybe you finish school or change your job, maybe you move to a whole new place with an entirely different commute, maybe you retire or start working from home and don’t have a commute at all. The more unknowable variables pile up, the less certainty you can have, which is another way of saying that the farther out you go, the less you can know for sure.

This is also how projections about climate work, only on a longer time frame. We have a very good idea what the climate is going to look like five years from now, a less good idea for ten years from now, educated guesses for thirty years out, and projections so wide for eighty years out that basing anything on them is like planning a commute before you know what city you live in. This kind of long-term unknowability is exploited by people who dismiss global warming.

“Well,” they say in a condescending voice, “your projections are just projections, so why should we spend any money in the here and now?”

The answer is that uncertainty is a two-way street. Your commute didn’t just pop into existence this morning. Years of decisions and factors beyond your control went into it. Similarly, global warming is no overnight phenomenon.

We may only have wild guesses for what’s going to happen in 2100, but we know for damn sure what’s happened since the 1800s. There’s been more than a century of theory, measurement, and research whittling away at our uncertainty over global warming. That’s how we can be uncertain about what precisely is going to happen in the future while being quite certain about how and why we got to where we are.

 

Dead White Men

“If you could get out that whole time, why didn’t you?” – Morty Smith, hopeful idiot
“Because I waited until I was certain it was what I wanted to do.” – Rick Sanchez, realist boozehound 

The science behind CO2 warming wasn’t discovered on a stone tablet in 1988 and rushed to Congress in a whirl of sirens. It was a slow and uncoordinated process that was filled with missteps and began by accident back when Mark Twain was still alive. Across whole generations of scientists, engineers, and meteorologists, it went down a dozen dead ends and finally landed on a conclusion that would’ve shocked the people who originally proposed the idea.

Way back in 1896, a Swedish scientist found himself on the losing end of a bitter divorce. Like a lot of freshly dumped people, he had time on his hands and a painful need to distract himself. Since modern coping mechanisms like junk food and binge watching were far in the future, he set his mind to solving a then fashionable mystery: why did Ice Ages happen?

All he had was paper, pencils, the basic concepts of physics and chemistry,* and a few tantalizing scraps of data. After months of painstaking calculation that would take about an hour and a half in Excel, he had a wildly incorrect answer and a profound insight that would sit ignored until well after his death: the relative amount of CO2 in the atmosphere could cause huge shifts in temperature.[1]

(*Especially that CO2 holds heat while N2, O2, and Ar do not.)

In the 1920s, a Russian chemist, who had spent World War I mobilizing production for the armies of the Tsar, came to a frightful conclusion. The industrial capacity of human civilization had grown so huge that it now rivaled the natural processes that had shaped the atmosphere in the first place. This included the ability to significantly shift the percentage of CO2 in the air.[2]

During and after World War II, the U.S. government poured money into atmospheric research. Partly this was for sober military reasons like improving the accuracy of weather predictions before battle. And partly it was for drunken military reasons like wanting to control the weather and have storms beat the enemy for you.[3]

Along the way they funded detailed studies of atmospheric composition that would’ve wowed earlier scientists, plus they created computers that made months of manual calculation obsolete. A researcher at Lockheed, whose day job was heat seeking missiles, spent his spare time using these new tools to confirm the theory of fifty years before: adding or subtracting CO2 from the atmosphere could have huge impacts on temperature. He thought it would be several centuries before it would matter one way or the other, but it definitely could happen.[4]

That was the mid-1950s, and while his concept was sound, his conclusion was way off. The simple reason for his inaccuracy was that no one had yet figured out a reliable way to measure how much CO2 was actually in the air. Up to that point, it was all theory. The next step was to get accurate measurements of an invisible trace gas that is colorless, odorless, and everywhere. That baffling task fell to a man named Dave Keeling.*

(*So far I have left these brilliant people anonymous. They and countless others were vital to solving this epic detective story, but a bare outline like this only has room for a few names. If you’re curious, you can easily look them up in the source material. But Keeling is a name worth remembering for reasons that will shortly be clear.)

Before Keeling came along, getting a reliable measurement of atmospheric CO2 had been next to impossible. Just about anything could disturb the number, from a shift in the wind to a nearby herd of sheep (seriously). Keeling started from scratch and built his own instruments, then got the money to buy and build even better ones. He trekked to the top of Mauna Loa in Hawaii and all the way down to Antarctica to isolate his new toys from local emission sources that would distort the numbers.[5]

The reward for all that dedication and work was immediate. His first measurements in 1957 not only produced reliable results, but also showed an unmistakable increase in the span of just a few months. The initial Antarctic measurement showed 311 parts per million (ppm) of CO2 in the atmosphere. One year later, it was up to 313. The year after that, it was 314.

The results from Hawaii matched the results from Antarctica, and the measurements have continued ever since. Every spring, the number goes down as the flowers, grasses, and forests of the Northern Hemisphere bloom, sprout, and grow, pulling CO2 from the air. Every fall, the number goes up as those plants die or go dormant for winter, releasing their carbon back into the atmosphere.

That natural pattern creates a zig-zag line (up in the winter, down in the summer). But after every annual cycle is complete, the graph ends up just a bit higher than it was. Keeling’s instruments were seeing the extra carbon coming out of smokestacks and tailpipes, the long buried plant matter returning its carbon to the air after millions of years.
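That zig-zag-riding-an-upward-escalator shape is simple enough to sketch in a few lines of code. The toy function below is purely illustrative — made-up round numbers standing in for the real Mauna Loa record — but it shows how a seasonal wiggle can sit on top of a steadily rising baseline:

```python
import math

def toy_keeling(year):
    """Toy Keeling-ish curve: a steady upward trend plus a seasonal wiggle.
    All numbers are illustrative stand-ins, not real Mauna Loa data."""
    baseline = 315.0             # roughly where the ppm count stood in 1958
    trend = 1.5 * (year - 1958)  # crude average growth, ppm per year
    seasonal = 3.0 * math.cos(2 * math.pi * (year % 1.0))  # annual cycle
    return baseline + trend + seasonal

# Each full annual cycle ends a bit higher than the last one started:
print(round(toy_keeling(1958.0), 1))  # 318.0
print(round(toy_keeling(1959.0), 1))  # 319.5
print(round(toy_keeling(2020.0), 1))  # 411.0
```

Same point in the seasonal cycle, one year apart, and the number is always a notch higher — that notch is the smokestack-and-tailpipe carbon.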

A pop culture journey from CO2 at ~315 ppm to the ~415 ppm we have today. Why wouldn't anyone leave Britney alone?

It came to be known as the Keeling Curve. It is the thermostat for the planet. And it is undeniably going up.

Of course, all Keeling and his predecessors and collaborators had done was show that CO2 held heat and that it was rising. That much was certain. What remained uncertain were two vitally important questions:

1) How much warming would occur?
2) How fast would it occur?

At the time, these were open discussions with hugely variable answers and no reliable means of testing. If the Earth might warm a single degree in hundreds of years, then the question was one of academic curiosity (fun, sure, but not all that important). But if the Earth might warm rapidly enough that serious effects would be felt within decades, then it was as urgent a threat as people had ever faced, on the same level as thermonuclear war.

Another new tool was needed to chop down the uncertainty of those two questions. It turned out to be the world’s most boring video game.

 

Joystick of the Apocalypse

“Super Nintendo, Sega Genesis, when I was dead broke, man, I couldn’t picture this.” – The Notorious B.I.G., poetic gourmand

An underappreciated aspect of video games is that, at heart, all they are is a series of interlocking math problems. When you push the button to make Mario jump, there’s a numerical value that says how high he’ll go depending on how long you hold the button. How far he leaps depends on how fast you had him running before he jumped. Each of these actions has a number assigned to it that is then plugged into an equation that determines where he lands.

Going all the way back to Pong, every video game ever made works on this principle. Your actions create variables that the software uses in predefined equations to calculate specific outcomes in rapid succession. Whether you’re pushing a button, turning a knob, or moving a joystick, all you’re really doing is giving the computer the numbers it needs to solve for X, Y, and Z.
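If you want to see the skeleton under the plumber, here’s a bare-bones sketch of that idea — every value here is invented for illustration, not pulled from any actual game:

```python
# Button input becomes numbers, numbers go into equations,
# equations decide where our hero lands. All values are made up.

GRAVITY = -30.0  # arbitrary game units per second squared

def jump_distance(run_speed, hold_time):
    """Horizontal distance a jump covers, from run speed and button-hold time."""
    launch_velocity = 12.0 * hold_time         # hold longer, launch harder
    air_time = 2 * launch_velocity / -GRAVITY  # time to go up and come back down
    return run_speed * air_time                # ground covered while airborne

short_hop = jump_distance(run_speed=5.0, hold_time=0.2)
long_jump = jump_distance(run_speed=10.0, hold_time=0.5)
print(short_hop, long_jump)  # the running, button-mashed jump goes farther
```

Your thumb never sees the equations, but the equations are all there is.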

As boring as that sounds, it makes for very fun games. But that idea can also be used for more serious purposes than squashing Koopas. Specifically, with enough data and enough refined equations, you can simulate the unfathomable complexity of the atmosphere.

Efforts to model the climate began back in the 1800s with pencil and paper and a few individual temperature measurements. As the decades went along, more and more reliable temperature data was collected and the first big, hulking computers came online. But the models kept crashing.

It turns out that the atmosphere has far more interacting variables than anyone at first suspected. Sunlight alone has dozens and dozens of sub-variables: cloud cover, angle, time of year, humidity, etcetera. And the sub-variables have sub-variables: what kind of clouds, how thick, how cold, how dark. The early models didn’t even account for terrain; they assumed a flat, featureless Earth. And still they crashed.

Identical starting conditions might produce an atmosphere that boiled itself away or froze solid. Huge and chaotic swings were the norm, and none of them ever stabilized into something that looked like the actual atmosphere.[6] This was like playing a Mario game where the same button press might make him jump, or take two steps to the left, or fly off the screen.

But scientists are a determined bunch, and the models kept getting better. Just as importantly, the data used to create the models got better as well. The first weather satellites came online in the late 1960s, providing accurate measurements of global temperature far more reliably than scattered weather stations ever could. Year over year, the data got more precise and the equations got more refined.

By the early 1970s, multiple groups working with different models were beginning to produce stable simulations of the Earth’s atmosphere. They were limited at first; the calculations needed were so complex that the processors available at the time could only handle a small column of air. But they could take real data (temperature, pressure, humidity) and produce relatively accurate predictions for future weather.[7]

Because uncertainty is always present, the key word there is ‘relatively’. The simulations never ran quite the same way twice. But running the simulations thousands of times with the same starting values produced a reliable range of outcomes. And if that sounds like voodoo or guesswork, just remember that your strikingly attractive local TV meteorologist uses this exact technique to tell you if you’re going to need a jacket tomorrow.
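The run-it-thousands-of-times trick can be sketched in miniature. In the toy below, a simple random walk stands in for a real weather model (a gigantic simplification, obviously), but the logic is the same: no single run means anything, while the pile of runs gives you a trustworthy range:

```python
import random
import statistics

def toy_forecast(start_temp, days, rng):
    """One simulation run: a random walk standing in for a weather model."""
    temp = start_temp
    for _ in range(days):
        temp += rng.gauss(0, 1.5)  # each simulated day nudges the temperature
    return temp

rng = random.Random(42)  # fixed seed so the sketch is reproducible
runs = [toy_forecast(20.0, days=5, rng=rng) for _ in range(10_000)]

# Any single run is junk; the ensemble is the forecast:
print(round(statistics.mean(runs), 1))   # clusters near the starting value
print(round(statistics.stdev(runs), 1))  # the spread IS the uncertainty
```

Stretch `days` out further and the spread grows — which is exactly the commute problem from earlier, now wearing a lab coat.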

A bell curve showing the difference between different standard deviations of probability as they relate to weather. Also: fuck that racist sack of crap Charles Murray for besmirching these otherwise fine curves.

This is uncertainty in action. We can’t be sure of any one outcome, but we can be highly confident about tomorrow, less confident about the five day forecast, and so on.

Beyond the day-to-day, if you ever watch coverage of a hurricane on TV and see that cone of where they think the storm will go, it’s being produced by models descended from those first 70s versions. The modern ones are more accurate and the people behind them don’t wear platform shoes or bell-bottoms, but everything works on the same principles.

These intensely boring video games were born out of the Pentagon’s need for reliable weather forecasts, but they can also be used to simulate what the Earth’s air would do with more CO2 in it. There may be only one atmosphere, but there are an unlimited number of simulation runs, so all you need to do is take the original variables, turn the carbon dial a bit to the right, and see what the computer spits out.
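To get a feel for the carbon dial, here’s a wildly simplified back-of-the-envelope version — nothing like a real climate model. It uses a standard textbook approximation for CO2’s extra heat-trapping (radiative forcing), scaled by an assumed warming of about 3°C per doubling of CO2, which is a commonly cited central estimate (and itself uncertain):

```python
import math

def toy_warming(co2_ppm, baseline_ppm=280.0, degrees_per_doubling=3.0):
    """Crude equilibrium warming estimate from a CO2 concentration.
    Toy numbers: 280 ppm preindustrial baseline, ~3 C per doubling assumed."""
    forcing = 5.35 * math.log(co2_ppm / baseline_ppm)  # W/m^2, textbook approx
    forcing_per_doubling = 5.35 * math.log(2.0)        # ~3.7 W/m^2
    return degrees_per_doubling * forcing / forcing_per_doubling

print(round(toy_warming(315.0), 2))  # around Keeling's first readings
print(round(toy_warming(415.0), 2))  # roughly today
print(round(toy_warming(560.0), 2))  # doubled from preindustrial: 3.0
```

Turn the dial (the `co2_ppm` input) and the number moves. The real models juggle millions of interacting variables instead of one equation, but the dial works the same way.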

Finally, eighty years (80!) after the questions had first been asked,* there was a tool that could answer the twin questions of 1) how much warming? and 2) how fast? The results were alarming.

(*Much early climate research was motivated by the Ice Ages. It turns out that the Earth’s tilt and orbit wobble on a regular 100,000-year cycle thanks to the gravity of Jupiter and other cosmic neighbors, a long-term theory that was finally accepted in the 1970s. I told you we’d use astronomy, didn’t I?)

With Keeling’s Curve as a baseline, the models gave rough but testable estimates of when we’d start to see meaningful warming. And the answer wasn’t centuries in the future. It was decades.

In 1981, James Hansen – seven years prior to his famous testimony before Congress – predicted that measurable warming would be detected by the year 2000 and reach “unprecedented magnitude” afterward. The newly inaugurated Reagan Administration didn’t want to hear that and cut his funding to the bone, but Hansen didn’t have to wait long to be proven correct.[8]

The 1980s turned out to be the hottest decade ever recorded. The 1990s broke that record. Then the 2000s broke that record. The final number isn’t quite in yet for the 2010s, but a new mark has already been set.[9] The 2020s will set a new one as well.

These aren’t scenarios, and they aren’t uncertain. These are real world measurements that were predicted with frightful accuracy forty years ago. The uncertainty has been beaten out of them through decades of argument, investigation, research, and name calling.

But all the certainty in the world can’t stop jackasses from jackassing. And that brings us to the deniers.

Don’t touch that dial! The next part of this story brings in our villains, people who are worse than Darth Vader, Voldemort, and Skynet combined! Continue to Chapter 4 – The Long Con: Denying Global Warming for Fun and Profit

Endnotes for Chapter 3:

[1 – The Discovery of Global Warming (Revised and Expanded Edition) by Spencer R. Weart, Harvard University Press, 2008, pp. 5-6. Most of this chapter is derived from this excellent and easily readable scientific history. Highly recommended.]
[2 – Weart, pp. 14-15]
[3 – Behind the Curve: Science and the Politics of Global Warming by Joshua P. Howe, University of Washington Press, 2014, p. 26]
[4 – Weart, pp. 22-23]
[5 – Weart, pp. 34-37]
[6 – Weart, pp. 91-93]
[7 – Weart, pp. 95-101]
[8 – Howe, pp. 130-132]
[9 – https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature]