Computer models
Your weathercaster might have more charisma, but computer models are the building blocks of almost every weather forecast you'll ever get on TV or anywhere else. Modern meteorology would be impossible without them. A computer model that predicts the weather is nothing more or less than a miniature replica of the atmosphere. Models are each made up of thousands of lines of computer code, but they behave more like dynamic entities. Similar to an elaborate computer game, a model takes input and makes things happen. Cold fronts and low-pressure centres march across the face of a modelled Earth every day. These weather features aren't merely pulled out of storage to match the day's conditions; once a model is given today's weather, it actually produces tomorrow's from scratch, following the laws of atmospheric physics coded in the software. Some of the biggest weather features of recent years – such as the gigantic winter storm on the US east coast in March 1993, or the Christmas 1999 windstorm that pummelled France – came to life inside computer models a week or more before they materialized in the real world.
Waiting for computers
The idea of “calculating the weather”, as Frederik Nebeker calls it in his book of the same name, didn't take hold until weather was measured by numbers rather than language. Before weather instruments came along in the seventeenth century, practically all observations of the weather were qualitative: “cloudy” skies, “heavy” rain, “strong” wind. The organized practice of recording and sharing weather data within countries wasn't in place until after 1850, and the global exchange of this data is a product of the twentieth century.
The seeds of a forecast revolution were sown in Bergen, Norway, around the time of World War I by the same group of meteorologists that came up with the idea of cold and warm fronts. Vilhelm Bjerknes and his colleagues devised a list of seven variables that controlled our weather. According to the Norwegians, if we had full knowledge of these seven factors, we could completely describe the weather across the entire planet. These elements were:
- Wind speed in all three directions (east-west, north-south and vertical)
- Air pressure
- Temperature
- Moisture by volume
- Mass density, or the amount of air in a given space
Computers weren't yet on the scene, so the Bergen group considered it impractical to project these elements much more than a couple of days into the future. In the Bergen style of prediction – which led the field for decades, and is still influential – the forecaster tracks fronts, lows and other weather features and compares them to idealized life cycles that were rendered (quite elegantly) in three-dimensional drawings by the Norwegians. This approach relies heavily on a discerning eye and an aptitude for picturing where a front or low might go and how it might evolve.
In the 1920s, the eclectic British scientist L F Richardson took the Norwegian ideas a step further. He came up with a set of seven equations that connected the seven elements above with the rules of physics laid down by Isaac Newton, Robert Boyle and Jacques Charles. If you could solve these seven equations, then Richardson believed you could not only describe the current weather, but extend it into the future. He envisioned a “forecast factory”, where hundreds of clerks would carry out the adding, subtracting, multiplying and dividing needed to create a forecast by numbers. Richardson and his wife spent six weeks doing their own number-crunching in order to test his ideas on a single day's weather.
Although Richardson's test was unsuccessful, his equations materialized again almost thirty years later in the world's first general-purpose electronic computer. The ENIAC (Electronic Numerical Integrator And Computer) was created in the US at the close of World War II. In order to show the machine's prowess and promise, its developers hunted for a science problem that could benefit from raw computing power. Weather prediction filled the bill. Richardson's equations were updated and translated into machine language, and on March 5, 1950, the ENIAC began cranking out the first computerized weather forecast. The computer took almost a week to complete its first 24-hour outlook. Later that spring, one-day forecasts were being finished in just about a day. The really good news was that, unlike Richardson's earlier test, nothing went horribly awry. In fact, ENIAC did a better than expected job of portraying large-scale weather a day in advance. Computer modelling was on its way, and in the years since, it's gone from strength to strength. Computer power, model sophistication and forecast accuracy have increased hand in hand.
How a model works
Imagine a 3-D lattice of intersecting boxes, with a tiny weather station in the centre of each box. Now extend that lattice over a whole continent, or the whole globe, and you have some idea of the framework used by a computer model.
To do its job, the model first breaks the atmosphere down into boxes that represent manageable chunks of air, far wider than they are tall. In a typical weather-forecast model, each chunk might be about 30km/18 miles across and anywhere from 8-800 metres/25-2500ft deep, with the sharpest resolutions closest to ground level. The model uses the physical equations above to track the weather elements at the centre of each box across a very short time interval. Two or three minutes of weather might be modelled in only a few seconds of computer time. At the end of each step forward, the data may be adjusted so that each box is in sync with adjacent boxes. Then the weather in the model moves ahead another couple of minutes. At regular intervals – perhaps every three to six hours within the model's virtual time – a set of maps emerges, used by forecasters to analyse the weather to come.
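To make the box-and-time-step idea concrete, here is a deliberately tiny sketch – not any real forecast model – that blows a warm "bubble" of air along a single row of grid boxes using a simple finite-difference step. The grid spacing, wind speed and time step are invented for illustration.

```python
import numpy as np

# A toy one-dimensional "model": a row of grid boxes, each holding a
# temperature value, carried along by a constant westerly wind.
# All numbers here are illustrative, not taken from any real model.

nx = 120                 # number of grid boxes in the row
dx = 30_000.0            # box width in metres (~30 km, as in the text)
wind = 20.0              # west-to-east wind speed in m/s
dt = 120.0               # time step in seconds (~2 minutes)

# Initial condition: a warm bubble embedded in an otherwise cool airstream.
x = np.arange(nx) * dx
temp = 5.0 + 10.0 * np.exp(-((x - 0.3 * nx * dx) / (5 * dx)) ** 2)

def step(temp, wind, dx, dt):
    """Advance the temperature field one time step with an upwind scheme."""
    new = temp.copy()
    # With a wind blowing from the west, each box inherits air from its
    # western neighbour: dT/dt = -wind * dT/dx (simple upwind difference).
    new[1:] = temp[1:] - wind * dt / dx * (temp[1:] - temp[:-1])
    return new

# Run six hours of "model time" and report where the warm bubble has moved.
steps_per_hour = int(3600 / dt)
for hour in range(1, 7):
    for _ in range(steps_per_hour):
        temp = step(temp, wind, dx, dt)
    print(f"hour {hour}: warmest box at {np.argmax(temp) * dx / 1000:.0f} km")
```

A real model performs the same kind of bookkeeping in three dimensions, for all of the Bergen variables at once, over millions of boxes.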
Today, just as in 1950, modelling the real atmosphere demands a stunning amount of computer horsepower. The weather changes every instant, and no human being or weather station can be everywhere around the world to monitor those changes 24 hours a day. The typical weather-forecast model is run on a supercomputer that can perform many billions of calculations each second, and it still takes an hour or so to finish its work. Global climate models demand still more computer time, as we'll see in our primer on climate change.
Because high-speed computers are so expensive (tens of millions of US dollars for the best ones) and because their time is so valuable, there's tremendous pressure on modellers to be efficient. Most models that predict longer-term weather, up to ten or fifteen days out, span at least one hemisphere (northern or southern). The maps extracted from these models usually focus on a region of interest, plus enough surrounding territory to depict any weather features moving in from afar.
Where it all happens
Nearly all of the world's nations have their own weather services, but only a few of them operate full-blown global weather models. In these countries, modelling tends to be focused at a single location where computers can be concentrated. Some of these centres of action include:
- US National Meteorological Center, near Washington, DC
- The Canadian Meteorological Centre, just outside Montreal
- The UK's Met Office headquarters, based in Exeter
- Australia's Bureau of Meteorology, Melbourne
- Japan Meteorological Agency, Tokyo
- The co-operative European Centre for Medium-Range Weather Forecasts, Reading, England
Each of these centres has one or more of the world's top supercomputers, all produced by a handful of firms. Upgrades are frequent: in recent years the speed of computer processors has doubled about every eighteen months. This gives scientists the freedom to create more detailed and realistic models. A number of other countries, universities, multinational consortia and private companies operate various weather models that cover a region or nation rather than the entire globe.
It takes co-operation between countries to make continent-scale models happen. Global agreements that emerged in the twentieth century still guide the collection and sharing of weather data and the exchange of model output. Each model run is kicked off with a good first guess: the forecast for the present time pulled from the previous run. Then, data from a variety of sources – including ground-based weather stations, balloon-borne instruments, satellites and aircraft – are “assimilated” into the model. (Since eastward-flying aircraft often take advantage of the jet stream, this river of air is often one of the best-sampled regions of the upper atmosphere.) Weather balloons are traditionally launched at two key times each day: 00:00 and 12:00 Universal Time, which correspond to midnight and noon at Greenwich, England. Ground-station reports are collected hourly at many locations, and more often than that by automated stations. As these reports come in, they're run through a sophisticated set of quality-control procedures.
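As a rough sketch of what “assimilating” data means – and of how each grid point inherits values from the observations scattered around it – the toy example below nudges a first-guess temperature field toward a few nearby station reports, giving closer stations more weight. The station locations, the weighting formula and the 0.7 blending factor are all invented for illustration; operational assimilation schemes are vastly more sophisticated.

```python
import numpy as np

# Toy assimilation: blend a model's first guess with scattered observations.
# Positions are in kilometres along one axis; all values are made up.

grid_x = np.arange(0, 1000, 100.0)            # ten grid points, 100 km apart
first_guess = np.full_like(grid_x, 12.0)      # previous run said 12 degC everywhere

obs_x = np.array([150.0, 420.0, 880.0])       # three station locations
obs_t = np.array([10.5, 14.2, 11.0])          # their reported temperatures

def assimilate(grid_x, background, obs_x, obs_t, radius=200.0):
    """Pull each grid point toward nearby observations (distance-weighted)."""
    analysis = background.copy()
    for i, x in enumerate(grid_x):
        dist = np.abs(obs_x - x)
        weights = np.exp(-(dist / radius) ** 2)   # nearby reports count for more
        if weights.sum() > 1e-6:
            increment = np.sum(weights * (obs_t - background[i])) / weights.sum()
            # Trust the observations only partially; keep some of the first guess.
            analysis[i] = background[i] + 0.7 * increment
    return analysis

print(assimilate(grid_x, first_guess, obs_x, obs_t).round(1))
```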
Once all this raw material is fed into a model, it has to be sliced and diced into manageable portions. Each point in the model is assigned starting-point data that best match the surrounding observations, which may be 16km/10 miles apart in the Netherlands or 1600km/1000 miles apart over the ocean.
When models get into trouble
Even the best models have to make inevitable compromises if they're going to process the atmosphere's action quickly enough to be useful. A small mountain range may fall into a single chunk or two of model space, which means that all the detail and nuance of the peaks are translated into the equivalent of a giant concrete block. Instead of tracking realistic clouds, large-scale models rely on vastly simplified portrayals of the cloud types that reside within a chunk of air.
All this can obviously lead to inaccuracies. For instance, tropical forecasters have long had to watch for “boguscanes” – fictional hurricanes spun up by models too eager to build a tropical cyclone out of an inconsequential weather feature. Until it was improved in 2001, one major US model created dozens of boguscanes each year across the tropical Atlantic, far more than the actual number of hurricanes that occur. Models may also have trouble with shallow cold air masses, the kind that can slide a thousand kilometres southward from the Arctic in winter, but may extend upward less than a kilometre. These air masses once crept below a model's line of sight so cleanly that the model insisted the temperature of the lowest layer was above freezing when it was actually -10°C/14°F or colder. A new generation of regional models is packed with higher vertical resolution that largely alleviates this problem.
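The shallow cold-air problem is easy to demonstrate with a made-up vertical profile: if a 500-metre-deep layer of -10°C air sits beneath much milder air, a model whose lowest layer is two kilometres thick averages the cold away, while thinner layers keep it. The profile and layer depths below are hypothetical numbers chosen purely to illustrate the point.

```python
import numpy as np

# An invented vertical temperature profile: bitter cold in the lowest 500 m,
# much milder air above it (a classic shallow Arctic air mass).
heights = np.arange(0, 3000, 100.0)                  # metres above ground
profile = np.where(heights < 500, -10.0, 5.0)        # temperature in degC

def layer_mean(heights, profile, layer_depth):
    """Average the profile over a lowest model layer of the given depth."""
    in_layer = heights < layer_depth
    return profile[in_layer].mean()

# A coarse model with a 2000 m lowest layer "sees" air above freezing...
print(f"2000 m lowest layer: {layer_mean(heights, profile, 2000.0):.1f} degC")
# ...while a model with 200 m layers keeps the cold air where it belongs.
print(f" 200 m lowest layer: {layer_mean(heights, profile, 200.0):.1f} degC")
```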
Most of the major recurring errors in a model are caught before, or shortly after, its release. But just as you might take the wrong medication for a cold and experience nasty side-effects, modellers can make a model worse when they tweak it to fix a recurring problem. Adding resolution tends to sharpen a model's picture of reality, but there's a catch: if each part of a model has a built-in amount of error, then making a model more complex may only multiply the uncertainty it holds. One solution is not to tamper with the model at all and simply account for boguscanes, missing cold waves or other problems as they occur. Another is to address some of these problems with a statistical massage.
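The worry about complexity multiplying error can be shown with a crude numerical experiment: give each of several hypothetical model components a small random error and watch the spread of the combined result grow as more components are chained together. The two-percent error figure and everything else here are invented for illustration, not drawn from any real model's error budget.

```python
import numpy as np

rng = np.random.default_rng(1)

def spread_after(n_components, error=0.02, trials=10_000):
    """Spread of a result passed through n components, each with ~2% random error."""
    factors = 1.0 + error * rng.standard_normal((trials, n_components))
    return np.std(factors.prod(axis=1))

# More interacting components means a wider range of possible outcomes.
for n in (1, 5, 20):
    print(f"{n:2d} components -> spread of result ~{spread_after(n):.3f}")
```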
From model to forecast
You could take the raw data from a model and interpolate between boxes to produce a halfway acceptable forecast for your home. “Halfway” is the key word, however. Because of the limits of computer storage space and power, such a model can't specify all the local influences that might affect your weather, such as an inlet that keeps a coastal town warmer than its neighbours in winter and cooler in summer, or downtown buildings that generate enough heat to keep the nighttime air several degrees warmer than in the suburbs.
Many large-scale models are localized through a system called Model Output Statistics. MOS equations are designed to tweak the numbers provided by their parent large-scale model. Each MOS equation is built by comparing the parent model's forecasts of temperature, wind and the like to the actual conditions observed in a given city, then identifying and accounting for the persistent biases. After every model run, MOS spits out a set of numbers that serves as a starting point in crafting a public weather forecast. MOS equations have been developed for more than one hundred towns and cities in the UK and over three hundred in the US.
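In spirit, a MOS equation is simply a regression fitted to a history of model forecasts and the weather that actually followed at one location. The toy example below fits a single straight-line correction for a model that runs persistently warm at a hypothetical town; real MOS equations use many predictors and years of data, and every number here is invented.

```python
import numpy as np

# Invented training data: a model's forecast high temperatures for one town
# versus what was actually observed. This imaginary model runs warm.
model_temp = np.array([18.0, 22.0, 15.0, 25.0, 20.0, 12.0, 27.0, 17.0])
observed   = np.array([16.1, 19.8, 13.5, 22.7, 18.2, 10.9, 24.4, 15.4])

# Fit observed = a * model + b by least squares: a one-predictor "MOS equation".
a, b = np.polyfit(model_temp, observed, 1)
print(f"MOS correction: observed ~= {a:.2f} * model + {b:.2f}")

# Apply the correction to a fresh model forecast for this town.
new_forecast = 23.0
print(f"raw model: {new_forecast:.1f} degC  ->  MOS: {a * new_forecast + b:.1f} degC")
```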
It takes years before enough experience accumulates with a given model to build an MOS database for it. That's why MOS equations are typically based on older models, run alongside the state-of-the-art ones. The latter are used to scope out the overall weather situation and adjust the MOS data when needed. For instance, if the best models show that a rare snowfall is brewing at a sub-tropical location, the MOS equation might handle it poorly because it's never come across such a feature before (just as the snow itself might baffle the warm-blooded residents).
Putting it together
The maps and numbers generated by computer models and MOS are where objective guidance stops and the subjective skill of a human forecaster begins. It's up to this person to decide which words and numbers to pass on to the public or to a specialized audience.
As in a newsroom or a factory, the deadline pressure in a weather office can be tremendous – but not always. In places like the tropics or in seasons like mid-summer, days and even weeks can go by without a truly challenging forecast. MOS and other model guidance then become the order of the day, and a forecaster's main task is to make sure no errors or surprises are hidden in the model output. US forecaster Leonard Snellman came up with the phrase “meteorological cancer” in 1977, a few years after MOS was introduced. Snellman worried that complacency would allow errors to creep into a forecast and escape unchecked. His viewpoint resonated with many, although a general, gradual improvement in forecasting skill hints that the cancer Snellman feared isn't metastasizing on a widespread basis.
Whether public or private, every forecasting outfit has its own lingo and style. This helps establish a public identity; more importantly, it ensures that people aren't surprised too often by a term they've never heard before. A new phrase should – and typically does – mean something important, as it did in Oklahoma City, Oklahoma, on May 3, 1999. A gigantic tornado (soon to be the costliest in world history) was bearing down on the region that evening. To help convey the gravity of the situation to a twister-jaded populace, local NWS forecasters deviated from the usual “tornado warning” to declare a “tornado emergency”.
Most wording decisions aren't made under such strain. Forecast entities might take years to hash out a seemingly trivial change in their terminology. That's because they're all too aware that people vary in how they interpret as simple a word as “sunny”. In US forecasts, “partly sunny” and “partly cloudy” are interchangeable – they both indicate 30 to 60 percent of the sky will be cloud-covered. Yet when surveyed, a group of US university students actually thought “partly cloudy” pointed to a brighter outlook. To some of them, “mostly sunny” (10-30 percent cloud cover, according to the NWS) meant that clouds would fill as much as 60 percent of the sky. In the UK, there aren't any such quantitative definitions, but the forecast lingo definitely tilts toward the cloudy side of the spectrum. The word “clear” isn't even in the Meteorological Office guidelines. “Bright” refers to a lightly overcast day with diffuse sunlight and perhaps some direct sun. (Of course, TV weathercasters and other forecast purveyors can always put their own spin on these officially sanctioned definitions.)
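Using percentage bands like the NWS figures quoted above, the link between cloud cover and forecast wording could be sketched as a simple lookup; the thresholds for “mostly cloudy” and “cloudy” below are assumptions added to round out the scale, not official definitions.

```python
def sky_wording(cloud_cover_percent: float) -> str:
    """Map a cloud-cover percentage to a US-style forecast term (simplified sketch)."""
    if cloud_cover_percent < 10:
        return "sunny"
    if cloud_cover_percent < 30:
        return "mostly sunny"        # 10-30 percent, per the NWS bands quoted above
    if cloud_cover_percent < 60:
        return "partly sunny"        # or, interchangeably, "partly cloudy"
    if cloud_cover_percent < 90:
        return "mostly cloudy"       # assumed threshold for illustration
    return "cloudy"                  # assumed threshold for illustration

for cover in (5, 20, 45, 75, 95):
    print(f"{cover:3d}% cloud cover -> {sky_wording(cover)}")
```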