The weather is a severe hazard in some parts of the world, and even where it is depressingly familiar, people love to complain about it. But in recent decades, their discussion has at least become a lot better informed, because our ability to forecast the weather has been much enhanced. Perhaps more importantly, forecasts have also become more useful to everyone from farmers to planners of major sporting events. According to the UK Meteorological Office, the three-day forecast today is as good as the one-day forecast was twenty years ago.
The simplest method of forecasting the weather is to produce a “persistence forecast”. Pay attention and I will teach you how to do this, at no extra cost. If today is rainy and the temperature at noon is 17°C, the persistence forecast for tomorrow is rainy with a noon temperature of 17°C. This sounds idiotic, because it would allow you to make a forecast with no knowledge of meteorology at all. But it is not. To make a valid persistence forecast, you have to know that you are in a part of the world, and a part of the year, where the weather is very stable.
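The persistence rule can be written out in a few lines. This is a minimal sketch; the observation values are the illustrative ones from the paragraph above, not real data.

```python
# A persistence forecast: tomorrow's forecast is simply a copy of
# today's observed conditions.

def persistence_forecast(today):
    """Forecast tomorrow's weather as identical to today's observations."""
    return dict(today)  # return a copy, so the forecast is independent

today = {"condition": "rainy", "noon_temp_c": 17}
tomorrow = persistence_forecast(today)
print(tomorrow)  # {'condition': 'rainy', 'noon_temp_c': 17}
```

The whole "model" is a copy operation, which is exactly why the method only works where the weather is stable from day to day.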
Forecasts have improved beyond the surprisingly high level of accuracy of persistence forecasts because of a fortunate conjunction of two factors. The first is that we have far more data than before on the current state of the atmosphere and on its historical condition. It is gathered from the Earth's surface, both on land and at sea, as well as from balloons and aircraft, and from space.
This data is so important and so perishable that its collection and free availability on a world scale are overseen by a UN agency, the World Meteorological Organization. As its website reports, the total amount of infrastructure in play includes “some 10,000 land stations, 1000 upper-air stations, 7000 ships, about 1200 drifting and moored buoys and fixed marine platforms, 3000 commercial aircraft, six operational polar-orbiting satellites, eight operational geostationary satellites, and several environmental R&D satellites/space-based sensors”. Although weather data is now being gathered on a far larger scale than in the past, just what is gathered has changed little. Measurements of temperature and pressure are probably the most basic, followed by the humidity of the air, wind direction and, on the ground, the amount of precipitation.
The other part of the picture is that increased computer power has allowed this data to be used to create genuine forecasts. In many countries, the state meteorological body has the largest computer in the land. The largest Linux database in the world, in Hamburg, Germany, is devoted to climate data, while the Earth Simulator in Japan was for some years the world's fastest computer.
The basic way in which a weather forecast is made is simplicity itself. You get as much data as you can on the present state of the atmosphere and use basic concepts, such as the flow of air from high pressure to low, to see how different it will be some time later. To do this you chop the atmosphere up into boxes, in three dimensions, and specify the initial condition of each. Then you let the air in the boxes interact according to the laws of physics and see what the state of play is in, say, an hour or twelve hours. Then you can repeat the process, using the result of the first step as input to the next one, to generate an even more distant forecast.
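The box-and-timestep idea can be sketched in miniature. The following is a toy model under heavy simplifying assumptions: a one-dimensional row of boxes each holding a pressure value, and a single crude rule standing in for the laws of physics (air flows from high pressure toward low, modelled here as exchange between neighbouring boxes). Real models use three-dimensional grids and the full equations of fluid dynamics; the numbers below are invented.

```python
# Toy grid forecast: advance a row of pressure "boxes" one timestep at
# a time, feeding each result back in as the input to the next step.

def step(pressures, rate=0.1):
    """One timestep: each box exchanges with its neighbours,
    in proportion to the pressure difference between them."""
    n = len(pressures)
    new = pressures[:]
    for i in range(n):
        left = pressures[i - 1] if i > 0 else pressures[i]
        right = pressures[i + 1] if i < n - 1 else pressures[i]
        new[i] += rate * (left + right - 2 * pressures[i])
    return new

def forecast(pressures, steps):
    """Repeat the step to push the forecast further into the future."""
    for _ in range(steps):
        pressures = step(pressures)
    return pressures

grid = [1000.0, 1010.0, 990.0, 1005.0]  # invented pressures, in hPa
print(forecast(grid, 12))  # state of play twelve timesteps later
```

Note the structure: the output of one step is the input to the next, which is exactly how a short-range forecast is extended into a longer one.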
This approach was thought up long before computers existed, or the data to feed them, and weather forecasts have long been one of the drivers of big-league computing. For a computer system to generate a useful forecast, it needs to run faster than the weather itself. A perfect forecast of the weather in 24 hours is of limited value if it takes 48 hours to appear.
Computer models have revolutionized weather forecasting. But if you set a computer running on a set of data describing the atmosphere as we see it now, it would still be foolish to think it will produce a perfect forecast for all time. In practice, even today's forecasts drop fast in reliability when they get more than a week ahead. This is described as the forecast “losing its skill”. Part of the difficulty is that the input information for the forecast is imperfect, but a more basic objection is the “butterfly problem”. This is shorthand for saying that the weather is a chaotic system. It is not chaotic in the way that the top of my desk is chaotic. In this context, “chaotic” means that it is never possible to allow for every minor immeasurable fluctuation in the input conditions, so that a butterfly flapping its wings over the Amazon could in the end produce a storm in New York.
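The butterfly problem is easy to demonstrate with a toy chaotic system. The sketch below uses the logistic map, a standard textbook example of chaos rather than anything to do with real weather: two starting values differing by one part in a billion stay close for a while, then diverge until the two "forecasts" are effectively unrelated.

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r * x * (1 - x) at r = 4 (a chaotic regime).

def logistic(x, r=4.0, steps=1):
    """Iterate the logistic map 'steps' times from starting value x."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a, b = 0.400000000, 0.400000001  # near-identical initial conditions
for n in (0, 10, 30, 50):
    print(n, abs(logistic(a, steps=n) - logistic(b, steps=n)))
```

The immeasurably small initial difference, the equivalent of the butterfly's wing-flap, grows until it dominates the result, which is why no amount of computer power buys a perfect long-range forecast.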
Even in the shorter term, weather forecasts fall down because the computers that produce them can generate only a central expectation if they are fed only a single set of starting data. One way round this is to set a number of forecasts running with slightly different input assumptions. This is an “ensemble forecast”. If five are run and they all agree that tomorrow will be dry, that gives the forecast more credibility. If they predict everything from blizzards to Saharan sunshine in a few days, it is time for a rethink. In practice, some ensemble forecasts retain at least part of their skill for up to fifteen days ahead. In the Canadian weather forecasting system, sixteen forecasts with varying assumptions are run in tandem, while the European Centre for Medium-Range Weather Forecasts manages 51, a central prediction plus 50 variants.
As well as adding to the credibility of the central forecast, these ensemble forecasts allow us to get an idea of how much variation to expect around it. So a single forecast will predict either rain or no rain. Run enough alternative versions, and you can produce those handy predictions that there is, say, a 60 percent chance of rain.
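The step from an ensemble to a percentage is just counting. The sketch below assumes a hypothetical stand-in model, `rains()`, that turns a single starting humidity into a rain/no-rain outcome; the perturbation offsets are invented. Run one member per perturbed input and report the fraction of members that predict rain.

```python
# Turning an ensemble of forecasts into a probability of rain.

def rains(humidity):
    # Hypothetical stand-in for a full forecast model: rain if the
    # (perturbed) starting humidity exceeds a threshold.
    return humidity > 0.70

def rain_probability(humidity, perturbations):
    """Run one forecast per perturbed input; return the rainy fraction."""
    members = [rains(humidity + p) for p in perturbations]
    return sum(members) / len(members)

# Five members, like the five-forecast example above; offsets invented.
perturbations = [-0.06, -0.03, 0.0, 0.03, 0.06]
print(rain_probability(0.72, perturbations))  # -> 0.6, i.e. a 60% chance
```

Three of the five members land above the threshold, so the ensemble reports a 60 percent chance of rain; with 51 members, as at ECMWF, the percentages come in much finer steps.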