The verification game
When a forecaster claims to be "80 percent accurate", what does it actually mean? A whole sub-field of meteorology, forecast verification, is devoted to this question. The guru of verification was the late US researcher Allan Murphy, who claimed a good forecast needs to have consistency, quality and value. Imagine a set of forecasts that are always exactly 10°C/18°F too warm in the winter and 10°C/18°F too cool in the summer.
On the surface, the quality of those forecasts is poor, but if one can spot the bias, their consistency and value are high. Many public and private forecasters score their predictions using statistical techniques that gauge the discrepancies between the actual and predicted numbers for high and low temperatures, chances of rain or snow, and the like. Some of these techniques emphasize total error; others place more weight on the bigger miscues. For instance, when a forecaster misses four high temperatures by 1° and the fifth by 16°, the average error is 4° per day. The same total error would accumulate for another forecaster who is 4° off on each of the five days. Which outlooks would you prefer? That depends on how you're using the forecast. Research suggests that people and industries tend to be most concerned about reducing the error on those rare days when the forecast is way off base.

Some verification studies (including a few by motivated citizens) can be found on the Web. A few US TV stations even verify their work on the air by rewarding a random viewer when the previous day's forecast high is more than, say, 1.7°C/3°F off. Benchmarks like this make it possible to say that a forecast provider is 80 percent accurate – but the claim only makes sense if you know the criteria on which it's based.
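The two-forecaster comparison can be sketched in a few lines of Python. Mean absolute error is one technique that emphasizes total error; root-mean-square error is one standard way to place more weight on the bigger miscues. The error values below simply mirror the text's hypothetical example, not any real verification data.

```python
import math

def mean_absolute_error(errors):
    # Average of the absolute misses: treats a 16° bust the
    # same as sixteen separate 1° misses.
    return sum(abs(e) for e in errors) / len(errors)

def root_mean_square_error(errors):
    # Squaring before averaging penalizes large misses
    # disproportionately, then the square root restores degrees.
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical forecasters from the example above:
steady = [4, 4, 4, 4, 4]        # off by 4° every day
one_bust = [1, 1, 1, 1, 16]     # four near-misses, one big bust

print(mean_absolute_error(steady), mean_absolute_error(one_bust))
print(root_mean_square_error(steady), root_mean_square_error(one_bust))
```

By mean absolute error the two records look identical (4° per day each), but root-mean-square error scores the steady forecaster at 4° and the busting forecaster at roughly 7.2° – which of the two scores matters more depends, as the text notes, on how you use the forecast.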