
    Published on 01-11-10 17:25

    Weather forecasts: Why they have improved, why they will improve and why they will always remain just a little bit crap.

    This is my first article and it attempts to go under the hood of weather forecasts to take a look at how they are produced using Numerical Weather Prediction (NWP) and atmospheric models. It's not intended to be a detailed or comprehensive guide to the subject, but to provide an overview of how forecasts are produced so that their strengths and weaknesses are more widely understood by pilots.

    In the beginning:

    One of the pioneers of Numerical Weather Prediction was a guy called Lewis Fry Richardson, who in 1910 devised one of the very first atmospheric models. At the time, weather forecasts were produced by forecasters matching weather patterns from the past with recent weather sequences. This approach was a bit hit and miss to say the least. Lewis worked out that if he divided the atmosphere into sections and inputted observational data, then by running a set of calculations he could generate a forecast, effectively modelling the atmosphere. He began by dividing the UK and part of northern Europe up into 200km squares, with each square being 5 layers in depth.
    His chosen date was the 20th of May 1910. He set out to take the weather observations from 7am and use them to predict the weather 6 hours later, using his model with a number of equations for each sector to calculate changes in variables such as temperature and pressure. This technique is at the heart of all Numerical Weather Prediction. As this was long before even the most basic computer had been invented, poor Lewis had to do all the calculations by hand with just a slide rule to help. Unfortunately this meant the forecast took 1,000 hours to produce, so it was way out of date before it was finished. Way, way out of date in fact, as it was not published until 1922!

    Not only was the forecast rather late, it was also quite badly wrong, as it predicted an alarming rise in pressure of 145 millibars in 6 hours. This was because Lewis had not applied modern smoothing techniques, but give the poor guy a break, this was 1910. When smoothing techniques were later applied to his data, the forecast was shown to be fairly accurate.

    Fast forward in time….

    Although the potential of atmospheric modelling was realised it was not until the advent of computer technology that further progress was made. From the 1950s onward atmospheric models have been experimented with and refined, leading to the sophisticated models in use today.

    As in Lewis Fry Richardson's early experiment, atmospheric models today are based on dividing some or all of the Earth's surface into squares. The more squares there are per area covered by the model, the higher resolution the model is said to be. And, just as Lewis pioneered, each square is then divided up into layers representing different altitudes. Temperature, wind, pressure, density and humidity data from surface weather stations, weather balloons and satellites is then inputted into the model and the computer program is run. The computer, or more often a bank of computers known as a supercomputer, does the job of crunching the numbers, working through all the equations and outputting the new values for each square for each step forward in time.
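
    The grid-and-timestep idea can be sketched in a few lines of code. This is purely illustrative: a row of grid squares holding temperatures, stepped forward with a simple smoothing rule standing in for the real equations, and the starting values are hypothetical.

```python
# Minimal sketch of a grid model: temperatures on a row of squares,
# stepped forward in time. Real models solve far richer equations
# over 3D grids; this only shows the update-each-square-each-step idea.
def step(temps, k=0.25):
    new = temps[:]
    for i in range(1, len(temps) - 1):
        # each square drifts toward the average of its neighbours
        new[i] = temps[i] + k * (temps[i - 1] - 2 * temps[i] + temps[i + 1])
    return new

grid = [15.0, 15.0, 30.0, 15.0, 15.0]   # a warm patch over one square
for _ in range(10):                     # ten steps forward in time
    grid = step(grid)
# The warm anomaly spreads out and weakens over the steps.
```

    A real model does this for every layer of every square, for many variables at once, which is where the supercomputer earns its keep.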

    So now we have computers, things should be easy, right…?

    Well, not exactly: each layer of each square has its own sub-model. These sub-models have to cover everything that will happen within that sector, from calculating the effect of solar radiation, to modelling cloud formation and rainfall, to interpreting the effect that interactions with surface features such as mountains or oceans will have on the atmosphere. It is these sub-models that take up much of the processing power of the supercomputers, as each new feature added to a sub-model can increase the potential accuracy of the forecast but will also mean the supercomputer takes longer to run the model and complete the forecast.

    Also, the higher the resolution of the model, the longer the forecast will take to produce, so a very high resolution model is only practical over a smaller area. This makes it useless for predicting the weather several days in advance, as weather outside the model area will have an effect that is not allowed for. This gives meteorologists no choice but to use lower resolution but larger area models for longer range forecasts such as 3-5 days plus.
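
    The resolution cost is brutal, as a back-of-envelope calculation shows. The numbers below are illustrative: halving the grid spacing quadruples the squares in each layer, and the timestep usually has to shrink in step with the spacing too, so the work grows roughly eightfold each time.

```python
# Rough cost scaling for grid resolution (illustrative figures only).
def relative_cost(spacing_km, base_km=200.0):
    refine = base_km / spacing_km        # how much finer than the base grid
    return refine ** 2 * refine          # (squares per layer) x (timesteps)

print(relative_cost(200.0))   # 1.0   -- Richardson-style 200 km squares
print(relative_cost(100.0))   # 8.0
print(relative_cost(25.0))    # 512.0
```

    So a model with 25km squares costs on the order of 500 times more computing than one with 200km squares covering the same area, which is why high resolution is reserved for small regions and short ranges.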

    Then there are fronts….

    The all-important weather fronts are usually the weather feature of most significance to aviators. The changes in pressure, humidity and temperature that indicate a front are present in the results the NWP models produce. However, due to the resolutions used, they are not always accurately placed. Fronts are narrow, and the sudden change between a warm and cold air mass at a front is not always accurately depicted by even the highest resolution model. Experienced meteorologists will draw in the front based on both the data and their own experience, but it may still be some way out, quite often literally miles out. This, coupled with the fact that fronts are relatively fast-moving features whose speeds can vary greatly, is the reason why they often arrive hours later or earlier than scheduled and catch the unwary pilot out.

    The seagull effect.

    A meteorologist called Edward Lorenz was studying early computer weather models and NWP in the 1960s. He noted that if he inputted just one initial value as 0.506 instead of the full 0.506127 then the computer would produce a very different result over time. He said of the results: "If the theory were correct, one flap of a seagull's wings could change the course of weather forever." In later speeches he changed seagull to a more poetic butterfly, which is just as well as the phrase "seagull effect" would never have caught on!
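
    You can see Lorenz's rounding effect for yourself. The sketch below uses his famous simplified three-equation convection system (not his original weather model, but the same behaviour) and integrates it twice with a basic Euler scheme: once from 0.506127 and once from the rounded 0.506.

```python
# Lorenz's simplified convection equations, stepped with simple Euler
# integration. Two runs differ only in the dropped digits: 0.506127
# vs 0.506. Illustrative only -- not a weather model.
def lorenz_step(x, y, z, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

full = (0.506127, 1.0, 1.0)   # the value as originally stored
rounded = (0.506, 1.0, 1.0)   # the value as re-typed
max_gap = 0.0
for _ in range(8000):         # run both models forward together
    full = lorenz_step(*full)
    rounded = lorenz_step(*rounded)
    max_gap = max(max_gap, abs(full[0] - rounded[0]))
# A difference of 0.000127 at the start grows until the two runs
# bear no resemblance to each other.
```

    The two trajectories track each other closely at first, then peel apart completely, which is exactly what Lorenz saw on his printouts.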

    This illustrates the difficulty in producing accurate long range forecasts: there are simply so many variables that it would be impossible to input them all. Even the initial data can sometimes be less than 100 per cent reliable; for example, local conditions around weather stations can affect the precise temperatures and humidity recorded. Data from the ocean is sparse and largely dependent on a relatively small number of floating buoys.

    To help reduce the impact of the butterfly effect in NWP, it's standard practice to produce a number of forecasts at once, using slightly different models or varying the initial conditions. The forecasts produced are then cross-checked against one another, which provides the best accuracy and protection against a single anomaly ruining a forecast. But this creates a further problem: when allocating computer resources, forecasters have to decide whether to run a smaller number of higher resolution models or a larger number of lower resolution ones for the best results, and either way is a big compromise.
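
    The ensemble idea can be sketched with a toy chaotic model. Here a logistic map stands in for a real NWP model (purely an illustration, with made-up numbers): each ensemble member starts from a slightly perturbed initial value, and the spread of the results tells you how trustworthy the forecast is.

```python
import random

# Ensemble forecasting sketch: run the same toy chaotic model many
# times from slightly perturbed starting values, then look at the
# spread of the outcomes. The logistic map is a stand-in, not a real
# weather model.
def toy_model(x, steps=50, r=3.9):
    for _ in range(steps):
        x = r * x * (1 - x)   # chaotic for r = 3.9
    return x

random.seed(42)
members = [toy_model(0.4 + random.uniform(-1e-6, 1e-6))
           for _ in range(20)]
spread = max(members) - min(members)
# A tight cluster of members suggests a predictable situation; a wide
# spread warns that tiny observation errors make the forecast unreliable.
```

    Real centres do the same thing with dozens of perturbed runs, and the ensemble spread is part of how forecasters judge how much to trust any given chart.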

    So how accurate are they?

    A 7 day forecast is now as accurate as a 5 day forecast was 20 years ago. A 5 day forecast is now equivalent in accuracy to a 3 day forecast 20 years ago. This shows an improvement rate of about one day per decade. One day meteorologists hope to produce a useful forecast for up to 14 days, which is regarded as the theoretical maximum.

    At the moment, 5-7 days is regarded as the most that a forecast will retain any useful degree of accuracy, with 3 day forecasts now being over 95% accurate (in terms of data such as temperature and pressure predictions; localised weather such as showers of rain is excluded).

    The future…

    Some of the most powerful supercomputers in the world are used in weather forecasting. With computing power increasing all the time and models getting ever more sophisticated, forecasts will slowly and steadily improve. This has the potential for major benefits, in addition to increasing the useful range of forecasts. For example, a high resolution model with detailed surface features included would be more accurate at predicting localised weather phenomena such as fog, and could even be used to provide better advance warning of flash flooding. These advances have a genuine potential to save lives, both in the air and on the ground.

    In conclusion:

    I hope this article has given a good basic overview of the subject of weather models and forecasting. Current models in use can vary from the basic grid model described: some use boxes of different shapes, irregular grids or boxes that change shape with the model's calculations. However, the principle is still the same as that first developed by Lewis Fry Richardson and the other pioneers of NWP in the early days, with just a slide rule to help.