Temperature Forecasts (temperature + forecast)

Selected Abstracts


Erratum: Bias-free rainfall forecast and temperature trend-based temperature forecast using T-170 model output during the monsoon season

METEOROLOGICAL APPLICATIONS, Issue 3 2010
Rashmi Bhardwaj
No abstract is available for this article. [source]


Probabilistic temperature forecast by using ground station measurements and ECMWF ensemble prediction system

METEOROLOGICAL APPLICATIONS, Issue 4 2004
P. Boi
The ECMWF Ensemble Prediction System 2-metre temperature forecasts are affected by systematic errors due mainly to resolution inadequacies. Moreover, other error sources are present: differences in height above sea level between the station and the corresponding grid point, boundary layer parameterisation, and description of the land surface. These errors are more marked in regions of complex orography. A recursive statistical procedure to adapt ECMWF EPS 2-metre temperature fields to 58 meteorological stations on the Mediterranean island of Sardinia is presented. The correction is made in three steps: (1) bias correction of systematic errors; (2) calibration to adapt the EPS temperature distribution to the station temperature distribution; and (3) doubling the ensemble size with the aim of taking analysis errors into account. Two years of probabilistic forecasts of freezing are tested by Brier Score, reliability diagram, rank histogram and Brier Skill Score with respect to the climatological forecast. The score analysis shows much better performance in comparison with the climatological forecast and direct model output, for all forecast times, even after the first step (bias correction). Further gains in skill are obtained by calibration and by doubling the ensemble size. Copyright © 2004 Royal Meteorological Society. [source]
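The three-step procedure described in the abstract lends itself to a compact illustration. The sketch below uses synthetic data and simple mean/variance adjustments; the variable names, the assumed analysis-error standard deviation, and the rescaling step are illustrative stand-ins and do not reproduce the paper's recursive, station-by-station procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: EPS 2-m temperature ensembles (days x members) and
# observed station temperatures (days), both in deg C. Values are synthetic.
n_days, n_members = 730, 51
raw_ens = rng.normal(loc=6.0, scale=2.0, size=(n_days, n_members))  # model climate
obs = rng.normal(loc=4.5, scale=3.0, size=n_days)                   # station climate

# Step 1: bias correction -- remove the mean systematic error.
bias = raw_ens.mean() - obs.mean()
ens_bc = raw_ens - bias

# Step 2: calibration -- rescale ensemble anomalies so the forecast
# distribution matches the spread of the station distribution.
scale = obs.std() / ens_bc.std()
ens_cal = obs.mean() + (ens_bc - ens_bc.mean()) * scale

# Step 3: double the ensemble by adding members perturbed with an assumed
# analysis-error standard deviation (the value here is purely illustrative).
analysis_err = 0.5
ens_dbl = np.concatenate(
    [ens_cal, ens_cal + rng.normal(0.0, analysis_err, ens_cal.shape)], axis=1)

# Verification of the freezing event (T < 0 deg C): Brier score and
# Brier skill score against a climatological probability forecast.
event = (obs < 0.0).astype(float)
prob = (ens_dbl < 0.0).mean(axis=1)
brier = np.mean((prob - event) ** 2)
p_clim = event.mean()
brier_clim = np.mean((p_clim - event) ** 2)
bss = 1.0 - brier / brier_clim
print(f"Brier score: {brier:.3f}, BSS vs climatology: {bss:.3f}")
```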


Models to improve winter minimum surface temperature forecasts, Delhi, India

METEOROLOGICAL APPLICATIONS, Issue 2 2004
A. P. Dimri
Accurate forecasts of minimum surface temperature during winter help in the prediction of cold-wave conditions over northwest India. Statistical models for forecasting the minimum surface temperature at Delhi during winter (December, January and February) are developed by using the classical method and the perfect prognostic method (PPM), and the results are compared. Surface and upper-air data are used for the classical method, whereas for PPM additional reanalysis data from the US National Centers for Environmental Prediction (NCEP) are incorporated in the model development. Minimum surface temperature forecast models are developed using data for the winter period 1985-89. The models are validated against an independent dataset (winters 1994-96). It is seen that by applying PPM, rather than the classical method, the model's forecast accuracy (correct to within ±2 °C) is improved by about 10%. Copyright © 2004 Royal Meteorological Society. [source]
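A minimal sketch of the regression-based development and validation cycle follows, assuming synthetic data. The predictor set, coefficients, and sample sizes are hypothetical; the point is only to show how a perfect-prognostic-style model is fitted on dependent data and then scored on independent data with the ±2 °C criterion used in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical winter dataset: three standardised predictors (e.g. previous-day
# temperature, an upper-air temperature, a moisture term) -- illustrative only.
n_train, n_test = 450, 180
X_train = rng.normal(size=(n_train, 3))
t_min_train = (8.0 + 2.0 * X_train[:, 0] + 1.5 * X_train[:, 1]
               - 1.0 * X_train[:, 2] + rng.normal(0.0, 1.5, n_train))

# Development step: multiple linear regression fitted by least squares.
A = np.column_stack([np.ones(n_train), X_train])
coef, *_ = np.linalg.lstsq(A, t_min_train, rcond=None)

# Validation on independent data; in PPM the predictors at forecast time
# would come from NWP or reanalysis fields rather than observations.
X_test = rng.normal(size=(n_test, 3))
t_min_test = (8.0 + 2.0 * X_test[:, 0] + 1.5 * X_test[:, 1]
              - 1.0 * X_test[:, 2] + rng.normal(0.0, 1.5, n_test))
pred = np.column_stack([np.ones(n_test), X_test]) @ coef

# Score used in the abstract: fraction of forecasts correct to within +/- 2 deg C.
hit_rate = np.mean(np.abs(pred - t_min_test) <= 2.0)
print(f"Forecasts within +/- 2 deg C: {hit_rate:.1%}")
```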


Can multi-model combination really enhance the prediction skill of probabilistic ensemble forecasts?

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 630 2008
A. P. Weigel
Abstract The success of multi-model ensemble combination has been demonstrated in many studies. Given that a multi-model contains information from all participating models, including the less skilful ones, the question remains as to why, and under what conditions, a multi-model can outperform the best participating single model. It is the aim of this paper to resolve this apparent paradox. The study is based on a synthetic forecast generator, allowing the generation of perfectly-calibrated single-model ensembles of any size and skill. Additionally, the degree of ensemble under-dispersion (or overconfidence) can be prescribed. Multi-model ensembles are then constructed from both weighted and unweighted averages of these single-model ensembles. Applying this toy model, we carry out systematic model-combination experiments. We evaluate how multi-model performance depends on the skill and overconfidence of the participating single models. It turns out that multi-model ensembles can indeed locally outperform a 'best-model' approach, but only if the single-model ensembles are overconfident. The reason is that multi-model combination reduces overconfidence, i.e. ensemble spread is widened while average ensemble-mean error is reduced. This implies a net gain in prediction skill, because probabilistic skill scores penalize overconfidence. Under these conditions, even the addition of an objectively-poor model can improve multi-model skill. It seems that simple ensemble inflation methods cannot yield the same skill improvement. Using seasonal near-surface temperature forecasts from the DEMETER dataset, we show that the conclusions drawn from the toy-model experiments hold equally in a real multi-model ensemble prediction system. Copyright © 2008 Royal Meteorological Society [source]
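The toy-model argument can be made concrete with a short sketch. The code below is an illustrative re-implementation under simplified assumptions (Gaussian truth and errors, two overconfident single models, unweighted pooling of members, Brier score for a median-exceedance event); it is not the paper's synthetic forecast generator or the DEMETER setup, and the error and spread values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each forecast case has a "true" anomaly; each single model sees it with an
# error but issues an ensemble whose spread is smaller than that error --
# i.e. the ensemble is overconfident (under-dispersive).
n_cases, n_members = 5000, 9
truth = rng.normal(0.0, 1.0, n_cases)

def overconfident_ensemble(err_sd, spread_sd):
    centre = truth + rng.normal(0.0, err_sd, n_cases)
    return centre[:, None] + rng.normal(0.0, spread_sd, (n_cases, n_members))

model_a = overconfident_ensemble(err_sd=0.6, spread_sd=0.3)  # more skilful model
model_b = overconfident_ensemble(err_sd=0.9, spread_sd=0.3)  # less skilful model
multi = np.concatenate([model_a, model_b], axis=1)           # unweighted pooling

def brier(ens, threshold=0.0):
    # Probabilistic forecast of the event "truth exceeds threshold".
    event = (truth > threshold).astype(float)
    prob = (ens > threshold).mean(axis=1)
    return np.mean((prob - event) ** 2)

print("Best single model Brier:", round(brier(model_a), 4))
print("Multi-model Brier      :", round(brier(multi), 4))
```

Pooling members from models with different centres widens the spread and so reduces overconfidence, which is the mechanism the paper identifies; whether the multi-model actually beats the best single model depends on how overconfident the contributing ensembles are.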