Forecast Errors (forecast + error)
Kinds of Forecast Errors: Selected Abstracts

The Impact of Forecast Errors on Early Order Commitment in a Supply Chain
DECISION SCIENCES, Issue 2 2002. Xiande Zhao
ABSTRACT: Supply chain partnership involves mutual commitments among participating firms. One example is early order commitment, wherein a retailer commits to a fixed order quantity and delivery time from a supplier before the real need takes place. This paper explores the value of practicing early order commitment in the supply chain. We investigate the complex interactions between early order commitment and forecast errors by simulating a supply chain with one capacitated supplier and multiple retailers under demand uncertainty. We found that practicing early order commitment can generate significant savings in the supply chain, but the benefits are only valid within a range of order commitment periods. Different components of forecast errors have different cost implications for the supplier and the retailers. The presence of trend in the demand increases the total supply chain cost, but makes early order commitment more appealing. The more retailers share the same supplier, the more valuable it is for the supply chain to practice early order commitment. Except in cases where little capacity cushion is available, our findings are relatively consistent in environments where cost structure, number of retailers, capacity utilization, and capacity policy are varied. [source]

Predictability in Financial Analyst Forecast Errors: Learning or Irrationality?
JOURNAL OF ACCOUNTING RESEARCH, Issue 4 2006. Stanimir Markov
ABSTRACT: In this paper, we propose a rational learning-based explanation for the predictability in financial analysts' earnings forecast errors documented in prior literature. In particular, we argue that the serial correlation pattern in analysts' quarterly earnings forecast errors is consistent with an environment in which analysts face parameter uncertainty and learn rationally about the parameters over time. Using simulations and real data, we show that the predictability evidence is more consistent with rational learning than with irrationality (fixation on a seasonal random walk model or some other dogmatic belief). [source]

Evidence That Management Earnings Forecasts Do Not Fully Incorporate Information in Prior Forecast Errors
JOURNAL OF BUSINESS FINANCE & ACCOUNTING, Issue 7-8 2009. Weihong Xu
ABSTRACT: This paper investigates whether managers fully incorporate the implications of their prior earnings forecast errors into their future earnings forecasts and, if not, whether this behavior is related to the post-earnings announcement drift. I find a positive association in consecutive management forecast errors, suggesting that managers underestimate the future implications of past earnings information when forecasting earnings. I also find that managers underestimate the information in their prior forecast errors to a greater extent when they make earnings forecasts with a longer horizon. Finally, I find that, similar to managers, the market also underreacts to earnings information in management forecast errors, which leads to predictable stock returns following earnings announcements. [source]
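Both abstracts on forecast-error predictability above turn on the same primitive: measuring serial correlation in consecutive forecast errors. As a minimal illustration of that calculation only (not either paper's actual estimation, which involves additional controls and tests), the following sketch regresses the current forecast error on the prior one; the data and all parameter values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forecast errors with mild positive serial correlation (AR(1)).
n, phi = 400, 0.3
e = np.zeros(n)
for t in range(1, n):
    e[t] = phi * e[t - 1] + rng.normal(scale=1.0)

# OLS of e_t on e_{t-1}: a positive slope indicates under-reaction to past errors.
x, y = e[:-1], e[1:]
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
se = np.sqrt(np.sum(resid**2) / (len(y) - 2) / np.sum((x - x.mean()) ** 2))
print(f"slope = {beta[1]:.3f}, t-stat = {beta[1] / se:.2f}")
```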
Accounting Policy Disclosures and Analysts' Forecasts
CONTEMPORARY ACCOUNTING RESEARCH, Issue 2 2003. Ole-Kristian Hope
ABSTRACT: Using an international sample, I investigate whether the extent of firms' disclosure of their accounting policies in the annual report is associated with properties of analysts' earnings forecasts. Controlling for firm- and country-level variables, I find that the level of accounting policy disclosure is significantly negatively related to forecast dispersion and forecast error. In particular, I find that accounting policy disclosures are incrementally useful to analysts over and above all other annual report disclosures. These findings suggest that accounting policy disclosures reduce uncertainty about forecasted earnings. I find univariate but not multivariate support for the hypothesis that accounting policy disclosures are especially helpful to analysts in environments where firms can choose among a larger set of accounting methods. [source]

The Effect of National Governance Codes on Firm Disclosure Practices: Evidence from Analyst Earnings Forecasts
CORPORATE GOVERNANCE, Issue 6 2008. John Nowland
ABSTRACT (Manuscript Type: Empirical). Research Question: This study examines whether voluntary national governance codes have a significant effect on company disclosure practices. Two direct effects of the codes are expected: (1) an overall improvement in company disclosure practices, which is greater when the codes have a greater emphasis on disclosure; and (2) a leveling out of disclosure practices across companies (i.e., larger improvements in companies that were previously poorer disclosers) due to the codes' new comply-or-explain requirements. The codes are also expected to have an indirect effect on disclosure practices through their effect on company governance practices. Research Findings/Results: The results show that the introduction of the codes in eight East Asian countries has been associated with lower analyst forecast error and a leveling out of disclosure practices across companies. The codes are also found to have an indirect effect on company disclosure practices through their effect on board independence. Practical Implications: This study shows that a regulatory approach to improving disclosure practices is not always necessary. Voluntary national governance codes are found to have both a significant direct effect and a significant indirect effect on company disclosure practices. In addition, the results indicate that analysts in Asia do react to changes in disclosure practices, so there is an incentive for small companies and family-owned companies to further improve their disclosure practices. [source]

Dimensioning of secondary and tertiary control reserve by probabilistic methods
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 4 2009. Christoph Maurer
ABSTRACT: Given the rising share of intermittent generation from renewable energy sources on the one hand, and increased regulatory efforts to lower transmission costs and tariffs on the other, the optimal dimensioning of the necessary control reserve has gained additional importance in recent years. Grid codes like the UCTE Operation Handbook do not provide definitive and unambiguous methods for dimensioning secondary and tertiary control reserves. This paper therefore presents a method which calculates the necessary control reserve considering all important drivers of power imbalances, such as power plant outages, load variations and forecast errors. For dimensioning, a probabilistic criterion, the accepted probability of insufficient control reserve, is used. Probability density functions of control area imbalances are calculated using a convolution algorithm. The paper provides analyses for a stylised example system to demonstrate the capabilities of the method. A sensitivity analysis shows the impact of drivers such as plant failures and forecast errors of load and generation. The presented method is used by transmission system operators and regulatory authorities to determine and substantiate the necessary amount of control reserve. Copyright © 2009 John Wiley & Sons, Ltd. [source]
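The reserve-dimensioning abstract above rests on two steps: convolving the probability density functions of independent imbalance drivers, and reading a reserve level off the tail of the resulting distribution at the accepted deficit probability. A minimal sketch of that idea on a discretised grid follows; the driver distributions and the probability threshold are invented for illustration, not taken from the paper.

```python
import numpy as np

step = 1.0  # discretise imbalances on a 1 MW grid, as deviations from schedule

def gaussian_pdf(mu, sigma, half_width, step):
    x = np.arange(-half_width, half_width + step, step)
    p = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return x, p / p.sum()

# Load and wind forecast errors as (assumed) zero-mean Gaussians.
x_load, p_load = gaussian_pdf(0.0, 150.0, 1000.0, step)
x_wind, p_wind = gaussian_pdf(0.0, 250.0, 1000.0, step)

# Plant outages: a single 500 MW unit failing with probability 1%.
x_out = np.arange(0.0, 501.0, step)
p_out = np.zeros_like(x_out)
p_out[0], p_out[-1] = 0.99, 0.01

# Total imbalance PDF = convolution of the independent drivers.
p_total = np.convolve(np.convolve(p_load, p_wind), p_out)
x_total = np.arange(len(p_total)) * step + (x_load[0] + x_wind[0] + x_out[0])

# Positive reserve requirement: smallest R with P(imbalance > R) <= eps.
eps = 0.001  # accepted probability of insufficient reserve (invented)
tail = np.cumsum(p_total[::-1])[::-1]  # P(imbalance >= x)
reserve = x_total[np.argmax(tail <= eps)]
print(f"required positive reserve ~ {reserve:.0f} MW")
```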
IMPROVING FORECAST ACCURACY BY COMBINING RECURSIVE AND ROLLING FORECASTS
INTERNATIONAL ECONOMIC REVIEW, Issue 2 2009. Todd E. Clark
ABSTRACT: This article presents analytical, Monte Carlo, and empirical evidence on combining recursive and rolling forecasts when linear predictive models are subject to structural change. Using a characterization of the bias-variance trade-off faced when choosing between either the recursive and rolling schemes or a scalar convex combination of the two, we derive optimal observation windows and combining weights designed to minimize mean square forecast error. Monte Carlo experiments and several empirical examples indicate that combination can often provide improvements in forecast accuracy relative to forecasts made using the recursive scheme or the rolling scheme with a fixed window width. [source]

Evaluating Commodity Market Efficiency: Are Cointegration Tests Appropriate?
JOURNAL OF AGRICULTURAL ECONOMICS, Issue 3 2002. Neil Kellard
ABSTRACT: This paper investigates the claim that the finding of cointegration between commodity spot and lagged futures rates reflects the existence of commodity arbitrage and not, as is generally accepted, long-run market efficiency. The methodology of Kellard et al. (1999) is employed to match spot and lagged futures rates correctly for the UK wheat futures contract traded at LIFFE. Bi-variate analysis shows that spot and lagged futures rates are cointegrated with the vector (1, -1), a necessary condition for market efficiency. However, at variance with asymptotic theory, in a tri-variate VECM estimation, the spot rate, lagged futures rate and lagged domestic interest rate are shown to be cointegrated with the vector (1, -1, 1). The "cointegration" paradox is explained by investigating the relative magnitudes of the forecast error and the domestic interest rate. The small sample results demonstrate that it is impossible to distinguish between the influence of commodity arbitrage and the existence of market efficiency using cointegration-based tests. In summary, this work implies that such tests are not wholly appropriate for evaluating commodity market efficiency. [source]

Combining forecasts using optimal combination weight and generalized autoregression
JOURNAL OF FORECASTING, Issue 5 2008. Jeong-Ryeol Kurz-Kim
ABSTRACT: In this paper, we consider a combined forecast using an optimal combination weight in a generalized autoregression framework. The generalized autoregression provides not only a combined forecast but also an optimal combination weight for combining forecasts. By simulation, we find that short- and medium-horizon (as well as partly long-horizon) forecasts from the generalized autoregression using the optimal combination weight are more efficient than those from the usual autoregression in terms of the mean-squared forecast error. An empirical application with US gross domestic product confirms the simulation result. Copyright © 2008 John Wiley & Sons, Ltd. [source]
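Both combining abstracts above search for a weight that minimises mean square forecast error. For two competing forecasts this weight has a well-known closed form, the classic Bates-Granger weight w* = (s22 - s12) / (s11 + s22 - 2*s12) computed from the error variances and covariance. The sketch below uses it as a generic illustration rather than either paper's estimator, with synthetic forecast errors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic forecast errors of two competing models (correlated).
n = 500
e1 = rng.normal(scale=1.0, size=n)
e2 = 0.5 * e1 + rng.normal(scale=1.2, size=n)

# Bates-Granger weight on forecast 1, minimising the variance of the
# combined error e_c = w * e1 + (1 - w) * e2.
C = np.cov(e1, e2)
s11, s22, s12 = C[0, 0], C[1, 1], C[0, 1]
w = (s22 - s12) / (s11 + s22 - 2 * s12)

e_comb = w * e1 + (1 - w) * e2
print(f"w* = {w:.3f}")
print(f"MSE: model1 {np.mean(e1**2):.3f}, model2 {np.mean(e2**2):.3f}, "
      f"combined {np.mean(e_comb**2):.3f}")
```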
Forecasting German GDP using alternative factor models based on large datasets
JOURNAL OF FORECASTING, Issue 4 2007. Christian Schumacher
ABSTRACT: This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state-space models. Out-of-sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean-squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd. [source]

A fractal forecasting model for financial time series
JOURNAL OF FORECASTING, Issue 8 2004. Gordon R. Richards
ABSTRACT: Financial market time series exhibit high degrees of non-linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non-integer, this is associated with two features: (1) inhomogeneity: extreme fluctuations at irregular intervals, and (2) scaling symmetries: proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition-fitted residual scale ratio (ST-FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15-minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST-FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Forecasting composite indicators with anticipated information: an application to the industrial production index
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2003. Francesco Battaglia
Summary: Many economic and social phenomena are measured by composite indicators computed as weighted averages of a set of elementary time series. Often data are collected by means of large sample surveys, and processing takes a long time, whereas the values of some elementary component series may be available a considerable time before the others and may be used for forecasting the composite index. This problem is addressed within the framework of prediction theory for stochastic processes. A method is proposed for exploiting anticipated information to minimize the mean-square forecast error, and for selecting the most useful elementary series. An application to the Italian general industrial production index is illustrated, which demonstrates that knowledge of anticipated values of some, or even just one, component series may reduce the forecast error considerably. [source]
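Several abstracts in this listing rank models by out-of-sample mean-squared forecast error against an autoregressive benchmark, as in the German GDP study above. A minimal, generic sketch of such an evaluation loop (expanding estimation window, AR(1) fitted by OLS, random-walk benchmark; all data synthetic, not the papers' designs):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic quarterly growth series.
n = 160
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.4 * y[t - 1] + rng.normal(scale=0.5)

def ar1_forecast(history):
    """One-step AR(1) forecast, coefficients estimated by OLS."""
    x, z = history[:-1], history[1:]
    X = np.column_stack([np.ones_like(x), x])
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    return b[0] + b[1] * history[-1]

# Expanding-window (recursive) out-of-sample evaluation.
start = 80
errs_ar, errs_naive = [], []
for t in range(start, n - 1):
    f_ar = ar1_forecast(y[: t + 1])
    errs_ar.append(y[t + 1] - f_ar)
    errs_naive.append(y[t + 1] - y[t])  # random-walk benchmark

print(f"MSFE AR(1):       {np.mean(np.square(errs_ar)):.4f}")
print(f"MSFE random walk: {np.mean(np.square(errs_naive)):.4f}")
```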
Pooling-Based Data Interpolation and Backdating
JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2007. Massimiliano Marcellino (JEL: C32; C43; C82)
ABSTRACT: Pooling forecasts obtained from different procedures typically reduces the mean square forecast error and, more generally, improves the quality of the forecast. In this paper, we evaluate whether pooling interpolated or backdated time series obtained from different procedures can also improve the quality of the generated data. Both simulation results and empirical analyses with macroeconomic time series indicate that pooling plays a positive and important role in this context also. [source]

On the use of the intensity-scale verification technique to assess operational precipitation forecasts
METEOROLOGICAL APPLICATIONS, Issue 1 2008. Gabriella Csima
ABSTRACT: The article describes the attempt to include the intensity-scale technique introduced by Casati et al. (2004) into a set of standardized verifications used in operational centres. The intensity-scale verification approach accounts for the spatial structure of the forecast field and allows the skill to be diagnosed as a function of the scale of the forecast error and the intensity of the precipitation events. The intensity-scale method has been used to verify two different resolutions of the European Centre for Medium-Range Weather Forecasts (ECMWF) operational quantitative precipitation forecast (QPF) over France, and to compare the performance of the ECMWF and the Hungarian Meteorological Service operational model (ALADIN) forecasts, run over Hungary. Two case studies are introduced, which give some interesting insight into the spatial scale of the error. The distribution of the daily skill score for an extended period of time is also presented. The intensity-scale technique shows that the forecasts in general exhibit better skill for large-scale events, and lower skill for small-scale and intense events. The paper also notes how some of the stringent assumptions on the domain over which the method can be applied, and on the availability of matched forecasts and observations, can limit its usability in an operational environment. Copyright © 2008 Royal Meteorological Society [source]
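The intensity-scale technique verifies a thresholded (binary) precipitation error field scale by scale. The sketch below conveys the idea with a simple Haar-style decomposition obtained by successive block averaging of the binary error field; it is a simplified stand-in for the full Casati et al. (2004) method, and both fields are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def block_mean_field(z, size):
    """Average z over non-overlapping size x size blocks, upsampled back."""
    n = z.shape[0]
    b = z.reshape(n // size, size, n // size, size).mean(axis=(1, 3))
    return np.kron(b, np.ones((size, size)))

# Synthetic 64 x 64 rain fields and a threshold defining the event.
n, u = 64, 5.0
obs = rng.gamma(2.0, 2.0, size=(n, n))
fcst = np.roll(obs, shift=4, axis=1) + rng.normal(0, 1, size=(n, n))  # displaced forecast

z = (fcst > u).astype(float) - (obs > u).astype(float)  # binary error field

# Scale decomposition by successive 2x2 block averaging (Haar-like).
levels = int(np.log2(n))
prev = z
for j in range(1, levels + 1):
    cur = block_mean_field(z, 2 ** j)
    detail = prev - cur
    print(f"scale ~{2 ** j} gridpoints: MSE contribution {np.mean(detail ** 2):.4f}")
    prev = cur
print(f"largest scale (field mean): {np.mean(prev ** 2):.4f}")
```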
An observing-system experiment with ground-based GPS zenith total delay data using HIRLAM 3D-Var in the absence of satellite data
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 650 2010. Reima Eresmaa
ABSTRACT: Ground-based receiver networks of the Global Positioning System (GPS) provide observations of atmospheric water vapour with a high temporal and horizontal resolution. Variational data assimilation allows researchers to make use of zenith total delay (ZTD) observations, which comprise the atmospheric effects on microwave signal propagation. An observing-system experiment (OSE) is performed to demonstrate the impact of GPS ZTD observations on the output of the High Resolution Limited Area Model (HIRLAM). The GPS ZTD observations for the OSE are provided by the EUMETNET GPS Water Vapour Programme, and they are assimilated using three-dimensional variational data assimilation (3D-Var). The OSE covers a five-week period during the late summer of 2008. In parallel with GPS ZTD data assimilation in the regular mode, the impact of a static bias-correction algorithm for the GPS ZTD data is also assessed. Assimilation of GPS ZTD data, without bias correction of any kind, results in a systematic increase in the forecast water-vapour content, temperature and tropospheric relative topography. A slightly positive impact is shown in terms of decreased forecast-error standard deviation of lower and middle tropospheric humidity and lower tropospheric geopotential height. Moreover, verification of categorical forecasts of 12 h accumulated precipitation shows a positive impact. The application of the static bias-correction scheme is positively verified in the case of the mean forecast error of lower tropospheric humidity and when relatively high precipitation accumulations are considered. Copyright © 2010 Royal Meteorological Society [source]

Horizontal resolution impact on short- and long-range forecast error
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 649 2010. Roberto Buizza
ABSTRACT: The impact of horizontal resolution increases from spectral truncation T95 to T799 on the error growth of ECMWF forecasts is analysed. Attention is focused on instantaneous, synoptic-scale features represented by the 500 and 1000 hPa geopotential height and the 850 hPa temperature. Error growth is investigated by applying a three-parameter model, and improvements in forecast skill are assessed by computing the time limits at which fractions of the forecast-error asymptotic value are reached. Forecasts are assessed both in a realistic framework against T799 analyses, and in a perfect-model framework against T799 forecasts. A strong sensitivity to model resolution of the skill of instantaneous forecasts has been found in the short forecast range (say, up to about forecast day 3), but sensitivity has been shown to become weaker in the medium range (say, around forecast day 7) and undetectable in the long forecast range. Considering the predictability of ECMWF operational, high-resolution T799 forecasts of the 500 hPa geopotential height verified in the realistic framework over the Northern Hemisphere (NH), the long-range time limit τ(95%) is 15.2 days, a value that is one day shorter than the limit computed in the perfect-model framework. Considering the 850 hPa temperature verified in the realistic framework, the time limit τ(95%) is 16.6 days over the NH (cold season), 14.1 days over the SH (warm season) and 20.6 days over the Tropics. Although past resolution increases have been providing continuously better forecasts, especially in the short forecast range, this investigation suggests that in the future, although further increases in resolution are expected to improve the forecast skill in the short and medium forecast range, simple resolution increases without model improvements would bring only very limited improvements in the long forecast range. Copyright © 2010 Royal Meteorological Society [source]
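The resolution-impact abstract above fits a three-parameter model to forecast-error growth and reads off limits such as τ(95%), the forecast range at which the error reaches 95% of its asymptotic value. The sketch below fits one plausible three-parameter saturating form (a logistic in lead time, not necessarily the paper's exact model) to synthetic error data:

```python
import numpy as np
from scipy.optimize import curve_fit

def error_growth(t, e_inf, a, b):
    """Logistic error growth saturating at e_inf (three parameters)."""
    return e_inf / (1.0 + np.exp(-(a + b * t)))

# Synthetic RMSE-vs-lead-time data (days), saturating near 120 m.
t = np.arange(0, 15.5, 0.5)
rng = np.random.default_rng(4)
e_obs = error_growth(t, 120.0, -2.5, 0.5) + rng.normal(0, 2, size=t.size)

(e_inf, a, b), _ = curve_fit(error_growth, t, e_obs, p0=(100.0, -2.0, 0.4))

# tau(95%): lead time at which the fitted error reaches 95% of e_inf.
# Solve e_inf / (1 + exp(-(a + b*t))) = 0.95 * e_inf
#   =>  t = (ln(0.95 / 0.05) - a) / b.
tau95 = (np.log(0.95 / 0.05) - a) / b
print(f"fitted asymptote {e_inf:.1f}, tau(95%) = {tau95:.1f} days")
```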
The characteristics of Hessian singular vectors using an advanced data assimilation scheme
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 642 2009. A. R. Lawrence
ABSTRACT: Initial condition uncertainty is a significant source of forecast error in numerical weather prediction. Singular vectors of the tangent linear propagator can identify directions in phase-space where initial errors are likely to make the largest contribution to forecast-error variance. The physical characteristics of these singular vectors depend on the choice of initial-time metric used to represent analysis-error covariances: the total-energy norm serves as a proxy to the analysis-error covariance matrix, whereas the Hessian of the cost function of a 4D-Var assimilation scheme represents a more sophisticated estimate of the analysis-error covariances, consistent with the observation and background-error covariances used in the 4D-Var scheme. This study examines and compares the structure of singular vectors computed with the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System using these two types of initial metrics. Unlike earlier studies that use background errors derived from lagged forecast differences (the NMC method), the background-error covariance matrix in the Hessian metric is based on statistics from an ensemble of 4D-Vars using perturbed observations, which produces tighter correlations of background-error statistics than in previous formulations. In light of these new background-error statistics, this article re-examines the properties of Hessian singular vectors (and their relationship to total-energy singular vectors) using cases from different periods between 2003 and 2005. Energy profiles and wavenumber spectra reveal that the total-energy singular vectors are similar to Hessian singular vectors that use all observation types in the operational 4D-Var assimilation. This is in contrast to the structure of Hessian singular vectors without observations. Increasing the observation density tends to reduce the spatial scale of the Hessian singular vectors. Copyright © 2009 Royal Meteorological Society [source]
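Singular-vector computations of this kind reduce, at their core, to a linear-algebra step: the leading singular vectors of the tangent-linear propagator under a chosen initial-time metric. A toy sketch, with a random matrix standing in for the propagator and a diagonal matrix for the initial metric, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50

# Toy tangent-linear propagator M (in reality, a model linearised
# about a forecast trajectory; here just a random matrix).
M = rng.normal(size=(n, n)) / np.sqrt(n)

# Initial-time metric: analysis-error covariance C (diagonal toy choice).
# Total-energy singular vectors correspond to C = I; a Hessian metric
# would use the inverse Hessian of the 4D-Var cost function instead.
C = np.diag(rng.uniform(0.5, 2.0, size=n))
C_half = np.sqrt(C)  # valid elementwise because C is diagonal

# Maximise ||M x||^2 subject to x' C^{-1} x = 1: substitute x = C^{1/2} y
# and take the SVD of M C^{1/2}.
U, s, Vt = np.linalg.svd(M @ C_half)
leading_sv = C_half @ Vt[0]  # leading initial-time singular vector
print("leading singular value (amplification):", s[0])
```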
The local ETKF and SKEB: Upgrades to the MOGREPS short-range ensemble prediction system
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 640 2009. Neill E. Bowler
ABSTRACT: The Met Office has been routinely running a short-range global and regional ensemble prediction system (EPS) since the summer of 2005. This article describes a major upgrade to the global ensemble, which affected both the initial condition and model uncertainty perturbations applied in that ensemble. The change to the initial condition perturbations is to allow localization within the ensemble transform Kalman filter (ETKF). This enables better specification of the ensemble spread as a function of location around the globe. The change to the model uncertainty perturbations is the addition of a stochastic kinetic energy backscatter scheme (SKEB). This adds vorticity perturbations to the forecast in order to counteract the damping of small-scale features introduced by the semi-Lagrangian advection scheme. Verification of ensemble forecasts is presented for the global ensemble system. It is shown that the localization of the ETKF gives a distribution of the spread as a function of latitude that better matches the forecast error of the ensemble mean. The SKEB scheme has a substantial effect on the power spectrum of the kinetic energy, and with the scheme a shallowing of the spectral slope is seen in the tail. A k^(-5/3) slope is seen at wavelengths shorter than 1000 km, and this better agrees with the observed spectrum. The local ETKF significantly improves forecasts at all lead times over a number of variables. The SKEB scheme increases the rate of growth of ensemble spread in some variables, and improves forecast skill at short lead times. © Crown Copyright 2009. Reproduced with the permission of HMSO. Published by John Wiley & Sons Ltd. [source]

Monitoring the observation impact on the short-range forecast
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 638 2009. Carla Cardinali
ABSTRACT: This paper describes the use of forecast sensitivity to observations as a diagnostic tool to monitor the observation impact on the 24-hour forecast range. In particular, the forecast error is provided by the control experiments (using all available observations) of two sets of observing system experiments performed at ECMWF, covering a month in summer 2006 and a month in winter 2007, respectively. In this way, the observation data impact obtained with the forecast sensitivity is compared with the observing system experiments' data impact; differences and similarities are highlighted. Globally, the assimilated observations decrease the forecast error; locally, some poor performances are detected that are related either to the data quality or to the suboptimality of the data assimilation system. It is also found that the synoptic situation can affect the measurements or can produce areas of large field variability that the assimilation system cannot model correctly. Copyright © 2009 Royal Meteorological Society [source]
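A recurring diagnostic in the ensemble abstracts above is whether ensemble spread matches the forecast error of the ensemble mean. A minimal sketch of that spread-versus-skill comparison on synthetic ensemble data (all sizes and scales invented):

```python
import numpy as np

rng = np.random.default_rng(6)
n_members, n_cases = 24, 500

# Synthetic truth and ensemble: members scatter around an erroneous mean.
truth = rng.normal(size=n_cases)
ens_mean_err = rng.normal(scale=1.0, size=n_cases)
members = (truth + ens_mean_err)[None, :] + rng.normal(
    scale=0.9, size=(n_members, n_cases))  # under-dispersive: scatter < error

ens_mean = members.mean(axis=0)

# RMSE of the ensemble mean vs. mean ensemble spread (stddev about the mean).
rmse = np.sqrt(np.mean((ens_mean - truth) ** 2))
spread = np.mean(members.std(axis=0, ddof=1))
print(f"ensemble-mean RMSE {rmse:.3f} vs mean spread {spread:.3f}")
```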
The optimal density of atmospheric sounder observations in the Met Office NWP system
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 629 2007. M. L. Dando
ABSTRACT: Large numbers of satellite observations are discarded from the numerical weather prediction (NWP) process because high-density observations may have a negative impact on the analysis. In current assimilation schemes, the observation error covariance matrix R is usually represented as a diagonal matrix, which assumes there are no correlations in the observation errors and that each observation is an independent piece of information. This is not the case when there are strong error correlations, and this can lead to a degraded analysis. The experiments conducted in this study were designed to identify the optimal density and to determine if there were circumstances when exceeding this density might be beneficial to forecast skill. The global optimal separation distance of Advanced TIROS Operational Vertical Sounder (ATOVS) observations was identified by comparing global forecast errors produced using different densities of ATOVS. The global average of the absolute forecast error produced by each different density was found for a 3-week period from December 2004 to January 2005. The results showed that, when using the Met Office NWP system with a horizontal model resolution of ~60 km, the lowest global forecast errors were produced when using separation distances of 115-154 km. However, localized regions of the atmosphere containing large gradients, such as frontal regions, may benefit from thinning distances as small as 40 km, and therefore the global optimal separation distance is not necessarily applicable in these circumstances. Copyright © 2007 Royal Meteorological Society [source]

Analysis of scale dependence of quantitative precipitation forecast verification: A case-study over the Mackenzie river basin
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 620 2006. Olivier Bousquet
ABSTRACT: Six-hour rainfall accumulations derived from radar observations collected during a 3-day summertime precipitation event over central Alberta (Canada) are used to assess the performance of a regional Canadian numerical weather prediction system for quantitative precipitation forecast verification. We show that radar data provide a simple and efficient way to significantly reduce model phase errors associated with misplacement of predicted precipitation patterns. Using wavelet analysis, we determine that the limiting spatial scale of predictability of the model is about six times its grid resolution for 6 h accumulated fields. The use of longer accumulation periods is shown to smooth out forecast errors that may have resulted from slight phase or time shift errors, but does not change the limiting scale of predictability. The scale decomposition of the mean-square forecast error also reveals that scales which cannot be accurately reproduced by the model account for about 20% of the total error. Using classical continuous and categorical scores, we show that significantly better model performance can be achieved by smoothing out wavelengths that cannot be predicted. Copyright © 2006 Royal Meteorological Society [source]

Forcing singular vectors and other sensitive model structures
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 592 2003. J. Barkmeijer
ABSTRACT: Model tendency perturbations can, like analysis perturbations, be an effective way to influence forecasts. In this paper, optimal model tendency perturbations, or forcing singular vectors, are computed with diabatic linear and adjoint T42L40 versions of the European Centre for Medium-Range Weather Forecasts' forecast model. During the forecast time, the spatial pattern of the tendency perturbation does not vary, and the response at optimization time (48 hours) is measured in terms of total energy. Their properties are compared with those of initial singular vectors, and differences, such as larger horizontal scale and location, are discussed. Sensitivity calculations are also performed, whereby a cost function measuring the 2-day forecast error is minimized by only allowing tendency perturbations. For a given number of minimization steps, this approach yields larger cost-function reductions than the sensitivity calculation using only analysis perturbations. Nonlinear forecasts using only one type of perturbation confirm an improved performance in the case of tendency perturbations. For a summer experiment, a substantial reduction of the systematic error is shown in the case of forcing sensitivity. Copyright © 2003 Royal Meteorological Society. [source]
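The sounder-density study above thins observations to a target separation distance because densely packed observations have correlated errors that a diagonal R matrix cannot represent. A minimal greedy thinning sketch follows (haversine great-circle distance, invented coordinates; operational thinning schemes are more sophisticated):

```python
import numpy as np

def greedy_thin(lats, lons, min_sep_km):
    """Keep observations one by one, skipping any closer than min_sep_km
    to an already-kept observation (greedy, order-dependent)."""
    R = 6371.0  # Earth radius, km
    lat_r, lon_r = np.radians(lats), np.radians(lons)
    kept = []
    for i in range(len(lats)):
        ok = True
        for j in kept:
            # Haversine great-circle distance between obs i and obs j.
            dlat, dlon = lat_r[i] - lat_r[j], lon_r[i] - lon_r[j]
            a = (np.sin(dlat / 2) ** 2
                 + np.cos(lat_r[i]) * np.cos(lat_r[j]) * np.sin(dlon / 2) ** 2)
            if 2 * R * np.arcsin(np.sqrt(a)) < min_sep_km:
                ok = False
                break
        if ok:
            kept.append(i)
    return kept

rng = np.random.default_rng(7)
lats, lons = rng.uniform(40, 60, 2000), rng.uniform(-30, 10, 2000)
kept = greedy_thin(lats, lons, min_sep_km=120.0)  # within the 115-154 km band
print(f"kept {len(kept)} of {len(lats)} observations")
```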
A methodology for forming components of the linear model in 4D-Var with application to the marine boundary layer
ATMOSPHERIC SCIENCE LETTERS, Issue 4 2009. Tim Payne
ABSTRACT: We show how large numbers of parameters used in the linear model in 4D-Var may efficiently be optimised according to suitable criteria. We apply this to the linear model representation of boundary layer processes over sea, to obtain significant reductions in linearisation and forecast error. Crown Copyright © 2009 Royal Meteorological Society [source]

PREDICTION-FOCUSED MODEL SELECTION FOR AUTOREGRESSIVE MODELS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2007. Gerda Claeskens
Summary: In order to make predictions of future values of a time series, one needs to specify a forecasting model. A popular choice is an autoregressive time-series model, for which the order of the model is chosen by an information criterion. We propose an extension of the focused information criterion (FIC) for model-order selection, with emphasis on high predictive accuracy (i.e. a low mean squared forecast error). We obtain theoretical results and illustrate by means of a simulation study and some real data examples that the FIC is a valid alternative to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) for selection of a prediction model. We also illustrate the possibility of using the FIC for purposes other than forecasting, and explore its use in an extended model. [source]
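Order selection for an autoregressive model by information criterion, as in the FIC abstract above, amounts to fitting AR(p) over a range of p and minimising a penalised fit measure. The sketch below uses the familiar AIC on synthetic AR(2) data; the FIC itself requires the paper's focus-parameter machinery, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic AR(2) data.
n = 300
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

def ar_aic(y, p, p_max):
    """OLS fit of AR(p) on a common estimation sample; returns AIC."""
    Y = y[p_max:]
    X = (np.column_stack([y[p_max - k:-k] for k in range(1, p + 1)])
         if p > 0 else np.empty((len(Y), 0)))
    X = np.column_stack([np.ones(len(Y)), X])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ beta) ** 2)
    m = len(Y)
    return m * np.log(rss / m) + 2 * (p + 1)  # AIC penalty on p + intercept

p_max = 8
aics = [ar_aic(y, p, p_max) for p in range(p_max + 1)]
print("selected order:", int(np.argmin(aics)))
```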
Auditor Quality and the Accuracy of Management Earnings Forecasts
CONTEMPORARY ACCOUNTING RESEARCH, Issue 4 2000. Peter M. Clarkson
ABSTRACT: In this study, we appeal to insights and results from Davidson and Neu (1993) and McConomy (1998) to motivate empirical analyses designed to gain a better understanding of the relationship between auditor quality and forecast accuracy. We extend and refine Davidson and Neu's analysis of this relationship by introducing additional controls for business risk and by considering data from two distinct time periods: one in which the audit firm's responsibility respecting the earnings forecast was to provide review-level assurance, and one in which its responsibility was to provide audit-level assurance. Our sample data consist of Toronto Stock Exchange (TSE) initial public offerings (IPOs). The earnings forecast we consider is the one-year-ahead management earnings forecast included in the IPO offering prospectus. The results suggest that after the additional controls for business risk are introduced, the relationship between forecast accuracy and auditor quality for the review-level assurance period is no longer significant. The results also indicate that the shift in regimes alters the fundamental nature of the relationship. Using data from the audit-level assurance regime, we find a negative and significant relationship between forecast accuracy and auditor quality (i.e., we find Big 6 auditors to be associated with smaller absolute forecast errors than non-Big 6 auditors), and further, that the difference in the relationship between the two regimes is statistically significant. [source]

Viability of Auction-Based Revenue Management in Sequential Markets
DECISION SCIENCES, Issue 2 2005. Tim Baker
ABSTRACT: The Internet is providing an opportunity for revenue management practitioners to exploit the potential of auctions as a new price distribution channel. We develop a stochastic model for a high-level abstraction of a revenue management system (RMS) that allows us to understand the potential of incorporating auctions in revenue management in the presence of forecast errors associated with key parameters. Our abstraction is for an environment where two market segments book in sequence and revenue management approaches consider auctions in none, one, or both segments. Key insights from our robust results are: (i) limited auctions are best employed closest to the final sale date; (ii) counterbalancing forecast errors associated with overall traffic intensity and the proportion of customer arrivals in a segment is more important if an auction is adopted in that segment; and (iii) it is critically important not to err on the side of overestimating market willingness to pay. [source]

Survey Data and the Interest Rate Sensitivity of US Bank Stock Returns
ECONOMIC NOTES, Issue 2 2000. H. A. Benink
ABSTRACT: In this paper, we provide empirical evidence on the interest rate sensitivity of the stock returns of the twenty largest US bank holding companies. The main contribution of the paper is the use of survey data to model the unexpected interest rate variable, which is an alternative approach to the existing literature. We find evidence of significant negative interest rate sensitivity during the early 1980s, and evidence of declining significance in the late 1980s and early 1990s. This result is also obtained when using the forecast errors of ARIMA processes to model the unexpected movement in the interest rate. [source]
Sequential Monte Carlo methods for multi-aircraft trajectory prediction in air traffic management
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 10 2010. I. Lymperopoulos
ABSTRACT: Accurate prediction of aircraft trajectories is an important part of decision support and automated tools in air traffic management. We demonstrate that by combining information from multiple aircraft at different locations and time instants, one can provide improved trajectory prediction (TP) accuracy. To perform multi-aircraft TP, we have abundant data at our disposal. We show how this multi-aircraft sensor fusion problem can be formulated as a high-dimensional state estimation problem. The high dimensionality of the problem and nonlinearities in aircraft dynamics and control prohibit the use of common filtering methods. We demonstrate the inefficiency of several sequential Monte Carlo algorithms on feasibility studies involving multiple aircraft. We then develop a novel particle filtering algorithm to exploit the structure of the problem and solve it in realistic-scale situations. In all studies we assume that aircraft fly level (possibly at different altitudes) with known, constant, aircraft-dependent airspeeds, and estimate the wind forecast errors based only on ground radar measurements. Current work concentrates on extending the algorithms to non-level flights, the joint estimation of wind forecast errors together with the airspeed and mass of the different aircraft, and the simultaneous fusion of airborne and ground radar measurements. Copyright © 2010 John Wiley & Sons, Ltd. [source]
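The multi-aircraft abstract above builds on particle filtering for a high-dimensional state that includes wind forecast errors. A minimal bootstrap particle filter for a drastically reduced toy version (one aircraft, scalar along-track position, a single unknown wind bias; everything synthetic) gives the flavour of the approach:

```python
import numpy as np

rng = np.random.default_rng(9)

# Truth: position advances at airspeed + wind bias; radar measures position.
n_steps, dt, airspeed, wind_true = 40, 10.0, 0.23, 0.04  # seconds, km/s
x_true = np.cumsum((airspeed + wind_true) * dt * np.ones(n_steps))
z = x_true + rng.normal(scale=0.5, size=n_steps)  # radar observations, km

# Particles carry (position, wind-bias) pairs; the bias is static but unknown.
n_p = 2000
pos = np.zeros(n_p)
wind = rng.normal(0.0, 0.05, size=n_p)  # prior over the wind forecast error

for t in range(n_steps):
    # Propagate: deterministic motion plus small process noise.
    pos += (airspeed + wind) * dt + rng.normal(0, 0.05, size=n_p)
    # Weight by observation likelihood (Gaussian radar error).
    w = np.exp(-0.5 * ((z[t] - pos) / 0.5) ** 2)
    w /= w.sum()
    # Multinomial resampling.
    idx = rng.choice(n_p, size=n_p, p=w)
    pos, wind = pos[idx], wind[idx]
    # Jitter the static parameter slightly to avoid sample degeneracy.
    wind += rng.normal(0, 0.002, size=n_p)

print(f"estimated wind bias {wind.mean():.3f} (truth {wind_true})")
```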