Empirical Performance
Selected Abstracts

Sample size estimation for non-inferiority trials of time-to-event data
PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 4 2008
Adam Crisp
Abstract We consider the problem of sample size calculation for non-inferiority based on the hazard ratio in time-to-event trials where overall study duration is fixed and subject enrolment is staggered with variable follow-up. An adaptation of previously developed formulae for the superiority framework is presented that specifically allows for effect reversal under the non-inferiority setting, and its consequent effect on variance. Empirical performance is assessed through a small simulation study, and an example based on an ongoing trial is presented. The formulae are straightforward to program and may prove a useful tool in planning trials of this type. Copyright © 2007 John Wiley & Sons, Ltd. [source]

SPACE-TIME MODELLING OF SYDNEY HARBOUR WINDS
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2005
Edward Cripps
Summary This paper develops a space-time statistical model for local forecasting of surface-level wind fields in a coastal region with complex topography. The statistical model makes use of output from deterministic numerical weather prediction models, which are able to produce forecasts of surface wind fields on a spatial grid. When predicting surface winds at observing stations, errors can arise due to sub-grid-scale processes not adequately captured by the numerical weather prediction model, and the statistical model attempts to correct for these influences. In particular, it uses information from observing stations within the study region as well as topographic information to account for local bias. Bayesian methods for inference are used in the model, with computations carried out using Markov chain Monte Carlo algorithms.
Empirical performance of the model is described, illustrating that a structured Bayesian approach to complicated space-time models of the type considered in this paper can be readily implemented and can lead to improvements in forecasting over traditional methods. [source]

Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach
EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2008
Manuel Ammann
G12 Abstract We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, the method is truly conditional because it does not require the assumption that time variation in betas is driven by a set of conditioning variables. Moreover, the approach has exact finite-sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyse the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. [source]

Robust estimation and testing of haplotype effects in case-control studies
GENETIC EPIDEMIOLOGY, Issue 1 2008
Andrew S. Allen
Abstract Haplotype-based analyses are thought to play a major role in the study of common complex diseases. This has led to the development of a variety of statistical methods for detecting disease-haplotype associations from case-control study data.
However, haplotype phase is often uncertain when only genotype data are available. Methods that account for haplotype ambiguity by modeling the distribution of haplotypes can, if this distribution is misspecified, lead to substantial bias in parameter estimates even when complete genotype data are available. Here we study estimators that can be derived from score functions of appropriate likelihoods. We use the efficient score approach to estimation in the presence of nuisance parameters to derive novel estimators that are robust to misspecification of the haplotype distribution. We establish key relationships between estimators and study their empirical performance via simulation. Genet. Epidemiol. 2007. Published 2007 Wiley-Liss, Inc. [source]

Modelling the daily banknotes in circulation in the context of the liquidity management of the European Central Bank
JOURNAL OF FORECASTING, Issue 3 2009
Alberto Cabrero
Abstract The main focus of this paper is to model the daily series of banknotes in circulation, which displays very marked seasonal patterns. To the best of our knowledge, the empirical performance of two competing approaches to modelling seasonality in daily time series, namely the ARIMA-based approach and the Structural Time Series approach, has never been put to the test. The application presented in this paper provides valid intuition on the merits of each approach. The forecasting performance of the models is also assessed in the context of their impact on the liquidity management of the Eurosystem. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Do Stock Prices and Volatility Jump? Reconciling Evidence from Spot and Option Prices
THE JOURNAL OF FINANCE, Issue 3 2004
This paper examines the empirical performance of jump diffusion models of stock price dynamics from joint options and stock markets data.
The paper introduces a model with discontinuous correlated jumps in stock prices and stock price volatility, and with state-dependent arrival intensity. We discuss how to perform likelihood-based inference based upon joint options/returns data and present estimates of risk premiums for jump and volatility risks. The paper finds that while complex jump specifications add little explanatory power in fitting options data, these models fare better in fitting options and returns data simultaneously. [source]

Nonlinear asymmetric models of the short-term interest rate
THE JOURNAL OF FUTURES MARKETS, Issue 9 2006
K. Ozgur Demirtas
Article first published online: 18 JUL 200
This study introduces a generalized discrete-time framework to evaluate the empirical performance of a wide variety of well-known models in capturing the dynamic behavior of short-term interest rates. A new class of models that displays nonlinearity and asymmetry in the drift, and incorporates the level effect and stochastic volatility in the diffusion function, is introduced in discrete time and tested against the popular diffusion, GARCH, and level-GARCH models. Based on the statistical test results, the existing models are strongly rejected in favor of the newly proposed models because of the nonlinear asymmetric drift of the short rate, and the presence of nonlinearity, GARCH, and level effects in its volatility. The empirical results indicate that the nonlinear asymmetric models are better than the existing models in forecasting the future level and volatility of interest rate changes. © 2006 Wiley Periodicals, Inc. Jrl Fut Mark 26:869-894, 2006 [source]

An empirical investigation of the GARCH option pricing model: Hedging performance
THE JOURNAL OF FUTURES MARKETS, Issue 12 2003
Haynes H. M. Yung
In this article, we study the empirical performance of the GARCH option pricing model relative to the ad hoc Black-Scholes (BS) model of Dumas, Fleming, and Whaley.
Specifically, we investigate the empirical performance of the option pricing model based on the exponential GARCH (EGARCH) process of Nelson. Using S&P 500 options data, we find that the EGARCH model performs better than the ad hoc BS model both in terms of in-sample valuation and out-of-sample forecasting. However, the out-of-sample superiority of the EGARCH model over the ad hoc BS model is small and insignificant except in the case of deep out-of-the-money put options, and the outperformance diminishes as one lengthens the forecasting horizon. Interestingly, we find that the more complicated EGARCH model performs worse than the ad hoc BS model in hedging, irrespective of moneyness categories and hedging horizons. For at-the-money and out-of-the-money put options, the underperformance of the EGARCH model in hedging is statistically significant. © 2003 Wiley Periodicals, Inc. Jrl Fut Mark 23:1191-1207, 2003 [source]

PPPs in Health: Static or Dynamic?
AUSTRALIAN JOURNAL OF PUBLIC ADMINISTRATION, Issue 2010
Anneloes Blanken
Public-Private Partnerships (PPPs), often taking the Private Finance Initiative (PFI) form throughout the Anglo-Saxon world, are gaining in popularity for the provision of hospitals. Increasingly common around the world and seen as a potential solution that will both overcome the bottlenecks associated with more conventional approaches to hospital provision and generate 'value for money' (VfM), these PFI-PPPs represent a major, but so far under-evaluated, concept. This article analyses whether public-private partnerships deliver the benefits claimed. It endeavors to assess the potential of hospital PFI-PPPs, and their empirical performance in achieving VfM, by addressing the way the contractual arrangements are structured and the extent of flexibility they generate.
Initial lessons arising from the current provisioning of English and Australian hospital facilities by PFI-PPPs are identified so they can be taken into consideration in future projects. [source]

Statistical Methods for Analyzing Right-Censored Length-Biased Data under Cox Model
BIOMETRICS, Issue 2 2010
Jing Qin
Summary Length-biased time-to-event data are commonly encountered in applications ranging from epidemiological cohort studies or cancer prevention trials to studies of labor economics. A longstanding statistical problem is how to assess the association of risk factors with survival in the target population given the observed length-biased data. In this article, we demonstrate how to estimate these effects under the semiparametric Cox proportional hazards model. The structure of the Cox model changes under length-biased sampling in general. Although the existing partial likelihood approach for left-truncated data can be used to estimate covariate effects, it may not be efficient for analyzing length-biased data. We propose two estimating equation approaches for estimating the covariate coefficients under the Cox model. We use modern stochastic process and martingale theory to develop the asymptotic properties of the estimators. We evaluate the empirical performance and efficiency of the two methods through extensive simulation studies. We use data from a dementia study to illustrate the proposed methodology, and demonstrate the computational algorithms for point estimates, which can be directly linked to existing functions in S-PLUS or R. [source]
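The left-truncation partial likelihood that the last abstract contrasts with the authors' estimating equations can be sketched in a few lines. The simulation below is a minimal illustration, not the paper's code: all names, sample sizes, and parameter values are assumptions. It draws length-biased observations by keeping only subjects whose event time exceeds a random entry time, then maximises the Cox partial likelihood with entry-adjusted risk sets by a crude grid search:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation (all parameters are assumptions for this sketch).
n = 800
x = rng.binomial(1, 0.5, n)                        # binary risk factor
beta_true = 0.7
t = rng.exponential(1.0 / np.exp(beta_true * x))   # event times, hazard exp(beta * x)
a = rng.uniform(0.0, 2.0, n)                       # random entry (left-truncation) times

# Length-biased sampling: only subjects still event-free at entry are observed.
keep = t > a
t, a, x = t[keep], a[keep], x[keep]

c = a + rng.exponential(2.0, size=t.shape)         # censoring some time after entry
time = np.minimum(t, c)
event = t <= c

def neg_log_partial_lik(beta):
    """Negative Cox log partial likelihood with risk sets adjusted for delayed entry."""
    ll = 0.0
    for i in np.flatnonzero(event):
        # At risk at time[i]: already entered, not yet failed or censored.
        at_risk = (a < time[i]) & (time[i] <= time)
        ll += beta * x[i] - np.log(np.sum(np.exp(beta * x[at_risk])))
    return -ll

# One-dimensional grid search for the maximum partial likelihood estimate.
grid = np.linspace(-1.0, 2.0, 301)
beta_hat = grid[int(np.argmin([neg_log_partial_lik(b) for b in grid]))]
print(f"beta_hat = {beta_hat:.2f}")                # should land near beta_true = 0.7
```

The abstract's point is visible here: this risk-set adjustment keeps the estimator valid under length-biased sampling, but it ignores the information carried by the entry-time distribution itself, which is the efficiency the paper's estimating equations aim to recover.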