Misspecification

Kinds of Misspecification

  • model misspecification


  • Selected Abstracts


    MODEL MISSPECIFICATION: WHY AGGREGATION OF OFFENSES IN FEDERAL SENTENCING EQUATIONS IS PROBLEMATIC

    CRIMINOLOGY, Issue 4 2003
    CELESTA A. ALBONETTI
    This paper addresses two concerns that arise from Steffensmeier and Demuth's (2001) analysis of federal sentencing and their misrepresentation of my analyses of sentence severity (Albonetti, 1997). My primary concern is to alert researchers to the importance of controlling for the guidelines offense that drives the sentencing process under the Federal Sentencing Guidelines. My second concern is to correct Steffensmeier and Demuth's (2001) errors in interpretation of my earlier findings of the effect of guidelines offense severity on length of imprisonment. [source]


    Management Strategies for Complex Adaptive Systems: Sensemaking, Learning, and Improvisation

    PERFORMANCE IMPROVEMENT QUARTERLY, Issue 2 2007
    Reuben R. McDaniel Jr.
    Misspecification of the nature of organizations may be a major reason for difficulty in achieving performance improvement. Organizations are often viewed as machine-like, but complexity science suggests that organizations should be viewed as complex adaptive systems. I identify the characteristics of complex adaptive systems and give examples of management errors that may be made when these characteristics are ignored. Command, control and planning are presented as managerial tasks that come to the fore when a machine view of organizations dominates thinking. When we treat organizations as complex adaptive systems the focus of managerial activity changes, and sensemaking, learning and improvisation become appropriate strategies for performance improvement. Each of these is defined and described. A modest research agenda is presented. [source]


    On Latent-Variable Model Misspecification in Structural Measurement Error Models for Binary Response

    BIOMETRICS, Issue 3 2009
    Xianzheng Huang
    Summary We consider structural measurement error models for a binary response. We show that likelihood-based estimators obtained from fitting structural measurement error models with pooled binary responses can be far more robust to covariate measurement error in the presence of latent-variable model misspecification than the corresponding estimators from individual responses. Furthermore, despite the loss in information, pooling can provide improved parameter estimators in terms of mean-squared error. Based on these and other findings, we create a new diagnostic method to detect latent-variable model misspecification in structural measurement error models with individual binary response. We use simulation and data from the Framingham Heart Study to illustrate our methods. [source]


    Diagnosis of Random-Effect Model Misspecification in Generalized Linear Mixed Models for Binary Response

    BIOMETRICS, Issue 2 2009
    Xianzheng Huang
    Summary Generalized linear mixed models (GLMMs) are widely used in the analysis of clustered data. However, the validity of likelihood-based inference in such analyses can be greatly affected by the assumed model for the random effects. We propose a diagnostic method for random-effect model misspecification in GLMMs for clustered binary response. We provide a theoretical justification of the proposed method and investigate its finite sample performance via simulation. The proposed method is applied to data from a longitudinal respiratory infection study. [source]


    Errors of aggregation and errors of specification in a consumer demand model: a theoretical note

    CANADIAN JOURNAL OF ECONOMICS, Issue 4 2006
    Frank T. Denton
    Abstract Consumer demand models based on the concept of a representative or average consumer suffer from aggregation error. Misspecification of the underlying micro utility-maximizing model, which is virtually inevitable, also results in error. This note provides a theoretical investigation of the relationship between the two types of error. Misspecified expenditure support functions for demand systems at the micro level induce the same misspecified structure in the corresponding expenditure functions at the macro level, and the errors at the two levels are shown to be of similar order. [source]


    A score for Bayesian genome screening

    GENETIC EPIDEMIOLOGY, Issue 3 2003
    E. Warwick Daw
    Abstract Bayesian Monte Carlo Markov chain (MCMC) techniques have shown promise in dissecting complex genetic traits. The methods introduced by Heath ([1997], Am. J. Hum. Genet. 61:748-760), and implemented in the program Loki, have been able to localize genes for complex traits in both real and simulated data sets. Loki estimates the posterior probability of quantitative trait loci (QTL) at locations on a chromosome in an iterative MCMC process. Unfortunately, interpretation of the results and assessment of their significance have been difficult. Here, we introduce a score, the log of the posterior placement probability ratio (LOP), for assessing oligogenic QTL detection and localization. The LOP is the log of the posterior probability of linkage to the real chromosome divided by the posterior probability of linkage to an unlinked pseudochromosome, with marker informativeness similar to the marker data on the real chromosome. Since the LOP cannot be calculated exactly, we estimate it in simultaneous MCMC on both real and pseudochromosomes. We investigate empirically the distributional properties of the LOP in the presence and absence of trait genes. The LOP is not subject to trait model misspecification in the way a lod score may be, and we show that the LOP can detect linkage for loci of small effect when the lod score cannot. We show how, in the absence of linkage, an empirical distribution of the LOP may be estimated by simulation and used to provide an assessment of linkage detection significance. Genet Epidemiol 24:181-190, 2003. © 2003 Wiley-Liss, Inc. [source]
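
    To make the score concrete: the LOP compares the posterior probability that a QTL is placed on the real chromosome with the corresponding probability for the matched pseudochromosome, both estimated from the same MCMC run. The sketch below is illustrative only and assumes the sampler output has already been reduced to per-iteration placement indicators; it is not Loki's interface, and the log base (10, by analogy with the lod score) and the small continuity correction are assumptions.

```python
import numpy as np

def lop_score(real_hits, pseudo_hits, eps=0.5):
    """Estimate the log posterior placement probability ratio (LOP).

    real_hits, pseudo_hits: boolean arrays over MCMC iterations indicating
    whether a QTL was placed on the real chromosome / the unlinked
    pseudochromosome at that iteration.  A small continuity correction
    (eps pseudo-counts) avoids log(0) in finite chains.
    """
    n = len(real_hits)
    p_real = (np.sum(real_hits) + eps) / (n + 2 * eps)
    p_pseudo = (np.sum(pseudo_hits) + eps) / (n + 2 * eps)
    return np.log10(p_real / p_pseudo)

# Toy usage: 10,000 iterations, QTL placed on the real chromosome far more often.
rng = np.random.default_rng(0)
real = rng.random(10_000) < 0.30     # hypothetical placement rates
pseudo = rng.random(10_000) < 0.02
print(f"estimated LOP = {lop_score(real, pseudo):.2f}")
```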


    Robustness of inference on measured covariates to misspecification of genetic random effects in family studies

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    Ruth M. Pfeiffer
    Abstract Family studies to identify disease-related genes frequently collect only families with multiple cases. It is often desirable to determine if risk factors that are known to influence disease risk in the general population also play a role in the study families. If so, these factors should be incorporated into the genetic analysis to control for confounding. Pfeiffer et al. [2001 Biometrika 88: 933-948] proposed a variance components or random effects model to account for common familial effects and for different genetic correlations among family members. After adjusting for ascertainment, they found maximum likelihood estimates of the measured exposure effects. Although it is appealing that this model accounts for genetic correlations as well as for the ascertainment of families, in order to perform an analysis one needs to specify the distribution of random genetic effects. The current work investigates the robustness of the proposed model with respect to various misspecifications of genetic random effects in simulations. When the true underlying genetic mechanism is polygenic with a small dominant component, or Mendelian with low allele frequency and penetrance, the effects of misspecification on the estimation of fixed effects in the model are negligible. The model is applied to data from a family study on nasopharyngeal carcinoma in Taiwan. Genet Epidemiol 24:14-23, 2003. © 2003 Wiley-Liss, Inc. [source]


    EEG-fMRI of focal epileptic spikes: Analysis with multiple haemodynamic functions and comparison with gadolinium-enhanced MR angiograms

    HUMAN BRAIN MAPPING, Issue 3 2004
    Andrew P. Bagshaw
    Abstract Combined EEG-fMRI has recently been used to explore the BOLD responses to interictal epileptiform discharges. This study examines whether misspecification of the form of the haemodynamic response function (HRF) results in significant fMRI responses being missed in the statistical analysis. EEG-fMRI data from 31 patients with focal epilepsy were analysed with four HRFs peaking from 3 to 9 sec after each interictal event, in addition to a standard HRF that peaked after 5.4 sec. In four patients, fMRI responses were correlated with gadolinium-enhanced MR angiograms and with EEG data from intracranial electrodes. In an attempt to understand the absence of BOLD responses in a significant group of patients, the degree of signal loss occurring as a result of magnetic field inhomogeneities was compared with the detected fMRI responses in ten patients with temporal lobe spikes. Using multiple HRFs resulted in an increased percentage of data sets with significant fMRI activations, from 45% when using the standard HRF alone, to 62.5%. The standard HRF was good at detecting positive BOLD responses, but less appropriate for negative BOLD responses, the majority of which were more accurately modelled by an HRF that peaked later than the standard. Co-registration of statistical maps with gadolinium-enhanced MRIs suggested that the detected fMRI responses were not in general related to large veins. Signal loss in the temporal lobes seemed to be an important factor in 7 of 12 patients who did not show fMRI activations with any of the HRFs. Hum. Brain Mapp. 22:179-192, 2004. © 2004 Wiley-Liss, Inc. [source]
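
    The analysis idea — building several regressors from the same spike onsets by convolving them with HRFs that peak at different latencies — can be sketched as follows. This is a hypothetical single-gamma parameterization chosen so that the mode lands at the requested peak time; the study's actual HRF shapes, sampling step and peak set may differ.

```python
import numpy as np
from scipy.stats import gamma

def hrf(peak_s, dt=0.1, duration_s=30.0, shape=6.0):
    """Single-gamma HRF with unit peak height and mode at `peak_s` seconds."""
    t = np.arange(0.0, duration_s, dt)
    scale = peak_s / (shape - 1.0)     # mode of a gamma pdf = (shape - 1) * scale
    h = gamma.pdf(t, shape, scale=scale)
    return h / h.max()

def spike_regressors(spike_times_s, scan_len_s, dt=0.1, peaks=(3.0, 5.4, 7.0, 9.0)):
    """Convolve a spike-onset train with HRFs peaking at several latencies."""
    t = np.arange(0.0, scan_len_s, dt)
    onsets = np.zeros_like(t)
    onsets[np.round(np.asarray(spike_times_s) / dt).astype(int)] = 1.0
    return {p: np.convolve(onsets, hrf(p, dt))[: len(t)] for p in peaks}

# Hypothetical interictal spikes at 12 s, 47.5 s and 90 s in a 120 s run
regs = spike_regressors([12.0, 47.5, 90.0], scan_len_s=120.0)
print({peak: reg.shape for peak, reg in regs.items()})
```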


    Optimal asset allocation for a large number of investment opportunities

    INTELLIGENT SYSTEMS IN ACCOUNTING, FINANCE & MANAGEMENT, Issue 1 2005
    Hans Georg Zimmermann
    This paper introduces a stock-picking algorithm that can be used to perform an optimal asset allocation for a large number of investment opportunities. The allocation scheme is based upon the idea of causal risk. Instead of referring to the volatility of the assets' time series, the stock-picking algorithm determines the risk exposure of the portfolio by considering the non-forecastability of the assets. The underlying expected return forecasts are based on time-delay recurrent error correction neural networks, which utilize the last model error as an auxiliary input to evaluate their own misspecification. We demonstrate the profitability of our stock-picking approach by constructing portfolios from 68 different assets of the German stock market. It turns out that our approach is superior to a preset benchmark portfolio. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Nonparametric Varying-Coefficient Models for the Analysis of Longitudinal Data

    INTERNATIONAL STATISTICAL REVIEW, Issue 3 2002
    Colin O. Wu
    Summary Longitudinal methods have been widely used in biomedicine and epidemiology to study the patterns of time-varying variables, such as disease progression or trends of health status. Data sets of longitudinal studies usually involve repeatedly measured outcomes and covariates on a set of randomly chosen subjects over time. An important goal of statistical analyses is to evaluate the effects of the covariates, which may or may not depend on time, on the outcomes of interest. Because fully parametric models may be subject to model misspecification and completely unstructured nonparametric models may suffer from the drawbacks of "curse of dimensionality", the varying-coefficient models are a class of structural nonparametric models which are particularly useful in longitudinal analyses. In this article, we present several important nonparametric estimation and inference methods for this class of models, demonstrate the advantages, limitations and practical implementations of these methods in different longitudinal settings, and discuss some potential directions of further research in this area. Applications of these methods are illustrated through two epidemiological examples. [source]
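
    As a minimal illustration of the model class, the sketch below fits a varying-coefficient model Y(t) = X(t)'β(t) + ε by kernel-weighted least squares at a grid of time points. It is only one of the estimators the article surveys; the Epanechnikov kernel, bandwidth and synthetic data are arbitrary choices for the example.

```python
import numpy as np

def epanechnikov(u):
    return 0.75 * np.clip(1 - u**2, 0, None)

def varying_coef_fit(t, X, y, grid, h):
    """Kernel-weighted least squares estimate of beta(t0) at each grid point.

    t: (n,) observation times, X: (n, p) covariates (include a column of ones
    for a varying intercept), y: (n,) outcomes, h: bandwidth.
    """
    betas = np.empty((len(grid), X.shape[1]))
    for k, t0 in enumerate(grid):
        w = epanechnikov((t - t0) / h)
        Xw = X * w[:, None]
        # solve the weighted normal equations (X'WX) beta = X'Wy
        betas[k] = np.linalg.lstsq(Xw.T @ X, Xw.T @ y, rcond=None)[0]
    return betas

# Toy data with beta0(t) = sin(t) and beta1(t) = 1 + 0.5 t
rng = np.random.default_rng(1)
n = 2000
t = rng.uniform(0, 3, n)
x1 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1])
y = np.sin(t) + (1 + 0.5 * t) * x1 + 0.3 * rng.normal(size=n)
grid = np.linspace(0.2, 2.8, 14)
print(varying_coef_fit(t, X, y, grid, h=0.3).round(2))
```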


    Estimation and forecasting in first-order vector autoregressions with near to unit roots and conditional heteroscedasticity

    JOURNAL OF FORECASTING, Issue 7 2009
    Theologos Pantelidis
    Abstract This paper investigates the effects of imposing invalid cointegration restrictions or ignoring valid ones on the estimation, testing and forecasting properties of the bivariate, first-order, vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity, while the other root, λmin, is safely smaller than unity. In this context, we define the 'forecast cost of type I' to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reasons, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λmax equals unity and λmin is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in actual situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd. [source]
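
    A compact Monte Carlo sketch of the 'forecast cost of type I' idea: simulate a stable but nearly cointegrated VAR(1), then compare one-step forecasts from an unrestricted VAR with those from a model that (invalidly) imposes cointegration. Gaussian innovations, one-step horizons and a simple Engle-Granger two-step restricted model are simplifications relative to the paper's design.

```python
import numpy as np

rng = np.random.default_rng(42)
A = np.array([[0.98, 0.30],
              [0.00, 0.70]])          # stable VAR(1) with roots 0.98 and 0.70
T, n_rep = 100, 2000
se_var, se_ecm = [], []

for _ in range(n_rep):
    # simulate y_t = A y_{t-1} + e_t with i.i.d. standard normal innovations
    y = np.zeros((T + 1, 2))
    for t in range(1, T + 1):
        y[t] = A @ y[t - 1] + rng.normal(size=2)
    train, target = y[:T], y[T]

    # (a) unrestricted VAR(1), equation-by-equation OLS
    Y, X = train[1:], np.column_stack([np.ones(T - 1), train[:-1]])
    B = np.linalg.lstsq(X, Y, rcond=None)[0]
    f_var = np.r_[1.0, train[-1]] @ B

    # (b) invalid cointegration restriction via Engle-Granger two-step
    Xc = np.column_stack([np.ones(T), train[:, 1]])
    coint = np.linalg.lstsq(Xc, train[:, 0], rcond=None)[0]
    z = train[:, 0] - Xc @ coint                       # "equilibrium error"
    dY = np.diff(train, axis=0)
    alpha = np.linalg.lstsq(z[:-1, None], dY, rcond=None)[0][0]
    f_ecm = train[-1] + z[-1] * alpha

    se_var.append(np.sum((f_var - target) ** 2))
    se_ecm.append(np.sum((f_ecm - target) ** 2))

print("MSE, unrestricted VAR(1)  :", round(float(np.mean(se_var)), 3))
print("MSE, imposed cointegration:", round(float(np.mean(se_ecm)), 3))
```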


    Forecasting German GDP using alternative factor models based on large datasets

    JOURNAL OF FORECASTING, Issue 4 2007
    Christian Schumacher
    Abstract This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state-space models. Out-of-sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean-squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd. [source]
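
    Of the three factor models compared, the static principal-components (diffusion-index) variant is easy to sketch: extract factors from a standardized panel by PCA and regress the h-step-ahead target on the factors and its own lags. The code below uses synthetic data and is not the paper's exact specification; the dynamic principal-component and subspace models require additional machinery.

```python
import numpy as np

def pca_factors(panel, r):
    """Static principal-component factors from a (T, N) standardized panel."""
    Z = (panel - panel.mean(0)) / panel.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] * s[:r]                        # (T, r) estimated factors

def diffusion_index_forecast(y, panel, r=3, p=1, h=1):
    """Forecast y_{T+h} by OLS on the PCA factors and p own lags of y."""
    F = pca_factors(panel, r)
    T = len(y)
    rows = range(p - 1, T - h)
    X = np.column_stack([np.ones(len(rows)), F[list(rows)]]
                        + [y[np.array(rows) - j] for j in range(p)])
    b = np.linalg.lstsq(X, y[np.array(rows) + h], rcond=None)[0]
    x_T = np.r_[1.0, F[T - 1], [y[T - 1 - j] for j in range(p)]]
    return x_T @ b

# Synthetic panel driven by two common factors; y is a noisy factor proxy
rng = np.random.default_rng(3)
T, N = 120, 60
f = np.cumsum(rng.normal(size=(T, 2)) * 0.1, axis=0)
panel = f @ rng.normal(size=(2, N)) + rng.normal(size=(T, N))
y = f[:, 0] + 0.2 * rng.normal(size=T)
print(f"one-step-ahead forecast: {diffusion_index_forecast(y, panel, r=2):.3f}")
```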


    A Bayesian threshold nonlinearity test for financial time series

    JOURNAL OF FORECASTING, Issue 1 2005
    Mike K. P. So
    Abstract We propose in this paper a threshold nonlinearity test for financial time series. Our approach adopts reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities of two competitive models, namely GARCH and threshold GARCH models. Posterior evidence favouring the threshold GARCH model indicates threshold nonlinearity or volatility asymmetry. Simulation experiments demonstrate that our method works very well in distinguishing GARCH and threshold GARCH models. Sensitivity analysis shows that our method is robust to misspecification in error distribution. In the application to 10 market indexes, clear evidence of threshold nonlinearity is discovered, thus supporting volatility asymmetry. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Robustness of alternative non-linearity tests for SETAR models

    JOURNAL OF FORECASTING, Issue 3 2004
    Wai-Sum Chan
    Abstract In recent years there has been a growing interest in exploiting potential forecast gains from the non-linear structure of self-exciting threshold autoregressive (SETAR) models. Statistical tests have been proposed in the literature to help analysts check for the presence of SETAR-type non-linearities in an observed time series. It is important to study the power and robustness properties of these tests since erroneous test results might lead to misspecified prediction problems. In this paper we investigate the robustness properties of several commonly used non-linearity tests. Both the robustness with respect to outlying observations and the robustness with respect to model specification are considered. The power comparison of these testing procedures is carried out using Monte Carlo simulation. The results indicate that all of the existing tests are not robust to outliers and model misspecification. Finally, an empirical application applies the statistical tests to stock market returns of the four little dragons (Hong Kong, South Korea, Singapore and Taiwan) in East Asia. The non-linearity tests fail to provide consistent conclusions most of the time. The results in this article stress the need for a more robust test for SETAR-type non-linearity in time series analysis and forecasting. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    An evaluation of tests of distributional forecasts

    JOURNAL OF FORECASTING, Issue 6-7 2003
    Pablo Noceti
    Abstract One popular method for testing the validity of a model's forecasts is to use the probability integral transforms (pits) of the forecasts and to test for departures from the dual hypotheses of independence and uniformity, with departures from uniformity tested using the Kolmogorov-Smirnov (KS) statistic. This paper investigates the power of five statistics (including the KS statistic) to reject uniformity of the pits in the presence of misspecification in the mean, variance, skewness or kurtosis of the forecast errors. The KS statistic has the lowest power of the five statistics considered and is always dominated by the Anderson-Darling statistic. Copyright © 2003 John Wiley & Sons, Ltd. [source]
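
    The testing recipe itself is short: transform each realization by its forecast CDF and test the resulting pits for uniformity. The sketch below computes the KS test against U(0,1) and the Anderson-Darling statistic from its standard formula (critical values for the latter would have to come from tables or simulation); the N(0,1)-versus-N(0,1.5²) example is a hypothetical variance misspecification.

```python
import numpy as np
from scipy import stats

def pit_uniformity_tests(realized, forecast_cdf):
    """Probability integral transforms of realized values under the forecast CDF,
    tested for departures from U(0, 1)."""
    u = np.sort(forecast_cdf(realized))
    n = len(u)
    # Kolmogorov-Smirnov test against the uniform distribution on [0, 1]
    ks = stats.kstest(u, "uniform")
    # Anderson-Darling statistic for uniformity (standard order-statistic formula)
    i = np.arange(1, n + 1)
    a2 = -n - np.mean((2 * i - 1) * (np.log(u) + np.log(1 - u[::-1])))
    return ks.statistic, ks.pvalue, a2

# Example: forecasts assume N(0, 1) but the realizations have standard deviation 1.5
rng = np.random.default_rng(7)
y = rng.normal(0, 1.5, size=500)
ks_stat, ks_p, ad_stat = pit_uniformity_tests(y, stats.norm(0, 1).cdf)
print(f"KS = {ks_stat:.3f} (p = {ks_p:.3f}), AD = {ad_stat:.2f}")
```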


    ESTIMATION AND HYPOTHESIS TESTING FOR NONPARAMETRIC HEDONIC HOUSE PRICE FUNCTIONS

    JOURNAL OF REGIONAL SCIENCE, Issue 3 2010
    Daniel P. McMillen
    ABSTRACT In contrast to the rigid structure of standard parametric hedonic analysis, nonparametric estimators control for misspecified spatial effects while using highly flexible functional forms. Despite these advantages, nonparametric procedures are still not used extensively for spatial data analysis due to perceived difficulties associated with estimation and hypothesis testing. We demonstrate that nonparametric estimation is feasible for large datasets with many independent variables, offering statistical tests of individual covariates and tests of model specification. We show that fixed parameterization of distance to the nearest rapid transit line is a misspecification and that pricing of access to this amenity varies across neighborhoods within Chicago. [source]


    To Hedge or Not to Hedge: Managing Demographic Risk in Life Insurance Companies

    JOURNAL OF RISK AND INSURANCE, Issue 1 2006
    Helmut Gründl
    Demographic risk, i.e., the risk that life tables change in a nondeterministic way, is a serious threat to the financial stability of an insurance company having underwritten life insurance and annuity business. The inverse influence of changes in mortality laws on the market value of life insurance and annuity liabilities creates natural hedging opportunities. Within a realistically calibrated shareholder value (SHV) maximization framework, we analyze the implications of demographic risk on the optimal risk management mix (equity capital, asset allocation, and product policy) for a limited liability insurance company operating in a market with insolvency-averse insurance buyers. Our results show that the utilization of natural hedging is optimal only if equity is scarce. Otherwise, hedging can even destroy SHV. A sensitivity analysis shows that a misspecification of demographic risk has severe consequences for both the insurer and the insured. This result highlights the importance of further research in the field of demographic risk. [source]


    Hierarchical related regression for combining aggregate and individual data in studies of socio-economic disease risk factors

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2008
    Christopher Jackson
    Summary. To obtain information about the contribution of individual and area level factors to population health, it is desirable to use both data collected on areas, such as censuses, and on individuals, e.g. survey and cohort data. Recently developed models allow us to carry out simultaneous regressions on related data at the individual and aggregate levels. These can reduce 'ecological bias' that is caused by confounding, model misspecification or lack of information and increase power compared with analysing the data sets singly. We use these methods in an application investigating individual and area level sociodemographic predictors of the risk of hospital admissions for heart and circulatory disease in London. We discuss the practical issues that are encountered in this kind of data synthesis and demonstrate that this modelling framework is sufficiently flexible to incorporate a wide range of sources of data and to answer substantive questions. Our analysis shows that the variations that are observed are mainly attributable to individual level factors rather than the contextual effect of deprivation. [source]


    Regression analysis based on semicompeting risks data

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2008
    Jin-Jian Hsieh
    Summary. Semicompeting risks data are commonly seen in biomedical applications in which a terminal event censors a non-terminal event. Possible dependent censoring complicates statistical analysis. We consider regression analysis based on a non-terminal event, say disease progression, which is subject to censoring by death. The methodology proposed is developed for discrete covariates under two types of assumption. First, separate copula models are assumed for each covariate group and then a flexible regression model is imposed on the progression time which is of major interest. Model checking procedures are also proposed to help to choose a best-fitted model. Under a two-sample setting, Lin and co-workers proposed a competing method which requires an additional marginal assumption on the terminal event and implicitly assumes that the dependence structures in the two groups are the same. Using simulations, we compare the two approaches on the basis of their finite sample performances and robustness properties under model misspecification. The method proposed is applied to a bone marrow transplant data set. [source]


    Causal inference with generalized structural mean models

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2003
    S. Vansteelandt
    Summary. We estimate cause-effect relationships in empirical research where exposures are not completely controlled, as in observational studies or with patient non-compliance and self-selected treatment switches in randomized clinical trials. Additive and multiplicative structural mean models have proved useful for this but suffer from the classical limitations of linear and log-linear models when accommodating binary data. We propose the generalized structural mean model to overcome these limitations. This is a semiparametric two-stage model which extends the structural mean model to handle non-linear average exposure effects. The first-stage structural model describes the causal effect of received exposure by contrasting the means of observed and potential exposure-free outcomes in exposed subsets of the population. For identification of the structural parameters, a second stage 'nuisance' model is introduced. This takes the form of a classical association model for expected outcomes given observed exposure. Under the model, we derive estimating equations which yield consistent, asymptotically normal and efficient estimators of the structural effects. We examine their robustness to model misspecification and construct robust estimators in the absence of any exposure effect. The double-logistic structural mean model is developed in more detail to estimate the effect of observed exposure on the success of treatment in a randomized controlled blood pressure reduction trial with self-selected non-compliance. [source]


    Assessing accuracy of a continuous screening test in the presence of verification bias

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2005
    Todd A. Alonzo
    Summary. In studies to assess the accuracy of a screening test, often definitive disease assessment is too invasive or expensive to be ascertained on all the study subjects. Although it may be more ethical or cost effective to ascertain the true disease status with a higher rate in study subjects where the screening test or additional information is suggestive of disease, estimates of accuracy can be biased in a study with such a design. This bias is known as verification bias. Verification bias correction methods that accommodate screening tests with binary or ordinal responses have been developed; however, no verification bias correction methods exist for tests with continuous results. We propose and compare imputation and reweighting bias-corrected estimators of true and false positive rates, receiver operating characteristic curves and area under the receiver operating characteristic curve for continuous tests. Distribution theory and simulation studies are used to compare the proposed estimators with respect to bias, relative efficiency and robustness to model misspecification. The bias correction estimators proposed are applied to data from a study of screening tests for neonatal hearing loss. [source]
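
    One of the two correction strategies, reweighting, can be sketched directly: model the probability of verification given the test result, then weight each verified subject by the inverse of that probability when estimating true and false positive rates. The logistic verification model, cutoff and synthetic data below are illustrative assumptions; the paper's imputation estimators and covariate adjustments are not shown.

```python
import numpy as np
import statsmodels.api as sm

def ipw_tpr_fpr(score, disease, verified, cutoff):
    """Inverse-probability-weighted (reweighting) estimates of TPR and FPR
    at `cutoff`, correcting for verification bias.

    score:    continuous screening test result, all subjects
    disease:  true status (1/0), only meaningful where verified == 1
    verified: 1 if disease status was ascertained
    """
    # Model the verification mechanism given the observed test score
    X = sm.add_constant(score)
    p_verify = sm.Logit(verified, X).fit(disp=0).predict(X)
    w = verified / p_verify                      # zero weight for unverified subjects
    pos = (score > cutoff).astype(float)
    tpr = np.sum(w * pos * disease) / np.sum(w * disease)
    fpr = np.sum(w * pos * (1 - disease)) / np.sum(w * (1 - disease))
    return tpr, fpr

# Synthetic example: verification is more likely for high test scores
rng = np.random.default_rng(11)
n = 5000
d = rng.binomial(1, 0.2, n)
s = rng.normal(d * 1.0, 1.0)
verified = rng.binomial(1, 1 / (1 + np.exp(-(s - 0.5))))
d_obs = np.where(verified == 1, d, 0)            # unverified subjects get weight 0
print(ipw_tpr_fpr(s, d_obs, verified, cutoff=0.5))
```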


    Choice of parametric models in survival analysis: applications to monotherapy for epilepsy and cerebral palsy

    JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2003
    G. P. S. Kwong
    Summary. In the analysis of medical survival data, semiparametric proportional hazards models are widely used. When the proportional hazards assumption is not tenable, these models will not be suitable. Other models for covariate effects can be useful. In particular, we consider accelerated life models, in which the effect of covariates is to scale the quantiles of the base-line distribution. Solomon and Hutton have suggested that there is some robustness to misspecification of survival regression models. They showed that the relative importance of covariates is preserved under misspecification with assumptions of small coefficients and orthogonal transformation of covariates. We elucidate these results by applications to data from five trials which compare two common anti-epileptic drugs (carbamazepine versus sodium valproate monotherapy for epilepsy) and to survival of a cohort of people with cerebral palsy. Results on the robustness against model misspecification depend on the assumptions of small coefficients and on the underlying distribution of the data. These results hold in cerebral palsy but do not hold in epilepsy data which have early high hazard rates. The orthogonality of coefficients is not important. However, the choice of model is important for an estimation of the magnitude of effects, particularly if the base-line shape parameter indicates high initial hazard rates. [source]


    Seasonal Unit Root Tests Under Structural Breaks

    JOURNAL OF TIME SERIES ANALYSIS, Issue 1 2004
    Uwe Hassler
    Abstract. In this paper, several seasonal unit root tests are analysed in the context of structural breaks at a known time and a new break corrected test is suggested. We show that the widely used HEGY test, as well as an LM variant thereof, are asymptotically robust to seasonal mean shifts of finite magnitude. In finite samples, however, experiments reveal that such tests suffer from severe size distortions and power reductions when breaks are present. Hence, a new break corrected LM test is proposed to overcome this problem. Importantly, the correction for seasonal mean shifts bears no consequence on the limiting distributions, thereby maintaining the legitimacy of canonical critical values. Moreover, although this test assumes a breakpoint a priori, it is robust to misspecification of the time of the break. This asymptotic property is well reproduced in finite samples. Based on a Monte-Carlo study, our new test is compared with other procedures suggested in the literature and shown to hold superior finite sample properties. [source]


    Bayesian strategies for dynamic pricing in e-commerce

    NAVAL RESEARCH LOGISTICS: AN INTERNATIONAL JOURNAL, Issue 3 2007
    Eric Cope
    Abstract E-commerce platforms afford retailers unprecedented visibility into customer purchase behavior and provide an environment in which prices can be updated quickly and cheaply in response to changing market conditions. This study investigates dynamic pricing strategies for maximizing revenue in an Internet retail channel by actively learning customers' demand response to price. A general methodology is proposed for dynamically pricing information goods, as well as other nonperishable products for which inventory levels are not an essential consideration in pricing. A Bayesian model of demand uncertainty involving the Dirichlet distribution or a mixture of such distributions as a prior captures a wide range of beliefs about customer demand. We provide both analytic formulas and efficient approximation methods for updating these prior distributions after sales data have been observed. We then investigate several strategies for sequential pricing based on index functions that consider both the potential revenue and the information value of selecting prices. These strategies require a manageable amount of computation, are robust to many types of prior misspecification, and yield high revenues compared to static pricing and passive learning approaches. © 2006 Wiley Periodicals, Inc. Naval Research Logistics, 2007 [source]
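
    The active-learning pricing loop can be illustrated with a deliberately simplified stand-in for the paper's machinery: independent Beta priors on the purchase probability at each grid price and Thompson sampling as the index rule, rather than the Dirichlet (or Dirichlet-mixture) demand prior and the index functions developed in the paper. All prices and demand values below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
prices = np.array([5.0, 7.5, 10.0, 12.5, 15.0])
true_buy_prob = np.array([0.60, 0.45, 0.30, 0.15, 0.05])   # unknown to the seller
alpha = np.ones(len(prices))    # Beta(1, 1) prior on purchase prob. at each price
beta = np.ones(len(prices))
revenue = 0.0

for t in range(5000):
    # Thompson sampling: draw a demand curve from the posterior, price greedily
    sampled_prob = rng.beta(alpha, beta)
    k = int(np.argmax(prices * sampled_prob))
    sale = rng.random() < true_buy_prob[k]
    alpha[k] += sale                      # posterior update from the observed outcome
    beta[k] += 1 - sale
    revenue += prices[k] * sale

print(f"total revenue: {revenue:.0f}")
print("posterior mean buy prob.:", (alpha / (alpha + beta)).round(2))
```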


    Effects of correlation and missing data on sample size estimation in longitudinal clinical trials

    PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 1 2010
    Song Zhang
    Abstract In longitudinal clinical trials, a common objective is to compare the rates of changes in an outcome variable between two treatment groups. Generalized estimating equation (GEE) has been widely used to examine if the rates of changes are significantly different between treatment groups due to its robustness to misspecification of the true correlation structure and randomly missing data. The sample size formula for repeated outcomes is based on the assumption of missing completely at random and a large sample approximation. A simulation study is conducted to investigate the performance of GEE sample size formula with small sample sizes, damped exponential family of correlation structure and non-ignorable missing data. Copyright © 2008 John Wiley & Sons, Ltd. [source]
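
    Rather than a closed-form sample size formula, the same question can be examined by simulation: generate longitudinal data with exchangeable correlation and MCAR dropout, fit a GEE, and count rejections of the group-by-time interaction. The sketch below uses statsmodels' GEE with a Gaussian family; the effect size, correlation and missingness rate are illustrative values, not those of the paper.

```python
import numpy as np
import statsmodels.api as sm

def gee_power(n_per_arm=100, times=np.arange(5.0), slope_diff=0.3,
              sigma=1.0, rho=0.5, p_miss=0.1, n_sim=200, seed=0):
    """Empirical power of the GEE Wald test for a difference in rates of change,
    under exchangeable within-subject correlation and MCAR missingness."""
    rng = np.random.default_rng(seed)
    m = len(times)
    cov = sigma**2 * (rho * np.ones((m, m)) + (1 - rho) * np.eye(m))
    rejections = 0
    for _ in range(n_sim):
        rows = []
        for i in range(2 * n_per_arm):
            grp = int(i >= n_per_arm)
            mu = (0.2 + grp * slope_diff) * times
            resp = rng.multivariate_normal(mu, cov)
            keep = rng.random(m) > p_miss          # missing completely at random
            rows += [(i, grp, t, r) for t, r, k in zip(times, resp, keep) if k]
        ids, g, t, y = (np.array(c) for c in zip(*rows))
        X = sm.add_constant(np.column_stack([g, t, g * t]))
        fit = sm.GEE(y, X, groups=ids, family=sm.families.Gaussian(),
                     cov_struct=sm.cov_struct.Exchangeable()).fit()
        rejections += abs(fit.params[-1] / fit.bse[-1]) > 1.96   # group-by-time term
    return rejections / n_sim

print(f"empirical power ≈ {gee_power():.2f}")
```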


    Duncan's model for X-bar control charts: sensitivity analysis to input parameters

    QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 1 2010
    Cinzia Mortarino
    Abstract Duncan's model is a well-known procedure to build a control chart with specific reference to the production process it has to be applied to. Although many papers report true applications proving the procedure's noteworthy economic advantages over control charts set purely on the basis of standard statistical criteria, this method is often perceived only as an academic exercise. Perhaps the greater barrier preventing its practical application stems from the difficulty in making cost items explicit. In this paper a sensitivity analysis is proposed for misspecification in the cost parameters for optimal solutions of Duncan's model. While similar contributions published in the literature perform sensitivity analyses with a one-factor-at-a-time scheme, the original contribution of this paper is represented by the focus given on interactions among changes in values of different cost parameters. The results obtained here denote that all factors significantly affect optimal solutions through quite complicated interactions. This should not, in our opinion, discourage the implementation of Duncan's model, pointing conversely to its robust versions, already available in the current literature. Copyright © 2009 John Wiley & Sons, Ltd. [source]


    Minimax robust designs for misspecified regression models

    THE CANADIAN JOURNAL OF STATISTICS, Issue 4 2003
    Peilin Shi
    Abstract The authors propose minimax robust designs for regression models whose response function is possibly misspecified. These designs, which minimize the maximum of the mean squared error matrix, can control the bias caused by model misspecification and provide the desired efficiency through one parameter. The authors call on a nonsmooth optimization technique to derive these designs analytically. Their results extend those of Heo, Schmuland & Wiens (2001). The authors also discuss several examples for approximately polynomial regression. [source]


    On the sensitivity of the restricted least squares estimators to covariance misspecification

    THE ECONOMETRICS JOURNAL, Issue 3 2007
    Alan T.K. Wan
    Summary. Traditional econometrics has long stressed the serious consequences of non-spherical disturbances for the estimation and testing procedures under the spherical disturbance setting, that is, the procedures become invalid and can give rise to misleading results. In practice, it is not unusual, however, to find that the parameter estimates do not change much after fitting the more general structure. This suggests that the usual procedures may well be robust to covariance misspecification. Banerjee and Magnus (1999) proposed sensitivity statistics to decide if the Ordinary Least Squares estimators of the coefficients and the disturbance variance are sensitive to deviations from the spherical error assumption. This paper extends their work by investigating the sensitivity of the restricted least squares estimator to covariance misspecification where the restrictions may or may not be correct. Large sample results giving analytical evidence to some of the numerical findings reported in Banerjee and Magnus (1999) are also obtained. [source]
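
    To fix notation for the object under study: the restricted least squares estimator solves the least squares problem subject to linear restrictions Rβ = r. The sketch below computes it under the spherical assumption and under the true AR(1) covariance for simulated data, as a numerical illustration of the sensitivity question; it does not reproduce the paper's analytic sensitivity statistics, and the design and restriction are hypothetical.

```python
import numpy as np

def restricted_ls(X, y, R, r, V=None):
    """Restricted least squares subject to R beta = r.
    If an error covariance V is supplied, the restricted GLS analogue is returned."""
    W = np.eye(len(y)) if V is None else np.linalg.inv(V)
    XtWX_inv = np.linalg.inv(X.T @ W @ X)
    b = XtWX_inv @ X.T @ W @ y
    lam = np.linalg.inv(R @ XtWX_inv @ R.T) @ (R @ b - r)
    return b - XtWX_inv @ R.T @ lam

# Errors actually follow an AR(1) process, but a spherical model may be assumed
rng = np.random.default_rng(9)
n, rho = 200, 0.8
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, 2.0])
e = np.zeros(n)
for t in range(1, n):
    e[t] = rho * e[t - 1] + rng.normal()
y = X @ beta + e
R, r_vec = np.array([[0.0, 1.0, -1.0]]), np.array([0.0])   # restriction b1 = b2 (true here)
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / (1 - rho**2)
print("RLS under spherical errors:", restricted_ls(X, y, R, r_vec).round(3))
print("RLS under true AR(1) cov. :", restricted_ls(X, y, R, r_vec, V).round(3))
```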


    Least squares estimation and tests of breaks in mean and variance under misspecification

    THE ECONOMETRICS JOURNAL, Issue 1 2004
    Jean-Yves Pitarakis
    Summary. In this paper we investigate the consequences of misspecification on the large sample properties of change-point estimators and the validity of tests of the null hypothesis of linearity versus the alternative of a structural break. Specifically this paper concentrates on the interaction of structural breaks in the mean and variance of a time series when either of the two is omitted from the estimation and inference procedures. Our analysis considers the case of a break in mean under omitted-regime-dependent heteroscedasticity and that of a break in variance under an omitted mean shift. The large and finite sample properties of the resulting least-squares-based estimators are investigated and the impact of the two types of misspecification on inferences about the presence or absence of a structural break subsequently analysed. [source]
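
    The estimator whose behaviour is being studied is the least-squares break-point estimator: choose the break date that minimizes the total sum of squared residuals from fitting separate means before and after the candidate date. The toy example below also contains an (omitted) variance break, the kind of misspecification the paper analyses; the trimming fraction and data are illustrative.

```python
import numpy as np

def ls_break_in_mean(y, trim=0.15):
    """Least-squares estimate of a single break date in the mean:
    the candidate split minimizing the total sum of squared residuals."""
    n = len(y)
    lo, hi = int(trim * n), int((1 - trim) * n)
    ssr = [np.sum((y[:k] - y[:k].mean()) ** 2) +
           np.sum((y[k:] - y[k:].mean()) ** 2) for k in range(lo, hi)]
    return lo + int(np.argmin(ssr))

# Break in mean at t = 120, plus an (ignored) break in variance at t = 60
rng = np.random.default_rng(2)
y = np.r_[rng.normal(0, 1, 60), rng.normal(0, 3, 60), rng.normal(1.5, 3, 80)]
print("estimated break date:", ls_break_in_mean(y))
```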


    THE FISCAL THEORY OF THE PRICE LEVEL: A CRITIQUE*

    THE ECONOMIC JOURNAL, Issue 481 2002
    Willem H. Buiter
    This paper argues that the `fiscal theory of the price level' (FTPL) has feet of clay. The source of the problem is a fundamental economic misspecification. The FTPL confuses two key building blocks of a model of a market economy: budget constraints, which must be satisfied identically, and market clearing or equilibrium conditions. The FTPL asssumes that the government's intertemporal budget constraint needs to be satisfied only in equilibrium. This economic misspecification has far-reaching implications for the mathematical properties of the equilibria supported by models that impose the structure of the FTPL. It produces a rash of contradictions and anomalies. [source]