Model Selection Criteria
Selected Abstracts

Estimation Optimality of Corrected AIC and Modified Cp in Linear Regression
INTERNATIONAL STATISTICAL REVIEW, Issue 2 2006
Simon L. Davies

Summary: Model selection criteria often arise by constructing unbiased or approximately unbiased estimators of measures known as expected overall discrepancies (Linhart & Zucchini, 1986, p. 19). Such measures quantify the disparity between the true model (i.e., the model which generated the observed data) and a fitted candidate model. For linear regression with normally distributed error terms, the "corrected" Akaike information criterion and the "modified" conceptual predictive statistic have been proposed as exactly unbiased estimators of their respective target discrepancies. We expand on previous work to additionally show that these criteria achieve minimum variance within the class of unbiased estimators. (A French résumé duplicating this summary has been omitted.)

Local to unity, long-horizon forecasting thresholds for model selection in the AR(1)
JOURNAL OF FORECASTING, Issue 7 2004
John L.
Turner

Abstract: This article introduces a novel framework for analysing long-horizon forecasting of the near non-stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long-horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity 'drift' to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these 'forecasting thresholds', while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.

Selecting explanatory variables with the modified version of the Bayesian information criterion
QUALITY AND RELIABILITY ENGINEERING INTERNATIONAL, Issue 6 2008
Małgorzata Bogdan

Abstract: We consider the situation in which a large database needs to be analyzed to identify a few important predictors of a given quantitative response variable. There is ample evidence that in this case classical model selection criteria, such as the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), have a strong tendency to overestimate the number of regressors. In our earlier papers, we developed a modified version of BIC (mBIC), which enables the incorporation of prior knowledge on the number of regressors and prevents overestimation. In this article, we review earlier results on mBIC and discuss the relationship of this criterion to the well-known Bonferroni correction for multiple testing and to the Bayes oracle, which minimizes the expected costs of inference.
We use computer simulations and a real data analysis to illustrate the performance of the original mBIC and of its rank version, which is designed to deal with data that contain some outlying observations. Copyright © 2008 John Wiley & Sons, Ltd.

Bayesian Estimation of Cognitive Decline in Patients with Alzheimer's Disease
THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2002
Patrick Bélisle

Abstract: Recently, there has been great interest in estimating the decline in cognitive ability in patients with Alzheimer's disease. Measuring decline is not straightforward, since one must consider the choice of scale used to measure cognitive ability, possible floor and ceiling effects, between-patient variability, and the unobserved age of onset. The authors demonstrate how to account for these features by modeling decline in scores on the Mini-Mental State Exam in two different data sets. To this end, they use hierarchical Bayesian models with change points, for which posterior distributions are calculated using the Gibbs sampler. They compare several such models using both prior and posterior Bayes factors, and compare the results from the models suggested by these two model selection criteria. (A French résumé duplicating this abstract has been omitted.)

Model selection tests for nonlinear dynamic models
THE ECONOMETRICS JOURNAL, Issue 1 2002
Douglas Rivers

This paper generalizes Vuong's (1989) asymptotically normal tests for model selection in several important directions. First, it allows for incompletely parametrized models, such as econometric models defined by moment conditions. Second, it allows for a broad class of estimation methods that includes most estimators currently used in practice. Third, it considers model selection criteria other than the models' likelihoods, such as the mean squared errors of prediction. Fourth, the proposed tests are applicable to possibly misspecified nonlinear dynamic models with weakly dependent heterogeneous data. Cases where the estimation methods optimize the model selection criteria are distinguished from cases where they do not. We also consider the estimation of the asymptotic variance of the difference between the competing models' selection criteria, which is necessary for our tests. Finally, we discuss conditions under which our tests are valid. It is seen that the competing models must be essentially nonnested.

Penalized-R2 Criteria for Model Selection
THE MANCHESTER SCHOOL, Issue 6 2009
Larry W. Taylor

It is beneficial to observe that popular model selection criteria for the linear model are equivalent to penalized versions of R2. Let PR2 refer to any one of these model selection criteria. Then PR2 serves the dual purpose of selecting the model and summarizing the resulting fit subject to the penalty function.
Furthermore, it is straightforward to extend the logic of PR2 to instrumental variables estimation and to the nonparametric selection of regressors. For two-stage least squares estimation, a simulation study investigates the finite-sample performance of PR2 in selecting the correct model in cases of either strong or weak instruments.

Testing Random Effects in the Linear Mixed Model Using Approximate Bayes Factors
BIOMETRICS, Issue 2 2009
Benjamin R. Saville

Summary: Deciding which predictor effects may vary across subjects is a difficult issue. Standard model selection criteria and test procedures are often inappropriate for comparing models with different numbers of random effects, owing to constraints on the parameter space of the variance components. Testing on the boundary of the parameter space changes the asymptotic distribution of some classical test statistics and causes problems in approximating Bayes factors. We propose a simple approach for testing random effects in the linear mixed model using Bayes factors. We scale each random effect to the residual variance and introduce a parameter that controls the relative contribution of each random effect, free of the scale of the data. We integrate out the random effects and the variance components using closed-form solutions. The resulting integrals needed to calculate the Bayes factor are low-dimensional integrals lacking variance components and can be efficiently approximated with Laplace's method. We propose a default prior distribution on the parameter controlling the contribution of each random effect and conduct simulations to show that our method has good properties for model selection problems. Finally, we illustrate our methods on data from a clinical trial of patients with bipolar disorder and on data from an environmental study of water disinfection by-products and male reproductive outcomes.
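Several of the abstracts above concern penalized-likelihood criteria for Gaussian linear regression (AIC, its small-sample correction AICc, and BIC) and their interpretation as penalized versions of R2. As a minimal illustration only, not code from any of the papers listed, the sketch below computes these criteria for an ordinary least squares fit using their standard textbook formulas; all function and variable names are my own, and the data are simulated.

```python
import numpy as np

def selection_criteria(y, X):
    """Fit OLS of y on X and return common model selection criteria.

    Uses the standard Gaussian-likelihood formulas with
    p = (number of regression coefficients) + 1 for the error variance.
    """
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    sigma2 = rss / n                                  # ML estimate of error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    p = k + 1                                         # coefficients + variance
    aic = -2 * loglik + 2 * p
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)        # small-sample "corrected" AIC
    bic = -2 * loglik + p * np.log(n)
    # R2 always rises as regressors are added; the criteria above can be read
    # as R2 traded off against a penalty that grows with model size.
    r2 = 1 - rss / np.sum((y - y.mean()) ** 2)
    return {"AIC": aic, "AICc": aicc, "BIC": bic, "R2": r2}

rng = np.random.default_rng(0)
n = 50
X_full = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])  # intercept + 4 regressors
y = X_full[:, :3] @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# Compare the data-generating 3-column model with the over-fitted 5-column model:
# the larger model always has the higher R2, but the penalties work against it.
small = selection_criteria(y, X_full[:, :3])
full = selection_criteria(y, X_full)
print(small)
print(full)
```

The gap between AICc and AIC, 2p(p + 1)/(n - p - 1), vanishes as n grows, which is why the correction matters mainly when the parameter count is non-negligible relative to the sample size.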