Parametric Models

Selected Abstracts


Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007
J. Casellas
Summary The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility when fitting field data. In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, we lack accurate methods to establish the required number of change points and their positions along the time axis. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with change points included as unknown parameters in the model. Concretely, a Weibull log-normal animal frailty model was assumed and solved with a Bayesian approach. The required fully conditional posterior distributions were derived. During the sampling process, all the parameters in the model were updated using a Metropolis–Hastings step, with the exception of the genetic variance, which was updated with a standard Gibbs sampler. This methodology was tested on simulated data sets, each one analysed through several models with different numbers of change points. The models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values used in the simulations well and placed high density on them. Moreover, results showed that the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points.
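
To make the piecewise baseline hazard concrete, here is a minimal numerical sketch; the change points and the per-piece scale and shape values are illustrative, not estimates from the paper, and the survival function is obtained by numerically integrating the hazard.

```python
import numpy as np

# piecewise Weibull-type baseline hazard: within each interval between change
# points the hazard follows its own Weibull shape (all values illustrative)
tau = np.array([0.0, 2.0, 5.0])        # change points, tau[0] = 0
lam = np.array([0.10, 0.05, 0.08])     # scale-type parameter of each piece
rho = np.array([1.5, 0.8, 1.2])        # shape parameter of each piece

def hazard(t):
    j = np.searchsorted(tau, t, side="right") - 1
    return lam[j] * rho[j] * t ** (rho[j] - 1.0)

def survival(t, grid=2000):
    s = np.linspace(1e-9, t, grid)
    h = hazard(s)
    # S(t) = exp(-cumulative hazard), trapezoid rule
    return np.exp(-np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(s)))

print(hazard(1.0), survival(4.0))
```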


Many zeros does not mean zero inflation: comparing the goodness-of-fit of parametric models to multivariate abundance data

ENVIRONMETRICS, Issue 3 2005
David I. Warton
Abstract An important step in studying the ecology of a species is choosing a statistical model of abundance; however, there has been little general consideration of which statistical model to use. In particular, abundance data have many zeros (often 50–80 per cent of all values), and zero-inflated count distributions are often used to specifically model the high frequency of zeros in abundance data. However, in such cases it is often taken for granted that a zero-inflated model is required, and the goodness-of-fit to count distributions with and without zero inflation is not often compared for abundance data. In this article, the goodness-of-fit was compared for several marginal models of abundance in 20 multivariate datasets (a total of 1672 variables across all datasets) from different sources. Multivariate abundance data are quite commonly collected in applied ecology, and the properties of these data may differ from abundances collected in autecological studies. Goodness-of-fit was assessed using AIC values, graphs of observed vs expected proportions of zeros in a dataset, and graphs of the sample mean–variance relationship. The negative binomial model was the best-fitting of the count distributions, without zero inflation. The high frequency of zeros was well described by the systematic component of the model (i.e. at some places predicted abundance was high, while at others it was zero) and so it was rarely necessary to modify the random component of the model (i.e. fitting a zero-inflated distribution). A Gaussian model based on transformed abundances fitted data surprisingly well, and rescaled per cent cover was usually poorly fitted by a count distribution. In conclusion, results suggest that the high frequency of zeros commonly seen in multivariate abundance data is best considered to come from distributions where mean abundance is often very low (hence there are many zeros), as opposed to claiming that there are an unusually high number of zeros compared to common parametric distributions. Copyright © 2005 John Wiley & Sons, Ltd.
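
The comparison at the heart of the article can be mimicked in a few lines: simulate overdispersed counts with many zeros, then compare AIC for a Poisson fit and a negative binomial fit, neither of which is zero-inflated. The data and starting values here are illustrative only.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
y = stats.nbinom.rvs(0.5, 0.2, size=500, random_state=rng)  # overdispersed, many zeros

def aic(nll, n_params):
    return 2.0 * nll + 2.0 * n_params

# Poisson: the MLE of the mean is the sample mean
nll_pois = -stats.poisson.logpmf(y, y.mean()).sum()

# negative binomial: fit (size r, probability p) by direct optimization
def nb_nll(params):
    r, p = params
    if r <= 0 or not 0.0 < p < 1.0:
        return np.inf
    return -stats.nbinom.logpmf(y, r, p).sum()

res = optimize.minimize(nb_nll, x0=[1.0, 0.5], method="Nelder-Mead")
print("AIC Poisson:", aic(nll_pois, 1))
print("AIC NegBin :", aic(res.fun, 2))   # markedly lower for these data
```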


Likelihood-based tests for localized spatial clustering of disease

ENVIRONMETRICS, Issue 8 2004
Ronald E. Gangnon
Abstract Numerous methods have been proposed for detecting spatial clustering of disease. Two methods for likelihood-based inference using parametric models for clustering are the spatial scan statistic and the weighted average likelihood ratio (WALR) test. The spatial scan statistic provides a measure of evidence for clustering at a specific, data-identified location; it can be biased towards finding clusters in areas with greater spatial resolution. The WALR test provides a more global assessment of the evidence for clustering and identifies cluster locations in a relatively unbiased fashion using a posterior distribution over potential clusters. We consider two new statistics which attempt to combine the specificity of the scan statistic with the lack of bias of the WALR test: a scan statistic based on a penalized likelihood ratio and a localized version of the WALR test. We evaluate the power of these tests and bias of the associated estimates through simulations and demonstrate their application using the well-known New York leukemia data. Copyright © 2004 John Wiley & Sons, Ltd.
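
The core computation behind the scan statistic, a Poisson likelihood ratio maximized over candidate zones (Kulldorff's form), can be sketched as follows. The regions, counts and candidate clusters are invented for illustration; in practice significance is assessed by Monte Carlo replication.

```python
import numpy as np

counts = np.array([12, 5, 30, 8, 7])            # observed cases per region (toy)
expected = np.array([10, 6, 15, 9, 8], float)   # covariate-adjusted expectations
expected *= counts.sum() / expected.sum()       # standardize so totals match
C = counts.sum()
candidates = [{0}, {2}, {0, 2}, {1, 3, 4}]      # candidate cluster zones

def log_lr(zone):
    idx = list(zone)
    c, e = counts[idx].sum(), expected[idx].sum()
    if c <= e:                                  # only elevated risk counts as a cluster
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

best = max(candidates, key=log_lr)
print(best, log_lr(best))
```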


Multi-step forecasting for nonlinear models of high frequency ground ozone data: a Monte Carlo approach

ENVIRONMETRICS, Issue 4 2002
Alessandro Fassò
Abstract Multi-step prediction using high frequency environmental data is considered. The complex dynamics of ground ozone often requires models involving covariates, multiple frequency periodicities, long memory, nonlinearity and heteroscedasticity. For these reasons parametric models, which include seasonal fractionally integrated components, self-exciting threshold autoregressive components, covariates and autoregressive conditionally heteroscedastic errors with heavy tails, have recently been introduced. Here, to obtain an h-step-ahead forecast for these models we use a Monte Carlo approach. The performance of the forecast is evaluated on different nonlinear models, comparing some statistical indices with respect to the prediction horizon. As an application of this method, the forecast precision for a two-year hourly ozone data set from an air-traffic pollution station located in Bergamo, Italy, is analyzed. Copyright © 2002 John Wiley & Sons, Ltd.
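
Once a model is fitted, the Monte Carlo h-step-ahead forecast itself is simple: simulate many future paths and summarize them. In this sketch a toy two-regime threshold AR(1) with invented coefficients stands in for the paper's much richer specification (long memory, covariates, heavy-tailed ARCH errors).

```python
import numpy as np

rng = np.random.default_rng(0)

def setar_step(x, eps):
    # two-regime threshold AR(1); coefficients are made up for illustration
    return np.where(x <= 0.0, 0.7 * x, -0.4 * x) + eps

def mc_forecast(x_last, h, sigma, n_paths=5000):
    x = np.full(n_paths, float(x_last))
    for _ in range(h):                          # propagate each simulated path
        x = setar_step(x, rng.normal(0.0, sigma, n_paths))
    return x.mean(), np.percentile(x, [5, 95])

point, interval = mc_forecast(x_last=1.2, h=24, sigma=0.5)
print(point, interval)                          # point forecast and 90% band
```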


Semiparametric approaches to flow normalization and source apportionment of substance transport in rivers

ENVIRONMETRICS, Issue 3 2001
Per Stålnacke
Abstract Statistical analysis of relationships between time series of data exhibiting seasonal variation is often of great interest in environmental monitoring and assessment. The present study focused on regression models with time-varying intercept and slope parameters. In particular, we derived and tested semiparametric models in which rapid interannual and interseasonal variation in the intercept was penalized in the search for a model that combined a good fit to data with smoothly varying parameters. Furthermore, we developed a software package for efficient estimation of the parameters of such models. Test runs on time series of runoff data and riverine loads of nutrients and chloride in the Rhine River showed that the proposed smoothing methods were particularly useful for analysing time-varying linear relationships between time series with both seasonal variation and temporal trends. The predictive performance of the semiparametric models was superior to that of conventional parametric models. In addition, normalization of observed annual loads to mean or minimum runoff produced smooth curves that provided convincing evidence of human impact on water quality. Copyright © 2001 John Wiley & Sons, Ltd.
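
The penalization idea can be illustrated with a ridge-type fit in which rapid interannual variation of the intercept is discouraged through a difference penalty. Everything below (data, penalty order, smoothing weight) is a simplified stand-in for the authors' software, not a reproduction of it.

```python
import numpy as np

# toy data: T yearly intercepts observed with noise, n_per observations per year
rng = np.random.default_rng(2)
T, n_per = 20, 30
true_a = np.sin(np.linspace(0.0, np.pi, T))
year = np.repeat(np.arange(T), n_per)
y = true_a[year] + rng.normal(0.0, 0.5, T * n_per)

X = np.eye(T)[year]                 # indicator design matrix for the intercepts
D = np.diff(np.eye(T), axis=0)      # first-difference (roughness) penalty matrix
lam = 50.0                          # smoothing weight; in practice chosen by validation
a_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)
print(np.round(a_hat, 2))           # smoothly varying intercept estimates
```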


Design of an estimator of the kinematics of AC contactors

EUROPEAN TRANSACTIONS ON ELECTRICAL POWER, Issue 7 2009
Jordi-Roger Riba Ruiz
Abstract This paper develops an estimator of the kinematics of the movable parts of any AC-powered contactor. This estimator uses easily measurable electrical variables, such as the voltage across the coil terminals and the current flowing through the main coil of the contactor. Hence, a low-cost microcontroller would be able to implement a control algorithm to reduce the undesirable phenomenon of contact bounce, which causes severe erosion of the contacts and dramatically reduces their electrical life and reliability. To develop such an estimator, it is essential to have at our disposal a robust model of the contactor. Therefore, a rigorous parametric model that allows us to predict the dynamic response of the AC contactor is proposed. It solves the coupled mechanical and electromagnetic differential equations that govern the dynamics of the contactor by applying a Runge–Kutta-based solver. Several approaches have been described in the technical literature; most of them are based on computationally expensive finite-element methods or on simplified parametric models. The parametric model presented here takes into account the fringing flux and deals with shading-ring interaction from a general point of view, thus avoiding simplified assumptions. Copyright © 2008 John Wiley & Sons, Ltd.
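
The numerical core, integrating coupled mechanical and electromagnetic equations with a Runge–Kutta scheme, can be sketched as follows. The state follows the usual textbook formulation (position, velocity, flux linkage), but the inductance law and every parameter value are invented; this is not the paper's model, which additionally handles fringing flux and shading rings.

```python
import numpy as np
from scipy.integrate import solve_ivp

# toy contactor: a spring holds the air gap open at z0, the coil's magnetic
# force closes it; state = [air gap z, velocity v, flux linkage psi]
def contactor_rhs(t, s):
    z, v, psi = s
    m, k, z0, R = 0.05, 200.0, 5e-3, 20.0       # mass, spring, rest gap, resistance
    u = 325.0 * np.sin(2 * np.pi * 50 * t)      # 50 Hz AC supply voltage
    L = 0.5 / (1.0 + 500.0 * z)                 # crude gap-dependent inductance
    i = psi / L                                 # coil current from flux linkage
    f_mag = 0.5 * i**2 * 250.0 / (1.0 + 500.0 * z) ** 2  # ~ 0.5*i^2*|dL/dz|
    return [v, (k * (z0 - z) - f_mag) / m, u - R * i]

sol = solve_ivp(contactor_rhs, (0.0, 0.04), [5e-3, 0.0, 0.0], max_step=1e-5)
print(sol.y[0, -1])                             # gap position after two AC cycles
```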


Recursive penalized least squares solution for dynamical inverse problems of EEG generation

HUMAN BRAIN MAPPING, Issue 4 2004
Okito Yamashita
Abstract In the dynamical inverse problem of electroencephalogram (EEG) generation, where a specific dynamics for the electrical current distribution is assumed, we can impose general spatiotemporal constraints onto the solution by casting the problem into a state space representation and assuming a specific class of parametric models for the dynamics. The Akaike Bayesian Information Criterion (ABIC), which is based on the Type II likelihood, was used to estimate the parameters and evaluate the model. In addition, dynamic low-resolution brain electromagnetic tomography (dynamic LORETA), a new approach for estimating the current distribution, is introduced. A recursive penalized least squares (RPLS) step forms the main element of our implementation. To obtain improved inverse solutions, dynamic LORETA exploits both spatial and temporal information, whereas LORETA uses only spatial information. A considerable improvement in performance compared to LORETA was found when dynamic LORETA was applied to simulated EEG data, and the new method was also applied to clinical EEG data. Hum. Brain Mapp. 21:221–235, 2004. © 2004 Wiley-Liss, Inc.
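
A single recursion of a penalized least squares estimator in state space form has the familiar Kalman-filter structure. This sketch assumes a random-walk state model with the spatial penalty absorbed into the covariance matrices; the lead-field matrix and dimensions are hypothetical, not taken from the paper.

```python
import numpy as np

# one RPLS-style (Kalman-type) recursion: the current distribution J evolves
# as a random walk and is observed through a toy lead-field matrix L
def rpls_step(J_prev, P_prev, y, L, Q, Rn):
    P_pred = P_prev + Q                         # predict: random-walk dynamics
    S = L @ P_pred @ L.T + Rn                   # innovation covariance
    K = P_pred @ L.T @ np.linalg.inv(S)         # gain
    J = J_prev + K @ (y - L @ J_prev)           # penalized least squares update
    P = (np.eye(len(J_prev)) - K @ L) @ P_pred
    return J, P

rng = np.random.default_rng(7)
L = rng.normal(size=(8, 3))                     # 8 electrodes, 3 sources (toy)
J, P = np.zeros(3), np.eye(3)
for _ in range(50):                             # track a static source pattern
    y = L @ np.array([1.0, -0.5, 0.2]) + rng.normal(scale=0.1, size=8)
    J, P = rpls_step(J, P, y, L, 0.01 * np.eye(3), 0.01 * np.eye(8))
print(np.round(J, 2))
```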


Robust control from data via uncertainty model sets identification

INTERNATIONAL JOURNAL OF ROBUST AND NONLINEAR CONTROL, Issue 11 2004
S. Malan
Abstract In this paper an integrated robust identification and control design procedure is proposed. The plant to be controlled is supposed to be linear, time-invariant, stable and possibly infinite dimensional, and a set of noise-corrupted input–output measurements is supposed to be available. The emphasis is placed on the design of controllers guaranteeing robust stability and robust performances, and on the trade-off between controller complexity and achievable robust performances. First, uncertainty models are identified, consisting of parametric models of different order and tight frequency bounds on the magnitude of the unmodelled dynamics. Second, Internal Model Controllers, guaranteeing robust closed-loop stability and best approximating the 'perfect control' ideal target, are designed using H∞/μ-synthesis techniques. Then, the robust performances of the designed controllers are computed, allowing one to determine the level of model/controller complexity needed to guarantee the desired closed-loop performances. Copyright © 2004 John Wiley & Sons, Ltd.


Nonparametric Varying-Coefficient Models for the Analysis of Longitudinal Data

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2002
Colin O. Wu
Summary Longitudinal methods have been widely used in biomedicine and epidemiology to study the patterns of time-varying variables, such as disease progression or trends of health status. Data sets of longitudinal studies usually involve repeatedly measured outcomes and covariates on a set of randomly chosen subjects over time. An important goal of statistical analyses is to evaluate the effects of the covariates, which may or may not depend on time, on the outcomes of interest. Because fully parametric models may be subject to model misspecification and completely unstructured nonparametric models may suffer from the drawbacks of the "curse of dimensionality", the varying-coefficient models are a class of structural nonparametric models which are particularly useful in longitudinal analyses. In this article, we present several important nonparametric estimation and inference methods for this class of models, demonstrate the advantages, limitations and practical implementations of these methods in different longitudinal settings, and discuss some potential directions of further research in this area. Applications of these methods are illustrated through two epidemiological examples.
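
A minimal kernel sketch conveys the idea of a varying coefficient: the regression coefficient is re-estimated at each time point from observations weighted by their distance in time. This is a crude local-constant version; the estimators surveyed in the article are more refined, and the bandwidth and kernel here are arbitrary.

```python
import numpy as np

# local (kernel-weighted) least squares estimate of beta(t) in y = x * beta(t) + e
def varying_coef(t_grid, t_obs, x, y, h=0.5):
    beta = []
    for t0 in t_grid:
        w = np.exp(-0.5 * ((t_obs - t0) / h) ** 2)   # Gaussian kernel weights
        beta.append(np.sum(w * x * y) / np.sum(w * x * x))
    return np.array(beta)

rng = np.random.default_rng(5)
t = rng.uniform(0, 10, 400)
x = rng.normal(size=400)
y = np.sin(t) * x + rng.normal(scale=0.3, size=400)  # true beta(t) = sin(t)
print(varying_coef(np.linspace(0, 10, 5), t, x, y))  # tracks sin at grid points
```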


Measuring predictability: theory and macroeconomic applications

JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2001
Francis X. Diebold
We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several US macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several non-parametric extensions of our approach. Copyright © 2001 John Wiley & Sons, Ltd.
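
For a concrete instance, under mean squared error loss and an AR(1) process the measure has a closed form, since the h-step forecast error variance is sigma^2 * (1 - phi^(2h)) / (1 - phi^2); a long horizon H approximates the unconditional variance. The sketch below illustrates only this special case.

```python
import numpy as np

# predictability P(h, H) = 1 - MSE_h / MSE_H for a stationary AR(1) under MSE loss
def predictability(phi, h, H=1000, sigma2=1.0):
    mse = lambda k: sigma2 * (1.0 - phi ** (2 * k)) / (1.0 - phi ** 2)
    return 1.0 - mse(h) / mse(H)

print(predictability(0.9, h=1))    # short horizon: highly predictable (~0.81)
print(predictability(0.9, h=24))   # long horizon: predictability decays toward 0
```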


Prospects and challenges for parametric models in historical biogeographical inference

JOURNAL OF BIOGEOGRAPHY, Issue 7 2009
Richard H. Ree
Abstract In historical biogeography, phylogenetic trees have long been used as tools for addressing a wide range of inference problems, from explaining common distribution patterns of species to reconstructing ancestral geographic ranges on branches of the tree of life. However, the potential utility of phylogenies for this purpose has yet to be fully realized, due in part to a lack of explicit conceptual links between processes underlying the evolution of geographic ranges and processes of phylogenetic tree growth. We suggest that statistical approaches that use parametric models to forge such links will stimulate integration and propel hypothesis-driven biogeographical inquiry in new directions. We highlight here two such approaches and describe how they represent early steps towards a more general framework for model-based historical biogeography that is based on likelihood as an optimality criterion, rather than on the traditional reliance on parsimony. The development of this framework will not be without significant challenges, particularly in balancing model complexity with statistical power, and these will be most apparent in studies of regions with many component areas and complex geological histories, such as the Mediterranean Basin.


Calculation of off-resonance Raman scattering intensities with parametric models

JOURNAL OF RAMAN SPECTROSCOPY, Issue 12 2009
Daniel Bougeard
Abstract The paper reviews applications of parametric models for calculations of the Raman scattering intensity of materials with the main emphasis on the performance of the bond polarizability model. The use of the model in studies of polymers, aluminosilicates, and nanostructures is discussed and the existing sets of electro-optical parameters as well as their transferability are analyzed. The paper highlights the interplay between the first-principles and parametric approaches to the Raman intensity calculations and suggests further developments in this field. Copyright © 2009 John Wiley & Sons, Ltd.


Pseudomartingale estimating equations for modulated renewal process models

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2009
Fengchang Lin
Summary. We adapt martingale estimating equations based on gap time information to a general intensity model for a single realization of a modulated renewal process. The consistency and asymptotic normality of the estimators are proved under ergodicity conditions. Previous work has considered either parametric likelihood analysis or semiparametric multiplicative models using partial likelihood. The framework is generally applicable to semiparametric and parametric models, including additive and multiplicative specifications, and periodic models. It facilitates a semiparametric extension of a popular parametric earthquake model. Simulations and empirical analyses of Taiwanese earthquake sequences illustrate the methodology's practical utility.


A new class of models for bivariate joint tails

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2009
Alexandra Ramos
Summary. A fundamental issue in applied multivariate extreme value analysis is modelling dependence within joint tail regions. The primary focus of this work is to extend the classical pseudopolar treatment of multivariate extremes to develop an asymptotically motivated representation of extremal dependence that also encompasses asymptotic independence. Starting with the usual mild bivariate regular variation assumptions that underpin the coefficient of tail dependence as a measure of extremal dependence, our main result is a characterization of the limiting structure of the joint survivor function in terms of an essentially arbitrary non-negative measure that must satisfy some mild constraints. We then construct parametric models from this new class and study in detail one example that accommodates asymptotic dependence, asymptotic independence and asymmetry within a straightforward parsimonious parameterization. We provide a fast simulation algorithm for this example and detail likelihood-based inference including tests for asymptotic dependence and symmetry which are useful for submodel selection. We illustrate this model by application to both simulated and real data. In contrast with the classical multivariate extreme value approach, which concentrates on the limiting distribution of normalized componentwise maxima, our framework focuses directly on the structure of the limiting joint survivor function and provides significant extensions of both the theoretical and the practical tools that are available for joint tail modelling.


Semiparametric estimation by model selection for locally stationary processes

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2006
Sébastien Van Bellegem
Summary. Over recent decades, increasing attention has been paid to the problem of fitting a parametric time series model with time-varying parameters. A typical example is given by autoregressive models with time-varying parameters. We propose a procedure to fit such time-varying models to general non-stationary processes. The estimator is a maximum Whittle likelihood estimator on sieves. The results do not assume that the observed process belongs to a specific class of time-varying parametric models. We discuss in more detail the fitting of time-varying AR(p) processes, for which we treat the problem of selecting the order p, and we propose an iterative algorithm for the computation of the estimator. A comparison with model selection by Akaike's information criterion is provided through simulations.


Statistical methods for regular monitoring data

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2005
Michael L. Stein
Summary. Meteorological and environmental data that are collected at regular time intervals on a fixed monitoring network can be usefully studied combining ideas from multiple time series and spatial statistics, particularly when there are little or no missing data. This work investigates methods for modelling such data and ways of approximating the associated likelihood functions. Models for processes on the sphere crossed with time are emphasized, especially models that are not fully symmetric in space–time. Two approaches to obtaining such models are described. The first is to consider a rotated version of fully symmetric models for which we have explicit expressions for the covariance function. The second is based on a representation of space–time covariance functions that is spectral in just the time domain and is shown to lead to natural partially nonparametric asymmetric models on the sphere crossed with time. Various models are applied to a data set of daily winds at 11 sites in Ireland over 18 years. Spectral and space–time domain diagnostic procedures are used to assess the quality of the fits. The spectral-in-time modelling approach is shown to yield a good fit to many properties of the data and can be applied in a routine fashion, relative to finding elaborate parametric models that describe the space–time dependences of the data about as well.


Direct parametric inference for the cumulative incidence function

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2006
Jong-Hyeon Jeong
Summary. In survival data that are collected from phase III clinical trials on breast cancer, a patient may experience more than one event, including recurrence of the original cancer, new primary cancer and death. Radiation oncologists are often interested in comparing patterns of local or regional recurrences alone as first events to identify a subgroup of patients who need to be treated by radiation therapy after surgery. The cumulative incidence function provides estimates of the cumulative probability of locoregional recurrences in the presence of other competing events. A simple version of the Gompertz distribution is proposed to parameterize the cumulative incidence function directly. The model interpretation for the cumulative incidence function is more natural than it is with the usual cause-specific hazard parameterization. Maximum likelihood analysis is used to estimate simultaneously parametric models for cumulative incidence functions of all causes. The parametric cumulative incidence approach is applied to a data set from the National Surgical Adjuvant Breast and Bowel Project and compared with analyses that are based on parametric cause-specific hazard models and nonparametric cumulative incidence estimation.
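
One common way to let a Gompertz-type form parameterize a cumulative incidence function directly is sketched below; with a negative rate parameter the curve plateaus below one, as a cumulative incidence must in the presence of competing events. Whether this matches the paper's exact parameterization is an assumption, and the parameter values are illustrative.

```python
import numpy as np

# two-parameter Gompertz-type cumulative incidence:
# F(t) = 1 - exp((nu / rho) * (1 - exp(rho * t)));
# with rho < 0 the curve plateaus at 1 - exp(nu / rho) < 1
def gompertz_cif(t, nu, rho):
    return 1.0 - np.exp((nu / rho) * (1.0 - np.exp(rho * t)))

t = np.linspace(0.0, 10.0, 6)
print(gompertz_cif(t, nu=0.3, rho=-0.5))   # rises toward 1 - exp(-0.6) ~ 0.45
```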


Generalized additive models for location, scale and shape

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2005
R. A. Rigby
Summary. A general class of statistical models for a univariate response variable is presented which we call the generalized additive model for location, scale and shape (GAMLSS). The model assumes independent observations of the response variable y given the parameters, the explanatory variables and the values of the random effects. The distribution for the response variable in the GAMLSS can be selected from a very general family of distributions including highly skew or kurtotic continuous and discrete distributions. The systematic part of the model is expanded to allow modelling not only of the mean (or location) but also of the other parameters of the distribution of y, as parametric and/or additive nonparametric (smooth) functions of explanatory variables and/or random-effects terms. Maximum (penalized) likelihood estimation is used to fit the (non)parametric models. A Newton–Raphson or Fisher scoring algorithm is used to maximize the (penalized) likelihood. The additive terms in the model are fitted by using a backfitting algorithm. Censored data are easily incorporated into the framework. Five data sets from different fields of application are analysed to emphasize the generality of the GAMLSS class of models.
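
The flavour of modelling several distribution parameters at once can be conveyed with a toy normal example in which both the mean and the log-scale are linear in a covariate and are fitted jointly by maximum likelihood. This is a drastic simplification of GAMLSS (no additive smooth terms, no penalties, no random effects); all values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + rng.normal(0.0, np.exp(-1.0 + 1.5 * x))

def nll(theta):
    b0, b1, g0, g1 = theta
    # location and log-scale are both linear functions of the covariate
    return -norm.logpdf(y, loc=b0 + b1 * x, scale=np.exp(g0 + g1 * x)).sum()

fit = minimize(nll, x0=[0.0, 0.0, 0.0, 0.0], method="Nelder-Mead",
               options={"maxiter": 2000})
print(fit.x)   # roughly [1, 2, -1, 1.5]
```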


Choice of parametric models in survival analysis: applications to monotherapy for epilepsy and cerebral palsy

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2003
G. P. S. Kwong
Summary. In the analysis of medical survival data, semiparametric proportional hazards models are widely used. When the proportional hazards assumption is not tenable, these models will not be suitable. Other models for covariate effects can be useful. In particular, we consider accelerated life models, in which the effect of covariates is to scale the quantiles of the base-line distribution. Solomon and Hutton have suggested that there is some robustness to misspecification of survival regression models. They showed that the relative importance of covariates is preserved under misspecification with assumptions of small coefficients and orthogonal transformation of covariates. We elucidate these results by applications to data from five trials which compare two common anti-epileptic drugs (carbamazepine versus sodium valproate monotherapy for epilepsy) and to survival of a cohort of people with cerebral palsy. Results on the robustness against model misspecification depend on the assumptions of small coefficients and on the underlying distribution of the data. These results hold in the cerebral palsy data but not in the epilepsy data, which have high early hazard rates. The orthogonality of coefficients is not important. However, the choice of model is important for estimating the magnitude of effects, particularly if the base-line shape parameter indicates high initial hazard rates.
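
The defining property of an accelerated life model, that covariates rescale the quantiles of the base-line distribution, is easy to state in code. The Weibull base-line and the coefficient value below are arbitrary illustrations, not fitted values from these trials.

```python
import numpy as np

# accelerated life model: S(t | x) = S0(t * exp(-x @ beta)), so every quantile
# of the base-line distribution is multiplied by exp(x @ beta)
def aft_quantile(p, x, beta, shape=1.5, scale0=10.0):
    q0 = scale0 * (-np.log(1.0 - p)) ** (1.0 / shape)   # base-line Weibull quantile
    return q0 * np.exp(np.dot(x, beta))

print(aft_quantile(0.5, x=[0.0], beta=[0.7]))  # base-line median
print(aft_quantile(0.5, x=[1.0], beta=[0.7]))  # covariate roughly doubles it
```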


Estimation of the Dominating Frequency for Stationary and Nonstationary Fractional Autoregressive Models

JOURNAL OF TIME SERIES ANALYSIS, Issue 5 2000
Jan Beran
This paper was motivated by the investigation of certain physiological series for premature infants. The question was whether the series exhibit periodic fluctuations with a certain dominating period. The observed series are nonstationary and/or have long-range dependence. The assumed model is a Gaussian process X_t whose mth difference Y_t = (1 − B)^m X_t is stationary with a spectral density f that may have a pole (or a zero) at the origin. The problem addressed in this paper is the estimation of the frequency λ_max where f achieves the largest local maximum in the open interval (0, π). The process X_t is assumed to belong to a class of parametric models, characterized by a parameter vector θ, defined in Beran (1995). An estimator of λ_max is proposed and its asymptotic distribution is derived, with θ being estimated by maximum likelihood. In particular, m and a fractional differencing parameter that models long memory are estimated from the data. Model choice is also incorporated. Thus, within the proposed framework, a data-driven procedure is obtained that can be applied in situations where the primary interest is in estimating a dominating frequency. A simulation study illustrates the finite sample properties of the method. In particular, for short series, estimation of λ_max is difficult if the local maximum occurs close to the origin. The results are illustrated by two of the data examples that motivated this research.
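
A crude nonparametric stand-in for the estimator, locating the largest local maximum of a smoothed periodogram of the (optionally differenced) series inside (0, π), is sketched below. The paper's parametric, long-memory-aware procedure is considerably more refined; the smoothing width here is arbitrary.

```python
import numpy as np

def dominating_frequency(x, m=0, smooth=5):
    y = np.diff(x, n=m) if m > 0 else np.asarray(x, float)
    y = y - y.mean()
    pgram = np.abs(np.fft.rfft(y)) ** 2 / len(y)          # raw periodogram
    freqs = 2 * np.pi * np.fft.rfftfreq(len(y))           # angular, in [0, pi]
    s = np.convolve(pgram, np.ones(smooth) / smooth, mode="same")
    is_max = (s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])       # interior local maxima
    idx = np.where(is_max)[0] + 1
    return freqs[idx[np.argmax(s[idx])]] if idx.size else None

t = np.arange(2000)
x = np.cos(0.8 * t) + np.random.default_rng(6).normal(size=2000)
print(dominating_frequency(x))                            # close to 0.8
```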


Goodness-of-fit tests for parametric models in censored regression

THE CANADIAN JOURNAL OF STATISTICS, Issue 2 2007
Juan Carlos Pardo-Fernández
Abstract The authors propose a goodness-of-fit test for parametric regression models when the response variable is right-censored. Their test compares an estimation of the error distribution based on parametric residuals to another estimation relying on nonparametric residuals. They call on a bootstrap mechanism in order to approximate the critical values of tests based on Kolmogorov–Smirnov and Cramér–von Mises type statistics. They also present the results of Monte Carlo simulations and use data from a study about quasars to illustrate their work.


Robust weighted likelihood estimators with an application to bivariate extreme value problems

THE CANADIAN JOURNAL OF STATISTICS, Issue 1 2002
Debbie J. Dupuis
Abstract The authors achieve robust estimation of parametric models through the use of weighted maximum likelihood techniques. A new estimator is proposed and its good properties illustrated through examples. Ease of implementation is an attractive property of the new estimator. The new estimator downweights with respect to the model and can be used for complicated likelihoods such as those involved in bivariate extreme value problems. New weight functions, tailored for these problems, are constructed. The increased insight provided by our robust fits to these bivariate extreme value models is exhibited through the analysis of sea levels at two East Coast sites in the United Kingdom.


Modeling dependencies between rating categories and their effects on prediction in a credit risk portfolio

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2008
Claudia Czado
Abstract The internal-rating-based Basel II approach increases the need for the development of more realistic default probability models. In this paper, we follow the approach taken by McNeil and Wendin (Journal of Empirical Finance, 2007) by constructing generalized linear mixed models for estimating default probabilities from annual data on companies with different credit ratings. In contrast to McNeil and Wendin (2007), the models considered allow parsimonious parametric models to capture simultaneously dependencies of the default probabilities on time and credit ratings. Macro-economic variables can also be included. Estimation of all model parameters is facilitated with a Bayesian approach using Markov chain Monte Carlo methods. Special emphasis is given to the investigation of the predictive capabilities of the models considered. In particular, predictable model specifications are used. The empirical study using default data from Standard and Poor's gives evidence that the correlation between credit ratings decreases as the ratings grow further apart, and that it is higher than the correlation induced by the autoregressive time dynamics. Copyright © 2008 John Wiley & Sons, Ltd.


HIGH-DIMENSIONAL PARAMETRIC MODELLING OF MULTIVARIATE EXTREME EVENTS

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 1 2009
Alec G. Stephenson
Summary Multivariate extreme events are typically modelled using multivariate extreme value distributions. Unfortunately, there exists no finite parametrization for the class of multivariate extreme value distributions. One common approach is to model extreme events using some flexible parametric subclass. This approach has been limited to only two or three dimensions, primarily because suitably flexible high-dimensional parametric models have prohibitively complex density functions. We present an approach that allows a number of popular flexible models to be used in arbitrarily high dimensions. The approach easily handles missing and censored data, and can be employed when modelling componentwise maxima and multivariate threshold exceedances. The approach is based on a representation using conditionally independent marginal components, conditioning on positive stable random variables. We use Bayesian inference, where the conditioning variables are treated as auxiliary variables within Markov chain Monte Carlo simulations. We demonstrate these methods with an application to sea levels, using data collected at 10 sites on the east coast of England.


Interpreting Statistical Evidence with Empirical Likelihood Functions

BIOMETRICAL JOURNAL, Issue 4 2009
Zhiwei Zhang
Abstract There has been growing interest in the likelihood paradigm of statistics, where statistical evidence is represented by the likelihood function and its strength is measured by likelihood ratios. The available literature in this area has so far focused on parametric likelihood functions, though in some cases a parametric likelihood can be robustified. This focused discussion on parametric models, while insightful and productive, may have left the impression that the likelihood paradigm is best suited to parametric situations. This article discusses the use of empirical likelihood functions, a well-developed methodology in the frequentist paradigm, to interpret statistical evidence in nonparametric and semiparametric situations. A comparative review of literature shows that, while an empirical likelihood is not a true probability density, it has the essential properties, namely consistency and local asymptotic normality that unify and justify the various parametric likelihood methods for evidential analysis. Real examples are presented to illustrate and compare the empirical likelihood method and the parametric likelihood methods. These methods are also compared in terms of asymptotic efficiency by combining relevant results from different areas. It is seen that a parametric likelihood based on a correctly specified model is generally more efficient than an empirical likelihood for the same parameter. However, when the working model fails, a parametric likelihood either breaks down or, if a robust version exists, becomes less efficient than the corresponding empirical likelihood.
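
For readers unfamiliar with the machinery, the empirical likelihood ratio for a single mean takes only a few lines via Owen's construction: maximize the product of probability weights subject to the mean constraint, which reduces to a one-dimensional root search. This sketch assumes the hypothesized mean lies strictly inside the data range.

```python
import numpy as np
from scipy.optimize import brentq

# -2 log of the empirical likelihood ratio for a mean mu0; under the null it
# is asymptotically chi-squared with 1 degree of freedom (Owen)
def neg2_log_el(x, mu0):
    z = x - mu0
    assert z.min() < 0 < z.max(), "mu0 must lie inside the data range"
    eps = 1e-8
    lo, hi = -1.0 / z.max() + eps, -1.0 / z.min() - eps   # keep 1 + lam*z > 0
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

x = np.random.default_rng(4).normal(0.2, 1.0, 100)
print(neg2_log_el(x, 0.0))   # compare with the chi-squared(1) quantile 3.84
```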


Testing Equality between Two Diagnostic Procedures in Paired-Sample Ordinal Data

BIOMETRICAL JOURNAL, Issue 6 2004
Kung-Jong Lui
Abstract When a new diagnostic procedure is developed, it is important to assess whether the diagnostic accuracy of the new procedure is different from that of the standard procedure. For paired-sample ordinal data, this paper develops two test statistics for testing equality of the diagnostic accuracy between two procedures without assuming any parametric models. One is derived on the basis of the probability of correctly identifying the case for a randomly selected pair of a case and a non-case over all possible cutoff points, and the other is derived on the basis of the sensitivity and specificity directly. To illustrate the practical use of the proposed test procedures, this paper includes an example regarding the use of digitized and plain films for screening breast cancer. This paper also applies Monte Carlo simulation to evaluate the finite sample performance of the two statistics developed here and notes that they can perform well in a variety of situations. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)
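
The quantity underlying the first statistic, the probability that a randomly chosen case is rated higher than a randomly chosen non-case across all cutoffs, is the familiar nonparametric area under the ROC curve. A minimal estimate, with ties counted as one half, is sketched below on invented ratings; the paper's paired-design test additionally accounts for the correlation between the two procedures.

```python
import numpy as np

# nonparametric estimate of P(case rated higher than non-case), ties count 1/2
def auc_ordinal(cases, controls):
    c = np.asarray(cases)[:, None]
    k = np.asarray(controls)[None, :]
    return (c > k).mean() + 0.5 * (c == k).mean()

# toy ordinal ratings from one procedure on the same subjects
print(auc_ordinal([3, 4, 2, 5], [1, 2, 2, 3]))
```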


Analyzing Incomplete Data Subject to a Threshold using Empirical Likelihood Methods: An Application to a Pneumonia Risk Study in an ICU Setting

BIOMETRICS, Issue 1 2010
Jihnhee Yu
Summary The initial detection of ventilator-associated pneumonia (VAP) for inpatients at an intensive care unit needs composite symptom evaluation using clinical criteria such as the clinical pulmonary infection score (CPIS). When the CPIS is above a threshold value, bronchoalveolar lavage (BAL) is performed to confirm the diagnosis by counting actual bacterial pathogens. Thus, CPIS and BAL results are closely related, and both are important indicators of pneumonia, although BAL data are incomplete. To compare the pneumonia risks among treatment groups for such incomplete data, we derive a method that combines nonparametric empirical likelihood ratio techniques with classical testing for parametric models. This technique augments the study power by enabling us to use any observed data. The asymptotic property of the proposed method is investigated theoretically. Monte Carlo simulations confirm both the asymptotic results and the good power properties of the proposed method. The method is applied to actual data obtained in clinical practice settings and used to compare VAP risks among treatment groups.