Monte Carlo Methods


Selected Abstracts


    R. Paroli
    Summary We consider hidden Markov models with an unknown number of regimes for the segmentation of the pixel intensities of digital images that consist of a small set of colours. New reversible jump Markov chain Monte Carlo algorithms to estimate both the dimension and the unknown parameters of the model are introduced. Parameters are updated by random walk Metropolis-Hastings moves, without updating the sequence of the hidden Markov chain. The segmentation (i.e. the estimation of the hidden regimes) is a further aim and is performed by means of a number of competing algorithms. We apply our Bayesian inference and segmentation tools to digital images, which are linearized through the Peano-Hilbert scan, and perform experiments and comparisons on both synthetic images and a real brain magnetic resonance image. [source]
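The random-walk Metropolis-Hastings move used in such samplers can be sketched generically. The snippet below targets a one-dimensional standard normal rather than the paper's image model, purely to show the propose/accept mechanics:

```python
import math
import random

def rw_metropolis(logpi, x0, step, n, seed=0):
    """Random-walk Metropolis: propose x' = x + step * N(0,1) and accept
    with probability min(1, pi(x') / pi(x))."""
    rng = random.Random(seed)
    x = x0
    chain = []
    for _ in range(n):
        prop = x + step * rng.gauss(0.0, 1.0)
        if rng.random() < math.exp(min(0.0, logpi(prop) - logpi(x))):
            x = prop
        chain.append(x)
    return chain

# Toy target: standard normal log-density (up to an additive constant).
chain = rw_metropolis(lambda x: -0.5 * x * x, x0=0.0, step=2.0, n=20000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
```

The chain's sample mean and variance should be close to the target's 0 and 1; reversible-jump moves add a dimension-changing proposal on top of this fixed-dimension kernel.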

    Foreword for Frontier Session, "Markov Chain Monte Carlo Methods: A User's Guide for Agricultural Economics"

    Arnold Zellner
    No abstract is available for this article. [source]

    Parallel protein folding with STAPL

    Shawna Thomas
    Abstract The protein-folding problem is a study of how a protein dynamically folds to its so-called native state, an energetically stable, three-dimensional conformation. Understanding this process is of great practical importance since some devastating diseases such as Alzheimer's and bovine spongiform encephalopathy (Mad Cow) are associated with the misfolding of proteins. We have developed a new computational technique for studying protein folding that is based on probabilistic roadmap methods for motion planning. Our technique yields an approximate map of a protein's potential energy landscape that contains thousands of feasible folding pathways. We have validated our method against known experimental results. Other simulation techniques, such as molecular dynamics or Monte Carlo methods, require many orders of magnitude more time to produce a single, partial trajectory. In this paper we report on our experiences parallelizing our method using STAPL (Standard Template Adaptive Parallel Library), which is being developed in the Parasol Lab at Texas A&M. An efficient parallel version will enable us to study larger proteins with increased accuracy. We demonstrate how STAPL enables portable efficiency across multiple platforms, ranging from small Linux clusters to massively parallel machines such as IBM's BlueGene/L, without user code modification. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Effectiveness of Conservation Targets in Capturing Genetic Diversity

    Maile C. Neel
    Any conservation action that preserves some populations and not others will have genetic consequences. We used empirical data from four rare plant taxa to assess these consequences in terms of how well allele numbers (all alleles and alleles occurring at a frequency >0.05 in any population) and expected heterozygosity are represented when different numbers of populations are conserved. We determined sampling distributions for these three measures of genetic diversity using Monte Carlo methods. We assessed the proportion of alleles included in the number of populations considered adequate for conservation, needed to capture all alleles, and needed to meet an accepted standard of genetic-diversity conservation of having a 90-95% probability of including all common alleles. We also assessed the number of populations necessary to obtain values of heterozygosity within ±10% of the value obtained from all populations. Numbers of alleles were strongly affected by the number of populations sampled. Heterozygosity was only slightly less sensitive to numbers of populations than were alleles. On average, currently advocated conservation intensities represented 67-83% of all alleles and 85-93% of common alleles. The smallest number of populations to include all alleles ranged from 6 to 17 (42-57%), but <0.2% of 1000 samples of these numbers of populations included them all. It was necessary to conserve 16-29 (53-93%) of the sampled populations to meet the standard for common alleles. Between 20% and 64% of populations were needed to reliably represent species-level heterozygosity. Thus, higher percentages of populations are needed than are currently considered adequate to conserve genetic diversity if populations are selected without genetic data. [source]
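The Monte Carlo sampling-distribution idea can be sketched with an invented allele-by-population table (not the paper's data): draw random sets of k populations many times and count how often all alleles are captured.

```python
import random

# Hypothetical allele-by-population table: each population is the set of
# alleles observed in it (invented for illustration).
populations = [
    {"A1", "A2"}, {"A1", "A3"}, {"A2", "A4"}, {"A1", "A2", "A5"},
    {"A3", "A5"}, {"A2"}, {"A4", "A5"}, {"A1", "A4"},
]
all_alleles = set().union(*populations)

def capture_probability(k, trials=5000, seed=1):
    """Monte Carlo estimate of the chance that k randomly conserved
    populations jointly contain every allele."""
    rng = random.Random(seed)
    hits = sum(
        set().union(*rng.sample(populations, k)) == all_alleles
        for _ in range(trials)
    )
    return hits / trials
```

Conserving all eight populations captures everything with certainty, a single population never does, and intermediate k gives the kind of intermediate capture probabilities the paper tabulates.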

    Decision Making with Uncertain Judgments: A Stochastic Formulation of the Analytic Hierarchy Process*

    DECISION SCIENCES, Issue 3 2003
    Eugene D. Hahn
    ABSTRACT In the analytic hierarchy process (AHP), priorities are derived via a deterministic method, the eigenvalue decomposition. However, judgments may be subject to error. A stochastic characterization of the pairwise comparison judgment task is provided and statistical models are introduced for deriving the underlying priorities. Specifically, a weighted hierarchical multinomial logit model is used to obtain the priorities. Inference is then conducted from the Bayesian viewpoint using Markov chain Monte Carlo methods. The stochastic methods are found to give results that are congruent with those of the eigenvector method in matrices of different sizes and different levels of inconsistency. Moreover, inferential statements can be made about the priorities when the stochastic approach is adopted, and these statements may be of considerable value to a decision maker. The methods described are fully compatible with judgments from the standard version of AHP and can be used to construct a stochastic formulation of it. [source]
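The deterministic eigenvector method that the stochastic AHP models are compared against can be sketched as power iteration on a pairwise-comparison matrix; here a perfectly consistent 3x3 example with invented weights, so the recovered priorities are exact:

```python
def ahp_priorities(M, iters=50):
    """Principal eigenvector of a pairwise-comparison matrix by power
    iteration, normalised so the priorities sum to 1."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [vi / s for vi in v]
    return w

# A consistent matrix M[i][j] = a_i / a_j built from known weights.
true = [0.6, 0.3, 0.1]
M = [[a / b for b in true] for a in true]
w = ahp_priorities(M)
```

For an inconsistent matrix (the realistic case) the same iteration converges to the Perron eigenvector, which is what the Bayesian multinomial-logit posterior is shown to agree with.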

    Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods

    ECOLOGY LETTERS, Issue 7 2007
    Subhash R. Lele
    Abstract We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise. [source]
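The data-cloning effect is easiest to see in a conjugate toy case where the "cloned" posterior is available in closed form; the normal-mean example below (invented numbers, known variance) stands in for the MCMC computation:

```python
def cloned_posterior(y, sigma, tau, K):
    """Posterior of a normal mean mu with known sdev sigma and prior
    N(0, tau^2), when the likelihood is raised to the K-th power
    (equivalently, the data set is replicated K times)."""
    n = len(y)
    ybar = sum(y) / n
    prec = 1.0 / tau ** 2 + K * n / sigma ** 2
    return (K * n * ybar / sigma ** 2) / prec, 1.0 / prec

y = [1.2, 0.8, 1.5, 0.9, 1.1]
mle = sum(y) / len(y)                      # MLE of mu: the sample mean
m1, v1 = cloned_posterior(y, sigma=1.0, tau=10.0, K=1)
m100, v100 = cloned_posterior(y, sigma=1.0, tau=10.0, K=100)
```

As K grows, the posterior mean converges to the MLE and K times the posterior variance converges to the frequentist variance of the MLE (sigma^2/n here), which is how data cloning extracts standard errors from a Bayesian run regardless of the prior.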

    Multiple genetic tests for susceptibility to smoking do not outperform simple family history

    ADDICTION, Issue 1 2009
    Coral E. Gartner
    ABSTRACT Aims To evaluate the utility of using predictive genetic screening of the population for susceptibility to smoking. Methods The results of meta-analyses of genetic association studies of smoking behaviour were used to create simulated data sets using Monte Carlo methods. The ability of the genetic tests to screen for smoking was assessed using receiver operator characteristic curve analysis. The result was compared to prediction using simple family history information. To identify the circumstances in which predictive genetic testing would potentially justify screening we simulated tests using larger numbers of alleles (10, 15 and 20) that varied in prevalence from 10 to 50% and in strength of association [relative risks (RRs) of 1.2-2.1]. Results A test based on the RRs and prevalence of five susceptibility alleles derived from meta-analyses of genetic association studies of smoking performed similarly to chance and no better than the prediction based on simple family history. Increasing the number of alleles from five to 20 improved the predictive ability of genetic screening only modestly when using genes with the effect sizes reported to date. Conclusions This panel of genetic tests would be unsuitable for population screening. This situation is unlikely to be improved upon by screening based on more genetic tests. Given the similarity with associations found for other polygenic conditions, our results also suggest that using multiple genes to screen the general population for genetic susceptibility to polygenic disorders will be of limited utility. [source]

    Spatio-temporal point process filtering methods with an application

    ENVIRONMETRICS, Issue 3-4 2010
    Blažena Frcalová
    Abstract The paper deals with point processes in space and time and the problem of filtering. Real data monitoring the spiking activity of a place cell of the hippocampus of a rat moving in an environment are evaluated. Two approaches to the modelling and methodology are discussed. The first one (known from the literature) is based on recursive equations which make it possible to describe an adaptive system. Sequential Monte Carlo methods, including the particle filter algorithm, are available for the solution. The second approach makes use of a continuous-time shot-noise Cox point process model. The inference of the driving intensity leads to a nonlinear filtering problem. Parametric models support the solution by means of Bayesian Markov chain Monte Carlo methods; moreover, the Cox model enables the detection of adaptiveness. Model selection is discussed, and numerical results are presented and interpreted. Copyright © 2009 John Wiley & Sons, Ltd. [source]
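A bootstrap (sequential importance resampling) particle filter of the kind referred to here can be sketched for a toy linear-Gaussian state-space model; the real application uses point-process observation models, so this is only the propagate/weight/resample skeleton:

```python
import math
import random

def particle_filter(obs, n_part=2000, a=0.9, q=1.0, r=1.0, seed=3):
    """Bootstrap particle filter for the toy model
    x_t = a x_{t-1} + N(0, q),  y_t = x_t + N(0, r):
    propagate particles, weight by the observation likelihood, resample."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in obs:
        parts = [a * x + rng.gauss(0.0, math.sqrt(q)) for x in parts]
        w = [math.exp(-0.5 * (y - x) ** 2 / r) for x in parts]
        s = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, parts)) / s)
        parts = rng.choices(parts, weights=w, k=n_part)  # resampling step
    return means

means = particle_filter([2.0] * 10)   # constant observations at 2.0
```

Fed a constant observation of 2.0, the filtered mean climbs from the prior toward the observation and settles near the steady-state Kalman answer.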

    Bayesian analysis of dynamic factor models: an application to air pollution and mortality in São Paulo, Brazil

    ENVIRONMETRICS, Issue 6 2008
    T. Sáfadi
    Abstract The Bayesian estimation of a dynamic factor model where the factors follow a multivariate autoregressive model is presented. We derive the posterior distributions for the parameters and the factors and use Monte Carlo methods to compute them. The model is applied to study the association between air pollution and mortality in the city of São Paulo, Brazil. Statistical analysis was performed through a Bayesian analysis of a dynamic factor model. The series considered were minimum temperature, relative humidity, the air pollutants PM10 and CO, circulatory-disease mortality and respiratory-disease mortality. We found a strong association between the air pollutant PM10, humidity and respiratory-disease mortality for the city of São Paulo. Copyright © 2007 John Wiley & Sons, Ltd. [source]


    EVOLUTION, Issue 11 2005
    Richard H. Ree
    Abstract At a time when historical biogeography appears to be again expanding its scope after a period of focusing primarily on discerning area relationships using cladograms, new inference methods are needed to bring more kinds of data to bear on questions about the geographic history of lineages. Here we describe a likelihood framework for inferring the evolution of geographic range on phylogenies that models lineage dispersal and local extinction in a set of discrete areas as stochastic events in continuous time. Unlike existing methods for estimating ancestral areas, such as dispersal-vicariance analysis, this approach incorporates information on the timing of both lineage divergences and the availability of connections between areas (dispersal routes). Monte Carlo methods are used to estimate branch-specific transition probabilities for geographic ranges, enabling the likelihood of the data (observed species distributions) to be evaluated for a given phylogeny and parameterized paleogeographic model. We demonstrate how the method can be used to address two biogeographic questions: What were the ancestral geographic ranges on a phylogenetic tree? How were those ancestral ranges affected by speciation and inherited by the daughter lineages at cladogenesis events? For illustration we use hypothetical examples and an analysis of a Northern Hemisphere plant clade (Cercis), comparing and contrasting inferences to those obtained from dispersal-vicariance analysis. Although the particular model we implement is somewhat simplistic, the framework itself is flexible and could readily be modified to incorporate additional sources of information and also be extended to address other aspects of historical biogeography. [source]
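The Monte Carlo estimation of branch-specific transition probabilities can be caricatured with a two-area range model; the rates and state space below are invented, and the full method additionally models cladogenesis and time-varying dispersal routes:

```python
import random

def simulate_range(start, t, d=0.5, e=0.3, rng=random):
    """One realisation of a toy two-area range model: from a single area
    ("A" or "B") the lineage disperses into both (rate d); from "AB" it
    loses one area by local extinction (rate e per area)."""
    state, clock = start, 0.0
    while True:
        rate = d if state in ("A", "B") else 2 * e
        dt = rng.expovariate(rate)
        if clock + dt > t:
            return state
        clock += dt
        if state in ("A", "B"):
            state = "AB"
        else:
            state = "A" if rng.random() < 0.5 else "B"

def transition_probs(start, t, n=20000, seed=4):
    """Monte Carlo estimate of P(range at time t | range at time 0)."""
    rng = random.Random(seed)
    counts = {"A": 0, "B": 0, "AB": 0}
    for _ in range(n):
        counts[simulate_range(start, t, rng=rng)] += 1
    return {s: c / n for s, c in counts.items()}

p = transition_probs("A", 1.0)
```

Starting from area A over one time unit, most probability mass stays in A, a sizeable fraction reaches AB via dispersal, and only a small fraction ends in B (which requires dispersal followed by extinction in A).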

    Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA

    HEALTH ECONOMICS, Issue 10 2007
    Anthony O'Hagan
    Abstract Probabilistic sensitivity analysis (PSA) is required to account for uncertainty in cost-effectiveness calculations arising from health economic models. The simplest way to perform PSA in practice is by Monte Carlo methods, which involves running the model many times using randomly sampled values of the model inputs. However, this can be impractical when the economic model takes appreciable amounts of time to run. This situation arises, in particular, for patient-level simulation models (also known as micro-simulation or individual-level simulation models), where a single run of the model simulates the health care of many thousands of individual patients. The large number of patients required in each run to achieve accurate estimation of cost-effectiveness means that only a relatively small number of runs is possible. For this reason, it is often said that PSA is not practical for patient-level models. We develop a way to reduce the computational burden of Monte Carlo PSA for patient-level models, based on the algebra of analysis of variance. Methods are presented to estimate the mean and variance of the model output, with formulae for determining optimal sample sizes. The methods are simple to apply and will typically reduce the computational demand very substantially. Copyright © 2006 John Wiley & Sons, Ltd. [source]
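The two-level structure and the ANOVA-style bias correction can be sketched on a synthetic model in which the true between-input variance is known; all distributions below are invented for illustration, not the health-economic model of the paper:

```python
import random

def psa_anova(n_outer=200, n_inner=100, seed=5):
    """Two-level Monte Carlo PSA: the outer loop samples an uncertain
    model input theta, the inner loop simulates patients.  The naive
    variance of the per-theta means overstates Var(E[Y | theta]) by
    (within-group variance) / n_inner; the ANOVA identity removes it."""
    rng = random.Random(seed)
    group_means, within_vars = [], []
    for _ in range(n_outer):
        theta = rng.gauss(1.0, 0.5)                      # uncertain input
        ys = [rng.gauss(theta, 2.0) for _ in range(n_inner)]  # patients
        m = sum(ys) / n_inner
        group_means.append(m)
        within_vars.append(sum((y - m) ** 2 for y in ys) / (n_inner - 1))
    gm = sum(group_means) / n_outer
    raw = sum((m - gm) ** 2 for m in group_means) / (n_outer - 1)
    within = sum(within_vars) / n_outer
    return raw - within / n_inner, within

between, within = psa_anova()
# True values in this synthetic setup: Var(E[Y|theta]) = 0.25, within = 4.0.
```

The corrected between-input variance recovers the true 0.25 even though each inner sample of 100 patients is far too noisy on its own, which is the mechanism that lets PSA run with modest patient counts per input draw.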

    Simulation Monte Carlo methods in extended stochastic volatility models

    Miroslav Šimandl
    A new technique for nonlinear state and parameter estimation of discrete-time stochastic volatility models is developed. Gibbs sampler and simulation filter algorithms are used to construct a simulation tool that reflects both inherent model variability and parameter uncertainty. The proposed chain converges to equilibrium, enabling the estimation of unobserved volatilities and of unknown model parameter distributions. The estimation algorithm is illustrated using numerical examples. Copyright © 2002 John Wiley & Sons, Ltd. [source]

    Validation of simplified PN models for radiative transfer in combustion systems

    E. Schneider
    Abstract This paper illustrates the use of simplified PN approximations as tools for achieving verification of codes and simulations of radiative transfer in combustion systems. The main advantage of considering these models is the fact that the integro-differential equation for radiative transfer can be replaced by a set of differential equations which are independent of the angle variable, compatible with the partial differential equations of flow and combustion, and easy to solve using standard numerical discretizations. Validation of these models is then performed by comparing predictions to measurements for a three-dimensional diffusion flame. The good agreement between measurements and predictions indicates that the simplified PN models can be used to incorporate radiative transfer in combustion systems at very low computational cost without relying on discrete ordinates or Monte Carlo methods. Copyright © 2006 John Wiley & Sons, Ltd. [source]

    Sequential Monte Carlo methods for multi-aircraft trajectory prediction in air traffic management

    I. Lymperopoulos
    Abstract Accurate prediction of aircraft trajectories is an important part of decision support and automated tools in air traffic management. We demonstrate that by combining information from multiple aircraft at different locations and time instants, one can provide improved trajectory prediction (TP) accuracy. To perform multi-aircraft TP, we have abundant data at our disposal. We show how this multi-aircraft sensor fusion problem can be formulated as a high-dimensional state estimation problem. The high dimensionality of the problem and nonlinearities in aircraft dynamics and control prohibit the use of common filtering methods. We demonstrate the inefficiency of several sequential Monte Carlo algorithms on feasibility studies involving multiple aircraft. We then develop a novel particle filtering algorithm to exploit the structure of the problem and solve it in realistic-scale situations. In all studies we assume that aircraft fly level (possibly at different altitudes) with known, constant, aircraft-dependent airspeeds, and estimate the wind forecast errors based only on ground radar measurements. Current work concentrates on extending the algorithms to non-level flights, to the joint estimation of wind forecast errors and of the airspeed and mass of the different aircraft, and to the simultaneous fusion of airborne and ground radar measurements. Copyright © 2010 John Wiley & Sons, Ltd. [source]

    Recent topics in numerical integration

    Ronald Cools
    Abstract In this article, we describe some recent topics in the approximation of univariate and multivariate integrals. We especially pay attention to progress in the area of oscillatory integrals and in quasi-Monte Carlo methods using lattice rules. Many pointers to the relevant literature are provided. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]
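A rank-1 lattice rule of the kind used in quasi-Monte Carlo integration can be sketched in a few lines; the Fibonacci generating vector below is a standard two-dimensional choice, and the integrand is a smooth test function with known integral 1:

```python
def lattice_rule(f, n, z):
    """Rank-1 lattice rule: average f over the n points {(i * z / n) mod 1},
    i = 0, ..., n-1, where z is the generating vector."""
    return sum(
        f([((i * zj) % n) / n for zj in z]) for i in range(n)
    ) / n

# Smooth test integrand on [0, 1]^2 whose exact integral is 1.
f = lambda x: (1 + 0.5 * (x[0] - 0.5)) * (1 + 0.5 * (x[1] - 0.5))

# Fibonacci lattice: n = 610 and z = (1, 377) are consecutive Fibonacci
# numbers, a classical good generating vector in two dimensions.
approx = lattice_rule(f, 610, [1, 377])
```

In one dimension with z = (1) the rule reduces to the left-endpoint rectangle rule, which is a handy sanity check; the value of lattice rules shows up in higher dimensions, where well-chosen z gives much faster convergence than plain Monte Carlo for smooth integrands.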

    Vibrational-rotational energies of all H2 isotopomers using Monte Carlo methods

    S. A. Alexander
    Abstract Using variational Monte Carlo techniques, we have computed several of the lowest rotational-vibrational energies of all the hydrogen molecule isotopomers (H2, HD, HT, D2, DT, and T2). These calculations do not require the excited states to be explicitly orthogonalized. We have examined both the usual Gaussian wave function form as well as a rapidly convergent Padé form. The high-quality potential energy surfaces used in these calculations are taken from our earlier work and include the Born-Oppenheimer energy, the diagonal correction to the Born-Oppenheimer approximation, and the lowest-order relativistic corrections at 24 internuclear points. Our energies are in good agreement with those determined by other methods. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2006 [source]
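The variational Monte Carlo recipe (sample |psi|^2 with Metropolis, average the local energy) can be sketched on the one-dimensional harmonic oscillator, for which the Gaussian trial exponent alpha = 1/2 gives the exact ground-state energy 1/2 with zero variance:

```python
import math
import random

def vmc_energy(alpha, n=20000, step=1.0, seed=6, burn=2000):
    """Variational Monte Carlo for H = -(1/2) d^2/dx^2 + (1/2) x^2 with
    trial wavefunction psi = exp(-alpha x^2).  Metropolis samples |psi|^2
    and the estimate is the average of the local energy
    E_L(x) = alpha + x^2 (1/2 - 2 alpha^2)."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for i in range(n + burn):
        prop = x + step * (rng.random() - 0.5)
        # |psi(x')|^2 / |psi(x)|^2 = exp(-2 alpha (x'^2 - x^2))
        if rng.random() < math.exp(-2.0 * alpha * (prop * prop - x * x)):
            x = prop
        if i >= burn:
            total += alpha + x * x * (0.5 - 2.0 * alpha * alpha)
    return total / n
```

Any alpha other than 1/2 yields an energy above 1/2 (the variational bound), and the spread of the local energy signals how far the trial function is from an eigenstate, which is the diagnostic exploited in calculations like the one above.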

    Erratum: Spectroscopic constants of H2 using Monte Carlo methods, S.A. Alexander and R.L. Coldwell, International Journal of Quantum Chemistry (2004) 100(6), 851-857

    S. A. Alexander
    The original article to which this Erratum refers was published in International Journal of Quantum Chemistry (2004) 100(6), 851-857 [source]

    Linkage Disequilibrium Mapping of Disease Susceptibility Genes in Human Populations

    David Clayton
    Summary The paper reviews recent work on statistical methods for using linkage disequilibrium to locate disease susceptibility genes, given a set of marker genes at known positions in the genome. The paper starts by considering a simple deterministic model for linkage disequilibrium and discusses recent attempts to elaborate it to include the effects of stochastic influences, of "drift", by the use of either Wright-Fisher models or approaches based on the coalescence of the genealogy of the sample of disease chromosomes. Most of this first part of the paper concerns a series of diallelic markers and, in this case, the models so far proposed are hierarchical probability models for multivariate binary data. Likelihoods are intractable and most approaches to linkage disequilibrium mapping amount to marginal models for pairwise associations between individual markers and the disease susceptibility locus. Approaches to evaluation of a full likelihood require Monte Carlo methods in order to integrate over the large number of unknowns. The fact that the initial state of the stochastic process which has led to present-day allele frequencies is unknown is noted and its implications for the hierarchical probability model are discussed. Difficulties and opportunities arising as a result of more polymorphic markers and extended marker haplotypes are indicated. Connections between the hierarchical modelling approach and methods based upon identity by descent and haplotype sharing by seemingly unrelated cases are explored. Finally, problems resulting from unknown modes of inheritance, incomplete penetrance, and "phenocopies" are briefly reviewed. [source]

    A comparison between multivariate Slash, Student's t and probit threshold models for analysis of clinical mastitis in first lactation cows

    Y-M. Chang
    Summary Robust threshold models with multivariate Student's t or multivariate Slash link functions were employed to infer genetic parameters of clinical mastitis at different stages of lactation, with each cow defining a cluster of records. The robust fits were compared with that from a multivariate probit model via a pseudo-Bayes factor and an analysis of residuals. Clinical mastitis records on 36 178 first-lactation Norwegian Red cows from 5286 herds, daughters of 245 sires, were analysed. The opportunity for infection interval, going from 30 days pre-calving to 300 days postpartum, was divided into four periods: (i) -30 to 0 days pre-calving; (ii) 1-30 days; (iii) 31-120 days; and (iv) 121-300 days of lactation. Within each period, absence or presence of clinical mastitis was scored as 0 or 1 respectively. Markov chain Monte Carlo methods were used to draw samples from posterior distributions of interest. Pseudo-Bayes factors strongly favoured the multivariate Slash and Student's t models over the probit model. The posterior mean of the degrees of freedom parameter for the Slash model was 2.2, indicating heavy tails of the liability distribution. The posterior mean of the degrees of freedom for the Student's t model was 8.5, also pointing away from a normal liability for clinical mastitis. A residual was the observed phenotype (0 or 1) minus the posterior mean of the probability of mastitis. The Slash and Student's t models tended to have smaller residuals than the probit model in cows that contracted mastitis. Heritability of liability to clinical mastitis was 0.13-0.14 before calving, and ranged from 0.05 to 0.08 after calving in the robust models. Genetic correlations were between 0.50 and 0.73, suggesting that clinical mastitis resistance is not the same trait across periods, corroborating earlier findings with probit models. [source]

    The combined use of Patterson and Monte Carlo methods for the decomposition of a powder diffraction pattern

    Angela Altomare
    The success of ab initio crystal structure solution by powder diffraction data is strictly related to the quality of the integrated intensity estimates. A new method that is able to improve the pattern decomposition step has been developed. It combines the inversion of a suitably modified Patterson map with the use of the Hamming codes [13,10] and [40,36] in order to explore more decomposition trials. The new approach has been introduced in EXPO2005, an updated version of EXPO2004, and successfully applied to a set of known organic and inorganic structures. [source]

    Distribution of Patients' Paroxysmal Atrial Tachyarrhythmia Episodes: Implications for Detection of Treatment Efficacy

    Distribution of Paroxysmal Atrial Tachyarrhythmia Episodes. Introduction: Clinical trials of treatments for paroxysmal atrial tachyarrhythmia (pAT) often compare different treatment groups using the time to first episode recurrence. This approach assumes that the time to the first recurrence is representative of all times between successive episodes in a given patient. We subjected this assumption to an empiric test. Methods and Results: Records of pAT onsets from a chronologic series of 134 patients with dual chamber implantable defibrillators were analyzed; 14 had experienced > 10 pAT episodes, which is sufficient for meaningful statistical modeling of the time intervals between episodes. Episodes were independent and randomly distributed in 9 of 14 patients, but a fit of the data to an exponential distribution, required by the stated assumption, was rejected in 13 of 14. In contrast, a Weibull distribution yielded an adequate goodness of fit in 5 of the 9 cases with independent and randomly distributed data. Monte Carlo methods were used to determine the impact of violations of the exponential distribution assumption on clinical trials using time from cardioversion to first episode recurrence as the dependent measure. In a parallel groups design, substantial loss of power occurs with sample sizes < 500 patients per group. In a cross-over design, there is insufficient power to detect a 30% reduction in episode frequency even with 300 patients. Conclusion: Clinical trials that rely on time to first episode recurrence may be considerably less able to detect efficacious treatments than may have been supposed. Analysis of multiple episode onsets recorded over time should be used to avoid this pitfall. [source]

    A forecasting procedure for nonlinear autoregressive time series models

    Yuzhi Cai
    Article first published online: 2 AUG 200
    Abstract Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi-step-ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is also difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m-step-ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in its ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd. [source]
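The Monte Carlo approach the paper benchmarks against can be sketched for a toy self-exciting threshold autoregressive (SETAR) model with invented coefficients: simulate many paths m steps ahead and average.

```python
import random

def setar_next(x, eps):
    """Toy SETAR(2; 1, 1): x_t = 0.6 x_{t-1} + e_t if x_{t-1} <= 0,
    else x_t = -0.4 x_{t-1} + e_t (coefficients invented)."""
    return (0.6 if x <= 0 else -0.4) * x + eps

def mc_forecast(x0, m, n=20000, sd=1.0, seed=7):
    """Monte Carlo m-step-ahead predictive mean: simulate n paths of the
    model from x0 and average their endpoints."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = x0
        for _ in range(m):
            x = setar_next(x, rng.gauss(0.0, sd))
        total += x
    return total / n
```

One-step-ahead conditional means are available exactly for this model (0.6 x0 in the lower regime, -0.4 x0 in the upper), which makes the Monte Carlo error directly checkable; for m > 1 the regime switching makes the predictive density non-Gaussian and often multimodal, the situation the paper's numerical procedure is designed for.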

    A Bayesian threshold nonlinearity test for financial time series

    Mike K. P. So
    Abstract We propose in this paper a threshold nonlinearity test for financial time series. Our approach adopts reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities of two competing models, namely GARCH and threshold GARCH models. Posterior evidence favouring the threshold GARCH model indicates threshold nonlinearity or volatility asymmetry. Simulation experiments demonstrate that our method works very well in distinguishing GARCH and threshold GARCH models. Sensitivity analysis shows that our method is robust to misspecification in the error distribution. In the application to 10 market indexes, clear evidence of threshold nonlinearity is discovered, thus supporting volatility asymmetry. Copyright © 2005 John Wiley & Sons, Ltd. [source]

    Application of the parametric bootstrap method to determine statistical errors in quantitative X-ray microanalysis of thin films

    Summary We applied the parametric bootstrap to the X-ray microanalysis of Si-Ge binary alloys, in order to assess the dependence of the Ge concentrations and the local film thickness, obtained by using previously described Monte Carlo methods, on the precision of the measured intensities. We show how it is possible by this method to determine the statistical errors associated with the quantitative analysis performed in sample regions of different composition and thickness, but by conducting only one measurement. We recommend the use of the bootstrap for a broad range of applications for quantitative microanalysis to estimate the precision of the final results and to compare the performances of different methods to each other. Finally, we exploited a test based on bootstrap confidence intervals to ascertain if, for given X-ray intensities, different values of the estimated composition in two points of the sample are indicative of an actual lack of homogeneity. [source]
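The parametric bootstrap scheme can be sketched for counting statistics: treat each measured intensity as the mean of a Poisson law (approximated here by a normal, which is adequate for large counts), resample synthetic intensities, and take percentile intervals of the recomputed statistic. The counts and the Ge-fraction statistic below are invented stand-ins for the paper's measured intensities:

```python
import random

def bootstrap_ci(counts, stat, b=4000, seed=8):
    """Parametric bootstrap for X-ray intensities: each measured count c
    is taken as the mean of a Poisson law, approximated by N(c, sqrt(c));
    the statistic is recomputed on each synthetic data set and a 95%
    percentile interval is returned."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.gauss(c, c ** 0.5) for c in counts]) for _ in range(b)
    )
    return reps[int(0.025 * b)], reps[int(0.975 * b)]

# Hypothetical Si and Ge peak counts; statistic = Ge fraction of the total.
counts = [9000.0, 1000.0]
frac = lambda v: v[1] / (v[0] + v[1])
lo, hi = bootstrap_ci(counts, frac)
```

Only one real measurement is needed: the spread of the bootstrap replicates stands in for the spread of repeated experiments, which is exactly the single-measurement error estimate described in the abstract.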

    Molecular and macroscopic modeling of phase separation

    AICHE JOURNAL, Issue 10 2000
    Fernando A. Escobedo
    Recently proposed pseudoensemble Monte Carlo methods are extended in this work to map out diverse phase diagrams (projections of the phase-coexistence hypersurface) of multicomponent mixtures, as required to characterize fluid-phase equilibrium phenomena and to design separation processes. Within the pseudoensemble framework, the macroscopic models of different equilibrium processes can be readily integrated with the mathematical constraints that specify the thermodynamic state of the system. The proposed Monte Carlo methods allow, for example, the simulation of isopleths and cloud-point lines, which can be used to compare experimental and simulation data and to test molecular force fields. Applications of this approach include the study of retrograde phenomena in a model natural-gas mixture through simulation of dewlines and coexistence lines at constant vaporization fraction. As demonstrated, pseudoensemble simulations can also be used to generate the thermodynamic data necessary to solve problems encountered in continuous and discontinuous distillation processes. [source]
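
    The pseudoensemble schemes build on the standard Metropolis acceptance rule. The sketch below is only that basic canonical-ensemble kernel applied to a toy one-dimensional harmonic potential in made-up reduced units, not the constrained phase-equilibrium moves the paper develops; it recovers the equipartition average energy T/2.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1.5                    # reduced temperature (k_B = 1), an assumed value
beta = 1.0 / T

def energy(x):
    return 0.5 * x * x     # toy harmonic potential

x, energies = 0.0, []
for step in range(200000):
    x_new = x + rng.uniform(-1.0, 1.0)          # trial displacement
    # Metropolis criterion: accept with probability min(1, exp(-beta * dE)).
    if rng.uniform() < np.exp(-beta * (energy(x_new) - energy(x))):
        x = x_new
    if step > 10000:                            # discard equilibration steps
        energies.append(energy(x))

mean_E = np.mean(energies)                      # equipartition predicts T/2
```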

    About estimation of fitted parameters' statistical uncertainties in EXAFS: critical approach on usual and Monte Carlo methods
    An important step in X-ray absorption spectroscopy (XAS) analysis is the fitting of a model to the experimental spectra, with a view to obtaining structural parameters. It is important to estimate the errors on these parameters, and three methods are used for this purpose. This article presents the conditions for applying these methods. It is shown that the usual equation is not applicable for fitting in R space or on filtered XAS data; a formula is established to treat these cases, and the equivalence between the usual formula and the brute-force method is demonstrated. Lastly, the problem of the nonlinearity of the XAS models is addressed, together with a comparison with Monte Carlo methods. [source]
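
    The Monte Carlo route to parameter uncertainties can be illustrated on a deliberately simple stand-in for an EXAFS fit: a straight-line model with a known noise level, where the analytic least-squares error on the slope is available for comparison. All numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for a spectral fit: straight line with known Gaussian noise.
x = np.linspace(0.0, 10.0, 40)
true_slope, true_icpt, sigma = 2.0, 1.0, 0.5
y = true_slope * x + true_icpt + rng.normal(0.0, sigma, x.size)

slope_hat, icpt_hat = np.polyfit(x, y, 1)

# Monte Carlo uncertainty: refit many synthetic data sets generated from the
# fitted model plus the (known) noise level, then take the spread of the fits.
slopes = []
for _ in range(2000):
    y_sim = slope_hat * x + icpt_hat + rng.normal(0.0, sigma, x.size)
    s, _ = np.polyfit(x, y_sim, 1)
    slopes.append(s)
slope_err_mc = np.std(slopes, ddof=1)

# Analytic least-squares error on the slope, for comparison.
slope_err_ls = sigma / np.sqrt(np.sum((x - x.mean()) ** 2))
```

    For this linear model the two error estimates agree; the article's point is that for nonlinear XAS models, and for fits in R space or on filtered data, the analytic formula needs correction while the Monte Carlo estimate remains applicable.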

    Estimates of human immunodeficiency virus prevalence and proportion diagnosed based on Bayesian multiparameter synthesis of surveillance data

    A. Goubar
    Summary. Estimates of the number of prevalent human immunodeficiency virus infections are used in England and Wales to monitor development of the human immunodeficiency virus-acquired immune deficiency syndrome epidemic and for planning purposes. The population is split into risk groups, and estimates of risk group size and of risk group prevalence and diagnosis rates are combined to derive estimates of the number of undiagnosed infections and of the overall number of infected individuals. In traditional approaches, each risk group size, prevalence or diagnosis rate parameter must be informed by just one summary statistic. Yet a rich array of surveillance and other data is available, providing information on parameters and on functions of parameters, and raising the possibility of inconsistency between sources of evidence in some parts of the parameter space. We develop a Bayesian framework for synthesis of surveillance and other information, implemented through Markov chain Monte Carlo methods. The sources of data are found to be inconsistent under their accepted interpretation, but the inconsistencies can be resolved by introducing additional 'bias adjustment' parameters. The best-fitting model incorporates a hierarchical structure to spread information more evenly over the parameter space. We suggest that multiparameter evidence synthesis opens new avenues in epidemiology based on the coherent summary of available data, assessment of consistency and bias modelling. [source]
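
    The synthesis idea in miniature: two hypothetical surveillance sources observing the same prevalence, combined in one posterior and explored with a random-walk Metropolis sampler. The flat prior and the counts are made up; the paper's model additionally links risk-group sizes, diagnosis rates and the bias-adjustment parameters needed when sources conflict.

```python
import math
import random

random.seed(4)

# Two (hypothetical) surveillance sources observing the same prevalence p.
x_a, n_a = 45, 1000        # e.g. an unlinked anonymous survey
x_b, n_b = 30, 800         # e.g. a clinic-based survey

def log_post(p):
    if not 0.0 < p < 1.0:
        return -math.inf
    # Flat prior on p; two independent binomial likelihoods.
    return (x_a * math.log(p) + (n_a - x_a) * math.log1p(-p)
            + x_b * math.log(p) + (n_b - x_b) * math.log1p(-p))

p, draws = 0.05, []
for it in range(30000):
    p_new = p + random.gauss(0.0, 0.01)        # random-walk proposal
    if math.log(random.random()) < log_post(p_new) - log_post(p):
        p = p_new
    if it > 5000:                              # discard burn-in
        draws.append(p)

post_mean = sum(draws) / len(draws)
```

    With a flat prior the exact posterior is Beta(76, 1726), so the sampler can be checked against the closed-form mean 76/1802.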

    A hierarchical modelling framework for identifying unusual performance in health care providers

    David I. Ohlssen
    Summary. A wide variety of statistical methods have been proposed for detecting unusual performance in cross-sectional data on health care providers. We attempt to create a unified framework for comparing these methods, focusing on a clear distinction between estimation and hypothesis testing approaches, with the corresponding distinction between detecting 'extreme' and 'divergent' performance. When assuming a random-effects model, the random-effects distribution forms the null hypothesis, and there appears little point in testing whether individual effects are greater or less than average. The hypothesis testing approach uses p-values as summaries and brings with it the standard problems of multiple testing, whether Bayesian or classical inference is adopted. A null random-effects formulation allows us to answer appropriate questions of the type: 'is a particular provider worse than we would expect the true worst provider (but still part of the null distribution) to be?' We outline a broad three-stage strategy of exploratory detection of unusual providers, detailed modelling robust to potential outliers and confirmation of unusual performance, illustrated by using two detailed examples. The concepts are most easily handled within a Bayesian analytic framework using Markov chain Monte Carlo methods, but the basic ideas should be generally applicable. [source]
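
    The null question ('is the worst provider worse than the worst we would expect under the null?') has a direct Monte Carlo answer: simulate the observed maximum under the random-effects null and compare. The provider count, variance components and planted outlier below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

J, tau, s = 50, 0.10, 0.05      # hypothetical: 50 providers, effect sd, sampling sd

# Observed standardized outcomes (simulated, with one genuinely divergent provider).
theta = rng.normal(0.0, tau, J)
theta[0] = 0.5                   # one provider placed well outside the null
y = theta + rng.normal(0.0, s, J)

# Null reference: distribution of the observed maximum when all J providers
# really do come from the random-effects distribution.
sims = rng.normal(0.0, tau, (20000, J)) + rng.normal(0.0, s, (20000, J))
null_max = sims.max(axis=1)

# P(worst of J providers under the null >= observed worst).
p_value = (null_max >= y.max()).mean()
```

    A small `p_value` flags the worst observed provider as divergent rather than merely extreme, which is exactly the distinction the paper's three-stage strategy formalizes.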

    Particle Markov chain Monte Carlo methods

    Christophe Andrieu
    Summary. Markov chain Monte Carlo and sequential Monte Carlo methods have emerged as the two main tools to sample from high dimensional probability distributions. Although asymptotic convergence of Markov chain Monte Carlo algorithms is ensured under weak assumptions, the performance of these algorithms is unreliable when the proposal distributions that are used to explore the space are poorly chosen and/or if highly correlated variables are updated independently. We show here how it is possible to build efficient high dimensional proposal distributions by using sequential Monte Carlo methods. This allows us not only to improve over standard Markov chain Monte Carlo schemes but also to make Bayesian inference feasible for a large class of statistical models where this was not previously so. We demonstrate these algorithms on a non-linear state space model and a Lévy-driven stochastic volatility model. [source]
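
    The building block of particle MCMC is the particle filter's unbiased estimate of the marginal likelihood, which is then plugged into a Metropolis-Hastings acceptance ratio. Below is a bootstrap-filter sketch for a toy linear-Gaussian model (all settings assumed), checked against the exact Kalman-filter log-likelihood, which is available only because the toy model is linear-Gaussian.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy state space: x_t = phi*x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
phi, q, r, T = 0.9, 0.5, 0.5, 50
x, ys = 0.0, []
for _ in range(T):
    x = phi * x + rng.normal(0.0, q)
    ys.append(x + rng.normal(0.0, r))

# Bootstrap particle filter: propagate from the transition density, weight by
# the observation density, resample; the sum of log mean weights is an
# unbiased (in the likelihood, not the log) estimate of the log-likelihood.
N = 2000
particles = np.zeros(N)
pf_ll = 0.0
for y in ys:
    particles = phi * particles + rng.normal(0.0, q, N)
    w = np.exp(-0.5 * ((y - particles) / r) ** 2) / (r * np.sqrt(2.0 * np.pi))
    pf_ll += np.log(w.mean())
    particles = rng.choice(particles, size=N, p=w / w.sum())  # multinomial resampling

# Exact log-likelihood via the Kalman filter, for comparison.
m, P, kf_ll = 0.0, 0.0, 0.0
for y in ys:
    m, P = phi * m, phi * phi * P + q * q            # predict
    S = P + r * r                                    # innovation variance
    kf_ll += -0.5 * (np.log(2.0 * np.pi * S) + (y - m) ** 2 / S)
    K = P / S                                        # Kalman gain
    m, P = m + K * (y - m), (1.0 - K) * P            # update
```

    In particle marginal Metropolis-Hastings, `pf_ll` computed at proposed parameter values replaces the intractable likelihood in the acceptance probability.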

    Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations

    Håvard Rue
    Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. The class includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage of our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. [source]
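
    The core move, replacing an intractable posterior by a Gaussian centred at its mode, can be sketched for a single latent variable: Poisson counts with a Gaussian prior on the log-rate. This is a deliberately minimal latent Gaussian model with assumed values; INLA proper handles a full latent field, hyperparameters and nested corrections.

```python
import numpy as np

rng = np.random.default_rng(7)

# Poisson counts with a N(mu0, tau0^2) prior on the log-rate eta.
y = rng.poisson(5.0, size=30)
mu0, tau0 = 0.0, 10.0            # vague prior (assumed values)

def log_post(eta):
    return np.sum(y) * eta - y.size * np.exp(eta) - 0.5 * ((eta - mu0) / tau0) ** 2

# Newton iterations to the posterior mode of eta.
eta = 0.0
for _ in range(50):
    g = np.sum(y) - y.size * np.exp(eta) - (eta - mu0) / tau0 ** 2   # gradient
    h = -y.size * np.exp(eta) - 1.0 / tau0 ** 2                      # curvature (< 0)
    eta -= g / h

sd = np.sqrt(-1.0 / h)           # Laplace approximation: posterior ~ N(eta, sd^2)

# Check against brute-force numerical integration on a fine grid.
grid = np.linspace(eta - 6 * sd, eta + 6 * sd, 4001)
w = np.exp(log_post(grid) - log_post(eta))
post_mean = np.sum(grid * w) / np.sum(w)
```

    For this nearly-Gaussian posterior the mode and the exact posterior mean agree closely; the deterministic approximation runs in microseconds where an MCMC sampler would need thousands of draws.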