Carlo


Kinds of Carlo

  • Chain Monte Carlo
  • Markov chain Monte Carlo
  • Monte Carlo

Terms modified by Carlo

  • Carlo algorithm
  • Carlo algorithms
  • Carlo analysis
  • Carlo approach
  • Carlo calculation
  • Carlo computer simulation
  • Carlo evidence
  • Carlo experiment
  • Carlo integration
  • Carlo Markov chain
  • Carlo method
  • Carlo methods
  • Carlo model
  • Carlo procedure
  • Carlo sampling
  • Carlo sampling scheme
  • Carlo scheme
  • Carlo simulation
  • Carlo simulation approach
  • Carlo simulation method
  • Carlo simulation studies
  • Carlo study
  • Carlo technique
  • Carlo techniques

Selected Abstracts


    A Bayesian Monte Carlo Approach to Global Illumination

    COMPUTER GRAPHICS FORUM, Issue 8 2009
    Jonathan Brouillat
    I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism. Abstract Most Monte Carlo rendering algorithms rely on importance sampling to reduce the variance of estimates. Importance sampling is efficient when the proposal sample distribution is well-suited to the form of the integrand but fails otherwise. The main reason is that the sample location information is not exploited. All sample values are given the same importance regardless of their proximity to one another. Two samples falling in a similar location will have equal importance whereas they are likely to contain redundant information. The Bayesian approach we propose in this paper uses both the location and value of the data to infer an integral value based on a prior probabilistic model of the integrand. The Bayesian estimate depends only on the sample values and locations, and not on how these samples have been chosen. We show how this theory can be applied to the final gathering problem and present results that clearly demonstrate the benefits of Bayesian Monte Carlo. [source]
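
    As a concrete illustration of the idea, the following minimal sketch estimates a one-dimensional integral with Bayesian Monte Carlo: a Gaussian-process prior on the integrand turns the samples' locations and values into a posterior mean for the integral. The integrand, kernel hyperparameters and Gaussian base measure are illustrative assumptions, not the rendering-specific model of the paper.

```python
# Bayesian Monte Carlo (Bayesian quadrature) sketch for Z = E_p[f(x)] with p = N(0, 1).
import numpy as np

rng = np.random.default_rng(0)

def f(x):                              # integrand (assumed for illustration)
    return np.sin(x) ** 2 + 0.5 * x

lam, w0, noise = 0.7, 1.0, 1e-8        # squared-exponential kernel hyperparameters (assumed)

x = rng.normal(size=30)                # sample locations drawn from p(x) = N(0, 1)
y = f(x)

# Kernel matrix K and the vector z_i = integral of k(x, x_i) p(x) dx (closed form here).
K = w0 * np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lam ** 2) + noise * np.eye(len(x))
z = w0 * np.sqrt(lam ** 2 / (lam ** 2 + 1.0)) * np.exp(-0.5 * x ** 2 / (lam ** 2 + 1.0))

bmc_estimate = z @ np.linalg.solve(K, y)     # posterior mean of the integral
plain_mc = f(rng.normal(size=30)).mean()     # plain Monte Carlo with the same sample budget
print(bmc_estimate, plain_mc)
```

    Unlike the plain Monte Carlo average, the Bayesian estimate down-weights clustered samples, which is exactly the redundancy argument made in the abstract.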


    Effectiveness of Conservation Targets in Capturing Genetic Diversity

    CONSERVATION BIOLOGY, Issue 1 2003
    Maile C. Neel
    Any conservation action that preserves some populations and not others will have genetic consequences. We used empirical data from four rare plant taxa to assess these consequences in terms of how well allele numbers (all alleles and alleles occurring at a frequency >0.05 in any population) and expected heterozygosity are represented when different numbers of populations are conserved. We determined sampling distributions for these three measures of genetic diversity using Monte Carlo methods. We assessed the proportion of alleles included in the number of populations considered adequate for conservation, needed to capture all alleles, and needed to meet an accepted standard of genetic-diversity conservation of having a 90-95% probability of including all common alleles. We also assessed the number of populations necessary to obtain values of heterozygosity within ±10% of the value obtained from all populations. Numbers of alleles were strongly affected by the number of populations sampled. Heterozygosity was only slightly less sensitive to numbers of populations than were alleles. On average, currently advocated conservation intensities represented 67-83% of all alleles and 85-93% of common alleles. The smallest number of populations to include all alleles ranged from 6 to 17 (42-57%), but <0.2% of 1000 samples of these numbers of populations included them all. It was necessary to conserve 16-29 (53-93%) of the sampled populations to meet the standard for common alleles. Between 20% and 64% of populations were needed to reliably represent species-level heterozygosity. Thus, higher percentages of populations are needed than are currently considered adequate to conserve genetic diversity if populations are selected without genetic data. [source]
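
    A minimal sketch of the Monte Carlo resampling step described above: for each number of conserved populations k, random subsets of k populations are drawn repeatedly and the proportion of alleles they capture is recorded. The presence/absence matrix below is synthetic, purely to make the sketch self-contained; in the study it would come from the empirical allele data of each taxon.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pops, n_alleles = 30, 120
presence = rng.random((n_pops, n_alleles)) < 0.3     # allele j present in population i (synthetic)

def capture_distribution(k, n_draws=1000):
    """Sampling distribution of the fraction of alleles captured by k randomly chosen populations."""
    captured = np.empty(n_draws)
    for b in range(n_draws):
        subset = rng.choice(n_pops, size=k, replace=False)
        captured[b] = presence[subset].any(axis=0).mean()
    return captured

for k in (5, 10, 20):
    d = capture_distribution(k)
    print(k, round(d.mean(), 3), np.quantile(d, [0.05, 0.95]).round(3))
```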


    Equation of State of Strongly Coupled Quark-Gluon Plasma - Path Integral Monte Carlo Results

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7-8 2009
    V.S. Filinov
    Abstract A strongly coupled plasma of quark and gluon quasiparticles at temperatures from 1.1Tc to 3Tc is studied by path integral Monte Carlo simulations. This method extends previous classical nonrelativistic simulations based on a color Coulomb interaction to the quantum regime. We present the equation of state and find good agreement with lattice results. Further, pair distribution functions and color correlation functions are computed, indicating strong correlations and liquid-like behavior. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    The Equation of State of Fluid Hydrogen

    CONTRIBUTIONS TO PLASMA PHYSICS, Issue 3-4 2005
    D. Kremp
    Abstract A review is given of some selected aspects of the development of the equation of state of hydrogen. Recent results are presented for low-temperature fluid hydrogen. Reaction Ensemble Monte Carlo data thus determined are combined with Path Integral Monte Carlo results to give a Hugoniot covering the entire pressure range. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


    Synthesis, spectroscopic studies and ab-initio structure determination from X-ray powder diffraction of bis-(N-3-acetophenylsalicylaldiminato)copper(II)

    CRYSTAL RESEARCH AND TECHNOLOGY, Issue 8 2005
    S. Banerjee
    Abstract The synthesis, spectroscopic studies and crystal structure determination from X-ray powder diffraction have been carried out for bis-(N-3-acetophenylsalicylaldiminato)copper(II). The structure is triclinic, space group P1, with unit cell dimensions a = 11.817(1) Å, b = 12.087(1) Å, c = 9.210(1) Å, α = 102.62(1)°, β = 111.16(1)°, γ = 86.15(1)°, V = 1197.0(2) Å3, Z = 2. The structure has been solved by a Monte Carlo simulated annealing approach and refined with the GSAS package. The final Rp value was 8.68%. The coordination geometry around the copper atom in the complex is intermediate between square-planar and tetrahedral, with two salicylaldimine ligands in a trans arrangement. Intermolecular C-H···O hydrogen bonds between molecules related by translation generate infinite chains along the [010] direction. The molecular chains are linked via additional C-H···O hydrogen bonds to form a three-dimensional supramolecular network. (© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
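
    The Monte Carlo simulated annealing step mentioned above can be sketched generically: trial moves of the model parameters are accepted with the Metropolis criterion while a temperature parameter is slowly lowered. In the structure-solution setting the cost would be the misfit between observed and calculated powder diffraction profiles as the molecule is translated and rotated in the cell; here a toy two-parameter cost function stands in for it, an assumption made only to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(p):                                   # stand-in for the powder-profile misfit (assumed)
    x, y = p
    return (x - 1.0) ** 2 + 5.0 * (y + 2.0) ** 2 + np.sin(5.0 * x) ** 2

p = rng.normal(size=2)                         # starting parameters
best, best_cost = p.copy(), cost(p)
T = 1.0                                        # initial "temperature"
for step in range(20_000):
    trial = p + rng.normal(scale=0.1, size=2)  # random trial move
    dE = cost(trial) - cost(p)
    if dE < 0 or rng.random() < np.exp(-dE / T):
        p = trial                              # Metropolis acceptance
        if cost(p) < best_cost:
            best, best_cost = p.copy(), cost(p)
    T *= 0.9997                                # slow cooling schedule
print(best.round(3), round(best_cost, 4))
```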


    Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods

    ECOLOGY LETTERS, Issue 7 2007
    Subhash R. Lele
    Abstract We introduce a new statistical computing method, called data cloning, to calculate maximum likelihood estimates and their standard errors for complex ecological models. Although the method uses the Bayesian framework and exploits the computational simplicity of the Markov chain Monte Carlo (MCMC) algorithms, it provides valid frequentist inferences such as the maximum likelihood estimates and their standard errors. The inferences are completely invariant to the choice of the prior distributions and therefore avoid the inherent subjectivity of the Bayesian approach. The data cloning method is easily implemented using standard MCMC software. Data cloning is particularly useful for analysing ecological situations in which hierarchical statistical models, such as state-space models and mixed effects models, are appropriate. We illustrate the method by fitting two nonlinear population dynamics models to data in the presence of process and observation noise. [source]
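
    A minimal sketch of the data-cloning recipe, under a deliberately simple model (i.i.d. normal observations with known standard deviation) rather than the hierarchical state-space models of the paper: the likelihood is raised to the power K, a standard Metropolis MCMC run is made, and the posterior mean and K times the posterior variance approximate the MLE and its sampling variance.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=2.5, scale=1.0, size=40)     # "data": true mean 2.5, known sd 1.0 (assumed model)
K = 30                                          # number of clones (assumed)

def log_post_cloned(mu):
    loglik = -0.5 * np.sum((y - mu) ** 2)       # known sd = 1
    logprior = -0.5 * mu ** 2 / 100.0           # vague N(0, 10^2) prior
    return K * loglik + logprior                # clone the likelihood, not the prior

mu, chain = 0.0, []
for it in range(20_000):                        # random-walk Metropolis
    prop = mu + rng.normal(scale=0.05)
    if np.log(rng.random()) < log_post_cloned(prop) - log_post_cloned(mu):
        mu = prop
    chain.append(mu)
chain = np.array(chain[5000:])

mle, var_mle = chain.mean(), K * chain.var()
print("data cloning:", round(mle, 3), "variance:", round(var_mle, 4))
print("analytic MLE:", round(y.mean(), 3), "variance:", round(1.0 / len(y), 4))
```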


    Characterization and uncertainty analysis of VOCs emissions from industrial wastewater treatment plants

    ENVIRONMENTAL PROGRESS & SUSTAINABLE ENERGY, Issue 3 2010
    Kaishan Zhang
    Abstract Air toxics from industrial wastewater treatment plants (IWTPs) impose serious health concerns on the surrounding residential neighborhoods. To address such health concerns, one of the key challenges is to quantify the air emissions from the IWTPs. The objective here is to characterize the air emissions from the IWTPs and quantify their associated uncertainty. An IWTP receiving the wastewaters from an airplane maintenance facility is used for illustration, with a focus on the quantification of air emissions for benzyl alcohol, phenol, methylene chloride, 2-butanone, and acetone. Two general fate models, i.e., WATER9 and TOXCHEM+V3.0, were used to model the IWTP and quantify the air emissions. Monte Carlo and bootstrap simulations were used for uncertainty analysis. On average, air emissions from the IWTP were estimated to range from 0.003 lb/d to approximately 16 lb/d, with phenol being the highest and benzyl alcohol the lowest. However, the emissions are associated with large uncertainty. The ratio of the 97.5th percentile to the 2.5th percentile air emissions ranged from 5 to 50 depending on the pollutant. This indicates that point estimates of air emissions might fail to capture the worst-case scenarios, leading to inaccurate conclusions when used for health risk assessment. © 2009 American Institute of Chemical Engineers Environ Prog, 2010 [source]
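
    The Monte Carlo part of such an uncertainty analysis reduces to repeated sampling of uncertain inputs through an emission model. The sketch below uses a simple flow x concentration x stripping-fraction model and assumed input distributions; it is not the WATER9 or TOXCHEM+ formulation, only an illustration of how the 97.5th/2.5th percentile ratio reported above is obtained.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
flow  = rng.lognormal(mean=np.log(5e5), sigma=0.2, size=n)     # wastewater flow, L/d (assumed)
conc  = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)     # influent concentration, mg/L (assumed)
f_air = rng.beta(2, 8, size=n)                                 # fraction stripped to air (assumed)

emission_lb_per_day = flow * conc * f_air * 1e-6 * 2.2046      # mg/d -> kg/d -> lb/d

lo, med, hi = np.percentile(emission_lb_per_day, [2.5, 50, 97.5])
print(f"median {med:.2f} lb/d, 95% interval [{lo:.2f}, {hi:.2f}] lb/d, 97.5th/2.5th ratio {hi / lo:.1f}")
```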


    TK/TD dose-response modeling of toxicity

    ENVIRONMETRICS, Issue 5 2007
    Munni Begum
    Abstract In environmental cancer risk assessment of a toxic chemical, the main focus is on understanding induced target organ toxicity that may in turn lead to carcinogenicity. Mathematical models based on systems of ordinary differential equations with biologically relevant parameters are tenable methods for describing the disposition of chemicals in target organs. In the evaluation of a toxic chemical, dose-response assessment often addresses only the toxicodynamics (TD) of the chemical, while its toxicokinetics (TK) do not enter into consideration. The primary objective of this research is to integrate both TK and TD in the evaluation of toxic chemicals while performing dose-response assessment. Population models, with hierarchical setup and nonlinear predictors, for TK concentration and TD effect measures are considered. A one-compartment model with biologically relevant parameters, such as organ volume, uptake rate and excretion rate, or clearance, is used to derive the TK predictor, while a two-parameter Emax model is used as a predictor for TD measures. Inference of the model parameters under nonnegativity and assay limit of detection (LOD) constraints was carried out by Bayesian approaches using Markov Chain Monte Carlo (MCMC) techniques. Copyright © 2006 John Wiley & Sons, Ltd. [source]
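
    The two predictors named above have simple closed forms, sketched below with illustrative parameter values; in the paper these parameters carry population-level priors and are estimated by MCMC under nonnegativity and limit-of-detection constraints.

```python
import numpy as np

def concentration(t, dose, V, ka, ke):
    """One-compartment TK: first-order uptake ka, first-order elimination ke, organ volume V."""
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def emax_effect(c, emax, ec50):
    """Two-parameter Emax TD predictor: effect rises hyperbolically with concentration."""
    return emax * c / (ec50 + c)

t = np.linspace(0.0, 24.0, 25)                                  # hours
c = concentration(t, dose=100.0, V=5.0, ka=1.2, ke=0.15)        # parameter values assumed
e = emax_effect(c, emax=1.0, ec50=3.0)
for ti, ci, ei in zip(t[::6], c[::6], e[::6]):
    print(f"t = {ti:5.1f} h   conc = {ci:6.2f}   effect = {ei:5.3f}")
```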


    Chemical mass balance when an unknown source exists

    ENVIRONMETRICS, Issue 8 2004
    Nobuhisa Kashiwagi
    Abstract A chemical mass balance method is proposed for the case where the existence of an unknown source is suspected. In general, when the existence of an unknown source is assumed in statistical receptor modeling, unknown quantities such as the composition of an unknown source and the contributions of assumed sources become unidentifiable. To estimate these unknown quantities while avoiding the identification problem, a Bayes model for chemical mass balance is constructed in the form of composition without using prior knowledge on the unknown quantities except for natural constraints. The covariance of ambient observations given in the form of composition is defined in several ways. Markov chain Monte Carlo is used for evaluating the posterior means and variances of the unknown quantities as well as the likelihood for the proposed model. The likelihood is used for selecting the best-fit covariance model. A simulation study is carried out to check the performance of the proposed method. Copyright © 2004 John Wiley & Sons, Ltd. [source]


    Empirical Bayes estimators and non-parametric mixture models for space and time-space disease mapping and surveillance

    ENVIRONMETRICS, Issue 5 2003
    Dankmar Böhning
    Abstract The analysis of the geographic variation of disease and its representation on a map is an important topic in epidemiological research and in public health in general. Identification of spatial heterogeneity of relative risk using morbidity and mortality data is required. Frequently, interest is also in the analysis of spatial data with respect to time, typically using data aggregated in time windows of 5 or 10 years. The occurrence measure of interest is usually the standardized mortality (morbidity) ratio (SMR). It is well known that disease maps in space or in space and time should not be based solely upon the crude SMR but rather on some smoothed version of it. This fact has led to a tremendous amount of theoretical developments in spatial methodology, in particular in the area of hierarchical modeling in connection with fully Bayesian estimation techniques like Markov chain Monte Carlo. It seems, however, that while these theoretical developments took place, on the practical side only very few of them have found their way into the daily practice of epidemiological work and surveillance routines. In this article we focus on developments that avoid the pitfalls of the crude SMR and simultaneously retain simplicity and, at least approximately, the validity of more complex models. After an illustration of the typical pitfalls of the crude SMR the article is centered around three issues: (a) the separation of spatial random variation from spatial structural variation; (b) a simple mixture model for capturing spatial heterogeneity; (c) an extension of this model for capturing temporal information. The techniques are illustrated by numerous examples. Public-domain software like Dismap, which enables easy mixture modeling in the context of disease mapping, is mentioned. Copyright © 2003 John Wiley & Sons, Ltd. [source]
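
    For reference, the crude SMR for area i is simply the observed count divided by the expected count, SMR_i = O_i / E_i. The sketch below computes crude SMRs for synthetic data and contrasts them with a basic Poisson-gamma empirical Bayes shrinkage, a simpler relative of the mixture models advocated above; the data and the moment-matched prior are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_areas = 50
E = rng.uniform(2.0, 60.0, size=n_areas)                 # expected counts from reference rates (assumed)
true_rr = rng.gamma(shape=8.0, scale=1.0 / 8.0, size=n_areas)
O = rng.poisson(E * true_rr)                             # observed counts (synthetic)

smr = O / E                                              # crude SMR: unstable where E is small

# Moment-matched gamma prior for the relative risk (global mean m, between-area variance s2),
# giving the usual Poisson-gamma shrinkage estimator (O + a) / (E + b).
m = O.sum() / E.sum()
s2 = max(np.sum(E * (smr - m) ** 2) / E.sum() - m / E.mean(), 1e-6)
a, b = m ** 2 / s2, m / s2
smr_smoothed = (O + a) / (E + b)

print("crude SMR range:   ", smr.min().round(2), "to", smr.max().round(2))
print("smoothed SMR range:", smr_smoothed.min().round(2), "to", smr_smoothed.max().round(2))
```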


    Testing Conditional Asset Pricing Models Using a Markov Chain Monte Carlo Approach

    EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2008
    Manuel Ammann
    G12 Abstract We use Markov Chain Monte Carlo (MCMC) methods for the parameter estimation and the testing of conditional asset pricing models. In contrast to traditional approaches, it is truly conditional because the assumption that time variation in betas is driven by a set of conditioning variables is not necessary. Moreover, the approach has exact finite sample properties and accounts for errors-in-variables. Using S&P 500 panel data, we analyse the empirical performance of the CAPM and the Fama and French (1993) three-factor model. We find that time variation of betas in the CAPM and the time variation of the coefficients for the size factor (SMB) and the distress factor (HML) in the three-factor model improve the empirical performance. Therefore, our findings are consistent with time variation of firm-specific exposure to market risk, systematic credit risk and systematic size effects. However, a Bayesian model comparison trading off goodness of fit and model complexity indicates that the conditional CAPM performs best, followed by the conditional three-factor model, the unconditional CAPM, and the unconditional three-factor model. [source]


    Lattice Monte Carlo and Experimental Analyses of the Thermal Conductivity of Random-Shaped Cellular Aluminum

    ADVANCED ENGINEERING MATERIALS, Issue 10 2009
    Thomas Fiedler
    The effective thermal conductivity of open- and closed-cell aluminium foams with stochastic pore morphologies has been determined by numerical, analytical and experimental methods. A three-dimensional analysis technique has been used where numerical calculation models are generated based on 3D computed tomographic (CT) reconstructions. The resulting three-dimensional grid models are used for thermal Lattice Monte Carlo (LMC) analyses. The second part of this paper addresses experimental measurements of open-cell M-pore® and closed-cell Alporas® cellular aluminium. Finally, results obtained using both approaches are compared to classical analytic predictions. [source]


    DETECTING CORRELATION BETWEEN CHARACTERS IN A COMPARATIVE ANALYSIS WITH UNCERTAIN PHYLOGENY

    EVOLUTION, Issue 6 2003
    John P. Huelsenbeck
    Abstract. The importance of accommodating the phylogenetic history of a group when performing a comparative analysis is now widely recognized. The typical approaches either assume the tree is known without error, or they base inferences on a collection of well-supported trees or on a collection of trees generated under a stochastic model of cladogenesis. However, these approaches do not adequately account for the uncertainty of phylogenetic trees in a comparative analysis, especially when data relevant to the phylogeny of a group are available. Here, we develop a method for performing comparative analyses that is based on an extension of Felsenstein's independent contrasts method. Uncertainties in the phylogeny, branch lengths, and other parameters are accommodated by averaging over all possible trees, weighting each by the probability that the tree is correct. We do this in a Bayesian framework and use Markov chain Monte Carlo to perform the high-dimensional summations and integrations required by the analysis. We illustrate the method using comparative characters sampled from Anolis lizards. [source]


    A BAYESIAN FRAMEWORK FOR THE ANALYSIS OF COSPECIATION

    EVOLUTION, Issue 2 2000
    John P. Huelsenbeck
    Abstract. Information on the history of cospeciation and host switching for a group of host and parasite species is contained in the DNA sequences sampled from each. Here, we develop a Bayesian framework for the analysis of cospeciation. We suggest a simple model of host switching by a parasite on a host phylogeny in which host switching events are assumed to occur at a constant rate over the entire evolutionary history of associated hosts and parasites. The posterior probability density of the parameters of the model of host switching is evaluated numerically using Markov chain Monte Carlo. In particular, the method generates the probability density of the number of host switches and of the host switching rate. Moreover, the method provides information on the probability that an event of host switching is associated with a particular pair of branches. A Bayesian approach has several advantages over other methods for the analysis of cospeciation. In particular, it does not assume that the host or parasite phylogenies are known without error; many alternative phylogenies are sampled in proportion to their probability of being correct. [source]


    Kinetic Monte Carlo Simulations of Precipitation

    ADVANCED ENGINEERING MATERIALS, Issue 12 2006
    E. Clouet
    Abstract We present some recent applications of the atomistic diffusion model and of the kinetic Monte Carlo (KMC) algorithm to systems of industrial interest, i.e. Al-Zr-Sc and Fe-Nb-C alloys, or to model systems. These applications include the study of homogeneous and heterogeneous precipitation as well as of phase transformations under irradiation. The KMC simulations are also used to test the main assumptions and limitations of simpler models and classical theories used in industry, e.g. the classical nucleation theory. [source]
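
    The engine behind such simulations is the rejection-free (residence-time) kinetic Monte Carlo loop: list the rates of all possible events, pick one with probability proportional to its rate, and advance the clock by an exponentially distributed waiting time. The sketch below applies this loop to a deliberately tiny toy system, a single particle hopping on a 1D lattice with assumed bond barriers, not to the vacancy-mediated alloy kinetics of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sites, kT, nu0 = 100, 0.05, 1.0e13                    # sites, temperature (eV), attempt frequency 1/s
bond_barrier = rng.uniform(0.08, 0.15, size=n_sites)    # migration barrier of bond (i, i+1), eV (assumed)

pos, t = 0, 0.0
for step in range(100_000):
    # rates of the two possible events: hop left across bond (pos-1, pos) or right across (pos, pos+1)
    rates = nu0 * np.exp(-np.array([bond_barrier[(pos - 1) % n_sites],
                                    bond_barrier[pos]]) / kT)
    total = rates.sum()
    t += rng.exponential(1.0 / total)                   # advance the clock (residence time)
    event = rng.choice(2, p=rates / total)              # pick an event proportional to its rate
    pos = (pos - 1) % n_sites if event == 0 else (pos + 1) % n_sites

print(f"simulated physical time {t:.3e} s after 1e5 hops, final site {pos}")
```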


    Quantitative Phase Field Modeling of Precipitation Processes

    ADVANCED ENGINEERING MATERIALS, Issue 12 2006
    Q. Bronchard
    Phase Field modelling of microstructural evolution in alloys already has a long and successful history. One of the basics of the theory is the introduction of continuous fields (concentration, long-range order parameters) that describe the local state of the alloy. These fields have a meaning only at a mesoscopic scale. One consequence is that we can treat much larger systems than with microscopic methods such as Monte Carlo or molecular dynamics simulations. The aim of this work is to precisely analyse the status of the mesoscopic free energy densities that are used in Phase Field theories and, simultaneously, to clarify the form that the Phase Field equations should adopt. [source]


    The Forex Forward Puzzle: The Career Risk Hypothesis

    FINANCIAL REVIEW, Issue 3 2009
    Fang Liu
    F31; G15 Abstract We conjecture that the forward puzzle may reflect career risks. When professional investors observe public danger signals about a currency, they require a premium for holding it. We find evidence of this in Exchange Rate Mechanism rates. As deep discounts do signal danger, we next specify nonlinear variants of the Fama regression to capture this risk. We also decompose the forward premium into a long-memory trend and short-term component. We find empirical evidence for a career risk premium; risk is in fact dominant in the trend component while the short-term component loads more on expectations. All confidence intervals are calculated via Monte Carlo. [source]
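
    The two basic ingredients mentioned above, the Fama regression of the exchange-rate change on the lagged forward premium and Monte Carlo confidence intervals for its coefficients, can be sketched as follows. The synthetic data, the AR(1) premium dynamics and the residual-resampling design are assumptions for illustration; the paper works with Exchange Rate Mechanism currencies and nonlinear variants of the regression.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 300

# Synthetic persistent forward premium f_t - s_t and exchange-rate change s_{t+1} - s_t (assumed).
fp = np.empty(T)
fp[0] = 0.002
for t in range(1, T):
    fp[t] = 0.0002 + 0.9 * fp[t - 1] + rng.normal(scale=2e-4)
ds = 0.0005 + 0.4 * fp + rng.normal(scale=0.01, size=T)

def ols(y, x):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]          # (intercept, slope)

a_hat, b_hat = ols(ds, fp)                               # Fama regression
resid = ds - (a_hat + b_hat * fp)

# Monte Carlo confidence interval: resimulate under the fitted model, re-estimate the slope.
slopes = []
for _ in range(2000):
    ds_sim = a_hat + b_hat * fp + rng.choice(resid, size=T, replace=True)
    slopes.append(ols(ds_sim, fp)[1])
lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"Fama slope {b_hat:.2f}, 95% Monte Carlo interval [{lo:.2f}, {hi:.2f}]")
```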


    MCMC-based linkage analysis for complex traits on general pedigrees: multipoint analysis with a two-locus model and a polygenic component

    GENETIC EPIDEMIOLOGY, Issue 2 2007
    Yun Ju Sung
    Abstract We describe a new program lm_twoqtl, part of the MORGAN package, for parametric linkage analysis with a quantitative trait locus (QTL) model having one or two QTLs and a polygenic component, which models additional familial correlation from other unlinked QTLs. The program has no restriction on number of markers or complexity of pedigrees, facilitating use of more complex models with general pedigrees. This is the first available program that can handle a model with both two QTLs and a polygenic component. Competing programs use only simpler models: one QTL, one QTL plus a polygenic component, or variance components (VC). Use of simple models when they are incorrect, as for complex traits that are influenced by multiple genes, can bias estimates of QTL location or reduce power to detect linkage. We compute the likelihood with Markov Chain Monte Carlo (MCMC) realization of segregation indicators at the hypothesized QTL locations conditional on marker data, summation over phased multilocus genotypes of founders, and peeling of the polygenic component. Simulated examples, with various-sized pedigrees, show that two-QTL analysis correctly identifies the location of both QTLs, even when they are closely linked, whereas other analyses, including the VC approach, fail to identify the location of QTLs with modest contribution. Our examples illustrate the advantage of parametric linkage analysis with two QTLs, which provides higher power for linkage detection and better localization than use of simpler models. Genet. Epidemiol. © 2006 Wiley-Liss, Inc. [source]


    Finding starting points for Markov chain Monte Carlo analysis of genetic data from large and complex pedigrees

    GENETIC EPIDEMIOLOGY, Issue 1 2003
    Yuqun Luo
    Abstract Genetic data from founder populations are advantageous for studies of complex traits that are often plagued by the problem of genetic heterogeneity. However, the desire to analyze large and complex pedigrees that often arise from such populations, coupled with the need to handle many linked and highly polymorphic loci simultaneously, poses challenges to current standard approaches. A viable alternative to solving such problems is via Markov chain Monte Carlo (MCMC) procedures, where a Markov chain, defined on the state space of a latent variable (e.g., genotypic configuration or inheritance vector), is constructed. However, finding starting points for the Markov chains is a difficult problem when the pedigree is not single-locus peelable; methods proposed in the literature have not yielded completely satisfactory solutions. We propose a generalization of the heated Gibbs sampler with relaxed penetrances (HGRP) of Lin et al. ([1993] IMA J. Math. Appl. Med. Biol. 10:1-17) to search for starting points. HGRP guarantees that a starting point will be found if there is no error in the data, but the chain usually needs to be run for a long time if the pedigree is extremely large and complex. By introducing a forcing step, the current algorithm substantially reduces the state space, and hence effectively speeds up the process of finding a starting point. Our algorithm also has a built-in preprocessing procedure for Mendelian error detection. The algorithm has been applied to both simulated and real data on two large and complex Hutterite pedigrees under many settings, and good results are obtained. The algorithm has been implemented in a user-friendly package called START. Genet Epidemiol 25:14-24, 2003. © 2003 Wiley-Liss, Inc. [source]


    Topology and Dependency Tests in Spatial and Network Autoregressive Models

    GEOGRAPHICAL ANALYSIS, Issue 2 2009
    Steven Farber
    Social network analysis has been identified as a promising direction for further applications of spatial statistical and econometric models. The type of network analysis envisioned is formally identical to the analysis of geographical systems, in that both involve the measurement of dependence between observations connected by edges that constitute a system. An important item, which has not been investigated in this context, is the potential relationship between the topology properties of networks (or network descriptions of geographical systems) and the properties of spatial models and tests. The objective of this article is to investigate, within a Monte Carlo simulation setting, the ability of spatial dependency tests to identify a spatial/network autoregressive model when two network topology measures, namely degree distribution and clustering, are controlled. Drawing on a large data set of synthetically controlled social networks, the impact of network topology on dependency tests is investigated under a hierarchy of topology factors, sample size, and autocorrelation strength. In addition, topology factors are related to known properties of empirical systems. [source]


    Bayesian Estimation of Limited Dependent Variable Spatial Autoregressive Models

    GEOGRAPHICAL ANALYSIS, Issue 1 2000
    James P. LeSage
    A Gibbs sampling (Markov chain Monte Carlo) method for estimating spatial autoregressive limited dependent variable models is presented. The method can accommodate data sets containing spatial outliers and general forms of non-constant variance. It is argued that there are several advantages to the method proposed here relative to that proposed and illustrated in McMillen (1992) for spatial probit models. [source]


    A hybrid fast algorithm for first arrivals tomography

    GEOPHYSICAL PROSPECTING, Issue 5 2009
    Manuela Mendes
    ABSTRACT A hybrid algorithm, combining Monte Carlo optimization with simultaneous iterative reconstruction technique (SIRT) tomography, is used to invert first-arrival traveltimes from seismic data for building a velocity model. Stochastic algorithms may localize a point around the global minimum of the misfit function but are not suitable for identifying the precise solution. On the other hand, a tomographic model reconstruction, based on a local linearization, will only be successful if an initial model already close to the best solution is available. To overcome these problems, in the method proposed here, a first model obtained using a classical Monte Carlo-based optimization is used as a good initial guess for starting the local search with the SIRT tomographic reconstruction. In the forward problem, the first-break times are calculated by solving the eikonal equation through a velocity model with a fast finite-difference method instead of the traditional slow ray-tracing technique. In addition, for the SIRT tomography the seismic energy from sources to receivers is propagated by applying a fast Fresnel volume approach which, when combined with turning rays, can handle models with both positive and negative velocity gradients. The performance of this two-step optimization scheme has been tested on synthetic and field data for building a geologically plausible velocity model. This is an efficient and fast search mechanism, which permits insertion of geophysical, geological and geodynamic a priori constraints into the grid model, and ray tracing is completely avoided. Extension of the technique to 3D data and also to the solution of 'static correction' problems is easily feasible. [source]


    Monte Carlo Study of Quantitative Electron Probe Microanalysis of Monazite with a Coating Film: Comparison of 25 nm Carbon and 10 nm Gold at E0 = 15 and 25 keV

    GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 2 2007
    Takenori Kato
    Monte Carlo simulation; electron probe microanalysis (EPMA); quantitative analysis; coating film; monazite. Carbon (25-30 nm in thickness) is the most common coating material used in the electron probe microanalysis (EPMA) of geological samples. A gold coating is also used in special cases to reduce the surface damage by electron bombardment. Monte Carlo simulations have been performed for monazite with a 25 nm carbon and a 10 nm gold coating to understand the effect of a coating film in quantitative EPMA at E0 = 15 keV and 25 keV. Simulations showed that carbon-coated monazite gave the same depth distribution of the generated X-rays in the monazite as uncoated monazite, whilst gold-coated monazite gave a distorted depth distribution. A 10 nm gold coating was 1.06 (15 keV) and 1.05 (25 keV) times higher in k-ratio between monazite and pure thorium than a 25 nm carbon coating at an X-ray take-off angle of 40 degrees. Thus, a 10 nm gold coating is a possible factor contributing to inaccuracy in quantitative EPMA of monazite, while a 25 nm carbon coating does not have a significant effect. [source]


    A Computational Study of the Sub-monolayer Growth of Pentacene

    ADVANCED FUNCTIONAL MATERIALS, Issue 13 2006
    D. Choudhary
    Abstract A computational study of organic thin-film growth using a combination of ab initio-based energy calculations and kinetic Monte Carlo (KMC) simulations is provided. A lattice-based KMC model is used in which binding energies determine the relative rates of diffusion of the molecules. This KMC approach is used to present "landscapes" or "maps" that illustrate the possible structural outcomes of growing a thin film of small organic molecules, represented as a two-site dimer, on a substrate in which the strength of organic-substrate interactions is allowed to vary. KMC provides a mesoscopic-scale view of sub-monolayer deposition of organic thin films on model substrates, mapped out as a function of the flux of depositing molecules and the temperature of the substrate. The morphology of the crystalline thin films is shown to be a strong function of the molecule-molecule and molecule-substrate interactions. A rich variety of maps is shown to occur in which the small organic molecules either stand up or lie down in a variety of different patterns depending on the nature of the binding to the surface. In this way, it is possible to suggest how to tailor the substrate or the small organic molecule in order to create a desired growth habit. In order to demonstrate how this set of allowable maps is reduced in the case where the set of energy barriers between substrate and organic molecule are reliably known, we have used Gaussian 98 calculations to establish binding energies for the weak van der Waals interactions between a) pairs of pentacene molecules as a function of orientation and b) pentacene and two substrates, silicon surfaces passivated with cyclopentene molecules and a crystalline model of silicon dioxide. The critical nucleation size and the mode of diffusion of this idealized two-site dimer model for pentacene molecules are found to be in good agreement with experimental data. [source]


    A discrete random effects probit model with application to the demand for preventive care

    HEALTH ECONOMICS, Issue 5 2001
    Partha Deb
    Abstract I have developed a random effects probit model in which the distribution of the random intercept is approximated by a discrete density. Monte Carlo results show that only three to four points of support are required for the discrete density to closely mimic normal and chi-squared densities and provide unbiased estimates of the structural parameters and the variance of the random intercept. The empirical application shows that both observed family characteristics and unobserved family-level heterogeneity are important determinants of the demand for preventive care. Copyright © 2001 John Wiley & Sons, Ltd. [source]


    IMPROVING FORECAST ACCURACY BY COMBINING RECURSIVE AND ROLLING FORECASTS

    INTERNATIONAL ECONOMIC REVIEW, Issue 2 2009
    Todd E. Clark
    This article presents analytical, Monte Carlo, and empirical evidence on combining recursive and rolling forecasts when linear predictive models are subject to structural change. Using a characterization of the bias-variance trade-off faced when choosing between either the recursive and rolling schemes or a scalar convex combination of the two, we derive optimal observation windows and combining weights designed to minimize mean square forecast error. Monte Carlo experiments and several empirical examples indicate that combination can often provide improvements in forecast accuracy relative to forecasts made using the recursive scheme or the rolling scheme with a fixed window width. [source]
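
    A minimal sketch of the recursive/rolling combination: one-step-ahead AR(1) forecasts from an expanding (recursive) sample and from a fixed-length (rolling) window are mixed with a scalar weight, and the out-of-sample mean square forecast error is tabulated over the weight. The break process, window length and weight grid are assumptions chosen only to make the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(8)
T, window = 400, 60
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if t < 250 else 0.3                        # structural break in the AR coefficient (assumed)
    y[t] = phi * y[t - 1] + rng.normal()

def ar1_forecast(sample, next_lag):
    """Least-squares AR(1) slope on `sample`, applied to the latest observation."""
    phi_hat = np.dot(sample[:-1], sample[1:]) / np.dot(sample[:-1], sample[:-1])
    return phi_hat * next_lag

rec, roll, actual = [], [], []
for t in range(window, T - 1):
    rec.append(ar1_forecast(y[: t + 1], y[t]))               # recursive: all data observed so far
    roll.append(ar1_forecast(y[t - window : t + 1], y[t]))   # rolling: last `window` observations
    actual.append(y[t + 1])
rec, roll, actual = map(np.array, (rec, roll, actual))

for w in (0.0, 0.25, 0.5, 0.75, 1.0):                        # weight on the recursive forecast
    combo = w * rec + (1 - w) * roll
    print(f"weight {w:.2f}  MSFE {np.mean((actual - combo) ** 2):.3f}")
```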


    View factor calculation using the Monte Carlo method and numerical sensitivity

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 3 2006
    M. R. Vuji
    Abstract The geometrical view (configuration) factor plays a crucial role in radiative heat transfer simulations, and several methods, such as integration, the Monte Carlo method and the hemi-cube method, have been introduced to calculate view factors in recent years. In this paper the Monte Carlo method combined with the finite element (FE) technique is investigated. Results describing the relationships between different discretization schemes, number of rays used for the view factor calculation, CPU time and accuracy are presented. The interesting case where reduced accuracy is obtained with increased refinement of the FE mesh is discussed. Copyright © 2005 John Wiley & Sons, Ltd. [source]
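
    The Monte Carlo estimator studied in this context can be written directly from the double-area form of the view factor, F_12 = (1/A_1) ∫∫ cosθ_1 cosθ_2 / (π r²) dA_1 dA_2, by averaging the integrand over random point pairs on the two faces. The sketch below does this for two directly opposed parallel unit squares a distance h apart (an assumed geometry); the paper couples the ray-based variant to finite element discretizations and studies ray counts and mesh effects.

```python
import numpy as np

rng = np.random.default_rng(9)
h, n = 1.0, 200_000
p1 = np.column_stack([rng.random(n), rng.random(n), np.zeros(n)])    # random points on face 1 (z = 0)
p2 = np.column_stack([rng.random(n), rng.random(n), np.full(n, h)])  # random points on face 2 (z = h)

d = p2 - p1
r2 = np.einsum("ij,ij->i", d, d)       # squared distance between each point pair
cos1 = d[:, 2] / np.sqrt(r2)           # angle to the face-1 normal (0, 0, 1)
cos2 = cos1                            # parallel faces share the same angle
A2 = 1.0                               # area of the receiving face

F12 = A2 * np.mean(cos1 * cos2 / (np.pi * r2))
print(f"Monte Carlo view factor F12 ≈ {F12:.4f} from {n} point pairs")
```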


    Variance-reduced Monte Carlo solutions of the Boltzmann equation for low-speed gas flows: A discontinuous Galerkin formulation

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2008
    Lowell L. Baker
    Abstract We present and discuss an efficient, high-order numerical solution method for solving the Boltzmann equation for low-speed dilute gas flows. The method's major ingredient is a new Monte Carlo technique for evaluating the weak form of the collision integral necessary for the discontinuous Galerkin formulation used here. The Monte Carlo technique extends the variance reduction ideas first presented in Baker and Hadjiconstantinou (Phys. Fluids 2005; 17, art. no. 051703) and makes evaluation of the weak form of the collision integral not only tractable but also very efficient. The variance reduction, achieved by evaluating only the deviation from equilibrium, results in very low statistical uncertainty and the ability to capture arbitrarily small deviations from equilibrium (e.g. low-flow speed) at a computational cost that is independent of the magnitude of this deviation. As a result, for low-signal flows the proposed method holds a significant computational advantage compared with traditional particle methods such as direct simulation Monte Carlo (DSMC). Copyright © 2008 John Wiley & Sons, Ltd. [source]


    Empirical slip and viscosity model performance for microscale gas flow

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2005
    Matthew J. McNenly
    Abstract For the simple geometries of Couette and Poiseuille flows, the velocity profile maintains a similar shape from continuum to free molecular flow. Therefore, modifications to the fluid viscosity and slip boundary conditions can improve the continuum-based Navier-Stokes solution in the non-continuum non-equilibrium regime. In this investigation, the optimal modifications are found by a linear least-squares fit of the Navier-Stokes solution to the non-equilibrium solution obtained using the direct simulation Monte Carlo (DSMC) method. Models are then constructed for the Knudsen number dependence of the viscosity correction and the slip model from a database of DSMC solutions for Couette and Poiseuille flows of argon and nitrogen gas, with Knudsen numbers ranging from 0.01 to 10. Finally, the accuracy of the models is measured for non-equilibrium cases both in and outside the DSMC database. Flows outside the database include: combined Couette and Poiseuille flow, partial wall accommodation, helium gas, and non-zero convective acceleration. The models reproduce the velocity profiles in the DSMC database within an L2 error norm of 3% for Couette flows and 7% for Poiseuille flows. However, the errors in the model predictions outside the database are up to five times larger. Copyright © 2005 John Wiley & Sons, Ltd. [source]
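
    One way to picture the fitting step described above: for planar Couette flow the Navier-Stokes solution with an equal Navier slip length ζ at both walls is u(y) = U (y + ζ) / (H + 2ζ), and ζ can be chosen by least squares so that this profile best matches a kinetic velocity profile. The sketch below fits ζ to synthetic noisy "DSMC-like" data generated from an assumed slip length; it is a one-parameter simplification, not the paper's Knudsen-number-dependent viscosity and slip models.

```python
import numpy as np

rng = np.random.default_rng(10)
H, U = 1.0, 1.0
y = np.linspace(0.05, 0.95, 19)                      # cell centres of the sampled velocity profile

def slip_profile(y, zeta):
    """Couette flow with equal Navier slip length zeta at both walls."""
    return U * (y + zeta) / (H + 2.0 * zeta)

# Synthetic stand-in for a DSMC velocity profile (assumed slip length 0.12 plus noise).
u_dsmc = slip_profile(y, zeta=0.12) + rng.normal(scale=0.01, size=y.size)

zetas = np.linspace(0.0, 0.5, 501)                   # 1D grid search for the least-squares slip length
sse = [np.sum((u_dsmc - slip_profile(y, z)) ** 2) for z in zetas]
zeta_fit = zetas[int(np.argmin(sse))]
print(f"fitted slip length zeta = {zeta_fit:.3f} (value used to generate the data: 0.120)")
```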


    The direct simulation Monte Carlo method using unstructured adaptive mesh and its application

    INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 4 2002
    J.-S. Wu
    Abstract The implementation of an adaptive mesh-embedding (h-refinement) scheme using an unstructured grid in the two-dimensional direct simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new mesh cells where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells between refined and non-refined cells. Not only does this remedy add a negligible amount of work, but it also removes all the difficulties present in the original scheme. We have tested the proposed scheme for argon gas in a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the non-adaptive mesh. Finally, we have used a triangular adaptive mesh to compute a near-continuum gas flow, a hypersonic flow over a cylinder. The results show fairly good agreement with previous studies. In summary, the proposed simple mesh adaptation is very useful in computing rarefied gas flows, which involve both complicated geometry and highly non-uniform density variations throughout the flow field. Copyright © 2002 John Wiley & Sons, Ltd. [source]