Monte Carlo Procedure


Selected Abstracts


Optimization of Monte Carlo Procedures for Value at Risk Estimates

ECONOMIC NOTES, Issue 1 2002
Sabrina Antonelli
This paper proposes a methodology that improves the computational efficiency of the Monte Carlo simulation approach to value at risk (VaR) estimation. Principal components analysis is used to reduce the number of relevant sources of risk driving the portfolio dynamics. Moreover, large-deviations techniques are used to estimate the minimum number of price scenarios that must be simulated to attain a given accuracy. Numerical examples are provided and show the good performance of the proposed methodology. (J.E.L.: C15, G1). [source]
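As a sketch of the two ingredients combined here (not taken from the paper; the factor structure, the equally weighted portfolio and the 95% variance cut-off are all illustrative assumptions), the following Python couples a principal-components reduction of the risk factors with a plain Monte Carlo VaR estimate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical market: 50 correlated risk factors whose variance is
# dominated by a 3-dimensional latent structure (illustrative numbers).
n_factors, n_obs = 50, 1000
A = rng.normal(size=(n_factors, 3))
returns_hist = rng.normal(size=(n_obs, 3)) @ A.T \
    + 0.01 * rng.normal(size=(n_obs, n_factors))

# Principal components analysis of the factor covariance matrix.
cov = np.cov(returns_hist, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep just enough components to explain 95% of the variance.
frac = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(frac, 0.95)) + 1

# Simulate scenarios in the reduced k-dimensional space, map back to
# factor space, and read VaR off the empirical loss distribution.
weights = np.ones(n_factors) / n_factors      # equally weighted portfolio
n_scenarios = 20_000
z = rng.normal(size=(n_scenarios, k)) * np.sqrt(eigvals[:k])
scenarios = z @ eigvecs[:, :k].T
pnl = scenarios @ weights
var_99 = -np.quantile(pnl, 0.01)              # 99% value at risk
print(k, var_99)
```

The dimension reduction is what buys the speed-up: scenarios are drawn in k dimensions rather than 50, while the paper's large-deviations argument would additionally bound the `n_scenarios` needed for a given accuracy.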


How to model shallow water-table depth variations: the case of the Kervidy-Naizin catchment, France

HYDROLOGICAL PROCESSES, Issue 4 2005
Jérôme Molénat
Abstract The aim of this work is threefold: (1) to identify the main characteristics of water-table variations from observations in the Kervidy-Naizin catchment, a small catchment located in western France; (2) to compare these characteristics with the assumptions of the Topmodel concepts; and (3) to analyse how relaxing these assumptions could improve the simulation of distributed water-table depth. A network of piezometers was installed in the Kervidy-Naizin catchment and the water-table depth was recorded every 15 min in each piezometer from 1997 to 2000. From these observations, the Kervidy-Naizin groundwater appears characteristic of the shallow groundwaters of catchments underlain by crystalline bedrock, in view of the strong relation between water distribution and topography in the bottom lands of the hillslopes. However, from midslope to summit, the water table can reach depths of several metres; it does not parallel the topographic surface and it remains very responsive to rainfall. In particular, hydraulic gradients vary with time and are not equivalent to the soil surface slope. These characteristics call into question some of the assumptions used to model shallow lateral subsurface flow under saturated conditions. We investigate the performance of three models (Topmodel, a kinematic model and a diffusive model) in simulating the hourly distributed water-table depths along one of the hillslope transects, as well as the hourly stream discharge. For each model, two sets of parameters are identified following a Monte Carlo procedure applied to a simulation period of 2649 h. The performance of each model with each of the two parameter sets is evaluated over a test period of 2158 h. All three models, and hence their underlying assumptions, appear to reproduce adequately the stream discharge variations and the water-table depths in the bottom lands at the foot of the hillslope.
To simulate the groundwater depth distribution over the whole hillslope, the steady-state assumption (Topmodel) is quite constraining and leads to unacceptable water-table depths in midslope and summit areas. Once this assumption is relaxed (kinematic model), the water-table simulation is improved. A subsequent relaxation of the hydraulic gradient (diffusive model) further improves water-table simulations in the summit area, while still yielding realistic water-table depths in the bottom land. Copyright © 2004 John Wiley & Sons, Ltd. [source]
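The Monte Carlo parameter-identification step can be sketched generically. The toy linear-reservoir discharge model, the Nash-Sutcliffe score and all parameter ranges below are assumptions for illustration, not the models or criteria used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(k, s0, rain):
    # Toy linear reservoir: storage s fills with rain, drains at rate k*s.
    q, s = [], s0
    for p in rain:
        s += p
        out = k * s
        s -= out
        q.append(out)
    return np.array(q)

# Synthetic "observations" from known parameters plus measurement noise.
rain = rng.exponential(1.0, size=200)
q_obs = simulate(0.3, 5.0, rain) + rng.normal(0.0, 0.05, size=200)

def nse(sim, obs):
    # Nash-Sutcliffe efficiency: 1 is a perfect fit.
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Monte Carlo identification: draw random parameter sets, keep the best.
candidates = zip(rng.uniform(0.05, 0.9, 5000),   # recession coefficient k
                 rng.uniform(0.0, 10.0, 5000))   # initial storage s0
best = max(candidates, key=lambda p: nse(simulate(*p, rain), q_obs))
print(best, nse(simulate(*best, rain), q_obs))
```

In the paper this loop would be run once per model (Topmodel, kinematic, diffusive) against the 2649 h calibration period, with the retained parameter sets then scored on the independent test period.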


Monte Carlo modelling of abrupt InP/InGaAs HBTs

INTERNATIONAL JOURNAL OF NUMERICAL MODELLING: ELECTRONIC NETWORKS, DEVICES AND FIELDS, Issue 4 2003
Pau Garcias-Salvá
Abstract In this paper a Monte Carlo simulator focused on the modelling of abrupt heterojunction bipolar transistors (HBTs) is described. Simulation results for an abrupt InP/InGaAs HBT are analysed in order to describe the behaviour of this kind of device and are compared with experimental data. A distinctive feature of InP/InGaAs HBTs is the spike-like discontinuity in the Ec level at the emitter–base heterojunction interface. The transport of electrons through this potential barrier can be described by the Schrödinger equation; therefore, its numerical solution has been consistently included in the iterative Monte Carlo procedure of our simulator. The simulation results for the transistor include the density of electrons along the device and their velocity, kinetic energy and occupation of the upper conduction sub-bands. It is shown that the electrons in the base region and in the base–collector depletion region are far from thermal equilibrium, and therefore the drift–diffusion transport model is no longer applicable. Finally, the experimental and simulated Gummel plots JC(VBE) and JB(VBE) are compared in the bias range of common operation of these transistors, showing good agreement. Copyright © 2003 John Wiley & Sons, Ltd. [source]
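The height and width of the conduction-band spike control how readily electrons tunnel from emitter to base. As a minimal stand-in (the analytic transmission through a rectangular barrier for E < V0, with an assumed effective mass and barrier geometry; the paper instead solves the Schrödinger equation numerically inside the Monte Carlo loop):

```python
import numpy as np

hbar = 1.054571817e-34          # J s
m_e = 9.1093837015e-31          # kg
eV = 1.602176634e-19            # J
m = 0.041 * m_e                 # assumed InGaAs-like effective mass

def transmission(E, V0=0.25 * eV, a=5e-9):
    """Tunnelling probability through a rectangular barrier (E < V0 only)."""
    kappa = np.sqrt(2.0 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0 ** 2 * np.sinh(kappa * a) ** 2)
                  / (4.0 * E * (V0 - E)))

# Electrons well below the spike tunnel with low probability; those
# nearer the barrier top get through far more easily.
print(transmission(0.10 * eV), transmission(0.20 * eV))
```

In a Monte Carlo device simulator, a probability of this kind is evaluated for each carrier hitting the heterojunction to decide whether it crosses or is reflected.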


Theoretical electronic spectra of 2-aminopurine in vapor and in water

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 13 2006
Antonio Carlos Borin
Abstract The accurate quantum chemical CASSCF and CASPT2 methods, combined with a Monte Carlo procedure to mimic solvation effects, have been used to calculate the spectroscopic properties of two tautomers of 2-aminopurine (2AP). Absorption and emission spectra have been simulated both in vacuum and in an aqueous environment. State and transition energies and properties have been obtained with high accuracy, leading to the assignment of the most important spectroscopic features. The lowest-lying ¹(π,π*) (¹La) state has been determined to be responsible for the first band in the absorption spectrum and also for the strong fluorescence observed for the system in water. The combined approach used in the present work gives quantitatively accurate results. © 2006 Wiley Periodicals, Inc. Int J Quantum Chem, 2006 [source]


Completion of crystal structures from powder data: the use of the coordination polyhedra

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 6 2000
Angela Altomare
Direct methods applied to powder diffraction data often provide well-located heavy atoms and unreliable light-atom positions. Completion of the crystal structure is then not always straightforward and may require a considerable amount of user intervention. The heavy-atom connectivity provided by the trial solution may be used to guess the nature of the coordination polyhedra. A Monte Carlo procedure is described which, in the absence of a well defined structural model, is able to locate the light atoms correctly under the restraints of the experimental heavy-atom connectivity model. The correctness of the final model is assessed by criteria based on the agreement between the whole experimental diffraction pattern and the calculated one. The procedure requires little CPU time and has been implemented as a routine of EXPO [Altomare et al. (1999). J. Appl. Cryst. 32, 339–340]. The method has proved sufficiently robust against distortion of the coordination polyhedra and has been successfully applied to some test structures. [source]
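The core idea, moving a light atom at random while penalizing violations of the heavy-atom connectivity, can be sketched with a Metropolis walk. Everything below (one light atom, two fixed heavy atoms, the target distance and the fictitious temperature) is an illustrative assumption, not the EXPO algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two fixed "heavy" atoms and an assumed coordination distance.
heavy = np.array([[0.0, 0.0, 0.0], [2.8, 0.0, 0.0]])
target = 1.8

def cost(p):
    # Penalty for violating the distance restraints to both heavy atoms.
    d = np.linalg.norm(heavy - p, axis=1)
    return float(np.sum((d - target) ** 2))

# Metropolis random walk of the light-atom position at low temperature.
p = rng.normal(0.0, 3.0, 3)
c = cost(p)
best_p, best_c = p, c
for _ in range(20000):
    trial = p + rng.normal(0.0, 0.1, 3)
    ct = cost(trial)
    if ct < c or rng.uniform() < np.exp((c - ct) / 0.01):
        p, c = trial, ct
        if c < best_c:
            best_p, best_c = p, c
print(best_c)
```

In the actual procedure the score would be the agreement with the whole powder pattern rather than a pure geometric penalty, which is what makes the final model assessment crystallographically meaningful.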


Inverse Monte Carlo procedure for conformation determination of macromolecules

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 7 2003
Mark Bathe
Abstract A novel numerical method for determining the conformational structure of macromolecules is applied to idealized biomacromolecules in solution. The method computes effective inter-residue interaction potentials solely from the corresponding radial distribution functions, such as would be obtained from experimental data. The interaction potentials generate conformational ensembles that reproduce thermodynamic properties of the macromolecule (mean energy and heat capacity) in addition to the target radial distribution functions. As an evaluation of its utility in structure determination, we apply the method to a homopolymer and a heteropolymer model of a three-helix bundle protein [Zhou, Y.; Karplus, M. Proc Natl Acad Sci USA 1997, 94, 14429; Zhou, Y. et al. J Chem Phys 1997, 107, 10691] at various thermodynamic state points, including the ordered globule, disordered globule, and random coil states. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 876–890, 2003 [source]
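One common realization of this inverse idea is iterative Boltzmann inversion, in which the pair potential is refined by V ← V + kT·ln(g/g_target) until the simulated distribution matches the target. The sketch below uses the dilute-limit shortcut g(r) = exp(−V/kT) in place of a full Monte Carlo simulation, so it converges in a single step; in the real method each iteration is a simulation of the macromolecule, and the paper's inverse Monte Carlo scheme differs in detail:

```python
import numpy as np

kT = 1.0
r = np.linspace(0.8, 3.0, 200)

# "True" pair potential that the target g(r) encodes (illustrative):
V_true = 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)   # Lennard-Jones-like

def g_of(V):
    # Dilute-limit stand-in for a simulation: g(r) = exp(-V/kT).
    # In the real method this step is a Monte Carlo run of the chain.
    return np.exp(-V / kT)

g_target = g_of(V_true)

# Iterative Boltzmann inversion: refine V until g matches the target.
V = np.zeros_like(r)
for _ in range(5):
    g = g_of(V)
    V = V + kT * np.log(g / g_target)

print(np.max(np.abs(g_of(V) - g_target)))
```

The point of the abstract is that a potential recovered this way reproduces not only g(r) but also ensemble thermodynamics (mean energy, heat capacity) of the macromolecule.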


Ordering in Stretched Bernoullian Copolymers

MACROMOLECULAR THEORY AND SIMULATIONS, Issue 1 2003
Arkady D. Litmanovich
Abstract A new approach is suggested for estimating the theoretical maximum capability to order of stretched Bernoullian AB copolymers, provided that interchain AB contacts are unfavorable. A simple Monte Carlo procedure simulating the ordering of such copolymers via rotation of ring-shaped chains reveals a capability to order even for quite long copolymer chains. An analytical probabilistic treatment is elaborated which enables the ordering via rotation to be interpreted in terms of a sliding of periodic Bernoullian chains. Using both the probabilistic analysis and Monte Carlo simulations, it is shown that the estimates of the capability to order given by the rotation procedure are also good for a sliding of true Bernoullian copolymers. Therefore, the simple Monte Carlo procedure seems suitable for estimating ordering in other classes of copolymers for which an analytical approach is more complicated. Such estimates might be useful in considering various properties of irregular copolymers connected with their tendency to order. Ordering by rotation of rings: as an example, the ordering for M = 10,000 chains of length N = 20 and composition p = 0.50 (p is the mole fraction of A units) is shown. See text. [source]


Deterministic and statistical methods for reconstructing multidimensional NMR spectra

MAGNETIC RESONANCE IN CHEMISTRY, Issue 3 2006
Ji Won Yoon
Abstract Reconstruction of an image from a set of projections is a well-established science, successfully exploited in X-ray tomography and magnetic resonance imaging. This principle has been adapted to generate multidimensional NMR spectra, with the key difference that, instead of continuous density functions, high-resolution NMR spectra comprise discrete features, relatively sparsely distributed in space. For this reason, a reliable reconstruction can be made from a small number of projections. This speeds the measurements by orders of magnitude compared with the traditional methodology, which explores all of evolution space on a Cartesian grid, one step at a time. Speed is of crucial importance for structural investigations of biomolecules such as proteins and for the investigation of time-dependent phenomena. Whereas the recording of a suitable set of projections is a straightforward process, the reconstruction stage can be more problematic. Several practical reconstruction schemes are explored. The deterministic methods (additive back-projection and the lowest-value algorithm) derive the multidimensional spectrum directly from the experimental projections. The statistical search methods include iterative least-squares fitting, maximum entropy, and model-fitting schemes based on Bayesian analysis, particularly the reversible-jump Markov chain Monte Carlo procedure. These competing reconstruction schemes are tested on a set of six projections derived from the three-dimensional 700-MHz HNCO spectrum of a 187-residue protein (HasA) and compared in terms of reliability, absence of artifacts, sensitivity to noise, and speed of computation. Copyright © 2006 John Wiley & Sons, Ltd. [source]
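The Bayesian model-fitting step can be illustrated in one dimension with a fixed-dimension Metropolis sampler; the reversible-jump version used in the paper additionally samples the *number* of peaks. The single Lorentzian peak, noise level and proposal width below are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "spectrum": one Lorentzian peak plus Gaussian noise.
x = np.linspace(0.0, 10.0, 400)
true_pos = 4.2
noise_sd = 0.05
data = 1.0 / (1.0 + (x - true_pos) ** 2) + rng.normal(0.0, noise_sd, x.size)

def log_post(pos):
    # Gaussian likelihood with a flat prior on the peak position.
    model = 1.0 / (1.0 + (x - pos) ** 2)
    return -np.sum((data - model) ** 2) / (2.0 * noise_sd ** 2)

# Metropolis random walk over the peak position.
pos, lp = 5.0, log_post(5.0)
samples = []
for _ in range(5000):
    prop = pos + rng.normal(0.0, 0.1)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
        pos, lp = prop, lp_prop
    samples.append(pos)

print(np.mean(samples[1000:]))
```

The posterior mean after burn-in recovers the peak position; with sparse, discrete features this is exactly the regime in which few projections suffice.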


Modelling financial time series with threshold nonlinearity in returns and trading volume

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2007
Mike K. P. So
Abstract This paper investigates the effect of past returns and trading volumes on the temporal behaviour of international market returns. We propose a class of nonlinear threshold time-series models with generalized autoregressive conditional heteroscedastic disturbances. Taking a Bayesian approach, a Markov chain Monte Carlo procedure is implemented to obtain estimates of the unknown parameters. The proposed family of models incorporates changes in the log of volumes, in the sense of regime changes, as well as asymmetric effects on the volatility functions. The results show that when differences of log volumes are involved in the system of log-return and volatility models, an optimal selection can be achieved. In all five markets considered, both the mean and variance equations involve volumes in the best models selected. Our best models produce higher posterior-odds ratios than those of Gerlach et al.'s models (Phys. A Statist. Mech. Appl. 2006; 360: 422–444), indicating that our return–volume partition of regimes can offer an extra gain in explaining the return-volatility term structure. Copyright © 2007 John Wiley & Sons, Ltd. [source]


Anomalous signal indicators in protein crystallography

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 11 2005
P. H. Zwart
A Monte Carlo procedure is described that generates random structure factors with simulated errors corresponding to an X-ray data set of a protein of a specific size and given heavy-atom content. The simulated data set can be used to estimate Bijvoet ratios and figures of merit as obtained from SAD phasing routines, and thus to gauge the feasibility of solving a structure via the SAD method. In addition to estimating phasing results, the simulation allows estimation of the correlation coefficient between |ΔF|, the absolute Bijvoet amplitude difference, and FA, the structure-factor amplitude of the heavy-atom model. As this quantity is used in various substructure-solution routines, the estimate provides a rough indication of the ease of substructure solution. Furthermore, the Monte Carlo procedure provides an easy way of estimating the number of significant Bijvoet intensity differences, denoted the measurability, which is proposed as an intuitive measure of the quality of anomalous data. [source]
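A back-of-the-envelope version of the measurability idea, counting the fraction of simulated Bijvoet differences that exceed three times their standard error, looks as follows; the error model and all numbers are illustrative assumptions, not Zwart's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate Bijvoet pairs: a true anomalous difference per reflection,
# measured twice (F+ and F-) with independent errors.
n = 100_000
true_diff = rng.normal(0.0, 30.0, n)     # true anomalous differences
sigma = 25.0                             # per-measurement error (assumed)
obs_plus = true_diff / 2 + rng.normal(0.0, sigma, n)
obs_minus = -true_diff / 2 + rng.normal(0.0, sigma, n)

delta = obs_plus - obs_minus             # observed Bijvoet difference
sig_delta = np.sqrt(2.0) * sigma         # its standard error

# "Measurability": fraction of differences significant at 3 sigma.
measurability = np.mean(np.abs(delta) > 3.0 * sig_delta)
print(measurability)
```

With these numbers only a small percentage of differences are significant, illustrating why a quick simulation of this kind is useful before committing to a SAD experiment.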


Good modeling practice for PAT applications: Propagation of input uncertainty and sensitivity analysis

BIOTECHNOLOGY PROGRESS, Issue 4 2009
Gürkan Sin
Abstract Uncertainty and sensitivity analysis are evaluated for their usefulness as part of model-building within Process Analytical Technology (PAT) applications. A mechanistic model describing a batch cultivation of Streptomyces coelicolor for antibiotic production was used as a case study. The input uncertainty resulting from the assumptions of the model was propagated using a Monte Carlo procedure to estimate the output uncertainty. The results showed that significant uncertainty exists in the model outputs. Moreover, the uncertainty in the biomass, glucose, ammonium and base-consumption predictions was found to be low compared with the large uncertainty observed in the antibiotic and off-gas CO2 predictions. The output uncertainty was lower during the exponential growth phase and higher in the stationary and death phases, meaning that the model describes some periods better than others. To understand which input parameters are responsible for the output uncertainty, three sensitivity methods (standardized regression coefficients, Morris screening and differential analysis) were evaluated and compared. The results from these methods were mostly in agreement with each other and revealed that only a few parameters (about 10 of a total of 56) were mainly responsible for the output uncertainty. Among these significant parameters are parameters related to fermentation characteristics such as biomass metabolism, chemical equilibria and mass transfer. Overall, uncertainty and sensitivity analysis are found promising for helping to build reliable mechanistic models and to interpret model outputs properly. These tools form part of good modeling practice, which can contribute to successful PAT applications for increased process understanding, operation and control. © 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009 [source]
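The two steps, Monte Carlo propagation of input uncertainty followed by a standardized-regression-coefficient (SRC) ranking, can be sketched on a toy Monod-like expression; the three parameters, their ranges and the model itself are illustrative stand-ins for the 56-parameter fermentation model:

```python
import numpy as np

rng = np.random.default_rng(4)

def model(mu_max, Ks, Yxs):
    # Toy Monod-like output at a fixed substrate level of 10 (illustrative).
    return mu_max * 10.0 / (Ks + 10.0) * Yxs

# Step 1: Monte Carlo propagation -- sample the uncertain inputs from
# their assumed ranges and collect the output distribution.
n = 2000
mu_max = rng.uniform(0.1, 0.3, n)
Ks = rng.uniform(0.5, 2.0, n)
Yxs = rng.uniform(0.4, 0.6, n)
y = model(mu_max, Ks, Yxs)
p05, p95 = np.percentile(y, [5, 95])   # output uncertainty band

# Step 2: standardized regression coefficients -- regress the
# standardized output on the standardized inputs; |SRC| ranks importance.
X = np.column_stack([mu_max, Ks, Yxs])
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
ys = (y - y.mean()) / y.std()
src, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
print(p05, p95, src)
```

Here `mu_max` dominates the output variance and `Ks` barely matters, mirroring the paper's finding that only a handful of the 56 parameters drive the output uncertainty.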