Monte Carlo Study
Selected Abstracts

Monte Carlo Study of Quantitative Electron Probe Microanalysis of Monazite with a Coating Film: Comparison of 25 nm Carbon and 10 nm Gold at E0 = 15 and 25 keV
GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 2 2007
Takenori Kato
Keywords: Monte Carlo simulation; electron probe microanalysis (EPMA); quantitative analysis; coating film; monazite
Carbon (25–30 nm in thickness) is the most common coating material used in the electron probe microanalysis (EPMA) of geological samples. A gold coating is also used in special cases to reduce the surface damage caused by electron bombardment. Monte Carlo simulations have been performed for monazite with a 25 nm carbon coating and a 10 nm gold coating to understand the effect of a coating film on quantitative EPMA at E0 = 15 keV and 25 keV. The simulations showed that carbon-coated monazite gave the same depth distribution of generated X-rays in the monazite as uncoated monazite, whereas gold-coated monazite gave a distorted depth distribution. At an X-ray take-off angle of 40 degrees, the k-ratio between monazite and pure thorium was 1.06 (15 keV) and 1.05 (25 keV) times higher with a 10 nm gold coating than with a 25 nm carbon coating. Thus, a 10 nm gold coating is a possible source of inaccuracy in quantitative EPMA of monazite, while a 25 nm carbon coating has no significant effect. [source]

Modeling the Structure of Amorphous MoS3: A Neutron Diffraction and Reverse Monte Carlo Study
CHEMINFORM, Issue 15 2004
Simon J. Hibble
No abstract is available for this article. [source]

Modeling the Practical Effects of Applicant Reactions: Subgroup Differences in Test-Taking Motivation, Test Performance, and Selection Rates
INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 4 2002
Robert E. Ployhart
Research suggests that Black–White differences in test-taking motivation may be related to subgroup test score differences, but this research has not shown the extent to which minimizing subgroup motivation differences would reduce subgroup differences in selection rates and adverse impact. This Monte Carlo study examined how enhancing Blacks' test-taking motivation for cognitive ability tests might reduce adverse impact across a range of (a) subgroup test differences, (b) selection ratios, (c) subgroup differences in test-taking motivation, and (d) relationships between motivation and test scores. The results suggest that although enhancing test-taking motivation will consistently reduce subgroup differences in test performance and adverse impact, the effect is often small and will not eliminate adverse impact for any condition examined. However, under some conditions the reduction may be important, and the discussion considers circumstances in which even these minimal reductions may be practically helpful. [source]
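The core quantity in this study is easy to reproduce. Below is a minimal sketch, in the spirit of (but not taken from) the study, of how a subgroup mean difference d translates into an adverse-impact ratio under pooled top-down selection; the pool composition, selection ratio, and the assumption that a motivation intervention shrinks d from 1.0 to 0.8 SD are illustrative choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def adverse_impact_ratio(d, selection_ratio, n=200_000, minority_prop=0.2):
    """Top-down selection from a pooled applicant population; returns the
    minority/majority selection-rate ratio (the four-fifths-rule statistic)."""
    n_min = int(n * minority_prop)
    majority = rng.normal(0.0, 1.0, n - n_min)   # majority scores ~ N(0, 1)
    minority = rng.normal(-d, 1.0, n_min)        # minority mean shifted down by d SDs
    cutoff = np.quantile(np.concatenate([majority, minority]), 1.0 - selection_ratio)
    return (minority >= cutoff).mean() / (majority >= cutoff).mean()

# Illustrative scenario: a motivation intervention shrinks the observed
# subgroup difference from 1.0 SD to 0.8 SD.
for d in (1.0, 0.8):
    print(f"d = {d:.1f}: adverse-impact ratio = {adverse_impact_ratio(d, 0.3):.2f}")
```

Looping such a simulation over grids of d, the selection ratio, and the motivation effect is essentially the design the abstract describes: the ratio improves as d shrinks but typically stays well below the 0.8 benchmark, matching the study's conclusion.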
Finite sample improvements in statistical inference with I(1) processes
JOURNAL OF APPLIED ECONOMETRICS, Issue 3 2001
D. Marinucci
Robinson and Marinucci (1998) investigated the asymptotic behaviour of a narrow-band semiparametric procedure termed Frequency Domain Least Squares (FDLS) in the broad context of fractional cointegration analysis. Here we restrict discussion to the standard case in which the data are I(1) and the cointegrating errors are I(0), proving that modifications of the Fully Modified Ordinary Least Squares (FM-OLS) procedure of Phillips and Hansen (1990) which use the FDLS idea have the same asymptotically desirable properties as FM-OLS and, on the basis of a Monte Carlo study, finding evidence that they have superior finite-sample properties. The new procedures are also shown to compare satisfactorily with parametric estimates. Copyright © 2001 John Wiley & Sons, Ltd. [source]

Comparison of NOHARM and DETECT in Item Cluster Recovery: Counting Dimensions and Allocating Items
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 2 2005
Holmes Finch
This study examines the performance of a new method for assessing and characterizing dimensionality in test data using the NOHARM model, comparing it with DETECT. Dimensionality assessment is carried out using two goodness-of-fit statistics that are compared to reference χ² distributions. A Monte Carlo study is used with item parameters based on a statewide basic skills assessment and the SAT. Other factors that are varied include the correlation among the latent traits, the number of items, the number of subjects, the skewness of the latent traits, and the presence or absence of guessing. The performance of the two procedures is judged by the accuracy in determining the number of underlying dimensions and the degree to which items are correctly clustered together. Results indicate that the new NOHARM-based method performs comparably to DETECT in terms of simultaneously finding the correct number of dimensions and clustering items correctly. NOHARM is generally better able to determine the number of underlying dimensions, but less able to group items together, than DETECT. When errors in item cluster assignment are made, DETECT is more likely to incorrectly separate items, while NOHARM more often incorrectly groups them together. [source]

The Impact of Omitted Responses on the Accuracy of Ability Estimation in Item Response Theory
JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 3 2001
R. J. De Ayala
Practitioners typically face situations in which examinees have not responded to all test items. This study investigated the effect on an examinee's ability estimate when an examinee is presented an item, has ample time to answer, but decides not to respond to it. Three approaches to ability estimation (biweight estimation, expected a posteriori, and maximum likelihood estimation) were examined. A Monte Carlo study was performed, and the effect of different levels of omission on the simulees' ability estimates was determined. Results showed that the worst estimation occurred when omits were treated as incorrect. In contrast, substituting 0.5 for omitted responses resulted in ability estimates that were almost as accurate as those obtained with complete data. Implications for practitioners are discussed. [source]
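A minimal sketch of the omitted-response treatments under an assumed 2PL model (the item parameters and response pattern here are hypothetical, and only the maximum likelihood estimator of the three studied is shown): scoring an omit as 0.5 simply lets it contribute 0.5·log P + 0.5·log(1 − P) to the log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 2PL item parameters: a = discrimination, b = difficulty.
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])
b = np.array([-1.0, -0.5, 0.0, 0.3, 0.8, 1.2])

def prob(theta):
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle(u):
    """ML ability estimate; u may hold fractional scores, so an omit coded
    0.5 contributes 0.5*log(P) + 0.5*log(1 - P) to the log-likelihood."""
    nll = lambda t: -np.sum(u * np.log(prob(t)) + (1 - u) * np.log(1 - prob(t)))
    return minimize_scalar(nll, bounds=(-4.0, 4.0), method="bounded").x

responses = np.array([1.0, 1.0, 0.0, np.nan, np.nan, 0.0])  # NaN marks an omit
omits = np.isnan(responses)
print("omits as incorrect:", mle(np.where(omits, 0.0, responses)))
print("omits as 0.5:      ", mle(np.where(omits, 0.5, responses)))
```

Treating omits as incorrect drags the estimate downward for every skipped item, while the 0.5 substitution leaves it near the complete-data value, which is the pattern the study reports.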
Grafting of maleic anhydride onto linear polyethylene: A Monte Carlo study
JOURNAL OF POLYMER SCIENCE (IN TWO SECTIONS), Issue 22 2004
Yutian Zhu
Monte Carlo simulation was used to study the grafting of maleic anhydride (MAH) onto linear polyethylene (PE-g-MAH) initiated by dicumyl peroxide (DCP). The simulation results revealed that most MAH monomers attached onto PE chains as branched grafts at higher MAH content. However, at extremely low MAH content, the fraction of bridged grafts was very close to that of branched grafts. This conclusion differs somewhat from the conventional viewpoint that the fraction of bridged grafts is always much lower than that of branched grafts under any condition. Moreover, the results indicated that the grafting degree increased almost linearly with the MAH and DCP concentrations. On the other hand, it was found that the amount of grafted MAH dropped sharply with increasing length of the grafted MAH, indicating that MAH monomers were mainly attached onto the PE chain as single MAH groups or very short oligomers. With respect to the crosslinking of PE, the results showed that the fraction of the PE-(MAH)n-PE crosslink structure increased continuously, and hence the fraction of PE-PE crosslinks decreased, with increasing MAH concentration. Finally, a quantitative relationship among the number-average molecular weight of the PE and the MAH and DCP contents was given. © 2004 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 42: 5714–5724, 2004 [source]

A SPATIAL CLIFF-ORD-TYPE MODEL WITH HETEROSKEDASTIC INNOVATIONS: SMALL AND LARGE SAMPLE RESULTS
JOURNAL OF REGIONAL SCIENCE, Issue 2 2010
Irani Arraiz
In this paper, we specify a linear Cliff-and-Ord-type spatial model. The model allows for spatial lags in the dependent variable, the exogenous variables, and the disturbances. The innovations in the disturbance process are assumed to be heteroskedastic of unknown form. We formulate multistep GMM/IV-type estimation procedures for the parameters of the model. We also give the limiting distributions of our suggested estimators and consistent estimators of their asymptotic variance-covariance matrices. We conduct a Monte Carlo study to show that the derived large-sample distribution provides a good approximation to the actual small-sample distribution of our estimators. [source]
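In generic notation (ours, not necessarily the authors'), a Cliff-Ord-type model with spatial lags in all three components and heteroskedastic innovations reads:

```latex
% SARAR-type Cliff-Ord specification; W and M are known n-by-n spatial
% weights matrices, and the innovation variances are left unrestricted.
\begin{aligned}
  y &= X\beta + W X \gamma + \lambda W y + u, \\
  u &= \rho M u + \varepsilon, \qquad
  \mathbb{E}[\varepsilon] = 0, \quad
  \mathbb{E}[\varepsilon \varepsilon'] = \operatorname{diag}(\sigma_1^2, \ldots, \sigma_n^2).
\end{aligned}
```

Because Wy is correlated with the disturbance, least squares is inconsistent; in this literature Wy is typically instrumented with spatial lags of the exogenous variables (X, WX, W²X, ...), with ρ estimated in a subsequent GMM step, and the variance estimators must be robust to the unknown-form heteroskedasticity.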
New Improved Tests for Cointegration with Structural Breaks
JOURNAL OF TIME SERIES ANALYSIS, Issue 2 2007
Joakim Westerlund
C12; C32; C33
This article proposes Lagrange multiplier-based tests for the null hypothesis of no cointegration. The tests are general enough to allow for heteroskedastic and serially correlated errors, deterministic trends, and a structural break of unknown timing in both the intercept and slope. The limiting distributions of the test statistics are derived and found to be invariant not only with respect to the trend and structural break, but also with respect to the regressors. A small Monte Carlo study is also conducted to investigate the small-sample properties of the tests. The results reveal that the tests have small size distortions and good power relative to other tests. [source]

Cross-validation Criteria for SETAR Model Selection
JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2001
Jan G. De Gooijer
Three cross-validation criteria, denoted C, Cc, and Cu, are proposed for selecting the orders of a self-exciting threshold autoregressive (SETAR) model when both the delay and the threshold value are unknown. The derivation of C is within a natural cross-validation framework. The criterion Cc is similar in spirit to AICc, a bias-corrected version of AIC for SETAR model selection introduced by Wong and Li (1998). The criterion Cu is a variant of Cc with a property similar to that of AICu, a model selection criterion proposed by McQuarrie et al. (1997) for linear models. In a Monte Carlo study, the performance of each of the criteria C, Cc, Cu, AIC, AICc, AICu, and BIC is investigated in detail for various models and various sample sizes. It is shown that Cu consistently outperforms all other criteria when the sample size is moderate to large. [source]

Robust Automatic Bandwidth for Long Memory
JOURNAL OF TIME SERIES ANALYSIS, Issue 3 2001
Marc Henry
The choice of bandwidth, or number of harmonic frequencies, is crucial to semiparametric estimation of long memory in a covariance stationary time series, as it determines the rate of convergence of the estimate, and a suitable choice can ensure robustness to some non-standard error specifications, such as (possibly long-memory) conditional heteroscedasticity. This paper considers mean squared error minimizing bandwidths proposed in the literature for the local Whittle, averaged periodogram, and log-periodogram estimates of long memory. Robustness of these optimal bandwidth formulae to conditional heteroscedasticity of general form in the errors is considered. Feasible approximations to the optimal bandwidths are assessed in an extensive Monte Carlo study that provides a good basis for comparison of the above-mentioned estimates with automatic bandwidth selection. [source]

A Simple Explanation of the Forecast Combination Puzzle
OXFORD BULLETIN OF ECONOMICS & STATISTICS, Issue 3 2009
Jeremy Smith
This article presents a formal explanation of the forecast combination puzzle: that simple combinations of point forecasts are repeatedly found to outperform sophisticated weighted combinations in empirical applications. The explanation lies in the effect of finite-sample error in estimating the combining weights. A small Monte Carlo study and a reappraisal of an empirical study by Stock and Watson [Federal Reserve Bank of Richmond Economic Quarterly (2003) Vol. 89/3, pp. 71–90] support this explanation. The Monte Carlo evidence, together with a large-sample approximation to the variance of the combining weight, also supports the popular recommendation to ignore forecast error covariances in estimating the weight. [source]
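The mechanism behind the puzzle is simple enough to demonstrate directly. A small sketch with illustrative parameters of our choosing (not the article's design): two unbiased forecasts whose error covariance is estimated from a short window, with the estimated "optimal" combination compared against the simple average.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two forecasts of similar quality: error SDs 1.0 and 1.05, correlation 0.5.
cov = np.array([[1.0000, 0.5250],
                [0.5250, 1.1025]])
w_star = (cov[1, 1] - cov[0, 1]) / (cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])

def est_w(e):
    c = np.cov(e.T)                     # weight estimated from a short window
    return (c[1, 1] - c[0, 1]) / (c[0, 0] + c[1, 1] - 2 * c[0, 1])

def combined_mse(weight_fn, n_est=30, n_eval=500, reps=2000):
    mses = []
    for _ in range(reps):
        e = rng.multivariate_normal([0.0, 0.0], cov, n_est + n_eval)
        w = weight_fn(e[:n_est])                      # estimation window
        comb = w * e[n_est:, 0] + (1 - w) * e[n_est:, 1]  # combined error
        mses.append(np.mean(comb ** 2))
    return np.mean(mses)

print("population-optimal weight:", round(w_star, 3))
print("MSE, simple average:   ", combined_mse(lambda e: 0.5))
print("MSE, estimated weights:", combined_mse(est_w))
```

When the forecasts are of similar quality the optimal weight sits near 0.5, so the achievable gain over the simple average is tiny while the sampling error in the estimated weight is not; the estimated-weight combination then tends to lose, which is precisely the article's explanation of the puzzle.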
Monte Carlo study of the Falicov–Kimball model: implementation of the histogram method
PHYSICA STATUS SOLIDI (B) BASIC SOLID STATE PHYSICS, Issue 5 2009
Maciej Wróbel
We show that the histogram method, widely used in Monte Carlo simulations of the Ising model, can also be implemented in simulations of the Falicov–Kimball model. We estimate the systematic errors arising from the application of this method and discuss the regimes of model parameters where these errors are minimal. We apply the histogram method to determine the transition temperature and demonstrate significant advantages over standard simulations. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
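The histogram method rests on single-histogram (Ferrenberg-Swendsen) reweighting: samples drawn at one inverse temperature β₀ are reweighted by exp[-(β - β₀)E] to estimate thermal averages at nearby β. A generic sketch with placeholder energy data (the Falicov-Kimball-specific sampling over f-electron configurations is not modelled here):

```python
import numpy as np

rng = np.random.default_rng(2)
beta0 = 1.0
E = rng.normal(-1000.0, 30.0, 50_000)  # placeholder energies from a run at beta0

def reweight(x, E, beta0, beta):
    """Single-histogram estimate of <x> at beta from samples drawn at beta0."""
    logw = -(beta - beta0) * E
    logw -= logw.max()          # overflow guard; the constant cancels in the ratio
    w = np.exp(logw)
    return np.sum(w * x) / np.sum(w)

# Scan the specific heat C(beta) ~ beta^2 * var(E) around beta0; with real
# simulation data its maximum locates the transition temperature (the
# synthetic samples above only exercise the code path).  The estimate is
# only trustworthy while beta stays close enough to beta0 that the sampled
# histogram still covers the relevant energies; that loss of overlap is the
# source of the systematic errors the abstract refers to.
betas = np.linspace(0.95 * beta0, 1.05 * beta0, 201)
cv = [b ** 2 * (reweight(E ** 2, E, beta0, b) - reweight(E, E, beta0, b) ** 2)
      for b in betas]
print("specific-heat maximum at beta =", betas[int(np.argmax(cv))])
```

The payoff matching the abstract's claim: a whole temperature window is covered from a single simulation, instead of one standard run per temperature point.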
Monte Carlo study of 2D electron gas transport including the Pauli exclusion principle in highly doped silicon
PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 1 2008
F. Carosella
A multi-subband Monte Carlo simulator improved to efficiently include the Pauli exclusion principle is presented. It is used to study transport in highly doped, ultra-thin silicon films. Both the steady state and the transient regime of transport in silicon films under a uniform driving field are investigated. This approach is intended to be incorporated into a full device simulator to improve the modeling of the access regions of nanoscale double-gate MOSFETs. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Disorder dependence of phase transitions in a Coulomb glass
PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 1 2004
Michael H. Overlin
We have performed a Monte Carlo study of a three-dimensional system of classical electrons with Coulomb interactions at half filling. We systematically increase the positional disorder by starting from a completely ordered system and gradually transitioning to a Coulomb glass. The phase transition as a function of temperature is second order for all values of disorder. We use finite-size scaling to determine the transition temperature TC and the critical exponent ν. We find that TC decreases and that ν increases with increasing disorder. (© 2003 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Evaluation of photovoltaic modules based on sampling inspection using smoothed empirical quantiles
PROGRESS IN PHOTOVOLTAICS: RESEARCH & APPLICATIONS, Issue 1 2010
Ansgar Steland
An important issue for end users and distributors of photovoltaic (PV) modules is the inspection of the power output specification of a shipment. The question is whether or not the modules satisfy the specifications given in the data sheet, namely the nominal power output under standard test conditions, relative to the power output tolerance. Since collecting control measurements of all modules is usually unrealistic, decisions have to be based on random samples. In many cases, one has access to flash data tables of final power output measurements (flash data) from the producer. We propose to rely on the statistical acceptance sampling approach as an objective decision framework that takes into account both the end user's and the producer's risk of a false decision. A practical solution to the problem, recently found by the authors, is discussed. The solution consists of estimates of the required optimal sample size and the associated critical value, where the estimation uses the information contained in the additional flash data. We propose and examine an improved solution that yields even more reliable estimated sampling plans, as substantiated by a Monte Carlo study. This is achieved by employing advanced statistical estimation techniques. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Monte Carlo study of cycloamylose: Chain conformation, radius of gyration, and diffusion coefficient
BIOPOLYMERS, Issue 2 2002
Yasushi Nakata
Cyclic (1→4)-α-D-glucan chains with or without excluded volume have been collected from a huge number (about 10^7) of linear amylosic chains generated by the Monte Carlo method with a conformational energy map for maltose, and their mean-square radii of gyration ⟨S²⟩ and translational diffusion coefficients D (based on the Kirkwood formula) have been computed as functions of x (the number of glucose residues, in a range from 7 to 300) and the excluded-volume strength represented by the effective hard-core radius. Both ⟨S²⟩/x and D in the unperturbed state oscillate weakly for x < 30, and the helical nature of amylose appears more pronounced in cyclic chains than in linear chains. As x increases, these properties approach the values expected for Gaussian rings. Though excluded-volume effects on them are always larger in cycloamylose than in the corresponding linear amylose, the ratios of ⟨S²⟩ and the hydrodynamic radius of the former to the respective properties of the latter in good solvents can be slightly lower than or comparable to the (asymptotic) Gaussian-chain values when x is not sufficiently large. An interpolation expression is constructed for the relation between the gyration-radius expansion factors of linear and cyclic chains from the present Monte Carlo data and the early proposed asymptotic relation, with the aid of first-order perturbation theories. © 2002 Wiley Periodicals, Inc. Biopolymers 64: 72–79, 2002 [source]

Comparison of 1- and 2-day protocols for myocardial SPECT: a Monte Carlo study
CLINICAL PHYSIOLOGY AND FUNCTIONAL IMAGING, Issue 4 2005
H. H. El-Ali
Summary
Background: Myocardial perfusion single-photon emission computed tomography (SPECT) is carried out by combining a rest and a stress study, performed either on one day or on two separate days. A problem when performing the two studies on one day is that residual activity from the first study contributes to the activity measured in the second study.
Aim: Our aim was to identify and evaluate trends in the quantification parameters of myocardial perfusion images as a function of the separation time between the rest and stress studies.
Methods: A digital phantom was used to generate heart images, and a Monte Carlo-based scintillation camera program was used to simulate SPECT projection images. In our simulations, the rest images were normal and the stress images included lesions of different types and localizations. Two programs for the quantification of myocardial perfusion images were used to assess the different images in an automated and objective way.
Results: The summed difference scores observed with the 2-day protocol were 3 ± 1 (mean ± SD) higher for AutoQUANT and 2 ± 1 higher for 4D-MSPECT compared with those observed with the 1-day protocol. The extent values were 2 percentage points higher for the 2-day protocol than for the 1-day protocol with both programs.
Conclusions: There are differences in the quantitative assessment of perfusion defects depending on the type of protocol used. The contribution of residual activity is larger when a 1-day protocol is used than with a 2-day protocol. The differences, although small, are of a magnitude that results in a clear shift in the quantification parameters. [source]
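The residual-activity effect the authors quantify can be bounded with simple decay arithmetic. A back-of-the-envelope sketch under our own assumptions (a Tc-99m tracer with a 6.01 h physical half-life, a 1:3 rest-to-stress dose ratio, and physical decay only, ignoring biological washout):

```python
import math

HALF_LIFE_H = 6.01          # Tc-99m physical half-life in hours (assumed tracer)

def residual_fraction(separation_h, dose_ratio=3.0):
    """Fraction of counts in the second study attributable to the first
    injection.  dose_ratio is the second-injection activity relative to the
    first; only physical decay is modelled."""
    residual = 0.5 ** (separation_h / HALF_LIFE_H)   # surviving first dose
    return residual / (residual + dose_ratio)

for hours in (1, 4, 24, 48):
    print(f"{hours:3d} h separation: {100 * residual_fraction(hours):5.1f} % residual")
```

Even at typical 1-day separations of a few hours, a noticeable share of the counts in the second acquisition stems from the first injection, whereas after a 1-2 day gap the residual is negligible, consistent with the shift in quantification parameters reported above.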