Simulation Study (simulation + study)


Kinds of Simulation Study

  • computer simulation study
  • dynamics simulation study
  • extensive simulation study
  • molecular dynamics simulation study
  • numerical simulation study
  • small simulation study

Terms modified by Simulation Study

  • simulation study shows

Selected Abstracts


    EVOLUTION, Issue 1 2003
    Abstract. Explaining the uneven distribution of species among lineages is one of the oldest questions in evolution. Proposed correlations between biological traits and species diversity are routinely tested by making comparisons between phylogenetic sister clades. Several recent studies have used nested sister-clade comparisons to test hypotheses linking continuously varying traits, such as body size, with diversity. Evaluating the findings of these studies is complicated because they differ in the index of species richness difference used, the way in which trait differences were treated, and the statistical tests employed. In this paper, we use simulations to compare the performance of four species richness indices, two choices about the branch lengths used to estimate trait values for internal nodes and two statistical tests under a range of models of clade growth and character evolution. All four indices returned appropriate Type I error rates when the assumptions of the method were met and when branch lengths were set proportional to time. Only two of the indices were robust to the different evolutionary models and to different choices of branch lengths and statistical tests. These robust indices had comparable power under one nonnull scenario. Regression through the origin was consistently more powerful than the t-test, and the choice of branch lengths exerted a strong effect on both validity and power. In the light of our simulations, we re-evaluate the findings of those who have previously used nested comparisons in the context of species richness. We provide a set of simple guidelines to maximize the performance of phylogenetically nested comparisons in tests of putative correlates of species richness. [source]


    ARTUR C. B. DA SILVA LOPES
    Article first published online: 18 AUG 200
    In this paper, it is demonstrated by simulation that, contrary to a widely held belief, pure seasonal mean shifts, i.e. seasonal structural breaks which affect only the seasonal cycle, really do matter for Dickey–Fuller long-run unit root tests. Both size and power properties are affected by such breaks, but using the t-sig method for lag selection induces a stabilizing effect. Although most results are reassuring when the t-sig method is used, some concern with this type of break cannot be disregarded. Further evidence on the poor performance of the t-sig method for quarterly time series in standard (no-break) cases is also presented. [source]

    Simulation Study of the MHD Stability Beta Limit in LHD by TASK3D

    M. Sato
    Abstract The numerical method for analysis of the "MHD stability beta limit" based on a hierarchy integrated simulation code TASK3D has been developed. The numerical model for the effect of the MHD instabilities is introduced such that the pressure profile is flattened around the rational surface due to the MHD instabilities. The width of the flattening of the pressure gradient is determined from the width of the eigenmode structure of the MHD instabilities. (© 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    Simulation Study of Radiative Cooling in the Divertor on JT-60 Super Advanced (JT-60SA)

    Y. Suzuki
    Abstract A simulation study of the divertor for JT-60SA is discussed in both double-null (DN) and single-null (SN) configurations. In the DN case, the transition of peak heat load on the targets and divertor plasma states from CDN (connected double-null) to DDN (disconnected double-null) is shown in the simulation. The radiative cooling power in the divertor plasma is discussed in this study. The carbon impurity generated by sputtering on the divertor target is included in the simulation. In the gas puffing cases of fueling gas (D2) and impurity gas (Ne), the reduction of heat load is confirmed to be consistent with the increase in radiative cooling loss power in the divertor plasma. The loss power and the distribution of radiative cooling in the divertor plasma are studied in this paper. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    Smoothing Mechanisms in Defined Benefit Pension Accounting Standards: A Simulation Study

    Cameron Morrill
    ABSTRACT The accounting for defined benefit (DB) pension plans is complex and varies significantly across jurisdictions despite recent international convergence efforts. Pension costs are significant, and many worry that unfavorable accounting treatment could lead companies to terminate DB plans, a result that would have important social implications. A key difference in accounting standards relates to whether and how the effects of fluctuations in market and demographic variables on reported pension cost are "smoothed". Critics argue that smoothing mechanisms lead to incomprehensible accounting information and induce managers to make dysfunctional decisions. Furthermore, the effectiveness of these mechanisms may vary. We use simulated data to test the volatility, representational faithfulness, and predictive ability of pension accounting numbers under Canadian, British, and international standards (IFRS). We find that smoothed pension expense is less volatile, more predictive of future expense, and more closely associated with contemporaneous funding than is "unsmoothed" pension expense. The corridor method and market-related value approaches allowed under Canadian GAAP have virtually no smoothing effect incremental to the amortization of actuarial gains and losses. The pension accrual or deferred asset is highly correlated with the pension plan deficit/surplus. Our findings complement existing, primarily archival, pension accounting research and could provide guidance to standard-setters. [source]

    Effects of Wall Stress on the Dynamics of Ventricular Fibrillation: A Simulation Study Using a Dynamic Mechanoelectric Model of Ventricular Tissue

    SATOKO HIRABAYASHI
    Introduction: To investigate the mechanisms underlying the increased prevalence of ventricular fibrillation (VF) in the mechanically compromised heart, we developed a fully coupled electromechanical model of the human ventricular myocardium. Methods and Results: The model formulated the biophysics of specific ionic currents, excitation–contraction coupling, anisotropic nonlinear deformation of the myocardium, and mechanoelectric feedback (MEF) through stretch-activated channels. Our model suggests that sustained stretches shorten the action potential duration (APD) and flatten the electrical restitution curve, whereas stretches applied at the wavefront prolong the APD. Using this model, we examined the effects of mechanical stresses on the dynamics of spiral reentry. The strain distribution during spiral reentry was complex, and a high strain-gradient region was located in the core of the spiral wave. The wavefront around the core was highly stretched, even at lower pressures, resulting in prolongation of the APD and extension of the refractory area in the wavetail. As the left ventricular pressure increased, the stretched area became wider and the refractory area was further extended. The extended refractory area in the wavetail facilitated the wave breakup and meandering of tips through interactions between the wavefront and wavetail. Conclusions: This simulation study indicates that mechanical loading promotes meandering and wave breaks of spiral reentry through MEF. Mechanical loading under pathological conditions may contribute to the maintenance of VF through these mechanisms. [source]

    Validation of ECG Indices of Ventricular Repolarization Heterogeneity: A Computer Simulation Study

    Introduction: Repolarization heterogeneity (RH) is functionally linked to dispersion in refractoriness and to arrhythmogenicity. In the current study, we validate several proposed electrocardiogram (ECG) indices for RH: T-wave amplitude, -area, -complexity, and -symmetry ratio, QT dispersion, and the Tapex-end interval (the latter being an index of transmural dispersion of repolarization (TDR)). Methods and Results: We used ECGSIM, a mathematical simulation model of ECG genesis in a human thorax, and varied global RH by increasing the standard deviation (SD) of the repolarization instants from 20 (default) to 70 msec in steps of 10 msec. T-wave amplitude, -area, -symmetry, and Tapex-end depended linearly on SD. T-wave amplitude increased from 275 ± 173 to 881 ± 456 μV, T-wave area from 34 × 10³ ± 21 × 10³ to 141 × 10³ ± 58 × 10³ μV·msec, T-wave symmetry decreased from 1.55 ± 0.11 to 1.06 ± 0.23, and Tapex-end increased from 84 ± 17 to 171 ± 52 msec. T-wave complexity increased initially but saturated at SD = 50 msec. QT dispersion increased modestly until SD = 40 msec and more rapidly for higher values of SD. TDR increased linearly with SD. Tapex-end increased linearly with TDR, but overestimated it. Conclusion: T-wave complexity did not discriminate between differences in larger RH values. QT dispersion had low sensitivity in the transitional zone between normal and abnormal RH. In conclusion, T-wave amplitude, -area, -symmetry, and, with some limitations, Tapex-end and T-wave complexity reliably reflect changes in RH. [source]

    Tax Amnesties, Justice Perceptions, and Filing Behavior: A Simulation Study

    LAW & POLICY, Issue 2 2010
    A simulation study demonstrates the influence of the perceived justice of a tax amnesty on subsequent tax compliance. In addition, it investigates how the amnesty is perceived to serve the punishment objectives of retribution (i.e., giving offenders what they "deserve") and value restoration (i.e., restoring the values violated by tax evasion). Hierarchical regression analysis revealed the expected positive influence of justice on subsequent tax compliance. However, when the influence of punishment objectives was controlled for, the influence of justice disappeared, while retribution and value restoration showed positive effects on post-amnesty tax compliance. [source]

    Coresidential Patterns in Historical China: A Simulation Study

    Zhongwei Zhao
    The controversy regarding China's historical residential patterns is related to the lack of investigation into demographic influences on past kinship structures and household formation. This study uses computer micro-simulation to examine the demographic feasibility of people living in large multi-generation households under demographic conditions close to those recorded in Chinese history. It investigates the composition of households in which individuals live at particular points in their life course, the transitions in their household structure, and the length of time they spend in households of different types. The simulation exercise suggests that demographic regimes and household formation systems similar to those operating in China in the past produce diverse residential patterns, in which individuals could experience different household forms at different stages of the life cycle. [source]

    Noninvasive Activity-based Control of an Implantable Rotary Blood Pump: Comparative Software Simulation Study

    ARTIFICIAL ORGANS, Issue 2 2010
    Dean M. Karantonis
    Abstract A control algorithm for an implantable centrifugal rotary blood pump (RBP) based on a noninvasive indicator of the implant recipient's activity level has been proposed and evaluated in a software simulation environment. An activity level index (ALI), derived from a noninvasive estimate of heart rate and the output of a triaxial accelerometer, forms the noninvasive indicator of metabolic energy expenditure. Pump speed is then varied linearly according to the ALI within a defined range. This ALI-based control module operates within a hierarchical multiobjective framework, which imposes several constraints on the operating region, such as minimum flow and minimum speed amplitude thresholds. Three class IV heart failure (HF) cases of varying severity were simulated under rest and exercise conditions, and a comparison with other popular RBP control strategies was performed. Pump flow increases of 2.54, 1.94, and 1.15 L/min were achieved for the three HF cases, from rest to exercise. Compared with constant speed control, this represents a relative flow change of 30.3, 19.8, and −15.4%, respectively. Simulations of the proposed control algorithm exhibited the effective intervention of each constraint, resulting in an improved flow response and the maintenance of a safe operating condition, compared with other control modes. [source]
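The linear speed law described in the abstract is simple enough to sketch. The following is a minimal illustration only; the ALI range, speed range, and function name are hypothetical assumptions, not values or interfaces from the study, and the hierarchical constraint framework is not reproduced:

```python
def ali_to_speed(ali, ali_min=0.0, ali_max=1.0,
                 rpm_min=2000.0, rpm_max=3000.0):
    """Map an activity level index (ALI) linearly onto a pump speed range.

    All range values here are illustrative assumptions, not the study's.
    """
    # Clamp the index to the defined operating range.
    ali = max(ali_min, min(ali_max, ali))
    # Linear interpolation between the minimum and maximum speed.
    frac = (ali - ali_min) / (ali_max - ali_min)
    return rpm_min + frac * (rpm_max - rpm_min)
```

A full controller would wrap a law like this inside the hierarchical multiobjective framework, enforcing the minimum-flow and minimum-speed-amplitude constraints before a setpoint is ever commanded to the pump.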

    Physiological Control of Blood Pumps Using Intrinsic Pump Parameters: A Computer Simulation Study

    ARTIFICIAL ORGANS, Issue 4 2006
    Guruprasad A. Giridharan
    Abstract: Implantable flow and pressure sensors, used to control rotary blood pumps, are unreliable in the long term. It is, therefore, desirable to develop a physiological control system that depends only on readily available measurements of the intrinsic pump parameters, such as measurements of the pump current, voltage, and speed (in revolutions per minute). A previously proposed ΔP control method for ventricular assist devices (VADs) requires the implantation of two pressure sensors to measure the pressure difference between the left ventricle and aorta. In this article, we propose a model-based method for estimating ΔP, which eliminates the need for implantable pressure sensors. The developed estimator consists of the extended Kalman filter in conjunction with the Savitzky–Golay filter. The performance of the combined estimator–VAD controller system was evaluated in computer simulations for a broad range of physical activities and varying cardiac conditions. The results show that there was no appreciable performance degradation of the estimator–controller system compared to the case when ΔP is measured directly. The proposed approach effectively utilizes a VAD as both a pump and a differential pressure sensor, thus eliminating the need for dedicated implantable pressure and flow sensors. The simulation results show that different pump designs may not be equally effective at playing the dual role of a flow actuator and ΔP sensor. [source]
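The smoothing half of the estimator pair, a Savitzky–Golay filter, can be built from a local least-squares polynomial fit. The sketch below is a generic textbook construction, not the authors' implementation, and the extended Kalman filter stage is omitted:

```python
import numpy as np

def savgol_coeffs(window, polyorder):
    """Savitzky-Golay smoothing weights for the centre of an odd-length window."""
    half = window // 2
    x = np.arange(-half, half + 1)
    # Design matrix with columns 1, x, x^2, ..., x^polyorder.
    A = np.vander(x, polyorder + 1, increasing=True)
    # The fitted polynomial's value at x = 0 is its constant coefficient,
    # i.e. the first row of the pseudoinverse applied to the window samples.
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=7, polyorder=2):
    """Smooth a 1-D signal; edges are handled by reflection padding."""
    c = savgol_coeffs(window, polyorder)
    half = window // 2
    ypad = np.pad(np.asarray(y, float), half, mode="reflect")
    return np.convolve(ypad, c[::-1], mode="valid")
```

Because the fit is exact for polynomials up to the chosen order, a noiseless quadratic passes through the interior of the filter unchanged, which is a convenient sanity check.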

    Resampling-Based Empirical Bayes Multiple Testing Procedures for Controlling Generalized Tail Probability and Expected Value Error Rates: Focus on the False Discovery Rate and Simulation Study

    Sandrine Dudoit
    Abstract This article proposes resampling-based empirical Bayes multiple testing procedures for controlling a broad class of Type I error rates, defined as generalized tail probability (gTP) error rates, gTP(q, g) = Pr(g(Vn, Sn) > q), and generalized expected value (gEV) error rates, gEV(g) = E[g(Vn, Sn)], for arbitrary functions g(Vn, Sn) of the numbers of false positives Vn and true positives Sn. Of particular interest are error rates based on the proportion g(Vn, Sn) = Vn/(Vn + Sn) of Type I errors among the rejected hypotheses, such as the false discovery rate (FDR), FDR = E[Vn/(Vn + Sn)]. The proposed procedures offer several advantages over existing methods. They provide Type I error control for general data generating distributions, with arbitrary dependence structures among variables. Gains in power are achieved by deriving rejection regions based on guessed sets of true null hypotheses and null test statistics randomly sampled from joint distributions that account for the dependence structure of the data. The Type I error and power properties of an FDR-controlling version of the resampling-based empirical Bayes approach are investigated and compared to those of widely-used FDR-controlling linear step-up procedures in a simulation study. The Type I error and power trade-off achieved by the empirical Bayes procedures under a variety of testing scenarios allows this approach to be competitive with or outperform the Storey and Tibshirani (2003) linear step-up procedure, as an alternative to the classical Benjamini and Hochberg (1995) procedure. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
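The baseline in that comparison, the classical Benjamini and Hochberg (1995) linear step-up procedure, is compact enough to sketch together with a Monte Carlo check of FDR = E[Vn/(Vn + Sn)]. All simulation settings below (100 hypotheses, 80 true nulls, alternatives at effect size 3) are illustrative assumptions, not the article's scenarios:

```python
import numpy as np
from math import erfc, sqrt

def benjamini_hochberg(pvals, q=0.05):
    """Classical linear step-up procedure; returns a boolean rejection mask."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    # Reject the k smallest p-values, where k is the largest index with
    # p_(k) <= q * k / m.
    below = p[order] <= q * np.arange(1, m + 1) / m
    k = below.nonzero()[0].max() + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Monte Carlo estimate of the FDR: 80 true nulls, 20 alternatives.
rng = np.random.default_rng(0)
m, m0, q = 100, 80, 0.05
fdp = []
for _ in range(2000):
    z = np.concatenate([rng.normal(0, 1, m0), rng.normal(3, 1, m - m0)])
    p = np.array([0.5 * erfc(zi / sqrt(2)) for zi in z])  # one-sided p-values
    reject = benjamini_hochberg(p, q)
    v = reject[:m0].sum()            # false positives (among true nulls)
    r = reject.sum()                 # total rejections
    fdp.append(v / max(r, 1))        # false discovery proportion
fdr_hat = float(np.mean(fdp))
```

Under independence the linear step-up procedure controls the FDR at q·m0/m (here 0.04), which the Monte Carlo estimate above should approach.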

    Experimental and Numerical Simulation Study of Heat Transfer Due to Confined Impinging Circular Jet

    L. Chang-geng
    Abstract An experimental and numerical simulation study of heat transfer due to a confined impinging circular jet is presented. In this research, a stainless steel foil heated disk was used as the heat transfer surface of a simulated chip, and thermocouples were mounted symmetrically along the diameter of the foil to measure the temperature distribution on the surface. Driven by a small pump, a circular air jet (1.5 mm and 1 mm in diameter) impinged on the heat transfer surface at low-to-moderate Reynolds numbers. Parameters such as the Reynolds number and the height-to-diameter ratio were varied to investigate the radial distribution of the Nusselt number and the characteristics of heat transfer in the stagnation region. Numerical computations were performed using several different turbulence models. In wall-bounded turbulent flows, near-wall modeling is crucial; therefore, turbulence models with enhanced wall treatment, such as the RNG k–ε model, may be superior for modeling impingement flows. The numerical results showed reasonable agreement with the experimental data for local heat transfer coefficient distributions. The impinging jet may be an effective method to solve the cooling problem of high-power-density electronic packaging. [source]

    A Molecular Dynamics Simulation Study of (OH−) Schottky Defects in Hydroxyapatite.

    CHEMINFORM, Issue 27 2005
    Dirk Zahn
    No abstract is available for this article. [source]

    Atomistic Simulation Study of the Order/Disorder (Monoclinic to Hexagonal) Phase Transition of Hydroxyapatite.

    CHEMINFORM, Issue 25 2005
    Oliver Hochrein
    No abstract is available for this article. [source]

    ChemInform Abstract: Solvent Effect on Relative Gibbs Free Energy and Structural Property of Eu3+ to Yb3+ Ion Mutation: A Monte Carlo Simulation Study.

    CHEMINFORM, Issue 10 2002
    Hag-Sung Kim
    Abstract ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 100 leading journals. To access a ChemInform Abstract of an article which was published elsewhere, please select a "Full Text" option. The original article is trackable via the "References" option. [source]

    Hydrogen Adsorption and Diffusion in p-tert-Butylcalix[4]arene: An Experimental and Molecular Simulation Study

    Dr. Saman Alavi
    Abstract Experimental adsorption isotherms were measured and computer simulations were performed to determine the nature of the H2 gas uptake in the low-density p-tert-butylcalix[4]arene (tBC) phase. 1H NMR peak intensity measurements for pressures up to 175 bar were used to determine the H2 adsorption isotherm. Weak surface adsorption (up to ~2 mass % H2) and stronger adsorption (not exceeding 0.25 mass % or one H2 per calixarene bowl) inside the calixarene phase were detected. The latter type of adsorbed H2 molecule has restricted motion and shows a reversible gas adsorption/desorption cycle. Pulsed field gradient (PFG) NMR pressurization/depressurization measurements were performed to study the diffusion of H2 in the calixarene phases. Direct adsorption isotherms by exposure of the calixarene phase to pressures of H2 gas up to ~60 bar are also presented, and show a maximum H2 adsorption of 0.4 H2 per calixarene bowl. Adsorption isotherms of H2 in bulk tBC have been simulated using grand canonical Monte Carlo calculations in a rigid tBC framework, and yield adsorptions of ~1 H2 per calixarene bowl at saturation. Classical molecular dynamics simulations with a fully flexible calixarene molecular force field are used to determine the guest distribution and inclusion energy of the H2 in the solid with different loadings. [source]

    Simulation study of methemoglobin reduction in erythrocytes

    FEBS JOURNAL, Issue 6 2007
    Differential contributions of two pathways to tolerance to oxidative stress
    Methemoglobin (metHb), an oxidized form of hemoglobin, is unable to bind and carry oxygen. Erythrocytes are continuously subjected to oxidative stress and nitrite exposure, which results in the spontaneous formation of metHb. To avoid the accumulation of metHb, reductive pathways mediated by cytochrome b5 or flavin, coupled with NADH-dependent or NADPH-dependent metHb reductases, respectively, keep the level of metHb in erythrocytes at less than 1% of the total hemoglobin under normal conditions. In this work, a mathematical model has been developed to quantitatively assess the relative contributions of the two major metHb-reducing pathways, taking into consideration the supply of NADH and NADPH from central energy metabolism. The results of the simulation experiments suggest that these pathways have different roles in the reduction of metHb; one has a high response rate to hemoglobin oxidation with a limited reducing flux, and the other has a low response rate with a high capacity flux. On the basis of the results of our model, under normal oxidative conditions, the NADPH-dependent system, the physiological role of which to date has been unclear, is predicted to be responsible for most of the reduction of metHb. In contrast, the cytochrome b5–NADH pathway becomes dominant under conditions of excess metHb accumulation, only after the capacity of the flavin–NADPH pathway has reached its limit. We discuss the potential implications of a system designed with two metHb-reducing pathways in human erythrocytes. [source]

    A Combined QM/MM Molecular Dynamics Simulations Study of Nitrate Anion (NO3−) in Aqueous Solution.

    CHEMINFORM, Issue 9 2007
    Anan Tongraar
    Abstract ChemInform is a weekly Abstracting Service, delivering concise information at a glance that was extracted from about 200 leading journals. To access a ChemInform Abstract, please click on HTML or PDF. [source]

    On the modelling of publish/subscribe communication systems

    R. Baldoni
    Abstract This paper presents a formal framework of a distributed computation based on a publish/subscribe system. The framework abstracts the system through two delays, namely the subscription/unsubscription delay and the diffusion delay. This abstraction allows one to model concurrent execution of publication and subscription operations without waiting for the stability of the system state and to define a Liveness property which gives the conditions for the presence of a notification event in the global history of the system. This formal framework allows us to analytically define a measure of the effectiveness of a publish/subscribe system, which reflects the percentage of notifications guaranteed by the system to subscribers. A simulation study confirms the validity of the analytical measurements. Copyright © 2005 John Wiley & Sons, Ltd. [source]


    Nonlinear epigenetic variance: review and simulations

    Kees-Jan Kan
    We present a review of empirical evidence that suggests that a substantial portion of phenotypic variance is due to nonlinear (epigenetic) processes during ontogenesis. The role of such processes as a source of phenotypic variance in human behaviour genetic studies is not fully appreciated. In addition to our review, we present simulation studies of nonlinear epigenetic variance using a computational model of neuronal network development. In each simulation study, time series for monozygotic and dizygotic twins were generated and analysed using conventional behaviour genetic modelling. In the results of these analyses, the nonlinear epigenetic variance was subsumed under the non-shared environmental component. As is commonly found in behaviour genetic studies, observed heritabilities and unique environmentabilities increased with time, whereas common environmentabilities decreased. The fact that the phenotypic effects of nonlinear epigenetic processes appear as unsystematic variance in conventional twin analyses complicates the identification and quantification of the ultimate genetic and environmental causes of individual differences. We believe that nonlinear dynamical system theories provide a challenging perspective on the development of individual differences, which may enrich behaviour genetic studies. [source]

    Co-evolution between ectoparasites and their insect hosts: a simulation study of a damselfly–water mite interaction

    Jens Rolff
    Summary
    1. A simulation model investigating the co-evolution of water mites infesting their aquatic insect hosts during emergence is presented. The model is based on field and experimental studies of the ectoparasitic water mite Arrenurus cuspidator and the damselfly Coenagrion puella.
    2. Three scenarios were studied: (1) only the host was allowed to evolve timing of emergence, while the timing of the parasites' infestation opportunity was held constant; (2) both host and parasite were allowed to evolve; (3) only the parasite's timing was allowed to evolve, while the host was constrained completely.
    3. In the first two scenarios, parasite abundances decreased in the course of evolution and reached values well below those found in the field, whereas in the third scenario, parasite abundances were maintained at a level close to that found in the field. In the second scenario (co-evolution), the host seemed to be the leader in the evolutionary race.
    4. It is concluded that water mite parasitism is capable of shaping emergence patterns in aquatic insects and, despite the same life-cycle length for host and parasite, the parasite evolves fast enough to shape its hatching pattern to match the emergence pattern of its host. [source]

    Nonparametric Estimation of Nonadditive Random Functions

    ECONOMETRICA, Issue 5 2003
    Rosa L. Matzkin
    We present estimators for nonparametric functions that are nonadditive in unobservable random terms. The distributions of the unobservable random terms are assumed to be unknown. We show that when a nonadditive, nonparametric function is strictly monotone in an unobservable random term, and it satisfies some other properties that may be implied by economic theory, such as homogeneity of degree one or separability, the function and the distribution of the unobservable random term are identified. We also present convenient normalizations, to use when the properties of the function, other than strict monotonicity in the unobservable random term, are unknown. The estimators for the nonparametric function and for the distribution of the unobservable random term are shown to be consistent and asymptotically normal. We extend the results to functions that depend on a multivariate random term. The results of a limited simulation study are presented. [source]

    Missing data assumptions and methods in a smoking cessation study

    ADDICTION, Issue 3 2010
    Sunni A. Barnes
    ABSTRACT Aim: A sizable percentage of subjects do not respond to follow-up attempts in smoking cessation studies. The usual procedure in the smoking cessation literature is to assume that non-respondents have resumed smoking. This study used data from a study with a high follow-up rate to assess the degree of bias that may be caused by different methods of imputing missing data. Design and methods: Based on a large data set with very little missing follow-up information at 12 months, a simulation study was undertaken to compare and contrast missing data imputation methods (assuming smoking, propensity score matching and optimal matching) under various assumptions as to how the missing data arose (randomly generated missing values, increased non-response from smokers and a hybrid of the two). Findings: Missing data imputation methods all resulted in some degree of bias which increased with the amount of missing data. Conclusion: None of the missing data imputation methods currently available can compensate for bias when there are substantial amounts of missing data. [source]
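The bias mechanism this study examines can be illustrated with a toy simulation. Every number below (a 30% true quit rate, the two invented missingness mechanisms) is a hypothetical assumption for illustration, not data from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
quit_status = rng.random(n) < 0.30        # True = quit at 12 months

def assume_smoking_rate(quit_status, missing):
    """Estimated quit rate when every non-respondent is imputed as smoking."""
    imputed = np.where(missing, False, quit_status)
    return float(imputed.mean())

# Mechanism A: responses missing completely at random (20% of subjects).
miss_a = rng.random(n) < 0.20
# Mechanism B: smokers are more likely to be lost to follow-up
# (10% of quitters missing vs. 25% of smokers).
miss_b = rng.random(n) < np.where(quit_status, 0.10, 0.25)

true_rate = float(quit_status.mean())
rate_a = assume_smoking_rate(quit_status, miss_a)  # biased low: quitters lost too
rate_b = assume_smoking_rate(quit_status, miss_b)  # less biased: mostly smokers lost
```

Imputing non-respondents as smokers is unbiased only when every non-respondent actually smokes; the further the missingness mechanism departs from that, the larger the downward bias in the estimated quit rate.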

    Predicting intra-urban variation in air pollution concentrations with complex spatio-temporal dependencies

    ENVIRONMETRICS, Issue 6 2010
    Adam A. Szpiro
    Abstract We describe a methodology for assigning individual estimates of long-term average air pollution concentrations that accounts for a complex spatio-temporal correlation structure and can accommodate spatio-temporally misaligned observations. This methodology has been developed as part of the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air), a prospective cohort study funded by the US EPA to investigate the relationship between chronic exposure to air pollution and cardiovascular disease. Our hierarchical model decomposes the space–time field into a "mean" that includes dependence on covariates and spatially varying seasonal and long-term trends and a "residual" that accounts for spatially correlated deviations from the mean model. The model accommodates complex spatio-temporal patterns by characterizing the temporal trend at each location as a linear combination of empirically derived temporal basis functions, and embedding the spatial fields of coefficients for the basis functions in separate linear regression models with spatially correlated residuals (universal kriging). This approach allows us to implement a scalable single-stage estimation procedure that easily accommodates a significant number of missing observations at some monitoring locations. We apply the model to predict long-term average concentrations of oxides of nitrogen (NOx) from 2005 to 2007 in the Los Angeles area, based on data from 18 EPA Air Quality System regulatory monitors. The cross-validated R2 is 0.67. The MESA Air study is also collecting additional concentration data as part of a supplementary monitoring campaign. We describe the sampling plan and demonstrate in a simulation study that the additional data will contribute to improved predictions of long-term average concentrations. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Chemical mass balance when an unknown source exists

    ENVIRONMETRICS, Issue 8 2004
    Nobuhisa Kashiwagi
    Abstract A chemical mass balance method is proposed for the case where the existence of an unknown source is suspected. In general, when the existence of an unknown source is assumed in statistical receptor modeling, unknown quantities such as the composition of an unknown source and the contributions of assumed sources become unidentifiable. To estimate these unknown quantities avoiding the identification problem, a Bayes model for chemical mass balance is constructed in the form of composition without using prior knowledge on the unknown quantities except for natural constraints. The covariance of ambient observations given in the form of composition is defined in several ways. Markov chain Monte Carlo is used for evaluating the posterior means and variances of the unknown quantities as well as the likelihood for the proposed model. The likelihood is used for selecting the best fit covariance model. A simulation study is carried out to check the performance of the proposed method. Copyright © 2004 John Wiley & Sons, Ltd. [source]

    Optimization of ordered distance sampling

    ENVIRONMETRICS, Issue 2 2004
    Ryan M. Nielson
    Abstract Ordered distance sampling is a point-to-object sampling method that can be labor-efficient for demanding field situations. An extensive simulation study was conducted to find the optimum number, g, of population members to be encountered from each random starting point in ordered distance sampling. Monte Carlo simulations covered 64 combinations of four spatial patterns, four densities and four sample sizes. Values of g from 1 to 10 were considered for each case. Relative root mean squared error (RRMSE) and relative bias were calculated for each level of g, with RRMSE used as the primary assessment criterion for finding the optimum level of g. A non-parametric confidence interval was derived for the density estimate, and this was included in the simulations to gauge its performance. Superior estimation properties were found for g > 3, but diminishing returns, relative to the potential for increased effort in the field, were found for g > 5. The simulations showed noticeable diminishing returns for more than 20 sampled points. The non-parametric confidence interval performed well for populations with random, aggregate or double-clumped spatial patterns, but rarely came close to target coverage for populations that were regularly distributed. The non-parametric confidence interval presented here is recommended for general use. Copyright 2004 John Wiley & Sons, Ltd. [source]
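    The density estimator being tuned in this abstract can be illustrated concretely. A standard point-to-object estimator for ordered distance sampling (often attributed to Pollard) is D̂ = (ng − 1) / (π Σ rᵢ²), where rᵢ is the distance from the i-th random starting point to its g-th nearest population member. The sketch below simulates a hypothetical random population and applies this estimator with g = 4; it is an illustration of the method under study, not the paper's simulation design.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Simulate a randomly placed population: density 50 per unit area
    # in a 10 x 10 study region (5000 individuals).
    true_density = 50.0
    pop = rng.uniform(0, 10, size=(int(true_density * 100), 2))

    # n random starting points, kept away from the edges to limit edge effects.
    n, g = 30, 4
    starts = rng.uniform(1, 9, size=(n, 2))

    # Distance from each starting point to its g-th nearest population member.
    d = np.sqrt(((starts[:, None, :] - pop[None, :, :]) ** 2).sum(axis=2))
    r_g = np.sort(d, axis=1)[:, g - 1]

    # Ordered distance estimator: D_hat = (n*g - 1) / (pi * sum(r_i^2)).
    density_hat = (n * g - 1) / (np.pi * np.sum(r_g ** 2))
    print(round(density_hat, 1))
    ```

    Increasing g shrinks the estimator's variance (more distance information per point) but raises field effort, which is the trade-off the simulation study quantifies in recommending g between 3 and 5.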

    Models for the estimation of a 'no effect concentration'

    ENVIRONMETRICS, Issue 1 2002
    Ana M. Pires
    Abstract The use of a no effect concentration (NEC), instead of the commonly used no observed effect concentration (NOEC), has been advocated recently. In this article, models and methods for the estimation of an NEC are proposed and it is shown that the NEC overcomes many of the objections to the NOEC. The NEC is included as a threshold parameter in a non-linear model. Numerical methods are then used for point estimation and several techniques are proposed for interval estimation (based on bootstrap, profile likelihood and asymptotic normality). The adequacy of these methods is empirically confirmed by the results of a simulation study. The profile-likelihood-based interval has emerged as the best method. Finally the methodology is illustrated with data obtained from a 21-day Daphnia magna reproduction test with a reference substance, 3,4-dichloroaniline (3,4-DCA), and with a real effluent. Copyright 2002 John Wiley & Sons, Ltd. [source]
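    The threshold idea can be made concrete with a toy version of such a model. The sketch below assumes a hypothetical form (constant response up to the NEC, exponential decline above it) and a crude profiling fit: grid-search the threshold, estimating the plateau and decline rate at each candidate value. The paper's actual models, estimators, and interval methods are more careful than this illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def nec_model(x, c, b, nec):
        # Constant response up to the threshold (NEC), exponential decline above it.
        return np.where(x <= nec, c, c * np.exp(-b * (x - nec)))

    # Simulated reproduction data: no effect below concentration 2.0.
    conc = np.linspace(0.0, 6.0, 60)
    y = nec_model(conc, c=20.0, b=0.5, nec=2.0) + rng.normal(0, 0.3, conc.size)

    # Profile the threshold on a grid; conditional on each candidate NEC,
    # estimate the plateau c from low concentrations and the decline rate b
    # by log-linear regression through the origin.
    best = (np.inf, np.nan, np.nan, np.nan)
    for nec in np.linspace(0.2, 5.0, 49):
        c_hat = y[conc <= nec].mean()
        dx = conc[conc > nec] - nec
        logratio = np.log(np.clip(y[conc > nec], 1e-6, None) / c_hat)
        b_hat = max(-np.sum(logratio * dx) / np.sum(dx * dx), 1e-6)
        sse = np.sum((y - nec_model(conc, c_hat, b_hat, nec)) ** 2)
        if sse < best[0]:
            best = (sse, nec, c_hat, b_hat)

    _, nec_hat, c_hat, b_hat = best
    print(round(nec_hat, 2), round(c_hat, 1), round(b_hat, 2))
    ```

    Profiling the fit over the threshold, as here, is also the intuition behind the profile-likelihood interval the study recommends: the interval collects all NEC values whose profiled fit is not significantly worse than the best one.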

    Bootstrap calibration to improve the reliability of tests to compare sample means and variances

    ENVIRONMETRICS, Issue 8 2001
    R. I. C. Chris Francis
    Abstract The comparison of several sample means to see whether they differ significantly is a common analysis, which is not straightforward when the samples may be from non-normal distributions with different variances. A recent study found that a randomization test that attempts to approximate the distribution of F-statistics from one- and two-factor analysis of variance in the presence of unequal population variances was the best of 12 alternative tests considered. However, it sometimes suffered from excess size with data from extremely non-normal distributions. In the present article a method for improving the robustness of the test by bootstrap calibration is described for one-factor analysis of variance, and shown to be effective by a simulation study. The method is also applied with Levene's test for unequal variance by randomization. In this case the test is very robust without calibration, and calibration does not improve it. Copyright 2001 John Wiley & Sons, Ltd. [source]
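    The idea of bootstrap calibration is to correct a test whose actual size drifts from its nominal level: resample under the null, observe the distribution of the test's p-values, and replace the nominal cutoff with the appropriate quantile of that distribution. The sketch below applies this to a simple two-sample randomization test on hypothetical skewed data; it illustrates the calibration principle only, not the paper's F-statistic test or its simulation design.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def randomization_pvalue(a, b, n_perm=200, rng=rng):
        """Two-sample randomization test on the absolute difference in means."""
        observed = abs(a.mean() - b.mean())
        pooled = np.concatenate([a, b])
        count = 0
        for _ in range(n_perm):
            perm = rng.permutation(pooled)
            count += abs(perm[:a.size].mean() - perm[a.size:].mean()) >= observed
        return (count + 1) / (n_perm + 1)

    # Two skewed (exponential) samples; the null of equal means holds here.
    a = rng.exponential(1.0, 15)
    b = rng.exponential(1.0, 15)
    p_obs = randomization_pvalue(a, b)

    # Bootstrap calibration: resample from the pooled data (so the null holds),
    # collect p-values, and use their alpha-quantile as the adjusted cutoff.
    pooled = np.concatenate([a, b])
    boot_p = []
    for _ in range(100):
        boot = rng.choice(pooled, pooled.size, replace=True)
        boot_p.append(randomization_pvalue(boot[:15], boot[15:], n_perm=100))
    alpha = 0.05
    adjusted_cutoff = np.quantile(boot_p, alpha)
    reject = p_obs <= adjusted_cutoff
    print(round(p_obs, 3), round(adjusted_cutoff, 3), bool(reject))
    ```

    If the uncalibrated test were exact, the adjusted cutoff would sit near alpha itself; when the test is oversized, the cutoff shrinks below alpha, which is precisely how calibration restores the intended Type I error rate.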