Two-stage Approach


Selected Abstracts


Identification of Interacting Genes in Genome-Wide Association Studies Using a Model-Based Two-Stage Approach

ANNALS OF HUMAN GENETICS, Issue 5 2010
Zhaogong Zhang
Summary In this paper, we propose a two-stage approach based on 17 biologically plausible models to search for two-locus combinations that have significant joint effects on disease status in genome-wide association (GWA) studies. In the two-stage analyses, we only test two-locus joint effects of SNPs that show modest marginal effects. We use simulation studies to compare the power of our two-stage analysis with a single-marker analysis and with a two-stage analysis using a full model. We find that, for most plausible interaction effects, our two-stage analysis dramatically increases the power to identify two-locus joint effects compared with a single-marker analysis and a two-stage analysis based on the full model. We also compare two-stage methods with one-stage methods; our simulation results indicate that two-stage methods are more powerful. We applied our two-stage approach to a GWA study to identify genetic factors that might be relevant in the pathogenesis of sporadic amyotrophic lateral sclerosis (ALS). Our approach found two SNPs with a significant joint effect on sporadic ALS, whereas neither the single-marker analysis nor the two-stage analysis based on the full model found any significant results. [source]
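The two-stage filtering idea generalizes readily. As a minimal sketch (not the authors' 17-model procedure; the data, thresholds and the full-model stage 2 test below are invented for illustration), stage 1 screens SNPs on marginal association and stage 2 tests joint effects only among the survivors:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n, n_snps = 1000, 200
geno = rng.integers(0, 3, size=(n, n_snps))   # genotype codes 0/1/2
pheno = rng.integers(0, 2, size=n)            # case/control status

def table_test(rows, y):
    """Chi-square independence test of a (levels x 2) contingency table."""
    levels = rows.max() + 1
    table = np.zeros((levels, 2))
    for r, yi in zip(rows, y):
        table[r, yi] += 1
    rtot = table.sum(1, keepdims=True)
    ctot = table.sum(0, keepdims=True)
    expected = rtot * ctot / table.sum()
    mask = expected > 0
    stat = (((table - expected) ** 2)[mask] / expected[mask]).sum()
    df = ((rtot.ravel() > 0).sum() - 1) * ((ctot.ravel() > 0).sum() - 1)
    return chi2.sf(stat, df)

# Stage 1: liberal marginal screen on each SNP.
p_marginal = np.array([table_test(geno[:, j], pheno) for j in range(n_snps)])
kept = np.flatnonzero(p_marginal < 0.10)

# Stage 2: joint test on the 9 two-locus genotype cells for each kept pair
# (a full-model test; the paper instead tests 17 plausible sub-models).
p_joint = {(i, j): table_test(3 * geno[:, i] + geno[:, j], pheno)
           for i in kept for j in kept if i < j}
```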


Prediction of lethal/effective concentration/dose in the presence of multiple auxiliary covariates and components of variance

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 9 2007
Steve Gutreuter
Abstract Predictors of the percentile lethal/effective concentration/dose are commonly used measures of efficacy and toxicity. Typically, such quantal-response predictors (e.g., the exposure required to kill 50% of some population) are estimated from simple bioassays wherein organisms are exposed to a gradient of several concentrations of a single agent. The toxicity of an agent may be influenced by auxiliary covariates, however, and more complicated experimental designs may introduce multiple variance components. Prediction methods lag behind examples of those cases. A conventional two-stage approach consists of multiple bivariate predictions of, say, median lethal concentration, followed by regression of those predictions on the auxiliary covariates. We propose a more effective and parsimonious class of generalized nonlinear mixed-effects models for prediction of lethal/effective dose/concentration from auxiliary covariates. We demonstrate examples using data from a study of the effects of pH and additions of variable quantities of 2',5'-dichloro-4'-nitrosalicylanilide (niclosamide) on the toxicity of 3-trifluoromethyl-4-nitrophenol to larval sea lamprey (Petromyzon marinus). The new models yielded unbiased predictions, and their root-mean-squared errors (RMSEs) of prediction for the exposures required to kill 50% and 99.9% of some population were 29% and 82% smaller, respectively, than those from the conventional two-stage procedure. The model class is flexible and easily implemented using commonly available software. [source]
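For readers unfamiliar with quantal-response prediction, the sketch below fits a plain two-parameter logit and backs out the LC50 and LC99.9. It is a stand-in, not the paper's generalized nonlinear mixed-effects class (which would also let the parameters depend on covariates such as pH, with random effects for the variance components); all numbers are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # exposure concentrations
n_exposed = np.full(5, 50)                      # organisms per beaker
n_killed = np.array([2, 9, 24, 41, 49])         # observed mortalities

def logit_mortality(x, a, b):
    """P(death) as a logistic function of log concentration."""
    return 1.0 / (1.0 + np.exp(-(a + b * np.log(x))))

p_obs = n_killed / n_exposed
(a_hat, b_hat), _ = curve_fit(logit_mortality, conc, p_obs, p0=(0.0, 1.0))

# LC50 solves a + b*log(x) = 0; LC99.9 solves a + b*log(x) = logit(0.999).
lc50 = np.exp(-a_hat / b_hat)
lc999 = np.exp((np.log(0.999 / 0.001) - a_hat) / b_hat)
print(f"LC50 ~ {lc50:.2f}, LC99.9 ~ {lc999:.2f}")
```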


Patterns in the spawning of cod (Gadus morhua L.), sole (Solea solea L.) and plaice (Pleuronectes platessa L.) in the Irish Sea as determined by generalized additive modelling

FISHERIES OCEANOGRAPHY, Issue 1 2000

Eleven ichthyoplankton cruises covering most of the Irish Sea were undertaken during the period February to June 1995. To identify spawning localities and investigate temporal trends in egg production, the data on stage 1A egg distributions of cod (Gadus morhua), plaice (Pleuronectes platessa) and sole (Solea solea) were modelled using generalized additive models (GAMs). A two-stage approach was adopted in which presence/absence was first modelled as a binary process and a GAM surface was subsequently fitted to egg production (conditional on presence). We demonstrate that this approach can be used to model egg production both in space and in time. The spawning sites for cod, plaice and sole in the Irish Sea were defined in terms of the probability of egg occurrence. For cod, we demonstrate that, by integrating under predicted egg-production surfaces, a cumulative production curve can be generated and used to define percentiles of production and thus delimit the extent of the spawning season. However, for plaice and sole, the surveys did not fully cover the spawning season, and the limitations this imposes on GAM modelling of these data are discussed. Comparison of the spawning sites in 1995 with historical data suggests that the locations of cod, plaice and sole egg production in the Irish Sea have probably remained relatively constant over the last 30 years. [source]
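A minimal sketch of this delta-type two-stage model may help: a binary model for presence, a positive-abundance model conditional on presence, and their product as the unconditional prediction. Plain GLMs stand in for the paper's GAM smoothers, and the data are synthetic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
lat, lon = rng.uniform(51, 55, n), rng.uniform(-6, -3, n)
X = sm.add_constant(np.column_stack([lat, lon]))

present = rng.binomial(1, 0.4, n)                        # stage 1 response
counts = np.where(present, rng.gamma(2.0, 5.0, n), 0.0)  # stage 2 response

# Stage 1: binary model for egg presence/absence.
stage1 = sm.GLM(present, X, family=sm.families.Binomial()).fit()

# Stage 2: positive egg production, fitted only where eggs occurred.
pos = counts > 0
stage2 = sm.GLM(counts[pos], X[pos],
                family=sm.families.Gamma(link=sm.families.links.Log())).fit()

# Unconditional expectation = P(presence) x E[production | presence].
expected = stage1.predict(X) * stage2.predict(X)
```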


Surface deformation due to loading of a layered elastic half-space: a rapid numerical kernel based on a circular loading element

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
E. Pan
SUMMARY This study is motivated by a desire to develop a fast numerical algorithm for computing the surface deformation field induced by surface pressure loading on a layered, isotropic, elastic half-space. The approach that we pursue here is based on a circular loading element: an arbitrary surface pressure field applied within a finite surface domain is represented by a large number of circular loading elements, all with the same radius, within each of which the applied downward pressure (normal stress) is laterally uniform. The key practical requirement of this approach is the ability to solve for the displacement field due to a single circular load at very large numbers of points (or 'stations') at very low computational cost. This elemental problem is axisymmetric, so the displacement vector field consists of radial and vertical components, both of which are functions only of the radial coordinate r. We achieve high computational speed using a novel two-stage approach that we call the sparse evaluation and massive interpolation (SEMI) method. First, we use a high-accuracy but computationally expensive method to compute the displacement vectors at a limited number of r values (called control points or knots); then we use a variety of fast interpolation methods to determine the displacements at much larger numbers of intervening points. The accurate solutions achieved at the control points are framed in terms of cylindrical vector functions, Hankel transforms and propagator matrices. Adaptive Gauss quadrature is used to handle the oscillatory nature of the integrands in an optimal manner. To extend these exact solutions via interpolation, we divide the r-axis into three zones and employ a different interpolation algorithm in each zone. The magnitude of the interpolation errors is controlled by the number, M, of control points. For M = 54, the maximum RMS relative error of the SEMI method is less than 0.2 per cent, and the displacement field can be evaluated at 100 000 stations about 1200 times faster than if the direct (exact) solution were evaluated at each station; for M = 99, which corresponds to a maximum RMS relative error of less than 0.03 per cent, the SEMI method is about 700 times faster than the direct solution. [source]
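The SEMI idea itself is compact enough to sketch: evaluate an expensive kernel exactly at M knots, interpolate at many stations, and check the RMS relative error. The stand-in kernel below replaces the true layered half-space solution (which would come from Hankel transforms and propagator matrices), and a single cubic spline replaces the paper's three-zone interpolation scheme.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def expensive_displacement(r):
    """Slowly decaying oscillatory stand-in for the exact axisymmetric kernel."""
    return np.exp(-r / 50.0) * np.cos(r / 10.0) / (1.0 + r)

M = 54                                   # number of control points (knots)
knots = np.geomspace(0.1, 500.0, M)      # denser near the load
exact_at_knots = expensive_displacement(knots)
spline = CubicSpline(knots, exact_at_knots)

stations = np.geomspace(0.1, 500.0, 100_000)
fast = spline(stations)                  # massive interpolation step
true = expensive_displacement(stations)
rms_rel = np.sqrt(np.mean((fast - true) ** 2)) / np.sqrt(np.mean(true ** 2))
print(f"RMS relative error with M={M}: {rms_rel:.2e}")
```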


Two-stage computing budget allocation approach for the response surface method

INTERNATIONAL TRANSACTIONS IN OPERATIONAL RESEARCH, Issue 6 2007
J. Peng
Abstract Response surface methodology (RSM) is one of the main statistical approaches for searching for an input combination that optimizes the simulation output. In the early stages of RSM, an iterative steepest-ascent search procedure is frequently used. In this paper, we attempt to improve this procedure by considering the more realistic case in which computing budget constraints apply, and we formulate a new computing budget allocation problem that addresses how to allocate the budget across the design points in the local region of experimentation. We propose a two-stage computing budget allocation approach, which uses a limited budget to estimate the response surface in the first stage and then uses the remaining budget to improve the lower bound of the estimated response at the center of the next design region in the second stage. Several numerical experiments compare the two-stage approach with the regular factorial design, which allocates budget equally to each design point. The results show that our two-stage allocation outperforms the equal allocation, especially when the system noise is large. [source]
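A rough sketch of the budget split follows. The stage 2 rule here, re-sampling the estimated steepest-ascent point with the leftover budget, is a simplification of the paper's lower-bound criterion, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def true_response(x):
    return -(x[0] - 3) ** 2 - (x[1] - 2) ** 2   # unknown optimum at (3, 2)

def noisy(x):
    return true_response(x) + rng.normal(0, 2.0)

design = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)  # 2^2 factorial
total_budget, stage1_budget = 200, 80

# Stage 1: equal replication to estimate a first-order surface.
reps1 = stage1_budget // len(design)
y1 = np.array([np.mean([noisy(x) for _ in range(reps1)]) for x in design])
beta = np.linalg.lstsq(np.column_stack([np.ones(4), design]), y1, rcond=None)[0]

# Steepest-ascent step from the current centre along the estimated gradient.
step = beta[1:] / np.linalg.norm(beta[1:])
next_centre = step * 2.0

# Stage 2: spend the remaining budget sharpening the estimate at next_centre.
reps2 = total_budget - stage1_budget
estimate = np.mean([noisy(next_centre) for _ in range(reps2)])
print(f"next centre {next_centre.round(2)}, estimated response {estimate:.2f}")
```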


Divergence in alternative Hicksian welfare measures: the case of revealed preference for public amenities

JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2002
Sudip Chattopadhyay
This paper investigates the divergence between the two Hicksian welfare measures of a non-traded amenity improvement associated with housing. First, the Hicksian surplus measures for amenity changes are developed analytically, based on explicit specification of the utility structure. A hedonic two-stage approach is then applied to show empirically that, for quantity changes, the divergence in real markets is small, in contrast to hypothetical markets. The paper also develops analytical expressions for the income and substitution effects and shows empirically that, for a given income effect, the greater the substitution effect, the smaller the divergence between the two measures. Copyright © 2002 John Wiley & Sons, Ltd. [source]
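A toy example under an assumed utility function (not the paper's hedonic specification) makes the divergence concrete: with u(x, q) = x^alpha * q^beta, where x is a composite good priced at 1 and q the amenity, the two Hicksian surpluses for a change q0 -> q1 have closed forms, and their gap grows with the income effect.

```python
# Compensating surplus: income reduction after the improvement that restores
# the original utility. Equivalent surplus: income increase before the
# improvement that matches the new utility. Derived from y^alpha * q^beta.
alpha, beta = 0.9, 0.1
y, q0, q1 = 50_000.0, 1.0, 1.5

cs = y * (1 - (q0 / q1) ** (beta / alpha))
es = y * ((q1 / q0) ** (beta / alpha) - 1)
print(f"CS = {cs:,.0f}, ES = {es:,.0f}, divergence = {es - cs:,.0f}")
# A larger beta/alpha (a stronger income effect) widens the ES-CS gap.
```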


Simulation and extremal analysis of hurricane events

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2000
E. Casson
In regions affected by tropical storms, the damage caused by hurricane winds can be catastrophic. Consequently, accurate estimates of hurricane activity in such regions are vital. Unfortunately, the severity of events means that wind speed data are scarce and unreliable, even by the standards usual for extreme value analysis. In contrast, records of atmospheric pressures are more complete. This suggests a two-stage approach: first, the development of a model describing spatiotemporal patterns of wind-field behaviour for hurricane events; then the simulation of such events, using meteorological climate models, to obtain a realization of associated wind speeds whose extremal characteristics are summarized. This is not a new idea, but we apply careful statistical modelling to each aspect of the model development and simulation, taking the Gulf and Atlantic coastlines of the USA as our study area. Moreover, we address for the first time the issue of spatial dependence in extremes of hurricane events, which we find to have substantial implications for regional risk assessments. [source]
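The extremal-analysis step can be sketched at a single site: fit a generalized extreme value (GEV) distribution to simulated annual maximum wind speeds and read off a return level. Note that a single-site fit ignores exactly the spatial dependence the paper highlights; the maxima below are synthetic, standing in for the model's simulated output.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)
# 60 years of synthetic annual maximum wind speeds (m/s) at one site.
annual_max_wind = genextreme.rvs(c=-0.1, loc=40, scale=8, size=60,
                                 random_state=rng)

# Fit the GEV and estimate the 100-year return level.
c, loc, scale = genextreme.fit(annual_max_wind)
return_100yr = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"Estimated 100-year wind speed: {return_100yr:.1f} m/s")
```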


Serum pharmacokinetics of oxytetracycline in sheep and calves and tissue residues in sheep following a single intramuscular injection of a long-acting preparation

JOURNAL OF VETERINARY PHARMACOLOGY & THERAPEUTICS, Issue 6 2000
A. L. CRAIGMILL
The pharmacokinetics of a long-acting oxytetracycline (OTC) formulation (Liquamycin® LA-200®) injected intramuscularly (i.m.) at a dose of 20 mg/kg were determined in four calves and 24 sheep, to determine whether the approved label dose for cattle provided a similar serum time/concentration profile in sheep. The AUC for the calves was 168 ± 14.6 µg·h/mL and was significantly less than the AUC for sheep (209 ± 43 µg·h/mL). Using the standard two-stage approach and a one-compartment model, the mean Cmax was 5.2 ± 0.8 µg/mL for the calves and 6.1 ± 1.3 µg/mL for the sheep. The mean terminal-phase rate constants were 0.031 and 0.033 h⁻¹, and the Vdss were 3.3 and 3.08 L/kg, for the calves and sheep respectively. Analysis of the data using the standard two-stage approach, the naive pooled-data approach and a population model gave very similar results for both the cattle and sheep data. Sheep tissue residues of OTC in serum, liver, kidney, fat, muscle and the injection site were measured at 1, 2, 3, 5, 7 and 14 days after a single i.m. injection of 20 mg/kg OTC. Half-lives of OTC residues in the tissues were 38.6, 33.4, 28.6, 25.4, 21.3 and 19.9 h for the injection site, kidney, muscle, liver, mesenteric fat and renal fat, respectively. The ratio of tissue to serum concentration was fairly consistent at all slaughter times, except for the fat and injection sites. The mean ratios were 1.72, 4.19, 0.11, 0.061, 0.84 and 827 for the liver, kidney, renal fat, mesenteric fat, muscle and injection sites, respectively. The tissue concentrations of OTC residues were below the established cattle tolerances for OTC in liver (6 p.p.m.), muscle (2 p.p.m.) and kidney (12 p.p.m.) by 48 h, and in injection-site muscle by 14 days after the single i.m. injection of 20 mg/kg. [source]
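For readers unfamiliar with the standard two-stage approach mentioned above, a minimal sketch: fit a one-compartment model with first-order absorption to each animal separately, then summarize the individual estimates. Times, parameter values and noise below are invented, not study data.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, ka, ke, A):
    """Serum concentration after an extravascular dose (assumes ka != ke)."""
    return A * ka / (ka - ke) * (np.exp(-ke * t) - np.exp(-ka * t))

rng = np.random.default_rng(4)
t = np.array([1, 2, 4, 8, 12, 24, 48, 72, 96], float)  # hours post-injection

# Stage 1: individual fits.
estimates = []
for _ in range(6):                                      # six animals
    true_c = one_compartment(t, ka=0.5, ke=0.033, A=8.0)
    obs = true_c * np.exp(rng.normal(0, 0.1, t.size))   # lognormal noise
    params, _ = curve_fit(one_compartment, t, obs, p0=(0.4, 0.05, 6.0),
                          maxfev=10_000)
    estimates.append(params)

# Stage 2: population summary = mean and SD of the individual estimates.
estimates = np.array(estimates)
print("mean (ka, ke, A):", estimates.mean(axis=0))
print("SD   (ka, ke, A):", estimates.std(axis=0, ddof=1))
```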


An optimal adaptive design to address local regulations in global clinical trials,

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 3 2010
Xiaolong Luo
Abstract After multi-regional clinical trials (MRCTs) have demonstrated an overall significant effect, evaluation of region-specific effects is often important. Recent guidance (see, e.g., [1]) from regulatory authorities regarding evaluation of possible country-specific effects has led to research on statistical designs that incorporate such evaluations in MRCTs. These designs are intended to use the MRCTs to address the requirements for global registration of a medicinal product. Adding a regional requirement can change the probability of declaring a positive effect for the region, both when there is in fact no treatment difference and when there is a true difference within the region. In this paper, we first quantify those probability structures based on the guidance issued by the Ministry of Health, Labour and Welfare (MHLW) of Japan. An adaptive design is proposed that takes those probabilities into account and optimizes the efficiency for regional objectives. This two-stage approach incorporates comprehensive global objectives into an integrated study design and may mitigate the need for a separate local bridging study. A procedure is used to optimize region-specific enrollment based on an objective function, and the overall sample size requirement is assessed. Simulation analyses illustrate the performance of the proposed study design. Copyright © 2010 John Wiley & Sons, Ltd. [source]
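The kind of probability being quantified can be illustrated with a small Monte Carlo. The criterion coded below, requiring the observed regional effect to retain at least a fraction pi of the overall effect, is one common reading of the MHLW guidance; for simplicity the simulation ignores the correlation between the regional and overall estimates, and it is not the paper's optimal design.

```python
import numpy as np

rng = np.random.default_rng(5)
n_total, frac_region = 1000, 0.15       # per-arm sizes; regional share (assumed)
delta, sigma, pi = 0.2, 1.0, 0.5        # true effect, SD, consistency fraction
n_region = int(n_total * frac_region)

def one_trial():
    """Simulate overall and regional treatment-effect estimates."""
    overall = rng.normal(delta, sigma * np.sqrt(2 / n_total))
    regional = rng.normal(delta, sigma * np.sqrt(2 / n_region))
    return regional >= pi * overall

prob = np.mean([one_trial() for _ in range(100_000)])
print(f"P(regional consistency criterion met) ~ {prob:.3f}")
```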


Confidence intervals for ratios of AUCs in the case of serial sampling: a comparison of seven methods

PHARMACEUTICAL STATISTICS: THE JOURNAL OF APPLIED STATISTICS IN THE PHARMACEUTICAL INDUSTRY, Issue 1 2009
Thomas Jaki
Abstract Pharmacokinetic studies are commonly performed using the two-stage approach. The first stage involves estimation of pharmacokinetic parameters, such as the area under the concentration-versus-time curve (AUC), for each analysis subject separately; the second stage uses the individual parameter estimates for statistical inference. This two-stage approach is not applicable in sparse-sampling situations where only one sample is available per analysis subject, as is common in non-clinical in vivo studies. In a serial sampling design, only one sample is taken from each analysis subject. A simulation study was carried out to assess the coverage, power and type I error of seven methods for constructing two-sided 90% confidence intervals for ratios of two AUCs assessed in a serial sampling design, which can be used to assess bioequivalence in this parameter. Copyright © 2008 John Wiley & Sons, Ltd. [source]
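A sketch of one serial-sampling construction may help: a Bailer-type AUC estimate (trapezoidal weights applied to per-timepoint means, with a variance built from the per-timepoint sampling variances) and a simple log-scale delta-method interval for the AUC ratio. This is only one of the simpler candidates among the seven methods compared, and the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
times = np.array([0.5, 1, 2, 4, 8, 24], float)

def trap_weights(t):
    """Trapezoidal-rule weights over the sampled time span."""
    w = np.empty_like(t)
    w[0] = (t[1] - t[0]) / 2
    w[-1] = (t[-1] - t[-2]) / 2
    w[1:-1] = (t[2:] - t[:-2]) / 2
    return w

def auc_and_var(conc_by_time):
    """conc_by_time: one array per timepoint, each from distinct animals."""
    w = trap_weights(times)
    means = np.array([c.mean() for c in conc_by_time])
    vars_ = np.array([c.var(ddof=1) / c.size for c in conc_by_time])
    return w @ means, w ** 2 @ vars_

# Two treatment groups, 5 animals sacrificed per timepoint.
conc_a = [10 * np.exp(-0.1 * t) * np.exp(rng.normal(0, 0.15, 5)) for t in times]
conc_b = [12 * np.exp(-0.1 * t) * np.exp(rng.normal(0, 0.15, 5)) for t in times]
auc_a, var_a = auc_and_var(conc_a)
auc_b, var_b = auc_and_var(conc_b)

# Delta-method 90% CI for the ratio AUC_b / AUC_a on the log scale.
se_log = np.sqrt(var_a / auc_a ** 2 + var_b / auc_b ** 2)
z = norm.ppf(0.95)
ratio = auc_b / auc_a
ci = (ratio * np.exp(-z * se_log), ratio * np.exp(z * se_log))
print(f"AUC ratio = {ratio:.3f}, 90% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```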


Optimal Robust Two-Stage Designs for Genome-Wide Association Studies

ANNALS OF HUMAN GENETICS, Issue 6 2009
Thuy Trang Nguyen
Summary Optimal robust two-stage designs for genome-wide association studies are proposed, using the maximum of the recessive, additive and dominant linear trend test statistics. These designs combine cost-saving two-stage genotyping with robustness against misspecification of the genetic model, and they are much more efficient than designs based on a single model-specific test statistic in detecting multiple loci with different modes of inheritance. For a given power of 90%, typical cost savings of 34% can be realised by increasing the total sample size by about 13% but genotyping only about half of the sample for the full marker set in the first stage and carrying forward about 0.06% of the markers to the second-stage analysis. We also present robust two-stage designs providing optimal allocation of a limited budget for pre-existing samples. If a sample is available that would yield a power of 90% when fully genotyped, genotyping only half of the sample because of a limited budget will typically cause a loss of power of more than 55%. Using an optimal two-stage approach in the same sample under the same budget restrictions limits the loss of power to less than 10%. In general, the optimal proportion of markers to be followed up in the second stage depends strongly on the cost ratio of chips to individual genotyping, while the design parameters of the optimal designs (total sample size, first-stage proportion, first- and second-stage significance limits) do not depend much on the genetic model assumptions. [source]
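The robust MAX statistic at the core of these designs is easy to sketch: compute the Cochran-Armitage trend test under recessive, additive and dominant scores and take the maximum. The genotype counts below are invented, and the null distribution of the maximum (needed for the design's significance limits) is omitted; it would typically come from permutation or multivariate-normal approximation.

```python
import numpy as np

def trend_stat(cases, controls, scores):
    """Cochran-Armitage trend test statistic (standard normal under H0)."""
    cases, controls = np.asarray(cases, float), np.asarray(controls, float)
    totals = cases + controls
    n_case, n = cases.sum(), cases.sum() + controls.sum()
    p = n_case / n
    num = (scores * (cases - p * totals)).sum()
    s_mean = (scores * totals).sum() / n
    var = p * (1 - p) * ((scores - s_mean) ** 2 * totals).sum()
    return num / np.sqrt(var)

cases = [120, 230, 150]      # genotype counts aa, Aa, AA among cases
controls = [160, 240, 100]   # and among controls

score_sets = {"recessive": [0, 0, 1], "additive": [0, 1, 2],
              "dominant": [0, 1, 1]}
stats = {name: abs(trend_stat(cases, controls, np.array(s, float)))
         for name, s in score_sets.items()}
max_stat = max(stats.values())
print(stats, "MAX =", round(max_stat, 3))
```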


The use of a geographical information system for land-based aquaculture planning

AQUACULTURE RESEARCH, Issue 4 2002
Ian McLeod
Abstract Site selection for aquaculture planning is a complex task involving the identification of areas that are economically, socially and environmentally suitable, available to aquaculture and commercially practicable. This paper reports on a study of the use of a geographic information system (GIS) to assist in aquaculture planning. Using a case study of site selection for land-based shrimp farming within the Australian coastal zone, we demonstrate that a GIS has potential to assist aquaculture planning. Our analysis is based on a sequential, two-stage approach. The first stage eliminates the grossly unsuitable portion of the study area through a preselection using low-resolution, cheap and easily available data. The second stage then focuses on and ranks the remaining area using high-resolution, possibly more expensive data. Finally, we use the GIS to present the results of the analysis in an easily accessible form. [source]
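The sequential screen translates naturally to raster layers. A minimal sketch, with invented layers, thresholds and weights: stage 1 applies hard exclusions from coarse data; stage 2 ranks the surviving cells with a weighted suitability score from finer data.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (200, 200)                                 # grid cells
elevation = rng.uniform(0, 50, shape)              # m above sea level
dist_coast = rng.uniform(0, 20, shape)             # km
soil_clay = rng.uniform(0, 1, shape)               # clay fraction
water_temp = rng.uniform(15, 32, shape)            # deg C

# Stage 1: hard exclusions from low-resolution, cheap data.
candidate = (elevation < 20) & (dist_coast < 5) & (soil_clay > 0.3)

# Stage 2: weighted ranking of remaining cells from high-resolution data.
temp_score = 1 - np.abs(water_temp - 28) / 13      # best near 28 deg C
clay_score = np.clip((soil_clay - 0.3) / 0.7, 0, 1)
suitability = np.where(candidate, 0.6 * temp_score + 0.4 * clay_score, np.nan)

best = np.unravel_index(np.nanargmax(suitability), shape)
print(f"{candidate.mean():.0%} of cells pass stage 1; best cell at {best}")
```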


Real-space protein-model completion: an inverse-kinematics approach

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 1 2005
Henry van den Bedem
Rapid protein-structure determination relies greatly on software that can automatically build a protein model into an experimental electron-density map. In favorable circumstances, various software systems are capable of building over 90% of the final model. However, completeness falls off rapidly with the resolution of the diffraction data. Manual completion of these partial models is usually feasible, but is time-consuming and prone to subjective interpretation. Except for the N- and C-termini of the chain, the end points of each missing fragment are known from the initial model. Hence, fitting fragments reduces to an inverse-kinematics problem. A method has been developed that combines fast inverse-kinematics algorithms with a real-space torsion-angle refinement procedure in a two-stage approach to fit missing main-chain fragments into the electron density between two anchor points. The first stage samples a large number of closing conformations, guided by the electron density. These candidates are ranked according to density fit. In a subsequent refinement stage, optimization steps are projected onto a carefully chosen subspace of conformation space to preserve rigid geometry and closure. Experimental results show that fitted fragments are in excellent agreement with the final refined structure for lengths of up to 12–15 residues in areas of weak or ambiguous electron density, even at medium to low resolution. [source]
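Loop closure itself can be illustrated with cyclic coordinate descent (CCD), a simple inverse-kinematics algorithm in the same family as the fast closure methods the paper builds on: rotate one joint at a time so the chain's free end moves toward the known anchor. The 2-D toy chain below stands in for a peptide fragment with fixed bond geometry; rotations are rigid, so segment lengths are preserved.

```python
import numpy as np

def ccd_close(joints, target, iters=200, tol=1e-4):
    """Close a 2-D kinematic chain onto a target anchor point via CCD."""
    joints = joints.astype(float).copy()          # (n, 2) joint positions
    for _ in range(iters):
        for i in range(len(joints) - 2, -1, -1):
            pivot = joints[i]
            to_end = joints[-1] - pivot
            to_tgt = target - pivot
            # Angle that swings the free end toward the target.
            ang = (np.arctan2(to_tgt[1], to_tgt[0])
                   - np.arctan2(to_end[1], to_end[0]))
            c, s = np.cos(ang), np.sin(ang)
            rot = np.array([[c, -s], [s, c]])
            joints[i + 1:] = (joints[i + 1:] - pivot) @ rot.T + pivot
            if np.linalg.norm(joints[-1] - target) < tol:
                return joints
    return joints

chain = np.array([[0, 0], [1, 0], [2, 0], [3, 0], [4, 0]])  # extended chain
closed = ccd_close(chain, target=np.array([2.5, 1.5]))
print("end point after closure:", closed[-1].round(4))
```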


A COMPARISON OF ANALYSIS METHODS FOR LATE-STAGE VARIETY EVALUATION TRIALS

AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 2 2010
Sue J. Welham
Summary The statistical analysis of late-stage variety evaluation trials using a mixed model is described, with one- or two-stage approaches to the analysis. Two sets of trials, from Australia and the UK, were used to provide realistic scenarios for a simulation study to evaluate the different methods of analysis. This study showed that a one-stage approach gave the most accurate predictions of variety performance overall or within each environment, across a range of models, as measured by mean squared error of prediction or realized genetic gain. A weighted two-stage approach performed adequately for variety predictions both overall and within environments, but a two-stage unweighted approach performed poorly in both cases. A generalized heritability measure was developed to compare methods. [source]
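The weighted-versus-unweighted contrast can be sketched directly: stage 1 produces a mean and standard error per variety within each trial, and stage 2 combines across trials either with inverse-variance weights or without. With trials of unequal precision, the unweighted combination suffers, consistent with the study's simulations; the data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(8)
n_var, n_trials, reps = 10, 6, 4
true_effect = rng.normal(0, 1, n_var)
trial_sd = rng.uniform(0.3, 2.0, n_trials)        # trials differ in precision

means = np.empty((n_trials, n_var))
ses = np.empty((n_trials, n_var))
for t in range(n_trials):
    plots = true_effect + rng.normal(0, trial_sd[t], (reps, n_var))
    means[t] = plots.mean(axis=0)                  # stage 1 variety means
    ses[t] = plots.std(axis=0, ddof=1) / np.sqrt(reps)

# Stage 2: combine across trials, weighted vs unweighted.
w = 1 / ses ** 2
weighted = (w * means).sum(axis=0) / w.sum(axis=0)
unweighted = means.mean(axis=0)

def msep(est):
    """Mean squared error of prediction against the true variety effects."""
    return np.mean((est - true_effect) ** 2)

print(f"MSEP weighted: {msep(weighted):.3f}, unweighted: {msep(unweighted):.3f}")
```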


Impacts of climate change on lower Murray irrigation,

AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 3 2009
Jeff Connor
This article evaluates irrigated agriculture sector responses and the resultant economic impacts of climate change for part of the Murray Darling Basin in Australia. A water balance model is used to predict basin inflows under mild, moderate and severe climate change scenarios involving 1, 2 and 4°C warming; the predicted reductions in inflows are 13, 38 and 63%, respectively. Impacts on irrigated agricultural production and profitability are estimated with a mathematical programming model using a two-stage approach that simultaneously estimates short- and long-run adjustments. The model accounts for a range of adaptive responses, including deficit irrigation, temporarily fallowing some areas, permanently reducing the irrigated area and changing the mix of crops. The results suggest that relatively low-cost adaptation strategies are available for a moderate reduction in water availability, and thus the costs of such a reduction are likely to be relatively small. In the more severe climate change scenarios, greater costs are estimated. Predicted adaptations include a reduction in the total area irrigated and investment in efficient irrigation. A shift away from perennial and towards annual crops is also predicted, as the latter can be managed more profitably when water allocations in some years are very low. [source]
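The single-season core of such a model can be sketched as a small linear program (the paper's two-stage mathematical program is much richer, covering short- and long-run adjustments and interannual allocation risk): choose irrigated areas per crop to maximize gross margin under a water limit. Margins, water duties and capacities below are invented.

```python
import numpy as np
from scipy.optimize import linprog

crops = ["vines (perennial)", "citrus (perennial)", "pasture (annual)"]
margin = np.array([2500.0, 2000.0, 600.0])   # gross margin, $/ha
water_use = np.array([7.0, 9.0, 10.0])       # water duty, ML/ha
land_cap = np.array([40.0, 30.0, 100.0])     # ha available per crop

for inflow_cut in (0.13, 0.38, 0.63):        # scenario inflow reductions
    water_cap = 1000.0 * (1 - inflow_cut)    # ML available this season
    res = linprog(c=-margin,                 # linprog minimizes, so negate
                  A_ub=water_use[None, :], b_ub=[water_cap],
                  bounds=list(zip([0.0] * 3, land_cap)),
                  method="highs")
    print(f"cut {inflow_cut:.0%}: profit ${-res.fun:,.0f}, "
          f"ha by crop = {res.x.round(1)}")
```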

