Simple Estimate (simple + estimate)

Selected Abstracts


Simple estimates of haplotype relative risks in case-control data

GENETIC EPIDEMIOLOGY, Issue 6 2006
Benjamin French
Abstract Methods of varying complexity have been proposed to efficiently estimate haplotype relative risks in case-control data. Our goal was to compare methods that estimate associations between disease conditions and common haplotypes in large case-control studies such that haplotype imputation is done once as a simple data-processing step. We performed a simulation study based on haplotype frequencies for two renin-angiotensin system genes. The iterative and noniterative methods we compared involved fitting a weighted logistic regression, but differed in how the probability weights were specified. We also quantified the amount of ambiguity in the simulated genes. For one gene, there was essentially no uncertainty in the imputed diplotypes and every method performed well. For the other, ~60% of individuals had an unambiguous diplotype, and ~90% had a highest posterior probability greater than 0.75. For this gene, all methods performed well under no genetic effects, moderate effects, and strong effects tagged by a single nucleotide polymorphism (SNP). Noniterative methods produced biased estimates under strong effects not tagged by an SNP. For the most likely diplotype, median bias of the log-relative risks ranged between −0.49 and 0.22 over all haplotypes. For all possible diplotypes, median bias ranged between −0.73 and 0.08. Results were similar under interaction with a binary covariate. Noniterative weighted logistic regression provides valid tests for genetic associations and reliable estimates of modest effects of common haplotypes, and can be implemented in standard software. The potential for phase ambiguity does not necessarily imply uncertainty in imputed diplotypes, especially in large studies of common haplotypes. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
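The noniterative approach this abstract describes can be sketched as follows: impute diplotypes once, expand each subject into one row per compatible diplotype, weight each row by its fixed posterior probability, and fit a weighted logistic regression. The sketch below is a minimal illustration under assumed toy data; the helper name, the synthetic haplotype counts, and the plain gradient-ascent fitter are all hypothetical, not the authors' implementation (in practice one would use standard software, as the abstract notes).

```python
import math

def fit_weighted_logistic(X, y, w, iters=2000, lr=0.5):
    """Weighted logistic regression by gradient ascent on the weighted log-likelihood."""
    p = len(X[0])
    n = len(X)
    beta = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi, wi in zip(X, y, w):
            eta = sum(b * x for b, x in zip(beta, xi))
            mu = 1.0 / (1.0 + math.exp(-eta))
            for j in range(p):
                grad[j] += wi * (yi - mu) * xi[j]
        beta = [b + lr * g / n for b, g in zip(beta, grad)]
    return beta

# Each subject: (case status, [(risk-haplotype copies, posterior prob), ...]).
# Ambiguous subjects contribute several weighted rows; the weights are fixed
# once as a data-processing step (noniterative), not updated during fitting.
subjects = (
    [(1, [(1, 1.0)])] * 30 + [(1, [(0, 1.0)])] * 10 +
    [(0, [(1, 1.0)])] * 10 + [(0, [(0, 1.0)])] * 30 +
    [(1, [(1, 0.7), (0, 0.3)])] * 5          # phase-ambiguous cases
)
X, y, w = [], [], []
for status, diplotypes in subjects:
    for copies, prob in diplotypes:
        X.append([1.0, float(copies)])
        y.append(float(status))
        w.append(prob)

beta = fit_weighted_logistic(X, y, w)
print("log-relative-risk estimate:", round(beta[1], 2))
```

With these toy counts the risk haplotype is enriched in cases, so the fitted log-relative risk is clearly positive.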


Estimating the Accuracy of Jury Verdicts

JOURNAL OF EMPIRICAL LEGAL STUDIES, Issue 2 2007
Bruce D. Spencer
Average accuracy of jury verdicts for a set of cases can be studied empirically and systematically even when the correct verdict cannot be known. The key is to obtain a second rating of the verdict, for example, the judge's, as in the recent study of criminal cases in the United States by the National Center for State Courts (NCSC). That study, like the famous Kalven-Zeisel study, showed only modest judge-jury agreement. Simple estimates of jury accuracy can be developed from the judge-jury agreement rate; the judge's verdict is not taken as the gold standard. Although the estimates of accuracy are subject to error, under plausible conditions they tend to overestimate the average accuracy of jury verdicts. The jury verdict was estimated to be accurate in no more than 87 percent of the NCSC cases (which, however, should not be regarded as a representative sample with respect to jury accuracy). More refined estimates, including false conviction and false acquittal rates, are developed with models using stronger assumptions. For example, the conditional probability that the jury incorrectly convicts given that the defendant truly was not guilty (a "Type I error") was estimated at 0.25, with an estimated standard error (s.e.) of 0.07; the conditional probability that a jury incorrectly acquits given that the defendant truly was guilty ("Type II error") was estimated at 0.14 (s.e. 0.03); and the difference was estimated at 0.12 (s.e. 0.08). The estimated number of defendants in the NCSC cases who truly are not guilty but are convicted does seem to be smaller than the number who truly are guilty but are acquitted. The conditional probability of a wrongful conviction, given that the defendant was convicted, is estimated at 0.10 (s.e. 0.03). [source]
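Under a bare two-state Bayes model (an illustration only; the paper's refined estimates rest on stronger and more careful assumptions than this), the quoted error rates combine with a prior proportion of truly-not-guilty defendants to give the conditional probability of wrongful conviction among convictions:

```python
def p_not_guilty_given_convicted(prior_not_guilty, p_type1=0.25, p_type2=0.14):
    """Bayes' rule with the abstract's point estimates:
    P(convict | not guilty) = 0.25 (Type I), P(acquit | guilty) = 0.14 (Type II)."""
    p_convict_guilty = 1.0 - p_type2              # P(convict | guilty) = 0.86
    num = prior_not_guilty * p_type1              # convicted and not guilty
    den = num + (1.0 - prior_not_guilty) * p_convict_guilty
    return num / den

# A prior not-guilty share of roughly 0.28 (a hypothetical value, not a figure
# from the paper) reproduces the abstract's 0.10 wrongful-conviction rate
# among convicted defendants.
print(round(p_not_guilty_given_convicted(0.28), 3))
```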


POST-HARVEST RIPARIAN BUFFER RESPONSE: IMPLICATIONS FOR WOOD RECRUITMENT MODELING AND BUFFER DESIGN

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 1 2006
Michael K. Liquori
ABSTRACT: Despite the importance of riparian buffers in providing aquatic functions to forested streams, few studies have sought to capture key differences in ecological and geomorphic processes between buffered sites and forested conditions. This study examines post-harvest buffer conditions from 20 randomly selected harvest sites within a managed tree farm in the Cascade Mountains of western Washington. Post-harvest wind-derived treefall rates in buffers up to three years post-harvest averaged 268 trees/km/year, 26 times greater than competition-induced mortality rate estimates. Treefall rates and stem breakage were strongly tied to tree species and relatively unaffected by stream direction. Observed treefall direction is strongly biased toward the channel, irrespective of channel or buffer orientation. Fall direction bias can deliver significantly more wood recruitment relative to randomly directed treefall, suggesting that models that utilize the random fall assumption will significantly underpredict recruitment. A simple estimate of post-harvest wood recruitment from buffers can be obtained from species-specific treefall and breakage rates, combined with bias-corrected recruitment probability as a function of source distance from the channel. Post-harvest wind effects may reduce the standing density of trees enough to significantly reduce or eliminate competition mortality and thus indirectly alter bank erosion rates, resulting in substantially different wood recruitment dynamics from buffers as compared to unmanaged forests. [source]
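The contrast between random and channel-biased treefall can be illustrated with a standard geometric delivery-probability argument (a sketch under assumed numbers; the arc-based bias model below is an illustrative simplification, not the study's fitted model): a tree of height h standing at distance d from the channel reaches the channel only if it falls within an arc of half-angle arccos(d/h) around the channel-ward direction.

```python
import math

def delivery_prob(d, h, half_arc=math.pi):
    """Probability that a felled tree of height h at distance d from the channel
    reaches the channel, when fall directions are uniform within +/- half_arc of
    the channel-ward direction. half_arc = pi reproduces fully random fall."""
    if d >= h:
        return 0.0
    reach = math.acos(d / h)          # half-angle of the channel-reaching arc
    return min(reach, half_arc) / half_arc

# A 30 m tree standing 15 m from the channel (hypothetical values):
random_fall = delivery_prob(15.0, 30.0)                      # random direction
biased_fall = delivery_prob(15.0, 30.0, half_arc=math.pi/2)  # channel-biased
print(random_fall, biased_fall)
```

Because the biased distribution concentrates fall directions toward the channel, delivery probability rises at every distance; summing species-specific treefall rates times such probabilities over distance bands gives the kind of simple recruitment estimate the abstract describes.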


On cross-correlating weak lensing surveys

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2005
Dipak Munshi
ABSTRACT The present generation of weak lensing surveys will be superseded by surveys run from space with much better sky coverage and a higher signal-to-noise ratio, such as the Supernova/Acceleration Probe (SNAP). However, removal of any systematics or noise will remain a major cause of concern for any weak lensing survey. One of the best ways of spotting any undetected source of systematic noise is to compare surveys that probe the same part of the sky. In this paper we study various measures that are useful in cross-correlating weak lensing surveys with diverse survey strategies. Using two different statistics, the shear components and the aperture mass, we construct a class of estimators which encode such cross-correlations. These techniques will also be useful in studies where the entire source population from a specific survey can be divided into various redshift bins to study cross-correlations among them. We perform a detailed study of the angular size dependence and redshift dependence of these observables and of their sensitivity to the background cosmology. We find that one-point and two-point statistics provide complementary tools which allow one to constrain cosmological parameters and to obtain a simple estimate of the noise of the survey. [source]
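The rationale for cross-correlating two surveys of the same sky can be seen in a toy one-dimensional example (purely illustrative; the real estimators operate on shear components or aperture-mass maps, not on scalar pixel lists): noise that is independent between the surveys averages out of the cross-correlation but not out of either survey's auto-correlation, so the comparison exposes the shared signal and, by difference, the noise level.

```python
import random

random.seed(1)
n = 20000
signal = [random.gauss(0.0, 1.0) for _ in range(n)]      # shared 'sky' signal
survey_a = [s + random.gauss(0.0, 1.0) for s in signal]  # independent noise A
survey_b = [s + random.gauss(0.0, 1.0) for s in signal]  # independent noise B

def mean_product(u, v):
    """Zero-lag correlation estimator (fields already have zero mean)."""
    return sum(x * y for x, y in zip(u, v)) / len(u)

auto_a = mean_product(survey_a, survey_a)   # signal variance + noise variance
cross = mean_product(survey_a, survey_b)    # signal variance only
print(round(auto_a, 2), round(cross, 2))
```

The gap between the auto- and cross-correlation is a simple estimate of the noise variance of survey A, which is the spirit of the abstract's closing remark.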


Acclimation of photosynthetic capacity to irradiance in tree canopies in relation to leaf nitrogen concentration and leaf mass per unit area

PLANT CELL & ENVIRONMENT, Issue 3 2002
P. Meir
Abstract The observation of acclimation in leaf photosynthetic capacity to differences in growth irradiance has been widely used as support for a hypothesis that enables a simplification of some soil-vegetation-atmosphere transfer (SVAT) photosynthesis models. The acclimation hypothesis requires that relative leaf nitrogen concentration declines with relative irradiance from the top of a canopy to the bottom, in 1 : 1 proportion. In combination with a light transmission model it enables a simple estimate of the vertical profile in leaf nitrogen concentration (which is assumed to determine maximum carboxylation capacity), and in combination with estimates of the fraction of absorbed radiation it also leads to simple 'big-leaf' analytical solutions for canopy photosynthesis. We tested how forests deviate from this condition in five tree canopies, including four broadleaf stands and one needle-leaf stand: a mixed-species tropical rain forest, oak (Quercus petraea (Matt.) Liebl), birch (Betula pendula Roth), beech (Fagus sylvatica L.) and Sitka spruce (Picea sitchensis (Bong.) Carr). Each canopy was studied when fully developed (mid-to-late summer for temperate stands). Irradiance (Q, µmol m⁻² s⁻¹) was measured for 20 d using quantum sensors placed throughout the vertical canopy profile. Measurements were made to obtain parameters from leaves adjacent to the radiation sensors: maximum carboxylation and electron transfer capacity (Va, Ja, µmol m⁻² s⁻¹), day respiration (Rda, µmol m⁻² s⁻¹), leaf nitrogen concentration (Nm, mg g⁻¹) and leaf mass per unit area (La, g m⁻²). Relative to upper-canopy values, Va declined linearly in 1 : 1 proportion with Na. Relative Va also declined linearly with relative Q, but with a significant intercept at zero irradiance (P < 0·01). This intercept was strongly related to La of the lowest leaves in each canopy (P < 0·01, r2 = 0·98, n = 5).
For each canopy, daily ln Q was also linearly related with ln Va (P < 0·05), and the intercept was correlated with the value for photosynthetic capacity per unit nitrogen (PUN: Va/Na, µmol g⁻¹ s⁻¹) of the lowest leaves in each canopy (P < 0·05). Va was linearly related with La and Na (P < 0·01), but the slope of the Va : Na relationship varied widely among sites. Hence, whilst there was a unique Va : Na ratio in each stand, acclimation in Va to Q varied predictably with La of the lowest leaves in each canopy. The specific leaf area, Lm (cm² g⁻¹), of the canopy-bottom foliage was also found to predict carboxylation capacity (expressed on a mass basis; Vm, µmol g⁻¹ s⁻¹) at all sites (P < 0·01). These results invalidate the hypothesis of full acclimation to irradiance, but suggest that La and Lm of the most light-limited leaves in a canopy are widely applicable indicators of the distribution of photosynthetic capacity with height in forests. [source]
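The full-acclimation hypothesis the study tests can be stated compactly: with Beer-law light extinction, relative irradiance below cumulative leaf area index LAI is exp(−k·LAI), and the hypothesis predicts the same relative decline in leaf nitrogen and hence in carboxylation capacity. The sketch below (assumed extinction coefficient and top-of-canopy capacity, for illustration only) shows the predicted profile, with an optional nonzero intercept mimicking the deviation the study reports at zero irradiance:

```python
import math

def relative_irradiance(lai, k=0.5):
    """Beer-law relative irradiance below cumulative leaf area index `lai`."""
    return math.exp(-k * lai)

def vcmax_profile(vcmax_top, lai, k=0.5, intercept=0.0):
    """Full-acclimation hypothesis: capacity scales 1:1 with relative irradiance.
    A nonzero `intercept` (fraction of top-of-canopy capacity retained at zero
    light) mimics the significant intercept reported in the abstract."""
    q_rel = relative_irradiance(lai, k)
    return vcmax_top * (intercept + (1.0 - intercept) * q_rel)

# Under full acclimation, capacity halves exactly where irradiance halves:
lai_half = math.log(2.0) / 0.5     # LAI at which relative irradiance = 0.5
print(vcmax_profile(60.0, lai_half))
print(vcmax_profile(60.0, lai_half, intercept=0.2))   # deviation case
```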


Plasma Edge Physics with B2-Eirene

CONTRIBUTIONS TO PLASMA PHYSICS, Issue 1-2 2006
R. Schneider
Abstract The B2-Eirene code package was developed to give better insight into the physics of the scrape-off layer (SOL), defined as the region of open field-lines intersecting walls. The SOL is characterised by the competition between parallel and perpendicular transport, which makes it an inherently two-dimensional system. Descriptions of the plasma-wall interaction and of atomic processes are necessary ingredients for an understanding of the scrape-off layer. This paper concentrates on understanding the basic physics by combining the results of the code with experiments and with analytical models or estimates. The work focuses mainly on divertor tokamaks, but most of the arguments and principles can easily be adapted to other concepts, such as island divertors in stellarators or limiter devices. The paper presents the basic equations for plasma transport and the basic models for neutral transport; these form the ingredients of the SOLPS (Scrape-Off Layer Plasma Simulator) code package. A first level of understanding is approached for pure hydrogenic plasmas, based both on simple models and on B2-Eirene simulations neglecting drifts and currents; the main topic here is the influence of neutral transport on the different operation regimes. This part finishes with time-dependent phenomena for the pure plasma, the so-called Edge Localised Modes (ELMs). Then, the influence of impurities on the SOL plasma is discussed. Understanding impurity physics in the SOL requires a rather complex combination of different aspects: the impurity production process has to be understood, the effects of impurities in terms of radiation losses have to be included, and finally impurity transport must be described. This is introduced with rising complexity, starting with simple estimates, then analysing the detailed parallel force balance and the flow pattern of impurities. Using this, impurity compression and radiation instabilities are studied.
This part ends, combining all the elements introduced before, with specific, detailed results from different machines. Then, the effect of drifts and currents is introduced and their consequences presented. Finally, some work on deriving scaling laws for the anomalous turbulent transport based on automatic edge transport code fitting procedures will be described. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
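One of the simple estimates commonly used alongside such SOL codes is the standard conduction-limited two-point model, which relates upstream and target electron temperatures through Spitzer parallel heat conduction. The sketch below is a generic textbook relation, not a formula taken from this paper, and the numbers are hypothetical:

```python
def upstream_temperature(q_par, L, T_target, kappa0=2000.0):
    """Two-point estimate of the upstream SOL electron temperature (eV) from
    Spitzer parallel conduction: T_u^(7/2) = T_t^(7/2) + 7*q_par*L/(2*kappa0).
    q_par: parallel heat flux (W/m^2); L: connection length (m);
    kappa0: Spitzer conductivity coefficient (W m^-1 eV^-7/2), an assumed value."""
    return (T_target**3.5 + 3.5 * q_par * L / kappa0) ** (2.0 / 7.0)

# Hypothetical divertor-relevant numbers: 100 MW/m^2 parallel flux,
# 50 m connection length, 10 eV target temperature.
print(round(upstream_temperature(1e8, 50.0, 10.0), 1))
```

The strong 2/7 power explains why upstream temperatures are insensitive to target conditions, a point such simple estimates are typically used to make before turning to full simulations.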


The effect of health changes and long-term health on the work activity of older Canadians

HEALTH ECONOMICS, Issue 10 2005
Doreen Wing Han Au
Abstract Using longitudinal data from the Canadian National Population Health Survey (NPHS), we study the relationship between health and employment among older Canadians. We focus on two issues: (1) the possible problems with self-reported health, including endogeneity and measurement error, and (2) the relative importance of health changes and long-term health in the decision to work. We contrast estimates of the impact of health on employment using self-assessed health, an objective health index contained in the NPHS (the HUI3), and a 'purged' health stock measure. Our results suggest that health has an economically significant effect on employment probabilities for Canadian men and women aged 50–64, and that this effect is underestimated by simple estimates based on self-assessed health. We also corroborate recent US and UK findings that changes in health are important in the work decision. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Rapid simulated hydrologic response within the variably saturated near surface

HYDROLOGICAL PROCESSES, Issue 3 2008
Brian A. Ebel
Abstract Column and field experiments have shown that the hydrologic response to increases in rainfall rates can be more rapid than expected from simple estimates. Physics-based hydrologic response simulation, with the Integrated Hydrology Model (InHM), is used here to investigate rapid hydrologic response, within the variably saturated near surface, to temporal variations in applied flux at the surface boundary. The factors controlling the speed of wetting front propagation are discussed within the Darcy,Buckingham conceptual framework, including kinematic wave approximations. The Coos Bay boundary-value problem is employed to examine simulated discharge, pressure head, and saturation responses to a large increase in applied surface flux. The results presented here suggest that physics-based simulations are capable of representing rapid hydrologic response within the variably saturated near surface. The new InHM simulations indicate that the temporal discretization and measurement precision needed to capture the rapid subsurface response to a spike increase in surface flux, necessary for both data-based analyses and evaluation of physics-based models, are smaller than the capabilities of the instrumentation deployed at the Coos Bay experimental catchment. Copyright © 2007 John Wiley & Sons, Ltd. [source]