Upper Confidence Limits (upper + confidence_limit)

Selected Abstracts


Upper susceptibility threshold limits with confidence intervals: a method to identify normal and abnormal population values for laboratory toxicological parameters, based on acetylcholinesterase activities in sea lice

PEST MANAGEMENT SCIENCE (FORMERLY: PESTICIDE SCIENCE), Issue 3 2006
Anders Fallang
Abstract The interpretation and importance of comparing field values of susceptibility to pesticides with a laboratory reference strain that might bear little resemblance to the actual situation in the field are problematic and remain a continuing subject of debate. In this paper a procedure for defining a 'normal sensitive' population from a field study of 383 individuals, to provide a basis for analysing and interpreting in vitro results, is described and examined. Instead of using only the 95th percentile, the upper and lower confidence limits for the 95th percentile were also compared to select the best estimate of the limit for the normal material. A field population constrained by the upper confidence limit for the 95th percentile provides an appropriate description of the normal material in this study. This approach should prove useful in studies of pesticide resistance in field populations. Copyright © 2006 Society of Chemical Industry [source]
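
For readers who want to experiment with this idea, the following is a minimal, distribution-free sketch of one way to place an upper confidence limit on a 95th percentile using order statistics. The simulated acetylcholinesterase activities, the lognormal data model, and the function upper_cl_for_percentile are illustrative assumptions, not the estimator or data used in the paper.

```python
# Distribution-free upper confidence limit for a percentile via order statistics.
# Data are simulated; this is not the authors' data set or exact procedure.
import numpy as np
from scipy.stats import binom

def upper_cl_for_percentile(x, p=0.95, conf=0.95):
    """Return the smallest order statistic x_(k) such that
    P(x_(k) >= true p-th percentile) >= conf."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    # the number of observations below the true p-th percentile is ~ Binomial(n, p);
    # pick the smallest rank k with Binomial CDF(k - 1; n, p) >= conf
    k = np.searchsorted(binom.cdf(np.arange(n), n, p), conf) + 1  # 1-based rank
    if k > n:
        raise ValueError("sample too small for the requested confidence level")
    return x[k - 1]

rng = np.random.default_rng(1)
activities = rng.lognormal(mean=0.0, sigma=0.4, size=383)  # hypothetical AChE activities

point = np.percentile(activities, 95)
upper = upper_cl_for_percentile(activities, p=0.95, conf=0.95)
print(f"sample 95th percentile: {point:.3f}")
print(f"95% upper confidence limit: {upper:.3f}  (candidate threshold for 'normal' material)")
```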


Risk Assessment for Quantitative Responses Using a Mixture Model

BIOMETRICS, Issue 2 2000
Mehdi Razzaghi
Summary. A problem that frequently occurs in biological experiments with laboratory animals is that some subjects are less susceptible to the treatment than others. A mixture model has traditionally been proposed to describe the distribution of responses in treatment groups for such experiments. Using a mixture dose-response model, we derive an upper confidence limit on additional risk, defined as the excess risk over the background risk due to an added dose. Our focus will be on experiments with continuous responses for which risk is the probability of an adverse effect defined as an event that is extremely rare in controls. The asymptotic distribution of the likelihood ratio statistic is used to obtain the upper confidence limit on additional risk. The method can also be used to derive a benchmark dose corresponding to a specified level of increased risk. The EM algorithm is utilized to find the maximum likelihood estimates of model parameters and an extension of the algorithm is proposed to derive the estimates when the model is subject to a specified level of added risk. An example is used to demonstrate the results, and it is shown that by using the mixture model a more accurate measure of added risk is obtained. [source]
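
The following sketch illustrates only the "additional risk" quantity that the upper confidence limit targets, using a two-component (susceptible / non-susceptible) normal mixture for a continuous response. The parameter values, the cutoff defining an adverse effect, and the function p_adverse are invented for illustration; the likelihood-ratio-based confidence limit and the EM fitting described in the abstract are not reproduced here.

```python
# Additional (excess) risk over background under a simple mixture dose-response model.
# All parameter values are illustrative assumptions, not the paper's estimates.
from scipy.stats import norm

# Control (background) response: Normal(mu0, sigma); an "adverse effect" is a
# response below a cutoff chosen to be rare in controls (here the 1st percentile).
mu0, sigma = 100.0, 10.0
cutoff = norm.ppf(0.01, loc=mu0, scale=sigma)

def p_adverse(dose, pi_nonsusc=0.3, slope=5.0):
    """P(adverse effect) at a given dose under the mixture model:
    a fraction pi_nonsusc of subjects does not respond to treatment,
    while the rest have their mean response shifted downward by slope*dose."""
    p_ns = norm.cdf(cutoff, loc=mu0, scale=sigma)                # non-susceptible component
    p_s = norm.cdf(cutoff, loc=mu0 - slope * dose, scale=sigma)  # susceptible component
    return pi_nonsusc * p_ns + (1.0 - pi_nonsusc) * p_s

background = p_adverse(0.0)
for d in (1.0, 2.0, 4.0):
    added = p_adverse(d) - background  # additional risk over background at dose d
    print(f"dose {d}: additional risk = {added:.4f}")
```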


Measurement of memory of color

COLOR RESEARCH & APPLICATION, Issue 4 2002
H. H. Seliger
Abstract Colors produced by monochromatic wavelengths of light viewed in isolation have been used as the only visual variables in short-term delayed matching (DM) and long-term recall (LTR) protocols to quantify three types of color memory in individuals with normal color vision. Measurements were normally distributed, so that color memories of individuals could be compared in terms of means and standard deviations. The variance of LTR of colors of familiar objects is shown to be separable into two portions, one due to "preferred colors" and the other due to individuals' precision of matching. The wavelength dependence of DM exhibited minima of standard deviations at the same wavelengths as those reported for color discrimination measured by bipartite wavelength matching, and these minima occur at the wavelengths where the cone spectral sensitivities intersect. In an intermediate "green" region of relatively constant color discrimination, it was possible to combine DM measurements for different wavelengths for statistical analysis. The standard deviations of DM for individuals of a healthy population were normally distributed, providing a 95% upper confidence limit for identifying individuals with possible short-term memory impairment. Preliminary measurements of standard deviations of DM for delay times of ≤ 1 s were consistent with a proposed rapidly decaying color imagery memory. © 2002 Wiley Periodicals, Inc. Col Res Appl, 27, 233–242, 2002; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/col.10067 [source]
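
A minimal sketch of the kind of screening threshold the last result suggests, assuming it corresponds to a one-sided 95% limit of a normal distribution fitted to the healthy-population matching standard deviations; the paper's exact construction may differ, and the data below are simulated.

```python
# One-sided 95% upper limit of normal for screening, under an assumed normal fit.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
healthy_sds = rng.normal(loc=2.5, scale=0.6, size=60)  # hypothetical DM standard deviations (nm)

# mean + z_0.95 * sample standard deviation of the healthy population
threshold = healthy_sds.mean() + norm.ppf(0.95) * healthy_sds.std(ddof=1)
print(f"95% upper limit of normal: {threshold:.2f} nm")

subject_sd = 4.4  # hypothetical individual's DM standard deviation
print("possible impairment" if subject_sd > threshold else "within normal range")
```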


Evaluation of statistical methods for left-censored environmental data with nonuniform detection limits

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 9 2006
Parikhit Sinha
Abstract Monte Carlo simulations were used to evaluate statistical methods for estimating 95% upper confidence limits of mean constituent concentrations for left-censored data with nonuniform detection limits. Two primary scenarios were evaluated: data sets with 15 to 50% nondetected samples and data sets with 51 to 80% nondetected samples. Sample size and the percentage of nondetected samples were allowed to vary randomly to generate a variety of left-censored data sets. All statistical methods were evaluated for efficacy by comparing the 95% upper confidence limits for the left-censored data with the 95% upper confidence limits for the noncensored data and by determining percent coverage of the true mean (μ). For data sets with 15 to 50% nondetected samples, the trimmed mean, Winsorization, Aitchison's, and log-probit regression methods were evaluated. Log-probit regression was the only method that yielded sufficient coverage (99–100%) of μ, as well as a high correlation coefficient (r2 = 0.99) and small average percent residuals (≤ 0.1%) between upper confidence limits for censored versus noncensored data sets. For data sets with 51 to 80% nondetected samples, a bounding method was effective (r2 = 0.96–0.99, average residual = −5% to −7%, 95–98% coverage of μ), except when applied to distributions with low coefficients of variation (standard deviation/μ < 0.5). Thus, the following recommendations are supported by this research: for data sets with 15 to 50% nondetected samples, use the log-probit regression method and the Chebyshev theorem to estimate 95% upper confidence limits; for data sets with 51 to 80% nondetected samples, use the bounding method and the Chebyshev theorem to estimate 95% upper confidence limits. [source]
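
The Chebyshev-theorem upper confidence limit recommended above has a closed form, UCL = mean + sqrt(1/alpha − 1) * s / sqrt(n) with alpha = 0.05. The sketch below applies it to simulated data after a crude placeholder substitution for nondetects; the log-probit (regression-on-order-statistics) imputation recommended in the abstract is not implemented here, and all data are invented.

```python
# Chebyshev-theorem 95% upper confidence limit for the mean, applied after a
# placeholder treatment of nondetects (not the log-probit imputation from the paper).
import numpy as np

def chebyshev_ucl(x, alpha=0.05):
    """UCL = xbar + sqrt(1/alpha - 1) * s / sqrt(n), from the one-sided Chebyshev bound."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return x.mean() + np.sqrt(1.0 / alpha - 1.0) * x.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(42)
conc = rng.lognormal(mean=1.0, sigma=0.8, size=40)        # hypothetical concentrations
detection_limits = rng.choice([1.0, 2.0, 3.0], size=40)   # nonuniform detection limits
detected = conc >= detection_limits

# crude placeholder: replace nondetects by half their detection limit
imputed = np.where(detected, conc, detection_limits / 2.0)

print(f"nondetects: {np.sum(~detected)} of {len(conc)}")
print(f"Chebyshev 95% UCL of the mean: {chebyshev_ucl(imputed):.2f}")
```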


The quality and size of yolk sac in early pregnancy loss

AUSTRALIAN AND NEW ZEALAND JOURNAL OF OBSTETRICS AND GYNAECOLOGY, Issue 5 2006
Fu-Nan CHO
Abstract Background: Accurate differentiation between normal pregnancy and pregnancy loss in early gestation remains a clinical challenge. Aims: To determine whether ultrasound findings of yolk sac size and morphology are valuable in relation to pregnancy loss at six to ten weeks of gestation. Methods: Transvaginal ultrasonography was performed in 111 normal singleton pregnancies, 25 anembryonic gestations, and 18 missed abortions. Mean diameters of the gestational sac and yolk sac were measured. The relationship between yolk sacs and gestational sacs in normal pregnancies was depicted. The yolk sac ultrasound findings in cases of pregnancy loss were recorded. Results: In normal pregnancies with embryonic heartbeats, a deformed or absent yolk sac was never detected. Sequential appearance of the yolk sac, embryonic heartbeats and amniotic membrane was essential for normal pregnancy. The largest yolk sac in viable pregnancies was 8.1 mm. Findings in anembryonic gestations included an absent yolk sac, an irregular-shaped yolk sac and a relatively large yolk sac (> the 95% upper confidence limit, in 11 cases). In cases of missed abortion with prior existing embryonic heartbeats, abnormal findings included a relatively large, a progressively regressing, a relatively small, and a deformed yolk sac (an irregular-shaped yolk sac, an echogenic spot, or a band). Conclusion: A very large yolk sac may exist in normal pregnancy. When embryonic heartbeats exist, poor quality and early regression of a yolk sac are more specific than large yolk sac size in predicting pregnancy loss. When an embryo is undetectable, a relatively large yolk sac, even of normal shape, may be an indicator of miscarriage. [source]


Multiplicity-Adjusted Inferences in Risk Assessment: Benchmark Analysis with Quantal Response Data

BIOMETRICS, Issue 1 2005
Daniela K. Nitcheva
Summary A primary objective in quantitative risk or safety assessment is characterization of the severity and likelihood of an adverse effect caused by a chemical toxin or pharmaceutical agent. In many cases data are not available at low doses or low exposures to the agent, and inferences at those doses must be based on the high-dose data. A modern method for making low-dose inferences is known as benchmark analysis, where attention centers on the dose at which a fixed benchmark level of risk is achieved. Both upper confidence limits on the risk and lower confidence limits on the "benchmark dose" are of interest. In practice, a number of possible benchmark risks may be under study; if so, corrections must be applied to adjust the limits for multiplicity. In this short note, we discuss approaches for doing so with quantal response data. [source]
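
A hedged sketch of the kind of multiplicity adjustment discussed here: when lower confidence limits on the benchmark dose (BMDL) are reported for several benchmark risks (BMRs) at once, the per-limit confidence level can be Bonferroni-adjusted. The one-parameter quantal-linear model, the Wald-type limit, and the data below are illustrative assumptions, not the methods or data of the paper.

```python
# Bonferroni-adjusted BMDLs for several benchmark risks under a quantal-linear model,
# extra risk = 1 - exp(-beta * dose). All data and modelling choices are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

doses = np.array([0.0, 0.5, 1.0, 2.0])
n = np.array([50, 50, 50, 50])
cases = np.array([0, 6, 11, 20])  # hypothetical quantal responses

def negloglik(beta):
    p = np.clip(1.0 - np.exp(-beta * doses), 1e-12, 1 - 1e-12)
    return -np.sum(cases * np.log(p) + (n - cases) * np.log(1.0 - p))

res = minimize_scalar(negloglik, bounds=(1e-6, 10.0), method="bounded")
beta_hat = res.x

# observed information via a numerical second derivative -> Wald standard error
h = 1e-4
info = (negloglik(beta_hat + h) - 2 * negloglik(beta_hat) + negloglik(beta_hat - h)) / h**2
se = 1.0 / np.sqrt(info)

bmrs = [0.01, 0.05, 0.10]              # several benchmark risks under study
alpha = 0.05
z = norm.ppf(1.0 - alpha / len(bmrs))  # Bonferroni: alpha split across the BMRs

for bmr in bmrs:
    bmd = -np.log(1.0 - bmr) / beta_hat
    # an upper limit on beta translates into a lower limit on the BMD
    bmdl = -np.log(1.0 - bmr) / (beta_hat + z * se)
    print(f"BMR={bmr:.2f}: BMD={bmd:.3f}, multiplicity-adjusted BMDL={bmdl:.3f}")
```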


Confidence Bands for Low-Dose Risk Estimation with Quantal Response Data

BIOMETRICS, Issue 4 2003
Obaid M. Al-Saidy
Summary. We study the use of simultaneous confidence bands for low-dose risk estimation with quantal response data, and derive methods for estimating simultaneous upper confidence limits on predicted extra risk under a multistage model. By inverting the upper bands on extra risk, we obtain simultaneous lower bounds on the benchmark dose (BMD). Monte Carlo evaluations explore characteristics of the simultaneous limits under this setting, and a suite of actual data sets is used to compare existing methods for placing lower limits on the BMD. [source]
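
The inversion step can be illustrated as follows: given an upper confidence band U(d) on extra risk under a two-stage multistage model, the lower bound on the BMD is the smallest dose at which U(d) reaches the benchmark risk. The parameter estimates, their covariance matrix, and the Scheffé-type simultaneous critical value in this sketch are stand-ins, not the bands derived in the paper.

```python
# Inverting an upper confidence band on extra risk to obtain a BMD lower bound.
# Parameter estimates, covariance, and the simultaneous critical value are invented.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

beta = np.array([0.05, 0.02])                    # hypothetical (beta1, beta2) estimates
cov = np.array([[4e-4, -1e-4], [-1e-4, 1e-4]])   # hypothetical covariance of the estimates
crit = np.sqrt(chi2.ppf(0.95, df=2))             # Scheffé-type simultaneous critical value

def extra_risk(d, b):
    """Extra risk under a two-stage multistage model: 1 - exp(-(b1*d + b2*d^2))."""
    return 1.0 - np.exp(-(b[0] * d + b[1] * d**2))

def upper_band(d):
    """Wald-style simultaneous upper limit on extra risk at dose d."""
    grad = np.exp(-(beta[0] * d + beta[1] * d**2)) * np.array([d, d**2])
    se = np.sqrt(grad @ cov @ grad)
    return min(extra_risk(d, beta) + crit * se, 1.0)

bmr = 0.10
bmd = brentq(lambda d: extra_risk(d, beta) - bmr, 1e-9, 100.0)  # point estimate of the BMD
bmdl = brentq(lambda d: upper_band(d) - bmr, 1e-9, 100.0)       # inverted upper band
print(f"BMD estimate: {bmd:.3f}, simultaneous BMDL: {bmdl:.3f}")
```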