Precise Estimates (precise + estimate)



Selected Abstracts


Biological significance of metals partitioned to subcellular fractions within earthworms (Aporrectodea caliginosa)

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 3 2006
Martina G. Vijver
Abstract Metal ions in excess of metabolic requirements are potentially toxic and must be removed from the vicinity of important biological molecules to protect organisms from adverse effects. Correspondingly, metals are sequestrated in various forms, defining the accumulation pattern and the magnitude of steady-state levels reached. To investigate the subcellular fractions over which Ca, Mg, Fe, Cu, Zn, Cd, Pb, Ni, and As are distributed, earthworms (Aporrectodea caliginosa) collected from the field were analyzed by isolating metal-rich granules and tissue fragments from intracellular microsomal and cytosolic fractions (i.e., heat-stable proteins and heat-denatured proteins). The fractions showed metal-specific binding capacity. Cadmium was mainly retrieved from the protein fractions. Copper was equally distributed over the protein fraction and the fraction comprising tissue fragments, cell membranes, and intact cells. Zinc, Ca, Mg, and As were mainly found in this fraction as well. Lead, Fe, and Ni were mainly isolated from the granular fraction. To study accumulation kinetics in the different fractions, three experiments were conducted in which earthworms were exposed to metal-spiked soil and a soil contaminated by anthropogenic inputs, and indigenous earthworms were exposed to field soils. Although kinetics showed variation, linear uptake and steady-state types of accumulation patterns could be understood according to subcellular compartmentalization. For risk assessment purposes, subcellular distribution of metals might allow for a more precise estimate of effects than total body burden. Identification of subcellular partitioning appears useful in determining the biological significance of steady-state levels reached in animals. [source]


Occupational exposure to methyl tertiary butyl ether in relation to key health symptom prevalence: the effect of measurement error correction

ENVIRONMETRICS, Issue 6 2003
Aparna P. Keshaviah
Abstract In 1995, White et al. reported that methyl tertiary butyl ether (MTBE), an oxygenate added to gasoline, was significantly associated with key health symptoms, including headaches, eye irritation, and burning of the nose and throat, among 44 people occupationally exposed to the compound and for whom serum MTBE measurements were available (odds ratio (OR) = 8.9, 95% CI = [1.2, 75.6]). However, these serum MTBE measurements were available for only 29 per cent of the 150 subjects enrolled. Around the same time, Mannino et al. conducted a similar study among individuals occupationally exposed to low levels of MTBE and did not find a significant association between exposure to MTBE and the presence of one or more key health symptoms among the 264 study participants (OR = 0.60, 95% CI = [0.3, 1.21]). In this article, we evaluate the effect of MTBE on the prevalence of key health symptoms by applying a regression calibration method to White et al.'s and Mannino et al.'s data. Unlike White et al., who classified exposure using actual MTBE levels among a subset of the participants, and Mannino et al., who classified exposure based on job category among all participants, we use all of the available data to obtain an estimate of the effect of MTBE in units of serum concentration, adjusted for measurement error due to using job category instead of measured exposure. After adjusting for age, gender and smoking status, MTBE exposure was found to be significantly associated with a 50 per cent increase in the prevalence of one or more key health symptoms per order of magnitude increase in blood concentration on the log10 scale, using data from the 409 study participants with complete information on the covariates (95% CI = [1.00, 2.25]). Simulation results indicated that under conditions similar to those observed in these data, the estimator is unbiased and has a coverage probability close to the nominal value. The methodology illustrated in this article is advantageous because all of the available data were used in the analysis, obtaining a more precise estimate of exposure effect on health outcome, and the estimate is adjusted for measurement error due to using job category instead of measured exposure. Copyright © 2003 John Wiley & Sons, Ltd. [source]
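The regression calibration step described above can be pictured with a small sketch. This is not the authors' implementation: the dataset, variable names, and effect sizes are invented, and it assumes a single surrogate (job category) plus a validation subset with measured serum MTBE.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical data: job category as surrogate exposure, true log10 serum MTBE
# (observed only in a validation subset), and a binary symptom indicator.
n = 400
job_exposed = rng.integers(0, 2, n)                                # surrogate exposure
true_log_mtbe = 0.5 + 1.2 * job_exposed + rng.normal(0, 0.6, n)    # "true" exposure
p_symptom = 1 / (1 + np.exp(-(-1.0 + 0.4 * true_log_mtbe)))
symptom = rng.binomial(1, p_symptom)
measured = rng.random(n) < 0.3                                     # serum assayed for ~30% of subjects

# Step 1 (calibration): regress measured exposure on the surrogate in the validation subset.
calib = sm.OLS(true_log_mtbe[measured],
               sm.add_constant(job_exposed[measured])).fit()

# Step 2: predict expected exposure for every subject from the calibration model.
expected_exposure = calib.predict(sm.add_constant(job_exposed))

# Step 3: fit the outcome model on the calibrated exposure, so all subjects contribute.
outcome = sm.GLM(symptom, sm.add_constant(expected_exposure),
                 family=sm.families.Binomial()).fit()
print(outcome.params)  # slope: symptom effect per unit of log10 serum MTBE
```

The point of the sketch is that every subject contributes to the outcome model through the calibrated exposure, which is the source of the precision gain the abstract describes.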


Estimation of the consumption CAPM with imperfect sample separation information

INTERNATIONAL JOURNAL OF FINANCE & ECONOMICS, Issue 4 2008
Andrei Semenov
Abstract We propose a consumption-based capital asset pricing model (consumption CAPM), in which the pricing kernel is calculated as the average of individuals' intertemporal marginal rates of substitution weighted by the probabilities of holding the asset in question. These probabilities are conditional on available imperfect sample separation information and are estimated simultaneously with the parameters of the Euler equations. Using data from the US Consumer Expenditure Survey, we find that the consumption CAPM with probability-weighted agents yields a more precise estimate of the agent's risk aversion compared with the model in which the available imperfect information on asset-holding status is erroneously regarded as a perfect sample separation indicator. Copyright © 2007 John Wiley & Sons, Ltd. [source]
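As a rough illustration of the probability-weighted pricing kernel described, one can write (assuming CRRA period utility, which the abstract does not specify) the kernel as a weighted average of individual intertemporal marginal rates of substitution, with the weights w_{i,t+1} standing for the estimated asset-holding probabilities:

\[
  m_{t+1} \;=\; \sum_{i=1}^{N} w_{i,t+1}\,
  \beta \left( \frac{c_{i,t+1}}{c_{i,t}} \right)^{-\gamma},
  \qquad
  \mathbb{E}_t\!\left[\, m_{t+1}\,(1 + r_{t+1}) \,\right] \;=\; 1 .
\]

The second expression is the Euler (pricing) restriction whose parameters are estimated jointly with the holding probabilities.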


The Role of Benzodiazepines in the Treatment of Insomnia

JOURNAL OF AMERICAN GERIATRICS SOCIETY, Issue 6 2001
Meta-Analysis of Benzodiazepine Use in the Treatment of Insomnia
PURPOSE: To obtain a precise estimate of the efficacy and common adverse effects of benzodiazepines for the treatment of insomnia compared with those of placebo and other treatments. BACKGROUND: Insomnia, also referred to as disorder of initiating or maintaining sleep, is a common problem and its prevalence among older people is estimated to be 23% to 34%.1 The total direct cost in the United States for insomnia in 1995 was estimated to be $13.9 billion.2 The complaint of insomnia in older people is associated with chronic medical conditions; psychiatric problems, mainly depression, chronic pain, and poor perceived general condition;1,3,4 and use of sleep medications.5 Thus in most cases, insomnia is due to some other underlying problem and is not just a consequence of aging.6 Accordingly, the management of insomnia should focus on addressing the primary problem and not just short-term treatment of the insomnia. Benzodiazepines belong to the drug class of choice for the symptomatic treatment of primary insomnia.7 This abstract will appraise a meta-analysis that compared the effect of benzodiazepines for short-term treatment of primary insomnia with placebo or other treatment. DATA SOURCES: Data sources included articles listed in Medline from 1966 to December 1998 and the Cochrane Controlled Trials Registry. The medical subject heading (MeSH) search terms used were "benzodiazepine" (exploded) or "benzodiazepine tranquillizers" (exploded) or "clonazepam," "drug therapy," "randomized controlled trial" or "random allocation" or "all random," "human," and "English language." In addition, bibliographies of retrieved articles were scanned for additional articles and manufacturers of brand-name benzodiazepines were asked for reports of early trials not published in the literature. STUDY SELECTION CRITERIA: Reports of randomized controlled trials of benzodiazepine therapy for primary insomnia were considered for the meta-analysis if they compared a benzodiazepine with a placebo or an alternative active drug. DATA EXTRACTION: Data were abstracted from 45 randomized controlled trials representing 2,672 patients, 47% of whom were women. Fifteen studies included patients age 65 and older and four studies involved exclusively older patients. Twenty-five studies were based in the community and nine involved inpatients. The duration of the studies ranged from 1 day to 6 weeks, with a mean of 12.2 days and median of 7.5 days. The primary outcome measures analyzed were sleep latency and total sleep duration after a sleep study, subjects' estimates of sleep latency and sleep duration, and subjects' report of adverse effects. Interrater reliability was checked through duplicate, independent abstraction of the first 21 articles. Overall agreement was between 95% and 98% (kappa values of 0.90 and 0.95, respectively) for classification of the studies and validity of therapy, and 76% (kappa value of 0.51) for study of harmful effects. A scale of 0 to 5 was used to rate the individual reports, taking into account the quality of randomization, blinding, follow-up, and control for baseline differences between groups. Tests for homogeneity were applied across the individual studies and, when studies were found to be heterogeneous, subgroup analysis according to a predefined group was performed.
MAIN RESULTS: The drugs used in the meta-analysis included triazolam in 16 studies; flurazepam in 14 studies; temazepam in 13 studies; midazolam in five studies; nitrazepam in four studies; and estazolam, lorazepam, and diazepam in two studies each. Alternative drug therapies included zopiclone in 13 studies and diphenhydramine, glutethimide, and promethazine in one study each. Only one article reported on a nonpharmacological treatment (behavioral therapy). The mean age of patients was reported in 33 of the 45 studies and ranged between 29 and 82. SLEEP LATENCY: In four studies involving 159 subjects, there was sleep-record latency (time to fall asleep) data for analysis. The pooled difference indicated that the latency to sleep for patients receiving a benzodiazepine was 4.2 minutes (95% CI = -0.7 to 9.2) shorter than for those receiving placebo. Patients' estimates of sleep latency examined in eight studies showed a difference of 14.3 minutes (95% CI = 10.6 to 18.0) in favor of benzodiazepines over placebo. TOTAL SLEEP DURATION: Analysis of two studies involving 35 patients in which total sleep duration using sleep-record results was compared indicated that patients in the benzodiazepine groups slept for an average of 61.8 minutes (95% CI = 37.4 to 86.2) longer than those in the placebo groups. Patients' estimates of sleep duration from eight studies (566 patients) showed total sleep duration to be 48.4 minutes (95% CI = 39.6 to 57.1) longer for patients taking benzodiazepines than for those on placebo. ADVERSE EFFECTS: Analysis of eight studies (889 subjects) showed that those in the benzodiazepine groups were more likely than those in the placebo groups to complain of daytime drowsiness (odds ratio (OR) 2.4, 95% confidence interval (CI) = 1.8 to 3.4). Analysis of four studies (326 subjects) also showed that subjects in the benzodiazepine groups were more likely to complain of dizziness or lightheadedness than those in the placebo groups (OR 2.6, 95% CI = 0.7 to 10.3). Despite the increased reported side effects in the benzodiazepine groups, drop-out rates were similar in the benzodiazepine and placebo groups. For patient-reported outcomes, there was no strong correlation between benzodiazepine dose and outcome, either for sleep latency (r = 0.4, 95% CI = -0.3 to 0.9) or for sleep duration (r = 0.2, 95% CI = -0.8 to 0.4). COMPARISON WITH OTHER DRUGS AND TREATMENTS: In three trials with 96 subjects, meta-analysis of the results comparing benzodiazepines with zopiclone did not show a significant difference in sleep latency between the benzodiazepine and zopiclone groups, but the benzodiazepine groups had increased total sleep duration (23.1 min, 95% CI = 5.6 to 40.6). In four trials with 252 subjects, the side effect profile did not show a statistically significant difference (OR 1.5, CI 0.8 to 2.9). There was only one study comparing the effect of behavioral therapy with triazolam. The result showed that triazolam was more effective than behavioral therapy in decreasing sleep latency, but its efficacy declined by the second week of treatment. Behavioral therapy remained effective throughout the 9-week follow-up period. There were four small trials that involved older patients exclusively, with three of the studies having less than 2 weeks of follow-up. The results were mixed regarding benefits and adverse effects were poorly reported.
CONCLUSION: The result of the meta-analysis shows that the use of benzodiazepines results in a decrease in sleep latency and a significant increase in total sleep time as compared with placebo. There was also a report of significantly increased side effects, but this did not result in increased discontinuation rate. There was no dose-response relationship for beneficial effect seen with the use of benzodiazepines, although the data are scant. Zopiclone was the only alternative pharmacological therapy that could be studied with any precision. There was no significant difference in the outcome when benzodiazepines were compared with zopiclone. There was only one study that compared the effect of benzodiazepines with nonpharmacological therapy; thus available data are insufficient to comment. [source]
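The pooled differences quoted in the results are combinations of study-level estimates. The generic fixed-effect (inverse-variance) combination below shows how such pooling works; it is given only as background and may not match the exact weighting used in this meta-analysis.

\[
  \hat{\theta}_{\text{pooled}} \;=\; \frac{\sum_{k} w_k\, \hat{\theta}_k}{\sum_{k} w_k},
  \qquad
  w_k \;=\; \frac{1}{\widehat{\operatorname{Var}}\!\left(\hat{\theta}_k\right)},
  \qquad
  \operatorname{SE}\!\left(\hat{\theta}_{\text{pooled}}\right) \;=\; \Bigl(\textstyle\sum_{k} w_k\Bigr)^{-1/2}.
\]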


Reproductive biology and population variables of the Brazilian sharpnose shark Rhizoprionodon lalandii (Müller & Henle, 1839) captured in coastal waters of south-eastern Brazil

JOURNAL OF FISH BIOLOGY, Issue 3 2008
A. C. Andrade
Throughout 1 year, from October 2003 to September 2004, 88 visits to the landing site of a small urban fishery (APREBAN) in Rio de Janeiro city were conducted and 816 specimens of Rhizoprionodon lalandii were analysed. The sample, mostly females, was composed of two cohorts: young-of-the-year were abundant in spring and summer and adults predominated in autumn and winter. Gravid females were most abundant from April to June, whereas post-partum females composed most of the catch in August to September. Adult males were present all year, although they were more abundant between February and July. No neonates were captured during the study and most embryos collected measured slightly below the reported total length (LT) at time of birth (L0) for the species, suggesting that parturition may occur slightly outside the main fishing grounds or that neonates were not captured in commercial gillnets set at this time of the year. The mean LT at maturity (LT50) for males was 578 mm and females matured between 620 and 660 mm, although a precise estimate of LT50 for females could not be determined. The total length (LT) and total mass (MT) relationship was calculated for both sexes and showed no significant differences. The mean condition factor increased steadily from February to July, followed by a steep decrease in the values for females in August and September, suggesting a pupping season. The present study area can be classified as a coastal juvenile habitat and a probable mating ground for R. lalandii. [source]
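The LT50 and length-mass results above are conventionally obtained from a logistic maturity ogive and a power-law length-mass fit; the forms below are the standard ones and are given as an assumed sketch rather than equations taken from the paper.

\[
  P(\text{mature} \mid L_T) \;=\; \frac{1}{1 + e^{-r\,(L_T - L_{T50})}},
  \qquad
  M_T \;=\; a\, L_T^{\,b},
\]

where r controls the steepness of the ogive and a, b are the fitted length-mass coefficients.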


Polynomial and analytic stabilization of a wave equation coupled with an Euler–Bernoulli beam

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 5 2009
Kaïs Ammari
Abstract We consider a stabilization problem for a model arising in the control of noise. We prove that in the case where the control zone does not satisfy the geometric control condition, B.L.R. (see Bardos et al. SIAM J. Control Optim. 1992; 30:1024–1065), we have a polynomial stability result for all regular initial data. Moreover, we give a precise estimate on the analyticity of reachable functions for which we have exponential stability. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Corrigendum: The cover time of the giant component of a random graph, Random Structures and Algorithms 32 (2008), 401–439

RANDOM STRUCTURES AND ALGORITHMS, Issue 2 2009
Colin Cooper
Abstract The above paper gives an asymptotically precise estimate of the cover time of the giant component of a sparse random graph. The proof as it stands is not tight enough, and in particular, Eq. (64) is not strong enough to prove (65). The o(1) term in (64) needs to be improved to o(1/(ln n)^2) for (65) to follow. The following section, which replaces Section 3.6, amends this oversight. The notation and definitions are from the paper. A further correction is needed. Property P2 claims that the conductance of the giant is whp Ω(1/ln n). The proof of P2 in the appendix of the paper is not quite complete. A complete proof of Property P2 can be found in [1,2]. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2009 [source]


Olfactory fossa of Tremacebus harringtoni (Platyrrhini, early Miocene, Sacanana, Argentina): Implications for activity pattern

THE ANATOMICAL RECORD : ADVANCES IN INTEGRATIVE ANATOMY AND EVOLUTIONARY BIOLOGY, Issue 1 2004
Richard F. Kay
Abstract CT imaging was undertaken on the skull of the approximately 20-Myr-old Miocene Tremacebus harringtoni. Here we report our observations on the relative size of the olfactory fossa and its implications for the behavior of Tremacebus. The endocranial surface of Tremacebus is incomplete, so precise estimates of brain size and olfactory fossa size are not possible. However, olfactory fossa breadth and maximum endocranial breadth measured from CT images of one catarrhine species and eight platyrrhine species for which volumes of the olfactory bulb and brain are known show that the osteological proxies give a reasonably accurate indication of relative olfactory bulb size. Nocturnal Aotus has the largest relative olfactory fossa breadth and the largest olfactory bulb volume compared to brain volume among extant anthropoids. Tremacebus had a much smaller olfactory fossa breadth and, by inference, bulb volume, within the range of our sample of diurnal anthropoids. Variations in the relative size of the olfactory bulbs in platyrrhines appear to relate to the importance of olfaction in daily behaviors. Aotus has the largest olfactory bulbs among platyrrhines and relies more on olfactory cues when foraging than Cebus, Callicebus, or Saguinus. As in other examples of nocturnal versus diurnal primates, nocturnality may have been the environmental factor that selected for this difference in Aotus, although communication and other behaviors are also likely to select for olfactory variation in diurnal anthropoids. Considering the olfactory fossa size of Tremacebus, the olfactory ability of this Miocene monkey was probably not as sensitive as that of Aotus, which counts against the hypothesis that Tremacebus was nocturnal. This finding accords well with previous observations that the orbits of Tremacebus are not as large as those of the nocturnal Aotus. © 2004 Wiley-Liss, Inc. [source]


A Bayesian Spatial Multimarker Genetic Random-Effect Model for Fine-Scale Mapping

ANNALS OF HUMAN GENETICS, Issue 5 2008
M.-Y. Tsai
Summary Multiple markers in linkage disequilibrium (LD) are usually used to localize the disease gene location. These markers may contribute to the disease etiology simultaneously. In contrast to the single-locus tests, we propose a genetic random effects model that accounts for the dependence between loci via their spatial structures. In this model, the locus-specific random effects measure not only the genetic disease risk, but also the correlations between markers. In other words, the model incorporates this relation in both mean and covariance structures, and the variance components play important roles. We consider two different settings for the spatial relations. The first is our proposal, relative distance function (RDF), which is intuitive in the sense that markers nearby are likely to correlate with each other. The second setting is a common exponential decay function (EDF). Under each setting, the inference of the genetic parameters is fully Bayesian with Markov chain Monte Carlo (MCMC) sampling. We demonstrate the validity and the utility of the proposed approach with two real datasets and simulation studies. The analyses show that the proposed model with either one of two spatial correlations performs better as compared with the single locus analysis. In addition, under the RDF model, a more precise estimate for the disease locus can be obtained even when the candidate markers are fairly dense. In all simulations, the inference under the true model provides unbiased estimates of the genetic parameters, and the model with the spatial correlation structure does lead to greater confidence interval coverage probabilities. [source]
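Of the two spatial settings, the exponential decay function has a conventional form; the kernel below is a sketch of that usual parameterisation (an assumption, since the abstract does not write it out), while the relative distance function is the authors' own construction and is not reproduced here.

\[
  \operatorname{corr}\!\left(b_j, b_k\right) \;=\; \exp\!\left(-\, d_{jk} / \phi \right),
\]

where b_j is the random effect of marker j, d_{jk} is the distance between markers j and k, and φ sets how quickly the correlation decays.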


Marine fish diversity on the Scotian Shelf, Canada

AQUATIC CONSERVATION: MARINE AND FRESHWATER ECOSYSTEMS, Issue 4 2003
Nancy L. Shackell
Abstract 1. Marine life in offshore regions has not been fully censused, yet related conservation policy relies on our ability to identify areas of high biodiversity. 2. We assessed the census of marine finfish on the Scotian Shelf, Northwest Atlantic, using data collected during annual research vessel surveys between 1970 and 2000. The species accumulation curve did not reach an asymptote, reflecting that new species continued to be discovered throughout the survey period. Only 0.12% of the area of the Scotian Shelf has been sampled since 1970. 3. Since 1974, when over 50% of the species had been discovered, the community composition has been relatively constant. However, the dominance structure has changed dramatically as reflected in the geographic contraction of the formerly abundant, large-bodied piscivores concomitant with the geographic expansion of their prey species. 4. The region is under-sampled, and species' distribution and abundance are changing. A precise estimate of diversity is elusive. As an alternative, we searched for physical correlates of finfish diversity to identify its possible surrogates. Surrogates have potential both as a method for understanding process and as a tool for conservation management. We examined the effect of area and depth range on species richness. High species richness was associated with larger areas and greater depth range at large spatial scales. 5. Highly diverse areas include the Bay of Fundy, the Eastern Gully, the slopes, Western Bank and the northeastern shelf. Until now, the northeastern shelf has been under-appreciated as a highly diverse area. Such information will be important for environmental impact assessments as well as selection of 'sensitive' or protected areas. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Monitoring local trends in Indigenous tobacco consumption

AUSTRALIAN AND NEW ZEALAND JOURNAL OF PUBLIC HEALTH, Issue 1 2009
David P. Thomas
Abstract Objective: To compare two methods of monitoring tobacco consumption in remote Indigenous communities. Methods: We examined the monthly difference between wholesale invoice and point-of-sale data for tobacco products from three stores from remote Aboriginal communities in the Northern Territory. We assessed three measures of wholesale data. Results: The average monthly difference between the sales data and the average of wholesale invoices for the previous, same and following month was -33 cigarettes per day (95% CI -157, 92). This average of three months' wholesale invoices provided a more precise estimate than either wholesale invoices from the same or previous month. Conclusion: Tobacco wholesale data provided a close estimate of sales data in these stores. Implications: This wholesale data could be used to monitor local trends in remote Indigenous tobacco consumption, facilitating the evaluation of the impact of tobacco control activities and informing future work to reduce Indigenous smoking and its harms. [source]


Advantages of mixed effects models over traditional ANOVA models in developmental studies: A worked example in a mouse model of fetal alcohol syndrome

DEVELOPMENTAL PSYCHOBIOLOGY, Issue 7 2007
Patricia E. Wainwright
Abstract Developmental studies in animals often violate the assumption of statistical independence of observations due to the hierarchical nature of the data (i.e., pups cluster by litter, correlation of individual observations over time). Mixed effect modeling (MEM) provides a robust analytical approach for addressing problems associated with hierarchical data. This article compares the application of MEM to traditional ANOVA models within the context of a developmental study of prenatal ethanol exposure in mice. The results of the MEM analyses supported the ANOVA results in showing that a large proportion of the variability in both behavioral score and brain weight could be explained by ethanol. The MEM also identified that there were significant interactions between ethanol and litter size in relation to behavioral scores and brain weight. In addition, the longitudinal modeling approach using linear MEM allowed us to model for flexible weight gain over time, as well as to provide precise estimates of these effects, which would be difficult in repeated measures ANOVA. © 2007 Wiley Periodicals, Inc. Dev Psychobiol 49: 664–674, 2007. [source]
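A minimal sketch of the litter-as-random-effect idea using statsmodels' MixedLM; the data, variable names, and effect sizes below are fabricated for illustration and are not from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Fabricated developmental data: pups are clustered within litters, and the
# ethanol treatment is assigned at the litter level.
n_litters, pups_per_litter = 20, 6
litter = np.repeat(np.arange(n_litters), pups_per_litter)
ethanol = np.repeat(rng.integers(0, 2, n_litters), pups_per_litter)
litter_effect = np.repeat(rng.normal(0, 0.5, n_litters), pups_per_litter)
brain_weight = 4.0 - 0.6 * ethanol + litter_effect + rng.normal(0, 0.3, litter.size)

df = pd.DataFrame({"brain_weight": brain_weight, "ethanol": ethanol, "litter": litter})

# A random intercept for litter absorbs the shared litter effect, so littermates
# are not treated as independent replicates of the litter-level treatment.
model = smf.mixedlm("brain_weight ~ ethanol", data=df, groups=df["litter"]).fit()
print(model.summary())
```

Fitting the same data with a plain OLS/ANOVA on individual pups would ignore the litter clustering and typically understate the standard error of the ethanol effect.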


Comparing weighted and unweighted analyses applied to data with a mix of pooled and individual observations

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 5 2010
Sarah G. Anderson
Abstract Smaller organisms may have too little tissue to allow assaying as individuals. To get a sufficient sample for assaying, a collection of smaller individual organisms is pooled together to produce a single observation for modeling and analysis. When a dataset contains a mix of pooled and individual organisms, the variances of the observations are not equal. An unweighted regression method is no longer appropriate because it assumes equal precision among the observations. A weighted regression method is more appropriate and yields more precise estimates because it applies a weight to the pooled observations. To demonstrate the benefits of using a weighted analysis when some observations are pooled, the bias and confidence interval (CI) properties were compared using an ordinary least squares and a weighted least squares t-based confidence interval. The slope and intercept estimates were unbiased for both weighted and unweighted analyses. While CIs for the slope and intercept achieved nominal coverage, the CI lengths were smaller using a weighted analysis instead of an unweighted analysis, implying that a weighted analysis will yield greater precision. Environ. Toxicol. Chem. 2010;29:1168–1171. © 2010 SETAC [source]
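A minimal sketch of the weighting idea: if a pooled observation is the mean of k individuals, its variance is roughly σ²/k, so it receives weight k in a weighted least squares fit. The data and names below are illustrative, not from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Illustrative concentration-response data: some rows are individual organisms
# (pool size 1), others are means over pools of several small organisms.
dose = np.array([0, 0, 1, 1, 2, 2, 4, 4, 8, 8], dtype=float)
pool_size = np.array([1, 1, 1, 4, 1, 4, 1, 5, 1, 5])
response = 2.0 + 0.8 * dose + rng.normal(0, 1.0, dose.size) / np.sqrt(pool_size)

X = sm.add_constant(dose)

# Unweighted fit: every row is treated as equally precise.
ols = sm.OLS(response, X).fit()

# Weighted fit: Var(pooled mean) = sigma^2 / k, so weight each row by its pool size.
wls = sm.WLS(response, X, weights=pool_size).fit()

print("OLS slope SE:", float(ols.bse[1]))
print("WLS slope SE:", float(wls.bse[1]))
```

The smaller slope standard error from the weighted fit mirrors the shorter confidence intervals reported in the abstract.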


A simulation study comparing different experimental designs for estimating uptake and elimination rates

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 1 2006
Bryan R. Bell
Abstract The design of ecotoxicological studies requires decisions about the number and spacing of exposure groups tested, the number of replications, the spacing of sampling times, the duration of the study, and other considerations. For example, geometric spacing of sampling times or toxicant concentrations is often used as a default design. Optimal design methods in statistics can suggest alternative spacing of sampling times that yield more precise estimates of regression coefficients. In this study, we use a computer simulation to explore the impact of the spacing of sampling times and other factors on the estimation of uptake and elimination rate constants in an experiment addressing the bioaccumulation of a contaminant. Careful selection of sampling times can result in smaller standard errors for the parameter estimates, thereby allowing the construction of smaller, more precise confidence intervals. Thus, the effort invested in constructing an optimal experimental design may result in more precise inference or in a reduction of replications in an experimental design. [source]
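For orientation, the uptake and elimination rate constants in such bioaccumulation experiments usually refer to the first-order one-compartment model below; this standard parameterisation is assumed here, since the abstract does not write it out.

\[
  \frac{dC_{\text{org}}}{dt} \;=\; k_u\, C_w \;-\; k_e\, C_{\text{org}},
  \qquad
  C_{\text{org}}(t) \;=\; \frac{k_u}{k_e}\, C_w \left( 1 - e^{-k_e t} \right)
  \quad \text{(uptake phase, constant exposure } C_w\text{)},
\]

so the spacing of sampling times determines how well the curvature of this curve, and hence k_u and k_e, can be resolved.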


Ds-optimal designs for studying combinations of chemicals using multiple fixed-ratio ray experiments

ENVIRONMETRICS, Issue 2 2005
Michelle Casey
Abstract Detecting and characterizing interactions among chemicals is an important environmental issue. Traditional factorial designs become infeasible as the number of compounds under study increases. Ray designs, which reduce the amount of experimental effort, can be considered when interest is restricted to relevant mixing ratios. Simultaneous tests for departure from additivity across multiple fixed-ratio rays in the presence and absence of single chemical data have been developed. Tests for characterizing interactions among subsets of chemicals at relevant mixing ratios have also been developed. Of primary importance are precise estimates for the parameters associated with these hypotheses. Since the hypotheses of interest are stated in terms of subsets of parameters, we have developed a methodology for finding Ds-optimal designs, which are associated with the minimum generalized variance of subsets of the parameter vector, along fixed-ratio rays. We illustrate these methods by characterizing the interactions of five organophosphorus pesticides (full-ray) as well as a subset of pesticides (reduced-ray) on a measure of motor activity. Copyright © 2004 John Wiley & Sons, Ltd. [source]


Linkage disequilibrium estimates of contemporary Ne using highly variable genetic markers: a largely untapped resource for applied conservation and evolution

EVOLUTIONARY APPLICATIONS (ELECTRONIC), Issue 3 2010
Robin S. Waples
Abstract Genetic methods are routinely used to estimate contemporary effective population size (Ne) in natural populations, but the vast majority of applications have used only the temporal (two-sample) method. We use simulated data to evaluate how highly polymorphic molecular markers affect precision and bias in the single-sample method based on linkage disequilibrium (LD). Results of this study are as follows: (1) Low-frequency alleles upwardly bias the estimate of Ne, but a simple rule can reduce this bias. (2) Precise estimates can be obtained for relatively small populations (Ne < 200), and small populations are not likely to be mistaken for large ones; however, it is very difficult to obtain reliable estimates for large populations. (3) With 'microsatellite' data, the LD method has greater precision than the temporal method, unless the latter is based on samples taken many generations apart. Our results indicate the LD method has widespread applicability to conservation (which typically focuses on small populations) and the study of evolutionary processes in local populations. Considerable opportunity exists to extract more information about Ne in nature by wider use of single-sample estimators and by combining estimates from different methods. [source]
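For context, the LD method rests on an approximate relation between mean squared linkage disequilibrium and effective size; the widely cited form for unlinked loci is sketched below (with S the sample size), as orientation rather than the exact bias-corrected estimator evaluated in the paper.

\[
  \mathbb{E}\!\left[ \hat{r}^{2} \right] \;\approx\; \frac{1}{3 N_e} + \frac{1}{S}
  \quad\Longrightarrow\quad
  \hat{N}_e \;\approx\; \frac{1}{3\left( \hat{r}^{2} - 1/S \right)} ,
\]

which makes clear why large populations are hard to estimate: the signal 1/(3Ne) becomes small relative to the sampling term 1/S.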


Share Repurchase Offers and Liquidity: An Examination of Temporary and Permanent Effects

FINANCIAL MANAGEMENT, Issue 2 2008
Nandkumar Nayar
Open-market repurchase programs do not allow for precise estimates of share buy-back intensity to measure liquidity effects. To circumvent the uncertainty surrounding the quantity and timing of shares truly acquired in repurchase programs and to measure their long-term impact, we examine Dutch auctions and fixed-price tender offers. We investigate both the temporary and permanent liquidity effects of share repurchase programs and find that the improvement in liquidity is transitory and limited to the tender period when the firm's offer to repurchase shares is outstanding. Improvements in liquidity over longer intervals appear to be the result of an overall price improvement and a reduction in volatility rather than the result of structural change in market dynamics. [source]


Estimating deer abundance from line transect surveys of dung: sika deer in southern Scotland

JOURNAL OF APPLIED ECOLOGY, Issue 2 2001
Fernanda F.C. Marques
Summary 1. Accurate and precise estimates of abundance are required for the development of management regimes for deer populations. In woodland areas, indirect dung count methods, such as the clearance plot and standing crop methods, are currently the preferred procedures to estimate deer abundance. The use of line transect methodology is likely to provide a cost-effective alternative to these methods. 2. We outline a methodology based on line transect surveys of deer dung that can be used to obtain deer abundance estimates by geographical block and habitat type. Variance estimation procedures are also described. 3. As an example, we applied the method to estimate sika deer Cervus nippon abundance in south Scotland. Estimates of deer defecation rate and length of time to dung decay were used to convert pellet group density to deer density by geographical block and habitat type. The results obtained agreed with knowledge from cull and sightings data, and the precision of the estimates was generally high. 4. Relatively high sika deer densities observed in moorland areas up to 300 m from the forest edge indicated the need to encompass those areas in future surveys to avoid an underestimate of deer abundance in the region of interest. 5. It is unlikely that a single method for estimating deer abundance will prove to be better under all circumstances. Direct comparisons between methods are required to evaluate thoroughly the relative merits of each of them. 6. Line transect surveys of dung are becoming a widely used tool to aid management and conservation of a wide range of species. The survey methodology we outline is readily adaptable to other vertebrates that are amenable to dung survey methodology. [source]
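The conversion mentioned in point 3 follows the usual identity for dung-based surveys, shown below as a sketch of the standard calculation rather than the paper's exact formula: pellet-group density from the line transect estimator is divided by the product of the defecation rate and the mean time to decay.

\[
  \hat{D}_{\text{deer}} \;=\;
  \frac{\hat{D}_{\text{pellet groups}}}
       {\text{defecation rate (groups deer}^{-1}\text{ day}^{-1}) \,\times\, \text{mean time to decay (days)}} .
\]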


Participatory wildlife surveys in communal lands: a case study from Simanjiro, Tanzania

AFRICAN JOURNAL OF ECOLOGY, Issue 3 2010
Fortunata U. Msoffe
Abstract It is widely accepted that protected areas alone are not sufficient to conserve wildlife populations, particularly for migratory or wide-ranging species. In this study, we assess the population density of migratory species in the Tarangire–Simanjiro Ecosystem by conducting a ground census using DISTANCE sampling. We focus on the Simanjiro Plains, which are used as a dispersal area by wildebeest (Connochaetes taurinus) and zebra (Equus burchellii). We demonstrate that DISTANCE sampling can provide precise estimates of population density and is an affordable method for monitoring wildlife populations over time. We stress the importance of involving local communities in monitoring programmes across landscapes that incorporate communal lands as well as protected areas. [source]


A precise water displacement method for estimating egg volume

JOURNAL OF FIELD ORNITHOLOGY, Issue 2 2009
Scott A. Rush
ABSTRACT Relationships between egg volume and an array of life-history traits have been identified for many bird species. Despite the importance of egg volume and the need for precise and accurate measurements, egg volume is usually estimated using a mathematical model that incorporates length and width measurements along with a shape variable. We developed an instrument that provides precise estimates of egg volume and can be easily used in the field. Using Clapper Rail (Rallus longirostris) eggs, we compared egg volumes measured using our instrument with estimates based on linear measurements. We found our instrument to be both precise and accurate. Compared with a method based on linear measurements of eggs, use of our instrument reduced variation in egg volume estimates by 1.6 cm3, approximately 8% of the volume of a Clapper Rail's egg. Further advantages of our technique include ease of use, increased accuracy of field-based volume estimates, and increased resolution of variation in egg volume estimates. In addition, our technique does not require post-data-collection processing time and did not influence hatching success. Also, for Clapper Rails and similar species, our technique can be combined with other techniques (e.g., egg flotation) so that both egg volume and embryonic stage can be estimated at the same time. [source]


Bank Mergers, Competition, and Liquidity

JOURNAL OF MONEY, CREDIT AND BANKING, Issue 5 2007
ELENA CARLETTI
Keywords: credit market competition; bank reserves; internal money market; banking system liquidity; monetary operations
We model the impact of bank mergers on loan competition, reserve holdings, and aggregate liquidity. A merger changes the distribution of liquidity shocks and creates an internal money market, leading to financial cost efficiencies and more precise estimates of liquidity needs. The merged banks may increase their reserve holdings through an internalization effect or decrease them because of a diversification effect. The merger also affects loan market competition, which in turn modifies the distribution of bank sizes and aggregate liquidity needs. Mergers among large banks tend to increase aggregate liquidity needs and thus the public provision of liquidity through monetary operations of the central bank. [source]


Drinking Patterns and Myocardial Infarction: A Linear Dose–Response Model

ALCOHOLISM, Issue 2 2009
Marcia Russell
Background: The relation of alcohol intake to cardiovascular health is complex, involving both protective and harmful effects, depending on the amount and pattern of consumption. Interpretation of data available on the nature of these relations is limited by lack of well-specified, mathematical models relating drinking patterns to alcohol-related consequences. Here we present such a model and apply it to data on myocardial infarction (MI). Methods: The dose–response model derived assumes: (1) each instance of alcohol use has an effect that either increases or decreases the likelihood of an alcohol-related consequence, and (2) greater quantities of alcohol consumed on any drinking day add linearly to these increases or decreases in risk. Risk was reduced algebraically to a function of drinking frequency and dosage (volume minus frequency, a measure of the extent to which drinkers have more than 1 drink on days when they drink). In addition to estimating the joint impact of frequency and dosage, the model provides a method for calculating the point at which risk related to alcohol consumption is equal to background risk from other causes. A bootstrapped logistic regression based on the dose–response model was conducted using data from a case-control study to obtain the predicted probability of MI associated with current drinking patterns, controlling for covariates. Results: MI risk decreased with increasing frequency of drinking, but increased as drinking dosage increased. Rates of increasing MI risk associated with drinking dosage were twice as high among women as they were among men. Relative to controls, lower MI risk was associated with consuming < 4.55 drinks per drinking day for men (95% CI: 2.77 to 7.18) and < 3.08 drinks per drinking day for women (95% CI: 1.35 to 5.16), increasing after these cross-over points were exceeded. Conclusions: Use of a well-specified mathematical dose–response model provided precise estimates for the first time of how drinking frequency and dosage each contribute linearly to the overall impact of a given drinking pattern on MI risk in men and women. [source]
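A hedged sketch of the kind of specification described: frequency and dosage (volume minus frequency) enter a logistic model for MI linearly, alongside covariates. The symbol names are illustrative, not taken from the paper.

\[
  \operatorname{logit} P(\text{MI}) \;=\; \beta_0
  \;+\; \beta_f \cdot \text{frequency}
  \;+\; \beta_d \cdot \underbrace{(\text{volume} - \text{frequency})}_{\text{dosage}}
  \;+\; \boldsymbol{\gamma}^{\top} \mathbf{z},
\]

with the cross-over point being the drinks-per-drinking-day level at which the combined alcohol terms sum to zero, i.e. where alcohol-related risk equals the background risk from other causes.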


Alleviating linear ecological bias and optimal design with subsample data

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2008
Adam N. Glynn
Summary. We illustrate that combining ecological data with subsample data in situations in which a linear model is appropriate provides two main benefits. First, by including the individual level subsample data, the biases that are associated with linear ecological inference can be eliminated. Second, available ecological data can be used to design optimal subsampling schemes that maximize information about parameters. We present an application of this methodology to the classic problem of estimating the effect of a college degree on wages, showing that small, optimally chosen subsamples can be combined with ecological data to generate precise estimates relative to a simple random subsample. [source]


Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2009
Håvard Rue
Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. [source]


Variability in the nutritional value of the major copepods in Cape Cod Bay (Massachusetts, USA) with implications for right whales

MARINE ECOLOGY, Issue 2 2006
Amy DeLorenzo Costa
Abstract The North Atlantic right whale, a seriously endangered species, is found in Cape Cod Bay (Massachusetts, USA) during the winter and early spring. During their residency in these waters, these whales are frequently observed feeding. This study evaluated spatial and temporal changes in the chemical composition (carbon weight and C/N ratio) of the food resource targeted by the right whales in Cape Cod Bay. The three taxa measured (Centropages typicus, Pseudocalanus spp., and Calanus finmarchicus) had highly variable chemical compositions resulting from the different life strategies and from fluctuations in their surrounding environment. The impact of seasonal variability in the energy densities of the food resource of right whales was calculated and compared to the energetic requirements of these whales. Calculations indicated that differences in the nutritional content of the zooplankton prey in Cape Cod Bay could have a considerable effect on the nutrition available to the right whales. Therefore, it is likely that using more precise estimates of the energetic densities of the prey of right whales would lead to a re-evaluation of the adequacy of the food resource available to these whales in the North Atlantic. [source]


The effects of interpolation error and location quality on animal track reconstruction

MARINE MAMMAL SCIENCE, Issue 2 2009
Mike Lonergan
Abstract The Global Positioning System (GPS) gives precise estimates of location. However, the investigation of animal movement and behavior often requires interpolation to examine events between such fixes. We obtained 6,288 GPS locations from an electronic tag deployed for 170 d on an adult male gray seal (Halichoerus grypus) that ranged freely off the east coast of Scotland, and interpolated between subsamples of these data to investigate the growth of uncertainty within the intervals between observations. Average uncertainty over the path increased linearly as the interval between interpolating locations increased, reaching 12 km in longitude and 6 km in latitude at 2-d separation. The decrease in precision caused by duty-cycling, only collecting data in part of the day, was demonstrated. Adding noise to the GPS locations to simulate data from the ARGOS satellite system had little effect on the total errors for observations separated by more than 12 h. While the rate of growth in interpolation error is likely to vary between species, these results suggest that frequent, and preferably evenly spaced, location fixes are required to take full advantage of the precision of GPS data in the reconstruction of animal tracks. [source]


METHODS FOR JOINT INFERENCE FROM MULTIPLE DATA SOURCES FOR IMPROVED ESTIMATES OF POPULATION SIZE AND SURVIVAL RATES

MARINE MAMMAL SCIENCE, Issue 3 2004
Daniel Goodman
Abstract Critical conservation decisions often hinge on estimates of population size, population growth rate, and survival rates, but as a practical matter it is difficult to obtain enough data to provide precise estimates. Here we discuss Bayesian methods for simultaneously drawing on the information content from multiple sorts of data to get as much precision as possible for the estimates. The basic idea is that an underlying population model can connect the various sorts of observations, so this can be elaborated into a joint likelihood function for joint estimation of the respective parameters. The potential for improved estimates derives from the potentially greater effective sample size of the aggregate of data, even though some of the data types may only bear directly on a subset of the parameters. The achieved improvement depends on specifics of the interactions among parameters in the underlying model, and on the actual content of the data. Assuming the respective data sets are unbiased, notwithstanding the fact that they may be noisy, we may gauge the average improvement in the estimates of the parameters of interest from the reduction, if any, in the standard deviations of their posterior marginal distributions. Prospective designs may be evaluated from analysis of simulated data. Here this approach is illustrated with an assessment of the potential value in various ways of merging mark-resight and carcass-survey data for the Florida manatee, as could be made possible by various modifications in the data collection protocols in both programs. [source]
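The joint-likelihood idea can be written compactly. Assuming the data sources are conditionally independent given the shared population parameters θ (an assumption made here for illustration), the posterior combines as below; this is the general structure, not the manatee-specific model.

\[
  p\!\left(\theta \mid D_{\text{mark-resight}}, D_{\text{carcass}}\right)
  \;\propto\;
  L_{1}\!\left(D_{\text{mark-resight}} \mid \theta\right)\,
  L_{2}\!\left(D_{\text{carcass}} \mid \theta\right)\,
  \pi(\theta),
\]

so each data set contributes its own likelihood factor and sharpens the posterior marginals of whatever parameters it informs.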


Evaluating capture–recapture population and density estimation of tigers in a population with known parameters

ANIMAL CONSERVATION, Issue 1 2010
R. K. Sharma
Abstract Conservation strategies for endangered species require accurate and precise estimates of abundance. Unfortunately, obtaining unbiased estimates can be difficult due to inappropriate estimator models and study design. We evaluate population-density estimators for tigers Panthera tigris in Kanha Tiger Reserve, India, using camera traps in conjunction with telemetry (n=6) in a known minimum population of 14 tigers. An effort of 462 trap nights over 42 days yielded 44 photographs of 12 adult tigers. Using closed population estimators, the best-fit model (program CAPTURE) accounted for individual heterogeneity (Mh). The least biased and most precise population estimate, 14 (SE 1.89), was obtained by the Mh Jackknife 1 (JK1) estimator in program CARE-2. Tiger density (SE) per 100 km2 was estimated at 13 (2.08) when the effective trapping area was estimated using the half mean maximum distance moved (1/2 MMDM), 8.1 (2.08) using the home-range radius, 7.8 (1.59) with the full MMDM, and 8.0 (3.0) with the spatial likelihood method in program DENSITY 4.1. The actual density of collared tigers (3.27 per 100 km2) was closely estimated by home-range radius at 3.9 (0.76), full MMDM at 3.48 (0.81), and spatial likelihood at 3.78 (1.54), but overestimated by 1/2 MMDM at 6 (0.81) tigers per 100 km2. Sampling costs (Rs. 450 per camera day) increased linearly with camera density, while the precision of population estimates leveled off at 25 cameras per 100 km2. At simulated low tiger densities, a camera density of 50 per 100 km2 with an effort of 8 trap nights per km2 provided 95% confidence coverage, but estimates lacked precision. [source]


Effects of trapping effort and trap shyness on estimates of tiger abundance from camera trap studies

ANIMAL CONSERVATION, Issue 3 2004
Per Wegge
Camera trapping has recently been introduced as an unbiased and practical method for monitoring tiger abundance. In a high density area in the Royal Bardia National Park in lowland Nepal, we tested this method by trapping very intensively within a 25 km2 area to determine the true number of animals in that area. We then tested the effect of study design by sub-sampling the data set using varying distances between trap stations and by reducing the number of trapping nights at each station. We compared these numbers with the density estimates generated by the capture–recapture models of the program CAPTURE. Both distance between traps and trapping duration greatly influenced the results. For example, increasing the inter-trap distance from 1 to 2.1 km and reducing the trapping duration per station from 15 to 10 nights reduced the number of tigers captured by 25%. A significant decrease in trapping rates during successive 5-night periods suggested that our tigers became trap-shy, probably because of the photo flash and because they detected the camera traps from cues from impression pads 50 m from the traps. A significant behavioural response was also confirmed by the program CAPTURE. The best capture–recapture model selected by the computer program (Mbh) gave precise estimates from data collected by the initial 1 km spacing of traps. However, when we omitted data from half the number of traps, thus decreasing the sampling effort to a more realistic level for monitoring purposes, the program CAPTURE underestimated the true number of tigers. Most probably, this was due to a combination of trap shyness and the way the study was designed. Within larger protected areas, total count from intensive, stratified subsampling is suggested as a complementary technique to the capture–recapture method, since it circumvents the problem of trap shyness. [source]


Proposal for a standardised identification of the mono-exponential terminal phase for orally administered drugs

BIOPHARMACEUTICS AND DRUG DISPOSITION, Issue 3 2008
Christian Scheerans
Abstract The area under the plasma concentration-time curve from time zero to infinity (AUC0–∞) is generally considered to be the most appropriate measure of total drug exposure for bioavailability/bioequivalence studies of orally administered drugs. However, the lack of a standardised method for identifying the mono-exponential terminal phase of the concentration-time curve causes variability for the estimated AUC0–∞. The present investigation introduces a simple method, called the two times tmax method (TTT method), to reliably identify the mono-exponential terminal phase in the case of oral administration. The new method was tested by Monte Carlo simulation in Excel and compared with the adjusted r squared algorithm (ARS algorithm) frequently used in pharmacokinetic software programs. Statistical diagnostics of three different scenarios, each with 10,000 hypothetical patients, showed that the new method provided unbiased average AUC0–∞ estimates for orally administered drugs with a monophasic concentration-time curve post maximum concentration. In addition, the TTT method generally provided more precise estimates for AUC0–∞ compared with the ARS algorithm. It was concluded that the TTT method is a most reasonable tool to be used as a standardised method in pharmacokinetic analysis, especially bioequivalence studies, to reliably identify the mono-exponential terminal phase for orally administered drugs showing a monophasic concentration-time profile. Copyright © 2007 John Wiley & Sons, Ltd. [source]
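A minimal sketch of the TTT idea as described in the abstract: treat all observations after twice tmax as the mono-exponential terminal phase, estimate the terminal rate constant by log-linear regression, and extrapolate the AUC to infinity. The concentration-time values below are invented for illustration.

```python
import numpy as np

# Illustrative oral concentration-time profile (time in h, concentration in mg/L).
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0, 12.0, 24.0])
c = np.array([0.0, 1.8, 2.9, 3.4, 3.1, 2.6, 1.8, 1.2, 0.55, 0.07])

# AUC from 0 to the last sampling time by the linear trapezoidal rule.
auc_0_tlast = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2.0)

# TTT method: the mono-exponential terminal phase is taken as all points after 2 * tmax.
tmax = t[np.argmax(c)]
terminal = t > 2.0 * tmax

# Log-linear regression on the terminal points gives the terminal rate constant lambda_z.
slope, _ = np.polyfit(t[terminal], np.log(c[terminal]), 1)
lambda_z = -slope

# Extrapolate beyond the last observed concentration and assemble AUC(0-inf).
auc_0_inf = auc_0_tlast + c[-1] / lambda_z
print(f"lambda_z = {lambda_z:.3f} 1/h, AUC(0-inf) = {auc_0_inf:.2f} mg*h/L")
```

Because the cut-off is defined purely by tmax, the selection of terminal points is the same for every analyst, which is the standardisation the abstract argues for.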