Reliable Estimates (reliable + estimate)

Selected Abstracts


Absolute palaeointensity of Oligocene (28–30 Ma) lava flows from the Kerguelen Archipelago (southern Indian Ocean)

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2003
G. Plenier
SUMMARY We report palaeointensity estimates obtained from three Oligocene volcanic sections from the Kerguelen Archipelago (Mont des Ruches, Mont des Tempêtes, and Mont Rabouillère). Of 402 available samples, 102 were suitable for a palaeofield strength determination after a preliminary selection, among which 49 provide a reliable estimate. Application of strict a posteriori criteria makes us confident about the quality of the 12 new mean-flow determinations, which are the first reliable data available for the Kerguelen Archipelago. The Virtual Dipole Moments (VDM) calculated for these flows vary from 2.78 × 10²² to 9.47 × 10²² Am², with an arithmetic mean of (6.15 ± 2.1) × 10²² Am². Compilation of these results with a selection from the 2002 updated IAGA palaeointensity database leads to a higher Oligocene mean VDM ((5.4 ± 2.3) × 10²² Am²) than previously reported (Goguitchaichvili et al. 2001; Riisager 1999), identical to the (5.5 ± 2.4) × 10²² Am² mean VDM obtained for the 0.3–5 Ma time window. However, these Kerguelen palaeointensity estimates represent half of the reliable Oligocene determinations and may thus bias the mean toward higher values. Nonetheless, the new estimates reported here strengthen the conclusion that the recent geomagnetic field strength is anomalously high compared with that older than 0.3 Ma. [source]
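The virtual dipole moments quoted above follow from the standard dipole-field relation between a measured palaeointensity and the site's magnetic colatitude. A minimal sketch of that conversion (not the authors' code; the sample field value is illustrative):

```python
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T·m/A
R_EARTH = 6.371e6          # Earth's radius, m

def virtual_dipole_moment(b_tesla, magnetic_colatitude_deg):
    """VDM (Am²) from a palaeointensity B via the dipole relation
    B = (mu0 * m / (4 * pi * R³)) * sqrt(1 + 3 * cos²(theta))."""
    theta = math.radians(magnetic_colatitude_deg)
    return (4 * math.pi * R_EARTH**3 * b_tesla) / (
        MU0 * math.sqrt(1 + 3 * math.cos(theta)**2))

# An equatorial (colatitude 90°) palaeointensity of 30 µT gives a VDM
# close to the present-day dipole moment of roughly 8 × 10²² Am².
vdm = virtual_dipole_moment(30e-6, 90.0)
```

Applying this flow by flow and averaging the resulting moments yields mean-VDM figures of the form quoted in the abstract.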


Characterizing interannual variations in global fire calendar using data from Earth observing satellites

GLOBAL CHANGE BIOLOGY, Issue 9 2005
César Carmona-Moreno
Abstract Daily global observations from the Advanced Very High Resolution Radiometers on the series of meteorological satellites operated by the National Oceanic and Atmospheric Administration between 1982 and 1999 were used to generate a new weekly global burnt surface product at a resolution of 8 km. Comparison with independently available information on fire locations and timing suggests that, while the time-series cannot yet be used to make accurate, quantitative estimates of global burnt area, it does provide a reliable estimate of changes in the location and season of burning at the global scale. This time-series was used to characterize fire activity in both the northern and southern hemispheres on the basis of the average seasonal cycle and interannual variability. Fire seasonality and fire distribution data sets have been combined to provide gridded maps at 0.5° resolution documenting the probability of fire occurring in any given season at any location. A multiannual variogram constructed from 17 years of observations shows good agreement between the spatial–temporal behavior of fire activity and 'El Niño' Southern Oscillation events, suggesting a strong connection between the two phenomena. [source]


Recharge Through a Regional Till Aquitard: Three-Dimensional Flow Model Water Balance Approach

GROUND WATER, Issue 3 2000
Richard E. Gerber
In southern Ontario, vertical leakage through a regionally extensive till is the primary source of recharge to underlying aquifers used for domestic and municipal water supply. Since leakage is largely controlled by the bulk hydraulic conductivity (K) of the aquitard, accurate estimates of K are necessary to quantify the resource. Considerable controversy exists regarding estimates of K for this aquitard, which vary according to the scale of the test method. For the till matrix, estimates from core samples and slug tests consistently range from 10⁻¹¹ to 10⁻¹⁰ m/s. Isotopic evidence (³H), on the other hand, indicates that nonmatrix structures such as sand lenses, erosional surfaces, joints, and fractures significantly enhance till permeability. This is confirmed by slug test, pump test, recharge, and water balance studies, which show that K varies over seven orders of magnitude (10⁻¹² to 10⁻⁵ m/s). To provide a regional estimate of bulk K and a reliable estimate of vertical recharge through the Northern Till, a numerical ground water flow model was constructed for the Duffins and Petticoat Creek drainage basin. The model was calibrated to measurements of hydraulic head and to estimates and measurements of base flow throughout the basin. This model demonstrates that the vertical hydraulic conductivity (Kv) of the Northern Till ranges from 5 × 10⁻¹⁰ to 5 × 10⁻⁹ m/s, values up to 2.5 orders of magnitude greater than matrix K estimates. Regional recharge through the Northern Till is estimated to range from 30 to 35 mm/a. [source]
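Leakage through an aquitard of this kind follows Darcy's law, q = Kv · i, so converting the calibrated Kv range to an annual flux shows how sensitive recharge is to the conductivity estimate. A hedged sketch (the vertical gradient of 0.1 is an assumed, illustrative value, not one taken from the study):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ≈ 3.156e7 s

def darcy_recharge_mm_per_year(kv_m_per_s, vertical_gradient):
    """Vertical Darcy flux q = Kv * i, converted from m/s to mm/a."""
    return kv_m_per_s * vertical_gradient * SECONDS_PER_YEAR * 1000.0

# The calibrated Kv range of 5e-10 to 5e-9 m/s, under an assumed
# downward gradient of 0.1, brackets roughly 1.6 to 16 mm/a:
low = darcy_recharge_mm_per_year(5e-10, 0.1)
high = darcy_recharge_mm_per_year(5e-9, 0.1)
```

By the same arithmetic, matching the study's 30–35 mm/a at the upper Kv value would require a downward gradient of about 0.2.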


Immune tolerance induction in patients with haemophilia A with inhibitors: a systematic review

HAEMOPHILIA, Issue 4 2003
J. Wight
Summary. In some patients with haemophilia A, therapeutically administered factor VIII (FVIII) comes to stimulate the production of antibodies (inhibitors) which react with FVIII to render it ineffective. As a result, FVIII cannot be used prophylactically and patients become liable to recurrent bleeds. There are two elements to the management of patients with inhibitors: the treatment of bleeding episodes, and attempts to abolish inhibitor production through the induction of immune tolerance. This paper reports a systematic review of the best available evidence of clinical effectiveness in relation to immune tolerance induction (ITI) in patients with haemophilia A with inhibitors. Owing to the lack of randomized controlled trials on this topic, broad inclusion criteria with regard to study design were applied in order to assess the best available evidence for each intervention. As a result of the clinical and methodological heterogeneity of the evidence, it was not appropriate to pool data across studies; instead, data were synthesized using tabulation and qualitative narrative assessment. The International Registry provides the most reliable estimate of the proportion of successful cases of ITI [48.7%, 95% confidence interval (CI) 42.6–52.7%]. The duration of effect is unclear, but relapses appear to be infrequent. The International Registry shows a rate of relapse of 15% at 15 years. The comparative effectiveness of different protocols is uncertain, as no trials have been undertaken which compare them directly. However, the evidence suggests that the Bonn protocol may be more effective than the Malmö or low-dose protocols. There is no good evidence that immunosuppressive drug regimens are effective. [source]


Above-stream microclimate and stream surface energy exchanges in a wildfire-disturbed riparian zone

HYDROLOGICAL PROCESSES, Issue 17 2010
J. A. Leach
Abstract Stream temperature and riparian microclimate were characterized for a 1·5 km wildfire-disturbed reach of Fishtrap Creek, located north of Kamloops, British Columbia. A deterministic net radiation model was developed using hemispherical canopy images coupled with on-site microclimate measurements. Modelled net radiation agreed reasonably with measured net radiation. Air temperature and humidity measured at two locations above the stream, separated by 900 m, were generally similar, whereas wind speed was poorly correlated between the two sites. Modelled net radiation varied considerably along the reach, and measurements at a single location did not provide a reliable estimate of the modelled reach average. During summer, net radiation dominated the surface heat exchanges, particularly because the sensible and latent heat fluxes were normally of opposite sign and thus tended to cancel each other. All surface heat fluxes shifted to negative values in autumn and were of similar magnitude through winter. In March, net radiation became positive, but heat gains were cancelled by sensible and latent heat fluxes, which remained negative. A modelling exercise using three canopy cover scenarios (current, simulated pre-wildfire and simulated complete vegetation removal) showed that net radiation under the standing dead trees was double that modelled for the pre-fire canopy cover. However, post-disturbance standing dead trees reduce daytime net radiation reaching the stream surface by one-third compared with complete vegetation removal. The results of this study have highlighted the need to account for reach-scale spatial variability of energy exchange processes, especially net radiation, when modelling stream energy budgets. Copyright © 2010 John Wiley & Sons, Ltd. [source]


A new space and time sensor fusion method for mobile robot navigation

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 7 2004
TaeSeok Jin
To fully utilize the information from the sensors of a mobile robot, this paper proposes a new sensor-fusion technique in which the sample data set obtained at a previous instant is properly transformed and fused with the current data sets to produce a reliable estimate for navigation control. Exploration of an unknown environment is an important task for the new generation of mobile service robots. Mobile robots may navigate by means of a number of monitoring systems, such as a sonar-sensing system or a visual-sensing system. Notice that in conventional fusion schemes the measurement depends on the current data sets only; therefore, more sensors are required to measure a given physical parameter or to improve the reliability of the measurement. In this approach, however, instead of adding more sensors to the system, the temporal sequences of the data sets are stored and utilized for this purpose. The basic principle is illustrated by examples and its effectiveness is demonstrated through simulations and experiments. The newly proposed STSF (space and time sensor fusion) scheme is applied to the navigation of a mobile robot in an environment using landmarks, and the experimental results demonstrate the effective performance of the system. © 2004 Wiley Periodicals, Inc. [source]
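The core idea, reusing a transformed previous estimate instead of adding sensors, can be illustrated by a one-dimensional inverse-variance (Kalman-style) update. This is an illustrative analogue, not the authors' STSF scheme:

```python
def fuse(prev_estimate, prev_var, measurement, meas_var, process_var=0.0):
    """Fuse a propagated previous estimate with the current measurement
    by inverse-variance weighting (a 1-D Kalman-style update).
    process_var inflates the prior variance to model motion uncertainty."""
    prior_var = prev_var + process_var
    gain = prior_var / (prior_var + meas_var)
    estimate = prev_estimate + gain * (measurement - prev_estimate)
    variance = (1 - gain) * prior_var
    return estimate, variance

# A noisy range reading (variance 4.0) pulls a confident prior
# (variance 1.0) only one fifth of the way toward the measurement:
est, var = fuse(10.0, 1.0, 12.0, 4.0)   # est = 10.4, var = 0.8
```

Storing the previous estimate and its uncertainty effectively acts as an extra "virtual sensor", which is the intuition behind fusing over time as well as over space.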


How accurate is dynamic contrast-enhanced MRI in the assessment of renal glomerular filtration rate?

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 4 2008
A critical appraisal
Abstract Purpose To evaluate the current literature to see if the published results of MRI-glomerular filtration rate (GFR) stand up to the claim that MRI-GFR may be used in clinical practice. Claims in the current literature that Gadolinium (Gd) DTPA dynamic contrast enhanced (DCE) MRI clearance provides a reliable estimate of glomerular filtration are an overoptimistic interpretation of the results obtained. Before calculating absolute GFR from Gd-enhanced MRI, numerous variables must be considered. Materials and Methods We examine the methodology in the published studies on absolute quantification of MRI-GFR. The techniques evaluated included the dose and volume of Gd-DTPA used, the speed of injection, acquisition sequences, orientation of the subject, re-processing, conversion of signal to concentration and the model used for analysis of the data as well as the MRI platform. Results Claims in the current literature that using DCE MRI "Gd DTPA clearance provides a good estimate of glomerular filtration" are not supported by the data presented and a more accurate conclusion should be that "no MRI approach used provides a wholly satisfactory measure of renal GFR function." Conclusion This study suggests that DCE MRI-GFR results are not yet able to be used as a routine clinical or research tool. The published literature does not show what change in DCE MRI-GFR is clinically significant, nor do the results in the literature allow a single DCE MRI-GFR measurement to be correlated directly with a multiple blood sampling technique. J. Magn. Reson. Imaging 2008. © 2008 Wiley-Liss, Inc. [source]


THE MEULLENET-OWENS RAZOR SHEAR (MORS) FOR PREDICTING POULTRY MEAT TENDERNESS: ITS APPLICATIONS AND OPTIMIZATION

JOURNAL OF TEXTURE STUDIES, Issue 6 2008
Y.S. LEE
ABSTRACT The Meullenet-Owens Razor Shear (MORS), recently developed for the assessment of poultry meat tenderness, is a reliable instrumental method. Three different studies were conducted to (1) investigate the adaptation of the MORS to an Instron InSpec 2200 tester (InSpec); (2) optimize the number of replications per fillet necessary to obtain a reliable instrumental tenderness mean; and (3) test the efficacy of a blunt version of the MORS (BMORS). In study 1, the tenderness of 157 cooked broiler breast fillets was predicted by the MORS performed with both a texture analyzer (MORS standard) and the InSpec. A correlation coefficient of 0.95 was found for the MORS energy obtained from both tests, indicating that the MORS performed with an InSpec is equivalent to that performed on the more expensive texture analyzer. In study 2, eight shears were taken on each cooked fillet (101 fillets) to determine a recommended number of shears per fillet for the MORS. A composite hypothesis test was conducted with the average of 8 shears as Y (the representative estimated tenderness of a fillet) and the average of 2, 3, 4, 5, 6 or 7 shears as X (candidates for the recommended number of shears). The results showed the optimal number of replications of the MORS for a reliable estimate of tenderness to be four or more shears per fillet. A blunt version of the MORS (BMORS) was introduced in study 3. A total of 288 broilers (576 fillets) were deboned at eight different postmortem deboning times. Tenderness of the cooked fillets was assessed by both the MORS and BMORS on the same individual fillets. The two methods were equivalent in performance for predicting broiler breast meat tenderness, giving a correlation coefficient of 0.99 across all instrumental parameters obtained from both methods. Tenderness intensity perceived by consumers was slightly more highly correlated with BMORS energy (r = −0.90) than with MORS energy (r = −0.87). The BMORS is recommended especially for tough meat because of its better ability to discriminate among tough samples. Overall, both the MORS and BMORS proved to be reliable predictors of broiler breast meat tenderness. PRACTICAL APPLICATIONS The incidence of tough meat is a major issue facing the poultry industry. The need to ensure consumer acceptance and the increased recognition of the importance of tenderness have led to the development of instrumental methods for monitoring meat tenderness. To date, a great deal of effort has been devoted to the development of such instrumental methods. One promising method is the Meullenet-Owens Razor Shear (MORS). The method has gained popularity for predicting poultry meat tenderness because of its high reliability and its simplicity compared with other industry standards (Warner-Bratzler shear or Allo-Kramer shear). The MORS is not only as reliable as the industry standards, but also more rapid because it eliminates the sample-cutting steps. Application of the MORS will benefit the poultry industry, as it could significantly reduce the labor and time required for routine quality control. [source]


Reliability of the Amsterdam Clinical Challenge Scale (ACCS): a new instrument to assess the level of difficulty of patient cases in medical education

MEDICAL EDUCATION, Issue 7 2000
Gercama
Introduction In problem-based medical curricula, consideration should be given to the level of difficulty of patient cases used for training and assessment. The Amsterdam Clinical Challenge Scale (ACCS) has been developed to assess the degree of difficulty of patient cases in a systematic and reproducible manner. To determine the reliability of the instrument, two research questions were addressed: (1) How many judges are required, on the basis of the total score of the ACCS, to obtain a reliable estimate of the difficulty of a single case? (2) How many cases and/or how many judges are needed to reach an acceptable level of reliability of the total score of the ACCS? Method Four judges scored 36 patient scripts reflecting a wide range of patient problems encountered in general practice. Each script was scored four times. In the reliability analysis, generalizability theory was applied. Results The results show that the judges did, indeed, use the whole range of difficulty ratings. When the ACCS is applied to a single case, eight or more judges are needed to reach an acceptable level of reliability. When more cases are involved, fewer judges are needed; for 10 or more cases one judge will be sufficient. Conclusions Given the typical length of, for example, an objective structured clinical examination, the ACCS makes it possible to provide a reliable estimate of the level of difficulty of such a test with only a limited number of judges. [source]
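The trade-off between number of judges and score reliability reported here follows the usual pattern for averaged ratings, which the Spearman–Brown prophecy formula captures. A sketch (the single-judge reliability of 0.34 is an invented, illustrative value, not a figure from the study's generalizability analysis):

```python
def spearman_brown(single_rater_reliability, k):
    """Reliability of the mean of k parallel ratings:
    k*r / (1 + (k - 1)*r)."""
    r = single_rater_reliability
    return k * r / (1 + (k - 1) * r)

def judges_needed(single_rater_reliability, target=0.80, max_k=50):
    """Smallest number of judges whose averaged rating reaches target."""
    for k in range(1, max_k + 1):
        if spearman_brown(single_rater_reliability, k) >= target:
            return k
    return None

# With a modest single-judge reliability of 0.34, eight judges are
# needed to reach 0.80 -- the same order as reported for a single case.
k = judges_needed(0.34)
```

The same formula also shows why pooling many cases lets a single judge suffice: averaging over cases plays the same variance-reducing role as averaging over judges.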


The effect of the ¹⁹F(α,p)²²Ne reaction rate uncertainty on the yield of fluorine from Wolf–Rayet stars

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2005
Richard J. Stancliffe
ABSTRACT In the light of recent recalculations of the ¹⁹F(α,p)²²Ne reaction rate, we present results for the expected yield of ¹⁹F from Wolf–Rayet (WR) stars. In addition to using the recommended rate, we have computed models using the upper and lower limits for the rate, and hence we constrain the uncertainty in the yield with respect to this reaction. We find a yield of 3.1 × 10⁻⁴ M⊙ of ¹⁹F with our recommended rate, and a difference of a factor of 2 between the yields computed with the upper and lower limits. In comparison with previous work we find a difference in the yield of a factor of approximately 4, connected with a different choice of mass loss. Model uncertainties must be carefully evaluated in order to obtain a reliable estimate of the yield of fluorine from WR stars, together with its uncertainties. [source]


Stop test or pressure-flow study?

NEUROUROLOGY AND URODYNAMICS, Issue 3 2004
Measuring detrusor contractility in older females
Abstract Aims Impaired detrusor contractility is common in older adults. One aspect, detrusor contraction strength during voiding, can be measured by the isovolumetric detrusor pressure attained if flow is interrupted mechanically (a stop test). Because interruption is awkward in practice, however, simple indices or nomograms based on measurements made during uninterrupted voiding are an appealing alternative. We investigated whether such methods, originally developed for males, might be applicable in female subjects, and attempted to identify a single best method. Methods We compared stop-test isovolumetric pressures with estimates based on pressure-flow studies in a group of elderly women suffering from urge incontinence. Measurements were made pre- and post-treatment with placebo or oxybutynin, allowing investigation of test–retest reliability and responsiveness to small changes of contractility. Results Existing methods of estimating detrusor contraction strength from pressure-flow studies, including the Schäfer contractility nomogram and the projected isovolumetric pressure PIP, greatly overestimate the isovolumetric pressure in these female patients. A simple modification provides a more reliable estimate, PIP1, equal to pdet.Qmax + Qmax (with pressure in cmH2O and Qmax in ml/sec). Typically PIP1 ranges from 30 to 75 cmH2O in this population of elderly urge-incontinent women. PIP1, however, is less responsive to a small change in contraction strength than the isovolumetric pressure measured by mechanical interruption. Conclusions The parameter PIP1 is simple to calculate from a standard pressure-flow study and may be useful for clinical assessment of detrusor contraction strength in older females. For research, however, a mechanical stop test still remains the most reliable and responsive method. The Schäfer contractility nomogram and related parameters such as DECO and BCI are not suitable for use in older women. Neurourol. Urodynam. 23:184–189, 2004.
© 2004 Wiley-Liss, Inc. [source]
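The proposed index is a simple linear combination of two values read off a standard pressure-flow trace. A sketch of the calculation exactly as the abstract defines it (the sample numbers are illustrative):

```python
def pip1(pdet_at_qmax_cmh2o, qmax_ml_per_s):
    """PIP1 = pdet.Qmax + Qmax (pressure in cmH2O, flow in ml/s),
    as defined in the abstract above."""
    return pdet_at_qmax_cmh2o + qmax_ml_per_s

# e.g. pdet.Qmax = 35 cmH2O at Qmax = 14 ml/s:
value = pip1(35.0, 14.0)   # 49 cmH2O, inside the typical 30-75 range
```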


Prevalence of steroid sulfatase deficiency in California according to race and ethnicity

PRENATAL DIAGNOSIS, Issue 9 2010
Wendy Y. Craig
Abstract Objective Estimate steroid sulfatase deficiency (STSD) prevalence among California's racial/ethnic groups using data from a previous study focused on prenatal detection of Smith-Lemli-Opitz syndrome (SLOS). SLOS and STSD both have low maternal serum unconjugated estriol (uE3) levels. Methods Prevalence was estimated using three steps: listing clinically identified cases; modeling STSD frequency at three uE3 intervals using diagnostic urine steroid measurements; applying this model to determine frequency in pregnancies not providing urine. Results Overall, 2151 of 777 088 pregnancies (0.28%) were screen positive; 1379 of these were explained and excluded. Fifty-four cases were diagnosed clinically among 707 remaining pregnancies with a male fetus. Urine steroid testing identified 74 additional STSD cases: 66 (89.2%) at uE3 values < 0.15 MoM, 8 (10.8%) at 0.15–0.20 MoM, and 0 (0%) at > 0.20 MoM. Modeling estimated 107.5 STSD cases among 370 pregnancies without urine samples. In males, STSD prevalence was highest among non-Hispanic Whites (1:1230) compared to Hispanics (1:1620) and Asians (1:1790), but differences were not significant. No STSD pregnancies were found among 65 screen positive Black women. Conclusion The overall prevalence estimate of 1:1500 males is consistent with published estimates and is reasonable for counseling, except among Black pregnancies where no reliable estimate could be made. Copyright © 2010 John Wiley & Sons, Ltd. [source]


Counts with an endogenous binary regressor: A series expansion approach

THE ECONOMETRICS JOURNAL, Issue 1 2005
Andrés Romeu
Summary, We propose an estimator for count data regression models where a binary regressor is endogenously determined. This estimator departs from previous approaches by using a flexible form for the conditional probability function of the counts. Using a Monte Carlo experiment we show that our estimator improves the fit and provides a more reliable estimate of the impact of regressors on the count when compared to alternatives which do restrict the mean to be linear-exponential. In an application to the number of trips by households in the United States, we find that the estimate of the treatment effect obtained is considerably different from the one obtained under a linear-exponential mean specification. [source]


Identification of Modal Combinations for Nonlinear Static Analysis of Building Structures

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 4 2004
Sashi K. Kunnath
An increasingly popular analytical method to establish these demand values is a "pushover" analysis in which a model of the building structure is subjected to an invariant distribution of lateral forces. Although such an approach takes into consideration the redistribution of forces following yielding of sections, it does not incorporate the effects of varying dynamic characteristics during the inelastic response. Simple modal combination schemes are investigated in this article to indirectly account for higher mode effects. Because the modes that contribute to deformations may be different from the modes that contribute to forces, it is necessary to identify unique modal combinations that provide reliable estimates of both force and deformation demands. The proposed procedure is applied to typical moment frame buildings to assess the effectiveness of the methodology. It is shown that the envelope of demands obtained from a series of nonlinear static analyses using the proposed modal-combination-based lateral load patterns results in better estimation of inter-story drift, a critical parameter in seismic evaluation and design. [source]


Determinants of coverage in Community-based Therapeutic Care programmes: towards a joint quantitative and qualitative analysis

DISASTERS, Issue 2 2010
Saúl Guerrero
One of the most important elements behind the success of Community-based Therapeutic Care (CTC) programmes for the treatment of severe acute malnutrition has been their ability to achieve high levels of coverage. In CTC, coverage is measured using the Centric Systematic Area Sampling (CSAS) method, which provides accurate and reliable estimates of programme coverage as well as information on the primary reasons for non-attendance. Another important feature of CTC programmes is their use of socio-cultural assessments to determine potential barriers to access and to develop context-specific responses. By analysing data on non-attendance provided by CSAS surveys, in conjunction with data from socio-cultural assessments, it is possible to identify common factors responsible for failures in programme coverage. This paper focuses on an analysis of data from 12 CTC programmes across five African countries. It pinpoints three common factors (distance to sites, community awareness of the programme, and the way in which rejections are handled at the sites) that, together, account for approximately 75 per cent of non-attendance. [source]


Prevalence of illicit drug use in Asia and the Pacific

DRUG AND ALCOHOL REVIEW, Issue 1 2007
MADONNA L. DEVANEY
Abstract This paper reports on the prevalence of drug use in Asia and the Pacific. It is based on the report "Situational analysis of illicit drug issues and responses in Asia and the Pacific", commissioned by the Australian National Council on Drugs Asia Pacific Drug Issues Committee. A review of existing estimates of the prevalence of people who use illicit drugs, drawn from published and unpublished literature and from information supplied by key informants and regional institutions, was undertaken for the period 1998–2004. Estimates of the prevalence of people who use illicit drugs were produced for 12 Asian and six Pacific Island countries. The estimated prevalence of those using illicit drugs ranges from less than 0.01% to 4.6%. Countries with estimated prevalence rates higher than 2% are Cambodia, Hong Kong, the Philippines, Thailand, Indonesia, Laos and Malaysia. China, Myanmar and Vietnam have estimated prevalence rates between less than 0.01% and 2%. Data to estimate prevalence rates were not available for the Pacific Island countries or Brunei. Estimates of the prevalence of drug use are critical for policy development, planning responses and measuring the coverage of programs. However, reliable estimates of the numbers of people using illicit drugs are rare in Asia and, especially, in the Pacific. [source]


Estimation of seismic drift and ductility demands in planar regular X-braced steel frames

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 15 2007
Theodore L. Karavasilis
Abstract This paper summarizes the results of an extensive study on the inelastic seismic response of X-braced steel buildings. More than 100 regular multi-storey tension-compression X-braced steel frames are subjected to an ensemble of 30 ordinary (i.e. without near fault effects) ground motions. The records are scaled to different intensities in order to drive the structures to different levels of inelastic deformation. The statistical analysis of the created response databank indicates that the number of stories, period of vibration, brace slenderness ratio and column stiffness strongly influence the amplitude and heightwise distribution of inelastic deformation. Nonlinear regression analysis is employed in order to derive simple formulae which reflect the aforementioned influences and offer a direct estimation of drift and ductility demands. The uncertainty of this estimation due to the record-to-record variability is discussed in detail. More specifically, given the strength (or behaviour) reduction factor, the proposed formulae provide reliable estimates of the maximum roof displacement, the maximum interstorey drift ratio and the maximum cyclic ductility of the diagonals along the height of the structure. The strength reduction factor refers to the point of the first buckling of the diagonals in the building and thus, pushover analysis and estimation of the overstrength factor are not required. This design-oriented feature enables both the rapid seismic assessment of existing structures and the direct deformation-controlled seismic design of new ones. A comparison of the proposed method with the procedures adopted in current seismic design codes reveals the accuracy and efficiency of the former. Copyright © 2007 John Wiley & Sons, Ltd. [source]


The utility of online panel surveys versus computer-assisted interviews in obtaining substance-use prevalence estimates in the Netherlands

ADDICTION, Issue 10 2009
Renske Spijkerman
ABSTRACT Aims Rather than using the traditional, costly method of personal interviews in a general population sample, substance-use prevalence rates can be derived more conveniently from data collected among members of an online access panel. To examine the utility of this method, we compared the outcomes of an online survey with those obtained with the computer-assisted personal interviews (CAPI) method. Design Data were gathered from a large sample of online panellists and in a two-stage stratified sample of the Dutch population using the CAPI method. Setting The Netherlands. Participants The online sample comprised 57 125 Dutch online panellists (15–64 years) of Survey Sampling International LLC (SSI), and the CAPI cohort 7204 respondents (15–64 years). Measurements All participants answered identical questions about their use of alcohol, cannabis, ecstasy, cocaine and performance-enhancing drugs. The CAPI respondents were asked additionally about internet access and online panel membership. Both data sets were weighted statistically according to the distribution of demographic characteristics of the general Dutch population. Findings Response rates were 35.5% (n = 20 282) for the online panel cohort and 62.7% (n = 4516) for the CAPI cohort. The data showed almost consistently lower substance-use prevalence rates for the CAPI respondents. Although the observed differences could be due to bias in both data sets, coverage and non-response bias were higher in the online panel survey. Conclusions Despite its economic advantage, the online panel survey showed stronger non-response and coverage bias than the CAPI survey, leading to less reliable estimates of substance use in the general population. [source]
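Both data sets were "weighted statistically" to match population demographics; in its simplest cell-weighting (post-stratification) form this means dividing each demographic cell's population share by its sample share. A sketch with invented cells and shares, purely for illustration:

```python
def poststratification_weights(sample_counts, population_shares):
    """Cell weight = population share / sample share, so that the
    weighted sample reproduces the population's demographic mix."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

# Illustrative cells: younger respondents over-represented in the panel
# (60% of the sample vs 40% of the population) get weights below 1.
w = poststratification_weights({"15-39": 600, "40-64": 400},
                               {"15-39": 0.40, "40-64": 0.60})
```

Note that weighting can only correct for characteristics that are observed; it cannot remove the coverage and non-response bias the abstract describes, which is exactly the study's point.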


Toxicological characterization of 2,4,6-trinitrotoluene, its transformation products, and two nitramine explosives

ENVIRONMENTAL TOXICOLOGY & CHEMISTRY, Issue 6 2007
Judith Neuwoehner
Abstract The soil and groundwater of former ordnance plants and their dumping sites have often been highly contaminated with the explosive 2,4,6-trinitrotoluene (2,4,6-TNT), leading to a potential hazard for humans and the environment. Further hazards can arise from transformation metabolites, by-products of the manufacturing process, or incomplete combustion. This work examines the toxicity of polar nitro compounds relative to their parent compound 2,4,6-TNT using four different ecotoxicological bioassays (algae growth inhibition test, daphnids immobilization test, luminescence inhibition test, and cell growth inhibition test), three genotoxicological assays (umu test, NM2009 test, and SOS Chromotest), and the Ames fluctuation test for detection of mutagenicity. For this study, substances typical of certain steps of the degradation/transformation of 2,4,6-TNT were chosen for investigation. This work determines that the parent compounds 2,4,6-TNT and 1,3,5-trinitrobenzene are the most toxic substances, followed by 3,5-dinitrophenol, 3,5-dinitroaniline and 4-amino-2-nitrotoluene. Less toxic are the direct degradation products of 2,4,6-TNT such as 2,4-dinitrotoluene, 2,6-dinitrotoluene, 2-amino-4,6-dinitrotoluene, and 4-amino-2,6-dinitrotoluene. A weak toxic potential was observed for 2,4,6-trinitrobenzoic acid, 2,4-diamino-6-nitrotoluene, 2,4-dinitrotoluene-5-sulfonic acid, and 2,6-diamino-4-nitrotoluene. Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine and hexahydro-1,3,5-trinitro-1,3,5-triazine show no sign of acute toxicity. Based on the results of this study, we recommend expanding future monitoring programs to cover not only the parent substances but also potential metabolites, based on conditions at the contaminated sites, and using bioassays as tools for estimating toxicological potential directly by testing environmental samples. Site-specific protocols should be developed. If hazardous substances are found in relevant concentrations, action should be taken to prevent potential risks to humans and the environment. Analyses can then be used to prioritise reliable estimates of risk. [source]


Allowing for redundancy and environmental effects in estimates of home range utilization distributions

ENVIRONMETRICS, Issue 1 2005
W. G. S. Hines
Abstract Real location data for radio-tagged animals can be challenging to analyze. They can be somewhat redundant, since successive observations of an animal slowly wandering through its environment may show very similar locations. The data set can possess trends over time or be irregularly timed, and it may report locations in environments with features that should be incorporated into the analysis. Also, the periods of observation may be too short to provide reliable estimates of characteristics, such as inter-observation correlation levels, that conventional time-series analyses require. Moreover, stationarity (in the sense of the data being generated by a source of constant mean, variance and correlation structure) may not be present. This article considers an adaptation of the kernel density estimator for estimating home ranges, an adaptation which allows for these various complications and which works well in the absence of exact (or precise) information about correlation structure and parameters. Modifications to allow for irregularly timed observations, non-stationarity and heterogeneous environments are discussed and illustrated. Copyright © 2004 John Wiley & Sons, Ltd. [source]
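The estimator at the core of this article is the kernel density estimate of the utilization distribution. A minimal sketch of that baseline, without the paper's redundancy and environment adaptations, using simulated fixes and SciPy's Gaussian KDE with its default plug-in bandwidth (all data here are synthetic):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Simulated animal fixes: a slow correlated wander (cumulative small steps)
# plus telemetry noise, mimicking the redundancy the abstract describes.
steps = rng.normal(scale=0.05, size=(200, 2)).cumsum(axis=0)
fixes = steps + rng.normal(scale=0.05, size=(200, 2))

# Plain KDE of the utilization distribution (Scott's-rule bandwidth).
kde = gaussian_kde(fixes.T)

# Evaluate the density on a grid; a p% home range is then the smallest
# region containing p% of the estimated density mass.
xs, ys = np.mgrid[-2:2:100j, -2:2:100j]
density = kde(np.vstack([xs.ravel(), ys.ravel()]))
```

The paper's contribution is precisely what this sketch lacks: handling autocorrelated, irregularly timed observations and heterogeneous environments on top of this baseline.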


EXTINCTION DURING EVOLUTIONARY RADIATIONS: RECONCILING THE FOSSIL RECORD WITH MOLECULAR PHYLOGENIES

EVOLUTION, Issue 12 2009
Tiago B. Quental
Recent application of time-varying birth–death models to molecular phylogenies suggests that a decreasing diversification rate can only be observed if there was a decreasing speciation rate coupled with extremely low or no extinction. However, from a paleontological perspective, zero extinction rates during evolutionary radiations seem unlikely. Here, with a more comprehensive set of computer simulations, we show that substantial extinction can occur without erasing the signal of decreasing diversification rate in a molecular phylogeny. We also find, in agreement with previous work, that a decrease in diversification rate cannot be observed in a molecular phylogeny with an increasing extinction rate alone. Further, we find that the ability to observe decreasing diversification rates in molecular phylogenies is controlled (in part) by the ratio of the initial speciation rate (Lambda) to the extinction rate (Mu) at equilibrium (the LiMe ratio), and not by their absolute values. Here we show, in principle, how estimates of initial speciation rates may be calculated using both the fossil record and the shape of lineage-through-time plots derived from molecular phylogenies. This is important because the fossil record provides more reliable estimates of equilibrium extinction rates than of initial speciation rates. [source]
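The dynamics behind the LiMe ratio can be illustrated with a toy diversity-dependent birth–death process: speciation declines as standing diversity grows while extinction stays constant, so diversity equilibrates where the two rates meet. A hedged Gillespie-style sketch (not the authors' simulations; it tracks the full tree size only and omits the pruning to a reconstructed phylogeny that their analysis requires):

```python
import random

def simulate_radiation(lam0, mu, k, t_max, seed=1):
    """Diversity-dependent birth-death process (Gillespie simulation).

    Speciation declines linearly with standing diversity n,
        lam(n) = max(lam0 * (1 - n / k), 0),
    while extinction mu is constant (mu > 0 assumed), so diversity
    equilibrates near n_eq = k * (1 - mu / lam0).  The LiMe ratio of the
    abstract is lam0 / mu.  Returns (event_times, diversities).
    """
    random.seed(seed)
    t, n = 0.0, 2                      # start from two lineages
    times, sizes = [0.0], [n]
    while t < t_max and n > 0:
        lam = max(lam0 * (1.0 - n / k), 0.0)
        total = n * (lam + mu)         # total event rate (> 0 since mu > 0)
        t += random.expovariate(total)
        n += 1 if random.random() < lam / (lam + mu) else -1
        times.append(t)
        sizes.append(n)
    return times, sizes
```

With lam0 = 1.0, mu = 0.2 and k = 50 (a LiMe ratio of 5), diversity rises toward the equilibrium of about 40 lineages and then fluctuates around it.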


Linkage disequilibrium estimates of contemporary Ne using highly variable genetic markers: a largely untapped resource for applied conservation and evolution

EVOLUTIONARY APPLICATIONS (ELECTRONIC), Issue 3 2010
Robin S. Waples
Abstract Genetic methods are routinely used to estimate contemporary effective population size (Ne) in natural populations, but the vast majority of applications have used only the temporal (two-sample) method. We use simulated data to evaluate how highly polymorphic molecular markers affect precision and bias in the single-sample method based on linkage disequilibrium (LD). Results of this study are as follows: (1) low-frequency alleles upwardly bias estimates of Ne, but a simple rule can reduce this bias enough to yield reliable estimates for large populations. (3) With 'microsatellite' data, the LD method has greater precision than the temporal method, unless the latter is based on samples taken many generations apart. Our results indicate the LD method has widespread applicability to conservation (which typically focuses on small populations) and the study of evolutionary processes in local populations. Considerable opportunity exists to extract more information about Ne in nature by wider use of single-sample estimators and by combining estimates from different methods. [source]
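The core logic of the LD method can be sketched from the classic first-order expectation E[r^2] ≈ 1/(3Ne) + 1/S (Hill 1981): subtract the sampling contribution to the squared correlation between loci, then invert. This omits the low-frequency-allele screening and bias corrections that the abstract shows are essential in practice, so treat it as an illustration rather than the evaluated estimator:

```python
def ld_ne(mean_r2, sample_size):
    """First-order LD estimate of contemporary effective population size.

    Under random mating, E[r^2] ~ 1/(3*Ne) + 1/S, where r^2 is the mean
    squared allelic correlation across locus pairs and S is the number of
    individuals sampled.  Removing the sampling term 1/S leaves the
    disequilibrium attributable to drift:
        Ne_hat = 1 / (3 * (r^2 - 1/S))
    Sketch only: the low-frequency-allele rule and bias corrections the
    abstract discusses are deliberately omitted.
    """
    r2_drift = mean_r2 - 1.0 / sample_size
    if r2_drift <= 0:
        # Observed LD is no larger than sampling alone predicts:
        # no detectable drift signal, so Ne is effectively unbounded.
        return float("inf")
    return 1.0 / (3.0 * r2_drift)
```

For example, a mean r^2 of 0.02 from a sample of 100 individuals leaves 0.01 of drift-generated disequilibrium, giving an Ne estimate of about 33.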


Simple estimates of haplotype relative risks in case-control data

GENETIC EPIDEMIOLOGY, Issue 6 2006
Benjamin French
Abstract Methods of varying complexity have been proposed to efficiently estimate haplotype relative risks in case-control data. Our goal was to compare methods that estimate associations between disease conditions and common haplotypes in large case-control studies such that haplotype imputation is done once as a simple data-processing step. We performed a simulation study based on haplotype frequencies for two renin-angiotensin system genes. The iterative and noniterative methods we compared involved fitting a weighted logistic regression, but differed in how the probability weights were specified. We also quantified the amount of ambiguity in the simulated genes. For one gene, there was essentially no uncertainty in the imputed diplotypes and every method performed well. For the other, ~60% of individuals had an unambiguous diplotype, and ~90% had a highest posterior probability greater than 0.75. For this gene, all methods performed well under no genetic effects, moderate effects, and strong effects tagged by a single nucleotide polymorphism (SNP). Noniterative methods produced biased estimates under strong effects not tagged by an SNP. For the most likely diplotype, median bias of the log-relative risks ranged between −0.49 and 0.22 over all haplotypes. For all possible diplotypes, median bias ranged between −0.73 and 0.08. Results were similar under interaction with a binary covariate. Noniterative weighted logistic regression provides valid tests for genetic associations and reliable estimates of modest effects of common haplotypes, and can be implemented in standard software. The potential for phase ambiguity does not necessarily imply uncertainty in imputed diplotypes, especially in large studies of common haplotypes. Genet. Epidemiol. 2006. © 2006 Wiley-Liss, Inc. [source]
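The noniterative approach described above amounts to a weighted logistic regression in which each candidate diplotype enters as one row weighted by its posterior probability. A self-contained IRLS sketch of such a fit (not the authors' implementation; frequency weights in standard software accomplish the same thing):

```python
import numpy as np

def weighted_logistic(X, y, w, n_iter=25):
    """Logistic regression with per-observation probability weights,
    fitted by iteratively reweighted least squares (Newton's method).

    X : design matrix (first column an intercept), one row per imputed
        diplotype; w : its posterior probability (or frequency) weight;
    y : case/control indicator.  Returns the coefficient vector.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        W = w * p * (1 - p)                    # IRLS working weights
        grad = X.T @ (w * (y - p))             # weighted score vector
        hess = (X * W[:, None]).T @ X          # weighted information matrix
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

On weighted 2x2 data (exposure counts 30/10 among controls and 10/30 among cases) the fitted log odds ratio recovers the closed-form value 2*log(3).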


An analysis of P times reported in the Reviewed Event Bulletin for Chinese underground explosions

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2005
A. Douglas
SUMMARY Analysis of variance is used to estimate the measurement error and path effects in the P times reported in the Reviewed Event Bulletins (REBs, produced by the provisional International Data Center, Arlington, USA) and in times we have read, for explosions at the Chinese Test Site. Path effects are those differences between traveltimes calculated from tables and the true times that result in epicentre error. The main conclusions of the study are: (1) the estimated variance of the measurement error for P times reported in the REB at large signal-to-noise ratio (SNR) is 0.04 s2, the bulk of the readings being analyst-adjusted automatic-detections, whereas for our times the variance is 0.01 s2 and (2) the standard deviation of the path effects for both sets of observations is about 0.6 s. The study shows that measurement error is about twice (~0.2 s rather than ~0.1 s) and path effects about half the values assumed for the REB times. However, uncertainties in the estimated epicentres are poorly described by treating path effects as a random variable with a normal distribution. Only by estimating path effects and using these to correct onset times can reliable estimates of epicentre uncertainty be obtained. There is currently an international programme to do just this. The results imply that with P times from explosions at three or four stations with good SNR (so that the measurement error is around 0.1 s) and well distributed in azimuth, then with correction for path effects the area of the 90 per cent coverage ellipse should be much less than 1000 km2 (the area allowed for an on-site inspection under the Comprehensive Test Ban Treaty) and should cover the true epicentre with the given probability. [source]
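The variance decomposition behind these numbers can be illustrated with a balanced one-way random-effects layout: the within-station scatter of traveltime residuals estimates the measurement-error variance, and the excess between-station scatter estimates the path-effect variance. A hedged sketch (the REB analysis itself is more elaborate, and these data are made up):

```python
import numpy as np

def variance_components(residuals):
    """One-way random-effects variance components for traveltime residuals.

    residuals : 2-D array, one row per station (path), columns holding
    repeated picks at that station (balanced design assumed).
    Returns (sigma2_measurement, sigma2_path):
      - within-group mean square (MSW) estimates measurement-error variance;
      - (MSB - MSW) / n, with n replicates per station, estimates the
        path-effect variance (floored at zero).
    """
    r = np.asarray(residuals, dtype=float)
    k, n = r.shape
    msw = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (n - 1))
    msb = n * ((r.mean(axis=1) - r.mean()) ** 2).sum() / (k - 1)
    return msw, max((msb - msw) / n, 0.0)
```

In the study's terms, a measurement-error variance of 0.04 s2 corresponds to a standard deviation of 0.2 s, and a path-effect standard deviation of 0.6 s to a variance of 0.36 s2.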


Getting the biodiversity intactness index right: the importance of habitat degradation data

GLOBAL CHANGE BIOLOGY, Issue 11 2006
MATHIEU ROUGET
Abstract Given high-level commitments to reducing the rate of biodiversity loss by 2010, there is a pressing need to develop simple and practical indicators to monitor progress. In this context, a biodiversity intactness index (BII) was recently proposed, which provides an overall indicator suitable for policy makers. The index links data on land use with expert assessments of how this impacts the population densities of well-understood taxonomic groups to estimate current population sizes relative to premodern times. However, when calculated for southern Africa, the resulting BII of 84% suggests a far more positive picture of the state of wild nature than do other large-scale estimates. Here, we argue that this discrepancy is in part an artefact of the coarseness of the land degradation data used to calculate the BII, and that the overall BII for southern Africa is probably much lower than 84%. In particular, based on two relatively inexpensive, ground-truthed studies of areas not generally regarded as exceptional in terms of their degradation status, we demonstrate that Scholes and Biggs might have seriously underestimated the extent of land degradation. These differences have substantial bearing on BII scores. Urgent attention should be given to the further development of cost-effective ground-truthing methods for quantifying the extent of land degradation in order to provide reliable estimates of biodiversity loss, both in southern Africa and more widely. [source]
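For orientation, the BII is in essence a richness- and area-weighted mean of expert-assessed population intactness across land-use classes. A simplified sketch of that weighting (not Scholes and Biggs's exact formulation), which makes clear why coarse land-degradation data propagate directly into the score:

```python
def bii(cells):
    """Area- and richness-weighted biodiversity intactness.

    cells : iterable of (area, richness, intactness) tuples, one per
    land-use cell, where intactness is the expert-assessed population
    density under that land use relative to the premodern state (0-1).
    Simplified sketch of the index's weighting logic, not the published
    formulation with its taxon-by-land-use impact tables.
    """
    num = sum(area * rich * intact for area, rich, intact in cells)
    den = sum(area * rich for area, rich, _ in cells)
    return num / den
```

If half the landscape (by richness-weighted area) is misclassified as intact (1.0) when it is in fact degraded to 0.5, the index is inflated by 0.25, which is the kind of artefact the abstract argues affects the 84% figure.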


Is there a connection between weather at departure sites, onset of migration and timing of soaring-bird autumn migration in Israel?

GLOBAL ECOLOGY, Issue 6 2006
Judy Shamoun-Baranes
ABSTRACT Aims: Different aspects of soaring-bird migration are influenced by weather. However, the relationship between weather and the onset of soaring-bird migration, particularly in autumn, is not clear. Although long-term migration counts are often unavailable near the breeding areas of many soaring birds in the western Palaearctic, soaring-bird migration has been systematically monitored in Israel, a region where populations from large geographical areas converge. This study tests several fundamental hypotheses regarding the onset of migration and explores the connection between weather, migration onset and arrival at a distant site. Location: Globally gridded meteorological data from the breeding areas in north-eastern Europe were used as predictive variables in relation to the arrival of soaring migrants in Israel. Methods: Inverse modelling was used to study the temporal and spatial influence of weather on initiation of migration based on autumn soaring-bird migration counts in Israel. Numerous combinations of migration duration and temporal influence of meteorological variables (temperature, sea-level pressure and precipitable water) were tested with different models for meteorological sensitivity. Results: The day of arrival in Israel of white storks, honey buzzards, Levant sparrowhawks and lesser spotted eagles was significantly and strongly related to meteorological conditions in the breeding area days or even weeks before arrival in Israel. The cumulative number of days or cumulative value above or below a meteorological threshold performed significantly better than other models tested. Models provided reliable estimates of migration duration for each species. Main conclusions: The meteorological triggers of migration at the breeding grounds differed between species and were related to deteriorating living conditions and deteriorating migratory flight conditions. Soaring birds are sensitive to meteorological triggers at the same period every year and their temporal response to weather appears to be constrained by their annual routine. [source]
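The best-performing model class (a cumulative count of days beyond a meteorological threshold at the breeding grounds, plus a fixed migration duration) can be sketched as follows. The threshold, required count, and duration here are hypothetical illustrative parameters, not the fitted values:

```python
def predicted_arrival(daily_temps, threshold, n_days, duration):
    """Arrival-day prediction under a cumulative-threshold trigger model.

    Departure from the breeding area is triggered on the first day by
    which the cumulative count of days with temperature below `threshold`
    reaches `n_days`; arrival at the distant count site follows after a
    fixed migration `duration` (in days).  Days are numbered from 1.
    """
    count = 0
    for day, temp in enumerate(daily_temps, start=1):
        if temp < threshold:
            count += 1
        if count >= n_days:
            return day + duration
    return None  # threshold count never reached: no predicted departure
```

Inverse modelling as described in the abstract would then search over threshold, count, and duration to best match the observed arrival dates in Israel.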


Rapid biodiversity assessment of spiders (Araneae) using semi-quantitative sampling: a case study in a Mediterranean forest

INSECT CONSERVATION AND DIVERSITY, Issue 2 2008
PEDRO CARDOSO
Abstract. 1. A thorough inventory of a Mediterranean oak forest spider fauna carried out during 2 weeks is presented. It used a semi-quantitative sampling protocol to collect comparable data in a rigorous, rapid and efficient way. Four hundred and eighty samples of one person-hour of work each were collected, mostly inside a delimited 1-ha plot. 2. Sampling yielded 10 808 adult spiders representing 204 species. The number of species present at the site was estimated using five different richness estimators (Chao1, Chao2, Jackknife1, Jackknife2 and Michaelis–Menten). The estimates ranged from 232 to 260. The most reliable estimates were provided by the Chao estimators and the least reliable was obtained with the Michaelis–Menten. However, the behavior of the Michaelis–Menten accumulation curves supports the use of this estimator as a stopping or reliability rule. 3. Nineteen per cent of the species were represented by a single specimen (singletons) and 12% by just two specimens (doubletons). The presence of locally rare species in this exhaustive inventory is discussed. 4. The effects of day, time of day, collector experience and sampling method on the number of adults, number of species and taxonomic composition of the samples are assessed. Sampling method is the single most important factor influencing the results, and all methods generate unique species. Time of day is also important, in such a way that each combination of method and time of day may be considered a different method in itself. There are no significant differences between the collectors in terms of species and number of adult spiders collected. Despite the high collecting effort, the species richness and abundance of spiders remained constant throughout the sampling period. [source]
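The Chao1 estimator used above has a simple closed form based on the singleton and doubleton counts the abstract reports: S_est = S_obs + F1^2 / (2*F2). A minimal sketch with toy abundances (not the survey's data):

```python
def chao1(abundances):
    """Chao1 lower-bound estimate of species richness.

    abundances : per-species specimen counts from the inventory.
    S_est = S_obs + F1^2 / (2 * F2), where F1 and F2 are the numbers of
    singleton and doubleton species; when F2 = 0 the bias-corrected form
    F1 * (F1 - 1) / 2 is used instead.
    """
    s_obs = sum(1 for n in abundances if n > 0)
    f1 = sum(1 for n in abundances if n == 1)
    f2 = sum(1 for n in abundances if n == 2)
    if f2 > 0:
        return s_obs + f1 * f1 / (2 * f2)
    return s_obs + f1 * (f1 - 1) / 2
```

For example, seven observed species with three singletons and two doubletons give 7 + 9/4 = 9.25 estimated species; the many singletons in the spider inventory are what push its estimates above the 204 species observed.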


Seasonal and interannual variability in atmospheric turbidity over South Africa

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 5 2001
Helen C. Power
Abstract Aerosols affect climate by attenuating solar radiation and acting as cloud condensation nuclei. Despite their importance in the climate system, our understanding of the time-space variability of aerosols is fragmentary. Measurements and reliable estimates of atmospheric turbidity (the total column amount of aerosol) are scarce in most countries, and this is especially true in the Southern Hemisphere. Very little is known about the seasonal, interannual and spatial variability of aerosols over the southern half of the globe. In this paper, we estimate monthly averaged atmospheric turbidity from surface climate data at eight locations in South Africa, regardless of cloud cover. Findings include new estimates of turbidity trends and variability over South Africa. Seasonal trends are evident at many stations, although there is no consistent trend. Over recent decades, turbidity has generally been stable at six of the eight stations. Our methodology can be applied at any location where the requisite climate data are available and, therefore, holds promise for a more complete, and possibly global, climatology of aerosols. Copyright © 2001 Royal Meteorological Society [source]


Evidence for density-dependent survival in adult cormorants from a combined analysis of recoveries and resightings

JOURNAL OF ANIMAL ECOLOGY, Issue 5 2000
Morten Frederiksen
Summary 1. The increasing population of cormorants (Phalacrocorax carbo sinensis) in Europe since 1970 has led to conflicts with fishery interests. Control of cormorant populations is a management issue in many countries and a predictive population model is needed. However, reliable estimates of survival are lacking as input for such a model. 2. Capture–recapture estimates of survival of dispersive species like cormorants suffer from an unknown bias due to permanent emigration from the study area. However, a combined analysis of resightings and recoveries of dead birds allows unbiased estimates of survival and emigration. 3. We use data on 11 000 cormorants colour-ringed as chicks in the Danish colony Vorsø 1977–97 to estimate adult survival and colony fidelity. Recent statistical models allowing simultaneous use of recovery and resighting data are employed. We compensate for variation in colour-ring quality, and study the effect of population size and winter severity on survival, as well as of breeding success on fidelity, by including these factors as covariates in statistical models. 4. Annual adult survival fluctuated from year to year (0.74–0.95), with a mean of 0.88. A combination of population size in Europe and winter temperatures explained 52–64% of the year-to-year variation in survival. Differences in survival between the sexes were less than 1%. Cormorants older than ~12 years experienced lower survival, whereas second-year birds had survival similar to adults. Colony fidelity declined after 1990 from nearly 1 to ~0.90, implying 10% permanent emigration per year. This change coincided with a decline in food availability. 5. Apparently, survival was more severely affected by winter severity when population size was high. This could be caused by saturation of high-quality wintering habitat, forcing some birds to winter in poorer habitat where they would be more vulnerable to cold winters. There was thus evidence for density dependence in adult survival, at least in cold winters. 6. The high population growth rate sustained by European Ph. c. sinensis in the 1970s and 1980s can partly be accounted for by unusually high survival of immature and adult birds, probably caused by the absence of hunting, low population density and high food availability. [source]


GLOBAL EVIDENCE ON THE EQUITY RISK PREMIUM

JOURNAL OF APPLIED CORPORATE FINANCE, Issue 4 2003
Elroy Dimson
The size of the equity risk premium (the incremental return that shareholders require to hold risky equities rather than risk-free securities) is a key issue in corporate finance. Financial economists generally measure the equity premium over long periods of time in order to obtain reliable estimates. These estimates are widely used by investors, finance professionals, corporate executives, regulators, lawyers, and consultants. But because the 20th century proved to be a period of such remarkable growth in the U.S. economy, estimates of the risk premium that rely on past market performance may be too high to serve as a reliable guide to the future. The authors analyze a 103-year history of risk premiums in 16 countries and conclude that the U.S. risk premium relative to Treasury bills was 5.3% for that period (lower than previous studies suggest), as compared to 4.2% for the U.K. and 4.5% for a world index. But the article goes on to observe that the historical record may still overstate expectations of the future risk premium, partly because market volatility in the future may be lower than in the past, and partly because of a general decline in risk resulting from new technological advances and increased diversification opportunities for investors. After adjusting for the expected impact of these factors, the authors calculate forward-looking equity risk premiums of 4.3% for the U.S., 3.9% for the U.K., and 3.5% for the world index. At the same time, however, they caution that the risk premium can fluctuate over time and that managers should make appropriate adjustments when there are compelling economic reasons to think that expected premiums are unusually high or low. [source]
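Long-horizon premium estimates of the kind discussed above are typically quoted as annualized geometric means of equity returns relative to bills. A minimal sketch with illustrative figures (not the study's 103-year, 16-country series):

```python
def annualized_premium(equity_returns, bill_returns):
    """Geometric-mean equity risk premium relative to bills.

    Inputs are matched lists of annual simple returns (0.07 means 7%).
    The premium is the annualized ratio of the equity total-return index
    to the bill index, minus one -- the usual long-horizon convention.
    """
    n = len(equity_returns)
    eq_growth, bill_growth = 1.0, 1.0
    for r_eq, r_bill in zip(equity_returns, bill_returns):
        eq_growth *= 1.0 + r_eq       # compound the equity index
        bill_growth *= 1.0 + r_bill   # compound the bill index
    return (eq_growth / bill_growth) ** (1.0 / n) - 1.0
```

Because returns are compounded before averaging, a volatile equity series yields a geometric premium below the arithmetic mean of annual differences, which is one reason long-period estimates differ across studies.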