Different Estimates



Selected Abstracts


Some Methods of Propensity-Score Matching Had Superior Performance to Others: Results of an Empirical Investigation and Monte Carlo Simulations

BIOMETRICAL JOURNAL, Issue 1 2009
Peter C. Austin
Abstract Propensity-score matching is increasingly being used to reduce the impact of treatment-selection bias when estimating causal treatment effects using observational data. Several propensity-score matching methods are currently employed in the medical literature: matching on the logit of the propensity score using calipers of width either 0.2 or 0.6 of the standard deviation of the logit of the propensity score; matching on the propensity score using calipers of 0.005, 0.01, 0.02, 0.03, and 0.1; and 5→1 digit matching on the propensity score. We conducted empirical investigations and Monte Carlo simulations to investigate the relative performance of these competing methods. Using a large sample of patients hospitalized with a heart attack and with exposure being receipt of a statin prescription at hospital discharge, we found that the 8 different methods produced propensity-score matched samples in which qualitatively equivalent balance in measured baseline variables was achieved between treated and untreated subjects. Seven of the 8 propensity-score matched samples resulted in qualitatively similar estimates of the reduction in mortality due to statin exposure. 5→1 digit matching resulted in a qualitatively different estimate of relative risk reduction compared to the other 7 methods. Using Monte Carlo simulations, we found that matching using calipers of width 0.2 of the standard deviation of the logit of the propensity score and the use of calipers of width 0.02 and 0.03 tended to have superior performance for estimating treatment effects. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
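
To make the matching variants concrete, here is a minimal sketch of greedy 1:1 nearest-neighbour matching on the logit of the propensity score with a caliper of 0.2 standard deviations, the variant the simulations favoured. The input arrays and the greedy pass are illustrative assumptions, not Austin's implementation:

import numpy as np

def logit(p):
    return np.log(p / (1.0 - p))

def caliper_match(ps_treated, ps_control, caliper_sd=0.2):
    """Greedy 1:1 matching within a caliper on the logit scale (sketch)."""
    lt, lc = logit(np.asarray(ps_treated)), logit(np.asarray(ps_control))
    # Caliper: 0.2 of the pooled SD of the logit of the propensity score.
    caliper = caliper_sd * np.std(np.concatenate([lt, lc]))
    available = np.ones(len(lc), dtype=bool)
    pairs = []
    for i in np.argsort(lt):                 # one pass over treated subjects
        d = np.abs(lc - lt[i])
        d[~available] = np.inf
        j = int(np.argmin(d))
        if d[j] <= caliper:                  # accept only within the caliper
            pairs.append((int(i), j))
            available[j] = False
    return pairs                             # (treated index, control index)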


Challenging Wallacean and Linnean shortfalls: knowledge gradients and conservation planning in a biodiversity hotspot

DIVERSITY AND DISTRIBUTIONS, Issue 5 2006
Luis Mauricio Bini
ABSTRACT Knowledge about biodiversity remains inadequate because most species living on Earth have still not been formally described (the Linnean shortfall) and because geographical distributions of most species are poorly understood and usually contain many gaps (the Wallacean shortfall). In this paper, we developed models to infer the size and placement of geographical ranges of hypothetical non-described species, based on the range size frequency distribution of anurans recently described in the Cerrado Biome, on the level of knowledge (number of inventories) and on surrogates for habitat suitability. The rationale for these models is as follows: (1) the range size frequency distribution of these species should be similar to that of the range-restricted species most recently described in the Cerrado Biome; (2) the probability of new discoveries will increase in areas with low biodiversity knowledge, mainly in suitable areas; and (3) assuming range continuity, new species should occupy adjacent cells only if the level of knowledge is low enough to allow the existence of undiscovered species. We ran a model based on the number of inventories only, and two models combining the effects of number of inventories and two different estimates of habitat suitability, for a total of 100 replicates each. Finally, we performed a complementary analysis using simulated annealing to solve the set-covering problem for each simulation (i.e. finding the smallest number of cells so that all species are represented at least once), using extents of occurrence of 160 species (131 real anuran species plus 29 new simulated species). The revised reserve system that included information about unknown or poorly sampled taxa shifted significantly northwards when compared to a system based on currently known species. This main result can be explained by the paucity of biodiversity data in this part of the biome, associated with its relatively high habitat suitability. As a precautionary measure, and weighted by the inferred distribution data, prioritizing a system of reserves in the northern part of the biome appears defensible. [source]
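
The set-covering step lends itself to a compact illustration. The sketch below uses a greedy approximation rather than the simulated annealing the authors applied, over a hypothetical species-by-cell incidence matrix:

import numpy as np

def greedy_set_cover(presence):
    """Pick a small set of cells so every species occurs in at least one.
    `presence[s, c]` is True if species s occurs in cell c (hypothetical)."""
    uncovered = np.ones(presence.shape[0], dtype=bool)
    chosen = []
    while uncovered.any():
        gains = presence[uncovered].sum(axis=0)   # newly covered species per cell
        c = int(np.argmax(gains))
        if gains[c] == 0:
            break                                  # some species occur nowhere
        chosen.append(c)
        uncovered &= ~presence[:, c]
    return chosen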


Comparison of manual and automated ELISA methods for serum ferritin analysis

JOURNAL OF CLINICAL LABORATORY ANALYSIS, Issue 5 2005
Fabian Rohner
Abstract Serum ferritin concentration is a sensitive measure of body iron stores. The aim of this study was to compare the performance of two commercially available enzyme-linked immunosorbent assays (ELISAs) for serum ferritin: a widely used manual assay kit (Spectro Ferritin MT®) and a new fully automated assay (Immulite®). We analyzed serum samples from Moroccan school-aged children (n=51) from a rural area with a high prevalence of iron deficiency anemia (IDA). Four replicates of each sample were analyzed using both assays. For the manual method, the interassay repeatability was 24%, 22%, and 11%, and intraassay precision was 18.3%, 9.2%, and 9.1% at increasing serum ferritin concentrations. Using the automated assay, the interassay repeatability was 7%, 6%, and 6%, and intraassay precision was 1.5%, 5.4%, and 5.5% at increasing serum ferritin concentrations. The two assays were well correlated (y=1.16x+1.83; r=0.98). However, the limits of agreement (LOAs) were wide, particularly at low concentrations. A comparison of the assay results with recommended cutoffs for serum ferritin generated sharply different estimates of the prevalence of iron deficiency (ID) in the sample. We conclude that the automated assay has several potential advantages over the manual method, including better precision, less operator dependence, and faster sample throughput. J. Clin. Lab. Anal. 19:196–198, 2005. © 2005 Wiley-Liss, Inc. [source]
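
The agreement statistics reported here are straightforward to compute. A minimal sketch, assuming paired ferritin measurements in µg/L and, purely for illustration, a 15 µg/L cutoff for iron deficiency:

import numpy as np

def limits_of_agreement(manual, automated):
    """Bland-Altman 95% limits of agreement for paired assay results."""
    diff = np.asarray(automated) - np.asarray(manual)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias - 1.96 * sd, bias + 1.96 * sd

def id_prevalence(ferritin, cutoff=15.0):
    """Fraction of samples below a serum ferritin cutoff (assumed value)."""
    return float(np.mean(np.asarray(ferritin) < cutoff))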


Invariant co-ordinate selection

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2009
David E. Tyler
Summary. A general method for exploring multivariate data by comparing different estimates of multivariate scatter is presented. The method is based on the eigenvalue–eigenvector decomposition of one scatter matrix relative to another. In particular, it is shown that the eigenvectors can be used to generate an affine invariant co-ordinate system for the multivariate data. Consequently, we view this method as a method for invariant co-ordinate selection. By plotting the data with respect to this new invariant co-ordinate system, various data structures can be revealed. For example, under certain independent components models, it is shown that the invariant co-ordinates correspond to the independent components. Another example pertains to mixtures of elliptical distributions. In this case, it is shown that a subset of the invariant co-ordinates corresponds to Fisher's linear discriminant subspace, even though the class identifications of the data points are unknown. Some illustrative examples are given. [source]
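
A minimal sketch of the core computation: a generalized eigendecomposition of one scatter matrix relative to another. The choice of the ordinary covariance paired with a fourth-moment scatter is an assumption for illustration; any affine-equivariant pair of scatter estimates works:

import numpy as np
from scipy.linalg import eigh

def ics_coordinates(X):
    """Invariant co-ordinates from the eigendecomposition of one scatter
    matrix relative to another (covariance vs. fourth-moment scatter)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S1 = np.cov(Xc, rowvar=False)
    # Fourth-moment scatter: observations weighted by their squared
    # Mahalanobis distances under S1.
    d2 = np.einsum('ij,jk,ik->i', Xc, np.linalg.inv(S1), Xc)
    S2 = (Xc * d2[:, None]).T @ Xc / (n * (p + 2))
    # Solve S2 v = lambda * S1 v; the eigenvectors define an affine
    # invariant co-ordinate system for the data.
    vals, vecs = eigh(S2, S1)
    return Xc @ vecs, vals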


Bootstrap-based bandwidth choice for log-periodogram regression

JOURNAL OF TIME SERIES ANALYSIS, Issue 6 2009
Josu Arteche
Abstract. The choice of the bandwidth in the local log-periodogram regression is of crucial importance for estimation of the memory parameter of a long memory time series. Different choices may give rise to completely different estimates, which may lead to contradictory conclusions, for example about the stationarity of the series. We propose here a data-driven bandwidth selection strategy that is based on minimizing a bootstrap approximation of the mean-squared error (MSE). Its behaviour is compared with other existing techniques for optimal bandwidth selection in an MSE sense, revealing its better performance in a wider class of models. The empirical applicability of the proposed strategy is shown with two examples: the Nile river annual minimum levels, widely analysed in the long memory context, and the input gas rate series of Box and Jenkins. [source]
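
For context, a minimal sketch of the local log-periodogram (GPH) regression itself, showing where the bandwidth m enters; the bootstrap MSE criterion for choosing m is the paper's contribution and is not reproduced here:

import numpy as np

def gph_estimate(x, m):
    """Log-periodogram estimate of the memory parameter d using the
    first m Fourier frequencies (the bandwidth)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / n            # Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    # Regress log I(lambda_j) on -2*log(2*sin(lambda_j/2)); the slope is d.
    reg = -2 * np.log(2 * np.sin(lam / 2))
    return np.polyfit(reg, np.log(I), 1)[0]

Running this for a few values of m on the same series makes the abstract's point concrete: the estimated d, and hence the stationarity verdict, can change with the bandwidth.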


Use of tissue water as a concentration reference for proton spectroscopic imaging

MAGNETIC RESONANCE IN MEDICINE, Issue 6 2006
Charles Gasparovic
Abstract A strategy for using tissue water as a concentration standard in ¹H magnetic resonance spectroscopic imaging studies on the brain is presented, and the potential errors that may arise when the method is used are examined. The sensitivity of the method to errors in estimates of the different water compartment relaxation times is shown to be small at short echo times (TEs). Using data from healthy human subjects, it is shown that different image segmentation approaches that are commonly used to account for partial volume effects (SPM2, FSL's FAST, and K-means) lead to different estimates of metabolite levels, particularly in gray matter (GM), owing primarily to variability in the estimates of the cerebrospinal fluid (CSF) fraction. While consistency does not necessarily validate a method, a multispectral segmentation approach using FAST yielded the lowest intersubject variability in the estimates of GM metabolites. The mean GM and white matter (WM) levels of N-acetyl groups (NAc, primarily N-acetylaspartate), choline (Ch), and creatine (Cr) obtained in these subjects using the described method with FAST multispectral segmentation are reported: GM [NAc] = 17.16 ± 1.19 mM; WM [NAc] = 14.26 ± 1.38 mM; GM [Ch] = 3.27 ± 0.47 mM; WM [Ch] = 2.65 ± 0.25 mM; GM [Cr] = 13.98 ± 1.20 mM; and WM [Cr] = 7.10 ± 0.67 mM. Magn Reson Med, 2006. © 2006 Wiley-Liss, Inc. [source]
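
Schematically, water-referenced quantification at short TE reduces to scaling the metabolite/water signal ratio and correcting for the CSF partial volume. The sketch below is a simplified illustration with hypothetical inputs; it ignores the relaxation terms the paper shows are small at short TE and assumes a nominal tissue water content:

def metabolite_conc_mM(S_met, S_water, n_protons, f_csf,
                       tissue_water_mM=0.7 * 55510.0):
    """Very simplified water-referenced concentration estimate (sketch).
    S_met, S_water: metabolite and unsuppressed-water signal areas;
    n_protons: protons contributing to the metabolite resonance;
    f_csf: CSF volume fraction from segmentation (assumed input);
    tissue_water_mM: assumed ~70% water content times pure water (55.51 M)."""
    # Water contributes 2 protons; CSF adds water signal but no metabolite.
    return (S_met / S_water) * (2.0 / n_protons) * tissue_water_mM / (1.0 - f_csf)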


Sampling within the genome for measuring within-population diversity: trade-offs between markers

MOLECULAR ECOLOGY, Issue 7 2002
S. Mariette
Abstract Experimental results of diversity estimates in a set of populations often exhibit contradictory patterns when different marker systems are used. Using simulations, we identified potential causes for these discrepancies. These investigations also aimed to determine whether different sampling strategies of markers within the genome resulted in different estimates of diversity at the whole-genome level. The simulations consisted of generating a set of populations undergoing various evolutionary scenarios, which differed in population size, migration rate and heterogeneity of gene flow. Population diversity was then computed for the whole genome and for subsets of loci corresponding to different marker techniques. Rank correlations between the two measures of diversity were investigated under different scenarios. We showed that the heterogeneity of genetic diversity either between loci (genomic heterogeneity, GH) or among populations (population heterogeneity, PH) varied greatly according to the evolutionary scenario considered. Furthermore, GH and PH were major determinants of the level of rank correlation between estimates of genetic diversity obtained using different kinds of markers. We found a strong positive relationship between the level of the correlation and PH, whatever the marker system. It was also shown that, when GH values remained low across generations, a small number of microsatellites was enough to predict the diversity of the whole genome, whereas when GH increased, more loci were needed to predict the diversity, and amplified fragment length polymorphism markers would be preferable in that case. Finally, the results are discussed with a view to recommending strategies for gene diversity surveys. [source]
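
The rank-correlation comparison at the heart of the simulations can be sketched in a few lines, with a hypothetical populations-by-loci matrix of per-locus gene diversity standing in for the simulated genomes:

import numpy as np
from scipy.stats import spearmanr

def marker_vs_genome_rank_corr(he, marker_loci):
    """Rank correlation between diversity from a marker subset and
    whole-genome diversity. `he` is a (populations x loci) matrix of
    per-locus expected heterozygosity (hypothetical input)."""
    genome_div = he.mean(axis=1)                   # whole-genome diversity
    marker_div = he[:, marker_loci].mean(axis=1)   # marker-based estimate
    return spearmanr(genome_div, marker_div).correlation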


Perturbation theory and excursion set estimates of the probability distribution function of dark matter, and a method for reconstructing the initial distribution function

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2008
Tsz Yan Lam
ABSTRACT Non-linear evolution is sometimes modelled by assuming there is a deterministic mapping from initial to final values of the locally smoothed overdensity. However, if an underdense region is embedded in a denser one, then it is possible that its evolution is determined by its surroundings, so the mapping between initial and final overdensities is not as 'local' as one might have assumed. If this source of non-locality is not accounted for, then it appears as stochasticity in the mapping between initial and final densities. Perturbation theory methods ignore this 'cloud-in-cloud' effect, whereas methods based on the excursion set approach do account for it; as a result, one may expect the two approaches to provide different estimates of the shape of the non-linear counts-in-cells distribution. We show that, on scales where the rms fluctuation is small, this source of non-locality has only a small effect, so the predictions of the two approaches differ only on the small scales on which perturbation theory is no longer expected to be valid anyway. We illustrate our results by comparing the predictions of these approaches when the initial-to-final mapping is given by the spherical collapse model. Both are in reasonably good agreement with measurements in numerical simulations on scales where the rms fluctuation is of the order of unity or smaller. If the deterministic mapping from initial conditions to final density depends on quantities other than the initial density, then this will also manifest as stochasticity in the mapping from initial density to final. For example, the Zeldovich approximation and the ellipsoidal collapse model both assume that the initial shear field plays an important role in determining the evolution. We compare the predictions of these approximations with simulations, both before and after accounting for the 'cloud-in-cloud' effect. Our analysis accounts approximately for the fact that the shape of a cell at the present time is different from its initial shape; ignoring this makes a noticeable difference on scales where the rms fluctuation in a cell is of the order of unity or larger. On scales where the rms fluctuation is 2 or less, methods based on the spherical model are sufficiently accurate to permit a rather accurate reconstruction of the shape of the initial distribution from the non-linear one. This can be used as the basis for a method for constraining the statistical properties of the initial fluctuation field from the present-day field, under the hypothesis that the evolution was purely gravitational. We illustrate by showing how the highly non-Gaussian non-linear density field in a numerical simulation can be transformed to provide an accurate estimate of the initial Gaussian distribution from which it evolved. [source]
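
For reference, the deterministic spherical-collapse mapping used for the initial-to-final comparison can be written in the widely used fitting form 1 + δ = (1 − δ_l/ν)^(−ν). The parameter value below is the common choice ν = 21/13 and is an assumption for illustration, not the paper's exact parameterization:

def spherical_collapse(delta_l, nu=21.0 / 13.0):
    """Map a linear overdensity delta_l (< nu) to the evolved non-linear
    overdensity using the standard fitting form (illustrative only)."""
    return (1.0 - delta_l / nu) ** (-nu) - 1.0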


Where are z = 4 Lyman Break Galaxies?

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2006
Results from conditional luminosity function models of luminosity-dependent clustering
ABSTRACT Using the conditional luminosity function (CLF), the luminosity distribution of galaxies in a dark matter halo, as a way to model galaxy statistics, we study how z = 4 Lyman Break Galaxies (LBGs) are distributed in dark matter haloes. For this purpose, we measure luminosity-dependent clustering of LBGs in the Subaru/XMM–Newton Deep Field by separating a sample of 16 920 galaxies into three i′-band magnitude bins between 24.5 and 27.5. Our model fits to data show a possible trend for more-luminous galaxies to appear as satellites in more-massive haloes; the minimum halo mass in which satellites appear is 3.9 (+4.1/−3.5) × 10¹², 6.2 (+3.8/−4.9) × 10¹² and 9.6 (+7.0/−4.6) × 10¹² M⊙ (1σ errors) for galaxies with 26.5 < i′ < 27.5, 25.5 < i′ < 26.5 and 24.5 < i′ < 25.5 mag, respectively. The satellite fraction of galaxies at z = 4 in these magnitude bins is 0.13–0.3, 0.09–0.22 and 0.03–0.14, respectively, where the 1σ ranges account for differences coming from two different estimates of the z = 4 LF from the literature. Jointly explaining the LF and the large-scale linear bias factor of z = 4 LBGs as a function of rest-UV luminosity requires central galaxies to be brighter in UV at z = 4 than present-day galaxies in dark matter haloes of the same mass. Moreover, the UV luminosity of central galaxies in haloes with total mass greater than roughly 10¹² M⊙ must decrease from z = 4 to today by an amount greater than the luminosity change for galaxies in haloes below this mass. This mass-dependent luminosity evolution is preferred at more than the 3σ confidence level over a pure-luminosity-evolution scenario in which all galaxies decrease in luminosity by the same amount from z = 4 to today. The scenario preferred by the data is consistent with the 'downsizing' picture of galaxy evolution. [source]
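
The satellite fractions quoted above follow from integrating the halo occupation implied by the CLF over the halo mass function. A minimal sketch with hypothetical occupation functions N_cen and N_sat:

import numpy as np

def satellite_fraction(masses, n_h, N_cen, N_sat):
    """Fraction of galaxies that are satellites, given a halo-mass grid
    `masses`, the halo mass function `n_h` on that grid, and central and
    satellite occupation functions (all hypothetical inputs)."""
    n_sat = np.trapz(n_h * N_sat(masses), masses)
    n_all = np.trapz(n_h * (N_cen(masses) + N_sat(masses)), masses)
    return n_sat / n_all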


Estimation of plasmalemma conductivity to ascorbic acid in intact leaves exposed to ozone

PHYSIOLOGIA PLANTARUM, Issue 4 2000
Irina Bichele
To establish the capacity of the leaf mesophyll plasmalemma of Phaseolus vulgaris L. to supply ascorbate (ASC) into the cell wall by simple diffusion, a method for calculating plasmalemma diffusional conductivity to ascorbic acid (AA) in intact leaves was evaluated. The core of the approach is that in the presence of a sink for ascorbate in the cell wall, the cell wall total ascorbic acid concentration [TAA]cw (=[ASC]cw+[AA]cw) reaches zero at some positive whole-leaf total ascorbic acid concentration [TAA]l. It is shown that [TAA]l at [TAA]cw=0 is proportional to the sink for ASC in the cell wall and to the reciprocal of plasmalemma conductivity. The predicted proportional relationship between [TAA]cw and [TAA]l was confirmed by decreasing TAA levels in leaves through predarkening. Furthermore, when the sink intensity for ASC in the cell wall was increased by acute exposure of leaves to 450 nmol ozone mol⁻¹ during re-illumination, [TAA]cw reached zero at a 2.7-fold higher [TAA]l than without ozone, and the slope of the relationship increased twofold. Plasmalemma diffusional conductivities to AA of 2.9×10⁻⁶ and 1.8×10⁻⁶ m s⁻¹, needed to maintain [TAA]cw at the observed level, were calculated from the increase in [TAA]l at [TAA]cw=0 and from the two different estimates of the sink for ASC. A value of 1.3×10⁻⁶ m s⁻¹ was calculated on the basis of the oil–water distribution coefficient for TAA. It is concluded that the demand for ASC in the mesophyll cell wall of the investigated leaves could be met by simple diffusion of AA through the plasmalemma. From the measured increase in the slope of the [TAA]cw versus [TAA]l relationship, an increase in cell wall pH of 0.3 units was estimated under the influence of ozone. [source]
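
The conductivity calculation itself is simple arithmetic once the sink (a flux) and the transmembrane concentration difference have been estimated; a sketch with illustrative units, not the paper's full derivation:

def plasmalemma_conductivity(flux_mol_m2_s, delta_conc_mol_m3):
    """Diffusional conductivity g = flux / concentration difference.
    With flux in mol m^-2 s^-1 and the difference in mol m^-3,
    g comes out in m s^-1, the units quoted in the abstract."""
    return flux_mol_m2_s / delta_conc_mol_m3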


Identification of occupational cancer risk in British Columbia: A population-based case–control study of 2,998 lung cancers by histopathological subtype

AMERICAN JOURNAL OF INDUSTRIAL MEDICINE, Issue 3 2009
Amy C. MacArthur MHSc
Abstract Background Few studies have investigated occupational lung cancer risk in relation to specific histopathological subtypes. Methods A case–control study was conducted to evaluate the relationship between lung cancer and occupation/industry of employment by histopathological subtype. A total of 2,998 male cases and 10,223 cancer controls, diagnosed between 1983 and 1990, were identified through the British Columbia Cancer Registry. Matched on age and year of diagnosis, conditional logistic regression analyses were performed for two different estimates of exposure with adjustment for potentially important confounding variables, including tobacco smoking, alcohol consumption, marital status, educational attainment, and questionnaire respondent. Results For all lung cancers, an excess risk was observed for workers in the primary metal (OR = 1.31, 95% CI 1.01–1.71), mining (OR = 1.53, 95% CI 1.20–1.96), machining (OR = 1.33, 95% CI 1.09–1.63), transport (OR = 1.50, 95% CI 1.08–2.07), utility (OR = 1.60, 95% CI 1.22–2.09), and protective services (OR = 1.27, 95% CI 1.05–1.55) industries. Associations with histopathological subtypes included an increased risk of squamous cell carcinoma in construction trades (OR = 1.25, 95% CI 1.06–1.48), adenocarcinoma for professional workers in medicine and health (OR = 1.73, 95% CI 1.18–2.53), small cell carcinoma in railway (OR = 1.62, 95% CI 1.06–2.49) and truck transport industries (OR = 1.51, 95% CI 1.00–2.28), and large cell carcinoma for employment in the primary metal industry (OR = 2.35, 95% CI 1.11–4.96). Conclusions Our results point to excess lung cancer risk for occupations involving exposure to metals, polyaromatic hydrocarbons and asbestos, as well as several new histopathologic-specific associations that merit further investigation. Am. J. Ind. Med. 52:221–232, 2009. © 2008 Wiley-Liss, Inc. [source]
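
A sketch of a matched analysis in this design, using statsmodels' conditional logistic regression; the data file and column names are hypothetical, not the authors' code or data:

import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("lung_cc.csv")              # hypothetical data file
model = ConditionalLogit(
    df["case"],                              # 1 = case, 0 = matched control
    df[["exposed", "smoking", "alcohol"]],   # exposure plus confounders
    groups=df["match_set"],                  # age/diagnosis-year matched sets
)
res = model.fit()
print(res.summary())                         # exp(coef) gives the odds ratio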


Habitat-specific ranging patterns of Dian's tarsiers (Tarsius dianae) as revealed by radiotracking

AMERICAN JOURNAL OF PRIMATOLOGY, Issue 2 2006
Stefan Merker
Abstract Dian's tarsier Tarsius dianae, one of the smallest primates on earth, is endemic to the central regions of Sulawesi, Indonesia. To evaluate the effects of increasing land use by humans on the ranging patterns of this nocturnal insect hunter, four study plots along a gradient of anthropogenic disturbance were selected for this study. In these plots, 71 tarsiers were captured with mist nets, and 30 of these were fitted with 3.9 g radiotransmitters and subsequently tracked over the course of 2 weeks per animal. The average home ranges were 1.1–1.8 ha in size, with the smallest ranges in slightly disturbed habitat and the largest ranges in a heavily disturbed plantation. These findings coincide with different estimates of insect abundance in the study plots. Nightly travel distances were smallest in undisturbed old-growth forest and slightly increased along a gradient of human disturbance. The tarsiers were most active shortly after dusk and just before dawn. The results of this comprehensive radiotracking study on tarsiers show that T. dianae adapts its ranging behavior to the degree and type of human land use. Integrated data on home range size and travel distance indicate that slightly disturbed forest is as favorable to these animals as undisturbed habitat. However, with increasing anthropogenic effects, the living conditions of the tarsiers appear to deteriorate, resulting in the necessity for larger home and night ranges. The results of this study provide an important tool for directing conservation efforts targeted at the survival of this primate in central Sulawesi. Am. J. Primatol. 68:111–125, 2006. © 2006 Wiley-Liss, Inc. [source]
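
Home-range figures like the 1.1–1.8 ha quoted above are often computed as minimum convex polygons around the radio fixes; the abstract does not state the estimator used, so the sketch below is a generic illustration:

import numpy as np
from scipy.spatial import ConvexHull

def mcp_area_ha(fixes_xy_m):
    """Minimum-convex-polygon home-range area in hectares, from an (n, 2)
    array of x/y fix coordinates in metres (hypothetical input)."""
    hull = ConvexHull(np.asarray(fixes_xy_m))
    return hull.volume / 10_000.0   # for 2-D hulls, .volume is the area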


A comparison between criteria for diagnosing atopic eczema in infants

BRITISH JOURNAL OF DERMATOLOGY, Issue 2 2005
H. Jøhnke
Summary. Background: Epidemiological studies have shown different estimates of the frequency of atopic eczema (AE) in children. This may be explained by several factors, including variations in the definition of AE, study design, age of the study group, and the possibility of a changed perception of atopic diseases. The role of IgE sensitization in AE is a matter of debate. Objectives: To determine the prevalence and cumulative incidence of AE in a group of unselected infants followed prospectively from birth to 18 months of age using different diagnostic criteria; to evaluate the agreement between criteria; and to describe the association between atopic heredity and postnatal sensitization, respectively, and the development of AE according to the different diagnostic criteria. Methods: During a 1-year period a consecutive series of 1095 newborns and their parents were approached at the maternity ward at the Odense University Hospital, Denmark, and a cohort of 562 newborns was established. Infants were examined and followed prospectively from birth and at 3, 6, 9, 12 and 18 months of age. AE was diagnosed using four different criteria: the Hanifin and Rajka criteria, the Schultz-Larsen criteria, the Danish Allergy Research Centre (DARC) criteria developed for this study, and doctor-diagnosed visible eczema with typical morphology and atopic distribution. Additionally, the U.K. diagnostic criteria based on a questionnaire were used at 1 year of age. Agreement between the four criteria was analysed at each time point and over time, and agreement between the four criteria and the U.K. questionnaire criteria was analysed. Results: The cumulative 1-year prevalence of AE using the Hanifin and Rajka criteria was 9·8% (95% confidence interval, CI 7–13%), for the Schultz-Larsen criteria it was 7·5% (95% CI 5–10%), for the DARC criteria 8·2% (95% CI 6–11%), for visible eczema 12·2% (95% CI 9–16%) and for the U.K. criteria 7·5% (95% CI 5–10%). The pairwise agreement between criteria was good, with rates varying between 93% and 97% and kappa scores between 0·6 and 0·8. Agreement analysis of diagnoses between the four criteria demonstrated that cumulative incidences showed better agreement than point prevalence values. Conclusions: Agreement between different criteria for diagnosing AE was acceptable, but the mild cases, although in the minority, constituted a diagnostic problem. Repeated examinations gave better agreement between diagnostic criteria than a single examination. Atopic heredity was less predictive for AE than sensitization to common food and inhalant allergens in early childhood. [source]
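
The agreement statistics used above, percentage agreement and Cohen's kappa, can be sketched for two binary diagnosis vectors with one entry per infant; the arrays are hypothetical:

import numpy as np

def agreement_and_kappa(a, b):
    """Observed agreement and Cohen's kappa for two 0/1 diagnosis arrays."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                              # observed agreement
    pe = np.mean(a) * np.mean(b) + np.mean(1 - a) * np.mean(1 - b)
    return po, (po - pe) / (1 - pe)                   # chance-corrected kappa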