Sampling Error (sampling + error)



Selected Abstracts


Statistical hypothesis testing in intraspecific phylogeography: nested clade phylogeographical analysis vs. approximate Bayesian computation

MOLECULAR ECOLOGY, Issue 2 2009
ALAN R. TEMPLETON
Abstract Nested clade phylogeographical analysis (NCPA) and approximate Bayesian computation (ABC) have been used to test phylogeographical hypotheses. Multilocus NCPA tests null hypotheses, whereas ABC discriminates among a finite set of alternatives. The interpretive criteria of NCPA are explicit and allow complex models to be built from simple components. The interpretive criteria of ABC are ad hoc and require the specification of a complete phylogeographical model. The conclusions from ABC are often influenced by implicit assumptions arising from the many parameters needed to specify a complex model. These complex models confound many assumptions so that biological interpretations are difficult. Sampling error is accounted for in NCPA, but ABC ignores important sources of sampling error that create pseudo-statistical power. NCPA generates the full sampling distribution of its statistics, but ABC only yields local probabilities, which in turn make it impossible to distinguish between a good-fitting model, a non-informative model, and an over-determined model. Both NCPA and ABC use approximations, but convergences of the approximations used in NCPA are well defined whereas those in ABC are not. NCPA can analyse a large number of locations, but ABC cannot. Finally, the dimensionality of the tested hypothesis is known in NCPA, but not for ABC. As a consequence, the 'probabilities' generated by ABC are not true probabilities and are statistically non-interpretable. Accordingly, ABC should not be used for hypothesis testing, but simulation approaches are valuable when used in conjunction with NCPA or other methods that do not rely on highly parameterized models. [source]


Fine-needle aspiration cytology of salivary glands: Diagnostic pitfalls revisited

DIAGNOSTIC CYTOPATHOLOGY, Issue 8 2006
Arvind Rajwanshi M.D., F.R.C.Path.
Abstract Fine-needle aspiration cytology (FNAC) of salivary gland lesions is a safe, effective diagnostic technique. Several amply illustrated reviews are available in the English literature. The reported diagnostic accuracy varies from 86% to 98%. Sensitivity ranges from 62% to 97.6%, and specificity is higher, ranging from 94.3% to 100%. In the present study, we analyzed 172 salivary gland aspirates; histopathological diagnosis was available in 45 cases. There was discordance between the cytological and histopathological diagnoses in nine cases. Five cases had discrepancies in the benign versus malignant diagnosis, four of which were false negative. The errors in these FNA diagnoses were due to sampling error, observational error and interpretational error. This study therefore illustrates the high diagnostic accuracy of FNAC in salivary gland lesions and shows that FNAC offers valuable information that allows the planning of subsequent patient management. Diagn. Cytopathol. 2006;34:580–584. © 2006 Wiley-Liss, Inc. [source]


Systematic sample design for the estimation of spatial means

ENVIRONMETRICS, Issue 1 2003
Luis Ambrosio Flores
Abstract This article develops a practical approach to systematic sampling for the estimation of the spatial mean of an attribute in a selected area. A design-based approach is used to estimate population parameters, but it is combined with elements of a model-based approach in order to identify the spatial correlation structure, to evaluate the relative efficiency of the sample mean under simple random and systematic sampling, to estimate sampling error and to assess the sample size needed to achieve a desired level of precision. Using two case studies (land-use estimation and the weed seedbank in soil), it is demonstrated how the practical basis for the design of systematic samples provided in this work should be applied, and it is shown that, if the spatial correlation is ignored, both the sampling error of the sample mean and the sample size needed to achieve a desired level of precision with systematic sampling are overestimated. Copyright © 2003 John Wiley & Sons, Ltd. [source]
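
As a minimal illustration of the point that ignoring spatial correlation overestimates the sampling error of a systematic sample mean, the following Python sketch compares the empirical sampling variance of systematic and simple random sample means with the naive sigma^2/n formula. The one-dimensional study area, unit-variance exponential correlation and all parameter values are hypothetical and are not taken from the article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1-D study area discretised into N points, exponential correlation
    N, n_samp, corr_range = 400, 20, 25.0
    lags = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    C = np.exp(-lags / corr_range)                 # unit-variance covariance matrix
    L = np.linalg.cholesky(C + 1e-10 * np.eye(N))

    sys_err, srs_err = [], []
    start_max = N // n_samp
    for _ in range(2000):
        field = L @ rng.standard_normal(N)         # one realisation of the attribute
        true_mean = field.mean()
        # systematic sample: every (N/n_samp)-th point with a random start
        start = rng.integers(0, start_max)
        sys_err.append(field[start::start_max][:n_samp].mean() - true_mean)
        # simple random sample of the same size
        srs_err.append(field[rng.choice(N, n_samp, replace=False)].mean() - true_mean)

    print("Empirical sampling variance, systematic :", np.var(sys_err))
    print("Empirical sampling variance, SRS        :", np.var(srs_err))
    print("Naive sigma^2/n (ignores correlation)   :", 1.0 / n_samp)
    # With positive spatial correlation, the naive formula overstates the error of
    # the systematic sample mean, as the abstract describes.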


The Role of Intracranial Electrode Reevaluation in Epilepsy Patients After Failed Initial Invasive Monitoring

EPILEPSIA, Issue 5 2000
Adrian M. Siegel
Summary: Purpose: Intracranial electrode recording often provides localization of the site of seizure onset to allow epilepsy surgery. In patients whose invasive evaluation fails to localize seizure origin, the utility of further invasive monitoring is unknown. This study was undertaken to explore the hypothesis that a second intracranial investigation in selected patients warrants consideration and can lead to successful epilepsy surgery. Methods: A series of 110 consecutive patients with partial epilepsy who had undergone intracranial electrode evaluation (by subdural strip, subdural grid, and/or depth electrodes) between February 1992 and October 1998 was retrospectively analyzed. Of these, failed localization of seizure origin was thought to be due to sampling error in 13 patients. Nine of these 13 patients underwent a second intracranial investigation. Results: Reevaluation with intracranial electrodes resulted in satisfactory seizure-onset localization in seven of nine patients, and these seven had epilepsy surgery. Three frontal, two temporal, and one occipital resection, as well as one multiple subpial transection, were performed. Six patients have become seizure free, and one was not significantly improved. The mean follow-up is 2.8 years. There was no permanent morbidity. Conclusions: In selected patients in whom invasive monitoring fails to identify the site of seizure origin, reinvestigation with intracranial electrodes can achieve localization of the region of seizure onset and allow successful surgical treatment. [source]


HETEROZYGOTE EXCESS IN SMALL POPULATIONS AND THE HETEROZYGOTE-EXCESS EFFECTIVE POPULATION SIZE

EVOLUTION, Issue 9 2004
François Balloux
Abstract It has been proposed that effective size could be estimated in small dioecious populations by considering the heterozygote excess observed at neutral markers. When the number of breeders is small, allelic frequencies in males and females will differ slightly due to binomial sampling error. However, this excess of heterozygotes is not generated by dioecy but by the absence of individuals produced through selfing. Consequently, the approach can also be applied to self-incompatible monoecious species. Some inaccuracies in earlier equations expressing effective size as a function of the heterozygote excess are also corrected in this paper. The approach is then extended to subdivided populations, where the time of sampling becomes crucial. When adults are sampled, the effective size of the entire population can be estimated, whereas when juveniles are sampled, the average effective number of breeders per subpopulation can be estimated. The main limitation of the heterozygote excess method is that it will only perform satisfactorily for populations with a small number of reproducing individuals. While this situation is unlikely to occur frequently at the scale of the entire population, structured populations with small subpopulations are likely to be common. The estimation of the average number of breeders per subpopulation is thus expected to be applicable to many natural populations. The approach is straightforward to compute and independent of equilibrium assumptions. Applications to simulated data suggest that the estimation of the number of breeders is robust to mutation and migration rates, and to specificities of the mating system. [source]
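
A hedged simulation of the core mechanism described above (not Balloux's estimator itself): with a small number of breeders, allele frequencies among sires and dams differ by binomial sampling error, and the expected offspring heterozygosity exceeds the Hardy-Weinberg value by half the squared frequency difference. All parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)

    def offspring_heterozygote_excess(n_males, n_females, p=0.5):
        """One biallelic neutral locus in a small dioecious population.

        Returns the expected offspring heterozygosity (one allele drawn from the
        male pool, one from the female pool) and the Hardy-Weinberg expectation
        at the pooled allele frequency.
        """
        male_alleles = rng.binomial(2, p, n_males)     # allele counts per male (0, 1, 2)
        female_alleles = rng.binomial(2, p, n_females)
        p_m = male_alleles.mean() / 2                  # allele frequency among sires
        p_f = female_alleles.mean() / 2                # allele frequency among dams
        h_obs = p_m * (1 - p_f) + p_f * (1 - p_m)      # expected offspring heterozygosity
        p_bar = (p_m + p_f) / 2
        h_exp = 2 * p_bar * (1 - p_bar)                # Hardy-Weinberg expectation
        return h_obs, h_exp

    results = [offspring_heterozygote_excess(5, 5) for _ in range(5000)]
    h_obs = np.mean([h for h, _ in results])
    h_exp = np.mean([h for _, h in results])
    print(f"Mean observed heterozygosity: {h_obs:.4f}")
    print(f"Mean HW expectation:          {h_exp:.4f}  (excess = {h_obs - h_exp:.4f})")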


Endpoints of therapy in chronic hepatitis B

HEPATOLOGY, Issue S5 2009
Jordan J. Feld
Because clearance of hepatitis B virus (HBV) infection is rarely, if ever, achievable, the goals of therapy necessarily focus on prevention of bad clinical outcomes. Ideally, therapies would be shown to prevent tangible clinical endpoints such as the development of cirrhosis, end-stage liver disease and hepatocellular carcinoma. However, these endpoints typically take years or decades to occur and are therefore impractical targets for clinical trials, which last only 1–2 years. As a result, surrogate biomarkers that are believed to correlate with long-term outcome are used to evaluate therapy. Of the clinical, biochemical, serological, virological, and histological endpoints that have been evaluated, none has been shown to be ideal on its own. Symptoms are uncommon and aminotransferase levels fluctuate spontaneously. Loss of hepatitis B e antigen (HBeAg) has been the traditional therapeutic endpoint; however, its uncertain durability off treatment and the emergence of HBeAg-negative disease have made it inadequate as the sole goal of therapy. Loss of hepatitis B surface antigen is associated with improved clinical outcomes, but it is rarely achieved with current therapies. Suppression of viral replication, as measured by serum HBV DNA levels, has become the major goal of therapy, particularly if maintained off therapy. Although useful, the significance of viral levels depends on the stage of disease, the degree of liver damage, and the type of therapy. Finally, liver biopsy, often considered the gold standard, is invasive, prone to sampling error, and may take years to change significantly. At present, there is no ideal biomarker for the evaluation of therapies for hepatitis B. Future research should be directed at the development and validation of surrogate markers that accurately predict or reflect clinically relevant outcomes of chronic hepatitis B. (HEPATOLOGY 2009;49:S96–S102.) [source]


Non-invasive markers for the prediction of fibrosis in chronic hepatitis C infection

HEPATOLOGY RESEARCH, Issue 8 2008
Timothy Cross
Liver fibrosis occurs as a result of chronic liver injury and is the hallmark of chronic liver disease. The final stage of progressive liver fibrosis is cirrhosis, which is implicated in portal hypertension, end-stage liver disease and hepatocellular carcinoma. Liver biopsy has historically been the gold-standard test for the assessment of liver fibrosis in liver diseases such as viral hepatitis, autoimmune hepatitis and primary biliary cirrhosis. Improved serological tests have enhanced the diagnosis of these conditions and reduced the need for liver biopsy. Liver biopsy is unpopular among patients and clinicians: it is associated with morbidity and mortality and, in addition, is subject to sampling error and to inter- and intra-observer variability. There is therefore a need for non-invasive markers of liver fibrosis that are accurate, reliable, cheap and easy to use. The aim of this review is to examine the different non-invasive methods that can be used to estimate the severity of fibrosis. The methods evaluated include clinical examination, routine laboratory investigations, imaging tests, specialized tests of liver function and, finally, serum extracellular matrix markers of fibrosis. The review focuses mainly on fibrogenesis in the context of chronic hepatitis C infection. [source]


Quantifying random measurement errors in Voluntary Observing Ships' meteorological observations

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 7 2005
Elizabeth C. Kent
Abstract Estimates of the random measurement error contained in surface meteorological observations from Voluntary Observing Ships (VOS) have been made on a 30° area grid each month for the period 1970 to 2002. Random measurement errors are calculated for all the basic meteorological variables: surface pressure, wind speed, air temperature, humidity and sea-surface temperature. The random errors vary with space and time, the quality assurance applied and the types of instrument used to make the observations. The estimates of random measurement error are compared with estimates of total observational error, which includes uncertainty due both to measurement errors and to observational sampling. In tropical regions the measurement error makes a significant contribution to the total observational error in a single observation, but in higher latitudes the sampling error can be much larger. Copyright © 2005 Royal Meteorological Society [source]


Census error and the detection of density dependence

JOURNAL OF ANIMAL ECOLOGY, Issue 4 2006
ROBERT P. FRECKLETON
Summary: (1) Studies aiming to identify the prevalence and nature of density dependence in ecological populations have often used statistical analysis of ecological time-series of population counts. Such time-series are also being used increasingly to parameterize models that may be used in population management. (2) If time-series contain measurement errors, tests that rely on detecting a negative relationship between log population change and population size are biased and prone to spuriously detecting density dependence (Type I error). This is because the measurement error in density for a given year appears in the corresponding change in population density, with equal magnitude but opposite sign. (3) This effect introduces bias that may invalidate comparisons of ecological data with density-independent time-series. Unless census error can be accounted for, time-series may appear to show strongly density-dependent dynamics, even though the density-dependent signal may in reality be weak or absent. (4) We distinguish two forms of census error, both of which have serious consequences for detecting density dependence. (5) First, estimates of population density are rarely based on exact counts, but on samples. Hence there exists sampling error, with the level of error depending on the method employed and the number of replicates on which the population estimate is based. (6) Secondly, the group of organisms measured is often not a truly self-contained population, but part of a wider ecological population, defined in terms of location or behaviour. Consequently, the subpopulation studied may effectively be a sample of the population, and spurious density dependence may be detected in the dynamics of a single subpopulation. In this case, density dependence is detected erroneously, even if numbers within the subpopulation are censused without sampling error. (7) In order to illustrate how process variation and measurement error may be distinguished, we review data sets (counts of numbers of birds by single observers) for which both census error and long-term variance in population density can be estimated. (8) Tests for density dependence need to obviate the problem that measured population sizes are typically estimates rather than exact counts. It may be possible in some cases to test for density dependence in the presence of unknown levels of census error, for example by uncovering nonlinearities in the density response. However, such tests are likely to lack power compared with analyses that explicitly include census error, and we review some recently developed methods. [source]
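
The following sketch, with hypothetical parameter values, illustrates point (2) of the summary: regressing estimated log population change on estimated log population size for a density-independent random walk yields a markedly more negative slope once census (sampling) error is added, mimicking density dependence.

    import numpy as np

    rng = np.random.default_rng(2)

    def slope_of_growth_on_density(log_n):
        """OLS slope of r_t = log N[t+1] - log N[t] on log N[t]."""
        return np.polyfit(log_n[:-1], np.diff(log_n), 1)[0]

    T, reps, obs_sd = 30, 1000, 0.3
    slopes_true, slopes_noisy = [], []
    for _ in range(reps):
        # Density-independent dynamics: a random walk in log abundance
        log_n = np.cumsum(rng.normal(0.0, 0.2, T)) + 5.0
        slopes_true.append(slope_of_growth_on_density(log_n))
        # The same series observed with census (sampling) error
        slopes_noisy.append(slope_of_growth_on_density(log_n + rng.normal(0.0, obs_sd, T)))

    print("Mean slope, exact counts    :", round(np.mean(slopes_true), 3))
    print("Mean slope, with census err :", round(np.mean(slopes_noisy), 3))
    # The census-error slope is clearly more negative, i.e. spurious density dependence.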


Small-Sample Equating Using a Synthetic Linking Function

JOURNAL OF EDUCATIONAL MEASUREMENT, Issue 4 2008
Sooyeon Kim
This study addressed the sampling error and linking bias that occur with small samples in a nonequivalent groups anchor test design. We proposed a linking method called the synthetic function, which is a weighted average of the identity function and a traditional equating function (in this case, the chained linear equating function). Specifically, we compared the synthetic, identity, and chained linear functions for various-sized samples from two types of national assessments. One design used a highly reliable test and an external anchor, and the other used a relatively low-reliability test and an internal anchor. The results from each of these methods were compared to the criterion equating function derived from the total samples with respect to linking bias and error. The study indicated that the synthetic functions might be a better choice than the chained linear equating method when samples are not large and, as a result, unrepresentative. [source]
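
A minimal sketch of the synthetic function as described: a weighted average of the identity function and a traditional equating function. A generic linear equating stands in here for the chained linear function, and the means, standard deviations and weight are hypothetical rather than the study's values.

    def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
        """A generic linear equating of score x from form X to the form-Y scale."""
        return mean_y + (sd_y / sd_x) * (x - mean_x)

    def synthetic_equate(x, linear_params, w):
        """Weighted average of the identity function and a linear equating function.

        w = 1 reproduces the traditional equating; w = 0 reproduces the identity.
        """
        return w * linear_equate(x, *linear_params) + (1 - w) * x

    # Hypothetical small-sample estimates of form means/SDs and a weight of 0.5
    params = (50.0, 10.0, 52.0, 11.0)
    for raw in (30, 50, 70):
        print(raw, "->", round(synthetic_equate(raw, params, w=0.5), 2))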


A common model approach to macroeconomics: using panel data to reduce sampling error

JOURNAL OF FORECASTING, Issue 3 2005
William T. Gavin
Abstract Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual country empirical models, for a wide variety of countries, often include the same variables. Yet, empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macrodynamic behaviour of 15 OECD countries in terms of a small set of familiar, widely used core economic variables, omitting country-specific shocks. We find this small set of variables and a simple VAR 'common model' strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Proper Assessment of the JFK Assassination Bullet Lead Evidence from Metallurgical and Statistical Perspectives

JOURNAL OF FORENSIC SCIENCES, Issue 4 2006
Erik Randich Ph.D.
ABSTRACT: The bullet evidence in the JFK assassination investigation was reexamined from metallurgical and statistical standpoints. The questioned specimens are composed of soft lead, possibly from full-metal-jacketed Mannlicher-Carcano (MC), 6.5-mm ammunition. During lead refining, contaminant elements are removed to specified levels for a desired alloy or composition. Microsegregation of trace and minor elements during lead casting and processing can account for the experimental variabilities measured in various evidentiary and comparison samples by laboratory analysts. Thus, elevated concentrations of antimony and copper at crystallographic grain boundaries, the widely varying sizes of grains in MC bullet lead, and the 5–60 mg bullet samples analyzed for assassination intelligence effectively resulted in operational sampling error for the analyses. This deficiency was not considered in the original data interpretation and resulted in an invalid conclusion in favor of the single-bullet theory of the assassination. Alternate statistical calculations, based on the historic analytical data and incorporating weighted averaging and propagation of experimental uncertainties, also considerably weaken support for the single-bullet theory. In effect, this assessment of the material composition of the lead specimens from the assassination concludes that the extant evidence is consistent with any number between two and five rounds fired in Dealey Plaza during the shooting. [source]


MR-guided biopsy of musculoskeletal lesions in a low-field system

JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 5 2001
Claudius W. Koenig MD
Abstract Thirty magnetic resonance (MR)-guided biopsies were obtained from 20 skeletal and 10 soft-tissue lesions in 31 patients using an open 0.2 T MR system equipped with interventional accessories. The results from aspiration (N = 3), core biopsy (N = 15), and transcortical trephine biopsy (N = 12) were evaluated for accuracy and clinical efficacy. Specimens were successfully obtained from 29 patients. Results were clinically effective in 23 patients, rated definitive in 16, nonconclusive in 9, and unspecific in 2 patients. A false diagnosis due to sampling error occurred in 2 patients, and biopsy sampling was impossible in one case. The best diagnostic yield was achieved from nontranscortical biopsies of osteolytic or soft-tissue masses. Results from transcortical biopsies were less specific due to the predominance of benign lesions. MR fluoroscopy for needle guidance was applied in 13 patients. Complete needle placement inside the magnet could be performed in 16 patients. MR-guided biopsy using an open low-field MR imager is feasible and clinically effective and will become a valuable tool in the management of musculoskeletal lesions. J. Magn. Reson. Imaging 2001;13:761–768. © 2001 Wiley-Liss, Inc. [source]


An introduction to medical statistics for health care professionals: Hypothesis tests and estimation

MUSCULOSKELETAL CARE, Issue 2 2005
Elaine Thomas PhD MSc BSc Lecturer in Biostatistics
Abstract This article is the second in a series of three that will give health care professionals (HCPs) a sound introduction to medical statistics (Thomas, 2004). The objective of research is to find out about the population at large. However, it is generally not possible to study the whole of the population and research questions are addressed in an appropriate study sample. The next crucial step is then to use the information from the sample of individuals to make statements about the wider population of like individuals. This procedure of drawing conclusions about the population, based on study data, is known as inferential statistics. The findings from the study give us the best estimate of what is true for the relevant population, given the sample is representative of the population. It is important to consider how accurate this best estimate is, based on a single sample, when compared to the unknown population figure. Any difference between the observed sample result and the population characteristic is termed the sampling error. This article will cover the two main forms of statistical inference (hypothesis tests and estimation) along with issues that need to be addressed when considering the implications of the study results. Copyright © 2005 Whurr Publishers Ltd. [source]
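
A small numerical illustration of the terms used above: the sample mean as the best estimate, its standard error, an approximate 95% confidence interval, and the (in practice unknown) sampling error relative to the population value. The population parameters and sample size are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)

    # Hypothetical population and a single study sample drawn from it
    population_mean, population_sd = 50.0, 10.0
    sample = rng.normal(population_mean, population_sd, size=40)

    estimate = sample.mean()                        # best estimate from the study sample
    se = sample.std(ddof=1) / np.sqrt(len(sample))  # standard error of that estimate
    ci = (estimate - 1.96 * se, estimate + 1.96 * se)

    print(f"Sample estimate : {estimate:.2f}")
    print(f"Sampling error  : {estimate - population_mean:+.2f} (unknown in practice)")
    print(f"Approx. 95% CI  : {ci[0]:.2f} to {ci[1]:.2f}")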


Test of the relationship between sutural ossicles and cultural cranial deformation: Results from Hawikuh, New Mexico

AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 4 2009
Cynthia A. Wilczak
Abstract A number of researchers have hypothesized that the biomechanical forces associated with cultural cranial deformation can influence the formation of sutural ossicles. However, it is still difficult to draw definitive conclusions about this relationship because the effects appear to be quite weak, and contradictory results have been obtained when specific sutures and deformation types are compared across studies. This research retests the hypothesis using a single archeological sample of lambdoidally deformed, occipitally deformed, and undeformed crania from Hawikuh, New Mexico (AD 1300–1680). Our results show no significant difference in either the prevalence or the number of ossicles between deformed and undeformed crania, suggesting that the abnormal strains generated by cranial shape modification during infancy are not a factor in ossicle development for this population. One significant relationship was detected at the right lambdoid suture in crania with asymmetrical occipital deformation. Crania that were more deformed on the left side showed greater numbers of ossicles on the right side, but the effect was small. Furthermore, the relationship may well reflect sampling error, given the small number of crania with greater left-side deformation and scorable right-side lambdoid ossicles (n = 11). Although it is possible that forms of cranial deformation other than the posterior tabular types examined here may affect ossicle expression, our review of the literature suggests that the relationship in humans is complex and incompletely understood at this time. Am J Phys Anthropol, 2009. © 2009 Wiley-Liss, Inc. [source]


A robust formulation of the ensemble Kalman filter

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 639 2009
S. J. Thomas
Abstract The ensemble Kalman filter (EnKF) can be interpreted in the more general context of linear regression theory. The recursive filter equations are equivalent to the normal equations for a weighted least-squares estimate that minimizes a quadratic functional. Solving the normal equations is numerically unreliable and subject to large errors when the problem is ill-conditioned. A numerically reliable and efficient algorithm is presented, based on the minimization of an alternative functional. The method relies on orthogonal rotations, is highly parallel and does not 'square' matrices in order to compute the analysis update. Computation of eigenvalue and singular-value decompositions is not required. The algorithm is formulated to process observations serially or in batches and therefore easily handles spatially correlated observation errors. Numerical results are presented for existing algorithms with a hierarchy of models characterized by chaotic dynamics. Under a range of conditions, which may include model error and sampling error, the new algorithm achieves the same or lower mean square errors as the serial Potter and ensemble adjustment Kalman filter (EAKF) algorithms. Published in 2009 by John Wiley and Sons, Ltd. [source]
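
For orientation, the following is a minimal perturbed-observation EnKF analysis update in its standard textbook form, i.e. the kind of formulation the paper's orthogonal-rotation algorithm is designed to improve upon. It explicitly forms the sample covariance, is not the authors' method, and all dimensions and values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(4)

    def enkf_analysis(X, H, y, R):
        """Standard stochastic (perturbed-observation) EnKF analysis step.

        X : (n_state, n_ens) forecast ensemble
        H : (n_obs, n_state) observation operator
        y : (n_obs,) observation vector
        R : (n_obs, n_obs) observation-error covariance
        """
        n_ens = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)          # ensemble anomalies
        P = A @ A.T / (n_ens - 1)                      # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        # Perturb observations so the analysis ensemble has the correct spread
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
        return X + K @ (Y - H @ X)

    # Tiny hypothetical example: 3 state variables, 2 observed, 20 members
    X = rng.standard_normal((3, 20)) + 1.0
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    y = np.array([0.5, -0.2])
    R = 0.1 * np.eye(2)
    print("Analysis ensemble mean:", enkf_analysis(X, H, y, R).mean(axis=1))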


Does the skull carry a phylogenetic signal?

BIOLOGICAL JOURNAL OF THE LINNEAN SOCIETY, Issue 4 2008
Evolution and modularity in the guenons
Form and genes often tell different stories about the evolution of animals, with molecular data generally considered to be more objective than morphological data. However, form provides the basis for the description of organisms, and the study of fossils crucially depends on morphology. Complex organisms tend to evolve as 'mosaics', in which parts may be modified at varying rates and in response to different selective pressures. Thus, individual anatomical regions may contain different phylogenetic signals. In the present study, we used computerized methods to 'dissect' the skulls of a primate clade, the guenons, into functional and developmental modules (FDM). The potential of different modules as proxies for phylogenetic divergence in modern lineages was investigated. We found that the chondrocranium was the only FDM in which shape consistently had a strong and significant phylogenetic signal. This region might be less susceptible to epigenetic factors and thus more informative about phylogeny. The examination of the topology of trees from the chondrocranium suggested that the main differences evolved at the time of the radiation of terrestrial and arboreal guenons. However, phylogenetic reconstructions were found to be strongly affected by sampling error, with more localized anatomical regions (i.e. smaller/less complex FDMs) generally producing less reproducible tree topologies. This finding, if confirmed in other groups, implies that the utility of specific FDMs for phylogenetic inference could, in many cases, be hampered by the low reproducibility of results. The study also suggested that uncertainties due to sampling error may be larger than those from character sampling. This might have implications for phylogenetic analyses, which typically provide estimates of support of tree nodes based on characters but do not generally take into account the effect of sampling error on the tree topology. Nonetheless, studies of the potential of different FDMs as proxies for phylogenetic divergence in modern lineages, such as the present study, provide a framework that may help in modelling the morphological evolution of present and fossil species. © 2008 The Linnean Society of London, Biological Journal of the Linnean Society, 2008, 93, 813–834. [source]


Efficiency of Functional Regression Estimators for Combining Multiple Laser Scans of cDNA Microarrays

BIOMETRICAL JOURNAL, Issue 1 2009
C. A. Glasbey
Abstract The first stage in the analysis of cDNA microarray data is estimation of the level of expression of each gene, from laser scans of hybridised microarrays. Typically, data are used from a single scan, although, if multiple scans are available, there is the opportunity to reduce sampling error by using all of them. Combining multiple laser scans can be formulated as multivariate functional regression through the origin. Maximum likelihood estimation fails, but many alternative estimators exist, one of which is to maximise the likelihood of a Gaussian structural regression model. We show by simulation that, surprisingly, this estimator is efficient for our problem, even though the distribution of gene expression values is far from Gaussian. Further, it performs well if errors have a heavier tailed distribution or the model includes intercept terms, but not necessarily in other regions of parameter space. Finally, we show that by combining multiple laser scans we increase the power to detect differential expression of genes. (© 2009 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Novel application of flow cytometry: Determination of muscle fiber types and protein levels in whole murine skeletal muscles and heart

CYTOSKELETON, Issue 12 2007
Connie Jackaman
Abstract Conventional methods for measuring proteins within muscle samples, such as immunohistochemistry and western blot analysis, can be time consuming, labor intensive and subject to sampling errors. We have developed flow cytometry techniques to detect proteins in whole murine heart and skeletal muscle. Flow cytometry and immunohistochemistry were performed on quadriceps and soleus muscles from male C57BL/6J, BALB/c, CBA and mdx mice. Proteins including actins, myosins, tropomyosin and α-actinin were detected via single-staining flow cytometric analysis. This correlated with immunohistochemistry using the same antibodies. Muscle fiber types could be determined by dual-labeled flow cytometry for skeletal muscle actin and different myosins. This showed similar results to immunohistochemistry for I, IIA and IIB myosins. Flow cytometry of heart samples from C57BL/6J and BALB/c mice dual labeled with cardiac and skeletal muscle actin antibodies demonstrated the known increase in skeletal actin protein in BALB/c hearts. The membrane-associated proteins α-sarcoglycan and dystrophin could be detected in C57BL/6J mice, but were decreased or absent in mdx mice. With the ability to label whole muscle samples simultaneously with multiple antibodies, flow cytometry may have advantages over conventional methods for certain applications, including assessing the efficacy of potential therapies for muscle diseases. Cell Motil. Cytoskeleton 2007. © 2007 Wiley-Liss, Inc. [source]


Estrogen and progesterone hormone receptor status in breast carcinoma: Comparison of immunocytochemistry and immunohistochemistry

DIAGNOSTIC CYTOPATHOLOGY, Issue 3 2002
Svetlana Tafjord M.D.
Abstract We evaluated the correlation between histologic and cytologic specimens in the determination of estrogen receptor (ER) and progesterone receptor (PR) status in breast carcinoma and investigated the causes of clinically significant discrepancies. We analyzed 70 immunoassays for ER and 60 for PR from 71 patients with breast carcinoma. Concordance between cytology and histology was 89% for ER and 63% for PR using scores from pathology reports. Concordance between cytology and histology was 98% for ER and 91% for PR using consensus scores (obtained after reevaluation by the team of pathologists). Thirty of 130 (23%) tests had clinically relevant discrepancies, 53% of which were caused by wrong interpretation of cytologic findings, 10% by wrong interpretation of histologic findings, 17% by sampling error, and 20% were not available for reevaluation. Wrong interpretation of the results for ER and PR status in cytology was a far more frequent cause of clinically relevant discrepancies than sampling errors. The use of strict criteria is recommended. Diagn. Cytopathol. 2002;26:137–141; DOI 10.1002/dc.10079 © 2002 Wiley-Liss, Inc. [source]


Measurement error and estimates of population extinction risk

ECOLOGY LETTERS, Issue 1 2004
John M. McNamara
Abstract It is common to estimate the extinction probability for a vulnerable population using methods that are based on the mean and variance of the long-term population growth rate. The numerical values of these two parameters are estimated from time series of population censuses. However, the proportion of a population that is registered at each census is typically not constant but will vary among years because of stochastic factors such as weather conditions at the time of sampling. Here, we analyse how such sampling errors influence estimates of extinction risk and find that sampling errors produce two opposite effects. Measurement errors lead to an exaggerated overall variance, but also introduce negative autocorrelations in the time series (which means that estimates of annual growth rates tend to alternate in size). If time-series data are treated properly, these two effects exactly counterbalance each other. We advocate routinely incorporating a measure of among-year correlations in estimating population extinction risk. [source]
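
A short simulation, with hypothetical parameters, of the two opposite effects described: census measurement error inflates the apparent variance of annual growth rates and introduces negative lag-1 autocorrelation between successive estimated growth rates.

    import numpy as np

    rng = np.random.default_rng(5)

    T, mu, sigma_proc, sigma_obs = 200, 0.02, 0.10, 0.20
    log_n = np.cumsum(rng.normal(mu, sigma_proc, T))       # true log abundance
    log_n_obs = log_n + rng.normal(0.0, sigma_obs, T)      # census with sampling error

    r_true = np.diff(log_n)        # true annual growth rates
    r_obs = np.diff(log_n_obs)     # growth rates estimated from noisy censuses

    def lag1_autocorr(x):
        return np.corrcoef(x[:-1], x[1:])[0, 1]

    print("Variance of growth rate, true     :", round(np.var(r_true), 4))
    print("Variance of growth rate, observed :", round(np.var(r_obs), 4))        # inflated
    print("Lag-1 autocorrelation, observed   :", round(lag1_autocorr(r_obs), 3)) # negative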


Sampling and analytical plus subsampling variance components for five soil indicators observed at regional scale

EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 5 2009
B. G. Rawlins
Summary When comparing soil baseline measurements with resampled values there are four main sources of error. These are: (i) location error (errors in relocating the sample site); (ii) sampling error (representing the site with a sample of material); (iii) subsampling error (selecting material for analysis); and (iv) analytical error (error in laboratory measurements). In general we cannot separate the subsampling and analytical sources of error (since we always analyse a different subsample of a specimen), so in this paper we combine these two sources into subsampling plus analytical error. More information is required on the relative magnitudes of location and sampling errors for the design of effective resampling strategies to monitor changes in soil indicators. Recently completed soil surveys of the UK with widely differing soils included a duplicate site and subsampling protocol to quantify (ii), and the sum of (iii) and (iv) above. Sampling variances are estimated from measurements on duplicate samples: two samples collected on a support of side length 20 m, separated by a short distance (21 m). Analytical and subsampling variances are estimated from analyses of two subsamples from each duplicate site. After accounting for variation caused by region, parent material class and land use, we undertook a nested analysis of data from 196 duplicate sites across three regions to estimate the relative magnitudes of the medium-scale (between-site), sampling, and subsampling plus analytical variance components for five topsoil indicators: total metal concentrations of copper (Cu), nickel (Ni) and zinc (Zn), soil pH and soil organic carbon (SOC) content. The variance components for each indicator diminish by about an order of magnitude from medium-scale, to sampling, to analytical plus subsampling. Each of the three fixed effects (parent material, land use and region) was statistically significant for each of the five indicators. The most effective way to minimise the overall uncertainty of our observations at sample sites is to reduce the sampling variance. [source]
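
A method-of-moments sketch of the balanced nested design implied by the protocol (sites, duplicate samples within sites, subsample analyses within samples), using simulated data with hypothetical variance components. The fixed effects of region, parent material and land use accounted for in the paper are omitted here.

    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical components: between-site, sampling, subsampling-plus-analytical
    s2_site, s2_samp, s2_sub = 1.0, 0.1, 0.01
    n_sites, n_samp, n_sub = 200, 2, 2

    site_eff = rng.normal(0, np.sqrt(s2_site), n_sites)
    samp_eff = rng.normal(0, np.sqrt(s2_samp), (n_sites, n_samp))
    data = (site_eff[:, None, None] + samp_eff[:, :, None]
            + rng.normal(0, np.sqrt(s2_sub), (n_sites, n_samp, n_sub)))

    # Balanced nested ANOVA mean squares
    grand = data.mean()
    samp_means = data.mean(axis=2)
    site_means = samp_means.mean(axis=1)
    ms_site = n_samp * n_sub * np.sum((site_means - grand) ** 2) / (n_sites - 1)
    ms_samp = n_sub * np.sum((samp_means - site_means[:, None]) ** 2) / (n_sites * (n_samp - 1))
    ms_sub = np.sum((data - samp_means[..., None]) ** 2) / (n_sites * n_samp * (n_sub - 1))

    # Method-of-moments estimates of the three components
    est_sub = ms_sub
    est_samp = (ms_samp - ms_sub) / n_sub
    est_site = (ms_site - ms_samp) / (n_samp * n_sub)
    print("Estimated components (site, sampling, subsampling+analytical):",
          round(est_site, 3), round(est_samp, 3), round(est_sub, 4))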


Uncertainties in early Central England temperatures

INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 8 2010
David E. Parker
Abstract Uncertainties in historical climate records constrain our understanding of natural variability of climate, but estimation of these uncertainties enables us to place recent climate events and extremes into a realistic historical perspective. Uncertainties in Central England temperature (CET) since 1878 have already been estimated; here we estimate uncertainties back to the start of the record in 1659, using Manley's publications and more recently developed techniques for estimating spatial sampling errors. Estimated monthly standard errors are of the order of 0.5 °C up to the 1720s, but 0.3 °C subsequently when more observing sites were used. Corresponding annual standard errors are up to nearly 0.4 °C in the earliest years but around 0.15 °C after the 1720s. Daily standard errors from 1772, when the daily series begins, up to 1877 are of the order of 1 °C because only a single site was used at any one time. Inter-diurnal variability in the daily CET record appears greater before 1878 than subsequently, partly because the sites were in the Midlands or southern England where day-to-day temperature variability exceeds that in the Lancashire part of Manley's CET. Copyright © 2009 Royal Meteorological Society [source]


Principles of Proper Validation: use and abuse of re-sampling for validation

JOURNAL OF CHEMOMETRICS, Issue 3-4 2010
Kim H. Esbensen
Abstract Validation in chemometrics is presented using the exemplar context of multivariate calibration/prediction. A phenomenological analysis of common validation practices in data analysis and chemometrics leads to formulation of a set of generic Principles of Proper Validation (PPV), which is based on a set of characterizing distinctions: (i) Validation cannot be understood by focusing on the methods of validation only; validation must be based on full knowledge of the underlying definitions, objectives, methods, effects and consequences, which are all outlined and discussed here. (ii) Analysis of proper validation objectives implies that there is only one valid paradigm: test set validation. (iii) Contrary to much contemporary chemometric practice (and validation myths), cross-validation is shown to be unjustified in the form of monolithic application of a one-for-all procedure (segmented cross-validation) to all data sets. Within its own design and scope, cross-validation is in reality a sub-optimal simulation of test set validation, crippled by a critical sampling variance omission, as it is manifestly based on one data set only (the training data set). Other re-sampling validation methods are shown to suffer from the same deficiencies. The PPV are universal and can be applied to all situations in which the assessment of performance is desired: prediction, classification, time-series forecasting and modelling validation. The key element of PPV is the Theory of Sampling (TOS), which allows insight into all variance-generating factors, especially the so-called incorrect sampling errors, which, if not properly eliminated, are responsible for a fatal inconstant sampling bias, for which no statistical correction is possible. In the light of TOS it is shown how a second data set (test set, validation set) is critically necessary for the inclusion of the sampling errors incurred in all 'future' situations in which the validated model must perform. Logically, therefore, all one-data-set re-sampling approaches to validation, especially cross-validation and leverage-corrected validation, should be terminated, or at the very least used only with full scientific understanding and disclosure of their detrimental variance omissions and consequences. Regarding PLS regression, an emphatic call is made for stringent commitment to test set validation based on graphical inspection of pertinent t–u plots for optimal understanding of the X–Y interrelationships and for validation guidance. QSAR/QSAP forms a partial exemption from the present test set imperative, with no generalization potential. Copyright © 2010 John Wiley & Sons, Ltd. [source]
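
To make the central distinction concrete, the following sketch contrasts segmented cross-validation (re-using the single training data set) with test set validation (assessment on samples never used for fitting) for a PLS calibration on synthetic data. It shows only the mechanics of the two validation modes; on i.i.d. synthetic data it cannot reproduce the sampling-bias argument developed from the Theory of Sampling, and all data and settings are hypothetical.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score, train_test_split

    rng = np.random.default_rng(7)

    # Synthetic "calibration" data: 100 samples, 50 spectral-like variables
    X = rng.standard_normal((100, 50))
    y = X[:, :5] @ rng.standard_normal(5) + 0.3 * rng.standard_normal(100)

    pls = PLSRegression(n_components=5)

    # Segmented cross-validation: re-uses the one training data set only
    cv_r2 = cross_val_score(pls, X, y, cv=10, scoring="r2").mean()

    # Test set validation: performance assessed on samples never used for fitting
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    test_r2 = PLSRegression(n_components=5).fit(X_tr, y_tr).score(X_te, y_te)

    print(f"10-segment cross-validation R2: {cv_r2:.3f}")
    print(f"Independent test set R2       : {test_r2:.3f}")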


Usefulness of non-invasive markers for predicting liver cirrhosis in patients with chronic hepatitis B

JOURNAL OF GASTROENTEROLOGY AND HEPATOLOGY, Issue 1 2010
Kwang Gyun Lee
Abstract Background and Aim: Recently, various non-invasive blood markers and indices have been studied to overcome the limitations of liver biopsy, such as its invasiveness and sampling errors. However, the majority of these studies have focused on patients with chronic hepatitis C. Accordingly, this study was performed to evaluate the significance of various non-invasive serum markers in terms of predicting the presence of liver cirrhosis in chronic hepatitis B. Methods: We included 125 chronic hepatitis B patients who had undergone liver biopsy. Fibrosis stage was assessed using the METAVIR scoring system (F0–F4), which defines liver cirrhosis as F4. In addition, we measured various blood markers at the time of liver biopsy. Results: Thirty-four of the 125 patients (27.2%) were rated as F4 by liver biopsy. Age, platelet count, white blood cells, aspartate aminotransferase (AST), alanine aminotransferase, haptoglobin, apolipoprotein-A1 (Apo-A1), collagen-IV, hyaluronic acid, α2-macroglobulin, matrix metalloproteinase-2, and YKL-40 were significantly different between patients with chronic hepatitis and those with liver cirrhosis. However, multivariate analysis showed that only platelet count, AST, haptoglobin, and Apo-A1 independently predicted the presence of liver cirrhosis. Having identified these four factors, we devised a system, which we refer to as platelet count, AST, haptoglobin, and Apo-A1 (PAHA). The area under the receiver-operating characteristic curve (AUROC) of the PAHA index for the presence of liver cirrhosis was 0.924 (95% confidence interval, 0.877–0.971), which was significantly greater than the AUROC of other indices of fibrosis. Conclusion: The devised PAHA system was found to be useful for predicting the presence of liver cirrhosis in patients with chronic hepatitis B. [source]


Delineating melanoma using multimodal polarized light imaging

LASERS IN SURGERY AND MEDICINE, Issue 1 2009
Zeina Tannous
Abstract Background and Significance: Melanoma accounts for 3% of all skin cancers but causes 83% of skin cancer deaths. The first step in the treatment of melanoma is the removal of the lesions, usually by surgical excision. Currently most lesions are removed without intraoperative margin control. Post-operative methods inspect 1–2% of the surgical margin and are prone to sampling errors. In this study we evaluate the use of reflectance and fluorescence polarization imaging for the demarcation of melanoma in thick fresh skin excisions. Materials and Methods: Pigmented lesions clinically suspicious for melanoma were elliptically excised with proper margins. Elliptical surgical excisions were vertically bisected along the short axis of the specimen into two halves in the middle of the pigmented lesions. The vertically bisected tumor face was imaged. After that, one half of the sample was briefly stained in an aqueous 2 mg/ml solution of tetracycline, whereas the other half was stained in a 0.2 mg/ml aqueous solution of methylene blue. Then both specimens were reimaged. Reflectance images were acquired in the spectral range between 390 and 750 nm. Fluorescence images of the tetracycline-stained tissue were excited at 390 nm and registered between 450 and 700 nm. Fluorescence of the methylene blue-stained samples was excited at 630 nm and registered between 650 and 750 nm. After imaging, the tissue was processed for standard H&E histopathology. The resulting histological and optical images were compared to each other. Results and Conclusions: Our findings demonstrate that both tetracycline and methylene blue are suitable for imaging dysplastic and benign nevi. Melanoma is better delineated in the samples stained in methylene blue. Accurate and rapid delineation of melanoma in standard fresh surgical excisions appears feasible. Lasers Surg. Med. 41:10–16, 2009. © 2008 Wiley-Liss, Inc. [source]