Potential Errors
Selected Abstracts

Potential Errors in Detecting Earnings Management: Reexamining Studies Investigating the AMT of 1986
CONTEMPORARY ACCOUNTING RESEARCH, Issue 4, 2001. Won W. Choi

Abstract: In this paper we seek to document errors that could affect studies of earnings management. The book income adjustment (BIA) of the alternative minimum tax (AMT) created apparently strong incentives to manage book income downward in 1987. Five earlier papers using different methodologies and samples all conclude that earnings were reduced in response to the BIA. This consensus of findings offers an opportunity to investigate our speculation that methodological biases are more likely when there appear to be clear incentives for earnings management. A reexamination of these studies uncovers potential biases related to a variety of factors, including choices of scaling variables, selection of affected and control samples, and measurement error in estimated discretionary accruals. A reexamination of the argument underlying these studies also suggests that the incentives to manage earnings are less powerful than initially predicted, and are partially mitigated by tax and non-tax factors. As a result, we believe that the extent of earnings management that occurred in 1987 in response to the BIA remains an unresolved issue. [source]

Discussion of "Potential Errors in Detection of Earnings Management: Reexamining Studies Investigating the AMT of 1986"
CONTEMPORARY ACCOUNTING RESEARCH, Issue 4, 2001. Dan S. Dhaliwal

First page of article. [source]

Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6, 2007. Thomas C. Pierson

Abstract: Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating.
Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But estimating the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time, or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where the timing of such geomorphic changes can be critical to emergency planning. Earliest-colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes (Mount Rainier, Mount St. Helens and Mount Hood) in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless fewer early colonizers were present. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach carries a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest.
A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years). Published in 2006 by John Wiley & Sons, Ltd. [source]

Method of weighted proportion of reproductive-aged women taking folic acid supplements to predict a neural tube defect rate decline
BIRTH DEFECTS RESEARCH, Issue 12, 2003. Quanhe Yang

Abstract: BACKGROUND: Neural tube defect (NTD) rates can be lowered by increased consumption of folic acid (FA) by women before and during early pregnancy. The crude proportion of reproductive-aged women taking FA supplements has been used to predict a decline of the NTD rate in the general population. In this study we examine the potential error in using the crude proportion to predict NTD risk reduction, and offer an alternative method. METHODS: The crude proportion measures the number of women taking FA. It ignores the substantial variability by maternal age in the probability of giving birth. Age-specific fertility rates (ASFRs) reflect the probability that a woman in a specific age group will give birth in a given year. In this study, we show how to calculate a proportion weighted by ASFRs to predict a decline in the NTD rate, and to assess the effectiveness of FA consumption in preventing NTDs. RESULTS: Our results show that a crude proportion of 50% of women (15–49 years old) taking FA is associated with a range of 24–77% in weighted proportions. Assuming a 40% risk reduction from taking 400 µg of FA daily, the expected NTD rate decline could vary from 9.6% to 30.6%, depending on the age distribution of women taking FA.
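As a rough illustration of the calculation behind these results, the sketch below computes ASFR-weighted proportions for two hypothetical uptake patterns that share the same 50% crude proportion. All ASFRs, uptake values, and names here are illustrative assumptions, not the study's data.

```python
# Illustration of the ASFR-weighted proportion described above.
# ASFRs and uptake patterns are hypothetical, chosen only to show the
# effect of the age distribution of FA-taking women.

def weighted_proportion(asfr, uptake):
    """Estimated proportion of births to women taking folic acid (FA):
    each age group's uptake is weighted by its age-specific fertility rate."""
    return sum(asfr[g] * uptake[g] for g in asfr) / sum(asfr.values())

# Hypothetical ASFRs (births per 1000 women per year) by age group.
asfr = {"15-19": 43, "20-24": 104, "25-29": 114, "30-34": 92,
        "35-39": 41, "40-49": 8}

# Two uptake patterns with the same crude proportion (50% of women),
# assuming equal numbers of women in each age group.
uptake_high_fertility = {"15-19": 0.0, "20-24": 1.0, "25-29": 1.0,
                         "30-34": 1.0, "35-39": 0.0, "40-49": 0.0}
uptake_low_fertility = {"15-19": 1.0, "20-24": 0.0, "25-29": 0.0,
                        "30-34": 0.0, "35-39": 1.0, "40-49": 1.0}

RISK_REDUCTION = 0.40  # assumed per-user risk reduction from 400 µg/day

for name, uptake in [("high-fertility uptake", uptake_high_fertility),
                     ("low-fertility uptake", uptake_low_fertility)]:
    wp = weighted_proportion(asfr, uptake)
    print(f"{name}: weighted proportion = {wp:.2f}, "
          f"expected NTD rate decline = {RISK_REDUCTION * wp:.1%}")
```

With these made-up numbers the two scenarios give weighted proportions of roughly 0.77 and 0.23 despite the identical crude proportion, mirroring the kind of spread the study reports.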
CONCLUSIONS: The ASFR-weighted proportion estimates the proportion of babies born to women taking FA, as opposed to the crude proportion of women taking FA. We recommend using the ASFR-weighted proportion to predict an NTD rate decline and measure the success of FA education campaigns. We found that when women in high-fertility age groups increased their FA consumption, the decline in the NTD rate was greater than when women in low-fertility age groups did so. Our findings suggest that the more efficient approach to NTD prevention is to focus on women with a higher probability of giving birth. For example, by focusing on <50% of women of childbearing age (20–34 years), as much as 76% of the maximum NTD rate reduction can be achieved. Birth Defects Research (Part A), 2003. Published 2003 Wiley-Liss, Inc. [source]

Providers Do Not Verify Patient Identity during Computer Order Entry
ACADEMIC EMERGENCY MEDICINE, Issue 7, 2008. Philip L. Henneman MD

Abstract: Introduction: Improving patient identification (ID) by using two identifiers is a Joint Commission safety goal. Appropriate identifiers include name, date of birth (DOB), or medical record number (MRN). Objectives: The objectives were to determine the frequency of verifying patient ID during computerized provider order entry (CPOE). Methods: This was a prospective study using simulated scenarios with an eye-tracking device. Medical providers were asked to review 10 charts (scenarios), select the patient from a computer alphabetical list, and order tests. Two scenarios had embedded ID errors relative to the computer record (an incorrect DOB or a misspelled last name), and a third had a potential error (a second patient on the alphabetical list with the same last name). Providers were not aware that the focus was patient ID. Verifying patient ID was defined as looking at the name and either the DOB or the MRN on the computer.
Results: Twenty-five of 25 providers (100%; 95% confidence interval [CI] = 86% to 100%) selected the correct patient when there was a second patient with the same last name. Two of 25 (8%; 95% CI = 1% to 26%) noted the DOB error; the remaining 23 ordered tests on an incorrect patient. One of 25 (4%; 95% CI = 0% to 20%) noted the last name error; 12 ordered tests on an incorrect patient. No participant (0%, 0/107; 95% CI = 0% to 3%) verified patient ID by looking at the MRN prior to selecting a patient from the alphabetical list. Twenty-three percent (45/200; 95% CI = 17% to 29%) verified patient ID prior to ordering tests. Conclusions: Medical providers often miss ID errors and infrequently verify patient ID with two identifiers during CPOE. [source]

Refining the results of a whole-genome screen based on 4666 microsatellite markers for defining predisposition factors for multiple sclerosis
ELECTROPHORESIS, Issue 14, 2004. René Gödde

Abstract: Multiple sclerosis (MS) is a demyelinating disease of the central nervous system with a complex genetic background. In order to identify loci associated with the disease, we had performed a genome screen initially using 6000 microsatellite markers in pooled DNA samples of 198 MS patients and 198 controls. Here, we report on a detailed reanalysis of this set of data. Distinctive features of microsatellites genotyped in pooled DNA, which can cause false-positive association or mask existing association, were addressed by improved evaluation and refined correction factors in the statistical analyses. In order to assess potential errors introduced by DNA pooling and genotyping, we resurveyed the experiment in a subset of microsatellite markers using de novo-composed DNA pools. True MS associations of markers were verified by genotyping all individual DNA samples contained in the pools. Microsatellites offer characteristically high information content, but they do not lend themselves to automation in very-large-scale formats.
Especially after DNA pooling, many artifacts of individual marker systems require special attention and treatment. Therefore, in the near future, comprehensive whole-genome screens may instead be performed by typing single nucleotide polymorphisms on chip-based platforms. [source]

An Epidemiologic Study of Closed Emergency Department Malpractice Claims in a National Database of Physician Malpractice Insurers
ACADEMIC EMERGENCY MEDICINE, Issue 5, 2010. Terrence W. Brown MD

Abstract: Objectives: The objective was to perform an epidemiologic study of emergency department (ED) medical malpractice claims using data maintained by the Physician Insurers Association of America (PIAA), a trade association whose participating malpractice insurance carriers collectively insure over 60% of practicing physicians in the United States. Methods: All closed malpractice claims in the PIAA database between 1985 and 2007, where an event in an ED was alleged to have caused injury to a patient 18 years of age or older, were retrospectively reviewed. Study outcomes were the frequency of claims and average indemnity payments associated with specific errors identified by the malpractice insurer, as well as associated health conditions, primary specialty groups, and injury severity. Indemnity payments include money paid to claimants as a result of settlement or court adjudication, and this financial obligation to compensate a claimant constitutes the insured's financial liability. These payments do not include the expenses associated with resolving a claim, such as attorneys' fees. The study examined claims by adjudicatory outcome, associated financial liability, and expenses of litigation. Adjudicatory outcome refers to the legal disposition of a claim as it makes its way into and through the court system, and includes resolution of claims by formal verdict as well as by settlement.
The study also investigated how the number of claims, average indemnity payments, paid-to-close ratios (the percentage of closed claims that resolved with a payment to the plaintiff), and litigation expenses trended over the 23-year study period. Results: The authors identified 11,529 claims arising from an event originating in an ED, representing over $664 million in total liability over the 23-year study period. Emergency physicians (EPs) were the primary defendants in 19% of ED claims. The largest sources of error, as identified by the individual malpractice insurer, were errors in diagnosis (37%), followed by improper performance of a procedure (17%). In 18% of claims, no error could be identified by the insurer. Acute myocardial infarction (AMI; 5%), fractures (6%), and appendicitis (2%) were the health conditions associated with the highest numbers of claims. Over two-thirds of claims (70%) closed without payment to the claimant. Most claims that paid out did so through settlement (29%). Only 7% of claims were resolved by verdict, and 85% of those were in favor of the clinician. Over time, the average indemnity payments and expenses of litigation, adjusted for inflation, more than doubled, while both the total number of claims and the number of paid claims decreased. Conclusions: Emergency physicians were the primary defendants in a relatively small proportion of ED claims. The disease processes associated with the highest numbers of claims included AMI, appendicitis, and fractures. The largest share of overall indemnity was attributed to errors in the diagnostic process. The financial liability of medical malpractice in the ED is substantial, yet the vast majority of claims resolve in favor of the clinician. Efforts to mitigate risk in the ED should include the diverse clinical specialties who work in this complex environment, with attention to those health conditions and potential errors with the highest risk.
ACADEMIC EMERGENCY MEDICINE 2010; 17:553–560. © 2010 by the Society for Academic Emergency Medicine [source]

Spatial variations in throughfall in a Moso bamboo forest: sampling design for the estimates of stand-scale throughfall
HYDROLOGICAL PROCESSES, Issue 3, 2010. Yoshinori Shinohara

Abstract: We investigated the spatial and seasonal variations in throughfall (Tf) in relation to spatial and seasonal variations in canopy structure and gross rainfall (Rf), and assessed the impacts of the variations in Tf on stand-scale Tf estimates. We observed the canopy structure, expressed as the leaf area index (LAI), once a month and Tf once a week in 25 grids placed in a Moso bamboo (Phyllostachys pubescens) forest for 1 year. The mean LAI and the spatial variation in LAI showed some seasonal variation. The spatial variation in Tf decreased with increasing Rf, and the relationship between the spatial variation and Rf held throughout the year. These results indicate that the seasonal change in LAI had little impact on spatial variations in Tf, and that Rf is a critical factor determining the spatial variations in Tf at the study site. We evaluated potential errors in stand-scale Tf estimates on the basis of measured Tf data using Monte Carlo sampling. The results showed that the error decreases greatly with increasing sample size when the sample size was less than about 8, whereas it was nearly stable when the sample size was 8 or more, regardless of Rf. A sample size of eight results in less than 10% error for Tf estimates based on Student's t-value analysis and would be satisfactory for interception loss estimates when considering errors included in Rf data. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Nonlinear adaptive tracking-control synthesis for functionally uncertain systems
INTERNATIONAL JOURNAL OF ADAPTIVE CONTROL AND SIGNAL PROCESSING, Issue 2, 2010. Zenon Zwierzewicz

Abstract: The paper is concerned with the problem of adaptive tracking system control synthesis.
It is assumed that the nonlinear, feedback-linearizable plant dynamics (model structure) are partially unknown, and that some of the nonlinear characteristics can be approximated by functional approximators. It is proven that proportional state feedback plus parameter adaptation can assure asymptotic stability. This form of controller permits online compensation of unknown model nonlinearities and exogenous disturbances, which results in satisfactory tracking performance. An interesting feature of the system is that the whole process control is performed without requiring asymptotic convergence of the approximator parameters to the postulated 'true' values. It has been noticed that the parameters play rather the role of slack variables on which potential errors (that otherwise would affect the state variables) accumulate. The system's performance has been tested in Matlab/Simulink simulations of a ship path-following problem. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Improving TCP performance over networks with wireless components using 'probing devices'
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS, Issue 6, 2002. A. Lahanas

Abstract: TCP's error control mechanism lacks the ability to detect with precision the nature of potential errors during communication. It is only capable of detecting the results of the errors, namely that segments are dropped. As a result, the protocol lacks the ability to implement an error recovery strategy cognizant of current network conditions and responsive to the distinctive error characteristics of the communication channel: the TCP sender always calls for the sending window to shrink. We show that probing mechanisms could enhance the error detection capabilities of the protocol.
TCP could then flexibly adjust its window in a manner that permits the available bandwidth to be exploited without violating the requirements of stability, efficiency and fairness that need to be guaranteed during congestion. Our experiments have three distinct goals. First, to demonstrate the potential contribution of probing mechanisms: a simple probing mechanism and an immediate recovery strategy are grafted into TCP-Tahoe and TCP-Reno, and we show that, this way, standard TCP can improve its performance without requiring any further change. Second, to study the performance of adaptive strategies: an adaptive TCP with probing is used that responds to the detected error conditions by alternating slow start, fast recovery, and immediate recovery; such an adaptive error recovery strategy can yield better performance. Third, to study the design limitations of the probing device itself: the aggressive or conservative nature of the probing mechanisms themselves can determine the aggressive or conservative behaviour of the protocol, and exploit the energy/throughput trade-off accordingly. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Misstatement Direction, Litigation Risk, and Planned Audit Investment
JOURNAL OF ACCOUNTING RESEARCH, Issue 3, 2001. Orie Barron

This study reports the results of an experiment showing that auditor assessments of litigation risk and planned audit investments are higher when potential errors overstate financial performance than when those errors understate performance. This result is much stronger in the presence of high levels of litigation risk in the client's industry. These results suggest that in industries where litigation risk is high, audited financial statements may contain more unintentional material understatement errors than overstatement errors.
Thus, litigation risk, through its effect on auditors, may encourage financial statements that understate firm performance. [source]

Personal Identification Using the Frontal Sinus
JOURNAL OF FORENSIC SCIENCES, Issue 3, 2010. Joanna L. Besana M.Sc.

Abstract: The frontal sinuses are known to be unique to each individual; however, no one has tested the independence of the frontal sinus traits to see if probability analysis through trait combination is a viable method of identifying an individual using the frontal sinuses. This research examines the feasibility of probability trait combination, based on criteria recommended in the literature, and examines two other methods of identification using the frontal sinuses: discrete trait combinations and superimposition pattern matching. This research finds that most sinus traits are dependent upon one another and thus cannot be used in probability combinations. When looking at traits that are independent, this research finds that metric methods are too fraught with potential errors to be useful. Discrete trait combinations do not have a high enough discriminating power to be useful. Only superimposition pattern matching is an effective method of identifying an individual using the frontal sinuses. [source]

Use of tissue water as a concentration reference for proton spectroscopic imaging
MAGNETIC RESONANCE IN MEDICINE, Issue 6, 2006. Charles Gasparovic

Abstract: A strategy for using tissue water as a concentration standard in 1H magnetic resonance spectroscopic imaging studies on the brain is presented, and the potential errors that may arise when the method is used are examined. The sensitivity of the method to errors in estimates of the different water compartment relaxation times is shown to be small at short echo times (TEs).
Using data from healthy human subjects, it is shown that different image segmentation approaches that are commonly used to account for partial volume effects (SPM2, FSL's FAST, and K-means) lead to different estimates of metabolite levels, particularly in gray matter (GM), owing primarily to variability in the estimates of the cerebrospinal fluid (CSF) fraction. While consistency does not necessarily validate a method, a multispectral segmentation approach using FAST yielded the lowest intersubject variability in the estimates of GM metabolites. The mean GM and white matter (WM) levels of N-acetyl groups (NAc, primarily N-acetylaspartate), choline (Ch), and creatine (Cr) obtained in these subjects using the described method with FAST multispectral segmentation are reported: GM [NAc] = 17.16 ± 1.19 mM; WM [NAc] = 14.26 ± 1.38 mM; GM [Ch] = 3.27 ± 0.47 mM; WM [Ch] = 2.65 ± 0.25 mM; GM [Cr] = 13.98 ± 1.20 mM; and WM [Cr] = 7.10 ± 0.67 mM. Magn Reson Med, 2006. © 2006 Wiley-Liss, Inc. [source]

Design and statistical analysis of oral medicine studies: common pitfalls
ORAL DISEASES, Issue 3, 2010. L. Baccaglini. Oral Diseases (2010) 16, 233–241

A growing number of articles are emerging in the medical and statistics literature that describe epidemiologic and statistical flaws of research studies. Many examples of these deficiencies are encountered in the oral, craniofacial, and dental literature. However, only a handful of methodologic articles have been published in the oral literature warning investigators of potential errors that may arise early in the study and that can irreparably bias the final results. In this study, we briefly review some of the most common pitfalls that our team of epidemiologists and statisticians has identified during the review of submitted or published manuscripts and research grant applications.
We use practical examples from the oral medicine and dental literature to illustrate potential shortcomings in the design and analysis of research studies, and how these deficiencies may affect the results and their interpretation. A good study design is essential, because errors in the analysis can be corrected if the design was sound, but flaws in study design can lead to data that are not salvageable. We recommend consultation with an epidemiologist or a statistician during the planning phase of a research study to optimize study efficiency, minimize potential sources of bias, and document the analytic plan. [source]

Pretreatment technique for siderite removal for organic carbon isotope and C:N ratio analysis in geological samples
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 6, 2008. Toti E. Larson

Abstract: A method for the removal of siderite from geological samples to determine organic carbon isotope compositions using elemental analysis isotope ratio mass spectrometry is presented, which includes calculations for percent organic carbon in samples that contain diagenetic carbonate. The proposed method employs in situ acidification of geological samples with 6 N HCl and silver capsule sample holders, and was tested on modern peach leaf samples (NIST 1547) and ancient lacustrine samples from Valles Caldera, New Mexico. The in situ acidification technique eliminates potential errors associated with the removal of soluble organic material using standard acid decanting techniques, and allows for removal of the less soluble siderite, which is not efficiently removed using vapor acidification techniques. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Teledermatology: Influence of zoning and education on a clinician's ability to observe peripheral lesions
AUSTRALASIAN JOURNAL OF DERMATOLOGY, Issue 3, 2002. Keng Chen

SUMMARY: Teledermatology can benefit rural and remote communities, where specialist dermatological services may not be readily available.
Regarding store-and-forward teledermatology, we hypothesized that the site of a lesion in an image (zoning) may influence a clinician's ability to observe target lesions, and that education on image viewing may improve use of this technology. We examined this by conducting both pre- and post-education studies. The education on image viewing consisted of a presentation on the outcome of the first study's survey on image viewing. The first study demonstrated that zoning influences a clinician's visual attention and that significant, concurrent lesions in the periphery may be missed. The second study demonstrated that brief education could produce a measurable change in observing peripheral lesions. These findings have medico-legal implications and suggest that further education in the use of such technology is necessary in order to optimize patient care and prevent potential errors. [source]

Towards the development of a transferable set of value estimates for environmental attributes
AUSTRALIAN JOURNAL OF AGRICULTURAL & RESOURCE ECONOMICS, Issue 1, 2004. Martin Van Bueren

Estimates of environmental values are frequently required as inputs to cost-benefit analyses when evaluating alternative options for managing natural resources. One strategy to avoid the high cost of conducting empirical work when non-market values are involved is to take value estimates from an existing source study and transfer them to the target context of interest (a practice known as benefit transfer). However, the transfer of values is subject to a host of potential errors and could lead to significant overestimation or underestimation of welfare change. The present paper reports the results of a choice modelling study in which household values for the impacts of land and water degradation in Australia are estimated. A key objective of the present study was to test the validity of transferring estimates derived in a national context to different regional contexts.
On the basis of these test results, inferences are made about the impact that differing contexts have on value estimates. The scale of value differences across the different contexts provides a guide for calibrating benefit transfer estimates. [source]

Potential medication dosing errors in outpatient pediatrics
CHILD: CARE, HEALTH AND DEVELOPMENT, Issue 3, 2006. Robert M. Jacobson MD

Potential medication dosing errors in outpatient pediatrics. McPhillips H.A., Stille C.J., Smith D., Hecht J., Pearson J., Stull J., Debellis K.P., Andrade S., Miller M., Kaushal R., Gurwitz J. & Davis R.L. (2005) Journal of Pediatrics, 147, 761–767.

Objective: To determine the prevalence of potential dosing errors of medication dispensed to children for 22 common medications. Study design: The investigators used automated pharmacy data from three health maintenance organizations. Children were eligible if they were less than 17 years old at the time of dispensing. The investigators randomly selected up to 120 children with a newly dispensed prescription for each drug of interest, giving 1933 study subjects. Errors were defined as potential overdoses or potential under-doses. The error rate in two health maintenance organizations that use paper prescriptions was compared with that in one health maintenance organization that uses an electronic prescription writer. Results: Approximately 15% of children were dispensed a medication with a potential dosing error: 8% were potential overdoses and 7% were potential under-doses. Among children weighing less than 35 kg, only 67% of doses were dispensed within recommended dosing ranges, and more than 1% were dispensed at more than twice the recommended maximum dose. Analgesics were most likely to be potentially overdosed (15%), whereas anti-epileptics were most likely to be potentially under-dosed (20%). Potential error rates were not lower at the site with an electronic prescription writer.
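The weight-based range check that defines these potential dosing errors can be sketched as follows. The drug, the doses, and the 10–40 mg/kg/day limits below are hypothetical placeholders chosen for illustration, not clinical recommendations or the study's actual thresholds.

```python
# Sketch of a weight-based dose-range check of the kind used to flag
# potential dosing errors. All numeric limits here are hypothetical.

def classify_dose(daily_dose_mg, weight_kg, min_mg_per_kg, max_mg_per_kg):
    """Classify a dispensed daily dose against a recommended mg/kg/day range."""
    dose_per_kg = daily_dose_mg / weight_kg
    if dose_per_kg > max_mg_per_kg:
        return "potential overdose"
    if dose_per_kg < min_mg_per_kg:
        return "potential under-dose"
    return "within range"

# Hypothetical drug with a recommended range of 10-40 mg/kg/day,
# dispensed to a 12 kg child:
print(classify_dose(600, 12, 10, 40))  # 50 mg/kg/day
print(classify_dose(100, 12, 10, 40))  # ~8.3 mg/kg/day
print(classify_dose(300, 12, 10, 40))  # 25 mg/kg/day
```

In the study's terms, a dispensed dose above the maximum of the range counts as a potential overdose and one below the minimum as a potential under-dose; everything else falls within the recommended range.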
Conclusions: Potential medication dosing errors occur frequently in outpatient paediatrics. Studies on the clinical impact of these potential errors and effective error prevention strategies are needed. [source]

EFFICIENT MARKOV NETWORK DISCOVERY USING PARTICLE FILTERS
COMPUTATIONAL INTELLIGENCE, Issue 4, 2009. Dimitris Margaritis

In this paper, we introduce an efficient independence-based algorithm for the induction of the Markov network (MN) structure of a domain from the outcomes of independence tests conducted on data. Our algorithm utilizes a particle filter (sequential Monte Carlo) method to maintain a population of MN structures that represent the posterior probability distribution over structures, given the outcomes of the tests performed. This enables us to select, at each step, the maximally informative test to conduct next from a pool of candidates according to information gain, which minimizes the cost of the statistical tests conducted on data. This makes our approach useful in domains where independence tests are expensive, such as cases of very large data sets and/or distributed data. In addition, our method maintains multiple candidate structures weighted by posterior probability, which allows flexibility in the presence of potential errors in the test outcomes. [source]
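The test-selection step described in the last abstract can be illustrated with a toy sketch (not the authors' implementation). Under a simplified model in which each candidate structure deterministically predicts a test's outcome, the expected information gain of a test equals the entropy of its predicted-outcome distribution over the weighted particles. The structures, weights, and outcome model below are all illustrative assumptions.

```python
import math

# Toy sketch: maintain a weighted population of candidate Markov network
# structures (particles) and pick the independence test whose outcome is
# most informative under the current posterior. Structures are edge sets
# over three variables; for this sketch, the test "is X independent of Y?"
# is assumed to come out "dependent" iff the edge X-Y is present.

def entropy(ps):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

def expected_information_gain(particles, test_edge):
    """With a deterministic outcome model, the expected gain equals the
    entropy of the predicted-outcome distribution over the particles."""
    total = sum(w for _, w in particles)
    p_dep = sum(w for edges, w in particles if test_edge in edges) / total
    return entropy([p_dep, 1 - p_dep])

# Particles: (edge set, posterior weight). Weights are illustrative.
particles = [
    (frozenset({("A", "B"), ("B", "C")}), 0.5),
    (frozenset({("A", "B")}), 0.3),
    (frozenset({("A", "C")}), 0.2),
]

candidate_tests = [("A", "B"), ("B", "C"), ("A", "C")]
best = max(candidate_tests,
           key=lambda t: expected_information_gain(particles, t))
print("most informative test:", best)
```

Here the B-C test splits the posterior mass 0.5/0.5 and therefore carries a full bit of expected information, so it would be conducted first; after observing its outcome, the weights would be updated and the selection repeated.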