Measurement Error (measurement + error)
Terms modified by Measurement Error
Selected Abstracts

MEASUREMENT ERROR IN RESEARCH ON HUMAN RESOURCES AND FIRM PERFORMANCE: ADDITIONAL DATA AND SUGGESTIONS FOR FUTURE RESEARCH
PERSONNEL PSYCHOLOGY, Issue 4 2001. PATRICK M. WRIGHT
Gerhart and colleagues (2000) and Huselid and Becker (2000) recently debated the presence and implications of measurement error in measures of human resource practices. This paper presents data from 3 more studies: 1 of large organizations from different industries at the corporate level, 1 from commercial banks, and the other of autonomous business units at the level of the job. Results of all 3 studies provide additional evidence that single-respondent measures of HR practices contain large amounts of measurement error. Implications for future research into the HR-firm performance relationship are discussed. [source]

COMMENT ON "MEASUREMENT ERROR IN RESEARCH ON HUMAN RESOURCES AND FIRM PERFORMANCE: HOW MUCH ERROR IS THERE AND HOW DOES IT INFLUENCE EFFECT SIZE ESTIMATES?" BY GERHART, WRIGHT, McMAHAN, AND SNELL
PERSONNEL PSYCHOLOGY, Issue 4 2000
First page of article. [source]

MEASUREMENT ERROR IN RESEARCH ON THE HUMAN RESOURCES AND FIRM PERFORMANCE RELATIONSHIP: FURTHER EVIDENCE AND ANALYSIS
PERSONNEL PSYCHOLOGY, Issue 4 2000. BARRY GERHART
Our earlier article in Personnel Psychology demonstrated how generalizability theory could be used to obtain improved reliability estimates in the human resource (HR) and firm performance literature, and that correcting for unreliability using these estimates had important implications for the magnitude of the HR and firm performance relationship. In their comment, Huselid and Becker raise both criticisms specific to our study and broad issues for the field to consider. In our present article, we argue, using empirical evidence whenever possible, that the issues and criticisms raised by Huselid and Becker do not change our original conclusions. We also provide new evidence on how the reliability of HR-related measures may differ at different levels of analysis. Finally, we build on Huselid and Becker's helpful discussion of broad research design and strategy issues in the HR and firm performance literature in an effort to help researchers make better informed choices regarding their own research designs and strategies in the area. [source]

Estimation of Nonlinear Models with Measurement Error
ECONOMETRICA, Issue 1 2004. Susanne M. Schennach
This paper presents a solution to an important econometric problem, namely the root-n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root-n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: the Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root-n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach. [source]
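The mechanics behind this abstract are easiest to see in the linear special case. The display below is an illustration under assumed notation (W_1, W_2 for the two replicates of the latent regressor X), not Schennach's general nonlinear construction: independence turns convolution into multiplication of characteristic functions (the Fourier-transform remark), and cross moments of the replicates are free of error variance.

```latex
% Linear errors-in-variables sketch (assumed notation, illustration only).
\begin{align*}
  W_1 &= X + \varepsilon_1, \quad W_2 = X + \varepsilon_2,
      \quad X,\ \varepsilon_1,\ \varepsilon_2 \text{ mutually independent},
      \ E[\varepsilon_j] = 0, \\
  \varphi_{W_1}(t) &= \varphi_X(t)\,\varphi_{\varepsilon_1}(t)
      && \text{(convolution becomes multiplication)}, \\
  E[W_1 W_2] &= E[X^2]
      && \text{(cross moments are error-free)}, \\
  \beta &= \frac{E[Y W_2]}{E[W_1 W_2]} \quad \text{for } Y = \beta X + u,
      && \text{while } \operatorname{plim}\hat{\beta}_{\mathrm{OLS}}
      = \beta\,\frac{\sigma_X^2}{\sigma_X^2 + \sigma_{\varepsilon_1}^2}.
\end{align*}
```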
Effects of Measurement Error on Horizontal Hydraulic Gradient Estimates
GROUND WATER, Issue 1 2007. J.F. Devlin
During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on a ~500 m² site. Additional wells were installed to increase the monitored area to 26,500 m², and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10⁻² m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10⁻⁴ ± 25% with a flow direction of 56° southeast ± 18°, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of the number of wells, the aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. [source]
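The Monte Carlo procedure described above is simple to reproduce. Below is a minimal sketch assuming a hypothetical three-well layout and mean heads; only the ±1.3 × 10⁻² m head precision comes from the abstract. With three wells, the head surface is fitted exactly as a plane, and the perturbed solutions give the spread of the gradient magnitude and flow direction.

```python
# Monte Carlo propagation of head-measurement error through a planar
# (three-well) hydraulic gradient estimate, in the spirit of Devlin (2007).
# Well coordinates and mean heads are hypothetical; sigma is from the abstract.
import numpy as np

rng = np.random.default_rng(0)
xy = np.array([[0.0, 0.0], [150.0, 10.0], [60.0, 140.0]])  # well positions (m)
heads = np.array([10.000, 9.940, 9.955])                   # mean heads (m)
sigma = 1.3e-2                                             # head precision (m)

A = np.column_stack([np.ones(3), xy])  # design matrix for h = a + b*x + c*y
grads, angles = [], []
for _ in range(10_000):
    h = heads + rng.normal(0.0, sigma, size=3)     # one realization of the heads
    a, b, c = np.linalg.solve(A, h)                # exact planar fit
    grads.append(np.hypot(b, c))                   # gradient magnitude
    angles.append(np.degrees(np.arctan2(-c, -b)))  # down-gradient flow direction

print(f"gradient  = {np.mean(grads):.2e} +/- {np.std(grads):.1e} (1 sd)")
print(f"direction = {np.mean(angles):.1f} +/- {np.std(angles):.1f} degrees")
```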
Measurement Error in Nonlinear Models: A Modern Perspective
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2008. Andrew W. Roddam
No abstract is available for this article. [source]

Measurement Error and Incentive Pay
LABOUR, Issue 1 2005. Eero Lauri Oskari Lehto
Each agent produces an individual contribution, and these contributions jointly form a total output. Agents' efforts are unobservable, and the principal cannot observe individual outputs without error. Neither the observed individual output of an agent nor the observed total output of the whole team is then a sufficient statistic for the actual individual output in the sense of Blackwell. We show that a mixed contract combining the pure piece-rate contract and the pure team contract then dominates the pure contracts from the principal's point of view. [source]

Causal Inference with Differential Measurement Error: Nonparametric Identification and Sensitivity Analysis
AMERICAN JOURNAL OF POLITICAL SCIENCE, Issue 2 2010. Kosuke Imai
Political scientists have long been concerned about the validity of survey measurements. Although many have studied classical measurement error in linear regression models, where the error is assumed to arise completely at random, in a number of situations the error may be correlated with the outcome. We analyze the impact of differential measurement error on causal estimation. The proposed nonparametric identification analysis avoids arbitrary modeling decisions and formally characterizes the roles of different assumptions. We show the serious consequences of differential misclassification and offer a new sensitivity analysis that allows researchers to evaluate the robustness of their conclusions. Our methods are motivated by a field experiment on democratic deliberations, in which one set of estimates potentially suffers from differential misclassification. We show that an analysis ignoring differential measurement error may considerably overestimate the causal effects. This finding contrasts with the case of classical measurement error, which always yields attenuation bias. [source]
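The contrast drawn here between differential and classical error can be made concrete with a misclassified binary outcome. In the display below, all symbols and numbers are illustrative assumptions (not from the deliberation experiment): p_t is the true outcome rate under treatment t, and f_t and g_t are the false-positive and false-negative rates of the observed outcome Y*.

```latex
% Misclassified binary outcome: observed rate under treatment t.
\begin{align*}
  P(Y^* = 1 \mid T = t) &= (1 - g_t)\,p_t + f_t\,(1 - p_t). \\
  \text{Nondifferential } (f_t \equiv f,\ g_t \equiv g):&\quad
    \Delta^* = (1 - f - g)(p_1 - p_0) \quad \text{(attenuation).} \\
  \text{Differential, e.g. } p_1 = 0.6,\ p_0 = 0.5,&\
    f_1 = 0.1,\ g_1 = 0,\ f_0 = 0,\ g_0 = 0.1: \\
  \Delta^* = 0.64 - 0.45 = 0.19 &> p_1 - p_0 = 0.10 \quad \text{(overestimation).}
\end{align*}
```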
Analysis of Misclassified Correlated Binary Data Using a Multivariate Probit Model when Covariates are Subject to Measurement Error
BIOMETRICAL JOURNAL, Issue 3 2009. Surupa Roy
Abstract: A multivariate probit model for correlated binary responses given the predictors of interest has been considered. Some of the responses are subject to classification errors and hence are not directly observable. Also, measurements on some of the predictors are not available; instead measurements on a surrogate are available. However, the conditional distribution of the unobservable predictors given the surrogate is completely specified. Models are proposed taking into account either or both of these sources of error. Likelihood-based methodologies are proposed to fit these models. To ascertain the effect of ignoring classification errors and/or measurement error on the estimates of the regression and correlation parameters, a sensitivity study is carried out through simulation. Finally, the proposed methodology is illustrated through an example. [source]

Semiparametric Bayesian Analysis of Nutritional Epidemiology Data in the Presence of Measurement Error
BIOMETRICS, Issue 2 2010. Samiran Sinha
Summary: We propose a semiparametric Bayesian method for handling measurement error in nutritional epidemiological data. Our goal is to estimate nonparametrically the form of association between a disease and an exposure variable while the true values of the exposure are never observed. Motivated by nutritional epidemiological data, we consider the setting where a surrogate covariate is recorded in the primary data, and a calibration data set contains information on the surrogate variable and repeated measurements of an unbiased instrumental variable of the true exposure. We develop a flexible Bayesian method where not only is the relationship between the disease and exposure variable treated semiparametrically, but also the relationship between the surrogate and the true exposure is modeled semiparametrically. The two nonparametric functions are modeled simultaneously via B-splines. In addition, we model the distribution of the exposure variable as a Dirichlet process mixture of normal distributions, thus making its modeling essentially nonparametric and placing this work into the context of functional measurement error modeling. We apply our method to the NIH-AARP Diet and Health Study and examine its performance in a simulation study. [source]

Modeling Data with Excess Zeros and Measurement Error: Application to Evaluating Relationships between Episodically Consumed Foods and Health Outcomes
BIOMETRICS, Issue 4 2009. Victor Kipnis
Summary: Dietary assessment of episodically consumed foods gives rise to nonnegative data that have excess zeros and measurement error. Tooze et al. (2006, Journal of the American Dietetic Association 106, 1575-1587) describe a general statistical approach (National Cancer Institute method) for modeling such food intakes reported on two or more 24-hour recalls (24HRs) and demonstrate its use to estimate the distribution of the food's usual intake in the general population. In this article, we propose an extension of this method to predict individual usual intake of such foods and to evaluate the relationships of usual intakes with health outcomes. Following the regression calibration approach for measurement error correction, individual usual intake is generally predicted as the conditional mean intake given 24HR-reported intake and other covariates in the health model. One feature of the proposed method is that additional covariates potentially related to usual intake may be used to increase the precision of estimates of usual intake and of diet-health outcome associations. Applying the method to data from the Eating at America's Table Study, we quantify the increased precision obtained from including reported frequency of intake on a food frequency questionnaire (FFQ) as a covariate in the calibration model. We then demonstrate the method in evaluating the linear relationship between log blood mercury levels and fish intake in women by using data from the National Health and Nutrition Examination Survey, and show increased precision when including the FFQ information. Finally, we present simulation results evaluating the performance of the proposed method in this context. [source]

Ratio Estimation with Measurement Error in the Auxiliary Variate
BIOMETRICS, Issue 2 2009. Timothy G. Gregoire
Summary: With auxiliary information that is well correlated with the primary variable of interest, ratio estimation of the finite population total may be much more efficient than alternative estimators that do not make use of the auxiliary variate. The well-known properties of ratio estimators are perturbed when the auxiliary variate is measured with error. In this contribution we examine the effect of measurement error in the auxiliary variate on the design-based statistical properties of three common ratio estimators. We examine the case of systematic measurement error as well as measurement error that varies according to a fixed distribution. Aside from presenting expressions for the bias and variance of these estimators when they are contaminated with measurement error, we provide numerical results based on a specific population. Under systematic measurement error, the biasing effect is asymmetric around zero, and precision may be improved or degraded depending on the magnitude of the error. Under variable measurement error, the bias of the conventional ratio-of-means estimator increased slightly with increasing error dispersion, but far less than the increased bias of the conventional mean-of-ratios estimator. In similar fashion, the variance of the mean-of-ratios estimator incurs a greater loss of precision with increasing error dispersion compared with the other estimators we examine. Overall, the ratio-of-means estimator appears to be remarkably resistant to the effects of measurement error in the auxiliary variate. [source]
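The behavior described above for variable measurement error is easy to check on a toy population. The sketch below is an illustration under assumed population parameters and error scale, not the authors' population: zero-mean error is added to the sampled auxiliary values only, and the two conventional estimators of the total are compared.

```python
# Toy comparison of ratio-of-means vs mean-of-ratios estimators of a
# population total when the auxiliary variate is measured with error
# (cf. Gregoire, Biometrics 2009). All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, n = 2000, 50
x_true = rng.uniform(15.0, 50.0, N)            # auxiliary variate
y = 2.5 * x_true + rng.normal(0.0, 4.0, N)     # variable of interest
X_total = x_true.sum()                         # auxiliary total, assumed known
true_total = y.sum()

rom, mor = [], []
for _ in range(5_000):
    idx = rng.choice(N, n, replace=False)
    x_obs = x_true[idx] + rng.normal(0.0, 2.0, n)       # error in the sample only
    rom.append(y[idx].mean() / x_obs.mean() * X_total)  # ratio-of-means
    mor.append(np.mean(y[idx] / x_obs) * X_total)       # mean-of-ratios

for name, est in (("ratio-of-means", rom), ("mean-of-ratios", mor)):
    e = np.asarray(est)
    print(f"{name}: bias = {e.mean() - true_total:+8.1f}, sd = {e.std():8.1f}")
```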
Measurement Error in a Random Walk Model with Applications to Population Dynamics
BIOMETRICS, Issue 4 2006. John Staudenmayer
Summary: Population abundances are rarely, if ever, known. Instead, they are estimated with some amount of uncertainty. The resulting measurement error has consequences for subsequent analyses that model population dynamics and estimate probabilities about abundances at future points in time. This article addresses some outstanding questions on the consequences of measurement error in one such dynamic model, the random walk with drift model, and proposes some new ways to correct for measurement error. We present a broad and realistic class of measurement error models that allows both heteroskedasticity and possible correlation in the measurement errors, and we provide analytical results about the biases of estimators that ignore the measurement error. Our new estimators include both method of moments estimators and "pseudo"-estimators that proceed from both observed estimates of population abundance and estimates of parameters in the measurement error model. We derive the asymptotic properties of our methods and existing methods, and we compare their finite-sample performance with a simulation experiment. We also examine the practical implications of the methods by using them to analyze two existing population dynamics data sets. [source]
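One of the biases alluded to above can be verified in a few lines. In the sketch below (parameter values are assumptions, not from the paper), the variance of observed log growth rates overstates the process variance by twice the measurement-error variance, while their lag-1 autocovariance equals minus that variance, so a method-of-moments style correction falls out directly.

```python
# Random walk with drift observed with measurement error: naive variance
# inflation and a lag-1 autocovariance correction (illustration only;
# parameter values are assumed).
import numpy as np

rng = np.random.default_rng(2)
T, mu, s_proc, s_meas = 5000, 0.02, 0.10, 0.15
x = np.cumsum(rng.normal(mu, s_proc, T))   # true log-abundance random walk
y = x + rng.normal(0.0, s_meas, T)         # observed with measurement error

d = np.diff(y)                             # observed annual log growth rates
gamma0 = d.var()                           # estimates s_proc^2 + 2*s_meas^2
dc = d - d.mean()
gamma1 = np.mean(dc[:-1] * dc[1:])         # estimates -s_meas^2

print(f"naive process variance : {gamma0:.4f} (true {s_proc**2:.4f})")
print(f"corrected via lag-1    : {gamma0 + 2 * gamma1:.4f}")
print(f"implied measurement var: {-gamma1:.4f} (true {s_meas**2:.4f})")
```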
Modeling Human Fertility in the Presence of Measurement Error
BIOMETRICS, Issue 1 2000. David B. Dunson
Summary: The probability of conception in a given menstrual cycle is closely related to the timing of intercourse relative to ovulation. Although commonly used markers of the time of ovulation are known to be error prone, most fertility models assume the day of ovulation is measured without error. We develop a mixture model that allows the day to be misspecified. We assume that the measurement errors are i.i.d. across menstrual cycles. Heterogeneity among couples in the per-cycle likelihood of conception is accounted for using a beta mixture model. Bayesian estimation is straightforward using Markov chain Monte Carlo techniques. The methods are applied to a prospective study of couples at risk of pregnancy. In the absence of validation data or multiple independent markers of ovulation, the identifiability of the measurement error distribution depends on the assumed model. Thus, the results of studies relating the timing of intercourse to the probability of conception should be interpreted cautiously. [source]

Joint Inference on HIV Viral Dynamics and Immune Suppression in Presence of Measurement Errors
BIOMETRICS, Issue 2 2010. L. Wu
Summary: In an attempt to provide a tool to assess antiretroviral therapy and to monitor disease progression, this article studies the association of human immunodeficiency virus (HIV) viral suppression and immune restoration. The data from a recent acquired immune deficiency syndrome (AIDS) study are used for illustration. We jointly model HIV viral dynamics and time to decrease in the CD4/CD8 ratio in the presence of a CD4 process with measurement errors, and estimate the model parameters simultaneously via a method based on a Laplace approximation and the commonly used Monte Carlo EM algorithm. The approaches and many of the points presented apply generally. [source]

Haplotype-Based Regression Analysis and Inference of Case-Control Studies with Unphased Genotypes and Measurement Errors in Environmental Exposures
BIOMETRICS, Issue 3 2008. Iryna Lobach
Summary: It is widely believed that risks of many complex diseases are determined by genetic susceptibilities, environmental exposures, and their interaction. Chatterjee and Carroll (2005, Biometrika 92, 399-418) developed an efficient retrospective maximum-likelihood method for analysis of case-control studies that exploits an assumption of gene-environment independence and leaves the distribution of the environmental covariates completely nonparametric. Spinka, Carroll, and Chatterjee (2005, Genetic Epidemiology 29, 108-127) extended this approach to studies where certain types of genetic information, such as haplotype phases, may be missing on some subjects. We further extend this approach to situations where some of the environmental exposures are measured with error. Using a polychotomous logistic regression model, we allow disease status to have K + 1 levels. We propose use of a pseudolikelihood and a related EM algorithm for parameter estimation. We prove consistency and derive the resulting asymptotic covariance matrix of parameter estimates when the variance of the measurement error is known and when it is estimated using replications. Inference with measurement error corrections is complicated by the fact that the Wald test often behaves poorly in the presence of large amounts of measurement error. Likelihood-ratio (LR) techniques are known to be a good alternative. However, LR tests are not technically correct in this setting because the likelihood function is based on an incorrect model, i.e., a prospective model in a retrospective sampling scheme. We corrected standard asymptotic results to account for the fact that the LR test is based on a likelihood-type function. The performance of the proposed method is illustrated using simulation studies emphasizing the case when genetic information is in the form of haplotypes and missing data arise from haplotype-phase ambiguity. An application of our method is illustrated using a population-based case-control study of the association between calcium intake and the risk of colorectal adenoma. [source]

Measurement error: implications for diagnosis and discrepancy models of developmental dyslexia
DYSLEXIA, Issue 3 2005. Sue M. Cotton
Abstract: The diagnosis of developmental dyslexia (DD) is reliant on a discrepancy between intellectual functioning and reading achievement. Discrepancy-based formulae have frequently been employed to establish the significance of the difference between 'intelligence' and 'actual' reading achievement. These formulae, however, often fail to take into consideration test reliability and the error associated with a single test score. This paper provides an illustration of the potential effects that test reliability and measurement error can have on the diagnosis of dyslexia, with particular reference to discrepancy models. The roles of reliability and the standard error of measurement (SEM) in classic test theory are also briefly reviewed. This is followed by illustrations of how SEM and test reliability can aid the interpretation of a simple discrepancy-based formula for DD. It is proposed that a lack of consideration of test theory in the use of discrepancy-based models of DD can lead to misdiagnosis (both false positives and false negatives). Further, misdiagnosis in research samples affects the reproducibility and generalizability of findings. This, in turn, may explain current inconsistencies in research on the perceptual, sensory, and motor correlates of dyslexia. Copyright © 2005 John Wiley & Sons, Ltd. [source]
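The role the SEM plays in this argument can be shown with one formula: SEM = SD·sqrt(1 − reliability). The sketch below uses hypothetical scores and reliabilities (and assumes independent errors in the two tests) to put a 95% band around an observed IQ-reading discrepancy; a band this wide can straddle a diagnostic cutoff, which is exactly the misdiagnosis risk described.

```python
# Standard error of measurement and a 95% band on a discrepancy score
# (classic test theory). Scores and reliabilities are hypothetical.
import math

def sem(sd: float, reliability: float) -> float:
    """Standard error of measurement: SD * sqrt(1 - reliability)."""
    return sd * math.sqrt(1.0 - reliability)

iq, reading = 110.0, 85.0                  # observed standard scores (SD = 15)
# SE of the difference of two scores, assuming independent errors:
sem_diff = math.hypot(sem(15.0, 0.95), sem(15.0, 0.90))
d = iq - reading
print(f"discrepancy = {d:.0f} points, "
      f"95% band = ({d - 1.96 * sem_diff:.1f}, {d + 1.96 * sem_diff:.1f})")
```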
Measurement error and estimates of population extinction risk
ECOLOGY LETTERS, Issue 1 2004. John M. McNamara
Abstract: It is common to estimate the extinction probability for a vulnerable population using methods that are based on the mean and variance of the long-term population growth rate. The numerical values of these two parameters are estimated from time series of population censuses. However, the proportion of a population that is registered at each census is typically not constant but will vary among years because of stochastic factors such as weather conditions at the time of sampling. Here, we analyse how such sampling errors influence estimates of extinction risk and find that sampling errors produce two opposite effects. Measurement errors lead to an exaggerated overall variance, but also introduce negative autocorrelations in the time series (which means that estimates of annual growth rates tend to alternate in size). If time series data are treated properly, these two effects exactly counterbalance. We advocate routinely incorporating a measure of among-year correlations in estimating population extinction risk. [source]
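The two opposite effects can be written down directly. With illustrative notation (not the authors'): let the true log abundance X_t grow by r_t with variance sigma^2, and let censuses be observed as Y_t = X_t + eta_t with i.i.d. sampling errors. Then:

```latex
\begin{align*}
  \hat{r}_t &= Y_{t+1} - Y_t = r_t + \eta_{t+1} - \eta_t, \\
  \operatorname{Var}(\hat{r}_t) &= \sigma^2 + 2\sigma_\eta^2
      && \text{(exaggerated variance)}, \\
  \operatorname{Cov}(\hat{r}_t, \hat{r}_{t+1}) &= -\sigma_\eta^2
      && \text{(negative autocorrelation)}, \\
  \sigma^2 &= \operatorname{Var}(\hat{r}_t)
      + 2\operatorname{Cov}(\hat{r}_t, \hat{r}_{t+1})
      && \text{(the two effects cancel).}
\end{align*}
```

Using the among-year covariance alongside the raw variance recovers the true growth-rate variance, which is the correction the authors advocate.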
Do Australian companies manage earnings to meet simple earnings benchmarks?
ACCOUNTING & FINANCE, Issue 1 2003. David Holland
Measurement error in unexpected accruals is an important problem for empirical earnings management research. Several recent studies avoid this problem by examining the pooled, cross-sectional distribution of reported earnings. Discontinuities in the distribution of reported earnings around key earnings thresholds may indicate the exercise of management discretion (i.e. earnings management). We apply this approach to the detection of earnings management by Australian firms. Our results generally indicate significantly more small earnings increases and small profits than expected and, conversely, considerably fewer small earnings decreases and small losses than expected. These results are much stronger for larger Australian firms. We undertake an exploratory analysis of alternative explanations for our results and find some evidence consistent with management signalling its inside knowledge about the firm's expected future profitability to smooth earnings, as opposed to 'management intent to deceive', as an explanation for our results. [source]

Measurement error in computed tomography pelvimetry
JOURNAL OF MEDICAL IMAGING AND RADIATION ONCOLOGY, Issue 2 2005. N Anderson
SUMMARY: Computed tomography pelvimetry is still used in clinical practice. We wished to quantify observer error in order to assess the level of confidence with which pelvic measurements can be described as adequate or inadequate. Anteroposterior inlet, anteroposterior outlet, transverse inlet and interspinous distances were measured from 11 CT pelvimetry examinations by five observers at one institution. Three CT pelvimetries were measured by five observers at a second institution. Intraobserver and interobserver variation was assessed using analysis of variance. Reliability of measurements was assessed using the intraclass correlation coefficient. Combined error was calculated to determine 95% confidence limits for published minimum recommended pelvic measurements. The standard errors of measurement, combining all sources, were: for the anteroposterior inlet, 2.0 mm; anteroposterior outlet, 6.9 mm; transverse inlet, 1.3 mm; and interspinous distance, 2.1 mm. The 95% confidence interval around the recommended anteroposterior outlet of 100 mm was 88.5-111.3 mm. Observer variation in measurement of the anteroposterior outlet is so large as to make the measurement of doubtful clinical utility. [source]

Technical note: A new method for measuring long bone curvature using 3D landmarks and semi-landmarks
AMERICAN JOURNAL OF PHYSICAL ANTHROPOLOGY, Issue 4 2010. Isabelle De Groote
Abstract: Here we describe and evaluate a new method for quantifying long bone curvature using geometric morphometric and semi-landmark analysis of the human femur. The technique is compared with traditional ways of measuring subtense and point of maximum curvature using either coordinate calipers or projection onto graph paper. Of the traditional methods, the graph paper method is more reliable than using coordinate calipers. Measurement error is consistently lower for measuring the point of maximum curvature than for measuring subtense. The results warrant caution when comparing data collected by the different traditional methods. Landmark data collection proves reliable and has a low measurement error. However, measurement error increases with the number of semi-landmarks included in the analysis of curvature. Measurements of subtense can be estimated more reliably using 3D landmarks along the curve than using traditional techniques. We use equidistant semi-landmarks to quantify the curve because sliding the semi-landmarks masks the curvature signal. Principal components analysis of these equidistant semi-landmarks provides the added benefit of describing the shape of the curve. These results are promising for functional and forensic analysis of long bone curvature in modern human populations and in the fossil record. Am J Phys Anthropol, 2010. © 2010 Wiley-Liss, Inc. [source]

Models with Errors due to Misreported Measurements
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2003. Brent Henderson
Summary: Measurement error and misclassification models feature prominently in the literature. This paper describes misreporting error, which can be considered to fall somewhere between these two broad types of model. Misreporting is concerned with situations where a continuous random variable X is measured with error and only reported as the discrete random variable Z. Data grouping or rounding are the simplest examples of this, but more generally X may be reported as a value z of Z which refers to a different interval from the one in which X lies. The paper discusses a method for handling misreported data and draws links with measurement error and misclassification models. A motivating example is considered from prenatal Down's syndrome screening, where the gestational age at which mothers present for screening is a true continuous variable but is misreported because it is only ever observed as a discrete whole number of weeks, which may in fact be in error. The implications this misreporting might have for the screening are investigated. [source]
Potential Errors in Detecting Earnings Management: Reexamining Studies Investigating the AMT of 1986
CONTEMPORARY ACCOUNTING RESEARCH, Issue 4 2001. Won W. Choi
Abstract: In this paper we seek to document errors that could affect studies of earnings management. The book income adjustment (BIA) of the alternative minimum tax (AMT) created apparently strong incentives to manage book income downward in 1987. Five earlier papers using different methodologies and samples all conclude that earnings were reduced in response to the BIA. This consensus of findings offers an opportunity to investigate our speculation that methodological biases are more likely when there appear to be clear incentives for earnings management. A reexamination of these studies uncovers potential biases related to a variety of factors, including choices of scaling variables, selection of affected and control samples, and measurement error in estimated discretionary accruals. A reexamination of the argument underlying these studies also suggests that the incentives to manage earnings are less powerful than initially predicted, and are partially mitigated by tax and non-tax factors. As a result, we believe that the extent of earnings management that occurred in 1987 in response to the BIA remains an unresolved issue. [source]

Comparison of the Melbourne Assessment of Unilateral Upper Limb Function and the Quality of Upper Extremity Skills Test in hemiplegic CP
DEVELOPMENTAL MEDICINE & CHILD NEUROLOGY, Issue 12 2008. K Klingels
This study investigated the interrater reliability and measurement error of the Melbourne Assessment of Unilateral Upper Limb Function (Melbourne Assessment) and the Quality of Upper Extremity Skills Test (QUEST), and assessed the relationship between the two scales in 21 children (15 females, six males; mean age 6y 4mo [SD 1y 3mo], range 5-8y) with hemiplegic CP. Two raters scored the videotapes of the assessments independently in a randomized order. According to the House Classification, three participants were classified as level 1, one participant as level 3, eight as level 4, three as level 5, one participant as level 6, and five as level 7. The Melbourne Assessment and the QUEST showed high interrater reliability (intraclass correlation 0.97 for the Melbourne Assessment; 0.96 for the QUEST total score; 0.96 for the QUEST hemiplegic side). The standard error of measurement and the smallest detectable difference were 3.2% and 8.9% for the Melbourne Assessment and 5.0% and 13.8% for the QUEST score on the hemiplegic side. Correlation analysis indicated that different dimensions of upper limb function are addressed by the two scales. [source]
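The precision figures reported here are internally consistent with the standard formula linking the smallest detectable difference (SDD) to the standard error of measurement (SEM), which is worth making explicit:

```latex
\mathrm{SDD} = 1.96\sqrt{2}\,\mathrm{SEM}:\qquad
1.96\sqrt{2} \times 3.2\% \approx 8.9\% \ \ (\text{Melbourne Assessment}),\qquad
1.96\sqrt{2} \times 5.0\% \approx 13.8\% \ \ (\text{QUEST, hemiplegic side}).
```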
Nonlinear determinism in river flow: prediction as a possible indicator
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 7 2007. Bellie Sivakumar
Abstract: Whether or not river flow exhibits nonlinear determinism remains an unresolved question. While studies on the use of nonlinear deterministic methods for modeling and prediction of river flow series are on the rise and the outcomes are encouraging, suspicions and criticisms of such studies continue to exist as well. An important reason for this situation is that the correlation dimension method, used as a nonlinear determinism identification tool in most of those studies, may possess certain limitations when applied to real river flow series, which are always finite and often short and also contaminated with noise (e.g. measurement error). In view of this, the present study addresses the issue of nonlinear determinism in river flow series using prediction as a possible indicator. This is done by (1) reviewing studies that have employed nonlinear deterministic methods (coupling phase-space reconstruction and local approximation techniques) for river flow predictions and (2) identifying nonlinear determinism (or linear stochasticity) based on the level of prediction accuracy in general, and on the prediction accuracy against the phase-space reconstruction parameters in particular (termed the 'inverse approach'). The results not only provide possible indications of the presence of nonlinear determinism in the river flow series studied, but also support, both qualitatively and quantitatively, the low correlation dimensions reported for them. Therefore, nonlinear deterministic methods are a viable complement to linear stochastic ones for studying river flow dynamics, if sufficient caution is exercised in their applications and in interpreting the outcomes. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Identification and Estimation of Regression Models with Misclassification
ECONOMETRICA, Issue 3 2006. Aprajit Mahajan
This paper studies the problem of identification and estimation in nonparametric regression models with a misclassified binary regressor, where the measurement error may be correlated with the regressors. We show that the regression function is nonparametrically identified in the presence of an additional random variable that is correlated with the unobserved true underlying variable but unrelated to the measurement error. Identification for semiparametric and parametric regression functions follows straightforwardly from the basic identification result. We propose a kernel estimator based on the identification strategy, derive its large sample properties, and discuss alternative estimation procedures. We also propose a test for misclassification in the model based on an exclusion restriction that is straightforward to implement. [source]
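Why a misclassified binary regressor cannot be handled as classical additive error is a standard calculation, sketched below in generic notation (this is background intuition, not the paper's identification argument, which relies on the additional correlated variable). Suppose X* is a misclassified version of the binary X with Y independent of X* given X; then by the law of total expectation:

```latex
\begin{align*}
  E[Y \mid X^* = x^*] &= E[Y \mid X = 1]\,P(X = 1 \mid X^* = x^*)
      + E[Y \mid X = 0]\,P(X = 0 \mid X^* = x^*), \\
  E[Y \mid X^* = 1] - E[Y \mid X^* = 0]
      &= \bigl(E[Y \mid X = 1] - E[Y \mid X = 0]\bigr)
         \bigl(P(X{=}1 \mid X^*{=}1) - P(X{=}1 \mid X^*{=}0)\bigr).
\end{align*}
```

The observed contrast is the true contrast scaled by a factor below one, and the error X* − X is mechanically negatively correlated with X (it can only be zero or negative when X = 1), which is precisely why the classical uncorrelated-error assumption fails here.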
Income Variance Dynamics and Heterogeneity
ECONOMETRICA, Issue 1 2004. Costas Meghir
Recent theoretical work has shown the importance of measuring microeconomic uncertainty for models of both general and partial equilibrium under imperfect insurance. In this paper the assumption of i.i.d. income innovations used in previous empirical studies is removed, and the focus of the analysis is placed on models for the conditional variance of income shocks, which is related to the measure of risk emphasized by the theory. We first discriminate amongst various models of earnings determination that separate income shocks into idiosyncratic transitory and permanent components. We allow for education- and time-specific differences in the stochastic process for earnings, and for measurement error. The conditional variance of the income shocks is modelled as a parsimonious ARCH process with both observable and unobserved heterogeneity. The empirical analysis is conducted on data drawn from the 1967-1992 Panel Study of Income Dynamics. We find strong evidence of sizeable ARCH effects as well as evidence of unobserved heterogeneity in the variances. [source]
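A stylized version of the kind of earnings process described above (the paper's actual specification differs in details such as the transitory dynamics and the heterogeneity terms) separates permanent and transitory shocks from measurement error and lets the shock variance follow an ARCH recursion:

```latex
\begin{align*}
  y_{it} &= p_{it} + \tau_{it} + m_{it}
      && \text{log income: permanent + transitory + measurement error}, \\
  p_{it} &= p_{i,t-1} + \zeta_{it}
      && \text{permanent component as a random walk}, \\
  E[\zeta_{it}^2 \mid \mathcal{I}_{i,t-1}] &= \gamma_0 + \gamma_1 \zeta_{i,t-1}^2
      && \text{ARCH conditional variance (the risk object).}
\end{align*}
```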