Errors
Selected Abstracts

STUDY ON THE MOTION ERROR OF MAIN SPINDLE OF LATHE BASED ON THE HARMONIC WAVELET NEURAL NETWORK
EXPERIMENTAL TECHNIQUES, Issue 3 2009
X.-J. Fu
First page of article [source]

MEASUREMENT ERROR IN RESEARCH ON HUMAN RESOURCES AND FIRM PERFORMANCE: ADDITIONAL DATA AND SUGGESTIONS FOR FUTURE RESEARCH
PERSONNEL PSYCHOLOGY, Issue 4 2001
PATRICK M. WRIGHT
Gerhart and colleagues (2000) and Huselid and Becker (2000) recently debated the presence and implications of measurement error in measures of human resource practices. This paper presents data from 3 more studies: 1 of large organizations from different industries at the corporate level, 1 of commercial banks, and the other of autonomous business units at the level of the job. Results of all 3 studies provide additional evidence that single-respondent measures of HR practices contain large amounts of measurement error. Implications for future research into the HR-firm performance relationship are discussed. [source]

COMMENT ON "MEASUREMENT ERROR IN RESEARCH ON HUMAN RESOURCES AND FIRM PERFORMANCE: HOW MUCH ERROR IS THERE AND HOW DOES IT INFLUENCE EFFECT SIZE ESTIMATES?" BY GERHART, WRIGHT, MC MAHAN, AND SNELL
PERSONNEL PSYCHOLOGY, Issue 4 2000
First page of article [source]

MEASUREMENT ERROR IN RESEARCH ON THE HUMAN RESOURCES AND FIRM PERFORMANCE RELATIONSHIP: FURTHER EVIDENCE AND ANALYSIS
PERSONNEL PSYCHOLOGY, Issue 4 2000
BARRY GERHART
Our earlier article in Personnel Psychology demonstrated how generalizability theory could be used to obtain improved reliability estimates in the human resource (HR) and firm performance literature, and that correcting for unreliability using these estimates had important implications for the magnitude of the HR and firm performance relationship. In their comment, Huselid and Becker raise both criticisms specific to our study and broad issues for the field to consider. In our present article, we argue, using empirical evidence whenever possible, that the issues and criticisms raised by Huselid and Becker do not change our original conclusions. We also provide new evidence on how the reliability of HR-related measures may differ at different levels of analysis. Finally, we build on Huselid and Becker's helpful discussion of broad research design and strategy issues in the HR and firm performance literature in an effort to help researchers make better informed choices regarding their own research designs and strategies in the area. [source]
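The correction for unreliability at issue in this exchange can be illustrated with the classical disattenuation formula, r_true = r_observed / sqrt(r_xx · r_yy). The sketch below is only a schematic of that idea, using made-up reliability and correlation values rather than the generalizability-theory estimates discussed in the articles.

```python
import math

def disattenuate(r_observed: float, rel_x: float, rel_y: float) -> float:
    """Classical correction for attenuation: divide the observed correlation
    by the square root of the product of the two measures' reliabilities."""
    return r_observed / math.sqrt(rel_x * rel_y)

# Hypothetical numbers: an observed HR-practices/performance correlation of 0.20,
# a single-respondent HR measure with reliability 0.30, and a performance
# measure with reliability 0.80.
print(disattenuate(0.20, 0.30, 0.80))  # ~0.41, roughly double the observed value
```

The point of contention in the debate is precisely how large the reliabilities in the denominator really are, since low values inflate the corrected effect substantially.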
CRITICAL EVALUATION: PATTERNS OF SURGICAL TECHNICAL ERROR
ANZ JOURNAL OF SURGERY, Issue 8 2008
Thomas B Hugh
No abstract is available for this article. [source]

CRIMINALIZATION OF MEDICAL ERROR: WHO DRAWS THE LINE?
ANZ JOURNAL OF SURGERY, Issue 10 2007
Sidney W. A. Dekker
As stakeholders struggle to reconcile calls for accountability and pressures for increased patient safety, criminal prosecution of surgeons and other health-care workers for medical error seems to be on the rise. This paper examines whether legal systems can meaningfully draw a line between acceptable performance and negligence. By questioning essentialist assumptions behind "crime" or "negligence", this paper suggests that multiple overlapping and partially contradictory descriptions of the same act are always possible, and even necessary, to approximate the complexity of reality. Although none of these descriptions is inherently right or wrong, each description of the act (as negligence, or system failure, or pedagogical issue) has a fixed repertoire of responses and countermeasures appended to it, which enables certain courses of action while excluding others. Simply holding practitioners accountable (e.g. by putting them on trial) excludes any beneficial effects, as it produces defensive posturing, obfuscation and excessive stress, and leads to defensive medicine, silent reporting systems and interference with professional oversight. Calls for accountability are important, but accountability should be seen as bringing information about needed improvements to levels or groups that can do something about it, rather than deflecting resources into legal protection and limiting liability. We must avoid a future in which we have to turn increasingly to legal systems to wring accountability out of practitioners because legal systems themselves have increasingly created a climate in which telling each other accounts openly is less and less possible. [source]

NONPARAMETRIC SURVEY RESPONSE ERRORS
INTERNATIONAL ECONOMIC REVIEW, Issue 4 2007
Rosa L. Matzkin
I present nonparametric methods to identify and estimate the biases associated with response errors. When applied to survey data, these methods can be used to analyze how observable and unobservable characteristics of the respondent, and characteristics of the design of the survey, affect errors in the responses. This provides a method to correct the biases that those errors generate, by using the estimated response errors to "undo" those biases. The results are also useful for designing better surveys, since they point to characteristics of the design and of subpopulations of respondents that can provide identification of response errors. Several models are considered. [source]

AN INVESTIGATION OF THE RELATIONSHIP BETWEEN SAFETY CLIMATE AND MEDICATION ERRORS AS WELL AS OTHER NURSE AND PATIENT OUTCOMES
PERSONNEL PSYCHOLOGY, Issue 4 2006
DAVID A. HOFMANN
Safety climate has been shown to be associated with a number of important organizational outcomes. In this study, we take a broad view of safety climate, one that includes not only the development of and adherence to safety protocols but also open and constructive responses to errors, and investigate its correlates within the health care industry. Drawing on a random, national sample of hospitals, the results revealed that safety climate predicted medication errors, nurse back injuries, urinary tract infections, patient satisfaction, patient perceptions of nurse responsiveness, and nurse satisfaction. As hypothesized, the relationship between safety climate and both medication errors and back injuries was moderated by the complexity of the patient conditions on the unit. Specifically, the effect of the overall safety climate of the unit was accentuated when dealing with more complex patient conditions. [source]
INTEGRATING ERRORS INTO THE TRAINING PROCESS: THE FUNCTION OF ERROR MANAGEMENT INSTRUCTIONS AND THE ROLE OF GOAL ORIENTATION
PERSONNEL PSYCHOLOGY, Issue 2 2003
DOERTE HEIMBECK
Error management training explicitly allows participants to make errors. We examined the effects of error management instructions ("rules of thumb" designed to reduce the negative emotional effects of errors), goal orientation (learning, prove, and avoidance goal orientations), and attribute × treatment interactions on performance. A randomized experiment with 87 participants compared 3 training procedures for learning to work with a computer program: (a) error training with error management instructions, (b) error training without error management instructions, and (c) a group that was prevented from making errors. Results showed that short- and medium-term performance (near and far transfer) was superior for participants of the error training that included error management instructions, compared with the two other training conditions. Thus, error management instructions were crucial for the high performance effects of error training. Prove and avoidance goal orientation interacted with training conditions. [source]

Significance of Modeling Error in Structural Parameter Estimation
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 1 2001
Masoud Sanayei
Structural health monitoring systems rely on algorithms to detect potential changes in structural parameters that may be indicative of damage. Parameter-estimation algorithms seek to identify changes in structural parameters by adjusting parameters of an a priori finite-element model of a structure to reconcile its response with a set of measured test data. Modeling error, represented as uncertainty in the parameters of a finite-element model of the structure, curtails the capability of parameter estimation to capture the physical behavior of the structure. The performance of four error functions, two stiffness-based and two flexibility-based, is compared in the presence of modeling error in terms of the propagation rate of the modeling error and the quality of the final parameter estimates. Three different types of parameters are used in the parameter estimation procedure: (1) unknown parameters that are to be estimated, (2) known parameters assumed to be accurate, and (3) uncertain parameters that manifest the modeling error and are assumed known and not to be estimated. The significance of modeling error is investigated with respect to excitation and measurement type and locations, the type of error function, location of the uncertain parameter, and the selection of unknown parameters to be estimated. It is illustrated in two examples that the stiffness-based error functions perform significantly better than the corresponding flexibility-based error functions in the presence of modeling error. Additionally, the topology of the structure, excitation and measurement type and locations, and location of the uncertain parameters with respect to the unknown parameters can have a significant impact on the quality of the parameter estimates. Insight into the significance of modeling error and its potential impact on the resulting parameter estimates is presented through analytical and numerical examples using static and modal data. [source]
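As a schematic of the kind of parameter estimation described above, the sketch below adjusts one stiffness of a two-spring model so that a stiffness-based error function reproduces a "measured" static displacement, with a deliberately mis-specified second stiffness standing in for modeling error. The tiny model and all numbers are hypothetical, not taken from the article.

```python
import numpy as np
from scipy.optimize import least_squares

def stiffness(k1, k2):
    """Global stiffness matrix of a two-spring chain (fixed-free)."""
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

# "True" structure and a static load case (illustrative values).
k1_true, k2_true = 2000.0, 1500.0
f = np.array([0.0, 10.0])
u_measured = np.linalg.solve(stiffness(k1_true, k2_true), f)

# k2 is an uncertain parameter held fixed at a slightly wrong value (modeling error);
# k1 is the unknown parameter, estimated by minimizing the stiffness-based residual
# K(k1) u_measured - f.
k2_model = 1450.0

def residual(p):
    return stiffness(p[0], k2_model) @ u_measured - f

fit = least_squares(residual, x0=[1000.0])
print(fit.x)  # the k1 estimate is pulled away from 2000, absorbing some of the k2 error
```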
Effects of species and habitat positional errors on the performance and interpretation of species distribution models
DIVERSITY AND DISTRIBUTIONS, Issue 4 2009
Patrick E. Osborne
Aim: A key assumption in species distribution modelling is that both species and environmental data layers contain no positional errors, yet this will rarely be true. This study assesses the effect of introduced positional errors on the performance and interpretation of species distribution models.
Location: Baixo Alentejo region of Portugal.
Methods: Data on steppe bird occurrence were collected using a random stratified sampling design on a 1-km² pixel grid. Environmental data were sourced from satellite imagery and digital maps. Error was deliberately introduced into the species data as shifts in a random direction of 0-1, 2-3, 4-5 and 0-5 pixels. Whole habitat layers were shifted by 1 pixel to cause mis-registration, and the cumulative effect of one to three shifted layers investigated. Distribution models were built for three species using three algorithms with three replicates. Test models were compared with controls without errors.
Results: Positional errors in the species data led to a drop in model performance (larger errors having larger effects, typically up to a 10% drop in area under the curve on average), although not enough for models to be rejected. Model interpretation was more severely affected, with inconsistencies in the contributing variables. Errors in the habitat layers had similar although lesser effects.
Main conclusions: Models with species positional errors are hard to detect, often statistically good, ecologically plausible and useful for prediction, but interpreting them is dangerous. Mis-registered habitat layers produce smaller effects, probably because shifting entire layers does not break down the correlation structure to the same extent as random shifts in individual species observations. Spatial autocorrelation in the habitat layers may protect against species positional errors to some extent, but the relationship is complex and requires further work. The key recommendation must be that positional errors should be minimised through careful field design and data processing. [source]
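The positional-error treatment described in the Methods can be mimicked in a few lines: each record is displaced by a random distance (in pixels) in a uniformly random direction. The coordinates and sample size below are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_positional_error(xy, min_shift, max_shift, pixel=1000.0):
    """Displace each record by a distance drawn uniformly from
    [min_shift, max_shift] pixels, in a uniformly random direction."""
    n = len(xy)
    dist = rng.uniform(min_shift, max_shift, n) * pixel
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    return xy + np.column_stack((dist * np.cos(theta), dist * np.sin(theta)))

# Hypothetical species records on a 1-km pixel grid (coordinates in metres).
records = rng.uniform(0, 50_000, size=(200, 2))
shifted = add_positional_error(records, 4, 5)   # the 4-5 pixel treatment
print(np.mean(np.hypot(*(shifted - records).T)) / 1000)  # mean shift, ~4.5 pixels
```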
Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA
EARTH SURFACE PROCESSES AND LANDFORMS, Issue 6 2007
Thomas C. Pierson
Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time, or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes (Mount Rainier, Mount St. Helens and Mount Hood) in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years). Published in 2006 by John Wiley & Sons, Ltd. [source]
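The reported corrections reduce to simple arithmetic; a minimal sketch, using hypothetical ring counts rather than data from the study sites, is:

```python
def surface_age_estimate(ring_counts, cored_at_breast_height=True):
    """Years since surface stabilization from ring counts of the five largest
    Douglas fir, applying the corrections reported in the abstract:
    +5 yr (GLT) for ground-level cores, +11 yr (CTG) for breast-height cores."""
    mean_rings = sum(ring_counts) / len(ring_counts)
    correction = 11 if cored_at_breast_height else 5
    return mean_rings + correction

# Hypothetical ring counts from the five largest trees, cored at ~1.4 m.
print(surface_age_estimate([118, 115, 112, 110, 107]))  # 112.4 + 11 = 123.4 yr (±7 yr)
```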
Effect of Angular Error on Tissue Doppler Velocities and Strain
ECHOCARDIOGRAPHY, Issue 7 2003
Camilla Storaa, M.S.
One of the major criticisms of ultrasound Doppler is its angle dependency, that is, its ability to measure only the velocity components directed toward or away from the transducer. The present article aims to investigate the impact of this angular error in a clinical setting. Apical two- and four-chamber views were recorded in 43 individuals, and the myocardium was marked by hand in each image. We assume that the main direction of the myocardial velocities is longitudinal and correct for the angular error by backprojecting measured velocities onto the longitudinal direction drawn. Strain was calculated from both corrected and uncorrected velocities in 12 segments for each individual. The results indicate that the difference between strain values calculated from corrected and uncorrected velocities is insignificant in 5 segments and within a decimal range in 11 segments. The biggest difference between measured and corrected strain values was found in the apical segments. Strain is also found to be more robust against angular error than velocities, because the difference between corrected and uncorrected values is smaller for strain. Considering that there are multiple sources of noise in ultrasound Doppler measurements, the authors conclude that the angular error has so little impact on longitudinal strain that correction for this error can safely be omitted. (ECHOCARDIOGRAPHY, Volume 20, October 2003) [source]
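One way to read the backprojection correction described above is as division of the along-beam velocity by the cosine of the beam-to-wall angle, under the stated assumption that the true motion is purely longitudinal. A small sketch with an assumed angle and velocity:

```python
import numpy as np

def backproject(v_beam, angle_deg):
    """Recover the longitudinal velocity component from the along-beam Doppler
    measurement, assuming purely longitudinal motion:
    v_long = v_beam / cos(theta), where theta is the beam-to-wall angle."""
    return v_beam / np.cos(np.radians(angle_deg))

# Hypothetical apical recording: 8 cm/s measured at a 15 degree insonation angle.
v_long = backproject(8.0, 15.0)
print(round(v_long, 2))                           # ~8.28 cm/s
print(round(100 * (v_long - 8.0) / v_long, 1))    # ~3.4% underestimation if uncorrected
```

With small apical insonation angles the correction is only a few percent, which is consistent with the article's conclusion that strain is barely affected.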
Estimation of Nonlinear Models with Measurement Error
ECONOMETRICA, Issue 1 2004
Susanne M. Schennach
This paper presents a solution to an important econometric problem, namely the root n consistent estimation of nonlinear models with measurement errors in the explanatory variables, when one repeated observation of each mismeasured regressor is available. While a root n consistent estimator has been derived for polynomial specifications (see Hausman, Ichimura, Newey, and Powell (1991)), such an estimator for general nonlinear specifications has so far not been available. Using the additional information provided by the repeated observation, the suggested estimator separates the measurement error from the "true" value of the regressors thanks to a useful property of the Fourier transform: the Fourier transform converts the integral equations that relate the distribution of the unobserved "true" variables to the observed variables measured with error into algebraic equations. The solution to these equations yields enough information to identify arbitrary moments of the "true," unobserved variables. The value of these moments can then be used to construct any estimator that can be written in terms of moments, including traditional linear and nonlinear least squares estimators, or general extremum estimators. The proposed estimator is shown to admit a representation in terms of an influence function, thus establishing its root n consistency and asymptotic normality. Monte Carlo evidence and an application to Engel curve estimation illustrate the usefulness of this new approach. [source]
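Schennach's estimator rests on Fourier-transform machinery that is not reproduced here, but the simplest consequence of having a repeated mismeasured observation can be shown with moments alone: with independent, mean-zero errors, the covariance of the two noisy reports recovers the variance of the latent regressor. The distributions below are arbitrary choices for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

x_true = rng.gamma(shape=2.0, scale=1.5, size=n)   # unobserved regressor
x1 = x_true + rng.normal(0, 1.0, n)                # first noisy report
x2 = x_true + rng.normal(0, 1.0, n)                # independent repeated report

# The repeat identifies second moments of the latent variable:
# Cov(x1, x2) = Var(x_true), whereas Var(x1) is inflated by the error variance.
print(np.var(x_true), np.var(x1), np.cov(x1, x2)[0, 1])
```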
Sexual and Religious Politics in Book I of Spenser's Faerie Queene
ENGLISH LITERARY RENAISSANCE, Issue 2 2004
Harry Berger Jr.
Recent criticism that professes to be gender-sensitive and post-new-historicist still refuses to entertain the possibility that The Faerie Queene might distance itself from the misogyny embedded in the spectrum of Reformation discourses from the Puritan to the Papist pole. But Spenser could have found and reacted to misogyny not only in the religious polemics of his century but also in the intertextual archive of precursors to which The Faerie Queene so richly alludes. The problematic treatment of woman in Book 1 is not ingenuous, peripheral, nor accidental. Far from merely participating in the misogynist metaphorics of religious polemics, Book 1 performs a critique of it. Spenser shows how the male protagonist's fear and loathing of himself gets displaced to female scapegoats (Error, Duessa, Lucifera, Night) and how Una reinforces this evasive process of self-correction. In the episodes of Una's adventure with the lion (canto 3) and the house of Pride (canto 4), Book 1 interrogates both romance conventions and anti-papist allegory. [source]

Error in an Emergency Medicine Textbook: Isopropyl Alcohol Toxicity
ACADEMIC EMERGENCY MEDICINE, Issue 2 2002
Mark Su MD
No abstract is available for this article. [source]

Non-linearity and error in modelling soil processes
EUROPEAN JOURNAL OF SOIL SCIENCE, Issue 1 2001
T. M. Addiscott
Error in models and their inputs can be propagated to outputs. This is important for modelling soil processes because soil properties used as parameters commonly contain error in the statistical sense, that is, variation. Model error can be assessed by validation procedures, but tests are needed for the propagation of (statistical) error from input to output. Input error interacts with non-linearity in the model such that it contributes to the mean of the output as well as its error. This can lead to seriously incorrect results if input error is ignored when a non-linear model is used, as is demonstrated for the Arrhenius equation. Tests for non-linearity and error propagation are suggested. The simplest test for non-linearity is a graph of the output against the input. This can be supplemented if necessary by testing whether the mean of the output changes as the standard deviation of the input increases. The tests for error propagation examine whether error is suppressed or exaggerated as it is propagated through the model and whether changes in the error in one input influence the propagation of another. Applying these tests to a leaching model with rate and capacity parameters showed differences between the parameters, which emphasized that statements about non-linearity must be for specific inputs and outputs. In particular, simulations of mean annual concentrations of solute in drainage and concentrations on individual days differed greatly in the amount of non-linearity revealed and in the way error was propagated. This result is interpreted in terms of decoherence. [source]
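The Arrhenius demonstration mentioned above is easy to reproduce in a Monte Carlo sketch: because the rate expression is non-linear (convex) in temperature over typical soil temperature ranges, error in the temperature input shifts the mean of the output, not just its spread. The pre-exponential factor, activation energy and temperature error below are illustrative values only, not those used in the article.

```python
import numpy as np

rng = np.random.default_rng(2)
R = 8.314  # J mol-1 K-1

def arrhenius(T, A=1.0e7, Ea=65_000.0):
    """Rate 'constant' as a function of temperature (A and Ea are illustrative)."""
    return A * np.exp(-Ea / (R * T))

T_mean, T_sd = 283.0, 5.0                     # mean temperature and its input error
T_samples = rng.normal(T_mean, T_sd, 100_000)

k_at_mean_T = arrhenius(T_mean)               # ignoring input error
mean_k_over_T = arrhenius(T_samples).mean()   # propagating input error

# Convexity of exp(-Ea/RT) in T over this range raises the mean output,
# so ignoring input error biases the predicted mean rate downward.
print(k_at_mean_T, mean_k_over_T, mean_k_over_T / k_at_mean_T)
```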
Adaptive group detection for DS/CDMA systems over frequency-selective fading channels
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003
Stefano Buzzi
In this paper we consider the problem of group detection for asynchronous Direct-Sequence Code Division Multiple Access (DS/CDMA) systems operating over frequency-selective fading channels. A two-stage near-far resistant detection structure is proposed. The first stage is a linear filter, aimed at suppressing the effect of the unwanted user signals, while the second stage is a non-linear block, implementing a maximum likelihood detection rule on the set of desired user signals. As to the linear stage, we consider both the Zero-Forcing (ZF) and the Minimum Mean Square Error (MMSE) approaches; in particular, based on the amount of prior knowledge on the interference parameters which is available to the receiver and on the affordable computational complexity, we come up with several receiving structures, which trade system performance for complexity and needed channel state information. We also present adaptive implementations of these receivers, wherein only the parameters of the users to be decoded are assumed to be known. The case that the channel fading coefficients of the users to be decoded are not known a priori is also considered. In particular, based on the transmission of pilot signals, we adopt a least-squares criterion in order to obtain estimates of these coefficients. The result is thus a fully adaptive structure, which can be implemented with no prior information on the interfering signals and on the channel state. As to the performance assessment, the new receivers are shown to be near-far resistant, and simulation results confirm their superiority with respect to previously derived detection structures. Copyright © 2003 AEI. [source]

Not Very Material but Hardly Immaterial: China's Bombed Embassy and Sino-American Relations
FOREIGN POLICY ANALYSIS, Issue 1 2010
Gregory J. Moore
In 1999 Sino-American relations experienced intense strain as a result of NATO's Kosovo intervention, and in particular by the bombing of the Chinese Embassy in Belgrade by an American B-2 bomber. Why did the bombing of China's embassy in Belgrade in the spring of 1999 touch such a raw nerve among the Chinese people and leadership? With the coming of the tenth anniversary of these events, what still needs to be explained is how Chinese and Americans could draw such divergent conclusions about that which they've never disagreed on (the incontestable fact of the embassy's demolition), and how what Americans called "a mistake" could almost completely derail Sino-American relations, which President Clinton in his very successful visit to China a year before had called a "strategic partnership." Based on a series of semistructured interviews the author conducted in Beijing and Washington with 28 Chinese and 30 American experts, this research draws a number of important conclusions in this regard. First, intensifying and even defining the conflict were a number of important perceptual gaps. Second, given the dispute over the intentionality of the embassy bombing, the conflict boiled down not to clashing interests, per se, but rather to issues of trust and beliefs about motives and intentions. Third, poor handling of the embassy bombing by both governments deepened the conflict and the alienation both sides felt. Fourth, underlying the lack of trust and the perceptual gaps between the two sides was "Fundamental Attribution Error." [source]

Winter diatom blooms in a regulated river in South Korea: explanations based on evolutionary computation
FRESHWATER BIOLOGY, Issue 10 2007
DONG-KYUN KIM
1. An ecological model was developed using genetic programming (GP) to predict the time-series dynamics of the diatom Stephanodiscus hantzschii in the lower Nakdong River, South Korea. Eight years of weekly data showed the river to be hypertrophic (chl. a, 45.1 ± 4.19 µg L⁻¹, mean ± SE, n = 427), and S. hantzschii annually formed blooms during the winter to spring flow period (late November to March).
2. A simple non-linear equation was created to produce a 3-day sequential forecast of the species' biovolume by means of time-series optimization genetic programming (TSOGP). Training data were used in conjunction with a GP algorithm utilizing 7 years of limnological variables (1995-2001). The model was validated by comparing its output with measurements for a specific year with severe blooms (1994). The model accurately predicted the timing of the blooms, although it slightly underestimated biovolume (training r² = 0.70, test r² = 0.78). The model consisted of the following variables: dam discharge and storage, water temperature, Secchi transparency, dissolved oxygen (DO), pH, evaporation and silica concentration.
3. The application of a five-way cross-validation test suggested that GP was capable of developing models whose input variables were similar, although the data are randomly used for training. The similarity of input variable selection was approximately 51% between the best model and the top 20 candidate models out of 150 in total (based on both Root Mean Squared Error and the determination coefficients for the test data).
4. Genetic programming was able to determine the ecological importance of different environmental variables affecting the diatoms. A series of sensitivity analyses showed that water temperature was the most sensitive parameter. In addition, the optimal equation was sensitive to DO, Secchi transparency, dam discharge and silica concentration. The analyses thus identified likely causes of the proliferation of diatoms in "river-reservoir hybrids" (i.e. rivers which have the characteristics of a reservoir during the dry season). This result provides specific information about the bloom of S. hantzschii in river systems, as well as the applicability of inductive methods, such as evolutionary computation, to river-reservoir hybrid systems. [source]

Impact of Simulation Model Solver Performance on Ground Water Management Problems
GROUND WATER, Issue 5 2008
David P. Ahlfeld
Ground water management models require the repeated solution of a simulation model to identify an optimal solution to the management problem. Limited precision in simulation model calculations can cause optimization algorithms to produce erroneous solutions. Experiments are conducted on a transient field application with a streamflow depletion control management formulation solved with a response matrix approach. The experiment consists of solving the management model with different levels of simulation model solution precision and comparing the differences in optimal solutions obtained. The precision of simulation model solutions is controlled by the choice of solver and convergence parameter and is monitored by observing the reported budget discrepancy. The difference in management model solutions results from errors in the computation of response coefficients. Error in the largest response coefficients is found to have the most significant impact on the optimal solution. Methods for diagnosing the adequacy of precision when simulation models are used in a management model framework are proposed. [source]

Effects of Measurement Error on Horizontal Hydraulic Gradient Estimates
GROUND WATER, Issue 1 2007
J.F. Devlin
During the design of a natural gradient tracer experiment, it was noticed that the hydraulic gradient was too small to measure reliably on an ~500-m² site. Additional wells were installed to increase the monitored area to 26,500 m², and wells were instrumented with pressure transducers. The resulting monitoring system was capable of measuring heads with a precision of ±1.3 × 10⁻² m. This measurement error was incorporated into Monte Carlo calculations, in which only hydraulic head values were varied between realizations. The standard deviation in the estimated gradient and the flow direction angle from the x-axis (east direction) were calculated. The data yielded an average hydraulic gradient of 4.5 × 10⁻⁴ ± 25% with a flow direction of 56° southeast ± 18°, with the variations representing 1 standard deviation. Further Monte Carlo calculations investigated the effects of the number of wells, the aspect ratio of the monitored area, and the size of the monitored area on the previously mentioned uncertainties. The exercise showed that monitored areas must exceed a size determined by the magnitude of the measurement error if meaningful gradient estimates and flow directions are to be obtained. The aspect ratio of the monitored zone should be as close to 1 as possible, although departures as great as 0.5 to 2 did not degrade the quality of the data unduly. Numbers of wells beyond three to five provided little advantage. These conclusions were supported for the general case with a preliminary theoretical analysis. [source]
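A stripped-down version of the Monte Carlo exercise described above fits a plane to noisy heads at a handful of wells and records the spread of the resulting gradient magnitude and direction. The well layout, true gradient and head precision below are assumptions chosen to echo the numbers quoted in the abstract; they are not the actual site data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical well locations (m) and true heads consistent with a
# 4.5e-4 gradient driving flow toward the southeast (-56 degrees from east).
xy = np.array([[0.0, 0.0], [160.0, 10.0], [30.0, 150.0], [150.0, 140.0]])
flow_dir = 4.5e-4 * np.array([np.cos(np.radians(-56)), np.sin(np.radians(-56))])
heads_true = 100.0 - xy @ flow_dir

sigma_h = 1.3e-2                     # head measurement precision (m)
A = np.column_stack((xy, np.ones(len(xy))))

mags, angles = [], []
for _ in range(5000):
    h = heads_true + rng.normal(0, sigma_h, len(heads_true))
    coef, *_ = np.linalg.lstsq(A, h, rcond=None)   # plane fit: h = a*x + b*y + c
    gx, gy = -coef[0], -coef[1]                    # hydraulic gradient points down-head
    mags.append(np.hypot(gx, gy))
    angles.append(np.degrees(np.arctan2(gy, gx)))  # -56 degrees = 56 degrees southeast

print(np.mean(mags), np.std(mags) / np.mean(mags))   # gradient magnitude and relative spread
print(np.mean(angles), np.std(angles))                # flow direction and its spread
```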
Power of Tests for a Dichotomous Independent Variable Measured with Error
HEALTH SERVICES RESEARCH, Issue 3 2008
Daniel F. McCaffrey
Objective. To examine the implications for statistical power of using predicted probabilities for a dichotomous independent variable, rather than the actual variable.
Data Sources/Study Setting. An application uses 271,479 observations from the 2000 to 2002 CAHPS Medicare Fee-for-Service surveys.
Study Design and Data. A methodological study with simulation results and a substantive application to previously collected data.
Principal Findings. Researchers often must employ key dichotomous predictors that are unobserved but for which predictions exist. We consider three approaches to such data: the classification estimator (1), the direct substitution estimator (2), and the partial information maximum likelihood estimator (3, PIMLE). The efficiency of (1) (its power relative to testing with the true variable) roughly scales with the square of one minus the classification error. The efficiency of (2) roughly scales with the R² for predicting the unobserved dichotomous variable, and (2) is usually more powerful than (1). Approach (3) is the most powerful, but for testing differences in means of 0.2-0.5 standard deviations, (2) is typically more than 95 percent as efficient as (3).
Conclusions. The information loss from not observing actual values of dichotomous predictors can be quite large. Direct substitution is easy to implement and interpret and nearly as efficient as the PIMLE. [source]
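A small simulation along the lines of the comparison above (with an invented data-generating process, not the CAHPS data) contrasts the classification estimator, which dichotomizes the predicted probability, with direct substitution of the probability itself:

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(4)

def pvalues(n=2000, beta=0.2):
    """One simulated study: the outcome depends on an unobserved dichotomous z,
    for which only a predicted probability p is available."""
    p = rng.uniform(0.05, 0.95, n)          # predicted probabilities
    z = rng.binomial(1, p)                  # true (unobserved) dichotomous predictor
    y = beta * z + rng.normal(0.0, 1.0, n)  # outcome
    p_class = linregress((p > 0.5).astype(float), y).pvalue   # classification estimator
    p_subst = linregress(p, y).pvalue                         # direct substitution
    return p_class, p_subst

reps = np.array([pvalues() for _ in range(500)])
print((reps < 0.05).mean(axis=0))   # rejection rates: substitution is usually the higher one
```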
Quantitative assessment of the effect of basis set superposition error on the electron density of molecular complexes by means of quantum molecular similarity measures
INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 11 2009
Pedro Salvador
The Chemical Hamiltonian Approach (CHA) method is applied to obtain Basis Set Superposition Error (BSSE)-free molecular orbitals at the Hartree-Fock (HF) and Density Functional Theory (DFT) levels of theory. To assess qualitatively the effect of the BSSE on the first-order electron density, we had previously applied Bader's analysis of the intermolecular critical points located on the electron density, as well as density difference maps, for several hydrogen-bonded complexes. In this work, Quantum Molecular Similarity Measures are probed as an alternative avenue to properly quantify the electronic relaxation due to BSSE removal, by means of distance indices between the uncorrected and corrected charge densities. It is shown that BSSE contamination is more important at the DFT level of theory, and in some cases changes in the topology of the electron density are observed upon BSSE correction. Inclusion of diffuse functions has been found to dramatically decrease the BSSE effect on both geometry and electron density. The CHA method represents a good compromise to obtain accurate results with small basis sets. © 2009 Wiley Periodicals, Inc. Int J Quantum Chem, 2009 [source]

When What You See Isn't What You Get: Alcohol Cues, Alcohol Administration, Prediction Error, and Human Striatal Dopamine
ALCOHOLISM, Issue 1 2009
Karmen K. Yoder
Background: The mesolimbic dopamine (DA) system is implicated in the development and maintenance of alcohol drinking; however, the exact mechanisms by which DA regulates human alcohol consumption are unclear. This study assessed the distinct effects of alcohol-related cues and alcohol administration on striatal DA release in healthy humans.
Methods: Subjects underwent 3 PET scans with [11C]raclopride (RAC). Subjects were informed that they would receive either an IV Ringer's lactate infusion or an alcohol (EtOH) infusion during scanning, with naturalistic visual and olfactory cues indicating which infusion would occur. Scans were acquired in the following sequence: (1) Baseline scan: neutral cues predicting a Ringer's lactate infusion; (2) CUES scan: alcohol-related cues predicting alcohol infusion in a Ringer's lactate solution, but with alcohol infused after scanning to isolate the effects of cues; and (3) EtOH scan: neutral cues predicting Ringer's, but with alcohol infused during scanning (to isolate the effects of alcohol without confounding expectation or craving).
Results: Relative to baseline, striatal DA concentration decreased during CUES but increased during EtOH.
Conclusion: While the results appear inconsistent with some animal experiments showing dopaminergic responses to alcohol's conditioned cues, they can be understood in the context of the hypothesized role of the striatum in reward prediction error, and of animal studies showing that midbrain dopamine neurons decrease and increase firing rates during negative and positive prediction errors, respectively. We believe that our data are the first in humans to demonstrate such changes in striatal DA during reward prediction error. [source]

A Study of the Role of Regionalization in the Generation of Aggregation Error in Regional Input-Output Models
JOURNAL OF REGIONAL SCIENCE, Issue 3 2002
Michael L. Lahr
Although the need for aggregation in input-output modelling has diminished with increases in computing power, an alarming number of regional studies continue to use the procedure. The rationales for doing so are typically grounded in data problems at the regional level. As a result, many regional analysts use aggregated national input-output models and trade-adjust them at this aggregated level. In this paper, we point out why this approach can be inappropriate. We do so by noting that it creates a possible source of model misapplication (i.e., a direct effect could appear for a sector where one does not exist) and also by finding that a large amount of error (on the order of 100 percent) can be induced into the impact results as a result of improper aggregation. In simulations, we find that average aggregation error tends to peak at 81 sectors, after rising as models are aggregated from 492 to 365 sectors. Perversely, error then diminishes somewhat as the model size decreases further to 11 and 6 sectors. We also find that while region- and sector-specific attributes influence aggregation error in a statistically significant manner, their influence on the amount of error generally does not appear to be large. [source]
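The mechanics behind aggregation error can be sketched with a toy technical-coefficients matrix: output multipliers from the full Leontief inverse are compared with those from an output-weighted aggregation of the same table. The four-sector numbers below are invented and bear no relation to the 492-sector data used in the paper.

```python
import numpy as np

# Toy 4-sector technical coefficients matrix (columns are purchasing sectors).
A = np.array([[0.10, 0.30, 0.05, 0.02],
              [0.20, 0.05, 0.25, 0.10],
              [0.05, 0.15, 0.10, 0.30],
              [0.10, 0.05, 0.20, 0.05]])
x = np.array([100.0, 80.0, 120.0, 60.0])      # base-year gross outputs (weights)

def multipliers(A):
    """Column sums of the Leontief inverse: total output per unit of final demand."""
    L = np.linalg.inv(np.eye(len(A)) - A)
    return L.sum(axis=0)

# Aggregate sectors {0,1} and {2,3} using output-weighted shares.
S = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])          # summing (aggregation) matrix
W = S * x / (S @ x)[:, None]                  # output-share weights within each group
A_agg = S @ A @ W.T

full = multipliers(A)
agg = multipliers(A_agg)
print(full)                   # sector-level multipliers
print(agg)                    # aggregated multipliers
print(agg[[0, 0, 1, 1]] - full)   # aggregation error: group value minus true sector value
```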
Insurer Reserve Error and Executive Compensation
JOURNAL OF RISK AND INSURANCE, Issue 2 2010
David L. Eckles
This article investigates incentives of insurance firm managers to manipulate loss reserves in order to maximize their compensation. We find that managers who receive bonuses that are likely capped, or no bonuses, tend to over-reserve for current-year incurred losses. However, managers who receive bonuses that are likely not capped tend to under-reserve for current-year incurred losses. We also find that managers who exercise stock options tend to under-reserve in the current period. [source]

Measurement Error in Nonlinear Models: a Modern Perspective
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 2 2008
Andrew W. Roddam
No abstract is available for this article. [source]

On Estimating Conditional Mean-Squared Prediction Error in Autoregressive Models
JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2003
CHING-KANG ING
Zhang and Shaman considered the problem of estimating the conditional mean-squared prediction error (CMSPE) for a Gaussian autoregressive (AR) process. They used the final prediction error (FPE) of Akaike to estimate CMSPE and proposed that FPE's effectiveness be judged by its asymptotic correlation with CMSPE. However, as pointed out by Kabaila and He, the derivation of this correlation by Zhang and Shaman is incomplete, and the performance of FPE in estimating CMSPE is also poor in Kabaila and He's simulation study. Kabaila and He further proposed an alternative estimator of CMSPE, V, in the stationary AR(1) model. They reported that V has a larger normalized correlation with CMSPE through Monte Carlo simulation results. In this paper, we propose a generalization of V, denoted Ṽ, for the higher-order AR model, and obtain the asymptotic correlation of FPE and Ṽ with CMSPE. We show that the limit of the normalized correlation of Ṽ with CMSPE is larger than that of FPE with CMSPE, and hence Kabaila and He's finding is justified theoretically. In addition, the performances of the above estimators of CMSPE are re-examined in terms of mean-squared errors (MSE). Our main conclusion is that, from the MSE point of view, Ṽ is the best choice among a family of asymptotically unbiased estimators of CMSPE that includes FPE and V as special cases. [source]

Error in calculation of indirect analyses
JOURNAL OF VIRAL HEPATITIS, Issue 5 2009
R. Chou
[source]