Systematic Errors (systematic + error)
Kinds of Systematic Errors

Selected Abstracts

Correlations and predictions of solvent effects on reactivity: some limitations of multi-parameter equations and comparisons with similarity models based on one solvent parameter
JOURNAL OF PHYSICAL ORGANIC CHEMISTRY, Issue 6 2006. T. William Bentley
Abstract: Three recent publications on multi-parameter correlations of solvent effects on solvolytic reactivity are re-examined by considering 'similarity' and/or 'analogy'. Systematic errors due to compensation effects and to comparisons between dissimilar processes are found. Models for solvent nucleophilicity involving dissimilar spectroscopic processes (e.g. β or B parameters) give insensitive measures of low nucleophilicity. From qualitative considerations based on structural similarities, it is predicted that the sensitivities to changes in solvent polarity for solvolyses of chloroalkanes should be in the order: 1-adamantyl (3) > 2-methyl-2-adamantyl (1) > t-butyl (2). The predictions are confirmed quantitatively by simple linear free-energy relationships and similarity models, involving correlations with YCl (based on solvolyses of 1-chloroadamantane) or ET(30) (based on solvatochromism). Multi-parameter correlations, indicating that solvolyses of 1 show a low sensitivity to both solvent polarity and electrophilicity, and also a negative sensitivity to solvent nucleophilicity, are shown to be unreliable. Large errors are also evident in recent KOMPH2 calculations. Conclusions are supported by comparing several multi-parameter treatments of solvolyses of 4-methoxyneophyl tosylate, for which there is a reliable set of kinetic data and a generally accepted mechanism. Copyright © 2006 John Wiley & Sons, Ltd. [source]

The role of bioinformatics in two-dimensional gel electrophoresis
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 8 2003. Andrew W. Dowsey
Abstract: Over the last two decades, two-dimensional gel electrophoresis (2-DE) has established itself as the de facto approach to separating proteins from cell and tissue samples. Due to the sheer volume of data and its experimental geometric and expression uncertainties, quantitative analysis of these data with image processing and modelling has become an actively pursued research topic. The results of these analyses include accurate protein quantification, isoelectric point and relative molecular mass estimation, and the detection of differential expression between samples run on different gels. Systematic errors such as current leakage and regional expression inhomogeneities are corrected for, after which each protein spot in the gel is segmented and modelled for quantification. To assess the differential expression of protein spots in samples run on a series of two-dimensional gels, a number of image registration techniques for correcting geometric distortion have been proposed. This paper provides a comprehensive review of the computational techniques used in the analysis of 2-DE gels, together with a discussion of current and future trends in large-scale analysis. We examine the pitfalls of existing techniques and highlight some of the key areas that need to be developed in the coming years, especially those related to statistical approaches based on multiple gel runs and image mining techniques through the use of parallel processing based on cluster computing and grid technology. [source]
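As an illustration of the spot-modelling step mentioned in the review above, the sketch below fits a two-dimensional Gaussian to a synthetic gel-image patch and integrates the fitted model to obtain a spot volume. The data, model form, and parameter names are illustrative assumptions, not the pipeline described by the authors.

```python
# Illustrative sketch: quantify a single 2-DE spot by fitting a 2-D Gaussian
# to a background-subtracted image patch; the spot "volume" is the integral
# of the fitted model. Toy synthetic data, not a real gel image.
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                       + (y - y0) ** 2 / (2 * sy ** 2))) + offset
    return g.ravel()

ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx]
rng = np.random.default_rng(0)
truth = (2000.0, 30.0, 34.0, 4.0, 6.0, 100.0)          # assumed spot parameters
patch = gaussian2d((x, y), *truth).reshape(ny, nx) + rng.normal(0.0, 20.0, (ny, nx))

p0 = (patch.max() - patch.min(), nx / 2, ny / 2, 5.0, 5.0, float(np.median(patch)))
popt, _ = curve_fit(gaussian2d, (x, y), patch.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt
volume = 2.0 * np.pi * amp * sx * sy                   # integrated intensity above background
print(f"fitted centre = ({x0:.1f}, {y0:.1f}), spot volume = {volume:.0f}")
```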
Carbon nanotube disposable detectors in microchip capillary electrophoresis for water-soluble vitamin determination: Analytical possibilities in pharmaceutical quality control
ELECTROPHORESIS, Issue 14 2008. Agustín G. Crevillén
Abstract: In this work, the synergy of one mature example from the "lab-on-a-chip" domain, CE microchips, with emerging miniaturized carbon nanotube detectors in analytical science is presented. Two different carbon electrodes (a glassy carbon electrode (GCE), 3 mm diameter, and a screen-printed electrode (SPE), 0.3 mm × 2.5 mm) were modified with multiwalled carbon nanotubes (MWCNTs), and their electrochemical behaviour as detectors in CE microchips was evaluated using water-soluble vitamins (pyridoxine, ascorbic acid, and folic acid) in pharmaceutical preparations as representative examples. The SPE modified with MWCNTs was the best electrode for the vitamin analysis in terms of analytical performance. In addition, accurate determination of the three vitamins in four different pharmaceuticals was obtained (systematic error less than 9%) in only 400 s, using a protocol that combined the sample analysis and the methodological calibration. [source]

Further Characterisation of the 91500 Zircon Crystal
GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 1 2004. Michael Wiedenbeck
Keywords: 91500 zircon; reference material; inter-technique comparison; working values
This paper reports the results from a second characterisation of the 91500 zircon, including data from electron probe microanalysis, laser ablation inductively coupled plasma-mass spectrometry (LA-ICP-MS), secondary ion mass spectrometry (SIMS) and laser fluorination analyses. The focus of this initiative was to establish the suitability of this large single zircon crystal for calibrating in situ analyses of the rare earth elements and oxygen isotopes, as well as to provide working values for key geochemical systems. In addition to extensive testing of the chemical and structural homogeneity of this sample, the occurrence of banding in 91500 in both backscattered electron and cathodoluminescence images is described in detail. Blind intercomparison data reported by both LA-ICP-MS and SIMS laboratories indicate that only small systematic differences exist between the data sets provided by these two techniques. Furthermore, the use of NIST SRM 610 glass as the calibrant for SIMS analyses was found to introduce little or no systematic error into the results for zircon. Based on both laser fluorination and SIMS data, zircon 91500 seems to be very well suited for calibrating in situ oxygen isotopic analyses.
Résumé (translated): This paper presents the results of a new characterisation of the 91500 zircon, including data from electron probe microanalysis, laser ablation ICP-MS, ion probe (SIMS) and laser fluorination analyses. The aim of this study was to demonstrate that this large single zircon crystal can be used for calibrating in situ analyses of the rare earth elements and of oxygen isotopes, and at the same time to provide "working" values for a number of key geochemical systems. In addition to systematic tests of the chemical and structural homogeneity of the sample, the existence in the 91500 zircon of zoning visible in backscattered electron and cathodoluminescence images is described in detail. A blind comparison of the results obtained by LA-ICP-MS and by SIMS in different laboratories shows that the systematic differences between the data sets obtained by these two techniques are very small. Moreover, the use of NIST SRM 610 glass as calibrant for the SIMS analyses introduces only a very small, if not non-existent, systematic error into the zircon results. On the basis of the laser fluorination and SIMS analyses, the 91500 zircon appears to be very well suited for calibrating in situ oxygen isotope analyses. [source]
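The blind intercomparison described above is essentially a paired test for a systematic offset between two techniques measuring the same reference material. A minimal sketch of such a check, using purely synthetic numbers (not the 91500 data) and assumed analytical precisions:

```python
# Minimal sketch: test for a systematic offset between two techniques measuring
# the same quantity on the same samples (paired design). Synthetic numbers only.
import numpy as np

rng = np.random.default_rng(1)
true_value = 100.0                                      # hypothetical concentration, ppm
lab_a = true_value + rng.normal(0.0, 2.0, 12)           # technique A results
lab_b = true_value + 0.8 + rng.normal(0.0, 2.0, 12)     # technique B, small assumed bias

diff = lab_b - lab_a
bias = diff.mean()
sem = diff.std(ddof=1) / np.sqrt(diff.size)
print(f"mean inter-technique difference = {bias:.2f} +/- {2*sem:.2f} ppm (2 s.e.)")
# If the interval comfortably spans zero, the data give no evidence of a
# systematic difference between the two techniques at this level of precision.
```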
The use of near infrared interactance in hemodialysis
HEMODIALYSIS INTERNATIONAL, Issue 1 2005. N. Sarhill
Forty-one consecutive admissions to a hemodialysis center were evaluated. Demographic information including age, gender, race, and diagnosis was collected. Patients older than 18 years, with end-stage renal disease and on hemodialysis for at least one year, were included. Those with edema or known ascites were excluded. Weight was measured before and after hemodialysis (HD) using a standard scale and by considering the amount of fluid loss recorded by the hemodialysis machine. Body composition, including total body water (TBW), was calculated before and after HD using near infrared interactance (NIR). All measurements were completed during the half hour before and after HD. The 41 patients included 26 men and 15 women; median age 58 (range 28-88 years). Twenty-eight were African American and the rest Caucasian. The amount of intravascular fluid removed by HD (assessed by weight reduction) ranged from 0 to 5 L, with a median of 2.2 L. NIR analysis for the same patients at the same time showed different total body water measurements in 91% of cases (P > 0.05). Moreover, NIR analysis showed an increase in total body water in 24% of patients even though the hemodialysis machine showed a loss of total body water; median 1.3 L (range 0-3 L). The error in measuring body composition with NIR was both large and varied (random and not systematic error). We conclude that NIR analysis cannot be considered a reliable method to evaluate body composition, especially total body water, in patients with end-stage renal disease undergoing hemodialysis. [source]

Some confusion concerning integral isoconversional methods that may result from the paper by Budrugeac and Segal "Some Methodological Problems Concerning Nonisothermal Kinetic Analysis of Heterogeneous Solid-Gas Reactions"
INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 7 2002. Sergey Vyazovkin
Budrugeac and Segal (Int. J. Chem. Kinet. 33, 564, 2001) have generally criticized the integral isoconversional methods for producing a systematic error in the activation energy, whose value varies with the extent of conversion. We stress that this error is practically eliminated in the advanced integral isoconversional methods when using integration over small time segments (Vyazovkin, S., J. Comput. Chem. 22, 178, 2001). © 2002 Wiley Periodicals, Inc. Int J Chem Kinet 34: 418-420, 2002 [source]
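A sketch of the segment-wise ("advanced") integral isoconversional idea referred to above: at each conversion level, the activation energy is the value that makes the exponential time integrals over a small conversion segment mutually consistent across runs. The data below are synthetic isothermal runs generated from an assumed single-step first-order model; the kinetic parameters and temperatures are illustrative, not those of any real system.

```python
# Segment-wise integral isoconversional sketch for isothermal runs: minimize
# the pairwise ratio sum of J_i = exp(-E/(R*T_i)) * dt_i over the runs, where
# dt_i is the time each run needs to traverse a small conversion segment.
import numpy as np
from scipy.optimize import minimize_scalar

R = 8.314                        # J mol-1 K-1
E_true, A = 120e3, 1e12          # assumed parameters used to fake the data
temps = np.array([520.0, 540.0, 560.0, 580.0])     # K, isothermal runs
alphas = np.linspace(0.05, 0.95, 19)

# First-order model: alpha(t) = 1 - exp(-k t)  =>  t(alpha) = -ln(1 - alpha)/k
k = A * np.exp(-E_true / (R * temps))
t_alpha = -np.log(1.0 - alphas)[:, None] / k[None, :]   # shape (n_alpha, n_runs)

def phi(E, dt, T):
    """Pairwise objective: sum over run pairs of J_i/J_j for one segment."""
    J = np.exp(-E / (R * T)) * dt
    ratio = J[:, None] / J[None, :]
    return ratio.sum() - len(T)          # drop the i == j terms (each equals 1)

E_alpha = []
for m in range(1, len(alphas)):
    dt = t_alpha[m] - t_alpha[m - 1]     # time to traverse the small segment in each run
    res = minimize_scalar(phi, bounds=(20e3, 400e3), args=(dt, temps), method="bounded")
    E_alpha.append(res.x)

print(np.round(np.array(E_alpha) / 1e3, 1))   # recovers ~120 kJ/mol at every conversion
```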
Hidden Markov model-based real-time transient identifications in nuclear power plants
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, Issue 8 2002. Kee-Choon Kwon
In this article, a transient identification method based on a stochastic approach with the hidden Markov model (HMM) has been suggested and evaluated experimentally for the classification of nine types of transients in nuclear power plants (NPPs). A transient is defined as an event in which a plant proceeds from a normal state to an abnormal state. Identification of the type of transient during the early stage of an accident in an NPP is crucial for proper action selection. A transient can be identified by its unique time-dependent patterns in the principal variables. The HMM, a doubly stochastic process, can be applied to transient identification, which is a spatial and temporal classification problem within a statistical pattern-recognition framework. A trained HMM is created for each transient from a set of training data by the maximum-likelihood estimation method, which uses a forward-backward algorithm and the Baum-Welch re-estimation algorithm. The transient is identified by determining which model has the highest probability for the given test data, using the Viterbi algorithm. Several experimental tests were performed with different normalization methods, clustering algorithms, and numbers of states in the HMM. A few further experiments, including superimposing random noise, adding systematic error, and adding untrained transients, were performed to verify the method's performance and robustness. The proposed real-time transient identification system has been shown to have many advantages, although some problems remain to be solved before applying it to an operating NPP. Further efforts are being made to improve the system performance and robustness in order to demonstrate reliability and accuracy to the required level. © 2002 Wiley Periodicals, Inc. [source]

Menstrual age-dependent systematic error in sonographic fetal weight estimation: A mathematical model
JOURNAL OF CLINICAL ULTRASOUND, Issue 3 2002. Max Mongelli, MD
Abstract: Purpose: We used computer modeling techniques to evaluate the accuracy of different types of sonographic formulas for estimating fetal weight across the full range of clinically important menstrual ages. Methods: Input data for the computer modeling were derived from published British standards for the normal distributions of sonographic biometric growth parameters and their correlation coefficients; these standards had been derived from fetal populations whose ages were determined using sonography. The accuracy of each of 10 formulas for estimating fetal weight was calculated by comparing the weight estimates obtained with these formulas in simulated populations with the estimates expected from birth weight data, from 24 weeks' menstrual age to term. Preterm weights were estimated by interpolation from term birth weights using sonographic growth curves. With an ideal formula, the median weight estimates at term should not differ from the population birth weight median. Results: The simulated output sonographic values closely matched those of the original population. The accuracy of the fetal weight estimation differed by menstrual age and between formulas. Most methods tended to overestimate fetal weight at term. Shepard's formula progressively overestimated weights, from about 2% at 32 weeks to more than 15% at term.
The accuracy of Combs's and Shinozuka's volumetric formulas varied least with menstrual age. Hadlock's formula underestimated preterm fetal weight by up to 7% and overestimated fetal weight at term by up to 5%. Conclusions: The accuracy of sonographic fetal weight estimation based on volumetric formulas is more consistent across menstrual ages than that of other methods. © 2002 Wiley Periodicals, Inc. J Clin Ultrasound 30:139-144, 2002; DOI 10.1002/jcu.10051 [source]

Systematic and statistical error in histogram-based free energy calculations
JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 12 2003. Mark N. Kobrak
Abstract: A common technique for the numerical calculation of free energies involves estimation of the probability density along a given coordinate from a set of configurations generated via simulation. The process requires discretization of one or more reaction coordinates to generate a histogram from which the continuous probability density is inferred. We show that the finite size of the intervals used to construct the histogram leads to quantifiable systematic error. The width of these intervals also determines the statistical error in the free energy, and the choice of the appropriate interval is therefore driven by the need to balance the two sources of error. We present a method for the construction of the optimal histogram for a given system, and show that the use of this technique requires little additional computational expense. We demonstrate the efficacy of the technique for a model system, and discuss how the principles governing the choice of discretization interval could be used to improve extended sampling techniques. © 2003 Wiley Periodicals, Inc. J Comput Chem 24: 1437-1446, 2003 [source]

Laboratory evaluation of two bioenergetics models applied to yellow perch: identification of a major source of systematic error
JOURNAL OF FISH BIOLOGY, Issue 2 2003. P. G. Bajer
Laboratory growth and food consumption data for two size classes of age 2 yellow perch Perca flavescens, each fed on two distinct feeding schedules at 21 °C, were used to evaluate the ability of the Wisconsin (WI) and Karas-Thoresson (KT) bioenergetics models to predict fish growth and cumulative consumption. Neither model exhibited consistently better performance in predicting fish body mass across the four combinations of fish size and feeding regime. The results indicated deficiencies in both models' estimates of resting routine metabolism. Both the WI and KT models exhibited growth-rate prediction errors that were strongly correlated with food consumption rate. Consumption-dependent prediction errors may be common in bioenergetics models and are probably the result of deficiencies in parameter values or assumptions within the models for calculating the energy costs of specific dynamic action, feeding activity metabolism, or egestion and excretion. Inter-model differences in growth and consumption predictions were primarily the result of differences in the egestion and excretion costs calculated by the two models. The results highlight the potential importance of parameters describing egestion and excretion costs to the accuracy of bioenergetics model predictions, even though bioenergetics models are generally regarded as insensitive to these parameters. The findings strongly emphasize the utility and necessity of performing laboratory evaluations of all bioenergetics models to ensure model accuracy and to facilitate model refinement. [source]
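A consumption-dependent systematic error of the kind reported above can be exposed by regressing the model's prediction error on the measured consumption rate. The sketch below does this with entirely synthetic numbers and made-up units; it is not the perch data set or either bioenergetics model.

```python
# Sketch: detect a consumption-dependent bias by regressing (predicted - observed)
# growth on measured consumption. Synthetic data with an assumed over-counting model.
import numpy as np

rng = np.random.default_rng(2)
consumption = rng.uniform(0.5, 3.0, 24)                  # assumed units
observed_growth = 0.8 * consumption - 0.3 + rng.normal(0.0, 0.05, 24)
predicted_growth = 0.9 * consumption - 0.3               # model that over-counts intake

error = predicted_growth - observed_growth
slope, intercept = np.polyfit(consumption, error, 1)
r = np.corrcoef(consumption, error)[0, 1]
print(f"error = {slope:.2f} * consumption + {intercept:.2f}, r = {r:.2f}")
# A slope clearly different from zero (here ~0.1) means the bias grows with
# feeding rate, pointing at consumption-linked cost terms (SDA, feeding
# activity, egestion/excretion) rather than random scatter.
```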
Reexamining the quantification of perfusion MRI data in the presence of bolus dispersion
JOURNAL OF MAGNETIC RESONANCE IMAGING, Issue 3 2007. Linda Ko, BSc
Abstract: Purpose: To determine the true impact of dispersion on cerebral blood flow (CBF) quantification by removing an algorithm implementation-induced systematic error. Materials and Methods: The impact on CBF estimates of dispersion of the arterial input function (AIF) between its measurement site and its entry into the tissue of interest was simulated assuming: (1) contralateral circulation flow that introduces a dispersive component truly related to arterial tissue delay (ATD); and (2) the presence of an arterial stenosis that disperses and shifts the AIF peak entering the tissue, increasing the apparent ATD relative to the original AIF. Results: Previously reported CBF estimates for the stenosis dispersion model were found to be a mixture of true dispersive effects and an algorithm implementation-induced systematic error. The true CBF(measured)/CBF(no dispersion) ratios for short mean transit time (MTT; normal) and long MTT (infarcted) tissue were similar for both dispersion models evaluated, an unanticipated result. The CBF quantification inaccuracies induced by the dispersion model truly related to ATD were lower than those for the local stenosis-based dispersion at small ATD values. Conclusion: Correcting the systematic error present in a previous deconvolution study removes the reported ATD-related impact on CBF quantification. The impact of dispersion was smaller than half that reported in previous simulation studies. J. Magn. Reson. Imaging 2007;25:639-643. © 2007 Wiley-Liss, Inc. [source]

A fast hybrid algorithm for exoplanetary transit searches
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2006. A. Collier Cameron
Abstract: We present a fast and efficient hybrid algorithm for selecting exoplanetary candidates from wide-field transit surveys. Our method is based on the widely used SysRem and Box Least-Squares (BLS) algorithms. Patterns of systematic error that are common to all stars on the frame are mapped and eliminated using the SysRem algorithm. The remaining systematic errors, caused by spatially localized flat-fielding and other errors, are quantified using a boxcar-smoothing method. We show that the dimensions of the search-parameter space can be reduced greatly by carrying out an initial BLS search on a coarse grid of reduced dimensions, followed by Newton-Raphson refinement of the transit parameters in the vicinity of the most significant solutions. We illustrate the method's operation by applying it to data from one field of the SuperWASP survey, comprising 2300 observations of 7840 stars brighter than V = 13.0. We identify 11 likely transit candidates. We reject stars that exhibit significant ellipsoidal variations indicative of a stellar-mass companion. We use colours and proper motions from the Two Micron All Sky Survey and USNO-B1.0 surveys to estimate the stellar parameters and the companion radius. We find that two stars showing unambiguous transit signals pass all these tests, and so qualify for detailed high-resolution spectroscopic follow-up. [source]
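The SysRem step referred to above amounts to iteratively fitting and subtracting a rank-one pattern (a per-star coefficient times a per-frame trend) from the array of light-curve residuals. The sketch below is an unweighted simplification on toy data; the published algorithm weights each point by its measurement uncertainty and removes several such components in turn.

```python
# Minimal SysRem-style sketch: alternately fit c[i] and a[j] so that
# resid[i, j] ~ c[i] * a[j], then subtract the fitted pattern. Toy data.
import numpy as np

rng = np.random.default_rng(3)
n_stars, n_frames = 200, 150
true_c = rng.normal(1.0, 0.3, n_stars)                    # per-star sensitivity
true_a = 0.02 * np.sin(np.linspace(0.0, 6.0, n_frames))   # shared frame-level trend
resid = np.outer(true_c, true_a) + rng.normal(0.0, 0.005, (n_stars, n_frames))

a = np.ones(n_frames)                 # initial guess for the frame trend
for _ in range(20):                   # alternating linear least squares
    c = resid @ a / (a @ a)
    a = resid.T @ c / (c @ c)

cleaned = resid - np.outer(c, a)
print(f"rms before = {resid.std():.4f}, after = {cleaned.std():.4f}")
```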
Galaxy groups in the Two-degree Field Galaxy Redshift Survey: the luminous content of the groups
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2004. V. R. Eke
Abstract: The Two-degree Field Galaxy Redshift Survey (2dFGRS) Percolation-Inferred Galaxy Group (2PIGG) catalogue of ~29 000 objects is used to study the luminous content of galaxy systems of various sizes. Mock galaxy catalogues constructed from cosmological simulations are used to gauge the accuracy with which intrinsic group properties can be recovered. It is found that a Schechter function is a reasonable fit to the galaxy luminosity functions in groups of different mass in the real data, and that the characteristic luminosity L* is slightly larger for more massive groups. However, the mock data show that the shape of the recovered luminosity function is expected to differ from the true shape, and this must be allowed for when interpreting the data. Luminosity function results are presented in both the bJ and rF wavebands. The variation of the halo mass-to-light ratio, Υ, with group size is studied in both of these wavebands. A robust trend of increasing Υ with increasing group luminosity is found in the 2PIGG data. Going from groups with bJ luminosities equal to 10^10 h^-2 L⊙ to those 100 times more luminous, the typical bJ-band mass-to-light ratio increases by a factor of 5, whereas the rF-band mass-to-light ratio grows by a factor of 3.5. These trends agree well with the predictions of the simulations, which also predict a minimum in the mass-to-light ratio on a scale roughly corresponding to the Local Group. The data indicate that if such a minimum exists, then it must occur at L ~ 10^10 h^-2 L⊙, below the range accurately probed by the 2PIGG catalogue. According to the mock data, the bJ mass-to-light ratios of the largest groups are expected to be approximately 1.1 times the global value. Assuming that this correction applies to the real data, the mean bJ luminosity density of the Universe yields an estimate of Ωm = 0.26 ± 0.03 (statistical error only). Various possible sources of systematic error are considered, with the conclusion that these could affect the estimate of Ωm by a few tens of per cent. [source]

Cluster temperature profiles and Sunyaev-Zeldovich observations
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2004. Steen H. Hansen
Abstract: Galaxy clusters are not isothermal, and the radial temperature dependence will affect the cluster parameters derived through observation of the Sunyaev-Zeldovich (SZ) effect. We show that the derived peculiar velocity will be systematically shifted by 10-20 per cent. For future all-sky surveys one cannot rely on the observationally expensive X-ray observations to remove this systematic error, but should instead reach for sufficient angular resolution to perform a deprojection in the SZ spectra. The Compton-weighted electron temperature is accurately derived through SZ observations. [source]

Two measures of the shape of the dark halo of the Milky Way
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 2 2000. Rob P. Olling
In order to test the reliability of determinations of the shapes of the dark-matter haloes of galaxies, we have made such measurements for the Milky Way by two independent methods. First, we have combined measurements of the overall mass distribution of the Milky Way derived from its rotation curve with measurements of the amount of dark matter in the solar neighbourhood obtained from stellar kinematics to determine the flattening of the dark halo.
Secondly, we have used the established technique based on the variation in thickness of the Milky Way's H I layer with radius: by assuming that the H I gas is in hydrostatic equilibrium in the gravitational potential of a galaxy, one can use the observed flaring of the gas layer to determine the shape of the dark halo. These techniques are found to produce a consistent estimate for the flattening of the dark-matter halo, with a shortest-to-longest axis ratio of q ≈ 0.8, but only if one adopts somewhat non-standard values for the distance to the Galactic centre, R0, and the local Galactic rotation speed, Θ0. For consistency, one requires values of R0 ≈ 7.6 kpc and Θ0 ≈ 190 km s^-1. The results depend on the Galactic constants because the adopted values affect both distance measurements within the Milky Way and the shape of the rotation curve, which, in turn, alter the inferred halo shape. Although differing significantly from the current IAU-sanctioned values, these upper limits are consistent with all existing observational constraints. If future measurements confirm these lower values for the Galactic constants, then the validity of the gas-layer-flaring method will be confirmed. Further, dark-matter candidates such as cold molecular gas and massive decaying neutrinos, which predict very flat dark haloes with q ≈ 0.2, will be ruled out. Conversely, if the Galactic constants were found to be close to the more conventional values, then there would have to be some systematic error in the methods for measuring dark halo shapes, and the existing modelling techniques would have to be viewed with some scepticism. [source]

Repeatability of joint proprioception and muscle torque assessment in healthy children and in children diagnosed with hypermobility syndrome
MUSCULOSKELETAL CARE, Issue 2 2008. Francis A. Fatoye, MSc
Abstract: Background: Impairment of joint proprioception in patients with hypermobility syndrome (HMS) has been well documented. Both joint proprioception and muscle torque are commonly assessed in patients with musculoskeletal complaints. It is unknown, however, whether these measures change significantly on repeated application in healthy children and in children with HMS. Aim: To investigate the between-days repeatability of joint proprioception and muscle torque in these groups. Methods: Twenty children (10 healthy and 10 with HMS), aged eight to 15 years, were assessed on two separate occasions (one week apart) for joint kinaesthesia (JK), joint position sense (JPS), and knee extensor and knee flexor muscle torque. JK was measured using the threshold to detection of passive movement. JPS was measured using the absolute angular error (AAE; the absolute difference between the target and perceived angles). Knee extensor and flexor muscle torque was normalized to body weight. Results: Intra-class correlation coefficients (ICC) for JK and for extensor and flexor muscle torque were excellent in both groups (range 0.83 to 0.98). However, ICC values for the JPS tests were poor to moderate in the two groups (range 0.18 to 0.56). The 95% limits of agreement (LOA) were narrow in both cohorts for JK and muscle torque (indicating low systematic error) but wide for the JPS tests. The 95% LOA also demonstrated that the measuring instruments used in this study had low between-days systematic error. Conclusions: Based on ICC and 95% LOA, the repeatability of JK and muscle torque measurements was excellent in both healthy children and those with HMS. The JPS test can only be assessed with poor to moderate repeatability.
The use of the JPS test in these children should be undertaken with caution. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Dynamics of molecules in crystals from multi-temperature anisotropic displacement parameters
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 5 2000
A new model for analysing the temperature evolution of anisotropic displacement parameters (ADPs) is presented. It allows for a separation of temperature-dependent from temperature-independent contributions to ADPs and provides a fairly detailed description of the temperature-dependent large-amplitude molecular motions in crystals in terms of correlated atomic displacements and associated effective vibrational frequencies. It can detect disorder in the crystal structure, systematic error in the diffraction data and the effects of non-spherical electron-density distributions on ADPs in X-ray data. The analysis requires diffraction data measured at multiple temperatures. [source]

The role of the basic state in the ENSO-monsoon relationship and implications for predictability
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 607 2005. A. G. Turner
Abstract: The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability. Copyright © 2005 Royal Meteorological Society [source]

Forcing singular vectors and other sensitive model structures
THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 592 2003. J. Barkmeijer
Abstract: Model tendency perturbations can, like analysis perturbations, be an effective way to influence forecasts. In this paper, optimal model tendency perturbations, or forcing singular vectors, are computed with diabatic linear and adjoint T42L40 versions of the European Centre for Medium-Range Weather Forecasts' forecast model. During the forecast time, the spatial pattern of the tendency perturbation does not vary, and the response at optimization time (48 hours) is measured in terms of total energy. Their properties are compared with those of initial singular vectors, and differences, such as larger horizontal scale and location, are discussed. Sensitivity calculations are also performed, whereby a cost function measuring the 2-day forecast error is minimized by allowing only tendency perturbations.
For a given number of minimization steps, this approach yields larger cost-function reductions than the sensitivity calculation using only analysis perturbations. Nonlinear forecasts using only one type of perturbation confirm an improved performance in the case of tendency perturbations. For a summer experiment, a substantial reduction of the systematic error is shown in the case of forcing sensitivity. Copyright © 2003 Royal Meteorological Society. [source]

K2[HCr2AsO10]: redetermination of phase II and the predicted structure of phase I
ACTA CRYSTALLOGRAPHICA SECTION C, Issue 12 2004. T. J. R. Weakley
Our prediction that phase II of dipotassium hydrogen chromatoarsenate, K2[HCr2AsO10], is ferroelectric, based on the analysis of the atomic coordinates by Averbuch-Pouchot, Durif & Guitel [Acta Cryst. (1978), B34, 3725-3727], led to an independent redetermination of the structure using two separate crystals. The resulting improved accuracy allows the inference that the H atom is located in the hydrogen bonds of length 2.555 (5) Å which form between the terminal O atoms of shared AsO3OH tetrahedra in adjacent HCr2AsO10^2- ions. The largest atomic displacement of 0.586 Å between phase II and the predicted paraelectric phase I is by these two O atoms. The H atoms form helices of radius ~0.60 Å about the 3₁ or 3₂ axes. Normal probability analysis reveals systematic error in seven or more of the earlier atomic coordinates. [source]

Zero-dose extrapolation as part of macromolecular synchrotron data reduction
ACTA CRYSTALLOGRAPHICA SECTION D, Issue 5 2003. Kay Diederichs
Radiation damage to macromolecular crystals at third-generation synchrotron sites constitutes a major source of systematic error in X-ray data collection. Here, a computational method to partially correct the observed intensities during data reduction is described and investigated. The method consists of a redundancy-based zero-dose extrapolation of a decay function that is fitted to the intensities of all observations of a unique reflection as a function of dose. It is shown in a test case with weak anomalous signal that this conceptually simple correction, when applied to each unique reflection, can significantly improve the accuracy of averaged intensities and single-wavelength anomalous dispersion phases, and leads to enhanced experimental electron-density maps. Limitations of and possible improvements to the method are discussed. [source]
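The core of the zero-dose extrapolation idea can be sketched in a few lines: fit the repeated observations of one unique reflection against accumulated dose and keep the dose-zero intercept rather than the plain average. The decay is taken as linear here for simplicity, and all numbers are synthetic; this is not the implementation used in the paper.

```python
# Sketch of redundancy-based zero-dose extrapolation for one unique reflection.
import numpy as np

rng = np.random.default_rng(4)
dose = np.linspace(0.0, 1.0, 8)              # accumulated dose, arbitrary units
true_I0, decay_rate = 1000.0, 0.30           # assumed undamaged intensity and decay
obs = true_I0 * (1.0 - decay_rate * dose) + rng.normal(0.0, 15.0, dose.size)

slope, intercept = np.polyfit(dose, obs, 1)
print(f"plain mean        : {obs.mean():7.1f}")
print(f"zero-dose estimate: {intercept:7.1f}  (true undamaged value {true_I0})")
# The plain mean is biased low because later observations are damaged; the
# extrapolated intercept recovers the undamaged intensity at the cost of a
# larger statistical uncertainty per reflection.
```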
Does the ligand-biopolymer equilibrium binding constant depend on the number of bound ligands?
BIOPOLYMERS, Issue 11 2010. Daria A. Beshnova
Abstract: Conventional methods, such as the Scatchard or McGhee-von Hippel analyses, used to treat ligand-biopolymer interactions indirectly assume that the microscopic binding constant is independent of the number of ligands, i, already bound to the biopolymer. Recent results on the aggregation of aromatic molecules (Beshnova et al., J Chem Phys 2009, 130, 165105) indicated that the equilibrium constant of self-association depends intrinsically on the number of molecules in an aggregate, owing to the loss of translational and rotational degrees of freedom on formation of the complex. The influence of these factors on the equilibrium binding constant for ligand-biopolymer complexation was analyzed in this work. It was shown that under the conditions of binding of "small" molecules, these factors can effectively be ignored and hence do not introduce any hidden systematic error into such widely used approaches as the Scatchard or McGhee-von Hippel methods for analyzing ligand-biopolymer complexation. © 2010 Wiley Periodicals, Inc. Biopolymers 93: 932-935, 2010. [source]

True and Apparent Temperature Dependence of Protein Adsorption Equilibrium in Reversed-Phase HPLC
BIOTECHNOLOGY PROGRESS, Issue 6 2002. Szabelski
The adsorption behavior of bovine insulin on a C8-bonded silica stationary phase was investigated at different column pressures and temperatures in isocratic reversed-phase HPLC. Changes in the molar volume of insulin (ΔVm) upon adsorption were derived from the pressure dependence of the isothermal retention factor (k'). The values of ΔVm were found to be practically independent of the temperature between 25 and 50 °C, at about -96 mL/mol, and to increase with increasing temperature, up to about -108 mL/mol reached at 50 °C. This trend was confirmed by two separate series of measurements of the thermal dependence of ln(k'). In the first series, the average column pressure was kept constant. The second series involved measurements of ln(k') under constant mobile-phase flow rate, the average column pressure varying with the temperature. In both cases, a parabolic relationship was observed between ln(k') and the temperature, but the values obtained for ln(k') were higher in the first case than in the second. The relative difference in ln(k'), caused by the change in pressure drop induced by the temperature, is equivalent to a systematic error of 12% in the estimate of the Gibbs free energy. Thus, a substantial error is made in the estimates of the enthalpy and entropy of adsorption when the pressure effects associated with the change in the molar volume of insulin are neglected. This work proves that the average column pressure must be kept constant during thermodynamic measurements of protein adsorption constants, especially in RPLC and HIC. Our results also show that there is a critical temperature, Tc ≈ 53 °C, at which ln(k') is maximum and the insulin adsorption process changes from exothermic to endothermic. This temperature also determines the transition point in the molecular mechanism of insulin adsorption, which involves successive unfolding of the protein chain. [source]
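The molar-volume change discussed above follows from the standard thermodynamic relation (∂ ln k'/∂P)_T = -ΔVm/(RT). A minimal sketch of that estimation, with synthetic pressure-retention data rather than the insulin measurements:

```python
# Sketch: estimate the molar volume change on adsorption from the pressure
# dependence of the retention factor, using (d ln k'/dP)_T = -dVm/(R*T).
# Synthetic/assumed numbers, not the insulin data set.
import numpy as np

R = 8.314                                       # J mol-1 K-1
T = 298.15                                      # K
P = np.array([10e6, 15e6, 20e6, 25e6, 30e6])    # average column pressure, Pa
lnk = np.array([1.02, 1.22, 1.43, 1.62, 1.83])  # ln k' measured at each pressure

slope = np.polyfit(P, lnk, 1)[0]                # d(ln k')/dP in Pa^-1
dVm = -R * T * slope                            # m^3 mol^-1
print(f"slope = {slope:.3e} Pa^-1, delta V_m = {dVm * 1e6:.1f} mL/mol")
# A positive slope (retention increasing with pressure) corresponds to a
# negative molar volume change on adsorption, i.e. the adsorbed protein plus
# perturbed solvent occupies less volume than in the mobile phase.
```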
An Experimental Investigation of Approaches to Audit Decision Making: An Evaluation Using Systems-Mediated Mental Models
CONTEMPORARY ACCOUNTING RESEARCH, Issue 2 2005. Amy K. Choy
Abstract: The objective of this research is to articulate a decision-making foundation for the systems audit approach. Under this audit approach, the auditor first gains an understanding of the auditee's economic environment, strategy, and business processes, and then forms expectations about its performance and financial reporting. Proponents of this audit approach argue that decision making is enhanced because knowledge of the system allows the auditor to focus on the most important risks. However, there has not been an explicit framework to explain how systems knowledge can enhance decision making. To provide such a framework, we combine mental model theory with general systems theory to produce a hypothesis we refer to as the systems-mediated mental model hypothesis. We test this hypothesis using experimental economics methods. We find that (1) subjects make systematic errors in the setting without an organizing framework provided by the systems information, and (2) the presence of an organizing framework results in lower reporting errors. Importantly, the organizing framework significantly enhances decision making in settings where the environment changed. Establishing a decision-making foundation for systems audits can provide an important building block that, in part, can contribute to the development of a more effective and efficient audit technology - an important objective now, when audits are facing a credibility crisis. [source]

Comparative Study of Flat and Round Collectors Using a Validated 1D Fluid Probe Model
CONTRIBUTIONS TO PLASMA PHYSICS, Issue 5-6 2006. P. Peleman
Abstract: In the literature two different types of Gundestrup-like probe designs are proposed: designs with flat and with round collectors. In this paper we study the influence of the collector shape of Gundestrup-like probes on the accuracy of the measurement of the parallel and perpendicular flows. A one-dimensional fluid probe model is used for deducing both Mach numbers of the unperturbed flow from the probe data. An analytical expression relates the plasma flow to the measured ion saturation currents collected at the upstream and downstream collecting surfaces of the probe. For flat collectors, the analytical model is validated by comparison with a two-dimensional quasi-neutral particle-in-cell (PIC) simulation code. An extension of the theoretical model then allows us to study round collectors. We performed an accuracy study which showed that systematic errors are introduced when round collectors are employed for the determination of the perpendicular flow, which is systematically overestimated. The error can reach more than 70% when the perpendicular flow increases and when the angle of the collecting surface with respect to the magnetic field is small. The correct analytical expression is applied to experimental data from Gundestrup probe measurements with round collectors on the CASTOR tokamak. The analysis shows that for these measurements the error introduced by using the expression for flat collectors remains negligible, supporting our former use of the model for flat collectors. A new, advanced Gundestrup-like probe design and the motivation for the choice of flat collectors are presented. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

European Momentum Strategies, Information Diffusion, and Investor Conservatism
EUROPEAN FINANCIAL MANAGEMENT, Issue 3 2005. John A. Doukas
G1; G11; G14
Abstract: In this paper we conduct an out-of-sample test of two behavioural theories that have been proposed to explain momentum in stock returns. We test the gradual-information-diffusion model of Hong and Stein (1999) and the investor conservatism bias model of Barberis et al. (1998) in a sample of 13 European stock markets during the period 1988 to 2001. These two models predict that momentum comes from (i) the gradual dissemination of firm-specific information and (ii) investors' failure to update their beliefs sufficiently when they observe new public information. The findings of this study are consistent with the predictions of the behavioural models of Hong and Stein (1999) and Barberis et al. (1998).
The evidence shows that momentum is the result of the gradual diffusion of private information and of investors' psychological conservatism, reflected in the systematic errors they make in forming earnings expectations by not updating them adequately relative to their prior beliefs and by undervaluing the statistical weight of new information. [source]

Model complexity versus scatter in fatigue
FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 11 2004. T. Svensson
Abstract: Fatigue assessment in industry is often based on simple empirical models, such as the Wöhler curve or Paris' law. In contrast, fatigue research to a great extent works with very complex models, far from engineering practice. One explanation for this discrepancy is that the scatter in service fatigue obscures many of the subtle phenomena that can be studied in a laboratory. Here we use a statistical theory for stepwise regression to investigate the role of scatter in the choice of model complexity in fatigue. The results suggest that the amount of complexity used in different design concepts reflects the appreciated knowledge about input parameters. The analysis also points out that even qualitative knowledge about the neglected complexity may be important in order to avoid systematic errors. [source]

The design of an optimal filter for monthly GRACE gravity models
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2008. R. Klees
Summary: Most applications of the publicly released Gravity Recovery and Climate Experiment (GRACE) monthly gravity field models require the application of a spatial filter to help suppress noise and other systematic errors present in the data. The most common approach makes use of a simple Gaussian averaging process, which is often combined with a 'destriping' technique in which coefficient correlations within a given degree are removed. As brute-force methods, neither of these techniques takes into consideration the statistical information from the gravity solution itself and, while they perform well overall, they can often end up removing more signal than necessary. Other optimal filters have been proposed in the literature; however, none have attempted to make full use of all the information available from the monthly solutions. By examining the underlying principles of filter design, a filter has been developed that incorporates the noise and full signal variance-covariance matrix to tailor the filter to the error characteristics of a particular monthly solution. The filter is both anisotropic and non-symmetric, meaning it can accommodate noise of an arbitrary shape, such as the characteristic stripes. The filter minimizes the mean-square error and, in this sense, can be considered the most optimal filter possible. Through both simulated and real data scenarios, this improved filter is shown to preserve the highest amount of gravity signal when compared with other standard techniques, while simultaneously minimizing leakage effects and producing smooth solutions in areas of low signal. [source]
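For context, the "simple Gaussian averaging" that the abstract contrasts with its optimal filter reduces to multiplying each spherical-harmonic coefficient by a degree-dependent weight. The sketch below builds such weights by numerically projecting a Gaussian bell of a given halfwidth radius onto Legendre polynomials; the kernel form follows the commonly used Jekeli/Wahr-style formulation and should be read as an assumed illustration, not the paper's filter.

```python
# Degree-dependent Gaussian smoothing weights for spherical-harmonic coefficients.
import numpy as np
from scipy.special import eval_legendre
from scipy.integrate import trapezoid

def gaussian_degree_weights(lmax, radius_km, earth_radius_km=6371.0):
    # Gaussian bell on the sphere with half-response at angular radius radius_km.
    b = np.log(2.0) / (1.0 - np.cos(radius_km / earth_radius_km))
    gamma = np.linspace(0.0, np.pi, 20001)
    kernel = b / (2.0 * np.pi) * np.exp(-b * (1.0 - np.cos(gamma))) / (1.0 - np.exp(-2.0 * b))
    weights = np.empty(lmax + 1)
    for l in range(lmax + 1):
        integrand = 2.0 * np.pi * kernel * eval_legendre(l, np.cos(gamma)) * np.sin(gamma)
        weights[l] = trapezoid(integrand, gamma)   # ~1 at l = 0, decaying with degree
    return weights

w = gaussian_degree_weights(lmax=60, radius_km=400.0)
print(np.round(w[[0, 10, 30, 60]], 3))
# Applying the filter: multiply each Stokes coefficient C_lm, S_lm by w[l];
# e.g. clm_smoothed = w[:, None] * clm for an array indexed by (degree, order).
```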
Rate coefficients for the reactions of OH with n-propanol and iso-propanol between 237 and 376 K
INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 1 2010. B. Rajakumar
The rate coefficients for the reactions OH + CH3CH2CH2OH → products (k1) and OH + CH3CH(OH)CH3 → products (k2) were measured by the pulsed-laser photolysis / laser-induced fluorescence technique between 237 and 376 K. Arrhenius expressions for k1 and k2 are as follows: k1 = (6.2 ± 0.8) × 10^-12 exp[-(10 ± 30)/T] cm^3 molecule^-1 s^-1, with k1(298 K) = (5.90 ± 0.56) × 10^-12 cm^3 molecule^-1 s^-1, and k2 = (3.2 ± 0.3) × 10^-12 exp[(150 ± 20)/T] cm^3 molecule^-1 s^-1, with k2(298 K) = (5.22 ± 0.46) × 10^-12 cm^3 molecule^-1 s^-1. The quoted uncertainties are at the 95% confidence level and include estimated systematic errors. The results are compared with those from previous measurements, and rate coefficient expressions for atmospheric modeling are recommended. The absorption cross sections for n-propanol and iso-propanol at 184.9 nm were measured to be (8.89 ± 0.44) × 10^-19 and (1.90 ± 0.10) × 10^-18 cm^2 molecule^-1, respectively. The atmospheric implications of the degradation of n-propanol and iso-propanol are discussed. © 2009 Wiley Periodicals, Inc. Int J Chem Kinet 42: 10-24, 2010 [source]

Kinetics of the gas-phase reactions of cyclo-CF2CFXCHXCHX– (X = H, F, Cl) with OH radicals at 253-328 K
INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 8 2009. L. Chen
Rate constants were determined for the reactions of OH radicals with the halogenated cyclobutanes cyclo-CF2CF2CHFCH2 (k1), trans-cyclo-CF2CF2CHClCHF (k2), cyclo-CF2CFClCH2CH2 (k3), trans-cyclo-CF2CFClCHClCH2 (k4), and cis-cyclo-CF2CFClCHClCH2 (k5) by using a relative rate method. OH radicals were prepared by photolysis of ozone at a UV wavelength (254 nm) in 200 Torr of a sample/reference/H2O/O3/O2/He gas mixture in an 11.5-dm^3 temperature-controlled reaction chamber. Rate constants of k1 = (5.52 ± 1.32) × 10^-13 exp[-(1050 ± 70)/T], k2 = (3.37 ± 0.88) × 10^-13 exp[-(850 ± 80)/T], k3 = (9.54 ± 4.34) × 10^-13 exp[-(1000 ± 140)/T], k4 = (5.47 ± 0.90) × 10^-13 exp[-(720 ± 50)/T], and k5 = (5.21 ± 0.88) × 10^-13 exp[-(630 ± 50)/T] cm^3 molecule^-1 s^-1 were obtained at 253-328 K. The errors reported are ±2 standard deviations and represent precision only. Potential systematic errors associated with uncertainties in the reference rate constants could add an additional 10%-15% to the uncertainty of k1-k5. The reactivity trends of these OH radical reactions were analyzed by using a collision theory-based kinetic equation. The rate constants k1-k5, as well as those of related halogenated cyclobutane analogues, were found to be strongly correlated with their C-H bond dissociation enthalpies. We consider the dominant tropospheric loss process for the halogenated cyclobutanes studied here to be reaction with OH radicals, and atmospheric lifetimes of 3.2, 2.5, 1.5, 0.9, and 0.7 years are calculated for cyclo-CF2CF2CHFCH2, trans-cyclo-CF2CF2CHClCHF, cyclo-CF2CFClCH2CH2, trans-cyclo-CF2CFClCHClCH2, and cis-cyclo-CF2CFClCHClCH2, respectively, by scaling from the lifetime of CH3CCl3. © 2009 Wiley Periodicals, Inc. Int J Chem Kinet 41: 532-542, 2009 [source]
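The lifetime estimates quoted in the two kinetics abstracts above follow from evaluating the Arrhenius expression and scaling to methyl chloroform (CH3CCl3). A minimal sketch, in which the CH3CCl3 reference values (its rate coefficient at 272 K and its roughly six-year OH-only lifetime) are typical literature numbers assumed here for illustration:

```python
# Sketch: evaluate an Arrhenius expression and estimate an OH-driven tropospheric
# lifetime by scaling to CH3CCl3, using k1 from the Chen et al. abstract above.
import numpy as np

def k_arrhenius(A, E_over_R, T):
    """k(T) = A * exp(-E_over_R / T), in cm^3 molecule^-1 s^-1."""
    return A * np.exp(-E_over_R / T)

A1, EoR1 = 5.52e-13, 1050.0                      # k1 parameters from the abstract
print(f"k1(298 K) = {k_arrhenius(A1, EoR1, 298.0):.2e} cm3 molecule-1 s-1")

# Lifetime by scaling:  tau_X = tau_MCF * k_MCF(272 K) / k_X(272 K)
tau_mcf_oh_years = 6.0      # assumed OH-only lifetime of CH3CCl3
k_mcf_272 = 6.0e-15         # assumed CH3CCl3 + OH rate coefficient at 272 K
tau_x = tau_mcf_oh_years * k_mcf_272 / k_arrhenius(A1, EoR1, 272.0)
print(f"estimated lifetime ~ {tau_x:.1f} years (the abstract reports 3.2 years)")
```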