Significant Errors (significant + error)

Selected Abstracts


Study of spectral properties of bis(1,10-phenanthroline) silicon hexacoordinated complexes by density functional theory

INTERNATIONAL JOURNAL OF QUANTUM CHEMISTRY, Issue 14 2008
Irina Irgibaeva
Abstract Applying an ab initio method, the structures and UV-vis spectra of the hexacoordinated silicon compound [Si(phen)2(OMe)2]I2 and its nitrate analogue [Si(phen)2(OMe)2](NO3)2 were calculated. On the basis of a comparison of theoretical and experimental data (1H NMR and electronic absorption spectra), it was shown that the theoretical method we used, B3LYP/LanL2DZ, describes bis(1,10-phenanthroline) silicon complexes reasonably well. On the basis of TDDFT calculations at the B3LYP/LanL2DZ level, it is predicted that the [Si(phen)2(OMe)2]I2 compound has a charge-transfer band in its UV-vis spectrum at 557 nm, associated with electron transfer from I⁻ to the phen ligand, whereas [Si(phen)2(OMe)2](NO3)2 does not. The absence of this band in the observed spectrum of a methanol solution (10⁻⁵ M) of the [Si(phen)2(OMe)2]I2 complex is explained by the dissociation of the complex into the ions [Si(phen)2(OMe)2]2+ and 2I⁻. We assume that this charge-transfer band corresponds to the peak at 400 nm in the UV-vis spectrum of a [Si(phen)2(OMe)2]I2 thin film. The absence of such bands in the UV-vis spectrum of a nitrate [Si(phen)2(OMe)2](NO3)2 film is explained by the n → π* nature of these transitions. The significant error in the predicted charge-transfer band energy is due to the known tendency of the TDDFT method to underestimate charge-transfer excitation energies. © 2008 Wiley Periodicals, Inc. Int J Quantum Chem, 2008 [source]
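As a rough sketch of the workflow the abstract describes (ground-state B3LYP followed by TDDFT excitations), the snippet below uses PySCF as an assumed stand-in for the authors' software, and a placeholder water geometry instead of the actual silicon complex:

```python
# Minimal TDDFT sketch (assumptions: PySCF as the code, water as a
# stand-in geometry; the abstract's own calculations were done elsewhere).
from pyscf import gto, dft, tddft

mol = gto.M(
    atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",  # placeholder geometry
    basis="lanl2dz",  # basis name as in the abstract; substitute if unavailable
)

mf = dft.RKS(mol)
mf.xc = "b3lyp"
mf.kernel()                      # ground-state SCF

td = tddft.TDDFT(mf)
td.nstates = 5                   # lowest five excitations
td.kernel()

for i, e in enumerate(td.e, 1):  # excitation energies are in Hartree
    print(f"state {i}: {e * 27.2114:.2f} eV")
```

Charge-transfer states computed this way with a standard hybrid functional are expected to come out too low in energy, which is the error mode the abstract points to.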


Dyslexia and music: measuring musical timing skills

DYSLEXIA, Issue 1 2003
Katie Overy
Abstract Over the last few decades, a growing amount of research has suggested that dyslexics have particular difficulties with skills involving accurate or rapid timing, including musical timing skills. It has been hypothesised that music training may be able to remediate such timing difficulties, and have a positive effect on fundamental perceptual skills that are important in the development of language and literacy skills (Overy, 2000). In order to explore this hypothesis further, the nature and extent of dyslexics' musical difficulties need to be examined in more detail. In the present study, a collection of musical aptitude tests (MATs) was designed specifically for dyslexic children, in order to distinguish between a variety of musical skills and sub-skills. 15 dyslexic children (age 7–11, mean age 9.0) and 11 control children (age 7–10, mean age 8.9) were tested on the MATs, and their scores were compared. Results showed that the dyslexic group scored higher than the control group on 3 tests of pitch skills (possibly attributable to slightly greater musical experience), but lower than the control group on 7 out of 9 tests of timing skills. Particular difficulties were noted on one of the tests involving rapid temporal processing, in which a subgroup of 5 of the dyslexic children (33%) (mean age 8.4) was found to account for all the significant error. Also, an interesting correlation was found between spelling ability and the skill of tapping out the rhythm of a song, both of which involve the skill of syllable segmentation. These results support suggestions that timing is an area of difficulty for dyslexic children, and suggest that rhythm skills and rapid skills may need particular attention in any form of musical training with dyslexics. Copyright © 2003 John Wiley & Sons, Ltd. [source]


A new solution for a partially penetrating constant-rate pumping well with a finite-thickness skin

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 15 2007
Pin-Yuan Chiu
Abstract A mathematical model describing constant-rate pumping is developed for a partially penetrating well in a heterogeneous aquifer system. The Laplace-domain solution for the model is derived by applying the Laplace transform with respect to time and the finite Fourier cosine transform with respect to the vertical co-ordinate. This solution is used to produce curves of dimensionless drawdown versus dimensionless time to investigate the influences of the patch zone and of well partial penetration on the drawdown distributions. The results show that the dimensionless drawdown depends on the hydraulic properties of the patch and formation zones. The effect of a partially penetrating well on the drawdown is larger with a negative patch zone than with a positive patch zone. For a single-zone aquifer case, neglecting the effect of the well radius gives significant error in estimating dimensionless drawdown, especially when the dimensionless distance is small. The dimensionless drawdown curves for cases with and without the well radius approach the Hantush equation (Advances in Hydroscience. Academic Press: New York, 1964) at large time and/or large distance from the test well. Copyright © 2007 John Wiley & Sons, Ltd. [source]
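Laplace-domain solutions of this kind are usually inverted numerically to get drawdown curves. A minimal sketch of the Gaver–Stehfest inversion, applied to the classical fully penetrating confined-aquifer (Theis) solution as a stand-in for the paper's more elaborate one:

```python
import numpy as np
from math import factorial
from scipy.special import k0

def stehfest_coeffs(N):
    # Gaver-Stehfest weights V_k for an even number of terms N
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j)
                     * factorial(j - 1) * factorial(k - j)
                     * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def laplace_invert(fbar, t, N=12):
    # approximate f(t) from its Laplace transform fbar(p)
    V = stehfest_coeffs(N)
    a = np.log(2.0) / t
    return a * sum(Vk * fbar((k + 1) * a) for k, Vk in enumerate(V))

# Dimensionless drawdown for the classical fully penetrating well
# (Theis problem), a stand-in for the paper's solution:
# hbar_D(p) = K0(r_D * sqrt(p)) / p, here with r_D = 1.
h_D = laplace_invert(lambda p: k0(np.sqrt(p)) / p, t=10.0)
print(f"dimensionless drawdown at t_D = 10: {h_D:.4f}")
```

The result can be checked against (1/2)·W(1/(4 t_D)), the Theis well function, which is about 1.57 at t_D = 10.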


Hydrodynamic investigation of USP dissolution test apparatus II

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 9 2007
Ge Bai
Abstract The USP Apparatus II is the device commonly used to conduct dissolution testing in the pharmaceutical industry. Despite its widespread use, dissolution testing remains susceptible to significant error and test failures, and limited information is available on the hydrodynamics of this apparatus. In this work, laser-Doppler velocimetry (LDV) and computational fluid dynamics (CFD) were used, respectively, to experimentally map and computationally predict the velocity distribution inside a standard USP Apparatus II under the typical operating conditions mandated by the dissolution test procedure. The flow in the apparatus is strongly dominated by the tangential component of the velocity. Secondary flows consist of an upper and lower recirculation loop in the vertical plane, above and below the impeller, respectively. A low recirculation zone was observed in the lower part of the hemispherical vessel bottom where the tablet dissolution process takes place. The radial and axial velocities in the region just below the impeller were found to be very small. This is the most critical region of the apparatus since the dissolving tablet will likely be at this location during the dissolution test. The velocities in this region change significantly over short distances along the vessel bottom. This implies that small variations in the location of the tablet on the vessel bottom caused by the randomness of the tablet descent through the liquid are likely to result in significantly different velocities and velocity gradients near the tablet. This is likely to introduce variability in the test. © 2007 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 96: 2327–2349, 2007 [source]


A model for saturation correction in meteor photometry

MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 1 2010
Jean-Baptiste Kikwaya
ABSTRACT In order to correct for the effect of saturation on photometric measurements of meteors, we have developed a numerical model for saturation and apply it to data gathered using two generation III image intensified video systems on two nights (2008 October 31 and 2008 November 6). The two cameras were pointed in the same direction, and the aperture of one camera was set two stops below the aperture of the other. With these conditions, some meteors saturated one camera but not the other (group I); some saturated both cameras (group II); and some did not saturate either of them (group III). A model of meteor saturation has been developed which uses the image background value, angular meteor speed and the lateral width of the meteor image to simulate the true and saturated light curve of meteors. For group I meteors, we computed a saturation correction and applied it to the saturated light curve. We then compared the corrected saturated curve to the unsaturated curve from the other camera to validate the model. For group II meteors, a saturation correction is calculated and applied to both observed light curves, which have different degrees of saturation, and the corrected curves are compared. We collected 516 meteors, of which 30 were of group I, and seven of group II. For meteors in group I, an average residual of less than 0.4 mag was found between the observed unsaturated light curve and the model-corrected saturated light curve. For meteors in group II, the average residual between the two corrected light curves was 0.3 mag. For our data, the saturation correction goes from 0.5 to 1.9 mag for meteors in group I, and 1.2 to 2.5 mag for meteors in group II. Based on the agreement between the observed and modelled light curves (less than 0.4 mag over all meteors of all groups), we conclude that our model for saturation correction is valid. It can be used to extract the true luminosity of a saturated meteor, which is necessary to calculate photometric mass. Our model also demonstrates that fixed corrections to saturated meteor photometry, not accounting for background levels or angular velocities, do introduce significant error to meteor photometric analyses. [source]
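A rough numerical sketch of the saturation idea (not the authors' model, which also folds in background level, angular speed and trail width): clip a synthetic light curve at an assumed detector saturation level and compute the magnitude correction between true and clipped flux.

```python
import numpy as np

# Toy saturation sketch; all numbers below are illustrative assumptions.
t = np.linspace(-1.0, 1.0, 2001)              # time (s)
true_flux = 4000.0 * np.exp(-t**2 / 0.05)     # Gaussian light curve (counts)
saturation_level = 1500.0                     # assumed detector full-well

observed_flux = np.minimum(true_flux, saturation_level)  # clipped curve

def total_mag(flux):
    # integrated brightness in magnitudes (arbitrary zero point)
    return -2.5 * np.log10(np.sum(flux) * (t[1] - t[0]))

correction = total_mag(observed_flux) - total_mag(true_flux)
print(f"saturation correction: {correction:.2f} mag")
```

Because the clipped fraction depends on how bright the peak is relative to the saturation level, a single fixed correction cannot work for all meteors, which is the paper's closing point.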


Strong excitonic mixing effect in asymmetric double quantum wells: On the optimization of electroabsorption modulators

PHYSICA STATUS SOLIDI (C) - CURRENT TOPICS IN SOLID STATE PHYSICS, Issue 7 2008
Dong Kwon Kim
Abstract We investigate the mixing of excitons originating in different subband pairs in asymmetric double quantum wells (ADQWs) in the range of electric field where the two lowest exciton states anticross. This excitonic mixing is mainly attributed to the Coulomb interactions between subbands and to the valence-subband nonparabolicity. Results show that excluding the excitonic mixing effect results in significant error in both the energies and the oscillator strengths of the excitons in an ADQW with a thick barrier (3 nm). Even in an ADQW with a fairly thin barrier (1.2 nm), the error in the oscillator strengths can be substantial, although the errors in the computed energies may be tolerable. We find that including the mixing of excitons is indispensable in optimizing the structures of asymmetric double quantum well electroabsorption modulators. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]
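The anticrossing behaviour the abstract describes can be illustrated with a minimal two-level model (not the paper's full excitonic Hamiltonian): two bare exciton states tuned through resonance by the field and mixed by a constant coupling. All parameter values are assumptions.

```python
import numpy as np

V = 2.0e-3                          # inter-exciton coupling (eV), assumed
F = np.linspace(0.0, 100.0, 201)    # electric field (kV/cm)
E1 = 1.450 - 1.0e-4 * F             # bare exciton 1 shifts down with field
E2 = 1.445 + 0.0 * F                # bare exciton 2 roughly field-independent

for f, e1, e2 in zip(F[::50], E1[::50], E2[::50]):
    H = np.array([[e1, V], [V, e2]])
    w, v = np.linalg.eigh(H)        # mixed energies and eigenvectors
    # v[1, 0]**2 is the admixture of the second bare exciton in the
    # lower branch: 0.5 at resonance, small far from it
    print(f"F={f:5.1f} kV/cm  E = {w[0]:.4f}, {w[1]:.4f} eV  "
          f"mixing = {v[1, 0]**2:.2f}")
```

Neglecting V (the "no mixing" approximation) lets the two levels cross and assigns the oscillator strength entirely to one bare state, which is exactly where the errors quoted above come from.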


Improved temperature response functions for models of Rubisco-limited photosynthesis

PLANT CELL & ENVIRONMENT, Issue 2 2001
C. J. Bernacchi
ABSTRACT Predicting the environmental responses of leaf photosynthesis is central to many models of changes in the future global carbon cycle and terrestrial biosphere. The steady-state biochemical model of C3 photosynthesis of Farquhar et al. (Planta 149, 78–90, 1980) provides a basis for these larger scale predictions; but a weakness in the application of the model as currently parameterized is the inability to accurately predict carbon assimilation over the range of temperatures at which significant photosynthesis occurs in the natural environment. The temperature functions used in this model have been based on in vitro measurements made over a limited temperature range and require several assumptions about in vivo conditions. Since photosynthetic rates are often Rubisco-limited (ribulose-1,5-bisphosphate carboxylase/oxygenase) under natural steady-state conditions, inaccuracies in the functions predicting Rubisco kinetic properties at different temperatures may cause significant error. In this study, transgenic tobacco containing only 10% of the normal level of Rubisco was used to measure Rubisco-limited photosynthesis over a large range of CO2 concentrations. From the responses of the rate of CO2 assimilation over a wide range of temperatures and of CO2 and O2 concentrations, the temperature functions of Rubisco kinetic properties were estimated in vivo. These differed substantially from previously published functions. These new functions were then used to predict photosynthesis in lemon and were found to faithfully mimic the observed pattern of temperature response. There was also a close correspondence with published C3 photosynthesis temperature responses. The results represent an improved ability to model leaf photosynthesis over the wide range of temperatures (10–40 °C) necessary for predicting carbon uptake by terrestrial C3 systems. [source]
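A minimal sketch of how such temperature functions plug into the Rubisco-limited Farquhar equation. The exponential (Arrhenius-type) form is standard; the parameter values below are illustrative placeholders of the kind the study re-estimated in vivo, not the paper's fitted numbers.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def arrhenius(k25, Ha, T_C):
    # scale a parameter from its 25 degC value with activation energy Ha
    T = T_C + 273.15
    return k25 * np.exp(Ha * (T - 298.15) / (298.15 * R * T))

def rubisco_limited_A(Ci, T_C, O=210.0):
    # Farquhar-model Rubisco-limited assimilation (umol m-2 s-1).
    # All constants are assumed, order-of-magnitude values.
    Vcmax = arrhenius(80.0, 65_000.0, T_C)    # umol m-2 s-1
    Kc    = arrhenius(270.0, 80_000.0, T_C)   # ubar
    Ko    = arrhenius(165.0, 36_000.0, T_C)   # mbar
    Gstar = arrhenius(37.0, 37_000.0, T_C)    # ubar, CO2 compensation point
    return Vcmax * (Ci - Gstar) / (Ci + Kc * (1.0 + O / Ko))

for T in (10, 25, 40):
    print(f"T = {T:2d} degC  A = {rubisco_limited_A(Ci=250.0, T_C=T):5.1f}")
```

Because Kc, Ko and Γ* each carry their own temperature dependence, small errors in any one activation energy propagate into the predicted assimilation across the whole 10–40 °C range, which is why re-estimating them in vivo mattered.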


Uncertainties in the oxygen isotopic composition of barium sulfate induced by coprecipitation of nitrate

RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 19 2008
Greg Michalski
Coprecipitation of nitrate and sulfate by barium has probably resulted in significant error in numerous studies dealing with the oxygen isotopic composition of natural sulfates using chemical/thermal conversion of BaSO4 and analysis by isotope ratio mass spectrometry. In solutions where nitrate/sulfate molar ratios are above 2, the amount of nitrate coprecipitated with BaSO4 reaches a maximum of approximately 7% and decreases roughly linearly as the molar ratio decreases. The fraction of coprecipitated nitrate appears to increase with decreasing pH and is also affected by the nature of the cations in the precipitating solution. The size of the oxygen isotope artifact in sulfate depends both on the amount of coprecipitated nitrate and on the δ18O and Δ17O values of the nitrate, both of which can be highly variable. The oxygen isotopic composition of sulfate extracted from atmospheric aerosols or rain waters is probably severely biased because photochemical nitrate is usually also present, and it is highly enriched in 18O (δ18O ≈ 50–90‰) and has a large mass-independent isotopic composition (Δ17O ≈ 20–32‰). The sulfate δ18O error can be 2–5‰, with Δ17O artifacts reaching as high as 4.0‰. Copyright © 2008 John Wiley & Sons, Ltd. [source]
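A back-of-envelope mass balance reproduces the size of the quoted artifact. The numbers below are assumptions chosen within the ranges stated in the abstract.

```python
# Oxygen-mole-weighted mixing of sulfate with coprecipitated nitrate
# (a sketch; f_nitrate and the delta values are assumed, within the
# ranges quoted in the abstract).
f_nitrate    = 0.05     # mole fraction of precipitate that is nitrate (<= ~7%)
d18O_sulfate = 10.0     # per mil, true sulfate value (assumed)
d18O_nitrate = 70.0     # per mil, photochemical nitrate (50-90 per mil)

# weight by moles of oxygen per mole of salt: NO3 carries 3, SO4 carries 4
w = 3 * f_nitrate / (3 * f_nitrate + 4 * (1 - f_nitrate))
d18O_measured = w * d18O_nitrate + (1 - w) * d18O_sulfate
print(f"apparent d18O = {d18O_measured:.1f} per mil "
      f"(artifact = {d18O_measured - d18O_sulfate:+.1f} per mil)")
```

With 5% coprecipitated nitrate the apparent δ18O is shifted by roughly +2.3‰, squarely inside the 2–5‰ error range the authors report.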


Ectopic Beats in Heart Rate Variability Analysis: Effects of Editing on Time and Frequency Domain Measures

ANNALS OF NONINVASIVE ELECTROCARDIOLOGY, Issue 1 2001
Mirja A. Salo M.Sc.
Background: Various methods can be used to edit biological and technical artefacts in heart rate variability (HRV) analysis, but there is relatively little information on the effects of such editing methods on HRV. Methods: The effects of editing on HRV analysis were studied using R-R interval data from 10 healthy subjects and 10 patients with a previous myocardial infarction (MI). R-R interval tachograms of verified sinus beats were analyzed from short-term (~5 min) and long-term (~24 hour) recordings by eliminating different amounts of real R-R intervals. Three editing methods were applied to these segments: (1) interpolation of degree zero, (2) interpolation of degree one, and (3) deletion without replacement. Results: In time domain analysis of short-term data, the standard deviation of normal-to-normal intervals (SDANN) was least affected by editing, and 30–50% of the data could be edited by all three methods without a significant error (<5%). In the frequency domain analysis, the method of editing resulted in remarkably different changes and errors for both the high-frequency (HF) and the low-frequency (LF) spectral components. The editing methods also yielded different results in healthy subjects and MI patients. In 24-hour HRV analysis, up to 50% of the data could be edited by all methods without an error larger than 5% in the analysis of the standard deviation of normal-to-normal intervals (SDNN). Both interpolation methods also performed well in the editing of the long-term power spectral components for 24-hour data, but with the deletion method, only 5% of the data could be edited without a significant error. Conclusions: The amount and type of editing of R-R interval data have remarkably different effects on various HRV indices. There is no universal method for editing ectopic beats that could be used in both the time-domain and the frequency-domain analysis of HRV. A.N.E. 2001;6(1):5–17 [source]
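The three editing methods are easy to state in code. A minimal sketch on a synthetic tachogram with one ectopic beat (the data, indices, and the degree-zero choice of a local mean are illustrative assumptions):

```python
import numpy as np

rr = np.array([800., 810., 795., 420., 1180., 805., 790., 800.])  # ms
bad = [3, 4]        # ectopic beat: short R-R plus compensatory pause

# (1) interpolation of degree zero: replace with a constant (local mean)
rr_deg0 = rr.copy()
rr_deg0[bad] = rr[[2, 5]].mean()

# (2) interpolation of degree one: linear interpolation across the gap
rr_deg1 = rr.copy()
rr_deg1[bad] = np.interp(bad, [2, 5], rr[[2, 5]])

# (3) deletion without replacement: simply drop the edited intervals
rr_del = np.delete(rr, bad)

for name, x in [("degree 0", rr_deg0), ("degree 1", rr_deg1),
                ("deletion", rr_del)]:
    print(f"{name:9s}  SD = {x.std(ddof=1):6.1f} ms  n = {len(x)}")
```

Deletion shortens the series and breaks its even time base, which is why it degrades spectral (frequency-domain) measures far sooner than the two interpolation methods.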


Influence of Ramberg–Osgood fitting on the determination of plastic displacement rates in creep crack growth testing

FATIGUE & FRACTURE OF ENGINEERING MATERIALS AND STRUCTURES, Issue 4 2007
NAM-SU HUH
ABSTRACT This paper investigates the effect of Ramberg–Osgood (R-O) fitting procedures on plastic displacement rate estimates in creep crack growth testing, via detailed two-dimensional and three-dimensional finite-element analyses of the standard compact tension specimen. Four different R-O fitting procedures are considered: (i) fitting the entire true stress–strain data up to the ultimate tensile strength, (ii) fitting the true stress–strain data from 0.1% strain to 0.8 of the true ultimate strain, (iii) fitting the true stress–strain data only up to 5% strain and (iv) fitting the engineering stress–strain data. It is found that the first two fitting procedures can produce significant errors in plastic displacement rate estimates. The last two procedures, on the other hand, provide reasonably accurate plastic displacement rates and thus should be recommended in creep crack growth testing. Several advantages of fitting the engineering stress–strain data over fitting the true stress–strain data only up to 5% strain are discussed. [source]
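A minimal sketch of procedure (iii), fitting an R-O curve only to data below 5% strain. The R-O form used and the synthetic "data" are illustrative assumptions, not the paper's material model.

```python
import numpy as np
from scipy.optimize import curve_fit

E, s0 = 200e3, 300.0                      # Young's modulus, ref. stress (MPa)

def ramberg_osgood(stress, alpha, n):
    # strain = elastic part + alpha*(s0/E)*(stress/s0)**n  (assumed form)
    return stress / E + alpha * (s0 / E) * (stress / s0) ** n

# synthetic tensile data with known parameters plus a little noise
stress = np.linspace(10.0, 520.0, 60)
strain = ramberg_osgood(stress, alpha=1.2, n=8.0)
strain += np.random.default_rng(0).normal(0.0, 1e-5, strain.size)

mask = strain <= 0.05                     # procedure (iii): fit up to 5% only
popt, _ = curve_fit(ramberg_osgood, stress[mask], strain[mask], p0=(1.0, 5.0))
print(f"fitted alpha = {popt[0]:.2f}, n = {popt[1]:.2f}")
```

Changing `mask` to include the whole curve mimics procedure (i); because the power law cannot track both the small-strain and large-strain regimes at once, the fitted exponent shifts, which feeds directly into the displacement-rate errors discussed above.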


A study of ground-structure interaction in dynamic plate load testing

INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 12 2002
Bojan B. Guzina
Abstract A mathematical treatment is presented for the forced vertical vibration of a padded annular footing on a layered viscoelastic half-space. On assuming a depth-independent stress distribution for the interfacial buffer, the set of triple integral equations stemming from the problem is reduced to a Fredholm integral equation of the second kind. The solution method, which is tailored to capture the stress concentrations beneath footing edges, is highlighted. To cater to small-scale geophysical applications, the model is used to investigate the near-field effects of ground-loading system interaction in dynamic geotechnical and pavement testing. Numerical results indicate that the uniform-pressure assumption for the contact load between the composite disc and the ground which is customary in dynamic plate load testing may lead to significant errors in the diagnosis of subsurface soil and pavement conditions. Beyond its direct application to non-intrusive site characterization, the proposed solution can be used in the seismic analysis of a variety of structures involving annular foundation geometries. Copyright © 2002 John Wiley & Sons, Ltd. [source]


On the residue calculus evaluation of the 3-D anisotropic elastic Green's function

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN BIOMEDICAL ENGINEERING, Issue 5 2004
A.-V. Phan
Abstract An algorithm based upon the residue calculus for computing the three-dimensional anisotropic elastic Green's function and its derivatives has been presented in Sales and Gray (Comput. Structures 1998; 69:247–254). It has been shown that the algorithm runs three to four times faster than the standard Wilson–Cruse interpolation scheme. However, the main concern with the Sales–Gray algorithm is its numerical instability, which could lead to significant errors due to the existence of multiple poles of the residue. This paper proposes a remedy for the problem by adding the capability to evaluate the Green's function in the case of multiple poles of the residue. Further, an improved numerical implementation based on the use of double-subscript-notation elastic constants in determining the Christoffel tensor is also at issue. Copyright © 2004 John Wiley & Sons, Ltd. [source]
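The instability near multiple poles can be shown with a tiny example (illustrative only; the paper treats genuine multiple poles of the Green's function analytically). For f(z) = exp(z)/((z − a)(z − b)), the sum of the two residues is (e^a − e^b)/(a − b), which tends to e^a as b → a, but the term-by-term sum suffers catastrophic cancellation in floating point:

```python
import numpy as np

a = 1.0
for eps in (1e-4, 1e-8, 1e-12, 1e-15):
    b = a + eps
    # two huge residue terms of opposite sign, ~1/eps each
    naive = np.exp(a) / (a - b) + np.exp(b) / (b - a)
    # the same quantity rearranged to avoid cancellation
    stable = np.exp(a) * np.expm1(eps) / eps
    print(f"eps = {eps:.0e}  naive = {naive:.10f}  stable = {stable:.10f}")
```

As the pole separation shrinks toward machine precision, the naive sum loses digit after digit while the rearranged form stays at e ≈ 2.718, which is the kind of failure mode the proposed remedy addresses.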


Accuracy of self-reported weight and height: Relationship with eating psychopathology among young women

INTERNATIONAL JOURNAL OF EATING DISORDERS, Issue 4 2009
Caroline Meyer PhD
Abstract Objective: Self-reported height and weight data are commonly reported within eating disorders research. The aims of this study were to determine the accuracy of self-reported height and weight and to establish whether that accuracy is associated with levels of eating psychopathology among a group of young nonclinical women. Method: One hundred and four women were asked to report their own height and weight. They then completed the Eating Disorders Examination-Questionnaire. Finally, they were weighed and their height was measured in a standardized manner. Accuracy scores for height and weight were calculated by subtracting the actual values from the self-reported ones. Results: Overall, the women overestimated their heights and underestimated their weights, leading to significant errors in body mass index where self-report is used. Those women with high eating concerns were likely to overestimate their weight, whereas those with high weight concerns were more likely to underestimate it. Discussion: These data show that self-reports of height and weight are inaccurate in a way that skews any research that depends on them. The errors are influenced by eating psychopathology. These findings highlight the importance of obtaining objective height and weight data, particularly when comparing those data with those of patients with eating disorders. © 2008 by Wiley Periodicals, Inc. Int J Eat Disord 2009 [source]


Minimizing errors in identifying Lévy flight behaviour of organisms

JOURNAL OF ANIMAL ECOLOGY, Issue 2 2007
DAVID W. SIMS
Summary
1. Lévy flights are specialized random walks with fundamental properties such as superdiffusivity and scale invariance that have recently been applied in optimal foraging theory. Lévy flights have movement lengths chosen from a probability distribution with a power-law tail, which theoretically increases the chances of a forager encountering new prey patches and may represent an optimal solution for foraging across complex, natural habitats.
2. An increasing number of studies are detecting Lévy behaviour in diverse organisms such as microbes, insects, birds, and mammals including humans. A principal method for detecting Lévy flight is whether the exponent (µ) of the power-law distribution of movement lengths falls within the range 1 < µ ≤ 3. The exponent can be determined from the histogram of frequency vs. movement (step) lengths, but different plotting methods have been used to derive the Lévy exponent across different studies.
3. Here we investigate using simulations how different plotting methods influence the µ-value and show that the power-law plotting method based on 2^k (logarithmic) binning with normalization prior to log transformation of both axes yields low error (1.4%) in identifying Lévy flights. Furthermore, increasing sample size reduced variation about the recovered values of µ, for example by 83% as sample number increased from n = 50 up to 5000.
4. Simple log transformation of the axes of the histogram of frequency vs. step length underestimated µ by c. 40%, whereas two other methods, 2^k (logarithmic) binning without normalization and calculation of a cumulative distribution function for the data, both estimate the regression slope as 1 − µ. Correction of the slope therefore yields an accurate Lévy exponent, with estimation errors of 1.4 and 4.5%, respectively.
5. Empirical reanalysis of data in published studies indicates that simple log transformation results in significant errors in estimating µ, which in turn affects the reliability of the biological interpretation. The potential for detecting Lévy flight motion when it is not present is minimized by the approach described. We also show that using a large number of steps in movement analysis such as this will also increase the accuracy with which optimal Lévy flight behaviour can be detected. [source]
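A minimal sketch of the favoured method, 2^k (logarithmic) binning with normalization, on step lengths drawn from a power law with a known exponent so the estimate can be checked (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
mu_true = 2.0
# numpy's pareto(a) has density ~ (1+x)**-(a+1); shifting by 1 gives
# p(l) ~ l**-mu for l >= 1 with mu = a + 1, so a = mu - 1
steps = rng.pareto(mu_true - 1.0, size=5000) + 1.0

edges = 2.0 ** np.arange(0, 12)                  # 2^k bin edges
counts, _ = np.histogram(steps, bins=edges)
widths = np.diff(edges)
density = counts / (widths * steps.size)         # normalize by bin width

centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centres
good = density > 0
slope, _ = np.polyfit(np.log10(centers[good]),
                      np.log10(density[good]), 1)
print(f"estimated mu = {-slope:.2f} (true {mu_true})")
```

Dropping the division by `widths` reproduces the un-normalized variant: the wider bins then inflate the apparent tail counts and the fitted slope comes out as 1 − µ rather than −µ, exactly the correction described in point 4.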


X-ray diffraction analysis of stacking and twin faults in f.c.c. metals: a revision and allowance for texture and non-uniform fault probabilities

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 2 2000
L. Velterop
A revision is presented of the original description by Warren [X-ray Diffraction (1969), pp. 275–298. Massachusetts: Addison-Wesley] of the intensity distribution of powder-pattern reflections from f.c.c. metal samples containing stacking and twin faults. The assumptions (in many cases unrealistic) that fault probabilities need to be very small and equal for all fault planes and that the crystallites in the sample have to be randomly oriented have been removed. To elucidate the theory, a number of examples are given, showing how stacking and twin faults change the shape and position of diffraction peaks. It is seen that significant errors may arise from Warren's assumptions, especially in the peak maximum shift. Furthermore, it is explained how to describe powder-pattern reflections from textured specimens and specimens with non-uniform fault probabilities. Finally, it is discussed how stacking- and twin-fault probabilities (and crystallite sizes) can be determined from diffraction line-profile measurements. [source]


A computational NQR study on the hydrogen-bonded lattice of cytosine-5-acetic acid

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 5 2008
Mahmoud Mirzaei
Abstract A computational study at the level of density functional theory (DFT) employing the 6-311++G** standard basis set was carried out to evaluate nuclear quadrupole resonance (NQR) spectroscopy parameters in cytosine-5-acetic acid (C5AA). Since the electric field gradient (EFG) tensors are very sensitive to the electrostatic environment at the sites of quadrupolar nuclei, the molecules most likely to interact with the target one were considered in a five-molecule model system of C5AA, generated by transforming the X-ray coordinates. The hydrogen atom positions were optimized, and two model systems of C5AA, original and H-optimized, were considered in the NQR calculations. The calculated EFG tensors at the sites of the 17O, 14N, and 2H nuclei were converted to their experimentally measurable parameters, the quadrupole coupling constants and asymmetry parameters. The evaluated NQR parameters reveal that the nuclei in the original and H-optimized systems contribute to different hydrogen bonding (HB) interactions. The comparison of calculated parameters between the optimized isolated gas-phase and crystalline monomers also shows the relationship between structural deformation and NQR parameters in C5AA. The basis set superposition error (BSSE) calculations yielded no significant errors for the employed basis set in the evaluation of NQR parameters. All the calculations were performed with the Gaussian 98 program package. © 2007 Wiley Periodicals, Inc. J Comput Chem 2008 [source]
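The conversion step the abstract mentions, from a calculated EFG tensor to a quadrupole coupling constant and asymmetry parameter, is standard and easy to sketch. The tensor below is made up; 234.9647 MHz/(barn·a.u.) is the usual conversion factor for an EFG in atomic units, and 0.02044 barn is the 14N quadrupole moment.

```python
import numpy as np

efg = np.array([[0.35, 0.02, 0.00],        # a.u.; illustrative symmetric,
                [0.02, 0.25, 0.01],        # traceless EFG tensor
                [0.00, 0.01, -0.60]])

eig = np.linalg.eigvalsh(efg)
Vxx, Vyy, Vzz = sorted(eig, key=abs)       # convention |Vzz|>=|Vyy|>=|Vxx|
Q_barn = 0.02044                           # 14N quadrupole moment (barn)

CQ_MHz = 234.9647 * Q_barn * Vzz           # quadrupole coupling constant
eta = (Vxx - Vyy) / Vzz                    # asymmetry parameter, 0..1
print(f"CQ = {CQ_MHz:.2f} MHz, eta = {eta:.3f}")
```

Hydrogen-bonded neighbours perturb the EFG at the target nucleus, so the five-molecule cluster and the isolated monomer yield different CQ and η, which is the effect the study quantifies.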


A novel method for deriving true density of pharmaceutical solids including hydrates and water-containing powders

JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 3 2004
Changquan (Calvin) Sun
Abstract True density is commonly measured using helium pycnometry. However, most water-containing powders, for example, hydrates, amorphous drugs and excipients, and most tablet formulations, release water when exposed to a dry helium atmosphere. Because the released water introduces significant errors into the measured true density, and drying alters the nature of water-containing solids, helium pycnometry is not suitable for those substances. To overcome this problem, a novel method has been developed to accurately calculate powder true density from compaction data. No drying treatment of powder samples is required. Consequently, the true density thus obtained is relevant to tableting characterization studies because no alteration to the solid is induced by drying. This method involves nonlinear regression of compaction pressure–tablet density data based on a modified Heckel equation. When true density values of water-free powders derived by this novel method were plotted against values measured using pycnometry, a regression line with slope close to unity and intercept close to zero was obtained, supporting the validity of the method. Using this new method, it was further demonstrated that helium pycnometry always overestimates the true densities of water-containing powders, for example, hydrates, microcrystalline cellulose (MCC), and tablet formulations. The calculated true densities of powders were the same for different particle shapes and sizes of each material. This further suggests that true density values calculated using this novel method are characteristic of the given materials and independent of particulate properties. © 2004 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 93:646–653, 2004 [source]
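A sketch of the regression idea: fit tablet density versus compaction pressure with true density as a free parameter. For brevity this uses the classical Heckel relation ln(1/(1 − D)) = kP + A, with D = d/ρ_true, rearranged to d(P) = ρ_true·(1 − exp(−(kP + A))); the paper itself uses a modified Heckel equation, and all numbers below are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def tablet_density(P, rho_true, k, A):
    # classical Heckel relation rearranged for tablet density d(P)
    return rho_true * (1.0 - np.exp(-(k * P + A)))

P = np.linspace(25.0, 350.0, 14)                 # compaction pressure (MPa)
rho_true_actual = 1.46                           # g/cm^3 (assumed)
d = tablet_density(P, rho_true_actual, 0.009, 0.8)
d += np.random.default_rng(2).normal(0.0, 0.003, d.size)   # measurement noise

popt, _ = curve_fit(tablet_density, P, d, p0=(1.5, 0.01, 0.5))
print(f"fitted true density: {popt[0]:.3f} g/cm^3 "
      f"(actual {rho_true_actual})")
```

Because the fit never exposes the sample to dry helium, the water content of the solid is left undisturbed, which is the method's key advantage over pycnometry.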


Improved signal spoiling in fast radial gradient-echo imaging: Applied to accurate T1 mapping and flip angle correction

MAGNETIC RESONANCE IN MEDICINE, Issue 5 2009
Wei Lin
Abstract In conventional spoiled gradient-echo imaging utilizing quadratic radio frequency (RF) spoiling, nonideal signal intensities are often generated, particularly when the repetition time is short and/or the excitation flip angle (FA) becomes large. This translates to significant errors in various quantitative applications based on T1-weighted image intensities. In this work, a novel spoiling scheme is proposed, based on random gradient moments and RF phases. This scheme results in a non-steady-state condition, but achieves the ideal mean signal intensity. In order to suppress artifacts created by the inter-TR signal variations and at the same time attain the ideal signal intensity, radial data acquisition is utilized. The proposed method achieves ideal spoiling for a wide range of T1, T2, TR, and FAs. Phantom and in vivo experiments demonstrate improved performance for T1 mapping and FA correction when compared with conventional RF spoiling methods. Magn Reson Med, 2009. © 2009 Wiley-Liss, Inc. [source]
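The two RF phase schedules being contrasted, plus the ideal fully spoiled (Ernst) signal they are meant to reproduce, can be written down compactly. The 117° quadratic increment is the value commonly used for conventional RF spoiling; the random schedule below is only a stand-in for the proposed approach, whose full method also randomizes gradient moments.

```python
import numpy as np

n = np.arange(200)
# conventional quadratic RF spoiling: phi_n = phi0 * n*(n+1)/2
phi_quadratic = np.deg2rad(117.0) * n * (n + 1) / 2.0
# proposed-style randomization (sketch): uniformly random phases per TR
phi_random = np.random.default_rng(3).uniform(0.0, 2 * np.pi, n.size)

def ernst_signal(alpha_deg, TR_ms, T1_ms):
    # ideal perfectly spoiled gradient-echo amplitude (relative to M0)
    E1 = np.exp(-TR_ms / T1_ms)
    a = np.deg2rad(alpha_deg)
    return np.sin(a) * (1.0 - E1) / (1.0 - E1 * np.cos(a))

print("first quadratic phase increments (deg):",
      np.rad2deg(np.diff(phi_quadratic[:5])) % 360.0)
print(f"ideal signal at alpha=30, TR=10 ms, T1=1000 ms: "
      f"{ernst_signal(30.0, 10.0, 1000.0):.4f}")
```

Quantitative T1 mapping inverts the Ernst expression from measured intensities, so any systematic deviation of the actual spoiled signal from it propagates directly into the T1 estimate.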


Exact integration of the stiffness matrix of an 8-node plane elastic finite element by symbolic computation

NUMERICAL METHODS FOR PARTIAL DIFFERENTIAL EQUATIONS, Issue 1 2008
L. Videla
Abstract Computer algebra systems (CAS) are powerful tools for obtaining analytical expressions for many engineering applications in both academic and industrial environments. CAS have been used in this paper to generate exact expressions for the stiffness matrix of an 8-node plane elastic finite element. The Maple software system was used to identify six basic formulas from which all the terms of the stiffness matrix could be obtained. The formulas are functions of the Cartesian coordinates of the corner nodes of the element and of the elastic parameters, Young's modulus and Poisson's ratio. Many algebraic manipulations were performed on the formulas to optimize their efficiency. The reduction in CPU time using the exact expressions, as opposed to the classical Gauss–Legendre numerical integration approach, was over 50%. In an additional study of accuracy, it was shown that the numerical approach could lead to quite significant errors as compared with the exact approach, especially as element distortion was increased. © 2007 Wiley Periodicals, Inc. Numer Methods Partial Differential Eq 2007 [source]
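The accuracy issue has a simple root: for a distorted isoparametric element the stiffness integrand contains a rational factor (from the inverse Jacobian) that low-order Gauss–Legendre quadrature cannot integrate exactly. A sketch with a stand-in integrand of that character, not an actual stiffness term:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.integrate import dblquad

f = lambda x, y: 1.0 / (3.0 + x + y)     # rational, like 1/det(J)

# 2x2 Gauss-Legendre on the reference square [-1,1]^2
pts, wts = leggauss(2)
gauss = sum(wi * wj * f(xi, yj)
            for xi, wi in zip(pts, wts)
            for yj, wj in zip(pts, wts))

# adaptive quadrature as the (numerically exact) reference value
exact, _ = dblquad(f, -1.0, 1.0, lambda x: -1.0, lambda x: 1.0)
print(f"2x2 Gauss: {gauss:.6f}  reference: {exact:.6f}  "
      f"rel. error: {abs(gauss - exact) / exact:.2e}")
```

For an undistorted rectangle the integrand degenerates to a polynomial and the fixed rule is exact; the more the element distorts, the stronger the rational part becomes, which is why the errors in the paper grow with distortion while the symbolic expressions stay exact.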


The effect of transient conditions on an equilibrium permafrost-climate model

PERMAFROST AND PERIGLACIAL PROCESSES, Issue 1 2007
Dan Riseborough
Abstract Equilibrium permafrost models assume a stationary temperature and snow-cover climate. With a variable or changing climate, short-term energy imbalances between the active layer and permafrost result in transient departures from the equilibrium condition. This study examines the effects of such variability on an equilibrium permafrost-climate model, the temperature at the top of permafrost (TTOP) model. Comparisons between numerical results and temperatures predicted by the TTOP model suggest that stationary inter-annual variability introduces an error in the top-of-permafrost temperature obtained with the equilibrium model that is larger where the permafrost temperature is close to 0°C, although multi-year averaging reduces the error to 0.1°C or less. In the presence of a warming trend, the equilibrium model prediction tracked the changing top-of-permafrost temperature until permafrost temperatures reached 0°C, after which the equilibrium model produced significant errors. Errors of up to 1°C were due to the temperature gradient through the developing talik, and depended on the warming rate and the thickness of the talik. For all warming rates, the error was largest when the permafrost table was about 4 m below the surface, with the error declining as the permafrost table fell. Copyright © 2007 John Wiley & Sons, Ltd. [source]
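For orientation, a sketch of an equilibrium TTOP-style calculation in the commonly cited Smith–Riseborough form; the formula variant and all parameter values below are assumptions for illustration, not the paper's.

```python
# TTOP combines seasonal degree-day indices of air temperature with
# n-factors (surface transfer, incl. snow) and the thawed/frozen
# conductivity ratio rk = k_t/k_f (applied to the thawing term when
# the result is negative).  All numbers are illustrative.
I_t = 1200.0                 # thawing degree-day index (degC*days)
I_f = 3000.0                 # freezing degree-day index, magnitude (degC*days)
n_t, n_f = 0.9, 0.5          # thawing / freezing n-factors
k_t, k_f = 1.0, 2.0          # thawed / frozen conductivity (W/m/K)
P = 365.0                    # annual period (days)

ttop = (n_t * (k_t / k_f) * I_t - n_f * I_f) / P
print(f"TTOP = {ttop:.2f} degC")   # negative -> permafrost can persist
```

The model is algebraic and instantaneous, so it carries no memory of past years; that is precisely why, once a talik forms and stores a transient thermal gradient, the equilibrium prediction drifts from the numerical result as described above.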


Finite element analysis of plain weave composites for flexural failure

POLYMER COMPOSITES, Issue 4 2010
Ömer Soykasap
This article presents a finite element analysis of the flexural behavior of woven composites, considering the fiber, the matrix, and their interactions. A finite element model built with the Abaqus program is developed to predict the homogenized properties of a plain-weave T300/LTM45 composite. Initially, curved beam elements are used to model each resin-infiltrated fiber bundle. Geometrically nonlinear analyses of the model with periodic boundary conditions are carried out to obtain the effective in-plane and bending properties of the composite. A statistical analysis is presented to study the stiffness variability. The flexural failure of a single-ply composite is estimated based on the homogenized material properties and is compared with previously published data. The model corrects significant errors in the stiffnesses of the composite and captures the failure behavior accurately. POLYM. COMPOS., 2010. © 2009 Society of Plastics Engineers [source]


How well can the accuracy of comparative protein structure models be predicted?

PROTEIN SCIENCE, Issue 11 2008
David Eramian
Comparative structure models are available for two orders of magnitude more protein sequences than experimentally determined structures. These models, however, suffer from two limitations that experimentally determined structures do not: They frequently contain significant errors, and their accuracy cannot be readily assessed. We have addressed the latter limitation by developing a protocol optimized specifically for predicting the Cα root-mean-squared deviation (RMSD) and native overlap (NO3.5Å) errors of a model in the absence of its native structure. In contrast to most traditional assessment scores that merely predict that one model is more accurate than others, this approach quantifies the error in an absolute sense, thus helping to determine whether or not the model is suitable for intended applications. The assessment relies on a model-specific scoring function constructed by a support vector machine. This regression optimizes the weights of up to nine features, including various sequence similarity measures and statistical potentials, extracted from a tailored training set of models unique to the model being assessed: If possible, we use similarly sized models with the same fold; otherwise, we use similarly sized models with the same secondary structure composition. This protocol predicts the RMSD and NO3.5Å errors for a diverse set of 580,317 comparative models of 6174 sequences with correlation coefficients (r) of 0.84 and 0.86, respectively, to the actual errors. This scoring function achieves the best correlation compared with 13 other tested assessment criteria, which achieved correlations ranging from 0.35 to 0.71. [source]
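A minimal sketch of the assessment idea: a support vector regressor maps per-model features (sequence identity, statistical-potential scores, and the like) to a predicted RMSD. The features, the toy relationship, and the hyperparameters below are stand-ins, not the paper's training data or settings.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.random((300, 9))            # up to nine features per model (assumed)
# toy ground truth: feature 0 plays the role of sequence identity
rmsd = 12.0 * (1.0 - X[:, 0]) + rng.normal(0.0, 0.8, 300)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.2)
svr.fit(X[:250], rmsd[:250])        # train on a tailored model set

pred = svr.predict(X[250:])         # predict absolute RMSD for new models
r = np.corrcoef(pred, rmsd[250:])[0, 1]
print(f"correlation on held-out models: r = {r:.2f}")
```

The design point worth noting is that the regressor is retrained on a set of models resembling the one being assessed (same fold or same secondary-structure composition), so the prediction is an absolute error estimate rather than a mere ranking.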


Safety on an inpatient pediatric otolaryngology service: Many small errors, few adverse events

THE LARYNGOSCOPE, Issue 5 2009
Rahul K. Shah MD
Abstract Objectives: Studies of medical error demonstrate that errors and adverse events (AEs) are common in hospitals. There are few data on errors in pediatric surgical services. Methods: We retrospectively reviewed 50 randomly selected inpatient admissions to the otolaryngology service at a tertiary care children's hospital. We used a "zero-defect" paradigm, recording any error or adverse event, from minor errors such as illegible notes to more significant errors such as mismanagement resulting in a bleeding emergency. Results: A total of 553 errors/AEs were identified in 50 admissions. Most (449) were charting or record-keeping deficiencies. Minor AEs (n = 26) and moderate AEs (n = 8) were present in 38% of admissions; there were no major AEs or permanent morbidity. Medication-related errors occurred in 22% of admissions, but only two resulted in minor AEs. There was a positive correlation between minor errors and AEs; however, it was not statistically significant. Conclusions: Multiple errors occurred in every inpatient pediatric otolaryngology admission; however, only 26 minor and eight moderate AEs were identified. The rate of errors per 1,000 hospital days (6,356 per 1,000 days) is higher than previously reported in voluntary reporting studies, possibly due to our methodology of physician review with a "zero-defect" standard. Trends in the data suggest that the presence of small errors may be associated with the risk of adverse events. Although labor-intensive, physician chart review is a valuable tool for identifying areas for improvement. Although small errors were common, there were few harms and no major morbidity. Laryngoscope, 2009 [source]


The vertical resolution sensitivity of simulated equilibrium temperature and water-vapour profiles

THE QUARTERLY JOURNAL OF THE ROYAL METEOROLOGICAL SOCIETY, Issue 565 2000
Adrian.
Abstract Variability of atmospheric water vapour is the most important climate feedback in present climate models. Thus, it is of crucial importance to understand the sensitivity of water vapour to model attributes, such as physical parametrizations and resolution. Here we attempt to determine the minimum vertical resolution necessary for accurate prediction of water vapour. To address this issue, we have run two single-column models to tropical radiative–convective equilibrium states and have examined the sensitivity of the equilibrium profiles to vertical resolution. Both column models produce reasonable equilibrium states of temperature and moisture. Convergence of the profiles was achieved in both models using a uniform vertical resolution of around 25 hPa. Coarser resolution leads to significant errors in both the water vapour and temperature profiles, with a resolution of 100 hPa proving completely inadequate. However, fixing the boundary-layer resolution and altering only the free-tropospheric resolution significantly reduces sensitivity to vertical resolution in one of the column models, in both water and temperature, highlighting the importance of resolving boundary-layer processes. Additional experiments show that the height of the simulated tropopause is sensitive to upper-tropospheric vertical resolution. At resolutions higher than 33 hPa, one of the models developed a high degree of vertical structure in the vapour profile, resulting directly from the complex array of microphysical processes included in the stratiform cloud parametrization, some of which were only resolved at high resolutions. This structure was completely absent at lower resolutions, casting some doubt on the approach of using relatively complicated cloud schemes at low vertical resolutions. [source]