Inaccuracies



Selected Abstracts


Inventory Record Inaccuracy, Double Marginalization, and RFID Adoption

PRODUCTION AND OPERATIONS MANAGEMENT, Issue 5 2007
H. Sebastian Heese
Most retailers suffer from substantial discrepancies between inventory quantities recorded in the system and stocks truly available to customers. Promising full inventory transparency, radio frequency identification (RFID) technology has often been suggested as a remedy to the problem. We consider inventory record inaccuracy in a supply chain model, where a Stackelberg manufacturer sets the wholesale price and a retailer determines how much to stock for sale to customers. We first analyze the impact of inventory record inaccuracy on optimal stocking decisions and profits. By contrasting optimal decisions in a decentralized supply chain with those in an integrated supply chain, we find that inventory record inaccuracy exacerbates the inefficiencies resulting from double marginalization in decentralized supply chains. Assuming RFID technology can eliminate the problem of inventory record inaccuracy, we determine the cost thresholds at which RFID adoption becomes profitable. We show that a decentralized supply chain benefits more from RFID technology, such that RFID adoption improves supply chain coordination. [source]
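The double-marginalization effect at the heart of this model can be sketched with a deterministic linear-demand example (the parameters a, b, c and the demand form p = a - b*q are illustrative assumptions, not the paper's stochastic model with record inaccuracy):

```python
# Sketch: double marginalization under linear demand p = a - b*q,
# manufacturer marginal cost c. Decentralized chain stocks less and
# earns less than an integrated one.

def integrated(a, b, c):
    """Integrated chain: choose q to maximize (a - b*q - c) * q."""
    q = (a - c) / (2 * b)
    profit = (a - b * q - c) * q
    return q, profit

def decentralized(a, b, c):
    """Stackelberg: manufacturer sets wholesale price w, retailer reacts."""
    w = (a + c) / 2            # manufacturer's optimal wholesale price
    q = (a - w) / (2 * b)      # retailer's best response to w
    retailer = (a - b * q - w) * q
    manufacturer = (w - c) * q
    return q, retailer + manufacturer

q_int, pi_int = integrated(10, 1, 2)    # q = 4, total profit = 16
q_dec, pi_dec = decentralized(10, 1, 2) # q = 2, total profit = 12
```

The decentralized chain stocks half as much and captures only three quarters of the integrated profit, which is the inefficiency the abstract says record inaccuracy exacerbates.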


Seamless Montage for Texturing Models

COMPUTER GRAPHICS FORUM, Issue 2 2010
Ran Gal
Abstract We present an automatic method to recover high-resolution texture over an object by mapping detailed photographs onto its surface. Such high-resolution detail often reveals inaccuracies in geometry and registration, as well as lighting variations and surface reflections. Simple image projection results in visible seams on the surface. We minimize such seams using a global optimization that assigns compatible texture to adjacent triangles. The key idea is to search not only combinatorially over the source images, but also over a set of local image transformations that compensate for geometric misalignment. This broad search space is traversed using a discrete labeling algorithm, aided by a coarse-to-fine strategy. Our approach significantly improves resilience to acquisition errors, thereby allowing simple and easy creation of textured models for use in computer graphics. [source]
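The combinatorial core of such seam minimization can be illustrated with a toy labeling problem (the mesh, costs, and seam penalty below are invented; the paper's search space is far richer, including local image transformations, and uses a coarse-to-fine discrete labeling algorithm rather than brute force):

```python
# Toy seam-aware texture assignment: each triangle picks a source image,
# minimizing unary quality costs plus a pairwise seam penalty whenever
# adjacent triangles use different images. Exhaustive search over the
# tiny label space stands in for the paper's optimization.
import itertools

unary = [  # unary[t][img]: cost of texturing triangle t from image img
    [0.1, 0.9],
    [0.8, 0.2],
    [0.3, 0.4],
]
edges = [(0, 1), (1, 2)]  # adjacent triangle pairs
SEAM = 0.3                # penalty when neighbors use different images

def total_cost(labels):
    cost = sum(unary[t][lab] for t, lab in enumerate(labels))
    cost += sum(SEAM for a, b in edges if labels[a] != labels[b])
    return cost

best = min(itertools.product(range(2), repeat=3), key=total_cost)
# best == (0, 1, 1): one seam is accepted because the unary savings outweigh it
```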


Network-aware selective job checkpoint and migration to enhance co-allocation in multi-cluster systems

CONCURRENCY AND COMPUTATION: PRACTICE & EXPERIENCE, Issue 13 2009
William M. Jones
Abstract Multi-site parallel job schedulers can improve average job turn-around time by making use of fragmented node resources available throughout the grid. By mapping jobs across potentially many clusters, jobs that would otherwise wait in the queue for local resources can begin execution much earlier, thereby improving system utilization and reducing average queue waiting time. Recent research in this area of scheduling leverages user-provided estimates of job communication characteristics to more effectively partition the job across system resources. In this paper, we address the impact of inaccuracies in these estimates on system performance and show that multi-site scheduling techniques benefit from these estimates, even in the presence of considerable inaccuracy. While these results are encouraging, there are instances where these errors result in poor job scheduling decisions that cause network over-subscription. This situation can lead to significantly degraded application performance and turnaround time. Consequently, we explore the use of job checkpointing, termination, migration, and restart (CTMR) to selectively stop offending jobs to alleviate network congestion and subsequently restart them when (and where) sufficient network resources are available. We then characterize the conditions and the extent to which the process of CTMR improves overall performance. We demonstrate that this technique is beneficial even when the overhead of doing so is costly. Copyright © 2009 John Wiley & Sons, Ltd. [source]
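The CTMR trade-off reduces to a back-of-envelope comparison (my sketch, not the paper's model): stop a job suffering network over-subscription only if the checkpoint, migration, and restart overhead is repaid by running the remaining work at full speed.

```python
# Decide whether checkpoint/terminate/migrate/restart pays off for a job
# slowed by network congestion.
def should_migrate(remaining_work, slowdown, overhead):
    """remaining_work: seconds of work at full speed; slowdown: factor > 1
    under congestion; overhead: checkpoint + migrate + restart cost (s)."""
    stay = remaining_work * slowdown
    move = overhead + remaining_work
    return move < stay

# Even a costly overhead pays off for long jobs under heavy congestion:
should_migrate(3600, 2.0, 1800)  # True: 5400 s moved < 7200 s staying
should_migrate(600, 1.2, 300)    # False: 900 s moved > 720 s staying
```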


Discrepancies in Reported Levels of International Wildlife Trade

CONSERVATION BIOLOGY, Issue 6 2005
ARTHUR G. BLUNDELL
Abstract: The international wildlife trade is a principal cause of biodiversity loss, involving hundreds of millions of plants and animals each year, yet wildlife trade records are notoriously unreliable. We assessed the precision of wildlife trade reports for the United States, the world's largest consumer of endangered wildlife, by comparing data from the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) with U.S. Customs data. For both U.S. imports and exports, CITES and Customs reported substantially different trade volumes for all taxa in all years. Discrepancies ranged from a CITES-reported volume 376% greater than that reported by Customs (live coral imports, 2000) to a Customs report 5202% greater than CITES (conch exports, 2000). These widely divergent data suggest widespread inaccuracies that may distort the perceived risk of targeted wildlife exploitation, leading to misallocation of management resources and less effective conservation strategies. Conservation scientists and practitioners should reexamine assumptions regarding the significance of the international wildlife trade. [source]
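Figures such as "376% greater" are relative discrepancies of one agency's reported volume against the other's; my reading of the abstract, sketched with illustrative volumes:

```python
# Relative discrepancy of one reported trade volume against a baseline.
def pct_greater(reported, baseline):
    return (reported - baseline) / baseline * 100.0

pct_greater(476, 100)   # 376.0: CITES reporting 4.76x the Customs volume
pct_greater(5302, 100)  # 5202.0: Customs reporting 53x the CITES volume
```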


Double Excited High-n Spin Dependent Atomic Structure Scaling Laws for He I: Application to Radiative Properties for Edge Plasma Conditions

CONTRIBUTIONS TO PLASMA PHYSICS, Issue 7-9 2006
E. H. Guedda
Abstract We present our numerical calculations of spin-dependent dielectronic recombination rate coefficients (2lnl′ → 1snl ¹L, 1snl ³L) and develop scaling relations that converge for all spin (S), angular (L) and principal (n) quantum numbers. The influence of atomic data inaccuracies, spin-dependent channelling of dielectronic recombination rates and collisions on the atomic/ionic fractions is discussed. (© 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


Testing bedload transport formulae using morphologic transport estimates and field data: lower Fraser River, British Columbia

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 10 2005
Yvonne Martin
Abstract Morphologic transport estimates available for a 65-km stretch of Fraser River over the period 1952–1999 provide a unique opportunity to evaluate the performance of bedload transport formulae for a large river over decadal time scales. Formulae tested in this paper include the original and rational versions of the Bagnold formula, the Meyer-Peter and Müller formula and a stream power correlation. The generalized approach adopted herein does not account for spatial variability in flow, bed structure and channel morphology. However, river managers and engineers, as well as those studying rivers within the context of long-term landscape change, may find this approach satisfactory as it has minimal data requirements and provides a level of process specification that may be commensurable with longer time scales. Hydraulic geometry equations for width and depth are defined using morphologic maps based on aerial photography and bathymetric survey data. Comparison of transport predictions with bedload transport measurements completed at Mission indicates that the original Bagnold formula most closely approximates the main trends in the field data. Sensitivity analyses are conducted to evaluate the impact of inaccuracies in the input variables width, depth, slope and grain size on transport predictions. The formulae differ in their sensitivity to input variables and between reaches. Average annual bedload transport predictions for the four formulae show that they vary between each other as well as from the morphologic transport estimates. The original Bagnold and Meyer-Peter and Müller formulae provide the best transport predictions, although the former underestimates while the latter overestimates transport rates. Based on our findings, an error margin of up to an order of magnitude can be expected when adopting generalized approaches for the prediction of bedload transport. Copyright © 2005 John Wiley & Sons, Ltd. [source]
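For reference, the Meyer-Peter and Müller relation in its common dimensionless form, Φ = 8(θ − θc)^1.5, can be sketched as follows (the constants and the θc = 0.047 calibration are the textbook values; the paper's exact implementation and calibration may differ):

```python
# Meyer-Peter & Mueller bedload transport in dimensionless (Shields) form.
import math

G = 9.81          # gravity, m/s^2
RHO_S = 2650.0    # sediment density, kg/m^3
RHO_W = 1000.0    # water density, kg/m^3
THETA_C = 0.047   # critical Shields stress used by MPM

def mpm_bedload(shear_stress, d50):
    """Volumetric bedload transport rate per unit width (m^2/s)
    from bed shear stress (Pa) and median grain size d50 (m)."""
    s = RHO_S / RHO_W
    theta = shear_stress / ((RHO_S - RHO_W) * G * d50)  # Shields stress
    if theta <= THETA_C:
        return 0.0            # below the threshold of motion
    phi = 8.0 * (theta - THETA_C) ** 1.5
    return phi * math.sqrt((s - 1.0) * G * d50 ** 3)
```

Below the threshold of motion the predicted rate is zero, and the rate grows nonlinearly with excess shear stress, which is one reason predictions are so sensitive to inaccuracies in depth, slope, and grain size.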


Stability analysis for real-time pseudodynamic and hybrid pseudodynamic testing with multiple sources of delay

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2008
Oya Mercan
Abstract Real-time pseudodynamic (PSD) and hybrid PSD test methods are experimental techniques to obtain the response of structures, where restoring force feedback is used by an integration algorithm to generate command displacements. Time delays in the restoring force feedback from the physical test structure and/or the analytical substructure cause inaccuracies and can potentially destabilize the system. In this paper a method for investigating the stability of structural systems involved in real-time PSD and hybrid PSD tests with multiple sources of delay is presented. The method involves the use of the pseudodelay technique to perform an exact mapping of fixed delay terms to determine the stability boundary. The approach described here is intended to be a practical one that enables the requirements for a real-time testing system to be established in terms of system parameters when multiple sources of delay exist. Several real-time testing scenarios with delay that include single degree of freedom (SDOF) and multi-degree of freedom (MDOF) real-time PSD/hybrid PSD tests are analyzed to illustrate the method. From the stability analysis of the real-time hybrid testing of an SDOF test structure, delay-independent stability with respect to either experimental or analytical substructure delay is shown to exist. The conditions that the structural properties must satisfy in order for delay-independent stability to exist are derived. Real-time hybrid PSD testing of an MDOF structure equipped with a passive damper is also investigated, where observations from six different cases related to the stability plane behavior are summarized. Throughout this study, root locus plots are used to provide insight and explanation of the behavior of the stability boundaries. Copyright © 2008 John Wiley & Sons, Ltd. [source]
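For the SDOF case, the destabilizing effect of feedback delay can be seen in the standard linearized small-delay result τcr ≈ 2ζ/ω (a textbook approximation, not the paper's pseudodelay mapping; all parameter values below are illustrative):

```python
# SDOF system with delayed restoring force:
#   x'' + 2*zeta*w*x' + w^2 * x(t - tau) = 0
# A small-delay expansion gives the critical delay tau_cr ~ 2*zeta/w.
# Check it by direct semi-implicit Euler time-stepping.
import math

def simulate(tau, zeta=0.05, w=2 * math.pi, dt=1e-3, t_end=20.0):
    n_delay = int(round(tau / dt))
    xs = [1.0] * (n_delay + 1)   # history buffer, x(0) = 1, x'(0) = 0
    v = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        x_delayed = xs[-(n_delay + 1)]
        a = -2 * zeta * w * v - w * w * x_delayed
        v += a * dt
        xs.append(xs[-1] + v * dt)
    return max(abs(x) for x in xs[-steps // 4:])  # late-time amplitude

tau_cr = 2 * 0.05 / (2 * math.pi)   # ~0.016 s for zeta=0.05, f=1 Hz
simulate(0.5 * tau_cr)   # well below critical: response decays
simulate(3.0 * tau_cr)   # above critical: response grows without bound
```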


Aortic Valve Closure: Relation to Tissue Velocities by Doppler and Speckle Tracking in Patients with Infarction and at High Heart Rates

ECHOCARDIOGRAPHY, Issue 4 2010
Svein A. Aase, M.Sc., Ph.D.
Aim: To resolve the event in tissue Doppler (TDI)- and speckle tracking-based velocity/time curves that most accurately represents aortic valve closure (AVC) in infarcted ventricles and at high heart rates. Methods: We studied the timing of AVC in 13 patients with myocardial infarction and in 8 patients at peak dobutamine stress echo. An acquisition setup for recording alternating B-mode and TDI image frames was used to achieve the same frame rate in both cases (mean 136.7 frames per second [FPS] for infarcted ventricles, mean 136.9 FPS for high heart rates). The reference method was visual assessment of AVC in the high frame rate narrow sector B-mode images of the aortic valve. Results: The initial negative velocities after ejection in the velocity/time curves occurred before AVC, 44.9 ± 21.0 msec before the reference in the high heart rate material, and 25.2 ± 15.2 msec before the reference in the infarction material. Using this time point as a marker for AVC may cause inaccuracies when estimating end-systolic strain. A more accurate but still practical marker for AVC was the time point of zero crossing after the initial negative velocities after ejection, 5.4 ± 15.3 msec before the reference in high heart rates and 8.2 ± 12.9 msec after the reference in the infarction material. Conclusion: The suggested marker of AVC at high heart rate and in infarcted ventricles was the time point of zero crossing after the initial negative velocities after ejection in velocity/time curves. (Echocardiography 2010;27:363-369) [source]
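The suggested marker is easy to state algorithmically; a sketch on a synthetic velocity/time curve (illustrative samples, not patient data): after ejection, find the first negative dip, then take the subsequent zero crossing.

```python
# Locate the suggested AVC marker in a sampled velocity/time curve:
# the first return to zero (or above) after the post-ejection negative dip.
def avc_marker(velocity, start):
    """Return the index of the first non-negative sample following the
    first negative sample at or beyond `start`."""
    i = start
    while velocity[i] >= 0:    # skip the tail of ejection
        i += 1
    while velocity[i] < 0:     # ride through the negative dip
        i += 1
    return i

curve = [2.0, 1.5, 0.6, -0.4, -0.9, -0.3, 0.2, 0.5]
avc_marker(curve, 0)  # 6: the sample where velocity crosses back to zero
```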


Inaccuracies on Applications for Emergency Medicine Residency Training

ACADEMIC EMERGENCY MEDICINE, Issue 9 2004
Martha S. Roellig MD
Abstract Objectives: Studies have shown erroneous claims of authorship by medical students applying for residency. Authors have hypothesized that investigation of advanced degrees, Alpha Omega Alpha (AOA) status, and peer-reviewed publications all show important rates of inaccuracy. Methods: A retrospective review of all applicants offered an interview for the authors' emergency medicine (EM) residency (entering class of 2002), excluding foreign medical graduates and current residents, was conducted. After verifying peer-reviewed publications by MEDLINE search and journal review, errors were tabulated as follows: reference not found, not referenced as an abstract, incorrect author list, or clerical error. AOA status was verified by the AOA organization. Advanced degrees were verified by the awarding institutions. Results: Of 194 applications screened (58.3% of applications), 21 (10.8%) were excluded (9 foreign medical graduates, 12 current residents). Multiple inaccuracies on a single application were counted separately. Of the 173 remaining applications, 23 (13.3%; 95% confidence interval [95% CI] = 8.8% to 19.5%) had at least one misrepresentation and seven of 173 (4.0%; 95% CI = 1.8% to 8.5%) had two or more. Authorship of at least one peer-reviewed article was claimed by 47 of 173 (27.2%), with ten of 47 (21.3%; 95% CI = 11.2% to 36.1%) having one inaccuracy and six of 47 (12.8%, 95% CI = 5.3% to 26.4%) having two or more. AOA membership was claimed by 14 applicants (8.1%), but five claims (35.7%, 95% CI = 14.0% to 64.4%) were inaccurate. Advanced degrees were claimed by 15 (8.7%); four (26.7%, 95% CI = 8.9% to 55.2%) were in error. Conclusions: Applications for EM residency contain frequent inaccuracies in publications listed, AOA status, and advanced degrees. Careful review of applications is necessary to ensure appropriate credit is given for claims of these types. [source]
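Intervals like "23/173 (13.3%; 95% CI = 8.8% to 19.5%)" are consistent with a score-type binomial interval; the paper does not state its exact method, so the Wilson score interval below is an approximation for illustration (it reproduces the point estimate and comes close to, but not exactly, the reported bounds):

```python
# Wilson score confidence interval for a binomial proportion.
import math

def wilson_ci(k, n, z=1.96):
    p = k / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom, (center + half) / denom

lo, hi = wilson_ci(23, 173)  # roughly (0.090, 0.192) vs the reported 8.8-19.5%
```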


Quality of poisoning management advice in the Monthly Index of Medical Specialties Annual

EMERGENCY MEDICINE AUSTRALASIA, Issue 5-6 2005
James Mallows
Abstract Background: The Monthly Index of Medical Specialties (MIMS) contains Therapeutic Goods Administration-approved product information supplied by manufacturers. It is widely used by health-care professionals but is not specifically designed as a toxicology reference. Objectives: To determine how widespread the use of MIMS is as a toxicology reference. To evaluate the quality of poisoning management advice it contains. Methods: First, a survey of 500 consecutive calls to the NSW Poison Information Centre (PIC) was undertaken asking health-care workers which toxicology references were consulted prior to calling and which references they would use if the PIC were not available. Second, a consensus opinion for poisoning management was obtained, for 25 medications which are either commonly involved in poisoning or potentially life-threatening in overdose, by review of 5 current toxicology references for contraindicated treatments, ineffective treatments and specific recommended treatments and antidotes. MIMS poisoning management advice was then compared with this toxicology consensus opinion. Results: In total, 276 doctors and 222 nurses were surveyed. Prior to calling the PIC 22.8% of doctors and 6.8% of nurses consulted MIMS. In total, 25.7% of doctors and 39.6% nurses stated they would use the MIMS for poisoning management advice if the PIC were not available. For the 25 drugs assessed, 14 contained inaccurate poisoning management: 1 recommended ineffective treatments and 14 omitted specific treatments or antidotes. Conclusion: The MIMS is often used as a toxicology reference by physicians prior to calling the PIC. It contains a number of significant inaccuracies pertaining to management of poisonings and should not be used as a primary reference for poisoning advice. [source]


The appropriate use of references in a scientific research paper

EMERGENCY MEDICINE AUSTRALASIA, Issue 2 2002
David McD Taylor
Abstract References have an important and varied role in any scientific paper. Unfortunately, many authors do not appreciate this importance and errors within reference lists are frequently encountered. Most reference errors involve spelling, numerical and punctuation mistakes, although the use of too many, too few or even inappropriate references is often seen. The consequences of reference errors include difficulty in reference retrieval, limitation for the reader to read more widely, failure to credit the cited authors, and inaccuracies in citation indexes. This paper discusses the value of accurate reference lists and provides guidelines for their preparation. [source]


Investigating Burkholderia cepacia complex populations recovered from Italian maize rhizosphere by multilocus sequence typing

ENVIRONMENTAL MICROBIOLOGY, Issue 7 2007
Claudia Dalmastri
Summary The Burkholderia cepacia complex (BCC) comprises at least nine closely related species of abundant environmental microorganisms. Some of these species are highly spread in the rhizosphere of several crop plants, particularly of maize; additionally, as opportunistic pathogens, strains of the BCC are capable of colonizing humans. We have developed and validated a multilocus sequence typing (MLST) scheme for the BCC. Although widely applied to understand the epidemiology of bacterial pathogens, MLST has seen limited application to the population analysis of species residing in the natural environment; we describe its novel application to BCC populations within maize rhizospheres. In total, 115 BCC isolates were recovered from the roots of different maize cultivars from three different Italian regions over a 9-year period (1994–2002). A total of 44 sequence types (STs) were found, of which 41 were novel when compared with existing MLST data, which encompassed a global database of 1000 clinical and environmental strains representing nearly 400 STs. In this study of rhizosphere isolates, approximately 2.5 isolates per ST were found, comparable to the ratio found for the whole BCC population. Multilocus sequence typing also resolved inaccuracies associated with previous identification of the maize isolates based on recA gene restriction fragment length polymorphisms and species-specific polymerase chain reaction. The 115 maize isolates comprised the following BCC species groups, B. ambifaria (39%), BCC6 (29%), BCC5 (10%), B. pyrrocinia (8%), B. cenocepacia IIIB (7%) and B. cepacia (6%), with BCC5 and BCC6 potentially constituting novel species groups within the complex. Closely related clonal complexes of strains were identified within B. cepacia, B. cenocepacia IIIB, BCC5 and BCC6, with one of the BCC5 clonal complexes being distributed across all three sampling sites.
Overall, our analysis demonstrates that the maize rhizosphere harbours a massive diversity of novel BCC STs, so that their addition to our global MLST database increased the ST diversity by 10%. [source]
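The ST bookkeeping at the heart of MLST can be sketched in a few lines (the allele profiles below are invented for illustration; real BCC MLST uses seven housekeeping loci and a curated global numbering):

```python
# Assign sequence types (STs): each unique combination of allele numbers
# across loci defines an ST, numbered in order of first appearance.
def assign_sts(profiles):
    st_of = {}
    assigned = []
    for prof in profiles:
        if prof not in st_of:
            st_of[prof] = len(st_of) + 1
        assigned.append(st_of[prof])
    return assigned, st_of

isolates = [(2, 5, 1), (2, 5, 1), (3, 5, 1), (2, 5, 2), (3, 5, 1)]
sts, table = assign_sts(isolates)
# sts == [1, 1, 2, 3, 2]: 5 isolates over 3 STs, i.e. ~1.7 isolates per ST,
# the same ratio the abstract reports as ~2.5 for the rhizosphere sample.
```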


HETEROZYGOTE EXCESS IN SMALL POPULATIONS AND THE HETEROZYGOTE-EXCESS EFFECTIVE POPULATION SIZE

EVOLUTION, Issue 9 2004
François Balloux
Abstract It has been proposed that effective size could be estimated in small dioecious populations by considering the heterozygote excess observed at neutral markers. When the number of breeders is small, allelic frequencies in males and females will slightly differ due to binomial sampling error. However, this excess of heterozygotes is not generated by dioecy but by the absence of individuals produced through selfing. Consequently, the approach can also be applied to self-incompatible monoecious species. Some inaccuracies in earlier equations expressing effective size as a function of the heterozygote excess are also corrected in this paper. The approach is then extended to subdivided populations, where time of sampling becomes crucial. When adults are sampled, the effective size of the entire population can be estimated, whereas when juveniles are sampled, the average effective number of breeders per subpopulation can be estimated. The main limitation of the heterozygote excess method is that it will only perform satisfactorily for populations with a small number of reproducing individuals. While this situation is unlikely to happen frequently at the scale of the entire population, structured populations with small subpopulations are likely to be common. The estimation of the average number of breeders per subpopulation is thus expected to be applicable to many natural populations. The approach is straightforward to compute and independent of equilibrium assumptions. Applications to simulated data suggest the estimation of the number of breeders to be robust to mutation and migration rates, and to specificities of the mating system. [source]
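The source of the excess can be made concrete for a single biallelic locus (my algebraic sketch, not the paper's estimator): if fathers and mothers carry an allele at frequencies p_m and p_f, offspring heterozygosity exceeds the Hardy-Weinberg value computed from the mean frequency by exactly (p_m − p_f)²/2, so any sex difference in allele frequency (e.g. from binomial sampling among few breeders) produces a heterozygote excess.

```python
# Heterozygote excess from a male/female allele-frequency difference
# at one biallelic locus.
def het_excess(p_m, p_f):
    h_obs = p_m + p_f - 2 * p_m * p_f    # offspring heterozygosity
    p_bar = (p_m + p_f) / 2
    h_exp = 2 * p_bar * (1 - p_bar)      # Hardy-Weinberg expectation
    return h_obs - h_exp                 # equals (p_m - p_f)**2 / 2

het_excess(0.6, 0.4)  # 0.02, i.e. (0.6 - 0.4)**2 / 2
```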


A Comparison of Data Sources for Motor Vehicle Crash Characteristic Accuracy

ACADEMIC EMERGENCY MEDICINE, Issue 8 2000
Robert J. Grant MD
Abstract. Objective: To determine the accuracy of police reports (PRs), ambulance reports (ARs), and emergency department records (EDRs) in describing motor vehicle crash (MVC) characteristics when compared with an investigation performed by an experienced crash investigator trained in impact biomechanics. Methods: This was a cross-sectional, observational study. Ninety-one patients transported by ambulance to a university emergency department (ED) directly from the scene of an MVC from August 1997 to April 1998 were enrolled. Potential patients were identified from the ED log and consent was obtained to investigate the crash vehicle. Data describing MVC characteristics were abstracted from the PR, AR, and medical record. Variables of interest included restraint use (RU), air bag deployment (AD), and type of impact (TI). Agreements between the variables and the independent crash investigation were compared using kappa. Interrater reliability was determined using kappa by comparing a random sample of 20 abstracted reports for each data source with the originally abstracted data. Results: Agreement using kappa between the crash investigation and each data source was 0.588 (95% CI = 0.508 to 0.667) for the PR, 0.330 (95% CI = 0.252 to 0.407) for the AR, and 0.492 (95% CI = 0.413 to 0.572) for the EDR. Variable agreement was 0.239 (95% CI = 0.164 to 0.314) for RU, 0.350 (95% CI = 0.268 to 0.432) for AD, and 0.631 (95% CI = 0.563 to 0.698) for TI. Interrater reliability was excellent (kappa > 0.8) for all data sources. Conclusions: The strength of the agreement between the independent crash investigation and the data sources that were measured by kappa was fair to moderate, indicating inaccuracies. This presents ramifications for researchers and necessitates consideration of the validity and accuracy of crash characteristics contained in these data sources. [source]
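Cohen's kappa, the agreement statistic used throughout the study, is simple to compute from a confusion matrix of paired ratings; a generic implementation (not the study's code, with an invented example matrix):

```python
# Cohen's kappa: chance-corrected agreement between two raters/sources.
def cohens_kappa(matrix):
    """matrix[i][j]: count of items rated category i by source A, j by B."""
    n = sum(sum(row) for row in matrix)
    k = len(matrix)
    p_obs = sum(matrix[i][i] for i in range(k)) / n
    row = [sum(matrix[i]) / n for i in range(k)]
    col = [sum(matrix[i][j] for i in range(k)) / n for j in range(k)]
    p_exp = sum(row[i] * col[i] for i in range(k))
    return (p_obs - p_exp) / (1 - p_exp)

cohens_kappa([[20, 5], [10, 15]])  # 0.4: "fair to moderate" agreement
```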


Can the Earth's dynamo run on heat alone?

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2003
David Gubbins
SUMMARY The power required to drive the geodynamo places significant constraints on the heat passing across the core–mantle boundary and the Earth's thermal history. Calculations to date have been limited by inaccuracies in the properties of liquid iron mixtures at core pressures and temperatures. Here we re-examine the problem of core energetics in the light of new first-principles calculations for the properties of liquid iron. There is disagreement on the fate of gravitational energy released by contraction on cooling. We show that only a small fraction of this energy, that associated with heating resulting from changes in pressure, is available to drive convection and the dynamo. This leaves two very simple equations in the cooling rate and radioactive heating, one yielding the heat flux out of the core and the other the entropy gain of electrical and thermal dissipation, the two main dissipative processes. This paper is restricted to thermal convection in a pure iron core; compositional convection in a liquid iron mixture is considered in a companion paper. We show that heat sources alone are unlikely to be adequate to power the geodynamo because they require a rapid secular cooling rate, which implies a very young inner core, or a combination of cooling and substantial radioactive heating, which requires a very large heat flux across the core–mantle boundary. A simple calculation with no inner core shows even higher heat fluxes are required in the absence of latent heat before the inner core formed. [source]


Constrained tomography of realistic velocity models in microseismic monitoring using calibration shots

GEOPHYSICAL PROSPECTING, Issue 5 2010
T. Bardainne
ABSTRACT The knowledge of the velocity model in microseismic jobs is critical to achieving statistically reliable microseismic event locations. The design of microseismic networks and the limited sources for calibration do not allow for a full tomographic inversion. We propose optimizing a priori velocity models using a few active shots and a non-linear inversion, suitable to poorly constrained systems. The considered models can be described by several layers with different P- and S-wave velocities. The velocities may be constant or have 3D gradients; the layer interfaces may be simple dipping planes or more complex 3D surfaces. In this process the P- and S-wave arrival times and polarizations measured on the seismograms constitute the observed data set. They are used to estimate two misfit functions: i) one based on the measurement residuals and ii) one based on the inaccuracy of the source relocation. These two functions are minimized thanks to a simulated annealing scheme, which decreases the risk of converging to a local solution within the velocity model. The case study used to illustrate this methodology highlights the ability of this technique to constrain a velocity model with dipping layers. This was performed by jointly using sixteen perforation shots recorded during a multi-stage fracturing operation from a single string of 3C-receivers. This decreased the location inaccuracies and the residuals by a factor of six. In addition, the retrieved layer dip was consistent with the pseudo-horizontal trajectories of the wells and the background information provided by the customer. Finally, the theoretical position of each calibration shot was contained in the uncertainty domain of the relocation of each shot. In contrast, single-stage inversions provided different velocity models that were neither consistent between each other nor with the well trajectories.
This example showed that it is essential to perform a multi-stage inversion to derive a better updated velocity model. [source]
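The flavor of the simulated annealing step can be shown on a toy one-parameter version (my sketch, not the paper's multi-layer, dipping-interface parameterization): recover a single layer velocity from calibration-shot travel times t_i = d_i / v by annealing on the residual misfit. All numbers are invented.

```python
# Simulated annealing on travel-time residuals for one unknown velocity.
import math, random

random.seed(7)
V_TRUE = 3200.0                       # m/s, hidden "true" velocity
dists = [500.0, 900.0, 1400.0, 2200.0]
times = [d / V_TRUE + random.gauss(0, 1e-4) for d in dists]  # observed picks

def misfit(v):
    return sum((t - d / v) ** 2 for d, t in zip(dists, times))

v = 2000.0                            # a priori model
temp = 0.1
for _ in range(20000):
    cand = v + random.gauss(0, 50.0)  # perturb the model
    dE = misfit(cand) - misfit(v)
    if dE < 0 or random.random() < math.exp(-dE / temp):
        v = cand                      # accept downhill, sometimes uphill
    temp *= 0.9995                    # geometric cooling schedule
# v now sits within a few percent of V_TRUE
```

The occasional acceptance of uphill moves at high temperature is what reduces the risk of locking onto a local minimum, the property the abstract cites as the reason for choosing simulated annealing.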


Ground Water Recharge and Chemical Contaminants: Challenges in Communicating the Connections and Collisions of Two Disparate Worlds

GROUND WATER MONITORING & REMEDIATION, Issue 2 2004
Christian G. Daughton
Our knowledge base regarding the presence and significance of chemicals foreign to the subsurface environment is large and growing, the papers in this volume serving as testament. However, complex questions with few answers surround the unknowns regarding the potential for environmental or human health effects from trace levels of xenobiotics in ground water, especially ground water augmented with treated waste water. Public acceptance for direct or indirect ground water recharge using treated municipal waste water (especially sewage) spans the spectrum from unquestioned embrace to outright rejection. In this paper, I detour around the issues most commonly discussed regarding ground water recharge and instead focus on some of the less-recognized issues: those that emanate from the mysteries created at the many literal and virtual interfaces involved with the subsurface world. My major objective is to catalyze discussion that advances our understanding of the barriers to public acceptance of waste water reuse with its ultimate culmination in direct reuse for drinking. I pose what could be a key question as to whether much of the public's frustration or ambivalence in its decision-making process for accepting, or rejecting, water reuse (for various purposes including personal use) emanates from fundamental inaccuracies, misrepresentation, or oversimplification of what water is and how it functions in the environment: just what the water cycle is. These questions suggest it might behoove us to revisit some very elementary aspects of our science and how we are conveying them to the public. [source]


Representing elevation uncertainty in runoff modelling and flowpath mapping

HYDROLOGICAL PROCESSES, Issue 12 2001
Theodore A. Endreny
Abstract Vertical inaccuracies in terrain data propagate through dispersal area subroutines to create uncertainties in runoff flowpath predictions. This study documented how terrain error sensitivities in the D8, Multiple Flow (MF), DEMON, D-Infinity and two hybrid dispersal area algorithms responded to changes in terrain slope and error magnitude. Runoff dispersal areas were generated from convergent and divergent sections of low, medium, and high gradient 64-ha parcels using a 30 m pixel scale control digital elevation model (DEM) and an ensemble of alternative realizations of the control DEM. The ensemble of alternative DEM realizations was generated randomly to represent root mean square error (RMSE) values ranging from 0.5 to 6 m and spatial correlations of 0 to 0.999 across 180 m lag distances. Dispersal area residuals, derived by differencing output from control and ensemble simulations, were used to quantify the spatial consistency of algorithm dispersal area predictions. A maximum average algorithm consistency of 85% was obtained in steep sloping convergent terrain, and two map analysis techniques are recommended in maintaining high spatial consistencies under less optimum terrain conditions. A stochastic procedure was developed to translate DEM error into dispersal area probability maps, and thereby better represent uncertainties in runoff modelling and management. Two uses for these runoff probability maps include watershed management indices that identify the optimal areas for intercepting polluted runoff as well as Monte-Carlo-ready probability distributions that report the cumulative pollution impact of each pixel's downslope dispersal area. Copyright © 2001 John Wiley & Sons, Ltd. [source]
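The stochastic procedure can be shown in miniature (my sketch with an invented 3x3 elevation window and error magnitude): perturb the elevations with Gaussian error many times, recompute the D8 steepest-descent direction of the center cell each time, and report a probability per direction instead of a single deterministic flowpath.

```python
# Monte Carlo translation of DEM error into a flow-direction probability map
# for one cell, using the D8 (steepest descent) rule.
import random

random.seed(1)
base = [[10.0, 10.0, 10.0],
        [10.0,  9.0,  8.9],
        [10.0,  9.2,  8.8]]
NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_direction(grid):
    """Index into NEIGHBORS of the steepest downslope neighbor of center."""
    drops = [grid[1][1] - grid[1 + di][1 + dj] for di, dj in NEIGHBORS]
    return max(range(8), key=lambda i: drops[i])

counts = [0] * 8
N = 5000
for _ in range(N):
    noisy = [[z + random.gauss(0, 0.15) for z in row] for row in base]
    counts[d8_direction(noisy)] += 1
probs = [c / N for c in counts]
# With RMSE 0.15 m the flow direction splits mainly between the east (8.9)
# and southeast (8.8) neighbors rather than giving one deterministic answer.
```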


Traffic flow continuum modeling by hypersingular boundary integral equations

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 1 2010
Luis M. Romero
Abstract The quantity of data needed to study traffic in dense urban areas through a traffic network model, and the large volume of information produced as a result, make such models difficult to manage. A study of this kind is expensive and complex, with many sources of error attached to each step. A simplification such as the continuous-medium approximation is reasonable and, for certain problem dimensions, may be a worthwhile alternative. The hypotheses of the continuous model introduce errors comparable to those associated with geometric inaccuracies in the transport network, with the grouping of hundreds of streets into a single link type assumed to share the same functional characteristics, with the centralization of all journey origins and destinations in discrete centroids, and with the uncertainty produced by a huge origin/destination matrix that quickly becomes outdated. In the course of this work, a new model for characterizing traffic in dense network cities as a continuous medium, the diffusion–advection model, is put forward. The model is approached by means of the boundary element method, whose fundamental characteristic is that only the contour of the problem needs to be discretized, reducing the dimensionality of the problem, and hence its data requirements, by one order compared with more widespread methods such as finite differences and the finite element method. On the other hand, the boundary element method tends to yield a more complex mathematical formulation. To validate the proposed technique, three examples with known analytic solutions are solved in full. Copyright © 2009 John Wiley & Sons, Ltd. [source]
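The governing diffusion–advection balance can be illustrated in one dimension; this sketch uses a simple finite-difference solve against the known analytic solution rather than the paper's boundary-element formulation, and the velocity, diffusivity, and boundary values are assumptions chosen for illustration.

```python
import numpy as np

# Steady 1D advection-diffusion: u*dc/dx = D*d2c/dx2, with c(0)=0, c(L)=1.
# Analytic solution: c(x) = (exp(Pe*x/L) - 1) / (exp(Pe) - 1), Pe = u*L/D.
u, D, L, n = 1.0, 0.5, 1.0, 201
Pe = u * L / D
x = np.linspace(0.0, L, n)
h = x[1] - x[0]

# Assemble central-difference equations -D*c'' + u*c' = 0 at interior nodes.
A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0] = 1.0                       # Dirichlet: c(0) = 0
A[-1, -1] = 1.0; b[-1] = 1.0        # Dirichlet: c(L) = 1
for i in range(1, n - 1):
    A[i, i - 1] = -D / h**2 - u / (2 * h)
    A[i, i]     = 2 * D / h**2
    A[i, i + 1] = -D / h**2 + u / (2 * h)
c = np.linalg.solve(A, b)

c_exact = (np.exp(Pe * x / L) - 1.0) / (np.exp(Pe) - 1.0)
```

The second-order scheme reproduces the analytic profile closely at this mesh Peclet number; the paper's contribution is solving the analogous 2D problem with only the boundary discretized.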


High-order accurate numerical solutions of incompressible flows with the artificial compressibility method

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2004
John A. Ekaterinaris
Abstract A high-order accurate, finite-difference method for the numerical solution of incompressible flows is presented. This method is based on the artificial compressibility formulation of the incompressible Navier–Stokes equations. Fourth- or sixth-order accurate discretizations of the metric terms and the convective fluxes are obtained using compact, centred schemes. The viscous terms are also discretized using fourth-order accurate, centred finite differences. Implicit time marching is performed for both steady-state and time-accurate numerical solutions. High-order, spectral-type, low-pass, compact filters are used to regularize the numerical solution and remove spurious modes arising from unresolved scales, non-linearities, and inaccuracies in the application of boundary conditions. The accuracy and efficiency of the proposed method are demonstrated for test problems. Copyright © 2004 John Wiley & Sons, Ltd. [source]
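The compact, centred discretization the abstract refers to can be sketched with the classical fourth-order Padé scheme for a first derivative on a periodic grid. This is a generic textbook scheme, not the paper's exact stencil, and the grid size and test function are assumptions.

```python
import numpy as np

# Fourth-order Pade (compact) first derivative on a periodic grid:
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = (3/2) * (f_{i+1} - f_{i-1}) / (2h)
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
h = x[1] - x[0]
f = np.sin(x)

alpha = 0.25
A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
A[0, -1] = A[-1, 0] = alpha                      # periodic wrap-around
rhs = 1.5 * (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h)
df = np.linalg.solve(A, rhs)                     # implicit (compact) solve

err_compact = np.max(np.abs(df - np.cos(x)))     # exact derivative is cos(x)
err_central = np.max(np.abs((np.roll(f, -1) - np.roll(f, 1)) / (2.0 * h) - np.cos(x)))
```

The implicit coupling of the derivative values is what gives compact schemes spectral-like resolution at a narrow stencil, at the cost of a (tridiagonal) solve per derivative.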


Modelling and experimental studies on heat transfer in the convection section of a biomass boiler

INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 12 2006
Jukka Yrjölä
Abstract This paper describes a model of heat transfer for the convection section of a biomass boiler. The predictions obtained with the model are compared to the measurement results from two boilers, a 50 kWth pellet boiler and a 4000 kWth wood chips boiler. Adequate accuracy was achieved for the wood chips boiler. As for the pellet boiler, the calculated and measured heat transfer rates differed by more than would be expected from the correlation inaccuracies reported in the literature. The most uncertain aspect of the model was assumed to be the correlation equation for the entrance region. Hence, the model was adjusted to improve this correlation, and as a result a high degree of accuracy was also obtained with the pellet boiler. The next step was to analyse the effect of design and operating parameters on the pellet boiler. Firstly, the portion of radiation was established at 3–13 per cent, and the portion of the entrance region at 39–52 per cent of the entire heat transfer rate under typical operating conditions. The effect of natural convection was small. Secondly, the heat transfer rate seemed to increase when dividing the convection section into more passes, even when the heat transfer surface area remained constant. This is because the effect of the entrance region recurs in each pass. Thirdly, when using smaller tube diameters the heat transfer area is used more effectively, even when the bulk velocity of the flow remains constant. Copyright © 2006 John Wiley & Sons, Ltd. [source]
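The recurring entrance-region effect can be illustrated with the standard Hausen correlation for the mean Nusselt number in laminar thermal entrance flow; this is not the paper's adjusted correlation, and the Reynolds number, Prandtl number, and tube geometry below are assumed values for illustration only.

```python
def nu_hausen(re, pr, d, length):
    """Mean Nusselt number for laminar thermal entrance flow (Hausen correlation,
    constant wall temperature): Nu = 3.66 + 0.0668*Gz / (1 + 0.04*Gz**(2/3))."""
    gz = (d / length) * re * pr          # Graetz number
    return 3.66 + 0.0668 * gz / (1.0 + 0.04 * gz ** (2.0 / 3.0))

re, pr, d = 1500.0, 0.7, 0.05            # assumed flue-gas-like conditions, 50 mm tube
nu_one_pass = nu_hausen(re, pr, d, 4.0)  # one 4 m pass
nu_two_pass = nu_hausen(re, pr, d, 2.0)  # each of two 2 m passes
```

Because each shorter pass restarts the thermal entrance region, the mean Nusselt number per pass rises even though the total surface area is unchanged, which is the trend the paper reports.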


Relationships between Personality and Organization Switching: Implications for utility estimates

INTERNATIONAL JOURNAL OF SELECTION AND ASSESSMENT, Issue 1 2007
Gregory A. Vinson
This study examined individuals' tendencies to migrate from one organization to another (i.e., the propensity to switch employers). Previous researchers have suggested that switching organizations throughout the career span may be partially heritable and therefore related to individual differences in personality traits. If personality traits are indeed related to a tendency to leave organizations, this suggests that current procedures for calculating utility may be inaccurate. Using a database of 1081 individuals who have been in the workforce for several years, results indicated that personality traits measured by the Occupational Personality Questionnaire (non-ipsative; OPQn) were modestly related to organization switching (i.e., repeated moves from organization to organization). We found that higher scores on extraversion, openness to experience, and conscientiousness-related traits were modestly correlated with more frequent organization switching. However, we demonstrate that these modest relationships can produce large inaccuracies in utility estimates. [source]
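One way to see how switching propensity feeds into utility is through the standard Brogden,Cronbach,Gleser utility model, in which expected tenure multiplies the whole benefit term. The formula is standard in selection research, but the paper's specific analysis is not reproduced here, and every numeric value below is an assumption for illustration.

```python
# Brogden-Cronbach-Gleser utility of a selection procedure:
#   delta_U = N * T * r_xy * SD_y * mean_z - N * C
# N = hires, T = mean tenure (years), r_xy = validity, SD_y = value SD of
# performance per year, mean_z = mean standardized score of hires, C = cost/hire.
def bcg_utility(n_hired, tenure_years, validity, sd_y, mean_z, cost_per_hire):
    return n_hired * tenure_years * validity * sd_y * mean_z - n_hired * cost_per_hire

naive    = bcg_utility(100, 5.0, 0.30, 20000.0, 1.0, 500.0)  # assumed 5-year tenure
adjusted = bcg_utility(100, 3.5, 0.30, 20000.0, 1.0, 500.0)  # tenure cut by switching
```

Because tenure multiplies the entire benefit term, even a modest personality-linked reduction in expected tenure shifts the utility estimate substantially, which is the mechanism behind the "large inaccuracies" the authors describe.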


Toward accurate relative energy predictions of the bioactive conformation of drugs

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 4 2009
Keith T. Butler
Abstract Quantifying the relative energy of a ligand in its target-bound state (i.e. the bioactive conformation) is essential to understand the process of molecular recognition, to optimize the potency of bioactive molecules and to increase the accuracy of structure-based drug design methods. This is, nevertheless, seriously hampered by two interrelated issues, namely the difficulty in carrying out an exhaustive sampling of the conformational space and the shortcomings of the energy functions, usually based on parametric methods of limited accuracy. Matters are further complicated by the experimental uncertainty on the atomic coordinates, which precludes a univocal definition of the bioactive conformation. In this article we investigate the relative energy of bioactive conformations introducing two major improvements over previous studies: the use of sophisticated QM-based methods to take into account both the internal energy of the ligand and the solvation effect, and the application of physically meaningful constraints to refine the bioactive conformation. On a set of 99 drug-like molecules, we find that, contrary to previous observations, two thirds of bioactive conformations lie within 0.5 kcal mol−1 of a local minimum, with penalties above 2.0 kcal mol−1 being generally attributable to structural determination inaccuracies. The methodology herein described opens the door to obtaining quantitative estimates of the energy of bioactive conformations and can be used both as an aid in refining crystallographic structures and as a tool in drug discovery. © 2008 Wiley Periodicals, Inc. J Comput Chem 2009 [source]


Application of the frozen atom approximation to the GB/SA continuum model for solvation free energy

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 2 2002
Olgun Guvench
Abstract The generalized Born/surface area (GB/SA) continuum model for solvation free energy is a fast and accurate alternative to using discrete water molecules in molecular simulations of solvated systems. However, computational studies of large solvated molecular systems such as enzyme–ligand complexes can still be computationally expensive even with continuum solvation methods, simply because of the large number of atoms in the solute molecules. Because often only a relatively small portion of such a system, such as the ligand binding site, is under study, it becomes less attractive to calculate energies and derivatives for all atoms in the system. To curtail computation while still maintaining high energetic accuracy, atoms distant from the site of interest are often frozen; that is, their coordinates are made invariant. Such frozen atoms do not require energetic and derivative updates during the course of a simulation. Herein we describe methodology and results for applying the frozen atom approach to both the generalized Born (GB) and the solvent accessible surface area (SASA) parts of the GB/SA continuum model for solvation free energy. For strictly pairwise energetic terms, such as the Coulombic and van der Waals energies, contributions from pairs of frozen atoms can be ignored. This leaves energetic differences unaffected for conformations that vary only in the positions of nonfrozen atoms. Due to the nonlocal nature of the GB analytical form, however, excluding such pairs from a GB calculation leads to unacceptable inaccuracies. To apply a frozen-atom scheme to GB calculations, a buffer region within the frozen-atom zone is generated based on a user-definable cutoff distance from the nonfrozen atoms. Certain pairwise interactions between frozen atoms in the buffer region are retained in the GB computation. This allows high accuracy in conformational GB comparisons to be maintained while achieving significant savings in computational time compared to the full (nonfrozen) calculation. A similar approach using a buffer region of frozen atoms is taken for the SASA calculation. The SASA calculation is local in nature, and thus exact SASA energies are maintained. With a buffer region of 8 Å for the frozen-atom cases, excellent agreement in differences in energies for three different conformations of cytochrome P450 with a bound camphor ligand is obtained with respect to the nonfrozen cases. For various minimization protocols, simulations run 2 to 10.5 times faster and memory usage is reduced by a factor of 1.5 to 5. Application of the frozen atom method for GB/SA calculations thus can render computationally tractable biologically and medically important simulations such as those used to study ligand–receptor binding conformations and energies in a solvated environment. © 2002 Wiley Periodicals, Inc. J Comput Chem 23: 214–221, 2002 [source]
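The pairwise argument above is easy to verify in miniature: dropping frozen–frozen pairs from a strictly pairwise sum leaves conformational energy *differences* exactly unchanged. The toy Coulomb system below (four atoms, arbitrary units, made-up charges and coordinates) is an illustration of that argument, not the paper's GB/SA code.

```python
import itertools
import math

def coulomb_energy(coords, charges, pairs):
    """Sum q_i*q_j/r_ij over the given atom pairs (arbitrary units)."""
    e = 0.0
    for i, j in pairs:
        r = math.dist(coords[i], coords[j])
        e += charges[i] * charges[j] / r
    return e

charges = [0.4, -0.4, 0.3, -0.3]
frozen = {2, 3}                                 # atoms 2 and 3 never move
conf_a = [(0.0, 0, 0), (1.0, 0, 0), (3.0, 0, 0), (4.0, 0, 0)]
conf_b = [(0.2, 0, 0), (1.1, 0, 0), (3.0, 0, 0), (4.0, 0, 0)]  # only mobile atoms moved

all_pairs    = list(itertools.combinations(range(4), 2))
active_pairs = [p for p in all_pairs if not set(p) <= frozen]  # drop frozen-frozen pairs

full_diff   = (coulomb_energy(conf_b, charges, all_pairs)
               - coulomb_energy(conf_a, charges, all_pairs))
pruned_diff = (coulomb_energy(conf_b, charges, active_pairs)
               - coulomb_energy(conf_a, charges, active_pairs))
```

The frozen–frozen pair contributes the same constant to both conformations and cancels in the difference, which is exactly why it can be skipped for Coulombic and van der Waals terms; the paper's point is that GB, being nonlocal, does not enjoy this cancellation.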


Achieving a cooperative behavior in a dual-arm robot system via a modular control structure

JOURNAL OF FIELD ROBOTICS (FORMERLY JOURNAL OF ROBOTIC SYSTEMS), Issue 12 2001
Fabrizio Caccavale
In this paper the problem of achieving a cooperative behavior in a dual-arm robot system is addressed. A control strategy is conceived in which one robot is position controlled and devoted to task execution, whereas a suitable compliance is conferred to the end effector of the other robot to cope with unavoidable misalignment between the two arms and task planning inaccuracies. A modular control structure is adopted that allows the selection of the proper operating mode for each robot, depending on the task requirements. The proposed approach is experimentally tested in two different tasks involving the two robots in the laboratory setup. First, a parts-mating task of peg-in-hole type is executed; the robot carrying the peg is position controlled, whereas the robot holding the hollow part is controlled to behave as a mechanical impedance. Then, a pressure-forming task is executed, in which a disk-shaped tool is required to align with a flat surface while exerting a desired pressure; in this case, the robot carrying the disk is position controlled, whereas the robot holding the surface is force controlled. © 2001 John Wiley & Sons, Inc. [source]
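The compliant behaviour conferred on the second robot's end effector can be sketched as a target mechanical impedance driven by a contact force. This is a generic one-degree-of-freedom impedance simulation with assumed parameter values, not the controller from the paper.

```python
# Discrete simulation of a target mechanical impedance  M*x'' + B*x' + K*x = F:
# the compliant end effector yields under a constant contact force F and
# settles at the equilibrium deflection F/K.
M, B, K = 1.0, 20.0, 100.0   # assumed inertia, damping, stiffness (critically damped)
F, dt = 5.0, 0.001           # constant contact force, integration step

x = v = 0.0
for _ in range(20000):       # 20 s of simulated contact
    a = (F - B * v - K * x) / M
    v += a * dt              # semi-implicit Euler integration
    x += v * dt
```

The damping and stiffness choices set how the arm absorbs misalignment: a softer K yields more under the same force, which is the mechanism that lets the impedance-controlled robot accommodate errors during the peg-in-hole and pressure-forming tasks.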


An algorithm for the use of surrogate models in modular flowsheet optimization

AICHE JOURNAL, Issue 10 2008
José A. Caballero
Abstract In this work a methodology is presented for the rigorous optimization of nonlinear programming problems in which the objective function and (or) some constraints are represented by noisy implicit black box functions. The special application considered is the optimization of modular process simulators in which the derivatives are not available and some unit operations introduce noise preventing the calculation of accurate derivatives. The black box modules are substituted by metamodels based on a kriging interpolation that assumes that the errors are not independent but a function of the independent variables. The kriging metamodel uses a non-Euclidean measure of distance to avoid sensitivity to the units of measure. It includes adjustable parameters that weigh the importance of each variable for obtaining a good model representation, and it allows calculating errors that can be used to establish stopping criteria and provide a solid basis for dealing with "possible infeasibility" due to inaccuracies in the metamodel representation of the objective function and constraints. The algorithm continues with a refining stage and successive bound contraction in the domain of independent variables, with or without kriging recalibration, until an acceptable accuracy in the metamodel is obtained. The procedure is illustrated with several examples. © 2008 American Institute of Chemical Engineers AIChE J, 2008 [source]
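The core of such a surrogate is a kriging-style interpolator whose per-variable weights play the role of the adjustable parameters mentioned above. The sketch below hand-rolls a minimal version with a Gaussian correlation and fixed (not fitted) weights; the sample points, the toy black-box function, and the θ values are all assumptions, and real implementations estimate θ by maximum likelihood.

```python
import numpy as np

def kriging_fit(X, y, theta):
    """Minimal kriging-style interpolator with per-variable weights theta."""
    def corr(a, b):
        # Gaussian correlation with a weighted (non-Euclidean) distance.
        return np.exp(-np.sum(theta * (a - b) ** 2, axis=-1))
    R = np.array([[corr(xi, xj) for xj in X] for xi in X])
    R += 1e-10 * np.eye(len(X))           # tiny nugget for numerical stability
    w = np.linalg.solve(R, y - y.mean())
    def predict(x):
        r = np.array([corr(x, xi) for xi in X])
        return y.mean() + r @ w
    return predict

# Hypothetical samples of a black-box response f(x) = x1 + x2.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
y = np.array([0.0, 1.0, 1.0, 2.0, 1.0])
theta = np.array([2.0, 2.0])              # assumed weight per input dimension
f_hat = kriging_fit(X, y, theta)
```

Because the model interpolates the training data, it reproduces the simulator exactly at sampled points while providing cheap predictions (and, in full implementations, error estimates) in between, which is what enables the stopping criteria and infeasibility handling the abstract describes.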


Vital signs for vital people: an exploratory study into the role of the Healthcare Assistant in recognising, recording and responding to the acutely ill patient in the general ward setting

JOURNAL OF NURSING MANAGEMENT, Issue 5 2010
JAYNE JAMES RN., Ortho.
james j., butler-williams c., hunt j. & cox h. (2010) Journal of Nursing Management 18, 548–555 Vital signs for vital people: an exploratory study into the role of the Healthcare Assistant in recognising, recording and responding to the acutely ill patient in the general ward setting Aim: To examine the contribution of the Healthcare Assistant (HCA) as the recogniser, responder and recorder of acutely ill patients within the general ward setting. Background: Concerns have been highlighted regarding the recognition and management of the acutely ill patient within the general ward setting. The contribution of the HCA role to this process has been given limited attention. Methods: A postal survey of HCAs was piloted and conducted within two district general hospitals. Open and closed questions were used. Results: Results suggest that on a regular basis HCAs are caring for acutely ill patients. Contextual issues and inaccuracies in some aspects of patient assessment were highlighted. It would appear normal communication channels and hierarchies were bypassed when patients' safety was of concern. Educational needs were identified, including scenario-based learning and the importance of ensuring mandatory training is current. Conclusions and implications for nursing management: HCAs play a significant role in the detection and monitoring of acutely ill patients. Acknowledgement is needed of the contextual factors in the general ward setting which may influence the quality of this process. The educational needs identified by this study can assist managers to improve clinical supervision and educational input in order to improve the quality of care for acutely ill patients. [source]


Multiple steady states in distillation: Effect of VL(L)E inaccuracies

AICHE JOURNAL, Issue 5 2000
Nikolaos Bekiaris
Output multiplicities in heterogeneous azeotropic distillation columns were studied. The accuracy of the thermodynamic description is a key factor that determines if multiplicities can be observed in numerical simulations. The descriptions used in the multiplicity-related literature are analyzed. The ∞/∞ analysis of Bekiaris et al. (1996) was used to check the implications of inaccuracies in the reported thermodynamics for the existence of multiplicities in azeotropic distillation. On this basis, guidelines are derived concerning which features of thermodynamic descriptions need special attention for use in multiplicity prediction and simulation. Secondly, numerical studies on output multiplicities in heterogeneous azeotropic distillation in the literature were compared to the ∞/∞ predictions wherever possible. The ∞/∞ analysis was used to derive the relations between the reported multiplicities and to identify the physical phenomena causing them. [source]


Small Retailer and Service Company Accuracy in Evaluating the Legality of Specified Practices

JOURNAL OF SMALL BUSINESS MANAGEMENT, Issue 4 2001
Robin T. Peterson
This study examined the degree to which small retail and service company managers were familiar with important federal laws. Further, it assessed differences between these two types of firms in managerial cognizance of the regulations. The findings revealed reasonable knowledge of the federal restrictions, accompanied by some important inaccuracies. Generally, small retail managers were found to be more knowledgeable than small service company managers. [source]


A survey of the quality and accuracy of information leaflets about skin cancer and sun-protective behaviour available from UK general practices and community pharmacies

JOURNAL OF THE EUROPEAN ACADEMY OF DERMATOLOGY & VENEREOLOGY, Issue 5 2009
S Nicholls
Abstract Background: Better information promotes sun protection behaviour and is associated with earlier presentation and survival for malignant melanoma. Aim: To assess the quality of patient information leaflets about skin cancer and sun-protective behaviour available from general practices and community pharmacies. Design of study: A structured review of patient information leaflets. Setting: All community pharmacies and general practices in one Primary Care Trust were invited to supply leaflets. Methods: Readability was assessed using the SMOG scoring system. Presentation and content were reviewed using the Ensuring Quality Information for Patients (EQIP) guidelines. Three consultant dermatologists assessed each leaflet for accuracy. Results: Thirty-one different patient information leaflets were returned. Thirteen (42%) were published in the previous 2 years, but 10 (32%) were over 5 years old. Nine (29%) leaflets were produced by the NHS or Health Education Authority, and 8 (27%) were linked to a commercial organization. One leaflet had readability in the primary education range (SMOG score = 6), and none was within the recommended range for health education material (SMOG score ≤ 5). Two leaflets (6%) were in the highest quartile of EQIP score for presentation and content. Five leaflets (17%) had a major inaccuracy, such as over-reliance on sun screen products instead of shade and clothing. Conclusions: Leaflets were of variable quality in presentation and content. All required a reading age higher than recommended. All leaflets with major inaccuracies had links with commercial organizations. This study raises important issues about the potential conflict between marketing and health messages in the way sun creams are promoted. Conflicts of interest: None declared [source]
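The SMOG score used in the Methods is McLaughlin's standard readability grade, computed from the count of polysyllabic words (three or more syllables) in a sample of sentences. The sketch below takes the polysyllable count as a given input rather than attempting syllable counting, which is the genuinely hard part; the sample numbers are illustrative.

```python
import math

def smog_grade(polysyllable_count, sentence_count):
    """SMOG readability grade (McLaughlin):
    grade = 1.0430 * sqrt(polysyllables * 30 / sentences) + 3.1291."""
    return 1.0430 * math.sqrt(polysyllable_count * (30.0 / sentence_count)) + 3.1291

grade = smog_grade(25, 30)   # 25 words of 3+ syllables in a 30-sentence sample
```

A grade around 8 corresponds to secondary-school reading level, which illustrates why leaflets scoring 6 or above exceed the primary-education range recommended for health education material.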