High Accuracy (high + accuracy)

[Charts omitted: Distribution by Scientific Domains; Distribution within Engineering]


Selected Abstracts


Measuring enthalpy of fast exothermal reaction with micro-reactor-based capillary calorimeter

AICHE JOURNAL, Issue 4 2010
K. Wang
Abstract This work presents a new micro-reactor-based capillary calorimeter for measuring the enthalpy of fast exothermal reactions. The calorimeter is operated continuously, and the reaction enthalpy can be measured readily from online temperatures recorded by detached sensor chips. A standard reaction system and an industrial reaction system, covering homogeneous and heterogeneous processes, were selected to test the new calorimeter. Measurements were made under nearly adiabatic conditions, and the reaction enthalpy was calculated from the temperature rise. High accuracy and good repeatability were obtained, with relative experimental errors of less than 3.5% and 2.4%, respectively. The temperature response of the calorimeter was also fast, which keeps the consumption of reactive components low. The fast, accurate measurement is attributed to the excellent mixing performance and strict plug flow in the calorimeter. © 2009 American Institute of Chemical Engineers AIChE J, 2010 [source]
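Under the near-adiabatic assumption described above, the reaction enthalpy follows from the temperature rise, the heat capacity, and the flow rates by a simple energy balance. The sketch below is illustrative only; the function name and all numbers are ours, not the authors' calibration:

```python
def reaction_enthalpy_adiabatic(t_in_c, t_out_c, cp_j_per_g_k,
                                mass_flow_g_s, molar_flow_mol_s):
    """Reaction enthalpy (J/mol) from an adiabatic temperature rise.

    Assumes negligible heat loss and complete conversion of the limiting
    reagent -- idealizations of the capillary setup in the abstract.
    """
    heat_released_w = mass_flow_g_s * cp_j_per_g_k * (t_out_c - t_in_c)
    return -heat_released_w / molar_flow_mol_s  # exothermic => negative

# Hypothetical operating point, for illustration only:
dh = reaction_enthalpy_adiabatic(25.0, 38.5, 4.18, 0.20, 2.0e-4)
print(f"Estimated reaction enthalpy: {dh / 1000:.1f} kJ/mol")  # ~ -56.4 kJ/mol
```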


High accuracy and cost-effectiveness of a biopsy-avoiding endoscopic approach in diagnosing coeliac disease

ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 1 2006
G. CAMMAROTA
Summary Background The 'immersion' technique during upper endoscopy allows the visualization of duodenal villi and the detection of total villous atrophy. Aim To evaluate the accuracy of the immersion technique in detecting total villous atrophy in suspected coeliac patients. The accuracy in diagnosing coeliac disease and the potential cost savings of a biopsy-avoiding approach, based on selection of individuals with coeliac disease-related antibodies and on endoscopic detection of absence of villi, were also analysed. Methods The immersion technique was performed in 79 patients with positive antibodies and in 105 controls. Duodenal villi were evaluated as present or absent. Results were compared with histology as the reference standard. Diagnostic approaches, including endoscopy with or without biopsy, were designed to investigate patients with coeliac disease-related antibodies and total villous atrophy. A cost-minimization analysis was performed. Results All patients with positive antibodies had coeliac disease. The sensitivity, specificity, and positive and negative predictive values of endoscopy in detecting total villous atrophy were all 100%. The sensitivity, specificity, and positive and negative predictive values of biopsy-avoiding or biopsy-including strategies in diagnosing coeliac disease when villi were absent were likewise all 100%. The biopsy-avoiding strategy was cost-sparing. Conclusions Upper endoscopy is highly accurate in detecting total villous atrophy in coeliac patients. A biopsy-avoiding approach is both accurate and cost-sparing for diagnosing coeliac disease in subjects with marked duodenal villous atrophy. [source]
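For reference, the four reported accuracy measures follow directly from a 2×2 contingency table. A minimal sketch; the counts below are patterned on the study's group sizes (79 antibody-positive patients, 105 controls) and are an assumption for illustration:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# All four values equal 1.0 exactly when fp = fn = 0, as reported:
print(diagnostic_metrics(tp=79, fp=0, fn=0, tn=105))
```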


Confocal Optical System: A Novel Noninvasive Sensor To Study Mixing

BIOTECHNOLOGY PROGRESS, Issue 5 2005
Jose R. Vallejos
A novel confocal optical system to study mixing time in small-scale bioreactors is presented. The system is designed to monitor fluorescence upon tracer addition from a localized confocal volume of 0.21 mL within a glass vessel. The key elements of the fluorescence-based confocal system are a pinhole, a lens, an avalanche photodiode (APD) detector, and light filters. The optical technique was validated by comparison with a pH-based technique. Finally, the optical sensor was tested with real cultivation medium (i.e., spent mammalian cell medium) to measure mixing time in a 12.5-mL stirred transparent vessel. High accuracy, easy interpretation of results, and low cost are the three most attractive characteristics of the sensor. Given its noninvasive nature and versatility, the results suggest that the confocal system is a promising tool for mixing time studies in stirred vessels. [source]


Source Camera Identification for Heavily JPEG Compressed Low Resolution Still Images

JOURNAL OF FORENSIC SCIENCES, Issue 3 2009
Erwin J. Alles M.Sc.
Abstract: In this research, we examined whether fixed pattern noise, or more specifically Photo Response Non-Uniformity (PRNU), can be used to identify the source camera of heavily JPEG compressed digital photographs of resolution 640 × 480 pixels. We extracted PRNU patterns from both reference and questioned images using a two-dimensional Gaussian filter and compared these patterns by calculating the correlation coefficient between them. Both the closed- and open-set problems were addressed; in the closed-set problem, high accuracies were obtained: 83% for single images and 100% for around 20 simultaneously identified questioned images. The correct source camera was chosen from a set of 38 cameras of four different types. For the open-set problem, decision levels were obtained for several numbers of simultaneously identified questioned images. The corresponding false rejection rates were unsatisfactory for single images but improved for simultaneous identification of multiple images. [source]
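A minimal sketch of the pipeline the abstract describes: a Gaussian-filter noise residual serves as the PRNU estimate, and patterns are compared by correlation. The filter width and helper names are placeholders, not the authors' parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def prnu_residual(image, sigma=1.0):
    """Noise residual used as a PRNU estimate: the image minus its smooth part."""
    img = image.astype(np.float64)
    return img - gaussian_filter(img, sigma)  # sigma is a placeholder value

def correlation(a, b):
    """Sample correlation coefficient between two flattened noise patterns."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Reference pattern: average the residuals of several images from one camera;
# a questioned image is then attributed to the camera with the highest correlation.
```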


Profile-Likelihood Inference for Highly Accurate Diagnostic Tests

BIOMETRICS, Issue 4 2002
John V. Tsimikas
Summary. We consider profile-likelihood inference based on the multinomial distribution for assessing the accuracy of a diagnostic test. The methods apply to ordinal rating data when accuracy is assessed using the area under the receiver operating characteristic (ROC) curve. Simulation results suggest that the derived confidence intervals have acceptable coverage probabilities, even when sample sizes are small and the diagnostic tests have high accuracies. The methods extend to stratified settings and situations in which the ratings are correlated. We illustrate the methods using data from a clinical trial on the detection of ovarian cancer. [source]
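The accuracy index in question, the area under the ROC curve for ordinal ratings, can be estimated empirically via the Mann–Whitney statistic. This is the standard nonparametric estimator, not the paper's profile-likelihood machinery:

```python
import numpy as np

def auc_mann_whitney(diseased, healthy):
    """Empirical ROC area from ordinal ratings; ties count 1/2, as usual."""
    d = np.asarray(diseased, dtype=float)[:, None]
    h = np.asarray(healthy, dtype=float)[None, :]
    wins = (d > h).sum() + 0.5 * (d == h).sum()
    return float(wins / (d.shape[0] * h.shape[1]))

# Hypothetical 5-point ordinal ratings for diseased and healthy subjects:
print(auc_mann_whitney([3, 4, 5, 5, 4], [1, 2, 2, 3, 1]))  # 0.98
```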


Sentiment Classification of Movie Reviews Using Contextual Valence Shifters

COMPUTATIONAL INTELLIGENCE, Issue 2 2006
Alistair Kennedy
We present two methods for determining the sentiment expressed by a movie review. The semantic orientation of a review can be positive, negative, or neutral. We examine the effect of valence shifters on classifying the reviews. We examine three types of valence shifters: negations, intensifiers, and diminishers. Negations are used to reverse the semantic polarity of a particular term, while intensifiers and diminishers are used to increase and decrease, respectively, the degree to which a term is positive or negative. The first method classifies reviews based on the number of positive and negative terms they contain. We use the General Inquirer to identify positive and negative terms, as well as negation terms, intensifiers, and diminishers. We also use positive and negative terms from other sources, including a dictionary of synonym differences and a very large Web corpus. To compute corpus-based semantic orientation values of terms, we use their association scores with a small group of positive and negative terms. We show that extending the term-counting method with contextual valence shifters improves the accuracy of the classification. The second method uses a Machine Learning algorithm, Support Vector Machines. We start with unigram features and then add bigrams that consist of a valence shifter and another word. The accuracy of classification is very high, and the valence shifter bigrams slightly improve it. The features that contribute to the high accuracy are the words in the lists of positive and negative terms. Previous work focused on either the term-counting method or the Machine Learning method. We show that combining the two methods achieves better results than either method alone. [source]
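A toy version of the first (term-counting) method with contextual valence shifters might look as follows. The mini-lexicons and weights are invented for illustration; the paper draws its terms from the General Inquirer and other sources:

```python
# Hypothetical mini-lexicons standing in for the General Inquirer lists:
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "boring", "awful"}
NEGATIONS = {"not", "never", "no"}
INTENSIFIERS = {"very": 2.0, "extremely": 3.0}
DIMINISHERS = {"somewhat": 0.5, "barely": 0.25}

def sentiment_score(tokens):
    """Term counting with valence shifters: negation flips the sign of the
    next sentiment-bearing word; intensifiers/diminishers scale it."""
    score, sign, weight = 0.0, 1.0, 1.0
    for tok in tokens:
        if tok in NEGATIONS:
            sign = -1.0
        elif tok in INTENSIFIERS:
            weight = INTENSIFIERS[tok]
        elif tok in DIMINISHERS:
            weight = DIMINISHERS[tok]
        elif tok in POSITIVE or tok in NEGATIVE:
            polarity = 1.0 if tok in POSITIVE else -1.0
            score += sign * weight * polarity
            sign, weight = 1.0, 1.0  # each shifter applies to one term only
    return score  # > 0 positive, < 0 negative, 0 neutral

print(sentiment_score("the movie was not very good".split()))  # -2.0
```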


Fast High-Dimensional Filtering Using the Permutohedral Lattice

COMPUTER GRAPHICS FORUM, Issue 2 2010
Andrew Adams
Abstract Many useful algorithms for processing images and geometry fall under the general framework of high-dimensional Gaussian filtering. This family of algorithms includes bilateral filtering and non-local means. We propose a new way to perform such filters using the permutohedral lattice, which tessellates high-dimensional space with uniform simplices. Our algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality. Furthermore, it is parameter-free, apart from the filter size, and achieves a consistently high accuracy relative to ground truth (> 45 dB). We use this to demonstrate a number of interactive-rate applications of filters in up to eight dimensions. [source]
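For intuition, the brute-force O(n²) filter that the permutohedral lattice accelerates can be written directly; `positions` stacks the d-dimensional feature coordinates (for a bilateral filter: x, y, and intensity). This is the naive baseline, not the paper's lattice algorithm:

```python
import numpy as np

def gaussian_filter_nd(positions, values, sigma):
    """Brute-force high-dimensional Gaussian filter (the O(n^2) baseline).

    positions: (n, d) feature coordinates; values: (n, k) signal to smooth.
    The permutohedral lattice approximates this in time linear in n and
    polynomial in d.
    """
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2 / sigma**2)            # pairwise Gaussian weights
    return (w @ values) / w.sum(axis=1, keepdims=True)

# Tiny example: five points in a 3-D feature space, scalar values as (n, 1):
pos = np.random.default_rng(0).standard_normal((5, 3))
vals = np.arange(5.0)[:, None]
print(gaussian_filter_nd(pos, vals, sigma=1.0))
```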


Potential role of colour-Doppler cystosonography with echocontrast in the screening and follow-up of vesicoureteral reflux

ACTA PAEDIATRICA, Issue 11 2000
G Ascenti
Primary vesicoureteral reflux is a predisposing factor for urinary tract infections in children. The first-choice technique for the diagnosis of vesicoureteral reflux is voiding cystourethrography, followed by cystoscintigraphy; the latter has the advantage of a lower radiation dose to the patient, but it does not allow morphological evaluation of the bladder or grading of the reflux. Colour-Doppler cystosonography with echocontrast is a recently introduced method for imaging vesicoureteral reflux. The aim of our study was to evaluate the role of colour-Doppler cystosonography with echocontrast in the diagnosis of vesicoureteral reflux. Twenty children (11M, 9F) aged between 0.4 and 4.9 y underwent colour-Doppler cystosonography using a diluted solution of Levovist® (Schering, Germany) after filling the bladder with saline. In all patients, vesicoureteral reflux diagnosis and grading had been performed previously by voiding cystourethrography within 5 d of ultrasonography. Our data showed high accuracy in the detection of medium to severe vesicoureteral reflux (grades III-V), confirmed by radiological features in 9/9 patients. Conversely, in the 11 patients with mild vesicoureteral reflux (grades I-II), the technique showed very low sensitivity, allowing diagnosis in only four cases. Conclusions: Colour-Doppler cystosonography, because of the absence of ionizing radiation, has great advantages, particularly in patients needing prolonged monitoring. Consistent with experiences reported in the literature, this technique has a role in the diagnosis of vesicoureteral reflux. Our group chooses colour-Doppler cystosonography for the follow-up of medium-severe grade vesicoureteral reflux already diagnosed by radiology and/or scintigraphy. Cystoscintigraphy is employed only to confirm cases that are negative at ultrasonography. [source]


Decentralized Parametric Damage Detection Based on Neural Networks

COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2002
Zhishen Wu
In this paper, based on the concept of decentralized information structures and artificial neural networks, a decentralized parametric identification method for damage detection of structures with multiple degrees of freedom (MDOF) is developed. First, a decentralized approach is presented for damage detection of substructures of an MDOF structural system by using neural networks. The displacement and velocity measurements from a substructure of a healthy structural system and the restoring force corresponding to this substructure are used to train the decentralized detection neural networks for the purpose of identifying the corresponding substructure. By using the trained decentralized detection neural networks, the difference in the interstory restoring force between the damaged and undamaged substructures can be calculated. An evaluation index, the relative root-mean-square (RRMS) error, is introduced to evaluate the condition of each substructure for the purpose of health monitoring. Although neural networks have been widely used for nonparametric identification, in this paper the decentralized parametric evaluation neural networks for substructures are trained for parametric identification. Based on the trained decentralized parametric evaluation neural networks and the RRMS error of the substructures, the stiffness parameter of each subsystem can be identified with high accuracy. The effectiveness of the decentralized parametric identification is evaluated through numerical simulations. It is shown that the decentralized parametric evaluation method has the potential to become a practical damage detection tool for structure-unknown smart civil structures. [source]
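A sketch of the RRMS evaluation index as described: the residual between the restoring force predicted by the healthy-state network and the measured force, normalized by the RMS of the measured signal. Variable names are ours:

```python
import numpy as np

def rrms_error(force_predicted, force_measured):
    """Relative root-mean-square error; near zero for a healthy substructure,
    inflated when damage changes the interstory restoring force."""
    pred = np.asarray(force_predicted, dtype=float)
    meas = np.asarray(force_measured, dtype=float)
    return float(np.sqrt(np.mean((pred - meas) ** 2)) /
                 np.sqrt(np.mean(meas ** 2)))
```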


Single-beat estimation of the left ventricular end-systolic pressure–volume relationship in patients with heart failure

ACTA PHYSIOLOGICA, Issue 1 2010
E. A. Ten Brinke
Abstract Aim: The end-systolic pressure–volume relationship (ESPVR) constructed from multiple pressure–volume (PV) loops acquired during load intervention is an established method to assess left ventricular (LV) contractility. We tested the accuracy of simplified single-beat (SB) ESPVR estimation in patients with severe heart failure. Methods: Nineteen heart failure patients (NYHA III-IV) scheduled for surgical ventricular restoration and/or restrictive mitral annuloplasty and 12 patients with normal LV function scheduled for coronary artery bypass grafting were included. PV signals were obtained before and after cardiac surgery by pressure-conductance catheters and gradual pre-load reductions by vena cava occlusion (VCO). The SB method was applied to the first beat of the VCO run. Accuracy was quantified by the root-mean-square error (RMSE) between ESPVRSB and the gold-standard ESPVRVCO. In addition, we compared slopes (EES) and intercepts (end-systolic volume at multiple pressure levels, 70–100 mmHg: ESV70–ESV100) of ESPVRSB vs. ESPVRVCO by Bland–Altman analyses. Results: RMSE was 1.7 ± 1.0 mmHg, was not significantly different between groups, and did not depend on end-diastolic volume, indicating equal, high accuracy over a wide volume range. SB-predicted EES had a bias of −0.39 mmHg mL−1 and limits of agreement (LoA) of −2.0 to +1.2 mmHg mL−1. SB-predicted ESVs at each pressure level showed small bias (range: −10.8 to +9.4 mL) and narrow LoA. Two-way ANOVA indicated that differences between groups did not depend on the method. Conclusion: Our findings, obtained in hearts spanning a wide range of sizes and conditions, support the use of the SB method. This method ultimately facilitates less invasive ESPVR estimation, particularly when coupled with emerging noninvasive techniques to measure LV pressures and volumes. [source]
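The Bland–Altman quantities reported above (bias and limits of agreement) are straightforward to compute. A generic sketch, assuming paired SB and VCO estimates; the numbers are hypothetical:

```python
import numpy as np

def bland_altman(single_beat, vco):
    """Bias and 95% limits of agreement between paired estimates."""
    diff = np.asarray(single_beat) - np.asarray(vco)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired EES estimates (mmHg/mL) from the two methods:
print(bland_altman([2.1, 1.4, 0.9, 1.8], [2.4, 1.6, 1.2, 1.7]))
```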


Assessment of Optical Coherence Tomography Imaging in the Diagnosis of Non-Melanoma Skin Cancer and Benign Lesions Versus Normal Skin: Observer-Blinded Evaluation by Dermatologists and Pathologists

DERMATOLOGIC SURGERY, Issue 6 2009
METTE MOGENSEN MD
BACKGROUND Optical coherence tomography (OCT) is an optical imaging technique that may be useful in the diagnosis of non-melanoma skin cancer (NMSC). OBJECTIVES To describe OCT features in NMSC such as actinic keratosis (AK) and basal cell carcinoma (BCC) and in benign lesions, and to assess the diagnostic accuracy of OCT in differentiating NMSC from benign lesions and normal skin. METHODS AND MATERIALS OCT and polarization-sensitive (PS) OCT images from 104 patients were studied. Observer-blinded evaluation of OCT images from 64 BCCs, one baso-squamous carcinoma, 39 AKs, two malignant melanomas, and nine benign lesions, and of 105 OCT images from perilesional skin, was performed; 50 OCT images of NMSC and 50 PS-OCT images of normal skin were evaluated twice. RESULTS Sensitivity was 79% to 94% and specificity 85% to 96% in differentiating normal skin from lesions. Important features were the absence of well-defined layering in OCT and PS-OCT images and dark lobules in BCC. Discrimination of AK from BCC had an error rate of 50% to 52%. CONCLUSION OCT features in NMSC are identified, but AK and BCC cannot be differentiated. OCT diagnosis is less accurate than clinical diagnosis, but high accuracy in distinguishing lesions from normal skin, crucial for delineating tumor borders, was obtained. [source]


The implications of data selection for regional erosion and sediment yield modelling

EARTH SURFACE PROCESSES AND LANDFORMS, Issue 15 2009
Joris de Vente
Abstract Regional environmental models often require detailed data on topography, land cover, soil, and climate. Remote sensing derived data form an increasingly important source of information for these models. Yet, it is often not easy to decide what the most feasible source of information is and how different input data affect model outcomes. This paper compares the quality and performance of remote sensing derived data for regional soil erosion and sediment yield modelling with the WATEM-SEDEM model in south-east Spain. An ASTER-derived digital elevation model (DEM) was compared with the DEM obtained from the Shuttle Radar Topography Mission (SRTM), and land cover information from the CORINE database (CLC2000) was compared with classified ASTER satellite images. The SRTM DEM provided more accurate estimates of slope gradient and upslope drainage area than the ASTER DEM. The classified ASTER images provided a land cover map of high accuracy (90%), and owing to their higher resolution they showed a more fragmented landscape than the CORINE land cover data. Notwithstanding the differences in quality and level of detail, CORINE and ASTER land cover data in combination with the SRTM DEM or ASTER DEM allowed accurate predictions of sediment yield at the catchment scale. Although the absolute values of erosion and sediment deposition were different, the qualitative spatial pattern of the major sources and sinks of sediments was comparable, irrespective of the DEM and land cover data used. However, because of its lower accuracy, the quantitative spatial pattern of predictions with the ASTER DEM will be worse than with the SRTM DEM. Therefore, the SRTM DEM in combination with ASTER-derived land cover data presumably provides the most accurate spatially distributed estimates of soil erosion and sediment yield. Nevertheless, model calibration is required for each data set and resolution, and validation of the spatial pattern of predictions is urgently needed. Copyright © 2009 John Wiley & Sons, Ltd. [source]


Parameter identification of framed structures using an improved finite element model-updating method – Part I: formulation and verification

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 5 2007
Eunjong Yu
Abstract In this study, we formulate an improved finite element model-updating method to address the numerical difficulties associated with ill conditioning and rank deficiency. These complications are frequently encountered in model-updating problems, and occur when the identification of a larger number of physical parameters is attempted than is warranted by the information content of the experimental data. Based on the standard bounded variables least-squares (BVLS) method, which incorporates the usual upper/lower-bound constraints, the proposed method (henceforth referred to as BVLSrc) is equipped with novel sensitivity-based relative constraints. The relative constraints are automatically constructed using the correlation coefficients between the sensitivity vectors of the updating parameters. The veracity and effectiveness of BVLSrc are investigated through the simulated, yet realistic, forced-vibration testing of a simple framed structure using its frequency response function as input data. By comparing the results of BVLSrc with those obtained via the (competing) pure BVLS and regularization methods, we show that BVLSrc and regularization methods yield approximate solutions with similar and sufficiently high accuracy, while the pure BVLS method yields physically inadmissible solutions. We further demonstrate that BVLSrc is computationally more efficient because, unlike regularization methods, it does not require laborious a priori calculations to determine an optimal penalty parameter, and its results are far less sensitive to the initial estimates of the updating parameters. Copyright © 2006 John Wiley & Sons, Ltd. [source]
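The BVLS core of such an update step can be sketched with an off-the-shelf bounded least-squares solver. The sensitivity matrix and residual below are synthetic, and the paper's novel sensitivity-based relative constraints are not reproduced:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Solve min ||S dp - r|| subject to box bounds on the parameter updates dp.
# S plays the role of a sensitivity matrix of FRF residuals w.r.t. the
# updating parameters; both S and r are synthetic here.
rng = np.random.default_rng(0)
S = rng.standard_normal((40, 6))
true_dp = np.array([0.1, -0.05, 0.2, 0.0, -0.1, 0.05])
r = S @ true_dp + 0.01 * rng.standard_normal(40)

res = lsq_linear(S, r, bounds=(-0.3, 0.3))  # upper/lower bounds keep dp physical
print(res.x)  # close to true_dp
```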


Macro–micro analysis method for wave propagation in stochastic media

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 4 2006
T. Ichimura
Abstract This paper presents a new analysis method, called the macro–micro analysis method (MMAM), for numerical simulation of wave propagation in stochastic media, which can be used to predict the distribution of earthquake strong motion with high accuracy and spatial resolution. The MMAM takes advantage of the bounding medium theory (BMT) and the singular perturbation expansion (SPE). BMT resolves the uncertainty of soil and crust structures by obtaining optimistic and pessimistic estimates of the expected strong motion distribution. SPE leads to efficient multi-scale analysis that reduces the huge amount of computation. The MMAM solution is given as the sum of waves of low resolution covering a whole city and waves of high resolution for each city portion. This paper presents BMT and SPE along with the formulation of the MMAM for wave propagation in three-dimensional elastic media. Application examples are presented to verify the validity of the MMAM and demonstrate the potential usefulness of this approach. In a companion paper (Earthquake Engng. Struct. Dyn., this issue) application examples of earthquake strong motion prediction are also presented. Copyright © 2005 John Wiley & Sons, Ltd. [source]


A perturbation analysis of harmonic generation from saturated elements in power systems

ELECTRICAL ENGINEERING IN JAPAN, Issue 4 2010
Teruhisa Kumano
Abstract Nonlinear phenomena such as saturation of magnetic flux have considerable effects in power systems analysis. It has been reported that a failure in a real 500-kV system triggered islanding operation, where the resulting even harmonics caused malfunctions in protective relays, and that the major origin of this wave distortion was unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze phenomena of this type, but it has basically two shortcomings. One is that time simulation takes too much computing time in the vicinity of inflection points on the saturation characteristic curve, because iterative procedures such as Newton–Raphson (N-R) must be used and such methods tend to get caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not aid an intuitive understanding of the studied phenomenon, because all of the nonlinear equations are treated in matrix form and are not properly divided into understandable parts, as is done in linear systems. This paper proposes a new computation scheme based on the so-called perturbation method. Magnetic saturation of the iron cores in a generator and a transformer is taken into account. The proposed method is specifically designed to deal with the first shortcoming of the N-R-based time simulation stated above: it does not use an iterative process to reduce the equation residual, but uses a perturbation series, so it is free of the ill-conditioning problem. The user need only calculate the perturbation terms one by one until the necessary accuracy is attained. In the numerical example treated in the present paper, first-order perturbation achieves reasonably high accuracy, which means very fast computing time. In a numerical study, three nonlinear elements are considered. The calculation results are almost identical to those of the conventional N-R-based time simulation, which shows the validity of the method. The proposed method can be effectively used in screening where many case studies are needed. © 2009 Wiley Periodicals, Inc. Electr Eng Jpn, 170(4): 35–42, 2010; Published online in Wiley InterScience (www.interscience.wiley.com). DOI 10.1002/eej.20895 [source]
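A toy illustration of the perturbation idea on a scalar nonlinear equation (not the paper's power-system model): the series terms are computed one after another, with no iterative residual reduction and hence no ill-conditioned hunting. The example equation and its series coefficients are ours:

```python
# Solve x + eps*x**3 = 1. Writing x = x0 + eps*x1 + eps**2*x2 + ... and
# matching powers of eps gives x0 = 1, x1 = -1, x2 = 3, x3 = -12.
def x_perturbation(eps, order=3):
    coeffs = [1.0, -1.0, 3.0, -12.0]  # precomputed series terms
    return sum(c * eps**k for k, c in enumerate(coeffs[:order + 1]))

def x_newton(eps, x=1.0, tol=1e-12):
    """Conventional Newton-Raphson iteration, for comparison."""
    for _ in range(50):
        f = x + eps * x**3 - 1.0
        if abs(f) < tol:
            break
        x -= f / (1.0 + 3.0 * eps * x**2)
    return x

eps = 0.1
print(x_perturbation(eps), x_newton(eps))  # ~0.918 vs ~0.9217
```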


Metastable zone determination of lipid systems: Ultrasound velocity versus optical back-reflectance measurements

EUROPEAN JOURNAL OF LIPID SCIENCE AND TECHNOLOGY, Issue 5 2010
Kesarin Chaleepa
Abstract The metastable zone width (MZW) of a multi-component system, as influenced by the process parameters cooling rate, agitation speed, and additive concentration, was determined via ultrasound velocity measurements. The results were compared with those obtained by optical back-reflectance measurements (ORM), using coconut oil as a model substance. Increasing the cooling rate shifted the nucleation point to lower temperatures. This tendency was better visualized by the ultrasonic curves, while a significant disturbance of the ORM signal was observed. Agitation led to an increase of the nucleation temperature and hence a narrower metastable zone. The influence of an additive on the MZW was found to depend strongly on its concentration. The MZW detected by the ultrasound technique was narrower than that obtained by the ORM method, indicating the faster response of the ultrasound technique to the phase transition. Another advantage of the ultrasound technique was the in situ evaluation of the experimental data, while ORM needed a linear fit to estimate the saturation temperature. Furthermore, ultrasound velocity measurements are based on density determination of the medium, whereas the ORM sensor can detect only particles that are located within the measuring zone and possess a well-defined size. Practical applications: The MZW is one of the most important parameters determining the characteristics of crystalline products. However, a technique suitable for MZW detection in fat systems has rarely been reported, owing to the difficulties of dealing with natural fats. The findings of this study can greatly help those involved in the field of fat crystallization, from both the academic and the practical point of view, because new and promising techniques for the online and in situ determination of the MZW of fats, with high accuracy and reproducibility under most process conditions, are clarified in this work. The reader can easily follow the procedure developed in this paper, and information about the influence of process parameters and additives on the MZW is also included. [source]


Manufacturing of Net-Shape Reaction-Bonded Ceramic Microparts by Low-Pressure Injection Molding

ADVANCED ENGINEERING MATERIALS, Issue 5 2009
Nadja Schlechtriemen
Reaction-bonded oxide ceramics based on intermetallic compounds can compensate the sintering shrinkage completely, owing to the large increase in volume caused by oxidation. Using low-pressure injection molding (LPIM) for shaping ceramics avoids needless material loss and allows the manufacture of complex-shaped structures. The combination of the two, reaction-bonded ceramics and LPIM processing, enables the manufacture of ceramic microparts with high accuracy and replication quality. [source]


Selecting discriminant function models for predicting the expected richness of aquatic macroinvertebrates

FRESHWATER BIOLOGY, Issue 2 2006
JOHN VAN SICKLE
Summary 1. The predictive modelling approach to bioassessment estimates the macroinvertebrate assemblage expected at a stream site if it were in a minimally disturbed reference condition. The difference between expected and observed assemblages then measures the departure of the site from reference condition. 2. Most predictive models employ site classification, followed by discriminant function (DF) modelling, to predict the expected assemblage from a suite of environmental variables. Stepwise DF analysis is normally used to choose a single subset of DF predictor variables with a high accuracy for classifying sites. An alternative is to screen all possible combinations of predictor variables, in order to identify several 'best' subsets that yield good overall performance of the predictive model (a generic sketch follows this summary). 3. We applied best-subsets DF analysis to assemblage and environmental data from 199 reference sites in Oregon, U.S.A. Two sets of 66 best DF models containing between one and 14 predictor variables (that is, having model orders from one to 14) were developed, for five-group and 11-group site classifications. 4. Resubstitution classification accuracy of the DF models increased consistently with model order, but cross-validated classification accuracy did not improve beyond seventh- or eighth-order models, suggesting that the larger models were overfitted. 5. Overall predictive model performance at model training sites, measured by the root-mean-squared error of the observed/expected species richness ratio, also improved steadily with DF model order. But high-order DF models usually performed poorly at an independent set of validation sites, another sign of model overfitting. 6. Models selected by stepwise DF analysis showed evidence of overfitting and were outperformed by several of the best-subsets models. 7. The group separation strength of a DF model, as measured by Wilks' lambda, was more strongly correlated with overall predictive model performance at training sites than was DF classification accuracy. 8. Our results suggest improved strategies for developing reliable, parsimonious predictive models. We emphasise the value of independent validation data for obtaining a realistic picture of model performance. We also recommend assessing not just one or two, but several, candidate models based on their overall performance as well as the performance of their DF component. 9. We provide links to our free software for stepwise and best-subsets DF analysis. [source]
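The best-subsets screening referred to in point 2, scoring every predictor combination of a given order by cross-validated DF classification accuracy, can be sketched generically as below. The paper additionally judges candidates by overall O/E performance and Wilks' lambda:

```python
from itertools import combinations

from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def best_subsets_df(X, groups, names, order, keep=5):
    """Rank all predictor subsets of size `order` by cross-validated
    classification accuracy of a linear discriminant function."""
    results = []
    for cols in combinations(range(X.shape[1]), order):
        acc = cross_val_score(LinearDiscriminantAnalysis(),
                              X[:, list(cols)], groups, cv=5).mean()
        results.append((acc, [names[c] for c in cols]))
    return sorted(results, reverse=True)[:keep]

# X: (sites x environmental variables) array; groups: site classification labels.
```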


Measuring metabolic rate in the field: the pros and cons of the doubly labelled water and heart rate methods

FUNCTIONAL ECOLOGY, Issue 2 2004
P. J. Butler
Summary 1. Measuring the metabolic rate of animals in the field (FMR) is central to the work of ecologists in many disciplines. In this article we discuss the pros and cons of the two most commonly used methods for measuring FMR. 2. Both methods are constantly under development, but at the present time they can only accurately be used to estimate the mean rate of energy expenditure of groups of animals. The doubly labelled water (DLW) method uses stable isotopes of hydrogen and oxygen to trace the flow of water and carbon dioxide through the body over time. From these data, it is possible to derive a single estimate of the rate of oxygen consumption (VO2) for the duration of the experiment. The duration of the experiment depends on the rate of flow of the isotopes of oxygen and hydrogen through the body, which in turn depends on the animal's size, ranging from 24 h for small vertebrates up to 28 days in humans. 3. This technique has been used widely, partly as a result of its relative simplicity and potentially low cost, though there is some uncertainty over the determination of the standard error of the estimate of mean VO2. 4. The heart rate (fH) method depends on the physiological relationship between heart rate and VO2. 5. If these two quantities are calibrated against each other under controlled conditions, fH can then be measured in free-ranging animals and used to estimate VO2. 6. The latest generation of small implantable data loggers makes it possible to measure fH for over a year on a very fine temporal scale, though the current size of the data loggers limits the size of experimental animals to around 1 kg. However, externally mounted radio-transmitters are now sufficiently small to be used with animals of less than 40 g body mass. This technique is gaining in popularity owing to its high accuracy and versatility, though the logistic constraint of performing calibrations can make its use a relatively extended process. [source]


Optimal Control of Rigid-Link Manipulators by Indirect Methods

GAMM - MITTEILUNGEN, Issue 1 2008
Rainer Callies
Abstract The present paper is both a survey and a research paper on the treatment of optimal control problems for rigid-link manipulators by indirect methods. Maximum Principle based approaches provide an excellent tool for calculating optimal reference trajectories for multi-link manipulators with high accuracy. Their major drawback has been the need to explicitly formulate the complicated system of adjoint differential equations and to apply the full apparatus of optimal control theory, which is necessary in order to convert the optimal control problem into a piecewise defined, nonlinear multi-point boundary value problem. Accurate and efficient access to first- and higher-order derivatives is crucial. The approach described in this paper generates all the derivative information recursively, simultaneously with the recursive formulation of the equations of motion. Nonlinear state and control constraints are treated without any simplifications by transforming them into sequences of systems of linear equations. By these means, the modeling of the complete optimal control problem and the accompanying boundary value problem is automated to a great extent. The fast numerical solution is obtained by the advanced multiple shooting method JANUS. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]


A two-step procedure for constructing confidence intervals of trait loci with application to a rheumatoid arthritis dataset

GENETIC EPIDEMIOLOGY, Issue 1 2006
Charalampos Papachristou
Abstract Preliminary genome screens are usually succeeded by fine mapping analyses focusing on the regions that signal linkage. It is advantageous to reduce the size of the regions where follow-up studies are performed, since this helps better tackle, among other things, the multiplicity adjustment issue associated with them. We describe a two-step approach that uses a confidence set inference procedure as a tool for intermediate mapping (between preliminary genome screening and fine mapping) to further localize disease loci. Apart from the usual Hardy-Weinberg and linkage equilibrium assumptions, the only other assumption of the proposed approach is that each region of interest houses at most one of the disease-contributing loci. Through a simulation study with several two-locus disease models, we demonstrate that our method can isolate the position of trait loci with high accuracy. Application of this two-step procedure to the data from the Arthritis Research Campaign National Repository also led to highly encouraging results. The method not only successfully localized a well-characterized trait-contributing locus on chromosome 6, but also placed its position in narrower regions when compared to their LOD support interval counterparts based on the same data. Genet. Epidemiol. 30:18–29, 2006. © 2005 Wiley-Liss, Inc. [source]


Dating methods for sediments of caves and rockshelters with examples from the Mediterranean Region

GEOARCHAEOLOGY: AN INTERNATIONAL JOURNAL, Issue 4 2001
H. P. Schwarcz
A wide range of potential dating methods may be applied to archaeological deposits found in caves and rockshelters, depending on the nature and age range of the deposit. Organic sediments, including faunal and floral material, can be dated by radiocarbon (AMS and high-sensitivity beta-counting). Many karstic features contain speleothems, which can be dated with high accuracy by U-series. Wind-blown detritus, where it is the dominant constituent of the cave deposits, can be dated by luminescence (TL, OSL, or IRSL), taking care to avoid material fallen into the deposits from the shelter/cave walls. Fireplaces contain burned rocks (including stone artifacts) that can be dated by TL. Enamel from the teeth of mammals is present at most sites, representing either animal residents of the shelter or residues from food brought to the shelter by human residents. Electron spin resonance (ESR) dating of enamel is applicable over a wide time range, with high accuracy and reasonable precision where uranium accumulation in teeth is low, but with lower precision where the uranium content of teeth is high. In general, multiple dating methods applied to a site may resolve ambiguities arising from uncertain model assumptions in some dating methods. © 2001 John Wiley & Sons, Inc. [source]


2-D/3-D multiply transmitted, converted and reflected arrivals in complex layered media with the modified shortest path method

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2009
Chao-Ying Bai
SUMMARY Grid-cell-based schemes for tracing seismic arrivals, such as the finite difference eikonal equation solver or the shortest path method (SPM), are conventionally confined to locating first arrivals only. However, later arrivals are numerous and sometimes of greater amplitude than the first arrivals, making them valuable information with the potential to be used for precise earthquake location, high-resolution seismic tomography, real-time automatic onset picking, and identification of multiple events in seismic exploration data. The purpose of this study is to introduce a modified SPM (MSPM) for tracking multiple arrivals comprising any combination of transmissions, conversions and reflections in complex 2-D/3-D layered media. A practical approach known as the multistage scheme is incorporated into the MSPM to propagate seismic wave fronts from one interface (or subsurface structure for 3-D applications) to the next. By treating each layer that the wave front enters as an independent computational domain, one obtains a transmitted and/or converted branch of later arrivals by reinitializing it in the adjacent layer, and a reflected and/or converted branch of later arrivals by reinitializing it in the incident layer. A simple local grid refinement scheme at the layer interface is used to maintain the same accuracy as in the one-stage MSPM application of tracing first arrivals. Benchmark tests against the multistage fast marching method (FMM) are undertaken to assess the solution accuracy and the computational efficiency. Several examples are presented that demonstrate the viability of the multistage MSPM in highly complex layered media. Even in the presence of velocity variations, such as in the Marmousi model, or interfaces exhibiting relatively high curvature, later arrivals composed of any combination of transmitted, converted and reflected events are tracked accurately. This is because the multistage MSPM retains the desirable properties of the single-stage MSPM: high computational efficiency and high accuracy compared with the multistage FMM scheme. [source]
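The core of any SPM variant is a shortest-path sweep over a traveltime graph; a minimal first-arrival sketch follows. The multistage re-initialization at interfaces, which produces the transmitted, converted and reflected later arrivals, is the paper's contribution and is not shown:

```python
import heapq

def first_arrivals(nodes, edges, source):
    """Dijkstra shortest-path traveltimes on a graph of grid nodes.

    edges maps a node to a list of (neighbour, dt) pairs, where dt is the
    edge length divided by the local velocity.
    """
    t = {n: float("inf") for n in nodes}
    t[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        tu, u = heapq.heappop(heap)
        if tu > t[u]:
            continue  # stale entry
        for v, dt in edges.get(u, []):
            if tu + dt < t[v]:
                t[v] = tu + dt
                heapq.heappush(heap, (tu + dt, v))
    return t
```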


Surface deformation due to loading of a layered elastic half-space: a rapid numerical kernel based on a circular loading element

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 1 2007
E. Pan
SUMMARY This study is motivated by a desire to develop a fast numerical algorithm for computing the surface deformation field induced by surface pressure loading on a layered, isotropic, elastic half-space. The approach that we pursue here is based on a circular loading element. That is, an arbitrary surface pressure field applied within a finite surface domain will be represented by a large number of circular loading elements, all with the same radius, in which the applied downwards pressure (normal stress) is piecewise uniform: that is, the load within each individual circle is laterally uniform. The key practical requirement associated with this approach is that we need to be able to solve for the displacement field due to a single circular load, at very large numbers of points (or 'stations'), at very low computational cost. This elemental problem is axisymmetric, and so the displacement vector field consists of radial and vertical components, both of which are functions only of the radial coordinate r. We achieve high computational speeds using a novel two-stage approach that we call the sparse evaluation and massive interpolation (SEMI) method. First, we use a high accuracy but computationally expensive method to compute the displacement vectors at a limited number of r values (called control points or knots), and then we use a variety of fast interpolation methods to determine the displacements at much larger numbers of intervening points. The accurate solutions achieved at the control points are framed in terms of cylindrical vector functions, Hankel transforms and propagator matrices. Adaptive Gauss quadrature is used to handle the oscillatory nature of the integrands in an optimal manner. To extend these exact solutions via interpolation we divide the r-axis into three zones, and employ a different interpolation algorithm in each zone. The magnitude of the errors associated with the interpolation is controlled by the number, M, of control points. For M = 54, the maximum RMS relative error associated with the SEMI method is less than 0.2 per cent, and it is possible to evaluate the displacement field at 100 000 stations about 1200 times faster than if the direct (exact) solution was evaluated at each station; for M = 99, which corresponds to a maximum RMS relative error less than 0.03 per cent, the SEMI method is about 700 times faster than the direct solution. [source]
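The SEMI idea itself is compact: evaluate the expensive solution only at M knots, then interpolate to all stations. In the sketch below the "expensive" kernel is a stand-in placeholder, not the Hankel-transform/propagator-matrix solution, and a single cubic spline replaces the paper's three-zone interpolation:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def expensive_u(r):
    """Placeholder displacement profile standing in for the exact solution."""
    return np.exp(-r) / (1.0 + r)

M = 54
knots = np.geomspace(1e-3, 1e2, M)       # sparse evaluation at control points
spline = CubicSpline(knots, expensive_u(knots))

stations = np.geomspace(1e-3, 1e2, 100_000)
u_fast = spline(stations)                # massive interpolation step

# Error check against the direct evaluation (only affordable for a toy kernel):
rel_err = np.abs(u_fast - expensive_u(stations)) / np.abs(expensive_u(stations))
print(f"max relative error: {rel_err.max():.2e}")
```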


The high-resolution gravimetric geoid of Iberia: IGG2005

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 3 2005
V. Corchete
SUMMARY It is well known that orthometric heights can be obtained without levelling by using ellipsoidal and geoidal heights. For engineering purposes, these orthometric heights must be determined with high accuracy. For this reason, the determination of a high-resolution geoid is necessary. In Iberia, since the publication of the most recent geoid (IBERGEO95), a new geopotential model has become available (EIGEN-CG01C, released on 2004 October 29) and a new high-resolution digital terrain model (SRTM 90M, obtained from the Shuttle Radar Topography Mission) has been developed for the Earth. Logically, these new data represent improvements that must be included in a new geoid of Iberia. With this goal in mind, we have carried out a new gravimetric geoid determination in which these new data are included. The computation of the geoid uses the Stokes integral in convolution form, which has been shown to be an efficient method for reaching the proposed objective. The terrain correction has been applied to the gridded gravity anomalies to obtain the corresponding reduced anomalies. The indirect effect has also been taken into account. Thus, a new geoid is provided as grid data distributed for Iberia from 35° to 44° latitude and −10° to 4° longitude (extending over 9° × 14°) in a 361 × 561 regular grid with a mesh size of 1.5′ × 1.5′ and 202 521 points in the GRS80 reference system. This calculated geoid and the previous geoids that exist for this study area (IBERGEO95, EGM96, EGG97 and EIGEN-CG01C) are compared to the geoid undulations corresponding to 16 points of the European Vertical Reference Network (EUVN) on Iberia. The new geoid shows an improvement in precision and reliability, fitting the geoidal heights of these EUVN points more accurately than the other previous geoids. [source]


Analytical solution for the electric potential in arbitrary anisotropic layered media applying the set of Hankel transforms of integer order

GEOPHYSICAL PROSPECTING, Issue 5 2006
E. Pervago
ABSTRACT The analytical solution and algorithm for simulating the electric potential in an arbitrarily anisotropic multilayered medium produced by a point DC source are proposed here. The solution is presented as a combination of Hankel transforms of integer order and Fourier transforms, based on the analytical recurrent equations obtained for the potential spectrum. For the conversion of the potential spectrum into the space domain, we have applied the Fast Fourier Transform algorithm for logarithmically spaced points. A comparison of the modelling results with the power-series solution for two-layered anisotropic structures demonstrated the high accuracy and computing-time efficiency of the proposed method. The results of the apparent-resistivity calculation for both traditional pole-pole and tensor arrays above a three-layered sequence with an azimuthally anisotropic second layer are presented. The numerical simulations show that both arrays have the same sensitivity to the anisotropy parameters. This sensitivity depends significantly on the resistivity ratio between the anisotropic and adjacent layers, and increases for models with a conductive second layer. [source]


Determination of Lithium Contents in Silicates by Isotope Dilution ICP-MS and its Evaluation by Isotope Dilution Thermal Ionisation Mass Spectrometry

GEOSTANDARDS & GEOANALYTICAL RESEARCH, Issue 3 2004
Takuya Moriguti
A precise and simple method for the determination of lithium concentrations in small amounts of silicate sample was developed by applying isotope dilution inductively coupled plasma-mass spectrometry (ID-ICP-MS). Samples plus a Li spike were digested with HF-HClO4, dried, diluted with HNO3, and measured by ICP-MS. No matrix effects were observed for 7Li/6Li in rock solutions with a dilution factor (DF) of 97 at an ICP power of 1.7 kW. By this method, the determination of 0.5 μg g−1 Li in a 1 mg silicate sample can be made with a blank correction of < 1%. Lithium contents of ultrabasic to acidic silicate reference materials (JP-1, JB-2, JB-3, JA-1, JA-2, JA-3, JR-1 and JR-2 from the Geological Survey of Japan, and PCC-1 from the US Geological Survey) and chondrites (three different Allende samples and one Murchison sample) of 8 to 81 mg were determined. The relative standard deviation (RSD) was typically < 1.7%. Lithium contents of these samples were further determined by isotope dilution thermal ionisation mass spectrometry (ID-TIMS). The relative differences between ID-ICP-MS and ID-TIMS were typically < 2%, indicating the high accuracy of the ID-ICP-MS method developed in this study. [source]
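For context, the isotope-dilution calculation reduces to a two-isotope mixing balance in the 7Li/6Li atom ratio. A sketch with illustrative numbers; the spike and mixture ratios below are hypothetical:

```python
def id_li_amount(n_spike_mol, r_spike, r_sample, r_mixture):
    """Moles of sample Li from one isotope-dilution measurement.

    r_* are 7Li/6Li atom ratios: r_spike for the (6Li-enriched) spike,
    r_sample for natural Li, r_mixture as measured in the spiked sample.
    Derived from the two-isotope mixing balance.
    """
    return (n_spike_mol * (1 + r_sample) / (1 + r_spike)
            * (r_spike - r_mixture) / (r_mixture - r_sample))

# Natural 7Li/6Li is about 12.2; the spike and mixture ratios are made up:
n_li = id_li_amount(n_spike_mol=5.0e-9, r_spike=0.08,
                    r_sample=12.2, r_mixture=1.5)
print(f"{n_li:.2e} mol Li in the sample aliquot")
```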


Reliability and construct validity of the compatible MRI scoring system for evaluation of elbows in haemophilic children

HAEMOPHILIA, Issue 2 2008
A. S. DORIA
Summary: We assessed the reliability and construct validity of the Compatible MRI scale for evaluation of elbows, and compared the diagnostic performance of MRI and radiographs for assessment of these joints. Twenty-nine MR examinations of elbows from 27 boys with haemophilia A and B [age range, 5–17 years (mean, 11.5)] were independently read by four blinded radiologists on two occasions. Three centres participated in the study (Toronto, n = 24 examinations; Atlanta, n = 3; Cuiaba, n = 2). The number of previous joint bleeds and the severity of haemophilia were the reference standard measures. The inter-reader reliability of MRI scores was substantial (ICC = 0.73) for the additive (A) scale and excellent (ICC = 0.83) for the progressive (P) scale. The intrareader reliability was excellent for both P-scores (ICC = 0.91) and A-scores (ICC = 0.93). The total P- and A-scores correlated poorly (r = 0.36) or moderately (r = 0.54), but positively, with clinical-laboratory measurements. The total MRI scores demonstrated high accuracy for discrimination of the presence or absence of arthropathy [P-scale, area-under-the-curve (AUC) = 0.94 ± 0.05; A-scale, AUC = 0.89 ± 0.06], as did the soft tissue scores of both scales (P-scale, AUC = 0.90 ± 0.06; A-scale, AUC = 0.86 ± 0.06). Areas under the curve used to discriminate severe disease demonstrated high accuracy for both P-MRI scores (AUC = 0.83 ± 0.09) and A-MRI scores (AUC = 0.87 ± 0.09), but non-diagnostic ability to discriminate mild disease. Similar results were noted for the radiographic scales. In conclusion, both MRI scales demonstrated substantial to excellent reliability and accuracy for discrimination of presence/absence of arthropathy and severe/non-severe disease, but poor to moderate convergent validity for total scores and non-diagnostic discriminant validity for mild/non-mild disease. Compared with radiographic scores, the MRI scales did not perform better for discrimination of the severity of arthropathy. [source]


SAFE biopsy: A validated method for large-scale staging of liver fibrosis in chronic hepatitis C

HEPATOLOGY, Issue 6 2009
Giada Sebastiani
The staging of liver fibrosis is pivotal for defining the prognosis and indications for therapy in hepatitis C. Although liver biopsy remains the gold standard, several noninvasive methods are under evaluation for clinical use. The aim of this study was to validate the recently described sequential algorithm for fibrosis evaluation (SAFE) biopsy, which detects significant fibrosis (≥F2 by METAVIR) and cirrhosis (F4) by combining the AST-to-platelet ratio index and Fibrotest-Fibrosure, thereby limiting liver biopsy to cases not adequately classifiable by noninvasive markers. Hepatitis C virus (HCV) patients (n = 2035) were enrolled at nine locations in Europe and the United States. The diagnostic accuracy of SAFE biopsy versus histology, the gold standard, was investigated. The reduction in the need for liver biopsies achieved with SAFE biopsy was also assessed. SAFE biopsy identified significant fibrosis with 90.1% accuracy (area under the receiver operating characteristic curve = 0.89; 95% confidence interval, 0.87-0.90) and reduced the number of liver biopsies needed by 46.5%. SAFE biopsy had 92.5% accuracy (area under the receiver operating characteristic curve = 0.92; 95% confidence interval, 0.89-0.94) for the detection of cirrhosis, obviating 81.5% of liver biopsies. A third algorithm identified significant fibrosis and cirrhosis simultaneously with high accuracy and a 36% reduction in the need for liver biopsy. The patient's age and body mass index influenced the performance of SAFE biopsy, which was improved with adjusted Fibrotest-Fibrosure cutoffs. Two hundred two cases (9.9%) had discordant results for significant fibrosis with SAFE biopsy versus histology, whereas 153 cases (7.5%) were discordant for cirrhosis detection; 71 of the former cases and 56 of the latter had a Fibroscan measurement within 2 months of histological evaluation. Fibroscan confirmed the SAFE biopsy findings in 83.1% and 75% of these cases, respectively. Conclusion: SAFE biopsy is a rational and validated method for staging liver fibrosis in hepatitis C with a marked reduction in the need for liver biopsy. It is an attractive tool for large-scale screening of HCV carriers. (HEPATOLOGY 2009.) [source]
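The sequential logic of a SAFE-style triage can be sketched as below. All cutoffs are hypothetical placeholders, not the published ones; only the structure (rule out cheaply, rule in on concordant markers, biopsy the indeterminate remainder) reflects the abstract:

```python
def safe_triage(apri, fibrotest):
    """Sequential triage for significant fibrosis, in the spirit of SAFE biopsy."""
    if apri < 0.5 and fibrotest < 0.48:    # hypothetical rule-out cutoffs
        return "no significant fibrosis (no biopsy)"
    if apri > 1.5 and fibrotest > 0.75:    # hypothetical rule-in cutoffs
        return "significant fibrosis, >= F2 (no biopsy)"
    return "indeterminate: refer to liver biopsy"
```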


Development of an Artificial Lipid-Based Membrane Sensor with High Selectivity and Sensitivity to the Bitterness of Drugs and with High Correlation with Sensory Score

IEEJ TRANSACTIONS ON ELECTRICAL AND ELECTRONIC ENGINEERING, Issue 6 2009
Yoshikazu Kobayashi Non-member
Abstract This paper reports the development of membrane sensors based on an artificial lipid and plasticizers with high selectivity and sensitivity to drug bitterness, using bis(1-butylpentyl) adipate (BBPA), bis(2-ethylhexyl) sebacate (BEHS), phosphoric acid tris(2-ethylhexyl) ester (PTEH), and tributyl o-acetylcitrate (TBAC) as plasticizers and phosphoric acid di-n-decyl ester (PADE) as the artificial lipid to optimize the surface hydrophobicity of the sensors. In addition, a sensor whose output correlates highly with bitterness sensory scores was developed by blending BBPA and TBAC, enabling detection of the bitterness-suppressing effect of sucrose and other bitter-masking materials. This sensor can therefore be used to evaluate the bitterness of various drug formulations with high accuracy. Copyright © 2009 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [source]