Accurate Methods (accurate + methods)
Selected Abstracts

Accurate correlations to estimate refinery fuel gas, natural gas, and fuel oil CO2 emission factors and its uncertainty
AICHE JOURNAL, Issue 9 2010
Esteban F. Márquez-Riquelme

Abstract: The quantification of greenhouse gas (GHG) inventories and their associated uncertainty is a relevant activity often requested by authorities. Accurate methods to calculate both the inventories and the involved uncertainty are convenient for close monitoring. Using Monte Carlo simulations, highly accurate correlations between emission factors (EFs), lower heating value (LHV), and density were built for refinery fuel gas, natural gas, and fuel/residual oil. In all cases, the data generated by the simulations also served to build correlations for upper and lower bounds of the EF that can readily be used to estimate the EF estimation uncertainty. The correlations were tested against actual refinery data, and the results show that the estimates were more accurate than EFs obtained from laboratory composition methods or from methods that treat the EF as proportional to LHV alone. For fuel and residual oils, the correlations developed are a function of LHV only but were improved by using a cubic polynomial. The calculated upper and lower bounds for the EF offer a convenient way to estimate the EF uncertainties required in official GHG emission inventory calculations. In conclusion, in addition to LHV, one additional readily available fuel property, namely fuel density, is sufficient to reduce the uncertainty of estimated GHG (in this case CO2) emissions from combustion to acceptable levels. © 2010 American Institute of Chemical Engineers AIChE J, 2010 [source]

Detection of Micrometastasis in the Sentinel Lymph Node via Lymphoscintigraphy for a Patient With In-Transit Metastatic Melanoma
DERMATOLOGIC SURGERY, Issue 9 2003
Chih-Hsun Yang MD

Background.
Lymphoscintigraphy and sentinel lymph node (SLN) biopsy are highly accurate methods for determining regional lymph node status in melanoma. Previously, these procedures were mainly performed in patients with primary melanoma before wide local excision. Objective. To present a case of in-transit recurrent melanoma in which lymphoscintigraphy and SLN biopsy were used to assess nodal basin status. Methods. The patient discussed here had a subungual melanoma that recurred as an in-transit metastasis in the pretibial area 2 years after amputation of the right big toe. Using lymphoscintigraphy and the SLN biopsy technique, with injection of technetium-99m colloid around the in-transit metastatic site, the first node (SLN) draining the in-transit metastatic tumor was identified and harvested in the right inguinal area. Immediate right inguinal node dissection was subsequently performed. Results. On thorough histologic examination, the first node (SLN) draining the in-transit metastatic tumor was the only node in the surgical specimens that contained micrometastatic tumor cells. Conclusion. Lymphoscintigraphy and SLN biopsy are sensitive procedures for detecting regional nodal basin micrometastasis in patients with in-transit recurrent melanoma. [source]

Sampling variability of liver fibrosis in chronic hepatitis C
HEPATOLOGY, Issue 6 2003
Pierre Bedossa M.D.

Fibrosis is a common endpoint of clinical trials in chronic hepatitis C, and liver biopsy remains the gold standard for fibrosis evaluation. However, variability in the distribution of fibrosis within the liver is a potential limitation. Our aim was to assess the heterogeneity of liver fibrosis and its influence on the accuracy of fibrosis assessment by liver biopsy. Surgical samples of livers from patients with chronic hepatitis C were studied. Fibrosis was measured on the whole section by using both image analysis and the METAVIR score (reference value).
From the digitized image of the whole section, virtual biopsy specimens of increasing length were produced. Fibrosis was assessed independently on each individual virtual biopsy specimen, and the results were compared with the reference value according to specimen length. By image analysis, the coefficient of variation of the fibrosis measurement was 55% for 15-mm biopsy specimens and 45% for 25-mm specimens. With the METAVIR scoring system, 65% of 15-mm biopsies were categorized correctly according to the reference value; this increased to 75% for 25-mm specimens, without any substantial benefit for longer specimens. Sampling variability of fibrosis is thus a significant limitation in the assessment of fibrosis by liver biopsy. In conclusion, this study suggests that a length of at least 25 mm is necessary to evaluate fibrosis accurately with a semiquantitative score. Sampling variability becomes a major limitation when more accurate methods, such as automated image analysis, are used. [source]

Transient elastography: Applications and limitations
HEPATOLOGY RESEARCH, Issue 11 2008
Kentaro Yoshioka

Transient elastography using FibroScan is one of the most accurate methods for assessing liver fibrosis, and FibroScan can be operated after only short training. In many studies, liver stiffness measured by transient elastography correlates well with fibrosis stage, and the cutoff values of liver stiffness for fibrosis staging are similar even across different diseases. However, stiffness values vary widely within the same fibrosis stage and overlap somewhat between adjacent stages. In addition, inflammatory activity and the nodule size in cirrhosis affect liver stiffness values. Reproducibility may be reduced by age, obesity, steatosis, a narrow intercostal space, and lower degrees of hepatic fibrosis.
Thus, fibrosis stage should be estimated from liver stiffness with caution. To improve the accuracy of liver fibrosis staging, transient elastography should be combined with other noninvasive methods such as FibroTest. [source]

High-resolution mapping of the 8p23.1 beta-defensin cluster reveals strictly concordant copy number variation of all genes
HUMAN MUTATION, Issue 10 2008
Marco Groth

Abstract: One unexpected feature of the human genome is its high structural variability across individuals. Large regions of the genome frequently show structural polymorphisms, and many vary in abundance. Accurate methods for the characterization and typing of such copy number variations (CNVs) are therefore needed. The defensin cluster in human region 8p23.1 is one of the best-studied CNV regions because of its potential clinical relevance for innate immunity, inflammation, and cancer. The region can be divided into two subclusters, which harbor predominantly either alpha- or beta-defensin genes. Previous studies assessing individual copy numbers gave different results as to whether the complete beta-defensin cluster varies or only particular genes within it. We applied multiplex ligation-dependent probe amplification (MLPA) to measure defensin locus copy numbers in 42 samples. The data show strict copy number concordance of all 10 loci typed within the beta-defensin cluster in each individual, whereas seven loci within the alpha-defensin cluster are consistently found as single copies per chromosome. The exception is DEFA3, which is located within the alpha-defensin cluster and was also found to differ in copy number between individuals. Absolute copy numbers ranged from two to nine for the beta-defensin cluster and from zero to four for DEFA3. The CNV-typed individuals, including HapMap samples, are publicly available and may serve as a universal reference for absolute copy number determination.
On this basis, MLPA represents a reliable technique for medium- to high-throughput typing of 8p23.1 defensin CNV in association studies for diverse clinical phenotypes. Hum Mutat, 2008. © 2008 Wiley-Liss, Inc. [source]

High-order ENO and WENO schemes for unstructured grids
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 10 2007
W. R. Wolf

Abstract: This work describes the implementation and analysis of high-order accurate schemes applied to high-speed flows on unstructured grids. The class of essentially non-oscillatory (ENO) schemes, which includes weighted ENO (WENO) schemes, is discussed with regard to the implementation of third- and fourth-order accurate methods. The entire reconstruction process of ENO and WENO schemes is described, with emphasis on the stencil selection algorithms. The stencils can be composed of control volumes with any number of edges, e.g. triangles, quadrilaterals, and hybrid meshes. ENO and WENO schemes are implemented for the solution of the dimensionless 2-D Euler equations in a cell-centred finite volume context. High-order flux integration is achieved using Gaussian quadratures. An approximate Riemann solver evaluates the fluxes on the interfaces of the control volumes, and a TVD Runge-Kutta scheme provides the time integration of the equations. The coupling of these numerical tools, together with the high-order interpolation of primitive variables provided by the ENO and WENO schemes, yields the expected order of accuracy in the solutions. An adaptive mesh refinement technique provides better resolution in regions with strong flowfield gradients. Results for high-speed flow simulations are presented to assess the implemented capability. Copyright © 2007 John Wiley & Sons, Ltd.
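The unstructured-grid reconstruction machinery described in the WENO abstract above is elaborate, but the core idea can be illustrated in one dimension. The sketch below (an illustrative example, not the paper's unstructured-grid algorithm; the function name and the epsilon parameter are our own choices) implements the classical fifth-order WENO-JS reconstruction of a left-biased interface value from five cell averages:

```python
def weno5_left(v):
    """Fifth-order WENO-JS reconstruction of the interface value at
    x_{i+1/2}, biased from the left, given the five cell averages
    v = (v[i-2], v[i-1], v[i], v[i+1], v[i+2]) on a uniform 1-D grid."""
    vm2, vm1, v0, vp1, vp2 = v
    # Third-order candidate reconstructions on the three substencils
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Jiang-Shu smoothness indicators: large where a substencil
    # crosses a discontinuity, so that substencil is down-weighted
    b0 = 13.0 / 12.0 * (vm2 - 2 * vm1 + v0) ** 2 + 0.25 * (vm2 - 4 * vm1 + 3 * v0) ** 2
    b1 = 13.0 / 12.0 * (vm1 - 2 * v0 + vp1) ** 2 + 0.25 * (vm1 - vp1) ** 2
    b2 = 13.0 / 12.0 * (v0 - 2 * vp1 + vp2) ** 2 + 0.25 * (3 * v0 - 4 * vp1 + vp2) ** 2
    # Nonlinear weights built from the optimal linear weights 1/10, 6/10, 3/10
    eps = 1e-6
    a0 = 0.1 / (eps + b0) ** 2
    a1 = 0.6 / (eps + b1) ** 2
    a2 = 0.3 / (eps + b2) ** 2
    return (a0 * p0 + a1 * p1 + a2 * p2) / (a0 + a1 + a2)
```

In smooth regions the three smoothness indicators are comparable, the nonlinear weights revert to the optimal linear weights, and fifth-order accuracy is recovered; near a discontinuity the offending substencil receives a near-zero weight, which suppresses spurious oscillations.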
[source]

Theoretical analysis for achieving high-order spatial accuracy in Lagrangian/Eulerian source terms
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2006
David P. Schmidt

Abstract: In a fully coupled Lagrangian/Eulerian two-phase calculation, the source terms from computational particles must be agglomerated to nearby gas-phase nodes. Existing methods accomplish this particle-to-gas coupling with second-order accuracy, but higher-order methods would be useful for applications such as two-phase direct numerical simulation and large eddy simulation. A theoretical basis is provided for producing high spatial accuracy in particle-to-gas source terms at low computational cost. The present work derives fourth- and sixth-order accurate methods, and the procedure for even higher accuracy is discussed. The theory is also expanded to cover two- and three-dimensional calculations. One- and two-dimensional tests demonstrate the convergence of the method and highlight problems with statistical noise. Finally, the potential for application in computational fluid dynamics codes is discussed. It is concluded that high-order kernels have practical benefits only under limited ranges of statistical and spatial resolution. Additionally, convergence demonstrations with full CFD codes will be extremely difficult because statistical errors worsen with increasing mesh resolution. Copyright © 2006 John Wiley & Sons, Ltd. [source]

Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points
JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007
J. Casellas

Summary: The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility in fitting field data.
In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, accurate methods to establish the required number of change points and their positions in time have been lacking. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with the change points included as unknown parameters in the model. Specifically, a Weibull log-normal animal frailty model was assumed and solved with a Bayesian approach. The required fully conditional posterior distributions were derived. During the sampling process, all parameters in the model were updated with a Metropolis-Hastings step, except the genetic variance, which was updated with a standard Gibbs sampler. The methodology was tested on simulated data sets, each analysed with several models differing in the number of change points, and the models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values well and placed high density on them. Moreover, the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points. [source]

An improved sample preparation for an LC method used in the age estimation based on aspartic acid racemization from human dentin
JOURNAL OF SEPARATION SCIENCE, Issue 1 2007
Raja Yekkala

Abstract: The determination of age on the basis of aspartic acid (Asp) racemization in teeth is one of the most reliable and accurate methods to date. In this paper, the usefulness of HPLC coupled with fluorescence detection for the determination of Asp racemization was evaluated.
A modified sample preparation is proposed for better stability of the o-phthaldialdehyde-N-acetyl-L-cysteine derivatives of D/L-Asp, which are unstable below pH 7. To ensure the accuracy of the method, the validation parameters specificity, precision, linearity, and limit of detection were determined. Three dentin samples of premolar teeth extracted from living individuals (bucco-lingual longitudinal sections of 1 mm thickness) were analyzed, and the quantitative results are discussed. [source]

Review article: adherence to medication for chronic hepatitis C - building on the model of human immunodeficiency virus antiretroviral adherence research
ALIMENTARY PHARMACOLOGY & THERAPEUTICS, Issue 1 2009
J. J. WEISS

Summary. Background: Treatment of hepatitis C virus (HCV) infection with pegylated interferon/ribavirin achieves sustained virological response in up to 56% of HCV mono-infected patients and 40% of HCV/human immunodeficiency virus (HIV)-co-infected patients. The relationship of patient adherence to outcome warrants study. Aim: To comprehensively review research on missed doses in HCV treatment and discuss applicable research on adherence to HIV antiretroviral therapy. Methods: Publications were identified by PubMed searches using the keywords adherence, compliance, hepatitis C virus, interferon, and ribavirin. Results: The term 'non-adherence' is used differently in the HCV literature than in the HIV literature. In HCV, 'non-adherence' refers primarily to dose reductions by the clinician and early treatment discontinuation; in HIV, it refers primarily to doses missed by the patient. Few data have been published on rates of missed-dose adherence to pegylated interferon/ribavirin and its relationship to virological response. Conclusions: As HCV treatment becomes more complex with new classes of agents, adherence will be increasingly important to treatment success, as resistance mutations may develop with suboptimal dosing of HCV enzyme inhibitors.
HIV adherence research can be applied to HCV to establish accurate methods of assessing adherence, to investigate the determinants of non-adherence, and to develop strategies for optimizing adherence. [source]

Characterization of shapes for use in classification of starch grains images
MICROSCOPY RESEARCH AND TECHNIQUE, Issue 9 2008
Chong-Sze Tong

Abstract: As traditional Chinese herbal medicine becomes increasingly popular, there is an urgent need for efficient and accurate methods for authenticating the Chinese Materia Medica (CMM) used in herbal medicine. In this work, we present a denoising filter and introduce the use of the chord length distribution (CLD) for the classification of starch grains in microscopic images of CMM. Our simple denoising filter adapts to the background and is shown to effectively remove the noise that appears in CMM microscopic starch grain images. The CLD is extracted from the frequencies of chord lengths in the binarized starch grain image, and we show that it is an efficient and effective characterization of the starch grains. Experimental results on 240 starch grain images from 24 classes show that our method outperforms the benchmark result of the current state-of-the-art method, which is based on a circular size distribution extracted by morphological operators at much higher computational cost. Microsc. Res. Tech., 2008. © 2008 Wiley-Liss, Inc. [source]

Error analysis in cross-correlation of sky maps: application to the Integrated Sachs-Wolfe detection
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 4 2007
Anna Cabré

ABSTRACT: Constraining cosmological parameters from measurements of the Integrated Sachs-Wolfe effect requires robust and accurate methods for computing statistical errors in the cross-correlation between maps.
This paper presents a detailed comparison of such error estimates for the cross-correlation of cosmic microwave background (CMB) and large-scale structure data. We compare theoretical models for error estimation with Monte Carlo simulations in which both the galaxy and the CMB maps vary around a fiducial autocorrelation and cross-correlation model that agrees well with the current concordance ΛCDM (Lambda cold dark matter) cosmology. Our analysis compares estimators in both harmonic and configuration (real) space, quantifies the accuracy of the error analysis, and discusses the impact of partial sky coverage and the choice of input fiducial model on dark energy constraints. We show that purely analytic approaches yield accurate errors even in surveys that cover only 10 per cent of the sky, and that parameter constraints depend strongly on the fiducial model employed. We also discuss the advantages and limitations of error estimators that can be applied directly to data; in particular, errors and covariances from the jackknife method agree well with the theoretical approaches and simulations. In addition, we introduce a novel real-space method that is computationally efficient and can be applied to real data and realistic survey geometries. Finally, we present a number of new findings and prescriptions that can be useful for the analysis of real data and for forecasts, together with a critical summary of the analyses done to date. [source]

Analysis of sesquiterpene lactones in Eupatorium lindleyanum by HPLC-PDA-ESI-MS/MS
PHYTOCHEMICAL ANALYSIS, Issue 2 2010
Nian Yun Yang

Abstract. Introduction: The aerial part of Eupatorium lindleyanum is commonly used as an antipyretic and detoxicant in traditional Chinese medicine. Our previous research showed that germacrane sesquiterpene lactones are its main active constituents, so the development of rapid and accurate methods for their identification is of great significance.
Objective: To develop an HPLC-PDA-ESI-MS/MS method for simple and rapid analysis of germacrane sesquiterpene lactones in the aerial part of E. lindleyanum. Methodology: High-performance liquid chromatography with photodiode array detection and electrospray ionization tandem mass spectrometry was used to analyze the germacrane sesquiterpene lactones of Eupatorium lindleyanum. The fragmentation behavior of germacrane sesquiterpene lactones in a Micromass Q/TOF mass spectrometer was studied, and nine germacrane sesquiterpene lactones were identified by comparing their characteristic HPLC and MS data with those of reference compounds. Results: The investigated germacrane sesquiterpene lactones were identified as eupalinolide C (1), 3β-acetoxy-8β-(4′-hydroxy-tigloyloxy)-14-hydroxy-costunolide (2), eupalinolide A (3), eupalinolide B (4), eupalinolide E (5), 3β-acetoxy-8β-(4′-oxo-tigloyloxy)-14-hydroxy-heliangolide (6), 3β-acetoxy-8β-(4′-oxo-tigloyloxy)-14-hydroxy-costunolide (7), hiyodorilactone B (8), and 3β-acetoxy-8β-(4′-hydroxy-tigloyloxy)-costunolide (9). Compounds 6, 7, and 9 are reported for the first time. Conclusion: HPLC-PDA-ESI-MS/MS provides a powerful new approach for identifying germacrane sesquiterpene lactones in E. lindleyanum rapidly and accurately. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Getting Real Performance Out of Pay-for-Performance
THE MILBANK QUARTERLY, Issue 3 2008
SEAN NICHOLSON

Context: Most private and public health insurers are implementing pay-for-performance (P4P) programs in an effort to improve the quality of medical care. This article offers a paradigm for evaluating how P4P programs should be structured and how effective they are likely to be. Methods: This article assesses the current comprehensiveness of evidence-based medicine by estimating the percentage of outpatient medical spending accounted for by eighteen medical processes recommended by the Institute of Medicine.
Findings: Three conditions must hold for outcomes-based P4P programs to improve the quality of care: (1) health insurers must not fully understand which medical processes improve health (i.e., the health production function); (2) providers must know more about the health production function than insurers do; and (3) health insurers must be able to measure a patient's risk-adjusted health. Only two of these conditions currently exist. Payers appear to have incomplete knowledge of the health production function, and providers appear to know more about it than payers do, but accurate methods of risk-adjusting a patient's health status are still being developed. Conclusions: This article concludes that in three general situations P4P will have a different impact on quality and costs, and so should be structured differently. When information about patients' health and the health production function is incomplete, as is currently the case, P4P payments should be kept small, should be based on outcomes rather than processes, and should target physician practices and health systems. As information improves, P4P incentive payments could be increased, and P4P may become more powerful. Ironically, once information becomes complete, P4P can be replaced entirely by "optimal fee-for-service." [source]

Method validation for measurement of hair nicotine level in nonsmokers
BIOMEDICAL CHROMATOGRAPHY, Issue 3 2009
Sung Roul Kim

Abstract: The development of strategies to address the growing worldwide burden of exposure to secondhand smoke (SHS) would be facilitated by sensitive and accurate methods for assessing SHS exposure. Hair provides a readily available matrix for assessing biomarkers of typical SHS exposure. We developed and applied an optimized analytical method for hair nicotine measurement using isotope dilution gas chromatography-mass spectrometry (GC/MS).
The utility of this optimized method is illustrated with data on the SHS exposure of women and children from 31 countries. Using the isotope dilution method with spiked samples (3.3 ng/mg), we found that the greatest hair nicotine extraction efficiency was obtained with a 60-min shaking time. In the field study (n = 2400), hair nicotine concentrations in nonsmokers showed a positive association with the number of cigarettes smoked per day in the household. Copyright © 2008 John Wiley & Sons, Ltd. [source]
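As a closing illustration of the isotope-dilution principle used in the hair nicotine abstract above: because native and isotope-labelled nicotine suffer the same extraction losses, the analyte amount can be recovered from the measured native-to-labelled ratio alone. This is a minimal sketch under the simplifying assumption of equal detector response for both forms; the function name and the example numbers are hypothetical, not values from the study:

```python
def isotope_dilution_conc(area_ratio, spike_ng, sample_mg):
    """Estimate analyte concentration (ng per mg of hair) by isotope
    dilution: area_ratio is the measured native/labelled peak-area
    ratio, spike_ng the known amount of labelled internal standard
    added before extraction, and sample_mg the hair mass analyzed.
    Assumes equal instrument response for native and labelled forms."""
    analyte_ng = area_ratio * spike_ng  # extraction losses cancel in the ratio
    return analyte_ng / sample_mg

# Hypothetical example: 33 ng of labelled nicotine spiked into a 10 mg
# hair sample; a measured peak-area ratio of 1.0 then implies
# 3.3 ng nicotine per mg of hair.
```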