Improved Accuracy
Selected Abstracts

Classification of the Myoclonic Epilepsies
EPILEPSIA, Issue 2003. Ilo E. Leppik
Summary: The myoclonic epilepsies are a collection of syndromes in which myoclonic seizures are a prominent feature. Proper classification of a patient's syndrome is critical for appropriate treatment and prognosis. However, classification of such syndromes is often difficult because the terminology used to describe seizures can be confusing and inconsistent. Myoclonus can be epileptic or nonepileptic, and myoclonic epilepsy syndromes can be divided into inherited and acquired forms. Progressive myoclonic epilepsy (PME) syndromes are the most severe of the myoclonic epilepsies. Diagnosis of PME syndromes on clinical grounds can be difficult, but advances in genetic testing have made diagnoses more accurate. Some other benign myoclonic epilepsy syndromes also have identified gene markers, which can aid in diagnosis. To classify a patient's epilepsy syndrome accurately, clinicians should use all available clinical and laboratory tools appropriately. Improved accuracy of diagnosis for patients with myoclonic epilepsies should lead to more dependable prognoses and more effective treatment. [source]

Improved accuracy for the Helmholtz equation in unbounded domains
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 15 2004. Eli Turkel
Abstract: Based on properties of the Helmholtz equation, we derive a new equation for an auxiliary variable. This removes much of the oscillation of the solution, leading to more accurate numerical approximations to the original unknown. Computations confirm the improved accuracy of the new models in both two and three dimensions. This also improves the accuracy when one wants the solution at neighbouring wavenumbers, by using an expansion in k. We examine the accuracy for both waveguide and scattering problems as a function of k, h and the forcing mode l.
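The accuracy loss that motivates reformulations like the auxiliary-variable approach above is the well-known pollution effect: for the standard three-point centred scheme applied to u'' + k²u = 0, the propagated discrete wavenumber satisfies cos(k_h h) = 1 - (kh)²/2, so the accumulated phase error grows with k even at a fixed number of points per wavelength. A minimal sketch (not from the paper; the scheme and resolution are illustrative):

```python
import numpy as np

def discrete_wavenumber(k, h):
    """Wavenumber actually propagated by the three-point centred
    second-order stencil for u'' + k**2 * u = 0 on a grid of spacing h."""
    return np.arccos(1.0 - (k * h) ** 2 / 2.0) / h

# Fix the resolution at 10 points per wavelength (k*h = 2*pi/10) and
# watch the phase error accumulated over a unit distance grow with k.
for k in (10.0, 40.0, 160.0):
    h = 2.0 * np.pi / (10.0 * k)
    print(k, abs(discrete_wavenumber(k, h) - k))
```

Because the relative phase error is fixed by kh, the absolute error over a fixed domain scales linearly with k; refining h in proportion to k therefore does not keep high-wavenumber solutions accurate.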
The use of local absorbing boundary conditions is also examined, as is the location of the outer surface, as functions of k. Connections with parabolic approximations are analysed. Copyright © 2004 John Wiley & Sons, Ltd. [source]

Improved accuracy of cell surface shaving proteomics in Staphylococcus aureus using a false-positive control
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 10 2010. Nestor Solis
Abstract: Proteolytic treatment of intact bacterial cells is an ideal means of identifying surface-exposed peptide epitopes and has potential for the discovery of novel vaccine targets. Cell stability during such treatment, however, may become compromised and result in the release of intracellular proteins that complicate the final analysis. Staphylococcus aureus is a major human pathogen, causing community- and hospital-acquired infections, and is a serious healthcare concern owing to the increasing prevalence of multiple antibiotic resistance amongst clinical isolates. We employed a cell surface "shaving" technique with either trypsin or proteinase K, combined with LC-MS/MS. Trypsin-derived data were controlled using a "false-positive" strategy in which cells were incubated without protease, removed by centrifugation, and the resulting supernatants digested. Peptides identified in this fraction most likely result from cell lysis and were removed from the trypsin-shaved data set. We identified 42 predicted S. aureus COL surface proteins from 260 surface-exposed peptides. Trypsin and proteinase K digests were highly complementary, with ten proteins identified by both, 16 specific to proteinase K treatment, 13 specific to trypsin, and three identified in the control. The use of a subtracted false-positive strategy improved the enrichment of surface-exposed peptides in the trypsin data set to approximately 80% (124/155 peptides).
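The subtracted false-positive control amounts to a set difference between the shaved peptide identifications and the no-protease control, followed by an enrichment calculation. A toy illustration (the peptide identifiers are invented):

```python
# Peptides identified after trypsin shaving of intact cells.
shaved = {"PEPTIDEA", "PEPTIDEB", "PEPTIDEC", "PEPTIDED", "PEPTIDEE"}
# Peptides from the control incubated without protease: likely lysis products.
control = {"PEPTIDEC", "PEPTIDEE"}

surface = shaved - control               # retained surface-exposed calls
enrichment = len(surface) / len(shaved)  # fraction of calls kept
print(sorted(surface), enrichment)
```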
Predominant surface proteins were those associated with methicillin resistance, namely surface protein SACOL0050 (pls) and penicillin-binding protein 2′ (mecA), as well as bifunctional autolysin and the extracellular matrix-binding protein Ebh. The cell shaving strategy is a rapid method for identifying surface-exposed peptide epitopes that may be useful in the design of novel vaccines against S. aureus. [source]

Compensation of actuator delay and dynamics for real-time hybrid structural simulation
EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 1 2008. M. Ahmadizadeh
Abstract: Compensation of the delay and dynamic response of servo-hydraulic actuators is critical for the stability and accuracy of hybrid experimental-numerical simulations of the seismic response of structures. In this study, current procedures for compensation of actuator delay are examined and improved procedures are proposed to minimize experimental errors. The new procedures require little or no a priori information about the behavior of the test specimen or the input excitation. First, a simple approach is introduced for rapid online estimation of system delay and actuator command gain, thus capturing the variability of system response through a simulation. Second, an extrapolation procedure for delay compensation, based on the same kinematics equations used in numerical integration procedures, is examined. Simulations using the proposed procedures indicate a reduction in high-frequency noise in force measurements that can minimize the excitation of high-frequency modes. To further verify the effectiveness of the compensation procedures, the artificial energy added to a hybrid simulation as a result of actuator tracking errors is measured and used to demonstrate the improved accuracy of the simulations. Copyright © 2007 John Wiley & Sons, Ltd.
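Kinematics-based delay compensation of the kind described above can be sketched as polynomial extrapolation of the command history: estimate velocity and acceleration from backward differences of the last three samples and evaluate the quadratic a delay tau ahead. This is an illustration with assumed values, not the authors' exact procedure:

```python
import numpy as np

def compensate(history, dt, tau):
    """Extrapolate the command tau seconds ahead using velocity and
    acceleration from backward differences of the last three samples."""
    x0, x1, x2 = history[-3], history[-2], history[-1]   # spacing dt
    v = (3.0 * x2 - 4.0 * x1 + x0) / (2.0 * dt)          # velocity at x2
    a = (x2 - 2.0 * x1 + x0) / dt ** 2                   # acceleration at x2
    return x2 + v * tau + 0.5 * a * tau ** 2

dt, tau = 0.01, 0.02                      # sample step and actuator delay (s)
t = np.arange(0.0, 0.1, dt)
x = 2.0 * t ** 2 + 0.5 * t                # smooth displacement command
print(compensate(x, dt, tau))             # matches the trajectory at t[-1] + tau
```

For a quadratic trajectory the extrapolation is exact; for real signals the accuracy depends on the smoothness of the command and the size of tau relative to the dominant period.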
[source]

Patterns of change in withdrawal symptoms, desire to smoke, reward motivation and response inhibition across 3 months of smoking abstinence
ADDICTION, Issue 5 2009. Lynne Dawkins
Abstract: Aims: We have demonstrated previously that acute smoking abstinence is associated with lowered reward motivation and impaired response inhibition. This prospective study explores whether these impairments, along with withdrawal-related symptoms, recover over 3 months of sustained abstinence. Design: Participants completed a 12-hour-abstinent baseline assessment and were then allocated randomly to quit unaided or to continue smoking. All were re-tested after 7 days, 1 month and 3 months. Successful quitters' scores were compared with those of continuing smokers, who were tested after ad libitum smoking. Setting: Goldsmiths, University of London. Participants: A total of 33 smokers who maintained abstinence to 3 months, and 31 continuing smokers. Measurements: Indices demonstrated previously in this cohort of smokers to be sensitive to the effect of nicotine versus acute abstinence: reward motivation [Snaith-Hamilton Pleasure Scale (SHAPS), Card Arranging Reward Responsivity Objective Test (CARROT), Stroop], tasks of response inhibition [anti-saccade task, Continuous Performance Task (CPT)], clinical indices of mood [Hospital Anxiety and Depression Scale (HADS)], withdrawal symptoms [Mood and Physical Symptoms Scale (MPSS)] and desire to smoke. Findings: SHAPS anhedonia and reward responsivity (CARROT) showed significant improvement and plateaued after a month of abstinence, not differing from the scores of continuing smokers tested in a satiated state. Mood, other withdrawal symptoms and desire to smoke all declined from acute abstinence to 1 month of cessation and were equivalent to, or lower than, the levels reported by continuing, satiated smokers.
Neither group showed a change in CPT errors over time, while continuing smokers, but not abstainers, showed improved accuracy on the anti-saccade task at 3 months. Conclusion: Appetitive processes and related affective states appear to improve in smokers who remain nicotine-free for 3 months, whereas response inhibition does not. Although in need of replication, the results tentatively suggest that poor inhibitory control may constitute a long-term risk factor for relapse and could be a target for intervention. [source]

Functional trait variation and sampling strategies in species-rich plant communities
FUNCTIONAL ECOLOGY, Issue 1 2010. Christopher Baraloto
Summary:
1. Despite considerable interest in the application of plant functional traits to questions of community assembly and ecosystem structure and function, there is no consensus on the appropriateness of sampling designs to obtain plot-level estimates in diverse plant communities.
2. We measured 10 plant functional traits describing leaf and stem morphology and ecophysiology for all trees in nine 1-ha plots in terra firme lowland tropical rain forests of French Guiana (N = 4709).
3. We calculated, by simulation, the mean and variance in trait values for each plot and each trait expected under seven sampling methods and a range of sampling intensities. Simulated sampling methods included a variety of spatial designs, as well as the application of existing database values to all individuals of a given species.
4. For each trait in each plot, we defined a performance index for each sampling design as the proportion of resampling events that resulted in observed means within 5% of the true plot mean, and observed variances within 20% of the true plot variance.
5. The relative performance of sampling designs was consistent for estimations of means and variances.
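The performance index defined in point 4 can be reproduced in miniature: draw repeated subsamples, and count the fraction whose mean lands within 5% of the plot mean and whose variance lands within 20% of the plot variance. A sketch with a synthetic trait distribution standing in for the field data, and simple random subsampling standing in for the spatial designs:

```python
import numpy as np

rng = np.random.default_rng(0)
trait = rng.lognormal(mean=0.0, sigma=0.6, size=500)   # one trait, one plot
true_mean, true_var = trait.mean(), trait.var()

def performance(n_sampled, draws=2000):
    """Fraction of subsamples within 5% of the plot mean and within 20%
    of the plot variance (the performance index described above)."""
    hits = 0
    for _ in range(draws):
        s = rng.choice(trait, size=n_sampled, replace=False)
        if (abs(s.mean() - true_mean) <= 0.05 * true_mean
                and abs(s.var() - true_var) <= 0.20 * true_var):
            hits += 1
    return hits / draws

perf = {n: performance(n) for n in (20, 100, 400)}
print(perf)    # performance rises steeply with sampling intensity
```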
Database use had consistently poor performance for most traits across all plots, whereas sampling one individual per species per plot resulted in relatively high performance. We found few differences among different spatial sampling strategies; however, for a given strategy, increased sampling intensity resulted in markedly improved accuracy in estimates of trait mean and variance.
6. We also calculated the financial cost of each sampling design based on data from our 'every individual per plot' strategy and estimated the sampling and botanical effort required. The relative performance of designs was strongly positively correlated with relative financial cost, suggesting that returns on sampling investment are relatively constant.
7. Our results suggest that trait sampling for many objectives in species-rich plant communities may require the considerable effort of sampling at least one individual of each species in each plot, and that investment in complete sampling, though great, may be worthwhile for at least some traits. [source]

Traveltime computation by wavefront-orientated ray tracing
GEOPHYSICAL PROSPECTING, Issue 1 2005. Radu Coman
Abstract: For multivalued traveltime computation on dense grids, we propose a wavefront-orientated ray-tracing (WRT) technique. At the source, we start with a few rays which are propagated stepwise through a smooth two-dimensional (2D) velocity model. The ray field is examined at wavefronts, and a new ray may be inserted between two adjacent rays if one of the following criteria is satisfied: (1) the distance between the two rays is larger than a predefined threshold; (2) the difference in wavefront curvature between the rays is larger than a predefined threshold; (3) the adjacent rays intersect. The last two criteria may lead to oversampling by rays in caustic regions. To avoid this oversampling, we do not insert a ray if the distance between adjacent rays is smaller than a predefined threshold.
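The three insertion criteria and the caustic guard combine into a single decision rule per pair of adjacent rays. A schematic version (the threshold names are invented):

```python
def should_insert(dist, curv_diff, intersect, d_max, c_max, d_min):
    """Insert a new ray between two adjacent rays when they are too far
    apart, differ too much in wavefront curvature, or intersect, unless
    they are already closer than d_min (guards against oversampling in
    caustic regions)."""
    if dist < d_min:
        return False
    return dist > d_max or curv_diff > c_max or intersect

print(should_insert(0.5, 0.0, False, d_max=0.4, c_max=0.1, d_min=0.05))  # True
print(should_insert(0.02, 5.0, True, d_max=0.4, c_max=0.1, d_min=0.05))  # False
```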
We insert the new ray by tracing it from the source. This approach leads to improved accuracy compared with inserting a new ray by interpolation, which is the method usually applied in wavefront construction. The traveltimes computed along the rays are used for the estimation of traveltimes on a rectangular grid. This estimation is carried out within a region bounded by adjacent wavefronts and rays. As for the insertion criterion, we consider the wavefront curvature and extrapolate the traveltimes, up to second order, from the intersection points between rays and wavefronts to a gridpoint. The extrapolated values are weighted with respect to the distances to wavefronts and rays. Because dynamic ray tracing is not applied, we approximate the wavefront curvature at a given point using the slowness vector at this point and at an adjacent point on the same wavefront. The efficiency of the WRT technique is strongly dependent on the input parameters which control the wavefront and ray densities. On the basis of traveltimes computed in a smoothed Marmousi model, we analyse these dependences and suggest some rules for a correct choice of input parameters. With suitable input parameters, the WRT technique allows accurate traveltime computation using a small number of rays and wavefronts. [source]

A continuum-to-atomistic bridging domain method for composite lattices
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 13 2010. Mei Xu
Abstract: The bridging domain method is an overlapping domain-decomposition approach for coupling finite element continuum models and molecular mechanics models. In this method, the total energy is decomposed into atomistic and continuum parts by complementary weight functions applied to each part of the energy in the coupling domain. To enforce compatibility, the motions of the coupled atoms are constrained by the continuum displacement field using Lagrange multipliers.
For composite lattices, this approach is suboptimal because the internal modes of the lattice are suppressed by the homogeneous continuum displacement field in the coupling region. To overcome this difficulty, we present a relaxed bridging domain method. In this method, the atom set is divided into primary and secondary atoms; the relative motions between them are often called the internal modes. Only the primary atoms are constrained in the coupling region, which allows these internal modes to relax fully. Several one- and two-dimensional examples are presented, which demonstrate improved accuracy over the standard bridging domain method. Copyright © 2009 John Wiley & Sons, Ltd. [source]

Accurate eight-node hexahedral element
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 6 2007. Magnus Fredriksson
Abstract: Based on the assumed strain method, an eight-node hexahedral element is proposed. Consistent choice of the fundamental element stiffness guarantees convergence and fulfillment of the patch test a priori. In conjunction with a projection operator, the higher-order strain field becomes orthogonal to rigid-body and linear displacement fields. The higher-order strain field in question is carefully selected to preserve the correct rank of the element stiffness matrix, also for distorted elements. Volumetric locking is also removed effectively. By consideration of the bending energy, improved accuracy is obtained even for coarse element meshes. The choice of a local co-ordinate system aligned with the principal axes of inertia makes it possible to improve the performance even for distorted elements. The strain-driven format obtained is well suited for materials with non-linear stress-strain relations. Several numerical examples are presented in which the excellent performance of the proposed eight-node hexahedral element is verified. Copyright © 2007 John Wiley & Sons, Ltd.
[source]

On a new integration scheme for von-Mises plasticity with linear hardening
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN ENGINEERING, Issue 10 2003. Ferdinando Auricchio
Abstract: Limiting the discussion to an associative von-Mises plasticity model with linear kinematic and isotropic hardening, we compare the performance of the classical radial return map algorithm with a new integration scheme based on the computation of an integration factor. The numerical examples clearly show the improved accuracy of the new method. Copyright © 2003 John Wiley & Sons, Ltd. [source]

A least square extrapolation method for the a posteriori error estimate of the incompressible Navier Stokes problem
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 1 2005. M. Garbey
Abstract: A posteriori error estimators are fundamental tools for providing confidence in the numerical computation of PDEs.
To date, the main theories of a posteriori estimators have been developed largely in the finite element framework, for either linear elliptic operators or non-linear PDEs in the absence of disparate length scales. On the other hand, there is strong interest in using grid refinement combined with Richardson extrapolation to produce CFD solutions with improved accuracy and, therefore, a posteriori error estimates. In practice, however, the effective order of a numerical method often depends on space location and is not uniform, rendering the Richardson extrapolation method unreliable. We have recently introduced (Garbey, 13th International Conference on Domain Decomposition, Barcelona, 2002; 379-386; Garbey and Shyy, J. Comput. Phys. 2003; 186:1-23) a new method which estimates the order of convergence of a computation as the solution of a least square minimization problem on the residual. This method, called least square extrapolation, introduces a framework facilitating multi-level extrapolation, improves accuracy and provides a posteriori error estimates. The method can accommodate different grid arrangements. The goal of this paper is to investigate the power and limits of this method via incompressible Navier Stokes flow computations. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Towards entropy detection of anomalous mass and momentum exchange in incompressible fluid flow
INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 11 2002. G. F. Naterer
Abstract: An entropy-based approach is presented for the assessment of computational accuracy in incompressible flow problems. It is shown that computational entropy can serve as an effective parameter in detecting erroneous or anomalous predictions of mass and momentum transport in the flow field. In the present paper, the fluid flow equations and the second law of thermodynamics are discretized by a Galerkin finite-element method with linear, isoparametric triangular elements.
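The core idea behind the least square extrapolation method described in the Garbey abstract above, estimating an effective order of convergence by least squares rather than assuming a uniform theoretical order, can be sketched in one dimension. Here the exact solution is known and the errors are synthetic second-order data (the paper works from residuals instead):

```python
import numpy as np

exact = 1.0
h = np.array([0.1, 0.05, 0.025])          # three grid sizes
u = exact + 3.0 * h ** 2                  # synthetic solutions: error = C*h^2

# Least-squares fit of log|u - exact| = log C + p * log h.
A = np.column_stack([np.ones_like(h), np.log(h)])
(logC, p), *_ = np.linalg.lstsq(A, np.log(np.abs(u - exact)), rcond=None)
print(p)                                  # recovers the effective order, ~2
```

With more than two grids the fit is overdetermined, which is what makes a non-uniform or noisy effective order detectable in the first place.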
It is shown that a weighted entropy residual is closely related to truncation error; this relationship is examined in an application problem involving incompressible flow through a converging channel. In particular, regions exhibiting anomalous flow behaviour, such as under-predicted velocities, appear together with analogous trends in the weighted entropy residual. It is anticipated that entropy-based error detection can provide important steps towards improved accuracy in computational fluid flow. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Development of a Novel Immunoradiometric Assay Exclusively for Biologically Active Whole Parathyroid Hormone 1-84: Implications for Improvement of Accurate Assessment of Parathyroid Function
JOURNAL OF BONE AND MINERAL RESEARCH, Issue 4 2001. Ping Gao
Abstract: We developed a novel immunoradiometric assay (IRMA; whole parathyroid hormone [PTH] IRMA) for PTH, which specifically measures biologically active whole PTH(1-84). The assay is based on a solid phase coated with anti-PTH(39-84) antibody, a tracer of 125I-labeled antibody with a unique specificity to the first N-terminal amino acid of PTH(1-84), and calibrators of diluted synthetic PTH(1-84). In contrast to the Nichols intact PTH IRMA, this new assay does not detect PTH(7-84) fragments and detects only one immunoreactive peak in chromatographically fractionated patient samples. The assay was shown to have an analytical sensitivity of 1.0 pg/ml with a linear measurement range up to 2300 pg/ml. With this assay, we further identified that the previously described non-(1-84)PTH fragments are amino-terminally truncated, with hydrophobicity similar to that of PTH(7-84), and that these PTH fragments are present not only in patients with secondary hyperparathyroidism (2°-HPT) of uremia, but also in patients with primary hyperparathyroidism (1°-HPT) and in normal persons.
The plasma normal range of whole PTH(1-84) was 7-36 pg/ml (mean ± SD: 22.7 ± 7.2 pg/ml, n = 135), whereas over 93.9% (155/165) of patients with 1°-HPT had whole PTH(1-84) values above the normal cut-off. The percentage of biologically active whole PTH(1-84) (pB%) in the pool of total immunoreactive "intact" PTH is higher in the normal population (median: 67.3%; SD: 15.8%; n = 56) than in uremic patients (median: 53.8%; SD: 15.5%; n = 318; p < 0.001), although the whole PTH(1-84) values from uremic patients displayed a significantly more heterogeneous distribution than those of 1°-HPT patients and normals. Moreover, the pB% displayed a nearly Gaussian distribution pattern, from 20% to over 90%, in patients with either 1°-HPT or uremia. The specificity of this newly developed whole PTH(1-84) IRMA assures, for the first time, measurement of only the biologically active whole PTH(1-84), without cross-reaction with the high concentrations of the amino-terminally truncated PTH fragments found in both normal subjects and patients. Because of the significant variation of pB% in patients, it is necessary to use the whole PTH assay to determine biologically active PTH levels clinically and, thus, to avoid overestimating the concentration of the true biologically active hormone. This new assay could provide a more meaningful standardization of future PTH measurements, with improved accuracy in the clinical assessment of parathyroid function. [source]

Fluid-particle drag in low-Reynolds-number polydisperse gas-solid suspensions
AICHE JOURNAL, Issue 6 2009. Xiaolong Yin
Abstract: Lattice-Boltzmann simulations of low-Reynolds-number fluid flow in bidisperse fixed beds and suspensions with particle-particle relative motion have been performed. The particles are spherical and are intimately mixed.
The total volume fraction of the suspension was varied between 0.1 and 0.4, the volume fraction ratio φ1/φ2 from 1:1 to 1:6, and the particle size ratio d1/d2 from 1:1.5 to 1:4. A drag law with improved accuracy has been established for bidisperse fixed beds. For suspensions with particle-particle relative motion, the hydrodynamic particle-particle drag, representing the momentum transfer between particle species through hydrodynamic interaction, is found to be an important contribution to the net fluid-particle drag. It has a logarithmic dependence on the lubrication cutoff distance and can be fitted as the harmonic mean of the drag forces in bidisperse fixed beds. The proposed drag laws for bidisperse fixed beds and suspensions are generalized to polydisperse suspensions with three or more particle species. © 2009 American Institute of Chemical Engineers AIChE J, 2009 [source]

Modifications to improve the accuracy of a four-ball test apparatus
LUBRICATION SCIENCE, Issue 1 2000. P. I. Lacey
Abstract: The four-ball wear test machine is one of the most widely used tribological tools in both research and industry. In general, the test geometry is self-aligning and minimises the opportunity for random variation. Nonetheless, accurate control of the test parameters remains vital to repeatability and reproducibility. The present paper details a number of modifications to a commercially available test apparatus that have been found to improve accuracy. The applied load on some apparatus was found to deviate from the correct value, probably because of frictional drag in the loading system. A feedback control loop was designed and fitted to the applied-load mechanism, which resulted in significantly improved accuracy. Finally, the apparatus was fully automated, with complete computer control of all test parameters.
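The effect of a feedback loop on the applied load can be illustrated with a toy integral controller: frictional drag subtracts a constant offset from the commanded load, and the loop keeps nudging the command until the measured load matches the setpoint. The gain and the drag model here are invented, not taken from the paper:

```python
def run_loop(setpoint, drag=25.0, gain=0.5, steps=50):
    """Integral feedback on the applied load: the measured load is the
    command minus a frictional drag, and each cycle moves the command
    by gain * error."""
    command = setpoint
    measured = command - drag              # open loop: reads low by 'drag'
    for _ in range(steps):
        error = setpoint - measured
        command += gain * error
        measured = command - drag
    return measured

print(run_loop(400.0))                     # converges to the 400 N setpoint
```

The open-loop error decays geometrically by a factor (1 - gain) per cycle, which is why the closed-loop load settles on the setpoint despite the drag offset.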
Under this automation, following cleaning and assembly of the test specimens, the required test procedure could be selected from a menu of standard methods, and the computer program then adjusted the test parameters according to the method selected, greatly reducing the possibility of operator error. [source]

Transient Behavior and Gelation of Free Radical Polymerizations in Continuous Stirred Tank Reactors
MACROMOLECULAR THEORY AND SIMULATIONS, Issue 4 2005. Rolando C. S. Dias
Abstract: Using the authors' previously developed method for the general kinetic analysis of non-linear irreversible polymerizations, the simulation of free-radical homogeneous polymerization systems with terminal branching and chain transfer to polymer has been carried out for continuous stirred tank reactors. The method's improved accuracy in the numerical evaluation of generating functions has been exploited in order to perform their numerical inversion, so that chain length distributions (CLD) could also be estimated with or without the presence of gel. A comparison with alternative techniques, emphasizing the effect of their simplifying assumptions on the accuracy of calculations, is also presented. Predicted CLD are shown before gelation (t = 1 h), after gelation (t = 15 h, steady state), and close to the gel point for a free-radical polymerization with transfer to polymer in a CSTR with τ = 60 min. [source]

Independent estimation of T2* for water and fat for improved accuracy of fat quantification
MAGNETIC RESONANCE IN MEDICINE, Issue 4 2010. Venkata V. Chebrolu
Abstract: Noninvasive biomarkers of intracellular accumulation of fat within the liver (hepatic steatosis) are urgently needed for the detection and quantitative grading of nonalcoholic fatty liver disease, the most common cause of chronic liver disease in the United States. Accurate quantification of fat with MRI is challenging due to the presence of several confounding factors, including T2* decay.
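The basic ingredient of any T2* correction, estimating T2* from multi-echo signal magnitudes, reduces in the noiseless single-compartment case to a log-linear least-squares fit (the paper itself fits separate water and fat T2* values with a modified Gauss-Newton scheme, which is not reproduced here). A sketch with invented echo times and tissue values:

```python
import numpy as np

te = np.array([2.0, 4.0, 6.0, 8.0, 10.0])   # echo times (ms)
s0, t2star = 100.0, 15.0                     # hypothetical signal and T2* (ms)
signal = s0 * np.exp(-te / t2star)

# log S = log S0 - TE / T2*  ->  linear least squares in (log S0, 1/T2*).
A = np.column_stack([np.ones_like(te), -te])
(log_s0, r2), *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
print(np.exp(log_s0), 1.0 / r2)              # recovers the assumed S0 and T2*
```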
The specific purpose of this work is to quantify the impact of T2* decay and to develop a multiexponential T2* correction method for improved accuracy of fat quantification, relaxing the assumptions made by previous T2* correction methods. A modified Gauss-Newton algorithm is used to estimate T2* for water and fat independently. Improved quantification of fat, with independent estimation of T2* for water and fat, is demonstrated using phantom experiments. The tradeoffs in algorithm stability and accuracy between multiexponential and single-exponential techniques are discussed. Magn Reson Med 63:849-857, 2010. © 2010 Wiley-Liss, Inc. [source]

Joint design of trajectory and RF pulses for parallel excitation
MAGNETIC RESONANCE IN MEDICINE, Issue 3 2007. Chun-Yu Yip
Abstract: We propose an alternating optimization framework for the joint design of the excitation k-space trajectory and RF pulses for small-tip-angle parallel excitation. Using Bloch simulations, we show that, compared with conventional designs with predetermined trajectories, joint designs can often excite target patterns with improved accuracy and reduced total integrated pulse power, particularly at high reduction factors. These benefits come at a modest increase in computational time. Magn Reson Med 58:598-604, 2007. © 2007 Wiley-Liss, Inc. [source]

Comparison of new sequences for high-resolution cartilage imaging
MAGNETIC RESONANCE IN MEDICINE, Issue 4 2003. Brian A. Hargreaves
Abstract: The high prevalence of osteoarthritis continues to demand improved accuracy in detecting cartilage injury and monitoring its response to different treatments. MRI is the most accurate noninvasive method of diagnosing cartilage lesions. However, MR imaging of cartilage is limited by scan time, signal-to-noise ratio (SNR), and image contrast. Recently, there has been renewed interest in SNR-efficient imaging sequences for imaging cartilage, including various forms of steady-state free precession as well as driven-equilibrium imaging.
This work compares several of these sequences with existing methods, both theoretically and in normal volunteers. Results show that the new steady-state methods increase SNR efficiency by as much as 30% and improve cartilage-synovial fluid contrast by a factor of three. Additionally, these methods markedly decrease minimum scan times, while providing 3D coverage without the characteristic blurring seen in fast spin-echo images. Magn Reson Med 49:700-709, 2003. © 2003 Wiley-Liss, Inc. [source]

The structured total least-squares approach for non-linearly structured matrices
NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 4 2002. P. Lemmerling
Abstract: In this paper, an extension of the structured total least-squares (STLS) approach for non-linearly structured matrices is presented in the so-called 'Riemannian singular value decomposition' (RiSVD) framework. It is shown that this type of STLS problem can be solved by solving a set of Riemannian SVD equations. For small perturbations, the problem can be reformulated as finding the smallest singular value and the corresponding right singular vector of this Riemannian SVD. A heuristic algorithm is proposed. Some examples with Vandermonde-type matrices are used to demonstrate the improved accuracy of the obtained parameter estimator when compared with other methods such as least squares (LS) or total least squares (TLS). Copyright © 2002 John Wiley & Sons, Ltd. [source]

Methods to adjust for the interference of N2O on δ13C and δ18O measurements of CO2 from soil mineralization
RAPID COMMUNICATIONS IN MASS SPECTROMETRY, Issue 11 2005. D. Beheydt
Abstract: In this paper we present an overview of the present knowledge of methods that avoid interference of N2O on δ13C and δ18O measurements of CO2. The main focus of research to date has been on atmospheric samples. However, N2O is predominantly generated by soil processes.
Isotope analyses related to soil trace gas emissions are often performed with continuous-flow isotope ratio mass spectrometers, which do not necessarily have the high precision needed for atmospheric research. However, it was shown using laboratory and field samples that a correction to obtain reliable δ13C and δ18O values is also required for a commercial continuous-flow isotope ratio mass spectrometer. The capillary gas chromatography column of the original equipment was changed to a packed Porapak Q column. This adaptation resulted in improved accuracy and precision of δ13C of CO2 (standard deviation, Ghent: from 0.2 to 0.08‰; standard deviation, Lincoln: from 0.2 to 0.13‰) for N2O/CO2 ratios up to 0.1. For δ18O there was an improvement in the standard deviation measured at Ghent University (0.13 to 0.08‰) but not in that measured at Lincoln University (0.08 to 0.23‰). The benefits of the packed Porapak Q column over the theoretical correction method were that samples were not limited to small N2O concentrations, no extra N2O concentration measurement was required, and the measurements were independent of the variable isotopic composition of N2O from soil. Copyright © 2005 John Wiley & Sons, Ltd. [source]

International evidence on alternative models of the term structure of volatilities
THE JOURNAL OF FUTURES MARKETS, Issue 7 2009. Antonio Díaz
Abstract: The term structure of instantaneous volatilities (TSV) of forward rates for different monetary areas (euro, U.S. dollar and British pound) is examined using daily data from at-the-money cap markets. During the sample period (two and a half years), the TSV experienced severe changes in both level and shape. Two new functional forms for the instantaneous volatility of forward rates are proposed and tested within the LIBOR Market Model framework. Two other alternatives are calibrated and used as benchmarks to test the accuracy of the new models.
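One widely used functional form for the instantaneous volatility of forward rates in the LIBOR Market Model is Rebonato's "abcd" parameterization, sigma(tau) = (a + b*tau)*exp(-c*tau) + d, with the Black caplet volatility given by the root mean square of sigma over the caplet's life. A sketch with invented parameter values (the paper's two new functional forms are not reproduced here):

```python
import numpy as np

def inst_vol(tau, a=0.05, b=0.5, c=1.5, d=0.15):
    """'abcd' instantaneous volatility as a function of time to expiry tau."""
    return (a + b * tau) * np.exp(-c * tau) + d

def black_vol(T, n=20001):
    """Root mean square of the instantaneous volatility over [0, T],
    i.e. the flat Black volatility quoted for a caplet expiring at T."""
    tau = np.linspace(0.0, T, n)
    return float(np.sqrt(np.mean(inst_vol(tau) ** 2)))

for T in (0.5, 1.0, 2.0, 5.0):
    print(T, round(black_vol(T), 4))
```

The hump in sigma produces a term structure of Black volatilities that first rises and then decays, which is the qualitative shape cap markets usually imply.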
The two new models provide more flexibility to calibrate the observed cap prices adequately, although this improved accuracy in replicating cap prices produces some instability in the parameter estimates. © 2009 Wiley Periodicals, Inc. Jrl Fut Mark 29:653-683, 2009 [source]

A Practical Method to Estimate the Bed Height of a Fluidized Bed of Fine Particles
CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 12 2008. M. Zhang
Abstract: Knowledge of both dense-bed expansion and freeboard solids inventory is required for the determination of bed height in fluidized beds of fine particles, e.g., Fluidized Catalytic Cracking (FCC) catalysts. A more accurate estimation of the solids inventory in the freeboard is achieved with a modified model for the freeboard particle concentration profile. Using the experimentally determined dense-bed expansion and the modified freeboard model, a more practical method with improved accuracy is provided to determine the bed height in both laboratory and industrial fluidized beds of FCC particles. The bed height in a fluidized bed can exhibit different trends as the superficial gas velocity increases, depending on the different characteristics of dense-bed expansion and solids entrainment in the freeboard. The factors that influence the bed height are discussed, showing the complexity of bed height and demonstrating that it is not realistic to determine the bed height with a single generalized model that can accurately predict the dense-bed expansion and freeboard solids inventory simultaneously. Moreover, a method to determine the bed height based on axial pressure fluctuation profiles is proposed in this study for laboratory fluidized beds; it provides improved accuracy compared with observation alone or with locating the turning points in the axial pressure profiles, especially in high-velocity fluidized beds. [source]
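The pressure-profile route to bed height can be illustrated with synthetic data: the axial pressure falls steeply and almost linearly inside the dense bed and much more slowly in the freeboard, so fitting a line to each region and intersecting them gives an estimate of the bed surface. All numbers below are invented:

```python
import numpy as np

# Hypothetical time-averaged axial pressure data (kPa) vs height (m).
z_bed = np.array([0.0, 0.2, 0.4, 0.6, 0.8])       # dense-bed taps
p_bed = 12.0 - 10.0 * z_bed                       # steep hydrostatic-like drop
z_fb = np.array([1.2, 1.4, 1.6, 1.8])             # freeboard taps
p_fb = 2.5 - 0.5 * z_fb                           # dilute region, flat profile

m1, b1 = np.polyfit(z_bed, p_bed, 1)              # slope, intercept
m2, b2 = np.polyfit(z_fb, p_fb, 1)
bed_height = (b2 - b1) / (m1 - m2)                # intersection of the two lines
print(round(float(bed_height), 3))                # 1.0 m
```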