Experimental Data Sets
Selected Abstracts

Empirical preprocessing methods and their impact on NIR calibrations: a simulation study
JOURNAL OF CHEMOMETRICS, Issue 2 2005. S. N. Thennadil

Abstract: The extraction of chemical information from dense particulate suspensions, such as industrial slurries and biological suspensions, using near-infrared (NIR) spectroscopic measurements is complicated by sample-to-sample path length variations due to light scattering. Empirical preprocessing techniques such as multiplicative scatter correction (MSC), extended MSC and derivatives have been applied to remove these effects and in some cases have shown promise. While the performance of these techniques and other related approaches is known to depend on the nature and extent of the variations and on the measurement configuration, detailed investigations into the efficacy of these approaches under various conditions have not been previously undertaken. The main obstacle to carrying out such investigations has been the lack of, and the difficulty in obtaining, an accurate and comprehensive experimental data set. In this work, simulations that generate 'actual' measurements were carried out to obtain 'experimental' spectroscopic data on particulate systems. This was achieved by solving the exact transport equation for light propagation. A model system comprising four chemical components, with one consisting of spherical submicron particles, was considered. Total diffuse transmittance and reflectance data generated through simulations for moderate particle concentrations were used as the basis for examining the effect of particle size variations and measurement configurations on the efficacy of a number of preprocessing techniques in enhancing the performance of partial least squares (PLS) models for predicting the concentration of one of the non-scattering chemical species. Additionally, a form of extended multiplicative signal correction based on considerations arising from fundamental light scattering theory is proposed and found to perform better than the other techniques for the cases considered in the study. Copyright © 2005 John Wiley & Sons, Ltd. [source]
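As a concrete illustration of the simplest preprocessing method named above, the following is a minimal Python sketch of basic MSC, a generic textbook implementation rather than the paper's code; the array names and the synthetic usage are illustrative only.

```python
import numpy as np

def msc(spectra, reference=None):
    """Basic multiplicative scatter correction (MSC).

    Each spectrum (row) is regressed against a reference spectrum,
    by default the mean spectrum; the fitted additive offset and
    multiplicative slope are then removed, which suppresses
    scatter-induced baseline and path length effects.
    """
    ref = spectra.mean(axis=0) if reference is None else reference
    corrected = np.empty_like(spectra, dtype=float)
    for i, s in enumerate(spectra):
        slope, offset = np.polyfit(ref, s, 1)  # fit s ~ offset + slope*ref
        corrected[i] = (s - offset) / slope
    return corrected

# Illustrative usage: synthetic spectra differing only by offset and slope
base = np.sin(np.linspace(0, 3, 100))
raw = np.array([0.8 + 1.3 * base, 0.1 + 0.9 * base, -0.2 + 1.1 * base])
out = msc(raw)
print(np.allclose(out[0], out[1]) and np.allclose(out[1], out[2]))  # True:
# all rows collapse onto the common (mean) spectrum after correction
```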
A new formulation of garnet–clinopyroxene geothermometer based on accumulation and statistical analysis of a large experimental data set
JOURNAL OF METAMORPHIC GEOLOGY, Issue 7 2009. D. NAKAMURA. Article first published online: 13 JUL 2009

Abstract: Published experimental data including garnet and clinopyroxene as run products were used to develop a new formulation of the garnet–clinopyroxene geothermometer based on 333 garnet–clinopyroxene pairs. Only experiments with graphite capsules were selected because of difficulty in estimating the Fe3+ content of clinopyroxene. For the calibration, a published subregular-solution model was adopted to express the non-ideality of garnet. The magnitude of the Fe–Mg excess interaction parameter for clinopyroxene (WFeMgCpx), and the differences in enthalpy and entropy of the Fe–Mg exchange reaction, were regressed from the accumulated experimental data set. As a result, a markedly negative value was obtained for the Fe–Mg excess interaction parameter of clinopyroxene (WFeMgCpx = −3843 J mol⁻¹). The pressure correction is simply treated as linear, and the volume difference of the Fe–Mg exchange reaction was calculated from a published thermodynamic data set and fixed at −120.72 J kbar⁻¹ mol⁻¹. The regressed thermometer formulation is as follows (the equation itself is not reproduced in this listing), where T = temperature, P = pressure (kbar), A = 0.5 Xgrs (Xprp − Xalm − Xsps), B = 0.5 Xgrs (Xprp − Xalm + Xsps), C = 0.5 (Xgrs + Xsps)(Xprp − Xalm), Xprp = Mg/(Fe2+ + Mn + Mg + Ca)Grt, Xalm = Fe2+/(Fe2+ + Mn + Mg + Ca)Grt, Xsps = Mn/(Fe2+ + Mn + Mg + Ca)Grt, Xgrs = Ca/(Fe2+ + Mn + Mg + Ca)Grt, XMgCpx = Mg/(Al + Fetotal + Mg)Cpx, XFeCpx = Fe2+/(Al + Fetotal + Mg)Cpx, KD = (Fe2+/Mg)Grt/(Fe2+/Mg)Cpx, Grt = garnet, Cpx = clinopyroxene. A test of this new formulation against the accumulated data gave results that are concordant with the experimental temperatures over the whole experimental range (800–1820 °C), with a standard deviation (1 sigma) of 74 °C. Previous formulations of the thermometer are inconsistent with the accumulated data set; they underestimate temperatures by about 100 °C at >1300 °C and overestimate by 100–200 °C at <1300 °C. In addition, they tend to overestimate temperatures for high-Ca garnet (Xgrs ≈ 0.30–0.50). This new formulation has also been tested against previous formulations of the thermometer by application to natural eclogites, giving temperatures some 20–100 °C lower than previous formulations. [source]
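The regressed constants of the thermometer appear only in the original article, so no temperature is computed here; as a rough sketch of how the compositional terms defined above are assembled in practice, the snippet below derives the garnet mole fractions, the interaction terms A, B, C and the Fe–Mg distribution coefficient KD from cation proportions. The example cation numbers are hypothetical.

```python
# Hypothetical cation proportions per formula unit (not from the paper)
grt = {"Fe2": 1.60, "Mn": 0.05, "Mg": 0.90, "Ca": 0.45}  # garnet
cpx = {"Fe2": 0.08, "Mg": 0.80}                          # clinopyroxene

tot = grt["Fe2"] + grt["Mn"] + grt["Mg"] + grt["Ca"]
X_prp, X_alm = grt["Mg"] / tot, grt["Fe2"] / tot
X_sps, X_grs = grt["Mn"] / tot, grt["Ca"] / tot

# Interaction terms exactly as defined in the abstract
A = 0.5 * X_grs * (X_prp - X_alm - X_sps)
B = 0.5 * X_grs * (X_prp - X_alm + X_sps)
C = 0.5 * (X_grs + X_sps) * (X_prp - X_alm)

# Fe-Mg distribution coefficient, KD = (Fe2+/Mg)Grt / (Fe2+/Mg)Cpx
KD = (grt["Fe2"] / grt["Mg"]) / (cpx["Fe2"] / cpx["Mg"])
print(f"Xprp={X_prp:.3f}  Xgrs={X_grs:.3f}  KD={KD:.2f}")
```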
The determination of membrane transport parameters with the cell pressure probe: theory suggests that unstirred layers have significant impact
PLANT CELL & ENVIRONMENT, Issue 12 2005. MELVIN T. TYREE

ABSTRACT: A simulation model was written to compute the time-kinetics of turgor pressure (P) change in Chara corallina during cell pressure probe experiments. The model allowed for the contribution of a membrane plus zero, one, or two unstirred layers of any desired thickness. The hypothesis was tested that a cell with an unstirred layer is a composite membrane that follows the same kind of kinetics with or without unstirred layers. Typical 'osmotic pulse' experiments yield biphasic curves with minimum or maximum pressures, Pmin(max), at time tmin(max) and a solute exponential decay with halftime t1/2. These observed data were then used to compute composite membrane properties, namely the parameters Lp = hydraulic conductance, σ = reflection coefficient and Ps = solute permeability, using theoretical equations. Using the simulation model, it was possible to fit an experimental data set to the same values of Pmin(max), tmin(max) and t1/2 while incorporating different, likely values of unstirred layer thickness, where each thickness requires a unique set of plasmalemma membrane values of Lp, σ and Ps. We conclude that it is not possible to compute plasmalemma membrane properties from cell pressure probe experiments without independent knowledge of the unstirred layer thickness. [source]

Modelling of viscoelastic material behaviour close to the glass transition temperature
PROCEEDINGS IN APPLIED MATHEMATICS & MECHANICS, Issue 1 2009. Michael Johlitz

In this contribution we investigate the mechanical behaviour of polyurethane over a range of different but constant temperatures, from the glassy to the viscoelastic state. Uniaxial tension tests are therefore performed on dogbone specimens under different isothermal conditions; in this manner an experimental data set is provided. As a theoretical basis we present the well-known thermomechanically coupled, one-dimensional, linear viscoelastic material model, which is able to reproduce the experimentally observed material behaviour; for this we adopt temperature-dependent relaxation times. The introduced model parameters are identified via a standard parameter identification tool. Finally, the experimental results are compared with simulations based on the identified model parameters. (© 2009 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Comparison of different algorithms to calculate electrophoretic mobility of analytes as a function of binary solvent composition
ELECTROPHORESIS, Issue 10 2003. Abolghasem Jouyban

Abstract: Ten different mathematical models representing the electrophoretic mobility of analytes in capillary electrophoresis in mixed solvents of different composition have been compared using 32 experimental data sets. The solvents are binary mixtures of water–methanol, water–ethanol and methanol–ethanol, respectively. Mean percentage deviation (MPD), overall MPD (OMPD) and individual percentage deviation (IPD) have been considered as comparison criteria. The results showed that a reorganized solution model, namely the combined nearly ideal binary solvent/Redlich–Kister equation, is the most accurate of the models compared, concerning both correlation ability and prediction capability. [source]

Joint inversion of multiple data types with the use of multiobjective optimization: problem formulation and application to seismic anisotropy investigations
GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007. E. Kozlovskaya

SUMMARY: In geophysical studies the problem of joint inversion of multiple experimental data sets obtained by different methods is conventionally considered as a scalar one. Namely, a solution is found by minimization of a linear combination of functions describing the fit of the values predicted from the model to each set of data. In the present paper we demonstrate that this standard approach is not always justified and propose to consider a joint inversion problem as a multiobjective optimization problem (MOP), for which the misfit function is a vector. The method is based on analysis of two types of solutions to the MOP, considered in the space of misfit functions (objective space). The first is the set of complete optimal solutions that minimize all components of the vector misfit function simultaneously. The second is the set of Pareto optimal solutions, or trade-off solutions, for which it is not possible to decrease any component of the vector misfit function without increasing at least one other. We investigate the connection between the standard formulation of a joint inversion problem and the multiobjective formulation, and demonstrate that the standard formulation is a particular case of scalarization of a multiobjective problem using a weighted sum of the component misfit functions (objectives). We illustrate the multiobjective approach with a non-linear problem of joint inversion of shear wave splitting parameters and longitudinal wave residuals. Using synthetic data and real data from three passive seismic experiments, we demonstrate that random noise in the data and inexact model parametrization destroy the complete optimal solution, which degenerates into a fairly large Pareto set. As a result, the non-uniqueness of the joint inversion problem increases. If random noise in the data is the only source of uncertainty, the Pareto set expands around the true solution in the objective space. In this case the 'ideal point' method of scalarization of multiobjective problems can be used. If the uncertainty is due to inexact model parametrization, the Pareto set in the objective space deviates strongly from the true solution. In this case all scalarization methods fail to find a solution close to the true one and a change of model parametrization is necessary. [source]
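The distinction between the Pareto set and the weighted-sum (scalarized) solution described above is easy to make concrete. The sketch below, on synthetic two-objective data, extracts the non-dominated points and then picks the single model a conventional weighted-sum joint inversion would return; all names and numbers are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
# Objective space: each row holds the two misfits of one candidate model,
# e.g. (shear-wave-splitting misfit, longitudinal-wave-residual misfit).
F = rng.random((500, 2))

def pareto_mask(F):
    """Boolean mask of Pareto-optimal (non-dominated) rows, minimizing."""
    mask = np.ones(len(F), dtype=bool)
    for i in range(len(F)):
        # Row j dominates row i if it is <= in every objective and < in one
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        mask[i] = not dominates_i.any()
    return mask

pareto_set = F[pareto_mask(F)]

# Conventional joint inversion = weighted-sum scalarization: for a given
# weight w it selects exactly one point of the (convex) Pareto front.
w = 0.7
scalar_best = F[np.argmin(w * F[:, 0] + (1.0 - w) * F[:, 1])]
print(len(pareto_set), "trade-off models; weighted-sum pick:", scalar_best)
```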
Prediction of concentrated flow width in ephemeral gully channels
HYDROLOGICAL PROCESSES, Issue 10 2002. J. Nachtergaele

Abstract: Empirical prediction equations of the form W = aQ^b have been reported for rills and rivers, but not for ephemeral gullies. In this study six experimental data sets are used to establish a relationship between channel width (W, m) and flow discharge (Q, m³ s⁻¹) for ephemeral gullies formed on cropland. The resulting regression equation (W = 2.51 Q^0.412; R² = 0.72; n = 67) predicts observed channel width reasonably well. Owing to logistic limitations of the respective experimental set-ups, only relatively small runoff discharges (i.e. Q < 0.02 m³ s⁻¹) were covered. Using field data, where measured ephemeral gully channel width was attributed to a calculated peak runoff discharge on sealed cropland, the application range of the regression equation was extended towards larger discharges (i.e. 5 × 10⁻⁴ m³ s⁻¹ < Q < 0.1 m³ s⁻¹). Comparing W–Q relationships for concentrated flow channels revealed that the discharge exponent (b) varies from 0.3 for rills over 0.4 for gullies to 0.5 for rivers. This shift in b may be the result of: (i) differences in flow shear stress distribution over the wetted perimeter between rills, gullies and rivers; (ii) a decrease in the probability of a channel being formed in soil material with uniform erosion resistance from rills over gullies to rivers; and (iii) a decrease in average surface slope from rills over gullies to rivers. The proposed W–Q equation for ephemeral gullies is valid for (sealed) cropland with no significant change in erosion resistance with depth. Two examples illustrate the limitations of the W–Q approach. In the first example, vertical erosion is hindered by a frozen subsoil. The second example relates to a typical summer situation where the soil moisture profile of an agricultural field makes the top 0.02 m five times more erodible than the underlying soil material. For both cases the observed W values are larger than those predicted by the established channel width equation for concentrated flow on cropland. For the frozen soils the equation W = 3.17 Q^0.368 (R² = 0.78; n = 617) was established, but for the summer soils no equation could be established. Copyright © 2002 John Wiley & Sons, Ltd. [source]
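Power laws of the form W = aQ^b are typically regressed by ordinary least squares after a log transform, since ln W = ln a + b ln Q is a straight line. A minimal sketch with synthetic numbers standing in for field observations (the paper's 67 data points are not reproduced here):

```python
import numpy as np

# Synthetic width/discharge pairs, illustrative only
Q = np.array([1e-4, 5e-4, 2e-3, 8e-3, 2e-2])  # discharge, m^3 s^-1
W = np.array([0.05, 0.09, 0.16, 0.27, 0.38])  # channel width, m

# W = a * Q**b  <=>  ln W = ln a + b ln Q: linear fit in log-log space
b, ln_a = np.polyfit(np.log(Q), np.log(W), 1)
print(f"W = {np.exp(ln_a):.2f} * Q^{b:.3f}")
```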
Effective stress concept in unsaturated soils: Clarification and validation of a unified framework
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 7 2008. Mathieu Nuth

Abstract: The effective stress principle, conventionally applied in saturated soils, is reviewed for constitutive modelling purposes. The assumptions underlying the applicability of Terzaghi's single effective stress are recalled and its advantages are inventoried. The possible stress frameworks applicable to unsaturated soil modelling are reassessed in a comparative manner, specifically Bishop's single effective stress, the independent stress variables approach and the generalized stress framework. The latter considerations lead to the definition of a unified stress context, suitable for modelling soils under different saturation states. In order to qualify the implications of the proposed stress framework, several experimental data sets are re-examined in the light of the generalized effective stress. The critical state lines (CSLs) at different saturation states tend to converge remarkably towards a unique saturated line in the deviatoric stress versus mean effective stress plane. The effective stress interpretation is also applied to isotropic stress paths and compared with the conventional net stress conception. The accent is finally laid on a second key feature of constitutive frameworks based on a unified stress, namely the sufficiency of a unique mechanical yield surface besides the unique CSL. Copyright © 2007 John Wiley & Sons, Ltd. [source]
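For reference, Bishop's single effective stress mentioned above is conventionally written as follows; this is the standard textbook form, quoted for context rather than copied from the paper:

```latex
\sigma' = (\sigma - u_a) + \chi\,(u_a - u_w)
```

where σ is the total stress, u_a and u_w are the pore-air and pore-water pressures, and χ is an effective stress parameter running from 0 (dry) to 1 (saturated), at which point Terzaghi's σ' = σ − u_w is recovered.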
Energy analysis in fluidized-bed drying of large wet particles
INTERNATIONAL JOURNAL OF ENERGY RESEARCH, Issue 6 2002. S. Syahrul

Abstract: An energy analysis of a fluidized-bed drying system is undertaken, using energy models, to optimize the fluidized-bed drying conditions for large wet particles (Group D). Three critical factors, namely the inlet air temperature, the fluidization velocity, and the initial moisture content of the material (e.g., wheat), are studied to determine their effects on the overall energy efficiency and to optimize the fluidized-bed drying process. In order to verify the model, different experimental data sets for wheat taken from the literature are used. The results show that the energy efficiency of the fluidized-bed dryer decreases with increasing drying time and is lowest at the end of the drying process. It is observed that the inlet air temperature has an important effect on energy efficiency for materials whose diffusion coefficient depends on both the temperature and the moisture content of the particle. Furthermore, the energy efficiencies showed higher values for particles with high initial moisture content, while the effect of gas velocity varied depending on the material properties. Good agreement is achieved between the model predictions and the available experimental results. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Accelerating the analyses of 3-way and 4-way PARAFAC models utilizing multi-dimensional wavelet compression
JOURNAL OF CHEMOMETRICS, Issue 11-12 2005. Jeff Cramer

Abstract: Parallel factor analysis (PARAFAC) is one of the most popular methods for evaluating multi-way data sets, such as those typically acquired by hyphenated measurement techniques. One reason for PARAFAC's popularity is its ability to extract directly interpretable chemometric models with little a priori information, together with its capability to handle unknown interferents and missing values. However, PARAFAC requires long computation times that often prohibit analyses fast enough for applications such as online sensing. An additional challenge faced by PARAFAC users is the handling and storage of very large, high-dimensional data sets. Accelerating computations and reducing storage requirements in multi-way analyses are the topics of this manuscript. This study introduces a data pre-processing method based on multi-dimensional wavelet transforms (WTs), which enables highly efficient data compression applied prior to data evaluation. Because multi-dimensional WTs are linear, the intrinsic underlying linear data construction is preserved in the wavelet domain. In almost all studied examples, computation times for analyzing the much smaller, compressed data sets could be reduced so much that the additional effort for wavelet compression was more than compensated. For 3-way and 4-way synthetic and experimental data sets, acceleration factors of up to 50 have been achieved; these data sets could be compressed down to a few per cent of the original size. Despite the high compression, accurate and interpretable models were derived that are in good agreement with conventionally determined PARAFAC models. This study also found that the wavelet type used for compression is an important factor determining acceleration factors, data compression ratios and model quality. Copyright © 2006 John Wiley & Sons, Ltd. [source]
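A sketch of the compress-then-decompose idea using PyWavelets and TensorLy as stand-ins for the authors' implementation; both library choices are assumptions and the array is synthetic. The point it illustrates is the one made above: because the separable multi-dimensional DWT is linear, a trilinear PARAFAC structure in the raw array is preserved in the approximation coefficients, so the model can be fitted on a much smaller array.

```python
import numpy as np
import pywt                                  # PyWavelets (assumed)
import tensorly as tl
from tensorly.decomposition import parafac   # TensorLy (assumed)

X = np.random.rand(64, 64, 64)   # stand-in for a 3-way data set

# One level of separable 3-D DWT; keeping only the approximation block
# ('aaa') halves every mode, i.e. ~12.5% of the original size. Repeating
# the step compresses further, toward the few per cent reported above.
X_small = pywt.dwtn(X, wavelet="db4")["aaa"]

# Fit the PARAFAC model on the compressed array; linearity of the DWT
# means the trilinear factor structure survives in the wavelet domain.
weights, factors = parafac(tl.tensor(X_small), rank=3)
print([f.shape for f in factors])
```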
A new family of genetic algorithms for wavelength interval selection in multivariate analytical spectroscopy
JOURNAL OF CHEMOMETRICS, Issue 6 2003. Héctor C. Goicoechea

Abstract: A new procedure is presented for wavelength interval selection with a genetic algorithm in order to improve the predictive ability of partial least squares multivariate calibration. It involves separately labelling each of the selected sensor ranges with an appropriate inclusion ranking. The new approach intends to alleviate overfitting without the need to prepare an independent monitoring sample set. A theoretical example is worked out in order to compare the performance of the new approach with previous implementations of genetic algorithms. Two experimental data sets are also studied: the target parameters are the concentration of glucuronic acid in complex mixtures studied by Fourier transform mid-infrared spectroscopy and the octane number in gasolines monitored by near-infrared spectroscopy. Copyright © 2003 John Wiley & Sons, Ltd. [source]
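A compact sketch of the general wavelength-interval GA idea, not the authors' specific ranking scheme: candidate solutions are bit-strings over contiguous sensor windows, and fitness is the cross-validated error of a PLS model restricted to the selected windows. scikit-learn is assumed for PLS; all sizes, rates and data are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((50, 200))                  # 50 spectra x 200 wavelengths
y = X[:, 60:80].sum(axis=1) + 0.01 * rng.standard_normal(50)

N_WIN, WIN = 20, 10                        # 20 contiguous 10-sensor windows

def fitness(mask):
    """Negative cross-validated MSE of PLS built on the selected windows."""
    if not mask.any():
        return -np.inf
    cols = np.flatnonzero(np.repeat(mask, WIN))   # window bits -> columns
    pls = PLSRegression(n_components=min(3, len(cols)))
    return cross_val_score(pls, X[:, cols], y, cv=5,
                           scoring="neg_mean_squared_error").mean()

pop = rng.integers(0, 2, (30, N_WIN)).astype(bool)
for generation in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]   # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(10)], parents[rng.integers(10)]
        cut = rng.integers(1, N_WIN)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(N_WIN) < 0.05))  # bit-flip mutation
    pop = np.vstack([parents] + children)

scores = np.array([fitness(ind) for ind in pop])
print("selected windows:", np.flatnonzero(pop[np.argmax(scores)]))
```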
Vacuum drying of wood with radiative heating: I. Experimental procedure
AICHE JOURNAL, Issue 1 2004. Patrick Perré

Abstract: Experimental results for the vacuum drying of wood with radiative heating are presented. In particular, temperature and pressure measurements at different locations within the board are provided, as are the overall drying curves. The heat source is such that the temperature at the end of the process remains low (≈150 °C), and under these conditions the drying process resembles convective drying with superheated steam. Further important details concerning the internal transfer mechanisms induced by this drying process can be pointed out by comparing results for sapwood and heartwood of different species (Picea abies, Abies alba and Fagus silvatica). These extensive experimental data sets will be used in Part II of this work to assess the accuracy and predictive ability of two different drying models and to analyze the vacuum drying process further at a fundamental level. © 2004 American Institute of Chemical Engineers AIChE J, 50: 97–107, 2004 [source]

Vacuum drying of wood with radiative heating: II. Comparison between theory and experiment
AICHE JOURNAL, Issue 1 2004

Abstract: In Part I of this work, extensive experimental data sets for the vacuum drying of wood with radiative heating were presented for sapwood and heartwood of different species (Picea abies, Abies alba, and Fagus silvatica). These data sets are used here to validate two previously developed drying models. The first drying model, known as TransPore, is a comprehensive model able to capture the intricately coupled heat- and mass-transfer mechanisms that evolve throughout the drying process. The second model, known as Front_2D, uses a number of simplifying assumptions to reduce the complexity of the comprehensive model to a system that can be solved by a semi-analytical approach. Although the first model provides a more accurate description of the entire process, the second model is able to produce representative solutions very efficiently in terms of overall computational time, making it a viable option for on-line control purposes. The comparison with experimental data highlights that both models capture all of the observed trends, allowing them to be used with confidence for investigating the vacuum drying process at a fundamental level. The new contribution of this work lies in the fact that both models are used here for the first time to simulate drying at reduced external pressure. © 2004 American Institute of Chemical Engineers AIChE J, 50: 108–118, 2004 [source]

A priori information in a regularized sinogram-based method for removing ring artefacts in tomography
JOURNAL OF SYNCHROTRON RADIATION, Issue 4 2010. Sofya Titarenko

Ring artefacts in X-ray computerized tomography reconstructions are considered. The authors propose a ring artefact removal method based on a priori information regarding the sinogram, including smoothness along the horizontal coordinate, symmetry of the first and final rows, and consideration of small perturbations during acquisition. The method does not require prior reconstruction of the original or corrected sinograms. Its numerical implementation is based on quadratic programming. Its efficacy is examined with regard to experimental data sets collected on graphite and bone. [source]

Searching for the reionization sources
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY: LETTERS (ELECTRONIC), Issue 1 2007. T. Roy Choudhury

ABSTRACT: Using a reionization model simultaneously accounting for a number of experimental data sets, we investigate the nature and properties of reionization sources. The model predicts that hydrogen reionization starts at z ≈ 15, is initially driven by metal-free (Population III) stars, and is 90 per cent complete by z ≈ 8. We find that a fraction fγ > 80 per cent of the ionizing power at z ≈ 7 comes from haloes of mass M < 10⁹ M⊙, predominantly harbouring Population III stars; a turnover to a Population II dominated phase occurs shortly after, with this population, residing in M > 10⁹ M⊙ haloes, yielding fγ ≈ 60 per cent at z = 6. Using Lyman-break broad-band dropout techniques, J-band detection of sources contributing 50 per cent (90 per cent) of the ionizing power at z ≈ 7.5 requires reaching a magnitude J110,AB = 31.2 (31.7), where ≈15 (30) (Population III) sources arcmin⁻² are predicted. We conclude that z > 7 sources tentatively identified in broad-band surveys are relatively massive (M ≈ 10⁹ M⊙) and rare objects which add only marginally (≈1 per cent) to the reionization photon budget. [source]

Precession electron diffraction 1: multislice simulation
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 6 2006. C. S. Own

Precession electron diffraction (PED) is a method that considerably reduces dynamical effects in electron diffraction data, potentially enabling more straightforward solution of structures using the transmission electron microscope. This study focuses upon the characterization of PED data in an effort to improve the understanding of how experimental parameters affect it, in order to predict favorable conditions. A method for generating simulated PED data by the multislice method is presented and tested. Data simulated for a wide range of experimental parameters are analyzed and compared to experimental data for the (Ga,In)2SnO4 (GITO) and ZSM-5 zeolite (MFI) systems. Intensity deviations between normalized simulated and kinematical data sets, which are bipolar for dynamical diffraction data, become unipolar for PED data. Three-dimensional difference plots between PED and kinematical data sets show that PED data are most kinematical for small thicknesses, and that as thickness increases the deviations are minimized by increasing the precession cone semi-angle φ. Lorentz geometry and multibeam dynamical effects explain why the largest deviations cluster about the transmitted beam, and one-dimensional diffraction is pointed out as a strong mechanism for deviation along systematic rows. R factors for the experimental data sets are calculated, demonstrating that PED data are less sensitive to thickness variation; this error metric was also used to determine the experimental specimen thickness. R1 (unrefined) was found to be about 12 and 15% for GITO and MFI, respectively. [source]

On the use of low-resolution data for translation search in molecular replacement
ACTA CRYSTALLOGRAPHICA SECTION A, Issue 1 2002. Andrei Fokine

Low-resolution reflections (approximately 15 Å and lower) are very useful for the translation search in molecular replacement because they are less sensitive to model errors than the traditionally used reflections of 4–10 Å resolution. At low resolution, however, the contribution from the bulk solvent is quite significant, and the corresponding structure factors calculated from a macromolecular model cannot be compared with experimental values if this contribution is neglected. The proposed method provides a way to perform fast translation searches in which low-resolution reflections are taken into account. Test calculations using several experimental data sets show a dramatic improvement in the signal after the bulk-solvent correction and low-resolution reflections were included in the calculation; this improvement allowed unambiguous identification of the solution. [source]
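The bulk-solvent contribution that dominates these low-resolution reflections is commonly modelled with the flat (mask-based exponential) model sketched below; this is the standard form used in macromolecular refinement, quoted for context rather than copied from the paper:

```latex
F_{\mathrm{model}}(\mathbf{s}) = F_{\mathrm{calc}}(\mathbf{s})
  + k_{\mathrm{sol}}\, F_{\mathrm{mask}}(\mathbf{s})\,
    \exp\!\left(-\tfrac{1}{4} B_{\mathrm{sol}}\, s^{2}\right)
```

Here F_mask is the structure factor computed from the solvent mask, and k_sol and B_sol are scale parameters (typical refined values are around 0.35 e Å⁻³ and 46 Å², respectively). Because the solvent term largely cancels part of the atomic contribution, neglecting it makes calculated low-resolution amplitudes systematically too large, which is why uncorrected models cannot be compared against observed low-resolution data.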