Baseline Correction (baseline + correction)

Selected Abstracts


Pattern recognition in capillary electrophoresis data using dynamic programming in the wavelet domain

ELECTROPHORESIS, Issue 13 2008
Gerardo A. Ceballos
Abstract A novel approach to CE data analysis based on pattern recognition techniques in the wavelet domain is presented. Low-resolution, denoised electropherograms are obtained by applying several preprocessing algorithms, including denoising, baseline correction, and detection of the region of interest, in the wavelet domain. The resulting signals are mapped into character sequences using first-derivative information and multilevel peak-height quantization. A local alignment algorithm is then applied to the coded sequences for peak pattern recognition. We also propose 2-D and 3-D representations of the detected patterns for fast visual assessment of the variability of chemical substance concentrations in the analyzed samples. The proposed approach is tested on intracerebral microdialysate data obtained by CE with LIF detection, achieving a correct detection rate of about 85% with a processing time of less than 0.3 s per 25,000-point electropherogram. Applying a local alignment algorithm to low-resolution, denoised electropherograms could have a great impact on high-throughput CE, since the proposed methodology replaces slow, time-consuming, human-based visual pattern recognition with fast, automatic pattern recognition.
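The two core steps described in this abstract, coding a denoised trace into characters and locally aligning two coded sequences, can be illustrated with a short sketch. This is not the authors' implementation; the alphabet, quantization scheme, and Smith-Waterman scoring parameters below are assumptions chosen only to show the idea.

```python
# Sketch: (1) map a low-resolution electropherogram to a character sequence using
# first-derivative sign plus multilevel peak-height quantization; (2) score two
# coded sequences with a simple Smith-Waterman local alignment.
import numpy as np

def encode(signal, n_levels=4):
    """Encode each point as rising/falling slope plus a quantized height level."""
    deriv = np.diff(signal)
    edges = np.linspace(signal.min(), signal.max(), n_levels + 1)[1:-1]
    levels = np.digitize(signal[1:], edges)          # 0 .. n_levels-1
    chars = []
    for d, lv in zip(deriv, levels):
        base = 'A' if d >= 0 else 'a'                # upper case = rising, lower = falling
        chars.append(chr(ord(base) + int(lv)))       # shift letter by height level
    return ''.join(chars)

def smith_waterman(s, t, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between two coded sequences."""
    H = np.zeros((len(s) + 1, len(t) + 1))
    for i in range(1, len(s) + 1):
        for j in range(1, len(t) + 1):
            diag = H[i - 1, j - 1] + (match if s[i - 1] == t[j - 1] else mismatch)
            H[i, j] = max(0, diag, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H.max()

# Usage: compare a reference peak pattern against a new (synthetic) segment.
rng = np.random.default_rng(0)
reference = encode(np.convolve(rng.random(200), np.ones(10) / 10, mode='same'))
sample = encode(np.convolve(rng.random(200), np.ones(10) / 10, mode='same'))
print(smith_waterman(reference, sample))
```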


Signal denoising and baseline correction by discrete wavelet transform for microchip capillary electrophoresis

ELECTROPHORESIS, Issue 18 2003
Bi-Feng Liu
Abstract Signal denoising and baseline correction using the discrete wavelet transform (DWT) are described for microchip capillary electrophoresis (MCE). DWT was performed on an electropherogram describing a separation of nine tetramethylrhodamine-5-isothiocyanate-labeled amino acids, obtained by MCE with laser-induced fluorescence detection, using the Daubechies 5 wavelet at a decomposition level of 6. The denoising efficiency was compared with, and proved superior to, other commonly used denoising techniques such as the Fourier transform, Savitzky-Golay smoothing, and the moving average, in terms of noise removal and peak preservation on direct visual inspection. Novel strategies for baseline correction are proposed, with particular attention to the baseline drift that frequently occurs in chromatographic and electrophoretic separations.
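A minimal sketch of DWT-based denoising and baseline correction in the spirit of this abstract is shown below, using PyWavelets. The wavelet ('db5') and decomposition level (6) follow the abstract, but the soft-thresholding rule and the choice of the coarsest approximation as the baseline are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pywt

def dwt_denoise_baseline(signal, wavelet='db5', level=6):
    coeffs = pywt.wavedec(signal, wavelet, level=level)

    # Universal (VisuShrink) threshold estimated from the finest detail level.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    den_coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    denoised = pywt.waverec(den_coeffs, wavelet)[:len(signal)]

    # Assumption: treat the level-6 approximation alone as the slowly varying
    # baseline, reconstruct it, and subtract it from the denoised trace.
    base_coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    baseline = pywt.waverec(base_coeffs, wavelet)[:len(signal)]
    return denoised - baseline, baseline

# Example on a synthetic electropherogram with drift and noise.
t = np.linspace(0, 1, 4096)
peaks = np.exp(-((t - 0.3) / 0.005) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.005) ** 2)
raw = peaks + 0.2 * t + 0.02 * np.random.randn(t.size)   # drifting baseline + noise
corrected, baseline = dwt_denoise_baseline(raw)
```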


Hyperspectral NIR image regression part II: dataset preprocessing diagnostics

JOURNAL OF CHEMOMETRICS, Issue 3-4 2006
James Burger
Abstract When known reference values such as concentrations are available, the spectra from near-infrared (NIR) hyperspectral images can be used for building regression models. The sets of spectra must be corrected for errors, transformed to reflectance or absorbance values, and trimmed of bad-pixel outliers in order to build robust models and minimize prediction errors. Calibration models can be computed from small (<100) sets of spectra, where each spectrum summarizes an individual image or spatial region of interest (ROI), and used to predict large (>20,000) test sets of spectra. When the distributions of these large populations of predicted values are viewed as histograms, they provide mean sample concentrations (peak centers) as well as uniformity (peak width) and purity (peak shape) information. The same predicted values can also be viewed as concentration maps or images, adding spatial information to the uniformity and purity presentations. Estimates of large-population statistics enable a new metric for determining the optimal number of model components, based on a combination of global bias and pooled standard deviation values computed from multiple test images or ROIs. Two example datasets are presented: an artificial mixture design of three chemicals with distinct NIR spectra, and samples of different cheeses. In some cases, baseline correction by taking first derivatives gave more useful prediction results by reducing optical effects. Other data pretreatments produced negligible changes in prediction errors, overshadowed by the variance associated with sample preparation or presentation and other physical phenomena. Copyright © 2007 John Wiley & Sons, Ltd.
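The workflow described here, calibrating on a small set of mean ROI spectra and then predicting tens of thousands of pixel spectra, can be sketched as below. This is an illustrative reconstruction, not the authors' pipeline: the array shapes, the number of PLS components, and the Savitzky-Golay first-derivative pretreatment are assumptions standing in for the paper's actual choices.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def first_derivative(spectra, window=11, poly=2):
    # Savitzky-Golay first derivative as a simple baseline-reducing pretreatment.
    return savgol_filter(spectra, window, poly, deriv=1, axis=-1)

# Calibration: e.g. 60 mean spectra (one per image or ROI) x 200 wavelengths.
X_cal = np.random.rand(60, 200)          # placeholder calibration spectra
y_cal = np.random.rand(60)               # known reference concentrations
pls = PLSRegression(n_components=5)
pls.fit(first_derivative(X_cal), y_cal)

# Prediction: a full hyperspectral image unfolded to >20,000 pixel spectra.
X_img = np.random.rand(25000, 200)
y_pred = pls.predict(first_derivative(X_img)).ravel()

# Histogram of predictions: peak center ~ mean concentration,
# peak width ~ uniformity, peak shape ~ purity of the sample.
hist, edges = np.histogram(y_pred, bins=100)
print(y_pred.mean(), y_pred.std())
```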


Granulation sensing of first-break ground wheat using a near-infrared reflectance spectrometer: studies with soft red winter wheats

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 3 2003
Melchor C Pasikatan
Abstract A near-infrared reflectance spectrometer, previously evaluated as a granulation sensor for first-break ground wheat from six wheat classes and for hard red winter (HRW) wheats, was further evaluated for soft red winter (SRW) wheats. Two sets of 35 wheat samples, representing seven cultivars of SRW wheat ground by an experimental roller mill at five roll gap settings (0.38, 0.51, 0.63, 0.75 and 0.88 mm), were used for calibration and validation. Partial least squares regression was applied to develop the granulation models using combinations of four data pretreatments (log(1/R), baseline correction, unit area normalisation and derivatives) and subregions of the 400–1700 nm wavelength range. The cumulative mass of each size fraction was used as the reference value. Models that corrected for path-length effects (those that used unit area normalisation) predicted the larger size fractions well. The model based on unit area normalisation and the first derivative predicted 34 out of 35 validation spectra, with standard errors of prediction of 3.53, 1.83, 1.43 and 1.30 for the >1041, >375, >240 and >136 µm size fractions respectively. Because of the smaller variation in the mass of each size fraction, the SRW wheat granulation models performed better than the previously reported models for six wheat classes. However, because SRW wheat flour tends to stick to the underside of sieves, the finest size fraction of these models did not perform as well as in the HRW wheat models. © 2003 Society of Chemical Industry
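The pretreatments named in this abstract (log(1/R), baseline correction, unit area normalisation, derivatives) followed by PLS regression can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the study's exact processing chain: the offset-subtraction baseline correction, Savitzky-Golay derivative settings, placeholder spectra, and 5 latent variables are all assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression

def pretreat(reflectance):
    absorb = np.log10(1.0 / reflectance)                  # log(1/R)
    absorb -= absorb.min(axis=1, keepdims=True)           # crude offset baseline correction
    absorb /= absorb.sum(axis=1, keepdims=True)           # unit-area normalisation
    return savgol_filter(absorb, 11, 2, deriv=1, axis=1)  # first derivative

# Calibration on 35 ground-wheat spectra (placeholder data) against the cumulative
# mass of one size fraction, mirroring the abstract's setup.
X_cal = np.random.rand(35, 650) * 0.8 + 0.1   # reflectance spectra, 400-1700 nm region
y_cal = np.random.rand(35) * 100              # cumulative mass (%) of a size fraction
model = PLSRegression(n_components=5).fit(pretreat(X_cal), y_cal)

# Validation set predicted with the same pretreatment chain.
X_val = np.random.rand(35, 650) * 0.8 + 0.1
y_hat = model.predict(pretreat(X_val)).ravel()
print(y_hat[:5])
```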