Classical Methods (classical + methods)

Selected Abstracts


Sexual dimorphism in limb bones of ibex (Capra ibex L.): mixture analysis applied to modern and fossil data

INTERNATIONAL JOURNAL OF OSTEOARCHAEOLOGY, Issue 5 2007
H. Fernández
Abstract Estimating sex ratios of fossil bone assemblages is an important step in the determination of demographic profiles, which are essential for understanding the palaeobiology and palaeoethology of any particular species, as well as its exploitation patterns by humans. This is especially true for ibex (Capra ibex), which was a main source of food for hominids during Pleistocene times. Classical methods for determining sexual dimorphism and sex ratio, such as analyses using uni- and bivariate plots, are based on an arbitrary fixing of limits between sexes. Here we use a more robust statistical method termed mixture analysis (MA) to determine the sex of postcranial remains (long bones, metapodials and tarsals) from ibex. For the first time, we apply MA to both a modern and a fossil sample of one species, by using metric data taken from (i) a collection of present-day ibex skeletons and (ii) a Palaeolithic sample of the same species. Our results clearly show that the forelimb (humerus and radius) is more dimorphic than the hindlimb (femur and tibia) and is therefore better suited for sexing ibex. It also appears that metapodials should be used carefully for estimating sex ratios. On the basis of these results, we propose a classification of bone measurements that are more or less reliable for sexing ibex. The results of MA applied to the ibex fossil bones from the Upper Palaeolithic site of the Observatoire (Monaco) lead us to the conclusion that this assemblage consists of a majority of males. The quantitative estimations calculated by the MA make it possible to compare the size of Pleistocene and modern ibex for the whole set of variables used in this study. Copyright © 2006 John Wiley & Sons, Ltd. [source]
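Mixture analysis of this kind can be illustrated with a two-component Gaussian mixture fitted to a single bone measurement. The sketch below is a minimal, hypothetical example: the measurement values, the single-variable setup and the use of scikit-learn are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of mixture analysis (MA) for sexing skeletal measurements:
# fit a two-component Gaussian mixture to one bone dimension and read off the
# estimated group means and mixing proportions (illustrative synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical distal-humerus breadths (mm): a smaller and a larger mode.
measurements = np.concatenate([
    rng.normal(44.0, 1.5, 60),   # assumed female-like component
    rng.normal(50.0, 1.8, 40),   # assumed male-like component
]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(measurements)
order = np.argsort(gm.means_.ravel())
print("component means (mm):", gm.means_.ravel()[order])
print("estimated sex ratio (small:large):", gm.weights_[order])
# Individual specimens can then be assigned via gm.predict(measurements).
```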


An Adaptive Method for Indirect Illumination Using Light Vectors

COMPUTER GRAPHICS FORUM, Issue 3 2001
Xavier Serpaggi
In computer graphics, several phenomena need to be taken into account when it comes to the field of photo-realism. One of the most relevant is obviously the notion of global, and more precisely indirect, illumination. In "classical" ray-tracing, if you are not under the light, then you are in a shadow. A great amount of work has been carried out proposing ray-tracing based solutions that take into account the fact that "there is a certain amount of light in shadows". All of these methods share the same weaknesses: high computation time and a lot of parameters that must be tuned to get something out of the method. This paper proposes a generic computation method of indirect illumination based on Monte Carlo sampling and on sequential analysis theory, which is faster and more automatic than classical methods. [source]
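The core idea of coupling Monte Carlo sampling with a sequential stopping rule can be sketched as an adaptive estimator that keeps drawing samples until the running standard error of the mean is small enough, instead of using a fixed, user-tuned sample count. The toy integrand, tolerance and stopping rule below are illustrative assumptions, not the paper's light-vector formulation.

```python
# Toy sketch of sequential-analysis-driven Monte Carlo sampling.
import math, random

def estimate_indirect(integrand, tol=0.01, min_samples=32, max_samples=100_000):
    total, total_sq, n = 0.0, 0.0, 0
    while n < max_samples:
        x = integrand(random.random(), random.random())
        total += x
        total_sq += x * x
        n += 1
        if n >= min_samples:
            mean = total / n
            var = max(total_sq / n - mean * mean, 0.0)
            if math.sqrt(var / n) < tol:      # sequential stopping rule
                break
    return total / n, n

# Example: a stand-in "incoming radiance" integrand over two random parameters.
value, used = estimate_indirect(lambda u, v: math.cos(u * math.pi / 2) * v)
print(f"estimate = {value:.4f} after {used} samples")
```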


Protein engineering and discovery of lipases,

EUROPEAN JOURNAL OF LIPID SCIENCE AND TECHNOLOGY, Issue 1 2010
Robert Kourist
Abstract Lipases are widely used in the modification of fats and oils, with applications in the production of structured triacylglycerols, the selective isolation or incorporation of specific fatty acids, and in oleochemistry for the synthesis of emollient esters and sugar fatty acid esters. Despite the numerous examples of the effective use of lipases, the biocatalysts often need to be optimized to show the desired specificities, stability, operational properties, etc. Besides rather classical methods such as variation of the solvent system or of the carrier for immobilization, the use of protein engineering methods to modify the protein on a molecular level is an important tool for the creation of tailor-designed enzymes. Protein design is also complemented by the efficient isolation of novel lipases from the metagenome. This article covers concepts and examples for the discovery of novel lipases and their variants by protein engineering and metagenome techniques. [source]


Restricted parameter space models for testing gene-gene interaction

GENETIC EPIDEMIOLOGY, Issue 5 2009
Minsun Song
Abstract There is a growing recognition that interactions (gene-gene and gene-environment) can play an important role in common disease etiology. The development of cost-effective genotyping technologies has made genome-wide association studies the preferred tool for searching for loci affecting disease risk. These studies are characterized by a large number of investigated SNPs, and efficient statistical methods are even more important than in classical association studies that are done with a small number of markers. In this article we propose a novel gene-gene interaction test that is more powerful than classical methods. The increase in power is due to the fact that the proposed method incorporates reasonable constraints in the parameter space. The test for both association and interaction is based on a likelihood ratio statistic that has a χ² distribution asymptotically. We also discuss the definitions used for "no interaction" and argue that tests for pure interaction are useful in genome-wide studies, especially when using two-stage strategies where the analyses in the second stage are done on pairs of loci for which at least one is associated with the trait. Genet. Epidemiol. 33:386–393, 2009. © 2008 Wiley-Liss, Inc. [source]
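For contrast, a conventional (unconstrained) likelihood ratio test for a SNP × SNP interaction in a case-control logistic model can be sketched as follows. The genotype frequencies, effect sizes and use of statsmodels are assumptions, and the paper's restriction of the parameter space is not reproduced here.

```python
# Classical LRT for a SNP x SNP interaction term; asymptotically chi-squared.
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000
snp1 = rng.binomial(2, 0.3, n)            # genotypes coded 0/1/2
snp2 = rng.binomial(2, 0.4, n)
logit = -1.0 + 0.2 * snp1 + 0.1 * snp2 + 0.25 * snp1 * snp2
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X_null = sm.add_constant(np.column_stack([snp1, snp2]))
X_full = sm.add_constant(np.column_stack([snp1, snp2, snp1 * snp2]))
ll_null = sm.Logit(y, X_null).fit(disp=0).llf
ll_full = sm.Logit(y, X_full).fit(disp=0).llf

lrt = 2 * (ll_full - ll_null)             # 1 degree of freedom here
print("LRT statistic:", round(lrt, 2), " p =", round(chi2.sf(lrt, df=1), 4))
```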


Performance of a parallel implementation of the FMM for electromagnetics applications

INTERNATIONAL JOURNAL FOR NUMERICAL METHODS IN FLUIDS, Issue 8 2003
G. Sylvand
Abstract This paper describes the parallel fast multipole method implemented in the EADS integral equations code. We focus on electromagnetics applications such as CEM and RCS computation. We solve Maxwell's equations in the frequency domain by a finite boundary-element method. The complex dense system of equations obtained cannot be solved using classical methods when the number of unknowns exceeds approximately 10⁵. The use of iterative solvers (such as GMRES) and fast methods (such as the fast multipole method (FMM)) to speed up the matrix–vector product allows us to break this limit. We present the parallel out-of-core implementation of this method developed at CERMICS/INRIA and integrated in EADS industrial software. We were able to solve unprecedented industrial applications containing up to 25 million unknowns. Copyright © 2003 John Wiley & Sons, Ltd. [source]
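The solver structure described here, an iterative Krylov method wrapped around a fast matrix–vector product, can be sketched with SciPy's GMRES and a matrix-free operator. The stand-in kernel below is purely illustrative: a real implementation would supply the multilevel FMM product in place of the dense matrix, and none of the EADS code is reproduced.

```python
# Schematic GMRES solve where the operator is only available as a matvec callback.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 500
# Stand-in "interaction" matrix; a production FMM evaluates the product without
# ever forming it, in roughly O(n log n) work.
A_dense = np.eye(n) + 0.01 / (1.0 + np.abs(np.subtract.outer(np.arange(n), np.arange(n))))

def fast_matvec(x):
    return A_dense @ x                     # placeholder for the FMM-accelerated product

A_op = LinearOperator((n, n), matvec=fast_matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A_op, b)
print("GMRES exit code:", info, " residual norm:", np.linalg.norm(A_dense @ x - b))
```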


A comparison of modern data analysis methods for X-ray and neutron specular reflectivity data

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 5 2007
A. Van Der Lee
Data analysis methods for specular X-ray or neutron reflectivity are compared. The methods that have been developed over the years can be classified into different types. The so-called classical methods are based on Parratt's or Abelès' formalism and rely on minimization using more or less evolved Levenberg–Marquardt or simplex routines. A second class uses the same formalism, but optimization is carried out using simulated annealing or genetic algorithms. A third class uses alternative expressions for the reflectivity, such as the Born approximation or distorted Born approximation. This makes it easier to invert the specular data directly, coupled or not with classical least-squares or iterative methods using over-relaxation or charge-flipping techniques. A fourth class uses mathematical methods founded in scattering theory to determine the phase of the scattered waves, but has to be coupled in certain cases with (magnetic) reference layers. The strengths and weaknesses of a number of these methods are evaluated using simulated and experimental data. It is shown that genetic algorithms are by far superior to traditional and advanced least-squares methods, but that they fail when the layers are less well defined. In the latter case, the methods from the third or fourth class are the better choice, because they permit at least a first estimate of the density profile to be obtained that can be refined using the classical methods of the first class. It is also shown that different analysis programs may calculate different reflectivities for a similar chemical system. One reason for this is that the representation of the layers is either described by chemical composition or by scattering length or electronic densities, between which the conversion of the absorptive part is not straightforward. A second important reason is that routines that describe the convolution with the instrumental resolution function are not identical. [source]
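The "classical" formalism referred to above can be illustrated with a bare-bones Parratt recursion for a single film on a substrate. This sketch omits roughness and resolution smearing, and the layer SLDs and thickness are assumed, illustrative values rather than any system from the paper.

```python
# Minimal Parratt recursion for specular reflectivity (no roughness, no smearing).
import numpy as np

def parratt_reflectivity(q, sld, thickness):
    """q: momentum transfers (1/Å); sld: layer SLDs from ambient to substrate (1/Å^2);
    thickness: thicknesses (Å) of the buried layers, i.e. len(sld) - 2 values."""
    q = np.asarray(q, dtype=complex)
    kz0 = q / 2.0
    # Vertical wavevector in each layer, measured relative to the ambient medium.
    kz = [np.sqrt(kz0 ** 2 - 4.0 * np.pi * (s - sld[0])) for s in sld]
    R = np.zeros_like(q, dtype=complex)            # no reflection from below the substrate
    for m in range(len(sld) - 2, -1, -1):          # recurse upward from the substrate
        r_m = (kz[m] - kz[m + 1]) / (kz[m] + kz[m + 1])
        phase = 1.0 if m == len(sld) - 2 else np.exp(2.0j * kz[m + 1] * thickness[m])
        R = (r_m + R * phase) / (1.0 + r_m * R * phase)
    return np.abs(R) ** 2

q = np.linspace(0.005, 0.3, 300)
# Ambient (air), a single ~120 Å film, silicon substrate -- illustrative SLD values.
refl = parratt_reflectivity(q, sld=[0.0, 3.5e-6, 2.07e-6], thickness=[120.0])
print("R at the three lowest q values:", refl[:3].round(4))
```

Fitting such a forward model with a genetic-algorithm-style optimizer (e.g. scipy.optimize.differential_evolution) versus a Levenberg–Marquardt routine is essentially the comparison drawn between the first and second classes of methods above.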


Dispersion and repulsion contributions to the solvation free energy: Comparison of quantum mechanical and classical approaches in the polarizable continuum model

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 15 2006
Carles Curutchet
Abstract We report a systematic comparison of the dispersion and repulsion contributions to the free energy of solvation determined using quantum mechanical self-consistent reaction field (QM-SCRF) and classical methods. In particular, QM-SCRF computations have been performed using the dispersion and repulsion expressions developed in the framework of the integral equation formalism of the polarizable continuum model, whereas classical methods involve both empirical pairwise potential and surface-dependent approaches. Calculations have been performed for a series of aliphatic and aromatic compounds containing prototypical functional groups in four solvents: water, octanol, chloroform, and carbon tetrachloride. The analysis is focused on the dependence of the dispersion and repulsion components on the level of theory used in QM-SCRF computations, the contribution of those terms in different solvents, and the magnitude of the coupling between electrostatic and dispersion–repulsion components. Finally, comparison is made between the dispersion–repulsion contributions obtained from QM-SCRF calculations and the results determined from classical approaches. © 2006 Wiley Periodicals, Inc. J Comput Chem, 2006 [source]


Preservation of Microstructure in Peach and Mango during High-pressure-shift Freezing

JOURNAL OF FOOD SCIENCE, Issue 3 2000
L. Otero
ABSTRACT: A histological technique was used to evaluate modifications on the microstructure of peach and mango due to classical methods of freezing and those produced by high-pressure-shift freezing (HPSF). With the high-pressure-shift method, samples are cooled under pressure (200 MPa) to -20°C without ice formation, then pressure is released to atmospheric pressure (0.1 MPa). The high level of supercooling (approximately 20°C) leads to uniform and rapid ice nucleation throughout the volume of the specimen. This method maintained the original tissue structure to a great extent. Since problems associated with thermal gradients are minimized, high-pressure-shift freezing prevented quality losses due to freeze-cracking or large ice crystal presence. [source]


Molecular detection and characterization of human enteroviruses directly from clinical samples using RT-PCR and DNA sequencing

JOURNAL OF MEDICAL VIROLOGY, Issue 2 2006
Miren Iturriza-Gómara
Abstract Enteroviruses are common human pathogens associated with a wide spectrum of symptoms ranging from asymptomatic infection to acute flaccid paralysis and neonatal multi-organ failure. Molecular methods that provide rapid diagnosis and increased sensitivity have been developed for the diagnosis of enterovirus infection using oligonucleotide primers complementary to conserved sequences located in the 5′ untranslated region (UTR), but data generated from these regions are not sufficiently discriminatory for typing due to the lack of correlation between their nucleic acid sequence and serotype specificity. Sequences derived from the gene encoding the capsid protein VP1 correlate with serotype, and therefore provide the opportunity for the development of molecular typing methods consistent with present serological methods. In this study, oligonucleotide primers that amplify a region of the 5′ UTR to detect enterovirus RNA, and the region encoding the enterovirus VP1 N-terminus to characterize virus strains, were used in nested and semi-nested RT-PCRs, respectively. The ability of the VP1 RT-PCR to amplify diverse viruses within genotypes and genogroups was confirmed by the correct identification of both prototype strains and currently circulating strains of the same genotypes. The molecular methods proved their utility through the detection of enteroviruses that failed to grow in cell culture, their subsequent characterization, and the characterization of strains that failed to serotype in neutralization assays. Molecular methods significantly increased the sensitivity of detection (P < 0.001) and of characterization (P < 0.01) of enteroviruses when compared to classical methods. J. Med. Virol. 78:243–253, 2006. © 2005 Wiley-Liss, Inc. [source]


New antitumour cyclic astin analogues: synthesis, conformation and bioactivity

JOURNAL OF PEPTIDE SCIENCE, Issue 2 2004
Dr Filomena Rossi
Abstract Astins, antitumour cyclic pentapeptides, were isolated from Aster tataricus. Their chemical structures consist of a 16-membered ring system containing a unique β,γ-dichlorinated proline [Pro(Cl)2], other non-coded amino acid residues and a cis conformation in one of the peptide bonds. The astin backbone conformation, along with the cis peptide bond in which the β,γ-dichlorinated proline residue is involved, was considered to play an important role in their antineoplastic activities on sarcoma 180A and P388 lymphocytic leukaemia in mice, but the scope and potential applications of this activity remain unclear. With the aim of improving our knowledge of the conformational properties influencing the bioactivity in this class of compounds, new astin-related cyclopeptides were synthesized, differing from the natural products by the presence of some non-proteinogenic amino acid residues: Aib, Abu, (S)-β3-hPhe and a peptide bond surrogate (-SO2-NH-). The analogues prepared, c(-Pro-Thr-Aib-β3-Phe-Abu-), c[Pro-Thr-Aib-(S)-β3-hPhe-Abu], c[Pro-Abu-Ser-(S)-β3-hPhe-ψ(CH2-SO2-NH)-Abu] and c[Pro-Thr-Aib-(S)-β3-hPhe-ψ(CH2-SO2-NH)-Abu], were synthesized by classical methods in solution and tested for their antitumour effect. These molecules were studied by crystal-state X-ray diffraction analysis and/or solution NMR and MD techniques. Copyright © 2003 European Peptide Society and John Wiley & Sons, Ltd. [source]


Probing structural requirements of fMLP receptor: On the size of the hydrophobic pocket corresponding to residue 2 of the tripeptide

JOURNAL OF PEPTIDE SCIENCE, Issue 2 2002
Susanna Spisani
Abstract The conformationally constrained f-L-Met-Acnc-L-Phe-OMe (n = 4, 9, 12) tripeptides, analogues of the chemoattractant f-L-Met-L-Leu-L-Phe-OH, were synthesized in solution by classical methods and fully characterized. These compounds and the published f-L-Met-Xxx-L-Phe-OMe (Xxx = Aib and Acnc where n = 3, 5–8) analogues were compared to determine the combined effect of backbone preferred conformation and side-chain bulkiness at position 2 on the relation of 3D-structure to biological activity. A conformational study of all the analogues was performed in solution by FT-IR absorption and ¹H-NMR techniques. In parallel, each peptide was tested for its ability to induce chemotaxis, superoxide anion production and lysozyme secretion from human neutrophils. The biological and conformational data are discussed in relation to the proposed model of the chemotactic receptor on neutrophils, in particular of the hydrophobic pocket accommodating residue 2 of the tripeptide. Copyright © 2002 European Peptide Society and John Wiley & Sons, Ltd. [source]


µGISAXS and protein nanotemplate crystallization: methods and instrumentation

JOURNAL OF SYNCHROTRON RADIATION, Issue 6 2005
Eugenia Pechkova
Microbeam grazing-incidence small-angle X-ray scattering (µGISAXS) has been used, and the technique improved, in order to investigate protein nucleation and crystal growth assisted by a protein nanotemplate. The aim is to understand the protein nanotemplate method in detail, as this method has proved capable of accelerating crystal growth and of increasing crystal size and quality, as well as of inducing crystallization of proteins that are not crystallizable by classical methods. The nanotemplate experimental setup was used for drops containing growing lysozyme crystals at three different stages of growth. [source]


Using historical data for Bayesian sample size determination

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007
Fulvio De Santis
Summary. We consider the sample size determination (SSD) problem, which is a basic yet extremely important aspect of experimental design. Specifically, we deal with the Bayesian approach to SSD, which gives researchers the possibility of taking into account pre-experimental information and uncertainty on unknown parameters. At the design stage, this fact offers the advantage of removing or mitigating typical drawbacks of classical methods, which might lead to serious miscalculation of the sample size. In this context, the leading idea is to choose the minimal sample size that guarantees a probabilistic control on the performance of quantities that are derived from the posterior distribution and used for inference on parameters of interest. We are concerned with the use of historical data, i.e. observations from previous similar studies, for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between historical data and observations of the upcoming experiment. This problem, in fact, determines the necessity of discounting prior information and of evaluating the effect of heterogeneity on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed and their use, in concert with power priors, is illustrated in several medical experimental contexts. [source]
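The effect of discounting historical data can be sketched in the simplest conjugate setting: a normal mean with known variance, where a power prior built from n0 historical observations enters the posterior precision with weight a0, and the sample size criterion is a target length for the 95% posterior interval. All numbers below are illustrative assumptions, not taken from the paper.

```python
# Minimal power-prior sample size calculation for a normal mean with known sigma.
import math

def minimal_n(sigma, target_length, n0, a0, z=1.959964):
    # Posterior variance = sigma^2 / (a0*n0 + n)  =>  interval length = 2 z sigma / sqrt(a0*n0 + n)
    required_precision = (2 * z * sigma / target_length) ** 2
    return max(0, math.ceil(required_precision - a0 * n0))

sigma, target = 10.0, 4.0
for a0 in (0.0, 0.25, 0.5, 1.0):     # a0 = 0 ignores the history, a0 = 1 pools it fully
    print(f"a0 = {a0:4.2f}  ->  minimal n = {minimal_n(sigma, target, n0=50, a0=a0)}")
```

The decreasing sample sizes as a0 grows show, in miniature, how discounting prior information trades off against the size of the upcoming experiment.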


Bayesian classification of tumours by using gene expression data

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2005
Bani K. Mallick
Summary. Precise classification of tumours is critical for the diagnosis and treatment of cancer. Diagnostic pathology has traditionally relied on macroscopic and microscopic histology and tumour morphology as the basis for the classification of tumours. Current classification frameworks, however, cannot discriminate between tumours with similar histopathologic features, which vary in clinical course and in response to treatment. In recent years, there has been a move towards the use of complementary deoxyribonucleic acid microarrays for the classification of tumours. These high throughput assays provide relative messenger ribonucleic acid expression measurements simultaneously for thousands of genes. A key statistical task is to perform classification via different expression patterns. Gene expression profiles may offer more information than classical morphology and may provide an alternative to classical tumour diagnosis schemes. The paper considers several Bayesian classification methods based on reproducing kernel Hilbert spaces for the analysis of microarray data. We consider the logistic likelihood as well as likelihoods related to support vector machine models. It is shown through simulation and examples that support vector machine models with multiple shrinkage parameters produce fewer misclassification errors than several existing classical methods as well as Bayesian methods based on the logistic likelihood or those involving only one shrinkage parameter. [source]
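As a point of comparison only (not the paper's Bayesian reproducing-kernel models), a plain kernel support vector machine applied to synthetic expression profiles might look like the following; the number of genes, the class shift and the scikit-learn usage are assumptions for illustration.

```python
# Baseline kernel SVM on synthetic "expression profiles" for two tumour classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_genes, n_per_class = 500, 30
class0 = rng.normal(0.0, 1.0, size=(n_per_class, n_genes))
class1 = rng.normal(0.0, 1.0, size=(n_per_class, n_genes))
class1[:, :20] += 1.5                      # 20 informative genes shifted in class 1
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # C and gamma play the shrinkage role here
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean().round(3))
```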


Using unlabelled data to update classification rules with applications in food authenticity studies

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2006
Nema Dean
Summary. An authentic food is one that is what it purports to be. Food processors and consumers need to be assured that, when they pay for a specific product or ingredient, they are receiving exactly what they pay for. Classification methods are an important tool in food authenticity studies where they are used to assign food samples of unknown type to known types. A classification method is developed where the classification rule is estimated by using both the labelled and the unlabelled data, in contrast with many classical methods which use only the labelled data for estimation. This methodology models the data as arising from a Gaussian mixture model with parsimonious covariance structure, as is done in model-based clustering. A missing data formulation of the mixture model is used and the models are fitted by using the EM and classification EM algorithms. The methods are applied to the analysis of spectra of food-stuffs recorded over the visible and near infra-red wavelength range in food authenticity studies. A comparison of the performance of model-based discriminant analysis and the method of classification proposed is given. The classification method proposed is shown to yield very good misclassification rates. The correct classification rate was observed to be as much as 15% higher than the correct classification rate for model-based discriminant analysis. [source]
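The labelled-plus-unlabelled estimation idea can be sketched with a one-dimensional, two-class Gaussian mixture fitted by EM, where labelled points keep fixed class memberships and unlabelled points receive soft responsibilities. The data below are synthetic stand-ins for the spectral measurements used in the paper, and the parsimonious covariance structures of the full method are not reproduced.

```python
# Toy semi-supervised EM for a two-class, one-dimensional Gaussian mixture.
import numpy as np

rng = np.random.default_rng(2)
x_lab = np.concatenate([rng.normal(0, 1, 15), rng.normal(3, 1, 15)])
y_lab = np.array([0] * 15 + [1] * 15)
x_unl = np.concatenate([rng.normal(0, 1, 200), rng.normal(3, 1, 200)])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Initialise from the labelled data only.
pi = np.array([0.5, 0.5])
mu = np.array([x_lab[y_lab == k].mean() for k in (0, 1)])
var = np.array([x_lab[y_lab == k].var() + 1e-3 for k in (0, 1)])

for _ in range(50):                                   # EM iterations
    # E-step: soft responsibilities for unlabelled points, fixed 0/1 for labelled ones.
    r_unl = np.stack([pi[k] * normal_pdf(x_unl, mu[k], var[k]) for k in (0, 1)], axis=1)
    r_unl /= r_unl.sum(axis=1, keepdims=True)
    r = np.vstack([np.eye(2)[y_lab], r_unl])
    x_all = np.concatenate([x_lab, x_unl])
    # M-step: weighted updates of proportions, means and variances.
    nk = r.sum(axis=0)
    pi = nk / nk.sum()
    mu = (r * x_all[:, None]).sum(axis=0) / nk
    var = (r * (x_all[:, None] - mu) ** 2).sum(axis=0) / nk

print("class means:", mu.round(2), " proportions:", pi.round(2))
```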


Multi-dimensional combustion waves for Lewis number close to one

MATHEMATICAL METHODS IN THE APPLIED SCIENCES, Issue 3 2007
A. Ducrot
Abstract This paper is devoted to the study of multi-dimensional travelling wave solutions for a thermo-diffusive model describing the propagation of curved flames in an infinite cylinder. The linear dependence of the components of the reaction rate together with the existence of an ignition temperature ensure that the corresponding linearized operator does not satisfy the Fredholm property. A direct consequence is that solvability conditions for the linearized operator are not known and classical methods of nonlinear analysis cannot be directly applied. We prove in this paper existence results for such travelling waves, by first introducing a suitable re-formulation of the equations and then by choosing suitable weighted spaces that allow us to move the essential spectrum away from zero. Copyright © 2006 John Wiley & Sons, Ltd. [source]


An implicit QR algorithm for symmetric semiseparable matrices

NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS, Issue 7 2005
Raf Vandebril
Abstract The QR algorithm is one of the classical methods to compute the eigendecomposition of a matrix. If it is applied to a dense n × n matrix, this algorithm requires O(n³) operations per iteration step. To reduce this complexity for a symmetric matrix to O(n), the original matrix is first reduced to tridiagonal form using orthogonal similarity transformations. In the report (Report TW360, May 2003) a reduction from a symmetric matrix into a similar semiseparable one is described. In this paper a QR algorithm to compute the eigenvalues of semiseparable matrices is designed where each iteration step requires O(n) operations. Hence, combined with the reduction to semiseparable form, the eigenvalues of symmetric matrices can be computed via intermediate semiseparable matrices, instead of tridiagonal ones. The eigenvectors of the intermediate semiseparable matrix will be computed by applying inverse iteration to this matrix. This will be achieved by using an O(n) system solver for semiseparable matrices. A combination of the previous steps leads to an algorithm for computing the eigenvalue decompositions of semiseparable matrices. Combined with the reduction of a symmetric matrix towards semiseparable form, this algorithm can also be used to calculate the eigenvalue decomposition of symmetric matrices. The presented algorithm has the same order of complexity as the tridiagonal approach, but has larger lower order terms. Numerical experiments illustrate the complexity and the numerical accuracy of the proposed method. Copyright © 2005 John Wiley & Sons, Ltd. [source]
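The classical route that the semiseparable approach is compared against can be sketched as tridiagonalization followed by QR iterations. The example below uses unshifted QR steps for clarity (practical implementations use shifts and deflation), and the test matrix with spectrum 1..8 is an assumption chosen so the plain iteration converges quickly.

```python
# Tridiagonalize a symmetric matrix, then run plain QR iterations on it.
import numpy as np
from scipy.linalg import hessenberg, eigvalsh

rng = np.random.default_rng(3)
Q0, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q0 @ np.diag(np.arange(1.0, 9.0)) @ Q0.T      # symmetric, known spectrum 1..8

T, _ = hessenberg(A, calc_q=True)                 # symmetric Hessenberg form = tridiagonal
for _ in range(300):                              # unshifted QR steps (shifts used in practice)
    Qk, Rk = np.linalg.qr(T)
    T = Rk @ Qk                                   # similarity transform, stays tridiagonal
print("QR iteration :", np.sort(np.diag(T)).round(4))
print("LAPACK eigh  :", np.sort(eigvalsh(A)).round(4))
```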


Rapid techniques for the extraction of vitamin E isomers from Amaranthus caudatus seeds: ultrasonic and supercritical fluid extraction

PHYTOCHEMICAL ANALYSIS, Issue 5 2002
Renato Bruni
Abstract Supercritical fluid extraction (SFE) of seeds of Amaranthus caudatus (Amaranthaceae) and the use of ultrasound as a co-adjuvant in the extraction process were compared with methods traditionally used in the extraction of tocopherols and fatty acids. The use of readily available ultrasound equipment as an adjunct to the classical methods employed for the extraction of tocols provided qualitatively acceptable results more rapidly and more economically. SFE gave quantitatively better yields in shorter times, with solvent-free extracts obtained under conditions that minimised the degradation of thermolabile components. No significant variations were observed in the profile of the fatty acids extracted from amaranth oil by SFE or other methods, thus confirming the qualitative comparability of the faster supercritical extraction with the more time-consuming classical techniques even when processed with the aid of ultrasound. Copyright © 2002 John Wiley & Sons, Ltd. [source]


Data analysis in plant physiology: are we missing the reality?

PLANT CELL & ENVIRONMENT, Issue 9 2001
G. N. Amzallag
Abstract In plant physiology, data analysis is based on the comparison of mean values. In this perspective, variability around the mean value has no significance per se, but only for estimating statistical significance of the difference between two mean values. Another approach to variability is proposed here, derived from the difference between redundant and deterministic patterns of regulation in their capacity to buffer noise. From this point of view, analysis of variability enables the investigation of the level of redundancy of a regulation pattern, and even allows us to study its modifications. As an example, this method is used to investigate the effect of brassinosteroids (BSs) during vegetative growth in Sorghum bicolor. It is shown that, at physiological concentrations, BSs modulate the network of regulation without affecting the mean value. Thus, it is concluded that the physiological effect of BSs cannot be revealed by comparison of mean values. This example illustrates how a part of the reality (in this case, the most relevant one) is hidden by the classical methods of comparison between mean values. The proposed tools of analysis open new perspectives in understanding plant development and the non-linear processes involved in its regulation. They also ask for a redefinition of fundamental concepts in physiology, such as growth regulator, optimality, stress and adaptation. [source]
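The contrast drawn here between comparing means and comparing variability can be illustrated with two synthetic treatment groups that share a mean but differ in spread. Levene's test is used below as one stand-in for a dispersion-based analysis, not as the authors' exact method, and the trait values are invented.

```python
# Same mean, different variability: a mean comparison sees nothing, a spread comparison does.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
control = rng.normal(loc=50.0, scale=2.0, size=40)     # e.g. a growth trait, arbitrary units
treated = rng.normal(loc=50.0, scale=6.0, size=40)     # same mean, noisier regulation

t_stat, p_mean = stats.ttest_ind(control, treated)
w_stat, p_var = stats.levene(control, treated)
print(f"difference in means : p = {p_mean:.3f}")
print(f"difference in spread: p = {p_var:.3g}")
```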


The seasonal forecast of electricity demand: a hierarchical Bayesian model with climatological weather generator

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 2 2006
Sergio Pezzulli
Abstract In this paper we focus on the one-year-ahead prediction of the electricity peak-demand daily trajectory during the winter season in Central England and Wales. We define a Bayesian hierarchical model for predicting the winter trajectories and present results based on the past observed weather. Thanks to the flexibility of the Bayesian approach, we are able to produce the marginal posterior distributions of all the predictands of interest. This is fundamental progress with respect to classical methods. The results are encouraging in both skill and representation of uncertainty. Further extensions are straightforward, at least in principle. The two main ones consist in conditioning the weather generator model on additional information, such as knowledge of the first part of the winter and/or the seasonal weather forecast. Copyright © 2006 John Wiley & Sons, Ltd. [source]


Recent developments in classical density modification

ACTA CRYSTALLOGRAPHICA SECTION D, Issue 4 2010
Kevin Cowtan
Classical density-modification techniques (as opposed to statistical approaches) offer a computationally cheap method for improving phase estimates in order to provide a good electron-density map for model building. The rise of statistical methods has led to a shift in focus away from the classical approaches; as a result, some recent developments have not made their way into classical density-modification software. This paper describes the application of some recent techniques, including most importantly the use of prior phase information in the likelihood estimation of phase errors within a classical density-modification framework. The resulting software gives significantly better results than comparable classical methods, while remaining nearly two orders of magnitude faster than statistical methods. [source]


Axial Dispersion and Wall Effects in Narrow Fixed Bed Reactors: A Comparative Study Based on RTD and NMR Measurements

CHEMICAL ENGINEERING & TECHNOLOGY (CET), Issue 8 2004
D. Tang
Abstract Axial dispersion and wall effects in narrow fixed beds with aspect ratios < 10 were investigated, both by classical methods and by NMR imaging. The residence time distribution (RTD) in the center and at the wall was measured in a water system with NaCl solution as tracer, and subsequently compared with radial velocity profiles based on NMR imaging. The influence of the aspect ratio and of the particle Reynolds number on dispersion and on the degree of non-uniformity of the velocity profile was studied. The NMR results are consistent with the RTD and also with literature data from numerical simulations. For low aspect ratios, dispersion/wall effects have a strong influence on the reactor behavior, above all in cases where a low effluent concentration is essential, as proven by breakthrough experiments with the reaction of H2S with ZnO. [source]
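The classical moments treatment of an RTD curve can be sketched as follows: the mean residence time and dimensionless variance are computed from the tracer response, and the Bodenstein number follows from the small-dispersion relation σθ² ≈ 2/Bo. The tracer curve and the resulting numbers below are synthetic assumptions, not the measured data from the study.

```python
# Method-of-moments analysis of a (synthetic) residence time distribution.
import numpy as np

t = np.linspace(0.0, 60.0, 601)                        # time, s
dt = t[1] - t[0]
E = np.exp(-(t - 20.0) ** 2 / (2.0 * 3.0 ** 2))        # synthetic tracer response
E /= E.sum() * dt                                      # normalise to an RTD

t_mean = (t * E).sum() * dt                            # first moment: mean residence time
var = ((t - t_mean) ** 2 * E).sum() * dt               # second central moment
sigma_theta2 = var / t_mean ** 2                       # dimensionless variance
Bo = 2.0 / sigma_theta2                                # small-dispersion estimate
print(f"t_mean = {t_mean:.1f} s, sigma_theta^2 = {sigma_theta2:.4f}, Bo ~ {Bo:.0f}")
```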