Model Terms (model + term)


Selected Abstracts


Modelling of small-angle X-ray scattering data using Hermite polynomials

JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 4 2001
A. K. Swain
A new algorithm, called the term-selection algorithm (TSA), is derived to treat small-angle X-ray scattering (SAXS) data by fitting models to the scattering intensity using weighted Hermite polynomials. This algorithm exploits the orthogonality of the Hermite polynomials and introduces an error-reduction ratio test to select the correct model terms, that is, to determine which polynomials are to be included in the model and to estimate the associated unknown coefficients. With no a priori information about particle sizes, it is possible to evaluate the real-space distribution function, as well as the three- and one-dimensional correlation functions, directly from the models fitted to raw experimental data. The success of this algorithm depends on the choice of a scale factor and on the accuracy of the orthogonality of the Hermite polynomials over the finite range of the SAXS data. A weighted orthogonal term-selection algorithm is therefore derived to overcome the disadvantages of the TSA. This algorithm combines the properties and advantages of both weighted and orthogonal least-squares algorithms and is numerically more robust for estimating the parameters of Hermite polynomial models. The weighting feature provides an additional degree of freedom for controlling the effects of noise, and the orthogonal feature enables reorthogonalization of the Hermite polynomials with respect to the weighting matrix, which considerably reduces the orthogonality error. The performance of the algorithm is demonstrated on both simulated data and experimental SAXS measurements of dewaxed cotton fibre at different temperatures. [source]
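The TSA described above couples orthogonal least squares with an error-reduction ratio (ERR) test. The paper's exact algorithm is not reproduced here; the following is a generic sketch of greedy ERR term selection over an arbitrary candidate matrix, which in the SAXS setting would hold weighted Hermite polynomials evaluated on the scattering-vector grid. Function and variable names are illustrative assumptions, not the author's notation.

```python
import numpy as np

def err_term_selection(X, y, tol=0.999):
    """Greedy orthogonal least-squares term selection.
    X: (m, n) candidate-term matrix; y: (m,) response.
    Returns indices of selected columns, ranked by error-reduction ratio."""
    m, n = X.shape
    W = X.astype(float).copy()      # working copy, orthogonalized in place
    selected = []
    total = float(y @ y)            # total "energy" of the response
    explained = 0.0
    for _ in range(n):
        # ERR of each remaining candidate against the response
        err = np.zeros(n)
        for j in range(n):
            if j in selected:
                continue
            w = W[:, j]
            d = float(w @ w)
            if d < 1e-12:
                continue
            err[j] = (w @ y) ** 2 / (d * total)
        k = int(np.argmax(err))
        if err[k] <= 0:
            break
        selected.append(k)
        explained += err[k]
        if explained >= tol:        # model explains enough variance; stop
            break
        # orthogonalize the remaining candidates against the chosen term
        wk = W[:, k] / np.linalg.norm(W[:, k])
        for j in range(n):
            if j not in selected:
                W[:, j] -= (wk @ W[:, j]) * wk
    return selected
```

On a synthetic response built from two candidate columns, the two true terms are selected first and the stopping rule fires once their combined ERR is near one.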


A METHOD FOR SIMPLIFYING LARGE ECOSYSTEM MODELS

NATURAL RESOURCE MODELING, Issue 2 2008
JOCK LAWRIE
Abstract Simplifying large ecosystem models is essential if we are to understand the underlying causes of observed behaviors. However, such understanding is often employed to achieve simplification. This paper introduces two model simplification methods that can be applied without requiring intimate prior knowledge of the system. Their utility is measured by the resulting values of given model diagnostics relative to those of the large model. The first method is a simple automated procedure for nondimensionalizing large ecosystem models, which identifies and eliminates terms that have little effect on model diagnostics. Some of its limitations are then addressed by the rate elimination method, which measures the relative importance of model terms using least-squares regression. The methods are applied to a model of the nitrogen cycle in Port Phillip Bay, Victoria, Australia. The rate elimination method provided more insights into the causal relationships built into the model than the nondimensionalizing method. It also allowed the reduction of the model's dimension. Thus it is a useful first step in model simplification. [source]
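The rate elimination method above measures the relative importance of model terms using least-squares regression; the paper's exact formulation is not given in the abstract. A minimal sketch of one plausible variant, assuming we regress an observed derivative on sampled rate terms and score each term by how much the residual grows when that term is dropped (all names are hypothetical):

```python
import numpy as np

def rank_rate_terms(terms, dydt):
    """Rank candidate rate terms by how much dropping each one inflates
    the least-squares residual of a fit to the observed derivative.
    terms: dict name -> (m,) sampled rate-term values; dydt: (m,) target."""
    names = list(terms)
    A = np.column_stack([terms[n] for n in names])
    base = np.linalg.lstsq(A, dydt, rcond=None)[1]   # residual sum of squares
    base = float(base[0]) if base.size else 0.0
    scores = {}
    for i, n in enumerate(names):
        Ai = np.delete(A, i, axis=1)                 # refit without term i
        r = np.linalg.lstsq(Ai, dydt, rcond=None)[1]
        r = float(r[0]) if r.size else 0.0
        scores[n] = r - base                         # larger = more important
    return sorted(scores, key=scores.get, reverse=True)
```

Terms whose removal barely changes the fit (the candidates for elimination) sort to the bottom of the ranking.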


Fenton and photo-Fenton treatment of distillery effluent and optimization of treatment conditions with response surface methodology

ASIA-PACIFIC JOURNAL OF CHEMICAL ENGINEERING, Issue 3 2010
Mojtaba Hadavifar
Abstract Two chemical processes, Fenton and photo-Fenton, were tested separately for the treatment of vinasse generated by an alcohol distillery. To evaluate the effectiveness of the processes, four independent variables were studied: SCOD concentration, initial pH, H2O2 dosage and FeSO4 dosage. Experiments were conducted according to a central composite design (CCD) and analyzed using response surface methodology (RSM). In the photo-Fenton process, UV radiation was applied for 80 min in each experiment. Two modified quadratic equations were used for data fitting. The most significant model terms in both processes were found to be the initial chemical oxygen demand (COD) concentration and the initial pH. A higher removal efficiency was achieved with the photo-Fenton process than with Fenton alone: the efficiency ranged from 18 to 97% for photo-Fenton, compared with 5–47% for the Fenton process. The R2 values of the models (R2 > 0.97) indicate a very high degree of correlation between the predicted and observed values. Copyright © 2009 Curtin University of Technology and John Wiley & Sons, Ltd. [source]
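The RSM step described above fits a second-order polynomial in the design factors. A minimal sketch, assuming a full quadratic model (intercept, linear, pure quadratic, and two-factor interaction terms) fitted by ordinary least squares over central-composite design points; the factor values and response below are hypothetical, not the paper's vinasse data:

```python
import numpy as np
from itertools import combinations

def quadratic_design(X):
    """Full second-order (RSM) design matrix for factors X of shape (m, k):
    intercept, linear, pure quadratic, and two-factor interaction columns."""
    m, k = X.shape
    cols = [np.ones(m)] + [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

def fit_rsm(X, y):
    """Least-squares fit of the quadratic response surface; returns
    the coefficient vector and the coefficient of determination R^2."""
    D = quadratic_design(X)
    beta, *_ = np.linalg.lstsq(D, y, rcond=None)
    yhat = D @ beta
    ss_res = float(np.sum((y - yhat) ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return beta, 1.0 - ss_res / ss_tot
```

For two coded factors, the classic CCD support (factorial points at ±1, axial points at ±√2, replicated center points) makes all six quadratic coefficients estimable.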


Variable Selection and Model Choice in Geoadditive Regression Models

BIOMETRICS, Issue 2 2009
Thomas Kneib
Summary Model choice and variable selection are issues of major concern in practical regression analyses, arising in many biometric applications such as habitat suitability analyses, where the aim is to identify the influence of potentially many environmental conditions on certain species. We describe regression models for breeding bird communities that facilitate both model choice and variable selection, by a boosting algorithm that works within a class of geoadditive regression models comprising spatial effects, nonparametric effects of continuous covariates, interaction surfaces, and varying coefficients. The major modeling components are penalized splines and their bivariate tensor product extensions. All smooth model terms are represented as the sum of a parametric component and a smooth component with one degree of freedom to obtain a fair comparison between the model terms. A generic representation of the geoadditive model allows us to devise a general boosting algorithm that automatically performs model choice and variable selection. [source]
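The boosting algorithm described above performs variable selection by choosing, at each iteration, the single model component that best fits the current residual. The paper's base learners are penalized splines and tensor products; the sketch below uses simple linear base learners to show only the componentwise selection mechanism (a standard L2-boosting scheme, not the authors' implementation):

```python
import numpy as np

def componentwise_l2boost(X, y, nu=0.1, steps=200):
    """Componentwise L2-boosting: at each step, fit every single-covariate
    least-squares base learner to the current residual and update only the
    best-fitting one by a shrunken step nu. Unselected covariates stay at 0."""
    m, p = X.shape
    beta = np.zeros(p)
    offset = float(y.mean())
    resid = y - offset
    for _ in range(steps):
        num = X.T @ resid                       # per-column x_j . r
        den = np.einsum("ij,ij->j", X, X)       # per-column x_j . x_j
        b = num / den                           # univariate LS coefficients
        sse = np.sum(resid ** 2) - b * num      # residual SS per candidate
        j = int(np.argmin(sse))                 # best base learner this step
        beta[j] += nu * b[j]
        resid -= nu * b[j] * X[:, j]
    return offset, beta
```

Covariates that never help fit the residual keep coefficient zero, which is the sense in which boosting performs variable selection.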


Marginalized Models for Moderate to Long Series of Longitudinal Binary Response Data

BIOMETRICS, Issue 2 2007
Jonathan S. Schildcrout
Summary Marginalized models (Heagerty, 1999, Biometrics 55, 688–698) permit likelihood-based inference when interest lies in marginal regression models for longitudinal binary response data. Two such models are the marginalized transition and marginalized latent variable models. The former captures within-subject serial dependence among repeated measurements with transition model terms, while the latter assumes exchangeable or nondiminishing response dependence using random intercepts. In this article, we extend the class of marginalized models by proposing a single unifying model that describes both serial and long-range dependence. This model will be particularly useful in longitudinal analyses with a moderate to large number of repeated measurements per subject, where both serial and exchangeable forms of response correlation can be identified. We describe maximum likelihood and Bayesian approaches to parameter estimation and inference, and we study the large-sample operating characteristics under two types of dependence model misspecification. Data from the Madras Longitudinal Schizophrenia Study (Thara et al., 1994, Acta Psychiatrica Scandinavica 90, 329–336) are analyzed. [source]
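In a marginalized transition model, the marginal logistic mean and the conditional transition term must be made mutually consistent by solving for a time-specific conditional intercept. A minimal sketch of that constraint, assuming the standard construction (Heagerty, 1999): given marginal means mu_t and serial-dependence parameter gamma, the intercept Delta_t solves a one-dimensional monotone equation. This illustrates the constraint only, not the paper's estimation procedure; names are illustrative.

```python
import math

def expit(z):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-z))

def solve_delta(mu_t, mu_prev, gamma, lo=-30.0, hi=30.0):
    """Find the conditional intercept Delta_t such that the transition model
    P(Y_t = 1 | Y_{t-1} = y) = expit(Delta_t + gamma * y)
    reproduces the marginal mean mu_t, given the previous marginal mean
    mu_prev. Uses bisection; the constraint is increasing in Delta_t."""
    def g(d):
        return expit(d + gamma) * mu_prev + expit(d) * (1 - mu_prev) - mu_t
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Positive gamma (serial dependence) pulls the conditional probability up after a positive response, so Delta_t must sit below logit(mu_t) to keep the marginal mean on target.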