Model Fitting (model + fitting)
Selected Abstracts

Skew-normal linear calibration: a Bayesian perspective
JOURNAL OF CHEMOMETRICS, Issue 8 2008
Cléber da Costa Figueiredo
Abstract: In this paper, we present a Bayesian approach to estimation in the skew-normal calibration model, together with the conditional posterior distributions needed to implement the Gibbs sampler. The proposed methodology thus avoids data transformation. Model fitting is assessed with a proposed asymmetric deviance information criterion, ADIC, a modification of the ordinary DIC. We also report an application of the model to a real data set on the relationship between the resistance and the elasticity of a sample of concrete beams. Copyright © 2008 John Wiley & Sons, Ltd. [source]

Design of experiments with unknown parameters in variance
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 3 2002
Valerii V. Fedorov
Abstract: Model fitting when the variance function depends on unknown parameters is a common problem in many areas of research. Iterated estimators that are asymptotically equivalent to maximum likelihood estimators are proposed and their convergence is discussed. From a computational point of view, these estimators are very close to iteratively reweighted least squares. The additive structure of the corresponding information matrices allows us to apply convex design theory, which leads to optimal design algorithms. We conclude with examples that illustrate how to bridge our general results with specific applied needs. In particular, a model with experimental costs is introduced and studied within the normalized design paradigm. Copyright © 2002 John Wiley & Sons, Ltd. [source]

Spatial Modeling of Wetland Condition in the U.S. Prairie Pothole Region
BIOMETRICS, Issue 2 2002
J. Andrew Royle
Summary: We propose a spatial modeling framework for wetland data produced from a remote-sensing-based waterfowl habitat survey conducted in the U.S. Prairie Pothole Region (PPR). The data produced from this survey consist of the area containing water on many thousands of wetland basins (i.e., prairie potholes). We propose a two-state model containing wet and dry states. This model provides a concise description of the wet probability, i.e., the probability that a basin contains water, and of the amount of water contained in wet basins. The two model components are spatially linked through a common latent effect, which is assumed to be spatially correlated. Model fitting and prediction are carried out using Markov chain Monte Carlo methods. The model primarily facilitates mapping of habitat conditions, which is useful in varied monitoring and assessment capacities. More importantly, the predictive capability of the model provides a rigorous statistical framework for directing management and conservation activities by enabling characterization of habitat structure at any point on the landscape. [source]

Kinetics and mechanism of myristic acid and isopropyl alcohol esterification reaction with homogeneous and heterogeneous catalysts
INTERNATIONAL JOURNAL OF CHEMICAL KINETICS, Issue 3 2008
Tuncer Yalçinyuva
The reaction of myristic acid (MA) and isopropyl alcohol (IPA) was carried out using both homogeneous and heterogeneous catalysts. For the homogeneously catalyzed system, the experimental data have been interpreted with a second-order power-law kinetic model, and good agreement between the experimental data and the model has been obtained.
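As a minimal sketch of what such a second-order power-law fit looks like in practice, the snippet below estimates a rate constant from concentration-time data. The time points, concentrations, and equimolar initial charge are purely illustrative assumptions; neither the paper's measurements nor its Polymath setup are reproduced here.

```python
# Hedged sketch: second-order rate constant for MA + IPA esterification,
# d[MA]/dt = -k*[MA]*[IPA], fitted to illustrative (made-up) data.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def rate(C, t, k):
    ma, ipa = C
    r = k * ma * ipa          # second-order power-law rate
    return [-r, -r]

def model(t, k, ma0=1.0, ipa0=1.0):
    sol = odeint(rate, [ma0, ipa0], t, args=(k,))
    return sol[:, 0]          # predicted [MA] over time

t = np.array([0.0, 10, 20, 40, 60, 90, 120])                    # min, illustrative
ma_obs = np.array([1.0, 0.83, 0.71, 0.55, 0.45, 0.36, 0.30])    # mol/L, illustrative

k_fit, _ = curve_fit(model, t, ma_obs, p0=[0.01])
print(f"k = {k_fit[0]:.4f} L mol^-1 min^-1")
```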
In this approach, a protonated carboxylic acid was assumed to be a possible reaction intermediate. After a mathematical model was proposed, the reaction rate constants were computed with the Polymath program. For the heterogeneously catalyzed system, interestingly, no pore diffusion limitation was detected. The influences of the initial molar ratio, catalyst loading and type, temperature, and the amount of water in the feed were examined, as well as the effect of catalyst particle size for the heterogeneous systems. Among the catalysts used, p-toluenesulfonic acid (p-TSA) gave the highest reaction rates. Kinetic parameters such as the activation energy and frequency factor were determined from model fitting. Experimental K values were found to be 0.54 and 1.49 at 60°C and 80°C, respectively. Furthermore, the activation energy and frequency factor of the forward reaction were calculated as 54.2 kJ mol^-1 and 1828 L mol^-1 s^-1, respectively. © 2008 Wiley Periodicals, Inc. 40: 136–144, 2008 [source]

Temperature reconstructions and comparisons with instrumental data from a tree-ring network for the European Alps
INTERNATIONAL JOURNAL OF CLIMATOLOGY, Issue 11 2005
David Frank
Abstract: Ring-width and maximum latewood density data from a network of high-elevation sites distributed across the European Alps are used to reconstruct regional temperatures. The network integrates 53 ring-width and 31 density chronologies from stands of four species, all located above 1500 m a.s.l. The development and basic climatic response patterns of this network are described elsewhere (Frank and Esper, 2005). The common temperature signal over the study region allowed regional reconstructions to be developed using principal component regression models for average June–August (1600–1988) and April–September (1650–1987) temperatures from the ring-width and density records, respectively. Similar climatic histories are derived for both seasons, but the ring-width and density-based reconstructions seem to carry more of their variance in the lower and higher frequency domains, respectively. Distinct warm decades are the 1940s, 1860s, 1800s, 1730s, 1660s and 1610s; cold decades are the 1910s, 1810s, 1710s, 1700s and 1690s. Because of the model fitting and the shorter time spans involved, comparisons between the reconstructions and high-elevation instrumental data during the majority of the 1864–1972 calibration period show good agreement. Yet prior to this period, from which only a few low-elevation temperature records are available, a trend divergence between tree-ring and instrumental records is observed. We present evidence that this divergence may be explained by the ring-width data carrying more of an annual rather than warm-season signal in the lower frequency domain. Other factors, such as noise, tree-ring standardization, or the more uncertain nature of low-frequency trends in early instrumental records and their homogenization, might help explain this divergence as well. Copyright © 2005 Royal Meteorological Society. [source]

Robust Methods for the Analysis of Income Distribution, Inequality and Poverty
INTERNATIONAL STATISTICAL REVIEW, Issue 3 2000
Maria-Pia Victoria-Feser
Summary: Income distribution embeds a large field of research subjects in economics. It is important to study how incomes are distributed among the members of a population, for example in order to determine tax policies for redistribution to decrease inequality, or to implement social policies to reduce poverty.
The available data come mostly from surveys (and not censuses, as is often believed) and are often subject to long debates about their reliability, because the sources of error are numerous. Moreover, the form in which the data are available is not always what one would expect, i.e., complete and continuous (microdata); the data may only be available in grouped form (in income classes) and/or truncated, where a portion of the original data has been omitted from the sample or simply not recorded. Because of these data features, it is important to complement classical statistical procedures with robust ones. In this paper such methods are presented, especially for model selection, model fitting with several types of data, inequality and poverty analysis, and ordering tools. The approach is based on the influence function (IF) developed by Hampel (1974) and further developed by Hampel, Ronchetti, Rousseeuw & Stahel (1986). It is also shown, through the analysis of real UK and Tunisian data, that robust techniques can give a different picture of income distribution, inequality or poverty than classical ones. [source]

Effect of genetic variance in plant quality on the population dynamics of a herbivorous insect
JOURNAL OF ANIMAL ECOLOGY, Issue 4 2009
Nora Underwood
Summary:
1. Species diversity can affect many ecological processes; much less is known about the importance of population genetic diversity, particularly for the population dynamics of associated species. Genetic diversity within a host species can create habitat diversity; when associated species move among hosts, this variation could affect populations additively (an effect of average habitat) or non-additively (an effect of habitat variance). Mathematical theory suggests that non-additive effects of variance among patches should influence population size, but this theory has not been tested.
2. This prediction was tested in the field by asking, using model fitting, whether aphid population dynamics parameters on strawberry plant genotype mixtures were additive or non-additive functions of the parameters on individual plant genotypes in monoculture.
3. Results show that variance in quality among plant genotypes can have non-additive effects on aphid populations, and that the form of this effect depends on the particular plant genotypes involved.
4. Genetic variation among plants also influenced the spatial distribution of aphids within plant populations, but the number of plant genotypes per population did not affect aphid populations.
5. These results suggest that predicting the behaviour of populations in heterogeneous environments can require knowledge of both average habitat quality and variance in quality. [source]

Simulating and evaluating small-angle X-ray scattering of micro-voids in polypropylene during mechanical deformation
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3 2010
Stefan Fischer
Micro-voids that evolve during mechanical deformation in polypropylene have been characterized by small-angle X-ray scattering. Such voids can be modelled as randomly distributed cylinders oriented along the stretching direction, with a log-normal size distribution. The model and simulation results are presented here. Advantages and disadvantages of the approach, the validity of the model, and important considerations for data evaluation are discussed. Data analysis of two-dimensional scattering images has been performed by direct model fitting to the scattering images, using a fully automated MATLAB routine. [source]
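To give a flavor of the kind of model fitting the SAXS abstract above describes, the sketch below fits a log-normal distribution of cylinder radii to a one-dimensional scattering curve. It is a deliberate simplification: only the cylinder cross-section term [2·J1(qR)/(qR)]² is used (appropriate for long cylinders aligned with the stretching direction), the q-grid and parameter values are invented, and the authors' full two-dimensional MATLAB routine is not reproduced.

```python
# Hedged sketch: polydisperse aligned-cylinder SAXS model, log-normal radii.
import numpy as np
from scipy.special import j1
from scipy.optimize import curve_fit

def cylinder_cs(q, R):
    x = q * R
    return (2.0 * j1(x) / x) ** 2          # cross-section form factor

def lognormal_model(q, scale, mu, sigma, n_draw=2000):
    rng = np.random.default_rng(0)          # fixed seed keeps the fit smooth
    R = rng.lognormal(mu, sigma, n_draw)    # log-normal radii (nm, assumed)
    w = R ** 4                               # ~ volume^2 weighting per cylinder
    I = np.array([np.sum(w * cylinder_cs(qi, R)) / np.sum(w) for qi in q])
    return scale * I

q = np.linspace(0.05, 2.0, 60)               # nm^-1, illustrative grid
noise = 1 + 0.02 * np.random.default_rng(1).normal(size=q.size)
I_obs = lognormal_model(q, 1.0, np.log(5.0), 0.3) * noise  # synthetic "data"

p, _ = curve_fit(lognormal_model, q, I_obs, p0=[1.0, np.log(4.0), 0.25])
print("scale, mu, sigma =", p)
```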
Nano-sized ceramics of coated alumina and zirconia analyzed with SANS
JOURNAL OF APPLIED CRYSTALLOGRAPHY, Issue 3-1 2000
U. Keiderling
The sintering behaviour of two new types of coated ceramics, made from alumina grains coated with a zirconia shell and from zirconia grains coated with an alumina shell, was analyzed with small-angle neutron scattering (SANS). Measurements were performed both for the plain samples and with contrast variation using D2O as immersion liquid. The size distribution and the volume fraction of grains and pores were determined from the corrected scattering curves by direct model fitting, applying two different approaches: a sphere model and a combined sphere/spherical-shell model. Results are discussed in the context of the macroscopic density of the samples. The sintering behaviour of the two ceramic types was found to be very different. [source]

Bayesian strategy assessment in multi-attribute decision making
JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 3 2003
Arndt Bröder
Abstract: Behavioral decision research on multi-attribute decision making is plagued by the problem of drawing inferences about cognitive strategies from behavioral data. This bridging problem has been tackled by a range of methodical approaches, namely structural modeling (SM), process tracing (PT), and comparative model fitting. Whereas SM and PT have been criticized for a number of reasons, the comparative fitting approach has some theoretical advantages as long as the formal relation between theories and data is specified. A Bayesian method is developed that is able to assess whether an empirical data vector was most likely generated by a 'Take The Best' heuristic (Gigerenzer et al., 1991), by an equal-weight rule, or by a compensatory strategy. Equations are derived for the two- and three-alternative cases, respectively, and a simulation study supports the method's validity. The classification also showed convergent validity with process-tracing measures in an experiment. Potential extensions of the general approach to other applications in behavioral decision research are discussed. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Patterns of density, diversity, and the distribution of migratory strategies in the Russian boreal forest avifauna
JOURNAL OF BIOGEOGRAPHY, Issue 11 2008
Russell Greenberg
Abstract:
Aim: Comparisons of the biotas in the Palaearctic and Nearctic have focused on limited portions of the two regions. The purpose of this study was to assess the geographic pattern in the abundance, species richness, and importance of different migration patterns of the boreal forest avifauna of Eurasia from Europe to East Asia, as well as their relationship to climate and forest productivity. We further examine data from two widely separated sites in the New World to see how these conform to the patterns found in the Eurasian system.
Location: Boreal forest sites in Russia and Canada.
Methods: Point counts were conducted in two to four boreal forest habitats at each of 14 sites in the Russian boreal forest, from near the Finnish border to the Far East, as well as at two sites in boreal Canada. We examined the abundance and species richness of all birds, and of specific migratory classes, against four gradients (climate, primary productivity, latitude, and longitude). We tested for spatial autocorrelation in both dependent and independent variables using Moran's I to develop spatial correlograms.
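A distance-class Moran's I correlogram of the kind just described can be computed in a few lines. The sketch below is illustrative only: the site coordinates, values, and distance breaks are invented stand-ins for the survey data.

```python
# Minimal sketch: Moran's I per distance class (a spatial correlogram).
import numpy as np

def morans_i(values, weights):
    z = values - values.mean()
    num = (weights * np.outer(z, z)).sum()
    den = (z ** 2).sum()
    return (len(values) / weights.sum()) * (num / den)

def correlogram(coords, values, breaks):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    out = []
    for lo, hi in zip(breaks[:-1], breaks[1:]):
        w = ((d > lo) & (d <= hi)).astype(float)   # binary weights per class
        np.fill_diagonal(w, 0.0)
        out.append(morans_i(values, w))
    return out

rng = np.random.default_rng(42)
coords = rng.uniform(0, 100, size=(30, 2))   # hypothetical site locations
values = rng.normal(size=30)                  # e.g., abundance or residuals
print(correlogram(coords, values, breaks=np.array([0, 20, 40, 60, 80])))
```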
For each migratory class we used maximum likelihood to fit models, first assuming uncorrelated residuals and then assuming spatially autocorrelated residuals. For models assuming unstructured residuals, we again generated correlograms on the model residuals to determine whether model fitting removed the spatial autocorrelation. Models were compared using Akaike's information criterion, adjusted for small sample size.
Results: Overall abundance was highest at the eastern and western extremes of the survey region and lowest at the centre of the continent, whereas the abundance of tropical and short-distance migrants displayed an east–west gradient, with tropical migrants increasing in abundance in the east (and south) and short-distance migrants in the west. Although overall species richness showed no geographic pattern, richness within migratory classes showed patterns weaker than, but similar to, the abundance patterns described above. Overall abundance was correlated with climate variables that relate to continentality. The abundances of birds with different migration strategies were correlated with a second climatic gradient: increasing precipitation from west to east. Models using descriptors of location generally had greater explanatory value for the abundance and species-richness response variables than did those based on climate data and the normalized difference vegetation index (NDVI).
Main conclusions: The distribution patterns for migrant types were related to both climatic and locational variables, so the patterns could be explained either by climatic regime or by the accessibility of winter habitats, both historically and currently. Non-boreal wintering habitat is more accessible from the western and eastern ends than from the centre of the boreal forest belt, but the tropics are most accessible from the eastern end of the Palaearctic boreal zone, in terms of distance and the absence of geographical barriers. Based on comparisons with the Canadian sites, we recommend that future comparative studies between Palaearctic and Nearctic faunas focus more on Siberia and the Russian Far East, as well as on central and western Canada. [source]

Genetic Contribution to Bone Metabolism, Calcium Excretion, and Vitamin D and Parathyroid Hormone Regulation
JOURNAL OF BONE AND MINERAL RESEARCH, Issue 2 2001
D. Hunter
Abstract: A classical twin study was performed to assess the relative contribution of genetic and environmental factors to bone metabolism, calcium homeostasis, and the hormones regulating them. It was further examined whether the genetic effect is menopause dependent. The subjects were 2136 adult twins (98.3% female): 384 monozygotic (MZ) and 684 dizygotic (DZ) twin pairs. The intraclass correlations were calculated, and maximum likelihood model fitting was used to estimate genetic and environmental variance components. The intraclass correlations for all of the variables assessed were higher in MZ twin pairs. The heritabilities (95% CIs) obtained from model fitting for the hormones regulating bone metabolism and calcium homeostasis were: parathyroid hormone (PTH), 60% (54–65%); 25-hydroxyvitamin D [25(OH)D], 43% (28–57%); 1,25-dihydroxyvitamin D [1,25(OH)2D], 65% (53–74%); and vitamin D binding protein, 62% (56–66%).
The heritabilities (95% CIs) for markers of bone formation were also assessed: bone-specific alkaline phosphatase (BSAP), 74% (67–80%), and osteocalcin, 29% (14–44%); for the marker of bone resorption, deoxypyridinoline (DPD), 58% (52–64%); and for the measure of calcium homeostasis, 24-h urinary calcium/creatinine (Cr), 52% (41–61%). The magnitude of genetic influence differed with menopause for most variables. This study provides evidence for the importance of genetic factors in determining bone resorption and formation, calcium excretion, and the hormones regulating these processes. It shows for the first time a clear genetic effect on bone resorption in premenopausal women and on the regulation of PTH, vitamin D metabolism, and calcium excretion. The genes controlling bone hormones and markers are likely to be useful therapeutic and diagnostic targets. [source]

Deterministic fallacies and model validation
JOURNAL OF CHEMOMETRICS, Issue 3-4 2010
Douglas M. Hawkins
Abstract: Stochastic settings differ from deterministic ones in many subtle ways, making it easy to slip into errors through applying deterministic thinking inappropriately. We suspect this is the cause of much of the disagreement about model validation. A further technical issue is a common misapplication of cross-validation, in which it is applied only partially, leading to incorrect results. Statistical theory and empirical investigation verify the efficacy of cross-validation when it is applied correctly. In settings where data are relatively scarce, cross-validation is attractive in that it makes the maximum possible use of all available information, at the cost of potentially substantial computation. The bootstrap is another method that makes full use of all available data for both model fitting and model validation, at the cost of substantially increased computation, and it shares much of the broad philosophical background of cross-validation. Increasingly, the computational cost of these methods is not a major concern, leading to the recommendation, in most circumstances, to use cross-validation or bootstrapping rather than the earlier standard method of splitting the available data into a learning portion and a testing portion. Copyright © 2010 John Wiley & Sons, Ltd. [source]

Dynamic Predictive Model for Growth of Salmonella Enteritidis in Egg Yolk
JOURNAL OF FOOD SCIENCE, Issue 7 2007
V. Gumudavelli
ABSTRACT: Salmonella Enteritidis (SE) contamination of poultry eggs is a major human health concern worldwide. The risk of SE from shell eggs can be significantly reduced through rapid cooling of eggs after they are laid and their storage under safe temperature conditions. Predictive models for the growth of SE in egg yolk under varying (dynamic) ambient temperature conditions were developed. The growth of SE in egg yolk was determined under several isothermal conditions (10, 15, 20, 25, 30, 35, 37, 39, 41, and 43 °C). The Baranyi model, a primary model, was fitted to the growth data for each temperature, and the corresponding maximum specific growth rates were estimated. Root mean squared error (RMSE) values were less than 0.44 log10 CFU/g and pseudo-R2 values were greater than 0.98 for the primary model fitting. For the secondary model, the estimated maximum specific growth rates were modeled as a function of temperature using the modified Ratkowsky equation. The RMSE and pseudo-R2 of this fit were 0.05/h and 0.99, respectively.
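The secondary-model step just described can be sketched as follows, assuming the common full (modified) Ratkowsky form sqrt(mu_max) = b(T − Tmin)(1 − exp(c(T − Tmax))). The growth-rate values and starting guesses below are illustrative stand-ins, not the paper's estimates.

```python
# Hedged sketch: fitting a modified Ratkowsky secondary model to
# maximum specific growth rates estimated at isothermal temperatures.
import numpy as np
from scipy.optimize import curve_fit

def ratkowsky(T, b, Tmin, c, Tmax):
    # squared because the Ratkowsky equation models sqrt(mu_max)
    return (b * (T - Tmin) * (1.0 - np.exp(c * (T - Tmax)))) ** 2

T = np.array([10, 15, 20, 25, 30, 35, 37, 39, 41, 43], dtype=float)      # deg C
mu = np.array([0.02, 0.08, 0.18, 0.35, 0.60, 0.85, 0.95, 1.0, 0.9, 0.5])  # 1/h, illustrative

p, _ = curve_fit(ratkowsky, T, mu, p0=[0.04, 5.0, 0.3, 47.0], maxfev=10000)
print("b, Tmin, c, Tmax =", p)
```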
A dynamic model was developed by integrating the primary and secondary models and solving the system numerically with the fourth-order Runge–Kutta method to predict the growth of SE in egg yolk under varying temperature conditions. The integrated dynamic model was then validated against four time-varying temperature profiles: linear heating, exponential heating, exponential cooling, and sinusoidal temperature. The predicted values agreed well with the observed growth data, with RMSE values less than 0.29 log10 CFU/g. The developed dynamic model can predict the growth of SE in egg yolk under varying temperature profiles. [source]

Solid lipid microparticles produced by spray congealing: Influence of the atomizer on microparticle characteristics and mathematical modeling of the drug release
JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 2 2010
Nadia Passerini
Abstract: The first aim of this work was to evaluate the effect of atomizer design on the properties of solid lipid microparticles produced by spray congealing. Two different air atomizers were employed: a conventional air pressure nozzle (APN) and a recently developed atomizer (wide pneumatic nozzle, WPN). Milled theophylline and Compritol® 888 ATO were used to produce microparticles at drug-to-carrier ratios of 10:90, 20:80, and 30:70 using the two atomizers. The results showed that the choice of nozzle had significant impacts on the morphology, encapsulation efficiency, and drug release behavior of the microparticles. In contrast, the characteristics of the atomizer did not influence the physicochemical properties of the microparticles, as differential scanning calorimetry, hot-stage microscopy, X-ray powder diffraction, and Fourier transform infrared spectroscopy analyses demonstrated; the drug and the lipid carrier were present in their original crystalline forms in both WPN and APN systems. A second objective of this study was to develop a novel mathematical model for describing the dynamic process of drug release from the solid lipid microparticles. For WPN microparticles the model predicted the changes in drug release behavior with particle size and drug loading, while for APN microparticles the model fitting was not as good as for the WPN systems, confirming the influence of the atomizer on the drug release behavior. © 2009 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 99:916–931, 2010 [source]

Pharmacodynamic interactions between recombinant mouse interleukin-10 and prednisolone using a mouse endotoxemia model
JOURNAL OF PHARMACEUTICAL SCIENCES, Issue 3 2005
Abhijit Chakraborty
Abstract: The pharmacodynamic interactions between recombinant mouse interleukin-10 (IL-10) and prednisolone were examined in lipopolysaccharide (LPS)-induced experimental endotoxemia in Balb/c mice. Treatment phases consisted of single doses of IL-10 (10 µg/kg i.p.), prednisolone (25 mg/kg i.p.), IL-10 (2.5 µg/kg i.p.) with prednisolone (6.25 mg/kg i.p.), or placebo (saline). Measurements included plasma steroid kinetics and IL-10 concentrations, and responses to LPS including the proinflammatory cytokines (TNF-α, IFN-γ) and circulating NO measured as plasma nitrate/nitrite concentrations. The intraperitoneal dosing of LPS produced large and transient elevations of plasma TNF-α, IFN-γ, and NO concentrations. Noncompartmental analysis and model fitting, using extended indirect response models based on drug inhibition of the multiphase stimulation of biomarkers by LPS, were used to describe the in vivo pharmacodynamics and drug interactions.
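The indirect-response building block behind such models can be sketched as a single ODE in which drug concentration inhibits an LPS-driven stimulation of biomarker production. Everything numeric below (PK parameters, the transient stimulation function, IC50) is an illustrative assumption, not the paper's fitted model.

```python
# Hedged sketch: indirect-response model with inhibitory drug effect,
# dR/dt = kin * S(t) * (1 - C/(IC50 + C)) - kout * R.
import numpy as np
from scipy.integrate import odeint

def biomarker(R, t, kin, kout, ic50, c0, kel, stim):
    C = c0 * np.exp(-kel * t)                    # mono-exponential drug PK
    S = 1.0 + stim * np.exp(-((t - 2.0) ** 2))   # transient LPS stimulation
    inhib = 1.0 - C / (ic50 + C)                 # inhibition term (Imax = 1)
    return kin * S * inhib - kout * R

t = np.linspace(0, 12, 121)                      # hours
R0 = 1.0                                          # baseline = kin/kout
R = odeint(biomarker, R0, t, args=(1.0, 1.0, 100.0, 500.0, 0.7, 50.0))
print("peak biomarker response:", R.max())
```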
Dosing with prednisolone, IL-10, or their combination produced strong inhibition of cytokine and NO production. The IC50 values of prednisolone ranged from 54 to 171 ng/mL, and the IC50 values for IL-10 ranged from 0.06 to 0.69 ng/mL. The production of NO was described as a cascading consequence of the TNF-α and IFN-γ plasma concentrations. The joint dosing of IL-10 with prednisolone produced moderately synergistic immunosuppressive effects in this system. Both drugs were sufficiently protective in suppressing the inflammatory mediators when administered prior to the LPS trigger, while such effects were modest when administered after the inflammatory stimulus was provoked. The integrated pharmacokinetic/pharmacodynamic models capture well the in vivo processes, drug potencies, and interactions of IL-10 and prednisolone. © 2005 Wiley-Liss, Inc. and the American Pharmacists Association J Pharm Sci 94:590–603, 2005 [source]

DISCUSSION II ON FITTING EQUATIONS TO SENSORY DATA
JOURNAL OF SENSORY STUDIES, Issue 1 2000
STEVEN M. SEIFERHELD
ABSTRACT: In his article "On Fitting Equations to Sensory Data," Moskowitz suggests many strategies for model fitting that depart from current statistical methodology. Four of the areas discussed by Moskowitz are addressed here: (1) forcing terms into a model; (2) the use of hold-out samples; (3) the use of aggregate data (averaging across people, suppressing the person-to-person variation); and (4) the use of random data as a predictor variable in a regression equation. All four of these points are examined within this article. [source]

Fast stable direct fitting and smoothness selection for generalized additive models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 3 2008
Simon N. Wood
Summary: Existing computationally efficient methods for penalized likelihood generalized additive model fitting employ iterative smoothness selection on working linear models (or working mixed models). Such schemes fail to converge for a non-negligible proportion of models, with failure being particularly frequent in the presence of concurvity. If smoothness selection is performed by optimizing 'whole model' criteria these problems disappear, but until now attempts to do this have employed finite-difference-based optimization schemes which are computationally inefficient and can suffer from false convergence. The paper develops the first computationally efficient method for direct generalized additive model smoothness selection. It is highly stable, but by careful structuring achieves a computational efficiency that leads, in simulations, to lower mean computation times than the schemes that are based on working model smoothness selection. The method also offers a reliable way of fitting generalized additive mixed models. [source]

Wavelet-based functional mixed models
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2006
Jeffrey S. Morris
Summary: Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed-effects structures and between-curve covariance structures that are available in the mixed model framework.
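The wavelet-based approach rests on projecting each observed curve into the wavelet domain, working with (and shrinking) the coefficients there, and transforming back. A minimal sketch of that transform-shrink-reconstruct step using PyWavelets follows; the paper's own implementation is not reproduced, and the wavelet, decomposition level, and threshold are illustrative assumptions.

```python
# Hedged sketch: wavelet decomposition of gridded functional data,
# soft-thresholding of detail coefficients, and reconstruction.
import numpy as np
import pywt

x = np.linspace(0, 1, 256)
curves = [np.sin(6 * np.pi * x)
          + 0.2 * np.random.default_rng(i).normal(size=x.size)
          for i in range(5)]                      # toy functional data

# decompose each curve; model fitting would then act on these coefficients
coeffs = [pywt.wavedec(y, "db4", level=4) for y in curves]

# shrink (regularize) the detail coefficients of one curve and reconstruct
thr = [coeffs[0][0]] + [pywt.threshold(c, value=0.1, mode="soft")
                        for c in coeffs[0][1:]]
smooth = pywt.waverec(thr, "db4")
print(smooth.shape)
```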
It yields nonparametric estimates of the fixed- and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior imposed on the fixed effects' wavelet coefficients, and the random-effects functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on any quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. [source]

High-resolution observations of SN 2001gd in NGC 5033
MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2005
M. A. Pérez-Torres
ABSTRACT: We report on 8.4-GHz very-long-baseline interferometry (VLBI) observations of SN 2001gd in the spiral galaxy NGC 5033, made on 2002 June 26 (2002.48) and 2003 April 8 (2003.27). We used the interferometric visibility data to estimate angular diameter sizes for the supernova by model fitting. Our data nominally suggest a relatively strong deceleration of the expansion of SN 2001gd, but we cannot dismiss the possibility of free supernova expansion. From our VLBI observations on 2003 April 8, we inferred a minimum total energy in relativistic particles and magnetic fields in the supernova shell of E_min = (0.3–14) × 10^47 erg, and a corresponding equipartition average magnetic field of B_min = 50–350 mG. We also present multiwavelength Very Large Array (VLA) measurements of SN 2001gd made at our second VLBI epoch at frequencies of 1.4, 4.9, 8.4, 15.0, 22.5 and 43.3 GHz. The VLA data are well fitted by an optically thin synchrotron spectrum (spectral index α = −1.0 ± 0.1; S_ν ∝ ν^α), partially absorbed by thermal plasma. We obtain a supernova flux density of 1.02 ± 0.05 mJy at the observing frequency of 8.4 GHz for the second epoch, which results in an isotropic radio luminosity of (6.0 ± 0.3) × 10^36 erg s^-1 between 1.4 and 43.3 GHz, at an adopted distance of 13.1 Mpc. Finally, we report an XMM–Newton X-ray detection of SN 2001gd on 2002 December 18. The supernova X-ray spectrum is consistent with optically thin emission from a soft component (associated with emission from the reverse shock) at a temperature of around 1 keV. The observed flux corresponds to an isotropic X-ray luminosity of L_X = (1.4 ± 0.4) × 10^39 erg s^-1 in the 0.3–5 keV band. We suggest that both the radio and X-ray observations of SN 2001gd indicate that a circumstellar interaction similar to that displayed by SN 1993J in M 81 is taking place. [source]

Twenty-five pitfalls in the analysis of diffusion MRI data
NMR IN BIOMEDICINE, Issue 7 2010
Derek K. Jones
Abstract: Obtaining reliable data and drawing meaningful and robust inferences from diffusion MRI can be challenging and is subject to many pitfalls. The process of quantifying diffusion indices and eventually comparing them between groups of subjects and/or correlating them with other parameters starts at the acquisition of the raw data and is followed by a long pipeline of image-processing steps. Each one of these steps is susceptible to sources of bias, which may not only limit accuracy and precision, but can lead to substantial errors. This article provides a detailed review of the steps along the analysis pipeline and their associated pitfalls.
These are grouped into: (1) pre-processing of the data; (2) estimation of the tensor; (3) derivation of voxelwise quantitative parameters; (4) strategies for extracting quantitative parameters; and finally (5) intra-subject and inter-subject comparison, including region-of-interest, histogram, tract-specific and voxel-based analyses. The article covers important aspects of diffusion MRI analysis, such as motion correction, susceptibility and eddy-current distortion correction, model fitting, region-of-interest placement, and histogram and voxel-based analysis. We have assembled 25 pitfalls (several previously unreported) into a single article, which should serve as a useful reference for those embarking on new diffusion MRI-based studies, and as a check for those who may already be running studies but may have overlooked some important confounds. While some of these problems are well known to diffusion experts, they might not be to other researchers wishing to undertake a clinical study based on diffusion MRI. Copyright © 2010 John Wiley & Sons, Ltd. [source]

A mathematical and statistical framework for modelling dispersal
OIKOS, Issue 6 2007
Tord Snäll
Mechanistic and phenomenological dispersal modelling of organisms has long been an area of intensive research. Recently, there has been increased interest in intermediate models between the two. Intermediate models include the major mechanisms that affect dispersal, in addition to the dispersal curve of a phenomenological model. Here we review and describe the mathematical and statistical framework for phenomenological dispersal modelling. In the mathematical development we describe the modelling of dispersal in two dimensions from a point source, and in one dimension from a line or area source. In the statistical development we describe the applicable observation distributions, and the procedures of model fitting, comparison, checking, and prediction. The procedures are also demonstrated using data from dispersal experiments. The data are hierarchically structured, and hence we fit hierarchical models. The Bayesian modelling approach is applied, which allows us to show the uncertainty in the parameter estimates and in the predictions. Finally, we show how to account for the effect of wind speed on the estimates of the dispersal parameters. This serves as an example of how to strengthen the coupling in the modelling between the phenomenon observed in an experiment and the underlying process, something that should be striven for in the statistical modelling of dispersal. [source]

A quantitative genetic study of cephalometric variables in twins
ORTHODONTICS & CRANIOFACIAL RESEARCH, Issue 3 2001
C. Carels
This study aimed at determining the relative genetic and environmental impact on a number of well-known cephalometric variables in twins, in order to find a clue to the heritability pattern of some dentofacial characteristics and to the expected limits of the therapeutic impact on the dentofacial subparts they represent. Cephalograms were collected from 33 monozygotic and 46 dizygotic twin pairs, none of whom had undergone any orthodontic treatment. Nineteen linear and four angular variables were selected, each representing a different, definite subpart of the dentofacial complex. The reproducibility of the measurement of most of the linear variables was very high. A genetic analysis using model fitting and path analysis was carried out. First, the data were checked on the fulfilment of the conditions for genetic analysis in twins reared together.
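The full analysis fits variance-component models, but a quick back-of-envelope check on twin data of this kind uses Falconer's formula, h² ≈ 2(r_MZ − r_DZ), computed from the intraclass correlations. The sketch below uses simulated pairs (sized like the 33 MZ and 46 DZ pairs above) purely to illustrate the computation; it is not the study's path analysis.

```python
# Hedged sketch: intraclass correlations and Falconer's heritability
# estimate from simulated MZ/DZ twin-pair data.
import numpy as np

def intraclass_r(pairs):
    # simple double-entry estimate of the intraclass correlation
    a = np.concatenate([pairs[:, 0], pairs[:, 1]])
    b = np.concatenate([pairs[:, 1], pairs[:, 0]])
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(1)
mz = rng.multivariate_normal([0, 0], [[1, 0.75], [0.75, 1]], size=33)
dz = rng.multivariate_normal([0, 0], [[1, 0.45], [0.45, 1]], size=46)

r_mz, r_dz = intraclass_r(mz), intraclass_r(dz)
print(f"r_MZ={r_mz:.2f}  r_DZ={r_dz:.2f}  h2~{2 * (r_mz - r_dz):.2f}")
```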
The results show that the genetic determination is significantly higher for the vertical (72%) than for the horizontal (61%) variables. As far as the genetic component is concerned, all of the selected variables appear to be inherited through additive genes, except for mandibular body length, which was determined by dominant alleles. Sex differences in genetic determination were found for anterior face height, with a significantly higher genetic component for boys (91%) than for girls (68%). For the angular measurements, no genetic influence was found: only environmental influences common to both members of each pair could be demonstrated. [source]

Statistical models of shape for the analysis of protein spots in two-dimensional electrophoresis gel images
PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 6 2003
Mike Rogers
Abstract: In image analysis of two-dimensional electrophoresis gels, individual spots need to be identified and quantified. Two classes of algorithms are commonly applied to this task. Parametric methods rely on a model and make strong assumptions about spot appearance, but are often insufficiently flexible to adequately represent all spots that may be present in a gel. Nonparametric methods make no assumptions about spot appearance and consequently impose few constraints on spot detection, allowing more flexibility but reducing robustness when the image data are complex. We describe a parametric representation of spot shape that is both general enough to represent unusual spots and specific enough to introduce constraints on the interpretation of complex images. Our method uses a model of shape based on the statistics of an annotated training set. The model allows new spot shapes, belonging to the same statistical distribution as the training set, to be generated. To represent spot appearance we use the statistically derived shape convolved with a Gaussian kernel, simulating the diffusion process in spot formation. We show that the statistical model of spot appearance and shape is able to fit the image data more closely than the commonly used spot parameterizations based solely on Gaussian and diffusion models, and that these improvements in model fitting are gained without degrading the specificity of the representation. [source]

The validity of analyses testing the etiology of comorbidity between two disorders: a review of family studies
THE JOURNAL OF CHILD PSYCHOLOGY AND PSYCHIATRY AND ALLIED DISCIPLINES, Issue 4 2003
Soo Hyun Rhee
Background: Knowledge regarding the causes of comorbidity between two disorders has a significant impact on research regarding the classification, treatment, and etiology of the disorders. Two main analytic methods have been used to test alternative explanations for the causes of comorbidity in family studies: biometric model fitting and family prevalence analyses. Unfortunately, the conclusions of family studies using these two methods have been conflicting. In the present study, we examined the validity of family prevalence analyses in testing alternative comorbidity models.
Method: We reviewed 42 family studies that used family prevalence analyses to test three comorbidity models: the alternate forms model, the correlated liabilities model, or the three independent disorders model. We conducted the analyses used in these studies on datasets simulated under the assumptions of 13 alternative comorbidity models, including the three models tested most often in the literature.
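One simple way such simulations can operationalize the alternate forms model is a single normal liability with a threshold, above which each form manifests at random. The thresholds and manifestation probabilities below are illustrative assumptions of this sketch, not the review's simulation design.

```python
# Hedged sketch: simulating comorbidity under an alternate-forms model
# (one shared liability; forms A and B manifest above the threshold).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
liability = rng.normal(size=n)
affected = liability > 1.28                 # ~10% population prevalence

# above threshold, each form manifests independently at random
A = affected & (rng.random(n) < 0.6)
B = affected & (rng.random(n) < 0.5)

pA, pB, pAB = A.mean(), B.mean(), (A & B).mean()
print(f"P(A)={pA:.3f}  P(B)={pB:.3f}  P(A&B)={pAB:.3f}  "
      f"expected if independent={pA * pB:.3f}")
```

Because both forms trace back to one liability, the simulated comorbidity rate P(A&B) exceeds the product P(A)P(B) expected under independence.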
Results: The results suggest that some of the analyses may be valid tests of the alternate forms model (i.e., that the two disorders are alternate manifestations of a single liability), but that none of the analyses are valid tests of the correlated liabilities model (i.e., a significant correlation between the risk factors for the two disorders) or the three independent disorders model (i.e., the comorbid disorder is a third, independent disorder).
Conclusion: Family studies using family prevalence analyses may have drawn incorrect conclusions regarding the etiology of comorbidity between disorders. [source]

Efficiency measure, modelling and estimation in combined array designs
APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 4 2003
Tak Mak
Abstract: In off-line quality control, the settings that minimize the variance of a quality characteristic are unknown and must be determined based on an estimated dual response model of mean and variance. The present paper proposes a direct measure of the efficiency of any given design-estimation procedure for variance minimization. This not only facilitates the comparison of different design-estimation procedures, but may also provide a guideline for choosing a better solution when the estimated dual response model suggests multiple solutions. Motivated by the analysis of an industrial experiment on spray painting, the present paper also applies a class of link functions to model the process variances in off-line quality control. For model fitting, a parametric distribution is employed in updating the variance estimates used in an iteratively weighted least-squares procedure for mean estimation. In analysing combined array experiments, Engel and Huele (Technometrics, 1996; 39:365) used a log-link to model the process variances and considered an iteratively weighted least-squares procedure leading to the pseudo-likelihood estimates of the variances, as discussed in Carroll and Ruppert (Transformation and Weighting in Regression, Chapman & Hall: New York). Their method is a special case of the approach considered in this paper. It is seen for the spray paint data that the log-link may not be satisfactory, and that the class of link functions considered here substantially improves the fit to the process variances. This conclusion is reached with a suggested method of comparing 'empirical variances' with the 'theoretical variances' based on the assumed model. Copyright © 2003 John Wiley & Sons, Ltd. [source]

Hierarchical Bayesian Analysis of Correlated Zero-inflated Count Data
BIOMETRICAL JOURNAL, Issue 6 2004
Getachew A. Dagne
Abstract: This article presents two-component hierarchical Bayesian models which incorporate both overdispersion and excess zeros. The components may be resultants of some intervention (treatment) that changes the rare-event-generating process. The models are also expanded to take into account any heterogeneity that may exist in the data. Details of model fitting, model checking, and the selection among alternative models from a Bayesian perspective are also presented. The proposed methods are applied to count data on the assessment of the efficacy of pesticides in controlling the reproduction of whitefly. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

Testing Marginal Homogeneity Against Stochastic Order in Multivariate Ordinal Data
BIOMETRICS, Issue 2 2009
B. Klingenberg
Summary: Many assessment instruments used in the evaluation of toxicity, safety, pain, or disease progression consider multiple ordinal endpoints in order to fully capture the presence and severity of treatment effects.
Contingency tables underlying these correlated responses are often sparse and imbalanced, rendering asymptotic results unreliable or model fitting prohibitively complex without overly simplistic assumptions on the marginal and joint distributions. Instead of a modeling approach, we look at stochastic order and marginal inhomogeneity as an expression or manifestation of a treatment effect under much weaker assumptions. Often, endpoints are grouped together into physiological domains or by the body function they describe. We derive tests based on these subgroups, which might supplement or replace the individual endpoint analysis because they are more powerful. The permutation or bootstrap distribution is used throughout to obtain global, subgroup, and individual significance levels, as these distributions naturally incorporate the correlation among endpoints. We provide a theorem that establishes a connection between marginal homogeneity and the stronger exchangeability assumption under the permutation approach. Multiplicity adjustments for the individual endpoints are obtained via stepdown procedures, while subgroup significance levels are adjusted via the full closed testing procedure. The proposed methodology is illustrated using a collection of 25 correlated ordinal endpoints, grouped into six domains, to evaluate the toxicity of a chemical compound. [source]

Estimation of Rates of Births, Deaths, and Immigration from Mark–Recapture Data
BIOMETRICS, Issue 1 2009
R. B. O'Hara
Summary: The analysis of mark–recapture data is undergoing a period of development and expansion. Here we contribute to that development by presenting a model which includes births and immigration, as well as the usual deaths. The data come from a long-term study of the willow tit (Parus montanus), in which we can assume that all births are recorded, and hence immigrants can be identified as birds first captured as adults. We model the rates of immigration, the birth rate per parent, and the death rates of juveniles and adults. Using a hierarchical model allows us to incorporate annual variation in these parameters. The model is fitted to the data using Markov chain Monte Carlo, as a Bayesian analysis. In addition to the model fitting, we also check several aspects of the model fit, in particular whether survival varies with age or immigrant status, and whether the capture probability is affected by previous capture history. The latter check is important, as independence of capture histories is a key assumption that simplifies the model considerably. Here we find that the capture probability depends strongly on whether the individual was captured in the previous year. [source]

Reparameterizing the Pattern Mixture Model for Sensitivity Analyses Under Informative Dropout
BIOMETRICS, Issue 4 2000
Michael J. Daniels
Summary: Pattern mixture models are frequently used to analyze longitudinal data where missingness is induced by dropout. For measured responses, it is typical to model the complete data as a mixture of multivariate normal distributions, where the mixing is done over the dropout distribution. Fully parameterized pattern mixture models are not identified by incomplete data; Little (1993, Journal of the American Statistical Association 88, 125–134) has characterized several identifying restrictions that can be used for model fitting.
We propose a reparameterization of the pattern mixture model that allows investigation of sensitivity to assumptions about the nonidentified parameters in both the mean and the variance, allows consideration of a wide range of nonignorable missing-data mechanisms, and has intuitive appeal for eliciting plausible missing-data mechanisms. The parameterization makes clear an advantage of pattern mixture models over parametric selection models, namely that the missing-data mechanism can be varied without affecting the marginal distribution of the observed data. To illustrate the utility of the new parameterization, we analyze data from a recent clinical trial of growth hormone for maintaining muscle strength in the elderly. Dropout occurs at a high rate and is potentially informative. We undertake a detailed sensitivity analysis to understand the impact of the missing-data assumptions on the inference about the effects of growth hormone on muscle strength. [source]
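The pattern-mixture idea behind such sensitivity analyses can be sketched numerically: observed data identify pattern-specific means only up to dropout, so the unidentified post-dropout means must be filled in under some restriction, and the marginal mean is the mixture of pattern means weighted by pattern probabilities. The pattern probabilities, means, and the last-value-plus-delta restriction below are illustrative assumptions, not the paper's reparameterization.

```python
# Hedged sketch: marginal mean in a pattern mixture model, with a
# delta-shift sensitivity parameter for the unidentified means.
import numpy as np

pi = np.array([0.5, 0.3, 0.2])            # P(dropout pattern k), assumed
# rows: pattern-specific means at times t1..t3 (NaN = unobserved)
mu = np.array([[10.0, 11.0, 12.0],        # completers
               [10.2, 11.4, np.nan],      # drop out after t2
               [10.5, np.nan, np.nan]])   # drop out after t1

def marginal_mean(mu, pi, delta):
    filled = mu.copy()
    for k in range(1, filled.shape[0]):
        obs = ~np.isnan(filled[k])
        last = filled[k][obs][-1]
        # identifying restriction: carry the last observed mean forward,
        # shifted by delta (varying delta probes nonignorable dropout)
        filled[k][~obs] = last + delta
    return pi @ filled

for delta in (-1.0, 0.0, 1.0):
    print(delta, marginal_mean(mu, pi, delta))
```

Varying delta changes the inference without touching the fit to the observed data, which is exactly the advantage over selection models that the abstract highlights.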