Bayesian Inference



Selected Abstracts


EASY AND FLEXIBLE BAYESIAN INFERENCE OF QUANTITATIVE GENETIC PARAMETERS

EVOLUTION, Issue 6 2009
Patrik Waldmann
There has been a tremendous advancement of Bayesian methodology in quantitative genetics and evolutionary biology. Still, there are relatively few publications that apply this methodology, probably because the availability of multipurpose and user-friendly software is somewhat limited. Here it is described how only a few lines of code in the well-developed and very flexible Bayesian software WinBUGS (Lunn et al. 2000) can be used for inference of the additive polygenic variance and heritability in pedigrees of general design. The presented code is illustrated by application to a previously published dataset of Scots pine. [source]
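The WinBUGS code itself is not reproduced in the abstract, but the model it implements is the standard additive polygenic ("animal") model, y = mu + a + e with a ~ N(0, sigma2_A * A) for a pedigree relationship matrix A, and heritability h2 = sigma2_A / (sigma2_A + sigma2_E). The sketch below is a hypothetical Python analogue, not the authors' WinBUGS program: a random-walk Metropolis sampler on the variance components for a toy full-sib pedigree, with vague priors, purely to illustrate the inference target.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Toy pedigree: 60 full-sib families of size 2; additive relationship within a family is 0.5,
# so the numerator relationship matrix A is block diagonal.
n_fam = 60
A = np.kron(np.eye(n_fam), np.array([[1.0, 0.5], [0.5, 1.0]]))
n = 2 * n_fam

# Simulate phenotypes from the animal model with breeding values integrated out:
# y ~ N(mu * 1, sigma2_a * A + sigma2_e * I); true h2 = 2 / (2 + 3) = 0.4
true_mu, true_s2a, true_s2e = 10.0, 2.0, 3.0
y = rng.multivariate_normal(np.full(n, true_mu), true_s2a * A + true_s2e * np.eye(n))

def log_post(mu, log_s2a, log_s2e):
    """Marginal log-posterior under flat (vague) priors on mu and the log-variances."""
    s2a, s2e = np.exp(log_s2a), np.exp(log_s2e)
    return multivariate_normal.logpdf(y, mean=np.full(n, mu), cov=s2a * A + s2e * np.eye(n))

# Random-walk Metropolis over (mu, log sigma2_a, log sigma2_e)
theta = np.array([y.mean(), 0.0, 0.0])
cur = log_post(*theta)
draws = []
for _ in range(3000):
    prop = theta + rng.normal(scale=[0.1, 0.15, 0.15])
    cand = log_post(*prop)
    if np.log(rng.uniform()) < cand - cur:
        theta, cur = prop, cand
    draws.append(theta.copy())

draws = np.array(draws[500:])                       # discard burn-in
s2a, s2e = np.exp(draws[:, 1]), np.exp(draws[:, 2])
h2 = s2a / (s2a + s2e)
print("posterior mean h2:", round(h2.mean(), 3), "95% interval:", np.percentile(h2, [2.5, 97.5]))
```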


Semiparametric Bayesian Inference in Autoregressive Panel Data Models

ECONOMETRICA, Issue 2 2002
Keisuke Hirano
First page of article [source]


Bayesian Inference in Semiparametric Mixed Models for Longitudinal Data

BIOMETRICS, Issue 1 2010
Yisheng Li
Summary We consider Bayesian inference in semiparametric mixed models (SPMMs) for longitudinal data. SPMMs are a class of models that use a nonparametric function to model a time effect, a parametric function to model other covariate effects, and parametric or nonparametric random effects to account for the within-subject correlation. We model the nonparametric function using a Bayesian formulation of a cubic smoothing spline, and the random effect distribution using a normal distribution and alternatively a nonparametric Dirichlet process (DP) prior. When the random effect distribution is assumed to be normal, we propose a uniform shrinkage prior (USP) for the variance components and the smoothing parameter. When the random effect distribution is modeled nonparametrically, we use a DP prior with a normal base measure and propose a USP for the hyperparameters of the DP base measure. We argue that the commonly assumed DP prior implies a nonzero mean of the random effect distribution, even when a base measure with mean zero is specified. This implies weak identifiability for the fixed effects, and can therefore lead to biased estimators and poor inference for the regression coefficients and the spline estimator of the nonparametric function. We propose an adjustment using a postprocessing technique. We show that under mild conditions the posterior is proper under the proposed USP, a flat prior for the fixed effect parameters, and an improper prior for the residual variance. We illustrate the proposed approach using a longitudinal hormone dataset, and carry out extensive simulation studies to compare its finite sample performance with existing methods. [source]
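One of the paper's points, that a DP prior with a zero-mean base measure still implies a random-effect distribution whose realized mean is nonzero, can be seen directly from the truncated stick-breaking construction. The short sketch below is an illustrative check only; the concentration parameter, base-measure scale, and truncation level are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def draw_dp(alpha=1.0, base_sd=1.0, K=200):
    """One draw G ~ DP(alpha, N(0, base_sd^2)) via truncated stick-breaking."""
    v = rng.beta(1.0, alpha, size=K)                              # stick-breaking proportions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))     # weights w_k = v_k * prod_{j<k}(1 - v_j)
    atoms = rng.normal(0.0, base_sd, size=K)                      # atoms from the zero-mean base measure
    return w, atoms

means = []
for _ in range(2000):
    w, atoms = draw_dp()
    means.append(np.sum(w * atoms))        # mean of the realized random-effect distribution G

means = np.array(means)
print("average |mean of G| across draws:", round(np.abs(means).mean(), 3))
print("P(|mean of G| > 0.25):", round(np.mean(np.abs(means) > 0.25), 3))
```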


Bayesian Inference for Smoking Cessation with a Latent Cure State

BIOMETRICS, Issue 3 2009
Sheng Luo
Summary We present a Bayesian approach to modeling dynamic smoking addiction behavior processes when cure is not directly observed due to censoring. Subject-specific probabilities model the stochastic transitions among three behavioral states: smoking, transient quitting, and permanent quitting (absorbing state). A multivariate normal distribution for random effects is used to account for the potential correlation among the subject-specific transition probabilities. Inference is conducted using a Bayesian framework via Markov chain Monte Carlo simulation. This framework provides various measures of subject-specific predictions, which are useful for policy-making, intervention development, and evaluation. Simulations are used to validate our Bayesian methodology and assess its frequentist properties. Our methods are motivated by, and applied to, the Alpha-Tocopherol, Beta-Carotene Lung Cancer Prevention study, a large (29,133 individuals) longitudinal cohort study of smokers from Finland. [source]
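As a concrete picture of the data-generating process described here, the sketch below simulates subject-specific transitions among smoking, transient quitting, and permanent quitting (an absorbing state), with correlated multivariate-normal random effects on the logit scale. All intercepts, covariance values, and visit counts are hypothetical placeholders, not fitted quantities from the ATBC analysis.

```python
import numpy as np

rng = np.random.default_rng(42)
SMOKE, QUIT, PERM = 0, 1, 2                     # states; PERM is absorbing

def subject_transition_matrix(b):
    """3x3 transition matrix for one subject; b = (b_sq, b_qs, b_qp) are logit-scale random effects."""
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    p_sq = expit(-1.0 + b[0])                   # smoking -> transient quit
    p_qs = expit(-0.5 + b[1])                   # transient quit -> relapse to smoking
    p_qp = expit(-2.0 + b[2]) * (1.0 - p_qs)    # transient quit -> permanent quit (keeps rows valid)
    return np.array([
        [1.0 - p_sq, p_sq,              0.0 ],
        [p_qs,       1.0 - p_qs - p_qp, p_qp],
        [0.0,        0.0,               1.0 ],  # absorbing state
    ])

# Correlated subject-level random effects (multivariate normal, as in the abstract)
Sigma = 0.3 * np.eye(3) + 0.2
n_subj, n_visits = 500, 8
quit_by_end = 0
for _ in range(n_subj):
    b = rng.multivariate_normal(np.zeros(3), Sigma)
    P = subject_transition_matrix(b)
    state = SMOKE
    for _ in range(n_visits):
        state = rng.choice(3, p=P[state])
    quit_by_end += (state == PERM)

print("fraction permanently quit after", n_visits, "visits:", quit_by_end / n_subj)
```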


Bayesian Inference for Gene Expression and Proteomics by Do, K.-A., Müller, P., and Vannucci, M.

BIOMETRICS, Issue 2 2008
J. Sunil Rao
No abstract is available for this article. [source]


MLE and Bayesian Inference of Age-Dependent Sensitivity and Transition Probability in Periodic Screening

BIOMETRICS, Issue 4 2005
Dongfeng Wu
Summary This article extends previous probability models for periodic breast cancer screening examinations. The specific aim is to provide statistical inference for age dependence of sensitivity and the transition probability from the disease free to the preclinical state. The setting is a periodic screening program in which a cohort of initially asymptomatic women undergo a sequence of breast cancer screening exams. We use age as a covariate in the estimation of screening sensitivity and the transition probability simultaneously, both from a frequentist point of view and within a Bayesian framework. We apply our method to the Health Insurance Plan of Greater New York study of female breast cancer and give age-dependent sensitivity and transition probability density estimates. The inferential methodology we develop is also applicable when analyzing studies of modalities for early detection of other types of progressive chronic diseases. [source]


Bayesian Inference for Stochastic Kinetic Models Using a Diffusion Approximation

BIOMETRICS, Issue 3 2005
A. Golightly
Summary This article is concerned with the Bayesian estimation of stochastic rate constants in the context of dynamic models of intracellular processes. The underlying discrete stochastic kinetic model is replaced by a diffusion approximation (or stochastic differential equation approach) where a white noise term models stochastic behavior and the model is identified using equispaced time course data. The estimation framework involves the introduction of m − 1 latent data points between every pair of observations. MCMC methods are then used to sample the posterior distribution of the latent process and the model parameters. The methodology is applied to the estimation of parameters in a prokaryotic autoregulatory gene network. [source]
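To illustrate the kind of diffusion approximation involved, the sketch below simulates a chemical-Langevin-type SDE for a simple immigration-death system by Euler-Maruyama and then thins the fine path to equispaced observations; the fine-grid points between observations play the role of the latent data mentioned in the abstract. The reaction system and rate values are illustrative, not the prokaryotic autoregulatory network studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_cle(x0=10.0, k1=2.0, k2=0.1, T=50.0, dt=0.01):
    """Euler-Maruyama path of the diffusion approximation dX = (k1 - k2*X) dt + sqrt(k1 + k2*X) dW."""
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = k1 - k2 * x[i]
        diff = np.sqrt(max(k1 + k2 * x[i], 0.0))      # guard against negative variance near zero
        x[i + 1] = max(x[i] + drift * dt + diff * np.sqrt(dt) * rng.normal(), 0.0)
    return x

dt = 0.01
path = simulate_cle(dt=dt)

# Equispaced "observations" every Delta = 0.5 time units; the m - 1 fine-grid points in between
# correspond to the latent data imputed by the MCMC scheme described in the abstract.
Delta = 0.5
stride = int(Delta / dt)          # m = 50 sub-intervals per observation interval here
obs = path[::stride]
print("number of observations:", obs.size, " approximate stationary mean:", round(obs[20:].mean(), 2))
```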


Sensitivity to sampling in Bayesian word learning

DEVELOPMENTAL SCIENCE, Issue 3 2007
Fei Xu
We report a new study testing our proposal that word learning may be best explained as an approximate form of Bayesian inference (Xu & Tenenbaum, in press). Children are capable of learning word meanings across a wide range of communicative contexts. In different contexts, learners may encounter different sampling processes generating the examples of word–object pairings they observe. An ideal Bayesian word learner could take into account these differences in the sampling process and adjust his/her inferences about word meaning accordingly. We tested how children and adults learned words for novel object kinds in two sampling contexts, in which the objects to be labeled were sampled either by a knowledgeable teacher or by the learners themselves. Both adults and children generalized more conservatively in the former context; that is, they restricted the label to just those objects most similar to the labeled examples when the exemplars were chosen by a knowledgeable teacher, but not when chosen by the learners themselves. We discuss how this result follows naturally from a Bayesian analysis, but not from other statistical approaches such as associative word-learning models. [source]
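The Bayesian account being tested rests on the size principle: under "strong" sampling (a knowledgeable teacher draws examples from the word's true extension) the likelihood of n consistent examples is (1/|h|)^n, which favours narrow hypotheses, whereas learner-driven ("weak") sampling does not. A toy calculation, with made-up extension sizes:

```python
# Toy size-principle calculation: posterior over two nested hypotheses after n = 3 labeled examples
# that are consistent with both. Extension sizes are made-up illustrative numbers.
sizes = {"subordinate (e.g. dalmatians)": 10, "basic level (e.g. dogs)": 100}
prior = {h: 0.5 for h in sizes}
n = 3

def posterior(strong_sampling: bool):
    # Strong sampling: examples drawn from the hypothesis' extension -> likelihood (1/|h|)^n.
    # Weak sampling: examples chosen independently of the label -> likelihood constant in h.
    like = {h: (1.0 / s) ** n if strong_sampling else 1.0 for h, s in sizes.items()}
    unnorm = {h: prior[h] * like[h] for h in sizes}
    z = sum(unnorm.values())
    return {h: round(u / z, 3) for h, u in unnorm.items()}

print("teacher-sampled (strong):", posterior(True))   # mass concentrates on the narrow hypothesis
print("learner-sampled (weak):  ", posterior(False))  # stays at the prior -> broader generalization
```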


Identification of soil degradation during earthquake excitations by Bayesian inference

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 6 2003
Jianye Ching
Abstract A Bayesian inference approach is introduced to identify soil degradation behaviours at four downhole array sites. The inference is based on a parametric time-varying infinite impulse response filter model. The approach is shown to be adaptive to changes in the filter parameters and noise amplitudes. Four sites, including the Lotung (Taiwan), Chiba (Japan), Garner Valley (California), and Treasure Island (California) sites with downhole seismic arrays are analysed. Our results show two major types of soil degradation behaviour: the well-known strain-dependent softening, and a reduction in stiffness that is not instantaneously recoverable. It is also found that both types of soil degradation are more pronounced in sandy soils than in clayey soils. The mechanism for the second type of soil degradation is not yet clear to the authors and is suggested for further study. Copyright © 2003 John Wiley & Sons, Ltd. [source]


Radial resolving power of far-field differential sea-level highstands in the inference of mantle viscosity

GEOPHYSICAL JOURNAL INTERNATIONAL, Issue 2 2007
Roblyn A. Kendall
SUMMARY For two decades leading to the late 1980s, the prevailing view from studies of glacial isostatic adjustment (GIA) data was that the viscosity of the Earth's mantle increased moderately, if at all, from the base of the lithosphere to the core–mantle boundary. This view was first questioned by Nakada & Lambeck, who argued that differential sea-level (DSL) highstands between pairs of sites in the Australian region preferred an increase of approximately two orders of magnitude from the mean viscosity of the upper mantle to that of the lower mantle, in accord with independent inferences from observables related to mantle convection. We use non-linear Bayesian inference to provide the first formal resolving power analysis of the Australian DSL data set. We identify three radial regions, two within the upper mantle (110–270 km and 320–570 km depth) and one in the lower mantle (1225–2265 km depth), over which the average viscosity is well constrained by the data. We conclude that: (1) the DSL data provide a resolution in the inference of upper mantle viscosity that is better than implied by forward analyses based on isoviscous regions above and below the 670 km depth discontinuity and (2) the data do not strongly constrain viscosity at either the base or top of the lower mantle. Finally, our inversions also quantify the significant bias that may be introduced in inversions of the DSL highstands that do not simultaneously estimate the thickness of the elastic lithosphere. [source]


PRIOR ELICITATION IN MULTIPLE CHANGE-POINT MODELS

INTERNATIONAL ECONOMIC REVIEW, Issue 3 2009
Gary Koop
This article discusses Bayesian inference in change-point models. The main existing approaches treat all change-points equally, a priori, using either a Uniform prior or an informative hierarchical prior. Both approaches assume a known number of change-points. Some undesirable properties of these approaches are discussed. We develop a new Uniform prior that allows some of the change-points to occur out of sample. This prior has desirable properties, can be interpreted as "noninformative," and treats the number of change-points as unknown. Artificial and real data exercises show how these different priors can have a substantial impact on estimation and prediction. [source]


A Foundational Justification for a Weighted Likelihood Approach to Inference

INTERNATIONAL STATISTICAL REVIEW, Issue 3 2004
Russell J. Bowater
Summary Two types of probability are discussed, one of which is additive whilst the other is non-additive. Popular theories that attempt to justify the importance of the additivity of probability are then critically reviewed. By making assumptions, the two types of probability put forward are utilised to justify a method of inference which involves betting preferences being revised in light of the data. This method of inference can be viewed as a justification for a weighted likelihood approach to inference, where the plausibility of different values of a parameter θ, based on the data X, is measured by the quantity q(θ) = l(X; θ)w(θ), where l(X; θ) is the likelihood function and w(θ) is a weight function. Even though, unlike Bayesian inference, the method has the disadvantageous property that the measure q(θ) is generally non-additive, it is argued that the method has other properties which may be considered very desirable and which have the potential to imply that, when everything is taken into account, the method is a serious alternative to the Bayesian approach in many situations. The methodology that is developed is applied to both a toy example and a real example. [source]
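The weighted-likelihood measure studied here is simply q(θ) = l(X; θ)w(θ). A minimal numerical sketch for a normal-mean example follows; the weight function is an arbitrary illustrative choice, and, in line with the abstract, q is not renormalized into an (additive) probability density.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(loc=1.5, scale=1.0, size=20)        # data X, with sigma assumed known (= 1)

theta = np.linspace(-1.0, 4.0, 501)                # grid of parameter values
loglik = np.array([norm.logpdf(x, loc=t, scale=1.0).sum() for t in theta])
lik = np.exp(loglik - loglik.max())                # likelihood l(X; theta), rescaled for numerical stability

w = np.exp(-0.5 * (theta / 2.0) ** 2)              # illustrative weight function w(theta), centred at 0

q = lik * w                                        # weighted likelihood q(theta) = l(X; theta) w(theta)
print("theta maximizing q:", theta[np.argmax(q)])
print("theta maximizing the unweighted likelihood:", theta[np.argmax(lik)])
```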


Bayesian inference in a piecewise Weibull proportional hazards model with unknown change points

JOURNAL OF ANIMAL BREEDING AND GENETICS, Issue 4 2007
J. Casellas
Summary The main difference between parametric and non-parametric survival analyses lies in model flexibility. Parametric models have been suggested as preferable because of their lower programming needs, although they generally suffer from reduced flexibility when fitting field data. In this sense, parametric survival functions can be redefined as piecewise survival functions whose slopes change at given points, which substantially increases the flexibility of the parametric survival model. Unfortunately, we lack accurate methods to establish the required number of change points and their positions within the time space. In this study, a Weibull survival model with a piecewise baseline hazard function was developed, with change points included as unknown parameters in the model. Concretely, a Weibull log-normal animal frailty model was assumed, and it was solved with a Bayesian approach. The required fully conditional posterior distributions were derived. During the sampling process, all the parameters in the model were updated using a Metropolis–Hastings step, with the exception of the genetic variance, which was updated with a standard Gibbs sampler. This methodology was tested with simulated data sets, each one analysed through several models with different numbers of change points. The models were compared with the Deviance Information Criterion, with appealing results. Simulation results showed that the estimated marginal posterior distributions covered the true parameter values used in the simulations well and placed high density on them. Moreover, results showed that the piecewise baseline hazard function could appropriately fit survival data, as well as other smooth distributions, with a reduced number of change points. [source]
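To make the piecewise baseline concrete, the sketch below codes one common parameterization of a piecewise Weibull hazard (the scale jumps at the change points while the shape is shared) together with the censored-data log-likelihood under proportional hazards. It is a generic illustration, not the authors' Weibull log-normal animal frailty model, and all parameter values are hypothetical.

```python
import numpy as np

def baseline_hazard(t, rho, lam, cuts):
    """Piecewise Weibull hazard h0(t) = lam_j * rho * t^(rho - 1) on the j-th segment."""
    j = np.searchsorted(cuts, t, side="right")          # segment index of each time point
    return lam[j] * rho * np.asarray(t) ** (rho - 1.0)

def cumulative_hazard(t, rho, lam, cuts):
    """H0(t) obtained by integrating the hazard segment by segment."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    edges = np.concatenate(([0.0], cuts, [np.inf]))
    H = np.zeros_like(t)
    for j in range(len(lam)):
        lo, hi = edges[j], edges[j + 1]
        overlap = np.clip(t, lo, hi)                    # part of [0, t] falling in segment j
        H += lam[j] * (overlap ** rho - lo ** rho)
    return H

def log_likelihood(times, events, x, beta, rho, lam, cuts):
    """Right-censored proportional-hazards log-likelihood with the piecewise Weibull baseline."""
    eta = x @ beta
    h = baseline_hazard(times, rho, lam, cuts) * np.exp(eta)
    H = cumulative_hazard(times, rho, lam, cuts) * np.exp(eta)
    return np.sum(events * np.log(h) - H)

# Hypothetical example: one change point at t = 2, shape rho = 1.3, scales (0.2, 0.05) per segment.
rng = np.random.default_rng(11)
times = rng.exponential(3.0, size=100)
events = (rng.uniform(size=100) < 0.7).astype(float)    # roughly 30% right-censored
x = rng.normal(size=(100, 2))
print(log_likelihood(times, events, x, beta=np.array([0.3, -0.2]),
                     rho=1.3, lam=np.array([0.2, 0.05]), cuts=np.array([2.0])))
```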


Semiparametric Bayesian inference for dynamic Tobit panel data models with unobserved heterogeneity

JOURNAL OF APPLIED ECONOMETRICS, Issue 6 2008
Tong Li
This paper develops semiparametric Bayesian methods for inference of dynamic Tobit panel data models. Our approach requires that the conditional mean dependence of the unobserved heterogeneity on the initial conditions and the strictly exogenous variables be specified. Important quantities of economic interest such as the average partial effect and average transition probabilities can be readily obtained as a by-product of the Markov chain Monte Carlo run. We apply our method to study female labor supply using a panel data set from the National Longitudinal Survey of Youth 1979. Copyright © 2008 John Wiley & Sons, Ltd. [source]


Age and probabilistic reasoning: Biases in conjunctive, disjunctive and Bayesian judgements in early and late adulthood

JOURNAL OF BEHAVIORAL DECISION MAKING, Issue 1 2005
John E. Fisk
Abstract Probabilistic reasoning plays an essential part in many aspects of our daily routine and it has been argued that as we grow older, the need to make judgements under uncertainty becomes increasingly important. Two studies were conducted to establish whether the propensity to commit probabilistic reasoning errors increased with age. Young (aged 16–24), middle-aged (25–54), and older persons (55 years and above) were included. Study 1 revealed systematic biases and errors across a range of judgement tasks. However, no evidence of any age effect in Bayesian inference, the incidence of the conjunction fallacy, or in the number of disjunction errors was found. The results obtained in Study 1 were replicated in Study 2, where the potential mediating role of working memory processes and intellectual capacity were explicitly assessed. While some aspects of probabilistic reasoning performance were correlated with measures of intelligence and working memory functioning among young adults, this was much less evident in older persons. The present findings are discussed in relation to the evolution of the dualistic heuristic–analytical system over the adult lifespan. Copyright © 2005 John Wiley & Sons, Ltd. [source]


Estimation of Age-at-Death for Adult Males Using the Acetabulum, Applied to Four Western European Populations

JOURNAL OF FORENSIC SCIENCES, Issue 4 2007
Carme Rissech Ph.D.
Abstract: Methods to estimate adult age from observations of skeletal elements are not very accurate, which motivates the development of better methods. In this article, we test a recently published method based on the acetabulum and Bayesian inference, developed using the Coimbra collection (Portugal). To evaluate its utility in other populations, this methodology was applied to 394 specimens from four different documented Western European collections. Four strategies of analysis to estimate age were outlined: (a) each series analysed separately; (b) the Lisbon collection, with the Coimbra collection as reference; (c) the Barcelona collection, with both Portuguese collections as reference; and (d) the London collection, with the three Iberian collections combined as reference. Results indicate that estimates are accurate (83–100%). As might be expected, the least accurate estimates were obtained when the most distant collection was used as a reference. Observations of the fused acetabulum can be used to make accurate estimates of age for adults of any age, with less accurate estimates when a more distant reference collection is used. [source]
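The Bayesian step behind this kind of age estimation is Bayes' rule on a discretized age axis: P(age | observed acetabular stage) is proportional to P(stage | age) P(age), with P(stage | age) estimated from a reference collection. The sketch below is a purely hypothetical illustration with an assumed ordinal stage-given-age model and a flat adult-age prior; it does not reproduce the published scoring system or reference frequencies.

```python
import numpy as np
from scipy.stats import norm

ages = np.arange(20, 90)                        # discretized adult age axis (years)
prior = np.ones_like(ages, dtype=float)         # flat prior over adult ages (reference demography could be used)
prior /= prior.sum()

# Hypothetical ordinal model: 4 acetabular stages, with thresholds on a latent "wear" scale
# that tracks age. In practice these probabilities would come from the reference collection.
thresholds = np.array([-np.inf, 35.0, 50.0, 65.0, np.inf])
def p_stage_given_age(stage, age, sd=8.0):
    return norm.cdf(thresholds[stage + 1], loc=age, scale=sd) - norm.cdf(thresholds[stage], loc=age, scale=sd)

observed_stage = 2                              # stage scored on the specimen (hypothetical)
likelihood = np.array([p_stage_given_age(observed_stage, a) for a in ages])
posterior = prior * likelihood
posterior /= posterior.sum()

mean_age = np.sum(ages * posterior)
lo, hi = ages[np.searchsorted(np.cumsum(posterior), [0.025, 0.975])]
print(f"posterior mean age: {mean_age:.1f}, 95% interval: [{lo}, {hi}]")
```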


Using the Acetabulum to Estimate Age at Death of Adult Males

JOURNAL OF FORENSIC SCIENCES, Issue 2 2006
Carme Rissech Ph.D.
ABSTRACT: The acetabular region is often present and adequately preserved in adult human skeletal remains. Close morphological examination of the 242 left male os coxae from the identified collection of Coimbra (Portugal) has enabled the recognition of seven variables that can be used to estimate age at death. This paper describes these variables and argues their appropriateness by analyzing the correlation between these criteria and age, the intra- and interobserver consistency, and the accuracy of age prediction when Bayesian inference is used to estimate the age of identified specimens. Results show a significant, close correlation between the acetabular criteria and age, nonsignificant differences in the intra- and interobserver tests, and 89% accuracy in the Bayesian prediction. The estimated ages of the specimens were similarly accurate across all age groups. These results indicate that these seven variables, based on the acetabular area, are potentially useful for estimating age at death in adult specimens. [source]


Simulation and multi-attribute utility modelling of life cycle profit

JOURNAL OF MULTI CRITERIA DECISION ANALYSIS, Issue 4 2001
Tony Rosqvist
Abstract Investments in capital goods are assessed with respect to the life cycle profit as well as the economic lifetime of the investment. The outcome of an investment with respect to these economic criteria is generally non-deterministic. An assessment of different investment options thus requires probabilistic modelling to explicitly account for the uncertainties. A process for the assessment of life cycle profit and the evaluation of the adequacy of the assessment is developed. The primary goal of the assessment process is to aid the decision-maker in structuring and quantifying investment decision problems characterized by multiple criteria and uncertainty. The adequacy of the assessment process can be evaluated by probabilistic criteria indicating the degree of uncertainty in the assessment. Bayesian inference is used to re-evaluate the initial assessment as evidence of the system performance becomes available. This supports the authentication of contracts of guarantee. Numerical examples are given to demonstrate features of the described life cycle profit assessment process. Copyright © 2001 John Wiley & Sons, Ltd. [source]
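The re-evaluation step mentioned here, updating the initial assessment as evidence of system performance accrues, is naturally expressed as a conjugate Bayesian update. The sketch below uses a gamma-Poisson model for an annual failure rate feeding a simple life cycle profit figure; the prior parameters, cost figures, and observed failure counts are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Prior belief about the annual failure rate lambda: Gamma(a0, b0) with mean a0 / b0 = 2 failures/year
a0, b0 = 4.0, 2.0

# Evidence from the first years of operation: observed failure counts per year
failures = np.array([1, 0, 2])
a_post, b_post = a0 + failures.sum(), b0 + len(failures)     # conjugate gamma-Poisson update

# Propagate the updated uncertainty into a simple life cycle profit figure
years_remaining, revenue_per_year, cost_per_failure = 10, 100_000.0, 15_000.0
lam = rng.gamma(a_post, 1.0 / b_post, size=20_000)           # posterior draws of the failure rate
future_failures = rng.poisson(lam * years_remaining)
profit = years_remaining * revenue_per_year - cost_per_failure * future_failures

print(f"posterior mean failure rate: {a_post / b_post:.2f} per year (prior mean {a0 / b0:.2f})")
print(f"expected life cycle profit: {profit.mean():,.0f}, 5th percentile: {np.percentile(profit, 5):,.0f}")
```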


PHYLOGENETIC PLACEMENT OF BOTRYOCOCCUS BRAUNII (TREBOUXIOPHYCEAE) AND BOTRYOCOCCUS SUDETICUS ISOLATE UTEX 2629 (CHLOROPHYCEAE)

JOURNAL OF PHYCOLOGY, Issue 2 2004
Hoda H. Senousy
The phylogenetic placement of four isolates of Botryococcus braunii Kützing and of Botryococcus sudeticus Lemmermann isolate UTEX 2629 was investigated using sequences of the nuclear small subunit (18S) rRNA gene. The B. braunii isolates represent the A (two isolates), B, and L chemical races. One isolate of B. braunii (CCAP 807/1; A race) has a group I intron at Escherichia coli position 1046 and isolate UTEX 2629 has group I introns at E. coli positions 516 and 1512. The rRNA sequences were aligned with 53 previously reported rRNA sequences from members of the Chlorophyta, including one reported for B. braunii (Berkeley strain). Phylogenetic trees were constructed using distance, weighted maximum parsimony, and maximum likelihood, and their reliability was estimated using bootstrap analysis for distance and parsimony and Bayesian inference for likelihood. All methods showed, with high bootstrap or credibility support, that the four isolates of B. braunii form a monophyletic group whose closest relatives are in the genus Choricystis in the Trebouxiophyceae, whereas the previously reported B. braunii sequence is from a member of the Chlamydomonadales in the Chlorophyceae and isolate UTEX 2629 is a member of the Sphaeropleales in the Chlorophyceae. Polyphyly of these sequences was confirmed by Kishino-Hasegawa tests on artificial trees in which sequences were moved to a single lineage. [source]


PHYLOGENY OF PHAGOTROPHIC EUGLENIDS (EUGLENOZOA): A MOLECULAR APPROACH BASED ON CULTURE MATERIAL AND ENVIRONMENTAL SAMPLES

JOURNAL OF PHYCOLOGY, Issue 4 2003
Ingo Busse
Molecular studies based on small subunit (SSU) rDNA sequences addressing euglenid phylogeny have hitherto suffered from the lack of available data on phagotrophic species. To extend the taxon sampling, SSU rRNA genes from species of seven genera of phagotrophic euglenids were investigated. Sequence analyses revealed an increased genetic diversity among euglenid SSU rDNA sequences compared with other well-known eukaryotic groups, reflecting an equally broad diversity of morphological characters among euglenid phagotrophs. Phylogenetic inference using standard parsimony and likelihood approaches as well as Bayesian inference and spectral analyses revealed no clear support for euglenid monophyly. Among phagotrophs, the monophyly of Petalomonas cantuscygni and Notosolenus ostium, both comprising simple ingestion apparatuses, is strongly supported. A moderately supported clade comprises phototrophic euglenids and primary osmotrophic euglenids together with phagotrophs exhibiting a primarily flexible pellicle composed of numerous helically arranged strips and a complex ingestion apparatus with two supporting rods and four curved vanes. A comparison of molecular and morphological data is used to demonstrate the difficulty of formulating a hypothesis about how the ingestion apparatus evolved in this group. [source]


Approximate Bayesian inference for latent Gaussian models by using integrated nested Laplace approximations

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2009
Håvard Rue
Summary. Structured additive regression models are perhaps the most commonly used class of models in statistical applications. It includes, among others, (generalized) linear models, (generalized) additive models, smoothing spline models, state space models, semiparametric regression, spatial and spatiotemporal models, log-Gaussian Cox processes and geostatistical and geoadditive models. We consider approximate Bayesian inference in a popular subset of structured additive regression models, latent Gaussian models, where the latent field is Gaussian, controlled by a few hyperparameters and with non-Gaussian response variables. The posterior marginals are not available in closed form owing to the non-Gaussian response variables. For such models, Markov chain Monte Carlo methods can be implemented, but they are not without problems, in terms of both convergence and computational time. In some practical applications, the extent of these problems is such that Markov chain Monte Carlo sampling is simply not an appropriate tool for routine analysis. We show that, by using an integrated nested Laplace approximation and its simplified version, we can directly compute very accurate approximations to the posterior marginals. The main benefit of these approximations is computational: where Markov chain Monte Carlo algorithms need hours or days to run, our approximations provide more precise estimates in seconds or minutes. Another advantage with our approach is its generality, which makes it possible to perform Bayesian analysis in an automatic, streamlined way, and to compute model comparison criteria and various predictive measures so that models can be compared and the model under study can be challenged. [source]
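One building block of this approach, a Laplace (Gaussian) approximation to a posterior density around its mode, can be shown in one dimension. The sketch below approximates the posterior of a log-rate in a toy Poisson model and compares it with a grid-normalized posterior; it is a didactic illustration, not the nested scheme of the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Poisson counts with rate exp(eta) and a Gaussian prior on eta: a minimal latent Gaussian model.
y = np.array([3, 5, 2, 4, 6, 3])
prior_mean, prior_sd = 0.0, 2.0

def log_post(eta):
    loglik = np.sum(y * eta - np.exp(eta))                     # Poisson log-likelihood up to a constant
    logprior = -0.5 * ((eta - prior_mean) / prior_sd) ** 2
    return loglik + logprior

# Laplace approximation: Gaussian centred at the mode, variance = 1 / (negative curvature at the mode)
res = minimize_scalar(lambda e: -log_post(e), bounds=(-5.0, 5.0), method="bounded")
mode = res.x
h = 1e-4
curv = -(log_post(mode + h) - 2 * log_post(mode) + log_post(mode - h)) / h ** 2
laplace_sd = 1.0 / np.sqrt(curv)

# Grid-normalized posterior, for comparison
grid = np.linspace(mode - 1.5, mode + 1.5, 2001)
dx = grid[1] - grid[0]
logp = np.array([log_post(e) for e in grid])
dens = np.exp(logp - logp.max())
dens /= dens.sum() * dx
grid_mean = np.sum(grid * dens) * dx
grid_sd = np.sqrt(np.sum((grid - grid_mean) ** 2 * dens) * dx)

print(f"Laplace: mean {mode:.3f}, sd {laplace_sd:.3f}")
print(f"Grid   : mean {grid_mean:.3f}, sd {grid_sd:.3f}")
```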


Measurement error modelling with an approximate instrumental variable

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 5 2007
Paul Gustafson
Summary. Consider using regression modelling to relate an exposure (predictor) variable to a disease outcome (response) variable. If the exposure variable is measured with error, but this error is ignored in the analysis, then misleading inferences can result. This problem is well known and has spawned a large literature on methods which adjust for measurement error in predictor variables. One theme is that the requisite assumptions about the nature of the measurement error can be stronger than what is actually known in many practical situations. In particular, the assumptions that are required to yield a model which is formally identified from the observable data can be quite strong. The paper deals with one particular strategy for measurement error modelling, namely that of seeking an instrumental variable, i.e. a covariate S which is associated with exposure and conditionally independent of the outcome given exposure. If these two conditions hold exactly, then we call S an exact instrumental variable, and an identified model results. However, the second is not checkable empirically, since the actual exposure is unobserved. In practice then, investigators typically seek a covariate which is plausibly thought to satisfy it. We study inferences which acknowledge the approximate nature of this assumption. In particular, we consider Bayesian inference with a prior distribution that posits that S is probably close to conditionally independent of outcome given exposure. We refer to this as an approximate instrumental variable assumption. Although the approximate instrumental variable assumption is more realistic for most applications, concern arises that a non-identified model may result. Thus the paper contrasts inferences arising from the approximate instrumental variable assumption with their exact instrumental variable counterparts, with particular emphasis on the benefit of basing inferences on a more realistic model versus the cost of basing inferences on a non-identified model. [source]


Wavelet-based functional mixed models

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 2 2006
Jeffrey S. Morris
Summary. Increasingly, scientific studies yield functional data, in which the ideal units of observation are curves and the observed data consist of sets of curves that are sampled on a fine grid. We present new methodology that generalizes the linear mixed model to the functional mixed model framework, with model fitting done by using a Bayesian wavelet-based approach. This method is flexible, allowing functions of arbitrary form and the full range of fixed effects structures and between-curve covariance structures that are available in the mixed model framework. It yields nonparametric estimates of the fixed and random-effects functions as well as the various between-curve and within-curve covariance matrices. The functional fixed effects are adaptively regularized as a result of the non-linear shrinkage prior that is imposed on the fixed effects' wavelet coefficients, and the random-effect functions experience a form of adaptive regularization because of the separately estimated variance components for each wavelet coefficient. Because we have posterior samples for all model quantities, we can perform pointwise or joint Bayesian inference or prediction on the quantities of the model. The adaptiveness of the method makes it especially appropriate for modelling irregular functional data that are characterized by numerous local features like peaks. [source]


Bayesian inference in hidden Markov models through the reversible jump Markov chain Monte Carlo method

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 1 2000
C. P. Robert
Hidden Markov models form an extension of mixture models which provides a flexible class of models exhibiting dependence and a possibly large degree of variability. We show how reversible jump Markov chain Monte Carlo techniques can be used to estimate the parameters as well as the number of components of a hidden Markov model in a Bayesian framework. We employ a mixture of zero-mean normal distributions as our main example and apply this model to three sets of data from finance, meteorology and geomagnetism. [source]


Using Bayesian inference to understand the allocation of resources between sexual and asexual reproduction

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2009
C. Jessica E. Metcalf
Summary. We address the problem of Markov chain Monte Carlo analysis of a complex ecological system by using a Bayesian inferential approach. We describe a complete likelihood framework for the life history of the wavyleaf thistle, including missing information and density dependence. We indicate how, to make inference on life history transitions involving both missing information and density dependence, the stochastic models underlying each component can be combined with each other and with priors to obtain expressions that can be directly sampled. This innovation and the principles described could be extended to other species featuring such missing stage information, with potential for improving inference relating to a range of ecological or evolutionary questions. [source]


A continuous latent spatial model for crack initiation in bone cement

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 1 2008
Elizabeth A. Heron
Summary. Hip replacements provide a means of achieving a higher quality of life for individuals who have, through aging or injury, accumulated damage to their natural joints. This is a very common operation, with over a million people a year benefiting from the procedure. The replacements themselves fail mainly as a result of the mechanical loosening of the components of the artificial joint due to damage accumulation. This damage accumulation consists of the initiation and growth of cracks in the bone cement which is used to fixate the replacement in the human body. The data come from laboratory experiments that are designed to assess the effectiveness of the bone cement in resisting damage. We examine the properties of the bone cement, with the aim being to estimate the effect that both observable and unobservable spatially varying factors have on causing crack initiation. To do this, an explicit model for the damage process is constructed taking into account the tension and compression at different locations in the specimens. A gamma random field is used to model any latent spatial factors that may be influential in crack initiation. Bayesian inference is carried out for the parameters of this field and related covariates by using Markov chain Monte Carlo techniques. [source]


Optimal predictive sample size for case–control studies

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2004
Fulvio De Santis
Summary. The identification of factors that increase the chances of a certain disease is one of the classical and central issues in epidemiology. In this context, a typical measure of the association between a disease and a risk factor is the odds ratio. We deal with design problems that arise for Bayesian inference on the odds ratio in the analysis of case–control studies. We consider sample size determination and allocation criteria for both interval estimation and hypothesis testing. These criteria are then employed to determine the sample size and the proportions of units to be assigned to cases and controls for planning a study on the association between the incidence of non-Hodgkin's lymphoma and exposure to pesticides, eliciting prior information from a previous study. [source]
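A minimal Monte Carlo version of this kind of design criterion: for a candidate sample size and case/control allocation, simulate studies, compute a posterior credible interval for the log odds ratio under independent Beta priors on the exposure probabilities, and check whether the average interval length meets a target. All design values below are hypothetical, not those elicited from the previous pesticide study.

```python
import numpy as np

rng = np.random.default_rng(9)

def avg_interval_length(n_cases, n_controls, p1=0.30, p0=0.15,
                        a=1.0, b=1.0, n_sim=500, n_draw=4000):
    """Average 95% credible-interval length for the log odds ratio under Beta(a, b) priors."""
    lengths = []
    for _ in range(n_sim):
        x1 = rng.binomial(n_cases, p1)               # exposed cases
        x0 = rng.binomial(n_controls, p0)            # exposed controls
        q1 = rng.beta(a + x1, b + n_cases - x1, size=n_draw)
        q0 = rng.beta(a + x0, b + n_controls - x0, size=n_draw)
        log_or = np.log(q1 / (1 - q1)) - np.log(q0 / (1 - q0))
        lo, hi = np.percentile(log_or, [2.5, 97.5])
        lengths.append(hi - lo)
    return float(np.mean(lengths))

target = 1.0                                         # desired average 95% interval length for the log odds ratio
for n in (100, 200, 400, 800):
    length = avg_interval_length(n_cases=n, n_controls=n)
    print(f"n per group = {n:4d}: average interval length = {length:.2f}",
          "(meets target)" if length <= target else "")
```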


Bayesian cure rate models for malignant melanoma: a case-study of Eastern Cooperative Oncology Group trial E1690

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2002
Ming-Hui Chen
We propose several Bayesian models for modelling time-to-event data. We consider a piecewise exponential model, a fully parametric cure rate model and a semiparametric cure rate model. For each model, we derive the likelihood function and examine some of its properties for carrying out Bayesian inference with non-informative priors. We also examine model identifiability issues and give conditions which guarantee identifiability. Also, for each model, we construct a class of informative prior distributions based on historical data, i.e. data from similar previous studies. These priors, called power priors, prove to be quite useful in this context. We examine the properties of the power priors for Bayesian inference and, in particular, we study their effect on the current analysis. Tools for model comparison and model assessment are also proposed. A detailed case-study of a recently completed melanoma clinical trial conducted by the Eastern Cooperative Oncology Group is presented and the methodology proposed is demonstrated in detail. [source]
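The power prior construction can be written in closed form for a binary endpoint: the historical likelihood enters raised to a discounting power a0 in [0, 1], so with a Beta initial prior the current-study analysis stays conjugate. The counts below are hypothetical and only illustrate the mechanics with a fixed a0, not the melanoma trial analysis.

```python
import numpy as np

# Historical study D0 and current study D: responders / total (hypothetical counts)
x0, n0 = 30, 100
x, n = 24, 60

a_init, b_init = 1.0, 1.0            # vague initial Beta prior

for a0 in (0.0, 0.25, 0.5, 1.0):     # a0 = 0 ignores the historical data, a0 = 1 pools it fully
    # Power prior: Beta(a_init + a0*x0, b_init + a0*(n0 - x0)); then a standard conjugate update with D.
    a_post = a_init + a0 * x0 + x
    b_post = b_init + a0 * (n0 - x0) + (n - x)
    mean = a_post / (a_post + b_post)
    sd = np.sqrt(a_post * b_post / ((a_post + b_post) ** 2 * (a_post + b_post + 1)))
    print(f"a0 = {a0:.2f}: posterior mean = {mean:.3f}, posterior sd = {sd:.3f}")
```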


Quantification of uncertainty using Bayesian and bootstrap models to simulate the impact of nitrogen fertilisation on β-glucan levels in barley

JOURNAL OF THE SCIENCE OF FOOD AND AGRICULTURE, Issue 11 2009
Marta Fontana
Abstract BACKGROUND: β-Glucans have enjoyed renewed interest as a functional food ingredient, with current attention focused on optimising β-glucan levels in finished products without compromising final product quality. In order to measure the uncertainty about the level of β-glucans in barley, two different statistical methods (Bayesian inference and the bootstrap technique) were applied to measured levels of β-glucan in three different varieties of barley grain (n = 83). RESULTS: The resulting probability density distributions were similar for the full data set and also when applied to smaller sample sizes, highlighting the potential for either method in quantifying the total uncertainty in β-glucan levels. Bayesian inference was used to model the effect of nitrogen treatment on β-glucan and protein contents in barley. The model found that a low level of fertilisation (50 kg N ha−1) did not have a significant effect on β-glucan or protein content. However, fertilisation above this level did result in an increase in β-glucan and protein levels, the effect seeming to plateau at 100 kg N ha−1. In addition, the uncertainty distributions were significantly different for two consecutive years of data, highlighting the potential environmental influence on β-glucan content. CONCLUSION: The model developed in this study could be a useful tool for processors to quantify the uncertainty about the initial level of β-glucan in barley and to evaluate the influence of environmental factors, thus enabling them to formulate their ingredient base to optimise levels of β-glucan without compromising final product quality. Copyright © 2009 Society of Chemical Industry [source]
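The two routes to uncertainty quantification compared here can be sketched for a sample mean: a nonparametric bootstrap distribution versus, as one simple Bayesian counterpart, the posterior of the mean under a normal model with the standard noninformative prior (a scaled and shifted t distribution). The β-glucan values below are simulated placeholders, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(13)
beta_glucan = rng.normal(loc=4.2, scale=0.6, size=40)     # placeholder values (g/100 g), placeholder n

# Nonparametric bootstrap distribution of the mean
boot_means = np.array([rng.choice(beta_glucan, size=beta_glucan.size, replace=True).mean()
                       for _ in range(5000)])

# Bayesian posterior of the mean under a normal model with the noninformative prior p(mu, sigma^2) ~ 1/sigma^2:
# mu | data follows a t distribution with n - 1 df, location xbar, and scale s / sqrt(n).
n = beta_glucan.size
xbar, s = beta_glucan.mean(), beta_glucan.std(ddof=1)
post_means = xbar + (s / np.sqrt(n)) * rng.standard_t(df=n - 1, size=5000)

for name, draws in (("bootstrap", boot_means), ("Bayesian ", post_means)):
    lo, hi = np.percentile(draws, [2.5, 97.5])
    print(f"{name}: mean {draws.mean():.2f}, 95% interval [{lo:.2f}, {hi:.2f}]")
```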


Phylogeographic analysis of Pimoidae (Arachnida: Araneae) inferred from mitochondrial cytochrome c oxidase subunit I and nuclear 28S rRNA gene regions

JOURNAL OF ZOOLOGICAL SYSTEMATICS AND EVOLUTIONARY RESEARCH, Issue 2 2008
Q. Wang
Abstract Using mitochondrial DNA cytochrome c oxidase subunit I and nuclear DNA 28S rRNA data, we explored the phylogenetic relationships of the family Pimoidae (Arachnida: Araneae) and tested the North America to Asia dispersal hypothesis. Sequence data were analysed using maximum parsimony and Bayesian inference. A phylogenetic analysis suggested that vicariance, instead of dispersal, better explained the present distribution pattern of Pimoidae. Times of divergence events were estimated using the penalized likelihood method. The dating analysis suggested that the emergence time of Pimoidae was approximately 140 million years ago (Ma). The divergence time of the North American and Asian species of Pimoa was approximately 110 Ma. Our phylogenetic hypothesis supports the current morphology-based taxonomy and suggests that cave dwelling might have played an important role in the speciation of pimoids in arid areas. [source]