Posterior Distributions (posterior + distribution)

Kinds of Posterior Distributions

  • marginal posterior distribution

  • Selected Abstracts

    Joint projections of temperature and precipitation change from multiple climate models: a hierarchical Bayesian approach

    Claudia Tebaldi
    Summary. Posterior distributions for the joint projections of future temperature and precipitation trends and changes are derived by applying a Bayesian hierarchical model to a rich data set of simulated climate from general circulation models. The simulations that are analysed here constitute the future projections on which the Intergovernmental Panel on Climate Change based its recent summary report on the future of our planet's climate, albeit without any sophisticated statistical handling of the data. Here we quantify the uncertainty that is represented by the variable results of the various models and their limited ability to represent the observed climate both at global and at regional scales. We do so in a Bayesian framework, by estimating posterior distributions of the climate change signals in terms of trends or differences between future and current periods, and we fully characterize the uncertain nature of a suite of other parameters, like biases, correlation terms and model-specific precisions. Besides presenting our results in terms of posterior distributions of the climate signals, we offer as an alternative representation of the uncertainties in climate change projections the use of the posterior predictive distribution of a new model's projections. The results from our analysis can find straightforward applications in impact studies, which necessitate not only best guesses but also a full representation of the uncertainty in climate change projections. For water resource and crop models, for example, it is vital to use joint projections of temperature and precipitation to represent the characteristics of future climate best, and our statistical analysis delivers just that. [source]
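A single-level version of the pooling step can be sketched as a precision-weighted average of model projections. This is a minimal illustration assuming normal likelihoods, a flat prior, and known model-specific precisions; the paper's hierarchical model estimates those precisions, biases, and correlations jointly, and all numbers below are hypothetical:

```python
def combine_projections(estimates, precisions):
    """Precision-weighted posterior mean and variance for a common
    climate-change signal, given several model projections, assuming
    normal likelihoods and a flat prior (one level of the hierarchy)."""
    w = sum(precisions)
    mean = sum(p * x for p, x in zip(precisions, estimates)) / w
    return mean, 1.0 / w

# hypothetical temperature-change projections (deg C) from four GCMs
est = [2.1, 2.8, 1.9, 2.4]
prec = [4.0, 1.0, 2.0, 3.0]   # higher precision = more trusted model
mean, var = combine_projections(est, prec)
print(round(mean, 2), round(var, 2))  # 2.22 0.1
```

The weighted mean shrinks toward the better-trusted models, and the posterior variance 1/Σprecision quantifies the remaining uncertainty.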

    Likelihood-based tests for localized spatial clustering of disease

    ENVIRONMETRICS, Issue 8 2004
    Ronald E. Gangnon
    Abstract Numerous methods have been proposed for detecting spatial clustering of disease. Two methods for likelihood-based inference using parametric models for clustering are the spatial scan statistic and the weighted average likelihood ratio (WALR) test. The spatial scan statistic provides a measure of evidence for clustering at a specific, data-identified location; it can be biased towards finding clusters in areas with greater spatial resolution. The WALR test provides a more global assessment of the evidence for clustering and identifies cluster locations in a relatively unbiased fashion using a posterior distribution over potential clusters. We consider two new statistics which attempt to combine the specificity of the scan statistic with the lack of bias of the WALR test: a scan statistic based on a penalized likelihood ratio and a localized version of the WALR test. We evaluate the power of these tests and bias of the associated estimates through simulations and demonstrate their application using the well-known New York leukemia data. Copyright © 2004 John Wiley & Sons, Ltd. [source]
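A one-dimensional toy version of the scan statistic illustrates the likelihood-ratio idea: slide a window over regions, compare observed and expected Poisson counts inside and outside the window, and keep the window with the largest log-likelihood ratio. This is a sketch of Kulldorff-style scanning over contiguous zones, not the paper's full spatial implementation; all counts are hypothetical:

```python
import math

def poisson_llr(c, e, C, E):
    """Kulldorff log-likelihood ratio for a candidate zone.
    c, e: observed and expected counts inside the zone;
    C, E: totals over the whole study region."""
    if c == 0 or c / e <= (C - c) / (E - e):
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (E - e))

def scan_statistic(counts, expected):
    """Scan all contiguous windows; return (max LLR, best window)."""
    C, E = sum(counts), sum(expected)
    best, best_zone = 0.0, None
    n = len(counts)
    for i in range(n):
        c = e = 0.0
        for j in range(i, n):
            c += counts[j]
            e += expected[j]
            if e < E:  # skip the degenerate whole-region zone
                llr = poisson_llr(c, e, C, E)
                if llr > best:
                    best, best_zone = llr, (i, j)
    return best, best_zone

# a planted excess in regions 3-5 should be recovered
counts = [2, 3, 2, 9, 11, 10, 3, 2]
expected = [3.0] * 8
llr, zone = scan_statistic(counts, expected)
print(zone)  # (3, 5)
```

In practice the maximum is compared against its Monte Carlo null distribution, since the scan over many overlapping zones invalidates a simple chi-squared reference.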

    Hierarchical Bayesian modelling of wind and sea surface temperature from the Portuguese coast

    Ricardo T. Lemos
    Abstract In this work, we revisit a recent analysis that pointed to an overall relaxation of the Portuguese coastal upwelling system, between 1941 and 2000, and apply more elaborate statistical techniques to assess that evidence. Our goal is to fit a model for environmental variables that accommodate seasonal cycles, long-term trends, short-term fluctuations with some degree of autocorrelation, and cross-correlations between measuring sites and variables. Reference cell coding is used to investigate similarities in behaviour among sites. Parameter estimation is performed in a single modelling step, thereby producing more reliable credibility intervals than previous studies. This is of special importance in the assessment of trend significance. We employ a Bayesian approach with a purposely developed Markov chain Monte Carlo method to explore the posterior distribution of the parameters. Our results substantiate most previous findings and provide new insight on the relationship between wind and sea surface temperature off the Portuguese coast. Copyright © 2009 Royal Meteorological Society [source]

    Jointness of growth determinants

    Gernot Doppelhofer
    This paper introduces a new measure of dependence or jointness among explanatory variables. Jointness is based on the joint posterior distribution of variables over the model space, thereby taking model uncertainty into account. By looking beyond marginal measures of variable importance, jointness reveals generally unknown forms of dependence. Positive jointness implies that regressors are complements, representing distinct but mutually reinforcing effects. Negative jointness implies that explanatory variables are substitutes and capture similar underlying effects. In a cross-country dataset we show that jointness among 67 determinants of growth is important, affecting inference and informing economic policy. Copyright © 2009 John Wiley & Sons, Ltd. [source]

    Learning, forecasting and structural breaks

    John M. Maheu
    We provide a general methodology for forecasting in the presence of structural breaks induced by unpredictable changes to model parameters. Bayesian methods of learning and model comparison are used to derive a predictive density that takes into account the possibility that a break will occur before the next observation. Estimates for the posterior distribution of the most recent break are generated as a by-product of our procedure. We discuss the importance of using priors that accurately reflect the econometrician's opinions as to what constitutes a plausible forecast. Several applications to macroeconomic time-series data demonstrate the usefulness of our procedure. Copyright © 2008 John Wiley & Sons, Ltd. [source]

    Effects of Practical Constraints on Item Selection Rules at the Early Stages of Computerized Adaptive Testing

    Shu-Ying Chen
    The purpose of this study was to compare the effects of four item selection rules: (1) Fisher information (F), (2) Fisher information with a posterior distribution (FP), (3) Kullback-Leibler information with a posterior distribution (KP), and (4) completely randomized item selection (RN), with respect to the precision of trait estimation and the extent of item usage at the early stages of computerized adaptive testing. The comparison of the four item selection rules was carried out under three conditions: (1) using only the item information function as the item selection criterion; (2) using both the item information function and content balancing; and (3) using the item information function, content balancing, and item exposure control. When test length was less than 10 items, FP and KP tended to outperform F at extreme trait levels in Condition 1. However, in more realistic settings, it could not be concluded that FP and KP outperformed F, especially when item exposure control was imposed. When test length was greater than 10 items, the three nonrandom item selection procedures performed similarly no matter what the condition was, while F had slightly higher item usage. [source]

    Monte Carlo Based Null Distribution for an Alternative Goodness-of-Fit Test Statistic in IRT Models

    Clement A. Stone
    Assessing the correspondence between model predictions and observed data is a recommended procedure for justifying the application of an IRT model. However, with shorter tests, current goodness-of-fit procedures that assume precise point estimates of ability are inappropriate. The present paper describes a goodness-of-fit statistic that considers the imprecision with which ability is estimated and involves constructing item fit tables based on each examinee's posterior distribution of ability, given the likelihood of their response pattern and an assumed marginal ability distribution. However, the posterior expectations that are computed are dependent and the distribution of the goodness-of-fit statistic is unknown. The present paper also describes a Monte Carlo resampling procedure that can be used to assess the significance of the fit statistic and compares this method with a previously used method. The results indicate that the method described herein is an effective and reasonably simple procedure for assessing the validity of applying IRT models when ability estimates are imprecise. [source]
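The examinee posterior that such fit tables are built on can be approximated by grid quadrature: multiply the likelihood of the response pattern under an IRT model (a 2PL model here) by a standard normal ability prior and normalise. A minimal sketch with hypothetical item parameters:

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def posterior_mean_ability(responses, items):
    """Posterior mean of ability under a N(0, 1) prior, computed by
    simple grid quadrature over theta in [-4, 4]."""
    grid = [-4.0 + 0.02 * k for k in range(401)]
    weights = []
    for t in grid:
        w = math.exp(-0.5 * t * t)  # N(0, 1) prior kernel
        for x, (a, b) in zip(responses, items):
            p = p_correct(t, a, b)
            w *= p if x == 1 else (1.0 - p)
        weights.append(w)
    z = sum(weights)
    return sum(t * w for t, w in zip(grid, weights)) / z

items = [(1.2, -0.5), (1.0, 0.0), (0.8, 0.7)]  # hypothetical (a, b) pairs
high = posterior_mean_ability([1, 1, 1], items)
low = posterior_mean_ability([0, 0, 0], items)
print(high > 0 > low)  # True
```

Note that even a perfect score yields a finite posterior mean here, because the prior regularises the estimate where the maximum likelihood estimate would diverge.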

    Gamma-SLAM: Visual SLAM in unstructured environments using variance grid maps

    Tim K. Marks
    This paper describes an online stereo visual simultaneous localization and mapping (SLAM) algorithm developed for the Learning Applied to Ground Robotics (LAGR) program. The Gamma-SLAM algorithm uses a Rao-Blackwellized particle filter to obtain a joint posterior over poses and maps: the pose distribution is estimated using a particle filter, and each particle has its own map that is obtained through exact filtering conditioned on the particle's pose. Visual odometry is used to provide good proposal distributions for the particle filter, and maps are represented using a Cartesian grid. Unlike previous grid-based SLAM algorithms, however, the Gamma-SLAM map maintains a posterior distribution over the elevation variance in each cell. This variance grid map can capture rocks, vegetation, and other objects that are typically found in unstructured environments but are not well modeled by traditional occupancy or elevation grid maps. The algorithm runs in real time on conventional processors and has been evaluated for both qualitative and quantitative accuracy in three outdoor environments over trajectories totaling 1,600 m in length. © 2008 Wiley Periodicals, Inc. [source]

    Darwin, Galton and the Statistical Enlightenment

    Stephen M. Stigler
    Summary. On September 10th, 1885, Francis Galton ushered in a new era of Statistical Enlightenment with an address to the British Association for the Advancement of Science in Aberdeen. In the process of solving a puzzle that had lain dormant in Darwin's Origin of Species, Galton introduced multivariate analysis and paved the way towards modern Bayesian statistics. The background to this work is recounted, including the recognition of a failed attempt by Galton in 1877 as providing the first use of a rejection sampling algorithm for the simulation of a posterior distribution, and the first appearance of a proper Bayesian analysis for the normal distribution. [source]
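Rejection sampling from a posterior, in the spirit attributed here to Galton's 1877 attempt, can be sketched for a normal prior and a single normal observation: draw from the prior and accept with probability proportional to the likelihood. The setup below is a modern toy example, not Galton's mechanism; the analytic posterior is N(0.5, 1/2), which the accepted draws should match:

```python
import math
import random

def rejection_posterior(y, n_samples, rng):
    """Draw from the posterior of theta given y ~ N(theta, 1) and a
    N(0, 1) prior, by accept/reject with the prior as proposal."""
    out = []
    while len(out) < n_samples:
        theta = rng.gauss(0.0, 1.0)                   # proposal = prior
        accept_p = math.exp(-0.5 * (y - theta) ** 2)  # likelihood, max 1 at theta = y
        if rng.random() < accept_p:
            out.append(theta)
    return out

rng = random.Random(1885)
draws = rejection_posterior(1.0, 20000, rng)
mean = sum(draws) / len(draws)
print(round(mean, 2))  # close to the analytic posterior mean 0.5
```

The acceptance probability only needs the likelihood up to its maximum, which is why the method works whenever the likelihood is bounded.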

    Models for potentially biased evidence in meta-analysis using empirically based priors

    N. J. Welton
    Summary. We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large their size, and that little is gained by incorporating such studies, unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions. [source]

    Using historical data for Bayesian sample size determination

    Fulvio De Santis
    Summary. We consider the sample size determination (SSD) problem, which is a basic yet extremely important aspect of experimental design. Specifically, we deal with the Bayesian approach to SSD, which gives researchers the possibility of taking into account pre-experimental information and uncertainty on unknown parameters. At the design stage, this fact offers the advantage of removing or mitigating typical drawbacks of classical methods, which might lead to serious miscalculation of the sample size. In this context, the leading idea is to choose the minimal sample size that guarantees a probabilistic control on the performance of quantities that are derived from the posterior distribution and used for inference on parameters of interest. We are concerned with the use of historical data, i.e. observations from previous similar studies, for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between historical data and observations of the upcoming experiment. This problem, in fact, determines the necessity of discounting prior information and of evaluating the effect of heterogeneity on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed and their use, in concert with power priors, is illustrated in several medical experimental contexts. [source]
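For a normal mean with known variance, the power prior has a closed form: the historical likelihood is raised to a discount a0 in [0, 1], which simply down-weights the historical sample size. A conjugate sketch with hypothetical data (flat initial prior assumed):

```python
def power_prior_posterior(y_cur, y_hist, a0, sigma2=1.0):
    """Posterior mean and variance for a normal mean with known
    variance sigma2, a flat initial prior, and historical data
    discounted by the power-prior parameter a0 in [0, 1]."""
    n, n0 = len(y_cur), len(y_hist)
    w = n + a0 * n0                       # effective sample size
    mean = (sum(y_cur) + a0 * sum(y_hist)) / w
    return mean, sigma2 / w

y_hist = [1.8, 2.2, 2.0, 1.9, 2.1]   # previous study, mean 2.0
y_cur = [0.9, 1.1, 1.0]              # upcoming study, mean 1.0

m0, v0 = power_prior_posterior(y_cur, y_hist, a0=0.0)  # ignore history
m1, v1 = power_prior_posterior(y_cur, y_hist, a0=1.0)  # full pooling
print(round(m0, 3), round(m1, 3))  # 1.0 1.625
```

Intermediate a0 values interpolate between the two extremes, which is exactly the discounting lever that SSD with heterogeneous historical data needs.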

    Bayesian analysis of single-molecule experimental data

    S. C. Kou
    Summary. Recent advances in experimental technologies allow scientists to follow biochemical processes on a single-molecule basis, which provides much richer information about chemical dynamics than traditional ensemble-averaged experiments but also raises many new statistical challenges. The paper provides the first likelihood-based statistical analysis of the single-molecule fluorescence lifetime experiment designed to probe the conformational dynamics of a single deoxyribonucleic acid (DNA) hairpin molecule. The conformational change is initially treated as a continuous time two-state Markov chain, which is not observable and must be inferred from changes in photon emissions. This model is further complicated by unobserved molecular Brownian diffusions. Beyond the simple two-state model, a competing model that models the energy barrier between the two states of the DNA hairpin as an Ornstein-Uhlenbeck process has been suggested in the literature. We first derive the likelihood function of the simple two-state model and then generalize the method to handle complications such as unobserved molecular diffusions and the fluctuating energy barrier. The data augmentation technique and Markov chain Monte Carlo methods are developed to sample from the desired posterior distribution. The Bayes factor calculation and posterior estimates of relevant parameters indicate that the fluctuating barrier model fits the data better than the simple two-state model. [source]
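The initial two-state model is a continuous-time Markov chain, which can be simulated directly by drawing exponential dwell times (a Gillespie-style scheme). A sketch with hypothetical rate constants, checking long-run occupancy against the stationary distribution:

```python
import random

def simulate_two_state(k01, k10, t_end, rng):
    """Simulate a two-state continuous-time Markov chain with
    transition rates k01 (0 -> 1) and k10 (1 -> 0); return the
    total time spent in state 1 up to t_end."""
    t, state, time_in_1 = 0.0, 0, 0.0
    while t < t_end:
        rate = k01 if state == 0 else k10
        dwell = min(rng.expovariate(rate), t_end - t)
        if state == 1:
            time_in_1 += dwell
        t += dwell
        state = 1 - state
    return time_in_1

rng = random.Random(42)
k01, k10 = 2.0, 1.0
frac = simulate_two_state(k01, k10, 5000.0, rng) / 5000.0
# stationary occupancy of state 1 is k01 / (k01 + k10) = 2/3
print(round(frac, 2))
```

The inference problem in the paper is the reverse of this sketch: the state path is hidden and only photon statistics are observed, which is what the data augmentation and MCMC machinery addresses.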

    Modelling species diversity through species level hierarchical modelling

    Alan E. Gelfand
    Summary. Understanding spatial patterns of species diversity and the distributions of individual species is a consuming problem in biogeography and conservation. The Cape floristic region of South Africa is a global hot spot of diversity and endemism, and the Protea atlas project, with about 60 000 site records across the region, provides an extraordinarily rich data set to model patterns of biodiversity. Model development is focused spatially at the scale of 1′ grid cells (about 37 000 cells total for the region). We report on results for 23 species of a flowering plant family known as Proteaceae (of about 330 in the Cape floristic region) for a defined subregion. Using a Bayesian framework, we developed a two-stage, spatially explicit, hierarchical logistic regression. Stage 1 models the potential probability of presence or absence for each species at each cell, given species attributes, grid cell (site level) environmental data with species level coefficients, and a spatial random effect. The second level of the hierarchy models the probability of observing each species in each cell given that it is present. Because the atlas data are not evenly distributed across the landscape, grid cells contain variable numbers of sampling localities. Thus this model takes the sampling intensity at each site into account by assuming that the total number of times that a particular species was observed within a site follows a binomial distribution. After assigning prior distributions to all quantities in the model, samples from the posterior distribution were obtained via Markov chain Monte Carlo methods. Results are mapped as the model-estimated probability of presence for each species across the domain. This provides an alternative to customary empirical 'range-of-occupancy' displays. Summing yields the predicted richness of species over the region. 
Summaries of the posterior for each environmental coefficient show which variables are most important in explaining the presence of species. Our initial results describe biogeographical patterns over the modelled region remarkably well. In particular, species local population size and mode of dispersal contribute significantly to predicting patterns, along with annual precipitation, the coefficient of variation in rainfall and elevation. [source]

    Bayesian Unit Root Test in Nonnormal AR(1) Model

    Hikaru Hasegawa
    In this paper, we approximate the distribution of disturbances by the Edgeworth series distribution and propose a Bayesian analysis in a nonnormal AR(1) model. We derive the posterior distribution of the autocorrelation and the posterior odds ratio for the unit root hypothesis in the AR(1) model when the first four cumulants of the Edgeworth series distribution are finite and the higher order cumulants are negligible. We also apply the posterior analysis to eight real exchange rates and investigate whether these exchange rates behave like a random walk or not. [source]
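Under a flat prior and normal errors, the conditional posterior of the AR(1) coefficient is approximately normal around the least-squares estimate, so the posterior probability of a unit root has a one-line normal-tail expression. A sketch (ignoring the Edgeworth correction for nonnormality that the paper develops); the simulated series are hypothetical:

```python
import random
from statistics import NormalDist

def ar1_posterior(y):
    """Approximate posterior of the AR(1) coefficient: flat prior,
    normal errors, posterior ~ N(rho_ols, s^2 / sum(y_{t-1}^2))."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    rho = num / den
    resid = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    s2 = sum(e * e for e in resid) / (len(y) - 2)
    return rho, (s2 / den) ** 0.5

def prob_unit_root(y):
    """Posterior probability that rho >= 1 under the normal approximation."""
    rho, sd = ar1_posterior(y)
    return 1.0 - NormalDist(rho, sd).cdf(1.0)

rng = random.Random(7)
walk, stat = [0.0], [0.0]
for _ in range(500):
    walk.append(walk[-1] + rng.gauss(0, 1))        # random walk, rho = 1
    stat.append(0.5 * stat[-1] + rng.gauss(0, 1))  # stationary, rho = 0.5
print(prob_unit_root(walk) > prob_unit_root(stat))  # True
```

The random walk keeps substantial posterior mass at rho = 1, while the stationary series places essentially none there.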

    lea (likelihood-based estimation of admixture): a program to estimate simultaneously admixture and time since the admixture event

    O. Langella
    Abstract We consider an admixture event, T generations in the past, where two 'parental' populations, P1 and P2, of size N1 and N2, contribute different proportions into the gene pool of an admixed population, H of size Nh. lea (likelihood-based estimator of admixture) is a program which allows the user to obtain the posterior distribution of the parameters of the model. This includes p1, the contribution of P1, and t1, t2 and th, the time since the admixture event (scaled by the population size) for the three populations. lea allows the user to stop and restart the analyses at any time. [source]

    The abundance and radial distribution of satellite galaxies

    Frank C. Van Den Bosch
    ABSTRACT Using detailed mock galaxy redshift surveys (MGRSs) we investigate the abundance and radial distribution of satellite galaxies. The mock surveys are constructed using large numerical simulations and the conditional luminosity function (CLF), and are compared against data from the Two Degree Field Galaxy Redshift Survey (2dFGRS). We use Markov chain Monte Carlo methods to explore the full posterior distribution of the CLF parameter space, and show that the average relation between light and mass is tightly constrained and in excellent agreement with our previous models and with that of Vale & Ostriker. The radial number density distribution of satellite galaxies in the 2dFGRS reveals a pronounced absence of satellites at small projected separations from their host galaxies. This is (at least partly) owing to the overlap and merging of galaxy images in the 2dFGRS parent catalogue. Owing to the resulting close-pair incompleteness we are unfortunately unable to put meaningful constraints on the radial distribution of satellite galaxies; the data are consistent with a radial number density distribution that follows that of the dark matter particles, but we cannot rule out alternatives with a constant number density core. Marginalizing over the full CLF parameter space, we show that in a ΛCDM concordance cosmology the observed abundances of host and satellite galaxies in the 2dFGRS indicate a power spectrum normalization of σ8 ≈ 0.7. The same cosmology but with σ8 = 0.9 is unable to match simultaneously the abundances of host and satellite galaxies. This confirms our previous conclusions based on the pairwise peculiar velocity dispersions and the group multiplicity function. [source]

    Practical pharmacovigilance analysis strategies

    A. Lawrence Gould
    Abstract Purpose To compare two recently proposed Bayesian methods for quantitative pharmacovigilance with respect to assumptions and results, and to describe some practical strategies for their use. Methods The two methods were expressed in common terms to simplify identifying similarities and differences, some extensions to both methods were provided, and the empirical Bayes method was applied to accumulated experience on a new antihypertensive drug to elucidate the pattern of adverse-event reporting. Both methods use the logarithm of the proportional risk ratio as the basic metric for association. Results The two methods provide similar numerical results for frequently reported events, but not necessarily when few events are reported. Using a lower 5% quantile of the posterior distribution gives some assurance that potential signals are unlikely to be noise. The calculations indicated that most potential adverse event-drug associations that were well-recognized after 6 years of use could be identified within the first year, and that most of the associations identified in the first year persisted over time. Other insights into the pattern of event reporting were also noted. Conclusion Both methods can provide useful early signals of potential drug-event associations that subsequently can be the focus of detailed evaluation by skilled clinicians and epidemiologists. Copyright © 2002 John Wiley & Sons, Ltd. [source]
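The basic metric can be sketched as the log proportional reporting ratio from a 2x2 table of spontaneous reports, with a normal approximation to its posterior supplying the lower 5% quantile used to screen out noise. This is a textbook approximation, not the paper's empirical Bayes model, and all counts below are hypothetical:

```python
import math
from statistics import NormalDist

def log_ratio_signal(a, b, c, d, q=0.05):
    """Log proportional reporting ratio and its lower q-quantile
    under an approximate normal posterior.

    a: reports of the event with the drug; b: other events with the
    drug; c: the event with all other drugs; d: other events with
    all other drugs."""
    log_prr = math.log((a / (a + b)) / (c / (c + d)))
    se = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = log_prr + NormalDist().inv_cdf(q) * se
    return log_prr, lower

log_prr, lower5 = log_ratio_signal(20, 980, 100, 98900)
print(lower5 > 0)  # True: even the conservative quantile exceeds log(1)
```

Flagging only associations whose lower quantile clears zero trades a little sensitivity for far fewer false alarms, which is the point made in the Results.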

    Fragile beliefs and the price of uncertainty

    Lars Peter Hansen
    C11; C44; C72; E44; G12
    A representative consumer uses Bayes' law to learn about parameters of several models and to construct probabilities with which to perform ongoing model averaging. The arrival of signals induces the consumer to alter his posterior distribution over models and parameters. The consumer's specification doubts induce him to slant probabilities pessimistically. The pessimistic probabilities tilt toward a model that puts long-run risks into consumption growth. That contributes a countercyclical history-dependent component to prices of risk. [source]

    On describing multivariate skewed distributions: A directional approach

    José T. A. S. Ferreira
    Abstract Most multivariate measures of skewness in the literature measure the overall skewness of a distribution. These measures were designed for testing the hypothesis of distributional symmetry; their relevance for describing skewed distributions is less obvious. In this article, the authors consider the problem of characterizing the skewness of multivariate distributions. They define directional skewness as the skewness along a direction and analyze two parametric classes of skewed distributions using measures based on directional skewness. The analysis brings further insight into the classes, allowing for a more informed selection of classes of distributions for particular applications. The authors use the concept of directional skewness twice in the context of Bayesian linear regression under skewed error: first in the elicitation of a prior on the parameters of the error distribution, and then in the analysis of the skewness of the posterior distribution of the regression residuals. [source]

    Population Size Estimation Using Individual Level Mixture Models

    Daniel Manrique-Vallier
    Abstract We revisit the heterogeneous closed population multiple recapture problem, modeling individual-level heterogeneity using the Grade of Membership model (Woodbury et al., 1978). This strategy allows us to postulate the existence of homogeneous latent "ideal" or "pure" classes within the population, and construct a soft clustering of the individuals, where each one is allowed partial or mixed membership in all of these classes. We propose a full hierarchical Bayes specification and a MCMC algorithm to obtain samples from the posterior distribution. We apply the method to simulated data and to three real life examples. (© 2008 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim) [source]

    Bayesian Case Influence Diagnostics for Survival Models

    BIOMETRICS, Issue 1 2009
    Hyunsoon Cho
    Summary We propose Bayesian case influence diagnostics for complex survival models. We develop case deletion influence diagnostics for both the joint and marginal posterior distributions based on the Kullback-Leibler divergence (K-L divergence). We present a simplified expression for computing the K-L divergence between the posterior with the full data and the posterior based on single case deletion, as well as investigate its relationships to the conditional predictive ordinate. All the computations for the proposed diagnostic measures can be easily done using Markov chain Monte Carlo samples from the full data posterior distribution. We consider the Cox model with a gamma process prior on the cumulative baseline hazard. We also present a theoretical relationship between our case-deletion diagnostics and diagnostics based on Cox's partial likelihood. A simulated data example and two real data examples are given to demonstrate the methodology. [source]

    Statistical Inference in a Stochastic Epidemic SEIR Model with Control Intervention: Ebola as a Case Study

    BIOMETRICS, Issue 4 2006
    Phenyo E. Lekone
    Summary A stochastic discrete-time susceptible-exposed-infectious-recovered (SEIR) model for infectious diseases is developed with the aim of estimating parameters from daily incidence and mortality time series for an outbreak of Ebola in the Democratic Republic of Congo in 1995. The incidence time series exhibit many low integers as well as zero counts requiring an intrinsically stochastic modeling approach. In order to capture the stochastic nature of the transitions between the compartmental populations in such a model we specify appropriate conditional binomial distributions. In addition, a relatively simple temporally varying transmission rate function is introduced that allows for the effect of control interventions. We develop Markov chain Monte Carlo methods for inference that are used to explore the posterior distribution of the parameters. The algorithm is further extended to integrate numerically over state variables of the model, which are unobserved. This provides a realistic stochastic model that can be used by epidemiologists to study the dynamics of the disease and the effect of control interventions. [source]
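The chain-binomial transitions can be sketched directly: each compartment-to-compartment flow in a day is a binomial draw whose success probability comes from an exponential hazard, and a step change in the transmission rate stands in for the control intervention. All parameter values below are hypothetical, not the Ebola estimates:

```python
import math
import random

def seir_step(S, E, I, R, beta, sigma, gamma, N, rng):
    """One day of the chain-binomial SEIR model: each transition
    count is a binomial draw with an exponential hazard."""
    p_inf = 1.0 - math.exp(-beta * I / N)   # S -> E, depends on prevalence
    p_onset = 1.0 - math.exp(-sigma)        # E -> I
    p_recover = 1.0 - math.exp(-gamma)      # I -> R
    new_E = sum(rng.random() < p_inf for _ in range(S))
    new_I = sum(rng.random() < p_onset for _ in range(E))
    new_R = sum(rng.random() < p_recover for _ in range(I))
    return S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R

rng = random.Random(1995)
N = 1000
S, E, I, R = N - 5, 0, 5, 0
for day in range(120):
    beta = 0.35 if day < 30 else 0.05  # control intervention at day 30
    S, E, I, R = seir_step(S, E, I, R, beta, 1 / 5, 1 / 7, N, rng)
print(S + E + I + R)  # 1000: the compartments conserve the population
```

Because the daily counts are integers with real binomial noise, low and zero incidence days arise naturally, which is exactly why the paper argues for an intrinsically stochastic model.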

    Bayesian Inference for Stochastic Kinetic Models Using a Diffusion Approximation

    BIOMETRICS, Issue 3 2005
    A. Golightly
    Summary This article is concerned with the Bayesian estimation of stochastic rate constants in the context of dynamic models of intracellular processes. The underlying discrete stochastic kinetic model is replaced by a diffusion approximation (or stochastic differential equation approach) where a white noise term models stochastic behavior and the model is identified using equispaced time course data. The estimation framework involves the introduction of m - 1 latent data points between every pair of observations. MCMC methods are then used to sample the posterior distribution of the latent process and the model parameters. The methodology is applied to the estimation of parameters in a prokaryotic autoregulatory gene network. [source]
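The diffusion approximation is typically simulated by an Euler-Maruyama discretisation; introducing latent points between observations amounts to running this scheme on a finer time grid. A sketch for a one-dimensional birth-death diffusion with hypothetical rate constants (not the gene network of the paper):

```python
import random

def euler_maruyama(x0, drift, diff, dt, n_steps, rng):
    """Euler-Maruyama path of dX = drift(X) dt + diff(X) dW."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        x = x + drift(x) * dt + diff(x) * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

# Diffusion approximation of a birth-death process: production at
# rate k1, degradation at rate k2 * x; the mean reverts to k1 / k2.
k1, k2 = 10.0, 0.5
drift = lambda x: k1 - k2 * x
diff = lambda x: max(k1 + k2 * x, 0.0) ** 0.5  # guard the square root

rng = random.Random(3)
path = euler_maruyama(5.0, drift, diff, 0.01, 20000, rng)
tail = path[10000:]
print(sum(tail) / len(tail))  # long-run mean near k1 / k2 = 20
```

In the inference setting the same discretisation is run between observed time points, and the latent intermediate values are sampled by MCMC alongside the rate constants.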

    A Queueing Model for Chronic Recurrent Conditions under Panel Observation

    BIOMETRICS, Issue 1 2005
    Catherine M. Crespi
    Summary In many chronic conditions, subjects alternate between an active and an inactive state, and sojourns into the active state may involve multiple lesions, infections, or other recurrences with different times of onset and resolution. We present a biologically interpretable model of such chronic recurrent conditions based on a queueing process. The model has a birth-death process describing recurrences and a semi-Markov process describing the alternation between active and inactive states, and can be fit to panel data that provide only a binary assessment of the active or inactive state at a series of discrete time points using a hidden Markov approach. We accommodate individual heterogeneity and covariates using a random effects model, and simulate the posterior distribution of unknowns using a Markov chain Monte Carlo algorithm. Application to a clinical trial of genital herpes shows how the method can characterize the biology of the disease and estimate treatment efficacy. [source]
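Fitting such a model to binary panel data relies on hidden Markov machinery: the forward algorithm gives the likelihood of an observed active/inactive sequence while summing over the unobserved states. A minimal two-state sketch with hypothetical transition and emission probabilities (the paper's hidden process is richer than two states):

```python
import math

def forward_loglik(obs, trans, emit, init):
    """Log-likelihood of a binary panel sequence under a two-state
    hidden Markov model, via the scaled forward algorithm.

    trans[i][j] = P(state j at t+1 | state i at t)
    emit[i][o]  = P(observe o | state i); init[i] = P(state i at t=0)."""
    alpha = [init[s] * emit[s][obs[0]] for s in (0, 1)]
    c = sum(alpha)
    loglik = math.log(c)
    alpha = [a / c for a in alpha]
    for o in obs[1:]:
        alpha = [emit[s][o] * (alpha[0] * trans[0][s] + alpha[1] * trans[1][s])
                 for s in (0, 1)]
        c = sum(alpha)
        loglik += math.log(c)
        alpha = [a / c for a in alpha]
    return loglik

# hypothetical inactive (0) / active (1) process observed at visits
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.95, 0.05], [0.3, 0.7]]
init = [0.8, 0.2]
obs = [0, 0, 1, 1, 1, 0]
print(forward_loglik(obs, trans, emit, init) < 0.0)  # True
```

The scaling constants keep the recursion numerically stable for long panels, and their logs accumulate to the sequence log-likelihood.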

    Survival of Bowhead Whales, Balaena mysticetus, Estimated from 1981-1998 Photoidentification Data

    BIOMETRICS, Issue 4 2002
    Judith Zeh
    Summary. Annual survival probability of bowhead whales, Balaena mysticetus, was estimated using both Bayesian and maximum likelihood implementations of Cormack and Jolly-Seber (JS) models for capture-recapture estimation in open populations and reduced-parameter generalizations of these models. Aerial photographs of naturally marked bowheads collected between 1981 and 1998 provided the data. The marked whales first photographed in a particular year provided the initial 'capture' and 'release' of those marked whales and photographs in subsequent years the 'recaptures'. The Cormack model, often called the Cormack-Jolly-Seber (CJS) model, and the program MARK were used to identify the model with a single survival and time-varying capture probabilities as the most appropriate for these data. When survival was constrained to be one or less, the maximum likelihood estimate computed by MARK was one, invalidating confidence interval computations based on the asymptotic standard error or profile likelihood. A Bayesian Markov chain Monte Carlo (MCMC) implementation of the model was used to produce a posterior distribution for annual survival. The corresponding reduced-parameter JS model was also fit via MCMC because it is the more appropriate of the two models for these photoidentification data. Because the CJS model ignores much of the information on capture probabilities provided by the data, its results are less precise and more sensitive to the prior distributions used than results from the JS model. With priors for annual survival and capture probabilities uniform from 0 to 1, the posterior mean for bowhead survival rate from the JS model is 0.984, and 95% of the posterior probability lies between 0.948 and 1. This high estimated survival rate is consistent with other bowhead life history data. [source]

    Bayesian Nonparametric Modeling Using Mixtures of Triangular Distributions

    BIOMETRICS, Issue 2 2001
    F. Perron
    Summary. Nonparametric modeling is an indispensable tool in many applications and its formulation in a hierarchical Bayesian context, using the entire posterior distribution rather than particular expectations, increases its flexibility. In this article, the focus is on nonparametric estimation through a mixture of triangular distributions. The optimality of this methodology is addressed and bounds on the accuracy of this approximation are derived. Although our approach is more widely applicable, we focus for simplicity on estimation of a monotone nondecreasing regression on [0, 1] with additive error, effectively approximating the function of interest by a function having a piecewise linear derivative. Computationally accessible methods of estimation are described through an amalgamation of existing Markov chain Monte Carlo algorithms. Simulations and examples illustrate the approach. [source]
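As a concrete instance of the mixture being described, the sketch below evaluates a mixture of triangular densities with equally spaced modes on [0, 1]; the knots and weights are illustrative choices, not estimates from the article:

```python
import numpy as np

def tri_pdf(x, a, c, b):
    """Density of a triangular distribution on [a, b] with mode c."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    if c > a:                              # rising edge (skipped for left boundary kernel)
        up = (x >= a) & (x <= c)
        out[up] = 2 * (x[up] - a) / ((b - a) * (c - a))
    if b > c:                              # falling edge (skipped for right boundary kernel)
        down = (x > c) & (x <= b)
        out[down] = 2 * (b - x[down]) / ((b - a) * (b - c))
    return out

# Five equally spaced knots on [0, 1]; w are illustrative mixture weights.
knots = np.linspace(0, 1, 5)
h = knots[1] - knots[0]
w = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

def mixture_pdf(x):
    comps = np.array([tri_pdf(x, max(0.0, k - h), k, min(1.0, k + h))
                      for k in knots])
    return w @ comps                       # weighted sum of kernel densities
```

Each kernel integrates to one (the boundary kernels are half-triangles), so the mixture is itself a proper density whenever the weights sum to one.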

    Comparison of three expert elicitation methods for logistic regression on predicting the presence of the threatened brush-tailed rock-wallaby Petrogale penicillata

    ENVIRONMETRICS, Issue 4 2009
    Rebecca A. O'Leary
    Abstract Numerous expert elicitation methods have been suggested for generalised linear models (GLMs). This paper compares three relatively new approaches to eliciting expert knowledge in a form suitable for Bayesian logistic regression. These methods were trialled on two experts in order to model the habitat suitability of the threatened Australian brush-tailed rock-wallaby (Petrogale penicillata). The first elicitation approach is a geographically assisted indirect predictive method with a geographic information system (GIS) interface. The second approach is a predictive indirect method which uses an interactive graphical tool. The third method uses a questionnaire to elicit expert knowledge directly about the impact of a habitat variable on the response. Two variables (slope and aspect) are used to examine prior and posterior distributions of the three methods. The results indicate that there are some similarities and dissimilarities between the expert-informed priors of the two experts formulated from the different approaches. The choice of elicitation method depends on the statistical knowledge of the expert, their mapping skills, time constraints, accessibility to experts and funding available. This trial reveals that expert knowledge can be important when modelling rare event data, such as threatened species, because experts can provide additional information that may not be represented in the dataset. However, care must be taken with the way in which this information is elicited and formulated. Copyright © 2008 John Wiley & Sons, Ltd. [source]
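To illustrate the direct style of elicitation in the third method (the numbers here are hypothetical, not the paper's), an expert's stated presence probabilities at two values of a habitat variable can be inverted through the logit to centre a prior on the logistic regression coefficients:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

# Hypothetical elicited statements: probability of rock-wallaby presence
# at two values of the slope variable (degrees).
x1, p1 = 10.0, 0.8   # gentle slope: expert says presence is likely
x2, p2 = 30.0, 0.2   # steep slope: expert says presence is unlikely

beta1 = (logit(p2) - logit(p1)) / (x2 - x1)   # elicited slope coefficient
beta0 = logit(p1) - beta1 * x1                # elicited intercept
# A normal prior could now be centred at (beta0, beta1), with its spread
# chosen to reflect the expert's stated uncertainty.
```

This two-point inversion is a generic device for turning probability statements into coefficient-scale quantities; the paper's actual tools (GIS-assisted and interactive-graphical) elicit the same kind of information through richer interfaces.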

    Bayesian analysis of dynamic factor models: an application to air pollution and mortality in São Paulo, Brazil

    ENVIRONMETRICS, Issue 6 2008
    T. Sáfadi
    Abstract The Bayesian estimation of a dynamic factor model where the factors follow a multivariate autoregressive model is presented. We derive the posterior distributions for the parameters and the factors and use Monte Carlo methods to compute them. The model is applied to study the association between air pollution and mortality in the city of São Paulo, Brazil. Statistical analysis was performed through a Bayesian analysis of a dynamic factor model. The series considered were minimum temperature, relative humidity, the air pollutants PM10 and CO, mortality from circulatory disease, and mortality from respiratory disease. We found a strong association between the air pollutant PM10, humidity, and mortality from respiratory disease for the city of São Paulo. Copyright © 2007 John Wiley & Sons, Ltd. [source]
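The data-generating structure of such a dynamic factor model, a latent autoregressive factor loading onto several observed series, can be sketched by simulation; the dimensions, loadings, and AR coefficient below are illustrative, not the fitted São Paulo values:

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 4                         # time points and observed series
phi = 0.7                             # AR(1) coefficient of the latent factor
lam = np.array([1.0, 0.8, 0.6, 0.4])  # factor loadings, one per series

f = np.zeros(T)                       # latent common factor
for t in range(1, T):
    f[t] = phi * f[t - 1] + rng.normal()

# Observed series = loading * factor + idiosyncratic noise
Y = np.outer(f, lam) + rng.normal(scale=0.5, size=(T, p))
corr = np.corrcoef(Y[:, 0], Y[:, 1])[0, 1]  # comovement induced by the factor
```

Bayesian estimation reverses this generative direction: given Y, MCMC alternates between drawing the factor path f and the parameters (loadings, AR coefficients, noise variances) from their conditional posteriors.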

    Variable smoothing in Bayesian intrinsic autoregressions

    ENVIRONMETRICS, Issue 8 2007
    Mark J. Brewer
    Abstract We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]
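Under the intrinsic MRF the abstract adapts, each area's effect given its neighbours is normal with mean equal to the neighbour average and variance inversely proportional to the number of neighbours; the adapted scheme lets the variance parameter differ by area. A minimal sketch of that full conditional (names and values are illustrative):

```python
import numpy as np

def car_conditional(u, neighbours, i, sigma2_i):
    """Mean and variance of area i's effect given its neighbours under an
    intrinsic autoregression with an area-specific smoothing parameter."""
    nb = neighbours[i]
    mean = float(np.mean([u[j] for j in nb]))  # neighbour average
    var = sigma2_i / len(nb)                   # more neighbours -> tighter conditional
    return mean, var

# Three areas in a line: area 1 neighbours areas 0 and 2.
u = np.array([0.0, 2.0, 4.0])
neighbours = {1: [0, 2]}
m, v = car_conditional(u, neighbours, 1, sigma2_i=1.0)
```

Making sigma2_i small in smooth regions and large near discontinuities is what allows the amount of smoothing to vary across the map, with the empirical Bayes step supplying priors for those area-specific parameters.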

    Bayesian uncertainty assessment in multicompartment deterministic simulation models for environmental risk assessment

    ENVIRONMETRICS, Issue 4 2003
    Samantha C. Bates
    Abstract We use a special case of Bayesian melding to make inference from deterministic models while accounting for uncertainty in the inputs to the model. The method uses all available information, based on both data and expert knowledge, and extends current methods of 'uncertainty analysis' by updating models using available data. We extend the methodology for use with sequential multicompartment models. We present an application of these methods to deterministic models for concentration of polychlorinated biphenyl (PCB) in soil and vegetables. The results are posterior distributions of concentration in soil and vegetables which account for all available evidence and uncertainty. Model uncertainty is not considered. Copyright © 2003 John Wiley & Sons, Ltd. [source]
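A minimal sketch of the melding idea for a two-compartment chain (all distributions and constants are invented for illustration): sample the uncertain inputs from their priors, push them through the deterministic soil and vegetable models, then reweight by the likelihood of an observed soil measurement so the output distribution reflects both expert priors and data:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 20_000

# Priors on the uncertain model inputs (purely illustrative values)
deposition = rng.lognormal(mean=0.0, sigma=0.5, size=N)  # PCB reaching soil
uptake = rng.uniform(0.01, 0.1, size=N)                  # soil-to-plant factor

# Sequential deterministic compartments (toy stand-ins for the real models)
soil = 2.0 * deposition            # soil concentration
veg = uptake * soil                # vegetable concentration

# Update with an observed soil concentration via importance weights
obs, sd = 2.0, 0.5
w = np.exp(-0.5 * ((soil - obs) / sd) ** 2)
w = w / w.sum()

prior_veg_mean = veg.mean()        # propagation of priors alone
post_veg_mean = np.sum(w * veg)    # melded: priors updated by the soil datum
```

The weighted sample is a posterior for both compartments at once, which is how data observed in one compartment (soil) sharpens the distribution in the downstream one (vegetables).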