Prior Information (prior + information)
Selected Abstracts

CONFIDENCE INTERVALS UTILIZING PRIOR INFORMATION IN THE BEHRENS–FISHER PROBLEM
AUSTRALIAN & NEW ZEALAND JOURNAL OF STATISTICS, Issue 4 2008, Paul Kabaila

Summary: Consider two independent random samples of size f + 1, one from an N(μ₁, σ₁²) distribution and the other from an N(μ₂, σ₂²) distribution, where σ₁²/σ₂² ∈ (0, ∞). The Welch 'approximate degrees of freedom' ('approximate t-solution') confidence interval for μ₁ − μ₂ is commonly used when it cannot be guaranteed that σ₁²/σ₂² = 1. Kabaila (2005, Comm. Statist. Theory and Methods, 34, 291–302) multiplied the half-width of this interval by a positive constant so that the resulting interval, denoted by J0, has minimum coverage probability 1 − α. Now suppose that we have uncertain prior information that σ₁²/σ₂² = 1. We consider a broad class of confidence intervals for μ₁ − μ₂ with minimum coverage probability 1 − α. This class includes the interval J0, which we use as the standard against which other members of the class will be judged. A confidence interval J utilizes the prior information substantially better than J0 if (expected length of J)/(expected length of J0) is (a) substantially less than 1 (less than 0.96, say) for σ₁²/σ₂² = 1, and (b) not too much larger than 1 for all other values of σ₁²/σ₂². For a given f, does there exist a confidence interval that satisfies these conditions? We focus on the question of whether condition (a) can be satisfied. For each given f, we compute a lower bound to the minimum over the class of (expected length of J)/(expected length of J0) when σ₁²/σ₂² = 1. For 1 − α = 0.95, this lower bound is not substantially less than 1. Thus, there does not exist any confidence interval belonging to the class that utilizes the prior information substantially better than J0. [source]

Bayesian Estimation of Species Richness from Quadrat Sampling Data in the Presence of Prior Information
BIOMETRICS, Issue 3 2006, Jérôme A. Dupuis

Summary: We consider the problem of estimating the number of species of an animal community.
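A minimal sketch of the Welch 'approximate degrees of freedom' interval from the Behrens–Fisher abstract above (the standard interval only; Kabaila's J0 further scales the half-width by a constant computed in the paper, which is not reproduced here):

```python
import math
from scipy import stats

def welch_interval(x, y, alpha=0.05):
    """Welch 'approximate degrees of freedom' CI for mu1 - mu2."""
    n1, n2 = len(x), len(y)
    m1, m2 = sum(x) / n1, sum(y) / n2
    v1 = sum((xi - m1) ** 2 for xi in x) / (n1 - 1)
    v2 = sum((yi - m2) ** 2 for yi in y) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    # Welch-Satterthwaite approximate degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    half = stats.t.ppf(1 - alpha / 2, df) * math.sqrt(se2)
    return (m1 - m2) - half, (m1 - m2) + half
```

Kabaila's J0 is obtained by multiplying `half` by a positive constant chosen so that the minimum coverage probability is exactly 1 − α.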
It is assumed that it is possible to draw up a list of species liable to be present in this community. Data are collected from quadrat sampling. Models considered in this article separate the assumptions related to the experimental protocol and those related to the spatial distribution of species in the quadrats. Our parameterization enables us to incorporate prior information on the presence, detectability, and spatial density of species. Moreover, we elaborate procedures to build the prior distributions on these parameters from information furnished by external data. A simulation study is carried out to examine the influence of different priors on the performance of our estimator. We illustrate our approach by estimating the number of nesting bird species in a forest. [source]

Environmental power analysis – a new perspective
ENVIRONMETRICS, Issue 5 2001, David R. Fox

Abstract: Power analysis and sample-size determination are related tools that have recently gained popularity in the environmental sciences. Their indiscriminate application, however, can lead to wildly misleading results. This is particularly true in environmental monitoring and assessment, where the quality and nature of the data are such that the implicit assumptions underpinning power and sample-size calculations are difficult to justify. When the assumptions are reasonably met, these statistical techniques provide researchers with an important capability for allocating scarce and expensive resources to detect putative impact or change. Conventional analyses are predicated on a general linear model and normal distribution theory, with statistical tests of environmental impact couched in terms of changes in a population mean. While these are 'optimal' statistical tests (uniformly most powerful), they nevertheless pose considerable practical difficulties for the researcher.
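The conventional mean-based calculation that the power-analysis abstract above critiques can be sketched as follows (a textbook two-sample normal-approximation formula, not Fox's percentile-based reformulation):

```python
import math
from scipy.stats import norm

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Conventional normal-theory per-group sample size to detect a shift
    of size delta in a population mean (two-sided two-sample z-test).
    This is the standard calculation, not the Bayesian percentile-based
    approach proposed in the abstract."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)
```

The formula makes the implicit assumptions the abstract warns about: normality, known and equal variances, and a test framed purely in terms of a mean shift.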
Compounding this difficulty is the subsequent analysis of the data and the imposition of a decision framework that commences with an assumption of 'no effect'. This assumption is only discarded when the sample data indicate demonstrable evidence to the contrary. The alternative ('green') view is that any anthropogenic activity has an impact on the environment, and therefore a more realistic initial position is to assume that the environment is already impacted. In this article we examine these issues and provide a re-formulation of conventional mean-based hypotheses in terms of population percentiles. Prior information or belief concerning the probability of exceeding a criterion is incorporated into the power analysis using a Bayesian approach. Finally, a new statistic is introduced which attempts to balance the overall power regardless of the decision framework adopted. Copyright © 2001 John Wiley & Sons, Ltd. [source]

A Probabilistic Framework for Bayesian Adaptive Forecasting of Project Progress
COMPUTER-AIDED CIVIL AND INFRASTRUCTURE ENGINEERING, Issue 3 2007, Paolo Gardoni

An adaptive Bayesian updating method is used to assess the unknown model parameters based on recorded data and pertinent prior information. Recorded data can include equality, upper bound, and lower bound data. The proposed approach properly accounts for all the prevailing uncertainties, including model errors arising from an inaccurate model form or missing variables, measurement errors, statistical uncertainty, and volitional uncertainty. As an illustration of the proposed approach, the project progress and final time-to-completion of an example project are forecasted. For this illustration, the construction of civilian nuclear power plants in the United States is considered. This application considers two cases: (1) no information is available prior to observing the actual progress data of a specified plant, and (2) the construction progress of eight other nuclear power plants is available.
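The adaptive Bayesian updating in the forecasting abstract above can be illustrated under a deliberately simplified assumption: a conjugate normal prior for a progress-rate parameter (say, built from the eight other plants) and a known observation variance. The paper's full model, with bound data and model error, is richer than this sketch:

```python
def update_rate(prior_mean, prior_var, observations, obs_var):
    """Conjugate normal update of an uncertain progress-rate parameter.
    prior_mean/prior_var: informative prior (e.g. from past projects);
    observations: recorded progress-rate data from the current project;
    obs_var: assumed known observation variance (a simplification)."""
    n = len(observations)
    xbar = sum(observations) / n
    post_var = 1.0 / (1.0 / prior_var + n / obs_var)
    post_mean = post_var * (prior_mean / prior_var + n * xbar / obs_var)
    return post_mean, post_var
```

As the abstract notes, with few records the prior dominates the posterior; once many records accumulate, the data term swamps the prior and the forecast is essentially unchanged by it.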
The example shows that an informative prior is important to make accurate predictions when only a few records are available. This is also the time when forecasts are most valuable to the project manager. Having or not having prior information does not have any practical effect on the forecast when progress on a significant portion of the project has been recorded. [source]

Sensitivity to communicative relevance tells young children what to imitate
DEVELOPMENTAL SCIENCE, Issue 6 2009, Victoria Southgate

How do children decide which elements of an action demonstration are important to reproduce in the context of an imitation game? We tested whether selective imitation of a demonstrator's actions may be based on the same search for relevance that drives adult interpretation of ostensive communication. Three groups of 18-month-old infants were shown a toy animal either hopping or sliding (action style) into a toy house (action outcome), but the communicative relevance of the action style differed depending on the group. For the no prior information group, all the information in the demonstration was new and so equally relevant. However, for infants in the ostensive prior information group, the potential action outcome was already communicated to the infant prior to the main demonstration, rendering the action style more relevant. Infants in the ostensive prior information group imitated the action style significantly more than infants in the no prior information group, suggesting that the relevance manipulation modulated their interpretation of the action demonstration. A further condition (non-ostensive prior information) confirmed that this sensitivity to new information is only present when the 'old' information had been communicated, and not when infants discovered this information for themselves.
These results indicate that, like adults, human infants expect communication to contain relevant content, and imitate action elements that, relative to their current knowledge state or to the common ground with the demonstrator, are identified as most relevant. [source]

Variable smoothing in Bayesian intrinsic autoregressions
ENVIRONMETRICS, Issue 8 2007, Mark J. Brewer

Abstract: We introduce an adapted form of the Markov random field (MRF) for Bayesian spatial smoothing with small-area data. This new scheme allows the amount of smoothing to vary in different parts of a map by employing area-specific smoothing parameters, related to the variance of the MRF. We take an empirical Bayes approach, using variance information from a standard MRF analysis to provide prior information for the smoothing parameters of the adapted MRF. The scheme is shown to produce proper posterior distributions for a broad class of models. We test our method on both simulated and real data sets, and for the simulated data sets, the new scheme is found to improve modelling of both slowly-varying levels of smoothness and discontinuities in the response surface. Copyright © 2007 John Wiley & Sons, Ltd. [source]

Adaptive group detection for DS/CDMA systems over frequency-selective fading channels
EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 3 2003, Stefano Buzzi

In this paper we consider the problem of group detection for asynchronous Direct-Sequence Code Division Multiple Access (DS/CDMA) systems operating over frequency-selective fading channels. A two-stage near-far resistant detection structure is proposed. The first stage is a linear filter, aimed at suppressing the effect of the unwanted user signals, while the second stage is a non-linear block, implementing a maximum likelihood detection rule on the set of desired user signals.
As to the linear stage, we consider both the Zero-Forcing (ZF) and the Minimum Mean Square Error (MMSE) approaches; in particular, based on the amount of prior knowledge on the interference parameters which is available to the receiver and on the affordable computational complexity, we come up with several receiving structures, which trade system performance for complexity and needed channel state information. We also present adaptive implementations of these receivers, wherein only the parameters from the users to be decoded are assumed to be known. The case in which the channel fading coefficients of the users to be decoded are not known a priori is also considered. In particular, based on the transmission of pilot signals, we adopt a least-squares criterion in order to obtain estimates of these coefficients. The result is thus a fully adaptive structure, which can be implemented with no prior information on the interfering signals and on the channel state. As to the performance assessment, the new receivers are shown to be near-far resistant, and simulation results confirm their superiority with respect to previously derived detection structures. Copyright © 2003 AEI. [source]

Source density-driven independent component analysis approach for fMRI data
HUMAN BRAIN MAPPING, Issue 3 2005, Baoming Hong

Abstract: Independent component analysis (ICA) has become a popular tool for functional magnetic resonance imaging (fMRI) data analysis. Conventional ICA algorithms including Infomax and FAST-ICA algorithms employ the underlying assumption that data can be decomposed into statistically independent sources and implicitly model the probability density functions of the underlying sources as highly kurtotic or symmetric. When source data violate these assumptions (e.g., are asymmetric), however, conventional ICA methods might not work well. As a result, modeling of the underlying sources becomes an important issue for ICA applications.
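The kernel-density step of the SD-ICA procedure described below the ICA abstract can be sketched with a plain Gaussian kernel estimator; the bandwidth value here is an assumption for illustration, not taken from the paper:

```python
import numpy as np

def kde_gauss(samples, grid, bandwidth=0.3):
    """Gaussian kernel density estimate of an extracted source's density,
    of the kind used to refit each source's nonlinearity in an
    SD-ICA-style scheme. samples: 1-D array of source values;
    grid: 1-D array of evaluation points."""
    z = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return k.sum(axis=1) / (samples.size * bandwidth)
```

An asymmetric (skewed) estimated density is exactly the case where the fixed kurtotic or symmetric source models of Infomax and FastICA are mismatched.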
We propose a source density-driven ICA (SD-ICA) method. The SD-ICA algorithm involves a two-step procedure. In the first step, it uses a conventional ICA algorithm to obtain initial independent source estimates; then, using a kernel estimator technique, the source density is calculated. A refitted nonlinear function is used for each source in the second step. We show that the proposed SD-ICA algorithm provides flexible source adaptivity and improves ICA performance. When SD-ICA is applied to fMRI signals, the physiologically meaningful components (e.g., activated regions) are typically governed by a small percentage of the whole-brain map in a task-related activation. Extra prior information (using a skewed-weighted distribution transformation) is thus additionally applied to the algorithm for the regions of interest in the data (e.g., visually activated regions) to emphasize the importance of the tail part of the distribution. Our experimental results show that the source density-driven ICA method can improve performance further by incorporating some a priori information into the ICA analysis of fMRI signals. Hum Brain Mapping, 2005. © 2005 Wiley-Liss, Inc. [source]

A split–merge-based region-growing method for fMRI activation detection
HUMAN BRAIN MAPPING, Issue 4 2004, Yingli Lu

Abstract: We introduce a hybrid method for functional magnetic resonance imaging (fMRI) activation detection based on the well-developed split–merge and region-growing techniques. The proposed method conjoins both the spatio-temporal priors inherent in split–merge and the prior information afforded by the hypothesis-led component of region selection. Compared to fuzzy c-means clustering analysis, this method avoids making assumptions about the number of clusters, and the computational complexity is reduced markedly.
We evaluated the effectiveness of the proposed method in comparison with the general linear model and the fuzzy c-means clustering method conducted on simulated and in vivo datasets. Experimental results show that our method successfully detected expected activated regions and has advantages over the other two methods. Hum. Brain Mapping 22:271–279, 2004. © 2004 Wiley-Liss, Inc. [source]

Geotechnical parameter estimation in tunnelling using relative convergence measurement
INTERNATIONAL JOURNAL FOR NUMERICAL AND ANALYTICAL METHODS IN GEOMECHANICS, Issue 2 2006, Kook-Hwan Cho

Abstract: Accurate estimation of geotechnical parameters is an important and difficult task in tunnel design and construction. Optimum evaluation of the geotechnical parameters has been carried out by the back-analysis method based on estimated absolute convergence data. In this study, a back-analysis technique using measured relative convergence in tunnelling is proposed. The extended Bayesian method (EBM), which combines the prior information with the field measurement data, is adopted and combined with the 3-dimensional finite element analysis to predict ground motion. By directly using the relative convergence as observation data in the EBM, we can exclude errors that arise in the estimation of absolute displacement from measured convergence, and can evaluate the geotechnical parameters with sufficient reliability. The proposed back-analysis technique is applied and validated by using the measured data from two tunnel sites in Korea. Copyright © 2005 John Wiley & Sons, Ltd. [source]

Population-based detection of Lynch syndrome in young colorectal cancer patients using microsatellite instability as the initial test
INTERNATIONAL JOURNAL OF CANCER, Issue 5 2009, Lyn Schofield

Abstract: Approximately 1–2% of colorectal cancers (CRC) arise because of germline mutations in DNA mismatch repair genes, referred to as Lynch syndrome.
These tumours show microsatellite instability (MSI) and loss of expression of mismatch repair proteins. Pre-symptomatic identification of mutation carriers has been demonstrated to improve survival; however, there is concern that many are not being identified using current practices. We evaluated population-based MSI screening of CRC in young patients as a means of ascertaining mutation carriers. CRC diagnosed in patients aged <60 years were identified from pathology records. No prior information was available on family history of cancer. PCR techniques were used to determine MSI in the BAT-26 mononucleotide repeat and mutation in the BRAF oncogene. Loss of MLH1, MSH2, MSH6 and PMS2 protein expression was evaluated in MSI+ tumours by immunohistochemistry. MSI+ tumours were found in 105/1,344 (7.8%) patients, of which 7 were excluded as possible Lynch syndrome because of BRAF mutation. Of the 98 "red flag" cases that were followed up, 25 were already known as mutation carriers or members of mutation carrier families. Germline test results were obtained for 35 patients and revealed that 22 showed no apparent mutation, 11 showed likely pathogenic mutations and 2 had unclassified variants. The proportion of MSI+ cases in different age groups that were estimated to be mutation carriers was 89% (<30 years), 83% (30–39), 68% (40–49) and 17% (50–59). We recommend MSI as the initial test for population-based screening of Lynch syndrome in younger CRC patients, regardless of family history. © 2008 Wiley-Liss, Inc. [source]

Model-based shape from shading for microelectronics applications
INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Issue 2 2006, A. Nissenboim

Abstract: Model-based shape from shading (SFS) is a promising paradigm introduced by Atick et al. [Neural Comput 8 (1996), 1321–1340] in 1996 for solving inverse problems when we happen to have a lot of prior information on the depth profiles to be recovered.
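The Levenberg–Marquardt parameter-estimation step of the model-based SFS approach above can be sketched with `scipy.optimize.least_squares`; the `profile` model here is hypothetical, a stand-in for the paper's low-dimensional wafer parameterization:

```python
import numpy as np
from scipy.optimize import least_squares

def profile(p, x):
    """Hypothetical low-dimensional depth-profile model (depth, centre,
    width), standing in for the paper's wafer-surface parameterization."""
    depth, centre, width = p
    return depth * np.exp(-((x - centre) / width) ** 2)

x = np.linspace(-1.0, 1.0, 50)
true_p = np.array([0.8, 0.1, 0.3])
rng = np.random.default_rng(0)
y = profile(true_p, x) + 0.005 * rng.standard_normal(x.size)

# Levenberg-Marquardt refinement from an initial guess (fixed here;
# the paper obtains its initial guess from a wavelet-based estimate)
fit = least_squares(lambda p: profile(p, x) - y, x0=[1.0, 0.0, 0.5], method="lm")
```

The LM method blends gradient descent and Gauss–Newton steps, which is what gives the stable numerical convergence the abstract refers to on this mildly ill-posed fitting problem.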
In the present work we adopt this approach to address the problem of recovering wafer profiles from images taken using a scanning electron microscope (SEM). This problem arises naturally in the microelectronics inspection industry. A low-dimensional model, based on our prior knowledge of the types of depth profiles of wafer surfaces, has been developed, and based on it the SFS problem becomes an optimal parameter estimation problem. Wavelet techniques were then employed to calculate a good initial guess to be used in a minimization process that yields the desired profile parametrization. A Levenberg–Marquardt (LM) optimization procedure has been adopted to address the ill-posedness of the SFS problem and to ensure stable numerical convergence. The proposed algorithm has been tested on synthetic images, using both Lambertian and SEM imaging models. © 2006 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 16, 65–76, 2006 [source]

Bayesian Hypothesis Testing: a Reference Approach
INTERNATIONAL STATISTICAL REVIEW, Issue 3 2002, José M. Bernardo

Summary: For any probability model M = {p(x|θ, λ), θ ∈ Θ, λ ∈ Λ} assumed to describe the probabilistic behaviour of data x ∈ X, it is argued that testing whether or not the available data are compatible with the hypothesis H0 = {θ = θ0} is best considered as a formal decision problem on whether to use (a0), or not to use (a1), the simpler probability model (or null model) M0 = {p(x|θ0, λ), λ ∈ Λ}, where the loss difference L(a0, θ, λ) − L(a1, θ, λ) is proportional to the amount of information δ(θ0, θ, λ) which would be lost if the simplified model M0 were used as a proxy for the assumed model M. For any prior distribution π(θ, λ), the appropriate normative solution is obtained by rejecting the null model M0 whenever the corresponding posterior expectation ∫∫ δ(θ0, θ, λ) π(θ, λ|x) dθ dλ is sufficiently large. Specification of a subjective prior is always difficult, and often polemical, in scientific communication.
Information theory may be used to specify a prior, the reference prior, which only depends on the assumed model M, and mathematically describes a situation where no prior information is available about the quantity of interest. The reference posterior expectation, d(θ0, x) = ∫∫ δ(θ0, θ, λ) π(θ, λ|x) dθ dλ, of the amount of information δ(θ0, θ, λ) which could be lost if the null model were used, provides an attractive nonnegative test function, the intrinsic statistic, which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and it is easily calibrated (for any sample size and any dimensionality) in terms of some average log-likelihood ratios. The corresponding Bayes decision rule, the Bayesian reference criterion (BRC), indicates that the null model M0 should only be rejected if the posterior expected loss of information from using the simplified model M0 is too large or, equivalently, if the associated expected average log-likelihood ratio is large enough. The BRC criterion provides a general reference Bayesian solution to hypothesis testing which does not assume a probability mass concentrated on M0 and, hence, it is immune to Lindley's paradox. The theory is illustrated within the context of multivariate normal data, where it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate frequentist hypothesis testing.

Résumé: For a probabilistic model M = {p(x|θ, λ), θ ∈ Θ, λ ∈ Λ} intended to describe the probabilistic behaviour of data x ∈ X, we argue that testing whether the data are compatible with a hypothesis H0 = {θ = θ0} must be considered as a decision problem concerning the use of the model M0 = {p(x|θ0, λ), λ ∈ Λ}, with a loss function that measures the amount of information that may be lost if the simplified model M0 is used as an approximation to the true model M.
The average cost, computed with respect to a suitable reference prior, provides a pertinent test statistic, the intrinsic statistic d(θ0, x), which is invariant under reparametrization. The intrinsic statistic d(θ0, x) is measured in units of information, and its calibration, which is independent of the sample size and of the dimension of the parameter, does not depend on its sampling distribution. The corresponding Bayes rule, the Bayesian reference criterion (BRC), indicates that H0 should only be rejected if the posterior expected cost of the information lost by using the simplified model M0 is too large. The BRC criterion provides a general and objective Bayesian solution for precise hypothesis testing which does not require a mass of probability concentrated on M0. Consequently, it escapes Lindley's paradox. This theory is illustrated in the context of multivariate normal variables, and it is shown to avoid Rao's paradox on the inconsistency between univariate and multivariate tests. [source]

How Much New Information Is There in Earnings?
JOURNAL OF ACCOUNTING RESEARCH, Issue 5 2008, Ray Ball

ABSTRACT: We quantify the relative importance of earnings announcements in providing new information to the share market, using the R² in a regression of securities' calendar-year returns on their four quarterly earnings-announcement "window" returns. The R², which averages approximately 5% to 9%, measures the proportion of total information incorporated in share prices annually that is associated with earnings announcements. We conclude that the average quarterly announcement is associated with approximately 1% to 2% of total annual information, thus providing a modest but not overwhelming amount of incremental information to the market. The results are consistent with the view that the primary economic role of reported earnings is not to provide timely new information to the share market.
By inference, that role lies elsewhere, for example, in settling debt and compensation contracts and in disciplining prior information, including more timely managerial disclosures of information originating in the firm's accounting system. The relative informativeness of earnings announcements is a concave function of size. Increased information during earnings-announcement windows in recent years is due only in part to increased concurrent releases of management forecasts. There is no evidence of abnormal information arrival in the weeks surrounding earnings announcements. Substantial information is released in management forecasts and in analyst forecast revisions prior (but not subsequent) to earnings announcements. [source]

Habitat specialization and adaptive phenotypic divergence of anuran populations
JOURNAL OF EVOLUTIONARY BIOLOGY, Issue 3 2005, J. Van Buskirk

Abstract: We tested for adaptive population structure in the frog Rana temporaria by rearing tadpoles from 23 populations in a common garden experiment, with and without larval dragonfly predators. The goal was to compare tadpole phenotypes with the habitats of their source ponds. The choice of traits and habitat variables was guided by prior information about phenotypic function. There were large differences among populations in life history, behaviour, morphological shape, and the predator-induced plasticities in most of these. Body size and behaviour were correlated with predation risk in the source pond, in agreement with adaptive population divergence. Tadpoles from large sunny ponds were morphologically distinct from those inhabiting small woodland ponds, although here an adaptive explanation was unclear. There was no evidence that plasticity evolves in populations exposed to more variable environments. Much among-population variation in phenotype and plasticity was not associated with habitat, perhaps reflecting rapid changes in wetland habitats.
[source]

DECISION SUPPORT FOR ALLOCATION OF WATERSHED POLLUTION LOAD USING GREY FUZZY MULTIOBJECTIVE PROGRAMMING
JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 3 2006, Ho-Wen Chen

ABSTRACT: This paper uses grey fuzzy multiobjective programming to aid decision making for the allocation of waste load in a river system under versatile uncertainties and risks. It differs from previous studies by considering a multicriteria objective function with combined grey and fuzzy messages under a cost–benefit analysis framework. Such analysis technically integrates the prior information of water quality models, water quality standards, wastewater treatment costs, and potential benefits gained via in-stream water quality improvement. While fuzzy sets are characterized based on semantic and cognitive vagueness in decision making, grey numbers can delineate measurement errors in data collection. By employing three distinct set-theoretic fuzzy operators, the synergy of grey and fuzzy implications may smoothly characterize the prescribed management complexity. With the aid of a genetic algorithm in the solution procedure, the modeling outputs contribute to the development of an effective waste load allocation and reduction scheme for tributaries in this subwatershed located in the lower Tseng-Wen River Basin, South Taiwan. Research findings indicate that the inclusion of three fuzzy set-theoretic operators in decision analysis may delineate different tradeoffs in decision making due to varying changes, transformations, and movements of waste load in association with the land use pattern within the watershed. [source]

Models for potentially biased evidence in meta-analysis using empirically based priors
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2009, N. J.
Welton

Summary: We present models for the combined analysis of evidence from randomized controlled trials categorized as being at either low or high risk of bias due to a flaw in their conduct. We formulate a bias model that incorporates between-study and between-meta-analysis heterogeneity in bias, and uncertainty in overall mean bias. We obtain algebraic expressions for the posterior distribution of the bias-adjusted treatment effect, which provide limiting values for the information that can be obtained from studies at high risk of bias. The parameters of the bias model can be estimated from collections of previously published meta-analyses. We explore alternative models for such data, and alternative methods for introducing prior information on the bias parameters into a new meta-analysis. Results from an illustrative example show that the bias-adjusted treatment effect estimates are sensitive to the way in which the meta-epidemiological data are modelled, but that using point estimates for bias parameters provides an adequate approximation to using a full joint prior distribution. A sensitivity analysis shows that the gain in precision from including studies at high risk of bias is likely to be low, however numerous or large their size, and that little is gained by incorporating such studies unless the information from studies at low risk of bias is limited. We discuss approaches that might increase the value of including studies at high risk of bias, and the acceptability of the methods in the evaluation of health care interventions. [source]

Using historical data for Bayesian sample size determination
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES A (STATISTICS IN SOCIETY), Issue 1 2007, Fulvio De Santis

Summary: We consider the sample size determination (SSD) problem, which is a basic yet extremely important aspect of experimental design.
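The power priors that this sample-size abstract builds on can be sketched in the simplest conjugate setting, a Beta prior for binomial data (an illustrative assumption; the paper's medical examples are not reproduced here):

```python
def power_prior_beta(k0, n0, a0, a=1.0, b=1.0):
    """Power prior for a binomial proportion: historical data (k0 successes
    out of n0 trials) enter a Beta(a, b) prior discounted by a0 in [0, 1].
    a0 = 0 ignores the historical study entirely; a0 = 1 pools it fully."""
    return a + a0 * k0, b + a0 * (n0 - k0)

def beta_posterior(k, n, prior_ab):
    """Combine the power prior with the upcoming experiment's data."""
    pa, pb = prior_ab
    return pa + k, pb + (n - k)
```

Intermediate values of `a0` implement exactly the "discounting of prior information" the abstract describes when historical data and new observations are not homogeneous.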
Specifically, we deal with the Bayesian approach to SSD, which gives researchers the possibility of taking into account pre-experimental information and uncertainty on unknown parameters. At the design stage, this fact offers the advantage of removing or mitigating typical drawbacks of classical methods, which might lead to serious miscalculation of the sample size. In this context, the leading idea is to choose the minimal sample size that guarantees a probabilistic control on the performance of quantities that are derived from the posterior distribution and used for inference on parameters of interest. We are concerned with the use of historical data, i.e. observations from previous similar studies, for SSD. We illustrate how the class of power priors can be fruitfully employed to deal with lack of homogeneity between historical data and observations of the upcoming experiment. This problem, in fact, determines the necessity of discounting prior information and of evaluating the effect of heterogeneity on the optimal sample size. Some of the most popular Bayesian SSD methods are reviewed and their use, in concert with power priors, is illustrated in several medical experimental contexts. [source]

Optimal predictive sample size for case–control studies
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 3 2004, Fulvio De Santis

Summary: The identification of factors that increase the chances of a certain disease is one of the classical and central issues in epidemiology. In this context, a typical measure of the association between a disease and risk factor is the odds ratio. We deal with design problems that arise for Bayesian inference on the odds ratio in the analysis of case–control studies. We consider sample size determination and allocation criteria for both interval estimation and hypothesis testing.
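For the case–control abstract above, a minimal sketch of Bayesian inference on the odds ratio, using the common normal approximation on the log odds-ratio scale with a prior elicited from a previous study (the normal approximation is our simplification, not the paper's exact predictive treatment):

```python
import math

def log_or_posterior(a, b, c, d, prior_mean, prior_var):
    """Approximate posterior for the log odds ratio from a 2x2
    case-control table: a, b = exposed/unexposed cases;
    c, d = exposed/unexposed controls. The prior (e.g. elicited from a
    previous study) is combined with the usual normal approximation to
    the likelihood by precision weighting."""
    est = math.log((a * d) / (b * c))          # sample log odds ratio
    var = 1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d  # Woolf variance estimate
    post_var = 1.0 / (1.0 / prior_var + 1.0 / var)
    post_mean = post_var * (prior_mean / prior_var + est / var)
    return post_mean, post_var
```

Since the likelihood variance shrinks with the cell counts, this also shows why sample size and case/control allocation jointly control the posterior precision that the SSD criteria target.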
These criteria are then employed to determine the sample size and the proportions of units to be assigned to cases and controls for planning a study on the association between the incidence of non-Hodgkin's lymphoma and exposure to pesticides, eliciting prior information from a previous study. [source]

Estimation of origin–destination trip rates in Leicester
JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 4 2001, Martin L. Hazelton

The road system in region RA of Leicester has vehicle detectors embedded in many of the network's road links. Vehicle counts from these detectors can provide transportation researchers with a rich source of data. However, for many projects it is necessary for researchers to have an estimate of origin-to-destination vehicle flow rates. Obtaining such estimates from data observed on individual road links is a non-trivial statistical problem, made more difficult in the present context by non-negligible measurement errors in the vehicle counts collected. The paper uses road link traffic count data from April 1994 to estimate the origin–destination flow rates for region RA. A model for the error-prone traffic counts is developed, but the resulting likelihood is not available in closed form. Nevertheless, it can be smoothly approximated by using Monte Carlo integration. The approximate likelihood is combined with prior information from a May 1991 survey in a Bayesian framework. The posterior is explored using the Hastings–Metropolis algorithm, since its normalizing constant is not available. Preliminary findings suggest that the data are overdispersed according to the original model. Results for a revised model indicate that a degree of overdispersion exists, but that the estimates of origin–destination flow rates are quite insensitive to the change in model specification.
[source] Bayesian incidence analysis of animal tumorigenicity data JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES C (APPLIED STATISTICS), Issue 2 2001 D. B. Dunson Statistical inference about tumorigenesis should focus on the tumour incidence rate. Unfortunately, in most animal carcinogenicity experiments, tumours are not observable in live animals and censoring of the tumour onset times is informative. In this paper, we propose a Bayesian method for analysing data from such studies. Our approach focuses on the incidence of tumours and accommodates occult tumours and censored onset times without restricting tumour lethality, relying on cause-of-death data, or requiring interim sacrifices. We represent the underlying state of nature by a multistate stochastic process and assume general probit models for the time-specific transition rates. These models allow the incorporation of covariates, historical control data and subjective prior information. The inherent flexibility of this approach facilitates the interpretation of results, particularly when the sample size is small or the data are sparse. We use a Gibbs sampler to estimate the relevant posterior distributions. The methods proposed are applied to data from a US National Toxicology Program carcinogenicity study. [source] Bayesian selection of threshold autoregressive models JOURNAL OF TIME SERIES ANALYSIS, Issue 4 2004 Edward P. Campbell Abstract. An approach to Bayesian model selection in self-exciting threshold autoregressive (SETAR) models is developed within a reversible jump Markov chain Monte Carlo (RJMCMC) framework. Our approach is examined via a simulation study and analysis of the Zurich monthly sunspot series. We find that the method converges rapidly to the optimal model, whilst efficiently exploring suboptimal models to quantify model uncertainty.
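For readers unfamiliar with the SETAR class that the RJMCMC scheme selects over: a two-regime SETAR(2; 1, 1) model switches its AR(1) coefficient according to whether a lagged value of the series exceeds a threshold. The simulation below is a generic illustration; the coefficients, threshold and delay are arbitrary choices, not those of the sunspot analysis.

```python
import random

def simulate_setar(n, r=0.0, d=1, phi_low=0.6, phi_high=-0.5,
                   sigma=1.0, seed=1):
    """Simulate a two-regime SETAR(2; 1, 1) series: the AR coefficient
    is phi_low when the lag-d value is at or below threshold r, and
    phi_high otherwise."""
    rng = random.Random(seed)
    x = [0.0] * max(d, 1)                    # initial values
    for t in range(len(x), n):
        phi = phi_low if x[t - d] <= r else phi_high
        x.append(phi * x[t - 1] + rng.gauss(0.0, sigma))
    return x

series = simulate_setar(500)
low_regime = sum(1 for v in series if v <= 0.0)  # visits to regime 1
```

A Bayesian selection procedure such as the one in the abstract would treat the number of regimes, the threshold r and the AR orders as unknowns and let RJMCMC jump between these model configurations.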
A key finding is that the parsimony of the model selected is influenced by the specification of prior information, which can be examined and subjected to criticism. This is a strength of the Bayesian approach, allowing physical understanding to constrain the model selection algorithm. [source] Assimilating humidity pseudo-observations derived from the cloud profiling radar aboard CloudSat in ALADIN 3D-Var METEOROLOGICAL APPLICATIONS, Issue 4 2009 Andrea Storto Abstract This paper describes an experimental procedure for assimilating CloudSat Cloud Profiling Radar (CPR) observations in ALADIN 3D-Var through the use of humidity pseudo-observations derived from a one-dimensional Bayesian analysis. Cloud data are considered as binary occurrences ('cloud' vs 'no-cloud'), which makes it feasible to extend the approach to other cloudiness observations, and to any other binary observation in general. A simple large-scale condensation scheme is used for projecting the prior information from a Numerical Weather Prediction model into cloud fraction space. Verification over a one-month assimilation test period indicates a clear benefit of the pseudo-observation assimilation scheme for the limited CloudSat CPR data set, especially in terms of improved skill scores for dynamical parameters such as geopotential and wind. Copyright © 2008 Royal Meteorological Society [source] Photometric redshifts with surface brightness priors MONTHLY NOTICES OF THE ROYAL ASTRONOMICAL SOCIETY, Issue 3 2008 Hans F. Stabenau ABSTRACT We use galaxy surface brightness as prior information to improve photometric redshift (photo-z) estimation. We apply our template-based photo-z method to imaging data from the ground-based VVDS survey and the space-based GOODS field from HST, and use spectroscopic redshifts to test our photometric redshifts for different galaxy types and redshifts.
We find that the surface brightness prior eliminates a large fraction of outliers by lifting the degeneracy between the Lyman and 4000-Å breaks. Bias and scatter are improved by about a factor of 2 with the prior in each redshift bin in the range 0.4 < z < 1.3, for both the ground and space data. Ongoing and planned surveys from the ground and space will benefit, provided that care is taken in measurements of galaxy sizes and in the application of the prior. We discuss the image quality and signal-to-noise ratio requirements that enable the surface brightness prior to be successfully applied. [source] Delamination of multilayer packaging caused by exfoliating cream ingredients PACKAGING TECHNOLOGY AND SCIENCE, Issue 3 2007 Gustavo Ortiz Abstract Exfoliating creams were packed in sachets of composite packaging consisting of polyethylene, aluminium and polyester layers stuck together by polyurethane adhesive, and they were kept in an oven at 40°C in order to accelerate the delamination process. The sachets were then delaminated and the resulting layers were analysed. A headspace solid-phase microextraction gas chromatography–mass spectrometry method (HS-SPME-GC-MS) using a 75 µm carboxen polydimethylsiloxane fibre was used to identify the compounds migrating from the exfoliating creams through the polyethylene layer to the aluminium interface and suspected to be responsible for packaging delamination. Several volatile compounds used in the cosmetic industry as perfumes, fixing agents and preservatives, such as menthol, dihydromyrcenol and 2-phenoxyethanol, were detected in the aluminium/polyester delaminated layer. The exfoliating creams were also analysed by HS-SPME-GC-MS.
The study of loss of adhesion of the laminated material exposed to the exfoliating products revealed that the product with a higher concentration of 2-phenoxyethanol caused a faster decrease in adhesion strength, but the lowest adhesion values were found in products with higher concentrations of menthol and dihydromyrcenol. The results obtained showed that the analytical method used was suitable for identifying volatile compounds that migrate through polyethylene to the inner layers of the packaging of exfoliating products, as well as for providing prior information on which products may be difficult to package in sachets. Copyright © 2006 John Wiley & Sons, Ltd. [source] Bayesian analysis of plant disease prediction PLANT PATHOLOGY, Issue 4 2002 J. E. Yuen Rule-based systems for the prediction of the occurrence of disease can be evaluated in a number of different ways. One way is to examine the probability of disease occurrence before and after using the predictor. Bayes's Theorem can be a useful tool to examine how a disease forecast (either positive or negative) affects the probability of occurrence, and simple analyses can be conducted without knowing the risk preferences of the targeted decision makers. Likelihood ratios can be calculated from the sensitivity and specificity of the forecast, and provide convenient summaries of the forecast performance. They can also be used in a simpler form of Bayes's Theorem. For diseases where little or no prior information on occurrence is available, most forecasts will be useful in that they will increase or decrease the probability of disease occurrence. For extremely common or extremely rare diseases, likelihood ratios may not be sufficiently large or small to substantially affect the probability of disease occurrence or make any difference to the actions taken by the decision maker.
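The likelihood-ratio form of Bayes's Theorem described in the plant disease abstract is easy to make concrete: LR+ = sensitivity / (1 − specificity), LR− = (1 − sensitivity) / specificity, and posterior odds = LR × prior odds. The numeric values below are hypothetical, chosen only to illustrate the mechanics.

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios summarising forecast
    performance for positive and negative predictions, respectively."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

def posterior_probability(prior, lr):
    """Odds form of Bayes's Theorem: posterior odds = LR * prior odds,
    converted back to a probability."""
    odds = lr * prior / (1.0 - prior)
    return odds / (1.0 + odds)
```

With sensitivity 0.9 and specificity 0.8, LR+ is 4.5; a positive forecast lifts a prior of 0.1 to a posterior of about 0.33. The same LR+ applied to a very rare disease (prior 0.001) yields a posterior of only about 0.0045, which is the abstract's point that for extreme priors the forecast may not change the decision taken.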
[source] Molecular replacement: the probabilistic approach of the program REMO09 and its applications ACTA CRYSTALLOGRAPHICA SECTION A, Issue 6 2009 Rocco Caliandro The method of joint probability distribution functions has been applied to molecular replacement techniques. The rotational search is performed by rotating the reciprocal lattice of the protein with respect to the calculated transform of the model structure; the translation search is performed by fast Fourier transform. Several cases of prior information are studied, both for the rotation and for the translation step: e.g. the conditional probability density for the rotation or the translation of a monomer is found both in the ab initio case and when the rotation and/or translation values of other monomers are given. The new approach has been implemented in the program REMO09, which is part of the package for global phasing IL MILIONE [Burla, Caliandro, Camalli, Cascarano, De Caro, Giacovazzo, Polidori, Siliqi & Spagna (2007). J. Appl. Cryst. 40, 609–613]. A large set of test structures has been used for checking the efficiency of the new algorithms, which proved to be significantly robust in finding the correct solutions and in discriminating them from noise. An important design concept is the high degree of automatism: REMO09 is often capable of providing a reliable model of the target structure without any user intervention. [source] Bayesian methods for proteomics PROTEINS: STRUCTURE, FUNCTION AND BIOINFORMATICS, Issue 16 2007 Gil Alterovitz Abstract Biological and medical data have been growing exponentially over the past several years [1, 2]. In particular, proteomics has seen automation dramatically change the rate at which data are generated [3]. Analysis that systematically incorporates prior information is becoming essential to making inferences about the myriad, complex data [4–6].
A Bayesian approach can help capture such information and incorporate it seamlessly through a rigorous, probabilistic framework. This paper starts with a review of the background mathematics behind the Bayesian methodology: from parameter estimation to Bayesian networks. The article then goes on to discuss how emerging Bayesian approaches have already been successfully applied to research across proteomics, a field for which Bayesian methods are particularly well suited [7–9]. After reviewing the literature on the subject of Bayesian methods in biological contexts, the article discusses some of the recent applications in proteomics and emerging directions in the field. [source] SPEDEN: reconstructing single particles from their diffraction patterns ACTA CRYSTALLOGRAPHICA SECTION A, Issue 4 2004 Stefan P. Hau-Riege SPEDEN is a computer program that reconstructs the electron density of single particles from their X-ray diffraction patterns, using a single-particle adaptation of the holographic method in crystallography [Szöke, Szöke & Somoza (1997). Acta Cryst. A53, 291–313]. The method, like its parent, is unique because it does not rely on 'back' transformation from the diffraction pattern into real space or on interpolation within measured data. It is designed to deal successfully with sparse, irregular, incomplete and noisy data. It is also designed to use prior information for ensuring sensible results and for reliable convergence. This article describes the theoretical basis for the reconstruction algorithm, its implementation, and quantitative results of tests on synthetic and experimentally obtained data. The program could be used for determining the structures of radiation-tolerant samples and, eventually, of large biological molecular structures without the need for crystallization.
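As a minimal illustration of the kind of Bayesian updating the proteomics review covers, consider deciding whether a protein is present given several independent spectral detections. This is a two-layer Bayesian network with conditionally independent observations (a naive-Bayes structure); all names and probabilities below are hypothetical, not drawn from the paper.

```python
def protein_posterior(prior, p_detect_present, p_detect_absent, detections):
    """Posterior probability that a protein is present, given a list of
    boolean detection events assumed conditionally independent given
    presence/absence (a naive two-layer Bayesian network)."""
    num = prior            # joint weight of the 'present' branch
    den = 1.0 - prior      # joint weight of the 'absent' branch
    for d in detections:
        num *= p_detect_present if d else (1.0 - p_detect_present)
        den *= p_detect_absent if d else (1.0 - p_detect_absent)
    return num / (num + den)
```

Two positive detections can lift a 1% prior above 70% when the detector is informative (here 0.8 vs a 0.05 false-detection rate), while a single miss pushes the posterior below the prior.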
[source] Construction of statistical shape atlases for bone structures based on a two-level framework THE INTERNATIONAL JOURNAL OF MEDICAL ROBOTICS AND COMPUTER ASSISTED SURGERY, Issue 1 2010 Chenyu Wu Abstract Background The statistical shape atlas is a 3D medical image analysis tool that encodes shape variations between populations. However, efficiency, accuracy and finding the correct correspondence are still unsolved issues in the construction of the atlas. Methods We developed a two-level framework that speeds up the registration process while maintaining the accuracy of the atlas. We also propose a semi-automatic strategy that achieves segmentation and registration simultaneously, without requiring any prior information about the shape. Results We have separately constructed atlases for the femur and spine. The experimental results demonstrate the efficiency and accuracy of our methods. Conclusions Our two-level framework and semi-automatic strategy are able to efficiently construct atlases for bone structures without losing accuracy. We can handle either 3D surface data or raw DICOM images. Copyright © 2009 John Wiley & Sons, Ltd. [source]
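The statistical core of any shape atlas, including the two-level construction above, is a mean shape plus the variation of aligned training shapes about it; a PCA of the deviation vectors then gives the principal modes of shape variation. The sketch below assumes landmarks are already corresponded and aligned, i.e. the registration step the paper is actually about is omitted.

```python
def mean_shape(shapes):
    """Per-landmark mean over aligned training shapes. Each shape is a
    list of coordinate tuples with landmark i corresponded across shapes."""
    n = float(len(shapes))
    return [tuple(sum(pt[j] for pt in col) / n for j in range(len(col[0])))
            for col in zip(*shapes)]

def deviations(shapes, mean):
    """Flattened deviation vectors about the mean shape; a PCA of these
    rows would yield the atlas's principal modes of variation."""
    return [[c - m for pt, mpt in zip(s, mean) for c, m in zip(pt, mpt)]
            for s in shapes]
```

For two toy 2D shapes sharing x-coordinates but shifted in y, the mean sits halfway between them and each deviation vector records the landmark-wise offset that PCA would decompose.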