Importance Sampling

Selected Abstracts


Deterministic Importance Sampling with Error Diffusion

COMPUTER GRAPHICS FORUM, Issue 4 2009
László Szirmay-Kalos
This paper proposes a deterministic importance sampling algorithm that is based on the recognition that delta-sigma modulation is equivalent to importance sampling. We propose a generalization for delta-sigma modulation in arbitrary dimensions, taking care of the curse of dimensionality as well. Unlike previous sampling techniques that transform low-discrepancy and highly stratified samples in the unit cube to the integration domain, our error diffusion sampler ensures the proper distribution and stratification directly in the integration domain. We also present applications, including environment mapping and global illumination rendering with virtual point sources. [source]
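
A minimal one-dimensional sketch of the delta-sigma/importance-sampling connection described in this abstract (not the paper's multidimensional error-diffusion sampler): the running error between the target density and the emitted samples is carried forward, so sample positions are chosen deterministically in proportion to the density. The function name and test density below are illustrative only.

```python
# Toy 1D error-diffusion (first-order delta-sigma) sampler; illustrative only,
# not the paper's algorithm.
import numpy as np

def error_diffusion_samples(density, n_samples):
    """Place ~n_samples deterministically, proportional to per-bin weights `density`."""
    weights = np.asarray(density, dtype=float)
    weights = weights / weights.sum() * n_samples   # expected sample count per bin
    samples, error = [], 0.0
    for i, w in enumerate(weights):
        error += w                                  # integrate (sigma stage)
        while error >= 1.0:                         # quantize (delta stage)
            samples.append(i)                       # emit a sample in bin i
            error -= 1.0                            # diffuse the residual forward
    return np.array(samples)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 256)
    density = np.exp(-((x - 0.3) ** 2) / 0.01) + 0.5 * np.exp(-((x - 0.8) ** 2) / 0.02)
    s = error_diffusion_samples(density, 64)
    print(len(s), "samples;", np.mean(s < 128), "fall in the left half")
```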


Compression and Importance Sampling of Near-Field Light Sources

COMPUTER GRAPHICS FORUM, Issue 8 2008
Albert Mas
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Abstract This paper presents a method for compressing measured datasets of the near-field emission of physical light sources (represented by raysets). We create a mesh on the bounding surface of the light source that stores illumination information. The mesh is augmented with information about directional distribution and energy density. We have developed a new approach to smoothly generate random samples on the illumination distribution represented by the mesh, and to efficiently handle importance sampling of points and directions. We will show that our representation can compress a 10 million particle rayset into a mesh of a few hundred triangles. We also show that the error of this representation is low, even for very close objects. [source]
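
As a rough illustration of what importance sampling of points and directions from a mesh-based light representation could look like, the sketch below picks a triangle in proportion to a stored energy value, samples a point uniformly inside it, and draws a cosine-weighted direction; the data layout is hypothetical and is not the paper's compressed rayset format.

```python
# Hypothetical per-triangle data standing in for the compressed representation;
# illustrative sketch only.
import numpy as np

rng = np.random.default_rng(8)

tri = np.array([[[0, 0, 0], [1, 0, 0], [0, 1, 0]],
                [[1, 0, 0], [1, 1, 0], [0, 1, 0]]], dtype=float)   # vertices (T, 3, 3)
energy = np.array([3.0, 1.0])                                      # stored energy per triangle
mean_dir = np.array([[0, 0, 1.0], [0, 0, 1.0]])                    # stored mean emission direction

def sample_emission():
    t = rng.choice(len(energy), p=energy / energy.sum())    # triangle ~ stored energy
    u, v = rng.uniform(size=2)
    su = np.sqrt(u)
    a, b, c = tri[t]
    point = (1 - su) * a + su * (1 - v) * b + su * v * c    # uniform point in the triangle
    # Cosine-weighted direction about the stored mean direction (here +z, so no rotation needed).
    r1, r2 = rng.uniform(size=2)
    phi, cos_t = 2 * np.pi * r1, np.sqrt(1 - r2)
    sin_t = np.sqrt(1 - cos_t**2)
    direction = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    return point, direction

print(*sample_emission())
```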


Efficient sampling and data reduction techniques for probabilistic seismic lifeline risk assessment

EARTHQUAKE ENGINEERING AND STRUCTURAL DYNAMICS, Issue 10 2010
Nirmal Jayaram
Abstract Probabilistic seismic risk assessment for spatially distributed lifelines is less straightforward than for individual structures. While procedures such as the 'PEER framework' have been developed for risk assessment of individual structures, these are not easily applicable to distributed lifeline systems, due to difficulties in describing ground-motion intensity (e.g. spectral acceleration) over a region (in contrast to ground-motion intensity at a single site, which is easily quantified using Probabilistic Seismic Hazard Analysis), and because the link between the ground-motion intensities and lifeline performance is usually not available in closed form. As a result, Monte Carlo simulation (MCS) and its variants are well suited for characterizing ground motions and computing resulting losses to lifelines. This paper proposes a simulation-based framework for developing a small but stochastically representative catalog of earthquake ground-motion intensity maps that can be used for lifeline risk assessment. In this framework, Importance Sampling is used to preferentially sample 'important' ground-motion intensity maps, and K-Means Clustering is used to identify and combine redundant maps in order to obtain a small catalog. The effects of sampling and clustering are accounted for through a weighting on each remaining map, so that the resulting catalog is still a probabilistically correct representation. The feasibility of the proposed simulation framework is illustrated by using it to assess the seismic risk of a simplified model of the San Francisco Bay Area transportation network. A catalog of just 150 intensity maps is generated to represent hazard at 1038 sites from 10 regional fault segments causing earthquakes with magnitudes between five and eight. The risk estimates obtained using these maps are consistent with those obtained using conventional MCS utilizing many orders of magnitude more ground-motion intensity maps. Therefore, the proposed technique can be used to drastically reduce the computational expense of a simulation-based risk assessment, without compromising the accuracy of the risk estimates. This will facilitate computationally intensive risk analysis of systems such as transportation networks. Finally, the study shows that the uncertainties in the ground-motion intensities and the spatial correlations between ground-motion intensities at various sites must be modeled in order to obtain unbiased estimates of lifeline risk. Copyright © 2010 John Wiley & Sons, Ltd. [source]
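
The two-stage idea, importance sampling of rare but damaging events followed by clustering with summed weights so that the reduced catalog remains probabilistically consistent, can be sketched as below; the magnitude model, site layout, and parameter values are invented for illustration and are not the paper's.

```python
# Schematic sketch of IS + clustering catalog reduction; all distributions and
# numbers are illustrative, not the authors' hazard model.
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_sites, k = 2000, 20, 50

# Target magnitude model: truncated exponential (Gutenberg-Richter-like) on [5, 8].
# Proposal: uniform on [5, 8], which over-samples the rare large magnitudes.
b = 2.0
def target_pdf(m):
    return b * np.exp(-b * (m - 5.0)) / (1.0 - np.exp(-b * 3.0))

mags = rng.uniform(5.0, 8.0, n_sims)                 # draw from the proposal
w = target_pdf(mags) / (1.0 / 3.0)                   # IS weight = target / proposal
w /= n_sims                                          # each map carries weight/n of the hazard

# Toy "intensity maps": log-intensity grows with magnitude, plus site-to-site noise.
maps = 0.5 * mags[:, None] + 0.3 * rng.standard_normal((n_sims, n_sites))

# Plain k-means; each surviving map carries the summed weight of its cluster.
centroids = maps[rng.choice(n_sims, k, replace=False)]
for _ in range(25):
    labels = np.argmin(((maps[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centroids[j] = maps[labels == j].mean(axis=0)

catalog_w = np.array([w[labels == j].sum() for j in range(k)])
print("catalog size:", k, "  total weight (should be ~1):", catalog_w.sum())
```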


Approaches to Evaluate Water Quality Model Parameter Uncertainty for Adaptive TMDL Implementation

JOURNAL OF THE AMERICAN WATER RESOURCES ASSOCIATION, Issue 6 2007
Craig A. Stow
Abstract: The National Research Council recommended Adaptive Total Maximum Daily Load implementation with the recognition that the predictive uncertainty of water quality models can be high. Quantifying predictive uncertainty provides important information for model selection and decision-making. We review five methods that have been used with water quality models to evaluate model parameter and predictive uncertainty. These methods, (1) Regionalized Sensitivity Analysis, (2) Generalized Likelihood Uncertainty Estimation, (3) Bayesian Monte Carlo, (4) Importance Sampling, and (5) Markov Chain Monte Carlo (MCMC), are based on similar concepts; their development over time was facilitated by the increasing availability of fast, cheap computers. Using a Streeter-Phelps model as an example, we show that, applied consistently, these methods give compatible results. Thus, all of these methods can, in principle, provide useful sets of parameter values that can be used to evaluate model predictive uncertainty, though, in practice, some are quickly limited by the "curse of dimensionality" or may have difficulty evaluating irregularly shaped parameter spaces. Adaptive implementation invites model updating as new data become available reflecting water-body responses to pollutant load reductions, and a Bayesian approach using MCMC is particularly handy for that task. [source]
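
The importance sampling variant reviewed here, in its simplest Bayesian form, draws parameter sets from the prior and weights each by its likelihood. The sketch below applies that recipe to a toy first-order decay model standing in for the Streeter-Phelps example; the data, priors, and measurement error are made up for illustration.

```python
# Prior-as-proposal importance sampling for parameter uncertainty; toy model
# and synthetic data, not the paper's case study.
import numpy as np

rng = np.random.default_rng(1)

t_obs = np.array([1.0, 2.0, 4.0, 6.0, 8.0])           # days
y_obs = np.array([7.4, 5.5, 3.1, 1.8, 1.0])           # mg/L, synthetic observations
sigma = 0.3                                           # assumed measurement s.d.

def model(L0, kd):
    return L0 * np.exp(-kd * t_obs)                   # first-order decay

# Prior samples (the IS proposal) and likelihood-based weights.
n = 50_000
L0 = rng.uniform(5.0, 15.0, n)
kd = rng.uniform(0.05, 0.8, n)
resid = y_obs[None, :] - model(L0[:, None], kd[:, None])
log_like = -0.5 * np.sum((resid / sigma) ** 2, axis=1)
w = np.exp(log_like - log_like.max())                 # stabilized weights
w /= w.sum()

post_mean_kd = np.sum(w * kd)
ess = 1.0 / np.sum(w ** 2)                            # effective sample size
print(f"posterior mean kd = {post_mean_kd:.3f}, ESS = {ess:.0f} of {n}")
```

The effective sample size printed at the end is one way to see the "curse of dimensionality" mentioned above: as the parameter space grows, prior samples rarely land where the likelihood is high and the weights collapse onto a few points.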


A Bayesian Monte Carlo Approach to Global Illumination

COMPUTER GRAPHICS FORUM, Issue 8 2009
Jonathan Brouillat
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Abstract Most Monte Carlo rendering algorithms rely on importance sampling to reduce the variance of estimates. Importance sampling is efficient when the proposal sample distribution is well suited to the form of the integrand but fails otherwise. The main reason is that the sample location information is not exploited. All sample values are given the same importance regardless of their proximity to one another. Two samples falling in a similar location will have equal importance, whereas they are likely to contain redundant information. The Bayesian approach we propose in this paper uses both the location and value of the data to infer an integral value based on a prior probabilistic model of the integrand. The Bayesian estimate depends only on the sample values and locations, not on how these samples have been chosen. We show how this theory can be applied to the final gathering problem and present results that clearly demonstrate the benefits of Bayesian Monte Carlo. [source]
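
A one-dimensional Bayesian quadrature sketch conveys the core idea: with a Gaussian-process prior on the integrand, the integral estimate uses both where the samples fall and what values they take. The kernel, length scale, and test integrand below are illustrative choices, not the paper's model of the rendering integrand.

```python
# 1D Bayesian quadrature with a squared-exponential GP prior; illustrative only.
import numpy as np
from scipy.special import erf

def bayesian_quadrature(xs, ys, length=0.15, noise=1e-8):
    r"""Posterior mean of \int_0^1 f(x) dx given samples (xs, ys) under a GP prior."""
    xs = np.asarray(xs); ys = np.asarray(ys)
    K = np.exp(-0.5 * (xs[:, None] - xs[None, :]) ** 2 / length**2)
    K += noise * np.eye(len(xs))
    # z_i = integral of k(x, x_i) over [0, 1] (closed form for this kernel)
    z = length * np.sqrt(np.pi / 2.0) * (erf((1 - xs) / (np.sqrt(2) * length))
                                         + erf(xs / (np.sqrt(2) * length)))
    return z @ np.linalg.solve(K, ys)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    f = lambda x: np.sin(2 * np.pi * x) ** 2          # true integral = 0.5
    xs = rng.uniform(0, 1, 30)
    print("BQ estimate:", bayesian_quadrature(xs, f(xs)))
    print("plain MC   :", f(xs).mean())
```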


An Optimizing Compiler for Automatic Shader Bounding

COMPUTER GRAPHICS FORUM, Issue 4 2010
Petrik Clarberg
Abstract Programmable shading provides artistic control over materials and geometry, but the black-box nature of shaders makes some rendering optimizations difficult to apply. In many cases, it is desirable to compute bounds of shaders in order to speed up rendering. A bounding shader can be automatically derived from the original shader by a compiler using interval analysis, but creating optimized interval arithmetic code is non-trivial. A key insight in this paper is that shaders contain metadata that can be automatically extracted by the compiler using data flow analysis. We present a number of domain-specific optimizations that make the generated code faster, while computing the same bounds as before. This enables wider use and opens up possibilities for more efficient rendering. Our results show that on average 42–44% of the shader instructions can be eliminated for a common use case: single-sided bounding shaders used in lightcuts and importance sampling. [source]
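
The sketch below illustrates the underlying interval-arithmetic idea: evaluating a shader with intervals instead of floats yields conservative bounds on its output over a whole range of inputs. The toy "shader" and its inputs are hypothetical, and none of the paper's compiler optimizations are shown.

```python
# Minimal interval arithmetic and a toy bounding "shader"; illustrative only.
from dataclasses import dataclass
import math

@dataclass
class Interval:
    lo: float
    hi: float
    def __add__(self, o):  return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        c = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(c), max(c))
    def clamp01(self):     return Interval(max(self.lo, 0.0), min(self.hi, 1.0))

def cos_interval(i: Interval) -> Interval:
    # Conservative cosine bound: endpoints plus any extrema at multiples of pi inside the range.
    lo, hi = math.cos(i.lo), math.cos(i.hi)
    lo, hi = min(lo, hi), max(lo, hi)
    k = math.ceil(i.lo / math.pi)
    while k * math.pi <= i.hi:
        v = math.cos(k * math.pi)
        lo, hi = min(lo, v), max(lo, v)
        k += 1
    return Interval(lo, hi)

def toy_shader_bound(albedo: Interval, angle: Interval) -> Interval:
    """Bounds of a Lambert-like term albedo * max(cos(angle), 0) over the given input intervals."""
    return albedo * cos_interval(angle).clamp01()

print(toy_shader_bound(Interval(0.2, 0.8), Interval(0.0, 1.2)))
```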


Replica Exchange Light Transport

COMPUTER GRAPHICS FORUM, Issue 8 2009
Shinya Kitaoka
I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism; I.3.3 [Computer Graphics]: Picture/Image Generation Abstract We solve the light transport problem by introducing a novel unbiased Monte Carlo algorithm called replica exchange light transport, inspired by the replica exchange Monte Carlo method in the fields of computational physics and statistical information processing. The replica exchange Monte Carlo method is a sampling technique whose operation resembles simulated annealing in optimization algorithms, using a set of sampling distributions. We apply it to the solution of light transport integration by extending the probability density function of an integrand of the integration to a set of distributions. That set of distributions is composed of combinations of the path densities of different path generation types: uniform distributions in the integral domain, explicit and implicit paths in light (particle/photon) tracing, indirect paths in bidirectional path tracing, explicit and implicit paths in path tracing, and implicit caustics paths seen through specular surfaces including the delta function in path tracing. The replica exchange light transport algorithm generates a sequence of path samples from each distribution and samples the simultaneous distribution of those distributions as a stationary distribution by using the Markov chain Monte Carlo method. Then the algorithm combines the obtained path samples from each distribution using multiple importance sampling. We compare the images generated with our algorithm to those generated with bidirectional path tracing and Metropolis light transport based on the primary sample space. Our proposed algorithm has better convergence properties than bidirectional path tracing and Metropolis light transport, and it is easy to implement by extending Metropolis light transport. [source]
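
A small parallel-tempering example on a bimodal one-dimensional target shows the replica exchange machinery the method builds on; the actual algorithm exchanges samples among path-space distributions and combines them with multiple importance sampling, which this toy example does not attempt.

```python
# Replica exchange (parallel tempering) on a bimodal 1D target; illustrative only.
import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    # Two well-separated modes; a single Metropolis chain mixes poorly here.
    return np.logaddexp(-0.5 * ((x - 4) / 0.3) ** 2, -0.5 * ((x + 4) / 0.3) ** 2)

betas = np.array([1.0, 0.3, 0.1, 0.03])     # inverse "temperatures"; 1.0 is the target
x = rng.normal(0, 1, len(betas))            # one replica per temperature
samples = []
for it in range(20000):
    # Metropolis update within each replica, tempered by its beta.
    prop = x + rng.normal(0, 1.0, len(betas))
    acc = np.log(rng.uniform(size=len(betas))) < betas * (log_target(prop) - log_target(x))
    x = np.where(acc, prop, x)
    # Attempt one swap between a random adjacent pair of temperatures.
    i = rng.integers(len(betas) - 1)
    d = (betas[i] - betas[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.uniform()) < d:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])                    # keep only the beta = 1 replica

samples = np.array(samples[2000:])
print("fraction of samples near +4:", np.mean(samples > 0))   # ~0.5 if mixing is good
```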


Adaptive state-dependent importance sampling simulation of Markovian queueing networks

EUROPEAN TRANSACTIONS ON TELECOMMUNICATIONS, Issue 4 2002
Pieter-Tjerk De Boer
In this paper, a method is presented for the efficient estimation of rare-event (buffer overflow) probabilities in queueing networks using importance sampling. Unlike previously proposed changes of measure, the one used here is not static, i.e., it depends on the buffer contents at each of the network nodes. The 'optimal' state-dependent change of measure is determined adaptively during the simulation, using the cross-entropy method. The adaptive state-dependent importance sampling algorithm proposed in this paper yields asymptotically efficient simulation of models for which it is shown (formally or otherwise) that no effective static change of measure exists. Simulation results for queueing models of communication systems are presented to demonstrate the effectiveness of the method. [source]
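
For a much simpler rare-event problem, the cross-entropy recipe looks like the sketch below: adapt a tilted proposal over a few iterations, then form the importance sampling estimate under the adapted change of measure. The state-dependent, queueing-network version in the paper is not reproduced; the model and all parameters are illustrative.

```python
# Cross-entropy adaptation of an exponentially tilted proposal for
# P(X1 + ... + Xn > gamma), Xi ~ Exp(1); illustrative rare-event example only.
import numpy as np

rng = np.random.default_rng(4)
n, gamma, rho, N = 10, 40.0, 0.1, 20_000       # dimension, threshold, elite fraction, batch size

def log_w(x, v):
    """Log likelihood ratio f_{mean=1}(x) / f_{mean=v}(x) for iid exponentials."""
    return np.sum(-x + (x / v) + np.log(v), axis=1)

v = 1.0                                         # proposal mean, adapted by cross-entropy
for _ in range(15):
    x = rng.exponential(v, size=(N, n))
    s = x.sum(axis=1)
    level = min(np.quantile(s, 1 - rho), gamma) # intermediate level, capped at gamma
    elite = s >= level
    w = np.exp(log_w(x[elite], v))              # weights back to the original measure
    v = np.sum(w[:, None] * x[elite]) / (n * np.sum(w))   # CE update of the proposal mean
    if level >= gamma:
        break

# Final importance-sampling estimate with the adapted proposal.
x = rng.exponential(v, size=(N, n))
s = x.sum(axis=1)
est = np.mean((s > gamma) * np.exp(log_w(x, v)))
print(f"adapted mean = {v:.2f}, P(S > {gamma}) ~= {est:.3e}")
```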


Linkage analysis with sequential imputation

GENETIC EPIDEMIOLOGY, Issue 1 2003
Zachary Skrivanek
Abstract Multilocus calculations, using all available information on all pedigree members, are important for linkage analysis. Exact calculation methods in linkage analysis are limited in either the number of loci or the number of pedigree members they can handle. In this article, we propose a Monte Carlo method for linkage analysis based on sequential imputation. Unlike exact methods, sequential imputation can handle large pedigrees with a moderate number of loci in its current implementation. This Monte Carlo method is an application of importance sampling, in which we sequentially impute ordered genotypes locus by locus, and then impute inheritance vectors conditioned on these genotypes. The resulting inheritance vectors, together with the importance sampling weights, are used to derive a consistent estimator of any linkage statistic of interest. The linkage statistic can be parametric or nonparametric; we focus on nonparametric linkage statistics. We demonstrate that accurate estimates can be achieved within a reasonable computing time. A simulation study illustrates the potential gain in power using our method for multilocus linkage analysis with large pedigrees. We simulated data at six markers under three models. We analyzed them using both sequential imputation and GENEHUNTER. GENEHUNTER had to drop between 38 and 54% of pedigree members, whereas our method was able to use all pedigree members. The power gains of using all pedigree members were substantial under 2 of the 3 models. We implemented sequential imputation for multilocus linkage analysis in a user-friendly software package called SIMPLE. Genet Epidemiol 25:25–35, 2003. © 2003 Wiley-Liss, Inc. [source]


Basic ingredients of free energy calculations: A review

JOURNAL OF COMPUTATIONAL CHEMISTRY, Issue 8 2010
Clara D. Christ
Abstract Methods to compute free energy differences between different states of a molecular system are reviewed with the aim of identifying their basic ingredients and their utility when applied in practice to biomolecular systems. A free energy calculation comprises three basic components: (i) a suitable model or Hamiltonian, (ii) a sampling protocol with which one can generate a representative ensemble of molecular configurations, and (iii) an estimator of the free energy difference itself. Alternative sampling protocols can be distinguished according to whether one or more states are to be sampled. In cases where only a single state is considered, six alternative techniques can be distinguished: (i) changing the dynamics, (ii) deforming the energy surface, (iii) extending the dimensionality, (iv) perturbing the forces, (v) reducing the number of degrees of freedom, and (vi) multi-copy approaches. In cases where multiple states are to be sampled, the three primary techniques are staging, importance sampling, and adiabatic decoupling. Estimators of the free energy can be classified as global methods, which either count the number of times a given state is sampled or use energy differences, or as local methods, which either make use of the force or are based on transition probabilities. Finally, this overview of the available techniques and how they can be best used in a practical context is aimed at helping the reader choose the most appropriate combination of approaches for the biomolecular system, Hamiltonian, and free energy difference of interest. © 2009 Wiley Periodicals, Inc. J Comput Chem, 2010 [source]
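
As a concrete instance of a global, energy-difference estimator computed from a single sampled state, free energy perturbation (exponential averaging) can be sketched for two harmonic wells whose exact free energy difference is known; the spring constants, temperature, and sample count below are arbitrary choices, not taken from the review.

```python
# Free energy perturbation (Zwanzig exponential averaging) on two 1D harmonic
# potentials; illustrative toy system with a known exact answer.
import numpy as np

rng = np.random.default_rng(5)
kT = 1.0
k0, k1 = 1.0, 4.0                       # spring constants of states 0 and 1

# Sample state 0 exactly (the Boltzmann distribution of a harmonic well is Gaussian).
x = rng.normal(0.0, np.sqrt(kT / k0), 200_000)

dU = 0.5 * (k1 - k0) * x**2             # U1(x) - U0(x)
dF_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))
dF_exact = 0.5 * kT * np.log(k1 / k0)
print(f"FEP estimate: {dF_fep:.4f}   exact: {dF_exact:.4f}")
```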


Inference in molecular population genetics

JOURNAL OF THE ROYAL STATISTICAL SOCIETY: SERIES B (STATISTICAL METHODOLOGY), Issue 4 2000
Matthew Stephens
Full likelihood-based inference for modern population genetics data presents methodological and computational challenges. The problem is of considerable practical importance and has attracted recent attention, with the development of algorithms based on importance sampling (IS) and Markov chain Monte Carlo (MCMC) sampling. Here we introduce a new IS algorithm. The optimal proposal distribution for these problems can be characterized, and we exploit a detailed analysis of genealogical processes to develop a practicable approximation to it. We compare the new method with existing algorithms on a variety of genetic examples. Our approach substantially outperforms existing IS algorithms, with efficiency typically improved by several orders of magnitude. The new method also compares favourably with existing MCMC methods in some problems, and less favourably in others, suggesting that both IS and MCMC methods have a continuing role to play in this area. We offer insights into the relative advantages of each approach, and we discuss diagnostics in the IS framework. [source]


Falling and explosive, dormant, and rising markets via multiple-regime financial time series models

APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY, Issue 1 2010
Cathy W. S. Chen
Abstract A multiple-regime threshold nonlinear financial time series model, with a fat-tailed error distribution, is discussed, and Bayesian estimation and inference are considered. Furthermore, approximate Bayesian posterior model comparison among competing models with different numbers of regimes is considered, which is effectively a test for the number of required regimes. An adaptive Markov chain Monte Carlo (MCMC) sampling scheme is designed, while importance sampling is employed to estimate Bayesian residuals for model diagnostic testing. Our modeling framework provides a parsimonious representation of well-known stylized features of financial time series and facilitates statistical inference in the presence of high or explosive persistence and dynamic conditional volatility. We focus on the three-regime case, where the main feature of the model is its capture of mean and volatility asymmetries in financial markets, while allowing an explosive volatility regime. A simulation study highlights the properties of our MCMC estimators and the accuracy and favourable performance of the posterior model probability approximation method as a model selection tool, compared with a deviance criterion. An empirical study of eight international oil and gas markets provides strong support for the three-regime model over its competitors, in most markets, in terms of model posterior probability and in showing three distinct regime behaviours: falling/explosive, dormant, and rising markets. Copyright © 2009 John Wiley & Sons, Ltd. [source]


An Importance Sampling Method to Evaluate Value-at-Risk for Assets with Jump Risk

ASIA-PACIFIC JOURNAL OF FINANCIAL STUDIES, Issue 5 2009
Ren-Her Wang
Abstract Risk management is an important issue when there is a catastrophic event that affects asset prices in the market, such as a sub-prime financial crisis or other financial crisis. By adding a jump term to the geometric Brownian motion, the jump diffusion model can be used to describe abnormal changes in asset prices when there is a serious event in the market. In this paper, we propose an importance sampling algorithm to compute the Value-at-Risk for linear and nonlinear assets under a multivariate jump diffusion model. To be more precise, an efficient computational procedure is developed for estimating the portfolio loss probability for linear and nonlinear assets with jump risks, and the tilting measure can be separated for the diffusion and the jump parts under the assumption of independence. The simulation results show that the efficiency of importance sampling improves over naive Monte Carlo simulation by factors ranging from 7 to 285 under various situations. We also show the robustness of the importance sampling algorithm by comparing it with the EVT-Copula method proposed by Oh and Moon (2006). [source]
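
A sketch of the factorized tilting idea for a toy one-period jump-diffusion loss (not the paper's calibrated model): because the diffusion and jump parts are independent, the likelihood ratio is the product of a diffusion term and a jump term. The tilting amounts and all model parameters below are illustrative.

```python
# Importance sampling of a loss-tail probability under a toy jump-diffusion
# return model; tilting shifts the diffusion mean and the jump intensity.
import numpy as np

rng = np.random.default_rng(6)

# One-period return: mu + sigma*Z + sum of N jumps, N ~ Poisson(lam), jumps ~ N(mj, sj).
mu, sigma, lam, mj, sj = 0.0005, 0.01, 0.1, -0.05, 0.02
loss_level = 0.15                      # estimate P(loss > 15%)
n = 100_000

def simulate(theta, lam_is):
    z = rng.normal(theta, 1.0, n)                     # tilted diffusion draw
    nj = rng.poisson(lam_is, n)                       # tilted jump count
    jumps = np.array([rng.normal(mj, sj, k).sum() for k in nj])
    loss = -(mu + sigma * z + jumps)
    # Likelihood ratio = (diffusion part) * (jump part); they separate by independence.
    lr = np.exp(-theta * z + 0.5 * theta**2) * np.exp(lam_is - lam) * (lam / lam_is) ** nj
    return loss, lr

loss, lr = simulate(theta=-3.0, lam_is=2.0)
p_is = np.mean((loss > loss_level) * lr)

loss0, _ = simulate(theta=0.0, lam_is=lam)            # naive Monte Carlo for comparison
p_mc = np.mean(loss0 > loss_level)
print(f"IS estimate: {p_is:.3e}   naive MC: {p_mc:.3e}")
```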


Bayesian Methods for Examining Hardy–Weinberg Equilibrium

BIOMETRICS, Issue 1 2010
Jon Wakefield
Summary Testing for Hardy–Weinberg equilibrium is ubiquitous and has traditionally been carried out via frequentist approaches. However, the discreteness of the sample space means that uniformity of p-values under the null cannot be assumed, with enumeration of all possible counts, conditional on the minor allele count, offering a computationally expensive way of p-value calibration. In addition, the interpretation of the subsequent p-values and the choice of significance threshold depend critically on sample size, because equilibrium will always be rejected at conventional levels with large sample sizes. We argue for a Bayesian approach using both Bayes factors and the examination of posterior distributions. We describe simple conjugate approaches, and methods based on importance sampling Monte Carlo. The former are convenient because they yield closed-form expressions for Bayes factors, which allow their application to a large number of single nucleotide polymorphisms (SNPs), in particular in genome-wide contexts. We also describe straightforward direct sampling methods for examining posterior distributions of parameters of interest. For large numbers of alleles at a locus we resort to Markov chain Monte Carlo. We discuss a number of possibilities for prior specification, and apply the suggested methods to a number of real datasets. [source]
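
The conjugate route mentioned above can be sketched in closed form: compare the marginal likelihood of the genotype counts under Hardy–Weinberg equilibrium (Beta prior on the allele frequency) with that under a saturated alternative (Dirichlet prior on the genotype frequencies). The prior settings below are generic defaults for illustration, not the priors recommended in the paper.

```python
# Closed-form Bayes factor for HWE at a biallelic locus; conjugate priors,
# illustrative default hyperparameters.
import numpy as np
from scipy.special import gammaln, betaln

def log_marginal_hwe(n_aa, n_ab, n_bb, a=1.0, b=1.0):
    # Under HWE, genotype probs are (q^2, 2q(1-q), (1-q)^2) with q ~ Beta(a, b).
    return (n_ab * np.log(2.0)
            + betaln(a + 2 * n_aa + n_ab, b + 2 * n_bb + n_ab) - betaln(a, b))

def log_marginal_saturated(n_aa, n_ab, n_bb, c=1.0):
    # Genotype probs ~ Dirichlet(c, c, c); standard Dirichlet-multinomial marginal.
    n = np.array([n_aa, n_ab, n_bb], dtype=float)
    return (gammaln(3 * c) - 3 * gammaln(c)
            + np.sum(gammaln(c + n)) - gammaln(3 * c + n.sum()))

def log_bf_hwe(n_aa, n_ab, n_bb):
    """Log Bayes factor in favour of HWE (the multinomial coefficients cancel)."""
    return log_marginal_hwe(n_aa, n_ab, n_bb) - log_marginal_saturated(n_aa, n_ab, n_bb)

# Example: counts close to HWE proportions versus a clear excess of heterozygotes.
print("near-HWE counts   :", log_bf_hwe(360, 480, 160))
print("excess heterozygotes:", log_bf_hwe(250, 700, 50))
```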


Monte Carlo Inference for State–Space Models of Wild Animal Populations

BIOMETRICS, Issue 2 2009
Ken B. Newman
Summary We compare two Monte Carlo (MC) procedures, sequential importance sampling (SIS) and Markov chain Monte Carlo (MCMC), for making Bayesian inferences about the unknown states and parameters of state–space models for animal populations. The procedures were applied to both simulated and real pup count data for the British grey seal metapopulation, as well as to simulated data for a Chinook salmon population. The MCMC implementation was based on tailor-made proposal distributions combined with analytical integration of some of the states and parameters. SIS was implemented in a more generic fashion. For the same computing time, MCMC tended to yield posterior distributions with less MC variation across different runs of the algorithm than the SIS implementation, with the exception, in the seal model, of some states and one of the parameters that mixed quite slowly. The efficiency of the SIS sampler was greatly increased by analytically integrating out unknown parameters in the observation model. We consider that a careful implementation of MCMC for cases where data are informative relative to the priors sets the gold standard, but that SIS samplers are a viable alternative that can be programmed more quickly. Our SIS implementation is particularly competitive in situations where the data are relatively uninformative; in other cases, SIS may require substantially more computer power than an efficient implementation of MCMC to achieve the same level of MC error. [source]
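
A compact bootstrap particle filter (sequential importance sampling with resampling) for a toy local-level state-space model illustrates the SIS side of the comparison; it is not the seal or salmon population model, and the tailored MCMC alternative discussed in the paper is not shown.

```python
# Bootstrap particle filter for a toy linear Gaussian local-level model;
# illustrative sketch only.
import numpy as np

rng = np.random.default_rng(7)

# Local-level model: x_t = x_{t-1} + process noise, y_t = x_t + observation noise.
T, q, r = 50, 0.1, 0.5
x_true = np.cumsum(rng.normal(0, np.sqrt(q), T))
y = x_true + rng.normal(0, np.sqrt(r), T)

n_part = 5000
particles = rng.normal(0, 1, n_part)
filtered_mean = np.empty(T)
for t in range(T):
    particles = particles + rng.normal(0, np.sqrt(q), n_part)      # propagate (proposal = prior)
    logw = -0.5 * (y[t] - particles) ** 2 / r                      # importance weights from likelihood
    w = np.exp(logw - logw.max()); w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    idx = rng.choice(n_part, n_part, p=w)                          # multinomial resampling
    particles = particles[idx]

print("RMSE of filtered mean vs true state:", np.sqrt(np.mean((filtered_mean - x_true) ** 2)))
```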