The paper reviews existing methods for generating discrete random variables and their suitability for vector processing. A new method for generating discrete random variables for use in vectorized Monte Carlo simulations is presented. The method uses the concept of importance sampling and generates random variables from a uniform distribution to speed up the computation. The sampled random variables are subsequently adjusted so that unbiased estimates are obtained. The method preserves both the mean and variance of the original distribution. It is demonstrated that the method requires simpler coding and shorter execution times than other existing methods for both scalar and vector processing. The vectorization speedup of the method is demonstrated on an IBM 3090–180 machine with a vector facility.
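As a hedged illustration of the idea described above (not the paper's exact algorithm), the following Python sketch draws discrete outcomes from a uniform proposal, which vectorizes trivially, and attaches importance weights so that weighted averages remain unbiased. The target distribution and outcome values are assumed for the example, and the paper's further adjustment that also matches the variance is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

p = np.array([0.5, 0.3, 0.15, 0.05])      # assumed target discrete distribution
values = np.array([1.0, 2.0, 3.0, 4.0])   # outcome values
K, n = len(p), 100_000

# Uniform proposal over the K outcomes: cheap to generate and easy to vectorize.
idx = rng.integers(0, K, size=n)

# Importance weights w = p[i] / (1/K) make weighted estimates unbiased.
w = K * p[idx]

est_mean = np.mean(w * values[idx])       # estimates E[X] under p
true_mean = np.dot(p, values)
print(est_mean, true_mean)
```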
We present a variation of the N-fold way algorithm, which improves efficiency when the Wang–Landau method is combined with the N-fold way. It is shown that the new version of the N-fold way algorithm performs well when used for importance sampling, compared with the usual N-fold version or the Metropolis algorithm. The new N-fold algorithm combined with the Wang–Landau method is applied to the square Ising model using a multi-range approach. A comparative study of all these algorithms is presented: Wang–Landau and the two versions combined with the N-fold way. The role of boundary effects is discussed.
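For concreteness, here is a compact single-spin-flip Wang–Landau sketch for the 2D Ising model in Python; it omits both the N-fold way refinement and the multi-range bookkeeping discussed above, and the lattice size, sweep count and flatness criterion are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

L = 8                                    # small lattice for illustration
spins = rng.choice([-1, 1], size=(L, L))

def energy(s):
    # Nearest-neighbour Ising energy with periodic boundaries, J = 1.
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

ln_g = {}        # running estimate of ln(density of states), keyed by energy
hist = {}        # visit histogram used for the flatness check
ln_f = 1.0       # modification factor, halved whenever the histogram is flat
E = energy(spins)

for sweep in range(200_000):
    i, j = rng.integers(0, L, size=2)
    nb = spins[(i+1) % L, j] + spins[(i-1) % L, j] + spins[i, (j+1) % L] + spins[i, (j-1) % L]
    dE = 2 * spins[i, j] * nb
    E_new = E + dE
    # Wang-Landau acceptance: favour energies with a smaller current g estimate.
    if np.log(rng.random()) < ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0):
        spins[i, j] *= -1
        E = E_new
    ln_g[E] = ln_g.get(E, 0.0) + ln_f
    hist[E] = hist.get(E, 0) + 1
    # Occasionally check histogram flatness and refine the modification factor.
    if sweep > 0 and sweep % 10_000 == 0 and min(hist.values()) > 0.8 * np.mean(list(hist.values())):
        ln_f /= 2.0
        hist = {}
```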
Fatigue causes about 90% of service failures in machines. Fatigue analysis involves significant randomness in the loads, material properties and geometry. Designers often use Monte Carlo simulation to estimate fatigue reliability under dynamic, random loads such as those due to ocean waves. Monte Carlo simulation is computationally expensive because it requires calculation of the stresses for thousands of simulated time histories of the loads. This paper presents and demonstrates a method to efficiently estimate the fatigue life of a structure subjected to a dynamic load, represented by a stationary Gaussian random process, for many different excitation spectra. The method requires only one Monte Carlo simulation for one power spectral density function of the excitation.
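The sketch below shows only the standard spectral-representation step that such a study starts from: generating a stationary Gaussian load history from a one-sided power spectral density. The spectrum, frequency grid and record length are assumptions, and the paper's reuse of a single simulation across many spectra is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_history(psd, freqs, t, rng):
    """Spectral-representation sample of a zero-mean stationary Gaussian process.

    psd   : one-sided power spectral density S(f) evaluated at freqs
    freqs : equally spaced frequencies (Hz)
    t     : time grid (s)
    """
    df = freqs[1] - freqs[0]
    amp = np.sqrt(2.0 * psd * df)                      # component amplitudes
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))
    return (amp[:, None] * np.cos(2 * np.pi * freqs[:, None] * t[None, :]
                                  + phases[:, None])).sum(axis=0)

# Hypothetical narrow-band excitation spectrum (e.g. an idealized wave spectrum).
freqs = np.linspace(0.05, 2.0, 200)
psd = np.exp(-0.5 * ((freqs - 0.5) / 0.1) ** 2)
t = np.linspace(0.0, 600.0, 6000)

history = gaussian_history(psd, freqs, t, rng)   # one simulated load time history
# In a fatigue study, many such histories would be cycle-counted and combined with
# an S-N curve to estimate damage.
print(history.std() ** 2, np.trapz(psd, freqs))  # sample variance vs. integral of the PSD
```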
Tolerances in component values affect a product's manufacturing yield. The yield can be maximized by selecting component nominal values judiciously. Several yield optimization routines have been developed. A simple algorithm known as the center of gravity (CoG) method uses simple Monte Carlo sampling to estimate the yield and to generate a search direction for the optimal nominal values. This technique is known to identify the region of high yield in a small number of iterations. The use of the importance sampling technique is investigated, with the objective of reducing the number of samples needed to reach the optimal region. A uniform distribution centered at the mean is studied as the importance sampling density. The results show that savings of about 40% compared with Monte Carlo sampling can be achieved using importance sampling when the starting yield is low. The importance sampling density also helps the search process identify the high-yield region quickly, and the region identified is generally better than that obtained with Monte Carlo sampling.
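A much-simplified sketch of the centre-of-gravity search with a uniform importance sampling density follows, on a hypothetical two-component design whose acceptability region, tolerances and starting nominal values are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def passes(x):
    # Hypothetical acceptability region for a two-component design.
    return (x[:, 0] + x[:, 1] > 1.8) & (x[:, 0] * x[:, 1] < 1.2)

tol = np.array([0.1, 0.1])        # component standard deviations
nominal = np.array([0.8, 0.8])    # poor starting nominal values (low yield)
half = 3.0 * tol                  # half-width of the uniform importance sampling box

for it in range(8):
    # Uniform importance sampling density centred at the current nominal values.
    u = rng.uniform(nominal - half, nominal + half, size=(2000, 2))
    # Weight = true (Gaussian) component density / uniform sampling density.
    w = np.prod(norm.pdf(u, loc=nominal, scale=tol) * (2.0 * half), axis=1)
    ok = passes(u)
    yield_est = np.mean(w * ok)                  # (nearly) unbiased yield estimate
    cog = (w * ok) @ u / np.sum(w * ok)          # weighted centre of gravity of passing samples
    nominal = nominal + 0.5 * (cog - nominal)    # CoG search step towards higher yield
    print(f"iter {it}: yield ~ {yield_est:.3f}, nominal = {nominal.round(3)}")
```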
The goal of the paper is the numerical analysis of the performance of Monte Carlo simulation based methods for the computation of credit-portfolio loss distributions in the context of Markovian intensity models of credit risk. We concentrate on two of the most frequently touted methods of variance reduction in the case of stochastic processes: importance sampling (IS) and interacting particle systems (IPS) based algorithms. Because the subtle differences between these methods are often misunderstood, and IPS is often regarded as a mere particular case of IS, we describe the two kinds of algorithms in detail and highlight their fundamental differences. We then proceed to a detailed comparative case study based on benchmark numerical experiments chosen for their popularity in quantitative finance circles.
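Of the two approaches compared above, importance sampling by exponential twisting is the easier one to sketch. The Python example below applies it to a static portfolio of independent Bernoulli defaults; the obligor parameters and loss threshold are assumed, and the Markovian intensity dynamics and the IPS algorithm are beyond a short sketch.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(4)

p = np.full(100, 0.02)            # assumed default probabilities of 100 obligors
c = rng.uniform(0.5, 1.5, 100)    # assumed exposures
x = 12.0                          # loss threshold: P(L > x) is a rare event

def psi(theta):                   # cumulant generating function of L = sum c_i Y_i
    return np.sum(np.log1p(p * (np.exp(theta * c) - 1.0)))

def psi_prime(theta):
    q = p * np.exp(theta * c) / (1.0 + p * (np.exp(theta * c) - 1.0))
    return np.sum(c * q)

# Exponential twist chosen so that the mean loss under the IS measure equals x.
theta = brentq(lambda t: psi_prime(t) - x, 0.0, 20.0)
q = p * np.exp(theta * c) / (1.0 + p * (np.exp(theta * c) - 1.0))

n = 50_000
Y = (rng.random((n, len(p))) < q).astype(float)   # defaults under the twisted measure
Lsim = Y @ c
weights = np.exp(-theta * Lsim + psi(theta))      # likelihood ratio back to the true measure
print("IS estimate of P(L > x):", np.mean(weights * (Lsim > x)))
```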
It is well known that for highly skewed distributions the standard method of using the t statistic for the confidence interval of the mean does not give robust results. This is an important problem for importance sampling (IS), as its final distribution is often skewed due to a heavy-tailed weight distribution. In this paper, we first explain Hall's transformation and its variants for correcting the confidence interval of the mean, and then evaluate the performance of these methods on two numerical examples from finance which have closed-form solutions. Finally, we assess the performance of these methods on credit risk examples. Our numerical results suggest that Hall's transformation or one of its variants can be safely used to correct the two-sided confidence intervals of financial simulations.
This paper presents a new approach to perform a nearly unbiased simulation using inversion of the characteristic function. As an application we are able to give unbiased estimates of the price of forward starting options in the Heston model and of continuously monitored Parisian options in the Black-Scholes framework. This method of simulation can be applied to problems for which the characteristic functions are easily evaluated but the corresponding probability density functions are complicated.
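As a generic, hedged illustration of characteristic-function inversion, the sketch below recovers a distribution function via the Gil–Pelaez formula, using the standard normal characteristic function as a stand-in for a model characteristic function that is easy to evaluate; the Heston and Parisian applications require the paper's model-specific formulas. Simulation would then proceed, for example, by numerically inverting the recovered distribution function.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def cf_normal(u, mu=0.0, sigma=1.0):
    # Characteristic function of N(mu, sigma^2); a stand-in for a model CF
    # that is easy to evaluate even when the density is complicated.
    return np.exp(1j * mu * u - 0.5 * (sigma * u) ** 2)

def cdf_from_cf(x, cf):
    # Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im[exp(-iux) cf(u)] / u du
    integrand = lambda u: np.imag(np.exp(-1j * u * x) * cf(u)) / u
    integral, _ = quad(integrand, 1e-8, 200.0, limit=500)
    return 0.5 - integral / np.pi

for x in (-1.0, 0.0, 1.5):
    print(x, cdf_from_cf(x, cf_normal), norm.cdf(x))   # inversion vs. exact normal CDF
```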
This paper proposes an improved procedure for stochastic volatility model estimation with an application to Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) estimation. The improved procedure is composed of the following instrumental components: a Fourier transform method for volatility estimation, and importance sampling for extreme event probability estimation. The empirical analysis is based on several foreign exchange series and the S&P 500 index data. In comparison with empirical results from RiskMetrics, historical simulation, and the GARCH(1,1) model, our improved procedure outperforms them on average.
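The final risk-measure step of such a procedure is straightforward to sketch. Assuming a sample of simulated losses is already available (here a hypothetical heavy-tailed stand-in for the output of the estimation and importance sampling machinery), VaR is an empirical quantile and CVaR is the mean loss beyond it.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical simulated one-day portfolio losses (heavy-tailed stand-in).
losses = rng.standard_t(df=4, size=100_000) * 0.01

alpha = 0.99
var = np.quantile(losses, alpha)             # Value-at-Risk at level alpha
cvar = losses[losses >= var].mean()          # Conditional Value-at-Risk (expected shortfall)
print(f"VaR_{alpha:.2f} = {var:.4f}, CVaR_{alpha:.2f} = {cvar:.4f}")
```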
Numerical calculations of risk measures and risk contributions in credit risk models amount to the evaluation of various forms of quantiles, tail probabilities and tail expectations of the portfolio loss distribution. Though the moment generating function of the loss distribution in the CreditRisk+ model is available in analytic closed form, efficient, accurate and reliable computation of risk measures (Value-at-Risk and Expected Shortfall) and risk contributions for the CreditRisk+ model poses technical challenges. We propose various numerical algorithms for risk measure and risk contribution calculations of the enhanced CreditRisk+ model under the common background vector framework, using the Johnson curve fitting method, the saddlepoint approximation method, importance sampling in Monte Carlo simulation and the check function formulation. Our numerical studies on stylized credit portfolios and benchmark industrial credit portfolios reveal that the Johnson curve fitting approach works very well for credit portfolios with a large number of obligors, demonstrating a high level of numerical reliability and computational efficiency. Once we implement the systematic procedure of finding the saddlepoint within an approximate domain, the saddlepoint approximation schemes provide efficient calculation and accurate numerical results. The importance sampling Monte Carlo simulation methods are easy to implement, but they compete less favorably with the other numerical algorithms in accuracy and efficiency. The less commonly used check function formulation is limited to risk measure calculations. It competes favorably in accuracy and reliability, but an extra optimization algorithm is required.
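As an illustration of one of the ingredients above, the sketch below applies the Lugannani–Rice saddlepoint tail approximation to a sum of independent weighted Bernoulli losses whose cumulant generating function is available in closed form. The portfolio parameters are assumed, and the full CreditRisk+ common-background-vector structure is not reproduced.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

p = np.full(200, 0.01)             # assumed default probabilities
c = np.linspace(0.5, 2.0, 200)     # assumed exposures
x = 8.0                            # tail threshold

def K(s):    # cumulant generating function of L = sum c_i Y_i
    return np.sum(np.log1p(p * (np.exp(s * c) - 1.0)))

def K1(s):   # first derivative K'(s)
    q = p * np.exp(s * c) / (1.0 + p * (np.exp(s * c) - 1.0))
    return np.sum(c * q)

def K2(s):   # second derivative K''(s)
    q = p * np.exp(s * c) / (1.0 + p * (np.exp(s * c) - 1.0))
    return np.sum(c ** 2 * q * (1.0 - q))

s = brentq(lambda t: K1(t) - x, 1e-6, 20.0)             # saddlepoint: K'(s) = x
w = np.sign(s) * np.sqrt(2.0 * (s * x - K(s)))
u = s * np.sqrt(K2(s))
tail = 1.0 - norm.cdf(w) + norm.pdf(w) * (1.0 / u - 1.0 / w)   # Lugannani-Rice formula
print("Saddlepoint approximation of P(L > x):", tail)
```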
In this paper, we present a new Monte Carlo computational scheme for solving the global illumination problem, in which a wide variety of unbiased estimators can be employed to enrich the solutions, leading to simple error control and faster estimation. In particular, the zero-variance importance sampling procedure can be exploited to calculate the global illumination optimally. Based on the new scheme, a new Monte Carlo global illumination algorithm and its importance-driven version have been developed and implemented. Results obtained by rendering test scenes show that this new framework is promising.
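The zero-variance principle invoked above can be shown in one dimension: if the sampling density is proportional to the integrand, every weighted sample equals the integral exactly. The sketch below uses an assumed toy integral; the actual path-space construction for global illumination is far more involved.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(6)
n = 10_000

# Quantity of interest: I = integral_0^inf x * exp(-x) dx = 1.
# Crude Monte Carlo: sample x ~ Exp(1) and average f(x) = x.
x = rng.exponential(1.0, n)
crude = x                                         # per-sample estimates, variance 1

# Zero-variance importance sampling: proposal density g(x) proportional to the
# integrand, here g = Gamma(2, 1), so every weight equals the integral I.
y = rng.gamma(shape=2.0, scale=1.0, size=n)
integrand = lambda t: t * np.exp(-t)              # f(x) * pi(x) with pi = Exp(1)
weights = integrand(y) / gamma.pdf(y, a=2.0, scale=1.0)

print("crude MC     : mean %.4f, std %.4f" % (crude.mean(), crude.std()))
print("zero-variance: mean %.4f, std %.2e" % (weights.mean(), weights.std()))
```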
We consider a class of convex decentralized consensus optimization problems over connected multi-agent networks. Each agent in the network holds its local objective function privately, and can only communicate with its directly connected agents during the computation to find the minimizer of the sum of all objective functions. We propose a randomized incremental primal-dual method to solve this problem, where the dual variable over the network in each iteration is only updated at a randomly selected node, whereas the dual variables elsewhere remain the same as in the previous iteration. Thus, the communication only occurs in the neighborhood of the selected node in each iteration and hence can greatly reduce the chance of communication delay and failure present in standard fully synchronized consensus algorithms. We provide comprehensive convergence analysis, including convergence rates of the primal residual and consensus error of the proposed algorithm, and conduct numerical experiments to show its performance using both uniform sampling and importance sampling as the node selection strategy.
A molecular interaction library modeling favorable non-bonded interactions between atoms and molecular fragments is considered. In this paper, we represent the structure of the interaction library by a network diagram, which demonstrates that the underlying prediction model obtained for a molecular fragment is multi-layered. We clustered the molecular fragments into four groups by analyzing the pairwise distances between the molecular fragments. The distances are represented as an unrooted tree, in which the molecular fragments fall into four groups according to their function. For each fragment group, we modeled a group-specific a priori distribution with a Dirichlet distribution. The group-specific Dirichlet distributions enable us to derive a large population of similar molecular fragments that vary only in their contact preferences. Bayes' theorem then leads to a population distribution of the posterior probability vectors, referred to as a Dickey–Savage density. Two known methods for approximating multivariate integrals are applied to obtain marginal distributions of the Dickey–Savage density. The results of the numerical integration methods are compared with the simulated marginal distributions. By studying interactions between the protein structure of cyclohydrolase and its ligand guanosine-5′-triphosphate, we show that the marginal distributions of the posterior probabilities are more informative than the corresponding point estimates.
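As a much-simplified, hedged illustration of working with a group-specific Dirichlet prior: for multinomial contact counts the posterior is again Dirichlet, and its component-wise marginals are Beta distributions that can be checked against simulation. The prior and counts below are invented, and the paper's Dickey–Savage density and numerical integration schemes are not reproduced.

```python
import numpy as np
from scipy.stats import beta, dirichlet

rng = np.random.default_rng(7)

alpha = np.array([2.0, 1.0, 1.0, 0.5])    # assumed group-specific Dirichlet prior
counts = np.array([10, 3, 6, 1])          # assumed observed contact counts for a fragment

post = alpha + counts                     # Dirichlet posterior parameters

# Simulated marginal of component 0 versus its analytic Beta marginal.
samples = dirichlet.rvs(post, size=50_000, random_state=rng)[:, 0]
a0, b0 = post[0], post.sum() - post[0]

print("simulated mean/std    :", samples.mean(), samples.std())
print("Beta marginal mean/std:", beta.mean(a0, b0), beta.std(a0, b0))
```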
Reducing the failure probability is an important task in the design of engineering structures. In this paper, a reliability sensitivity analysis technique, called the failure probability ratio function, is first developed to provide analysts with quantitative information on how the failure probability is reduced when one or several distribution parameters of the model inputs are changed. Then, based on the failure probability ratio function, a global sensitivity analysis technique, called the R-index, is proposed for measuring the average contribution of the distribution parameters to the failure probability as they vary within intervals. The proposed failure probability ratio function and R-index can be especially useful for failure probability reduction, reliability-based optimization and reduction of the epistemic uncertainty of parameters. Monte Carlo simulation (MCS), Importance Sampling (IS) and Truncated Importance Sampling (TIS) procedures, which need only a single set of samples, are introduced for efficiently computing the proposed sensitivity indices. A numerical example is introduced to illustrate the engineering significance of the proposed sensitivity indices and to verify the efficiency and accuracy of the MCS, IS and TIS procedures. Finally, the proposed sensitivity techniques are applied to a planar 10-bar structure to achieve a targeted 80% reduction of the failure probability.
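A minimal sketch of the two basic estimators such indices rely on, crude Monte Carlo and importance sampling, is given below for a hypothetical linear limit state with independent standard normal inputs; the truncated importance sampling variant and the ratio-function bookkeeping are omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
n = 20_000

def g(x):                      # hypothetical limit state: failure when g(x) < 0
    return 4.0 - x[:, 0] - x[:, 1]

# Crude Monte Carlo estimate of the failure probability.
x = rng.standard_normal((n, 2))
pf_mc = np.mean(g(x) < 0)

# Importance sampling: shift the sampling density towards the design point (2, 2).
shift = np.array([2.0, 2.0])
y = rng.standard_normal((n, 2)) + shift
w = np.exp(-(y @ shift) + 0.5 * shift @ shift)     # N(0,I) density / N(shift,I) density
pf_is = np.mean(w * (g(y) < 0))

exact = norm.cdf(-4.0 / np.sqrt(2.0))              # P(X1 + X2 > 4) with X1+X2 ~ N(0, 2)
print(pf_mc, pf_is, exact)
```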
We propose a method of sampling regular- and irregular-grid volume data for visualization. The method is based on the Metropolis algorithm, a type of Monte Carlo technique. Our method enables "importance sampling" of local regions of interest in the visualization by generating sample points intensively in regions where a user-specified transfer function takes its peak values. The generated sample-point distribution is independent of the grid structure of the given volume data; therefore, our method is applicable to irregular grids as well as regular grids. We demonstrate the effectiveness of our method by applying it to regular cubic grids and irregular tetrahedral grids with adaptive cell sizes. We visualize volume data by projecting the generated sample points onto the 2D image plane. We tested our sampling with three rendering models: an X-ray model, a simple illuminant particle model, and an illuminant particle model with light-attenuation effects. The grid independence and parallel-processing efficiency of the method make it suitable for visualizing large-scale volume data. The former means that the required number of sample points is proportional to the number of 2D pixels, not the number of 3D voxels. The latter means that our method can be easily accelerated on multi-CPU and/or GPU platforms. We also show that our method can work with adaptive space partitioning of volume data, which also enables us to treat large-scale and complex volume data easily.
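A much-reduced sketch of the core idea on a regular grid, assuming a synthetic scalar volume and transfer function: a Metropolis random walk concentrates sample points where the transfer function is large, independently of the grid resolution.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical scalar volume on a regular grid (a blob centred in the domain).
N = 64
ax = np.linspace(-1.0, 1.0, N)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
volume = np.exp(-4.0 * (X**2 + Y**2 + Z**2))

def transfer(v):
    # Assumed user-specified transfer function: emphasise scalar values near 0.6.
    return np.exp(-((v - 0.6) / 0.1) ** 2)

def density(p):
    # Target density at a continuous position p, via nearest-grid-point lookup.
    idx = np.clip(((p + 1.0) / 2.0 * (N - 1)).astype(int), 0, N - 1)
    return transfer(volume[tuple(idx)])

# Metropolis walk: sample points land preferentially where transfer() is large.
pos = np.zeros(3)
samples = []
for _ in range(50_000):
    prop = pos + 0.1 * rng.standard_normal(3)
    if np.all(np.abs(prop) <= 1.0) and rng.random() < density(prop) / max(density(pos), 1e-12):
        pos = prop
    samples.append(pos.copy())
samples = np.array(samples)       # these points would then be projected to the image plane
```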
Monte-Carlo estimation of an integral is usually based on the method of moments or on an estimating equation. Recently, Kong et al. (2003) proposed a likelihood based theory, which puts Monte-Carlo estimation of integrals on a firmer, less ad hoc basis by formulating the problem as a likelihood inference problem for the baseline measure with simulated observations as data. In this paper, we provide further exploration and development of this theory. After an overview of the likelihood formulation, we first demonstrate the power of the likelihood-based method by presenting a universally improved importance sampling estimator. We then prove that the formal, infinite-dimensional Fisher-information based variance calculation given in Kong et al. (2003) is asymptotically the same as the sampling based "sandwich" variance estimator. Next, we explore the gain in Monte Carlo efficiency when the baseline measure can be parameterized. Furthermore, we show how the Monte Carlo integration problem can also be dealt with by the method of empirical likelihood, and how the baseline measure parameter can be properly profiled out to form a profile likelihood for the integrals only. As a byproduct, we obtain four equivalent conditions for the existence of a unique maximum likelihood estimate for mixture models with known components. We also discuss an apparent paradox for Bayesian inference with Monte Carlo integration.
Importance Sampling (IS) is a well-known Monte Carlo method used to estimate expectations with respect to a target distribution π, using a sample from another distribution g and properly weighting the output. Here, we consider IS from a different point of view. By interpreting the weights as sojourn times until the next jump, we associate a jump process with the weighted sample. Under certain conditions, the associated jump process is an ergodic semi-Markov process with stationary distribution π. Besides its theoretical interest, the proposed point of view also has interesting applications. Working along the lines of this approach, we can run more convenient Markov chain Monte Carlo algorithms. This can prove very useful when applied in conjunction with a discretization of the state space.
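A small numerical illustration of the correspondence, with an assumed normal target and Cauchy proposal: holding each draw for a sojourn time equal to its importance weight makes occupation-time averages of the jump process coincide, by construction, with self-normalised importance sampling estimates.

```python
import numpy as np
from scipy.stats import cauchy, norm

rng = np.random.default_rng(10)

# Target pi = N(0, 1), proposal g = Cauchy(0, 1); estimate E_pi[X^2] = 1.
x = cauchy.rvs(size=20_000, random_state=rng)
w = norm.pdf(x) / cauchy.pdf(x)                   # importance weights = sojourn times

# Self-normalised IS estimate = long-run time average of f over the jump process
# that stays at x_i for a duration w_i before jumping to x_{i+1}.
time_avg = np.sum(w * x**2) / np.sum(w)

# Occupation time of the set (-1, 1) matches the target probability of that set.
occ = np.sum(w * (np.abs(x) < 1.0)) / np.sum(w)
print(time_avg, occ, norm.cdf(1) - norm.cdf(-1))
```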
Importance Sampling is a variance reduction technique possessing the potential of zero-variance estimators in its optimal case. It has been successfully applied in a variety of settings ranging from Monte Carlo methods for static models to simulations of complex dynamical systems governed by stochastic processes. We demonstrate the applicability of Importance Sampling to the simulation of coupled molecular reactions constituting biological or genetic networks. This fills a gap between great efforts spent on enhanced trajectory generation and the largely neglected issue of reduced variance among trajectories in the context of biological and genetic networks.
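A hedged sketch in the spirit of a weighted stochastic simulation algorithm: the reaction-selection step is biased towards a rare event while the firing times keep their true distribution, and a likelihood-ratio weight corrects the estimate. The reaction system, rates, threshold and bias factor below are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)

# Reversible isomerisation S1 <-> S2; rare event: #S2 reaches THETA before time T.
k1, k2 = 1.0, 10.0            # forward / backward rate constants (assumed)
X0 = np.array([100, 0])       # initial copy numbers of (S1, S2)
THETA, T = 20, 5.0
BIAS = 3.0                    # importance sampling bias on the forward reaction

def propensities(x):
    return np.array([k1 * x[0], k2 * x[1]])

stoich = np.array([[-1, 1],   # S1 -> S2
                   [ 1, -1]]) # S2 -> S1

def weighted_ssa(rng):
    x, t, w = X0.copy(), 0.0, 1.0
    while t < T:
        a = propensities(x)
        a0 = a.sum()
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)        # firing time uses the TRUE total propensity
        if t >= T:
            break
        b = a * np.array([BIAS, 1.0])         # biased propensities for the reaction choice
        probs = b / b.sum()
        j = rng.choice(2, p=probs)
        w *= (a[j] / a0) / probs[j]           # likelihood-ratio correction
        x = x + stoich[j]
        if x[1] >= THETA:
            return w                          # rare event reached: contribute the weight
    return 0.0

est = np.mean([weighted_ssa(rng) for _ in range(20_000)])
print("IS estimate of P(#S2 reaches", THETA, "before T):", est)
```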
Several variance reduction techniques, including importance sampling, (martingale) control variates, the (randomized) quasi-Monte Carlo (QMC) method, and some possible combinations, are considered for evaluating option prices. Using perturbation methods to derive option price approximations, we find from numerical Monte Carlo results that the control variate method is more efficient than importance sampling for European option pricing problems under multifactor stochastic volatility models. As an alternative, the QMC method also provides better convergence than the basic Monte Carlo method. However, we find an example where the QMC method may produce erroneous solutions when estimating the low-biased solution of an American option. This drawback can be effectively fixed by adding a martingale control to the estimator based on quasi-random sequences, so that the low-biased estimates obtained are more accurate than the Monte Carlo results. Therefore, by taking advantage of the martingale control variate and randomized QMC, we find significant improvements in variance reduction for pricing derivatives and their sensitivities. This effect can be understood as the martingale control variate playing the role of a smoother under the QMC method, permitting better convergence.
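A minimal sketch of the martingale control variate idea in the single-factor Black–Scholes setting, rather than the paper's multifactor stochastic volatility models: the discounted stock price is a martingale with known mean S0, so it serves as a control for a European call. The parameters are assumed.

```python
import numpy as np

rng = np.random.default_rng(12)

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n = 100_000

Z = rng.standard_normal(n)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
payoff = np.exp(-r * T) * np.maximum(ST - K, 0.0)

# Martingale control variate: e^{-rT} S_T has known expectation S0.
control = np.exp(-r * T) * ST
beta = np.cov(payoff, control)[0, 1] / np.var(control)
cv_estimator = payoff - beta * (control - S0)

print("plain MC     : %.4f +/- %.4f" % (payoff.mean(), payoff.std() / np.sqrt(n)))
print("with control : %.4f +/- %.4f" % (cv_estimator.mean(), cv_estimator.std() / np.sqrt(n)))
```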
Statistical methods are compared for assessing the conformity of outputs of computationally expensive systems with respect to regulatory thresholds. The direct Monte Carlo method provides baseline results, obtained at a high computational cost. Metamodel-based methods (in conjunction with Monte Carlo or importance sampling) reduce the computation time, with the latter correcting for the metamodel approximation. These methods have been implemented on a fire engineering case study to compute the probability that the temperature above the smoke layer exceeds 200°C.
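A schematic sketch of the metamodel-plus-Monte-Carlo route, with an invented stand-in for the expensive fire model: a cheap quadratic response surface is fitted to a small design of experiments and then used to estimate the probability that the smoke-layer temperature exceeds 200°C. The importance sampling correction for the metamodel error is not shown.

```python
import numpy as np

rng = np.random.default_rng(13)

def expensive_model(x):
    # Stand-in for the expensive fire simulation: smoke-layer temperature (degrees C)
    # as a function of two uncertain inputs (e.g. fire load and opening factor).
    return 150.0 + 40.0 * x[:, 0] + 25.0 * x[:, 1] ** 2 + 5.0 * x[:, 0] * x[:, 1]

def features(x):
    # Quadratic response-surface basis.
    return np.column_stack([np.ones(len(x)), x, x**2, (x[:, 0] * x[:, 1])[:, None]])

# Small design of experiments evaluated on the expensive model.
x_doe = rng.standard_normal((40, 2))
coef, *_ = np.linalg.lstsq(features(x_doe), expensive_model(x_doe), rcond=None)

# Large Monte Carlo sample evaluated on the cheap metamodel only.
x_mc = rng.standard_normal((200_000, 2))
t_hat = features(x_mc) @ coef
p_exceed = np.mean(t_hat > 200.0)
print("P(T > 200 degC) estimated on the metamodel:", p_exceed)
```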