A physician chooses not only the supply of medical treatment, contingent on the result of a diagnostic test, but also the quality of his service. Two sources of uncertainty are introduced. The first arises because, from the patient's "apparent" symptoms, only a priori estimates of the likelihood of alternative medical conditions can be inferred. These estimates can be improved upon by a diagnostic test, but inherent in such tests is the possibility of a "false positive". This second source of uncertainty is shown to be critical to the possible over- or undersupply of medical treatment. Remedial pricing structures are suggested.
A Bayesian approach is proposed to estimate unknown parameters in stochastic dynamic equations (SDEs). The Fokker–Planck equation from statistical physics is adopted to calculate the quasi-stationary probability density function. A hybrid algorithm combining the Gibbs sampler and the Metropolis–Hastings (MH) algorithm is proposed to obtain Bayesian estimates of the unknown parameters. Three simulation studies of SDEs are conducted to investigate the performance of the proposed methodology. Empirical results show that the proposed method performs well, in the sense that the Bayesian estimates of the unknown parameters are quite close to their true values and their standard deviations are quite small, and that the computational accuracy of the normalization parameters strongly affects the accuracy of the Bayesian estimates.
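As a rough illustration of the Metropolis–Hastings ingredient of such a hybrid sampler, the sketch below estimates the drift parameter of an Ornstein–Uhlenbeck SDE from a simulated path, using a Euler approximation of the transition density and a flat prior; the model, the prior, and all parameter values are assumptions of the sketch, and the paper's Fokker–Planck/quasi-stationary computation is not reproduced.

```python
# Hypothetical sketch: random-walk Metropolis-Hastings for the drift parameter
# theta of an Ornstein-Uhlenbeck SDE dX = -theta*X dt + sigma dW, using a Euler
# (Gaussian) approximation of the transition density. This only illustrates the
# MH ingredient of a Gibbs/MH hybrid sampler.
import numpy as np

rng = np.random.default_rng(0)

# --- simulate a synthetic path ---
theta_true, sigma, dt, n = 1.5, 0.5, 0.01, 2000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = x[t-1] - theta_true * x[t-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def log_lik(theta):
    """Euler-approximate log-likelihood of the path given theta (sigma known)."""
    mean = x[:-1] - theta * x[:-1] * dt
    resid = x[1:] - mean
    var = sigma**2 * dt
    return -0.5 * np.sum(resid**2 / var + np.log(2 * np.pi * var))

def log_prior(theta):
    # Flat prior on theta > 0 (an assumption of this sketch).
    return 0.0 if theta > 0 else -np.inf

# --- random-walk Metropolis-Hastings ---
theta, samples, step = 1.0, [], 0.2
log_post = log_lik(theta) + log_prior(theta)
for _ in range(5000):
    prop = theta + step * rng.standard_normal()
    log_post_prop = log_lik(prop) + log_prior(prop)
    if np.log(rng.uniform()) < log_post_prop - log_post:
        theta, log_post = prop, log_post_prop
    samples.append(theta)

samples = np.array(samples[1000:])          # drop burn-in
print(f"posterior mean {samples.mean():.3f}, sd {samples.std():.3f}")
```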
We investigate the detectability of the massive polarization mode of Gravitational Waves (GWs) in f(R) theory of gravity, associated with Gamma Ray Burst (GRB) sources. We obtain the beam pattern function of the Laser Interferometer Gravitational-Wave Observatory (LIGO) corresponding to the massive polarization of GWs and perform a Bayesian analysis to study this polarization. It is found that the massive polarization component, with a mass of 10^-22 eV/c^2, is too weak to be detected by LIGO in its current configuration.
In this paper we discuss how to implement Bayesian thinking for multistate reliability analysis. The Bayesian paradigm comprises a unified and consistent framework for analysing and expressing reliability, but in our view the standard Bayesian procedures give too much emphasis to probability models and inference on fictional parameters. We believe that there is a need for a rethinking of how to implement the Bayesian approach, and in this paper we present and discuss such a rethinking for multistate reliability analysis. The starting point of the analysis should be observable quantities expressing states of the world, not fictional parameters.
This paper discusses reliability analysis for a series system under a step-stress partially accelerated life test with Type-I progressive hybrid censoring, where independent Burr-XII distributed lifetimes are assumed for the components. In many cases the exact component causing the system failure cannot be identified and the cause of failure is masked. A Bayesian approach combined with auxiliary variables is applied to estimate the parameters of the model when the masking probability depends on the component. Further, the reliability and hazard rate functions of the system and its components are estimated under the use stress level. Simulation studies are performed to demonstrate the efficiency of the proposed methods under different masking probabilities and different progressive removal schemes.
In this paper we investigate the behavior of the market around dividend payment dates. Our empirical analysis, based on a Bayesian approach applied to Italian stock data, confirms the presence of abnormal returns at the ex-dividend date, as already documented in the literature for other markets. Calibrating a suitable model, introduced in [1] to take care of the additional randomness perturbing the market around dividend payment dates, we investigate the effects on derivative evaluation. Looking at the no-arbitrage prices of American call options written on some Italian dividend-paying stocks and comparing them with the market prices, we conclude that the effect of this additional randomness can be neglected.
Several financial markets impose daily price limits on individual securities. Once a price limit is triggered, investors observe either the limit floor or ceiling, but cannot know with certainty what the true equilibrium price would have been in the absence of such limits. The price limits in most exchanges are typically based on a percentage change from the previous day's closing price, and can therefore be expressed as return limits. We develop a Bayesian forecasting model in the presence of return limits, assuming that security returns are governed by independent and identically distributed shifted-exponential random variables with an unknown parameter. The unique features of our Bayesian model are the derivations of the posterior and predictive densities. Several numerical predictions are generated and depicted graphically. Our main theoretical result, with policy implications, is that when return-limit regulations are tightened, the price-discovery process is impeded and investors' welfare is reduced.
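To make the censoring mechanism concrete, here is a minimal sketch that computes a grid posterior for the rate of shifted-exponential returns when both a floor and a ceiling return limit censor the observations; the known shift, the uniform grid prior, and all numbers are assumptions of the sketch rather than the paper's specification.

```python
# Hypothetical sketch: posterior for the rate parameter of shifted-exponential
# returns when observations are censored at daily return limits. Assumptions
# made only for this illustration: the shift c is known, the prior on the rate
# is uniform on a grid, and the limits censor both tails.
import numpy as np

rng = np.random.default_rng(1)

c, lam_true = -0.05, 30.0               # returns r = c + Exp(lam): support [c, inf)
lo, hi = -0.04, 0.07                    # daily return limits (floor, ceiling)
r = c + rng.exponential(1.0 / lam_true, size=250)

observed = r[(r > lo) & (r < hi)]       # exact returns seen inside the limits
n_floor = np.sum(r <= lo)               # days the floor limit was triggered
n_ceil = np.sum(r >= hi)                # days the ceiling limit was triggered

lam_grid = np.linspace(1.0, 100.0, 2000)

def log_lik(lam):
    ll = np.sum(np.log(lam) - lam * (observed - c))         # uncensored days
    ll += n_floor * np.log1p(-np.exp(-lam * (lo - c)))       # log P(r <= lo)
    ll += n_ceil * (-lam * (hi - c))                         # log P(r >= hi)
    return ll

log_post = np.array([log_lik(l) for l in lam_grid])
post = np.exp(log_post - log_post.max())
d = lam_grid[1] - lam_grid[0]
post /= post.sum() * d                                       # normalize the density

post_mean = np.sum(lam_grid * post) * d
print(f"posterior mean of the rate: {post_mean:.1f} (true value {lam_true})")
```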
In this paper, we consider the estimation of the weights of tangent portfolios from the Bayesian point of view, assuming normal conditional distributions of the logarithmic returns. For diffuse and conjugate priors for the mean vector and the covariance matrix, we derive stochastic representations for the posterior distributions of the weights of the tangent portfolio and their linear combinations. Separately, we provide the mean and variance of the posterior distributions, which are of key importance for portfolio selection. The analytic results are evaluated within a simulation study, where the precision of the coverage intervals is assessed.
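A small Monte Carlo sketch of the kind of posterior computation involved, assuming a diffuse (Jeffreys) prior so that the covariance matrix has an inverse-Wishart posterior and the mean a conditional normal posterior; the paper's closed-form stochastic representations are not reproduced here, only simulation-based draws of the tangent-portfolio weights.

```python
# Hypothetical sketch: Monte Carlo draws from the posterior of tangent-portfolio
# weights under a diffuse (Jeffreys) prior for normal log-returns.
# Posterior used here (an assumption of the sketch):
#   Sigma | data ~ InvWishart(n-1, S),   mu | Sigma, data ~ N(xbar, Sigma/n).
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(2)

# synthetic weekly log-returns for p assets
p, n, rf = 4, 260, 0.0005
true_mu = np.array([0.002, 0.003, 0.001, 0.004])
A = rng.normal(scale=0.02, size=(p, p))
true_cov = A @ A.T + 0.0004 * np.eye(p)
X = rng.multivariate_normal(true_mu, true_cov, size=n)

xbar = X.mean(axis=0)
S = (X - xbar).T @ (X - xbar)

draws = []
for _ in range(2000):
    Sigma = invwishart.rvs(df=n - 1, scale=S, random_state=rng)
    mu = rng.multivariate_normal(xbar, Sigma / n)
    w = np.linalg.solve(Sigma, mu - rf)      # unnormalized tangent direction
    draws.append(w / w.sum())                # weights rescaled to sum to one
draws = np.array(draws)

print("posterior mean weights:", np.round(draws.mean(axis=0), 3))
print("posterior sd of weights:", np.round(draws.std(axis=0), 3))
```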
Risk measurement and pricing of financial positions rest on modeling assumptions, most commonly assumptions about the probability distribution of the position's outcomes. We associate a model with a probability measure and investigate model risk by considering a model space. First, we incorporate model risk into market risk measures by introducing model-weighted and superposed market risk measures. Second, we quantify model risk itself and propose axioms for model risk measures. We introduce superposed model risk measures that quantify model risk relative to a reference model, the financial institution's model of choice. Several of the risk measures we propose require a probability distribution on the model space, which can be obtained from data by applying Bayesian analysis. Examples and a case study illustrate our approaches.
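A toy sketch of the idea of weighting market risk over a model space: two fully specified candidate models, Bayesian model weights proportional to their likelihood on observed losses (equal priors assumed), a weighted Value-at-Risk, and a worst-case figure over the models used here only as a simple stand-in for a superposed measure. Everything in the sketch is an illustrative assumption, not the paper's construction.

```python
# Hypothetical sketch: a model-weighted and a worst-case VaR over a small model
# space, with Bayesian model weights obtained from data. Assumptions: two fully
# specified candidate models (normal and Student-t with matched variance),
# equal prior model probabilities, weights proportional to each model's
# likelihood on the observed losses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
losses = stats.t.rvs(df=4, scale=1.0, size=500, random_state=rng)   # observed losses

scale = losses.std(ddof=1)
models = {
    "normal": stats.norm(loc=0.0, scale=scale),
    "student_t3": stats.t(df=3, loc=0.0, scale=scale / np.sqrt(3)),  # same variance
}

# Bayesian model weights: posterior probability proportional to likelihood
log_ev = {name: m.logpdf(losses).sum() for name, m in models.items()}
mx = max(log_ev.values())
w = {name: np.exp(v - mx) for name, v in log_ev.items()}
tot = sum(w.values())
w = {name: v / tot for name, v in w.items()}

alpha = 0.99
var_by_model = {name: m.ppf(alpha) for name, m in models.items()}    # VaR_99 per model

weighted_var = sum(w[name] * var_by_model[name] for name in models)
worst_case_var = max(var_by_model.values())                          # worst case over models

print("model weights:", {k: round(v, 3) for k, v in w.items()})
print(f"weighted VaR99 = {weighted_var:.3f}, worst-case VaR99 = {worst_case_var:.3f}")
```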
Stochastic volatility models describe asset prices S_t as driven by an unobserved process capturing the random dynamics of volatility σ_t. We quantify how much information about σ_t can be inferred from asset prices S_t, in terms of Shannon's mutual information, in a twofold way: theoretically, by means of a thorough study of Heston's model; and from a machine learning perspective, by investigating a family of exponential Ornstein–Uhlenbeck (OU) processes fitted to S&P 500 data.
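A minimal sketch of the machine-learning side of such an exercise: simulate an exponential Ornstein–Uhlenbeck volatility process, generate returns, and form a plug-in histogram estimate of the mutual information between log-volatility and the return. The parameter values and the crude estimator are assumptions of the sketch; nothing here is fitted to S&P 500 data.

```python
# Hypothetical sketch: a plug-in (histogram) estimate of Shannon mutual
# information between the latent log-volatility and the observed return in a
# simulated exponential Ornstein-Uhlenbeck stochastic volatility model.
import numpy as np

rng = np.random.default_rng(4)

n, dt = 200_000, 1.0
kappa, theta, xi = 0.05, np.log(0.01), 0.05      # OU parameters for h_t = log sigma_t
h = np.empty(n)
h[0] = theta
for t in range(1, n):
    h[t] = h[t-1] + kappa * (theta - h[t-1]) * dt + xi * np.sqrt(dt) * rng.standard_normal()
returns = np.exp(h) * rng.standard_normal(n)      # r_t = sigma_t * eps_t

def mutual_information(x, y, bins=40):
    """Plug-in mutual information estimate (in nats) from a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))

print(f"MI(h_t, r_t)   ~ {mutual_information(h, returns):.3f} nats")
print(f"MI(h_t, |r_t|) ~ {mutual_information(h, np.abs(returns)):.3f} nats")
```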
The diverse nature of cerebral activity, as measured using neuroimaging techniques, was recognised long ago. Single-modality recordings are clearly limited when it comes to capturing its complex nature, and it has therefore been argued that moving to a multimodal approach will allow neuroscientists to better understand the dynamics and structure of this activity. Integrating information from different techniques, such as electroencephalography (EEG) and the blood oxygenation level dependent (BOLD) signal recorded with functional magnetic resonance imaging (fMRI), nevertheless represents an important methodological challenge. In this work, we review the work that has been done thus far to derive EEG/fMRI integration approaches. This leads us to inspect the conditions under which such an integration approach could work or fail, and to disclose the types of scientific questions one could (and could not) hope to answer with it.
The ability to predict coastline evolution in the long term is of great importance to coastal managers for social, economic and environmental risk assessment purposes. A Bayesian statistical approach to the problem of modeling and predicting the occurrence of major failure events in soft coastal cliffs is presented. This approach combines available data with expert judgement about the behavior of the cliff under study. Expert judgement is described in the form of expected values for cliff failure occurrences, confidence or degree of belief and subjective probability of occurrence of extremes. WinBUGS software is used to derive predictive probabilities, estimates of the rate of failure and standard deviations, which combine all information. The statistical method is illustrated in the cases of cliffs situated in Herne Bay and Scarborough, UK. The results can be used in a risk-based assessment of coastal cliff recession.
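As an indication of how expert judgement and data can be combined in such a setting, the following sketch uses a conjugate Poisson–Gamma model: the expert's expected failure rate and a "confidence" expressed as equivalent years of observation define the Gamma prior, the observed counts update it, and the predictive distribution of next year's count is negative binomial. This conjugate stand-in, and all its numbers, are assumptions of the sketch; the paper itself fits its model with WinBUGS.

```python
# Hypothetical sketch: combining expert judgement with observed cliff-failure
# counts in a conjugate Poisson-Gamma model. The expert's expected failures per
# year and a "confidence" (effective years of pseudo-observation) define the
# Gamma prior; the predictive for next year's count is negative binomial.
import numpy as np
from scipy import stats

expert_rate = 1.5        # expert's expected failures per year (assumption)
expert_confidence = 4.0  # judgement treated as ~4 years of data (assumption)
a0, b0 = expert_rate * expert_confidence, expert_confidence   # Gamma(a0, b0) prior

failures_per_year = np.array([2, 0, 1, 3, 1, 0, 2, 1])        # illustrative record
a1 = a0 + failures_per_year.sum()
b1 = b0 + len(failures_per_year)

post_mean = a1 / b1
post_sd = np.sqrt(a1) / b1
print(f"posterior failure rate: {post_mean:.2f} +/- {post_sd:.2f} per year")

# Predictive distribution of next year's failure count: negative binomial
pred = stats.nbinom(n=a1, p=b1 / (b1 + 1.0))
ks = np.arange(0, 6)
print("P(k failures next year):", dict(zip(ks.tolist(), np.round(pred.pmf(ks), 3))))
print(f"P(at least one major failure next year) = {1 - pred.pmf(0):.3f}")
```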
We propose and estimate a new class of equity return models that incorporate scale mixtures of the skew-normal distribution as the error distribution in the standard stochastic volatility framework. The main advantage of our models is that they can simultaneously accommodate the skewness, heavy-tailedness, and leverage effect of equity index returns observed in the data. The proposed models are flexible and parsimonious, and include many asymmetrically heavy-tailed error distributions, such as the skew-t and skew-slash distributions, as special cases. We estimate a variety of specifications of our models using Bayesian Markov chain Monte Carlo (MCMC) methods, with data on daily returns of the S&P 500 index over 1987–2009. We find that the proposed models outperform existing models of index returns.
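To make the model class concrete, the sketch below simulates only the data-generating side of such a model, with skew-t return errors obtained as a scale mixture of skew-normals and an AR(1) log-variance; the leverage effect is omitted for brevity, all parameter values are illustrative, and the Bayesian MCMC estimation is not reproduced.

```python
# Hypothetical sketch: simulating a stochastic volatility model whose return
# errors follow a scale mixture of skew-normals (the skew-t special case: a
# skew-normal draw divided by the square root of a chi^2/nu mixing variable).
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

n = 2500
mu_h, phi, sigma_eta = -9.0, 0.97, 0.15     # AR(1) log-variance h_t
alpha_skew, nu = -3.0, 8.0                  # skewness and tail-thickness parameters

h = np.empty(n)
r = np.empty(n)
h[0] = mu_h
for t in range(n):
    z = stats.skewnorm.rvs(alpha_skew, random_state=rng)   # skew-normal shock
    w = rng.chisquare(nu) / nu                             # mixing variable
    eps = z / np.sqrt(w)                                   # skew-t return error
    r[t] = np.exp(h[t] / 2.0) * eps
    if t < n - 1:
        h[t + 1] = mu_h + phi * (h[t] - mu_h) + sigma_eta * rng.standard_normal()

print(f"sample skewness of returns: {stats.skew(r):.2f}")
print(f"sample excess kurtosis:     {stats.kurtosis(r):.2f}")
```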
This paper studies the dynamics of cryptocurrency volatility using a stochastic volatility model with simultaneous and correlated jumps in returns and volatility. We estimate the model with daily data on four popular cryptocurrencies, using an efficient sequential learning algorithm that allows for learning about multiple unknown model parameters simultaneously. We find that these cryptocurrencies have quite different volatility dynamics. In particular, they exhibit different return-volatility relationships: while Ethereum and Litecoin show a negative relationship, Chainlink displays a positive one and, interestingly, Bitcoin's changes from negative to positive in June 2016. We also provide evidence that the sequential learning algorithm helps better detect large jumps in the cryptocurrency market in real time. Overall, incorporating volatility jumps helps better capture the dynamic behavior of highly volatile cryptocurrencies.
It is well known that the core concept of Jain logic, the conditional holistic principle (known as Anekāntavāda), originated in the ancient Jain literature. From this core concept, the conditional predication principle (known as Syādvāda) was developed and has since become one of the important areas of ancient Jain logic in which some aspects have been well studied in modern terminology.
However, some aspects of Jain logic have not yet been explored; these are dealt with in this article. The article gives historical details on the five-fold Jain syllogism (known as Pañcāvayavavākya) as part of a comprehensive analysis, thereby unifying the literature and bringing the concept in line with modern mathematical terminology. Its five components, namely proposition (Pratijñā), reason (Hetu), example (Udāharaṇam), application (Upanaya), and conclusion (Nigamanam), are discussed in depth to show how Pañcāvayavavākya matches the basis of current statistical inference. In particular, Syādvāda in combination with Pañcāvayavavākya yields the Bayesian approach.
Further, a formal and succinct notation is introduced to describe Syādvāda and to illustrate the modern mathematical connections, while Anekāntavāda is shown to have stratified sampling as a corollary.
Finally, an element of Syādvāda inherent in Turing's cryptographic work towards breaking the Enigma code, work that could fairly be called 'enigmatic statistics', is revealed.
In cancer drug development, demonstrated efficacy in tumor xenograft experiments on severe combined immunodeficient mice that are grafted with human tumor tissues or cells is an important step in bringing a promising compound to humans. These experiments have also demonstrated a good correlation between efficacy and clinical outcomes. A key outcome variable is the tumor volume measured over a period of time while mice are treated with certain treatment regimens. Some statistical methods have been developed in the literature to analyze such data from xenograft experiments and evaluate the efficacy of a new drug. However, a mouse may die during the experiment or may be sacrificed when its tumor volume reaches a threshold. A tumor may also be suppressed, so that its tumor burden (volume) becomes undetectable (e.g., < 0.01 cm³) for some time, but later regrow. Thus, incomplete repeated measurements arise. Because of the small sample sizes in these experiments, asymptotic inferences are usually questionable. In addition, were the tumor-bearing mice not treated, the tumors would keep growing until the mice die or are sacrificed. This intrinsic growth of tumors in the absence of treatment constrains the parameters in the statistical model and causes further difficulties in statistical analysis. In this paper, we review recent advances in statistical inference accounting for these statistical challenges. Furthermore, we develop a multivariate random effects model with constrained parameters for multiple tumors in xenograft experiments. A real xenograft study of the antitumor agent exemestane, an aromatase inhibitor, combined with tamoxifen against postmenopausal breast cancer is analyzed using the proposed methods.
The aim of this paper is to create a platform for developing an interface between the mathematical theory of reliability and the mathematics of finance. We are able to do this because there exists an isomorphic relationship between the survival function of reliability and the asset pricing formula of fixed-income investments. This connection suggests that the exponentiation formula of reliability theory and survival analysis be reinterpreted from a more encompassing perspective, namely, as the law of a diminishing resource. The isomorphism also helps us to characterize the asset pricing formula in non-parametric classes of functions and to obtain its crossing properties. The latter provides bounds and inequalities on investment horizons. More generally, the isomorphism enables us to expand the scope of mathematical finance and of mathematical reliability by importing ideas and techniques from one discipline to the other. As an example of this interchange we consider interest rate functions that are determined up to an unknown constant, so that the set-up results in a Bayesian formulation. We may also model interest rates as “shot-noise processes”, often used in reliability, and conversely, the failure rate function as a Lévy process, popular in mathematical finance. A consideration of the shot-noise process for modelling interest rates appears to be new.
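The isomorphism referred to can be displayed in one line, in standard notation rather than the paper's own: the survival function driven by a failure-rate function h and the price of a unit zero-coupon bond driven by an interest-rate function r are the same exponentiation formula.

```latex
% Survival function of reliability vs. the discount (asset pricing) formula of a
% fixed-income investment: both are instances of the exponentiation formula,
% i.e. the "law of a diminishing resource". Notation is standard, not the paper's.
\[
  \underbrace{\bar{F}(t) \;=\; \exp\!\Big(-\!\int_0^t h(u)\,\mathrm{d}u\Big)}_{\text{survival function, failure rate } h}
  \qquad\Longleftrightarrow\qquad
  \underbrace{P(0,t) \;=\; \exp\!\Big(-\!\int_0^t r(u)\,\mathrm{d}u\Big)}_{\text{price of a unit zero-coupon bond, interest rate } r}
\]
```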
There have been great strides in shape analysis in this decade. Pattern recognition, image analysis, and morphometrics have been the major contributors to this area, but now bioinformatics is driving the subject as well, and new challenges are emerging; the methods of pattern recognition are also evolving for bioinformatics. Shape analysis for labelled landmarks is now moving to the new challenge of unlabelled landmarks, motivated by these new applications. ICP, EM algorithms, etc. are widely used in image analysis, but now Bayesian methods are coming into the arena. Dynamic Bayesian networks are another development. We will discuss the problems of averaging, image deformation, projective shape and Bayesian alignment. The aim of this talk is to convince scientists that statistical shape analysis is pivotal to modern pattern recognition.
The mathematical foundations of statistics as a separate discipline were laid by Fisher, Neyman and Wald during the second quarter of the last century. Subsequent research in statistics and the courses taught in universities are mostly based on the guidelines set by these pioneers. Statistics is used in some form or other in all areas of human endeavor, from scientific research to the optimum use of resources for social welfare, prediction and decision-making. However, there are controversies in statistics, especially concerning the choice of a model for data, the use of prior probabilities and the role of subject-matter judgments by experts. The same data analyzed by different consulting statisticians may lead to different conclusions.
What is the future of statistics in the present millennium, dominated by information technology encompassing the whole of communications, interaction with intelligent systems, massive databases, and complex information processing networks? The current statistical methodology, based on simple probabilistic models developed for the analysis of small data sets, appears to be inadequate to meet the needs of customers for quick online processing of data and making the information available for practical use. Some methods are being put forward in the name of data mining for such purposes. A broad review of the current state of the art in statistics, its merits and demerits, and possible future developments will be presented.