In this paper, we provide a flexible framework for optimal trading in an asset listed on different venues. We take into account the dependencies between the imbalance and spread of the venues, and allow for partial execution of limit orders at different limits as well as market orders. We present a Bayesian update of the model parameters to take into account possibly changing market conditions and propose extensions to include short/long trading signals, market impact or hidden liquidity. To solve the trader's stochastic control problem, we apply the finite difference method and also develop a deep reinforcement learning algorithm that allows more complex settings to be considered.
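As a minimal illustration of the Bayesian-updating ingredient only (not the paper's model; the venues, priors and fill probabilities below are hypothetical), one can maintain a Beta-Bernoulli belief over the probability that a posted limit order is filled on each venue:

```python
# Minimal sketch (not the paper's model): a Beta-Bernoulli update of the
# fill probability of a limit order on each venue, refreshed as executions
# are observed.  Venue names and prior counts are hypothetical.
import numpy as np

class FillProbabilityBelief:
    """Beta(alpha, beta) belief over the probability that a posted limit
    order is (at least partially) filled during one decision step."""

    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # pseudo-count of observed fills
        self.beta = beta    # pseudo-count of observed non-fills

    def update(self, filled: bool) -> None:
        # Conjugate Bayesian update: the posterior stays in the Beta family.
        if filled:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        return self.alpha / (self.alpha + self.beta)

# Usage: one belief per venue, updated from a synthetic stream of outcomes.
rng = np.random.default_rng(0)
beliefs = {"venue_A": FillProbabilityBelief(), "venue_B": FillProbabilityBelief()}
true_p = {"venue_A": 0.35, "venue_B": 0.6}
for _ in range(200):
    for venue, belief in beliefs.items():
        belief.update(rng.random() < true_p[venue])
print({v: round(b.mean(), 3) for v, b in beliefs.items()})
```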
This paper presents spatio-temporal modeling and analysis methods for fMRI data. Based on the nonlinear autoregressive with exogenous inputs (NARX) model realized by Bayesian radial basis function (RBF) neural networks, two methods (NARX-1 and NARX-2) are proposed to capture the unknown complex dynamics of brain activity. Simulation results on both synthetic and real fMRI data clearly show that the proposed schemes outperform the conventional t-test method in detecting the activated regions of the brain.
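One possible reading of the NARX-plus-RBF construction, sketched under assumed lag orders, basis widths and a synthetic signal (none of which come from the paper), is a Bayesian linear regression on RBF features of lagged outputs and exogenous inputs:

```python
# Illustrative sketch (not the authors' NARX-1/NARX-2): a NARX predictor built
# from RBF features of lagged outputs y and exogenous inputs u, fitted with
# Bayesian linear regression (Gaussian prior on the output weights).
import numpy as np

def rbf_features(lagged, centers, width):
    # phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((lagged[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def build_lagged(y, u, ny=2, nu=2):
    rows, targets = [], []
    for t in range(max(ny, nu), len(y)):
        rows.append(np.r_[y[t - ny:t], u[t - nu:t]])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(1)
u = rng.normal(size=300)
y = np.zeros(300)
for t in range(2, 300):  # synthetic nonlinear dynamics, for illustration only
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + np.tanh(u[t - 1]) + 0.05 * rng.normal()

X, target = build_lagged(y, u)
centers = X[rng.choice(len(X), size=20, replace=False)]
Phi = rbf_features(X, centers, width=1.0)

# Bayesian linear regression: posterior mean of weights with prior N(0, I/alpha).
alpha, noise_var = 1.0, 0.05 ** 2
A = Phi.T @ Phi / noise_var + alpha * np.eye(Phi.shape[1])
w_mean = np.linalg.solve(A, Phi.T @ target / noise_var)
pred = Phi @ w_mean
print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```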
The buy-price auction has been successfully used as a new channel for online sales. This paper studies an online sequential buy-price auction problem, where a seller has an inventory of identical products and needs to clear them through a sequence of online buy-price auctions such that the total profit is maximized by optimizing the buy price in each auction. We propose a dynamic programming methodology to solve this optimization problem. Since the consumers' behavior affects the seller's revenue, the consumers' strategy in this auction is first investigated. Then, two different dynamic programming models are developed to optimize the seller's decision-making: one is the clairvoyant model, corresponding to a situation where the seller has complete information about consumer valuations, and the other is the Bayesian learning model, where the seller makes optimal decisions by continuously recording and utilizing auction data during the sales process. Numerical experiments are employed to demonstrate the impacts of several key factors on the optimal solutions, including the size of inventory, the number of potential consumers, and the rate at which the seller discounts early incomes. It is shown that when the consumers' valuations are uniformly distributed, the Bayesian learning model is highly efficient if demand is sufficient.
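A stripped-down version of the clairvoyant dynamic program might look as follows; the uniform valuations, horizon, inventory size and discount factor are assumptions for illustration, and the no-sale branch ignores the bidding stage a real buy-price auction would run:

```python
# Simplified sketch of a clairvoyant dynamic program (not the paper's full
# model): in each auction the seller posts a buy price p; with probability
# 1 - p^m at least one of m consumers with Uniform(0, 1) valuations accepts.
import numpy as np

T, N, m, delta = 20, 5, 3, 0.95          # auctions, inventory, consumers, discount
prices = np.linspace(0.05, 0.95, 19)      # candidate buy prices

V = np.zeros((T + 1, N + 1))              # V[t, n]: value with n units and t auctions left
policy = np.zeros((T + 1, N + 1))
for t in range(1, T + 1):
    for n in range(1, N + 1):
        p_sale = 1.0 - prices ** m        # P(max valuation >= p) under Uniform(0, 1)
        values = p_sale * (prices + delta * V[t - 1, n - 1]) \
                 + (1.0 - p_sale) * delta * V[t - 1, n]
        V[t, n] = values.max()
        policy[t, n] = prices[values.argmax()]

print("optimal first buy price:", policy[T, N], "expected profit:", round(V[T, N], 3))
```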
Bayesian learning is applied to two-class systems. A large sample of indistinguishable objects belonging to two classes is partitioned into 2 to 5 training sets, called hypotheses, each assigning a plausible rate to objects of either class. Objects are drawn one by one from the sample, and the basic aim is to predict the type of the next object an agent draws from the original sample to test. We obtain the a posteriori probability curve of each hypothesis as objects are observed, and the prediction that the next object belongs to a specific class is read off one probability curve produced by the previous training. The methodology is applied to the manufacture of glass bottles of two classes, good or defective ("crash"). The main interest is to predict which machine produced a detected crash bottle, since the bottles become indistinguishable once they are reviewed. This is solved by fixing the a priori probabilities and taking into account all possible combinations of probability distributions over the classes.
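The discrete hypothesis update described above can be sketched as follows; the number of hypotheses, their crash rates and the uniform prior are hypothetical:

```python
# Minimal sketch of the discrete Bayesian update: a handful of hypotheses,
# each asserting a different proportion of "crash" bottles, is updated object
# by object; the predictive probability that the next bottle is a crash is
# the posterior-weighted mixture of the hypothesis rates.
import numpy as np

rates = np.array([0.0, 0.25, 0.5, 0.75, 1.0])       # P(crash) asserted by each hypothesis
posterior = np.full(len(rates), 1.0 / len(rates))   # uniform a priori probabilities

rng = np.random.default_rng(2)
observations = rng.random(50) < 0.25                # synthetic stream, true crash rate 0.25
for is_crash in observations:
    likelihood = rates if is_crash else 1.0 - rates
    posterior = posterior * likelihood
    posterior = posterior / posterior.sum()         # renormalize after each object

p_next_crash = float(np.dot(posterior, rates))      # predictive probability for next draw
print("posterior over hypotheses:", np.round(posterior, 3))
print("P(next bottle is a crash):", round(p_next_crash, 3))
```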
Mutation testing (mutation analysis), although powerful in revealing faults, is considered a computationally expensive criterion due to the high number of mutants created and the effort required to determine the equivalent mutants. Using mutation-based alternative testing criteria, it is possible to reduce the number of mutants, but it is still necessary to determine the equivalent ones. In this paper, Bayesian learning (an artificial intelligence technique used in machine learning) is investigated to define the Bayesian Learning-Based Equivalent Detection Technique (BaLBEDeT), which provides guidelines to help the tester analyze the live mutants in order to determine the equivalent ones.
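BaLBEDeT itself is presented as a set of guidelines rather than an algorithm, but as a generic illustration of applying Bayesian learning to live mutants, a Bernoulli naive Bayes over invented boolean mutant features could rank candidates for equivalence:

```python
# Generic illustration only (not BaLBEDeT): a Bernoulli naive Bayes classifier
# over hypothetical boolean mutant features, used to estimate the probability
# that a live mutant is equivalent.  Features and labels are invented.
import numpy as np

# rows: mutants; columns: e.g. [mutates_dead_code, changes_only_local_state, alters_output_path]
X_train = np.array([[1, 1, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 1, 1], [1, 1, 1]])
y_train = np.array([1, 1, 1, 0, 0, 0])   # 1 = equivalent, 0 = non-equivalent

def fit_bernoulli_nb(X, y, laplace=1.0):
    classes = np.unique(y)
    priors = np.array([(y == c).mean() for c in classes])
    # P(feature = 1 | class), with Laplace smoothing
    cond = np.array([(X[y == c].sum(0) + laplace) / ((y == c).sum() + 2 * laplace)
                     for c in classes])
    return classes, priors, cond

def predict_proba(x, classes, priors, cond):
    log_p = np.log(priors) + (x * np.log(cond) + (1 - x) * np.log(1 - cond)).sum(1)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

classes, priors, cond = fit_bernoulli_nb(X_train, y_train)
live_mutant = np.array([1, 1, 0])
proba = predict_proba(live_mutant, classes, priors, cond)
print(dict(zip(["non-equivalent", "equivalent"], np.round(proba, 3))))
```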
The naïve Bayes classifier is built on the assumption of conditional independence between the attributes given the class. The algorithm has been shown to be surprisingly robust to obvious violations of this condition, but it is natural to ask whether it is possible to further improve the accuracy by relaxing this assumption. We examine an approach where naïve Bayes is augmented by the addition of correlation arcs between attributes. We explore two methods for finding the set of augmenting arcs: a greedy hill-climbing search, and a novel, more computationally efficient algorithm that we call SuperParent. We compare these methods to TAN, a state-of-the-art distribution-based approach to finding the augmenting arcs.
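The arc-finding step can be illustrated with the conditional mutual information score that TAN-style methods use to decide which attribute pairs deserve an augmenting arc; the toy data below are hypothetical, and this is not the SuperParent algorithm itself:

```python
# Sketch of TAN-style arc scoring: conditional mutual information I(Xi; Xj | C),
# estimated from counts, ranks candidate augmenting arcs between attributes.
import numpy as np
from collections import Counter

def conditional_mutual_information(xi, xj, c):
    n = len(c)
    joint = Counter(zip(xi, xj, c))
    n_c = Counter(c)
    n_ic = Counter(zip(xi, c))
    n_jc = Counter(zip(xj, c))
    cmi = 0.0
    for (a, b, k), n_abk in joint.items():
        p_abk = n_abk / n
        # P(a, b | k) / (P(a | k) * P(b | k)) expressed with counts
        ratio = (n_abk * n_c[k]) / (n_ic[(a, k)] * n_jc[(b, k)])
        cmi += p_abk * np.log(ratio)
    return cmi

rng = np.random.default_rng(3)
C = rng.integers(0, 2, size=500)
X1 = (C + rng.integers(0, 2, size=500)) % 2       # depends on the class only
X2 = (X1 + (rng.random(500) < 0.1)) % 2           # strongly depends on X1
X3 = rng.integers(0, 2, size=500)                 # independent noise

print("I(X1;X2|C) =", round(conditional_mutual_information(X1, X2, C), 3))
print("I(X1;X3|C) =", round(conditional_mutual_information(X1, X3, C), 3))
```

The first score should come out much larger than the second, flagging X1-X2 as a pair worth connecting with an augmenting arc.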
This paper derives the multi-period fair actuarial values for six deductible insurance policies offered in today's insurance markets. The loss in any given period is generated by the Weibull distribution with a known shape parameter but an unknown scale parameter. The insurer is assumed to be a Bayesian decision maker, in the sense that he/she learns sequentially about the unknown scale parameter by observing the realizations of the filed claims. It is shown that the insurer's underlying predictive loss distributions belong to the Burr family, and the multi-period actuarially fair policy value can be derived. With a proper loading, an insurance premium can be quoted. Our major contribution is the analytical derivations of the fair actuarial values for deductible insurance policies in the presence of parameter uncertainty and Bayesian learning.
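A one-period version of the conjugacy behind this claim, under the standard (assumed here) gamma prior on the Weibull rate parameter, reads:

```latex
% Illustration with an assumed gamma prior on the Weibull rate; c is the known shape.
\[
  f(x \mid \lambda) = c\,\lambda\,x^{c-1} e^{-\lambda x^{c}}, \qquad
  \pi(\lambda) = \frac{b^{a}}{\Gamma(a)}\,\lambda^{a-1} e^{-b\lambda},
\]
\[
  f(x) = \int_0^\infty f(x \mid \lambda)\,\pi(\lambda)\,d\lambda
       = \frac{a\,c\,b^{a}\,x^{c-1}}{\left(b + x^{c}\right)^{a+1}},
\]
```

which is a Burr (Type XII) density; after observing claims $x_1,\dots,x_n$ the posterior is again gamma with $a \mapsto a + n$ and $b \mapsto b + \sum_i x_i^{c}$, so the predictive loss distribution stays in the Burr family, consistent with the abstract.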
We study the Markowitz portfolio selection problem with unknown drift vector in the multi-dimensional framework. The prior belief on the uncertain expected rate of return is modeled by an arbitrary probability law, and a Bayesian approach from filtering theory is used to learn the posterior distribution about the drift given the observed market data of the assets. The Bayesian Markowitz problem is then embedded into an auxiliary standard control problem that we characterize by a dynamic programming method, and we prove the existence and uniqueness of a smooth solution to the related semi-linear partial differential equation (PDE). The optimal Markowitz portfolio strategy is explicitly computed in the case of a Gaussian prior distribution. Finally, we measure the quantitative impact of learning (updating the strategy from observed data) compared to non-learning (using a constant drift in an uncertain context), and analyze the sensitivity of the value of information with respect to various relevant parameters of our model.
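For intuition, a discrete-time analogue of the Gaussian-prior update (an illustration rather than the paper's continuous-time filtering derivation) is the usual conjugate formula:

```latex
% With prior b ~ N(b_0, \Sigma_0) and observed period returns R_t | b ~ N(b, \Sigma):
\[
  b \mid R_1,\dots,R_T \;\sim\; \mathcal{N}\!\left(\hat b_T,\; \Lambda_T^{-1}\right),
  \qquad
  \Lambda_T = \Sigma_0^{-1} + T\,\Sigma^{-1},
  \qquad
  \hat b_T = \Lambda_T^{-1}\!\left(\Sigma_0^{-1} b_0 + \Sigma^{-1}\sum_{t=1}^{T} R_t\right),
\]
```

so the posterior mean shrinks the sample average of returns toward the prior mean, with weights determined by the prior and observation precisions.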
Climate change will push the weather experienced by people affected outside the bounds of historic norms, resulting in unprecedented weather events. But people and firms should be able to learn from their experience of unusual weather and adjust their expectations about the climate distribution accordingly. The efficiency of this learning process gives an upper bound on the rate at which adaptation can occur and is therefore important in determining the adjustment costs associated with climate change. Learning about climate change requires people to infer the state of a changing probability distribution (climate) given annual draws from that distribution (weather). If the climate is stationary, it can be inferred from the distribution of historic weather observations, but if it is changing, the inference problem is more challenging. This paper first develops different learning models, including an efficient hierarchical Bayesian model in which the observer learns whether the climate is changing and, if it is, the functional form that describes that change. I contrast this with a less efficient but simpler learning model in which observers react to past changes but are unable to anticipate future changes. I propose a general metric of learning costs based on the average, discounted squared difference between beliefs and the true climate state and use climate model output to calculate this metric for two emissions scenarios, finding substantial relative differences between learning models and scenarios but small absolute values. Geographic differences arise from spatial patterns of warming rates and natural weather variability (noise). Finally, I present results from an experimental game simulating the adaptation decision, which suggests that people are able to learn about a trending climate and respond proactively.
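The learning-cost metric can be sketched as follows; the discount factor, warming trend, noise level and the two simple learners are hypothetical stand-ins for the models compared in the paper:

```python
# Sketch of a learning-cost metric: beliefs about the climate mean are compared
# to the true, trending climate state via an average discounted squared error.
# Parameter values and the two learners are illustrative, not the paper's.
import numpy as np

def learning_cost(beliefs, truth, discount=0.97):
    weights = discount ** np.arange(len(truth))
    return float(np.average((beliefs - truth) ** 2, weights=weights))

rng = np.random.default_rng(4)
years = 80
truth = 0.03 * np.arange(years)                    # warming trend in the climate mean
weather = truth + rng.normal(0, 0.5, size=years)   # annual draws (weather) around it

# Reactive learner: belief = mean of the last 30 weather observations.
reactive = np.array([weather[max(0, t - 30):t].mean() if t else 0.0 for t in range(years)])

# Trend-aware learner: fit a line to past weather and evaluate it at the current year.
trend_aware = np.zeros(years)
for t in range(2, years):
    slope, intercept = np.polyfit(np.arange(t), weather[:t], 1)
    trend_aware[t] = slope * t + intercept

print("reactive learner cost:   ", round(learning_cost(reactive, truth), 4))
print("trend-aware learner cost:", round(learning_cost(trend_aware, truth), 4))
```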
How do risk and uncertainty in climate thresholds impact optimal short-run mitigation? This paper contrasts the near-term mitigation consequences of using an expected value, a stochastic programming, and a stochastic control model to capture the policy effects of uncertain climate thresholds. The risk of threshold outcomes increases expected climate damages. The passive learning associated with stochastic programming creates an extra incentive to mitigate promptly by reducing the damages from remaining threshold hazards. The active learning associated with stochastic control creates yet another incentive for near-term mitigation, by delaying potential threshold effects.
Accurate identification of pathways associated with cancer phenotypes (e.g., cancer subtypes and treatment outcomes) could lead to discovering reliable prognostic and/or predictive biomarkers for better patient stratification and treatment guidance. In our previous work, we have shown that non-negative matrix tri-factorization (NMTF) can be successfully applied to identify pathways associated with specific cancer types or disease classes as a prognostic and predictive biomarker. However, one key limitation of non-negative factorization methods, including various non-negative bi-factorization methods, is their limited ability to handle negative input data. For example, many types of molecular data consist of real values containing both positive and negative entries (e.g., normalized/log-transformed gene expression data, where negative values represent down-regulated gene expression) and are therefore not suitable input for these algorithms. In addition, most previous methods provide just a single point estimate and hence cannot deal with uncertainty effectively.
To address these limitations, we propose a Bayesian semi-nonnegative matrix tri-factorization method to identify pathways associated with cancer phenotypes from a real-valued input matrix, e.g., gene expression values. Motivated by semi-nonnegative factorization, we allow one of the factor matrices, the centroid matrix, to be real-valued so that each centroid can express either the up- or down-regulation of the member genes in a pathway. In addition, we place structured spike-and-slab priors (encoded with the pathways and a gene-gene interaction (GGI) network) on the centroid matrix, so that even genes not initially contained in the pathways (due to the incompleteness of current pathway databases) can be involved in the factorization in a stochastic way, specifically if those genes are connected to member genes of the pathways on the GGI network. We also present update rules for the posterior distributions in the framework of variational inference. As a full Bayesian method, our proposed method has several advantages over current NMTF methods, which are demonstrated using synthetic datasets in experiments. Using The Cancer Genome Atlas (TCGA) gastric cancer and metastatic gastric cancer immunotherapy clinical-trial datasets, we show that our method can identify biologically and clinically relevant pathways associated with the molecular subtypes and immunotherapy response, respectively. Finally, we show that the pathways identified by the proposed method could be used as prognostic biomarkers to stratify patients with distinct survival outcomes in two independent validation datasets. Additional information and code can be found at https://github.com/parks-cs-ccf/BayesianSNMTF.
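A greatly simplified, non-Bayesian sketch of the semi-nonnegative idea (a point-estimate two-factor model, not the proposed tri-factorization with spike-and-slab priors and variational updates) alternates a closed-form update of the real-valued factor with projected gradient steps on the nonnegative one:

```python
# Simplified, non-Bayesian sketch of semi-nonnegative factorization:
# X ~ F @ G.T with a real-valued "centroid" matrix F (can encode up- and
# down-regulation) and a nonnegative assignment matrix G.
import numpy as np

def semi_nmf(X, rank, iters=300, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    F = rng.normal(size=(n, rank))            # real-valued factor
    G = np.abs(rng.normal(size=(m, rank)))    # nonnegative factor
    for _ in range(iters):
        # F-update: least-squares minimizer of ||X - F G^T||_F^2 for fixed G
        F = X @ G @ np.linalg.pinv(G.T @ G)
        # G-update: one projected gradient step, keeping G >= 0
        L = 2.0 * np.linalg.norm(F.T @ F, 2) + 1e-12   # Lipschitz constant of the gradient
        grad = -2.0 * X.T @ F + 2.0 * G @ (F.T @ F)
        G = np.maximum(G - grad / L, 0.0)
    return F, G

rng = np.random.default_rng(5)
true_F = rng.normal(size=(50, 4))              # mixed-sign "centroids"
true_G = np.abs(rng.normal(size=(30, 4)))      # nonnegative memberships
X = true_F @ true_G.T + 0.01 * rng.normal(size=(50, 30))

F, G = semi_nmf(X, rank=4)
print("relative reconstruction error:",
      round(np.linalg.norm(X - F @ G.T) / np.linalg.norm(X), 4))
```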