The classic distributed computation is done by atoms, molecules or spins in vast numbers, each equipped with nothing more than the knowledge of their immediate neighborhood and the rules of statistical mechanics. These agents, 10^23 or more, are able to form liquids and solids from gases, realize extremely complex ordered states, such as liquid crystals, and even decode encrypted messages. We describe a study done for a sensor-array "challenge problem" in which we base our approach on old-fashioned simulated annealing to accomplish target acquisition and tracking under the rules of statistical mechanics. We believe the many additional constraints that occur in the real problem can be folded, step by step, into this stochastic approach. The results are applicable to other network management problems at scales where a distributed solution is mandatory.
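As a concrete illustration of the Metropolis-style dynamics this abstract invokes, the following is a minimal simulated annealing sketch in Python; the sensor positions, target locations, and assignment energy are hypothetical stand-ins, since the abstract does not specify the actual cost function of the challenge problem.

```python
import math
import random

def simulated_annealing(energy, neighbor, state, t0=1.0, cooling=0.995, steps=10000):
    """Generic simulated annealing with a geometric cooling schedule."""
    best, best_e = state, energy(state)
    e, t = best_e, t0
    for _ in range(steps):
        cand = neighbor(state)
        ce = energy(cand)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-dE / T), mimicking thermal fluctuations.
        if ce <= e or random.random() < math.exp(-(ce - e) / t):
            state, e = cand, ce
            if e < best_e:
                best, best_e = state, e
        t *= cooling
    return best, best_e

# Hypothetical instance: assign 20 sensors to 4 targets, minimizing the
# total squared sensor-to-target distance.
sensors = [(random.random(), random.random()) for _ in range(20)]
targets = [(0.2, 0.2), (0.8, 0.2), (0.2, 0.8), (0.8, 0.8)]

def energy(assign):
    return sum((sx - targets[a][0])**2 + (sy - targets[a][1])**2
               for (sx, sy), a in zip(sensors, assign))

def neighbor(assign):  # reassign one randomly chosen sensor
    a = list(assign)
    a[random.randrange(len(a))] = random.randrange(len(targets))
    return tuple(a)

state = tuple(random.randrange(len(targets)) for _ in sensors)
best, best_e = simulated_annealing(energy, neighbor, state)
print(best_e)
```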
An important issue in deregulated power systems is the provision of ancillary services. This paper presents a Mixed Integer Non-Linear Programming (MINLP) formulation for the provision of ancillary services (Automatic Generation Control or AGC, spinning, non-spinning, and operational reserves) as well as energy in simultaneous auctions under a pool-based aggregated market scheme. The uncertainty of the market clearing procedure is represented by a stochastic model. Thus, the market clearing process is expressed as a stochastic optimization problem in which the expected objective function is minimized while meeting AC power flow constraints, system reserve requirements, and Lost Opportunity Cost (LOC). The model is applied to the IEEE 24-bus Reliability Test System (IEEE 24-bus RTS), and simulation studies are carried out to examine the effectiveness of the proposed method.
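The paper's stochastic MINLP with AC power flow is far richer than anything reproducible from the abstract alone; the sketch below, using the PuLP modeling library, shows only the scenario-based expected-cost skeleton of energy/reserve co-optimization, with made-up units, costs, and demands, and with the unit-commitment binaries and network constraints omitted.

```python
from pulp import LpMinimize, LpProblem, LpVariable, lpSum

# Hypothetical 3-unit system: (energy cost $/MWh, reserve cost $/MW, capacity MW).
units = {"G1": (20, 2, 100), "G2": (25, 3, 80), "G3": (40, 5, 60)}
scenarios = {"low": (0.5, 150), "high": (0.5, 190)}  # (probability, realized demand MW)

m = LpProblem("energy_reserve_coopt", LpMinimize)
e = {g: LpVariable(f"e_{g}", 0, units[g][2]) for g in units}   # scheduled energy
r = {g: LpVariable(f"r_{g}", 0, units[g][2]) for g in units}   # reserve capacity
a = {(g, s): LpVariable(f"a_{g}_{s}", 0)                       # deployed reserve
     for g in units for s in scenarios}

# Expected cost: energy + reserve capacity payments + expected deployment cost.
m += (lpSum(units[g][0] * e[g] + units[g][1] * r[g] for g in units)
      + lpSum(scenarios[s][0] * units[g][0] * a[g, s] for g in units for s in scenarios))

m += lpSum(e[g] for g in units) == 150       # base-case energy balance
m += lpSum(r[g] for g in units) >= 45        # system reserve requirement
for g in units:
    m += e[g] + r[g] <= units[g][2]          # capacity limit
    for s in scenarios:
        m += a[g, s] <= r[g]                 # deploy at most the reserved amount
for s in scenarios:
    m += lpSum(e[g] + a[g, s] for g in units) >= scenarios[s][1]  # scenario balance

m.solve()
print({v.name: v.value() for v in m.variables()})
```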
In this paper, we formulate a new search model for detecting two related targets that are randomly located in a finite set of different cells or move randomly through those cells. We assume that the search effort at each fixed number of time intervals is a random variable with a normal distribution. Rather than minimizing the expected effort of detecting the two targets, the proposed mathematical model allows us to treat the search effort as a function with a fuzzy (discount) parameter. Another feature of this paper is its consideration of a fuzzy extension of a stochastic optimization problem. We present an algorithm that gives the optimal distribution of effort, maximizing the discounted effort reward of finding the targets. Two numerical examples illustrate the effectiveness of this model, with parameters set to represent situations such as detecting enemy ships, fighters, and landmines in wartime.
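As a crisp (non-fuzzy) simplification of the effort-allocation idea, the sketch below greedily distributes a discrete search budget across cells to maximize detection probability under an assumed exponential detection function; the priors, rates, and budget are hypothetical, and the paper's fuzzy discount parameter is not modeled.

```python
import heapq
import math

def allocate_effort(priors, lam, budget):
    """Greedily assign discrete effort units to cells to maximize
    sum_i p_i * (1 - exp(-lam_i * x_i)); greedy is optimal here because
    each cell's detection gain is concave in the effort x_i."""
    x = [0] * len(priors)
    # Max-heap (negated gains) of the marginal gain of one more unit per cell.
    heap = [(-p * (1 - math.exp(-l)), i) for i, (p, l) in enumerate(zip(priors, lam))]
    heapq.heapify(heap)
    for _ in range(budget):
        _, i = heapq.heappop(heap)
        x[i] += 1
        # Marginal gain of the next unit in cell i.
        gain = priors[i] * (math.exp(-lam[i] * x[i]) - math.exp(-lam[i] * (x[i] + 1)))
        heapq.heappush(heap, (-gain, i))
    return x

print(allocate_effort([0.5, 0.3, 0.2], [0.8, 1.0, 1.2], budget=10))
```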
Real-time decision making has attracted increasing interest as a means of operating complex systems efficiently. The main challenge in achieving real-time decision making is to understand how to develop next-generation optimization procedures that can work efficiently using (i) real data coming from a large, complex dynamical system and (ii) available simulation models that reproduce the system dynamics. While this paper focuses on a different problem from those addressed in the reinforcement learning (RL) literature, the methods proposed here can also serve as support in a sequential setting. The result of this work is the new Generalized Ordinal Learning Framework (GOLF), which utilizes simulated data, interpreting them as low-accuracy information to be intelligently collected offline and utilized online once the scenario is revealed to the user. GOLF supports real-time decision making on complex dynamical systems once a specific scenario is realized. We show preliminary results of the proposed techniques that motivate the authors to pursue the presented ideas further.
Stochastic computer simulations enable users to gain new insights into complex physical systems. Optimization is a common problem in this context: users seek the model inputs that maximize the expected value of an objective function. The objective function, however, is time-intensive to evaluate and cannot be measured directly; the stochastic nature of the model means that individual realizations are corrupted by noise. More formally, we consider the problem of optimizing the expected value of an expensive black-box function with continuously differentiable mean, whose observations are corrupted by Gaussian noise. We present parallel simultaneous perturbation optimization (PSPO), which extends a well-known stochastic optimization algorithm, simultaneous perturbation stochastic approximation, in several important ways. Our modifications allow the algorithm to take full advantage of parallel computing resources, such as high-performance cloud computing. The resulting PSPO algorithm takes fewer time-consuming iterations to converge, automatically chooses the step size, and can vary the error tolerance from step to step. Theoretical results are supported by a numerical example.
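For reference, here is a minimal sketch of the underlying simultaneous perturbation stochastic approximation recursion, averaging several independent perturbation pairs per iteration as a rough stand-in for PSPO's parallel evaluations; the gain sequences and test function are illustrative assumptions, not the paper's tuned choices.

```python
import numpy as np

def spsa(f, theta, iters=200, pairs=4, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    """SPSA with several independent perturbation pairs per iteration;
    the pairs could be evaluated in parallel on separate workers."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta, dtype=float)
    for k in range(1, iters + 1):
        ak = a / k**alpha          # decaying step-size sequence
        ck = c / k**gamma          # decaying perturbation-size sequence
        grad = np.zeros_like(theta)
        for _ in range(pairs):
            delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher directions
            grad += (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck) * (1 / delta)
        theta -= ak * grad / pairs  # average the gradient estimates
    return theta

# Noisy quadratic oracle: minimize E[ ||x - 3||^2 + noise ].
f = lambda x: np.sum((x - 3.0)**2) + np.random.normal(scale=0.1)
print(spsa(f, np.zeros(5)))
```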
Stochastic optimization models based on risk-averse measures are of essential importance in financial management and business operations. This paper studies new algorithms for a popular class of these models, namely, the mean-deviation models in multistage decision making under uncertainty. It is argued that these problems enjoy a scenario-decomposable structure, which can be exploited in an efficient progressive hedging procedure. In cases where linkage constraints arise in reformulations of the original problem, a Lagrangian progressive hedging algorithm can be used to solve the reformulated problem. Convergence results for the algorithms are obtained based on recent developments in the Lagrangian form of stochastic variational inequalities. Numerical results are provided to show the effectiveness of the proposed algorithms.
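A minimal progressive hedging sketch on a toy two-stage quadratic, where the scenario subproblems admit closed-form solutions; the paper's mean-deviation objectives and Lagrangian reformulations are considerably richer.

```python
import numpy as np

# Scenarios: probability p_s and target xi_s, with f_s(x) = (x - xi_s)^2.
p = np.array([0.3, 0.5, 0.2])
xi = np.array([1.0, 2.0, 4.0])
rho = 1.0                       # proximal penalty parameter

x = xi.copy()                   # scenario copies of the first-stage decision
w = np.zeros_like(xi)           # multipliers on the nonanticipativity constraint
for _ in range(50):
    xbar = p @ x                # implementable (probability-averaged) decision
    w += rho * (x - xbar)       # multiplier update
    # Scenario subproblems: argmin_x (x - xi)^2 + w*x + (rho/2)*(x - xbar)^2,
    # available in closed form for this quadratic toy model.
    x = (2 * xi - w + rho * xbar) / (2 + rho)

# The consensus should approach argmin_x sum_s p_s (x - xi_s)^2 = p @ xi = 2.1.
print("consensus decision:", p @ x)
```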
With fixed running times at sections, the cooperative scheduling (CS) approach optimizes the dwell times and the headway time to coordinate the accelerating and braking processes of trains, such that the recovery energy generated by braking trains can be used by accelerating trains. In practice, trains often incur stochastic departure delays at busy stations. To reduce the divergence from the given timetable, the operating company generally adjusts the running times at the following sections. Focusing on the randomness of delay times and running times, this paper proposes a stochastic cooperative scheduling (SCS) approach. Firstly, we estimate the conversion and transmission losses of recovery energy and then formulate a stochastic expected value model to maximize the utilization of the recovery energy. Furthermore, we design a binary-coded genetic algorithm to solve for the optimal timetable. Finally, we conduct experimental studies based on operation data from the Beijing Yizhuang subway line. The results show that the SCS approach saves 15.13% energy compared with the current timetable and 8.81% compared with the CS approach.
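A minimal binary-coded genetic algorithm sketch of the kind described; the chromosome decoding and the fitness are hypothetical stand-ins, since the real objective would be the expected recovery-energy utilization from the SCS model.

```python
import random

def binary_ga(fitness, n_bits, pop_size=40, gens=100, pc=0.8, pm=0.02, seed=1):
    """Minimal binary-coded GA: tournament selection, one-point crossover,
    bit-flip mutation, and two-individual elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        new = sorted(pop, key=fitness, reverse=True)[:2]          # elitism
        while len(new) < pop_size:
            p1, p2 = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            c1, c2 = p1[:], p2[:]
            if rng.random() < pc:                                 # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for c in (c1, c2):                                    # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < pm:
                        c[i] ^= 1
            new += [c1, c2]
        pop = new[:pop_size]
    return max(pop, key=fitness)

# Stand-in fitness: decode four 4-bit dwell-time offsets (0-15 s) and reward
# offsets near 8 s; the real SCS fitness would be expected energy utilization.
def fitness(bits):
    dwell = [int("".join(map(str, bits[i:i + 4])), 2) for i in range(0, 16, 4)]
    return -sum(abs(d - 8) for d in dwell)

print(binary_ga(fitness, n_bits=16))
```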
The problem of determining the European-style option price in incomplete markets is examined within the framework of stochastic optimization. An analytic method based on stochastic optimization is developed that gives the general formalism for determining the option price and the optimal trading strategy (optimal feedback control) that reduces the total risk inherent in writing the option. The cases involving transaction costs, stochastic volatility with uncertainty, a stochastic adaptive process, and a forecasting process are considered. A software package for option pricing in incomplete markets is developed, and the results of numerical simulations are presented.
We investigate the growth optimal strategy over a finite time horizon for a stock and bond portfolio in an analytically solvable multiplicative Markovian market model. We show that the optimal strategy consists in holding the amount of capital invested in stocks within an interval around an ideal optimal investment. The size of the holding interval is determined by the intensity of the transaction costs and the time horizon.
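A simulation sketch of such a holding-interval policy, assuming a Kelly-type target fraction f* = (mu - r)/sigma^2 as the "ideal optimal investment" and a proportional transaction cost; all parameter values are illustrative.

```python
import numpy as np

def band_rebalance(mu=0.10, r=0.02, sigma=0.3, cost=0.002, delta=0.05,
                   T=10.0, n=25200, seed=7):
    """Simulate a stock/bond portfolio that only rebalances when the stock
    fraction leaves an interval of half-width delta around the target f*."""
    rng = np.random.default_rng(seed)
    dt = T / n
    f_star = (mu - r) / sigma**2
    stock, bond = f_star, 1.0 - f_star          # wealth normalized to 1
    for z in rng.standard_normal(n):
        stock *= np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        bond *= np.exp(r * dt)
        w = stock + bond
        f = stock / w
        if abs(f - f_star) > delta:             # trade back to the band edge only
            target = f_star + np.sign(f - f_star) * delta
            trade = (f - target) * w
            stock -= trade
            bond += trade - cost * abs(trade)   # proportional transaction cost
    return np.log(stock + bond) / T             # realized exponential growth rate

print(f"growth rate: {band_rebalance():.4f}")
```

Widening delta trades growth for fewer transactions, which is exactly the trade-off the abstract attributes to the cost intensity and the time horizon.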
A new one-parameter family of risk measures called Conditional Drawdown (CDD) has been proposed. These measures of risk are functionals of the portfolio drawdown (underwater) curve considered in active portfolio management. For a given value of the tolerance parameter α, in the case of a single sample path, the drawdown functional is defined as the mean of the worst (1 − α) · 100% of drawdowns. The CDD measure generalizes the notion of the drawdown functional to a multi-scenario case and can be considered a generalization of the deviation measure to a dynamic case. The CDD measure includes the Maximal Drawdown and Average Drawdown as its limiting cases. Mathematical properties of the CDD measure have been studied, and efficient optimization techniques for CDD computation and for solving asset-allocation problems with a CDD measure have been developed. The CDD family of risk functionals is similar to Conditional Value-at-Risk (CVaR), which is also called Mean Shortfall, Mean Excess Loss, or Tail Value-at-Risk. Some recommendations on how to select the optimal risk functionals for obtaining practically stable portfolios have been provided. A real-life asset-allocation problem has been solved using the proposed measures. For this particular example, the optimal portfolios for the cases of Maximal Drawdown, Average Drawdown, and several intermediate cases between these two have been found.
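The single-path definition translates directly into code; below is a short sketch computing the underwater curve and the CDD at several tolerance levels, including the maximal and average drawdowns as the limiting cases noted above. The path data are synthetic.

```python
import numpy as np

def conditional_drawdown(wealth, alpha=0.9):
    """CDD_alpha of a single sample path: the mean of the worst
    (1 - alpha) * 100% of the drawdown (underwater) curve."""
    wealth = np.asarray(wealth, dtype=float)
    drawdown = np.maximum.accumulate(wealth) - wealth   # underwater curve
    k = max(1, int(np.ceil((1 - alpha) * len(drawdown))))
    return np.sort(drawdown)[-k:].mean()                # mean of the k deepest points

rng = np.random.default_rng(0)
path = np.cumsum(rng.normal(0.05, 1.0, size=1000))      # synthetic uncompounded P&L
print("MaxDD :", conditional_drawdown(path, alpha=1 - 1 / len(path)))  # alpha -> 1
print("CDD90 :", conditional_drawdown(path, alpha=0.9))
print("AvgDD :", conditional_drawdown(path, alpha=0.0))                # alpha = 0
```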
We study nonconvex (composite) optimization and learning problems in which the decision variables can be split into blocks. Random block projection is a popular technique for handling this kind of problem, owing to its remarkable reduction of the computational cost of the projection. However, this powerful method has not been well developed for the situation in which first-order information is prohibited and only zeroth-order information is available. In this paper, we develop several classes of zeroth-order stochastic block coordinate methods. Zeroth-order stochastic block coordinate descent (ZS-BCD) is proposed for solving unconstrained nonconvex optimization problems. For composite optimization, we establish zeroth-order stochastic block mirror descent (ZS-BMD) and its associated two-phase method, which achieves a complexity bound for the composite optimization problem. Furthermore, we establish a zeroth-order stochastic block coordinate conditional gradient (ZS-BCCG) method for nonconvex (composite) optimization. With the ZS-BCCG method, in each iteration only an (approximate) linear programming subproblem needs to be solved on a random block, instead of a rather costly projection subproblem on the whole decision space, in contrast to existing traditional stochastic approximation methods. We then propose an approximate ZS-BCCG method and a corresponding two-phase ZS-BCCG method. This is also the first time that a two-phase BCCG method has been developed to carry out the complexity analysis of a nonconvex composite optimization problem.
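A minimal sketch of the ZS-BCD idea, assuming a two-evaluation Gaussian-smoothing gradient estimator restricted to a randomly chosen block; the smoothing radius, step size, block partition, and test function are illustrative assumptions.

```python
import numpy as np

def zs_bcd(f, x0, blocks, iters=2000, step=0.05, mu=1e-4, seed=0):
    """Zeroth-order stochastic block coordinate descent (minimal sketch):
    pick a random block, form a Gaussian-smoothing gradient estimate
    restricted to that block, and take a gradient step on the block."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        blk = blocks[rng.integers(len(blocks))]
        u = np.zeros_like(x)
        u[blk] = rng.standard_normal(len(blk))   # perturb the chosen block only
        g = (f(x + mu * u) - f(x)) / mu * u      # zeroth-order difference estimate
        x[blk] -= step * g[blk]
    return x

# Toy nonconvex objective with a noisy zeroth-order oracle.
def f(x):
    return np.sum(x**2) + 0.5 * np.sin(np.sum(x)) + np.random.normal(scale=0.01)

blocks = [np.arange(0, 5), np.arange(5, 10)]
print(zs_bcd(f, np.ones(10), blocks))
```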
The present paper proposes a new model of bank capital adequacy. Our analysis relies heavily on the dynamics of the capital adequacy ratio (CAR) as computed from balance sheet items, namely loans, securities, and bank regulatory capital, in a stochastic dynamic setting. In addition, an attempt is made to demonstrate how the CAR can be optimized in terms of asset allocation and the rate at which the bank remains solvent. Consequently, a dynamic programming principle is applied to solve the Hamilton–Jacobi–Bellman (HJB) equation explicitly in the case of a CRRA utility function. The parameters are estimated by the maximum likelihood method, using Tunisian data for the period 2004–2012. Computations are carried out in MATLAB. The simulation results show the model's relevance as a decision tool for bank risk management. Moreover, the relationship between macroeconomic activity and bank capital adequacy is also discussed.
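The abstract does not specify the CAR dynamics, so as a hedged illustration the block below records the standard Merton-type HJB equation and its CRRA closed form, which is the canonical instance of the kind of explicit solution described:

```latex
% Merton-type HJB sketch with CRRA utility U(x) = x^{1-\gamma}/(1-\gamma);
% a stand-in for the paper's CAR optimization, whose exact dynamics the
% abstract does not specify.
\[
  \sup_{\pi}\Big\{ V_t + \big(r + \pi(\mu - r)\big)\, x V_x
  + \tfrac{1}{2}\pi^2 \sigma^2 x^2 V_{xx} \Big\} = 0,
  \qquad V(T,x) = \frac{x^{1-\gamma}}{1-\gamma}.
\]
With the ansatz $V(t,x) = g(t)\, x^{1-\gamma}/(1-\gamma)$, the first-order
condition yields the constant optimal allocation
\[
  \pi^{*} = \frac{\mu - r}{\gamma \sigma^{2}} .
\]
```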
For hybrid geometric Brownian motion stock liquidation models, it has been proved that the optimal selling policy is of threshold type and can be obtained by solving a set of two-point boundary value problems. The total number of equations to be solved equals the number of states of the underlying Markov chain. To reduce the computational burden, this work develops Monte Carlo algorithms, recursive and of stochastic optimization type, to approximate the optimal threshold values in stock trading. Asymptotic properties of the proposed algorithms, such as convergence and rates of convergence, are then developed.
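As a stand-in for the paper's recursive Monte Carlo procedure (the regime-switching Markov chain is omitted here), the sketch below runs a Kiefer-Wolfowitz-type stochastic approximation on simulated GBM paths to estimate a single selling threshold; all market parameters are illustrative.

```python
import numpy as np

def payoff(theta, mu=0.05, sigma=0.3, rho=0.08, s0=1.0, T=5.0, n=1250, rng=None):
    """Discounted proceeds from selling at the first time the simulated GBM
    price reaches the threshold theta (or at the horizon T otherwise)."""
    dt = T / n
    s = s0
    for i in range(n):
        s *= np.exp((mu - 0.5 * sigma**2) * dt
                    + sigma * np.sqrt(dt) * rng.standard_normal())
        if s >= theta:
            return np.exp(-rho * (i + 1) * dt) * s
    return np.exp(-rho * T) * s

rng = np.random.default_rng(3)
theta = 1.2                       # initial guess for the selling threshold
for k in range(1, 501):           # Kiefer-Wolfowitz recursion
    a_k, c_k = 0.5 / k, 0.2 / k**0.25
    grad = (payoff(theta + c_k, rng=rng) - payoff(theta - c_k, rng=rng)) / (2 * c_k)
    theta += a_k * grad           # ascend the estimated gradient
print(f"estimated selling threshold: {theta:.3f}")
```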
A stochastic optimization problem with incomplete information is considered. Optimal solutions are selected using the minimax quantile criterion. This problem is related to a confidence estimation problem for a random vector with incompletely known distribution. Generalized confidence regions are used as confidence estimates for a statistically uncertain vector. The quantile stochastic optimization problem under incomplete information is solved by an optimal choice of the generalized confidence region.
This work considers trading a mean-reverting asset. The strategy is to determine a low price at which to buy and a high price at which to sell so that the expected return is maximized. A slippage cost is imposed on each transaction. Our effort is devoted to developing a recursive stochastic approximation algorithm to estimate the desired selling and buying prices. The advantage of this approach is that it is model free: only observed stock prices are required, so it can be performed online. After briefly presenting the asymptotic results, we use simulations and real market data to demonstrate the performance of the proposed algorithm.
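To illustrate the object being optimized (though not the authors' online recursion itself), the sketch below estimates by Monte Carlo the per-round-trip return of a buy-low/sell-high band with proportional slippage on an assumed Ornstein-Uhlenbeck price, and grid-searches a few candidate bands.

```python
import numpy as np

def mean_return(buy, sell, kappa=2.0, theta=1.0, sigma=0.2, slip=0.005,
                T=200.0, n=50000, seed=5):
    """Average per-round-trip profit of a buy-low/sell-high band on a
    simulated OU price, charging proportional slippage per transaction."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x, holding, cost, profits = theta, False, 0.0, []
    for z in rng.standard_normal(n):
        x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * z
        if not holding and x <= buy:
            cost, holding = x * (1 + slip), True    # buy, paying slippage
        elif holding and x >= sell:
            profits.append(x * (1 - slip) - cost)   # sell, paying slippage
            holding = False
    return np.mean(profits) if profits else 0.0

# Coarse grid over candidate bands; the paper instead tunes (buy, sell)
# online with a recursive stochastic approximation on observed prices.
grid = [(b, s) for b in (0.85, 0.9, 0.95) for s in (1.05, 1.1, 1.15)]
print(max(grid, key=lambda bs: mean_return(*bs)))
```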
In this paper we present a set of approaches for the stochastic optimization of immunization strategies based on risk-averse measures, as alternatives to optimizing the expected value of the objective function, i.e., the so-called risk-neutral environment. The risk-averse measures whose validity is analyzed in this work are as follows: two-stage mean-risk immunization, two-stage and multistage Value-at-Risk (VaR) strategies, two-stage and multistage stochastic dominance strategies, and a new measure that mixes the multistage VaR and stochastic dominance strategies. The common characteristic of these measures is that they require from the modeler a threshold for the objective function value in each scenario (the more recent ones even allow a set of thresholds) and a probability of failing to reach it. Uncertainty is represented by a scenario tree and is dealt with by both two-stage and multistage stochastic (linear and) mixed integer models with complete recourse. We test the different risk-averse strategies presented in the paper by using, as a pilot case, the optimization of immunization strategies in fixed-income security portfolios under several sources of uncertainty. One of the main differences between the bond portfolio strategies proposed in this work and the models encountered in the literature is that we consider an investor who wishes to invest in a market of coupon bonds with different levels of default risk.
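A minimal sketch of the two-stage VaR-type strategy described above, modeled with PuLP: each scenario's portfolio value may fall below the threshold only if a binary failure flag is switched on, and the probability-weighted flags are capped by the failure budget; all bond and scenario data are hypothetical.

```python
from pulp import LpBinary, LpMaximize, LpProblem, LpVariable, lpSum

# Hypothetical 3-bond, 4-scenario data: per-unit end-of-horizon values.
values = {
    "B1": [1.04, 1.05, 1.03, 1.06],
    "B2": [1.10, 0.96, 1.12, 1.01],
    "B3": [1.02, 1.02, 1.02, 1.02],
}
prob = [0.25, 0.25, 0.25, 0.25]
threshold, beta, M = 1.03, 0.25, 10.0          # VaR threshold, failure budget, big-M

m = LpProblem("var_immunization", LpMaximize)
x = {b: LpVariable(f"x_{b}", 0) for b in values}               # portfolio weights
y = {s: LpVariable(f"y_{s}", cat=LpBinary) for s in range(4)}  # scenario-failure flags

m += lpSum(prob[s] * values[b][s] * x[b] for b in values for s in range(4))  # E[value]
m += lpSum(x.values()) == 1                                    # budget constraint
for s in range(4):
    # The scenario value may fall below the threshold only if the flag is on.
    m += lpSum(values[b][s] * x[b] for b in values) >= threshold - M * y[s]
m += lpSum(prob[s] * y[s] for s in range(4)) <= beta           # failure-probability cap

m.solve()
print({v.name: v.value() for v in m.variables()})
```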