This paper investigates whether the inflation expectation formation models adopted by Chinese agents are heterogeneous or homogeneous. A Gaussian mixture model is developed under the assumption that agents form inflation expectations by selecting a model from a set of alternatives. The results reveal that only the adaptive expectation (AE) model is significant, indicating that both households and financial participants are fairly homogeneous in selecting inflation expectation formation models. Therefore, the mechanism of heterogeneous models cannot explain the heterogeneous inflation expectations in China, and the AE model is the main driver of Chinese agents' cost perceptions.
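For concreteness, a minimal sketch of how such a mixture could be fit (the data are simulated and the two candidate rules, adaptive and a stand-in "rational" rule, are illustrative assumptions, not the paper's specification): each mixture component centers on one formation rule's prediction, and EM recovers the mixing weights; a weight near one on the adaptive component would mirror the homogeneity finding.

```python
# Hypothetical EM fit of a two-component mixture over expectation-formation
# rules; component means are the rules' predictions, weights are the shares
# of agents using each rule. All inputs are simulated.
import numpy as np

rng = np.random.default_rng(0)
T = 200
infl = rng.normal(2.0, 0.5, T)                # observed inflation (simulated)
adaptive = np.roll(infl, 1)                   # AE rule: expect last period's inflation
rational = infl + rng.normal(0, 0.1, T)       # stand-in "rational" rule
expect = adaptive + rng.normal(0, 0.2, T)     # reported expectations (simulated)

mu = np.stack([adaptive, rational])           # component means, shape (2, T)
w, sigma = np.array([0.5, 0.5]), np.array([0.3, 0.3])
for _ in range(100):
    dens = np.exp(-0.5 * ((expect - mu) / sigma[:, None]) ** 2) / sigma[:, None]
    resp = w[:, None] * dens
    resp /= resp.sum(axis=0)                  # E-step: posterior rule probabilities
    w = resp.mean(axis=1)                     # M-step: mixing weights
    sigma = np.sqrt((resp * (expect - mu) ** 2).sum(axis=1) / resp.sum(axis=1))
print("mixing weights (adaptive, rational):", w.round(3))
```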
The last decades have shown that policymaking without considering uncertainty is impracticable. In an environment of uncertainty, policymakers have doubts about the policy models they routinely use. This paper focuses specifically on the situation where uncertainty on the financial side of the economy leads to misspecification in the policy model. We describe a coherent strategy for policymakers who are averse to model misspecification and analyze optimal policy design in the face of Knightian uncertainty. To do so, we augment a financial dynamic stochastic general equilibrium model with model misspecification in a simple minimax framework where the central bank plays a zero-sum game against a hypothetical evil agent. The policy is tailored to insure against the worst-case outcomes. We show that model ambiguity on the financial side requires a passive monetary policy stance. However, if the uncertainty originates from the supply side of the economy, an aggressive interest-rate response is required. We also show the impact of an additional macroprudential tool on the dynamics of the economy.
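As a toy illustration of the zero-sum game structure (the loss function, the reduced-form response, and the adversary's budget below are assumptions, not the paper's DSGE model), the policymaker minimizes a loss that an adversary, choosing a misspecification term within a budget, tries to maximize:

```python
# Toy robust policy choice: min over the policy coefficient of the max over
# the adversary's distortion. Functional forms are purely illustrative.
import numpy as np
from scipy.optimize import minimize_scalar

def worst_case_loss(phi, budget=0.5):
    def loss(m):                             # m: adversary's misspecification term
        infl = 1.0 / (1.0 + phi) + m         # toy reduced-form inflation response
        return infl ** 2 + 0.1 * phi ** 2    # quadratic central-bank loss
    return max(loss(m) for m in np.linspace(-budget, budget, 201))

res = minimize_scalar(worst_case_loss, bounds=(0.0, 5.0), method="bounded")
print("robust policy coefficient:", round(res.x, 3))
```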
In this paper, a self-adaptive PD (SAPD) controller is employed for motion control of omni-directional robots. The method comprises a PD controller that is tuned online by a fuzzy logic system (FLS). Fast and accurate positioning is one of the most significant challenges for robot platforms, and various uncertainties adversely affect the performance of traditional control systems during the robot's motion. Slow response, low accuracy, and instability are the most important drawbacks of widespread controllers in the presence of uncertain dynamics. Since the fuzzy algorithm can handle uncertainties and nonlinearities, the proposed method can tackle these problems. The controller is designed based on an uncertain model and implemented on a four-wheeled omni-directional fast robot. The novelty of this article is an enhanced version of the well-known gain-scheduling PD controller that improves the positioning performance of the robot under different circumstances. Experimental results show that the method provides the desired performance in the presence of uncertainties.
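A minimal sketch of the gain-scheduling idea (this is not the authors' controller: the membership functions, rule weights, and the double-integrator plant are illustrative assumptions): the PD gains are blended online from fuzzy memberships on the tracking error.

```python
# Toy fuzzy-scheduled PD loop on a double-integrator "robot axis".
def fuzzy_gains(err):
    e = min(abs(err), 1.0)
    small = max(0.0, 1.0 - e / 0.5)          # membership: "error is small"
    large = max(0.0, (e - 0.3) / 0.7)        # membership: "error is large"
    w = small + large                        # memberships overlap, so w > 0
    kp = (small * 2.0 + large * 6.0) / w     # gentle gains near the target
    kd = (small * 0.5 + large * 1.5) / w
    return kp, kd

x, v, target, dt = 0.0, 0.0, 1.0, 0.01
for _ in range(2000):                        # 20 s of simulated motion
    err = target - x
    kp, kd = fuzzy_gains(err)
    u = kp * err - kd * v                    # PD law with scheduled gains
    v += u * dt                              # unit-mass double integrator
    x += v * dt
print("final position:", round(x, 4))
```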
We discuss the problem of exponential hedging in the presence of model uncertainty expressed by a set of probability measures. This is a robust utility maximization problem with a contingent claim. We first consider the dual problem, which is the minimization of penalized relative entropy over a product set of probability measures, showing the existence and variational characterizations of the solution. These results are then applied to the primal problem. Finally, we consider the robust version of exponential utility indifference valuation, giving a representation of the indifference price via a duality result.
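For orientation, a schematic form of the dual objective described above, with assumed notation (this is not the paper's exact statement): Q ranges over martingale measures in a set ℳ, P over the priors in 𝒫, H(Q | P) is relative entropy, α is the penalty on priors, γ the risk aversion, and H the claim.

```latex
% Schematic dual: minimize penalized relative entropy over the product set,
% shifted by the claim term (notation assumed, not the paper's statement).
\[
  \inf_{(Q,\,P)\,\in\,\mathcal{M} \times \mathcal{P}}
    \Big(\, \mathcal{H}(Q \mid P) \;+\; \alpha(P) \;+\; \gamma\, \mathbb{E}_Q[H] \,\Big)
\]
```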
In this paper, a stochastic control problem under model uncertainty with a general penalty term is studied. Two types of penalties are considered. The first is an f-divergence penalty, treated in the general framework of a continuous filtration. The second, called the consistent time penalty, is studied in the context of a Brownian filtration. In the case of the consistent time penalty, we characterize the value process of our stochastic control problem as the unique solution of a class of quadratic backward stochastic differential equations with unbounded terminal conditions.
We consider the super-hedging price of an American option in a discrete-time market in which stocks are available for dynamic trading and European options are available for static trading. We show that the super-hedging price π is given by the supremum over the prices of the American option under randomized models. That is, π = sup_{(c_i, Q_i)_i} ∑_i c_i ϕ_{Q_i}, where c_i ∈ ℝ₊ and the martingale measures Q_i are chosen such that ∑_i c_i = 1 and ∑_i c_i Q_i prices the European options correctly, and ϕ_{Q_i} is the price of the American option under the model Q_i. Our result generalizes the example given in Hobson & Neuberger (2016) that the highest model-based price can be considered as a randomization over models.
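A toy numerical instance of this randomization (all prices below are made up): choosing weights over a finite set of models is a linear program, maximizing the weighted American price subject to the mixture pricing the traded European option correctly.

```python
# Toy LP over randomized models: maximize sum_i c_i * phi_i subject to
# sum_i c_i = 1, c_i >= 0, and the mixture matching the European price.
import numpy as np
from scipy.optimize import linprog

phi = np.array([5.0, 6.5, 4.2])        # American price under each model Q_i (toy)
euro = np.array([3.0, 4.0, 2.5])       # European price under each model (toy)
market_euro = 3.4                      # observed market price to be matched

res = linprog(
    c=-phi,                            # linprog minimizes, so negate to maximize
    A_eq=np.vstack([np.ones(3), euro]),
    b_eq=np.array([1.0, market_euro]),
    bounds=[(0, None)] * 3,
)
print("toy super-hedging price:", round(-res.fun, 4), "weights:", res.x.round(3))
```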
In this paper, we study a class of time-inconsistent terminal Markovian control problems in discrete time subject to model uncertainty. We combine the concept of sub-game perfect strategies with the adaptive robust stochastic control method to tackle the theoretical aspects of the considered stochastic control problem. As an important application of the theoretical results, we then apply a machine learning algorithm to numerically solve the mean-variance portfolio selection problem under model uncertainty.
In this paper, we introduce a dynamical model for the time evolution of probability density functions that incorporates uncertainty in the parameters. The uncertainty follows stochastic processes, thereby defining a new class of stochastic processes with values in the space of probability densities. The purpose is to quantify uncertainty that can be used for probabilistic forecasting. Starting from a set of traded prices of equity indices, we conduct empirical studies. We apply our dynamic probabilistic forecasting to option pricing, where our proposed notion of model uncertainty reduces to uncertainty about future volatility. A distribution of option prices follows, reflecting the uncertainty about the distribution of the underlying prices. We associate with these prices measures of model uncertainty in the sense of Cont.
In this paper, we consider a continuous-time portfolio optimization problem that includes the possibility of a crash scenario as well as parameter uncertainty. To do this, we combine the worst-case scenario approach introduced by Korn & Wilmott (2002) with a model ambiguity approach that is also based on Knightian uncertainty. In our model, the crash scenario occurs at the worst possible time for the investor, which also implies that there may be no crash at all. For the modeling of the parameter uncertainty, we choose a general definition of the sets of possible drift and volatility parameters, determined by the solution of an optimization problem. In addition, these sets may differ between the pre-crash and post-crash market. We solve this portfolio problem and then consider two particular examples with box uncertainty and ellipsoidal drift ambiguity.
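To make the box-uncertainty example concrete (the numbers and the log-utility simplification below are assumptions, not the paper's calibration): for a log-investor holding a long position, the worst case over a drift-volatility box is attained at the lowest drift and highest volatility, so the robust fraction evaluates the Merton formula at that corner.

```python
# Toy robust Merton fraction under box uncertainty (log utility, gamma = 1).
mu_box = (0.04, 0.10)        # admissible drift interval (assumed)
sigma_box = (0.15, 0.30)     # admissible volatility interval (assumed)
r = 0.01

mu_wc, sigma_wc = mu_box[0], sigma_box[1]       # worst case for a long position
pi_robust = (mu_wc - r) / sigma_wc ** 2
pi_nominal = (sum(mu_box) / 2 - r) / (sum(sigma_box) / 2) ** 2
print(f"robust fraction {pi_robust:.3f} vs nominal {pi_nominal:.3f}")
```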
This paper is a continuation of Kim and Ryom (2022), in which a pathwise superhedging duality was proved for multidimensional contingent claims under model-free strict no-arbitrage and efficient friction. We consider a two-dimensional market with transaction costs beyond efficient friction in a model-free framework. We derive a condition under which model-free weak no-arbitrage holds and prove a superhedging duality under model-free weak no-arbitrage.
We present a parsimonious method of improving forecasts and show that fit, the discrepancy between model forecasts and realized values, is persistent for individual stocks. Conditioning on fit profoundly affects the forecast error for future and out-of-sample returns. Forecasts of stock price direction with the best (worst) decile of historical fit are correct 63.6% (49.2%) of the time and are significantly different from the unconditioned model’s 56% accuracy. We find that superior factor forecasts are essential to profit from model conditioning and conclude that analysts who possess superior factor estimates can dramatically improve their forecasts through the technique we present.
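A sketch of the conditioning step on simulated data (the data-generating process below is an assumption made only to show the mechanics, not the paper's sample): stocks are sorted into deciles by historical fit, and directional accuracy is compared across deciles out of sample.

```python
# Toy decile sort on historical fit; with persistent "skill", the best-fit
# decile shows higher out-of-sample directional accuracy.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 5000
skill = rng.uniform(0, 1, n)                    # latent, persistent fit
hist_err = rng.normal(0, 1, n) + (1 - skill)    # historical forecast error proxy
hit = rng.random(n) < 0.45 + 0.25 * skill       # future directional accuracy

df = pd.DataFrame({"hist_err": hist_err, "hit": hit})
df["decile"] = pd.qcut(df["hist_err"].abs(), 10, labels=False)
acc = df.groupby("decile")["hit"].mean()
print("best-fit decile:", round(acc.iloc[0], 3),
      "worst-fit decile:", round(acc.iloc[-1], 3))
```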
This paper presents a sensitivity analysis of the performance of an optimal proportional–integral–derivative (PID) controller for use in nonlinear smart base-isolated structures with uncertainties. A set of nine performance indices is defined to evaluate the performance of the controller in the presence of uncertainties in the superstructure, the lead rubber bearing (LRB) seismic isolation system, and the applied loads. The time delay effect on the stability and performance of the PID controller is also examined. The results show that the PID controller is robust against uncertainties of up to ±15% in the damping and stiffness coefficients of the superstructure, the yield force of the LRB, and the artificial earthquake. With ±15% uncertainty, the input energy entering the structure increases with respect to the nominal model, but the changes in the performance indices related to the damage energies are negligible. An uncertainty of −20% in the stiffness coefficient and stiffness ratio of the LRB gives an increase of 15% in the maximum and root mean square (RMS) of the structural responses. With −20% uncertainty, the damage and damping energies do not change in comparison with the nominal model, but a significant decrease in the performance index related to the input energy response is obtained. Not all performance indices are sensitive to time delay. For large time delays, the performance index for the seismic input energy increases significantly, while the maximum damage and damping energies increase by up to 5% and 10%, respectively.
In this work, an efficient human activity recognition (HAR) algorithm based on a deep learning architecture is proposed to classify activities into seven different classes. To learn spatial and temporal features from only the 3D skeleton data captured by a "Microsoft Kinect" camera, the proposed algorithm combines convolutional neural network (CNN) and long short-term memory (LSTM) architectures. This combination exploits the strength of LSTMs in modeling temporal data and of CNNs in modeling spatial data. The captured skeleton sequences are used to create a specific dataset of interactive activities; these data are then transformed according to a view-invariance and a symmetry criterion. To demonstrate the effectiveness of the developed algorithm, it has been tested on several public datasets, where it matches and sometimes surpasses state-of-the-art performance. To assess the uncertainty of the proposed algorithm, tools are provided and discussed to ensure its efficiency for continuous human action recognition in real time.
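A minimal sketch of the CNN + LSTM combination (the layer sizes, joint count, and input layout are assumptions, not the paper's architecture): per-frame spatial features are extracted with a 1D convolution over joints, and an LSTM models the temporal sequence.

```python
# Toy CNN + LSTM classifier for skeleton sequences, in PyTorch.
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.conv = nn.Sequential(                     # spatial features per frame
            nn.Conv1d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)  # temporal modeling
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, time, joints, xyz)
        b, t, j, c = x.shape
        frames = x.reshape(b * t, j, c).transpose(1, 2)         # (b*t, 3, joints)
        feats = self.conv(frames).squeeze(-1).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                   # classify from last step

logits = CNNLSTM()(torch.randn(2, 30, 25, 3))          # 2 clips, 30 frames, 25 joints
print(logits.shape)                                    # torch.Size([2, 7])
```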
Nonlinear state-space models (SSMs) are widely used to model real industrial processes, and system identification is an important method for reducing the uncertainty of a simulation model. In recent years, system identification has been greatly improved by the rise of machine learning, yet there are few reviews of the latest identification methods based on machine learning. This paper therefore focuses on the recent development of identification methods for nonlinear SSMs. In particular, it comprehensively compares identification methods based on traditional approaches and on machine learning. According to the type of uncertainty, we divide the review into parameter identification and the identification of unknown parts of the model; compared with the classifications used in other reviews, this classification is clearer. Each type is extended from offline identification to online identification. Specifically, interval identification and point estimation methods are reviewed for offline parameter identification, while point estimation methods are reviewed for online parameter identification. In the case where the model is partially unknown or a black box, the modeling and identification methods are reviewed. Beyond the traditional methods, the paper highlights the latest progress in the application of machine learning to system identification. Finally, the paper summarizes the existing methods and points out the key problems that still need to be solved.
I provide evidence on the existence of unspanned macro risk. I investigate the usefulness of unspanned macro information for forecasting bond risk premia in a macro-finance term structure model from the perspective of a bond investor. I account for model uncertainty by combining forecasts with and without unspanned output and inflation risks, optimally with respect to the forecaster's objective. Incorporating macro information generates significant gains in forecasting bond risk premia relative to yield curve information at long forecast horizons, especially when allowing for time-varying combination weights. These gains in predictive accuracy significantly improve investor utility.
In a recent consultative document, the Basel Committee on Banking Supervision suggests replacing Value-at-Risk (VaR) by expected shortfall (ES) for setting capital requirements for banks' trading books because ES better captures tail risk than VaR. However, besides ES, another risk measure called median shortfall (MS) also captures tail risk by taking into account both the size and likelihood of losses. We argue that MS is a better alternative than ES as a risk measure for setting capital requirements because: (i) MS is elicitable but ES is not; (ii) MS has distributional robustness with respect to model misspecification but ES does not; (iii) MS is easy to implement but ES is not.
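The tail-median definition makes MS easy to compute: at level a, MS is the median of the losses beyond VaR, which for a continuous loss distribution equals VaR at level (1 + a) / 2. A small numerical check on simulated heavy-tailed losses (the Student-t sample is illustrative):

```python
# ES vs MS at the 99% level on a simulated heavy-tailed loss sample.
import numpy as np

rng = np.random.default_rng(42)
losses = rng.standard_t(df=3, size=100_000)   # heavy-tailed losses (toy)
a = 0.99
var = np.quantile(losses, a)
es = losses[losses > var].mean()              # expected shortfall: tail mean
ms = np.quantile(losses, (1 + a) / 2)         # median shortfall: tail median
print(f"VaR={var:.3f}  ES={es:.3f}  MS={ms:.3f}")
```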
Current methods of neural machine translation may generate sentences with different levels of quality. Methods for automatically evaluating machine translation output can be broadly classified into two types: those that use human post-edited translations for training an evaluation model, and those that use a reference translation as the correct answer during evaluation. On the one hand, it is difficult to prepare post-edited translations because each word must be tagged in comparison with the original translated sentences. On the other hand, users who actually employ a machine translation system do not have a correct reference translation. Therefore, we propose a method that trains the evaluation model without using human post-edited sentences and, at test time, estimates the quality of output sentences without using reference translations. We define several indices and predict the quality of translations with a regression model. As the quality measure of a translated sentence, we employ the BLEU score calculated from the number of word n-gram matches between the translated sentence and the reference translation. We then compute the correlation between the quality scores predicted by our method and BLEU scores actually computed from references. According to the experimental results, the correlation with BLEU is highest when XGBoost uses all the indices. Moreover, examining each index, we find that the sentence log-likelihood and the model uncertainty, both based on the joint probability of generating the translated sentence, are important in BLEU estimation.
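A sketch of the regression setup (the features below are synthetic stand-ins for the paper's indices, two of which are the sentence log-likelihood and the model uncertainty): XGBoost is trained to predict sentence-level BLEU and evaluated by its correlation with the true scores.

```python
# Toy reference-free quality estimation: regress BLEU on confidence features.
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 2000
loglik = rng.normal(-20, 5, n)                 # sentence log-likelihood (synthetic)
uncert = rng.gamma(2.0, 1.0, n)                # model uncertainty (synthetic)
bleu = np.clip(0.5 + 0.01 * loglik - 0.05 * uncert + rng.normal(0, 0.05, n), 0, 1)

X = np.column_stack([loglik, uncert])
model = XGBRegressor(n_estimators=200, max_depth=3).fit(X[:1500], bleu[:1500])
pred = model.predict(X[1500:])
print("correlation with BLEU:", round(np.corrcoef(pred, bleu[1500:])[0, 1], 3))
```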
This chapter aims to summarize the theories, models, methods and tools of modern econometrics which we have covered in the previous chapters. We first review the classical assumptions of the linear regression model and discuss the historical development of modern econometrics by various relaxations of the classical assumptions. We also discuss the challenges and opportunities for econometrics in the Big data era and point out some important directions for the future development of econometrics.
We consider a multi-period stochastic control problem where the multivariate driving stochastic factor of the system has known marginal distributions but uncertain dependence structure. To solve the problem, we propose to implement the non-parametric adaptive robust control framework. We aim to find the optimal control against the worst-case copulae in a sequence of shrinking uncertainty sets which are generated from continuously observing the data. Then, we use a stochastic gradient descent ascent algorithm to numerically handle the corresponding high-dimensional dynamic inf-sup optimization problem. We present the numerical results in the context of utility maximization and show that the controller benefits from knowing more information about the uncertain model.
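A toy version of the descent-ascent step (the saddle function below is illustrative, not the paper's objective): the controller variable descends while the adversarial variable ascends, converging to the saddle point.

```python
# Gradient descent-ascent on f(x, y) = x^2 + x*y - y^2, saddle point at (0, 0).
def grad_x(x, y):
    return 2 * x + y          # df/dx

def grad_y(x, y):
    return x - 2 * y          # df/dy

x, y, lr = 1.0, -1.0, 0.05
for _ in range(500):
    x -= lr * grad_x(x, y)    # descent step for the controller
    y += lr * grad_y(x, y)    # ascent step for the worst-case adversary
print("approximate saddle point:", round(x, 4), round(y, 4))
```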
Due to a lack of scientific understanding, some mechanisms may be missing in the mathematical modeling of complex phenomena in science and engineering. These mathematical models thus contain uncertainties such as uncertain parameters. One method of estimating these parameters is based on pathwise observations, i.e., quantifying model uncertainty in the space of sample paths of the system evolution. Another method, devised here, estimates uncertain parameters, or unknown system functions, based on experimental observations of probability distributions of the system evolution. This is called the quantification of model uncertainty in the space of probability measures. A few examples are presented to demonstrate this method, analytically or numerically.