Despite the widespread realization that financial models for contingent claim pricing, asset allocation and risk management depend critically on their underlying assumptions, the vast majority of financial models are based on single probability measures. In such models, asset prices are assumed to be random, but asset price probabilities are assumed to be known with certainty, an obviously false assumption.
We explore practical methods for specifying collections of probability measures for an assortment of important financial problems, provide methods for solving the robust financial optimization problems that arise and, in the process, identify "dangerous" measures.
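For intuition only, the following toy sketch evaluates a position's expected loss under a small collection of candidate probability measures and reports the worst case, i.e. the "dangerous" measure that attains it. It is purely illustrative: the scenario losses, the candidate measures and their weights are invented for the example and are not taken from the paper.

```python
import numpy as np

# Toy worst-case analysis over a finite collection of candidate measures.
# Each measure is encoded as a vector of scenario weights summing to one.
rng = np.random.default_rng(0)
scenarios = rng.normal(0.0, 1.0, size=10_000)            # common scenario grid
position_loss = np.maximum(0.0, 100 * scenarios) - 5.0    # stylised loss per scenario

measures = {
    "baseline": np.full(scenarios.size, 1.0 / scenarios.size),
    "fat_tail": np.exp(0.5 * np.abs(scenarios)),           # stressed reweighting
    "skewed":   np.exp(0.8 * scenarios),                    # directional stress
}
for name, w in measures.items():
    measures[name] = w / w.sum()                            # normalise weights

expected_losses = {name: float(w @ position_loss) for name, w in measures.items()}
worst = max(expected_losses, key=expected_losses.get)
print(expected_losses)
print("dangerous measure:", worst)
```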
This paper focuses on the incompleteness arising from model misspecification combined with trading restrictions. While asset price dynamics are assumed to be continuous-time processes, the hedging of contingent claims occurs in discrete time. The trading strategies under consideration are self-financing with respect to an assumed model, which may deviate from the "true" model, so that duplicating a contingent claim incurs hedging costs. Based on the robustness result for Gaussian hedging strategies, which states that a superhedge is achieved for convex payoff functions whenever the "true" asset price volatility is dominated by the assumed one, the error of time-discretising these strategies is analysed. It turns out that the time discretisation of Gaussian hedges gives rise to a duplication bias caused by asset price trends, which can be avoided by discretising the hedging model instead of discretising the hedging strategies. Additionally, it is shown that, on the one hand, binomial strategies exhibit robustness features similar to those of Gaussian hedges. On the other hand, the distribution of the cost process associated with the binomial hedge coincides, in the limit, with the distribution of the cost process associated with the Gaussian hedge. Together, these results yield a strong argument in favour of discretising the hedging model instead of time-discretising the strategies.
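As a rough illustration of the robustness result (not the paper's construction; the parameter values and the weekly rebalancing grid below are assumed for the example), the following Monte Carlo sketch hedges a European call with the Black-Scholes delta computed at an assumed volatility that dominates the true one, rebalances at discrete dates, and records the terminal duplication cost.

```python
import numpy as np
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def bs_delta(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return norm.cdf(d1)

rng = np.random.default_rng(1)
S0, K, T, r = 100.0, 100.0, 1.0, 0.0
sigma_true, sigma_hedge = 0.15, 0.25     # assumed volatility dominates the true one
n_steps, n_paths = 52, 20_000            # weekly rebalancing
dt = T / n_steps

# Sell the call at the (assumed-model) premium and run the discrete delta hedge.
S = np.full(n_paths, S0)
cash = np.full(n_paths, bs_call_price(S0, K, T, r, sigma_hedge))
delta = bs_delta(S, K, T, r, sigma_hedge)
cash -= delta * S
for i in range(1, n_steps + 1):
    z = rng.standard_normal(n_paths)
    S *= np.exp((r - 0.5 * sigma_true**2) * dt + sigma_true * np.sqrt(dt) * z)
    cash *= np.exp(r * dt)
    if i < n_steps:
        new_delta = bs_delta(S, K, T - i * dt, r, sigma_hedge)
        cash -= (new_delta - delta) * S   # self-financing rebalancing
        delta = new_delta

hedging_error = cash + delta * S - np.maximum(S - K, 0.0)
print("mean error:", hedging_error.mean(), " P(shortfall):", (hedging_error < 0).mean())
```

In this setup the mean hedging error is typically positive, consistent with the superhedging property for convex payoffs, while the spread around it reflects the time-discretisation error discussed above.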
We formulate the portfolio choice problem as a robust control problem under uncertainty or ambiguity aversion. By considering a stochastic investment opportunity set, we derive optimal robust portfolio rules in the cases of one and two risky assets. With two risky assets and ambiguity structure determined by economy-wide factors, we show that the robust portfolio rule could lead to an increase in the total holdings of risky assets as compared to the holdings under the Merton rule, which is the standard risk aversion case. This result goes against the general belief that uncertainty aversion and robust control methods lead to conservative behavior. We also show that the investor is more likely to increase the holdings of the asset for which there is no ambiguity, and reduce the holdings of the asset for which there is ambiguity, a result that might provide an explanation for the home bias puzzle.
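For the simplest benchmark case only (one risky asset and a constant investment opportunity set, unlike the stochastic opportunity set and two-asset setting studied here; the parameter values are hypothetical), the following sketch contrasts the standard Merton weight with a Maenhout-type robust weight in which ambiguity aversion effectively inflates risk aversion. This is exactly the conservative intuition that the two-asset results above can overturn.

```python
# Hypothetical parameters for a constant investment opportunity set.
mu, r, sigma = 0.08, 0.02, 0.20   # drift, risk-free rate, volatility
gamma = 3.0                        # relative risk aversion
theta = 2.0                        # ambiguity aversion (assumed value)

merton_weight = (mu - r) / (gamma * sigma**2)            # standard Merton rule
robust_weight = (mu - r) / ((gamma + theta) * sigma**2)  # ambiguity inflates risk aversion
print(f"Merton weight: {merton_weight:.3f}, robust weight: {robust_weight:.3f}")
```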
This chapter introduces regression analysis, the most popular statistical tool to explore the dependence of one variable (say Y) on others (say X). The variable Y is called the dependent variable or response variable, and X is called the independent variable or explanatory variable. The regression relationship between X and Y can be used to study the effect of X on Y or to predict Y using X. We motivate the importance of the regression function from both the economic and statistical perspectives, and characterize the condition for correct specification of a linear model for the regression function, which is shown to be crucial for a valid economic interpretation of model parameters.
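As a small illustration of why correct specification matters for interpretation (the data-generating processes below are invented for the example and are not from the chapter), the following sketch fits a linear model by ordinary least squares once when E[Y|X] really is linear and once when it is not.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(0.0, 2.0, n)

def ols_fit(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope]

# Correct specification: E[Y|X] = 1 + 2X, so the slope measures the effect of X on Y.
y_lin = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)
print("correctly specified:", ols_fit(x, y_lin))

# Misspecification: E[Y|X] = exp(X); the fitted "slope" is only a linear projection
# coefficient and loses the intended economic interpretation.
y_exp = np.exp(x) + rng.normal(0.0, 0.5, n)
print("misspecified:", ols_fit(x, y_exp))
```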
This chapter aims to summarize the theories, models, methods and tools of modern econometrics which we have covered in the previous chapters. We first review the classical assumptions of the linear regression model and discuss the historical development of modern econometrics by various relaxations of the classical assumptions. We also discuss the challenges and opportunities for econometrics in the Big data era and point out some important directions for the future development of econometrics.
We consider the problem of constructing nonlinear regression models using multilayer perceptrons and radial basis function networks with the help of regularization techniques. Crucial issues in the model-building process are the choices of the number of basis functions, the number of hidden units and the regularization parameter. We consider the properties of nonlinear regression modeling based on neural networks, and investigate the performance of model selection criteria from an information-theoretic point of view.
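A minimal sketch of the kind of selection problem involved, under an assumed toy setup not taken from the chapter: a Gaussian radial-basis expansion with a ridge penalty, where the number of basis functions and the regularization parameter are chosen by an AIC-type criterion in which the trace of the smoother matrix serves as the effective number of parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + 0.3 * rng.normal(size=n)       # toy nonlinear regression data

def rbf_design(x, centers, width):
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width**2))

best = None
for m in (5, 10, 20):                           # number of basis functions
    centers = np.linspace(x.min(), x.max(), m)
    width = (x.max() - x.min()) / m
    Phi = rbf_design(x, centers, width)
    for lam in (1e-4, 1e-2, 1e-1, 1.0):         # regularization parameter
        A = Phi.T @ Phi + lam * np.eye(m)
        coef = np.linalg.solve(A, Phi.T @ y)
        fitted = Phi @ coef
        df = np.trace(Phi @ np.linalg.solve(A, Phi.T))   # effective number of parameters
        rss = float(np.sum((y - fitted) ** 2))
        aic = n * np.log(rss / n) + 2 * df               # AIC-type criterion
        if best is None or aic < best[0]:
            best = (aic, m, lam)

print("selected (AIC, #basis functions, lambda):", best)
```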
Rank regression offers a valuable alternative to the classical least squares approach. The use of rank regression not only provides protection against outlier contamination but also leads to substantial efficiency gain in the presence of heavier-tailed errors. This article studies the asymptotic performance of rank regression with Wilcoxon scores when the regression function is possibly mis-specified. We establish that under general conditions, the Wilcoxon rank regression estimator converges in probability to a well-defined limit and has an asymptotic normal distribution. We also derive a formula for the bias of omitted variables. Besides furthering our understanding of the properties of rank regression, these theoretical results have important implications for developing rank-based model selection and model checking procedures.
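For concreteness, here is a minimal sketch of a Wilcoxon-score rank regression estimator obtained by minimizing Jaeckel's dispersion function (a standard construction assumed here for illustration, not the article's own code), compared with ordinary least squares on contaminated, heavy-tailed data. The slope is estimated from the dispersion function; the intercept is recovered afterwards as the median of the residuals.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def wilcoxon_dispersion(beta, X, y):
    # Jaeckel's dispersion with Wilcoxon scores a(i) = sqrt(12) * (i/(n+1) - 1/2).
    e = y - X @ beta
    ranks = rankdata(e)
    scores = np.sqrt(12.0) * (ranks / (len(y) + 1.0) - 0.5)
    return float(np.sum(scores * e))

def rank_fit(X, y):
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)          # OLS starting value
    res = minimize(wilcoxon_dispersion, beta0, args=(X, y), method="Nelder-Mead")
    beta = res.x
    intercept = np.median(y - X @ beta)
    return intercept, beta

# Example with heavy-tailed errors and one gross outlier.
rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=2, size=n)
y[0] += 50.0                                                # contamination
X = x[:, None]
print("rank fit:", rank_fit(X, y))
print("OLS fit :", np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0])
```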