https://doi.org/10.1142/9781848160156_fmatter
Please refer to full text.
https://doi.org/10.1142/9781848160156_0001
Properties of linear continuous-time ARMA (or CARMA) processes driven by second-order Lévy processes are examined. These extend the class of Gaussian CARMA processes to include heavier-tailed series such as those frequently encountered in financial applications. Non-linear Gaussian CAR processes are also considered and illustrated with threshold models fitted to daily returns on the Australian All-Ordinaries and Dow Jones Industrial indices. AIC comparisons are made with ARCH and GARCH models fitted to the same data.
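As a rough illustration of the heavier-tailed behaviour a Lévy driving process allows, here is a minimal simulation sketch of a CAR(1) process of Ornstein-Uhlenbeck type, driven by Brownian motion plus compound Poisson jumps. All parameter values (and the Euler discretization itself) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler scheme for a CAR(1) process dX_t = -lam * X_t dt + dL_t, where L is
# a second-order Levy process: Brownian motion plus compound Poisson jumps.
lam, sigma = 0.5, 1.0                  # mean-reversion rate, diffusion scale
jump_rate, jump_scale = 0.2, 3.0       # jump intensity, jump size std dev
T, dt = 100.0, 0.01
n = int(T / dt)

x = np.zeros(n)
for i in range(1, n):
    dW = rng.normal(0.0, np.sqrt(dt))
    dJ = rng.normal(0.0, jump_scale, size=rng.poisson(jump_rate * dt)).sum()
    x[i] = x[i - 1] - lam * x[i - 1] * dt + sigma * dW + dJ

m, s = x.mean(), x.std()
print("excess kurtosis:", np.mean((x - m) ** 4) / s ** 4 - 3.0)
```

With the jump component switched on, the sampled path should show the positive excess kurtosis that motivates moving beyond Gaussian CARMA models.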
https://doi.org/10.1142/9781848160156_0002
By expressing a time series model with time-varying variance in nonlinear state space form, a financial time series can be decomposed into a trend, a stationary component, and noise with changing volatility. The models with changing volatility are significantly better than ordinary trend models in terms of the AIC, indicating that a good model can be obtained by explicit modeling of volatility changes. In an analysis of the Nikkei 225 Japanese stock price index, a clear relation between the differenced trend and the local variance was found. We therefore developed models that take this relation between the trend and the variance into account. For the Nikkei 225 data, the AIC value decreased further, suggesting the presence of a causal relation. For exchange rate data, on the other hand, no such relation was found.
https://doi.org/10.1142/9781848160156_0003
In this paper, nonparametric statistical methods for financial time series are investigated. It is first argued that the conditional mean and the conditional variance (volatility) of the underlying process are the quantities of interest. After briefly presenting the idea of local polynomial estimation and giving an asymptotic result on the behaviour of such kernel estimates for quite general mixing observations, the main part of the paper deals with the construction of confidence intervals for the conditional mean and the conditional variance or deviation function. Since the usual pointwise confidence intervals are not completely satisfactory, simultaneous confidence bands are suggested. For both pointwise intervals and simultaneous bands, a bootstrap procedure (the wild bootstrap) is proposed for calculating critical values. Besides the theoretical results, the proposals are applied to real financial data.
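A minimal sketch of the pointwise version of such a band, assuming a local constant (Nadaraya-Watson) fit in place of a general local polynomial and a Rademacher two-point distribution for the wild bootstrap weights; the data, bandwidth and grid are illustrative. A simultaneous band would replace the pointwise percentiles by a critical value for the maximal deviation over the grid.

```python
import numpy as np

rng = np.random.default_rng(1)

def nw(x0, x, y, h):
    """Nadaraya-Watson (local constant) estimate at x0, Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

# Toy heteroscedastic data: y = m(x) + sigma(x) * eps.
n = 500
x = rng.uniform(-2, 2, n)
y = np.sin(x) + 0.3 * np.abs(x) * rng.normal(size=n)

h = 0.3
grid = np.linspace(-1.5, 1.5, 31)
mhat = np.array([nw(g, x, y, h) for g in grid])

# Wild bootstrap: keep the fitted values, flip residuals with Rademacher
# weights, refit, and take pointwise percentiles as the confidence band.
fitted = np.array([nw(xi, x, y, h) for xi in x])
resid = y - fitted
boot = np.empty((200, grid.size))
for b in range(boot.shape[0]):
    ystar = fitted + resid * rng.choice([-1.0, 1.0], size=n)
    boot[b] = [nw(g, x, ystar, h) for g in grid]

lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```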
https://doi.org/10.1142/9781848160156_0004
We have applied the trapezium method to approximate integrals in an implementation of the EM algorithm proposed by Tsai and Chan (1999b) for estimating continuous-time autoregressive models, whose original implementation was based on Euler's method for approximating integrals. It is well known that the trapezium method generally provides a second-order approximation to an integral of a well-behaved functional of a Wiener process, whereas Euler's method is generally of first order. Simulation results confirm that, with increasing discretization frequency, the EM estimators based on the trapezium method converge to the (conditional) ML estimator at a faster rate than the EM estimators based on Euler's method. However, with an appropriate choice of discretization frequency, the EM estimator based on Euler's method outperforms both the EM estimator based on the trapezium method and the ML estimator in terms of the biases and standard deviations of the estimates. An invariance property of the EM estimator based on the trapezium method is briefly discussed.
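The order-of-accuracy claim can be illustrated on a single simulated path. The sketch below, with an assumed integrand g(w) = w², compares the left-endpoint (Euler) rule with the trapezium rule against a fine-grid reference value; the trapezium error should shrink markedly faster as the coarse grid is refined.

```python
import numpy as np

rng = np.random.default_rng(2)

def trapezium(y, h):
    """Second-order trapezium rule on an equally spaced grid."""
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

# One Brownian path on a fine grid; integrand g(W_s) = W_s ** 2.
n_fine = 2 ** 16
dW = rng.normal(0.0, np.sqrt(1.0 / n_fine), n_fine)
W = np.concatenate(([0.0], np.cumsum(dW)))
g = W ** 2

ref = trapezium(g, 1.0 / n_fine)       # reference value of the integral on [0, 1]

for m in (16, 64, 256):                # coarser discretizations
    gc = g[::n_fine // m]              # g at the m + 1 coarse grid points
    h = 1.0 / m
    euler = h * gc[:-1].sum()          # first-order left-endpoint (Euler) rule
    print(m, abs(euler - ref), abs(trapezium(gc, h) - ref))
```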
https://doi.org/10.1142/9781848160156_0005
The kernel estimation method for integrated time series has recently received attention in the literature; see e.g. Xia, Li and Tong (1998), Karlsen and Tjøstheim (1998) and Phillips and Park (1999). Specifically, two relevant observations are discussed: (1) in contrast to the linear regression set-up for integrated time series, where the convergence rate is much faster than with stationary data, kernel estimators converge much more slowly with integrated data than with stationary data; and (2) the optimal bandwidth is O(n−1/10) rather than the O(n−1/5) that holds for stationary data, suggesting that the procedures developed by Granger et al. (1997) have to be modified. This note gives some intuition behind these results and explores the problem of bandwidth selection for integrated time series.
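For concreteness, here is a minimal sketch of a Nadaraya-Watson estimate with the bandwidth scaled as n−1/10 for an integrated (random walk) covariate; the regression function, scaling constant and kernel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonparametric regression y_t = m(x_t) + eps_t with an I(1) covariate.
# The bandwidth scales as n**(-1/10), the rate suggested for integrated
# series, rather than the n**(-1/5) rate for stationary data.
n = 2000
x = np.cumsum(rng.normal(size=n))       # random walk (integrated) covariate
y = np.tanh(0.1 * x) + 0.5 * rng.normal(size=n)

h = 2.0 * n ** (-1 / 10)                # integrated-case bandwidth
x0 = 0.0
w = np.exp(-0.5 * ((x - x0) / h) ** 2)  # Gaussian kernel weights
mhat = np.sum(w * y) / np.sum(w)
print(f"m_hat(0) = {mhat:.3f}, true m(0) = 0.0")
```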
https://doi.org/10.1142/9781848160156_0006
The possibility of specific long-memory temporal properties and exponential marginal distributions of absolute returns is considered for daily data from a number of markets, and similar results are found in each case. Possible explanations are considered, but no complete explanation is found. A fractionally integrated model is considered; it is found to require an unusual distribution for its inputs, it has poor forecasting performance, and its properties may be explained by a regime-switching process.
https://doi.org/10.1142/9781848160156_0007
Most financial markets produce inhomogeneous (i.e. unequally spaced) tick-by-tick data at high frequency. Recently developed time series operators can be used to compute statistical variables such as volatility directly from inhomogeneous data, which is not possible with traditional time series methods. Value-at-Risk computations require measurements of current volatility, but the conventional calculation from daily data, sampled at a certain daytime, is strongly sensitive to the choice of that daytime, revealing a large amount of stochastic noise. An alternative calculation from high-frequency, tick-by-tick data with time series operators is shown to give similar results, but with two advantages: distinctly reduced noise and up-to-date results at each tick. The time series operator method is flexible and computationally efficient. It can be used to express generating process equations and to compute the Value-at-Risk in real time.
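A minimal sketch of one such operator, an exponential moving average (EMA) adapted to unequally spaced ticks, with volatility obtained as an EMA of squared returns; the interpolation convention and all parameters are simplifying assumptions rather than the operator family of the paper.

```python
import numpy as np

def ema_irregular(times, values, tau):
    """EMA operator on unequally spaced ticks: the decay mu = exp(-dt / tau)
    depends on the elapsed time since the last tick (one simple
    interpolation choice among several)."""
    ema = np.empty(len(values))
    ema[0] = values[0]
    for i in range(1, len(values)):
        mu = np.exp(-(times[i] - times[i - 1]) / tau)
        ema[i] = mu * ema[i - 1] + (1.0 - mu) * values[i]
    return ema

# Toy usage: volatility as the square root of an EMA of squared tick-by-tick
# log returns, updated at every tick (annualization ignored).
rng = np.random.default_rng(4)
t = np.cumsum(rng.exponential(30.0, size=1000))     # irregular tick times (s)
logp = np.cumsum(rng.normal(0.0, 1e-4, size=1000))  # toy log prices
r = np.diff(logp, prepend=logp[0])
vol = np.sqrt(ema_irregular(t, r ** 2, tau=3600.0))
```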
https://doi.org/10.1142/9781848160156_0008
This paper discusses state space techniques for estimating and forecasting long-memory models with missing observations. A Kalman filter approach to computing maximum likelihood estimates is studied, and the behavior of short- and long-term forecasts is analyzed. As an illustration, the methodology is applied to the analysis of foreign exchange rates.
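To illustrate how a Kalman filter accommodates missing observations, here is a sketch for a scalar AR(1)-plus-noise model, where a missing value (NaN) simply skips the update step. The paper's long-memory models would require a higher-dimensional approximating state, so this shows only the missing-data mechanism, not the model.

```python
import numpy as np

def kalman_loglik(y, phi, q, r):
    """Kalman filter log-likelihood for the scalar model
        s_t = phi * s_{t-1} + w_t,  w_t ~ N(0, q)   (state)
        y_t = s_t + v_t,            v_t ~ N(0, r)   (observation)
    A missing observation (NaN) skips the update but not the prediction."""
    s, p, ll = 0.0, q / (1.0 - phi ** 2), 0.0
    for yt in y:
        s, p = phi * s, phi ** 2 * p + q      # predict
        if np.isnan(yt):
            continue                          # missing: no update step
        f = p + r                             # innovation variance
        ll -= 0.5 * (np.log(2.0 * np.pi * f) + (yt - s) ** 2 / f)
        k = p / f                             # Kalman gain
        s, p = s + k * (yt - s), (1.0 - k) * p
    return ll

y = np.array([0.5, np.nan, 0.3, 1.1, np.nan, 0.8])
print(kalman_loglik(y, phi=0.7, q=0.5, r=0.2))
```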
https://doi.org/10.1142/9781848160156_0009
Semi-parametric extremal analysis can be a useful tool for calculating the Value-at-Risk (VaR) at loss probabilities at or below the inverse of the sample size. We first review the standard estimation procedures and VaR implications based on the first-order expansion of the tail probabilities of heavy-tailed random variables. Subsequently we present some new results based on a second-order expansion of the tail risk. In particular, we discuss the issue of estimation efficiency when using high- or low-frequency data, and we investigate the relation between the VaR over a short and a long investment horizon.
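A sketch of the standard first-order procedure, assuming a Pareto-type tail P(X > x) ≈ C x−α: the tail index is estimated by the Hill estimator from the k largest losses, and the quantile is extrapolated beyond the sample range. The data, k and p below are illustrative.

```python
import numpy as np

def hill_var(losses, k, p):
    """First-order tail VaR: assume P(X > x) ~ C * x**(-alpha), estimate
    alpha by the Hill estimator from the k largest losses, and extrapolate
    VaR_p = X_(k+1) * (k / (n * p)) ** (1 / alpha)."""
    x = np.sort(losses)[::-1]                       # descending order stats
    n = len(x)
    alpha = 1.0 / np.mean(np.log(x[:k] / x[k]))     # Hill estimator
    return x[k] * (k / (n * p)) ** (1.0 / alpha)

rng = np.random.default_rng(5)
losses = rng.standard_t(df=4, size=5000)            # heavy-tailed toy losses
losses = losses[losses > 0]                         # upper-tail exceedances
print(hill_var(losses, k=100, p=1e-4))              # beyond-sample quantile
```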
https://doi.org/10.1142/9781848160156_0010
This article surveys some of the recent developments in the modeling of heteroskedastic financial time series. Both discrete-time and continuous-time frameworks for some commonly used models and their estimation methodologies are discussed. In particular, the recently popularized long-memory heteroskedastic models are reviewed. A simulation-based Bayesian approach for long-memory stochastic volatility models is proposed. The paper concludes with an illustration of the proposed method applied to a value-weighted index from the Center for Research in Security Prices.
https://doi.org/10.1142/9781848160156_0011
This paper considers statistical inference for stochastic volatility (SV) models. The usual choice of normal and Student-t distributions for asset returns is replaced by the exponential-power (EP) distribution, which can be lighter- or heavier-tailed than the normal distribution. This modification provides a wider choice of distributions for SV models and simplifies the Markov chain Monte Carlo procedures for carrying out statistical analysis via uniform scale mixtures.
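The uniform scale mixture has a simple sampling interpretation: if V ~ Gamma(1 + 1/p) and X | V is uniform on (−V^(1/p), V^(1/p)), then X has density proportional to exp(−|x|^p). A sketch, using the standardized one-parameter form of the EP density and an illustrative value of p:

```python
import numpy as np

def rexppower(p, size, rng):
    """Sample from f(x) proportional to exp(-|x|**p) via the uniform scale
    mixture: V ~ Gamma(1 + 1/p, 1) and X | V ~ Uniform(-V**(1/p), V**(1/p)).
    p = 2 is (a rescaled) normal; p < 2 heavier-, p > 2 lighter-tailed."""
    v = rng.gamma(shape=1.0 + 1.0 / p, scale=1.0, size=size)
    half = v ** (1.0 / p)
    return rng.uniform(-half, half)

rng = np.random.default_rng(6)
x = rexppower(p=1.2, size=100_000, rng=rng)   # heavier-tailed than normal
z = (x - x.mean()) / x.std()
print("excess kurtosis:", np.mean(z ** 4) - 3.0)
```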
https://doi.org/10.1142/9781848160156_0012
This paper considers a generalization of the double threshold ARCH model that uses smooth transition functions as the links between different regimes in the conditional mean and variance of the time series. The model can cope with situations where the specifications of both the mean and the variance of a financial time series change with the market condition. Lagrange multiplier tests for linearity are derived and a modelling procedure for the new class of models is proposed. An application to real data is considered.
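A minimal sketch of the linking idea, assuming a logistic transition function in the lagged return and illustrative two-regime AR/ARCH-type coefficients; as the slope parameter grows, the smooth transition collapses to the abrupt double-threshold switch.

```python
import numpy as np

def transition(z, gamma, c):
    """Logistic smooth transition weight in [0, 1]; large gamma recovers
    the abrupt switch of a double threshold ARCH model at z = c."""
    return 1.0 / (1.0 + np.exp(-gamma * (z - c)))

# One step of a two-regime smooth transition for mean and variance, using
# the lagged return as the transition variable (illustrative coefficients).
rng = np.random.default_rng(7)
y_lag, eps_lag, h_lag = -0.8, -0.5, 1.0
G = transition(y_lag, gamma=5.0, c=0.0)
mu = (1 - G) * (0.02 + 0.30 * y_lag) + G * (-0.01 + 0.10 * y_lag)
h = (1 - G) * (0.05 + 0.15 * eps_lag ** 2 + 0.80 * h_lag) \
  + G * (0.02 + 0.05 * eps_lag ** 2 + 0.90 * h_lag)
y = mu + np.sqrt(h) * rng.normal()
```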
https://doi.org/10.1142/9781848160156_0013
This paper develops non-nested tests of the GARCH and E-GARCH models against each other, based on a weighted function of the competing conditional variances. The asymptotic distributions and power functions of the non-nested tests are derived. Two novel joint tests of the ARCH and E-ARCH models against their GARCH and E-GARCH counterparts are analysed. Non-nested tests based on the weighting scheme in an Lλ-family are also examined. It is shown that the non-nested test based on a linear weighting of the competing conditional variances is optimal in the Lλ-family.
https://doi.org/10.1142/9781848160156_0014
In this paper, we introduce a percentile-based method for predicting the return distribution of a financial time series and provide a way to calculate Value-at-Risk under non-normal portfolio changes.
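In its simplest historical-simulation form, a percentile-based VaR is just an empirical percentile of a sample of portfolio changes, with no normality assumption; the toy P&L series below is an illustrative stand-in for the paper's predicted return distribution.

```python
import numpy as np

# Percentile-based VaR in its simplest form: read the 99% one-day VaR off
# the 1st percentile of a sample of portfolio changes, normal or not.
rng = np.random.default_rng(8)
pnl = rng.standard_t(df=3, size=2500) * 1e4   # toy fat-tailed daily P&L
var_99 = -np.percentile(pnl, 1)               # loss at the 1st percentile
print(f"99% one-day VaR: {var_99:,.0f}")
```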
https://doi.org/10.1142/9781848160156_0015
This paper addresses the problem of forecast evaluation in the context of a simple but realistic decision problem, and proposes a procedure for evaluating forecasts based on their average realized value to the decision maker. It is shown that by concentrating on probability forecasts, stronger theoretical results can be achieved than if just event forecasts were used. A possible generalisation is considered concerning the use of the correct conditional predictive density function when forming forecasts.
https://doi.org/10.1142/9781848160156_0016
Although neural networks have been reported to be successful in areas as varied as engineering, finance, computer science, applied mathematics and statistics, the commonly used "backpropagation" algorithm for estimating the network parameters remains difficult to apply directly without fine tuning and subjective tinkering, especially when the number of parameters is large. To circumvent this estimation difficulty, we propose a new model, the stochastic neural network (SNN), built from neurons with a stochastic firing mechanism. The SNN shares the universal approximation property of neural networks and admits a parallel estimation procedure via the EM algorithm. We also suggest a stepwise model selection procedure for the SNN to avoid overfitting. Applications to regression analysis and time series forecasting are also discussed.
https://doi.org/10.1142/9781848160156_0017
This study reviews and discusses empirical evidence corroborating the existence of overreaction in the short-term responses of real exchange rates. The amplification of shock responses, albeit occurring over a short time period only, can delay and substantially prolong the time it takes for the real exchange rate to converge to parity. Interestingly, the findings of short-term amplified responses of the real exchange rate—with its subsequent reversal and gradual reversion toward the long-run equilibrium—appear compatible with the chartist-fundamentalist model of the foreign exchange market microstructure.
https://doi.org/10.1142/9781848160156_0018
We discuss how neural networks may be used to estimate conditional means, variances and quantiles of financial time series nonparametrically. These estimates may be used to forecast, to derive trading rules and to measure market risk.
https://doi.org/10.1142/9781848160156_0019
We use a discrete-time model to investigate the optimal asset allocation strategy of a risk-averse investor whose wealth consists of a single risky asset and a riskless asset. The objective is to maximize the expected utility of wealth over a planning horizon. We assume that the return of the risky asset follows a generalized autoregressive conditional heteroscedastic (GARCH) process. We illustrate the approach through numerical examples.
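A one-period sketch of the ingredients, assuming a CRRA utility, illustrative GARCH(1,1) parameters, and a grid search over the risky weight; the paper's multi-period problem would wrap this expectation inside a dynamic program over the planning horizon.

```python
import numpy as np

rng = np.random.default_rng(9)

# Next-period conditional variance from a GARCH(1,1) recursion.
omega, alpha, beta = 1e-5, 0.08, 0.90       # illustrative GARCH parameters
h_now, eps_now = 1.5e-4, -0.01              # current variance and shock
h_next = omega + alpha * eps_now ** 2 + beta * h_now

mu, rf, gamma = 0.0004, 0.0001, 5.0         # mean return, risk-free, CRRA
r = mu + np.sqrt(h_next) * rng.normal(size=200_000)  # simulated risky returns

def expected_utility(w):
    """CRRA expected utility of end-of-period wealth for risky weight w."""
    wealth = 1.0 + w * r + (1.0 - w) * rf
    return np.mean(wealth ** (1.0 - gamma)) / (1.0 - gamma)

weights = np.linspace(0.0, 1.0, 101)
best = weights[np.argmax([expected_utility(w) for w in weights])]
print("optimal risky weight:", best)
```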
https://doi.org/10.1142/9781848160156_0020
Sparse coefficient regression has been applied satisfactorily to model the effect of the Mexican peso exchange rate on the nation's trade balance. The full effect is found to take fourteen quarters to pass through, and the J-curve effect in the trade balance lasts for twenty quarters.
https://doi.org/10.1142/9781848160156_0021
This paper summarizes the main results of our recent research on ruin theory under the compound Poisson model with constant interest force. We present results on the distribution of the severity of ruin, the distribution of the surplus immediately prior to ruin, and the joint distribution of the surplus immediately before and after ruin. The probability of ruin for good is defined. By adapting the techniques of Sundt and Teugels (1995), integral equations satisfied by the above distributions and probability are obtained, as are their Laplace transforms. Some asymptotic results and upper and lower bounds for these distributions and the ruin probability are discussed. Some new results on the classical models are obtained as special cases of our model.
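The surplus process behind these results grows at the interest force δ between claims and drops at compound Poisson claim epochs. Here is a Monte Carlo sketch of the finite-horizon ruin probability, assuming exponential claims and illustrative parameters (the paper's results are analytic, via integral equations and Laplace transforms):

```python
import numpy as np

rng = np.random.default_rng(10)

def ruin_prob(u, c, delta, lam, mean_claim, horizon, n_paths=10_000):
    """Monte Carlo finite-horizon ruin probability for a compound Poisson
    surplus with premium rate c and constant interest force delta: between
    claims the surplus solves U' = delta * U + c."""
    ruined = 0
    for _ in range(n_paths):
        U, t = u, 0.0
        while True:
            w = rng.exponential(1.0 / lam)     # waiting time to next claim
            if t + w > horizon:
                break
            grow = np.exp(delta * w)
            U = U * grow + c * (grow - 1.0) / delta   # premiums + interest
            U -= rng.exponential(mean_claim)          # pay the claim
            t += w
            if U < 0:
                ruined += 1
                break
    return ruined / n_paths

print(ruin_prob(u=10.0, c=1.2, delta=0.03, lam=1.0, mean_claim=1.0,
                horizon=100.0))
```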
https://doi.org/10.1142/9781848160156_0022
Structural changes usually refer to changes in some parameters or in the structure of a chosen model that is postulated to describe the operation of a data generating process. However, the structure of the underlying data generating process is not necessarily equivalent to a model; it may be a pattern or an operating mechanism identifiable by certain cognitive processes. Accordingly, structural changes are changes in the operating mechanism of the underlying system. This paper considers the application of genetic programming to the cognition of the operating mechanism of a dynamic system. Based on the knowledge accumulated in the cognition process, a diagnostic statistic is defined to detect structural changes in the system. This approach is model-free since it is carried out without reference to a model specification. The effectiveness of the model-free approach is empirically illustrated through an application to four stock markets, namely the Greater-China markets.