This four-volume handbook covers important concepts and tools used in the fields of financial econometrics, mathematics, statistics, and machine learning. Econometric methods have been applied in asset pricing, corporate finance, international finance, options and futures, risk management, and in stress testing for financial institutions. This handbook discusses a variety of econometric methods, including single equation multiple regression, simultaneous equation regression, and panel data analysis, among others. It also covers statistical distributions, such as the binomial and log normal distributions, in light of their applications to portfolio theory and asset management in addition to their use in research regarding options and futures contracts.
In both theory and methodology, we need to rely upon mathematics, including linear algebra, geometry, differential equations, stochastic differential equations (Itô calculus), optimization, and constrained optimization, among others. These branches of mathematics have been used to derive the capital market line, the security market line (capital asset pricing model), option pricing models, portfolio analysis, and more.
In recent times, an increased importance has been given to computer technology in financial research. Different computer languages and programming techniques are important tools for empirical research in finance. Hence, simulation, machine learning, big data, and financial payments are explored in this handbook.
Led by Distinguished Professor Cheng Few Lee from Rutgers University, this multi-volume work integrates theoretical, methodological, and practical issues based on his years of academic and industry experience.
Sample Chapter(s)
Preface
Chapter 2: Do Managers Use Earnings Forecasts to Fill a Demand They Perceive from Analysts?
Contents:
Readership: Researchers and professionals who are interested in financial econometrics, mathematics, statistics, and technology.
https://doi.org/10.1142/9789811202391_fmatter01
The following sections are included:
https://doi.org/10.1142/9789811202391_0001
The main purpose of this introductory chapter is to give an overview of the following 130 papers, which discuss financial econometrics, mathematics, statistics, and machine learning. There are eight sections in this introductory chapter. Section 1 is the introduction, Section 2 discusses financial econometrics, Section 3 explores financial mathematics, and Section 4 discusses financial statistics. Section 5 of this introductory chapter discusses financial technology and machine learning, Section 6 explores applications of financial econometrics, mathematics, statistics, and machine learning, and Section 7 gives an overview of the handbook in terms of chapter and keyword classification. Finally, Section 8 provides a summary and concluding remarks.
https://doi.org/10.1142/9789811202391_0002
This paper examines how the nature of the information possessed by individual analysts influences managers’ decisions to issue forecasts and the consequences of those decisions. Our analytical model yields the prediction that managers prefer to issue guidance when they perceive their private information to be more precise, and analysts possess mostly common, imprecise information (i.e., there is high commonality and uncertainty). Based on an econometric model, we obtain theory-based analyst variables and our empirical evidence confirms our predictions. High commonality and uncertainty in analysts’ prior information are accompanied by increases in analysts’ forecast revisions and trading volume following guidance, consistent with greater analyst incentives to generate idiosyncratic information. Yet, management guidance increases only with the commonality contained in analysts’ pre-disclosure information, but not with the level of uncertainty. Indeed, the disclosure propensity among a subset of firms (those with less able managers, bad news, and infrequent forecasts) has an inverse relationship with analyst uncertainty because that uncertainty reflects the low precision of management information. Our results are robust to a variety of alternative analyses, including the use of propensity-score matched pairs with similar disclosure environments but differing degrees of commonality and uncertainty among analysts. We also demonstrate that the use of forecast dispersion as an empirical proxy for analysts’ prior information may lead to erroneous inferences. Overall, we define and support improved measures of the analyst information environment based on an econometric model and find that the commonality of information among analysts acts as a reliable forecast antecedent by informing managers about the amount of idiosyncratic information in the market.
https://doi.org/10.1142/9789811202391_0003
Our study explores a possible benefit of conforming book income to taxable income. We expect that increased book–tax conformity can reduce audit fees by simplifying tax accruals and increasing tax authorities’ monitoring, which reduce audit workload and audit risk, respectively. Consistent with our expectations, we find that a higher country level of required book–tax conformity leads to lower audit fees. Moreover, firm-level book–tax differences are positively associated with audit fees. We also find that the negative association between country level of required book–tax conformity and audit fees is mitigated among firms with larger book–tax differences. Our findings are robust to including country-level legal investor protection or other extra-legal institutions. Overall, our results suggest that one benefit of increasing book–tax conformity is the reduction in audit fees. In the appendix we extend our main empirical test by including firm fixed effects and clustering standard errors of regression coefficients, and we find that these do not change our conclusions.
https://doi.org/10.1142/9789811202391_0004
The purpose of this chapter is to evaluate the role played by gold in a diversified portfolio comprising bonds and stocks. Continuous wavelet transform analysis is applied to capture the correlation between gold and other risky assets at specific time horizons to determine whether gold should be included in a diversified portfolio. This chapter uses US stock, bond, and gold data from 1990 until 2013 to investigate the optimal weights of gold obtained from the minimum variance portfolio. Empirical findings provide little evidence that gold acts as an efficient diversifier in a traditional stock and bond portfolio. Gold was a long-term diversifier in the traditional bond and stock portfolio only before the early 2000s and acts as a short-term diversifier during crisis periods. The significant drop in the long-term weight of gold indicates that gold has lost much of its long-term role in the diversified portfolio. These findings are useful for portfolio managers seeking to justify gold’s diversification benefits over different investment horizons.
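The minimum-variance weights referred to above follow directly from a covariance matrix. The sketch below illustrates that step with simulated returns standing in for the stock, bond, and gold series; the asset labels and numbers are assumptions for illustration only, not the chapter's data.

```python
import numpy as np

# Illustrative monthly returns for stock, bond, and gold (hypothetical data).
rng = np.random.default_rng(0)
returns = rng.multivariate_normal(
    mean=[0.006, 0.003, 0.002],
    cov=[[0.0020, 0.0002, 0.0001],
         [0.0002, 0.0004, 0.0000],
         [0.0001, 0.0000, 0.0010]],
    size=240,
)

cov = np.cov(returns, rowvar=False)
ones = np.ones(cov.shape[0])

# Global minimum-variance weights: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1).
w = np.linalg.solve(cov, ones)
w /= w.sum()
print(dict(zip(["stock", "bond", "gold"], w.round(3))))
```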
https://doi.org/10.1142/9789811202391_0005
In this chapter, we first review the basic simultaneous equation models and estimators, such as two-stage least squares (2SLS), three-stage least squares (3SLS), and seemingly unrelated regression (SUR). Then we discuss how to estimate different kinds of simultaneous equation models. The application of these models in financial analysis, planning, and forecasting is also explored. The simultaneity and dynamics of corporate budgeting are explored in detail using data from Johnson & Johnson.
https://doi.org/10.1142/9789811202391_0006
This research introduces the following to establish a TAIEX prediction model: intervention analysis integrated into the ARIMA-GARCH model, ECM, intervention analysis integrated into the transfer function model, the simple average combination forecasting model, and the minimum error combination forecasting model. The results show that intervention analysis integrated into the transfer function model yields a more accurate prediction model than ECM and intervention analysis integrated into the ARIMA-GARCH model. The minimum error combination forecasting model improves prediction accuracy much more than non-combination models while maintaining robustness. Intervention analysis integrated into the transfer function model shows that the TAIEX is affected by external factors, namely the INDU, the exchange rate, and the consumer price index; therefore, when facing different TAIEX issues, the government could pursue macroeconomic policies to reach its policy goals.
https://doi.org/10.1142/9789811202391_0007
Based upon Ritchken (1985), Levy (1985), Lo (1987), Zhang (1994), Jackwerth and Rubinstein (1996), and others, this chapter discusses alternative methods to determine option bounds in terms of the first two moments of the distribution. These approaches include the stochastic dominance method and the linear programming method; we then discuss semi-parametric and non-parametric methods for option-bound determination. Finally, we incorporate both skewness and kurtosis explicitly by extending Zhang (1994) to provide bounds for the prices of the expected payoffs for options, given the first two moments, skewness, and kurtosis.
https://doi.org/10.1142/9789811202391_0008
Market makers or liquidity providers play a central role in the operation of stock markets. In general, these agents execute contrarian strategies, so their profitability depends on the distribution of stock returns across the market. The more widespread the distribution is, the more arbitrage opportunities are available. This implies that the collective correlation of stocks is an indicator of possible turmoil in the market. This paper proposes a novel approach to measure the collective correlation of the stock market, with the network as a tool for extracting information. The market network can be easily constructed by digitizing pairwise correlations. When the number of stocks becomes very large, the network can be approximated by an exponential random graph model, under which the clustering coefficient of the market network is a natural candidate for measuring the collective correlation of the stock market. With a sample of S&P 500 components in the period from January 1996 to August 2009, we show that the clustering coefficient can be used as an alternative risk measure in addition to volatility. Furthermore, investigations of higher order statistics also reveal distinctions in the clustering effect between bear markets and bull markets.
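As a rough illustration of the digitization step described above, the sketch below thresholds a pairwise correlation matrix into an undirected network and reports its average clustering coefficient. The threshold value and the simulated one-factor returns are assumptions, not the chapter's actual choices.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
# Hypothetical daily returns for 50 stocks driven by one common factor.
common = rng.normal(size=500)
returns = 0.5 * common[:, None] + rng.normal(size=(500, 50))

corr = np.corrcoef(returns, rowvar=False)

# Digitize: connect two stocks when their correlation exceeds a threshold.
threshold = 0.3
adj = (corr > threshold) & ~np.eye(corr.shape[0], dtype=bool)

G = nx.from_numpy_array(adj.astype(int))
print("average clustering coefficient:", round(nx.average_clustering(G), 3))
```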
https://doi.org/10.1142/9789811202391_0009
We propose a novel method to estimate the level of interconnectedness of a financial institution or system, as the measures currently suggested in the literature do not fully take into consideration an important aspect of interconnectedness — group interactions of agents. Our approach is based on the power index and centrality analysis and is employed to find a key borrower in a loan market. It has three distinctive features: it considers long-range interactions among agents, agents’ attributes, and the possibility that an agent is affected by a group of other agents. This approach allows us to identify systemically important elements which cannot be detected by classical centrality measures or other indices. The proposed method is employed to analyze banking foreign claims as of 1Q 2015. Using our approach, we detect two types of key borrowers: (a) major players with high ratings and positive credit history; and (b) intermediary players, which conduct financial activities on a large scale by organizing favorable investment conditions and a positive business climate.
https://doi.org/10.1142/9789811202391_0010
The standard multivariate test of Gibbons et al. (1989) used in studies examining the relative performance of alternative asset pricing models requires the number of stocks to be less than the number of time-series observations, which requires stocks to be grouped into portfolios. This results in a loss of disaggregate stock information. We apply a new statistical test to get around this problem. We find that the multivariate average F-test developed by Hwang and Satchell (2014) has superior power to discriminate among competing models and does not reject tested models altogether, unlike the standard multivariate test. Application of the multivariate average F-test to the examination of the relative performance of asset pricing models demonstrates that a parsimonious 6-factor model with the market, size, orthogonal value, profitability, investment, and momentum factors outperforms all other models.
https://doi.org/10.1142/9789811202391_0011
This chapter discusses both static and dynamic hedge ratios in detail. In the static analysis, we discuss the minimum-variance hedge ratio, the Sharpe hedge ratio, and the optimum mean-variance hedge ratio. In addition, several time series methods, such as the multivariate skew-normal distribution method, the autoregressive conditional heteroskedasticity (ARCH) and generalized autoregressive conditional heteroskedasticity (GARCH) methods, the regime-switching GARCH model, and the random coefficient method, are used to show how hedge ratios can be estimated.
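For the static case, the minimum-variance hedge ratio is simply the slope from regressing spot price changes on futures price changes. A minimal sketch on simulated data is shown below; the dynamic GARCH-based estimators discussed in the chapter would require a dedicated volatility-modeling library instead.

```python
import numpy as np

rng = np.random.default_rng(2)
futures_chg = rng.normal(0, 1.0, 1000)                       # hypothetical futures price changes
spot_chg = 0.9 * futures_chg + rng.normal(0, 0.3, 1000)      # hypothetical spot price changes

# Minimum-variance hedge ratio: h* = Cov(dS, dF) / Var(dF),
# equivalently the OLS slope of spot changes on futures changes.
h_star = np.cov(spot_chg, futures_chg)[0, 1] / np.var(futures_chg, ddof=1)
print("minimum-variance hedge ratio:", round(h_star, 3))
```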
https://doi.org/10.1142/9789811202391_0012
This chapter discusses both the intertemporal asset pricing model and the international asset pricing model (IAPM) in detail. For the intertemporal asset pricing model, we discuss Campbell’s (1993) model, in which investors are assumed to be endowed with Kreps–Porteus utility and consumption is substituted out of the model. In addition, the chapter extends Campbell’s (1993) model to develop an intertemporal IAPM. We show that the expected international asset return is determined by a weighted average of market risk, market hedging risk, exchange rate risk, and exchange rate hedging risk. A test of the conditional version of our intertemporal IAPM using a multivariate GARCH process supports the asset pricing model. We find that exchange rate risk is important for pricing international equity returns and is much more important than intertemporal hedging risk.
https://doi.org/10.1142/9789811202391_0013
In this chapter, we show that, as the world becomes increasingly integrated, the benefits of global diversification remain positive and economically significant over time. Both regression analysis and explanatory power tests show that international integration, measured by adjusted R2 from a multifactor model, has a more profound impact on diversification benefits than correlation. Our results support Roll’s (2013) argument that R2, but not correlation, is an appropriate measure of market integration. We examine the impact of market integration determinants such as default risk, inflation, TED spread, past local equity market return, liquidity, and the relative performance of the domestic portfolio on the potential diversification benefits.
https://doi.org/10.1142/9789811202391_0014
The modern portfolio theory of Markowitz (1952) is a cornerstone of investment management, but its implementation is challenging in that the optimal portfolio weights are extremely sensitive to the estimates of the mean and covariance of asset returns. As a sophisticated modification, the Black–Litterman portfolio model allows the optimal portfolio weights to rely on a combination of the implied market equilibrium returns and investors’ views (Black and Litterman, 1991). However, the performance of a Black–Litterman model is closely related to investors’ views and the estimated covariance matrix. To overcome these problems, we first propose a predictive regression to form investors’ views, where asset returns are regressed against their lagged values and the market return. Second, motivated by the stylized features of volatility clustering, heavy-tailed distributions, and leverage effects, we estimate the covariance of asset returns via heteroscedastic models. Empirical analysis using five industry indexes in the Taiwan stock market shows that the proposed portfolio outperforms existing ones in terms of cumulative returns.
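The Black–Litterman combination of equilibrium returns and views follows a standard closed form. The sketch below shows only that combination step with hypothetical inputs; the view matrix, tau, and the covariance are placeholders, and the chapter's predictive-regression views and heteroscedastic covariance estimates would be substituted in.

```python
import numpy as np

# Hypothetical inputs for three assets.
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.025, 0.005],
                  [0.004, 0.005, 0.030]])    # covariance of returns
pi = np.array([0.05, 0.04, 0.045])           # implied equilibrium returns
tau = 0.05

# One view: asset 1 will outperform asset 2 by 2%, with view variance Omega.
P = np.array([[1.0, -1.0, 0.0]])
q = np.array([0.02])
Omega = np.array([[0.0004]])

# Posterior mean:
# mu = [(tau*Sigma)^-1 + P' Omega^-1 P]^-1 [(tau*Sigma)^-1 pi + P' Omega^-1 q]
A = np.linalg.inv(tau * Sigma) + P.T @ np.linalg.inv(Omega) @ P
b = np.linalg.inv(tau * Sigma) @ pi + P.T @ np.linalg.inv(Omega) @ q
mu_bl = np.linalg.solve(A, b)
print("Black-Litterman expected returns:", mu_bl.round(4))
```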
https://doi.org/10.1142/9789811202391_0015
In this chapter, we propose a structural model in terms of the Stair Tree model and barrier options to evaluate the fair deposit insurance premium in accordance with the constraints of deposit insurance contracts and the consideration of bankruptcy costs. First, we show that the deposit insurance model in Brockman and Turtle (2003) is a special case of our model. Second, the simulation results suggest that insurers should adopt a forbearance policy instead of a strict policy for closure regulation to avoid losses from bankruptcy costs. An appropriate deposit insurance premium can alleviate the potential moral hazard problems caused by a forbearance policy. Our simulation results can be used as a reference in risk management for individual banks and for the Federal Deposit Insurance Corporation (FDIC).
https://doi.org/10.1142/9789811202391_0016
Studies on behavioral finance argue that cognitive/emotional biases could influence investors’ decisions and result in the disposition effect, wherein investors have the tendency to sell winning stocks too early and hold losing stocks too long. In this regard, this study proposes a conceptual model to examine the relationship among cognitive/emotional biases, the disposition effect, and investment performance. Cognitive/emotional biases mainly consist of mental accounting, regret avoidance, and self-control. Furthermore, this study examines whether gender and marital status moderate the relationship between these biases and the disposition effect by collecting quantitative data through a questionnaire survey and employing a structural equation modeling (SEM) approach to execute the estimation procedure. The results of this study show that mental accounting has the most significant influence on the disposition effect, which implies that prospect theory is an alternative to expected utility theory in accounting for investor’s behavior. The findings of moderating analysis indicate that female investors display a larger disposition effect than male investors.
https://doi.org/10.1142/9789811202391_0017
Economic intuition and theories suggest that banks are motivated to voluntarily disclose information and signal their quality, for example, through early adoption of accounting standards, to better access capital markets. Examining accounting standards from January 1995 to March 2008, I find that US bank holding companies (BHCs) with lower profitability and higher risk profiles are more likely to choose early adoption. This evidence is consistent with a BHC’s incentive to better access external financing through information disclosure and signaling. Moreover, a counter-signaling effect of decisions not to early adopt is first identified because early-adopting BHCs are not necessarily the least risky and the most profitable. I also find the counter-signaling effect to be most evident when an accounting standard has no effect on the financial statement proper (i.e., only disclosure requirements). This finding complements prior research that managers treat recognition and disclosure differently and that financial statement users place more weight on recognized than on disclosed values. Finally, the results show that early adopters generally experience higher fund growth in uninsured debts than matched late adopters in economic expansions, times when BHCs are most motivated to obtain funds. This finding is consistent with the bank capital structure literature that banks have shifted towards nondeposit debts to finance their balance sheet growth.
https://doi.org/10.1142/9789811202391_0018
This chapter discusses how to exploit various Web information to improve the stock market prediction. We first discuss the impacts of investors’ social network on the stock market, and then propose several information fusion methods, that is, the tensor-based model and the multiple-instance learning model, to integrate the Web information and the quantitative information to improve the prediction capability.
https://doi.org/10.1142/9789811202391_0019
The sources of risk in a marketplace are systematic, cross-sectional, and time varying in nature. Though the CAPM provides an excellent risk-return framework and the market beta may reflect the risk associated with risky assets, there are opportunities for investors to take advantage of dimensional and time-varying return anomalies in order to improve their investment returns. In this paper, we restrict our analysis to return variations linked to market factor anomalies or factor/dimensional beta using the Fama–French 3-factor, Carhart 4-factor, and Asness, Frazzini and Pederson (AFP) 5- and 6-factor models. We find significant variations in explaining sources of risk across 22 developed and 21 emerging markets with data over a long period from 1991 to 2016. Each market is unique in terms of factor risk characteristics, and market risk as explained by the CAPM is not the true risk measure. Hence, contrary to the risk-return efficiency framework, we find that lower market risk results in higher excess return in 19 out of the 22 developed markets, which is a major anomaly. Although in the majority of the markets the AFP models reduce market risk (15 countries) and enhance alpha (11 countries), it is very interesting to note that the CAPM ranks second in generating excess returns in the developed markets. We are also conscious of the fact that each market is unique in its composition and trend even over a long time horizon, and hence a generalized approach to asset allocation cannot be adopted across all the markets.
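The factor regressions behind these comparisons take the familiar time-series form r_t − r_f = α + β'f_t + ε_t. A minimal sketch with simulated factor data is given below; a real application would load the published Fama–French, Carhart, or AFP factor series instead of the hypothetical ones used here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
T = 300
# Hypothetical monthly factor returns: market, size, value, momentum.
factors = rng.normal(0.005, 0.03, size=(T, 4))
excess_ret = 0.002 + factors @ np.array([1.0, 0.3, 0.2, -0.1]) + rng.normal(0, 0.02, T)

# Time-series regression of excess returns on the factors.
X = sm.add_constant(factors)
fit = sm.OLS(excess_ret, X).fit()
print("alpha:", round(fit.params[0], 4))
print("factor betas:", fit.params[1:].round(3))
```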
https://doi.org/10.1142/9789811202391_0020
Credit risk analysis is a classical and crucial problem which has attracted great attention from both academic researchers and financial institutions. Through the accurate classification of borrowers, it enables financial institutions to develop lending strategies that obtain optimal profit and avoid potential risk. In recent decades, several different kinds of classification methods have been widely used to solve this problem. Owing to the specific attributes of credit data, such as its small sample size and nonlinear characteristics, support vector machines (SVMs) show their advantages and have been widely used for many years. SVM adopts the principle of structural risk minimization (SRM), which helps avoid the “curse of dimensionality” and provides great generalization ability. In this study, we systematically review and analyze SVM-based methodology in the field of credit risk analysis, which is composed of feature extraction methods, kernel function selection of SVM, and hyper-parameter optimization methods, respectively. For verification purposes, two UCI credit datasets and a real-life credit dataset are used to compare the effectiveness of SVM-based methods and other frequently used classification methods. The experiment results show that the adaptive Lq SVM model with Gauss kernel and ES hyper-parameter optimization approach (ES-ALqG-SVM) outperforms all the other models listed in this study, and its average classification accuracy in the two UCI datasets reaches 90.77% and 75.21%, respectively. Moreover, the classification accuracy of SVM-based methods is generally better than or equal to that of other kinds of methods, such as See5, DT, MCCQP, and other popular algorithms. Besides, Gauss kernel based SVM models show better classification accuracy than models with linear and polynomial kernel functions when choosing the same penalty form of the model, and the classification accuracy of Lq-based methods is generally better than or equal to that of L1- and L2-based methods. In addition, for a certain SVM model, hyper-parameter optimization utilizing evolution strategy (ES) can effectively reduce the computing time while guaranteeing higher accuracy, compared with grid search (GS), particle swarm optimization (PSO), and simulated annealing (SA).
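A minimal version of the SVM workflow described above, using scikit-learn's RBF (Gauss) kernel and a plain grid search over the penalty and kernel width, is sketched below on synthetic data. The adaptive Lq penalty and evolution-strategy search studied in the chapter are not part of standard scikit-learn and would need a custom implementation; the dataset here is a stand-in, not the UCI credit data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a small credit dataset (good vs. bad borrowers).
X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with standardized inputs and a grid search over C and gamma.
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(pipe, {"svc__C": [0.1, 1, 10], "svc__gamma": [0.01, 0.1, 1]}, cv=5)
grid.fit(X_tr, y_tr)

print("best parameters:", grid.best_params_)
print("test accuracy:", round(grid.score(X_te, y_te), 3))
```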
https://doi.org/10.1142/9789811202391_0021
This chapter shows examples of applying several current data mining approaches and alternative models in an accounting and finance context, such as predicting bankruptcy using US, Korean, and Chinese capital market data. Big data in the accounting and finance context is a good fit for data analytic tool applications like data mining. Our previous study also empirically tested Japanese capital market data and found similar prediction rates. However, overall prediction rates depend on different countries and time periods (Mihalovic, 2016). These results are an improvement on previous bankruptcy prediction studies using traditional probit or logit analysis or multiple discriminant analysis. The recent survival model shows similar prediction rates in bankruptcy studies; however, longitudinal data are needed to use the survival model. Because of advances in computer technology, it is easier to apply data mining approaches. In addition, current data mining methods can be applied to other accounting and finance contexts such as auditor changes, audit opinion prediction studies, and internal control weakness studies. Our first paper shows 13 data mining approaches to predict bankruptcy after the Sarbanes–Oxley Act (SOX, 2002) implementation using 2008–2009 US data with 13 financial ratios plus internal control weakness, dividend payout, and market return variables. Our second paper shows an application of a multiple criteria linear programming data mining approach using Korean data. Our last paper shows bankruptcy prediction models using Chinese firm data via several data mining tools, compared with those of traditional logit analysis. The analytic hierarchy process and fuzzy sets can also be applied as alternatives to data mining tools in accounting and finance studies. Natural language processing, as a part of the artificial intelligence domain, can be used in accounting and finance in the future (Fisher et al., 2016).
https://doi.org/10.1142/9789811202391_0022
This chapter utilizes a panel threshold regression model to probe two of the most profound issues in auditing: first, does economic bonding compromise audit quality, and second, does SOX’s prohibition of certain nonaudit services mitigate the association between fees and auditor independence? Empirical results suggest that there indeed exists a threshold value beyond which nonaudit services impair audit quality. Moreover, the threshold value has not plummeted subsequent to SOX’s prohibition of certain nonaudit services designed to mitigate auditors’ economic bonding with their clients, suggesting that the effort made by the authorities has been by and large ineffective. The results lead us to ponder whether the fee structure and the existing practice of employing auditors at the discretion of management should be rigorously reviewed to ensure audit quality.
https://doi.org/10.1142/9789811202391_0023
This paper analyzes the trade balances of the 10 countries that form the ASEAN Economic Community (Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand, and Vietnam). For this purpose, we use standard unit root methods along with fractional integration and cointegration methods. The latter techniques are more general than those based on integer differentiation and allow for a greater degree of flexibility in the dynamic specification of the series. The results based on unit roots were very inconclusive about the order of integration of the series. Using fractional integration, however, the two hypotheses of stationarity I(0) and non-stationarity I(1) were decisively rejected in all cases, with orders of integration ranging between 0 and 1 and thus displaying long memory and mean-reverting behavior. Focusing on the bivariate long-run equilibrium relationships between the countries, a necessary condition is that the two series must display the same degree of integration. This condition was fulfilled in a large number of cases. We observe some relations where cointegration could be satisfied, mainly involving countries such as Cambodia, Indonesia, Malaysia, and the Philippines.
https://doi.org/10.1142/9789811202391_0024
This paper first reviews alternative methods for determining option bounds. These methods include stochastic dominance, linear programming, semi-parametric, and non-parametric approaches for European options. Then option bounds for American and Asian options are discussed. Finally, we discuss empirical applications to equities and equity indices, index futures, foreign exchange rates, and real options.
https://doi.org/10.1142/9789811202391_0025
This study documents a substantial difference in impact on an emerging market firm’s value due to its use of foreign bank debt relative to domestic bank debt. It finds a positive association between the use of collateral by foreign banks and firm value, however finds no such corresponding association for the use of collateral by domestic banks. The results suggest that as an emerging market’s banking system matures and becomes more sophisticated, the differences between the information contained in local versus foreign bank lending diminishes; this diminishment erodes the differential impact on firm value of foreign versus local bank lending.
https://doi.org/10.1142/9789811202391_0026
In this chapter, we first discuss the classical time-series component model, then we discuss the moving average and seasonally adjusted time-series. A discussion on linear and log-linear time trend regressions follows. The autoregressive forecasting model as well as the ARIMA model are both reviewed. Finally, composite forecasting is discussed.
https://doi.org/10.1142/9789811202391_fmatter02
The following sections are included:
https://doi.org/10.1142/9789811202391_0027
The purpose of this chapter is to develop certain relatively recent mathematical discoveries known generally as stochastic calculus, or more specifically as Itô’s Calculus and to also illustrate their application in the pricing of options. The mathematical methods of stochastic calculus are illustrated in alternative derivations of the celebrated Black–Scholes–Merton model. The topic is motivated by a desire to provide an intuitive understanding of certain probabilistic methods that have found significant use in financial economics.
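The end point of the derivations discussed above is the Black–Scholes–Merton pricing formula for a European call; a short numerical sketch of that closed form is given below.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def bsm_call(S, K, T, r, sigma):
    """Black-Scholes-Merton price of a European call on a non-dividend-paying stock."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example: at-the-money one-year call with 20% volatility and 5% rate.
print(round(bsm_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))
```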
https://doi.org/10.1142/9789811202391_0028
This chapter discusses Durbin, Wu, and Hausman (DWH) specification tests and provides examples of their application and interpretation. DWH tests compare alternative parameter estimates and can be useful in discerning endogeneity issues (omitted variables, measurement error/errors in variables, and simultaneity), incorrect functional form and contemporaneous correlation in the lagged dependent variable — serial correlation model, testing alternative estimators for a model, and testing alternative theoretical models. Empirical applications are provided illustrating the use of DWH tests in comparing LS, IV, FE, RE, and GMM estimators.
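The core of a DWH comparison is a quadratic form in the difference between two estimators, one consistent under both hypotheses (e.g., IV) and one efficient only under exogeneity (e.g., LS). A minimal sketch on simulated data with an endogenous regressor is shown below; for simplicity it tests only the slope coefficient, which is an assumption of this illustration rather than the chapter's general setup.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n = 2000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 + 2.0 * x + u                         # structural equation

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

b_ols = np.linalg.solve(X.T @ X, X.T @ y)     # least squares
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)      # just-identified IV

# Residual-based covariance estimates for each estimator.
s2_ols = np.sum((y - X @ b_ols) ** 2) / (n - 2)
s2_iv = np.sum((y - X @ b_iv) ** 2) / (n - 2)
V_ols = s2_ols * np.linalg.inv(X.T @ X)
V_iv = s2_iv * np.linalg.inv(Z.T @ X) @ (Z.T @ Z) @ np.linalg.inv(X.T @ Z)

# Hausman-type statistic on the slope: (b_IV - b_OLS)^2 / (Var_IV - Var_OLS).
d = b_iv[1] - b_ols[1]
H = d**2 / (V_iv[1, 1] - V_ols[1, 1])
print("Hausman statistic:", round(H, 2),
      "p-value:", round(1 - stats.chi2.cdf(H, df=1), 4))
```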
https://doi.org/10.1142/9789811202391_0029
In this chapter, we review econometric methodology that is used to test for jumps and to decompose realized volatility into continuous and jump components. In order to illustrate how to implement the methods discussed, we also present the results of an empirical analysis in which we separate continuous asset return variation and finite activity jump variation from excess returns on various US market sector exchange traded funds (ETFs), during and around the Great Recession of 2008. Our objective is to characterize the financial contagion that was present during one of the greatest financial crises in US history. In particular, we study how shocks, as measured by jumps, propagate through nine different market sectors. One element of our analysis involves the investigation of causal linkages associated with jumps (via use of vector autoregressions), and another involves the examination of the predictive content of jumps for excess returns. We find that as early as 2006, jump spillover effects became more pronounced in the markets. We also observe that jumps had a significant effect on excess returns during 2008 and 2009; but not in the years before and after the recession.
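A basic version of the decomposition discussed above separates realized variance into a continuous part, proxied by bipower variation, and a jump part. The sketch below applies that split to simulated intraday returns with one injected jump; it omits the formal test statistics used in the chapter, and the sampling frequency is an assumption.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 390                                # hypothetical one-minute returns in a trading day
returns = rng.normal(0, 0.0005, n)
returns[200] += 0.01                   # inject one jump

# Realized variance and bipower variation.
rv = np.sum(returns**2)
bv = (np.pi / 2) * np.sum(np.abs(returns[1:]) * np.abs(returns[:-1]))

# Jump component as the (non-negative) gap between RV and BV.
jump_component = max(rv - bv, 0.0)
print("RV:", rv, "BV:", bv, "jump share of RV:", round(jump_component / rv, 3))
```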
https://doi.org/10.1142/9789811202391_0030
Earnings forecasting data has been a consistent, and highly statistically significant, source of excess returns. This chapter discusses CTEF, a composite model of earnings forecasts, revisions, and breadth; this model of forecasted earnings acceleration was developed in 1997 to identify mispriced stocks. Our most important result is that the forecasted earnings acceleration variable has produced statistically significant Active and Specific Returns in the Post-Global Financial Crisis Period. Simple earnings revisions and forecasted yields have not enhanced returns in the past 7–20 years, leading many financial observers to declare earnings research passé. We disagree! Moreover, earnings forecasting models complement fundamental data (earnings, book value, cash flow, sales, dividends, liquidity) and price momentum strategies in a composite model for stock selection. The composite model strategy’s excess returns are greater in international stocks than in US stocks. The models reported in Guerard and Mark (2003) are highly statistically significant in their post-publication time period, including booms, recessions, and highly volatile market conditions.
https://doi.org/10.1142/9789811202391_0031
This paper proposes a novel approach to rank analysts using their positions in a network constructed by peer analysts connected with overlapping firm coverage. We hypothesize that analysts occupying the network structural holes can produce higher quality equity research through better access to their peer analysts’ wealth and diversity of information and knowledge. We report consistent empirical evidence that high-ranked analysts identified by network structural holes have greater ability to affect stock prices. Furthermore, those analysts tend to issue timely opinions, but not necessarily more accurate or consistent earnings forecasts. Analysts occupying structural holes tend to be more experienced, have a higher impact on stock prices when they work for large brokerages, and are rewarded with better career outcomes.
https://doi.org/10.1142/9789811202391_0032
We examine the effect of book-tax differences on CEO compensation. We posit that CEOs can opportunistically exercise the discretion in GAAP to increase accounting income without affecting taxable income and in so doing increase their compensation. We test the data to determine which competing hypothesis dominates — efficiency or rent-seeking. Under the efficiency hypothesis, the board of directors uses the information in book-tax differences to undo CEOs’ attempts to artificially inflate accounting income and hence CEO compensation is negatively associated with book-tax differences. Under the rent-seeking hypothesis, CEOs gain effective control of the pay-setting process so that they set their own pay with little oversight from shareholders and directors. Directors do not use the information in book-tax differences to undo CEOs’ attempted earnings manipulation and this gives rise to a positive association between CEO compensation and book-tax differences. Consistent with the efficiency hypothesis, we find that CEO compensation is negatively associated with book-tax differences suggesting that directors use the information in book-tax differences to reduce excessive CEO compensation. We also find that strong corporate governance structure strengthens the negative association between CEO compensation and book-tax differences. Specifically, firms with high insider equity ownership and high proportion of independent directors on the board have lower CEO compensation when book-tax differences are large.
https://doi.org/10.1142/9789811202391_0033
Stochastic volatility models of option prices treat variance as a variable. However, the application of such models requires calibration to market prices that often treats variance as an optimized parameter. If variance represents a variable, option pricing models should reflect measure-invariant features of its historic evolution. Alternatively, if variance is a parameter used to generate desired features of the implied volatility surface, stochastic volatility models lose their connection to the historic evolution of variance. This chapter obtains evidence that variance in stochastic volatility models is an artificial construct used to confer desired properties to the generated implied volatility surface.
https://doi.org/10.1142/9789811202391_0034
This chapter extends the Margrabe formula so that it accounts for any type of jump in stock prices. Although prices of an exchange option are characterized by jumps, it seems no study has explored the price jumps of an exchange option. The jump in this chapter is modeled by a Poisson process. Moreover, the Poisson process can be extended to a Cox process when there is more than one jump. The results illustrate that incompleteness in an exchange option leads to a premium which in turn increases the option value, while hedging strategies reveal mixed results.
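For reference, the no-jump Margrabe price of an option to exchange asset 2 for asset 1, which is the baseline being extended in the chapter, can be coded in a few lines; the jump-diffusion extension adds Poisson terms on top of this formula. The parameter values below are illustrative only.

```python
from math import exp, log, sqrt
from scipy.stats import norm

def margrabe(S1, S2, T, sigma1, sigma2, rho, q1=0.0, q2=0.0):
    """Margrabe price of a European option to exchange asset 2 for asset 1."""
    sigma = sqrt(sigma1**2 + sigma2**2 - 2 * rho * sigma1 * sigma2)   # volatility of the ratio S1/S2
    d1 = (log(S1 / S2) + (q2 - q1 + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S1 * exp(-q1 * T) * norm.cdf(d1) - S2 * exp(-q2 * T) * norm.cdf(d2)

print(round(margrabe(S1=100, S2=95, T=1.0, sigma1=0.25, sigma2=0.2, rho=0.3), 4))
```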
https://doi.org/10.1142/9789811202391_0035
We develop a simultaneous determination model of capital structure and stock returns. Specifically, we incorporate the managerial investment autonomy theory into the structural equation modeling with confirmatory factor analysis to jointly determine capital structure and stock returns. Besides attributes introduced in previous studies, we introduce indicators affecting a firm’s financing decision, such as managerial entrenchment, macro-economic factors, government financial policy, and pricing factors. Empirical results show that stock returns, asset structure, growth, industry classification, uniqueness, volatility, financial rating, profitability, government financial policy, and managerial entrenchment are major determinants of capital structure.
https://doi.org/10.1142/9789811202391_0036
In the context of globalization, through a growing process of market liberalization, advanced technology, and economic trading blocs, national stock markets have become more interdependent, which limits international portfolio diversification opportunities. This chapter investigates the degree of stock market co-movement between and within 13 developed European Union markets, six developing Latin American markets, two developed North American markets, 10 developing Asian markets, and the markets of Norway, Switzerland, Australia, and Japan. The research methodology employed includes wavelet correlation, wavelet multiple cross-correlation, and wavelet coherence. Results show a positive correlation within and across trading blocs in all investment horizons and over time, and they show that the linkage between stock returns increases with the time scale, implying that international diversification benefits have largely disappeared in globalized world markets. Moreover, we find a high degree of co-movement at low frequencies in both crisis and non-crisis periods, which indicates a fundamental relationship between stock market returns. Finally, multiple cross-correlation analysis reveals that stock markets are positively correlated at all wavelet scales and at all lags, and that France’s stock market is the potential leader or follower of the other European and other major world stock markets at low and high frequencies.
https://doi.org/10.1142/9789811202391_0037
Specification error and measurement error are two major issues in finance research. The main purpose of this chapter is (i) to review and extend existing errors-in-variables (EIV) estimation methods, including the classical method, grouping method, instrumental variable method, mathematical programming method, maximum likelihood method, LISREL method, and the Bayesian approach; (ii) to investigate how EIV estimation methods have been used in finance-related studies, such as the cost of capital, capital structure, investment equations, and tests of capital asset pricing models; and (iii) to give a more detailed explanation of the methods used by Almeida et al. (2010).
https://doi.org/10.1142/9789811202391_0038
This chapter proposes a mixture copula framework for integration of different types of bank risks, which is able to capture comprehensively the nonlinearity, tail dependence, tail asymmetry and structure asymmetry of bank risk dependence. We analyze why mixture copula is well-suited for bank risk integration, discuss how to construct a proper mixture copula and present detailed steps for using mixture copula. In the empirical analysis, the proposed framework is employed to model the dependence structure between credit risk, market risk and operational risk of Chinese banks. The comparisons with seven other major approaches provide strong evidence of the effectiveness of the constructed mixture copulas and help to uncover several important pitfalls and misunderstandings in risk dependence modeling.
https://doi.org/10.1142/9789811202391_0039
Recent progress in graphics processing unit (GPU) computing, with applications in science and technology, has demonstrated tremendous impact over the last decade. However, financial applications of GPU computing are less discussed, which may pose an obstacle to the development of financial technology, an emerging and disruptive field focusing on improving the efficiency of our current financial system. This chapter aims to raise attention to GPU computing in finance by first empirically investigating the performance of three basic computational methods: solving a linear system, the fast Fourier transform, and Monte Carlo simulation. Then a fast calibration of the wing model to implied volatilities is explored with a set of high-frequency traded futures and options data. At least a 60% reduction in execution time for this calibration is obtained under the Matlab computational environment. This finding enables the disclosure of instant market changes so that real-time surveillance of financial markets can be established for either trading or risk management purposes.
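As a CPU baseline for the Monte Carlo method mentioned above, the sketch below prices a European call with vectorized NumPy; in a GPU setting the same array expression can be offloaded (for example by swapping in an array library with GPU support), though the chapter's own experiments are run in the Matlab environment, so this is purely an illustrative stand-in.

```python
import numpy as np

def mc_call_price(S0, K, T, r, sigma, n_paths=1_000_000, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal stock price under the risk-neutral measure.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()

print(round(mc_call_price(S0=100, K=100, T=1.0, r=0.05, sigma=0.2), 4))
```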
https://doi.org/10.1142/9789811202391_0040
This chapter demonstrates theoretically that without imposing any structure on the underlying forcing process, the model-free CBOE volatility index (VIX) does not measure market expectation of volatility but that of a linear moment-combination. Particularly, VIX undervalues (overvalues) volatility when market return is expected to be negatively (positively) skewed. Alternatively, we develop a model-free generalized volatility index (GVIX). With no diffusion assumption, GVIX is formulated directly from the definition of log-return variance, and VIX is a special case of the GVIX. Empirically, VIX generally understates the true volatility, and the estimation errors considerably enlarge during volatile markets. The spread between GVIX and VIX follows a mean-reverting process.
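The model-free benchmark that GVIX generalizes is the familiar strike-integrated variance underlying VIX. A discretized version of that calculation is sketched below with hypothetical out-of-the-money option quotes; the strike grid, prices, and forward level are assumptions, and the GVIX correction terms studied in the chapter are not included.

```python
import numpy as np

def model_free_variance(strikes, otm_prices, F, K0, r, T):
    """Discretized VIX-style variance from out-of-the-money option prices."""
    strikes = np.asarray(strikes, dtype=float)
    otm_prices = np.asarray(otm_prices, dtype=float)
    dK = np.gradient(strikes)                      # strike spacing
    var = (2.0 / T) * np.exp(r * T) * np.sum(dK / strikes**2 * otm_prices)
    var -= (1.0 / T) * (F / K0 - 1.0) ** 2         # convexity correction term
    return var

# Hypothetical 30-day strike grid and OTM option quotes.
strikes = np.arange(80, 121, 5)
otm_prices = np.array([0.4, 0.9, 1.8, 3.2, 5.0, 3.0, 1.6, 0.8, 0.3])
var = model_free_variance(strikes, otm_prices, F=100.5, K0=100, r=0.02, T=30 / 365)
print("annualized volatility index:", round(100 * np.sqrt(var), 2))
```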
https://doi.org/10.1142/9789811202391_0041
Under the assumption that the asset value follows a phase-type jump-diffusion, we show that the expected discounted penalty satisfies an ODE and obtain a general form for the expected discounted penalty. In particular, if only downward jumps are allowed, we get an explicit formula in terms of the penalty function and jump distribution. On the other hand, if the downward jump distribution is a mixture of exponential distributions (and upward jumps are determined by a general Lévy measure), we obtain closed-form solutions for the expected discounted penalty. As an application, we work out an example in Leland’s structural model with jumps. For earlier and related results, see Gerber and Landry (1998), Hilberink and Rogers (2002), Asmussen et al. (2004), and Kyprianou and Surya (2007).
https://doi.org/10.1142/9789811202391_0042
This chapter investigates the characteristics of implied risk-neutral distributions derived separately from call and put options prices. Differences in risk-neutral moments between call and put options indicate deviations from put–call parity. We find that sentiment effect is significantly related to differences between call and put option prices. Our results suggest there is differential impact of investor sentiment and consumer sentiment on call and put option traders’ expectations. Rational and irrational sentiment components have different influence on call and put option traders’ beliefs as well.
https://doi.org/10.1142/9789811202391_0043
This chapter presents the state of the art of the Intelligent Portfolio Theory, which consists of three parts: the basic theory — principles and framework of intelligent portfolio management; the strength investing methodology as the driving engine; and the dynamic investability map in the confluence of business and market cycles and sector and location rotations. The theory is based on the tenet of “invest in trading” beyond “invest in assets”, distinguishing asset portfolios from trading strategies and integrating them into a multi-asset portfolio which consists of many multi-strategy portfolios, one for each asset. The multi-asset portfolio is managed within an active portfolio management framework, where the asset allocation weights are dynamically estimated from a multi-factor model. The weighted investment in each single asset is then managed via a portfolio of trading strategies. Each trading strategy is itself a dynamically adapting trading agent with its own optimization mechanism. Strength investing, as a methodology for asset selection with market timing, focuses on dynamically tracing a small open cluster of assets which exhibit stronger trends and simultaneously following the trends of those assets, so as to alleviate the drawbacks of single-asset trend following such as drawdown and stop loss. In the real world of global financial markets, investability, both in terms of asset selection and trade timing, emerges in the confluence of business cycles and market cycles as well as the sector rotation for stock markets and location rotation for real estate markets.
https://doi.org/10.1142/9789811202391_0044
Credit risk analysis has long attracted great attention from both academic researchers and practitioners. However, the recent global financial crisis has made the issue even more important because of the need to further enhance the accuracy of borrower classification. In this study, an evolution strategy (ES)-based adaptive Lq SVM model with Gauss kernel (ES-ALqG-SVM) is proposed for credit risk analysis. The support vector machine (SVM) is a classification method that has been extensively studied in recent years. Many improved SVM models have been proposed, with non-adaptive and pre-determined penalties. However, different credit data sets have different structures that are suited to different penalty forms in real life. Moreover, traditional parameter search methods, such as the grid search method, are time consuming. The proposed ES-based adaptive Lq SVM model with Gauss kernel (ES-ALqG-SVM) aims to solve these problems. The penalty exponent is extended to the adaptive range (0, 2] to fit different credit data structures, and the Gauss kernel is used to improve classification accuracy.
For verification purposes, two UCI credit datasets and a real-life credit dataset are used to test our model. The experiment results show that the proposed approach performs better than See5, DT, MCCQP, SVM light, and other popular algorithms listed in this study, and that the computing speed is greatly improved compared with the grid search method.
https://doi.org/10.1142/9789811202391_0045
This chapter examines the impact of product market competition on the benchmarking of a CEO’s compensation to that of counterparts in peer companies. Using a large sample of US firms, we find a significantly greater effect of CEO pay benchmarking in more competitive industries than in less competitive industries. Using three proxies for managerial talent that have been used by Albuquerque et al. (2013), we find that CEO benchmarking is more pronounced in competitive markets wherein managerial talent is more valuable. This suggests that pay benchmarking and product market competition are complements. The above results are not due to industry homogeneity.
https://doi.org/10.1142/9789811202391_0046
This chapter defines and studies a class of cash conversion systems in firms, consisting of a funds pool, a single-product Make-to-Stock inventory, and a receivables pool. The system implements a perpetual flow cycle where funds convert to product and back to funds. The equilibrium rate analysis (ERA) methodology is used to analyze the firm’s operational and financial performance metrics, including net profit rate, rate of return on investment, and cash conversion cycle statistics. Specifically, in this chapter, we model the case where the firm is a subsidiary of a financially stable parent corporation, and the subsidiary’s cash conversion system is capital-rationed. We model this system as a discrete-state continuous-time Markovian process, and compute its stochastic equilibrium distribution using analytic and numerical methods. These are used, in turn, to compute the aforesaid financial metrics in stochastic equilibrium. Finally, we present a methodology that uses these financial metrics to optimize the financial and operational design of the system, and specifically, the firm’s capital structure and the sizing of the inventory’s base stock level. Numerical results show that optimal designs for profit rate maximization and rate of return maximization can differ substantially, reflecting the differing interests of firm managers and investors.
https://doi.org/10.1142/9789811202391_0047
This chapter investigates the characteristics of a subset of the infinite number of Security Market Lines (SMLs) that ensure the market portfolio is mean–variance efficient both at a point in time and over time. The analysis employs raw rather than excess returns. With some specifications of the SML, the risk-free rate exceeds the market portfolio’s equilibrium mean, which is inconsistent with CAPM theory. At a point in time, a Hotelling’s T2 test may reject most of the SMLs or none of them, although other mean–variance criteria may indicate some are economically reasonable and others are not.
https://doi.org/10.1142/9789811202391_0048
In this chapter, we propose a novel model to incorporate prospect theory into the consumption-based asset pricing model, where habit formation of consumption is employed to determine endogenously the reference point. Our model is motivated by the common element of prospect theory and habit formation of consumption that investors care little about the absolute level of wealth (consumption), but rather pay attention to gains or losses (excess or shortage in consumption level) compared to a reference point. The results show that if investors evaluate their excess or shortage amounts in consumption relative to their habit consumption levels based on prospect theory, the equity premium puzzle can be resolved.
https://doi.org/10.1142/9789811202391_0049
To enhance the value of a firm, the firm’s management must attempt to minimize the total discounted cost of financing over a planning horizon. Unfortunately, the variety of sources of funds and the constraints that may be imposed on accessing funds from any one source make this exercise a difficult task. The model presented and illustrated here accomplishes this task considering issuing new equity and new bonds, refunding the bonds, borrowing short term from financial institutions, temporarily parking surplus funds in short-term securities, repurchasing its stock, and retaining part or all of a firm’s earnings. The proportions of these sources of funds are determined subject to their associated costs and various constraints such as not exceeding a specific debt/equity ratio and following a stable dividend policy, among others.
https://doi.org/10.1142/9789811202391_0050
This chapter first reviews empirical evidence and estimation methods of structural credit risk models. Next, an empirical investigation of the performance of default prediction under the down-and-out barrier option framework is provided. In the literature review, a brief overview of the structural credit risk models is provided. Empirical investigations in the extant literature are described in some detail, and their results are summarized in terms of the subject and estimation method adopted in each paper. Current estimation methods and their drawbacks are discussed in detail. In our empirical investigation, we adopt the Maximum Likelihood Estimation method proposed by Duan (1994). This method has been shown by Ericsson and Reneby (2005) through simulation experiments to be superior to the volatility restriction approach commonly adopted in the literature. Our empirical results surprisingly show that the simple Merton model outperforms the Brockman and Turtle (2003) model in default prediction. The inferior performance of the Brockman and Turtle model may be the result of its unreasonable assumption of the flat barrier.
https://doi.org/10.1142/9789811202391_0051
In this chapter, we empirically test the constant elasticity of variance (CEV) option pricing model of Cox (1975, 1996) and Cox and Ross (1976), and compare the performance of the CEV model with alternative option pricing models, mainly the stochastic volatility model, in terms of European option pricing and cost-accuracy-based analysis of their numerical procedures. In European-style option pricing, we test the empirical pricing performance of the CEV model and compare the results with those of Bakshi et al. (1997). The CEV model, introducing only one more parameter than the Black–Scholes formula, improves performance notably in all of the tests of in-sample, out-of-sample, and implied volatility stability. Furthermore, with a much simpler model, the CEV model can still perform better than the stochastic volatility model in short-term and out-of-the-money categories. When applied to American option pricing, high-dimensional lattice models are prohibitively expensive. Our numerical experiments clearly show that the CEV model performs much better in terms of the speed of convergence to its closed-form solution, while the implementation cost of the stochastic volatility model is too high and practically infeasible for empirical work. In summary, with a much lower implementation cost and faster computational speed, the CEV option pricing model could be a better candidate than more complex option pricing models, especially when one wants to apply the CEV process to pricing more complicated path-dependent options or credit risk models.
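To make the CEV dynamics concrete, the sketch below prices a European call by Euler simulation of dS = rS dt + δ S^(β/2) dW. This Monte Carlo stand-in is only illustrative, with arbitrary parameter values; the chapter itself works with the closed-form noncentral chi-square solution and lattice methods rather than simulation.

```python
import numpy as np

def cev_call_mc(S0, K, T, r, delta, beta, n_paths=200_000, n_steps=250, seed=0):
    """Euler Monte Carlo price of a European call under CEV: dS = r*S dt + delta*S^(beta/2) dW."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        S += r * S * dt + delta * np.power(np.maximum(S, 0.0), beta / 2) * np.sqrt(dt) * z
        S = np.maximum(S, 0.0)        # absorb at zero to keep the scheme stable
    return np.exp(-r * T) * np.maximum(S - K, 0.0).mean()

# delta chosen so that local volatility is about 20% at S0 = 100 when beta = 1.5.
print(round(cev_call_mc(S0=100, K=100, T=1.0, r=0.05,
                        delta=0.2 * 100**(1 - 1.5 / 2), beta=1.5), 4))
```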
https://doi.org/10.1142/9789811202391_0052
We study the heteroskedasticity and jump behavior of the Thai baht using square root stochastic volatility models with or without jumps. The Bayes factor is used to evaluate the explanatory power of competing models. The results suggest that in our sample, the square root stochastic volatility model with independent jumps in the observation and state equations (SVIJ) has the best explanatory power for the 1996 Asian financial crisis. Using the estimation results of the SVIJ model, we are able to link the major events of the Asian financial crisis to jump behavior in either volatility or observation.
https://doi.org/10.1142/9789811202391_0053
This chapter attempts to explore the puzzle of post-earnings-announcement drifts by focusing on the revision of systematic risk subsequent to the release of earnings information. This chapter proposes a market model with time-varying systematic risk by incorporating ARCH into the CAPM. The Kalman filter is then employed to estimate how the market revises its risk assessment subsequent to earnings announcement. This chapter also conducts empirical analysis based on a sample of US publicly held companies during the five-fiscal year sample period, 2010–2014. After controlling for the revision of risk and isolating potential confounding effect, this chapter finds that the phenomenon of post-earnings announcement drifts, so well documented in accounting literature, no longer exists.
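A stripped-down version of the time-varying risk machinery described above treats beta as a random-walk state in the market model and filters it with the standard Kalman recursions. The sketch below, without the ARCH layer and with assumed noise variances and simulated data, illustrates the idea.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500
market = rng.normal(0.0, 0.01, T)
true_beta = 1.0 + np.cumsum(rng.normal(0, 0.01, T))   # slowly drifting true beta
stock = true_beta * market + rng.normal(0, 0.01, T)

# State equation: beta_t = beta_{t-1} + w_t; observation: r_t = beta_t * m_t + e_t.
q, h = 1e-4, 1e-4          # state and observation noise variances (assumed)
beta, P = 1.0, 1.0         # initial state mean and variance
filtered = np.empty(T)
for t in range(T):
    P += q                                         # predict
    K = P * market[t] / (market[t]**2 * P + h)     # Kalman gain
    beta += K * (stock[t] - beta * market[t])      # update state estimate
    P *= (1 - K * market[t])                       # update state variance
    filtered[t] = beta

print("final filtered beta:", round(filtered[-1], 3), "true:", round(true_beta[-1], 3))
```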
https://doi.org/10.1142/9789811202391_0054
In today’s dynamic but ambiguous business environment, fuzzy set applications are growing continuously as one of a manager’s most useful decision-making tools. Recent fuzzy set business applications show promising results (Alcantud et al., 2017; Frini, 2017; Toklu, 2017; Wang et al., 2017). International transfer pricing has recently received more attention as the US wages trade wars with China and other countries and some firms choose tax minimization as a transfer pricing strategy. This chapter demonstrates how to apply fuzzy sets to international transfer pricing problems. Applications of fuzzy sets to other business decisions are also discussed in some detail.
https://doi.org/10.1142/9789811202391_0055
Data mining is quite common in econometric modeling when a given dataset is applied multiple times for the purpose of inference, which in turn can bias inference. Given the existence of data mining, it is likely that any reported investment performance is simply due to random chance (luck). This study develops a time-series bootstrapping simulation method to distinguish skill from luck in the investment process. Empirically, we find little evidence that investment strategies based on UK analyst recommendation revisions can generate statistically significant abnormal returns. Our rolling-window-based bootstrapping simulations confirm that the reported insignificant portfolio performance is due to sell-side analysts' lack of skill in making valuable stock recommendations, rather than their bad luck, irrespective of whether they work for more prestigious brokerage houses.
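The abstract above turns on a rolling-window, time-series bootstrap. As a purely illustrative aid, and not the chapter's own procedure, the sketch below runs a moving-block bootstrap test of whether a strategy's mean return could plausibly arise by chance; the simulated return series, block length, and number of replications are all assumptions.

```python
# A minimal sketch of a moving-block bootstrap test of whether a strategy's
# mean return could arise by chance; the simulated return series, block
# length, and replication count are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0004, 0.01, size=1000)   # hypothetical daily strategy returns
observed_mean = returns.mean()

block, n_boot = 20, 5000
n_blocks = len(returns) // block
boot_means = np.empty(n_boot)
for b in range(n_boot):
    # Resample blocks of consecutive observations to preserve serial dependence
    starts = rng.integers(0, len(returns) - block, size=n_blocks)
    sample = np.concatenate([returns[s:s + block] for s in starts])
    boot_means[b] = sample.mean()

# Centre the bootstrap distribution at zero to represent the "no skill" null
p_value = np.mean(np.abs(boot_means - boot_means.mean()) >= abs(observed_mean))
print(f"observed mean {observed_mean:.5f}, bootstrap p-value {p_value:.3f}")
```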
https://doi.org/10.1142/9789811202391_0056
Banks now face strong competition from both technological giants and small fintech startups. Under these conditions, banks have also started to implement disruptive technologies in their day-to-day operations. However, in some cases huge investments in different technological systems do not lead to an increase in company performance due to the resistance of employees. In this chapter, we focus on both internal and external factors that may influence employees' labor productivity and the performance of the whole company. The sample includes 148 employees with education in banking and finance. The model was estimated based on Partial Least Squares Structural Equation Modelling (PLS-SEM). We show that both motivation to use disruptive technologies and digital skills have a strong impact on labor productivity, while both labor productivity and organizational support contribute positively to the improvement of company performance based on the usage of new technologies.
https://doi.org/10.1142/9789811202391_0057
The financial ratio-based credit-scoring model for a bond rating system requires the simultaneous maximization of two conflicting objectives, namely, explanatory and discriminatory power, which had not been directly addressed in the literature. The main purpose of this study is to develop a credit-scoring model that combines principal component analysis and Fisher's discriminant analysis using the MINIMAX goal programming technique so that the maximization of the two conflicting objectives can be compromised. The performance of alternative credit-scoring models, including the stepwise discriminant analysis of Pinches and Mingo, Fisher's discriminant analysis, and principal component analysis, is analyzed and compared using a dataset from previous studies. We find that the proposed hybrid credit-scoring model outperforms the alternative models in both explanatory and discriminatory power.
https://doi.org/10.1142/9789811202391_0058
This chapter resolves an inconclusive issue in the empirical literature about the relationship between downside risk and stock returns for Asian markets. This study demonstrates that the mixed signs on the risk coefficient stem from the fact that the excess stock return series is assumed to be stationary with a short memory, which is inconsistent with the downside risk series featuring a long memory process. After we appropriately model the long memory property of downside risk and apply a fractional difference to downside risk, the evidence consistently supports a significant and positive risk–return relation. This holds true for downside risk not only in the domestic market but also across markets. The evidence suggests that the risk premium is higher if the risk originates in a dominant market, such as the US. These findings are robust even when we consider the leverage effect, value-at-risk feedback, and the long memory effect in the conditional variance.
https://doi.org/10.1142/9789811202391_0059
This study uses a recurrent survival analysis technique to show that a higher spread of conversion-stock prices and a higher buy-back ratio of stock repurchases provide the CBs' debt-like signals of Constantinides and Grundy (1989), while a lower risk-free rate, higher capital expenditures, higher non-management institutional ownership, and higher total asset value provide the CBs' equity-like signals of Stein (1992). While the equity-like signals might accelerate the rate of sequential conversions and weaken the CBs' risk-mitigating effect in the presence of risk-shifting potential, this study shows that this can happen only in a financially healthy firm with higher free cash flow. For financially distressed firms, the CBs' risk-mitigating effect is maintained.
https://doi.org/10.1142/9789811202391_0060
This study relies on a structural approach model to investigate the determinants of Credit Default Swap (CDS) spread changes for Euro-zone financial institutions over the period January 2005 to October 2015. Going beyond the structural model, this study incorporates features such as the role of systemic risk factors, bank-specific characteristics, and credit ratings. We adopt the dynamic framework provided by panel Vector Autoregressive Models, which allows for endogeneity issues; this is a novelty of our approach. The main findings are that structural models seem to be more relevant during highly volatile periods and that the relation between the CDS and its theoretical determinants is not constant over time. Overall, the empirical results suggest that structural models perform well in explaining bank credit risk, but the determinants of CDS spreads also depend on the underlying economic situation, which should be taken into consideration when interpreting CDS spread changes.
https://doi.org/10.1142/9789811202391_0061
This chapter examines the empirical performance of dynamic Gaussian affine term structure models (DGATSMs) at the zero lower bound (ZLB) when principal components analysis (PCA) is used to extract factors. We begin by providing a comprehensive review of DGATSMs when PCA is used to extract factors, highlighting their numerous auspicious qualities. The model specifies bond yields to be a simple linear function of underlying Gaussian factors. This is especially favorable since, in principle, PCA works best when the model is linear and the first two moments are sufficient to describe the data, among other characteristics. DGATSMs have a strong theoretical foundation grounded in the absence of arbitrage, and they produce reasonable cross-sectional fits of the yield curve. Both of these qualities carry over to the model when PCA is used to extract the state vector. Additionally, the implementation of PCA is simple in that it takes a matter of seconds to estimate the factors, and it is convenient to include in estimation as most software packages have ready-to-use algorithms that compute the factors immediately. The results from our empirical investigation lead us to conclude that DGATSMs, when PCA is employed to extract factors, perform very poorly at the ZLB: they frequently cross the ZLB en route to producing negative out-of-sample forecasts for bond yields. The main implication of this study is that, despite their numerous positive characteristics, DGATSMs with PCA-extracted factors produce poor empirical forecasts around the ZLB.
https://doi.org/10.1142/9789811202391_0062
In this chapter, we first investigate how measurement errors can affect the estimators of the CAPM, such as α_j and β_j. Then, we derive the probability limit of the estimated slope coefficient, plim b̂, assuming R_m and R_b are measured with error. Finally, we develop an alternative hypothesis testing procedure for the CAPM.
https://doi.org/10.1142/9789811202391_0063
This chapter relies on a factor-based forecasting model for net charge-off rates of banks in a data-rich environment. More specifically, we employ a partial least squares (PLS) method to extract target-specific factors and find that it outperforms the principal component approach in-sample by construction. Further, we apply PLS to out-of-sample forecasting exercises for aggregate bank net charge-off rates on various loans, as well as for similar individual bank rates, using over 250 quarterly macroeconomic series from 1987Q1 to 2016Q4. Our empirical results demonstrate the superior performance of PLS over benchmark models, including both a stationary autoregressive-type model and a nonstationary random walk model. Our approach can help banks identify important variables that contribute to bank losses so that they are better able to contain losses to manageable levels.
https://doi.org/10.1142/9789811202391_0064
Filtering methods such as the Kalman filter (KF) and its extended algorithms have been widely used to estimate asset pricing models in topics such as rational stock bubbles, the interest rate term structure, and derivative pricing. The basic idea of filtering is to cast the discrete- or continuous-time model of asset prices into a discrete state-space model in which the state variables are the latent factors driving the system and the observable variables are usually asset prices. Based on a state-space model, we can choose a specific filtering method to compute its likelihood and estimate unknown parameters by maximum likelihood. The classical KF can be used to estimate a linear state-space model with Gaussian measurement error. If the model becomes nonlinear, we can rely on the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF), or the Particle Filter (PF) for estimation. For a piecewise linear state-space model with regime switching, the Mixture Kalman Filter (MKF), which inherits the merits of both KF and PF, can be employed. However, if the measurement error is non-Gaussian, PF is the only applicable method. For each filtering method, we review its algorithm, application scope, computational efficiency, and asset pricing applications. This chapter provides a brief summary of applications of filtering methods in estimating asset pricing models.
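To make the state-space setup above concrete, here is a minimal, self-contained sketch of the classical Kalman filter for a one-factor linear Gaussian model; the AR(1) state dynamics, parameter values, and simulated observation series are illustrative assumptions rather than any particular chapter's model, and the accumulated Gaussian log-likelihood is the quantity one would maximize to estimate the unknown parameters.

```python
# A minimal sketch of the classical Kalman filter for a linear Gaussian
# state-space model; the AR(1) state dynamics and all parameter values
# are illustrative assumptions, not an estimated asset pricing model.
import numpy as np

rng = np.random.default_rng(0)

# State:        x_t = phi * x_{t-1} + w_t,   w_t ~ N(0, q)
# Observation:  y_t = x_t + v_t,             v_t ~ N(0, r)
phi, q, r, T = 0.95, 0.10, 0.25, 200

# Simulate a latent factor and noisy observations from the model above
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(scale=np.sqrt(q))
    y[t] = x[t] + rng.normal(scale=np.sqrt(r))

# Kalman filter recursion: predict, then update with each observation
x_hat, P = 0.0, 1.0            # prior mean and variance of the state
loglik = 0.0
filtered = np.zeros(T)
for t in range(T):
    # Prediction step
    x_pred = phi * x_hat
    P_pred = phi**2 * P + q
    # Update step
    S = P_pred + r             # innovation variance
    K = P_pred / S             # Kalman gain
    innov = y[t] - x_pred
    x_hat = x_pred + K * innov
    P = (1.0 - K) * P_pred
    filtered[t] = x_hat
    # Gaussian log-likelihood contribution, usable for ML estimation
    loglik += -0.5 * (np.log(2 * np.pi * S) + innov**2 / S)

print(f"final filtered state {filtered[-1]:.3f}, log-likelihood {loglik:.2f}")
```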
https://doi.org/10.1142/9789811202391_0065
Brown and Gibbons (1985) developed a theory of relative risk aversion estimation in terms of average market rates of return and the variance of market rates of return. However, the exact sampling distribution of an appropriate relative risk aversion estimator had not been derived. First, we derive theoretically the density of Brown and Gibbons' maximum likelihood estimator. It is shown that the central t-distribution is not appropriate for testing the significance of the estimated relative risk aversion. We then derive the minimum variance unbiased estimator by a linear transformation of Brown and Gibbons' maximum likelihood estimator. Its density function is neither a central nor a noncentral t-distribution, and the density function of this new distribution has been tabulated. An empirical example illustrates the application of this new sampling distribution.
https://doi.org/10.1142/9789811202391_fmatter03
The following sections are included:
https://doi.org/10.1142/9789811202391_0066
This study examines how a firm's use of social media and banking relationships influences its value. Using a sample of 6,636 firm-year observations from 2008 to 2015, the results show that social media (Facebook, Google+, and LinkedIn) positively influence firm value, whereas bank relationships affect firm value differently: borrowing from a larger number of banks reduces value, whereas using more bank debt creates value. The impacts of YouTube and Twitter on firm value are insignificant. Although social media have a function similar to that of banks in mitigating the information asymmetry between firms and outsiders, the information types vary. Banks create more soft and private information, while social media deliver more public and hard information. The accuracy of information matters more than its quantity; hence, whether more information sharing via social media creates value is uncertain. We also find substitution and complementary effects between various types of social media and banking relationships on firm value. Our results remain robust after conducting a difference-in-differences (DID) analysis using the exogenous shock of the Facebook IPO in 2012.
https://doi.org/10.1142/9789811202391_0067
The objective of this chapter is to provide an update to the literature on initial public offering (IPO) performance and issuance, focusing explicitly on the methodological approaches used to conduct these analyses, and to develop a more general approach to evaluating aggregate IPO issuance and performance. Traditionally, empirical studies of IPO performance have been critically dependent on the general methodology that researchers use to adjust the individual IPO's returns to account for market performance and the time horizon of the study; however, more recent studies have examined the patterns of returns that IPOs emit, in general, sometimes prior to performance adjustments. In the US market, for instance, changes in the regulatory regime as a result of the introduction of the JOBS Act and events such as the financial collapse have led to a period of relatively benign issuance associated with IPOs. This has recently led to new questions about the true relationship between the volume of IPO issuance and performance. Historically, we have assumed that hot and cold market cycles affect performance; however, the methodology used to capture whether markets are indeed hot or cold has recently been questioned. In addition, there has been a renaissance of late as researchers critically examine the validity of research projects that claim to identify hot and cold markets or identify cyclicality in the performance of IPOs. The research has evolved from segmenting a population of IPO returns into quartiles or terciles and referring to the segments of these populations as hot and cold, to Markov two- and three-state regime-shifting models, to more recent applications of event-specific and spline regression models. Researchers have been working to uncover what actually causes IPO markets to move, and the cyclical nature of IPO performance and issuance seems to indicate that the current state of research on IPOs needs some restructuring and clarification. This chapter has important implications for financial market participants, portfolio managers, investment bankers, regulatory bodies, and business owners. Furthermore, this review chapter can aid in the setting of benchmarks for the valuation of IPOs; help investors, business owners, and the managers of businesses understand the relationship between IPO performance and issuance so that they are better positioned to make wise investment decisions when purchasing IPOs or when issuing their own IPOs; and enable researchers to think more critically about developing their models of IPO issuance and performance.
https://doi.org/10.1142/9789811202391_0068
In our previous study, the empirical relationship between Sharpe measure and its risk proxy was shown to be dependent on the sample size, the investment horizon and the market conditions. This important result is generalized in the present study to include Treynor and Jensen performance measures. Moreover, it is shown that the conventional sample estimate of ex ante Treynor measure is biased. As a result, the ranking of mutual fund performance based on the biased estimate is not an unbiased ranking as implied by the ex ante Treynor measure. In addition, a significant relationship between the estimated Jensen measure and its risk proxy may produce a potential bias associated with the cumulative average residual technique which is frequently used for testing the market efficiency hypothesis. Finally, the impact of the dependence between risk and average return in Friend and Blume’s findings is also investigated.
https://doi.org/10.1142/9789811202391_0069
Sharpe's, Treynor's, and Jensen's measures have been used extensively for the performance evaluation of mutual funds and portfolios. These three widely used performance measures have been found by a number of empirical studies to be highly correlated with their corresponding risk measures. This paper focuses its investigation on the possible sources of the bias associated with the empirical relationship between the estimated Sharpe measure and its estimated risk measure. In general, the sample size, the investment horizon, and the market conditions are three important factors in determining the strong relationship between the ex post Sharpe measure and its estimated risk surrogate.
The interesting findings of this study are as follows: (1) the estimated Sharpe’s measure is uncorrelated with the estimated risk measure either when the risk-free rate of interest equals the expected return on the market portfolio over the sample period or when the sample size is infinite, (2) the estimated Sharpe’s measure is positively (or negatively) correlated with the estimated risk measure if the risk-free rate of interest is greater than (or less than) the expected return on the market portfolio, (3) an observation horizon shorter than the true investment horizon can reduce the dependence of the estimated Sharpe’s measure on its estimated risk measure, and (4) an observation horizon longer than the true investment horizon will magnify the dependence. The results have indicated that, in conducting empirical research, a shorter observation horizon and a large sample size should be used to reduce the bias associated with the estimated Sharpe’s measure.
https://doi.org/10.1142/9789811202391_0070
This study proposes and calibrates the VG NGARCH model, which provides a more informative and parsimonious model by formulating the dynamics of log-returns as a variance-gamma (VG) process by Madan et al. (1998). An autoregressive structure is imposed on the shape parameter of the VG process, which describes the news arrival rates that affect the price movements. The performance of the proposed VG NGARCH model is compared with the GARCH-jump with autoregressive conditional jump intensity (GARJI) model by Chan and Maheu (2002), in which two conditional independent autoregressive processes are used to describe stock price movements caused by normal and extreme news events, respectively. The comparison is made based on daily stock prices of five financial companies in the S&P 500, namely, Bank of America, Wells Fargo, J. P. Morgan, CitiGroup, and AIG, from January 3, 2006 to December 31, 2009. The goodness of fit of the VG NGARCH model and its ability to predict the ex ante probabilities of large price movements are demonstrated and compared with the benchmark GARJI model.
https://doi.org/10.1142/9789811202391_0071
Purpose: Increased penetration of mobile phones has built great opportunities for increasing the level of financial inclusion around the world. Digital channels help banks in not only attracting new customers but also in ensuring that the existing ones remain loyal. This chapter studies the incentives to encourage the use of mobile banking by smartphone and tablet users.
Design/methodology/approach: An online survey is conducted to explore possible relations between the potential determinants of the intention to use mobile banking. The model is assessed with Partial Least Squares Structural Equation Modelling (PLS-SEM) technique.
Findings: The results show that perceived usefulness and perceived efforts tend to be the most significant factors in the adoption of mobile banking. However, such factors as perceived risks, compatibility with lifestyle and social influence are found to be insignificant due to some cultural and institutional features attributed to CIS countries.
Originality/value: This chapter contributes to the field of m-banking studies by focusing on both smartphone and tablet users. Notably, the majority of respondents represent generations Y and Z, who appear to be moving from traditional banking to digital channels.
https://doi.org/10.1142/9789811202391_0072
When evaluating the market risk of long-horizon equity returns, it is always difficult to provide a statistically sound solution due to the limitation of the sample size. To solve the problem for the value-at-risk (VaR) and the conditional tail expectation (CTE), Ho et al. (2016, 2018) introduce a general multivariate stochastic volatility return model from which asymptotic formulas for the VaR and the CTE are derived for integrated returns with the length of integration increasing to infinity. Based on the formulas, simple non-parametric estimators for the two popular risk measures of the long-horizon returns are constructed. The estimates are easy to implement and shown to be consistent and asymptotically normal. In this chapter, we further address the issue of testing the equality of the CTEs of integrated returns. Extensive finite-sample analysis and real data analysis are conducted to demonstrate the efficiency of the test statistics we propose.
https://doi.org/10.1142/9789811202391_0073
This chapter discusses the copula methods for application in finance. It provides an overview of the concept of copula, and the underlying statistical theories as well as theorems involved. The focus is on two copula families, namely, the elliptical and Archimedean copulas. The Gaussian and Student’s t copulas in the family of elliptical copulas which have symmetrical tails in their distributions are explained. The Clayton and Gumbel copulas in the family of Archimedean copulas whose distributions are asymmetrical are also described. Elaborations are given on tail dependence and the associated measures for these copulas. The estimation process is illustrated using an application of the methods on the returns of two exchange series.
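As a purely illustrative companion to the overview above, and not the chapter's own application, the sketch below transforms two simulated return series to pseudo-observations, backs out a Gaussian-copula correlation from Kendall's tau, and computes the Clayton copula's lower-tail dependence; the data and parameter values are assumptions.

```python
# A small sketch of the copula ideas discussed above: probability-integral
# transform two return series, imply a Gaussian-copula correlation from
# Kendall's tau, and compute the Clayton copula's lower-tail dependence.
# The simulated returns and parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Two correlated "exchange rate return" series (illustrative only)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1500)
x, y = z[:, 0], z[:, 1]

# Pseudo-observations on (0, 1) via ranks
u = stats.rankdata(x) / (len(x) + 1)
v = stats.rankdata(y) / (len(y) + 1)

# Gaussian copula: rho implied by Kendall's tau, rho = sin(pi * tau / 2)
tau, _ = stats.kendalltau(u, v)
rho = np.sin(np.pi * tau / 2)

# Clayton copula: theta from tau, and lower-tail dependence 2^(-1/theta)
theta = 2 * tau / (1 - tau)
lower_tail = 2 ** (-1 / theta) if theta > 0 else 0.0

print(f"Kendall tau {tau:.3f}, Gaussian-copula rho {rho:.3f}")
print(f"Clayton theta {theta:.3f}, lower-tail dependence {lower_tail:.3f}")
```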
https://doi.org/10.1142/9789811202391_0074
By assuming a multivariate normal distribution of excess returns, we find that the sample maximum squared Sharpe ratio (MSR) has a significant upward bias. We then construct estimators for MSR based on Bayes estimation and on unbiased estimation of the squared slope of the asymptote to the minimum variance frontier (ψ²). While the often used unbiased estimator may lead to unreasonable negative estimates in finite samples, Bayes estimators never produce negative values as long as the prior is bounded below by zero, although they have a larger bias. We also design a mixed estimator by combining the Bayes estimator with the unbiased estimator. We show by simulation that the new mixed estimator performs as well as the unbiased estimator in terms of bias and root mean square error, and it is always positive. The mixed estimators are particularly useful in trend analysis when MSR is very low, for example, during a crisis or depression. While negative or zero estimates from the unbiased estimator are not admissible, Bayes and mixed estimators can provide more information.
https://doi.org/10.1142/9789811202391_0075
Errors-in-variables (EIVs) and measurement errors are commonly encountered in asset prices and returns in capital markets. This study examines the explanatory power of the direct and reverse regression techniques to bound the true regression estimates in the presence of EIVs and measurement error. We also derive the standard error of the reverse regression estimates to compute the t-ratios of these estimates for the purpose of testing their statistical significance.
https://doi.org/10.1142/9789811202391_0076
This study investigates the influence of financial advisors on deal outcomes in M&As. In particular, it examines whether firms that hire domestic financial advisors outperform those that hire foreign counterparts. Using 333 targets and 949 bidders from 1995 to 2011, the results show that targets take more (less) time to complete deals when hiring low-reputation (foreign) financial advisors. When bidders hire low-reputation financial advisors and foreign advisors, they can complete deals faster. In addition, the evidence indicates that low-reputation financial advisors create higher gains for both targets and bidders around the announcement date. However, bidders that hire less prestigious financial advisors suffer larger losses during the post-announcement period. Interestingly, when hiring domestic advisors, both targets and bidders obtain higher announcement returns. The regression analysis further reveals that bidders obtain higher post-announcement returns when they hire domestic advisors. Hence, this study reveals that domestic advisors play an important role in M&As.
https://doi.org/10.1142/9789811202391_0077
In this chapter, we first discuss the basic concepts of linear algebra, including linear combinations and their distributions. Then we discuss the concepts of vectors, matrices, and their operations. Linear-equation systems and their solution are also explored in detail. Based upon this information, we discuss discriminant analysis, factor analysis, and principal component analysis. Some applications of these three analyses are also demonstrated.
https://doi.org/10.1142/9789811202391_0078
In this chapter, we will discuss how to use discriminant analysis to perform credit analysis and calculate financial z-scores, and then we will use both discriminant analysis and factor analysis to forecast bond ratings using financial ratio information. In addition, we will discuss Ohlson's model and the KMV–Merton model for default probability estimation. Finally, we will cite some empirical results about default probability estimation and compare the results of the two different probability estimation models.
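As an illustrative sketch of the discriminant-analysis approach to credit scoring described above, and not the chapter's own data or model, the example below fits a linear discriminant function to a few hypothetical financial ratios and uses it as a z-score-style credit score; the ratios, group means, and simulated sample are all assumptions.

```python
# A hedged sketch of linear discriminant analysis on financial ratios used
# as a credit score, in the spirit of z-score models; the ratios and the
# simulated sample below are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n = 400
# Hypothetical ratios: working capital/TA, retained earnings/TA, EBIT/TA, equity/debt
healthy = rng.normal([0.20, 0.25, 0.12, 1.5], 0.10, size=(n, 4))
distressed = rng.normal([0.02, 0.05, 0.01, 0.6], 0.10, size=(n, 4))
X = np.vstack([healthy, distressed])
y = np.array([0] * n + [1] * n)          # 1 = distressed / likely default

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# The discriminant function plays the role of a credit score:
# score = intercept + sum_i coef_i * ratio_i
scores = lda.decision_function(X)
print("coefficients:", np.round(lda.coef_[0], 3))
print("mean score, healthy vs distressed:",
      round(scores[:n].mean(), 2), round(scores[n:].mean(), 2))
print("in-sample accuracy:", lda.score(X, y))
```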
https://doi.org/10.1142/9789811202391_0079
This chapter uses the concepts of basic portfolio analysis and dominance principle to derive the CAPM. A graphical approach is first utilized to derive the CAPM, after which a mathematical approach to the derivation is developed that illustrates how the market model can be used to decompose total risk into two components. This is followed by a discussion of the importance of beta in security analysis and further exploration of the determination and forecasting of beta. The discussion closes with the applications and implications of the CAPM, and the appendix offers empirical evidence of the risk–return relationship.
In this chapter, we define both market beta and accounting beta and show how they are determined by different accounting and economic information. We then forecast both market beta and accounting beta. Finally, we propose a composite method to forecast beta.
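As a small illustrative aid to the market-model discussion above, and not the chapter's empirical work, the sketch below estimates a market beta by ordinary least squares from simulated excess returns and decomposes total risk into systematic and idiosyncratic components; all inputs are assumptions.

```python
# A minimal sketch of estimating a market beta from the market model
# R_stock = alpha + beta * R_market + e, using simulated excess returns;
# all numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
market = rng.normal(0.0005, 0.01, size=252)            # hypothetical market excess returns
stock = 0.0002 + 1.2 * market + rng.normal(0, 0.008, 252)

# OLS intercept and slope via least squares
X = np.column_stack([np.ones_like(market), market])
alpha_hat, beta_hat = np.linalg.lstsq(X, stock, rcond=None)[0]

# Decomposition of total risk into systematic and idiosyncratic parts
systematic_var = beta_hat**2 * market.var(ddof=1)
residual_var = (stock - X @ np.array([alpha_hat, beta_hat])).var(ddof=1)
print(f"beta {beta_hat:.3f}, systematic share {systematic_var / stock.var(ddof=1):.2%}")
```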
https://doi.org/10.1142/9789811202391_0080
In this chapter, we first discuss utility theory and the utility function in detail, and then we show how asset allocation can be done in terms of the quadratic utility function. Based upon these concepts, we show how Markowitz's portfolio selection model can be executed by a constrained maximization approach. Real-world examples with three securities are also demonstrated. In the Markowitz selection model, we consider both the case in which short sales are allowed and the case in which they are not.
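The following sketch illustrates the constrained-optimization view of Markowitz selection for three securities, minimizing portfolio variance for a target return with and without short sales; the expected returns, covariance matrix, and target are illustrative assumptions, not the chapter's example.

```python
# A small sketch of Markowitz portfolio selection for three securities as a
# constrained optimization: minimize variance subject to a target return,
# with and without short sales; all inputs are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

mu = np.array([0.08, 0.12, 0.15])                     # hypothetical expected returns
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])                  # hypothetical covariance matrix
target = 0.11

def variance(w):
    return w @ cov @ w

cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},
        {"type": "eq", "fun": lambda w: w @ mu - target}]

w0 = np.full(3, 1 / 3)
no_short = minimize(variance, w0, constraints=cons, bounds=[(0, 1)] * 3)
short_ok = minimize(variance, w0, constraints=cons)    # short sales allowed

print("weights (no short sales):", np.round(no_short.x, 3))
print("weights (short sales allowed):", np.round(short_ok.x, 3))
```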
https://doi.org/10.1142/9789811202391_0081
This chapter offers some simplifying assumptions that reduce the overall number of calculations in Markowitz models through the use of the Sharpe single-index and multiple-index models. Besides the single-index model, we also discuss how the multiple-index model can be applied to portfolio selection. We theoretically demonstrate how single-index and multiple-index portfolio selection models can be used to replace the Markowitz portfolio selection model. An Excel example of how to apply the single-index model approach is also demonstrated.
https://doi.org/10.1142/9789811202391_0082
The main points of this chapter show how Markowitz's portfolio selection method can be simplified by either the Sharpe performance measure or the Treynor performance measure. These two approaches do not require a constrained optimization procedure; however, they do require the existence of a risk-free rate. Overall, this chapter mathematically demonstrates how the Sharpe measure and the Treynor measure can be used to determine optimal portfolio weights.
https://doi.org/10.1142/9789811202391_0083
This chapter aims to establish a basic knowledge of options and the markets in which they are traded. It begins with the most common types of options, namely calls and puts, explaining their general characteristics and discussing the institutions where they are traded. In addition, the concepts relevant to the newer types of options on indexes and futures are introduced. The next focus is the basic pricing relationship between puts and calls, known as put–call parity. The final study concerns how options can be used as investment tools. The theory of alternative option strategies is presented, and Excel is used to demonstrate how different option strategies can be executed.
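As a quick numerical illustration of the put–call parity relationship mentioned above, C − P = S − Ke^(−rT) for European options, the sketch below prices a call and a put with the Black–Scholes formula and checks that both sides of the parity agree; the contract inputs are illustrative assumptions.

```python
# A quick numeric check of put-call parity, C - P = S - K * exp(-r * T),
# for European options; the Black-Scholes prices below use assumed inputs.
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 0.5      # illustrative inputs

d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
put = K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

# Both sides of the parity relation should agree to rounding error
print(f"C - P         = {call - put:.6f}")
print(f"S - K e^(-rT) = {S - K * np.exp(-r * T):.6f}")
```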
https://doi.org/10.1142/9789811202391_0084
In this chapter, we (i) use the decision-tree approach to derive the binomial option pricing model (OPM) in terms of the methods used by Rendleman and Bartter (RB, 1979) and Cox et al. (CRR, 1979) and (ii) use Microsoft Excel to show how the decision-tree model converges to the Black–Scholes model as the number of periods approaches infinity. In addition, we develop a binomial tree model for American options and a trinomial tree model. The efficiency of the binomial and trinomial tree methods is also compared. In sum, this chapter shows how the binomial OPM can be converted step by step to the Black–Scholes OPM.
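To complement the Excel-based demonstration described above, here is a compact Python sketch of the CRR binomial tree for a European call, showing its value approaching the Black–Scholes price as the number of steps grows; the option inputs are illustrative assumptions.

```python
# A compact sketch of the Cox-Ross-Rubinstein binomial tree for a European
# call and its convergence toward the Black-Scholes value as the number of
# steps grows; all option inputs are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def crr_call(S, K, r, sigma, T, n):
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))        # up factor
    d = 1 / u                              # down factor
    p = (np.exp(r * dt) - d) / (u - d)     # risk-neutral up probability
    disc = np.exp(-r * dt)
    # Terminal payoffs, then step backwards through the tree
    j = np.arange(n + 1)
    values = np.maximum(S * u**j * d**(n - j) - K, 0.0)
    for _ in range(n):
        values = disc * (p * values[1:] + (1 - p) * values[:-1])
    return float(values[0])

def bs_call(S, K, r, sigma, T):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, K, r, sigma, T = 100.0, 95.0, 0.05, 0.25, 1.0
for n in (10, 100, 1000):
    print(n, round(crr_call(S, K, r, sigma, T, n), 4))
print("Black-Scholes:", round(bs_call(S, K, r, sigma, T), 4))
```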
https://doi.org/10.1142/9789811202391_0085
In this chapter, we first review the basic theory of the normal and log-normal distributions and their relationship; then the bivariate and multivariate normal density functions are analyzed in detail. Next, we discuss American options with random dividend payments, and we use the bivariate normal density function to analyze such options. Computer programs are used to show how these American options can be evaluated. Finally, option pricing bounds are analyzed in some detail.
https://doi.org/10.1142/9789811202391_0086
Based on comparative analysis, we first discuss different kinds of Greek letters in terms of the Black–Scholes option pricing model; then we show how these Greek letters can be applied to perform hedging and risk management. The relationship between delta, theta, and gamma is also explored in detail.
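Here is a small hedged sketch of the Greeks for a European call under Black–Scholes, including the delta–theta–gamma relationship implied by the Black–Scholes partial differential equation, theta + rSΔ + ½σ²S²Γ = rC; the contract inputs are illustrative assumptions.

```python
# A brief sketch of the Black-Scholes Greeks for a European call and the
# delta-theta-gamma relationship implied by the Black-Scholes PDE,
# theta + r*S*delta + 0.5*sigma^2*S^2*gamma = r*C; inputs are assumptions.
import numpy as np
from scipy.stats import norm

S, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 0.5

d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)

call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
delta = norm.cdf(d1)
gamma = norm.pdf(d1) / (S * sigma * np.sqrt(T))
theta = (-S * norm.pdf(d1) * sigma / (2 * np.sqrt(T))
         - r * K * np.exp(-r * T) * norm.cdf(d2))

lhs = theta + r * S * delta + 0.5 * sigma**2 * S**2 * gamma
print(f"delta {delta:.4f}, gamma {gamma:.4f}, theta {theta:.4f}")
print(f"PDE check: {lhs:.6f} vs r*C = {r * call:.6f}")
```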
https://doi.org/10.1142/9789811202391_0087
This chapter discusses the methods and applications of fundamental analysis and technical analysis. In addition, it investigates the ranking performance of the Value Line and the timing and selectivity of mutual funds. A detailed investigation of technical versus fundamental analysis is first presented. This is followed by an analysis of regression time-series and composite methods for forecasting security rates of return. Value Line ranking methods and their performance then are discussed, leading finally into a study of the classification of mutual funds and the mutual-fund managers’ timing and selectivity ability. In addition, the hedging ability is also briefly discussed. Sharpe measure, Treynor measure, and Jensen measure are defined and analyzed. All of these topics can help improve performance in security analysis and portfolio management.
https://doi.org/10.1142/9789811202391_0088
This chapter first focuses on the bond strategies of riding the yield curve and structuring the maturity of the bond portfolio in order to generate additional return. This is followed by a discussion of swaps, essentially interest-rate swaps. Next is an analysis of duration, the measure of portfolio sensitivity to changes in interest rates, with and without convexity, after which immunization is the focus. Convexity is discussed as the nonlinear component of the relationship between bond price and interest rates. Finally, a case study of bond-portfolio management is presented in the context of portfolio theory. Overall, this chapter presents how interest rate changes affect bond prices and how maturity and duration can be used to manage portfolios.
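To illustrate the duration and convexity calculations referred to above, the sketch below computes Macaulay duration, modified duration, and convexity for a simple annual-coupon bond and compares the second-order approximation of the price change with exact repricing for a yield move; the coupon, maturity, and yield figures are illustrative assumptions.

```python
# A short sketch of duration and convexity for a coupon bond, and of the
# second-order approximation of the price change for a yield move; the
# coupon, maturity, and yield figures are illustrative assumptions.
import numpy as np

face, coupon_rate, years, y = 100.0, 0.05, 10, 0.04   # annual coupons, annual yield
t = np.arange(1, years + 1)
cf = np.full(years, face * coupon_rate)
cf[-1] += face                                        # principal repaid at maturity

pv = cf / (1 + y) ** t
price = pv.sum()
macaulay = (t * pv).sum() / price
modified = macaulay / (1 + y)
convexity = (t * (t + 1) * pv).sum() / (price * (1 + y) ** 2)

dy = 0.01                                             # 100 basis-point rise in yield
approx = price * (-modified * dy + 0.5 * convexity * dy**2)
exact = (cf / (1 + y + dy) ** t).sum() - price
print(f"price {price:.2f}, modified duration {modified:.2f}, convexity {convexity:.2f}")
print(f"price change for +100bp: approx {approx:.3f}, exact {exact:.3f}")
```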
https://doi.org/10.1142/9789811202391_0089
This chapter discusses how futures, options, and futures options can be used in portfolio insurance (dynamic hedging). Four alternative portfolio insurance strategies are discussed in this chapter. These strategies are: (i) stop-loss orders, (ii) portfolio insurance with listed put options, (iii) portfolio insurance with synthetic options, and (iv) portfolio insurance with dynamic hedging. In addition, the techniques of combining stocks and futures to derive synthetic options are explored in detail. Finally, important literature related to portfolio insurance is also reviewed.
https://doi.org/10.1142/9789811202391_0090
In this chapter, we will discuss four alternative security valuation models: (i) the Warren and Shelton model, (ii) the Francis and Rowell model, (iii) the Feltham–Ohlson model, and (iv) the combined forecasting model. We will show how accounting, stock price, and economic information can be used to determine security values in terms of finance theory. Algebraic simultaneous equations, econometric models, and Excel programs will be used for the empirical studies.
https://doi.org/10.1142/9789811202391_0091
The purpose of this chapter is to critically evaluate the methods used to examine hedge fund performance, to review and synthesize studies that attempt to explain the inconsistencies associated with the performance of hedge funds, and to compare the returns of hedge funds against more liquid investments. Research on hedge fund performance has tended to focus on whether hedge fund managers manipulate their performance and what investors should think about this performance manipulation; however, recent studies have questioned whether this perceived performance manipulation is manipulation per se or something else. In general, researchers have used a number of different techniques to model hedge fund performance and the relative opacity and latency that are evident in the reporting of hedge fund returns. Nevertheless, the very structure of a hedge fund makes it difficult to mark its returns to market on a frequent basis, and even if managers wanted their performance marked to market, which would unveil their positioning through time, the relative illiquidity and stale pricing associated with some of the investments held by hedge funds make pricing the hedge fund a difficult and somewhat pointless exercise. To this end, studies that attempt to analyze and evaluate aggregate hedge fund returns have focused on identifying the true determinants of hedge fund performance, on accounting for and explaining the relative staleness of pricing in hedge fund returns, and on relating the performance of hedge funds to more liquid and transparent investments. This chapter offers key suggestions for financial market participants such as hedge fund managers, portfolio managers, risk managers, regulatory bodies, financial analysts, and investors about their evaluation and interpretation of hedge fund performance. In addition, this critical review chapter can benefit investors, portfolio managers, and researchers in establishing a yardstick for the assessment of hedge fund performance and the performance of assets that have stale pricing and are relatively opaque.
https://doi.org/10.1142/9789811202391_0092
This study examines the relationships between gold spot and futures prices with different maturities using a time-varying and quantile-dependent approach, that is, the quantile co-integration model. This model allows the co-integrating coefficient to vary over the conditional distribution of gold spot prices. We find that at lower quantiles of gold returns, the co-integration between gold spot prices and one- to six-month gold futures prices is weaker than at higher quantiles; when gold returns are at high quantiles, these relationships become even stronger. In terms of the co-integration between gold and the VIX (CBOE Volatility Index), we find that the co-integration of gold spot prices, futures prices, and the VIX at high quantiles is stronger than that observed at low quantiles. Our work adds another cross-sectional dimension to the extant literature, which uses only the time-series dimension to examine co-integration. Furthermore, the results suggest that when investors intend to hedge risk by exercising futures contracts, using short-term futures would be a better choice than long-term contracts.
https://doi.org/10.1142/9789811202391_0093
This study proposes a Bayesian test for a test portfolio p’s mean–variance efficiency that takes into account the sampling errors associated with the ex post Sharpe ratio ŜR of the test portfolio p. The test is based on the Bayes factor that compares the joint likelihoods under the null hypothesis H0 and the alternative H1, respectively. Using historical monthly return data of 10 industrial portfolios and a test portfolio, namely, the CRSP value-weighted index, from January 1941 to December 1973 and January 1980 to December 2012, the power function of the proposed Bayesian test is compared to the conditional multivariate F-test by Gibbons, Ross and Shanken (1989) and the Bayesian test by Shanken (1987). In an independent simulation study, the performance of the proposed Bayesian test is also demonstrated.
https://doi.org/10.1142/9789811202391_0094
This chapter examines the profits of revenue, earnings, and price momentum strategies in an attempt to understand investor reactions when facing multiple pieces of information about firm performance in various scenarios. We first offer evidence that there is no dominating momentum strategy among the revenue, earnings, and price momentums, suggesting that revenue surprises, earnings surprises, and prior returns each carry some exclusive unpriced information content. We next show that the profits of momentum driven by firm fundamental performance information (revenue or earnings) depend upon the accompanying firm market performance information (price), and vice versa. The robust monotonicity in multivariate momentum returns is consistent with the argument that the market underestimates not only the individual information but also the joint implications of multiple pieces of information about firm performance, particularly when they point in the same direction. A three-way combined momentum strategy may offer a monthly return as high as 1.44%. The information conveyed by revenue surprises and earnings surprises combined accounts for about 19% of price momentum effects, a finding that adds to the large literature on tracing the sources of price momentum.
https://doi.org/10.1142/9789811202391_0095
This study examines how fundamental accounting information can be used to supplement technical information to separate momentum winners from losers. We first introduce a ratio of liquidity buy volume to liquidity sell volume (BOS ratio) to proxy the level of information asymmetry for stocks and show that the BOS momentum strategy can enhance the profits of momentum strategy. We further propose a unified framework, produced by incorporating two fundamental indicators — the FSCORE (Piotroski, 2000) and the GSCORE (Mohanram, 2005) — into momentum strategy. The empirical results show that the combined investment strategy includes stocks with a larger information content that the market cannot reflect in time, and therefore, the combined investment strategy outperforms momentum strategy by generating significantly higher returns.
https://doi.org/10.1142/9789811202391_0096
Following the dividend flexibility hypothesis used by DeAngelo and DeAngelo (2006), Blau and Fuller (2008), and others, we theoretically extend the proposition of DeAngelo and DeAngelo's (2006) optimal payout policy in terms of the dividend flexibility hypothesis. In addition, we introduce growth rate, systematic risk, and total risk variables into the theoretical model.
To test the theoretical results derived in this paper, we use data collected in the US from 1969 to 2009 to investigate the impact of growth rate, systematic risk, and total risk on the optimal payout ratio in terms of the fixed-effect model. We find that based on flexibility considerations, a company will reduce its payout when the growth rate increases. In addition, we find that a nonlinear relationship exists between the payout ratio and the risk. In other words, the relationship between the payout ratio and risk is negative (or positive) when the growth rate is higher (or lower) than the rate of return on total assets. Our theoretical model and empirical results can therefore be used to identify whether flexibility or the free cash flow hypothesis should be used to determine the dividend policy.
https://doi.org/10.1142/9789811202391_0097
A large number of studies have examined issues of dividend policy, but they rarely consider the investment decision and dividend policy jointly from a non-steady state to a steady state. We extend Higgins' (1977, 1981, 2008) sustainable growth rate model and develop a dynamic model that jointly optimizes the growth rate and payout ratio. We optimize the firm value to obtain the optimal growth rate in terms of a logistic equation and find that the steady-state growth rate can be used as the benchmark for the mean-reverting process of the optimal growth rate. We also investigate the specification error of the mean and variance of dividend per share when introducing the stochastic growth rate. Empirical results support the mean-reverting process of the growth rate and the importance of the covariance between profitability and the growth rate in determining dividend payout policy. In addition, the intertemporal behavior of this covariance may shed some light on the disappearance of dividends over recent decades.
https://doi.org/10.1142/9789811202391_0098
It is well known that in simple linear regression, measurement errors in the explanatory variable lead to a downward bias in the OLS slope estimator. In two-pass regression tests of asset-pricing models, one is confronted with such measurement errors as the second-pass cross-sectional regression uses as explanatory variables imprecise estimates of asset betas extracted from the first-pass time-series regression. The slope estimator of the second-pass regression is used to get an estimate of the pricing-model’s factor risk-premium. Since the significance of this estimate is decisive for the validity of the model, knowledge of the properties of the slope estimator, in particular, its bias, is crucial. First, we show that cross-sectional correlations in the idiosyncratic errors of the first-pass time-series regression lead to correlated measurement errors in the betas used in the second-pass cross-sectional regression. We then study the effect of correlated measurement errors on the bias of the OLS slope estimator. Using Taylor approximation, we develop an analytic expression for the bias in the slope estimator of the second-pass regression with a finite number of test assets N and a finite time-series sample size T. The bias is found to depend in a non-trivial way not only on the size and correlations of the measurement errors but also on the distribution of the true values of the explanatory variable (the betas). In fact, while the bias increases with the size of the errors, it decreases the more the errors are correlated. We illustrate and validate our result using a simulation approach based on empirical return data commonly used in asset-pricing tests. In particular, we show that correlations seen in empirical returns (e.g., due to industry effects in sorted portfolios) substantially suppress the bias.
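As a purely illustrative companion to the analytical result described above, and not the chapter's derivation or data, the simulation sketch below shows how measurement error in estimated betas attenuates the slope of a cross-sectional regression, and how making those errors strongly correlated across assets, via a common error component, reduces the attenuation; the number of assets, error variances, and risk premium are assumptions.

```python
# A small simulation sketch of attenuation bias in a cross-sectional
# regression when the explanatory variable ("beta") is measured with error,
# with independent versus strongly correlated measurement errors.
# All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
N, gamma1, reps = 25, 0.06, 2000        # test assets, true risk premium, replications

true_beta = rng.uniform(0.5, 1.5, size=N)
expected_ret = 0.02 + gamma1 * true_beta
slopes_indep, slopes_corr = [], []
for _ in range(reps):
    # Independent measurement errors in the estimated betas
    beta_indep = true_beta + rng.normal(0, 0.3, N)
    # Same total error variance, but with a large common component across assets
    common = rng.normal(0, 0.3)
    beta_corr = true_beta + 0.9 * common + rng.normal(0, 0.3 * np.sqrt(1 - 0.81), N)
    for betas, store in ((beta_indep, slopes_indep), (beta_corr, slopes_corr)):
        X = np.column_stack([np.ones(N), betas])
        store.append(np.linalg.lstsq(X, expected_ret, rcond=None)[0][1])

print(f"true premium {gamma1:.3f}")
print(f"mean slope, independent errors {np.mean(slopes_indep):.3f}")
print(f"mean slope, correlated errors  {np.mean(slopes_corr):.3f}")
```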
https://doi.org/10.1142/9789811202391_0099
Breeden (1979), Grinols (1984), and Cox et al. (1985) describe the importance of supply side for the capital asset pricing. Black (1976) derives a dynamic, multi-period CAPM, integrating endogenous demand and supply. However, this theoretically elegant model has never been empirically tested for its implications in dynamic asset pricing. We first review and theoretically extend Black’s CAPM to allow for a price adjustment process. We then derive the disequilibrium model for asset pricing in terms of the disequilibrium model developed by Fair and Jaffe (1972), Amemiya (1974), Quandt (1988), and others. We discuss two methods of estimating an asset pricing model with disequilibrium price adjustment effect. Finally, using price per share, dividend per share, and outstanding shares data, we test the existence of price disequilibrium adjustment process with international index data and US equity data. We find that there exists disequilibrium price adjustment process in our empirical data. Our results support Lo and Wang’s (2000) findings that trading volume is one of the important factors in determining capital asset pricing.
https://doi.org/10.1142/9789811202391_fmatter04
The following sections are included:
https://doi.org/10.1142/9789811202391_0100
Breeden [An intertemporal asset pricing model with stochastic consumption and investment opportunities. Journal of Financial Economics 7, (1979) 265–296], Grinols [Production and risk leveling in the intertemporal capital asset pricing model. Journal of Finance 39, 5, (1984) 1571–1595] and Cox et al. [An intertemporal general equilibrium model of asset prices. Econometrica 53, (1985) 363–384] have described the importance of the supply side for capital asset pricing. Black [Rational response to shocks in a dynamic model of capital asset pricing. American Economic Review 66, (1976) 767–779] derives a dynamic, multiperiod CAPM, integrating endogenous demand and supply. However, Black's theoretically elegant model has never been empirically tested for its implications in dynamic asset pricing. We first theoretically extend Black's CAPM. Then we use price, dividend per share, and earnings per share to test the existence of the supply effect with US equity data. We find that the supply effect is important in US domestic stock markets. This finding holds when we break the companies listed in the S&P 500 into 10 portfolios by different levels of payout ratio, and it holds consistently if we use individual stock data. A simultaneous equation system is constructed through a standard structural form of a multiperiod equation to represent the dynamic relationship between supply and demand for capital assets. The equation system is exactly identified under our specification. Then, two hypotheses related to the supply effect are tested with respect to the parameters in the reduced-form system. The equation system is estimated by the seemingly unrelated regression (SUR) method, since SUR allows one to estimate the system simultaneously while accounting for the correlated errors.
https://doi.org/10.1142/9789811202391_0101
Machine learning has had successful applications in credit risk management, portfolio management, automatic trading, and fraud detection, to name a few, in the domain of financial technology. Reformulating and solving these topics adequately and accurately is problem specific and challenging, along with the availability of complex and voluminous data. In credit risk management, one major problem is to predict the default of credit card holders using a real data set. We review five machine learning methods: k-nearest neighbors, decision trees, boosting, support vector machines, and neural networks, and apply them to the above problem. In addition, we give explicit Python scripts to conduct the analysis using a data set of 29,999 instances with 23 features collected from a major bank in Taiwan, downloadable from the UC Irvine Machine Learning Repository. We show that the decision tree performs best among these methods in terms of validation curves.
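In the same spirit as the Python scripts referenced above, here is a condensed, hedged sketch of fitting and validating a decision tree for default prediction. A synthetic dataset generated with scikit-learn stands in for the UCI Taiwan credit-card data, so the features, class balance, and reported accuracy are illustrative assumptions only.

```python
# A condensed sketch of decision-tree default prediction with a validation
# curve over tree depth; a synthetic dataset stands in for the UCI Taiwan
# credit-card data, so all figures are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, validation_curve
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 23 features, imbalanced default rate of roughly 22%
X, y = make_classification(n_samples=5000, n_features=23, n_informative=8,
                           weights=[0.78], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Validation curve over tree depth, as a simple stand-in for model selection
depths = [2, 4, 6, 8, 10]
train_sc, valid_sc = validation_curve(DecisionTreeClassifier(random_state=0),
                                      X_tr, y_tr, param_name="max_depth",
                                      param_range=depths, cv=5)
best = depths[int(valid_sc.mean(axis=1).argmax())]

clf = DecisionTreeClassifier(max_depth=best, random_state=0).fit(X_tr, y_tr)
print(f"best depth {best}, test accuracy {clf.score(X_te, y_te):.3f}")
```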
https://doi.org/10.1142/9789811202391_0102
The main purposes of this paper are: (i) to review three alternative methods for deriving option pricing models (OPMs), (ii) to discuss the relationship between the binomial OPM and the Black–Scholes OPM, (iii) to compare the Cox et al. method and the Rendleman and Bartter method for deriving the Black–Scholes OPM, (iv) to discuss the lognormal distribution method for deriving the Black–Scholes OPM, and (v) to show how the Black–Scholes model can be derived by stochastic calculus. This paper shows that the main methodologies used to derive the Black–Scholes model are the binomial distribution, the lognormal distribution, and differential and integral calculus. If we assume risk neutrality, then we do not need stochastic calculus to derive the Black–Scholes model; however, the stochastic calculus approach for deriving the Black–Scholes model is still presented in Section 102.6. In sum, this paper can help statisticians and mathematicians understand how alternative methods can be used to derive the Black–Scholes option pricing model.
https://doi.org/10.1142/9789811202391_0103
Option prices tend to be correlated with past stock market returns due to market imperfections. This chapter discusses this issue in the Chinese derivatives market. An implied volatility spread based on pairs of options is constructed to measure the price pressure in the option market. By regressing the implied volatility spread on past stock returns, we find that past stock returns exert a strong influence on the pricing of index options. Specifically, the SSE 50 ETF calls are significantly overvalued relative to SSE 50 ETF puts after stock price increases, and vice versa. Moreover, we empirically validate that momentum effects in the underlying stock market are responsible for the price pressure. These findings are both economically and statistically significant and have important implications.
https://doi.org/10.1142/9789811202391_0104
This chapter presents advancements of several widely applied portfolio models to ensure flexibility in their applications: the mean–variance (MV), mean–absolute deviation (MAD), linearized value-at-risk (LVaR), conditional value-at-risk (CVaR), and Omega models. We include short sales and transaction costs in modeling portfolios and further investigate their effectiveness. Using daily data on international ETFs over 15 years, we generate the results of the rebalancing portfolios. The empirical findings show that the MV, MAD, and Omega models yield higher realized returns with lower portfolio diversity than the LVaR and CVaR models. The outperformance of these risk-return-based models over the downside-risk-focused models comes from efficient asset allocation and not only from savings in transaction costs.
https://doi.org/10.1142/9789811202391_0105
The due process of the International Financial Reporting Standards (IFRS) enables interested parties to comment on the development of new IFRS. Unsurprisingly, different advocacy groups have very different perspectives and interests. For example, businesses are more likely to be interested in “user-friendly” rules, whereas standard-setters and academics tend to prefer theoretically coherent standards.
This paper analyzes the response behavior of different advocacy groups using the example of lease accounting reform, which is a promising setting for such an analysis. First, to analyze the response behavior, five different advocacy groups are defined. The 657 comment letters submitted on the Re-Exposure Draft “Leases” are then assigned to these five advocacy groups. The Re-Exposure Draft formulates questions about different aspects of the new standard and asks for comments regarding these aspects. Next, the response behavior of the different advocacy groups with respect to the most relevant questions is examined quantitatively and qualitatively. The quantitative analysis uses the Kruskal–Wallis test (H-test) and the Mann–Whitney test (U-test) to evaluate the response behavior. The main result of the study is that the response behavior to various questions differs significantly between advocacy groups. In particular, it is shown that the response behavior differs drastically between more “user-oriented” and more “theoretically oriented” advocacy groups.
https://doi.org/10.1142/9789811202391_0106
The main purpose of this chapter is to demonstrate how to estimate implied variance for both the Black–Scholes option pricing model (OPM) and the constant elasticity of variance (CEV) OPM. For the Black–Scholes OPM, we classify the estimation approaches into two routines: numerical search methods and closed-form derivation approaches. Both a MATLAB approach and an approximation method are used to empirically estimate implied variance for American and Chinese options. For the CEV model, we present the theory and demonstrate how to use the related Excel program in detail.
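As a small illustration of the numerical-search routine mentioned above, and not the chapter's MATLAB or Excel implementation, the sketch below inverts the Black–Scholes formula for implied volatility with a bracketing root finder; the market price and contract terms are illustrative assumptions.

```python
# A minimal sketch of the numerical-search route to implied volatility under
# Black-Scholes: invert the pricing formula with a root finder. The market
# price and contract terms below are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, r, T, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S, K, r, T = 100.0, 105.0, 0.02, 0.25
market_price = 2.30                                   # hypothetical observed call price

# Bracketing search over sigma; the bracket [1e-6, 5.0] is an assumption
implied = brentq(lambda s: bs_call(S, K, r, T, s) - market_price, 1e-6, 5.0)
print(f"implied volatility: {implied:.4f}")
```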
https://doi.org/10.1142/9789811202391_0107
This research paper aims to examine the predictability of Spanish stock market returns. Earlier studies suggest that stock market returns in developed countries can be predicted with a noise term, but this study specifically covers two time horizons, a pre-crisis period and the current crisis period, to evaluate the predictability of stock market returns. Since mean returns cannot always prove to be an efficient predictor while the variance of such returns can, various autoregressive models have been used to test for persistent volatility in the Spanish stock market. The empirical results show that higher-order autoregressive models such as ARCH(5) and GARCH(2, 2) can be used to predict future risk in the Spanish stock market in both the pre-crisis and the current crisis period. The paper also reveals a positive correlation between Spanish stock market returns and the conditional standard deviations produced by ARCH(5) and GARCH(2, 2), implying that the models have some success in predicting future risk in the Spanish stock market. The predictability of stock market returns during the crisis period is not found to be affected, contrary to expectations, though the degree of predictability may be.
https://doi.org/10.1142/9789811202391_0108
Building on the work of Barras, Scaillet and Wermers (BSW, 2010), we propose a modified approach to inferring performance for a cross-section of investment funds. Our model assumes that funds belong to groups of different abnormal performance or alpha. Using the structure of the probability model, we simultaneously estimate the alpha locations and the fractions of funds for each group, taking multiple testing into account. Our approach allows for tests with imperfect power that may falsely classify good funds as bad, and vice versa. Examining both mutual funds and hedge funds, we find smaller fractions of zero-alpha funds and more funds with abnormal performance, compared with the BSW approach. We also use the model as prior information about the cross-section of funds to evaluate and predict fund performance.
https://doi.org/10.1142/9789811202391_0109
In this chapter, we review the renowned constant elasticity of variance (CEV) option pricing model and give detailed derivations. There are two purposes to this chapter. First, we show the details of the formulae needed in deriving the option price and bridge the gaps in deriving the necessary formulae for the model. Second, we use a result by Feller to obtain the transition probability density function of the stock price at time T given its price at time t. In addition, some computational considerations are given to facilitate computing the CEV option pricing formula.
https://doi.org/10.1142/9789811202391_0110
We study bond prices in Black–Cox model with jumps in asset value. We assume that the jump size distribution is arbitrary and, if default occurs, following Longstaff and Schwartz [A Simple Approach to Valuing Risky Fixed and Floating Rate Debt. Journal of Finance 50 (1995), 789–819] and Zhou [The Term Structure of Credit Spreads with Jump Risk. Journal of Banking & Finance 26 (2001), 2015–2040], the payoff at maturity date depends on a general write-down function. Under this general setting, we propose an integral equation approach for the bond prices. As an application of this approach, we study the analytic properties of the bond prices. Also we derive an infinite series expression for the bond prices.
https://doi.org/10.1142/9789811202391_0111
In many instances of regression analysis, researchers may encounter the problem of a non-random sample that leads to biased estimators when using the OLS method. This study thus examines some related issues of sample selection bias due to non-random sampling. We first explain the source of the bias caused by non-random sampling and then demonstrate that the direction of such bias in most cases cannot be ascertained from prior information. By treating the sample selection as informative sampling, we can formulate the sample selection bias issue as an omitted variable problem in the regression model. Heckman (1979) proposed a two-stage estimation procedure to correct for selection bias: the first stage applies the Probit model to produce the estimated value of the inverse Mills ratio, which is then included in the second-stage regression model as an explanatory variable to yield unbiased estimators. As the sample selection rule may not always derive from a yes–no choice, our study further utilizes Lee's (1983) extension by applying the Multinomial Logit model in the first-stage estimation procedure to allow for application with a multi-choice sample selection rule. Since the pioneering works related to sample selection issues are mostly in the field of labor economics, we give two examples of empirical studies in labor economics to demonstrate, respectively, applications of the Probit correction approach and the Multinomial Logit correction approach. Finally, we point out that the problem of a non-random sample is not limited to applications in economics; in the past 20 years, quite a few researchers have taken the issue of sample selection into account in studies of finance and management issues.
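To make the two-stage procedure concrete, here is a minimal simulation sketch of the Probit-based Heckman correction using statsmodels: a probit selection equation produces the inverse Mills ratio, which is then added to the outcome regression on the selected subsample. The simulated selection rule, coefficients, and error correlation are illustrative assumptions.

```python
# A compact sketch of Heckman's two-stage correction on simulated data:
# stage 1 is a probit selection equation, stage 2 adds the inverse Mills
# ratio to the outcome regression; all coefficients are assumptions.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 2000
x = rng.normal(size=n)                         # outcome covariate
z = rng.normal(size=n)                         # selection covariate (exclusion restriction)
u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T

selected = (0.5 + 0.5 * x + 1.0 * z + u) > 0   # selection rule (e.g., participation)
y = 1.0 + 2.0 * x + e                          # latent outcome (e.g., log wage)

# Stage 1: probit of the selection indicator on x and z, then the Mills ratio
W = sm.add_constant(np.column_stack([x, z]))
probit = sm.Probit(selected.astype(float), W).fit(disp=0)
xb = W @ probit.params
mills = norm.pdf(xb) / norm.cdf(xb)

# Stage 2: OLS on the selected subsample, including the inverse Mills ratio
X2 = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
ols = sm.OLS(y[selected], X2).fit()
print(np.round(ols.params, 3))                 # coefficient on x should be near 2.0
```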
https://doi.org/10.1142/9789811202391_0112
This chapter discusses and compares the performance of traditional time-series models and a neural network (NN) model to see which one does a better job of predicting changes in stock prices, and to identify critical predictors in forecasting stock prices in order to increase forecasting accuracy for professionals in the market. Time-series analysis is somewhat parallel to technical analysis, but it differs from the latter by using different statistical methods and models to analyze historical stock prices and predict future prices. Neural network approaches can make important contributions since they can incorporate a very large number of variables and observations into their models. In this study, the authors apply traditional time-series decomposition (TSD), Holt/Winters (H/W) models, the Box–Jenkins (B/J) methodology, and a neural network (NN) model to 50 randomly selected stocks from September 1, 1998 to December 31, 2010, with a total of 3105 observations for each company's closing stock price. This sample period covers the high-tech boom and bust, the 9/11 event, the housing boom and bust, and the serious recession and slow recovery that followed. During this exceptionally uncertain period of global economic and financial crises, stock prices are expected to be extremely difficult to predict.
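The flavor of the comparison can be sketched in Python on simulated prices rather than the authors' 50-stock sample: a Holt/Winters forecast against a simple lag-based neural network, with RMSE as the accuracy criterion. The series, network architecture, lag length, and forecast horizon are assumptions made only for this sketch.

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing
from sklearn.neural_network import MLPRegressor

# Simulated daily closing prices stand in for one of the sampled stocks.
rng = np.random.default_rng(1)
prices = 50 * np.exp(np.cumsum(rng.normal(0.0003, 0.02, size=1200)))
train, test = prices[:-60], prices[-60:]

# Holt/Winters (additive trend) forecast.
hw_fc = ExponentialSmoothing(train, trend="add").fit().forecast(60)

# Simple neural-network forecast: predict the next price from the last 10 prices.
lags = 10
Xtr = np.array([train[i - lags:i] for i in range(lags, len(train))])
ytr = train[lags:]
nn = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(Xtr, ytr)

history = list(train[-lags:])
nn_fc = []
for _ in range(60):                                      # recursive one-step-ahead forecasts
    nxt = nn.predict(np.array(history[-lags:]).reshape(1, -1))[0]
    nn_fc.append(nxt)
    history.append(nxt)

rmse = lambda f: np.sqrt(np.mean((np.asarray(f) - test) ** 2))
print("Holt/Winters RMSE:", rmse(hw_fc), " NN RMSE:", rmse(nn_fc))
```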
https://doi.org/10.1142/9789811202391_0113
Recently, Zou et al. (2017) proposed a novel covariance regression model to study the relationship between the covariance matrix of responses and their associated similarity matrices induced by auxiliary information. To estimate the covariance regression model, they introduced five estimators: the maximum likelihood, ordinary least squares, constrained ordinary least squares, feasible generalized least squares and constrained feasible generalized least squares estimators. Among these five, they recommended the constrained feasible generalized least squares estimator due to its estimation efficiency and computational convenience. Under the normality assumption, they further demonstrated the theoretical properties of these estimators. However, the data in the area of finance and accounting may exhibit heavy tails. Hence, to broaden the usefulness of the covariance regression model, we relax the normality assumption and employ Lee’s (2004) approach to obtain inferences for covariance regression parameters based on the five estimators proposed by Zou et al. (2017). Two empirical examples are presented to illustrate the practical applications of the covariance regression model in analyzing stock return comovement and herding behavior of mutual funds.
https://doi.org/10.1142/9789811202391_0114
Data for big and small market-value firms are used to evaluate the effects of temporal aggregation on beta estimates, t-values, and R2 estimates. In addition to our analysis of the standard market model within the additive rates of return framework, the standard model under the assumption of multiplicative rates of return is also discussed. Furthermore, a dynamic model is estimated in this study to evaluate differences in the short-term and long-term dynamic relationships between the market and each type of firm. It is found that temporal aggregation has important effects on both the specification of a market model and the stability of beta and R2 estimates.
https://doi.org/10.1142/9789811202391_0115
In this chapter, we discuss large sample theory, which can be applied under conditions that are quite likely to be met in large samples even when the Gauss–Markov conditions are broken. There are two reasons for using large sample theory. First, there may be problems that corrupt our estimators in small samples but tend to diminish as the sample gets bigger. Thus, if we cannot get a perfect small sample estimator, we will usually want to choose the one that will be best in large samples. Second, in some circumstances, the theory used to derive the properties of estimators in small samples simply does not work, and working out those properties can be impossible. This makes it very hard to choose between alternative estimators. In these circumstances we judge different estimators on their “large sample properties” because their “small (or finite) sample properties” are unknown.
https://doi.org/10.1142/9789811202391_0116
This chapter analyzes the errors-in-variables problems in a simultaneous equation estimation in dividend and investment decisions. We first investigate the effects of measurement errors in exogenous variables on the estimation of a just-identified or an over-identified simultaneous equations system. The impacts of measurement errors on the estimation of structural parameters are discussed. Moreover, we use a simultaneous system in terms of dividend and investment policies to illustrate how theoretically the unknown variance of measurement errors can be identified by the over-identified information. Finally, we summarize the findings.
https://doi.org/10.1142/9789811202391_0117
Big data and artificial intelligence (AI) assist businesses with decision-making. They help companies create new products and processes or improve existing ones. As the amount of data grows exponentially and data storage and computing power costs drop, AI is predicted to have great potential for banks. This chapter discusses the implications of big data and AI for the banking industry. First, we provide background on big data and AI. Second, we identify areas in which banks can benefit from big data and AI, and evaluate their applications for the banking industry. Third, we discuss the implications of big data and AI for regulatory compliance and supervision. Last, we conclude with the limitations and challenges facing the use of big-data-based AI.
https://doi.org/10.1142/9789811202391_0118
Prior studies on financial markets integration use parametric estimators whose underlying assumptions of linearity and normality are, at best, questionable, particularly when using high frequency data. We re-examine the evidence regarding financial integration trends using data for 14 emerging equity markets from Southeast Asia, Latin America, and the Middle East, along with the US and Japan. We employ non-parametric estimators of Pukthuanthong and Roll's (2009) adjusted R2 measure of financial integration. Results from non-parametric estimators are contrasted with results from parametric estimators of the adjusted R2 financial integration measure using bi-daily returns for contiguous yearly sub-periods from 1993 to 2016. We find two key results. First, we confirm prior evidence in Pukthuanthong and Roll (2009) that simple correlation (SC) understates financial integration trends compared to parametric adjusted R2. Second, parametric adjusted R2 understates financial integration trends relative to non-parametric adjusted R2. Hence, emerging equity markets may be more financially integrated, and offer fewer diversification benefits to global investors, than previously thought. The results underscore the need to exercise caution when drawing inferences regarding financial markets integration using parametric estimators.
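A minimal sketch of the Pukthuanthong and Roll (2009) style integration measure follows, using in-sample principal components and simulated returns purely for illustration; the chapter's estimators, data, and non-parametric refinements are not reproduced here.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.decomposition import PCA

# Simulated T x N panel of market index returns for one yearly sub-period
# (in practice: bi-daily returns for the 16 markets).
rng = np.random.default_rng(2)
common = rng.normal(size=(120, 3))
returns = common @ rng.normal(size=(3, 16)) + rng.normal(scale=0.5, size=(120, 16))

# Global factors: first 10 principal components of the return panel.
# Pukthuanthong and Roll use out-of-sample PCs; in-sample PCs are used here
# only to keep the sketch short.
pcs = PCA(n_components=10).fit_transform(returns)
X = sm.add_constant(pcs)

# Integration measure for each market: adjusted R-squared from regressing
# its returns on the global factors.
adj_r2 = [sm.OLS(returns[:, j], X).fit().rsquared_adj for j in range(returns.shape[1])]
print(np.round(adj_r2, 3))
```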
https://doi.org/10.1142/9789811202391_0119
This chapter presents Algorithmic Analyst (ALAN), an application that implements statistics and artificial intelligence methods with natural language generation to publish multimedia financial reports in Chinese and English. ALAN is a portion of a long-term project to develop an Artificial Intelligence Content as a Service (AICaaS) platform. ALAN gathers global capital market data, performs big data analysis driven by algorithms, and makes market forecasts. ALAN uses a multi-factor risk model to identify equity risk factors and ranks stocks based on a set of over 150 financial market variables. For each instrument analyzed, ALAN computes and produces narrative metadata to describe its historical trends, forecast results, and any causal relationship with global macroeconomic variables. ALAN generates English and Chinese text commentaries in html and pdf formats, audio in mp3 format, and video in mp4 format for the US and Taiwanese equity markets on a daily basis.
https://doi.org/10.1142/9789811202391_0120
This chapter outlines some commonly used statistical methods for studying the occurrence and timing of events, i.e., survival analysis, which is also called duration analysis or transition analysis in econometrics. Statistical methods for survival data usually include non-parametric, parametric, and semiparametric methods. While some non-parametric estimators (e.g., the Kaplan–Meier estimator and the life-table estimator) estimate survivor functions, others (e.g., the Nelson–Aalen estimator) estimate the cumulative hazard function. The commonly used non-parametric test for comparing survivor functions is the log-rank test. Parametric models, such as the exponential model, the Weibull model, and the generalized Gamma model, are based on different distributional assumptions about survival time. The most widely used semiparametric regression model is the Cox proportional hazards (PH) model, which is estimated by the method of partial likelihood and does not require a distributional assumption about survival time. Applications to discrete-time data and the competing risks model are also introduced.
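A short Python sketch of the estimators named above, using the lifelines package on simulated duration data (the covariate, data-generating process, and sample size are illustrative only):

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Simulated survival data: time to an event (e.g., default) with right censoring.
rng = np.random.default_rng(3)
n = 500
x = rng.normal(size=n)                               # a firm-level covariate
T = rng.exponential(scale=np.exp(1.5 - 0.5 * x))     # latent event times
C = rng.exponential(scale=6.0, size=n)               # censoring times
df = pd.DataFrame({"T": np.minimum(T, C), "E": (T <= C).astype(int), "x": x})

# Non-parametric: Kaplan-Meier estimate of the survivor function.
kmf = KaplanMeierFitter().fit(df["T"], event_observed=df["E"])
print("median survival time:", kmf.median_survival_time_)

# Log-rank test comparing survivor functions of two groups split on the covariate.
hi = df["x"] > 0
lr = logrank_test(df["T"][hi], df["T"][~hi], df["E"][hi], df["E"][~hi])
print("log-rank p-value:", lr.p_value)

# Semiparametric: Cox proportional hazards model estimated by partial likelihood.
cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
cph.print_summary()
```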
https://doi.org/10.1142/9789811202391_0121
In this study, we aim to test the pricing power of market liquidity in the cross-section of US stock returns. We examine three liquidity measures: Pástor and Stambaugh's (2003) liquidity factor, Bali et al.'s (2014) liquidity shocks, and Drechsler, Savov, and Schnabl's (2017) money market liquidity premium. With a large set of test assets and the time-series regression approach of Fama and French (2015), we find that aggregate liquidity is not priced in the cross-section of stock returns. That is, adding the liquidity factor to common asset-pricing models does not significantly improve model performance. Therefore, our results call for more research on the impact of aggregate liquidity on the stock market.
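The time-series regression logic can be sketched as follows: regress a test asset's excess returns on the five factors with and without the liquidity factor and compare the intercepts (alphas) and adjusted R2. The factor and return series below are simulated placeholders, not the paper's data.

```python
import numpy as np
import statsmodels.api as sm

# Simulated factor and test-asset return series (monthly frequency assumed).
rng = np.random.default_rng(4)
T = 600
factors5 = rng.normal(size=(T, 5))                   # stand-ins for MKT, SMB, HML, RMW, CMA
liq = rng.normal(size=T)                             # stand-in for a liquidity factor
excess_ret = factors5 @ np.array([1.0, 0.3, 0.2, 0.1, 0.1]) + rng.normal(size=T)

# Time-series regressions in the spirit of Fama and French (2015).
base = sm.OLS(excess_ret, sm.add_constant(factors5)).fit()
augm = sm.OLS(excess_ret, sm.add_constant(np.column_stack([factors5, liq]))).fit()
print("alpha (5-factor):", base.params[0], " alpha (+liquidity):", augm.params[0])
print("adj R2:", base.rsquared_adj, "->", augm.rsquared_adj)
```

If the liquidity factor were priced and relevant for the test assets, adding it should shrink the absolute alphas across assets; in the paper's tests it does not.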
https://doi.org/10.1142/9789811202391_0122
Since Sharpe (1964) derived the CAPM, it has been the benchmark of asset pricing models and has been used to calculate the cost of equity capital and other asset pricing determinations for more than four decades. Many researchers have tried to relax the original assumptions and generalize the static CAPM. In addition, Merton (1973) and Black (1976) have generalized the static CAPM in terms of intertemporal CAPM. In this chapter, we survey the important alternative theoretical models of capital asset pricing and provide a complete review of the evolution of both static and intertemporal asset pricing models. We also discuss the interrelationships among these models and suggest several possible directions for future research. In addition, we review the asset pricing tests in terms of individual companies’ data instead of portfolio data. Our results might be used as a guideline for future theoretical and empirical research in capital asset pricing.
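For reference, the static CAPM relation at the center of this survey is the security market line

$$\mathbb{E}[R_i] = R_f + \beta_i\left(\mathbb{E}[R_m]-R_f\right), \qquad \beta_i = \frac{\operatorname{Cov}(R_i, R_m)}{\operatorname{Var}(R_m)},$$

where $R_f$ is the risk-free rate and $R_m$ the market return; the intertemporal generalizations surveyed add further terms reflecting hedging demands against shifts in the investment opportunity set.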
https://doi.org/10.1142/9789811202391_0123
We briefly review multivariate GARCH models in contrast with univariate GARCH models, and clarify the statistical perspective of the DCC-GARCH model introduced by Engle (2002). This model ingeniously reconciles two contrary requirements for constructing a model: it is sufficiently flexible to capture the behavior of actually observed data processes, yet sufficiently parsimonious for statistical analysis in practice. We then illustrate the practical usefulness of the DCC-GARCH through its application to the bond and stock markets in the emerging East Asian countries. The DCC-GARCH can evaluate the comovements of different financial assets by means of dynamic variance decomposition (volatility spillover) in addition to the DCCs. The empirical investigation in this paper shows that bond market integration is still limited in terms of both DCCs and volatility spillover, while the stock markets are highly integrated both regionally and globally.
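For reference, the DCC recursion of Engle (2002), in its standard scalar form, is

$$Q_t = (1-a-b)\,\bar{Q} + a\,\varepsilon_{t-1}\varepsilon_{t-1}^{\prime} + b\,Q_{t-1}, \qquad R_t = \operatorname{diag}(Q_t)^{-1/2}\, Q_t \,\operatorname{diag}(Q_t)^{-1/2},$$

where $\varepsilon_t$ are the standardized residuals from the univariate GARCH models, $\bar{Q}$ is their unconditional covariance, and the conditional covariance matrix is $H_t = D_t R_t D_t$ with $D_t$ the diagonal matrix of conditional standard deviations.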
https://doi.org/10.1142/9789811202391_0124
In this chapter, we review the difference-in-differences (DID) method and the first-difference method, which have been widely used in quantitative research designs in the social sciences (e.g., economics, finance, accounting, etc.). First, we define the DID and first-difference methods. Then, we explain the models that may be used in the DID and first-difference methods and briefly discuss the critical assumptions required when researchers make a causal inference from the results. Next, we use some examples documented in previous studies to illustrate how to apply the DID and first-difference methods in research related to policy implementations. Finally, we compare the DID method to the comparative interrupted time series (CITS) design and briefly introduce two popular methods that researchers have used to create a control sample in order to reduce sample selection bias in a quasi-experimental design: propensity score matching (PSM) and regression discontinuity design (RDD).
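A minimal two-period DID sketch in Python: the coefficient on the treatment-by-post interaction is the DID estimate. The data are simulated, the true effect is set to 2 for illustration, and standard errors are clustered by unit.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 400
treat = np.repeat([0, 1], n // 2)                      # half the units are treated
df = pd.DataFrame({
    "unit": np.tile(np.arange(n), 2),                  # each unit observed pre and post
    "treat": np.tile(treat, 2),
    "post": np.repeat([0, 1], n),
})
df["y"] = 1 + 0.5 * df["treat"] + 1.0 * df["post"] \
          + 2.0 * df["treat"] * df["post"] + rng.normal(size=2 * n)

res = smf.ols("y ~ treat + post + treat:post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["unit"]})
print(res.params["treat:post"])                        # DID estimate, should be close to 2
```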
https://doi.org/10.1142/9789811202391_0125
The smooth transition regression (STR) methodology was developed to model nonlinear relationships in the business cycle. We demonstrate that the methodology can be used to analyze return series where exposure to financial market risk factors depends on the market regime. The smooth transition between regimes inherent in STR is particularly appropriate for risk models because it allows for gradual transition of risk factor exposures. Variations in the methodology and tests of its appropriateness are defined and discussed. We apply the STR methodology to model the risk of the return series of the convertible arbitrage (CA) hedge fund strategy. CA portfolios comprise instruments that have both equity and bond characteristics and alternate between the two depending on the market level (state). These dual characteristics make the CA strategy a strong candidate for nonlinear risk models. Using the STR model, we confirm that the strategy's risk factor exposure changes with the market regime and, using this result, are able to account for the abnormal returns reported for the strategy in earlier studies.
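For reference, a logistic smooth transition regression of the kind described can be written as

$$y_t = \phi^{\prime}x_t + \left(\theta^{\prime}x_t\right)G(s_t;\gamma,c) + \varepsilon_t, \qquad G(s_t;\gamma,c) = \left[1+\exp\{-\gamma(s_t-c)\}\right]^{-1},$$

where $s_t$ is the transition (market-state) variable, $c$ locates the regime change, and $\gamma$ governs how gradually the risk factor exposures move between the two regimes; the chapter's particular transition variable and factor set may differ from this generic form.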
https://doi.org/10.1142/9789811202391_0126
The main purposes of this paper are to review and integrate the applications of discriminant analysis, factor analysis, and logistic regression in credit risk management. First, we discuss how discriminant analysis can be used for credit rating, such as calculating a financial z-score to determine a firm's chance of bankruptcy. We also discuss how discriminant analysis can be used to classify banks into problem and non-problem banks. Second, we discuss how factor analysis can be combined with discriminant analysis to perform bond rating forecasting. Third, we show how logistic and generalized regression techniques can be used to calculate the default risk probability. Fourth, we discuss the KMV-Merton model and the Merton distance-to-default model for calculating the default probability. Finally, we compare all the techniques discussed in the previous sections, draw conclusions, and give suggestions for future research. We propose using the CEV option model to improve the original Merton DD model. In addition, we also propose a modified naïve model to improve Bharath and Shumway's (2008) naïve model.
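As one concrete piece of the toolkit above, the following Python sketch computes a naive distance-to-default in the spirit of Bharath and Shumway (2008); the functional form follows their naive model as commonly stated, and the input values are illustrative only (this is not the chapter's modified model).

```python
import numpy as np
from scipy.stats import norm

def naive_dd(E, F, sigma_E, r_prev, T=1.0):
    """Naive distance-to-default in the spirit of Bharath and Shumway (2008).

    E: market value of equity, F: face value of debt, sigma_E: equity
    volatility, r_prev: the firm's stock return over the previous year.
    """
    naive_sigma_D = 0.05 + 0.25 * sigma_E                       # proxy for debt volatility
    naive_sigma_V = (E / (E + F)) * sigma_E + (F / (E + F)) * naive_sigma_D
    dd = (np.log((E + F) / F) + (r_prev - 0.5 * naive_sigma_V ** 2) * T) \
         / (naive_sigma_V * np.sqrt(T))
    return dd, norm.cdf(-dd)        # distance-to-default and implied default probability

print(naive_dd(E=80.0, F=40.0, sigma_E=0.45, r_prev=0.06))
```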
https://doi.org/10.1142/9789811202391_0127
The objective of this paper is twofold. First, it develops a prediction system to help credit card issuers model credit card delinquency risk. Second, it explores the potential of deep learning (also called deep neural networks), an emerging artificial intelligence technology, in the credit risk domain. Using real-life credit card data covering 711,397 credit card holders from a large bank in Brazil, this study develops a deep neural network to evaluate the risk of credit card delinquency based on clients' personal characteristics and spending behaviors. Compared with the machine learning algorithms of logistic regression, naïve Bayes, a traditional artificial neural network, and a decision tree, the deep neural network has better overall predictive performance, with the highest F scores and AUC. The successful application of deep learning implies that artificial intelligence has great potential to support and automate credit risk assessment for financial institutions and credit bureaus.
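A toy version of the model comparison, using simulated cardholder features and scikit-learn in place of the proprietary data and full deep-learning stack, might look like the following; the network architecture, class balance, and feature counts are assumptions of the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Simulated, imbalanced delinquency data stand in for the Brazilian bank's records.
X, y = make_classification(n_samples=20000, n_features=30, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "deep neural network": MLPClassifier(hidden_layer_sizes=(64, 64, 32),
                                         max_iter=300, random_state=0),
}
for name, m in models.items():
    m.fit(Xtr, ytr)
    p = m.predict_proba(Xte)[:, 1]
    print(name, "AUC:", round(roc_auc_score(yte, p), 3),
          "F1:", round(f1_score(yte, p > 0.5), 3))
```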
https://doi.org/10.1142/9789811202391_0128
US tax laws provide investors an incentive to time the sales of their bonds to minimize tax liability. This grants a tax-timing option that affects bond value. In reality, corporate bond investors' tax-timing strategy is complicated by the risk of default. In this chapter, we assess the effects of taxes and stochastic interest rates on the timing option value and the equilibrium price of corporate bonds by considering discount and premium amortization, multiple trading dates, transaction costs, and changes in the level and volatility of interest rates. We find that the value of the tax-timing option accounts for a substantial proportion of the corporate bond price and that the option value increases with bond maturity and credit risk.
https://doi.org/10.1142/9789811202391_0129
Understanding the dynamic correlations among asset returns is essential for ascertaining the behavior of asset prices and their comovements. It also has important implications for portfolio diversification and risk management. In this chapter, we apply the DCC-GARCH model pioneered by Engle (2001) and Engle and Sheppard (2002) to investigate the dynamics of correlations among S&P 500 stocks during the sub-prime crisis. Using daily data on stocks in the S&P 500 index, we document strong evidence of persistent dynamic correlations among the returns of the index component stocks. Conditional correlations between the S&P 500 index and its component stocks increase substantially during the sub-prime crisis period, showing strong evidence of contagion. In addition, stock return variance is time-varying and peaks at the crest of the financial crisis. The results show that the DCC-GARCH model is a powerful tool for forecasting return correlations and performing value-at-risk portfolio analysis.
https://doi.org/10.1142/9789811202391_0130
This chapter utilizes path analysis, an approach common in the behavioral and natural science literatures but relatively rare in finance and accounting, to improve inferences drawn from a combined database of financial and non-financial information. Focusing on the revenue-generating activities of internet firms, this paper extends the literature on internet valuation while addressing the potentially endogenous and multicollinear nature of the internet activity measures applied in these tests. Results suggest that both SG&A and R&D have significant explanatory power for the web activity measures, suggesting that these expenditures represent investments in product quality. Evidence from the path analysis also indicates that both accounting and non-financial measures, in particular SG&A and pageviews, are significantly associated with firm revenues. Finally, this paper suggests other areas of accounting research that could benefit from a path analysis approach.
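A stripped-down, two-equation path analysis of the kind described can be sketched in Python on simulated data; the variable names (SG&A, pageviews, revenue), coefficients, and decomposition into direct and indirect effects are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

# Minimal path model: SG&A -> pageviews -> revenue, plus a direct SG&A -> revenue path.
rng = np.random.default_rng(6)
n = 300
sga = rng.normal(size=n)
pageviews = 0.8 * sga + rng.normal(size=n)                   # path a
revenue = 0.5 * sga + 0.6 * pageviews + rng.normal(size=n)   # paths c' (direct) and b

# Estimate each structural equation by OLS and decompose the effect of SG&A.
a = sm.OLS(pageviews, sm.add_constant(sga)).fit().params[1]
bc = sm.OLS(revenue, sm.add_constant(np.column_stack([sga, pageviews]))).fit().params
direct, b = bc[1], bc[2]
print("direct effect:", direct, " indirect effect (a*b):", a * b,
      " total effect:", direct + a * b)
```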
https://doi.org/10.1142/9789811202391_0131
This chapter examines the relationship between financial performance, regulatory reform, and management of community banks. The consequences of the Sarbanes–Oxley Act (SOX) and Dodd–Frank Act (DFA) regulations are observed. Risk management responses to regulatory reforms, as observed in the loan loss provision, are examined in relation to these reforms. We also observe the consequences of compliance costs on product offerings and competitive condition. Empirical methods and results provided here show that sustained operations for community banks will require a commitment to developing management expertise that observes the consequences of regulatory objectives at the firm level.
https://doi.org/10.1142/9789811202391_bmatter
The following sections are included:
Cheng Few Lee is a Distinguished Professor of Finance at Rutgers Business School, Rutgers University, and was chairperson of the Department of Finance from 1988 to 1995. He has also served on the faculty of the University of Illinois (IBE Professor of Finance) and the University of Georgia. He has maintained academic and consulting ties in Taiwan, Hong Kong, China, and the United States for the past four decades. He has been a consultant to many prominent groups, including the American Insurance Group, the World Bank, the United Nations, The Marmon Group Inc., Wintek Corporation, and Polaris Financial Group.
Professor Lee founded the Review of Quantitative Finance and Accounting (RQFA) in 1990 and the Review of Pacific Basin Financial Markets and Policies (RPBFMP) in 1998, and serves as managing editor for both journals. He was also a co-editor of the Financial Review (1985–1991) and the Quarterly Review of Economics and Business (1987–1989).
In the past 45 years, Dr Lee has written numerous textbooks ranging in subject matter from financial management to corporate finance, security analysis and portfolio management to financial analysis, planning and forecasting, and business statistics. Dr Lee has also published more than 230 articles in more than 20 different journals in finance, accounting, economics, statistics, and management. Professor Lee has been ranked the most published finance professor worldwide during 1953–2008. In addition to this new handbook, Dr Lee published Handbook of Quantitative Finance and Risk Management with John C Lee and Alice C Lee in 2010 and Handbook of Financial Econometrics and Statistics with John C Lee in 2015. Both handbooks have been published with Springer.
John C Lee is a Microsoft Certified Professional in Microsoft Visual Basic and Microsoft Excel VBA. He has Bachelor's and Master's degrees in accounting from the University of Illinois at Urbana-Champaign. John has worked for over 20 years in both business and technical fields as an accountant, auditor, systems analyst, and business software developer. He is the lead author of Essentials of Excel VBA, SAS, and MINITAB for Statistical and Financial Analysis, published in 2017 by Springer. This book is a companion text to Statistics for Business and Financial Economics, of which he is one of the co-authors. In addition, he also published Financial Analysis, Planning and Forecasting, 3e (with Cheng Few Lee). John has been a Senior Technology Officer at Chase Manhattan Bank and an Assistant Vice President at Merrill Lynch. Currently, he is the Director of the Center for PBBEF Research.