
  • Article (No Access)

    THE DETERMINANTS OF CREDIT DEFAULT SWAP RATES: AN EXPLANATORY STUDY

    The aim of this paper is to explain empirically the determinants of credit default swap rates using linear regression. We document that the majority of the variables derived from credit risk pricing theories explain more than 60% of the total level of credit default swap rates. These theoretical variables are credit rating, maturity, the riskless interest rate, the slope of the yield curve and equity volatility. The estimated coefficients for most of these variables are consistent with theory and are both statistically and economically significant. We conclude that credit rating is the most important determinant of credit default swap rates.
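
    A minimal sketch of the kind of linear regression the abstract describes, assuming a hypothetical dataset with the five theoretical determinants as columns; the column names, file name and use of statsmodels are illustrative assumptions, not the paper's actual specification:

```python
# Illustrative sketch only: regress CDS rates on the theoretical determinants
# named in the abstract. Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cds_panel.csv")  # hypothetical dataset of CDS quotes

# Explanatory variables suggested by credit risk pricing theory
features = ["credit_rating", "maturity", "riskless_rate",
            "yield_curve_slope", "equity_volatility"]

X = sm.add_constant(df[features])          # add intercept
ols = sm.OLS(df["cds_rate"], X).fit()      # ordinary least squares

print(ols.summary())                         # coefficients, t-stats, R^2
print("R-squared:", round(ols.rsquared, 3))  # e.g. the paper reports > 0.60
```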

  • Article (No Access)

    AN ALTERNATIVE APPROACH TO FIRMS' EVALUATION: EXPERT SYSTEMS AND FUZZY LOGIC

    Discounted cash flow techniques are the generally accepted methods for valuing firms. Such methods do not explicitly acknowledge the determinants of value and overlook their interrelations. This paper proposes a different method of firm valuation based on fuzzy logic and expert systems. It represents a conceptual transposition of discounted cash flow techniques but, unlike the latter, takes explicit account of both quantitative and qualitative variables and their mutual integration. Financial, strategic and business aspects are considered by focusing on 29 value drivers that are combined via "if–then" rules. The output of the system is a real number in the interval [0, 1], which represents the value-creation power of the firm. To corroborate the model, a sensitivity analysis is conducted. The system may be used for rating and ranking firms, for assessing the impact of managers' decisions on value creation, and as a tool of corporate governance.
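
    A minimal sketch of how fuzzy "if–then" rules can map value drivers to a score in [0, 1]; only two hypothetical drivers and two rules are shown, whereas the paper combines 29 drivers, and the membership functions and consequent levels are invented for illustration:

```python
# Minimal sketch of a fuzzy "if-then" valuation engine, in the spirit of the
# abstract. Only two hypothetical value drivers (the paper uses 29) and two
# rules are shown; names and membership shapes are illustrative assumptions.

def ramp_up(x, a, b):
    """Membership rising from 0 at a to 1 at b (degree of 'high')."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def evaluate_firm(profitability, competitive_strength):
    """Inputs normalized to [0, 1]; returns a value-creation score in [0, 1]."""
    prof_high = ramp_up(profitability, 0.3, 0.8)
    comp_high = ramp_up(competitive_strength, 0.4, 0.9)
    prof_low, comp_low = 1.0 - prof_high, 1.0 - comp_high

    # Rule 1: IF profitability is high AND competitive position is strong
    #         THEN value creation is high (consequent level 1.0)
    r1 = min(prof_high, comp_high)
    # Rule 2: IF profitability is low OR competitive position is weak
    #         THEN value creation is low (consequent level 0.2)
    r2 = max(prof_low, comp_low)

    # Sugeno-style weighted average of rule consequents -> crisp score in [0, 1]
    return (r1 * 1.0 + r2 * 0.2) / (r1 + r2) if (r1 + r2) else 0.0

print(evaluate_firm(profitability=0.7, competitive_strength=0.6))
```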

  • Article (No Access)

    EVALUATING INFORMATION QUALITY AND VALIDITY OF VALUE LINE STOCK RATINGS USING DOMINATION CONES

    Investment in the stock market involves many decision criteria and variables; hence, investors increasingly rely on the ratings provided by rating agencies to guide their stock selections. However, do these stock ratings have information value? Are the agencies' ratings valid? We establish the dominance cone principle and use Value Line stock ratings to demonstrate its application. Our results, based on limited data, show that Value Line ratings do not support the notion that a better rating results in a better rate of return during 2006–2007.
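
    A hedged sketch of a cone-based dominance check, here reduced to the ordinary Pareto cone over three hypothetical criteria; the criteria, constraint matrix and stock vectors are illustrative assumptions rather than the paper's construction:

```python
# Hedged sketch of a dominance check with a polyhedral cone. The cone here is
# the Pareto cone (every criterion "higher is better"), expressed through a
# constraint matrix A so that x dominates y when A @ (x - y) >= 0 componentwise.
# The criteria and the two stock vectors are hypothetical, not data from the paper.
import numpy as np

def dominates(x, y, A, tol=1e-12):
    """True if x dominates y with respect to the cone {d : A @ d >= 0}."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return bool(np.all(A @ d >= -tol) and np.any(np.abs(d) > tol))

# Three hypothetical criteria: rating score, past return, earnings growth
A = np.eye(3)                      # identity -> ordinary Pareto dominance
stock_a = [5.0, 0.12, 0.08]        # e.g. a highly rated stock
stock_b = [3.0, 0.09, 0.05]

print(dominates(stock_a, stock_b, A))   # True: a is at least as good on every criterion
```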

  • Article (No Access)

    Pointer-Based Item-to-Item Collaborative Filtering Recommendation System Using a Machine Learning Model

    The rise of digital marketing has enabled companies to offer personalized item recommendations to their customers, a process that keeps them ahead of the competition. One of the techniques used for item recommendation is the item-based recommendation system, or item–item collaborative filtering. Presently, item recommendation is based entirely on numerical ratings such as 1–5 and does not draw on the comment section, where users or customers express their feelings and thoughts about products or services. This paper proposes a machine learning model in which products are rated on a 0/2/4 scale, where 0 is negative, 2 is neutral and 4 is positive. This operates alongside the existing review system, which handles users' reviews and comments, without disrupting it. We implemented the model using the Keras, pandas and scikit-learn libraries. The proposed approach improved prediction with 79% accuracy for Yelp datasets of businesses across 11 metropolitan areas in four countries, along with a mean absolute error (MAE) of 21%, precision of 79%, recall of 80% and an F1-score of 79%. Our model demonstrates scalability and shows how organizations can revolutionize their recommender systems to attract prospective customers and increase patronage. The proposed similarity algorithm was also compared with conventional algorithms to assess its performance and accuracy in terms of root mean square error (RMSE), precision and recall. Results of this experiment indicate that the similarity-based recommendation algorithm outperforms the conventional algorithm and enhances recommendation accuracy.
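
    A generic item–item collaborative filtering sketch on the 0/2/4 scale mentioned in the abstract, using pandas and scikit-learn; it is not the authors' pointer-based Keras model, and the toy user–item matrix is invented:

```python
# Generic item-item collaborative filtering sketch on the 0/2/4 scale described
# in the abstract. This is NOT the authors' pointer-based Keras model; it only
# illustrates the underlying item-item similarity idea. Data are made up.
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# user x item matrix on the 0 (negative) / 2 (neutral) / 4 (positive) scale
ratings = pd.DataFrame(
    {"item_a": [4, 0, 2, 4], "item_b": [4, 2, 2, 0], "item_c": [0, 4, 2, 4]},
    index=["u1", "u2", "u3", "u4"],
)

# cosine similarity between item columns
sim = pd.DataFrame(
    cosine_similarity(ratings.T), index=ratings.columns, columns=ratings.columns
)

def predict(user, item):
    """Similarity-weighted average of the user's ratings on the other items."""
    others = [i for i in ratings.columns if i != item]
    w = sim.loc[item, others]
    return float((ratings.loc[user, others] * w).sum() / w.sum())

print(predict("u1", "item_c"))   # predicted rating of item_c for user u1
```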

  • Article (No Access)

    THE NEW BASEL ACCORD AND THE NATURE OF RISK: A GAME THEORETIC PERSPECTIVE

    Basel II profoundly changes risk management in banks. Internal rating procedures would lead one to expect that banks are shifting to active risk control. But if risk management is no longer a simple "game against nature", and all agents involved are active players, then a shift occurs from a non-strategic model setting (measuring event risk stochastically) to a more general strategic model setting (measuring behavioral risk adequately). Since a game is any situation in which players make strategic decisions, i.e. decisions that take into account each other's actions and responses, game theory is a useful set of tools for better understanding different risk settings. Embedded in a short history of the Basel Accord, this article introduces some basic ideas of game theory in the context of rating procedures under Basel II and gives some insight into how game theory works. Here, the primary value of game theory stems from its focus on behavioral risk: risk when all agents are presumed rational, each attempting to anticipate likely actions and reactions by its rivals.
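
    A small sketch of the strategic-decision idea the abstract refers to: a two-player bimatrix game and a brute-force search for pure-strategy Nash equilibria. The players, strategies and payoffs are invented for illustration and are not taken from the article:

```python
# Illustrative only: a tiny strategic-form game showing the "behavioral risk"
# idea (each player anticipates the other's action). Players, strategies and
# payoffs are invented for illustration and are not taken from the article.
import numpy as np

# Two banks each choose a risk strategy: 0 = prudent, 1 = aggressive
bank1 = np.array([[3, 1],
                  [4, 2]])   # bank 1's payoffs
bank2 = np.array([[3, 4],
                  [1, 2]])   # bank 2's payoffs

def pure_nash(p1, p2):
    """All pure-strategy Nash equilibria (row, col) of a bimatrix game."""
    eq = []
    for r in range(p1.shape[0]):
        for c in range(p1.shape[1]):
            if p1[r, c] >= p1[:, c].max() and p2[r, c] >= p2[r, :].max():
                eq.append((r, c))
    return eq

print(pure_nash(bank1, bank2))   # [(1, 1)]: both banks play aggressive
```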

  • Article (Open Access)

    THE DIVERGENCE OF ESG RATINGS: AN ANALYSIS OF ITALIAN LISTED COMPANIES

    The increasing attention to sustainability issues in finance has brought a proliferation of environmental, social, and governance (ESG) metrics and rating providers, which has resulted in divergences among ESG ratings. Based on a sample of Italian listed firms, this paper investigates these divergences through a framework that decomposes ESG ratings into a value and a weight component at the pillar (i.e. E, S, and G) and category (i.e. sub-pillar) levels. We find that weight divergence and the social and governance indicators are the main drivers of rating divergence. The research contributes a new tool for analyzing ESG divergences and provides a number of recommendations for researchers and practitioners, stressing the need to understand what ESG rating agencies actually measure and the need for standardization and transparency in ESG measurement to favor a more homogeneous set of indicators.
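
    One possible way to decompose the gap between two providers' ESG ratings into a value component and a weight component at the pillar level; the exact decomposition used in the paper may differ, and the pillar scores and weights below are invented:

```python
# One possible value/weight decomposition of the gap between two providers'
# ESG ratings at the pillar (E, S, G) level; the exact framework used in the
# paper may differ. Pillar scores and weights below are invented numbers.
import numpy as np

v1 = np.array([70.0, 55.0, 60.0]); w1 = np.array([0.40, 0.35, 0.25])  # provider 1
v2 = np.array([65.0, 45.0, 62.0]); w2 = np.array([0.30, 0.30, 0.40])  # provider 2

r1, r2 = w1 @ v1, w2 @ v2                       # aggregate ESG ratings
value_part = w1 @ (v1 - v2)                     # same weights, different scores
weight_part = (w1 - w2) @ v2                    # same scores, different weights

print(f"gap = {r1 - r2:.2f}")
print(f"value component  = {value_part:.2f}")
print(f"weight component = {weight_part:.2f}")  # the two parts sum to the gap
```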

  • Article (Open Access)

    ABSOLUTE OR RELATIVE: THE DARK SIDE OF FUND RATING SYSTEMS

    Academic literature and market practitioners have always devoted great attention to the analysis of asset management products, with particular regard to fund classification and performance metrics. Less attention has been paid to rating methodologies and to the risk of attributing positive ratings to underperforming asset managers. The most widespread rating criterion is the ordinal one, which is based on the assumption that the best asset managers are those who have performed better than their competitors, regardless of their ability to achieve a given threshold (i.e. a positive overperformance against the benchmark). Our study, after a description of the most common risk-adjusted performance measures, introduces the idea of attributing the rating on a cardinal basis, setting in advance a given threshold that must be achieved to receive a positive evaluation (i.e. a rating equal to or higher than 3 on a scale of 1–5). The empirical test, conducted on a sample of funds belonging to the main equity and bond asset classes, made it possible to quantify the effects of the cardinal approach on the attribution of the rating and on the probability of assigning a good rating to underperforming funds. The empirical analysis also highlighted how the cardinal method achieves, on average, better performance than the ordinal one, even in an out-of-sample framework. The differences between the two methodologies are particularly remarkable in efficient markets such as the North American equity market. The two rating assignment systems were also analyzed using contingency tables to test their ability to anticipate the default event (underperformance relative to the benchmark). The policy suggestion emerging from our study concerns the significant impact of the rating criterion in reducing the risk of recommending funds that, despite a good rating, have failed to perform satisfactorily and are unlikely to do so in the future.
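
    A sketch contrasting an ordinal (peer-quintile) rating rule with a cardinal (threshold-based) one on hypothetical excess returns; the 1–5 mapping, thresholds and fund data are illustrative assumptions, not the paper's risk-adjusted measures:

```python
# Sketch contrasting ordinal and cardinal rating rules. Thresholds, the 1-5
# mapping and the fund data are invented; the paper's actual risk-adjusted
# measures and cutoffs may differ.
import numpy as np
import pandas as pd

# Annual excess return over the benchmark for ten hypothetical funds
excess = pd.Series([-0.04, -0.02, -0.01, 0.00, 0.01, 0.02, 0.03, 0.05, -0.03, 0.04],
                   index=[f"fund_{i}" for i in range(10)])

# Ordinal: quintile rank among peers -> 1 (worst) to 5 (best), regardless of sign
ordinal = pd.qcut(excess.rank(method="first"), 5, labels=[1, 2, 3, 4, 5]).astype(int)

# Cardinal: rating depends on fixed overperformance thresholds
bins = [-np.inf, -0.02, 0.0, 0.01, 0.03, np.inf]      # hypothetical cutoffs
cardinal = pd.cut(excess, bins=bins, labels=[1, 2, 3, 4, 5]).astype(int)

report = pd.DataFrame({"excess": excess, "ordinal": ordinal, "cardinal": cardinal})
print(report)
# A fund can earn an ordinal rating >= 3 while underperforming the benchmark;
# the cardinal rule only awards >= 3 when the threshold is actually cleared.
print(report[(report.ordinal >= 3) & (report.excess <= 0)])
```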