The role of macroprudential policies (MPPs) in influencing bank behavior has expanded significantly in recent years. However, the impact of MPPs on bank behavior across countries whose languages differ in future time reference (FTR) has not been adequately examined. To inform this debate, we use bank-level data for 2010–2019 to examine how MPPs affect bank return and risk across countries with varying linguistic FTR. The findings show that the use of MPPs lowers risk in countries with strong-FTR languages. This result holds in baseline regressions as well as in robustness tests that incorporate additional dimensions of a country's economic and institutional environment. Moreover, although both borrower- and lender-focused macroprudential measures are effective, their efficacy differs: the former set of instruments is more useful in Emerging Market and Developing Economies (EMDEs), whereas the latter holds greater traction in advanced economies.
The transmission control protocol (TCP) ensures that data are transported safely and accurately over the network for applications that rely on the transport layer for reliable delivery. Internet usage continues to grow, and many protocols have been developed at the network layer. Congestion, which leads to packet loss and long transmission times over end-to-end TCP connections at the transport layer, is one of the biggest issues with the internet. To overcome these drawbacks, an optimized random forest algorithm (RFA) with improved random early detection (IRED) is proposed for congestion prediction and avoidance at the transport layer. Data are first gathered and passed through pre-processing to improve data quality: KNN-based imputation replaces missing values in the raw data, and z-score normalization scales the data to a common range. Congestion is then predicted using the optimized RFA, with the whale optimization algorithm (WOA) used to tune the learning rate so as to reduce error and improve forecast accuracy. To avoid congestion, the IRED method is applied to keep the transport layer congestion-free. Performance is evaluated against existing techniques in terms of accuracy, precision, recall, specificity, and error, for which the proposed model achieves 98%, 98%, 99%, 98%, and 1%, respectively. Throughput and latency are also evaluated to characterize network performance. Overall, the proposed method outperforms existing techniques, and congestion is predicted and avoided accurately in the network.
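The pre-processing steps described above can be sketched as follows; this is a minimal illustration with synthetic numbers, and the `knn_impute` and `zscore` helpers are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def knn_impute(X, k=2):
    """Replace NaNs with the mean of the k nearest complete rows
    (distance computed over the observed columns only)."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        nearest = complete[np.argsort(d)[:k]]
        X[i, miss] = nearest[:, miss].mean(axis=0)
    return X

def zscore(X):
    """Scale each column to zero mean and unit variance."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

data = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 6.0], [4.0, 8.0]])
clean = zscore(knn_impute(data))   # imputed, then standardized
```

In a real pipeline the cleaned matrix would then feed the random forest predictor.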
This paper presents essential findings and results on using ranking-based kernels for the analysis and utilization of high-dimensional, noisy biomedical data in applied clinical diagnostics. We claim that the presented kernels, combined with a state-of-the-art classification technique, the Support Vector Machine (SVM), can significantly improve the classification rate and predictive power of the wrapper method. Moreover, the advantage of such kernels could potentially be exploited in other kernel methods and in essential computer-aided tasks such as novelty detection and clustering. Our experimental results and theoretical generalization bounds imply that ranking-based kernels outperform other traditionally employed SVM kernels on high-dimensional biomedical and microarray data.
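One simple instance of a ranking-based kernel is a linear kernel applied to per-sample feature ranks, which makes the similarity invariant to monotone rescaling of the features. The sketch below illustrates that idea; it is an assumption for illustration, not the paper's exact kernel:

```python
import numpy as np

def ranks(x):
    """Rank-transform one sample: smallest value -> 1, next -> 2, ...
    (ties broken by position; adequate for an illustration)."""
    r = np.empty_like(x)
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

def rank_kernel(X, Y):
    """Linear kernel on per-sample feature ranks: robust to outliers
    and monotone distortions in high-dimensional data."""
    Rx = np.array([ranks(x) for x in X])
    Ry = np.array([ranks(y) for y in Y])
    return Rx @ Ry.T

# two samples with identical rank order but wildly different scales
X = np.array([[0.1, 5.0, 2.0],
              [10., 500., 20.]])
K = rank_kernel(X, X)   # every entry equals 1*1 + 3*3 + 2*2 = 14
```

A precomputed Gram matrix like `K` can be handed to a kernel method such as scikit-learn's `SVC(kernel='precomputed')`.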
Using financial data from 645 companies listed on the Taiwan Stock Exchange (TSE) between 2000 and 2009, this paper applies a least squares dummy variable (LSDV) model to estimate the effect of leverage on firm market value and to examine how contextual variables influence this relationship. The empirical results are as follows. First, the values of leveraged firms are greater than the values of unleveraged firms if we do not consider the probability of bankruptcy. If we simultaneously consider the benefits and costs of debt, we find that leverage is positively related to firm value until a firm has issued sufficient debt to attain its optimal capital structure. Second, the positive influence of leverage on firm value tends to be stronger for firms of higher financial quality (firms with greater Z-scores), firms with greater growth opportunities, and firms with higher corporate tax rates. Third, the negative influence of leverage on firm value tends to be strengthened by increases in a firm's free cash flow, its non-debt tax shield, or the inflation rate it experiences. Finally, leverage may also have a positive effect on firm value, provided that a firm with higher free cash flow, a higher corporate tax rate, or a higher inflation rate is able to properly capitalize on the resultant opportunities. These findings provide insight into firms' debt financing decisions, helping firms maximize their value.
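The LSDV estimator mentioned above amounts to ordinary least squares with one intercept dummy per firm; a toy sketch with hypothetical data (not the paper's specification):

```python
import numpy as np

def lsdv(y, x, firm_ids):
    """Least-squares dummy variable (fixed-effects) estimator:
    regress y on x plus one intercept dummy per firm via OLS,
    returning the common slope on x."""
    firms = sorted(set(firm_ids))
    D = np.array([[1.0 if f == g else 0.0 for g in firms] for f in firm_ids])
    X = np.column_stack([x, D])          # regressor plus firm dummies
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[0]

# two firms with different intercepts (5 vs 9) but a common slope of 2
x = np.array([1., 2., 3., 1., 2., 3.])
ids = [0, 0, 0, 1, 1, 1]
y = 2.0 * x + np.where(np.array(ids) == 0, 5.0, 9.0)
slope = lsdv(y, x, ids)                  # recovers the slope of 2
```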
Data imbalance among multiclass datasets is very common in real-world applications. Existing studies reveal that various attempts have been made to overcome this multiclass imbalance problem, a severe issue for typical supervised machine learning methods such as classification and regression. Still, there remains a need to handle the imbalance problem efficiently, as datasets include both safe and unsafe minority samples. Most widely used oversampling techniques, such as SMOTE and its variants, face challenges in replicating or generating new data instances to balance multiple classes, particularly when the imbalance is high and the number of rare samples is very small, leading the classifier to misclassify data instances. To lessen this problem, we propose a new data balancing method, a two-stage iterative ensemble method, to tackle imbalance in a multiclass environment. The proposed approach focuses on the influence of rare minority samples on learning from imbalanced datasets; its main idea is to balance the data, without any change in class distribution, before it is trained by the learner, thereby improving the learning process. The proposed approach is compared against two widely used oversampling techniques, and the results reveal a significant improvement in the learning process on multiclass imbalanced data.
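For context, SMOTE-style oversampling, representative of the techniques the proposed method is compared against, interpolates synthetic minority samples between nearest neighbors. A minimal sketch (not a full SMOTE implementation, and not the proposed two-stage method):

```python
import numpy as np

def smote_like(X_min, n_new, k=3, rng=None):
    """SMOTE-style oversampling sketch: each synthetic sample lies on
    the segment between a random minority sample and one of its k
    nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]            # skip the point itself
        j = rng.choice(nbrs)
        lam = rng.random()                       # interpolation weight
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# four minority samples at the corners of the unit square
X_min = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
synthetic = smote_like(X_min, n_new=4, k=2, rng=0)
```

When the minority class has very few samples, such interpolation can place synthetic points in unsafe regions, which is the failure mode the abstract highlights.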
We study and compare two classes of statistical criteria for assessing the significance of exceptional words. Z-score-like criteria, and the normal approximation that is their strict equivalent, suffer from several drawbacks in terms of sensitivity and specificity. Thanks to the combinatorial structure of words, recent mathematical results have made computation of the exact P-value possible. We study here the drawbacks of the Z-score, the choice of the threshold, and how tightly the Z-score approximates the exact P-value.
A major conclusion is that the normal approximation is always very poor and overestimates statistical significance.
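The normal-approximation z-score discussed above can be illustrated as follows; the counts and variance here are hypothetical, since for a real word the variance of its count depends on the word's combinatorial (overlap) structure:

```python
import math

def word_zscore(observed, expected, variance):
    """Normal-approximation z-score for a word count, and the matching
    one-sided p-value from the standard normal tail (via erfc)."""
    z = (observed - expected) / math.sqrt(variance)
    p_approx = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_approx

# hypothetical word: seen 30 times where ~20 occurrences were expected
z, p = word_zscore(observed=30, expected=20.0, variance=19.5)
```

Per the abstract's conclusion, such a `p_approx` can badly overestimate significance relative to the exact P-value.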
As a protein evolves, not every part of the amino acid sequence has an equal probability of being deleted or of accepting insertions, because not every amino acid plays an equally important role in maintaining the protein structure. However, the most prevalent models in fold recognition methods treat every amino acid deletion and insertion as equally probable events. We have analyzed the alignment patterns for homologous and analogous sequences to determine patterns of insertion and deletion, and used that information to determine the statistics of insertions and deletions for different amino acids of a target sequence. We define these patterns as insertion/deletion (indel) frequency arrays (IFAs). By applying IFAs to the protein threading problem, we have been able to improve alignment accuracy, especially for proteins with low sequence identity. We have also demonstrated that the application of this information can lead to an improvement in fold recognition.
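Collecting indel statistics of the kind described above can be sketched by counting, for each amino acid, how often a deletion opens immediately after it in gapped alignments. This is a toy illustration of such counting, not the paper's IFA construction:

```python
from collections import Counter

def deletion_open_counts(gapped_seqs):
    """Count, per amino acid, how often a deletion (gap, '-') opens
    immediately after that residue in a set of gapped sequences."""
    opens = Counter()
    for seq in gapped_seqs:
        for i in range(1, len(seq)):
            if seq[i] == '-' and seq[i - 1] != '-':
                opens[seq[i - 1]] += 1
    return opens

# two hypothetical gapped target sequences from pairwise alignments
alns = ["MKV--LA", "GKV-ILA"]
freq = deletion_open_counts(alns)   # deletions open after 'V' twice
```

Normalizing such counts by the residue's total occurrences would give per-amino-acid indel frequencies.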
Banks have been revising their business models since the financial crisis, diversifying income sources to pursue profitability and stability in a rapidly evolving environment. The effectiveness of this strategy is still debated. We investigate whether revenue diversification of 1250 EU and US banks improved performance or its stability between 2008 and 2016. We adopt a broad econometric approach, defining diversification both as the share of non-interest revenue and as the Herfindahl-Hirschman (HH) index of net operating income. We find that diversification is not clearly associated with performance or its volatility, that benefits change remarkably over time and, where present, show significant variability. Our results support recent evidence on the limitations of diversification in banking, raising potential concerns about converging supervisory practices and general calls for revenue diversity. The variability of business models and the impacts of different economic and institutional environments matter.
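The HH index used above as a diversification measure is the sum of squared revenue shares; a minimal sketch with hypothetical shares:

```python
def hh_index(shares):
    """Herfindahl-Hirschman concentration of revenue components: the
    sum of squared shares. 1 means fully concentrated revenue; lower
    values mean more diversified. Shares must sum to 1."""
    return sum(s ** 2 for s in shares)

# hypothetical bank: 70% net interest income, 30% non-interest income
hhi = hh_index([0.7, 0.3])   # 0.49 + 0.09 = 0.58
```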
This paper investigates an extension of the z-score model for predicting the health of UK companies. It uses multiple discriminant analysis (MDA) and performance ratios to test which ratios were statistically significant in predicting the health of companies between 2000 and 2013. The purpose of this study is to contribute to Altman's (1968) original z-score model by adding new variables. It was found that cash flow, when combined with the original z-score variables, is highly significant in predicting the health of UK companies. A J-UK model was thus developed to test the health of UK companies. Compared with the z-score model, the predictive power of the J-UK model was 82.9%, which is consistent with Taffler's (1982) UK model. Furthermore, when tested before, during, and after the financial crisis of 2007–2008, the J-UK model predicted the health of UK companies more accurately than the UK z-score model.
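For reference, Altman's (1968) original z-score, which this paper extends, is a fixed linear combination of five accounting ratios; the example firm below is hypothetical:

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's (1968) original z-score for public manufacturing firms.
    Inputs: working capital/total assets, retained earnings/TA,
    EBIT/TA, market value of equity/total liabilities, sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

# hypothetical firm; Z above ~2.99 is the traditional "safe" zone
z = altman_z(0.2, 0.3, 0.15, 1.5, 1.1)   # 3.155
```

The J-UK model described in the abstract augments this kind of specification with a cash flow variable.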
This study aims to compare capital adequacy and financial stability in Islamic and conventional Saudi banks and to investigate the impact of capital adequacy on a bank's financial stability. We use annual data for five conventional banks and four Islamic banks listed on the Saudi Stock Exchange over the period 2016–2020. The Z-score is computed and used as the measure of stability of the listed banks over this period, and ordinary least squares regression is used to investigate the impact of capital adequacy on financial stability. The research hypotheses are developed in light of stakeholder theory and the foundations of Islamic law. The findings indicate, first, significant differences in the capital adequacy ratio between conventional and Islamic banks, driven by the higher mean capital adequacy ratio of Islamic banks. Second, there are significant differences in financial stability between conventional and Islamic banks, driven by the higher mean Z-score of Islamic banks. Third, the capital adequacy ratio has a significant negative impact on financial stability. The empirical results are useful for supervisors, bank management, investors, bank customers, and policymakers: they reveal the unexpected negative effects of increased capital adequacy on bank profits and the resulting threat to financial stability, and they identify the main indicators of capital adequacy and financial stability for Islamic and conventional banks in a way that helps bank supervisors, policymakers, and investors rationalize their decisions for both types of bank and identify the factors that enhance financial stability. This study is among the few that provide empirical evidence on the claim that higher capital adequacy ratios are always a positive indicator of financial stability, and on the role of Islamic banks in achieving it; the study rejects the validity of this claim and reaches unexpected results.
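The bank Z-score used above as the stability measure is conventionally computed as (mean ROA + equity/assets) divided by the standard deviation of ROA; a minimal sketch with hypothetical figures:

```python
import statistics

def bank_zscore(roa_series, equity_to_assets):
    """Accounting-based bank z-score: (mean ROA + equity/assets)
    divided by the standard deviation of ROA. Higher values mean the
    bank is more standard deviations of returns from insolvency."""
    mu = statistics.mean(roa_series)
    sd = statistics.stdev(roa_series)
    return (mu + equity_to_assets) / sd

# hypothetical five-year ROA path (2016-2020) and a 12% capital ratio
z = bank_zscore([0.010, 0.012, 0.008, 0.011, 0.009], 0.12)
```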
Condition monitoring and anomaly detection in wind turbines are currently a promising area of research. Wind energy offers notable benefits, such as sustainability and low environmental impact; however, wind turbines face significant challenges, including operational issues and high maintenance costs. This chapter therefore focuses on anomaly detection in wind turbines using Supervisory Control and Data Acquisition (SCADA) vibration data. The proposed approach is an iterative algorithm that gains robustness and adaptability by continually adjusting detection thresholds through a combination of linear regression and the z-score. The proposed algorithm detected 246 anomalies with a precision of 71.13% and an error rate of 28.87%.
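The regression-plus-z-score idea can be sketched as follows; this is a simplified, non-iterative illustration with synthetic data, not the chapter's algorithm, which additionally re-estimates its thresholds iteratively:

```python
import numpy as np

def zscore_anomalies(t, y, z_thresh=3.0):
    """Fit a linear trend to a signal and flag points whose residual
    z-score exceeds the threshold."""
    slope, intercept = np.polyfit(t, y, 1)      # linear regression
    resid = y - (slope * t + intercept)
    z = (resid - resid.mean()) / resid.std()    # residual z-scores
    return np.abs(z) > z_thresh

rng = np.random.default_rng(42)
t = np.arange(200.0)
y = 0.05 * t + rng.normal(0, 0.1, 200)          # trend plus noise
y[50] += 5.0                                    # injected fault
flags = zscore_anomalies(t, y)                  # flags the fault at t=50
```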