Auditors have recently used the conceptual tools of audit risk models to assess and control the many risks that can arise during an audit. The tool guides the auditor in determining the evidence needed for each relevant assertion and the required categories of evidence. The importance of refining audit risk assessment models stems from the crucial role these models play in allocating resources and reducing financial disparities. The challenge with such models is that inaccurate information can lead to incorrect risk assessments, legal consequences and failure to detect material misstatements in financial statements. Hence, in this research, BP neural network-enabled machine learning (BPNN-ML) technologies are developed for the audit risk assessment model. To address financial disparity, a regression algorithm is used to establish the audit risk assessment, process the data and monitor it throughout. The suggested method provides a flexible framework that can be used by a wide range of organizations, including global enterprises and financial institutions, to optimize audit processes and ensure regulatory compliance. By addressing the issues inherent in traditional methods and proposing a practical alternative, this research advances auditing procedures and regulatory compliance efforts in contemporary company contexts. Experimental analysis shows that BPNN-ML outperforms physical monitoring in feature importance ranking, contextual adaptability, sensitivity, overall performance and optimized risk assessment.
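As a rough illustration of the kind of pipeline this abstract describes, the sketch below trains a backpropagation neural network regressor on synthetic engagement features and scores a new audit; the feature names, data and architecture are assumptions for illustration only, not the authors' BPNN-ML implementation.

```python
# Minimal sketch of a backpropagation neural network regressor for audit risk
# scores; the features, data and architecture are illustrative assumptions, not
# the authors' BPNN-ML implementation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Hypothetical engagement features, e.g. misstatement history, control weaknesses, revenue volatility.
X = rng.normal(size=(n, 3))
# Synthetic risk score that increases with each feature, plus noise.
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.1, size=n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X, y)
print("predicted risk score for a new engagement:", model.predict([[1.2, 0.4, -0.3]])[0])
```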
The energy finance sector is characterized by complexity and volatility, driven by fluctuating commodity prices, regulatory changes and evolving market dynamics. Energy companies face financial, regulatory and operational risks, yet often rely on historical data and static risk assessment frameworks that may not accurately reflect real-time market changes. This research aims to develop an energy finance risk compliance early warning system, leveraging a machine learning approach to enhance the early detection of compliance risks, enable proactive decision-making and improve organizational resilience. Initially, data were collected from various sources, including historical financial records, market trends and regulatory frameworks; these data are essential for an early warning system aimed at enhancing compliance risk detection in the energy sector. The collected data were preprocessed using cleaning and normalization and prepared for analysis. Exploratory Data Analysis (EDA) was conducted using statistical methods such as correlation analysis and regression analysis to identify patterns and relationships between variables. The study proposes a novel Revenue Optimizer with weighted Support Vector Machine (RO-WSVM) model to enhance risk detection and ensure regulatory compliance in energy finance. Optimizing revenue while adhering to compliance standards provides proactive insights for effective risk management and decision-making. The results demonstrate that the proposed RO-WSVM model performs successfully, achieving higher accuracy (90%), a lower error value (0.1%) and a shorter processing time (6 s) than the other models considered. The study highlights that the innovative RO-WSVM approach enhances predictive early warning of finance risk compliance in energy finance.
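The weighted-SVM component can be sketched as follows; the revenue-optimizer part is not reproduced, and the features, labels and per-sample weights are illustrative assumptions rather than the paper's RO-WSVM specification.

```python
# Minimal sketch of the weighted-SVM part only; the revenue-optimizer component
# is not reproduced, and the features, labels and weights are placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 400
X = rng.normal(size=(n, 4))       # e.g. price volatility, leverage, exposure, filing lag
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = compliance risk
sample_weight = rng.uniform(0.5, 2.0, size=n)   # could encode revenue at stake; random here

scaler = StandardScaler().fit(X)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(scaler.transform(X), y, sample_weight=sample_weight)
print("flagged as compliance risk:", clf.predict(scaler.transform(X[:5])))
```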
We present an analytical study of an insurance company. We model the company's performance on a statistical basis and evaluate its predicted annual income in terms of insurance parameters, namely the premium, the total number of insured, the average loss claims, etc. We restrict ourselves to a single insurance class, the so-called automobile insurance. We show the existence of a crossover premium p_c below which the company operates at a loss. Above p_c, we also give a detailed statistical analysis of the company's financial status and obtain the predicted profit, along with the corresponding risk as well as the ruin probability, in terms of the premium. Furthermore, we obtain the optimal premium p_opt which maximizes the company's profit.
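A toy Monte Carlo version of this setup, under assumed demand and claim distributions rather than the paper's model, shows how a crossover premium p_c and an optimal premium p_opt emerge from the expected-profit curve.

```python
# Toy Monte Carlo sketch under assumed demand and claim distributions (not the
# paper's model): expected annual profit as a function of the premium p, the
# break-even (crossover) premium p_c and the profit-maximizing premium p_opt.
import numpy as np

rng = np.random.default_rng(2)

def annual_profit(p, n_sim=20_000):
    # Assumed: the number of policyholders falls as the premium rises.
    n_insured = int(10_000 * np.exp(-p / 800.0))
    # Assumed: Poisson claim counts with a fixed average cost per claim.
    claims = rng.poisson(lam=0.1 * n_insured, size=n_sim)
    profit = n_insured * p - claims * 2_000.0
    return profit.mean(), profit.std()

premiums = np.arange(100, 1_500, 10)
means = np.array([annual_profit(p)[0] for p in premiums])
p_c = premiums[np.argmax(means > 0)]    # first premium with positive expected profit
p_opt = premiums[np.argmax(means)]      # premium with the largest expected profit
print(f"crossover premium p_c ~ {p_c}, optimal premium p_opt ~ {p_opt}")
```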
We study the performance of various agent strategies in an artificial investment scenario. Agents are equipped with a budget, x(t), and at each time step invest a particular fraction, q(t), of their budget. The return on investment (RoI), r(t), is characterized by a periodic function with different types and levels of noise. Risk-avoiding agents choose their fraction q(t) proportional to the expected positive RoI, while risk-seeking agents always choose the maximum value q_max if they predict the RoI to be positive ("everything on red"). In addition to these different strategies, agents have different capabilities to predict the future r(t), dependent on their internal complexity. Here, we compare "zero-intelligent" agents using technical analysis (such as moving least squares) with agents using reinforcement learning or genetic algorithms to predict r(t). The performance of agents is measured by their average budget growth after a certain number of time steps. We present results of extensive computer simulations, which show that, for our given artificial environment, (i) the risk-seeking strategy outperforms the risk-avoiding one, and (ii) the genetic algorithm was able to find this optimal strategy itself, and thus outperforms other prediction approaches considered.
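A minimal sketch of the two strategies, with an idealized perfect predictor of r(t) and assumed noise parameters (so not the paper's simulator), looks like this.

```python
# Minimal sketch of the two strategies on a noisy periodic RoI signal, with an
# idealized perfect predictor of r(t); parameters are assumptions, not the
# paper's simulator settings.
import numpy as np

rng = np.random.default_rng(3)
T, q_max = 2_000, 1.0
t = np.arange(T)
r = 0.05 * np.sin(2 * np.pi * t / 50) + rng.normal(scale=0.01, size=T)   # RoI r(t)

def final_budget(strategy):
    x = 1.0                                    # initial budget x(0)
    for r_t in r:
        predicted = r_t                        # assume perfect prediction of r(t)
        if strategy == "risk_seeking":
            q = q_max if predicted > 0 else 0.0            # "everything on red"
        else:                                  # risk-avoiding: fraction ~ expected positive RoI
            q = min(q_max, max(0.0, predicted) / 0.05)
        x *= 1.0 + q * r_t
    return x

print("risk-avoiding:", final_budget("risk_avoiding"))
print("risk-seeking :", final_budget("risk_seeking"))
```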
Asset allocation is one of the most important and challenging issues in finance. In this paper, using level crossing analysis, we introduce a new approach for portfolio selection. We introduce a portfolio index that is obtained by minimizing the waiting time to receive known return and risk values. By the waiting time, we mean the average time until a specified level is observed. The advantage of this approach is that investors can set their goals based on the desired return while knowing the average waiting time and the risk value at the same time. As an example, we use our model to form a portfolio of stocks on the Tehran Stock Exchange (TSE).
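One possible reading of the waiting-time idea, on synthetic returns rather than TSE data, is to count upward crossings of a chosen level in the detrended log price and average the gaps between them, as sketched below.

```python
# Rough sketch of the waiting-time idea on synthetic returns (not TSE data):
# the average gap between upward crossings of a chosen level in the detrended
# log price serves as the waiting-time estimate.
import numpy as np

rng = np.random.default_rng(4)
returns = rng.normal(loc=0.0005, scale=0.01, size=5_000)
log_price = np.log(100 * np.cumprod(1 + returns))

def average_waiting_time(series, level):
    """Average time between upward crossings of `level` by the detrended series."""
    detrended = series - np.linspace(series[0], series[-1], len(series))
    above = detrended > level
    up_crossings = np.flatnonzero(~above[:-1] & above[1:])
    return np.mean(np.diff(up_crossings)) if len(up_crossings) > 1 else np.inf

print("average waiting time for level 0.02:", average_waiting_time(log_price, 0.02))
```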
This study explores a foreign bias model to examine whether the degree of foreign bias of sovereign wealth funds depends on the spatial spillover effects of cultural distances. Using spatial panel data on foreign investment by sovereign wealth funds in 2008–2014, we empirically test (1) whether the relationships between return, risk and the foreign bias of sovereign wealth funds are statistically significant and (2) whether this relationship depends on the spatial spillover effects of cultural distances. The evidence strongly supports our hypotheses across six target countries (Australia, Canada, China, Germany, the United Kingdom and the United States).
We examine the determinants of bank capital structure using a large sample of banks around the world. We find that banks determine their capital structure in much the same way as non-financial firms, except with respect to growth opportunities. We also provide evidence that country-level factors, such as the legal system, as well as bank-specific factors and economic conditions, influence banks’ capital decisions through their impacts on bankruptcy costs, agency costs, information asymmetry and liquidity creation. The results show that, besides the direct effects, there are indirect impacts of country-level factors on banks’ capital decisions. Our results have potential policy implications for ongoing regulatory reform.
The development of information technology has resulted in a significant increase in bank competition. Whether increased competition improves bank profitability and reduces risk is an important question in many respects. This paper analyzes the impact of competition on profitability and risk in the context of Vietnam, using an OLS estimator on a data set of 37 Vietnamese commercial banks. The main results show that banks with a higher competition index tend to have higher profitability, as measured by ROE and NIM. In addition, our empirical results show that banks tend to take on more risk when facing increased competition.
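A schematic version of the kind of OLS specification described, with placeholder variable names and synthetic data rather than the Vietnamese bank panel, might look like this.

```python
# Schematic sketch of an OLS profitability regression with placeholder variables
# and synthetic data (the Vietnamese bank panel is not reproduced here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 200
competition_index = rng.uniform(0, 1, n)          # hypothetical competition measure
bank_size = rng.normal(10, 1, n)                  # hypothetical control variable
roe = 0.05 + 0.08 * competition_index + 0.01 * bank_size + rng.normal(0, 0.02, n)

X = sm.add_constant(np.column_stack([competition_index, bank_size]))
print(sm.OLS(roe, X).fit().summary())
```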
In this paper, we examine the role of the Brazilian, Russian, Indian, Chinese and South African (BRICS) currencies in the energy market using the vine copula method. The value-at-risk (VaR) and expected shortfall of two portfolios are calculated: one is a benchmark portfolio consisting only of energy prices, and the other adds the BRICS exchange rates to the benchmark portfolio. The data period runs from 24 August 2010 to 29 November 2019. Our results show that the BRICS currencies can reduce the risk in energy investment.
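A simplified sketch of the portfolio comparison, using plain historical simulation on synthetic returns (the vine-copula dependence modelling itself is not reproduced), is shown below.

```python
# Simplified sketch using plain historical simulation on synthetic returns; the
# vine-copula dependence modelling of the paper is not reproduced here.
import numpy as np

rng = np.random.default_rng(5)
energy = rng.normal(0, 0.02, size=(2_000, 3))     # e.g. three energy price returns
fx = rng.normal(0, 0.01, size=(2_000, 5))         # e.g. BRICS exchange-rate returns

def var_es(portfolio_returns, alpha=0.05):
    var = -np.quantile(portfolio_returns, alpha)              # value-at-risk
    es = -portfolio_returns[portfolio_returns <= -var].mean() # expected shortfall
    return var, es

benchmark = energy.mean(axis=1)                    # equally weighted energy-only portfolio
augmented = np.hstack([energy, fx]).mean(axis=1)   # energy plus currency returns

print("benchmark VaR/ES:", var_es(benchmark))
print("augmented VaR/ES:", var_es(augmented))
```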
In this paper, we analyze the impacts of joint energy and output price uncertainties on input demands in a mean–variance framework. We find that an increase in expected output price will surely cause the risk-averse firm to increase the input demand, while an increase in expected energy price will surely cause the risk-averse firm to decrease the demand for energy, but increase the demand for the non-risky inputs. Furthermore, we investigate the two cases with only uncertain energy price and only uncertain output price. In the case with only uncertain energy price, we find that the uncertain energy price has no impact on the demands for the non-risky inputs. We also show that the concepts of elasticity and decreasing absolute risk aversion (DARA) play an important role in the comparative statics analysis.
The roles of trading time risks (TTRs) in stock investment return and risk are investigated under conditions of stock price crashes, using Hushen 300 (CSI300) and Dow Jones Industrial Average (^DJI) data, respectively. To describe the TTR, we employ the escape time over which the stock price drops from its maximum to its minimum value within a data window length (DWL). After theoretical and empirical research on the probability density function of returns, the results for both ^DJI and CSI300 indicate that: (i) as the DWL increases, the expectation of returns and its stability are weakened; (ii) an optimal TTR corresponds to a maximum return and minimum risk of stock investment during stock price crashes.
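The escape-time measure can be sketched as follows on synthetic prices (not the CSI300 or ^DJI data): within each data window, count the steps from the window's maximum to its subsequent minimum.

```python
# Illustrative sketch on synthetic prices (not CSI300 or ^DJI data): the escape
# time from the window's maximum to its subsequent minimum, used as a TTR proxy.
import numpy as np

rng = np.random.default_rng(6)
prices = 100 * np.cumprod(1 + rng.normal(0, 0.02, size=1_000))

def escape_time(window):
    """Steps from the window's maximum price to its later minimum."""
    i_max = int(np.argmax(window))
    i_min = i_max + int(np.argmin(window[i_max:]))
    return i_min - i_max

dwl = 120                                          # data window length (DWL)
windows = [prices[i:i + dwl] for i in range(0, len(prices) - dwl, dwl)]
print("escape times per window:", [escape_time(w) for w in windows])
```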
The paper continues the application of bifurcation analysis to research on local climate dynamics based on processing historically observed data on the daily average land surface air temperature. Since the analyzed data come from instrumental measurements, we perform an experimental bifurcation analysis. In particular, we focus on the question of where the boundary lies between the normal dynamics of local climate systems (norms) and situations with the potential to create damages (hazards). We illustrate that, perhaps, the criteria for hazards (or violent and unfavorable weather factors) relate mainly to empirical considerations of human opinion rather than to natural qualitative changes of climate dynamics. To build the bifurcation diagrams, we rely on an unconventional conceptual model (the HDS-model), which originates from the hysteresis regulator with double synchronization. The HDS-model is characterized by a variable structure with competition between amplitude quantization and time quantization. Intermittency between three periodic processes is then considered as the typical behavior of local climate systems, instead of both chaos and quasi-periodicity, in order to explain the variety of local climate dynamics. From the known specific regularities of the HDS-model dynamics, we try to find a way to decompose the local behaviors into homogeneous units within time sections with homogeneous dynamics. Here, we present the first results of such a decomposition, where quasi-homogeneous sections (QHS) are determined on the basis of modified bifurcation diagrams and the units are reconstructed within limits connected with the problem of shape defects. Nevertheless, the proposed analysis of local climate dynamics (QHS-analysis) allows us to exhibit how comparatively modest temperature differences between the mentioned units on an annual scale can step by step expand into the great temperature differences of daily variability on a centennial scale. The norms and the hazards then relate to fundamentally different viewpoints, where time sections of months and, especially, seasons distort the causal effects of natural dynamical processes. The specific circumstances under which qualitative changes of local climate dynamics are realized are summarized by the notion of a likely periodicity. That, in particular, allows us to explain why 30-year averaging remains the most common rule so far, although decadal averaging is beginning to substitute for that rule. We believe that the QHS-analysis can be considered as the joint between the norms and the hazards from a bifurcation analysis viewpoint, where the causal effects of local climate dynamics are projected onto the customary timescale only at the last step. We believe that the results could be of interest for developing fields connected with climatic change and risk assessment.
In this paper, we present a new neuroeconomics model for decision-making applied to Attention-Deficit/Hyperactivity Disorder (ADHD). The model is based on the hypothesis that decision-making depends on the evaluation of expected rewards and risks, assessed simultaneously in two decision spaces: the personal (PDS) and the interpersonal emotional (IDS) decision spaces. Motivation to act is triggered by necessities identified in the PDS or IDS. The adequacy of an action in fulfilling a given necessity is assumed to depend on the expected reward and risk evaluated in the decision spaces. Conflict generated by expected reward and risk influences the easiness (cognitive effort) and the future perspective of the decision-making. Finally, the willingness (or unwillingness) to act is proposed to be a function of the expected reward (or risk), adequacy, easiness and future perspective. The two most frequent clinical forms are ADHD hyperactive (AD/HDhyp) and ADHD inattentive (AD/HDin). AD/HDhyp behavior is hypothesized to be a consequence of experiencing high rewarding expectancies for short periods of time, low risk evaluation and a short future perspective for decision-making. AD/HDin is hypothesized to be a consequence of experiencing high rewarding expectancies for long periods of time, low risk evaluation and a long future perspective for decision-making.
Natural disasters — earthquakes, hurricanes and other storms — cause substantial property damage and loss of life in many parts of the world. The relative infrequency and importance of extreme cases lead to a preferential use of simulation models over historical statistical/actuarial models in studying the impact of such catastrophes on insurance systems. Given the increasing awareness of the highly intermittent nature of geophysical phenomena, modelers need to revisit their assumptions not only about the geophysical fields but also about the geographical distribution of insured property. This paper explores the distribution of insured property through the lens of multifractal theory.
Regardless of their branch of science, scientists agree that people's lives are highly susceptible to risk and that effectively quantifying risk is a major challenge. This paper assesses the Multifractal Cross-Correlation Measure (MRCC) among West Texas Intermediate (WTI), seven fiat currencies and three foreign exchange rates. To this end, we use Multifractal Detrended Cross-Correlation Analysis (MF-DCCA) to examine the volatility dynamics of pairs of these financial records. We find that all these volatility time series pairs are characterized by overall persistent behavior, based on values of α_xy(0) > 0.5. The MRCC values show that the pairs WTI versus MXN (Γ = 0.821425), WTI versus JPY (Γ = 0.796747) and WTI versus NOK (Γ = 0.756545) are more complex and persistent than the other pairs, whereas the pairs WTI versus AUD (Γ = 0.580362), WTI versus CAD (Γ = 0.667706) and WTI versus EMK (Γ = 0.705446) are less complex and persistent. Thus, our empirical findings shed light on the problem of risk quantification from a multifractal perspective.
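A compact sketch of the core MF-DCCA fluctuation computation, using forward segments only and first-order detrending on synthetic series (a simplified stand-in for the paper's analysis), is given below; for uncorrelated noise the estimated exponents should sit near 0.5.

```python
# Compact sketch of the core MF-DCCA fluctuation computation (forward segments
# only, first-order detrending) on synthetic series, not the WTI or FX records.
import numpy as np

def mfdcca_hxy(x, y, scales, q_values):
    """Generalized cross-correlation exponents h_xy(q) from log-log regression."""
    X, Y = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
    h = []
    for q in q_values:
        log_F = []
        for s in scales:
            f2 = []
            t = np.arange(s)
            for v in range(len(X) // s):
                seg = slice(v * s, (v + 1) * s)
                rx = X[seg] - np.polyval(np.polyfit(t, X[seg], 1), t)
                ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], 1), t)
                f2.append(np.mean(np.abs(rx * ry)))
            f2 = np.asarray(f2)
            Fq = np.exp(0.5 * np.mean(np.log(f2))) if q == 0 else np.mean(f2 ** (q / 2)) ** (1 / q)
            log_F.append(np.log(Fq))
        h.append(np.polyfit(np.log(scales), log_F, 1)[0])    # slope gives h_xy(q)
    return np.array(h)

rng = np.random.default_rng(7)
x, y = rng.normal(size=4_000), rng.normal(size=4_000)
print(mfdcca_hxy(x, y, scales=np.array([16, 32, 64, 128, 256]), q_values=[-2, 0, 2]))
```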
In this paper, we introduce the interior-outer-set model for calculating a fuzzy risk represented by a possibility-probability distribution. The model involves combinatorial calculation and is very difficult to follow, so we transform it into a matrix algorithm. Although the algorithm is still difficult to follow, it is fortunately easy to implement as a computer program. The algorithm consists of a MOVING-subalgorithm and an INDEX-subalgorithm: the former works out the leaving and joining matrices, while the latter is a combinatorial algorithm for obtaining index sets. An example is presented showing how a user can calculate the risk of a strong earthquake with the algorithm.
Beta distributions have been applied in a variety of fields, in part due to their similarity to the normal distribution while allowing larger flexibility in skewness and kurtosis coverage. In spite of these advantages, the two-sided power (TSP) distribution was presented as an alternative to the beta distribution to address some of its shortcomings, such as not possessing a cumulative distribution function (cdf) in closed form and the difficulty of interpreting its parameters. The introduction of the biparabolic distribution and its generalization in this paper may be thought of in the same vein. Similar to the TSP distribution, the generalized biparabolic (GBP) distribution also possesses a closed-form cdf, but contrary to the TSP distribution its density function is smooth at the mode. We shall demonstrate, using a moment ratio diagram comparison, that the GBP distribution provides larger flexibility in skewness and kurtosis coverage than the beta distribution when restricted to the unimodal domain. A detailed mean-variance comparison of the GBP, beta and TSP distributions is presented in a Project Evaluation and Review Technique (PERT) context. Finally, we shall fit a GBP distribution to an example of European financial stock data and demonstrate a favorable fit of the GBP distribution compared to other distributions traditionally used in that field, including the beta distribution.
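As a reference point for the closed-form cdf discussion, the sketch below implements the TSP distribution's cdf on an assumed support, mode and shape; the GBP density itself is not reproduced here.

```python
# Reference sketch of the two-sided power (TSP) distribution's closed-form cdf,
# valid on the support [a, b]; parameter values here are illustrative, and the
# generalized biparabolic density itself is not reproduced.
import numpy as np

def tsp_cdf(x, a=0.0, m=0.3, b=1.0, n=2.0):
    """Cdf of the TSP distribution with support [a, b], mode m and shape n."""
    x = np.asarray(x, dtype=float)
    left = (m - a) / (b - a) * ((x - a) / (m - a)) ** n         # branch for a <= x <= m
    right = 1.0 - (b - m) / (b - a) * ((b - x) / (b - m)) ** n  # branch for m <= x <= b
    return np.where(x <= m, left, right)

print(tsp_cdf([0.1, 0.3, 0.8]))
```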
As more and more organizations collect, store and release large amounts of personal information, it is increasingly important for them to conduct privacy risk assessment so as to comply with emerging privacy laws and meet information providers' demands. Existing statistical database security and inference control solutions may not be appropriate for protecting privacy in many new uses of data, as these methods tend to be either under- or over-restrictive in disclosure limitation, or prohibitively complex in practice. We address a fundamental question in privacy risk assessment: how to accurately derive bounds for protected information from inaccurate released information or, more particularly, from bounds of released information. We give an explicit formula for calculating such bounds from bounds, which we call square bounds or S-bounds. Classic F-bounds in statistics become a special case of S-bounds when all released bounds reduce to exact values. We propose a recursive algorithm to extend our S-bounds results from two dimensions to higher dimensions. To assess the privacy risk of a protected database of personal information given bounds of released information, we define typical privacy disclosure measures. For each type of disclosure, we investigate the distribution patterns of privacy breaches as well as effective and efficient controls that can be used to eliminate privacy risk, both based on our S-bounds results.
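The special case mentioned in the abstract, the classic two-dimensional Fréchet (F-) bounds recovered when released bounds collapse to exact values, can be sketched as follows; the general S-bounds formula is not reproduced.

```python
# Sketch of the classic two-dimensional Frechet (F-) bounds, which the abstract
# presents as the special case of S-bounds when released bounds are exact values;
# the general S-bounds formula is not reproduced here.
def f_bounds(row_total, col_total, grand_total):
    """Bounds on an unreleased cell count given exact row, column and grand totals."""
    lower = max(0, row_total + col_total - grand_total)
    upper = min(row_total, col_total)
    return lower, upper

# Example: released marginals of a 2x2 table of personal records.
print(f_bounds(row_total=40, col_total=30, grand_total=60))   # -> (10, 30)
```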
Performance, reliability and safety are relevant factors when analyzing or designing a computer system. Many studies on performance are based on monitoring and analyzing data from a computer system. One of the most useful pieces of data is the Load Average (LA), which shows the average load of the system over the last minute, the last five minutes and the last fifteen minutes. There are many studies of system performance based on the load average. It is obtained by means of operating-system monitoring commands, but the results are sometimes difficult to understand and far removed from human intuition. The aim of this paper is to present a new procedure that allows us to determine the stability of a computer system from a list of load average sample data. The idea is expressed as an algorithm based on statistical analysis, the aggregation of information and its formal specification. The result is an evaluation of the stability of the load and of the computer system through monitoring, but without adding any overhead to the system. In addition, the procedure can be used as a software monitor for risk prevention on any vulnerable system.
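One way to sketch such a stability check, with illustrative thresholds rather than the paper's formal specification, is to aggregate the load-average samples and flag high dispersion or a pronounced trend.

```python
# Simple sketch with illustrative thresholds (not the paper's formal criteria):
# classify a sequence of load-average samples as stable or unstable using only
# the monitor output already collected, so no extra overhead is added.
import statistics

def load_stability(samples, cv_threshold=0.25, trend_threshold=0.05):
    """Aggregate 1-minute load averages into a stability verdict."""
    mean = statistics.fmean(samples)
    cv = statistics.pstdev(samples) / mean if mean else 0.0    # relative dispersion
    trend = (samples[-1] - samples[0]) / (len(samples) - 1)    # crude average drift per sample
    return {"mean": mean, "cv": cv, "trend": trend,
            "stable": cv < cv_threshold and abs(trend) < trend_threshold}

print(load_stability([0.80, 0.85, 0.78, 0.90, 0.83, 0.88, 0.81, 0.86]))
```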
Modern financial institutions require sophisticated risk assessment tools to integrate human expertise and historical data in a market that is changing and broadening qualitatively, quantitatively, and geographically. The need is especially acute in newly developed countries where expertise and data are scarce, and knowledge bases and assumptions imported from the West may be of limited applicability.
Second-order logical models can be a valuable tool in such situations. They integrate the robustness of neural or statistical modeling of data, the perspicuity of logical rule induction, and the experience and understanding of skilled human experts. The approach is illustrated in the context of risk assessment in the Korean surety insurance industry.