
  • Article (No Access)

    New compounding lifetime distributions with applications to real data

    Motivated by the need to fit the Danish reinsurance claims and aircraft windshield data sets, we introduce the exponentiated half logistic-power series (EHLPS) distributions, obtained by compounding the exponentiated half-logistic distribution with power series distributions. Some mathematical properties of the class are studied. Maximum likelihood estimation of the unknown parameters for complete data is discussed via the EM algorithm, and simulation results assessing the performance of the estimation methods are presented. Finally, two applied examples illustrate the flexibility and appropriateness of the distribution.
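
    A minimal sketch of the compounding construction for one member of the family (the geometric special case of the power series) is given below; the parameter names, the synthetic sample, and the use of direct numerical optimisation instead of the EM algorithm described above are illustrative assumptions.

```python
# Sketch: exponentiated half-logistic compounded with a geometric power series,
# fitted by direct maximum likelihood on synthetic data (illustrative only).
import numpy as np
from scipy.optimize import minimize

def ehl_cdf(x, alpha, lam):
    """Exponentiated half-logistic CDF."""
    return ((1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))) ** alpha

def ehl_pdf(x, alpha, lam):
    t = (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))
    dt = 2 * lam * np.exp(-lam * x) / (1 + np.exp(-lam * x)) ** 2
    return alpha * t ** (alpha - 1) * dt

def ehl_geometric_pdf(x, alpha, lam, theta):
    """Compound pdf from C(theta*G)/C(theta) with C(theta) = theta/(1-theta)."""
    G = ehl_cdf(x, alpha, lam)
    return ehl_pdf(x, alpha, lam) * (1 - theta) / (1 - theta * G) ** 2

def neg_loglik(params, x):
    alpha, lam, theta = params
    if alpha <= 0 or lam <= 0 or not 0 < theta < 1:
        return np.inf
    return -np.sum(np.log(ehl_geometric_pdf(x, alpha, lam, theta)))

# Synthetic positive sample standing in for a claims data set.
rng = np.random.default_rng(0)
data = rng.gamma(shape=2.0, scale=1.5, size=300)

fit = minimize(neg_loglik, x0=[1.0, 1.0, 0.5], args=(data,), method="Nelder-Mead")
print("MLE (alpha, lambda, theta):", fit.x.round(3))
```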

  • Chapter (No Access)

    Chapter 11: Deep Learning in Insurance: An Incremental Deep Learning Approach for Pricing Prediction Strategy in the Insurance Industry

    Deep learning is a type of machine learning known for its competitive advantage in discovering complex relationships in all data types. However, insurance applications of deep learning have focused on damage detection and churn prediction, while premium prediction has received little attention from previous researchers. This study aims to build an incremental deep learning model to predict insurance premiums. The model contributes to the previously studied Usage-Based Insurance (UBI) concept. We propose a deep learning model consistent with the UBI concept that considers the available factors affecting the premium in order to predict it. The proposed model consists of two parts: part one is a Convolutional Neural Network (CNN) for deep feature extraction, and part two is Support Vector Regression (SVR) built on the extracted deep features to predict the premium. The combined model is called CNN-SVR. The dataset was collected from an insurance company to train the proposed model and to evaluate its performance against the classical models adopted by previous researchers, namely Neural Networks (NN), Random Forests (RF), Decision Trees (DT), Linear Regression, and Support Vector Regression (SVR). Performance was evaluated using several metrics together with the execution time needed to add a new data point to the model. The selected metrics are the Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), Explained Variance (EV), Correlation Coefficient (R), and the t-test. The proposed CNN-SVR model reported the best averages among the compared models: 1363.935 MSE, 36.838 RMSE, 18.774 MAE, 11.940 MAPE, 0.957 R, and 1 − p values close to 1 in the t-test. The proposed incremental model also reported a faster execution time than the classical models, which must be fully retrained to add a new data point. The study concludes that the CNN-SVR model outperforms the other models in both prediction performance and execution time, which supports the hypothesis. A possible future direction for this study is to use a larger dataset with more factors affecting the premium, for a better contribution to UBI and its predictions.
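
    The two-part pipeline can be sketched as follows; the network architecture, the feature dimension, and the synthetic data are illustrative assumptions rather than the authors' configuration, and training of the CNN is omitted for brevity.

```python
# Sketch of the two-part CNN -> SVR pipeline (synthetic data, assumed architecture).
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVR

class FeatureCNN(nn.Module):
    """Small 1D CNN mapping a vector of policy/usage factors to deep features."""
    def __init__(self, n_deep: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),
            nn.Flatten(),
            nn.Linear(8 * 4, n_deep),
        )

    def forward(self, x):                    # x: (batch, n_factors)
        return self.net(x.unsqueeze(1))      # add a channel dimension for Conv1d

# Synthetic stand-in for the insurance data set (10 usage/policy factors).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)).astype(np.float32)
y = 120 * X[:, 0] + 40 * X[:, 1] + rng.normal(scale=10, size=500)

# Part one: deep feature extraction (training of the CNN is omitted here).
cnn = FeatureCNN()
with torch.no_grad():
    deep_features = cnn(torch.from_numpy(X)).numpy()

# Part two: SVR regresses the premium on the extracted deep features.
svr = SVR(kernel="rbf", C=10.0)
svr.fit(deep_features, y)
rmse = np.sqrt(np.mean((svr.predict(deep_features) - y) ** 2))
print("in-sample RMSE:", round(float(rmse), 2))
```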

  • Chapter (No Access)

    Stochastic models for claims reserving in insurance business

    Insurance companies have to build a reserve for their future payments, which is usually done with deterministic methods that give only a point estimate. In this paper two semi-stochastic methods are presented, along with a more sophisticated hierarchical Bayesian model employing the MCMC technique. These models allow us to determine quantiles and confidence intervals of the reserve, which can be more reliable than just a point estimate. A form of cross-validation is also used to test the models.
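
    A minimal sketch of the underlying idea, producing a reserve distribution (quantiles and intervals) rather than a single point estimate, is given below; the tiny run-off triangle and the simple resampling of link ratios are illustrative stand-ins for the semi-stochastic and Bayesian models of the paper.

```python
# Sketch: chain-ladder reserve with a simple resampling of link ratios to
# obtain quantiles of the reserve instead of only a point estimate.
import numpy as np

rng = np.random.default_rng(1)

# Cumulative run-off triangle: rows = accident years, cols = development years.
tri = np.array([
    [100.0, 160.0, 185.0, 195.0],
    [110.0, 170.0, 200.0, np.nan],
    [120.0, 185.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

def reserve(triangle, rng=None):
    """Chain-ladder reserve; if rng is given, resample the individual link ratios."""
    t = triangle.copy()
    n = t.shape[1]
    for j in range(n - 1):
        known = ~np.isnan(t[:, j + 1])
        ratios = t[known, j + 1] / t[known, j]
        if rng is not None:                          # semi-stochastic step
            ratios = rng.choice(ratios, size=ratios.size, replace=True)
        f = ratios.mean()
        missing = np.isnan(t[:, j + 1]) & ~np.isnan(t[:, j])
        t[missing, j + 1] = t[missing, j] * f        # project the next column
    latest = np.array([row[~np.isnan(row)][-1] for row in triangle])
    return (t[:, -1] - latest).sum()

point = reserve(tri)
sims = np.array([reserve(tri, rng) for _ in range(2000)])
print(f"point estimate: {point:.1f}")
print("5%/50%/95% quantiles:", np.percentile(sims, [5, 50, 95]).round(1))
```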

  • Chapter (No Access)

    Location as risk factor: Spatial analysis of an insurance data-set

    Our aim was to examine the territorial dependence of risk for household insurance. Besides the classical risk factors such as type of wall, type of building, etc., we consider the location associated with each contract. A Markov random field model seems appropriate to describe the spatial effect. Basically there are two ways of fitting the model: we can fit a GLM to the claim counts with the classical risk factors and, regarding their effects as fixed, fit the spatial model; alternatively, we can estimate the effects of all covariates (including location) jointly. Although the latter approach may seem more accurate, its high complexity and computational demands make it infeasible in our case. To overcome the disadvantages of estimating the classical and the spatial risk factors separately, we proceed as follows: first fit a GLM for the non-spatial covariates, then fit the spatial model by MCMC; next refit the GLM keeping the obtained spatial effect fixed, and afterwards refit the spatial model as well. This procedure is iterated several times; we achieve a much better fit by performing eight iterations.
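
    The alternating scheme can be sketched roughly as follows; the data and neighbourhood structure are synthetic, and the MRF/MCMC step is replaced by a simple neighbour-averaging update to keep the sketch short.

```python
# Sketch: Poisson GLM for classical risk factors alternated with a crude
# spatial-effect update (a stand-in for the MRF fitted by MCMC in the chapter).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n, n_regions = 2000, 20
X = sm.add_constant(rng.normal(size=(n, 3)))        # classical risk factors
region = rng.integers(0, n_regions, size=n)         # location of each contract
true_spatial = np.linspace(-0.5, 0.5, n_regions)
y = rng.poisson(np.exp(X @ np.array([-1.0, 0.3, -0.2, 0.1]) + true_spatial[region]))

spatial = np.zeros(n_regions)
for it in range(8):                                 # eight iterations, as in the chapter
    # 1) GLM for the non-spatial covariates, spatial effect entering as an offset.
    glm = sm.GLM(y, X, family=sm.families.Poisson(), offset=spatial[region]).fit()
    eta = X @ glm.params
    # 2) Crude spatial update: per-region log ratio of observed to expected counts,
    #    shrunk towards the mean of neighbouring regions.
    for r in range(n_regions):
        m = region == r
        raw = np.log((y[m].sum() + 0.5) / (np.exp(eta[m]).sum() + 0.5))
        neigh = [k for k in (r - 1, r + 1) if 0 <= k < n_regions]
        spatial[r] = 0.7 * raw + 0.3 * np.mean(spatial[neigh])

print("GLM coefficients:", glm.params.round(2))
print("corr(estimated, true spatial effect):",
      np.corrcoef(spatial, true_spatial)[0, 1].round(2))
```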

  • Chapter (No Access)

    A Hierarchical Bayesian model to predict belatedly reported claims in insurances

    Latent, that is, Incurred But Not Reported (IBNR), claims heavily influence the calculation of an insurer's reserves, necessitating accurate estimation of such claims. The highly diverse estimates of the latent claim amount produced by traditional methods (chain-ladder, etc.) underline the need for more sophisticated modelling. We aim to predict the number of latent claims not yet reported, that is, to continue the so-called run-off triangle by filling in the lower triangle of the delayed-claims matrix. To do this, the dynamics of claim occurrence and reporting tendency are specified in a hierarchical Bayesian model. The complexity of the model requires an algorithmic estimation method, which we carry out along the lines of the Bayesian paradigm using the MCMC technique. The predictive strength of the model against the subsequently disclosed claims is analysed by cross-validation, and simulations serve to check model stability. Bootstrap methods are also available, as we have the full record of individual claims at our disposal; these methods are used to assess the variability of the estimated structural parameters.
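
    A minimal sketch of completing a run-off triangle with an MCMC-fitted Bayesian model is given below; the tiny triangle, the flat cross-classified Poisson structure, and the hand-rolled Metropolis sampler are illustrative simplifications of the hierarchical model described above.

```python
# Sketch: complete a run-off triangle of incremental claim counts with a
# random-walk Metropolis sampler for N_ij ~ Poisson(exp(alpha_i + beta_j)).
import numpy as np

rng = np.random.default_rng(3)

# Incremental claim counts; NaN marks the unobserved (not yet reported) lower triangle.
N = np.array([
    [60.0, 30.0, 10.0, 5.0],
    [70.0, 35.0, 12.0, np.nan],
    [65.0, 33.0, np.nan, np.nan],
    [80.0, np.nan, np.nan, np.nan],
])
obs = ~np.isnan(N)
n_acc, n_dev = N.shape

def log_post(alpha, beta):
    """Poisson log-likelihood over observed cells plus weak normal priors."""
    lam = np.exp(alpha[:, None] + beta[None, :])
    ll = np.where(obs, N * np.log(lam) - lam, 0.0).sum()
    return ll - 0.5 * (alpha @ alpha + beta @ beta) / 100.0

alpha = np.log(np.nanmean(N, axis=1))        # rough starting values
beta = np.zeros(n_dev)                       # beta[0] = 0 for identifiability
lp = log_post(alpha, beta)
draws = []
for step in range(20000):
    prop_a = alpha + rng.normal(scale=0.05, size=n_acc)
    prop_b = beta + rng.normal(scale=0.05, size=n_dev)
    prop_b[0] = 0.0
    lp_new = log_post(prop_a, prop_b)
    if np.log(rng.uniform()) < lp_new - lp:          # Metropolis accept/reject
        alpha, beta, lp = prop_a, prop_b, lp_new
    if step > 5000 and step % 20 == 0:               # thin after burn-in
        lam = np.exp(alpha[:, None] + beta[None, :])
        draws.append(np.where(obs, 0.0, lam).sum())  # expected latent claims

draws = np.array(draws)
print("posterior mean of unreported claims:", draws.mean().round(1))
print("90% credible interval:", np.percentile(draws, [5, 95]).round(1))
```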

  • Chapter (No Access)

    Insurance Applications of Neural Networks, Fuzzy Logic, and Genetic Algorithms

    The insurance industry has numerous areas with potential applications for neural networks, fuzzy logic, and genetic algorithms. Given this potential and the impetus these technologies have gained during the last decade, a number of studies have focused on insurance applications. This chapter presents an overview of these studies. The specific purposes of the chapter are twofold: first, to review the insurance applications of these technologies so as to document the unique characteristics of insurance as an application area; and second, to document the extent to which these technologies have been employed.