  • Article (No Access)

    DATA DISCRETIZATION FOR THE TRANSFER ENTROPY IN FINANCIAL MARKET

    Recently, an information-theoretic concept, the transfer entropy, was introduced by Schreiber. It aims to quantify, in a nonparametric and explicitly nonsymmetric way, the flow of information between two time series. This model-free approach, based on Shannon entropy, in principle allows us to detect statistical dependencies of all types, i.e., linear and nonlinear temporal correlations. However, the transfer entropy is always computed on data that have been discretized, typically by coarse graining into three partitions, so it is natural to investigate how the discretization of the two series affects the transfer entropy. In this paper, we analyze results based on data generated by linear models and by ARFIMA models, as well as on a dataset consisting of seven indices over the period 1992–2002. The results show that the higher the degree of data discretization, the larger the value of the transfer entropy; the direction of the information flow, however, is unchanged across degrees of data discretization.
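
    The abstract does not specify the estimator, so the following is a minimal sketch of the general technique, assuming quantile-based coarse graining and a plug-in (histogram) estimate of Schreiber's transfer entropy with first-order histories; the function names and the synthetic driver-response example are illustrative assumptions, not the paper's exact setup. Varying the number of partitions q reproduces the kind of comparison the abstract describes: the estimate should grow with q while the dominant direction of flow stays the same.

    ```python
    import numpy as np
    from collections import Counter

    def discretize(series, n_bins=3):
        # Coarse-grain a continuous series into n_bins symbols via quantile edges.
        edges = np.quantile(series, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(series, edges)

    def transfer_entropy(x, y, n_bins=3):
        # Plug-in estimate of T_{Y->X} in bits with first-order histories:
        # T = sum p(x1, x0, y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ].
        xs, ys = discretize(x, n_bins), discretize(y, n_bins)
        x1, x0, y0 = xs[1:], xs[:-1], ys[:-1]
        n = len(x1)
        c_xxy = Counter(zip(x1, x0, y0))   # counts of (x_{t+1}, x_t, y_t)
        c_xy = Counter(zip(x0, y0))        # counts of (x_t, y_t)
        c_xx = Counter(zip(x1, x0))        # counts of (x_{t+1}, x_t)
        c_x = Counter(x0)                  # counts of x_t
        te = 0.0
        for (a, b, c), cnt in c_xxy.items():
            # The sample-size factors cancel, so raw counts can be used directly.
            te += (cnt / n) * np.log2(cnt * c_x[b] / (c_xx[a, b] * c_xy[b, c]))
        return te

    # Y drives X with a one-step lag, so T_{Y->X} should exceed T_{X->Y},
    # and both estimates should grow as the partition gets finer.
    rng = np.random.default_rng(0)
    y = rng.standard_normal(10_000)
    x = np.roll(y, 1) + 0.5 * rng.standard_normal(10_000)
    for q in (3, 5, 8):
        print(q, transfer_entropy(x, y, q), transfer_entropy(y, x, q))
    ```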

  • Article (No Access)

    Data discretization impact on deep learning for missing value imputation of continuous data

    In many areas of data analysis, such as machine learning and deep learning, missing data is a common problem. Missing values must be addressed because they can negatively affect the accuracy and effectiveness of predictive models. This research investigates how data discretization affects deep learning methods for imputing missing values in datasets with continuous features. We propose a method for imputing missing values with deep neural networks (DNNs), called the extravagant expectation maximization-deep neural network (EEM-DNN). The approach first discretizes continuous features into separate intervals, which allows missing-value imputation to be treated as a classification task, with each missing value assigned to one of the discrete classes. A DNN designed specifically for imputation is then trained on the discretized data. Expectation-maximization principles are incorporated into the network architecture, so that the network iteratively refines its imputation predictions. We run comprehensive experiments on several datasets from different fields to gauge the efficacy of the proposed method, comparing EEM-DNN against other imputation approaches, including traditional imputation techniques and deep learning methods without data discretization. The findings show that data discretization significantly enhances imputation accuracy: in terms of both imputation accuracy and prediction performance on downstream tasks, EEM-DNN consistently outperforms the alternative methods. We also examine whether different discretization techniques affect the overall imputation process and find that the trade-off between bias and variance in the imputed data depends on the discretization method selected, which highlights the importance of choosing a discretization approach suited to the particular properties of the dataset.
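
    The listing gives no architectural details for EEM-DNN, so the sketch below is only one plausible reading of the pipeline described above: quantile discretization of the target feature, imputation recast as bin classification, and an EM-style loop that alternates between predicting missing bins and retraining on the completed data. The scikit-learn MLPClassifier stands in for the dedicated DNN, and the toy data, bin count, and variable names are all illustrative assumptions rather than the authors' implementation.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Toy data: feature f1 is correlated with f0; 20% of f1 is "missing".
    n = 2000
    f0 = rng.standard_normal(n)
    f1 = 0.8 * f0 + 0.6 * rng.standard_normal(n)
    mask = rng.random(n) < 0.2          # True where f1 is treated as missing
    n_bins = 5                          # assumed bin count, not from the paper

    # Step 1: discretize f1 into quantile bins (fit on observed rows only),
    # turning imputation into classification over bin labels.
    edges = np.quantile(f1[~mask], np.linspace(0, 1, n_bins + 1)[1:-1])
    labels = np.digitize(f1, edges)
    centers = np.array([f1[~mask][labels[~mask] == k].mean() for k in range(n_bins)])

    # Step 2: crude initial fill -- assign missing rows the most common bin.
    labels_imp = labels.copy()
    labels_imp[mask] = np.bincount(labels[~mask]).argmax()

    # Step 3: EM-style refinement. E-step: predict bins for the missing rows;
    # M-step: retrain the classifier on the completed (observed + imputed) data.
    for _ in range(5):
        clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
        clf.fit(f0.reshape(-1, 1), labels_imp)
        new = clf.predict(f0[mask].reshape(-1, 1))
        if np.array_equal(new, labels_imp[mask]):
            break                        # imputed assignments have stabilized
        labels_imp[mask] = new

    # Step 4: map imputed bins back to continuous values via bin centers.
    f1_imputed = np.where(mask, centers[labels_imp], f1)
    print("MAE on masked entries:", np.abs(f1_imputed[mask] - f1[mask]).mean())
    ```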