Recently, an information-theoretic concept, transfer entropy, was introduced by Schreiber. It aims to quantify, in a nonparametric and explicitly asymmetric way, the flow of information between two time series. This model-free approach, based on Shannon entropy, in principle allows us to detect statistical dependencies of all types, i.e., both linear and nonlinear temporal correlations. In practice, however, the transfer entropy is always computed on data that have been discretized, for example coarse-grained into three partitions. We are therefore interested in how the discretization of the two series affects the transfer entropy. In this paper, we analyze results based on data generated by a linear model and an ARFIMA model, as well as on a dataset of seven indices covering the period 1992–2002. The results show that the higher the degree of data discretization, the larger the value of the transfer entropy, while the direction of the information flow remains unchanged as the degree of discretization varies.
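To make the coarse-graining step concrete, the sketch below symbolizes two series into three bins and computes a plug-in transfer entropy estimate with one-step histories. It is our own minimal illustration, not the paper's code; the function and variable names (coarse_grain, transfer_entropy, n_bins) and the equal-frequency binning are assumptions made for the example.

```python
# Minimal sketch: estimate T(Y -> X) after coarse-graining each series into 3 symbols.
import numpy as np
from collections import Counter

def coarse_grain(x, n_bins=3):
    """Discretize a continuous series into n_bins symbols via equal-frequency bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def transfer_entropy(x, y, n_bins=3):
    """Plug-in estimate of T(Y -> X) in bits, with one-step histories, on symbolized data."""
    xs, ys = coarse_grain(x, n_bins), coarse_grain(y, n_bins)
    triples = Counter(zip(xs[1:], xs[:-1], ys[:-1]))   # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(xs[:-1], ys[:-1]))          # (x_t, y_t)
    pairs_xx = Counter(zip(xs[1:], xs[:-1]))           # (x_{t+1}, x_t)
    singles = Counter(xs[:-1])                         # x_t
    n = len(xs) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_xy[(x0, y0)]            # p(x_{t+1} | x_t, y_t)
        p_cond_marg = pairs_xx[(x1, x0)] / singles[x0]  # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_full / p_cond_marg)
    return te

# Toy example: Y drives X with a one-step lag, so T(Y -> X) should exceed T(X -> Y).
rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = np.empty_like(y)
x[0] = rng.normal()
x[1:] = 0.8 * y[:-1] + 0.2 * rng.normal(size=4999)
print("T(Y->X):", transfer_entropy(x, y), " T(X->Y):", transfer_entropy(y, x))
```

Changing n_bins in this sketch is one simple way to probe how the degree of discretization shifts the estimated values while leaving the dominant direction of information flow intact.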
In many areas of data analysis, such as machine learning and deep learning, missing data is a common problem. Missing values must be addressed because they can negatively affect the accuracy and effectiveness of predictive models. This research investigates how data discretization affects deep learning methods for filling in missing values in datasets with continuous features. The authors propose a method for imputing missing values with deep neural networks (DNNs), called extravagant expectation maximization-deep neural network (EEM-DNN). The approach first discretizes continuous features into separate intervals, which allows missing-value imputation to be treated as a classification task, with the missing values regarded as a distinct class. A DNN designed specifically for imputation is then trained on the discretized data. Expectation-maximization concepts are incorporated into the network architecture, so the network iteratively refines its imputation predictions. The authors run comprehensive experiments on several datasets from different fields to gauge the efficacy of the proposed method. The effectiveness of EEM-DNN is compared with that of other imputation approaches, including traditional imputation techniques and deep learning methods without data discretization. The findings show that data discretization significantly improves imputation accuracy: in terms of imputation accuracy and prediction performance on downstream tasks, EEM-DNN consistently outperforms the alternative methods. The study also examines whether different discretization techniques affect the overall imputation process, finding that the trade-off between bias and variance in the imputed data depends on the discretization method selected. This highlights the importance of choosing a suitable discretization approach based on the specific properties of the dataset.
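The sketch below illustrates only the general discretize-then-classify idea described in the abstract (bin a continuous feature, then predict the bin of its missing entries from the other features). It is not the authors' EEM-DNN and omits the expectation-maximization refinement; the dataset, the scikit-learn components, and all parameter choices are assumptions made for illustration.

```python
# Hypothetical sketch of treating imputation as classification after discretization
# (not the paper's EEM-DNN): bin all features, then predict the missing feature's bin.
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
X[:, 0] += 0.7 * X[:, 1] - 0.5 * X[:, 2]      # column 0 depends on the other columns
missing = rng.random(1000) < 0.2              # pretend 20% of column 0 is missing

# Coarse-grain every feature into a small number of ordinal bins.
disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
Xd = disc.fit_transform(X)

# Treat imputation of column 0 as classification: predict its bin from the other bins.
clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
clf.fit(Xd[~missing, 1:], Xd[~missing, 0].astype(int))

# Fill the missing entries with the center of the predicted bin.
pred_bins = clf.predict(Xd[missing, 1:])
edges = disc.bin_edges_[0]
bin_centers = (edges[:-1] + edges[1:]) / 2
X_imputed = X.copy()
X_imputed[missing, 0] = bin_centers[pred_bins]
print("mean abs error of imputed values:",
      np.abs(X_imputed[missing, 0] - X[missing, 0]).mean())
```

In this simplified setting, the number of bins plays the role discussed in the abstract: coarser bins lower the variance of the imputed values but raise their bias, while finer bins do the reverse, which is why the choice of discretization scheme matters for the downstream task.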