  • Assessing Intellectual Capital Using the Evidential Reasoning Approach: Application on Small and Medium Enterprises

    Intangible assets and intellectual capital (IC) are among the fundamental sources of competitive advantage between enterprises, particularly SMEs. A variety of approaches to assessing and evaluating enterprises' intellectual capital have been proposed so far. None of them, however, jointly addresses qualitative indicators, their synthesis with quantitative indicators, and the lack of access to evidence during evaluation. In this article, a framework for the assessment of intellectual capital is proposed using the evidential reasoning approach, developed on the basis of a hierarchical model and the Dempster–Shafer (D–S) theory of evidence. The framework is illustrated through its application to the assessment of the intellectual capital of IT-sector SMEs in one of Iran's regions.
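
    The abstract does not spell out the underlying mechanics, but the core building block of the evidential reasoning approach is the combination of belief (mass) functions under Dempster–Shafer theory, where mass left on the whole frame of discernment models exactly the lack of access to evidence the authors mention. Below is a minimal Python sketch of Dempster's rule of combination; the assessment grades, the indicator, and the mass values are hypothetical illustrations, not taken from the paper, and the full ER algorithm additionally weights attributes across the hierarchy, which is omitted here.

    ```python
    from itertools import product

    # Frame of discernment: assessment grades for an IC indicator.
    # The grade names are illustrative, not taken from the paper.
    FRAME = frozenset({"poor", "average", "good", "excellent"})

    def combine(m1, m2):
        """Dempster's rule of combination for two basic probability
        assignments (BPAs) defined on subsets of the same frame.

        Each BPA maps frozenset -> mass, with masses summing to 1.
        Mass on the full frame represents ignorance (missing evidence).
        """
        combined = {}
        conflict = 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("Totally conflicting evidence; cannot combine.")
        norm = 1.0 - conflict
        return {s: m / norm for s, m in combined.items()}

    # Two hypothetical pieces of evidence about a "human capital"
    # indicator. The second assessor leaves 30% of the belief on the
    # whole frame, modelling incomplete access to evidence.
    assessor_1 = {
        frozenset({"good"}): 0.6,
        frozenset({"good", "excellent"}): 0.3,
        FRAME: 0.1,
    }
    assessor_2 = {
        frozenset({"good"}): 0.5,
        frozenset({"average", "good"}): 0.2,
        FRAME: 0.3,
    }

    fused = combine(assessor_1, assessor_2)
    for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
        print(sorted(subset), round(mass, 3))
    ```

    Note how the mass left on the full frame shrinks as corroborating evidence accumulates: this is how such a framework distinguishes genuine disagreement between indicators from plain missing information.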

  • Data discretization impact on deep learning for missing value imputation of continuous data

    In many fields of data analysis, such as machine learning and deep learning, missing data is a common problem. Missing values must be addressed because they can adversely affect the accuracy and effectiveness of predictive models. This research investigates how data discretization affects deep learning methods for imputing missing values in datasets with continuous features. The authors propose a method for imputing missing values with deep neural networks (DNNs), called the extravagant expectation maximization-deep neural network (EEM-DNN). The approach first discretizes continuous features into separate intervals, which allows missing value imputation to be treated as a classification task in which each missing value is assigned to one of the discrete intervals. A DNN designed specifically for imputation is then trained on the discretized data, with expectation maximization concepts incorporated into the network architecture so that the network iteratively refines its imputation predictions. Comprehensive experiments on several datasets from different fields gauge the efficacy of the proposed strategy, comparing EEM-DNN with other imputation approaches, including traditional imputation techniques and deep learning methods without data discretization. The findings show that data discretization significantly enhances imputation accuracy: in terms of both imputation accuracy and prediction performance on downstream tasks, EEM-DNN consistently outperforms the alternatives. The study also examines whether different discretization techniques affect the overall imputation process and finds that the trade-off between bias and variance in the imputed data depends on the discretization method selected, highlighting the importance of choosing a discretization approach suited to the specific properties of the dataset.
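
    The abstract does not specify the EEM-DNN architecture, so the sketch below only illustrates the core idea it describes: discretize a continuous feature into intervals and treat imputation as classification over those intervals. The synthetic data, the use of scikit-learn's KBinsDiscretizer and MLPClassifier as stand-ins for the authors' network, and the bin-midpoint decoding step are all assumptions; the EM-style iterative refinement is omitted.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import KBinsDiscretizer

    rng = np.random.default_rng(0)

    # Toy data: two observed continuous features and a correlated
    # target column with ~15% of its values missing. Entirely
    # synthetic -- the paper's datasets differ.
    n = 2000
    X_full = rng.normal(size=(n, 2))
    target = X_full @ np.array([1.5, -0.8]) + rng.normal(scale=0.3, size=n)
    mask_missing = rng.random(n) < 0.15

    # Step 1: discretize the continuous target into interval "classes"
    # using observed values only. Equal-frequency bins are used here;
    # the choice of strategy is exactly the bias/variance trade-off
    # the abstract mentions.
    disc = KBinsDiscretizer(n_bins=8, encode="ordinal", strategy="quantile")
    y_obs_bins = disc.fit_transform(
        target[~mask_missing].reshape(-1, 1)
    ).ravel().astype(int)

    # Step 2: treat imputation as classification -- predict the
    # interval of each missing value from the observed features.
    # A small MLP stands in for the paper's (unspecified) deep model.
    clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0)
    clf.fit(X_full[~mask_missing], y_obs_bins)

    # Step 3: impute each missing value with its predicted interval's
    # midpoint (one simple way to map a class back to a number).
    pred_bins = clf.predict(X_full[mask_missing])
    edges = disc.bin_edges_[0]
    midpoints = (edges[:-1] + edges[1:]) / 2
    imputed = midpoints[pred_bins]

    rmse = np.sqrt(np.mean((imputed - target[mask_missing]) ** 2))
    print(f"RMSE of discretization-as-classification imputation: {rmse:.3f}")
    ```

    Swapping the strategy argument (e.g., "uniform" vs. "quantile") or the bin count changes how coarsely the imputed values are quantized, which is one concrete way to reproduce the bias/variance sensitivity to the discretization method that the study reports.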