  • Article (No Access)

    PROBABILISTIC ROUGH SETS CHARACTERIZED BY FUZZY SETS

    Theories of fuzzy sets and rough sets have emerged as two major mathematical approaches for managing uncertainty that arises from inexact, noisy, or incomplete information. Both generalize classical set theory for modelling vagueness and uncertainty, and integrating them is expected to yield a model of uncertainty stronger than either alone. The present work is an attempt in this direction: we study fuzziness in the probabilistic rough set model and characterize probabilistic rough sets by fuzzy sets. First, we show how the variable precision lower and upper approximations of a probabilistic rough set can be generalized from the vantage point of the cuts and strong cuts of the fuzzy set determined by the rough membership function. As a result, the properties of the (strong) cuts of a fuzzy set can be used conveniently to describe the features of variable precision rough sets. Moreover, we give a measure of fuzziness, a fuzzy entropy induced by the roughness of a probabilistic rough set, and establish some characterizations of this measure. For three well-known entropy functions, including the Shannon function, we show that the finer the information granulation, the less the fuzziness (fuzzy entropy) in a rough set. The superiority of fuzzy entropy to Pawlak's accuracy measure is illustrated with examples. Finally, the fuzzy entropy of a rough classification is defined via the fuzzy entropy of the corresponding rough sets, and one possible application of it is shown to lie in measuring the inconsistency of a decision table.
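
    As a concrete illustration, here is a minimal sketch, assuming a toy universe, an invented partition, and a hypothetical target set X rather than anything from the paper, of the rough membership function mu_X(x) = |[x] ∩ X| / |[x]| and a normalized Shannon-style fuzzy entropy. On this toy data the finer granulation gives the lower entropy, in line with the monotonicity result stated above.

```python
# Minimal sketch (toy data, not the paper's): rough membership and
# Shannon-style fuzzy entropy of the induced fuzzy set.
from math import log2

def rough_membership(partition, target):
    """mu_X(x) = |[x] & X| / |[x]|, where [x] is x's equivalence class."""
    mu = {}
    for block in partition:
        overlap = len(block & target) / len(block)
        for x in block:
            mu[x] = overlap
    return mu

def shannon_fuzzy_entropy(mu):
    """Normalized Shannon fuzzy entropy of the fuzzy set given by mu."""
    def h(p):
        if p in (0.0, 1.0):
            return 0.0
        return -(p * log2(p) + (1 - p) * log2(1 - p))
    return sum(h(p) for p in mu.values()) / len(mu)

target = {0, 1, 2, 3}                    # hypothetical set X to approximate
coarse = [{0, 1, 2, 3, 4, 5, 6, 7}]      # one big granule
fine = [{0, 1}, {2, 3, 4}, {5, 6, 7}]    # a finer granulation

for name, partition in (("coarse", coarse), ("fine", fine)):
    mu = rough_membership(partition, target)
    print(name, round(shannon_fuzzy_entropy(mu), 4))  # coarse: 1.0, fine: ~0.3444
```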

  • Article (No Access)

    CLUSTERING OF DECISION TABLES TOWARD ROUGH SET-BASED GROUP DECISION AID

    In order to analyze the distribution of individual opinions (decision rules) in a group, clustering of decision tables is proposed. Agglomerative hierarchical clustering (AHC) of decision tables has been examined previously, but its result does not always optimize a given criterion. We therefore develop non-hierarchical clustering techniques for decision tables. In order to treat positive and negative evaluations of a common profile, we use a vector of rough membership values to represent an individual's opinion about a profile. Using rough membership values, we develop a K-means method as well as fuzzy c-means methods for clustering decision tables, as sketched below. We examine the proposed methods by clustering real-world decision tables obtained from a questionnaire survey.
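
    A minimal sketch of the K-means variant only, assuming invented rough membership data: the respondents, profiles, and values below are hypothetical, not the questionnaire data. Each respondent's decision table is summarized as a vector of rough membership values, one per profile, and the vectors are clustered in the usual way.

```python
# Minimal sketch (hypothetical data): K-means over rough membership vectors.
import numpy as np

# 6 respondents x 5 profiles; entry (i, j) is respondent i's rough
# membership value (degree of positive evaluation) for profile j.
X = np.array([
    [0.9, 0.8, 0.1, 0.2, 0.9],
    [1.0, 0.7, 0.0, 0.3, 0.8],
    [0.8, 0.9, 0.2, 0.1, 1.0],
    [0.1, 0.2, 0.9, 0.8, 0.1],
    [0.0, 0.3, 1.0, 0.7, 0.2],
    [0.2, 0.1, 0.8, 0.9, 0.0],
])

def kmeans(X, centers, iters=100):
    for _ in range(iters):
        # assign each vector to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Deterministic initialization for the demo: one seed vector per apparent group.
labels, centers = kmeans(X, centers=X[[0, 3]].copy())
print(labels)  # [0 0 0 1 1 1]: two opinion groups emerge
```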

  • Article (No Access)

    Knowledge-Based Expert System Development and Validation with Petri Nets

    Expert systems (ESs) are complex information systems that are expensive to build and difficult to validate. Numerous knowledge representation strategies, such as rules, semantic networks, frames, objects, and logical expressions, have been developed to provide a high-level abstraction of a system. Rules are the most commonly used form of knowledge representation, and they are derived from popular techniques such as decision trees and decision tables. Despite their huge popularity, decision trees and decision tables are static and cannot model the dynamic requirements of a system. In this study, we propose Petri Nets (PNs) for dynamic system representation and rule derivation. PNs, with their graphical and precise nature and their firm mathematical foundation, are especially useful for building ESs that exhibit a variety of situations, including sequential execution, conflict, concurrency, synchronisation, merging, confusion, and prioritisation. We demonstrate the application of our methodology in the design and development of a medical diagnostic expert system.
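
    To make the firing semantics concrete, here is a minimal sketch of a place/transition net with the standard enabling and firing rule. The places, transition, and diagnostic fragment are invented for illustration and are not the chapter's medical system.

```python
# Minimal sketch (invented example): a place/transition Petri net with the
# standard firing rule; a rule fires only when all input places are marked.
class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (inputs, outputs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} is not enabled")
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + n

# Toy diagnostic fragment: both symptoms must be recorded before the rule fires.
net = PetriNet({"fever": 1, "rash": 1, "diagnosis": 0})
net.add_transition("rule_measles", {"fever": 1, "rash": 1}, {"diagnosis": 1})
print(net.enabled("rule_measles"))  # True: both input places are marked
net.fire("rule_measles")
print(net.marking)                  # {'fever': 0, 'rash': 0, 'diagnosis': 1}
```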

  • Chapter (No Access)

    Chapter 21: Data Mining Applications in Accounting and Finance Context

    This chapter gives examples of applying several current data mining approaches and alternative models in an accounting and finance context, such as predicting bankruptcy using US, Korean, and Chinese capital market data. Big data in the accounting and finance context is a good fit for data analytic tools such as data mining. Our previous study also empirically tested Japanese capital market data and found similar prediction rates, although overall prediction rates depend on the country and time period (Mihalovic, 2016). These results are an improvement on previous bankruptcy prediction studies using traditional probit or logit analysis or multiple discriminant analysis. The recent survival model shows similar prediction rates in bankruptcy studies, but it requires longitudinal data. Because of advances in computer technology, data mining approaches are now easier to apply. In addition, current data mining methods can be applied to other accounting and finance contexts such as auditor changes, audit opinion prediction, and internal control weakness studies. Our first paper applies 13 data mining approaches to predict bankruptcy after the implementation of the Sarbanes–Oxley Act (SOX, 2002), using 2008–2009 US data with 13 financial ratios plus internal control weakness, dividend payout, and market return variables. Our second paper applies a Multiple Criteria Linear Programming data mining approach to Korean data. Our last paper builds bankruptcy prediction models on Chinese firm data with several data mining tools and compares them with traditional logit analysis. The Analytic Hierarchy Process and fuzzy sets can also be applied as alternatives to data mining tools in accounting and finance studies, and natural language processing, as part of the artificial intelligence domain, may be used in accounting and finance in the future (Fisher et al., 2016).
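
    As a hedged illustration only, the sketch below contrasts a traditional logit model with one representative data mining method (a decision tree) on synthetic stand-in data with 13 features, echoing the 13 financial ratios mentioned above. It uses scikit-learn and is not the chapter's models or samples.

```python
# Hedged sketch (synthetic data, not the chapter's US/Korean/Chinese samples):
# logit vs. one data mining method on 13 stand-in "financial ratio" features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Bankruptcies are rare, so the synthetic classes are imbalanced (90/10).
X, y = make_classification(n_samples=1000, n_features=13, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = [
    ("logit", LogisticRegression(max_iter=1000)),
    ("decision tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(model.score(X_te, y_te), 3))
```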

  • Chapter (No Access)

    A Novel Attribute Recognition Algorithm for Radar Emitter Based on Knowledge Reduction

    The emitter recognition problem plays an important role in electronic countermeasure intelligence processing. In practice, emitter information detected by a multisensor system is uncertain, ambiguous, and contradictory. Rough set theory is a relatively new soft computing tool for dealing with vagueness and uncertainty, and it is regarded as a leading-edge field. To solve the emitter recognition problem, rough set theory is introduced and a new emitter recognition algorithm is presented in this paper; a decision-making method is also discussed. Finally, an application example is given which demonstrates that the new method is accurate and effective. Moreover, a computer simulation of recognizing the emitter class is carried out and compared with a classical statistical recognition algorithm. Experimental results demonstrate the excellent performance of the new recognition method.
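
    As an illustration of the knowledge reduction step, here is a minimal sketch that finds minimal attribute reducts of a decision table: a subset of condition attributes is acceptable if objects that agree on it never carry different decisions. The emitter table (RF band, PRI class, pulse width) is invented, not the paper's data.

```python
# Minimal sketch (invented emitter table): rough-set knowledge reduction by
# exhaustive search for the smallest decision-preserving attribute subsets.
from itertools import combinations

# Hypothetical observations: (RF band, PRI class, pulse width) -> emitter type
table = [
    ("low",  "short", "narrow", "radar_A"),
    ("low",  "long",  "narrow", "radar_B"),
    ("high", "short", "wide",   "radar_C"),
    ("high", "long",  "wide",   "radar_C"),
]
n_attrs = 3

def consistent(attrs):
    """True if objects equal on `attrs` never carry different decisions."""
    seen = {}
    for row in table:
        key = tuple(row[a] for a in attrs)
        if seen.setdefault(key, row[-1]) != row[-1]:
            return False
    return True

# Search subsets from smallest to largest; the first consistent size gives
# the minimal reducts.
for size in range(1, n_attrs + 1):
    reducts = [s for s in combinations(range(n_attrs), size) if consistent(s)]
    if reducts:
        print("minimal reducts (attribute index sets):", reducts)  # [(0, 1), (1, 2)]
        break
```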