
  • article (No Access)

    USING CULTURAL ALGORITHMS TO RE-ENGINEER LARGE-SCALE SEMANTIC NETWORKS

    Evolutionary computation has been successfully applied in a variety of problem domains and applications. In this paper we discuss the use of a specific form of evolutionary computation known as Cultural Algorithms to improve the efficiency of the subsumption algorithm in semantic networks. We identify two complementary methods of using Cultural Algorithms to solve the problem of re-engineering large-scale dynamic semantic networks in order to optimize the efficiency of subsumption: top-down and bottom-up.

    The top-down re-engineering approach improves subsumption efficiency by reducing the number of attributes that must be compared for every node, without affecting the results. We demonstrate that a Cultural Algorithm approach can be used to identify the defining attributes that are most significant for node retrieval. These results are then applied within an existing vehicle assembly process planning application, whose knowledge base is a semantic network, to improve performance and reduce the complexity of the network. It is shown that the results obtained by Cultural Algorithms are at least as good as, and in most cases better than, those obtained by the human developers. The advantage of Cultural Algorithms is especially pronounced for the more complex classes in the network.

    The goal of the bottom-up approach is to classify the input concepts into new clusters that are most efficient for subsumption and classification. While the resultant subsumption efficiency of the bottom-up approach exceeds that of the top-down approach, it does so by removing the structural relationships that made the network understandable to human observers. Like a Rete network in expert systems, the result is a compilation of only those relationships that affect subsumption. A direct comparison of the two approaches shows that bottom-up semantic network re-engineering creates a semantic network that is approximately 5 times more efficient than the top-down approach in terms of the cost of subsumption. In conclusion, we discuss these results and show that some knowledge useful to system users is lost during the bottom-up re-engineering process, and that the best approach to re-engineering a semantic network requires a combination of the two.
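The attribute-reduction idea behind the top-down approach can be illustrated with a minimal sketch, assuming subsumption reduces to attribute-set containment. The class and attribute names below are hypothetical, not taken from the paper's vehicle-assembly knowledge base:

```python
# Minimal sketch (not the paper's implementation): subsumption modeled as
# attribute-set containment. The top-down idea is that only a class's
# "defining" attributes need to be compared for each candidate node, so
# shrinking that set cuts the per-node cost without changing the outcome.

def subsumes(defining_attrs, candidate_attrs):
    """A class subsumes a candidate node if every defining attribute
    of the class is present in the candidate."""
    return defining_attrs <= candidate_attrs

# Full attribute set for a hypothetical 'Fastener' class ...
full_defs = {"has_thread", "has_head", "material", "length", "diameter"}
# ... versus a reduced defining set that a search might identify.
reduced_defs = {"has_thread", "has_head"}

node = {"has_thread", "has_head", "material", "length", "diameter", "coating"}

# Both sets classify this node identically, but the reduced set needs
# fewer comparisons per candidate node.
assert subsumes(full_defs, node) == subsumes(reduced_defs, node)
```

The sketch only shows why fewer defining attributes means cheaper subsumption checks; finding a reduced set that preserves classification across the whole network is the optimization problem the Cultural Algorithm addresses.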

  • article (No Access)

    Fusing Cooperative Technical-Specification Knowledge Components

    In this paper, the problem of fusing various logic-based technical-specification knowledge components about the same physical device or process is investigated. It is shown that most standard logical approaches to belief and knowledge fusion are not appropriate in this context, since some rules should be merged even when the knowledge components are mutually consistent. Accordingly, we discuss the various types of formulas that should be merged during a fusion process in order to prevent necessary conditions for the absence of failure from becoming sufficient conditions. This transformation is then described formally. It can be performed as an efficient preprocessing step on the knowledge components to be fused. The properties of this transformation schema are then investigated from a semantic point of view. Finally, a series of subsumption tests is proposed that prevents conditions for the absence of failure from being overridden by subsumption.

  • article (No Access)

    LEARNING FOR DYNAMIC SUBSUMPTION

    This paper presents an original dynamic subsumption technique for Boolean CNF formulae. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the formula that can be reduced by subsumption. During the derivation of the learnt clause, and at each step of the associated resolution process, checks for backward subsumption between the current resolvent and clauses from the original formula are performed efficiently. The resulting method allows the dynamic removal of literals from the original clauses. Experimental results show that integrating our dynamic subsumption technique within the state-of-the-art SAT solvers Minisat and Rsat is particularly beneficial on crafted problems.
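The backward-subsumption check described above can be sketched in a few lines of generic CNF machinery. This is an illustration of the check itself, not of Minisat or Rsat internals; literals are integers and negation is arithmetic minus:

```python
# Hedged sketch: a clause C subsumes a clause D when C's literals are a
# subset of D's. During resolution, each intermediate resolvent can be
# checked against the original clauses, which may then be strengthened
# or removed. Clause values below are arbitrary examples.

def resolve(c1, c2, var):
    """Resolve two clauses on variable `var` (positive in c1, negative
    in c2); returns the resolvent as a frozenset of literals."""
    return frozenset((c1 - {var}) | (c2 - {-var}))

def backward_subsumed(resolvent, formula):
    """Return the original clauses that the resolvent properly subsumes."""
    return [d for d in formula if resolvent <= d and resolvent != d]

formula = [frozenset({1, 2, 3}), frozenset({1, -4}), frozenset({2, 3, 5})]
resolvent = resolve(frozenset({1, 4}), frozenset({-4, 2}), 4)  # -> {1, 2}
# {1, 2} subsumes the original clause {1, 2, 3}, which can be removed.
assert backward_subsumed(resolvent, formula) == [frozenset({1, 2, 3})]
```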

  • article (No Access)

    A Computational Method for Enforcing Knowledge that Cannot be Subsumed

    We introduce a practical computational method that enforces some incoming piece of information δ in a clausal Boolean logic knowledge base Δ in such a way that δ is not strictly subsumed by the resulting knowledge base. δ is not strictly subsumed by Δ iff for every piece of information δ′ that is entailed by Δ and that is such that δ′ entails δ, we have that δ entails δ′, too. We claim that this paradigm is a useful form of reasoning for both human and artificial intelligence systems. Under a usual minimal-change policy, it amounts to computing one cardinality-maximal satisfiable subset of Δ ∪ {δ} that contains δ but that does not strictly subsume δ. Although this task is intractable in the worst case, we provide a practical method that very often proves efficient experimentally, even for large and complex knowledge bases.
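For the special case where δ is a single clause, the strict-subsumption condition can be made concrete: δ is strictly subsumed by Δ when Δ entails some proper subclause of δ. The brute-force sketch below checks entailment by model enumeration, so it only illustrates the definition on tiny inputs; the paper's method is assumed to be far more sophisticated:

```python
# Hedged sketch of strict subsumption for the clausal case. A CNF is a
# list of literal-sets; literals are ints with arithmetic negation.
from itertools import combinations, product

def entails(cnf, clause, variables):
    """Does the CNF entail the clause? True iff cnf AND NOT(clause)
    is unsatisfiable, checked by enumerating all assignments."""
    for bits in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, bits))
        sat = lambda lit: model[abs(lit)] == (lit > 0)
        if all(any(sat(l) for l in c) for c in cnf) and not any(sat(l) for l in clause):
            return False  # found a countermodel
    return True

def strictly_subsumed(cnf, delta, variables):
    """Is the clause delta strictly subsumed by the knowledge base,
    i.e. does the base entail a proper subclause of delta?"""
    lits = sorted(delta)
    return any(
        entails(cnf, set(sub), variables)
        for r in range(len(lits))
        for sub in combinations(lits, r)
    )

kb = [{1}, {-1, 2}]                               # 1, and 1 -> 2
assert strictly_subsumed(kb, {2, 3}, [1, 2, 3])   # kb entails the subclause {2}
assert not strictly_subsumed(kb, {3}, [1, 2, 3])  # no proper subclause is entailed
```

Enforcing δ without strict subsumption then amounts to dropping enough clauses from Δ ∪ {δ} that this test fails, which is the cardinality-maximization task the abstract describes.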

  • article (No Access)

    ON AUTOMATIC REASONING FOR SCHEMA INTEGRATION

    Success in database schema integration depends on the ability to capture real world semantics of the schema objects, and to reason about the semantics. Earlier schema integration approaches mainly rely on heuristics and human reasoning. In this paper, we discuss an approach to automate a significant part of the schema integration process.

    Our approach consists of three phases. An attribute hierarchy is generated in the first phase. This involves identifying relationships (equality, disjointness and inclusion) among attributes. We discuss a strategy based on user-specified semantic clustering. In the second phase, a classification algorithm based on the semantics of class subsumption is applied to the class definitions and the attribute hierarchy to automatically generate a class taxonomy. This class taxonomy represents a partially integrated schema. In the third phase, the user may employ a set of well-defined comparison operators in conjunction with a set of restructuring operators, to further modify the schema. These operators as well as the automatic reasoning during the second phase are based on subsumption.

    The formal semantics and automatic reasoning utilized in the second phase are based on a terminological logic, as adapted in the CANDIDE data model. Classes are completely defined in terms of attributes and constraints. Our observation is that the inability to completely define attributes, and thus to completely capture their real-world semantics, imposes a fundamental limitation on the possibility of automatic reasoning about attribute definitions. This necessitates human reasoning during the first phase of the integration approach.
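The second-phase classification step can be sketched under a strong simplification: classes defined purely by attribute sets, with subsumption as set containment. The class and attribute names are hypothetical, and this ignores the constraint reasoning of the CANDIDE model:

```python
# Minimal sketch: classification places each class under its most
# specific subsumers, yielding a taxonomy like the partially
# integrated schema described in phase two. Names are hypothetical.

def subsumes(a, b):
    """Class a subsumes class b if a's defining attributes are a
    proper subset of b's."""
    return a <= b and a != b

def classify(classes):
    """Return {name: direct parents} for a dict of {name: attribute-set}."""
    parents = {}
    for name, attrs in classes.items():
        subsumers = [n for n, a in classes.items() if subsumes(a, attrs)]
        # keep only the most specific subsumers: drop any subsumer
        # that also subsumes another subsumer
        parents[name] = sorted(
            s for s in subsumers
            if not any(subsumes(classes[s], classes[t]) for t in subsumers)
        )
    return parents

schema = {
    "Person":   {"name"},
    "Employee": {"name", "salary"},
    "Manager":  {"name", "salary", "reports"},
}
taxonomy = classify(schema)
# Manager sits directly under Employee only, although Person also subsumes it.
assert taxonomy == {"Person": [], "Employee": ["Person"], "Manager": ["Employee"]}
```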