We present an approach to knowledge compilation that transforms a function-free first-order Horn knowledge base to propositional logic. This form of compilation is important since the most efficient reasoning methods are defined for propositional logic, while knowledge is most conveniently expressed within a first-order language. To obtain compact propositional representations, we employ techniques from (ir)relevance reasoning as well as theory transformation via unfold/fold transformations. Application areas include diagnosis, planning, and vision. Preliminary experiments with a hypothetical reasoner indicate that our method may yield significant speed-ups.
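As a complement to the abstract above, the following is a minimal sketch of the naive grounding step behind such a compilation: a function-free Horn clause is propositionalized over a finite domain. It is not the paper's procedure (which additionally uses relevance reasoning and unfold/fold transformations to keep the propositional form compact), and the clause, predicates, and constants below are hypothetical.

    from itertools import product

    # A function-free Horn clause: head and body atoms over variables.
    # Hypothetical example: ancestor(X, Z) <- parent(X, Y), ancestor(Y, Z)
    clause = {
        "head": ("ancestor", ("X", "Z")),
        "body": [("parent", ("X", "Y")), ("ancestor", ("Y", "Z"))],
    }
    domain = ["ann", "bob", "cat"]  # finite set of constants

    def ground(clause, domain):
        """Enumerate all substitutions of the clause's variables over the
        domain, producing one propositional Horn clause per substitution."""
        atoms = [clause["head"]] + clause["body"]
        variables = sorted({v for _, args in atoms for v in args if v[0].isupper()})
        for values in product(domain, repeat=len(variables)):
            theta = dict(zip(variables, values))
            def prop(pred, args):
                return pred + "_" + "_".join(theta.get(a, a) for a in args)
            yield prop(*clause["head"]), [prop(p, args) for p, args in clause["body"]]

    for head, body in ground(clause, domain):
        print(head, "<-", ", ".join(body))

Even this toy example yields 27 propositional clauses from one first-order clause and a three-element domain, which is why the compactness techniques described above matter.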
In this paper we describe how fuzzy reasoning can assist an automated program understanding/fault localization tool with program understanding tasks. We are developing such a tool called BUG-DOCTOR, which is based on a blackboard framework. A fuzzy reasoner is proposed as a component for one of its knowledge sources, the Plan Processor. The Plan Processor retrieves a set of program plans from a plan library using indices called signatures. These plans are candidates for matching against the code we are trying to understand. The fuzzy reasoner will support the Plan Processor with the task of ranking the retrieved plans in order of similarity to the target code. The most highly ranked plan can then be used for the complex plan/code matching required for automated program understanding. Experiments with a fuzzy reasoning prototype are promising, and we believe that this approach to plan processing may eliminate the need for exhaustive plan library searches. The success of the fuzzy reasoning approach could lead to automated program understanders that scale up for use on large software systems from a variety of problem domains.
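To illustrate the ranking task described above, the sketch below scores candidate plans by a simple fuzzy similarity to a feature vector extracted from the target code. The feature names, membership degrees, and similarity measure are assumptions made for illustration only; they are not BUG-DOCTOR's actual signatures or fuzzy rules.

    # Toy fuzzy ranking of candidate plans against target-code features.
    target = {"has_loop": 1.0, "uses_accumulator": 0.8, "array_indexed": 0.3}

    plans = {
        "running_total": {"has_loop": 1.0, "uses_accumulator": 1.0, "array_indexed": 0.0},
        "linear_search": {"has_loop": 1.0, "uses_accumulator": 0.0, "array_indexed": 1.0},
        "swap_elements": {"has_loop": 0.0, "uses_accumulator": 0.0, "array_indexed": 1.0},
    }

    def similarity(plan, target):
        """Fuzzy similarity: average per-feature agreement, 1 - |difference|."""
        keys = target.keys()
        return sum(1.0 - abs(plan.get(k, 0.0) - target[k]) for k in keys) / len(keys)

    ranked = sorted(plans, key=lambda p: similarity(plans[p], target), reverse=True)
    for p in ranked:
        print(f"{p}: {similarity(plans[p], target):.2f}")  # running_total ranks highest

Only the most highly ranked plan would then be handed to the expensive plan/code matching step, which is how this kind of ranking avoids exhaustive plan library searches.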
In many domains, software development has to meet the challenges of developing highly adaptable software very rapidly. In order to accomplish this task, domain specific, formal description languages and knowledge-based systems are employed. From the viewpoint of the industrial software development process, it is important to integrate the construction and maintenance of these systems into standard software engineering processes. In addition, the descriptions should be comprehensible for the domain experts in order to facilitate the review process.
For the realization of product configuration systems, we show how these requirements can be met by using a standard design language, UML (the Unified Modeling Language), as the notation, in order to simplify the construction of a logic-based description of the domain knowledge. We show how classical description concepts for expressing configuration knowledge can be introduced into UML and translated automatically into logical sentences. These sentences are exploited by a general inference engine that solves the configuration task.
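To make the flavor of such a translation concrete, a UML part-whole association with multiplicities might be compiled into sentences of the following kind; the class names and the exact encoding are illustrative assumptions, not the translation rules defined in the paper.

    UML fragment: class PC has a part-of association hasPart to class HardDisk
                  with multiplicity 1..2.

    Generated sentences (one possible rendering):
        forall x.  PC(x)  ->  exists y. (HardDisk(y) and hasPart(x, y))
        forall x, y1, y2, y3.
            (PC(x) and HardDisk(y1) and HardDisk(y2) and HardDisk(y3)
             and hasPart(x, y1) and hasPart(x, y2) and hasPart(x, y3))
            ->  (y1 = y2 or y1 = y3 or y2 = y3)

The first sentence enforces the lower bound (at least one hard disk per PC), the second the upper bound (at most two); a general inference engine can then use such sentences to solve the configuration task.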
Ontologies have been applied in a wide range of practical domains. They play a key role in data modeling, information integration, and the creation of semantic web services, intelligent web sites and intelligent software agents. The Web ontology language OWL, recommended by W3C, is based on description logics (DLs). Automated reasoning in DLs is very important for the success of OWL, as it provides support for visualization, debugging, and querying of ontologies. The existing ontology reasoners are not yet satisfactory, especially when dealing with qualified number restrictions and large ontologies. In this paper, we present the design of our new reasoner TGC2, which uses tableaux with global caching for reasoning in ExpTime-complete DLs. The distinguishing characteristic of TGC2 is that it is based on our tableau methods with optimal (ExpTime) complexity, while the existing well-known tableau-based reasoners for DLs have non-optimal complexity (at least NExpTime). We briefly describe the tableau methods used by TGC2. We then present the design principles of TGC2 and some important optimization techniques for increasing the efficiency of this reasoner. We also present preliminary evaluation results, which show that TGC2 deals with qualified number restrictions much better than the other existing reasoners.
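The global-caching idea can be conveyed in miniature with a propositional tableau in which the satisfiability status of every node label (a set of formulas) is memoized in a single global table, so no label is ever expanded twice. The sketch below is only a toy propositional analogue under that assumption; TGC2's actual calculi work on description logics with qualified number restrictions and use and-or graph search to achieve the ExpTime bound.

    # Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g).
    cache = {}  # frozenset(label) -> True/False, shared across the whole search

    def sat(label):
        key = frozenset(label)
        if key not in cache:
            cache[key] = expand(set(label))
        return cache[key]

    def expand(label):
        # Clash: a variable and its negation in the same label.
        for f in label:
            if f[0] == "var" and ("not", f) in label:
                return False
        # Deterministic (alpha) rules.
        for f in label:
            if f[0] == "and":
                return sat((label - {f}) | {f[1], f[2]})
            if f[0] == "not" and f[1][0] == "not":
                return sat((label - {f}) | {f[1][1]})
            if f[0] == "not" and f[1][0] == "or":
                return sat((label - {f}) | {("not", f[1][1]), ("not", f[1][2])})
        # Branching (beta) rules.
        for f in label:
            if f[0] == "or":
                return sat((label - {f}) | {f[1]}) or sat((label - {f}) | {f[2]})
            if f[0] == "not" and f[1][0] == "and":
                return (sat((label - {f}) | {("not", f[1][1])})
                        or sat((label - {f}) | {("not", f[1][2])}))
        return True  # only non-clashing literals remain

    p, q = ("var", "p"), ("var", "q")
    print(sat({("and", ("or", p, q), ("not", p))}))  # True (satisfiable with q)
    print(sat({("and", p, ("not", p))}))             # False (clash on p)

Note how the second query reuses the cached verdict for the label containing p and not-p that was computed during the first query; that reuse, generalized to description-logic labels, is the essence of global caching.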
In the beginning (the early 1960s), the long-term goal of automated reasoning (although the field at the time was not known by that name) was the design and implementation of a program whose use would lead to "real" and significant contributions to mathematics by offering sufficient power for the discovery of proofs. The realization of that goal appeared to be at least six decades in the future. However, with amazement and satisfaction, we can report that fewer than four decades were required. In this article, we present evidence for this claim, thanks to W. McCune's program OTTER. Our focus is on various landmarks, or milestones, of two types. One type concerns the formulation of new strategies and methodologies whose use greatly enhances the power of a reasoning program. A second type focuses on actual contributions to mathematics and (although not initially envisioned) to logic. We give examples of each type of milestone, and, perhaps of equal importance, demonstrate that advances are far more likely to occur if the two classes are closely intertwined. We draw heavily on material presented in great detail in the new book Automated Reasoning and the Discovery of Missing and Elegant Proofs, published by Rinton Press.
The articles in this special issue represent advances in several areas of knowledge acquisition and knowledge representation. In this article we attempt to place these advances in the context of a fundamental challenge in AI; namely, the automated acquisition of knowledge from data and the representation of this knowledge to support understanding and reasoning. We observe that while this work does indeed advance the field in important areas, the need exists to integrate these components into an end-to-end system and begin to extract general methodologies for this challenge. At the heart of this integration is the need for performance feedback throughout the process to guide the selection of alternative methods, the support for human interaction in the process, and the definition of general metrics and testbeds to evaluate progress.
Artificially intelligent systems are nowadays presented as systems that should, among other things, be explainable and ethical. In parallel, both in the popular culture and within the scientific literature, there is a tendency to anthropomorphize Artificial Intelligence (AI) and reify intelligent systems as persons. From the perspective of machine ethics and ethical AI, this has resulted in the belief that truly autonomous ethical agents (i.e., machines and algorithms) can be defined, and that machines could, by themselves, behave ethically and perform actions that are justified (explainable) from a normative (ethical) standpoint. Under this assumption, and given that utilities and risks are generally seen as quantifiable, many scholars have seen consequentialism (or utilitarianism) and rational choice theory as likely candidates to be implemented in automated ethical decision procedures, for instance to assess and manage risks as well as maximize expected utility. While some see this implementation as unproblematic, there are important limitations to such attempts that need to be made explicit so that we can properly understand what artificial autonomous ethical agents are, and what they are not. From the perspective of explainable AI, there are value-laden technical choices made during the implementation of automated ethical decision procedures that cannot be explained as decisions made by the system. Building on a recent example from the machine ethics literature, we use computer simulations to study whether autonomous ethical agents can be considered as explainable AI systems. Using these simulations, we argue that technical issues with ethical ramifications leave room for reasonable disagreement even when algorithms are based on ethical and rational foundations such as consequentialism and rational choice theory. By doing so, our aim is to illustrate the limitations of automated behavior and ethical AI and, incidentally, to raise awareness on the limits of so-called autonomous ethical agents.
In many applications, such as decision support, negotiation, planning, scheduling, etc., one needs to express requirements that can be only partially satisfied. In order to express such requirements, we propose a technique called forward-tracking. Intuitively, forward-tracking is a kind of dual of chronological back-tracking: if a program globally fails to find a solution, then a new execution is started from a program point and a state 'forward' in the computation tree. This search technique is applied to the specific paradigm of constraint logic programming, obtaining a powerful extension that preserves all the useful properties of the original scheme. We report on the successful practical application of forward-tracking to the evolutionary training of (constrained) neural networks, and discuss other interesting applications.
Information integration is one of the most important aspects of a Data Warehouse. When data passes from the sources of the application-oriented operational environment to the Data Warehouse, possible inconsistencies and redundancies should be resolved, so that the warehouse is able to provide an integrated and reconciled view of data of the organization. We describe a novel approach to data integration in Data Warehousing. Our approach is based on a conceptual representation of the Data Warehouse application domain, and follows the so-called local-as-view paradigm: both source and Data Warehouse relations are defined as views over the conceptual model. We propose a technique for declaratively specifying suitable reconciliation correspondences to be used in order to solve conflicts among data in different sources. The main goal of the method is to support the design of mediators that materialize the data in the Data Warehouse relations. Starting from the specification of one such relation as a query over the conceptual model, a rewriting algorithm reformulates the query in terms of both the source relations and the reconciliation correspondences, thus obtaining a correct specification of how to load the data in the materialized view.
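The local-as-view setting can be pictured with a deliberately tiny example (the relation names are hypothetical and the paper's reconciliation correspondences are not shown): both the source relation and the Data Warehouse relation are defined as views over the conceptual model, and the rewriting algorithm reformulates the warehouse query over the source.

    Conceptual model predicates:  Employee(x), worksFor(x, d)

    Source relation, defined as a view over the conceptual model:
        S1(x, d)   :-  Employee(x), worksFor(x, d)

    Data Warehouse relation to materialize, also defined over the conceptual model:
        DW_Emp(x)  :-  Employee(x)

    Rewriting of DW_Emp in terms of the source:
        DW_Emp(x)  :-  S1(x, d)

The rewriting is sound because every tuple of S1 is, by its view definition, an employee; in the approach described above, reconciliation correspondences would additionally specify how conflicts and redundancies among the sources are resolved when loading the materialized view.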
This paper addresses the issue of automatic service composition. We first develop a framework in which the exported behavior of a service is described in terms of a so-called execution tree, i.e., an abstraction of its possible executions. We then study the case in which such exported behavior (i.e., the execution tree of the service) can be represented by a finite state machine (i.e., a finite state transition system). In this specific setting, we devise sound, complete and terminating techniques both to check for the existence of a composition and to return a composition, if one exists. We also analyze the computational complexity of the proposed algorithms. Finally, we present an open source prototype tool, E-Service Composer, that implements our composition technique. To the best of our knowledge, our work is the first attempt to provide a provably correct technique for the automatic synthesis of service composition in a framework where the behavior of services is explicitly specified.
Automated composition of Web services, i.e., the process of forming new value-added Web services, is one of the most promising challenges facing the Semantic Web today. Semantics enables Web services to describe their capabilities together with their processes, and is therefore one of the key elements for automated composition. In this paper, we focus on the functional level of Web services, i.e., services described by input and output parameters that are semantically enhanced with concepts from a domain ontology. Web service composition is then viewed as a composition of semantic links, where such links refer to the semantic matchmaking between Web service parameters (i.e., outputs and inputs) that models their connection and interaction. The key idea is that this matchmaking makes it possible, at run time, to find semantic compatibilities among independently defined Web service descriptions. For this level of composition, a formal model for performing automated Web service composition, the Semantic Link Matrix, is introduced. This model is the required starting point for applying problem-solving techniques such as regression- (or progression-) based search for Web service composition, and it supports a semantic context in order to find correct, complete, consistent and robust plans as solutions. The paper thus presents an innovative and formal model for AI (Artificial Intelligence) planning-oriented composition. Our system is implemented and interacts with Web services dedicated to Telecom scenarios. Preliminary evaluation results show the high efficiency and effectiveness of the proposed approach.
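A rough sketch of the matchmaking that populates such a matrix follows: each entry records the degree of semantic match between an output parameter of one service and an input parameter of another, judged against a domain ontology. The tiny ontology, the services, and the three-valued match scale below are assumptions for illustration only, not the paper's definition of the Semantic Link Matrix.

    # Toy semantic matchmaking between Web service parameters.
    # parent links of a tiny domain ontology (child -> parent)
    ontology = {"MobileNumber": "PhoneNumber", "PhoneNumber": "ContactPoint",
                "EmailAddress": "ContactPoint"}

    def subsumes(general, specific):
        """True if `general` equals `specific` or is an ancestor of it."""
        while specific is not None:
            if specific == general:
                return True
            specific = ontology.get(specific)
        return False

    def match(output_concept, input_concept):
        """Degree of semantic link from an output parameter to an input parameter."""
        if output_concept == input_concept:
            return "exact"
        if subsumes(input_concept, output_concept):
            return "plug-in"  # the output is more specific than the required input
        return "fail"

    services = {
        "GetSubscriber": {"out": ["MobileNumber"]},
        "SendSMS":       {"in":  ["PhoneNumber"]},
        "SendEmail":     {"in":  ["EmailAddress"]},
    }

    # One row of the (output x input) matrix for GetSubscriber's output:
    for svc in ("SendSMS", "SendEmail"):
        for concept in services[svc]["in"]:
            print(f"GetSubscriber.MobileNumber -> {svc}.{concept}: "
                  f"{match('MobileNumber', concept)}")

Here the subscriber's mobile number can feed the SMS service ("plug-in") but not the e-mail service ("fail"); entries of this kind are what a regression- or progression-based search would chain together when building a composition.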
Most of the work on automated web service composition has so far focused on the composition of stateful web services. This level of composition, the so-called "process level", considers web services with their internal, complex behaviors. At the process level, formal models such as State Transition Systems (STSs) or Interface Automata are the most appropriate models for representing the internal behavior of stateful web services. However, such models capture only the semantics of the behaviors and, unfortunately, not the semantics of the actions and their parameters. In this paper, we suggest extending the STS model by following the WSMO-based annotation for Abstract State Machines. This semantic enhancement of STS, called S2TS, makes it possible to model the semantics of internal behaviors together with the semantics of their actions and their input and output parameters. Second, we focus on the automated generation of data flow, i.e., the process of performing automated assignments between the parameters of the services involved in a composition. We do not restrict ourselves to assignments between exactly matching parameters (which practically never occur in industrial scenarios) but extend assignments to semantically close parameters in the same ontology (e.g., through subsumption matching). Our system is implemented and interacts with web services dedicated to telecommunication scenarios. Preliminary evaluation results show the high efficiency and effectiveness of the proposed approach.
As the complexity of software systems is ever increasing, so is the need for practical tools for formal verification. Among these are automatic theorem provers, capable of solving various reasoning problems automatically, and proof assistants, capable of deriving more complex results when guided by a mathematician/programmer. In this paper we consider using the latter to build the former. In the proof assistant Isabelle/HOL we combine functional programming and logical program verification to build a theorem prover for propositional logic. We also consider how such a prover can be used to solve a reasoning task without much mental labor. The development is extended with a formalized proof system for writing machine-checked sequent calculus proofs. We consider how this can be used to teach computer science students about logic, automated reasoning and proof assistants.
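For readers who want a feel for the kind of prover being formalized, the sketch below is a naive propositional validity checker written in plain Python rather than as a verified Isabelle/HOL function; it is far simpler than the formalized sequent-calculus development described above and is offered only to show the reasoning task being automated.

    from itertools import product

    # Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g), ("imp", f, g)

    def evaluate(formula, valuation):
        tag = formula[0]
        if tag == "var":
            return valuation[formula[1]]
        if tag == "not":
            return not evaluate(formula[1], valuation)
        if tag == "and":
            return evaluate(formula[1], valuation) and evaluate(formula[2], valuation)
        if tag == "or":
            return evaluate(formula[1], valuation) or evaluate(formula[2], valuation)
        if tag == "imp":
            return (not evaluate(formula[1], valuation)) or evaluate(formula[2], valuation)
        raise ValueError(tag)

    def variables(formula):
        if formula[0] == "var":
            return {formula[1]}
        return set().union(*(variables(sub) for sub in formula[1:]))

    def valid(formula):
        """Truth-table check: the formula holds under every valuation."""
        names = sorted(variables(formula))
        return all(evaluate(formula, dict(zip(names, bits)))
                   for bits in product([False, True], repeat=len(names)))

    p, q = ("var", "p"), ("var", "q")
    print(valid(("imp", ("and", p, q), p)))  # True:  p and q implies p
    print(valid(("imp", ("or", p, q), p)))   # False: p or q does not imply p

The point of the development described above is, of course, that the prover itself is defined and verified inside the proof assistant, so its answers come with machine-checked guarantees.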
Guidance ability is one of the characteristic features of contradiction separation based automated deduction, a novel approach that extends the canonical resolution rule to a dynamic, flexible multi-clause deduction framework. In order to take better advantage of this guidance ability during the deduction process, we propose in this paper a clause reusing framework for contradiction separation based automated deduction. This framework is able to generate more decision literals, on which the guidance ability of contradiction separation based automated deduction relies. A technical analysis, along with some examples, is provided to illustrate the feasibility of the proposed framework.
This paper summarizes the major research progress on a family of logical algebras, the lattice implication algebras (LIAs), on the algebraic logics (lattice-valued logics) based on LIAs, and on uncertainty inference and automated reasoning within these lattice-valued logics. Relevant further research topics and prospects are also outlined. The theory and methods by which these logic systems characterize linguistic-truth-value-based uncertainty inference and automated reasoning are highlighted and will be specially investigated.
The ‘job-shop scheduling problem’ is known to be NP-complete. The version of interest in this paper concerns an assembly line designed to produce various cars, each of which requires a (possibly different) set of options. The combinatorics of the problem preclude seeking a maximal solution. Nevertheless, because of the underlying economic considerations, an approach that yields a ‘good’ sequence of cars, given the specific required options for each, would be most valuable. In this paper, we focus on an environment for seeking, studying, and evaluating approaches for yielding good sequences. The environment we discuss relies on the automated reasoning program ITP. Automated reasoning programs of this type offer a wide variety of ways to reason, strategies for controlling the reasoning, and auxiliary procedures that contribute to the effective study of problems of this kind. We view the study presented in this paper as a prototype of what can be accomplished with the assistance of an automated reasoning program.
We present in this article an application of automated theorem proving to a study of a theorem in combinatory logic. The theorem states: the strong fixed point property is satisfied in a system that contains the combinators B and W. The theorem can be stated in terms of Smullyan's forests of birds: a sage exists in a forest that contains a bluebird and a warbler. Proofs of the theorem construct sages from B and W. Prior to the study, one sage, discovered by Statman, was known to exist. During the study, with much assistance from two automated theorem-proving programs, four new sages were discovered. The study was conducted from a syntactic point of view because the authors know very little about combinatory logic. The uses of the automated theorem-proving programs are described in detail.
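For readers unfamiliar with the notation, the ingredients of the theorem are the following standard definitions (textbook material, not details reconstructed from the article's proofs):

    B x y z = x (y z)        (B, the bluebird: composition)
    W x y   = x y y          (W, the warbler: doubling)

    Strong fixed point property: there exists a combinator Θ (a sage) such that
        Θ x = x (Θ x)   for every x,
    with Θ built from B and W alone.

Each of the sages mentioned in the abstract is such a combinator Θ expressed as a term over B and W.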
The term automated reasoning (first introduced in 1980) accurately describes the objective of the field, the automation of logical reasoning. This article introduces scientists to the field and then briefly describes the papers found in this special issue.
Especially in mathematics and in logic, lemmas (basic truths) play a key role for proving theorems. In ring theory, for example, a useful lemma asserts that, for all elements x, the product in either order of 0 and x is 0; in two-valued sentential (or propositional) calculus, a useful lemma asserts that, for all x, x implies x. Even in algorithm writing and in circuit design, lemmas play a key role: minus(minus(x)) = x in the former and NOT(AND(x, y)) = OR(NOT(x), NOT(y)) in the latter. Whether the object is to prove a theorem, write an algorithm, or design a circuit, and whether the assignment is given to a person or (preferably) to an automated reasoning program, the judicious use of lemmas often spells the difference between success and failure. In this article, we focus on what might be thought of as a generalization of the concept of lemma, namely, the concept of resonator, and on a strategy, the resonance strategy, that keys on resonators. For example, where in Boolean groups—those in which the square of every x is the identity element e—the lemmas yzyz = e and yyzz = e are such that neither generalizes the other, the resonator (formula schema) ⋆⋆⋆⋆ = e, by using each occurrence of “star” to assert the presence of some variable, generalizes and captures (in a manner that is discussed in this article) both lemmas. Note that the cited resonator, if viewed as a lemma with star replaced by some chosen variable, captures neither cited lemma as an instance. Lemmas of a theory are provably “true” in the theory and, therefore, can be used to complete an assignment. In contrast, resonators, which capture collections of equations or collections of formulas that may or may not include truths, are used by the resonance strategy to direct the search for the information needed for assignment completion. In addition to discussing how one finds useful resonators, we detail various successes, in some of which the resonance strategy played a key role in obtaining a far better proof and in some of which the resonance strategy proved indispensable. The successes are taken from group theory, Robbins algebra, and various logic calculi.
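The Boolean-group example above can be made mechanical with a toy matcher: the resonator ⋆⋆⋆⋆ = e accepts any equation whose left-hand side is a product of exactly four variable occurrences, so it captures both yzyz = e and yyzz = e even though neither lemma subsumes the other. The code below is only an illustrative sketch of this matching idea, not OTTER's implementation of the resonance strategy.

    # Toy resonator matching for equations of the form w1 w2 ... wn = e,
    # where the left-hand side is a word over variable symbols and each
    # "*" in the resonator stands for "some variable" at that position.

    def matches_resonator(equation, resonator):
        """Both arguments are strings like 'yzyz=e' and '****=e'."""
        lhs, rhs = equation.replace(" ", "").split("=")
        pat_lhs, pat_rhs = resonator.replace(" ", "").split("=")
        return (rhs == pat_rhs
                and len(lhs) == len(pat_lhs)
                and all(pat == "*" and sym.isalpha()
                        for sym, pat in zip(lhs, pat_lhs)))

    resonator = "****=e"
    for lemma in ("yzyz=e", "yyzz=e", "yz=e"):
        print(lemma, "->", matches_resonator(lemma, resonator))
    # yzyz=e and yyzz=e match; yz=e does not (wrong number of occurrences).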
When given a set of properties or conditions (say, three) that are claimed to be equivalent, the claim can be verified by supplying what we call a circle of proofs. In the case in point, one proves the second property or condition from the first, the third from the second, and the first from the third. If the proof that 1 implies 2 does not rely on 3, then we say that the proof is pure with respect to 3, or simply say the proof is pure. If one can renumber the three properties or conditions in such a way that one can find a circle of three pure proofs—technically, each proof pure with respect to the condition that is neither the hypothesis nor the conclusion—then we say that a circle of pure proofs has been found. Here we study the specific question of the existence of a circle of pure proofs for the thirteen shortest single axioms for equivalential calculus, subject to the requirement that condensed detachment be used as the rule of inference. For an indication of the difficulty of answering the question, we note that a single application of condensed detachment to the (shortest single) axiom known as P4 (also known as UM) with itself yields the (shortest single) axiom P5 (also known as XGF), and two applications of condensed detachment beginning with P5 as hypothesis yield P4. Therefore, except for P5, one cannot find a pure proof of any of the twelve shortest single axioms when using P4 as hypothesis or axiom, for the first application of condensed detachment must focus on two copies of P4, which results in the deduction of P5, forcing P5 to be present in all proofs that use P4 as the only axiom. Further, the close proximity in the proof sense of P4 when using as the only axiom P5 threatens to make impossible the discovery of a circle of pure proofs for the entire set of thirteen shortest single axioms. Perhaps more important than our study of pure proofs, and of a more general nature, we also present the methodology used to answer the cited specific question, a methodology that relies on various strategies and features offered by W. McCune's automated reasoning program OTTER. The strategies and features of OTTER we discuss here offer researchers the needed power to answer deep questions and solve difficult problems. We close this article (in the last two sections) with some challenges and some topics for research and (in the Appendix) with a sample input file and some proofs for study.
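For reference, the rule of inference fixed throughout the study, condensed detachment, can be stated as follows for the equivalential calculus (whose formulas are built from the two-place connective e for equivalence); this is the standard formulation rather than anything specific to the article:

    Major premise:  e(P, Q)
    Minor premise:  R   (variables renamed apart from the major premise)
    If σ is a most general unifier of P and R, infer σ(Q).

Applying this rule to P4 with itself thus means unifying a renamed copy of P4 with the first argument of P4, which, as noted above, immediately yields P5.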