Data, information, knowledge, and wisdom form a progressive relationship. Information is formed by collating data. Knowledge is filtered, refined, and processed from relevant information. Wisdom is based on knowledge and is accumulated through experience. This paper uses the progressive relationship of service data, information, knowledge, and wisdom to explain how a service knowledge graph is expressed. Discovering trusted Cloud service providers from service data, information, and knowledge is an increasingly challenging demand. We propose an efficient method of trusted service provider discovery based on service knowledge graphs, called PDG (Provider Discovery based on Graphs), to ensure that each service instance of a composite service in a Cloud system is trustworthy. PDG evaluates the outputs of service providers in service classes with the help of additional service information. From this additional service information, service knowledge is generated and trusted service providers can be found easily. PDG improves the accuracy of processing results by automatically replacing data provided by untrusted service providers with results provided by trusted ones.
Nowadays, representing entities and relations in a machine-understandable way through knowledge graph embedding (KGE) has proven to be an effective approach for predicting missing links in knowledge graphs (KGs). The success of such an approach depends mainly on the model's ability to infer relation patterns. Indeed, most existing KGE models focus on modeling simple relation patterns such as symmetry, anti-symmetry, inversion, and composition. However, few models in the literature take into consideration the modeling of complex relation patterns like 1-N, N-1 and N-N, which are common in real-world applications. To overcome this challenge, this paper presents a new KGE model called KEMA++++, i.e. KGE using Modular Arithmetic, which relies on the combination of projection and modular arithmetic. The main idea behind KEMA++++ is to project the entities of a relation in order to represent the relations of a KG, before applying modular arithmetic to them. Thus, KEMA++++ is able to infer all simple and complex relation patterns for any KGE application. Through extensive experiments on several datasets, we demonstrate the relevance of KEMA++++ in terms of effectively representing all relations in a KG. Simulations on the tested datasets show that KEMA++++ obtains good scores on the Mean Rank (MR) and Hits@1 metrics, and compares favorably with existing models on Hits@1.
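The abstract does not spell out KEMA++++'s exact scoring function, so the following is only a minimal, hypothetical sketch of how a projection step can be combined with modular arithmetic in a KGE-style score; the modulus, the distance, and all names are assumptions rather than the model's actual formulation.

```python
# Illustrative sketch only: project head/tail into a relation-specific space,
# shift the head by a relation vector, and compare the two modulo a fixed base.
import numpy as np

MOD = 7  # hypothetical modulus

def project(entity: np.ndarray, relation_proj: np.ndarray) -> np.ndarray:
    """Map an entity embedding into a relation-specific space."""
    return relation_proj @ entity

def score(head, tail, relation_proj, relation_shift) -> float:
    """Lower is better: distance between the projected head translated by the
    relation vector and the projected tail, both taken modulo MOD."""
    h = project(head, relation_proj)
    t = project(tail, relation_proj)
    return float(np.linalg.norm(np.mod(h + relation_shift, MOD) - np.mod(t, MOD)))

# Toy usage with small integer embeddings
rng = np.random.default_rng(0)
d = 4
head, tail = rng.integers(0, MOD, d), rng.integers(0, MOD, d)
P, r = rng.integers(0, MOD, (d, d)), rng.integers(0, MOD, d)
print(score(head, tail, P, r))
```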
One of the major challenges facing real-time applications that employ Bayesian networks is the design and development of efficient inference algorithms. In this paper we present an approximate real-time inference algorithm for Bayesian networks. The algorithm is an anytime reasoning method based on probabilistic inequalities, capable of handling fully and partially quantified Bayesian networks. In our method the accuracy of the results improves gradually as computation time increases, providing a trade-off between resource consumption and output quality. The method is tractable in providing initial answers, as well as complete in the limiting case.
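As a purely illustrative sketch (not the paper's algorithm), the toy code below shows the anytime flavor described above: full joint assignments of a tiny Bayesian network are enumerated one at a time, and the reported lower/upper bounds on a query probability tighten as more computation time is spent. The network and its numbers are hypothetical.

```python
# Anytime bounds by partial enumeration: at any point, the unexplored
# probability mass gives the gap between the lower and upper bound on P(B=1).
from itertools import product

# P(A), P(B | A) for binary variables A, B (hypothetical values)
p_a = {0: 0.7, 1: 0.3}
p_b_given_a = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}  # key: (A, B)

lower, covered = 0.0, 0.0
for a, b in product((0, 1), repeat=2):        # anytime loop: can stop at any step
    mass = p_a[a] * p_b_given_a[(a, b)]
    covered += mass
    if b == 1:
        lower += mass
    upper = lower + (1.0 - covered)           # unexplored mass could all favor B=1
    print(f"after ({a},{b}): P(B=1) in [{lower:.3f}, {upper:.3f}]")
```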
This article constitutes a contribution to the analysis of the notion of variable. Within the framework of Combinatory Logic, a formalism without bound variables, the Logic of Determination of Objects (LDO) provides an explanation for the necessary distinction between "whatever, any" and "indeterminate, indefinite" used by the introduction and elimination rules for quantifiers in Natural Deduction. The intension of a concept and typical and atypical occurrences of a concept are also introduced, yielding new quantifiers that are better suited to natural language processing (NLP) and to the study of natural inferences in common reasoning.
Reasoning with ontologies is a challenging task, especially for non-logic experts. When checking whether an ontology contains rules that contradict each other, current description logic reasoners can only provide a list of the unsatisfiable concepts. Figuring out why these concepts are unsatisfiable, which rules cause the conflicts, and how to resolve them is left entirely to the ontology modeler. The problem becomes even more challenging in the case of medium or large ontologies, because an unsatisfiable concept may cause many of its neighboring concepts to be unsatisfiable.
The goal of this article is to empower ontology engineering with a user-friendly reasoning mechanism. We propose a pattern-based reasoning approach that offers 9 patterns of constraint contradictions leading to unsatisfiability in Object-Role Modeling (ORM) models. The novelty of this approach is not merely that constraint contradictions are detected, but mainly that it provides the causes of contradictions and suggestions for resolving them. The approach is implemented in the DogmaModeler ontology engineering tool and tested in building the CCFORM ontology. We discuss that, although this pattern-based reasoning covers most contradictions encountered in practice, it is not complete compared with description logic-based reasoning. We illustrate both approaches, pattern-based and description logic-based, and their implementation in DogmaModeler, and conclude that they complement each other from a methodological perspective.
Decision-making in uncertain and dynamic domains is still a challenging research area. This paper explores a solution for handling such complex decision-making based on a combined logic system. We provide an explanation of our reasoning system, focusing on the algorithms and their implementations. The reasoning system is based on a multi-valued temporal propositional logic, which we use as the foundation for the implementation of simulation/prediction and query-answering tools. The system allows users to represent knowledge, to refine and debug these representations, and to try different problem-solving strategies. We provide examples to illustrate how the system can be used, including a problem based on a real smart environment.
Representing information that evolves in time is a difficult problem to deal with in ontologies. Temporal relations are in fact ternary (i.e., properties of objects that change in time involve a temporal value in addition to the object and the subject) and cannot be handled directly by OWL. The standard solution to this problem is to map all temporal relations to a set of binary ones via new (intermediate) classes introduced by the temporal model applied. Nevertheless, ontologies then become complicated and difficult to handle with standard editors such as Protégé (e.g., property restrictions of temporal classes might refer to the new classes rather than to the classes on which they were meant to be defined). It also requires that the user be familiar with the peculiarities of the temporal representation. This is exactly the problem this work deals with. We introduce CHRONOS Ed, a plug-in for Protégé that enables temporal ontologies to be handled in Protégé in the same way as static ontologies. It is implemented as a Tab plug-in for Protégé and can be downloaded from the Web.
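To make the mapping concrete, here is a minimal rdflib sketch of the standard solution the abstract refers to: a ternary temporal fact ("Alice worksFor AcmeCorp during an interval") is reified through an intermediate class instance connected only by binary properties. The vocabulary is invented for illustration and is not CHRONOS Ed's actual model.

```python
# Reify a time-dependent relation into an intermediate "event" individual.
from rdflib import Graph, Literal, Namespace, RDF, XSD

EX = Namespace("http://example.org/")
g = Graph()

event = EX.worksFor_event_1                      # intermediate temporal instance
g.add((event, RDF.type, EX.WorksForEvent))
g.add((event, EX.hasSubject, EX.Alice))          # original subject
g.add((event, EX.hasObject, EX.AcmeCorp))        # original object
g.add((event, EX.startsAt, Literal("2020-01-01", datatype=XSD.date)))
g.add((event, EX.endsAt, Literal("2022-06-30", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```

Property restrictions meant for the original relation now end up attached to classes such as the hypothetical WorksForEvent, which is exactly the editing awkwardness the plug-in aims to hide.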
We first describe a metric for uncertain probabilities called opinion, and subsequently a set of logical operators that can be used for logical reasoning with uncertain propositions. This framework, which is called subjective logic, uses elements from Dempster-Shafer belief theory, and we show that it is compatible with binary logic and probability calculus.
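A minimal sketch of the binomial opinion as it is commonly defined in subjective logic follows (the article's own operator definitions may differ in detail): belief, disbelief and uncertainty masses summing to one, plus a base rate, with probability expectation E = b + a·u.

```python
# Binomial opinion (b, d, u, a) with b + d + u = 1 and base rate a.
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty
    a: float  # base rate (prior probability)

    def __post_init__(self):
        assert abs(self.b + self.d + self.u - 1.0) < 1e-9

    def expectation(self) -> float:
        """Projected probability of the proposition being true."""
        return self.b + self.a * self.u

# A dogmatic opinion (u = 0) reduces to an ordinary probability, which is the
# sense in which subjective logic is compatible with probability calculus.
print(Opinion(b=0.6, d=0.1, u=0.3, a=0.5).expectation())    # 0.75
print(Opinion(b=0.75, d=0.25, u=0.0, a=0.5).expectation())  # 0.75
```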
In order to enable secure interaction between dynamically discovered software services and the client's application in a cooperative information system, such as a service-oriented system, one of the prerequisites is the reconciliation of the service-specific security policies of all stakeholders. Existing service discovery research does not address the enormous search space involved in finding security-aware services based on the client's preferred security policy alternatives. In this paper, we propose an answer set programming (ASP) approach, drawn from the field of artificial intelligence (AI), to explore a viable solution for finding security-aware services for the client. We argue that the ASP approach can significantly reduce the search space and achieve considerable performance gains. We use ASP to: (i) specify security policies, including expressing service-specific security preference weighting and importance scoring in quantifiable terms; and (ii) reason about the compliance between the security policies of the client and those of the software service.
In this paper, we give a historical overview of the transition from classical game theory to epistemic game theory. To that purpose we will discuss how important notions such as reasoning about the opponents, belief hierarchies, common belief, and the concept of common belief in rationality arose, and gradually entered the game theoretic picture, thereby giving birth to the field of epistemic game theory. We will also address the question why it took game theory so long before it finally incorporated the natural aspect of "reasoning" into its analysis. To answer the latter question we will have a close look at the earliest results in game theory, and see how they shaped our approach to game theory for many years to come.
This paper considers the problem of searching for information equilibria in an oligopoly market with Stackelberg leaders. The framework considers the reflexive behavior of three agents with linear cost functions whose coefficients (i.e., marginal and fixed costs) differ across agents. The results of the study are as follows. First, models of reflexive games for a triopoly are developed that account for a diversity of agents' reasonings about the strategies of their environment. Second, formulas for calculating equilibria in games with three agents are derived for an arbitrary reflexion rank.
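For orientation only (the abstract does not reproduce the paper's reflexive-game formulas), the following sketches the standard linear-cost triopoly setting such models typically build on; all symbols are assumptions rather than the paper's notation.

```latex
% Hypothetical illustration of a linear-cost triopoly with belief-dependent
% best responses; not the paper's derivation.
\[
  p = a - b\,(q_1 + q_2 + q_3), \qquad C_i(q_i) = c_i q_i + d_i ,
\]
\[
  \pi_i = \bigl(a - b\,\textstyle\sum_j q_j\bigr)\, q_i - c_i q_i - d_i ,
  \qquad
  q_i^{*} = \frac{a - c_i - b \sum_{j \neq i} q_j^{e}}{2b},
\]
```

Here each agent's best response depends on its beliefs about the outputs of the other agents; in a reflexive game those beliefs themselves depend on the agent's reflexion rank, which is what the derived equilibrium formulas parameterize.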
Managing flood-related data to assist in disaster management is a process of critical importance during a flood. These data are heterogeneous and provided by different sources, and integrating them is a challenging task that allows new information to be inferred which helps limit the consequences of a flood. In this paper, we propose a novel approach that manages heterogeneous flood-related data based on semantic web techniques and helps limit the damage caused by floods. We first propose an ontology that formally describes the flood-related data, and we construct our knowledge graph by integrating heterogeneous data using this ontology. Then, we propose a reasoning approach using SHACL rules to infer new information that helps manage the flood disaster or anticipate future events. The experimental evaluation of our approach is conducted on a real case study in flood disaster management, with the aim of generating evacuation priorities. The results show that the proposed approach succeeds in managing heterogeneous flood-related data and in generating evacuation priorities in a very short time.
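As a toy stand-in for the kind of rule such an approach relies on (the rule, the vocabulary, and the use of a SPARQL INSERT in place of SHACL-AF are all assumptions for illustration), the sketch below infers a high evacuation priority for a building that lies in a flooded zone and hosts vulnerable residents.

```python
# Condition in the WHERE clause, inferred triple in the INSERT clause.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/flood#")
g = Graph()
g.add((EX.building42, RDF.type, EX.Building))
g.add((EX.building42, EX.locatedIn, EX.zoneA))
g.add((EX.building42, EX.hostsVulnerableResidents, Literal(True)))
g.add((EX.zoneA, EX.hasFloodStatus, EX.Flooded))

g.update("""
    PREFIX ex: <http://example.org/flood#>
    INSERT { ?b ex:evacuationPriority "high" }
    WHERE {
        ?b a ex:Building ;
           ex:locatedIn ?z ;
           ex:hostsVulnerableResidents true .
        ?z ex:hasFloodStatus ex:Flooded .
    }
""")

for building, priority in g.subject_objects(EX.evacuationPriority):
    print(building, "->", priority)   # ...flood#building42 -> high
```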
A distinct property of robot vision systems is that they are embodied. Visual information is extracted for the purpose of moving in and interacting with the environment. Thus, different types of perception-action cycles need to be implemented and evaluated.
In this paper, we study the problem of designing a vision system for object grasping in everyday environments. The vision system is targeted firstly at interaction with the world through recognition and grasping of objects, and secondly at serving as an interface between the reasoning and planning module and the real world. The latter provides the vision system with a certain task that drives it and defines a specific context, i.e. searching for or identifying a certain object and analyzing it for potential later manipulation. We deal with the cases of: (i) known objects, (ii) objects similar to already known objects, and (iii) unknown objects. The perception-action cycle is connected to the reasoning system through the idea of affordances. All three cases are also related to the state of the art and the terminology in the neuroscience literature.
Designing a remarkable product innovation is a difficult challenge that businesses today are continuously striving to tackle. This challenge is particularly present in the fuzzy front end of innovation, where the main product concept, the DNA of the innovation, is determined. A main challenge in the fuzzy front end is the reasoning process: innovation teams are faced with open-ended, ill-defined problems, where they need to make decisions about an unknown future while having only incomplete, ambiguous and contradictory insights available. We study the reasoning of experts: how they frame the problem to make sense of all the insights and create a basis for decision-making in relation to a new project. Based on case studies of five innovative products from various industries, we propose a Product DNA model for understanding reasoning in the fuzzy front end of innovation. The Product DNA model explains how experts reason and what directs their reasoning.
This paper concerns the internal structure of reasoning, which, consisting basically in conjecturing and refuting, is too often identified with only deducing, abducing and refuting, that is, with just the deductive search for consequences, hypotheses and refutations. Under such an identification it is forgotten that, in addition to consequences and hypotheses, there is a third class of conjectures: speculations, or proper guesses. Speculations are inferentially non-comparable with, or orthogonal to, the premises, and they generate creativity. We present a very simple formal view of the structure of commonsense reasoning, a mathematical model that allows us to show the importance of speculations and, especially, to begin their systematic computational search.
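One common formalization of this trichotomy is sketched below as a point of reference; the paper's exact definitions may differ. With the premises summarized by p, a conjecture q is anything consistent with p, and the conjectures split into three classes:

```latex
% Consistency condition for a conjecture: p \nvdash \neg q.
\[
  \underbrace{p \vdash q}_{\text{consequences}}, \qquad
  \underbrace{q \vdash p,\; q \not\equiv p}_{\text{hypotheses}}, \qquad
  \underbrace{p \nvdash q \ \text{and}\ q \nvdash p}_{\text{speculations}} .
\]
```

Speculations are thus exactly the conjectures that are inferentially incomparable with the premises, which is the sense in which the abstract calls them orthogonal.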
Linked Data seem to play a seminal role in the establishment of the Semantic Web as the next-generation Web. This is even more important for digital object collections and educational institutions that aim not only at promoting and disseminating their content but also at aiding its discoverability and contextualization. In this paper we show how repository metadata can be exposed as Linked Data, thus enhancing their machine understandability and contributing to the LOD cloud. We use a popular digital repository system, namely DSpace, as our deployment platform. Without requiring additional annotations that would make the curation task harder, educational resources are semantically enhanced by reusing and transforming existing metadata values. Our effort comes complete with an updated UI that allows for reasoning-based search and navigation between linked resources within and outside the scope of the digital repository. Therefore, ontological descriptions of resources can now be accessed from within the repository's core context, be linked to from outside datasets, link to external datasets, and be discovered by semantic search.
The synergy of Data Stream Management Systems and Semantic Web applications has steered towards a new paradigm known as Stream Reasoning. The Semantic Web standards for knowledge base modeling and querying, namely RDF, OWL and SPARQL, have been used extensively by the Stream Reasoning community. However, the Semantic Web rule languages, such as SWRL and RIF, have never been used in stream data applications; instead, various non-Semantic Web rule systems have been adopted. Since RIF is primarily intended for exchanging rules among systems, we focused on SWRL applications with stream data. This proves difficult owing to SWRL's open-world semantics. To overcome SWRL's expressivity issues we propose an infrastructure extension that enables SWRL reasoning with stream data. Namely, a query processing system, such as C-SPARQL, was layered under SWRL to support closed-world and time-aware reasoning. Moreover, OWL API constructs were utilized to enable non-monotonicity, while SPARQL constructs were used to enable negation as failure. Water quality monitoring was used as the validation domain for the proposed system.
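For illustration only (this is not the paper's C-SPARQL/SWRL stack), the toy snippet below shows negation as failure expressed with SPARQL's FILTER NOT EXISTS over a snapshot of windowed stream data: a sensor with no reading in the current window is flagged. All names are hypothetical.

```python
# Closed-world style check on a window snapshot held in an rdflib graph.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/water#")
g = Graph()
g.add((EX.sensor1, RDF.type, EX.Sensor))
g.add((EX.sensor2, RDF.type, EX.Sensor))
g.add((EX.sensor1, EX.hasReadingInWindow, EX.reading42))  # only sensor1 reported

silent = g.query("""
    PREFIX ex: <http://example.org/water#>
    SELECT ?s WHERE {
        ?s a ex:Sensor .
        FILTER NOT EXISTS { ?s ex:hasReadingInWindow ?r }
    }
""")
for row in silent:
    print("no data in current window:", row.s)   # -> ...water#sensor2
```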
This paper describes research aimed at designing realistic reasoning techniques for humanoid robots endowed with advanced skills. Robots operating in real-world environments are expected to exhibit very complex behaviors, such as manipulating everyday objects, moving in crowded environments, or interacting with people both socially and physically. Such capabilities, yet to be achieved, pose the problem of being able to reason about hundreds or even thousands of different objects, places and possible actions, each one relevant for achieving the robot's goals or motivations. This article proposes a functional representation of everyday objects, places and actions described in terms of such abstractions as affordances and capabilities. The main contribution is twofold: (i) affordances and capabilities are represented as neural maps grounded in proper metric spaces; (ii) the reasoning process is decomposed into two phases, namely problem awareness (which is the focus of this work) and action selection. Experiments in simulation show that large-scale reasoning problems can be easily managed in the proposed framework.
In the last 10 years, Artificial Intelligence (AI) has seen successes in fields such as natural language processing, computer vision, speech recognition, robotics and autonomous systems. However, these advances are still considered Narrow AI, i.e. AI built for very specific or constrained applications. Such applications are useful for improving the quality of human life, but they cannot perform the highly general tasks that humans can. The holy grail of AI research is to develop Strong AI, or Artificial General Intelligence (AGI), which exhibits human-level intelligence, i.e. the ability to sense, understand, reason, learn and act in dynamic environments. Strong AI is more than just a composition of Narrow AI technologies. We propose that it has to take a holistic approach towards understanding and reacting to the operating environment and the decision-making process. Strong AI must be able to demonstrate sentience, emotional intelligence, imagination, effective command of other machines or robots, and self-referring and self-reflecting qualities. This paper gives an overview of current Narrow AI capabilities, presents the technical gaps, and highlights future research directions for Strong AI. Could Strong AI become conscious? We provide some discussion pointers.