This paper defines a new concurrent logic language, Nested Guarded Horn Clauses (NGHC). The main new feature of the language is its concept of guard. In fact, an NGHC clause has several layers of (standard) guards. This syntactic innovation allows the definition of a complete (i.e. always applicable) set of unfolding rules and therefore of an unfolding semantics which is equivalent, with respect to the success set, to the operational semantics. A fixpoint semantics is also defined in the classic logic programming style and is proved equivalent to the unfolding one. Since it is possible to embed Flat GHC into NGHC, our method can be used to give a fixpoint semantics to FGHC as well.
We introduce an operational model of concurrent systems, called automata with concurrency relations. These are labeled transition systems 𝒜 in which the event set is endowed with a collection of symmetric binary relations which describe when two events at a particular state of 𝒜 commute. This model generalizes Stark's recent concept of trace automata. A permutation equivalence for computation sequences of 𝒜 arises canonically, and we obtain a natural domain (D(𝒜), ≤) comprising the induced equivalence classes. We give a complete order-theoretic characterization of all such partial orders (D(𝒜), ≤), which turn out to be particular finitary domains. The arising domains (D(𝒜), ≤) are particularly pleasant Scott-domains if 𝒜 is assumed to be concurrent, i.e. if the concurrency relations of 𝒜 depend (in a natural way) locally on each other, but not necessarily globally. We show that both event domains and dI-domains arise, up to isomorphism, as domains (D(𝒜), ≤) with well-behaved such concurrent automata 𝒜. We introduce a subautomaton relationship for concurrent automata and show that, given two concurrency domains (D, ≤), (D′, ≤′), there exists a nice stable embedding-projection pair from D to D′ iff D and D′ can be generated by concurrent automata 𝒜 and 𝒜′ such that 𝒜 is a subautomaton of 𝒜′. Finally, we introduce the concept of locally finite concurrent automata as a limit of finite concurrent automata and show that there exists a universal homogeneous locally finite concurrent automaton, which is unique up to isomorphism.
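The permutation equivalence described above can be made concrete with a small sketch. Note one deliberate simplification: in the abstract, the commutation relations depend on the state at which the two events occur, whereas this sketch assumes a single state-independent independence relation (in the style of Mazurkiewicz traces); the event names are hypothetical.

```python
from collections import deque

def permutation_equivalent(seq1, seq2, independent):
    """Decide whether two event sequences are equal up to repeatedly
    swapping adjacent independent events, by breadth-first search over
    adjacent transpositions."""
    if sorted(seq1) != sorted(seq2):
        return False
    start, target = tuple(seq1), tuple(seq2)
    seen = {start}
    frontier = deque([start])
    while frontier:
        s = frontier.popleft()
        if s == target:
            return True
        for i in range(len(s) - 1):
            a, b = s[i], s[i + 1]
            if (a, b) in independent or (b, a) in independent:
                t = s[:i] + (b, a) + s[i + 2:]
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return False

ind = {("a", "b")}   # events 'a' and 'b' commute; 'c' commutes with nothing
print(permutation_equivalent(["a", "b", "c"], ["b", "a", "c"], ind))   # True
print(permutation_equivalent(["a", "c", "b"], ["a", "b", "c"], ind))   # False
```

The equivalence classes of this relation are exactly the elements from which the induced domain is built.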
In the world of Big Data analytics, there is a series of tools aiming at simplifying the programming of applications to be executed on clusters. Although each tool claims to provide better programming, data and execution models (for which only informal, and often confusing, semantics is generally provided), all share a common underlying model, namely the Dataflow model. The model we propose shows how various tools share the same expressiveness at different levels of abstraction. The contribution of this work is twofold: first, we show that the proposed model is (at least) as general as existing batch and streaming frameworks (e.g., Spark, Flink, Storm), thus making it easier to understand high-level data-processing applications written in such frameworks. Second, we provide a layered model that can represent tools and applications following the Dataflow paradigm, and we show how the analyzed tools fit in each level.
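As a minimal illustration of the shared Dataflow core (operators wired into a graph, with data elements flowing through it one at a time as in streaming, or pushed repeatedly as a batch), one might sketch, with hypothetical names:

```python
class Node:
    """A minimal dataflow operator: applies fn to each incoming element
    and forwards the result to downstream nodes (or to the sink)."""
    def __init__(self, fn):
        self.fn = fn
        self.downstream = []

    def to(self, node):
        self.downstream.append(node)
        return node

    def push(self, item, sink):
        out = self.fn(item)
        if out is None:          # None models a filtered-out element
            return
        if self.downstream:
            for n in self.downstream:
                n.push(out, sink)
        else:
            sink.append(out)

# A map -> filter pipeline in the style of Spark/Flink operator chains.
src = Node(lambda x: x * 2)
src.to(Node(lambda x: x if x > 4 else None))
out = []
for x in [1, 2, 3]:
    src.push(x, out)
print(out)   # [6]
```

The same graph serves both batch (push a finite collection) and streaming (push elements as they arrive), which is the sense in which the frameworks share one underlying model.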
Recognition of human body motion patterns in video images is an important research direction in the field of pattern recognition, with broad application prospects in intelligent video surveillance, human-computer interaction, motion analysis, video retrieval, and related fields; it has accordingly received extensive attention from researchers. Pattern recognition is a branch of artificial intelligence with a distinctive role within that field, and accurate recognition of human motion patterns in video images greatly helps image classification, retrieval, human tracking, and video surveillance. Based on the human visual perception mechanism, this paper proposes a human behavior recognition algorithm based on a semantic saliency map. By combining a sliding window with a similarity measure, the algorithm finds the behavioral region that best exhibits the semantic features of the image: the semantically salient region. This salient region and the original image are then used as a dual input source for human behavior recognition, and the image is enhanced. Exploiting salient-region information better reveals the identifiable area of the image and contributes to the recognition of human behavior.
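A minimal sketch of the sliding-window search for the most salient region might look as follows; the abstract does not specify the similarity measure, so each window is simply scored by its total saliency, and all names are hypothetical:

```python
def most_salient_window(saliency, win=3):
    """Slide a win x win window over a 2D saliency map (list of lists)
    and return the top-left corner of the highest-scoring window,
    scoring each window by its total saliency."""
    h, w = len(saliency), len(saliency[0])
    best, best_pos = float("-inf"), (0, 0)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            score = sum(saliency[y + dy][x + dx]
                        for dy in range(win) for dx in range(win))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

# A 6x6 map with a bright 3x3 blob whose top-left corner is (2, 2).
sal = [[0.0] * 6 for _ in range(6)]
for y in range(2, 5):
    for x in range(2, 5):
        sal[y][x] = 1.0
print(most_salient_window(sal))   # (2, 2)
```

The region found this way would then be paired with the original frame as the dual input the abstract describes.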
Hindi exhibits an obligatory grammatical agreement pattern (subject-verb, object-verb, or neutral agreement), and any adequate parser must reject or mark strings that violate grammatical agreement. We develop a combination of strategies for parsing grammatical agreement in Hindi and implement them in an Augmented Transition Network (ATN) parser. It is shown that semantic information is required to parse grammatical agreement in Hindi.
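The agreement pattern itself can be sketched as follows; this is not the paper's ATN implementation, only the standard descriptive rule that the verb agrees with an unmarked subject, with an unmarked object when the subject carries the ergative marker 'ne', and otherwise defaults to neutral (masculine singular) agreement:

```python
NEUTRAL = ("m", "sg")   # default masculine singular agreement

def expected_agreement(subj, obj):
    """Return the (gender, number) features the verb must carry.
    Each argument is (features, case_marked), where features is a
    (gender, number) tuple and case_marked says whether the noun
    carries an overt postposition blocking agreement."""
    subj_feats, subj_marked = subj
    obj_feats, obj_marked = obj
    if not subj_marked:
        return subj_feats        # subject-verb agreement
    if not obj_marked:
        return obj_feats         # object-verb agreement
    return NEUTRAL               # neutral agreement

# "larke ne roti khayi": the subject is ergative-marked with 'ne',
# the object 'roti' (feminine singular) is unmarked, so the verb
# must be feminine singular.
print(expected_agreement((("m", "sg"), True), (("f", "sg"), False)))
```

A parser can reject a string whenever the verb's actual features differ from the value this rule predicts.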
Semantics is the essence of human communication. It concerns the manufacture and use of symbols as representations to exchange meanings. Information technology is faced with the problem of using intelligent machines as intermediaries for interpersonal communication. The problem of designing such semantic machines has been intractable because brains and machines work on very different principles. Machines process information that is fed to them. Brains construct hypotheses and test them by acting and sensing. Brains do not process information because the intake through the senses is infinite. Brains sample information, hold it briefly, construct meaning, and then discard the information. A solution to the problem of communication with machines is to simulate how brains create meaning and express it as information by making a symbol to represent the meaning to another brain in pairwise communication. An understanding of the neurodynamics by which brains create meaning and represent it may enable engineers to build devices with which they can communicate pairwise, as they do now with colleagues.
In recent years, 3D media have become more and more widespread and have been made available in numerous online repositories. A systematic and formal approach for representing and organizing shape-related information is needed to share 3D media, to communicate the knowledge associated with shape modeling processes and to facilitate its reuse in cross-domain scenarios. In this paper we present an initial attempt to formalize an ontology for digital shapes, called the Common Shape Ontology (CSO). We discuss the rationale, the requirements and the scope of this ontology, present its structure in detail and describe the most relevant choices related to its development. Finally, we show how the CSO conceptualization is used in domain-specific application scenarios.
In this study, we highlight some fundamental issues of knowledge management and cast them in the setting of Granular Computing (GrC). We show how its formal constructs, information granules, are instrumental in knowledge representation and in specifying its level of abstraction.
"The new source of power is not money in the hands of a few, but information in the hands of many." This quote from John Naisbitt seems even more relevant in the world of finance at this very moment. Many financial decisions come from watching the information stream, selecting relevant data, analyzing it and acting accordingly. With increasing global competition, the need for swift data analysis, high accuracy and quality becomes a must. The XBRL (Extensible Business Reporting Language; http://www.xbrl.org/) standard was proposed to improve the efficiency of data exchange in the financial domain. However, it is still struggling with interoperability problems, not to mention comparability of data or multi-source data integration. This paper presents the FLORA intelligent platform: an approach for dealing with current financial information shortcomings and achieving a more effective way of processing financial data based on the Linked Data principles. The article also explains the processes of data extraction and semantic modeling, which are the cornerstones of efficient financial data analysis. As a result, the FLORA architecture facilitates effective, data-driven financial analyses and Web-scale integration between financial applications and platforms.
We present MABLE, a fully implemented programming language for multiagent systems, which is intended to support the automatic verification of such systems via model checking. In addition to the conventional constructs of imperative programming languages, MABLE provides a number of agent-oriented development features. First, agents in MABLE are endowed with a BDI-like mental state: they have data structures corresponding to beliefs, desires, and intentions, and these mental states may be arbitrarily nested. Second, agents in MABLE communicate via ACL-like performatives: however, neither the performatives nor their semantics are hardwired into the language. It is possible to define the performatives and the semantics of these performatives independently of the system in which they are used. Using this feature, a developer can explore the design space of ACL performatives and semantics without changing the target system. Finally, MABLE supports automatic verification via model checking. Claims about the behaviour of a MABLE system can be expressed in a linear-time BDI-like logic, and the truth, or otherwise, of these claims can be automatically determined. Following a description of the MABLE language and the language of MABLE claims, we present two case studies to illustrate the language and its use in the verification of multiagent systems. We then describe the key ideas underpinning the current implementation of MABLE. Finally, we survey related work, and discuss some avenues for future research.
This work studies the collective intelligence behavior of Web users who share and watch video content. It is proposed that aggregated user video activity exhibits characteristic patterns, which may be used to infer important video scenes, thus yielding collective intelligence about the video content. To this end, experimentation is based on users' interactions (e.g., pause, seek/scrub) gathered in a controlled user experiment with information-rich videos. Collective information-seeking behavior is then modeled by means of the corresponding probability distribution function. The bell-shaped reference patterns are shown to correlate significantly with predefined scenes of interest for each video, as annotated by the users. In this way, the observed collective intelligence may be used to provide a video-segment detection tool that identifies the importance of video scenes. Accordingly, both a stochastic and a pattern matching approach are applied to the users' interaction information. The results indicate increased accuracy in identifying the areas selected by users as carrying high-importance information. In practice, the proposed techniques might improve both navigation within videos on the web and video search results with personalised video thumbnails.
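A minimal sketch of correlating aggregated interactions with a bell-shaped reference pattern, assuming one-second bins and a Gaussian reference (the paper's actual stochastic and pattern-matching procedures are not reproduced here):

```python
import math

def interest_scores(interaction_times, video_len, sigma=2.0):
    """Bin user interactions (pauses, seeks) into one-second bins and
    correlate the histogram with a bell-shaped (Gaussian) reference
    pattern; peaks in the result suggest scenes of collective interest."""
    hist = [0] * video_len
    for t in interaction_times:
        if 0 <= t < video_len:
            hist[int(t)] += 1
    half = int(3 * sigma)
    bell = [math.exp(-d * d / (2 * sigma * sigma))
            for d in range(-half, half + 1)]
    return [
        sum(hist[i + d] * bell[d + half]
            for d in range(-half, half + 1) if 0 <= i + d < video_len)
        for i in range(video_len)
    ]

# Simulated interactions clustering around t = 30 s in a 60 s video.
times = [28, 29, 30, 30, 31, 32, 5]
scores = interest_scores(times, video_len=60)
peak = max(range(60), key=scores.__getitem__)
print(peak)   # 30
```

The resulting peaks would then be matched against the user-annotated scenes of interest.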
The relevant contributions of fuzzy logic to logical models in information retrieval are studied. Fuzzy logic makes it possible to grasp the graduality of some relevant concepts and to model both the imprecision and the uncertainty inherent in the retrieval process, still within the framework of the broadly meant logical approach. In this perspective we discuss various extensions to the basic Boolean model which are needed to attain such greater expressivity. In particular, we show how the well-known semantics of keyword weights may be recovered in various fuzzy-logic-based information retrieval models.
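One classical way such fuzzy extensions of the Boolean model are realized is the min/max interpretation of the connectives over term weights in [0, 1]; a minimal sketch (the weighted models discussed in the paper are richer than this):

```python
def fuzzy_eval(query, doc_weights):
    """Evaluate a Boolean query against fuzzy term weights in [0, 1],
    using min for AND, max for OR, and 1 - x for NOT. Queries are
    nested tuples such as ("and", ("term", "a"), ("term", "b"))."""
    op = query[0]
    if op == "term":
        return doc_weights.get(query[1], 0.0)
    if op == "and":
        return min(fuzzy_eval(q, doc_weights) for q in query[1:])
    if op == "or":
        return max(fuzzy_eval(q, doc_weights) for q in query[1:])
    if op == "not":
        return 1.0 - fuzzy_eval(query[1], doc_weights)
    raise ValueError("unknown operator: %r" % op)

# A document indexed with graded term weights (hypothetical values).
doc = {"finance": 0.9, "fuzzy": 0.4}
q = ("and", ("term", "finance"),
            ("or", ("term", "fuzzy"), ("term", "logic")))
print(fuzzy_eval(q, doc))   # min(0.9, max(0.4, 0.0)) = 0.4
```

Documents can then be ranked by this graded score rather than filtered by a crisp yes/no match.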
Many emerging applications need continuous querying over uncertain event streams, mostly for online monitoring. These streaming uncertain events may come from radars, sensors, or even software hooks. The uncertainty is usually due to measurement errors, inherent ambiguities and privacy-preserving mechanisms. To cover these new requirements, we designed and implemented a new system called the Probabilistic Data Stream Management System (PDSMS) in Ref. 1. PDSMS is a data processing engine which runs continuous queries over probabilistic streams. However, the lack of a semantics for probabilistic databases that supports continuous distributions prevented us from having a strong foundation for our query operators. It also precluded us from proving the consistency and correctness of query operations, especially after optimization and adaptation. In fact, in the probabilistic database literature, there is no semantics available which covers continuous distributions. This limitation is very restrictive since, in the real world, uncertainty is usually modeled by continuous distributions. In this paper, after presenting a basic probabilistic data model for PDSMS, we focus on querying and formally present the first semantics for probabilistic query operations which supports continuous distributions as well as discrete ones. Using this new semantics, we define our query operators (e.g. select, project, and join) formally, without ambiguity and compatibly with the operators of relational algebra. Thus, we can also leverage many transformation rules of relational algebra. The new semantics allows us to have different strictness levels and consistency between operators. We also prove several strictness theorems about different alternatives for the query operators.
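As a hedged illustration of a select over events with continuous distributions (not the paper's formal semantics; a simple sketch assuming normally distributed attribute values and independent existence probabilities):

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def prob_select(events, threshold):
    """'Select' over uncertain events whose attribute is normally
    distributed: each event survives with the probability that its
    value exceeds the threshold, scaled by its existence probability."""
    out = []
    for mu, sigma, p_exist in events:
        p = p_exist * (1 - normal_cdf(threshold, mu, sigma))
        if p > 0:
            out.append((mu, sigma, p))
    return out

# Hypothetical radar readings: (mean speed, std dev, existence prob.).
events = [(80.0, 5.0, 1.0), (50.0, 5.0, 0.9)]
fast = prob_select(events, threshold=65.0)
```

The first event keeps nearly all its probability mass, while the second is retained with only a tiny probability; a formal semantics must make such operator outputs well defined and composable.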
Modern organizations are keen to address their customers' needs. To achieve this, it becomes important to analyze customer activities and identify their interest in any entity. From an organization's point of view, every user is a most important asset, and organizations are reluctant to lose even a single one. Several approaches discussed earlier use artificial intelligence to mine users and their interests. Deep learning algorithms have been identified as the most efficient at identifying user interest, yet they still struggle to achieve high performance. To address this issue, an efficient multi-feature semantic-similarity-based online social recommendation system is proposed. The method uses a Convolutional Neural Network (CNN) to train on and predict user interest in any topic. Each layer is identified with a single interest, and the neurons of the layers are initialized with a huge data set. The neurons estimate the Multi-Feature Semantic Similarity (MFSS) towards each interest of the user. Finally, the method identifies the single interest for the user by ranking each interest to produce recommendations. The proposed algorithm improves the performance of recommendation generation with a lower false ratio.
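The MFSS computation is not specified in detail in the abstract; as a stand-in, a sketch that averages per-feature cosine similarities and ranks candidate interests, with hypothetical feature vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_interests(user_features, interests):
    """Score each candidate interest by averaging per-feature cosine
    similarities against the user's features, then rank descending;
    the top item becomes the recommendation."""
    scored = []
    for name, feats in interests.items():
        sims = [cosine(u, f) for u, f in zip(user_features, feats)]
        scored.append((sum(sims) / len(sims), name))
    scored.sort(reverse=True)
    return [name for _, name in scored]

user = [[1.0, 0.0], [0.2, 0.8]]          # e.g. text and behaviour features
interests = {
    "sports": [[0.9, 0.1], [0.1, 0.9]],
    "news":   [[0.1, 0.9], [0.9, 0.1]],
}
print(rank_interests(user, interests))    # ['sports', 'news']
```

In the proposed system the similarity would be estimated by CNN neurons rather than computed directly; the ranking step is the same.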
The behaviour analysis loop is now largely performed on a virtual product model before its physical manufacturing, which avoids the high expense, in money and time, of intermediate manufacturing. Moving from the physical to the virtual is thus beneficial, but the process could be further optimized, especially during the product behaviour optimization phase. This process involves the repetition of four main steps: CAD design and modification, mesh creation, Finite Element (FE) model generation with the association of physical and geometric data, and FE analysis. The product behaviour analysis loop is performed on the first design solution as well as on the numerous successive product optimization loops. Evaluating each design solution takes as much time as the first product design, which is particularly crucial in the context of maintenance.
In this paper we propose a new framework for CAD-less product optimisation through FE analysis which reduces the model preparation activities traditionally required for FE model creation. More concretely, the idea is to operate directly on the initially created FE mesh, enriched with physical/geometric semantics, to perform the product modifications required to achieve its optimised version.
To accomplish the proposed CAD-less FE analysis framework, modification operators acting on both the mesh geometry and the associated semantics need to be devised. In this paper we discuss the underlying concepts and present possible components for the development of such operators. A high-level operator specification is proposed according to a modular structure that allows an easy realisation of different mesh modification operators. Here, two instances of this high-level operator are described: planar cracking and drilling. The realised prototypes, validated on industrial FE models, clearly show the feasibility of this approach.
Ontobroker applies Artificial Intelligence techniques to improve access to heterogeneous, distributed and semi-structured information sources as they are presented in the World Wide Web or organization-wide intranets. It relies on the use of ontologies to annotate web pages, formulate queries and derive answers. In this paper we will briefly sketch Ontobroker. Then we will discuss its main shortcomings, i.e. we will share the lessons we learned from our exercise. We will also show how On2broker overcomes these limitations. Most important is the separation of the query and inference engines and the integration of new web standards like XML and RDF.
This paper presents a semantics-based dynamic service composition architecture that composes an application through combining distributed components based on the semantics of the components. This architecture consists of a component model called Component Service Model with Semantics (CoSMoS), a middleware called Component Runtime Environment (CoRE), and a service composition mechanism called Semantic Graph based Service Composition (SeGSeC). CoSMoS represents the semantics of components. CoRE provides interfaces to discover and access components modeled by CoSMoS. SeGSeC composes an application by discovering components through CoRE, and synthesizing a workflow of the application based on the semantics of the components modeled by CoSMoS.
This paper describes the latest design of the semantics-based dynamic service composition architecture, and also illustrates the implementation of the architecture based on the Web Service standards, i.e. WSDL, RDF, SOAP, and UDDI. The Web Service based implementation of the architecture allows existing Web Services to migrate onto the architecture without reimplementation. It also simplifies the development and deployment of a new Web Service on the architecture by automatically generating the necessary description files (i.e. WSDL and RDF files) of the Web Service from its runtime binary (i.e. a Java class file).
Development of agent systems is without question a complex task when autonomous, reactive and proactive characteristics of agents are considered. Furthermore, internal agent behavior model and interaction within the agent organizations become even more complex and hard to implement when new requirements and interactions for new agent environments such as the Semantic Web are taken into account. We believe that the use of both domain specific modeling and a Domain-specific Modeling Language (DSML) may provide the required abstraction and support a more fruitful methodology for the development of Multi-agent Systems (MASs) especially when they are working on the Semantic Web environment. Although syntax definition based on a metamodel is an essential part of a modeling language, an additional and required part would be the determination and implementation of DSML constraints that constitute the (formal) semantics which cannot be defined solely with a metamodel. Hence, in this paper, formal semantics of a MAS DSML called Semantic Web enabled Multi-agent Systems (SEA_ML) is introduced. SEA_ML is a modeling language for agent systems that specifically takes into account the interactions of semantic web agents with semantic web services. What is more, SEA_ML also supports the modeling of semantic agents from their internals to MAS perspective. Based on the defined abstract and concrete syntax definitions, we first give the formal representation of SEA_ML's semantics and then discuss its use on MAS validation. In order to define and implement semantics of SEA_ML, we employ Alloy language which is declarative and has a strong description capability originating from both relational and first-order logic in order to easily define complex structures and behaviors of these systems. 
Differentiating from similar contributions of other researchers on formal semantics definition for MAS development languages, SEA_ML's semantics, presented in this paper, defines both the static and the dynamic aspects of the interaction between software agents and semantic web services, in addition to the semantics already required for agent internals and MAS communication. The Alloy implementation lets the definition of SEA_ML's semantics employ relations and sets with a simple notation for MAS model definitions. We discuss how automatic analysis, and hence checking, of SEA_ML models can be realized with the defined semantics. The design of an agent-based electronic barter system is presented as an example to give some flavor of the use of SEA_ML's formal semantics. Lessons learned during the development of such a MAS DSML semantics are also reported in this paper.
This article gives a brief description of the Korean Society for Bioinformatics.
Providing the same pedagogical and educational methods to all students is pedagogically ineffective. In contrast, pedagogical strategies that adapt to the fundamental individual skills of the students have proved to be more effective. An important innovation in this direction is adaptive educational systems (AESs), which adjust the teaching content to the educational needs and skills of the students. Effective utilization of these approaches can be enhanced with artificial intelligence (AI) and semantic web technologies, which can increase data generation, access, flow, integration, and comprehension using the same open standards driving the World Wide Web. This study proposes a novel adaptive educational eLearning system (AEeLS) that can gather and analyze data from learning repositories and adapt it to the educational curriculum according to the student's skills and experience. It is an innovative hybrid machine learning system that combines a semi-supervised classification method for ontology matching with a recommendation mechanism that blends neighborhood-based collaborative and content-based filtering techniques to provide a personalized educational environment for each student.
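A minimal sketch of the hybrid recommendation step, blending a neighborhood-based collaborative score with a content-based similarity (the semi-supervised ontology-matching stage is omitted, and all names and values are hypothetical):

```python
def recommend(ratings, content_sim, student, alpha=0.5):
    """Hybrid score for items the student has not seen: a blend of the
    mean rating given by other students (neighborhood-based
    collaborative part) and the item's content similarity to the
    student's profile (content-based part). In practice both parts
    would be normalised to a common scale; this sketch blends them raw."""
    seen = set(ratings.get(student, {}))
    votes = {}
    for other, items in ratings.items():
        if other == student:
            continue
        for item, r in items.items():
            if item not in seen:
                votes.setdefault(item, []).append(r)
    blended = {
        item: alpha * (sum(rs) / len(rs))
              + (1 - alpha) * content_sim.get(item, 0.0)
        for item, rs in votes.items()
    }
    return max(blended, key=blended.get)

ratings = {
    "alice": {"algebra": 5, "logic": 4},
    "bob":   {"algebra": 5, "sets": 2},
    "carol": {"sets": 5, "stats": 4},
}
content_sim = {"sets": 0.3, "stats": 0.8}   # similarity to alice's profile
print(recommend(ratings, content_sim, "alice"))   # stats
```

In the proposed system, the recommended item would be a unit of teaching content matched to the student's skills rather than a course topic.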