According to recent conjectures on the existence of large extra dimensions in our universe, black holes could be produced during the interaction of Ultra High Energy Cosmic Rays with the atmosphere. So far, however, the proposed signatures are based on statistical effects, which do not allow identification on an event-by-event basis and may be subject to large uncertainties. In this note, events with a double bang topology, in which the production and instantaneous decay of a microscopic black hole (first bang) is followed, at a measurable distance, by the decay of an energetic tau lepton (second bang), are proposed as an almost background-free signature.
The Software & Systems Process Engineering meta-model (SPEM) allows the modelling of software processes using OMG (Object Management Group) standards such as the MOF (Meta-Object Facility) and UML (Unified Modelling Language), making it possible to represent software processes using UML-compliant tools. Process definition encompasses both the static and dynamic structure of roles, tasks and work products, together with constraints imposed on those elements. However, the latter requires support for constraint enforcement that is not always directly available in SPEM. Such constraint-checking behaviour could be used to detect possible mismatches between process definitions and the actual processes being carried out in the course of a project. This paper approaches the modelling of such constraints using the SWRL (Semantic Web Rule Language), a W3C recommendation. To do so, we first represent generic processes modelled with SPEM using an underlying ontology based on the OWL (Web Ontology Language) representation, together with data derived from actual projects.
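The kind of process constraint discussed above can be sketched as a SWRL rule in its human-readable syntax. The predicate names below are hypothetical, chosen only to illustrate one plausible constraint: whoever performs a task must hold the role that the task requires.

```
# Illustrative SWRL rule (hypothetical vocabulary, not the paper's actual rule set):
Task(?t) ^ performedBy(?t, ?p) ^ requiresRole(?t, ?r) -> hasRole(?p, ?r)
```

Run against project data, a rule like this can expose mismatches where a task was carried out by a person whose asserted roles do not include the required one.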
Despite the many integration tools proposed for mapping between OWL ontologies and the object-oriented paradigm, developers are still reluctant to incorporate ontologies into their code repositories. In this paper we survey existing approaches for OWL-to-OOP mapping, trying to identify reasons for this limited adoption of ontologies among conventional software developers. We present a classification of the surveyed approaches and tools based on their technical characteristics and their resulting artifacts. We discuss further potential reasons beyond those addressed in the literature, before finally providing our own reflection and outlook.
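The core idea behind OWL-to-OOP mapping can be sketched in a few lines. This is an illustrative toy, not any surveyed tool's API: a minimal OWL class description, flattened into a schema dictionary, is turned into a plain Python class whose attributes mirror the ontology's datatype properties.

```python
# Hedged sketch of OWL-to-OOP mapping (illustrative names, not a real tool).
# A hypothetical OWL class such as
#   :Person a owl:Class  (with datatype properties :hasName, :hasAge)
# is flattened here into a simple schema dictionary.
owl_person = {
    "name": "Person",
    "datatype_properties": {"hasName": str, "hasAge": int},
}

def owl_class_to_python(schema):
    """Generate a Python class whose annotated attributes mirror the
    datatype properties of the (flattened) OWL class."""
    annotations = dict(schema["datatype_properties"])
    return type(schema["name"], (object,), {"__annotations__": annotations})

Person = owl_class_to_python(owl_person)
p = Person()
p.hasName = "Ada"   # instances play the role of OWL individuals
```

Real mappers must additionally handle multiple inheritance, open-world semantics, and anonymous class expressions, which is where much of the friction surveyed in the paper arises.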
Machine learning has been implemented as a part of many software systems to support data-driven decisions and recommendations. A prominent machine learning technique is the artificial neural network, which provides no explanation of how it produces its output. However, many application domains require algorithmic decision making to be transparent, so explainability in these systems has become an important challenge. This paper proposes an automated framework that elicits the contributing rules describing how a neural network model makes its decisions. The explainability of the contributing rules can be measured, and the rules can reveal issues in the training dataset. With an ontology representation of the contributing rules, an individual decision can be automatically explained through ontology reasoning. We have developed a tool that supports applying our framework in practice. An evaluation has been conducted to assess the effectiveness of our framework using open datasets from different domains. The results show that our framework performs well at explaining neural network models, achieving an average accuracy of 81% in explaining the subject models. Our framework also requires significantly less processing time than the compared technique.
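The general idea of eliciting rules from a trained model can be sketched as follows. This is a hedged toy, not the paper's algorithm: a linear function stands in for the trained neural network, candidate single-feature threshold rules are probed on a grid of inputs, and each rule is scored by its agreement (fidelity) with the model's predictions.

```python
# Toy sketch of rule elicitation from a black-box model (illustrative only).

def model(x):
    """Stand-in for a trained neural network classifier."""
    return 1 if 0.8 * x[0] + 0.2 * x[1] > 0.5 else 0

def elicit_rules(model, grid):
    """Try rules of the form 'IF x[i] > t THEN class 1' and score each by
    its agreement (fidelity) with the model over the probe grid."""
    rules = []
    for i in (0, 1):
        for t in (0.25, 0.5, 0.75):
            preds = [(1 if x[i] > t else 0) for x in grid]
            truth = [model(x) for x in grid]
            fidelity = sum(p == y for p, y in zip(preds, truth)) / len(grid)
            rules.append(((i, t), fidelity))
    return max(rules, key=lambda r: r[1])   # best-agreeing rule

grid = [(a / 4, b / 4) for a in range(5) for b in range(5)]
best_rule, fidelity = elicit_rules(model, grid)
# best_rule is the (feature, threshold) pair that best mimics the model
```

A rule whose fidelity is high but whose predictions disagree with the ground-truth labels is exactly the kind of signal that can point at issues in the training dataset.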
This paper investigates a secure mechanism for Electronic Health Records (EHR) exchange over a Peer-to-Peer (P2P) agent-based coordination framework. Our study is based on the SemHealthCoord framework, a platform for the exchange of EHR between autonomous health organisations that extends the existing interoperability standards as proposed by Integrating the Healthcare Enterprise (IHE). Every health organisation in SemHealthCoord represents a community within a P2P network. Communities use a set of autonomous agents and a set of distributed coordination rules to coordinate the agents in the search for specific health records. To enable secure interactions among communities, we propose the use of asymmetric keys and digital certificates. We specify the interaction protocols to provide integrity and authenticity between the communities, and, to illustrate the scalability of our approach, we evaluate the proposed solution in distributed settings by comparing the performance of secured and unsecured data exchange. The contribution of this work is that it enables IHE-based health communities to dynamically exchange patients' EHR within a secure P2P agent coordination framework.
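How asymmetric keys provide integrity and authenticity can be illustrated with textbook RSA and deliberately tiny numbers. This is a toy, not the paper's protocol and not secure: the sender signs a digest of the record with its private key, and any receiver can verify the signature with the matching public key.

```python
# Toy textbook-RSA signature (tiny key, NOT secure; illustration only).
import hashlib

# Hypothetical toy key pair: n = p*q, public exponent e, private exponent d.
p_, q_, e = 61, 53, 17
n = p_ * q_                      # 3233
phi = (p_ - 1) * (q_ - 1)        # 3120
d = pow(e, -1, phi)              # modular inverse of e, here 2753

def digest(message: bytes) -> int:
    # Reduce a SHA-256 digest modulo n so it fits the toy key size.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(digest(message), d, n)      # private-key operation (sender)

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == digest(message)  # public-key op (receiver)

record = b"EHR: patient record, community A -> community B"
sig = sign(record)
# Verification succeeds for the original record; a modified record would
# (almost surely) produce a different digest and fail verification.
assert verify(record, sig)
```

In the framework described above, digital certificates additionally bind each community's public key to its identity, so a receiving community knows which peer produced a given signature.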
Ontology authoring is a complex process, where commonly the automated reasoner is invoked for verification of newly introduced changes, therewith amounting to a time-consuming test-last approach. Test-Driven Development (TDD) for ontology authoring is a recent test-first approach that aims to reduce authoring time and increase authoring efficiency. Current TDD testing falls short on coverage of OWL features and possible test outcomes, the rigorous foundation thereof, and evaluations to ascertain its effectiveness. We aim to address these issues in one instantiation of TDD for ontology authoring. We first propose a succinct, logic-based specification of TDD testing and present novel TDD algorithms that also cover any OWL 2 class expression for the TBox and the principal ABox assertions, and prove their correctness. The algorithms use methods from the OWL API directly, such that reclassification is not necessary for test execution, therewith reducing ontology authoring time. The algorithms were implemented in TDDonto2, a Protégé plugin. TDDonto2 was evaluated by users, which demonstrated that modellers make significantly fewer errors with TDDonto2 compared to the standard Protégé interface and complete their tasks better in less time. Thus, the results indicate that TDD is a promising approach in an ontology development methodology.
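The test-first idea can be sketched over a toy in-memory taxonomy. This is a simplification, not TDDonto2 or the OWL API: before adding a candidate axiom, a TDD test checks whether the ontology already entails it, and the outcome tells the author whether adding the axiom is warranted or redundant.

```python
# Minimal sketch of TDD-for-ontologies over a toy asserted taxonomy
# (hypothetical class names; real TDD also distinguishes further outcomes,
# e.g. axioms whose addition would make the ontology incoherent).

subclass_of = {"GlassDoor": "Door", "Door": "Artifact"}   # toy TBox

def entails_subclass(sub, sup):
    """True if the asserted taxonomy already entails sub SubClassOf sup."""
    while sub in subclass_of:
        sub = subclass_of[sub]
        if sub == sup:
            return True
    return False

def tdd_test(sub, sup):
    """TDD outcome for the candidate axiom 'sub SubClassOf sup'."""
    if entails_subclass(sub, sup):
        return "entailed"   # already follows; adding it would be redundant
    return "absent"         # test fails first; the author may now add it

outcome1 = tdd_test("GlassDoor", "Artifact")   # entailed via Door
outcome2 = tdd_test("Door", "GlassDoor")       # not entailed
```

The point of the paper's OWL API-based algorithms is to compute such outcomes for arbitrary OWL 2 class expressions without triggering a full reclassification for every test.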
An ontology represents a data source at a higher level of abstraction. Extracting metadata from an autonomous data source and transforming it into a source ontology is a tedious and error-prone task, because the metadata are either incomplete or not available. The essential metadata of a source can, however, be extracted from its data. Our proposed methodology extracts the essential metadata from the data through reverse engineering. In addition, it comprises a set of transformation rules that transform the extracted metadata into an ontology. The transformation system, R2O, has been implemented. The evaluation of the proposed transformation is based on two factors, namely (a) correct identification and transformation of metadata and (b) preservation of information capacity. The research has been evaluated through experimental results and mathematical proof. The evaluation shows that the transformation is total and injective, and that it preserves information capacity.
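One transformation rule of the kind described above can be sketched as follows. This is illustrative, not R2O's actual rule set: a table reverse-engineered from the data becomes an OWL class, and each column becomes a datatype property with the class as its domain.

```python
# Hedged sketch of a table-to-ontology transformation rule (illustrative).

# Hypothetical metadata as extracted from a source by reverse engineering.
table = {"name": "Patient",
         "columns": [("pid", "integer"), ("pname", "string")],
         "primary_key": "pid"}

XSD = {"integer": "xsd:integer", "string": "xsd:string"}

def table_to_ontology(table):
    """Rule: Table -> owl:Class; Column -> owl:DatatypeProperty whose
    domain is the class and whose range is the mapped XSD datatype."""
    cls = table["name"]
    triples = [(f":{cls}", "rdf:type", "owl:Class")]
    for col, typ in table["columns"]:
        prop = f":{cls}.{col}"          # class-qualified property name
        triples += [(prop, "rdf:type", "owl:DatatypeProperty"),
                    (prop, "rdfs:domain", f":{cls}"),
                    (prop, "rdfs:range", XSD[typ])]
    return triples

triples = table_to_ontology(table)
```

Qualifying each property name with its class, as above, is one simple way to keep distinct source elements mapped to distinct ontology elements, which is the intuition behind an injective, capacity-preserving transformation.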
The Semantic Application Design Language (SADL) combines advances in standardized declarative modeling languages based on formal logic with advances in domain-specific language (DSL) development environments to create a controlled-English language that translates directly into the Web Ontology Language (OWL), the SPARQL graph query language, and a compatible if/then rule language. Models in the SADL language can be authored, tested, and maintained in an Eclipse-based integrated development environment (IDE). This environment offers semantic highlighting, statement completion, expression templates, hyperlinking of concepts to their definitions, model validation, automatic error correction, and other advanced authoring features to enhance the ease and productivity of the modeling environment. In addition, the SADL language offers the ability to build in validation tests and test suites that can be used for regression testing. Through common Eclipse functionality, the models can be easily placed under source code control, versioned, and managed throughout the life of the model. Differences between versions can be compared side-by-side. Finally, the SADL-IDE offers an explanation capability that is useful in understanding what was inferred by the reasoner/rule engine and why those conclusions were reached. Perhaps more importantly, an explanation is also available of why an expected inference failed to occur. The objective of the language and the IDE is to enable domain experts to play a more active and productive role in capturing their knowledge and making it available as computable artifacts, useful for automation where appropriate and for decision support systems in applications that benefit from a collaborative human-computer approach. SADL is built entirely on open source code, and most of SADL is itself released to open source. This paper explores the concepts behind the language and provides details and examples of the authoring and model lifecycle support facilities.
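To give a flavour of the controlled-English approach, the fragment below shows an approximate SADL-style statement and the kind of OWL (in Turtle) it would map to. The syntax is illustrative and has not been checked against the actual SADL grammar; the class and property names are hypothetical.

```
// Approximate SADL-style controlled English (illustrative only):
Shape is a class described by area with values of type float.
Circle is a type of Shape.

// Roughly corresponding OWL, in Turtle:
:Shape  a owl:Class .
:area   a owl:DatatypeProperty ; rdfs:domain :Shape ; rdfs:range xsd:float .
:Circle a owl:Class ; rdfs:subClassOf :Shape .
```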
Ontologies are increasingly being developed on web-based repository hosting platforms such as GitHub. Accordingly, there is a demand for ontology editors which can be easily connected to the hosted repositories. TurtleEditor is a web-based RDF editor that provides this capability and supports the distributed development of ontologies on repository hosting platforms. It offers features such as syntax checking, syntax highlighting, and auto-completion, along with a SPARQL endpoint to query the ontology. Furthermore, TurtleEditor integrates a visual editing view that allows for the graphical manipulation of the RDF graph and includes some basic clustering functionality. The text and graph views are constantly synchronized, so that all changes to the ontology are immediately propagated and the views are updated accordingly. The results of a user study and performance tests show that TurtleEditor can indeed be effectively used to support the distributed development of ontologies on repository hosting platforms.
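A minimal Turtle fragment of the kind such an editor operates on is shown below; the names are illustrative. The editor's text view works on this serialization while the graph view renders the corresponding RDF triples, and the two are kept in sync.

```
# Illustrative Turtle fragment (hypothetical vocabulary).
@prefix ex:   <http://example.org/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

ex:Sensor a rdfs:Class .
ex:Thermometer a rdfs:Class ;
    rdfs:subClassOf ex:Sensor ;
    rdfs:label "Thermometer"@en .
```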
Distributed Identity Management (DIM) refers to the ability to define distributed identities of agents and roles, i.e. a single agent is represented by multiple unique identifiers managed in different namespaces and may have various roles across those namespaces. We propose semDIM, a novel approach for Semantic DIM based on a Semantic Web architecture. For the first time, semDIM provides a framework for the distributed definition and management of entities such as persons being part of an organization, groups, and roles across namespaces. It is suitable for informal networks, such as social networks, as well as for professional networks such as cross-organizational collaborations. In addition, the framework ensures authenticity, authorization and integrity for such distributed identities by featuring certificate-based graph signatures. Beyond the capabilities of existing Identity Management solutions, we allow distributed identifiers and the management of groups (consisting of agents and sub-groups) and roles as "first-class entities". semDIM uses owl:sameAs relations to represent and verify distributed identities via formal reasoning. This concept enables novel functionalities for DIM, as these entities can be identified, related to one another, and managed across namespaces. Our semDIM approach consists of a modular software architecture, a process model using a novel approach for pattern-based concurrency control, and a set of state-of-the-art formal OWL ontology patterns. The use of formal patterns ensures semantic interoperability and extensibility for future requirements. Thereby, our approach can be combined with other applications based on the same or related patterns. We evaluate semDIM in the context of a real-world scenario of securely exchanging DIM information across organizations.
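The owl:sameAs reasoning step can be sketched as an equivalence-class computation. This is a simplification, not semDIM's implementation: identifiers in different namespaces linked by owl:sameAs assertions are merged via union-find, so one agent can be recognised under all of its distributed identifiers.

```python
# Sketch of owl:sameAs identity merging (hypothetical identifiers).

# sameAs assertions across three namespaces for the same person.
same_as = [("orgA:alice", "orgB:a.smith"),
           ("orgB:a.smith", "social:alice42")]

def same_as_classes(pairs):
    """Union-find over sameAs pairs; returns the equivalence classes,
    i.e. the sets of identifiers denoting the same agent."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)           # merge the two classes
    classes = {}
    for x in list(parent):
        classes.setdefault(find(x), set()).add(x)
    return list(classes.values())

classes = same_as_classes(same_as)
# All three identifiers end up in a single equivalence class.
```

In semDIM this merging is performed by OWL reasoning, and certificate-based graph signatures ensure that only authentic sameAs assertions are taken into account.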
The new vision of the Web as a global intelligent repository needs advanced knowledge structures to manage complex data and services. From this perspective, the use of formal models to represent information on the Web is a suitable way to enable the cooperation of users and services. This paper describes a general ontological approach to representing knowledge using multimedia data and linguistic properties, bridging the gap between the target semantic classes and the available low-level multimedia descriptors. We implement our approach in a system to edit, manage and share multimedia ontologies on the Web. The system provides tools to add multimedia objects by means of user interaction. The multimedia features are automatically extracted using algorithms based on standard MPEG-7 descriptors.
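The extraction of a low-level descriptor can be illustrated with a plain colour histogram, standing in conceptually for an MPEG-7 colour descriptor. This is a toy, not the system's actual code.

```python
# Conceptual sketch of low-level descriptor extraction (a simple histogram
# standing in for an MPEG-7 colour descriptor; illustrative only).

def color_histogram(pixels, bins=4):
    """Quantise 8-bit intensity values into `bins` buckets and return the
    normalised histogram, a simple low-level multimedia descriptor."""
    hist = [0] * bins
    for v in pixels:
        hist[min(v * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [count / total for count in hist]

# A hypothetical 8-pixel grayscale image.
descriptor = color_histogram([0, 10, 70, 100, 130, 180, 200, 255])
```

Descriptors of this kind populate the low-level side of the ontology, while the linguistic properties link them to the target semantic classes.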
Semantic Web and Sensor Networks are two exciting areas of ongoing research and development. While much of the existing research in sensor networks has focused on the design of sensors, communications, and networking issues, the needs and opportunities arising from the rapidly growing capabilities and scale of dynamically networked sensing devices create demands for efficient data management and convenient programming models. The Semantic Web represents a spectrum of effective technologies that support complex, cross-jurisdictional, heterogeneous, dynamic and large-scale information systems. Recently, growing research efforts on integrating sensor networks with Semantic Web technologies have led to a new frontier in networking and data management research. The goal of this chapter is to develop an understanding of the Semantic Web technologies, including the sensor web, ontologies, and semantic sensor web services, that can contribute to the growth, application and deployment of large-scale sensor networks, leading to a broad interdisciplinary scope that includes ontology-based sensor networks, semantic sensor networks, cognitive radio networks, and so on.
Currently, there are no component identification and extraction methods that further specify component retrieval targets from software design documents. Therefore, this paper proposes a method for component identification and extraction based on XML documents, using a multi-agent system to learn keywords and semantics from XML documents derived from UML-based software design documents. Thereafter, OWL, a description language, is used to verify the correctness and completeness of the derived results. The experiment verifies the effectiveness of the proposed method, thus providing a reliable foundation for future work on component retrieval.
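The identification step can be sketched as keyword matching over an XML design document. This is a simplification with hypothetical element and class names; the paper's multi-agent keyword learning is not reproduced here.

```python
# Simplified sketch of component identification over a UML-derived XML
# document (illustrative names; not the paper's multi-agent method).
import xml.etree.ElementTree as ET

design = """<model>
  <class name="PaymentService"/>
  <class name="InvoiceFormatter"/>
  <class name="LoggerUtil"/>
</model>"""

def identify_components(xml_text, keywords):
    """Return class names in the design document that match any of the
    (learned) keywords -- candidates for extraction as components."""
    root = ET.fromstring(xml_text)
    return [c.get("name") for c in root.iter("class")
            if any(k.lower() in c.get("name").lower() for k in keywords)]

candidates = identify_components(design, ["payment", "invoice"])
```

In the proposed method, the candidate list produced by a step like this would then be checked for correctness and completeness against an OWL description of the components.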
Existing RDF keyword search studies focus on constructing the smallest trees or subgraphs that contain all query keywords, but neglect the semantic association between RDF data. Thus, this paper proposes the keyword parallel search over RDF data based on semantic association (KPSRSA) algorithm, which utilizes a score function to measure semantic association by combining an OWL ontology with a probability model. It uses the distributed database HBase as a storage medium and MapReduce to perform parallel queries: it queries sub-clusters with semantic association in the Map phase and constructs a series of associated clusters as query results in the Reduce phase. The experimental results demonstrate that the KPSRSA algorithm improves the precision and relevance of search results. In addition, distributed storage and parallel query processing improve scalability.
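The map/reduce structure of such a search can be sketched over a handful of triples. This is a toy: the paper's score function combines an OWL ontology with a probability model, whereas here it is replaced by a simple keyword-coverage score, and the data and keywords are hypothetical.

```python
# Toy sketch of the Map/Reduce structure of keyword search over RDF
# (illustrative score; not the KPSRSA semantic-association function).

triples = [("film1", "hasTitle", "space odyssey"),
           ("film1", "hasDirector", "kubrick"),
           ("film2", "hasTitle", "space station")]

def map_phase(triples, keywords):
    """Map: emit (subject, matched keyword) for triples hitting a keyword."""
    for s, p, o in triples:
        for k in keywords:
            if k in o:
                yield s, k

def reduce_phase(mapped, keywords):
    """Reduce: group matches by subject into clusters and score each
    cluster by the fraction of query keywords it covers."""
    clusters = {}
    for s, k in mapped:
        clusters.setdefault(s, set()).add(k)
    return sorted(((s, len(ks) / len(keywords)) for s, ks in clusters.items()),
                  key=lambda x: -x[1])

keywords = ["space", "kubrick"]
results = reduce_phase(map_phase(triples, keywords), keywords)
# clusters covering more query keywords rank higher
```

In the actual system, the Map phase runs in parallel over HBase regions and the Reduce phase assembles the semantically associated clusters into the final ranked results.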