The Ubiquitous Bio-Information Computing (UBIC2) project aims to disseminate protocols and software packages that facilitate the development of heterogeneous bio-information computing units that are interoperable and can run in a distributed fashion. UBIC2 specifies biological data in XML formats and queries data using XQuery. The UBIC2 programming library provides interfaces for integrating, retrieving, and manipulating heterogeneous biological data. Interoperability is achieved via Simple Object Access Protocol (SOAP) based web services. The documents and software packages of UBIC2 are available online.
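As an illustration of this kind of field-level retrieval over XML-encoded biological data (UBIC2 itself uses XQuery, which is approximated below with the standard library's XPath subset; the record and field names are invented for the example):

```python
# Illustrative only: UBIC2 queries its XML records with XQuery; this sketch uses
# Python's ElementTree XPath subset as a stand-in to show the kind of field-level
# retrieval that XML-encoded biological data allows. The record is made up.
import xml.etree.ElementTree as ET

record = """
<gene>
  <name>TP53</name>
  <organism>Homo sapiens</organism>
  <sequence>ATGGAGGAGCCGCAGTCAGAT</sequence>
</gene>
"""

root = ET.fromstring(record)
print(root.findtext("name"), root.findtext("organism"))   # simple word fields
print(len(root.findtext("sequence")), "bases")            # sequence string field
```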
Interoperability among different development tools is not a straightforward task, since ontology editors rely on specific internal knowledge models that are translated into common formats such as RDF(S). This paper addresses the urgent need for interoperability by providing an exhaustive set of benchmark suites for evaluating RDF(S) import, export and interoperability. It also demonstrates, in an extensive field study, the state of the art of interoperability among six Semantic Web tools. From this field study we have compiled a comprehensive set of practices that may serve as recommendations for Semantic Web tool developers and ontology engineers.
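A minimal sketch of what one benchmark-style check could look like, assuming the rdflib package and an invented toy graph; this is not the paper's actual benchmark suite:

```python
# Hypothetical round-trip check in the spirit of an RDF(S) export/import benchmark:
# serialize a graph, re-import it, and verify that no triples were lost or altered.
from rdflib import Graph
from rdflib.compare import isomorphic

def round_trip_preserves_graph(graph: Graph, export_format: str = "xml") -> bool:
    """Export the graph, re-import it, and check that it stays isomorphic."""
    dumped = graph.serialize(format=export_format)
    reimported = Graph()
    reimported.parse(data=dumped, format=export_format)
    return isomorphic(graph, reimported)      # blank-node-aware comparison

# Tiny illustrative graph standing in for a tool's ontology export
g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/> .
ex:Sensor a ex:Device ; ex:hasLabel "temperature sensor" .
""", format="turtle")

print(round_trip_preserves_graph(g, "xml"))      # RDF/XML round trip
print(round_trip_preserves_graph(g, "turtle"))   # Turtle round trip
```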
With the advent of the Internet of Things and cyber-physical systems, sensor networks are becoming increasingly important. To better accomplish a task, multiple sensors, or even multiple sensor networks, need to interoperate. Logical workflow nets and cooperative logical workflow nets are introduced to formally model and analyze the interoperability of sensor networks. Independent feasibility and interoperable feasibility are important properties for ensuring correct execution and interoperability of sensor networks. Complete path nets, possible path nets, cooperative complete path nets and cooperative possible path nets are presented to decide the independent feasibility and interoperable feasibility of the logical workflow nets and cooperative logical workflow nets that model the interoperability of sensor networks.
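The notion of a complete execution path can be illustrated with a reachability check over a toy workflow net; this sketch assumes a safe (1-bounded) net with set-valued markings and is not the paper's complete/possible path net construction:

```python
# Minimal sketch (not the paper's construction): breadth-first exploration of a
# small workflow net's reachability graph to check whether the final marking is
# reachable, i.e. whether at least one complete execution path exists.
from collections import deque

# transitions: name -> (input places, output places); the net itself is invented
TRANSITIONS = {
    "sample": ({"start"}, {"sampled"}),
    "send":   ({"sampled"}, {"sent"}),
    "ack":    ({"sent"}, {"end"}),
}

def reachable(initial: frozenset, final: frozenset) -> bool:
    """Return True if some firing sequence leads from the initial to the final marking."""
    seen, queue = {initial}, deque([initial])
    while queue:
        marking = queue.popleft()
        if marking == final:
            return True
        for pre, post in TRANSITIONS.values():
            if pre <= marking:                          # transition is enabled
                nxt = frozenset((marking - pre) | post)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

print(reachable(frozenset({"start"}), frozenset({"end"})))   # True for this toy net
```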
In the world of the Internet of Things (IoT), heterogeneous systems and devices need to be connected and exchange data with each other. How data exchange can be realized automatically becomes a critical issue. An information model (IM) is frequently adopted to solve the data interoperability problem. Meanwhile, as IoT systems and devices can have different IMs with different modeling methodologies and formats, such as UML, IEC 61360, etc., automated data interoperability based on various IMs is recognized as an urgent problem. In this paper, we propose an approach to automate data interoperability, i.e., data exchange among similar entities in different IMs. First, similarity scores among entities are calculated based on their syntactic and semantic features. Then, to identify similar candidates for data exchange more precisely, a concept of class distance, calculated with a Virtual Distance Graph (VDG), is proposed to narrow down the obtained similar properties. A case study shows that the VDG-based class distance effectively improves the precision of the calculated similar properties. Furthermore, data exchange rules can be generated automatically. The results reveal that the proposed approach contributes effectively to resolving the data interoperability problem.
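A rough sketch of the underlying idea, combining a syntactic name similarity with a class-distance penalty; the class graph, entities and weighting are invented for illustration and the Virtual Distance Graph itself is not reconstructed here:

```python
# Illustrative sketch only: discount a syntactic property-name similarity by the
# distance between the owning classes to filter candidate matches.
from difflib import SequenceMatcher
from collections import deque

# Invented class hierarchy (adjacency list); in the paper the distances come from
# a Virtual Distance Graph built from the information models themselves.
CLASS_GRAPH = {
    "Device": ["Sensor", "Actuator"],
    "Sensor": ["Device", "TemperatureSensor"],
    "Actuator": ["Device"],
    "TemperatureSensor": ["Sensor"],
}

def class_distance(a: str, b: str) -> int:
    """Shortest-path distance between two classes (BFS over the class graph)."""
    if a == b:
        return 0
    seen, queue = {a}, deque([(a, 0)])
    while queue:
        node, d = queue.popleft()
        for nb in CLASS_GRAPH.get(node, []):
            if nb == b:
                return d + 1
            if nb not in seen:
                seen.add(nb)
                queue.append((nb, d + 1))
    return len(CLASS_GRAPH)          # unconnected classes get the maximal penalty

def match_score(prop_a: str, cls_a: str, prop_b: str, cls_b: str) -> float:
    """Syntactic name similarity discounted by how far apart the owning classes are."""
    name_sim = SequenceMatcher(None, prop_a.lower(), prop_b.lower()).ratio()
    return name_sim / (1 + class_distance(cls_a, cls_b))

# Same property name, but owned by classes at different distances
print(match_score("temperature", "TemperatureSensor", "temperature", "TemperatureSensor"))
print(match_score("temperature", "TemperatureSensor", "temperature", "Actuator"))
```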
Blogs are one of the most prominent means of communication on the web. Their content, interconnections and influence constitute a unique socio-technical artefact of our times which needs to be preserved. The BlogForever project has established best practices and developed an innovative system to harvest, preserve, manage and reuse blog content. This paper presents the latest developments of the blog crawler which is a key component of the BlogForever platform. More precisely, our work concentrates on techniques to automatically extract content such as articles, authors, dates and comments from blog posts. To achieve this goal, we introduce a simple yet robust and scalable algorithm to generate extraction rules based on string matching using the blog's web feed in conjunction with blog hypertext. Furthermore, we present a system architecture which is characterised by efficiency, modularity, scalability and interoperability with third-party systems. Finally, we conduct thorough evaluations of the performance and accuracy of our system.
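A minimal sketch of the rule-generation idea, learning prefix/suffix delimiters around a feed value and reusing them on another post from the same blog; the markup and helper names are invented and this is not the BlogForever implementation:

```python
# Minimal sketch of the general idea (not the BlogForever crawler): find where a
# web-feed field (e.g. the post title) occurs in the post's HTML, record the
# surrounding markup as a prefix/suffix extraction rule, and reuse that rule on
# other posts from the same blog.
import re

def derive_rule(html: str, feed_value: str, context: int = 40):
    """Return (prefix, suffix) delimiters around the feed value inside the HTML."""
    pos = html.find(feed_value)
    if pos < 0:
        return None
    prefix = html[max(0, pos - context):pos]
    suffix = html[pos + len(feed_value):pos + len(feed_value) + context]
    return prefix, suffix

def apply_rule(html: str, rule):
    """Extract the field delimited by the learned prefix/suffix from another page."""
    prefix, suffix = rule
    match = re.search(re.escape(prefix) + "(.*?)" + re.escape(suffix), html, re.S)
    return match.group(1) if match else None

post_1 = '<h1 class="entry-title">Hello world</h1><div class="body">...</div>'
post_2 = '<h1 class="entry-title">Second post</h1><div class="body">...</div>'
rule = derive_rule(post_1, "Hello world")   # learned from the feed entry
print(apply_rule(post_2, rule))             # -> "Second post"
```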
Future information systems will involve large numbers of heterogeneous, intelligent agents distributed over large computer/communication networks. Agents may be humans, humans interacting with computers, humans working with computer support, and computer systems performing tasks without human intervention. We call such systems Intelligent and Cooperative Information Systems (ICISs). Although we can imagine extensions of capabilities of current ISs and of individual contributing core technologies, such as databases, artificial intelligence, operating systems, and programming languages, we cannot imagine the capabilities of ICISs which we believe will be based on extensions of these and other technologies. Neither do we know exactly what technologies and capabilities will be required, what challenges will arise, nor how the technologies might be integrated or work together to address the challenges.
In this paper, we provide initial definitions for key concepts and terms in this new area, identify potential core contributing technologies, illustrate the ICIS concept with example systems, and pose basic research questions. We also describe the results of discussions on these topics that took place at the Second International Workshop on Intelligent and Cooperative Information Systems, held in Como, Italy, in October 1991. The workshop focused on core technologies for ICISs. The workshop and its results reflect the multi-disciplinary nature of this emerging area.
Existing and legacy software systems are the product of lengthy and individual developmental histories. Interoperability among such systems offers the support of global applications on these systems and intelligent information processing. However, interoperability among these heterogeneous systems is hampered by the absence of an integrated environment that would allow the development of global applications requiring intersystem cooperation. A uniform application-system interface is necessary to abstract the common properties of the global applications and of systems, mask their differences, and thus overcome this heterogeneity barrier. This paper presents such a solution, termed Remote System Interfaces (RSIs), which has been designed and implemented in the course of the InterBase project at Purdue University.
An approach to accommodating semantic heterogeneity in a federation of interoperable, autonomous, heterogeneous databases is presented. A mechanism is described for identifying and resolving semantic heterogeneity while at the same time honoring the autonomy of the database components that participate in the federation. A minimal, common data model is introduced as the basis for describing sharable information, and a three-pronged facility for determining the relationships between information units (objects) is developed. Our approach serves as a basis for the sharing of related concepts through (partial) schema unification without the need for a global view of the data that is stored in the different components. The mechanism presented here can be seen in contrast with more traditional approaches such as “integrated databases” or “distributed databases”. An experimental prototype implementation has been constructed within the framework of the Remote-Exchange experimental system.
We report on the design of a novel architecture for data warehousing based on the introduction of an explicit "logical" layer to the traditional data warehousing framework. This layer serves to guarantee a complete independence of OLAP applications from the physical storage structure of the data warehouse and thus allows users and applications to manipulate multidimensional data while ignoring implementation details. For example, it makes it possible to modify the data warehouse organization (e.g. MOLAP or ROLAP implementation, star-schema or snowflake-schema structure) without influencing the high-level description of multidimensional data and of the programs that use the data. It also supports the integration of multidimensional data stored in heterogeneous OLAP servers. We propose a simple data model for multidimensional databases as the reference for the logical layer. The model provides an abstract formalism to describe the basic concepts that can be found in any OLAP system (fact, dimension, level of aggregation, and measure). We show that databases conforming to this model can be implemented in both relational and multidimensional storage systems, and that the model can be profitably used as a front-end in OLAP applications. We finally describe the design of a practical system that supports the above logical architecture; this system is used to show in practice how the proposed architecture can hide implementation details and provide support for interoperability between different and possibly heterogeneous data warehouse applications.
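To illustrate how such a logical layer can hide the storage choice, the following sketch (with invented names, since the abstract does not reproduce the model's definition) renders one abstract multidimensional query either as star-schema SQL or as an in-memory aggregation:

```python
# Illustrative sketch: an abstract query over facts, dimensions and measures that
# a logical layer could translate either into SQL on a ROLAP star schema or into
# an in-memory (MOLAP-style) aggregation, hiding the storage choice from clients.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class CubeQuery:
    fact: str          # fact (cube) name
    group_by: tuple    # dimension levels to aggregate over
    measure: str       # measure to sum

def to_star_schema_sql(q: CubeQuery) -> str:
    dims = ", ".join(q.group_by)
    return f"SELECT {dims}, SUM({q.measure}) FROM {q.fact} GROUP BY {dims};"

def run_in_memory(q: CubeQuery, rows: list) -> dict:
    totals = defaultdict(float)
    for row in rows:
        key = tuple(row[d] for d in q.group_by)
        totals[key] += row[q.measure]
    return dict(totals)

q = CubeQuery(fact="sales", group_by=("store", "month"), measure="revenue")
print(to_star_schema_sql(q))                      # ROLAP rendering
print(run_in_memory(q, [{"store": "A", "month": "Jan", "revenue": 10.0},
                        {"store": "A", "month": "Jan", "revenue": 5.0}]))
```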
In this paper, we present and discuss a novel architectural approach supporting the integration among legacy information systems of autonomous organizations. It is based on the use of a data warehouse in a new conceptual role. Namely, we propose to use it, during the design and implementation phases of a cooperative information system, as a tool supporting the coherence maintenance of the underlying databases and the efficient management of accesses to them. Our approach is rooted in the SICC project for cadastral data exchange among Italian Municipalities, Ministry of Finance, Notaries, and Certified Land Surveyors. Research results reported here are an abstraction of solutions introduced in the SICC project and validated through the development of various inter-organization cooperative information systems, managed by the "Coordinamento dei Progetti Intersettoriali" of AIPA, the Italian Authority for Information Technology in Public Administration.
The incompatibilities among complex data formats and the various schemas used by the biological databases that house these data are becoming a bottleneck in biological research. For example, biological data formats vary from simple words (e.g. gene names) and numbers (e.g. molecular weights) to sequence strings (e.g. nucleic acid sequences), and to even more complex formats such as taxonomy trees. Some information is embedded in narrative text, such as expert comments and publications. Other information is expressed as graphs or images (e.g. pathway networks). The confederation of heterogeneous web databases has become a crucial issue in today's biological research. In other words, interoperability has to be achieved among biological web databases, and the heterogeneity of these databases has to be resolved. This paper presents a biological ontology, BAO, and discusses its advantages in supporting the semantic integration of biological web databases.
Nowadays, Enterprise Application Integration (EAI) constitutes a real and growing need for most enterprises, particularly large and dynamic ones. The major problem of EAI is heterogeneity, especially semantic heterogeneity, which is not correctly addressed by today's integration solutions, as these focus mainly on syntactic integration. Dealing with the semantic aspect, which will certainly promote EAI by making it more consistent and robust, requires appropriate principles such as ontology urbanization and mediation. These principles, which constitute the main focus of this paper, aim to foster semantic application integration by correctly capturing, structuring, mastering and reasoning upon the semantics, which currently constitutes a big challenge for enterprises in quest of more flexibility and manageability.
Interoperability frameworks provide specifications for different aspects of interoperability, for communicating and sharing information. Ten prominent industry-neutral interoperability frameworks are analyzed in this paper, distinguishing between operational interoperability frameworks and conceptual interoperability frameworks. To support this analysis, 16 criteria were defined, which form the basis of a comparison framework. The operational interoperability frameworks analyzed have similarities and differences, and complement each other in some aspects (e.g. messaging service). The differences concern the manner in which they handle (or do not handle) interoperability details relevant for performing e-business; e.g. only ebXML provides guidelines for negotiating and setting up a collaboration agreement prior to conducting e-business. The five conceptual interoperability frameworks were analyzed based on specific structural elements, as they tackle the notion of interoperability differently, i.e. targeting types of integration, interoperability barriers, or levels of interoperability. Despite the advances of interoperability frameworks, full interoperability has not yet been achieved. The analysis allows us to conclude that, although interoperability frameworks represent a good direction towards seamless interoperability in a networked environment, a big challenge is the harmonization of their different aspects towards attaining full interoperability in complex cross-sectorial e-business scenarios, which can be addressed by joint actions of the scientific community and practitioners. Finally, this analysis yields a set of directions for future research work.
Interoperability is a critical factor for public administration-related entities operating in collaborative/cooperative environments. Thus, performing an interoperability diagnosis provides, compared with other usual assessment approaches, a more adequate and extended view for establishing qualities and gaps, and helps to prioritize actions to increase the performance and maturity of a government-related organization. This paper presents a diagnosis method called the Public Administration Interoperability Diagnosis Method (PAIDM), which uses the Analytic Hierarchy Process (AHP) as its multi-criteria decision-making structure to calculate the diagnosed capability levels. A proposed development framework guides a systematic literature review, followed by a survey of experts and a set of quantitative and qualitative methods for extracting and modeling knowledge from the public administration domain, mapped into theoretical, conceptual, and practical outputs devoted to PAIDM execution. The paper also introduces a public administration interoperability capability model used as a reference for the diagnosis, and presents general results of two public administration application cases regarding their capability levels.
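For readers unfamiliar with AHP, the following sketch shows the standard priority-weight and consistency computation that such a method builds on; the pairwise comparison matrix is a made-up example, not PAIDM's actual criteria or expert judgments:

```python
# Standard AHP priority-weight computation (column-normalization approximation of
# the principal eigenvector) plus Saaty's consistency ratio. Example matrix only.
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Approximate the principal eigenvector: normalize columns, average rows."""
    normalized = pairwise / pairwise.sum(axis=0)
    return normalized.mean(axis=1)

def consistency_ratio(pairwise: np.ndarray, weights: np.ndarray) -> float:
    """Saaty's consistency ratio; values below ~0.10 are usually considered acceptable."""
    n = len(weights)
    lambda_max = float((pairwise @ weights / weights).mean())
    ci = (lambda_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)   # random-index lookup (partial table)
    return ci / ri

# Three interoperability criteria compared pairwise on the 1-9 Saaty scale
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])
w = ahp_weights(A)
print(w, consistency_ratio(A, w))
```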
In this article, the technological possibilities offered by the interoperability standard High Level Architecture (HLA) are introduced and discussed. The main focus is on manufacturing applications, but the same approach is applicable to a wide range of other scenarios, e.g. in the areas of supply chains, logistics, product simulation, etc.
Especially for challenging objectives such as the digital factory, which many enterprises are currently facing, simulation applications are gaining importance. While simulations are nowadays often still applied to isolated problems, considering the global context is of growing importance. The distributed simulation paradigm offers a solution to this problem: simulations are no longer single-purpose applications; rather, individual simulation models can be combined with each other to serve different purposes. Coupled simulations of different parts of a factory can be used to perform global optimizations, and the same paradigm can be used for entire supply chains.
To apply the distributed simulation paradigm, technological as well as organizational aspects have to be considered. On the technological side, it is necessary to integrate a suitable interoperability standard into the tools that need to be coupled with each other. On the organizational level, an enterprise-wide process has to be established that defines how distributed modeling and simulation shall be applied. This article discusses solutions for both issues and illustrates them using a practical application scenario.
The common robot communication platform using Web Services is well accepted and gradually becoming popular. One of the key components of such a platform is reliable messaging, which provides reliable, high-performance message transfer. Both conformance to standard specifications and interoperability among multiple implementations are critical requirements for the platform, because different robots and different services should be able to connect and communicate with each other on the platform. However, there is no conformance/interoperability testing tool for the reliable messaging component of Web Services. This paper describes requirements for conformance and interoperability testing of Web Service technologies, and how we developed a verification suite that satisfies these requirements by means of an automated error-case verification model. The paper also reports interoperability verification results for emerging reliable messaging components of Web Services and our contribution to the international standardization group based on these verification results.
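A generic sketch of what an automated error-case verification harness can look like, injecting faults into a message exchange and comparing the observed behaviour with the specified one; the cases, message format and reference stack are invented for illustration and do not reproduce the paper's suite:

```python
# Generic sketch of automated error-case verification (not the paper's suite):
# each test case injects a fault into the exchange and checks that the
# reliable-messaging stack under test reacts as the specification requires.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ErrorCase:
    name: str
    inject: Callable       # fault injected into the message exchange
    expected: str          # behaviour required by the specification

def drop_ack(messages):
    return [m for m in messages if m["type"] != "ack"]

def duplicate_payload(messages):
    return messages + [m for m in messages if m["type"] == "data"]

CASES = [
    ErrorCase("lost acknowledgement", drop_ack, "sender retransmits"),
    ErrorCase("duplicate message", duplicate_payload, "receiver delivers once"),
]

def run_case(case: ErrorCase, stack_under_test) -> bool:
    """Apply the fault and compare the observed behaviour with the expected one."""
    exchange = [{"type": "data", "seq": 1}, {"type": "ack", "seq": 1}]
    return stack_under_test(case.inject(exchange)) == case.expected

def reference_stack(messages) -> str:
    """Trivial stand-in for a real implementation, used only to exercise the harness."""
    if not any(m["type"] == "ack" for m in messages):
        return "sender retransmits"
    data_seqs = [m["seq"] for m in messages if m["type"] == "data"]
    if len(data_seqs) != len(set(data_seqs)):
        return "receiver delivers once"
    return "normal delivery"

for case in CASES:
    print(case.name, "->", "pass" if run_case(case, reference_stack) else "fail")
```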
Distributed simulation system interoperation can be divided into six levels. Interaction data can be encrypted at each of these levels, leading to six encryption strategies: data field encryption, data package encryption, program module encryption, simulation application encryption, simulation node encryption, and simulation system encryption. There are four basic encryption/decryption realization modes: serial modes realized in software or hardware, and parallel modes based on embedded processors or FPGA/ASIC systems. A large and complex distributed simulation system may employ one or several encryption strategies and realization modes.
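A minimal example of the finest-grained strategy, data field encryption, in a serial software realization using the cryptography package's Fernet recipe; the field names and key handling are illustrative only:

```python
# Data-field encryption sketch: only selected interaction fields are encrypted,
# the rest of the packet stays in the clear. Field names are invented; in a real
# system the shared key would be distributed and protected out of band.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

def encrypt_fields(packet: dict, sensitive: set) -> dict:
    """Encrypt only the selected data fields of an interaction packet."""
    return {k: cipher.encrypt(json.dumps(v).encode()).decode() if k in sensitive else v
            for k, v in packet.items()}

def decrypt_fields(packet: dict, sensitive: set) -> dict:
    """Reverse the field-level encryption on the receiving side."""
    return {k: json.loads(cipher.decrypt(v.encode())) if k in sensitive else v
            for k, v in packet.items()}

update = {"entity_id": 42, "position": [12.5, 7.0, 0.3], "velocity": [1.0, 0.0, 0.0]}
protected = encrypt_fields(update, sensitive={"position", "velocity"})
print(decrypt_fields(protected, sensitive={"position", "velocity"}) == update)  # True
```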
Artificial neural networks (ANNs), a branch of artificial intelligence, have become a very interesting domain since the eighties, when the back-propagation (BP) learning algorithm for multilayer feed-forward architectures was introduced to solve nonlinear problems. They are used extensively to solve complex non-algorithmic problems such as prediction, pattern recognition and clustering. However, in the context of a holistic study, there may be a need to integrate ANNs with other models developed in various paradigms to solve a problem. In this paper, we suggest that the Discrete Event System Specification (DEVS) be used as a model of computation (MoC) to make ANN models interoperable with other models (since all discrete event models can be expressed in DEVS, and continuous models can be approximated by DEVS). By combining ANN and DEVS, we can model the complex configuration of ANNs and express their internal workings. Therefore, we extend the DEVS-based ANN proposed by Toma et al. [A new DEVS-based generic artificial neural network modeling approach, The 23rd European Modeling and Simulation Symp. (Simulation in Industry), Rome, Italy, 2011] to compare multiple configuration parameters and learning algorithms and also to do prediction. The DEVS models are described, for clarity, using the High Level Language for System Specification (HiLLS) [Maïga et al., A new approach to modeling dynamic structure systems, The 29th European Modeling and Simulation Symp. (Simulation in Industry), Leicester, United Kingdom, 2015], a graphical modeling language. The developed platform is a tool to transform ANN models into DEVS computational models, making them more reusable and more interoperable in the context of larger multi-perspective modeling and simulation.
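A minimal Classic-DEVS-style atomic model wrapping a single neuron illustrates the interface such a transformation targets; this is an invented skeleton, not the HiLLS-based platform described in the paper:

```python
# Minimal Classic-DEVS-style atomic model wrapping one neuron, showing how an ANN
# component can expose the standard DEVS interface (delta_ext, delta_int, output,
# time advance). Illustrative skeleton only.
import math

class NeuronDEVS:
    def __init__(self, weights, bias=0.0):
        self.weights, self.bias = weights, bias
        self.output_value = None            # pending output; None means passive

    def delta_ext(self, elapsed, inputs):
        """External transition: an input vector arrives, compute the activation."""
        z = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        self.output_value = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

    def delta_int(self):
        """Internal transition: after emitting the output, return to passive."""
        self.output_value = None

    def output(self):
        """Output function (lambda): emit the activation just before delta_int."""
        return self.output_value

    def time_advance(self):
        """ta: fire immediately when an output is pending, otherwise wait forever."""
        return 0.0 if self.output_value is not None else math.inf

# A DEVS coordinator would normally drive these calls; done by hand here.
n = NeuronDEVS(weights=[0.5, -0.2])
n.delta_ext(elapsed=0.0, inputs=[1.0, 2.0])
print(n.output())        # sigmoid(0.5*1.0 - 0.2*2.0) = sigmoid(0.1)
n.delta_int()
```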
An important part of the Industry 4.0 concept is the horizontal and vertical integration of manufacturing systems.
Information exchange in traditional production environments happens through interfaces that are connections between strictly defined senders and receivers. This limits the possibility of changing and extending the manufacturing system. A possible approach to enabling uniform information exchange between all system entities is the use of information models, i.e. semantic descriptions of the available data. The creation of these models needs to follow the manufacturing process, but also requires a certain degree of standardization to improve efficiency. Another challenge is the actual technical integration of the information into a common address space.
This paper connects an approach for information modeling with a concept for dynamic aggregation. The approach is described with the help of a running example that uses OPC UA as middleware technology.
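A minimal sketch of exposing model information in a common OPC UA address space, assuming the python-opcua (FreeOpcUa) package; the endpoint, namespace URI and node names are examples only, and the paper's aggregation concept itself is not reproduced:

```python
# Sketch: publish a small information model into a common OPC UA address space so
# that aggregating components (e.g. MES, dashboards) can browse and read it instead
# of relying on point-to-point interfaces. Assumes the python-opcua package.
from opcua import Server

server = Server()
server.set_endpoint("opc.tcp://0.0.0.0:4840/plant/server/")
idx = server.register_namespace("http://example.org/plant-information-model")

objects = server.get_objects_node()
press = objects.add_object(idx, "Press01")                  # modeled system entity
temperature = press.add_variable(idx, "Temperature", 20.0)  # semantically described data point
temperature.set_writable()

server.start()
try:
    temperature.set_value(21.5)   # clients see the update through the shared address space
finally:
    server.stop()
```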
Since interoperability in the operating room (OR) is considered a main factor in increasing safety and improving the quality of surgeries, new challenges arise for the medical device industry in this increasingly connected environment. Therefore, new architectural approaches are needed, some of which may be inspired by other domains. In the meantime, novel communication paradigms are also gaining practical relevance in the automotive industry, which faces similar challenges. As a result, service-oriented architectures (SOAs) are often considered to provide a higher degree of flexibility for changes during development and after product deployment. In this paper, we derive requirements for future networked OR tables from the challenges and trends ahead. Based on these requirements, we propose a mixed electric and electronic architecture (E/E-architecture) inspired by state-of-the-art measures from the automotive domain and present an Identity and Access Management (IAM) approach to improve system security. In addition, a prototypical implementation is used to demonstrate the practicality of the proposed solution and to discuss necessary adjustments to the development process resulting from a mixed E/E-architecture approach.