Active replication has been widely explored to achieve fault tolerance and to improve system availability, especially in service-oriented applications. In this paper we explore software diversity-based active replication in the context of advanced simulation systems, with the aim of improving the timeliness of simulation output production. Our proposal is framed by the High Level Architecture (HLA), a middleware-based standard for simulation package interoperability, and results in the design and implementation of an Active Replication Management Layer (ARML) targeted at off-the-shelf SMP computing systems. This layer can be interposed between each simulator instance and the underlying HLA middleware component, in order to support the execution of diversity-based active replicas of the same simulation package in a fully transparent manner. Beyond presenting the replication framework and the design/implementation of ARML, we also report the results of an experimental evaluation on a case study, quantifying the benefits of our proposal in terms of both simulation execution speed and performance guarantees as a function of tunable software parameters. (Free software releases of ARML can be found at the URL ).
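The interposition scheme described in this abstract can be illustrated with a small generic sketch. The class and method names below are hypothetical and do not reproduce the ARML or HLA APIs; the first-result selection policy is an assumption used only to convey how diverse replicas running concurrently on an SMP machine can improve output timeliness.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Function;

// Generic sketch of diversity-based active replication aimed at timeliness:
// diverse replicas of the same step run concurrently and the first completed
// result is delivered, so output latency tracks the fastest replica.
public class FirstResultReplicationProxy<I, O> {
    private final List<Function<I, O>> replicas;
    private final ExecutorService pool;

    public FirstResultReplicationProxy(List<Function<I, O>> replicas) {
        this.replicas = replicas;
        this.pool = Executors.newFixedThreadPool(replicas.size());
    }

    public O invoke(I input) throws Exception {
        List<Callable<O>> tasks = new ArrayList<>();
        for (Function<I, O> replica : replicas) {
            tasks.add(() -> replica.apply(input));
        }
        // invokeAny returns the value of the first task to complete
        // successfully and cancels the remaining ones.
        return pool.invokeAny(tasks);
    }

    public void shutdown() {
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        // Two "diverse" implementations of the same simulation step, one slower.
        List<Function<Integer, Integer>> variants = List.of(
                x -> { sleepMillis(50); return x * 2; },  // variant A (slower)
                x -> { sleepMillis(5);  return x + x; }); // variant B (faster)
        FirstResultReplicationProxy<Integer, Integer> proxy =
                new FirstResultReplicationProxy<>(variants);
        System.out.println(proxy.invoke(21)); // 42, produced by the faster replica
        proxy.shutdown();
    }

    private static void sleepMillis(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```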
Rapid advances in cloud computing have made the vision of utility computing a near-reality, but only in certain domains. For parallel or distributed science and engineering applications, on-demand access to resources within grids and clouds is hampered by two major factors: communication performance and paradigm mismatch. We propose a framework for addressing the latter via software adaptations that attempt to reconcile model and interface differences between application needs and resource platforms. Such matching can greatly enhance flexibility in the choice of execution platforms, a key characteristic of utility computing, even when the chosen platforms are not a natural fit or incur some performance loss. Our design philosophy, middleware components, and experiences from a cross-paradigm experiment are described.
Quantum key distribution (QKD) promises unconditionally secure communication; however, the low bit rate of QKD cannot meet the requirements of high-speed applications. Although many solutions have been proposed in recent years, they are neither efficient at generating secret keys nor compatible with other QKD systems. Based on chaotic cryptography and middleware technology, this paper proposes an efficient and universal QKD protocol that can be deployed directly on top of any existing QKD system without modifying the underlying QKD protocol and optical platform. It takes the bit string generated by the QKD system as input, periodically updates the chaotic system, and efficiently outputs bit sequences. Theoretical analysis and simulation results demonstrate that our protocol can efficiently increase the bit rate of the QKD system as well as securely generate bit sequences with perfect statistical properties. Compared with existing methods, our protocol is more efficient and universal: it can be rapidly deployed on a QKD system to increase the bit rate whenever the QKD system becomes the bottleneck of its communication system.
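The expansion idea in this abstract can be sketched as follows. The paper's chaotic construction is not given here, so a logistic map is used as a stand-in (a bare logistic map is not cryptographically secure on its own); the block sizes and parameters are assumptions chosen only to show how QKD bits periodically reseed a chaotic generator whose output raises the effective bit rate.

```java
import java.util.BitSet;

// Illustrative sketch only: QKD bits seed a chaotic state, the state is
// periodically refreshed with fresh QKD bits, and many expanded bits are
// emitted in between, raising the effective bit rate.
public class ChaoticExpanderSketch {
    private static final double R = 3.99999;      // logistic-map parameter (assumed)
    private static final int BITS_PER_SEED = 32;  // QKD bits consumed per refresh (assumed)
    private static final int OUT_PER_SEED = 1024; // expanded bits emitted per refresh (assumed)

    // Map a block of QKD bits to an initial chaotic state in (0, 1).
    private static double seedToState(BitSet qkdBits, int offset) {
        long v = 0;
        for (int i = 0; i < BITS_PER_SEED; i++) {
            v = (v << 1) | (qkdBits.get(offset + i) ? 1 : 0);
        }
        return (v + 0.5) / (double) (1L << BITS_PER_SEED);
    }

    public static BitSet expand(BitSet qkdBits, int qkdLength) {
        BitSet out = new BitSet();
        int outIndex = 0;
        for (int offset = 0; offset + BITS_PER_SEED <= qkdLength; offset += BITS_PER_SEED) {
            double x = seedToState(qkdBits, offset);  // periodic update from QKD output
            for (int i = 0; i < OUT_PER_SEED; i++) {
                x = R * x * (1.0 - x);                // iterate the chaotic map
                out.set(outIndex++, x > 0.5);         // threshold to one output bit
            }
        }
        return out;
    }

    public static void main(String[] args) {
        BitSet qkd = new BitSet();
        for (int i = 0; i < 64; i++) qkd.set(i, Math.random() < 0.5); // stand-in for QKD output
        BitSet expanded = expand(qkd, 64);
        StringBuilder first = new StringBuilder();
        for (int i = 0; i < 16; i++) first.append(expanded.get(i) ? '1' : '0');
        System.out.println("64 QKD bits expanded to " + 2 * OUT_PER_SEED + " bits");
        System.out.println("first 16 expanded bits: " + first);
    }
}
```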
The Internet of Things (IoT) can be defined as things or devices, physical and virtual, that are connected, communicate with one another, and are integrated into a network for a specific purpose. The IoT uses technologies and devices such as sensors, radio-frequency identification (RFID) and actuators to collect data. IoT is not only about collecting data generated from sensors, but also about analyzing it. IoT applications must, of necessity, keep out attackers and intruders so as to thwart attacks. IoT must allow information to be shared with every assurance of confidentiality, and is about a connected environment where people and things interact to enhance the quality of life. IoT infrastructure must be open source and without ownership, meaning that anyone can develop, deploy and use it. The objective of this paper is to discuss the various challenges, issues and applications confronting the Internet of Things.
Software architectures promote development focused on modular functional building blocks (components), their interconnections (configurations), and their interactions (connectors). Since architecture-level components often contain complex functionality, it is reasonable to expect that their interactions will be complex as well. Middleware technologies such as CORBA, COM, and RMI provide a set of predefined services for enabling component composition and interaction. However, the potential role of such services in the implementation of software architectures is not well understood. In practice, middleware can resolve various types of component heterogeneity, across platform and language boundaries for instance, but can also induce unwanted architectural constraints on application development. We present an approach in which components communicate through architecture-level software connectors that are implemented using middleware. This approach preserves the properties of the architecture-level connectors while leveraging the beneficial capabilities of the underlying middleware. We have implemented this approach in the context of a component- and message-based architectural style called C2 and demonstrated its utility in several diverse applications. We argue that our approach provides a systematic and reasonable way to bridge the gap between architecture-level connectors and implementation-level middleware packages.
The Ubiquitous Bio-Information Computing (UBIC2) project aims to disseminate protocols and software packages that facilitate the development of heterogeneous bio-information computing units that are interoperable and may run in a distributed fashion. UBIC2 specifies biological data in XML formats and queries data using XQuery. The UBIC2 programming library provides interfaces for integrating, retrieving, and manipulating heterogeneous biological data. Interoperability is achieved via Simple Object Access Protocol (SOAP) based web services. The documents and software packages of UBIC2 are available at .
Component-based software engineering has recently emerged as a promising solution to the development of system-level software. Unfortunately, current approaches are limited to specific platforms and domains. This lack of generality is particularly problematic, as it prevents knowledge sharing and generally drives development costs up. We previously developed a generic approach to component-based software engineering for system-level software called OpenCom. In this paper, we present OpenComL, an instantiation of OpenCom for Linux environments, and show how it can be profiled to meet the needs of a range of system-level software in Linux environments. To this end, we demonstrate its application to constructing a programmable router platform and a middleware for parallel environments.
Database applications are under increasing pressure to respond effectively to ever more demanding performance requirements. Software architects can resort to several well-known architectural tactics to minimize the likelihood of performance bottlenecks. The use of call-level interfaces (CLIs) is a strategy aimed at reducing the overhead of business components. CLIs are low-level APIs that provide a high-performance environment for executing standard SQL statements on relational and also on some NoSQL database (DB) servers. In spite of these valuable features, CLIs are not thread-safe when distinct threads need to share datasets retrieved from databases through Select statements. Thus, even in situations where two or more threads could share a dataset, there is no alternative to providing each thread with its own dataset, leading to an increased need for computational resources. To overcome this drawback, in this paper we propose a new natively thread-safe architecture. The implementation presented here is based on a thread-safe, updatable local memory structure (LMS) in which the data retrieved from databases is kept. A proof of concept based on a type 4 Java Database Connectivity (JDBC) driver for SQL Server 2008 is presented, along with a performance assessment.
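The core idea can be sketched minimally as follows. This is not the architecture proposed in the paper; the connection URL, table, and query are placeholders. A ResultSet is copied once into an in-memory structure guarded by a read/write lock, which many threads then read concurrently instead of each fetching its own dataset.

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of a thread-safe local memory structure (LMS): data is fetched once
// over JDBC, then shared by many reader threads under a read/write lock.
public class SharedDataset {
    private final List<Object[]> rows = new ArrayList<>();
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Load (or reload) the dataset from the database; writers are exclusive.
    public void load(Connection con, String selectSql) throws SQLException {
        lock.writeLock().lock();
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(selectSql)) {
            rows.clear();
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int i = 0; i < cols; i++) row[i] = rs.getObject(i + 1);
                rows.add(row);
            }
        } finally {
            lock.writeLock().unlock();
        }
    }

    // Any number of threads may read concurrently.
    public Object get(int rowIndex, int colIndex) {
        lock.readLock().lock();
        try {
            return rows.get(rowIndex)[colIndex];
        } finally {
            lock.readLock().unlock();
        }
    }

    public int size() {
        lock.readLock().lock();
        try {
            return rows.size();
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        // Placeholder connection string and query; any JDBC driver on the classpath will do.
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=demo", "user", "password")) {
            SharedDataset dataset = new SharedDataset();
            dataset.load(con, "SELECT id, name FROM customers");
            // Several reader threads share the single in-memory copy.
            for (int t = 0; t < 4; t++) {
                new Thread(() -> System.out.println(
                        dataset.size() + " rows visible to " + Thread.currentThread().getName())).start();
            }
        }
    }
}
```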
In previous work, we introduced a novel concept of a generalised event, an abstract event, which we define as a change of state of abstract predicates that represent knowledge about the surrounding world. Abstract predicates are defined by formulae in temporal first-order logic (the Abstract Event Specification Language, AESL) whose leaf predicates represent low-level sensor-derived knowledge. Abstract events are detected by Rete networks structured as a deductive knowledge base. Current abstract event detectors cannot adequately express certain high-level situations, such as activity derived from user trajectories.
In this work we introduce a novel type of abstract event detector, a hidden Markov model detector (hMM-detector). hMM-detectors are implemented as pattern recognition engines that use several stochastic models, hidden Markov models (hMMs), in order to classify observed activities into the most likely activity class. We link hMM-detectors with AESL by specifying a new AESL operator for defining hMM-based abstract events, thus increasing AESL's expressive power.
We describe the experimental evaluation of this work, carried out at the University of Cambridge. hMM-detectors were trained and tested with real data from the Active BAT location system. We evaluate the expressiveness of the enhanced AESL by discussing three case studies in healthcare that relate to continuous monitoring of elderly or injured patients. We demonstrate that AESL can be used to improve the dependability of continuous patient monitoring and the provision of high-quality healthcare.
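The classification step performed by an hMM-detector, as described in the abstract above, can be sketched with a toy example. The activity classes, model parameters, and observation encoding below are made up for illustration, not taken from the evaluated system: one HMM per activity class scores an observation sequence with the forward algorithm, and the highest-scoring class is reported.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy sketch of the classification step inside an hMM-detector: each candidate
// activity class has its own HMM, and the most likely class is reported.
public class HmmDetectorSketch {
    // A discrete-observation HMM: initial, transition and emission probabilities.
    static class Hmm {
        final double[] pi;   // pi[state]
        final double[][] a;  // a[state][nextState]
        final double[][] b;  // b[state][observationSymbol]
        Hmm(double[] pi, double[][] a, double[][] b) { this.pi = pi; this.a = a; this.b = b; }

        // Forward algorithm: P(observations | model).
        double likelihood(int[] obs) {
            int n = pi.length;
            double[] alpha = new double[n];
            for (int s = 0; s < n; s++) alpha[s] = pi[s] * b[s][obs[0]];
            for (int t = 1; t < obs.length; t++) {
                double[] next = new double[n];
                for (int s2 = 0; s2 < n; s2++) {
                    double sum = 0;
                    for (int s1 = 0; s1 < n; s1++) sum += alpha[s1] * a[s1][s2];
                    next[s2] = sum * b[s2][obs[t]];
                }
                alpha = next;
            }
            double p = 0;
            for (double v : alpha) p += v;
            return p;
        }
    }

    // Pick the activity class whose HMM assigns the highest likelihood.
    static String classify(Map<String, Hmm> models, int[] obs) {
        String best = null;
        double bestP = -1;
        for (Map.Entry<String, Hmm> e : models.entrySet()) {
            double p = e.getValue().likelihood(obs);
            if (p > bestP) { bestP = p; best = e.getKey(); }
        }
        return best;
    }

    public static void main(String[] args) {
        // Two hypothetical activity classes with hand-set parameters.
        Map<String, Hmm> models = new LinkedHashMap<>();
        models.put("walking", new Hmm(new double[]{0.6, 0.4},
                new double[][]{{0.7, 0.3}, {0.4, 0.6}},
                new double[][]{{0.8, 0.2}, {0.3, 0.7}}));
        models.put("resting", new Hmm(new double[]{0.5, 0.5},
                new double[][]{{0.9, 0.1}, {0.2, 0.8}},
                new double[][]{{0.1, 0.9}, {0.4, 0.6}}));
        int[] observations = {0, 0, 1, 0};  // e.g. quantised location/speed readings
        System.out.println("most likely activity: " + classify(models, observations));
    }
}
```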
To manage the development of cooperative information systems that support the dynamics and mobility of modern businesses, separation-of-concerns mechanisms and abstractions are needed. Model-driven development (MDD) approaches use abstraction and transformation to handle complexity. In MDD, specifying transformations between models at various levels of abstraction can be a complex task. Specifying transformations for pervasive system services that are tangled with other system services is particularly difficult because the elements to be transformed are distributed across a model. This paper presents an aspect-oriented model-driven framework (AOMDF) that facilitates separation of pervasive services and supports their transformation across different levels of abstraction. The framework facilitates composition of pervasive services with enterprise services at various levels of abstraction. It is illustrated using an example in which a platform-independent model of a banking service is transformed into a platform-specific model.
Cloud computing has become so deeply embedded in our lives that it is time to make the cloud more productive. What makes cloud computing novel is the possibility of almost immediate resizing of resources under a pay-as-you-go usage model. Even as companies move forward with various cloud initiatives, cloud consumers are reluctant to relinquish control of their infrastructure to cloud providers. The rivalry among cloud providers to stay competitive in the market drives them to lock in their customers, so customers cannot easily migrate to another provider due to non-interoperable APIs (Application Programming Interfaces) and portability and migration issues. In addition, many of today's cloud clients are mobile devices, and consuming a cloud service on a mobile device poses another set of risks. One way to handle this complexity is to devise an intermediary that can take care of the heterogeneity at the cloud and mobile levels and ensure a multi-cloud deployment of applications by taking advantage of the best features from different vendors simultaneously. This paper is an initiative to understand the problems underlying multi-cloud solutions and their adoption in the mobile world. It begins with a broad coverage of existing work, gives an outline of a multi-cloud middleware, and discusses existing issues with API heterogeneity, which is the prime concern in the vendor lock-in problem.
This paper describes a procedure for using third-party tools and applications while avoiding the development of complex communication software modules for data sharing. A common practice in robotics is the use of middlewares to interconnect different software applications, hardware components, or even complete systems; this allows code and tool reuse, minimizing development effort. In this way, applications developed for one middleware can be shared with others by establishing communication bridges among them. The most widespread approach is the development of software modules that use the low-level communication resources that middlewares provide. This approach has many advantages but a clear disadvantage: the complexity of development. The procedure proposed here is based on the use of cloud technologies for data sharing without the development of middleware bridges. Different middlewares are interrelated through the development of a compatible robot model. This procedure has enabled the use of the ArmarX middleware tools and the application of the results obtained to the humanoid robot TEO, which uses the YARP middleware, in an easy and fast way.
Over the last few decades, research in robotics has emphasized the modeling and development of cognitive machines. A cognitive machine can be programmed with multiple cognitive capabilities to make it artificially intelligent. Numerous cognitive modules interact to mimic human behavior in machines, resulting in such a heavily coupled system that a minor change in logic or hardware may affect a large number of its modules. To address this problem, several middlewares exist to ease the development of cognitive machines. Although these layers decouple logic building from the communication infrastructure of modules, they are language-dependent and have their limitations. A cognitive module developed for one research project often cannot be part of another, resulting in reinvention of the wheel. This paper proposes a RESTful technology-based framework that provides language-independent access to low-level control of the iCub's sensory-motor system. Moreover, the model is flexible enough to provide hybrid communication between cognitive modules running on different platforms and operating systems. Furthermore, a cognitive client is developed to test the proposed model. An experimental analysis performed by creating different scenarios shows the effectiveness of the proposed framework.
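The general idea of a RESTful wrapper around low-level robot control can be sketched with the JDK's built-in HTTP server. The /motor endpoint, its query parameters, and the call into the robot layer below are hypothetical; the actual framework and the iCub sensory-motor API are not reproduced here. Any language-independent client can then drive the robot with a plain HTTP request such as GET /motor?joint=left_elbow&position=0.5.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal sketch of a RESTful bridge to low-level robot control: an HTTP
// endpoint accepts a language-neutral request and translates it into a
// (here, simulated) motor command.
public class RestMotorBridgeSketch {

    // Placeholder for the middleware call that would drive the real robot.
    static void moveJoint(String joint, double position) {
        System.out.printf("moving joint %s to %.2f%n", joint, position);
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/motor", RestMotorBridgeSketch::handleMotor);
        server.start();
        System.out.println("REST bridge listening on http://localhost:8080/motor");
    }

    static void handleMotor(HttpExchange exchange) throws IOException {
        String query = exchange.getRequestURI().getQuery();
        String joint = "unknown";
        double position = 0.0;
        if (query != null) {
            for (String pair : query.split("&")) {
                String[] kv = pair.split("=", 2);
                if (kv.length == 2 && kv[0].equals("joint")) joint = kv[1];
                if (kv.length == 2 && kv[0].equals("position")) position = Double.parseDouble(kv[1]);
            }
        }
        moveJoint(joint, position);  // language-independent clients reach this via HTTP
        byte[] body = ("moved " + joint + " to " + position).getBytes(StandardCharsets.UTF_8);
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }
}
```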
This paper discusses issues of engineering access control solutions in distributed applications for enterprise computing environments. It reviews application-level access control available in existing middleware technologies, discusses open problems in these technologies, and surveys research efforts to address the problems.
An important emerging requirement for distributed systems is adaptive QoS, which necessitates more flexible system infrastructure components that can adapt robustly to dynamic end-to-end changes in application requirements and environmental conditions. This paper shows how to add dynamic configuration, from the perspective of fault-tolerance QoS, to a middleware-based fault-tolerant object management framework. Three specific technical challenges in satisfying the requirements are: 1) perceiving changes in the environment; 2) defining the trigger for reconfiguration; and 3) reconfiguring the resources once the trigger is satisfied. We define dynamic reconfiguration as the relation among fault-tolerance properties, descriptions of computing environment changes, and a dynamic adjustment algorithm, and on this definition we build a management framework for dynamic reconfiguration. In this framework, the introduction of a reflection model and a publish/subscribe model makes the acquisition and notification of information more convenient and flexible, and the adoption of a policy customization mechanism makes reconfiguration more effective to enforce. The framework presented in this paper has been validated in a prototype.
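The trigger mechanism outlined in this abstract can be illustrated with a small generic sketch. The event type, metric name, and threshold are assumptions, not the paper's framework: environment changes are published to subscribers, and when a trigger predicate over those changes is satisfied, a registered reconfiguration action is enforced.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Generic sketch of trigger-driven reconfiguration on top of publish/subscribe.
public class ReconfigurationSketch {

    // A perceived change of the computing environment.
    record EnvironmentChange(String metric, double value) {}

    // Very small publish/subscribe bus for environment changes.
    static class EnvironmentBus {
        private final List<Consumer<EnvironmentChange>> subscribers = new ArrayList<>();
        void subscribe(Consumer<EnvironmentChange> s) { subscribers.add(s); }
        void publish(EnvironmentChange change) { subscribers.forEach(s -> s.accept(change)); }
    }

    public static void main(String[] args) {
        EnvironmentBus bus = new EnvironmentBus();

        // Trigger: reconfigure when the observed failure rate exceeds a threshold.
        Predicate<EnvironmentChange> trigger =
                c -> c.metric().equals("failureRate") && c.value() > 0.2;

        // Reconfiguration policy: here, simply raise the replication degree.
        Runnable reconfigure = () -> System.out.println("trigger satisfied: raising replication degree");

        bus.subscribe(change -> {
            if (trigger.test(change)) reconfigure.run();
        });

        // Simulated environment notifications.
        bus.publish(new EnvironmentChange("failureRate", 0.05));  // below threshold, ignored
        bus.publish(new EnvironmentChange("failureRate", 0.35));  // satisfies the trigger
    }
}
```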
Database replication has gained a lot of attention in the last few years since databases are often the bottleneck of modern information systems. Database replication is often implemented at the middleware level. However, there is an important tradeoff in terms of scalability and performance if the database is treated as a black box. In this paper, we advocate a gray-box approach in which some minimal functionality of the database is exposed through a reflective interface, enabling performant and scalable database replication at the middleware level. Reflection is the adequate paradigm for separating the regular functionality from the replication logic. However, traditional full-fledged reflection is too expensive. For this reason, the paper focuses on exploring lightweight reflective interfaces that combine the architectural advantages of reflection with good performance and scalability. The paper also thoroughly evaluates the cost of the different proposed reflective mechanisms and their impact on performance.
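The kind of lightweight reflective interface argued for here can be pictured as a small set of callbacks. The names and the writeset shape below are illustrative assumptions, not the interface proposed in the paper: the database exposes only transaction lifecycle events and writesets, which is what middleware-level replication logic needs to intercept.

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of a lightweight reflective interface between a
// database engine and a replication middleware.
public interface ReflectiveDatabaseHooks {

    // One modified row: table name plus column/value pairs.
    record WriteSetEntry(String table, Map<String, Object> columns) {}

    // Invoked by the database when a local transaction starts.
    void onTransactionBegin(long txId);

    // Exposes only the rows a transaction intends to install, so the
    // middleware can propagate or certify them without full reflection.
    void onWriteSetReady(long txId, List<WriteSetEntry> writeSet);

    // Invoked after the local commit/abort decision, letting the middleware
    // confirm or compensate on the replicas.
    void onTransactionEnd(long txId, boolean committed);
}
```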
The importance of network-ready personal devices in collaborative systems is becoming apparent. Participants in a collaborative environment use mobile devices to overcome the physical constraints of using a conventional PC. However, integrating disparate devices such as mobile devices and PCs is difficult because of the connectivity and computing power limitations of mobile devices. We believe a dedicated middleware layer that supports universal access, event mapping, user management, and content adaptation greatly helps integrate mobile devices into collaborative systems. In this paper, we propose a ubiquitous computing environment architecture for a collaborative system called the Garnet Message System Micro Edition (GMSME). This architecture provides the environment to resolve these issues and is designed with the computing resource limitations of mobile devices in mind.
This paper proposes a mobile-agent-based middleware model that accommodates the heterogeneity, distribution, dynamism, and local autonomy of Grid environments. The model can resolve the main problems that arise in such environments. The model architecture is described, and the working principle and implementation of the mobile agent are analyzed. Finally, we present experimental results comparing a conventional grid with our mobile-agent-based grid.
A way for users to develop information management applications that access databases through middleware is introduced. Data to be shared is placed on the LAN so that users on the LAN can access it according to their assigned authority, realizing resource sharing. The system can also be realized on the Internet: the shared data can be accessed through broadband or Internet telephony, extending the scope of the application. By changing only the relevant configuration, the development interfaces provided by the middleware can be kept unchanged, and the systems developed by users can likewise be kept unchanged, which reduces users' development work. The progress and performance of the entire system can be improved by leaving this work to professional programmers.
This work presents a Body Sensor Network (BSN) system architecture concept that can be applied not only for medical purposes, but also for social applications, entertainment, and lifestyle. Emphasis lies on keeping the user in control of his or her own, possibly privacy-sensitive, data by offering a "body firewall", and on allowing rapid development and deployment of new services by domain experts, reusing existing hardware and coexisting with existing services.