Web services are one of the outcomes of continuing research on the evolution of distributed computing. The aim of this paper is to represent Power System data effectively in XML in order to improve interoperability, and to develop an enhanced distributed model that generates unified XML Power System data for solving various Power System applications in heterogeneous environments. Power System industries are increasingly being privatized, and hence system data is becoming increasingly distributed, with more constrained and complex operational and control requirements. Because of the complex physical connectivity of power systems, all levels of the industry, such as generation, transmission, distribution, and the market, need proper operational and equipment data. As expected, the volume of data to be shared between different Power System applications is huge, so it is vital to have an efficient and reliable data generation model that reduces human effort and keeps the data in a secure and compatible form. The developed JAX-RPC-based model can generate the data dynamically in XML, fetching Power System data from various sources such as databases and text files. Standards such as XML and SOAP enable loosely coupled software design, which reduces restrictions and removes the requirement that coordinating applications be similar.
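The paper's model is JAX-RPC/Java-based; purely as an illustration of the underlying idea of turning tabular power system records into XML, the following minimal Python sketch serializes rows of a hypothetical bus table (an in-memory SQLite table standing in for the paper's database source). All table, column, and element names are assumptions, not the paper's schema.

```python
# Illustrative sketch only: converts tabular power system data into XML.
# Table and element names are hypothetical.
import sqlite3
import xml.etree.ElementTree as ET

def build_bus_xml(conn):
    """Serialize rows of a hypothetical 'bus' table as an XML document."""
    root = ET.Element("PowerSystemData")
    for bus_id, voltage_kv, load_mw in conn.execute(
            "SELECT bus_id, voltage_kv, load_mw FROM bus"):
        bus = ET.SubElement(root, "Bus", id=str(bus_id))
        ET.SubElement(bus, "VoltageKV").text = str(voltage_kv)
        ET.SubElement(bus, "LoadMW").text = str(load_mw)
    return ET.tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bus (bus_id INTEGER, voltage_kv REAL, load_mw REAL)")
conn.executemany("INSERT INTO bus VALUES (?, ?, ?)",
                 [(1, 132.0, 55.2), (2, 66.0, 23.7)])
print(build_bus_xml(conn))
```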
This paper discusses how interactions between Web services engaged in composition scenarios are analyzed, modeled, and finally managed. Interactions are primarily considered as a means of conveying messages of different kinds between separate components, for instance Web services. To achieve better coordination, and hence avoid conflicts between Web services, interactions are handled in this paper by two layers, known as business logic and support. The business-logic layer hosts two flows, known as control and transactional, whereas the support layer hosts two additional flows, namely exception and message. On top of the proposed layers and flows, context and policies are used to oversee the progress of interactions between Web services and to constrain the behavior of these Web services, respectively.
Technology that integrates various types of Web content to build new Web applications through end-user programming is now widely used. However, Web content does not offer a uniform interface for accessing its data and computation; to date, most general Web users access information on the Web through applications. Hence, designing a uniform and flexible programmatic interface for integrating different kinds of Web content is essential. In this paper, we propose an approach that analyzes Web applications automatically and reuses their information through the programmatic interface we designed. Our approach supports the flexible integration of Web applications, Web services, and Web feeds. In our experiments, we use a large number of Web pages from different types of Web applications and achieve integration through the proposed programmatic interfaces. The experimental results show that our approach gives end-users a flexible and user-friendly programming environment.
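The paper's own programmatic interface is not specified here; as a rough sketch of what a uniform interface over heterogeneous Web content sources could look like, the Python fragment below defines one abstract query operation and two toy adapters (for a feed and for a scraped Web application). All class and method names are hypothetical.

```python
# Minimal sketch of a uniform programmatic interface over Web content sources.
# Names and data shapes are hypothetical, not the paper's design.
from abc import ABC, abstractmethod

class WebContentSource(ABC):
    """Uniform interface an integration layer could expose to end-users."""
    @abstractmethod
    def query(self, keyword: str) -> list[dict]:
        ...

class FeedSource(WebContentSource):
    def __init__(self, entries):
        self.entries = entries          # pre-parsed feed items
    def query(self, keyword):
        return [e for e in self.entries if keyword in e["title"]]

class ScrapedAppSource(WebContentSource):
    def __init__(self, records):
        self.records = records          # records extracted from Web pages
    def query(self, keyword):
        return [r for r in self.records if keyword in r.get("text", "")]

def mashup(sources, keyword):
    """End-user composition: one call fans out to every source."""
    return [hit for s in sources for hit in s.query(keyword)]

print(mashup([FeedSource([{"title": "web mashup news"}]),
              ScrapedAppSource([{"text": "mashup tutorial"}])], "mashup"))
```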
In the era of service-oriented software engineering (SOSE), service clustering is used to organize Web services and can improve the efficiency and accuracy of service discovery. To improve the efficiency and accuracy of service clustering, this paper uses the self-join operation of relational databases (RDB) to realize Web service clustering. After storing the service information, it performs self-join operations on the Input, Output, Precondition, Effect (IOPE) tables of Web services, which improves the efficiency of computing service similarity. The semantic reasoning relationship between concepts and the concept status path are used in the calculation, which improves its accuracy. Finally, we use experiments to validate the effectiveness of the proposed methods.
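To make the self-join idea concrete, the sketch below pairs services that share an output concept using a single SQL self-join rather than pairwise comparison in application code; the real approach joins IOPE tables and weighs semantic relationships, which is not reproduced here. Table and column names are hypothetical.

```python
# Sketch of using an RDB self-join to find candidate service pairs that share
# an annotated concept. Schema and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE output (service TEXT, concept TEXT)")
conn.executemany("INSERT INTO output VALUES (?, ?)", [
    ("BookFlight", "Ticket"), ("BookTrain", "Ticket"), ("Weather", "Forecast"),
])

# Self-join: every pair of distinct services annotated with the same concept.
pairs = conn.execute("""
    SELECT a.service, b.service, a.concept
    FROM output a JOIN output b
      ON a.concept = b.concept AND a.service < b.service
""").fetchall()
print(pairs)   # [('BookFlight', 'BookTrain', 'Ticket')]
```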
Ensuring fairness and non-repudiation in Web service security exchange protocols is critical. Model checking is often used for automatic verification of protocol security properties. However, current model checking tools cannot simultaneously formalize protocols with cryptographic primitives, specify properties in linear temporal logic (LTL), and automatically generate a resilient intruder model, so their range of application is severely limited. To solve this problem, a model checker, Fepchecker, is proposed to verify fairness and non-repudiation, which are critical properties of security exchange protocols. First, the applied pi-calculus is extended to specify the protocols, and LTL assertions are used to describe fairness and non-repudiation precisely. Second, an intruder model is applied to construct intruder behavior sequences automatically, and the protocol sessions and message patterns are used to alleviate the state explosion problem. Third, in our model checking algorithm, the fairness and non-repudiation properties are verified against a Labeled Transition System (LTS) semantic model, and the MakeOneMove method is used to explore the state space on the fly during verification. Finally, Fepchecker is applied to verify six representative protocols, and the results show that it can effectively verify their fairness and non-repudiation properties.
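Fepchecker's actual algorithm is not shown here; the toy Python sketch below only illustrates the general flavor of on-the-fly state-space exploration over an LTS, searching for a reachable state that violates a fairness-style predicate (full LTL checking is more involved). The transition relation, states, and predicate are purely illustrative assumptions.

```python
# Toy sketch of on-the-fly exploration of a labelled transition system,
# stopping at the first state that violates an (illustrative) fairness check.
def violates_fairness(state):
    # Hypothetical check: the item was delivered but no receipt was issued.
    return state == ("delivered", "no_receipt")

def successors(state):
    item, receipt = state
    moves = []
    if item == "pending":
        moves.append(("delivered", receipt))         # delivery happens
    if item == "delivered" and receipt == "no_receipt":
        moves.append(("delivered", "receipt_sent"))  # receipt restores fairness
    return moves

def explore(initial):
    """Depth-first, on-the-fly exploration; returns a violating state if found."""
    stack, seen = [initial], {initial}
    while stack:
        state = stack.pop()
        if violates_fairness(state):
            return state                  # counterexample-style state found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return None

print(explore(("pending", "no_receipt")))
```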
Web service recommendation is one of the key problems in service computing, especially when there is a large number of service candidates. QoS (quality of service) values are usually leveraged to recommend the services that best satisfy a user's demand. Many existing methods use collaborative filtering (CF) to predict missing QoS values, but very few can leverage the network location information on both the user side and the service side. In real-world service invocation scenarios, the network location of a user or a service has a great impact on QoS. In this paper, we propose a novel collaborative recommendation framework containing three novel prediction models, based on two techniques: matrix factorization (MF) and network location-aware neighbor selection. We first propose two individual models that use the user-side and service-side information, respectively. We then propose a unified model that combines the results of the two individual models. We conduct extensive experiments on a real-world dataset. The experimental results demonstrate that our models achieve higher prediction accuracy than baseline models and are not sensitive to the parameters.
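As background for the MF ingredient only, the following minimal sketch factorizes a small user-by-service QoS matrix with missing entries using stochastic gradient descent; the paper's location-aware neighbor selection and unified model are not reproduced. The matrix values and hyperparameters are arbitrary assumptions.

```python
# Minimal matrix-factorization sketch for QoS prediction (user x service
# response-time matrix with missing entries marked as 0.0).
import numpy as np

R = np.array([[0.3, 0.0, 1.2],
              [0.4, 0.9, 0.0],
              [0.0, 1.1, 1.3]])
mask = R > 0
k, lr, reg, epochs = 2, 0.05, 0.02, 500
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(R.shape[0], k))   # latent user factors
S = rng.normal(scale=0.1, size=(R.shape[1], k))   # latent service factors

for _ in range(epochs):
    for u, s in zip(*np.nonzero(mask)):
        err = R[u, s] - U[u] @ S[s]
        U[u] += lr * (err * S[s] - reg * U[u])    # gradient step, user factor
        S[s] += lr * (err * U[u] - reg * S[s])    # gradient step, service factor

print(np.round(U @ S.T, 2))   # predicted QoS, including the missing entries
```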
On the open and ever-changing Internet, enterprise business processes need to be organized or restructured dynamically in order to adapt to environmental changes and business logic updates. Web services and service-oriented architecture (SOA) provide a promising approach: business processes working as temporary workflows can be composed from distributed services. However, the cross-organizational nature of such business processes requires considering not only functional requirements but also timed constraints. Timed properties play an important role in service interactions between business processes, such as timed activities, timeouts, and timed deadlocks. Thus, if time requirements cannot be guaranteed, the newly created business process will not be acceptable. This paper proposes a framework that uses Petri nets to model timed service business processes. First, it defines the behavior model of a service business process and gives process composition patterns for different structural forms. Second, the service model is extended with time specifications describing timed constraints among business activity interactions. Third, to support further verification, it introduces a method for automatically generating timed properties in the form of temporal logic formulae. Our framework provides a practical reference for formalizing service business processes into timed service models.
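As a rough illustration of timed constraints on process activities (not the paper's Petri-net formalism or its temporal logic generation), the sketch below attaches a firing interval to a transition and rejects firing outside that interval, mimicking a timeout. Places, transitions, and bounds are hypothetical.

```python
# Toy timed Petri-net fragment: transitions carry [lo, hi] firing intervals;
# firing outside the interval is rejected, standing in for a timeout.
class TimedNet:
    def __init__(self, marking, transitions):
        self.marking = dict(marking)          # place -> token count
        self.transitions = transitions        # name -> (pre, post, (lo, hi))

    def fire(self, name, elapsed):
        pre, post, (lo, hi) = self.transitions[name]
        if not all(self.marking.get(p, 0) >= n for p, n in pre.items()):
            return False                      # transition not enabled
        if not lo <= elapsed <= hi:
            return False                      # timed constraint violated
        for p, n in pre.items():
            self.marking[p] -= n
        for p, n in post.items():
            self.marking[p] = self.marking.get(p, 0) + n
        return True

net = TimedNet({"order_placed": 1},
               {"confirm": ({"order_placed": 1}, {"confirmed": 1}, (0, 24))})
print(net.fire("confirm", elapsed=30))   # False: exceeds the 24-hour window
print(net.fire("confirm", elapsed=2))    # True: fired within the interval
print(net.marking)
```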
This article presents a cost-sensitive probabilistic contingent planning approach for automated semantic web service composition, under the assumptions that the execution of each web service incurs some cost, that its alternative outcomes and their probabilities of occurring are known in advance, and that actions do not involve delete effects. The implemented planner, MAPPPA2, produces a contingent plan in the form of a decision tree by integrating multiple alternative deterministic plans computed by solving a determinized version of the original problem. Both the generation of the alternative deterministic plans and the merging process attempt to maximize the expected utility of the final contingent plan. The article presents evaluation results for the approach, based on three web service composition domains.
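The planner itself is not reproduced here; the short sketch below only illustrates how the expected utility of a decision-tree contingent plan can be evaluated, with service costs at internal nodes and rewards at leaves. All service names, costs, probabilities, and rewards are made-up assumptions.

```python
# Sketch: expected utility of a contingent plan represented as a decision tree.
def expected_utility(node):
    if "reward" in node:                      # leaf: goal reward (or none)
        return node["reward"]
    # internal node: pay the service cost, then branch on its outcomes
    return -node["cost"] + sum(
        p * expected_utility(child) for p, child in node["outcomes"])

plan = {
    "service": "CheckAvailability", "cost": 1.0,
    "outcomes": [
        (0.8, {"service": "BookItem", "cost": 2.0,
               "outcomes": [(1.0, {"reward": 10.0})]}),
        (0.2, {"reward": 0.0}),               # item unavailable: no reward
    ],
}
print(expected_utility(plan))   # -1 + 0.8 * (-2 + 10) + 0.2 * 0 = 5.4
```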
Leveraging the latest research results on hypermedia-driven Web APIs and the newest update of the OpenAPI Specification, we propose a reference ontology for REST services along with a formal procedure for converting OpenAPI service descriptions to instances of this ontology. At the heart of the approach is a model for enhancing the meaning of Schema properties (i.e. re-usable JSON Schema properties that clarify the meaning of service components). Schema properties are semantically annotated (i.e. their meaning is mapped to a semantic model) and existing properties are combined to form composed or polymorphic expressions. The algorithm for mapping service descriptions to the OpenAPI ontology is implemented and is available as a Web Application.
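The paper's ontology and conversion procedure are not detailed here; as a rough sketch of the general idea of semantically annotated Schema properties, the fragment below walks an OpenAPI-style schema dictionary and emits triples for annotated properties. The "x-refersTo" extension keyword is a hypothetical stand-in for whatever annotation mechanism the approach actually uses, and the concept URIs are examples only.

```python
# Minimal sketch: turn semantically annotated Schema properties into triples.
openapi_schema = {
    "Order": {
        "type": "object",
        "properties": {
            "price":    {"type": "number", "x-refersTo": "http://schema.org/price"},
            "currency": {"type": "string", "x-refersTo": "http://schema.org/priceCurrency"},
        },
    }
}

def schema_to_triples(schemas):
    """Emit (subject, predicate, object) triples for annotated properties."""
    triples = []
    for name, schema in schemas.items():
        for prop, spec in schema.get("properties", {}).items():
            concept = spec.get("x-refersTo")
            if concept:
                triples.append((f"#{name}/{prop}", "mapsToConcept", concept))
    return triples

for t in schema_to_triples(openapi_schema):
    print(t)
```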
In this paper, we propose an approach for preserving trade secrets in B2B interactions among competitors. Customer information exchanged during Business-to-Business (B2B) collaborations is usually considered a business asset not to be freely shared with other businesses; this customer information is in essence a business trade secret. The full automation of B2B interactions is now possible thanks to the wide deployment of technologies such as Web services. However, these advances also give businesses greater opportunities to acquire sensitive customer data from other transacting businesses, creating an impediment to potential B2B collaboration between competitors. This calls for techniques that protect against the disclosure of sensitive data. Our approach leverages psycholinguistic knowledge to perturb data so as to computationally impede the disclosure of privileged customer information. We present an analytical model and a set of experiments to demonstrate the robustness of the proposed techniques.
This paper presents a semantics-based dynamic service composition architecture that composes an application through combining distributed components based on the semantics of the components. This architecture consists of a component model called Component Service Model with Semantics (CoSMoS), a middleware called Component Runtime Environment (CoRE), and a service composition mechanism called Semantic Graph based Service Composition (SeGSeC). CoSMoS represents the semantics of components. CoRE provides interfaces to discover and access components modeled by CoSMoS. SeGSeC composes an application by discovering components through CoRE, and synthesizing a workflow of the application based on the semantics of the components modeled by CoSMoS.
This paper describes the latest design of the semantics-based dynamic service composition architecture, and also illustrates the implementation of the architecture based on the Web Service standards, i.e. WSDL, RDF, SOAP, and UDDI. The Web Service based implementation of the architecture allows existing Web Services to migrate onto the architecture without reimplementation. It also simplifies the development and deployment of a new Web Service on the architecture by automatically generating the necessary description files (i.e. WSDL and RDF files) of the Web Service from its runtime binary (i.e. a Java class file).
Traditional middleware is usually built from monolithic, non-evolving entities, resulting in a lack of flexibility and interoperability. Among current architectures, Service Oriented Architectures aim to make it easier to develop more adaptable Information Systems. Most often, Web Services are the technical solution that provides the loose coupling required to achieve such architectures. However, there is still much to be done to obtain a genuinely flawless Web Service, and current market implementations still do not adapt Web Service behavior to the service contract. In this paper, we present our last two years of work toward a more adaptable SOA. We proposed two approaches that use Aspect Oriented Programming (AOP) as a new design solution for Web Services. The two approaches enable us to glue new non-functional behaviors onto a Web Service without having to modify, recompile, retest, and finally redeploy it.
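The paper's AOP weaving targets deployed Web Services; as a loose analogue only, the Python sketch below attaches a non-functional timing/logging behavior around an existing operation without editing its body, which is the spirit of aspect-style "advice". Function names and the example operation are hypothetical.

```python
# Analogue of attaching a non-functional aspect to an unchanged operation.
import functools, time

def timing_aspect(func):
    """'Advice' applied around the original operation."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed = time.perf_counter() - start
            print(f"{func.__name__} took {elapsed * 1000:.2f} ms")
    return wrapper

def get_quote(symbol):                 # the unchanged business operation
    return {"symbol": symbol, "price": 42.0}

get_quote = timing_aspect(get_quote)   # "weaving" happens here, not in the operation
print(get_quote("ACME"))
```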
Automated composition of Web services, i.e., the process of forming new value-added Web services, is one of the most promising challenges facing the Semantic Web today. Semantics enables Web services to describe their capabilities together with their processes, and is hence one of the key elements for automated Web service composition. In this paper, we focus on the functional level of Web services, i.e., services described by input and output parameters semantically enhanced with concepts from a domain ontology. Web service composition is then viewed as a composition of semantic links, where a semantic link refers to the semantic matchmaking between Web service parameters (i.e., outputs and inputs) used to model their connection and interaction. The key idea is that this matchmaking enables, at run time, finding semantic compatibilities among independently defined Web service descriptions. By considering such a level of composition, a formal model for automated Web service composition, the Semantic Link Matrix, is introduced. This model serves as the starting point for applying problem-solving techniques such as regression- (or progression-) based search to Web service composition, and it supports a semantic context in order to find correct, complete, consistent, and robust plans as solutions. In this paper, an innovative and formal model for an AI (Artificial Intelligence) planning-oriented composition is presented. Our system is implemented and interacts with Web services dedicated to Telecom scenarios. The preliminary evaluation results show the high efficiency and effectiveness of the proposed approach.
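The Semantic Link Matrix itself is not reproduced here; the small sketch below only illustrates the matchmaking that underlies a semantic link, grading the connection between one service's output and another's input against a toy concept hierarchy (exact, plug-in/subsumption, or no match). The ontology, services, and match labels are assumptions.

```python
# Sketch of semantic matchmaking between service output and input parameters.
is_a = {"NetworkAddress": "Identifier", "IPAddress": "NetworkAddress"}

def subsumes(general, specific):
    """True if 'general' is an ancestor of (or equal to) 'specific'."""
    while specific is not None:
        if specific == general:
            return True
        specific = is_a.get(specific)
    return False

def match_degree(out_concept, in_concept):
    if out_concept == in_concept:
        return "exact"
    if subsumes(in_concept, out_concept):
        return "plug-in"      # output is more specific than the required input
    return "fail"

services = {"DNSLookup": {"out": "IPAddress"}, "Ping": {"in": "NetworkAddress"}}
link = match_degree(services["DNSLookup"]["out"], services["Ping"]["in"])
print(link)   # 'plug-in': DNSLookup's output can feed Ping's input
```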
Nowadays, business collaborations have to be highly dynamic and flexible to allow companies to operate efficiently and effectively in complex and volatile markets. To increase the business agility of service consumers, it is fundamental that service providers enhance the visibility of parts of their collaborative processes. Service providers are required to release both the process structures of the services offered and their status during execution. To further increase the flexibility of business collaborations, certain control over the process execution has to be offered to service consumers. In this paper, we present a framework for the support of process control in cross-organizational settings. We specify the control primitives that can be used to exert control on activities and processes before, during and after their executions. These primitives empower service consumers to postpone activity and process executions, bypass minor activities, repeat their executions, etc. We describe an approach to the support of these control primitives by service providers. We demonstrate the application of our framework with a case study from the healthcare domain. A proof-of-concept prototype implementation based on Web service technology is presented.
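The framework's control primitives are only named in the abstract; the toy sketch below illustrates what postpone, bypass, and repeat could mean as state changes on a running activity. The state machine, states, and activity name are simplifications assumed for illustration.

```python
# Illustrative control primitives a service consumer might issue on an activity.
class Activity:
    def __init__(self, name):
        self.name, self.state = name, "scheduled"

    def postpone(self):
        if self.state == "scheduled":
            self.state = "postponed"          # delay the execution

    def bypass(self):
        if self.state in ("scheduled", "postponed"):
            self.state = "skipped"            # leave a minor activity out

    def repeat(self):
        if self.state == "completed":
            self.state = "scheduled"          # re-run after completion

a = Activity("collect-lab-results")
a.postpone(); print(a.name, a.state)          # postponed
a.bypass();   print(a.name, a.state)          # skipped
```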
As many software systems migrate their component communication to the web service paradigm, security becomes an immediate concern. Many existing solutions cannot handle complex situations involving the user, session, application, and any number of inter-related XML documents. The learner information management web service we developed for an e-learning environment has to control access to inter-related XML documents representing the different roles of users in that environment. In this paper, we present an authentication and authorization model for web services that provides access control over inter-related XML documents. This scheme works especially well when the documents to be operated on are organized in hierarchical structures, like the collections in a native XML database.
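As a rough illustration of hierarchical access control over XML collections (not the paper's actual model), the sketch below lets a grant on a collection path cover every document beneath it. Roles, paths, and actions are hypothetical.

```python
# Sketch: path-based access control where a grant on a collection propagates
# to all documents stored under it, as in a native XML database.
grants = {
    "instructor": {("/courses/cs101", "read"), ("/courses/cs101/grades", "write")},
    "learner":    {("/courses/cs101/public", "read")},
}

def allowed(role, doc_path, action):
    """A grant on any ancestor collection covers the requested document."""
    for path, act in grants.get(role, ()):
        if act == action and (doc_path == path or doc_path.startswith(path + "/")):
            return True
    return False

print(allowed("learner", "/courses/cs101/public/syllabus.xml", "read"))   # True
print(allowed("learner", "/courses/cs101/grades/alice.xml", "read"))      # False
```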
Selecting the optimal service from a mass of functionally equivalent services at low time cost is a significant challenge. Previous research addressed this issue by relying heavily on various optimization algorithms. In this work, we propose an approach that prunes redundant services and reduces the search space of service selection on the basis of Skyline computing and context inference. We first adopt Skyline computing to prune redundant services. Then, we introduce the notion of a context service, based on context inference, to further reduce the size of the service selection problem. Finally, mixed integer programming is employed to find the optimal service among the context services according to users' Quality of Service (QoS) requirements. Experimental results on a test bed indicate that our approach can find the optimal service at low time cost.
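To illustrate the Skyline pruning step only (the context-inference and mixed-integer-programming stages are not sketched), the fragment below drops any candidate dominated by another service, i.e., one that is no worse in every QoS dimension and strictly better in at least one. The candidate names and QoS values are made up.

```python
# Skyline pruning over two QoS dimensions: lower response time and higher
# reliability are preferred.
candidates = {          # name -> (response_time_ms, reliability)
    "S1": (120, 0.99), "S2": (150, 0.95), "S3": (110, 0.97), "S4": (200, 0.90),
}

def dominates(a, b):
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

skyline = [name for name, qos in candidates.items()
           if not any(dominates(other, qos)
                      for o, other in candidates.items() if o != name)]
print(skyline)   # ['S1', 'S3'] survive; only these reach the selection step
```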
A common robot communication platform based on Web Services is well accepted and gradually becoming popular. One of its key components is reliable messaging, which provides reliable, high-performance message transfer. Both conformance to standard specifications and interoperability among multiple implementations are critical requirements for the platform, because different robots and different services should be able to connect and communicate with each other on it. However, there is no conformance or interoperability testing tool for the reliable messaging component of Web Services. This paper describes the requirements for conformance and interoperability testing of Web Service technologies, and how we developed a verification suite that satisfies these requirements using an automated error-case verification model. The paper also reports interoperability verification results for emerging reliable messaging components of Web Services and our contribution, based on these results, to the international standardization group.
Most work on automated web service composition has so far focused on the composition of stateful web services. This level of composition, the so-called "process level", considers web services with their internal and complex behaviors. At the process level, formal models such as State Transition Systems (STS hereafter) or Interface Automata are the most appropriate models to represent the internal behavior of stateful web services. However, such models focus only on the semantics of behaviors and, unfortunately, not on the semantics of actions and their parameters. In this paper, we suggest extending the STS model by following the WSMO-based annotation for Abstract State Machines. This semantic enhancement of STS, called S2TS, makes it possible to model the semantics of internal behaviors together with the semantics of actions and their input and output parameters. Second, we focus on the automated generation of data flow, i.e., the process of performing automated assignments between parameters of services involved in a composition. We do not restrict ourselves to assignments between exactly matching parameters (which practically never occur in industrial scenarios) but extend assignments to semantically close parameters (e.g., through subsumption matching) in the same ontology. Our system is implemented and interacts with web services dedicated to Telecommunication scenarios. The preliminary evaluation results show the high efficiency and effectiveness of the proposed approach.
Due to the increasing number of Web Services with the same functionality, selecting the Web Service that best serves the needs of a Web client has become a tremendously challenging task. Present approaches use the non-functional parameters of Web Services but do not consider any preprocessing of the set of functionally similar Web Services. The lack of preprocessing results in increased use of computational resources, since Web Services that have little or no chance of satisfying the consumer's requirements are processed unnecessarily. In this paper, we propose an ensemble classification method for preprocessing and a Web Service selection method based on Quality of Service (QoS) parameters. Once the most eligible Web Services are enumerated through classification, they are ranked using the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), with the Analytic Hierarchy Process (AHP) used for weight calculation. A prototype of the method is developed, and experiments are conducted on a real-world Web Services dataset. The results demonstrate the feasibility of the proposed method.
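To illustrate the TOPSIS ranking step, the minimal sketch below ranks three candidate services over two QoS criteria; in the paper the weights come from AHP, whereas here they are simply assumed values, and the QoS numbers are made up.

```python
# Minimal TOPSIS ranking over QoS values: response time (cost criterion,
# lower is better) and reliability (benefit criterion, higher is better).
import numpy as np

qos = np.array([[120.0, 0.99],     # rows: candidate services
                [150.0, 0.95],
                [110.0, 0.97]])
weights = np.array([0.4, 0.6])     # assumed AHP-derived weights
benefit = np.array([False, True])  # which criteria are benefit criteria

norm = qos / np.linalg.norm(qos, axis=0)        # vector normalization
v = norm * weights                               # weighted normalized matrix
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
d_best = np.linalg.norm(v - ideal, axis=1)
d_worst = np.linalg.norm(v - anti, axis=1)
closeness = d_worst / (d_best + d_worst)         # higher is better
print(np.argsort(-closeness))                    # service indices, best first
```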
Efficient execution of data-intensive workflows has been playing an important role in bioinformatics as the amount of data increases rapidly. The execution of such workflows must take into account the volume and pattern of communication. When orchestrating data-centric workflows, a centralized workflow engine can become a performance bottleneck. To cope with this bottleneck, a hybrid approach that uses choreography for the data management of workflows has been proposed. However, when a workflow includes many repetitive operations, that approach may not perform well because of the overhead of its additional mechanisms. This paper presents and evaluates an improvement of the hybrid approach for managing large amounts of data. The performance of the proposed method is demonstrated by measuring the execution times of example workflows.