  • Article · No Access

    COOPERATIVE SELF-COMPOSITION AND DISCOVERY OF GRID SERVICES IN P2P NETWORKS

    The desirable global scalability of Grid systems has steered research towards employing the peer-to-peer (P2P) paradigm for the development of new resource discovery systems. As Grid systems mature, the requirements for such a mechanism have grown from simply locating a desired service to composing several services in order to achieve a goal. In the Semantic Grid, resource discovery systems should also be able to automatically construct any desired service that is not already present in the system by using other, already existing services. In this paper, we present a novel system for the automatic discovery and composition of services based on the P2P paradigm, designed with a Grid environment in mind as the target application, though not limited to it. The paper improves composition and discovery by exploiting a novel network partitioning scheme that decouples services belonging to different domains, and an ant-inspired algorithm that places co-used services in neighbouring peers.
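
    As a rough illustration of the kind of ant-inspired placement heuristic the abstract refers to, the sketch below lets services that are frequently co-used deposit "pheromone" on each other's hosting peers and then drift towards the strongest attraction. The class, parameters and update rule are our own assumptions for illustration, not the algorithm from the paper.

    ```python
    from collections import defaultdict

    class AntPlacement:
        """Toy pheromone-based placement: services that are often co-used
        attract each other onto the same or neighbouring peers."""

        def __init__(self, evaporation=0.1, deposit=1.0):
            self.evaporation = evaporation        # fraction of pheromone lost per step
            self.deposit = deposit                # pheromone added per observed co-use
            self.placement = {}                   # service -> hosting peer
            self.pheromone = defaultdict(float)   # (service, peer) -> attraction strength

        def observe_composition(self, services):
            # Each service deposits pheromone on the peers currently hosting
            # the services it was composed with.
            for s in services:
                for t in services:
                    if s != t and t in self.placement:
                        self.pheromone[(s, self.placement[t])] += self.deposit

        def step(self, peers):
            # Evaporate old pheromone, then move each service towards the peer
            # it is most strongly attracted to (a crude stand-in for ant migration).
            for key in list(self.pheromone):
                self.pheromone[key] *= (1.0 - self.evaporation)
            for s, current in list(self.placement.items()):
                best = max(peers, key=lambda p: self.pheromone[(s, p)])
                if self.pheromone[(s, best)] > self.pheromone[(s, current)]:
                    self.placement[s] = best
    ```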

  • Article · No Access

    RETRIEVING AND INTEGRATING DATA FROM MULTIPLE INFORMATION SOURCES

    With the current explosion of data, retrieving and integrating information from various sources is a critical problem. Work in multidatabase systems has begun to address this problem, but it has primarily focused on methods for communicating between databases and requires significant effort for each new database added to the system. This paper describes a more general approach that exploits a semantic model of a problem domain to integrate the information from various information sources. The information sources handled include both databases and knowledge bases, and other information sources (e.g. programs) could potentially be incorporated into the system. This paper describes how both the domain and the information sources are modeled, shows how a query at the domain level is mapped into a set of queries to individual information sources, and presents algorithms for automatically improving the efficiency of queries using knowledge about both the domain and the information sources. This work is implemented in a system called SIMS and has been tested in a transportation planning domain using nine Oracle databases and a Loom knowledge base.
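
    The core idea of mapping a domain-level query onto per-source queries can be sketched in a few lines. The concept names, source names and query strings below are invented for illustration; they are not the actual SIMS or Loom models.

    ```python
    # Hypothetical domain-to-source mapping; everything here is illustrative.
    DOMAIN_TO_SOURCE = {
        "Port": [("geo_db", "SELECT name, depth FROM ports"),
                 ("assets_kb", "retrieve (?p) (Port ?p)")],
        "Ship": [("assets_kb", "retrieve (?s) (Ship ?s)")],
    }

    def plan_domain_query(concepts):
        """Group, per information source, the queries needed to cover the
        requested domain-level concepts."""
        plan = {}
        for concept in concepts:
            for source, query in DOMAIN_TO_SOURCE.get(concept, []):
                plan.setdefault(source, []).append(query)
        return plan

    print(plan_domain_query(["Port", "Ship"]))
    # {'geo_db': ['SELECT name, depth FROM ports'],
    #  'assets_kb': ['retrieve (?p) (Port ?p)', 'retrieve (?s) (Ship ?s)']}
    ```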

  • Article · No Access

    COOPERATIVE INFORMATION AGENTS FOR DIGITAL CITIES

    A digital city is a social information infrastructure for urban life (including shopping, business, transportation, education, welfare and so on). We started a project to develop a digital city for Kyoto based on the newest technologies, including cooperative information agents. This paper presents an architecture for digital cities and shows the roles of agent interfaces in it. We propose two types of cooperative information agents: (a) front-end agents, which determine and refine users' uncertain goals, and (b) back-end agents, which extract and organize relevant information from the Internet. Both types of agents cooperate opportunistically through a blackboard. We also outline research guidelines towards social agents in digital cities; such agents will foster social interaction among the people who live in or visit the city.
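
    A toy sketch of the blackboard-style cooperation described above, assuming a front-end agent that posts refined user goals and a back-end agent that answers them from a local index. All class and method names, and the example data, are illustrative rather than taken from the Kyoto system.

    ```python
    class Blackboard:
        """Shared workspace through which the two kinds of agents cooperate."""
        def __init__(self):
            self.entries = []

        def post(self, kind, payload):
            self.entries.append((kind, payload))

        def read(self, kind):
            return [payload for k, payload in self.entries if k == kind]

    class FrontEndAgent:
        def refine_goal(self, utterance, board):
            # In the real system this would be an interactive dialogue; here we
            # simply normalise the request and post it as a goal.
            board.post("goal", {"topic": utterance.strip().lower()})

    class BackEndAgent:
        def __init__(self, index):
            self.index = index      # topic -> list of relevant pages

        def serve(self, board):
            # Pick up posted goals and publish whatever this agent can contribute.
            for goal in board.read("goal"):
                for page in self.index.get(goal["topic"], []):
                    board.post("result", page)

    board = Blackboard()
    FrontEndAgent().refine_goal("  Shopping ", board)
    BackEndAgent({"shopping": ["kyoto-market.example/arcade"]}).serve(board)
    print(board.read("result"))     # ['kyoto-market.example/arcade']
    ```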

  • Article · No Access

    AN INTEGRATED LIFE CYCLE FOR WORKFLOW MANAGEMENT BASED ON LEARNING AND PLANNING

    The ability to describe business processes as executable models has always been one of the fundamental premises of workflow management. Yet the tacit nature of human knowledge is often an obstacle to eliciting accurate process models. On the other hand, the result of process modeling is a static plan of action, which is difficult to adapt to changing procedures or to different business goals. In this article, we address these problems by approaching workflow management with a combination of learning and planning techniques. Assuming that processes cannot be fully described at build-time, we use learning techniques, namely Inductive Logic Programming (ILP), to discover workflow activities and to describe them as planning operators. These operators are subsequently fed to a partial-order planner, which produces the process model as a planning solution. The continuous interplay between learning, planning and execution aims at arriving at a feasible plan through successive refinement of the operators. The approach is illustrated in two simple scenarios. Following a discussion of related work, the paper concludes by presenting the main challenges that remain to be solved.
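
    To make the idea of representing discovered activities as planning operators concrete, the sketch below models one activity as a STRIPS-style operator that a partial-order planner could chain. The activity name and its conditions are invented for illustration and are not taken from the article.

    ```python
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Operator:
        """A discovered workflow activity expressed as a STRIPS-style operator."""
        name: str
        preconditions: frozenset
        add_effects: frozenset
        del_effects: frozenset = frozenset()

        def applicable(self, state):
            return self.preconditions <= state

        def apply(self, state):
            return (state - self.del_effects) | self.add_effects

    # Invented example: an "approve_order" activity learned from execution logs.
    approve_order = Operator(
        name="approve_order",
        preconditions=frozenset({"order_received", "credit_checked"}),
        add_effects=frozenset({"order_approved"}),
    )

    state = frozenset({"order_received", "credit_checked"})
    if approve_order.applicable(state):
        print(approve_order.apply(state))   # adds 'order_approved' to the state
    ```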

  • Article · No Access

    AUTOMATIC GENERATION OF OPTIMIZED BUSINESS PROCESS MODELS FROM CONSTRAINT-BASED SPECIFICATIONS

    Business process (BP) models are usually defined manually by business analysts in imperative languages, taking into account activity properties, constraints imposed on the relations between activities, and different performance objectives. Furthermore, allocating resources is an additional challenge, since scheduling may significantly impact BP performance. The manual specification of BP models can therefore be very complex and time-consuming, potentially leading to non-optimized models or even errors. To overcome these problems, this work proposes the automatic generation of optimized imperative BP models from declarative specifications. The static part of these declarative specifications (i.e. control-flow and resource constraints) is expected to be useful on a long-term basis. This static part is complemented with information that is less stable and potentially unknown until the BP execution starts, i.e. estimates related to (1) the number of process instances being executed within a particular timeframe, (2) activity durations, and (3) resource availabilities. Unlike conventional proposals, an imperative BP model optimizing a set of instances is created and deployed on a short-term basis. To provide run-time flexibility, the proposed approach additionally allows decisions to be deferred to run-time by using complex late-planning activities, and the imperative BP model to be dynamically adapted at run-time through replanning. To validate the proposed approach, different performance measures for a set of test models of varying complexity are analyzed. The results indicate that, despite the NP-hard complexity of the problems, a satisfactory number of suitable solutions can be produced.
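
    The sketch below illustrates, under our own simplifying assumptions, the kind of input such an approach consumes: activities with estimated durations and control-flow (precedence) constraints, turned into one imperative ordering by a naive single-resource scheduler. A real generator would additionally optimise over resource availabilities and multiple process instances; the activity names and numbers are invented.

    ```python
    from graphlib import TopologicalSorter

    # Illustrative declarative fragment: estimated durations and precedence
    # constraints (activity -> set of activities that must finish first).
    durations = {"receive": 1, "check_credit": 2, "pack": 3, "ship": 1}
    precedence = {"check_credit": {"receive"},
                  "pack": {"receive"},
                  "ship": {"check_credit", "pack"}}

    def naive_schedule(durations, precedence):
        """Serialise the activities on a single resource while respecting the
        precedence constraints; returns the finish time of each activity."""
        finish, clock = {}, 0
        for act in TopologicalSorter(precedence).static_order():
            ready = max((finish[p] for p in precedence.get(act, ())), default=0)
            start = max(clock, ready)
            finish[act] = start + durations[act]
            clock = finish[act]
        return finish

    print(naive_schedule(durations, precedence))
    # e.g. {'receive': 1, 'check_credit': 3, 'pack': 6, 'ship': 7}
    ```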

  • Article · No Access

    Using Timed Automata for a Priori Warnings and Planning for Timed Declarative Process Models

    Many processes are characterized by high variability, which makes traditional process modeling languages cumbersome or even impossible to use for their description. This is especially true in cooperative environments that rely heavily on human knowledge. Declarative languages, like Declare, alleviate this issue by not prescribing what to do step by step but by defining a set of constraints between actions that must not be violated during process execution. Furthermore, in modern cooperative business, time is of utmost importance, so declarative process models should be able to take this dimension into account. Timed Declare has previously been introduced to monitor temporal constraints at runtime, but until now it has only been possible to raise an alert once a constraint has already been violated, without the possibility of foreseeing and avoiding such violations. In this paper, we introduce an extended version of Timed Declare with a formal timed semantics for the entire language. The semantics degenerates to the untimed semantics in the expected way. We also introduce a translation to timed automata, which allows us to detect inconsistencies in models prior to execution and to detect early that a certain task is time-sensitive. This means either that the task cannot be executed after a deadline (or before a latency), or that constraints are violated unless it is executed before (or after) a certain time. This makes it possible to use declarative process models to provide a priori guidance instead of merely detecting a posteriori that an execution is invalid. We also outline how a Declare model with time can be used in resource planning and how Declare has been integrated into CPN Tools.
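
    As a simple illustration of a priori warnings for timed constraints, the sketch below evaluates a timed 'response' constraint (after task A, task B must follow within a deadline) and reports the remaining slack for each pending occurrence, so a warning can be raised before the deadline passes. It is a hand-written check under our own assumptions, not the timed-automata construction introduced in the paper.

    ```python
    def pending_deadlines(trace, a, b, deadline, now):
        """trace: list of (task, timestamp) pairs, ordered by time.
        Return the time left for each occurrence of `a` that has not yet been
        answered by a later `b`; negative values mean the deadline already passed."""
        slack = []
        for task, ts in trace:
            if task == a:
                slack.append(ts + deadline - now)
            elif task == b and slack:
                slack.pop(0)        # the earliest pending `a` is satisfied
        return slack

    # "submit" at time 5 is still unanswered; with deadline 4 and current time 7
    # there are 2 time units left to execute "review" before the constraint is violated.
    trace = [("submit", 0), ("review", 3), ("submit", 5)]
    print(pending_deadlines(trace, "submit", "review", deadline=4, now=7))   # [2]
    ```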