The tracking technique examined in this study considers the nanosensor's velocity and distance as independent random variables with known probability density functions (PDFs). The nanosensor moves continuously in both directions from the starting point of the real line (the line's origin), oscillating as it travels through the origin (both left and right). We provide an analytical expression for the density of this distance using the Fourier-Laplace representation and a sequence of random points. To account for this uncertainty, the tracking distance can be treated as a function of a discounted effort-reward parameter. We analytically demonstrate the effect of this parameter on reducing the expected value of the first collision time between the nanosensor and the particle, confirming the viability of the technique.
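The first collision time in such a model can be explored numerically. The sketch below is a Monte Carlo toy version only: the exponential step distribution, the alternating-direction rule, and the collision criterion are assumptions for illustration, not the PDFs or the analytical result derived in the paper.

```python
import random

def first_hit_time(target, trials=300, seed=0):
    """Toy Monte Carlo estimate of the expected first collision step.

    Each trial: the sensor starts at the origin, repeatedly draws a random
    travel distance (exponential, an assumed PDF), and alternates direction,
    oscillating through the origin. The trial ends when |position| reaches
    |target| (a crude stand-in for a collision with the particle)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pos, direction, steps = 0.0, 1, 0
        while abs(pos) < abs(target):
            steps += 1
            pos += direction * rng.expovariate(1.0)
            direction = -direction  # oscillate left/right through the origin
        total += steps
    return total / trials
```

Such a simulation is only a sanity check against whatever closed-form expression the discounted effort-reward analysis yields.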
The desirable global scalability of Grid systems has steered research towards employing the peer-to-peer (P2P) paradigm for the development of new resource discovery systems. As Grid systems mature, the requirements for such a mechanism have grown from simply locating a desired service to composing more than one service to achieve a goal. In the Semantic Grid, resource discovery systems should also be able to automatically construct any desired service that is not already present in the system by using other, already existing services. In this paper, we present a novel system for the automatic discovery and composition of services, based on the P2P paradigm, designed with (but not limited to) a Grid environment in mind. The paper improves composition and discovery by exploiting a novel network partitioning scheme for decoupling services that belong to different domains and an ant-inspired algorithm that places co-used services in neighbouring peers.
In this paper we introduce the concept of knowledge granularity and study the relationship between different knowledge representation schemes and the scaling problem. By scaling to a task, we mean that an agent's planning system and knowledge representation scheme are able to generate the range of behaviors required by the task in a timely fashion. Action selection is critical to an agent performing a task in a dynamic, unpredictable environment, and knowledge representation is central to the agent's action selection process. It is therefore important to study how an agent should adapt its methods of representation so that its performance can scale to different task requirements. Here we study the following issues. One is the knowledge granularity problem: to what detail should an agent represent a certain kind of knowledge if a single granularity of representation is to be used? Another is the representation scheme problem: to scale to a given task, should an agent represent its knowledge using a single granularity or a set of hierarchical granularities?
Modern model-based reinforcement learning methods for high-dimensional inputs often incorporate an unsupervised learning step for dimensionality reduction. The training objective of these unsupervised methods often leverages only static inputs, e.g., by reconstructing observations. These representations are combined with predictor functions for simulating rollouts to navigate the environment. We advance this idea by taking advantage of the fact that we navigate dynamic environments using visual stimuli, and create a representation that is specifically designed with control and actions in mind. We propose to learn a feature map that is maximally predictable for a predictor function. This results in representations that are well suited to the task of planning, where the predictor is used as a forward model. To this end, we introduce a new way of learning this representation along with the prediction function, a system we dub Latent Representation Prediction Network (LARP). The prediction function is used as a forward model for search on a graph in a viewpoint-matching task, and the representation learned to maximize predictability is found to outperform other representations. The sample efficiency and overall performance of our approach are shown to rival standard reinforcement learning methods, and our learned representation transfers successfully to unseen environments.
This paper describes Grumman’s Rapid Expert Assessment to Counter Threats (REACT) project, designed to aid pilots in air combat decision making. We present a hierarchical design for a planning system which addresses some of the real-time aspects of planning for threat response. This paper concentrates on the lowest level of this hierarchy which is responsible for planning combat maneuvers at low altitude over hilly terrain when the enemy is not in sight. REACT’s Lost Line of Sight module attempts to maximize the amount and depth of knowledge which can be utilized in the time available before the system must commit to its next action. It utilizes a hybrid architecture for planning decisions which incorporates multiple knowledge representations and planners based on artificial intelligence, neural networks, and decision theory. This architecture allows planning at different degrees of competence to be performed by concurrently operating planners with differing amounts of knowledge. We describe research on the planning issues in REACT as well as the associated knowledge representation and knowledge acquisition issues. In addition, we describe how work on developing terrain reasoning capability in REACT has suggested guidelines for knowledge base design and data management, system and language specifications, and planner architectures pertinent to real-time coupled systems.
The paper addresses the problem of controlling situated image understanding processes. Two complementary control styles are considered and applied cooperatively: a deliberative one and a reactive one. The role of deliberative control is to account for the unpredictability of situations by dynamically determining which strategies to pursue, based on the results obtained so far and, more generally, on the state of the understanding process. The role of reactive control is to account for the variability of local properties of the image by tuning operations to subimages, each of which is homogeneous with respect to a given operation. A variable organization of agents is studied to cope with this variability. The two control modes are integrated into a unified formalism describing segmentation and interpretation activities. Feedback from high-level interpretation tasks to low-level segmentation tasks thus becomes possible and is exploited to recover from wrong segmentations. Preliminary results in the field of liver biopsy image understanding are shown to demonstrate the potential of the approach.
The reliability of a power distribution network is important. For high reliability, some nodes must have backup connections to other feeders in the network. The substation operator wants to expand the network so that some nodes have k redundant connection lines (i.e., k redundancy) in case the current feeder line fails. A corporation is given the task of designing the expansion plan, i.e., constructing new connection lines. The substation operator will choose the minimum-charge k redundant connection lines based on both the existing network and the expansion network designed by the corporation. In the existing network, a redundant connection incurs a cost due to operational expenses. The corporation proposes its design with its own prices, which may include both operational and construction expenses. Thus, for the corporation, how to price the connection lines while maximizing revenue becomes a Stackelberg minimum weight k-star game for power distribution network expansion.
A heuristic algorithm is proposed to solve this Stackelberg minimum weight k-star game for the power distribution network expansion, using three heuristic rules for price setting in a scenario-by-scenario fashion. The experimental results show that, in terms of corporation revenue, the proposed algorithm always outperforms the greedy algorithm that is natural for the k-star game. Compared to the greedy algorithm, the proposed algorithm improves corporation revenue by up to 60.7% on the chosen minimum weight k-star, i.e., the minimum-charge set of k connection lines, with an average improvement of 7.5%. This effectively handles k redundancy in the power distribution network expansion while maximizing corporation revenue.
The ability to handle changes is a characteristic feature of successful software projects. The problem addressed in this paper is what should be done in project planning and iterative replanning so that a project can react effectively to changes. The work thus presents research results in software engineering, as well as a transfer of methods from knowledge engineering to software engineering, applying AI planning techniques to software process modeling and software project management. Our method is based on inter-project experience and evolution patterns. We propose a new classification of software projects, identifying and characterizing ten software process evolution patterns and linking them to different project profiles. Based on the evolution patterns, we discuss planning support for process evolution and propose several methods that are new or significantly extend existing work, e.g. cost estimation of process changes, evolution pattern analysis, and a coarse process model for initial planning and the iterative replanning process. Preliminary results have shown that the study of evolution patterns, based on inter-project experience, can provide valuable guidance in software process understanding and improvement.
In this work, we present an approach for documenting object-oriented application frameworks and using the documentation to guide the framework instantiation process. Our approach is based on a shift from framework-centered to functionality-centered documentation, through which a tool can guide the instantiation process according to the functionality required for the new application. The fundamental idea of our work is to combine user-task modeling with least-commitment planning methods to guide the instantiation process. Based on these techniques, the tool is able to present to the developer the different high-level activities that can be carried out when creating a new application from a framework, taking as a basis the documentation provided by the designer through instantiation rules.
This paper presents the concept and realization of configurable resource models as an extension of a treatment scheduling system for users in the medical sector. Our approach aims to ease the handling of automated treatment scheduling by domain experts without the immediate assistance of IT experts. Configurable resource models were integrated into our automated treatment planning system for medical facilities and support user-friendly configuration of resources such as specialized treatment rooms, medical devices, or medical staff.
In our approach, treatment process models are defined in the BPMN workflow-language by the domain experts. The new concept of configurable resource models allows the end-user to interactively describe the available resources in their environment. These descriptions (i.e. configurable resource objects or CDOs) can then be linked to activities specified in the treatment models. Together, CDOs and the BPMN treatment models are automatically transformed into CSPs, i.e. mathematical descriptions which can be solved by constraint solvers, thus yielding optimal treatment plans.
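The abstract does not detail the CSP encoding, but the idea of turning linked treatment activities and resources into a constraint problem can be illustrated with a deliberately tiny sketch. All names below are hypothetical, and a brute-force enumeration stands in for a real constraint solver.

```python
from itertools import product

def schedule(activities, rooms, slots):
    """Toy scheduling CSP in the spirit of the described transformation:
    assign each treatment activity a (room, time slot) pair so that no room
    is booked twice in the same slot, minimizing the latest slot used.
    Returns (makespan, {activity: (room, slot)}) or None if infeasible."""
    best = None
    for assignment in product(product(rooms, slots), repeat=len(activities)):
        if len(set(assignment)) < len(assignment):  # a (room, slot) used twice
            continue
        makespan = max(slot for _, slot in assignment)
        if best is None or makespan < best[0]:
            best = (makespan, dict(zip(activities, assignment)))
    return best
```

A production system would instead emit the same variables and constraints in a solver's input language, which is what the automatic BPMN-plus-CDO transformation amounts to.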
In this paper, we apply the π-calculus [1,2] to planning for agents. First, we propose a new language for describing agent plans based on the π-calculus, called PDL (Plan Description Language), and a model for executing them. Plans described in PDL can be changed dynamically while they are executing, because the π-calculus provides dynamically changing structures. Using this property, agents can change their plans by themselves during execution to adapt to the environment around them. This property, called reflection, is very important for agents. We state these properties as theorems and prove them. Second, we have implemented an interpreter for PDL. To implement the system, we propose a primitive language, called PiL (Pi-calculus Language), which can be used on computers more easily than the mathematical notation of the π-calculus. We show that this system executes programs correctly and that a plan written in PDL can be executed on the system. Finally, we show experimentally that PDL is useful in dynamically changing environments.
Planning with temporally extended goals has recently received much attention from researchers in the planning community. We study a class of planning goals where, in addition to a main goal, there exist other goals, which we call auxiliary goals, that act as constraints on the main goal. Both of these types of goals can, in general, be temporally extended goals. Linear temporal logic (LTL) is inadequate for specifying the overall goals of this type, although, in some situations, it is capable of expressing them separately. A branching-time temporal logic, like CTL, on the other hand, can be used to specify these goals. However, we are interested in situations where an auxiliary goal has to be satisfiable within a fixed bound. We show that CTL becomes inadequate for capturing these situations. We bring in an existing logic, called min-max CTL, and show how it can effectively be used for planning. We give a logical framework for expressing the overall planning goals. We propose a sound and complete planning procedure that incorporates model checking technology. In doing so, we can answer planning queries such as plan existence at the outset, besides producing an optimal plan (if any) in polynomial time.
Significant advances have occurred in heuristic search for planning in the last eleven years. Many of these planners use A*-style search. We report on five sound and complete domain-independent forward state-space STRIPS planners in this paper: AWA* (Adjusted Weighted A*), MAWA* (Modified AWA*), AWA*-AC (AWA* with action conflict-based adjustment), AWA*-PD (AWA* with deleted preconditions-based adjustment), and AWA*-AC-LE (AWA*-AC with lazy evaluation). AWA* is the first planner to use node-dependent weighting in A*. MAWA*, AWA*-AC, AWA*-PD, and AWA*-AC-LE use conditional two-phase heuristic evaluation. MAWA* applies node-dependent weighting to a subset of the nodes in the fringe, after the two-phase evaluation. One novel idea in AWA*-AC-LE is lazy heuristic evaluation, which does not construct relaxed plans to compute heuristic values for all nodes. We report an empirical comparison of AWA*, MAWA*, AWA*-AC, AWA*-PD, and AWA*-AC-LE with the classical planners AltAlt, FF, HSP-2 and STAN 4. Our variants of A* outperform these planners on several problems. The empirical evaluation shows that heuristic search planning benefits significantly from node-dependent weighting, conditional two-phase heuristic evaluation and lazy evaluation. We report insights into the inferior performance of our planners in some domains using the notion of waiting time. We discuss many other variants of A*, state-space planners and directions for future work.
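Node-dependent weighting replaces the single global weight of weighted A* with a per-node weight, so the evaluation function becomes f(n) = g(n) + w(n)·h(n). The sketch below is a generic best-first search with that evaluation function; the `weight` function is a placeholder, since the abstract does not give AWA*'s actual weighting scheme.

```python
import heapq

def weighted_astar(start, goal, neighbors, h, weight):
    """Best-first search with a node-dependent heuristic weight:
    f(n) = g(n) + weight(n) * h(n).
    `neighbors(n)` yields (successor, step_cost) pairs; `weight` is a
    hypothetical per-node weighting, not the one defined in the paper."""
    frontier = [(weight(start) * h(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):  # better path to nxt
                best_g[nxt] = g2
                f2 = g2 + weight(nxt) * h(nxt)
                heapq.heappush(frontier, (f2, g2, nxt, path + [nxt]))
    return None
```

With `weight` constant at 1.0 this reduces to plain A*; a node-dependent scheme would, for example, trust the heuristic less near the root and more deep in the tree.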
Reliability assessments of AI programs must consider not only possible program bugs which remain in the program due to insufficient testing and debugging, but also faults due to intrinsic characteristics of AI programs that cannot be removed even after the program is fully debugged. This paper develops an analytical tool for assessing the reliability of AI programs. Possible intrinsic faults of AI programs are identified, and modifications to existing software reliability models for conventional programs are suggested. An example illustrating the effect of intrinsic faults of AI heuristics on program reliability in a real-time situation is given. It is shown that under certain conditions the cost-based A* planning algorithm is less reliable than the node-based A* planning algorithm.
This article describes a methodology for building integrated planning-reacting systems. The work is based on a formal approach to building the reactive component (the reactor); this allows us to formalize the concept of a planner improving a reactive system. Our novel planner design emphasizes how the planner can use the reactor to focus its reasoning, as well as how the reactor is guided by the planner to improve its behavior.
The reactive component (the reactor) uses a process-based model of robot computation, the RS model. This gives us a powerful representation for actions with precise formal semantics. The duty of the planning component (the planner) is to adapt the reactor to suit a set of objectives and the possibilities afforded by the environment. Planner and reactor both operate continually, separately, and in a complementary fashion. The approach is illustrated with a kitting robot domain problem.
The design of a belief-desire-intention (BDI) architecture is presented. The architecture is defined using a unified object-based knowledge representation formalism, called the OK formalism, and a unified reasoning and acting module, called the OK rational engine. Together they form the OK BDI architecture for modeling rational agents endowed with beliefs, desires, and intentions.
With the current explosion of data, retrieving and integrating information from various sources is a critical problem. Work in multidatabase systems has begun to address this problem, but it has primarily focused on methods for communicating between databases and requires significant effort for each new database added to the system. This paper describes a more general approach that exploits a semantic model of a problem domain to integrate the information from various information sources. The information sources handled include both databases and knowledge bases, and other information sources (e.g. programs) could potentially be incorporated into the system. This paper describes how both the domain and the information sources are modeled, shows how a query at the domain level is mapped into a set of queries to individual information sources, and presents algorithms for automatically improving the efficiency of queries using knowledge about both the domain and the information sources. This work is implemented in a system called SIMS and has been tested in a transportation planning domain using nine Oracle databases and a Loom knowledge base.
If we want to find the shortest plan, then usually, we try plans of length 1, 2, …, until we find the first length for which such a plan exists. When the planning problem is difficult and the shortest plan is of a reasonable length, this linear search can take a long time; to speed up the process, it has been proposed to use binary search instead. Binary search for the value of a certain parameter x is optimal when for each tested value x, we need the same amount of computation time; in planning, the computation time increases with the size of the plan and, as a result, binary search is no longer optimal. We describe an optimal way of combining planning algorithms into a search for the shortest plan – optimal in the sense of worst-case complexity. We also describe an algorithm which is asymptotically optimal in the sense of average complexity.
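The binary-search idea described above can be made concrete with a short sketch. Here `has_plan` is a hypothetical oracle standing in for a full planner call, assumed monotone (if a plan of length L exists, one exists for every bound above L); the abstract's point is precisely that each call's cost grows with L, which is why plain binary search stops being optimal.

```python
def shortest_plan_length(has_plan, max_len):
    """Binary search for the smallest L in [1, max_len] with has_plan(L).

    has_plan(L) -> bool is a placeholder for invoking a planner with
    plan-length bound L; it must be monotone in L. Returns None when no
    plan exists within max_len."""
    lo, hi = 1, max_len
    if not has_plan(hi):
        return None
    while lo < hi:
        mid = (lo + hi) // 2
        if has_plan(mid):
            hi = mid       # a plan fits within mid; search lower bounds
        else:
            lo = mid + 1   # no plan of length mid; search higher bounds
    return lo
```

The optimal schemes the abstract refers to would instead choose the sequence of tested bounds to balance the length-dependent cost of each planner call.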
This article describes a planning approach based on object representation. A planning domain in OAP (Object-oriented Approach for Planning) consists of a dynamic set of objects. OAP provides a language for modeling and implementing planning problems. This approach can evolve a domain model from a literal (predicative) representation to an object-based representation, as well as enhance the development of planning problems. The goal of OAP is to make it possible to design and develop planning problems like any other software engineering problem, and to allow the application of planning to a larger class of domains by using methods (functions) that can be implemented within the world objects. Planning systems using OAP as their language can be integrated into any existing object-oriented software with slight additional effort to transform the system into a planning domain model, which allows planning to be used to solve generic tasks in existing software applications (business, web, …). Planning in real-world systems will therefore be easier to model and implement using all the software engineering facilities offered by object-oriented tools.
A digital city is a social information infrastructure for urban life (including shopping, business, transportation, education, welfare and so on). We started a project to develop a digital city for Kyoto based on the newest technologies, including cooperative information agents. This paper presents an architecture for digital cities and shows the roles of agent interfaces within it. We propose two types of cooperative information agents: (a) front-end agents, which determine and refine users' uncertain goals, and (b) back-end agents, which extract and organize relevant information from the Internet. Both types of agents opportunistically cooperate through a blackboard. We also present research guidelines towards social agents in digital cities; such agents will foster social interaction among people who are living in or visiting the city.