Rapid advances in sensors, computers, and algorithms continue to fuel dramatic improvements in intelligent robots. In addition, robot vehicles are starting to appear in a number of applications. For example, they have been installed in public settings to perform such tasks as delivering items in hospitals and cleaning floors in supermarkets; recently, two small robot vehicles were launched to explore Mars.
This book presents the latest advances in the principal fields that contribute to robotics. It contains contributions written by leading experts addressing topics such as Path and Motion Planning, Navigation and Sensing, Vision and Object Recognition, Environment Modeling, and others.
https://doi.org/10.1142/9789812797698_fmatter
https://doi.org/10.1142/9789812797698_0001
We consider the problem of planning curvature-constrained paths amidst polygonal obstacles, connecting given start and target configurations. This problem has been extensively studied in the automatic control and computational geometry communities, and has applications, e.g., in flight path planning and in motion planning for car-like robots. In this paper, we first present a new approximation algorithm that relates the allowed approximation uncertainty to location and curvature. Second, we describe an efficient decision procedure to determine whether there exists an admissible path that satisfies a given curvature constraint. Unlike previous approaches, this algorithm can solve intuitively easy problem instances quickly, i.e., in polynomial time.
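As an illustration of the kind of geometric primitive such planners build on (not taken from the chapter), the sketch below tests a sampled circular arc of a candidate path against a curvature bound and polygonal obstacles. The function names, sampling density, and tolerance are hypothetical.

```python
import math

def point_segment_distance(p, a, b):
    # Euclidean distance from point p to the segment ab.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def arc_is_admissible(center, radius, theta0, theta1, kappa_max, obstacles, samples=64):
    # An arc segment is admissible if its curvature 1/radius respects the
    # bound and every sampled point stays clear of all polygon edges.
    if 1.0 / radius > kappa_max:
        return False
    cx, cy = center
    for i in range(samples + 1):
        theta = theta0 + (theta1 - theta0) * i / samples
        pt = (cx + radius * math.cos(theta), cy + radius * math.sin(theta))
        for poly in obstacles:
            for j in range(len(poly)):
                if point_segment_distance(pt, poly[j], poly[(j + 1) % len(poly)]) < 1e-9:
                    return False
    return True
```

A complete planner would assemble admissible arc and line segments into a path connecting the start and target configurations.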
https://doi.org/10.1142/9789812797698_0002
In this paper, we give a heuristic algorithm to compute a collision-free path between two configurations of a general robot. The heuristic relies only on the computation of a very simple primitive, namely the clearance of the robot at a given configuration. This restriction arises in the control of industrial welding robots, where pure computational geometry algorithms are doomed to fail due to technical restrictions. In particular, we can control the robot only with the help of a given tool, without knowing the exact positions and extents of either the robot or the obstacles in the workspace. This setting is adequately described by using the terms fast and secure for measuring the quality of the computed paths. The efficiency of our method is verified experimentally.
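A minimal sketch of a planner that, like the one described, uses only a clearance primitive; the greedy step selection and all parameters are illustrative assumptions, not the authors' algorithm.

```python
import math

def plan_with_clearance(start, goal, clearance, step=0.1, min_clear=0.05, max_iter=1000):
    # Greedy heuristic relying only on the clearance primitive: from the
    # current configuration, probe a fan of fixed directions and take the
    # step that gets closest to the goal while keeping a safe clearance.
    path = [start]
    q = start
    for _ in range(max_iter):
        if math.dist(q, goal) <= step:
            path.append(goal)
            return path
        best = None
        for i in range(16):
            ang = 2 * math.pi * i / 16
            cand = (q[0] + step * math.cos(ang), q[1] + step * math.sin(ang))
            if clearance(cand) < min_clear:
                continue  # candidate too close to an obstacle: reject it
            if best is None or math.dist(cand, goal) < math.dist(best, goal):
                best = cand
        if best is None or math.dist(best, goal) >= math.dist(q, goal):
            return None  # stuck in a local minimum; a real planner would backtrack
        q = best
        path.append(q)
    return None
```

Note that the planner never needs the geometry of the robot or the obstacles, only the scalar clearance value, which matches the restriction described above.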
https://doi.org/10.1142/9789812797698_0003
The dynamic systems approach to robot planning and control defines a “dynamics” of robot behavior in which task constraints contribute independently to a nonlinear vector field that governs robot actions. Situations arise, however, in which superposition of contributions can lead to the formation of spurious attractors and cause related problems. To rectify such problems we define a task space dynamics that provides competition between a set of constraints. We find that competition among task constraints is able to deal with problems that arise when combining constraint contributions. We show how competition among constraints enables agents to execute sequences of behaviors that satisfy task requirements.
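The superposition problem the chapter addresses can be reproduced in a few lines. Assuming heading dynamics of the common form φ̇ = −λ Σᵢ sin(φ − ψᵢ) (an illustrative choice, not necessarily the authors' exact field), the sketch below integrates the summed vector field; with two conflicting targets the system relaxes to a spurious attractor between them, which is exactly what competition between constraints is meant to prevent.

```python
import math

def heading_dynamics(phi, targets, lam=1.0, dt=0.05, steps=400):
    # Each task constraint i contributes an attractor term
    #   f_i(phi) = -lam * sin(phi - psi_i)
    # and the contributions are superimposed; phi relaxes to a fixed
    # point of the summed field via forward-Euler integration.
    for _ in range(steps):
        dphi = sum(-lam * math.sin(phi - psi) for psi in targets)
        phi += dt * dphi
    return phi
```

For targets ψ = 0 and ψ = 2, the superposed field has a stable fixed point at φ = 1, a heading that satisfies neither constraint.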
https://doi.org/10.1142/9789812797698_0004
Components developed under the Unmanned Ground Vehicles (UGV) program in the area of mobility are presented. Specifically, we describe approaches for road following, obstacle detection and avoidance, path planning, teleoperation, and an integrating architecture. These approaches were implemented and demonstrated on a fleet of HMMWV vehicles at Lockheed-Martin Denver and at Ft. Hood, Texas. Individual driving capabilities were integrated into complex systems and demonstrated in the context of complex, militarily relevant missions.
https://doi.org/10.1142/9789812797698_0005
Rich sensory information, robust control strategies, and a proper representation of the environment are crucial for successful navigation of a mobile robot. We propose a model of the environment which is suitable for global navigation using visual sensing. At the lowest level of interaction of the robot with the environment, we employ visual servoing techniques which facilitate robust local navigation by means of relative positioning. Further, we demonstrate how to use these local control strategies in a global setting. The environment is represented in terms of a place graph, where the nodes correspond to places and the arcs have associated servoing strategies for moving from one place to another. Global navigation is expressed as a sequence of relative positioning tasks obtained from a search of the place graph. The proposed model facilitates the generation of motion plans which can be executed in a robust manner thanks to the presence of sensory information in the feedback loop.
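The global layer of such a model reduces to graph search. A minimal sketch, where the graph encoding is an assumption for illustration rather than the authors' data structure:

```python
from collections import deque

def servoing_plan(place_graph, start, goal):
    # place_graph maps a place to {neighbor: servoing_strategy}.
    # Breadth-first search returns the sequence of relative-positioning
    # (servoing) tasks that takes the robot from start to goal.
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        place, plan = queue.popleft()
        if place == goal:
            return plan
        for neighbor, strategy in place_graph.get(place, {}).items():
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, plan + [strategy]))
    return None  # goal place unreachable
```

Each returned strategy would then be executed as a closed-loop visual servoing task, so the plan degrades gracefully under positioning error.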
https://doi.org/10.1142/9789812797698_0006
This article describes a methodology for handling sensing failures in autonomous mobile robots. Our approach relies on a novel extension of Generate and Test to classify and recover from sensing failures with only partial causal models. This method takes advantage of the robot's ability to interact with its environment and use other sensors to confirm hypotheses about the cause of the sensing failure. The details of the exception handling implementation within the Sensor Fusion Effects (SFX-EH) architecture are presented, along with demonstrations of exception handling on two different autonomous mobile robots. The article discusses the advantages and disadvantages of the Generate and Test methodology, and lists future refinements.
https://doi.org/10.1142/9789812797698_0007
A new approach to representing visibility in a geometric scene is presented, so that visibility queries can be answered efficiently. It goes beyond the usually investigated visibility from a single point of view in that visibility even between extended objects can be determined. The method reduces the dimension of the visibility space by two compared with the straightforward approach; that is, two and four dimensions are sufficient for two- and three-dimensional scene spaces, respectively. The practical implementation in the 2D case is based on quadtrees, and intersection, union, and difference of quadtrees are the basic operations for constructing and querying the resulting so-called hierarchical blocker trees. Applications lie in the computational simulation of the sensor-based behavior of mobile robots and in computer graphics.
https://doi.org/10.1142/9789812797698_0008
Self-localization is the process of determining an unknown starting or “wake-up” position in a given environment. In this paper we present a new fast implementation of a simple strategy for a robot to localize itself inside an environment that can be modeled as a simple polygon. The only information available to the robot is the map of the polygon and the data of a range-sensing device; we assume that in this way the robot has access to its local visibility polygon. The simple strategy we consider repeatedly goes to the closest point at which the robot is able to eliminate at least one of the possible positions it may be located at. Our implementation of this strategy runs in time O(kn log n) and space O(kn), where n is the number of vertices of the polygon and k is the number of possible robot positions at the beginning. The length of the path traveled by the robot using our strategy is at most 2(k − 1) times the length of the shortest path needed to verify the robot's position if it is known in advance.
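A toy version of the underlying hypothesis elimination, simplified to a one-dimensional cyclic map in which the robot simply walks in one direction rather than choosing the closest distinguishing point as the paper's strategy does:

```python
def self_localize(world, true_pos, max_steps=None):
    # Hypothesis elimination: every cell of the map is initially a
    # candidate "wake-up" position; each local sensor reading (the map
    # symbol at the current cell) discards inconsistent candidates as
    # the robot moves one cell to the right per step.
    n = len(world)
    candidates = set(range(n))
    offset = 0
    while len(candidates) > 1 and offset < (max_steps or n):
        reading = world[(true_pos + offset) % n]
        candidates = {c for c in candidates if world[(c + offset) % n] == reading}
        offset += 1
    return candidates
```

If the map has cyclic symmetries, several candidates survive and no strategy can distinguish them, which is why k, the initial number of possible positions, appears in the paper's bounds.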
https://doi.org/10.1142/9789812797698_0009
Today's range scanners are able to acquire high-resolution, high-quality range images in real time. However, robust and fast range image analysis and interpretation is still an open research topic. In particular, there is considerable room for improvement in range image segmentation with respect to both quality and speed. This paper reports our efforts to achieve robust and fast range image segmentation. We have developed three algorithms: one for edge detection in range images, and two for segmentation of range images into planar and curved surface patches, respectively. Compared to the methods known from the literature, our algorithms are unique in using high-level segmentation primitives instead of individual pixels. This way we are able to significantly reduce the amount of data we have to deal with. Because the high-level primitives are simple to handle, a fast and reliable segmentation can be achieved. We have tested our algorithms on a large number of real images acquired by three types of scanners with quite different noise characteristics. The segmentation time ranges from less than one to a few seconds on a Sun SparcStation 20, depending on the image resolution and complexity. The robustness of our algorithms is demonstrated by the different types and noise characteristics of the images used in our experiments.
https://doi.org/10.1142/9789812797698_0010
Symmetry is one of the shape features that is often used in computer vision. Some computer vision systems use symmetry-based indexing functions to retrieve images from an image database. Other computer vision applications need to detect the orientation of a shape before it is matched with a model and, if the shape is rotationally symmetric, a specific method has to be developed to find the principal axes of the shape. Symmetry is also useful to recover a planar symmetric figure from an image without the need of models. In this paper, a simple and fast method to detect perfect and distorted rotational symmetries of 2D objects is described. The boundary of a shape is polygonally approximated and represented as a string. A key observation is that the boundary of a rotationally symmetric shape consists of a sequence of identical substrings. Rotational symmetries are found by cyclic string matching between two identical copies of the shape string. The set of minimum cost edit sequences that transform the shape string to a cyclically shifted version of itself define the rotational symmetry and its order. Finally, it is observed that it is possible to find out if a shape string is reflectionally symmetric by computing the cyclic string matching between the string and a reversed version of itself. Thus, a modification of the algorithm is proposed to detect reflectional symmetries. Some experimental results are presented to show the reliability of the proposed algorithm.
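The cyclic string matching idea can be sketched for the special case of perfect symmetries and exact matching (the paper's edit-distance machinery additionally handles distorted shapes):

```python
def rotational_symmetry_order(s):
    # The boundary string of a rotationally symmetric shape is a
    # repetition of identical substrings: the smallest nonzero cyclic
    # shift that maps the string onto itself yields the symmetry order.
    shift = (s + s).find(s, 1)   # smallest p with s equal to s rotated by p
    return len(s) // shift

def is_reflectionally_symmetric(s):
    # A shape string is reflectionally symmetric iff its reversal is a
    # cyclic rotation of itself.
    return s[::-1] in s + s
```

For example, a boundary string of the form "abab" has rotational symmetry of order 2, while a chiral string such as "aababb" is not a cyclic rotation of its own reversal.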
https://doi.org/10.1142/9789812797698_0011
This work summarizes research which uses various aspects of functionality as a means to achieve generic object recognition. We examine several approaches which attempt to integrate computer vision and robotics for the purpose of achieving recognition of functional classes of objects. Such integrated approaches include those which incorporate both the knowledge of the potential functionality of an object and the steps to confirm said functionality through interaction. An overview of our system, GRUFF-I (Generic Recognition Using Form, Function and Interaction), is presented. This system reasons about and generates plans for interaction with 3-D shapes from the categories furniture and dishes.
https://doi.org/10.1142/9789812797698_0012
A robot is described which represents its environment by a net of typical sensor situations.
All sensor readings of the robot (light intensities, tactile readings and position) taken at a time are condensed into a vector, the sensor situation.
Each such vector is checked against the typical situations collected thus far.
If the sensor situation is near a typical one, the latter is shifted towards the new reading; if it is far from all of them, it is taken as a new typical situation itself. An edge is drawn between the last typical situation visited and the current one. A visit counter is incremented for this edge, while the corresponding counters of all other edges emanating from the last typical situation are proportionally decremented.
A visit counter is also attached to each typical situation and incremented every time this situation is the nearest typical one for a given sensor input. If this counter exceeds a preset value, the input is taken as a new typical situation. If all edge counters leading to a typical situation drop to zero, that node is eliminated from the graph.
Drift effects are compensated by going back into well known regions once in a while. Sorting the typical situations according to their position in space limits the search effort to find the best matching typical situation for a given sensor situation.
The graph represents the environment of the robot and allows navigation through the environment.
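A minimal sketch of the prototype update rule described above; the distance threshold and learning rate are hypothetical parameters, and the counter and edge bookkeeping are omitted:

```python
def update_typical_situations(prototypes, reading, radius=1.0, alpha=0.2):
    # Online prototype update: if the sensor situation is near an
    # existing typical situation, shift that prototype towards the
    # reading; otherwise the reading becomes a new typical situation.
    # Returns the index of the matched or newly created prototype.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    if prototypes:
        nearest = min(range(len(prototypes)), key=lambda i: dist(prototypes[i], reading))
        if dist(prototypes[nearest], reading) <= radius:
            p = prototypes[nearest]
            prototypes[nearest] = tuple(x + alpha * (y - x) for x, y in zip(p, reading))
            return nearest
    prototypes.append(tuple(reading))
    return len(prototypes) - 1
```

The returned index identifies the node of the situation graph to which an edge from the previously visited node would be drawn.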
https://doi.org/10.1142/9789812797698_0013
This article describes how to build a world model for an autonomous mobile robot in an unknown environment. We propose solutions for building both 2D and 3D world models. Both models are based on geometrical primitives. These models can be used for either planning or localisation purposes. The models are built up using a 3D optical scanner mounted on the autonomous mobile robot. The emphasis of this article is on representation of the complex objects (obstacles) and on integration of primitives from multiple scans. Algorithms for edge detection and matching are presented. Methods for complex obstacle reconstruction are given. Simulation and experimental results are presented.
https://doi.org/10.1142/9789812797698_0014
In this paper we introduce the notion of distributed spatial relations. Unlike traditional geometric relations, this type of relation is neither restricted to algebraically describable objects nor does it refer to specific geometric reference features, such as corner points, lines, or planes. Distributed spatial relations are based on artificial force fields. Each object applies a force to the other objects in the surrounding environment. The force at a specific point in such a field is a function of the distance of the point to the nearest point on the surface of the object that creates the field. A relation between two objects A and B is encoded in the forces created by object A along the surfaces of object B and vice versa. If the force field is implemented as a grid of discrete distance values, where each grid element is associated with a natural number approximating the Euclidean distance to the nearest surface point of the creating object A, then the forces sensed along the surface of B manifest themselves as a sequence of discrete distance values, which we denote as a distance signature. The core idea of distributed spatial relations is that the set of distance signatures created along the surface of an object by the surrounding objects enables one to uniquely identify the object from various views.
https://doi.org/10.1142/9789812797698_0015
This paper discusses how spatial information at multiple levels of abstraction is used in ATLAS, a multilayered replicated architecture for robot control. ATLAS was specifically designed to address the real-time requirements of robot systems executing complex tasks in real-world environments. The architecture is multilayered in the sense that robot system control is performed at multiple levels of abstraction and at varying control rates, and it is also replicated, meaning that the fundamental activities of sensing, decision-making, and execution occur at each layer of the architecture. ATLAS differs significantly from many other robot control structures in that the layers of the architecture are not ordered along increasing levels of abstraction of the information being processed, but are rather organized as a multirate control system. Consequently, the faster control layers operate closer to the physical robot system and the slower control layers operate on top of faster ones. This arrangement allows ATLAS to respond to urgent control demands in real-time, while maintaining the ability to approach complex tasks in a planned and deliberative manner. Since each ATLAS layer requires access to sensor-based information, we outline how different levels of abstraction of spatial information are used in the architecture, and discuss how these representations are automatically generated from sensor data. These levels of abstraction of spatial information range from raw sensor data to sensor-close Occupancy Grid models, geometric representations and topological models. Each representation is used in the ATLAS layers in which it fits most naturally, both in terms of the information needs of those layers and their cycle time requirements. The paper concludes with an application of these concepts to a mobile robot executing repetitive navigational tasks. 
We show that, by monitoring the accumulation of information by the robot, ATLAS can switch between its active layers, allowing the robot to improve in the execution of its task over time and thereby to exhibit a skill acquisition behaviour.
https://doi.org/10.1142/9789812797698_0016
Manufacturing and assembly processes often require objects to be held in such a way that they can resist all external wrenches. The problem of fixture planning is to compute, for a given object and a set of fixturing elements, the set of placements of the fixturing elements that constrain all finite and infinitesimal motions of the object. We extend the set of commonly used fixturing elements with so-called edge fixels, which contact a polygonal object either at a vertex or along an edge of its convex hull. We show that any polygon without parallel edges can be fixtured by a single edge fixel and two point fixels, and that any convex polygon without parallel edges can be fixtured by two perpendicular edges and a single point fixel. Besides some additional positive results, we point out some open problems on fixturability with edge fixels.
Modular fixturing toolkits offer the advantage of reusability of the fixturing elements and have therefore gained considerable popularity. A modular fixturing toolkit consists of a fixturing table with a rectangular grid of holes, and a set of fixturing elements whose placements are restricted by the grid. Several recent publications in the field of fixture planning discuss the problem of finding all modular fixtures of a part using a given toolkit. We translate the problem of computing all modular fixtures involving edge fixels into geometric searching problems and present efficient algorithms for some of these problems. In addition, we point out some open problems of algorithmic nature.
https://doi.org/10.1142/9789812797698_0017
Tabu search, an iterative discrete optimization technique that is widely used in the Operations Research community, is virtually unknown in Computer Vision and related fields. This paper aims to fill this gap by introducing tabu search through two simple examples and then by describing an implementation of tabu search in a robot vision task. In this task the robot has to remove unmodeled objects from a heap. The heap is represented by a three-dimensional surface description in the form of a mesh of triangles. Tabu search is used to select optimal grasping opportunities in this 3D scene. The grasps are then performed and the objects are taken away.
https://doi.org/10.1142/9789812797698_0018
Consider a system of multiple mobile robots in which each robot, at infinitely many unpredictable time instants, observes the positions of robots and moves to a new position determined by the given algorithm. The robots are anonymous in the sense that they all execute the same algorithm and they cannot be distinguished by their appearances. The robots are not necessarily synchronous, and initially they do not have a common x-y coordinate system. In this paper we discuss the agreement problem on a common x-y coordinate system and the formation problem of geometric patterns (e.g., a point, a circle and a line segment) by the robots. The latter is an extension of the former.
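A much-simplified flavour of the point-formation task: the rule below assumes a fully synchronous round and a shared coordinate system, i.e., it deliberately ignores the anonymity, asynchrony, and coordinate-agreement issues that make the actual problem hard and are the chapter's subject.

```python
def step_towards_centroid(positions, rate=0.5):
    # Anonymous robots all execute the same rule: observe everyone's
    # position and move a fixed fraction of the way towards the centroid.
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    return [(x + rate * (cx - x), y + rate * (cy - y)) for x, y in positions]
```

Since every robot runs the identical algorithm on the same observations, the centroid is invariant and the configuration contracts towards a single point; the chapter analyses when such convergence is still achievable without synchrony or a common frame.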
https://doi.org/10.1142/9789812797698_0019
In recent years, the need for large and complex technical systems has become obvious. Examples are manufacturing cells, transport systems consisting of many vehicles, or robots working together to reach a common goal. As a consequence, the control architectures for these systems are no longer able to guarantee the well-known properties of modularity, fault tolerance, integrability, and extendibility. On that account, the former centralized architectures are often replaced by distributed control mechanisms. Over the last three years, the University of Karlsruhe has developed an infrastructure for distributed robot systems. In this way, the problems of communication, coordination, and cooperation between the system components, as well as task allocation, optimization, and deadlock avoidance, are managed by the architecture KAMARA (KArlsruher Multi Agent Robot Architecture) itself. Using these concepts, the planning system for the Karlsruhe Autonomous Mobile Robot KAMRO was realized in a very short time. To prove the general applicability of the infrastructure, the control architecture for the microrobots MINIMAN is being developed at IPR using these new concepts. One robot consists of two manipulators driven by three piezoactuators, and it can move by means of three piezolegs. In the application fields of these robots (medicine, biology, and chip production), it is often necessary that the microrobots cooperate to perform difficult tasks. Due to the complexity of the system, the interactions between the robots, and the possibility of deadlocks during task allocation, the KAMARA architecture was used to solve these problems in an efficient way. In this paper, the results of this work and some experiments with the new system are described.
https://doi.org/10.1142/9789812797698_0020
Robots operating in the same environment share common driving space and common ultrasonic sensing space. Based upon a common world model and synchronized clocks across the robots, we describe the different stages of cooperation that are investigated in the distributed robot system CoMRoS (Cooperative Mobile Robot Systems Stuttgart). Implicit cooperation occurs if several autonomous mobile robots are aware of each other by periodically exchanging their individual states and plans; at this level all robots behave symmetrically and there is no further coordination between them. Asynchronous cooperation arises if the preplanned trajectories or required sensing space of mobile robots overlap and the respective robots come to an arrangement with each other concerning resource allocation. Synchronous cooperation takes place if several robots coordinate their actions to fulfill a collective task. We present a mutual exclusion algorithm for coordinating the use of ultrasonic sensors between neighbouring robots. Furthermore, we concentrate on the effects that arise in the course of intersection passing. If a robot intends to pass through an intersection and its destination road is blocked by another robot, then the first robot must not enter the intersection. In case of cyclic dependencies, the involved robots are faced with a deadlock situation. After deadlock detection, the robots organize themselves into a vehicle formation and recover from the deadlock by moving along the directions of the dependency cycle until each robot reaches its predecessor's former configuration. While intersection passing in its regular form is an example of asynchronous cooperation, deadlock handling shows cooperative synchronous behaviour.
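Deadlock detection in the intersection scenario amounts to finding a cycle in the waits-for graph of the robots; a minimal sketch, where the graph encoding is an assumption for illustration:

```python
def find_dependency_cycle(waits_for):
    # waits_for maps each robot to the robot blocking its destination
    # road (or None).  A cycle in this graph is exactly a deadlock; the
    # robots on the cycle would then shift one position along it to recover.
    for start in waits_for:
        seen = []
        node = start
        while node is not None and node not in seen:
            seen.append(node)
            node = waits_for.get(node)
        if node is not None:
            return seen[seen.index(node):]  # the robots involved in the deadlock
    return None
```

The recovery described above corresponds to each robot on the returned cycle moving into its predecessor's former configuration.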
https://doi.org/10.1142/9789812797698_0021
An important issue that arises in the automation of many security, surveillance, and reconnaissance tasks is that of monitoring (or observing) the movements of targets navigating in a bounded area of interest. A key research issue in these problems is that of sensor placement — determining where sensors should be located to maintain the targets in view. In complex applications involving limited-range sensors, the use of multiple sensors dynamically moving over time is required. In this paper, we investigate the use of a cooperative team of autonomous sensor-based robots for the observation of multiple moving targets. We focus primarily on developing the distributed control strategies that allow the robot team to attempt to maximize the collective time during which each target is being observed by at least one robot team member in the area of interest. Our initial efforts on this problem address the aspects of distributed control in homogeneous robot teams with equivalent sensing and movement capabilities working in an uncluttered, bounded area. This paper first formalizes the problem and discusses related work. We then present a distributed approximate approach to solving this problem that combines low-level multi-robot control with higher-level control. The low-level control is described in terms of force fields emanating from the targets and the robots. The higher level control is presented in the ALLIANCE formalism, which provides mechanisms for fault tolerant cooperative control, and allows robot team members to adjust their low-level actions based upon the actions of their teammates. We then present the results of the ongoing implementation of our approach, both in simulation and on physical robots. To our knowledge, this is the first paper addressing this research problem that has been implemented on physical robot teams.
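The force-field flavour of the low-level control can be sketched as follows; the particular attraction/repulsion laws and gains are illustrative assumptions, not those of the ALLIANCE-based system described in the chapter.

```python
import math  # kept for extensions such as normalizing the force vector

def observation_force(robot, targets, teammates, k_att=1.0, k_rep=1.0):
    # Low-level control: targets attract the observing robot while
    # teammates repel it, spreading the team over the targets so that
    # each target tends to stay in view of at least one robot.
    fx = fy = 0.0
    for tx, ty in targets:
        fx += k_att * (tx - robot[0])
        fy += k_att * (ty - robot[1])
    for ox, oy in teammates:
        dx, dy = robot[0] - ox, robot[1] - oy
        d2 = dx * dx + dy * dy
        if d2 > 1e-12:
            fx += k_rep * dx / d2  # inverse-distance repulsion
            fy += k_rep * dy / d2
    return fx, fy
```

The higher-level (ALLIANCE) layer would modulate these low-level forces, e.g., by switching which targets a robot is responsible for based on its teammates' actions.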
https://doi.org/10.1142/9789812797698_0022
The goals of this joint U.S.-Mexico research project are threefold: to provide an understanding of, and means to ensure, that fielded robotic systems do not compete with other agents that are more effective at their designated task; to permit them to be successful competitors within the ecological system, capable of displacing less efficient agents; and to make them ecologically sensitive, so that agent-environment dynamics are well modeled and as predictable as possible whenever new robotic technology is introduced. Initial studies on neuroscientifically derived schema models of the praying mantis and the frog are reported; these have led to simulation studies and eventual robotic implementations that can provide guidance to neuroscientists, ethologists, and roboticists alike.
https://doi.org/10.1142/9789812797698_0023
In this paper, we develop a model of robot skill acquisition based on the nature of elementary skills, the characteristics of real robot systems, and the role of the human user, who acts as a teacher. This model describes skill acquisition as a process consisting of several phases that must all be adequately supported. We present methods and algorithms that allow automation of most of these phases, such that the involvement of the human user can be limited to the initial demonstration at the beginning and the high-level evaluation at the end of the process. The whole approach is related to human skill learning. Methods for the support of skill acquisition are presented, and examples of the acquisition of both manipulation and navigation skills are given. Finally, the strengths and limitations of the demonstration-based approach in general are discussed.
https://doi.org/10.1142/9789812797698_0024
Four important issues in modeling and control of robotic motions are pointed out, which are 1) effect of the use of gears with high reduction ratios, 2) physical scales of the concerned robotic system: length, mass, inertia moment, and time scale (velocity), 3) interaction with an object or the environment (contact force and friction), and 4) discontinuity caused by impact (at the velocity level) or static and Coulomb frictions (at the acceleration level). It is shown that robot dynamics can be expressed equivalently by a non-linear position-dependent circuit consisting of circuit modules as extensions of electric circuit elements. Then, the first three issues can be treated through such a circuit by input-output passivity which is an extended notion of impedance.
Discontinuity caused by static and Coulomb frictions can be coped with by construction of special circuit modules. This suggests a proposal of inertia-only (gravity/friction-free) robots, whose motions are subject to the law of inertia.
https://doi.org/10.1142/9789812797698_0025
The first generation of service robots has now reached the application stage with the advent of commercially available systems for cleaning floors and transporting items within indoor environments. Building on these first applications, the next generation of service robots will need manipulation capabilities in order to significantly enhance their range of applications. Manipulation of everyday objects requires the following capabilities: object identification, localization, grasping, and obstacle avoidance (for the robot arm). In this paper, we briefly describe alternative implementation strategies for these capabilities. Each approach is rated with respect to its desirability, the state of research in the area, and the current intensity of research.
https://doi.org/10.1142/9789812797698_0026
Automated Guided Vehicles (AGVs) can provide an effective alternative to fixed conveyor systems. Through the use of a Supervisor Administration System, it is even possible to integrate an AGV-System into a transportation system that is based on stationary conveyor subsystems. The supervisor controls the traffic by using (1) a database that keeps track of resources, such as docking stations, doors, and road intersections; (2) operations research techniques, enhanced with knowledge of the vehicle properties and the environment's features, for avoiding deadlocks; and (3) specially taught manoeuvres for difficult situations. Incorporation of the interactively taught manoeuvres makes high-speed cooperation among vehicles possible. The drawback of this approach, however, is a high communication bandwidth required between the supervisor and vehicles, which limits the maximum number of vehicles. We propose to dramatically reduce this bandwidth requirement by increasing the use of sensors and built-in behaviors on the vehicles. We expect that we will need a rule-based system for selecting and combining sets of pretaught manoeuvres to perform specific tasks.