
  • article · No Access

    INTEGRATING WALKING AND VISION TO INCREASE HUMANOID AUTONOMY

    Aiming at building versatile humanoid systems, we present in this paper the real-time implementation of behaviors that integrate walking and vision to achieve general functionalities. The paper describes how real-time — or high-bandwidth — cognitive processes can be obtained by combining vision with walking. The central point of our methodology is the use of appropriate models to reduce the complexity of the search space. We describe the models introduced in the different blocks of the system and their relationships: walking pattern generation, self-localization and map building, real-time reactive vision behaviors, and planning.

  • article · No Access

    Object-Based Visual Servoing for Autonomous Mobile Manipulators

    This paper proposes an object-based vision (O-BV) system to implement visual servoing for autonomous mobile manipulators using two charge-coupled device (CCD) cameras. A conventional stereo vision (C-SV) system estimates depth from the disparity between the two camera images of the same object; however, disparity becomes small at long distances and is then an unreliable depth cue. To resolve this problem, in the proposed O-BV system each camera tracks the object independently, and the angles of the two cameras are used to estimate the distance to the object. This depth estimation technique allows an autonomous mobile robot to approach a target object precisely. The O-BV system is compared experimentally to the C-SV system in terms of computing time and depth estimation accuracy. The two cameras, mounted on top of the autonomous mobile manipulator, are also used for visual servoing so that the manipulator can approach a target object precisely. The experiments demonstrate that fast and precise depth estimation is a critical factor for successful visual servoing.
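
The angle-based depth estimation described above amounts to a simple triangulation: with a known baseline between the two cameras and each camera's pan angle toward the tracked object, the depth follows from elementary trigonometry. The sketch below is a minimal illustration; the function name, baseline value, and angle convention (angles measured between the baseline and each line of sight) are assumptions, not taken from the paper.

```python
import math

def depth_from_pan_angles(baseline_m, theta1, theta2):
    """Triangulate depth from two pan angles (radians), each measured
    between the camera baseline and that camera's line of sight.

    Object at (x, z) with cameras at x=0 and x=baseline on the z=0 line:
        tan(theta1) = z / x,   tan(theta2) = z / (baseline - x)
    => z = baseline / (cot(theta1) + cot(theta2))
    """
    return baseline_m / (1.0 / math.tan(theta1) + 1.0 / math.tan(theta2))

# Object directly ahead of the midpoint of a 0.2 m baseline, 1 m away:
# both pan angles equal atan2(1.0, 0.1).
theta = math.atan2(1.0, 0.1)
print(depth_from_pan_angles(0.2, theta, theta))  # -> 1.0 (approximately)
```

Note that as the object recedes, both angles approach 90° and the cotangents approach zero, which is the angular counterpart of the vanishing disparity the paper describes.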

  • article · No Access

    Visual Servoing of Humanoid Dual-Arm Robot with Neural Learning Enhanced Skill Transferring Control

    This paper presents a novel combination of visual servoing (VS) control and neural network (NN) learning on a humanoid dual-arm robot. A VS control system is built using stereo vision to obtain the 3D point cloud of a target object. A least-squares-based method is proposed to reduce the stochastic error in workspace calibration. An NN controller is designed to compensate for the effect of uncertainties in payload and other parameters (both internal and external) during tracking control. In contrast to conventional NN controllers, a deterministic learning technique is utilized in this work to enable the learned neural knowledge to be reused before the current dynamics change. A skill transfer mechanism is also developed to apply the knowledge learned by one arm to the other, increasing the neural learning efficiency. The tracked trajectory of the object is used to provide the target position to the coordinated dual arms of a Baxter robot in the experimental study. Robotic implementations have demonstrated the efficiency of the developed VS control system and verified the effectiveness of the proposed NN controller with its knowledge-reuse and skill-transfer features.

  • article · No Access

    Learning an Image-Based Visual Servoing Controller for Object Grasping

    Adaptive and cooperative control of arms and fingers for natural object reaching and grasping, without explicit 3D geometric pose information, is observed in humans. In this study, an image-based visual servoing controller, inspired by human grasping behavior, is proposed for an arm-gripper system. A large-scale dataset is constructed using PyBullet simulation, comprising paired images and arm-gripper control signals mimicking expert grasping behavior. Leveraging this dataset, a network is trained directly to derive a control policy that maps images to cooperative grasp control. Subsequently, the learned synergy grasping policy is applied directly to a real robot with the same configuration. Experimental results demonstrate the effectiveness of the algorithm. Videos can be found at https://www.bilibili.com/video/BV1tg4y1b7Qe/.

  • article · No Access

    PREDICTING UNKNOWN MOTION FOR MODEL INDEPENDENT VISUAL SERVOING

    Prediction in real-time image sequences is a key feature of visual servoing applications. It is used to compensate for the time delay introduced by the image feature extraction process in the visual feedback loop. In order to track targets in three-dimensional space in real time with a robot arm, the target's movement and the robot end-effector's next position are predicted from the previous movements. A modular prediction architecture is presented, based on the Kalman filtering principle. The Kalman filter is an optimal stochastic estimation technique that needs an accurate system model and is particularly sensitive to noise; its performance degrades with nonlinear systems and time-varying environments. Therefore, we propose an adaptive Kalman filter using the modular framework of a mixture of experts regulated by a gating network. The proposed filter has an adaptive state model that represents the system around its current state as closely as possible. Different realizations of these state-model-adaptive Kalman filters are organized according to the divide-and-conquer principle: they all participate in the global estimation, and a neural network mediates their outputs in an unsupervised manner and tunes their parameters. The performance of the proposed approach is evaluated in terms of precision, the capability to estimate and compensate for abrupt changes in target trajectories, and adaptation to time-varying parameters. The experiments show that, without the use of models (e.g. the camera model, robot kinematic model, and system parameters) and without any prior knowledge of the target's movements, the predictions compensate for the time delay and reduce the tracking error.
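
As a baseline for the architecture described above, a single (non-adaptive) constant-velocity Kalman filter for predicting a target's position along one axis can be sketched as follows. The mixture-of-experts gating and neural parameter tuning of the paper are not reproduced here; the class name, noise values, and simplified diagonal process noise are illustrative assumptions.

```python
class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter; state = [position, velocity]."""

    def __init__(self, dt, q=1e-3, r=1e-2):
        self.dt = dt
        self.x = [0.0, 0.0]                  # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]    # estimate covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self):
        """Propagate one step ahead: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]."""
        dt = self.dt
        pos, vel = self.x
        self.x = [pos + dt * vel, vel]
        (p00, p01), (p10, p11) = self.P
        self.P = [[p00 + dt * (p01 + p10) + dt * dt * p11 + self.q, p01 + dt * p11],
                  [p10 + dt * p11, p11 + self.q]]
        return self.x[0]                     # predicted target position

    def update(self, z):
        """Correct with a position-only measurement z (H = [1, 0])."""
        s = self.P[0][0] + self.r            # innovation covariance
        k0 = self.P[0][0] / s                # Kalman gain
        k1 = self.P[1][0] / s
        innov = z - self.x[0]
        self.x = [self.x[0] + k0 * innov, self.x[1] + k1 * innov]
        (p00, p01), (p10, p11) = self.P
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
```

In the delay-compensation setting of the paper, `predict()` would be called each frame to pre-position the end-effector, with `update()` applied once the (delayed) feature extraction delivers the measurement.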

  • article · No Access

    Developments in Visual Servoing for Mobile Manipulation

    Unmanned Systems, 20 Jun 2013

    A recent trend in mobile robotics is to integrate visual information into feedback control to facilitate autonomous grasping and manipulation. The result is a visual servo system, which is quite beneficial for autonomous mobile manipulation. Owing to its mobility, it has wider application than traditional visual servoing with fixed-base manipulators. In this paper, the state of the art of vision-guided robotic applications is presented along with the associated hardware. Next, the two classical approaches to visual servoing, image-based visual servoing (IBVS) and position-based visual servoing (PBVS), are reviewed, and their advantages and drawbacks when applied to a mobile manipulation system are discussed. A general concept for modeling a visual servo system is demonstrated, and some challenges in developing visual servo systems are discussed. Finally, a practical mobile manipulation system developed for search-and-rescue and homecare robotics applications is introduced.
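
The classical IBVS idea reviewed above drives the robot with a command proportional to the image-feature error e = s - s* (the well-known law v = -lambda * L+ e). A minimal 2-DOF sketch, with the interaction matrix folded into a scalar gain; the function name, gain, and pixel values are assumptions for illustration.

```python
def ibvs_step(feature_px, desired_px, gain=0.5):
    """One step of a proportional IBVS law: the commanded velocity is
    proportional to the image-feature error e = s - s* (v = -lambda * e),
    with the interaction matrix absorbed into the scalar gain for a
    simple 2-DOF (e.g. pan/tilt) case."""
    ex = feature_px[0] - desired_px[0]
    ey = feature_px[1] - desired_px[1]
    return -gain * ex, -gain * ey

# Closed-loop simulation with identity feature dynamics: the pixel
# error decays geometrically toward the desired feature position.
s = (100.0, -40.0)
for _ in range(20):
    vx, vy = ibvs_step(s, (0.0, 0.0))
    s = (s[0] + vx, s[1] + vy)
```

The exponential error decay enforced by this law is the standard convergence argument for IBVS; the practical drawbacks discussed in the paper (depth dependence of the interaction matrix, local minima) appear once the full matrix replaces the scalar gain.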

  • article · No Access

    Dynamic Visual Servoing for a Quadrotor Using a Virtual Camera

    Unmanned Systems, 01 Jan 2017

    This paper presents a dynamic image-based visual servoing (IBVS) control law for a quadrotor unmanned aerial vehicle (UAV) equipped with a single fixed on-board camera. The motion control problem is to regulate the relative position and yaw of the vehicle to a moving planar target located within the camera's field of view. The control law is termed dynamic as it is based on the dynamics of the vehicle. To simplify the kinematics and dynamics, the control law relies on the notion of a virtual camera and on image moments as visual features. The closed loop is proven to be globally asymptotically stable for a horizontal target. In the case of nonhorizontal targets, we modify the control using a homography decomposition. Experimental and simulation results demonstrate the control law's performance.

  • article · No Access

    A Tailless Flapping Wing MAV Performing Monocular Visual Servoing Tasks

    Unmanned Systems, 19 Aug 2020

    In the field of robotics, a major challenge is achieving high levels of autonomy with small vehicles that have limited mass and power budgets. The main motivation for designing such small vehicles is that, compared to their larger counterparts, they have the potential to be safer, and hence to be available and work together in large numbers. One of the key components in micro-robotics is efficient software design that optimally utilizes the available computing power. This paper describes the computer vision and control algorithms used to achieve autonomous flight with a 30 g tailless flapping-wing robot, which participated in the indoor micro air vehicle competition at the International Micro Air Vehicle Conference and Competition (IMAV 2018). Several tasks are discussed: line following, and circular gate detection and fly-through. The emphasis throughout this paper is on augmenting traditional techniques to make these methods work with limited computing power while obtaining robust behavior.

  • article · No Access

    Robot-Assisted Vascular Shunt Insertion with the dVRK Surgical Robot

    Vascular shunt insertion is a common surgical procedure performed to temporarily restore blood flow to damaged tissues. It usually requires a surgeon and a surgical assistant. We consider three scenarios: (1) a surgeon is available locally; (2) a remote surgeon is available via teleoperation; (3) no surgeon is available. In each scenario, a minimally invasive da Vinci surgical-assistant robot operates in a different mode, either by teleoperation or automation. Robotic assistance for this procedure is challenging due to precision requirements and control uncertainty, and the role of the robot depends on the availability of a human surgeon. We propose a trimodal framework for vascular shunt insertion assisted by a da Vinci Research Kit (dVRK) robotic surgical assistant (RSA). To support further study by the community, we also present a physics-based simulated environment for shunt insertion built on top of the NVIDIA Isaac ORBIT simulator. We collect a large dataset of trajectories for the shunt insertion environment using ORBIT and replay these trajectories to show the simulator's realism, showcasing the possibility for future work to use the simulator for policy learning. Physical experiments demonstrate a success rate of 65–100% for mode (1), 100% for mode (2), and 75–95% for mode (3) across vessel phantoms with different sizes, colors, and material properties. For the dataset and videos, see https://sites.google.com/berkeley.edu/ravsi.

  • chapter · No Access

    MOTION PLANNING TO CATCH A MOVING OBJECT

    Visual servoing is a robotics technique that attracts many research projects due to its large number of unsolved problems. One of these unsolved issues is the low speed of the visual servoing task; this paper therefore presents a high-speed image acquisition and processing system that, combined with a powerful Cartesian robot, can be used to catch flying objects. To estimate the trajectory of the moving/flying object (a ball in this research), two Kalman filters with different kinematic models (a constant-velocity model for the horizontal movement and a constant-acceleration model for the vertical movement) are used. In this work, a large number of experiments are carried out to check the reliability and robustness of the setup and the algorithms presented.
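
The two motion models above (constant horizontal velocity, constant vertical acceleration under gravity) admit a closed-form catch-point prediction once the filters provide position and velocity estimates. A hypothetical sketch; the function name, catch height, and the assumption that the ball actually reaches the catch height are illustrative choices, not details from the chapter.

```python
def predict_catch_point(x0, vx, z0, vz, g=9.81, z_catch=0.0):
    """Predict where and when a ball crosses the catch height, assuming
    constant horizontal velocity and constant vertical acceleration -g
    (matching the two kinematic models assigned to the Kalman filters).
    Assumes the ball does reach z_catch (real discriminant)."""
    # Solve z0 + vz*t - 0.5*g*t**2 = z_catch for the descending root.
    a, b, c = -0.5 * g, vz, z0 - z_catch
    t = (-b - (b * b - 4 * a * c) ** 0.5) / (2 * a)
    return x0 + vx * t, t

# Ball released 1 m above the catch plane, moving at 3 m/s horizontally
# and 2 m/s upward: the robot aims at the returned x at time t.
x, t = predict_catch_point(x0=0.0, vx=3.0, z0=1.0, vz=2.0)
```

In a high-speed loop, this prediction would be recomputed every frame from the filters' latest state estimates, so early errors are refined as the ball approaches.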

  • chapter · No Access

    POSE ESTIMATION FOR GRASPING PREPARATION FROM STEREO ELLIPSES

    This paper describes an approach for real-time preparation of grasping tasks, based on the low-order moments of the target's shape on a stereo pair of images acquired by an active vision head. The objective is to estimate the 3D position and orientation of an object and of the robotic hand, by using computationally fast and independent software components. These measurements are then used for the two phases of a reaching task: (i) an initial phase whereby the robot positions its hand close to the target with an appropriate hand orientation, and (ii) a final phase where a precise hand-to-target positioning is performed using Position-Based Visual Servoing methods.
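
The low-order-moment pose estimate described above can be illustrated on a 2-D point set: the first moments give the centroid, and the central second moments give the in-plane orientation of the dominant axis. A minimal sketch; the function name and input format are assumptions, and the full stereo 3D reconstruction of the chapter is not reproduced.

```python
import math

def moments_pose(points):
    """Centroid and in-plane orientation of a 2-D point set from its
    low-order moments: first moments give (cx, cy); the central second
    moments mu20, mu02, mu11 give theta = 0.5 * atan2(2*mu11, mu20 - mu02)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    mu20 = sum((p[0] - cx) ** 2 for p in points) / n
    mu02 = sum((p[1] - cy) ** 2 for p in points) / n
    mu11 = sum((p[0] - cx) * (p[1] - cy) for p in points) / n
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return (cx, cy), theta

# Points along a 45-degree line: centroid (1, 1), orientation pi/4.
(cx, cy), theta = moments_pose([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)])
```

Computed independently in each image of the stereo pair, such centroids and orientations are exactly the kind of fast, per-camera measurements that the two-phase reaching strategy can fuse into a 3D position and orientation.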

  • chapter · No Access

    NON-SYMMETRIC MEMBERSHIP FUNCTION FOR FUZZY-BASED VISUAL SERVOING ONBOARD A UAV

    This paper presents the definition of non-symmetric membership functions for fuzzy controllers applied to a pan-and-tilt vision platform onboard an unmanned aerial vehicle (UAV). This improvement allows the controllers to adapt better to the nonlinearities present in a UAV. The implementation allows the UAV to follow objects in the environment using the Lucas-Kanade visual tracker, in spite of aircraft vibrations and the movements of both the objects and the aircraft. The update has been tested in real flights with an unmanned helicopter of the Computer Vision Group at the UPM, with very successful results, attaining a considerable reduction of the error during the tracking tests.
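
A non-symmetric triangular membership function of the kind described above can be written down directly: the left and right feet need not be equidistant from the peak, which is what lets the fuzzy controller respond differently on each side of the error (e.g. to asymmetric platform nonlinearities). A hypothetical sketch; the function name and the example breakpoints are illustrative only.

```python
def tri_membership(x, left, peak, right):
    """Non-symmetric triangular membership function: degree 1.0 at `peak`,
    falling linearly to 0.0 at `left` and `right`, which need not be
    equidistant from the peak (left < peak < right)."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

# Fuzzy set "error near zero" with a short left side and a long right side:
# membership rises over [-2, 0] and falls over [0, 5].
degree = tri_membership(2.5, left=-2.0, peak=0.0, right=5.0)  # -> 0.5
```

A full controller would evaluate several such overlapping sets per input and defuzzify the rule outputs; only the membership shape is sketched here.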

  • chapter · No Access

    AVOIDING TRACKING ERROR WITH ESTIMATION TECHNIQUES IN VISUAL SERVOING

    Field Robotics, 01 Aug 2011

    This paper focuses on the visual servoing of a mobile robot in dynamic environments. We assume a target with maneuvering capabilities, which can therefore be hidden from the camera by obstacles in the scene. Both problems must be taken into account in the control law to ensure correct servoing. The control law must consider the target's movement to keep the tracking error as small as possible. Moreover, it should handle visual loss (reconstruction of the visual signal in case of occlusion) and collision avoidance by estimating the obstacle motion. We present a strategy to avoid the tracking error due to the movement of the target itself.