Localizing and tracking an in-hand object is a challenging task in robotics. Vision has been the dominant modality for estimating an object's pose, but vision-based approaches are fragile when the object is occluded by the robot arm and hand. To this end, we propose DTI-Tracker, a tactile-based approach that tracks the object's pose via Dynamic Tactile Interaction and formalizes in-hand tracking as a filtering problem. An Extended Kalman Filter (EKF) estimates the in-hand object pose by exploiting high-spatial-resolution tactile feedback. Starting from an initial estimation error, the proposed approach rapidly converges to the true pose, and a statistical evaluation demonstrates its robustness. We evaluate the method in physics simulation and on a real multi-fingered grasping setup, with both static and movable objects. The proposed method is a potential tool to foster future research on dexterous manipulation with multi-fingered robotic hands.
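For intuition, the sketch below shows a minimal EKF predict/update pair in Python. The state layout, the quasi-static motion model, and the measurement function handles (`h`, `H_jac`) are illustrative assumptions for the sketch, not the formulation used by DTI-Tracker.

```python
import numpy as np

# Minimal EKF sketch for in-hand pose tracking from tactile measurements.
# State x could be, e.g., an object pose vector; z could be stacked contact
# locations predicted by a measurement model h(x). All of this is assumed
# for illustration only.

def ekf_predict(x, P, Q):
    """Prediction step with a static (identity) motion model,
    a common assumption when the grasped object is quasi-static."""
    return x, P + Q

def ekf_update(x, P, z, h, H_jac, R):
    """One EKF correction step.

    x : (n,)   current pose estimate
    P : (n,n)  estimate covariance
    z : (m,)   tactile measurement vector
    h : callable, predicted measurement h(x)
    H_jac : callable, Jacobian of h evaluated at x
    R : (m,m)  measurement noise covariance
    """
    H = H_jac(x)
    y = z - h(x)                        # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```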
Adaptive and cooperative control of arms and fingers for natural object reaching and grasping, without explicit 3D geometric pose information, is observed in humans. In this study, an image-based visual servoing controller, inspired by human grasping behavior, is proposed for an arm-gripper system. A large-scale dataset is constructed in PyBullet simulation, comprising paired images and arm-gripper control signals that mimic expert grasping behavior. Leveraging this dataset, a network is trained directly to derive a control policy that maps images to cooperative grasp control. The learned synergy grasping policy is then applied directly to a real robot with the same configuration. Experimental results demonstrate the effectiveness of the algorithm. Videos can be found at https://www.bilibili.com/video/BV1tg4y1b7Qe/.
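As a rough illustration of this kind of image-to-control learning, the PyTorch sketch below regresses arm-gripper commands from images via behavior cloning on expert pairs. The network architecture, control dimension, and loss are assumptions made for the sketch, not the paper's actual design.

```python
import torch
import torch.nn as nn

class GraspPolicy(nn.Module):
    """Small CNN mapping an RGB image to an arm-gripper control vector."""
    def __init__(self, control_dim=7):          # e.g., 6-DoF arm + gripper (assumed)
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, control_dim)

    def forward(self, img):                      # img: (B, 3, H, W)
        return self.head(self.backbone(img))

def bc_step(policy, optimizer, images, expert_controls):
    """One supervised (behavior-cloning) step regressing the expert signal."""
    pred = policy(images)
    loss = nn.functional.mse_loss(pred, expert_controls)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```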
The integration of robotics into domestic environments poses significant challenges due to the dynamic and varied nature of these settings. This paper introduces a new framework that combines vision-guided object recognition with adaptive grasping policies learned from human demonstrations. By harnessing computer vision technology, our system employs deep learning algorithms, particularly Convolutional Neural Networks (CNNs), to precisely detect and classify household objects. Simultaneously, the system uses imitation learning to refine grasping policies, enabling the robotic manipulator to dynamically adapt to new target objects. We validated our framework through a series of experimental setups that simulate typical kitchen tasks, such as manipulating utensils and preparing ingredients. These tasks, which primarily involve picking up and placing objects, served as practical tests for our system. The results demonstrate the system’s ability to effectively recognize a broad array of objects and adapt its grasping policies, thereby enhancing operational efficiency.
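A hedged sketch of such a two-stage pipeline is shown below: a CNN classifier identifies the household object, and an imitation-learned policy conditioned on the predicted class outputs a grasp command. The class count, feature sizes, and conditioning scheme are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ObjectClassifier(nn.Module):
    """CNN that classifies the household object in the input image."""
    def __init__(self, num_classes=20):          # number of classes is assumed
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, img):
        return self.classifier(self.features(img))

class ClassConditionedGraspPolicy(nn.Module):
    """Maps image features plus a one-hot object class to a grasp command;
    in this sketch it would be trained by imitation on demonstration data."""
    def __init__(self, num_classes=20, control_dim=7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + num_classes, 64), nn.ReLU(),
            nn.Linear(64, control_dim),
        )

    def forward(self, img, class_onehot):
        feat = self.encoder(img)
        return self.head(torch.cat([feat, class_onehot], dim=1))
```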