GelSight Simulation for Sim2Real Learning

Abstract

Grasping and manipulation of objects are common in both domestic and industrial environments. Recent works exploring learning-based solutions have shown promising results on robotic manipulation tasks. One efficient approach for training such learning agents is to train them in a simulated environment and then deploy them on real robots (Sim2Real). Most current works leverage camera vision to facilitate such manipulation tasks; however, the camera view may be significantly occluded by the robot hand during manipulation. Tactile sensing is another important sensing modality that offers information complementary to vision and can compensate for the information lost to occlusion. However, the use of tactile sensing in Sim2Real research has been restricted by the lack of simulated tactile sensors in current simulation platforms. To bridge this gap, we introduce a novel approach for simulating a GelSight tactile sensor in the commonly used Gazebo simulator. Like the real GelSight sensor, the simulated sensor produces high-resolution images with an optical sensor that observes the interaction between the touched object and an opaque soft membrane. It can indirectly sense forces, geometry, texture and other properties of the object, and it enables research on Sim2Real learning with tactile sensing. Preliminary experimental results show that the simulated sensor can generate realistic outputs similar to those captured by a real GelSight sensor.
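To make the idea of rendering GelSight-like images from contact geometry concrete, the sketch below converts a membrane-indentation depth map into an RGB tactile image using per-channel directional lighting and Lambertian shading. This is a generic illustration of the technique, not the paper's actual pipeline; the function name, the light directions, and the shading model are all assumptions for illustration.

```python
import numpy as np

def render_tactile_image(depth, light_dirs=None):
    """Render a GelSight-like RGB image from a contact depth map.

    depth: 2D array of membrane indentation (deeper press = larger value).
    light_dirs: one light direction per colour channel; real GelSight
    sensors illuminate the membrane from several sides with coloured LEDs.
    (These defaults are hypothetical, chosen only for illustration.)
    """
    if light_dirs is None:
        light_dirs = np.array([
            [ 1.0,  0.00, 0.5],   # red light from one side
            [-0.5,  0.87, 0.5],   # green light from another
            [-0.5, -0.87, 0.5],   # blue light from a third
        ])
        light_dirs /= np.linalg.norm(light_dirs, axis=1, keepdims=True)

    # Surface gradients of the deformed membrane.
    gy, gx = np.gradient(depth)
    # Surface normals: n = (-gx, -gy, 1), normalised per pixel.
    normals = np.dstack([-gx, -gy, np.ones_like(depth)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Lambertian shading, one colour channel per light direction.
    return np.stack(
        [np.clip(normals @ d, 0.0, 1.0) for d in light_dirs], axis=-1
    )
```

In a Gazebo-based setup, the depth map would come from a simulated depth camera looking at the membrane; here any 2D array (e.g. a Gaussian bump for a pressed sphere) can be used to preview the rendered output.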

Publication
In ViTac: Integrating Vision and Touch for Multimodal and Cross-modal Perception Workshop, IEEE International Conference on Robotics & Automation (ICRA), Montreal