The sense of touch is one of the most powerful senses humans have. It provides rich information about the environment we make contact with. It is so powerful that visually impaired people explore the world mainly through touch. Deaf-blind people trained in Tadoma can even do tactile lipreading, picking up vibrations from the lips and throat with their fingers. Even sighted humans depend heavily, and often unconsciously, on tactile sensing for manipulation and grasping in visually inaccessible environments. One can grab a coin or a key from a pocket by touch alone, without ever looking.

Project BlindGrasp aims to bring similar skills - human-like, touch-based grasping - to robots.

Unlike vision systems, tactile sensing is not widely used in robots. Perhaps our heavy dependence on vision has made us ignore the rich information that tactile sensing provides. The unavailability of high-resolution tactile sensors may also have been a reason. Most existing approaches to tactile sensing are confined to detecting slip, assessing grasp quality, or estimating mechanical properties such as the hardness of objects.

I came to know about GelSight at ICRA 2017 from a few MIT PhD students (Wenzhen Yuan, Greg Izzat & Geronimo Mirano) working on it. They were at ICRA to present their research on using GelSight to augment the 3D point clouds generated by vision systems.

My fingerprint on the GelSight gripper

Research Areas:

BlindGrasp needs research into both the mechanical structure of the GelSight gripper and the software algorithms that make sense of its data. The following are the main areas of concentration.

1. GelSight Finger

The original GelSight gripper has a planar sensing area of 24x18mm. It works well for tasks like estimating the hardness of objects and generating point clouds for object recognition, but it is inefficient for exploration in clutter, since it can only sense forces along the normal direction. The traditional two-finger gripper is good for grasping, but would perform poorly when pushing through clutter because of its non-streamlined shape. Human fingers, by contrast, are shaped like elliptical cylinders, with the most sensitive area on the flattened inner surface. So one of the hardware enhancements would be to model and design a curved GelSight sensor and gripper, as illustrated in the figure below.

Tactile exploration in clutter would require a curved GelSight gripper


In addition to the mechanical design, this would also require a non-linear remapping of the touch information on the curved surface onto the planar camera sensor.
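As a rough first model, the curved sensing surface can be treated as part of a cylinder, with each camera pixel mapped to a point on its wall. The Python sketch below illustrates the idea; the cylinder radius, image size, and angular coverage are assumptions for illustration, not measurements of a real sensor.

import numpy as np

def cylinder_surface_points(width, height, radius, theta_span, length):
    """Map each pixel (u, v) of the planar camera image to a 3D point
    on the inner wall of a cylindrical gel surface.

    Assumptions (hypothetical, for illustration):
    - the camera sits on the cylinder axis and sees an angular span
      theta_span (radians) of the wall across image columns,
    - image rows map linearly to the axial coordinate z.
    """
    u = np.arange(width)
    v = np.arange(height)
    theta = (u / (width - 1) - 0.5) * theta_span   # column -> angle
    z = (v / (height - 1) - 0.5) * length          # row -> axial position
    th, zz = np.meshgrid(theta, z)
    x = radius * np.cos(th)
    y = radius * np.sin(th)
    return np.stack([x, y, zz], axis=-1)           # (H, W, 3) points

# Example: a 256x256 image covering 90 degrees of a 10mm-radius finger.
points = cylinder_surface_points(256, 256, radius=0.010,
                                 theta_span=np.pi / 2, length=0.024)

A measured indentation depth d at pixel (u, v) would then shift the corresponding surface point radially inward to radius - d.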

2. Fingerprint-like Features

The original GelSight depends on features on the object to generate tactile data from the camera, so it may fail on featureless smooth surfaces like glass. Adding ridges similar to human fingerprints on the outer sensing surface of GelSight can help detect contact with such featureless surfaces. The ridges would also generate vibrations when the finger is moved over a surface, giving details about the surface quality and texture of the object.
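A rough intuition for how such vibrations would encode texture: if the finger slides at speed v over a surface with spatial period lambda, the ridges are excited at frequency f = v / lambda. The sketch below recovers a texture period from a synthetic vibration signal with an FFT; the signal and all parameters are made up for illustration.

import numpy as np

fs = 1000.0            # sampling rate of the vibration signal (Hz), assumed
speed = 0.02           # sliding speed of the finger (m/s), assumed
t = np.arange(0, 1.0, 1.0 / fs)

# Hypothetical vibration signal: a 40 Hz ridge excitation plus noise,
# standing in for e.g. the mean intensity of the gel image over time.
signal = np.sin(2 * np.pi * 40.0 * t) + 0.3 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_dominant = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin

texture_period = speed / f_dominant               # lambda = v / f
print(f"dominant frequency: {f_dominant:.1f} Hz, "
      f"texture period: {texture_period * 1000:.2f} mm")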


3. Tactile Exploration and Non-prehensile Manipulation Using Deep Reinforcement Learning

This involves developing a tactile exploration policy for the control of a manipulator. The novel element is to use a deep reinforcement learning agent to let the robot learn how to explore and reach the goal object. The agent is given a positive reward when it makes contact with the goal object. This would also involve research into new reinforcement learning agents that can learn from high-dimensional observations and sparse rewards. The exploration policy would generate control commands (joint torques) for the manipulator based on tactile feedback, so that it can search through a cluttered environment and pick up the desired object.
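As a minimal sketch of this setup, assuming a pybullet scene where the manipulator (robot_id) and the goal object (coin_id) are already loaded, and get_tactile_image is a placeholder for the simulated sensor readout:

import pybullet as p

JOINTS = list(range(7))   # the 7 joints of a Kuka iiwa

def step(action, robot_id, coin_id, get_tactile_image):
    """One control step of the hypothetical exploration environment.

    action: joint torques for the manipulator.
    """
    # Note: pybullet's default velocity motors must be disabled once at
    # setup (VELOCITY_CONTROL with zero forces) for torque control to work.
    p.setJointMotorControlArray(robot_id, JOINTS,
                                controlMode=p.TORQUE_CONTROL,
                                forces=action)
    p.stepSimulation()

    obs = get_tactile_image()   # high-dimensional tactile observation

    # Sparse reward: +1 only when the finger touches the goal object.
    contacts = p.getContactPoints(bodyA=robot_id, bodyB=coin_id)
    reward = 1.0 if contacts else 0.0
    done = reward > 0.0
    return obs, reward, done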

Once the robot is able to explore the environment and successfully reach the goal object, the next phase of research should focus on how to pick it up without losing contact, and thereby the positional information of the object. This would also involve developing novel physics-based grasp planning algorithms involving non-prehensile manipulation.


Simulation Environment

As I don't have access to a good-quality manipulator or tactile sensors, the first phase is planned to be done entirely in simulation. The Bullet physics engine was chosen as it provides the best customizability and functionality. The simulation environment consists of a Kuka iiwa manipulator, a high-resolution tactile sensor fitted to the gripper, and a tray filled with small spherical marbles. A couple of gold coins are buried under the marbles. The task for the robot is to explore the tray of marbles and pick up a coin using tactile senses alone.
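A minimal pybullet sketch of such a scene might look like the following; the marble count, sizes, and coin dimensions are arbitrary choices, while the Kuka iiwa and tray models ship with pybullet_data:

import numpy as np
import pybullet as p
import pybullet_data

p.connect(p.GUI)
p.setAdditionalSearchPath(pybullet_data.getDataPath())
p.setGravity(0, 0, -9.81)

p.loadURDF("plane.urdf")
robot = p.loadURDF("kuka_iiwa/model.urdf", [0, 0, 0], useFixedBase=True)
tray = p.loadURDF("tray/traybox.urdf", [0.6, 0, 0])

# A flat cylinder stands in for the gold coin, placed at the tray bottom
# so the marbles dropped afterwards bury it.
coin_shape = p.createCollisionShape(p.GEOM_CYLINDER, radius=0.015, height=0.003)
coin = p.createMultiBody(baseMass=0.02,
                         baseCollisionShapeIndex=coin_shape,
                         basePosition=[0.6, 0, 0.03])

# Fill the tray with small marbles dropped from above.
marble_shape = p.createCollisionShape(p.GEOM_SPHERE, radius=0.01)
rng = np.random.default_rng(0)
for _ in range(150):
    x = 0.6 + rng.uniform(-0.1, 0.1)
    y = rng.uniform(-0.1, 0.1)
    p.createMultiBody(baseMass=0.005,
                      baseCollisionShapeIndex=marble_shape,
                      basePosition=[x, y, 0.2 + rng.uniform(0, 0.1)])

# Let everything settle.
for _ in range(500):
    p.stepSimulation()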

Yes, there is a hidden gold coin under the marbles; the robot has to dig through them, find the coin, and pick it up.


This would be a good scenario to bring out the strengths of tactile exploration, as it would be impossible to detect the coin with vision while it is buried under the marbles. The robot will have to dig/poke its tactile finger into the marbles, move around, and explore until it makes contact with the gold coin.


Update-1: Nov-22-2017: Simulation of the Original Planar GelSight Sensor

I tried to simulate GelSight right after I got back from the conference. The initial approach was to simulate it in Drake, as suggested by Greg, but installing Drake and getting it working was a real mess. So I tried Gazebo, but that did not turn out well either, since I could not get the contact sensing to work reliably. Then I tried the Bullet physics engine. It has soft-body simulation, which I thought would be useful for simulating the elastomer of GelSight, but the soft-body simulation turned out to be basic and in need of much more development. So I modelled GelSight indirectly using the rayTest functionality in Bullet, which returns the point where a ray first makes contact with a solid body, from which the depth can be computed.

Following is a video of the simulated GelSight gripper. The Kuka iiwa has a WSG 50 gripper fitted with a GelSight sensor. The sensor has a sensing area of 24x24mm and a resolution of 256x256 pixels over the planar sensing area. It outputs standard ROS 3D point cloud data, which is displayed in RViz. The simulation was run on my laptop, and the point cloud could be generated at 5Hz.
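Stripped down, the indirect model amounts to casting a grid of rays through the sensing area and reading back the hit distances. A reduced sketch (a 64x64 grid instead of the full 256x256, since pybullet caps the ray batch size by default; the sensor pose and gel thickness are assumptions):

import numpy as np
import pybullet as p

def gelsight_depth_image(origin_z=0.05, size=0.024, n=64):
    """Approximate a planar GelSight readout with a grid of downward rays.

    Returns an (n, n) array of distances (metres) along each ray to the
    first contact. The sensor is assumed to lie in the XY plane at height
    origin_z, looking down through a 5mm-thick gel; all numbers are
    illustrative.
    """
    xs = np.linspace(-size / 2, size / 2, n)
    ys = np.linspace(-size / 2, size / 2, n)
    ray_from, ray_to = [], []
    for x in xs:
        for y in ys:
            ray_from.append([x, y, origin_z])
            ray_to.append([x, y, origin_z - 0.005])   # 5mm gel thickness

    results = p.rayTestBatch(ray_from, ray_to)
    # Each result is (objectUid, linkIndex, hitFraction, hitPos, hitNormal);
    # hitFraction is 1.0 when the ray hits nothing.
    depth = np.array([r[2] for r in results]).reshape(n, n) * 0.005
    return depth

The resulting depth grid can then be converted to a 3D point cloud and published as a standard ROS PointCloud2 message for display in RViz.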


The next steps would be to model, simulate, and prototype the curved GelSight gripper. At the same time, I have to strengthen my reinforcement learning skills. Follow me on Twitter or check back later for updates.