Background segmentation

  1. What and why
  2. Previous works
  3. How
  4. Result

What and Why

FingerVision is an optical tactile sensor [1] that provides an RGB image of the object held between the fingers, in addition to the contact forces estimated from its optical markers.

Since the sensor also captures the background along with the object, the object needs to be separated from the background.

Previous Works

To be updated later.

How

A deep learning approach is used to segment the object from the background. Training data is collected and manually labelled, and a U-Net based network is trained on it. Training achieves good accuracy within a few tens of minutes. Transfer learning methods could be used to quickly adapt the model to new environments.
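The sketch below shows what such a pipeline could look like in PyTorch. The network size, dataset layout, and training loop are illustrative assumptions rather than the exact setup used here: a small U-Net-style encoder-decoder with one skip connection, trained with a per-pixel binary cross-entropy loss against the manually labelled masks.

```python
# Minimal sketch of a U-Net-style binary segmentation network in PyTorch.
# The layer sizes, SmallUNet name, and training loop are illustrative
# assumptions, not the exact architecture or code used with FingerVision.
import torch
import torch.nn as nn


def double_conv(in_ch, out_ch):
    # Two 3x3 conv + ReLU blocks, the basic building block of U-Net.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )


class SmallUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1 = double_conv(3, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = double_conv(32, 16)   # 16 (skip) + 16 (upsampled) channels in
        self.head = nn.Conv2d(16, 1, 1)   # per-pixel object/background logit

    def forward(self, x):
        e1 = self.enc1(x)                 # full-resolution features, kept as skip
        e2 = self.enc2(self.pool(e1))     # half-resolution features
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)


def train(model, loader, epochs=10, lr=1e-3, device="cpu"):
    # Standard supervised training against the manually labelled masks.
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for epoch in range(epochs):
        for image, mask in loader:        # image: Bx3xHxW float, mask: Bx1xHxW in {0, 1}
            image, mask = image.to(device), mask.to(device)
            loss = loss_fn(model(image), mask.float())
            opt.zero_grad()
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

For a new environment, the same sketch could support transfer learning by loading previously trained weights into the model and fine-tuning for a few epochs, typically with a smaller learning rate and far fewer labelled images.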

Result

The segmentation works as expected.

The left side shows the raw sensor image and the right side shows the segmented output.
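As a rough illustration, the segmented output on the right could be produced from a raw frame as follows; segment_frame is a hypothetical helper built on the SmallUNet sketch above, not the code behind the figure.

```python
# Sketch of applying the trained network to one raw sensor frame and
# blacking out the background; helper name and thresholding are assumptions.
import torch


def segment_frame(model, frame_rgb):
    # frame_rgb: HxWx3 uint8 image from the sensor camera (H, W divisible by 2).
    x = torch.from_numpy(frame_rgb).float().permute(2, 0, 1).unsqueeze(0) / 255.0
    with torch.no_grad():
        logits = model(x)                            # 1x1xHxW object/background logits
    mask = (torch.sigmoid(logits)[0, 0] > 0.5).numpy()
    segmented = frame_rgb.copy()
    segmented[~mask] = 0                             # zero out background pixels
    return segmented, mask
```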
