Leveraging Big Data for Grasping

We propose a new large-scale database of grasps applied to a large set of objects from numerous categories. The grasps are generated in simulation and annotated with the standard epsilon-metric and a new physics-metric. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Every grasp in the database carries both metrics and this representation.

We use crowdsourcing to analyze how well the two metrics correlate with grasp success as predicted by humans. The results confirm that the proposed physics-metric is a more consistent predictor of grasp success than the epsilon-metric. Furthermore, they support the hypothesis that human labels are not required for good ground-truth grasp data; instead, the physics-metric can be computed directly in simulation.

Registration for database information

We hope that this database will grow over time and that the community will contribute back. We therefore encourage people to subscribe to the database information service. We will send notification mails as soon as new data, code changes, or other changes related to the database are available.

Grasp Database Data & Software

Getting the Data & Software:
We provide the raw data and a basic python software package to visualize and interact with the data. To obtain the data and for more detailed information about the database, please visit our dedicated website. If you are interested in contributing to the database or the software packages, please do not hesitate to contact us.

Data:
(screenshot: a bottles grasp database HDF5 file inspected in vitables)
Each database is stored in the HDF5 file format. There are existing tools to inspect a database manually, e.g. vitables, shown on the right. In the software package we provide scripts to obtain the raw HDF5 files of the different versions of the database.
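Because the data is plain HDF5, it can also be explored programmatically with `h5py` instead of a GUI tool. The sketch below lists every dataset in a file; note that the file name and the group layout (`bottles/grasp_0000` with `epsilon_metric`, `physics_metric`, and `template` entries) are hypothetical illustrations, not the actual schema of the released database.

```python
# Sketch: inspecting a grasp database HDF5 file with h5py.
# The file name and group/dataset names are hypothetical illustrations.
import h5py
import numpy as np

def list_contents(path):
    """Recursively collect the paths of all datasets in an HDF5 file."""
    paths = []
    def visit(name, obj):
        if isinstance(obj, h5py.Dataset):
            paths.append(name)
    with h5py.File(path, "r") as f:
        f.visititems(visit)  # walks every group/dataset under the root
    return sorted(paths)

# Build a tiny toy file so the sketch is self-contained.
with h5py.File("toy_grasps.h5", "w") as f:
    grp = f.create_group("bottles/grasp_0000")
    grp.create_dataset("epsilon_metric", data=0.12)
    grp.create_dataset("physics_metric", data=0.87)
    grp.create_dataset("template", data=np.zeros((16, 16)))

print(list_contents("toy_grasps.h5"))
```

The same `visititems` traversal works unchanged on the real database files, whatever their internal layout.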

Software:
(screenshot: the grasp visualization GUI)
To provide a good user experience we decided to ship a docker image, which has been tested successfully on Ubuntu 12.04/14.04 and Mac. The image comes with a pre-compiled software package to visualize grasps stored in the database and to store the grasp templates.

All dependencies needed to compile our code are listed in the docker build description, making it straightforward to build the code outside of this environment as well.

The source code is also provided and shows how to read data from the HDF5 files using our python library.
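As a rough illustration of what such read access looks like, the following sketch loads the two metric annotations and the shape template of a single grasp with plain `h5py`. The grasp path and dataset names are hypothetical and stand in for whatever layout the released library actually uses.

```python
# Sketch: reading one grasp's annotations from an HDF5 file with h5py.
# Group and dataset names are hypothetical, for illustration only.
import h5py
import numpy as np

def load_grasp(path, grasp_path):
    """Return (epsilon_metric, physics_metric, template) for one grasp."""
    with h5py.File(path, "r") as f:
        grp = f[grasp_path]
        eps = float(grp["epsilon_metric"][()])    # scalar dataset
        phys = float(grp["physics_metric"][()])   # scalar dataset
        template = np.array(grp["template"])      # local shape representation
    return eps, phys, template

# Build a tiny toy file so the example runs on its own.
with h5py.File("toy_db.h5", "w") as f:
    g = f.create_group("bottles/grasp_0001")
    g["epsilon_metric"] = 0.05
    g["physics_metric"] = 0.91
    g["template"] = np.ones((4, 4))

eps, phys, template = load_grasp("toy_db.h5", "bottles/grasp_0001")
print(eps, phys, template.shape)
```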

Grasp Database Examples

The grasp database consists of 87 categories. Below are some example objects for the three groups: small, medium, and large objects. We want to stress that many more models are available in our database; the subset shown here is chosen to illustrate the variety of models we use.

Small:
(images: example small objects from the database)

Medium:
(images: example medium objects from the database)

Large:
(images: example large objects from the database)

Mechanical Turk

(screenshot: example mechanical turk task)
For each experiment we had pre-labeled ground truth images, as shown below. Ground Truth Positive and Ground Truth Negative examples are shown to the user throughout the whole experiment, as illustrated in the picture on the right-hand side. Please click on the right-hand side image to open an example mechanical turk webpage.
The Ground Truth Reject pictures are used to screen out mechanical turk workers, so that the collected data remains consistent. Please click on the corresponding images to download all pre-labeled images.

Bottles:
Ground Truth Positive data
Ground Truth Negative data
Ground Truth Reject data
All:
Ground Truth Positive data
Ground Truth Negative data
Ground Truth Reject data

Real Data Analysis

Coming soon, not yet available.