# robonet
- **Description**:

RoboNet contains over 15 million video frames of robot-object interaction, taken
from 113 unique camera viewpoints.

- The actions are deltas in position and rotation of the robot end-effector,
  with one additional dimension of the action vector reserved for the gripper
  joint.

- The states lie in a Cartesian end-effector control space with restricted
  rotation, plus a gripper joint.

- **Additional Documentation**:
  [Explore on Papers With Code](https://paperswithcode.com/dataset/robonet)

- **Homepage**: <https://www.robonet.wiki/>

- **Source code**:
  [`tfds.datasets.robonet.Builder`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/datasets/robonet/robonet_dataset_builder.py)

- **Versions**:

  - **`4.0.1`** (default): No release notes.

- **Supervised keys** (see
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.

- **Citation**:
```bibtex
@article{dasari2019robonet,
  title={RoboNet: Large-Scale Multi-Robot Learning},
  author={Dasari, Sudeep and Ebert, Frederik and Tian, Stephen and
          Nair, Suraj and Bucher, Bernadette and Schmeckpeper, Karl
          and Singh, Siddharth and Levine, Sergey and Finn, Chelsea},
  journal={arXiv preprint arXiv:1910.11215},
  year={2019}
}
```
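All of the configs below load through the standard `tfds.load` API. As a sketch, the hypothetical helper below (not part of TFDS) maps a resolution and sample flag to the builder names listed on this page; the `tensorflow_datasets` import is deferred because the full configs download roughly 36 GiB on first use.

```python
def robonet_config(resolution=64, sample=True):
    """Builder name for a RoboNet variant (hypothetical convenience helper)."""
    assert resolution in (64, 128)
    base = 'robonet_sample' if sample else 'robonet'
    return f'robonet/{base}_{resolution}'


def load_robonet(resolution=64, sample=True):
    """Load the 'train' split; samples download ~120 MiB, full configs ~36 GiB."""
    import tensorflow_datasets as tfds  # deferred so the helper above stays importable
    # shuffle_files=False keeps the sample configs eligible for auto-caching.
    return tfds.load(robonet_config(resolution, sample), split='train',
                     shuffle_files=False)
```

For example, `load_robonet(64)` would fetch the default `robonet/robonet_sample_64` config described below.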
robonet/robonet_sample_64 (default config)
------------------------------------------

- **Config description**: 64x64 RoboNet Sample.

- **Download size**: `119.80 MiB`

- **Dataset size**: `183.04 MiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  Only when `shuffle_files=False` (train)

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 700      |

- **Feature structure**:
```
FeaturesDict({
    'actions': Tensor(shape=(None, 5), dtype=float32),
    'filename': Text(shape=(), dtype=string),
    'states': Tensor(shape=(None, 5), dtype=float32),
    'video': Video(Image(shape=(64, 64, 3), dtype=uint8)),
})
```
- **Feature documentation**:

| Feature  | Class        | Shape             | Dtype   | Description |
|----------|--------------|-------------------|---------|-------------|
|          | FeaturesDict |                   |         |             |
| actions  | Tensor       | (None, 5)         | float32 |             |
| filename | Text         |                   | string  |             |
| states   | Tensor       | (None, 5)         | float32 |             |
| video    | Video(Image) | (None, 64, 64, 3) | uint8   |             |
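The leading `None` in these shapes is the per-trajectory frame count, which varies between examples. The synthetic sketch below (fabricated values, and a made-up filename, purely for illustration) shows the resulting layout, assuming for simplicity that `actions`, `states`, and `video` share the same length:

```python
import numpy as np

T = 31  # hypothetical trajectory length; it varies per example (the `None` axis)
example = {
    'actions': np.zeros((T, 5), dtype=np.float32),   # one 5-D action per step
    'filename': 'traj0.hdf5',                        # made-up name for illustration
    'states': np.zeros((T, 5), dtype=np.float32),    # one 5-D state per step
    'video': np.zeros((T, 64, 64, 3), dtype=np.uint8),
}

# Under the equal-length assumption, the sequence features align frame by frame.
assert example['actions'].shape[0] == example['video'].shape[0]
```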
robonet/robonet_sample_128
--------------------------

- **Config description**: 128x128 RoboNet Sample.

- **Download size**: `119.80 MiB`

- **Dataset size**: `638.98 MiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 700      |

- **Feature structure**:
```
FeaturesDict({
    'actions': Tensor(shape=(None, 5), dtype=float32),
    'filename': Text(shape=(), dtype=string),
    'states': Tensor(shape=(None, 5), dtype=float32),
    'video': Video(Image(shape=(128, 128, 3), dtype=uint8)),
})
```
- **Feature documentation**:

| Feature  | Class        | Shape               | Dtype   | Description |
|----------|--------------|---------------------|---------|-------------|
|          | FeaturesDict |                     |         |             |
| actions  | Tensor       | (None, 5)           | float32 |             |
| filename | Text         |                     | string  |             |
| states   | Tensor       | (None, 5)           | float32 |             |
| video    | Video(Image) | (None, 128, 128, 3) | uint8   |             |
robonet/robonet_64
------------------

- **Config description**: 64x64 RoboNet.

- **Download size**: `36.20 GiB`

- **Dataset size**: `41.37 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 162,417  |

- **Feature structure**:
```
FeaturesDict({
    'actions': Tensor(shape=(None, 5), dtype=float32),
    'filename': Text(shape=(), dtype=string),
    'states': Tensor(shape=(None, 5), dtype=float32),
    'video': Video(Image(shape=(64, 64, 3), dtype=uint8)),
})
```
- **Feature documentation**:

| Feature  | Class        | Shape             | Dtype   | Description |
|----------|--------------|-------------------|---------|-------------|
|          | FeaturesDict |                   |         |             |
| actions  | Tensor       | (None, 5)         | float32 |             |
| filename | Text         |                   | string  |             |
| states   | Tensor       | (None, 5)         | float32 |             |
| video    | Video(Image) | (None, 64, 64, 3) | uint8   |             |
robonet/robonet_128
-------------------

- **Config description**: 128x128 RoboNet.

- **Download size**: `36.20 GiB`

- **Dataset size**: `144.90 GiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  No

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 162,417  |

- **Feature structure**:
```
FeaturesDict({
    'actions': Tensor(shape=(None, 5), dtype=float32),
    'filename': Text(shape=(), dtype=string),
    'states': Tensor(shape=(None, 5), dtype=float32),
    'video': Video(Image(shape=(128, 128, 3), dtype=uint8)),
})
```
- **Feature documentation**:

| Feature  | Class        | Shape               | Dtype   | Description |
|----------|--------------|---------------------|---------|-------------|
|          | FeaturesDict |                     |         |             |
| actions  | Tensor       | (None, 5)           | float32 |             |
| filename | Text         |                     | string  |             |
| states   | Tensor       | (None, 5)           | float32 |             |
| video    | Video(Image) | (None, 128, 128, 3) | uint8   |             |
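RoboNet is commonly used for action-conditioned video prediction. A minimal numpy sketch of slicing one trajectory into training pairs, under the assumption (ours, not stated by this catalog) that action `t` is applied between frame `t` and frame `t+1`:

```python
import numpy as np

def to_prediction_pairs(video, actions):
    """Pair each frame and its action with the next frame.

    Assumes actions[t] is applied between frame t and frame t+1; this
    alignment is an illustrative assumption, not a documented guarantee.
    """
    inputs = video[:-1]             # frames 0 .. T-2
    targets = video[1:]             # frames 1 .. T-1
    acts = actions[:len(inputs)]    # one action per transition
    return inputs, acts, targets

# Synthetic trajectory with the catalog's shapes (fabricated values).
T = 10
video = np.zeros((T, 64, 64, 3), dtype=np.uint8)
actions = np.zeros((T, 5), dtype=np.float32)
x, a, y = to_prediction_pairs(video, actions)   # 9 (frame, action, next-frame) triples
```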
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-12-23 UTC.