# mt_opt

- **Description**: Datasets for the [MT-Opt paper](https://arxiv.org/abs/2104.08212).

- **Homepage**: <https://karolhausman.github.io/mt-opt/>

- **Source code**: [`tfds.robotics.mt_opt.MtOpt`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/mt_opt/mt_opt.py)

- **Versions**:
  - **`1.0.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No

- **Supervised keys** (see [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.

- **Citation**:
    @misc{kalashnikov2021mtopt,
        title={MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale},
        author={Dmitry Kalashnikov and Jacob Varley and Yevgen Chebotar and Benjamin Swanson and Rico Jonschkowski and Chelsea Finn and Sergey Levine and Karol Hausman},
        year={2021},
        eprint={2104.08212},
        archivePrefix={arXiv},
        primaryClass={cs.RO}
    }
## mt_opt/rlds (default config)

- **Config description**: This dataset contains task episodes collected across a fleet of real robots. It follows the [RLDS format](https://github.com/google-research/rlds) to represent steps and episodes.

- **Dataset size**: `4.38 TiB`

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 920,165  |

- **Feature structure**:
    FeaturesDict({
        'episode_id': string,
        'skill': uint8,
        'steps': Dataset({
            'action': FeaturesDict({
                'close_gripper': bool,
                'open_gripper': bool,
                'target_pose': Tensor(shape=(7,), dtype=float32),
                'terminate': bool,
            }),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'observation': FeaturesDict({
                'gripper_closed': bool,
                'height_to_bottom': float32,
                'image': Image(shape=(512, 640, 3), dtype=uint8),
                'state_dense': Tensor(shape=(7,), dtype=float32),
            }),
        }),
        'task_code': string,
    })
- **Feature documentation**:

| Feature                            | Class        | Shape         | Dtype   | Description |
|------------------------------------|--------------|---------------|---------|-------------|
|                                    | FeaturesDict |               |         |             |
| episode_id                         | Tensor       |               | string  |             |
| skill                              | Tensor       |               | uint8   |             |
| steps                              | Dataset      |               |         |             |
| steps/action                       | FeaturesDict |               |         |             |
| steps/action/close_gripper         | Tensor       |               | bool    |             |
| steps/action/open_gripper          | Tensor       |               | bool    |             |
| steps/action/target_pose           | Tensor       | (7,)          | float32 |             |
| steps/action/terminate             | Tensor       |               | bool    |             |
| steps/is_first                     | Tensor       |               | bool    |             |
| steps/is_last                      | Tensor       |               | bool    |             |
| steps/is_terminal                  | Tensor       |               | bool    |             |
| steps/observation                  | FeaturesDict |               |         |             |
| steps/observation/gripper_closed   | Tensor       |               | bool    |             |
| steps/observation/height_to_bottom | Tensor       |               | float32 |             |
| steps/observation/image            | Image        | (512, 640, 3) | uint8   |             |
| steps/observation/state_dense      | Tensor       | (7,)          | float32 |             |
| task_code                          | Tensor       |               | string  |             |
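The `steps` field above follows the RLDS episode/step layout. As a minimal sketch of downstream use (an illustration, not part of the dataset tooling), the helper below unpacks one episode into usable `(observation, action)` pairs, stopping at the final padding step; the tiny synthetic episode only mimics the published schema, and real episodes would come from `tfds.load('mt_opt/rlds', split='train')`.

```python
def episode_to_transitions(episode):
    """Yield (observation, action) pairs from an RLDS-style episode.

    By RLDS convention the step with is_last=True carries no meaningful
    action, so iteration stops there.
    """
    for step in episode["steps"]:
        if step["is_last"]:
            break
        yield step["observation"], step["action"]

# Synthetic two-step episode shaped like the mt_opt/rlds feature structure
# (images omitted; state_dense and target_pose are 7-dim as documented).
episode = {
    "episode_id": "ep-0",
    "skill": 3,
    "task_code": "pick",
    "steps": [
        {
            "is_first": True, "is_last": False, "is_terminal": False,
            "observation": {"gripper_closed": False, "height_to_bottom": 0.12,
                            "image": None, "state_dense": [0.0] * 7},
            "action": {"close_gripper": True, "open_gripper": False,
                       "target_pose": [0.0] * 7, "terminate": False},
        },
        {
            "is_first": False, "is_last": True, "is_terminal": True,
            "observation": {"gripper_closed": True, "height_to_bottom": 0.0,
                            "image": None, "state_dense": [0.0] * 7},
            "action": {"close_gripper": False, "open_gripper": False,
                       "target_pose": [0.0] * 7, "terminate": True},
        },
    ],
}

transitions = list(episode_to_transitions(episode))
print(len(transitions))  # → 1
```

With this two-step episode only the first step yields a transition, since the last step exists to carry the final observation.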
## mt_opt/sd

- **Config description**: The success-detectors dataset, containing human-curated definitions of task completion.

- **Dataset size**: `548.56 GiB`

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'test'`  | 94,636   |
| `'train'` | 380,234  |

- **Feature structure**:
    FeaturesDict({
        'image_0': Image(shape=(512, 640, 3), dtype=uint8),
        'image_1': Image(shape=(480, 640, 3), dtype=uint8),
        'image_2': Image(shape=(480, 640, 3), dtype=uint8),
        'success': bool,
        'task_code': string,
    })
- **Feature documentation**:

| Feature   | Class        | Shape         | Dtype  | Description |
|-----------|--------------|---------------|--------|-------------|
|           | FeaturesDict |               |        |             |
| image_0   | Image        | (512, 640, 3) | uint8  |             |
| image_1   | Image        | (480, 640, 3) | uint8  |             |
| image_2   | Image        | (480, 640, 3) | uint8  |             |
| success   | Tensor       |               | bool   |             |
| task_code | Tensor       |               | string |             |
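As a hedged sketch of how this config might be consumed (an assumption about downstream use, not part of the dataset docs), an `mt_opt/sd` example can be turned into a `(stacked-views, label)` pair for training a binary success classifier. Because `image_0` is 512×640 while the other two views are 480×640, this sketch simply crops `image_0` to the common height; a real pipeline would more likely resize. Real examples would come from `tfds.load('mt_opt/sd')`; the synthetic dict below only mimics the schema.

```python
import numpy as np

def sd_example_to_pair(example):
    """Stack the three camera views and extract the success label.

    image_0 (512x640) is cropped to 480 rows so all views share a shape;
    the result is a (3, 480, 640, 3) uint8 array and an int label.
    """
    views = [example["image_0"][:480], example["image_1"], example["image_2"]]
    inputs = np.stack(views)          # (3, 480, 640, 3)
    label = int(example["success"])   # 1 = task completed
    return inputs, label

# Synthetic example shaped like the mt_opt/sd feature structure.
example = {
    "image_0": np.zeros((512, 640, 3), np.uint8),
    "image_1": np.zeros((480, 640, 3), np.uint8),
    "image_2": np.zeros((480, 640, 3), np.uint8),
    "success": True,
    "task_code": "pick",
}

inputs, label = sd_example_to_pair(example)
print(inputs.shape, label)  # → (3, 480, 640, 3) 1
```

Cropping rather than resizing keeps the sketch dependency-free; with an image library available, resizing all views to one resolution would preserve the full field of view of `image_0`.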
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2022-12-06 UTC.