# ucsd_pick_and_place_dataset_converted_externally_to_rlds
- **Description**:

xArm picking and placing objects with distractors.

- **Homepage**: https://owmcorl.github.io

- **Source code**: [`tfds.robotics.rtx.UcsdPickAndPlaceDatasetConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:
  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `3.53 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No
- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 1,355    |
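The split size can be confirmed programmatically. A minimal sketch, assuming a `tensorflow_datasets` version that registers the robotics/RT-X builders and can reach the dataset's hosted metadata:

    import tensorflow_datasets as tfds

    # Builder metadata only; no episode data is read at this point.
    builder = tfds.builder('ucsd_pick_and_place_dataset_converted_externally_to_rlds')
    print(builder.info.splits['train'].num_examples)  # expected: 1355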
- **Feature structure**:

    FeaturesDict({
        'episode_metadata': FeaturesDict({
            'disclaimer': Text(shape=(), dtype=string),
            'file_path': Text(shape=(), dtype=string),
            'n_transitions': Scalar(shape=(), dtype=int32, description=Number of transitions in the episode.),
            'success': Scalar(shape=(), dtype=bool, description=True if the last state of an episode is a success state, False otherwise.),
            'success_labeled_by': Text(shape=(), dtype=string),
        }),
        'steps': Dataset({
            'action': Tensor(shape=(4,), dtype=float32, description=Robot action, consists of [3x gripper velocities,1x gripper open/close torque].),
            'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
            'is_first': bool,
            'is_last': bool,
            'is_terminal': bool,
            'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
            'language_instruction': Text(shape=(), dtype=string),
            'observation': FeaturesDict({
                'image': Image(shape=(224, 224, 3), dtype=uint8, description=Camera RGB observation.),
                'state': Tensor(shape=(7,), dtype=float32, description=Robot state, consists of [3x gripper position,3x gripper orientation, 1x finger distance].),
            }),
            'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
        }),
    })
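The structure follows the RLDS convention: each example is one episode with `episode_metadata` plus a nested `steps` dataset of transitions. A minimal sketch of loading and iterating the data, assuming the dataset can be prepared by your `tensorflow_datasets` installation:

    import tensorflow_datasets as tfds

    ds = tfds.load(
        'ucsd_pick_and_place_dataset_converted_externally_to_rlds',
        split='train',
    )

    for episode in ds.take(1):
        print(episode['episode_metadata']['file_path'].numpy())
        # `steps` is itself a nested tf.data.Dataset of transitions.
        for step in episode['steps']:
            image = step['observation']['image']   # (224, 224, 3) uint8
            state = step['observation']['state']   # (7,) float32: 3x position, 3x orientation, 1x finger distance
            action = step['action']                # (4,) float32: 3x gripper velocities, 1x open/close torque
            print(step['language_instruction'].numpy(), action.numpy())
            break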
- **Feature documentation**:

| Feature                             | Class        | Shape         | Dtype   | Description |
|-------------------------------------|--------------|---------------|---------|-------------|
|                                     | FeaturesDict |               |         |             |
| episode_metadata                    | FeaturesDict |               |         |             |
| episode_metadata/disclaimer         | Text         |               | string  | Disclaimer about the particular episode. |
| episode_metadata/file_path          | Text         |               | string  | Path to the original data file. |
| episode_metadata/n_transitions      | Scalar       |               | int32   | Number of transitions in the episode. |
| episode_metadata/success            | Scalar       |               | bool    | True if the last state of an episode is a success state, False otherwise. |
| episode_metadata/success_labeled_by | Text         |               | string  | Who labeled success (and thereby reward) of the episode. Can be one of: [human, classifier]. |
| steps                               | Dataset      |               |         |             |
| steps/action                        | Tensor       | (4,)          | float32 | Robot action, consists of [3x gripper velocities, 1x gripper open/close torque]. |
| steps/discount                      | Scalar       |               | float32 | Discount if provided, default to 1. |
| steps/is_first                      | Tensor       |               | bool    |             |
| steps/is_last                       | Tensor       |               | bool    |             |
| steps/is_terminal                   | Tensor       |               | bool    |             |
| steps/language_embedding            | Tensor       | (512,)        | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction          | Text         |               | string  | Language Instruction. |
| steps/observation                   | FeaturesDict |               |         |             |
| steps/observation/image             | Image        | (224, 224, 3) | uint8   | Camera RGB observation. |
| steps/observation/state             | Tensor       | (7,)          | float32 | Robot state, consists of [3x gripper position, 3x gripper orientation, 1x finger distance]. |
| steps/reward                        | Scalar       |               | float32 | Reward if provided, 1 on final step for demos. |
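Because `success` is labeled per episode (by a human or a classifier, see `success_labeled_by`) and the reward is 1 on the final step for demos, a common preprocessing step is to keep only successful episodes. A hedged sketch, assuming eager TensorFlow and the same `tfds.load` call as above:

    import tensorflow_datasets as tfds

    ds = tfds.load(
        'ucsd_pick_and_place_dataset_converted_externally_to_rlds',
        split='train',
    )

    # Keep only episodes whose final state was labeled a success.
    successful = ds.filter(lambda ep: ep['episode_metadata']['success'])

    for episode in successful.take(1):
        meta = episode['episode_metadata']
        labeler = meta['success_labeled_by'].numpy().decode()  # 'human' or 'classifier'
        n_transitions = int(meta['n_transitions'].numpy())
        print(f'success labeled by {labeler}, {n_transitions} transitions')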
- **Citation**:

    @preprint{Feng2023Finetuning,
      title={Finetuning Offline World Models in the Real World},
      author={Yunhai Feng and Nicklas Hansen and Ziyan Xiong and Chandramouli Rajagopalan and Xiaolong Wang},
      year={2023}
    }