# utokyo_saytap_converted_externally_to_rlds
- **Description**:

A1 walking, no RGB

- **Homepage**: <https://saytap.github.io/>

- **Source code**:
  [`tfds.robotics.rtx.UtokyoSaytapConvertedExternallyToRlds`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:

  - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `55.34 MiB`

- **Auto-cached**
  ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)):
  Yes

- **Splits**:

| Split     | Examples |
|-----------|----------|
| `'train'` | 20       |
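A minimal loading sketch. It assumes the dataset is reachable at the Open X-Embodiment GCS mirror path shown below (an assumption; substitute your own prepared `data_dir` if you have mirrored the dataset locally):

```python
import tensorflow_datasets as tfds

# Build from a prepared dataset directory. The GCS path is the assumed
# Open X-Embodiment mirror; verify availability before relying on it.
builder = tfds.builder_from_directory(
    'gs://gresearch/robotics/utokyo_saytap_converted_externally_to_rlds/0.1.0')
ds = builder.as_dataset(split='train')  # 20 episodes
```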
- **Feature structure**:

```
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': Text(shape=(), dtype=string),
    }),
    'steps': Dataset({
        'action': Tensor(shape=(12,), dtype=float32, description=Robot action, consists of [12x joint positions].),
        'discount': Scalar(shape=(), dtype=float32, description=Discount if provided, default to 1.),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
        'language_instruction': Text(shape=(), dtype=string),
        'observation': FeaturesDict({
            'desired_pattern': Tensor(shape=(4, 5), dtype=bool, description=Desired foot contact pattern for the 4 legs; the 4 rows are for the front right, front left, rear right and rear left legs; the pattern length is 5 (=0.1s).),
            'desired_vel': Tensor(shape=(3,), dtype=float32, description=Desired velocities. The first 2 are linear velocities along and perpendicular to the heading direction; the 3rd is the desired angular velocity about the yaw axis.),
            'image': Image(shape=(64, 64, 3), dtype=uint8, description=Dummy camera RGB observation.),
            'prev_act': Tensor(shape=(12,), dtype=float32, description=Actions applied in the previous step.),
            'proj_grav_vec': Tensor(shape=(3,), dtype=float32, description=The gravity vector [0, 0, -1] in the robot base frame.),
            'state': Tensor(shape=(30,), dtype=float32, description=Robot state, consists of [3x robot base linear velocity, 3x base angular velocity, 12x joint position, 12x joint velocity].),
            'wrist_image': Image(shape=(64, 64, 3), dtype=uint8, description=Dummy wrist camera RGB observation.),
        }),
        'reward': Scalar(shape=(), dtype=float32, description=Reward if provided, 1 on final step for demos.),
    }),
})
```
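Each dataset element is one RLDS episode, and its `steps` field is itself a nested `tf.data.Dataset`. A sketch of walking the structure above, continuing from the `ds` built earlier:

```python
for episode in ds.take(1):
    # Episode-level metadata.
    print(episode['episode_metadata']['file_path'].numpy())
    # Step-level fields, in temporal order.
    for step in episode['steps']:
        action = step['action']                           # (12,) float32 joint positions
        instruction = step['language_instruction']        # scalar string
        pattern = step['observation']['desired_pattern']  # (4, 5) bool contact pattern
        if step['is_last']:
            print(step['reward'].numpy())                 # 1.0 on the final demo step
```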
- **Feature documentation**:

| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (12,) | float32 | Robot action, consists of [12x joint positions]. |
| steps/discount | Scalar | | float32 | Discount if provided, default to 1. |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5 |
| steps/language_instruction | Text | | string | Language instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/desired_pattern | Tensor | (4, 5) | bool | Desired foot contact pattern for the 4 legs; the 4 rows are for the front right, front left, rear right and rear left legs; the pattern length is 5 (=0.1s). |
| steps/observation/desired_vel | Tensor | (3,) | float32 | Desired velocities. The first 2 are linear velocities along and perpendicular to the heading direction; the 3rd is the desired angular velocity about the yaw axis. |
| steps/observation/image | Image | (64, 64, 3) | uint8 | Dummy camera RGB observation. |
| steps/observation/prev_act | Tensor | (12,) | float32 | Actions applied in the previous step. |
| steps/observation/proj_grav_vec | Tensor | (3,) | float32 | The gravity vector [0, 0, -1] in the robot base frame. |
| steps/observation/state | Tensor | (30,) | float32 | Robot state, consists of [3x robot base linear velocity, 3x base angular velocity, 12x joint position, 12x joint velocity]. |
| steps/observation/wrist_image | Image | (64, 64, 3) | uint8 | Dummy wrist camera RGB observation. |
| steps/reward | Scalar | | float32 | Reward if provided, 1 on final step for demos. |

- **Supervised keys** (See
  [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)):
  `None`

- **Figure**
  ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)):
  Not supported.
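The flat `state` vector and the boolean `desired_pattern` follow the fixed layouts given in the descriptions above. A sketch of unpacking them in NumPy; the leg labels, the stance/swing reading of the booleans, and the uniform 0.02 s column spacing are inferences from the descriptions, not a documented API:

```python
import numpy as np

def split_state(state: np.ndarray):
    """Split the 30-dim state per the documented layout:
    [3x base linear vel, 3x base angular vel, 12x joint pos, 12x joint vel]."""
    return state[:3], state[3:6], state[6:18], state[18:30]

def pattern_to_str(desired_pattern: np.ndarray) -> dict:
    """Render the (4, 5) pattern as '1'/'0' strings per leg, assuming
    True marks foot contact (stance). Row order FR, FL, RR, RL per the
    description; the 5 columns span 0.1 s (0.02 s each, assuming uniform
    spacing)."""
    legs = ['FR', 'FL', 'RR', 'RL']
    return {leg: ''.join('1' if c else '0' for c in row)
            for leg, row in zip(legs, desired_pattern)}
```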
- **Citation**:

```
@article{saytap2023,
  author = {Yujin Tang and Wenhao Yu and Jie Tan and Heiga Zen and Aleksandra Faust and Tatsuya Harada},
  title  = {SayTap: Language to Quadrupedal Locomotion},
  eprint = {arXiv:2306.07580},
  url    = {https://saytap.github.io},
  note   = "{https://saytap.github.io}",
  year   = {2023}
}
```