# berkeley_gnm_sac_son

- **Description**:

office navigation

- **Homepage**: <https://sites.google.com/view/SACSoN-review>

- **Source code**: [`tfds.robotics.rtx.BerkeleyGnmSacSon`](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/robotics/rtx/rtx.py)

- **Versions**:
    - **`0.1.0`** (default): Initial release.

- **Download size**: `Unknown size`

- **Dataset size**: `7.00 GiB`

- **Auto-cached** ([documentation](https://www.tensorflow.org/datasets/performances#auto-caching)): No

- **Splits**:
| Split     | Examples |
|-----------|----------|
| `'train'` | 2,955    |
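A minimal loading sketch using the standard TFDS API, assuming the builder can prepare the data in your environment (the dataset name and split come from this page; robotics datasets may need extra download setup):

```python
import tensorflow_datasets as tfds

# Minimal sketch: load the single 'train' split (2,955 episodes).
# First use prepares the data locally (~7.00 GiB on disk).
ds = tfds.load('berkeley_gnm_sac_son', split='train')
print(ds.cardinality().numpy())  # expected: 2955
```

- **Feature structure**: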
```
FeaturesDict({
    'episode_metadata': FeaturesDict({
        'file_path': Text(shape=(), dtype=string),
    }),
    'steps': Dataset({
        'action': Tensor(shape=(2,), dtype=float64, description=Robot action, consists of 2x position),
        'action_angle': Tensor(shape=(3,), dtype=float64, description=Robot action, consists of 2x position, 1x yaw),
        'discount': Scalar(shape=(), dtype=float64, description=Discount if provided, default to 1.),
        'is_first': bool,
        'is_last': bool,
        'is_terminal': bool,
        'language_embedding': Tensor(shape=(512,), dtype=float32, description=Kona language embedding. See https://tfhub.dev/google/universal-sentence-encoder-large/5),
        'language_instruction': Text(shape=(), dtype=string),
        'observation': FeaturesDict({
            'image': Image(shape=(120, 160, 3), dtype=uint8, description=Main camera RGB observation.),
            'position': Tensor(shape=(2,), dtype=float64, description=Robot position),
            'state': Tensor(shape=(3,), dtype=float64, description=Robot state, consists of [2x position, 1x yaw]),
            'yaw': Tensor(shape=(1,), dtype=float64, description=Robot yaw),
        }),
        'reward': Scalar(shape=(), dtype=float64, description=Reward if provided, 1 on final step for demos.),
    }),
})
```
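Each top-level example is one episode, and its `steps` field is itself a nested `tf.data.Dataset`. A sketch of iterating a few steps (field names and shapes are taken from the structure above; the episode/step counts are arbitrary):

```python
import tensorflow_datasets as tfds

ds = tfds.load('berkeley_gnm_sac_son', split='train')
for episode in ds.take(1):
    print(episode['episode_metadata']['file_path'].numpy())
    # 'steps' is a nested tf.data.Dataset of per-timestep dicts.
    for step in episode['steps'].take(3):
        image = step['observation']['image']   # (120, 160, 3) uint8
        action = step['action']                # (2,) float64: 2x position
        state = step['observation']['state']   # (3,) float64: [2x position, 1x yaw]
        print(image.shape, action.numpy(), bool(step['is_last']))
```

- **Feature documentation**: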
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
| | FeaturesDict | | | |
| episode_metadata | FeaturesDict | | | |
| episode_metadata/file_path | Text | | string | Path to the original data file. |
| steps | Dataset | | | |
| steps/action | Tensor | (2,) | float64 | Robot action, consists of 2x position |
| steps/action_angle | Tensor | (3,) | float64 | Robot action, consists of 2x position, 1x yaw |
| steps/discount | Scalar | | float64 | Discount if provided, default to 1. |
| steps/is_first | Tensor | | bool | |
| steps/is_last | Tensor | | bool | |
| steps/is_terminal | Tensor | | bool | |
| steps/language_embedding | Tensor | (512,) | float32 | Kona language embedding. See <https://tfhub.dev/google/universal-sentence-encoder-large/5> |
| steps/language_instruction | Text | | string | Language instruction. |
| steps/observation | FeaturesDict | | | |
| steps/observation/image | Image | (120, 160, 3) | uint8 | Main camera RGB observation. |
| steps/observation/position | Tensor | (2,) | float64 | Robot position |
| steps/observation/state | Tensor | (3,) | float64 | Robot state, consists of [2x position, 1x yaw] |
| steps/observation/yaw | Tensor | (1,) | float64 | Robot yaw |
| steps/reward | Scalar | | float64 | Reward if provided, 1 on final step for demos. |
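The 512-d `language_embedding` comes from the Universal Sentence Encoder model linked in the table. A hedged sketch of embedding a new instruction into the same space, e.g. for retrieval against the stored embeddings (the query string is invented for illustration):

```python
import tensorflow_hub as hub

# Load the same encoder used for 'language_embedding' (per the table above).
encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

# Hypothetical query; its embedding lands in the same 512-d space as the
# dataset's per-step 'language_embedding', enabling e.g. cosine retrieval.
query = encoder(["navigate through the office"])  # shape (1, 512), float32
```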
- **Supervised keys** (See [`as_supervised` doc](https://www.tensorflow.org/datasets/api_docs/python/tfds/load#args)): `None`

- **Figure** ([tfds.show_examples](https://www.tensorflow.org/datasets/api_docs/python/tfds/visualization/show_examples)): Not supported.

- **Examples** ([tfds.as_dataframe](https://www.tensorflow.org/datasets/api_docs/python/tfds/as_dataframe)): Display examples...

- **Citation**:

```
@article{hirose2023sacson,
  title={SACSoN: Scalable Autonomous Data Collection for Social Navigation},
  author={Hirose, Noriaki and Shah, Dhruv and Sridhar, Ajay and Levine, Sergey},
  journal={arXiv preprint arXiv:2306.01874},
  year={2023}
}
```