- Description:
D4RL is an open-source benchmark for offline reinforcement learning. It provides standardized environments and datasets for training and benchmarking algorithms.
The datasets follow the RLDS format to represent steps and episodes.
- Config description: See more details about the task and its versions in https://github.com/rail-berkeley/d4rl/wiki/Tasks#gym
- Source code: tfds.d4rl.d4rl_mujoco_halfcheetah.D4rlMujocoHalfcheetah
- Versions:
  - 1.0.0: Initial release.
  - 1.0.1: Support for episode and step metadata, and unification of the reward shape across all the configs.
  - 1.1.0: Added is_last.
  - 1.2.0 (default): Updated to take into account the next observation.
- Supervised keys (See as_supervised doc): None
- Figure (tfds.show_examples): Not supported.
Citation:
@misc{fu2020d4rl,
title={D4RL: Datasets for Deep Data-Driven Reinforcement Learning},
author={Justin Fu and Aviral Kumar and Ofir Nachum and George Tucker and Sergey Levine},
year={2020},
eprint={2004.07219},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
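To make the per-step flags concrete, here is a minimal sketch of the RLDS-style episode layout using plain NumPy stand-ins (not the actual TFDS objects). The shapes match the halfcheetah configs below; the field values are illustrative only. Note that `is_last` marks the final step of a stored episode, while `is_terminal` marks a true MDP terminal state, so an episode can end without being terminal (e.g. a time-limit truncation):

```python
import numpy as np

# Toy stand-in for one RLDS episode: 'steps' holds per-step fields.
# The real data has 17-dim observations and 6-dim actions.
rng = np.random.default_rng(0)
n = 5
episode = {
    "steps": {
        "observation": rng.standard_normal((n, 17)).astype(np.float32),
        "action": rng.standard_normal((n, 6)).astype(np.float32),
        "reward": np.ones(n, dtype=np.float32),
        "discount": np.ones(n, dtype=np.float32),
        "is_first": np.array([True, False, False, False, False]),
        "is_last": np.array([False, False, False, False, True]),
        # is_last without is_terminal means the episode was truncated
        # (e.g. by a time limit), not that the MDP reached a terminal state.
        "is_terminal": np.zeros(n, dtype=bool),
    }
}

# Undiscounted episode return.
episode_return = float(episode["steps"]["reward"].sum())

# Distinguish truncation from termination at the episode boundary.
truncated = bool(episode["steps"]["is_last"][-1]
                 and not episode["steps"]["is_terminal"][-1])
print(episode_return, truncated)  # 5.0 True
```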
d4rl_mujoco_halfcheetah/v0-expert (default config)
- Download size: 83.44 MiB
- Dataset size: 98.43 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,002 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
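Version 1.2.0 of the dataset takes the next observation into account, so a common preprocessing step for offline RL is to pair each step with its successor observation to form (s, a, r, s') transitions. A rough sketch with placeholder NumPy arrays (the real data would come from the nested 'steps' dataset above):

```python
import numpy as np

# Hypothetical episode steps; shapes match this config: obs (17,), action (6,).
T = 4
obs = np.arange(T * 17, dtype=np.float32).reshape(T, 17)
act = np.zeros((T, 6), dtype=np.float32)
rew = np.arange(T, dtype=np.float32)

# Pair each step with its successor observation; the final step has no
# successor, so only T-1 transitions are produced per episode.
transitions = [
    (obs[t], act[t], rew[t], obs[t + 1])  # (s, a, r, s')
    for t in range(T - 1)
]
print(len(transitions))  # 3
```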
d4rl_mujoco_halfcheetah/v0-medium
- Download size: 82.92 MiB
- Dataset size: 98.43 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,002 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v0-medium-expert
- Download size: 166.36 MiB
- Dataset size: 196.86 MiB
- Auto-cached (documentation): Only when shuffle_files=False (train)
- Splits:

| Split | Examples |
|---|---|
| 'train' | 2,004 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v0-mixed
- Download size: 8.60 MiB
- Dataset size: 9.94 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 101 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v0-random
- Download size: 84.79 MiB
- Dataset size: 98.43 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,002 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v1-expert
- Download size: 146.94 MiB
- Dataset size: 451.88 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'policy': FeaturesDict({
'fc0': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 17), dtype=float32),
}),
'fc1': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 256), dtype=float32),
}),
'last_fc': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'last_fc_log_std': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'nonlinearity': string,
'output_distribution': string,
}),
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float32,
'qpos': Tensor(shape=(9,), dtype=float32),
'qvel': Tensor(shape=(9,), dtype=float32),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| policy | FeaturesDict |  |  |  |
| policy/fc0 | FeaturesDict |  |  |  |
| policy/fc0/bias | Tensor | (256,) | float32 |  |
| policy/fc0/weight | Tensor | (256, 17) | float32 |  |
| policy/fc1 | FeaturesDict |  |  |  |
| policy/fc1/bias | Tensor | (256,) | float32 |  |
| policy/fc1/weight | Tensor | (256, 256) | float32 |  |
| policy/last_fc | FeaturesDict |  |  |  |
| policy/last_fc/bias | Tensor | (6,) | float32 |  |
| policy/last_fc/weight | Tensor | (6, 256) | float32 |  |
| policy/last_fc_log_std | FeaturesDict |  |  |  |
| policy/last_fc_log_std/bias | Tensor | (6,) | float32 |  |
| policy/last_fc_log_std/weight | Tensor | (6, 256) | float32 |  |
| policy/nonlinearity | Tensor |  | string |  |
| policy/output_distribution | Tensor |  | string |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float32 |  |
| steps/infos/qpos | Tensor | (9,) | float32 |  |
| steps/infos/qvel | Tensor | (9,) | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
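The 'policy' dict stores the weights of the two-hidden-layer Gaussian policy that generated this config's data. As an illustrative sketch only (random placeholder weights, and assuming the stored 'nonlinearity' string denotes relu; check the actual field before relying on this), the layers can be applied with NumPy following the documented shapes:

```python
import numpy as np

# Placeholder weights with the shapes documented above; in practice these
# would be read from the 'policy' feature of an example.
rng = np.random.default_rng(0)
policy = {
    "fc0": {"weight": rng.standard_normal((256, 17)).astype(np.float32),
            "bias": np.zeros(256, np.float32)},
    "fc1": {"weight": rng.standard_normal((256, 256)).astype(np.float32),
            "bias": np.zeros(256, np.float32)},
    "last_fc": {"weight": rng.standard_normal((6, 256)).astype(np.float32),
                "bias": np.zeros(6, np.float32)},
    "last_fc_log_std": {"weight": rng.standard_normal((6, 256)).astype(np.float32),
                        "bias": np.zeros(6, np.float32)},
}

def policy_forward(obs):
    # Weight shape (out, in) suggests W @ x; relu is assumed here.
    h = np.maximum(policy["fc0"]["weight"] @ obs + policy["fc0"]["bias"], 0.0)
    h = np.maximum(policy["fc1"]["weight"] @ h + policy["fc1"]["bias"], 0.0)
    mean = policy["last_fc"]["weight"] @ h + policy["last_fc"]["bias"]
    log_std = policy["last_fc_log_std"]["weight"] @ h + policy["last_fc_log_std"]["bias"]
    return mean, log_std  # parameters of the action distribution

mean, log_std = policy_forward(np.zeros(17, np.float32))
print(mean.shape)  # (6,)
```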
d4rl_mujoco_halfcheetah/v1-medium
- Download size: 146.65 MiB
- Dataset size: 451.88 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'policy': FeaturesDict({
'fc0': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 17), dtype=float32),
}),
'fc1': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 256), dtype=float32),
}),
'last_fc': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'last_fc_log_std': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'nonlinearity': string,
'output_distribution': string,
}),
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float32,
'qpos': Tensor(shape=(9,), dtype=float32),
'qvel': Tensor(shape=(9,), dtype=float32),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| policy | FeaturesDict |  |  |  |
| policy/fc0 | FeaturesDict |  |  |  |
| policy/fc0/bias | Tensor | (256,) | float32 |  |
| policy/fc0/weight | Tensor | (256, 17) | float32 |  |
| policy/fc1 | FeaturesDict |  |  |  |
| policy/fc1/bias | Tensor | (256,) | float32 |  |
| policy/fc1/weight | Tensor | (256, 256) | float32 |  |
| policy/last_fc | FeaturesDict |  |  |  |
| policy/last_fc/bias | Tensor | (6,) | float32 |  |
| policy/last_fc/weight | Tensor | (6, 256) | float32 |  |
| policy/last_fc_log_std | FeaturesDict |  |  |  |
| policy/last_fc_log_std/bias | Tensor | (6,) | float32 |  |
| policy/last_fc_log_std/weight | Tensor | (6, 256) | float32 |  |
| policy/nonlinearity | Tensor |  | string |  |
| policy/output_distribution | Tensor |  | string |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float32 |  |
| steps/infos/qpos | Tensor | (9,) | float32 |  |
| steps/infos/qvel | Tensor | (9,) | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v1-medium-expert
- Download size: 293.00 MiB
- Dataset size: 342.37 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 2,000 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float32,
'qpos': Tensor(shape=(9,), dtype=float32),
'qvel': Tensor(shape=(9,), dtype=float32),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float32 |  |
| steps/infos/qpos | Tensor | (9,) | float32 |  |
| steps/infos/qvel | Tensor | (9,) | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v1-medium-replay
- Download size: 57.68 MiB
- Dataset size: 34.59 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 202 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float64),
'discount': float64,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float64),
'reward': float64,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float64 |  |
| steps/discount | Tensor |  | float64 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float64 |  |
| steps/reward | Tensor |  | float64 |  |
- Examples (tfds.as_dataframe):
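Unlike most configs, this one stores its step data as float64. If a training pipeline expects float32 (as the other configs provide), a simple cast is usually applied first; a sketch with placeholder NumPy arrays:

```python
import numpy as np

# Placeholder step with this config's float64 dtypes.
step = {
    "observation": np.zeros(17, dtype=np.float64),
    "action": np.zeros(6, dtype=np.float64),
    "reward": np.float64(1.0),
}

# Cast every numeric field down to float32 for a mixed-config pipeline.
step32 = {k: np.asarray(v, dtype=np.float32) for k, v in step.items()}
```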
d4rl_mujoco_halfcheetah/v1-full-replay
- Download size: 285.01 MiB
- Dataset size: 171.22 MiB
- Auto-cached (documentation): Only when shuffle_files=False (train)
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float64),
'discount': float64,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float64),
'reward': float64,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float64 |  |
| steps/discount | Tensor |  | float64 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float64 |  |
| steps/reward | Tensor |  | float64 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v1-random
- Download size: 145.19 MiB
- Dataset size: 171.18 MiB
- Auto-cached (documentation): Only when shuffle_files=False (train)
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float32,
'qpos': Tensor(shape=(9,), dtype=float32),
'qvel': Tensor(shape=(9,), dtype=float32),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float32 |  |
| steps/infos/qpos | Tensor | (9,) | float32 |  |
| steps/infos/qvel | Tensor | (9,) | float32 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-expert
- Download size: 226.46 MiB
- Dataset size: 451.88 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'policy': FeaturesDict({
'fc0': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 17), dtype=float32),
}),
'fc1': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 256), dtype=float32),
}),
'last_fc': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'last_fc_log_std': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'nonlinearity': string,
'output_distribution': string,
}),
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| policy | FeaturesDict |  |  |  |
| policy/fc0 | FeaturesDict |  |  |  |
| policy/fc0/bias | Tensor | (256,) | float32 |  |
| policy/fc0/weight | Tensor | (256, 17) | float32 |  |
| policy/fc1 | FeaturesDict |  |  |  |
| policy/fc1/bias | Tensor | (256,) | float32 |  |
| policy/fc1/weight | Tensor | (256, 256) | float32 |  |
| policy/last_fc | FeaturesDict |  |  |  |
| policy/last_fc/bias | Tensor | (6,) | float32 |  |
| policy/last_fc/weight | Tensor | (6, 256) | float32 |  |
| policy/last_fc_log_std | FeaturesDict |  |  |  |
| policy/last_fc_log_std/bias | Tensor | (6,) | float32 |  |
| policy/last_fc_log_std/weight | Tensor | (6, 256) | float32 |  |
| policy/nonlinearity | Tensor |  | string |  |
| policy/output_distribution | Tensor |  | string |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-full-replay
- Download size: 277.88 MiB
- Dataset size: 171.22 MiB
- Auto-cached (documentation): Only when shuffle_files=False (train)
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-medium
- Download size: 226.71 MiB
- Dataset size: 451.88 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'policy': FeaturesDict({
'fc0': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 17), dtype=float32),
}),
'fc1': FeaturesDict({
'bias': Tensor(shape=(256,), dtype=float32),
'weight': Tensor(shape=(256, 256), dtype=float32),
}),
'last_fc': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'last_fc_log_std': FeaturesDict({
'bias': Tensor(shape=(6,), dtype=float32),
'weight': Tensor(shape=(6, 256), dtype=float32),
}),
'nonlinearity': string,
'output_distribution': string,
}),
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| policy | FeaturesDict |  |  |  |
| policy/fc0 | FeaturesDict |  |  |  |
| policy/fc0/bias | Tensor | (256,) | float32 |  |
| policy/fc0/weight | Tensor | (256, 17) | float32 |  |
| policy/fc1 | FeaturesDict |  |  |  |
| policy/fc1/bias | Tensor | (256,) | float32 |  |
| policy/fc1/weight | Tensor | (256, 256) | float32 |  |
| policy/last_fc | FeaturesDict |  |  |  |
| policy/last_fc/bias | Tensor | (6,) | float32 |  |
| policy/last_fc/weight | Tensor | (6, 256) | float32 |  |
| policy/last_fc_log_std | FeaturesDict |  |  |  |
| policy/last_fc_log_std/bias | Tensor | (6,) | float32 |  |
| policy/last_fc_log_std/weight | Tensor | (6, 256) | float32 |  |
| policy/nonlinearity | Tensor |  | string |  |
| policy/output_distribution | Tensor |  | string |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-medium-expert
- Download size: 452.58 MiB
- Dataset size: 342.37 MiB
- Auto-cached (documentation): No
- Splits:

| Split | Examples |
|---|---|
| 'train' | 2,000 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-medium-replay
- Download size: 56.69 MiB
- Dataset size: 34.59 MiB
- Auto-cached (documentation): Yes
- Splits:

| Split | Examples |
|---|---|
| 'train' | 202 |
- Feature structure:
FeaturesDict({
'algorithm': string,
'iteration': int32,
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| algorithm | Tensor |  | string |  |
| iteration | Tensor |  | int32 |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):
d4rl_mujoco_halfcheetah/v2-random
- Download size: 226.34 MiB
- Dataset size: 171.18 MiB
- Auto-cached (documentation): Only when shuffle_files=False (train)
- Splits:

| Split | Examples |
|---|---|
| 'train' | 1,000 |
- Feature structure:
FeaturesDict({
'steps': Dataset({
'action': Tensor(shape=(6,), dtype=float32),
'discount': float32,
'infos': FeaturesDict({
'action_log_probs': float64,
'qpos': Tensor(shape=(9,), dtype=float64),
'qvel': Tensor(shape=(9,), dtype=float64),
}),
'is_first': bool,
'is_last': bool,
'is_terminal': bool,
'observation': Tensor(shape=(17,), dtype=float32),
'reward': float32,
}),
})
- Feature documentation:
| Feature | Class | Shape | Dtype | Description |
|---|---|---|---|---|
|  | FeaturesDict |  |  |  |
| steps | Dataset |  |  |  |
| steps/action | Tensor | (6,) | float32 |  |
| steps/discount | Tensor |  | float32 |  |
| steps/infos | FeaturesDict |  |  |  |
| steps/infos/action_log_probs | Tensor |  | float64 |  |
| steps/infos/qpos | Tensor | (9,) | float64 |  |
| steps/infos/qvel | Tensor | (9,) | float64 |  |
| steps/is_first | Tensor |  | bool |  |
| steps/is_last | Tensor |  | bool |  |
| steps/is_terminal | Tensor |  | bool |  |
| steps/observation | Tensor | (17,) | float32 |  |
| steps/reward | Tensor |  | float32 |  |
- Examples (tfds.as_dataframe):