Module: tf_agents.environments.wrappers
View source on GitHub: https://github.com/tensorflow/agents/blob/v0.19.0/tf_agents/environments/wrappers.py
Environment wrappers.
Wrappers in this module can be chained to change the overall behaviour of an
environment in common ways.
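The chaining described above can be sketched with plain-Python stand-ins. The names below (`CountingEnv`, `TimeLimitSketch`, `ActionRepeatSketch`) are illustrative toys mimicking the behaviour of `TimeLimit` and `ActionRepeat`, not the real tf_agents classes or signatures:

```python
class CountingEnv:
    """Toy environment: the observation is a step counter; never ends on its own."""

    def reset(self):
        self._count = 0
        return self._count

    def step(self, action):
        self._count += 1
        return self._count, 1.0, False  # (observation, reward, done)


class TimeLimitSketch:
    """Ends episodes after a fixed number of steps, like TimeLimit."""

    def __init__(self, env, duration):
        self._env, self._duration, self._steps = env, duration, 0

    def reset(self):
        self._steps = 0
        return self._env.reset()

    def step(self, action):
        obs, reward, done = self._env.step(action)
        self._steps += 1
        return obs, reward, done or self._steps >= self._duration


class ActionRepeatSketch:
    """Repeats each action n times and sums the rewards, like ActionRepeat."""

    def __init__(self, env, times):
        self._env, self._times = env, times

    def reset(self):
        return self._env.reset()

    def step(self, action):
        total = 0.0
        for _ in range(self._times):
            obs, reward, done = self._env.step(action)
            total += reward
            if done:
                break
        return obs, total, done


# Wrappers chain: each one forwards calls to the environment it wraps.
env = TimeLimitSketch(ActionRepeatSketch(CountingEnv(), times=2), duration=3)
obs = env.reset()
steps, done = 0, False
while not done:
    obs, reward, done = env.step(0)
    steps += 1
print(steps, obs, reward)  # → 3 6 2.0
```

Each outer step repeats the action twice (accumulating reward 2.0), and the time limit truncates the episode after three outer steps. The real wrappers follow the same forwarding pattern on top of `PyEnvironmentBaseWrapper`.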
Classes
class ActionClipWrapper
: Wraps an environment and clips actions to spec before applying.
class ActionDiscretizeWrapper
: Wraps an environment with continuous actions and discretizes them.
class ActionOffsetWrapper
: Offsets actions to be zero-based.
class ActionRepeat
: Repeats actions over n steps while accumulating the received reward.
class ExtraDisabledActionsWrapper
: Adds extra unavailable actions.
class FixedLength
: Truncates long episodes and pads short episodes to have a fixed length.
class FlattenActionWrapper
: Flattens the action.
class FlattenObservationsWrapper
: Wraps an environment and flattens nested multi-dimensional observations.
class GoalReplayEnvWrapper
: Adds a goal to the observation, used for HER (Hindsight Experience Replay).
class HistoryWrapper
: Adds observation and action history to the environment's observations.
class ObservationFilterWrapper
: Filters observations based on an array of indexes.
class OneHotActionWrapper
: Converts discrete actions to one-hot format.
class PerformanceProfiler
: Wrapper that profiles environment step times, periodically invoking a callback.
class PyEnvironmentBaseWrapper
: PyEnvironment wrapper forwards calls to the given environment.
class RunStats
: Wrapper that accumulates run statistics as the environment iterates.
class TimeLimit
: End episodes after specified number of steps.
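As an illustration of what `ActionDiscretizeWrapper` does conceptually, a discrete action index can be mapped onto evenly spaced points of a continuous range. The `discretize` helper below is a hypothetical sketch of that mapping, not the tf_agents API:

```python
def discretize(minimum, maximum, num_actions):
    """Return the lookup table mapping discrete index -> continuous action."""
    step = (maximum - minimum) / (num_actions - 1)
    return [minimum + i * step for i in range(num_actions)]


# A continuous action in [-1, 1] sampled at 5 evenly spaced points:
table = discretize(-1.0, 1.0, num_actions=5)
print(table)  # → [-1.0, -0.5, 0.0, 0.5, 1.0]
```

A discrete policy then emits an index into this table, and the wrapper passes the corresponding continuous value to the underlying environment.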
Other Members

absolute_import: Instance of `__future__._Feature`
division: Instance of `__future__._Feature`
print_function: Instance of `__future__._Feature`
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2024-04-26 UTC.