tf_agents.bandits.environments.piecewise_stochastic_environment.PiecewiseStationaryDynamics
A piecewise stationary environment dynamics.
Inherits From: EnvironmentDynamics
tf_agents.bandits.environments.piecewise_stochastic_environment.PiecewiseStationaryDynamics(
observation_distribution: types.Distribution,
interval_distribution: types.Distribution,
observation_to_reward_distribution: types.Distribution,
additive_reward_distribution: types.Distribution
)
This is a piecewise stationary environment which computes rewards as:
rewards(t) = observation(t) * observation_to_reward(i) + additive_reward(i)
where t is the environment time (env_time) and i is the index of each piece.
The environment time is incremented after the reward is computed, while the
piece index is incremented at the end of each time interval. The parameters
observation_to_reward(i), additive_reward(i), and the interval length are
drawn from the given distributions at the beginning of each temporal interval.
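
For a concrete sense of the shapes involved, here is a minimal sketch of the
reward computation in TensorFlow; the sizes are illustrative assumptions, not
library defaults:

    import tensorflow as tf

    # Illustrative sizes; in the real environment they are deduced from the
    # constructor distributions.
    batch_size, observation_dim, num_actions = 2, 3, 4

    observation = tf.random.normal([batch_size, observation_dim])
    observation_to_reward = tf.random.normal([observation_dim, num_actions])
    additive_reward = tf.random.normal([num_actions])

    # rewards(t) = observation(t) * observation_to_reward(i) + additive_reward(i)
    rewards = tf.matmul(observation, observation_to_reward) + additive_reward
    # rewards has shape [batch_size, num_actions]: one reward per arm.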
Args
  observation_distribution: A distribution from tfp.distributions with shape
    [batch_size, observation_dim]. Note that the values of batch_size and
    observation_dim are deduced from the distribution.
  interval_distribution: A scalar distribution from tfp.distributions. The
    value is cast to int64 to update the time range.
  observation_to_reward_distribution: A distribution from tfp.distributions
    with shape [observation_dim, num_actions]. The value of observation_dim
    must match the second dimension of observation_distribution.
  additive_reward_distribution: A distribution from tfp.distributions with
    shape [num_actions]. This models the non-contextual behavior of the
    bandit.
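
A minimal construction sketch, assuming tensorflow_probability is installed;
the Normal/Uniform choices and all sizes below are illustrative assumptions,
not requirements:

    import tensorflow as tf
    import tensorflow_probability as tfp

    from tf_agents.bandits.environments.piecewise_stochastic_environment import (
        PiecewiseStationaryDynamics)

    tfd = tfp.distributions
    batch_size, observation_dim, num_actions = 2, 3, 4

    dynamics = PiecewiseStationaryDynamics(
        # Observations: shape [batch_size, observation_dim].
        observation_distribution=tfd.Normal(
            loc=tf.zeros([batch_size, observation_dim]), scale=1.0),
        # Piece lengths: a scalar distribution; values are cast to int64.
        interval_distribution=tfd.Uniform(low=50.0, high=100.0),
        # Reward mapping, redrawn each piece: [observation_dim, num_actions].
        observation_to_reward_distribution=tfd.Normal(
            loc=tf.zeros([observation_dim, num_actions]), scale=0.1),
        # Per-arm additive reward: shape [num_actions].
        additive_reward_distribution=tfd.Normal(
            loc=tf.zeros([num_actions]), scale=0.05))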
Attributes
  action_spec: Specification of the actions.
  batch_size: Returns the batch size used for observations and rewards.
  observation_spec: Specification of the observations.
Methods
compute_optimal_action
compute_optimal_action(
observation: tf_agents.typing.types.NestedTensor
) -> tf_agents.typing.types.NestedTensor
compute_optimal_reward
compute_optimal_reward(
observation: tf_agents.typing.types.NestedTensor
) -> tf_agents.typing.types.NestedTensor
observation
observation(
unused_t
) -> tf_agents.typing.types.NestedTensor
Returns an observation batch for the given time.
Args
  env_time: The scalar int64 tensor of the environment time step. This is
    incremented by the environment after the reward is computed.

Returns
  The observation batch with spec according to observation_spec.
reward
reward(
observation: tf_agents.typing.types.NestedTensor
,
t: tf_agents.typing.types.Int
) -> tf_agents.typing.types.NestedTensor
Reward for the given observation and time step.
Args
  observation: A batch of observations with spec according to
    observation_spec.
  env_time: The scalar int64 tensor of the environment time step. This is
    incremented by the environment after the reward is computed.

Returns
  A batch of rewards with spec shape [batch_size, num_actions] containing
  rewards for all arms.
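
Putting observation and reward together, a single hypothetical step could look
like this, assuming dynamics was constructed as in the sketch above:

    env_time = tf.constant(0, dtype=tf.int64)
    obs = dynamics.observation(env_time)      # [batch_size, observation_dim]
    rewards = dynamics.reward(obs, env_time)  # [batch_size, num_actions]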