Module: tf.distribute.experimental
Public API for tf._api.v2.distribute.experimental namespace
Modules
coordinator module: Public API for tf._api.v2.distribute.experimental.coordinator namespace
partitioners module: Public API for tf._api.v2.distribute.experimental.partitioners namespace
rpc module: Public API for tf._api.v2.distribute.experimental.rpc namespace
Classes
class CentralStorageStrategy: A one-machine strategy that puts all variables on a single device.
class CollectiveCommunication: Cross-device communication implementation (deprecated alias of CommunicationImplementation).
class CollectiveHints: Hints for collective operations like AllReduce.
class CommunicationImplementation: Cross-device communication implementation.
class CommunicationOptions: Options for cross-device communications like all-reduce.
class MultiWorkerMirroredStrategy: A distribution strategy for synchronous training on multiple workers (usage sketch below).
class ParameterServerStrategy: A multi-worker tf.distribute strategy with parameter servers (usage sketch below).
class PreemptionCheckpointHandler: Preemption and error handler for synchronous training (usage sketch below).
class PreemptionWatcher: Watches for a preemption signal and stores it.
class TPUStrategy: Synchronous training on TPUs and TPU Pods (usage sketch below).
class TerminationConfig: Customization of PreemptionCheckpointHandler for various platforms.
class ValueContext: A class wrapping information needed by a distribute function.
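The communication classes and MultiWorkerMirroredStrategy are typically used together. The sketch below is a minimal illustration rather than a complete training setup: it assumes a multi-worker environment already described by TF_CONFIG, and it pairs the stable tf.distribute.MultiWorkerMirroredStrategy entry point with the experimental CommunicationOptions and CommunicationImplementation classes listed above.

```python
import tensorflow as tf

# Ask the collective ops to use NCCL; AUTO would let TensorFlow choose.
options = tf.distribute.experimental.CommunicationOptions(
    implementation=tf.distribute.experimental.CommunicationImplementation.NCCL)

# Assumes TF_CONFIG describes the cluster when run on multiple workers.
strategy = tf.distribute.MultiWorkerMirroredStrategy(
    communication_options=options)

with strategy.scope():
    # Variables created here are mirrored across all workers.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```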
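ParameterServerStrategy is normally combined with the partitioners and coordinator submodules listed above. The following sketch assumes a cluster with "worker" and "ps" tasks is already running and reachable through TF_CONFIG; the variable shape and shard sizes are placeholders chosen only to show the API shape.

```python
import tensorflow as tf

# Assumes TF_CONFIG describes a running cluster with "ps" and "worker" tasks.
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()

# Split variables larger than 256 KiB across at most 2 parameter servers.
partitioner = tf.distribute.experimental.partitioners.MinSizePartitioner(
    min_shard_bytes=256 << 10, max_shards=2)

strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver, variable_partitioner=partitioner)

# The coordinator dispatches functions to remote workers asynchronously.
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(strategy)

with strategy.scope():
    # Large enough (512 KiB of float32) to be sharded by the partitioner.
    v = tf.Variable(tf.random.uniform([2048, 64]))

@tf.function
def step():
    return tf.reduce_sum(v)

result = coordinator.schedule(step)  # Returns a RemoteValue.
coordinator.join()                   # Block until scheduled functions finish.
print(result.fetch())
```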
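The preemption-related classes fit into a custom training loop roughly as follows. This is a hedged sketch: it assumes a multi-worker job whose platform delivers a preemption signal, and the training step, step count, and checkpoint directory are placeholders.

```python
import tensorflow as tf

NUM_STEPS = 100  # Placeholder; tune for the real workload.

# Assumes TF_CONFIG describes the multi-worker cluster.
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD()

checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)

# Give the job 30 seconds after the preemption signal to finish saving.
termination_config = tf.distribute.experimental.TerminationConfig(grace_period=30)

handler = tf.distribute.experimental.PreemptionCheckpointHandler(
    strategy.cluster_resolver,
    checkpoint,
    checkpoint_dir="/tmp/preemption_ckpt",
    termination_config=termination_config)

@tf.function
def train_step():
    # Placeholder per-replica step; a real loop would compute and apply gradients.
    return tf.constant(0.0)

for _ in range(NUM_STEPS):
    # handler.run wraps strategy.run and checkpoints, then exits cleanly,
    # if a preemption signal has been received.
    handler.run(train_step)
```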
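TPUStrategy follows the same scope-based pattern but requires connecting to and initializing the TPU system first. The sketch below assumes a reachable TPU (for example a Cloud TPU VM or a Colab TPU runtime); in recent releases the non-experimental tf.distribute.TPUStrategy is the recommended entry point, but the experimental symbol listed here is used the same way.

```python
import tensorflow as tf

# Assumes a TPU is reachable from this host; tpu="" means a locally attached TPU.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.experimental.TPUStrategy(resolver)

with strategy.scope():
    # Variables and the model are replicated across the TPU cores.
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")
```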
Last updated 2024-04-26 UTC.