tf.tpu.experimental.DeviceAssignment
Mapping from logical cores in a computation to the physical TPU topology.
tf.tpu.experimental.DeviceAssignment(
topology, core_assignment
)
Prefer the DeviceAssignment.build() helper to construct a DeviceAssignment; it is simpler, though less flexible, than constructing a DeviceAssignment directly.
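A minimal sketch of that recommended path, assuming a reachable TPU runtime and a typical TF 2 setup; the empty TPU name and num_replicas=2 are illustrative placeholders (the latter requires at least two available TPU cores):

    import tensorflow as tf

    # Resolve and initialize the TPU system; initialize_tpu_system returns a
    # Topology describing the physical mesh. tpu="" assumes a locally attached
    # TPU (pass a name or gRPC address otherwise).
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    topology = tf.tpu.experimental.initialize_tpu_system(resolver)

    # Build the DeviceAssignment instead of constructing it directly.
    device_assignment = tf.tpu.experimental.DeviceAssignment.build(
        topology, num_replicas=2
    )
    print(device_assignment.num_replicas)           # 2
    print(device_assignment.num_cores_per_replica)  # 1 (default computation shape)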
Args

| Arg | Description |
|---|---|
| topology | A Topology object that describes the physical TPU topology. |
| core_assignment | A logical-to-physical core mapping, represented as a rank-3 numpy array. See the description of the core_assignment property for more details. |
Raises

| Exception | Condition |
|---|---|
| ValueError | If topology is not a Topology object. |
| ValueError | If core_assignment is not a rank-3 numpy array. |
Attributes

| Attribute | Description |
|---|---|
| core_assignment | The logical-to-physical core mapping. |
| num_cores_per_replica | The number of cores per replica. |
| num_replicas | The number of replicas of the computation. |
| topology | A Topology that describes the TPU topology. |
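To make these attributes concrete, a small sketch reusing the device_assignment built in the construction sketch above; the commented shape reflects the expected layout of one row of topology coordinates per (replica, logical core) pair:

    # core_assignment is a rank-3 numpy array of physical topology coordinates.
    ca = device_assignment.core_assignment
    print(ca.shape)  # expected: (num_replicas, num_cores_per_replica, mesh rank)
    print(device_assignment.num_replicas, device_assignment.num_cores_per_replica)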
Methods
build
@staticmethod
build(
topology, computation_shape=None, computation_stride=None, num_replicas=1
)
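build is the recommended helper shown above. As a hedged sketch of a common use: with the default computation_shape and computation_stride each replica occupies a single core, and num_replicas controls how many replicas are laid out across the topology. The example below assigns one single-core replica per available core, reusing the topology object from the construction sketch:

    # One single-core replica per TPU core in the topology.
    num_cores = topology.num_tasks * topology.num_tpus_per_task
    da = tf.tpu.experimental.DeviceAssignment.build(
        topology, num_replicas=num_cores
    )
    assert da.num_replicas == num_cores
    assert da.num_cores_per_replica == 1  # default: one core per replica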
coordinates
coordinates(
replica, logical_core
)
Returns the physical topology coordinates of a logical core.
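For illustration, a hedged sketch assuming the device_assignment built in the construction sketch above; the length of the returned coordinate tuple depends on the TPU generation's mesh rank:

    # Physical topology coordinates of replica 0's first logical core,
    # e.g. (x, y, core) or (x, y, z, core) depending on the mesh rank.
    coords = device_assignment.coordinates(replica=0, logical_core=0)
    print(coords)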
host_device
host_device(
replica=0, logical_core=0, job=None
)
Returns the CPU device attached to a logical core.
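A hedged usage sketch, assuming the device_assignment built in the construction sketch above; the exact device string depends on the job name and cluster setup:

    # CPU device of the host that feeds replica 0's logical core 0,
    # e.g. something like "/job:worker/task:0/device:CPU:0".
    host = device_assignment.host_device(replica=0, logical_core=0)
    with tf.device(host):
        features = tf.zeros([8, 128])  # keep input preparation on the host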
lookup_replicas
lookup_replicas(
task_id, logical_core
)
Lookup replica ids by task number and logical core.
Args

| Arg | Description |
|---|---|
| task_id | TensorFlow task number. |
| logical_core | An integer identifying a logical core. |

Returns

A sorted list of the replicas that are attached to that task and logical_core.

Raises

| Exception | Condition |
|---|---|
| ValueError | If no replica in the given task contains the logical core. |
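A hedged sketch, assuming the device_assignment built in the construction sketch above; task 0 and logical core 0 are arbitrary placeholder values:

    # All replicas whose logical core 0 is placed on TensorFlow task 0.
    replica_ids = device_assignment.lookup_replicas(task_id=0, logical_core=0)
    print(replica_ids)  # e.g. [0, 1] for a two-replica, single-host assignment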
tpu_device
tpu_device(
replica=0, logical_core=0, job=None
)
Returns the name of the TPU device assigned to a logical core.
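A hedged sketch, assuming the device_assignment built in the construction sketch above; the exact device string depends on the job name:

    # TPU device name for replica 0's logical core 0,
    # e.g. something like "/job:worker/task:0/device:TPU:0".
    print(device_assignment.tpu_device(replica=0, logical_core=0))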
tpu_ordinal
tpu_ordinal(
replica=0, logical_core=0
)
Returns the ordinal of the TPU device assigned to a logical core.
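A final hedged sketch, assuming the device_assignment built in the construction sketch above; the ordinal is the host-local index of the TPU device, commonly used for ops that take a device_ordinal argument:

    # Host-local index of the TPU device backing replica 0's logical core 0.
    ordinal = device_assignment.tpu_ordinal(replica=0, logical_core=0)
    print(ordinal)  # e.g. 0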