# tf.compat.v1.estimator.tpu.TPUEstimatorSpec

[View source on GitHub](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L282-L397)

Ops and objects returned from a `model_fn` and passed to `TPUEstimator`. (deprecated)

    tf.compat.v1.estimator.tpu.TPUEstimatorSpec(
        mode,
        predictions=None,
        loss=None,
        train_op=None,
        eval_metrics=None,
        export_outputs=None,
        scaffold_fn=None,
        host_call=None,
        training_hooks=None,
        evaluation_hooks=None,
        prediction_hooks=None
    )

Migrate to TF2
--------------

**Caution:** This API was designed for TensorFlow v1. Continue reading for details on how to migrate from this API to a native TensorFlow v2 equivalent. See the [TensorFlow v1 to TensorFlow v2 migration guide](https://www.tensorflow.org/guide/migrate) for instructions on how to migrate the rest of your code.

TPU Estimator manages its own TensorFlow graph and session, so it is not compatible with TF2 behaviors. We recommend that you migrate to the newer [`tf.distribute.TPUStrategy`](../../../../../tf/distribute/TPUStrategy). See the [TPU guide](https://www.tensorflow.org/guide/tpu) for details.
Description
-----------

**Deprecated:** THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use tf.keras instead.

See `EstimatorSpec` for `mode`, `predictions`, `loss`, `train_op`, and `export_outputs`.
For evaluation, `eval_metrics` is a tuple of `metric_fn` and `tensors`, where
`metric_fn` runs on the CPU to generate metrics and `tensors` represents the
`Tensor`s transferred from the TPU system to the CPU host and passed to `metric_fn`.
To be precise, TPU evaluation expects a slightly different signature from
`tf.estimator.Estimator`: while `EstimatorSpec.eval_metric_ops` expects a
dict, `TPUEstimatorSpec.eval_metrics` is a tuple of `metric_fn` and `tensors`.
The `tensors` can be a list of `Tensor`s or a dict mapping names to `Tensor`s;
they usually hold the model logits, which are transferred back from the
TPU system to the CPU host. All tensors must be batch-major, i.e., the batch
size is the first dimension. Once all tensors are available at the CPU host from
all shards, they are concatenated (on the CPU) and passed to `metric_fn` as
positional arguments if `tensors` is a list, or as keyword arguments if it is
a dict. `metric_fn` takes the `tensors` and returns a dict mapping each metric's
string name to the result of calling a metric function, namely a `(metric_tensor,
update_op)` tuple. See `TPUEstimator` for an MNIST example of how to specify
`eval_metrics`.
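The concatenate-then-dispatch behavior described above can be sketched in plain Python (no TensorFlow; `dispatch_eval_metrics`, `accuracy_fn`, and the shard data are illustrative stand-ins, with nested lists playing the role of batch-major `Tensor`s):

```python
def dispatch_eval_metrics(metric_fn, per_shard_tensors):
    """per_shard_tensors: one list or dict of batch-major values per
    TPU shard (plain lists stand in for tf.Tensor here)."""
    first = per_shard_tensors[0]
    if isinstance(first, dict):
        # Concatenate each named tensor across shards, pass as kwargs.
        merged = {
            name: [row for shard in per_shard_tensors for row in shard[name]]
            for name in first
        }
        return metric_fn(**merged)
    # List case: concatenate position-wise, pass as positional args.
    merged = [
        [row for shard in per_shard_tensors for row in shard[i]]
        for i in range(len(first))
    ]
    return metric_fn(*merged)

# metric_fn returns a dict from metric name to (metric_tensor, update_op);
# plain values stand in for the ops in this sketch.
def accuracy_fn(labels, predictions):
    correct = sum(int(l == p) for l, p in zip(labels, predictions))
    return {"accuracy": (correct / len(labels), "update_op_placeholder")}

shard_a = [[1, 0], [1, 0]]   # [labels, predictions] from shard A
shard_b = [[1, 1], [0, 1]]   # [labels, predictions] from shard B
metrics = dispatch_eval_metrics(accuracy_fn, [shard_a, shard_b])
```

The list form maps to positional arguments and the dict form to keyword arguments, mirroring the contract described above.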
`scaffold_fn` is a function that runs on the CPU to generate the `Scaffold`. This
function should not capture any `Tensor`s defined in `model_fn`.
`host_call` is a tuple of a function and a list or dictionary of `tensors`
to pass to that function; the function returns a list of `Tensor`s. `host_call`
currently works for `train()` and `evaluate()`. The function is executed on the
CPU on every step, so there is communication overhead when sending tensors from
TPU to CPU. To reduce the overhead, try reducing the size of the tensors. The
`tensors` are concatenated along their major (batch) dimension, and so must
have rank >= 1. `host_call` is useful for writing summaries with
`tf.contrib.summary.create_file_writer`.
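As a rough illustration of the contract (not the real implementation), the per-step `host_call` mechanics can be sketched in plain Python; `run_host_call` and `log_loss` are hypothetical names, and lists stand in for rank >= 1 `Tensor`s:

```python
def run_host_call(host_call, per_shard_values):
    """host_call: a (fn, tensors) tuple; per_shard_values: one list or
    dict of batch-major values per TPU shard."""
    fn, _tensors = host_call
    # Every value must be at least rank 1 so it can be concatenated
    # along the batch dimension (a non-list models a rank-0 value here).
    for shard in per_shard_values:
        vals = shard.values() if isinstance(shard, dict) else shard
        if any(not isinstance(v, list) for v in vals):
            raise ValueError("host_call tensors must be >= rank 1")
    first = per_shard_values[0]
    if isinstance(first, dict):
        merged = {k: [x for s in per_shard_values for x in s[k]] for k in first}
        return fn(**merged)
    merged = [[x for s in per_shard_values for x in s[i]] for i in range(len(first))]
    return fn(*merged)

def log_loss(step, loss):
    # In real code this would write a summary via a file writer;
    # here it just reports the merged values it received.
    return {"step": step[0], "mean_loss": sum(loss) / len(loss)}

per_shard = [
    {"step": [10], "loss": [0.5, 0.7]},   # shard 0
    {"step": [10], "loss": [0.3, 0.5]},   # shard 1
]
out = run_host_call((log_loss, ["step", "loss"]), per_shard)
```

Note how a scalar such as the step counter still has to be shipped as a rank-1 value so it survives the cross-shard concatenation.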
Attributes
----------

| Attribute          | Description                              |
|--------------------|------------------------------------------|
| `mode`             | A `namedtuple` alias for field number 0  |
| `predictions`      | A `namedtuple` alias for field number 1  |
| `loss`             | A `namedtuple` alias for field number 2  |
| `train_op`         | A `namedtuple` alias for field number 3  |
| `eval_metrics`     | A `namedtuple` alias for field number 4  |
| `export_outputs`   | A `namedtuple` alias for field number 5  |
| `scaffold_fn`      | A `namedtuple` alias for field number 6  |
| `host_call`        | A `namedtuple` alias for field number 7  |
| `training_hooks`   | A `namedtuple` alias for field number 8  |
| `evaluation_hooks` | A `namedtuple` alias for field number 9  |
| `prediction_hooks` | A `namedtuple` alias for field number 10 |

Methods
-------

### `as_estimator_spec`

[View source](https://github.com/tensorflow/estimator/tree/master/tensorflow_estimator/python/estimator/tpu/tpu_estimator.py#L370-L397)

    as_estimator_spec()

Creates an equivalent `EstimatorSpec` used by CPU train/eval.

Last updated 2023-10-06 UTC.
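Conceptually, bridging the two eval signatures means calling `metric_fn` on the already-concatenated tensors to produce the dict that `EstimatorSpec.eval_metric_ops` expects. A hypothetical pure-Python sketch of that conversion (`eval_metrics_to_metric_ops` and `mean_fn` are illustrative names, not the library's internals):

```python
def eval_metrics_to_metric_ops(eval_metrics):
    """Convert a TPUEstimatorSpec-style (metric_fn, tensors) tuple into an
    EstimatorSpec-style eval_metric_ops dict (sketch only)."""
    if eval_metrics is None:
        return None
    metric_fn, tensors = eval_metrics
    if isinstance(tensors, dict):
        return metric_fn(**tensors)   # dict -> keyword arguments
    return metric_fn(*tensors)        # list -> positional arguments

def mean_fn(values):
    # Stand-in metric: returns the (metric_tensor, update_op) shape
    # with plain values playing the role of the ops.
    return {"mean": (sum(values) / len(values), None)}

ops = eval_metrics_to_metric_ops((mean_fn, [[1.0, 2.0, 3.0]]))
```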