# tf.data.experimental.StatsAggregator

A stateful resource that aggregates statistics from one or more iterators.

Compat alias for migration: `tf.compat.v1.data.experimental.StatsAggregator`

    tf.data.experimental.StatsAggregator()

To record statistics, use one of the custom transformation functions defined
in this module when defining your `tf.data.Dataset`. All statistics will be
aggregated by the `StatsAggregator` that is associated with a particular
iterator (see below). For example, to record the latency of producing each
element by iterating over a dataset:

    dataset = ...
    dataset = dataset.apply(tf.data.experimental.latency_stats("total_bytes"))
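`latency_stats` is only usable inside a TF 1.x `tf.data` pipeline, so as a conceptual sketch of what it records, here is a plain-Python analogue (not part of the TensorFlow API; all names below are illustrative) that times how long the upstream iterator takes to produce each element:

```python
import time


def element_latencies(iterable):
    """Yield (element, latency_seconds) pairs, timing how long the
    upstream iterator takes to produce each element -- a plain-Python
    analogue of what `latency_stats` records for a dataset."""
    it = iter(iterable)
    while True:
        start = time.perf_counter()
        try:
            element = next(it)
        except StopIteration:
            return
        yield element, time.perf_counter() - start


def slow_range(n, delay=0.001):
    """A deliberately slow producer, standing in for a dataset."""
    for i in range(n):
        time.sleep(delay)
        yield i


elements = list(element_latencies(slow_range(3)))
```

Because the timer wraps the call to `next()`, the measured latency includes all work the producer does per element, which mirrors how per-element latency is attributed in the real transformation.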
To associate a `StatsAggregator` with a `tf.data.Dataset` object, use
the following pattern:

    aggregator = tf.data.experimental.StatsAggregator()
    dataset = ...

    # Apply `StatsOptions` to associate `dataset` with `aggregator`.
    options = tf.data.Options()
    options.experimental_stats.aggregator = aggregator
    dataset = dataset.with_options(options)
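The key idea in the pattern above is a single stateful object that several instrumented pipelines report into. As a hedged, plain-Python sketch of that aggregation shape (again not the TensorFlow API; the class and method names are invented for illustration):

```python
from collections import defaultdict


class StatsCollector:
    """Plain-Python sketch of the aggregation pattern: one stateful
    object that multiple instrumented iterators record statistics into,
    analogous to a `StatsAggregator` shared across iterators."""

    def __init__(self):
        self.stats = defaultdict(list)

    def record(self, name, value):
        """Append one observed value under a named statistic."""
        self.stats[name].append(value)

    def instrument(self, name, iterable):
        """Wrap an iterable so every element it yields is also
        recorded under `name`, like applying a stats transformation."""
        for element in iterable:
            self.record(name, element)
            yield element


collector = StatsCollector()
a = list(collector.instrument("pipeline_a", range(3)))
b = list(collector.instrument("pipeline_b", range(2)))
```

Note that the wrapper passes elements through unchanged; instrumentation is a side effect, which is also why attaching a real `StatsAggregator` via `Options` does not alter the dataset's contents.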
To get a protocol buffer summary of the currently aggregated statistics,
use the `StatsAggregator.get_summary()` tensor. The easiest way to do this
is to add the returned tensor to the `tf.GraphKeys.SUMMARIES` collection,
so that the summaries will be included with any existing summaries.

    aggregator = tf.data.experimental.StatsAggregator()
    # ...
    stats_summary = aggregator.get_summary()
    tf.compat.v1.add_to_collection(tf.GraphKeys.SUMMARIES, stats_summary)

Note: This interface is experimental and expected to change. In particular,
we expect to add other implementations of `StatsAggregator` that provide
different ways of exporting statistics, and add more types of statistics.
## Methods

### `get_summary`

    get_summary()

Returns a string `tf.Tensor` that summarizes the aggregated statistics.

The returned tensor will contain a serialized `tf.compat.v1.summary.Summary`
protocol buffer, which can be used with the standard TensorBoard logging
facilities.

#### Returns

A scalar string `tf.Tensor` that summarizes the aggregated statistics.
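The essence of `get_summary()` is collapsing the raw per-element statistics into one serialized string blob (a `Summary` proto in the real API). As a hedged illustration of that shape only, assuming JSON in place of the protocol buffer and an invented `summarize` helper:

```python
import json
import statistics


def summarize(stats):
    """Illustrative stand-in for `get_summary()`: reduce raw recorded
    values to per-statistic aggregates and serialize the result as a
    single string, the way `get_summary()` yields one serialized
    Summary protocol buffer for all aggregated statistics."""
    return json.dumps({
        name: {"count": len(values), "mean": statistics.mean(values)}
        for name, values in stats.items()
    })


summary = summarize({
    "record_latency": [0.5, 1.5],
    "bytes_produced": [10, 30],
})
```

In the real API the consumer of that string is TensorBoard rather than `json.loads`, but the contract is the same: one opaque serialized snapshot of everything aggregated so far.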
Last updated 2020-10-01 UTC.