# tf.estimator.experimental.InMemoryEvaluatorHook
Hook to run evaluation in training without a checkpoint.
Inherits From: `SessionRunHook`
#### View aliases

**Compat aliases for migration**

See the [Migration guide](https://www.tensorflow.org/guide/migrate) for more details.

`tf.compat.v1.estimator.experimental.InMemoryEvaluatorHook`
```python
tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, input_fn, steps=None, hooks=None, name=None, every_n_iter=100
)
```
#### Example:

```python
def train_input_fn():
  ...
  return train_dataset

def eval_input_fn():
  ...
  return eval_dataset

estimator = tf.estimator.DNNClassifier(...)

evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, eval_input_fn)
estimator.train(train_input_fn, hooks=[evaluator])
```
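For concreteness, the elided input functions above might look like the following. This is a minimal sketch with synthetic data and a hypothetical feature name `"x"`; a real model would use its own features and feature columns.

```python
import numpy as np
import tensorflow as tf

def train_input_fn():
  # Synthetic training data; repeat so training can run for as many steps as requested.
  features = {"x": np.random.rand(100, 4).astype(np.float32)}
  labels = np.random.randint(0, 2, size=(100,))
  dataset = tf.data.Dataset.from_tensor_slices((features, labels))
  return dataset.shuffle(100).repeat().batch(16)

def eval_input_fn():
  # Synthetic eval data; no repeat, so a single pass ends the evaluation.
  features = {"x": np.random.rand(20, 4).astype(np.float32)}
  labels = np.random.randint(0, 2, size=(20,))
  return tf.data.Dataset.from_tensor_slices((features, labels)).batch(16)
```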
Current limitations of this approach are:

- It doesn't support multi-node distributed mode.
- It doesn't support saveable objects other than variables (such as boosted tree support).
- It doesn't support custom saver logic (such as `ExponentialMovingAverage` support).
#### Args

- `estimator`: A `tf.estimator.Estimator` instance to call evaluate.
- `input_fn`: Equivalent to the `input_fn` arg to `estimator.evaluate`. A function that constructs the input data for evaluation. See [Creating input functions](https://tensorflow.org/guide/premade_estimators#create_input_functions) for more information. The function should construct and return one of the following:
  - A `tf.data.Dataset` object: Outputs of the `Dataset` object must be a tuple `(features, labels)` with the same constraints as below.
  - A tuple `(features, labels)`: `features` is a `Tensor` or a dictionary of string feature name to `Tensor`, and `labels` is a `Tensor` or a dictionary of string label name to `Tensor`. Both `features` and `labels` are consumed by `model_fn` and should satisfy its expectations of inputs.
- `steps`: Equivalent to the `steps` arg to `estimator.evaluate`. Number of steps for which to evaluate the model. If `None`, evaluates until `input_fn` raises an end-of-input exception.
- `hooks`: Equivalent to the `hooks` arg to `estimator.evaluate`. List of `SessionRunHook` subclass instances, used for callbacks inside the evaluation call.
- `name`: Equivalent to the `name` arg to `estimator.evaluate`. Name of the evaluation, useful when running multiple evaluations on different data sets, such as training data vs. test data. Metrics for different evaluations are saved in separate folders and appear separately in TensorBoard (see the sketch after this list).
- `every_n_iter`: `int`, runs the evaluator once every N training iterations.
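For illustration, here is a hedged sketch (not from the original page) that constructs two evaluator hooks using `name`, `steps`, and `every_n_iter`; the metrics of each named evaluation land in their own folder and show up separately in TensorBoard. `estimator`, `train_input_fn`, and `eval_input_fn` are the hypothetical objects from the example above.

```python
# Sketch only: evaluate periodically on both training and test data.
# `steps` caps each evaluation pass, which matters when the input repeats.
train_evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, train_input_fn, steps=100, name="train", every_n_iter=500)
test_evaluator = tf.estimator.experimental.InMemoryEvaluatorHook(
    estimator, eval_input_fn, steps=100, name="test", every_n_iter=500)

estimator.train(train_input_fn, max_steps=10000,
                hooks=[train_evaluator, test_evaluator])
```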
#### Raises

- `ValueError`: If `every_n_iter` is non-positive or this is not single-machine training.
## Methods
### `after_create_session`

```python
after_create_session(
    session, coord
)
```

Does the first run, which shows the eval metrics before training.
### `after_run`

```python
after_run(
    run_context, run_values
)
```

Runs the evaluator.
### `before_run`

```python
before_run(
    run_context
)
```

Called before each call to `run()`.

You can return a `SessionRunArgs` object from this call, indicating ops or tensors to add to the upcoming `run()` call. These ops/tensors will be run together with the ops/tensors originally passed to the original `run()` call. The run args you return can also contain feeds to be added to the `run()` call.

The `run_context` argument is a `SessionRunContext` that provides information about the upcoming `run()` call: the originally requested ops/tensors and the TensorFlow session.

At this point the graph is finalized and you cannot add ops.
#### Args

- `run_context`: A `SessionRunContext` object.
#### Returns

`None` or a `SessionRunArgs` object.
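As an illustration of the hook protocol described above (a sketch, not part of `InMemoryEvaluatorHook` itself), a custom `SessionRunHook` can use `before_run()` to request extra fetches and read them back in `after_run()`. The tensor name `"loss:0"` below is hypothetical.

```python
import tensorflow as tf

class LossLoggerHook(tf.estimator.SessionRunHook):

  def before_run(self, run_context):
    # Look up an existing tensor (the graph is finalized, so no new ops)
    # and ask the upcoming run() call to fetch it as well.
    loss = run_context.session.graph.get_tensor_by_name("loss:0")
    return tf.estimator.SessionRunArgs(fetches=loss)

  def after_run(self, run_context, run_values):
    # run_values.results contains whatever before_run() requested.
    print("loss:", run_values.results)
```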
### `begin`

```python
begin()
```

Builds the eval graph and restoring op.
### `end`

```python
end(
    session
)
```

Runs the evaluator for the final model.