tf.mixed_precision.experimental.FixedLossScale
Loss scale with a fixed value.
Inherits From: LossScale
    tf.mixed_precision.experimental.FixedLossScale(
        loss_scale_value
    )
The loss scale is not updated for the lifetime of instances of this class.
A given instance of this class always returns the same number when called.
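For example, the following minimal sketch constructs a fixed loss scale and reads it back; the value 128.0 is an arbitrary illustrative choice:

    import tensorflow as tf

    # Build a fixed loss scale; 128.0 is an arbitrary illustrative value.
    loss_scale = tf.mixed_precision.experimental.FixedLossScale(128.0)

    # Calling the instance always returns the same scalar float32 tensor.
    print(loss_scale())  # tf.Tensor(128.0, shape=(), dtype=float32)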
Args:
  loss_scale_value: A Python float. Its ideal value varies from model to
    model. A loss scale that is too small may degrade model quality; one
    that is too large may cause gradients to overflow to inf or nan. There
    is no single right loss scale to use, and there is no harm in choosing
    a relatively large number as long as no nan or inf is encountered
    during training.
Raises:
  ValueError: If loss_scale_value is less than 1.
Methods
from_config
    @classmethod
    from_config(
        config
    )
Creates the LossScale from its config.
get_config
    get_config()
Returns the config of this loss scale.
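A minimal round-trip sketch, assuming a TensorFlow version that still exposes this endpoint (the exact config keys are an implementation detail):

    import tensorflow as tf

    loss_scale = tf.mixed_precision.experimental.FixedLossScale(128.0)
    config = loss_scale.get_config()  # e.g. {'loss_scale_value': 128.0}

    # Reconstruct an equivalent loss scale from its config.
    restored = tf.mixed_precision.experimental.FixedLossScale.from_config(config)
    assert float(restored()) == float(loss_scale())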
update
    update(
        grads
    )
Updates the value of the loss scale.
The loss scale may be updated based on the value of grads. The tensor
returned by calling this class is only updated when this function is
evaluated.
In eager mode, this directly updates the loss scale, so that calling
__call__ will return the newly updated loss scale. In graph mode, this
returns an op that, when evaluated, updates the loss scale.
This function also returns a should_apply_gradients bool. If False,
gradients should not be applied to the variables that step, as nonfinite
gradients were found, and the loss scale has been updated to reduce the
chance of finding nonfinite gradients in the next step. Some loss scale
classes will always return True, as they cannot adjust themselves in
response to nonfinite gradients.
When a DistributionStrategy is used, this function may only be called in a
cross-replica context.
Args:
  grads: A nested structure of unscaled gradients, each of which is the
    gradient of the loss with respect to a weight. The gradients should
    have already been divided by the loss scale before being passed to
    this function. None gradients are accepted and ignored.
Returns:
  update_op: In eager mode, None. In graph mode, an op that, when
    evaluated, updates the loss scale.
  should_apply_gradients: Either a bool or a scalar boolean tensor. If
    False, the caller should skip applying grads to the variables this
    step.
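A minimal sketch of this contract in a custom training step; the gradient values below are placeholders, and because a fixed loss scale never adjusts itself, should_apply is always True for this class:

    import tensorflow as tf

    loss_scale = tf.mixed_precision.experimental.FixedLossScale(128.0)

    # Hypothetical unscaled gradients, already divided by the loss scale.
    grads = [tf.constant([0.1, -0.2]), None, tf.constant(0.3)]

    update_op, should_apply = loss_scale.update(grads)
    if should_apply:
        pass  # optimizer.apply_gradients(...) would go here.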
__call__
    __call__()
Returns the current loss scale as a scalar float32 tensor.
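In a typical mixed precision training loop, the returned tensor multiplies the loss before gradients are computed, and the resulting gradients are divided by the same value afterwards; a minimal sketch with a placeholder loss value:

    import tensorflow as tf

    loss_scale = tf.mixed_precision.experimental.FixedLossScale(128.0)
    loss = tf.constant(0.5)  # placeholder loss value

    scaled_loss = loss * tf.cast(loss_scale(), loss.dtype)
    # Gradients of scaled_loss must later be divided by loss_scale().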