Loss scale manager with a fixed loss scale.
Inherits From: LossScaleManager
tf.contrib.mixed_precision.FixedLossScaleManager(
    loss_scale
)
The loss scale is not updated for the lifetime of the class.
| Args | |
|---|---|
| `loss_scale` | A Python float. The ideal value varies from model to model. Choosing too small a `loss_scale` can degrade model quality; too large a value can cause gradients to overflow to `inf` or `nan`. There is no single right `loss_scale`, and a relatively large value is safe as long as no `inf` or `nan` is encountered during training. |
| Raises | |
|---|---|
| `ValueError` | If `loss_scale` is less than 1. |
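To illustrate how a fixed loss scale is typically used, here is a minimal pure-Python sketch (not the TensorFlow implementation; the toy model, `LOSS_SCALE` value, and function names are illustrative): the loss is multiplied by the scale before gradients are computed, and the resulting gradients are divided by the same scale before being applied, so the true gradients are recovered while intermediate values stay in representable range for float16.

```python
# Sketch of fixed loss scaling with a toy quadratic loss.
# The scale is a power of two, so scaling up and back down
# is exact in floating point.

LOSS_SCALE = 128.0  # fixed for the lifetime of the manager

def loss_grad(params):
    # Gradient of 0.5 * p**2 with respect to each parameter p.
    return [p for p in params]

def scaled_grad(grad_fn, params):
    """Compute gradients of (loss * LOSS_SCALE), then unscale them."""
    scaled = [g * LOSS_SCALE for g in grad_fn(params)]
    return [g / LOSS_SCALE for g in scaled]

params = [3.0, -2.0]
grads = scaled_grad(loss_grad, params)
print(grads)  # → [3.0, -2.0]: scaling up then down recovers the true gradients
```

In a real mixed-precision loop the scaled loss keeps small float16 gradients from underflowing to zero; the unscaling step restores their correct magnitude before the optimizer update.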
Methods
get_loss_scale
get_loss_scale()
Returns the loss scale as a scalar float32 tensor.
update_loss_scale
update_loss_scale(
    finite_grads
)
Updates the loss scale based on whether the gradients in the current step are finite. For this fixed manager, the loss scale is left unchanged.
| Args | |
|---|---|
| `finite_grads` | A boolean scalar tensor indicating whether all gradients are finite (i.e., not `inf` or `nan`). |
| Returns | |
|---|---|
| An op that, when executed, updates the loss scale. If eager execution is enabled, returns nothing. |
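The behavior described above can be sketched with a minimal pure-Python mimic of the interface (illustrative only, not the TensorFlow source; the class name is hypothetical): a fixed manager validates `loss_scale` at construction, always returns the same scale, and its `update_loss_scale` ignores `finite_grads`.

```python
class FixedLossScaleSketch:
    """Pure-Python mimic of a fixed loss scale manager (illustrative)."""

    def __init__(self, loss_scale):
        # Mirrors the documented ValueError for loss_scale < 1.
        if loss_scale < 1:
            raise ValueError("loss_scale must be at least 1.")
        self._loss_scale = float(loss_scale)

    def get_loss_scale(self):
        # In TensorFlow this returns a scalar float32 tensor;
        # here a plain float stands in for it.
        return self._loss_scale

    def update_loss_scale(self, finite_grads):
        # Fixed manager: the scale never changes, regardless of
        # whether the current step's gradients were finite.
        del finite_grads  # unused
        return None

mgr = FixedLossScaleSketch(128.0)
mgr.update_loss_scale(finite_grads=False)
print(mgr.get_loss_scale())  # → 128.0, unchanged even after non-finite gradients
```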