tf.compat.v1.train.natural_exp_decay
Applies natural exponential decay to the initial learning rate.
tf.compat.v1.train.natural_exp_decay(
    learning_rate, global_step, decay_steps, decay_rate, staircase=False, name=None
)
When training a model, it is often recommended to lower the learning rate as training progresses. This function applies a natural exponential decay to a provided initial learning rate. It requires a global_step value to compute the decayed learning rate. You can simply pass a TensorFlow variable that you increment at each training step.
The function returns the decayed learning rate. It is computed as:

decayed_learning_rate = learning_rate * exp(-decay_rate * global_step / decay_steps)

or, if staircase is True, as:

decayed_learning_rate = learning_rate * exp(-decay_rate * floor(global_step / decay_steps))
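For concreteness, here is a minimal plain-Python sketch of the two formulas above. The natural_exp helper is purely illustrative and not part of TensorFlow:

import math

def natural_exp(lr, step, decay_steps, decay_rate, staircase=False):
    # Continuous decay uses step / decay_steps directly; staircase decay
    # floors the ratio so the rate drops in discrete jumps.
    exponent = step / decay_steps
    if staircase:
        exponent = math.floor(exponent)
    return lr * math.exp(-decay_rate * exponent)

natural_exp(0.1, 10, decay_steps=5, decay_rate=0.5)                 # 0.1 * exp(-1.0), ~0.0368
natural_exp(0.1, 7, decay_steps=5, decay_rate=0.5, staircase=True)  # 0.1 * exp(-0.5), ~0.0607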
Example: decay the initial learning rate exponentially with a rate of 0.5:
...
global_step = tf.Variable(0, trainable=False)
learning_rate = 0.1
decay_steps = 5
k = 0.5
learning_rate = tf.compat.v1.train.natural_exp_decay(
    learning_rate, global_step, decay_steps, k)

# Passing global_step to minimize() will increment it at each step.
learning_step = (
    tf.compat.v1.train.GradientDescentOptimizer(learning_rate)
    .minimize(...my loss..., global_step=global_step)
)
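To see the schedule evolve, here is a hedged sketch (it assumes TF 2.x with eager execution disabled, so the function returns a Tensor, and reuses the values from the example above) that evaluates the decayed rate while incrementing global_step:

import tensorflow as tf
tf.compat.v1.disable_eager_execution()

global_step = tf.compat.v1.Variable(0, trainable=False)
decayed_lr = tf.compat.v1.train.natural_exp_decay(
    0.1, global_step, decay_steps=5, decay_rate=0.5)
increment = tf.compat.v1.assign_add(global_step, 1)

with tf.compat.v1.Session() as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    for _ in range(3):
        print(sess.run(decayed_lr))  # 0.1, then ~0.0905, then ~0.0819
        sess.run(increment)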
Args

learning_rate: A scalar float32 or float64 Tensor or a Python number. The initial learning rate.
global_step: A Python number. Global step to use for the decay computation. Must not be negative.
decay_steps: How often to apply decay.
decay_rate: A Python number. The decay rate.
staircase: Whether to apply decay in a discrete staircase fashion, as opposed to continuously.
name: String. Optional name of the operation. Defaults to 'ExponentialTimeDecay'.
Returns

A scalar Tensor of the same type as learning_rate. The decayed learning rate.
Raises

ValueError: if global_step is not supplied.
Eager Compatibility
When eager execution is enabled, this function returns a function which in
turn returns the decayed learning rate Tensor. This can be useful for changing
the learning rate value across different invocations of optimizer functions.
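As a sketch (assuming TF 2.x with its default eager execution), the returned callable can be invoked repeatedly to recompute the rate from the current global_step:

import tensorflow as tf  # eager execution is the TF 2.x default

global_step = tf.Variable(0, trainable=False)
decayed_lr_fn = tf.compat.v1.train.natural_exp_decay(
    0.1, global_step, decay_steps=5, decay_rate=0.5)

print(decayed_lr_fn().numpy())  # 0.1 at step 0
global_step.assign_add(5)
print(decayed_lr_fn().numpy())  # ~0.0607, i.e. 0.1 * exp(-0.5 * 5 / 5)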