A LearningRateSchedule that uses a piecewise constant decay schedule.
Inherits From: PiecewiseConstantDecay
```python
tfm.optimization.PiecewiseConstantDecayWithOffset(
    offset=0, **kwargs
)
```
The function returns a 1-arg callable to compute the piecewise constant when passed the current optimizer step. This can be useful for changing the learning rate value across different invocations of optimizer functions.
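As a minimal sketch of the `offset` parameter (assuming, per the Model Garden source, that the offset is subtracted from the step before the boundaries are applied, and that the package is importable as `tensorflow_models`), construction might look like:

```python
import tensorflow_models as tfm  # assumed import path for the `tfm` namespace

# Sketch: `offset` is assumed to shift the step at which the boundaries
# apply, i.e. the schedule is evaluated at `step - offset`.
learning_rate_fn = tfm.optimization.PiecewiseConstantDecayWithOffset(
    offset=5000,
    boundaries=[100000, 110000],
    values=[1.0, 0.5, 0.1])
```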
Example: use a learning rate that's 1.0 for the first 100001 steps, 0.5 for the next 10000 steps, and 0.1 for any additional steps.
```python
import tensorflow as tf

step = tf.Variable(0, trainable=False)
boundaries = [100000, 110000]
values = [1.0, 0.5, 0.1]
learning_rate_fn = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries, values)

# Later, whenever we perform an optimization step, we pass in the step.
learning_rate = learning_rate_fn(step)
```
You can pass this schedule directly into a `tf.keras.optimizers.Optimizer` as the learning rate. The learning rate schedule is also serializable and deserializable using `tf.keras.optimizers.schedules.serialize` and `tf.keras.optimizers.schedules.deserialize`.
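For example, a schedule can be attached to an optimizer and round-tripped through the serialize/deserialize helpers:

```python
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    [100000, 110000], [1.0, 0.5, 0.1])

# Use the schedule as the optimizer's learning rate.
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)

# Serialize to a plain dict and rebuild an equivalent schedule from it.
config = tf.keras.optimizers.schedules.serialize(schedule)
restored = tf.keras.optimizers.schedules.deserialize(config)
```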
Returns

A 1-arg callable learning rate schedule that takes the current optimizer step and outputs the decayed learning rate, a scalar `Tensor` of the same type as the boundary tensors.

The output of the 1-arg function that takes the step is `values[0]` when `step <= boundaries[0]`, `values[1]` when `step > boundaries[0]` and `step <= boundaries[1]`, ..., and `values[-1]` when `step > boundaries[-1]`.
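A short check of these semantics with the example values above:

```python
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100000, 110000], values=[1.0, 0.5, 0.1])

print(float(schedule(100000)))  # 1.0: step <= boundaries[0]
print(float(schedule(100001)))  # 0.5: boundaries[0] < step <= boundaries[1]
print(float(schedule(120000)))  # 0.1: step > boundaries[-1]
```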
Methods
from_config

```python
@classmethod
from_config(
    config
)
```

Instantiates a `LearningRateSchedule` from its config.
Args

- `config`: Output of `get_config()`.

Returns

A `LearningRateSchedule` instance.
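For instance, `from_config` can rebuild a schedule from the dict produced by `get_config`:

```python
import tensorflow as tf

schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[100000, 110000], values=[1.0, 0.5, 0.1])

# Round-trip: get_config() returns a dict, from_config() rebuilds the schedule.
config = schedule.get_config()
restored = tf.keras.optimizers.schedules.PiecewiseConstantDecay.from_config(config)
```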
get_config

```python
get_config()
```
__call__

```python
__call__(
    step
)
```
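A brief sketch of calling the schedule with the current step (again assuming, as above, that the offset is subtracted before the boundaries are applied, and that `tensorflow_models` is the import path for `tfm`):

```python
import tensorflow as tf
import tensorflow_models as tfm  # assumed import path for the `tfm` namespace

schedule = tfm.optimization.PiecewiseConstantDecayWithOffset(
    offset=5000, boundaries=[100000, 110000], values=[1.0, 0.5, 0.1])

step = tf.Variable(5000, trainable=False)
learning_rate = schedule(step)  # invokes __call__ with the current step
```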