A learning rate schedule that follows `lr * step^power`.
```python
tfm.optimization.DirectPowerDecay(
    initial_learning_rate: float,
    power: float = 1.0,
    name: str = 'DirectPowerDecay'
)
```
| Args | |
|---|---|
| `initial_learning_rate` | The initial learning rate. |
| `power` | The order of the polynomial. |
| `name` | Optional; the name of the learning rate schedule. |
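
A minimal usage sketch (the import alias and the optimizer choice are assumptions, not part of this page): the schedule is built with a negative power so the rate decays, then passed to a Keras optimizer as its learning rate.

```python
import tensorflow as tf
import tensorflow_models as tfm  # assumed import alias

# lr(step) = initial_learning_rate * step ** power
schedule = tfm.optimization.DirectPowerDecay(
    initial_learning_rate=0.1,
    power=-0.5,  # a negative power makes the rate decay as training progresses
)

# The schedule can be passed wherever a learning rate is expected,
# e.g. a Keras optimizer.
optimizer = tf.keras.optimizers.SGD(learning_rate=schedule)
```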
Methods
from_config
```python
@classmethod
from_config(
    config
)
```

Instantiates a `LearningRateSchedule` from its config.
| Args | |
|---|---|
| `config` | Output of `get_config()`. |
| Returns | |
|---|---|
| A `LearningRateSchedule` instance. | |
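
A round-trip sketch, assuming the config keys mirror the constructor arguments (as in standard Keras `LearningRateSchedule` serialization):

```python
schedule = tfm.optimization.DirectPowerDecay(initial_learning_rate=0.1, power=-0.5)

# Serialize to a plain dict, then rebuild an equivalent schedule from it.
config = schedule.get_config()
restored = tfm.optimization.DirectPowerDecay.from_config(config)
```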
get_config
```python
get_config()
```

Get the configuration of the learning rate schedule.
__call__
```python
__call__(
    step
)
```
Call self as a function.
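
The schedule can also be evaluated directly at a given step; a brief sketch, assuming a float step tensor, where the values simply follow `lr * step^power`:

```python
schedule = tfm.optimization.DirectPowerDecay(initial_learning_rate=0.1, power=-0.5)

# 0.1 * 100**-0.5 = 0.01, 0.1 * 10000**-0.5 = 0.001
print(float(schedule(tf.constant(100.0))))    # ~0.01
print(float(schedule(tf.constant(10000.0))))  # ~0.001
```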