Optimizer that implements the RMSProp algorithm.
The gist of RMSprop is to:
- Maintain a moving (discounted) average of the square of gradients
- Divide the gradient by the root of this average
This implementation of RMSprop uses plain momentum, not Nesterov momentum.
The centered version additionally maintains a moving average of the gradients, and uses that average to estimate the variance.
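In pseudocode, a minimal sketch of the per-variable update these steps describe (illustrative only, not the library implementation; all names are local to the example):

```java
// Illustrative sketch of one RMSProp step; NOT the library implementation.
// rms and mom stand in for the RMS and MOMENTUM slot variables the
// optimizer maintains for each trainable variable.
static float rmsPropStep(
    float weight, float grad, float[] rms, float[] mom,
    float learningRate, float decay, float momentum, float epsilon) {
  // Moving (discounted) average of the square of gradients.
  rms[0] = decay * rms[0] + (1f - decay) * grad * grad;
  // Divide the gradient by the root of this average; apply plain momentum.
  mom[0] = momentum * mom[0]
      + learningRate * grad / (float) Math.sqrt(rms[0] + epsilon);
  // The centered variant additionally keeps mg = decay * mg + (1 - decay) * grad
  // and divides by sqrt(rms - mg * mg + epsilon) instead.
  return weight - mom[0];
}
```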
Constants
| Type | Name | Value |
|---|---|---|
| boolean | CENTERED_DEFAULT | false |
| float | DECAY_DEFAULT | 0.9f |
| float | EPSILON_DEFAULT | |
| float | LEARNING_RATE_DEFAULT | |
| String | MG | |
| String | MOMENTUM | |
| float | MOMENTUM_DEFAULT | 0.0f |
| String | RMS | |
Public Constructors
| Constructor |
|---|
| RMSProp(Graph graph, float learningRate) |
| RMSProp(Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered) |
| RMSProp(Graph graph, String name, float learningRate) |
| RMSProp(Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered) |
Public Methods
| Return type | Method |
|---|---|
| String | getOptimizerName() Get the name of the optimizer. |
| String | toString() |
Constants
public static final boolean CENTERED_DEFAULT
public static final float DECAY_DEFAULT
public static final float EPSILON_DEFAULT
public static final float LEARNING_RATE_DEFAULT
public static final String MG
public static final String MOMENTUM
public static final float MOMENTUM_DEFAULT
public static final String RMS
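As a sketch, the exported constants can be passed explicitly; given the documented defaults, the two constructions below should be equivalent (an assumption based on the parameter docs that follow, using the standard org.tensorflow imports):

```java
import org.tensorflow.Graph;
import org.tensorflow.framework.optimizers.RMSProp;

public class RMSPropDefaults {
  public static void main(String[] args) {
    try (Graph graph = new Graph()) {
      // Short form: everything but the learning rate takes its default.
      RMSProp simple = new RMSProp(graph, RMSProp.LEARNING_RATE_DEFAULT);
      // Long form: the same defaults spelled out via the constants above.
      RMSProp explicit =
          new RMSProp(
              graph,
              RMSProp.LEARNING_RATE_DEFAULT,
              RMSProp.DECAY_DEFAULT,
              RMSProp.MOMENTUM_DEFAULT,
              RMSProp.EPSILON_DEFAULT,
              RMSProp.CENTERED_DEFAULT);
    }
  }
}
```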
Public Constructors
public RMSProp (Graph graph, float learningRate)
Creates an RMSProp Optimizer
Parameters
| Parameter | Description |
|---|---|
| graph | the TensorFlow Graph |
| learningRate | the learning rate |
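A hypothetical construction using this form (imports as in the sketch above); everything except the learning rate takes its documented default:

```java
try (Graph graph = new Graph()) {
  RMSProp optimizer = new RMSProp(graph, 0.001f);
  // ... build the model and loss in `graph`, then use the inherited
  // Optimizer API to create training ops.
}
```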
public RMSProp (Graph graph, float learningRate, float decay, float momentum, float epsilon, boolean centered)
Creates an RMSProp Optimizer
Parameters
| Parameter | Description |
|---|---|
| graph | the TensorFlow Graph |
| learningRate | the learning rate |
| decay | the discounting factor for the history/coming gradient. Defaults to 0.9. |
| momentum | the acceleration factor. Defaults to 0. |
| epsilon | a small constant for numerical stability |
| centered | if true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false. |
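A hypothetical call enabling the centered variant; the epsilon value here is illustrative, not necessarily EPSILON_DEFAULT (imports as above):

```java
try (Graph graph = new Graph()) {
  // decay = 0.9 and momentum = 0 match the documented defaults;
  // centered = true buys a variance-normalized update at extra cost.
  RMSProp centered = new RMSProp(graph, 0.001f, 0.9f, 0.0f, 1e-10f, true);
}
```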
public RMSProp (Graph graph, String name, float learningRate)
Creates an RMSProp Optimizer
Parameters
| Parameter | Description |
|---|---|
| graph | the TensorFlow Graph |
| name | the name of this Optimizer. Defaults to "RMSProp". |
| learningRate | the learning rate |
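A hypothetical use of the named form, e.g. to keep two optimizers in the same graph apart (assuming the name is used to label this optimizer's ops and state; imports as above):

```java
try (Graph graph = new Graph()) {
  RMSProp generatorOpt = new RMSProp(graph, "generatorRMSProp", 1e-4f);
  RMSProp criticOpt = new RMSProp(graph, "criticRMSProp", 4e-4f);
}
```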
public RMSProp (Graph graph, String name, float learningRate, float decay, float momentum, float epsilon, boolean centered)
Creates an RMSProp Optimizer
Parameters
| Parameter | Description |
|---|---|
| graph | the TensorFlow Graph |
| name | the name of this Optimizer. Defaults to "RMSProp". |
| learningRate | the learning rate |
| decay | the discounting factor for the history/coming gradient. Defaults to 0.9. |
| momentum | the acceleration factor. Defaults to 0. |
| epsilon | a small constant for numerical stability |
| centered | if true, gradients are normalized by the estimated variance of the gradient; if false, by the uncentered second moment. Setting this to true may help with training, but is slightly more expensive in terms of computation and memory. Defaults to false. |
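And a fully explicit, named construction (hyperparameter values here are illustrative only; imports as above):

```java
try (Graph graph = new Graph()) {
  // Arguments: graph, name, learningRate, decay, momentum, epsilon, centered.
  RMSProp opt = new RMSProp(graph, "myRMSProp", 0.001f, 0.9f, 0.9f, 1e-8f, true);
}
```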
Public Methods
public String getOptimizerName ()
Get the name of the optimizer.
Returns
- The optimizer name.
public String toString ()