Exponential linear unit.
The exponential linear unit (ELU) with alpha > 0 is: x if x > 0, and alpha * (exp(x) - 1) if x < 0.
The ELU hyperparameter alpha controls the value to which an ELU saturates for negative net inputs. ELUs diminish the vanishing gradient effect.

ELUs have negative values, which pushes the mean of the activations closer to zero. Mean activations that are closer to zero enable faster learning, because they bring the gradient closer to the natural gradient. ELUs saturate to a negative value as the argument gets smaller; saturation means a small derivative, which decreases the variation and the information propagated to the next layer.
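The piecewise definition above is easy to verify with plain scalar math. The following is a minimal illustrative sketch in plain Java, not part of the TensorFlow Java API; the class name EluMath is hypothetical:

```java
/** Illustrative scalar sketch of the ELU formula; EluMath is a hypothetical helper. */
public final class EluMath {

  /** elu(x) = x for x > 0, and alpha * (exp(x) - 1) otherwise. */
  static double elu(double x, double alpha) {
    return x > 0 ? x : alpha * (Math.exp(x) - 1);
  }

  public static void main(String[] args) {
    double alpha = 1.0;
    for (double x : new double[] {-6.0, -1.0, 0.0, 1.0, 6.0}) {
      System.out.printf("elu(%5.1f) = %8.4f%n", x, elu(x, alpha));
    }
    // As x becomes very negative, elu(x) approaches -alpha (here -1.0):
    // this is the negative saturation value described above.
  }
}
```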
Example Usage:

```java
Operand<TFloat32> input = ...;
ELU<TFloat32> elu = new ELU<>(tf, 2.0f);
Operand<TFloat32> result = elu.call(input);
```
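In this example, alpha = 2.0f means that large negative inputs saturate toward -2.0, since alpha * (exp(x) - 1) approaches -alpha as x decreases.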
Public Constructors

ELU(Ops tf)
Creates a new ELU with alpha=ALPHA_DEFAULT.

ELU(Ops tf, double alpha)
Creates a new ELU.
Public Methods

Operand<T> call(Operand<T> input)
Computes the ELU activation for the input operand.
Public Constructors
public ELU (Ops tf)

Creates a new ELU with alpha=ALPHA_DEFAULT.

Parameters

| Name | Description |
|---|---|
| tf | the TensorFlow Ops |
public ELU (Ops tf, double alpha)

Creates a new ELU.

Parameters

| Name | Description |
|---|---|
| tf | the TensorFlow Ops |
| alpha | a scalar, the slope of the negative section; it controls the value to which an ELU saturates for negative net inputs |
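The two constructors differ only in whether alpha is specified explicitly. As a usage sketch, this assumes tf is an existing Ops instance and input is a previously defined Operand<TFloat32>, as in the example above:

```java
// Sketch only; assumes `tf` (Ops) and `input` (Operand<TFloat32>) are already defined.

// Default alpha (ALPHA_DEFAULT):
ELU<TFloat32> eluDefault = new ELU<>(tf);
Operand<TFloat32> outDefault = eluDefault.call(input);

// Explicit alpha: large negative inputs now saturate toward -0.5.
ELU<TFloat32> eluHalf = new ELU<>(tf, 0.5);
Operand<TFloat32> outHalf = eluHalf.call(input);
```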