Leaky version of a Rectified Linear Unit activation function.
tf.keras.ops.leaky_relu(x, negative_slope=0.2)
It allows a small gradient when the unit is not active. It is defined as:

f(x) = negative_slope * x for x < 0
f(x) = x for x >= 0
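For reference, the piecewise definition above can be checked against a plain NumPy computation. The following is a minimal sketch, assuming Keras 3's keras.ops namespace and the default negative_slope of 0.2; keras.ops.convert_to_numpy is used only to compare the backend tensor with the NumPy result.

import numpy as np
import keras

x = np.array([-3.0, -0.5, 0.0, 2.0])

# Piecewise definition: negative_slope * x where x < 0, x otherwise.
manual = np.where(x < 0, 0.2 * x, x)

# keras.ops.leaky_relu should match the manual computation elementwise.
result = keras.ops.leaky_relu(x, negative_slope=0.2)
print(np.allclose(manual, keras.ops.convert_to_numpy(result)))  # True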
Args | |
---|---|
x | Input tensor. |
negative_slope | Slope of the activation function at x < 0. Defaults to 0.2. |
Returns | |
---|---|
A tensor with the same shape as x. | |
Example:
import numpy as np
import keras

x = np.array([-1., 0., 1.])
x_leaky_relu = keras.ops.leaky_relu(x)
print(x_leaky_relu)
# array([-0.2, 0. , 1. ], shape=(3,), dtype=float64)
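The negative_slope argument controls how strongly negative inputs are scaled. A quick sketch with a larger slope, assuming the same setup as the example above (the exact printed representation depends on the active backend):

import numpy as np
import keras

x = np.array([-1., 0., 1.])
# With negative_slope=0.5, negative inputs are multiplied by 0.5 instead of 0.2,
# so -1. maps to -0.5 while 0. and 1. pass through unchanged.
print(keras.ops.leaky_relu(x, negative_slope=0.5))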