tf.clip_by_norm
Clips tensor values to a maximum L2-norm.
```python
tf.clip_by_norm(
    t, clip_norm, axes=None, name=None
)
```
Given a tensor `t`, and a maximum clip value `clip_norm`, this operation normalizes `t` so that its L2-norm is less than or equal to `clip_norm`, along the dimensions given in `axes`. Specifically, in the default case where all dimensions are used for the calculation, if the L2-norm of `t` is already less than or equal to `clip_norm`, then `t` is not modified. If the L2-norm is greater than `clip_norm`, then this operation returns a tensor of the same type and shape as `t` with its values set to:

`t * clip_norm / l2norm(t)`

In this case, the L2-norm of the output tensor is exactly `clip_norm`.
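The formula above can be replicated in plain NumPy, which makes it easy to see that only tensors whose norm exceeds the bound are rescaled (the function name and values here are illustrative, not part of the TensorFlow API):

```python
import numpy as np

def clip_by_norm_np(t, clip_norm):
    """Sketch of the documented math: rescale t only if its L2-norm exceeds clip_norm."""
    l2 = np.linalg.norm(t)
    if l2 <= clip_norm:
        return t  # already within the bound; returned unchanged
    return t * clip_norm / l2

t = np.array([[1.0, 2.0, 3.0, 4.0, 5.0]], dtype=np.float32)
clipped = clip_by_norm_np(t, 2.0)
print(np.linalg.norm(clipped))  # ~2.0, the requested clip_norm
```

The scaled values match the `tf.clip_by_norm` example below: `2.0 / sqrt(55) ≈ 0.2697` times each element.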
As another example, if `t` is a matrix and `axes == [1]`, then each row of the output will have an L2-norm less than or equal to `clip_norm`. If `axes == [0]` instead, each column of the output will be clipped.
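A NumPy sketch of the `axes == [1]` case, assuming the documented per-row semantics (rows above the bound are rescaled, rows already within it are untouched):

```python
import numpy as np

def clip_rows(t, clip_norm):
    """Clip each row of t independently to at most clip_norm (assumed axes=[1] semantics)."""
    norms = np.linalg.norm(t, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / norms)  # rows with norm <= clip_norm keep scale 1.0
    return t * scale

m = np.array([[3.0, 4.0],   # row norm 5.0 -> rescaled down to 1.0
              [0.3, 0.4]])  # row norm 0.5 -> left unchanged
out = clip_rows(m, 1.0)
print(np.linalg.norm(out, axis=1))  # [1.0, 0.5]
```

Note this sketch divides by the row norms, so an all-zero row would need special handling.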
Code example:

```python
some_nums = tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32)
tf.clip_by_norm(some_nums, 2.0).numpy()
# array([[0.26967996, 0.5393599 , 0.80903983, 1.0787199 , 1.3483998 ]],
#       dtype=float32)
```
This operation is typically used to clip gradients before applying them with an optimizer. Gradient data is usually a collection of tensors of different shapes, one per model variable. Thus, this is a common usage:

```python
# Get your gradients after training
loss_value, grads = grad(model, features, labels)

# Apply some clipping
grads = [tf.clip_by_norm(g, norm) for g in grads]

# Continue on with training; apply_gradients expects (gradient, variable) pairs
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```
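The list-comprehension pattern above clips each gradient tensor independently. A NumPy stand-in makes the effect concrete (the arrays here are placeholders for a model's gradients, not real TensorFlow output):

```python
import numpy as np

# Stand-ins for gradients of two differently shaped variables.
grads = [np.ones((2, 3)), np.full((4,), 10.0)]
norm = 1.0

# Per-tensor clipping: each array is rescaled only if its own L2-norm exceeds `norm`.
clipped = [g * min(1.0, norm / np.linalg.norm(g)) for g in grads]
for g in clipped:
    print(np.linalg.norm(g))  # each at most 1.0
```

Because each tensor is clipped against its own norm, the relative scale *between* gradients is not preserved; `tf.clip_by_global_norm` is the variant that rescales all gradients by their combined norm.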
| Args | |
|---|---|
| `t` | A `Tensor` or `IndexedSlices`. This must be a floating point type. |
| `clip_norm` | A 0-D (scalar) `Tensor` > 0. A maximum clipping value, also floating point. |
| `axes` | A 1-D (vector) `Tensor` of type `int32` containing the dimensions to use for computing the L2-norm. If `None` (the default), uses all dimensions. |
| `name` | A name for the operation (optional). |
| Returns |
|---|
| A clipped `Tensor` or `IndexedSlices`. |
| Raises | |
|---|---|
| `ValueError` | If the `clip_norm` tensor is not a 0-D scalar tensor. |
| `TypeError` | If the dtype of the input is not a floating point or complex type. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates. Some content is licensed under the numpy license.
Last updated 2023-10-06 UTC.