# tf.compat.v1.nn.ctc_loss
Computes the CTC (Connectionist Temporal Classification) Loss.
    tf.compat.v1.nn.ctc_loss(
        labels,
        inputs=None,
        sequence_length=None,
        preprocess_collapse_repeated=False,
        ctc_merge_repeated=True,
        ignore_longer_outputs_than_inputs=False,
        time_major=True,
        logits=None
    )
This op implements the CTC loss as presented in (Graves et al., 2006).
#### Input requirements:

    sequence_length(b) <= time for all b

    max(labels.indices(labels.indices[:, 1] == b, 2))
        <= sequence_length(b) for all b.
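To make these requirements concrete, here is a minimal sketch (with hypothetical shapes and label values) of a `labels` SparseTensor and `sequence_length` vector that satisfy both constraints for a batch of two examples:

    import tensorflow as tf

    max_time, batch_size = 5, 2

    # sequence_length(b) <= max_time for every batch entry b.
    sequence_length = tf.constant([5, 3], dtype=tf.int32)

    # Labels as a SparseTensor of [batch, time] indices: example 0 has
    # 3 labels (<= 5 frames) and example 1 has 1 label (<= 3 frames).
    labels = tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [0, 2], [1, 0]],
        values=tf.constant([0, 1, 1, 2], dtype=tf.int32),
        dense_shape=[batch_size, 3])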
#### Notes:

This op performs the softmax operation for you, so `inputs` should be, e.g., linear projections of outputs by an LSTM.

The `inputs` Tensor's innermost dimension size, `num_classes`, represents `num_labels + 1` classes, where `num_labels` is the number of true labels, and the largest value `(num_classes - 1)` is reserved for the blank label.

For example, for a vocabulary containing 3 labels `[a, b, c]`, `num_classes = 4` and the label indexing is `{a: 0, b: 1, c: 2, blank: 3}`.
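Putting this together, here is a hedged end-to-end sketch (random logits, hypothetical targets, assuming TF 2.x eager execution) for the 3-label vocabulary above, where `num_classes = 4` and index 3 is the blank:

    import numpy as np
    import tensorflow as tf

    max_time, batch_size, num_classes = 5, 2, 4  # num_labels = 3, blank = 3

    # Unscaled logits (the op applies the softmax internally), time-major:
    # [max_time, batch_size, num_classes].
    logits = tf.constant(
        np.random.randn(max_time, batch_size, num_classes), dtype=tf.float32)

    sequence_length = tf.constant([5, 3], dtype=tf.int32)

    # Targets "ab" for example 0 and "c" for example 1.
    labels = tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],
        values=tf.constant([0, 1, 2], dtype=tf.int32),
        dense_shape=[batch_size, 2])

    loss = tf.compat.v1.nn.ctc_loss(labels, logits, sequence_length)
    print(loss.shape)  # (2,): one negative log probability per batch element

    # A scalar training objective is typically the mean over the batch.
    mean_loss = tf.reduce_mean(loss)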
Regarding the arguments `preprocess_collapse_repeated` and `ctc_merge_repeated`:

If `preprocess_collapse_repeated` is True, then a preprocessing step runs before loss calculation, wherein repeated labels passed to the loss are merged into single labels. This is useful if the training labels come from, e.g., forced alignments and therefore have unnecessary repetitions.

If `ctc_merge_repeated` is set False, then deep within the CTC calculation, repeated non-blank labels will not be merged and are interpreted as individual labels. This is a simplified (non-standard) version of CTC.
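The preprocessing step can be illustrated in plain Python; this is only a sketch of the collapsing behavior, not the actual TF kernel:

    def collapse_repeated(seq):
        """Merge consecutive repeated labels into single labels."""
        out = []
        for x in seq:
            if not out or x != out[-1]:
                out.append(x)
        return out

    # With preprocess_collapse_repeated=True, a label sequence such as
    # [a, a, b, b, b, c] is collapsed to [a, b, c] before the loss runs.
    print(collapse_repeated([0, 0, 1, 1, 1, 2]))  # -> [0, 1, 2]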
Here is a table of the (roughly) expected first-order behavior:

- `preprocess_collapse_repeated=False`, `ctc_merge_repeated=True`

  Classical CTC behavior: outputs true repeated classes with blanks in between, and can also output repeated classes with no blanks in between that need to be collapsed by the decoder.

- `preprocess_collapse_repeated=True`, `ctc_merge_repeated=False`

  Never learns to output repeated classes, as they are collapsed in the input labels before training.

- `preprocess_collapse_repeated=False`, `ctc_merge_repeated=False`

  Outputs repeated classes with blanks in between, but generally does not require the decoder to collapse/merge repeated classes.

- `preprocess_collapse_repeated=True`, `ctc_merge_repeated=True`

  Untested. Very likely will not learn to output repeated classes.
The `ignore_longer_outputs_than_inputs` option lets you specify the behavior of the CTC loss when dealing with sequences that have longer outputs than inputs. If True, the CTC loss will simply return a zero gradient for those items; otherwise an InvalidArgument error is raised, stopping training.
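For example (a hedged sketch with made-up shapes), the item below has 3 target labels but only 2 input time steps, so no valid CTC alignment exists; setting the flag makes the op skip that item rather than error out:

    import numpy as np
    import tensorflow as tf

    # [max_time=2, batch_size=1, num_classes=4]: fewer frames than labels.
    logits = tf.constant(np.random.randn(2, 1, 4), dtype=tf.float32)
    sequence_length = tf.constant([2], dtype=tf.int32)
    labels = tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [0, 2]],
        values=tf.constant([0, 1, 2], dtype=tf.int32),
        dense_shape=[1, 3])

    # Without the flag, this call would raise an InvalidArgument error.
    loss = tf.compat.v1.nn.ctc_loss(
        labels, logits, sequence_length,
        ignore_longer_outputs_than_inputs=True)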
#### Args

- `labels`: An `int32` `SparseTensor`. `labels.indices[i, :] == [b, t]` means `labels.values[i]` stores the id for (batch b, time t). `labels.values[i]` must take on values in `[0, num_labels)`. See `core/ops/ctc_ops.cc` for more details.
- `inputs`: 3-D `float` `Tensor`. If `time_major == False`, this will be a `Tensor` shaped `[batch_size, max_time, num_classes]`. If `time_major == True` (default), this will be a `Tensor` shaped `[max_time, batch_size, num_classes]`. The logits.
- `sequence_length`: 1-D `int32` vector, size `[batch_size]`. The sequence lengths.
- `preprocess_collapse_repeated`: Boolean. Default: False. If True, repeated labels are collapsed prior to the CTC calculation.
- `ctc_merge_repeated`: Boolean. Default: True.
- `ignore_longer_outputs_than_inputs`: Boolean. Default: False. If True, sequences with longer outputs than inputs will be ignored.
- `time_major`: The shape format of the `inputs` Tensors. If True, these Tensors must be shaped `[max_time, batch_size, num_classes]`. If False, these Tensors must be shaped `[batch_size, max_time, num_classes]`. Using `time_major = True` (default) is a bit more efficient because it avoids transposes at the beginning of the ctc_loss calculation. However, most TensorFlow data is batch-major, so this function also accepts inputs in batch-major form (see the sketch after this list).
- `logits`: Alias for `inputs`.
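As a sketch of the batch-major form mentioned under `time_major` (hypothetical shapes and targets again), the same call accepts `[batch_size, max_time, num_classes]` logits when `time_major=False`:

    import numpy as np
    import tensorflow as tf

    # Batch-major logits: [batch_size=2, max_time=5, num_classes=4].
    batch_major_logits = tf.constant(
        np.random.randn(2, 5, 4), dtype=tf.float32)
    sequence_length = tf.constant([5, 3], dtype=tf.int32)
    labels = tf.sparse.SparseTensor(
        indices=[[0, 0], [0, 1], [1, 0]],
        values=tf.constant([0, 1, 2], dtype=tf.int32),
        dense_shape=[2, 2])

    # The op transposes to time-major internally when time_major=False.
    loss = tf.compat.v1.nn.ctc_loss(
        labels, batch_major_logits, sequence_length, time_major=False)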
#### Returns

A 1-D `float` `Tensor`, size `[batch]`, containing the negative log probabilities.
#### Raises

- `TypeError`: If `labels` is not a `SparseTensor`.
#### References

Connectionist Temporal Classification - Labeling Unsegmented Sequence Data with Recurrent Neural Networks: [Graves et al., 2006](https://dl.acm.org/citation.cfm?id=1143891) ([pdf](http://www.cs.toronto.edu/~graves/icml_2006.pdf))