tf.keras.layers.MultiHeadAttention
MultiHeadAttention layer.
Inherits From: `Layer`, `Module`
Compat aliases for migration (see the Migration guide for more details):

`tf.compat.v1.keras.layers.MultiHeadAttention`
tf.keras.layers.MultiHeadAttention(
    num_heads,
    key_dim,
    value_dim=None,
    dropout=0.0,
    use_bias=True,
    output_shape=None,
    attention_axes=None,
    kernel_initializer='glorot_uniform',
    bias_initializer='zeros',
    kernel_regularizer=None,
    bias_regularizer=None,
    activity_regularizer=None,
    kernel_constraint=None,
    bias_constraint=None,
    **kwargs
)
This is an implementation of multi-headed attention as described in the
paper "Attention Is All You Need" (Vaswani et al., 2017). If `query`, `key`,
and `value` are the same, then this is self-attention. Each timestep in
`query` attends to the corresponding sequence in `key` and returns a
fixed-width vector.
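For example, a minimal self-attention sketch (with hypothetical input sizes), passing the same tensor as both query and value:

layer = MultiHeadAttention(num_heads=2, key_dim=2)
x = tf.keras.Input(shape=[8, 16])
self_attended = layer(x, x)  # query == value; key defaults to value
print(self_attended.shape)
(None, 8, 16)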
This layer first projects `query`, `key` and `value`. These are
(effectively) a list of tensors of length `num_attention_heads`, where the
corresponding shapes are `(batch_size, <query dimensions>, key_dim)`,
`(batch_size, <key/value dimensions>, key_dim)`,
`(batch_size, <key/value dimensions>, value_dim)`.
Then, the query and key tensors are dot-producted and scaled. These are
softmaxed to obtain attention probabilities. The value tensors are then
interpolated by these probabilities, then concatenated back to a single
tensor.
Finally, the result tensor, whose last dimension is `value_dim`, can take a
linear projection and be returned.
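As an illustration only (not the layer's actual implementation, which computes all heads at once), the scaled dot-product step for a single head can be sketched with plain TensorFlow ops:

import tensorflow as tf

# Hypothetical per-head projections of query, key and value.
q = tf.random.normal([2, 8, 16])  # (batch_size, T, key_dim)
k = tf.random.normal([2, 4, 16])  # (batch_size, S, key_dim)
v = tf.random.normal([2, 4, 16])  # (batch_size, S, value_dim)

scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(16.0)  # (batch_size, T, S)
probs = tf.nn.softmax(scores, axis=-1)  # attention probabilities
head_output = tf.matmul(probs, v)  # (batch_size, T, value_dim)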
When using `MultiHeadAttention` inside a custom layer, the custom layer must
implement its own `build()` method and call `MultiHeadAttention`'s
`_build_from_signature()` there. This enables weights to be restored
correctly when the model is loaded.
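A minimal sketch of such a wrapper layer; the class name and the pair-of-shapes input convention are hypothetical, and passing shapes (rather than tensors) to `_build_from_signature()` is an assumption that holds for recent Keras versions:

class TwoStreamAttention(tf.keras.layers.Layer):
    def __init__(self, num_heads=2, key_dim=2, **kwargs):
        super().__init__(**kwargs)
        self.attention = tf.keras.layers.MultiHeadAttention(
            num_heads=num_heads, key_dim=key_dim)

    def build(self, input_shape):
        # input_shape is assumed to be a pair: (query_shape, value_shape).
        query_shape, value_shape = input_shape
        self.attention._build_from_signature(query=query_shape, value=value_shape)
        super().build(input_shape)

    def call(self, inputs):
        query, value = inputs
        return self.attention(query, value)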
Examples:
Performs 1D cross-attention over two sequence inputs with an attention mask.
Returns the additional attention weights over heads.
layer = MultiHeadAttention(num_heads=2, key_dim=2)
target = tf.keras.Input(shape=[8, 16])
source = tf.keras.Input(shape=[4, 16])
output_tensor, weights = layer(target, source,
return_attention_scores=True)
print(output_tensor.shape)
(None, 8, 16)
print(weights.shape)
(None, 2, 8, 4)
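The mask itself is supplied at call time on concrete data. A sketch with hypothetical data that blocks attention to the last source position for every target position:

target_data = tf.random.normal([1, 8, 16])
source_data = tf.random.normal([1, 4, 16])
mask = tf.constant([[[True, True, True, False]] * 8])  # shape (1, 8, 4)
output, scores = layer(target_data, source_data,
                       attention_mask=mask,
                       return_attention_scores=True)
print(output.shape)
(1, 8, 16)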
Performs 2D self-attention over a 5D input tensor on axes 2 and 3.
layer = MultiHeadAttention(
num_heads=2, key_dim=2, attention_axes=(2, 3))
input_tensor = tf.keras.Input(shape=[5, 3, 4, 16])
output_tensor = layer(input_tensor, input_tensor)
print(output_tensor.shape)
(None, 5, 3, 4, 16)
| Args | |
|---|---|
| `num_heads` | Number of attention heads. |
| `key_dim` | Size of each attention head for query and key. |
| `value_dim` | Size of each attention head for value. |
| `dropout` | Dropout probability. |
| `use_bias` | Boolean, whether the dense layers use bias vectors/matrices. |
| `output_shape` | The expected shape of an output tensor, besides the batch and sequence dims. If not specified, projects back to the key feature dim. |
| `attention_axes` | Axes over which the attention is applied. `None` means attention over all axes, but batch, heads, and features. |
| `kernel_initializer` | Initializer for dense layer kernels. |
| `bias_initializer` | Initializer for dense layer biases. |
| `kernel_regularizer` | Regularizer for dense layer kernels. |
| `bias_regularizer` | Regularizer for dense layer biases. |
| `activity_regularizer` | Regularizer for dense layer activity. |
| `kernel_constraint` | Constraint for dense layer kernels. |
| `bias_constraint` | Constraint for dense layer biases. |
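A sketch of the `output_shape` argument (sizes are hypothetical): the concatenated heads are projected to 32 features instead of back to the query feature dim.

layer = MultiHeadAttention(num_heads=2, key_dim=2, output_shape=32)
query = tf.keras.Input(shape=[8, 16])
value = tf.keras.Input(shape=[4, 16])
print(layer(query, value).shape)
(None, 8, 32)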
| Call arguments | |
|---|---|
| `query` | Query `Tensor` of shape `(B, T, dim)`. |
| `value` | Value `Tensor` of shape `(B, S, dim)`. |
| `key` | Optional key `Tensor` of shape `(B, S, dim)`. If not given, will use `value` for both `key` and `value`, which is the most common case. |
| `attention_mask` | A boolean mask of shape `(B, T, S)` that prevents attention to certain positions. The mask specifies which query elements can attend to which key elements; 1 indicates attention and 0 indicates no attention. Broadcasting can happen for the missing batch dimensions and the head dimension. |
| `return_attention_scores` | A boolean to indicate whether the output should be `(attention_output, attention_scores)` if `True`, or `attention_output` if `False`. Defaults to `False`. |
| `training` | Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Defaults to either using the training mode of the parent layer/model, or `False` (inference) if there is no parent layer. |
| `use_causal_mask` | A boolean to indicate whether to apply a causal mask to prevent tokens from attending to future tokens (e.g., used in a decoder Transformer). |
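A sketch of decoder-style self-attention with a causal mask (data and sizes are hypothetical):

layer = MultiHeadAttention(num_heads=2, key_dim=2)
x = tf.random.normal([1, 8, 16])
output, scores = layer(x, x, use_causal_mask=True,
                       return_attention_scores=True)
print(output.shape)
(1, 8, 16)

The scores above the diagonal are (effectively) zero: each position attends only to itself and earlier positions.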
| Returns | |
|---|---|
| `attention_output` | The result of the computation, of shape `(B, T, E)`, where `T` is for target sequence shapes and `E` is the query input last dimension if `output_shape` is `None`. Otherwise, the multi-head outputs are projected to the shape specified by `output_shape`. |
| `attention_scores` | [Optional] multi-head attention coefficients over attention axes. |
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2023-10-06 UTC."],[],[],null,["# tf.keras.layers.MultiHeadAttention\n\n\u003cbr /\u003e\n\n|------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/keras-team/keras/tree/v2.11.0/keras/layers/attention/multi_head_attention.py#L130-L726) |\n\nMultiHeadAttention layer.\n\nInherits From: [`Layer`](../../../tf/keras/layers/Layer), [`Module`](../../../tf/Module)\n\n#### View aliases\n\n\n**Compat aliases for migration**\n\nSee\n[Migration guide](https://www.tensorflow.org/guide/migrate) for\nmore details.\n\n\\`tf.compat.v1.keras.layers.MultiHeadAttention\\`\n\n\u003cbr /\u003e\n\n tf.keras.layers.MultiHeadAttention(\n num_heads,\n key_dim,\n value_dim=None,\n dropout=0.0,\n use_bias=True,\n output_shape=None,\n attention_axes=None,\n kernel_initializer='glorot_uniform',\n bias_initializer='zeros',\n kernel_regularizer=None,\n bias_regularizer=None,\n activity_regularizer=None,\n kernel_constraint=None,\n bias_constraint=None,\n **kwargs\n )\n\nThis is an implementation of multi-headed attention as described in the\npaper \"Attention is all you Need\" (Vaswani et al., 2017).\nIf `query`, `key,` `value` are the same, then\nthis is self-attention. Each timestep in `query` attends to the\ncorresponding sequence in `key`, and returns a fixed-width vector.\n\nThis layer first projects `query`, `key` and `value`. These are\n(effectively) a list of tensors of length `num_attention_heads`, where the\ncorresponding shapes are `(batch_size, \u003cquery dimensions\u003e, key_dim)`,\n`(batch_size, \u003ckey/value dimensions\u003e, key_dim)`,\n`(batch_size, \u003ckey/value dimensions\u003e, value_dim)`.\n\nThen, the query and key tensors are dot-producted and scaled. These are\nsoftmaxed to obtain attention probabilities. The value tensors are then\ninterpolated by these probabilities, then concatenated back to a single\ntensor.\n\nFinally, the result tensor with the last dimension as value_dim can take an\nlinear projection and return.\n\nWhen using `MultiHeadAttention` inside a custom layer, the custom layer must\nimplement its own `build()` method and call `MultiHeadAttention`'s\n`_build_from_signature()` there.\nThis enables weights to be restored correctly when the model is loaded.\n\n#### Examples:\n\nPerforms 1D cross-attention over two sequence inputs with an attention mask.\nReturns the additional attention weights over heads. \n\n layer = MultiHeadAttention(num_heads=2, key_dim=2)\n target = tf.keras.Input(shape=[8, 16])\n source = tf.keras.Input(shape=[4, 16])\n output_tensor, weights = layer(target, source,\n return_attention_scores=True)\n print(output_tensor.shape)\n (None, 8, 16)\n print(weights.shape)\n (None, 2, 8, 4)\n\nPerforms 2D self-attention over a 5D input tensor on axes 2 and 3. 
\n\n layer = MultiHeadAttention(\n num_heads=2, key_dim=2, attention_axes=(2, 3))\n input_tensor = tf.keras.Input(shape=[5, 3, 4, 16])\n output_tensor = layer(input_tensor, input_tensor)\n print(output_tensor.shape)\n (None, 5, 3, 4, 16)\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|------------------------|--------------------------------------------------------------------------------------------------------------------------------------|\n| `num_heads` | Number of attention heads. |\n| `key_dim` | Size of each attention head for query and key. |\n| `value_dim` | Size of each attention head for value. |\n| `dropout` | Dropout probability. |\n| `use_bias` | Boolean, whether the dense layers use bias vectors/matrices. |\n| `output_shape` | The expected shape of an output tensor, besides the batch and sequence dims. If not specified, projects back to the key feature dim. |\n| `attention_axes` | axes over which the attention is applied. `None` means attention over all axes, but batch, heads, and features. |\n| `kernel_initializer` | Initializer for dense layer kernels. |\n| `bias_initializer` | Initializer for dense layer biases. |\n| `kernel_regularizer` | Regularizer for dense layer kernels. |\n| `bias_regularizer` | Regularizer for dense layer biases. |\n| `activity_regularizer` | Regularizer for dense layer activity. |\n| `kernel_constraint` | Constraint for dense layer kernels. |\n| `bias_constraint` | Constraint for dense layer kernels. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Call arguments -------------- ||\n|---------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `query` | Query `Tensor` of shape `(B, T, dim)`. |\n| `value` | Value `Tensor` of shape `(B, S, dim)`. |\n| `key` | Optional key `Tensor` of shape `(B, S, dim)`. If not given, will use `value` for both `key` and `value`, which is the most common case. |\n| `attention_mask` | a boolean mask of shape `(B, T, S)`, that prevents attention to certain positions. The boolean mask specifies which query elements can attend to which key elements, 1 indicates attention and 0 indicates no attention. Broadcasting can happen for the missing batch dimensions and the head dimension. |\n| `return_attention_scores` | A boolean to indicate whether the output should be `(attention_output, attention_scores)` if `True`, or `attention_output` if `False`. Defaults to `False`. |\n| `training` | Python boolean indicating whether the layer should behave in training mode (adding dropout) or in inference mode (no dropout). Defaults to either using the training mode of the parent layer/model, or False (inference) if there is no parent layer. |\n| `use_causal_mask` | A boolean to indicate whether to apply a causal mask to prevent tokens from attending to future tokens (e.g., used in a decoder Transformer). 
|\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `attention_output` | The result of the computation, of shape `(B, T, E)`, where `T` is for target sequence shapes and `E` is the query input last dimension if `output_shape` is `None`. Otherwise, the multi-head outputs are projected to the shape specified by `output_shape`. |\n| `attention_scores` | \\[Optional\\] multi-head attention coefficients over attention axes. |\n\n\u003cbr /\u003e"]]