tf.nn.embedding_lookup
Looks up ids in a list of embedding tensors.

tf.nn.embedding_lookup(
    params, ids, max_norm=None, name=None
)
This function is used to perform parallel lookups on the list of tensors in params. It is a generalization of tf.gather, where params is interpreted as a partitioning of a large embedding tensor. params may be a PartitionedVariable as returned by using tf.compat.v1.get_variable() with a partitioner.
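To make the "generalization of tf.gather" concrete, here is a minimal pure-Python sketch of a sharded lookup. The function name sharded_lookup and the use of nested lists in place of tensors are illustrative only, not part of the TensorFlow API; real shards produced by a partitioner would be variables on (possibly) different devices.

```python
def sharded_lookup(params, ids):
    # 'params' is a list of shards produced by a contiguous ("div")
    # split of one big embedding table along dimension 0.
    shard_sizes = [len(p) for p in params]

    # offsets[k] is the first global id stored in shard k.
    offsets = [0]
    for size in shard_sizes[:-1]:
        offsets.append(offsets[-1] + size)

    out = []
    for i in ids:
        # Find the shard holding global id i, then index within it.
        k = max(k for k, off in enumerate(offsets) if off <= i)
        out.append(params[k][i - offsets[k]])
    return out
```

With a single shard this degenerates to plain gathering, which is the sense in which embedding_lookup generalizes tf.gather.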
If len(params) > 1, each element id of ids is partitioned between the elements of params according to the partition_strategy. In all strategies, if the id space does not evenly divide the number of partitions, each of the first (max_id + 1) % len(params) partitions will be assigned one more id.
The partition_strategy is always "div" currently. This means that ids are assigned to partitions in a contiguous manner. For instance, 13 ids are split across 5 partitions as:

[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
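The "div" assignment above can be computed directly: each partition gets num_ids // num_shards ids, and the first num_ids % num_shards partitions get one extra. A short sketch (the helper name div_partitions is illustrative, not a TensorFlow function):

```python
def div_partitions(num_ids, num_shards):
    # Contiguous ("div") assignment: the first (num_ids % num_shards)
    # shards each receive one extra id.
    base, extra = divmod(num_ids, num_shards)
    parts, start = [], 0
    for k in range(num_shards):
        size = base + (1 if k < extra else 0)
        parts.append(list(range(start, start + size)))
        start += size
    return parts
```

For 13 ids and 5 partitions this reproduces the split shown above: sizes [3, 3, 3, 2, 2].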
The results of the lookup are concatenated into a dense tensor. The returned tensor has shape shape(ids) + shape(params)[1:].
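The shape rule can be checked with plain nested lists standing in for tensors (a sketch, not TensorFlow code): looking up a (2, 2) batch of ids in a (4, 2) table yields a (2, 2, 2) result, i.e. shape(ids) + shape(params)[1:].

```python
params = [[0.0, 10.0], [1.0, 11.0], [2.0, 12.0], [3.0, 13.0]]  # shape (4, 2)
ids = [[0, 3], [2, 1]]                                         # shape (2, 2)

# Each id is replaced by its embedding row, so the result has
# shape (2, 2) + (2,) = (2, 2, 2).
result = [[params[i] for i in row] for row in ids]
```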
Args:
  params: A single tensor representing the complete embedding tensor, or a list of P tensors all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a PartitionedVariable, created by partitioning along dimension 0. Each element must be appropriately sized for the "div" partition_strategy.
  ids: A Tensor with type int32 or int64 containing the ids to be looked up in params.
  max_norm: If not None, each embedding is clipped if its l2-norm is larger than this value.
  name: A name for the operation (optional).
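The max_norm clipping described above amounts to rescaling any looked-up embedding whose l2-norm exceeds the threshold. A minimal sketch of that behavior (the helper name clip_by_l2 is illustrative, not the TensorFlow implementation):

```python
import math

def clip_by_l2(vec, max_norm):
    # If the embedding's l2-norm exceeds max_norm, rescale it so the
    # norm equals max_norm; otherwise return it unchanged.
    norm = math.sqrt(sum(x * x for x in vec))
    if norm > max_norm:
        return [x * max_norm / norm for x in vec]
    return vec
```

For example, [3.0, 4.0] has l2-norm 5, so clipping at max_norm=1.0 rescales it by 1/5.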
Returns:
  A Tensor with the same type as the tensors in params.
Raises:
  ValueError: If params is empty.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.