Below is an example of a training and evaluation step:
optimizer = tf.keras.optimizers.SGD(0.1)

@tf.function
def training_step(dataset_iterator, num_steps):
  def tpu_step(embedding_features):
    with tf.GradientTape() as tape:
      tape.watch(embedding.embedding_tables.values())
      activations = embedding(embedding_features)
      model_output = model(activations)
      loss = ...  # some function of labels and model_output

    embedding_gradients = tape.gradient(loss,
                                        embedding.embedding_tables.values())
    optimizer.apply_gradients(list(zip(embedding_gradients,
                                       embedding.embedding_tables.values())))
    # Insert your model gradient and optimizer application here

  for _ in tf.range(num_steps):
    strategy.run(tpu_step, args=(next(dataset_iterator),))

@tf.function
def evaluation_step(dataset_iterator, num_steps):
  def tpu_step(embedding_features):
    activations = embedding(embedding_features)
    model_output = model(activations)
    # Insert your evaluation code here.

  for _ in tf.range(num_steps):
    strategy.run(tpu_step, args=(next(dataset_iterator),))
Note: The optimizer used here is a Keras optimizer. In order to keep slot variable creation consistent between the Keras optimizer and the embedding optimizer, the slot_variable_creation_fn argument of the embedding optimizer has to be passed the Keras add_slot function. Also note that the slot names might be slightly different between them.

optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.1)

def slot_variable_creation_fn(table, slot_names, slot_initializers):
  slots = {}
  for slot, initializer in zip(slot_names, slot_initializers):
    slots[slot] = optimizer.add_slot(table, slot, initializer)
  return slots

embedding_optimizer = tf.tpu.experimental.embedding.Adagrad(
    learning_rate=0.1,
    slot_variable_creation_fn=slot_variable_creation_fn)

# Use the embedding optimizer to create the mid level API and the Keras
# optimizer to apply gradients.
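As a rough sketch of that last comment (feature_config and strategy are assumed to be defined as in the earlier examples, and embedding_optimizer/optimizer come from the snippet above), the two optimizers might be wired together as follows:

# Build the mid level API with the embedding optimizer; its slot variables are
# created through the Keras optimizer via slot_variable_creation_fn above.
with strategy.scope():
  embedding = tf.tpu.experimental.embedding.TPUEmbeddingV0(
      feature_config=feature_config,
      optimizer=embedding_optimizer)

# Inside the training step, gradients for the replicated tables are then
# applied with the Keras optimizer, e.g.:
#   optimizer.apply_gradients(
#       list(zip(embedding_gradients, embedding.embedding_tables.values())))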
Attributes

embedding_tables
    Returns a dict of embedding tables, keyed by TableConfig.
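For example (an illustrative snippet; embedding is the TPUEmbeddingV0 instance created under the strategy scope above), the attribute can be used to inspect the replicated table variables:

for table_config, table in embedding.embedding_tables.items():
  # Each entry maps a TableConfig to the replicated table variable.
  print(table_config.name, table.shape)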
Methods

build

build()

Create variables and slot variables for TPU embeddings.

embedding_lookup

embedding_lookup(
    features: Any, weights: Optional[Any] = None
) -> Any

Apply embedding lookup on TPUs using the TensorCore.

Note that all sparse and ragged tensors will be converted to dense tensors on CPU and then passed to the TPU to do the embedding lookup. Large embedding lookup is not supported by this API; use the TPUEmbedding mid level API instead.
Args

features
    A nested structure of Tensors, SparseTensors or RaggedTensors.

weights
    A nested structure of Tensors, SparseTensors or RaggedTensors, or None for no weights. If not None, the structure must match that of the inputs, but entries are allowed to be None.

Returns

A nested structure of Tensors with the same structure as the inputs.
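As a purely illustrative sketch (the feature name 'watched' and all id and weight values here are assumptions, not part of the API), a sparse feature with matching per-id weights could be structured as follows and then looked up inside strategy.run, as in the training and evaluation steps above:

features = {
    'watched': tf.sparse.SparseTensor(
        indices=[[0, 0], [1, 0], [1, 1]],
        values=[3, 7, 11],
        dense_shape=[2, 2])}

# weights mirrors the structure (and sparsity pattern) of features.
weights = {
    'watched': tf.sparse.SparseTensor(
        indices=[[0, 0], [1, 0], [1, 1]],
        values=[1.0, 0.5, 0.5],
        dense_shape=[2, 2])}

# Inside a tpu_step run under strategy.run:
#   activations = embedding.embedding_lookup(features, weights=weights)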
__call__

__call__(
    features: Any, weights: Optional[Any] = None
) -> Any

Call the mid level API to do embedding lookup.
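For instance, a minimal lookup step (a sketch only, reusing the strategy, embedding, and dataset_iterator names from the examples above) might look like:

@tf.function
def lookup_step(embedding_features):
  def tpu_step(embedding_features):
    # Calling the instance (equivalent to embedding_lookup) returns dense
    # activations with the same nested structure as the input features.
    return embedding(embedding_features)
  return strategy.run(tpu_step, args=(embedding_features,))

activations = lookup_step(next(dataset_iterator))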