# tf.keras.layers.experimental.preprocessing.PreprocessingLayer
Base class for Preprocessing Layers.

Inherits From: `Layer`, `Module`
    tf.keras.layers.experimental.preprocessing.PreprocessingLayer(
        **kwargs
    )
**Don't use this class directly: it's an abstract base class!** You may be looking for one of the many built-in [preprocessing layers](https://keras.io/guides/preprocessing_layers/) instead.

Preprocessing layers are layers whose state gets computed before model training starts. They do not get updated during training. Most preprocessing layers implement an `adapt()` method for state computation.

The `PreprocessingLayer` class is the base class you would subclass to implement your own preprocessing layers.
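For illustration, here is a loose sketch of such a subclass. It assumes the `update_state`/`reset_state` hooks that this version's `adapt()` implementation drives (see the source link below); the `RunningMax` layer itself is hypothetical, not part of the API:

    import tensorflow as tf

    class RunningMax(
            tf.keras.layers.experimental.preprocessing.PreprocessingLayer):
        """Hypothetical layer: scales inputs by the max seen during adapt()."""

        def build(self, input_shape):
            self.max_val = self.add_weight(
                name='max_val', shape=(), trainable=False,
                initializer='zeros')

        def reset_state(self):
            # Called when adapt() restarts state computation from scratch.
            self.max_val.assign(0.0)

        def update_state(self, data):
            # Called by adapt() once per batch of `data`.
            self.max_val.assign(
                tf.maximum(self.max_val,
                           tf.reduce_max(tf.cast(data, tf.float32))))

        def call(self, inputs):
            # divide_no_nan guards the un-adapted case (max_val == 0).
            return tf.math.divide_no_nan(inputs, self.max_val)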
| Attributes ||
|--------------|-------------------------------------------------|
| `is_adapted` | Whether the layer has been fit to data already. |
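For example (a minimal sketch; `Normalization` is one of the built-in subclasses):

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    layer.is_adapted   # False: no state has been computed yet
    layer.adapt([0, 2])
    layer.is_adapted   # True: mean/variance state is now set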
## Methods

### `adapt`

[View source](https://github.com/keras-team/keras/tree/master/keras/engine/base_preprocessing_layer.py#L160-L252)

    adapt(
        data, batch_size=None, steps=None
    )
Fits the state of the preprocessing layer to the data being passed.

After calling `adapt` on a layer, a preprocessing layer's state will not update during training. In order to make preprocessing layers efficient in any distribution context, they are kept constant with respect to any compiled `tf.Graph`s that call the layer. This does not affect the layer use when adapting each layer only once, but if you adapt a layer multiple times you will need to take care to re-compile any compiled functions as follows:

- If you are adding a preprocessing layer to a `keras.Model`, you need to call `model.compile` after each subsequent call to `adapt`.
- If you are calling a preprocessing layer inside `tf.data.Dataset.map`, you should call `map` again on the input `tf.data.Dataset` after each `adapt`.
- If you are using a `tf.function` directly which calls a preprocessing layer, you need to call `tf.function` again on your callable after each subsequent call to `adapt` (a sketch of this case follows the two examples below).
`tf.keras.Model` example with multiple adapts:

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    layer.adapt([0, 2])
    model = tf.keras.Sequential(layer)
    model.predict([0, 1, 2])
    # array([-1.,  0.,  1.], dtype=float32)
    layer.adapt([-1, 1])
    model.compile()  # This is needed to re-compile model.predict!
    model.predict([0, 1, 2])
    # array([0., 1., 2.], dtype=float32)
`tf.data.Dataset` example with multiple adapts:

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    layer.adapt([0, 2])
    input_ds = tf.data.Dataset.range(3)
    normalized_ds = input_ds.map(layer)
    list(normalized_ds.as_numpy_iterator())
    # [array([-1.], dtype=float32),
    #  array([0.], dtype=float32),
    #  array([1.], dtype=float32)]
    layer.adapt([-1, 1])
    normalized_ds = input_ds.map(layer)  # Re-map over the input dataset.
    list(normalized_ds.as_numpy_iterator())
    # [array([0.], dtype=float32),
    #  array([1.], dtype=float32),
    #  array([2.], dtype=float32)]
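The third case from the list above works the same way; a minimal `tf.function` sketch (not from the original examples):

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    layer.adapt([0, 2])
    fn = tf.function(lambda x: layer(x))
    fn(tf.constant([0., 1., 2.]))
    # <tf.Tensor: shape=(3,), dtype=float32,
    #  numpy=array([-1.,  0.,  1.], dtype=float32)>
    layer.adapt([-1, 1])
    fn = tf.function(lambda x: layer(x))  # Re-wrap so tracing sees new state.
    fn(tf.constant([0., 1., 2.]))
    # <tf.Tensor: shape=(3,), dtype=float32,
    #  numpy=array([0., 1., 2.], dtype=float32)>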
| Arguments ||
|--------------|---|
| `data` | The data to train on. It can be passed either as a tf.data Dataset, or as a numpy array. |
| `batch_size` | Integer or `None`. Number of samples per state update. If unspecified, `batch_size` will default to 32. Do not specify the `batch_size` if your data is in the form of datasets, generators, or `keras.utils.Sequence` instances (since they generate batches). |
| `steps` | Integer or `None`. Total number of steps (batches of samples). When training with input tensors such as TensorFlow data tensors, the default `None` is equal to the number of samples in your dataset divided by the batch size, or 1 if that cannot be determined. If `data` is a tf.data dataset and `steps` is `None`, adaptation will run until the input dataset is exhausted. When passing an infinitely repeating dataset, you must specify the `steps` argument. This argument is not supported with array inputs. |
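For instance, an infinitely repeating dataset requires `steps` to bound the adaptation pass (a hedged sketch, not from the original page):

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    ds = tf.data.Dataset.range(4).map(
        lambda x: tf.cast(x, tf.float32)).repeat()  # infinite dataset
    layer.adapt(ds.batch(2), steps=2)  # read exactly 2 batches, then stop
    layer.is_adapted
    # True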
### `compile`

[View source](https://github.com/keras-team/keras/tree/master/keras/engine/base_preprocessing_layer.py#L138-L158)

    compile(
        run_eagerly=None, steps_per_execution=None
    )

Configures the layer for `adapt`.
| Arguments ||
|-----------------------|---|
| `run_eagerly` | Bool. Defaults to `False`. If `True`, this layer's logic will not be wrapped in a `tf.function`. Recommended to leave this as `None` unless your layer cannot be run inside a `tf.function`. |
| `steps_per_execution` | Int. Defaults to 1. The number of batches to run during each `tf.function` call. Running multiple batches inside a single `tf.function` call can greatly improve performance on TPUs or small models with a large Python overhead. |
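As a usage sketch, configure before calling `adapt` (the `steps_per_execution` value here is illustrative):

    layer = tf.keras.layers.experimental.preprocessing.Normalization(
        axis=None)
    layer.compile(steps_per_execution=8)  # run 8 batches per tf.function call
    layer.adapt(tf.data.Dataset.range(64).map(
        lambda x: tf.cast(x, tf.float32)).batch(4))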