# tf.keras.layers.Lambda
Wraps arbitrary expressions as a `Layer` object.
Inherits From: `Layer`
tf.keras.layers.Lambda(
function, output_shape=None, mask=None, arguments=None, **kwargs
)
The `Lambda` layer exists so that arbitrary TensorFlow functions can be used when constructing `Sequential` and Functional API models. `Lambda` layers are best suited for simple operations or quick experimentation. For more advanced use cases, subclassing `keras.layers.Layer` is preferred. One reason for this is that when saving a Model, `Lambda` layers are saved by serializing the Python bytecode, whereas subclassed Layers are saved by overriding their `get_config` method and are thus more portable. Models that rely on subclassed Layers are also often easier to visualize and reason about.
Examples:
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda

model = tf.keras.Sequential()

# add a x -> x^2 layer
model.add(Lambda(lambda x: x ** 2))

# add a layer that returns the concatenation
# of the positive part of the input and
# the opposite of the negative part
def antirectifier(x):
    x -= K.mean(x, axis=1, keepdims=True)
    x = K.l2_normalize(x, axis=1)
    pos = K.relu(x)
    neg = K.relu(-x)
    return K.concatenate([pos, neg], axis=1)

model.add(Lambda(antirectifier))
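For contrast, here is a minimal sketch of the subclassing approach recommended above for advanced use cases; the class `Antirectifier` and its constructor argument `axis` are illustrative, not part of this API:

class Antirectifier(tf.keras.layers.Layer):
    # Subclassed equivalent of the antirectifier Lambda above; portable
    # because it is saved via get_config rather than Python bytecode.
    def __init__(self, axis=1, **kwargs):
        super().__init__(**kwargs)
        self.axis = axis

    def call(self, inputs):
        x = inputs - tf.reduce_mean(inputs, axis=self.axis, keepdims=True)
        x = tf.math.l2_normalize(x, axis=self.axis)
        return tf.concat([tf.nn.relu(x), tf.nn.relu(-x)], axis=self.axis)

    def get_config(self):
        # Serializing the constructor arguments is what makes the
        # layer portable across environments.
        config = super().get_config()
        config.update({'axis': self.axis})
        return config

model.add(Antirectifier())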
Variables can be created within a `Lambda` layer. As with other layers, these variables will be created only once and reused if the `Lambda` layer is called on new inputs. If creating more than one variable in a given `Lambda` instance, be sure to use a different name for each variable. Note that calling sublayers from within a `Lambda` is not supported.
Example of variable creation:
import tensorflow as tf
from tensorflow.keras.layers import Lambda

def linear_transform(x):
    v1 = tf.Variable(1., name='multiplier')
    v2 = tf.Variable(0., name='bias')
    return x * v1 + v2

linear_layer = Lambda(linear_transform)
model = tf.keras.Sequential()
model.add(linear_layer)
model.add(tf.keras.layers.Dense(10, activation='relu'))
model.add(linear_layer)  # Reuses the existing Variables
Note that creating two instances of `Lambda` using the same function will *not* share Variables between the two instances. Each instance of `Lambda` will create and manage its own weights.
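For example, reusing `linear_transform` from the example above, a minimal sketch of this caveat:

layer_a = Lambda(linear_transform)
layer_b = Lambda(linear_transform)
# layer_a and layer_b each create their own 'multiplier' and 'bias'
# variables on first call; updating one has no effect on the other.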
| Arguments | Description |
|---|---|
| `function` | The function to be evaluated. Takes the input tensor as its first argument. |
| `output_shape` | Expected output shape from the function. This argument can be inferred if not explicitly provided. Can be a tuple or a function. If a tuple, it specifies only the first dimension onward; the sample dimension is assumed either to be the same as the input, `output_shape = (input_shape[0],) + output_shape`, or, if the input is `None`, also `None`: `output_shape = (None,) + output_shape`. If a function, it specifies the entire shape as a function of the input shape: `output_shape = f(input_shape)`. See the sketch after this table. |
| `mask` | Either `None` (indicating no masking), a callable with the same signature as the `compute_mask` layer method, or a tensor that will be returned as the output mask regardless of what the input is. |
| `arguments` | Optional dictionary of keyword arguments to be passed to the function. |
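A minimal sketch of passing `output_shape` and `arguments`; the function `scale` and its keyword argument `factor` are illustrative:

import tensorflow as tf
from tensorflow.keras.layers import Lambda

def scale(x, factor=1.0):
    return x * factor

# 'arguments' supplies extra keyword arguments to the wrapped function;
# 'output_shape' is given here as a function of the input shape
# (the identity, since scaling is elementwise).
layer = Lambda(scale,
               arguments={'factor': 2.0},
               output_shape=lambda input_shape: input_shape)

print(layer(tf.ones((2, 3))))  # every element is 2.0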
Input shape: Arbitrary. Use the keyword argument `input_shape` (tuple of integers, does not include the samples axis) when using this layer as the first layer in a model.

Output shape: Specified by the `output_shape` argument.