tf.compat.v1.tpu.experimental.FtrlParameters
Optimization parameters for Ftrl with TPU embeddings.
    tf.compat.v1.tpu.experimental.FtrlParameters(
        learning_rate, learning_rate_power=-0.5, initial_accumulator_value=0.1,
        l1_regularization_strength=0.0, l2_regularization_strength=0.0,
        use_gradient_accumulation=True, clip_weight_min=None, clip_weight_max=None,
        weight_decay_factor=None, multiply_weight_decay_factor_by_learning_rate=None
    )
Pass this to `tf.estimator.tpu.experimental.EmbeddingConfigSpec` via the `optimization_parameters` argument to set the optimizer and its parameters. See the documentation for `tf.estimator.tpu.experimental.EmbeddingConfigSpec` for more details.
    estimator = tf.estimator.tpu.TPUEstimator(
        ...
        embedding_config_spec=tf.estimator.tpu.experimental.EmbeddingConfigSpec(
            ...
            optimization_parameters=tf.tpu.experimental.FtrlParameters(0.1),
            ...))
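The remaining constructor arguments tune the learning-rate schedule, regularization, weight clipping, and weight decay (see the table below). A minimal sketch with the arguments spelled out; the argument names come from the signature above, but the hyperparameter values here are illustrative, not recommendations:

    ftrl_params = tf.compat.v1.tpu.experimental.FtrlParameters(
        learning_rate=0.1,
        learning_rate_power=-0.5,          # default: step size decays as 1/sqrt(accumulator)
        initial_accumulator_value=0.1,
        l1_regularization_strength=0.001,  # illustrative value
        l2_regularization_strength=0.001,  # illustrative value
        use_gradient_accumulation=True,
        clip_weight_min=-10.0,             # illustrative: clip embedding weights to [-10.0, 10.0]
        clip_weight_max=10.0)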
Args

| Argument | Description |
| --- | --- |
| `learning_rate` | A floating point value. The learning rate. |
| `learning_rate_power` | A float value; must be less than or equal to zero. Controls how the learning rate decreases during training. Use zero for a fixed learning rate. See section 3.1 in the [paper](https://www.eecs.tufts.edu/%7Edsculley/papers/ad-click-prediction.pdf). |
| `initial_accumulator_value` | The starting value for accumulators. Only zero or positive values are allowed. |
| `l1_regularization_strength` | A float value; must be greater than or equal to zero. |
| `l2_regularization_strength` | A float value; must be greater than or equal to zero. |
| `use_gradient_accumulation` | Setting this to `False` makes embedding gradient calculation less accurate but faster. Please see `optimization_parameters.proto` for details. |
| `clip_weight_min` | The minimum value to clip by; `None` means -infinity. |
| `clip_weight_max` | The maximum value to clip by; `None` means +infinity. |
| `weight_decay_factor` | Amount of weight decay to apply; `None` means that the weights are not decayed. |
| `multiply_weight_decay_factor_by_learning_rate` | If `True`, `weight_decay_factor` is multiplied by the current learning rate. |
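As a rough intuition for `learning_rate_power`: FTRL keeps a per-coordinate accumulator of squared gradients, and the effective step size shrinks as that accumulator grows. The sketch below is a simplified model of that relationship, not the TPU embedding implementation; the helper name `effective_step_size` is hypothetical:

    def effective_step_size(learning_rate, accumulator, learning_rate_power):
        # Simplified per-coordinate step size. With the default power of
        # -0.5 the step decays like 1/sqrt(accumulator); with a power of
        # 0.0 it stays fixed at `learning_rate`, matching the note for
        # `learning_rate_power` in the table above.
        return learning_rate * accumulator ** learning_rate_power

    # The accumulator grows as training progresses, so the step shrinks:
    print(effective_step_size(0.1, 4.0, learning_rate_power=-0.5))  # 0.05
    print(effective_step_size(0.1, 4.0, learning_rate_power=0.0))   # 0.1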