tf.contrib.linear_optimizer.SdcaModel
Stochastic dual coordinate ascent solver for linear models.
```python
tf.contrib.linear_optimizer.SdcaModel(
    examples, variables, options
)
```
Loss functions supported:

- Binary logistic loss
- Squared loss
- Hinge loss
- Smooth hinge loss
- Poisson log loss

This class defines an optimizer API to train a linear model.

### Usage
```python
# Create a solver with the desired parameters.
lr = tf.contrib.linear_optimizer.SdcaModel(examples, variables, options)
min_op = lr.minimize()
opt_op = lr.update_weights(min_op)

predictions = lr.predictions(examples)
# Primal loss + L1 loss + L2 loss.
regularized_loss = lr.regularized_loss(examples)
# Primal loss only.
unregularized_loss = lr.unregularized_loss(examples)
```
```
examples: {
  sparse_features: list of SparseFeatureColumn.
  dense_features: list of dense tensors of type float32.
  example_labels: a tensor of type float32 and shape [Num examples].
  example_weights: a tensor of type float32 and shape [Num examples].
  example_ids: a tensor of type string and shape [Num examples].
}

variables: {
  sparse_features_weights: list of tensors of shape [vocab size].
  dense_features_weights: list of tensors of shape [dense_feature_dimension].
}

options: {
  symmetric_l1_regularization: 0.0
  symmetric_l2_regularization: 1.0
  loss_type: "logistic_loss"
  num_loss_partitions: 1 (optional, default 1. Number of partitions of the
    global loss function; 1 means a single-machine solver, and >1 when more
    than one optimizer is working concurrently.)
  num_table_shards: 1 (optional, default 1. Number of shards of the internal
    state table, typically set to match the number of parameter servers for
    large data sets.)
}
```
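For concreteness, the three dictionaries can be sketched with NumPy placeholders. This is illustrative only: the real constructor expects TF tensors and `SparseFeatureColumn` objects, and the arrays below merely mirror the shapes documented above.

```python
import numpy as np

num_examples = 4
dense_dim = 3

# Stand-ins for TF tensors / SparseFeatureColumn objects, shapes only.
examples = {
    'sparse_features': [],            # list of SparseFeatureColumn (empty here)
    'dense_features': [np.zeros((num_examples, dense_dim), dtype=np.float32)],
    'example_labels': np.zeros(num_examples, dtype=np.float32),
    'example_weights': np.ones(num_examples, dtype=np.float32),
    'example_ids': np.array(['a', 'b', 'c', 'd']),
}
variables = {
    'sparse_features_weights': [],    # one [vocab size] tensor per sparse column
    'dense_features_weights': [np.zeros(dense_dim, dtype=np.float32)],
}
options = {
    'symmetric_l1_regularization': 0.0,
    'symmetric_l2_regularization': 1.0,
    'loss_type': 'logistic_loss',
    'num_loss_partitions': 1,
    'num_table_shards': 1,
}

# Shape consistency the solver relies on: one weight vector per dense
# feature column, and per-example tensors all of length num_examples.
assert examples['dense_features'][0].shape[1] == variables['dense_features_weights'][0].shape[0]
```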
In the training program, you simply run the `Op` returned by `minimize()`:

```python
# Execute opt_op and train for num_steps.
for _ in range(num_steps):
    opt_op.run()

# You can also check for convergence by calling
lr.approximate_duality_gap()
```
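Since `tf.contrib` cannot be run on modern TensorFlow, here is a self-contained plain-NumPy sketch of what stochastic dual coordinate ascent does, specialized to squared loss (the closed-form coordinate update and the duality gap follow the standard SDCA derivation; all names are illustrative, not part of the API):

```python
import numpy as np

def sdca_squared_loss(X, y, l2=1.0, num_steps=1000, seed=0):
    """SDCA sketch for min_w (1/n) sum_i 0.5*(x_i.w - y_i)^2 + (l2/2)*||w||^2."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)   # one dual variable per example
    w = np.zeros(d)       # primal weights; invariant: w = X.T @ alpha / (l2 * n)
    for _ in range(num_steps):
        i = rng.integers(n)
        # Closed-form maximization of the dual over coordinate i (squared loss).
        delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (l2 * n))
        alpha[i] += delta
        w += delta * X[i] / (l2 * n)
    return w, alpha

def duality_gap(X, y, w, alpha, l2=1.0):
    # Primal objective minus dual objective; shrinks toward 0 as SDCA converges.
    primal = np.mean(0.5 * (X @ w - y) ** 2) + 0.5 * l2 * w @ w
    dual = np.mean(alpha * y - 0.5 * alpha ** 2) - 0.5 * l2 * w @ w
    return primal - dual
```

Running a few thousand steps on a small synthetic regression problem drives the gap close to zero, which is exactly the convergence signal `approximate_duality_gap()` exposes.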
## Methods

### `approximate_duality_gap`

```python
approximate_duality_gap()
```

Add operations to compute the approximate duality gap.

Returns:
- An `Operation` that computes the approximate duality gap over all examples.
### `minimize`

```python
minimize(
    global_step=None, name=None
)
```

Add operations to train a linear model by minimizing the loss function.

Args:
- `global_step`: Optional `Variable` to increment by one after the variables have been updated.
- `name`: Optional name for the returned operation.

Returns:
- An `Operation` that updates the variables passed in the constructor.
### `predictions`

```python
predictions(
    examples
)
```

Add operations to compute predictions by the model.

If `logistic_loss` is being used, predicted probabilities are returned. If `poisson_loss` is being used, predictions are exponentiated. Otherwise, (raw) linear predictions (`w*x`) are returned.

Args:
- `examples`: Examples to compute predictions on.

Returns:
- An `Operation` that computes the predictions for examples.

Raises:
- `ValueError`: if examples are not well defined.
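The per-loss transform described above can be sketched in NumPy (a hypothetical helper, not part of the API; `linear` holds the raw margins `w*x`):

```python
import numpy as np

def transform_predictions(linear, loss_type):
    """Apply the link function the docs describe for each loss type."""
    if loss_type == 'logistic_loss':
        return 1.0 / (1.0 + np.exp(-linear))   # predicted probabilities
    if loss_type == 'poisson_loss':
        return np.exp(linear)                  # exponentiated predictions
    return linear                              # raw linear predictions
```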
### `regularized_loss`

```python
regularized_loss(
    examples
)
```

Add operations to compute the loss with regularization loss included.

Args:
- `examples`: Examples to compute loss on.

Returns:
- An `Operation` that computes mean (regularized) loss for the given set of examples.

Raises:
- `ValueError`: if examples are not well defined.
### `unregularized_loss`

```python
unregularized_loss(
    examples
)
```

Add operations to compute the loss (without the regularization loss).

Args:
- `examples`: Examples to compute unregularized loss on.

Returns:
- An `Operation` that computes mean (unregularized) loss for the given set of examples.

Raises:
- `ValueError`: if examples are not well defined.
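The relation between the two losses can be sketched for dense weights with logistic loss (a simplification that ignores example weights and sparse columns, and assumes the conventional `(l2/2)*||w||^2` form of the L2 penalty):

```python
import numpy as np

def logistic_loss(linear, labels):
    # Numerically stable binary logistic loss on raw margins (labels in {0, 1}).
    return np.maximum(linear, 0) - linear * labels + np.log1p(np.exp(-np.abs(linear)))

def unregularized_loss(linear, labels):
    # "Primal loss only": mean loss over the examples.
    return np.mean(logistic_loss(linear, labels))

def regularized_loss(linear, labels, w, l1, l2):
    # "Primal loss + L1 loss + L2 loss", matching the usage comment above.
    return unregularized_loss(linear, labels) + l1 * np.sum(np.abs(w)) + 0.5 * l2 * w @ w
```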
### `update_weights`

```python
update_weights(
    train_op
)
```

Updates the model weights.

This function must be called on at least one worker after `minimize`. In distributed training this call can be omitted on non-chief workers to speed up training.

Args:
- `train_op`: The operation returned by the `minimize` call.

Returns:
- An `Operation` that updates the model weights.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.