tf.contrib.estimator.logistic_regression_head
Creates a `_Head` for logistic regression.

    tf.contrib.estimator.logistic_regression_head(
        weight_column=None, loss_reduction=losses.Reduction.SUM_OVER_BATCH_SIZE,
        name=None
    )
Uses `sigmoid_cross_entropy_with_logits` loss, which is the same as
`binary_classification_head`. The differences compared to
`binary_classification_head` are:

- Does not support `label_vocabulary`. Instead, labels must be floats in the range [0, 1].
- Does not calculate some metrics that do not make sense, such as AUC.
- In `PREDICT` mode, returns only logits and predictions (`tf.sigmoid(logits)`), whereas `binary_classification_head` also returns probabilities, classes, and class_ids.
- Export output defaults to `RegressionOutput`, whereas `binary_classification_head` defaults to `PredictOutput`.
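To make the `PREDICT`-mode difference concrete, here is a minimal pure-Python sketch of the two outputs this head produces. The function name `predict_outputs` is illustrative, not part of the API; the inline sigmoid stands in for `tf.sigmoid`.

```python
import math

def predict_outputs(logits):
    """Sketch of logistic_regression_head's PREDICT-mode outputs:
    only logits and predictions (= sigmoid(logits)) are returned,
    with no separate probabilities, classes, or class_ids."""
    return {
        "logits": logits,
        # Predictions are probabilities in [0, 1].
        "predictions": [1.0 / (1.0 + math.exp(-z)) for z in logits],
    }

out = predict_outputs([0.0, 2.0, -2.0])
```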
The head expects `logits` with shape `[D0, D1, ... DN, 1]`.
In many applications, the shape is `[batch_size, 1]`.

The `labels` shape must match that of `logits`, namely
`[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.

If `weight_column` is specified, weights must be of shape
`[D0, D1, ... DN]` or `[D0, D1, ... DN, 1]`.
This is implemented as a generalized linear model; for background, see
https://en.wikipedia.org/wiki/Generalized_linear_model
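To show what the loss computes, here is a hedged pure-Python sketch of `sigmoid_cross_entropy_with_logits` combined with the default `SUM_OVER_BATCH_SIZE` reduction and optional per-example weights. The function names are illustrative, not part of the API.

```python
import math

def sigmoid_cross_entropy_with_logits(label, logit):
    """Numerically stable form of the per-example loss:
    max(z, 0) - z * y + log(1 + exp(-|z|))."""
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

def head_loss(labels, logits, weights=None):
    """Weighted sum of per-example losses divided by the batch size,
    i.e. the SUM_OVER_BATCH_SIZE reduction (label_dimension is 1 here)."""
    if weights is None:
        weights = [1.0] * len(labels)
    losses = [w * sigmoid_cross_entropy_with_logits(y, z)
              for y, z, w in zip(labels, logits, weights)]
    return sum(losses) / len(losses)

# Labels are floats in [0, 1], matching the head's contract.
loss = head_loss(labels=[0.0, 1.0, 0.5], logits=[-1.2, 2.3, 0.0])
```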
The head can be used with a canned estimator. Example:

    my_head = tf.contrib.estimator.logistic_regression_head()
    my_estimator = tf.estimator.DNNEstimator(
        head=my_head,
        hidden_units=...,
        feature_columns=...)
It can also be used with a custom `model_fn`. Example:

    def _my_model_fn(features, labels, mode):
      my_head = tf.contrib.estimator.logistic_regression_head()
      logits = tf.keras.Model(...)(features)

      return my_head.create_estimator_spec(
          features=features,
          mode=mode,
          labels=labels,
          optimizer=tf.train.AdagradOptimizer(learning_rate=0.1),
          logits=logits)

    my_estimator = tf.estimator.Estimator(model_fn=_my_model_fn)
| Args ||
|---|---|
| `weight_column` | A string or a `_NumericColumn` created by `tf.feature_column.numeric_column` defining a feature column that represents weights. It is used to down-weight or boost examples during training and is multiplied by the loss of the example. |
| `loss_reduction` | One of `tf.losses.Reduction` except `NONE`. Describes how to reduce training loss over the batch and label dimensions. Defaults to `SUM_OVER_BATCH_SIZE`, namely the weighted sum of losses divided by `batch size * label_dimension`. See `tf.losses.Reduction`. |
| `name` | Name of the head. If provided, summary and metrics keys are suffixed by `"/" + name`. Also used as `name_scope` when creating ops. |

| Returns ||
|---|---|
| An instance of `_Head` for logistic regression. ||

| Raises ||
|---|---|
| `ValueError` | If `loss_reduction` is invalid. |
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.