tf.compat.v1.feature_column.linear_model

Returns a linear prediction Tensor based on given feature_columns. (deprecated)

    tf.compat.v1.feature_column.linear_model(
        features,
        feature_columns,
        units=1,
        sparse_combiner='sum',
        weight_collections=None,
        trainable=True,
        cols_to_vars=None
    )

Deprecated: this function will be removed in a future version. Use Keras
preprocessing layers instead, either directly or via the
tf.keras.utils.FeatureSpace utility.

This function generates a weighted sum based on output dimension units.
The weighted sum refers to logits in classification problems, and to the
prediction itself in linear regression problems.
Note on supported columns: linear_model treats categorical columns as
indicator_columns. To be specific, assume the input as SparseTensor looks
like:

    shape = [2, 2]
    {
        [0, 0]: "a"
        [1, 0]: "b"
        [1, 1]: "c"
    }

linear_model implicitly assigns weights for the presence of "a", "b", "c",
just like indicator_column, while input_layer explicitly requires wrapping
each categorical column with an embedding_column or an
indicator_column.
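The implicit indicator treatment can be illustrated with a plain-Python
sketch (not TensorFlow API code), using the vocabulary ["a", "b", "c"] and
the 2x2 sparse input above:

```python
# Dense view of the 2x2 SparseTensor above: example 0 contains "a",
# example 1 contains "b" and "c".
vocab = ["a", "b", "c"]
rows = [["a"], ["b", "c"]]

# Each example becomes a multi-hot vector over the vocabulary, and the
# linear model learns one weight per vocabulary entry.
multi_hot = [[1.0 if v in row else 0.0 for v in vocab] for row in rows]
print(multi_hot)  # [[1.0, 0.0, 0.0], [0.0, 1.0, 1.0]]
```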
Example of usage:

    price = numeric_column('price')
    price_buckets = bucketized_column(price, boundaries=[0., 10., 100., 1000.])
    keywords = categorical_column_with_hash_bucket("keywords", 10K)
    keywords_price = crossed_column('keywords', price_buckets, ...)
    columns = [price_buckets, keywords, keywords_price ...]
    features = tf.io.parse_example(..., features=make_parse_example_spec(columns))
    prediction = linear_model(features, columns)

The sparse_combiner argument works as follows. For example, for two features
represented as the categorical columns:

    # Feature 1

    shape = [2, 2]
    {
        [0, 0]: "a"
        [0, 1]: "b"
        [1, 0]: "c"
    }

    # Feature 2

    shape = [2, 3]
    {
        [0, 0]: "d"
        [1, 0]: "e"
        [1, 1]: "f"
        [1, 2]: "f"
    }

with sparse_combiner as "mean", the linear model outputs are:

    y_0 = 1.0 / 2.0 * ( w_a + w_b ) + w_d + b
    y_1 = w_c + 1.0 / 3.0 * ( w_e + 2.0 * w_f ) + b

where y_i is the output, b is the bias, and w_x is the weight
assigned to the presence of x in the input features.
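The "mean" combiner arithmetic can be checked in plain Python with arbitrary
toy weights (the values w_a = 1.0, w_b = 3.0, etc. below are illustrative
only, not anything produced by the API):

```python
# Toy per-entry weights and bias; values are arbitrary, for illustration.
w = {"a": 1.0, "b": 3.0, "c": 5.0, "d": 0.5, "e": 2.0, "f": 4.0}
b = 0.25

# Example 0 contains {"a", "b"} in feature 1 and {"d"} in feature 2.
# With sparse_combiner="mean", each column's weights are averaged over
# that example's entries before summing across columns.
y_0 = (w["a"] + w["b"]) / 2 + w["d"] / 1 + b

# Example 1 contains {"c"} in feature 1 and {"e", "f", "f"} in feature 2;
# "f" appears twice, so its weight is counted twice in the average.
y_1 = w["c"] / 1 + (w["e"] + 2 * w["f"]) / 3 + b

print(y_0)  # 2.75
print(y_1)  # 5.25 + 10/3
```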
Args
features
A mapping from key to tensors. _FeatureColumns look up via these
keys. For example numeric_column('price') will look at 'price' key in
this dict. Values are Tensor or SparseTensor depending on
corresponding _FeatureColumn.
feature_columns
An iterable containing the FeatureColumns to use as inputs
to your model. All items should be instances of classes derived from
_FeatureColumn.
units
An integer, dimensionality of the output space. Default value is 1.
sparse_combiner
A string specifying how to reduce a categorical column when it
is multivalent. Apart from numeric_column, almost all columns passed to
linear_model are treated as categorical columns. Each categorical column
is combined independently. Currently "mean", "sqrtn" and "sum" are
supported, with "sum" the default for linear model. "sqrtn" often achieves
good accuracy, in particular with bag-of-words columns.
- "sum": do not normalize features in the column
- "mean": do l1 normalization on features in the column
- "sqrtn": do l2 normalization on features in the column
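A minimal plain-Python sketch of the three reductions, assuming every entry
in the column carries weight 1 (so l2 normalization reduces to dividing by
the square root of the entry count):

```python
import math

def combine(weights, combiner):
    """Reduce one example's looked-up weights in a single multivalent
    categorical column, mimicking sparse_combiner. Simplified sketch:
    assumes each entry carries an implicit weight of 1."""
    total = sum(weights)
    if combiner == "sum":    # no normalization
        return total
    if combiner == "mean":   # l1 normalization: divide by entry count
        return total / len(weights)
    if combiner == "sqrtn":  # l2 normalization: divide by sqrt of count
        return total / math.sqrt(len(weights))
    raise ValueError("unknown combiner: %s" % combiner)

entries = [2.0, 2.0, 2.0]  # weights looked up for three entries
print(combine(entries, "sum"))    # 6.0
print(combine(entries, "mean"))   # 2.0
print(combine(entries, "sqrtn"))  # 6/sqrt(3), about 3.464
```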
weight_collections
A list of collection names to which the Variable will be
added. Note that variables will also be added to the collections
tf.GraphKeys.GLOBAL_VARIABLES and ops.GraphKeys.MODEL_VARIABLES.
trainable
If True also add the variable to the graph collection
GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
cols_to_vars
If not None, must be a dictionary that will be filled with a
mapping from _FeatureColumn to the associated list of Variables. For
example, after the call we might have cols_to_vars =
{_NumericColumn(key='numeric_feature1', shape=(1,)): [],
'bias': [],
_NumericColumn(key='numeric_feature2', shape=(2,)): []}.
If a column creates no variables, its value will be an empty list. Note
that cols_to_vars will also contain a string key 'bias' that maps to a
list of Variables.
Returns
A Tensor which represents predictions/logits of a linear model. Its shape
is (batch_size, units) and its dtype is float32.
Raises
ValueError
If an item in feature_columns is neither a _DenseColumn
nor a _CategoricalColumn.
View source on GitHub: https://github.com/tensorflow/tensorflow/blob/v2.16.1/tensorflow/python/feature_column/feature_column.py#L280-L416

Last updated 2024-04-26 UTC.