# tf.contrib.distributions.bijectors.real_nvp_default_template

[View source on GitHub](https://github.com/tensorflow/tensorflow/blob/v1.15.0/tensorflow/contrib/distributions/python/ops/bijectors/real_nvp.py#L241-L308)

Build a scale-and-shift function using a multi-layer neural network. (deprecated)

    tf.contrib.distributions.bijectors.real_nvp_default_template(
        hidden_layers, shift_only=False, activation=tf.nn.relu, name=None, *args,
        **kwargs
    )

**Warning:** THIS FUNCTION IS DEPRECATED. It will be removed after 2018-10-01. Instructions for updating: The TensorFlow Distributions library has moved to TensorFlow Probability (https://github.com/tensorflow/probability). You should update all references to use [`tfp.distributions`](/probability/api_docs/python/tfp/distributions) instead of [`tf.contrib.distributions`](../../../../tf/contrib/distributions).

This will be wrapped in a `make_template` to ensure the variables are only
created once. It takes the `d`-dimensional input `x[0:d]` and returns the `D-d`
dimensional outputs `loc` ("mu") and `log_scale` ("alpha").

#### Arguments

* `hidden_layers`: Python `list`-like of non-negative integer scalars
  indicating the number of units in each hidden layer. Default: `[512, 512]`.
* `shift_only`: Python `bool` indicating if only the `shift` term shall be
  computed (i.e. NICE bijector). Default: `False`.
* `activation`: Activation function (callable). Explicitly setting to `None`
  implies a linear activation.
* `name`: A name for ops managed by this function. Default:
  "real_nvp_default_template".
* `*args`: [`tf.compat.v1.layers.dense`](../../../../tf/layers/dense) arguments.
* `**kwargs`: [`tf.compat.v1.layers.dense`](../../../../tf/layers/dense) keyword arguments.

#### Returns

* `shift`: `Float`-like `Tensor` of shift terms ("mu" in
  [Papamakarios et al. (2017)][1]).
* `log_scale`: `Float`-like `Tensor` of log(scale) terms ("alpha" in
  [Papamakarios et al. (2017)][1]).

#### Raises

* `NotImplementedError`: if rightmost dimension of `inputs` is unknown prior to
  graph execution.

#### References

[1]: George Papamakarios, Theo Pavlakou, and Iain Murray. Masked
Autoregressive Flow for Density Estimation. In *Neural Information
Processing Systems*, 2017. https://arxiv.org/abs/1705.07057

Last updated 2020-10-01 UTC.
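To make the contract above concrete, here is a pure-NumPy sketch of what such a template computes and of how its `shift` ("mu") and `log_scale` ("alpha") outputs are consumed by a RealNVP affine coupling. This is an illustrative stand-in, not the library's implementation: the hand-rolled one-hidden-layer MLP, its weight initialization, and the helper names (`shift_and_log_scale`, `real_nvp_forward`, `real_nvp_inverse`) are assumptions made for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def shift_and_log_scale(x0, weights, shift_only=False):
    """Map the masked d-dimensional half x0 through an MLP and split the
    final layer into shift ("mu") and log_scale ("alpha")."""
    h = x0
    for W, b in weights[:-1]:
        h = relu(h @ W + b)
    W, b = weights[-1]
    out = h @ W + b
    if shift_only:
        # NICE bijector: shift only; log_scale is identically zero.
        return out, np.zeros_like(out)
    return np.split(out, 2, axis=-1)

def real_nvp_forward(x, d, weights):
    """Affine coupling: y[:d] = x[:d]; y[d:] = x[d:] * exp(alpha) + mu."""
    x0, x1 = x[..., :d], x[..., d:]
    shift, log_scale = shift_and_log_scale(x0, weights)
    return np.concatenate([x0, x1 * np.exp(log_scale) + shift], axis=-1)

def real_nvp_inverse(y, d, weights):
    """Invert the coupling by re-running the network on the unchanged half."""
    y0, y1 = y[..., :d], y[..., d:]
    shift, log_scale = shift_and_log_scale(y0, weights)
    return np.concatenate([y0, (y1 - shift) * np.exp(-log_scale)], axis=-1)

# Tiny demo: D = 4 total dims, d = 2 masked dims, one hidden layer of 8 units.
# The final layer has 2 * (D - d) units so it can be split into mu and alpha.
rng = np.random.default_rng(0)
D, d, n_hidden = 4, 2, 8
weights = [
    (0.1 * rng.normal(size=(d, n_hidden)), np.zeros(n_hidden)),
    (0.1 * rng.normal(size=(n_hidden, 2 * (D - d))), np.zeros(2 * (D - d))),
]
x = rng.normal(size=(3, D))
y = real_nvp_forward(x, d, weights)
x_rec = real_nvp_inverse(y, d, weights)
```

Because `shift` and `log_scale` depend only on the pass-through half `x[0:d]`, the coupling is exactly invertible, which is the property the real template is built to preserve.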