tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler
Monte Carlo estimate of \(E_p[f(Z)] = E_q[f(Z) p(Z) / q(Z)]\).
```python
tf.contrib.bayesflow.monte_carlo.expectation_importance_sampler(
    f, log_p, sampling_dist_q, z=None, n=None, seed=None,
    name='expectation_importance_sampler'
)
```
With \(p(z) := \exp(\text{log\_p}(z))\), this `Op` returns

\(n^{-1} \sum_{i=1}^n f(z_i)\, p(z_i) / q(z_i), \quad z_i \sim q,\)
\(\approx E_q[f(Z)\, p(Z) / q(Z)]\)
\(= E_p[f(Z)].\)
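The estimator can be sketched outside TensorFlow in plain NumPy. All concrete choices below are illustrative, not part of this API: target \(p = N(0, 1)\), proposal \(q = N(0, 2^2)\), and \(f(z) = z^2\), for which the true value \(E_p[f(Z)]\) is 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p = N(0, 1); only its log-density is needed, mirroring `log_p`.
def log_p(z):
    return -0.5 * z**2 - 0.5 * np.log(2.0 * np.pi)

# Proposal q = N(0, 2^2): wider than p, so the ratio p(z) / q(z) stays bounded.
sigma_q = 2.0
def log_q(z):
    return -0.5 * (z / sigma_q)**2 - np.log(sigma_q) - 0.5 * np.log(2.0 * np.pi)

def f(z):
    return z**2  # E_p[Z^2] = 1 for p = N(0, 1)

n = 200_000
z = rng.normal(0.0, sigma_q, size=n)       # z_i ~ q
weights = np.exp(log_p(z) - log_q(z))      # p(z_i) / q(z_i)
estimate = np.mean(f(z) * weights)         # n^{-1} sum_i f(z_i) p(z_i) / q(z_i)
print(estimate)  # close to 1.0
```

A wider proposal is chosen deliberately: if `q` had lighter tails than `p`, the weights `p(z)/q(z)` could blow up and the estimate would have high variance.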
This sum is computed in log-space with max-subtraction to better handle the often extreme values that `f(z) p(z) / q(z)` can take on.
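The max-subtraction trick mentioned above can be sketched as a generic stable `log_mean_exp` (this is an illustration of the numerical technique, not the op's internal code):

```python
import numpy as np

def log_mean_exp(log_values):
    # Subtract the max before exponentiating, so the largest term becomes
    # exp(0) = 1; this prevents overflow/underflow for extreme log-values.
    m = np.max(log_values)
    return m + np.log(np.mean(np.exp(log_values - m)))

# Naive computation underflows to -inf; the stable version stays finite.
log_vals = np.array([-1000.0, -1001.0, -1002.0])
with np.errstate(divide='ignore'):
    naive = np.log(np.mean(np.exp(log_vals)))  # exp underflows to 0 -> log is -inf
stable = log_mean_exp(log_vals)                # finite, approx -1000.69
```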
If `f >= 0`, it is up to 2x more efficient to exponentiate the result of `expectation_importance_sampler_logspace` applied to `Log[f]`.
The user supplies either a `Tensor` of samples `z`, or the number of samples to draw, `n`.
Args:

- `f`: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape broadcastable to `q.batch_shape`. For example, `f` works "just like" `q.log_prob`.
- `log_p`: Callable mapping samples from `sampling_dist_q` to `Tensors` with shape broadcastable to `q.batch_shape`. For example, `log_p` works "just like" `sampling_dist_q.log_prob`.
- `sampling_dist_q`: The sampling distribution, a `tfp.distributions.Distribution`. `float64` dtype recommended. `log_p` and `q` should be supported on the same set.
- `z`: `Tensor` of samples from `q`, produced by `q.sample` for some `n`.
- `n`: Integer `Tensor`. Number of samples to generate if `z` is not provided.
- `seed`: Python integer to seed the random number generator.
- `name`: A name to give this `Op`.
Returns:

The importance sampling estimate: a `Tensor` with shape equal to the batch shape of `q`, and `dtype = q.dtype`.
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2020-10-01 UTC.