tfp.experimental.mcmc.sample_chain_with_burnin
Implements Markov chain Monte Carlo via repeated TransitionKernel steps.
tfp.experimental.mcmc.sample_chain_with_burnin(
    num_results,
    current_state,
    previous_kernel_results=None,
    kernel=None,
    num_burnin_steps=0,
    num_steps_between_results=0,
    trace_fn=_trace_current_state,
    parallel_iterations=10,
    seed=None,
    name=None
)
This function samples from a Markov chain at current_state whose stationary distribution is governed by the supplied TransitionKernel instance (kernel).

This function can sample from multiple chains, in parallel. (Whether or not there are multiple chains is dictated by the kernel.)
The current_state can be represented as a single Tensor or a list of Tensors which collectively represent the current state.
Since MCMC states are correlated, it is sometimes desirable to produce additional intermediate states, and then discard them, ending up with a set of states with decreased autocorrelation. See [Owen (2017)][1]. Such 'thinning' is made possible by setting num_steps_between_results > 0. The chain then takes num_steps_between_results extra steps between the steps that make it into the results. The extra steps are never materialized, and thus do not increase memory requirements.
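As an illustration (reusing the hypothetical hmc kernel from the sketch above), thinning is requested through num_steps_between_results:

# Sketch of thinning: 9 intermediate steps are taken (but never stored)
# between consecutive retained draws, so `thinned.trace` still contains
# exactly `num_results` states.
thinned = tfp.experimental.mcmc.sample_chain_with_burnin(
    num_results=500,
    current_state=tf.zeros([4]),
    kernel=hmc,
    num_burnin_steps=200,
    num_steps_between_results=9,
    seed=43)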
In addition to returning the chain state, this function supports tracing of auxiliary variables used by the kernel. The traced values are selected by specifying trace_fn. By default, all chain states but no kernel results are traced.
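For example (again assuming the hypothetical HMC kernel above, whose kernel results expose an is_accepted field), a custom trace_fn can record auxiliary quantities alongside the state:

# Sketch of a custom trace_fn: record the chain state together with the
# per-step Metropolis acceptance flag from the kernel results.
def trace_fn(current_state, kernel_results):
  return current_state, kernel_results.is_accepted

result = tfp.experimental.mcmc.sample_chain_with_burnin(
    num_results=1000,
    current_state=tf.zeros([4]),
    kernel=hmc,
    num_burnin_steps=200,
    trace_fn=trace_fn,
    seed=44)

states, is_accepted = result.trace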
Args

num_results: Integer number of Markov chain draws.
current_state: Tensor or Python list of Tensors representing the current state(s) of the Markov chain(s).
previous_kernel_results: A Tensor or a nested collection of Tensors representing internal calculations made within the previous call to this function (or as returned by bootstrap_results).
kernel: An instance of tfp.mcmc.TransitionKernel which implements one step of the Markov chain.
num_burnin_steps: Integer number of chain steps to take before starting to collect results. Default value: 0 (i.e., no burn-in).
num_steps_between_results: Integer number of chain steps between collecting a result. Only one out of every num_steps_between_results + 1 steps is included in the returned results. The number of returned chain states is still equal to num_results. Default value: 0 (i.e., no thinning).
trace_fn: A callable that takes in the current chain state and the previous kernel results and returns a Tensor or a nested collection of Tensors that is then traced along with the chain state.
parallel_iterations: The number of iterations allowed to run in parallel. It must be a positive integer. See tf.while_loop for more details.
seed: PRNG seed; see tfp.random.sanitize_seed for details.
name: Python str name prefixed to Ops created by this function. Default value: None (i.e., 'experimental_mcmc_sample_chain_with_burnin').
Returns

result: A RunKernelResults instance containing information about the sampling run. The main field is trace, the history of outputs of trace_fn. See RunKernelResults for the contents of other fields.
References
[1]: Art B. Owen. Statistically efficient thinning of a Markov chain sampler.
Technical Report, 2017.
http://statweb.stanford.edu/~owen/reports/bestthinning.pdf