In a sense, this divergence is the "reverse" of the Jensen-Shannon
f-Divergence.
This Csiszar-function induces a symmetric f-Divergence, i.e.,
`D_f[p, q] = D_f[q, p]`.

Warning: this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| >> 0`.
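As a minimal sketch of this symmetry (the distributions, sample size, and seed below are arbitrary choices for illustration), `D_f[p, q]` and `D_f[q, p]` can be estimated by Monte Carlo, assuming the usual Csiszar-divergence convention `D_f[p, q] = E_{x~q}[ f(p(x)/q(x)) ]`, i.e. `logu = log(p(x)) - log(q(x))` with `x` drawn from the second argument:

    import numpy as np
    import tensorflow_probability.substrates.numpy as tfp

    tfd = tfp.distributions

    # Two example densities (arbitrary choices for illustration).
    p = tfd.Normal(loc=0., scale=1.)
    q = tfd.Normal(loc=0.5, scale=1.5)

    rng = np.random.default_rng(0)
    n = 200_000

    # D_f[p, q] ~= E_{x~q}[ f(p(x)/q(x)) ], with logu = log p(x) - log q(x).
    x_q = rng.normal(0.5, 1.5, size=n)
    d_pq = np.mean(tfp.vi.arithmetic_geometric(
        p.log_prob(x_q) - q.log_prob(x_q), self_normalized=True))

    # D_f[q, p] ~= E_{x~p}[ f(q(x)/p(x)) ], with logu = log q(x) - log p(x).
    x_p = rng.normal(0., 1., size=n)
    d_qp = np.mean(tfp.vi.arithmetic_geometric(
        q.log_prob(x_p) - p.log_prob(x_p), self_normalized=True))

    print(d_pq, d_qp)  # The two estimates should agree up to Monte Carlo error.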
Args

`logu`: `float`-like `Tensor` representing `log(u)` from above.
`self_normalized`: Python `bool` indicating whether `f'(u=1)=0`. When `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even when `p, q` are unnormalized measures.
`name`: Python `str` name prefixed to Ops created by this function.
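To make the `self_normalized` flag concrete, the following minimal sketch (the input values are arbitrary) evaluates the Csiszar-function at a few points; with `self_normalized=True` the value at `logu = 0` (i.e. `u = 1`) is `0`, while with `self_normalized=False` the `(1 + u) log(2)` term is omitted and the value at `u = 1` is `2 log(2)` instead:

    import numpy as np
    import tensorflow_probability.substrates.numpy as tfp

    logu = np.array([-1., 0., 1.])  # arbitrary sample points for log(u)

    # With self_normalized=True, f(u=1) = 0 and f'(u=1) = 0, so the implied
    # divergence stays non-negative even for unnormalized p, q.
    print(tfp.vi.arithmetic_geometric(logu, self_normalized=True))

    # With self_normalized=False the (1 + u) log(2) correction is dropped;
    # the entry at logu = 0 becomes 2 * log(2) rather than 0.
    print(tfp.vi.arithmetic_geometric(logu, self_normalized=False))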
Returns

`arithmetic_geometric_of_u`: `float`-like `Tensor` of the Csiszar-function evaluated at `u = exp(logu)`.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2023-11-21 UTC."],[],[],null,["# tfp.substrates.numpy.vi.arithmetic_geometric\n\n\u003cbr /\u003e\n\n|------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/probability/blob/v0.23.0/tensorflow_probability/substrates/numpy/vi/csiszar_divergence.py#L377-L427) |\n\nThe Arithmetic-Geometric Csiszar-function in log-space.\n\n#### View aliases\n\n\n**Main aliases**\n\n[`tfp.experimental.substrates.numpy.vi.arithmetic_geometric`](https://www.tensorflow.org/probability/api_docs/python/tfp/substrates/numpy/vi/arithmetic_geometric)\n\n\u003cbr /\u003e\n\n tfp.substrates.numpy.vi.arithmetic_geometric(\n logu, self_normalized=False, name=None\n )\n\nA Csiszar-function is a member of, \n\n F = { f:R_+ to R : f convex }.\n\nWhen `self_normalized = True` the Arithmetic-Geometric Csiszar-function is: \n\n f(u) = (1 + u) log( (1 + u) / sqrt(u) ) - (1 + u) log(2)\n\nWhen `self_normalized = False` the `(1 + u) log(2)` term is omitted.\n\nObserve that as an f-Divergence, this Csiszar-function implies: \n\n D_f[p, q] = KL[m, p] + KL[m, q]\n m(x) = 0.5 p(x) + 0.5 q(x)\n\nIn a sense, this divergence is the \"reverse\" of the Jensen-Shannon\nf-Divergence.\n\nThis Csiszar-function induces a symmetric f-Divergence, i.e.,\n`D_f[p, q] = D_f[q, p]`.\n| **Warning:** this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| \u003e\u003e 0`.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `logu` | `float`-like `Tensor` representing `log(u)` from above. |\n| `self_normalized` | Python `bool` indicating whether `f'(u=1)=0`. When `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even when `p, q` are unnormalized measures. |\n| `name` | Python `str` name prefixed to Ops created by this function. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|-----------------------------|-----------------------------------------------------------------------------|\n| `arithmetic_geometric_of_u` | `float`-like `Tensor` of the Csiszar-function evaluated at `u = exp(logu)`. |\n\n\u003cbr /\u003e"]]