In a sense, this divergence is the "reverse" of the Arithmetic-Geometric
f-Divergence.
This Csiszar-function induces a symmetric f-Divergence, i.e.,
`D_f[p, q] = D_f[q, p]`.
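This symmetry follows from the pointwise Csiszar-function identity
f(u) = u * f(1/u), which both the self-normalized and unnormalized forms
satisfy. A minimal numerical sanity check (the sample values of `logu`
are arbitrary):

    import numpy as np
    import tensorflow as tf
    import tensorflow_probability as tfp

    logu = tf.constant([-3.0, -0.5, 0.1, 2.0])

    # Symmetry of the induced f-Divergence reduces to the pointwise
    # identity f(u) = u * f(1/u); in log-space this reads
    # f(exp(logu)) == exp(logu) * f(exp(-logu)).
    lhs = tfp.vi.jensen_shannon(logu)
    rhs = tf.exp(logu) * tfp.vi.jensen_shannon(-logu)

    np.testing.assert_allclose(lhs.numpy(), rhs.numpy(), rtol=1e-5)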
Warning: this function makes non-log-space calculations and may therefore
be numerically unstable for `|logu| >> 0`.

For more information, see:
Lin, J. "Divergence measures based on the Shannon entropy." IEEE
Transactions on Information Theory, 37(1), 145-151, 1991.
Args
  logu: `float`-like `Tensor` representing `log(u)` from above.
  self_normalized: Python `bool` indicating whether `f'(u=1)=0`. When
    `f'(u=1)=0`, the implied Csiszar f-Divergence remains non-negative
    even when `p, q` are unnormalized measures.
  name: Python `str` name prefixed to Ops created by this function.
Returns
  jensen_shannon_of_u: `float`-like `Tensor` of the Csiszar-function
    evaluated at `u = exp(logu)`.
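As a usage sketch (the distributions, sample size, and seed below are
illustrative, not part of the API), the function can be combined with a
plain Monte Carlo average to estimate the divergence between two
distributions:

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions

    p = tfd.Normal(loc=0., scale=1.)
    q = tfd.Normal(loc=1., scale=2.)

    # Monte Carlo estimate of D_f[p, q] = E_q[f(p(X) / q(X))] with f the
    # Jensen-Shannon Csiszar-function. With `self_normalized=True` this
    # estimates KL[p, m] + KL[q, m], where m = 0.5 (p + q).
    x = q.sample(100000, seed=42)
    logu = p.log_prob(x) - q.log_prob(x)
    js_estimate = tf.reduce_mean(
        tfp.vi.jensen_shannon(logu, self_normalized=True))

Since `|logu|` can become large in the tails of `q`, the numerical
stability warning above applies to such estimates.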
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2023-11-21 UTC."],[],[],null,["# tfp.vi.jensen_shannon\n\n\u003cbr /\u003e\n\n|--------------------------------------------------------------------------------------------------------------------------------------------------|\n| [View source on GitHub](https://github.com/tensorflow/probability/blob/v0.23.0/tensorflow_probability/python/vi/csiszar_divergence.py#L317-L374) |\n\nThe Jensen-Shannon Csiszar-function in log-space. \n\n tfp.vi.jensen_shannon(\n logu, self_normalized=False, name=None\n )\n\nA Csiszar-function is a member of, \n\n F = { f:R_+ to R : f convex }.\n\nWhen `self_normalized = True`, the Jensen-Shannon Csiszar-function is: \n\n f(u) = u log(u) - (1 + u) log(1 + u) + (u + 1) log(2)\n\nWhen `self_normalized = False` the `(u + 1) log(2)` term is omitted.\n\nObserve that as an f-Divergence, this Csiszar-function implies: \n\n D_f[p, q] = KL[p, m] + KL[q, m]\n m(x) = 0.5 p(x) + 0.5 q(x)\n\nIn a sense, this divergence is the \"reverse\" of the Arithmetic-Geometric\nf-Divergence.\n\nThis Csiszar-function induces a symmetric f-Divergence, i.e.,\n`D_f[p, q] = D_f[q, p]`.\n| **Warning:** this function makes non-log-space calculations and may therefore be numerically unstable for `|logu| \u003e\u003e 0`.\n\nFor more information, see:\nLin, J. \"Divergence measures based on the Shannon entropy.\" IEEE Trans.\nInf. Th., 37, 145-151, 1991.\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Args ---- ||\n|-------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------|\n| `logu` | `float`-like `Tensor` representing `log(u)` from above. |\n| `self_normalized` | Python `bool` indicating whether `f'(u=1)=0`. When `f'(u=1)=0` the implied Csiszar f-Divergence remains non-negative even when `p, q` are unnormalized measures. |\n| `name` | Python `str` name prefixed to Ops created by this function. |\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n| Returns ------- ||\n|-----------------------|-----------------------------------------------------------------------------|\n| `jensen_shannon_of_u` | `float`-like `Tensor` of the Csiszar-function evaluated at `u = exp(logu)`. |\n\n\u003cbr /\u003e"]]