When self_normalized = True, the KL-forward Csiszar-function is:
f(u) = u log(u) - (u - 1)
When self_normalized = False, the (u - 1) term is omitted.
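As a concrete reading of the formula, here is a minimal NumPy sketch covering both cases; the name kl_forward_csiszar is hypothetical and is used only for illustration:

```python
import numpy as np

def kl_forward_csiszar(logu, self_normalized=False):
  """Hypothetical reference implementation of f(u), evaluated at u = exp(logu)."""
  u = np.exp(logu)
  f = u * logu              # u log(u)
  if self_normalized:
    f -= u - 1.             # subtract (u - 1) so that f'(u = 1) = 0
  return f
```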
Observe that, as an f-Divergence, this Csiszar-function implies:
D_f[p, q] = KL[p, q]
The KL is "forward" because in maximum likelihood we think of minimizing q
as in KL[p, q].
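This identity can be checked numerically. The sketch below (assuming SciPy, with p = N(0, 1) and q = N(1, 1) chosen arbitrarily) averages f(p(x)/q(x)) over samples drawn from q, which is the Monte Carlo form of D_f[p, q], and recovers KL[p, q] = 0.5 for this pair:

```python
import numpy as np
from scipy import stats

# Illustrative choice: p = N(0, 1), q = N(1, 1), for which KL[p, q] = 0.5.
p, q = stats.norm(0., 1.), stats.norm(1., 1.)

x = q.rvs(size=200_000, random_state=np.random.default_rng(0))  # x ~ q
logu = p.logpdf(x) - q.logpdf(x)                                # log(p(x)/q(x))
u = np.exp(logu)

# D_f[p, q] = E_q[f(p(X)/q(X))] with f(u) = u log(u) - (u - 1).
print(np.mean(u * logu - (u - 1.)))  # approximately 0.5 = KL[p, q]
```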
Args:
  logu: float-like Tensor representing log(u) from above.
  self_normalized: Python bool indicating whether f'(u=1)=0. When
    f'(u=1)=0, the implied Csiszar f-Divergence remains non-negative even
    when p, q are unnormalized measures.
  name: Python str name prefixed to Ops created by this function.

Returns:
  kl_forward_of_u: float-like Tensor of the Csiszar-function evaluated at
    u = exp(logu).
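A minimal usage sketch of the documented function, forming a Monte Carlo estimate of KL[p, q]; the distributions and sample size here are arbitrary illustrative choices:

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
p = tfd.Normal(loc=0., scale=1.)
q = tfd.Normal(loc=1., scale=1.)

x = q.sample(100_000, seed=42)            # samples from q
logu = p.log_prob(x) - q.log_prob(x)      # log(u) = log(p(x)/q(x))

f_of_u = tfp.vi.kl_forward(logu, self_normalized=True)
kl_estimate = tf.reduce_mean(f_of_u)      # Monte Carlo estimate of KL[p, q]
```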
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2023-11-21 UTC."],[],[]]