
Nitin Bansal • 5 years ago

Great write-up! It really helped improve my understanding of the subject!
I had a few small doubts which I am not able to resolve.
1) Why do we talk about a joint probability distribution only when we discuss the Wasserstein loss, and not when we discuss KL divergence?

Regards,
Nitin

Jeevesh Juneja • 4 years ago

In some ways, the joint probability distribution is the reason why we can split a probability distribution, as in the parallel-lines example, into something supported on two parallel lines.


Bain Jammin • 6 years ago

When showing that maximizing the likelihood is equivalent to minimizing KL, you should take argmax and argmin instead of max/min, since adding a constant does indeed modify the max/min but not the argmax/argmin.
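
Spelled out, the argument with argmax/argmin would read something like this (a sketch in the post's notation, assuming the samples x_i are drawn from P_r):

    argmax_theta (1/m) sum_{i=1}^m log P_theta(x_i)
      -> argmax_theta E_{x~P_r}[log P_theta(x)]                                  (as m -> infinity)
      = argmax_theta ( E_{x~P_r}[log P_theta(x)] - E_{x~P_r}[log P_r(x)] )       (added term is constant in theta)
      = argmin_theta E_{x~P_r}[log P_r(x) - log P_theta(x)]
      = argmin_theta KL(P_r || P_theta)

With max/min instead of argmax/argmin, the constant E_{x~P_r}[log P_r(x)] shifts the optimal value, but it never changes which theta attains it.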

Vladimir Ralev • 6 months ago

That would make more sense. And also m→∞ seems like a poor expression to capture the intent of sampling exhaustively the whole distribution Pr and not simply cycling infinitely over some unspecified finite or infinite subset of samples.

Kiki Rizki Arpiandi • 2 years ago

I think Bain is correct, it should be argmax instead of max; using max makes the equation incorrect.

Ravi Teja • 6 years ago

Hi Alex, great post! In the parallel lines example, when explaining KL and reverse KL divergence, the points (theta, 0) and (theta, 0.5) should be switched, I think. Q(x,y) cannot be zero at (theta, 0.5).

Markus Wenzel • 5 years ago

Also, if I understood things correctly here, the statement holds for _any_ point (0,y) and (theta,y), respectively, as the distributions are completely disjoint. Finding only one (like (0,0.5)) is sufficient, though, for the argument.
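
For concreteness, a sketch of why one such point is enough (my reading of the post's parallel-lines setup, where P_0 puts its mass on the segment x = 0, y ~ U[0,1] and P_theta on x = theta, y ~ U[0,1], with theta != 0):

    KL(P_0 || P_theta) = E_{(x,y)~P_0}[ log( P_0(x,y) / P_theta(x,y) ) ] = +infinity,

because P_theta(x,y) = 0 at every point (0, y) where P_0(x,y) > 0; a single such point, like (0, 0.5), already makes the ratio blow up. The same reasoning with the roles swapped gives reverse KL = +infinity, while the Wasserstein distance between the two lines is just |theta|.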

Michael Dietz • 7 years ago

The weight clipping requirement introduced here seems a bit extreme. Just thinking about it: would this sort of weight clipping to [-.01, .01] limit the ability of the critic/discriminator to represent non-linear functions? Considering we use non-linearities like ReLUs, for example, with weights clipped to this range wouldn't ReLUs very rarely actually introduce any non-linearities into our network (their original purpose)? The same goes for most other activations I can think of. I guess this is done on purpose so the critic is k-Lipschitz? (Maybe biases could allow non-linearities to be introduced, but are there any limitations here?) But isn't this weight clipping a hack then, and doesn't it constrain us to critics that only represent linear functions, which defeats the purpose? Is this a big problem? Hoping I am not understanding or missing something, because otherwise Wasserstein seems very promising to me. Thanks, Mike

alexirpan • 7 years ago

ReLUs introduce non-linearities when some outputs are < 0 and other outputs are > 0. This doesn't have anything to do with the magnitude of the weights - I don't think weight clipping makes the network "more linear", or only able to represent linear functions.

However, it does limit what kinds of functions you can learn. The weight clipping is needed to make the theory carry through, but you can always clip to a larger range if you need to.
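
A tiny sanity check of the ReLU point (a toy snippet added purely for illustration, not something from the post or the paper):

    import torch
    import torch.nn as nn

    # Even with every weight at the clipping boundary [-0.01, 0.01], the
    # pre-activation still changes sign as the input changes, so the ReLU
    # still switches between its "off" and "on" regimes - the clipped
    # critic is not a linear function of its input.
    layer = nn.Linear(1, 1, bias=False)
    with torch.no_grad():
        layer.weight.fill_(0.01)
    relu = nn.ReLU()

    print(relu(layer(torch.tensor([[-5.0]]))).item())  # 0.0   -> ReLU is off
    print(relu(layer(torch.tensor([[5.0]]))).item())   # ~0.05 -> ReLU is on

Clipping shrinks how steep the function can be (which is exactly the Lipschitz bound the theory needs), but it does not remove the kinks.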

Michael Dietz • 7 years ago

thanks for clearing that up, great post btw!

Rafael Valle • 6 years ago

Alex, thank you for this great post.
In the part of the text where you define the probability distributions over R^2 to be (0, y) and compare distance functions, can you add a line to compare it with least-squares as well?

HS Choi • 7 years ago

Hi, I'm reading through your article and have a question about a mathematical term.
Could you tell me what "dimensional support" means, as in "P_theta has low dimensional support"? I've tried to google it but failed to figure out what it means. It would be a lot of help.
Thanks :)

Csaba Botos • 6 years ago

As far as I know, a general function's (i.e. f: X -> Y) support is the subset of X where f(x) is non-zero.
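
In symbols, the definition I have in mind (standard, but glossing over measure-theoretic details):

    supp(f) = { x in X : f(x) != 0 }

For a probability distribution like P_theta, "low dimensional support" then means the set where P_theta actually puts mass is a lower-dimensional subset of the ambient space - e.g. a line inside R^2, as in the post's parallel-lines example, rather than a region with full area.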

Karan Desai • 7 years ago

Amazing, you broke down the math into very concise and understandable chunks. The paper scared me, but I managed to understand it by having this read through simultaneously. Keep up the good work !

gondolier • 7 years ago

The gradient computation \nabla_\theta W(P_r, P_\theta) seems wrong, because the optimal 1-Lipschitz f_w for the pair (P_r, P_\theta) depends on theta, so the gradient of the second term is not zero.

Is this mistake not important?

alexirpan • 7 years ago

We're updating with the partial derivative with respect to \theta, not the total derivative, so we don't need to worry about the effects of other variables.

gondolier • 7 years ago

No, I meant just the partial derivative wrt theta, of course. The calculation is correct for a fixed f_w. But your f_w is the maximizer that achieves the supremum in W(P_r, P_theta), so it depends on theta, which you can write as f_{w,theta} if you will.
Simple example: take theta = sigma, P_r = N(0,1) and P_theta = N(0, theta^2), and work out what f is.

Therefore \nabla_theta of E_{x~P_r} f(x) is not zero, because f depends on theta. So the expression is incorrect.

alexirpan • 7 years ago

Oh okay, I see your concern. I think you still don't need to worry about this, because we're currently at the optimal f for a given \theta. At this maximum we have df/d\theta = 0, so the derivative of the first term goes to 0.

I'm not sure my math skills are up to par with making this argument rigorous, so I will at least say this: I believe the math works out, I may be wrong, and if the math doesn't work out, I don't think the effect will be important empirically. It likely matters much less than the approximation error you get from not optimizing over all 1-Lipschitz functions.
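
For what it's worth, the standard way to make this precise is an envelope-theorem (Danskin-style) argument; a sketch, under the extra assumptions that the maximizer w*(\theta) is unique, differentiable in \theta, and interior:

    Write W(\theta) = max_w g(w, \theta), with
    g(w, \theta) = E_{x~P_r}[f_w(x)] - E_z[f_w(g_\theta(z))].

    Then dW/d\theta = (∂g/∂\theta)(w*(\theta), \theta)
                    + (∂g/∂w)(w*(\theta), \theta) * dw*/d\theta,

    and the second term vanishes because ∂g/∂w = 0 at the maximizer
    (first-order optimality), leaving only the partial derivative with
    f_w held fixed.

So the dependence of the optimal f_w on \theta drops out of the gradient, at least under these smoothness assumptions.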

Stanislav Zámečník • 2 years ago

Great explanation.

But I have a question about weight clipping. How come, if we make the weight space W compact, every function f_w will be K-Lipschitz?

Thanks for help.
Stano.
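
For what it's worth, the standard argument I am aware of goes roughly like this, assuming a feedforward critic with 1-Lipschitz activations such as ReLU:

    For one layer h(x) = a(W_l x + b_l) with a 1-Lipschitz activation a,
    |h(x) - h(y)| <= ||W_l|| * |x - y|,
    so composing layers gives a Lipschitz constant of at most prod_l ||W_l||.

If every individual weight is clipped to [-c, c], each ||W_l|| is bounded by a constant depending only on c and the layer sizes, so there is a single K (a function of the architecture and c, not of the particular w) such that every f_w is K-Lipschitz. Compactness of the weight space W is what guarantees such a uniform bound exists; an unbounded W would let the Lipschitz constant grow without limit.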

Rajarshi Banerjee • 6 years ago

How on earth did you get the support at (0, 0.5) or (theta, 0.5) to be zero? z is uniformly distributed on [0, 1]. Just how do you get 0 as the support there?

Shashank Gupta • 6 years ago

Many thanks for this great blog post! :)

Marco Singh • 7 years ago

Thanks a lot for this excellent explanation! :-)
I have a question regarding the conditions for the marginal distribution you define: "The amount of mass that leaves x is ∫y γ(x,y)dy. This must equal P_r(x), the amount of mass originally at x." If we have two distributions and we want to move mass around, your definition implies that all the original probability mass at P_r(x) should be moved. Wouldn't it be the case that we only want to move as little as possible from P_r(x) to go from P_r to P_theta, i.e. that it shouldn't necessarily be all the mass that should be moved?

This explanation defines it differently fyi: https://vincentherrmann.git...

Marco Singh • 7 years ago

I think I figured it out. Since we sum over all possible y, the case where x = y is included, hence we kind of move mass over a distance of 0 in the case where mass isn't actually moved. This way the EM/Wasserstein distance isn't penalized, since that mass is multiplied by a distance of 0.
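
A small worked example of that point (my own numbers, not from the post): take P_r = P_theta = (0.5, 0.5) on two points {a, b}. The coupling

    γ(a,a) = 0.5, γ(b,b) = 0.5, γ(a,b) = γ(b,a) = 0

satisfies both marginal constraints (each row sums to P_r, each column to P_theta), so formally "all the mass leaves each point", yet the transport cost is 0.5·d(a,a) + 0.5·d(b,b) = 0, because the mass that "moves" from a point to itself travels a distance of 0.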

Siddarth Malreddy • 7 years ago

Thanks for the article. Could you explain what you mean when you say "we make the generator a feedforward net instead of a convolutional one"?

Khalid EL • 6 years ago

If you change the architecture of the neural network representing the generator, I guess we obtain bad samples when we switch to an MLP, as convolutional neural networks are well suited to images: they encode the translation invariance property, for example (with local filters applied with shared weights in the same layer).

Roger Trullo • 7 years ago

Thanks for the post! I have a question. If I understand correctly, we should first train the critic, in order to have the function fw which will be used in the Wasserstein computation. This is done by maximizing E[fw(x)]-E[fw(gtheta(z))]. Once we are done with that, we can minimize the distance between Pr and Ptheta by minimizing -E[fw(gtheta(z))]. I am checking the original code posted here: https://github.com/martinar... , and they seem to be minimizing E[fw(x)]-E[fw(gtheta(z))] first, and then minimizing E[fw(gtheta(z))]. I am trying to understand why that is - if these two things are equivalent, why the difference? Thanks!

Zhiwen Lin • 6 years ago

Hi Roger, I have the same question about it. Did you figure it out?

alexirpan • 7 years ago

I'm not great at reading PyTorch, but I think the code is doing the correct thing. Note one = 1 and mone is the negative.

Line 189: Train discriminator on real given label one.
Line 197: Train discriminator on fake given label mone.
Line 213: Train generator on fake given label one, which is adversarial compared to how the discriminator wants to do things.
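
For comparison, here is a minimal sketch of the loop as the post describes it, written with explicit loss signs instead of the one/mone trick (all names, sizes, and the toy data are placeholders of mine, not the repo's):

    import torch
    import torch.nn as nn

    critic = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
    gen = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))
    opt_D = torch.optim.RMSprop(critic.parameters(), lr=5e-5)
    opt_G = torch.optim.RMSprop(gen.parameters(), lr=5e-5)
    c, n_critic = 0.01, 5

    def sample_real(n=64):               # stand-in for the data distribution P_r
        return torch.randn(n, 1) + 3.0

    for step in range(1000):
        # Critic: maximize E[fw(x)] - E[fw(gtheta(z))]  (minimize the negative)
        for _ in range(n_critic):
            opt_D.zero_grad()
            fake = gen(torch.randn(64, 1)).detach()
            loss_D = -(critic(sample_real()).mean() - critic(fake).mean())
            loss_D.backward()
            opt_D.step()
            with torch.no_grad():
                for p in critic.parameters():
                    p.clamp_(-c, c)       # weight clipping after every critic update
        # Generator: minimize -E[fw(gtheta(z))]
        opt_G.zero_grad()
        loss_G = -critic(gen(torch.randn(64, 1))).mean()
        loss_G.backward()
        opt_G.step()

Whether you write the critic objective as E[fw(x)] - E[fw(gtheta(z))] and maximize it, or flip the signs and minimize, only changes the sign convention of fw, which is probably why the repo's version looks reversed from the post's.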

洪語祥 • 7 years ago

Great read-through! Thanks for this article!

udibr • 7 years ago

Thank you!
In the formula just before Algo 1 you accidentally wrote g_{\theta}(x) instead of g_{\theta}(z).

Alex Coventry • 7 years ago

If anyone in Boston is interested in discussing this paper further, we'll be doing a walkthrough of its implementation this Tuesday evening. We've discussed the theory during the two preceding meetings. https://www.meetup.com/Camb...

Larry Guo • 7 years ago

Could you kindly record a video and upload it to YouTube?
A group of us from Taiwan appreciated your article and we really hope to see your walkthrough online. Larry

洪語祥 • 7 years ago

So, is there any video recording? So sad for not being there.

gwern • 7 years ago

I wondered about the connection to actor-critic too; GANs have taken inspiration from RL but so far they haven't given anything back, and offhand, I don't know of anything like clipping in actor-critic, but my thought was that it was the *critic* which should be clipped, not the actor. The critic seems exactly analogous to the discriminator in GAN, as it tries to judge the quality of the action taken by the generator (image emitted). So perhaps the key experiment here would be to add clipping to critic weights and see if it reduces the variance and the system as a whole learns faster?

I also wonder about the scale; with WGAN, the Wasserstein distance and losses can change dramatically depending on the exact model structure, and you seem to need to adjust the learning rate drastically (is that the implication of your mention of the constant being buried in alpha? I've mentioned this elsewhere that WGAN seems to need aggressive tweaking of the learning rate, but so far no one else has mentioned it). One of the key ingredients is letting the loss vary over a wider range rather than logging it or whatever; what might the equivalent be for actor-critic?

alexirpan • 7 years ago

Yes, I meant clipping should be added to the critic.

The objective you're taking gradients on for the generator update is K * Wasserstein distance. The gradient update you get is alpha * K * grad_theta(Wasserstein distance), so yeah, I imagine the learning rate needs to be carefully re-tuned whenever you change the model or c. If you imagine a fixed generator architecture, there's some optimal alpha * K you want, so whenever K changes, alpha must change too. (Another argument for why estimating K would be cool - it would let you speed up the hyperparameter search over the learning rate.)
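
Spelled out (in the post's notation, with K the Lipschitz constant implied by the clipping):

    theta <- theta - alpha * grad_theta [ K * W(P_r, P_theta) ]
           = theta - (alpha * K) * grad_theta W(P_r, P_theta),

so alpha and K only enter through the product alpha * K. Changing the architecture or the clipping range changes K, and alpha has to move the other way to keep alpha * K where you want it.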

I'm not sure the log vs no log matters in the actor-critic setting - the thing that seems important is the gradients of the critic. It just so happens that using the log makes the gradient saturate more often (as argued in the paper.)