
Tomas Capretto • 4 years ago

Really nice post. I enjoyed it a lot while reading through it. Very informative! Thanks!!

Facundo Muñoz • 4 years ago

Hi Martin! Thanks for the post. It's really informative.

A quick question: in the "weakly-identified sigmoid model", I don't quite get **where** or how "the model strictly enforces that $wx+b > 0$". Both $w$ and $b$ are unconstrained parameters, and I don't see where non-positive values of $wx+b$ are excluded, other than that they yield very small log-likelihoods.
Thanks!

Martin Modrák • 4 years ago

You are correct - the only reason those values are excluded is that they yield very small log-likelihoods. I now see how the wording might be a bit confusing; maybe "the data only support $wx+b > 0$" or something similar would have been clearer. Thanks for pointing this out.

Facundo Muñoz • 4 years ago

Oh, thanks for the confirmation. Yes, I guess it was the term "strictly" which suggested that $-0.00001$ is absolutely forbidden while $0.00001$ is not.

T Kamiya • 6 years ago

I really enjoyed reading this blog. Thank you for posting.

Bob Carpenter • 7 years ago

Nice examples!

You're using something like Andrew's notion of soft identification of a posterior mean.

The traditional notion of identifiability is that a likelihood is not identifiable if there are two distinct parameter values $\theta$ and $\theta'$ for which every data set $y$ has the same likelihood, i.e., $p(y \mid \theta) = p(y \mid \theta')$ for all data sets $y$.

See: https://en.wikipedia.org/wi...
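
(A toy sketch of this definition, assuming nothing from the post itself: in the model $y \sim \mathrm{Normal}(a + b, 1)$, the pairs $(a, b) = (1, 2)$ and $(2, 1)$ produce identical likelihoods for every data set $y$, so $a$ and $b$ are non-identifiable in this strict sense.)

```python
# Toy illustration of strict non-identifiability (my own example, not
# from the post): in y ~ Normal(a + b, 1) only the sum a + b enters the
# likelihood, so (a, b) = (1, 2) and (2, 1) give identical likelihoods
# for every possible data set y.
import numpy as np
from scipy.stats import norm

def lik(a, b, y):
    """Likelihood of data y under y_i ~ Normal(a + b, 1)."""
    return np.prod(norm.pdf(y, loc=a + b, scale=1.0))

y = np.array([2.7, 3.1, 2.9, 3.4])   # any data set gives the same result
print(lik(1.0, 2.0, y))   # theta  = (1, 2)
print(lik(2.0, 1.0, y))   # theta' = (2, 1): exactly the same value
```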

Martin Modrák • 7 years ago

Thanks for pointing this out - I should be more careful with my language. I updated the wording of the intro to reflect the technical meaning. However, I still believe that on the forums people use this term a lot more loosely, so I want to capture that meaning as well.