Max Likelihood / MAP estimates with autodifferentiation

Max Likelihood / MAP estimates with autodifferentiation

Alex Williams
I have a (somewhat complicated) posterior distribution that I'd like to maximize via numerical optimization. It would be nice if I could leverage some of Julia's autodifferentiation tools: http://www.juliadiff.org/

The following is a toy example of what I'd like to do:

using Distributions
using Optim

N = MultivariateNormal(zeros(2), eye(2))

f(x) = logpdf(N, x)
optimize(f, [3.0, 1.5], autodiff=:true, method=:l_bfgs)

This produces the following error:

ERROR: MethodError: `_logpdf` has no method matching _logpdf(::Distributions.MvNormal{PDMats.PDMat,Array{Float64,1}}, ::Array{DualNumbers.Dual{Float64},1})


The JuliaDiff website notes the following: "[functions] must be written to take a generic input vector, e.g., f{T}(x::Vector{T}) or just f(x) instead of f(x::Vector{Float64})."
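As a small illustration of that requirement (the function names here are hypothetical, just for contrast), a method restricted to Vector{Float64} rejects the Dual-number vectors that the autodiff machinery passes in, while a generic method accepts them:

f_strict(x::Vector{Float64}) = -0.5 * sum(x .* x)     # only Float64 vectors match this method
f_generic{T<:Real}(x::Vector{T}) = -0.5 * sum(x .* x) # DualNumbers.Dual vectors also match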


Some possible solutions:
  • Forget autodiff. Write functions that compute the gradient for the distributions I need, e.g. grad(D::Normal, x), grad(D::Beta, x), grad(D::Dirichlet, x), and pass these to Optim.
  • Modify the pdf/logpdf functions in Distributions.jl to accept generic input vectors (I have no idea whether this will break things or cause other problems)
  • Use a gradient-free method (see the sketch after this list)
  • (For the simple toy example, you can find the maximum analytically, but this isn't an option for my real problem)
  • (I am also aware of the MLE and MAP fitting tools in Distributions.jl -- these aren't good enough for me either)
  • I could maybe check out Lora.jl?? I don't really care about sampling the whole posterior distribution though (yet)...
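For the gradient-free route above, a minimal sketch (assuming the same symbol-based Optim keyword API as my L-BFGS call, and negating the log-density since optimize minimizes):

using Distributions
using Optim

N = MultivariateNormal(zeros(2), eye(2))
negf(x) = -logpdf(N, x)                             # optimize minimizes, so negate
optimize(negf, [3.0, 1.5], method = :nelder_mead)   # no gradients needed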
Is there a better way to go about this? I'm happy to share any code/solutions I come up with if others are interested.

Thanks!

-- Alex



Re: Max Likelihood / MAP estimates with autodifferentiation

Benjamin Deonovic
There is a long discussion in Distributions.jl (https://github.com/JuliaStats/Distributions.jl/pull/430) about making distributions compatible with DualNumbers (or something similar) so that the JuliaDiff packages can be used for auto-differentiation.

The process is going to take a long time, so you should probably find another workaround for now.

For your particular example you could certainly write your own distribution functions to handle DualNumbers (the switch shouldn't be that hard; you can just replace the Float64 annotations in the relevant Distributions.jl code with Number and it should work).
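For instance, a hand-rolled log-density for the toy example above, written generically so Dual numbers can flow through it. This is just a sketch under that assumption, not the Distributions.jl implementation, and constants are dropped since they don't affect the optimum:

using Optim

# Standard bivariate normal log-density up to an additive constant,
# generic in the element type so DualNumbers.Dual vectors are accepted
my_logpdf{T<:Real}(x::Vector{T}) = -0.5 * sum(x .* x)

negf(x) = -my_logpdf(x)                      # optimize minimizes
optimize(negf, [3.0, 1.5], autodiff = true, method = :l_bfgs)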

Another suggestion: check out https://github.com/johnmyleswhite/Calculus.jl, which lets you approximate gradients and Hessians of functions numerically.
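A sketch of that route (assuming Calculus.gradient(f, x) and the g!(x, storage) gradient signature that Optim expects):

using Calculus
using Distributions
using Optim

N = MultivariateNormal(zeros(2), eye(2))
negf(x) = -logpdf(N, x)

# Finite-difference gradient from Calculus.jl, written into Optim's storage vector
function negf_grad!(x, storage)
    storage[:] = Calculus.gradient(negf, x)
end

optimize(negf, negf_grad!, [3.0, 1.5], method = :l_bfgs)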

If you are interested in MCMC, please check out Mamba (source: https://github.com/brian-j-smith/Mamba.jl, docs: http://mambajl.readthedocs.org/en/latest/). Mamba is in a much more refined state than Lora.jl in terms of source code, examples, and documentation.
