Re: ipopt and jump

tcs
I got the following error

ERROR: type: typeassert: expected Dict{K,V}, got Array{(ASCIIString,Float64),1}

when trying to pass a tolerance option for Ipopt as indicated in the documentation:

solve(m, IpoptOptions=[("tol",1e-6)])

I am using the most up-to-date versions of JuMP and Ipopt.

PS: I still owe you some sample code for my model, but unfortunately I can't find the time right now. As soon as I have more time, I will post it here.

On Saturday, July 19, 2014 2:11:43 PM UTC-4, Iain Dunning wrote:

Whoops, I don't think I reply-all-ed. Really, anything you can provide would be very useful, even if it's synthetic data.

On Jul 19, 2014 2:02 PM, "tcs" <[hidden email]> wrote:
Also, in response to Iain's e-mail: my previous comment seemed to suggest a little more than I meant. For now I just wanted to generate some random data that has the approximate size of the dataset that I am looking at and post it together with my JuMP model code. It would take me a little longer to write the code that generates data from the true model. Of course, only the latter allows comparing recovered parameters to the true ones. If you think that something like this would be helpful, I will try to work on it, but it might take a while.

On Saturday, July 19, 2014 9:56:28 AM UTC-4, tcs wrote:
First of all, thank you for your responses. I am always amazed by how quickly people here respond to questions. I will try to post some code with synthetic data later today.



On Saturday, July 19, 2014 12:37:13 AM UTC-4, Miles Lubin wrote:
To extend Tony's comments:

1) We're not aware of any memory leaks within JuMP or Ipopt, but without a test case, as Tony mentioned, it's hard to say much more. Memory usage that persists after Julia is closed is most likely a property of the Linux memory manager and isn't something that you should need to worry about. You can use a tool like "top" to observe memory usage system-wide and per process. Within a Julia session, you could try calling gc(), which invokes the garbage collector and will hopefully free any memory that's no longer needed.
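
For example, a rough way to watch whether memory is reclaimed across repeated solves might look like this (a sketch; build_and_solve() is a placeholder for your own model-construction and solve code):

for rep in 1:3
    build_and_solve()                  # build the JuMP model and call solve()
    gc()                               # force a garbage-collection pass
    run(`ps -o rss= -p $(getpid())`)   # print this process's resident memory (KB, Linux)
end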

2) There's a discussion of nonlinear optimization performance issues in the JuMP manual: http://jump.readthedocs.org/en/release-0.5/nlp.html#performance. More specifically, it's important to find out whether the bottleneck is in the function evaluation (JuMP's job) or in Ipopt itself. Could you report the "Total CPU secs in ..." lines from the output?
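
If Ipopt terminates, those summary lines at the end of its output look roughly like the following (the dots stand for the timings to report):

Total CPU secs in IPOPT (w/o function evaluations)   =      ...
Total CPU secs in NLP function evaluations           =      ...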

Thanks,
Miles

On Friday, July 18, 2014 11:14:13 PM UTC-4, Tony Kelman wrote:
1) Not sure on that one, but I also have not yet ported my large nonlinear models over to JuMP for detailed benchmarking. I know reducing the setup time and memory consumption for nonlinear models is on the JuMP team's to-do list, but the second-best way to help with that is to provide reproducible example code. Can you replace any sensitive data with made-up inputs, just to test the solver in a way that you could share? The best solution would be patches to address the problem, but that's much harder.

2) Are you watching the output from Ipopt? Does it even start the optimization process, or does the setup itself seem to take a long time? Can you provide a breakdown of setup time vs. Ipopt linear solver time vs. function callback timing, perhaps for smaller instances of your problem that are able to run? There are too many unknowns when it comes to large nonlinear optimization models, so you'll need to provide more information about what behavior you're seeing. I recommend setting the Ipopt option "print_timing_statistics" to "yes"; this will give you a detailed timing breakdown, but only if the optimization actually terminates (successfully, or via "max_iter" or "max_cpu_time").
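
For instance, something like this should set those options (an untested sketch, assuming the master-branch syntax where IpoptSolver passes keyword arguments through as Ipopt options; the max_cpu_time value is arbitrary):

m = Model(solver=IpoptSolver(print_timing_statistics="yes", max_cpu_time=3600.0))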

If you want to solve a single very large problem, the version of Ipopt interfaced by Julia is not capable of parallelizing across multiple nodes of a distributed-memory cluster. There is an experimental MPI branch in the Ipopt repository, but to my knowledge it has not been hooked up to Julia, and the scalability results even from C++ or AMPL were not very encouraging. If you use a linear solver other than MUMPS, you can, however, parallelize the Newton step direction at each Ipopt iteration using shared-memory multithreading. But we'd have to know whether the Newton step computation is actually the bottleneck for your problems. In C++ or AMPL it often is, but your models may be taxing JuMP's automatic-differentiation implementation to an unusual extent.

You can set the Ipopt option "hessian_approximation" to "limited-memory" to test whether the second derivatives are dramatically more expensive than the first derivatives. Typically using this quasi-Newton approximation comes at the cost of requiring more iterations to converge than using exact Hessian information, sometimes not converging at all, but for some problem types the Hessian may be very expensive to calculate and it may be worth it.
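
For example (again a sketch, with the same solver-object syntax as above):

m = Model(solver=IpoptSolver(hessian_approximation="limited-memory"))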

-Tony


On Friday, July 18, 2014 4:54:43 PM UTC-7, tcs wrote:
Hi all,

I have two JuMP-related questions:

(1) I have noticed a significant slowdown when I re-estimate a model with JuMP. I am mostly using the nonlinear solver interface in combination with Ipopt. The first estimation seems to hold on to memory resources that are not freed afterwards.
This might of course be a more general Julia issue or a problem with my operating system (Ubuntu 14.04). The weird thing is that the problem does not disappear even after restarting Julia; only a system restart solves it. Can you confirm this issue on other platforms/operating systems?

(2) In general, I would like to understand better what happens when JuMP builds the model, because I feel like I am testing the limits of JuMP's nonlinear interface with the number of variables (sometimes > 100,000) and constraints that I am using in my applications. I like to use JuMP because it provides a convenient way of estimating economic models along the lines of http://web.stanford.edu/group/SITE/archive/SITE_2007/segment_5/Judd_SMaxLikJuly2007.pdf without having to use AMPL or hand-code derivatives in MATLAB. However, I have to abort some of the specifications because the solver does not reach the estimation part in an acceptable time (in my case that means I let the computer run overnight and nothing seems to happen).

I would like to understand whether I just need to run my code on a cluster that has more working memory, or whether the problem is processor time. I can imagine that there are problems with the Hessian computation due to the large number of variables. If that is the case, is it possible to sacrifice the exact Hessian when using Ipopt, if it means the difference between not being able to estimate the model at all and estimating it very slowly?

Unfortunately, I cannot simply provide you with examples of what I am doing, due to the data that I am using. The problems I am talking about are structurally similar to the engine replacement problems mentioned in the paper above, but of much higher dimensionality.

Thank you!



 

Re: ipopt and jump

Iain Dunning
Try

IpoptOptions=["tol" => 1e-6]

On the master version of JuMP it's actually

m = Model(solver=IpoptSolver(tol=1e-6))

but it looks like we need to manually rebuild the documentation.
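
Putting it together, a minimal complete sketch on master might look like this (the toy variable and objective are just for illustration; the macro names assume the 0.5-era JuMP syntax):

using JuMP, Ipopt

m = Model(solver=IpoptSolver(tol=1e-6))
@defVar(m, x >= 0)                    # one decision variable with a lower bound
@setNLObjective(m, Min, (x - 2)^2)    # a simple smooth nonlinear objective
status = solve(m)                     # Ipopt runs with tol = 1e-6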
