Should non-core devs start to look at 0.5 performance, or is it all still too much of a work in progress to report back how performance for existing code bases changes between 0.4 and 0.5?
Just for fun I ran one of my code bases today on 0.4 and on 0.5. I see a pretty significant increase in runtime on 0.5, about 35%:
julia 0.4: 665.095440 seconds (2.03 G allocations: 36.515 GB, 0.53% gc time)
julia 0.5: 904.061203 seconds (4.55 G allocations: 75.579 GB, 0.72% gc time)
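(For anyone wanting to produce comparable numbers on their own code: the lines above look like the output of Julia's built-in `@time` macro. A minimal sketch, with a hypothetical `workload` function standing in for the real code base, which is not public:)

```julia
# Hypothetical stand-in for the actual workload being benchmarked.
function workload()
    s = 0.0
    for i in 1:10^6
        s += sin(i)
    end
    return s
end

workload()        # warm-up run so compilation time is not measured
@time workload()  # prints elapsed time, allocations, and % GC time
```

Running the same script under both 0.4 and 0.5 (after the warm-up call) should give directly comparable `@time` lines like the ones quoted above.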
Also, is there a preference from the core team on how to report this kind of stuff? I don't have the time or the expertise to track down the root cause of what I'm seeing there, and the code I'm using is not public at this point. I'd be happy to give Julia core devs access to the repo if someone wanted to investigate what is going on.
University of California, Berkeley
Definitely! Reports of performance regressions are incredibly valuable. You can just open an issue titled "perf regression: $(description)", with some code to reproduce if possible. We're very much in the phase of trying to take care of these.
On Fri, Apr 22, 2016 at 2:14 PM, David Anthoff <[hidden email]> wrote:
This is definitely the right time to start testing codes with 0.5 and report regressions. In my perfect world, we would have 0.5 at JuliaCon!
On Friday, April 22, 2016 at 11:53:34 PM UTC+5:30, Stefan Karpinski wrote:
Alright, I got it in shape so that one can run the example. Here is the issue:
The code is not public, but I'm happy to give access to any core developer who wants to figure out what is going on.