

I'm writing a RadauIIA method (for now, fixed order 5 with 3 points) for ODE BVPs (basically, a collocation method). In the process, I construct an overall nonlinear equation that needs to be solved. It takes a "megavector" x[j] as its argument. Internally, however, it is more convenient to reshape it to y[m,i,n], where m is the index into the original ODE vector, i is the index of the collocation point on a time element (or layer), and n is the time element index. Also, some inputs to the method (the ODE RHS function and the BVP function) expect a z[m]-kind vector. So far I chose to pass a @view of the megavector to them.
The alternatives to reshaping and @view would be:
- Use an inline function or a macro that maps the indices between the megavector and the arrays (I've tried it and didn't see any difference in performance or memory allocation, but @code_warntype shows fewer "red" spots).
- Copy the relevant pieces of the megavector into preallocated arrays of the desired shape. This can also be an alternative to @view.
Is there some rule of thumb for which one would be preferable? And are there any high costs associated with @view and reshape?
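For concreteness, the indexing scheme described above can be sketched like this (the names and sizes here are illustrative, not taken from the actual code):

```julia
# x is the "megavector"; m = ODE dimension, i = collocation points per element,
# n = number of time elements (all sizes made up for illustration)
m, i, n = 4, 3, 10
x = collect(1.0:(m * i * n))

# reshape shares the underlying storage with x: no copy is made
y = reshape(x, m, i, n)

# a z[m]-kind slice for the RHS/BVP functions, again without copying
z = @view y[:, 2, 5]

# mutating through the view mutates the megavector (column-major layout:
# y[:, 2, 5] starts at linear index (5-1)*m*i + (2-1)*m + 1 = 53)
z[1] = -1.0
@assert x[53] == -1.0
```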


You can reshape the view. This should be cheap.
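A quick check that reshaping a view keeps everything backed by the same buffer (illustrative only):

```julia
x = collect(1.0:24.0)

# a view over the first half, then reshaped to a matrix: still no copy
v = @view x[1:12]
w = reshape(v, 3, 4)

w[3, 4] = 99.0           # linear index 12 of v, i.e. x[12]
@assert x[12] == 99.0    # same storage all the way down
```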


reshape makes a view, and views are cheap. Don't worry about this.
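One way to see the cost difference yourself is @allocated: a sliced copy has to allocate storage for its elements, while a view only needs a small wrapper around the parent array (the exact wrapper cost depends on the Julia version):

```julia
x = rand(1000)

copy_bytes = @allocated x[1:100]   # x[1:100] copies 100 Float64s
# a copy must hold at least its 100 elements
@assert copy_bytes >= 100 * sizeof(Float64)

# the view shares x's storage instead of copying it
v = @view x[1:100]
@assert length(v) == 100
```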
BTW, I would love to add a collocation method to JuliaDiffEq. Would you consider making this a package?


The package is available at https://github.com/mobiuseng/RadauBVP.jl
I haven't put it into METADATA yet. I would like to improve documentation and add some tests before doing this.
However, it is already usable, mostly optimised (except for sparsity, which is coming), and I believe it is the only available free ODE BVP solver for Julia right now (the only other alternative I am aware of is `bvpsol` from ODEInterface.jl, but it is not free, and from the limited amount of testing I've done, RadauBVP is faster).


Cool!
On Tue, 2016-11-01 at 10:09, Alexey Cherkaev < [hidden email]> wrote:
> The package is available at https://github.com/mobiuseng/RadauBVP.jl
> I haven't put it into METADATA yet. I would like to improve documentation
> and add some tests before doing this.
>
> However, it is already usable, mostly optimised (except for sparsity, it is
> coming) and I believe it is the only available free ODE BVP solver for
> Julia right now (the only other alternative I am aware of is `bvpsol` from
> ODEInterface.jl, but it is not free and from limited amount of tests I've
> done, RadauBVP is faster).
Concerning BVP, what about ApproxFun:
https://github.com/ApproxFun/ApproxFun.jl#solving-ordinary-differential-equations

Concerning sparse Jacobians: I once wrote a matrix coloring package:
https://github.com/mauro3/MatrixColorings.jl It needs some love, but if
you think that it would be useful for you, ping me and I'll try to
update it.

