GSOC Topic clarification


GSOC Topic clarification

Julia Developer mailing list
Hi,

My name is Christopher and I'm a Computational Engineering master's student from Erlangen, Germany.
I already successfully took part in Google Summer of Code in 2014 and recently decided to apply again this year.

Due to my study program and my personal interests, I'm very curious about programs and programming languages related to HPC.
Julia is an exciting project that I have been following and trying out for a year. This includes a project at my university that led me to hold a workshop about Julia and to write a paper, "Implementing the Lattice Boltzmann method using the Julia language", including performance measurements and a comparison to a C++ implementation, which is currently in the process of being published.
I realized that Julia already has some very good features but is nevertheless missing some important ones. Other essential features (like distributed parallelism) seem to be under development, and the API and functionality differ from version to version.
I personally see this as an opportunity to apply and extend my knowledge at the same time on a real-life project.

Going through the project ideas page, three projects attracted my attention:
* Native Julia implementation of iterative solvers for numerical linear algebra
* Native Julia implementation of massively parallel sparse linear algebra routines
* Ensure that Julia runs smoothly on current large HPC systems

At first this may look confusing, but all three topics have something in common, and I wonder if they are strictly separated.
Iterative solvers often depend on sparse linear algebra routines.
Nevertheless, I think the focus of the first topic is more on the basic (serial) routines (like Arnoldi and Lanczos iterations), while in the second topic "massively parallel" are the key words.
It would be great if you could give me some hints as to whether my interpretation is right and how strict the separation is.
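
To make concrete what I mean by basic (serial) routines, here is a rough sketch of a Lanczos iteration in plain Julia (my own illustration, not code from the ideas page); note that the only place the matrix enters is a matrix-vector product, which is exactly where the sparse routines from the second topic would plug in:

    using LinearAlgebra

    # Hypothetical, unoptimized serial Lanczos tridiagonalization of a
    # symmetric matrix A; returns T (tridiagonal) with T ~ Q' * A * Q.
    function lanczos(A, q1::AbstractVector, m::Integer)
        n = length(q1)
        Q = zeros(n, m)
        alpha = zeros(m)
        beta = zeros(m - 1)
        Q[:, 1] = q1 / norm(q1)
        for j in 1:m
            w = A * Q[:, j]              # the only use of A: a (sparse) matvec
            alpha[j] = dot(Q[:, j], w)
            w -= alpha[j] * Q[:, j]
            j > 1 && (w -= beta[j-1] * Q[:, j-1])
            j == m && break
            beta[j] = norm(w)
            Q[:, j+1] = w / beta[j]
        end
        return SymTridiagonal(alpha, beta), Q
    end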

I assume you have distributed parallelism in mind when you are talking about "massively parallel" code.
This suggests an overlap with the third topic, as most of the time in scientific codes is often spent in (sparse or dense) linear algebra kernels.
Do you have a specific application in mind for this task?
Is the aim of this project something like a proof of concept for the current distributed parallelism model?
Does it mean that most of the work has to be done in the runtime system code? Or does it mean optimizing an existing code (one already running on a distributed cluster) to get reasonable scaling results? Or do you think the hardest part will be getting the Julia code to run within a classical (MPI) environment?

I'm sorry for the long post on the mailing list, but I couldn't figure out who the experts/mentors for each topic are.
So I hope I get some answers here ;)

Thanks,
Christopher Bross

Re: GSOC Topic clarification

Erik Schnetter
Christopher

As you have noticed, HPC features in Julia are still experimental, and
people are pushing things in many directions simultaneously, with much
experimentation. I expect to see some standardization in the future,
but only after certain approaches have been vetted by, well, actually
being used in practice.

The last topic ("Ensure that Julia runs smoothly on current large HPC
systems") was proposed by me, in an attempt to gather some hands-on
experience for the Julia community. I definitively think this is very
much related to other HPC projects. There is another HPC-related
project of mine, <https://github.com/eschnett/FunHPC.jl>, which is
more of an experiment of proof-of-concept to see how far a functional
programming style can help reduce complexity of distributed computing
with irregular data structures. (I'd be happy to discuss, and/or make
this a GSOC project as well.)

The major difference between "traditional" linear algebra and my
interests is that the former can often be based on existing libraries
that handle the low-level lifting (BLAS, LAPACK, Elemental, FFTW, ...)
whereas in other cases the compute kernels have to be developed as
well, preferably in Julia.

Speaking only for myself here:
- I want to target running on 100,000 cores, using MPI for communication
- Multi-threading will be important within a node
- SIMD parallelization is required for compute kernels
- Data structures will be irregular, and their implementations
complex, so a healthy dose of software engineering is required
- Load balancing will be necessary
- My application is solving the Einstein equations (simulating black
holes) and related systems, e.g. neutron stars, supernovae, etc.
- We have existing codes in Fortran/C/C++/Python in the Einstein
Toolkit <https://einsteintoolkit.org>, and various current
shortcomings require a fundamental redesign of parts of the system,
and I am exploring Julia as an alternative.

Obviously, the laundry list above is too long for a summer project,
but any sub-projects in there would be (a) highly interesting to many
people, (b) could lead to publication if there's sufficient progress,
and (c) I (and probably others) would be more than happy to mentor.
The project you propose should be based on your own skills,
experience, and interest.
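
To sketch how these layers could fit together, here is a toy example (mine alone, assuming the MPI.jl package and Julia's threading support; none of this is prescribed by the project):

    # Toy kernel: each MPI rank sums its local chunk of data; threads split
    # the chunk within a node, and the innermost loop is SIMD-vectorized.
    using MPI
    using Base.Threads

    function local_sum(a::Vector{Float64})
        partial = zeros(nthreads())
        @threads for t in 1:nthreads()            # multi-threading within a node
            lo = div((t - 1) * length(a), nthreads()) + 1
            hi = div(t * length(a), nthreads())
            s = 0.0
            @inbounds @simd for i in lo:hi        # SIMD in the compute kernel
                s += a[i]
            end
            partial[t] = s
        end
        return sum(partial)
    end

    MPI.Init()
    comm = MPI.COMM_WORLD
    a = rand(1_000_000)                           # this rank's share of the data
    total = MPI.Allreduce(local_sum(a), +, comm)  # inter-node communication
    MPI.Comm_rank(comm) == 0 && println("global sum = ", total)
    MPI.Finalize()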

> Does it mean that most of the work has to be done in the runtime system code?
> Or does it mean optimizing an existing code (one already running on a
> distributed cluster) to get reasonable scaling results? Or do you think the
> hardest part will be getting the Julia code to run within a classical (MPI)
> environment?

I think either would be fine. The technical challenges in each of
these will be very different.

-erik




--
Erik Schnetter <[hidden email]>
http://www.perimeterinstitute.ca/personal/eschnetter/

Re: GSOC Topic clarification

Julia Developer mailing list
Hi Erik,

Thank you very much for your reply.

On Tuesday, March 22, 2016 at 15:40:54 UTC+1, Erik Schnetter wrote:
> Christopher
>
> The major difference between "traditional" linear algebra and my
> interests is that the former can often be based on existing libraries
> that handle the low-level lifting (BLAS, LAPACK, Elemental, FFTW, ...)
> whereas in other cases the compute kernels have to be developed as
> well, preferably in Julia.
>
> Speaking only for myself here:
> - I want to target running on 100,000 cores, using MPI for communication

Does this mean you are not interested in the "Julia way of parallelization" (asynchronous parallelism and synchronization using futures), but rather in a traditional approach (MPI+X)?
 
> - Multi-threading will be important within a node
> - SIMD parallelization is required for compute kernels
> - Data structures will be irregular, and their implementations
> complex, so a healthy dose of software engineering is required
> - Load balancing will be necessary
> - My application is solving the Einstein equations (simulating black
> holes) and related systems, e.g. neutron stars, supernovae, etc.
> - We have existing codes in Fortran/C/C++/Python in the Einstein
> Toolkit <https://einsteintoolkit.org>, and various current
> shortcomings require a fundamental redesign of parts of the system,
> and I am exploring Julia as an alternative.
>
> Obviously, the laundry list above is too long for a summer project,
> but any sub-projects in there would be (a) highly interesting to many
> people, (b) could lead to publication if there's sufficient progress,
> and (c) I (and probably others) would be more than happy to mentor.
> The project you propose should be based on your own skills,
> experience, and interest.

So you would like to see some parts of the Einstein Toolkit running on some of the big HPC systems, including inter- and intra-node communication as well as vectorization (SIMD), wouldn't you?
Are any parts of your code already available in Julia?
Would it be sufficient to concentrate on one or two kernels/subroutines to reimplement in Julia?
I personally would be interested in porting a kernel to Julia and starting to optimize it using SIMD instructions, followed by a multi-threaded implementation and finally a multi-node version of it.
I think it will be exciting to see how well it scales, especially in comparison to existing production Fortran or C(++) code.

I do not have good background knowledge in astrophysics; I hope this is not a big obstacle.
Moreover, I couldn't find a good starting point in the Einstein Toolkit. I would appreciate it if you could point me to some suitable subroutines.


-Christopher


Re: GSOC Topic clarification

Erik Schnetter
On Wed, Mar 23, 2016 at 7:42 PM, 'Christopher Bross' via julia-dev
<[hidden email]> wrote:

> Hi Erik,
>
> Thank you very much for your reply.
>
> On Tuesday, March 22, 2016 at 15:40:54 UTC+1, Erik Schnetter wrote:
>>
>> Christopher
>>
>> The major difference between "traditional" linear algebra and my
>> interests is that the former can often be based on existing libraries
>> that handle the low-level lifting (BLAS, LAPACK, Elemental, FFTW, ...)
>> whereas in other cases the compute kernels have to be developed as
>> well, preferably in Julia.
>>
>> Speaking only for myself here:
>> - I want to target running on 100,000 cores, using MPI for communication
>
> Does this mean you are not interested in the "Julia way of
> parallelization" (asynchronous parallelism and synchronization using
> futures), but rather in a traditional approach (MPI+X)?

On the contrary, I am very much interested in using futures holding
references to remote objects. The implementation -- from which the
user is shielded -- probably needs to be built on MPI for the
foreseeable future, until equivalent transport protocols become widely
available. You can already use other communication protocols today,
but MPI is more widely available. I'm sorry for being unclear about
the user interface versus the implementation in my comment.
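
As a toy illustration of that user-facing style (my sketch, using only Julia's standard distributed-computing primitives, independent of the transport underneath):

    using Distributed
    addprocs(2)                 # two local workers; could equally be MPI-launched

    # remotecall returns a future immediately; the work runs on worker 2.
    f = remotecall(sum, 2, 1:1_000_000)
    # ... overlap other computation or communication here ...
    println(fetch(f))           # block only when the value is actually needed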

>> - Multi-threading will be important within a node
>> - SIMD parallelization is required for compute kernels
>> - Data structures will be irregular, and their implementations
>> complex, so a healthy dose of software engineering is required
>> - Load balancing will be necessary
>> - My application is solving the Einstein equations (simulating black
>> holes) and related systems, e.g. neutron stars, supernovae, etc.
>> - We have existing codes in Fortran/C/C++/Python in the Einstein
>> Toolkit <https://einsteintoolkit.org>, and various current
>> shortcomings require a fundamental redesign of parts of the system,
>> and I am exploring Julia as an alternative.
>>
>> Obviously, the laundry list above is too long for a summer project,
>> but any sub-projects in there would be (a) highly interesting to many
>> people, (b) could lead to publication if there's sufficient progress,
>> and (c) I (and probably others) would be more than happy to mentor.
>> The project you propose should be based on your own skills,
>> experience, and interest.
>
> So you would like to see some parts of the Einstein Toolkit running on
> some of the big HPC systems, including inter- and intra-node
> communication as well as vectorization (SIMD), wouldn't you?
> Are any parts of your code already available in Julia?
> Would it be sufficient to concentrate on one or two kernels/subroutines
> to reimplement in Julia?
> I personally would be interested in porting a kernel to Julia and
> starting to optimize it using SIMD instructions, followed by a
> multi-threaded implementation and finally a multi-node version of it.
> I think it will be exciting to see how well it scales, especially in
> comparison to existing production Fortran or C(++) code.

The Einstein Toolkit is already running on large HPC systems; we have,
for example, run on 300k cores on Blue Waters, and we are using SIMD
vectorization, multi-threading, etc. However, future development
requires a programming environment that is much more flexible, and I
hope and expect that Julia can come in handy here.

> I do not have good background knowledge in astrophysics; I hope this is
> not a big obstacle.

This is a computer science project; no astrophysics knowledge is required.

> Moreover, I couldn't find a good starting point in the Einstein Toolkit.
> I would appreciate it if you could point me to some suitable subroutines.

This project isn't about the Einstein Toolkit per se -- you can view
it as an example application. A much simpler example is
<https://github.com/eschnett/WaveToy.jl>, which solves a much simpler
equation (the scalar wave equation) using similar (but also simpler)
algorithms. You will see that this code uses futures and remote
references in a very Julian way. (Part of the code predates the
recent changes to futures, and thus duplicates mechanisms that are now
available in Base.)
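
For a flavor of the kind of compute kernel involved, a hypothetical 1D update step for the scalar wave equation might look like this (again my sketch, not code from WaveToy.jl):

    # One explicit finite-difference step for u_tt = c^2 u_xx, with
    # coef = (c*dt/dx)^2; boundary points are left untouched here.
    function wave_step!(unew, u, uold, coef)
        @inbounds @simd for i in 2:length(u)-1
            unew[i] = 2u[i] - uold[i] + coef * (u[i-1] - 2u[i] + u[i+1])
        end
        return unew
    end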

-erik




--
Erik Schnetter <[hidden email]>
http://www.perimeterinstitute.ca/personal/eschnetter/

Re: GSOC Topic clarification

Julia Developer mailing list
Hi,

I'm very sorry, but I had some technical issues and missed the GSoC deadline.
I still like the idea of your project, and maybe I will come back to you
and work on it anyway.

Christopher
