ANN: CUDAdrv.jl, and CUDA.jl deprecation


ANN: CUDAdrv.jl, and CUDA.jl deprecation

maleadt
Hi all,

CUDAdrv.jl is a Julia wrapper for the CUDA driver API -- not to be confused with its counterpart CUDArt.jl, which wraps the slightly higher-level CUDA runtime API.

The package doesn't feature many high-level or easy-to-use wrappers, but focuses on providing the necessary functionality for other packages to build upon. For example, CUDArt uses CUDAdrv for launching kernels, while CUDAnative (the in-development native programming interface) completely relies on CUDAdrv for all GPU interactions.

It features a ccall-like cudacall interface for launching kernels and passing values:
using CUDAdrv
using Base.Test

dev = CuDevice(0)
ctx = CuContext(dev)

md = CuModuleFile(joinpath(dirname(@__FILE__), "vadd.ptx"))
vadd = CuFunction(md, "kernel_vadd")

dims = (3,4)
a = round(rand(Float32, dims) * 100)
b = round(rand(Float32, dims) * 100)

d_a = CuArray(a)
d_b = CuArray(b)
d_c = CuArray(Float32, dims)

len = prod(dims)
cudacall(vadd, len, 1, (DevicePtr{Cfloat},DevicePtr{Cfloat},DevicePtr{Cfloat}), d_a, d_b, d_c)
c = Array(d_c)
@test a+b ≈ c

destroy(ctx)
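For context, the kernel_vadd in vadd.ptx could come from a CUDA C kernel along these lines (a hypothetical sketch, not the exact source shipped with the package's tests), compiled to PTX with something like `nvcc --ptx vadd.cu`:

```cuda
// Hypothetical CUDA C source for kernel_vadd: one thread per element.
// The Julia snippet above launches it with `len` threads in a single
// block, so the computed index covers the flattened 3x4 arrays.
// extern "C" keeps the symbol name unmangled, matching the string
// passed to CuFunction.
extern "C" __global__ void kernel_vadd(const float *a, const float *b, float *c)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    c[i] = a[i] + b[i];
}
```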

For documentation, refer to the NVIDIA docs. Even though they don't fully match what CUDAdrv.jl implements, the package is well tested, and redocumenting the entire thing is too much work.

Current master of this package only supports Julia 0.5, but there's a tagged version supporting 0.4 (as is the case for CUDArt.jl). It has been tested on CUDA 5.0 through 8.0, but there may always be issues with certain versions, as the wrappers aren't auto-generated (and probably never will be, due to how NVIDIA has implemented cuda.h).

Anybody thinking there's a lot of overlap between CUDArt and CUDAdrv is completely right, but it mirrors the overlap between CUDA's runtime and driver APIs: in some cases we specifically need one or the other (e.g., CUDAnative wouldn't work with only the runtime API). There's also some legacy on the Julia side: CUDAdrv.jl is based on CUDA.jl, while CUDArt.jl has been an independent effort.


In other news, we have recently deprecated the old CUDA.jl package. All users should now use either CUDArt.jl or CUDAdrv.jl, depending on what suits them best. Neither is a drop-in replacement, but the changes should be minor. At the very least, users will have to change the kernel launch syntax to use cudacall as shown above. In the future, we might reuse the CUDA.jl package name for the native programming interface currently at CUDAnative.jl.
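For users migrating, the launch-syntax change looks roughly like this (the old CUDA.jl call is reproduced from memory and is only approximate -- check your own code):

```julia
# Old CUDA.jl style (approximate): launch took a function, a launch
# configuration, and a tuple of arguments, without explicit C types.
launch(vadd, len, 1, (d_a, d_b, d_c))

# New CUDAdrv.jl style: cudacall additionally takes a ccall-like tuple
# of argument types, so each value is converted explicitly before launch.
cudacall(vadd, len, 1, (DevicePtr{Cfloat}, DevicePtr{Cfloat}, DevicePtr{Cfloat}),
         d_a, d_b, d_c)
```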


Best,
Tim

Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

Kyunghun Kim
Good news!
I had hoped there would be some integration among the various CUDA packages.

By the way, is there any plan for a 'standard' GPU array type, such as https://github.com/JuliaGPU/GPUArrays.jl?
CUDArt and CUDAdrv each have their own CUDA array type, and there are also packages such as ArrayFire.jl.

For example, if I add a package wrapping a new NVIDIA library such as cuRAND,
which GPU array type should I support in that package?

Best, 
Kyunghun

Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

Simon Danisch
In reply to this post by maleadt
Great work! :)

Well, I think GPUArrays should be the right place! Whether it actually becomes the right place depends on how much time and cooperation I get ;)
The plan is to integrate all these 3rd party libraries. If you could help me with that, it would already be a great first step toward establishing that library :)

On Friday, September 30, 2016 at 03:31:29 UTC+2, Tim Besard wrote:

Re: ANN: CUDAdrv.jl, and CUDA.jl deprecation

Chris Rackauckas
In reply to this post by maleadt
Thanks for the update.

On Thursday, September 29, 2016 at 6:31:29 PM UTC-7, Tim Besard wrote: