The Julia programming support for AMD GPUs based on the ROCm platform aims to provide capabilities similar to those of the NVIDIA CUDA stack, supporting both low-level kernel programming and an array-oriented interface. AMDGPU.jl offers performance comparable to HIP C++. The toolchain can easily be installed on the latest version of Julia using the integrated package manager.

AMDGPU.jl makes it possible to program AMD GPUs at different abstraction levels:

- an array-oriented interface built around the ROCArray type, where standard Julia array operations and broadcasts execute on the GPU;
- low-level kernel programming, where kernels written in Julia are compiled for and launched on the GPU.

The documentation of AMDGPU.jl demonstrates each of these approaches.
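As a minimal sketch of the two abstraction levels, the following contrasts an array-level broadcast with a hand-written kernel launched via the `@roc` macro. The kernel indexing intrinsics (`workitemIdx`, `workgroupIdx`, `workgroupDim`) and launch keywords follow recent AMDGPU.jl versions; the `vadd!` kernel itself is an illustrative example, not taken from the package documentation.

```julia
using AMDGPU

# Array-level: operations on ROCArray execute on the GPU
a = ROCArray(rand(Float32, 1024))
b = a .+ 1.0f0  # broadcast runs as a GPU kernel under the hood

# Kernel-level: an element-wise addition kernel written in Julia
function vadd!(c, a, b)
    i = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return
end

c = similar(a)
# Launch with 256 work-items per workgroup, enough workgroups to cover the array
@roc groupsize=256 gridsize=cld(length(c), 256) vadd!(c, a, b)
AMDGPU.synchronize()
```

The array-level path is usually sufficient and composes with generic Julia code; the kernel-level path gives HIP-like control over indexing and launch configuration when needed.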


Julia on the CPU is known for its good performance, approaching that of statically compiled languages such as C. The same holds for programming AMD GPUs with kernels written using AMDGPU.jl, where preliminary benchmarks of a memcopy and a 2D diffusion kernel show performance approaching that of HIP C++:

Preliminary performance of a memcopy and a 2D diffusion kernel implemented in Julia with AMDGPU.jl and executed on an AMD MI250X GPU.
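A 2D diffusion kernel of the kind benchmarked above can be sketched in Julia as follows. This is an illustrative explicit finite-difference step, not the exact benchmark code; the function name `diffusion_step!` and all parameters are assumptions for the example, while the indexing intrinsics are standard AMDGPU.jl API.

```julia
using AMDGPU

# One explicit diffusion step: T2 = T + dt * D * Laplacian(T), inner points only.
# _dx, _dy are the inverse grid spacings (1/dx, 1/dy).
function diffusion_step!(T2, T, D, dt, _dx, _dy)
    ix = workitemIdx().x + (workgroupIdx().x - 1) * workgroupDim().x
    iy = workitemIdx().y + (workgroupIdx().y - 1) * workgroupDim().y
    if 1 < ix < size(T, 1) && 1 < iy < size(T, 2)
        @inbounds T2[ix, iy] = T[ix, iy] + dt * D *
            ((T[ix+1, iy] - 2T[ix, iy] + T[ix-1, iy]) * _dx^2 +
             (T[ix, iy+1] - 2T[ix, iy] + T[ix, iy-1]) * _dy^2)
    end
    return
end

nx = ny = 1024
T  = ROCArray(rand(Float32, nx, ny))
T2 = similar(T)
D, dt, _dx, _dy = 1.0f0, 1.0f-4, Float32(nx), Float32(ny)

# 2D launch: 32x8 work-items per workgroup, covering the whole grid
@roc groupsize=(32, 8) gridsize=(cld(nx, 32), cld(ny, 8)) diffusion_step!(T2, T, D, dt, _dx, _dy)
AMDGPU.synchronize()
```

Such memory-bound stencil kernels are a good proxy for achievable bandwidth, which is why memcopy and diffusion are natural choices for comparing AMDGPU.jl against HIP C++.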