Native installation
While containers are often lauded for making it easier to reproduce someone else's development environment on your machine, GPUs actually invert this rule of thumb. As soon as GPUs get involved, it is often easier to get something working with a native installation.
That is because before we stand any chance of getting a working GPU setup inside of a container, we must first get a working GPU setup on the host system. And once you have taken care of that (which is often the hardest part), getting the rest of a native development environment up and running is not much extra work.
As before, we will assume that you have already set up a native development environment for Rust CPU development, and this documentation will therefore only focus on the changes needed to get this setup ready for native Vulkan development. This basically boils down to installing a couple of Vulkan development tools.
Vulkan validation layers
Vulkan emerged in a context where GPU applications were often bottlenecked by API overheads, and one of its central design goals was to improve on that. A particularly controversial decision was to remove mandatory parameter validation from the API, instead making it undefined behavior to pass any kind of unexpected parameter value to a Vulkan function.
This may be great for run-time performance, but it certainly does not make for a great application development experience. Therefore, it was also made possible to bring such checks back as an optional “validation” layer, which is meant to be used during application development and removed in production. As a bonus, because this layer was only meant for development purposes and operated under no performance constraint, it could also…
- Perform checks that are much more detailed than those that any GPU API performed before, finding more errors in GPU-side code and CPU-GPU synchronization patterns.
- Supplement API usage error reporting with more opinionated “best practices” and “performance” lints that are more similar to compiler warnings in spirit.
Because this package is meant to be used for development purposes, it is not a default part of Vulkan installations. Thankfully, all commonly used systems provide a package for it:
- Debian/Ubuntu/openSUSE/Brew: `vulkan-validationlayers`
- Arch/Fedora/RHEL: `vulkan-validation-layers`
- Windows: Best installed as part of the LunarG Vulkan SDK
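Once the package is installed, you can check that the Vulkan loader actually sees the layer. The sketch below assumes that the `vulkaninfo` tool (usually shipped in a `vulkan-tools` package) is available; `VK_LAYER_KHRONOS_validation` is the standard name of the Khronos validation layer.

```bash
# Ask the Vulkan loader whether the Khronos validation layer is visible.
# Assumes vulkaninfo is installed (vulkan-tools package on most distributions).
if vulkaninfo 2>/dev/null | grep -q VK_LAYER_KHRONOS_validation; then
    echo "validation layer: found"
else
    echo "validation layer: not found"
fi
```

If the layer is reported as missing even though the package is installed, double-check that the loader and the layer package come from the same source (e.g. both from your distribution, or both from the LunarG SDK).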
shaderc
Older GPU APIs relied on GPU drivers to implement a compiler for a C-like language, which proved to be a bad idea as GPU manufacturers are terrible compiler developers (and terrible software developers in general). Applications thus experienced constant issues linked to those compilers, from uneven performance across hardware to incorrect run-time program behavior.
To get rid of this pain, Vulkan has switched to an AoT/JiT hybrid compilation model where GPU code is first compiled into a simplified assembly-like interpreted representation called SPIR-V on the developer’s machine, and it is this intermediate representation that gets sent to the GPU driver for final compilation into a device- and driver-specific binary.
Because of this, our development setup is going to require a compiler that goes from the GLSL domain-specific language (which is a common choice for GPU code, we'll get into why during the course) to SPIR-V. The `vulkano` Rust binding that we use is specifically designed to use `shaderc`, a compiler that is maintained by the Android development team.
Unfortunately, `shaderc` is not packaged by all Linux distributions. You may therefore need to either use the official binaries or build it from source. In the latter case, you are going to need…
- CMake
- Ninja
- C and C++ compilers
- Python
- git
…and once those dependencies are available, you should be able to build and install the latest upstream-tested version of `shaderc` and its dependencies using the following script:
```bash
git clone --branch=known-good https://github.com/google/shaderc \
&& cd shaderc \
&& ./update_shaderc_sources.py \
&& cd src \
&& ./utils/git-sync-deps \
&& mkdir build \
&& cd build \
&& cmake -GNinja -DCMAKE_BUILD_TYPE=Release .. \
&& ninja \
&& ctest -j$(nproc) \
&& sudo ninja install \
&& cd ../../.. \
&& rm -rf shaderc
```
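After the script completes, you can sanity-check the install. The paths below assume the default `/usr/local` install prefix on a Unix system; adjust them if you passed a different `CMAKE_INSTALL_PREFIX` to CMake.

```bash
# Sanity-check a from-source shaderc install. The paths assume the default
# /usr/local prefix; adjust them if you configured CMake differently.
for f in /usr/local/lib/libshaderc_combined.a /usr/local/bin/glslc; do
    if [ -e "$f" ]; then
        echo "found: $f"
    else
        echo "missing: $f"
    fi
done
```

Both files should be reported as found; `libshaderc_combined.a` in particular is the library that the Rust bindings will link against.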
Whether you download binaries or build from source, the resulting `shaderc` installation location will likely not be in the default search path of the associated `shaderc-sys` Rust bindings. We will want to fix this, otherwise the bindings will try to be helpful by automatically downloading and building an internal copy of `shaderc`. This may fail if the build dependencies are not available, and is otherwise inefficient, as such a build needs to be performed once per project that uses `shaderc-sys`, and again whenever the build directory is discarded using something like `cargo clean`.
To point `shaderc-sys` in the right direction, find the directory in which the `libshaderc_combined` static library was installed (typically some variation of `/usr/local/lib` when building from source on Unix systems). Then adjust your Rust development environment's configuration so that the `SHADERC_LIB_DIR` environment variable points to this directory.
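One way to do this is a plain `export` in your shell profile. The `/usr/local/lib` path below is an assumption based on a default from-source install; substitute the directory where `libshaderc_combined` actually landed on your system.

```bash
# Make the SHADERC_LIB_DIR environment variable point at the directory that
# contains libshaderc_combined. The /usr/local/lib path is an assumption;
# substitute your actual install location.
export SHADERC_LIB_DIR=/usr/local/lib
echo "SHADERC_LIB_DIR=$SHADERC_LIB_DIR"
```

Putting this `export` in your shell profile (or the equivalent setting in your editor's Rust configuration) makes it persist across builds, so that `shaderc-sys` always finds the system library instead of rebuilding its own copy.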
Syntax highlighting
For an optimal GPU development experience, you will want to set up your code editor to apply GLSL syntax highlighting to files with a `.comp` extension. In the case of Visual Studio Code, this can be done by installing the `slevesque.shader` extension.
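If you use Visual Studio Code with its optional `code` command-line launcher installed, the extension can also be installed from a terminal; a sketch:

```bash
# Install the GLSL syntax highlighting extension from the command line.
# Assumes the `code` CLI launcher that Visual Studio Code optionally installs.
if command -v code >/dev/null 2>&1; then
    code --install-extension slevesque.shader
else
    echo "the 'code' CLI is not on PATH; install slevesque.shader from the GUI"
fi
```

Users of other editors should look for a GLSL syntax highlighting plugin and associate it with the `.comp` extension.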
Testing your setup
Your Rust development environment should now be ready for this course’s practical work. I strongly advise testing it by running the following script:
```bash
curl -LO https://gitlab.in2p3.fr/grasland/numerical-rust-gpu/-/archive/solution/numerical-rust-gpu-solution.zip \
&& unzip numerical-rust-gpu-solution.zip \
&& rm numerical-rust-gpu-solution.zip \
&& cd numerical-rust-gpu-solution/exercises \
&& echo "------" \
&& cargo run --release --bin info -- -p \
&& echo "------" \
&& cargo run --release --bin square -- -p \
&& cd ../.. \
&& rm -rf numerical-rust-gpu-solution
```
It performs the following actions, whose outcome should be manually checked:
- Run a Rust program that should produce the same device list as `vulkaninfo --summary`. This tells you that any device that gets correctly detected by a C Vulkan program also gets correctly detected by a Rust Vulkan program, as one would expect.
- Run another program that uses a simple heuristic to pick the Vulkan device that should be most performant, then uses that device to square an array of floating-point numbers, then checks the results. You should make sure that the device selection made by this program is sensible and that its final result check passed.
- If everything went well, the script will clean up after itself by deleting all previously created files.