
Host GPU setup

Before a Vulkan-based program can use your GPU, a few system preparations are needed:

  • Vulkan relies on GPU hardware features that were introduced around 2012. If your system’s GPUs are older than this, then you will almost certainly need to use a GPU emulator, and you can ignore the rest of this chapter.
  • Doing any kind of work with a GPU requires a working GPU driver, which, for some popular GPU brands, may unfortunately take some effort to set up.
  • Doing Vulkan work specifically additionally requires a Vulkan implementation that knows how to communicate with your GPU driver.
    • Some GPU drivers provide their own Vulkan implementation. This is common on Windows, but also seen in e.g. NVidia’s Linux drivers.
    • Other GPU drivers expose a standardized interface that third-party Vulkan implementations can tap into. This is the norm on Linux and macOS (see the quick check sketched right after this list).
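To get an early idea of whether a Vulkan loader and a driver manifest are already installed, Linux users can run the commands below. This is a hedged sketch: the paths and tools are assumptions about a typical Linux distribution, and there is no direct equivalent on Windows or macOS.

# Is the Vulkan loader library installed? (Linux-specific check)
ldconfig -p | grep libvulkan

# Are any Vulkan driver manifests (ICDs) registered with the loader?
# Typical locations, which may vary across distributions:
ls /usr/share/vulkan/icd.d/ /etc/vulkan/icd.d/ 2>/dev/null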

It is important to point out that you will also need these preparations when using Linux containers, because the containers do not acquire full control of the GPU hardware. They need to go through the host system’s GPU driver, which must therefore be working.

In fact, as a word of warning, containerized setups will likely make it harder for you to get a working GPU setup.1 Given the option to do so, you should prefer using a native development environment for this course, or any other kind of coding that involves GPUs for that matter.
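That being said, if you do end up running a Linux container on a Linux host, the container will typically need to be granted explicit access to the host’s GPU device nodes. Here is a minimal sketch using Docker, where the image name is hypothetical and the first command assumes an AMD or Intel GPU exposed through the kernel’s DRI subsystem:

# AMD/Intel GPUs: expose the host's /dev/dri device nodes to the container
docker run --rm -it --device=/dev/dri my-vulkan-image

# NVidia GPUs: requires the NVIDIA Container Toolkit to be installed
docker run --rm -it --gpus all my-vulkan-image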

GPU driver

The procedure for getting a working GPU driver is, as you may imagine, fairly system-dependent. Please select your operating system using the tabs below:

macOS bundles suitable GPU drivers for all Apple-manufactured computers, and Macs should therefore require no extra GPU driver setup.2

After performing any setup step described above and rebooting, your system should have a working GPU driver. But owing to the highly system-specific nature of this step, we unfortunately won’t yet be able to check this in an OS-agnostic manner. To do that, we will install another component that you are likely to need for this course, namely a Vulkan implementation.
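That said, Linux users can already get a hint by asking which kernel driver is bound to each GPU. This is a Linux-only sketch; lspci is not available on Windows or macOS:

# List GPUs and the kernel driver currently in use for each (Linux only)
lspci -k | grep -EA3 'VGA|3D|Display'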

Vulkan implementation

As mentioned above, your GPU driver may or may not come with a Vulkan implementation. If that is not the case, we will want to install one.

Like Windows, macOS does not provide first-class Vulkan support out of the box, because Apple wants to push its own proprietary GPU API called Metal.

Unlike on Windows, however, there is no easy workaround on macOS based on installing the GPU manufacturer’s driver, because Apple is the manufacturer, and unsurprisingly it does not provide an optional driver with Vulkan support either.

What we will therefore need to do is to layer a third-party Vulkan implementation on top of Apple’s proprietary Metal API. The MoltenVK project provides the most popular such layered Vulkan implementation at the time of writing.

As the author sadly did not get the chance to experiment with a Mac while preparing this course, we cannot provide precise installation instructions for MoltenVK. Please follow the installation instructions in the README of the official code repository, and ping the course author if you run into any trouble.
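For what it is worth, Homebrew does provide a prebuilt MoltenVK package. The following one-liner is a hedged convenience sketch rather than the officially documented installation route, and it assumes that Homebrew is already set up on your Mac:

# Install MoltenVK through the Homebrew package manager
brew install molten-vk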

Update: During the 2025 edition of the school, the experience was that MoltenVK was reasonably easy to install and worked fine for the simple number-squaring GPU program that is presented in the first part of this course, but struggled to build the full, larger Gray-Scott simulation program. Help from expert macOS users in debugging this is welcome. If there are none in the audience, let us hope that someone else will encounter the issue and that it will resolve itself in a future MoltenVK release…

Given this preparation, your system should now be ready to run Vulkan apps that use your GPU. How do we know for sure, however? A test app will come in handy here.

Final check

The best way to check if your Vulkan setup works is to run a Vulkan application that can display a list of available devices and make sure that your GPUs are featured in that list.

The Khronos Group, which maintains the Vulkan specification, provides a simple tool for this in the form of the vulkaninfo app, which prints a list of all available devices along with their properties. And for once, the planets have aligned properly and all package managers in common use have agreed to name the package that contains this app identically. Whether you use a Linux distribution’s built-in package manager, brew for macOS, or vcpkg for Windows, the package that contains this utility is called vulkan-tools on every system that the author could think of.
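As a concrete sketch, the installation command should look something like one of the following. Which package manager you actually have is, of course, an assumption about your system:

# Debian/Ubuntu
sudo apt install vulkan-tools

# Fedora
sudo dnf install vulkan-tools

# Arch Linux
sudo pacman -S vulkan-tools

# macOS (Homebrew)
brew install vulkan-tools

# Windows (vcpkg)
vcpkg install vulkan-tools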

There is just one problem: Vulkan devices have many properties, which means that the default level of detail displayed by vulkaninfo is unbearable. For example, it emits more than 6000 lines of textual output on the author’s laptop at the time of writing.

Thankfully there is an easy fix for that: add the --summary command line option, and you will get a reasonably concise device list at the end of the output. Here’s the output from the author’s laptop:

vulkaninfo --summary
[ ... global Vulkan implementation properties ... ]

Devices:
========
GPU0:
        apiVersion         = 1.4.311
        driverVersion      = 25.1.4
        vendorID           = 0x1002
        deviceID           = 0x1636
        deviceType         = PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU
        deviceName         = AMD Radeon Graphics (RADV RENOIR)
        driverID           = DRIVER_ID_MESA_RADV
        driverName         = radv
        driverInfo         = Mesa 25.1.4-arch1.1
        conformanceVersion = 1.4.0.0
        deviceUUID         = 00000000-0800-0000-0000-000000000000
        driverUUID         = 414d442d-4d45-5341-2d44-525600000000
GPU1:
        apiVersion         = 1.4.311
        driverVersion      = 25.1.4
        vendorID           = 0x1002
        deviceID           = 0x731f
        deviceType         = PHYSICAL_DEVICE_TYPE_DISCRETE_GPU
        deviceName         = AMD Radeon RX 5600M (RADV NAVI10)
        driverID           = DRIVER_ID_MESA_RADV
        driverName         = radv
        driverInfo         = Mesa 25.1.4-arch1.1
        conformanceVersion = 1.4.0.0
        deviceUUID         = 00000000-0300-0000-0000-000000000000
        driverUUID         = 414d442d-4d45-5341-2d44-525600000000
GPU2:
        apiVersion         = 1.4.311
        driverVersion      = 25.1.4
        vendorID           = 0x10005
        deviceID           = 0x0000
        deviceType         = PHYSICAL_DEVICE_TYPE_CPU
        deviceName         = llvmpipe (LLVM 20.1.6, 256 bits)
        driverID           = DRIVER_ID_MESA_LLVMPIPE
        driverName         = llvmpipe
        driverInfo         = Mesa 25.1.4-arch1.1 (LLVM 20.1.6)
        conformanceVersion = 1.3.1.1
        deviceUUID         = 6d657361-3235-2e31-2e34-2d6172636800
        driverUUID         = 6c6c766d-7069-7065-5555-494400000000

As you can see, this particular system has three Vulkan devices available:

  • An AMD GPU that’s integrated into the same package as the CPU (low-power, low-performance)
  • Another AMD GPU that is separate from the CPU, aka discrete (high-power, high-performance)
  • A GPU emulator called llvmpipe that is useful for debugging, and as a fallback for systems where there is no easy way to get a real hardware GPU to work (e.g. continuous integration of software hosted on GitHub or GitLab).

If you see all the Vulkan devices that you expect in the output of this command, that’s great! You are done with this chapter and can move to the next one. Otherwise, please go through this page’s instructions again slowly, making sure that you have not forgotten anything, and if that does not help, ping the teacher and we’ll try to figure it out together.
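If you would rather perform the same check from Rust code, here is a minimal sketch that enumerates Vulkan devices using the vulkano crate. This is an assumption on our part: it targets the vulkano 0.34-style API, which may not exactly match the crate versions used later in this course.

use vulkano::{
    instance::{Instance, InstanceCreateInfo},
    VulkanLibrary,
};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load the system's Vulkan implementation (loader + driver)...
    let library = VulkanLibrary::new()?;
    // ...create a Vulkan instance with default settings...
    let instance = Instance::new(library, InstanceCreateInfo::default())?;
    // ...then list every physical device that this implementation exposes
    for device in instance.enumerate_physical_devices()? {
        let properties = device.properties();
        println!("{} ({:?})", properties.device_name, properties.device_type);
    }
    Ok(())
}

Each device printed by this program should match one entry of the vulkaninfo output shown above.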


  1. In addition to a working GPU driver on the host system and a working Vulkan stack inside of the container, you need working communication between the two. This assumes that they are compatible, which is anything but a given when e.g. running Linux containers on Windows or macOS. It also doesn’t help that most container runtimes are designed to operate as a black box (with few ways for users to observe and control the inner machinery) and attempt to sandbox containers (which may prevent them from getting access to the host GPU in the default container runtime configuration).

  2. Unless you are using an exotic configuration like an old macOS release running on a recent computer, that is, but if you know how to get yourself into this sort of Apple-unsupported configuration, we trust you to also know how to keep its GPU driver working… :)

  3. NVidia’s GPU drivers have historically tapped into unstable APIs of the Linux kernel that may change across even bugfix kernel releases, which makes them highly vulnerable to breakage across system updates. To make matters worse, their software license also prevented Linux distributions from shipping these drivers in their official software repositories, which kept distributions from enforcing kernel/driver compatibility at the package manager level. The situation has recently improved for newer hardware (>= Turing generation), where a new “open-source driver” (actually a thin open-source layer over an enormous encrypted+signed binary blob running on a hidden RISC-V CPU, because this is NVidia) has been released with a license that enables distributions to ship it as a normal package.

  4. The unfortunate popularity of “stable” distributions like Red Hat Enterprise or Ubuntu LTS, which take pride in embalming ancient software releases and wasting thousands of developer hours on backporting bugfixes from newer releases, makes this harder than it should be. But when an old kernel gets in the way of hardware support and a full distribution upgrade is not an option, consider upgrading the kernel alone using facilities like Ubuntu’s “HardWare Enablement” (-hwe) kernel packages.