Manual Intel or AMD GPU configuration

Note

This document outlines how to configure a host Intel or AMD GPU to be shared by sessions of a specific Workspace on a single-server deployment. If you are using an Nvidia GPU, please see GPU Acceleration.

Determine device IDs for your GPU

Before modifying a Workspace, you will need to determine the card and render device IDs for the GPU you want to use in your Workspaces container.

First install some dependencies:

sudo apt-get update
sudo apt-get install -y drm-info jq

Once installed run:

drm_info -j 2>/dev/null | jq 'with_entries(.value |= .driver.desc)'
{
  "/dev/dri/card1": "AMD GPU",
  "/dev/dri/card0": "Intel Graphics"
}
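
To see what the jq filter is doing, you can run it against a captured sample of drm_info output. The JSON below is a trimmed, hypothetical fragment, not real drm_info output:

```shell
# Offline demo of the filter: with_entries(.value |= .driver.desc)
# replaces each device's full object with just its driver description.
# The input here is a hand-written sample, not captured from hardware.
cat <<'EOF' | jq -c 'with_entries(.value |= .driver.desc)'
{
  "/dev/dri/card0": {"driver": {"desc": "Intel Graphics"}},
  "/dev/dri/card1": {"driver": {"desc": "AMD GPU"}}
}
EOF
```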

To determine the renderD device: in most cases the render nodes count up from 128, in step with the card numbers, i.e.:

  • renderD128 = card0

  • renderD129 = card1
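
This convention can be sketched as plain arithmetic; the loop below only illustrates the usual numbering and does not query the actual hardware:

```shell
# Sketch of the usual pairing convention: render nodes start at 128,
# so cardN normally maps to renderD$((128 + N)).
for n in 0 1; do
  echo "card${n} -> renderD$((128 + n))"
done
```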

If you have multiple cards of the same type, you will want to dig into the lspci data for your system, matching up the PCI bus IDs with the output of this command:

ls -l /dev/dri/by-path/
pci-0000:00:02.0-card -> ../card0
pci-0000:00:02.0-render -> ../renderD128
pci-0000:02:00.0-card -> ../card1
pci-0000:02:00.0-render -> ../renderD129

The PCI IDs for your cards can be found with lspci:

lspci -vnn | grep VGA
00:02.0 VGA compatible controller [0300]: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor Integrated Graphics Controller [8086:0412] (rev 06) (prog-if 00 [VGA controller])
02:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] [1002:67df] (rev c7) (prog-if 00 [VGA controller])

The first column is the PCI ID, which can be matched against the by-path symlinks. (00:02.0 corresponds to pci-0000:00:02.0-card and thus card0)
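
One way to go from an lspci slot to the matching by-path symlink name is to prepend the PCI domain. The 0000 domain prefix is an assumption that holds on most single-domain systems; the slot value below is taken from the Intel example above:

```shell
# Build the by-path symlink name from an lspci slot.
# Assumption: PCI domain 0000, which is typical on single-domain systems.
slot="00:02.0"                      # first column of the lspci output
echo "pci-0000:${slot}-card"
# On a live system, resolve it to the actual card node with:
# readlink -f "/dev/dri/by-path/pci-0000:${slot}-card"
```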

Adding your device to a Workspace

Note

Using this manual method means that Kasm Workspaces is unaware of the presence of the GPU. Therefore, you should not set the GPU count in the Workspace definition. This also means that Kasm Workspaces will not place limits on the number of sessions that can share the GPU. We cannot guarantee any level of security when sharing GPUs between different users.

In this example we will assume from the section above we want to add the Intel GPU located at:

  • /dev/dri/card0

  • /dev/dri/renderD128

Log in to your Workspaces deployment as an Administrator user, navigate to Admin > Workspaces, select the Workspace you want to modify, and click edit.

Under “Docker Run Config Override (JSON)” set:

{
  "environment": {
    "KASM_EGL_CARD": "/dev/dri/card0",
    "KASM_RENDERD": "/dev/dri/renderD128"
  },
  "devices": [
    "/dev/dri/card0:/dev/dri/card0:rwm",
    "/dev/dri/renderD128:/dev/dri/renderD128:rwm"
  ]
}
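
For reference, the override above corresponds roughly to the following docker run flags. This is only an approximation of what Kasm does when launching a session, and the image name is a placeholder:

```shell
# Illustrative only: approximate docker run equivalent of the JSON override.
# kasmweb/desktop:latest is a placeholder image name, not a recommendation.
docker run --rm -it \
  -e KASM_EGL_CARD=/dev/dri/card0 \
  -e KASM_RENDERD=/dev/dri/renderD128 \
  --device /dev/dri/card0:/dev/dri/card0:rwm \
  --device /dev/dri/renderD128:/dev/dri/renderD128:rwm \
  kasmweb/desktop:latest
```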

Under “Docker Exec Config (JSON)” set:

{
  "first_launch": {
    "user": "root",
    "cmd": "bash -c 'chown -R kasm-user:kasm-user /dev/dri/*'"
  }
}

Finally click Submit.

Testing GPU support

For GPU acceleration, Kasm Workspaces configures applications, where possible, to be wrapped with a vglrun command to leverage VirtualGL. The easiest way to test whether your GPU is mounted and detected in the container is to launch the Workspace image you modified. Once inside the Workspace, run the terminal command glxheads. In this example, using our Intel GPU, you should see:

Name: :1.0
  Display:     0x563b4182dc60
  Window:      0x2800002
  Context:     0x563b4193fa90
  GL_VERSION:  4.6 (Compatibility Profile) Mesa 22.2.2 - kisak-mesa PPA
  GL_VENDOR:   Intel
  GL_RENDERER: Mesa Intel(R) HD Graphics 4600 (HSW GT2)

You can also force any arbitrary application to use this GPU by passing:

vglrun -d ${KASM_EGL_CARD} YOURCOMMANDHERE

This may be useful if the application is not using VirtualGL by default.