Manual Intel or AMD GPU configuration
Note
This document outlines how to configure a host Intel or AMD GPU to be shared by containers via a specific Workspace configuration on a single-server deployment. If you are using an Nvidia GPU, please see GPU Acceleration.
Determine device IDs for your GPU
Before modifying a Workspace you will need to determine the card and render device IDs for the GPU you want to use in your Workspaces container. On a single-GPU system these will simply be:
/dev/dri/card0
/dev/dri/renderD128
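If you are unsure which device nodes exist on the host, listing the /dev/dri directory shows every card and render node the kernel has created:
ls -l /dev/dri
On most distributions the card* nodes are group-owned by video and the renderD* nodes by render; this is normal and does not need to be changed on the host.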
On multi-GPU systems you will need to determine which card and render devices to pass into the Workspaces container. This can be achieved with different tools; the easiest, if you only need to determine which card is using which driver, is to follow the device symlinks:
ls -l /sys/class/drm/renderD*/device/driver | awk '{print $9,$11}'
/sys/class/drm/renderD128/device/driver ../../../bus/pci/drivers/i915
/sys/class/drm/renderD129/device/driver ../../../../bus/pci/drivers/nvidia
/sys/class/drm/renderD130/device/driver ../../../../bus/pci/drivers/amdgpu
This is useful for systems that have an integrated GPU and a discrete card.
If you have multiple GPUs from the same manufacturer or using the same driver, you will need to dig a little deeper to determine which is which:
ls -l /dev/dri/by-path/*-render | awk '{print $9,$11}'
/dev/dri/by-path/pci-0000:04:00.0-render ../renderD128
Use the resulting PCI ID (in this case 04:00.0):
sudo lspci | grep 04:00.0
04:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt (rev c7)
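If you have several identical cards, the two steps above can be combined into a short shell loop that prints each render node next to its PCI address and lspci description. This is a minimal sketch that assumes lspci (from pciutils) is installed:
# Enumerate each render node symlink, extract its PCI address, and describe it with lspci
for link in /dev/dri/by-path/pci-*-render; do
    pci="${link##*/pci-}"    # e.g. 0000:04:00.0-render
    pci="${pci%-render}"     # e.g. 0000:04:00.0
    printf '%s -> %s : ' "$(readlink -f "$link")" "$pci"
    lspci -s "$pci" | cut -d' ' -f2-   # drop the leading slot column from the lspci output
done
The output pairs each /dev/dri/renderD* node with its PCI address and device name, which is usually enough to tell identical cards apart by slot position.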
There is no bulletproof way to tell identical cards apart, but in general PCI Express IDs are assigned from lowest to highest in order of proximity to the CPU, closest to farthest, on your system's motherboard.
Adding your device to a Workspace
Note
Using this manual method means that Kasm Workspaces is unaware of the presence of the GPU. Therefore, you should not set the GPU count in the Workspace definition. This also means that Kasm Workspaces will not place limits on the number of sessions that can share the GPU. We cannot guarantee any level of security when sharing GPUs between different users.
There are two ways to utilize a GPU with an open source driver such as Intel, AMDGPU, Radeon, or Nouveau: VirtualGL or DRI3, both of which work with the virtual framebuffer X11 display that KasmVNC launches. In most cases DRI3 is the preferred method, as it is the native rendering pipeline a bare-metal screen would use in a desktop Linux installation. This means it is more compatible and works out of the box without special modifications or wrappers. More about DRI3 acceleration can be found in the KasmVNC Documentation under GPU Acceleration.
DRI3
In this example we will assume, based on the section above, that we want to add the Intel GPU located at:
/dev/dri/card0
/dev/dri/renderD128
Log in to your Workspaces deployment as an administrator, navigate to Admin > Workspaces > Workspaces, then select the Workspace you want to modify and edit it.
Under “Docker Run Config Override (JSON)” set:
{
  "environment": {
    "HW3D": true,
    "DRINODE": "/dev/dri/renderD128"
  },
  "devices": [
    "/dev/dri/card0:/dev/dri/card0:rwm",
    "/dev/dri/renderD128:/dev/dri/renderD128:rwm"
  ]
}
Under “Docker Exec Config (JSON)” set:
{
  "first_launch": {
    "user": "root",
    "cmd": "bash -c 'chown -R kasm-user:kasm-user /dev/dri/*'"
  }
}
Finally click Save.
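The first_launch exec above runs as root when a session starts and changes ownership of the passed-through device nodes to the in-container kasm-user account; without it, the unprivileged session user generally cannot open the render node. Once a session is running, you can confirm it worked from a terminal inside the Workspace:
ls -l /dev/dri/card0 /dev/dri/renderD128
Both nodes should be shown as owned by kasm-user.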
Testing GPU support - DRI3
The easiest way to test whether your GPU is mounted and detected in the container is to launch the Workspace image you modified. Once inside the Workspace, run the terminal command glxheads. In this example, using our Intel GPU, you should see:
Name: :1.0
Display: 0x563b4182dc60
Window: 0x2800002
Context: 0x563b4193fa90
GL_VERSION: 4.6 (Compatibility Profile) Mesa 22.2.2 - kisak-mesa PPA
GL_VENDOR: Intel
GL_RENDERER: Mesa Intel(R) HD Graphics 4600 (HSW GT2)
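If glxheads is not present in the image, glxinfo (typically packaged in mesa-utils) provides the same confirmation and also reports the device Mesa opened:
glxinfo -B | grep -iE 'vendor|device|renderer'
If the renderer string reports llvmpipe, rendering has fallen back to software and the GPU is not being used.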
VirtualGL
In this example we will assume, based on the section above, that we want to add the Intel GPU located at:
/dev/dri/card0
/dev/dri/renderD128
Log in to your Workspaces deployment as an administrator, navigate to Admin > Workspaces > Workspaces, then select the Workspace you want to modify and edit it.
Under “Docker Run Config Override (JSON)” set:
{
  "environment": {
    "KASM_EGL_CARD": "/dev/dri/card0",
    "KASM_RENDERD": "/dev/dri/renderD128"
  },
  "devices": [
    "/dev/dri/card0:/dev/dri/card0:rwm",
    "/dev/dri/renderD128:/dev/dri/renderD128:rwm"
  ]
}
Under “Docker Exec Config (JSON)” set:
{
  "first_launch": {
    "user": "root",
    "cmd": "bash -c 'chown -R kasm-user:kasm-user /dev/dri/*'"
  }
}
Finally click Save.
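After saving, you can quickly confirm from a terminal inside a newly launched session that the environment variables and devices from the overrides above are present (the variable names below are the ones set in the run config):
echo "${KASM_EGL_CARD} ${KASM_RENDERD}"
ls -l "${KASM_EGL_CARD}" "${KASM_RENDERD}"
This should print /dev/dri/card0 /dev/dri/renderD128 and show both nodes owned by kasm-user once the first_launch chown has run.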
Testing GPU support - VirtualGL
For GPU acceleration, Kasm Workspaces configures applications, where possible, to be wrapped with the vglrun command to leverage VirtualGL.
The easiest way to test whether your GPU is mounted and detected in the container is to launch the Workspace image you modified. Once inside the Workspace, run the terminal command glxheads. In this example, using our Intel GPU, you should see:
Name: :1.0
Display: 0x563b4182dc60
Window: 0x2800002
Context: 0x563b4193fa90
GL_VERSION: 4.6 (Compatibility Profile) Mesa 22.2.2 - kisak-mesa PPA
GL_VENDOR: Intel
GL_RENDERER: Mesa Intel(R) HD Graphics 4600 (HSW GT2)
You can also force any arbitrary application to use this GPU by launching it with:
vglrun -d ${KASM_EGL_CARD} YOURCOMMANDHERE
This may be useful if the application is not using VirtualGL by default.
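For example, assuming glxinfo (typically packaged in mesa-utils) is installed in the image, you can confirm which renderer an application gets when wrapped this way:
vglrun -d ${KASM_EGL_CARD} glxinfo -B | grep -i renderer
The renderer string should name your GPU rather than a software renderer such as llvmpipe.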