VM Provider Configs
Note
The Auto-Scaling feature is only available with Enterprise licensing. For more information on licensing, please visit: Licensing.

Create New Provider
Name | Description
---|---
VM Provider Configs | Select an existing config or create a new config. If selecting an existing config and changing any of the details, those details will be changed for anything using the same VM Provider config.
Provider | Select a provider from AWS, Azure, Digital Ocean, Google Cloud or Oracle Cloud. If selecting an existing provider this will be selected automatically.
AWS Settings
A number of settings are required to be defined to use this functionality.

AWS Settings
Name | Description
---|---
Name | A name to use to identify the config.
AWS Access Key ID | The AWS Access Key ID used for the AWS API.
AWS Secret Access Key | The AWS Secret Access Key used for the AWS API.
AWS: Region | The AWS Region the EC2 nodes should be provisioned in. (e.g. us-east-1)
AWS: EC2 AMI ID | The AMI ID to use for the provisioned EC2 nodes. This should be an OS that is supported by the Kasm installer.
AWS: EC2 Instance Type | The EC2 instance type (e.g. t3.micro). Note that the Cores and Memory override settings don't necessarily have to match the instance configuration; this allows for over-provisioning.
AWS: Max EC2 Nodes | The maximum number of EC2 nodes to provision regardless of the need for available free slots.
AWS: EC2 Security Group IDs | A JSON list containing security group IDs to assign to the EC2 nodes.
AWS: EC2 Subnet ID | The subnet ID to place the EC2 nodes in.
AWS: EC2 EBS Volume Size (GB) | The size of the root EBS volume for the EC2 nodes.
AWS: EC2 EBS Volume Type | The EBS volume type. (e.g. gp2)
AWS: EC2 IAM | The IAM role to assign to the EC2 nodes. Administrators may want to assign CloudWatch IAM access.
AWS: EC2 Custom Tags | A JSON dictionary of custom tags to assign to auto-scaled Agent EC2 nodes.
AWS: EC2 Startup Script | When the EC2 nodes are provisioned this script is executed. The script is responsible for installing and configuring the Kasm Agent.
Retrieve Windows VM Password from AWS | When provisioning an AWS Windows VM, Kasm can retrieve the password generated by AWS and store it in the Server configuration record created during the autoscale provision. This will only happen if the Connection Password field from the attached Autoscale config is blank. When populated, Kasm will use the defined value instead of what is returned from AWS. The Administrator may want to leave this field blank and disable retrieving the password from AWS if they wish the Kasm user to be presented with a login screen to manually enter credentials upon connecting to the Windows Workspace. NOTE: This setting only affects Windows (RDP connection type) AWS instances.
SSH Keys | The SSH key pair to assign to the EC2 nodes.
AWS Config Override (JSON) | Custom configuration may be added to the provision request for advanced use cases. Instance configuration is overridden in the 'instance_config' configuration block.
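As a hedged illustration of the JSON-valued fields above: EC2 Security Group IDs expects a JSON list such as ["sg-0123456789abcdef0"], and EC2 Custom Tags a dictionary such as {"Department": "Engineering"}. For the AWS Config Override, a minimal sketch, assuming the keys inside instance_config follow the EC2 RunInstances parameter names (the values shown are placeholders, not values from this documentation):

```json
{
  "instance_config": {
    "EbsOptimized": true,
    "Monitoring": {"Enabled": true}
  }
}
```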
Azure Settings
A number of settings are required to be defined to use this functionality. The Azure settings appear in the Deployment Zone configuration when the feature is licensed.

Azure Settings
Register Azure app
An API key for Kasm must be created to interface with Azure. Azure calls these registrations apps, and the following example walks through registering one along with the required permissions.
Register an app by going to the Azure Active Directory service in the Azure portal.

Azure Active Directory
From the Add dropdown select App Registration

App Registration
Give this app a human-readable name such as Kasm Workspaces

App Registration
Go to Resource Groups and select the Resource Group that Kasm will autoscale in.

Azure Resource Groups
Select Access Control (IAM)

Access Control
From the Add drop down select Add role assignment

Add Role Assignment
The app created in Azure will need multiple roles. First select the Virtual Machine Contributor role, then on the next page select the app by typing in its name, e.g. Kasm Workspaces

Virtual Machine Contributor

Assign Contributor
Go through this process again to add the Network Contributor and the DNS Zone Contributor roles

Network Contributor

DNS Zone Contributor
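The portal steps above can also be sketched with the Azure CLI. This is a hedged equivalent, not from the original documentation; the app name, subscription ID, and resource group are placeholders to replace with your own values:

```shell
# Register the app and create a service principal for it
# ("Kasm Workspaces" is an example display name)
APP_ID=$(az ad app create --display-name "Kasm Workspaces" --query appId -o tsv)
az ad sp create --id "$APP_ID"

# Assign the required roles on the resource group Kasm will autoscale in
SCOPE="/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
for ROLE in "Virtual Machine Contributor" "Network Contributor" "DNS Zone Contributor"; do
  az role assignment create --assignee "$APP_ID" --role "$ROLE" --scope "$SCOPE"
done
```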
Azure VM Settings
A number of settings are required to be defined to use this functionality. The Azure settings appear in the Pool configuration when the feature is licensed.

Azure VM
Name | Description
---|---
Name | A name to use to identify the config.
Subscription ID | The Subscription ID for the Azure account. This can be found in the Azure portal by searching for Subscriptions in the search bar in Azure home, then selecting the subscription to use.
Resource Group | The Resource Group the DNS Zone and/or Virtual Machines belong to.
Tenant ID | The Tenant ID for the Azure account. This can be found in the Azure portal by going to Azure Active Directory using the search bar in Azure home.
Client ID | The Client ID credential used to authenticate to the Azure account. The Client ID can be obtained by registering an application within Azure Active Directory.
Client Secret | The Client Secret credential created with the registered application in Azure Active Directory.
Azure Authority | The Azure authority to use. There are four: Azure Public Cloud, Azure Government, Azure China and Azure Germany.
Region | The Azure region where the Agents will be provisioned.
Max Instances | The maximum number of Azure VMs to provision regardless of the need for additional resources.
VM Size | The size configuration of the Azure VM to provision.
OS Disk Type | The disk type to use for the Azure VM.
OS Disk Size (GB) | The size (in GB) of the boot volume to assign the compute instance.
OS Image Reference (JSON) | The OS Image Reference configuration for the Azure VMs.
Image is Windows | Whether the VM being created runs Windows.
Network Security Group | The network security group to attach to the VM.
Subnet | The subnet to attach the VM to.
Assign Public IP | If checked, the VM will be assigned a public IP. If no public IP is assigned, the VM must be attached to a standard load balancer, or the subnet must have a NAT Gateway or user-defined route (UDR). If a public IP is used, the subnet must not also include a NAT Gateway. Reference
Tags (JSON) | A JSON dictionary of custom tags to assign to the VMs.
OS Username | The login username to assign to the new VM.
OS Password | The login password to assign to the new VM. Note: Password authentication is disabled for SSH by default.
SSH Public Key | The SSH public key to install on the VM for the defined user.
Agent Startup Script | When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.
Config Override (JSON) | Custom configuration may be added to the provision request for advanced use cases. The emitted JSON structure is visible by clicking JSON View when inspecting the VM in the Azure console. The keys in this configuration can be used to update top-level keys within the emitted JSON config.
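As a hedged illustration of the OS Image Reference shape: the standard Azure image reference object uses publisher/offer/sku/version keys. The Ubuntu marketplace values below are placeholders chosen for the example, not values from this documentation:

```json
{
  "publisher": "canonical",
  "offer": "0001-com-ubuntu-server-jammy",
  "sku": "22_04-lts-gen2",
  "version": "latest"
}
```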
Digital Ocean Settings
A number of settings are required to be defined to use this functionality. The Digital Ocean settings appear in the Pool configuration when the feature is licensed.
Warning
Please review Tag Does Not Exist Error for known issues and workarounds

Digital Ocean VM
Name | Description
---|---
Name | A name to use to identify the config.
Token | The API token used to authenticate with Digital Ocean.
Max Droplets | The maximum number of Digital Ocean droplets to provision, regardless of whether more are needed to fulfill user demand.
Region | The Digital Ocean Region where droplets should be provisioned. (e.g. nyc1)
Image | The Image to use when creating droplets. (e.g. docker-18-04)
Droplet Size | The droplet size configuration. (e.g. c-2)
Tags | One or more tags to assign to the droplet when it is created, as a comma-separated list.
SSH Key Name | The SSH Key to assign to the newly created droplets. The SSH Key must already exist in the Digital Ocean account.
Firewall Name | The name of the Firewall to apply to the newly created droplets. This Firewall must already exist in the Digital Ocean account.
Startup Script | When droplets are provisioned this script is executed. The script is responsible for installing and configuring the Kasm Agent.
Tag Does Not Exist Error
Upon first testing AutoScaling with Digital Ocean, an error similar to the following may be presented:
Future generated an exception: tag zone:abc123 does not exist
traceback:
..
File "digitalocean/Firewall.py", line 225, in add_tags
File "digitalocean/baseapi.py", line 196, in get_data
digitalocean.DataReadError: tag zone:abc123 does not exist
process: manager_api_server
This error occurs when Kasm Workspaces tries to assign a unique tag based on the Zone Id to the Digital Ocean Firewall.
If that tag does not already exist in Digital Ocean, the operation will fail and present the error.
To work around the issue, manually create a tag matching the one specified in the error (e.g. zone:abc123) via the Digital Ocean console. This can also be done via the API, or by simply creating the tag on a temporary Droplet.
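For example, the missing tag can be created with a single call to the Digital Ocean tags API. This is a sketch: replace the tag name with the one from your error message, and supply a valid API token:

```shell
# Create the missing zone tag via the Digital Ocean API.
# DIGITALOCEAN_TOKEN must hold a valid API token; "zone:abc123" is the
# tag name taken from the error message and will differ per deployment.
curl -s -X POST "https://api.digitalocean.com/v2/tags" \
  -H "Authorization: Bearer ${DIGITALOCEAN_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"name": "zone:abc123"}'
```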
Google Cloud (GCP) Settings
A number of settings are required to be defined to use this functionality. The GCP settings appear in the Pool configuration when the feature is licensed.

Google Cloud VM
Name | Description
---|---
Name | A name to use to identify the config.
GCP Credentials | The JSON formatted credentials for the service account used to authenticate with GCP: Ref
Max Instances | The maximum number of GCP compute instances to provision regardless of the need for additional resources.
Project ID | The Google Cloud Project ID. (e.g. pensive-voice-547511)
Region | The region to provision the new compute instances in. (e.g. us-east4)
Zone | The zone the new compute instances will be provisioned in. (e.g. us-east4-b)
Machine Type | The machine type for the GCP compute instances. (e.g. e2-standard-2)
Machine Image | The machine image to use for the new compute instances. (e.g. projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20211212)
Boot Volume GB | The size (in GB) of the boot volume to assign the compute instance.
Disk Type | The disk type for the new instance. (e.g. pd-ssd)
Customer Managed Encryption Key (CMEK) | The optional path to the Customer Managed Encryption Key (CMEK). (e.g. projects/pensive-voice-547511/locations/global/keyRings/my-keyring/cryptoKeys/my-key)
Network | The path of the network to place the new instance in. (e.g. projects/pensive-voice-547511/global/networks/default)
Sub Network | The path of the subnetwork to place the new instance in. (e.g. projects/pensive-voice-547511/regions/us-east4/subnetworks/default)
Public IP | If checked, a public IP will be assigned to the new instances.
Network Tags (JSON) | A JSON list of the network tags to assign to the new instance.
Custom Labels (JSON) | A JSON dictionary of custom labels to assign to the new instance.
Metadata (JSON) | A JSON list of metadata objects to add to the instance.
Service Account (JSON) | A JSON dictionary representing a service account to attach to the instance.
Guest Accelerators (JSON) | A JSON list representing the guest accelerators (e.g. GPUs) to attach to the instance.
GCP Config Override (JSON) | A JSON dictionary that can be used to customize attributes of the VM request; certain attributes cannot be overridden.
VM Installed OS Type | The family of the OS installed on the VM (e.g. linux or windows).
Startup Script Type | The type of startup script to execute; this determines the key used when creating the GCP startup script metadata. Windows Startup Scripts / Linux Startup Scripts
Startup Script | When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.
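As a hedged illustration of the JSON-valued GCP fields above, following the shapes used by the GCP Compute API (the service account email, accelerator type, and metadata values are placeholders, and the exact key casing Kasm expects may differ):

```json
{
  "metadata": [{"key": "enable-oslogin", "value": "TRUE"}],
  "service_account": {
    "email": "my-sa@pensive-voice-547511.iam.gserviceaccount.com",
    "scopes": ["https://www.googleapis.com/auth/cloud-platform"]
  },
  "guest_accelerators": [
    {"accelerator_type": "nvidia-tesla-t4", "accelerator_count": 1}
  ]
}
```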
Note on Updating Existing Google Cloud Providers (GCP)
Please review the settings for all existing Google Cloud Providers (GCP). Two new fields were added: VM Installed OS Type, which defaults to Linux, and Startup Script Type, which defaults to Bash Script. If an existing provider is configured with a Windows VM, it will not successfully launch the startup script without changing these values.
Oracle Cloud (OCI) Settings
A number of settings are required to be defined to use this functionality. The OCI settings appear in the Pool configuration when the feature is licensed.

OCI VM
Name | Description
---|---
Name | A name to use to identify the config.
User OCID | The OCID of the user to authenticate with the OCI API. (e.g. ocid1.user.oc1..xyz)
Public Key Fingerprint | The public key fingerprint of the authenticated API user. (e.g. xx:yy:zz:11:22:33)
Private Key | The private key (PEM format) of the authenticated API user.
Region | The OCI Region name. (e.g. us-ashburn-1)
Tenancy OCID | The Tenancy OCID for the OCI account. (e.g. ocid1.tenancy.oc1..xyz)
Compartment OCID | The Compartment OCID where the auto-scaled agents will be placed. (e.g. ocid1.compartment.oc1..xyz)
Network Security Group OCIDs (JSON) | A JSON list of Security Group OCIDs that will be assigned to the auto-scaled agents.
Max Instances | The maximum number of OCI compute instances to provision regardless of the need for available free slots.
Availability Domains (JSON) | A JSON list of availability domains where the OCI compute instances may be placed.
Image OCID | The OCID of the image to use when creating the compute instances. (e.g. ocid1.image.oc1.iad.xyz)
Shape | The name of the shape used for the created compute instances. (e.g. VM.Standard.E4.Flex)
Flex CPUs | The number of OCPUs to assign the compute instance. This is only applicable when a Flex shape is used.
Burstable Base CPU Utilization | The baseline percentage of a CPU core that can be used continuously on a burstable instance (select 100% to use a non-burstable instance). Reference.
Flex Memory GB | The amount of memory (in GB) to assign the compute instance. This is only applicable when a Flex shape is used.
Boot Volume GB | The size (in GB) of the boot volume to assign the compute instance.
Boot Volume VPUs Per GB | The Volume Performance Units (VPUs) to assign to the boot volume. Values between 10 and 120 in multiples of 10 are acceptable. 10 is the default and represents the Balanced profile. The higher the VPUs, the higher the volume performance and cost. Reference.
Custom Tags (JSON) | A JSON dictionary of custom freeform tags to assign to the auto-scaled instances.
Subnet OCID | The OCID of the subnet where the auto-scaled instances will be placed. (e.g. ocid1.subnet.oc1.iad.xyz)
SSH Public Key | The SSH public key to insert into the compute instances. (e.g. ssh-rsa XYABC)
Startup Script | When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.
OCI Config Override | A JSON dictionary that can be used to customize attributes of the VM request. An OCI Model can be specified with the "OCI_MODEL_NAME" key. Reference: OCI Python Docs and Kasm Examples.
You can find the OCI Image ID for the desired operating system version in the desired region by navigating the OCI Image page.
OCI Config Override Examples
Below are some OCI autoscale configurations that utilize the OCI Config Override.
Disable Legacy Instance Metadata Service
Disables instance metadata service v2 for additional security.
{
  "launch_instance_details": {
    "instance_options": {
      "OCI_MODEL_NAME": "InstanceOptions",
      "are_legacy_imds_endpoints_disabled": true
    }
  }
}
Enable Instance Agent Plugins
A list of available plugins can be retrieved by navigating to an existing instance’s “Oracle Cloud Agent” config page. This example enables the “Vulnerability Scanning” plugin.
{
  "launch_instance_details": {
    "agent_config": {
      "OCI_MODEL_NAME": "LaunchInstanceAgentConfigDetails",
      "is_monitoring_disabled": false,
      "is_management_disabled": false,
      "are_all_plugins_disabled": false,
      "plugins_config": [{
        "OCI_MODEL_NAME": "InstanceAgentPluginConfigDetails",
        "name": "Vulnerability Scanning",
        "desired_state": "ENABLED"
      }]
    }
  }
}
VMware vSphere Settings
A number of settings are required to be defined to use this functionality. The VMware vSphere settings appear in the Pool configuration when the feature is licensed.

VSphere VM
Name | Description
---|---
Name | A name to use to identify the config.
vSphere vCenter Address | The location of the VMware vSphere vCenter server to use.
vSphere vCenter Port | The port to use. (The vCenter default is 443.)
vSphere vCenter Username | The username to use when authenticating with the vSphere vCenter server.
vSphere vCenter Password | The password to use when authenticating with the vSphere vCenter server.
VM Template Name | The name of the template VM to use when cloning new autoscaled VMs.
Max Instances | The maximum number of vSphere VM instances to provision regardless of the need for available free slots.
Datacenter Name | The datacenter to use for cloning the new vSphere VM instances.
VM Folder | The VM folder to use for cloning the new vSphere VM instances. This field is optional; if left blank the VM folder of the template is used.
Datastore Name | The datastore to use for cloning the new vSphere VM instances. This field is optional; if left blank the datastore of the template is used.
Cluster Name | The cluster to use for cloning the new vSphere VM instances. This field is optional; if left blank the cluster of the template is used.
Resource Pool | The resource pool to use for cloning the new vSphere VM instances. This field is optional; if left blank the resource pool of the template is used.
Datastore Cluster Name | The datastore cluster to use for cloning the new vSphere VM instances. This field is optional; if left blank the datastore cluster of the template is used.
Guest VM Username | The username to use for running the startup script on the new vSphere VM instance. This account should have sufficient privileges to execute all commands in the startup script.
Guest VM Password | The password for the Guest VM Username account.
Number of Guest CPUs | The number of CPUs to configure on new vSphere VM instances. This option is not dependent on the number of CPUs configured on the template.
Amount of Guest Memory (GiB) | The amount of memory in gibibytes to configure on new vSphere VM instances. This option is not dependent on the amount of memory configured on the template.
What family of OS is installed in the VM | Whether the template OS is Linux or Windows. This is needed to ensure proper execution of the startup script.
Startup Script | When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as Bash scripts on a Linux host and PowerShell scripts on a Windows host. Additional troubleshooting steps can be found in the Creating Templates For Use With The VMware vSphere Provider section of the server documentation.
Permissions for vCenter service account
These are the minimum permissions that your service account requires in vCenter based on a default configuration. The account might require additional privileges depending on specific features and configurations you have in place. We advise creating a dedicated service account for Kasm Workspaces autoscaling with these permissions to enhance security and minimize potential risks.
Datastore
Allocate space
Browse datastore
Global
Cancel task
Network
Assign network
Resource
Assign virtual machine to resource pool
Virtual machine
Change Configuration
Change CPU count
Change Memory
Set annotation
Edit Inventory
Create from existing
Create new
Remove
Unregister
Guest operations
Guest operation modifications
Guest operation program execution
Guest operation queries
Interaction
Power off
Power on
Provisioning
Deploy template
Network Connectivity
The agent startup scripts utilize VMware’s guest script execution via VMware Tools. This functionality requires direct HTTPS connectivity between the Kasm Workspace Manager and the ESXi host(s) running the agent VMs.
Notes on vSphere Datastore Storage
When configuring VMware vSphere with Kasm Workspaces, one important item to keep in mind is datastore storage. When clones are created, VMware will attempt to satisfy the clone operation; if the datastore runs out of space, any VMs running on that datastore will be paused until space is available. Kasm Workspaces recommends that critical management VMs, such as the vCenter server VM and cluster management VMs, are kept on separate datastores that are not used for Kasm autoscaling.
OpenStack Settings
A number of settings are required to be defined to use this functionality. The OpenStack settings appear in the Pool configuration when the feature is licensed.
The appropriate OpenStack configuration options can be found by using the “API Access” page of the OpenStack UI and downloading the “OpenStack RC File”.

OpenStack VM
Name | Description
---|---
Name | A name to use to identify the config.
Max Instances | The maximum number of OpenStack compute instances to provision regardless of the need for additional resources.
OpenStack Identity Endpoint | The endpoint address of the OpenStack Keystone endpoint.
OpenStack Nova Endpoint | The endpoint address of the OpenStack Nova (Compute) endpoint.
OpenStack Nova Version | The version to use with the OpenStack Nova (Compute) endpoint.
OpenStack Glance Endpoint | The endpoint address of the OpenStack Glance (Image) endpoint.
OpenStack Glance Version | The version to use with the OpenStack Glance (Image) endpoint.
OpenStack Cinder Endpoint | The endpoint address of the OpenStack Cinder (Volume) endpoint. Note: The address contains the OpenStack Project ID.
OpenStack Cinder Version | The version to use with the OpenStack Cinder (Volume) endpoint.
Project Name | The name of the OpenStack Project where VMs will be provisioned.
Authentication Method | The kind of credential used to authenticate against the OpenStack endpoints.
Application Credential ID | The Credential ID of the OpenStack Application Credential.
Application Credential Secret | The OpenStack Application Credential secret.
Project Domain Name | The Domain that the OpenStack Project belongs to.
User Domain Name | The Domain that the OpenStack User belongs to.
Username | The username of the OpenStack user used to authenticate against OpenStack.
Password | The password of the OpenStack user used to authenticate against OpenStack.
Metadata | A JSON dictionary containing the metadata tags applied to the OpenStack VMs.
Image ID | The ID of the image used to provision OpenStack VMs.
Flavor | The name of the desired flavor for the OpenStack VM.
Create Volume | Enable to create a new Block Storage (Cinder) volume for the OpenStack VM. (When disabled, ephemeral Compute (Nova) storage is used.)
Volume Size (GB) | The desired size of the VM volume in GB. This can only be specified when "Create Volume" is enabled.
Volume Type | The type of volume to use for the new OpenStack VM volume.
Startup Script | When OpenStack VMs are provisioned this script is executed. The script is responsible for installing and configuring the Kasm Agent.
Security Groups | A list containing the security groups applied to the OpenStack VM.
Network ID | The ID of the network that the OpenStack VMs will be connected to.
Key Name | The name of the SSH key used to connect to the instance.
Availability Zone | The name of the Availability Zone that the OpenStack VM will be placed into.
Config Override | A JSON dictionary that can be used to customize attributes of the VM request.
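As a hedged illustration of the list- and dictionary-valued OpenStack fields above, following the shapes used by the Nova API (the tag and security group names are placeholders, not values from this documentation):

```json
{
  "metadata": {"kasm": "autoscale"},
  "security_groups": ["default", "kasm-agents"]
}
```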
Openstack Notes
Openstack Endpoints Require Trusted Certificates
The OpenStack provider requires that OpenStack endpoints present trusted, signed TLS certificates. This can be done through an API gateway that presents a valid certificate, or by configuring valid certificates on each individual service (Reference: OpenStack Docs).
Application Credential Access Rules
OpenStack Application Credentials allow administrators to specify Access Rules that restrict the permissions of an application credential further than a role alone would allow. Below is an example of the minimum set of permissions that Kasm Workspaces requires in an Application Credential:
- service: volumev3
  method: POST
  path: /v3/*/volumes
- service: volumev3
  method: DELETE
  path: /v3/*/volumes/*
- service: volumev3
  method: GET
  path: /v3/*/volumes
- service: volumev3
  method: GET
  path: /v3/*/volumes/*
- service: volumev3
  method: GET
  path: /v3/*/volumes/detail
- service: compute
  method: GET
  path: /v2.1/servers/detail
- service: compute
  method: GET
  path: /v2.1/servers
- service: compute
  method: GET
  path: /v2.1/flavors
- service: compute
  method: GET
  path: /v2.1/flavors/*
- service: compute
  method: GET
  path: /v2.1/servers/*/os-volume_attachments
- service: compute
  method: GET
  path: /v2.1/servers/*
- service: compute
  method: GET
  path: /v2.1/servers/*/os-interface
- service: compute
  method: POST
  path: /v2.1/servers
- service: compute
  method: DELETE
  path: /v2.1/servers/*
- service: image
  method: GET
  path: /v2/images/*
- service: image
  method: GET
  path: /v2/schemas/image
KubeVirt Enabled Providers
Overview
KASM supports autoscaling in Kubernetes environments that are running KubeVirt. This includes generic k8s installations as well as GKE and Harvester deployments.
Startup Scripts
We have released updated startup scripts to include KubeVirt support; the most important change is the inclusion of the qemu-agent.
https://github.com/kasmtech/workspaces-autoscale-startup-scripts/blob/develop/latest/docker_agents/ubuntu.sh
Config Overrides
KASM generates VMs using a Kubernetes yaml manifest described by this API specification:
https://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine
In the event that KASM providers do not expose a required feature, the provider configuration may be overridden. In order to do this, the entire manifest must be stored in the provider config_override.
KASM will parse the manifest and attempt to update certain fields: the metadata will be updated so that the name field contains a unique name, the namespace matches the namespace in the provider config, and the labels are updated to contain various labels required for autoscale functionality. All other values will be preserved. The runStrategy will be set to Always and the hostname will be set to match the unique name.
In order to support startup scripts, a disk with the following settings will be appended to the disks:
- name: config-drive-disk
  cdrom:
    bus: sata
    readonly: true
This points to a volume that will be appended to the volumes with the following settings:
- name: config-drive-disk
  cloudInitConfigDrive:
    secretRef:
      name: f'{name}-secret'
The manifest will be used to spawn multiple VMs, so unique names are necessary for certain resources such as PVCs. To support this, the provider will replace any instance of $KASM_NAME with a unique name. To use this for multiple different types of resources, you can append to the name, as in this suggested PVC example:
volumes:
  - name: disk-0
    persistentVolumeClaim:
      claimName: $KASM_NAME-pvc
Again, because the manifest will be used to spawn multiple VMs, it is necessary to utilize a disk cloning method such as the dataVolume feature of the Containerized Data Importer interface created by KubeVirt.
Caveats
The k8s namespace for KASM resources is configured on the provider; it should not be updated while the provider is in use. Doing so can result in unpredictable behavior and orphaned resources. If it is necessary to change the k8s namespace, create a new autoscale config and provider with the new namespace, and update the old autoscale configuration to set the standby cores, GPUs and memory to 0. This allows new resources to transition to the new provider.
It is possible for orphaned k8s objects to exist for various reasons, such as power loss of the KASM server during VM creation. Currently, these objects must be cleaned up manually. The k8s objects that KASM creates are: virtualmachines, secrets and PVCs.
The KASM KubeVirt provider does not work out of the box with the following Kubernetes deployments:
KIND: the default KIND deployment uses local-path provisioning for storage, which does not support CDI cloning.
KubeVirt Settings
A number of settings are required to be defined to use this functionality. The KubeVirt settings appear in the Pool configuration when the feature is licensed.
The appropriate Kubernetes configuration options can be found by downloading the KubeConfig file provided by your Kubernetes installation.

KubeVirt VM
Name | Description
---|---
Name | A name to use to identify the config.
Max Instances | The maximum number of KubeVirt compute instances to provision regardless of the need for additional resources.
Kubernetes Host | The address of the Kubernetes cluster.
Kubernetes SSL Certificate | The Kubernetes cluster certificate as a base64 encoded string of a PEM file.
Kubernetes API Token | The bearer token for authentication to the Kubernetes cluster.
VM Namespace | The name of the Kubernetes namespace where the VMs will be provisioned.
VM SSH Public Key | The SSH public key used to access the VM.
VM Cores | The number of CPU cores to configure for the VM.
VM Memory | The amount of memory in gibibytes (GiB) to configure for the VM.
VM Disk Size | The size of the disk in gibibytes (GiB) to configure for the VM.
VM Disk Source | The name of the source PVC containing a cloud-ready disk image used to clone a new disk volume.
VM Interface Type | The interface type for the VM. (e.g. masquerade or bridge)
VM Network Name | The name of the network interface. If using a multus network, it should match the name of that network.
VM Network Type | The network type for the VM. (e.g. pod or multus)
VM Startup Script | When VMs are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as Bash scripts on a Linux host and PowerShell scripts on a Windows host.
Configuration Override | A config override that contains a complete YAML manifest file used when provisioning the VM.
Enable TPM | Enable TPM for the VM.
Enable EFI Boot | Enable the EFI boot loader for the VM.
Enable Secure Boot | Enable secure boot for the VM (requires EFI boot to be enabled).
KubeVirt GKE Setup Example
This example assumes you have a GKE account, a Linux development environment, and an existing KASM deployment (ref).
The example will assume the following variables:
cluster name: kasm
region: us-central1
zone: us-central1-c
machine-type: c3-standard-8
namespace: kasm
storage class name: kasm-storage
pvc name: kasm-ubuntu-focal
pvc size: 25GiB
pvc image: focal-server-cloudimg-amd64.img
These should be replaced with values more appropriate to your installation.
Ensure GKE is configured
Install the gcloud console (ref):
curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
tar -xf google-cloud-cli-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh -q --path-update true --command-completion true
. ~/.profile
Initialize the gcloud CLI (ref):
gcloud init --no-launch-browser
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
Enable the GKE engine API (ref):
gcloud services enable container.googleapis.com
Create a cluster with nested virtualization support (ref):
gcloud container clusters create kasm \
--enable-nested-virtualization \
--node-labels=nested-virtualization=enabled \
--machine-type=c3-standard-8
Install the kubectl gcloud component (ref):
gcloud components install kubectl
Configure GKE kubectl authentication (ref):
gcloud components install gke-gcloud-auth-plugin
gcloud container clusters get-credentials kasm \
  --zone=us-central1-c
Create the KASM namespace:
kubectl create namespace kasm
Install KubeVirt
Note: The current v1.3 release of KubeVirt introduced a bug preventing GKE support. You must install the v1.2.2 release.
Install KubeVirt (ref):
#export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
export RELEASE=v1.2.2
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
Wait for it to be ready. This may time out multiple (2-3) times before returning successfully:
kubectl -n kubevirt wait kv kubevirt --for condition=Available
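If you prefer not to re-run the command by hand, the wait can be wrapped in a simple retry loop. This is a generic sketch; the `retry` helper is our own, not part of kubectl:

```shell
# Retry a command up to N times, sleeping 1s between attempts.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0            # success: stop retrying
    echo "attempt $i failed; retrying..." >&2
    i=$((i + 1))
    sleep 1
  done
  return 1                      # all attempts failed
}

# Usage with the wait above:
#   retry 5 kubectl -n kubevirt wait kv kubevirt --for condition=Available
```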
Install the Containerized Data Importer extension
In order to support efficient cloning, KubeVirt requires the Containerized Data Importer extension (ref).
Install the CDI extension:
export VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
Create a new storage class that uses the GKE CSI driver and has the Immediate volume binding mode:
kubectl apply -f - <<EOF
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
components.gke.io/component-name: pdcsi
components.gke.io/component-version: 0.18.23
components.gke.io/layer: addon
storageclass.kubernetes.io/is-default-class: "true"
labels:
addonmanager.kubernetes.io/mode: EnsureExists
k8s-app: gcp-compute-persistent-disk-csi-driver
name: kasm-storage
parameters:
type: pd-balanced
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
Mark any existing default storage classes as non-default:
kubectl patch storageclass standard-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
Create local kubectl authentication
Currently, in order to authenticate with the GKE cluster, KASM needs a local kubectl authentication account.
Create a service account:
KUBE_SA_NAME="kasm-admin"
kubectl create sa $KUBE_SA_NAME
kubectl create clusterrolebinding $KUBE_SA_NAME --clusterrole cluster-admin --serviceaccount default:$KUBE_SA_NAME
Manually create a long-lived API token for the service account:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: $KUBE_SA_NAME-secret
annotations:
kubernetes.io/service-account.name: $KUBE_SA_NAME
type: kubernetes.io/service-account-token
EOF
Generate the kubeconfig:
KUBE_DEPLOY_SECRET_NAME=$KUBE_SA_NAME-secret
KUBE_API_EP=`gcloud container clusters describe kasm --format="value(privateClusterConfig.publicEndpoint)"`
KUBE_API_TOKEN=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.token}'|base64 --decode`
KUBE_API_CA=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.ca\.crt}'`
echo $KUBE_API_CA | base64 --decode > tmp.deploy.ca.crt
touch $HOME/local.cfg
export KUBECONFIG=$HOME/local.cfg
kubectl config set-cluster local --server=https://$KUBE_API_EP --certificate-authority=tmp.deploy.ca.crt --embed-certs=true
kubectl config set-credentials $KUBE_SA_NAME --token=$KUBE_API_TOKEN
kubectl config set-context local --cluster local --user $KUBE_SA_NAME
kubectl config use-context local
Validate that your kubeconfig works:
kubectl version
It should display both the client and server versions. If it does not, retrieve the current config used by kubectl to ensure it is using the correct one:
kubectl config view
Ensure that it is using the local settings you generated and not an existing GKE configuration.
Upload a PVC
The virtctl tool can be used to upload a VM image. Both the raw and qcow2 formats are supported. The image should be cloud-ready, with cloud-init configured.
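For context, "cloud-ready with cloud-init configured" means the image boots and applies user data such as the fragment below on first boot. This is a minimal illustrative sketch; the user name and key are placeholders:

```yaml
#cloud-config
# Minimal user-data of the kind a cloud image consumes on first boot.
users:
  - name: kasm                          # placeholder user
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@host   # placeholder public key
package_update: true
```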
Download and install the virtctl tool:
VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/')   # on Windows, use windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
Expose the CDI Upload Proxy by executing the following command in another terminal:
kubectl -n cdi port-forward service/cdi-uploadproxy 8443:443
Use the virtctl tool to upload the VM image:
virtctl image-upload pvc kasm-ubuntu-focal --uploadproxy-url=https://localhost:8443 --size=25Gi --image-path=./focal-server-cloudimg-amd64.img --insecure -n kasm
Ensure KASM is configured
Create Certs
sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout kasm_nginx.key -out kasm_nginx.crt -subj "/C=US/ST=VA/L=None/O=None/OU=DoFu/CN=kasm-host/emailAddress=none@none.none" 2> /dev/null
Configure KASM
Add a license
Set the default zone upstream address to the address of the KASM host
Add a Pool
- Name: KubeVirt Pool
- Type: Docker Agent
Add an Auto-Scale config
- Name: KubeVirt AutoScale
- AutoScale Type: Docker Agent
- Pool: KubeVirt Pool
- Deployment Zone: default
- Standby Cores: 4
- Standby GPUs: 1
- Standby Memory: 4000
- Downscale Backoff: 600
- Agent Cores Override: 4
- Agent GPUs Override: 1
- Agent Memory Override: 4
- Nginx Cert: paste kasm_nginx.crt
- Nginx Key: paste kasm_nginx.key
Create a new VM Provider
- Provider: KubeVirt
- Name: KubeVirt Provider
- Max Instances: 10
- Host: paste server URI from kubeconfig
- SSL Certificate: paste certificate-authority-data from kubeconfig
- API Token: paste token from kubeconfig
- VM Namespace: kasm
- VM Public SSH Key: paste user public ssh key
- Cores: 4
- Memory: 4
- Disk Source: kasm-ubuntu-focal
- Disk Size: 30
- Interface Type: bridge
- Network Name: default
- Network Type: pod
- Startup Script: paste ubuntu docker agent startup script
Harvester Settings
A number of settings are required to be defined to use this functionality. The Harvester settings appear in the Pool configuration when the feature is licensed.
The appropriate Kubernetes configuration options can be found by downloading the KubeConfig file provided by your Kubernetes installation.

Harvester VM

| Name | Description |
|---|---|
| Name | A name to use to identify the config. |
| Max Instances | The maximum number of KubeVirt compute instances to provision regardless of the need for additional resources. |
| Kubernetes Host | The address of the Kubernetes cluster. |
| Kubernetes SSL Certificate | The Kubernetes cluster certificate as a base64-encoded string of a PEM file. |
| Kubernetes API Token | The bearer token for authentication to the Kubernetes cluster. |
| VM Namespace | The name of the Kubernetes namespace where the VMs will be provisioned. |
| VM SSH Public Key | The public SSH key used to access the VM. |
| VM Cores | The number of CPU cores to configure for the VM. |
| VM Memory | The amount of memory in Gibibyte (GiB) to configure for the VM. |
| VM Disk Size | The size of the disk in Gibibyte (GiB) to configure for the VM. |
| VM Disk Image | The name of the Harvester image used to clone a new disk volume. |
| VM Interface Type | The interface type for the VM (e.g. masquerade or bridge). |
| VM Network Name | The name of the network interface. If using a multus network, it should match the name of that network. |
| VM Network Type | The network type for the VM (e.g. pod or multus). |
| VM Startup Script | When VMs are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as bash scripts on a Linux host and PowerShell scripts on a Windows host. Additional troubleshooting steps can be found in the Creating Templates For Use With The VMware vSphere Provider section of the server documentation. |
| Configuration Override | A config override that contains a complete YAML manifest file used when provisioning the VM. |
| Enable TPM | Enable TPM for the VM. |
| Enable EFI Boot | Enable the EFI boot loader for the VM. |
| Enable Secure Boot | Enable secure boot for the VM (requires EFI boot to be enabled). |
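The Kubernetes SSL Certificate field above expects the CA file from your KubeConfig as a single-line base64 string. A sketch of producing (and sanity-checking) that value, using a stand-in file:

```shell
# "ca.crt" is a stand-in; use the certificate file from your downloaded KubeConfig.
printf 'dummy certificate data' > ca.crt

# -w0 disables line wrapping so the value pastes cleanly into the UI (GNU base64).
CERT_B64=$(base64 -w0 ca.crt)

# Sanity check: decoding should round-trip to the original file contents.
echo "$CERT_B64" | base64 --decode
```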
Harvester Setup Example
This example assumes you have a standard Harvester v1.3.1 deployment. The guide for doing so can be found here:
https://docs.harvesterhci.io/v1.3/install/index
Generate credentials to access the Kubernetes cluster
Harvester provides a download link on the support page of the dashboard. This will be used later when configuring the KASM provider.

Download KubeConfig
Create a namespace
The name for this example deployment should be kasm.

Create Namespace
Create a VM network
The namespace should match the one created above; for this example deployment it is kasm. The name will be used later in the provider configuration; for this example deployment it should be kasm-network. For this example deployment, select the mgmt cluster network.

Create VM network
Create a VM image
The namespace should match the one created above; for this example deployment it is kasm. The name will be used later in the provider configuration; for this example deployment it should be kasm-ubuntu-focal. For this example deployment, use the URL https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img.

Create VM image
Create a KASM deployment VM
The namespace should match the one created above; for this example deployment it is kasm. The name for this example deployment should be kasm-app. The CPU, memory and disk size should meet the minimum needs of your demonstration use-cases; a good example would be 4 cores, 8 GiB of memory and 50 GB of disk space. For ease of use, add a public SSH key. Under Volumes, select the disk image created above (kasm-ubuntu-focal). Under Networks, select the VM network created above (kasm-network).

Create KASM VM
Install KASM
Using the VM created above, complete a single-server deployment of KASM (ref).
Create Certs
sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout kasm_nginx.key -out kasm_nginx.crt -subj "/C=US/ST=VA/L=None/O=None/OU=DoFu/CN=kasm-host/emailAddress=none@none.none" 2> /dev/null
Configure KASM
Add a KASM license
Set the default zone upstream address to the address of the KASM host
Add a Pool
- Name: Harvester Pool
- Type: Docker Agent
Add an Auto-Scale config
- Name: Harvester AutoScale
- AutoScale Type: Docker Agent
- Pool: Harvester Pool
- Deployment Zone: default
- Standby Cores: 4
- Standby GPUs: 1
- Standby Memory: 4000
- Downscale Backoff: 600
- Agent Cores Override: 4
- Agent GPUs Override: 1
- Agent Memory Override: 4
- Nginx Cert: paste kasm_nginx.crt
- Nginx Key: paste kasm_nginx.key
Create a new VM Provider
- Provider: Harvester
- Name: Harvester Provider
- Max Instances: 10
- Host: paste server URI from kubeconfig
- SSL Certificate: paste certificate-authority-data from kubeconfig
- API Token: paste token from kubeconfig
- VM Namespace: kasm
- VM Public SSH Key: paste user public ssh key
- Cores: 4
- Memory: 4
- Disk Image: kasm-ubuntu-focal
- Disk Size: 30
- Network Name: kasm-network
- Network Type: multus
- Startup Script: paste ubuntu docker agent startup script
Proxmox Settings
A number of settings are required to be defined to use this functionality. The Proxmox settings appear in the Pool configuration when the feature is licensed.

Proxmox VM

| Name | Description |
|---|---|
| Name | A name to use to identify the config. |
| Max Instances | The maximum number of Proxmox compute instances to provision regardless of the need for additional resources. |
| Host | Must correspond to an exposed address that routes to a Proxmox VE node. Proxmox uses a cluster system where nodes are synced together, so any exposed node will allow access to any additional nodes in the same cluster. If Proxmox is running on a non-standard port, the port must be provided here as part of the URI. This value must not include the scheme (http, https), as it will be added via the Proxmox API and is hardcoded to https. |
| Username | The name of a user that KASM will use to access the Proxmox VE APIs. This user must be created in Proxmox and assigned the appropriate permissions. When creating the KASM user, it can either be an internal Proxmox-managed user or one managed by Linux PAM. This value must end in the appropriate realm: either @pam for Linux authentication or @pve for Proxmox internal auth. For example, a Proxmox user named KasmUser would be entered as KasmUser@pve. |
| Token Name | Corresponds to the token ID assigned when creating a Proxmox API token. In order for KASM to authenticate, an API token must be created and associated with the KASM user. Depending on your security model, it can either have separate privileges if privilege separation is enabled, or it can share the same permissions as the KASM user. The Proxmox API uses Token Name and Token ID interchangeably to refer to the identifier of a token, and sometimes Token ID includes the full path of the token, which includes the Username. When entering the Token Name into the KASM UI, ensure that you are using only the Token Name and not including the Username (e.g. kasm_token and not kasm_user@pam!kasm_token). |
| Token Value | The secret value generated by the Proxmox UI when creating an authentication token. Used to authenticate with the Proxmox VE APIs. Save off this secret in case you need to re-enter it later, as the value is hidden after entry into the KASM UI. |
| Verify SSL | Whether or not to validate SSL certificates. Set to False to enable self-signed certificates. Defaults to True. |
| VMID Range Lower | The lowest integer value used when generating VM IDs. |
| VMID Range Upper | The highest integer value used when generating VM IDs. |
| Full Clone | Whether or not to perform a full clone of the VM. Defaults to False, which performs a linked clone. |
| Template Name | The name of the VM template to use when cloning new autoscaled VMs. |
| Cluster Node Name | The name of the Proxmox node containing the VM template. |
| Resource Pool Name | The (optional) resource pool to use for cloning the new VM instances. |
| Storage Pool Name | The (optional) storage pool to use for cloning the new VM instances. This requires performing a full clone. |
| Target Node Name | The (optional) name of the Proxmox node the VM will be provisioned on. If left blank, this defaults to the Cluster Node. |
| VM Cores | The number of CPU cores to configure for the VM. |
| VM Memory | The amount of memory in Gibibyte (GiB) to configure for the VM. |
| Installed OS Type | Whether the template OS is Linux or Windows. This is needed to ensure proper execution of the startup script. |
| Startup Script Path | The absolute path to which the startup script will be uploaded (e.g. "C:\Windows\Temp", "/tmp"). The path must exist or the script will fail to execute. |
| Startup Script | When VMs are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as bash scripts on a Linux host and PowerShell scripts on a Windows host. |
The Storage Pool Name, Full Clone mode, and Target Node Name options are related to advanced Proxmox features that may require custom Proxmox configurations such as shared storage, additional permissions, etc. They are beyond the scope of this guide.
Proxmox Configuration
When using the Proxmox provider the administrator must create templates on the cluster for the provider to clone from.
VMID Ranges
The Proxmox API does not allow for atomically generated VMIDs that are guaranteed to be unique between API calls (ref). In order to ensure that a KASM Proxmox provider does not generate a VMID that conflicts with other Proxmox providers, or existing users, a range setting is exposed for configuration in the same fashion as the Proxmox UI.
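To illustrate why a dedicated range matters, here is a hypothetical sketch of the kind of scan a provider must perform: find the first ID in its configured range that the cluster does not already report in use. The in-use list is hard-coded here; a real implementation would query the Proxmox API:

```shell
LOWER=1000
UPPER=2000
IN_USE="1000 1001 1003"   # stand-in for VMIDs reported by the cluster

# Print the first VMID in [LOWER, UPPER] not present in IN_USE.
next_free_vmid() {
  id=$LOWER
  while [ "$id" -le "$UPPER" ]; do
    case " $IN_USE " in
      *" $id "*) id=$((id + 1)) ;;   # taken: try the next ID
      *) echo "$id"; return 0 ;;     # free: use it
    esac
  done
  return 1                           # range exhausted
}

next_free_vmid   # prints 1002 with the stand-in list above
```

Giving each provider a non-overlapping range keeps two providers from racing for the same ID between API calls.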
Security Best Practices
The default Proxmox installation will provide a root user with access to all Proxmox APIs and resources, but in order to properly secure your system you should follow the principle of least privilege and create a user that only has access to the APIs and resources required.
In Proxmox, an API token's permissions must be a subset of an existing user's permissions. For this security model we will create a new user, which by default has no permissions, and add only the required permissions. The API token is only used for authentication and will inherit these permissions. If you modify this scheme, ensure that you add the required permissions to both the user and the API token.
To facilitate that, here is a guide to walk through the APIs and resources that must be enabled in order for KASM to properly function with Proxmox.
Create a new user named KasmAPIUser.

Create User
Create a new token with the ID KasmToken assigned to the KasmAPIUser. Disable Privilege Separation and set it to never expire.

Create Token
Create a new pool for the KASM VMs named KasmPool.

Create Pool
Create a role named KasmAdmin and assign it the following privileges:
Pool.Audit
VM.Allocate
Datastore.AllocateSpace
SDN.Use
VM.Audit
VM.Clone
VM.Config.CDROM
VM.Config.CPU
VM.Config.Disk
VM.Config.HWType
VM.Config.Memory
VM.Config.Network
VM.Config.Options
VM.Monitor
VM.PowerMgmt

Create Role
Add a user permission for each entry in the following list, for the KasmAPIUser with the KasmAdmin role:
/sdn/zones/<networkzone>
/storage/<storagepool>
/pool/KasmPool

Assign User Permission
You must provide the name of the pool created here to the provider configuration.
Ensure that any VM templates you've created have been added to the KasmPool, or KASM will not have access to them.
Linux Templates
When creating Linux templates ensure that:
The QEMU guest agent has been installed and enabled. For example, on Ubuntu/Debian distributions the guest agent can be installed via the following commands:
sudo apt update
sudo apt install qemu-guest-agent -y
Then reboot the VM to allow the changes to take effect.
The Proxmox DHCP server uses the machine-id, not the MAC address, to return unique IP addresses, so the template's machine-id needs to be zeroed out. The method to do so may vary based on Linux distro; on Ubuntu, execute the following commands:
sudo truncate -s 0 /etc/machine-id
sudo truncate -s 0 /var/lib/dbus/machine-id
Windows Templates
The QEMU guest agent has been installed and enabled. The installer is typically included with the Windows VirtIO driver CD.
Ensure that the Windows VirtIO drivers have been installed. This is typically done during installation by clicking Load Driver in a Custom (advanced) install. At a minimum, the VirtIO drivers for the SCSI controller and Ethernet adapters must be installed. It is also recommended to install the memory ballooning driver. More information can be found on the Proxmox wiki (ref).
Remote desktop has been enabled on the VM.
The startup script entered into the VM Config Provider will be run as a Powershell script, so ensure that unrestricted remote scripting is enabled. To do so, open a PowerShell console with Administrator privileges and execute the following command:
Set-ExecutionPolicy Unrestricted
KASM Setup Examples
Proxmox Setup
This example assumes you have an existing KASM deployment (ref), and a standard Proxmox VE v8.3.1 deployment (ref).
Create a Proxmox User
- Under the Datacenter->Permissions section of the Proxmox VE UI, select the Users subsection and click Add to create a new user
- Enter a name for the user; this example will use the value KasmAPIUser
- From the realm drop-down, select the Proxmox VE Authentication Server option
- Enter a valid password for the user
- Confirm the password
Create a Token
- Under the Datacenter->Permissions section of the Proxmox VE UI, select the API Tokens subsection and click Add to create a new token
- Select the user; for this example deployment, select KasmAPIUser@pve
- Un-check the Privilege Separation option
- Enter a name for the token; this example will use the value KasmToken
- Copy the token secret for later use
Create a Pool
- Under the Datacenter->Permissions section of the Proxmox VE UI, select the Pools subsection and click Create to create a new pool
- Enter a name for the pool; this example will use the value KasmPool
Create a Role
- Under the Datacenter->Permissions section of the Proxmox VE UI, select the Roles subsection and click Create to create a new role
- Enter a name for the role; this example will use the value KasmAdmin
- From the Privileges drop-down, select the following privileges:
  - Pool.Audit
  - VM.Allocate
  - Datastore.AllocateSpace
  - SDN.Use
  - VM.Audit
  - VM.Clone
  - VM.Config.CDROM
  - VM.Config.CPU
  - VM.Config.Disk
  - VM.Config.HWType
  - VM.Config.Memory
  - VM.Config.Network
  - VM.Config.Options
  - VM.Monitor
  - VM.PowerMgmt
Assign Permissions
Assign the following three permissions:
/sdn/zones/<networkzone>
/storage/<storagepool>
/pool/KasmPool

- Under the Datacenter->Permissions section of the Proxmox VE UI, click Add and from the drop-down select User Permission to assign new user permissions
- From the Path drop-down, select a path from the above list
- From the User drop-down, select the KasmAPIUser@pve user
- From the Role drop-down, select the KasmAdmin role
- Repeat for the remaining entries in the list
Linux Docker Agent Example
Upload Linux Image
This guide assumes you have a valid Ubuntu Server ISO (ref).
- Under the local section of the Proxmox VE UI, select the ISO Images subsection and click Upload to upload a new image
- From the file drop-down, select the ISO you previously downloaded
Create an Ubuntu VM Template
From the top menu of the Proxmox VE UI, click Create VM.
General Settings
- Enter a valid VM name; for this example deployment enter kasm-ubuntu-template
- Click Next
OS Settings
- From the ISO image drop-down, select the name of the Ubuntu image you created
- From the Resource Pool drop-down, select KasmPool
- Ensure that Linux is selected as the Guest OS type
- Click Next
System Settings
- From the machine drop-down, select q35
- Enable Qemu Agent
- Ensure that VirtIO SCSI single is selected from the SCSI Controller drop-down
- Click Next
Disks Settings
- From the Bus/Device drop-down, select VirtIO Block
- In the Disk size (GiB) section, enter 30
- Click Next
CPU Settings
- From the Cores drop-down, select 4
- From the Type drop-down, select host
- Click Next
Memory Settings
- In the Memory (MiB) section, enter 4096
- Click Next
Network Settings
- Ensure that VirtIO (paravirtualized) is selected from the Model drop-down
- Click Next
Confirm
- Click Finish

From the list of VMs, select the newly created VM and navigate to the Console section. Click on Start.
Install the OS
Remove the installation ISO from the VM
- From the Hardware section of the VM, select the CD/DVD Drive and click on Edit
- Select the Do not use any media option and click on Ok to save the changes
Install the QEMU guest agent
Run the following commands:
sudo apt update
sudo apt install qemu-guest-agent -y
sudo systemctl start qemu-guest-agent
sudo systemctl enable qemu-guest-agent
sudo reboot
sudo truncate -s 0 /etc/machine-id
sudo truncate -s 0 /var/lib/dbus/machine-id
Ensure that the guest agent is running by executing the following command:
sudo systemctl status qemu-guest-agent.service

QEMU Enabled
Convert to template
- Ensure the VM is turned off by selecting Shutdown from the VM top menu and clicking on Yes
- From the More section of the VM top menu, select the Convert to Template option and click on Yes
Configure KASM Docker Agent
Set the default zone upstream address to the address of the KASM host
Add a Pool
- Enter Test Proxmox Linux Pool for the Name
- From the Type drop-down, select Docker Agent
Add an Auto-Scale config
- Enter Test Proxmox Linux AutoScale for the Name
- From the AutoScale Type drop-down, select Docker Agent
- From the Pool drop-down, select Test Proxmox Linux Pool
- From the Deployment Zone drop-down, select default
- Enter 1 for the Standby Cores
- Enter 0 for the Standby GPUs
- Enter 0 for the Standby Memory
- Enter 60 for the Downscale Backoff
- Enter 4 for the Agent Cores Override
- Enter 0 for the Agent GPUs Override
- Enter 4 for the Agent Memory Override
Create a new VM Provider
- From the Provider drop-down, select Proxmox
- Enter Test Proxmox Linux Provider for the Name
- Enter 10 for the Max Instances
- Enter the address of the Proxmox VE server for the Host (e.g. 192.168.1.100)
- Enter KasmAPIUser@pve for the Username
- Enter KasmToken for the Token Name
- Copy in the token secret generated when creating the Proxmox API token previously
- Disable Verify SSL
- Enter 1000 for the VMID Range Lower
- Enter 2000 for the VMID Range Upper
- Enter kasm-ubuntu-template for the Template Name
- Enter the name of the Proxmox VE node for the Cluster Node Name (the default is pve)
- Enter KasmPool for the Resource Pool Name
- Leave the Storage Pool Name empty
- Enter 4 for the Cores
- Enter 4 for the Memory
- From the Installed OS Type drop-down, select Linux
- Enter /tmp for the Startup Script Path
- Copy in the latest Linux docker agent startup script (ref) for the Startup Script
Windows Server 2025 Example
More information can be found on the Proxmox wiki (ref).
Upload Windows Images
This guide assumes you have a valid Windows 2025 ISO (ref) and the corresponding VirtIO drivers ISO (ref).
Create the following two images:
Windows 2025
VirtIO drivers
- Under the local section of the Proxmox VE UI, select the ISO Images subsection and click Upload to upload a new image
- From the file drop-down, select an ISO from the above list
- Repeat for the remaining entry in the list
Create a Windows VM Template
From the top menu of the Proxmox VE UI, click Create VM.
General Settings
- Enter a valid VM name; for this example deployment enter kasm-windows-template
- From the Resource Pool drop-down, select KasmPool
- Click Next
OS Settings
- From the ISO image drop-down, select the name of the Windows 2025 image you created
- Ensure that Microsoft Windows 11/2022/2025 is selected as the Guest OS type
- Check the Add additional driver for VirtIO drivers checkbox
- From the new ISO image drop-down, select the name of the VirtIO drivers image you created
- Click Next
System Settings
- From the machine drop-down, select q35
- Enable Qemu Agent
- Ensure that VirtIO SCSI single is selected from the SCSI Controller drop-down
- Click Next
Disks Settings
- From the Bus/Device drop-down, select SCSI
- From the Cache drop-down, select Write-Back
- In the Disk size (GiB) section, enter 100
- Click Next
CPU Settings
- From the Cores drop-down, select 4
- From the Type drop-down, select host
- Click Next
Memory Settings
- In the Memory (MiB) section, enter 4096
- Click Next
Network Settings
- Ensure that VirtIO (paravirtualized) is selected from the Model drop-down
- Click Next
Confirm
- Click Finish

From the list of VMs, select the newly created VM and navigate to the Console section. Click on Start.
Launch
Windows Boot
When prompted to "Press any key to boot from CD or DVD", press any key
Select language settings
- Click Next
Select keyboard settings
- Click Next
Select setup option
- Click the I agree checkbox
- Click Next
Select Image
- Select Windows Server 2025 Standard Evaluation (Desktop Experience)
- Click Next
Applicable notices and license terms
- Click Accept
Select location to install Windows Server
- Click Load Driver
- Click Browse
- Expand the CD Drive containing the VirtIO drivers
- Navigate to the vioscsi/2k25/amd64 folder
- Click Ok
- Select Red Hat VirtIO SCSI pass-through controller
- Click Install
- Click Load Driver
- Click Accept to agree to the Applicable notices and license terms again
- Click Browse
- Expand the CD Drive containing the VirtIO drivers
- Navigate to the NetKVM/2k25/amd64 folder
- Click Ok
- Select Red Hat VirtIO Ethernet Adapter
- Click Install
- Click Load Driver
- Click Accept to agree to the Applicable notices and license terms again
- Click Browse
- Expand the CD Drive containing the VirtIO drivers
- Navigate to the Balloon/2k25/amd64 folder
- Click Ok
- Select VirtIO Balloon Driver
- Click Install
- Click Next
Ready to Install
- Click Install
Customize Settings
- Enter a valid password for the built-in Administrator
- Confirm the password
- Click Finish
Login
- Press Ctrl-Alt-Del
- Enter the Administrator password
Send diagnostic data to Microsoft
- Click Accept
Enable remote scripting
- Open the Start Menu
- Enter powershell into the search
- Click Run as administrator from the Windows PowerShell application menu
- Execute the following command in the PowerShell console:
Set-ExecutionPolicy Unrestricted
Install the QEMU Guest Agent
- Open File Explorer
- Select the CD Drive containing the VirtIO drivers
- Navigate to the guest-agent folder
- Double-click the qemu-ga-x86_64 installer
Install the remaining VirtIO Drivers
- Open File Explorer
- Select the CD Drive containing the VirtIO drivers
- Double-click the virtio-win-gt-x64 installer
- Click Next
- Click the I accept checkbox, then click Next
- Click Next
- Click Install
- Click Finish
Enable Remote Desktop
- Select Local Server from the Server Manager
- Click on the Disabled link next to Remote Desktop
- Click Allow remote connections to this computer
- Click Ok
- Click Apply
Power Off the OS
- Open the Start Menu
- Click the Power Icon
- From the drop-down, select Shut Down
- From the drop-down, select Other (Planned)
- Click Continue
Remove the installation ISO from the VM
- From the Hardware section of the VM, select the Windows ISO CD/DVD Drive and click on Edit
- Select the Do not use any media option and click on Ok to save the changes
Remove the VirtIO driver ISO from the VM
- From the Hardware section of the VM, select the VirtIO driver ISO CD/DVD Drive and click on Remove
- Select Yes
Convert to template
Power off the OS
- From the More section of the VM top menu, select the Convert to Template option and click on Yes
Configure KASM Windows Server
Set the default zone upstream address to the address of the KASM host
Add a Pool
- Enter Test Proxmox Windows Pool for the Name
- From the Type drop-down, select Server
Add an Auto-Scale config
- Enter Test Proxmox Windows AutoScale for the Name
- From the AutoScale Type drop-down, select Server
- From the Pool drop-down, select Test Proxmox Windows Pool
- From the Deployment Zone drop-down, select default
- Enter 60 for the Downscale Backoff
- Enable Require Server Checkin
- Enable Kasm Desktop Service installed
- From the Connection Type drop-down, select RDP
- Enter 3389 for the Connection Port
- From the Connection Credential Type drop-down, select Dynamic User Accounts
- Enter 1 for the Minimum Available Sessions
- Enter 1 for the Maximum Simultaneous Sessions
Create a new VM Provider
- From the Provider drop-down, select Proxmox
- Enter Test Proxmox Windows Provider for the Name
- Enter 10 for the Max Instances
- Enter the address of the Proxmox VE server for the Host (e.g. 192.168.1.100)
- Enter KasmAPIUser@pve for the Username
- Enter KasmToken for the Token Name
- Copy in the token secret generated when creating the Proxmox API token previously
- Disable Verify SSL
- Enter 3000 for the VMID Range Lower
- Enter 4000 for the VMID Range Upper
- Enter kasm-windows-template for the Template Name
- Enter the name of the Proxmox VE node for the Cluster Node Name (the default is pve)
- Enter KasmPool for the Resource Pool Name
- Leave the Storage Pool Name empty
- Enter 4 for the Cores
- Enter 4 for the Memory
- From the Installed OS Type drop-down, select Windows
- Enter C:\Windows\Temp for the Startup Script Path
- Copy in the latest Windows service startup script (ref) for the Startup Script
Create Workspace
- From the Workspace Type drop-down, select Pool
- Enter Windows 2025 for the Friendly Name
- Enter Test Proxmox Windows 2025 Workspace for the Description
- Enable Enabled
- From the Pool drop-down, select Test Proxmox Windows Pool