Pools

Pools can be used to group a set of similar fixed systems together so they can be treated as a single Workspace for user access. Users see a single Workspace icon on their dashboard, but their session is distributed to an available server in the pool. Each server in the pool can be set to support one or more concurrent sessions, and Kasm automatically distributes sessions evenly over the servers.

Pools can also be used to auto scale servers using a supported VM provider. Auto scaling can be used to automatically provision Servers or Docker Agents.

Note

If the license includes auto-scale configs, the administrator will have three links underneath the table: All AutoScale Configs, All VM Provider Configs, and All DNS Provider Configs. These allow the administrator to see all of the configs available and make changes, but the recommended approach is to use the Edit option on a specific pool.

../../_images/pools.webp

Server Pools

Create Pool

  • Click the Infrastructure item in the navigation menu.

  • Select the Pools option in the dropdown menu.

  • Select Add from the top right of the Pools table.

../../_images/create_server_pool.webp

Create Server Pool

Provide a name for the server pool. The type can be either Docker Agent or Server.

Servers List

This list is only shown for Pools of type Server

This list is almost identical to the Servers list with the exception that only servers that are assigned to this pool are shown. Use the assign button to assign existing servers to this pool.

../../_images/server_list.webp

Server List

Docker Agents List

This list is only shown for Pools of type Docker Agent

This list is almost identical to the Agents list with the exception that only agents that are assigned to this pool are shown. Use the assign button to assign existing agents to this pool.

../../_images/agent_list.webp

Agent List

AutoScale Configurations

Note

This feature requires an Enterprise license. Please contact a Kasm Technologies representative for details.

Kasm has the ability to automatically provision and destroy Servers and Docker Agents based on user demand. The AutoScale configuration differs slightly between Servers and Agents.

../../_images/autoscale_list.webp

Autoscale List

In this section the administrator can Assign or Add AutoScale configurations. If the administrator adds an AutoScale configuration, it will automatically be assigned to this pool, with the AutoScale Type and Pool both set and prevented from being changed.

Clicking the Add button on the AutoScale Configuration table will walk the administrator through a Wizard. The first step is the AutoScale configuration, which is slightly different between a Server pool and a Docker Agent pool.

AutoScale Scheduling

Kasm AutoScaling configurations have the capability of being scheduled for active times. This capability allows customers to save compute costs by turning off Kasm AutoScaling when that extra compute is not needed.

AutoScale schedules are available as a tab when editing an AutoScale configuration. If no schedule is defined, the AutoScale configuration is considered active unless disabled. Kasm will not scale down a server that has active sessions on it, so the administrator can be assured that existing Kasm sessions will not be disrupted when the schedule becomes inactive. When an AutoScale configuration for Docker Agents is inactive, any unused staged sessions will be removed and no new sessions (staged or user created) will be assigned to any Docker Agents that are part of the inactive AutoScale configuration.

Since an AutoScale configuration with an inactive schedule will not provision any compute resources, the administrator may want two AutoScale configurations: one with minimal standby compute to minimize costs during off hours, and one with more substantial resources for core hours. This allows users to always get a session, even if it takes extra time during off hours. The administrator can even configure 0 for the standby CPU/memory/GPU during the off hours and allow session resource provisioning to be fulfilled fully on demand.

../../_images/autoscale_schedule_tab.png

Click on Add Schedule to create a new schedule for the AutoScaling configuration. On the Add Schedule screen there are fields for the days of the week this schedule should be active, a start time, an end time, and a timezone. Kasm will convert the time from the Kasm database to match the timezone specified when determining if a schedule is active or not. Multiple schedules can be defined for each AutoScale configuration.

../../_images/add_new_autoscale_schedule_config.png

Example AutoScale Schedules

Here are a few examples of how AutoScale schedules can be leveraged.

Basic Example (Traditional Business Schedule)

Consider a business whose core hours are 8 a.m. to 5 p.m., Monday through Friday. The administrator could configure Kasm AutoScaling to turn on at 7 a.m., so all compute is available well before the 8 a.m. start time, and to turn off at 6 p.m. each evening. This allows the administrator to save overnight compute costs when there is no business need for that compute. Defining this schedule is straightforward; the Add Schedule screen would look similar to this:

../../_images/basic_autoscale_schedule.png

Basic AutoScale Schedule

Multiple Continuous Days Schedule

A more complicated example would be if an administrator wanted compute resources to come online at 3 p.m. Wednesday and stay on until 11 a.m. Friday. For this the administrator would create three separate schedules on the autoscale config, which together provide the desired functionality. For the first schedule, the administrator would select only Wednesday, with a start time of 3:00 p.m. and an end time of 11:59 p.m.

../../_images/continuous_autoscale_schedule_wednesday.png

Wednesday Start AutoScale Schedule

For the second schedule, the administrator would select Thursday, with a start time of 12:00 a.m. and an end time of 11:59 p.m.

../../_images/continuous_autoscale_schedule_thursday.png

Thursday All Day AutoScale Schedule

For the third schedule, the administrator would select Friday, with a start time of 12:00 a.m. and an end time of 11:00 a.m.

../../_images/continuous_autoscale_schedule_friday.png

Friday End AutoScale Schedule

../../_images/continuous_autoscale_schedules.png

Set of Schedules For Multi-Day AutoScale Scheduling

Overnight schedule

When the start time is later than the end time, the result is an overnight schedule. It becomes active at the start time on the day(s) of the week selected and becomes inactive at the end time on the following day. In the following example the AutoScale configuration becomes active on Tuesday and Thursday at 9:00 p.m. and deactivates on Wednesday and Friday at 5:00 a.m.

../../_images/autoscale_schedule_overnight.png

Overnight AutoScale Schedule
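The overnight behavior described above can be sketched in a few lines. This is a hypothetical helper, not Kasm's internal code; `days` holds the selected weekday numbers (Monday = 0), and timezone conversion is assumed to have already happened.

```python
from datetime import datetime, time

def schedule_active(now: datetime, days: set, start: time, end: time) -> bool:
    """Return True if `now` falls inside the schedule.

    When start > end the window is treated as overnight: it opens at
    `start` on a selected day and closes at `end` on the following day.
    """
    weekday = now.weekday()
    t = now.time()
    if start <= end:
        # Same-day window, e.g. 8:00 a.m. - 5:00 p.m.
        return weekday in days and start <= t <= end
    # Overnight window: active late on a selected day...
    if weekday in days and t >= start:
        return True
    # ...or early on the day after a selected day.
    prev_day = (weekday - 1) % 7
    return prev_day in days and t <= end

# Tuesday (1) and Thursday (3), 9:00 p.m. until 5:00 a.m. the next day.
days = {1, 3}
start, end = time(21, 0), time(5, 0)
print(schedule_active(datetime(2024, 1, 2, 22, 0), days, start, end))  # Tuesday, 10 p.m.
print(schedule_active(datetime(2024, 1, 3, 3, 0), days, start, end))   # Wednesday, 3 a.m.
```

The key design point is that the selected days mark when the window *opens*; the spillover into the next morning is derived, which matches the description of the overnight schedule above.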

Multiple Time Periods Within a Day

By using multiple schedules it is possible to configure multiple active times within a single day. Here the morning shift is configured from 8:00 a.m. until 12:00 p.m. on Monday, Wednesday, and Friday.

../../_images/autoscale_schedule_split_day_morning.png

Multiple Time Periods Morning AutoScale Schedule

The afternoon shift is configured from 2:00 p.m. until 6:00 p.m. on the same days.

../../_images/autoscale_schedule_split_day_afternoon.png

Multiple Time Periods Afternoon AutoScale Schedule

../../_images/autoscale_schedule_split_day.png

List of AutoScale Schedules for Multiple Time Periods Within a Day

AutoScale Config (Server Pool)

This section covers the AutoScale configuration step of the Wizard for Pools of type Server.

../../_images/create_autoscale_server.webp

Create Autoscale Server

General AutoScaling Settings

Name

Description

Name

Name for the AutoScale config.

AutoScale Type

The type of AutoScale config this is, either a Docker Agent or a Server.

Pool

Which pool this AutoScale config is attached to.

Enabled

Whether to enable this config or not.

Aggressive Scaling

When enabled, the system may take more expedient measures to provision raw compute resources for on-demand session requests. See Aggressive Scaling for more details.

Deployment Zone

Which zone this AutoScale config applies to.

Require Checkin

When enabled, the system will wait to receive a callback from the newly created server to set its status to Running. The callback may come from the Kasm Windows Service or by calling the set_server_status API. See Require Checkin for more details.

Kasm Desktop Service Installed

When enabled, the system will assume the Kasm Desktop Service is installed, enabling workflows that require the agent.

Connection Type

Whether to use KasmVNC, RDP, VNC, or SSH.

Connection Port

Which port to connect on.

Connection Credential Type

Which type of credentials are used for this server. Options are Static Credentials, Dynamic User Accounts, SSO User Accounts, and Authenticate with Smartcard.

SSO Domain

The domain to use for SSO User Accounts Connection Credential Type. A blank entry will pass the username to Windows exactly as it is in Kasm. A value of localhost will instruct Kasm to drop any domain part from the Kasm username i.e. john_smith@example.com becomes john_smith when passed to Windows.

Connection Username

Which username to connect to the server with. Only visible with Static Credentials Connection Credential Type.

Connection Password

Which password to connect to the server with. Only visible with Static Credentials Connection Credential Type.

Use User SSH Key

Whether to use the SSH keys assigned to a Kasm user. (Only applicable to SSH connection type)

Connection Private Key

The private key to authenticate against the SSH server with. (Only applicable to SSH connection type)

Connection Private Key Passphrase

The passphrase encrypting the specified private key. (Only applicable to SSH connection type)

Connection Info (JSON)

Any extra connection info.

Create Active Directory Computer Record

Whether to create an active directory record or not.

Reusable

Whether the connection is reusable.

Minimum Available Sessions

The minimum number of available sessions that should be kept free. More resources are auto scaled if availability falls under this threshold.

Max Simultaneous Sessions Per Server

The maximum number of sessions allowed per server.

Max Simultaneous Users

For RDP/SSH servers, the number of concurrently connected users per server.

Max Simultaneous Sessions Per Server

For RDP and SSH servers, the Max Simultaneous Sessions Per Server and Max Simultaneous Users settings work together. SSH and RDP support multiple sessions per user per server; for RDP this is typically found in RemoteApp use cases. For SSH and RemoteApp servers you may want one server to handle just 2 concurrent users but up to 10 concurrent sessions. These settings are used for two purposes: auto-scaling and deciding where to assign new sessions.

For auto scaling, Kasm periodically checks that it has resource availability to create Minimum Available Sessions on the existing servers in the pool. It checks how many sessions and how many new users each server can handle, and uses the lower of those two values to determine how many new sessions each existing server can likely handle. If the total from all servers is less than the Minimum Available Sessions, new servers are created until the desired capacity is reached.
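The capacity check described above can be sketched as follows. This is a simplified model with hypothetical data shapes; the real scaler tracks additional state.

```python
def servers_to_add(servers, max_sessions, max_users, min_available):
    """Estimate how many new servers are needed.

    `servers` is a list of (current_sessions, current_users) tuples.
    Each server's remaining capacity is the lower of its free session
    slots and its free user slots, per the rule described above.
    """
    free = 0
    for sessions, users in servers:
        free += min(max_sessions - sessions, max_users - users)
    if free >= min_available:
        return 0
    # Each new, empty server contributes min(max_sessions, max_users) slots.
    per_new_server = min(max_sessions, max_users)
    shortfall = min_available - free
    return -(-shortfall // per_new_server)  # ceiling division

# Two servers, each allowing 10 sessions and 2 users; keep 5 slots free.
print(servers_to_add([(9, 2), (8, 1)], 10, 2, 5))
```

In the usage example, the first server has no free user slots (so contributes 0), the second contributes 1, leaving a shortfall of 4 slots that two new servers (2 slots each) would cover.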

Kasm will not allow a single user to provision multiple RDP desktops per server. Only RemoteApps are allowed to be assigned to the same server for a single user. Kasm will allow one desktop and multiple RemoteApps on the same server for a user.

Require Checkin

The Require Checkin flag can be used to ensure the system waits until a newly created server is fully ready before allowing a user's session to connect. Administrators may use the autoscale startup script to ensure the desired configurations and services are properly initialized (e.g. RDP is enabled and running).

If using Windows sessions, administrators may use the Kasm Desktop Service. All systems may use the set_server_status API.

POST /api/set_server_status?token={checkin_jwt}

Example request:

{
    "status": "running",
    "status_message": "Initialization Complete",
    "status_progress": "100"
}
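As a sketch, the checkin call can be issued from a startup script with only the Python standard library. The host name below is a placeholder; the endpoint path and payload mirror the example above.

```python
import json
import urllib.request

def build_checkin_request(base_url: str, token: str, progress: int,
                          message: str) -> urllib.request.Request:
    """Build (but do not send) a set_server_status checkin request."""
    payload = {
        "status": "running",
        "status_message": message,
        "status_progress": str(progress),
    }
    return urllib.request.Request(
        f"{base_url}/api/set_server_status?token={token}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_checkin_request("https://kasm.example.com", "checkin_jwt",
                            100, "Initialization Complete")
# urllib.request.urlopen(req)  # send once the deployment is reachable
print(req.full_url)
```

Splitting request construction from sending makes the script easy to dry-run; in a real startup script the `urlopen` line would be uncommented once the token is injected.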

Additional examples are available in the workspaces-autoscale-startup-scripts repo.

Create Active Directory Computer Record

As covered in the above table, the AutoScaling configuration for a Server Pool allows the administrator to automatically join new VMs to an Active Directory domain. If the checkbox for Create Active Directory Computer Record is checked, two additional fields will be shown: LDAP Config and Active Directory Computer OU DN.

../../_images/join_to_domain.webp

Join to Domain

Kasm Workspaces creates the AD Computer record, but it does not join the computer to the domain; that must be done on the system itself. When Kasm creates the AD record, a temporary randomly generated password is created which can be used on the target VM to join it to Active Directory. Kasm can inject this password into a PowerShell script on the VM. That PowerShell script needs to be executed when the VM starts up in order to complete the process of adding the VM to Active Directory. Below is an example PowerShell script; the special tags {ad_join_credential} and {domain} will be replaced by Kasm with the randomly generated password and domain name respectively. This script is placed in the VM Provider configuration in the Startup Script field.

$joinCred = New-Object pscredential -ArgumentList ([pscustomobject]@{{ UserName = $null; Password = (ConvertTo-SecureString -String '{ad_join_credential}' -AsPlainText -Force)[0] }})
Add-Computer -Domain "{domain}" -Options UnsecuredJoin,PasswordPass -Credential $joinCred -Force -Restart

Note

Some cloud providers will automatically execute this startup script when the VM boots, making it easy to get auto AD joining working end-to-end. Other cloud providers, such as Azure, do not automatically execute this script. See the details of each VM Config Provider.

LDAP Config The LDAP Config drop-down allows the administrator to select which LDAP configuration to use to add the computer record to Active Directory. The LDAP configuration does not have to be enabled; this allows the administrator to use one LDAP configuration for authentication and another for AD Computer record creation. If using LDAP for end-user authentication to Kasm Workspaces, the administrator can also configure single sign-on to the Windows systems.

Active Directory Computer OU DN This is the DN of the Active Directory Computer OU that the administrator would like the computer records placed in.

Note

The LDAP config must be using an SSL secured LDAPS connection or the LDAP server will not permit Kasm to create the AD Computer record.

Single Sign-On to Windows Systems via LDAP

When users log in to Kasm via LDAP Authentication, they are able to create sessions to Windows systems that are joined to the same Active Directory domain and are configured for SSO credential pass-through. The above table covering the AutoScale configuration fields for Server Pools includes the Connection Credential Type field; select the value SSO User Accounts in the AutoScale configuration for the Server Pool. This requires that all users accessing servers in this Server Pool authenticate to Kasm using LDAP authentication. See our Windows Deployment Guide video for a walkthrough of this topic and more.

Authentication Options When Connecting to an SSH Server

Kasm Workspaces has the ability to connect to arbitrary SSH servers. It can use SSH key or password authentication. There are a few combinations of options on the Autoscale Config edit screen that can be selected.

The Autoscale Config can be configured to use:

  • Username/password authentication

  • Username and select the Use User SSH Key to send the username and private key stored with the Kasm user.

  • Username and a pasted-in private key; optionally include a passphrase if the key has one.

Select the value SSO User Accounts in the Connection Credential Type field and check the Use User SSH Key checkbox, and Kasm Workspaces will send the user's Kasm Workspaces username along with the user's Kasm Workspaces SSH key, allowing easy multi-user support on SSH servers.

There are some restrictions on the SSH keys supported, enforced by the connection proxy library used for the SSH server connections: the SSH key must be an ssh-rsa key in PKCS1 format (i.e. the header of the key starts with -----BEGIN RSA PRIVATE KEY-----) with a key size of 2048. In addition, some newer Linux distributions such as Ubuntu 22.04 LTS will not accept ssh-rsa keys by default. To work around this, edit the /etc/ssh/sshd_config file on the target server (for example via the VM provider's startup script) and add these two lines to the config.

HostKeyAlgorithms +ssh-rsa
PubkeyAcceptedKeyTypes +ssh-rsa
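If the VM provider startup script runs Python, the two options can be appended idempotently with a small helper. This is a sketch; `/etc/ssh/sshd_config` is the standard OpenSSH path, and sshd must still be restarted afterwards for the change to take effect.

```python
def ensure_lines(config_text: str, lines: list) -> str:
    """Append each option to the sshd_config text only if it is missing."""
    existing = {line.strip() for line in config_text.splitlines()}
    missing = [line for line in lines if line not in existing]
    if not missing:
        return config_text  # already configured; leave the file untouched
    if config_text and not config_text.endswith("\n"):
        config_text += "\n"
    return config_text + "\n".join(missing) + "\n"

wanted = ["HostKeyAlgorithms +ssh-rsa", "PubkeyAcceptedKeyTypes +ssh-rsa"]
print(ensure_lines("Port 22\n", wanted))

# In a real startup script (run as root), roughly:
# path = "/etc/ssh/sshd_config"
# with open(path) as f:
#     text = f.read()
# with open(path, "w") as f:
#     f.write(ensure_lines(text, wanted))
# then restart sshd, e.g. via systemctl.
```

Making the helper idempotent matters because startup scripts may run on every boot; appending blindly would duplicate the options each time.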

AutoScale Config (Docker Agent Pool)

This section covers the AutoScale configuration step of the Wizard for Pools of type Docker Agent.

../../_images/create_autoscale_agent.webp

Create Autoscale Agent

General AutoScaling Settings

Name

Description

Name

Name for the AutoScale config.

AutoScale Type

The type of AutoScale config this is, either a Docker Agent or a Server.

Pool

Which pool this AutoScale config is attached to.

Enabled

Whether to enable this config or not.

Aggressive Scaling

When enabled, the system may take more expedient measures to provision raw compute resources for on-demand session requests. See Aggressive Scaling for more details.

Deployment Zone

Which zone this AutoScale config applies to.

Standby Cores

The number of standby cores that the system should try to keep “always available” at any given time, in addition to any that are needed to satisfy the Staging Config requirements. If the number of available cores falls below this number, more Agents are created. If the number of available cores rises above this number, Agents are deleted as long as it won't result in the number of available cores falling below this number. A value of 0 indicates no additional standby compute is created; the AutoScaler will only provision enough compute to satisfy the Staging Config requirements.

Standby GPUs

The number of standby GPUs that the system should try to keep “always available” at any given time, in addition to any that are needed to satisfy the Staging Config requirements. If the number of available GPUs falls below this number, more Agents are created. If the number of available GPUs rises above this number, Agents are deleted as long as it won't result in the number of available GPUs falling below this number. A value of 0 indicates no additional standby compute is created; the AutoScaler will only provision enough compute to satisfy the Staging Config requirements.

Standby Memory (MB)

The amount of memory (in MB) that the system should try to keep “always available” at any given time, in addition to any that is needed to satisfy the Staging Config requirements. If the amount of available memory falls below this number, more Agents are created. If the amount of available memory rises above this number, Agents are deleted as long as it won't result in the available amount falling below this number. A value of 0 indicates no additional standby compute is created; the AutoScaler will only provision enough compute to satisfy the Staging Config requirements.

Downscale Backoff (Seconds)

This setting prevents the system from downscaling (deleting Agents) for this amount of time (in seconds). This is useful for preventing the system from thrashing up and down when the available resources hover around a level that would typically trigger autoscaling.

Agent Cores Override

When an Agent is created, the compute resource (e.g. AWS EC2 instance / Digital Ocean Droplet) will have a set amount of CPU and RAM as defined by the cloud provider's instance type. This setting should typically be set to match the instance type, but can be set to a preferred value.

Agent GPUs Override

When an Agent is created, the compute resource (e.g. AWS EC2 instance / Digital Ocean Droplet) will have a set number of GPUs as defined by the cloud provider's instance type. This setting should typically be set to match the instance type, but can be set to a higher number to allow oversubscribing.

Agent Memory Override (GB)

When an Agent is created, the compute resource (e.g. AWS EC2 instance / Digital Ocean Droplet) will have a set amount of CPU and RAM as defined by the cloud provider's instance type. This setting should typically be set to match the instance type, but can be set to a preferred value.

NGINX Cert

The PEM encoded SSL certificate to use for the kasm_proxy role on the created Agents. This cert should be a wildcard for the Base Domain Name (e.g. *.agents.kasm.example.com).

NGINX Key

The PEM encoded SSL Key to use for the kasm_proxy role on the created Agents.

Register DNS

If enabled, the Agent’s IP will be registered in DNS.

Base Domain Name

Define a base name for the automatic DNS registration for the Agent. The system will create a full name using <ID>.<Base Domain Name>. If the Base Domain Name is “agents.kasm.example.com”, the full DNS name generated will be <ID>.agents.kasm.example.com (e.g. 123abcd.agents.kasm.example.com). This Base Domain Name must already be a registered DNS zone within the cloud provider's DNS system.
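Taken together, the standby settings above drive a simple per-pass reconciliation decision, roughly like the sketch below. It covers cores and memory only; the real autoscaler also accounts for GPUs, the Staging Config requirements, and the Downscale Backoff.

```python
def scale_action(avail_cores, avail_mem_mb, standby_cores, standby_mem_mb,
                 agent_cores, agent_mem_mb):
    """Return 'up', 'down', or 'hold' for one reconciliation pass."""
    # Scale up if any standby threshold is violated.
    if avail_cores < standby_cores or avail_mem_mb < standby_mem_mb:
        return "up"
    # Scale down only if removing one agent keeps BOTH thresholds satisfied,
    # mirroring the "as long as it won't fall below" rule above.
    if (avail_cores - agent_cores >= standby_cores
            and avail_mem_mb - agent_mem_mb >= standby_mem_mb):
        return "down"
    return "hold"

# 2 cores free but 4 required as standby: the pool must scale up.
print(scale_action(2, 8192, 4, 4096, 4, 16384))
```

Note the asymmetry: scaling up is triggered by either resource being short, while scaling down requires both resources to remain above their thresholds after the removal, which is what prevents a deletion from immediately triggering a new provision.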

Aggressive Scaling

Starting in Workspaces 1.14.0, administrators can choose to leverage fully on-demand compute resources for container and server/server pool based sessions. When a user requests a session and no compute is available, the system will queue the request, provision the resources according to the autoscale configs, then fulfill the request. This prevents the user from receiving a No Resources error; instead, the user is presented with a status indicator while the request is fulfilled, which may take several minutes. This can be used alongside the existing standby and staging mechanisms to give the administrator more options to balance compute costs with session delivery times.

Enabling Aggressive Scaling in the Autoscale Config instructs the system to make more opportunistic choices when requesting resources, with the goal of reducing the user's wait time. This mode may result in compute resources being utilized in less cost-efficient ways, since users may end up on separate machines instead of pooled together depending on the circumstances. This mode may also result in the system scaling slightly beyond the max instances defined on the associated VM Provider due to the potentially concurrent nature of resource provisioning.

../../_images/queued_session.png

A Requested Session in Queue

VM Provider Configs

Note

The Auto-Scaling feature is ONLY available in Enterprise licensing. For more information on licensing please visit: Licensing.

../../_images/vm_create_new.webp

Create New Provider

VM Provider Settings

Name

Description

VM Provider Configs

Select an existing config or create a new config. If selecting an existing config and changing any of the details, those details will be changed for anything using the same VM Provider config.

Provider

Select a provider from AWS, Azure, Digital Ocean, Google Cloud or Oracle Cloud. If selecting an existing provider this will be selected automatically.

AWS Settings

A number of settings are required to be defined to use this functionality.

../../_images/vm_aws.webp

AWS Settings

AWS VM Provider Settings

Name

Description

Name

A name to use to identify the config.

AWS Access Key ID

The AWS Access Key used for the AWS API.

AWS Secret Access Key

The AWS Secret Access Key used for the AWS API.

AWS: Region

The AWS Region the EC2 nodes should be provisioned in (e.g. us-east-1).

AWS: EC2 AMI ID

The AMI ID to use for the provisioned EC2 nodes. This should be an OS that is supported by the Kasm installer.

AWS: EC2 Instance Type

The EC2 Instance Type (e.g t3.micro). Note the Cores and Memory override settings don’t necessarily have to match the instance configurations. This is to allow for over provisioning.

AWS: Max EC2 Nodes

The maximum number of EC2 nodes to provision regardless of the need for available free slots.

AWS: EC2 Security Group IDs

A JSON list containing security group IDs to assign to the EC2 nodes, e.g. ["sg-065ae66f2d", "sg-02522kdkas"].

AWS: EC2 Subnet ID

The subnet ID to place the EC2 nodes in.

AWS: EC2 EBS Volume Size (GB)

The size of the root EBS Volume for the EC2 nodes.

AWS: EC2 EBS Volume Type

The EBS Volume Type (e.g gp2)

AWS: EC2 IAM

The IAM role to assign to the EC2 nodes. Administrators may want to assign CloudWatch IAM access.

AWS: EC2 Custom Tags

A JSON dictionary of custom tags to assign to auto-scaled Agent EC2 nodes, e.g. {"foo":"bar", "bin":"baz"}.

AWS: EC2 Startup Script

When the EC2 nodes are provisioned, this script is executed. The script is responsible for installing and configuring the Kasm Agent.

Retrieve Windows VM Password from AWS

When provisioning an AWS Windows VM, Kasm can retrieve the password generated by AWS and store it in the Server configuration record created during the autoscale provision. This only happens if the Connection Password field on the attached Autoscale config is blank; when populated, Kasm will use the defined value instead of what is returned from AWS. The administrator may want to leave this field blank and disable retrieving the password from AWS if they wish the Kasm user to be presented with a login screen to manually enter credentials upon connecting to the Windows Workspace. NOTE: This setting only affects Windows (RDP connection type) AWS instances.

SSH Keys

The SSH key pair to assign to the EC2 node.

AWS Config Override (JSON)

Custom configuration may be added to the provision request for advanced use cases. Instance configuration is overridden in the ‘instance_config’ configuration block e.g. {"instance_config":{"EbsOptimized": true}} See EC2 Documentation for available options.

Azure Settings

A number of settings are required to be defined to use this functionality. The Azure settings appear in the Pool configuration when the feature is licensed.

../../_images/vm_azure.webp

Azure Settings

Register Azure app

An API credential must be created for Kasm to use to interface with Azure. Azure calls these registered apps, and this example walks through registering one along with the required permissions.

  1. Register an app by going to the Azure Active Directory service in the Azure portal.

../../_images/azure_active_directory.png

Azure Active Directory

  2. From the Add dropdown select App Registration

../../_images/app_registration.png

App Registration

  3. Give this app a human-readable name, such as Kasm Workspaces

../../_images/app_registration_name.png

App Registration

  4. Go to Resource Groups and select the Resource Group that Kasm will autoscale in.

../../_images/azure_resource_groups.png

Azure Resource Groups

  5. Select Access Control (IAM)

../../_images/resource_group_access_control.png

Access Control

  6. From the Add drop down select Add role assignment

../../_images/add_role_assignment.png

Add Role Assignment

  7. The app created in Azure will need two roles. First select the Virtual Machine Contributor role, then on the next page select the app by typing in its name, e.g. Kasm Workspaces

../../_images/select_virtual_machine_contributor.png

Virtual Machine Contributor

../../_images/virtual_machine_contributor_assign_app.png

Assign Contributor

  8. Go through this process again to add the Network Contributor and the DNS Zone Contributor roles

../../_images/assign_network_contributor.png

Network Contributor

../../_images/assign_dns_zone_contributor.png

DNS Zone Contributor

Azure VM Settings

A number of settings are required to be defined to use this functionality. The Azure settings appear in the Pool configuration when the feature is licensed.

../../_images/vm_azure.webp

Azure VM

Azure VM Provider Settings

Name

Description

Name

A name to use to identify the config.

Subscription ID

The Subscription ID for the Azure Account. This can be found in the Azure portal by searching for Subscriptions in the search bar in Azure home then selecting the subscription to use. (e.g 00000000-0000-0000-0000-000000000000)

Resource Group

The Resource Group the DNS Zone and/or Virtual Machines belong to (e.g dev)

Tenant ID

The Tenant ID for the Azure Account. This can be found in the Azure portal by going to Azure Active Directory using the search bar in Azure home. (e.g 00000000-0000-0000-0000-000000000000)

Client ID

The Client ID credential used to auth to the Azure Account. Client ID can be obtained by registering an application within Azure Active Directory. (e.g 00000000-0000-0000-0000-000000000000)

Client Secret

The Client Secret credential created with the registered application in Azure Active Directory. (e.g. abc123)

Azure Authority

Which Azure authority to use; there are four: Azure Public Cloud, Azure Government, Azure China, and Azure Germany.

Region

The Azure region where the Agents will be provisioned. (e.g eastus)

Max Instances

The maximum number of Azure VMs to provision regardless of the need for additional resources.

VM Size

The size configuration of the Azure VM to provision (e.g Standard_D2s_v3)

OS Disk Type

The disk type to use for the Azure VM. (e.g Premium_LRS)

OS Disk Size (GB)

The size (in GB) of the boot volume to assign the compute instance.

OS Image Reference (JSON)

The OS Image Reference configuration for the Azure VMs

(e.g {"publisher":"canonical","offer":"0001-com-ubuntu-server-focal","sku":"20_04-lts-gen2","version":"latest"} or

{"id":"/subscriptions/000.../resourceGroups/dev/providers/Microsoft.Compute/galleries/development-gallery/images/ubuntu-20.04-custom"}

Image is Windows

Whether the VM being created is a Windows VM.

Network Security Group

The network security group to attach to the VM

(e.g /subscriptions/000.../resourcegroups/dev/providers/Microsoft.Network/networkSecurityGroups/example-nsg)

Subnet

The subnet to attach the VM to

(e.g /subscriptions/000.../resourceGroups/dev/providers/Microsoft.Network/virtualNetworks/development-vnet/subnets/default)

Assign Public IP

If checked, the VM will be assigned a public IP. If no public IP is assigned, the VM must be attached to a standard load balancer, or the subnet must have a NAT Gateway or user-defined route (UDR). If a public IP is used, the subnet must not also include a NAT Gateway. Reference

Tags (JSON)

A JSON dictionary of custom tags to assign to the VMs (e.g {"foo":"bar", "bin": "baz"} )

OS Username

The login username to assign to the new VM (e.g testuser)

OS Password

The login password to assign to the new VM. Note: Password authentication is disabled for SSH by default

SSH Public Key

The SSH public key to install on the VM for the defined user: (e.g ssh-rsa AAAAAAA....)

Agent Startup Script

When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.

Config Override (JSON)

Custom configuration may be added to the provision request for advanced use cases. The emitted JSON structure is visible by clicking JSON View when inspecting the VM in the Azure console. The keys in this configuration can be used to update top-level keys within the emitted JSON config (e.g. {"location":"eastus"}). Nested items can be updated by using dot notation in the key (e.g. {"hardware_profile.vm_size":"Standard_D4s_v3"}). Existing array elements can be updated by specifying the index in the dot notation (e.g. {"os_profile.linux_configuration.ssh.public_keys.0.path":"/home/ubuntu/.ssh/authorized_keys"}).
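The dot-notation semantics described above behave roughly like the sketch below. This models the documented behavior, not Kasm's actual implementation; numeric path segments index into arrays, all other segments index into dictionaries.

```python
def apply_overrides(config: dict, overrides: dict) -> dict:
    """Apply {"a.b.0.c": value} style overrides to a nested structure."""
    for dotted, value in overrides.items():
        parts = dotted.split(".")
        node = config
        # Walk down to the parent of the target key.
        for part in parts[:-1]:
            node = node[int(part)] if part.isdigit() else node[part]
        last = parts[-1]
        if last.isdigit():
            node[int(last)] = value  # array element by index
        else:
            node[last] = value       # dictionary key
    return config

# Override the VM size in an Azure-style nested config.
cfg = {"hardware_profile": {"vm_size": "Standard_D2s_v3"}}
apply_overrides(cfg, {"hardware_profile.vm_size": "Standard_D4s_v3"})
print(cfg["hardware_profile"]["vm_size"])
```

One consequence of these semantics, visible in the sketch, is that overrides can only update existing array elements by index; they cannot append new ones.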

Digital Ocean Settings

A number of settings are required to be defined to use this functionality. The Digital Ocean settings appear in the Pool configuration when the feature is licensed.

Warning

Please review Tag Does Not Exist Error for known issues and workarounds

../../_images/vm_do.webp

Digital Ocean VM

Digital Ocean VM Provider Settings

Name

Description

Name

A name to use to identify the config.

Token

The API token used to authenticate with this VM provider

Max Droplets

The maximum number of Digital Ocean droplets to provision, regardless of whether more are needed to fulfill user demand.

Region

The Digital Ocean Region where droplets should be provisioned. (e.g nyc1)

Image

The Image to use when creating droplets. (e.g docker-18-04)

Droplet Size

The droplet size configuration (e.g c-2)

Tags

The tag(s) to assign to the droplet when it is created. This should be a comma-separated list of tags.

SSH Key Name

The SSH Key to assign to the newly created droplets. The SSH Key must already exist in the Digital Ocean Account.

Firewall Name

The name of the Firewall to apply to the newly created droplets. This Firewall must already exist in the Digital Ocean Account.

Startup Script

When droplets are provisioned, this script is executed. The script is responsible for installing and configuring the Kasm Agent.

Tag Does Not Exist Error

Upon first testing AutoScaling with Digital Ocean, an error similar to the following may be presented:

 Future generated an exception: tag zone:abc123 does not exist
 traceback:
 ..
 File "digitalocean/Firewall.py", line 225, in add_tags
 File "digitalocean/baseapi.py", line 196, in get_data
 digitalocean.DataReadError: tag zone:abc123 does not exist
 process: manager_api_server

This error occurs when Kasm Workspaces tries to assign a unique tag based on the Zone Id to the Digital Ocean Firewall. If that tag does not already exist in Digital Ocean, the operation will fail and present the error. To work around the issue, manually create a tag matching the one specified in the error (e.g zone:abc123). This can be done via the API, or by simply creating the tag on a temporary Droplet in the Digital Ocean console.
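For example, the tag can be created with a direct API call, with no temporary Droplet required (a sketch; the token and tag name below are placeholders for your own values):

```shell
# Placeholder values: substitute your own API token and the tag name
# reported in the error message.
DO_TOKEN="your-api-token"
TAG_PAYLOAD='{"name":"zone:abc123"}'

# Create the tag directly via the Digital Ocean Tags API.
curl -s -X POST "https://api.digitalocean.com/v2/tags" \
  -H "Authorization: Bearer ${DO_TOKEN}" \
  -H "Content-Type: application/json" \
  -d "${TAG_PAYLOAD}"
```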

Google Cloud (GCP) Settings

A number of settings are required to be defined to use this functionality. The GCP settings appear in the Pool configuration when the feature is licensed.

../../_images/vm_google.webp

Google Cloud VM

GCP VM Provider Settings

Name

Description

Name

A name to use to identify the config.

GCP Credentials

The JSON formatted credentials for the service account used to authenticate with GCP: Ref

Max Instances

The maximum number of GCP compute instances to provision regardless of the need for additional resources.

Project ID

The Google Cloud Project ID (e.g pensive-voice-547511)

Region

The region to provision the new compute instances. (e.g us-east4)

Zone

The zone the new compute instance will be provisioned in (e.g us-east4-b)

Machine Type

The Machine type for the GCP compute instances. (e.g e2-standard-2)

Machine Image

The Machine Image to use for the new compute instance. (e.g projects/ubuntu-os-cloud/global/images/ubuntu-2004-focal-v20211212)

Boot Volume GB

The size (in GB) of the boot volume to assign the compute instance.

Disk Type

The disk type for the new instance. (e.g pd-ssd)

Customer Managed Encryption Key (CMEK)

The optional path to the Customer Managed Encryption Key (CMEK) (e.g projects/pensive-voice-547511/locations/global/keyRings/my-keyring/cryptoKeys/my-key)

Network

The path of the Network to place the new instance. (e.g projects/pensive-voice-547511/global/networks/default)

Sub Network

The path of the Sub Network to place the new instance. (e.g projects/pensive-voice-547511/regions/us-east4/subnetworks/default)

Public IP

If checked, a public IP will be assigned to the new instances

Network Tags (JSON)

A JSON list of the Network Tags to assign the new instance. (e.g ["https-server", "foo", "bar"])

Custom Labels (JSON)

A JSON dictionary of Custom Labels to assign the new instance (e.g {"foo": "bar", "bin":"baz"})

Metadata (JSON)

A JSON list of metadata objects to add to the instance. (e.g [{"key": "ssh-keys", "value":"user1:ssh-rsa <key contents> user1"}]) Reference

Service Account (JSON)

A JSON dictionary representing a service account to attach to the instance. (e.g {"email": "service-account@example.com", "scopes":["https://www.googleapis.com/auth/cloud-platform"]}) Reference

Guest Accelerators (JSON)

A JSON list representing the guest accelerators (e.g. GPUs) to attach to the instance. (e.g [{"acceleratorType":"projects/<project-id>/zones/<zone>/acceleratorTypes/nvidia-tesla-t4","acceleratorCount":1}]) Reference

GCP Config Override (JSON)

A JSON dictionary that can be used to customize attributes of the VM request. The only attributes that cannot be overridden are name and labels (e.g {"shieldedInstanceConfig":{"enableIntegrityMonitoring":true,"enableSecureBoot":true,"enableVtpm":true}}) Reference

VM Installed OS Type

The family of the OS installed on the VM (e.g. linux or windows).

Startup Script Type

The type of startup script to execute. This determines the key used when creating the GCP startup script metadata. Windows Startup Scripts Linux Startup Scripts

Startup Script

When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.

Note on Updating Existing Google Cloud Providers (GCP)

Please review the settings for all existing Google Cloud (GCP) providers. Two new fields were added: VM Installed OS Type, which defaults to Linux, and Startup Script Type, which defaults to Bash Script. If an existing provider is configured with a Windows VM, it will not successfully launch the startup script without changing these values.

Oracle Cloud (OCI) Settings

A number of settings are required to be defined to use this functionality. The OCI settings appear in the Pool configuration when the feature is licensed.

../../_images/vm_oracle.webp

OCI VM

OCI VM Provider Settings

Name

Description

Name

A name to use to identify the config.

User OCID

The OCID of the user to authenticate with the OCI API. (e.g ocid1.user.oc1..xyz)

Public Key Fingerprint

The public key fingerprint of the authenticated API user. (e.g xx:yy:zz:11:22:33)

Private Key

The private key (PEM format) of the authenticated API user.

Region

The OCI Region name. (e.g us-ashburn-1)

Tenancy OCID

The Tenancy OCID for the OCI account. (e.g ocid1.tenancy.oc1..xyz)

Compartment OCID

The Compartment OCID where the auto-scaled agents will be placed. (e.g ocid1.compartment.oc1..xyz)

Network Security Group OCIDs (JSON)

A JSON list of Security Group OCIDs that will be assigned to the auto-scaled agents. (e.g ["ocid1.networksecuritygroup.oc1.iad.xxx","ocid1.networksecuritygroup.oc1.iad.yyy"])

Max Instances

The maximum number of OCI compute instances to provision regardless of the need for available free slots.

Availability Domains (JSON)

A JSON list of availability domains where the OCI compute instances may be placed. (e.g ["BEol:US-ASHBURN-AD-1", "BEol:US-ASHBURN-AD-2"])

Image OCID

The OCID of the Image to use when creating the compute instances. (e.g ocid1.image.oc1.iad.xyz)

Shape

The name of the shape used for the created compute instances. (e.g VM.Standard.E4.Flex)

Flex CPUs

The number of OCPUs to assign the compute instance. This is only applicable when a Flex shape is used.

Burstable Base CPU Utilization

The baseline percentage of a CPU core that can be used continuously on a burstable instance (select 100% to use a non-burstable instance). Reference.

Flex Memory GB

The amount of memory (in GB) to assign the compute instance. This is only applicable when a Flex shape is used.

Boot Volume GB

The size (in GB) of the boot volume to assign the compute instance.

Boot Volume VPUs Per GB

The Volume Performance Units (VPUs) to assign to the boot volume. Values between 10 and 120 in multiples of 10 are acceptable. 10 is the default and represents the Balanced profile. The higher the VPUs, the higher the volume performance and cost. Reference.

Custom Tags (JSON)

A JSON dictionary of custom freeform tags to assign to the auto-scaled instances. (e.g {"foo":"bar", "bin":"baz"})

Subnet OCID

The OCID of the Subnet where the auto-scaled instances will be placed. (e.g ocid1.subnet.oc1.iad.xyz)

SSH Public Key

The SSH public key to insert into the compute instances. (e.g ssh-rsa XYABC)

Startup Script

When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent.

OCI Config Override

A JSON dictionary that can be used to customize attributes of the VM request. An OCI Model can be specified with the “OCI_MODEL_NAME” key. Reference: OCI Python Docs and Kasm Examples.

You can find the OCI Image ID for the desired operating system version in the desired region by navigating the OCI Image page.

OCI Config Override Examples

Below are some OCI autoscale configurations that utilize the OCI Config Override.

Disable Legacy Instance Metadata Service

Disables instance metadata service v2 for additional security.

{
    "launch_instance_details": {
        "instance_options": {
            "OCI_MODEL_NAME": "InstanceOptions",
            "are_legacy_imds_endpoints_disabled": true
        }
    }
}
Enable Instance Agent Plugins

A list of available plugins can be retrieved by navigating to an existing instance’s “Oracle Cloud Agent” config page. This example enables the “Vulnerability Scanning” plugin.

{
    "launch_instance_details": {
        "agent_config": {
            "OCI_MODEL_NAME": "LaunchInstanceAgentConfigDetails",
            "is_monitoring_disabled": false,
            "is_management_disabled": false,
            "are_all_plugins_disabled": false,
            "plugins_config": [{
                "OCI_MODEL_NAME": "InstanceAgentPluginConfigDetails",
                "name": "Vulnerability Scanning",
                "desired_state": "ENABLED"
            }]
        }
    }
}

VMware vSphere Settings

A number of settings are required to be defined to use this functionality. The VMware vSphere settings appear in the Pool configuration when the feature is licensed.

../../_images/vm_vsphere.webp

VSphere VM

vSphere VM Provider Settings

Name

Description

Name

A name to use to identify the config.

vSphere vCenter Address

The location of the VMware vSphere vCenter server to use.

vSphere vCenter Port

The port to use. (This is usually 443)

vSphere vCenter Username

The username to use when authenticating with the vSphere vCenter server.

vSphere vCenter Password

The password to use when authenticating with the vSphere vCenter server.

VM Template Name

The template VM to use when cloning new autoscaled VMs.

Max Instances

The maximum number of vSphere VM instances to provision regardless of the need for available free slots.

Datacenter Name

The datacenter to use for cloning the new vSphere VM instances.

VM Folder

The VM folder to use for cloning the new vSphere VM instances. This field is optional, if left blank the VM folder of the template is used.

Datastore Name

The datastore to use for cloning the new vSphere VM instances. This field is optional, if left blank the datastore of the template is used.

Cluster Name

The cluster to use for cloning the new vSphere VM instances. This field is optional, if left blank the cluster of the template is used.

Resource Pool

The resource pool to use for cloning the new vSphere VM instances. This field is optional, if left blank the resource pool of the template is used.

Datastore Cluster Name

The datastore cluster to use for cloning the new vSphere VM instances. This field is optional, if left blank the datastore cluster of the template is used.

Guest VM Username

The username to use for running the startup script on the new vSphere VM instance. This account should have sufficient privileges to execute all commands in the startup script.

Guest VM Password

The password for the Guest VM Username account.

Number of Guest CPUs

The number of CPUs to configure on new vSphere VM instances. This option is not dependent on the number of CPUs configured on the template.

Amount of Guest Memory(MB)

The amount of memory in megabytes to configure on new vSphere VM instances. This option is not dependent on the amount of memory configured on the template.

What family of OS is installed in the VM

Whether the template OS is Linux or Windows. This is needed to ensure proper execution of the startup script.

Startup Script

When instances are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as Bash scripts on a Linux host and PowerShell scripts on a Windows host. Additional troubleshooting steps can be found in the Creating Templates For Use With The VMware vSphere Provider section of the server documentation.

Notes on vSphere Datastore Storage

When configuring VMware vSphere with Kasm Workspaces, one important item to keep in mind is datastore storage. When clones are created, VMware will attempt to satisfy the clone operation; if the datastore runs out of space, any VMs running on that datastore will be paused until space is available. Kasm Workspaces recommends that critical management VMs, such as the vCenter server VM and cluster management VMs, are kept on separate datastores that are not used for Kasm autoscaling.

OpenStack Settings

A number of settings are required to be defined to use this functionality. The OpenStack settings appear in the Pool configuration when the feature is licensed.

The appropriate OpenStack configuration options can be found by using the “API Access” page of the OpenStack UI and downloading the “OpenStack RC File”.

../../_images/vm_openstack.webp

OpenStack VM

OpenStack VM Provider Settings

Name

Description

Name

A name to use to identify the config.

Max Instances

The maximum number of OpenStack compute instances to provision regardless of the need for additional resources.

OpenStack Identity Endpoint

The endpoint address of the OpenStack Keystone endpoint (e.g. https://openstack.domain:5000)

OpenStack Nova Endpoint

The endpoint address of the OpenStack Nova (Compute) endpoint (e.g. https://openstack.domain:8774/v2/)

OpenStack Nova Version

The version to use with the OpenStack Nova (Compute) endpoint (e.g. 2.90)

OpenStack Glance Endpoint

The endpoint address of the OpenStack Glance (Image) endpoint (e.g. https://openstack.domain:9292)

OpenStack Glance Version

The version to use with the OpenStack Glance (Image) endpoint (e.g. 2)

OpenStack Cinder Endpoint

The endpoint address of the OpenStack Cinder (Volume) endpoint. Note: The address contains the OpenStack Project ID (e.g. https://openstack.domain:8776/v3/383a0dad105e460ab5a863ea0a45932b)

OpenStack Cinder Version

The version to use with the OpenStack Cinder (Volume) endpoint. (e.g. 3)

Project Name

The name of the OpenStack Project where VMs will be provisioned.

Authentication Method

The kind of credential used to authenticate against the OpenStack Endpoints.

Application Credential ID

The Credential ID of the OpenStack Application Credential.

Application Credential Secret

The OpenStack Application Credential secret.

Project Domain Name

The Domain that the OpenStack Project belongs to (e.g. domain-1353722761)

User Domain Name

The Domain that the OpenStack User belongs to (e.g. domain-1353722761)

Username

The Username of the OpenStack User used to authenticate against OpenStack.

Password

The Password of the OpenStack User used to authenticate against OpenStack.

Metadata

A JSON dictionary containing the metadata tags applied to the OpenStack VMs (e.g. {"my_tag": "my_value"})

Image ID

The ID of the Image used to provision OpenStack VMs.

Flavor

The name of the desired Flavor for the OpenStack VM (e.g. gen.medium)

Create Volume

Enable to create a new Block storage (Cinder) volume for the OpenStack VM. (When disabled, ephemeral Compute (Nova) storage is used.)

Volume Size (GB)

The desired size of the VM Volume in GB. This can only be specified when “Create Volume” is enabled.

Volume Type

The type of volume to use for the new OpenStack VM Volume (e.g. __DEFAULT__)

Startup Script

When OpenStack VMs are provisioned, this script is executed. The script is responsible for installing and configuring the Kasm Agent.

Security Groups

A list containing the security groups applied to the OpenStack VM (e.g. ["sg1", "sg2"])

Network ID

The ID of the network that the OpenStack VMs will be connected to.

Key Name

The name of the SSH Key used to connect to the instance.

Availability Zone

The Name of the Availability Zone that the OpenStack VM will be placed into.

Config Override

A JSON dictionary that can be used to customize attributes of the VM request

Openstack Notes

Openstack Endpoints Require Trusted Certificates

The OpenStack provider requires that OpenStack endpoints present trusted, signed TLS certificates. This can be done through an API gateway that presents a valid certificate or through configuring valid certificates on each individual service (Reference: Openstack Docs).

Application Credential Access Rules

OpenStack Application Credentials allow administrators to specify Access Rules to restrict the permissions of an application credential further than a role might allow. Below is an example of the minimum set of permissions that Kasm Workspaces requires in an Application Credential:

- service: volumev3
  method: POST
  path: /v3/*/volumes
- service: volumev3
  method: DELETE
  path: /v3/*/volumes/*
- service: volumev3
  method: GET
  path: /v3/*/volumes
- service: volumev3
  method: GET
  path: /v3/*/volumes/*
- service: volumev3
  method: GET
  path: /v3/*/volumes/detail
- service: compute
  method: GET
  path: /v2.1/servers/detail
- service: compute
  method: GET
  path: /v2.1/servers
- service: compute
  method: GET
  path: /v2.1/flavors
- service: compute
  method: GET
  path: /v2.1/flavors/*
- service: compute
  method: GET
  path: /v2.1/servers/*/os-volume_attachments
- service: compute
  method: GET
  path: /v2.1/servers/*
- service: compute
  method: GET
  path: /v2.1/servers/*/os-interface
- service: compute
  method: POST
  path: /v2.1/servers
- service: compute
  method: DELETE
  path: /v2.1/servers/*
- service: image
  method: GET
  path: /v2/images/*
- service: image
  method: GET
  path: /v2/schemas/image
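As an example of applying such rules, an application credential can be created with the OpenStack CLI by passing the rules as a JSON file. This is a sketch: the credential name and file name are placeholders, and the rule list is abbreviated; use the full set above in practice.

```shell
# Abbreviated rule list for illustration; extend with the full set above.
cat > kasm-access-rules.json <<'EOF'
[
  {"service": "compute", "method": "GET", "path": "/v2.1/servers"},
  {"service": "compute", "method": "POST", "path": "/v2.1/servers"},
  {"service": "compute", "method": "DELETE", "path": "/v2.1/servers/*"}
]
EOF

# Requires an authenticated openstack CLI session.
openstack application credential create kasm-autoscale \
  --access-rules kasm-access-rules.json
```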

KubeVirt Enabled Providers

Overview

KASM supports autoscaling in Kubernetes environments that are running KubeVirt. This includes generic k8s installations as well as GKE and Harvester deployments.

Startup Scripts

We have released updated startup scripts to include KubeVirt support. The most important change is the inclusion of the qemu-agent.

https://github.com/kasmtech/workspaces-autoscale-startup-scripts/blob/develop/latest/docker_agents/ubuntu.sh

Config Overrides

KASM generates VMs using a Kubernetes yaml manifest described by this API specification:

https://kubevirt.io/api-reference/main/definitions.html#_v1_virtualmachine

In the event that the KASM providers do not expose a required feature, the provider configuration may be overridden. To do this, the entire manifest must be stored in the provider config_override. KASM will parse the manifest and attempt to update certain fields: the metadata will be updated so that the name field contains a unique name, the namespace matches the namespace in the provider config, and the labels are updated to contain various labels required for autoscale functionality. All other values will be preserved. The runStrategy will be set to Always and the hostname will be set to match the unique name. To support startup scripts, a disk with the following settings will be appended to the disks:

- name: config-drive-disk
  cdrom:
    bus: sata
    readonly: true

This points to a volume that will be appended to the volumes with the following settings:

- name: config-drive-disk
  cloudInitConfigDrive:
    secretRef:
      name: f'{name}-secret'

The manifest will be used to spawn multiple VMs, so unique names are necessary for certain resources such as PVCs. To support this, the provider will replace any instance of $KASM_NAME with a unique name. To use this for multiple different types of resources, you can append to the name as in this suggested PVC example:

volumes:
  - name: disk-0
    persistentVolumeClaim:
      claimName: $KASM_NAME-pvc
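Assuming the substitution is a plain string replacement (the VM name below is invented for illustration), the $KASM_NAME expansion behaves like:

```shell
# Write a manifest fragment containing the $KASM_NAME placeholder.
cat > manifest.yaml <<'EOF'
volumes:
  - name: disk-0
    persistentVolumeClaim:
      claimName: $KASM_NAME-pvc
EOF

# Substitute a unique per-VM name, as the provider does at provisioning time.
sed 's/\$KASM_NAME/kasm-vm-1a2b3c/g' manifest.yaml
```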

Again, because the manifest will be used to spawn multiple VMs, it is necessary to utilize a disk cloning method such as the dataVolume feature of the Containerized Data Importer interface created by KubeVirt.
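For instance, a CDI DataVolume can clone the golden-image PVC for each VM. A minimal sketch, assuming the CDI v1beta1 API; the PVC name, namespace, and size are placeholders:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: $KASM_NAME-dv            # expanded to a unique name per VM
spec:
  source:
    pvc:
      name: kasm-ubuntu-focal    # golden-image PVC to clone
      namespace: kasm
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 25Gi
```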

Caveats

The k8s namespace for KASM resources is configured on the provider; it should not be updated while the provider is in use. Doing so can result in unpredictable behavior and orphaned resources. If it is necessary to change the k8s namespace, create a new autoscale config and provider with the new namespace, and update the old autoscale configuration to set the standby cores, GPUs, and memory to 0. This allows new resources to transition to the new provider.

It is possible for orphaned k8s objects to exist for various reasons, such as power loss of the KASM server during VM creation. Currently, these objects must be cleaned up manually. The k8s objects that KASM creates are: virtualmachines, secrets and PVCs.

The KASM kubevirt provider does not work out of the box with the following Kubernetes deployments:

  • KIND: the default KIND deployment uses local-path-provisioner for storage, which does not support CDI cloning.

KubeVirt Settings

A number of settings are required to be defined to use this functionality. The KubeVirt settings appear in the Pool configuration when the feature is licensed.

The appropriate Kubernetes configuration options can be found by downloading the KubeConfig file provided by your Kubernetes installation.

../../_images/vm_kubevirt.webp

KubeVirt VM

KubeVirt VM Provider Settings

Name

Description

Name

A name to use to identify the config.

Max Instances

The maximum number of KubeVirt compute instances to provision regardless of the need for additional resources.

Kubernetes Host

The address of the kubernetes cluster (e.g. https://kubevirt.domain:5000).

Kubernetes SSL Certificate

The kubernetes cluster certificate as a base64 encoded string of a PEM file.

Kubernetes API Token

The bearer token for authentication to the kubernetes cluster.

VM Namespace

The name of the Kubernetes namespace where the VMs will be provisioned.

VM SSH Public Key

The SSH public key used to access the VM.

VM Cores

The number of CPU cores to configure for the VM.

VM Memory

The amount of memory in Gibibyte (GiB) to configure for the VM.

VM Disk Size

The size of the disk in Gibibyte (GiB) to configure for the VM.

VM Disk Source

The name of the source PVC containing a cloud-ready disk image used to clone a new disk volume.

VM Interface Type

The interface type for the VM (e.g. masquerade or bridge).

VM Network Name

The name of the network interface. If using a multus network, it should match the name of that network.

VM Network Type

The network type for the VM (e.g. pod or multus).

VM Startup Script

When VMs are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as Bash scripts on a Linux host and PowerShell scripts on a Windows host. Additional troubleshooting steps can be found in the Creating Templates For Use With The VMware vSphere Provider section of the server documentation.

Configuration Override

A config override that contains a complete YAML manifest file used when provisioning the VM.

Enable TPM

Enable TPM for VM.

Enable EFI Boot

Enable the EFI boot loader for the VM.

Enable Secure Boot

Enable secure boot for the VM (requires EFI boot to be enabled).

KubeVirt GKE Setup Example

This example assumes you have a GKE account, a Linux development environment, and an existing KASM deployment (ref).

The example will assume the following variables:

  • cluster name kasm

  • zone us-central1

  • region us-central1-c

  • machine-type c3-standard-8

  • namespace kasm

  • storage class name kasm-storage

  • pvc name kasm-ubuntu-focal

  • pvc size 25GiB

  • pvc image focal-server-cloudimg-amd64.img

These should be replaced with values more appropriate to your installation.

Ensure GKE is configured

  • Install the gcloud CLI (ref):

curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-cli-linux-x86_64.tar.gz
tar -xf google-cloud-cli-linux-x86_64.tar.gz
./google-cloud-sdk/install.sh -q --path-update true --command-completion true
. ~/.profile
  • Initialize the gcloud CLI (ref):

gcloud init --no-launch-browser
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-c
  • Enable the GKE engine API (ref):

gcloud services enable container.googleapis.com
  • Create a cluster with nested virtualization support (ref):

gcloud container clusters create kasm \
    --enable-nested-virtualization \
    --node-labels=nested-virtualization=enabled \
    --machine-type=c3-standard-8
  • Install the kubectl gcloud component (ref):

gcloud components install kubectl
  • Configure GKE kubectl authentication (ref):

gcloud components install gke-gcloud-auth-plugin
gcloud container clusters get-credentials kasm \
    --region=us-central1-c
  • Create the KASM namespace:

kubectl create namespace kasm

Install KubeVirt

Note: The current v1.3 release of KubeVirt introduced a bug preventing GKE support. You must install the v1.2.2 release.

  • Install KubeVirt (ref):

#export RELEASE=$(curl https://storage.googleapis.com/kubevirt-prow/release/kubevirt/kubevirt/stable.txt)
export RELEASE=v1.2.2
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-operator.yaml
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${RELEASE}/kubevirt-cr.yaml
  • Wait for it to be ready. This may time out multiple (2-3) times before returning successfully:

kubectl -n kubevirt wait kv kubevirt --for condition=Available

Install the Containerized Data Importer extension

In order to support efficient cloning, KubeVirt requires the Containerized Data Importer extension (ref).

  • Install the CDI extension:

export VERSION=$(curl -s https://api.github.com/repos/kubevirt/containerized-data-importer/releases/latest | grep '"tag_name":' | sed -E 's/.*"([^"]+)".*/\1/')
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
  • Create a new storage class that uses the GKE CSI driver and has the Immediate volume binding mode:

kubectl apply -f - <<EOF
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    components.gke.io/component-name: pdcsi
    components.gke.io/component-version: 0.18.23
    components.gke.io/layer: addon
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    k8s-app: gcp-compute-persistent-disk-csi-driver
  name: kasm-storage
parameters:
  type: pd-balanced
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
  • Mark any existing default storage classes as non-default:

kubectl patch storageclass standard-rwo -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'

Create local kubectl authentication

Currently, in order to authenticate with the GKE cluster, KASM needs a local kubectl authentication account.

  • Create a service account:

KUBE_SA_NAME="kasm-admin"
kubectl create sa $KUBE_SA_NAME
kubectl create clusterrolebinding $KUBE_SA_NAME --clusterrole cluster-admin --serviceaccount default:$KUBE_SA_NAME
  • Manually create a long-lived API token for the service account:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: $KUBE_SA_NAME-secret
  annotations:
    kubernetes.io/service-account.name: $KUBE_SA_NAME
type: kubernetes.io/service-account-token
EOF
  • Generate the kubeconfig:

KUBE_DEPLOY_SECRET_NAME=$KUBE_SA_NAME-secret
KUBE_API_EP=`gcloud container clusters describe kasm --format="value(privateClusterConfig.publicEndpoint)"`
KUBE_API_TOKEN=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.token}'|base64 --decode`
KUBE_API_CA=`kubectl get secret $KUBE_DEPLOY_SECRET_NAME -o jsonpath='{.data.ca\.crt}'`
echo $KUBE_API_CA | base64 --decode > tmp.deploy.ca.crt

touch $HOME/local.cfg
export KUBECONFIG=$HOME/local.cfg
kubectl config set-cluster local --server=https://$KUBE_API_EP --certificate-authority=tmp.deploy.ca.crt --embed-certs=true
kubectl config set-credentials $KUBE_SA_NAME --token=$KUBE_API_TOKEN
kubectl config set-context local --cluster local --user $KUBE_SA_NAME
kubectl config use-context local
  • Validate your kubeconfig works:

kubectl version

It should display both the client and server versions. If it does not, you can retrieve the current config used by kubectl to ensure it is using the correct config:

kubectl config view

Ensure that it is using the local settings you generated and not an existing GKE configuration.

Upload a PVC

The virtctl tool can be used to upload a VM image. Both the raw and qcow2 formats are supported. The image should be cloud-ready, with cloud-init configured.

  • Download and install the virtctl tool:

VERSION=$(kubectl get kubevirt.kubevirt.io/kubevirt -n kubevirt -o=jsonpath="{.status.observedKubeVirtVersion}")
ARCH=$(uname -s | tr A-Z a-z)-$(uname -m | sed 's/x86_64/amd64/') || windows-amd64.exe
echo ${ARCH}
curl -L -o virtctl https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/virtctl-${VERSION}-${ARCH}
chmod +x virtctl
sudo install virtctl /usr/local/bin
  • Expose the CDI Upload Proxy by executing the following command in another terminal:

kubectl -n cdi port-forward service/cdi-uploadproxy 8443:443
  • Use the virtctl tool to upload the VM image:

virtctl image-upload pvc kasm-ubuntu-focal --uploadproxy-url=https://localhost:8443 --size=25Gi --image-path=./focal-server-cloudimg-amd64.img --insecure -n kasm

Ensure KASM is configured

  • Create Certs

sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout kasm_nginx.key -out kasm_nginx.crt -subj "/C=US/ST=VA/L=None/O=None/OU=DoFu/CN=kasm-host/emailAddress=none@none.none" 2> /dev/null
  • Configure KASM

    • Add a license

    • Set the default zone upstream address to the address of the KASM host

    • Add a Pool

      • Name KubeVirt Pool

      • Type Docker Agent

    • Add an Auto-Scale config

      • Name KubeVirt AutoScale

      • AutoScale Type Docker Agent

      • Pool KubeVirt Pool

      • Deployment Zone default

      • Standby Cores 4

      • Standby GPUs 1

      • Standby Memory 4000

      • Downscale Backoff 600

      • Agent Cores Override 4

      • Agent GPUs Override 1

      • Agent Memory Override 4

      • Nginx Cert paste kasm_nginx.crt

      • Nginx Key paste kasm_nginx.key

    • Create a new VM Provider

      • Provider KubeVirt

      • Name KubeVirt Provider

      • Max Instances 10

      • Host paste server URI from kubeconfig

      • SSL Certificate paste certificate-authority-data from kubeconfig

      • API Token paste token from kubeconfig

      • VM Namespace kasm

      • VM Public SSH Key paste user public ssh key

      • Cores 4

      • Memory 4

      • Disk Source kasm-ubuntu-focal

      • Disk Size 30

      • Interface Type bridge

      • Network Name default

      • Network Type pod

      • Startup Script paste ubuntu docker agent startup script

Harvester Settings

Several settings must be defined to use this functionality. The Harvester settings appear in the Pool configuration when the feature is licensed.

The appropriate Kubernetes configuration options can be found by downloading the KubeConfig file provided by your Kubernetes installation.
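
The provider fields map onto specific keys in that file. A sketch using a dummy kubeconfig (the values are illustrative; substitute the file downloaded from your cluster):

```shell
# Write an illustrative kubeconfig; a real one comes from your cluster.
cat > sample-kubeconfig.yaml <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://kubevirt.example.com:6443
    certificate-authority-data: LS0tLS1CRUdJTi4uLg==
  name: example
users:
- name: example
  user:
    token: abc123exampletoken
EOF
# Pull out the three values Kasm asks for
awk '/server:/ {print "Kubernetes Host:", $2}' sample-kubeconfig.yaml
awk '/certificate-authority-data:/ {print "Kubernetes SSL Certificate:", $2}' sample-kubeconfig.yaml
awk '/ token:/ {print "Kubernetes API Token:", $2}' sample-kubeconfig.yaml
```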

../../_images/vm_harvester.webp

Harvester VM

Harvester VM Provider Settings

Name

Description

Name

A name to use to identify the config.

Max Instances

The maximum number of KubeVirt compute instances to provision regardless of the need for additional resources.

Kubernetes Host

The address of the Kubernetes cluster (e.g. https://kubevirt.domain:5000).

Kubernetes SSL Certificate

The Kubernetes cluster certificate as a base64-encoded string of a PEM file.

Kubernetes API Token

The bearer token used to authenticate to the Kubernetes cluster.

VM Namespace

The name of the Kubernetes namespace where the VMs will be provisioned.

VM SSH Public Key

The public SSH key used to access the VM.

VM Cores

The number of CPU cores to configure for the VM.

VM Memory

The amount of memory in Gibibyte (GiB) to configure for the VM.

VM Disk Size

The size of the disk in Gibibyte (GiB) to configure for the VM.

VM Disk Image

The name of the Harvester image used to clone a new disk volume.

VM Interface Type

The interface type for the VM (e.g. masquerade or bridge).

VM Network Name

The name of the network interface. If using a multus network, it should match the name of that network.

VM Network Type

The network type for the VM (e.g. pod or multus).

VM Startup Script

When VMs are provisioned, this script is executed and is responsible for installing and configuring the Kasm Agent. Scripts are run as Bash scripts on Linux hosts and as PowerShell scripts on Windows hosts. Additional troubleshooting steps can be found in the Creating Templates For Use With The VMware vSphere Provider section of the server documentation.

Configuration Override

A config override that contains a complete YAML manifest file used when provisioning the VM.

Enable TPM

Enable TPM for the VM.

Enable EFI Boot

Enable the EFI boot loader for the VM.

Enable Secure Boot

Enable secure boot for the VM (requires EFI boot to be enabled).
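
The Kubernetes SSL Certificate field above expects a single-line base64 string. The kubeconfig's certificate-authority-data is already in that form; if you only have the CA as a PEM file, it can be encoded with base64. A sketch; ca.pem is a stand-in path and the PEM body is illustrative:

```shell
# Stand-in PEM; point at your real CA file instead
printf -- '-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > ca.pem
# -w0 disables line wrapping (GNU coreutils base64)
base64 -w0 ca.pem; echo
```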

Harvester Setup Example

This example assumes you have a standard Harvester v1.3.1 deployment. The guide for doing so can be found here:

https://docs.harvesterhci.io/v1.3/install/index

Generate credentials to access the Kubernetes cluster

Harvester provides a download link on the support page of the dashboard. This will be used later when configuring the KASM provider.

../../_images/harvester-kubeconfig.png

Download KubeConfig

Create a namespace

  • The name for this example deployment should be kasm

../../_images/harvester-namespace.png

Create Namespace

Create a VM network

  • The namespace should match the one created above, for this example deployment it is kasm.

  • The name will be used later in the provider configuration, for this example deployment it should be kasm-network.

  • For this example deployment, select the mgmt cluster network.

../../_images/harvester-network.png

Create VM network

Create a VM image

  • The namespace should match the one created above, for this example deployment it is kasm.

  • The name will be used later in the provider configuration, for this example deployment it should be kasm-ubuntu-focal.

  • For this example deployment, use the url https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img.

../../_images/harvester-image.png

Create VM image

Create a KASM deployment VM

  • The namespace should match the one created above, for this example deployment it is kasm.

  • The name for this example deployment should be kasm-app

  • The CPU, memory, and disk size should meet the minimum needs of your demonstration use cases. A good example would be 4 cores, 8 GiB of memory, and 50 GiB of disk space.

  • For ease of use, add a public SSH key

  • Under Volumes select the disk image created above kasm-ubuntu-focal

  • Under Networks select the VM network created above kasm-network

../../_images/harvester-vm.png

Create KASM VM

Install KASM

Using the VM created above, complete a single-server deployment of KASM (ref)

Create Certs

sudo openssl req -x509 -nodes -days 1825 -newkey rsa:2048 -keyout kasm_nginx.key -out kasm_nginx.crt -subj "/C=US/ST=VA/L=None/O=None/OU=DoFu/CN=kasm-host/emailAddress=none@none.none" 2> /dev/null

Configure KASM

  • Add a KASM license

  • Set the default zone upstream address to the address of the KASM host

  • Add a Pool

    • Name Harvester Pool

    • Type Docker Agent

  • Add an Auto-Scale config

    • Name Harvester AutoScale

    • AutoScale Type Docker Agent

    • Pool Harvester Pool

    • Deployment Zone default

    • Standby Cores 4

    • Standby GPUs 1

    • Standby Memory 4000

    • Downscale Backoff 600

    • Agent Cores Override 4

    • Agent GPUs Override 1

    • Agent Memory Override 4

    • Nginx Cert paste kasm_nginx.crt

    • Nginx Key paste kasm_nginx.key

  • Create a new VM Provider

    • Provider Harvester

    • Name Harvester Provider

    • Max Instances 10

    • Host paste server URI from kubeconfig

    • SSL Certificate paste certificate-authority-data from kubeconfig

    • API Token paste token from kubeconfig

    • VM Namespace kasm

    • VM Public SSH Key paste user public ssh key

    • Cores 4

    • Memory 4

    • Disk Image kasm-ubuntu-focal

    • Disk Size 30

    • Network Name kasm-network

    • Network Type multus

    • Startup Script paste ubuntu docker agent startup script

DNS Provider Configs

Note

This feature requires a special license. Please contact your Kasm Technologies representative for details.

../../_images/dns_create_new.webp

Create New DNS

AWS DNS Settings

Name

Description

DNS Provider Configs

Select an existing config or create a new config. If you select an existing config and change any of the details, those details will be changed for anything using the same DNS Provider config.

Provider

Select a provider from AWS, Azure, Digital Ocean, Google Cloud, or Oracle Cloud. If you select an existing config, the provider will be selected automatically.

AWS DNS Provider Settings

../../_images/dns_aws.webp

AWS DNS Provider

AWS DNS Settings

Name

Description

Name

A name to use to identify the config.

Access Key ID

The AWS Access Key used for the AWS API.

Access Key Secret

The AWS Secret Access Key used for the AWS API.

Azure DNS Provider Settings

../../_images/dns_azure.webp

Azure DNS Provider

Azure DNS Provider Settings

Name

Description

Name

A name to use to identify the config.

Subscription ID

The Subscription ID for the Azure Account. This can be found in the Azure portal by searching for Subscriptions in the search bar on the Azure home page, then selecting the subscription you want to use. (e.g. 00000000-0000-0000-0000-000000000000)

Resource Group

The Resource Group the DNS Zone and/or Virtual Machines belong to. (e.g. dev)

Tenant ID

The Tenant ID for the Azure Account. This can be found in the Azure portal by going to Azure Active Directory using the search bar on the Azure home page. (e.g. 00000000-0000-0000-0000-000000000000)

Client ID

The Client ID credential used to authenticate to the Azure Account. A Client ID can be obtained by registering an application within Azure Active Directory. (e.g. 00000000-0000-0000-0000-000000000000)

Client Secret

The Client Secret credential created with the registered application in Azure Active Directory. (e.g. abc123)

Azure Authority

The Azure authority to use. There are four: Azure Public Cloud, Azure Government, Azure China, and Azure Germany.

Region

The Azure region where the Agents will be provisioned. (e.g. eastus)

Digital Ocean DNS Provider Settings

../../_images/dns_do.webp

Digital Ocean DNS Provider

Digital Ocean DNS Provider Settings

Name

Description

Name

A name to use to identify the config.

Token

The API token used to authenticate with the Digital Ocean API.

Google Cloud (GCP) DNS Provider Settings

../../_images/dns_google.webp

Google Cloud DNS Provider

GCP DNS Provider Settings

Name

Description

Name

A name to use to identify the config.

Project

The Google Cloud Project ID. (e.g. pensive-voice-547511)

Credentials

The JSON-formatted credentials for the service account used to authenticate with GCP: Ref
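
The credentials field takes the JSON key generated for the service account in the GCP console; the whole document is typically pasted as-is. Its general shape (all values illustrative and abbreviated):

```json
{
  "type": "service_account",
  "project_id": "pensive-voice-547511",
  "private_key_id": "...",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "kasm-dns@pensive-voice-547511.iam.gserviceaccount.com",
  "client_id": "...",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```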

Oracle Cloud (OCI) DNS Provider Settings

../../_images/dns_oracle.webp

OCI DNS Provider

OCI DNS Provider Settings

Name

Description

Name

A name to use to identify the config.

Fingerprint

The public key fingerprint of the authenticated API user. (e.g. xx:yy:zz:11:22:33)

Tenancy OCID

The Tenancy OCID for the OCI account. (e.g. ocid1.tenancy.oc1..xyz)

Region

The OCI Region name. (e.g. us-ashburn-1)

Compartment OCID

The Compartment OCID where the auto-scaled agents will be placed. (e.g. ocid1.compartment.oc1..xyz)

User OCID

The OCID of the user used to authenticate with the OCI API. (e.g. ocid1.user.oc1..xyz)

Private Key

The private key (PEM format) of the authenticated API user.
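
The Fingerprint setting above can be reproduced locally from this key pair: OCI defines the fingerprint as the colon-separated MD5 digest of the DER-encoded public key. A sketch that generates a throwaway key (point -in at your real API signing key instead):

```shell
# Throwaway key; substitute your real API signing key for oci_api_key.pem
openssl genrsa -out oci_api_key.pem 2048 2> /dev/null
# Colon-separated MD5 of the DER-encoded public key = the OCI fingerprint
openssl rsa -in oci_api_key.pem -pubout -outform DER 2> /dev/null | openssl md5 -c
```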