---
myst:
html_meta:
"description lang=en": "Step-by-step guide to configuring Digital Ocean AutoScale for Kasm Workspaces. Learn how to set up your environment, create API tokens, define VM provider settings, and test autoscaling behavior for optimized resource usage."
"keywords": "Kasm, Digital Ocean, AutoScale, autoscaling, droplets, cloud-init, Kubernetes, SSH, firewall, VM, startup script"
"property=og:locale": "en_US"
---
```{title} DigitalOcean AutoScale
```
# DigitalOcean AutoScale
```{contents} Table of Contents
:depth: 3
:local:
```
This guide will walk you through configuring autoscaling for Kasm Workspaces on DigitalOcean. Autoscaling in Kasm Workspaces automatically provisions and destroys agents based on user demand, ensuring optimized resource utilization and cost efficiency.
## Overview
### Prerequisites
* Access to DigitalOcean: Ensure you have the appropriate access to your DigitalOcean environment
* Kasm Workspaces Installed: A basic setup of Kasm Workspaces must already exist
* Understand Key Concepts:
  * **Zones**: Logical groupings of Kasm services for geographical or organizational segmentation
  * **Pools**: Logical groupings of Kasm Docker Agents and Server Pools for load balancing
* Plan Your Configuration:
  * Understand your deployment zone requirements
  * Configure your DigitalOcean environment
### Set Up Your DigitalOcean Environment
- **Create an API Token**: Go to your DigitalOcean dashboard -> "API" -> "Personal Access Tokens" -> "Generate New Token"
```{figure} /images/autoscaling/providers/digitalocean/digitalocean_create_api_key.png
:align: center
**Create a Personal Access Token in DigitalOcean**
```
  * Token Name: Give the token a name (e.g., Kasm AutoScale)
  * Expiration: Set an expiration for your token (e.g., 30 days)
  * Scopes: Choose "Custom Scopes" and grant access to the following resources:
    * certificate
    * database
    * firewall
    * load_balancer
    * project
    * regions
    * sizes
    * ssh_key
    * tag
    * vpc
    * image
    * droplet
    * domain
    * action
```{figure} /images/autoscaling/providers/digitalocean/digitalocean_custom_scope_perms.png
:align: center
**Custom Scopes in DigitalOcean**
```
  * Generate Token
  * Save your token securely, as you won't be able to see it again (a quick sanity check of the new token is sketched after this list).
- **Add SSH Key**: You need to assign an SSH key to your newly provisioned Droplets.
  * Go to your DigitalOcean dashboard -> "Settings" -> "Security" -> "Add SSH Key"
  * Follow the on-screen instructions from DigitalOcean to generate an SSH key pair, then paste your public key and choose a key name (a programmatic alternative is sketched after this list).
```{figure} /images/autoscaling/providers/digitalocean/digitalocean_create_ssh_key.png
:align: center
**Add SSH Key to DigitalOcean**
```
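Before entering the token into Kasm, it can be worth a quick sanity check that it authenticates and can read the resources it needs. Below is a minimal sketch using the python-digitalocean library (the same client that appears in the troubleshooting traceback later in this guide); the token value is a placeholder.
```python
# pip install python-digitalocean
import digitalocean

TOKEN = "dop_v3_REPLACE_ME"  # placeholder: your Personal Access Token

manager = digitalocean.Manager(token=TOKEN)

# Confirm the token authenticates at all
account = manager.get_account()
print(f"Authenticated as {account.email} (status: {account.status})")

# Confirm the ssh_key scope works and list keys available to new Droplets
for key in manager.get_all_sshkeys():
    print(f"SSH key: {key.name} ({key.fingerprint})")
```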
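If you prefer to register the key programmatically instead of through the dashboard, a sketch along the following lines should work after generating a key pair locally (for example with `ssh-keygen -t ed25519`). The token and key material are placeholders.
```python
import digitalocean

# Placeholder: paste the contents of your own ~/.ssh/id_ed25519.pub here
PUBLIC_KEY = "ssh-ed25519 AAAA...example... kasm-autoscale"

key = digitalocean.SSHKey(
    token="dop_v3_REPLACE_ME",  # placeholder: your Personal Access Token
    name="Kasm AutoScale",
    public_key=PUBLIC_KEY,
)
key.create()
print(f"Registered SSH key: {key.name}")
```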
## Configure DigitalOcean Details on Kasm
```{eval-rst}
* Follow :ref:`autoscale_docker_config` or :ref:`autoscale_server_config` to create a new AutoScale config, or select **Create New** in **VM Provider Configs** if you already have one.
* Set the Provider to DigitalOcean
* Configure the following settings:
```
```{include} /guide/compute/vm_providers/digital_ocean.md
```
* Submit the Provider Config
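The provider config fields (detailed in the included reference above) typically ask for DigitalOcean-specific identifiers such as region, Droplet size, and image slugs. If you are unsure which slugs are valid for your account, a sketch like the following can enumerate them; the token is a placeholder.
```python
import digitalocean

manager = digitalocean.Manager(token="dop_v3_REPLACE_ME")  # placeholder

print("Regions:")
for region in manager.get_all_regions():
    print(f"  {region.slug:8} {region.name}")

print("Droplet sizes:")
for size in manager.get_all_sizes():
    print(f"  {size.slug:20} {size.vcpus} vCPU / {size.memory} MB RAM")

print("Public Ubuntu images:")
for image in manager.get_all_images():
    if image.public and image.distribution == "Ubuntu" and image.slug:
        print(f"  {image.slug:28} {image.name}")
```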
### "Tag Does Not Exist" Error
Upon first testing AutoScaling with DigitalOcean, an error similar to the following may be presented:
```
Future generated an exception: tag zone:abc123 does not exist
traceback:
..
File "digitalocean/Firewall.py", line 225, in add_tags
File "digitalocean/baseapi.py", line 196, in get_data
digitalocean.DataReadError: tag zone:abc123 does not exist
process: manager_api_server
```
This error occurs when Kasm Workspaces tries to assign a unique tag, based on the Zone ID, to the DigitalOcean firewall. If that tag does not already exist in DigitalOcean, the operation fails with the error above. To work around the issue, manually create a tag matching the one specified in the error (e.g., zone:abc123). This can be done via the API (as sketched below) or by applying the tag to a temporary Droplet in the DigitalOcean console.
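For example, the missing tag can be created with a few lines of python-digitalocean (the library shown in the traceback); substitute the exact tag name from your own error message, and note the token is a placeholder.
```python
import digitalocean

# Create the tag exactly as it appears in the error message, e.g. zone:abc123
tag = digitalocean.Tag(token="dop_v3_REPLACE_ME", name="zone:abc123")
tag.create()
```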
## Test your DigitalOcean AutoScale setup
If you have configured non-zero Standby/Minimum Available Session values, agents should start provisioning immediately. Otherwise, launch multiple workspaces to increase resource utilization and prompt Kasm to autoscale new agents.
* Provision a Workspace
  * Go to Workspaces > Registry
  * Make multiple workspaces available
  * Go to the Workspaces dashboard and launch enough workspace sessions to exceed your resource standby thresholds
  * Monitor the provisioning of new agents by going to "Infrastructure" -> "Agents"
  * Verify the new Droplets in the DigitalOcean console (the polling sketch below can help)
* Check Downscaling
  * Terminate sessions to reduce resource usage
  * Confirm that Kasm removes agents after the back-off period
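To watch the scaling behavior from the DigitalOcean side, one option is to poll the Droplets carrying your zone tag. A minimal sketch, assuming the zone tag format from the troubleshooting section above (zone:abc123) and a placeholder token:
```python
import time

import digitalocean

manager = digitalocean.Manager(token="dop_v3_REPLACE_ME")  # placeholder

# Print the tagged agent Droplets once a minute; Ctrl+C to stop
while True:
    droplets = manager.get_all_droplets(tag_name="zone:abc123")
    names = ", ".join(d.name for d in droplets) or "(none)"
    print(f"{len(droplets)} agent Droplet(s): {names}")
    time.sleep(60)
```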