VPN Sidecar Containers

Intro

For security reasons, or to gain access to a remote network, it may be necessary to give specific Kasm Workspaces access to a VPN. One approach is to customize an existing Kasm core image with VPN tools, but that requires broad network-level permissions and quickly becomes a configuration nightmare. The Kasm team's recommended approach is to run an off-the-shelf or custom VPN Docker container alongside Kasm Workspaces, leveraging the existing desktop and application containers. Their traffic can then be forced through the external VPN container using a custom Docker exec config. This document covers three different ways to achieve this.

Deployment time ~10 minutes.

Option 1: Create a standalone OpenVPN setup

This configuration uses an off-the-shelf OpenVPN container for the server, on the assumption that most users will already have an OpenVPN endpoint available. The example then shows how to build a custom client container to act as the sidecar.

The OpenVPN server

Here kylemanna/openvpn is used to quickly spin up an endpoint on a remote server. Use the following commands to deploy the server and dump a usable client config file:

sudo docker volume create --name Kasm_vpn
sudo docker run -v Kasm_vpn:/etc/openvpn --rm kylemanna/openvpn ovpn_genconfig -u udp://IP_OR_HOSTNAME
sudo docker run -v Kasm_vpn:/etc/openvpn --rm -it kylemanna/openvpn ovpn_initpki
sudo docker run -v Kasm_vpn:/etc/openvpn -d -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
sudo docker run -v Kasm_vpn:/etc/openvpn --rm -it kylemanna/openvpn easyrsa build-client-full Kasm nopass
sudo docker run -v Kasm_vpn:/etc/openvpn --rm kylemanna/openvpn ovpn_getclient Kasm > Kasm.ovpn

Please substitute IP_OR_HOSTNAME with the actual IP or hostname of the server. After filling out the prompted information for password protecting the certificates, the Kasm.ovpn configuration file needed to connect the VPN sidecar client will be written to disk.

The OpenVPN sidecar container

On the Kasm Workspaces host, build a custom Docker container to act as the VPN sidecar.

mkdir openvpn-client
cd openvpn-client
mkdir root

Here is the Dockerfile:

FROM debian:latest

RUN apt update && \
    apt install -y \
    iptables \
    openvpn && \
    apt clean

# add local files
COPY /root /

VOLUME [ "/vpn/config" ]
ENTRYPOINT [ "/entrypoint.sh" ]

And custom root/entrypoint.sh:

#! /bin/sh

# create tun device
if [ ! -c /dev/net/tun ]; then
  mkdir -p /dev/net
  mknod /dev/net/tun c 10 200
fi

# Enable devices MASQUERADE mode
iptables -t nat -A POSTROUTING -o tun+ -j MASQUERADE

# start vpn client
openvpn --config /vpn/config/${VPN_CONFIG}

Make sure the entrypoint is executable:

chmod +x root/entrypoint.sh

The directory structure should look like this:

openvpn-client/
├─ root/
│  ├─ entrypoint.sh
├─ Dockerfile

Now build the container:

sudo docker build -t openvpn-client .

On the Docker host running Kasm Workspaces (or the Agent Server if using a Multi-Server Deployment) create a custom Docker network:

sudo docker network create \
  --driver=bridge \
  --opt icc=true \
  --subnet=172.20.0.0/16 \
  vpn-1

And finally navigate to the folder containing the Kasm.ovpn config file and run the container:

sudo docker run -d \
  --cap-add NET_ADMIN \
  --name open-vpn \
  --net vpn-1 \
  --ip 172.20.0.2 \
  -e VPN_CONFIG=Kasm.ovpn \
  -v $(pwd):/vpn/config \
  --restart unless-stopped \
  openvpn-client
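A quick way to confirm the client actually connected is to look for OpenVPN's completion message in the sidecar's log. The snippet below runs the grep against a sample log line for illustration; the commented docker commands are the real checks to run on the host:

```shell
# OpenVPN prints this line once the tunnel is fully established.
# Sample log line for illustration only:
LOG_LINE="2024-01-01 00:00:00 Initialization Sequence Completed"
if echo "$LOG_LINE" | grep -q "Initialization Sequence Completed"; then
  echo "VPN is up"
fi
# Against the real sidecar:
#   sudo docker logs open-vpn | grep "Initialization Sequence Completed"
#   sudo docker exec open-vpn ip addr show tun0   # tun0 should exist
```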

Customizing Workspaces

Login to the Workspaces web interface and click on “Workspaces -> Workspaces” from the Admin tab:

../../_images/images.webp

Workspaces Location Admin

Now select the arrow button next to the Workspace to be modified to use this network and select “Clone”:

../../_images/clone.webp

Cloning a Workspace

This example will be modifying an AlmaLinux 8 desktop Workspace. First change the Friendly Name to indicate that this is a special VPN-enabled container, e.g. AlmaLinux 8 - VPN.

For some configurations it may be necessary to populate the DNS server addresses of the container. This will be indicated by DNS resolution failures inside the container. Resolve it by modifying the Docker Run Config Override (JSON) and adding DNS server entries (this Workspace already had "hostname": "kasm" populated in this field):

{
  "dns": [
    "8.8.8.8",
    "8.8.4.4"
  ]
}

Here the public Google DNS servers are used, but these can be any DNS servers.

Next change the Docker Exec Config (JSON) to:

{
  "first_launch":{
    "user":"root",
    "privileged":true,
    "cmd":"bash -c 'ip route delete default && ip route add default via 172.20.0.2'"
  }
}
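The exec config above replaces the workspace's default route with one pointing at the sidecar's static IP. If the change took effect, `ip route` inside the session should show 172.20.0.2 as the default gateway. The snippet checks a sample route line for illustration:

```shell
# Expected default route inside the workspace after first_launch runs.
# Sample output line for illustration only:
ROUTE="default via 172.20.0.2 dev eth0"
if echo "$ROUTE" | grep -q "via 172.20.0.2"; then
  echo "default route points at the sidecar"
fi
# Inside a running session, verify with:
#   ip route
```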

Then select “Restrict Image to Docker Network” and choose the network created in the previous step (vpn-1).

Once finished the Workspace settings should look something like this:

../../_images/settings1.webp

Workspace Network Settings

Click on Save and hop back over to the Workspaces tab. The new AlmaLinux 8 - VPN workspace should be present:

../../_images/image.webp

User Dashboard Launch Button

The Workspace is now ready to deploy. The same process can be followed for any Kasm Workspace to pipe its network traffic through the VPN container.

Testing

Testing the VPN configuration in the container will vary depending on the provider or use case, but here are some tips. These examples will be using the AlmaLinux 8 - VPN Workspace configured earlier.

Finding the public IP (OpenVPN)

Click on Applications -> Terminal Emulator and enter:

curl icanhazip.com

Make sure the IP returned is not the current public IP of the Kasm Workspaces Agent the container is running on.
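This check can be scripted by comparing the address the workspace sees against the agent host's address. Placeholder documentation IPs are used below; in practice both values would come from `curl -s icanhazip.com` run on the host and inside the workspace:

```shell
#!/bin/sh
# HOST_IP: public IP of the Kasm agent host (placeholder value shown)
# WS_IP:   public IP as seen from inside the workspace (placeholder value shown)
HOST_IP="203.0.113.10"
WS_IP="198.51.100.25"

if [ "$WS_IP" = "$HOST_IP" ]; then
  echo "FAIL: workspace traffic is NOT going through the VPN"
else
  echo "OK: workspace egress IP differs from the agent host"
fi
```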

Option 3: Tailscale

Tailscale is a zero-configuration VPN solution that allows users to quickly connect to a network of remote computers by their Tailscale IP addresses. This example configures a sidecar container to route traffic to the machines on the Tailscale network. It is important to note that this example does not configure an exit node; it simply allows configured VPN containers to route traffic out to the Tailscale network.

Getting the Tailscale auth key

Most users can skip this step when connecting Workspaces containers to existing Tailscale infrastructure. Those who are new to Tailscale can sign up for an account here.

After signing up, a login screen will be presented:

../../_images/welcome.png

Add device Tailscale

Ignore the setup wizard and click the avatar on the top right of the screen and select “Billing”:

../../_images/billing.png

Billing Location

From here click on “Keys” under “Personal Settings”:

../../_images/keys.png

Keys Setting Location

Click on “Generate auth key…”:

../../_images/generate.png

Generate a Key

The Kasm team recommends setting “Reusable” and “Ephemeral”, but ultimately review the options and decide which settings are best:

../../_images/keysettings.png

Generate Settings

Copy the key created and move on to the next step.

Setting up a Tailscale Docker container

On the Kasm Workspaces host, build a custom Docker container to act as the VPN sidecar.

mkdir tailscale
cd tailscale
mkdir root

Here is the Dockerfile:

FROM alpine:3.15

# iptables is required by the MASQUERADE rules in entrypoint.sh
RUN \
  apk add --no-cache \
    bind-tools \
    iptables \
    tailscale

# add local files
COPY /root /

ENTRYPOINT [ "/entrypoint.sh" ]

And custom root/entrypoint.sh:

#! /bin/sh

# create tun device
if [ ! -c /dev/net/tun ]; then
  mkdir -p /dev/net
  mknod /dev/net/tun c 10 200
fi

# Enable devices MASQUERADE mode
iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE
iptables -t nat -A POSTROUTING -o tailscale+ -j MASQUERADE

# start vpn client
tailscaled

Make sure the entrypoint is executable:

chmod +x root/entrypoint.sh

The directory structure should look like this:

tailscale/
├─ root/
│  ├─ entrypoint.sh
├─ Dockerfile

Now build the container:

sudo docker build -t tailscaled .

On the Docker host running Kasm Workspaces (or the Agent Server if using a Multi-Server Deployment) create a custom Docker network:

sudo docker network create \
  --driver=bridge \
  --opt icc=true \
  --subnet=172.20.0.0/16 \
  vpn-1

Now spin up a Tailscale Docker container:

sudo docker run -d \
  --cap-add NET_ADMIN \
  --name tailscaled \
  --net vpn-1 \
  --ip 172.20.0.2 \
  --restart unless-stopped \
  tailscaled

Now login using the auth key:

sudo docker exec tailscaled tailscale up --authkey=<AUTH KEY FROM PREVIOUS STEP>
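Before moving on, it is worth confirming the sidecar actually joined the tailnet. `tailscale status` and `tailscale ip -4` are the standard checks; the grep below validates a sample status line for illustration:

```shell
# A connected node shows up in 'tailscale status' with its 100.x.y.z address.
# Sample status line for illustration only:
STATUS_LINE="100.86.224.182  tailscaled  user@  linux  -"
if echo "$STATUS_LINE" | grep -Eq '^100\.'; then
  echo "sidecar has a Tailscale IP"
fi
# Against the real sidecar:
#   sudo docker exec tailscaled tailscale status
#   sudo docker exec tailscaled tailscale ip -4
```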

Now navigate to the Machines page in the Tailscale admin console and ensure the machine is listed:

../../_images/machine.png

Machine View

Customizing Workspaces

Login to the Workspaces web interface and click on “Workspaces -> Workspaces” from the Admin tab:

../../_images/images.webp

Workspaces Location Admin

Now select the arrow button next to the Workspace to be modified to use this network and select “Clone”:

../../_images/clone.webp

Cloning a Workspace

This example will be modifying an AlmaLinux 8 desktop Workspace. First change the Friendly Name to indicate that this is a special VPN-enabled container, e.g. AlmaLinux 8 - VPN.

For some configurations it may be necessary to populate the DNS server addresses of the container. This will be indicated by DNS resolution failures inside the container. Resolve it by modifying the Docker Run Config Override (JSON) and adding DNS server entries (this Workspace already had "hostname": "kasm" populated in this field):

{
  "dns": [
    "8.8.8.8",
    "8.8.4.4"
  ]
}

Here the public Google DNS servers are used, but these can be any DNS servers.

Next change the Docker Exec Config (JSON) to:

{
  "first_launch":{
    "user":"root",
    "privileged":true,
    "cmd":"bash -c 'ip route delete default && ip route add default via 172.20.0.2'"
  }
}

Then select “Restrict Image to Docker Network” and choose the network created in the previous step (vpn-1).

Once finished the Workspace settings should look something like this:

../../_images/settings1.webp

Workspace Network Settings

Click on Save and hop back over to the Workspaces tab. The new AlmaLinux 8 - VPN workspace should be present:

../../_images/image.webp

User Dashboard Launch Button

The Workspace is now ready to deploy. The same process can be followed for any Kasm Workspace to pipe its network traffic through the VPN container.

Testing

Testing the VPN in the container will vary depending on the provider or use case, but here are some tips. These examples will be using the AlmaLinux 8 - VPN Workspace configured earlier.

Tailscale testing

An easy way to test is to run a ping from the Workspace to another device on the Tailscale network. At least one other device connected to the Tailscale network will be needed. The IPs of connected devices can be seen from the Tailscale dashboard. This example will be using 100.86.224.182.

This assumes the Workspace is a modified desktop Workspace, in particular the AlmaLinux 8 example from above. Once launched into the session, open a terminal from Applications -> Terminal Emulator:

../../_images/openterminal.png

Opening a Terminal

Now run a ping command:

ping 100.86.224.182

There should be replies printed to the screen:

../../_images/pings.png

Ping Response

Any container with this configuration will have access to other machines on the connected Tailscale network.