How to Bootstrap K3s Cluster Over SSH in Under 60 Seconds with K3sup
Why Bootstrap K3s with K3sup?
Setting up a Kubernetes cluster traditionally requires complex configuration management tools, extensive YAML files, and hours of troubleshooting. K3sup (pronounced "ketchup") eliminates this friction by automating K3s deployment over SSH in under 60 seconds.
Unlike heavyweight Kubernetes distributions, K3s is lightweight and production-ready, optimized for edge computing and resource-constrained environments. K3sup removes the remaining barrier: the installation process itself.
Perfect for:
- DevOps engineers provisioning multiple edge nodes
- Developers testing Kubernetes locally on VMs
- Teams deploying to bare metal or cloud VMs
- Raspberry Pi and ARM-based clusters
- Rapid prototyping without heavyweight orchestration overhead
Prerequisites and Setup
Before you begin, ensure you have:
- SSH access to your target Linux host (Ubuntu, Debian, CentOS, or any systemd-based distro)
- K3sup binary installed on your local machine
- kubectl (optional, but recommended for cluster management)
- A target VM or physical machine with at least 512MB RAM (1GB+ recommended)
Install K3sup on Your Local Machine
K3sup is cross-compiled for Linux, macOS, Windows, and Raspberry Pi:
# Linux/macOS (curl)
curl -sLS https://get.k3sup.dev | sh
# Or using Homebrew on macOS
brew install k3sup
# Verify installation
k3sup version
The binary is approximately 15MB and requires no dependencies beyond SSH access to your target hosts.
Step 1: Deploy the K3s Server (Control Plane)
The k3sup install command deploys a K3s server (Kubernetes control plane) to a remote host via SSH:
k3sup install \
  --host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --k3s-version=v1.28.6
Parameter breakdown:
| Parameter | Purpose | Example |
|-----------|---------|----------|
| --host | IP address or hostname of target | 192.168.1.100 |
| --user | SSH user account | ubuntu, ec2-user |
| --ssh-key | Path to private SSH key | ~/.ssh/id_rsa |
| --k3s-version | Specific K3s release | v1.28.6 (optional, uses latest if omitted) |
| --local-path | Where to save KUBECONFIG | ./kubeconfig.yaml |
| --cluster | Create HA cluster mode | Boolean flag |
K3sup will:
- Connect via SSH
- Download and execute K3s installer
- Configure systemd service
- Extract KUBECONFIG credentials
- Save kubeconfig to your local machine
The entire process completes in 30-45 seconds.
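If you would rather merge the new cluster's credentials straight into your existing kubeconfig instead of writing a separate file, k3sup supports this directly via `--merge`, `--local-path`, and `--context` (the IP and context name below are placeholders):

```shell
# Merge the new cluster into ~/.kube/config under a named context
k3sup install \
  --host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key="$HOME/.ssh/id_rsa" \
  --local-path="$HOME/.kube/config" \
  --merge \
  --context=k3s-lab
```

This avoids the manual kubeconfig juggling covered in the troubleshooting section later on.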
Step 2: Verify Cluster is Ready
Once installation completes, verify the control plane is operational:
# Export KUBECONFIG
export KUBECONFIG=$(pwd)/kubeconfig.yaml
# Check node status
kubectl get nodes
# Expected output:
# NAME STATUS ROLES AGE VERSION
# your-server Ready control-plane,master 10s v1.28.6
# Check system pods
kubectl get pods --all-namespaces
All system pods (coredns, metrics-server, local-path-provisioner) should reach Running status within a minute or so.
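Rather than polling by hand, kubectl can block until the node reports Ready (this assumes KUBECONFIG is exported as above):

```shell
# Block until every node reports Ready; fails after 120 seconds
kubectl wait --for=condition=Ready node --all --timeout=120s
```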
Step 3: Join Agent Nodes (Optional)
For multi-node clusters, join additional nodes as agents:
# Join an agent node; k3sup fetches the join token from the server over SSH
k3sup join \
  --host=192.168.1.101 \
  --server-host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa
Repeat for each additional node. K3sup automatically configures the agent to connect to your server.
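For more than one or two agents, a small loop keeps the joins consistent; the IP list below is a placeholder for your own inventory:

```shell
# Hypothetical agent inventory; replace with your own IPs
SERVER=192.168.1.100
AGENTS="192.168.1.101 192.168.1.102"

for ip in $AGENTS; do
  k3sup join \
    --host="$ip" \
    --server-host="$SERVER" \
    --user=ubuntu \
    --ssh-key="$HOME/.ssh/id_rsa"
done
```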
# Verify agents joined
kubectl get nodes
# Output:
# NAME STATUS ROLES AGE VERSION
# server Ready master 2m v1.28.6
# agent-1 Ready <none> 45s v1.28.6
# agent-2 Ready <none> 40s v1.28.6
Advanced: High Availability Setup
Multi-Master with External SQL Database
For production deployments, create an HA cluster with external database:
# Deploy first server with external datastore
k3sup install \
  --host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --k3s-extra-args="--datastore-endpoint=postgres://user:pass@db.example.com:5432/k3s --datastore-cafile=/etc/ssl/certs/ca.crt"
# Deploy additional masters
k3sup install \
  --host=192.168.1.101 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --k3s-extra-args="--datastore-endpoint=postgres://user:pass@db.example.com:5432/k3s"
This removes etcd coupling and allows horizontal control plane scaling.
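Since every server must point at the same datastore, keeping the connection string in a single variable prevents the install commands from drifting apart (the credentials and hosts below are placeholders):

```shell
# One shared connection string for every control-plane install
DATASTORE="postgres://user:pass@db.example.com:5432/k3s"

for ip in 192.168.1.100 192.168.1.101; do
  k3sup install \
    --host="$ip" \
    --user=ubuntu \
    --ssh-key="$HOME/.ssh/id_rsa" \
    --k3s-extra-args="--datastore-endpoint=$DATASTORE"
done
```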
Embedded etcd Cluster
For smaller deployments, use embedded etcd with server-to-server networking:
# Deploy primary master
k3sup install \
  --host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --cluster
# Join a second server; it joins the embedded etcd cluster automatically
k3sup join \
  --host=192.168.1.101 \
  --server-host=192.168.1.100 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --server
Embedded etcd replicates across all masters automatically.
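With embedded etcd, backups are taken on a server node itself via the built-in snapshot command (the path shown is the K3s default):

```shell
# On any master: take an on-demand etcd snapshot
sudo k3s etcd-snapshot save

# Snapshots are written under the K3s data directory by default
sudo ls /var/lib/rancher/k3s/server/db/snapshots/
```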
Raspberry Pi Deployment
K3sup is optimized for ARM-based single-board computers:
# Install on Raspberry Pi 4 (Ubuntu 22.04)
k3sup install \
  --host=192.168.1.50 \
  --user=ubuntu \
  --ssh-key=$HOME/.ssh/id_rsa \
  --k3s-version=v1.28.6
# The binary auto-detects ARM architecture and uses appropriate K3s build
Minimum hardware: 1GB RAM (1.5GB recommended). Works on Pi 2, 3, and 4.
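One Pi-specific gotcha worth knowing: on Raspberry Pi OS (unlike the Ubuntu image used above), the memory cgroup is disabled by default, and K3s will not start until it is enabled and the board rebooted:

```shell
# Raspberry Pi OS only: append cgroup flags to the kernel command line
sudo sed -i '$ s/$/ cgroup_memory=1 cgroup_enable=memory/' /boot/cmdline.txt
sudo reboot
```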
Troubleshooting Common Issues
SSH Connection Timeout
# Verify SSH connectivity first
ssh -i ~/.ssh/id_rsa -v ubuntu@192.168.1.100 'echo Connected'
# If the connection fails, check:
# 1. Security group/firewall allows port 22
# 2. SSH key permissions: chmod 600 ~/.ssh/id_rsa
# 3. User has sudo access
K3s Service Fails to Start
# SSH into server and check systemd status
ssh -i ~/.ssh/id_rsa ubuntu@192.168.1.100
sudo systemctl status k3s
sudo journalctl -u k3s -n 50
# Common causes:
# - Insufficient disk space: df -h
# - SELinux blocking: sudo setenforce 0 (testing only)
# - Port 6443 in use: sudo lsof -i :6443
KUBECONFIG Merge Issues
# Backup existing kubeconfig
cp ~/.kube/config ~/.kube/config.backup
# Merge the new cluster into your existing config
KUBECONFIG=~/.kube/config:$(pwd)/kubeconfig.yaml \
  kubectl config view --flatten > ~/.kube/config.merged
mv ~/.kube/config.merged ~/.kube/config
# Switch contexts (use the context name from your merged config)
kubectl config use-context k3s-default
kubectl get nodes
K3sup Pro: IaC and GitOps at Scale
For teams managing multiple clusters, K3sup Pro adds:
- plan/apply commands for declarative infrastructure
- Parallel deployment across dozens of nodes
- Git-based versioning of cluster configurations
- uninstall command for rapid teardown
Production users with complex multi-cluster needs should evaluate K3sup Pro for automated drift detection and centralized management.
Next Steps
- Deploy an application: kubectl create deployment nginx --image=nginx
- Expose with Traefik (built into K3s): kubectl expose deployment nginx --port=80 --type=LoadBalancer
- Install cert-manager for HTTPS: kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.2/cert-manager.yaml
- Save a kubeconfig backup: k3sup get-kubeconfig --host=192.168.1.100 > backup-kubeconfig.yaml
Conclusion
K3sup eliminates Kubernetes bootstrapping complexity. From zero to functional cluster in 60 seconds, K3sup enables teams to focus on containerized workloads rather than infrastructure setup. Whether you're running edge clusters, building CI/CD infrastructure, or experimenting locally, K3sup's SSH-based approach requires no agents, no container registry, and no complex orchestration tooling.
Recommended Tools
- DigitalOcean: Cloud hosting built for developers ($200 free credit for new users)
- Vultr: High-performance cloud compute; deploy in 60 seconds
- Akamai Cloud (Linode): Developer-friendly cloud infrastructure