Headscale - Self-Hosted Tailscale Control Server
Status: 🟢 Live
Host: Calypso (100.103.48.78)
Stack File: hosts/synology/calypso/headscale.yaml
Public URL: https://headscale.vish.gg:8443
Admin UI: https://headscale.vish.gg:8443/admin (Headplane, Authentik SSO)
Ports: 8085 (API), 3002 (Headplane UI), 9099 (Metrics), 50443 (gRPC)
Overview
Headscale is an open-source, self-hosted implementation of the Tailscale control server. It allows you to run your own Tailscale coordination server, giving you full control over your mesh VPN network.
Why Self-Host?
| Feature | Tailscale Cloud | Headscale |
|---|---|---|
| Control | Tailscale manages | You manage |
| Data Privacy | Keys on their servers | Keys on your servers |
| Cost | Free tier limits | Unlimited devices |
| OIDC Auth | Limited | Full control |
| Network Isolation | Shared infra | Your infra only |
Recommended Host: Calypso
Why Calypso?
| Factor | Rationale |
|---|---|
| Authentik Integration | OIDC provider already running for SSO |
| Nginx Proxy Manager | HTTPS/SSL termination already configured |
| Infrastructure Role | Hosts auth, git, networking services |
| Stability | Synology NAS = 24/7 uptime |
| Resources | Low footprint fits alongside 52 containers |
Alternative Hosts
- Homelab VM: Viable, but separates auth from control plane
- Concord NUC: Running Home Assistant, keep it focused
- Atlantis: Primary media server, avoid network-critical services
Architecture
Internet
│
▼
┌─────────────────┐
│ NPM (Calypso) │ ← SSL termination
│ headscale.vish.gg
└────────┬────────┘
│ :8085
▼
┌─────────────────┐
│ Headscale │ ← Control plane
│ (container) │
└────────┬────────┘
│ OIDC
▼
┌─────────────────┐
│ Authentik │ ← User auth
│ sso.vish.gg │
└─────────────────┘
Network Flow
- Tailscale clients connect to `headscale.vish.gg` (HTTPS)
- NPM terminates SSL, forwards to the Headscale container
- Users authenticate via Authentik OIDC
- Headscale coordinates the mesh network
- Direct connections established between peers (via DERP relays if needed)
Services
| Service | Container | Port | Purpose |
|---|---|---|---|
| Headscale | headscale | 8085→8080 | Control server API |
| Headscale | headscale | 50443 | gRPC API |
| Headscale | headscale | 9099→9090 | Prometheus metrics |
| Headplane | headplane | 3002→3000 | Web admin UI (replaces headscale-ui) |
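The port mappings above correspond to compose service definitions along these lines — a minimal sketch only (image tags, volume names, and the network name are assumptions; the authoritative definition is hosts/synology/calypso/headscale.yaml):

```yaml
# Sketch only — see hosts/synology/calypso/headscale.yaml for the real stack
services:
  headscale:
    image: headscale/headscale:latest   # pin a version in the real stack
    command: serve
    ports:
      - "8085:8080"    # control server API
      - "50443:50443"  # gRPC API
      - "9099:9090"    # Prometheus metrics
    volumes:
      - headscale-data:/var/lib/headscale
      # config.yaml is mounted at /etc/headscale in the real stack
    networks: [authentik-net]           # must match `docker network ls`

  headplane:
    image: ghcr.io/tale/headplane:latest   # admin UI
    ports:
      - "3002:3000"
    networks: [authentik-net]

volumes:
  headscale-data:

networks:
  authentik-net:
    external: true
```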
Pre-Deployment Setup
Step 1: Create Authentik Application
In Authentik at https://sso.vish.gg:
1.1 Create OAuth2/OIDC Provider
- Go to Applications → Providers → Create
- Select OAuth2/OpenID Provider
- Configure:
| Setting | Value |
|---|---|
| Name | Headscale |
| Authorization flow | default-provider-authorization-implicit-consent |
| Client type | Confidential |
| Client ID | (auto-generated, copy this) |
| Client Secret | (auto-generated, copy this) |
| Redirect URIs | https://headscale.vish.gg/oidc/callback |
| Signing Key | authentik Self-signed Certificate |
- Under Advanced protocol settings:
  - Scopes: `openid`, `profile`, `email`
  - Subject mode: `Based on the User's Email`
1.2 Create Application
- Go to Applications → Applications → Create
- Configure:
| Setting | Value |
|---|---|
| Name | Headscale |
| Slug | headscale |
| Provider | Select the provider you created |
| Launch URL | https://headscale.vish.gg |
1.3 Copy Credentials
Save these values to update the stack:
- Client ID: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
- Client Secret: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
Step 2: Configure NPM Proxy Hosts
In Nginx Proxy Manager at http://calypso.vish.local:81:
2.1 Headscale API Proxy
| Setting | Value |
|---|---|
| Domain Names | headscale.vish.gg |
| Scheme | http |
| Forward Hostname/IP | headscale |
| Forward Port | 8080 |
| Block Common Exploits | ✅ |
| Websockets Support | ✅ |
SSL Tab:
- SSL Certificate: Request new Let's Encrypt
- Force SSL: ✅
- HTTP/2 Support: ✅
2.2 Headplane UI Proxy (via /admin path on main domain)
The Headplane UI is served at https://headscale.vish.gg:8443/admin via NPM path routing.
| Setting | Value |
|---|---|
| Domain Names | headscale.vish.gg |
| Scheme | http |
| Forward Hostname/IP | headplane |
| Forward Port | 3000 |
| Custom Location | /admin |
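Under the hood, NPM renders the custom location as an nginx `location` block roughly like the following — illustrative only (NPM generates and manages this itself; shown to clarify what the path routing does):

```nginx
# What the /admin custom location amounts to (NPM manages this for you)
location /admin {
    proxy_pass http://headplane:3000;
    proxy_set_header Host $host;
    # websocket support for the UI
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```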
Step 3: Verify Authentik Network
# SSH to Calypso and check the network name
ssh admin@calypso.vish.local
docker network ls | grep authentik
If the network name differs from `authentik-net`, update the stack file.
Step 4: Update Stack Configuration
Edit hosts/synology/calypso/headscale.yaml:
oidc:
client_id: "REDACTED_CLIENT_ID"
client_secret: "REDACTED_CLIENT_SECRET"
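In context, the surrounding `oidc` block in config.yaml looks roughly like this — a sketch only (the issuer path assumes the `headscale` application slug from Step 1.2, and the scope list mirrors the provider settings; check the actual stack file):

```yaml
# Sketch of the full oidc block (verify against the real stack file)
oidc:
  issuer: "https://sso.vish.gg/application/o/headscale/"   # Authentik provider; slug assumed
  client_id: "REDACTED_CLIENT_ID"
  client_secret: "REDACTED_CLIENT_SECRET"
  scope: ["openid", "profile", "email"]
```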
Deployment
Option A: GitOps via Portainer
# 1. Commit the stack file
cd /path/to/homelab
git add hosts/synology/calypso/headscale.yaml
git commit -m "feat(headscale): Add self-hosted Tailscale control server"
git push origin main
# 2. Create GitOps stack via API
curl -X POST \
-H "X-API-Key: REDACTED_API_KEY" \
-H "Content-Type: application/json" \
"http://vishinator.synology.me:10000/api/stacks/create/standalone/repository?endpointId=443397" \
-d '{
"name": "headscale-stack",
"repositoryURL": "https://git.vish.gg/Vish/homelab.git",
"repositoryReferenceName": "refs/heads/main",
"composeFile": "hosts/synology/calypso/headscale.yaml",
"repositoryAuthentication": true,
"repositoryUsername": "",
"repositoryPassword": "YOUR_GIT_TOKEN",
"autoUpdate": {
"interval": "5m",
"forceUpdate": false,
"forcePullImage": false
}
}'
Option B: Manual via Portainer UI
- Go to Portainer → Stacks → Add stack
- Select "Repository"
- Configure:
  - Repository URL: `https://git.vish.gg/Vish/homelab.git`
  - Reference: `refs/heads/main`
  - Compose path: `hosts/synology/calypso/headscale.yaml`
  - Authentication: Enable, enter Git token
- Enable GitOps updates with 5m polling
- Deploy
Post-Deployment Verification
1. Check Container Health
# Via Portainer API
curl -s -H "X-API-Key: TOKEN" \
"http://vishinator.synology.me:10000/api/endpoints/443397/docker/containers/json" | \
jq '.[] | select(.Names[0] | contains("headscale")) | {name: .Names[0], state: .State}'
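The jq filter can be sanity-checked offline against a sample of the JSON shape the Docker API returns (sample trimmed and illustrative):

```shell
# Trimmed, illustrative sample of the container-list JSON
sample='[{"Names":["/headscale"],"State":"running"},{"Names":["/headplane"],"State":"running"}]'

# Same filter as above: pick the headscale container and report its state
state=$(echo "$sample" | jq -r '.[] | select(.Names[0] | contains("headscale")) | .State')
echo "headscale: ${state}"
```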
2. Test API Endpoint
curl -s https://headscale.vish.gg/health
# Should return: {"status":"pass"}
3. Check Metrics
curl -s http://calypso.vish.local:9099/metrics | head -20
Client Setup
Linux/macOS
# Install Tailscale client
curl -fsSL https://tailscale.com/install.sh | sh
# Connect to your Headscale server
sudo tailscale up --login-server=https://headscale.vish.gg
# This will open a browser for OIDC authentication
# After auth, the device will be registered
With Pre-Auth Key
# Generate key in Headscale first (see Admin Commands below)
sudo tailscale up --login-server=https://headscale.vish.gg --authkey=YOUR_PREAUTH_KEY
iOS/Android
- Install Tailscale app from App Store/Play Store
- Open app → Use a different server
- Enter:
https://headscale.vish.gg - Authenticate via Authentik
Verify Connection
tailscale status
# Should show your device and any other connected peers
tailscale ip
# Shows your Tailscale IP (100.64.x.x)
Admin Commands
Execute commands inside the Headscale container on Calypso:
# SSH to Calypso
ssh -p 62000 Vish@100.103.48.78
# Enter container (full path required on Synology)
sudo /usr/local/bin/docker exec headscale headscale <command>
Note: Headscale v0.28+ uses numeric user IDs. Get the ID with `headscale users list` first, then pass `--user <ID>` to other commands.
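Since most commands want the numeric ID, the lookup can be scripted by parsing JSON output (headscale supports a global `-o json` flag). The sample below stands in for real output — the field names are assumptions to verify against your version:

```shell
# Illustrative sample of `headscale users list -o json` output (shape assumed)
users='[{"id":"1","name":"vish"},{"id":"2","name":"guest"}]'

# Resolve the name "vish" to its numeric ID for use with --user
uid=$(echo "$users" | jq -r '.[] | select(.name == "vish") | .id')
echo "use: --user $uid"
```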
User Management
# List users (shows numeric IDs)
headscale users list
# Create a user
headscale users create myuser
# Rename a user
headscale users rename --identifier <id> <newname>
# Delete a user
headscale users destroy --identifier <id>
Node Management
# List all nodes
headscale nodes list
# Register a node manually
headscale nodes register --user <user-id> --key nodekey:xxxxx
# Delete a node
headscale nodes delete --identifier <node-id>
# Expire a node (force re-auth)
headscale nodes expire --identifier <node-id>
# Move node to different user
headscale nodes move --identifier <node-id> --user <user-id>
Pre-Auth Keys
# Create a pre-auth key (single use)
headscale preauthkeys create --user <user-id>
# Create reusable key (expires in 24h)
headscale preauthkeys create --user <user-id> --reusable --expiration 24h
# List keys
headscale preauthkeys list --user <user-id>
API Keys
# Create API key for external integrations
headscale apikeys create --expiration 90d
# List API keys
headscale apikeys list
Route & Exit Node Management
How it works: enabling an exit node or subnet route is a two-step process.
1. The node must advertise the route via `tailscale set --advertise-exit-node` or `--advertise-routes`.
2. The server (Headscale) must approve the advertised route. Without approval, the route is visible but not active.
All commands below are run inside the Headscale container on Calypso:
ssh -p 62000 Vish@100.103.48.78 "sudo /usr/local/bin/docker exec headscale headscale <command>"
List All Routes
Shows every node that is advertising routes, what is approved, and what is actively serving:
headscale nodes list-routes
Output columns:
- Approved: routes the server has approved
- Available: routes the node is currently advertising
- Serving (Primary): routes actively being used
Approve an Exit Node
After a node runs tailscale set --advertise-exit-node, approve it server-side:
# Find the node ID first
headscale nodes list
# Approve exit node routes (IPv4 + IPv6)
headscale nodes approve-routes --identifier <node-id> --routes '0.0.0.0/0,::/0'
If the node also advertises a subnet route you want to keep approved alongside exit node:
# Example: calypso also advertises 192.168.0.0/24
headscale nodes approve-routes --identifier 12 --routes '0.0.0.0/0,::/0,192.168.0.0/24'
Important: `approve-routes` replaces the full approved route list for that node. Always include all routes you want active (subnet routes + exit routes) in a single command.
Approve a Subnet Route Only
For nodes that advertise a local subnet (e.g. a router or NAS providing LAN access) but are not exit nodes:
# Example: approve 192.168.0.0/24 for atlantis
headscale nodes approve-routes --identifier 11 --routes '192.168.0.0/24'
Revoke / Remove Routes
To remove approval for a route, re-run approve-routes omitting that route:
# Example: remove exit node approval from a node, keep subnet only
headscale nodes approve-routes --identifier <node-id> --routes '192.168.0.0/24'
# Remove all approved routes from a node
headscale nodes approve-routes --identifier <node-id> --routes ''
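Because `approve-routes` replaces the whole list, it helps to assemble every route you want active into one comma-separated string before running the command (values below are the examples used throughout this section; node ID 12 is calypso):

```shell
# Build the complete approved-route set in one string
exit_routes='0.0.0.0/0,::/0'
subnet='192.168.0.0/24'
all_routes="${exit_routes},${subnet}"

# The command to run inside the container
echo "headscale nodes approve-routes --identifier 12 --routes '${all_routes}'"
```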
Current Exit Nodes (March 2026)
The following nodes are approved as exit nodes:
| Node | ID | Exit Node Routes | Subnet Routes |
|---|---|---|---|
| vish-concord-nuc | 5 | 0.0.0.0/0, ::/0 | 192.168.68.0/22 |
| setillo | 6 | 0.0.0.0/0, ::/0 | 192.168.69.0/24 |
| truenas-scale | 8 | 0.0.0.0/0, ::/0 | — |
| atlantis | 11 | 0.0.0.0/0, ::/0 | — |
| calypso | 12 | 0.0.0.0/0, ::/0 | 192.168.0.0/24 |
| gl-mt3000 | 16 | 0.0.0.0/0, ::/0 | 192.168.12.0/24 |
| gl-be3600 | 17 | 0.0.0.0/0, ::/0 | 192.168.8.0/24 |
| homeassistant | 19 | 0.0.0.0/0, ::/0 | — |
Adding a New Node
Step 1: Install Tailscale on the new device
Linux:
curl -fsSL https://tailscale.com/install.sh | sh
Synology NAS: Install the Tailscale package from Package Center (or manually via .spk).
TrueNAS Scale: Available as an app in the TrueNAS app catalog.
Home Assistant: Install via the HA Add-on Store (search "Tailscale").
OpenWrt / GL.iNet routers: Install tailscale via opkg or the GL.iNet admin panel.
Step 2: Generate a pre-auth key (recommended for non-interactive installs)
# Get the user ID first
headscale users list
# Create a reusable pre-auth key (24h expiry)
headscale preauthkeys create --user <user-id> --reusable --expiration 24h
Step 3: Connect the node
Interactive (browser-based OIDC auth):
sudo tailscale up --login-server=https://headscale.vish.gg
# Follow the printed URL to authenticate via Authentik
Non-interactive (pre-auth key):
sudo tailscale up --login-server=https://headscale.vish.gg --authkey=<preauth-key>
With exit node advertising enabled from the start:
sudo tailscale up \
--login-server=https://headscale.vish.gg \
--authkey=<preauth-key> \
--advertise-exit-node
With subnet route advertising:
sudo tailscale up \
--login-server=https://headscale.vish.gg \
--authkey=<preauth-key> \
--advertise-routes=192.168.1.0/24
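On Linux, exit nodes and subnet routers also need IP forwarding enabled or the advertised routes won't carry traffic. Per the Tailscale docs, put the settings in a sysctl file such as `/etc/sysctl.d/99-tailscale.conf` and apply with `sudo sysctl -p /etc/sysctl.d/99-tailscale.conf`:

```
# /etc/sysctl.d/99-tailscale.conf — required for --advertise-exit-node / --advertise-routes
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```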
Step 4: Verify the node registered
headscale nodes list
# New node should appear with an assigned 100.x.x.x IP
Step 5: Approve routes (if needed)
If the node advertised exit node or subnet routes:
headscale nodes list-routes
# Find the node ID and approve as needed
headscale nodes approve-routes --identifier <node-id> --routes '0.0.0.0/0,::/0'
Step 6: (Optional) Rename the node
Headscale uses the system hostname by default. To rename:
headscale nodes rename --identifier <node-id> <new-name>
Configuration Reference
Key Settings in config.yaml
| Setting | Value | Description |
|---|---|---|
| `server_url` | `https://headscale.vish.gg:8443` | Public URL for clients (port 8443 required) |
| `listen_addr` | `0.0.0.0:8080` | Internal listen address |
| `prefixes.v4` | `100.64.0.0/10` | IPv4 CGNAT range |
| `prefixes.v6` | `fd7a:115c:a1e0::/48` | IPv6 ULA range |
| `dns.magic_dns` | `true` | Enable MagicDNS |
| `dns.base_domain` | `tail.vish.gg` | DNS suffix for devices |
| `database.type` | `sqlite` | Database backend |
| `oidc.issuer` | `https://sso.vish.gg/...` | Authentik OIDC endpoint |
DERP Configuration
Using Tailscale's public DERP servers (recommended):
derp:
urls:
- https://controlplane.tailscale.com/derpmap/default
auto_update_enabled: true
For self-hosted DERP, see: https://tailscale.com/kb/1118/custom-derp-servers
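Headscale can alternatively run its own embedded DERP relay. A sketch of the relevant config keys — key names should be verified against the config reference for the Headscale version you run:

```yaml
# Sketch: embedded DERP relay in config.yaml (verify keys for your Headscale version)
derp:
  server:
    enabled: true
    region_id: 999
    region_code: "calypso"
    region_name: "Calypso Embedded DERP"
    stun_listen_addr: "0.0.0.0:3478"   # STUN must be reachable over UDP
```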
Monitoring Integration
Prometheus Scrape Config
Add to your Prometheus configuration:
scrape_configs:
- job_name: 'headscale'
static_configs:
- targets: ['calypso.vish.local:9099']
labels:
instance: 'headscale'
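With the scrape job in place, a minimal alerting rule can flag the control plane going down — a sketch, with threshold and labels as placeholders to adapt:

```yaml
# Sketch: alert when the headscale scrape target stops responding
groups:
  - name: headscale
    rules:
      - alert: HeadscaleDown
        expr: up{job="headscale"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Headscale metrics endpoint unreachable for 5m"
```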
Key Metrics
| Metric | Description |
|---|---|
| `headscale_connected_peers` | Number of connected peers |
| `headscale_registered_machines` | Total registered machines |
| `headscale_online_machines` | Currently online machines |
Troubleshooting
Client Can't Connect
- Check DNS resolution: `nslookup headscale.vish.gg`
- Check SSL certificate: `curl -v https://headscale.vish.gg/health`
- Check NPM logs: Portainer → Calypso → nginx-proxy-manager → Logs
- Check Headscale logs: `docker logs headscale`
OIDC Authentication Fails
- Verify Authentik is reachable: `curl https://sso.vish.gg/.well-known/openid-configuration`
- Check redirect URI: Must exactly match in Authentik provider
- Check client credentials: Ensure ID/secret are correct in config
- Check Headscale logs: `docker logs headscale | grep oidc`
Nodes Not Connecting to Each Other
- Check DERP connectivity: Nodes may be relaying through DERP
- Check firewall: Ensure UDP 41641 is open for direct connections
- Check node status: `tailscale status` on each node
Synology NAS: Userspace Networking Limitation
Synology Tailscale runs in userspace networking mode (NetfilterMode: 0) by default. This means:
- No `tailscale0` tun device is created
- No kernel routing table 52 entries exist
- `tailscale ping` works (uses the daemon directly), but TCP traffic to Tailscale IPs fails
- Other services on the NAS cannot reach Tailscale IPs of remote peers
Workaround: Use LAN IPs instead of Tailscale IPs for service-to-service communication when both hosts are on the same network. This is why all Atlantis arr services use 192.168.0.210 (homelab-vm LAN IP) for Signal notifications instead of 100.67.40.126 (Tailscale IP).
Why not tailscale configure-host? Running tailscale configure-host + restarting the Tailscale service temporarily enables kernel networking, but tailscaled becomes unstable and crashes repeatedly (every few minutes). The boot-up DSM task "Tailscale enable outbound" runs configure-host on boot, but the effect does not persist reliably. This is a known limitation of the Synology Tailscale package.
SSL certificate gotcha: When connecting from Synology to headscale.vish.gg, split-horizon DNS resolves to Calypso's LAN IP (192.168.0.250). Port 443 there serves the Synology default certificate (CN=synology), not the headscale cert. Use https://headscale.vish.gg:8443 as the login-server URL — port 8443 serves the correct headscale certificate.
# Check if Tailscale is in userspace mode on a Synology NAS
tailscale debug prefs | grep NetfilterMode
# NetfilterMode: 0 = userspace (no tun device, no TCP routing)
# NetfilterMode: 1 = kernel (tun device + routing, but unstable on Synology)
# Check if tailscale0 exists
ip link show tailscale0
Container Won't Start
- Check config syntax: YAML formatting errors
- Check network exists: `docker network ls | grep authentik`
- Check volume permissions: Synology may have permission issues
Backup
Data to Backup
| Path | Content |
|---|---|
| `headscale-data:/var/lib/headscale/db.sqlite` | User/node database |
| `headscale-data:/var/lib/headscale/private.key` | Server private key |
| `headscale-data:/var/lib/headscale/noise_private.key` | Noise protocol key |
Backup Command
# On Calypso
docker run --rm -v headscale-data:/data -v /volume1/backups:/backup \
alpine tar czf /backup/headscale-backup-$(date +%Y%m%d).tar.gz /data
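Note the archive stores paths relative to `/` (tar strips the leading slash), so a restore extracts with `-C /` inside a container mounting the same volume. The round-trip below demonstrates the layout locally with throwaway files — illustrative only, not the real volume:

```shell
# Simulate the backup layout with throwaway files
workdir=$(mktemp -d)
mkdir -p "$workdir/data"
echo "fake-db" > "$workdir/data/db.sqlite"

# Archive the way the alpine container does: paths stored as data/...
tar -C "$workdir" -czf "$workdir/headscale-backup.tar.gz" data

# Restoring into a target root recreates data/db.sqlite under it
mkdir -p "$workdir/restore"
tar -C "$workdir/restore" -xzf "$workdir/headscale-backup.tar.gz"
cat "$workdir/restore/data/db.sqlite"   # prints: fake-db
```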
Migration from Tailscale
If migrating existing devices from Tailscale cloud:
- On each device: `sudo tailscale logout`
- Connect to Headscale: `sudo tailscale up --login-server=https://headscale.vish.gg`
- Re-establish routes: Configure exit nodes and subnet routes as needed
Note: You cannot migrate Tailscale cloud configuration directly. ACLs, routes, and settings must be reconfigured.
Related Documentation
External Resources
- Headscale Documentation
- Headscale GitHub
- Headplane GitHub (Admin UI — replaces headscale-ui)
- Tailscale Client Docs
Last updated: 2026-03-29 (documented Synology userspace networking limitation and SSL cert gotcha; switched Signal notifications to LAN IP)