Headscale - Self-Hosted Tailscale Control Server
Status: 🟡 Planned (Not yet deployed)
Host: Calypso (recommended)
Stack File: hosts/synology/calypso/headscale.yaml
Ports: 8085 (API), 8086 (UI), 9099 (Metrics), 50443 (gRPC)
Overview
Headscale is an open-source, self-hosted implementation of the Tailscale control server. It allows you to run your own Tailscale coordination server, giving you full control over your mesh VPN network.
Why Self-Host?
| Feature | Tailscale Cloud | Headscale |
|---|---|---|
| Control | Tailscale manages | You manage |
| Data Privacy | Keys on their servers | Keys on your servers |
| Cost | Free tier limits | Unlimited devices |
| OIDC Auth | Limited | Full control |
| Network Isolation | Shared infra | Your infra only |
Recommended Host: Calypso
Why Calypso?
| Factor | Rationale |
|---|---|
| Authentik Integration | OIDC provider already running for SSO |
| Nginx Proxy Manager | HTTPS/SSL termination already configured |
| Infrastructure Role | Hosts auth, git, networking services |
| Stability | Synology NAS = 24/7 uptime |
| Resources | Low footprint fits alongside 52 containers |
Alternative Hosts
- Homelab VM: Viable, but separates auth from control plane
- Concord NUC: Running Home Assistant, keep it focused
- Atlantis: Primary media server, avoid network-critical services
Architecture
Internet
│
▼
┌─────────────────┐
│ NPM (Calypso) │ ← SSL termination
│ headscale.vish.gg
└────────┬────────┘
│ :8085
▼
┌─────────────────┐
│ Headscale │ ← Control plane
│ (container) │
└────────┬────────┘
│ OIDC
▼
┌─────────────────┐
│ Authentik │ ← User auth
│ sso.vish.gg │
└─────────────────┘
Network Flow
- Tailscale clients connect to `headscale.vish.gg` (HTTPS)
- NPM terminates SSL and forwards to the Headscale container
- Users authenticate via Authentik OIDC
- Headscale coordinates the mesh network
- Direct connections are established between peers (via DERP relays if needed)
Services
| Service | Container | Port | Purpose |
|---|---|---|---|
| Headscale | `headscale` | 8085→8080 | Control server API |
| Headscale | `headscale` | 50443 | gRPC API |
| Headscale | `headscale` | 9099→9090 | Prometheus metrics |
| Headscale UI | `headscale-ui` | 8086→8080 | Web management interface |
Pre-Deployment Setup
Step 1: Create Authentik Application
In Authentik at https://sso.vish.gg:
1.1 Create OAuth2/OIDC Provider
- Go to Applications → Providers → Create
- Select OAuth2/OpenID Provider
- Configure:
| Setting | Value |
|---|---|
| Name | Headscale |
| Authorization flow | default-provider-authorization-implicit-consent |
| Client type | Confidential |
| Client ID | (auto-generated, copy this) |
| Client Secret | (auto-generated, copy this) |
| Redirect URIs | https://headscale.vish.gg/oidc/callback |
| Signing Key | authentik Self-signed Certificate |
- Under Advanced protocol settings:
  - Scopes: `openid,profile,email`
  - Subject mode: `Based on the User's Email`
1.2 Create Application
- Go to Applications → Applications → Create
- Configure:
| Setting | Value |
|---|---|
| Name | Headscale |
| Slug | headscale |
| Provider | Select the provider you created |
| Launch URL | https://headscale.vish.gg |
1.3 Copy Credentials
Save these values to update the stack:
- Client ID: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
- Client Secret: `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx`
Step 2: Configure NPM Proxy Hosts
In Nginx Proxy Manager at http://calypso.vish.local:81:
2.1 Headscale API Proxy
| Setting | Value |
|---|---|
| Domain Names | headscale.vish.gg |
| Scheme | http |
| Forward Hostname/IP | headscale |
| Forward Port | 8080 |
| Block Common Exploits | ✅ |
| Websockets Support | ✅ |
SSL Tab:
- SSL Certificate: Request new Let's Encrypt
- Force SSL: ✅
- HTTP/2 Support: ✅
2.2 Headscale UI Proxy (Optional)
| Setting | Value |
|---|---|
| Domain Names | headscale-ui.vish.gg |
| Scheme | http |
| Forward Hostname/IP | headscale-ui |
| Forward Port | 8080 |
Step 3: Verify Authentik Network
# SSH to Calypso and check the network name
ssh admin@calypso.vish.local
docker network ls | grep authentik
If the network name differs from `authentik-net`, update the stack file.
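A minimal guard for this step: check the listing for the expected network and print the fix if it is missing. `authentik-net` is the name the stack file assumes; the mocked listing below stands in for the real `docker network ls --format '{{.Name}}'` output so the sketch runs anywhere.

```sh
# Stand-in for: networks="$(docker network ls --format '{{.Name}}')"
networks="bridge
host
authentik-net"

if printf '%s\n' "$networks" | grep -qx 'authentik-net'; then
  echo "authentik-net exists" | tee net_status.txt
else
  echo "authentik-net missing - run: docker network create authentik-net" | tee net_status.txt
fi
```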
Step 4: Update Stack Configuration
Edit hosts/synology/calypso/headscale.yaml:
oidc:
client_id: "REDACTED_CLIENT_ID"
client_secret: "REDACTED_CLIENT_SECRET"
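Rather than hand-editing the YAML, the snippet above can be stamped in from environment variables. This is a sketch under assumptions: the variable names and the output filename `oidc-snippet.yaml` are illustrative, not part of the stack.

```sh
# Illustrative defaults; export real values on Calypso before running
HEADSCALE_OIDC_CLIENT_ID="${HEADSCALE_OIDC_CLIENT_ID:-example-client-id}"
HEADSCALE_OIDC_CLIENT_SECRET="${HEADSCALE_OIDC_CLIENT_SECRET:-example-secret}"

# Render the oidc block with the credentials substituted in
cat > oidc-snippet.yaml <<EOF
oidc:
  client_id: "${HEADSCALE_OIDC_CLIENT_ID}"
  client_secret: "${HEADSCALE_OIDC_CLIENT_SECRET}"
EOF

grep 'client_id' oidc-snippet.yaml
```

This keeps the secret out of the committed stack file until deploy time.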
Deployment
Option A: GitOps via Portainer
# 1. Commit the stack file
cd /path/to/homelab
git add hosts/synology/calypso/headscale.yaml
git commit -m "feat(headscale): Add self-hosted Tailscale control server"
git push origin main
# 2. Create GitOps stack via API
curl -X POST \
-H "X-API-Key: "REDACTED_API_KEY" \
-H "Content-Type: application/json" \
"http://vishinator.synology.me:10000/api/stacks/create/standalone/repository?endpointId=443397" \
-d '{
"name": "headscale-stack",
"repositoryURL": "https://git.vish.gg/Vish/homelab.git",
"repositoryReferenceName": "refs/heads/main",
"composeFile": "hosts/synology/calypso/headscale.yaml",
"repositoryAuthentication": true,
"repositoryUsername": "",
"repositoryPassword": "YOUR_GIT_TOKEN",
"autoUpdate": {
"interval": "5m",
"forceUpdate": false,
"forcePullImage": false
}
}'
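A variant of the call above that fails fast: write the payload to a file, run cheap pre-flight checks, and only then POST. The payload mirrors the curl example; `$PORTAINER_TOKEN` is an assumed environment variable, and the POST is left commented out.

```sh
cat > stack-payload.json <<'EOF'
{
  "name": "headscale-stack",
  "repositoryURL": "https://git.vish.gg/Vish/homelab.git",
  "repositoryReferenceName": "refs/heads/main",
  "composeFile": "hosts/synology/calypso/headscale.yaml",
  "repositoryAuthentication": true,
  "repositoryUsername": "",
  "repositoryPassword": "YOUR_GIT_TOKEN",
  "autoUpdate": {"interval": "5m", "forceUpdate": false, "forcePullImage": false}
}
EOF

# Pre-flight: confirm the compose path and warn about the placeholder token
grep -q '"composeFile": "hosts/synology/calypso/headscale.yaml"' stack-payload.json
grep -q 'YOUR_GIT_TOKEN' stack-payload.json && echo "WARNING: replace YOUR_GIT_TOKEN before posting"

# curl -X POST -H "X-API-Key: $PORTAINER_TOKEN" -H "Content-Type: application/json" \
#   "http://vishinator.synology.me:10000/api/stacks/create/standalone/repository?endpointId=443397" \
#   -d @stack-payload.json
```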
Option B: Manual via Portainer UI
- Go to Portainer → Stacks → Add stack
- Select "Repository"
- Configure:
  - Repository URL: `https://git.vish.gg/Vish/homelab.git`
  - Reference: `refs/heads/main`
  - Compose path: `hosts/synology/calypso/headscale.yaml`
  - Authentication: Enable, enter Git token
- Enable GitOps updates with 5m polling
- Deploy
Post-Deployment Verification
1. Check Container Health
# Via Portainer API
curl -s -H "X-API-Key: TOKEN" \
"http://vishinator.synology.me:10000/api/endpoints/443397/docker/containers/json" | \
jq '.[] | select(.Names[0] | contains("headscale")) | {name: .Names[0], state: .State}'
2. Test API Endpoint
curl -s https://headscale.vish.gg/health
# Should return: {"status":"pass"}
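Right after deployment the container may still be starting, so a single health check can fail spuriously. A small retry loop is handy; here `true` stands in for the real probe so the sketch runs anywhere (the real invocation is shown in the comment).

```sh
# Poll a command until it succeeds or attempts run out
wait_for() {
  tries=$1; shift
  i=1
  while [ "$i" -le "$tries" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Real usage: wait_for 30 curl -fsS https://headscale.vish.gg/health
wait_for 3 true && echo "healthy" | tee health_status.txt
```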
3. Check Metrics
curl -s http://calypso.vish.local:9099/metrics | head -20
Client Setup
Linux/macOS
# Install Tailscale client
curl -fsSL https://tailscale.com/install.sh | sh
# Connect to your Headscale server
sudo tailscale up --login-server=https://headscale.vish.gg
# This will open a browser for OIDC authentication
# After auth, the device will be registered
With Pre-Auth Key
# Generate key in Headscale first (see Admin Commands below)
sudo tailscale up --login-server=https://headscale.vish.gg --authkey=YOUR_PREAUTH_KEY
iOS/Android
- Install Tailscale app from App Store/Play Store
- Open app → Use a different server
- Enter `https://headscale.vish.gg`
- Authenticate via Authentik
Verify Connection
tailscale status
# Should show your device and any other connected peers
tailscale ip
# Shows your Tailscale IP (100.64.x.x)
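As a sanity check, the assigned address should fall inside the 100.64.0.0/10 CGNAT range that Headscale allocates from (first octet 100, second octet 64–127). A small helper, using sample addresses so it runs anywhere:

```sh
# Return success if the dotted-quad IP is inside 100.64.0.0/10
in_cgnat() {
  o1=$(echo "$1" | cut -d. -f1)
  o2=$(echo "$1" | cut -d. -f2)
  [ "$o1" -eq 100 ] && [ "$o2" -ge 64 ] && [ "$o2" -le 127 ]
}

# Real usage: in_cgnat "$(tailscale ip -4)" && echo "Headscale-assigned IP"
in_cgnat 100.64.0.1 && echo "100.64.0.1 in range" | tee ip_check.txt
in_cgnat 192.168.1.10 || echo "192.168.1.10 outside range" | tee -a ip_check.txt
```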
Admin Commands
Execute commands inside the Headscale container:
# SSH to Calypso
ssh admin@calypso.vish.local
# Enter container
docker exec -it headscale headscale <command>
User Management
# List users (namespaces)
headscale users list
# Create a user
headscale users create myuser
# Delete a user
headscale users destroy myuser
Node Management
# List all nodes
headscale nodes list
# Register a node manually
headscale nodes register --user myuser --key nodekey:xxxxx
# Delete a node
headscale nodes delete -i <node-id>
# Expire a node (force re-auth)
headscale nodes expire -i <node-id>
Pre-Auth Keys
# Create a pre-auth key (single use)
headscale preauthkeys create --user myuser
# Create reusable key (expires in 24h)
headscale preauthkeys create --user myuser --reusable --expiration 24h
# List keys
headscale preauthkeys list --user myuser
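To hand a pre-auth key to a new device, the onboarding command can be assembled from the key. `KEY` below is a placeholder; on Calypso you would capture the real value from `docker exec headscale headscale preauthkeys create --user myuser`.

```sh
# Placeholder key; substitute the output of `headscale preauthkeys create`
KEY="hskey-example-preauth-key"
LOGIN_SERVER="https://headscale.vish.gg"

# Command to run on the joining device
CMD="sudo tailscale up --login-server=${LOGIN_SERVER} --authkey=${KEY}"
echo "$CMD" | tee onboard_cmd.txt
```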
API Keys
# Create API key for external integrations
headscale apikeys create --expiration 90d
# List API keys
headscale apikeys list
Configuration Reference
Key Settings in config.yaml
| Setting | Value | Description |
|---|---|---|
| `server_url` | `https://headscale.vish.gg` | Public URL for clients |
| `listen_addr` | `0.0.0.0:8080` | Internal listen address |
| `prefixes.v4` | `100.64.0.0/10` | IPv4 CGNAT range |
| `prefixes.v6` | `fd7a:115c:a1e0::/48` | IPv6 ULA range |
| `dns.magic_dns` | `true` | Enable MagicDNS |
| `dns.base_domain` | `tail.vish.gg` | DNS suffix for devices |
| `database.type` | `sqlite` | Database backend |
| `oidc.issuer` | `https://sso.vish.gg/...` | Authentik OIDC endpoint |
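Assembled from the table above, the corresponding `config.yaml` fragment would look roughly like this. It is a sketch: key nesting should be verified against the Headscale configuration reference for the deployed version, and the `oidc.issuer` URL is elided in the table, so it is omitted here.

```yaml
server_url: https://headscale.vish.gg
listen_addr: 0.0.0.0:8080
prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48
dns:
  magic_dns: true
  base_domain: tail.vish.gg
database:
  type: sqlite
```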
DERP Configuration
Using Tailscale's public DERP servers (recommended):
derp:
urls:
- https://controlplane.tailscale.com/derpmap/default
auto_update_enabled: true
For self-hosted DERP, see: https://tailscale.com/kb/1118/custom-derp-servers
Monitoring Integration
Prometheus Scrape Config
Add to your Prometheus configuration:
scrape_configs:
- job_name: 'headscale'
static_configs:
- targets: ['calypso.vish.local:9099']
labels:
instance: 'headscale'
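With the scrape in place, a basic alert can catch the control plane going quiet. The metric name comes from the table below; the alert name, threshold, and 10m window are assumptions to adjust for your setup.

```yaml
groups:
  - name: headscale
    rules:
      - alert: HeadscaleNoOnlineMachines
        expr: headscale_online_machines == 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "No Headscale machines online for 10 minutes"
```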
Key Metrics
| Metric | Description |
|---|---|
| `headscale_connected_peers` | Number of connected peers |
| `headscale_registered_machines` | Total registered machines |
| `headscale_online_machines` | Currently online machines |
Troubleshooting
Client Can't Connect
- Check DNS resolution: `nslookup headscale.vish.gg`
- Check SSL certificate: `curl -v https://headscale.vish.gg/health`
- Check NPM logs: Portainer → Calypso → nginx-proxy-manager → Logs
- Check Headscale logs: `docker logs headscale`
OIDC Authentication Fails
- Verify Authentik is reachable: `curl https://sso.vish.gg/.well-known/openid-configuration`
- Check the redirect URI: it must exactly match the one configured in the Authentik provider
- Check client credentials: ensure the ID/secret in the config are correct
- Check Headscale logs: `docker logs headscale | grep oidc`
Nodes Not Connecting to Each Other
- Check DERP connectivity: Nodes may be relaying through DERP
- Check firewall: Ensure UDP 41641 is open for direct connections
- Check node status: `tailscale status` on each node
Container Won't Start
- Check config syntax: YAML formatting errors
- Check network exists: `docker network ls | grep authentik`
- Check volume permissions: Synology may have permission issues
Backup
Data to Backup
| Path | Content |
|---|---|
| `headscale-data:/var/lib/headscale/db.sqlite` | User/node database |
| `headscale-data:/var/lib/headscale/private.key` | Server private key |
| `headscale-data:/var/lib/headscale/noise_private.key` | Noise protocol key |
Backup Command
# On Calypso
docker run --rm -v headscale-data:/data -v /volume1/backups:/backup \
alpine tar czf /backup/headscale-backup-$(date +%Y%m%d).tar.gz /data
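The restore path is the inverse of the command above and is worth rehearsing before it is needed. This sketch exercises the same tar round-trip on local directories so it runs anywhere; the docker-based restore on Calypso is shown as a comment (assumed, with the date placeholder left for you to fill).

```sh
# On Calypso the restore would be roughly:
#   docker run --rm -v headscale-data:/data -v /volume1/backups:/backup \
#     alpine tar xzf /backup/headscale-backup-YYYYMMDD.tar.gz -C /
mkdir -p data restore
echo "fake-sqlite-db" > data/db.sqlite

tar czf headscale-backup-test.tar.gz data           # backup
tar xzf headscale-backup-test.tar.gz -C restore     # restore

diff data/db.sqlite restore/data/db.sqlite && echo "restore verified" | tee restore_check.txt
```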
Migration from Tailscale
If migrating existing devices from Tailscale cloud:
- On each device:
sudo tailscale logout - Connect to Headscale:
sudo tailscale up --login-server=https://headscale.vish.gg - Re-establish routes: Configure exit nodes and subnet routes as needed
Note: You cannot migrate Tailscale cloud configuration directly. ACLs, routes, and settings must be reconfigured.
Related Documentation
External Resources
Last updated: February 2026