Sanitized mirror from private repository - 2026-04-18 11:19:59 UTC

This commit is contained in:
Gitea Mirror Bot
2026-04-18 11:19:59 +00:00
commit fb00a325d1
1418 changed files with 359990 additions and 0 deletions

`docs/hosts/guava.md`
# Guava
TrueNAS SCALE 25.04.2 secondary NAS. Runs alongside the Synology primaries (atlantis/calypso) with its own ZFS pool and a mix of TrueNAS `ix-apps` and raw Docker services.
## Specs
| | |
|---|---|
| Hostname | `guava` |
| OS | TrueNAS SCALE 25.04.2 (Debian-based, kernel 6.12.15) |
| LAN IP | 192.168.0.100 |
| Tailscale IP | 100.75.252.64 (Headscale node ID:8) |
| RAM | 30 GB |
| Boot pool | `boot-pool` — 464 GB SSD (17 GB used) |
| Data pool | `data` — 3.62 TB raw (2.16 TB used, 1.47 TB free, 59% full, dedup 1.67x) |
| API key | stored in `MEMORY.md` (see root `.claude` memory) |
SSH aliases: `guava` or `truenas` (both → 100.75.252.64, user `vish`).
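For reference, a client-side stanza matching those aliases (hypothetical — the real entries live in `~/.ssh/config` on the connecting machines):

```
Host guava truenas
    HostName 100.75.252.64
    User vish
```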
## Networking
- `accept_routes=false` on Tailscale to prevent Calypso's `192.168.0.0/24` subnet advertisement from hijacking Guava's own LAN replies. See [`docs/troubleshooting/guava-smb-incident-2026-03-14.md`](../troubleshooting/guava-smb-incident-2026-03-14.md) and [`docs/networking/GUAVA_LAN_ROUTING_FIX.md`](../networking/GUAVA_LAN_ROUTING_FIX.md) for background.
- Dedicated policy-based routing rule: `ip rule add to 192.168.0.0/24 lookup main priority 5200` (persistent — applied on boot).
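A quick post-boot check for that rule (a sketch; the `5200:` prefix comes from standard iproute2 `ip rule show` output):

```shell
# Print the policy rule if present; otherwise show the reapply command.
if ip rule show 2>/dev/null | grep '^5200:'; then
    echo "rule 5200 present"
else
    echo "rule 5200 missing; reapply with:"
    echo "  sudo ip rule add to 192.168.0.0/24 lookup main priority 5200"
fi
```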
## Running services
Nineteen containers as of 2026-04-18. TrueNAS-managed apps are prefixed `ix-*`.
### TrueNAS apps (`ix-*`)
| App | Purpose |
|---|---|
| `ix-portainer-portainer-1` | Standalone Portainer instance (not federated with the main Atlantis Portainer) |
| `ix-gitea-gitea-1` + `ix-gitea-postgres-1` | Legacy Gitea instance (primary Gitea runs on matrix-ubuntu) |
| `ix-jellyfin-jellyfin-1` | Legacy Jellyfin instance (primary Jellyfin runs on Olares with RTX 5090 transcode) |
| `ix-tailscale-tailscale-1` | Tailscale TrueNAS app (separate from host-level tailscaled) |
| `ix-wg-easy-wg-easy-1` | WireGuard-Easy VPN admin UI |
### Raw Docker
| Container | Purpose |
|---|---|
| `tdarr-node-guava` | Tdarr transcode node — one of several nodes offloading from the main Tdarr server (see `docs/services/individual/tdarr.md`) |
| `ollama` | Local Ollama instance (smaller models; primary inference is on Olares) |
| `open-webui` | OpenWebUI for the local Ollama |
| `fasten-onprem` | Personal health records aggregator |
| `planka` + `planka-db` | Kanban board |
| `fenrus` | Dashboard/launcher |
| `nginx` | Reverse proxy for local apps |
| `openspeedtest` | Self-hosted speed test |
| `ddns-crista-love` | Cloudflare DDNS for `crista.love` |
| `node-exporter` | Prometheus host metrics |
| `dozzle-agent` | Remote log agent for central Dozzle |
| `rendered-tailscale-1` | Helper container for `ix-tailscale` |
## Storage layout
- `data` pool (3.62 TB raw, RAIDZ) — primary data
- `data/.ix-virt/` — libvirt/incus VM storage (including a `proton-bridge` container used for Proton Mail IMAP bridging)
- `data/.system/` — TrueNAS system datasets (configs, NetData, NFS, SMB)
- User datasets are managed through the TrueNAS UI
## Portainer (standalone)
This host's Portainer is **not** registered with the main Portainer at `pt.vish.gg`. It is a separate instance accessed directly via TrueNAS apps. This is deliberate — Guava manages its own `ix-*` apps lifecycle.
## Related docs
- [Host overview](../infrastructure/hosts.md) — Guava row
- [Guava LAN routing fix](../networking/GUAVA_LAN_ROUTING_FIX.md)
- [Guava SMB incident (2026-03-14)](../troubleshooting/guava-smb-incident-2026-03-14.md)
- [Tdarr](../services/individual/tdarr.md) — node federation

`docs/hosts/jellyfish.md`
# Jellyfish
Raspberry Pi 5 running Debian Trixie, behind a GL-MT3600BE (Beryl 7) router in Hawaii.
## Hardware
| Property | Value |
|----------|-------|
| **Model** | Raspberry Pi 5 Model B Rev 1.0 |
| **CPU** | ARM Cortex-A76 (2 cores visible), BogoMIPS 108 |
| **RAM** | 4 GB |
| **Boot disk** | 32GB microSD (mmcblk0) |
| **External storage** | 4TB NVMe SSD (Crucial CT4000P3SSD8) via USB, LUKS+EXT4 |
| **OS** | Debian 13 (Trixie) |
| **Tailscale IP** | `100.69.121.120` |
| **Headscale node** | ID:15 |
| **LAN IP** | `192.168.12.181` (eth0), `192.168.12.182` (wlan0) |
| **Gateway** | GL-MT3600BE at `192.168.12.1` |
| **User** | `lulu` |
## Network
Jellyfish is on the Beryl 7's `192.168.12.0/24` LAN, reachable from the tailnet via Tailscale or via the router's subnet route.
```bash
ssh jellyfish # via Tailscale (100.69.121.120)
ssh lulu@192.168.12.181 # from devices on the Beryl 7 LAN
```
## External Drive (LUKS)
The 4TB NVMe is encrypted with LUKS and mounted at `/srv/nas`. It must be opened manually after each reboot (passphrase required).
### Bring Up
```bash
sudo cryptsetup luksOpen /dev/sda ssd # enter passphrase
sudo mount /dev/mapper/ssd /srv/nas
sudo systemctl start smbd
docker compose -f /srv/nas/ametrine/Docker/photoprism/compose.yaml up -d
```
### Shut Down
```bash
docker compose -f /srv/nas/ametrine/Docker/photoprism/compose.yaml down
sudo systemctl stop smbd
sudo umount /srv/nas
sudo cryptsetup close ssd
sudo shutdown -h now
```
### FSCK Recovery Notes
The SSD has had EXT4 corruption from 143 unsafe shutdowns (power loss, not hardware failure). SMART reports healthy (0 errors, 100% spare). Recovery notes are at `/home/lulu/FSCK-RECOVERY-NOTES.md` on jellyfish.
If LUKS mapper returns I/O errors but raw `/dev/sda` reads fine:
1. Stop all services (photoprism, smbd, syncthing)
2. Unmount `/srv/nas`
3. Close LUKS: `sudo cryptsetup close ssd`
4. Reopen: `sudo cryptsetup luksOpen /dev/sda ssd`
5. Run fsck: `sudo e2fsck -y /dev/mapper/ssd`
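The steps above can be sketched as a single helper (hypothetical script; `DRY_RUN=1` is the default and only prints each command — unset it on jellyfish to actually execute them):

```shell
#!/bin/sh
# Recovery sequence for the LUKS-mapper-I/O-error case described above.
# DRY_RUN=1 (default) echoes each command instead of running it.
DRY_RUN=${DRY_RUN:-1}
run() {
    if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}
run docker compose -f /srv/nas/ametrine/Docker/photoprism/compose.yaml down
run sudo systemctl stop smbd syncthing
run sudo umount /srv/nas
run sudo cryptsetup close ssd
run sudo cryptsetup luksOpen /dev/sda ssd   # passphrase prompt here
run sudo e2fsck -y /dev/mapper/ssd
```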
## Services
| Service | Location | Port |
|---------|----------|------|
| **Photoprism** | `/srv/nas/ametrine/Docker/photoprism/` | Docker |
| **Samba** | System service (`smbd`) | 445 |
## Other Devices on Same LAN
- `moon` — `192.168.12.223` (Tailscale `100.64.0.6`)
- `homeassistant` — Home Assistant OS (Tailscale `100.112.186.90`)

`docs/hosts/seattle.md`
# Seattle
Contabo cloud VPS in Seattle, US. Public internet-facing host for services that need a stable external IP, plus Tailscale exit node / DERP relay for the mesh.
## Specs
| | |
|---|---|
| Hostname | `vmi2076105` |
| OS | Ubuntu 24.04.4 LTS (Noble) |
| Public IP | YOUR_WAN_IP |
| Tailscale IP | 100.82.197.124 (Headscale node ID:2) |
| RAM | 62 GB |
| Disk | 290 GB root (~110 GB free) |
| Tailscale | 1.96.4 |
SSH aliases (see `~/.ssh/config`): `seattle` (public IP, Contabo SSH), `seattle-tailscale` (via Tailscale IP).
## Role
- **Public exit node** for Tailscale mesh
- **DERP relay** (`derper`) — self-hosted DERP, advertised to Headscale
- **Stoatchat** (Revolt fork) full stack — see `docs/admin/stoatchat-operational-status.md`
- **AI coding workstation** (HolyClaude, :3059)
- **Personal productivity** (Obsidian remote, Wallabag, KeeWeb, Padloc)
- **Matrix / LiveKit** signalling + TURN for video calls
- **DDNS updaters** for `*.vish.gg` records pointing to this VPS
## Running services
All managed via `docker compose`. Twenty containers as of 2026-04-18.
| Container | Purpose | Ports |
|---|---|---|
| `holyclaude` | Web UI for Claude Code via [coderluii/holyclaude](https://github.com/coderluii/holyclaude) | `100.82.197.124:3059 → 3001` |
| `derper` | Tailscale DERP relay | `:3478/udp`, `:8444/tcp` |
| `livekit` | WebRTC SFU for Matrix calls | `:7880-7881/tcp`, `:50000-50100/udp` |
| `fluxer_server` | Fluxer backend | `127.0.0.1:8088` |
| `nats-core` | NATS messaging | internal |
| `nats-jetstream` | NATS persistence | internal |
| `elasticsearch` | Stoatchat search | `:9200` |
| `valkey` | Redis-compatible cache (Stoatchat) | internal |
| `meilisearch` | Full-text search | `:7700` |
| `padloc-nginx` / `padloc-server` / `padloc-pwa` | Padloc password manager | `:5500` |
| `keeweb` | KeeWeb password vault | `:8443` |
| `obsidian` | Headless Obsidian via LinuxServer image | `127.0.0.1:3000-3001` |
| `wallabag` | Read-later service | `127.0.0.1:8880` |
| `dozzle-agent` | Remote log agent | `:7007`, `:8080` |
| `diun` | Docker image update notifier | — |
| `ddns-ddns-seattle-derp-1` | Cloudflare DDNS for DERP DNS | — |
| `ddns-ddns-seattle-proxied-1` | Cloudflare DDNS for proxied records | — |
| `ddns-ddns-seattle-stoatchat-1` | Cloudflare DDNS for Stoatchat | — |
Nginx runs on the host (not in Docker) on `:80/:443` with Let's Encrypt and terminates SSL for all public-facing services.
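The vhosts follow the usual terminate-and-proxy pattern; a hypothetical server block for one of the loopback-bound services (server name and cert paths illustrative — the real config lives in `/etc/nginx` on the host):

```nginx
server {
    listen 443 ssl;
    server_name wallabag.vish.gg;

    ssl_certificate     /etc/letsencrypt/live/vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vish.gg/privkey.pem;

    location / {
        # wallabag binds to loopback only (see table above)
        proxy_pass http://127.0.0.1:8880;
        proxy_set_header Host $host;
    }
}
```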
## Networking
- `eth0` — Contabo public IP (YOUR_WAN_IP)
- `tailscale0` — 100.82.197.124, advertises as exit node
- Firewall: Contabo panel + ufw; ports 80, 443, 2222 (SSH), 7880-7881, 50000-50100/udp, 8444, 5500, 3478/udp open
- DDNS: three Cloudflare DDNS containers keep DNS records synced to the public IP
## Related docs
- [HolyClaude service](../services/individual/holyclaude.md)
- [Stoatchat operational status](../admin/stoatchat-operational-status.md)
- [Seattle monitoring update (Feb 2026)](../admin/monitoring-update-seattle-2026-02.md)
- [Headscale](../services/individual/headscale.md) — DERP relay advertisement
## Host access
```sh
ssh seattle # public IP, port 2222
ssh seattle-tailscale # via Tailscale (100.82.197.124)
```
SSH login is `root` (key-based); no password auth.

`docs/hosts/setillo.md`
# Setillo
Synology DS223j NAS running DSM 7.3.2. Secondary Synology used for monitoring exporters and AdGuard secondary DNS.
## Specs
| | |
|---|---|
| Model | DS223j |
| Platform | rtd1619b (aarch64) |
| DSM | 7.3.2 (build 86009) |
| Storage | `/volume1` — 8.8 TB btrfs |
| Tailscale | 1.96.4 (as of 2026-04-11) |
## Running services
Containers under DSM Container Manager:
| Name | Image | Purpose |
|---|---|---|
| `node_exporter` | `quay.io/prometheus/node-exporter` | Prometheus host metrics |
| `snmp_exporter` | `quay.io/prometheus/snmp-exporter` | SNMP metrics for network gear |
| `adguard` | `adguard/adguardhome` | Secondary AdGuard DNS resolver |
| `dozzle-agent` | `amir20/dozzle` | Remote log agent for the main Dozzle instance |
## Sudoers restriction (important)
The `vish` user has passwordless sudo but **cannot invoke shells via sudo**:
```
(ALL) NOPASSWD: "REDACTED_PASSWORD" !/bin/ash, !/bin/sh, !/bin/bash, !/usr/bin/su
```
Practical implications:
- ✅ Works: `sudo mkdir`, `sudo mount`, `sudo wget`, `sudo /opt/bin/opkg install foo`, `sudo tee /etc/file`, `sudo systemctl enable foo`
- ❌ Blocked: `sudo sh script.sh`, `sudo bash -c '...'`, `sudo -i`, `sudo ./script.sh` (even with a `#!/bin/sh` shebang — the kernel re-executes the script as `/bin/sh script.sh`, and that shell invocation is exactly what sudoers blocks)
To run shell scripts as root, translate them into a series of individual `sudo` invocations of non-shell binaries. Use `sudo tee file <<EOF` heredocs to write files (tee is not a shell, so it's allowed).
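The heredoc pattern, shown against a scratch path (on setillo you would prefix `tee` with `sudo` and target the real file):

```shell
# Write a file without spawning a shell as the target user.
tee /tmp/example.conf >/dev/null <<'EOF'
# managed by hand; sudoers forbids sudo'd shells
key = value
EOF
cat /tmp/example.conf
```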
## Entware (opkg package manager)
Installed 2026-04-11 to work around DSM's minimal busybox toolset.
### Layout
| | |
|---|---|
| Distro | `aarch64-k3.10`, GLIBC 2.27 |
| Home | `/volume1/@entware` |
| Mount point | `/opt` (bind mount) |
| Persistence | `/etc/systemd/system/opt.mount` (enabled, `After=syno-volume.target`, `WantedBy=local-fs.target`) |
| Installed size | ~260 MB (153 packages) |
### Persistence unit
`/etc/systemd/system/opt.mount`:
```ini
[Unit]
Description=Entware bind mount for /opt
DefaultDependencies=no
After=syno-volume.target
Requires=syno-volume.target
Before=local-fs.target
Conflicts=umount.target
[Mount]
What=/volume1/@entware
Where=/opt
Type=none
Options=bind
[Install]
WantedBy=local-fs.target
```
### PATH setup
`/opt` uses three binary directories. `~/.profile` sets:
```sh
export PATH=/opt/bin:/opt/sbin:/opt/usr/bin:$PATH
```
Some binaries live at `/opt/bin`, some at `/opt/sbin` (e.g. `iotop`, `fzf`), some at `/opt/usr/bin` (e.g. `fd`, `eza`). Keep all three in PATH.
### /opt/containerd preservation
DSM pre-creates empty stub dirs `/opt/containerd/{bin,lib}` (cosmetic — real containerd lives at `/var/packages/REDACTED_APP_PASSWORD/`). The Entware install recreated these stubs inside the bind-mounted tree so Synology's view of `/opt/containerd` is preserved whether `/opt` is bind-mounted or not. If you ever rebuild Entware, recreate them:
```sh
sudo mkdir -p /opt/containerd/bin /opt/containerd/lib
sudo chmod 711 /opt/containerd
```
### Installing packages
Entware's `opkg` is not in the default sudo PATH (and sudoers blocks shells, so you can't `sudo bash -c`). Always invoke opkg by full path:
```sh
sudo /opt/bin/opkg update
sudo /opt/bin/opkg install <pkg>
```
For Python packages not in the Entware repo, use `/opt/bin/pip3`:
```sh
sudo /opt/bin/pip3 install --break-system-packages <pkg>
```
### Currently installed (high-value tools)
| Category | Packages |
|---|---|
| Shell | `bash`, `tmux`, `screen`, `htop`, `vim-full`, `nano`, `less`, `fzf` |
| Network | `iperf3`, `mtr-json`, `bind-dig`, `tcpdump`, `nmap`, `socat`, `curl`, `wget-ssl`, `nethogs`, `iftop`, `whois`, `mosh-full`, `openssh-client`, `openssh-sftp-server` |
| Filesystem | `rsync`, `rclone`, `ncdu`, `pv`, `file`, `tree`, `lsof`, `jq`, `yq` (pip) |
| Observability | `sysstat`, `dstat`, `strace`, `procps-ng`, `python3-iotop`, `glances` (pip), `fail2ban` |
| Dev | `git`, `python3`, `python3-pip`, `node`, `gnupg2` |
| Modern unix | `ripgrep`, `fd`, `eza`, `zoxide` |
### Not available in the Entware aarch64-k3.10 repo
- `bat` — install via `cargo install bat` if needed
- `duf` — use `df` / `ncdu` instead
- `bash-completion` — individual tools (git, fzf) provide their own
- `yq`, `glances` — installed via `pip3` instead
### Uninstall (full reversal)
```sh
sudo systemctl disable --now opt.mount
sudo rm /etc/systemd/system/opt.mount
sudo systemctl daemon-reload
sudo rm -rf /volume1/@entware
```
### DSM upgrade caveat
DSM major version bumps (e.g. 7.3 → 8) can clobber `/etc/systemd/system/`. After any DSM upgrade, re-check:
```sh
systemctl is-enabled opt.mount
```
If missing, recreate the unit file (content above), then `sudo systemctl daemon-reload && sudo systemctl enable --now opt.mount`. The Entware tree itself survives on `/volume1/@entware` — only the mount unit needs recreating.
## Tailscale upgrades
Tailscale's Synology package ships a built-in self-updater. Don't hunt for SPK URLs — just:
```sh
sudo tailscale update --yes
```
It downloads the right `.spk` from `pkgs.tailscale.com` and installs it in place. Confirmed working 2026-04-11 (upgraded 1.92.3 → 1.96.4).

# 🎮 PufferPanel Game Server Management
*Web-based game server management panel for the Seattle VM*
## Overview
PufferPanel provides a comprehensive web interface for managing game servers, including Minecraft, Source engine games, and other popular multiplayer games.
## Deployment Information
### Host Location
- **Host**: Seattle VM (`homelab_vm`)
- **Container**: `pufferpanel-seattle`
- **Status**: ✅ Active
- **Access**: `https://games.vish.gg`
### Container Configuration
```yaml
services:
pufferpanel:
image: pufferpanel/pufferpanel:latest
container_name: pufferpanel-seattle
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
volumes:
- pufferpanel-config:/etc/pufferpanel
- pufferpanel-data:/var/lib/pufferpanel
- game-servers:/var/lib/pufferpanel/servers
ports:
- "8080:8080"
- "25565:25565" # Minecraft
- "27015:27015" # Source games
networks:
- game-network
```
## Managed Game Servers
### Minecraft Servers
- **Vanilla Minecraft**: Latest release version
- **Paper Minecraft**: Performance-optimized server
- **Modded Minecraft**: Forge/Fabric mod support
- **Bedrock Edition**: Cross-platform compatibility
### Source Engine Games
- **Garry's Mod**: PropHunt and sandbox modes
- **Left 4 Dead 2**: Co-op survival campaigns
- **Counter-Strike**: Classic competitive gameplay
- **Team Fortress 2**: Team-based multiplayer
### Other Games
- **Satisfactory**: Factory building dedicated server
- **Valheim**: Viking survival multiplayer
- **Terraria**: 2D adventure and building
- **Don't Starve Together**: Survival multiplayer
## Server Management
### Web Interface
- **URL**: `https://games.vish.gg`
- **Authentication**: Local user accounts
- **Features**: Start/stop, console access, file management
- **Monitoring**: Real-time server status and logs
### User Management
```bash
# Create admin user
docker exec pufferpanel-seattle pufferpanel user add --admin admin
# Create regular user
docker exec pufferpanel-seattle pufferpanel user add player
# Set user permissions
docker exec pufferpanel-seattle pufferpanel user perms player server.minecraft.view
```
### Server Templates
- **Pre-configured**: Common game server templates
- **Custom templates**: Tailored server configurations
- **Auto-updates**: Automatic game updates
- **Backup integration**: Scheduled server backups
## Network Configuration
### Port Management
```yaml
# Port mappings for different games
ports:
- "25565:25565" # Minecraft Java
- "19132:19132/udp" # Minecraft Bedrock
- "27015:27015" # Source games
- "7777:7777/udp" # Satisfactory
- "2456-2458:2456-2458/udp" # Valheim
```
### Firewall Rules
```bash
# Allow game server ports
sudo ufw allow 25565/tcp comment "Minecraft Java"
sudo ufw allow 19132/udp comment "Minecraft Bedrock"
sudo ufw allow 27015/tcp comment "Source games"
sudo ufw allow 7777/udp comment "Satisfactory"
```
## Storage Management
### Server Data
```
/var/lib/pufferpanel/servers/
├── minecraft-vanilla/
│ ├── world/
│ ├── plugins/
│ └── server.properties
├── gmod-prophunt/
│ ├── garrysmod/
│ └── srcds_run
└── satisfactory/
├── FactoryGame/
└── Engine/
```
### Backup Strategy
- **Automated backups**: Daily world/save backups
- **Retention policy**: 7 daily, 4 weekly, 12 monthly
- **Storage location**: `/mnt/backups/game-servers/`
- **Compression**: Gzip compression for space efficiency
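A minimal pruning pass for the daily tier (a sketch; `BACKUP_DIR` and the `*-daily.tar.gz` naming are assumptions, not the actual backup job's layout):

```shell
BACKUP_DIR=${BACKUP_DIR:-/mnt/backups/game-servers}
# List daily archives older than the 7-day window; swap -print for -delete
# once the listing looks right. Weekly/monthly tiers would use their own
# suffixes and windows.
find "$BACKUP_DIR" -name '*-daily.tar.gz' -mtime +7 -print 2>/dev/null
```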
## Performance Optimization
### Resource Allocation
```yaml
# Per-server resource limits
deploy:
resources:
limits:
memory: 4G # Minecraft servers
cpus: '2.0'
reservations:
memory: 2G
cpus: '1.0'
```
### Java Optimization (Minecraft)
```bash
# JVM arguments for Minecraft servers
-Xms2G -Xmx4G
-XX:+UseG1GC
-XX:+ParallelRefProcEnabled
-XX:MaxGCPauseMillis=200
-XX:+UnlockExperimentalVMOptions
-XX:+DisableExplicitGC
-XX:G1NewSizePercent=30
-XX:G1MaxNewSizePercent=40
```
### Network Optimization
- **TCP optimization**: Tuned for game traffic
- **Buffer sizes**: Optimized for low latency
- **Connection limits**: Prevent resource exhaustion
- **Rate limiting**: Anti-DDoS protection
## Monitoring and Alerts
### Server Monitoring
- **Resource usage**: CPU, memory, disk I/O
- **Player count**: Active players per server
- **Performance metrics**: TPS, latency, crashes
- **Uptime tracking**: Server availability statistics
### Alert Configuration
```yaml
# Prometheus alerts for game servers
- alert: GameServerDown
expr: up{job="pufferpanel"} == 0
for: 5m
labels:
severity: critical
annotations:
summary: "Game server {{ $labels.instance }} is down"
- alert: HighMemoryUsage
expr: container_memory_usage_bytes{name="minecraft-server"} / container_spec_memory_limit_bytes > 0.9
for: 10m
labels:
severity: warning
annotations:
summary: "High memory usage on {{ $labels.name }}"
```
## Security Configuration
### Access Control
- **User authentication**: Local user database
- **Role-based permissions**: Admin, moderator, player roles
- **Server isolation**: Containerized server environments
- **Network segmentation**: Isolated game network
### Security Hardening
```bash
# Disable unnecessary services
systemctl disable --now telnet
systemctl disable --now rsh
# Configure fail2ban for SSH
sudo fail2ban-client set sshd bantime 3600
# Regular security updates
sudo apt update && sudo apt upgrade -y
```
### Backup Security
- **Encrypted backups**: AES-256 encryption
- **Access controls**: Restricted backup access
- **Integrity checks**: Backup verification
- **Offsite storage**: Cloud backup copies
## Troubleshooting
### Common Issues
#### Server Won't Start
```bash
# Check server logs
docker exec pufferpanel-seattle pufferpanel logs minecraft-server
# Verify port availability
netstat -tulpn | grep :25565
# Check resource limits
docker stats pufferpanel-seattle
```
#### Connection Issues
```bash
# Test network connectivity
telnet games.vish.gg 25565
# Check firewall rules
sudo ufw status numbered
# Verify DNS resolution
nslookup games.vish.gg
```
#### Performance Problems
```bash
# Monitor resource usage
htop
# Check disk I/O
iotop
# Analyze network traffic
nethogs
```
### Log Analysis
```bash
# View PufferPanel logs
docker logs pufferpanel-seattle
# View specific server logs
docker exec pufferpanel-seattle tail -f /var/lib/pufferpanel/servers/minecraft/logs/latest.log
# Check system logs
journalctl -u docker -f
```
## Maintenance Procedures
### Regular Maintenance
- **Weekly**: Server restarts and updates
- **Monthly**: Backup verification and cleanup
- **Quarterly**: Security audit and updates
- **Annually**: Hardware assessment and upgrades
### Update Procedures
```bash
# Update PufferPanel
docker pull pufferpanel/pufferpanel:latest
docker compose up -d pufferpanel
# Update game servers
# Use PufferPanel web interface for game updates
```
### Backup Procedures
```bash
# Manual backup
docker exec pufferpanel-seattle pufferpanel backup create minecraft-server
# Restore from backup
docker exec pufferpanel-seattle pufferpanel backup restore minecraft-server backup-name
```
## Integration with Homelab
### Monitoring Integration
- **Prometheus**: Server metrics collection
- **Grafana**: Performance dashboards
- **NTFY**: Alert notifications
- **Uptime Kuma**: Service availability monitoring
### Authentication Integration
- **Authentik SSO**: Single sign-on integration (planned)
- **LDAP**: Centralized user management (planned)
- **Discord**: Player authentication via Discord (planned)
### Backup Integration
- **Automated backups**: Integration with homelab backup system
- **Cloud storage**: Backup to cloud storage
- **Monitoring**: Backup success/failure notifications
---
**Status**: ✅ PufferPanel managing multiple game servers with automated backups and monitoring

version: '3.8'
services:
pufferpanel:
image: pufferpanel/pufferpanel:latest
container_name: pufferpanel-seattle
restart: unless-stopped
environment:
- PUID=1000
- PGID=1000
- TZ=America/New_York
- PUFFERPANEL_WEB_HOST=0.0.0.0:8080
- PUFFERPANEL_DAEMON_CONSOLE_BUFFER=50
- PUFFERPANEL_DAEMON_CONSOLE_FORWARD=false
- PUFFERPANEL_DAEMON_SFTP_HOST=0.0.0.0:5657
- PUFFERPANEL_DAEMON_AUTH_URL=http://localhost:8080
- PUFFERPANEL_DAEMON_AUTH_CLIENTID=
- PUFFERPANEL_DAEMON_AUTH_CLIENTSECRET=
volumes:
- pufferpanel-config:/etc/pufferpanel
- pufferpanel-data:/var/lib/pufferpanel
- game-servers:/var/lib/pufferpanel/servers
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- "8080:8080" # Web interface
- "5657:5657" # SFTP
- "25565:25565" # Minecraft Java
- "19132:19132/udp" # Minecraft Bedrock
- "27015:27015" # Source games (GMod, L4D2)
- "27015:27015/udp"
- "7777:7777/udp" # Satisfactory
- "15777:15777/udp" # Satisfactory query
- "2456-2458:2456-2458/udp" # Valheim
- "7000-7100:7000-7100/tcp" # Additional game ports
networks:
- game-network
- proxy
labels:
# Traefik reverse-proxy labels
- "traefik.enable=true"
- "traefik.http.routers.pufferpanel.rule=Host(`games.vish.gg`)"
- "traefik.http.routers.pufferpanel.tls=true"
- "traefik.http.routers.pufferpanel.tls.certresolver=letsencrypt"
- "traefik.http.services.pufferpanel.loadbalancer.server.port=8080"
# Monitoring labels
- "prometheus.io/scrape=true"
- "prometheus.io/port=8080"
- "prometheus.io/path=/metrics"
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/api/self"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
deploy:
resources:
limits:
memory: 1G
cpus: '1.0'
reservations:
memory: 512M
cpus: '0.5'
# Minecraft server template (managed by PufferPanel)
minecraft-vanilla:
image: itzg/minecraft-server:latest
container_name: minecraft-vanilla-seattle
restart: unless-stopped
environment:
- EULA=TRUE
- TYPE=VANILLA
- VERSION=LATEST
- MEMORY=4G
- JVM_OPTS=-XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:MaxGCPauseMillis=200
- ENABLE_RCON=true
- RCON_PASSWORD="REDACTED_PASSWORD"
- DIFFICULTY=normal
- MAX_PLAYERS=20
- MOTD=Homelab Minecraft Server
- SPAWN_PROTECTION=16
- VIEW_DISTANCE=10
- SIMULATION_DISTANCE=10
volumes:
- minecraft-data:/data
- minecraft-backups:/backups
ports:
- "25566:25565"
networks:
- game-network
depends_on:
- pufferpanel
deploy:
resources:
limits:
memory: 6G
cpus: '3.0'
reservations:
memory: 4G
cpus: '2.0'
healthcheck:
test: ["CMD", "mc-health"]
interval: 60s
timeout: 10s
retries: 3
start_period: 120s
# Game server backup service
game-backup:
image: alpine:latest
container_name: game-backup-seattle
restart: unless-stopped
environment:
- TZ=America/New_York
- BACKUP_SCHEDULE=0 2 * * * # Daily at 2 AM
- RETENTION_DAYS=30
volumes:
- game-servers:/game-servers:ro
- minecraft-data:/minecraft-data:ro
- /mnt/backups/game-servers:/backups
- ./scripts/backup-games.sh:/backup-games.sh:ro
command: |
sh -c "
apk add --no-cache dcron rsync gzip
echo '0 2 * * * /backup-games.sh' | crontab -
crond -f -l 2"
networks:
- game-network
depends_on:
- pufferpanel
volumes:
pufferpanel-config:
driver: local
driver_opts:
type: none
o: bind
device: /opt/pufferpanel/config
pufferpanel-data:
driver: local
driver_opts:
type: none
o: bind
device: /opt/pufferpanel/data
game-servers:
driver: local
driver_opts:
type: none
o: bind
device: /opt/pufferpanel/servers
minecraft-data:
driver: local
driver_opts:
type: none
o: bind
device: /opt/minecraft/data
minecraft-backups:
driver: local
driver_opts:
type: none
o: bind
device: /mnt/backups/minecraft
networks:
game-network:
driver: bridge
ipam:
config:
- subnet: 172.20.0.0/16
proxy:
external: true
name: nginx-proxy-manager_default