Sanitized mirror from private repository - 2026-03-12 11:19:27 UTC

hosts/physical/concord-nuc/README.md (new file, 145 lines)
@@ -0,0 +1,145 @@

# Concord NUC

**Hostname**: concord-nuc / vish-concord-nuc
**IP Address**: 192.168.68.100 (static, eno1)
**Tailscale IP**: 100.72.55.21
**OS**: Ubuntu (cloud-init based)
**SSH**: `ssh vish-concord-nuc` (via Tailscale — see `~/.ssh/config`)

---

## Network Configuration

### Static IP Setup

`eno1` is configured with a **static IP** (`192.168.68.100/22`) via netplan. This is required because AdGuard Home binds its DNS listener to a specific IP, and a DHCP lease change would cause it to crash.

**Netplan config**: `/etc/netplan/50-cloud-init.yaml`

```yaml
network:
  ethernets:
    eno1:
      dhcp4: false
      addresses:
        - 192.168.68.100/22
      routes:
        - to: default
          via: 192.168.68.1
      nameservers:
        addresses:
          - 9.9.9.9
          - 1.1.1.1
  version: 2
  wifis:
    wlp1s0:
      access-points:
        This_Wifi_Sucks:
          password: "REDACTED_PASSWORD"
      dhcp4: true
```

**Cloud-init is disabled** from managing network config:
`/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg` — prevents reboots from reverting to DHCP.

> **Warning**: If you ever re-enable cloud-init networking or wipe this file, eno1 will revert to DHCP and AdGuard will start crash-looping on the next restart. See the Troubleshooting section below.
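A quick way to spot drift before AdGuard does it for you. This is a sketch, not part of the deployed tooling: the `has_cidr` helper just greps `ip addr`-style text, so it can be fed either live output or a captured line.

```bash
# Hypothetical helper: succeeds when the given `ip addr` output carries the expected CIDR.
has_cidr() {
  expected="$1"; shift
  printf '%s\n' "$*" | grep -q "inet ${expected}"
}

# On the host (assumes the eno1 naming and addresses documented above):
# has_cidr "192.168.68.100/22" "$(ip addr show eno1)" || echo "eno1 drifted off static IP"
# test -f /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg || echo "cloud-init guard file missing"
```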

---

## Services

| Service | Port | URL |
|---------|------|-----|
| AdGuard Home (Web UI) | 9080 | http://192.168.68.100:9080 |
| AdGuard Home (DNS) | 53 | 192.168.68.100:53, 100.72.55.21:53 |
| Home Assistant | - | see homeassistant.yaml |
| Plex | - | see plex.yaml |
| Syncthing | - | see syncthing.yaml |
| Invidious | 3000 | https://in.vish.gg (public), http://192.168.68.100:3000 |
| Materialious | 3001 | http://192.168.68.100:3001 |
| YourSpotify | 4000, 15000 | see yourspotify.yaml |

---

## Deployed Stacks

| Compose File | Service | Notes |
|--------------|---------|-------|
| `adguard.yaml` | AdGuard Home | DNS ad blocker, binds to 192.168.68.100 |
| `homeassistant.yaml` | Home Assistant | Home automation |
| `plex.yaml` | Plex | Media server |
| `syncthing.yaml` | Syncthing | File sync |
| `wireguard.yaml` | WireGuard / wg-easy | VPN |
| `dyndns_updater.yaml` | DynDNS | Dynamic DNS |
| `node-exporter.yaml` | Node Exporter | Prometheus metrics |
| `piped.yaml` | Piped | YouTube alternative frontend |
| `yourspotify.yaml` | YourSpotify | Spotify stats |
| `invidious/invidious.yaml` | Invidious + Companion + DB + Materialious | YouTube frontend — https://in.vish.gg |

---

## Troubleshooting

### AdGuard crash-loops on startup

**Symptom**: `docker ps` shows AdGuard as "Restarting" or "Up Less than a second".

**Cause**: AdGuard binds DNS to a specific IP (`192.168.68.100`). If the host's IP changes (DHCP), or if AdGuard rewrites its config to the current DHCP address, it will fail to bind on the next start.

**Diagnose**:
```bash
docker logs AdGuard --tail 20
# Look for: "bind: cannot assign requested address"
# The log will show which IP it tried to use
```

**Fix**:
```bash
# 1. Check what IP AdGuard thinks it should use
sudo grep -A3 'bind_hosts' /home/vish/docker/adguard/config/AdGuardHome.yaml

# 2. Check what IP eno1 actually has
ip addr show eno1 | grep 'inet '

# 3. If they don't match, update the config
#    (replace XXX with the stale address that step 1 printed)
sudo sed -i 's/- 192.168.68.XXX/- 192.168.68.100/' /home/vish/docker/adguard/config/AdGuardHome.yaml

# 4. Restart AdGuard
docker restart AdGuard
```
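Steps 1-3 can also be run as one guarded edit. This is a sketch against a scratch copy of the config, not the documented procedure: on the real host, point `CONF` at `/home/vish/docker/adguard/config/AdGuardHome.yaml` and run it with sudo.

```bash
# Scratch copy standing in for AdGuardHome.yaml, seeded with a stale DHCP
# address (the same drift as the 2026-02-22 incident below).
CONF=$(mktemp)
printf 'bind_hosts:\n  - 192.168.68.87\n' > "$CONF"

want="192.168.68.100"
# Only rewrite when bind_hosts does not already carry the static address.
if ! grep -q -- "- $want" "$CONF"; then
  sed -i "s/- 192\.168\.68\.[0-9]*/- $want/" "$CONF"
fi
grep -- '- 192' "$CONF"
```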

**If the host IP has reverted to DHCP** (e.g. after a reboot wiped the static config):
```bash
# Re-apply static IP
sudo netplan apply

# Verify
ip addr show eno1 | grep 'inet '
# Should show: inet 192.168.68.100/22
```

---

## Incident History

### 2026-02-22 — AdGuard crash-loop / IP mismatch

- **Root cause**: Host had drifted from `192.168.68.100` to DHCP-assigned `192.168.68.87`. AdGuard briefly started, rewrote its config to `.87`, then the static IP was applied and `.87` was gone — causing a bind failure loop.
- **Resolution**:
  1. Disabled cloud-init network management
  2. Set `eno1` to static `192.168.68.100/22` via netplan
  3. Corrected `AdGuardHome.yaml` `bind_hosts` back to `.100`
  4. Restarted AdGuard — stable

---

### 2026-02-27 — Invidious 502 / crash-loop

- **Root cause 1**: PostgreSQL 14 defaults `pg_hba.conf` to `scram-sha-256` for host connections. Invidious's Crystal DB driver does not support scram-sha-256, causing a "password authentication failed" crash loop even with correct credentials.
- **Fix**: Changed the last line of `/var/lib/postgresql/data/pg_hba.conf` in the `invidious-db` container from `host all all all scram-sha-256` to `host all all 172.21.0.0/16 trust`, then ran `SELECT pg_reload_conf();`.
- **Root cause 2**: Portainer had saved the literal string `REDACTED_SECRET_KEY` as the `SERVER_SECRET_KEY` env var for the companion container (Portainer's secret-redaction placeholder was baked in as the real value). The latest companion image validates the key strictly (exactly 16 alphanumeric chars), causing it to crash.
- **Fix**: Updated the Portainer stack file via the API (`PUT /api/stacks/584`), replacing all `REDACTED_*` placeholders with the real values.
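The pg_hba edit from root cause 1 can be sketched on a scratch copy. On the real host it was done inside the `invidious-db` container and followed by `SELECT pg_reload_conf();`; the two-line file here is only illustrative, not the container's actual pg_hba.conf.

```bash
# Scratch file standing in for /var/lib/postgresql/data/pg_hba.conf.
HBA=$(mktemp)
printf 'local all all trust\nhost all all all scram-sha-256\n' > "$HBA"

# Swap the catch-all scram rule for trust on the Docker subnet
# (172.21.0.0/16, per the incident notes above).
sed -i 's|^host all all all scram-sha-256$|host all all 172.21.0.0/16 trust|' "$HBA"
cat "$HBA"
```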

---

*Last updated: 2026-02-27*

hosts/physical/concord-nuc/adguard.yaml (new file, 23 lines)
@@ -0,0 +1,23 @@

# AdGuard Home - DNS ad blocker
# Web UI: http://192.168.68.100:9080
# DNS: 192.168.68.100:53, 100.72.55.21:53
#
# IMPORTANT: This container binds DNS to 192.168.68.100 (configured in AdGuardHome.yaml).
# The host MUST have a static IP of 192.168.68.100 on eno1, otherwise AdGuard will
# crash-loop with "bind: cannot assign requested address".
# See README.md for static IP setup and troubleshooting.
services:
  adguard:
    image: adguard/adguardhome
    container_name: AdGuard
    mem_limit: 2g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped
    network_mode: host
    volumes:
      - /home/vish/docker/adguard/config:/opt/adguardhome/conf:rw
      - /home/vish/docker/adguard/data:/opt/adguardhome/work:rw
    environment:
      TZ: America/Los_Angeles

hosts/physical/concord-nuc/diun.yaml (new file, 28 lines)
@@ -0,0 +1,28 @@

# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun

services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - diun-data:/data
    environment:
      LOG_LEVEL: info
      DIUN_WATCH_WORKERS: "20"
      DIUN_WATCH_SCHEDULE: "0 9 * * 1"
      DIUN_WATCH_JITTER: 30s
      DIUN_PROVIDERS_DOCKER: "true"
      DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
      DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
      DIUN_NOTIF_NTFY_TOPIC: "diun"
    restart: unless-stopped

volumes:
  diun-data:
@@ -0,0 +1,28 @@

8. Start the Server

Use screen or tmux to keep the server running in the background.

Start Master (Overworld) Server

cd ~/dst/bin
screen -S dst-master ./dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Master

Start Caves Server

Open a new session:

screen -S dst-caves ./dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Caves

[Service]
User=dst
ExecStart=/home/dstserver/dst/bin/dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Master
Restart=always

hosts/physical/concord-nuc/dozzle-agent.yaml (new file, 15 lines)
@@ -0,0 +1,15 @@

services:
  dozzle-agent:
    image: amir20/dozzle:latest
    container_name: dozzle-agent
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "7007:7007"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/dozzle", "healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3

hosts/physical/concord-nuc/dyndns_updater.yaml (new file, 17 lines)
@@ -0,0 +1,17 @@

# Dynamic DNS Updater
# Updates DNS records when public IP changes
version: '3.8'

services:
  ddns-vish-13340:
    image: favonia/cloudflare-ddns:latest
    network_mode: host
    restart: unless-stopped
    user: "1000:1000"
    read_only: true
    cap_drop: [all]
    security_opt: [no-new-privileges:true]
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
      - DOMAINS=api.vish.gg,api.vp.vish.gg,in.vish.gg,client.spotify.vish.gg,spotify.vish.gg
      - PROXIED=false

hosts/physical/concord-nuc/homeassistant.yaml (new file, 55 lines)
@@ -0,0 +1,55 @@

# Home Assistant - Smart home automation
# Port: 8123
# Open source home automation platform
version: '3'
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    network_mode: host
    restart: unless-stopped
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /home/vish/docker/homeassistant:/config
      - /etc/localtime:/etc/localtime:ro

  matter-server:
    container_name: matter-server
    image: ghcr.io/home-assistant-libs/python-matter-server:stable
    network_mode: host
    restart: unless-stopped
    volumes:
      - /home/vish/docker/matter:/data

  piper:
    container_name: piper
    image: rhasspy/wyoming-piper:latest
    restart: unless-stopped
    ports:
      - "10200:10200"
    volumes:
      - /home/vish/docker/piper:/data
    command: --voice en_US-lessac-medium

  whisper:
    container_name: whisper
    image: rhasspy/wyoming-whisper:latest
    restart: unless-stopped
    ports:
      - "10300:10300"
    volumes:
      - /home/vish/docker/whisper:/data
    command: --model tiny-int8 --language en

  openwakeword:
    container_name: openwakeword
    image: rhasspy/wyoming-openwakeword:latest
    restart: unless-stopped
    ports:
      - "10400:10400"
    command: --preload-model ok_nabu

networks:
  default:
    name: homeassistant-stack

hosts/physical/concord-nuc/invidious/docker/init-invidious-db.sh (new executable file, 13 lines)
@@ -0,0 +1,13 @@

#!/bin/bash
# Invidious DB initialisation script
# Runs once on first container start (docker-entrypoint-initdb.d).
#
# Adds a pg_hba.conf rule allowing connections from any Docker subnet
# using trust auth. Without this, PostgreSQL rejects the invidious
# container when the Docker network is assigned a different subnet after
# a recreate (the default pg_hba.conf only covers localhost).

set -e

# Allow connections from any host on the Docker bridge network
echo "host all all 0.0.0.0/0 trust" >> /var/lib/postgresql/data/pg_hba.conf

hosts/physical/concord-nuc/invidious/invidious.yaml (new file, 115 lines)
@@ -0,0 +1,115 @@

version: "3"

configs:
  materialious_nginx:
    content: |
      events { worker_connections 1024; }
      http {
        default_type application/octet-stream;
        include /etc/nginx/mime.types;
        server {
          listen 80;

          # The video player passes dashUrl as a relative path that resolves
          # to this origin — proxy Invidious API/media paths to the local service.
          # (in.vish.gg resolves to the external IP, which is unreachable via
          # hairpin NAT from inside Docker; invidious:3000 is on the same network.)
          location ~ ^/(api|companion|vi|ggpht|videoplayback|sb|s_p|ytc|storyboards) {
            proxy_pass http://invidious:3000;
            proxy_set_header Host $$host;
            proxy_set_header X-Real-IP $$remote_addr;
            proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
          }

          location / {
            root /usr/share/nginx/html;
            try_files $$uri /index.html;
          }
        }
      }

services:

  invidious:
    image: quay.io/invidious/invidious:latest
    platform: linux/amd64
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: "REDACTED_PASSWORD"
          host: invidious-db
          port: 5432
        check_tables: true
        invidious_companion:
          - private_url: "http://companion:8282/companion"
        invidious_companion_key: "pha6nuser7ecei1E"
        hmac_key: "Kai5eexiewohchei"
    healthcheck:
      test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/trending || exit 1
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db
      - companion

  companion:
    image: quay.io/invidious/invidious-companion:latest
    platform: linux/amd64
    environment:
      - SERVER_SECRET_KEY=pha6nuser7ecei1E
    restart: unless-stopped
    cap_drop:
      - ALL
    read_only: true
    volumes:
      - companioncache:/var/tmp/youtubei.js:rw
    security_opt:
      - no-new-privileges:true
    logging:
      options:
        max-size: "1G"
        max-file: "4"

  invidious-db:
    image: postgres:14
    restart: unless-stopped
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]

  materialious:
    image: wardpearce/materialious:latest
    container_name: materialious
    restart: unless-stopped
    environment:
      VITE_DEFAULT_INVIDIOUS_INSTANCE: "https://in.vish.gg"
    configs:
      - source: materialious_nginx
        target: /etc/nginx/nginx.conf
    ports:
      - "3001:80"
    logging:
      options:
        max-size: "1G"
        max-file: "4"

volumes:
  postgresdata:
  companioncache:

hosts/physical/concord-nuc/invidious/invidious_notes.txt (new file, 4 lines)
@@ -0,0 +1,4 @@

vish@vish-concord-nuc:~/invidious/invidious$ pwgen 16 1 # for Invidious (HMAC_KEY)
Kai5eexiewohchei
vish@vish-concord-nuc:~/invidious/invidious$ pwgen 16 1 # for Invidious companion (invidious_companion_key)
pha6nuser7ecei1E
@@ -0,0 +1,65 @@

version: "3.8" # Upgrade to a newer version for better features and support

services:
  invidious:
    image: quay.io/invidious/invidious:latest
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: "REDACTED_PASSWORD"
          host: invidious-db
          port: 5432
        check_tables: true
        signature_server: inv_sig_helper:12999
        visitor_data: ""
        po_token: "REDACTED_TOKEN"
        hmac_key: "9Uncxo4Ws54s7dr0i3t8"
    healthcheck:
      test: ["CMD", "wget", "-nv", "--tries=1", "--spider", "http://127.0.0.1:3000/api/v1/trending"]
      interval: 30s
      timeout: 5s
      retries: 2
    logging:
      options:
        max-size: "1G"
        max-file: "4"
    depends_on:
      - invidious-db

  inv_sig_helper:
    image: quay.io/invidious/inv-sig-helper:latest
    init: true
    command: ["--tcp", "0.0.0.0:12999"]
    environment:
      - RUST_LOG=info
    restart: unless-stopped
    cap_drop:
      - ALL
    read_only: true
    security_opt:
      - no-new-privileges:true

  invidious-db:
    image: docker.io/library/postgres:14
    restart: unless-stopped
    volumes:
      - postgresdata:/var/lib/postgresql/data
      - ./config/sql:/config/sql
      - ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
    environment:
      POSTGRES_DB: invidious
      POSTGRES_USER: kemal
      POSTGRES_PASSWORD: "REDACTED_PASSWORD"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  postgresdata:
@@ -0,0 +1,2 @@

docker all in one (note: `down --volumes` also deletes named volumes, including the DB)
docker-compose down --volumes --remove-orphans && docker-compose pull && docker-compose up -d

hosts/physical/concord-nuc/nginx/client.spotify.vish.gg.conf (new file, 28 lines)
@@ -0,0 +1,28 @@

# Redirect all HTTP traffic to HTTPS
server {
    listen 80;
    server_name client.spotify.vish.gg;

    return 301 https://$host$request_uri;
}

# HTTPS configuration for the subdomain
server {
    listen 443 ssl;
    server_name client.spotify.vish.gg;

    # SSL Certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/client.spotify.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/client.spotify.vish.gg/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    # Proxy to Docker container
    location / {
        proxy_pass http://127.0.0.1:4000; # Maps to your Docker container
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

hosts/physical/concord-nuc/nginx/in.vish.gg.conf (new file, 63 lines)
@@ -0,0 +1,63 @@

server {
    if ($host = in.vish.gg) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name in.vish.gg;

    # Redirect all HTTP traffic to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name in.vish.gg;

    # SSL Certificates (Certbot paths)
    ssl_certificate /etc/letsencrypt/live/in.vish.gg/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/in.vish.gg/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # --- Reverse Proxy to Invidious ---
    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;

        # Required headers for reverse proxying
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket and streaming stability
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # Disable buffering for video streams
        proxy_buffering off;
        proxy_request_buffering off;

        # Avoid premature timeouts during long playback
        proxy_read_timeout 600s;
        proxy_send_timeout 600s;
    }

    # Cache static assets (images, css, js) for better performance
    location ~* \.(?:jpg|jpeg|png|gif|ico|css|js|webp)$ {
        expires 30d;
        add_header Cache-Control "public, no-transform";
        proxy_pass http://127.0.0.1:3000;
    }

    # Security headers (optional but sensible)
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options SAMEORIGIN;
    add_header Referrer-Policy same-origin;
}

hosts/physical/concord-nuc/nginx/spotify.conf (new file, 28 lines)
@@ -0,0 +1,28 @@

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name spotify.vish.gg;

    return 301 https://$host$request_uri;
}

# HTTPS server block
server {
    listen 443 ssl;
    server_name spotify.vish.gg;

    # SSL Certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/spotify.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/spotify.vish.gg/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Proxy requests to backend API
    location / {
        proxy_pass http://127.0.0.1:15000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

hosts/physical/concord-nuc/nginx/vp.vish.gg.conf (new file, 74 lines)
@@ -0,0 +1,74 @@

# Redirect HTTP to HTTPS
server {
    listen 80;
    server_name vp.vish.gg api.vp.vish.gg proxy.vp.vish.gg;

    return 301 https://$host$request_uri;
}

# HTTPS Reverse Proxy for Piped
server {
    listen 443 ssl http2;
    server_name vp.vish.gg;

    # SSL Certificates (managed by Certbot)
    ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Proxy requests to the Piped frontend (this nginx runs on the host, so it
    # targets the stack's published port 8080, not a Docker service name)
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# HTTPS Reverse Proxy for Piped API
server {
    listen 443 ssl http2;
    server_name api.vp.vish.gg;

    # SSL Certificates
    ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Proxy requests to Piped API backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# HTTPS Reverse Proxy for Piped Proxy (for video streaming)
server {
    listen 443 ssl http2;
    server_name proxy.vp.vish.gg;

    # SSL Certificates
    ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Proxy video playback requests through ytproxy
    location ~ (/videoplayback|/api/v4/|/api/manifest/) {
        include snippets/ytproxy.conf;
        add_header Cache-Control private always;
        proxy_hide_header Access-Control-Allow-Origin;
    }

    location / {
        include snippets/ytproxy.conf;
        add_header Cache-Control "public, max-age=604800";
        proxy_hide_header Access-Control-Allow-Origin;
    }
}

hosts/physical/concord-nuc/node-exporter.yaml (new file, 24 lines)
@@ -0,0 +1,24 @@

# Node Exporter - Prometheus metrics exporter for hardware/OS metrics
# Exposes metrics on port 9101 (changed from 9100 due to host conflict)
# Used by: Grafana/Prometheus monitoring stack
# Note: Using bridge network with port mapping instead of host network
# to avoid conflict with host-installed node_exporter

version: "3.8"

services:
  node-exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    ports:
      - "9101:9100"
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped
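A scrape of the exporter returns Prometheus text exposition format. The payload below is illustrative, not captured from the host; on the NUC the equivalent would be `curl -s http://192.168.68.100:9101/metrics`.

```bash
# Sample of the exposition format: '#' lines are HELP/TYPE metadata,
# bare lines are metric samples.
sample='# HELP node_load1 1m load average.
# TYPE node_load1 gauge
node_load1 0.42
node_memory_MemFree_bytes 1.2e+09'

# Counting node_* sample lines is a cheap liveness check for the exporter.
n=$(printf '%s\n' "$sample" | grep -c '^node_')
echo "$n"   # 2
```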

hosts/physical/concord-nuc/piped.yaml (new file, 79 lines)
@@ -0,0 +1,79 @@

# Piped - YouTube frontend
# Port: 8080
# Privacy-respecting YouTube

services:
  piped-frontend:
    image: 1337kavin/piped-frontend:latest
    restart: unless-stopped
    depends_on:
      - piped
    environment:
      BACKEND_HOSTNAME: api.vp.vish.gg
      HTTP_MODE: https
    container_name: piped-frontend

  piped-proxy:
    image: 1337kavin/piped-proxy:latest
    restart: unless-stopped
    environment:
      - UDS=1
    volumes:
      - piped-proxy:/app/socket
    container_name: piped-proxy

  piped:
    image: 1337kavin/piped:latest
    restart: unless-stopped
    volumes:
      - ./config/config.properties:/app/config.properties:ro
    depends_on:
      - postgres
    container_name: piped-backend

  bg-helper:
    image: 1337kavin/bg-helper-server:latest
    restart: unless-stopped
    container_name: piped-bg-helper

  nginx:
    image: nginx:mainline-alpine
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      - ./config/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./config/pipedapi.conf:/etc/nginx/conf.d/pipedapi.conf:ro
      - ./config/pipedproxy.conf:/etc/nginx/conf.d/pipedproxy.conf:ro
      - ./config/pipedfrontend.conf:/etc/nginx/conf.d/pipedfrontend.conf:ro
      - ./config/ytproxy.conf:/etc/nginx/snippets/ytproxy.conf:ro
      - piped-proxy:/var/run/ytproxy
    container_name: nginx
    depends_on:
      - piped
      - piped-proxy
      - piped-frontend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.piped.rule=Host(`FRONTEND_HOSTNAME`, `BACKEND_HOSTNAME`, `PROXY_HOSTNAME`)"
      - "traefik.http.routers.piped.entrypoints=websecure"
      - "traefik.http.services.piped.loadbalancer.server.port=8080"

  postgres:
    image: pgautoupgrade/pgautoupgrade:16-alpine
    restart: unless-stopped
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=piped
      - POSTGRES_USER=piped
      - POSTGRES_PASSWORD="REDACTED_PASSWORD"
    container_name: postgres

  watchtower:
    image: containrrr/watchtower
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /etc/timezone:/etc/timezone:ro
    environment:
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
    container_name: watchtower
    command: piped-frontend piped-backend piped-proxy piped-bg-helper varnish nginx postgres watchtower

volumes:
  piped-proxy: null

hosts/physical/concord-nuc/plex.yaml (new file, 28 lines)
@@ -0,0 +1,28 @@

# Plex Media Server
# Web UI: http://<host-ip>:32400/web
# Uses Intel QuickSync for hardware transcoding (via /dev/dri)
# Media library mounted from NAS at /mnt/nas

services:
  plex:
    image: linuxserver/plex:latest
    container_name: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - UMASK=022
      - VERSION=docker
      # Get claim token from: https://www.plex.tv/claim/
      - PLEX_CLAIM=claim-REDACTED_APP_PASSWORD
    volumes:
      - /home/vish/docker/plex/config:/config
      - /mnt/nas/:/data/media
    devices:
      # Intel QuickSync for hardware transcoding
      - /dev/dri:/dev/dri
    security_opt:
      - no-new-privileges:true
    restart: on-failure:10
    # custom-cont-init.d/01-wait-for-nas.sh waits up to 120s for /mnt/nas before starting Plex

hosts/physical/concord-nuc/portainer_agent.yaml (new file, 22 lines)
@@ -0,0 +1,22 @@

# Portainer Edge Agent - concord-nuc
# Connects to Portainer server on Atlantis (100.83.230.112:8000)
# Deploy: docker compose -f portainer_agent.yaml up -d

services:
  portainer_edge_agent:
    image: portainer/agent:2.33.7
    container_name: portainer_edge_agent
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
      - /:/host
      - portainer_agent_data:/data
    environment:
      EDGE: "1"
      EDGE_ID: "be02f203-f10c-471a-927c-9ca2adac254c"
      EDGE_KEY: "aHR0cDovLzEwMC44My4yMzAuMTEyOjEwMDAwfGh0dHA6Ly8xMDAuODMuMjMwLjExMjo4MDAwfGtDWjVkTjJyNXNnQTJvMEF6UDN4R3h6enBpclFqa05Wa0FCQkU0R1IxWFU9fDQ0MzM5OA"
      EDGE_INSECURE_POLL: "1"

volumes:
  portainer_agent_data:
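The `EDGE_KEY` appears to be unpadded base64 of a pipe-separated tuple (Portainer URL, tunnel address, certificate digest, endpoint ID — the field layout is an assumption based on Portainer's edge-key format, not stated in this repo). Decoding it is a quick way to confirm the agent really points at Atlantis:

```bash
key="aHR0cDovLzEwMC44My4yMzAuMTEyOjEwMDAwfGh0dHA6Ly8xMDAuODMuMjMwLjExMjo4MDAwfGtDWjVkTjJyNXNnQTJvMEF6UDN4R3h6enBpclFqa05Wa0FCQkU0R1IxWFU9fDQ0MzM5OA"

# base64 -d needs the '=' padding that Portainer strips; restore it first.
case $(( ${#key} % 4 )) in
  2) key="${key}==" ;;
  3) key="${key}=" ;;
esac

decoded=$(printf '%s' "$key" | base64 -d)
printf '%s\n' "$decoded"
```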

hosts/physical/concord-nuc/scrutiny-collector.yaml (new file, 22 lines)
@@ -0,0 +1,22 @@
|
||||
# Scrutiny Collector — concord-nuc (Intel NUC)
|
||||
#
|
||||
# Ships SMART data to the hub on homelab-vm.
|
||||
# NUC typically has one internal NVMe + optionally a SATA SSD.
|
||||
# Adjust device list: run `lsblk` to see actual drives.
|
||||
#
|
||||
# Hub: http://100.67.40.126:8090
|
||||
|
||||
services:
|
||||
scrutiny-collector:
|
||||
image: ghcr.io/analogj/scrutiny:master-collector
|
||||
container_name: scrutiny-collector
|
||||
cap_add:
|
||||
- SYS_RAWIO
|
||||
- SYS_ADMIN
|
||||
volumes:
|
||||
- /run/udev:/run/udev:ro
|
||||
devices:
|
||||
- /dev/sda
|
||||
environment:
|
||||
COLLECTOR_API_ENDPOINT: "http://100.67.40.126:8090"
|
||||
restart: unless-stopped
|
||||
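The "run `lsblk`" comment above can be turned into a one-liner. A hypothetical helper (not part of the repo) that converts `lsblk -o NAME,TYPE -dn` output into ready-to-paste `devices:` entries, keeping only whole disks and skipping partitions and loop devices:

```shell
# Hypothetical helper: emit a compose `devices:` entry for each whole disk
# in "NAME TYPE" lines as produced by `lsblk -o NAME,TYPE -dn`.
compose_devices() {
  printf '%s\n' "$1" | awk '$2 == "disk" { print "      - /dev/" $1 }'
}

# On the NUC:
# compose_devices "$(lsblk -o NAME,TYPE -dn)"
```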
19
hosts/physical/concord-nuc/syncthing.yaml
Normal file
@@ -0,0 +1,19 @@
# Syncthing - File synchronization
# Ports: 8384 (web UI), 22000/tcp+udp (sync), 21027/udp (local discovery)
# Continuous file synchronization between devices
services:
  syncthing:
    image: ghcr.io/linuxserver/syncthing
    container_name: syncthing
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /home/vish/docker/syncthing/config:/config
      - /home/vish/docker/syncthing/data1:/data1
      - /home/vish/docker/syncthing/data2:/data2
    restart: unless-stopped
25
hosts/physical/concord-nuc/wireguard.yaml
Normal file
@@ -0,0 +1,25 @@
# WireGuard - VPN server
# Ports: 51820/udp (WireGuard), 51821/tcp (web UI)
# Modern, fast VPN tunnel
services:
  wg-easy:
    container_name: wg-easy
    image: ghcr.io/wg-easy/wg-easy

    environment:
      # wg-easy expects PASSWORD_HASH (a bcrypt hash); escape each $ in the
      # hash as $$ so docker compose does not treat it as interpolation
      - PASSWORD_HASH=REDACTED_PASSWORD
      - WG_HOST=vishconcord.tplinkdns.com

    volumes:
      - ./config:/etc/wireguard
      - /lib/modules:/lib/modules
    ports:
      - "51820:51820/udp"
      - "51821:51821/tcp"
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
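Because bcrypt hashes contain `$` characters, and docker compose treats `$` as the start of a variable reference, each `$` in the admin password hash must be doubled before it goes into the compose file. A small sketch of the escaping (the example hash is a placeholder, not a real credential; the hash itself would come from wg-easy's bundled generator):

```shell
# Double every '$' so docker compose passes the bcrypt hash through
# literally instead of attempting variable interpolation.
escape_for_compose() {
  printf '%s' "$1" | sed 's/\$/$$/g'
}

escape_for_compose '$2b$12$examplehash'   # -> $$2b$$12$$examplehash
```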
49
hosts/physical/concord-nuc/yourspotify.yaml
Normal file
@@ -0,0 +1,49 @@
# Your Spotify - Listening statistics
# Ports: 15000 (backend API), 4000 (frontend)
# Self-hosted Spotify listening history tracker
version: "3.8"

services:
  server:
    image: yooooomi/your_spotify_server
    restart: unless-stopped
    ports:
      - "15000:8080" # Backend service on host port 15000
    depends_on:
      - mongo
    environment:
      - API_ENDPOINT=https://spotify.vish.gg # Public URL for backend
      - CLIENT_ENDPOINT=https://client.spotify.vish.gg # Public URL for frontend
      - SPOTIFY_PUBLIC=d6b3bda999f042099ce79a8b6e9f9e68 # Spotify app client ID
      - SPOTIFY_SECRET=72c650e7a25f441baa245b963003a672 # Spotify app client secret
      - SPOTIFY_REDIRECT_URI=https://client.spotify.vish.gg/callback # Redirect URI for OAuth
      - CORS=https://client.spotify.vish.gg # Allow frontend's origin
    networks:
      - spotify_network

  mongo:
    container_name: mongo
    image: mongo:4.4.8
    restart: unless-stopped
    volumes:
      - yourspotify_mongo_data:/data/db # Named volume for persistent storage
    networks:
      - spotify_network

  web:
    image: yooooomi/your_spotify_client
    restart: unless-stopped
    ports:
      - "4000:3000" # Frontend on host port 4000
    environment:
      - API_ENDPOINT=https://spotify.vish.gg # URL for backend API
    networks:
      - spotify_network

volumes:
  yourspotify_mongo_data:
    driver: local

networks:
  spotify_network:
    driver: bridge