Sanitized mirror from private repository - 2026-04-24 08:27:14 UTC
# Pinchflat Test Deployment Implementation Plan
> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.
**Goal:** Deploy Pinchflat (YouTube auto-archiver) on Atlantis via a hand-run docker compose on a feature branch, so the user can evaluate it before committing to production promotion.
**Architecture:** Single `ghcr.io/kieraneglin/pinchflat:latest` container, port-published on `8945` on Atlantis's LAN IP. Config on Atlantis NVMe (`/volume2/metadata/docker2/pinchflat/config`), downloads on SATA array (`/volume1/data/media/youtube`). No SSO, no reverse proxy, no Kuma monitor, no Portainer stack — purely a branch-based hand-run test.
**Tech Stack:** Docker Compose, Synology DSM (Atlantis NAS), SSH to `vish@atlantis`, git.
**Reference spec:** `docs/superpowers/specs/2026-04-24-pinchflat-design.md`
---
## File Structure
Files added to the repo on branch `feat/pinchflat`:
- `hosts/synology/atlantis/pinchflat/docker-compose.yml` — the single-service compose definition
- `docs/services/individual/pinchflat.md` — brief stub (purpose, URL, test status), following repo convention at `docs/services/individual/<service>.md`
Non-tracked host state created on Atlantis:
- `/volume2/metadata/docker2/pinchflat/config/` — persistent SQLite + YAML config
- `/volume1/data/media/youtube/` — download target folder
- `/volume1/homes/vish/pinchflat-test/` — throwaway working copy of the branch
No modifications to existing files. No Portainer stack registration.
---
## Task 1: Pre-flight checks on Atlantis
**Files:** None (verification only)
- [ ] **Step 1: Confirm port 8945 is free on Atlantis**
Run:
```bash
ssh vish@atlantis 'ss -tlnp 2>/dev/null | grep -E ":8945\b" || echo "port 8945 free"'
```
Expected: `port 8945 free`
If a process is listening on 8945, stop and re-plan — pick a different host port (e.g. 8946) and update the compose file in Task 2 accordingly.
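As a cross-check from the workstation (rather than on the NAS itself), a plain TCP connect test gives the same answer. A sketch, assuming `atlantis` resolves from the workstation:

```python
import socket

def port_is_free(host: str, port: int, timeout: float = 2.0) -> bool:
    """True if nothing on host accepts TCP connections on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return False  # something answered, so the port is taken
    except OSError:
        return True  # refused or timed out: nothing listening

# e.g. port_is_free("atlantis", 8945) should be True before deploying
```

Note this only proves reachability from the workstation; a firewall drop also reads as "free", so the `ss` check on Atlantis remains authoritative.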
- [ ] **Step 2: Confirm media root exists and ownership convention**
Run:
```bash
ssh vish@atlantis 'ls -ld /volume1/data/media /volume2/metadata/docker2'
```
Expected output includes two directories owned by a user account, with `/volume1/data/media` containing existing media subfolders (movies, tv, anime, etc.).
- [ ] **Step 3: Confirm Docker daemon is running on Atlantis**
Run:
```bash
ssh vish@atlantis 'docker ps --format "table {{.Names}}\t{{.Status}}" | head -5'
```
Expected: a table listing at least a handful of running containers (plex, sonarr, etc.). If Docker isn't responding, stop and investigate before proceeding.
---
## Task 2: Create feature branch and compose file
**Files:**
- Create: `hosts/synology/atlantis/pinchflat/docker-compose.yml`
- [ ] **Step 1: Create branch off main**
Run from the repo root `/home/homelab/organized/repos/homelab`:
```bash
git checkout main && git pull --ff-only && git checkout -b feat/pinchflat
```
Expected: branch `feat/pinchflat` checked out, working tree clean except for the pre-existing untracked items (`.secrets.baseline` modifications, `data/expenses.csv`, `.superpowers/`, `backups/`).
- [ ] **Step 2: Create the compose directory**
Run:
```bash
mkdir -p hosts/synology/atlantis/pinchflat
```
- [ ] **Step 3: Write `hosts/synology/atlantis/pinchflat/docker-compose.yml`**
Full file contents:
```yaml
# Pinchflat - YouTube auto-archiver (test deployment)
# Port: 8945
# Docs: https://github.com/kieraneglin/pinchflat
# Scope: lightweight evaluation on Atlantis. No SSO, no reverse proxy, no Kuma.
# See: docs/superpowers/specs/2026-04-24-pinchflat-design.md
version: "3.8"
services:
  pinchflat:
    image: ghcr.io/kieraneglin/pinchflat:latest
    container_name: pinchflat
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Los_Angeles
      - UMASK=022
    ports:
      - "8945:8945"
    volumes:
      - /volume2/metadata/docker2/pinchflat/config:/config
      - /volume1/data/media/youtube:/downloads
    healthcheck:
      test: ["CMD-SHELL", "wget -qO /dev/null http://127.0.0.1:8945/ 2>/dev/null || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped
```
- [ ] **Step 4: Validate the YAML**
Run from the repo root:
```bash
python3 -c "import yaml; yaml.safe_load(open('hosts/synology/atlantis/pinchflat/docker-compose.yml'))" && echo "YAML OK"
```
Expected: `YAML OK`
If parsing fails, fix the indentation/syntax before proceeding. Do not use `sed` to edit — re-run the Write step with the full corrected file.
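Beyond a bare parse, a few structural assertions catch the most likely hand-editing mistake (a key de-indented to the wrong level, which parses fine but lands under the wrong parent). A sketch over the already-parsed mapping returned by `yaml.safe_load`:

```python
def check_compose(doc: dict) -> None:
    """Assert the parsed compose mapping has the keys this plan depends on.

    `doc` is the result of yaml.safe_load() on the compose file.
    """
    svc = doc["services"]["pinchflat"]  # KeyError here usually means mis-indentation
    assert svc["image"].startswith("ghcr.io/kieraneglin/pinchflat")
    assert "8945:8945" in svc["ports"]
    assert any(v.endswith(":/config") for v in svc["volumes"])
    assert any(v.endswith(":/downloads") for v in svc["volumes"])
```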
---
## Task 3: Write the docs stub
**Files:**
- Create: `docs/services/individual/pinchflat.md`
- [ ] **Step 1: Write the doc stub**
Full file contents:
````markdown
# Pinchflat
YouTube channel auto-archiver. Subscribes to channels, polls for new uploads, downloads via yt-dlp, stores locally with metadata/subtitles/chapters.
## Status
**Test deployment as of 2026-04-24.** Running on Atlantis via hand-run `docker compose` on branch `feat/pinchflat`. Not yet registered in Portainer, not yet behind Authentik / NPM / Kuma.
See `docs/superpowers/specs/2026-04-24-pinchflat-design.md` for design rationale and promotion path.
## Access
- **Web UI:** http://192.168.0.200:8945 (LAN only)
- **Host:** Atlantis
- **Image:** `ghcr.io/kieraneglin/pinchflat:latest`
## Paths on Atlantis
- **Compose:** `/volume1/homes/vish/pinchflat-test/hosts/synology/atlantis/pinchflat/docker-compose.yml` (during test)
- **Config:** `/volume2/metadata/docker2/pinchflat/config`
- **Downloads:** `/volume1/data/media/youtube/<Channel>/<YYYY-MM-DD> - <Title>.mkv`
## Runtime defaults (configure in web UI on first launch)
- Output template: `/downloads/{{ source_custom_name }}/{{ upload_yyyy_mm_dd }} - {{ title }}.{{ ext }}`
- Resolution cap: 2160p (4K)
- Container format: MKV (required for VP9/AV1 4K streams)
- Thumbnails: on
- Subtitles: on
- Chapters: on
- NFO files: off
## Operations
### Start / stop (during test phase)
```bash
ssh vish@atlantis
cd /volume1/homes/vish/pinchflat-test
docker compose up -d # start
docker compose logs -f # tail logs
docker compose down # stop (data preserved)
```
### Full teardown (abandon test)
```bash
ssh vish@atlantis
cd /volume1/homes/vish/pinchflat-test
docker compose down -v
sudo rm -rf /volume2/metadata/docker2/pinchflat /volume1/data/media/youtube
cd ~ && rm -rf /volume1/homes/vish/pinchflat-test
```
Then on the workstation:
```bash
git branch -D feat/pinchflat
git push origin --delete feat/pinchflat # if pushed
```
## Promotion to production (if keeping)
1. Merge `feat/pinchflat` → `main`.
2. Register new Portainer GitOps stack pointing at `hosts/synology/atlantis/pinchflat/docker-compose.yml`.
3. Stop the hand-run container, re-up via Portainer.
4. Add NPM proxy host `pinchflat.vish.gg` + Authentik proxy provider.
5. Add Kuma HTTP monitor against `http://192.168.0.200:8945`.
6. Pin the image to a specific digest instead of `:latest`.
7. Expand this doc with operational runbook (channel subscription process, troubleshooting, log locations).
````
- [ ] **Step 2: Verify the doc renders as expected Markdown**
Run:
```bash
head -5 docs/services/individual/pinchflat.md && echo "---" && wc -l docs/services/individual/pinchflat.md
```
Expected: the first line is `# Pinchflat`, and the line count is around 60-70.
---
## Task 4: Commit and push the branch
**Files:** Both files from Task 2 and Task 3.
- [ ] **Step 1: Stage the two new files only**
Explicitly avoid staging the pre-existing dirty working-tree items (`.secrets.baseline`, `data/expenses.csv`, `.superpowers/`, `backups/`).
Run:
```bash
git add hosts/synology/atlantis/pinchflat/docker-compose.yml docs/services/individual/pinchflat.md
git status --short
```
Expected: only the two new files appear in the "to be committed" section. Other items remain in the unstaged/untracked section untouched.
- [ ] **Step 2: Commit**
Run:
```bash
git commit -m "feat(pinchflat): add test deployment compose + docs stub
Hand-run docker compose on Atlantis port 8945 for evaluating Pinchflat as a
YouTube auto-archiver. No SSO/NPM/Kuma/Portainer during the test phase.
See docs/superpowers/specs/2026-04-24-pinchflat-design.md for design and
promotion path."
```
Expected: a commit hook chain runs (trim whitespace, check yaml, yamllint, docker-compose syntax check, detect-secrets). All should pass. Per CLAUDE.md, never add `Co-Authored-By` lines.
If yamllint fails on the new compose file, read the error, fix the compose file inline with Edit (not sed), re-validate per Task 2 Step 4, then `git add` + `git commit --amend --no-edit` is acceptable here since the commit has not been pushed yet.
- [ ] **Step 3: Push the branch**
Run:
```bash
git push -u origin feat/pinchflat
```
Expected: branch published to Gitea. Output shows tracking established.
---
## Task 5: Deploy on Atlantis
**Files:** None in the repo; creates host directories and launches the container.
- [ ] **Step 1: Create the config and download directories with correct ownership**
Pinchflat runs as `PUID=1029 PGID=100` (Synology `dockerlimited:users`). Pre-create the dirs so there's no first-boot permission confusion.
Run:
```bash
ssh vish@atlantis 'sudo mkdir -p /volume2/metadata/docker2/pinchflat/config /volume1/data/media/youtube && sudo chown -R 1029:100 /volume2/metadata/docker2/pinchflat /volume1/data/media/youtube && ls -ld /volume2/metadata/docker2/pinchflat /volume1/data/media/youtube'
```
Expected: both directories exist and are owned by `dockerlimited:users` (which may display as `1029:users` depending on DSM's /etc/passwd).
- [ ] **Step 2: Clone the branch to a throwaway working copy on Atlantis**
Run:
```bash
ssh vish@atlantis 'mkdir -p /volume1/homes/vish/pinchflat-test && cd /volume1/homes/vish/pinchflat-test && git clone --branch feat/pinchflat --depth 1 https://git.vish.gg/vish/homelab.git . && ls hosts/synology/atlantis/pinchflat/'
```
Expected: a single `docker-compose.yml` listed under the pinchflat directory. If the git clone fails (auth or network), fall back to `scp` of the single compose file:
```bash
scp hosts/synology/atlantis/pinchflat/docker-compose.yml vish@atlantis:/volume1/homes/vish/pinchflat-test/docker-compose.yml
```
(in which case the `docker compose` commands in later steps should be run directly from `/volume1/homes/vish/pinchflat-test/` rather than the nested path)
- [ ] **Step 3: Pull the image**
Run:
```bash
ssh vish@atlantis 'cd /volume1/homes/vish/pinchflat-test/hosts/synology/atlantis/pinchflat && docker compose pull'
```
Expected: image `ghcr.io/kieraneglin/pinchflat:latest` pulls successfully. If pull fails on auth (ghcr.io rate-limit), add `docker login ghcr.io` with a PAT and retry.
- [ ] **Step 4: Start the container**
Run:
```bash
ssh vish@atlantis 'cd /volume1/homes/vish/pinchflat-test/hosts/synology/atlantis/pinchflat && docker compose up -d'
```
Expected: `Container pinchflat Started`. No errors.
- [ ] **Step 5: Verify container status**
Run:
```bash
ssh vish@atlantis 'docker ps --filter name=pinchflat --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
```
Expected: single row showing `pinchflat Up N seconds (health: starting) 0.0.0.0:8945->8945/tcp, :::8945->8945/tcp`. Wait ~60 seconds after the first `up` so the healthcheck can transition from `starting` to `healthy`.
---
## Task 6: Smoke test
**Files:** None (evaluation only)
- [ ] **Step 1: Confirm the web UI responds**
Run from the workstation:
```bash
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.200:8945/
```
Expected: `200` (Pinchflat serves its dashboard at `/`). If you get `000` (connection refused), wait 30 seconds and retry — the app takes a moment to bind after container start.
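The wait-and-retry can be scripted instead of done by hand. A minimal poller sketch, with the HTTP probe injected as a callable so the retry logic is independent of how the status code is fetched (the real probe would use `urllib.request` against `http://192.168.0.200:8945/`):

```python
import time
from typing import Callable

def wait_for_http_ok(probe: Callable[[], int], attempts: int = 10,
                     delay: float = 3.0) -> bool:
    """Call probe() until it returns HTTP 200 or attempts are exhausted."""
    for i in range(attempts):
        try:
            if probe() == 200:
                return True
        except OSError:
            pass  # connection refused while the app is still binding
        if i < attempts - 1:
            time.sleep(delay)
    return False
```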
- [ ] **Step 2: Tail logs for any obvious errors**
Run:
```bash
ssh vish@atlantis 'docker logs pinchflat --tail 50 2>&1 | { grep -iE "error|fatal|panic" || echo "no errors in recent logs"; } | head -20'
```
Expected: `no errors in recent logs`, or if any errors appear, assess whether they're benign (e.g. migration notices) vs blocking.
- [ ] **Step 3: Check container healthcheck transitioned to healthy**
Run:
```bash
ssh vish@atlantis 'docker inspect pinchflat --format "{{.State.Health.Status}}"'
```
Expected: `healthy`. If still `starting` after 2 minutes, check logs for why the web server isn't responding. If `unhealthy`, something is wrong — stop and debug before handing off.
- [ ] **Step 4: Confirm config and download volumes wrote correctly**
Run:
```bash
ssh vish@atlantis 'ls -la /volume2/metadata/docker2/pinchflat/config/ | head -10 && echo "---" && ls -la /volume1/data/media/youtube/'
```
Expected: the config dir now contains Pinchflat's initial files (e.g. database file, YAML config). The downloads dir may still be empty — that's fine, it only populates after the user subscribes to a channel.
- [ ] **Step 5: Hand off to user for UI walk-through**
Report to the user:
- URL: `http://192.168.0.200:8945`
- Container state: running + healthy
- Next step is user-driven: open the URL, walk through first-run setup, apply the defaults from §4 of the spec (output template, 4K cap, MKV container, thumbnails/subtitles/chapters on, NFO off), subscribe to 2-5 test channels.
Do NOT mark the plan as complete at this step. The user drives the evaluation window; the plan is complete when the user makes the keep/drop decision.
---
## Task 7 (deferred): Keep-or-drop decision
**Trigger:** User signals "keep" or "drop" after evaluation window.
**If KEEP:**
- Open follow-up plan for promotion (register Portainer GitOps stack, add NPM + Authentik + Kuma, pin image digest, expand docs).
- Out of scope for this plan.
**If DROP:**
- [ ] Run the teardown block from `docs/services/individual/pinchflat.md` (Full teardown section).
- [ ] Delete the remote branch: `git push origin --delete feat/pinchflat`.
- [ ] Delete the local branch: `git branch -D feat/pinchflat`.
- [ ] No follow-up commit on `main` is needed: the two repo files only ever existed on the deleted branch, so nothing landed on `main`.
---
## Self-review notes
- **Spec coverage:** Every spec section (architecture, components, storage, runtime defaults, deployment workflow, error handling, testing, rollback) maps to a task or is explicitly deferred. ✓
- **Placeholder scan:** No TBDs. No "similar to Task N" shortcuts. Every compose field and command is literal. ✓
- **Type consistency:** Paths match across tasks (`/volume1/data/media/youtube`, `/volume2/metadata/docker2/pinchflat/config`, `/volume1/homes/vish/pinchflat-test`, port `8945`). Ownership `1029:100` consistent. ✓
- **Known fuzziness:** "60-70 lines" for the doc stub is approximate; the exact line count depends on final whitespace. Not a blocker.

---
# Homelab Dashboard — Design Spec
## Context
The homelab has 73 MCP tools, 11 automation scripts, and data scattered across 15+ services. There's no unified view — you switch between Homarr, Grafana, Portainer, and terminal logs. This dashboard consolidates everything into a single production-grade UI.
## Architecture
```
┌─────────────────┐     ┌──────────────────┐     ┌──────────────────────┐
│ Next.js UI      │────▶│ FastAPI Backend  │────▶│ Services             │
│ (dashboard-ui)  │     │ (dashboard-api)  │     │                      │
│ Port 3000       │     │ Port 8888        │     │ Portainer (5 hosts)  │
│                 │     │                  │     │ Jellyfin (olares)    │
│ - shadcn/ui     │     │ - scripts/lib/*  │     │ Ollama (olares)      │
│ - Tailwind CSS  │     │ - SQLite readers │     │ Prometheus           │
│ - SWR polling   │     │ - SSE stream     │     │ Gitea                │
│ - dark theme    │     │ - /api/* routes  │     │ Headscale            │
│                 │     │                  │     │ SQLite DBs (6)       │
└─────────────────┘     └──────────────────┘     │ expenses.csv         │
                                                 └──────────────────────┘
```
**Docker Compose** runs both containers. Mounts `scripts/` read-only for the Python backend to access `lib/` modules and SQLite DBs.
## Tech Stack
| Layer | Technology | Why |
|-------|-----------|-----|
| Frontend | Next.js 15 + React 19 | Best component ecosystem |
| UI Components | shadcn/ui + Tailwind CSS | Production-grade, dark mode built-in |
| Data Fetching | SWR (stale-while-revalidate) | Auto-polling with caching |
| Real-time | EventSource (SSE) | Activity feed + alerts |
| Backend | FastAPI (Python 3.12) | Reuses existing `scripts/lib/` modules |
| Database | SQLite (read-only) + CSV | Existing automation data, no new DB |
| Deployment | Docker Compose (2 containers) | `dashboard-ui` + `dashboard-api` |
## Tabs & Content
### 1. Dashboard (Overview)
**Quick Stats Row** (5 cards, polled every 60s):
- Total containers (sum across all Portainer endpoints) + health status
- Hosts online (Portainer endpoint health checks)
- GPU status (nvidia-smi via SSH to olares: temp, utilization, VRAM)
- Emails classified today (query processed.db WHERE date = today)
- Active alerts (count of unhealthy containers from stack-restart.db)
**Activity Feed** (SSE, real-time):
- Reads from a combined event log the API builds from:
- `/tmp/stack-restart.log` (container health events)
- `/tmp/backup-validator.log` (backup results)
- `/tmp/gmail-organizer-dvish.log` + others (email classifications)
- `/tmp/receipt-tracker.log` (expense extractions)
- `/tmp/config-drift.log` (drift detections)
- Shows most recent 20 events with color-coded dots by type
- New events push via SSE
**Jellyfin Card** (polled every 30s):
- Now playing (active sessions via Jellyfin API)
- Library item counts (movies, TV, anime, music)
**Ollama Card** (polled every 60s):
- Model status (loaded/unloaded, model name)
- VRAM usage
- Daily call count (parsed from automation logs)
**Hosts Grid** (polled every 60s):
- 5 Portainer endpoints with container counts
- Status indicator (green/red)
- Click to navigate to Infrastructure tab filtered by host
### 2. Infrastructure
**Container Table** (polled every 30s):
- All containers across all Portainer endpoints
- Columns: Name, Host, Status, Image, Uptime
- Filter by endpoint, search by name
- Click to view logs (modal with last 100 lines)
- Restart button per container
**Olares Pods** (polled every 30s):
- K3s pod list from `kubectl get pods -A`
- GPU processes from nvidia-smi
- Restart deployment button
**Headscale Nodes** (polled every 120s):
- Node list with online/offline status
- Last seen timestamp
- IP addresses
### 3. Media
**Jellyfin Now Playing** (polled every 15s):
- Active streams with user, device, title, transcode status
- Bandwidth indicator
**Download Queues** (polled every 30s):
- Sonarr queue (upcoming episodes, download status)
- Radarr queue (upcoming movies, download status)
- SABnzbd queue (active downloads, speed, ETA)
**Library Stats** (polled every 300s):
- Jellyfin library counts
- Recent additions (if API supports it)
### 4. Automations
**Email Organizer Status** (polled every 120s):
- Per-account stats: lzbellina92, dvish92, admin@thevish.io
- Today's classifications by category (bar chart)
- Sender cache hit rate
- Last run time + errors
**Stack Restart History** (polled every 60s):
- Table from stack-restart.db: container, endpoint, duration, action taken, LLM analysis
- Last 7 days
**Backup Status** (polled every 300s):
- Parse latest `/tmp/gmail-backup-daily.log`
- OK/FAIL indicator with last run time
- Email count backed up
**Config Drift** (polled every 300s):
- Table of detected drifts (if any)
- Last scan time
**Disk Predictions** (polled every 3600s):
- Table from latest disk-predictor run
- Volumes approaching 90% highlighted
### 5. Expenses
**Expense Table** (polled every 300s):
- Read from `data/expenses.csv`
- Columns: Date, Vendor, Amount, Currency, Order#, Account
- Sortable, filterable
- Running total for current month
**Monthly Summary** (polled every 300s):
- Total spend this month
- Spend by vendor (top 10)
- Spend by category (if derivable from vendor)
**Subscription Audit** (static, monthly):
- Latest audit results from subscription-auditor
- Active, dormant, marketing sender counts
## FastAPI Backend Endpoints
```
GET /api/health → backend health check
# Dashboard
GET /api/stats/overview → container count, host health, GPU, email count, alerts
GET /api/activity → SSE stream of recent events
GET /api/jellyfin/status → now playing + library counts
GET /api/ollama/status → model, VRAM, call count
# Infrastructure
GET /api/containers → all containers across endpoints (?endpoint=atlantis)
GET /api/containers/{id}/logs → container logs (?endpoint=atlantis&tail=100)
POST /api/containers/{id}/restart → restart container
GET /api/olares/pods → k3s pod list (?namespace=)
GET /api/olares/gpu → nvidia-smi output
GET /api/headscale/nodes → headscale node list
# Media
GET /api/jellyfin/sessions → active playback sessions
GET /api/sonarr/queue → download queue
GET /api/radarr/queue → download queue
GET /api/sabnzbd/queue → active downloads
# Automations
GET /api/automations/email → organizer stats from processed.db files
GET /api/automations/restarts → stack-restart history from DB
GET /api/automations/backup → backup log parse
GET /api/automations/drift → config drift status
GET /api/automations/disk → disk predictions
# Expenses
GET /api/expenses → expenses.csv data (?month=2026-04)
GET /api/expenses/summary → monthly totals, top vendors
GET /api/subscriptions → latest subscription audit
```
## SSE Activity Stream
The `/api/activity` endpoint uses Server-Sent Events:
```python
@app.get("/api/activity")
async def activity_stream():
    async def event_generator():
        # Tail all automation log files,
        # parse new lines into structured events,
        # and yield each as an SSE frame:
        #   data: {"type": "email", "message": "...", "time": "..."}
        ...
    return StreamingResponse(event_generator(), media_type="text/event-stream")
```
Event types: `container_health`, `backup`, `email_classified`, `receipt_extracted`, `config_drift`, `stack_restart`, `pr_review`.
## Docker Compose
```yaml
# dashboard/docker-compose.yml
services:
  dashboard-api:
    build: ./api
    ports:
      - "8888:8888"
    volumes:
      - ../../scripts:/app/scripts:ro   # access lib/ and SQLite DBs
      - ../../data:/app/data:ro         # expenses.csv
      - /tmp:/app/logs:ro               # automation log files
    environment:
      - PORTAINER_URL=http://100.83.230.112:10000
      - PORTAINER_TOKEN=${PORTAINER_TOKEN}
      - OLLAMA_URL=http://192.168.0.145:31434
    restart: unless-stopped

  dashboard-ui:
    build: ./ui
    ports:
      - "3000:3000"
    environment:
      - API_URL=http://dashboard-api:8888
    depends_on:
      - dashboard-api
    restart: unless-stopped
```
## File Structure
```
dashboard/
  docker-compose.yml
  api/
    Dockerfile
    requirements.txt        # fastapi, uvicorn, httpx
    main.py                 # FastAPI app
    routers/
      overview.py           # /api/stats, /api/activity
      containers.py         # /api/containers/*
      media.py              # /api/jellyfin/*, /api/sonarr/*, etc.
      automations.py        # /api/automations/*
      expenses.py           # /api/expenses/*
      olares.py             # /api/olares/*
  ui/
    Dockerfile
    package.json
    next.config.js
    tailwind.config.ts
    app/
      layout.tsx            # root layout with top nav
      page.tsx              # Dashboard tab (default)
      infrastructure/
        page.tsx
      media/
        page.tsx
      automations/
        page.tsx
      expenses/
        page.tsx
    components/
      nav.tsx               # top navigation bar
      stat-card.tsx         # quick stats cards
      activity-feed.tsx     # SSE-powered activity feed
      container-table.tsx   # sortable container list
      host-card.tsx         # host status card
      expense-table.tsx     # expense data table
      jellyfin-card.tsx     # now playing + library stats
      ollama-card.tsx       # LLM status card
    lib/
      api.ts                # fetch wrapper for backend API
      use-sse.ts            # SSE hook for activity feed
```
## Polling Intervals
| Data | Interval | Rationale |
|------|----------|-----------|
| Container status | 30s | Detect issues quickly |
| Jellyfin sessions | 15s | Now playing should feel live |
| GPU / Ollama | 60s | Changes slowly |
| Email stats | 120s | Organizer runs every 30min |
| Activity feed | SSE (real-time) | Should feel instant |
| Expenses | 300s | Changes once/day at most |
| Headscale nodes | 120s | Rarely changes |
| Disk predictions | 3600s | Weekly report, hourly check is plenty |
## Design Tokens (Dark Theme)
Based on the approved mockup:
```
Background: #0a0a1a (page), #0f172a (cards), #1e293b (borders)
Text: #f1f5f9 (primary), #94a3b8 (secondary), #475569 (muted)
Accent: #3b82f6 (blue, primary action)
Success: #22c55e (green, healthy)
Warning: #f59e0b (amber)
Error: #ef4444 (red)
Purple: #8b5cf6 (Ollama/AI indicators)
```
These map directly to Tailwind's slate/blue/green/amber/red/violet palette, so shadcn/ui theming is straightforward.
## Verification
1. `docker compose up` should start both containers
2. `http://localhost:3000` loads the dashboard
3. All 5 tabs render live data from the homelab
4. Activity feed updates in real-time when an automation runs
5. Container restart button works
6. Expenses table shows data from expenses.csv
7. Mobile-responsive (test at 375px width)

---
# Pinchflat Test Deployment — Design
**Date:** 2026-04-24
**Status:** Approved, awaiting implementation plan
**Scope:** Evaluate Pinchflat (YouTube auto-archiver) on Atlantis as a lightweight test before deciding whether to adopt permanently.
## Goal
Run Pinchflat on Atlantis long enough to evaluate its channel-subscription UX and download quality on real 4K monitors. Keep the test cheap to throw away: no SSO, no reverse proxy, no DNS entry, no Kuma monitor, no Portainer stack registration until we decide to keep it.
## Non-Goals
- Media-server integration (user does not use Jellyfin; no Plex integration planned for this test)
- Authentik SSO / NPM proxy / `*.vish.gg` hostname / Kuma monitor — all deferred to a "promotion to prod" follow-up if we keep it
- Exposure outside the LAN
- GitOps / Portainer stack registration
## Architecture
Single container on Atlantis, port-published on the LAN. No dependencies on other services.
```
                ┌──────────────────────────────────────┐
                │ Atlantis (Synology, 192.168.0.200)   │
                │                                      │
LAN browser ──► │ :8945 pinchflat                      │
                │   │                                  │
                │   ├─► /config (NVMe)                 │
                │   │     /volume2/metadata/docker2    │
                │   │     /pinchflat/config            │
                │   │                                  │
                │   └─► /downloads (SATA)              │
                │         /volume1/data/media          │
                │         /youtube/                    │
                └──────────────────────────────────────┘
```
## Components
### 1. Container
- **Image:** `ghcr.io/kieraneglin/pinchflat:latest`
- **Container name:** `pinchflat`
- **Network:** default bridge. Port published `8945:8945` (Pinchflat default, verified free on Atlantis via `ss -tlnp`).
- **Not joined to `media2_net`** — nothing else talks to Pinchflat, no benefit to a static IP on the arr bridge.
- **User/group:** PUID=1029, PGID=100 (Synology `dockerlimited:users` — matches existing media ownership so Plex/SMB can read the output folder later if we decide to integrate).
- **Env:** `TZ=America/Los_Angeles`, `UMASK=022`.
- **Security:** `security_opt: [no-new-privileges:true]`.
- **Restart:** `unless-stopped`.
- **Watchtower:** default (enabled). Fine for a test running `:latest`.
- **Healthcheck:** HTTP GET `/` on port 8945, 30s interval (Pinchflat's web UI is the only interface).
### 2. Storage
- **Config volume:** `/volume2/metadata/docker2/pinchflat/config` → `/config` (NVMe, matches repo convention).
- **Downloads volume:** `/volume1/data/media/youtube/` → `/downloads` (SATA RAID6, new folder alongside `movies/`, `tv/`, etc.).
- No cache volume needed — Pinchflat writes directly to destination.
### 3. Compose file
- **Path:** `hosts/synology/atlantis/pinchflat/docker-compose.yml`
- **Matches conventions observed in** `hosts/synology/atlantis/youtubedl.yaml` and `hosts/synology/atlantis/arr-suite/docker-compose.yml`.
- Not referenced by any existing Portainer stack. Not in any `networks:` definition shared with other services.
### 4. Pinchflat runtime defaults (configured in web UI on first launch)
Applied globally; overridable per-channel.
- **Output template:** `/downloads/{{ source_custom_name }}/{{ upload_yyyy_mm_dd }} - {{ title }}.{{ ext }}`
- Produces `/downloads/Veritasium/2024-03-12 - Why planes don't fly faster.mkv` etc.
- One folder per channel, date-prefixed files for chronological sort, no fake S01E01 naming.
- **Resolution cap:** 4K (best available up to 2160p). User has 4K monitors; 4K channels get 4K, others fall back naturally.
- **Container format:** MKV (required for clean VP9/AV1 playback — YouTube does not encode 4K in H.264).
- **Thumbnails:** on (cheap, useful in file managers).
- **Subtitles:** on (any available language, SRT sidecar files).
- **Chapters:** on (embedded in MKV).
- **NFO files:** off (only useful for Plex/Kodi/Jellyfin; not needed here).
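To make the template concrete, the substitution behaves like ordinary string formatting. This is an illustration only — Pinchflat's own template engine performs the real rendering:

```python
# Pinchflat's output template, rewritten with Python placeholders for illustration.
TEMPLATE = "/downloads/{source_custom_name}/{upload_yyyy_mm_dd} - {title}.{ext}"

def render(template: str, **fields: str) -> str:
    """Substitute template fields into the output path."""
    return template.format(**fields)

path = render(TEMPLATE,
              source_custom_name="Veritasium",
              upload_yyyy_mm_dd="2024-03-12",
              title="Why planes don't fly faster",
              ext="mkv")
# → /downloads/Veritasium/2024-03-12 - Why planes don't fly faster.mkv
```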
## Deployment & Test Workflow
1. Create branch `feat/pinchflat` off `main`.
2. Add `hosts/synology/atlantis/pinchflat/docker-compose.yml` and a stub `docs/services/individual/pinchflat.md` (~20 lines: what it is, current test status, URL).
3. Commit, push branch.
4. SSH to Atlantis, clone/checkout the branch to a throwaway working copy under `/volume1/homes/vish/pinchflat-test/`.
5. `docker compose up -d` from the working copy.
6. Pre-create `/volume1/data/media/youtube/` with `dockerlimited:users` ownership (or let Pinchflat create it on first download — either works).
7. Open `http://192.168.0.200:8945` in a browser, walk through initial setup, apply the defaults from §4.
8. Subscribe to 2-5 test channels. Let it run for several days.
9. Decide:
- **Keep:** merge branch to `main`, register new Portainer GitOps stack pointing at the committed compose path, expand docs, stop the hand-run container, re-`up` via Portainer.
- **Drop:** `docker compose down -v` on Atlantis, `rm -rf /volume2/metadata/docker2/pinchflat /volume1/data/media/youtube`, delete the branch.
## Data Flow
```
YouTube ──► Pinchflat (yt-dlp) ──► /downloads/<Channel>/<date> - <title>.mkv
└─► /config/pinchflat.db (SQLite: subscriptions, download history)
```
No downstream consumers during the test. Files live on the filesystem; user browses via SMB / file manager / direct playback.
## Error Handling
- **Download failures** (age-restricted, geo-blocked, deleted videos): Pinchflat surfaces these in its web UI with retry buttons. No external alerting during the test.
- **Disk fill:** `/volume1/data` has ample headroom, but 4K with many channels can grow fast. If it becomes a concern, lower the global cap to 1080p or set per-channel caps.
- **Container crash:** `restart: unless-stopped` brings it back. No monitoring during the test phase — we'll notice if the UI doesn't load.
- **`:latest` breaking change:** Watchtower enabled. If an update breaks something, we roll back by pinning to a prior digest in the compose file.
## Testing
Manual, UI-driven:
- **Smoke:** container comes up, web UI loads, ingests a single video from a pasted URL.
- **Subscription:** channel subscription polls correctly, new uploads appear within the polling interval.
- **Quality:** 4K-capable channel produces a 2160p MKV; 1080p-only channel produces a 1080p MKV.
- **Sidecar files:** subtitles and thumbnails present next to the MKV.
- **Permissions:** output files are `dockerlimited:users`, readable via SMB.
- **Persistence:** restart container, state survives.
## Open Questions
None — design approved.
## Promotion Path (if kept)
Out of scope for this spec, but the follow-up to "keep" would include:
- Register GitOps stack in Portainer from the committed compose path
- Add Authentik proxy provider + NPM proxy host for `pinchflat.vish.gg`
- Add Kuma monitor (HTTP `http://192.168.0.200:8945`)
- Pin image to a specific version digest instead of `:latest`
- Expand `docs/services/individual/pinchflat.md` with full operational runbook
## Rollback Plan
Throw-away test. Rollback is:
```bash
ssh atlantis
cd /volume1/homes/vish/pinchflat-test
docker compose down -v
sudo rm -rf /volume2/metadata/docker2/pinchflat /volume1/data/media/youtube
cd ~ && rm -rf pinchflat-test
```
Then `git branch -D feat/pinchflat` and `git push origin --delete feat/pinchflat`. Zero impact on any production stack since nothing references this compose file.