Sanitized mirror from private repository - 2026-04-05 08:31:50 UTC

This commit is contained in:
Gitea Mirror Bot
2026-04-05 08:31:50 +00:00
commit 2be8f1fe17
1390 changed files with 354479 additions and 0 deletions


@@ -0,0 +1,19 @@
version: '3'
services:
openhands-app:
image: docker.openhands.dev/openhands/openhands:0.62
container_name: openhands-app
environment:
- OPENHANDS_LLM_PROVIDER=openai
- OPENHANDS_LLM_MODEL=mistralai/devstral-small-2507
- OPENHANDS_LLM_API_BASE=http://192.168.0.253:1234/v1
- OPENHANDS_LLM_API_KEY=dummy
- SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.openhands.dev/openhands/runtime:0.62-nikolaik
- LOG_ALL_EVENTS=true
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- ~/openhands_clean:/.openhands
ports:
- 3000:3000
extra_hosts:
- "host.docker.internal:host-gateway"
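
Before bringing the stack up, it is worth confirming that the endpoint at `192.168.0.253:1234` is actually serving an OpenAI-compatible API (a quick sanity check, assuming the standard `/v1/models` route exposed by OpenAI-compatible servers such as LM Studio):

```shell
# Probe the OpenAI-compatible endpoint the compose file points at.
# Prints the model list on success, or a warning if unreachable.
curl -s --max-time 3 http://192.168.0.253:1234/v1/models \
  || echo "endpoint unreachable - is the LLM server running?"
```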


@@ -0,0 +1,488 @@
# 🎮 NVIDIA Shield TV Pro 4K - Travel Device Configuration
**🟢 Beginner to Intermediate Guide**
The NVIDIA Shield TV Pro serves as a portable homelab access point, providing secure connectivity to your infrastructure while traveling. This guide covers setup, configuration, and usage scenarios.
## 📱 Device Overview
### **Hardware Specifications**
- **Model**: NVIDIA Shield TV Pro (2019)
- **CPU**: NVIDIA Tegra X1+ (8-core, 64-bit ARM)
- **GPU**: 256-core NVIDIA GPU
- **RAM**: 3GB LPDDR4
- **Storage**: 16GB eMMC + microSD expansion
- **Network**: Gigabit Ethernet + 802.11ac WiFi
- **Ports**: 2x USB 3.0, HDMI 2.0b, microSD slot
- **Power**: 20W external adapter
- **Remote**: Voice remote with backlit buttons
- **AI Upscaling**: NVIDIA AI upscaling to 4K
### **Travel Use Cases**
| Scenario | Primary Function | Homelab Integration |
|----------|------------------|-------------------|
| **Hotel Room** | Media streaming, secure browsing | Plex/Jellyfin via Tailscale |
| **Airbnb/Rental** | Personal entertainment system | Full homelab access |
| **Family Visits** | Share media with family | Stream personal library |
| **Business Travel** | Secure work environment | VPN gateway to homelab |
| **Extended Travel** | Portable home setup | Complete service access |
---
## 🔧 Initial Setup & Configuration
### **Step 1: Basic Android TV Setup**
```bash
# Initial device setup
1. Connect to power and HDMI display
2. Follow Android TV setup wizard
3. Sign in with Google account
4. Connect to WiFi network
5. Complete initial updates
6. Enable Developer Options:
- Settings > Device Preferences > About
- Click "Build" 7 times to enable Developer Options
- Settings > Device Preferences > Developer Options
- Enable "USB Debugging"
```
### **Step 2: Enable Sideloading**
```bash
# Allow installation of non-Play Store apps
1. Settings > Device Preferences > Security & Restrictions
2. Enable "Unknown Sources" for apps you trust
3. Or enable per-app when installing Tailscale
```
### **Step 3: Install Essential Apps**
```bash
# Core applications for homelab integration
1. Tailscale (sideloaded)
2. Plex (Play Store)
3. VLC Media Player (Play Store)
4. Chrome Browser (Play Store)
5. Termux (Play Store) - for SSH access
6. Solid Explorer (Play Store) - file management
```
---
## 🌐 Tailscale Configuration
### **Installation Process**
```bash
# Method 1: Direct APK Installation (Recommended)
1. Download Tailscale APK from official website
2. Transfer to Shield via USB drive or network
3. Install using file manager
4. Grant necessary permissions
# Method 2: ADB Installation (Advanced)
# From computer with ADB installed:
adb connect [shield-ip-address]
adb install tailscale.apk
```
### **Tailscale Setup**
```bash
# Initial configuration
1. Open Tailscale app
2. Sign in with your Tailscale account
3. Authorize the device in Tailscale admin console
4. Verify connection to homelab network
5. Test connectivity to homelab services
# Verify connection
# From Termux or ADB shell:
ping atlantis.vish.local
ping 100.83.230.112 # Atlantis Tailscale IP
```
### **Advanced Tailscale Configuration**
```bash
# Configure as exit node (optional)
# Allows Shield to route all traffic through homelab
1. Tailscale admin console > Machines
2. Find NVIDIA Shield device
3. Enable "Exit Node" capability
4. On Shield: Settings > Use as Exit Node
# Subnet routing (if needed)
# Allow access to local networks at travel location
tailscale up --advertise-routes=192.168.1.0/24
```
---
## 📺 Media Streaming Configuration
### **Plex Client Setup**
```bash
# Optimal Plex configuration for travel
1. Install Plex app from Play Store
2. Sign in with Plex account
3. Server should auto-discover via Tailscale
4. If not found, add manually:
- Server IP: atlantis.vish.local
- Port: 32400
- Or Tailscale IP: 100.83.230.112:32400
# Quality settings for travel:
# Settings > Video Quality
# - Home Streaming: Maximum (if good WiFi)
# - Remote Streaming: 4 Mbps 720p (for limited bandwidth)
# - Allow Direct Play: Enabled
# - Allow Direct Stream: Enabled
```
### **Alternative Media Apps**
```bash
# Jellyfin (if preferred over Plex)
1. Install Jellyfin app from Play Store
2. Add server: calypso.vish.local:2283
3. Or Tailscale IP: 100.103.48.78:2283
# VLC for direct file access
1. Network streams via SMB/CIFS
2. Direct file playback from NAS
3. Supports all media formats
```
---
## 🔒 Security & VPN Configuration
### **Secure Browsing Setup**
```bash
# Use Shield as secure gateway
1. Configure Tailscale as exit node
2. All traffic routes through homelab
3. Benefits from Pi-hole ad blocking
4. Secure DNS resolution
# Chrome browser configuration:
# - Set homepage to homelab dashboard
# - Bookmark frequently used services
# - Enable sync for consistent experience
```
### **SSH Access to Homelab**
```bash
# Using Termux for SSH connections
1. Install Termux from Play Store
2. Update packages: pkg update && pkg upgrade
3. Install SSH client: pkg install openssh
4. Generate SSH key: ssh-keygen -t ed25519
5. Copy public key to homelab hosts
# Connect to homelab:
ssh admin@atlantis.vish.local
ssh user@homelab-vm.vish.local
ssh pi@concord-nuc.vish.local
```
---
## 🏨 Travel Scenarios & Setup
### **Hotel Room Setup**
```bash
# Quick deployment in hotel room
1. Connect Shield to hotel TV via HDMI
2. Connect to hotel WiFi
3. Launch Tailscale (auto-connects)
4. Access homelab services immediately
5. Stream personal media library
# Hotel WiFi considerations:
# - May need to accept terms via browser
# - Some hotels block VPN traffic
# - Use mobile hotspot as backup
```
### **Airbnb/Rental Property**
```bash
# Extended stay configuration
1. Connect to property WiFi
2. Set up Shield as primary entertainment
3. Configure TV settings for optimal experience
4. Share access with travel companions
5. Use as work environment via homelab
# Family sharing:
# - Create guest Plex accounts
# - Share specific libraries
# - Monitor usage via Tautulli
```
### **Mobile Hotspot Integration**
```bash
# Using phone as internet source
1. Enable mobile hotspot on phone
2. Connect Shield to hotspot WiFi
3. Monitor data usage carefully
4. Adjust streaming quality accordingly
# Data-conscious settings:
# - Plex: 2 Mbps 480p for mobile data
# - Disable automatic updates
# - Use offline content when possible
```
---
## 🎮 Gaming & Entertainment Features
### **GeForce Now Integration**
```bash
# Cloud gaming via NVIDIA's service
1. Install GeForce Now app
2. Sign in with NVIDIA account
3. Access Steam/Epic games library
4. Stream games at 4K 60fps (with good connection)
# Optimal settings:
# - Streaming Quality: Custom
# - Bitrate: Adjust based on connection
# - Frame Rate: 60fps preferred
```
### **Local Game Streaming**
```bash
# Stream games from homelab PCs
1. Install Steam Link app
2. Discover gaming PCs on network
3. Pair with gaming systems
4. Stream games over Tailscale
# Requirements:
# - Gaming PC with Steam installed
# - Good network connection (5+ Mbps)
# - Low latency connection
```
### **Emulation & Retro Gaming**
```bash
# RetroArch for classic games
1. Install RetroArch from Play Store
2. Download cores for desired systems
3. Load ROMs from homelab NAS
4. Configure controllers
# ROM access via SMB:
# - Connect to atlantis.vish.local/roms
# - Browse by system/console
# - Load directly from network storage
```
---
## 🔧 Advanced Configuration
### **Custom Launcher (Optional)**
```bash
# Replace default Android TV launcher
1. Install alternative launcher (FLauncher, ATV Launcher)
2. Set as default home app
3. Customize with homelab shortcuts
4. Create quick access to services
# Homelab shortcuts:
# - Grafana dashboard
# - Portainer interface
# - Plex web interface
# - Router admin panel
```
### **Automation Integration**
```bash
# Home Assistant integration
1. Install Home Assistant app
2. Connect to concord-nuc.vish.local:8123
3. Control smart home devices
4. Automate Shield behavior
# Example automations:
# - Turn on Shield when arriving home
# - Adjust volume based on time of day
# - Switch inputs automatically
```
### **File Management**
```bash
# Solid Explorer configuration
1. Add network locations:
- SMB: //atlantis.vish.local/media
- SMB: //calypso.vish.local/documents
- FTP: homelab-vm.vish.local:21
2. Enable cloud storage integration
3. Set up automatic sync folders
# Use cases:
# - Download files to Shield storage
# - Upload photos/videos to homelab
# - Access documents remotely
```
---
## 📊 Monitoring & Management
### **Performance Monitoring**
```bash
# Monitor Shield performance
1. Settings > Device Preferences > About
2. Check storage usage regularly
3. Monitor network performance
4. Clear cache when needed
# Network diagnostics:
# - WiFi Analyzer app for signal strength
# - Speedtest app for bandwidth testing
# - Ping tools for latency checking
```
### **Remote Management**
```bash
# ADB over network (advanced)
1. Enable ADB over network in Developer Options
2. Connect from computer: adb connect [shield-ip]:5555
3. Execute commands remotely
4. Install/manage apps remotely
# Useful ADB commands:
adb shell pm list packages # List installed apps
adb install app.apk # Install APK remotely
adb shell input keyevent 3 # Simulate home button
adb shell screencap /sdcard/screen.png # Screenshot
```
---
## 🚨 Troubleshooting
### **Common Issues & Solutions**
```bash
# Tailscale connection problems:
1. Check internet connectivity
2. Restart Tailscale app
3. Re-authenticate if needed
4. Verify firewall settings
# Plex streaming issues:
1. Check server status in homelab
2. Test direct IP connection
3. Adjust quality settings
4. Clear Plex app cache
# WiFi connectivity problems:
1. Forget and reconnect to network
2. Check for interference
3. Use 5GHz band if available
4. Reset network settings if needed
```
### **Performance Optimization**
```bash
# Improve Shield performance:
1. Clear app caches regularly
2. Uninstall unused applications
3. Restart device weekly
4. Keep storage under 80% full
# Network optimization:
1. Use wired connection when possible
2. Position close to WiFi router
3. Avoid interference sources
4. Update router firmware
```
---
## 📋 Travel Checklist
### **Pre-Travel Setup**
```bash
☐ Update Shield to latest firmware
☐ Update all apps
☐ Verify Tailscale connectivity
☐ Test Plex streaming
☐ Download offline content if needed
☐ Charge remote control
☐ Pack HDMI cable (if needed)
☐ Pack power adapter
☐ Verify homelab services are running
☐ Set up mobile hotspot backup
```
### **At Destination**
```bash
☐ Connect to local WiFi
☐ Test internet speed
☐ Launch Tailscale
☐ Verify homelab connectivity
☐ Test media streaming
☐ Configure TV settings
☐ Set up any shared access
☐ Monitor data usage (if on mobile)
```
### **Departure Cleanup**
```bash
☐ Sign out of local accounts
☐ Clear browser data
☐ Remove WiFi networks
☐ Reset any personalized settings
☐ Verify no personal data left on device
☐ Pack all accessories
```
---
## 🔗 Integration with Homelab Services
### **Service Access URLs**
```bash
# Via Tailscale (always accessible):
Plex: http://100.83.230.112:32400
Jellyfin: http://100.103.48.78:2283
Grafana: http://100.83.230.112:7099
Home Assistant: http://100.67.40.126:8123
Portainer: http://100.83.230.112:9000
Router Admin: http://192.168.1.1
# Via local DNS (when on home network):
Plex: http://atlantis.vish.local:32400
Jellyfin: http://calypso.vish.local:2283
Grafana: http://atlantis.vish.local:7099
```
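With Tailscale connected, the URLs above can be probed in one pass. A minimal sketch (the `/identity` path is Plex's unauthenticated status endpoint; the other checks only confirm the port answers HTTP):

```shell
#!/bin/sh
# Quick reachability sweep of the Tailscale service URLs.
check_url() {
  if curl -s -o /dev/null --max-time 2 "$1"; then
    echo "UP   $1"
  else
    echo "DOWN $1"
  fi
}

for url in \
  http://100.83.230.112:32400/identity \
  http://100.103.48.78:2283 \
  http://100.83.230.112:7099 \
  http://100.67.40.126:8123 \
  http://100.83.230.112:9000
do
  check_url "$url"
done
```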
### **Backup & Sync**
```bash
# Automatic backup of Shield data
1. Configure Syncthing on Shield (if available)
2. Sync important folders to homelab
3. Backup app configurations
4. Store in homelab for easy restore
# Manual backup process:
1. Use ADB to pull important data
2. Store configurations in homelab Git repo
3. Document custom settings
4. Create restore procedures
```
---
## 📚 Related Documentation
- [Tailscale Setup Guide](../docs/infrastructure/tailscale-setup-guide.md)
- [Travel Networking Guide](../docs/infrastructure/comprehensive-travel-setup.md)
- [Plex Configuration](../docs/services/individual/plex.md)
- [Home Assistant Integration](../docs/services/individual/home-assistant.md)
---
**💡 Pro Tip**: The NVIDIA Shield TV Pro is an incredibly versatile travel companion. With proper setup, it provides seamless access to your entire homelab infrastructure from anywhere in the world, making travel feel like home.
**🔄 Maintenance**: Update this configuration monthly and test all functionality before important trips.


@@ -0,0 +1,5 @@
vish@pi-5-kevin:~/paper $ cat start.sh
#!/bin/bash
java -Xms2G -Xmx4G -jar paper-1.21.7-26.jar nogui
#Run this in a screen
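
As an alternative to screen, the same launch can be wrapped in a systemd unit so the server starts on boot and restarts on crash. A sketch only: the unit name, user, and working directory are assumptions based on the prompt and script above.

```ini
# /etc/systemd/system/paper.service (hypothetical unit)
[Unit]
Description=Paper Minecraft server
After=network.target

[Service]
User=vish
WorkingDirectory=/home/vish/paper
ExecStart=/usr/bin/java -Xms2G -Xmx4G -jar paper-1.21.7-26.jar nogui
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable with `sudo systemctl enable --now paper`, then follow logs via `journalctl -u paper -f`.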


@@ -0,0 +1,67 @@
# minecraft server.properties
# Minecraft server properties - Optimized for Raspberry Pi 5 (8GB RAM) Creative Server
# --- Gameplay Settings ---
gamemode=creative
difficulty=peaceful
pvp=false
spawn-protection=0
allow-flight=true
generate-structures=true
level-name=world
level-seed=
level-type=minecraft\:flat
# --- Server Limits & Performance ---
max-players=10
view-distance=6
simulation-distance=4
max-tick-time=100000
sync-chunk-writes=false
entity-broadcast-range-percentage=75
max-world-size=29999984
# --- Networking ---
server-ip=
server-port=25565
rate-limit=10
network-compression-threshold=512
use-native-transport=true
# --- Online Access ---
online-mode=true
enforce-secure-profile=false
prevent-proxy-connections=false
white-list=true
enforce-whitelist=true
# --- RCON/Query (disabled for now) ---
enable-rcon=false
rcon.port=25575
rcon.password=
query.port=25565
# --- Other Options ---
motd=Welcome to Kevin's world
op-permission-level=4
function-permission-level=2
player-idle-timeout=0
text-filtering-config=
text-filtering-version=0
resource-pack=
resource-pack-sha1=
resource-pack-id=
require-resource-pack=false
resource-pack-prompt=
initial-enabled-packs=vanilla
initial-disabled-packs=
bug-report-link=
broadcast-console-to-ops=true
broadcast-rcon-to-ops=true
debug=false
enable-command-block=false
enable-jmx-monitoring=false
pause-when-empty-seconds=-1
accepts-transfers=false


@@ -0,0 +1,28 @@
# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun
services:
diun:
image: crazymax/diun:latest
container_name: diun
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- diun-data:/data
environment:
LOG_LEVEL: info
DIUN_WATCH_WORKERS: "20"
DIUN_WATCH_SCHEDULE: "0 9 * * 1"
DIUN_WATCH_JITTER: 30s
DIUN_PROVIDERS_DOCKER: "true"
DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
DIUN_NOTIF_NTFY_TOPIC: "diun"
restart: unless-stopped
volumes:
diun-data:


@@ -0,0 +1,15 @@
services:
dozzle-agent:
image: amir20/dozzle:latest
container_name: dozzle-agent
command: agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "7007:7007"
restart: unless-stopped
healthcheck:
test: ["CMD", "/dozzle", "healthcheck"]
interval: 30s
timeout: 5s
retries: 3


@@ -0,0 +1,15 @@
# Glances - Real-time system monitoring
# Web UI: http://<host-ip>:61208
# Provides: CPU, memory, disk, network, Docker container stats
services:
glances:
image: nicolargo/glances:latest
container_name: glances
pid: host
network_mode: host
volumes:
- /var/run/docker.sock:/var/run/docker.sock
environment:
- GLANCES_OPT=--webserver
restart: unless-stopped


@@ -0,0 +1,67 @@
# Immich - Photo/video backup solution
# URL: https://photos.vishconcord.synology.me
# Port: 2283
# Google Photos alternative with ML-powered features
version: "3.8"
name: immich
services:
immich-server:
container_name: immich_server
image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
volumes:
- ${UPLOAD_LOCATION}:/data
- /etc/localtime:/etc/localtime:ro
env_file:
- .env
ports:
- "2283:2283"
depends_on:
- redis
- database
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:2283/api/server-info"]
interval: 30s
timeout: 5s
retries: 5
# You can enable this later if you really want object detection or face recognition.
# It'll work on the Pi 5, but very, very slowly.
# immich-machine-learning:
# container_name: immich_machine_learning
# image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
# volumes:
# - model-cache:/cache
# env_file:
# - .env
# restart: unless-stopped
# healthcheck:
# disable: false
redis:
container_name: immich_redis
image: docker.io/valkey/valkey:8-bookworm
restart: unless-stopped
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 5s
retries: 5
database:
container_name: immich_postgres
image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0
environment:
POSTGRES_PASSWORD: "REDACTED_PASSWORD"
POSTGRES_USER: ${DB_USERNAME}
POSTGRES_DB: ${DB_DATABASE_NAME}
POSTGRES_INITDB_ARGS: "--data-checksums"
volumes:
- ${DB_DATA_LOCATION}:/var/lib/postgresql/data
shm_size: 128mb
restart: unless-stopped
volumes:
model-cache:


@@ -0,0 +1,22 @@
# Samba share on rpi5-vish (192.168.0.66)
# Shares the NVMe storagepool for access by other hosts on the LAN
#
# Mounted by:
# - homelab-vm: /mnt/pi5_storagepool (creds: /etc/samba/.pi5_credentials)
# - Atlantis: /volume1/pi5_storagepool (creds: /root/.pi5_smb_creds)
#
# To apply: copy relevant [storagepool] block into /etc/samba/smb.conf on pi-5
# Set SMB password: echo -e 'PASSWORD\nPASSWORD' | sudo smbpasswd -a vish -s
#
# pi-5 also mounts from Atlantis via NFS:
# /mnt/atlantis_data → 192.168.0.200:/volume1/data (media/torrents/usenet)
[storagepool]
path = /mnt/storagepool
browseable = yes
read only = no
guest ok = no
valid users = vish
force user = vish
create mask = 0664
directory mask = 0775
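
The header above notes that homelab-vm mounts this share at `/mnt/pi5_storagepool` using `/etc/samba/.pi5_credentials`. A CIFS fstab line along those lines (a sketch; the mount options are assumptions, only the paths come from the header):

```
# /etc/fstab on homelab-vm (sketch)
//192.168.0.66/storagepool  /mnt/pi5_storagepool  cifs  credentials=/etc/samba/.pi5_credentials,uid=vish,gid=vish,iocharset=utf8,_netdev  0  0
```

The `_netdev` option defers the mount until networking is up, which matters on hosts that boot faster than the LAN link.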


@@ -0,0 +1,27 @@
# Scrutiny Collector — pi-5 (Raspberry Pi 5)
#
# Ships SMART data to the hub on homelab-vm.
# pi-5 has 2 NVMe drives (M.2 HAT):
# - nvme0n1: Micron 7450 480GB
# - nvme1n1: Samsung 970 EVO Plus 500GB
# NVMe not auto-discovered by smartctl --scan; uses explicit config.
# collector.yaml lives at: /home/vish/scrutiny/collector.yaml
#
# Hub: http://100.67.40.126:8090
services:
scrutiny-collector:
image: ghcr.io/analogj/scrutiny:master-collector
container_name: scrutiny-collector
cap_add:
- SYS_RAWIO
- SYS_ADMIN
volumes:
- /run/udev:/run/udev:ro
- /home/vish/scrutiny/collector.yaml:/opt/scrutiny/config/collector.yaml:ro
devices:
- /dev/nvme0n1
- /dev/nvme1n1
environment:
COLLECTOR_API_ENDPOINT: "http://100.67.40.126:8090"
restart: unless-stopped
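
Since `smartctl --scan` misses the NVMe drives, `collector.yaml` has to list them explicitly. A fragment in the shape Scrutiny's collector expects (a sketch from the general collector config format; verify field names against the Scrutiny docs before relying on it):

```yaml
# /home/vish/scrutiny/collector.yaml (sketch)
version: 1
host:
  id: "pi-5"
devices:
  - device: /dev/nvme0n1
    type: "nvme"
  - device: /dev/nvme1n1
    type: "nvme"
```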


@@ -0,0 +1,13 @@
# Uptime Kuma - Self-hosted monitoring tool
# Web UI: http://<host-ip>:3001
# Features: HTTP(s), TCP, Ping, DNS monitoring with notifications
services:
uptime-kuma:
image: louislam/uptime-kuma:latest
container_name: uptime-kuma
network_mode: host
volumes:
- /home/vish/docker/kuma/data:/app/data
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: unless-stopped


@@ -0,0 +1,22 @@
# docker-compose run archivebox init --setup
# docker-compose up
# echo "https://example.com" | docker-compose run archivebox add
# docker-compose run archivebox add --depth=1 https://example.com/some/feed.rss
# docker-compose run archivebox config --set PUBLIC_INDEX=True
# docker-compose run archivebox help
# Documentation:
# https://github.com/ArchiveBox/ArchiveBox/wiki/Docker#docker-compose
version: '2.4'
services:
archivebox:
image: archivebox/archivebox:master
command: server --quick-init 0.0.0.0:8000
ports:
- 8000:8000
environment:
- ALLOWED_HOSTS=*
- MEDIA_MAX_SIZE=750m
volumes:
- ./data:/data


@@ -0,0 +1,17 @@
# ChatGPT Web - AI chat
# Port: 3000
# ChatGPT web interface
version: '3.9'
services:
deiucanta:
image: 'ghcr.io/deiucanta/chatpad:latest'
restart: unless-stopped
ports:
- '5690:80'
container_name: Chatpad-AI
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:80/health"]
interval: 30s
timeout: 10s
retries: 3


@@ -0,0 +1,30 @@
# Conduit - Matrix server
# Port: 6167
# Lightweight Matrix homeserver
version: "3.9"
services:
matrix-conduit:
image: matrixconduit/matrix-conduit:latest
container_name: Matrix-Conduit
hostname: matrix-conduit
security_opt:
- no-new-privileges:true
user: 1000:1000
ports:
- "8455:6167"
volumes:
- "/volume1/docker/matrix-conduit:/var/lib/matrix-conduit/"
environment:
- CONDUIT_SERVER_NAME=vishtestingserver
- CONDUIT_DATABASE_PATH=/var/lib/matrix-conduit/
- CONDUIT_DATABASE_BACKEND=rocksdb
- CONDUIT_PORT=6167
- CONDUIT_MAX_REQUEST_SIZE=20000000
- CONDUIT_ALLOW_REGISTRATION=true
- CONDUIT_ALLOW_FEDERATION=true
- CONDUIT_TRUSTED_SERVERS=["matrix.org"]
- CONDUIT_MAX_CONCURRENT_REQUESTS=250
- CONDUIT_ADDRESS=0.0.0.0
- CONDUIT_CONFIG=''
restart: unless-stopped


@@ -0,0 +1,9 @@
version: '3.9'
services:
drawio:
image: jgraph/drawio
restart: unless-stopped
ports:
- '8443:8443'
- '5022:8080'
container_name: drawio


@@ -0,0 +1,15 @@
# Element Web - Matrix client
# Port: 80
# Matrix chat web client
version: '3'
services:
element-web:
image: vectorim/element-web:latest
container_name: element-web
restart: unless-stopped
volumes:
- /home/vish/docker/elementweb/config.json:/app/config.json
ports:
- 9000:80


@@ -0,0 +1,88 @@
# PhotoPrism - Photo management
# Port: 2342
# AI-powered photo management
version: "3.9"
services:
db:
image: mariadb:jammy
container_name: PhotoPrism-DB
hostname: photoprism-db
mem_limit: 1g
cpu_shares: 768
security_opt:
- no-new-privileges:true
- seccomp:unconfined
- apparmor:unconfined
user: 1000:1000
healthcheck:
test: ["CMD-SHELL", "mysqladmin ping -u root -p$$MYSQL_ROOT_PASSWORD | grep 'mysqld is alive' || exit 1"]
volumes:
- /home/vish/docker/photoprism/db:/var/lib/mysql:rw
environment:
TZ: America/Los_Angeles
MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
MYSQL_DATABASE: photoprism
MYSQL_USER: photoprism-user
MYSQL_PASSWORD: "REDACTED_PASSWORD"
restart: on-failure:5
photoprism:
image: photoprism/photoprism:latest
container_name: PhotoPrism
hostname: photoprism
mem_limit: 6g
cpu_shares: 1024
security_opt:
- no-new-privileges:true
- seccomp:unconfined
- apparmor:unconfined
user: 1000:1009
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:2342
ports:
- 2342:2342
volumes:
- /home/vish/docker/photoprism/import:/photoprism/import:rw # *Optional* base folder from which files can be imported to originals
- /home/vish/docker/photoprism/storage:/photoprism/storage:rw
- /home/vish/docker/photoprism/originals:/photoprism/originals:rw
# - /volume1/docker/photoprism/family:/photoprism/originals/family:rw # *Additional* media folders can be mounted like this
environment:
PHOTOPRISM_ADMIN_USER: vish
PHOTOPRISM_ADMIN_PASSWORD: "REDACTED_PASSWORD"
PHOTOPRISM_UID: 1000
PHOTOPRISM_GID: 1000
PHOTOPRISM_AUTH_MODE: password
PHOTOPRISM_SITE_URL: http://localhost:2342/
PHOTOPRISM_ORIGINALS_LIMIT: 5120
PHOTOPRISM_HTTP_COMPRESSION: gzip
PHOTOPRISM_READONLY: false
PHOTOPRISM_EXPERIMENTAL: false
PHOTOPRISM_DISABLE_CHOWN: false
PHOTOPRISM_DISABLE_WEBDAV: false
PHOTOPRISM_DISABLE_SETTINGS: false
PHOTOPRISM_DISABLE_TENSORFLOW: false
PHOTOPRISM_DISABLE_FACES: false
PHOTOPRISM_DISABLE_CLASSIFICATION: false
PHOTOPRISM_DISABLE_RAW: false
PHOTOPRISM_RAW_PRESETS: false
PHOTOPRISM_JPEG_QUALITY: 100
PHOTOPRISM_DETECT_NSFW: false
PHOTOPRISM_UPLOAD_NSFW: true
PHOTOPRISM_SPONSOR: true
PHOTOPRISM_DATABASE_DRIVER: mysql
PHOTOPRISM_DATABASE_SERVER: photoprism-db:3306
PHOTOPRISM_DATABASE_NAME: photoprism
PHOTOPRISM_DATABASE_USER: photoprism-user
PHOTOPRISM_DATABASE_PASSWORD: "REDACTED_PASSWORD"
PHOTOPRISM_WORKERS: 2
PHOTOPRISM_THUMB_FILTER: blackman # best to worst: blackman, lanczos, cubic, linear
PHOTOPRISM_APP_MODE: standalone # progressive web app MODE - fullscreen, standalone, minimal-ui, browser
# PHOTOPRISM_SITE_CAPTION: "AI-Powered Photos App"
# PHOTOPRISM_SITE_DESCRIPTION: ""
# PHOTOPRISM_SITE_AUTHOR: ""
working_dir: "/photoprism"
restart: on-failure:5
depends_on:
db:
condition: service_started


@@ -0,0 +1,24 @@
# Pi.Alert - Network scanner
# Port: 20211
# Network device monitoring
version: "3.9"
services:
pi.alert:
container_name: Pi.Alert
healthcheck:
test: curl -f http://localhost:17811/ || exit 1
mem_limit: 2g
cpu_shares: 768
security_opt:
- no-new-privileges:true
volumes:
- /home/vish/docker/pialert/config:/home/pi/pialert/config:rw
- /home/vish/docker/pialert/db:/home/pi/pialert/db:rw
- /home/vish/docker/pialert/logs:/home/pi/pialert/front/log:rw
environment:
TZ: America/Los_Angeles
PORT: 17811
network_mode: host
restart: on-failure:5
image: jokobsk/pi.alert:latest


@@ -0,0 +1,65 @@
# ProxiTok - TikTok frontend
# Port: 8080
# Privacy-respecting TikTok viewer
version: "3.9"
services:
redis:
image: redis
command: redis-server --save 60 1 --loglevel warning
container_name: ProxiTok-REDIS
hostname: proxitok-redis
mem_limit: 256m
cpu_shares: 768
security_opt:
- no-new-privileges:true
read_only: true
user: 1000:1000
healthcheck:
test: ["CMD-SHELL", "redis-cli ping || exit 1"]
restart: on-failure:5
signer:
image: ghcr.io/pablouser1/signtok:master
container_name: ProxiTok-SIGNER
hostname: proxitok-signer
mem_limit: 512m
cpu_shares: 768
security_opt:
- no-new-privileges:true
read_only: true
user: 1000:1000
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:8080/ || exit 1
restart: on-failure:5
proxitok:
image: ghcr.io/pablouser1/proxitok:master
container_name: ProxiTok
hostname: proxitok
mem_limit: 1g
cpu_shares: 768
security_opt:
- no-new-privileges:true
healthcheck:
test: stat /etc/passwd || exit 1
ports:
- 9770:80
volumes:
- proxitok-cache:/cache
environment:
LATTE_CACHE: /cache
API_CACHE: redis
REDIS_HOST: proxitok-redis
REDIS_PORT: 6379
API_SIGNER: remote
API_SIGNER_URL: http://proxitok-signer:8080/signature
restart: on-failure:5
depends_on:
redis:
condition: service_healthy
signer:
condition: service_healthy
volumes:
proxitok-cache:


@@ -0,0 +1,145 @@
# Concord NUC
**Hostname**: concord-nuc / vish-concord-nuc
**IP Address**: 192.168.68.100 (static, eno1)
**Tailscale IP**: 100.72.55.21
**OS**: Ubuntu (cloud-init based)
**SSH**: `ssh vish-concord-nuc` (via Tailscale — see `~/.ssh/config`)
---
## Network Configuration
### Static IP Setup
`eno1` is configured with a **static IP** (`192.168.68.100/22`) via netplan. This is required because AdGuard Home binds its DNS listener to a specific IP, and DHCP lease changes would cause it to crash.
**Netplan config**: `/etc/netplan/50-cloud-init.yaml`
```yaml
network:
ethernets:
eno1:
dhcp4: false
addresses:
- 192.168.68.100/22
routes:
- to: default
via: 192.168.68.1
nameservers:
addresses:
- 9.9.9.9
- 1.1.1.1
version: 2
wifis:
wlp1s0:
access-points:
This_Wifi_Sucks:
password: "REDACTED_PASSWORD"
dhcp4: true
```
**Cloud-init is disabled** from managing network config:
`/etc/cloud/cloud.cfg.d/99-disable-network-config.cfg` — prevents reboots from reverting to DHCP.
> **Warning**: If you ever re-enable cloud-init networking or wipe this file, eno1 will revert to DHCP and AdGuard will start crash-looping on the next restart. See the Troubleshooting section below.
---
## Services
| Service | Port | URL |
|---------|------|-----|
| AdGuard Home (Web UI) | 9080 | http://192.168.68.100:9080 |
| AdGuard Home (DNS) | 53 | 192.168.68.100:53, 100.72.55.21:53 |
| Home Assistant | - | see homeassistant.yaml |
| Plex | - | see plex.yaml |
| Syncthing | - | see syncthing.yaml |
| Invidious | 3000 | https://in.vish.gg (public), http://192.168.68.100:3000 |
| Materialious | 3001 | http://192.168.68.100:3001 |
| YourSpotify | 4000, 15000 | see yourspotify.yaml |
---
## Deployed Stacks
| Compose File | Service | Notes |
|-------------|---------|-------|
| `adguard.yaml` | AdGuard Home | DNS ad blocker, binds to 192.168.68.100 |
| `homeassistant.yaml` | Home Assistant | Home automation |
| `plex.yaml` | Plex | Media server |
| `syncthing.yaml` | Syncthing | File sync |
| `wireguard.yaml` | WireGuard / wg-easy | VPN |
| `dyndns_updater.yaml` | DynDNS | Dynamic DNS |
| `node-exporter.yaml` | Node Exporter | Prometheus metrics |
| `piped.yaml` | Piped | YouTube alternative frontend |
| `yourspotify.yaml` | YourSpotify | Spotify stats |
| `invidious/invidious.yaml` | Invidious + Companion + DB + Materialious | YouTube frontend — https://in.vish.gg |
---
## Troubleshooting
### AdGuard crash-loops on startup
**Symptom**: `docker ps` shows AdGuard as "Restarting" or "Up Less than a second"
**Cause**: AdGuard binds DNS to a specific IP (`192.168.68.100`). If the host's IP changes (DHCP), or if AdGuard rewrites its config to the current DHCP address, it will fail to bind on next start.
**Diagnose**:
```bash
docker logs AdGuard --tail 20
# Look for: "bind: cannot assign requested address"
# The log will show which IP it tried to use
```
**Fix**:
```bash
# 1. Check what IP AdGuard thinks it should use
sudo grep -A3 'bind_hosts' /home/vish/docker/adguard/config/AdGuardHome.yaml
# 2. Check what IP eno1 actually has
ip addr show eno1 | grep 'inet '
# 3. If they don't match, update the config
sudo sed -i 's/- 192.168.68.XXX/- 192.168.68.100/' /home/vish/docker/adguard/config/AdGuardHome.yaml
# 4. Restart AdGuard
docker restart AdGuard
```
**If the host IP has reverted to DHCP** (e.g. after a reboot wiped the static config):
```bash
# Re-apply static IP
sudo netplan apply
# Verify
ip addr show eno1 | grep 'inet '
# Should show: inet 192.168.68.100/22
```
---
## Incident History
### 2026-02-22 — AdGuard crash-loop / IP mismatch
- **Root cause**: Host had drifted from `192.168.68.100` to DHCP-assigned `192.168.68.87`. AdGuard briefly started, rewrote its config to `.87`, then the static IP was applied and `.87` was gone — causing a bind failure loop.
- **Resolution**:
1. Disabled cloud-init network management
2. Set `eno1` to static `192.168.68.100/22` via netplan
3. Corrected `AdGuardHome.yaml` `bind_hosts` back to `.100`
4. Restarted AdGuard — stable
---
### 2026-02-27 — Invidious 502 / crash-loop
- **Root cause 1**: PostgreSQL 14 defaults `pg_hba.conf` to `scram-sha-256` for host connections. Invidious's Crystal DB driver does not support scram-sha-256, causing a "password authentication failed" crash loop even with correct credentials.
- **Fix**: Changed last line of `/var/lib/postgresql/data/pg_hba.conf` in the `invidious-db` container from `host all all all scram-sha-256` to `host all all 172.21.0.0/16 trust`, then ran `SELECT pg_reload_conf();`.
- **Root cause 2**: Portainer had saved the literal string `REDACTED_SECRET_KEY` as the `SERVER_SECRET_KEY` env var for the companion container (Portainer's secret-redaction placeholder was baked in as the real value). The latest companion image validates the key strictly (exactly 16 alphanumeric chars), causing it to crash.
- **Fix**: Updated the Portainer stack file via API (`PUT /api/stacks/584`), replacing all `REDACTED_*` placeholders with the real values.
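
The pg_hba fix from root cause 1 amounts to this edit to the last line of the container's `pg_hba.conf` (the `172.21.0.0/16` subnet being the compose network, per the notes above):

```diff
# /var/lib/postgresql/data/pg_hba.conf (inside invidious-db), last line
- host all all all            scram-sha-256
+ host all all 172.21.0.0/16  trust
```

Note `trust` disables password checks for that subnet entirely; it is acceptable here only because the subnet is a private compose network.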
---
*Last updated: 2026-02-27*


@@ -0,0 +1,23 @@
# AdGuard Home - DNS ad blocker
# Web UI: http://192.168.68.100:9080
# DNS: 192.168.68.100:53, 100.72.55.21:53
#
# IMPORTANT: This container binds DNS to 192.168.68.100 (configured in AdGuardHome.yaml).
# The host MUST have a static IP of 192.168.68.100 on eno1, otherwise AdGuard will
# crash-loop with "bind: cannot assign requested address".
# See README.md for static IP setup and troubleshooting.
services:
adguard:
image: adguard/adguardhome
container_name: AdGuard
mem_limit: 2g
cpu_shares: 768
security_opt:
- no-new-privileges:true
restart: unless-stopped
network_mode: host
volumes:
- /home/vish/docker/adguard/config:/opt/adguardhome/conf:rw
- /home/vish/docker/adguard/data:/opt/adguardhome/work:rw
environment:
TZ: America/Los_Angeles

View File

@@ -0,0 +1,28 @@
# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun
services:
diun:
image: crazymax/diun:latest
container_name: diun
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- diun-data:/data
environment:
LOG_LEVEL: info
DIUN_WATCH_WORKERS: "20"
DIUN_WATCH_SCHEDULE: "0 9 * * 1"
DIUN_WATCH_JITTER: 30s
DIUN_PROVIDERS_DOCKER: "true"
DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
DIUN_NOTIF_NTFY_TOPIC: "diun"
restart: unless-stopped
volumes:
diun-data:

View File

@@ -0,0 +1,28 @@
pds-g^KU_n-Ck6JOm^BQu9pcct0DI/MvsCnViM6kGHGVCigvohyf/HHHfHG8c=
8. Start the Server
Use screen or tmux to keep the server running in the background.

Start Master (Overworld) Server:

    cd ~/dst/bin
    screen -S dst-master ./dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Master

Start Caves Server (open a new session):

    screen -S dst-caves ./dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Caves

Optional systemd unit fragment for the Master shard:

    [Service]
    User=dst
    ExecStart=/home/dstserver/dst/bin/dontstarve_dedicated_server_nullrenderer -cluster MyCluster -shard Master
    Restart=always

View File

@@ -0,0 +1,15 @@
services:
dozzle-agent:
image: amir20/dozzle:latest
container_name: dozzle-agent
command: agent
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "7007:7007"
restart: unless-stopped
healthcheck:
test: ["CMD", "/dozzle", "healthcheck"]
interval: 30s
timeout: 5s
retries: 3

View File

@@ -0,0 +1,17 @@
# Dynamic DNS Updater
# Updates DNS records when public IP changes
version: '3.8'
services:
ddns-vish-13340:
image: favonia/cloudflare-ddns:latest
network_mode: host
restart: unless-stopped
user: "1000:1000"
read_only: true
cap_drop: [all]
security_opt: [no-new-privileges:true]
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
- DOMAINS=api.vish.gg,api.vp.vish.gg,in.vish.gg,client.spotify.vish.gg,spotify.vish.gg
- PROXIED=false

View File

@@ -0,0 +1,55 @@
# Home Assistant - Smart home automation
# Port: 8123
# Open source home automation platform
version: '3'
services:
homeassistant:
container_name: homeassistant
image: ghcr.io/home-assistant/home-assistant:stable
network_mode: host
restart: unless-stopped
environment:
- TZ=America/Los_Angeles
volumes:
- /home/vish/docker/homeassistant:/config
- /etc/localtime:/etc/localtime:ro
matter-server:
container_name: matter-server
image: ghcr.io/home-assistant-libs/python-matter-server:stable
network_mode: host
restart: unless-stopped
volumes:
- /home/vish/docker/matter:/data
piper:
container_name: piper
image: rhasspy/wyoming-piper:latest
restart: unless-stopped
ports:
- "10200:10200"
volumes:
- /home/vish/docker/piper:/data
command: --voice en_US-lessac-medium
whisper:
container_name: whisper
image: rhasspy/wyoming-whisper:latest
restart: unless-stopped
ports:
- "10300:10300"
volumes:
- /home/vish/docker/whisper:/data
command: --model tiny-int8 --language en
openwakeword:
container_name: openwakeword
image: rhasspy/wyoming-openwakeword:latest
restart: unless-stopped
ports:
- "10400:10400"
command: --preload-model ok_nabu
networks:
default:
name: homeassistant-stack

View File

@@ -0,0 +1,13 @@
#!/bin/bash
# Invidious DB initialisation script
# Runs once on first container start (docker-entrypoint-initdb.d).
#
# Adds a pg_hba.conf rule allowing connections from any Docker subnet
# using trust auth. Without this, PostgreSQL rejects the invidious
# container when the Docker network is assigned a different subnet after
# a recreate (the default pg_hba.conf only covers localhost).
set -e
# Allow connections from any host on the Docker bridge network
echo "host all all 0.0.0.0/0 trust" >> /var/lib/postgresql/data/pg_hba.conf

View File

@@ -0,0 +1,115 @@
version: "3"
configs:
materialious_nginx:
content: |
events { worker_connections 1024; }
http {
default_type application/octet-stream;
include /etc/nginx/mime.types;
server {
listen 80;
# The video player passes dashUrl as a relative path that resolves
# to this origin — proxy Invidious API/media paths to local service.
# (in.vish.gg resolves to the external IP which is unreachable via
# hairpin NAT from inside Docker; invidious:3000 is on same network)
location ~ ^/(api|companion|vi|ggpht|videoplayback|sb|s_p|ytc|storyboards) {
proxy_pass http://invidious:3000;
proxy_set_header Host $$host;
proxy_set_header X-Real-IP $$remote_addr;
proxy_set_header X-Forwarded-For $$proxy_add_x_forwarded_for;
}
location / {
root /usr/share/nginx/html;
try_files $$uri /index.html;
}
}
}
services:
invidious:
image: quay.io/invidious/invidious:latest
platform: linux/amd64
restart: unless-stopped
ports:
- "3000:3000"
environment:
INVIDIOUS_CONFIG: |
db:
dbname: invidious
user: kemal
password: "REDACTED_PASSWORD"
host: invidious-db
port: 5432
check_tables: true
invidious_companion:
- private_url: "http://companion:8282/companion"
invidious_companion_key: "pha6nuser7ecei1E"
hmac_key: "Kai5eexiewohchei"
healthcheck:
test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/trending || exit 1
interval: 30s
timeout: 5s
retries: 2
logging:
options:
max-size: "1G"
max-file: "4"
depends_on:
- invidious-db
- companion
companion:
image: quay.io/invidious/invidious-companion:latest
platform: linux/amd64
environment:
- SERVER_SECRET_KEY=pha6nuser7ecei1E
restart: unless-stopped
cap_drop:
- ALL
read_only: true
volumes:
- companioncache:/var/tmp/youtubei.js:rw
security_opt:
- no-new-privileges:true
logging:
options:
max-size: "1G"
max-file: "4"
invidious-db:
image: postgres:14
restart: unless-stopped
environment:
POSTGRES_DB: invidious
POSTGRES_USER: kemal
POSTGRES_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
volumes:
- postgresdata:/var/lib/postgresql/data
- ./config/sql:/config/sql
- ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
materialious:
image: wardpearce/materialious:latest
container_name: materialious
restart: unless-stopped
environment:
VITE_DEFAULT_INVIDIOUS_INSTANCE: "https://in.vish.gg"
configs:
- source: materialious_nginx
target: /etc/nginx/nginx.conf
ports:
- "3001:80"
logging:
options:
max-size: "1G"
max-file: "4"
volumes:
postgresdata:
companioncache:

View File

@@ -0,0 +1,4 @@
vish@vish-concord-nuc:~/invidious/invidious$ pwgen 16 1 # for Invidious (HMAC_KEY)
Kai5eexiewohchei
vish@vish-concord-nuc:~/invidious/invidious$ pwgen 16 1 # for Invidious companion (invidious_companion_key)
pha6nuser7ecei1E

View File

@@ -0,0 +1,65 @@
version: "3.8" # Upgrade to a newer version for better features and support
services:
invidious:
image: quay.io/invidious/invidious:latest
restart: unless-stopped
ports:
- "3000:3000"
environment:
INVIDIOUS_CONFIG: |
db:
dbname: invidious
user: kemal
password: "REDACTED_PASSWORD"
host: invidious-db
port: 5432
check_tables: true
signature_server: inv_sig_helper:12999
visitor_data: ""
po_token: "REDACTED_TOKEN"
hmac_key: "9Uncxo4Ws54s7dr0i3t8"
healthcheck:
test: ["CMD", "wget", "-nv", "--tries=1", "--spider", "http://127.0.0.1:3000/api/v1/trending"]
interval: 30s
timeout: 5s
retries: 2
logging:
options:
max-size: "1G"
max-file: "4"
depends_on:
- invidious-db
inv_sig_helper:
image: quay.io/invidious/inv-sig-helper:latest
init: true
command: ["--tcp", "0.0.0.0:12999"]
environment:
- RUST_LOG=info
restart: unless-stopped
cap_drop:
- ALL
read_only: true
security_opt:
- no-new-privileges:true
invidious-db:
image: docker.io/library/postgres:14
restart: unless-stopped
volumes:
- postgresdata:/var/lib/postgresql/data
- ./config/sql:/config/sql
- ./docker/init-invidious-db.sh:/docker-entrypoint-initdb.d/init-invidious-db.sh
environment:
POSTGRES_DB: invidious
POSTGRES_USER: kemal
POSTGRES_PASSWORD: "REDACTED_PASSWORD"
healthcheck:
test: ["CMD-SHELL", "pg_isready -U $$POSTGRES_USER -d $$POSTGRES_DB"]
interval: 30s
timeout: 5s
retries: 3
volumes:
postgresdata:

View File

@@ -0,0 +1,2 @@
Docker all-in-one refresh (note: --volumes also removes named volumes and their data)
docker-compose down --volumes --remove-orphans && docker-compose pull && docker-compose up -d

View File

@@ -0,0 +1,28 @@
# Redirect all HTTP traffic to HTTPS
server {
listen 80;
server_name client.spotify.vish.gg;
return 301 https://$host$request_uri;
}
# HTTPS configuration for the subdomain
server {
listen 443 ssl;
server_name client.spotify.vish.gg;
# SSL Certificates (managed by Certbot)
ssl_certificate /etc/letsencrypt/live/client.spotify.vish.gg/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/client.spotify.vish.gg/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
# Proxy to Docker container
location / {
proxy_pass http://127.0.0.1:4000; # Maps to your Docker container
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

View File

@@ -0,0 +1,63 @@
server {
if ($host = in.vish.gg) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
server_name in.vish.gg;
# Redirect all HTTP traffic to HTTPS
return 301 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name in.vish.gg;
# SSL Certificates (Certbot paths)
ssl_certificate /etc/letsencrypt/live/in.vish.gg/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/in.vish.gg/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# --- Reverse Proxy to Invidious ---
location / {
proxy_pass http://127.0.0.1:3000;
proxy_http_version 1.1;
# Required headers for reverse proxying
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
# WebSocket and streaming stability
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
# Disable buffering for video streams
proxy_buffering off;
proxy_request_buffering off;
# Avoid premature timeouts during long playback
proxy_read_timeout 600s;
proxy_send_timeout 600s;
}
# Cache static assets (images, css, js) for better performance
location ~* \.(?:jpg|jpeg|png|gif|ico|css|js|webp)$ {
expires 30d;
add_header Cache-Control "public, no-transform";
proxy_pass http://127.0.0.1:3000;
}
# Security headers (optional but sensible)
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options SAMEORIGIN;
add_header Referrer-Policy same-origin;
}

View File

@@ -0,0 +1,28 @@
# Redirect HTTP to HTTPS
server {
listen 80;
server_name spotify.vish.gg;
return 301 https://$host$request_uri;
}
# HTTPS server block
server {
listen 443 ssl;
server_name spotify.vish.gg;
# SSL Certificates (managed by Certbot)
ssl_certificate /etc/letsencrypt/live/spotify.vish.gg/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/spotify.vish.gg/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Proxy requests to backend API
location / {
proxy_pass http://127.0.0.1:15000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

View File

@@ -0,0 +1,74 @@
# Redirect HTTP to HTTPS
server {
listen 80;
server_name vp.vish.gg api.vp.vish.gg proxy.vp.vish.gg;
return 301 https://$host$request_uri;
}
# HTTPS Reverse Proxy for Piped
server {
listen 443 ssl http2;
server_name vp.vish.gg;
# SSL Certificates (managed by Certbot)
ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Proxy requests to Piped Frontend (the stack's nginx container, published on host port 8080)
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# HTTPS Reverse Proxy for Piped API
server {
listen 443 ssl http2;
server_name api.vp.vish.gg;
# SSL Certificates
ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Proxy requests to Piped API backend
location / {
proxy_pass http://127.0.0.1:8080;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
# HTTPS Reverse Proxy for Piped Proxy (for video streaming)
server {
listen 443 ssl http2;
server_name proxy.vp.vish.gg;
# SSL Certificates
ssl_certificate /etc/letsencrypt/live/vp.vish.gg/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/vp.vish.gg/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
# Proxy video playback requests through ytproxy
location ~ (/videoplayback|/api/v4/|/api/manifest/) {
include snippets/ytproxy.conf;
add_header Cache-Control private always;
proxy_hide_header Access-Control-Allow-Origin;
}
location / {
include snippets/ytproxy.conf;
add_header Cache-Control "public, max-age=604800";
proxy_hide_header Access-Control-Allow-Origin;
}
}

View File

@@ -0,0 +1,24 @@
# Node Exporter - Prometheus metrics exporter for hardware/OS metrics
# Exposes metrics on port 9101 (changed from 9100 due to host conflict)
# Used by: Grafana/Prometheus monitoring stack
# Note: Using bridge network with port mapping instead of host network
# to avoid conflict with host-installed node_exporter
version: "3.8"
services:
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
ports:
- "9101:9100"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/rootfs'
- '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
restart: unless-stopped

View File

@@ -0,0 +1,79 @@
# Piped - YouTube frontend
# Port: 8080
# Privacy-respecting YouTube
services:
piped-frontend:
image: 1337kavin/piped-frontend:latest
restart: unless-stopped
depends_on:
- piped
environment:
BACKEND_HOSTNAME: api.vp.vish.gg
HTTP_MODE: https
container_name: piped-frontend
piped-proxy:
image: 1337kavin/piped-proxy:latest
restart: unless-stopped
environment:
- UDS=1
volumes:
- piped-proxy:/app/socket
container_name: piped-proxy
piped:
image: 1337kavin/piped:latest
restart: unless-stopped
volumes:
- ./config/config.properties:/app/config.properties:ro
depends_on:
- postgres
container_name: piped-backend
bg-helper:
image: 1337kavin/bg-helper-server:latest
restart: unless-stopped
container_name: piped-bg-helper
nginx:
image: nginx:mainline-alpine
restart: unless-stopped
ports:
- "8080:80"
volumes:
- ./config/nginx.conf:/etc/nginx/nginx.conf:ro
- ./config/pipedapi.conf:/etc/nginx/conf.d/pipedapi.conf:ro
- ./config/pipedproxy.conf:/etc/nginx/conf.d/pipedproxy.conf:ro
- ./config/pipedfrontend.conf:/etc/nginx/conf.d/pipedfrontend.conf:ro
- ./config/ytproxy.conf:/etc/nginx/snippets/ytproxy.conf:ro
- piped-proxy:/var/run/ytproxy
container_name: nginx
depends_on:
- piped
- piped-proxy
- piped-frontend
labels:
- "traefik.enable=true"
- "traefik.http.routers.piped.rule=Host(`FRONTEND_HOSTNAME`, `BACKEND_HOSTNAME`, `PROXY_HOSTNAME`)"
- "traefik.http.routers.piped.entrypoints=websecure"
- "traefik.http.services.piped.loadbalancer.server.port=8080"
postgres:
image: pgautoupgrade/pgautoupgrade:16-alpine
restart: unless-stopped
volumes:
- ./data/db:/var/lib/postgresql/data
environment:
- POSTGRES_DB=piped
- POSTGRES_USER=piped
- POSTGRES_PASSWORD="REDACTED_PASSWORD"
container_name: postgres
watchtower:
image: containrrr/watchtower
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /etc/timezone:/etc/timezone:ro
environment:
- WATCHTOWER_CLEANUP=true
- WATCHTOWER_INCLUDE_RESTARTING=true
container_name: watchtower
command: piped-frontend piped-backend piped-proxy piped-bg-helper varnish nginx postgres watchtower
volumes:
piped-proxy: null

View File

@@ -0,0 +1,28 @@
# Plex Media Server
# Web UI: http://<host-ip>:32400/web
# Uses Intel QuickSync for hardware transcoding (via /dev/dri)
# Media library mounted from NAS at /mnt/nas
services:
plex:
image: linuxserver/plex:latest
container_name: plex
network_mode: host
environment:
- PUID=1000
- PGID=1000
- TZ=America/Los_Angeles
- UMASK=022
- VERSION=docker
# Get claim token from: https://www.plex.tv/claim/
- PLEX_CLAIM=claim-REDACTED_APP_PASSWORD
volumes:
- /home/vish/docker/plex/config:/config
- /mnt/nas/:/data/media
devices:
# Intel QuickSync for hardware transcoding
- /dev/dri:/dev/dri
security_opt:
- no-new-privileges:true
restart: on-failure:10
# custom-cont-init.d/01-wait-for-nas.sh waits up to 120s for /mnt/nas before starting Plex

View File

@@ -0,0 +1,22 @@
# Portainer Edge Agent - concord-nuc
# Connects to Portainer server on Atlantis (100.83.230.112:8000)
# Deploy: docker compose -f portainer_agent.yaml up -d
services:
portainer_edge_agent:
image: portainer/agent:2.33.7
container_name: portainer_edge_agent
restart: unless-stopped
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/lib/docker/volumes:/var/lib/docker/volumes
- /:/host
- portainer_agent_data:/data
environment:
EDGE: "1"
EDGE_ID: "be02f203-f10c-471a-927c-9ca2adac254c"
EDGE_KEY: "aHR0cDovLzEwMC44My4yMzAuMTEyOjEwMDAwfGh0dHA6Ly8xMDAuODMuMjMwLjExMjo4MDAwfGtDWjVkTjJyNXNnQTJvMEF6UDN4R3h6enBpclFqa05Wa0FCQkU0R1IxWFU9fDQ0MzM5OA"
EDGE_INSECURE_POLL: "1"
volumes:
portainer_agent_data:

View File

@@ -0,0 +1,22 @@
# Scrutiny Collector — concord-nuc (Intel NUC)
#
# Ships SMART data to the hub on homelab-vm.
# NUC typically has one internal NVMe + optionally a SATA SSD.
# Adjust device list: run `lsblk` to see actual drives.
#
# Hub: http://100.67.40.126:8090
services:
scrutiny-collector:
image: ghcr.io/analogj/scrutiny:master-collector
container_name: scrutiny-collector
cap_add:
- SYS_RAWIO
- SYS_ADMIN
volumes:
- /run/udev:/run/udev:ro
devices:
- /dev/sda
environment:
COLLECTOR_API_ENDPOINT: "http://100.67.40.126:8090"
restart: unless-stopped

View File

@@ -0,0 +1,19 @@
# Syncthing - File synchronization
# Port: 8384 (web), 22000 (sync)
# Continuous file synchronization between devices
services:
syncthing:
container_name: syncthing
ports:
- 8384:8384
- 22000:22000/tcp
- 22000:22000/udp
- 21027:21027/udp
environment:
- TZ=America/Los_Angeles
volumes:
- /home/vish/docker/syncthing/config:/config
- /home/vish/docker/syncthing/data1:/data1
- /home/vish/docker/syncthing/data2:/data2
restart: unless-stopped
image: ghcr.io/linuxserver/syncthing

View File

@@ -0,0 +1,25 @@
# WireGuard - VPN server
# Port: 51820/udp
# Modern, fast VPN tunnel
services:
wg-easy:
container_name: wg-easy
image: ghcr.io/wg-easy/wg-easy
environment:
- HASH_PASSWORD="REDACTED_PASSWORD"
- WG_HOST=vishconcord.tplinkdns.com
volumes:
- ./config:/etc/wireguard
- /lib/modules:/lib/modules
ports:
- "51820:51820/udp"
- "51821:51821/tcp"
restart: unless-stopped
cap_add:
- NET_ADMIN
- SYS_MODULE
sysctls:
- net.ipv4.ip_forward=1
- net.ipv4.conf.all.src_valid_mark=1

View File

@@ -0,0 +1,49 @@
# Your Spotify - Listening statistics
# Ports: 15000 (server API), 4000 (web client)
# Self-hosted Spotify listening history tracker
version: "3.8"
services:
server:
image: yooooomi/your_spotify_server
restart: unless-stopped
ports:
- "15000:8080" # Expose port 15000 for backend service
depends_on:
- mongo
environment:
- API_ENDPOINT=https://spotify.vish.gg # Public URL for backend
- CLIENT_ENDPOINT=https://client.spotify.vish.gg # Public URL for frontend
- SPOTIFY_PUBLIC=d6b3bda999f042099ce79a8b6e9f9e68 # Spotify app client ID
- SPOTIFY_SECRET=72c650e7a25f441baa245b963003a672 # Spotify app client secret
- SPOTIFY_REDIRECT_URI=https://client.spotify.vish.gg/callback # Redirect URI for OAuth
- CORS=https://client.spotify.vish.gg # Allow frontend's origin
networks:
- spotify_network
mongo:
container_name: mongo
image: mongo:4.4.8
restart: unless-stopped
volumes:
- yourspotify_mongo_data:/data/db # Named volume for persistent storage
networks:
- spotify_network
web:
image: yooooomi/your_spotify_client
restart: unless-stopped
ports:
- "4000:3000" # Expose port 4000 for frontend
environment:
- API_ENDPOINT=https://spotify.vish.gg # URL for backend API
networks:
- spotify_network
volumes:
yourspotify_mongo_data:
driver: local
networks:
spotify_network:
driver: bridge

View File

@@ -0,0 +1,234 @@
# Guava - TrueNAS Scale Server
**Hostname**: guava
**IP Address**: 192.168.0.100
**Tailscale IP**: 100.75.252.64
**Domain**: guava.crista.home
**OS**: TrueNAS Scale 25.04.2.6 (Debian 12 Bookworm)
**Kernel**: 6.12.15-production+truenas
---
## Hardware Specifications
| Component | Specification |
|-----------|---------------|
| **CPU** | 12 cores |
| **RAM** | 30 GB |
| **Storage** | ZFS pools (1.5TB+ available) |
| **Docker** | 27.5.0 |
| **Compose** | v2.32.3 |
---
## Storage Layout
### Boot Pool
- `/` - Root filesystem (433GB available)
- ZFS dataset: `boot-pool/ROOT/25.04.2.6`
### Data Pool (`/mnt/data/`)
| Dataset | Size Used | Purpose |
|---------|-----------|---------|
| `data/guava_turquoise` | 3.0TB / 4.5TB | Primary storage (67% used) |
| `data/photos` | 159GB | Photo storage |
| `data/jellyfin` | 145GB | Media library |
| `data/llama` | 59GB | LLM models |
| `data/plane-data` | ~100MB | Plane.so application data |
| `data/iso` | 556MB | ISO images |
| `data/cocalc` | 324MB | Computational notebook |
| `data/website` | 59MB | Web content |
| `data/openproject` | 13MB | OpenProject (postgres) |
| `data/fasten` | 5.7MB | Health records |
| `data/fenrus` | 3.5MB | Dashboard config |
| `data/medical` | 14MB | Medical records |
| `data/truenas-exporters` | - | Prometheus exporters |
### TrueNAS Apps (`/mnt/.ix-apps/`)
- Docker storage: 28GB used
- App configs and mounts for TrueNAS-managed apps
---
## Network Configuration
| Service | Port | Protocol | URL |
|---------|------|----------|-----|
| Portainer | 31015 | HTTPS | https://guava.crista.home:31015 |
| **Plane.so** | 3080 | HTTP | **http://guava.crista.home:3080** |
| Plane.so HTTPS | 3443 | HTTPS | https://guava.crista.home:3443 |
| Jellyfin | 30013 | HTTP | http://guava.crista.home:30013 |
| Jellyfin HTTPS | 30014 | HTTPS | https://guava.crista.home:30014 |
| Gitea | 30008-30009 | HTTP | http://guava.crista.home:30008 |
| WireGuard | 51827 | UDP | - |
| wg-easy UI | 30058 | HTTP | http://guava.crista.home:30058 |
| Fenrus | 45678 | HTTP | http://guava.crista.home:45678 |
| Fasten | 9090 | HTTP | http://guava.crista.home:9090 |
| Node Exporter | 9100 | HTTP | http://guava.crista.home:9100/metrics |
| nginx | 28888 | HTTP | http://guava.crista.home:28888 |
| iperf3 | 5201 | TCP | - |
| SSH | 22 | TCP | - |
| SMB | 445 | TCP | - |
| Pi-hole DNS | 53 | TCP/UDP | - |
---
## Portainer Access
| Setting | Value |
|---------|-------|
| **URL** | `https://guava.crista.home:31015` |
| **API Endpoint** | `https://localhost:31015/api` (from guava) |
| **Endpoint ID** | 3 (local) |
| **API Token** | `ptr_REDACTED_PORTAINER_TOKEN` |
### API Examples
```bash
# List stacks
curl -sk -H 'X-API-Key: REDACTED_API_KEY' \
  'https://localhost:31015/api/stacks'
# List containers
curl -sk -H 'X-API-Key: REDACTED_API_KEY' \
  'https://localhost:31015/api/endpoints/3/docker/containers/json'
# Create stack from compose string
curl -sk -X POST \
  -H 'X-API-Key: REDACTED_API_KEY' \
  -H 'Content-Type: application/json' \
  'https://localhost:31015/api/stacks/create/standalone/string?endpointId=3' \
  -d '{"name": "my-stack", "stackFileContent": "..."}'
```
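Updating an existing stack's compose file goes through `PUT /api/stacks/{id}`, which expects the compose text JSON-encoded into `stackFileContent`. A sketch using `jq`; the demo path is arbitrary, and the body fields (`stackFileContent`, `prune`, `pullImage`) follow Portainer's stack-update API, so verify them against your Portainer version:

```bash
# Minimal compose content for demonstration only
cat > /tmp/demo-compose.yml <<'EOF'
services:
  hello:
    image: alpine:latest
EOF

# Encode the compose file into the JSON body the stack-update endpoint expects
payload=$(jq -n --rawfile compose /tmp/demo-compose.yml \
  '{stackFileContent: $compose, prune: false, pullImage: true}')

# Actual call (sketch; run from guava with a real API key and stack ID):
# curl -sk -X PUT -H 'X-API-Key: TOKEN' -H 'Content-Type: application/json' \
#   'https://localhost:31015/api/stacks/STACK_ID?endpointId=3' -d "$payload"

echo "$payload" | jq -r '.stackFileContent' | head -1
```

The `curl` is left commented since it needs a live Portainer endpoint; the `jq` step is the part people usually get wrong (posting raw YAML instead of a JSON-escaped string).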
---
## Deployed Stacks (Portainer)
| ID | Name | Status | Description |
|----|------|--------|-------------|
| 2 | nginx | ✅ Active | Reverse proxy (:28888) |
| 3 | ddns | ✅ Active | Dynamic DNS updater (crista.love) |
| 4 | llama | ⏸️ Inactive | LLM server |
| 5 | fenrus | ✅ Active | Dashboard (:45678) |
| 8 | fasten | ✅ Active | Health records (:9090) |
| 17 | node-exporter | ✅ Active | Prometheus metrics (:9100) |
| 18 | iperf3 | ✅ Active | Network speed testing (:5201) |
| 25 | cocalc | ⏸️ Inactive | Computational notebook |
| **26** | **plane-stack** | ✅ Active | **Project management (:3080)** |
### TrueNAS-Managed Apps (ix-apps)
| App | Container | Port | Description |
|-----|-----------|------|-------------|
| Portainer | ix-portainer-portainer-1 | 31015 | Container management |
| Gitea | ix-gitea-gitea-1 | 30008-30009 | Git server |
| Gitea DB | ix-gitea-postgres-1 | - | PostgreSQL for Gitea |
| Jellyfin | ix-jellyfin-jellyfin-1 | 30013, 30014 | Media server |
| WireGuard | ix-wg-easy-wg-easy-1 | 30058, 51827/udp | VPN server |
| Tailscale | ix-tailscale-tailscale-1 | - | Mesh VPN |
| Pi-hole | (configured) | 53 | DNS server |
---
## SSH Access
### Via Cloudflare Tunnel
```bash
# Install cloudflared
curl -L https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64 -o /tmp/cloudflared
chmod +x /tmp/cloudflared
# SSH config
cat >> ~/.ssh/config << 'EOF'
Host guava
HostName ruled-bowl-dos-jews.trycloudflare.com
User vish
IdentityFile ~/.ssh/id_ed25519
ProxyCommand /tmp/cloudflared access ssh --hostname %h
EOF
# Connect
ssh guava
```
### Direct (Local Network)
```bash
ssh vish@192.168.0.100
```
**Note**: Docker commands require `sudo` on guava.
---
## Services Documentation
### Plane.so
See [plane.yaml](plane.yaml) for the full stack configuration.
| Component | Container | Port | Purpose |
|-----------|-----------|------|---------|
| Frontend | plane-web | 3000 | Web UI |
| Admin | plane-admin | 3000 | Admin panel |
| Space | plane-space | 3000 | Public pages |
| API | plane-api | 8000 | Backend API |
| Worker | plane-worker | 8000 | Background jobs |
| Beat | plane-beat | 8000 | Scheduled tasks |
| Live | plane-live | 3000 | Real-time updates |
| Database | plane-db | 5432 | PostgreSQL |
| Cache | plane-redis | 6379 | Valkey/Redis |
| Queue | plane-mq | 5672 | RabbitMQ |
| Storage | plane-minio | 9000 | MinIO S3 |
| Proxy | plane-proxy | 80/443 | Caddy reverse proxy |
**Access URL**: http://guava.crista.home:3080
**Data Location**: `/mnt/data/plane-data/`
---
## Maintenance
### Backup Locations
| Data | Path | Priority |
|------|------|----------|
| Plane DB | `/mnt/data/plane-data/postgres/` | High |
| Plane Files | `/mnt/data/plane-data/minio/` | High |
| Gitea | `/mnt/.ix-apps/app_mounts/gitea/` | High |
| Jellyfin Config | `/mnt/.ix-apps/app_mounts/jellyfin/config/` | Medium |
| Photos | `/mnt/data/photos/` | High |
### Common Commands
```bash
# Check all containers
sudo docker ps -a
# View stack logs
sudo docker compose -f /path/to/stack logs -f
# Restart a stack via Portainer API
curl -sk -X POST \
-H 'X-API-Key: TOKEN' \
'https://localhost:31015/api/stacks/STACK_ID/stop?endpointId=3'
curl -sk -X POST \
-H 'X-API-Key: TOKEN' \
'https://localhost:31015/api/stacks/STACK_ID/start?endpointId=3'
```
---
## Related Documentation
- [Plane.so Service Docs](../../../docs/services/individual/plane.md)
- [TrueNAS Scale Documentation](https://www.truenas.com/docs/scale/)
- [AGENTS.md](../../../AGENTS.md) - Quick reference for all hosts
---
*Last updated: February 4, 2026*
*Verified via SSH - all services confirmed running*

View File

@@ -0,0 +1,23 @@
Guava CIFS/SMB Shares

    data              /mnt/data/passionfruit
    guava_turquoise   /mnt/data/guava_turquoise    Backup of turquoise
    photos            /mnt/data/photos

Global Configuration

    Nameserver 1:         1.1.1.1
    Nameserver 2:         192.168.0.250
    Default Route (IPv4): 192.168.0.1
    Hostname:             guava
    Domain:               local
    HTTP Proxy:           ---
    Service Announcement: NETBIOS-NS, mDNS, WS-DISCOVERY
    Additional Domains:   ---
    Hostname Database:    ---
    Outbound Network:     Allow All

View File

@@ -0,0 +1,213 @@
# Plane.so - Self-Hosted Project Management
# Deployed via Portainer on TrueNAS Scale (guava)
# Port: 3080 (HTTP), 3443 (HTTPS)
x-db-env: &db-env
PGHOST: plane-db
PGDATABASE: plane
POSTGRES_USER: plane
POSTGRES_PASSWORD: "REDACTED_PASSWORD"
POSTGRES_DB: plane
POSTGRES_PORT: 5432
PGDATA: /var/lib/postgresql/data
x-redis-env: &redis-env
REDIS_HOST: plane-redis
REDIS_PORT: 6379
REDIS_URL: redis://plane-redis:6379/
x-minio-env: &minio-env
MINIO_ROOT_USER: ${AWS_ACCESS_KEY_ID:-planeaccess}
MINIO_ROOT_PASSWORD: "REDACTED_PASSWORD"
x-aws-s3-env: &aws-s3-env
AWS_REGION: us-east-1
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID:-planeaccess}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY:-planesecret123}
AWS_S3_ENDPOINT_URL: http://plane-minio:9000
AWS_S3_BUCKET_NAME: uploads
x-proxy-env: &proxy-env
APP_DOMAIN: ${APP_DOMAIN:-guava.crista.home}
FILE_SIZE_LIMIT: 52428800
LISTEN_HTTP_PORT: 80
LISTEN_HTTPS_PORT: 443
BUCKET_NAME: uploads
SITE_ADDRESS: :80
x-mq-env: &mq-env
RABBITMQ_HOST: plane-mq
RABBITMQ_PORT: 5672
RABBITMQ_DEFAULT_USER: plane
RABBITMQ_DEFAULT_PASS: "REDACTED_PASSWORD"
RABBITMQ_DEFAULT_VHOST: plane
RABBITMQ_VHOST: plane
x-live-env: &live-env
API_BASE_URL: http://api:8000
LIVE_SERVER_SECRET_KEY: ${LIVE_SERVER_SECRET_KEY:-60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5}
x-app-env: &app-env
WEB_URL: ${WEB_URL:-http://guava.crista.home:3080}
DEBUG: 0
CORS_ALLOWED_ORIGINS: ${CORS_ALLOWED_ORIGINS:-}
GUNICORN_WORKERS: 2
USE_MINIO: 1
DATABASE_URL: postgresql://plane:${POSTGRES_PASSWORD:-REDACTED_PASSWORD}@plane-db:5432/plane
SECRET_KEY: ${SECRET_KEY:-60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5}
AMQP_URL: amqp://plane:${RABBITMQ_PASSWORD:-REDACTED_PASSWORD}@plane-mq:5672/plane
API_KEY_RATE_LIMIT: 60/minute
MINIO_ENDPOINT_SSL: 0
LIVE_SERVER_SECRET_KEY: ${LIVE_SERVER_SECRET_KEY:-60gp0byfz2dvffa45cxl20p1scy9xbpf6d8c5y0geejgkyp1b5}
services:
web:
image: artifacts.plane.so/makeplane/plane-frontend:stable
container_name: plane-web
restart: unless-stopped
depends_on:
- api
- worker
space:
image: artifacts.plane.so/makeplane/plane-space:stable
container_name: plane-space
restart: unless-stopped
depends_on:
- api
- worker
- web
admin:
image: artifacts.plane.so/makeplane/plane-admin:stable
container_name: plane-admin
restart: unless-stopped
depends_on:
- api
- web
live:
image: artifacts.plane.so/makeplane/plane-live:stable
container_name: plane-live
restart: unless-stopped
environment:
<<: [*live-env, *redis-env]
depends_on:
- api
- web
api:
image: artifacts.plane.so/makeplane/plane-backend:stable
container_name: plane-api
command: ./bin/docker-entrypoint-api.sh
restart: unless-stopped
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
plane-db:
condition: service_healthy
plane-redis:
condition: service_started
plane-mq:
condition: service_started
worker:
image: artifacts.plane.so/makeplane/plane-backend:stable
container_name: plane-worker
command: ./bin/docker-entrypoint-worker.sh
restart: unless-stopped
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- api
- plane-db
- plane-redis
- plane-mq
beat-worker:
image: artifacts.plane.so/makeplane/plane-backend:stable
container_name: plane-beat
command: ./bin/docker-entrypoint-beat.sh
restart: unless-stopped
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
- api
- plane-db
- plane-redis
- plane-mq
migrator:
image: artifacts.plane.so/makeplane/plane-backend:stable
container_name: plane-migrator
command: ./bin/docker-entrypoint-migrator.sh
restart: on-failure
environment:
<<: [*app-env, *db-env, *redis-env, *minio-env, *aws-s3-env, *proxy-env]
depends_on:
plane-db:
condition: service_healthy
plane-redis:
condition: service_started
plane-db:
image: postgres:15.7-alpine
container_name: plane-db
command: postgres -c 'max_connections=1000'
restart: unless-stopped
environment:
<<: *db-env
volumes:
- /mnt/data/plane-data/postgres:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U plane -d plane"]
interval: 10s
timeout: 5s
retries: 5
plane-redis:
image: valkey/valkey:7.2.11-alpine
container_name: plane-redis
restart: unless-stopped
volumes:
- /mnt/data/plane-data/redis:/data
plane-mq:
image: rabbitmq:3.13.6-management-alpine
container_name: plane-mq
restart: unless-stopped
environment:
<<: *mq-env
volumes:
- /mnt/data/plane-data/rabbitmq:/var/lib/rabbitmq
plane-minio:
image: minio/minio:latest
container_name: plane-minio
command: server /export --console-address ":9090"
restart: unless-stopped
environment:
<<: *minio-env
volumes:
- /mnt/data/plane-data/minio:/export
proxy:
image: artifacts.plane.so/makeplane/plane-proxy:stable
container_name: plane-proxy
restart: unless-stopped
environment:
<<: *proxy-env
ports:
- "3080:80"
- "3443:443"
depends_on:
- web
- api
- space
- admin
- live
networks:
default:
name: plane-network
driver: bridge


@@ -0,0 +1,25 @@
version: '3.8'
services:
cocalc:
image: sagemathinc/cocalc-docker:latest
container_name: cocalc
restart: unless-stopped
ports:
- "8080:443" # expose CoCalc HTTPS on port 8080
# or "443:443" if you want it directly bound to 443
volumes:
# Persistent project and home directories
- /mnt/data/cocalc/projects:/projects
- /mnt/data/cocalc/home:/home/cocalc
# Optional: shared local "library of documents"
- /mnt/data/cocalc/library:/projects/library
environment:
- TZ=America/Los_Angeles
- COCALC_NATS_AUTH=false # disable NATS auth for standalone use
# - COCALC_ADMIN_PASSWORD="REDACTED_PASSWORD" # optional admin password
# - COCALC_NO_IDLE_TIMEOUT=true # optional: stop idle shutdowns


@@ -0,0 +1,18 @@
version: '3.8'
services:
ddns-crista-love:
image: favonia/cloudflare-ddns:latest
container_name: ddns-crista-love
network_mode: host
restart: unless-stopped
user: "3000:3000"
read_only: true
cap_drop:
- all
security_opt:
- no-new-privileges:true
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
- DOMAINS=crista.love,cle.crista.love,cocalc.crista.love,mm.crista.love
- PROXIED=true
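The compose file expects `CLOUDFLARE_API_TOKEN` in the environment. A minimal sketch of seeding it via an `.env` file next to the compose file; the token value below is a placeholder, not a real credential:

```shell
# Write the token to .env; docker compose loads it from the stack directory.
cat > .env <<'EOF'
CLOUDFLARE_API_TOKEN=cf_example_token_placeholder
EOF
chmod 600 .env   # keep the credential out of other users' reach
# Fail fast if the token line is missing or empty before `docker compose up -d`
grep -q '^CLOUDFLARE_API_TOKEN=..*' .env && echo "token set"
```

After `docker compose up -d`, the container's logs should show per-domain update results for the four hostnames listed in DOMAINS.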


@@ -0,0 +1,12 @@
version: "3.9"
services:
fasten:
image: ghcr.io/fastenhealth/fasten-onprem:main
container_name: fasten-onprem
ports:
- "9090:8080"
volumes:
- /mnt/data/fasten/db:/opt/fasten/db
- /mnt/data/fasten/cache:/opt/fasten/cache
restart: unless-stopped


@@ -0,0 +1,19 @@
version: "3.9"
services:
fenrus:
image: revenz/fenrus:latest
container_name: fenrus
healthcheck:
test: ["CMD-SHELL", "curl -f http://127.0.0.1:3000/ || exit 1"]
interval: 30s
timeout: 5s
retries: 3
start_period: 90s
ports:
- "45678:3000"
volumes:
- /mnt/data/fenrus:/app/data:rw
environment:
TZ: America/Los_Angeles
restart: unless-stopped


@@ -0,0 +1,41 @@
version: "3.9"
services:
ollama:
image: ollama/ollama:latest
container_name: ollama
restart: unless-stopped
ports:
- "11434:11434"
environment:
- OLLAMA_KEEP_ALIVE=10m
volumes:
- /mnt/data/llama:/root/.ollama
# --- Optional AMD iGPU offload (experimental on SCALE) ---
# devices:
# - /dev/kfd
# - /dev/dri
# group_add:
# - "video"
# - "render"
# environment:
# - OLLAMA_KEEP_ALIVE=10m
# - HSA_ENABLE_SDMA=0
# - HSA_OVERRIDE_GFX_VERSION=11.0.0
openwebui:
image: ghcr.io/open-webui/open-webui:latest
container_name: open-webui
restart: unless-stopped
depends_on:
- ollama
ports:
- "3000:8080" # browse to http://<truenas-ip>:3000
environment:
# Either var works on recent builds; keeping both for compatibility
- OLLAMA_API_BASE_URL=http://ollama:11434
- OLLAMA_BASE_URL=http://ollama:11434
# Set to "false" to allow open signup without password
- WEBUI_AUTH=true
volumes:
- /mnt/data/llama/open-webui:/app/backend/data


@@ -0,0 +1,10 @@
My recommended use on your setup:
Model                 Use case
Llama3.1:8b           Main general-purpose assistant
Mistral:7b            Fast, concise replies & RAG
Qwen2.5:3b            Lightweight, quick lookups
Qwen2.5-Coder:7b      Dedicated coding tasks
Llama3:8b             Legacy/benchmark (optional)
qwen2.5:7b-instruct   Writing up emails
deepseek-r1           Chonky but accurate
deepseek-r1:8b        Lighter version of r1; can run on DS1823xs+
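These models are served by the Ollama container defined earlier (port 11434). A minimal sketch of calling its `/api/generate` endpoint; the payload builder uses naive quoting, so it is only safe for prompts without embedded quotes, and `<truenas-ip>` is the host running the compose stack:

```shell
# Build the JSON body for Ollama's /api/generate endpoint (documented fields:
# model, prompt, stream). printf-based, so prompts must not contain quotes.
ollama_payload() {
  printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}
ollama_payload "llama3.1:8b" "Reply with one word: ok"
# Real call (commented out so the sketch runs without the server):
# curl -s http://<truenas-ip>:11434/api/generate \
#   -d "$(ollama_payload "llama3.1:8b" "Reply with one word: ok")"
```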


@@ -0,0 +1,18 @@
version: "3.8"
services:
nginx:
image: nginx:latest
container_name: nginx
volumes:
- /mnt/data/website/html:/usr/share/nginx/html:ro
- /mnt/data/website/conf.d:/etc/nginx/conf.d:ro
ports:
- "28888:80" # 👈 Expose port 28888 on the host
networks:
- web-net
restart: unless-stopped
networks:
web-net:
external: true


@@ -0,0 +1,18 @@
version: "3.9"
services:
node-exporter:
image: prom/node-exporter:latest
container_name: node-exporter
restart: unless-stopped
network_mode: "host"
pid: "host"
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/rootfs'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'  # renamed from the deprecated --collector.filesystem.ignored-mount-points
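node-exporter listens on its default port 9100, and with `network_mode: host` it is reachable directly on the host's address. A sketch of the matching Prometheus scrape job; the Prometheus server location and `<host-ip>` are assumptions:

```yaml
# prometheus.yml fragment (sketch)
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["<host-ip>:9100"]   # host running this compose file
```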


@@ -0,0 +1,41 @@
# Tdarr Node - NUC-QSV (Intel Quick Sync Video hardware transcoding)
# Runs on Proxmox LXC 103 (tdarr-node)
# Connects to Tdarr Server on Synology (atlantis) at 192.168.0.200
#
# NFS Mounts required in LXC:
# /mnt/media -> 192.168.0.200:/volume1/data/media
# /mnt/cache -> 192.168.0.200:/volume3/usenet
#
# Important: Both /temp and /cache must be mounted to the same base path
# as the server's cache to avoid path mismatch errors during file operations.
services:
tdarr-node:
image: ghcr.io/haveagitgat/tdarr_node:latest
container_name: tdarr-node
security_opt:
- apparmor:unconfined
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- nodeName=NUC
- serverIP=192.168.0.200
- serverPort=8266
- inContainer=true
- ffmpegVersion=6
devices:
- /dev/dri:/dev/dri # Intel QSV hardware acceleration
volumes:
- ./configs:/app/configs
- ./logs:/app/logs
- /mnt/media:/media
- /mnt/cache/tdarr_cache:/temp # Server uses both /temp and /cache
- /mnt/cache/tdarr_cache:/cache # Must mount both for node compatibility
restart: unless-stopped
# Auto-update: handled by cron — Watchtower 1.7.1 uses Docker API 1.25 which is incompatible
# with Docker 29.x (minimum API 1.44). Instead, a cron job runs hourly:
# /etc/cron.d/tdarr-update → cd /opt/tdarr && docker compose pull -q && docker compose up -d
# Set up with: pct exec 103 -- bash -c 'see hosts/proxmox/lxc/tdarr-node/README for setup'
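The NFS mounts described in the header can be made persistent inside the LXC; a sketch `/etc/fstab`, assuming the exports listed above. Note that unprivileged LXCs usually cannot mount NFS themselves, so these mounts may need to be defined on the Proxmox host instead:

```
# /etc/fstab inside LXC 103 (sketch; exports per the header comment)
192.168.0.200:/volume1/data/media  /mnt/media  nfs  defaults,_netdev  0  0
192.168.0.200:/volume3/usenet      /mnt/cache  nfs  defaults,_netdev  0  0
```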


@@ -0,0 +1,19 @@
# Ubuntu archive
sudo rsync -avz --delete --ignore-errors --no-perms --no-owner --no-group \
rsync://archive.ubuntu.com/ubuntu \
/volume1/archive/repo/mirror/archive.ubuntu.com/ubuntu
# Ubuntu security
sudo rsync -avz --delete --ignore-errors --no-perms --no-owner --no-group \
rsync://security.ubuntu.com/ubuntu \
/volume1/archive/repo/mirror/security.ubuntu.com/ubuntu
# Debian archive
sudo rsync -avz --delete --ignore-errors --no-perms --no-owner --no-group \
rsync://deb.debian.org/debian \
/volume1/archive/repo/mirror/deb.debian.org/debian
# Debian security
sudo rsync -avz --delete --ignore-errors --no-perms --no-owner --no-group \
rsync://security.debian.org/debian-security \
/volume1/archive/repo/mirror/security.debian.org/debian-security
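The four syncs above are idempotent, so they can run on a schedule; a sketch cron entry, assuming the commands are wrapped in a script (the script path and log path are hypothetical):

```
# /etc/crontab sketch: nightly mirror sync at 02:00
0 2 * * * root /volume1/archive/repo/sync-mirrors.sh >> /var/log/repo-mirror.log 2>&1
```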


@@ -0,0 +1,24 @@
# AdGuard Home — Atlantis (backup DNS)
# Port: 53 (DNS), 9080 (web UI)
# Purpose: Backup split-horizon DNS resolver
# Primary: Calypso (192.168.0.250)
# Backup: Atlantis (192.168.0.200) ← this instance
#
# Same filters, rewrites, and upstream DNS as Calypso.
# Router DHCP: primary=192.168.0.250, secondary=192.168.0.200
services:
adguard:
image: adguard/adguardhome:latest
container_name: AdGuard
network_mode: host
mem_limit: 2g
cpu_shares: 768
security_opt:
- no-new-privileges:true
restart: on-failure:5
volumes:
- /volume1/docker/adguard/config:/opt/adguardhome/conf:rw
- /volume1/docker/adguard/data:/opt/adguardhome/work:rw
environment:
TZ: America/Los_Angeles


@@ -0,0 +1,41 @@
# AnythingLLM - Local RAG-powered document assistant
# URL: http://192.168.0.200:3101
# Port: 3101
# LLM: Olares qwen3-coder via OpenAI-compatible API
# Docs: docs/services/individual/anythingllm.md
services:
anythingllm:
image: mintplexlabs/anythingllm:latest
container_name: anythingllm
hostname: anythingllm
security_opt:
- no-new-privileges:true
ports:
- "3101:3001"
volumes:
- /volume2/metadata/docker/anythingllm/storage:/app/server/storage:rw
- /volume1/archive/paperless/backup_2026-03-15/media/documents/archive:/documents/paperless-archive:ro
- /volume1/archive/paperless/backup_2026-03-15/media/documents/originals:/documents/paperless-originals:ro
environment:
STORAGE_DIR: /app/server/storage
SERVER_PORT: 3001
DISABLE_TELEMETRY: "true"
TZ: America/Los_Angeles
# LLM Provider - Olares qwen3-coder via OpenAI-compatible API
LLM_PROVIDER: generic-openai
GENERIC_OPEN_AI_BASE_PATH: https://a5be22681.vishinator.olares.com/v1
GENERIC_OPEN_AI_MODEL_PREF: qwen3-coder:latest
GENERIC_OPEN_AI_MAX_TOKENS: 8192
GENERIC_OPEN_AI_API_KEY: not-needed # pragma: allowlist secret
GENERIC_OPEN_AI_MODEL_TOKEN_LIMIT: 65536
# Embedding and Vector DB
EMBEDDING_ENGINE: native
VECTOR_DB: lancedb
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3001/api/ping"]
interval: 15s
timeout: 5s
retries: 3
start_period: 30s
restart: unless-stopped


@@ -0,0 +1,496 @@
# Arr Suite - Media automation stack
# Services: Sonarr, Radarr, Prowlarr, Bazarr, Lidarr, Tdarr, LazyLibrarian, Audiobookshelf
# Manages TV shows, movies, music, books, audiobooks downloads and organization
# GitOps Test: Stack successfully deployed and auto-updating
#
# Storage Configuration (2026-02-01):
# - Downloads: /volume3/usenet (Synology SNV5420 NVMe RAID1 - 621 MB/s)
# - Media: /volume1/data (SATA RAID6 - 84TB)
# - Configs: /volume2/metadata/docker2 (Crucial P310 NVMe RAID1)
#
# Volume 3 created for fast download performance using 007revad's Synology_M2_volume script
#
# Theming: Self-hosted theme.park (Dracula theme)
# - TP_DOMAIN uses docker gateway IP to reach host's theme-park container
# - Deploy theme-park stack first: Atlantis/theme-park/theme-park.yaml
version: "3.8"
x-themepark: &themepark
TP_SCHEME: "http"
TP_DOMAIN: "192.168.0.200:8580"
TP_THEME: "dracula"
networks:
media2_net:
driver: bridge
name: media2_net
ipam:
config:
- subnet: 172.24.0.0/24
gateway: 172.24.0.1
services:
wizarr:
image: ghcr.io/wizarrrr/wizarr:latest
container_name: wizarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- DISABLE_BUILTIN_AUTH=true
volumes:
- /volume2/metadata/docker2/wizarr:/data/database
ports:
- "5690:5690"
networks:
media2_net:
ipv4_address: 172.24.0.2
security_opt:
- no-new-privileges:true
restart: unless-stopped
tautulli:
image: lscr.io/linuxserver/tautulli:latest
container_name: tautulli
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:tautulli
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/tautulli:/config
ports:
- "8181:8181"
networks:
media2_net:
ipv4_address: 172.24.0.12
security_opt:
- no-new-privileges:true
restart: unless-stopped
prowlarr:
image: lscr.io/linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:prowlarr
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/prowlarr:/config
ports:
- "9696:9696"
networks:
media2_net:
ipv4_address: 172.24.0.6
security_opt:
- no-new-privileges:true
restart: unless-stopped
flaresolverr:
image: flaresolverr/flaresolverr:latest
container_name: flaresolverr
environment:
- TZ=America/Los_Angeles
ports:
- "8191:8191"
networks:
media2_net:
ipv4_address: 172.24.0.4
security_opt:
- no-new-privileges:true
restart: unless-stopped
sabnzbd:
image: lscr.io/linuxserver/sabnzbd:latest
container_name: sabnzbd
network_mode: host
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:sabnzbd
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/sabnzbd:/config
- /volume3/usenet/incomplete:/data/incomplete
- /volume3/usenet/complete:/data/complete
security_opt:
- no-new-privileges:true
restart: unless-stopped
jackett:
image: lscr.io/linuxserver/jackett:latest
container_name: jackett
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:jackett
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/jackett:/config
- /volume1/data:/downloads
ports:
- "9117:9117"
networks:
media2_net:
ipv4_address: 172.24.0.11
security_opt:
- no-new-privileges:true
restart: unless-stopped
sonarr:
image: lscr.io/linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:sonarr
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/sonarr:/config
- /volume1/data:/data
- /volume3/usenet:/sab
- /volume2/torrents:/downloads # Deluge download dir — required for torrent import
ports:
- "8989:8989"
networks:
media2_net:
ipv4_address: 172.24.0.7
security_opt:
- no-new-privileges:true
restart: unless-stopped
lidarr:
image: lscr.io/linuxserver/lidarr:latest
container_name: lidarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:lidarr
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/lidarr:/config
- /volume1/data:/data
- /volume3/usenet:/sab
# arr-scripts: custom init scripts for Deezer integration via deemix
# Config: /volume2/metadata/docker2/lidarr/extended.conf (contains ARL token, not in git)
# Setup: https://github.com/RandomNinjaAtk/arr-scripts
- /volume2/metadata/docker2/lidarr-scripts/custom-services.d:/custom-services.d
- /volume2/metadata/docker2/lidarr-scripts/custom-cont-init.d:/custom-cont-init.d
ports:
- "8686:8686"
networks:
media2_net:
ipv4_address: 172.24.0.9
security_opt:
- no-new-privileges:true
restart: unless-stopped
radarr:
image: lscr.io/linuxserver/radarr:latest
container_name: radarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:radarr
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/radarr:/config
- /volume1/data:/data
- /volume3/usenet:/sab
- /volume2/torrents:/downloads # Deluge download dir — required for torrent import
ports:
- "7878:7878"
networks:
media2_net:
ipv4_address: 172.24.0.8
security_opt:
- no-new-privileges:true
restart: unless-stopped
# Readarr retired - replaced with LazyLibrarian + Audiobookshelf
lazylibrarian:
image: lscr.io/linuxserver/lazylibrarian:latest
container_name: lazylibrarian
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:lazylibrarian|ghcr.io/linuxserver/mods:lazylibrarian-calibre
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/lazylibrarian:/config
- /volume1/data:/data
- /volume3/usenet:/sab
- /volume2/torrents:/downloads # Deluge download dir — required for torrent import
- /volume2/metadata/docker2/lazylibrarian-scripts/custom-cont-init.d:/custom-cont-init.d # patch tracker-less torrent handling
ports:
- "5299:5299"
networks:
media2_net:
ipv4_address: 172.24.0.5
security_opt:
- no-new-privileges:true
restart: unless-stopped
audiobookshelf:
image: ghcr.io/advplyr/audiobookshelf:latest
container_name: audiobookshelf
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
volumes:
- /volume2/metadata/docker2/audiobookshelf:/config
- /volume1/data/media/audiobooks:/audiobooks
- /volume1/data/media/podcasts:/podcasts
- /volume1/data/media/ebooks:/ebooks
ports:
- "13378:80"
networks:
media2_net:
ipv4_address: 172.24.0.16
security_opt:
- no-new-privileges:true
restart: unless-stopped
# Bazarr - subtitle management for Sonarr and Radarr
# Web UI: http://192.168.0.200:6767
# Language profile: English (profile ID 1), no mustContain filter
# Providers: REDACTED_APP_PASSWORD (vishinator), podnapisi, yifysubtitles, subf2m, subsource, subdl, animetosho
# NOTE: OpenSubtitles.com may be IP-blocked — submit unblock request at opensubtitles.com/support
# Notifications: Signal API via homelab-vm:8080 → REDACTED_PHONE_NUMBER
# API keys stored in: /volume2/metadata/docker2/bazarr/config/config.yaml (not in repo)
bazarr:
image: lscr.io/linuxserver/bazarr:latest
container_name: bazarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:bazarr
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/bazarr:/config
- /volume1/data:/data
- /volume3/usenet:/sab
ports:
- "6767:6767"
networks:
media2_net:
ipv4_address: 172.24.0.10
security_opt:
- no-new-privileges:true
restart: unless-stopped
whisparr:
image: ghcr.io/hotio/whisparr:nightly
container_name: whisparr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- TP_HOTIO=true
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/whisparr:/config
- /volume1/data:/data
- /volume3/usenet/complete:/sab/complete
- /volume3/usenet/incomplete:/sab/incomplete
ports:
- "6969:6969"
networks:
media2_net:
ipv4_address: 172.24.0.3
security_opt:
- no-new-privileges:true
restart: unless-stopped
plex:
image: lscr.io/linuxserver/plex:latest
container_name: plex
network_mode: host
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- VERSION=docker
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:plex
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/plex:/config
- /volume1/data/media:/data/media
security_opt:
- no-new-privileges:true
restart: unless-stopped
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
user: "1029:100"
environment:
- TZ=America/Los_Angeles
# Note: Jellyseerr theming requires CSS injection via reverse proxy or browser extension
# theme.park doesn't support DOCKER_MODS for non-linuxserver images
volumes:
- /volume2/metadata/docker2/jellyseerr:/app/config
ports:
- "5055:5055"
networks:
media2_net:
ipv4_address: 172.24.0.14
dns:
- 9.9.9.9
- 1.1.1.1
security_opt:
- no-new-privileges:true
restart: unless-stopped
gluetun:
image: qmcgaw/gluetun:v3.38.0
container_name: gluetun
privileged: true
devices:
- /dev/net/tun:/dev/net/tun
labels:
- com.centurylinklabs.watchtower.enable=false
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
# --- WireGuard ---
- VPN_SERVICE_PROVIDER=custom
- VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=REDACTED_PRIVATE_KEY # pragma: allowlist secret
- WIREGUARD_ADDRESSES=10.2.0.2/32
- WIREGUARD_DNS=10.2.0.1
- WIREGUARD_PUBLIC_KEY=FrVOQ+Dy0StjfwNtbJygJCkwSJt6ynlGbQwZBZWYfhc=
- WIREGUARD_ALLOWED_IPS=0.0.0.0/0,::/0
- WIREGUARD_ENDPOINT_IP=79.127.185.193
- WIREGUARD_ENDPOINT_PORT=51820
volumes:
- /volume2/metadata/docker2/gluetun:/gluetun
ports:
- "8112:8112" # Deluge WebUI
- "58946:58946" # Torrent TCP
- "58946:58946/udp" # Torrent UDP
networks:
media2_net:
ipv4_address: 172.24.0.20
healthcheck:
test: ["CMD-SHELL", "wget -qO /dev/null http://127.0.0.1:9999 2>/dev/null || exit 1"]
interval: 10s
timeout: 5s
retries: 6
start_period: 30s
security_opt:
- no-new-privileges:true
restart: unless-stopped
deluge:
image: lscr.io/linuxserver/deluge:latest
container_name: deluge
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- DOCKER_MODS=ghcr.io/themepark-dev/theme.park:deluge
- TP_SCHEME=http
- TP_DOMAIN=192.168.0.200:8580
- TP_THEME=dracula
volumes:
- /volume2/metadata/docker2/deluge:/config
- /volume2/torrents:/downloads
network_mode: "service:gluetun"
depends_on:
gluetun:
condition: service_healthy
security_opt:
- no-new-privileges:true
restart: unless-stopped
tdarr:
image: ghcr.io/haveagitgat/tdarr:latest
container_name: tdarr
environment:
- PUID=1029
- PGID=100
- TZ=America/Los_Angeles
- UMASK=022
- serverIP=0.0.0.0
- serverPort=8266
- webUIPort=8265
- internalNode=true
- inContainer=true
- ffmpegVersion=6
- nodeName=Atlantis
volumes:
- /volume2/metadata/docker2/tdarr/server:/app/server
- /volume2/metadata/docker2/tdarr/configs:/app/configs
- /volume2/metadata/docker2/tdarr/logs:/app/logs
- /volume1/data/media:/media
- /volume3/usenet/tdarr_cache:/temp
- /volume3/usenet/tdarr_cache:/cache # Fix: internal node uses /cache path
ports:
- "8265:8265"
- "8266:8266"
networks:
media2_net:
ipv4_address: 172.24.0.15
security_opt:
- no-new-privileges:true
restart: unless-stopped


@@ -0,0 +1,154 @@
#!/usr/bin/env bash
# =============================================================================
# Arr-Suite Installer — Atlantis (192.168.0.200)
# =============================================================================
# One-line install:
# bash <(curl -fsSL https://git.vish.gg/Vish/homelab/raw/branch/main/hosts/synology/atlantis/arr-suite/install.sh)
#
# What this installs:
# Sonarr, Radarr, Lidarr, Bazarr, Prowlarr, Jackett, FlareSolverr
# SABnzbd, Deluge (via gluetun VPN), Tdarr, LazyLibrarian
# Audiobookshelf, Whisparr, Plex, Jellyseerr, Tautulli, Wizarr
#
# Prerequisites:
# - Synology DSM with Container Manager (Docker)
# - /volume1/data, /volume2/metadata/docker2, /volume3/usenet, /volume2/torrents
# - PUID=1029, PGID=100 (DSM user: vish)
# - WireGuard credentials for gluetun (must be set in compose or env)
# =============================================================================
set -euo pipefail
REPO_URL="https://git.vish.gg/Vish/homelab"
COMPOSE_URL="${REPO_URL}/raw/branch/main/hosts/synology/atlantis/arr-suite/docker-compose.yml"
DOCKER="${DOCKER_BIN:-/usr/local/bin/docker}"
STACK_DIR="/volume2/metadata/docker2/arr-suite"
COMPOSE_FILE="${STACK_DIR}/docker-compose.yml"
# Colours
RED='\033[0;31m'; GREEN='\033[0;32m'; YELLOW='\033[1;33m'; NC='\033[0m'
info() { echo -e "${GREEN}[INFO]${NC} $*"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $*"; }
error() { echo -e "${RED}[ERROR]${NC} $*"; exit 1; }
# ── Preflight ─────────────────────────────────────────────────────────────────
info "Arr-Suite installer starting"
[[ $(id -u) -eq 0 ]] || error "Run as root (sudo bash install.sh)"
command -v "$DOCKER" &>/dev/null || error "Docker not found at $DOCKER — set DOCKER_BIN env var"
for vol in /volume1/data /volume2/metadata/docker2 /volume3/usenet /volume2/torrents; do
[[ -d "$vol" ]] || warn "Volume $vol does not exist — create it before starting services"
done
# ── Required directories ───────────────────────────────────────────────────────
info "Creating config directories..."
SERVICES=(
sonarr radarr lidarr bazarr prowlarr jackett sabnzbd
deluge gluetun tdarr/server tdarr/configs tdarr/logs
lazylibrarian audiobookshelf whisparr plex jellyseerr
tautulli wizarr
)
for svc in "${SERVICES[@]}"; do
mkdir -p "/volume2/metadata/docker2/${svc}"
done
# Download directories
mkdir -p \
/volume3/usenet/complete \
/volume3/usenet/incomplete \
/volume3/usenet/tdarr_cache \
/volume2/torrents/complete \
/volume2/torrents/incomplete
# Media library
mkdir -p \
/volume1/data/media/tv \
/volume1/data/media/movies \
/volume1/data/media/music \
/volume1/data/media/audiobooks \
/volume1/data/media/podcasts \
/volume1/data/media/ebooks \
/volume1/data/media/misc
# Lidarr arr-scripts directories
mkdir -p \
/volume2/metadata/docker2/lidarr-scripts/custom-cont-init.d \
/volume2/metadata/docker2/lidarr-scripts/custom-services.d
# ── Lidarr arr-scripts bootstrap ──────────────────────────────────────────────
INIT_SCRIPT="/volume2/metadata/docker2/lidarr-scripts/custom-cont-init.d/scripts_init.bash"
if [[ ! -f "$INIT_SCRIPT" ]]; then
  info "Downloading arr-scripts init script..."
  if curl -fsSL "https://raw.githubusercontent.com/RandomNinjaAtk/arr-scripts/main/lidarr/scripts_init.bash" \
    -o "$INIT_SCRIPT"; then
    chmod +x "$INIT_SCRIPT"
  else
    warn "Failed to download arr-scripts init — download manually from RandomNinjaAtk/arr-scripts"
  fi
fi
# ── Download compose file ──────────────────────────────────────────────────────
info "Downloading docker-compose.yml..."
mkdir -p "$STACK_DIR"
curl -fsSL "$COMPOSE_URL" -o "$COMPOSE_FILE" || error "Failed to download compose file from $COMPOSE_URL"
# ── Warn about secrets ────────────────────────────────────────────────────────
warn "==================================================================="
warn "ACTION REQUIRED before starting:"
warn ""
warn "1. Set gluetun WireGuard credentials in:"
warn " $COMPOSE_FILE"
warn " - WIREGUARD_PRIVATE_KEY"
warn " - WIREGUARD_PUBLIC_KEY"
warn " - WIREGUARD_ENDPOINT_IP"
warn ""
warn "2. Set Lidarr Deezer ARL token:"
warn " /volume2/metadata/docker2/lidarr/extended.conf"
warn " arlToken=\"<your-arl-token>\""
warn " Get from: deezer.com -> DevTools -> Cookies -> arl"
warn ""
warn "3. Set Plex claim token (optional, for initial setup):"
warn " https://www.plex.tv/claim"
warn " Add to compose: PLEX_CLAIM=<token>"
warn "==================================================================="
# ── Pull images ───────────────────────────────────────────────────────────────
read -rp "Pull all images now? (y/N): " pull_images
if [[ "${pull_images,,}" == "y" ]]; then
info "Pulling images (this may take a while)..."
"$DOCKER" compose -f "$COMPOSE_FILE" pull
fi
# ── Start stack ───────────────────────────────────────────────────────────────
read -rp "Start all services now? (y/N): " start_services
if [[ "${start_services,,}" == "y" ]]; then
info "Starting arr-suite..."
"$DOCKER" compose -f "$COMPOSE_FILE" up -d
info "Done! Services starting..."
echo ""
echo "Service URLs:"
echo " Sonarr: http://192.168.0.200:8989"
echo " Radarr: http://192.168.0.200:7878"
echo " Lidarr: http://192.168.0.200:8686"
echo " Prowlarr: http://192.168.0.200:9696"
echo " SABnzbd: http://192.168.0.200:8080"
  echo " Deluge:         http://192.168.0.200:8112 (password: REDACTED_PASSWORD)"
echo " Bazarr: http://192.168.0.200:6767"
echo " Tdarr: http://192.168.0.200:8265"
echo " Whisparr: http://192.168.0.200:6969"
echo " Plex: http://192.168.0.200:32400/web"
echo " Jellyseerr: http://192.168.0.200:5055"
echo " Audiobookshelf:http://192.168.0.200:13378"
echo " LazyLibrarian: http://192.168.0.200:5299"
echo " Tautulli: http://192.168.0.200:8181"
echo " Wizarr: http://192.168.0.200:5690"
echo " Jackett: http://192.168.0.200:9117"
fi
info "Install complete."
info "Docs: https://git.vish.gg/Vish/homelab/src/branch/main/docs/services/individual/"


@@ -0,0 +1,18 @@
services:
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
user: 1029:65536 #YOUR_UID_AND_GID
environment:
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
volumes:
- /volume1/docker2/jellyseerr:/app/config
ports:
- 5055:5055/tcp
network_mode: synobridge
dns: #DNS Servers to help with speed issues some have
- 9.9.9.9
- 1.1.1.1
security_opt:
- no-new-privileges:true
restart: unless-stopped


@@ -0,0 +1,163 @@
# =============================================================================
# PLEX MEDIA SERVER - DISASTER RECOVERY CONFIGURATION
# =============================================================================
#
# SERVICE OVERVIEW:
# - Primary media streaming server for homelab
# - Serves 4K movies, TV shows, music, and photos
# - Hardware transcoding enabled via Intel Quick Sync
# - Critical service for media consumption
#
# DISASTER RECOVERY NOTES:
# - Configuration stored in /volume1/docker2/plex (CRITICAL BACKUP)
# - Media files in /volume1/data/media (128TB+ library)
# - Database contains watch history, metadata, user preferences
# - Hardware transcoding requires Intel GPU access (/dev/dri)
#
# BACKUP PRIORITY: HIGH
# - Config backup: Daily automated backup required
# - Media backup: Secondary NAS sync (Calypso)
# - Database backup: Included in config volume
#
# RECOVERY TIME OBJECTIVE (RTO): 30 minutes
# RECOVERY POINT OBJECTIVE (RPO): 24 hours
#
# DEPENDENCIES:
# - Volume1 must be accessible (current issue: SSD cache failure)
# - Intel GPU drivers for hardware transcoding
# - Network connectivity for remote access
# - Plex Pass subscription for premium features
#
# PORTS USED:
# - 32400/tcp: Main Plex web interface and API
# - 3005/tcp: Plex Home Theater via Plex Companion
# - 8324/tcp: Plex for Roku via Plex Companion
# - 32469/tcp: Plex DLNA Server
# - 1900/udp: Plex DLNA Server
# - 32410/udp, 32412/udp, 32413/udp, 32414/udp: GDM Network discovery
#
# =============================================================================
services:
plex:
# CONTAINER IMAGE:
# - linuxserver/plex: Community-maintained, regularly updated
# - Alternative: plexinc/pms-docker (official but less frequent updates)
# - Version pinning recommended for production: linuxserver/plex:1.32.8
image: linuxserver/plex:latest
# CONTAINER NAME:
# - Fixed name for easy identification and management
# - Used in monitoring, logs, and backup scripts
container_name: plex
# NETWORK CONFIGURATION:
# - host mode: Required for Plex auto-discovery and DLNA
# - Allows Plex to bind to all network interfaces
# - Enables UPnP/DLNA functionality for smart TVs
# - SECURITY NOTE: Exposes all container ports to host
network_mode: host
environment:
# USER/GROUP PERMISSIONS:
# - PUID=1029: User ID for file ownership (Synology 'admin' user)
# - PGID=65536: Group ID for file access (Synology 'administrators' group)
# - CRITICAL: Must match NAS user/group for file access
# - Find correct values: id admin (on Synology)
- PUID=1029 #CHANGE_TO_YOUR_UID
- PGID=65536 #CHANGE_TO_YOUR_GID
# TIMEZONE CONFIGURATION:
# - TZ: Timezone for logs, scheduling, and metadata
# - Must match system timezone for accurate timestamps
# - Format: Area/City (e.g., America/Los_Angeles, Europe/London)
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
# FILE PERMISSIONS:
# - UMASK=022: Default file permissions (755 for dirs, 644 for files)
# - Ensures proper read/write access for media files
# - 022 = owner: rwx, group: r-x, other: r-x
- UMASK=022
# PLEX VERSION MANAGEMENT:
# - VERSION=docker: Use version bundled with Docker image
# - Alternative: VERSION=latest (auto-update, not recommended for production)
# - Alternative: VERSION=1.32.8.7639-fb6452ebf (pin specific version)
- VERSION=docker
# PLEX CLAIM TOKEN:
# - Used for initial server setup and linking to Plex account
# - Get token from: https://plex.tv/claim (valid for 4 minutes)
# - Leave empty after initial setup
# - SECURITY: Remove token after claiming server
- PLEX_CLAIM=
volumes:
# CONFIGURATION VOLUME:
# - /volume1/docker2/plex:/config
# - Contains: Database, metadata, thumbnails, logs, preferences
# - SIZE: ~50-100GB depending on library size
# - BACKUP CRITICAL: Contains all user data and settings
# - RECOVERY: Restore this volume to recover complete Plex setup
- /volume1/docker2/plex:/config
# MEDIA VOLUME:
# - /volume1/data/media:/data/media
# - Contains: Movies, TV shows, music, photos (128TB+ library)
# - READ-ONLY recommended for security (add :ro suffix if desired)
# - STRUCTURE: Organized by type (movies/, tv/, music/, photos/)
# - BACKUP: Synced to Calypso NAS for redundancy
- /volume1/data/media:/data/media
devices:
# HARDWARE TRANSCODING:
# - /dev/dri:/dev/dri: Intel Quick Sync Video access
# - Enables hardware-accelerated transcoding (H.264, H.265, AV1)
# - CRITICAL: Reduces CPU usage by 80-90% during transcoding
# - REQUIREMENT: Intel GPU with Quick Sync support
# - TROUBLESHOOTING: Check 'ls -la /dev/dri' for render devices
- /dev/dri:/dev/dri
security_opt:
# SECURITY HARDENING:
# - no-new-privileges: Prevents privilege escalation attacks
# - Container cannot gain additional privileges during runtime
# - Recommended security practice for all containers
- no-new-privileges:true
    # RESTART POLICY:
    # - unless-stopped: Restarts on failure and on system reboot,
    #   but stays down if manually stopped
    # - CRITICAL: Keeps Plex available for media streaming
    # - Alternative: always (restarts even after a manual stop)
restart: unless-stopped
# =============================================================================
# DISASTER RECOVERY PROCEDURES:
# =============================================================================
#
# BACKUP VERIFICATION:
# docker exec plex ls -la /config/Library/Application\ Support/Plex\ Media\ Server/
#
# MANUAL BACKUP:
# tar -czf /volume2/backups/plex-config-$(date +%Y%m%d).tar.gz /volume1/docker2/plex/
#
# RESTORE PROCEDURE:
# 1. Stop container: docker-compose down
# 2. Restore config: tar -xzf plex-backup.tar.gz -C /volume1/docker2/
# 3. Fix permissions: chown -R 1029:65536 /volume1/docker2/plex/
# 4. Start container: docker-compose up -d
# 5. Verify: Check http://atlantis.vish.local:32400/web
#
# TROUBLESHOOTING:
# - No hardware transcoding: Check /dev/dri permissions and Intel GPU drivers
# - Database corruption: Restore from backup or rebuild library
# - Permission errors: Verify PUID/PGID match NAS user/group
# - Network issues: Check host networking and firewall rules
#
# MONITORING:
# - Health check: curl -f http://localhost:32400/identity
# - Logs: docker logs plex
# - Transcoding: Plex Dashboard > Settings > Transcoder
# - Performance: Grafana dashboard for CPU/GPU usage
#
# =============================================================================
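The backup and restore commands above can be rehearsed end-to-end on throwaway paths before trusting them with the real `/volume1` data; the paths below are illustrative stand-ins, not the production volumes.

```shell
# Rehearse the tar backup/restore cycle on dummy data (paths stand in
# for /volume1/docker2/plex and /volume2/backups)
SRC=$(mktemp -d)/plex
DST=$(mktemp -d)
mkdir -p "$SRC"
echo '<Preferences/>' > "$SRC/Preferences.xml"

# Backup: archive the config directory with a dated name
tar -czf "$DST/plex-config-$(date +%Y%m%d).tar.gz" -C "$(dirname "$SRC")" plex

# Restore into a fresh location and verify contents match
RESTORE=$(mktemp -d)
tar -xzf "$DST"/plex-config-*.tar.gz -C "$RESTORE"
diff -r "$SRC" "$RESTORE/plex" && echo "restore verified"
```

The same `-C` trick used here is why the real restore step extracts into `/volume1/docker2/` rather than `/`.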

View File

@@ -0,0 +1,29 @@
services:
linuxserver-prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID=1029 #CHANGE_TO_YOUR_UID
- PGID=65536 #CHANGE_TO_YOUR_GID
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
- UMASK=022
volumes:
- /volume1/docker2/prowlarr:/config
ports:
- 9696:9696/tcp
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped
flaresolverr:
image: flaresolverr/flaresolverr:latest
container_name: flaresolverr
environment:
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
ports:
- 8191:8191
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped
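Both containers above run with UMASK=022; a quick shell sanity check shows what that mask produces for newly created directories and files:

```shell
# UMASK=022 clears the write bit for group/other:
# dirs are created 777 & ~022 = 755, files 666 & ~022 = 644
umask 022
cd "$(mktemp -d)"
mkdir newdir
touch newfile
stat -c '%a %n' newdir newfile   # 755 newdir / 644 newfile
```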

View File

@@ -0,0 +1,18 @@
services:
sabnzbd:
image: linuxserver/sabnzbd:latest
container_name: sabnzbd
environment:
- PUID=1029 #CHANGE_TO_YOUR_UID
- PGID=65536 #CHANGE_TO_YOUR_GID
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
- UMASK=022
volumes:
- /volume1/docker2/sabnzbd:/config
- /volume1/data/usenet:/data/usenet
ports:
- 8080:8080/tcp
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped

View File

@@ -0,0 +1,17 @@
services:
tautulli:
image: linuxserver/tautulli:latest
container_name: tautulli
environment:
- PUID=1029 #CHANGE_TO_YOUR_UID
- PGID=65536 #CHANGE_TO_YOUR_GID
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
- UMASK=022
volumes:
- /volume1/docker2/tautulli:/config
ports:
- 8181:8181/tcp
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped

View File

@@ -0,0 +1,18 @@
services:
whisparr:
image: hotio/whisparr:nightly
container_name: whisparr
environment:
- PUID=1029 #CHANGE_TO_YOUR_UID
- PGID=65536 #CHANGE_TO_YOUR_GID
- TZ=America/Los_Angeles #CHANGE_TO_YOUR_TZ
- UMASK=022
volumes:
- /volume1/docker2/whisparr:/config
- /volume1/data/:/data
ports:
- 6969:6969/tcp
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped

View File

@@ -0,0 +1,19 @@
version: '3.8'
services:
wizarr:
image: ghcr.io/wizarrrr/wizarr:latest
container_name: wizarr
environment:
- PUID=1029
- PGID=65536
- TZ=America/Los_Angeles
- DISABLE_BUILTIN_AUTH=false
volumes:
- /volume1/docker2/wizarr:/data/database
ports:
- 5690:5690/tcp
network_mode: synobridge
security_opt:
- no-new-privileges:true
restart: unless-stopped

View File

@@ -0,0 +1,18 @@
# One-time: generate an SSH key for passwordless rsync (install on targets with ssh-copy-id)
ssh-keygen -t ed25519 -C "synology@atlantis"

# Dry run (-n) of a TV-show sync; the -e options pick a fast cipher and disable compression
rsync -avhn --progress -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" \
  "/volume1/data/media/tv/Lord of Mysteries/" \
  root@100.99.156.20:/root/docker/plex/tvshows/

# Real movie sync (no -n)
rsync -avh --progress -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" \
  "/volume1/data/media/movies/Ballerina (2025)" \
  root@100.99.156.20:/root/docker/plex/movies/

# Selective copy: --include rules must come before the catch-all --exclude '*'
rsync -avh --progress -e "ssh -T -c aes128-gcm@openssh.com -o Compression=no -x" \
  "/volume1/data/media/other/" \
  --include 'VID_20240328_150621.mp4' \
  --include 'VID_20240328_153720.mp4' \
  --exclude '*' \
  homelab@100.67.40.126:/home/homelab/whisper-docker/audio/

View File

@@ -0,0 +1,18 @@
# Baikal - CalDAV/CardDAV server
# Port: 12852
# Self-hosted calendar and contacts sync server
version: "3.7"
services:
baikal:
image: ckulka/baikal
container_name: baikal
ports:
- "12852:80"
environment:
- PUID=1026
- PGID=100
volumes:
- /volume2/metadata/docker/baikal/config:/var/www/baikal/config
- /volume2/metadata/docker/baikal/html:/var/www/baikal/Specific
restart: unless-stopped

View File

@@ -0,0 +1 @@
https://cal.vish.gg/dav.php/calendars/vish/default?export

View File

@@ -0,0 +1,20 @@
# Calibre Web - E-book management
# Port: 8083
# Web-based e-book library with OPDS support
name: calibre
services:
calibre-web:
container_name: calibre-webui
ports:
- 8183:8083
environment:
- PUID=1026
- PGID=100
- TZ=America/Los_Angeles
- DOCKER_MODS=linuxserver/mods:universal-calibre
- OAUTHLIB_RELAX_TOKEN_SCOPE=1
volumes:
- /volume2/metadata/docker/calibreweb:/config
- /volume2/metadata/docker/books:/books
restart: unless-stopped
image: ghcr.io/linuxserver/calibre-web

View File

@@ -0,0 +1,43 @@
# Cloudflare Tunnel for Atlantis NAS
# Provides secure external access without port forwarding
#
# SETUP INSTRUCTIONS:
# 1. Go to https://one.dash.cloudflare.com/ → Zero Trust → Networks → Tunnels
# 2. Create a new tunnel named "atlantis-tunnel"
# 3. Copy the tunnel token (starts with eyJ...)
# 4. Replace TUNNEL_TOKEN_HERE below with your token
# 5. In the tunnel dashboard, add these public hostnames:
#
# | Public Hostname | Service |
# |----------------------|----------------------------|
# | pw.vish.gg | http://localhost:4080 |
# | cal.vish.gg | http://localhost:12852 |
# | meet.thevish.io | https://localhost:5443 |
# | joplin.thevish.io | http://localhost:22300 |
# | mastodon.vish.gg | http://192.168.0.154:3000 |
# | matrix.thevish.io | http://192.168.0.154:8081 |
# | mx.vish.gg | http://192.168.0.154:8082 |
# | mm.crista.love | http://192.168.0.154:8065 |
#
# 6. Deploy this stack in Portainer
version: '3.8'
services:
cloudflared:
image: cloudflare/cloudflared:latest
container_name: cloudflare-tunnel
restart: unless-stopped
command: tunnel run
environment:
- TUNNEL_TOKEN=${TUNNEL_TOKEN}
network_mode: host # Needed to access localhost services and VMs
# Alternative if you prefer bridge network:
# networks:
# - tunnel_net
# extra_hosts:
# - "host.docker.internal:host-gateway"
# networks:
# tunnel_net:
# driver: bridge
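The `${TUNNEL_TOKEN}` reference above expects a `.env` file next to the compose file (or an equivalent Portainer stack environment variable). An illustrative shape, with a placeholder value:

```ini
# .env — same directory as the compose file (value is a placeholder)
TUNNEL_TOKEN=eyJhIjoi...your-tunnel-token-here...
```

Keeping the token out of the compose file means the stack definition can be committed without leaking the tunnel credential.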

View File

@@ -0,0 +1,83 @@
# Standalone DERP Relay Server — Atlantis (Home NAS)
# =============================================================================
# Tailscale/Headscale DERP relay for home-network fallback connectivity.
# Serves as region 902 "Home - Atlantis" in the headscale derpmap.
#
# Why standalone (not behind nginx):
# The DERP protocol does an HTTP→binary protocol switch inside TLS.
# It is incompatible with HTTP reverse proxies. Must handle TLS directly.
#
# Port layout:
# 8445/tcp — DERP relay (direct TLS, NOT proxied through NPM)
# 3480/udp — STUN (NAT traversal hints)
# Port 3478 taken by coturn/Jitsi, 3479 taken by coturn/Matrix on matrix-ubuntu.
#
# TLS cert:
# Issued by Let's Encrypt via certbot DNS challenge (Cloudflare).
# Cert path: /volume1/docker/derper-atl/certs/
# Cloudflare credentials: /volume1/docker/derper-atl/secrets/cloudflare.ini
# Auto-renewed monthly by the cert-renewer sidecar (ofelia + certbot/dns-cloudflare).
# On first deploy or manual renewal, run:
# docker run -it --rm \
# -v /volume1/docker/derper-atl/certs:/etc/letsencrypt \
# -v /volume1/docker/derper-atl/secrets:/root/.secrets:ro \
# certbot/dns-cloudflare certonly \
# --dns-cloudflare \
# --dns-cloudflare-credentials /root/.secrets/cloudflare.ini \
# -d derp-atl.vish.gg
# Then copy certs to flat layout:
# cp certs/live/derp-atl.vish.gg/fullchain.pem certs/live/derp-atl.vish.gg/derp-atl.vish.gg.crt
# cp certs/live/derp-atl.vish.gg/privkey.pem certs/live/derp-atl.vish.gg/derp-atl.vish.gg.key
#
# Firewall / DSM rules required (one-time):
# Allow inbound 8445/tcp and 3480/udp in DSM → Security → Firewall
#
# Router port forwards required (one-time, on home router):
# 8445/tcp → 192.168.0.200 (Atlantis LAN IP, main interface)
# 3480/udp → 192.168.0.200
#
# DNS: derp-atl.vish.gg → home public IP (managed by dynamicdnsupdater.yaml, unproxied)
# =============================================================================
services:
derper-atl:
image: fredliang/derper:latest
container_name: derper-atl
restart: unless-stopped
ports:
- "8445:8445" # DERP TLS — direct, not behind NPM
- "3480:3480/udp" # STUN (3478 taken by coturn/Jitsi, 3479 taken by coturn/Matrix)
volumes:
# Full letsencrypt mount required — live/ contains symlinks into archive/
# mounting only live/ breaks symlink resolution inside the container
- /volume1/docker/derper-atl/certs:/etc/letsencrypt:ro
environment:
- DERP_DOMAIN=derp-atl.vish.gg
- DERP_CERT_MODE=manual
- DERP_CERT_DIR=/etc/letsencrypt/live/derp-atl.vish.gg
- DERP_ADDR=:8445
- DERP_STUN=true
- DERP_STUN_PORT=3480
- DERP_HTTP_PORT=-1 # disable plain HTTP, TLS only
- DERP_VERIFY_CLIENTS=false # allow any node (headscale manages auth)
cert-renewer:
# Runs certbot monthly via supercronic; after renewal copies certs to the
# flat layout derper expects, then restarts derper-atl via Docker socket.
# Schedule: 03:00 on the 1st of every month.
image: certbot/dns-cloudflare:latest
container_name: derper-atl-cert-renewer
restart: unless-stopped
depends_on:
- derper-atl
entrypoint: >-
sh -c "
apk add --no-cache supercronic curl &&
echo '0 3 1 * * /renew.sh' > /crontab &&
exec supercronic /crontab
"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /volume1/docker/derper-atl/certs:/etc/letsencrypt
- /volume1/docker/derper-atl/secrets:/root/.secrets:ro
- /volume1/docker/derper-atl/renew.sh:/renew.sh:ro
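The flat-layout copy step described in the header can be rehearsed with a throwaway self-signed certificate; the real certs come from certbot, and `openssl` here only mimics the `live/<domain>/` directory layout and the `.crt`/`.key` names derper expects.

```shell
# Mimic certbot's live/<domain>/ layout with a 1-day self-signed cert,
# then produce the flat .crt/.key names derper expects
D="$(mktemp -d)/live/derp-atl.vish.gg"
mkdir -p "$D"
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=derp-atl.vish.gg" \
  -keyout "$D/privkey.pem" -out "$D/fullchain.pem" 2>/dev/null
cp "$D/fullchain.pem" "$D/derp-atl.vish.gg.crt"
cp "$D/privkey.pem"  "$D/derp-atl.vish.gg.key"
openssl x509 -in "$D/derp-atl.vish.gg.crt" -noout -subject
```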

View File

@@ -0,0 +1,28 @@
# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun
services:
diun:
image: crazymax/diun:latest
container_name: diun
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- diun-data:/data
environment:
LOG_LEVEL: info
DIUN_WATCH_WORKERS: "20"
DIUN_WATCH_SCHEDULE: "0 9 * * 1"
DIUN_WATCH_JITTER: 30s
DIUN_PROVIDERS_DOCKER: "true"
DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
DIUN_NOTIF_NTFY_TOPIC: "diun"
restart: unless-stopped
volumes:
diun-data:

View File

@@ -0,0 +1,20 @@
services:
dockpeek:
container_name: Dockpeek
image: ghcr.io/dockpeek/dockpeek:latest
healthcheck:
test: timeout 10s bash -c ':> /dev/tcp/127.0.0.1/8000' || exit 1
interval: 10s
timeout: 5s
retries: 3
start_period: 90s
environment:
SECRET_KEY: "REDACTED_SECRET_KEY" # pragma: allowlist secret
USERNAME: vish
PASSWORD: REDACTED_PASSWORD # pragma: allowlist secret
DOCKER_HOST: unix:///var/run/docker.sock
ports:
- 3812:8000
volumes:
- /var/run/docker.sock:/var/run/docker.sock
restart: on-failure:5
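The healthcheck above relies on bash's `/dev/tcp` pseudo-device: a redirection to `/dev/tcp/HOST/PORT` succeeds only if the TCP connection does, giving a port probe with no curl or netcat in the image. A standalone sketch of the same technique:

```shell
# Probe a port the same way the healthcheck does; no curl/nc needed.
# 127.0.0.1:8000 mirrors Dockpeek's internal port; any host/port works.
probe() {
  timeout 5 bash -c ":> /dev/tcp/$1/$2" 2>/dev/null \
    && echo "$1:$2 open" || echo "$1:$2 closed"
}
probe 127.0.0.1 8000
```

Note this is a bash-ism, which is why the healthcheck invokes `bash -c` explicitly rather than relying on `/bin/sh`.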

View File

@@ -0,0 +1,71 @@
services:
db:
image: postgres:17
container_name: Documenso-DB
hostname: documenso-db
security_opt:
- no-new-privileges:true
healthcheck:
test: ["CMD", "pg_isready", "-q", "-d", "documenso", "-U", "documensouser"]
timeout: 45s
interval: 10s
retries: 10
volumes:
- /volume1/docker/documenso/db:/var/lib/postgresql/data:rw
environment:
POSTGRES_DB: documenso
POSTGRES_USER: documensouser
POSTGRES_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
restart: on-failure:5
documenso:
image: documenso/documenso:latest
container_name: Documenso
ports:
- 3513:3000
volumes:
- /volume1/docker/documenso/data:/opt/documenso:rw
depends_on:
db:
condition: service_healthy
environment:
- PORT=3000
- NEXTAUTH_SECRET="REDACTED_NEXTAUTH_SECRET" # pragma: allowlist secret
- NEXT_PRIVATE_ENCRYPTION_KEY=y6vZRCEKo2rEsJzXlQfgXg3fLKlhiT7h # pragma: allowlist secret
- NEXT_PRIVATE_ENCRYPTION_SECONDARY_KEY=QA7tXtw7fDExGRjrJ616hDmiJ4EReXlP # pragma: allowlist secret
- NEXTAUTH_URL=https://documenso.thevish.io
- NEXT_PUBLIC_WEBAPP_URL=https://documenso.thevish.io
- NEXT_PRIVATE_INTERNAL_WEBAPP_URL=http://documenso:3000
- NEXT_PUBLIC_MARKETING_URL=https://documenso.thevish.io
      - NEXT_PRIVATE_DATABASE_URL=postgres://documensouser:REDACTED_PASSWORD@documenso-db:5432/documenso # pragma: allowlist secret
      - NEXT_PRIVATE_DIRECT_DATABASE_URL=postgres://documensouser:REDACTED_PASSWORD@documenso-db:5432/documenso # pragma: allowlist secret
- NEXT_PUBLIC_UPLOAD_TRANSPORT=database
- NEXT_PRIVATE_SMTP_TRANSPORT=smtp-auth
- NEXT_PRIVATE_SMTP_HOST=smtp.gmail.com
- NEXT_PRIVATE_SMTP_PORT=587
- NEXT_PRIVATE_SMTP_USERNAME=your-email@example.com
      - NEXT_PRIVATE_SMTP_PASSWORD="REDACTED_PASSWORD" # pragma: allowlist secret
- NEXT_PRIVATE_SMTP_SECURE=false
- NEXT_PRIVATE_SMTP_FROM_NAME=Vish
- NEXT_PRIVATE_SMTP_FROM_ADDRESS=your-email@example.com
- NEXT_PRIVATE_SIGNING_LOCAL_FILE_PATH=/opt/documenso/cert.p12
#NEXT_PRIVATE_SMTP_UNSAFE_IGNORE_TLS=true
#NEXT_PRIVATE_SMTP_APIKEY_USER=${NEXT_PRIVATE_SMTP_APIKEY_USER}
#NEXT_PRIVATE_SMTP_APIKEY=${NEXT_PRIVATE_SMTP_APIKEY}
#NEXT_PRIVATE_RESEND_API_KEY=${NEXT_PRIVATE_RESEND_API_KEY}
#NEXT_PRIVATE_MAILCHANNELS_API_KEY=${NEXT_PRIVATE_MAILCHANNELS_API_KEY}
#NEXT_PRIVATE_MAILCHANNELS_ENDPOINT=${NEXT_PRIVATE_MAILCHANNELS_ENDPOINT}
#NEXT_PRIVATE_MAILCHANNELS_DKIM_DOMAIN=${NEXT_PRIVATE_MAILCHANNELS_DKIM_DOMAIN}
#NEXT_PRIVATE_MAILCHANNELS_DKIM_SELECTOR=${NEXT_PRIVATE_MAILCHANNELS_DKIM_SELECTOR}
#NEXT_PRIVATE_MAILCHANNELS_DKIM_PRIVATE_KEY=${NEXT_PRIVATE_MAILCHANNELS_DKIM_PRIVATE_KEY}
#NEXT_PUBLIC_DOCUMENT_SIZE_UPLOAD_LIMIT=${NEXT_PUBLIC_DOCUMENT_SIZE_UPLOAD_LIMIT}
#NEXT_PUBLIC_POSTHOG_KEY=${NEXT_PUBLIC_POSTHOG_KEY}
#NEXT_PUBLIC_DISABLE_SIGNUP=${NEXT_PUBLIC_DISABLE_SIGNUP}
#NEXT_PRIVATE_UPLOAD_ENDPOINT=${NEXT_PRIVATE_UPLOAD_ENDPOINT}
#NEXT_PRIVATE_UPLOAD_FORCE_PATH_STYLE=${NEXT_PRIVATE_UPLOAD_FORCE_PATH_STYLE}
#NEXT_PRIVATE_UPLOAD_REGION=${NEXT_PRIVATE_UPLOAD_REGION}
#NEXT_PRIVATE_UPLOAD_BUCKET=${NEXT_PRIVATE_UPLOAD_BUCKET}
#NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID=${NEXT_PRIVATE_UPLOAD_ACCESS_KEY_ID}
#NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY=${NEXT_PRIVATE_UPLOAD_SECRET_ACCESS_KEY}
#NEXT_PRIVATE_GOOGLE_CLIENT_ID=${NEXT_PRIVATE_GOOGLE_CLIENT_ID}
#NEXT_PRIVATE_GOOGLE_CLIENT_SECRET=${NEXT_PRIVATE_GOOGLE_CLIENT_SECRET}

View File

@@ -0,0 +1,19 @@
# DokuWiki - Wiki platform
# Port: 8399
# Simple wiki without database, uses plain text files
version: "3.9"
services:
dokuwiki:
image: ghcr.io/linuxserver/dokuwiki
container_name: dokuwiki
restart: unless-stopped
ports:
- "8399:80"
- "4443:443"
environment:
- TZ=America/Los_Angeles
- PUID=1026
- PGID=100
volumes:
- /volume2/metadata/docker/dokuwiki:/config

View File

@@ -0,0 +1,21 @@
# Dozzle - Real-time Docker log viewer
# Port: 8892
# Lightweight container log viewer with web UI
# Updated: 2026-03-11
services:
dozzle:
container_name: Dozzle
image: amir20/dozzle:latest
mem_limit: 3g
cpu_shares: 768
security_opt:
- no-new-privileges:true
restart: on-failure:5
ports:
- 8892:8080
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /volume2/metadata/docker/dozzle:/data:rw
environment:
DOZZLE_AUTH_PROVIDER: simple
DOZZLE_REMOTE_AGENT: "100.72.55.21:7007,100.77.151.40:7007,100.103.48.78:7007,100.75.252.64:7007,100.67.40.126:7007,100.82.197.124:7007,100.125.0.20:7007,100.85.21.51:7007"

View File

@@ -0,0 +1,6 @@
users:
vish:
name: "Vish k"
# Generate with IT-TOOLS https://it-tools.tech/bcrypt
password: "REDACTED_PASSWORD" # pragma: allowlist secret
email: your-email@example.com

View File

@@ -0,0 +1,72 @@
# Dynamic DNS Updater
# Updates DNS records when public IP changes
# Deployed on Atlantis - updates all homelab domains
version: '3.8'
services:
# vish.gg (proxied domains - all public services)
ddns-vish-proxied:
image: favonia/cloudflare-ddns:latest
network_mode: host
restart: unless-stopped
user: "1026:100"
read_only: true
cap_drop: [all]
security_opt: [no-new-privileges:true]
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
# Main domains + Calypso services (sf, dav, actual, docs, ost, retro)
# NOTE: mx.vish.gg intentionally excluded — MX/mail records must NOT be CF-proxied
# NOTE: reddit.vish.gg and vp.vish.gg removed — obsolete services
- DOMAINS=vish.gg,www.vish.gg,cal.vish.gg,dash.vish.gg,gf.vish.gg,git.vish.gg,kuma.vish.gg,mastodon.vish.gg,nb.vish.gg,npm.vish.gg,ntfy.vish.gg,ollama.vish.gg,paperless.vish.gg,pw.vish.gg,rackula.vish.gg,rx.vish.gg,rxdl.vish.gg,rxv4access.vish.gg,rxv4download.vish.gg,scrutiny.vish.gg,sso.vish.gg,sf.vish.gg,dav.vish.gg,actual.vish.gg,docs.vish.gg,ost.vish.gg,retro.vish.gg,wizarr.vish.gg
- PROXIED=true
# thevish.io (proxied domains)
ddns-thevish-proxied:
image: favonia/cloudflare-ddns:latest
network_mode: host
restart: unless-stopped
user: "1026:100"
read_only: true
cap_drop: [all]
security_opt: [no-new-privileges:true]
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
# Removed: documenso.thevish.io, *.vps.thevish.io (deleted)
# Added: binterest, hoarder (now proxied)
# meet.thevish.io moved here: CF proxy enabled Jan 2026 (NPM migration)
- DOMAINS=www.thevish.io,joplin.thevish.io,matrix.thevish.io,binterest.thevish.io,hoarder.thevish.io,meet.thevish.io
- PROXIED=true
# vish.gg (unproxied domains - special protocols requiring direct IP)
ddns-vish-unproxied:
image: favonia/cloudflare-ddns:latest
network_mode: host
restart: unless-stopped
user: "1026:100"
read_only: true
cap_drop: [all]
security_opt: [no-new-privileges:true]
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
# mx.vish.gg - Matrix homeserver; CF proxy breaks federation (port 8448)
# derp.vish.gg - Headscale built-in DERP relay; CF proxy breaks DERP protocol
# derp-atl.vish.gg - Atlantis DERP relay (region 902); CF proxy breaks DERP protocol
# headscale.vish.gg - Headscale VPN server; CF proxy breaks Tailscale client connections
- DOMAINS=mx.vish.gg,derp.vish.gg,derp-atl.vish.gg,headscale.vish.gg
- PROXIED=false
# thevish.io (unproxied domains - special protocols)
ddns-thevish-unproxied:
image: favonia/cloudflare-ddns:latest
network_mode: host
restart: unless-stopped
user: "1026:100"
read_only: true
cap_drop: [all]
security_opt: [no-new-privileges:true]
environment:
- CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
# turn.thevish.io - TURN/STUN protocol needs direct connection
- DOMAINS=turn.thevish.io
- PROXIED=false

View File

@@ -0,0 +1,19 @@
# Fenrus - Application dashboard
# Port: 4500
# Modern dashboard for self-hosted services
version: "3"
services:
fenrus:
container_name: Fenrus
image: revenz/fenrus:latest
restart: unless-stopped
environment:
- TZ=America/Los_Angeles
ports:
- 4500:3000
volumes:
- /volume2/metadata/docker/fenrus:/app/data
dns:
- 100.103.48.78 # Calypso's Tailscale IP as resolver
- 100.72.55.21 # Concord_NUC or your Tailnet DNS node

View File

@@ -0,0 +1,66 @@
# Firefly III - Finance
# Port: 6182
# Personal finance manager
version: '3.7'
networks:
internal:
external: false
services:
firefly:
container_name: firefly
image: fireflyiii/core:latest
ports:
- 6182:8080
volumes:
- /volume1/docker/fireflyup:/var/www/html/storage/upload
restart: unless-stopped
env_file:
- stack.env
depends_on:
- firefly-db
networks:
- internal
firefly-db:
container_name: firefly-db
image: postgres
volumes:
- /volume1/docker/fireflydb:/var/lib/postgresql/data
restart: unless-stopped
environment:
POSTGRES_DB: firefly
POSTGRES_USER: firefly
POSTGRES_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
networks:
- internal
firefly-db-backup:
container_name: firefly-db-backup
image: postgres
volumes:
- /volume1/docker/fireflydb:/dump
- /etc/localtime:/etc/localtime:ro
environment:
PGHOST: firefly-db
PGDATABASE: firefly
PGUSER: firefly
PGPASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
BACKUP_NUM_KEEP: 10
BACKUP_FREQUENCY: 7d
    entrypoint: |
      bash -c 'bash -s <<EOF
      trap "break;exit" SIGHUP SIGINT SIGTERM
      sleep 2m
      while /bin/true; do
        pg_dump -Fc > /dump/dump_\`date +%d-%m-%Y"_"%H_%M_%S\`.psql
        (ls -t /dump/dump*.psql|head -n $$BACKUP_NUM_KEEP;ls /dump/dump*.psql)|sort|uniq -u|xargs -r rm --
        sleep $$BACKUP_FREQUENCY
      done
      EOF'
networks:
- internal
firefly-redis:
container_name: firefly-redis
image: redis
networks:
- internal
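The backup container's pruning pipeline is worth unpacking: listing the newest N files alongside the full file list, then keeping only lines that appear once, isolates exactly the files outside the keep-set. A standalone demonstration on dummy dump files:

```shell
# Keep the 2 newest dumps, delete the rest — same idiom as the entrypoint.
cd "$(mktemp -d)"
for i in 1 2 3 4 5; do
  touch -d "2024-01-0$i" "dump_$i.psql"   # distinct mtimes, oldest first
done
KEEP=2
# keepers appear twice (head list + full list) and cancel out via `uniq -u`;
# everything listed only once is outside the keep-set and gets removed
(ls -t dump_*.psql | head -n "$KEEP"; ls dump_*.psql) | sort | uniq -u | xargs -r rm --
ls dump_*.psql   # prints dump_4.psql and dump_5.psql
```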

View File

@@ -0,0 +1,11 @@
# Extra fstab entries for Atlantis Synology 1823xs+ (192.168.0.200)
# These are appended to /etc/fstab on the host
#
# Credentials file for pi-5: /root/.pi5_smb_creds (chmod 600)
# username=vish
#   password=REDACTED_PASSWORD
#
# Note: Atlantis volumes are btrfs managed by DSM (volume1/2/3)
# pi-5 SMB share (NVMe storagepool) — mounted at /volume1/pi5_storagepool
//192.168.0.66/storagepool /volume1/pi5_storagepool cifs credentials=/root/.pi5_smb_creds,vers=3.0,nofail,_netdev 0 0
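An fstab line is six whitespace-separated fields (source, mountpoint, type, options, dump, pass); splitting the CIFS entry above makes that structure explicit:

```shell
# Split the fstab entry into its six fields (plain word-splitting works
# because none of the fields contain spaces)
line='//192.168.0.66/storagepool /volume1/pi5_storagepool cifs credentials=/root/.pi5_smb_creds,vers=3.0,nofail,_netdev 0 0'
set -- $line
echo "fields=$# source=$1 target=$2 type=$3"
echo "options=$4 dump=$5 pass=$6"
```

The `nofail,_netdev` options in field 4 are what keep the NAS booting cleanly when the pi-5 share is unreachable.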

View File

@@ -0,0 +1,22 @@
# GitLab - Git repository
# Port: 8929
# Self-hosted Git and CI/CD
version: '3.6'
services:
web:
image: 'gitlab/gitlab-ce:latest'
restart: unless-stopped
hostname: 'gl.vish.gg'
environment:
GITLAB_OMNIBUS_CONFIG: |
external_url 'http://gl.vish.gg:8929'
gitlab_rails['gitlab_shell_ssh_port'] = 2224
ports:
- 8929:8929/tcp
- 2224:22
volumes:
- /volume1/docker/gitlab/config:/etc/gitlab
- /volume1/docker/gitlab/logs:/var/log/gitlab
- /volume1/docker/gitlab/data:/var/opt/gitlab
shm_size: '256m'

View File

@@ -0,0 +1,143 @@
# Grafana - Dashboards
# Port: 3340
# Metrics visualization and dashboards
version: "3.9"
services:
grafana:
image: grafana/grafana:latest
container_name: Grafana
hostname: grafana
networks:
- grafana-net
mem_limit: 512m
cpu_shares: 512
security_opt:
- no-new-privileges:true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:3000/api/health
ports:
- 3340:3000
volumes:
- /volume1/docker/grafana/data:/var/lib/grafana:rw
environment:
TZ: America/Los_Angeles
GF_INSTALL_PLUGINS: grafana-clock-panel,grafana-simple-json-datasource,natel-discrete-panel,grafana-piechart-panel
# Authentik SSO Configuration
GF_SERVER_ROOT_URL: https://gf.vish.gg
GF_AUTH_GENERIC_OAUTH_ENABLED: "true"
GF_AUTH_GENERIC_OAUTH_NAME: Authentik
GF_AUTH_GENERIC_OAUTH_CLIENT_ID: "REDACTED_CLIENT_ID" # pragma: allowlist secret
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
GF_AUTH_GENERIC_OAUTH_SCOPES: openid profile email
GF_AUTH_GENERIC_OAUTH_AUTH_URL: https://sso.vish.gg/application/o/authorize/
GF_AUTH_GENERIC_OAUTH_TOKEN_URL: https://sso.vish.gg/application/o/token/
GF_AUTH_GENERIC_OAUTH_API_URL: https://sso.vish.gg/application/o/userinfo/
GF_AUTH_SIGNOUT_REDIRECT_URL: https://sso.vish.gg/application/o/grafana/end-session/
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH: "contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
# Keep local admin auth working
GF_AUTH_DISABLE_LOGIN_FORM: "false"
restart: on-failure:5
prometheus:
image: prom/prometheus
command:
- '--storage.tsdb.retention.time=60d'
- --config.file=/etc/prometheus/prometheus.yml
container_name: Prometheus
hostname: prometheus-server
networks:
- grafana-net
- prometheus-net
mem_limit: 1g
cpu_shares: 768
security_opt:
      - no-new-privileges:true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9090/ || exit 1
volumes:
- /volume1/docker/grafana/prometheus:/prometheus:rw
- /volume1/docker/grafana/prometheus.yml:/etc/prometheus/prometheus.yml:ro
restart: on-failure:5
node-exporter:
image: prom/node-exporter:latest
command:
- --collector.disable-defaults
- --collector.stat
- --collector.time
- --collector.cpu
- --collector.loadavg
- --collector.hwmon
- --collector.meminfo
- --collector.diskstats
container_name: Prometheus-Node
hostname: prometheus-node
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
      - no-new-privileges:true
read_only: true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9100/
restart: on-failure:5
snmp-exporter:
image: prom/snmp-exporter:latest
command:
- --config.file=/etc/snmp_exporter/snmp.yml
container_name: Prometheus-SNMP
hostname: prometheus-snmp
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
- no-new-privileges:true
read_only: true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9116/ || exit 1
volumes:
- /volume1/docker/grafana/snmp:/etc/snmp_exporter/:ro
restart: on-failure:5
cadvisor:
image: gcr.io/cadvisor/cadvisor:latest
command:
- '--docker_only=true'
container_name: Prometheus-cAdvisor
hostname: prometheus-cadvisor
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
      - no-new-privileges:true
read_only: true
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: on-failure:5
networks:
grafana-net:
name: grafana-net
ipam:
config:
- subnet: 192.168.50.0/24
prometheus-net:
name: prometheus-net
ipam:
config:
- subnet: 192.168.51.0/24
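With the 60-day retention configured for Prometheus above, disk needs can be estimated with the usual rule of thumb: retention seconds × ingested samples per second × bytes per sample, at roughly 1-2 bytes per sample after compression. The sample rate below is an assumed figure for illustration, not a measurement from this stack:

```shell
# Back-of-envelope TSDB sizing (integer arithmetic, GiB rounded down)
retention_days=60
samples_per_sec=1000   # assumption — check prometheus_tsdb_head_samples_appended_total
bytes_per_sample=2     # conservative end of the 1–2 byte rule of thumb
bytes=$(( retention_days * 86400 * samples_per_sec * bytes_per_sample ))
echo "$(( bytes / 1024 / 1024 / 1024 )) GiB"   # ~9 GiB
```

At homelab scrape volumes the 60-day window is cheap; the retention flag, not disk, is usually the binding constraint.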

File diff suppressed because it is too large

View File

@@ -0,0 +1,29 @@
# Node Exporter - Prometheus metrics
# Port: 9100 (host network)
# Exposes hardware/OS metrics for Prometheus
version: "3.8"
services:
node-exporter:
image: quay.io/prometheus/node-exporter:latest
container_name: node_exporter
network_mode: host
pid: host
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
- /:/rootfs:ro
command:
- '--path.procfs=/host/proc'
- '--path.sysfs=/host/sys'
- '--path.rootfs=/rootfs'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
restart: unless-stopped
snmp-exporter:
image: quay.io/prometheus/snmp-exporter:latest
container_name: snmp_exporter
network_mode: host # important, so exporter can talk to DSM SNMP on localhost
volumes:
- /volume2/metadata/docker/snmp/snmp.yml:/etc/snmp_exporter/snmp.yml:ro
restart: unless-stopped

View File

@@ -0,0 +1,278 @@
# =============================================================================
# HOMELAB MONITORING STACK - CRITICAL INFRASTRUCTURE VISIBILITY
# =============================================================================
#
# SERVICE OVERVIEW:
# - Complete monitoring solution for homelab infrastructure
# - Grafana: Visualization and dashboards
# - Prometheus: Metrics collection and storage
# - Node Exporter: System metrics (CPU, memory, disk, network)
# - SNMP Exporter: Network device monitoring (router, switches)
# - cAdvisor: Container metrics and resource usage
# - Blackbox Exporter: Service availability and response times
# - Speedtest Exporter: Internet connection monitoring
#
# DISASTER RECOVERY PRIORITY: HIGH
# - Essential for infrastructure visibility during outages
# - Contains historical performance data
# - Critical for troubleshooting and capacity planning
#
# RECOVERY TIME OBJECTIVE (RTO): 30 minutes
# RECOVERY POINT OBJECTIVE (RPO): 4 hours (metrics retention)
#
# DEPENDENCIES:
# - Volume2 for data persistence (separate from Volume1)
# - Network access to all monitored systems
# - SNMP access to network devices
# - Docker socket access for container monitoring
#
# =============================================================================
version: '3'
services:
# ==========================================================================
# GRAFANA - Visualization and Dashboard Platform
# ==========================================================================
grafana:
# CONTAINER IMAGE:
# - grafana/grafana:latest: Official Grafana image
# - Consider pinning version for production: grafana/grafana:10.2.0
# - Auto-updates with Watchtower (monitor for breaking changes)
image: grafana/grafana:latest
# CONTAINER IDENTIFICATION:
# - Grafana: Clear identification for monitoring and logs
# - grafana: Internal hostname for service communication
container_name: Grafana
hostname: grafana
# NETWORK CONFIGURATION:
# - grafana-net: Isolated network for Grafana and data sources
# - Allows secure communication with Prometheus
# - Prevents unauthorized access to monitoring data
networks:
- grafana-net
# RESOURCE ALLOCATION:
# - mem_limit: 512MB (sufficient for dashboards and queries)
# - cpu_shares: 512 (medium priority, less than Prometheus)
# - Grafana is lightweight but needs memory for dashboard rendering
mem_limit: 512m
cpu_shares: 512
# SECURITY CONFIGURATION:
# - no-new-privileges: Prevents privilege escalation attacks
# - user: 1026:100 (Synology user/group for file permissions)
# - CRITICAL: Must match NAS permissions for data access
security_opt:
- no-new-privileges:true
user: 1026:100
# HEALTH MONITORING:
# - wget: Tests Grafana API health endpoint
# - /api/health: Built-in Grafana health check
# - Ensures web interface is responsive
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:3000/api/health
# NETWORK PORTS:
# - 7099:3000: External port 7099 maps to internal Grafana port 3000
# - Port 7099: Accessible via reverse proxy or direct access
# - Port 3000: Standard Grafana web interface port
ports:
- 7099:3000
# DATA PERSISTENCE:
# - /volume2/metadata/docker/grafana/data: Grafana configuration and data
# - Contains: Dashboards, data sources, users, alerts, plugins
# - BACKUP CRITICAL: Contains all dashboard configurations
# - Volume2: Separate from Volume1 for redundancy
volumes:
- /volume2/metadata/docker/grafana/data:/var/lib/grafana:rw
environment:
# TIMEZONE CONFIGURATION:
# - TZ: Timezone for logs and dashboard timestamps
# - Must match system timezone for accurate time series data
TZ: America/Los_Angeles
# PLUGIN INSTALLATION:
# - GF_INSTALL_PLUGINS: Comma-separated list of plugins to install
# - grafana-clock-panel: Clock widget for dashboards
# - grafana-simple-json-datasource: JSON data source support
# - natel-discrete-panel: Discrete value visualization
# - grafana-piechart-panel: Pie chart visualizations
# - Plugins installed automatically on container start
GF_INSTALL_PLUGINS: grafana-clock-panel,grafana-simple-json-datasource,natel-discrete-panel,grafana-piechart-panel
# RESTART POLICY:
# - on-failure:5: Restart up to 5 times on failure
# - Critical for maintaining monitoring visibility
# - Prevents infinite restart loops
restart: on-failure:5
# ==========================================================================
# PROMETHEUS - Metrics Collection and Time Series Database
# ==========================================================================
prometheus:
# CONTAINER IMAGE:
# - prom/prometheus: Official Prometheus image
# - Latest stable version with security updates
# - Consider version pinning: prom/prometheus:v2.47.0
image: prom/prometheus
# PROMETHEUS CONFIGURATION:
# - --storage.tsdb.retention.time=60d: Keep metrics for 60 days
# - --config.file: Path to Prometheus configuration file
# - Retention period balances storage usage vs. historical data
command:
- '--storage.tsdb.retention.time=60d'
- '--config.file=/etc/prometheus/prometheus.yml'
# CONTAINER IDENTIFICATION:
# - Prometheus: Clear identification for monitoring
# - prometheus-server: Internal hostname for service communication
container_name: Prometheus
hostname: prometheus-server
# NETWORK CONFIGURATION:
# - grafana-net: Communication with Grafana for data queries
# - prometheus-net: Communication with exporters and targets
# - Dual network setup for security and organization
networks:
- grafana-net
- prometheus-net
# RESOURCE ALLOCATION:
# - mem_limit: 1GB (metrics database requires significant memory)
# - cpu_shares: 768 (high priority for metrics collection)
# - Memory usage scales with number of metrics and retention period
mem_limit: 1g
cpu_shares: 768
# SECURITY CONFIGURATION:
# - no-new-privileges: Prevents privilege escalation
# - user: 1026:100 (Synology permissions for data storage)
security_opt:
- no-new-privileges=true
user: 1026:100
# HEALTH MONITORING:
# - wget: Tests Prometheus web interface availability
# - Port 9090: Standard Prometheus web UI port
# - Ensures metrics collection is operational
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9090/ || exit 1
# DATA PERSISTENCE:
# - /volume2/metadata/docker/grafana/prometheus: Time series database storage
# - /volume2/metadata/docker/grafana/prometheus.yml: Configuration file
# - BACKUP IMPORTANT: Contains historical metrics data
# - Configuration file defines scrape targets and rules
volumes:
- /volume2/metadata/docker/grafana/prometheus:/prometheus:rw
- /volume2/metadata/docker/grafana/prometheus.yml:/etc/prometheus/prometheus.yml:ro
# RESTART POLICY:
# - on-failure:5: Restart on failure to maintain metrics collection
# - Critical for continuous monitoring and alerting
restart: on-failure:5
  # ==========================================================================
  # NODE EXPORTER - Host-Level Hardware and OS Metrics
  # ==========================================================================
  node-exporter:
    image: prom/node-exporter:latest
    # COLLECTOR CONFIGURATION:
    # - --collector.disable-defaults: Turn every collector off, then enable
    #   only the collectors listed below to keep metric cardinality low
    command:
- --collector.disable-defaults
- --collector.stat
- --collector.time
- --collector.cpu
- --collector.loadavg
- --collector.hwmon
- --collector.meminfo
- --collector.diskstats
container_name: Prometheus-Node
hostname: prometheus-node
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
- no-new-privileges=true
read_only: true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9100/
restart: on-failure:5
  # ==========================================================================
  # SNMP EXPORTER - Synology NAS Metrics via SNMP
  # ==========================================================================
  snmp-exporter:
    image: prom/snmp-exporter:latest
    # SNMP CONFIGURATION:
    # - snmp.yml supplies the SNMPv3 credentials and the 'synology' module
    command:
      - '--config.file=/etc/snmp_exporter/snmp.yml'
container_name: Prometheus-SNMP
hostname: prometheus-snmp
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
      - no-new-privileges=true
read_only: true
user: 1026:100
healthcheck:
test: wget --no-verbose --tries=1 --spider http://localhost:9116/ || exit 1
volumes:
- /volume2/metadata/docker/grafana/snmp:/etc/snmp_exporter/:ro
restart: on-failure:5
  # ==========================================================================
  # CADVISOR - Per-Container Resource Usage Metrics
  # ==========================================================================
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    # - --docker_only=true: Report Docker containers only, not raw cgroups
    command:
      - '--docker_only=true'
container_name: Prometheus-cAdvisor
hostname: prometheus-cadvisor
networks:
- prometheus-net
mem_limit: 256m
mem_reservation: 64m
cpu_shares: 512
security_opt:
- no-new-privileges=true
read_only: true
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
- /sys:/sys:ro
- /var/run/docker.sock:/var/run/docker.sock:ro
restart: on-failure:5
  # ==========================================================================
  # BLACKBOX EXPORTER - HTTP Endpoint Probing (Internal and External)
  # ==========================================================================
  blackbox-exporter:
    image: prom/blackbox-exporter
container_name: blackbox-exporter
networks:
- prometheus-net
ports:
- 9115:9115
restart: unless-stopped
  # ==========================================================================
  # SPEEDTEST EXPORTER - Periodic Internet Bandwidth Measurements
  # ==========================================================================
  speedtest-exporter:
    image: miguelndecarvalho/speedtest-exporter
container_name: speedtest-exporter
networks:
- prometheus-net
ports:
- 9798:9798
restart: unless-stopped
# ==============================================================================
# NETWORKS - Dedicated Bridge Networks with Fixed Subnets
# ==============================================================================
networks:
grafana-net:
name: grafana-net
ipam:
config:
- subnet: 192.168.50.0/24
prometheus-net:
name: prometheus-net
ipam:
config:
- subnet: 192.168.51.0/24
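
# USAGE (a typical workflow, assuming this file is the project's docker-compose.yml):
#   docker-compose up -d                # start the full monitoring stack
#   docker-compose ps                   # check container state and health
#   docker-compose logs -f prometheus   # follow Prometheus logs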


@@ -0,0 +1,100 @@
scrape_configs:
- job_name: prometheus
scrape_interval: 30s
static_configs:
- targets: ['localhost:9090']
labels:
group: 'prometheus'
- job_name: watchtower-docker
scrape_interval: 10m
metrics_path: /v1/metrics
bearer_token: "REDACTED_TOKEN" # pragma: allowlist secret
static_configs:
- targets: ['watchtower:8080']
- job_name: node-docker
scrape_interval: 5s
static_configs:
- targets: ['prometheus-node:9100']
- job_name: cadvisor-docker
scrape_interval: 5s
static_configs:
- targets: ['prometheus-cadvisor:8080']
- job_name: snmp-docker
scrape_interval: 5s
static_configs:
- targets: ['192.168.0.200']
metrics_path: /snmp
params:
module: [synology]
auth: [snmpv3]
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- source_labels: [__param_target]
regex: (.*)
replacement: prometheus-snmp:9116
target_label: __address__
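    # How the relabeling above plays out (illustrative walk-through):
    #   1. __param_target <- 192.168.0.200        (becomes the ?target= query param)
    #   2. instance       <- 192.168.0.200        (label kept on the scraped metrics)
    #   3. __address__    <- prometheus-snmp:9116 (the host Prometheus actually scrapes)
    # Net request: http://prometheus-snmp:9116/snmp?module=synology&auth=snmpv3&target=192.168.0.200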
- job_name: homelab
static_configs:
- targets: ['192.168.0.210:9100']
labels:
instance: homelab
- job_name: LA_VM
static_configs:
- labels:
instance: LA_VM
targets:
- YOUR_WAN_IP:9100
- job_name: nuc
static_configs:
- labels:
instance: vish-concord-nuc
targets:
- 100.72.55.21:9100
- job_name: indolent-flower
static_configs:
- labels:
instance: indolent-flower
targets:
- 100.87.181.91:9100
- job_name: 'blackbox'
metrics_path: /probe
params:
module: [http_2xx]
static_configs:
- targets:
- https://google.com
- https://1.1.1.1
- http://192.168.0.1
labels:
group: 'external-probes'
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- target_label: __address__
replacement: blackbox-exporter:9115
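    # Net effect: each URL above is handed to the exporter as
    # http://blackbox-exporter:9115/probe?module=http_2xx&target=<url>,
    # while the instance label preserves the original URL.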
- job_name: 'speedtest_atlantis'
scrape_interval: 15m
scrape_timeout: 90s # <-- extended timeout
static_configs:
- targets: ['speedtest-exporter:9798']
- job_name: 'speedtest_calypso'
scrape_interval: 15m
scrape_timeout: 90s # <-- extended timeout
static_configs:
- targets: ['192.168.0.250:9798']
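
# Validate edits to this file before restarting Prometheus (promtool ships
# inside the prom/prometheus image; container name as defined in the compose file):
#   docker exec Prometheus promtool check config /etc/prometheus/prometheus.yml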


@@ -0,0 +1,38 @@
scrape_configs:
- job_name: prometheus
scrape_interval: 30s
static_configs:
- targets: ['localhost:9090']
labels:
group: 'prometheus'
- job_name: watchtower-docker
scrape_interval: 10m
metrics_path: /v1/metrics
bearer_token: "REDACTED_TOKEN" # your API_TOKEN # pragma: allowlist secret
static_configs:
- targets: ['watchtower:8080']
- job_name: node-docker
scrape_interval: 5s
static_configs:
- targets: ['prometheus-node:9100']
- job_name: cadvisor-docker
scrape_interval: 5s
static_configs:
- targets: ['prometheus-cadvisor:8080']
- job_name: snmp-docker
scrape_interval: 5s
static_configs:
- targets: ['192.168.1.132'] # Your NAS IP
metrics_path: /snmp
params:
module: [synology]
auth: [snmpv3]
relabel_configs:
- source_labels: [__address__]
target_label: __param_target
- source_labels: [__param_target]
target_label: instance
- source_labels: [__param_target]
regex: (.*)
replacement: prometheus-snmp:9116
target_label: __address__


@@ -0,0 +1,907 @@
auths:
snmpv3:
version: 3
security_level: authPriv
auth_protocol: MD5
username: snmp-exporter
password: "REDACTED_PASSWORD" # pragma: allowlist secret
priv_protocol: DES
priv_password: "REDACTED_PASSWORD" # pragma: allowlist secret
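    # NOTE: MD5/DES is the weakest SNMPv3 combination. If the Synology SNMP
    # profile allows it, SHA/AES is the safer choice (both sides must match):
    #   auth_protocol: SHA
    #   priv_protocol: AES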
modules:
synology:
walk:
- 1.3.6.1.2.1.2
- 1.3.6.1.2.1.31.1.1
- 1.3.6.1.4.1.6574.1
- 1.3.6.1.4.1.6574.2
- 1.3.6.1.4.1.6574.3
- 1.3.6.1.4.1.6574.6
metrics:
- name: ifNumber
oid: 1.3.6.1.2.1.2.1
type: gauge
help: The number of network interfaces (regardless of their current state) present on this system. - 1.3.6.1.2.1.2.1
- name: ifIndex
oid: 1.3.6.1.2.1.2.2.1.1
type: gauge
help: A unique value, greater than zero, for each interface - 1.3.6.1.2.1.2.2.1.1
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifDescr
oid: 1.3.6.1.2.1.2.2.1.2
type: DisplayString
help: A textual string containing information about the interface - 1.3.6.1.2.1.2.2.1.2
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifMtu
oid: 1.3.6.1.2.1.2.2.1.4
type: gauge
help: The size of the largest packet which can be sent/received on the interface, specified in octets - 1.3.6.1.2.1.2.2.1.4
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifSpeed
oid: 1.3.6.1.2.1.2.2.1.5
type: gauge
help: An estimate of the interface's current bandwidth in bits per second - 1.3.6.1.2.1.2.2.1.5
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifPhysAddress
oid: 1.3.6.1.2.1.2.2.1.6
type: PhysAddress48
help: The interface's address at its protocol sub-layer - 1.3.6.1.2.1.2.2.1.6
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifAdminStatus
oid: 1.3.6.1.2.1.2.2.1.7
type: gauge
help: The desired state of the interface - 1.3.6.1.2.1.2.2.1.7
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: up
2: down
3: testing
- name: ifOperStatus
oid: 1.3.6.1.2.1.2.2.1.8
type: gauge
help: The current operational state of the interface - 1.3.6.1.2.1.2.2.1.8
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: up
2: down
3: testing
4: unknown
5: dormant
6: notPresent
7: lowerLayerDown
- name: ifLastChange
oid: 1.3.6.1.2.1.2.2.1.9
type: gauge
help: The value of sysUpTime at the time the interface entered its current operational state - 1.3.6.1.2.1.2.2.1.9
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInOctets
oid: 1.3.6.1.2.1.2.2.1.10
type: counter
help: The total number of octets received on the interface, including framing characters - 1.3.6.1.2.1.2.2.1.10
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInUcastPkts
oid: 1.3.6.1.2.1.2.2.1.11
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were not addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.2.2.1.11
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInNUcastPkts
oid: 1.3.6.1.2.1.2.2.1.12
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.2.2.1.12
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInDiscards
oid: 1.3.6.1.2.1.2.2.1.13
type: counter
help: The number of inbound packets which were chosen to be discarded even though no errors had been detected to prevent
their being deliverable to a higher-layer protocol - 1.3.6.1.2.1.2.2.1.13
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInErrors
oid: 1.3.6.1.2.1.2.2.1.14
type: counter
help: For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being
deliverable to a higher-layer protocol - 1.3.6.1.2.1.2.2.1.14
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInUnknownProtos
oid: 1.3.6.1.2.1.2.2.1.15
type: counter
help: For packet-oriented interfaces, the number of packets received via the interface which were discarded because
of an unknown or unsupported protocol - 1.3.6.1.2.1.2.2.1.15
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutOctets
oid: 1.3.6.1.2.1.2.2.1.16
type: counter
help: The total number of octets transmitted out of the interface, including framing characters - 1.3.6.1.2.1.2.2.1.16
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutUcastPkts
oid: 1.3.6.1.2.1.2.2.1.17
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were not addressed
to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.17
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutNUcastPkts
oid: 1.3.6.1.2.1.2.2.1.18
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.18
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutDiscards
oid: 1.3.6.1.2.1.2.2.1.19
type: counter
help: The number of outbound packets which were chosen to be discarded even though no errors had been detected to
prevent their being transmitted - 1.3.6.1.2.1.2.2.1.19
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutErrors
oid: 1.3.6.1.2.1.2.2.1.20
type: counter
help: For packet-oriented interfaces, the number of outbound packets that could not be transmitted because of errors
- 1.3.6.1.2.1.2.2.1.20
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutQLen
oid: 1.3.6.1.2.1.2.2.1.21
type: gauge
help: The length of the output packet queue (in packets). - 1.3.6.1.2.1.2.2.1.21
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifSpecific
oid: 1.3.6.1.2.1.2.2.1.22
type: OctetString
help: A reference to MIB definitions specific to the particular media being used to realize the interface - 1.3.6.1.2.1.2.2.1.22
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
help: The textual name of the interface - 1.3.6.1.2.1.31.1.1.1.1
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.2
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.2
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.3
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a broadcast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.3
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.4
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.4
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.5
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.5
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInOctets
oid: 1.3.6.1.2.1.31.1.1.1.6
type: counter
help: The total number of octets received on the interface, including framing characters - 1.3.6.1.2.1.31.1.1.1.6
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInUcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.7
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were not addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.7
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.8
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.8
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.9
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a broadcast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.9
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutOctets
oid: 1.3.6.1.2.1.31.1.1.1.10
type: counter
help: The total number of octets transmitted out of the interface, including framing characters - 1.3.6.1.2.1.31.1.1.1.10
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
    - name: ifHCOutUcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.11
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were not addressed
to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.11
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.12
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.12
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.13
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.13
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifLinkUpDownTrapEnable
oid: 1.3.6.1.2.1.31.1.1.1.14
type: gauge
help: Indicates whether linkUp/linkDown traps should be generated for this interface - 1.3.6.1.2.1.31.1.1.1.14
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: enabled
2: disabled
- name: ifHighSpeed
oid: 1.3.6.1.2.1.31.1.1.1.15
type: gauge
help: An estimate of the interface's current bandwidth in units of 1,000,000 bits per second - 1.3.6.1.2.1.31.1.1.1.15
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifPromiscuousMode
oid: 1.3.6.1.2.1.31.1.1.1.16
type: gauge
help: This object has a value of false(2) if this interface only accepts packets/frames that are addressed to this
station - 1.3.6.1.2.1.31.1.1.1.16
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: 'true'
2: 'false'
- name: ifConnectorPresent
oid: 1.3.6.1.2.1.31.1.1.1.17
type: gauge
help: This object has the value 'true(1)' if the interface sublayer has a physical connector and the value 'false(2)'
otherwise. - 1.3.6.1.2.1.31.1.1.1.17
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: 'true'
2: 'false'
- name: ifAlias
oid: 1.3.6.1.2.1.31.1.1.1.18
type: DisplayString
help: This object is an 'alias' name for the interface as specified by a network manager, and provides a non-volatile
'handle' for the interface - 1.3.6.1.2.1.31.1.1.1.18
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifCounterDiscontinuityTime
oid: 1.3.6.1.2.1.31.1.1.1.19
type: gauge
help: The value of sysUpTime on the most recent occasion at which any one or more of this interface's counters suffered
a discontinuity - 1.3.6.1.2.1.31.1.1.1.19
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: systemStatus
oid: 1.3.6.1.4.1.6574.1.1
type: gauge
help: Synology system status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.1
- name: temperature
oid: 1.3.6.1.4.1.6574.1.2
type: gauge
help: Synology system temperature The temperature of Disk Station uses Celsius degree. - 1.3.6.1.4.1.6574.1.2
- name: powerStatus
oid: 1.3.6.1.4.1.6574.1.3
type: gauge
help: Synology power status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.3
- name: systemFanStatus
oid: 1.3.6.1.4.1.6574.1.4.1
type: gauge
help: Synology system fan status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.4.1
- name: cpuFanStatus
oid: 1.3.6.1.4.1.6574.1.4.2
type: gauge
help: Synology cpu fan status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.4.2
- name: modelName
oid: 1.3.6.1.4.1.6574.1.5.1
type: DisplayString
help: The Model name of this NAS - 1.3.6.1.4.1.6574.1.5.1
- name: serialNumber
oid: 1.3.6.1.4.1.6574.1.5.2
type: DisplayString
help: The serial number of this NAS - 1.3.6.1.4.1.6574.1.5.2
- name: version
oid: 1.3.6.1.4.1.6574.1.5.3
type: DisplayString
help: The version of this DSM - 1.3.6.1.4.1.6574.1.5.3
    - name: upgradeAvailable
oid: 1.3.6.1.4.1.6574.1.5.4
type: gauge
help: This oid is for checking whether there is a latest DSM can be upgraded - 1.3.6.1.4.1.6574.1.5.4
    - name: controllerNumber
oid: 1.3.6.1.4.1.6574.1.6
type: gauge
help: Synology system controller number Controller A(0) Controller B(1) - 1.3.6.1.4.1.6574.1.6
- name: diskIndex
oid: 1.3.6.1.4.1.6574.2.1.1.1
type: gauge
help: The index of disk table - 1.3.6.1.4.1.6574.2.1.1.1
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
help: Synology disk ID The ID of disk is assigned by disk Station. - 1.3.6.1.4.1.6574.2.1.1.2
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskModel
oid: 1.3.6.1.4.1.6574.2.1.1.3
type: DisplayString
help: Synology disk model name The disk model name will be showed here. - 1.3.6.1.4.1.6574.2.1.1.3
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskType
oid: 1.3.6.1.4.1.6574.2.1.1.4
type: DisplayString
help: Synology disk type The type of disk will be showed here, including SATA, SSD and so on. - 1.3.6.1.4.1.6574.2.1.1.4
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskStatus
oid: 1.3.6.1.4.1.6574.2.1.1.5
type: gauge
help: Synology disk status. Normal-1 Initialized-2 NotInitialized-3 SystemPartitionFailed-4 Crashed-5 - 1.3.6.1.4.1.6574.2.1.1.5
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskTemperature
oid: 1.3.6.1.4.1.6574.2.1.1.6
type: gauge
help: Synology disk temperature The temperature of each disk uses Celsius degree. - 1.3.6.1.4.1.6574.2.1.1.6
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: raidIndex
oid: 1.3.6.1.4.1.6574.3.1.1.1
type: gauge
help: The index of raid table - 1.3.6.1.4.1.6574.3.1.1.1
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
help: Synology raid name The name of each raid will be showed here. - 1.3.6.1.4.1.6574.3.1.1.2
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidStatus
oid: 1.3.6.1.4.1.6574.3.1.1.3
type: gauge
help: Synology Raid status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.3.1.1.3
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidFreeSize
oid: 1.3.6.1.4.1.6574.3.1.1.4
type: gauge
help: Synology raid freesize Free space in bytes. - 1.3.6.1.4.1.6574.3.1.1.4
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidTotalSize
oid: 1.3.6.1.4.1.6574.3.1.1.5
type: gauge
help: Synology raid totalsize Total space in bytes. - 1.3.6.1.4.1.6574.3.1.1.5
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
    - name: serviceInfoIndex
oid: 1.3.6.1.4.1.6574.6.1.1.1
type: gauge
help: Service info index - 1.3.6.1.4.1.6574.6.1.1.1
indexes:
      - labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
        - serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
        labelname: serviceInfoIndex
- name: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
help: Service name - 1.3.6.1.4.1.6574.6.1.1.2
indexes:
      - labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
        - serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
        labelname: serviceInfoIndex
- name: serviceUsers
oid: 1.3.6.1.4.1.6574.6.1.1.3
type: gauge
help: Number of users using this service - 1.3.6.1.4.1.6574.6.1.1.3
indexes:
      - labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
        - serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
        labelname: serviceInfoIndex


@@ -0,0 +1,907 @@
auths:
snmpv3:
version: 3
security_level: authPriv
auth_protocol: MD5
username: snmp-exporter
password: "REDACTED_PASSWORD" # pragma: allowlist secret
priv_protocol: DES
priv_password: "REDACTED_PASSWORD" # pragma: allowlist secret
modules:
synology:
walk:
- 1.3.6.1.2.1.2
- 1.3.6.1.2.1.31.1.1
- 1.3.6.1.4.1.6574.1
- 1.3.6.1.4.1.6574.2
- 1.3.6.1.4.1.6574.3
- 1.3.6.1.4.1.6574.6
metrics:
- name: ifNumber
oid: 1.3.6.1.2.1.2.1
type: gauge
help: The number of network interfaces (regardless of their current state) present on this system. - 1.3.6.1.2.1.2.1
- name: ifIndex
oid: 1.3.6.1.2.1.2.2.1.1
type: gauge
help: A unique value, greater than zero, for each interface - 1.3.6.1.2.1.2.2.1.1
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifDescr
oid: 1.3.6.1.2.1.2.2.1.2
type: DisplayString
help: A textual string containing information about the interface - 1.3.6.1.2.1.2.2.1.2
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifMtu
oid: 1.3.6.1.2.1.2.2.1.4
type: gauge
help: The size of the largest packet which can be sent/received on the interface, specified in octets - 1.3.6.1.2.1.2.2.1.4
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifSpeed
oid: 1.3.6.1.2.1.2.2.1.5
type: gauge
help: An estimate of the interface's current bandwidth in bits per second - 1.3.6.1.2.1.2.2.1.5
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifPhysAddress
oid: 1.3.6.1.2.1.2.2.1.6
type: PhysAddress48
help: The interface's address at its protocol sub-layer - 1.3.6.1.2.1.2.2.1.6
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifAdminStatus
oid: 1.3.6.1.2.1.2.2.1.7
type: gauge
help: The desired state of the interface - 1.3.6.1.2.1.2.2.1.7
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: up
2: down
3: testing
- name: ifOperStatus
oid: 1.3.6.1.2.1.2.2.1.8
type: gauge
help: The current operational state of the interface - 1.3.6.1.2.1.2.2.1.8
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: up
2: down
3: testing
4: unknown
5: dormant
6: notPresent
7: lowerLayerDown
- name: ifLastChange
oid: 1.3.6.1.2.1.2.2.1.9
type: gauge
help: The value of sysUpTime at the time the interface entered its current operational state - 1.3.6.1.2.1.2.2.1.9
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInOctets
oid: 1.3.6.1.2.1.2.2.1.10
type: counter
help: The total number of octets received on the interface, including framing characters - 1.3.6.1.2.1.2.2.1.10
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInUcastPkts
oid: 1.3.6.1.2.1.2.2.1.11
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were not addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.2.2.1.11
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInNUcastPkts
oid: 1.3.6.1.2.1.2.2.1.12
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.2.2.1.12
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInDiscards
oid: 1.3.6.1.2.1.2.2.1.13
type: counter
help: The number of inbound packets which were chosen to be discarded even though no errors had been detected to prevent
their being deliverable to a higher-layer protocol - 1.3.6.1.2.1.2.2.1.13
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInErrors
oid: 1.3.6.1.2.1.2.2.1.14
type: counter
help: For packet-oriented interfaces, the number of inbound packets that contained errors preventing them from being
deliverable to a higher-layer protocol - 1.3.6.1.2.1.2.2.1.14
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInUnknownProtos
oid: 1.3.6.1.2.1.2.2.1.15
type: counter
help: For packet-oriented interfaces, the number of packets received via the interface which were discarded because
of an unknown or unsupported protocol - 1.3.6.1.2.1.2.2.1.15
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutOctets
oid: 1.3.6.1.2.1.2.2.1.16
type: counter
help: The total number of octets transmitted out of the interface, including framing characters - 1.3.6.1.2.1.2.2.1.16
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutUcastPkts
oid: 1.3.6.1.2.1.2.2.1.17
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were not addressed
to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.17
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutNUcastPkts
oid: 1.3.6.1.2.1.2.2.1.18
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.18
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutDiscards
oid: 1.3.6.1.2.1.2.2.1.19
type: counter
help: The number of outbound packets which were chosen to be discarded even though no errors had been detected to
prevent their being transmitted - 1.3.6.1.2.1.2.2.1.19
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutErrors
oid: 1.3.6.1.2.1.2.2.1.20
type: counter
help: For packet-oriented interfaces, the number of outbound packets that could not be transmitted because of errors
- 1.3.6.1.2.1.2.2.1.20
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutQLen
oid: 1.3.6.1.2.1.2.2.1.21
type: gauge
help: The length of the output packet queue (in packets). - 1.3.6.1.2.1.2.2.1.21
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifSpecific
oid: 1.3.6.1.2.1.2.2.1.22
type: OctetString
help: A reference to MIB definitions specific to the particular media being used to realize the interface - 1.3.6.1.2.1.2.2.1.22
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
help: The textual name of the interface - 1.3.6.1.2.1.31.1.1.1.1
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.2
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.2
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifInBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.3
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a broadcast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.3
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.4
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.4
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifOutBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.5
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.5
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInOctets
oid: 1.3.6.1.2.1.31.1.1.1.6
type: counter
help: The total number of octets received on the interface, including framing characters - 1.3.6.1.2.1.31.1.1.1.6
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInUcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.7
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were not addressed to a multicast
or broadcast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.7
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.8
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a multicast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.8
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCInBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.9
type: counter
help: The number of packets, delivered by this sub-layer to a higher (sub-)layer, which were addressed to a broadcast
address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.9
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutOctets
oid: 1.3.6.1.2.1.31.1.1.1.10
type: counter
help: The total number of octets transmitted out of the interface, including framing characters - 1.3.6.1.2.1.31.1.1.1.10
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutUcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.11
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were not addressed
to a multicast or broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.11
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutMulticastPkts
oid: 1.3.6.1.2.1.31.1.1.1.12
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a multicast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.12
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifHCOutBroadcastPkts
oid: 1.3.6.1.2.1.31.1.1.1.13
type: counter
help: The total number of packets that higher-level protocols requested be transmitted, and which were addressed to
a broadcast address at this sub-layer, including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.13
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifLinkUpDownTrapEnable
oid: 1.3.6.1.2.1.31.1.1.1.14
type: gauge
help: Indicates whether linkUp/linkDown traps should be generated for this interface - 1.3.6.1.2.1.31.1.1.1.14
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: enabled
2: disabled
- name: ifHighSpeed
oid: 1.3.6.1.2.1.31.1.1.1.15
type: gauge
help: An estimate of the interface's current bandwidth in units of 1,000,000 bits per second - 1.3.6.1.2.1.31.1.1.1.15
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifPromiscuousMode
oid: 1.3.6.1.2.1.31.1.1.1.16
type: gauge
help: This object has a value of false(2) if this interface only accepts packets/frames that are addressed to this
station - 1.3.6.1.2.1.31.1.1.1.16
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: 'true'
2: 'false'
- name: ifConnectorPresent
oid: 1.3.6.1.2.1.31.1.1.1.17
type: gauge
help: This object has the value 'true(1)' if the interface sublayer has a physical connector and the value 'false(2)'
otherwise. - 1.3.6.1.2.1.31.1.1.1.17
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
enum_values:
1: 'true'
2: 'false'
- name: ifAlias
oid: 1.3.6.1.2.1.31.1.1.1.18
type: DisplayString
help: This object is an 'alias' name for the interface as specified by a network manager, and provides a non-volatile
'handle' for the interface - 1.3.6.1.2.1.31.1.1.1.18
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: ifCounterDiscontinuityTime
oid: 1.3.6.1.2.1.31.1.1.1.19
type: gauge
help: The value of sysUpTime on the most recent occasion at which any one or more of this interface's counters suffered
a discontinuity - 1.3.6.1.2.1.31.1.1.1.19
indexes:
- labelname: ifIndex
type: gauge
lookups:
- labels:
- ifIndex
labelname: ifName
oid: 1.3.6.1.2.1.31.1.1.1.1
type: DisplayString
- labels: []
labelname: ifIndex
- name: systemStatus
oid: 1.3.6.1.4.1.6574.1.1
type: gauge
help: Synology system status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.1
- name: temperature
oid: 1.3.6.1.4.1.6574.1.2
type: gauge
help: Synology system temperature The temperature of Disk Station uses Celsius degree. - 1.3.6.1.4.1.6574.1.2
- name: powerStatus
oid: 1.3.6.1.4.1.6574.1.3
type: gauge
help: Synology power status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.3
- name: systemFanStatus
oid: 1.3.6.1.4.1.6574.1.4.1
type: gauge
help: Synology system fan status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.4.1
- name: cpuFanStatus
oid: 1.3.6.1.4.1.6574.1.4.2
type: gauge
help: Synology cpu fan status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.1.4.2
- name: modelName
oid: 1.3.6.1.4.1.6574.1.5.1
type: DisplayString
help: The Model name of this NAS - 1.3.6.1.4.1.6574.1.5.1
- name: serialNumber
oid: 1.3.6.1.4.1.6574.1.5.2
type: DisplayString
help: The serial number of this NAS - 1.3.6.1.4.1.6574.1.5.2
- name: version
oid: 1.3.6.1.4.1.6574.1.5.3
type: DisplayString
help: The version of this DSM - 1.3.6.1.4.1.6574.1.5.3
- name: upgradeAvailable
oid: 1.3.6.1.4.1.6574.1.5.4
type: gauge
help: This oid is for checking whether there is a latest DSM can be upgraded - 1.3.6.1.4.1.6574.1.5.4
- name: controllerNumber
oid: 1.3.6.1.4.1.6574.1.6
type: gauge
help: Synology system controller number Controller A(0) Controller B(1) - 1.3.6.1.4.1.6574.1.6
- name: diskIndex
oid: 1.3.6.1.4.1.6574.2.1.1.1
type: gauge
help: The index of disk table - 1.3.6.1.4.1.6574.2.1.1.1
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
help: Synology disk ID The ID of disk is assigned by disk Station. - 1.3.6.1.4.1.6574.2.1.1.2
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskModel
oid: 1.3.6.1.4.1.6574.2.1.1.3
type: DisplayString
help: Synology disk model name The disk model name will be showed here. - 1.3.6.1.4.1.6574.2.1.1.3
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskType
oid: 1.3.6.1.4.1.6574.2.1.1.4
type: DisplayString
help: Synology disk type The type of disk will be showed here, including SATA, SSD and so on. - 1.3.6.1.4.1.6574.2.1.1.4
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskStatus
oid: 1.3.6.1.4.1.6574.2.1.1.5
type: gauge
help: Synology disk status. Normal-1 Initialized-2 NotInitialized-3 SystemPartitionFailed-4 Crashed-5 - 1.3.6.1.4.1.6574.2.1.1.5
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: diskTemperature
oid: 1.3.6.1.4.1.6574.2.1.1.6
type: gauge
help: Synology disk temperature The temperature of each disk uses Celsius degree. - 1.3.6.1.4.1.6574.2.1.1.6
indexes:
- labelname: diskIndex
type: gauge
lookups:
- labels:
- diskIndex
labelname: diskID
oid: 1.3.6.1.4.1.6574.2.1.1.2
type: DisplayString
- labels: []
labelname: diskIndex
- name: raidIndex
oid: 1.3.6.1.4.1.6574.3.1.1.1
type: gauge
help: The index of raid table - 1.3.6.1.4.1.6574.3.1.1.1
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
help: Synology raid name The name of each raid will be showed here. - 1.3.6.1.4.1.6574.3.1.1.2
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidStatus
oid: 1.3.6.1.4.1.6574.3.1.1.3
type: gauge
help: Synology Raid status Each meanings of status represented describe below - 1.3.6.1.4.1.6574.3.1.1.3
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidFreeSize
oid: 1.3.6.1.4.1.6574.3.1.1.4
type: gauge
help: Synology raid freesize Free space in bytes. - 1.3.6.1.4.1.6574.3.1.1.4
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: raidTotalSize
oid: 1.3.6.1.4.1.6574.3.1.1.5
type: gauge
help: Synology raid totalsize Total space in bytes. - 1.3.6.1.4.1.6574.3.1.1.5
indexes:
- labelname: raidIndex
type: gauge
lookups:
- labels:
- raidIndex
labelname: raidName
oid: 1.3.6.1.4.1.6574.3.1.1.2
type: DisplayString
- name: serviceInfoIndex
oid: 1.3.6.1.4.1.6574.6.1.1.1
type: gauge
help: Service info index - 1.3.6.1.4.1.6574.6.1.1.1
indexes:
- labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
- serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
labelname: serviceInfoIndex
- name: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
help: Service name - 1.3.6.1.4.1.6574.6.1.1.2
indexes:
- labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
- serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
labelname: serviceInfoIndex
- name: serviceUsers
oid: 1.3.6.1.4.1.6574.6.1.1.3
type: gauge
help: Number of users using this service - 1.3.6.1.4.1.6574.6.1.1.3
indexes:
- labelname: serviceInfoIndex
type: gauge
lookups:
- labels:
- serviceInfoIndex
labelname: serviceName
oid: 1.3.6.1.4.1.6574.6.1.1.2
type: DisplayString
- labels: []
labelname: serviceInfoIndex
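A generated module like the one above only does something once Prometheus scrapes snmp_exporter with it. A minimal scrape job could look like the following sketch; the module name, exporter address, and NAS target are assumptions, not values taken from this file:

```yaml
# prometheus.yml fragment (hypothetical job, module, and addresses)
scrape_configs:
  - job_name: snmp-synology
    metrics_path: /snmp
    params:
      module: [synology]            # must match the module name in snmp.yml
    static_configs:
      - targets:
          - 192.168.0.200           # the NAS being polled, not the exporter
    relabel_configs:
      # standard snmp_exporter pattern: move the target into ?target=,
      # keep it as the instance label, then point the scrape at the exporter
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: 127.0.0.1:9116 # snmp_exporter listen address
```

With the 64-bit counters above, per-interface bandwidth is then a query like `rate(ifHCInOctets[5m]) * 8`.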


@@ -0,0 +1,35 @@
# Homarr - Modern dashboard for your homelab
# Port: 7575
# Docs: https://homarr.dev/
#
# Data stored in: /volume2/metadata/docker/homarr/appdata
# Database: SQLite at /appdata/db/db.sqlite
services:
homarr:
image: ghcr.io/homarr-labs/homarr:latest
container_name: homarr
environment:
- TZ=America/Los_Angeles
- SECRET_ENCRYPTION_KEY=a393eb842415bbd2f6bcf74bREDACTED_GITEA_TOKEN # pragma: allowlist secret
# Authentik SSO via native OIDC — credentials kept as fallback if Authentik is down
- AUTH_PROVIDER=oidc,credentials
- AUTH_OIDC_ISSUER=https://sso.vish.gg/application/o/homarr/
- AUTH_OIDC_CLIENT_ID="REDACTED_CLIENT_ID"
- AUTH_OIDC_CLIENT_SECRET="REDACTED_CLIENT_SECRET" # pragma: allowlist secret
- AUTH_OIDC_CLIENT_NAME=Authentik
- AUTH_OIDC_AUTO_LOGIN=false
- AUTH_LOGOUT_REDIRECT_URL=https://sso.vish.gg/application/o/homarr/end-session/
- AUTH_OIDC_ADMIN_GROUP=Homarr Admins
- AUTH_OIDC_OWNER_GROUP=Homarr Admins
volumes:
- /volume2/metadata/docker/homarr/appdata:/appdata
- /var/run/docker.sock:/var/run/docker.sock:ro
ports:
- "7575:7575"
dns:
- 192.168.0.200 # Atlantis AdGuard (resolves .tail.vish.gg and .vish.local)
- 192.168.0.250 # Calypso AdGuard (backup)
restart: unless-stopped
security_opt:
- no-new-privileges:true


@@ -0,0 +1,104 @@
# Immich - Photo/video backup solution
# URL: http://192.168.0.200:8212 (LAN only)
# Port: 2283
# Google Photos alternative with ML-powered features
# SSO: Authentik OIDC (sso.vish.gg/application/o/immich-atlantis/)
version: "3.9"
services:
immich-redis:
image: redis
container_name: Immich-REDIS
hostname: immich-redis
security_opt:
- no-new-privileges:true
healthcheck:
test: ["CMD-SHELL", "redis-cli ping || exit 1"]
user: 1026:100
environment:
- TZ=America/Los_Angeles
volumes:
- /volume2/metadata/docker/immich/redis:/data:rw
restart: on-failure:5
immich-db:
image: ghcr.io/immich-app/postgres:16-vectorchord0.4.3-pgvectors0.2.0
container_name: Immich-DB
hostname: immich-db
security_opt:
- no-new-privileges:true
shm_size: 256mb
healthcheck:
test: ["CMD", "pg_isready", "-q", "-d", "immich", "-U", "immichuser"]
interval: 10s
timeout: 5s
retries: 5
volumes:
- /volume2/metadata/docker/immich/db:/var/lib/postgresql/data:rw
environment:
- TZ=America/Los_Angeles
- POSTGRES_DB=immich
- POSTGRES_USER=immichuser
- POSTGRES_PASSWORD="REDACTED_PASSWORD" # pragma: allowlist secret
# Set because the database lives on spinning disks (HDD); remove if it moves to SSD
- DB_STORAGE_TYPE=HDD
restart: on-failure:5
immich-server:
image: ghcr.io/immich-app/immich-server:release
container_name: Immich-SERVER
hostname: immich-server
user: 1026:100
security_opt:
- no-new-privileges:true
env_file:
- stack.env
ports:
- 8212:2283
environment:
- IMMICH_CONFIG_FILE=/config/immich-config.json
volumes:
# Main Immich data folder
- /volume2/metadata/docker/immich/upload:/data:rw
# Mount Synology Photos library as external read-only source
- /volume1/homes/vish/Photos:/external/photos:ro
- /etc/localtime:/etc/localtime:ro
# SSO config
- /volume2/metadata/docker/immich/config/immich-config.json:/config/immich-config.json:ro
depends_on:
immich-redis:
condition: service_healthy
immich-db:
condition: service_started
restart: on-failure:5
deploy:
resources:
limits:
memory: 4G
immich-machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:release
container_name: Immich-LEARNING
hostname: immich-machine-learning
user: 1026:100
security_opt:
- no-new-privileges:true
env_file:
- stack.env
volumes:
- /volume2/metadata/docker/immich/upload:/data:rw
- /volume1/homes/vish/Photos:/external/photos:ro
- /volume2/metadata/docker/immich/cache:/cache:rw
- /volume2/metadata/docker/immich/cache:/.cache:rw
- /volume2/metadata/docker/immich/cache:/.config:rw
- /volume2/metadata/docker/immich/matplotlib:/matplotlib:rw
environment:
- MPLCONFIGDIR=/matplotlib
depends_on:
immich-db:
condition: service_started
restart: on-failure:5
deploy:
resources:
limits:
memory: 4G
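Both immich-server and immich-machine-learning load `env_file: stack.env`, which is not shown in this diff. A sketch of what that file likely contains, using Immich's documented variable names together with the service names and credentials defined above (the password is the same redacted placeholder, not a real value):

```env
# stack.env (sketch - values must match the immich-db / immich-redis services)
TZ=America/Los_Angeles
DB_HOSTNAME=immich-db
DB_DATABASE_NAME=immich
DB_USERNAME=immichuser
DB_PASSWORD=REDACTED_PASSWORD
REDIS_HOSTNAME=immich-redis
```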


@@ -0,0 +1,60 @@
# Invidious - YouTube frontend
# Port: 3000 (container), published as 7601
# Privacy-respecting alternative to YouTube
version: "3.9"
services:
invidious-db:
image: postgres
container_name: Invidious-DB
hostname: invidious-db
security_opt:
- no-new-privileges:true
healthcheck:
test: ["CMD", "pg_isready", "-q", "-d", "invidious", "-U", "kemal"]
timeout: 45s
interval: 10s
retries: 10
user: 1026:100
volumes:
- /volume1/docker/invidiousdb:/var/lib/postgresql/data
environment:
POSTGRES_DB: invidious
POSTGRES_USER: kemal
POSTGRES_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
restart: unless-stopped
invidious:
image: quay.io/invidious/invidious:latest
container_name: Invidious
hostname: invidious
user: 1026:100
security_opt:
- no-new-privileges:true
healthcheck:
test: wget -nv --tries=1 --spider http://127.0.0.1:3000/api/v1/comments/jNQXAC9IVRw || exit 1
interval: 30s
timeout: 5s
retries: 2
ports:
- 10.0.0.100:7601:3000
environment:
INVIDIOUS_CONFIG: |
db:
dbname: invidious
user: kemal
password: "REDACTED_PASSWORD" # pragma: allowlist secret
host: invidious-db
port: 5432
check_tables: true
captcha_enabled: false
default_user_preferences:
locale: us
region: US
external_port: 7601
domain: invidious.vishinator.synology.me
https_only: true
restart: unless-stopped
depends_on:
invidious-db:
condition: service_healthy


@@ -0,0 +1,11 @@
# iPerf3 - Network bandwidth testing
# Port: 5201
# TCP/UDP bandwidth measurement tool
version: '3.8'
services:
iperf3:
image: networkstatic/iperf3
container_name: iperf3
restart: unless-stopped
network_mode: "host" # Allows the container to use the NAS's network stack
command: "-s" # Runs iperf3 in server mode
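With the container running iperf3 in server mode on the NAS's network stack, throughput can be measured from any other machine on the LAN. The address below is assumed to be the NAS; these are standard iperf3 client flags:

```
iperf3 -c 192.168.0.200             # TCP throughput test (10s default)
iperf3 -c 192.168.0.200 -u -b 500M  # UDP at 500 Mbit/s to check loss/jitter
iperf3 -c 192.168.0.200 -R          # reverse mode: NAS sends, client receives
```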


@@ -0,0 +1,24 @@
# IT Tools - Developer utilities collection
# Port: 5545 (host), mapped to 80 in the container
# Collection of handy online tools for developers
version: '3.8'
services:
it-tools:
container_name: it-tools
image: corentinth/it-tools:latest
restart: unless-stopped
ports:
- "5545:80"
environment:
- TZ=UTC
logging:
driver: json-file
options:
max-size: "10k"
labels:
com.docker.compose.service.description: "IT Tools Dashboard"
networks:
default:
driver: bridge


@@ -0,0 +1,21 @@
# JDownloader2 - Downloads
# Port: 5800
# Multi-host download manager
version: '3.9'
services:
jdownloader-2:
image: jlesage/jdownloader-2
restart: unless-stopped
volumes:
- /volume1/docker/jdownloader2/output:/output
- /volume1/docker/jdownloader2/config:/config
environment:
- TZ=America/Los_Angeles
- PGID=100
- PUID=1026
ports:
- 13016:5900
- 40288:5800
- 20123:3129
container_name: jdownloader2

Some files were not shown because too many files have changed in this diff.