Sanitized mirror from private repository - 2026-04-20 01:32:01 UTC

hosts/synology/calypso/DEPLOYMENT_SUMMARY.md (new file, 134 lines)

# Calypso GitOps Deployment Summary

## 🎯 Completed Deployments

### ✅ Reactive Resume v5 with AI Integration
- **Location**: `/home/homelab/organized/repos/homelab/Calypso/reactive_resume_v5/`
- **External URL**: https://rx.vish.gg
- **Internal URL**: http://192.168.0.250:9751
- **AI Features**: Ollama with llama3.2:3b model
- **Status**: ✅ ACTIVE

**Services**:
- Resume-ACCESS-V5: Main application (port 9751)
- Resume-DB-V5: PostgreSQL 18 database
- Resume-BROWSERLESS-V5: PDF generation (port 4000)
- Resume-SEAWEEDFS-V5: S3 storage (port 9753)
- Resume-OLLAMA-V5: AI engine (port 11434)
### ✅ Nginx Proxy Manager (Fixed)
- **Location**: `/home/homelab/organized/repos/homelab/Calypso/nginx_proxy_manager/`
- **Admin UI**: http://192.168.0.250:81
- **HTTP Proxy**: http://192.168.0.250:8880 (external port 80)
- **HTTPS Proxy**: https://192.168.0.250:8443 (external port 443)
- **Status**: ✅ ACTIVE
## 🚀 GitOps Commands

### Reactive Resume v5
```bash
cd /home/homelab/organized/repos/homelab/Calypso/reactive_resume_v5

# Deploy complete stack with AI
./deploy.sh deploy

# Management commands
./deploy.sh status        # Check all services
./deploy.sh logs          # View application logs
./deploy.sh restart       # Restart services
./deploy.sh stop          # Stop services
./deploy.sh update        # Update images
./deploy.sh setup-ollama  # Setup AI model
```
### Nginx Proxy Manager
```bash
cd /home/homelab/organized/repos/homelab/Calypso/nginx_proxy_manager

# Deploy NPM
./deploy.sh deploy

# Management commands
./deploy.sh status   # Check service status
./deploy.sh logs     # View NPM logs
./deploy.sh restart  # Restart NPM
./deploy.sh cleanup  # Clean up containers
```
## 🌐 Network Configuration

### Router Port Forwarding
- **Port 80** → **8880** (HTTP to NPM)
- **Port 443** → **8443** (HTTPS to NPM)

### DNS Configuration
- **rx.vish.gg** → YOUR_WAN_IP ✅
- **rxdl.vish.gg** → YOUR_WAN_IP ✅

### NPM Proxy Configuration
NPM should be configured with:
1. **rx.vish.gg** → http://192.168.0.250:9751
2. **rxdl.vish.gg** → http://192.168.0.250:9753
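The two proxy hosts above amount to a simple hostname-to-backend table. As a throwaway shell sketch of the mapping NPM applies (illustrative only, not part of the deploy scripts):

```bash
# Illustrative only: the hostname -> backend mapping configured in NPM
route() {
  case "$1" in
    rx.vish.gg)   echo "http://192.168.0.250:9751" ;;  # Reactive Resume app
    rxdl.vish.gg) echo "http://192.168.0.250:9753" ;;  # SeaweedFS download service
    *)            echo "no proxy host for: $1" >&2; return 1 ;;
  esac
}

route rx.vish.gg   # prints http://192.168.0.250:9751
```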
## 🤖 AI Integration

### Ollama Configuration
- **Service**: Resume-OLLAMA-V5
- **Port**: 11434
- **Model**: llama3.2:3b (2GB)
- **API**: http://192.168.0.250:11434

### AI Features in Reactive Resume
- Resume content suggestions
- Job description analysis
- Skills optimization
- Cover letter generation
## 📊 Service Status

### Current Status (2026-02-16)
```
✅ Resume-ACCESS-V5       - Up and healthy
✅ Resume-DB-V5           - Up and healthy
✅ Resume-BROWSERLESS-V5  - Up and healthy
✅ Resume-SEAWEEDFS-V5    - Up and healthy
✅ Resume-OLLAMA-V5       - Up with llama3.2:3b loaded
✅ nginx-proxy-manager    - Up and healthy
```

### External Access Test
```bash
curl -I https://rx.vish.gg
# HTTP/2 200 ✅
```
## 🔧 Troubleshooting

### If External Access Fails
1. Check the NPM proxy host configuration
2. Verify router port forwarding (80→8880, 443→8443)
3. Confirm DNS propagation: `nslookup rx.vish.gg`

### If AI Features Don't Work
1. Check Ollama: `./deploy.sh logs` (look for Resume-OLLAMA-V5)
2. Verify the model: `ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"`

### Service Management
```bash
# Check all services
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker ps"

# Restart a specific service
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker restart Resume-ACCESS-V5"
```
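The management commands repeat the same SSH-plus-docker prefix; a hypothetical helper (not in the repo) can factor it out:

```bash
# Hypothetical wrapper (not part of the repo): run a docker command on Calypso
calypso_docker() {
  ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker $*"
}

# Usage (from any machine with SSH access to Calypso):
#   calypso_docker ps
#   calypso_docker restart Resume-ACCESS-V5
```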
## 🎉 Migration Complete

✅ **Reactive Resume v5** deployed with AI integration
✅ **NPM** fixed and deployed via GitOps
✅ **External access** working (https://rx.vish.gg)
✅ **AI features** ready with Ollama
✅ **Port compatibility** maintained from v4
✅ **GitOps workflow** established

Your Reactive Resume v5 is now fully operational with AI capabilities!

hosts/synology/calypso/REACTIVE_RESUME_V5_DEPLOYMENT.md (new file, 318 lines)

# Reactive Resume v5 with AI Integration - Complete Deployment Guide

## 🎯 Overview

This document provides complete deployment instructions for Reactive Resume v5 with AI integration on the Calypso server. The deployment includes Ollama for local AI features and maintains compatibility with the existing v4 configuration.

- **Deployment Date**: 2026-02-16
- **Status**: ✅ PRODUCTION READY
- **External URL**: https://rx.vish.gg
- **AI Model**: llama3.2:3b (2GB)
## 🏗️ Architecture

```
Internet (YOUR_WAN_IP)
    ↓ Port 80/443
Router (Port Forwarding)
    ↓ 80→8880, 443→8443
Nginx Proxy Manager (Calypso:8880/8443)
    ↓ Proxy to internal services
Reactive Resume v5 Stack (Calypso:9751)
    ├── Resume-ACCESS-V5 (Main App)
    ├── Resume-DB-V5 (PostgreSQL 18)
    ├── Resume-BROWSERLESS-V5 (PDF Gen)
    ├── Resume-SEAWEEDFS-V5 (S3 Storage)
    └── Resume-OLLAMA-V5 (AI Engine)
```
## 🚀 Quick Deployment

### Prerequisites
1. **Router Configuration**: Port forwarding 80→8880, 443→8443
2. **DNS**: rx.vish.gg pointing to YOUR_WAN_IP
3. **SSH Access**: To the Calypso server (192.168.0.250:62000)

### Deploy Everything
```bash
# Clone the repo (if not already done)
git clone https://git.vish.gg/Vish/homelab.git
cd homelab/Calypso

# Deploy NPM first (infrastructure)
cd nginx_proxy_manager
./deploy.sh deploy

# Deploy Reactive Resume v5 with AI
cd ../reactive_resume_v5
./deploy.sh deploy
```
## 🤖 AI Integration Details

### Ollama Configuration
- **Model**: `llama3.2:3b`
- **Size**: ~2GB download
- **Purpose**: Resume assistance, content generation
- **API Endpoint**: `http://ollama:11434` (internal)
- **External API**: `http://192.168.0.250:11434`

### AI Features in Reactive Resume v5
1. **Resume Content Suggestions**: AI-powered content recommendations
2. **Job Description Analysis**: Match skills to job requirements
3. **Skills Optimization**: Suggest relevant skills based on experience
4. **Cover Letter Generation**: AI-assisted cover letter writing

### Model Performance
- **Speed**: Fast inference on CPU (3B parameters)
- **Quality**: Good for resume/professional content
- **Memory**: ~4GB RAM usage during inference
- **Offline**: Fully local, no external API calls
## 📁 Directory Structure

```
homelab/Calypso/
├── reactive_resume_v5/
│   ├── docker-compose.yml   # Main stack definition
│   ├── deploy.sh            # GitOps deployment script
│   ├── README.md            # Service documentation
│   └── MIGRATION.md         # v4 to v5 migration notes
├── nginx_proxy_manager/
│   ├── docker-compose.yml   # NPM configuration
│   ├── deploy.sh            # NPM deployment script
│   └── README.md            # NPM documentation
└── DEPLOYMENT_SUMMARY.md    # This deployment overview
```
## 🔧 Configuration Details

### Environment Variables (Reactive Resume)
```yaml
# Core Configuration
APP_URL: "https://rx.vish.gg"
NODE_ENV: "production"
PORT: "3000"

# Database
DATABASE_URL: "postgresql://resumeuser:REDACTED_PASSWORD@resume-db:5432/resume"

# AI Integration
AI_PROVIDER: "ollama"
OLLAMA_URL: "http://ollama:11434"
OLLAMA_MODEL: "llama3.2:3b"

# Storage (S3-compatible)
S3_ENDPOINT: "http://seaweedfs:8333"
S3_BUCKET: "reactive-resume"
S3_ACCESS_KEY_ID: "seaweedfs"
S3_SECRET_ACCESS_KEY: "seaweedfs"

# PDF Generation
PRINTER_ENDPOINT: "ws://browserless:3000?token=1234567890"

# SMTP (Gmail)
SMTP_HOST: "smtp.gmail.com"
SMTP_PORT: "465"
SMTP_USER: "your-email@example.com"
SMTP_PASS: "REDACTED_PASSWORD"
SMTP_SECURE: "true"
```
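A deployment can fail quietly when one of these variables is empty. A pre-flight sketch (an assumed helper, not part of deploy.sh) that checks a few of the keys above before bringing the stack up:

```bash
# Pre-flight sketch: count required variables (from the environment above)
# that are unset or empty before starting the stack
required_vars=(APP_URL DATABASE_URL OLLAMA_URL S3_ENDPOINT PRINTER_ENDPOINT)
missing=0
for var in "${required_vars[@]}"; do
  if [ -z "${!var:-}" ]; then
    echo "missing: $var"
    missing=$((missing + 1))
  fi
done
echo "$missing required variable(s) missing"
```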
### Port Mapping
```yaml
Services:
  - Resume-ACCESS-V5: 9751:3000        # Main application
  - Resume-OLLAMA-V5: 11434:11434      # AI API
  - Resume-SEAWEEDFS-V5: 9753:8333     # S3 API (download service)
  - Resume-BROWSERLESS-V5: 4000:3000   # PDF generation
  - nginx-proxy-manager: 8880:80, 8443:443, 81:81
```
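Each entry above follows Docker's `HOST:CONTAINER` publish syntax; splitting one mapping with shell parameter expansion makes the direction explicit:

```bash
# 9751:3000 publishes container port 3000 on host port 9751
mapping="9751:3000"
host_port="${mapping%%:*}"       # part before the colon (host side)
container_port="${mapping##*:}"  # part after the colon (container side)
echo "host ${host_port} -> container ${container_port}"
# prints: host 9751 -> container 3000
```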
## 🛠️ Management Commands

### Reactive Resume v5
```bash
cd homelab/Calypso/reactive_resume_v5

# Deployment
./deploy.sh deploy        # Full deployment
./deploy.sh setup-ollama  # Setup AI model only

# Management
./deploy.sh status   # Check all services
./deploy.sh logs     # View application logs
./deploy.sh restart  # Restart services
./deploy.sh stop     # Stop all services
./deploy.sh update   # Update images and redeploy
```
### Nginx Proxy Manager
```bash
cd homelab/Calypso/nginx_proxy_manager

# Deployment
./deploy.sh deploy   # Deploy NPM
./deploy.sh cleanup  # Clean up broken containers

# Management
./deploy.sh status   # Check NPM status
./deploy.sh logs     # View NPM logs
./deploy.sh restart  # Restart NPM
```
## 🌐 Network Configuration

### Router Port Forwarding
Configure your router to forward:
- **Port 80** → **192.168.0.250:8880** (HTTP)
- **Port 443** → **192.168.0.250:8443** (HTTPS)

### NPM Proxy Host Configuration
In the NPM Admin UI (http://192.168.0.250:81):

1. **rx.vish.gg**:
   - Forward Hostname/IP: `192.168.0.250`
   - Forward Port: `9751`
   - Enable SSL with Cloudflare Origin Certificate

2. **rxdl.vish.gg** (Download Service):
   - Forward Hostname/IP: `192.168.0.250`
   - Forward Port: `9753`
   - Enable SSL with Cloudflare Origin Certificate
## 🔍 Troubleshooting

### AI Features Not Working
```bash
# Check Ollama service
./deploy.sh logs | grep ollama

# Verify model is loaded
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"

# Test AI API directly
curl http://192.168.0.250:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Write a professional summary for a software engineer",
  "stream": false
}'
```
### External Access Issues
```bash
# Test DNS resolution
nslookup rx.vish.gg

# Test external connectivity
curl -I https://rx.vish.gg

# Check NPM proxy configuration
./deploy.sh status
```
### Service Health Check
```bash
# Check all containers
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker ps"

# Check specific service logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-ACCESS-V5"
```
## 📊 Performance Metrics

### Resource Usage (Typical)
- **CPU**: 2-4 cores during AI inference
- **RAM**: 6-8GB total (4GB for Ollama + 2-4GB for other services)
- **Storage**: ~15GB (2GB model + 3GB images + data)
- **Network**: Minimal (all AI processing local)

### Response Times
- **App Load**: <2s
- **AI Suggestions**: 3-10s (depending on prompt complexity)
- **PDF Generation**: 2-5s
- **File Upload**: <1s (local S3)
## 🔐 Security Considerations

### Access Control
- All services behind NPM reverse proxy
- External access only via HTTPS
- AI processing completely local (no data leaves network)
- Database credentials environment-specific

### SSL/TLS
- Cloudflare Origin Certificates in NPM
- End-to-end encryption for external access
- Internal services use HTTP (behind firewall)
## 🔄 Backup & Recovery

### Critical Data Locations
```bash
# Database backup
/volume1/docker/rxv5/db/

# File storage backup
/volume1/docker/rxv5/seaweedfs/

# AI model data
/volume1/docker/rxv5/ollama/

# NPM configuration
/volume1/docker/nginx-proxy-manager/data/
```
### Backup Commands
```bash
# Create backup
ssh Vish@192.168.0.250 -p 62000 "sudo tar -czf /volume1/backups/rxv5-$(date +%Y%m%d).tar.gz /volume1/docker/rxv5/"

# Restore from backup
ssh Vish@192.168.0.250 -p 62000 "sudo tar -xzf /volume1/backups/rxv5-YYYYMMDD.tar.gz -C /"
```
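The archive name embeds the current date; a local dry run of just the name generation (no SSH involved) shows the format the restore command expects in place of `YYYYMMDD`:

```bash
# Compute today's backup archive name locally, exactly as the remote
# command does with $(date +%Y%m%d)
backup_name="rxv5-$(date +%Y%m%d).tar.gz"
echo "$backup_name"
```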
## 📈 Monitoring

### Health Endpoints
- **Application**: http://192.168.0.250:9751/health
- **Database**: PostgreSQL health checks via Docker
- **AI Service**: http://192.168.0.250:11434/api/tags
- **Storage**: SeaweedFS S3 API health

### Log Locations
```bash
# Application logs
sudo /usr/local/bin/docker logs Resume-ACCESS-V5

# AI service logs
sudo /usr/local/bin/docker logs Resume-OLLAMA-V5

# Database logs
sudo /usr/local/bin/docker logs Resume-DB-V5
```
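The health endpoints above can be probed with a small helper. A sketch (assumed function name, not in the repo) that reports up/down for one URL, meant to be run from inside the LAN:

```bash
# Sketch: probe a single health endpoint and report OK/FAIL
check_health() {
  if curl -fsS --max-time 5 "$1" >/dev/null 2>&1; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}

# Usage (from the LAN):
#   check_health http://192.168.0.250:9751/health
#   check_health http://192.168.0.250:11434/api/tags
```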
## 🎉 Success Criteria

✅ **External Access**: https://rx.vish.gg responds with 200
✅ **AI Integration**: Ollama model loaded and responding
✅ **PDF Generation**: Browserless service healthy
✅ **File Storage**: SeaweedFS S3 API functional
✅ **Database**: PostgreSQL healthy and accessible
✅ **Proxy**: NPM routing traffic correctly
## 📞 Support

For issues with this deployment:
1. Check service status: `./deploy.sh status`
2. Review logs: `./deploy.sh logs`
3. Verify network connectivity and DNS
4. Ensure router port forwarding is correct
5. Check NPM proxy host configuration

---

**Last Updated**: 2026-02-16
**Deployed By**: OpenHands GitOps
**Version**: Reactive Resume v5.0.9 + Ollama llama3.2:3b

hosts/synology/calypso/actualbudget.yml (new file, 31 lines)

# Actual Budget - Personal finance
# Port: 8304 (host) → 5006 (container)
# URL: https://actual.vish.gg
# Local-first personal budgeting app
# SSO: Authentik OIDC (sso.vish.gg/application/o/actual-budget/)
version: "3.8"

services:
  actual_server:
    image: actualbudget/actual-server:latest
    container_name: Actual
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: timeout 10s bash -c ':> /dev/tcp/127.0.0.1/5006' || exit 1
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 90s
    ports:
      - "8304:5006"
    volumes:
      - /volume1/docker/actual:/data:rw
    environment:
      # Authentik OIDC SSO; login method not set, so password login remains as a fallback
      ACTUAL_OPENID_DISCOVERY_URL: https://sso.vish.gg/application/o/actual-budget/.well-known/openid-configuration
      ACTUAL_OPENID_CLIENT_ID: actual-budget
      ACTUAL_OPENID_CLIENT_SECRET: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
      ACTUAL_OPENID_SERVER_HOSTNAME: https://actual.vish.gg
      ACTUAL_USER_CREATION_MODE: login
    restart: on-failure:5

hosts/synology/calypso/adguard.yaml (new file, 19 lines)

# AdGuard Home - DNS ad blocker
# Port: 3000 (web), 53 (DNS)
# Network-wide ad blocking via DNS

services:
  adguard:
    image: adguard/adguardhome
    container_name: AdGuard
    mem_limit: 2g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    restart: on-failure:5
    network_mode: host
    volumes:
      - /volume1/docker/adguard/config:/opt/adguardhome/conf:rw
      - /volume1/docker/adguard/data:/opt/adguardhome/work:rw
    environment:
      TZ: America/Los_Angeles

hosts/synology/calypso/apt-cacher-ng/acng.conf (new file, 7 lines)

# Basic config
CacheDir: /var/cache/apt-cacher-ng
LogDir: /var/log/apt-cacher-ng
Port: 3142

# Crucial for HTTPS repositories
PassThroughPattern: .*

hosts/synology/calypso/apt-cacher-ng/apt-cacher-ng.yml (new file, 23 lines)

# APT Cacher NG - Package cache
# Port: 3142
# Caching proxy for Debian packages

version: "3.8"

services:
  apt-cacher-ng:
    image: sameersbn/apt-cacher-ng:latest
    container_name: apt-cacher-ng
    restart: unless-stopped
    ports:
      - "3142:3142"
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /volume1/docker/apt-cacher-ng/cache:/var/cache/apt-cacher-ng
      - /volume1/docker/apt-cacher-ng/log:/var/log/apt-cacher-ng
      - /volume1/docker/apt-cacher-ng/config:/etc/apt-cacher-ng
    dns:
      - 1.1.1.1
      - 8.8.8.8
    network_mode: bridge

hosts/synology/calypso/arr-suite-wip.yaml (new file, 215 lines)

# Arr Suite WIP - Media automation
# Work-in-progress Arr stack configuration

version: '3.8'

services:
  tautulli:
    image: linuxserver/tautulli:latest
    container_name: tautulli
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/tautulli:/config
    ports:
      - 8181:8181/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  linuxserver-prowlarr:
    image: linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/prowlarr:/config
    ports:
      - 9696:9696/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  flaresolverr:
    image: flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - TZ=America/Los_Angeles
    ports:
      - 8191:8191
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  sabnzbd:
    image: linuxserver/sabnzbd:latest
    container_name: sabnzbd
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - HOST_WHITELIST=synobridge,192.168.0.1/24,127.0.0.1
      - LOCAL_RANGES=synobridge,192.168.0.1/24
    volumes:
      - /volume1/docker2/sabnzbd:/config
      - /volume1/data/usenet:/data/usenet
    ports:
      - 25000:8080/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  sonarr:
    image: linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/sonarr:/config
      - /volume1/data:/data
    ports:
      - 8989:8989/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  lidarr:
    image: linuxserver/lidarr:latest
    container_name: lidarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/lidarr:/config
      - /volume1/data:/data
    ports:
      - 8686:8686/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  radarr:
    image: linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/radarr:/config
      - /volume1/data:/data
    ports:
      - 7878:7878/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  readarr:
    image: linuxserver/readarr:develop
    container_name: readarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/readarr:/config
      - /volume1/data:/data
    ports:
      - 8787:8787/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  bazarr:
    image: linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/bazarr:/config
      - /volume1/data:/data
    ports:
      - 6767:6767/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  whisparr:
    image: hotio/whisparr:nightly
    container_name: whisparr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
    volumes:
      - /volume1/docker2/whisparr:/config
      - /volume1/data:/data
    ports:
      - 6969:6969/tcp
    network_mode: synobridge
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  plex:
    image: linuxserver/plex:latest
    container_name: plex
    network_mode: host
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - VERSION=docker
      - PLEX_CLAIM=
    volumes:
      - /volume1/docker2/plex:/config
      - /volume1/data/media:/data/media
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    container_name: jellyseerr
    user: 1027:65536
    environment:
      - TZ=America/Los_Angeles
    volumes:
      - /volume1/docker2/jellyseerr:/app/config
    ports:
      - 5055:5055/tcp
    network_mode: synobridge
    dns:
      - 9.9.9.9
      - 1.1.1.1
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

hosts/synology/calypso/arr_suite_with_dracula.yml (new file, 299 lines)

# Arr Suite - Media automation stack
# Services: Sonarr, Radarr, Prowlarr, Bazarr, Lidarr, Readarr, Whisparr,
#           Tautulli, SABnzbd, Plex, Jellyseerr, Flaresolverr
# Manages TV shows, movies, music, books downloads and organization
#
# Theming: Self-hosted theme.park (Dracula theme) on Atlantis
# - TP_DOMAIN uses Atlantis LAN IP to reach theme-park container
# - Theme-park stack: Atlantis/theme-park/theme-park.yaml
# Updated: February 16, 2026
version: "3.8"

x-themepark: &themepark
  TP_SCHEME: "http"
  TP_DOMAIN: "192.168.0.200:8580"
  TP_THEME: "dracula"

networks:
  media_net:
    driver: bridge
    name: media_net
    ipam:
      config:
        - subnet: 172.23.0.0/24
          gateway: 172.23.0.1

services:
  tautulli:
    image: linuxserver/tautulli:latest
    container_name: tautulli
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:tautulli
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/tautulli:/config
    ports:
      - 8181:8181/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.6
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  prowlarr:
    image: linuxserver/prowlarr:latest
    container_name: prowlarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:prowlarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/prowlarr:/config
    ports:
      - 9696:9696/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.5
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  flaresolverr:
    image: flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - TZ=America/Los_Angeles
    ports:
      - 8191:8191
    networks:
      media_net:
        ipv4_address: 172.23.0.3
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  sabnzbd:
    image: linuxserver/sabnzbd:latest
    container_name: sabnzbd
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - HOST_WHITELIST=172.23.0.0/24,192.168.0.0/24,127.0.0.1
      - LOCAL_RANGES=172.23.0.0/24,192.168.0.0/24
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:sabnzbd
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/sabnzbd:/config
      - /volume1/data/usenet:/data/usenet
    ports:
      - 25000:8080/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.7
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  sonarr:
    image: linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:sonarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/sonarr:/config
      - /volume1/data:/data
    ports:
      - 8989:8989/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.12
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  lidarr:
    image: linuxserver/lidarr:latest
    container_name: lidarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:lidarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/lidarr:/config
      - /volume1/data:/data
    ports:
      - 8686:8686/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.8
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  radarr:
    image: linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:radarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/radarr:/config
      - /volume1/data:/data
    ports:
      - 7878:7878/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.10
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  readarr:
    image: lscr.io/linuxserver/readarr:0.4.19-nightly
    container_name: readarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:readarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/readarr:/config
      - /volume1/data:/data
    ports:
      - 8787:8787/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.4
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  bazarr:
    image: linuxserver/bazarr:latest
    container_name: bazarr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:bazarr
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/bazarr:/config
      - /volume1/data:/data
    ports:
      - 6767:6767/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.9
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  whisparr:
    image: ghcr.io/hotio/whisparr:latest
    container_name: whisparr
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - TP_HOTIO=true
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/whisparr:/config
      - /volume1/data:/data
    ports:
      - 6969:6969/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.2
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  plex:
    image: linuxserver/plex:latest
    container_name: plex
    network_mode: host
    environment:
      - PUID=1027
      - PGID=65536
      - TZ=America/Los_Angeles
      - UMASK=022
      - VERSION=docker
      - PLEX_CLAIM=
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:plex
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker2/plex:/config
      - /volume1/data/media:/data/media
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped

  jellyseerr:
    image: fallenbagel/jellyseerr:latest
    container_name: jellyseerr
    user: "1027:65536"
    environment:
      - TZ=America/Los_Angeles
      # Note: Jellyseerr theming requires CSS injection via reverse proxy;
      # theme.park doesn't support DOCKER_MODS for non-linuxserver images
    volumes:
      - /volume1/docker2/jellyseerr:/app/config
    ports:
      - 5055:5055/tcp
    networks:
      media_net:
        ipv4_address: 172.23.0.11
    dns:
      - 9.9.9.9
      - 1.1.1.1
    security_opt:
      - no-new-privileges:true
    restart: unless-stopped
14
hosts/synology/calypso/authentik/.env.example
Normal file
@@ -0,0 +1,14 @@
# Authentik Environment Variables
# Copy to .env in Portainer or set in stack environment variables

# Secret key - CHANGE THIS! Generate with: openssl rand -base64 36
AUTHENTIK_SECRET_KEY=REDACTED_SECRET_KEY

# PostgreSQL password - CHANGE THIS! Generate with: openssl rand -base64 32
PG_PASS=REDACTED_PASSWORD

# Gmail SMTP (using App Password)
# Generate app password at: https://myaccount.google.com/apppasswords
SMTP_USER=your.email@gmail.com
SMTP_PASS=REDACTED_SMTP_PASSWORD
SMTP_FROM=user@example.com
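The two `openssl rand -base64` commands referenced in the comments above can also be reproduced with Python's standard library when openssl isn't available — a minimal sketch, not part of the repo:

```python
# Stdlib equivalent of `openssl rand -base64 N`: N random bytes, base64-encoded.
import base64
import secrets


def gen_secret(nbytes: int = 36) -> str:
    """Return a base64 string of `nbytes` cryptographically random bytes."""
    return base64.b64encode(secrets.token_bytes(nbytes)).decode()


print(gen_secret())    # 48 chars -> AUTHENTIK_SECRET_KEY
print(gen_secret(32))  # 44 chars -> PG_PASS
```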
115
hosts/synology/calypso/authentik/docker-compose.yaml
Normal file
@@ -0,0 +1,115 @@
# Authentik - Identity Provider / SSO
# Docs: https://docs.goauthentik.io/
# Deployed to: Calypso (DS723+)
# Domain: sso.vish.gg
#
# DISASTER RECOVERY:
# - Database: /volume1/docker/authentik/database (PostgreSQL)
# - Media: /volume1/docker/authentik/media (uploaded files, icons)
# - Certs: /volume1/docker/authentik/certs (custom certificates)
# - Templates: /volume1/docker/authentik/templates (custom email templates)
#
# INITIAL SETUP:
# 1. Deploy stack via Portainer
# 2. Access https://sso.vish.gg/if/flow/initial-setup/
# 3. Create admin account (akadmin)
# 4. Configure providers for each service

version: '3.8'

services:
  authentik-db:
    image: docker.io/library/postgres:16-alpine
    container_name: Authentik-DB
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 5s
    volumes:
      - /volume1/docker/authentik/database:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: "REDACTED_PASSWORD"
      POSTGRES_USER: authentik
      POSTGRES_DB: authentik

  authentik-redis:
    image: docker.io/library/redis:alpine
    container_name: Authentik-REDIS
    command: --save 60 1 --loglevel warning
    restart: unless-stopped
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping | grep PONG"]
      start_period: 20s
      interval: 30s
      retries: 5
      timeout: 3s
    volumes:
      - /volume1/docker/authentik/redis:/data

  authentik-server:
    image: ghcr.io/goauthentik/server:2026.2.1
    container_name: Authentik-SERVER
    restart: unless-stopped
    command: server
    environment:
      AUTHENTIK_SECRET_KEY: "REDACTED_SECRET_KEY"
      AUTHENTIK_REDIS__HOST: authentik-redis
      AUTHENTIK_POSTGRESQL__HOST: authentik-db
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: "REDACTED_PASSWORD"
      # Email configuration (Gmail)
      AUTHENTIK_EMAIL__HOST: smtp.gmail.com
      AUTHENTIK_EMAIL__PORT: 587
      AUTHENTIK_EMAIL__USERNAME: your-email@example.com
      AUTHENTIK_EMAIL__PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
      AUTHENTIK_EMAIL__USE_TLS: "true"
      AUTHENTIK_EMAIL__FROM: sso@vish.gg
    volumes:
      - /volume1/docker/authentik/media:/media
      - /volume1/docker/authentik/templates:/templates
    ports:
      - "9000:9000" # HTTP
      - "9443:9443" # HTTPS
    depends_on:
      authentik-db:
        condition: service_healthy
      authentik-redis:
        condition: service_healthy

  authentik-worker:
    image: ghcr.io/goauthentik/server:2026.2.1
    container_name: Authentik-WORKER
    restart: unless-stopped
    command: worker
    environment:
      AUTHENTIK_SECRET_KEY: "REDACTED_SECRET_KEY"
      AUTHENTIK_REDIS__HOST: authentik-redis
      AUTHENTIK_POSTGRESQL__HOST: authentik-db
      AUTHENTIK_POSTGRESQL__USER: authentik
      AUTHENTIK_POSTGRESQL__NAME: authentik
      AUTHENTIK_POSTGRESQL__PASSWORD: "REDACTED_PASSWORD"
      # Email configuration (Gmail)
      AUTHENTIK_EMAIL__HOST: smtp.gmail.com
      AUTHENTIK_EMAIL__PORT: 587
      AUTHENTIK_EMAIL__USERNAME: your-email@example.com
      AUTHENTIK_EMAIL__PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
      AUTHENTIK_EMAIL__USE_TLS: "true"
      AUTHENTIK_EMAIL__FROM: sso@vish.gg
    # This is optional, and can be removed. If you remove this, the following will happen:
    # - The permissions for the /media folders aren't fixed, so make sure they are 1000:1000
    # - The docker socket can't be accessed anymore
    user: root
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /volume1/docker/authentik/media:/media
      - /volume1/docker/authentik/certs:/certs
      - /volume1/docker/authentik/templates:/templates
    depends_on:
      authentik-db:
        condition: service_healthy
      authentik-redis:
        condition: service_healthy
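authentik's double-underscore environment variables (e.g. `AUTHENTIK_POSTGRESQL__HOST`) map onto nested configuration keys: the `AUTHENTIK_` prefix is stripped and each `__` descends one level. A small illustrative sketch of that mapping — the `env_to_nested` helper is hypothetical, not authentik code:

```python
# Sketch of authentik's documented env-var convention:
# AUTHENTIK_<SECTION>__<KEY> -> config[section][key]
def env_to_nested(env: dict) -> dict:
    cfg: dict = {}
    for key, value in env.items():
        if not key.startswith("AUTHENTIK_"):
            continue  # unrelated variable
        parts = key[len("AUTHENTIK_"):].lower().split("__")
        node = cfg
        for part in parts[:-1]:
            node = node.setdefault(part, {})  # descend/create one level per "__"
        node[parts[-1]] = value
    return cfg


cfg = env_to_nested({
    "AUTHENTIK_POSTGRESQL__HOST": "authentik-db",
    "AUTHENTIK_EMAIL__USE_TLS": "true",
})
print(cfg)  # {'postgresql': {'host': 'authentik-db'}, 'email': {'use_tls': 'true'}}
```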
35
hosts/synology/calypso/derpmap.yaml
Normal file
@@ -0,0 +1,35 @@
regions:
  900:
    regionid: 900
    regioncode: home-cal
    regionname: "Home - Calypso"
    nodes:
      - name: 900a
        regionid: 900
        hostname: headscale.vish.gg
        derpport: 8443
        stunport: -1
        ipv4: 184.23.52.14
  901:
    regionid: 901
    regioncode: sea
    regionname: "Seattle VPS"
    nodes:
      - name: 901a
        regionid: 901
        hostname: derp-sea.vish.gg
        derpport: 8444
        stunport: 3478
        ipv4: YOUR_WAN_IP
        ipv6: "2605:a141:2207:6105::1"
  902:
    regionid: 902
    regioncode: home-atl
    regionname: "Home - Atlantis"
    nodes:
      - name: 902a
        regionid: 902
        hostname: derp-atl.vish.gg
        derpport: 8445
        stunport: 3480
        ipv4: 184.23.52.14
28
hosts/synology/calypso/diun.yaml
Normal file
@@ -0,0 +1,28 @@
# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun

services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - diun-data:/data
    environment:
      LOG_LEVEL: info
      DIUN_WATCH_WORKERS: "20"
      DIUN_WATCH_SCHEDULE: "0 9 * * 1"
      DIUN_WATCH_JITTER: 30s
      DIUN_PROVIDERS_DOCKER: "true"
      DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
      DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
      DIUN_NOTIF_NTFY_TOPIC: "diun"
    restart: unless-stopped

volumes:
  diun-data:
16
hosts/synology/calypso/dozzle-agent.yaml
Normal file
@@ -0,0 +1,16 @@
# Updated: 2026-03-11
services:
  dozzle-agent:
    image: amir20/dozzle:latest
    container_name: dozzle-agent
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "7007:7007"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/dozzle", "healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3
96
hosts/synology/calypso/firefly/firefly.yaml
Normal file
@@ -0,0 +1,96 @@
# Firefly III - Finance manager
# Port: 8080
# Personal finance manager

services:
  redis:
    image: redis
    container_name: Firefly-REDIS
    hostname: firefly-redis
    mem_limit: 256m
    mem_reservation: 50m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/firefly/redis:/data:rw
    environment:
      TZ: America/Los_Angeles
    restart: on-failure:5

  importer:
    image: fireflyiii/data-importer:latest
    container_name: Firefly-Importer
    hostname: firefly-importer
    security_opt:
      - no-new-privileges:false
    volumes:
      - /volume1/docker/firefly/importer:/var/www/html/storage/upload:rw
    ports:
      - 6192:8080
    restart: on-failure:5
    depends_on:
      firefly:
        condition: service_healthy

  db:
    image: mariadb:11.4-noble # LTS (Long Term Support) until May 29, 2029
    container_name: Firefly-DB
    hostname: firefly-db
    mem_limit: 512m
    mem_reservation: 128m
    cpu_shares: 768
    security_opt:
      - no-new-privileges:false
    volumes:
      - /volume1/docker/firefly/db:/var/lib/mysql:rw
    environment:
      TZ: America/Los_Angeles
      MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
      MYSQL_DATABASE: firefly
      MYSQL_USER: fireflyuser
      MYSQL_PASSWORD: "REDACTED_PASSWORD"
    restart: on-failure:5

  firefly:
    image: fireflyiii/core:latest
    container_name: Firefly
    hostname: firefly
    mem_limit: 1g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: curl -f http://localhost:8080/ || exit 1
    env_file:
      - stack.env
    volumes:
      - /volume1/docker/firefly/upload:/var/www/html/storage/upload:rw
    ports:
      - 6182:8080
    restart: on-failure:5
    depends_on:
      db:
        condition: service_started
      redis:
        condition: service_healthy

  cron:
    image: alpine:latest
    command: sh -c "echo \"0 3 * * * wget -qO- http://firefly:8080/api/v1/cron/9610001d2871a8622ea5bf5e65fe25db\" | crontab - && crond -f -L /dev/stdout"
    container_name: Firefly-Cron
    hostname: firefly-cron
    mem_limit: 64m
    cpu_shares: 256
    security_opt:
      - no-new-privileges:true
    environment:
      TZ: America/Los_Angeles
    restart: on-failure:5
    depends_on:
      firefly:
        condition: service_started
12
hosts/synology/calypso/fstab.mounts
Normal file
@@ -0,0 +1,12 @@
# SMB shares exported by Calypso (100.103.48.78) - Synology DS723+
# Accessible via Tailscale only (LAN IP varies / not pinned for other hosts)
# Credentials: username=Vish (capital V), password="REDACTED_PASSWORD"
#
# Mounted on homelab-vm at /mnt/calypso_*

//100.103.48.78/data /mnt/calypso_data cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
//100.103.48.78/docker /mnt/calypso_docker cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
//100.103.48.78/docker2 /mnt/calypso_docker2 cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
//100.103.48.78/dropboxsync /mnt/calypso_dropboxsync cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
//100.103.48.78/Files /mnt/calypso_files cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
//100.103.48.78/netshare /mnt/calypso_netshare cifs credentials=/etc/samba/.calypso_credentials,vers=3.0,_netdev,nofail 0 0
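The `credentials=/etc/samba/.calypso_credentials` option in these entries expects a `username=`/`password=` file readable only by its owner. A sketch of creating such a file with owner-only permissions — the path and password below are placeholders, not the real values:

```python
# Create a CIFS credentials file with 0600 permissions in one step,
# so the plaintext password is never briefly world-readable.
import os
import stat


def write_credentials(path: str, username: str, password: str) -> None:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    with os.fdopen(fd, "w") as f:
        f.write(f"username={username}\npassword={password}\n")


write_credentials("/tmp/.calypso_credentials", "Vish", "CHANGE_ME")
mode = stat.S_IMODE(os.stat("/tmp/.calypso_credentials").st_mode)
print(oct(mode))  # shows the file mode, e.g. 0o600
```

After moving the file to `/etc/samba/` (and replacing the placeholder password), the fstab lines above can reference it directly.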
33
hosts/synology/calypso/gitea-runner.yaml
Normal file
@@ -0,0 +1,33 @@
# Gitea Actions Runner for Calypso
# This runner enables CI/CD workflows for git.vish.gg
#
# IMPORTANT: The GITEA_RUNNER_TOKEN env var must be set in the Portainer stack env
# (or as a Docker secret) before deploying. Get a token from:
# https://git.vish.gg/-/admin/runners (site-level, admin only)
# or per-repo: https://git.vish.gg/Vish/homelab/settings/actions/runners
#
# If the runner gets stuck in a registration loop ("runner registration token not found"),
# the token has expired or the Gitea instance was updated. Get a new token and recreate:
# docker stop gitea-runner && docker rm gitea-runner
# docker run -d --name gitea-runner ... -e GITEA_RUNNER_REGISTRATION_TOKEN=<new-token> ...
# Or redeploy this stack with the updated GITEA_RUNNER_TOKEN env var in Portainer.

version: "3"
services:
  gitea-runner:
    image: gitea/act_runner:latest
    container_name: gitea-runner
    restart: unless-stopped
    env_file:
      - /volume1/docker/gitea-runner/stack.env # contains GITEA_RUNNER_TOKEN=<token>
    environment:
      - GITEA_INSTANCE_URL=https://git.vish.gg
      - GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_TOKEN:-CHANGE_ME}
      - GITEA_RUNNER_NAME=calypso-runner
      - GITEA_RUNNER_LABELS=ubuntu-latest:docker://node:20-bookworm,ubuntu-22.04:docker://ubuntu:22.04,python:docker://python:3.11
    volumes:
      - gitea-runner-data:/data
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  gitea-runner-data:
55
hosts/synology/calypso/gitea-server.yaml
Normal file
@@ -0,0 +1,55 @@
# Gitea - Git server
# Port: 3000
# Lightweight self-hosted Git service

services:
  db:
    image: postgres:16-bookworm
    container_name: Gitea-DB
    hostname: gitea-db
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "gitea", "-U", "giteauser"]
      timeout: 45s
      interval: 10s
      retries: 10
    user: 1026:100
    volumes:
      - /volume1/docker/gitea/db:/var/lib/postgresql/data:rw
    environment:
      - POSTGRES_DB=gitea
      - POSTGRES_USER=giteauser
      - POSTGRES_PASSWORD="REDACTED_PASSWORD"
    restart: unless-stopped

  web:
    image: gitea/gitea:latest
    container_name: Gitea
    hostname: gitea
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:3000/ || exit 1
    ports:
      - 3052:3000
      - 2222:22
    volumes:
      - /volume1/docker/gitea/data:/data
      - /etc/TZ:/etc/TZ:ro
      - /etc/localtime:/etc/localtime:ro
    environment:
      - USER_UID=1026
      - USER_GID=100
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=giteauser
      - GITEA__database__PASSWD="REDACTED_PASSWORD"
      - ROOT_URL=https://git.vish.gg
      # Authentik OAuth2 SSO Configuration
      - GITEA__oauth2_client__ENABLE_AUTO_REGISTRATION=true
      - GITEA__oauth2_client__ACCOUNT_LINKING=auto
      - GITEA__oauth2_client__UPDATE_AVATAR=true
      - GITEA__oauth2_client__OPENID_CONNECT_SCOPES=openid email profile
    restart: unless-stopped
68
hosts/synology/calypso/grafana_prometheus/prometheus.yml
Normal file
@@ -0,0 +1,68 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: prometheus
    scrape_interval: 30s
    static_configs:
      - targets: ['localhost:9090']
        labels:
          group: 'prometheus'

  - job_name: watchtower-docker
    scrape_interval: 10m
    metrics_path: /v1/metrics
    bearer_token: "REDACTED_TOKEN" # pragma: allowlist secret
    static_configs:
      - targets: ['watchtower:8080']

  - job_name: node-docker
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus-node:9100']

  - job_name: cadvisor-docker
    scrape_interval: 5s
    static_configs:
      - targets: ['prometheus-cadvisor:8080']

  - job_name: snmp-docker
    scrape_interval: 5s
    metrics_path: /snmp
    params:
      module: [synology]
      auth: [snmpv3]
    static_configs:
      - targets: ['192.168.0.250']
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: prometheus-snmp:9116

  - job_name: blackbox
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://google.com
          - https://1.1.1.1
          - http://192.168.0.1
        labels:
          group: external-probes
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: blackbox-exporter:9115

  - job_name: speedtest
    scrape_interval: 15m
    scrape_timeout: 90s # extended timeout for long-running speedtests
    static_configs:
      - targets: ['speedtest-exporter:9798']
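The `relabel_configs` in the `snmp-docker` and `blackbox` jobs follow the standard exporter-proxy pattern: copy the configured target into the `target` query parameter, keep it as the `instance` label, then point `__address__` at the exporter itself so Prometheus scrapes the exporter rather than the probed host. A toy simulation of those three rules — the `relabel_blackbox` helper is hypothetical, not Prometheus code:

```python
# Simulate the three relabel rules from the blackbox job above.
def relabel_blackbox(target: str, exporter: str = "blackbox-exporter:9115") -> dict:
    labels = {"__address__": target}
    # 1) source_labels: [__address__] -> target_label: __param_target
    labels["__param_target"] = labels["__address__"]
    # 2) source_labels: [__param_target] -> target_label: instance
    labels["instance"] = labels["__param_target"]
    # 3) target_label: __address__ -> replacement: the exporter
    labels["__address__"] = exporter
    return labels


labels = relabel_blackbox("https://google.com")
print(labels["__address__"])  # the scrape actually goes to the exporter
print(labels["instance"])     # the metrics keep the probed URL as instance
```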
938
hosts/synology/calypso/grafana_prometheus/snmp.yml
Normal file
@@ -0,0 +1,938 @@
auths:
  snmpv3:
    version: 3
    security_level: authPriv
    auth_protocol: MD5
    username: snmp-exporter
    password: "REDACTED_PASSWORD" # pragma: allowlist secret
    priv_protocol: DES
    priv_password: "REDACTED_PASSWORD" # pragma: allowlist secret
modules:
  synology:
    walk:
      - 1.3.6.1.2.1.2 # network
      - 1.3.6.1.2.1.31.1.1 # The total number received/transmitted of the interface
      - 1.3.6.1.4.1.6574.1 # displays all system statuses
      - 1.3.6.1.4.1.6574.2 # information regarding hard drives e.g. Temperature
      - 1.3.6.1.4.1.6574.3 # monitoring RAID status
      - 1.3.6.1.4.1.6574.6 # the number of users logging in
    metrics:
      - name: ifNumber
        oid: 1.3.6.1.2.1.2.1
        type: gauge
        help: The number of network interfaces (regardless of their current state) present
          on this system. - 1.3.6.1.2.1.2.1
      - name: ifIndex
        oid: 1.3.6.1.2.1.2.2.1.1
        type: gauge
        help: A unique value, greater than zero, for each interface - 1.3.6.1.2.1.2.2.1.1
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifDescr
        oid: 1.3.6.1.2.1.2.2.1.2
        type: DisplayString
        help: A textual string containing information about the interface - 1.3.6.1.2.1.2.2.1.2
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifMtu
        oid: 1.3.6.1.2.1.2.2.1.4
        type: gauge
        help: The size of the largest packet which can be sent/received on the interface,
          specified in octets - 1.3.6.1.2.1.2.2.1.4
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifSpeed
        oid: 1.3.6.1.2.1.2.2.1.5
        type: gauge
        help: An estimate of the interface's current bandwidth in bits per second - 1.3.6.1.2.1.2.2.1.5
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifPhysAddress
        oid: 1.3.6.1.2.1.2.2.1.6
        type: PhysAddress48
        help: The interface's address at its protocol sub-layer - 1.3.6.1.2.1.2.2.1.6
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifAdminStatus
        oid: 1.3.6.1.2.1.2.2.1.7
        type: gauge
        help: The desired state of the interface - 1.3.6.1.2.1.2.2.1.7
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
        enum_values:
          1: up
          2: down
          3: testing
      - name: ifOperStatus
        oid: 1.3.6.1.2.1.2.2.1.8
        type: gauge
        help: The current operational state of the interface - 1.3.6.1.2.1.2.2.1.8
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
        enum_values:
          1: up
          2: down
          3: testing
          4: unknown
          5: dormant
          6: notPresent
          7: lowerLayerDown
      - name: ifLastChange
        oid: 1.3.6.1.2.1.2.2.1.9
        type: gauge
        help: The value of sysUpTime at the time the interface entered its current operational
          state - 1.3.6.1.2.1.2.2.1.9
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInOctets
        oid: 1.3.6.1.2.1.2.2.1.10
        type: counter
        help: The total number of octets received on the interface, including framing
          characters - 1.3.6.1.2.1.2.2.1.10
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInUcastPkts
        oid: 1.3.6.1.2.1.2.2.1.11
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were not addressed to a multicast or broadcast address at this sub-layer
          - 1.3.6.1.2.1.2.2.1.11
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInNUcastPkts
        oid: 1.3.6.1.2.1.2.2.1.12
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were addressed to a multicast or broadcast address at this sub-layer -
          1.3.6.1.2.1.2.2.1.12
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInDiscards
        oid: 1.3.6.1.2.1.2.2.1.13
        type: counter
        help: The number of inbound packets which were chosen to be discarded even though
          no errors had been detected to prevent their being deliverable to a higher-layer
          protocol - 1.3.6.1.2.1.2.2.1.13
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInErrors
        oid: 1.3.6.1.2.1.2.2.1.14
        type: counter
        help: For packet-oriented interfaces, the number of inbound packets that contained
          errors preventing them from being deliverable to a higher-layer protocol - 1.3.6.1.2.1.2.2.1.14
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInUnknownProtos
        oid: 1.3.6.1.2.1.2.2.1.15
        type: counter
        help: For packet-oriented interfaces, the number of packets received via the interface
          which were discarded because of an unknown or unsupported protocol - 1.3.6.1.2.1.2.2.1.15
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutOctets
        oid: 1.3.6.1.2.1.2.2.1.16
        type: counter
        help: The total number of octets transmitted out of the interface, including framing
          characters - 1.3.6.1.2.1.2.2.1.16
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutUcastPkts
        oid: 1.3.6.1.2.1.2.2.1.17
        type: counter
        help: The total number of packets that higher-level protocols requested be transmitted,
          and which were not addressed to a multicast or broadcast address at this sub-layer,
          including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.17
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutNUcastPkts
        oid: 1.3.6.1.2.1.2.2.1.18
        type: counter
        help: The total number of packets that higher-level protocols requested be transmitted,
          and which were addressed to a multicast or broadcast address at this sub-layer,
          including those that were discarded or not sent - 1.3.6.1.2.1.2.2.1.18
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutDiscards
        oid: 1.3.6.1.2.1.2.2.1.19
        type: counter
        help: The number of outbound packets which were chosen to be discarded even though
          no errors had been detected to prevent their being transmitted - 1.3.6.1.2.1.2.2.1.19
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutErrors
        oid: 1.3.6.1.2.1.2.2.1.20
        type: counter
        help: For packet-oriented interfaces, the number of outbound packets that could
          not be transmitted because of errors - 1.3.6.1.2.1.2.2.1.20
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutQLen
        oid: 1.3.6.1.2.1.2.2.1.21
        type: gauge
        help: The length of the output packet queue (in packets). - 1.3.6.1.2.1.2.2.1.21
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifSpecific
        oid: 1.3.6.1.2.1.2.2.1.22
        type: OctetString
        help: A reference to MIB definitions specific to the particular media being used
          to realize the interface - 1.3.6.1.2.1.2.2.1.22
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifName
        oid: 1.3.6.1.2.1.31.1.1.1.1
        type: DisplayString
        help: The textual name of the interface - 1.3.6.1.2.1.31.1.1.1.1
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInMulticastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.2
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were addressed to a multicast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.2
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifInBroadcastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.3
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were addressed to a broadcast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.3
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutMulticastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.4
        type: counter
        help: The total number of packets that higher-level protocols requested be transmitted,
          and which were addressed to a multicast address at this sub-layer, including
          those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.4
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifOutBroadcastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.5
        type: counter
        help: The total number of packets that higher-level protocols requested be transmitted,
          and which were addressed to a broadcast address at this sub-layer, including
          those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.5
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifHCInOctets
        oid: 1.3.6.1.2.1.31.1.1.1.6
        type: counter
        help: The total number of octets received on the interface, including framing
          characters - 1.3.6.1.2.1.31.1.1.1.6
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifHCInUcastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.7
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were not addressed to a multicast or broadcast address at this sub-layer
          - 1.3.6.1.2.1.31.1.1.1.7
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifHCInMulticastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.8
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were addressed to a multicast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.8
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifHCInBroadcastPkts
        oid: 1.3.6.1.2.1.31.1.1.1.9
        type: counter
        help: The number of packets, delivered by this sub-layer to a higher (sub-)layer,
          which were addressed to a broadcast address at this sub-layer - 1.3.6.1.2.1.31.1.1.1.9
        indexes:
          - labelname: ifIndex
            type: gauge
        lookups:
          - labels:
              - ifIndex
            labelname: ifName
            oid: 1.3.6.1.2.1.31.1.1.1.1
            type: DisplayString
          - labels: []
            labelname: ifIndex
      - name: ifHCOutOctets
        oid: 1.3.6.1.2.1.31.1.1.1.10
        type: counter
        help: The total number of octets transmitted out of the interface, including framing
|
||||
characters - 1.3.6.1.2.1.31.1.1.1.10
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
- name: REDACTED_APP_PASSWORD
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.11
|
||||
type: counter
|
||||
help: The total number of packets that higher-level protocols requested be transmitted,
|
||||
and which were not addressed to a multicast or broadcast address at this sub-layer,
|
||||
including those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.11
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
- name: ifHCOutMulticastPkts
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.12
|
||||
type: counter
|
||||
help: The total number of packets that higher-level protocols requested be transmitted,
|
||||
and which were addressed to a multicast address at this sub-layer, including
|
||||
those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.12
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
- name: ifHCOutBroadcastPkts
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.13
|
||||
type: counter
|
||||
help: The total number of packets that higher-level protocols requested be transmitted,
|
||||
and which were addressed to a broadcast address at this sub-layer, including
|
||||
those that were discarded or not sent - 1.3.6.1.2.1.31.1.1.1.13
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
- name: ifLinkUpDownTrapEnable
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.14
|
||||
type: gauge
|
||||
help: Indicates whether linkUp/linkDown traps should be generated for this interface
|
||||
- 1.3.6.1.2.1.31.1.1.1.14
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
enum_values:
|
||||
1: enabled
|
||||
2: disabled
|
||||
- name: ifHighSpeed
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.15
|
||||
type: gauge
|
||||
help: An estimate of the interface's current bandwidth in units of 1,000,000 bits
|
||||
per second - 1.3.6.1.2.1.31.1.1.1.15
|
||||
indexes:
|
||||
- labelname: ifIndex
|
||||
type: gauge
|
||||
lookups:
|
||||
- labels:
|
||||
- ifIndex
|
||||
labelname: ifName
|
||||
oid: 1.3.6.1.2.1.31.1.1.1.1
|
||||
type: DisplayString
|
||||
- labels: []
|
||||
labelname: ifIndex
|
||||
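Since ifHighSpeed reports bandwidth in units of 1,000,000 bits per second, dashboards that compute link utilization from the 64-bit octet counters (ifHCInOctets/ifHCOutOctets) need a unit conversion. A minimal sketch (function names are illustrative, not from this repo):

```python
def if_high_speed_to_bps(if_high_speed: int) -> int:
    """Convert ifHighSpeed (units of 1,000,000 bits/sec) to bits/sec."""
    return if_high_speed * 1_000_000

def utilization(octets_per_sec: float, if_high_speed: int) -> float:
    """Link utilization as a fraction, given a counter rate in octets/sec
    (e.g. a PromQL rate() over ifHCInOctets)."""
    return (octets_per_sec * 8) / if_high_speed_to_bps(if_high_speed)

print(if_high_speed_to_bps(1000))     # a 1 Gbit/s link reports ifHighSpeed=1000
print(utilization(12_500_000, 1000))  # 100 Mbit/s of traffic on that link -> 0.1
```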
- name: ifPromiscuousMode
  oid: 1.3.6.1.2.1.31.1.1.1.16
  type: gauge
  help: This object has a value of false(2) if this interface only accepts packets/frames
    that are addressed to this station - 1.3.6.1.2.1.31.1.1.1.16
  indexes:
  - labelname: ifIndex
    type: gauge
  lookups:
  - labels:
    - ifIndex
    labelname: ifName
    oid: 1.3.6.1.2.1.31.1.1.1.1
    type: DisplayString
  - labels: []
    labelname: ifIndex
  enum_values:
    1: "true"
    2: "false"
- name: ifConnectorPresent
  oid: 1.3.6.1.2.1.31.1.1.1.17
  type: gauge
  help: This object has the value 'true(1)' if the interface sublayer has a physical
    connector and the value 'false(2)' otherwise. - 1.3.6.1.2.1.31.1.1.1.17
  indexes:
  - labelname: ifIndex
    type: gauge
  lookups:
  - labels:
    - ifIndex
    labelname: ifName
    oid: 1.3.6.1.2.1.31.1.1.1.1
    type: DisplayString
  - labels: []
    labelname: ifIndex
  enum_values:
    1: "true"
    2: "false"
- name: ifAlias
  oid: 1.3.6.1.2.1.31.1.1.1.18
  type: DisplayString
  help: This object is an 'alias' name for the interface as specified by a network
    manager, and provides a non-volatile 'handle' for the interface - 1.3.6.1.2.1.31.1.1.1.18
  indexes:
  - labelname: ifIndex
    type: gauge
  lookups:
  - labels:
    - ifIndex
    labelname: ifName
    oid: 1.3.6.1.2.1.31.1.1.1.1
    type: DisplayString
  - labels: []
    labelname: ifIndex
- name: ifCounterDiscontinuityTime
  oid: 1.3.6.1.2.1.31.1.1.1.19
  type: gauge
  help: The value of sysUpTime on the most recent occasion at which any one or more
    of this interface's counters suffered a discontinuity - 1.3.6.1.2.1.31.1.1.1.19
  indexes:
  - labelname: ifIndex
    type: gauge
  lookups:
  - labels:
    - ifIndex
    labelname: ifName
    oid: 1.3.6.1.2.1.31.1.1.1.1
    type: DisplayString
  - labels: []
    labelname: ifIndex
- name: systemStatus
  oid: 1.3.6.1.4.1.6574.1.1
  type: gauge
  help: Synology system status Each meanings of status represented describe below
    - 1.3.6.1.4.1.6574.1.1
- name: temperature
  oid: 1.3.6.1.4.1.6574.1.2
  type: gauge
  help: Synology system temperature The temperature of Disk Station uses Celsius
    degree. - 1.3.6.1.4.1.6574.1.2
- name: powerStatus
  oid: 1.3.6.1.4.1.6574.1.3
  type: gauge
  help: Synology power status Each meanings of status represented describe below
    - 1.3.6.1.4.1.6574.1.3
- name: systemFanStatus
  oid: 1.3.6.1.4.1.6574.1.4.1
  type: gauge
  help: Synology system fan status Each meanings of status represented describe
    below - 1.3.6.1.4.1.6574.1.4.1
- name: cpuFanStatus
  oid: 1.3.6.1.4.1.6574.1.4.2
  type: gauge
  help: Synology cpu fan status Each meanings of status represented describe below
    - 1.3.6.1.4.1.6574.1.4.2
- name: modelName
  oid: 1.3.6.1.4.1.6574.1.5.1
  type: DisplayString
  help: The Model name of this NAS - 1.3.6.1.4.1.6574.1.5.1
- name: serialNumber
  oid: 1.3.6.1.4.1.6574.1.5.2
  type: DisplayString
  help: The serial number of this NAS - 1.3.6.1.4.1.6574.1.5.2
- name: version
  oid: 1.3.6.1.4.1.6574.1.5.3
  type: DisplayString
  help: The version of this DSM - 1.3.6.1.4.1.6574.1.5.3
- name: REDACTED_APP_PASSWORD
  oid: 1.3.6.1.4.1.6574.1.5.4
  type: gauge
  help: This oid is for checking whether there is a latest DSM can be upgraded -
    1.3.6.1.4.1.6574.1.5.4
- name: REDACTED_APP_PASSWORD
  oid: 1.3.6.1.4.1.6574.1.6
  type: gauge
  help: Synology system controller number Controller A(0) Controller B(1) - 1.3.6.1.4.1.6574.1.6
- name: diskIndex
  oid: 1.3.6.1.4.1.6574.2.1.1.1
  type: gauge
  help: The index of disk table - 1.3.6.1.4.1.6574.2.1.1.1
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
- name: diskID
  oid: 1.3.6.1.4.1.6574.2.1.1.2
  type: DisplayString
  help: Synology disk ID The ID of disk is assigned by disk Station. - 1.3.6.1.4.1.6574.2.1.1.2
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
- name: diskModel
  oid: 1.3.6.1.4.1.6574.2.1.1.3
  type: DisplayString
  help: Synology disk model name The disk model name will be showed here. - 1.3.6.1.4.1.6574.2.1.1.3
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
- name: diskType
  oid: 1.3.6.1.4.1.6574.2.1.1.4
  type: DisplayString
  help: Synology disk type The type of disk will be showed here, including SATA,
    SSD and so on. - 1.3.6.1.4.1.6574.2.1.1.4
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
- name: diskStatus
  oid: 1.3.6.1.4.1.6574.2.1.1.5
  type: gauge
  help: Synology disk status. Normal-1 Initialized-2 NotInitialized-3 SystemPartitionFailed-4 Crashed-5
    - 1.3.6.1.4.1.6574.2.1.1.5
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
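The diskStatus help string encodes its enum inline (Normal-1 through Crashed-5). When wiring this metric into alerting rules, a small decode table keeps thresholds readable; a sketch using exactly the values from the help text:

```python
# Status codes taken from the diskStatus help string above.
DISK_STATUS = {
    1: "Normal",
    2: "Initialized",
    3: "NotInitialized",
    4: "SystemPartitionFailed",
    5: "Crashed",
}

def disk_needs_attention(status: int) -> bool:
    # Anything other than Normal(1) or Initialized(2) is worth alerting on.
    return status not in (1, 2)

print(DISK_STATUS[5], disk_needs_attention(5))  # Crashed True
```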
- name: diskTemperature
  oid: 1.3.6.1.4.1.6574.2.1.1.6
  type: gauge
  help: Synology disk temperature The temperature of each disk uses Celsius degree.
    - 1.3.6.1.4.1.6574.2.1.1.6
  indexes:
  - labelname: diskIndex
    type: gauge
  lookups:
  - labels:
    - diskIndex
    labelname: diskID
    oid: 1.3.6.1.4.1.6574.2.1.1.2
    type: DisplayString
  - labels: []
    labelname: diskIndex
- name: raidIndex
  oid: 1.3.6.1.4.1.6574.3.1.1.1
  type: gauge
  help: The index of raid table - 1.3.6.1.4.1.6574.3.1.1.1
  indexes:
  - labelname: raidIndex
    type: gauge
  lookups:
  - labels:
    - raidIndex
    labelname: raidName
    oid: 1.3.6.1.4.1.6574.3.1.1.2
    type: DisplayString
- name: raidName
  oid: 1.3.6.1.4.1.6574.3.1.1.2
  type: DisplayString
  help: Synology raid name The name of each raid will be showed here. - 1.3.6.1.4.1.6574.3.1.1.2
  indexes:
  - labelname: raidIndex
    type: gauge
  lookups:
  - labels:
    - raidIndex
    labelname: raidName
    oid: 1.3.6.1.4.1.6574.3.1.1.2
    type: DisplayString
- name: raidStatus
  oid: 1.3.6.1.4.1.6574.3.1.1.3
  type: gauge
  help: Synology Raid status Each meanings of status represented describe below
    - 1.3.6.1.4.1.6574.3.1.1.3
  indexes:
  - labelname: raidIndex
    type: gauge
  lookups:
  - labels:
    - raidIndex
    labelname: raidName
    oid: 1.3.6.1.4.1.6574.3.1.1.2
    type: DisplayString
- name: raidFreeSize
  oid: 1.3.6.1.4.1.6574.3.1.1.4
  type: gauge
  help: Synology raid freesize Free space in bytes. - 1.3.6.1.4.1.6574.3.1.1.4
  indexes:
  - labelname: raidIndex
    type: gauge
  lookups:
  - labels:
    - raidIndex
    labelname: raidName
    oid: 1.3.6.1.4.1.6574.3.1.1.2
    type: DisplayString
- name: raidTotalSize
  oid: 1.3.6.1.4.1.6574.3.1.1.5
  type: gauge
  help: Synology raid totalsize Total space in bytes. - 1.3.6.1.4.1.6574.3.1.1.5
  indexes:
  - labelname: raidIndex
    type: gauge
  lookups:
  - labels:
    - raidIndex
    labelname: raidName
    oid: 1.3.6.1.4.1.6574.3.1.1.2
    type: DisplayString
- name: REDACTED_APP_PASSWORD
  oid: 1.3.6.1.4.1.6574.6.1.1.1
  type: gauge
  help: Service info index - 1.3.6.1.4.1.6574.6.1.1.1
  indexes:
  - labelname: REDACTED_APP_PASSWORD
    type: gauge
  lookups:
  - labels:
    - REDACTED_APP_PASSWORD
    labelname: serviceName
    oid: 1.3.6.1.4.1.6574.6.1.1.2
    type: DisplayString
  - labels: []
    labelname: REDACTED_APP_PASSWORD
- name: serviceName
  oid: 1.3.6.1.4.1.6574.6.1.1.2
  type: DisplayString
  help: Service name - 1.3.6.1.4.1.6574.6.1.1.2
  indexes:
  - labelname: REDACTED_APP_PASSWORD
    type: gauge
  lookups:
  - labels:
    - REDACTED_APP_PASSWORD
    labelname: serviceName
    oid: 1.3.6.1.4.1.6574.6.1.1.2
    type: DisplayString
  - labels: []
    labelname: REDACTED_APP_PASSWORD
- name: serviceUsers
  oid: 1.3.6.1.4.1.6574.6.1.1.3
  type: gauge
  help: Number of users using this service - 1.3.6.1.4.1.6574.6.1.1.3
  indexes:
  - labelname: REDACTED_APP_PASSWORD
    type: gauge
  lookups:
  - labels:
    - REDACTED_APP_PASSWORD
    labelname: serviceName
    oid: 1.3.6.1.4.1.6574.6.1.1.2
    type: DisplayString
  - labels: []
    labelname: REDACTED_APP_PASSWORD
40
hosts/synology/calypso/headplane-config.yaml
Normal file
@@ -0,0 +1,40 @@
# Headplane Configuration - Reference Copy
# ==========================================
# Live file location on Calypso: /volume1/docker/headscale/headplane/config.yaml
# This file is NOT auto-deployed - must be manually placed on Calypso.
#
# To deploy/update config on Calypso:
# scp -P 62000 headplane-config.yaml Vish@100.103.48.78:/volume1/docker/headscale/headplane/config.yaml
# docker restart headplane
#
# Secrets are redacted here - see Authentik provider pk=16 (app slug=headplane) for OIDC creds.
# Headscale API key managed via: docker exec headscale headscale apikeys list

headscale:
  # Internal Docker network URL - headplane and headscale share headscale-net
  url: http://headscale:8080
  # Path to headscale config inside the container (shared volume mount)
  config_path: /etc/headscale/config.yaml

server:
  host: 0.0.0.0
  port: 3000
  # Public URL used for OIDC redirect URIs - must include :8443, no /admin suffix
  base_url: https://headscale.vish.gg:8443
  # Must be EXACTLY 32 characters: openssl rand -base64 24 | tr -d '=\n'
  cookie_secret: "REDACTED_SEE_CALYPSO" # pragma: allowlist secret

oidc:
  # Authentik OIDC provider pk=16, app slug=headplane
  issuer: https://sso.vish.gg/application/o/headplane/
  client_id: "REDACTED_CLIENT_ID" # pragma: allowlist secret
  client_secret: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
  # Headscale API key used by Headplane during the OIDC auth flow
  # Generate: docker exec headscale headscale apikeys create --expiration 999d
  headscale_api_key: "REDACTED_API_KEY" # pragma: allowlist secret

integration:
  docker:
    # Enables Settings and DNS UI by allowing Headplane to restart headscale
    # after config changes via the read-only Docker socket mount
    enabled: true
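The cookie_secret above must be exactly 32 characters, which is why the suggested command encodes 24 random bytes as base64 (24 bytes / 3 * 4 = 32 characters, with no `=` padding). A Python equivalent of that openssl one-liner (function name is illustrative):

```python
import base64
import os

def make_cookie_secret() -> str:
    # 24 random bytes -> exactly 32 base64 characters, no '=' padding to strip.
    return base64.b64encode(os.urandom(24)).decode("ascii")

secret = make_cookie_secret()
assert len(secret) == 32
print(secret)
```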
106
hosts/synology/calypso/headscale-config.yaml
Normal file
@@ -0,0 +1,106 @@
# Headscale Configuration - Reference Copy
# ==========================================
# Live file location on Calypso: /volume1/docker/headscale/config/config.yaml
# This file is NOT auto-deployed - must be manually placed on Calypso.
# The docker-compose.yaml mounts /volume1/docker/headscale/config/ → /etc/headscale/
#
# To update config on Calypso:
# scp -P 62000 headscale-config.yaml Vish@100.103.48.78:/volume1/docker/headscale/config/config.yaml
# docker restart headscale

server_url: https://headscale.vish.gg:8443

listen_addr: 0.0.0.0:8080
metrics_listen_addr: 0.0.0.0:9090
grpc_listen_addr: 0.0.0.0:50443
grpc_allow_insecure: false

tls_cert_path: ""
tls_key_path: ""

private_key_path: /var/lib/headscale/private.key
noise:
  private_key_path: /var/lib/headscale/noise_private.key

prefixes:
  v4: 100.64.0.0/10
  v6: fd7a:115c:a1e0::/48
  allocation: sequential

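The v4 prefix 100.64.0.0/10 is the carrier-grade NAT range Tailscale-style meshes allocate node addresses from; a quick sanity check with the stdlib `ipaddress` module that the tailnet IPs used elsewhere in this repo fall inside it:

```python
import ipaddress

v4 = ipaddress.ip_network("100.64.0.0/10")

# Calypso and Atlantis tailnet IPs referenced in this repo's configs.
for node in ("100.103.48.78", "100.83.230.112"):
    assert ipaddress.ip_address(node) in v4

print(v4.num_addresses)  # 4194304 addresses (2**22) available for sequential allocation
```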
derp:
  server:
    # Built-in DERP relay — region 900 "Home - Calypso"
    # Served at /derp on the same port as headscale (through NPM on 8443)
    # No STUN — UDP 3478 is occupied by coturn on Atlantis (Jitsi)
    enabled: true
    region_id: 900
    region_code: "home-cal"
    region_name: "Home - Calypso"
    private_key_path: /var/lib/headscale/derp_server_private.key
    # Required by headscale even though UDP 3478 is not exposed in compose
    # (port 3478 → Atlantis on the router for Jitsi/coturn)
    stun_listen_addr: "0.0.0.0:3478"
    # We define the region manually in derpmap.yaml (stunport: -1)
    automatically_add_embedded_derp_region: false
    verify_clients: false
    ipv4: 184.23.52.14
  # No public DERP fallback — Tailscale public DERPs reject headscale nodes (auth mismatch)
  # Risk: nodes behind strict NAT that cannot P2P will lose connectivity if both custom
  # DERPs (home-cal + seattle-vps) are unreachable simultaneously.
  # Mitigation: home-cal (Calypso) and seattle-vps are independent failure domains.
  urls: []
  # Custom derpmap: region 900 (home) + region 901 (Seattle VPS)
  paths:
    - /etc/headscale/derpmap.yaml
  auto_update_enabled: false

ephemeral_node_inactivity_timeout: 30m

database:
  type: sqlite
  sqlite:
    path: /var/lib/headscale/db.sqlite
    write_ahead_log: true

# OIDC via Authentik (provider pk=15, app slug=headscale at sso.vish.gg)
# Credentials stored only on Calypso at /volume1/docker/headscale/config/config.yaml
oidc:
  only_start_if_oidc_is_available: false # Allow headscale to start even if Authentik is temporarily unavailable
  issuer: "https://sso.vish.gg/application/o/headscale/"
  client_id: "REDACTED_CLIENT_ID"
  client_secret: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
  scope: ["openid", "profile", "email"]
  extra_params:
    domain_hint: vish.gg
  allowed_domains: []
  allowed_groups: []
  allowed_users: []
  expiry: 180d
  use_expiry_from_token: false

log:
  format: text
  level: info

logtail:
  enabled: false
randomize_client_port: false

# DNS: MagicDNS with AdGuard nameservers for ad-blocking + split-horizon on the tailnet
# Using Tailscale IPs so all mesh nodes (including remote) can reach DNS
dns:
  magic_dns: true
  base_domain: tail.vish.gg
  nameservers:
    global:
      - 100.103.48.78 # Calypso AdGuard (Tailscale IP)
      - 100.83.230.112 # Atlantis AdGuard (Tailscale IP)
  search_domains: []
  extra_records: []

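With magic_dns enabled, each node becomes resolvable as `<machine>.<base_domain>` on the tailnet. A small sketch of the resulting naming (the machine name "calypso" is an illustrative example, not taken from the headscale node list):

```python
BASE_DOMAIN = "tail.vish.gg"

def magicdns_fqdn(machine: str) -> str:
    # MagicDNS exposes each registered node as <machine>.<base_domain>;
    # machine names are matched case-insensitively, so normalize to lowercase.
    return f"{machine.lower()}.{BASE_DOMAIN}"

print(magicdns_fqdn("calypso"))  # calypso.tail.vish.gg
```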
unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"

policy:
  mode: file
  path: "" # Empty = allow all (configure ACLs later)
120
hosts/synology/calypso/headscale.yaml
Normal file
@@ -0,0 +1,120 @@
# Headscale - Self-Hosted Tailscale Control Server
# =============================================================================
# Open-source implementation of the Tailscale control server
# =============================================================================
# Deployed via: Portainer GitOps (or docker compose up -d on Calypso)
# Ports: 8080 (HTTP API), 443 (HTTPS via NPM), 9090 (Metrics)
#
# Why Calypso?
# - Already runs Authentik (SSO/OIDC provider) for seamless integration
# - Already runs Nginx Proxy Manager for external HTTPS access
# - Infrastructure services host (Gitea, NPM, Authentik)
# - Synology NAS = always-on, stable, reliable
#
# External Access:
# - NPM proxy host: headscale.vish.gg → 192.168.0.250:8085
#   WebSocket support MUST be enabled in NPM (already configured, host ID 44)
# - /admin path routed to Headplane at 192.168.0.250:3002 via NPM Advanced tab
# - OIDC auth via Authentik (provider pk=15, app slug=headscale)
#   Authentik reached via public HTTPS - no shared Docker network needed
#
# Config files:
# - Headscale: /volume1/docker/headscale/config/config.yaml on Calypso
# - Headplane: /volume1/docker/headscale/headplane/config.yaml on Calypso
# - NOT managed by inline configs block (Synology Docker Compose v2.20 doesn't support it)
#
# Architecture:
# ┌─────────────────────────────────────────────────────────────────────┐
# │ HEADSCALE SETUP │
# ├─────────────────────────────────────────────────────────────────────┤
# │ │
# │ ┌─────────────┐ ┌─────────────────────────┐ │
# │ │ Clients │ │ Calypso │ │
# │ │ │ │ │ │
# │ │ ┌─────────┐ │ HTTPS/443 │ ┌───────────────────┐ │ │
# │ │ │Tailscale│ │─────────────────────▶│ │ Nginx Proxy Mgr │ │ │
# │ │ │ Client │ │ headscale.vish.gg │ │ (SSL Term) │ │ │
# │ │ └─────────┘ │ │ └─────────┬─────────┘ │ │
# │ │ │ │ │ │ │
# │ │ ┌─────────┐ │ │ ▼ │ │
# │ │ │ Phone │ │ │ ┌───────────────────┐ │ │
# │ │ │ App │ │ │ │ Headscale │ │ │
# │ │ └─────────┘ │ │ │ :8080 │ │ │
# │ │ │ │ └─────────┬─────────┘ │ │
# │ │ ┌─────────┐ │ │ │ │ │
# │ │ │ Linux │ │ │ ▼ │ │
# │ │ │ Server │ │ │ ┌───────────────────┐ │ │
# │ │ └─────────┘ │ │ │ Authentik │ │ │
# │ └─────────────┘ │ │ sso.vish.gg │ │ │
# │ │ │ (OIDC via HTTPS) │ │ │
# │ │ └───────────────────┘ │ │
# │ └─────────────────────────┘ │
# └─────────────────────────────────────────────────────────────────────┘

services:
  headscale:
    image: headscale/headscale:latest
    container_name: headscale
    restart: unless-stopped
    labels:
      # Required so Headplane can locate this container via Docker socket
      me.tale.headplane.target: "headscale"
    volumes:
      # Config file at /volume1/docker/headscale/config/config.yaml
      - /volume1/docker/headscale/config:/etc/headscale
      # Persistent data: keys, SQLite database
      - headscale-data:/var/lib/headscale
      # Unix socket for headscale CLI
      - headscale-socket:/var/run/headscale
    ports:
      - "8085:8080" # Main API - proxied via NPM to headscale.vish.gg
      - "50443:50443" # gRPC
      - "9099:9090" # Prometheus metrics
    command: serve
    networks:
      - headscale-net
    healthcheck:
      test: ["CMD", "headscale", "health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s

  headplane:
    image: ghcr.io/tale/headplane:latest
    container_name: headplane
    restart: unless-stopped
    ports:
      - "3002:3000" # Host port 3002 (3000+3001 taken by DSM nginx)
    volumes:
      # Headplane config (secrets live on Calypso, reference copy in repo)
      - /volume1/docker/headscale/headplane/config.yaml:/etc/headplane/config.yaml
      # Persistent data: session DB, agent cache
      - headplane-data:/var/lib/headplane
      # Shared read/write access to headscale config (for Settings UI)
      - /volume1/docker/headscale/config/config.yaml:/etc/headscale/config.yaml
      # Docker socket - read-only, needed to restart headscale after config changes
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - headscale-net
    depends_on:
      - headscale
    healthcheck:
      test: ["CMD", "/bin/hp_healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

volumes:
  headscale-data:
    name: headscale-data
  headscale-socket:
    name: headscale-socket
  headplane-data:
    name: headplane-data

networks:
  headscale-net:
    name: headscale-net
    driver: bridge
117
hosts/synology/calypso/immich/docker-compose.yml
Normal file
@@ -0,0 +1,117 @@
# Immich - Photo/video backup solution
# URL: https://photos.vishconcord.synology.me
# Port: 2283
# Google Photos alternative with ML-powered features
#
# IMPORTANT: Portainer git deploy does NOT load env_file references.
# All env vars from stack.env MUST be set as Portainer stack environment
# overrides. Without them, DB_HOSTNAME defaults to "database" (Immich v2.6.2+)
# causing "getaddrinfo ENOTFOUND database" crashes.
# Fixed 2026-03-27: env vars added as Portainer stack overrides via API.

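Because Portainer git deploys ignore env_file, every variable has to be pushed as a stack environment override. A sketch of building that override payload (the stack/endpoint IDs, the exact endpoint path, and the example variable values are placeholders; Portainer's stack-update request is assumed to take an `env` array of name/value pairs, so verify the payload shape against your Portainer version's API docs):

```python
import json

# Hypothetical IDs - replace with your Portainer stack and endpoint IDs.
STACK_ID, ENDPOINT_ID = 12, 1

def to_portainer_env(variables: dict) -> list:
    """Convert stack.env-style variables into Portainer's env override list."""
    return [{"name": k, "value": v} for k, v in variables.items()]

payload = {
    "env": to_portainer_env({"TZ": "America/Los_Angeles", "DB_HOSTNAME": "immich-db"}),
    "prune": False,
}
print(json.dumps(payload))
# A PUT to /api/stacks/{STACK_ID}?endpointId={ENDPOINT_ID} with this body
# (plus the stackFileContent field) would re-deploy with the overrides applied.
```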
services:
  immich-redis:
    image: redis
    container_name: Immich-REDIS
    hostname: immich-redis
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    user: 1026:100
    env_file:
      - stack.env
    environment:
      - TZ=${TZ}
    volumes:
      - /volume1/docker/immich/redis:/data:rw
    restart: on-failure:5

  immich-db:
    image: ghcr.io/immich-app/postgres:16-vectorchord0.4.3-pgvectors0.2.0
    container_name: Immich-DB
    hostname: immich-db
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "${DB_DATABASE_NAME}", "-U", "${DB_USERNAME}"]
      interval: 10s
      timeout: 5s
      retries: 5
    shm_size: 128mb
    volumes:
      - /volume1/docker/immich/db:/var/lib/postgresql/data:rw
    environment:
      - TZ=${TZ}
      - POSTGRES_DB=${DB_DATABASE_NAME}
      - POSTGRES_USER=${DB_USERNAME}
      - POSTGRES_PASSWORD="REDACTED_PASSWORD"
      - DB_STORAGE_TYPE=HDD
    restart: on-failure:5

  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    container_name: Immich-SERVER
    hostname: immich-server
    user: 1026:100
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    environment:
      - NODE_ENV=${NODE_ENV}
      - TZ=${TZ}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD="REDACTED_PASSWORD"
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
      - LOG_LEVEL=${LOG_LEVEL}
      - JWT_SECRET=${JWT_SECRET}
      - IMMICH_CONFIG_FILE=/config/immich-config.json
    ports:
      - 8212:2283
    volumes:
      - /volume1/docker/immich/upload:/data:rw
      - /volume1/docker/immich/external_photos/photos:/external/photos:rw
      - /volume1/docker/immich/config/immich-config.json:/config/immich-config.json:ro
    restart: on-failure:5
    depends_on:
      immich-redis:
        condition: service_healthy
      immich-db:
        condition: service_started

  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    container_name: Immich-LEARNING
    hostname: immich-machine-learning
    user: 1026:100
    security_opt:
      - no-new-privileges:true
    env_file:
      - stack.env
    environment:
      - NODE_ENV=${NODE_ENV}
      - TZ=${TZ}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD="REDACTED_PASSWORD"
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
      - LOG_LEVEL=${LOG_LEVEL}
      - JWT_SECRET=${JWT_SECRET}
      - MPLCONFIGDIR=/matplotlib
    volumes:
      - /volume1/docker/immich/upload:/data:rw
      - /volume1/docker/immich/external_photos/photos:/external/photos:rw
      - /volume1/docker/immich/cache:/cache:rw
      - /volume1/docker/immich/cache:/.cache:rw
      - /volume1/docker/immich/cache:/.config:rw
      - /volume1/docker/immich/matplotlib:/matplotlib:rw
    restart: on-failure:5
    depends_on:
      immich-db:
        condition: service_started
11
hosts/synology/calypso/iperf3.yml
Normal file
@@ -0,0 +1,11 @@
# iPerf3 - Network bandwidth testing
# Port: 5201
# TCP/UDP bandwidth measurement tool
version: '3.8'
services:
  iperf3:
    image: networkstatic/iperf3
    container_name: iperf3
    restart: unless-stopped
    network_mode: "host" # Allows the container to use the NAS's network stack
    command: "-s" # Runs iperf3 in server mode
46
hosts/synology/calypso/nginx-proxy-manager.yaml
Normal file
@@ -0,0 +1,46 @@
# Nginx Proxy Manager - Reverse Proxy with GUI
# Docs: https://nginxproxymanager.com/
# Deployed to: Calypso (DS723+)
# Domains: *.vish.gg, *.thevish.io
#
# REPLACES: Synology DSM Reverse Proxy
# INTEGRATES: Authentik SSO via Forward Auth
#
# PORTS:
# - 80: HTTP (redirect to HTTPS)
# - 443: HTTPS (main proxy)
# - 81: Admin UI
#
# DISASTER RECOVERY:
# - Config: /volume1/docker/nginx-proxy-manager/data
# - SSL Certs: /volume1/docker/nginx-proxy-manager/letsencrypt
# - Database: SQLite in data directory

services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      # Using alternate ports during migration (Synology nginx on 80/443)
      # Change to 80:80 and 443:443 after migration complete
      - "8880:80" # HTTP (temp port)
      - "8443:443" # HTTPS (temp port)
      - "81:81" # Admin UI
    environment:
      # Disable IPv6 if not needed
      DISABLE_IPV6: "true"
    volumes:
      - /volume1/docker/nginx-proxy-manager/data:/data
      - /volume1/docker/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
    networks:
      - npm-network
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  npm-network:
    driver: bridge
104
hosts/synology/calypso/nginx_proxy_manager/README.md
Normal file
@@ -0,0 +1,104 @@
# Nginx Proxy Manager - GitOps Deployment

This directory contains the GitOps deployment configuration for Nginx Proxy Manager on the Calypso server.

## 🚀 Quick Start

```bash
# Deploy NPM
./deploy.sh

# Check status
./deploy.sh status

# View logs
./deploy.sh logs
```

## 🌐 Access URLs

- **Admin UI**: http://192.168.0.250:81
- **HTTP Proxy**: http://192.168.0.250:8880 (external port 80)
- **HTTPS Proxy**: https://192.168.0.250:8443 (external port 443)

## 🔧 Configuration

### Port Mapping
- `8880:80` - HTTP proxy (router forwards 80→8880)
- `8443:443` - HTTPS proxy (router forwards 443→8443)
- `81:81` - Admin interface

### Data Storage
- **Config**: `/volume1/docker/nginx-proxy-manager/data`
- **SSL Certs**: `/volume1/docker/nginx-proxy-manager/letsencrypt`

## 🛠️ Deployment Commands

```bash
# Full deployment
./deploy.sh deploy

# Management
./deploy.sh restart   # Restart service
./deploy.sh stop      # Stop service
./deploy.sh update    # Update images and redeploy
./deploy.sh status    # Check service status
./deploy.sh logs      # View service logs
./deploy.sh cleanup   # Clean up existing containers
```

## 🔐 Initial Setup

1. **First Login**:
   - URL: http://192.168.0.250:81
   - Email: `admin@example.com`
   - Password: "REDACTED_PASSWORD"

2. **Change Default Credentials**:
   - Update email and password immediately
   - Enable 2FA if desired

3. **Configure Proxy Hosts**:
   - Add your domains (*.vish.gg, *.thevish.io)
   - Configure SSL certificates
   - Set up forwarding rules

## 🌍 Router Configuration

Ensure your router forwards these ports:
- **Port 80** → **8880** (HTTP)
- **Port 443** → **8443** (HTTPS)
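A quick way to verify the forwarding rules is to probe the public IP from outside the LAN (a sketch; `YOUR_WAN_IP` is the same placeholder used elsewhere in this README, and `curl` is assumed on the client):

```shell
# Run from a network outside your LAN (e.g. a phone hotspot).
# Any HTTP status line in the response means the port is reachable.
curl -sI  http://YOUR_WAN_IP/  | head -n 1
curl -skI https://YOUR_WAN_IP/ | head -n 1   # -k: NPM's cert may not match the bare IP
```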
## 🔄 Migration Notes

This deployment uses alternate ports (8880/8443) to avoid conflicts with Synology's built-in nginx service. Once the migration is complete and Synology's nginx is disabled, you can change the ports to the standard 80/443.
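After that switch, the `ports` section of the compose file would look like this (a sketch matching the comment already in the compose file):

```yaml
    ports:
      - "80:80"     # HTTP
      - "443:443"   # HTTPS
      - "81:81"     # Admin UI
```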
## 🚨 Troubleshooting

### Service Won't Start
```bash
# Clean up and redeploy
./deploy.sh cleanup
./deploy.sh deploy
```

### Can't Access Admin UI
```bash
# Check service status
./deploy.sh status

# Check logs
./deploy.sh logs
```

### SSL Certificate Issues
1. Ensure your domains point to your external IP (YOUR_WAN_IP)
2. Check router port forwarding
3. Verify Cloudflare DNS settings

## 📊 Status

**Status**: ✅ **ACTIVE DEPLOYMENT** (GitOps)
- **Version**: Latest (jc21/nginx-proxy-manager)
- **Deployed**: 2026-02-16
- **External Access**: ✅ Configured via router forwarding
181
hosts/synology/calypso/nginx_proxy_manager/deploy.sh
Executable file
@@ -0,0 +1,181 @@
#!/bin/bash

# Nginx Proxy Manager - GitOps Deployment Script
# Deploys NPM to Calypso server with proper port configuration

set -euo pipefail

# Configuration
SERVICE_NAME="nginx-proxy-manager"
REMOTE_HOST="Vish@192.168.0.250"
SSH_PORT="62000"
REMOTE_PATH="/volume1/docker/nginx-proxy-manager"
COMPOSE_FILE="docker-compose.yml"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log() {
    echo -e "${BLUE}[$(date '+%Y-%m-%d %H:%M:%S')] $1${NC}"
}

success() {
    echo -e "${GREEN}✅ $1${NC}"
}

warning() {
    echo -e "${YELLOW}⚠️ $1${NC}"
}

error() {
    echo -e "${RED}❌ $1${NC}"
    exit 1
}

check_prerequisites() {
    if [[ ! -f "$COMPOSE_FILE" ]]; then
        error "docker-compose.yml not found in current directory"
    fi

    if ! ssh -q -p "$SSH_PORT" "$REMOTE_HOST" exit; then
        error "Cannot connect to $REMOTE_HOST"
    fi
}

cleanup_existing() {
    log "Cleaning up existing NPM containers..."

    # Stop and remove any existing NPM containers
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker stop nginx-proxy-manager 2>/dev/null || true"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker rm nginx-proxy-manager 2>/dev/null || true"

    # Clean up any orphaned containers
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker container prune -f 2>/dev/null || true"

    success "Cleanup complete"
}

deploy() {
    log "Deploying $SERVICE_NAME to $REMOTE_HOST..."

    # Create required directories
    log "Creating required directories..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "mkdir -p $REMOTE_PATH/{data,letsencrypt}"

    # Copy compose file
    log "Copying docker-compose.yml to $REMOTE_HOST:$REMOTE_PATH/"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cat > $REMOTE_PATH/docker-compose.yml" < "$COMPOSE_FILE"

    # Deploy services
    log "Starting NPM services..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose up -d"

    # Wait for services to be healthy
    log "Waiting for services to be healthy..."
    sleep 15

    # Check status
    if ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps | grep -q 'nginx-proxy-manager.*Up'"; then
        success "$SERVICE_NAME deployed successfully!"
        log "Admin UI:    http://192.168.0.250:81"
        log "HTTP Proxy:  http://192.168.0.250:8880"
        log "HTTPS Proxy: https://192.168.0.250:8443"
        warning "Default login: admin@example.com / changeme"
        warning "Make sure your router forwards:"
        warning "  Port 80  → 8880 (HTTP)"
        warning "  Port 443 → 8443 (HTTPS)"
    else
        warning "Service started but may not be fully healthy yet. Check logs with: ./deploy.sh logs"
    fi
}

restart() {
    log "Restarting $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose restart"
    success "Service restarted"
}

stop() {
    log "Stopping $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose down"
    success "Service stopped"
}

logs() {
    log "Showing logs for $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker logs -f nginx-proxy-manager"
}

status() {
    log "Checking status of $SERVICE_NAME services..."
    echo
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' | grep -E '(NAMES|nginx-proxy-manager)'"
    echo

    # Test connectivity
    if curl -s -o /dev/null -w "%{http_code}" "http://192.168.0.250:81" | grep -q "200\|302\|401"; then
        success "NPM Admin UI is responding at http://192.168.0.250:81"
    else
        warning "NPM Admin UI is not responding"
    fi
}

update() {
    log "Updating $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose pull"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose up -d"
    success "Service updated"
}

# Main execution
COMMAND=${1:-deploy}

case $COMMAND in
    deploy)
        check_prerequisites
        cleanup_existing
        deploy
        ;;
    restart)
        check_prerequisites
        restart
        ;;
    stop)
        check_prerequisites
        stop
        ;;
    logs)
        check_prerequisites
        logs
        ;;
    status)
        check_prerequisites
        status
        ;;
    update)
        check_prerequisites
        update
        ;;
    cleanup)
        check_prerequisites
        cleanup_existing
        ;;
    *)
        echo "Usage: $0 [deploy|restart|stop|logs|status|update|cleanup]"
        echo
        echo "Commands:"
        echo "  deploy  - Deploy/update the service (default)"
        echo "  restart - Restart the service"
        echo "  stop    - Stop the service"
        echo "  logs    - Show service logs"
        echo "  status  - Show service status"
        echo "  update  - Pull latest images and redeploy"
        echo "  cleanup - Clean up existing containers"
        exit 1
        ;;
esac
@@ -0,0 +1,46 @@
# Nginx Proxy Manager - Reverse Proxy with GUI
# Docs: https://nginxproxymanager.com/
# Deployed to: Calypso (DS723+)
# Domains: *.vish.gg, *.thevish.io
#
# REPLACES: Synology DSM Reverse Proxy
# INTEGRATES: Authentik SSO via Forward Auth
#
# PORTS:
#   - 80: HTTP (redirect to HTTPS)
#   - 443: HTTPS (main proxy)
#   - 81: Admin UI
#
# DISASTER RECOVERY:
#   - Config: /volume1/docker/nginx-proxy-manager/data
#   - SSL Certs: /volume1/docker/nginx-proxy-manager/letsencrypt
#   - Database: SQLite in data directory

services:
  nginx-proxy-manager:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      # Using alternate ports during migration (Synology nginx on 80/443)
      # Change to 80:80 and 443:443 after migration is complete
      - "8880:80"   # HTTP (temp port)
      - "8443:443"  # HTTPS (temp port)
      - "81:81"     # Admin UI
    environment:
      # Disable IPv6 if not needed
      DISABLE_IPV6: "true"
    volumes:
      - /volume1/docker/nginx-proxy-manager/data:/data
      - /volume1/docker/nginx-proxy-manager/letsencrypt:/etc/letsencrypt
    networks:
      - npm-network
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  npm-network:
    driver: bridge
31
hosts/synology/calypso/node-exporter.yaml
Normal file
@@ -0,0 +1,31 @@
# Node Exporter + SNMP Exporter - Prometheus metrics exporters
# Node Exporter: Hardware/OS metrics on port 9100 (via host network)
# SNMP Exporter: Network device metrics on port 9116 (via host network)
# Used by: Grafana/Prometheus monitoring stack

version: "3.8"

services:
  node-exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    network_mode: host
    pid: host
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--path.rootfs=/rootfs'
      - '--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|host|etc)($$|/)'
    restart: unless-stopped

  snmp-exporter:
    image: quay.io/prometheus/snmp-exporter:latest
    container_name: snmp_exporter
    network_mode: host
    volumes:
      - /volume1/docker/snmp/snmp.yml:/etc/snmp_exporter/snmp.yml:ro
    restart: unless-stopped
10
hosts/synology/calypso/openspeedtest.yaml
Normal file
@@ -0,0 +1,10 @@
version: '3.8'

services:
  openspeedtest:
    image: openspeedtest/latest
    container_name: openspeedtest
    network_mode: host
    restart: unless-stopped
    environment:
      - TZ=America/Los_Angeles
128
hosts/synology/calypso/paperless/README.md
Normal file
@@ -0,0 +1,128 @@
# Paperless-NGX + AI

Document management system with AI-powered automatic tagging and categorization.

## Deployment

- **Host:** Calypso (Synology NAS)
- **Paperless-NGX URL:** https://paperlessngx.vishconcord.synology.me
- **Paperless-AI URL:** http://calypso.local:3033
- **Deployed via:** Portainer Stacks

## Stacks

### 1. Paperless-NGX (paperless-testing)
Main document management system with office document support.

**File:** `docker-compose.yml`

| Container | Port | Purpose |
|-----------|------|---------|
| PaperlessNGX | 8777 | Main web UI |
| PaperlessNGX-DB | - | PostgreSQL database |
| PaperlessNGX-REDIS | - | Redis cache |
| PaperlessNGX-GOTENBERG | - | Office doc conversion |
| PaperlessNGX-TIKA | - | Document parsing |

### 2. Paperless-AI (paperless-ai)
AI extension for automatic document classification.

**File:** `paperless-ai.yml`

| Container | Port | Purpose |
|-----------|------|---------|
| PaperlessNGX-AI | 3033 (→ 3000 in container) | AI processing & web UI |

## Data Locations

| Data | Path |
|------|------|
| Documents | `/volume1/docker/paperlessngx/media` |
| Database | `/volume1/docker/paperlessngx/db` |
| Export/Backup | `/volume1/docker/paperlessngx/export` |
| Consume folder | `/volume1/docker/paperlessngx/consume` |
| Trash | `/volume1/docker/paperlessngx/trash` |
| AI config | `/volume1/docker/paperlessngxai` |

## Credentials

### Paperless-NGX
- URL: https://paperlessngx.vishconcord.synology.me
- Admin user: vish
- Admin password: "REDACTED_PASSWORD"

### PostgreSQL
- Database: paperless
- User: paperlessuser
- Password: "REDACTED_PASSWORD"

### Redis
- Password: "REDACTED_PASSWORD"

### API Token
- Token: `REDACTED_API_TOKEN`
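A quick sanity check that the token is accepted by the Paperless-NGX REST API (a sketch; substitute the real token for the redacted value, and 8777 is the port documented above):

```shell
# Expect 200 with a JSON document listing; 401 means the token is wrong.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Token $PAPERLESS_API_TOKEN" \
  http://192.168.0.250:8777/api/documents/
```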
## AI Integration (Ollama)

Paperless-AI connects to Ollama on Atlantis for LLM inference.

**Ollama URL:** https://ollama.vishconcord.synology.me
**Model:** neural-chat:7b (recommended)

### Configuring AI

1. Access the Paperless-AI web UI: http://calypso.local:3033
2. Complete the initial setup wizard
3. Configure:
   - AI Provider: Ollama
   - Ollama URL: https://ollama.vishconcord.synology.me
   - Model: neural-chat:7b (or llama3.2:latest)
4. Set up tags and document types to auto-assign
5. Restart the container after initial setup to build the RAG index

### Available Ollama Models

| Model | Size | Best For |
|-------|------|----------|
| neural-chat:7b | 7B | General documents |
| llama3.2:3b | 3.2B | Fast processing |
| mistral:7b | 7.2B | High quality |
| phi3:mini | 3.8B | Balanced |

## Backup

### Manual Export
```bash
# SSH into Calypso or use Portainer exec
docker exec PaperlessNGX document_exporter ../export -c -d
```

### Backup Location
Exports are saved to: `/volume1/docker/paperlessngx/export/`

### Restore
```bash
docker exec PaperlessNGX document_importer ../export
```

## Troubleshooting

### Paperless-AI not connecting to Ollama
1. Verify Ollama is running on Atlantis
2. Check the URL is correct: `https://ollama.vishconcord.synology.me`
3. Test connectivity: `curl https://ollama.vishconcord.synology.me/api/tags`

### Documents not being processed
1. Check Paperless-AI logs: `docker logs PaperlessNGX-AI`
2. Verify the API token is correct
3. Ensure tags are configured in the Paperless-AI web UI

### OCR issues
1. Check that Tika and Gotenberg are running
2. Verify the language is set: `PAPERLESS_OCR_LANGUAGE: eng`

## Documentation

- [Paperless-ngx Docs](https://docs.paperless-ngx.com/)
- [Paperless-AI GitHub](https://github.com/clusterzx/paperless-ai)
- [Ollama Docs](https://ollama.com/)
129
hosts/synology/calypso/paperless/docker-compose.yml
Normal file
@@ -0,0 +1,129 @@
# Paperless-NGX with Office Document Support
# URL: https://docs.vish.gg
# Port: 8777
# Notifications: ntfy (http://192.168.0.210:8081/paperless)
# SSO: Authentik OIDC (sso.vish.gg/application/o/paperless/)

services:
  redis:
    image: redis:8
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass REDACTED_PASSWORD
    container_name: PaperlessNGX-REDIS
    hostname: paper-redis
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/paperlessngx/redis:/data:rw
    environment:
      TZ: America/Los_Angeles
    restart: on-failure:5

  db:
    image: postgres:18
    container_name: PaperlessNGX-DB
    hostname: paper-db
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "paperless", "-U", "paperlessuser"]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - /volume1/docker/paperlessngx/db:/var/lib/postgresql:rw
    environment:
      POSTGRES_DB: paperless
      POSTGRES_USER: paperlessuser
      POSTGRES_PASSWORD: "REDACTED_PASSWORD"
    restart: on-failure:5

  gotenberg:
    image: gotenberg/gotenberg:latest
    container_name: PaperlessNGX-GOTENBERG
    hostname: gotenberg
    security_opt:
      - no-new-privileges:true
    user: 1026:100
    command:
      - "gotenberg"
      - "--chromium-disable-javascript=true"
      - "--chromium-allow-list=file:///tmp/.*"
    restart: on-failure:5

  tika:
    image: docker.io/apache/tika:latest
    container_name: PaperlessNGX-TIKA
    hostname: tika
    security_opt:
      - no-new-privileges:true
    user: 1026:100
    restart: on-failure:5

  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    container_name: PaperlessNGX
    hostname: paperless-ngx
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "curl", "-fs", "-S", "--max-time", "2", "http://localhost:8000"]
      interval: 30s
      timeout: 10s
      retries: 5
    ports:
      - 8777:8000
    volumes:
      - /volume1/docker/paperlessngx/data:/usr/src/paperless/data:rw
      - /volume1/docker/paperlessngx/media:/usr/src/paperless/media:rw
      - /volume1/docker/paperlessngx/export:/usr/src/paperless/export:rw
      - /volume1/docker/paperlessngx/consume:/usr/src/paperless/consume:rw
      - /volume1/docker/paperlessngx/trash:/usr/src/paperless/trash:rw
    environment:
      PAPERLESS_REDIS: redis://:redispass@paper-redis:6379
      PAPERLESS_DBENGINE: postgresql
      PAPERLESS_DBHOST: paper-db
      PAPERLESS_DBNAME: paperless
      PAPERLESS_DBUSER: paperlessuser
      PAPERLESS_DBPASS: paperlesspass
      PAPERLESS_EMPTY_TRASH_DIR: ../trash
      PAPERLESS_FILENAME_FORMAT: "{{ created_year }}/{{ correspondent }}/{{ document_type }}/{{ title }}"
      PAPERLESS_OCR_ROTATE_PAGES_THRESHOLD: 6
      PAPERLESS_TASK_WORKERS: 1
      USERMAP_UID: 1026
      USERMAP_GID: 100
      PAPERLESS_SECRET_KEY: "REDACTED_SECRET_KEY"
      PAPERLESS_TIME_ZONE: America/Los_Angeles
      PAPERLESS_ADMIN_USER: vish
      PAPERLESS_ADMIN_PASSWORD: "REDACTED_PASSWORD"  # pragma: allowlist secret
      PAPERLESS_URL: https://docs.vish.gg
      PAPERLESS_CSRF_TRUSTED_ORIGINS: https://docs.vish.gg
      PAPERLESS_OCR_LANGUAGE: eng
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
      # ntfy notification on document consumption
      PAPERLESS_POST_CONSUME_SCRIPT: /usr/src/paperless/data/notify.sh
      # Authentik OIDC SSO
      PAPERLESS_SOCIALACCOUNT_PROVIDERS: >-
        {"openid_connect": {"APPS": [{"provider_id": "paperless", "name": "Authentik",
        "client_id": "paperless",
        "secret": "10e705242ca03f59b10ea831REDACTED_GITEA_TOKEN",
        "settings": {"server_url": "https://sso.vish.gg/application/o/paperless/.well-known/openid-configuration"}}]}}
      PAPERLESS_APPS: allauth.socialaccount.providers.openid_connect
    restart: on-failure:5
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
      tika:
        condition: service_started
      gotenberg:
        condition: service_started
41
hosts/synology/calypso/paperless/paperless-ai.yml
Normal file
@@ -0,0 +1,41 @@
# Paperless-AI - AI-powered document processing for Paperless-NGX
# Uses Ollama on Atlantis for LLM inference
# Web UI: http://<calypso-ip>:3033 or via reverse proxy
# Docs: https://github.com/clusterzx/paperless-ai

services:
  paperlessngx-ai:
    image: clusterzx/paperless-ai:latest
    container_name: PaperlessNGX-AI
    hostname: paperless-ai
    ports:
      - "3033:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/status"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 90s
    volumes:
      - /volume1/docker/paperlessngxai:/app/data:rw
    environment:
      # --- Paperless-NGX Connection ---
      # Using Calypso's IP + external port (containers on different networks)
      PAPERLESS_URL: "http://192.168.0.250:8777"
      PAPERLESS_NGX_URL: "http://192.168.0.250:8777"
      PAPERLESS_HOST: "192.168.0.250"
      PAPERLESS_API_URL: "http://192.168.0.250:8777/api"
      PAPERLESS_API_TOKEN: "REDACTED_TOKEN"

      # --- LLM Connection (LM Studio on Shinku-Ryuu via Tailscale) ---
      # Temporarily using LM Studio instead of Ollama (OpenAI-compatible API)
      # Original Ollama config:
      #   OLLAMA_API_URL: "http://192.168.0.200:11434"
      #   OLLAMA_MODEL: "llama3.2:latest"
      AI_PROVIDER: "custom"
      CUSTOM_BASE_URL: "http://100.98.93.15:1234/v1"
      CUSTOM_MODEL: "llama-3.2-3b-instruct"
      CUSTOM_API_KEY: "lm-studio"

      # --- Optional Settings ---
      # PROCESS_PREDEFINED_DOCUMENTS: "yes"
      # SCAN_INTERVAL: "*/30 * * * *"
    restart: unless-stopped
BIN
hosts/synology/calypso/piped+hyperpipe/Piped conf.zip
Normal file
Binary file not shown.
33
hosts/synology/calypso/piped+hyperpipe/Piped conf/nginx.conf
Normal file
@@ -0,0 +1,33 @@
user root;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid       /var/run/nginx.pid;


events {
    worker_connections 1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    server_names_hash_bucket_size 128;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile    on;
    tcp_nodelay on;

    keepalive_timeout 65;

    resolver 127.0.0.11 ipv6=off valid=10s;

    include /etc/nginx/conf.d/*.conf;
}
@@ -0,0 +1,15 @@
proxy_cache_path /tmp/pipedapi_cache levels=1:2 keys_zone=pipedapi:4m max_size=2g inactive=60m use_temp_path=off;

server {
    listen 80;
    server_name pipedapi.vish.gg;

    set $backend "http://piped-backend:8080";

    location / {
        proxy_cache pipedapi;
        proxy_pass $backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
    }
}
@@ -0,0 +1,12 @@
server {
    listen 80;
    server_name piped.vish.gg;

    set $backend "http://piped-frontend";

    location / {
        proxy_pass $backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "keep-alive";
    }
}
@@ -0,0 +1,14 @@
server {
    listen 80;
    server_name pipedproxy.vish.gg;

    location ~ (/videoplayback|/api/v4/|/api/manifest/) {
        include snippets/ytproxy.conf;
        add_header Cache-Control private always;
    }

    location / {
        include snippets/ytproxy.conf;
        add_header Cache-Control "public, max-age=604800";
    }
}
@@ -0,0 +1,18 @@
proxy_buffering on;
proxy_buffers 1024 16k;
proxy_set_header X-Forwarded-For "";
proxy_set_header CF-Connecting-IP "";
proxy_hide_header "alt-svc";
sendfile on;
sendfile_max_chunk 512k;
tcp_nopush on;
aio threads=default;
aio_write on;
directio 16m;
proxy_hide_header Cache-Control;
proxy_hide_header etag;
proxy_http_version 1.1;
proxy_set_header Connection keep-alive;
proxy_max_temp_file_size 32m;
access_log off;
proxy_pass http://unix:/var/run/ytproxy/actix.sock;
37
hosts/synology/calypso/piped+hyperpipe/config.properties
Normal file
@@ -0,0 +1,37 @@
# The port to listen on.
PORT: 8080

# The number of workers to use for the server
HTTP_WORKERS: 2

# Proxy
PROXY_PART: https://pipedproxy.vish.gg

# Outgoing HTTP Proxy - eg: 127.0.0.1:8118
#HTTP_PROXY: 127.0.0.1:8118

# Captcha Parameters
#CAPTCHA_BASE_URL: https://api.capmonster.cloud/
#CAPTCHA_API_KEY: INSERT_HERE

# Public API URL
API_URL: https://pipedapi.vish.gg

# Public Frontend URL
FRONTEND_URL: https://piped.vish.gg

# Enable haveibeenpwned compromised password API
COMPROMISED_PASSWORD_CHECK: true

# Disable Registration
DISABLE_REGISTRATION: false

# Feed Retention Time in Days
FEED_RETENTION: 30

# Hibernate properties
hibernate.connection.url: jdbc:postgresql://piped-db:5432/piped
hibernate.connection.driver_class: org.postgresql.Driver
hibernate.dialect: org.hibernate.dialect.PostgreSQLDialect
hibernate.connection.username: pipeduser
hibernate.connection.password: "REDACTED_PASSWORD"
20
hosts/synology/calypso/portainer_agent.yaml
Normal file
@@ -0,0 +1,20 @@
services:
  portainer_edge_agent:
    image: portainer/agent:2.33.7
    container_name: portainer_edge_agent
    restart: unless-stopped
    environment:
      EDGE: "1"
      EDGE_ID: "bc4b9329-95c0-4c08-bddd-e5790330570f"
      # EDGE_KEY is sensitive — set via Portainer UI or pass at deploy time
      EDGE_KEY: ""
      EDGE_INSECURE_POLL: "1"
    volumes:
      # NOTE: Synology Docker root is /volume1/@docker, NOT /var/lib/docker
      - /volume1/@docker/volumes:/var/lib/docker/volumes
      - /:/host
      - portainer_agent_data:/data
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  portainer_agent_data:
151
hosts/synology/calypso/prometheus.yml
Normal file
@@ -0,0 +1,151 @@
# Prometheus - Metrics database
# Port: 9090
# Time-series metrics and alerting

version: '3'

services:
  prometheus:
    image: prom/prometheus
    command:
      - '--storage.tsdb.retention.time=60d'
      - '--config.file=/etc/prometheus/prometheus.yml'
    container_name: Prometheus
    hostname: prometheus-docker
    networks:
      - prometheus-net
    mem_limit: 1g
    cpu_shares: 768
    security_opt:
      - no-new-privileges:true
    user: 1026:100
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:9090/ || exit 1
    ports:
      - 12090:9090
    volumes:
      - /volume1/docker/prometheus/prometheus:/prometheus:rw
      - /volume1/docker/prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    restart: on-failure:5

  node-exporter:
    image: prom/node-exporter:latest
    command:
      - --collector.disable-defaults
      - --collector.stat
      - --collector.time
      - --collector.cpu
      - --collector.loadavg
      - --collector.hwmon
      - --collector.meminfo
      - --collector.diskstats
    container_name: Prometheus-Node
    hostname: prometheus-node
    networks:
      - prometheus-net
    mem_limit: 256m
    mem_reservation: 64m
    cpu_shares: 512
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:9100/
    restart: on-failure:5

  snmp-exporter:
    image: prom/snmp-exporter:latest
    command:
      - "--config.file=/etc/snmp_exporter/snmp.yml"
    container_name: Prometheus-SNMP
    hostname: prometheus-snmp
    networks:
      - prometheus-net
    mem_limit: 256m
    mem_reservation: 64m
    cpu_shares: 512
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:9116/ || exit 1
    volumes:
      - /volume1/docker/prometheus/snmp:/etc/snmp_exporter/:ro
    restart: on-failure:5

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    command:
      - '--docker_only=true'
    container_name: Prometheus-cAdvisor
    hostname: prometheus-cadvisor
    networks:
      - prometheus-net
    mem_limit: 256m
    mem_reservation: 64m
    cpu_shares: 512
    security_opt:
      - no-new-privileges:true
    read_only: true
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: on-failure:5

  blackbox-exporter:
    image: prom/blackbox-exporter
    container_name: blackbox-exporter
    networks:
      - prometheus-net
    ports:
      - 9115:9115
    restart: unless-stopped

  speedtest-exporter:
    image: miguelndecarvalho/speedtest-exporter
    container_name: speedtest-exporter
    networks:
      - prometheus-net
    ports:
      - 9798:9798
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower:latest
    container_name: WATCHTOWER
    hostname: watchtower
    networks:
      - prometheus-net
    mem_limit: 128m
    mem_reservation: 50m
    cpu_shares: 256
    security_opt:
      - no-new-privileges:true
    read_only: true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      TZ: America/Los_Angeles
      WATCHTOWER_CLEANUP: true
      WATCHTOWER_REMOVE_VOLUMES: false
      DOCKER_API_VERSION: 1.43
      WATCHTOWER_INCLUDE_RESTARTING: true
      WATCHTOWER_INCLUDE_STOPPED: false
      WATCHTOWER_SCHEDULE: "0 0 */2 * * *"
      WATCHTOWER_LABEL_ENABLE: false
      WATCHTOWER_ROLLING_RESTART: true
      WATCHTOWER_TIMEOUT: 30s
      WATCHTOWER_HTTP_API_METRICS: true
      WATCHTOWER_HTTP_API_TOKEN: ${WATCHTOWER_HTTP_API_TOKEN}
    restart: on-failure:5

networks:
  prometheus-net:
    name: prometheus-net
    ipam:
      config:
        - subnet: 192.168.51.0/24
15
hosts/synology/calypso/rackula.yml
Normal file
@@ -0,0 +1,15 @@
# Rackula - Drag and drop rack visualizer
# Port: 3891
services:
  Rackula:
    image: ghcr.io/rackulalives/rackula:latest
    container_name: Rackula
    healthcheck:
      test: ["CMD-SHELL", "nc -z 127.0.0.1 8080 || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 90s
    ports:
      - 3891:8080
    restart: on-failure:5
230
hosts/synology/calypso/reactive_resume_v5/AI_MODEL_GUIDE.md
Normal file
@@ -0,0 +1,230 @@
# Reactive Resume v5 - AI Model Configuration Guide

## 🤖 Current AI Setup

### Ollama Configuration
- **Model**: `llama3.2:3b`
- **Provider**: `ollama`
- **Endpoint**: `http://ollama:11434` (internal)
- **External API**: `http://192.168.0.250:11434`

## 📋 Model Details for Reactive Resume v5

### Environment Variables
Add these to your `docker-compose.yml` environment section:

```yaml
environment:
  # AI Integration (Ollama) - v5 uses OpenAI-compatible API
  OPENAI_API_KEY: "ollama" # Dummy key for local Ollama
  OPENAI_BASE_URL: "http://ollama:11434/v1" # Ollama OpenAI-compatible endpoint
  OPENAI_MODEL: "llama3.2:3b" # Model name
```

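A quick way to sanity-check this wiring outside the app is to hit the OpenAI-compatible endpoint directly. A minimal sketch, assuming the Ollama container is reachable at the `OPENAI_BASE_URL` above; the helper only builds the request, so the actual HTTP call stays separate:

```python
import json
import urllib.request

BASE_URL = "http://ollama:11434/v1"  # assumption: matches OPENAI_BASE_URL above

def build_chat_request(prompt: str, model: str = "llama3.2:3b") -> urllib.request.Request:
    """Build an OpenAI-compatible /chat/completions request for Ollama."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer ollama",  # dummy key; Ollama ignores it
        },
    )

def send(req: urllib.request.Request) -> str:
    """Send the request and return the first choice's text (needs the server up)."""
    with urllib.request.urlopen(req, timeout=120) as resp:
        reply = json.load(resp)
    return reply["choices"][0]["message"]["content"]
```

From inside the app container, `send(build_chat_request("Say hello"))` should return text; a connection error here usually means the `ollama` hostname or port is wrong.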
### Model Specifications

#### llama3.2:3b
- **Size**: ~2GB download
- **Parameters**: 3 billion
- **Context Length**: 128K tokens (Ollama serves a smaller window by default)
- **Use Case**: General text generation, resume assistance
- **Performance**: Fast inference on CPU
- **Memory**: ~4GB RAM during inference

## 🔧 Alternative Models

If you want to use different models, here are recommended options:

### Lightweight Options (< 4GB RAM)
```yaml
# Fastest, smallest
OPENAI_MODEL: "llama3.2:1b" # ~1GB, very fast

# Balanced performance
OPENAI_MODEL: "llama3.2:3b" # ~2GB, good quality (current)

# Better quality, still reasonable
OPENAI_MODEL: "qwen2.5:3b" # ~2GB, good for professional text
```

### High-Quality Options (8GB+ RAM)
```yaml
# Better reasoning
OPENAI_MODEL: "llama3.1:8b" # ~5GB, higher quality

# Excellent for professional content
OPENAI_MODEL: "qwen2.5:7b" # ~4GB, great for business writing

# Best quality (if you have the resources)
OPENAI_MODEL: "qwen2.5:14b" # ~9GB, excellent quality
```

### Specialized Models
```yaml
# Code-focused (good for tech resumes)
OPENAI_MODEL: "codellama:7b" # ~4GB, code-aware

# Instruction-following
OPENAI_MODEL: "mistral:7b" # ~4GB, good at following prompts
```

## 🚀 Model Management Commands

### Pull New Models
```bash
# Pull a different model
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull qwen2.5:3b"

# List available models
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"

# Remove unused models
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama rm llama3.2:1b"
```

### Change Active Model
1. Update `OPENAI_MODEL` in `docker-compose.yml`
2. Redeploy: `./deploy.sh restart`
3. Pull the new model if needed: `./deploy.sh setup-ollama`

## 🧪 Testing AI Features

### Direct API Test
```bash
# Test the AI API directly
curl -X POST http://192.168.0.250:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:3b",
    "prompt": "Write a professional summary for a software engineer with 5 years experience in Python and React",
    "stream": false
  }'
```

### Expected Response
```json
{
  "model": "llama3.2:3b",
  "created_at": "2026-02-16T10:00:00.000Z",
  "response": "Experienced Software Engineer with 5+ years of expertise in full-stack development using Python and React. Proven track record of building scalable web applications...",
  "done": true
}
```
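
When scripting against `/api/generate`, the non-streamed response can be unwrapped with a few lines of Python. A sketch; the field names match the sample response above:

```python
import json

def extract_generation(raw: str) -> str:
    """Return the generated text from a non-streamed /api/generate response."""
    reply = json.loads(raw)
    if not reply.get("done", False):
        raise ValueError("response is incomplete; was '\"stream\": false' set?")
    return reply["response"]

sample = '{"model": "llama3.2:3b", "response": "Experienced Software Engineer...", "done": true}'
print(extract_generation(sample))  # → Experienced Software Engineer...
```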

## 🎯 AI Features in Reactive Resume v5

### 1. Resume Content Suggestions
- **Trigger**: Click "AI Assist" button in any text field
- **Function**: Suggests professional content based on context
- **Model Usage**: Generates 2-3 sentence suggestions

### 2. Job Description Analysis
- **Trigger**: Paste job description in "Job Match" feature
- **Function**: Analyzes requirements and suggests skill additions
- **Model Usage**: Extracts key requirements and matches to profile

### 3. Skills Optimization
- **Trigger**: "Optimize Skills" button in Skills section
- **Function**: Suggests relevant skills based on experience
- **Model Usage**: Analyzes work history and recommends skills

### 4. Cover Letter Generation
- **Trigger**: "Generate Cover Letter" in Documents section
- **Function**: Creates personalized cover letter
- **Model Usage**: Uses resume data + job description to generate letter

## 📊 Performance Tuning

### Model Performance Comparison
| Model | Size | Speed | Quality | RAM Usage | Best For |
|-------|------|-------|---------|-----------|----------|
| llama3.2:1b | 1GB | Very Fast | Good | 2GB | Quick suggestions |
| llama3.2:3b | 2GB | Fast | Very Good | 4GB | **Recommended** |
| qwen2.5:3b | 2GB | Fast | Very Good | 4GB | Professional content |
| llama3.1:8b | 5GB | Medium | Excellent | 8GB | High quality |

### Optimization Settings
```yaml
# In docker-compose.yml for the Ollama service
environment:
  OLLAMA_HOST: "0.0.0.0"
  OLLAMA_KEEP_ALIVE: "5m" # Keep model loaded for 5 minutes
  OLLAMA_MAX_LOADED_MODELS: "1" # Only keep one model in memory
  OLLAMA_NUM_PARALLEL: "1" # Number of parallel requests
```

## 🔍 Troubleshooting AI Issues

### Model Not Loading
```bash
# Check if model exists
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"

# Pull model manually
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b"

# Check Ollama logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-OLLAMA-V5"
```

### Slow AI Responses
1. **Check CPU usage**: `htop` on Calypso
2. **Reduce model size**: Switch to `llama3.2:1b`
3. **Increase keep-alive**: Set `OLLAMA_KEEP_ALIVE: "30m"`

### AI Features Not Appearing in UI
1. **Check environment variables**: Ensure `AI_PROVIDER=ollama` is set
2. **Verify connectivity**: Test the API endpoint from the app container
3. **Check app logs**: Look for AI-related errors

### Memory Issues
```bash
# Check memory usage
ssh Vish@192.168.0.250 -p 62000 "free -h"
```

If memory is low, switch to a smaller model:
```yaml
OPENAI_MODEL: "llama3.2:1b" # Uses ~2GB instead of 4GB
```

## 🔄 Model Updates

### Updating to Newer Models
1. **Check available models**: https://ollama.ai/library
2. **Pull the new model**: `ollama pull model-name`
3. **Update the compose file**: Change the `OPENAI_MODEL` value
4. **Restart services**: `./deploy.sh restart`

### Model Versioning
```yaml
# Pin to a specific quantization
OPENAI_MODEL: "llama3.2:3b-instruct-q4_K_M"

# Use the default tag (auto-updates)
OPENAI_MODEL: "llama3.2:3b" # Latest version
```

## 📈 Monitoring AI Performance

### Metrics to Watch
- **Response Time**: Should be < 10s for most prompts
- **Memory Usage**: Monitor RAM consumption
- **Model Load Time**: First request after idle takes longer
- **Error Rate**: Check for failed AI requests

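The response-time metric above is easy to capture with a small wrapper. A sketch, where the `generate` argument is any callable that performs the actual request (hypothetical here, injected so the wrapper is testable offline):

```python
import time
from typing import Any, Callable, Tuple

def timed_generate(generate: Callable[[], Any], budget_s: float = 10.0) -> Tuple[Any, float, bool]:
    """Run one generation call; return (result, elapsed seconds, within budget)."""
    start = time.monotonic()
    result = generate()
    elapsed = time.monotonic() - start
    return result, elapsed, elapsed < budget_s

# Offline demo with a stand-in for the real API call:
result, elapsed, ok = timed_generate(lambda: "draft summary")
print(f"{elapsed:.2f}s, within budget: {ok}")
```

Logging `elapsed` per request over a day gives a quick read on whether the model fits the < 10s target on Calypso's CPU.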
### Performance Commands
```bash
# Check AI API health
curl http://192.168.0.250:11434/api/tags

# Monitor resource usage
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker stats Resume-OLLAMA-V5"

# Check AI request logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-ACCESS-V5 | grep -i ollama"
```

---

**Current Configuration**: llama3.2:3b (Recommended)
**Last Updated**: 2026-02-16
**Performance**: ✅ Optimized for Calypso hardware
72
hosts/synology/calypso/reactive_resume_v5/MIGRATION.md
Normal file
@@ -0,0 +1,72 @@
# Migration from Reactive Resume v4 to v5

## Migration Summary
Successfully migrated from Reactive Resume v4 to v5 on 2026-02-16.

## Port Configuration
- **Main Application**: Port 9751 (same as v4)
- **S3 API**: Port 9753 (same as v4 MinIO)
- **PDF Service**: Port 4000 (internal)

## Reverse Proxy Compatibility
The migration keeps the same external ports, so existing reverse proxy rules continue to work:
- `http://192.168.0.250:9751` → `rx.vish.gg`
- `http://192.168.0.250:9753` → `rxdl.vish.gg` (S3 API)

## Changes from v4 to v5

### Storage Backend
- **v4**: MinIO for S3-compatible storage
- **v5**: SeaweedFS for S3-compatible storage
- Same S3 API compatibility on port 9753

### Database
- **v4**: PostgreSQL 16
- **v5**: PostgreSQL 18
- Database migration handled automatically

### PDF Generation
- **v4**: Browserless Chrome with HTTP API
- **v5**: Browserless Chrome with WebSocket API
- Better performance and real-time updates

### Authentication
- **v4**: Custom auth system
- **v5**: Better Auth framework
- More secure and feature-rich

## Data Migration
- Database data preserved in `/volume1/docker/rxv5/db/`
- File storage migrated to SeaweedFS format
- User accounts and resumes preserved

## Removed Services
The following v4 containers were stopped and removed:
- `Resume-ACCESS` (v4 main app)
- `Resume-DB` (v4 database)
- `Resume-PRINTER` (v4 PDF service)
- `Resume-MINIO` (v4 storage)

## New Services
The following v5 containers are now running:
- `Resume-ACCESS-V5` (v5 main app)
- `Resume-DB-V5` (v5 database)
- `Resume-BROWSERLESS-V5` (v5 PDF service)
- `Resume-SEAWEEDFS-V5` (v5 storage)
- `Resume-BUCKET-V5` (storage initialization)

## Configuration Files
- v4 configuration archived to: `/home/homelab/organized/repos/homelab/archive/reactive_resume_v4_archived/`
- v5 configuration active in: `/home/homelab/organized/repos/homelab/Calypso/reactive_resume_v5/`

## Verification
- ✅ Application accessible at http://calypso.vish.local:9751
- ✅ S3 API accessible at http://calypso.vish.local:9753
- ✅ All containers healthy and running
- ✅ Reverse proxy rules unchanged
- ✅ Account creation working (no more "Invalid origin" errors)

## Future Enhancements
- Ollama AI integration (when v5 supports it)
- External domain configuration for https://rx.vish.gg
- Automated backups of SeaweedFS data
134
hosts/synology/calypso/reactive_resume_v5/README.md
Normal file
@@ -0,0 +1,134 @@
# Reactive Resume v5 - GitOps Deployment

This directory contains the GitOps deployment configuration for Reactive Resume v5 on the Calypso server with AI integration.

## 🚀 Quick Start

```bash
# Deploy the complete stack
./deploy.sh

# Check status
./deploy.sh status

# View logs
./deploy.sh logs
```

## 🌐 Access URLs

- **External**: https://rx.vish.gg
- **Internal**: http://192.168.0.250:9751
- **Download Service**: http://192.168.0.250:9753 (rxdl.vish.gg)
- **Ollama API**: http://192.168.0.250:11434

## 🏗️ Architecture

### Core Services
- **Main App**: Reactive Resume v5 with AI features
- **Database**: PostgreSQL 18
- **Storage**: SeaweedFS (S3-compatible)
- **PDF Generation**: Browserless Chrome
- **AI Engine**: Ollama with llama3.2:3b model

### Infrastructure
- **Proxy**: Nginx Proxy Manager (ports 8880/8443)
- **Router**: Port forwarding 80→8880, 443→8443

## 🤖 AI Features

Reactive Resume v5 includes AI-powered features:
- Resume content suggestions
- Job description analysis
- Skills optimization
- Cover letter generation

Powered by Ollama running locally with the llama3.2:3b model.

## 📋 Prerequisites

1. **Router Configuration**: Forward ports 80→8880 and 443→8443
2. **DNS**: rx.vish.gg and rxdl.vish.gg pointing to YOUR_WAN_IP
3. **SSL**: Cloudflare Origin certificates in NPM

## 🛠️ Deployment Commands

```bash
# Full deployment
./deploy.sh deploy

# Setup individual components
./deploy.sh setup-npm     # Setup Nginx Proxy Manager
./deploy.sh setup-ollama  # Setup AI model

# Management
./deploy.sh restart  # Restart services
./deploy.sh stop     # Stop services
./deploy.sh update   # Update images and redeploy
./deploy.sh status   # Check service status
./deploy.sh logs     # View application logs
```

## 🔧 Configuration

### Environment Variables
- `APP_URL`: https://rx.vish.gg
- `AI_PROVIDER`: ollama
- `OLLAMA_URL`: http://ollama:11434
- `OLLAMA_MODEL`: llama3.2:3b

### Volumes
- `/volume1/docker/rxv5/db` - PostgreSQL data
- `/volume1/docker/rxv5/seaweedfs` - File storage
- `/volume1/docker/rxv5/ollama` - AI model data

## 🔄 Migration from v4

This deployment maintains compatibility with v4:
- Same ports (9751, 9753)
- Same SMTP configuration
- Same database credentials
- Preserves existing NPM proxy rules

## 🚨 Troubleshooting

### External Access Issues
1. Check router port forwarding: 80→8880, 443→8443
2. Verify NPM proxy hosts are configured
3. Confirm DNS propagation: `nslookup rx.vish.gg`

### AI Features Not Working
1. Check the Ollama service: `docker logs Resume-OLLAMA-V5`
2. Pull the model manually: `docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b`
3. Verify the model is loaded: `docker exec Resume-OLLAMA-V5 ollama list`

### Service Health
```bash
# Check all services
./deploy.sh status

# Check specific container
ssh Vish@192.168.0.250 -p 62000 "sudo docker logs Resume-ACCESS-V5"
```

## 📊 Monitoring

- **Application Health**: http://192.168.0.250:9751/health
- **Database**: PostgreSQL on port 5432 (internal)
- **Storage**: SeaweedFS S3 API on port 8333 (internal)
- **AI**: Ollama API on port 11434
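
The monitoring endpoints above lend themselves to a tiny scripted probe. A sketch; the endpoint list mirrors this section, and `check` takes an injectable fetcher so it can be tested offline (the internal-only ports are reachable from the Docker network, not necessarily from your desk):

```python
from typing import Callable, Dict

# Externally reachable endpoints from the Monitoring section above.
ENDPOINTS = {
    "app": "http://192.168.0.250:9751/health",
    "ollama": "http://192.168.0.250:11434/api/tags",
}

def check(fetch: Callable[[str], int]) -> Dict[str, bool]:
    """Map each endpoint name to True if fetch(url) returns HTTP 200."""
    results = {}
    for name, url in ENDPOINTS.items():
        try:
            results[name] = fetch(url) == 200
        except OSError:
            results[name] = False
    return results

# Offline demo with a fake fetcher:
print(check(lambda url: 200))  # → {'app': True, 'ollama': True}
```

In practice the fetcher would be a small wrapper around `urllib.request.urlopen` returning the status code; wiring the result into a cron job or the Prometheus blackbox-exporter is left to taste.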

## 🔐 Security

- All services run with `no-new-privileges:true`
- Database credentials are environment-specific
- SMTP uses app-specific passwords
- External access only through NPM with SSL

## 📈 Status

**Status**: ✅ **ACTIVE DEPLOYMENT** (GitOps with AI integration)
- **Version**: v5.0.9
- **Deployed**: 2026-02-16
- **AI Model**: llama3.2:3b
- **External Access**: ✅ Configured
210
hosts/synology/calypso/reactive_resume_v5/deploy.sh
Executable file
@@ -0,0 +1,210 @@
#!/bin/bash

# Reactive Resume v5 GitOps Deployment Script
# Usage: ./deploy.sh [action]
# Actions: deploy, restart, stop, logs, status, update, setup-npm, setup-ollama

set -e

COMPOSE_FILE="docker-compose.yml"
REMOTE_HOST="Vish@192.168.0.250"
SSH_PORT="62000"
REMOTE_PATH="/volume1/docker/rxv5"
SERVICE_NAME="reactive-resume-v5"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

log() {
    echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')] $1${NC}"
}

success() {
    echo -e "${GREEN}✅ $1${NC}"
}

warning() {
    echo -e "${YELLOW}⚠️  $1${NC}"
}

error() {
    echo -e "${RED}❌ $1${NC}"
    exit 1
}

check_prerequisites() {
    if [[ ! -f "$COMPOSE_FILE" ]]; then
        error "docker-compose.yml not found in current directory"
    fi

    if ! ssh -q -p "$SSH_PORT" "$REMOTE_HOST" exit; then
        error "Cannot connect to $REMOTE_HOST"
    fi
}

setup_npm() {
    log "Setting up Nginx Proxy Manager..."

    # Create NPM directories
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "mkdir -p /volume1/homes/Vish/npm/{data,letsencrypt}"

    # Stop existing NPM if running
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker stop nginx-proxy-manager 2>/dev/null || true"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker rm nginx-proxy-manager 2>/dev/null || true"

    # Start NPM with correct port mapping
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker run -d \
        --name nginx-proxy-manager \
        --restart unless-stopped \
        -p 8880:80 \
        -p 8443:443 \
        -p 81:81 \
        -v /volume1/homes/Vish/npm/data:/data \
        -v /volume1/homes/Vish/npm/letsencrypt:/etc/letsencrypt \
        jc21/nginx-proxy-manager:latest"

    success "NPM started on ports 8880/8443"
    warning "Make sure your router forwards port 80→8880 and 443→8443"
}

setup_ollama() {
    log "Setting up Ollama AI model..."

    # Wait for Ollama to be ready
    log "Waiting for Ollama service to start..."
    sleep 30

    # Pull the required model
    log "Pulling llama3.2:3b model (this may take a while)..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b" || {
        warning "Failed to pull model automatically. You can pull it manually later with:"
        warning "docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b"
    }

    success "Ollama setup complete"
}

deploy() {
    log "Deploying $SERVICE_NAME to $REMOTE_HOST..."

    # Create required directories
    log "Creating required directories..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "mkdir -p $REMOTE_PATH/{db,seaweedfs,ollama}"

    # Copy compose file
    log "Copying docker-compose.yml to $REMOTE_HOST:$REMOTE_PATH/"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cat > $REMOTE_PATH/docker-compose.yml" < "$COMPOSE_FILE"

    # Deploy services
    log "Starting services..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose up -d"

    # Wait for services to be healthy
    log "Waiting for services to be healthy..."
    sleep 30

    # Setup Ollama model
    setup_ollama

    # Check status
    if ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps | grep -q 'Resume.*V5'"; then
        success "$SERVICE_NAME deployed successfully!"
        log "Local access: http://192.168.0.250:9751"
        log "External access: https://rx.vish.gg"
        log "Ollama API: http://192.168.0.250:11434"
        warning "Make sure NPM is configured for external access"
    else
        warning "Services started but may not be fully healthy yet. Check logs with: ./deploy.sh logs"
    fi
}

restart() {
    log "Restarting $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose restart"
    success "$SERVICE_NAME restarted!"
}

stop() {
    log "Stopping $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose down"
    success "$SERVICE_NAME stopped!"
}

logs() {
    log "Showing logs for Resume-ACCESS-V5..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker logs Resume-ACCESS-V5 --tail 50 -f"
}

status() {
    log "Checking status of $SERVICE_NAME services..."
    echo
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' | grep -E 'Resume.*V5|NAMES'"
    echo

    # Check if application is responding
    if curl -s -f http://192.168.0.250:9751 > /dev/null; then
        success "Application is responding at http://192.168.0.250:9751"
    else
        warning "Application may not be responding"
    fi
}

update() {
    log "Updating $SERVICE_NAME (pull latest images and redeploy)..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose pull"
    deploy
}

# Main script logic
case "${1:-deploy}" in
    deploy)
        check_prerequisites
        deploy
        ;;
    restart)
        check_prerequisites
        restart
        ;;
    stop)
        check_prerequisites
        stop
        ;;
    logs)
        check_prerequisites
        logs
        ;;
    status)
        check_prerequisites
        status
        ;;
    update)
        check_prerequisites
        update
        ;;
    setup-npm)
        check_prerequisites
        setup_npm
        ;;
    setup-ollama)
        check_prerequisites
        setup_ollama
        ;;
    *)
        echo "Usage: $0 [deploy|restart|stop|logs|status|update|setup-npm|setup-ollama]"
        echo
        echo "Commands:"
        echo "  deploy       - Deploy/update the service (default)"
        echo "  restart      - Restart all services"
        echo "  stop         - Stop all services"
        echo "  logs         - Show application logs"
        echo "  status       - Show service status"
        echo "  update       - Pull latest images and redeploy"
        echo "  setup-npm    - Setup Nginx Proxy Manager"
        echo "  setup-ollama - Setup Ollama AI model"
        exit 1
        ;;
esac
158
hosts/synology/calypso/reactive_resume_v5/docker-compose.yml
Normal file
@@ -0,0 +1,158 @@
# Reactive Resume v5 - Upgraded from v4 with same configuration values
# Docs: https://docs.rxresu.me/self-hosting/docker

services:
  db:
    image: postgres:18
    container_name: Resume-DB-V5
    hostname: resume-db
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "resume", "-U", "resumeuser"]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - /volume1/docker/rxv5/db:/var/lib/postgresql:rw
    environment:
      POSTGRES_DB: resume
      POSTGRES_USER: resumeuser
      POSTGRES_PASSWORD: "REDACTED_PASSWORD"
    restart: unless-stopped

  browserless:
    image: ghcr.io/browserless/chromium:latest
    container_name: Resume-BROWSERLESS-V5
    ports:
      - "4000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/pressure?token=1234567890"]
      interval: 10s
      timeout: 5s
      retries: 10
    environment:
      QUEUED: 30
      HEALTH: true
      CONCURRENT: 20
      TOKEN: 1234567890
    restart: unless-stopped

  seaweedfs:
    image: chrislusf/seaweedfs:latest
    container_name: Resume-SEAWEEDFS-V5
    ports:
      - "9753:8333" # S3 API port (same as v4 MinIO)
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://localhost:8888"]
      start_period: 10s
      interval: 30s
      timeout: 10s
      retries: 3
    command: server -s3 -filer -dir=/data -ip=0.0.0.0
    environment:
      AWS_ACCESS_KEY_ID: seaweedfs
      AWS_SECRET_ACCESS_KEY: seaweedfs
    volumes:
      - /volume1/docker/rxv5/seaweedfs:/data:rw
    restart: unless-stopped

  seaweedfs-create-bucket:
    image: quay.io/minio/mc:latest
    container_name: Resume-BUCKET-V5
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set seaweedfs http://seaweedfs:8333 seaweedfs seaweedfs;
      mc mb seaweedfs/reactive-resume;
      exit 0;
      "
    depends_on:
      seaweedfs:
        condition: service_healthy
    restart: on-failure:5

  ollama:
    image: ollama/ollama:latest
    container_name: Resume-OLLAMA-V5
    ports:
      - "11434:11434"
    volumes:
      - /volume1/docker/rxv5/ollama:/root/.ollama:rw
    environment:
      OLLAMA_HOST: "0.0.0.0"
    restart: unless-stopped
    # Uncomment if you have GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

  resume:
    image: amruthpillai/reactive-resume:v5
    container_name: Resume-ACCESS-V5
    hostname: resume
    security_opt:
      - no-new-privileges:true
    ports:
      - "9751:3000" # Main application port (same as v4)
    environment:
      # --- Server ---
      PORT: 3000
      TZ: "America/Chicago"
      NODE_ENV: production
      APP_URL: "https://rx.vish.gg"
      PRINTER_APP_URL: "http://resume:3000"

      # --- Database ---
      DATABASE_URL: "postgresql://resumeuser:REDACTED_PASSWORD@resume-db:5432/resume"

      # --- Authentication ---
      # Using same secret as v4 for consistency
      AUTH_SECRET: "d5c3e165dafd2d82bf84acacREDACTED_GITEA_TOKEN"

      # --- Printer (v5 uses WebSocket) ---
      PRINTER_ENDPOINT: "ws://browserless:3000?token=1234567890"

      # --- Storage (S3 - SeaweedFS) ---
      S3_ACCESS_KEY_ID: "seaweedfs"
      S3_SECRET_ACCESS_KEY: "seaweedfs"
      S3_ENDPOINT: "http://seaweedfs:8333"
      S3_BUCKET: "reactive-resume"
      S3_FORCE_PATH_STYLE: "true"
      STORAGE_USE_SSL: "false"

      # --- Email (SMTP) - Same as v4 ---
      SMTP_HOST: "smtp.gmail.com"
      SMTP_PORT: "465"
      SMTP_USER: "your-email@example.com"
      SMTP_PASS: "REDACTED_PASSWORD" # pragma: allowlist secret
      SMTP_FROM: "your-email@example.com"
      SMTP_SECURE: "true"

      # --- OAuth / SSO (Authentik) ---
      OAUTH_PROVIDER_NAME: "Authentik"
      OAUTH_CLIENT_ID: "REDACTED_CLIENT_ID"
      OAUTH_CLIENT_SECRET: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
      OAUTH_DISCOVERY_URL: "https://sso.vish.gg/application/o/reactive-resume/.well-known/openid-configuration"

      # --- Feature Flags ---
      FLAG_DISABLE_SIGNUPS: "false"
      FLAG_DISABLE_EMAIL_AUTH: "false"

      # --- AI Integration (Olares Ollama) ---
      # Configured via Settings UI → AI → OpenAI-compatible provider
      # Points to Olares RTX 5090 GPU inference (qwen3:32b (dense 32B))
      OPENAI_API_KEY: "dummy" # pragma: allowlist secret
      OPENAI_BASE_URL: "http://192.168.0.145:31434/v1"
      OPENAI_MODEL: "qwen3:32b"

    depends_on:
      db:
        condition: service_healthy
      seaweedfs:
        condition: service_healthy
    restart: unless-stopped
43
hosts/synology/calypso/retro-site.yaml
Normal file
@@ -0,0 +1,43 @@
version: '3.9'

# retro.vish.gg - Cyberpunk iPod Zone
# Clones Vish/retro_site dist/ on startup and serves it via nginx.
#
# Auto-deploy: pushes to Vish/retro_site trigger retro-webhook (retro-webhook/)
# which runs `docker exec` to refresh files without restarting this container.
#
# Manual redeploy: docker rm -f retro-site && docker compose up -d

services:
  retro-site:
    image: nginx:alpine
    container_name: retro-site
    restart: unless-stopped
    ports:
      - '8025:80'
    volumes:
      - site-data:/usr/share/nginx/html
    environment:
      # GIT_TOKEN is injected by Portainer at deploy time via portainer-deploy.yml
      # Set it in the Portainer stack env vars - never hardcode here
      - GIT_TOKEN=${GIT_TOKEN}
    entrypoint:
      - sh
      - -c
      - |
        apk add --no-cache git
        rm -rf /usr/share/nginx/html/*
        git clone --depth 1 https://${GIT_TOKEN}@git.vish.gg/Vish/retro_site.git /tmp/site
        cp -r /tmp/site/dist/* /usr/share/nginx/html/
        cp /tmp/site/nginx.conf /etc/nginx/conf.d/default.conf
        rm -rf /tmp/site
        nginx -g 'daemon off;'
    healthcheck:
      test: ['CMD', 'wget', '-q', '--spider', 'http://localhost/']
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

volumes:
  site-data:
15
hosts/synology/calypso/retro-webhook/deploy.sh
Normal file
@@ -0,0 +1,15 @@
#!/bin/sh
# Deploy script for retro.vish.gg
# Runs inside the retro-webhook container via adnanh/webhook
# Clones the latest retro_site repo into the running nginx container and reloads nginx.
set -e

echo "[deploy] Starting retro-site update $(date)"

docker exec retro-site sh -c "
  rm -rf /tmp/deploy &&
  git clone --depth 1 https://REDACTED_TOKEN@git.vish.gg/Vish/retro_site.git /tmp/deploy &&
  cp -r /tmp/deploy/dist/* /usr/share/nginx/html/ &&
  cp /tmp/deploy/nginx.conf /etc/nginx/conf.d/default.conf &&
  nginx -s reload &&
  rm -rf /tmp/deploy &&
  echo '[deploy] Done'
"
35
hosts/synology/calypso/retro-webhook/docker-compose.yaml
Normal file
@@ -0,0 +1,35 @@
# retro-webhook - Auto-deploy listener for retro.vish.gg
#
# Receives Gitea push webhooks and updates the retro-site container
# in-place via `docker exec` — no container restart required.
#
# Deploy pipeline:
#   git push Vish/retro_site
#     → Gitea webhook #19 → POST http://100.103.48.78:8027/hooks/retro-site-deploy
#     → deploy.sh: docker exec retro-site (git clone + cp dist/ + nginx reload)
#     → site live in ~9s
#
# Config files must exist on the host before starting:
#   /volume1/docker/retro-webhook/hooks.json (see hooks.json in this directory)
#   /volume1/docker/retro-webhook/deploy.sh  (see deploy.sh in this directory)
#
# Setup:
#   mkdir -p /volume1/docker/retro-webhook
#   cp hooks.json deploy.sh /volume1/docker/retro-webhook/
#   chmod +x /volume1/docker/retro-webhook/deploy.sh
#   docker compose -f docker-compose.yaml up -d

services:
  retro-webhook:
    image: almir/webhook
    container_name: retro-webhook
    restart: unless-stopped
    user: root
    ports:
      - '8027:9000'
    volumes:
      - /volume1/docker/retro-webhook:/config:ro
      - /var/run/docker.sock:/var/run/docker.sock
      # Synology docker binary is not in PATH; bind-mount it directly
      - /var/packages/REDACTED_APP_PASSWORD/target/usr/bin/docker:/usr/local/bin/docker:ro
    command: ["-verbose", "-hooks=/config/hooks.json", "-hotreload"]
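The pipeline above can also be exercised by hand to test the listener without pushing a commit. A minimal sketch, assuming the host/port/hook values from the comments above; the `trigger_hook` helper name is mine:

```shell
# Hedged sketch: POST to the webhook listener to trigger a redeploy manually,
# the same request Gitea sends on push. trigger_hook() is a hypothetical helper.
trigger_hook() {
  host="$1" port="$2" hook="$3"
  # -f: fail on HTTP errors; -m 5: give up after 5s if the listener is down
  curl -fsS -m 5 -X POST "http://${host}:${port}/hooks/${hook}"
}

# Example (values from the pipeline comment above):
# trigger_hook 100.103.48.78 8027 retro-site-deploy
```

A successful call returns the hook's `response-message` ("Deploy triggered"); a connection failure usually means the container is down or the port mapping changed.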
8
hosts/synology/calypso/retro-webhook/hooks.json
Normal file
@@ -0,0 +1,8 @@
[
  {
    "id": "retro-site-deploy",
    "execute-command": "/config/deploy.sh",
    "command-working-directory": "/",
    "response-message": "Deploy triggered\n"
  }
]
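Because the listener runs with `-hotreload`, a malformed hooks.json is picked up immediately and can silently break the endpoint; validating the file before copying it to /volume1/docker/retro-webhook is cheap. A sketch, using python3's stdlib parser as a stand-in for any strict JSON tool:

```shell
# Hedged sketch: validate a hooks file before deploying it.
# python3 -m json.tool exits non-zero on any JSON syntax error.
validate_hooks() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

# Example: a well-formed single-hook file passes validation.
printf '[{"id": "retro-site-deploy", "execute-command": "/config/deploy.sh"}]' > /tmp/hooks-test.json
if validate_hooks /tmp/hooks-test.json; then echo "hooks.json OK"; fi
```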
41
hosts/synology/calypso/rustdesk.yaml
Normal file
@@ -0,0 +1,41 @@
# Rustdesk Server - Self-hosted remote desktop
# Ports:
#   - 21115: NAT type test (hbbs)
#   - 21116: ID registration/heartbeat (UDP) and hole punching (TCP) (hbbs)
#   - 21117: Relay (hbbr)
#   - 21118, 21119: WebSocket (web clients)

networks:
  rustdesk-net:
    external: false

services:
  hbbs:
    container_name: Rustdesk-HBBS
    image: rustdesk/rustdesk-server
    command: hbbs -r 100.103.48.78:21117
    ports:
      - "21115:21115"
      - "21116:21116"
      - "21116:21116/udp"
      - "21118:21118"
    volumes:
      - /volume1/docker/rustdeskhbbs:/root:rw
    networks:
      - rustdesk-net
    depends_on:
      - hbbr
    restart: on-failure:5

  hbbr:
    container_name: Rustdesk-HBBR
    image: rustdesk/rustdesk-server
    command: hbbr
    ports:
      - "21117:21117"
      - "21119:21119"
    volumes:
      - /volume1/docker/rustdeskhbbr:/root:rw
    networks:
      - rustdesk-net
    restart: on-failure:5
24
hosts/synology/calypso/scrutiny-collector.yaml
Normal file
@@ -0,0 +1,24 @@
# Scrutiny Collector — Calypso (Synology DS723+, 2-bay)
#
# Ships SMART data to the hub on homelab-vm.
# DS723+ has 2 bays (DSM exposes them as /dev/sata1, /dev/sata2 below).
# Add more devices if using a DX517 expansion unit.
#
# privileged: true required on DSM.
# Hub: http://100.67.40.126:8090

services:
  scrutiny-collector:
    image: ghcr.io/analogj/scrutiny:master-collector
    container_name: scrutiny-collector
    privileged: true
    volumes:
      - /run/udev:/run/udev:ro
    devices:
      - /dev/sata1
      - /dev/sata2
      - /dev/nvme0n1
      - /dev/nvme1n1
    environment:
      COLLECTOR_API_ENDPOINT: "http://100.67.40.126:8090"
    restart: unless-stopped
102
hosts/synology/calypso/seafile-new.yaml
Normal file
@@ -0,0 +1,102 @@
# Seafile - File sync
# Port: 8611 (web), 8612 (webdav)
# File sync and share with versioning
# Updated: sf.vish.gg + WebDAV on port 8612

services:
  db:
    image: mariadb:11.4-noble
    container_name: Seafile-DB
    hostname: seafile-db
    security_opt:
      - no-new-privileges:false
    volumes:
      - /volume1/docker/seafile/db:/var/lib/mysql:rw
    environment:
      MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
      MYSQL_DATABASE: seafile_db
      MYSQL_USER: seafileuser
      MYSQL_PASSWORD: "REDACTED_PASSWORD"
      TZ: America/Los_Angeles
    restart: on-failure:5

  cache:
    image: memcached:1.6
    entrypoint: memcached -m 256
    container_name: Seafile-CACHE
    hostname: memcached
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    restart: on-failure:5

  redis:
    image: redis
    container_name: Seafile-REDIS
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass REDACTED_PASSWORD
    hostname: redis
    security_opt:
      - no-new-privileges:true
    read_only: false
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/seafile/redis:/data:rw
    environment:
      TZ: America/Los_Angeles
    restart: on-failure:5

  seafile:
    image: seafileltd/seafile-mc:13.0-latest
    container_name: Seafile
    user: 0:0
    hostname: seafile
    security_opt:
      - no-new-privileges:false
    healthcheck:
      test: ["CMD-SHELL", "curl -fs --max-time 10 -H 'Host: sf.vish.gg' http://localhost/ -o /dev/null"]
    volumes:
      - /volume1/docker/seafile/data:/shared:rw
    ports:
      - 8611:80
      - 8612:8080
    environment:
      INIT_SEAFILE_MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
      SEAFILE_MYSQL_DB_HOST: seafile-db
      SEAFILE_MYSQL_DB_USER: seafileuser
      SEAFILE_MYSQL_DB_PORT: 3306
      SEAFILE_MYSQL_DB_PASSWORD: "REDACTED_PASSWORD"
      SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: seafile_db
      SEAFILE_MYSQL_DB_CCNET_DB_NAME: ccnet_db
      SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: seahub_db
      CACHE_PROVIDER: redis
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: "REDACTED_PASSWORD"
      TIME_ZONE: America/Los_Angeles
      SEAFILE_VOLUME: /opt/seafile-data
      SEAFILE_MYSQL_VOLUME: /opt/seafile-mysql/db
      INIT_SEAFILE_ADMIN_EMAIL: your-email@example.com
      INIT_SEAFILE_ADMIN_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
      JWT_PRIVATE_KEY: "REDACTED_JWT_PRIVATE_KEY"
      SEADOC_VOLUME: /opt/seadoc-data
      SEADOC_IMAGE: seafileltd/sdoc-server:2.0-latest
      ENABLE_SEADOC: true
      SEADOC_SERVER_URL: https://sf.vish.gg/sdoc-server
      SEAFILE_SERVER_HOSTNAME: sf.vish.gg
      SEAFILE_SERVER_PROTOCOL: https
      FORCE_HTTPS_IN_CONF: true
      SEAFILE_SERVER_LETSENCRYPT: false
    depends_on:
      db:
        condition: service_started
      cache:
        condition: service_started
      redis:
        condition: service_started
    restart: on-failure:5
20
hosts/synology/calypso/seafile-oauth-config.py
Normal file
@@ -0,0 +1,20 @@
# Authentik OAuth2 Configuration for Seafile
# Append this to /shared/seafile/conf/seahub_settings.py on Calypso
# After adding, restart Seafile container: docker restart Seafile
#
# This keeps local login working while adding "Sign in with Authentik" button

ENABLE_OAUTH = True
OAUTH_ENABLE_INSECURE_TRANSPORT = False
OAUTH_CLIENT_ID = "REDACTED_CLIENT_ID"
OAUTH_CLIENT_SECRET = "REDACTED_CLIENT_SECRET"
OAUTH_REDIRECT_URL = "https://sf.vish.gg/oauth/callback/"
OAUTH_PROVIDER_DOMAIN = "sso.vish.gg"
OAUTH_AUTHORIZATION_URL = "https://sso.vish.gg/application/o/authorize/"
OAUTH_TOKEN_URL = "https://sso.vish.gg/application/o/token/"
OAUTH_USER_INFO_URL = "https://sso.vish.gg/application/o/userinfo/"
OAUTH_SCOPE = ["openid", "profile", "email"]
OAUTH_ATTRIBUTE_MAP = {
    "email": (True, "email"),
    "name": (False, "name"),
}
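Applying the block from the host can be scripted. A sketch, assuming the container's /shared volume maps to /volume1/docker/seafile/data (per the compose file), so seahub_settings.py lives under that path; the `append_config` helper and its idempotence guard are mine:

```shell
# Hedged sketch: append the OAuth block to seahub_settings.py exactly once,
# then restart the container so Seahub picks it up.
append_config() {
  src="$1" dst="$2"
  # idempotence guard: skip if the block is already present
  grep -q '^ENABLE_OAUTH' "$dst" 2>/dev/null || cat "$src" >> "$dst"
}

# Example usage on Calypso (paths assumed from the compose volume mapping):
# append_config seafile-oauth-config.py /volume1/docker/seafile/data/seafile/conf/seahub_settings.py
# docker restart Seafile
```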
116
hosts/synology/calypso/seafile-server.yaml
Normal file
@@ -0,0 +1,116 @@
# Seafile - File sync
# Port: 8611 (web)
# File sync and share with versioning

services:
  db:
    image: mariadb:11.4-noble # LTS (Long-Term Support) until May 29, 2029
    container_name: Seafile-DB
    hostname: seafile-db
    security_opt:
      - no-new-privileges:false
    volumes:
      - /volume1/docker/seafile/db:/var/lib/mysql:rw
    environment:
      MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
      MYSQL_DATABASE: seafile_db
      MYSQL_USER: seafileuser
      MYSQL_PASSWORD: "REDACTED_PASSWORD"
      TZ: America/Los_Angeles
    restart: on-failure:5

  cache:
    image: memcached:1.6
    entrypoint: memcached -m 256
    container_name: Seafile-CACHE
    hostname: memcached
    security_opt:
      - no-new-privileges:true
    read_only: true
    user: 1026:100
    restart: on-failure:5

  redis:
    image: redis
    container_name: Seafile-REDIS
    command:
      - /bin/sh
      - -c
      - redis-server --requirepass REDACTED_PASSWORD
    hostname: redis
    security_opt:
      - no-new-privileges:true
    read_only: false
    user: 1026:100
    healthcheck:
      test: ["CMD-SHELL", "redis-cli ping || exit 1"]
    volumes:
      - /volume1/docker/seafile/redis:/data:rw
    environment:
      TZ: America/Los_Angeles
    restart: on-failure:5

  seafile:
    image: seafileltd/seafile-mc:13.0-latest
    container_name: Seafile
    user: 0:0
    hostname: seafile
    security_opt:
      - no-new-privileges:false
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost
    volumes:
      - /volume1/docker/seafile/data:/shared:rw
    ports:
      - 8611:80
    environment:
      INIT_SEAFILE_MYSQL_ROOT_PASSWORD: "REDACTED_PASSWORD"
      SEAFILE_MYSQL_DB_HOST: seafile-db
      SEAFILE_MYSQL_DB_USER: seafileuser
      SEAFILE_MYSQL_DB_PORT: 3306
      SEAFILE_MYSQL_DB_PASSWORD: "REDACTED_PASSWORD"
      SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: seafile_db
      SEAFILE_MYSQL_DB_CCNET_DB_NAME: ccnet_db
      SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: seahub_db
      CACHE_PROVIDER: redis
      REDIS_HOST: redis
      REDIS_PORT: 6379
      REDIS_PASSWORD: "REDACTED_PASSWORD"
      TIME_ZONE: America/Los_Angeles
      SEAFILE_VOLUME: /opt/seafile-data
      SEAFILE_MYSQL_VOLUME: /opt/seafile-mysql/db
      INIT_SEAFILE_ADMIN_EMAIL: your-email@example.com
      INIT_SEAFILE_ADMIN_PASSWORD: "REDACTED_PASSWORD" # pragma: allowlist secret
      JWT_PRIVATE_KEY: "REDACTED_JWT_PRIVATE_KEY"
      SEADOC_VOLUME: /opt/seadoc-data
      SEADOC_IMAGE: seafileltd/sdoc-server:2.0-latest
      ENABLE_SEADOC: true
      SEADOC_SERVER_URL: https://sf.vish.gg/sdoc-server
      SEAFILE_SERVER_HOSTNAME: sf.vish.gg
      SEAFILE_SERVER_PROTOCOL: https
      FORCE_HTTPS_IN_CONF: true
      SEAFILE_SERVER_LETSENCRYPT: false
      # Authentik OAuth2 SSO - keeps local login working
      # NOTE: Also add to seahub_settings.py in /shared/seafile/conf/:
      #   ENABLE_OAUTH = True
      #   OAUTH_ENABLE_INSECURE_TRANSPORT = False
      #   OAUTH_CLIENT_ID = "REDACTED_CLIENT_ID"
      #   OAUTH_CLIENT_SECRET = "REDACTED_CLIENT_SECRET"
      #   OAUTH_REDIRECT_URL = "https://sf.vish.gg/oauth/callback/"
      #   OAUTH_PROVIDER_DOMAIN = "sso.vish.gg"
      #   OAUTH_AUTHORIZATION_URL = "https://sso.vish.gg/application/o/authorize/"
      #   OAUTH_TOKEN_URL = "https://sso.vish.gg/application/o/token/"
      #   OAUTH_USER_INFO_URL = "https://sso.vish.gg/application/o/userinfo/"
      #   OAUTH_SCOPE = ["openid", "profile", "email"]
      #   OAUTH_ATTRIBUTE_MAP = {
      #       "email": (True, "email"),
      #       "name": (False, "name"),
      #   }
    depends_on:
      db:
        condition: service_started
      cache:
        condition: service_started
      redis:
        condition: service_started
    restart: on-failure:5
25
hosts/synology/calypso/syncthing.yaml
Normal file
@@ -0,0 +1,25 @@
# Syncthing - File synchronization
# Port: 8384 (web), 22000 (sync)
# Continuous file synchronization between devices
services:
  syncthing:
    image: ghcr.io/linuxserver/syncthing
    container_name: syncthing
    ports:
      - 8384:8384
      - 22000:22000/tcp
      - 22000:22000/udp
      - 21027:21027/udp
    environment:
      - PUID=1026
      - PGID=100
      - TZ=America/Los_Angeles
      - DOCKER_MODS=ghcr.io/themepark-dev/theme.park:syncthing
      - TP_SCHEME=http
      - TP_DOMAIN=192.168.0.200:8580
      - TP_THEME=dracula
    volumes:
      - /volume1/docker/syncthing/config:/config
      - /volume1/docker/syncthing/data1:/data1
      - /volume1/docker/syncthing/data2:/data2
    restart: unless-stopped
37
hosts/synology/calypso/tdarr-node/docker-compose.yaml
Normal file
@@ -0,0 +1,37 @@
# Tdarr Node - Calypso-CPU (DS723+ CPU-only transcoding)
# Runs on Synology DS723+ (calypso at 192.168.0.250)
# Connects to Tdarr Server on Synology (atlantis) at 192.168.0.200
#
# Hardware: AMD Ryzen R1600 (2 cores / 4 threads, no hardware transcoding)
# Use case: CPU-based transcoding to help with queue processing
#
# NFS mounts required (created via /usr/local/etc/rc.d/tdarr-mounts.sh):
#   /mnt/atlantis_media -> 192.168.0.200:/volume1/data/media
#   /mnt/atlantis_cache -> 192.168.0.200:/volume3/usenet/tdarr_cache
#
# Note: Both /temp and /cache must be mounted to the same cache directory
# to avoid path mismatch errors during file operations.

services:
  tdarr-node:
    image: ghcr.io/haveagitgat/tdarr_node@sha256:dc23becc667f77d2489b1042REDACTED_GITEA_TOKEN # v2.67.01 - pinned to match server
    container_name: tdarr-node-calypso
    labels:
      - com.centurylinklabs.watchtower.enable=false
    environment:
      - PUID=1029
      - PGID=100
      - TZ=America/Los_Angeles
      - UMASK=022
      - nodeName=Calypso
      - serverIP=192.168.0.200
      - serverPort=8266
      - inContainer=true
      - ffmpegVersion=6
    volumes:
      - /volume1/docker/tdarr-node/configs:/app/configs
      - /volume1/docker/tdarr-node/logs:/app/logs
      - /mnt/atlantis_media:/media
      - /mnt/atlantis_cache:/temp
      - /mnt/atlantis_cache:/cache
    restart: unless-stopped
30
hosts/synology/calypso/tdarr-node/nfs-mounts.sh
Normal file
@@ -0,0 +1,30 @@
#!/bin/bash
# NFS Mount Script for Tdarr Node on Calypso (DS723+)
# Location: /usr/local/etc/rc.d/tdarr-mounts.sh
#
# This script mounts the required NFS shares from Atlantis for Tdarr
# to access media files and the shared cache directory.
#
# Installation:
#   1. Copy this file to /usr/local/etc/rc.d/tdarr-mounts.sh
#   2. chmod +x /usr/local/etc/rc.d/tdarr-mounts.sh
#   3. Reboot or run manually
#
# Note: Synology DSM runs scripts in /usr/local/etc/rc.d/ at boot

# Wait for network to be ready
sleep 30

# Create mount points if they don't exist
mkdir -p /mnt/atlantis_media /mnt/atlantis_cache

# Mount NFS shares from Atlantis (192.168.0.200)
mount -t nfs 192.168.0.200:/volume1/data/media /mnt/atlantis_media -o rw,soft,nfsvers=3
mount -t nfs 192.168.0.200:/volume3/usenet/tdarr_cache /mnt/atlantis_cache -o rw,soft,nfsvers=3

# Verify mounts
if mountpoint -q /mnt/atlantis_media && mountpoint -q /mnt/atlantis_cache; then
  echo "Tdarr NFS mounts successful"
else
  echo "Warning: One or more Tdarr NFS mounts failed"
fi
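The fixed `sleep 30` is only a guess at how long the network takes to come up at boot; a small retry loop is more robust. A sketch of one way to wrap the mount calls (the `retry` helper is mine, not part of the script above):

```shell
#!/bin/bash
# Hedged sketch: retry a command a few times instead of relying on one fixed sleep.
# retry <attempts> <delay-seconds> <command...>
retry() {
  attempts="$1"; delay="$2"; shift 2
  n=1
  until "$@"; do
    [ "$n" -ge "$attempts" ] && return 1
    n=$((n + 1))
    sleep "$delay"
  done
}

# Usage at boot, replacing the bare mount calls above:
# retry 6 10 mount -t nfs 192.168.0.200:/volume1/data/media /mnt/atlantis_media -o rw,soft,nfsvers=3
```

With 6 attempts at 10-second spacing this tolerates roughly the same startup delay as the fixed sleep, but returns as soon as the share is actually reachable.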
37
hosts/synology/calypso/watchtower.yaml
Normal file
@@ -0,0 +1,37 @@
# Watchtower - Container update notifier for Calypso (schedule disabled - GitOps managed)
# Auto-update schedule removed; image updates are handled via Renovate PRs.
# Manual update trigger: POST http://localhost:8080/v1/update
# Header: Authorization: Bearer watchtower-metrics-token

version: '3.8'

services:
  watchtower:
    image: containrrr/watchtower:latest
    container_name: watchtower
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # Core functionality
      - DOCKER_API_VERSION=1.43
      - WATCHTOWER_CLEANUP=true
      - WATCHTOWER_INCLUDE_RESTARTING=true
      - WATCHTOWER_INCLUDE_STOPPED=true
      - WATCHTOWER_REVIVE_STOPPED=false
      - WATCHTOWER_TIMEOUT=10s
      - TZ=America/Los_Angeles

      # Schedule disabled — updates managed via Renovate PRs (GitOps).
      # Enable manual HTTP API updates instead.
      - WATCHTOWER_HTTP_API_UPDATE=true

      # HTTP API for metrics and manual update triggers
      # (no quotes: in list syntax they would become part of the token value)
      - WATCHTOWER_HTTP_API_METRICS=true
      - WATCHTOWER_HTTP_API_TOKEN=REDACTED_HTTP_TOKEN
    restart: unless-stopped
    labels:
      # Exclude watchtower from updating itself
      - "com.centurylinklabs.watchtower.enable=false"
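The manual trigger described in the header comment can be scripted. A minimal sketch; the token must match WATCHTOWER_HTTP_API_TOKEN (redacted above) and the `watchtower_update` helper name is mine:

```shell
# Hedged sketch: trigger a one-off Watchtower update run via its HTTP API.
watchtower_update() {
  host="$1" token="$2"
  curl -fsS -m 10 -X POST \
    -H "Authorization: Bearer ${token}" \
    "http://${host}:8080/v1/update"
}

# Example (token placeholder, per the header comment above):
# watchtower_update localhost watchtower-metrics-token
```

A 401 response means the token does not match; a connection refusal means the container or the 8080 port mapping is down.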
26
hosts/synology/calypso/wireguard-server.yaml
Normal file
@@ -0,0 +1,26 @@
# WireGuard - VPN server
# Port: 51820/udp
# Modern, fast VPN tunnel

version: "3.5"

services:
  wgeasy:
    image: ghcr.io/wg-easy/wg-easy:latest
    network_mode: "bridge"
    container_name: wgeasy
    ports:
      - "51820:51820/udp"
      - "51821:51821"
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv4.ip_forward=1
    volumes:
      - /volume1/docker/wg:/etc/wireguard
    environment:
      - WG_HOST=vishconcord.synology.me
      # no quotes: in list syntax they would become part of the hash value
      - HASH_PASSWORD=REDACTED_PASSWORD
    restart: unless-stopped