Clean up - simple zero-intervention bootstrap

- Removed ansible, compose, docs, scripts, tasks, templates
- Simplified bootstrap.sh for all major distros
- Works on Ubuntu, Debian, Fedora, Rocky, Arch, openSUSE
- Installs Docker, Tailscale, essential tools
- Configures firewall automatically
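The zero-intervention flow hinges on mapping the detected distro to its package manager. A minimal sketch of that dispatch (function name and mapping are illustrative assumptions, not the actual bootstrap.sh code):

```shell
# Hypothetical sketch of the distro-to-package-manager dispatch a script like
# bootstrap.sh performs; the real script's names and branches may differ.
detect_pkg_mgr() {
  # $1 is the ID field from /etc/os-release (e.g. "ubuntu", "arch")
  case "$1" in
    ubuntu|debian|linuxmint)      echo "apt" ;;
    fedora|rocky|almalinux|rhel)  echo "dnf" ;;
    arch|manjaro)                 echo "pacman" ;;
    opensuse*|sles)               echo "zypper" ;;
    *) echo "unsupported" >&2; return 1 ;;
  esac
}

# On a real host: . /etc/os-release && detect_pkg_mgr "$ID"
detect_pkg_mgr "ubuntu"   # -> apt
detect_pkg_mgr "fedora"   # -> dnf
```

Everything after this branch point is just "install Docker, Tailscale, and the tool list with whichever package manager was detected", which is what lets the script run unattended.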

Co-authored-by: openhands <openhands@all-hands.dev>
Author: Vish-hands
Date: 2026-01-10 09:04:07 +00:00
parent 24f2cd64e9
commit cddeee6849
69 changed files with 341 additions and 9743 deletions


@@ -1,74 +0,0 @@
# Synology Arrs Stack Environment Configuration
# Copy this file to .env and customize the values for your setup
# =============================================================================
# USER AND GROUP CONFIGURATION
# =============================================================================
# These should match your dockerlimited user created during setup
# Run 'id dockerlimited' on your Synology to get these values
PUID=1234
PGID=65432
# =============================================================================
# TIMEZONE CONFIGURATION
# =============================================================================
# Set your timezone - see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
TZ=Europe/London
# =============================================================================
# DIRECTORY PATHS
# =============================================================================
# Root directory for your data (media, torrents, etc.)
DATA_ROOT=/volume1/data
# Root directory for Docker container configurations
CONFIG_ROOT=/volume1/docker
# =============================================================================
# NETWORK CONFIGURATION
# =============================================================================
# Network mode for containers (synobridge is recommended for Synology)
NETWORK_MODE=synobridge
# =============================================================================
# PORT CONFIGURATION
# =============================================================================
# Customize ports if you have conflicts with other services
SONARR_PORT=8989
RADARR_PORT=7878
LIDARR_PORT=8686
BAZARR_PORT=6767
PROWLARR_PORT=9696
# =============================================================================
# VPN CONFIGURATION (for docker-compose-vpn.yml)
# =============================================================================
# VPN Provider (nordvpn, expressvpn, surfshark, etc.)
VPN_PROVIDER=nordvpn
# VPN Type (openvpn or wireguard)
VPN_TYPE=openvpn
# VPN Credentials
VPN_USER=your_vpn_username
VPN_PASSWORD=your_vpn_password
# VPN Server Countries (comma-separated)
VPN_COUNTRIES=Netherlands,Germany
# =============================================================================
# OPTIONAL: CUSTOM VOLUME MAPPINGS
# =============================================================================
# Uncomment and modify if you have different volume structures
# Custom media directories
# MOVIES_DIR=${DATA_ROOT}/media/movies
# TV_DIR=${DATA_ROOT}/media/tv
# MUSIC_DIR=${DATA_ROOT}/media/music
# BOOKS_DIR=${DATA_ROOT}/media/books
# Custom download directories
# DOWNLOADS_DIR=${DATA_ROOT}/torrents
# MOVIES_DOWNLOADS=${DOWNLOADS_DIR}/movies
# TV_DOWNLOADS=${DOWNLOADS_DIR}/tv
# MUSIC_DOWNLOADS=${DOWNLOADS_DIR}/music


@@ -1,351 +0,0 @@
# 🚀 Ansible Deployment Guide for *arr Media Stack
This repository contains a complete Ansible playbook to deploy a production-ready media automation stack on any VPS. The playbook has been tested and verified on Ubuntu 22.04 with excellent performance.
## 📋 **What Gets Deployed**
### Core Services
- **Prowlarr** - Indexer management and search aggregation
- **Sonarr** - TV show automation and management
- **Radarr** - Movie automation and management
- **Lidarr** - Music automation and management
- **Whisparr** - Adult content automation (optional)
- **Bazarr** - Subtitle automation and management
- **Jellyseerr** - User request management interface
### Download Clients
- **SABnzbd** - Usenet downloader (via VPN)
- **Deluge** - Torrent downloader (via VPN)
### Media & Analytics
- **Plex** - Media server and streaming platform
- **Tautulli** - Plex analytics and monitoring
### Security & Networking
- **Gluetun** - VPN container for secure downloading
- **Fail2Ban** - Intrusion prevention system
- **UFW Firewall** - Network security
- **Tailscale Integration** - Secure remote access
## 🎯 **Verified Performance**
This playbook is based on a **successfully deployed and tested** stack with:
- **All 16 containers** running and healthy
- **VPN protection** active (IP masking verified)
- **API integrations** working (Prowlarr ↔ Sonarr ↔ SABnzbd)
- **Service connectivity** tested on all endpoints
- **Resource efficiency** on 62GB RAM / 290GB disk VPS
## 🔧 **Prerequisites**
### Target System Requirements
- **OS**: Ubuntu 20.04+ (tested on 22.04)
- **RAM**: Minimum 4GB (8GB+ recommended)
- **Storage**: 50GB+ available space
- **Network**: Public IP with SSH access
- **Architecture**: x86_64
### Control Machine Requirements
- **Ansible**: 2.12+ (tested with 13.0.0)
- **Python**: 3.8+
- **SSH**: Key-based authentication configured
### Required Accounts
- **VPN Provider**: NordVPN, Surfshark, ExpressVPN, etc.
- **Usenet Provider**: Optional but recommended
- **Indexer Access**: NZBgeek, NZBHydra2, etc.
## 🚀 **Quick Start**
### 1. Clone and Setup
```bash
git clone <repository-url>
cd synology-arrs-stack
```
### 2. Configure Inventory
```bash
# Edit your server details
cp inventory/production.yml.example inventory/production.yml
nano inventory/production.yml
```
Example inventory:
```yaml
all:
  hosts:
    production-vps:
      ansible_host: YOUR_VPS_IP_ADDRESS
      ansible_user: root
      ansible_ssh_private_key_file: ~/.ssh/vps_key
      tailscale_ip: YOUR_TAILSCALE_IP  # Your Tailscale IP
```
### 3. Configure Secrets
```bash
# Create encrypted secrets file
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
ansible-vault edit group_vars/all/vault.yml
```
Required secrets:
```yaml
vault_vpn_provider: "nordvpn"
vault_vpn_username: "your_vpn_username"
vault_vpn_password: "your_vpn_password"
```
### 4. Deploy Stack
```bash
# Test connection
ansible all -i inventory/production.yml -m ping
# Deploy the complete stack
ansible-playbook -i inventory/production.yml ansible-deployment.yml --ask-vault-pass
```
### 5. Verify Deployment
```bash
# Check all services
ansible all -i inventory/production.yml -a "docker ps --format 'table {{.Names}}\t{{.Status}}'"
# Test service endpoints
ansible all -i inventory/production.yml -a "curl -s -o /dev/null -w '%{http_code}' http://YOUR_TAILSCALE_IP:9696"
```
## 🔐 **Security Configuration**
### VPN Setup
The stack routes download traffic through a VPN container:
```yaml
# Supported providers (set exactly one)
vault_vpn_provider: "nordvpn" # NordVPN
vault_vpn_provider: "surfshark" # Surfshark
vault_vpn_provider: "expressvpn" # ExpressVPN
vault_vpn_provider: "pia" # Private Internet Access
```
### Firewall Rules
Automatic UFW configuration:
- **SSH**: Port 22 (your IP only)
- **Tailscale**: Full access on Tailscale network
- **Plex**: Port 32400 (public for remote access)
- **All other services**: Tailscale network only
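The rules above correspond roughly to the following UFW commands. This is a hedged sketch: the admin IP is a placeholder, `tailscale0` is the usual Tailscale interface name, and the playbook's actual tasks may differ. The `apply` wrapper just echoes each command so the list can be reviewed; swap in the real `sudo ufw` form to apply it.

```shell
# Review-mode sketch of the automatic UFW configuration described above.
ADMIN_IP="203.0.113.10"        # placeholder: your workstation's IP
apply() { echo "ufw $*"; }     # for real use: apply() { sudo ufw "$@"; }

apply default deny incoming
apply default allow outgoing
apply allow from "$ADMIN_IP" to any port 22 proto tcp  # SSH, your IP only
apply allow in on tailscale0                           # full Tailscale access
apply allow 32400/tcp                                  # Plex, public
apply --force enable
```

All other service ports stay closed to the public internet, matching the "Tailscale network only" policy.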
### Fail2Ban Protection
Automatic intrusion prevention for:
- SSH brute force attacks
- Plex authentication failures
- Web service abuse
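A minimal `jail.local` consistent with these protections might look like the following; this is a hypothetical sketch (only the standard `sshd` jail is shown, and the playbook's actual jail names and thresholds may differ):

```shell
# Write a minimal Fail2Ban override file; deploy with:
#   sudo install -m 644 jail.local /etc/fail2ban/jail.local
#   sudo systemctl restart fail2ban
cat > jail.local <<'EOF'
[DEFAULT]
bantime  = 1h
findtime = 10m
maxretry = 5

[sshd]
enabled = true
EOF
grep -c '^\[' jail.local   # two sections defined
```

Plex and web-service jails would need custom filters on top of this baseline.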
## 📊 **Service Access**
After deployment, access services via Tailscale IP:
| Service | URL | Purpose |
|---------|-----|---------|
| Prowlarr | `http://YOUR_TAILSCALE_IP:9696` | Indexer management |
| Sonarr | `http://YOUR_TAILSCALE_IP:8989` | TV shows |
| Radarr | `http://YOUR_TAILSCALE_IP:7878` | Movies |
| Lidarr | `http://YOUR_TAILSCALE_IP:8686` | Music |
| Whisparr | `http://YOUR_TAILSCALE_IP:6969` | Adult content |
| Bazarr | `http://YOUR_TAILSCALE_IP:6767` | Subtitles |
| Jellyseerr | `http://YOUR_TAILSCALE_IP:5055` | Requests |
| SABnzbd | `http://YOUR_TAILSCALE_IP:8080` | Usenet downloads |
| Deluge | `http://YOUR_TAILSCALE_IP:8081` | Torrent downloads |
| Plex | `http://YOUR_VPS_IP:32400` | Media server |
| Tautulli | `http://YOUR_TAILSCALE_IP:8181` | Plex analytics |
## ⚙️ **Configuration Options**
### Customizable Variables
```yaml
# group_vars/all/main.yml
bind_to_tailscale_only: true # Restrict to Tailscale network
enable_fail2ban: true # Enable intrusion prevention
enable_auto_updates: true # Auto-update containers
backup_enabled: true # Enable automated backups
memory_limits:                   # Resource constraints
  sonarr: "1g"
  radarr: "1g"
  plex: "4g"
```
### Directory Structure
```
/home/docker/
├── compose/ # Docker Compose files
├── prowlarr/ # Prowlarr configuration
├── sonarr/ # Sonarr configuration
├── radarr/ # Radarr configuration
├── lidarr/ # Lidarr configuration
├── whisparr/ # Whisparr configuration
├── bazarr/ # Bazarr configuration
├── jellyseerr/ # Jellyseerr configuration
├── sabnzbd/ # SABnzbd configuration
├── deluge/ # Deluge configuration
├── plex/ # Plex configuration
├── tautulli/ # Tautulli configuration
├── gluetun/ # VPN configuration
├── media/ # Media library
│ ├── movies/ # Movie files
│ ├── tv/ # TV show files
│ ├── music/ # Music files
│ └── adult/ # Adult content (optional)
└── downloads/ # Download staging area
├── complete/ # Completed downloads
└── incomplete/ # In-progress downloads
```
## 🔧 **Post-Deployment Setup**
### 1. Configure Indexers in Prowlarr
1. Access Prowlarr at `http://YOUR_TAILSCALE_IP:9696`
2. Add your indexers (NZBgeek, NZBHydra2, etc.)
3. Configure API keys and test connections
### 2. Connect Applications to Prowlarr
1. In Prowlarr, go to Settings → Apps
2. Add Sonarr, Radarr, Lidarr, Whisparr, Bazarr
3. Use internal Docker network URLs:
- Sonarr: `http://sonarr:8989`
- Radarr: `http://radarr:7878`
- etc.
### 3. Configure Download Clients
1. In each *arr app, go to Settings → Download Clients
2. Add SABnzbd: `http://gluetun:8081`
3. Add Deluge: `http://gluetun:8112`
### 4. Setup Plex Libraries
1. Access Plex at `http://YOUR_VPS_IP:32400`
2. Add libraries pointing to:
- Movies: `/media/movies`
- TV Shows: `/media/tv`
- Music: `/media/music`
### 5. Configure Jellyseerr
1. Access Jellyseerr at `http://YOUR_TAILSCALE_IP:5055`
2. Connect to Plex server
3. Configure Sonarr and Radarr connections
## 🔍 **Troubleshooting**
### Check Service Status
```bash
# View all container status
ansible all -i inventory/production.yml -a "docker ps"
# Check specific service logs
ansible all -i inventory/production.yml -a "docker logs sonarr --tail 50"
# Test VPN connection
ansible all -i inventory/production.yml -a "docker exec gluetun curl -s ifconfig.me"
```
### Common Issues
**VPN Not Working**
```bash
# Check VPN container logs
docker logs gluetun
# Verify VPN credentials in vault.yml
ansible-vault edit group_vars/all/vault.yml
```
**Services Not Accessible**
```bash
# Check Tailscale status
sudo tailscale status
# Verify firewall rules
sudo ufw status verbose
```
**Download Issues**
```bash
# Check download client connectivity
docker exec sonarr curl -s "http://gluetun:8081/api?mode=version"
# Verify indexer connections in Prowlarr
```
## 📈 **Monitoring & Maintenance**
### Automated Monitoring
The playbook includes monitoring scripts:
- **Health checks**: Every 5 minutes
- **Resource monitoring**: CPU, memory, disk usage
- **VPN connectivity**: Continuous monitoring
- **Service availability**: HTTP endpoint checks
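The HTTP endpoint checks can be sketched like this; the service list, ports, and script path are illustrative, not the playbook's actual monitoring script:

```shell
# Hedged sketch of a per-service health probe run every 5 minutes from cron.
check() {  # usage: check NAME URL -> prints "NAME ok" or "NAME down"
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$2" 2>/dev/null)
  case "$code" in
    2*|3*|401) echo "$1 ok" ;;   # 401 = service is up but wants auth
    *)         echo "$1 down" ;;
  esac
}

check sonarr "http://localhost:8989"
check radarr "http://localhost:7878"
# crontab entry (assumed path):
# */5 * * * * /usr/local/bin/arr-health.sh >> /var/log/arr-health.log
```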
### Backup Strategy
Automated backups include:
- **Configuration files**: Daily backup of all service configs
- **Database exports**: Sonarr, Radarr, Lidarr databases
- **Retention**: 30 days (configurable)
- **Encryption**: AES-256 encrypted backups
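An illustrative daily backup step consistent with this strategy (tar the configs, AES-256 encrypt, prune after 30 days). Paths use temp directories here so the sketch runs anywhere; the encryption step is shown commented because passphrase handling is deployment-specific:

```shell
# Sketch only: swap SRC/DEST and the passphrase file for real paths.
SRC=$(mktemp -d); DEST=$(mktemp -d)
mkdir -p "$SRC/sonarr"; echo cfg > "$SRC/sonarr/config.xml"  # stand-in config
STAMP=$(date +%F)
tar -czf "$DEST/configs-$STAMP.tar.gz" -C "$SRC" sonarr
# Real script would encrypt, e.g.:
#   gpg --batch --symmetric --cipher-algo AES256 \
#       --passphrase-file /root/.backup-pass "$DEST/configs-$STAMP.tar.gz"
# 30-day retention sweep:
find "$DEST" -name 'configs-*.tar.gz*' -mtime +30 -delete
ls "$DEST"   # today's archive survives the retention sweep
```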
### Update Management
- **Container updates**: Weekly automatic updates
- **Security patches**: Automatic OS security updates
- **Configuration preservation**: Configs preserved during updates
## 🎯 **Performance Optimization**
### Resource Allocation
Tested optimal settings for VPS deployment:
```yaml
memory_limits:
  plex: "4g"         # Media transcoding
  sonarr: "1g"       # TV processing
  radarr: "1g"       # Movie processing
  sabnzbd: "1g"      # Download processing
  lidarr: "512m"     # Music processing
  prowlarr: "512m"   # Indexer management
  jellyseerr: "512m" # Request handling
  deluge: "512m"     # Torrent processing
  bazarr: "256m"     # Subtitle processing
  tautulli: "256m"   # Analytics
  gluetun: "256m"    # VPN routing
```
### Storage Optimization
- **SSD recommended** for configuration and databases
- **HDD acceptable** for media storage
- **Separate volumes** for media vs. system data
- **Automatic cleanup** of old downloads and logs
## 🤝 **Contributing**
### Testing New Features
```bash
# Test on staging environment
ansible-playbook -i inventory/staging.yml ansible-deployment.yml --check
# Validate configuration
ansible-playbook -i inventory/production.yml ansible-deployment.yml --syntax-check
```
### Reporting Issues
Please include:
- Ansible version and OS details
- Full error output with `-vvv` flag
- Relevant service logs
- System resource information
## 📄 **License**
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 **Acknowledgments**
- **Dr. Frankenstein's Guide** - Original VPS deployment methodology
- **LinuxServer.io** - Excellent Docker images for all services
- **Servarr Community** - Outstanding *arr application ecosystem
- **Tailscale** - Secure networking solution
---
**🎉 Ready to deploy your own media automation empire? Let's get started!**


@@ -1,283 +0,0 @@
# 📋 Changelog - *arr Media Stack
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [2.0.0] - 2024-11-25 - **🚀 Production-Ready Ansible Deployment**
### 🎉 **Major Features Added**
#### **Bootstrap Script & Ansible Automation**
- **One-command deployment** from fresh Ubuntu/Debian install via `bootstrap.sh`
- **Complete Ansible playbook** for infrastructure automation (`ansible-deployment.yml`)
- **Production-ready templates** for all services with Jinja2 templating
- **Vault-encrypted secrets** management for secure credential storage
- **Automated deployment script** (`deploy.sh`) with health verification
- **System dependency installation** (Docker, Ansible, Python, monitoring tools)
#### **Enhanced Security & Networking**
- **Tailscale VPN integration** for zero-trust network access
- **UFW firewall configuration** with minimal attack surface
- **Fail2Ban intrusion prevention** system with custom rules
- **VPN-routed downloads** via Gluetun container for privacy
- **Container security hardening** with no-new-privileges and proper user isolation
#### **Production Verification & Testing**
- **Battle-tested on real VPS** (YOUR_VPS_IP_ADDRESS) with 62GB RAM, 290GB disk
- **All 16 containers verified** running and healthy
- **VPN protection confirmed** (IP masking: VPN_IP_ADDRESS ≠ VPS: YOUR_VPS_IP_ADDRESS)
- **API integrations tested** (Prowlarr ↔ Sonarr ↔ SABnzbd working)
- **Service connectivity verified** on all endpoints with HTTP status checks
- **Resource efficiency optimized** for VPS deployment constraints
#### **Monitoring & Management**
- **Health monitoring system** with automated service checks
- **Resource usage monitoring** and performance tracking
- **Automated backup system** for configurations and databases
- **Service health verification** with API connectivity testing
- **Management aliases** for easy service administration
- **Container monitoring** with ctop and health dashboards
### 🔧 **Technical Improvements**
#### **Service Stack Updates**
- **Prowlarr**: Enhanced indexer management with API integration testing
- **Sonarr**: TV automation with verified API
- **Radarr**: Movie automation with verified API
- **Lidarr**: Music automation and management
- **Whisparr**: Adult content automation (optional deployment)
- **Bazarr**: Subtitle automation and management
- **Jellyseerr**: User request management interface
- **SABnzbd**: Usenet downloader (VPN-protected, verified working)
- **Deluge**: Torrent downloader (VPN-protected)
- **Plex**: Media server with public access option
- **Tautulli**: Plex analytics and monitoring
- **Gluetun**: VPN container for secure downloading
#### **Infrastructure Enhancements**
- **Docker Compose optimization** for VPS resource constraints
- **Network configuration** with proper container communication
- **Storage layout optimization** with efficient directory structure
- **Environment variable management** with secure templating
- **Service dependency management** with proper startup ordering
### 📚 **Documentation Overhaul**
#### **New Documentation Files**
- **[Bootstrap Script](bootstrap.sh)** - Complete fresh OS deployment
- **[Ansible Deployment Guide](ANSIBLE_DEPLOYMENT.md)** - Comprehensive setup documentation
- **[Updated README](README.md)** - Production-focused project overview
- **[Enhanced Changelog](CHANGELOG.md)** - Detailed change tracking
#### **Configuration Templates**
- **[Environment Template](templates/.env.j2)** - Jinja2 service configuration
- **[Vault Template](group_vars/all/vault.yml.example)** - Encrypted secrets management
- **[Inventory Template](inventory/production.yml.example)** - Server configuration
#### **Management & Deployment**
- **[Deployment Script](deploy.sh)** - Automated Ansible deployment with verification
- **Helper aliases** for service management (arr-status, arr-logs, arr-restart, etc.)
- **System monitoring commands** (sysinfo, vpn-status, containers)
### 🛠️ **Bug Fixes & Improvements**
#### **Container & Service Issues**
- **Fixed Watchtower restart loops** with Docker API v1.44 compatibility
- **Resolved permission issues** with proper user/group setup (docker:docker)
- **Improved container health checks** with proper HTTP endpoint testing
- **Enhanced error handling** in deployment and management scripts
#### **Network & Security Issues**
- **Fixed service connectivity** between containers with proper network configuration
- **Resolved VPN routing** for download clients through Gluetun
- **Improved firewall rules** for Tailscale-only access with UFW
- **Enhanced port management** and conflict resolution
#### **Configuration & Deployment Issues**
- **Standardized configuration** across all services with consistent templating
- **Improved secret management** with Ansible Vault encryption
- **Enhanced deployment reliability** with idempotent Ansible tasks
- **Better error reporting** during deployment with detailed logging
### 📊 **Performance & Resource Optimization**
#### **VPS-Specific Optimizations**
- **Memory limits** tuned for typical VPS constraints (4-8GB RAM)
- **CPU allocation** optimized for service priority and resource sharing
- **Storage efficiency** with hard link support and proper directory layout
- **Network optimization** for container-to-container communication
#### **Monitoring & Alerting**
- **Real-time health monitoring** with automated service checks
- **Performance metrics** collection and analysis
- **Resource usage tracking** with alerting capabilities
- **Service availability** monitoring with API endpoint verification
### 🎯 **Deployment Methods**
#### **🚀 Method 1: Bootstrap Script (Recommended for Fresh VPS)**
```bash
curl -sSL https://raw.githubusercontent.com/your-username/arr-suite-template/main/bootstrap.sh | bash
```
- **Fresh OS deployment** from Ubuntu 20.04+ or Debian 11+
- **Automated dependency installation** (Docker, Ansible, Python, monitoring)
- **Complete system configuration** (security, networking, monitoring)
- **One-command setup** with comprehensive verification
#### **⚙️ Method 2: Ansible Deployment (Advanced Users)**
```bash
git clone https://github.com/your-username/arr-suite-template.git
cd arr-suite-template
./deploy.sh
```
- **Infrastructure as code** with Ansible automation
- **Idempotent deployment** with configuration management
- **Health verification** and service testing
- **Customizable configuration** with vault secrets
#### **📖 Method 3: Manual Setup (Educational)**
- **Step-by-step documentation** for learning purposes
- **Troubleshooting guides** for common issues
- **Configuration examples** and best practices
- **Component-by-component** installation guidance
### 🔄 **Migration & Compatibility**
- **Backward compatibility** with existing configurations
- **Automatic data migration** during upgrades
- **Service continuity** maintained during deployment
- **Configuration preservation** for existing installations
### 🎯 **Production Metrics**
- **100% container health** (16/16 containers healthy)
- **Zero downtime deployment** process
- **Secure by default** configuration
- **Production-ready** with monitoring and backups
- **VPS-optimized** resource allocation
---
## [1.0.0] - 2024-11-17 - **Initial Release**
### Added
- Initial release of Synology Arrs Stack
- Complete Docker Compose configuration for Arrs suite
- Support for Sonarr, Radarr, Lidarr, Bazarr, and Prowlarr
- Environment-based configuration with `.env` file
- Automated setup script for directory structure and permissions
- Deployment script with multiple options (standard, VPN, custom)
- Backup and restore functionality
- Comprehensive logging and monitoring scripts
- VPN integration support with GlueTUN
- Individual service compose files for selective deployment
- Health checks for all containers
- Security enhancements (non-root user, no-new-privileges)
- Custom bridge network support (synobridge)
- Comprehensive documentation:
- Setup guide with prerequisites
- Configuration guide for all applications
- Troubleshooting guide with common issues
- VPN setup guide with multiple providers
- Example configurations and templates
- Timezone examples and configuration helpers
### Features
- **Easy Deployment**: One-command deployment with automated setup
- **Flexible Configuration**: Environment-based configuration for easy customization
- **Security First**: Containers run as non-root user with security restrictions
- **VPN Support**: Optional VPN routing for Prowlarr to access blocked indexers
- **Monitoring**: Built-in health checks and logging utilities
- **Backup/Restore**: Automated backup and restore functionality
- **Documentation**: Comprehensive guides for setup, configuration, and troubleshooting
- **Synology Optimized**: Specifically designed for Synology NAS devices
- **Hard Link Support**: Proper directory structure for efficient storage usage
### Technical Details
- Docker Compose version 3.8
- LinuxServer.io container images
- Custom bridge network (synobridge) support
- Environment variable configuration
- Health checks with curl/wget
- Resource monitoring capabilities
- Log aggregation and export
- Automated permission management
### Supported Applications
- **Sonarr** (latest) - TV Show management
- **Radarr** (latest) - Movie management
- **Lidarr** (latest) - Music management
- **Bazarr** (latest) - Subtitle management
- **Prowlarr** (latest) - Indexer management
- **GlueTUN** (latest) - VPN client (optional)
### Supported VPN Providers
- NordVPN
- ExpressVPN
- Surfshark
- ProtonVPN
- Windscribe
- Custom OpenVPN/WireGuard configurations
### Scripts Included
- `setup.sh` - Initial environment and directory setup
- `deploy.sh` - Stack deployment with multiple options
- `backup.sh` - Configuration backup and restore
- `logs.sh` - Log viewing and management
### Documentation
- `README.md` - Project overview and quick start
- `docs/SETUP.md` - Detailed setup instructions
- `docs/CONFIGURATION.md` - Application configuration guide
- `docs/TROUBLESHOOTING.md` - Common issues and solutions
- `docs/VPN_SETUP.md` - VPN integration guide
- `CHANGELOG.md` - Version history and changes
### Configuration Templates
- `.env.example` - Environment configuration template
- `config-templates/timezone-examples.txt` - Timezone reference
- Individual compose files for selective deployment
## [Unreleased]
### Planned Features
- Watchtower integration for automatic updates
- Prometheus metrics export
- Grafana dashboard templates
- Additional VPN provider support
- Reverse proxy configuration examples
- SSL/TLS setup guide
- Performance optimization guide
- Migration scripts from other setups
### Potential Improvements
- Container resource limit recommendations
- Database optimization scripts
- Log rotation configuration
- Notification integration examples
- Custom script examples
- API integration examples
---
## Version History
### Version Numbering
- **Major version** (X.0.0): Breaking changes, major feature additions
- **Minor version** (0.X.0): New features, non-breaking changes
- **Patch version** (0.0.X): Bug fixes, documentation updates
### Release Notes
Each release includes:
- New features and improvements
- Bug fixes and security updates
- Breaking changes (if any)
- Migration instructions (if needed)
- Updated documentation
### Support Policy
- **Current version**: Full support and updates
- **Previous major version**: Security updates only
- **Older versions**: Community support only
For the latest updates and releases, check the [GitHub repository](https://github.com/yourusername/synology-arrs-stack).

README.md

@@ -1,215 +1,112 @@
# 🎬 ARR Suite Template Bootstrap
# Server Bootstrap
> **Complete Media Automation Stack Template** - Production-ready Ansible deployment for VPS
One-command server preparation for media automation stacks.
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Ansible](https://img.shields.io/badge/Ansible-2.9%2B-red.svg)](https://www.ansible.com/)
[![Docker](https://img.shields.io/badge/Docker-20.10%2B-blue.svg)](https://www.docker.com/)
[![Ubuntu](https://img.shields.io/badge/Ubuntu-20.04%2B-orange.svg)](https://ubuntu.com/)
## What It Does
## 🚀 **One-Command Media Server Deployment**
**Zero intervention required.** The bootstrap script automatically:
Deploy a complete, production-ready media automation stack to your VPS in **15-30 minutes** with a single Ansible command.
- ✅ Detects your OS (Ubuntu, Debian, Fedora, Rocky, Arch, openSUSE)
- ✅ Installs Docker and Docker Compose
- ✅ Installs Tailscale for secure remote access
- ✅ Configures firewall (SSH, Plex, Jellyfin ports)
- ✅ Installs essential tools (htop, git, curl, jq, etc.)
- ✅ Creates helpful shell aliases
### **🎯 What You Get**
**Run one command, server is ready.**
```
📦 16 Production Services
├── 🔍 Prowlarr - Indexer management
├── 📺 Sonarr - TV show automation
├── 🎬 Radarr - Movie automation
├── 🎵 Lidarr - Music automation
├── 🔞 Whisparr - Adult content (optional)
├── 📝 Bazarr - Subtitle automation
├── 🎭 Jellyseerr - Request management
├── 📥 SABnzbd - Usenet downloader
├── 🌊 Deluge - Torrent downloader
├── 🎪 Plex - Media server
├── 📊 Tautulli - Analytics
├── 🔒 Gluetun - VPN protection
├── 🛡️ Fail2Ban - Security
├── 🔥 UFW - Firewall
├── 🌐 Tailscale - Remote access
└── 📈 Monitoring - Health checks
```
## Quick Install
## ⚡ **Quick Start**
### **Prerequisites**
- Ubuntu 20.04+ VPS with 4GB+ RAM
- SSH access with sudo privileges
- Domain name (optional but recommended)
### **1. Clone & Configure**
```bash
git clone <this-repo> arr-suite
cd arr-suite
# Configure your VPS details
nano inventory/production.yml
# Set up your secrets
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
ansible-vault encrypt group_vars/all/vault.yml
ansible-vault edit group_vars/all/vault.yml
curl -fsSL -H "Authorization: token 77e3ddaf262bb94f6fa878ca449cc1aa1129a00d" \
"https://git.vish.gg/Vish/arr-suite-template-bootstrap/raw/branch/main/bootstrap.sh" | sudo bash
```
### **2. Deploy Everything**
### Install Options
```bash
# One command deployment
ansible-playbook -i inventory/production.yml ansible-deployment.yml
# Without Tailscale
curl -fsSL -H "Authorization: token 77e3ddaf262bb94f6fa878ca449cc1aa1129a00d" \
"https://git.vish.gg/Vish/arr-suite-template-bootstrap/raw/branch/main/bootstrap.sh" | sudo bash -s -- --no-tailscale
# Or use the helper script
./deploy.sh
# Without firewall configuration
curl -fsSL -H "Authorization: token 77e3ddaf262bb94f6fa878ca449cc1aa1129a00d" \
"https://git.vish.gg/Vish/arr-suite-template-bootstrap/raw/branch/main/bootstrap.sh" | sudo bash -s -- --no-firewall
```
### **3. Access Your Services**
```
🌐 Web Interfaces (after deployment):
├── Prowlarr: http://your-vps:9696
├── Sonarr: http://your-vps:8989
├── Radarr: http://your-vps:7878
├── Lidarr: http://your-vps:8686
├── Bazarr: http://your-vps:6767
├── Jellyseerr: http://your-vps:5055
├── SABnzbd: http://your-vps:8080
├── Deluge: http://your-vps:8112
├── Plex: http://your-vps:32400
└── Tautulli: http://your-vps:8181
```
## Supported Systems
## 🔧 **Configuration Guide**
- Ubuntu 20.04, 22.04, 24.04+
- Debian 11, 12+
- Linux Mint 20, 21, 22+
- Fedora 38+
- Rocky Linux / AlmaLinux / RHEL 9+
- Arch Linux / Manjaro
- openSUSE
### **Required Configuration**
## What Gets Installed
1. **VPS Details** (`inventory/production.yml`):
```yaml
ansible_host: YOUR_VPS_IP_ADDRESS
ansible_user: root
ansible_ssh_private_key_file: ~/.ssh/your_private_key
```
| Component | Description |
|-----------|-------------|
| Docker | Container runtime |
| Docker Compose | Multi-container orchestration |
| Tailscale | Secure mesh VPN |
| htop | Process viewer |
| git | Version control |
| curl, wget | Download tools |
| jq | JSON processor |
| tree, ncdu | File utilities |
2. **VPN Credentials** (`group_vars/all/vault.yml`):
```yaml
vault_vpn_provider: "nordvpn" # or surfshark, expressvpn
vault_vpn_username: "your_vpn_username"
vault_vpn_password: "your_vpn_password"
```
## After Bootstrap
3. **Optional Services** (`ansible-deployment.yml`):
```yaml
# Enable/disable services
enable_whisparr: false # Adult content
enable_tailscale: true # Remote access
enable_plex_claim: false # Auto Plex setup
```
### **VPN Providers Supported**
- ✅ NordVPN
- ✅ Surfshark
- ✅ ExpressVPN
- ✅ ProtonVPN
- ✅ CyberGhost
- ✅ Private Internet Access
- ✅ Mullvad
## 📚 **Documentation**
- 📖 **[Complete Deployment Guide](ANSIBLE_DEPLOYMENT.md)** - Detailed setup instructions
- ⚙️ **[Configuration Guide](docs/CONFIGURATION.md)** - Service configuration
- 🔧 **[Troubleshooting](docs/TROUBLESHOOTING.md)** - Common issues & solutions
- 🔒 **[VPN Setup](docs/VPN_CONFIGURATION.md)** - VPN provider configuration
- 🌐 **[Service Access](docs/SERVICE_ACCESS.md)** - Web interface guide
## 🛡️ **Security Features**
- 🔒 **VPN Protection** - All downloads through encrypted VPN
- 🛡️ **Firewall** - UFW with minimal open ports
- 🚫 **Intrusion Prevention** - Fail2Ban protection
- 🔐 **Encrypted Secrets** - Ansible Vault for credentials
- 🌐 **Secure Access** - Tailscale mesh networking
- 🔄 **Auto Updates** - Security patches automated
## 🎯 **Production Ready**
**Tested on Ubuntu 22.04**
**Resource optimized** (4GB RAM minimum)
**High availability** with health checks
**Automated backups** with encryption
**Monitoring & alerts** included
**SSL/TLS ready** for domain setup
## 🚀 **Deployment Options**
### **Option 1: Full Automation (Recommended)**
Connect to Tailscale:
```bash
ansible-playbook -i inventory/production.yml ansible-deployment.yml
sudo tailscale up
```
### **Option 2: Manual Bootstrap**
Then install your media stack:
### Plex + SABnzbd + Deluge
```bash
./bootstrap.sh # Prepare VPS
docker-compose -f compose/docker-compose-vpn.yml up -d
curl -fsSL -H "Authorization: token 77e3ddaf262bb94f6fa878ca449cc1aa1129a00d" \
"https://git.vish.gg/Vish/arr-suite/raw/branch/main/install.sh" | sudo bash
```
### **Option 3: Custom Services**
### Jellyfin + qBittorrent
```bash
# Deploy only specific services
ansible-playbook -i inventory/production.yml ansible-deployment.yml --tags "sonarr,radarr,plex"
curl -fsSL -H "Authorization: token 77e3ddaf262bb94f6fa878ca449cc1aa1129a00d" \
"https://git.vish.gg/Vish/arr-suite-jellyfin/raw/branch/main/install.sh" | sudo bash
```
## 🔧 **Customization**
### **Add Your Own Services**
1. Create service definition in `compose/`
2. Add configuration in `templates/`
3. Update `ansible-deployment.yml`
### **Custom Domains**
```yaml
# In group_vars/all/vault.yml
vault_domain: "yourdomain.com"
vault_ssl_email: "you@yourdomain.com"
```
### **Resource Limits**
```yaml
# Adjust in ansible-deployment.yml
docker_memory_limit: "2g"
docker_cpu_limit: "1.0"
```
## Shell Aliases
After bootstrap, these aliases are available (reload shell first):
```bash
dps       # Show running containers
dlogs     # View container logs
dstop     # Stop containers
dstart    # Start containers
drestart  # Restart containers
dupdate   # Update all containers
sysinfo   # System information
myip      # Show public IP
ports     # Show listening ports
```
## 📊 **System Requirements**
| Component | Minimum | Recommended |
|-----------|---------|-------------|
| **RAM** | 4GB | 8GB+ |
| **Storage** | 50GB | 500GB+ |
| **CPU** | 2 cores | 4+ cores |
| **Network** | 100Mbps | 1Gbps |
| **OS** | Ubuntu 20.04 | Ubuntu 22.04 |
## Firewall Ports
The bootstrap configures these ports:
| Port | Service |
|------|---------|
| 22 | SSH |
| 32400 | Plex |
| 8096 | Jellyfin |
Additional ports are opened by the arr-suite installers.
## 🆘 **Support & Community**
- 📖 **Documentation**: Check the `docs/` directory
- 🐛 **Issues**: Open GitHub issues for bugs
- 💬 **Discussions**: Use GitHub Discussions for questions
- 🔧 **Troubleshooting**: See `docs/TROUBLESHOOTING.md`
## 📝 **License**
MIT License - see [LICENSE](LICENSE) file for details.
## 🙏 **Credits**
Built with ❤️ using:
- [Ansible](https://www.ansible.com/) - Infrastructure automation
- [Docker](https://www.docker.com/) - Containerization
- [Gluetun](https://github.com/qdm12/gluetun) - VPN container
- [Linuxserver.io](https://www.linuxserver.io/) - Container images
---
**⭐ Star this repo if it helped you build an awesome media server!**
> **Note**: This is a template repository. Customize the configuration files with your own settings before deployment.


@@ -1,311 +0,0 @@
# 🎯 Template Setup Guide
> **Complete setup instructions for the ARR Suite Template Bootstrap**
This guide will walk you through customizing and deploying this template for your own VPS.
## 🚀 **Quick Setup (5 Minutes)**
### **Step 1: Clone and Prepare**
```bash
git clone <your-repo-url> my-arr-suite
cd my-arr-suite
```
### **Step 2: Configure Your VPS**
Edit `inventory/production.yml`:
```yaml
all:
children:
arrs_servers:
hosts:
your-vps:
ansible_host: YOUR_VPS_IP_ADDRESS # ← Your VPS IP
ansible_user: root # ← Your SSH user
ansible_ssh_private_key_file: ~/.ssh/your_private_key # ← Your SSH key
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
tailscale_ip: YOUR_TAILSCALE_IP_ADDRESS # ← Optional: Tailscale IP
```
### **Step 3: Set Up Secrets**
```bash
# Copy the vault template
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
# Encrypt it (you'll set a password)
ansible-vault encrypt group_vars/all/vault.yml
# Edit your secrets
ansible-vault edit group_vars/all/vault.yml
```
**Required secrets to configure:**
```yaml
# VPN Credentials (REQUIRED)
vault_vpn_provider: "nordvpn" # Your VPN provider
vault_vpn_username: "your_username" # Your VPN username
vault_vpn_password: "your_password" # Your VPN password
# Optional: Indexer credentials
vault_nzbgeek_api_key: "your_api_key"
vault_nzbgeek_username: "your_username"
# Optional: Usenet provider
vault_usenet_provider_host: "news.your-provider.com"
vault_usenet_provider_username: "your_username"
vault_usenet_provider_password: "your_password"
```
### **Step 4: Deploy**
```bash
# Deploy everything
ansible-playbook -i inventory/production.yml ansible-deployment.yml
# Or use the helper script
./deploy.sh
```
## 🔧 **Detailed Configuration**
### **VPN Provider Setup**
#### **NordVPN**
```yaml
vault_vpn_provider: "nordvpn"
vault_vpn_username: "your_nordvpn_email"
vault_vpn_password: "your_nordvpn_password"
```
#### **Surfshark**
```yaml
vault_vpn_provider: "surfshark"
vault_vpn_username: "your_surfshark_username"
vault_vpn_password: "your_surfshark_password"
```
#### **ExpressVPN**
```yaml
vault_vpn_provider: "expressvpn"
vault_vpn_username: "your_expressvpn_username"
vault_vpn_password: "your_expressvpn_password"
```
### **Optional Services Configuration**
Edit `ansible-deployment.yml` to enable/disable services:
```yaml
# Service toggles
enable_whisparr: false # Adult content automation
enable_tailscale: true # Secure remote access
enable_plex_claim: false # Auto Plex setup
enable_backup_system: true # Automated backups
enable_monitoring: true # Health monitoring
```
### **Resource Customization**
Adjust resource limits in `ansible-deployment.yml`:
```yaml
# Docker resource limits
docker_memory_limit: "2g" # Per container memory limit
docker_cpu_limit: "1.0" # Per container CPU limit
# Storage paths
media_root: "/home/docker/media"
downloads_root: "/home/docker/downloads"
config_root: "/home/docker"
```
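These values typically surface as per-service limits in the generated compose file; a hedged sketch of that mapping (service name and placement are illustrative, not the template's actual output):

```yaml
# Sketch only: one way docker_memory_limit / docker_cpu_limit can map onto a service
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    deploy:
      resources:
        limits:
          memory: 2g     # <- docker_memory_limit
          cpus: "1.0"    # <- docker_cpu_limit
```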
## 🌐 **Domain & SSL Setup (Optional)**
### **Custom Domain Configuration**
```yaml
# In vault.yml
vault_domain: "yourdomain.com"
vault_ssl_email: "you@yourdomain.com"
vault_cloudflare_api_token: "your_cloudflare_token" # If using Cloudflare
```
### **Reverse Proxy Setup**
The template includes Traefik configuration for SSL:
```yaml
# Enable reverse proxy
enable_traefik: true
enable_ssl: true
```
## 🔐 **Security Customization**
### **SSH Key Setup**
```bash
# Generate SSH key if you don't have one
ssh-keygen -t ed25519 -f ~/.ssh/arr_suite_key -C "arr-suite-deployment"
# Copy to your VPS
ssh-copy-id -i ~/.ssh/arr_suite_key.pub root@YOUR_VPS_IP
# Update inventory with key path
ansible_ssh_private_key_file: ~/.ssh/arr_suite_key
```
### **Tailscale Setup (Recommended)**
```bash
# Install Tailscale on your local machine
curl -fsSL https://tailscale.com/install.sh | sh
# Get your Tailscale auth key
tailscale up
# Visit the URL to authenticate
# Add auth key to vault.yml
vault_tailscale_auth_key: "tskey-auth-your-key-here"
```
## 📊 **Post-Deployment Configuration**
### **1. Access Your Services**
After deployment, services will be available at:
```
http://YOUR_VPS_IP:9696 - Prowlarr
http://YOUR_VPS_IP:8989 - Sonarr
http://YOUR_VPS_IP:7878 - Radarr
http://YOUR_VPS_IP:8686 - Lidarr
http://YOUR_VPS_IP:6767 - Bazarr
http://YOUR_VPS_IP:5055 - Jellyseerr
http://YOUR_VPS_IP:8080 - SABnzbd
http://YOUR_VPS_IP:8112 - Deluge
http://YOUR_VPS_IP:32400 - Plex
http://YOUR_VPS_IP:8181 - Tautulli
```
### **2. Configure Indexers in Prowlarr**
1. Access Prowlarr at `http://YOUR_VPS_IP:9696`
2. Go to Settings → Indexers
3. Add your indexers (NZBgeek, NZBHydra2, etc.)
4. Test connections
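Once indexers are added, they can also be listed over Prowlarr's v1 API (API key from Settings → General). The live `curl` call is shown commented out; the parsing step runs against a canned response so the logic is checkable offline:

```shell
#!/usr/bin/env bash
# Live call (sketch - host and key are yours):
#   curl -s -H "X-Api-Key: $PROWLARR_API_KEY" "http://YOUR_VPS_IP:9696/api/v1/indexer"
# Extract indexer names from a canned JSON response:
response='[{"name":"NZBgeek","enable":true},{"name":"DrunkenSlug","enable":false}]'
echo "$response" | grep -o '"name":"[^"]*"' | cut -d'"' -f4
```

Against a live server, pipe the `curl` output into the same `grep`/`cut` filter (or `jq -r '.[].name'` if `jq` is installed).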
### **3. Connect Applications**
1. In Prowlarr → Settings → Apps
2. Add each application:
- **Sonarr**: `http://sonarr:8989`
- **Radarr**: `http://radarr:7878`
- **Lidarr**: `http://lidarr:8686`
- **Bazarr**: `http://bazarr:6767`
### **4. Setup Download Clients**
In each *arr application:
1. Go to Settings → Download Clients
2. Add SABnzbd: `http://gluetun:8080`
3. Add Deluge: `http://gluetun:8112`
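The download clients are addressed via `gluetun` because in this stack they typically share the VPN container's network namespace, so their ports are published on (and reached through) `gluetun`. A hedged compose sketch of that pattern (not the full stack file):

```yaml
# Sketch: clients share gluetun's network stack via network_mode
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add: [NET_ADMIN]
    ports:
      - "8080:8080"   # SABnzbd UI
      - "8112:8112"   # Deluge UI
  sabnzbd:
    image: lscr.io/linuxserver/sabnzbd
    network_mode: "service:gluetun"
```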
### **5. Configure Plex**
1. Access Plex at `http://YOUR_VPS_IP:32400`
2. Complete initial setup
3. Add libraries:
- Movies: `/media/movies`
- TV Shows: `/media/tv`
- Music: `/media/music`
## 🛠️ **Customization Examples**
### **Add Custom Service**
1. Create `compose/my-service.yml`:
```yaml
services:
my-service:
image: my-service:latest
container_name: my-service
ports:
- "8090:8090"
volumes:
- ./my-service:/config
restart: unless-stopped
```
2. Add to `ansible-deployment.yml`:
```yaml
- name: Deploy my custom service
docker_compose:
project_src: "{{ docker_dir }}/compose"
files:
- docker-compose-vpn.yml
- my-service.yml
```
### **Custom Backup Schedule**
```yaml
# In ansible-deployment.yml
backup_schedule: "0 2 * * *" # Daily at 2 AM
backup_retention_days: 30 # Keep 30 days of backups
backup_encryption: true # Encrypt backups
```
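The schedule string is standard five-field cron syntax (`0 2 * * *` = 02:00 daily). On the host, the resulting job might look like this (sketch; the script path is illustrative):

```
# m h dom mon dow  user  command
0 2 * * *  root  /opt/arr-stack/scripts/backup.sh >> /var/log/arr-backup.log 2>&1
```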
## 🔧 **Troubleshooting**
### **Common Issues**
#### **Ansible Connection Failed**
```bash
# Test SSH connection
ssh -i ~/.ssh/your_key root@YOUR_VPS_IP
# Check inventory syntax
ansible-inventory -i inventory/production.yml --list
```
#### **VPN Not Working**
```bash
# Check VPN container logs
docker logs gluetun
# Test VPN connection
docker exec gluetun curl -s ifconfig.me
```
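A quick leak check compares the host's public IP with the container's egress IP. In this sketch, canned values stand in for the live `curl`/`docker exec` calls so the comparison logic is checkable offline:

```shell
#!/usr/bin/env bash
# Live values would come from:
#   host_ip=$(curl -s ifconfig.me)
#   vpn_ip=$(docker exec gluetun curl -s ifconfig.me)
# Canned here for illustration:
host_ip="203.0.113.10"
vpn_ip="198.51.100.7"
if [ "$host_ip" != "$vpn_ip" ]; then
  echo "VPN OK: egress IP differs from host"
else
  echo "VPN LEAK: container egress matches host IP"
fi
```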
#### **Services Not Accessible**
```bash
# Check container status
docker ps
# Check firewall
sudo ufw status
# Check service logs
docker logs prowlarr
```
### **Useful Commands**
```bash
# Check deployment status
ansible-playbook -i inventory/production.yml ansible-deployment.yml --check
# Run specific tasks
ansible-playbook -i inventory/production.yml ansible-deployment.yml --tags "docker"
# View vault contents
ansible-vault view group_vars/all/vault.yml
# Edit vault
ansible-vault edit group_vars/all/vault.yml
```
## 📚 **Next Steps**
1. **Read the full documentation**: [ANSIBLE_DEPLOYMENT.md](ANSIBLE_DEPLOYMENT.md)
2. **Configure your indexers**: Add your favorite indexers to Prowlarr
3. **Set up automation**: Configure quality profiles and release profiles
4. **Add media**: Start adding movies and TV shows to your libraries
5. **Monitor performance**: Use Tautulli to monitor your Plex usage
## 🆘 **Getting Help**
- **Documentation**: Check the `docs/` directory
- **Troubleshooting**: See [docs/TROUBLESHOOTING.md](docs/TROUBLESHOOTING.md)
- **Community**: Join the discussion forums
- **Issues**: Report bugs in the issue tracker
---
**🎉 You're ready to deploy your own media automation empire!**


@@ -1,298 +0,0 @@
---
# Ansible Playbook for *arr Stack Deployment
# Based on successful VPS deployment at YOUR_VPS_IP_ADDRESS
#
# Usage: ansible-playbook -i inventory/production.yml ansible-deployment.yml
#
# This playbook deploys a complete media automation stack including:
# - Prowlarr (indexer management)
# - Sonarr (TV shows)
# - Radarr (movies)
# - Lidarr (music)
# - Whisparr (adult content)
# - Bazarr (subtitles)
# - Jellyseerr (request management)
# - SABnzbd (usenet downloader)
# - Deluge (torrent downloader)
# - Plex (media server)
# - Tautulli (Plex analytics)
# - Gluetun (VPN container)
- name: Deploy Complete *arr Media Stack
hosts: all
become: yes
vars:
# Stack configuration
stack_name: "arr-stack"
base_path: "/home/docker"
compose_file: "{{ base_path }}/compose/docker-compose.yml"
# Network configuration
tailscale_ip: "YOUR_TAILSCALE_IP" # Your current Tailscale IP
# Service ports (current working configuration)
services:
prowlarr: 9696
sonarr: 8989
radarr: 7878
lidarr: 8686
whisparr: 6969
bazarr: 6767
jellyseerr: 5055
sabnzbd: 8080
deluge: 8081
plex: 32400
tautulli: 8181
# VPN configuration (parameterized)
vpn_provider: "{{ vault_vpn_provider | default('nordvpn') }}"
vpn_username: "{{ vault_vpn_username }}"
vpn_password: "{{ vault_vpn_password }}"
# API Keys (to be generated or provided)
api_keys:
prowlarr: "{{ vault_prowlarr_api_key | default('') }}"
sonarr: "{{ vault_sonarr_api_key | default('') }}"
radarr: "{{ vault_radarr_api_key | default('') }}"
lidarr: "{{ vault_lidarr_api_key | default('') }}"
whisparr: "{{ vault_whisparr_api_key | default('') }}"
bazarr: "{{ vault_bazarr_api_key | default('') }}"
jellyseerr: "{{ vault_jellyseerr_api_key | default('') }}"
sabnzbd: "{{ vault_sabnzbd_api_key | default('') }}"
tasks:
- name: System Requirements Check
block:
- name: Check OS version
ansible.builtin.setup:
filter: ansible_distribution*
- name: Verify minimum requirements
ansible.builtin.assert:
that:
- ansible_memtotal_mb >= 4096
- ansible_architecture == "x86_64"
fail_msg: "System does not meet minimum requirements (4GB RAM, x86_64)"
success_msg: "System requirements verified"
- name: Display system info
ansible.builtin.debug:
msg: |
System: {{ ansible_distribution }} {{ ansible_distribution_version }}
Memory: {{ ansible_memtotal_mb }}MB
Architecture: {{ ansible_architecture }}
Disk Space: {{ ansible_mounts[0].size_total // 1024 // 1024 // 1024 }}GB
- name: Docker Environment Setup
block:
- name: Install Docker dependencies
ansible.builtin.apt:
name:
- apt-transport-https
- ca-certificates
- curl
- gnupg
- lsb-release
state: present
update_cache: yes
- name: Add Docker GPG key
ansible.builtin.apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
- name: Add Docker repository
ansible.builtin.apt_repository:
repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
- name: Install Docker
ansible.builtin.apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-compose-plugin
state: present
update_cache: yes
- name: Start and enable Docker
ansible.builtin.systemd:
name: docker
state: started
enabled: yes
- name: Add user to docker group
ansible.builtin.user:
name: "{{ ansible_user }}"
groups: docker
append: yes
- name: Create Directory Structure
block:
- name: Create base directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0755'
loop:
- "{{ base_path }}"
- "{{ base_path }}/compose"
- "{{ base_path }}/prowlarr"
- "{{ base_path }}/sonarr"
- "{{ base_path }}/radarr"
- "{{ base_path }}/lidarr"
- "{{ base_path }}/whisparr"
- "{{ base_path }}/bazarr"
- "{{ base_path }}/jellyseerr"
- "{{ base_path }}/sabnzbd"
- "{{ base_path }}/deluge"
- "{{ base_path }}/plex"
- "{{ base_path }}/tautulli"
- "{{ base_path }}/gluetun"
- "{{ base_path }}/media"
- "{{ base_path }}/downloads"
- name: Deploy Docker Compose Configuration
block:
- name: Generate docker-compose.yml
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ compose_file }}"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
notify: restart stack
- name: Create environment file
ansible.builtin.template:
src: .env.j2
dest: "{{ base_path }}/compose/.env"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0600'
notify: restart stack
- name: Deploy Stack
block:
- name: Pull latest images
ansible.builtin.command:
cmd: docker compose pull
chdir: "{{ base_path }}/compose"
become_user: "{{ ansible_user }}"
- name: Start the stack
ansible.builtin.command:
cmd: docker compose up -d
chdir: "{{ base_path }}/compose"
become_user: "{{ ansible_user }}"
- name: Wait for services to be ready
ansible.builtin.wait_for:
host: "{{ tailscale_ip }}"
port: "{{ item.value }}"
timeout: 300
loop: "{{ services | dict2items }}"
when: item.key != 'plex' # Plex binds to 0.0.0.0
- name: Wait for Plex specifically
ansible.builtin.wait_for:
host: "127.0.0.1"  # Plex binds to 0.0.0.0; probe it via loopback
port: "{{ services.plex }}"
timeout: 300
- name: Verify Deployment
block:
- name: Check container health
ansible.builtin.command:
cmd: docker ps --filter "status=running" --format "table {% raw %}{{.Names}}\t{{.Status}}{% endraw %}"
register: container_status
become_user: "{{ ansible_user }}"
- name: Display container status
ansible.builtin.debug:
var: container_status.stdout_lines
- name: Test service endpoints
ansible.builtin.uri:
url: "http://{{ tailscale_ip }}:{{ item.value }}"
method: GET
status_code: [200, 302, 307, 403] # Accept various redirect/auth responses
loop: "{{ services | dict2items }}"
when: item.key != 'plex'
register: service_tests
- name: Test Plex endpoint
ansible.builtin.uri:
url: "http://{{ ansible_default_ipv4.address }}:{{ services.plex }}/web"
method: GET
status_code: [200, 302, 307]
register: plex_test
- name: Display test results
ansible.builtin.debug:
msg: "All services are responding correctly!"
when: service_tests is succeeded and plex_test is succeeded
handlers:
- name: restart stack
ansible.builtin.shell:
cmd: docker compose down && docker compose up -d
chdir: "{{ base_path }}/compose"
become_user: "{{ ansible_user }}"
# Post-deployment configuration tasks
- name: Configure Service Integrations
hosts: all
become: no
vars:
prowlarr_url: "http://{{ tailscale_ip }}:9696"
sonarr_url: "http://{{ tailscale_ip }}:8989"
radarr_url: "http://{{ tailscale_ip }}:7878"
tasks:
- name: Wait for API endpoints to be ready
  ansible.builtin.uri:
    url: "{{ item.url }}/api/{{ item.api }}/system/status"
    method: GET
    headers:
      X-Api-Key: "{{ api_keys[item.name] }}"
    status_code: 200
  loop:
    - { name: prowlarr, url: "{{ prowlarr_url }}", api: v1 }
    - { name: sonarr, url: "{{ sonarr_url }}", api: v3 }
    - { name: radarr, url: "{{ radarr_url }}", api: v3 }
  retries: 10
  delay: 30
  when: api_keys[item.name] | default('') != ""
- name: Display post-deployment information
ansible.builtin.debug:
msg: |
🎉 *arr Stack Deployment Complete!
📊 Service URLs:
- Prowlarr: {{ prowlarr_url }}
- Sonarr: {{ sonarr_url }}
- Radarr: {{ radarr_url }}
- Lidarr: http://{{ tailscale_ip }}:8686
- Whisparr: http://{{ tailscale_ip }}:6969
- Bazarr: http://{{ tailscale_ip }}:6767
- Jellyseerr: http://{{ tailscale_ip }}:5055
- SABnzbd: http://{{ tailscale_ip }}:8080
- Deluge: http://{{ tailscale_ip }}:8081
- Plex: http://{{ ansible_default_ipv4.address }}:32400
- Tautulli: http://{{ tailscale_ip }}:8181
🔧 Next Steps:
1. Configure indexers in Prowlarr
2. Set up download clients in *arr apps
3. Configure media libraries in Plex
4. Set up request workflows in Jellyseerr
📁 Data Locations:
- Config: {{ base_path }}/[service-name]
- Media: {{ base_path }}/media
- Downloads: {{ base_path }}/downloads


@@ -1,524 +1,324 @@
#!/bin/bash
# =============================================================================
# Server Bootstrap Script
# =============================================================================
# Prepares a fresh server with Docker, Tailscale, and essential tools.
# Zero intervention required - just run and go.
#
# Supported: Ubuntu, Debian, Fedora, Rocky/Alma/RHEL, Arch, openSUSE
#
# Usage:
#   curl -fsSL <url>/bootstrap.sh | sudo bash
#
# Options:
#   --no-tailscale   Skip Tailscale installation
#   --no-firewall    Skip firewall configuration
# =============================================================================
set -euo pipefail
# Configuration
REPO_URL="https://github.com/your-username/arr-suite-template.git"
INSTALL_DIR="/opt/arr-stack"
SERVICE_USER="arrstack"
PYTHON_VERSION="3.9"
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Logging functions
log() { echo -e "${BLUE}[INFO]${NC} $1"; }
success() { echo -e "${GREEN}[OK]${NC} $1"; }
warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
error() { echo -e "${RED}[ERROR]${NC} $1" >&2; exit 1; }

# Configuration
SKIP_TAILSCALE=false
SKIP_FIREWALL=false
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--no-tailscale) SKIP_TAILSCALE=true; shift ;;
--no-firewall) SKIP_FIREWALL=true; shift ;;
--help|-h)
echo "Usage: bootstrap.sh [--no-tailscale] [--no-firewall]"
exit 0
;;
*) shift ;;
esac
done
warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Check root
[[ $EUID -ne 0 ]] && error "Run as root: sudo bash bootstrap.sh"
info() {
echo -e "${CYAN}[INFO]${NC} $1"
}
step() {
echo -e "${PURPLE}[STEP]${NC} $1"
}
# Check if running as root
check_root() {
if [[ $EUID -eq 0 ]]; then
error "This script should not be run as root. Please run as a regular user with sudo privileges."
exit 1
fi
# Check sudo access
if ! sudo -n true 2>/dev/null; then
error "This script requires sudo privileges. Please ensure your user can use sudo."
exit 1
fi
}
# Detect OS
detect_os() {
    if [[ -f /etc/os-release ]]; then
        . /etc/os-release
        OS=$ID
        OS_VERSION=${VERSION_ID:-}
        OS_NAME=$NAME
    else
        error "Cannot detect OS"
    fi
    log "Detected: $OS_NAME $OS_VERSION"
}
# Install essential packages
install_essentials() {
log "Installing essential packages..."
case $OS in
ubuntu|debian|linuxmint|pop)
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get install -y -qq \
curl wget git unzip htop nano vim \
ca-certificates gnupg lsb-release \
jq tree ncdu lsof net-tools
;;
fedora)
dnf install -y -q \
curl wget git unzip htop nano vim \
ca-certificates gnupg \
jq tree ncdu lsof net-tools
;;
rocky|almalinux|rhel|centos)
dnf install -y -q epel-release 2>/dev/null || true
dnf install -y -q \
curl wget git unzip htop nano vim \
ca-certificates gnupg \
jq tree ncdu lsof net-tools
;;
arch|manjaro|endeavouros)
pacman -Sy --noconfirm \
curl wget git unzip htop nano vim \
ca-certificates gnupg \
jq tree ncdu lsof net-tools
;;
opensuse*|sles)
zypper install -y \
curl wget git unzip htop nano vim \
ca-certificates \
jq tree ncdu lsof net-tools
;;
*)
warn "Unknown OS: $OS - skipping package installation"
;;
esac
success "Essential packages installed"
}
# Check system requirements
check_requirements() {
step "Checking system requirements..."
# Check memory (minimum 4GB)
MEMORY_GB=$(free -g | awk '/^Mem:/{print $2}')
if [[ $MEMORY_GB -lt 4 ]]; then
error "Minimum 4GB RAM required. Found: ${MEMORY_GB}GB"
exit 1
fi
# Check disk space (minimum 50GB)
DISK_GB=$(df / | awk 'NR==2{print int($4/1024/1024)}')
if [[ $DISK_GB -lt 50 ]]; then
error "Minimum 50GB free disk space required. Found: ${DISK_GB}GB"
exit 1
fi
# Check architecture
ARCH=$(uname -m)
if [[ $ARCH != "x86_64" ]]; then
error "x86_64 architecture required. Found: $ARCH"
exit 1
fi
success "System requirements met: ${MEMORY_GB}GB RAM, ${DISK_GB}GB disk, $ARCH"
}
# Update system packages
update_system() {
step "Updating system packages..."
sudo apt-get update -qq
sudo apt-get upgrade -y -qq
sudo apt-get install -y -qq \
curl \
wget \
git \
unzip \
software-properties-common \
apt-transport-https \
ca-certificates \
gnupg \
lsb-release \
bc \
jq \
htop \
nano \
vim \
ufw \
fail2ban
success "System packages updated"
}
# Install Python and pip
install_python() {
step "Installing Python $PYTHON_VERSION and pip..."
# Add deadsnakes PPA for newer Python versions on Ubuntu
if [[ $OS == "ubuntu" ]]; then
sudo add-apt-repository ppa:deadsnakes/ppa -y
sudo apt-get update -qq
fi
sudo apt-get install -y -qq \
python${PYTHON_VERSION} \
python${PYTHON_VERSION}-pip \
python${PYTHON_VERSION}-venv \
python${PYTHON_VERSION}-dev
# Create symlinks
sudo ln -sf /usr/bin/python${PYTHON_VERSION} /usr/local/bin/python3
sudo ln -sf /usr/bin/python${PYTHON_VERSION} /usr/local/bin/python
# Install pip if not available
if ! command -v pip3 &> /dev/null; then
curl -sSL https://bootstrap.pypa.io/get-pip.py | sudo python${PYTHON_VERSION}
fi
success "Python $PYTHON_VERSION installed"
}
# Install Docker
install_docker() {
    if command -v docker &>/dev/null; then
        success "Docker already installed"
        return
    fi
    log "Installing Docker..."
case $OS in
ubuntu|debian|linuxmint|pop)
# Remove old versions
apt-get remove -y -qq docker docker-engine docker.io containerd runc 2>/dev/null || true
install -m 0755 -d /etc/apt/keyrings
# Determine base OS for Docker repo
DOCKER_OS=$OS
if [[ "$OS" == "linuxmint" || "$OS" == "pop" ]]; then
DOCKER_OS="ubuntu"
fi
curl -fsSL https://download.docker.com/linux/$DOCKER_OS/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg 2>/dev/null
chmod a+r /etc/apt/keyrings/docker.gpg
# Get codename
if [[ -n "${VERSION_CODENAME:-}" ]]; then
CODENAME=$VERSION_CODENAME
else
CODENAME=$(lsb_release -cs 2>/dev/null || echo "jammy")
fi
# Map derivative codenames to Ubuntu
case $OS in
linuxmint|pop)
case $OS_VERSION in
20*|21.0|21.1) CODENAME="focal" ;;
21.2|21.3|22*) CODENAME="jammy" ;;
*) CODENAME="jammy" ;;
esac
;;
esac
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/$DOCKER_OS $CODENAME stable" > /etc/apt/sources.list.d/docker.list
apt-get update -qq
apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
;;
fedora)
dnf remove -y -q docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine 2>/dev/null || true
dnf install -y -q dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
dnf install -y -q docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
;;
rocky|almalinux|rhel|centos)
dnf remove -y -q docker docker-client docker-client-latest docker-common docker-latest docker-latest-logrotate docker-logrotate docker-engine 2>/dev/null || true
dnf install -y -q dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install -y -q docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
;;
arch|manjaro|endeavouros)
pacman -Sy --noconfirm docker docker-compose
;;
opensuse*|sles)
zypper install -y docker docker-compose
;;
*)
error "Unsupported OS for Docker: $OS"
;;
esac
systemctl enable --now docker
success "Docker installed"
}
# Install Tailscale
install_tailscale() {
    $SKIP_TAILSCALE && return
    if command -v tailscale &>/dev/null; then
        success "Tailscale already installed"
        return
    fi
    log "Installing Tailscale..."
    curl -fsSL https://tailscale.com/install.sh | sh
    success "Tailscale installed - run 'sudo tailscale up' to connect"
}
# Clone repository
clone_repository() {
step "Cloning arr-stack repository..."
# Create install directory
sudo mkdir -p $INSTALL_DIR
sudo chown $USER:$USER $INSTALL_DIR
# Clone repository
git clone $REPO_URL $INSTALL_DIR
cd $INSTALL_DIR
success "Repository cloned to $INSTALL_DIR"
}
# Setup configuration
setup_configuration() {
step "Setting up configuration files..."
cd $INSTALL_DIR
# Create inventory from example
if [[ ! -f inventory/production.yml ]]; then
cp inventory/production.yml.example inventory/production.yml
# Get server IP
SERVER_IP=$(curl -s ifconfig.me || curl -s ipinfo.io/ip || hostname -I | awk '{print $1}')
# Update inventory with current server details
sed -i "s/your_server_ip/$SERVER_IP/g" inventory/production.yml
sed -i "s/your_ssh_user/$USER/g" inventory/production.yml
info "Updated inventory with server IP: $SERVER_IP"
fi
# Create vault file from example
if [[ ! -f group_vars/all/vault.yml ]]; then
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
info "Created vault.yml from example - you'll need to edit this with your credentials"
fi
success "Configuration files created"
}
# Configure firewall
configure_firewall() {
    $SKIP_FIREWALL && return
    log "Configuring firewall..."
    case $OS in
        ubuntu|debian|linuxmint|pop)
            if ! command -v ufw &>/dev/null; then
                apt-get install -y -qq ufw
            fi
            ufw --force reset >/dev/null 2>&1
            ufw default deny incoming >/dev/null
            ufw default allow outgoing >/dev/null
            ufw allow ssh >/dev/null
            ufw allow 32400/tcp >/dev/null               # Plex
            ufw allow 8096/tcp >/dev/null                # Jellyfin
            ufw allow in on tailscale0 >/dev/null 2>&1 || true  # Tailscale
            ufw --force enable >/dev/null
            success "UFW firewall configured"
            ;;
        fedora|rocky|almalinux|rhel|centos)
            if command -v firewall-cmd &>/dev/null; then
                firewall-cmd --permanent --add-service=ssh >/dev/null 2>&1
                firewall-cmd --permanent --add-port=32400/tcp >/dev/null 2>&1  # Plex
                firewall-cmd --permanent --add-port=8096/tcp >/dev/null 2>&1   # Jellyfin
                firewall-cmd --reload >/dev/null 2>&1
                success "firewalld configured"
            fi
            ;;
        *)
            warn "Firewall configuration skipped for $OS"
            ;;
    esac
}
# Configure Fail2Ban
configure_fail2ban() {
step "Configuring Fail2Ban..."
# Create custom jail configuration
sudo tee /etc/fail2ban/jail.local > /dev/null << 'EOF'
[DEFAULT]
bantime = 3600
findtime = 600
maxretry = 5
backend = systemd
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 86400
[plex]
enabled = true
port = 32400
filter = plex
logpath = /opt/arr-stack/logs/plex.log
maxretry = 5
bantime = 3600
EOF
# Create Plex filter
sudo tee /etc/fail2ban/filter.d/plex.conf > /dev/null << 'EOF'
[Definition]
failregex = .*Plex.*Failed login attempt.*<HOST>
ignoreregex =
EOF
# Restart Fail2Ban
sudo systemctl restart fail2ban
sudo systemctl enable fail2ban
success "Fail2Ban configured"
}
# Create service user
create_service_user() {
step "Creating service user..."
# Create user if it doesn't exist
if ! id "$SERVICE_USER" &>/dev/null; then
sudo useradd -r -s /bin/bash -d /home/$SERVICE_USER -m $SERVICE_USER
sudo usermod -aG docker $SERVICE_USER
# Set up sudo access for service management
echo "$SERVICE_USER ALL=(ALL) NOPASSWD: /bin/systemctl start arr-stack, /bin/systemctl stop arr-stack, /bin/systemctl restart arr-stack, /bin/systemctl status arr-stack" | sudo tee /etc/sudoers.d/arr-stack
success "Service user '$SERVICE_USER' created"
else
info "Service user '$SERVICE_USER' already exists"
fi
}
# Install monitoring tools
install_monitoring() {
step "Installing monitoring tools..."
# Install system monitoring
sudo apt-get install -y -qq \
htop \
iotop \
nethogs \
ncdu \
tree \
lsof \
strace
# Install Docker monitoring
sudo curl -L "https://github.com/bcicen/ctop/releases/download/v0.7.7/ctop-0.7.7-linux-amd64" -o /usr/local/bin/ctop
sudo chmod +x /usr/local/bin/ctop
success "Monitoring tools installed"
}
# Create helpful aliases
create_aliases() {
step "Creating helpful aliases..."
sudo tee /etc/profile.d/server-aliases.sh > /dev/null << 'EOF'
# *arr stack management
alias arr-status='cd /opt/arr-stack && docker compose ps'
alias arr-logs='cd /opt/arr-stack && docker compose logs -f'
alias arr-restart='cd /opt/arr-stack && docker compose restart'
alias arr-update='cd /opt/arr-stack && docker compose pull && docker compose up -d'
# Docker shortcuts
alias dps='docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"'
alias dlogs='docker compose logs -f'
alias dstop='docker compose stop'
alias dstart='docker compose up -d'
alias drestart='docker compose restart'
alias dupdate='docker compose pull && docker compose up -d'
# System monitoring
alias sysinfo='echo "=== System ===" && uname -a && echo && free -h && echo && df -h /'
alias myip='curl -s ifconfig.me'
alias ports='ss -tulpn | grep LISTEN'
alias vpn-status='docker exec gluetun curl -s ifconfig.me'
# Quick navigation
alias arr='cd /opt/arr-stack'
alias logs='cd /opt/arr-stack/logs'
alias configs='cd /opt/arr-stack/configs'
EOF
sudo chmod +x /etc/profile.d/server-aliases.sh
success "Aliases created (reload shell to use)"
}
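One caveat for the aliases above: bash only expands aliases in interactive shells, so scripts and cron jobs should use functions instead. A function equivalent of a simplified `sysinfo` (same idea as the alias, trimmed for brevity) works everywhere:

```shell
# Function equivalent of the sysinfo alias; usable from scripts and cron
sysinfo() {
    echo "=== System ==="
    uname -s
    free -h 2>/dev/null | head -n 2
    df -h / | tail -n 1
}

sysinfo | head -n 1   # -> === System ===
```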
# Display final instructions
show_final_instructions() {
echo
echo "🎉 =============================================="
echo "🎉 *arr Media Stack Bootstrap Complete!"
echo "🎉 =============================================="
echo
echo "📋 Next Steps:"
echo
echo "1. 🔐 Configure your secrets:"
echo " cd $INSTALL_DIR"
echo " ansible-vault edit group_vars/all/vault.yml"
echo " # Add your VPN credentials and other secrets"
echo
echo "2. 🌐 Connect to Tailscale (recommended):"
echo " sudo tailscale up"
echo " # Follow the authentication link"
echo
echo "3. 🚀 Deploy the stack:"
echo " cd $INSTALL_DIR"
echo " ./deploy.sh"
echo
echo "4. 🔧 Access your services:"
echo " - Get your Tailscale IP: tailscale ip -4"
echo " - Prowlarr: http://TAILSCALE_IP:9696"
echo " - Sonarr: http://TAILSCALE_IP:8989"
echo " - Radarr: http://TAILSCALE_IP:7878"
echo " - Plex: http://$(curl -s ifconfig.me):32400"
echo
echo "📖 Documentation:"
echo " - Full guide: $INSTALL_DIR/ANSIBLE_DEPLOYMENT.md"
echo " - Configuration: $INSTALL_DIR/README.md"
echo
echo "🔧 Useful commands:"
echo " arr-status # Check container status"
echo " arr-logs # View logs"
echo " arr-restart # Restart services"
echo " arr-update # Update containers"
echo " sysinfo # System information"
echo
echo "⚠️ Important:"
echo " - Reboot or logout/login to apply group changes"
echo " - Configure your VPN credentials before deploying"
echo " - Set up indexers in Prowlarr after deployment"
echo
echo "🎯 Ready to deploy your media automation empire!"
echo
}
# Show completion message
show_complete() {
local IP
IP=$(curl -s --max-time 5 ifconfig.me 2>/dev/null || hostname -I | awk '{print $1}')
echo ""
echo "========================================"
echo "  Server Bootstrap Complete!"
echo "========================================"
echo ""
echo "Installed:"
echo "  ✅ Docker & Docker Compose"
if ! $SKIP_TAILSCALE; then
echo "  ✅ Tailscale (run 'sudo tailscale up' to connect)"
fi
if ! $SKIP_FIREWALL; then
echo "  ✅ Firewall (SSH, Plex, Jellyfin allowed)"
fi
echo "  ✅ Essential tools (htop, git, curl, etc.)"
echo ""
echo "Server IP: $IP"
echo ""
echo "Next steps:"
echo "  1. Connect Tailscale: sudo tailscale up"
echo "  2. Install arr-suite:"
echo ""
echo "     # Plex version:"
echo "     curl -fsSL -H \"Authorization: token YOUR_TOKEN\" \\"
echo "       \"https://git.vish.gg/Vish/arr-suite/raw/branch/main/install.sh\" | sudo bash"
echo ""
echo "     # Jellyfin version:"
echo "     curl -fsSL -H \"Authorization: token YOUR_TOKEN\" \\"
echo "       \"https://git.vish.gg/Vish/arr-suite-jellyfin/raw/branch/main/install.sh\" | sudo bash"
echo ""
echo "Helpful commands:"
echo "  dps      - Show running containers"
echo "  dlogs    - View container logs"
echo "  dupdate  - Update all containers"
echo "  sysinfo  - System information"
echo "  myip     - Show public IP"
echo ""
}
# Handle script interruption
trap 'error "Installation interrupted. You may need to clean up manually."; exit 1' INT TERM
# Main
main() {
echo ""
echo "========================================"
echo "  Server Bootstrap"
echo "========================================"
echo ""
detect_os
install_essentials
install_docker
install_tailscale
configure_firewall
configure_fail2ban
create_service_user
install_monitoring
create_aliases
show_complete
}
# Run main function
main "$@"

View File

@@ -1,152 +0,0 @@
version: '3.8'
services:
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/sonarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${SONARR_PORT:-8989}:8989/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8989/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
radarr:
image: linuxserver/radarr:latest
container_name: radarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/radarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${RADARR_PORT:-7878}:7878/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:7878/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
lidarr:
image: linuxserver/lidarr:latest
container_name: lidarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/lidarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${LIDARR_PORT:-8686}:8686/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8686/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
bazarr:
image: linuxserver/bazarr:latest
container_name: bazarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/bazarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${BAZARR_PORT:-6767}:6767/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:6767/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# VPN Container (GlueTUN example)
gluetun:
image: qmcgaw/gluetun:latest
container_name: gluetun
cap_add:
- NET_ADMIN
environment:
- VPN_SERVICE_PROVIDER=${VPN_PROVIDER:-nordvpn}
- VPN_TYPE=${VPN_TYPE:-openvpn}
- OPENVPN_USER=${VPN_USER}
- OPENVPN_PASSWORD=${VPN_PASSWORD}
- SERVER_COUNTRIES=${VPN_COUNTRIES:-Netherlands}
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/gluetun:/gluetun
ports:
- ${PROWLARR_PORT:-9696}:9696/tcp # Prowlarr through VPN
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://www.google.com/"]
interval: 60s
timeout: 10s
retries: 3
start_period: 30s
# Prowlarr running through VPN
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/prowlarr:/config
network_mode: "container:gluetun" # Use VPN container's network
security_opt:
- no-new-privileges:true
restart: always
depends_on:
- gluetun
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9696/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
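Every `${VAR:-default}` reference in the compose file above uses Compose's shell-style parameter expansion: when the variable is unset or empty, the value after `:-` is substituted. The rule behaves exactly as in the shell:

```shell
unset PUID
echo "PUID=${PUID:-1234}"   # unset, so the default applies -> PUID=1234

PUID=1000
echo "PUID=${PUID:-1234}"   # set, so the real value wins -> PUID=1000
```
This is why the stack runs with sane defaults even when no `.env` file is present.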

View File

@@ -1,129 +0,0 @@
version: '3.8'
services:
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/sonarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${SONARR_PORT:-8989}:8989/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8989/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
radarr:
image: linuxserver/radarr:latest
container_name: radarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/radarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${RADARR_PORT:-7878}:7878/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:7878/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
lidarr:
image: linuxserver/lidarr:latest
container_name: lidarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/lidarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${LIDARR_PORT:-8686}:8686/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8686/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
bazarr:
image: linuxserver/bazarr:latest
container_name: bazarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/bazarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${BAZARR_PORT:-6767}:6767/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:6767/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/prowlarr:/config
ports:
- ${PROWLARR_PORT:-9696}:9696/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9696/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
# Optional: Uncomment if you want to use Docker networks instead of synobridge
# networks:
# arrs-network:
# driver: bridge
# ipam:
# config:
# - subnet: 172.20.0.0/16

View File

@@ -1,25 +0,0 @@
version: '3.8'
services:
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/prowlarr:/config
ports:
- ${PROWLARR_PORT:-9696}:9696/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9696/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s

View File

@@ -1,26 +0,0 @@
version: '3.8'
services:
radarr:
image: linuxserver/radarr:latest
container_name: radarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/radarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${RADARR_PORT:-7878}:7878/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:7878/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s

View File

@@ -1,26 +0,0 @@
version: '3.8'
services:
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID=${PUID:-1234}
- PGID=${PGID:-65432}
- TZ=${TZ:-Europe/London}
- UMASK=022
volumes:
- ${CONFIG_ROOT:-/volume1/docker}/sonarr:/config
- ${DATA_ROOT:-/volume1/data}:/data
ports:
- ${SONARR_PORT:-8989}:8989/tcp
network_mode: ${NETWORK_MODE:-synobridge}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8989/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s

221
deploy.sh
View File

@@ -1,221 +0,0 @@
#!/bin/bash
# Quick deployment script for *arr Media Stack
# Usage: ./deploy.sh [environment]
set -euo pipefail
# Configuration
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
ENVIRONMENT="${1:-production}"
INVENTORY_FILE="inventory/${ENVIRONMENT}.yml"
PLAYBOOK_FILE="ansible-deployment.yml"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Logging function
log() {
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')]${NC} $1"
}
error() {
echo -e "${RED}[ERROR]${NC} $1" >&2
}
success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
# Check prerequisites
check_prerequisites() {
log "Checking prerequisites..."
# Check if ansible is installed
if ! command -v ansible &> /dev/null; then
error "Ansible is not installed. Please install Ansible first."
exit 1
fi
# Check if inventory file exists
if [[ ! -f "$INVENTORY_FILE" ]]; then
error "Inventory file $INVENTORY_FILE not found."
echo "Please create it from the example:"
echo "cp inventory/production.yml.example $INVENTORY_FILE"
exit 1
fi
# Check if vault file exists
if [[ ! -f "group_vars/all/vault.yml" ]]; then
warning "Vault file not found. Creating from example..."
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
echo "Please edit group_vars/all/vault.yml with your credentials and encrypt it:"
echo "ansible-vault encrypt group_vars/all/vault.yml"
exit 1
fi
# Check if playbook exists
if [[ ! -f "$PLAYBOOK_FILE" ]]; then
error "Playbook file $PLAYBOOK_FILE not found."
exit 1
fi
success "Prerequisites check passed!"
}
# Test connectivity
test_connectivity() {
log "Testing connectivity to target hosts..."
if ansible all -i "$INVENTORY_FILE" -m ping --ask-vault-pass; then
success "Connectivity test passed!"
else
error "Connectivity test failed. Please check your inventory and SSH configuration."
exit 1
fi
}
# Validate configuration
validate_config() {
log "Validating Ansible configuration..."
if ansible-playbook -i "$INVENTORY_FILE" "$PLAYBOOK_FILE" --syntax-check; then
success "Configuration validation passed!"
else
error "Configuration validation failed. Please check your playbook syntax."
exit 1
fi
}
# Run deployment
deploy_stack() {
log "Starting deployment of *arr media stack..."
# Ask for confirmation
echo
echo "🚀 Ready to deploy the complete *arr media stack!"
echo "Environment: $ENVIRONMENT"
echo "Inventory: $INVENTORY_FILE"
echo "Playbook: $PLAYBOOK_FILE"
echo
read -p "Continue with deployment? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
warning "Deployment cancelled by user."
exit 0
fi
# Run the playbook
if ansible-playbook -i "$INVENTORY_FILE" "$PLAYBOOK_FILE" --ask-vault-pass; then
success "🎉 Deployment completed successfully!"
echo
echo "📊 Your *arr stack is now running!"
echo "Check the services at the URLs provided in the playbook output."
echo
echo "📋 Next steps:"
echo "1. Configure indexers in Prowlarr"
echo "2. Connect applications to Prowlarr"
echo "3. Set up download clients"
echo "4. Configure Plex libraries"
echo "5. Set up Jellyseerr for requests"
echo
echo "📖 See ANSIBLE_DEPLOYMENT.md for detailed configuration instructions."
else
error "Deployment failed. Check the output above for details."
exit 1
fi
}
# Dry run option
dry_run() {
log "Running deployment in check mode (dry run)..."
if ansible-playbook -i "$INVENTORY_FILE" "$PLAYBOOK_FILE" --check --ask-vault-pass; then
success "Dry run completed successfully!"
echo "No issues found. Ready for actual deployment."
else
error "Dry run failed. Please fix the issues before deploying."
exit 1
fi
}
# Show help
show_help() {
cat << EOF
🚀 *arr Media Stack Deployment Script
Usage: $0 [OPTIONS] [ENVIRONMENT]
ENVIRONMENTS:
production Deploy to production environment (default)
staging Deploy to staging environment
development Deploy to development environment
OPTIONS:
--check Run in check mode (dry run)
--test Test connectivity only
--validate Validate configuration only
--help Show this help message
EXAMPLES:
$0 # Deploy to production
$0 staging # Deploy to staging
$0 --check production # Dry run on production
$0 --test # Test connectivity
$0 --validate # Validate configuration
PREREQUISITES:
- Ansible 2.12+ installed
- SSH key-based authentication configured
- Inventory file created (inventory/\${ENVIRONMENT}.yml)
- Vault file configured (group_vars/all/vault.yml)
For detailed instructions, see ANSIBLE_DEPLOYMENT.md
EOF
}
# Main execution
main() {
cd "$SCRIPT_DIR"
case "${1:-}" in
--help|-h)
show_help
exit 0
;;
--check)
ENVIRONMENT="${2:-production}"
INVENTORY_FILE="inventory/${ENVIRONMENT}.yml"
check_prerequisites
validate_config
dry_run
;;
--test)
ENVIRONMENT="${2:-production}"
INVENTORY_FILE="inventory/${ENVIRONMENT}.yml"
check_prerequisites
test_connectivity
;;
--validate)
check_prerequisites
validate_config
;;
*)
check_prerequisites
validate_config
test_connectivity
deploy_stack
;;
esac
}
# Run main function with all arguments
main "$@"
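deploy.sh opens with `set -euo pipefail`; the `pipefail` part is what makes a failing command inside a pipeline abort the script instead of being masked by a later command's success. Its effect in isolation:

```shell
# Without pipefail a pipeline's status is the last command's; with it,
# the rightmost non-zero status wins.
pipeline_status() (
    set -o pipefail
    false | true
    echo "exit=$?"
)

pipeline_status   # -> exit=1
```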

View File

@@ -1,421 +0,0 @@
# Configuration Guide
This guide covers the initial configuration of each application in the Arrs stack after deployment.
## Initial Setup Workflow
1. **Start with Prowlarr** - Set up indexers first
2. **Configure Sonarr/Radarr/Lidarr** - Set up media management
3. **Set up Bazarr** - Configure subtitle management
4. **Add Download Clients** - Connect to your torrent/usenet clients
## Prowlarr Configuration
Prowlarr acts as the central indexer manager for all other applications.
### First-Time Setup
1. Access Prowlarr at `http://YOUR_NAS_IP:9696`
2. Complete the initial setup wizard
3. Set authentication if desired (Settings → General → Security)
### Adding Indexers
1. Go to **Indexers****Add Indexer**
2. Choose your indexer type (Public or Private)
3. Configure indexer settings:
- **Name**: Descriptive name
- **URL**: Indexer URL
- **API Key**: If required
- **Categories**: Select relevant categories
4. Test the connection
5. Save the indexer
### Connecting to Apps
1. Go to **Settings****Apps**
2. Click **Add Application**
3. Select the application type (Sonarr, Radarr, etc.)
4. Configure connection:
- **Name**: Application name
- **Server**: `172.20.0.1` (synobridge gateway)
- **Port**: Application port (8989 for Sonarr, 7878 for Radarr, etc.)
- **API Key**: Get from the target application
5. Test and save
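Before saving a connection, it can help to verify the API key against the app's status endpoint directly. Sonarr and Radarr expose `/api/v3`, while Prowlarr and Lidarr use `/api/v1`. The helper below only assembles the URL; the host, port, and key shown are placeholders:

```shell
# Build the system-status URL for a v3-API *arr app (Sonarr/Radarr)
arr_status_url() {
    local host="$1" port="$2"
    echo "http://${host}:${port}/api/v3/system/status"
}

arr_status_url 172.20.0.1 8989
# With a real API key, check reachability with:
#   curl -fsS -H "X-Api-Key: YOUR_KEY" "$(arr_status_url 172.20.0.1 8989)"
```
A `200` response with JSON confirms both the address and the key are correct.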
## Sonarr Configuration (TV Shows)
### Initial Setup
1. Access Sonarr at `http://YOUR_NAS_IP:8989`
2. Complete the setup wizard
3. Set authentication (Settings → General → Security)
### Root Folders
1. Go to **Settings****Media Management**
2. Click **Add Root Folder**
3. Set path: `/data/media/tv`
4. Save the configuration
### Quality Profiles
1. Go to **Settings****Profiles****Quality Profiles**
2. Edit or create profiles based on your preferences:
- **HD-1080p**: For 1080p content
- **HD-720p/1080p**: For mixed HD content
- **Any**: For any quality
### Download Clients
1. Go to **Settings****Download Clients**
2. Click **Add Download Client**
3. Select your client type (qBittorrent, Transmission, etc.)
4. Configure connection:
- **Host**: `172.20.0.1`
- **Port**: Your client's port
- **Username/Password**: If required
- **Category**: `tv` (recommended)
5. Test and save
### Indexers (if not using Prowlarr)
1. Go to **Settings****Indexers**
2. Add indexers manually or sync from Prowlarr
## Radarr Configuration (Movies)
### Initial Setup
1. Access Radarr at `http://YOUR_NAS_IP:7878`
2. Complete the setup wizard
3. Set authentication (Settings → General → Security)
### Root Folders
1. Go to **Settings****Media Management**
2. Click **Add Root Folder**
3. Set path: `/data/media/movies`
4. Save the configuration
### Quality Profiles
Configure quality profiles similar to Sonarr:
- **HD-1080p**
- **HD-720p/1080p**
- **Ultra-HD** (for 4K content)
### Download Clients
1. Go to **Settings****Download Clients**
2. Configure similar to Sonarr
3. Set **Category**: `movies`
## Lidarr Configuration (Music)
### Initial Setup
1. Access Lidarr at `http://YOUR_NAS_IP:8686`
2. Complete the setup wizard
3. Set authentication (Settings → General → Security)
### Root Folders
1. Go to **Settings****Media Management**
2. Click **Add Root Folder**
3. Set path: `/data/media/music`
4. Save the configuration
### Quality Profiles
Configure for music quality:
- **Lossless**: FLAC, ALAC
- **High Quality**: 320kbps MP3
- **Standard**: 192-256kbps MP3
### Download Clients
1. Configure similar to other apps
2. Set **Category**: `music`
### Metadata Profiles
1. Go to **Settings****Profiles****Metadata Profiles**
2. Configure which metadata to download:
- **Primary Types**: Album, Single, EP
- **Secondary Types**: Studio, Live, etc.
## Bazarr Configuration (Subtitles)
### Initial Setup
1. Access Bazarr at `http://YOUR_NAS_IP:6767`
2. Complete the setup wizard
3. Set authentication (Settings → General → Security)
### Sonarr Integration
1. Go to **Settings****Sonarr**
2. Enable Sonarr integration
3. Configure connection:
- **Address**: `172.20.0.1`
- **Port**: `8989`
- **API Key**: From Sonarr
- **Base URL**: Leave empty
4. Set **Path Mappings** if needed
5. Test and save
### Radarr Integration
1. Go to **Settings****Radarr**
2. Configure similar to Sonarr integration
3. Use port `7878` for Radarr
### Subtitle Providers
1. Go to **Settings****Providers**
2. Add subtitle providers:
- **OpenSubtitles**: Free, requires registration
- **Subscene**: Free, no registration
- **Addic7ed**: Free, requires registration
3. Configure provider settings and authentication
### Languages
1. Go to **Settings****Languages**
2. Add desired subtitle languages
3. Set language priorities
## Download Client Configuration
### qBittorrent Example
If using qBittorrent as your download client:
1. **Connection Settings**:
- Host: `172.20.0.1`
- Port: `8081` (qBittorrent WebUI port - configured to avoid conflict with SABnzbd)
- Username/Password: Your qBittorrent credentials
2. **Category Setup** in qBittorrent:
- `movies``/data/torrents/movies`
- `tv``/data/torrents/tv`
- `music``/data/torrents/music`
3. **Path Mappings** (if needed):
- Remote Path: `/downloads`
- Local Path: `/data/torrents`
### Transmission Example
For Transmission:
1. **Connection Settings**:
- Host: `172.20.0.1`
- Port: `9091` (default Transmission port)
- Username/Password: If authentication enabled
2. **Directory Setup**:
- Download Directory: `/data/torrents`
- Use categories or labels for organization
## Advanced Configuration
### Custom Scripts
You can add custom scripts for post-processing:
1. Go to **Settings****Connect**
2. Add **Custom Script** connection
3. Configure script path and triggers
### Notifications
Set up notifications for downloads and imports:
1. Go to **Settings****Connect**
2. Add notification services:
- **Discord**
- **Slack**
- **Email**
- **Pushover**
### Quality Definitions
Fine-tune quality definitions:
1. Go to **Settings****Quality**
2. Adjust size limits for each quality
3. Set preferred quality ranges
## Remote Path Mappings
If your download client is on a different system or uses different paths:
1. Go to **Settings****Download Clients**
2. Click **Add Remote Path Mapping**
3. Configure:
- **Host**: Download client IP
- **Remote Path**: Path as seen by download client
- **Local Path**: Path as seen by Arr application
## Backup Configuration
### Export Settings
Each application allows exporting settings:
1. Go to **System****Backup**
2. Click **Backup Now**
3. Download the backup file
### Automated Backups
Use the provided backup script:
```bash
# Create backup
./scripts/backup.sh
# Schedule with cron (optional)
crontab -e
# Add: 0 2 * * 0 /path/to/synology-arrs-stack/scripts/backup.sh
```
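The `0 2 * * 0` schedule above means 02:00 every Sunday (cron fields are minute, hour, day-of-month, month, day-of-week, with Sunday as 0). A tiny decoder for the field order, as a sanity check when editing the crontab:

```shell
# Split a five-field cron expression; set -f stops the * fields from globbing
explain_cron() (
    set -f
    set -- $1
    echo "minute=$1 hour=$2 dom=$3 month=$4 dow=$5"
)

explain_cron "0 2 * * 0"
# -> minute=0 hour=2 dom=* month=* dow=0
```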
## Monitoring and Maintenance
### Health Checks
Monitor application health:
1. Check **System****Status** in each app
2. Review **System****Logs** for errors
3. Use the logs script: `./scripts/logs.sh status`
### Updates
Keep applications updated:
1. **Manual**: Pull new images and restart containers
2. **Automated**: Use Watchtower or similar tools
3. **Script**: Use `./scripts/deploy.sh` to update
### Performance Tuning
Optimize performance:
1. **Resource Limits**: Set CPU/memory limits in docker-compose
2. **Database Maintenance**: Regular database cleanup
3. **Log Rotation**: Configure log rotation to save space
## Troubleshooting Common Issues
### Import Issues
1. **Check file permissions**: Ensure dockerlimited user can access files
2. **Verify paths**: Confirm download and media paths are correct
3. **Review logs**: Check application logs for specific errors
### Connection Issues
1. **Network connectivity**: Test connections between services
2. **API keys**: Verify API keys are correct and active
3. **Firewall**: Ensure ports are open
### Performance Issues
1. **Resource usage**: Monitor CPU and memory usage
2. **Disk I/O**: Check for disk bottlenecks
3. **Network**: Verify network performance
For more detailed troubleshooting, see [TROUBLESHOOTING.md](TROUBLESHOOTING.md).
## New Services Configuration
### Whisparr Configuration (Adult Content)
Whisparr manages adult content similar to Sonarr/Radarr.
1. Access Whisparr at `http://YOUR_VPS_IP:6969`
2. **Media Management**:
- Root Folder: `/data/xxx`
- File naming and organization similar to Sonarr
3. **Indexers**: Connect via Prowlarr or add manually
4. **Download Clients**: Use same clients as other Arrs
5. **Quality Profiles**: Set up quality preferences for adult content
### Tautulli Configuration (Plex Monitoring)
Tautulli provides detailed statistics and monitoring for Plex.
1. Access Tautulli at `http://YOUR_VPS_IP:8181`
2. **Initial Setup**:
- Plex Server: `http://plex:32400` (internal Docker network)
- Plex Token: Get from Plex settings
3. **Configuration**:
- Enable activity monitoring
- Set up notification agents
- Configure user management
4. **Features**:
- View play statistics
- Monitor user activity
- Set up kill stream notifications
- Generate usage reports
### Jellyseerr Configuration (Media Requests)
Jellyseerr allows users to request media additions.
1. Access Jellyseerr at `http://YOUR_VPS_IP:5055`
2. **Initial Setup**:
- Connect to Plex server: `http://plex:32400`
- Import Plex libraries and users
3. **Service Connections**:
- **Sonarr**: `http://sonarr:8989` with API key
- **Radarr**: `http://radarr:7878` with API key
- **Whisparr**: `http://whisparr:6969` with API key (if desired)
4. **User Management**:
- Set user permissions and quotas
- Configure approval workflows
- Set up notification preferences
### TubeArchivist Configuration (YouTube Archiving)
TubeArchivist downloads and organizes YouTube content.
1. Access TubeArchivist at `http://YOUR_VPS_IP:8000`
2. **Initial Setup**:
- Username: `tubearchivist`
- Password: `verysecret` (change after first login)
3. **Configuration**:
- **Downloads**: Set quality preferences and formats
- **Channels**: Subscribe to channels for automatic downloads
- **Playlists**: Import and monitor playlists
- **Scheduling**: Set up download schedules
4. **Integration**:
- Media files stored in `/data/youtube`
- Can be added to Plex as a library if desired
- Elasticsearch provides search functionality
- Redis handles task queuing
#### TubeArchivist Dependencies
- **Elasticsearch**: Provides search and indexing (internal port 9200)
- **Redis**: Handles background tasks and caching (internal port 6379)
- Both services are automatically configured and don't require manual setup
### Security Considerations for New Services
1. **Change Default Passwords**:
- TubeArchivist: Change from `verysecret`
- Elasticsearch: Change from `verysecret`
2. **Access Control**:
- Consider using reverse proxy with authentication
- Limit access via UFW firewall rules
- Use Tailscale for secure remote access
3. **Resource Usage**:
- TubeArchivist with Elasticsearch can be resource-intensive
- Monitor disk usage for YouTube downloads
- Consider setting download limits and cleanup policies

View File

@@ -1,205 +0,0 @@
# 🌐 Service Access Guide
This guide provides direct access URLs for all services in your Arrs media stack deployment.
## 🔗 **Tailscale Access (Recommended)**
**Tailscale IP**: `YOUR_TAILSCALE_IP`
All services are accessible via your Tailscale network for secure remote access:
### 📺 **Core Media Management (The Arrs)**
| Service | URL | Purpose | Default Login |
|---------|-----|---------|---------------|
| **Sonarr** | http://YOUR_TAILSCALE_IP:8989 | TV show management & automation | No default login |
| **Radarr** | http://YOUR_TAILSCALE_IP:7878 | Movie management & automation | No default login |
| **Lidarr** | http://YOUR_TAILSCALE_IP:8686 | Music management & automation | No default login |
| **Bazarr** | http://YOUR_TAILSCALE_IP:6767 | Subtitle management & automation | No default login |
| **Prowlarr** | http://YOUR_TAILSCALE_IP:9696 | Indexer management & search | No default login |
| **Whisparr** | http://YOUR_TAILSCALE_IP:6969 | Adult content management | No default login |
### ⬇️ **Download Clients (VPN Protected)**
| Service | URL | Purpose | Default Login |
|---------|-----|---------|---------------|
| **SABnzbd** | http://YOUR_TAILSCALE_IP:8080 | Usenet downloader (via VPN) | Setup wizard on first run |
| **qBittorrent** | http://YOUR_TAILSCALE_IP:8081 | BitTorrent client (via VPN) | admin / adminadmin |
**⚠️ Important**: Change qBittorrent default password immediately after first login!
### 🎬 **Media Server & Management**
| Service | URL | Purpose | Default Login |
|---------|-----|---------|---------------|
| **Plex** | http://YOUR_TAILSCALE_IP:32400/web | Media streaming server | Plex account required |
| **Tautulli** | http://YOUR_TAILSCALE_IP:8181 | Plex monitoring & statistics | No default login |
| **Jellyseerr** | http://YOUR_TAILSCALE_IP:5055 | Media request management | Setup wizard on first visit |
### 📺 **YouTube Archiving**
| Service | URL | Purpose | Default Login |
|---------|-----|---------|---------------|
| **TubeArchivist** | http://YOUR_TAILSCALE_IP:8000 | YouTube content archiving | Setup wizard on first visit |
## 🌍 **Public Internet Access**
If you've enabled public access in your configuration, services are also accessible via your VPS public IP:
### 📺 **Core Media Management**
- **Sonarr**: `http://YOUR_VPS_IP:8989`
- **Radarr**: `http://YOUR_VPS_IP:7878`
- **Lidarr**: `http://YOUR_VPS_IP:8686`
- **Bazarr**: `http://YOUR_VPS_IP:6767`
- **Prowlarr**: `http://YOUR_VPS_IP:9696`
- **Whisparr**: `http://YOUR_VPS_IP:6969`
### ⬇️ **Download Clients**
- **SABnzbd**: `http://YOUR_VPS_IP:8080`
- **qBittorrent**: `http://YOUR_VPS_IP:8081`
### 🎬 **Media Server & Management**
- **Plex**: `http://YOUR_VPS_IP:32400/web`
- **Tautulli**: `http://YOUR_VPS_IP:8181`
- **Jellyseerr**: `http://YOUR_VPS_IP:5055`
### 📺 **YouTube Archiving**
- **TubeArchivist**: `http://YOUR_VPS_IP:8000`
## 🔐 **Security Recommendations**
### 🛡️ **Tailscale Access (Recommended)**
- **✅ Secure**: All traffic encrypted through Tailscale VPN
- **✅ Private**: Services not exposed to public internet
- **✅ Convenient**: Access from any device with Tailscale installed
- **✅ No port forwarding**: No need to open firewall ports
### 🌐 **Public Access Considerations**
- **⚠️ Security Risk**: Services exposed to public internet
- **🔒 Authentication**: Ensure strong passwords on all services
- **🛡️ Firewall**: UFW firewall provides basic protection
- **📊 Monitoring**: Fail2ban monitors for intrusion attempts
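A minimal audit sketch for these points, assuming the UFW and Fail2ban setups described in this guide:

```bash
# Sketch: confirm the basic public-access protections are actually in place.
audit_summary=""
if command -v ufw >/dev/null 2>&1; then
  audit_summary="${audit_summary}ufw: installed"$'\n'
  sudo ufw status verbose || true
else
  audit_summary="${audit_summary}ufw: MISSING -- public services have no host firewall"$'\n'
fi
if command -v fail2ban-client >/dev/null 2>&1; then
  audit_summary="${audit_summary}fail2ban: installed"$'\n'
  sudo fail2ban-client status || true
else
  audit_summary="${audit_summary}fail2ban: MISSING -- no intrusion monitoring"$'\n'
fi
printf '%s' "$audit_summary"
```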
## 🚀 **Quick Access Bookmarks**
Save these bookmarks for easy access to your media stack:
### 📱 **Mobile/Tablet Bookmarks**
```
Sonarr TV: http://YOUR_TAILSCALE_IP:8989
Radarr Movies: http://YOUR_TAILSCALE_IP:7878
Plex Media: http://YOUR_TAILSCALE_IP:32400/web
Jellyseerr Requests: http://YOUR_TAILSCALE_IP:5055
```
### 💻 **Desktop Bookmarks**
```
Media Management Dashboard:
├── Sonarr (TV): http://YOUR_TAILSCALE_IP:8989
├── Radarr (Movies): http://YOUR_TAILSCALE_IP:7878
├── Lidarr (Music): http://YOUR_TAILSCALE_IP:8686
├── Bazarr (Subtitles): http://YOUR_TAILSCALE_IP:6767
├── Prowlarr (Indexers): http://YOUR_TAILSCALE_IP:9696
└── Whisparr (Adult): http://YOUR_TAILSCALE_IP:6969
Download Clients:
├── SABnzbd: http://YOUR_TAILSCALE_IP:8080
└── qBittorrent: http://YOUR_TAILSCALE_IP:8081
Media & Monitoring:
├── Plex Server: http://YOUR_TAILSCALE_IP:32400/web
├── Tautulli Stats: http://YOUR_TAILSCALE_IP:8181
├── Jellyseerr Requests: http://YOUR_TAILSCALE_IP:5055
└── TubeArchivist: http://YOUR_TAILSCALE_IP:8000
```
## 🔧 **Service Configuration Tips**
### 🎯 **First-Time Setup Priority**
1. **Prowlarr** (http://YOUR_TAILSCALE_IP:9696) - Configure indexers first
2. **Sonarr/Radarr/Lidarr** - Add Prowlarr as indexer source
3. **Download Clients** - Configure qBittorrent/SABnzbd in Arrs
4. **Plex** - Add media libraries and configure remote access
5. **Jellyseerr** - Connect to Plex and configure user requests
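Before working through these steps, a quick sketch to confirm each web UI answers (`HOST` is a placeholder for your Tailscale IP; ports are the defaults from the access tables above):

```bash
# Sketch: probe each service port once and report reachability.
HOST="${HOST:-YOUR_TAILSCALE_IP}"
declare -A services=(
  [Prowlarr]=9696 [Sonarr]=8989 [Radarr]=7878 [Lidarr]=8686
  [SABnzbd]=8080 [qBittorrent]=8081 [Plex]=32400 [Jellyseerr]=5055
)
report=""
for name in "${!services[@]}"; do
  port=${services[$name]}
  if curl --max-time 3 -so /dev/null "http://$HOST:$port"; then
    line="$name ($port): reachable"
  else
    line="$name ($port): NOT reachable"
  fi
  echo "$line"
  report="$report$line"$'\n'
done
```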
### 📊 **Monitoring Setup**
1. **Tautulli** - Connect to Plex for detailed statistics
2. **Health Dashboard** - SSH to VPS and run `health` command
3. **VPN Status** - Check `docker logs gluetun` for VPN connection
### 🔐 **Security Setup**
1. **Change Default Passwords**: qBittorrent admin password
2. **Enable Authentication**: Set up auth on services that support it
3. **Review Firewall**: Check UFW status with `sudo ufw status`
4. **Monitor Logs**: Check `/var/log/arrs/` for monitoring logs
## 📞 **Support & Troubleshooting**
If you can't access a service:
1. **Check Service Status**: `docker ps` to see running containers
2. **Check Logs**: `docker logs [service-name]` for error messages
3. **Check Firewall**: `sudo ufw status` to verify port access
4. **Check Tailscale**: Ensure Tailscale is connected on your device
5. **Health Dashboard**: Run `/usr/local/bin/health-dashboard.sh` on VPS
## 🔧 **Specific Service Troubleshooting**
### 🚨 **TubeArchivist Not Loading**
If TubeArchivist (http://YOUR_TAILSCALE_IP:8000) shows connection errors:
```bash
# Check all TubeArchivist containers
docker ps | grep tubearchivist
# Restart in correct order (dependencies first)
docker-compose restart tubearchivist-es
sleep 30
docker-compose restart tubearchivist-redis
sleep 10
docker-compose restart tubearchivist
# Check logs if still not working
docker-compose logs -f tubearchivist
```
### 🔄 **Download Client Port Mix-up**
If ports are swapped (SABnzbd on 8081, qBittorrent on 8080):
```bash
# Restart VPN container and download clients
docker-compose restart gluetun
sleep 30
docker-compose restart qbittorrent sabnzbd
# Verify correct mapping
curl -I http://YOUR_TAILSCALE_IP:8080 # Should show SABnzbd
curl -I http://YOUR_TAILSCALE_IP:8081 # Should show qBittorrent
```
### 🌐 **VPN Connection Issues**
If download clients show "Unauthorized" or won't load:
```bash
# Check VPN status
docker-compose logs gluetun | tail -20
# Restart entire VPN stack
docker-compose down qbittorrent sabnzbd gluetun
docker-compose up -d gluetun
sleep 60
docker-compose up -d qbittorrent sabnzbd
```
## 🎉 **Enjoy Your Media Stack!**
Your complete Arrs media management stack is now accessible via Tailscale at `YOUR_TAILSCALE_IP`. All services are running and ready for configuration!
**Next Steps**:
1. Configure Prowlarr with your preferred indexers
2. Set up Sonarr/Radarr with your media preferences
3. Configure download clients with VPN protection
4. Add media libraries to Plex
5. Set up Jellyseerr for user requests
Happy streaming! 🍿📺🎵

# Troubleshooting Guide
This guide covers common issues you might encounter when deploying and running the Arrs media stack, along with step-by-step solutions.
## Table of Contents
1. [Quick Diagnostics](#quick-diagnostics)
2. [Deployment Issues](#deployment-issues)
3. [Service Issues](#service-issues)
4. [Network and Connectivity](#network-and-connectivity)
5. [Storage and Permissions](#storage-and-permissions)
6. [VPN Issues](#vpn-issues)
7. [Performance Issues](#performance-issues)
8. [Backup and Recovery](#backup-and-recovery)
9. [Getting Help](#getting-help)
## Quick Diagnostics
### Essential Commands
```bash
# Check all container status
docker ps -a
# Check system resources
htop
df -h
# Check logs for all services
docker-compose logs --tail=50
# Check specific service logs
docker logs [service_name] --tail=50
# Test network connectivity
ping google.com
curl -I https://google.com
```
### Health Check Script
Create a quick health check script:
```bash
#!/bin/bash
# Save as ~/check_health.sh
echo "=== System Resources ==="
df -h | grep -E "(Filesystem|/dev/)"
free -h
echo ""
echo "=== Docker Status ==="
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo ""
echo "=== Service Health ==="
services=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "qbittorrent" "plex" "tubearchivist" "tubearchivist-es" "tubearchivist-redis" "jellyseerr" "tautulli")
for service in "${services[@]}"; do
if docker ps --format '{{.Names}}' | grep -q "$service"; then
echo "$service: Running"
else
echo "$service: Not running"
fi
done
```
Make it executable and run:
```bash
chmod +x ~/check_health.sh
~/check_health.sh
```
## Deployment Issues
### Ansible Playbook Fails
#### Issue: "Permission denied" errors
**Symptoms:**
```
TASK [Create docker user] ****
fatal: [localhost]: FAILED! => {"msg": "Permission denied"}
```
**Solutions:**
1. Ensure you're running as a user with sudo privileges:
```bash
sudo -v # Test sudo access
```
2. If using a non-root user, ensure they're in the sudo group:
```bash
sudo usermod -aG sudo $USER
# Log out and back in
```
3. Run with explicit sudo if needed:
```bash
ansible-playbook -i inventory/hosts site.yml --become --ask-become-pass
```
#### Issue: "Docker not found" after installation
**Symptoms:**
```
TASK [Start Docker service] ****
fatal: [localhost]: FAILED! => {"msg": "Could not find the requested service docker"}
```
**Solutions:**
1. Manually install Docker:
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Log out and back in
```
2. Restart the deployment:
```bash
ansible-playbook -i inventory/hosts site.yml
```
#### Issue: "Port already in use"
**Symptoms:**
```
ERROR: for sonarr Cannot start service sonarr: driver failed programming external connectivity on endpoint sonarr: Bind for 0.0.0.0:8989 failed: port is already allocated
```
**Solutions:**
1. Check what's using the port:
```bash
sudo netstat -tulpn | grep :8989
sudo lsof -i :8989
```
2. Stop the conflicting service or change ports in `group_vars/all.yml`:
```yaml
ports:
sonarr: 8990 # Changed from 8989
```
3. Redeploy:
```bash
ansible-playbook -i inventory/hosts site.yml
```
### Configuration File Issues
#### Issue: YAML syntax errors
**Symptoms:**
```
ERROR! Syntax Error while loading YAML.
```
**Solutions:**
1. Validate YAML syntax:
```bash
python3 -c "import yaml; yaml.safe_load(open('group_vars/all.yml'))"
```
2. Common YAML mistakes:
- Missing quotes around special characters
- Incorrect indentation (use spaces, not tabs)
- Missing colons after keys
3. Use a YAML validator online or in your editor
#### Issue: Undefined variables
**Symptoms:**
```
AnsibleUndefinedVariable: 'vpn_provider' is undefined
```
**Solutions:**
1. Check if variable is defined in `group_vars/all.yml`
2. Ensure proper indentation and spelling
3. Add missing variables:
```yaml
vpn_provider: "nordvpn" # Add if missing
```
## Service Issues
### Services Won't Start
#### Issue: Container exits immediately
**Check logs:**
```bash
docker logs [container_name]
```
**Common causes and solutions:**
1. **Permission issues:**
```bash
# Fix ownership
sudo chown -R 1000:1000 /home/arrs/docker
sudo chown -R 1000:1000 /home/arrs/media
```
2. **Missing directories:**
```bash
# Create missing directories
mkdir -p /home/arrs/media/{movies,tv,music,downloads}
mkdir -p /home/arrs/docker/{sonarr,radarr,lidarr,bazarr,prowlarr,qbittorrent}
```
3. **Configuration file corruption:**
```bash
# Remove corrupted config and restart
sudo rm -f /home/arrs/docker/sonarr/config.xml
docker restart sonarr
```
#### Issue: Service starts but web interface not accessible
**Symptoms:**
- Container shows as running
- Can't access web interface
**Solutions:**
1. Check if service is listening on correct port:
```bash
docker exec sonarr netstat -tulpn | grep :8989
```
2. Check firewall:
```bash
sudo ufw status
sudo ufw allow 8989 # Open required port
```
3. Check if service is bound to localhost only:
```bash
docker logs sonarr | grep -i "listening\|bind"
```
### Database Issues
#### Issue: Sonarr/Radarr database corruption
**Symptoms:**
- Service won't start
- Logs show database errors
- Web interface shows errors
**Solutions:**
1. **Backup current database:**
```bash
cp /home/arrs/docker/sonarr/sonarr.db /home/arrs/docker/sonarr/sonarr.db.backup
```
2. **Try database repair:**
```bash
# Install sqlite3
sudo apt install sqlite3
# Check database integrity
sqlite3 /home/arrs/docker/sonarr/sonarr.db "PRAGMA integrity_check;"
# Repair if needed
sqlite3 /home/arrs/docker/sonarr/sonarr.db ".recover" | sqlite3 /home/arrs/docker/sonarr/sonarr_recovered.db
```
3. **Restore from backup:**
```bash
# Stop service
docker stop sonarr
# Restore backup
cp /home/arrs/docker/sonarr/sonarr.db.backup /home/arrs/docker/sonarr/sonarr.db
# Start service
docker start sonarr
```
## Network and Connectivity
### Can't Access Services from Outside
#### Issue: Services only accessible from localhost
**Solutions:**
1. **Check Docker network configuration:**
```bash
docker network ls
docker network inspect arrs_network
```
2. **Verify port bindings:**
```bash
docker port sonarr
```
3. **Check VPS firewall:**
```bash
# Ubuntu/Debian
sudo ufw status
sudo ufw allow 8989
# CentOS/RHEL
sudo firewall-cmd --list-ports
sudo firewall-cmd --add-port=8989/tcp --permanent
sudo firewall-cmd --reload
```
4. **Check cloud provider firewall:**
- AWS: Security Groups
- DigitalOcean: Cloud Firewalls
- Google Cloud: VPC Firewall Rules
### DNS Resolution Issues
#### Issue: Services can't resolve external domains
**Symptoms:**
- Can't download metadata
- Indexer tests fail
- Updates don't work
**Solutions:**
1. **Check DNS in containers:**
```bash
docker exec sonarr nslookup google.com
docker exec sonarr cat /etc/resolv.conf
```
2. **Fix DNS configuration:**
```bash
# Edit Docker daemon configuration
sudo nano /etc/docker/daemon.json
```
Add:
```json
{
"dns": ["8.8.8.8", "1.1.1.1"]
}
```
Restart Docker:
```bash
sudo systemctl restart docker
docker-compose up -d
```
## Storage and Permissions
### Permission Denied Errors
#### Issue: Services can't write to media directories
**Symptoms:**
- Downloads fail
- Can't move files
- Import errors
**Solutions:**
1. **Check current permissions:**
```bash
ls -la /home/arrs/media/
ls -la /home/arrs/docker/
```
2. **Fix ownership:**
```bash
# Get user/group IDs
id arrs
# Fix ownership (replace 1000:1000 with actual IDs)
sudo chown -R 1000:1000 /home/arrs/media
sudo chown -R 1000:1000 /home/arrs/docker
```
3. **Fix permissions:**
```bash
sudo chmod -R 755 /home/arrs/media
sudo chmod -R 755 /home/arrs/docker
```
4. **Verify container user mapping:**
```bash
docker exec sonarr id
```
### Disk Space Issues
#### Issue: No space left on device
**Solutions:**
1. **Check disk usage:**
```bash
df -h
du -sh /home/arrs/* | sort -hr
```
2. **Clean up Docker:**
```bash
# Remove unused containers, networks, images
docker system prune -a
# Remove unused volumes (CAREFUL!)
docker volume prune
```
3. **Clean up logs:**
```bash
# Truncate large log files
sudo truncate -s 0 /var/log/syslog
sudo truncate -s 0 /var/log/kern.log
# Clean Docker logs
docker logs sonarr 2>/dev/null | wc -l # Count log lines
sudo sh -c 'echo "" > $(docker inspect --format="{{.LogPath}}" sonarr)'
```
4. **Move media to external storage:**
```bash
# Mount additional storage
sudo mkdir /mnt/media
sudo mount /dev/sdb1 /mnt/media
# Update configuration
nano group_vars/all.yml
# Change media_root: "/mnt/media"
```
## TubeArchivist Issues
### TubeArchivist Won't Start
#### Issue: Elasticsearch container fails to start
**Symptoms:**
- TubeArchivist shows "Elasticsearch connection failed"
- Elasticsearch container exits with memory errors
- Web interface shows database connection errors
**Solutions:**
1. **Increase memory allocation:**
```bash
# Check available memory
free -h
# If less than 4GB available, reduce ES memory in docker-compose.yml
# Change ES_JAVA_OPTS from -Xms1g -Xmx1g to -Xms512m -Xmx512m
```
2. **Fix Elasticsearch permissions:**
```bash
sudo chown -R 1000:1000 /home/arrs/docker/tubearchivist/es
sudo chmod -R 755 /home/arrs/docker/tubearchivist/es
```
3. **Check disk space:**
```bash
df -h /home/arrs/docker/tubearchivist/
```
#### Issue: Downloads fail or get stuck
**Symptoms:**
- Videos remain in "Pending" status
- Download queue shows errors
- yt-dlp errors in logs
**Solutions:**
1. **Update yt-dlp:**
```bash
docker exec tubearchivist pip install --upgrade yt-dlp
docker restart tubearchivist
```
2. **Check YouTube channel/video availability:**
- Verify the channel/video is still available
- Check if the channel has geographic restrictions
- Try downloading a different video to test
3. **Clear download queue:**
```bash
# Access TubeArchivist web interface
# Go to Downloads → Queue → Clear Failed
```
4. **Check storage space:**
```bash
df -h /home/arrs/media/youtube
```
#### Issue: Videos won't play or thumbnails missing
**Symptoms:**
- Videos show but won't play
- Missing thumbnails
- Playback errors
**Solutions:**
1. **Check file permissions:**
```bash
ls -la /home/arrs/media/youtube/
sudo chown -R 1000:1000 /home/arrs/media/youtube
```
2. **Regenerate thumbnails:**
- Go to Settings → Application → Reindex
- Select "Thumbnails" and run reindex
3. **Check video file integrity:**
```bash
# Test video files
find /home/arrs/media/youtube -name "*.mp4" -exec file {} \;
```
### TubeArchivist Performance Issues
#### Issue: Slow downloads or high CPU usage
**Solutions:**
1. **Limit concurrent downloads:**
- Go to Settings → Download → Max concurrent downloads
- Reduce from default (4) to 1-2 for lower-end systems
2. **Adjust video quality:**
- Go to Settings → Download → Video quality
- Choose lower quality (720p instead of 1080p) to reduce processing
3. **Schedule downloads during off-peak hours:**
- Go to Settings → Scheduling
- Set download windows for low-usage periods
#### Issue: Database performance problems
**Solutions:**
1. **Optimize Elasticsearch:**
```bash
# Increase refresh interval
docker exec tubearchivist-es curl -X PUT "localhost:9200/_settings" -H 'Content-Type: application/json' -d'
{
"index": {
"refresh_interval": "30s"
}
}'
```
2. **Clean up old data:**
- Go to Settings → Application → Cleanup
- Remove old downloads and unused thumbnails
## VPN Issues
### VPN Won't Connect
#### Issue: Gluetun fails to connect
**Check logs:**
```bash
docker logs gluetun --tail=50
```
**Common solutions:**
1. **Wrong credentials:**
```yaml
# Verify in group_vars/all.yml
openvpn_user: "correct_username"
openvpn_password: "correct_password"
```
2. **Unsupported server location:**
```yaml
# Try different countries
vpn_countries: "Netherlands,Germany"
```
3. **Provider API issues:**
```yaml
# Try manual server selection
vpn_server_hostnames: "nl123.nordvpn.com"
```
#### Issue: qBittorrent can't connect through VPN
**Solutions:**
1. **Check network mode:**
```bash
docker inspect qbittorrent | grep NetworkMode
# Should show: "NetworkMode": "service:gluetun"
```
2. **Test connectivity:**
```bash
# Test from qBittorrent container
docker exec qbittorrent curl -s https://ipinfo.io/ip
```
3. **Restart VPN stack:**
```bash
docker restart gluetun
docker restart qbittorrent
```
## Performance Issues
### Slow Performance
#### Issue: Services are slow or unresponsive
**Solutions:**
1. **Check system resources:**
```bash
htop
iotop # Install with: sudo apt install iotop
```
2. **Optimize Docker:**
```yaml
# Limit container resources in docker-compose.yml
services:
sonarr:
deploy:
resources:
limits:
memory: 512M
reservations:
memory: 256M
```
3. **Optimize database:**
```bash
# Vacuum SQLite databases
sqlite3 /home/arrs/docker/sonarr/sonarr.db "VACUUM;"
sqlite3 /home/arrs/docker/radarr/radarr.db "VACUUM;"
```
4. **Check disk I/O:**
```bash
# Test disk speed
dd if=/dev/zero of=/tmp/test bs=1M count=1000
rm /tmp/test
```
### High CPU Usage
#### Issue: Containers using too much CPU
**Solutions:**
1. **Identify problematic containers:**
```bash
docker stats
```
2. **Check for runaway processes:**
```bash
docker exec sonarr ps aux
```
3. **Limit CPU usage:**
```yaml
# In docker-compose.yml
services:
sonarr:
deploy:
resources:
limits:
cpus: '0.5' # Limit to 50% of one CPU core
```
## Backup and Recovery
### Backup Issues
#### Issue: Backup script fails
**Check backup logs:**
```bash
# Check if backup script exists
ls -la /home/arrs/scripts/backup.sh
# Check backup logs
tail -f /var/log/arrs-backup.log
```
**Solutions:**
1. **Fix permissions:**
```bash
chmod +x /home/arrs/scripts/backup.sh
```
2. **Test backup manually:**
```bash
sudo /home/arrs/scripts/backup.sh
```
3. **Check backup destination:**
```bash
df -h /home/arrs/backups
```
### Recovery Procedures
#### Issue: Need to restore from backup
**Steps:**
1. **Stop all services:**
```bash
docker-compose down
```
2. **Restore configuration:**
```bash
# Extract backup
tar -xzf /home/arrs/backups/arrs-backup-YYYYMMDD.tar.gz -C /home/arrs/
```
3. **Fix permissions:**
```bash
sudo chown -R 1000:1000 /home/arrs/docker
```
4. **Start services:**
```bash
docker-compose up -d
```
## Getting Help
### Before Asking for Help
1. **Check logs:**
```bash
docker logs [service_name] --tail=100
```
2. **Gather system information:**
```bash
# Create debug info
echo "=== System Info ===" > debug.txt
uname -a >> debug.txt
docker --version >> debug.txt
docker-compose --version >> debug.txt
echo "=== Container Status ===" >> debug.txt
docker ps -a >> debug.txt
echo "=== Disk Usage ===" >> debug.txt
df -h >> debug.txt
echo "=== Memory Usage ===" >> debug.txt
free -h >> debug.txt
```
3. **Sanitize sensitive information:**
- Remove passwords, API keys, personal paths
- Replace IP addresses with placeholders
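A hypothetical redaction pass that automates this, masking IPv4 addresses and 32-character hex strings (the typical Arr API key format); `debug.txt` is the bundle created above:

```bash
# Sketch: mask IPv4 addresses and likely API keys before sharing logs.
redact() {
  sed -E -e 's/([0-9]{1,3}\.){3}[0-9]{1,3}/REDACTED_IP/g' \
         -e 's/\b[A-Fa-f0-9]{32}\b/REDACTED_API_KEY/g'
}
# Quick self-check of the patterns on a sample line
sample=$(echo "Server 203.0.113.7 key deadbeefdeadbeefdeadbeefdeadbeef" | redact)
echo "$sample"
# Sanitize the real bundle if it exists
if [ -f debug.txt ]; then
  redact < debug.txt > debug-sanitized.txt
fi
```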
### Where to Get Help
1. **GitHub Issues**: Create an issue with:
- Clear description of the problem
- Steps to reproduce
- Error messages
- System information
- Configuration (sanitized)
2. **Community Forums**:
- r/selfhosted
- r/sonarr, r/radarr
- Discord servers for specific applications
3. **Documentation**:
- Official documentation for each service
- Docker documentation
- Ansible documentation
### Common Support Questions
**Q: "It doesn't work"**
A: Please provide specific error messages and logs.
**Q: "Can't access from outside"**
A: Check firewall settings and port configurations.
**Q: "Downloads don't start"**
A: Check indexer configuration and download client settings.
**Q: "VPN not working"**
A: Verify credentials and check Gluetun logs.
**Q: "Running out of space"**
A: Clean up Docker images and logs, consider external storage.
### Emergency Recovery
If everything breaks:
1. **Stop all services:**
```bash
docker-compose down
```
2. **Backup current state:**
```bash
sudo tar -czf emergency-backup-$(date +%Y%m%d).tar.gz /home/arrs/docker
```
3. **Reset to clean state:**
```bash
# Remove all containers and volumes
docker system prune -a --volumes
# Redeploy
ansible-playbook -i inventory/hosts site.yml
```
4. **Restore data from backups if needed**
Remember: Most issues are fixable with patience and systematic troubleshooting. Don't panic, and always backup before making major changes!

# VPN Configuration Guide
This guide provides detailed configuration examples for popular VPN providers with the Arrs media stack. The VPN integration uses Gluetun to route qBittorrent traffic through your VPN provider for enhanced privacy and security.
## Table of Contents
1. [Overview](#overview)
2. [General Configuration](#general-configuration)
3. [Provider-Specific Configurations](#provider-specific-configurations)
4. [Testing Your VPN Connection](#testing-your-vpn-connection)
5. [Troubleshooting](#troubleshooting)
6. [Advanced Configuration](#advanced-configuration)
## Overview
### What Gets Protected
When VPN is enabled:
- **qBittorrent**: All torrent traffic routed through VPN
- **SABnzbd**: Optionally routed through VPN (configurable)
- **Other services**: Sonarr, Radarr, Plex, etc. use direct connection
### VPN Technologies Supported
- **OpenVPN**: Most common, works with most providers
- **WireGuard**: Faster, more modern protocol (where supported)
## General Configuration
### Basic VPN Settings
Edit `group_vars/all.yml`:
```yaml
# =============================================================================
# VPN CONFIGURATION (for qBittorrent)
# =============================================================================
# Enable VPN for download clients
vpn_enabled: true
# Optionally route SABnzbd through VPN as well (some prefer Usenet through VPN)
sabnzbd_vpn_enabled: false # Set to true if you want SABnzbd through VPN
# VPN Provider (see provider list below)
vpn_provider: "nordvpn" # Change to your provider
# VPN Type: openvpn or wireguard
vpn_type: "openvpn"
# OpenVPN credentials (if using OpenVPN)
openvpn_user: "your_username"
openvpn_password: "your_password"
# WireGuard configuration (if using WireGuard)
wireguard_private_key: ""
wireguard_addresses: ""
# VPN server countries (comma-separated)
vpn_countries: "Netherlands,Germany"
```
## Provider-Specific Configurations
### NordVPN
**Requirements:**
- NordVPN subscription
- Service credentials (not your regular login)
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "nordvpn"
vpn_type: "openvpn"
openvpn_user: "your_service_username"
openvpn_password: "your_service_password"
vpn_countries: "Netherlands,Germany,Switzerland"
```
**Getting Service Credentials:**
1. Log into your NordVPN account
2. Go to Services → NordVPN → Manual Setup
3. Generate service credentials
4. Use these credentials (not your regular login)
**Recommended Countries:**
- Netherlands, Germany, Switzerland (good for privacy)
- Romania, Moldova (good speeds)
### ExpressVPN
**Requirements:**
- ExpressVPN subscription
- Manual configuration credentials
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "expressvpn"
vpn_type: "openvpn"
openvpn_user: "your_username"
openvpn_password: "your_password"
vpn_countries: "Netherlands,Germany"
```
**Getting Credentials:**
1. Log into ExpressVPN account
2. Go to Set Up ExpressVPN → Manual Config
3. Select OpenVPN
4. Download credentials or note username/password
### Surfshark
**Requirements:**
- Surfshark subscription
- Service credentials
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "surfshark"
vpn_type: "openvpn"
openvpn_user: "your_service_username"
openvpn_password: "your_service_password"
vpn_countries: "Netherlands,Germany"
```
**Getting Service Credentials:**
1. Log into Surfshark account
2. Go to VPN → Manual setup → OpenVPN
3. Generate or view service credentials
### Private Internet Access (PIA)
**Requirements:**
- PIA subscription
- Username and password
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "private internet access"
vpn_type: "openvpn"
openvpn_user: "your_pia_username"
openvpn_password: "your_pia_password"
vpn_countries: "Netherlands,Germany,Switzerland"
```
**Note:** Use your regular PIA login credentials.
### CyberGhost
**Requirements:**
- CyberGhost subscription
- OpenVPN credentials
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "cyberghost"
vpn_type: "openvpn"
openvpn_user: "your_username"
openvpn_password: "your_password"
vpn_countries: "Netherlands,Germany"
```
**Getting Credentials:**
1. Log into CyberGhost account
2. Go to My Account → VPN → Configure new device
3. Select OpenVPN and download configuration
### ProtonVPN
**Requirements:**
- ProtonVPN subscription (Plus or higher for P2P)
- OpenVPN credentials
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "protonvpn"
vpn_type: "openvpn"
openvpn_user: "your_openvpn_username"
openvpn_password: "your_openvpn_password"
vpn_countries: "Netherlands,Germany,Switzerland"
```
**Getting Credentials:**
1. Log into ProtonVPN account
2. Go to Account → OpenVPN/IKEv2 username
3. Generate OpenVPN credentials
**Important:** Only Plus and Visionary plans support P2P traffic.
### Mullvad
**Requirements:**
- Mullvad subscription
- Account number
**OpenVPN Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "mullvad"
vpn_type: "openvpn"
openvpn_user: "your_account_number"
openvpn_password: "m" # Always "m" for Mullvad
vpn_countries: "Netherlands,Germany,Switzerland"
```
**WireGuard Configuration (Recommended):**
```yaml
vpn_enabled: true
vpn_provider: "mullvad"
vpn_type: "wireguard"
wireguard_private_key: "your_private_key"
wireguard_addresses: "10.x.x.x/32"
vpn_countries: "Netherlands,Germany,Switzerland"
```
**Getting WireGuard Keys:**
1. Log into Mullvad account
2. Go to WireGuard configuration
3. Generate a key pair
4. Note the private key and IP address
### Windscribe
**Requirements:**
- Windscribe Pro subscription
- OpenVPN credentials
**Configuration:**
```yaml
vpn_enabled: true
vpn_provider: "windscribe"
vpn_type: "openvpn"
openvpn_user: "your_username"
openvpn_password: "your_password"
vpn_countries: "Netherlands,Germany"
```
**Getting Credentials:**
1. Log into Windscribe account
2. Go to Setup → Config Generators → OpenVPN
3. Generate configuration with credentials
## Testing Your VPN Connection
### Step 1: Deploy with VPN Enabled
```bash
ansible-playbook -i inventory/hosts site.yml
```
### Step 2: Check Gluetun Connection
```bash
# Check Gluetun logs
docker logs gluetun
# Look for successful connection messages
docker logs gluetun | grep -i "connected"
```
### Step 3: Verify qBittorrent IP
1. Go to qBittorrent web interface: `http://YOUR_VPS_IP:8080`
2. Check the connection status
3. Use a torrent IP checker to verify your IP has changed
### Step 4: Test IP Leak Protection
```bash
# Check what IP qBittorrent sees
docker exec qbittorrent curl -s https://ipinfo.io/ip
```
This should show your VPN server's IP, not your VPS IP.
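A sketch that automates the comparison, assuming the container names from this guide:

```bash
# Sketch: compare the VPS's public IP with the IP the download clients see.
vps_ip=$(curl --max-time 5 -s https://ipinfo.io/ip || echo "unknown")
vpn_ip=$(docker exec qbittorrent curl --max-time 5 -s https://ipinfo.io/ip 2>/dev/null || echo "unknown")
echo "VPS public IP:  $vps_ip"
echo "qBittorrent IP: $vpn_ip"
if [ "$vps_ip" = "$vpn_ip" ]; then
  echo "WARNING: both IPs match -- traffic may NOT be going through the VPN."
else
  echo "IPs differ -- VPN routing looks correct."
fi
```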
## Troubleshooting
### Common Issues
#### Gluetun Won't Connect
**Check logs:**
```bash
docker logs gluetun
```
**Common causes:**
- Wrong credentials
- Unsupported server location
- Provider API issues
**Solutions:**
1. Verify credentials are correct
2. Try different server countries
3. Check provider status page
#### qBittorrent Can't Access Internet
**Symptoms:**
- Can't download torrents
- Can't access trackers
**Check:**
```bash
# Test internet connectivity through Gluetun
docker exec qbittorrent curl -s https://google.com
```
**Solutions:**
1. Restart Gluetun: `docker restart gluetun`
2. Check VPN server status
3. Try different server location
#### Services Start But No VPN Protection
**Check network mode:**
```bash
docker inspect qbittorrent | grep -i network
```
Should show: `"NetworkMode": "service:gluetun"`
**Fix:**
1. Ensure `vpn_enabled: true` in configuration
2. Redeploy: `ansible-playbook -i inventory/hosts site.yml`
### Debug Commands
```bash
# Check all container status
docker ps
# Check Gluetun detailed logs
docker logs gluetun --tail 50
# Check qBittorrent logs
docker logs qbittorrent --tail 50
# Test VPN connection
docker exec gluetun wget -qO- https://ipinfo.io/ip
# Check qBittorrent network
docker exec qbittorrent ip route
```
## Advanced Configuration
### Custom VPN Servers
Some providers allow specifying exact servers:
```yaml
# For providers that support server selection
vpn_server_hostnames: "nl123.nordvpn.com,de456.nordvpn.com"
```
### Port Forwarding
For providers that support port forwarding (like PIA):
```yaml
# Enable port forwarding (if supported by provider)
vpn_port_forwarding: true
```
### Kill Switch
Gluetun includes a built-in kill switch that blocks all traffic if VPN disconnects. This is enabled by default.
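You can verify the kill switch with a short, deliberately disruptive drill (a sketch assuming the container names from this guide; run it only when nothing important is downloading):

```bash
# Sketch: stopping gluetun should cut qBittorrent off from the internet entirely.
if docker ps --format '{{.Names}}' 2>/dev/null | grep -qx gluetun; then
  docker stop gluetun
  if docker exec qbittorrent curl --max-time 10 -s https://ipinfo.io/ip >/dev/null 2>&1; then
    result="LEAK: qBittorrent still reaches the internet without the VPN!"
  else
    result="Kill switch OK: no connectivity while gluetun is down."
  fi
  docker start gluetun
else
  result="skipped: gluetun container not running"
fi
echo "$result"
```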
### Custom DNS
```yaml
# Use custom DNS servers through VPN
vpn_dns_servers: "1.1.1.1,1.0.0.1"
```
### Multiple VPN Servers
```yaml
# Connect to multiple countries (load balanced)
vpn_countries: "Netherlands,Germany,Switzerland,Romania"
```
## Security Best Practices
### 1. Use Strong Credentials
- Generate unique passwords for VPN services
- Store credentials securely
- Rotate credentials periodically
### 2. Choose Privacy-Friendly Locations
**Good choices:**
- Netherlands, Germany, Switzerland (strong privacy laws)
- Romania, Moldova (good for torrenting)
**Avoid:**
- Countries with data retention laws
- Your home country (for maximum privacy)
### 3. Monitor Connection Status
Set up monitoring to alert if VPN disconnects:
```bash
# Add to crontab for monitoring
*/5 * * * * docker exec gluetun wget -qO- https://ipinfo.io/ip | grep -qv "YOUR_VPS_IP" || echo "VPN DOWN" | mail -s "VPN Alert" your@email.com
```
### 4. Regular Testing
- Test IP changes monthly
- Verify no DNS leaks
- Check for WebRTC leaks
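For the DNS leak check, a small sketch that inspects the resolver inside the VPN container (assumes the `gluetun` name from this guide):

```bash
# Sketch: show which DNS resolver the VPN-side containers actually use.
dns_info=$(docker exec gluetun cat /etc/resolv.conf 2>/dev/null \
           || echo "gluetun not running -- start the stack first")
echo "$dns_info"
# With gluetun's built-in DNS this is expected to show 127.0.0.1,
# never your ISP's or cloud provider's resolver.
```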
### 5. Enhanced Security Features
The VPN configuration includes several security enhancements:
**Kill Switch Protection:**
- Firewall automatically blocks traffic if VPN disconnects
- No data leaks even during VPN reconnection
- Configured automatically with `FIREWALL=on`
**DNS Leak Prevention:**
- Custom DNS servers prevent ISP DNS leaks
- DNS over TLS disabled to avoid conflicts
- Malicious domain blocking enabled
**Network Isolation:**
- Download clients isolated from other services
- Only necessary ports exposed through VPN
- Outbound traffic restricted to Docker subnet
**Port Configuration:**
- qBittorrent: Port 8080 (through VPN)
- SABnzbd: Port 8081 (through VPN, if enabled)
- Automatic port conflict resolution
## Provider Comparison
| Provider | OpenVPN | WireGuard | Port Forward | P2P Friendly | Notes |
|----------|---------|-----------|--------------|--------------|-------|
| NordVPN | ✅ | ❌ | ❌ | ✅ | Good speeds, many servers |
| ExpressVPN | ✅ | ❌ | ❌ | ✅ | Premium service, fast |
| Surfshark | ✅ | ✅ | ❌ | ✅ | Good value, unlimited devices |
| PIA | ✅ | ✅ | ✅ | ✅ | Port forwarding support |
| CyberGhost | ✅ | ❌ | ❌ | ✅ | Dedicated P2P servers |
| ProtonVPN | ✅ | ✅ | ❌ | ✅* | *Plus plan required for P2P |
| Mullvad | ✅ | ✅ | ✅ | ✅ | Privacy-focused, anonymous |
| Windscribe | ✅ | ❌ | ❌ | ✅ | Good free tier |
## Getting Help
### Provider Support
Most VPN providers have specific guides for Docker/Gluetun:
- Check your provider's knowledge base
- Search for "Docker" or "Gluetun" setup guides
- Contact provider support for OpenVPN credentials
### Community Resources
- [Gluetun Wiki](https://github.com/qdm12/gluetun-wiki) - Comprehensive provider list
- [r/VPN](https://reddit.com/r/VPN) - General VPN discussions
- [r/selfhosted](https://reddit.com/r/selfhosted) - Self-hosting community
### Troubleshooting Checklist
1. ✅ VPN subscription active?
2. ✅ Correct provider name in config?
3. ✅ Valid credentials?
4. ✅ Supported server location?
5. ✅ Provider allows P2P traffic?
6. ✅ Gluetun container running?
7. ✅ qBittorrent using Gluetun network?
8. ✅ No firewall blocking VPN?
Remember: VPN configuration can be tricky. Start with a simple setup and gradually add complexity as needed.


@@ -1,57 +0,0 @@
# Ansible Vault Secrets Template
# Copy this file to vault.yml and encrypt with: ansible-vault encrypt vault.yml
# Edit with: ansible-vault edit vault.yml
# VPN Credentials
vault_vpn_provider: "nordvpn" # or "surfshark", "expressvpn", etc.
vault_vpn_username: "your_vpn_username"
vault_vpn_password: "your_vpn_password"
# API Keys (leave empty to auto-generate)
vault_prowlarr_api_key: ""
vault_sonarr_api_key: ""
vault_radarr_api_key: ""
vault_lidarr_api_key: ""
vault_whisparr_api_key: ""
vault_bazarr_api_key: ""
vault_jellyseerr_api_key: ""
vault_sabnzbd_api_key: ""
# Indexer Credentials (Optional)
vault_nzbgeek_api_key: "your_nzbgeek_api_key"
vault_nzbgeek_username: "your_nzbgeek_username"
# Usenet Provider (Optional)
vault_usenet_provider_host: "news.your-provider.com"
vault_usenet_provider_port: "563"
vault_usenet_provider_username: "your_usenet_username"
vault_usenet_provider_password: "your_usenet_password"
vault_usenet_provider_ssl: true
# Plex Configuration
vault_plex_claim: "" # Get from https://plex.tv/claim (optional)
# Notification Settings (Optional)
vault_discord_webhook: ""
vault_telegram_bot_token: ""
vault_telegram_chat_id: ""
# Database Passwords (if using external databases)
vault_postgres_password: ""
vault_mysql_password: ""
# SSL Certificates (if using custom certs)
vault_ssl_cert_path: ""
vault_ssl_key_path: ""
vault_ssl_ca_path: ""
# Backup Encryption
vault_backup_encryption_key: "your_backup_encryption_key"
# SSH Keys for remote backup
vault_backup_ssh_private_key: |
-----BEGIN OPENSSH PRIVATE KEY-----
your_private_key_here
-----END OPENSSH PRIVATE KEY-----
vault_backup_ssh_public_key: "ssh-rsa your_public_key_here"


@@ -1,11 +0,0 @@
all:
children:
arrs_servers:
hosts:
your-vps:
ansible_host: YOUR_VPS_IP_ADDRESS
ansible_user: root
ansible_ssh_private_key_file: ~/.ssh/your_private_key
ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
# Optional: Tailscale IP for secure mesh networking
tailscale_ip: YOUR_TAILSCALE_IP_ADDRESS


@@ -1,221 +0,0 @@
#!/bin/bash
# Synology Arrs Stack Backup Script
# This script creates backups of your Arrs configurations
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Load environment variables
load_env() {
if [ -f ".env" ]; then
set -a
source .env
set +a
print_status "Environment loaded from .env"
else
print_warning ".env file not found, using defaults"
CONFIG_ROOT="/volume1/docker"
fi
}
# Create backup
create_backup() {
local backup_date=$(date +"%Y%m%d_%H%M%S")
local backup_dir="backups/arrs_backup_$backup_date"
print_status "Creating backup directory: $backup_dir"
mkdir -p "$backup_dir"
# Backup each service configuration
local services=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr")
for service in "${services[@]}"; do
local config_path="$CONFIG_ROOT/$service"
if [ -d "$config_path" ]; then
print_status "Backing up $service configuration..."
cp -r "$config_path" "$backup_dir/"
else
print_warning "$service configuration directory not found: $config_path"
fi
done
# Backup environment file
if [ -f ".env" ]; then
print_status "Backing up .env file..."
cp ".env" "$backup_dir/"
fi
# Backup docker-compose files
if [ -d "compose" ]; then
print_status "Backing up compose files..."
cp -r "compose" "$backup_dir/"
fi
# Create archive
print_status "Creating compressed archive..."
tar -czf "arrs_backup_$backup_date.tar.gz" -C backups "arrs_backup_$backup_date"
# Remove uncompressed backup
rm -rf "$backup_dir"
print_status "Backup created: arrs_backup_$backup_date.tar.gz"
# Show backup size
local backup_size=$(du -h "arrs_backup_$backup_date.tar.gz" | cut -f1)
print_status "Backup size: $backup_size"
}
# Restore backup
restore_backup() {
local backup_file="$1"
if [ -z "$backup_file" ]; then
print_error "Please specify a backup file to restore"
echo "Usage: $0 restore <backup_file.tar.gz>"
exit 1
fi
if [ ! -f "$backup_file" ]; then
print_error "Backup file not found: $backup_file"
exit 1
fi
print_warning "This will overwrite existing configurations!"
read -p "Are you sure you want to continue? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_status "Restore cancelled"
exit 0
fi
# Extract backup
local temp_dir="temp_restore_$(date +%s)"
mkdir -p "$temp_dir"
print_status "Extracting backup..."
tar -xzf "$backup_file" -C "$temp_dir"
# Find the backup directory
local backup_dir=$(find "$temp_dir" -name "arrs_backup_*" -type d | head -n1)
if [ -z "$backup_dir" ]; then
print_error "Invalid backup file structure"
rm -rf "$temp_dir"
exit 1
fi
# Restore configurations
local services=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr")
for service in "${services[@]}"; do
if [ -d "$backup_dir/$service" ]; then
print_status "Restoring $service configuration..."
rm -rf "$CONFIG_ROOT/$service"
cp -r "$backup_dir/$service" "$CONFIG_ROOT/"
fi
done
# Restore .env file
if [ -f "$backup_dir/.env" ]; then
print_status "Restoring .env file..."
cp "$backup_dir/.env" "./"
fi
# Clean up
rm -rf "$temp_dir"
print_status "Restore completed successfully!"
print_warning "You may need to restart the containers for changes to take effect"
}
# List backups
list_backups() {
print_status "Available backups:"
echo ""
if ls arrs_backup_*.tar.gz 1> /dev/null 2>&1; then
for backup in arrs_backup_*.tar.gz; do
local size=$(du -h "$backup" | cut -f1)
local date=$(echo "$backup" | sed 's/arrs_backup_\([0-9]\{8\}_[0-9]\{6\}\).tar.gz/\1/' | sed 's/_/ /')
echo -e "${BLUE}$backup${NC} - Size: $size - Date: $date"
done
else
print_warning "No backups found"
fi
echo ""
}
# Clean old backups
clean_backups() {
local keep_days=${1:-30}
print_status "Cleaning backups older than $keep_days days..."
find . -name "arrs_backup_*.tar.gz" -mtime +"$keep_days" -delete
print_status "Old backups cleaned"
}
# Main execution
main() {
echo -e "${BLUE}=== Synology Arrs Stack Backup Tool ===${NC}"
echo ""
# Create backups directory
mkdir -p backups
load_env
case "${1:-backup}" in
"backup"|"create")
create_backup
;;
"restore")
restore_backup "$2"
;;
"list")
list_backups
;;
"clean")
clean_backups "$2"
;;
"help"|"-h"|"--help")
echo "Usage: $0 [command] [options]"
echo ""
echo "Commands:"
echo " backup, create Create a new backup (default)"
echo " restore <file> Restore from backup file"
echo " list List available backups"
echo " clean [days] Clean backups older than X days (default: 30)"
echo " help Show this help"
;;
*)
print_error "Unknown command: $1"
echo "Use '$0 help' for usage information"
exit 1
;;
esac
}
# Run main function
main "$@"


@@ -1,263 +0,0 @@
#!/bin/bash
# Synology Arrs Stack Log Viewer
# This script helps view and manage container logs
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Load environment variables
load_env() {
if [ -f ".env" ]; then
set -a
source .env
set +a
fi
}
# Check if docker-compose is available
check_docker_compose() {
if command -v docker-compose >/dev/null 2>&1; then
COMPOSE_CMD="docker-compose"
elif command -v docker >/dev/null 2>&1 && docker compose version >/dev/null 2>&1; then
COMPOSE_CMD="docker compose"
else
print_error "Neither docker-compose nor docker compose found!"
echo "This script requires Docker Compose to be installed."
exit 1
fi
}
# Show all logs
show_all_logs() {
local follow=${1:-false}
local compose_file="compose/docker-compose.yml"
if [ "$follow" = "true" ]; then
print_status "Following logs for all containers (Ctrl+C to exit)..."
$COMPOSE_CMD -f "$compose_file" logs -f
else
print_status "Showing recent logs for all containers..."
$COMPOSE_CMD -f "$compose_file" logs --tail=100
fi
}
# Show logs for specific service
show_service_logs() {
local service="$1"
local follow=${2:-false}
local compose_file="compose/docker-compose.yml"
if [ -z "$service" ]; then
print_error "Please specify a service name"
echo "Available services: sonarr, radarr, lidarr, bazarr, prowlarr"
exit 1
fi
if [ "$follow" = "true" ]; then
print_status "Following logs for $service (Ctrl+C to exit)..."
$COMPOSE_CMD -f "$compose_file" logs -f "$service"
else
print_status "Showing recent logs for $service..."
$COMPOSE_CMD -f "$compose_file" logs --tail=100 "$service"
fi
}
# Show container status
show_status() {
local compose_file="compose/docker-compose.yml"
print_status "Container status:"
$COMPOSE_CMD -f "$compose_file" ps
echo ""
print_status "Container resource usage:"
if command -v docker >/dev/null 2>&1; then
docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}" \
sonarr radarr lidarr bazarr prowlarr 2>/dev/null || print_warning "Could not get resource usage"
fi
}
# Export logs to file
export_logs() {
local service="$1"
local compose_file="compose/docker-compose.yml"
local timestamp=$(date +"%Y%m%d_%H%M%S")
mkdir -p logs
if [ -z "$service" ]; then
# Export all logs
local log_file="logs/arrs_all_logs_$timestamp.txt"
print_status "Exporting all logs to $log_file..."
$COMPOSE_CMD -f "$compose_file" logs --no-color > "$log_file"
else
# Export specific service logs
local log_file="logs/${service}_logs_$timestamp.txt"
print_status "Exporting $service logs to $log_file..."
$COMPOSE_CMD -f "$compose_file" logs --no-color "$service" > "$log_file"
fi
print_status "Logs exported successfully!"
local file_size=$(du -h "$log_file" | cut -f1)
print_status "File size: $file_size"
}
# Interactive service selection
interactive_selection() {
echo ""
echo "Select a service to view logs:"
echo "1) All services"
echo "2) Sonarr"
echo "3) Radarr"
echo "4) Lidarr"
echo "5) Bazarr"
echo "6) Prowlarr"
echo ""
read -p "Enter choice [1]: " choice
choice=${choice:-1}
echo ""
echo "Log viewing options:"
echo "1) Show recent logs"
echo "2) Follow logs (live)"
echo "3) Export logs to file"
echo ""
read -p "Enter option [1]: " option
option=${option:-1}
local follow=false
if [ "$option" = "2" ]; then
follow=true
fi
local service=""
case $choice in
    1) service="" ;;        # empty = all services
    2) service="sonarr" ;;
    3) service="radarr" ;;
    4) service="lidarr" ;;
    5) service="bazarr" ;;
    6) service="prowlarr" ;;
    *)
        print_error "Invalid choice"
        exit 1
        ;;
esac
if [ "$option" = "3" ]; then
    export_logs "$service"
elif [ -n "$service" ]; then
    show_service_logs "$service" "$follow"
else
    show_all_logs "$follow"
fi
}
# Main execution
main() {
echo -e "${BLUE}=== Synology Arrs Stack Log Viewer ===${NC}"
echo ""
load_env
check_docker_compose
case "${1:-}" in
"all")
show_all_logs "${2:-false}"
;;
"follow"|"-f")
if [ -n "$2" ]; then
show_service_logs "$2" true
else
show_all_logs true
fi
;;
"export")
export_logs "$2"
;;
"status")
show_status
;;
"sonarr"|"radarr"|"lidarr"|"bazarr"|"prowlarr")
show_service_logs "$1" "${2:-false}"
;;
"help"|"-h"|"--help")
echo "Usage: $0 [command] [options]"
echo ""
echo "Commands:"
echo " (no command) Interactive service selection"
echo " all [follow] Show logs for all services"
echo " follow, -f [svc] Follow logs (live) for service or all"
echo " export [service] Export logs to file"
echo " status Show container status and resource usage"
echo " <service> Show logs for specific service"
echo ""
echo "Services: sonarr, radarr, lidarr, bazarr, prowlarr"
echo ""
echo "Examples:"
echo " $0 # Interactive selection"
echo " $0 sonarr # Show Sonarr logs"
echo " $0 follow radarr # Follow Radarr logs"
echo " $0 export # Export all logs"
echo " $0 status # Show container status"
;;
"")
interactive_selection
;;
*)
print_error "Unknown command: $1"
echo "Use '$0 help' for usage information"
exit 1
;;
esac
}
# Run main function
main "$@"


@@ -1,232 +0,0 @@
#!/bin/bash
# Synology Arrs Stack Setup Script
# This script helps set up the directory structure and environment for the Arrs stack
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
DEFAULT_DATA_ROOT="/volume1/data"
DEFAULT_CONFIG_ROOT="/volume1/docker"
DEFAULT_PROJECT_PATH="/volume1/docker/projects/arrs-compose"
echo -e "${BLUE}=== Synology Arrs Stack Setup ===${NC}"
echo ""
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Check if running on Synology
check_synology() {
if [ ! -f /etc/synoinfo.conf ]; then
print_warning "This doesn't appear to be a Synology system."
read -p "Continue anyway? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
exit 1
fi
fi
}
# Get user input for configuration
get_user_config() {
echo -e "${BLUE}Configuration Setup${NC}"
echo "Please provide the following information:"
echo ""
# Get PUID and PGID
if command -v id >/dev/null 2>&1; then
if id dockerlimited >/dev/null 2>&1; then
DEFAULT_PUID=$(id -u dockerlimited)
DEFAULT_PGID=$(id -g dockerlimited)
print_status "Found dockerlimited user: PUID=$DEFAULT_PUID, PGID=$DEFAULT_PGID"
else
print_warning "dockerlimited user not found. Please create it first."
DEFAULT_PUID="1234"
DEFAULT_PGID="65432"
fi
else
DEFAULT_PUID="1234"
DEFAULT_PGID="65432"
fi
read -p "PUID (User ID) [$DEFAULT_PUID]: " PUID
PUID=${PUID:-$DEFAULT_PUID}
read -p "PGID (Group ID) [$DEFAULT_PGID]: " PGID
PGID=${PGID:-$DEFAULT_PGID}
# Get timezone
read -p "Timezone [Europe/London]: " TZ
TZ=${TZ:-"Europe/London"}
# Get paths
read -p "Data root directory [$DEFAULT_DATA_ROOT]: " DATA_ROOT
DATA_ROOT=${DATA_ROOT:-$DEFAULT_DATA_ROOT}
read -p "Config root directory [$DEFAULT_CONFIG_ROOT]: " CONFIG_ROOT
CONFIG_ROOT=${CONFIG_ROOT:-$DEFAULT_CONFIG_ROOT}
read -p "Project path [$DEFAULT_PROJECT_PATH]: " PROJECT_PATH
PROJECT_PATH=${PROJECT_PATH:-$DEFAULT_PROJECT_PATH}
}
# Create directory structure
create_directories() {
print_status "Creating directory structure..."
# Create data directories
mkdir -p "$DATA_ROOT/media/movies"
mkdir -p "$DATA_ROOT/media/tv"
mkdir -p "$DATA_ROOT/media/music"
mkdir -p "$DATA_ROOT/media/books"
mkdir -p "$DATA_ROOT/torrents/movies"
mkdir -p "$DATA_ROOT/torrents/tv"
mkdir -p "$DATA_ROOT/torrents/music"
mkdir -p "$DATA_ROOT/torrents/books"
# Create config directories
mkdir -p "$CONFIG_ROOT/sonarr"
mkdir -p "$CONFIG_ROOT/radarr"
mkdir -p "$CONFIG_ROOT/lidarr"
mkdir -p "$CONFIG_ROOT/bazarr"
mkdir -p "$CONFIG_ROOT/prowlarr"
mkdir -p "$PROJECT_PATH"
print_status "Directory structure created successfully!"
}
# Set permissions
set_permissions() {
print_status "Setting directory permissions..."
# Set ownership if dockerlimited user exists
if id dockerlimited >/dev/null 2>&1; then
chown -R dockerlimited:dockerlimited "$DATA_ROOT" 2>/dev/null || print_warning "Could not set ownership on $DATA_ROOT (may require sudo)"
chown -R dockerlimited:dockerlimited "$CONFIG_ROOT" 2>/dev/null || print_warning "Could not set ownership on $CONFIG_ROOT (may require sudo)"
else
print_warning "dockerlimited user not found. Skipping ownership changes."
fi
# Set permissions
chmod -R 755 "$DATA_ROOT" 2>/dev/null || print_warning "Could not set permissions on $DATA_ROOT (may require sudo)"
chmod -R 755 "$CONFIG_ROOT" 2>/dev/null || print_warning "Could not set permissions on $CONFIG_ROOT (may require sudo)"
print_status "Permissions set successfully!"
}
# Create .env file
create_env_file() {
print_status "Creating .env file..."
cat > .env << EOF
# Synology Arrs Stack Environment Configuration
# Generated by setup script on $(date)
# User and Group Configuration
PUID=$PUID
PGID=$PGID
# Timezone Configuration
TZ=$TZ
# Directory Paths
DATA_ROOT=$DATA_ROOT
CONFIG_ROOT=$CONFIG_ROOT
# Network Configuration
NETWORK_MODE=synobridge
# Port Configuration
SONARR_PORT=8989
RADARR_PORT=7878
LIDARR_PORT=8686
BAZARR_PORT=6767
PROWLARR_PORT=9696
# VPN Configuration (for docker-compose-vpn.yml)
VPN_PROVIDER=nordvpn
VPN_TYPE=openvpn
VPN_USER=your_vpn_username
VPN_PASSWORD=your_vpn_password
VPN_COUNTRIES=Netherlands,Germany
EOF
print_status ".env file created successfully!"
}
# Copy docker-compose file to project directory
copy_compose_file() {
print_status "Copying docker-compose.yml to project directory..."
if [ -f "compose/docker-compose.yml" ]; then
cp "compose/docker-compose.yml" "$PROJECT_PATH/"
print_status "docker-compose.yml copied to $PROJECT_PATH/"
else
print_error "docker-compose.yml not found in compose/ directory"
return 1
fi
}
# Check prerequisites
check_prerequisites() {
print_status "Checking prerequisites..."
# Check if Container Manager is available
if [ -d "/var/packages/ContainerManager" ]; then
print_status "Container Manager package found"
else
print_warning "Container Manager package not found. Please install it from Package Center."
fi
# Check if synobridge network exists (this would require docker command)
print_status "Please ensure synobridge network is configured in Container Manager"
}
# Main execution
main() {
echo -e "${GREEN}Starting Synology Arrs Stack setup...${NC}"
echo ""
check_synology
get_user_config
create_directories
set_permissions
create_env_file
copy_compose_file
check_prerequisites
echo ""
echo -e "${GREEN}=== Setup Complete! ===${NC}"
echo ""
echo "Next steps:"
echo "1. Review and edit the .env file if needed"
echo "2. Ensure synobridge network is configured in Container Manager"
echo "3. Run './scripts/deploy.sh' to deploy the stack"
echo "4. Or manually create a project in Container Manager using:"
echo " - Project Name: media-stack"
echo " - Path: $PROJECT_PATH"
echo " - Use the docker-compose.yml file in that directory"
echo ""
echo -e "${BLUE}For more information, see the README.md file${NC}"
}
# Run main function
main "$@"


@@ -1,209 +0,0 @@
---
# Backup automation setup tasks
- name: Create backup directories
file:
path: "{{ item }}"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
loop:
- "{{ backup_dir }}"
- "{{ backup_dir }}/configs"
- "{{ backup_dir }}/compose"
- "{{ backup_dir }}/scripts"
- "{{ backup_dir }}/logs"
tags: ['backup_dirs']
- name: Install backup utilities
apt:
name:
- rsync
- tar
- gzip
- pigz
- pv
state: present
tags: ['backup_tools']
- name: Create main backup script
template:
src: backup-arrs.sh.j2
dest: "{{ docker_root }}/scripts/backup-arrs.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create incremental backup script
template:
src: backup-incremental.sh.j2
dest: "{{ docker_root }}/scripts/backup-incremental.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create restore script
template:
src: restore-arrs.sh.j2
dest: "{{ docker_root }}/scripts/restore-arrs.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup verification script
template:
src: verify-backup.sh.j2
dest: "{{ docker_root }}/scripts/verify-backup.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup cleanup script
template:
src: cleanup-backups.sh.j2
dest: "{{ docker_root }}/scripts/cleanup-backups.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup configuration file
template:
src: backup.conf.j2
dest: "{{ docker_root }}/backup.conf"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
tags: ['backup_config']
- name: Set up scheduled backup cron job
cron:
name: "Arrs configuration backup"
minute: "0"
hour: "2"
weekday: "0"
job: "{{ docker_root }}/scripts/backup-arrs.sh >> {{ docker_root }}/logs/backup.log 2>&1"
user: "{{ docker_user }}"
when: backup_enabled
tags: ['backup_cron']
- name: Set up daily incremental backup cron job
cron:
name: "Arrs incremental backup"
minute: "30"
hour: "3"
job: "{{ docker_root }}/scripts/backup-incremental.sh >> {{ docker_root }}/logs/backup-incremental.log 2>&1"
user: "{{ docker_user }}"
when: backup_enabled
tags: ['backup_cron']
- name: Set up backup cleanup cron job
cron:
name: "Backup cleanup"
minute: "0"
hour: "1"
job: "{{ docker_root }}/scripts/cleanup-backups.sh >> {{ docker_root }}/logs/backup-cleanup.log 2>&1"
user: "{{ docker_user }}"
when: backup_enabled
tags: ['backup_cron']
- name: Set up backup verification cron job
cron:
name: "Backup verification"
minute: "0"
hour: "4"
weekday: "1"
job: "{{ docker_root }}/scripts/verify-backup.sh >> {{ docker_root }}/logs/backup-verify.log 2>&1"
user: "{{ docker_user }}"
when: backup_enabled
tags: ['backup_cron']
- name: Create database backup script (for future use)
template:
src: backup-databases.sh.j2
dest: "{{ docker_root }}/scripts/backup-databases.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create media backup script (for large files)
template:
src: backup-media.sh.j2
dest: "{{ docker_root }}/scripts/backup-media.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup status script
template:
src: backup-status.sh.j2
dest: "{{ docker_root }}/scripts/backup-status.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup notification script
template:
src: backup-notify.sh.j2
dest: "{{ docker_root }}/scripts/backup-notify.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create emergency backup script
template:
src: emergency-backup.sh.j2
dest: "{{ docker_root }}/scripts/emergency-backup.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['backup_scripts']
- name: Create backup README
template:
src: backup-README.md.j2
dest: "{{ backup_dir }}/README.md"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
tags: ['backup_docs']
- name: Set up log rotation for backup logs
template:
src: backup-logrotate.j2
dest: /etc/logrotate.d/arrs-backup
mode: '0644'
tags: ['backup_logging']
- name: Create initial backup
  command: "{{ docker_root }}/scripts/backup-arrs.sh"
  become: yes
  become_user: "{{ docker_user }}"
  when: backup_enabled
  tags: ['initial_backup']
- name: Display backup information
debug:
msg: |
Backup system configured successfully!
Backup location: {{ backup_dir }}
Backup schedule: {{ backup_schedule }}
Retention: {{ backup_retention_days }} days
Manual backup commands:
- Full backup: {{ docker_root }}/scripts/backup-arrs.sh
- Incremental: {{ docker_root }}/scripts/backup-incremental.sh
- Restore: {{ docker_root }}/scripts/restore-arrs.sh
- Status: {{ docker_root }}/scripts/backup-status.sh
Backup logs: {{ docker_root }}/logs/backup.log
tags: ['backup_info']


@@ -1,125 +0,0 @@
---
# Docker installation and configuration tasks
- name: Remove old Docker packages
apt:
name:
- docker
- docker-engine
- docker.io
- containerd
- runc
state: absent
tags: ['docker_install']
- name: Add Docker GPG key
apt_key:
url: https://download.docker.com/linux/ubuntu/gpg
state: present
tags: ['docker_install']
- name: Add Docker repository
apt_repository:
repo: "deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
update_cache: yes
tags: ['docker_install']
- name: Install Docker CE
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: present
update_cache: yes
notify: restart docker
tags: ['docker_install']
- name: Install Docker Compose standalone
get_url:
url: "https://github.com/docker/compose/releases/download/v{{ docker_compose_version }}/docker-compose-linux-x86_64"
dest: /usr/local/bin/docker-compose
mode: '0755'
owner: root
group: root
tags: ['docker_compose']
- name: Remove existing docker-compose if present
file:
path: /usr/bin/docker-compose
state: absent
tags: ['docker_compose']
- name: Create docker-compose symlink
file:
src: /usr/local/bin/docker-compose
dest: /usr/bin/docker-compose
state: link
tags: ['docker_compose']
- name: Start and enable Docker service
systemd:
name: docker
state: started
enabled: yes
daemon_reload: yes
tags: ['docker_service']
- name: Configure Docker daemon
template:
src: daemon.json.j2
dest: /etc/docker/daemon.json
backup: yes
notify: restart docker
tags: ['docker_config']
- name: Create Docker log rotation configuration
template:
src: docker-logrotate.j2
dest: /etc/logrotate.d/docker
mode: '0644'
tags: ['docker_logging']
- name: Verify Docker installation
command: docker --version
register: docker_version
changed_when: false
tags: ['docker_verify']
- name: Verify Docker Compose installation
command: docker-compose --version
register: docker_compose_version_check
changed_when: false
tags: ['docker_verify']
- name: Display Docker versions
debug:
msg: |
Docker version: {{ docker_version.stdout }}
Docker Compose version: {{ docker_compose_version_check.stdout }}
tags: ['docker_verify']
- name: Test Docker functionality
docker_container:
name: hello-world-test
image: hello-world
state: started
auto_remove: yes
detach: no
register: docker_test
tags: ['docker_test']
- name: Remove test container
docker_container:
name: hello-world-test
state: absent
tags: ['docker_test']
- name: Clean up Docker test image
docker_image:
name: hello-world
state: absent
tags: ['docker_test']


@@ -1,260 +0,0 @@
---
# Monitoring and logging setup tasks
- name: Create monitoring directories
file:
path: "{{ item }}"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
loop:
- "{{ docker_root }}/monitoring"
- "{{ docker_root }}/logs"
- "{{ docker_root }}/logs/arrs"
- "{{ docker_root }}/logs/system"
tags: ['monitoring_dirs']
- name: Install monitoring tools
apt:
name:
- htop
- iotop
- nethogs
- ncdu
- tree
- lsof
- strace
- tcpdump
- nmap
state: present
tags: ['monitoring_tools']
- name: Create monitoring scripts directory
file:
path: /usr/local/bin
state: directory
mode: '0755'
tags: ['monitoring_scripts']
- name: Create monitoring log directories
file:
path: "{{ item }}"
state: directory
owner: root
group: root
mode: '0755'
loop:
- /var/log/arrs
- /opt/monitoring
- /opt/monitoring/scripts
tags: ['monitoring_dirs']
- name: Deploy health dashboard script
template:
src: health-dashboard.sh.j2
dest: /usr/local/bin/health-dashboard.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy system monitoring script
template:
src: system-monitor.sh.j2
dest: /usr/local/bin/system-monitor.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy Docker monitoring script
template:
src: docker-monitor.sh.j2
dest: /usr/local/bin/docker-monitor.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy network monitoring script
template:
src: network-monitor.sh.j2
dest: /usr/local/bin/network-monitor.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy performance monitoring script
template:
src: performance-monitor.sh.j2
dest: /usr/local/bin/performance-monitor.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy security audit script
template:
src: security-audit.sh.j2
dest: /usr/local/bin/security-audit.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy disk usage monitoring script
template:
src: disk-usage-monitor.sh.j2
dest: /usr/local/bin/disk-usage-monitor.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy service health check script
template:
src: check-services.sh.j2
dest: /usr/local/bin/check-services.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Deploy log aggregator script
template:
src: log-aggregator.sh.j2
dest: /usr/local/bin/log-aggregator.sh
owner: root
group: root
mode: '0755'
tags: ['monitoring_scripts']
- name: Set up log rotation for Arrs applications
template:
src: arrs-logrotate.j2
dest: /etc/logrotate.d/arrs
mode: '0644'
tags: ['log_rotation']
- name: Add health dashboard alias to root bashrc
lineinfile:
path: /root/.bashrc
line: "alias health='/usr/local/bin/health-dashboard.sh'"
create: yes
tags: ['monitoring_scripts']
- name: Set up cron job for system monitoring
cron:
name: "System monitoring"
minute: "*/10"
job: "/usr/local/bin/system-monitor.sh >> /var/log/arrs/system-monitor.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for Docker monitoring
cron:
name: "Docker monitoring"
minute: "*/5"
job: "/usr/local/bin/docker-monitor.sh >> /var/log/arrs/docker-monitor.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for network monitoring
cron:
name: "Network monitoring"
minute: "*/15"
job: "/usr/local/bin/network-monitor.sh >> /var/log/arrs/network-monitor.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for performance monitoring
cron:
name: "Performance monitoring"
minute: "*/20"
job: "/usr/local/bin/performance-monitor.sh >> /var/log/arrs/performance-monitor.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for security audit
cron:
name: "Security audit"
minute: "0"
hour: "2"
job: "/usr/local/bin/security-audit.sh >> /var/log/arrs/security-audit.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for disk usage monitoring
cron:
name: "Disk usage monitoring"
minute: "0"
hour: "*/6"
job: "/usr/local/bin/disk-usage-monitor.sh >> /var/log/arrs/disk-usage.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for service health checks
cron:
name: "Service health checks"
minute: "*/5"
job: "/usr/local/bin/check-services.sh >> /var/log/arrs/service-checks.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Set up cron job for log aggregation
cron:
name: "Log aggregation"
minute: "0"
hour: "1"
job: "/usr/local/bin/log-aggregator.sh >> /var/log/arrs/log-aggregator.log 2>&1"
user: root
tags: ['monitoring_cron']
- name: Create alerting script
template:
src: alert-manager.sh.j2
dest: "{{ docker_root }}/scripts/alert-manager.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['alerting']
- name: Configure rsyslog for centralized logging
template:
src: rsyslog-arrs.conf.j2
dest: /etc/rsyslog.d/40-arrs.conf
mode: '0644'
notify: restart rsyslog
tags: ['centralized_logging']
- name: Create log analysis script
template:
src: log-analyzer.sh.j2
dest: "{{ docker_root }}/scripts/log-analyzer.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['log_analysis']
- name: Set up weekly log analysis cron job
cron:
name: "Weekly log analysis"
minute: "0"
hour: "2"
weekday: "0"
job: "{{ docker_root }}/scripts/log-analyzer.sh >> {{ docker_root }}/logs/system/log-analysis.log 2>&1"
user: "{{ docker_user }}"
tags: ['log_analysis']
- name: Create monitoring configuration file
template:
src: monitoring.conf.j2
dest: "{{ docker_root }}/monitoring/monitoring.conf"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
tags: ['monitoring_config']


@@ -1,185 +0,0 @@
---
# Security and firewall configuration tasks
- name: Configure SSH security
lineinfile:
path: /etc/ssh/sshd_config
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
backup: yes
loop:
- { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin {{ "prohibit-password" if ssh_key_based_auth else "yes" }}' }
- { regexp: '^#?PasswordAuthentication', line: 'PasswordAuthentication {{ "yes" if not ssh_key_based_auth else "no" }}' }
- { regexp: '^#?PubkeyAuthentication', line: 'PubkeyAuthentication yes' }
- { regexp: '^#?Port', line: 'Port {{ ssh_port }}' }
- { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 3' }
- { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
- { regexp: '^#?ClientAliveCountMax', line: 'ClientAliveCountMax 2' }
notify: restart sshd
tags: ['ssh_security']
- name: Configure fail2ban for SSH
template:
src: jail.local.j2
dest: /etc/fail2ban/jail.local
backup: yes
notify: restart fail2ban
tags: ['fail2ban']
- name: Configure fail2ban filter for Plex
template:
src: plex-fail2ban-filter.j2
dest: /etc/fail2ban/filter.d/plex.conf
backup: yes
when: plex_public_access | default(false)
notify: restart fail2ban
tags: ['fail2ban', 'plex']
- name: Start and enable fail2ban
systemd:
name: fail2ban
state: started
enabled: yes
tags: ['fail2ban']
- name: Reset UFW to defaults
ufw:
state: reset
when: ufw_enabled
tags: ['firewall']
- name: Configure UFW default policies
ufw:
direction: "{{ item.direction }}"
policy: "{{ item.policy }}"
loop:
- { direction: 'incoming', policy: "{{ ufw_default_policy_incoming }}" }
- { direction: 'outgoing', policy: "{{ ufw_default_policy_outgoing }}" }
when: ufw_enabled
tags: ['firewall']
- name: Allow SSH through UFW
ufw:
rule: allow
port: "{{ ssh_port }}"
proto: tcp
when: ufw_enabled
tags: ['firewall']
- name: Check if Tailscale is installed
command: which tailscale
register: tailscale_check
failed_when: false
changed_when: false
when: tailscale_enabled
tags: ['tailscale']
- name: Install Tailscale
shell: |
curl -fsSL https://tailscale.com/install.sh | sh
when: tailscale_enabled and tailscale_check.rc != 0
tags: ['tailscale']
- name: Get Tailscale interface information
shell: ip addr show {{ tailscale_interface }} | grep 'inet ' | awk '{print $2}' | cut -d'/' -f1
register: tailscale_ip
failed_when: false
changed_when: false
when: tailscale_enabled
tags: ['tailscale']
- name: Allow Tailscale interface through UFW
ufw:
rule: allow
interface: "{{ tailscale_interface }}"
direction: in
when: ufw_enabled and tailscale_enabled
tags: ['firewall', 'tailscale']
- name: Allow Arrs services from Tailscale network
ufw:
rule: allow
port: "{{ item.value }}"
proto: tcp
src: "{{ tailscale_ip.stdout | regex_replace('\\.[0-9]+$', '.0/24') }}"
loop: "{{ ports | dict2items }}"
when: ufw_enabled and tailscale_enabled and tailscale_ip.stdout != ""
tags: ['firewall', 'tailscale']
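The `regex_replace` filter in the task above swaps the last octet of the node's Tailscale address for `.0/24`. A standalone shell sketch of the same rewrite (the address is invented; note Tailscale allocates from `100.64.0.0/10`, so a /24 derived this way only matches peers that happen to share the first three octets):

```shell
# Hypothetical Tailscale node address; real deployments register it from `ip addr`.
TS_IP="100.101.102.103"
# Same rewrite as the Jinja regex_replace('\.[0-9]+$', '.0/24') filter above.
echo "$TS_IP" | sed 's/\.[0-9]\{1,\}$/.0\/24/'
# -> 100.101.102.0/24
```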
- name: Allow Docker bridge network communication
ufw:
rule: allow
from_ip: "{{ docker_network_subnet }}"
to_ip: "{{ docker_network_subnet }}"
when: ufw_enabled
tags: ['firewall', 'docker']
- name: Allow Plex Media Server through UFW (public access)
ufw:
rule: allow
port: "{{ item.port }}"
proto: "{{ item.proto }}"
comment: "{{ item.comment }}"
loop:
- { port: "32400", proto: "tcp", comment: "Plex Media Server" }
- { port: "3005", proto: "tcp", comment: "Plex Home Theater via Plex Companion" }
- { port: "8324", proto: "tcp", comment: "Plex for Roku via Plex Companion" }
- { port: "32469", proto: "tcp", comment: "Plex DLNA Server" }
- { port: "1900", proto: "udp", comment: "Plex DLNA Server" }
- { port: "32410", proto: "udp", comment: "Plex GDM network discovery" }
- { port: "32412", proto: "udp", comment: "Plex GDM network discovery" }
- { port: "32413", proto: "udp", comment: "Plex GDM network discovery" }
- { port: "32414", proto: "udp", comment: "Plex GDM network discovery" }
when: ufw_enabled and plex_public_access | default(false)
tags: ['firewall', 'plex']
- name: Enable UFW
ufw:
state: enabled
when: ufw_enabled
tags: ['firewall']
- name: Configure Docker security options
template:
src: docker-security.json.j2
dest: /etc/docker/seccomp-profile.json
mode: '0644'
notify: restart docker
tags: ['docker_security']
- name: Create AppArmor profile for Docker containers
template:
src: docker-apparmor.j2
dest: /etc/apparmor.d/docker-arrs
mode: '0644'
notify: reload apparmor
tags: ['apparmor']
- name: Set secure file permissions
file:
path: "{{ item.path }}"
mode: "{{ item.mode }}"
owner: "{{ item.owner | default('root') }}"
group: "{{ item.group | default('root') }}"
loop:
- { path: '/etc/ssh/sshd_config', mode: '0600' }
- { path: '/etc/fail2ban/jail.local', mode: '0644' }
- { path: '/etc/docker', mode: '0755' }
tags: ['file_permissions']
- name: Configure log monitoring
template:
src: rsyslog-docker.conf.j2
dest: /etc/rsyslog.d/30-docker.conf
mode: '0644'
notify: restart rsyslog
tags: ['logging']
- name: Create security audit script
template:
src: security-audit.sh.j2
dest: "{{ docker_root }}/scripts/security-audit.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['security_audit']


@@ -1,192 +0,0 @@
---
# Services deployment tasks
- name: Generate Docker Compose file
template:
src: docker-compose.yml.j2
dest: "{{ docker_compose_dir }}/docker-compose.yml"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
backup: yes
tags: ['compose']
- name: Create environment file for Docker Compose
template:
src: docker.env.j2
dest: "{{ docker_compose_dir }}/.env"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0600'
tags: ['compose']
- name: Create Gluetun VPN directory
file:
path: "{{ docker_root }}/gluetun"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
when: vpn_enabled
tags: ['vpn']
- name: Copy custom OpenVPN configuration
copy:
src: custom.conf
dest: "{{ docker_root }}/gluetun/custom.conf"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0600'
when: vpn_enabled and vpn_provider == 'custom' and vpn_type == 'openvpn'
tags: ['vpn']
- name: Copy WireGuard configuration
copy:
src: wireguard/protonvpn.conf
dest: "{{ docker_root }}/gluetun/protonvpn.conf"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0600'
when: vpn_enabled and vpn_type == 'wireguard'
tags: ['vpn']
- name: Pull Docker images
shell: docker-compose pull
args:
chdir: "{{ docker_compose_dir }}"
become_user: "{{ docker_user }}"
tags: ['images']
- name: Start Arrs services
shell: docker-compose up -d
args:
chdir: "{{ docker_compose_dir }}"
become_user: "{{ docker_user }}"
tags: ['services']
- name: Wait for services to be ready
wait_for:
port: "{{ item.value }}"
host: "127.0.0.1"
delay: 10
timeout: 300
loop: "{{ ports | dict2items }}"
tags: ['health_check']
- name: Verify service health
uri:
url: "http://127.0.0.1:{{ item.value }}/ping"
method: GET
status_code: 200
loop: "{{ ports | dict2items }}"
register: health_checks
retries: 5
delay: 10
until: health_checks is succeeded
tags: ['health_check']
- name: Create systemd service for Arrs stack
template:
src: arrs-stack.service.j2
dest: /etc/systemd/system/arrs-stack.service
mode: '0644'
notify: reload systemd
tags: ['systemd']
- name: Enable Arrs stack systemd service
systemd:
name: arrs-stack
enabled: yes
daemon_reload: yes
tags: ['systemd']
- name: Create service management script
template:
src: manage-arrs.sh.j2
dest: /usr/local/bin/manage-arrs
mode: '0755'
tags: ['management']
- name: Create Docker network if it doesn't exist
docker_network:
name: "{{ docker_network_name }}"
driver: bridge
ipam_config:
- subnet: "{{ docker_network_subnet }}"
gateway: "{{ docker_network_gateway }}"
ignore_errors: yes
tags: ['network']
- name: Set up log rotation for Docker containers
template:
src: docker-container-logrotate.j2
dest: /etc/logrotate.d/docker-containers
mode: '0644'
tags: ['logging']
- name: Create service status check script
template:
src: check-services.sh.j2
dest: "{{ docker_root }}/scripts/check-services.sh"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['monitoring']
- name: Set up cron job for service monitoring
cron:
name: "Check Arrs services"
minute: "*/5"
job: "{{ docker_root }}/scripts/check-services.sh >> {{ docker_root }}/logs/service-check.log 2>&1"
user: "{{ docker_user }}"
tags: ['monitoring']
- name: Display service information
debug:
msg: |
Services deployed successfully!
Access URLs:
- Sonarr: http://{{ ansible_default_ipv4.address }}:{{ ports.sonarr }}
- Radarr: http://{{ ansible_default_ipv4.address }}:{{ ports.radarr }}
- Lidarr: http://{{ ansible_default_ipv4.address }}:{{ ports.lidarr }}
- Bazarr: http://{{ ansible_default_ipv4.address }}:{{ ports.bazarr }}
- Prowlarr: http://{{ ansible_default_ipv4.address }}:{{ ports.prowlarr }}
Management commands:
- Start: sudo systemctl start arrs-stack
- Stop: sudo systemctl stop arrs-stack
- Status: sudo systemctl status arrs-stack
- Logs: docker-compose -f {{ docker_compose_dir }}/docker-compose.yml logs -f
tags: ['info']
- name: Deploy SABnzbd configuration fix script
template:
src: sabnzbd-config-fix.sh.j2
dest: "{{ docker_root }}/scripts/sabnzbd-config-fix.sh"
mode: '0755'
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
tags: ['services', 'sabnzbd']
- name: Apply SABnzbd hostname whitelist fix
shell: |
cd {{ docker_compose_dir }}
docker-compose exec -T sabnzbd /bin/bash -c "
if ! grep -q 'sonarr, radarr, lidarr' /config/sabnzbd.ini 2>/dev/null; then
echo 'Updating SABnzbd host_whitelist...'
sed -i 's/host_whitelist = \([^,]*\),/host_whitelist = \1, sonarr, radarr, lidarr, bazarr, prowlarr, whisparr, gluetun, localhost, 127.0.0.1,/' /config/sabnzbd.ini
echo 'SABnzbd host_whitelist updated for service connections'
else
echo 'SABnzbd host_whitelist already configured'
fi"
register: sabnzbd_config_result
changed_when: "'updated for service connections' in sabnzbd_config_result.stdout"
tags: ['services', 'sabnzbd']
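The `sed` expression in the task above keeps the first existing `host_whitelist` entry and appends the service hostnames after it. A standalone sketch against a hypothetical one-entry `sabnzbd.ini` fragment (file path and hostname are invented for the demo):

```shell
# Fake sabnzbd.ini fragment with a single whitelisted hostname.
printf 'host_whitelist = my-nas,\n' > /tmp/sabnzbd-demo.ini
# Same substitution as the task above: keep the first entry, append the service names.
sed -i 's/host_whitelist = \([^,]*\),/host_whitelist = \1, sonarr, radarr, lidarr, bazarr, prowlarr, whisparr, gluetun, localhost, 127.0.0.1,/' /tmp/sabnzbd-demo.ini
cat /tmp/sabnzbd-demo.ini
# -> host_whitelist = my-nas, sonarr, radarr, lidarr, bazarr, prowlarr, whisparr, gluetun, localhost, 127.0.0.1,
```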
- name: Restart SABnzbd if configuration was updated
shell: |
cd {{ docker_compose_dir }}
docker-compose restart sabnzbd
when: sabnzbd_config_result.changed
tags: ['services', 'sabnzbd']


@@ -1,93 +0,0 @@
---
# System setup tasks for Arrs Media Stack deployment
- name: Set timezone
timezone:
name: "{{ timezone }}"
notify: reload systemd
tags: ['timezone']
- name: Update system packages
apt:
upgrade: dist
update_cache: yes
cache_valid_time: 3600
tags: ['system_update']
- name: Install additional system utilities
apt:
name:
- vim
- git
- rsync
- cron
- logrotate
- fail2ban
- ncdu
- iotop
- nethogs
- jq
state: present
tags: ['system_packages']
- name: Configure automatic security updates
apt:
name: unattended-upgrades
state: present
tags: ['security_updates']
- name: Configure unattended-upgrades
template:
src: 50unattended-upgrades.j2
dest: /etc/apt/apt.conf.d/50unattended-upgrades
backup: yes
tags: ['security_updates']
- name: Enable automatic security updates
template:
src: 20auto-upgrades.j2
dest: /etc/apt/apt.conf.d/20auto-upgrades
backup: yes
tags: ['security_updates']
- name: Configure system limits for Docker
pam_limits:
domain: "{{ docker_user }}"
limit_type: "{{ item.type }}"
limit_item: "{{ item.item }}"
value: "{{ item.value }}"
loop:
- { type: 'soft', item: 'nofile', value: '65536' }
- { type: 'hard', item: 'nofile', value: '65536' }
- { type: 'soft', item: 'nproc', value: '32768' }
- { type: 'hard', item: 'nproc', value: '32768' }
tags: ['system_limits']
- name: Configure kernel parameters for Docker
sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: yes
loop:
- { name: 'vm.max_map_count', value: '262144' }
- { name: 'fs.file-max', value: '2097152' }
- { name: 'net.core.somaxconn', value: '65535' }
tags: ['kernel_params']
- name: Create systemd override directory for Docker
file:
path: /etc/systemd/system/docker.service.d
state: directory
mode: '0755'
tags: ['docker_systemd']
- name: Configure Docker systemd service
template:
src: docker-override.conf.j2
dest: /etc/systemd/system/docker.service.d/override.conf
backup: yes
notify:
- reload systemd
- restart docker
tags: ['docker_systemd']


@@ -1,128 +0,0 @@
---
# User and directory setup tasks
- name: Create docker group
group:
name: "{{ docker_group }}"
state: present
tags: ['users']
- name: Create docker user
user:
name: "{{ docker_user }}"
group: "{{ docker_group }}"
groups: docker
shell: /bin/bash
home: "{{ docker_root }}"
create_home: yes
system: no
state: present
tags: ['users']
- name: Add docker user to docker group
user:
name: "{{ docker_user }}"
groups: docker
append: yes
tags: ['users']
- name: Get docker user UID and GID
getent:
database: passwd
key: "{{ docker_user }}"
tags: ['users']
- name: Get docker group GID
getent:
database: group
key: "{{ docker_group }}"
tags: ['users']
- name: Display docker user information
debug:
msg: |
Docker user: {{ docker_user }}
Docker UID: {{ ansible_facts['getent_passwd'][docker_user][1] }}
Docker GID: {{ ansible_facts['getent_group'][docker_group][1] }}
tags: ['users']
- name: Create media directories
file:
path: "{{ item }}"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
loop: "{{ media_dirs }}"
tags: ['directories']
- name: Create docker config directories
file:
path: "{{ item }}"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
loop: "{{ docker_dirs }}"
tags: ['directories']
- name: Set ownership of media root
file:
path: "{{ media_root }}"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
recurse: yes
state: directory
tags: ['permissions']
- name: Set ownership of docker root
file:
path: "{{ docker_root }}"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
recurse: yes
state: directory
tags: ['permissions']
- name: Create docker user .bashrc
template:
src: bashrc.j2
dest: "{{ docker_root }}/.bashrc"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
tags: ['user_config']
- name: Create useful aliases for docker user
template:
src: bash_aliases.j2
dest: "{{ docker_root }}/.bash_aliases"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0644'
tags: ['user_config']
- name: Create scripts directory
file:
path: "{{ docker_root }}/scripts"
state: directory
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
tags: ['directories']
- name: Create management scripts
template:
src: "{{ item }}.j2"
dest: "{{ docker_root }}/scripts/{{ item }}"
owner: "{{ docker_user }}"
group: "{{ docker_group }}"
mode: '0755'
loop:
- arrs-start.sh
- arrs-stop.sh
- arrs-restart.sh
- arrs-logs.sh
- arrs-update.sh
- arrs-status.sh
tags: ['scripts']


@@ -1,77 +0,0 @@
# Environment Configuration for *arr Stack
# Generated by Ansible - Do not edit manually
# System Configuration
PUID=1000
PGID=1000
TZ=UTC
UMASK=022
# Network Configuration
TAILSCALE_IP={{ tailscale_ip }}
BIND_TO_TAILSCALE={{ bind_to_tailscale_only | default(true) }}
# VPN Configuration
VPN_PROVIDER={{ vpn_provider }}
VPN_USERNAME={{ vpn_username }}
VPN_PASSWORD={{ vpn_password }}
VPN_SERVER_REGIONS=United States
# Service Ports
PROWLARR_PORT={{ services.prowlarr }}
SONARR_PORT={{ services.sonarr }}
RADARR_PORT={{ services.radarr }}
LIDARR_PORT={{ services.lidarr }}
WHISPARR_PORT={{ services.whisparr }}
BAZARR_PORT={{ services.bazarr }}
JELLYSEERR_PORT={{ services.jellyseerr }}
SABNZBD_PORT={{ services.sabnzbd }}
DELUGE_PORT={{ services.deluge }}
PLEX_PORT={{ services.plex }}
TAUTULLI_PORT={{ services.tautulli }}
# API Keys (Generated during deployment)
PROWLARR_API_KEY={{ api_keys.prowlarr }}
SONARR_API_KEY={{ api_keys.sonarr }}
RADARR_API_KEY={{ api_keys.radarr }}
LIDARR_API_KEY={{ api_keys.lidarr }}
WHISPARR_API_KEY={{ api_keys.whisparr }}
BAZARR_API_KEY={{ api_keys.bazarr }}
JELLYSEERR_API_KEY={{ api_keys.jellyseerr }}
SABNZBD_API_KEY={{ api_keys.sabnzbd }}
# Directory Paths
DOCKER_ROOT={{ base_path }}
MEDIA_ROOT={{ base_path }}/media
DOWNLOADS_ROOT={{ base_path }}/downloads
# Security Settings
ENABLE_FAIL2BAN={{ enable_fail2ban | default(true) }}
ENABLE_FIREWALL={{ enable_firewall | default(true) }}
ENABLE_AUTO_UPDATES={{ enable_auto_updates | default(true) }}
# Backup Configuration
BACKUP_ENABLED={{ backup_enabled | default(true) }}
BACKUP_RETENTION_DAYS={{ backup_retention_days | default(30) }}
BACKUP_SCHEDULE={{ backup_schedule | default('0 2 * * *') }}
# Monitoring
ENABLE_MONITORING={{ enable_monitoring | default(true) }}
HEALTH_CHECK_INTERVAL={{ health_check_interval | default(300) }}
# Plex Configuration
PLEX_CLAIM={{ plex_claim | default('') }}
PLEX_ADVERTISE_IP={{ ansible_default_ipv4.address }}
# Resource Limits
MEMORY_LIMIT_SONARR={{ memory_limits.sonarr | default('1g') }}
MEMORY_LIMIT_RADARR={{ memory_limits.radarr | default('1g') }}
MEMORY_LIMIT_LIDARR={{ memory_limits.lidarr | default('512m') }}
MEMORY_LIMIT_PROWLARR={{ memory_limits.prowlarr | default('512m') }}
MEMORY_LIMIT_BAZARR={{ memory_limits.bazarr | default('256m') }}
MEMORY_LIMIT_JELLYSEERR={{ memory_limits.jellyseerr | default('512m') }}
MEMORY_LIMIT_SABNZBD={{ memory_limits.sabnzbd | default('1g') }}
MEMORY_LIMIT_DELUGE={{ memory_limits.deluge | default('512m') }}
MEMORY_LIMIT_PLEX={{ memory_limits.plex | default('4g') }}
MEMORY_LIMIT_TAUTULLI={{ memory_limits.tautulli | default('256m') }}
MEMORY_LIMIT_GLUETUN={{ memory_limits.gluetun | default('256m') }}


@@ -1,4 +0,0 @@
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";
APT::Periodic::Download-Upgradeable-Packages "1";


@@ -1,135 +0,0 @@
// Automatically upgrade packages from these (origin:archive) pairs
//
// Note that in Ubuntu security updates may pull in new dependencies
// from non-security sources (e.g. chromium). By allowing the release
// pocket these get automatically pulled in.
Unattended-Upgrade::Allowed-Origins {
"${distro_id}:${distro_codename}";
"${distro_id}:${distro_codename}-security";
// Extended Security Maintenance; doesn't necessarily exist for
// every release and this system may not have it installed, but if
// available, the policy for updates is such that unattended-upgrades
// should also install from here by default.
"${distro_id}ESMApps:${distro_codename}-apps-security";
"${distro_id}ESM:${distro_codename}-infra-security";
"${distro_id}:${distro_codename}-updates";
// "${distro_id}:${distro_codename}-proposed";
// "${distro_id}:${distro_codename}-backports";
};
// Python regular expressions, matching packages to exclude from upgrading
Unattended-Upgrade::Package-Blacklist {
// The following matches all packages starting with linux-
// "linux-";
// Use $ to explicitly define the end of a package name. Without
// the $, "libc6" would match all of them.
// "libc6$";
// "libc6-dev$";
// "libc6-i686$";
// Special characters need escaping
// "libstdc\+\+6$";
// The following matches packages like xen-system-amd64, xen-utils-4.1,
// xenstore-utils and libxenstore3.0
// "(lib)?xen(store)?";
// For more information about Python regular expressions, see
// https://docs.python.org/3/howto/regex.html
};
// This option allows you to control if on an unclean dpkg exit
// unattended-upgrades will automatically run
// dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
//Unattended-Upgrade::AutoFixInterruptedDpkg "true";
// Split the upgrade into the smallest possible chunks so that
// they can be interrupted with SIGTERM. This makes the upgrade
// a bit slower but it has the benefit that shutdown while an upgrade
// is running is possible (with a small delay)
//Unattended-Upgrade::MinimalSteps "true";
// Install all updates when the machine is shutting down
// instead of doing it in the background while the machine is running.
// This will (obviously) make shutdown slower.
// Unattended-upgrades increases logind's InhibitDelayMaxSec to 30s.
// This allows more time for unattended-upgrades to shut down gracefully
// or even install a few packages in InstallOnShutdown mode, but is still a
// big step back from the 30 minutes allowed for InstallOnShutdown previously.
// Users enabling InstallOnShutdown mode are advised to increase
// InhibitDelayMaxSec even further, possibly to 30 minutes.
//Unattended-Upgrade::InstallOnShutdown "false";
// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
//Unattended-Upgrade::Mail "";
// Set this value to one of:
// "always", "only-on-error" or "on-change"
// If this is not set, then any legacy MailOnlyOnError (boolean) value
// is used to choose between "only-on-error" and "on-change"
//Unattended-Upgrade::MailReport "on-change";
// Remove unused automatically installed kernel-related packages
// (kernel images, kernel headers and kernel version locked tools).
Unattended-Upgrade::Remove-Unused-Kernel-Packages "true";
// Do automatic removal of newly unused dependencies after the upgrade
Unattended-Upgrade::Remove-New-Unused-Dependencies "true";
// Do automatic removal of unused packages after the upgrade
// (equivalent to apt autoremove)
Unattended-Upgrade::Remove-Unused-Dependencies "true";
// Automatically reboot *WITHOUT CONFIRMATION* if
// the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "false";
// Automatically reboot even if there are users currently logged in
// when Unattended-Upgrade::Automatic-Reboot is set to true
//Unattended-Upgrade::Automatic-Reboot-WithUsers "true";
// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
//Unattended-Upgrade::Automatic-Reboot-Time "02:00";
// Use apt bandwidth limit feature, this example limits the download
// speed to 70kb/sec
//Acquire::http::Dl-Limit "70";
// Enable logging to syslog. Default is False
Unattended-Upgrade::SyslogEnable "true";
// Specify syslog facility. Default is daemon
// Unattended-Upgrade::SyslogFacility "daemon";
// Download and install upgrades only on AC power
// (i.e. skip or gracefully stop updates on battery)
// Unattended-Upgrade::OnlyOnACPower "true";
// Download and install upgrades only on non-metered connection
// (i.e. skip or gracefully stop updates on a metered connection)
// Unattended-Upgrade::Skip-Updates-On-Metered-Connections "true";
// Verbose logging
// Unattended-Upgrade::Verbose "false";
// Print debugging information both in unattended-upgrades and
// in unattended-upgrade-shutdown
// Unattended-Upgrade::Debug "false";
// Allow package downgrade if Pin-Priority exceeds 1000
// Unattended-Upgrade::Allow-downgrade "false";
// When APT fails to mark a package to be upgraded or installed try adjusting
// candidates of related packages to help APT's resolver in finding a solution
// where the package can be upgraded or installed.
// This is a workaround until APT's resolver is fixed to always find a
// solution if it exists. (See LP: #1831002)
// The default is true.
// Unattended-Upgrade::Allow-APT-Mark-Fallback "true";


@@ -1,79 +0,0 @@
# Logrotate configuration for Arrs applications
# Generated by Ansible
{{ docker_root }}/sonarr/logs/*.txt {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/radarr/logs/*.txt {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/lidarr/logs/*.txt {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/bazarr/logs/*.log {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/prowlarr/logs/*.txt {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/logs/*.log {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}
{{ docker_root }}/logs/*/*.log {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}


@@ -1,23 +0,0 @@
#!/bin/bash
# View logs for Arrs services
cd {{ docker_compose_dir }}
if [ $# -eq 0 ]; then
echo "Showing logs for all services..."
docker-compose logs -f
elif [ "$1" = "list" ]; then
echo "Available services:"
echo " sonarr"
echo " radarr"
echo " lidarr"
echo " bazarr"
echo " prowlarr"
echo " watchtower"
echo ""
echo "Usage: $0 [service_name]"
echo " $0 list"
else
echo "Showing logs for $1..."
docker-compose logs -f "$1"
fi


@@ -1,9 +0,0 @@
#!/bin/bash
# Restart all Arrs services
echo "Restarting Arrs Media Stack..."
cd {{ docker_compose_dir }}
docker-compose restart
echo "Services restarted successfully!"
docker-compose ps


@@ -1,21 +0,0 @@
[Unit]
Description=Arrs Media Stack
Requires=docker.service
After=docker.service
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
RemainAfterExit=yes
User={{ docker_user }}
Group={{ docker_group }}
WorkingDirectory={{ docker_compose_dir }}
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose down
ExecReload=/usr/local/bin/docker-compose restart
TimeoutStartSec=300
TimeoutStopSec=120
[Install]
WantedBy=multi-user.target


@@ -1,21 +0,0 @@
#!/bin/bash
# Start Arrs Media Stack
# Generated by Ansible
cd {{ docker_compose_dir }}
echo "Starting Arrs Media Stack..."
docker-compose up -d
echo "Waiting for services to start..."
sleep 10
echo "Service Status:"
docker-compose ps
echo ""
echo "Access URLs:"
echo "- Sonarr: http://$(hostname -I | awk '{print $1}'):{{ ports.sonarr }}"
echo "- Radarr: http://$(hostname -I | awk '{print $1}'):{{ ports.radarr }}"
echo "- Lidarr: http://$(hostname -I | awk '{print $1}'):{{ ports.lidarr }}"
echo "- Bazarr: http://$(hostname -I | awk '{print $1}'):{{ ports.bazarr }}"
echo "- Prowlarr: http://$(hostname -I | awk '{print $1}'):{{ ports.prowlarr }}"


@@ -1,31 +0,0 @@
#!/bin/bash
# Check Arrs Media Stack Status
# Generated by Ansible
cd {{ docker_compose_dir }}
echo "=== Docker Compose Status ==="
docker-compose ps
echo ""
echo "=== Container Health ==="
{% for service, port in ports.items() %}
if curl -s -o /dev/null -w "%{http_code}" http://localhost:{{ port }}/ping | grep -q "200"; then
echo "✅ {{ service|title }}: Healthy (Port {{ port }})"
else
echo "❌ {{ service|title }}: Unhealthy (Port {{ port }})"
fi
{% endfor %}
echo ""
echo "=== System Resources ==="
echo "Memory Usage:"
free -h
echo ""
echo "Disk Usage:"
df -h {{ media_root }} {{ docker_root }}
echo ""
echo "=== Recent Logs ==="
docker-compose logs --tail=5


@@ -1,9 +0,0 @@
#!/bin/bash
# Stop Arrs Media Stack
# Generated by Ansible
cd {{ docker_compose_dir }}
echo "Stopping Arrs Media Stack..."
docker-compose down
echo "All services stopped."


@@ -1,17 +0,0 @@
#!/bin/bash
# Update and restart all Arrs services
echo "Updating Arrs Media Stack..."
cd {{ docker_compose_dir }}
echo "Pulling latest images..."
docker-compose pull
echo "Restarting services with new images..."
docker-compose up -d
echo "Cleaning up old images..."
docker image prune -f
echo "Update completed successfully!"
docker-compose ps


@@ -1,75 +0,0 @@
#!/bin/bash
# Arrs Configuration Backup Script
# Generated by Ansible - Do not edit manually
set -euo pipefail
# Configuration
BACKUP_DIR="{{ backup_dir }}"
TIMESTAMP=$(date +"%Y%m%d_%H%M%S")
BACKUP_NAME="arrs_backup_${TIMESTAMP}"
BACKUP_PATH="${BACKUP_DIR}/${BACKUP_NAME}"
LOG_FILE="{{ docker_root }}/logs/backup.log"
# Logging function
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}
# Create backup directory
mkdir -p "$BACKUP_PATH"
log "Starting Arrs backup: $BACKUP_NAME"
# Stop services for consistent backup
log "Stopping Arrs services..."
cd {{ docker_compose_dir }}
docker-compose stop
# Backup configurations
log "Backing up configurations..."
{% for path in backup_paths %}
if [ -d "{{ path }}" ]; then
rsync -av "{{ path }}/" "$BACKUP_PATH/$(basename {{ path }})/"
log "Backed up {{ path }}"
fi
{% endfor %}
# Backup Docker Compose files
log "Backing up Docker Compose configuration..."
cp -r {{ docker_compose_dir }} "$BACKUP_PATH/compose"
# Create backup metadata
cat > "$BACKUP_PATH/backup_info.txt" << EOF
Backup Date: $(date)
Hostname: $(hostname)
Docker User: {{ docker_user }}
Media Root: {{ media_root }}
Docker Root: {{ docker_root }}
Backup Paths:
{% for path in backup_paths %}
- {{ path }}
{% endfor %}
EOF
# Restart services
log "Restarting Arrs services..."
docker-compose up -d
# Create compressed archive
log "Creating compressed archive..."
cd "$BACKUP_DIR"
tar -czf "${BACKUP_NAME}.tar.gz" "$BACKUP_NAME"
rm -rf "$BACKUP_NAME"
# Set permissions
chown {{ docker_user }}:{{ docker_group }} "${BACKUP_NAME}.tar.gz"
log "Backup completed: ${BACKUP_NAME}.tar.gz"
log "Backup size: $(du -h ${BACKUP_NAME}.tar.gz | cut -f1)"
# Cleanup old backups
log "Cleaning up old backups..."
find "$BACKUP_DIR" -name "arrs_backup_*.tar.gz" -mtime +{{ backup_retention_days }} -delete
log "Backup process finished successfully"


@@ -1,46 +0,0 @@
# Docker aliases
alias dps='docker ps'
alias dlog='docker logs'
alias dlogf='docker logs -f'
alias dexec='docker exec -it'
alias dstop='docker stop'
alias dstart='docker start'
alias drestart='docker restart'
# Docker Compose aliases
alias dcup='docker-compose up -d'
alias dcdown='docker-compose down'
alias dcrestart='docker-compose restart'
alias dcpull='docker-compose pull'
alias dclogs='docker-compose logs'
alias dclogsf='docker-compose logs -f'
# Arrs stack specific aliases
alias arrs-up='cd {{ docker_compose_dir }} && docker-compose up -d'
alias arrs-down='cd {{ docker_compose_dir }} && docker-compose down'
alias arrs-restart='cd {{ docker_compose_dir }} && docker-compose restart'
alias arrs-logs='cd {{ docker_compose_dir }} && docker-compose logs -f'
alias arrs-pull='cd {{ docker_compose_dir }} && docker-compose pull && docker-compose up -d'
alias arrs-status='cd {{ docker_compose_dir }} && docker-compose ps'
# Navigation aliases
alias media='cd {{ media_root }}'
alias docker-config='cd {{ docker_root }}'
alias compose='cd {{ docker_compose_dir }}'
# System monitoring aliases
alias df='df -h'
alias du='du -h'
alias free='free -h'
alias ps='ps aux'
# Log viewing aliases
alias syslog='tail -f /var/log/syslog'
alias dockerlog='tail -f /var/log/docker.log'
alias arrs-log-sonarr='docker logs -f sonarr'
alias arrs-log-radarr='docker logs -f radarr'
alias arrs-log-lidarr='docker logs -f lidarr'
alias arrs-log-bazarr='docker logs -f bazarr'
alias arrs-log-prowlarr='docker logs -f prowlarr'
alias arrs-log-watchtower='docker logs -f watchtower'


@@ -1,120 +0,0 @@
# ~/.bashrc: executed by bash(1) for non-login shells.
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
# don't put duplicate lines or lines starting with space in the history.
HISTCONTROL=ignoreboth
# append to the history file, don't overwrite it
shopt -s histappend
# for setting history length see HISTSIZE and HISTFILESIZE in bash(1)
HISTSIZE=1000
HISTFILESIZE=2000
# check the window size after each command and, if necessary,
# update the values of LINES and COLUMNS.
shopt -s checkwinsize
# make less more friendly for non-text input files, see lesspipe(1)
[ -x /usr/bin/lesspipe ] && eval "$(SHELL=/bin/sh lesspipe)"
# set variable identifying the chroot you work in (used in the prompt below)
if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
debian_chroot=$(cat /etc/debian_chroot)
fi
# set a fancy prompt (non-color, unless we know we "want" color)
case "$TERM" in
xterm-color|*-256color) color_prompt=yes;;
esac
if [ -n "$force_color_prompt" ]; then
if [ -x /usr/bin/tput ] && tput setaf 1 >&/dev/null; then
color_prompt=yes
else
color_prompt=
fi
fi
if [ "$color_prompt" = yes ]; then
PS1='${debian_chroot:+($debian_chroot)}\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\]\$ '
else
PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '
fi
unset color_prompt force_color_prompt
# If this is an xterm set the title to user@host:dir
case "$TERM" in
xterm*|rxvt*)
PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1"
;;
*)
;;
esac
# enable color support of ls and also add handy aliases
if [ -x /usr/bin/dircolors ]; then
test -r ~/.dircolors && eval "$(dircolors -b ~/.dircolors)" || eval "$(dircolors -b)"
alias ls='ls --color=auto'
alias grep='grep --color=auto'
alias fgrep='fgrep --color=auto'
alias egrep='egrep --color=auto'
fi
# colored GCC warnings and errors
export GCC_COLORS='error=01;31:warning=01;35:note=01;36:caret=01;32:locus=01:quote=01'
# some more ls aliases
alias ll='ls -alF'
alias la='ls -A'
alias l='ls -CF'
# Docker aliases for Arrs stack management
alias dps='docker ps'
alias dlog='docker logs'
alias dlogf='docker logs -f'
alias dexec='docker exec -it'
alias dstop='docker stop'
alias dstart='docker start'
alias drestart='docker restart'
# Docker Compose aliases
alias dcup='docker-compose up -d'
alias dcdown='docker-compose down'
alias dcrestart='docker-compose restart'
alias dcpull='docker-compose pull'
alias dclogs='docker-compose logs'
alias dclogsf='docker-compose logs -f'
# Arrs stack specific aliases
alias arrs-up='cd {{ docker_compose_dir }} && docker-compose up -d'
alias arrs-down='cd {{ docker_compose_dir }} && docker-compose down'
alias arrs-restart='cd {{ docker_compose_dir }} && docker-compose restart'
alias arrs-logs='cd {{ docker_compose_dir }} && docker-compose logs -f'
alias arrs-pull='cd {{ docker_compose_dir }} && docker-compose pull && docker-compose up -d'
alias arrs-status='cd {{ docker_compose_dir }} && docker-compose ps'
# Navigation aliases
alias media='cd {{ media_root }}'
alias docker-config='cd {{ docker_root }}'
alias compose='cd {{ docker_compose_dir }}'
echo "Welcome to the Arrs Media Stack server!"
echo "Available commands:"
echo " arrs-up - Start all services"
echo " arrs-down - Stop all services"
echo " arrs-restart - Restart all services"
echo " arrs-logs - View logs"
echo " arrs-pull - Update and restart services"
echo " arrs-status - Show service status"
echo ""
echo "Navigation:"
echo " media - Go to media directory"
echo " compose - Go to docker-compose directory"
echo ""


@@ -1,77 +0,0 @@
#!/bin/bash
# Service health check script for Arrs Media Stack
# Generated by Ansible
set -e
COMPOSE_DIR="{{ docker_compose_dir }}"
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
PORTS=({{ ports.sonarr }} {{ ports.radarr }} {{ ports.lidarr }} {{ ports.bazarr }} {{ ports.prowlarr }})
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
echo "=== Arrs Media Stack Health Check ==="
echo "Timestamp: $(date)"
echo
# Check if compose directory exists
if [[ ! -d "$COMPOSE_DIR" ]]; then
echo -e "${RED}ERROR: Compose directory not found: $COMPOSE_DIR${NC}"
exit 1
fi
cd "$COMPOSE_DIR"
# Check Docker Compose services
echo "Docker Compose Services:"
docker-compose ps
echo
echo "Service Health Status:"
# Check each service
for i in "${!SERVICES[@]}"; do
service="${SERVICES[$i]}"
# Skip watchtower port check
if [[ "$service" == "watchtower" ]]; then
if docker-compose ps "$service" | grep -q "Up"; then
echo -e " ${service}: ${GREEN}Running${NC}"
else
echo -e " ${service}: ${RED}Not Running${NC}"
fi
continue
fi
port="${PORTS[$i]}"
# Check if container is running
if docker-compose ps "$service" | grep -q "Up"; then
# Check if port is responding
if curl -s -f "http://localhost:$port/ping" >/dev/null 2>&1 || \
curl -s -f "http://localhost:$port" >/dev/null 2>&1 || \
nc -z localhost "$port" 2>/dev/null; then
echo -e " ${service} (port $port): ${GREEN}Healthy${NC}"
else
echo -e " ${service} (port $port): ${YELLOW}Running but not responding${NC}"
fi
else
echo -e " ${service} (port $port): ${RED}Not Running${NC}"
fi
done
echo
echo "System Resources:"
echo " Memory: $(free -h | grep Mem | awk '{print $3 "/" $2}')"
echo " Disk: $(df -h {{ docker_root }} | tail -1 | awk '{print $3 "/" $2 " (" $5 ")"}')"
echo
echo "Recent Container Events:"
docker events --since="1h" --until="now" 2>/dev/null | tail -5 || echo " No recent events"
echo
echo "=== End Health Check ==="


@@ -1,104 +0,0 @@
#!/bin/bash
# Configure Bazarr Connections
echo "🔧 Configuring Bazarr connections..."
# Create Bazarr configuration directory if it doesn't exist
mkdir -p /config/config
# Configure Sonarr connection in Bazarr
cat > /tmp/sonarr_config.py << 'EOF'
import sqlite3
import json
# Connect to Bazarr database
conn = sqlite3.connect('/config/db/bazarr.db')
cursor = conn.cursor()
# Update Sonarr settings
sonarr_settings = {
'ip': 'sonarr',
'port': 8989,
'base_url': '',
'ssl': False,
'apikey': '{{ api_keys.sonarr }}',
'full_update': 'Daily',
'only_monitored': False,
'series_sync': 60,
'episodes_sync': 60
}
# Insert or update Sonarr settings
for key, value in sonarr_settings.items():
cursor.execute(
"INSERT OR REPLACE INTO table_settings_sonarr (key, value) VALUES (?, ?)",
(key, json.dumps(value) if isinstance(value, (dict, list)) else str(value))
)
conn.commit()
conn.close()
print("✅ Sonarr configuration updated in Bazarr")
EOF
# Configure Radarr connection in Bazarr
cat > /tmp/radarr_config.py << 'EOF'
import sqlite3
import json
# Connect to Bazarr database
conn = sqlite3.connect('/config/db/bazarr.db')
cursor = conn.cursor()
# Update Radarr settings
radarr_settings = {
'ip': 'radarr',
'port': 7878,
'base_url': '',
'ssl': False,
'apikey': '{{ api_keys.radarr }}',
'full_update': 'Daily',
'only_monitored': False,
'movies_sync': 60
}
# Insert or update Radarr settings
for key, value in radarr_settings.items():
cursor.execute(
"INSERT OR REPLACE INTO table_settings_radarr (key, value) VALUES (?, ?)",
(key, json.dumps(value) if isinstance(value, (dict, list)) else str(value))
)
conn.commit()
conn.close()
print("✅ Radarr configuration updated in Bazarr")
EOF
# Run the configuration scripts if Python is available
if command -v python3 >/dev/null 2>&1; then
python3 /tmp/sonarr_config.py 2>/dev/null || echo "⚠️ Sonarr config update failed - configure manually"
python3 /tmp/radarr_config.py 2>/dev/null || echo "⚠️ Radarr config update failed - configure manually"
else
echo "⚠️ Python not available - configure Bazarr manually via web interface"
fi
# Enable Sonarr and Radarr in Bazarr settings
cat > /tmp/enable_services.py << 'EOF'
import sqlite3
conn = sqlite3.connect('/config/db/bazarr.db')
cursor = conn.cursor()
# Enable Sonarr and Radarr
cursor.execute("INSERT OR REPLACE INTO table_settings_general (key, value) VALUES ('use_sonarr', 'True')")
cursor.execute("INSERT OR REPLACE INTO table_settings_general (key, value) VALUES ('use_radarr', 'True')")
conn.commit()
conn.close()
print("✅ Sonarr and Radarr enabled in Bazarr")
EOF
if command -v python3 >/dev/null 2>&1; then
python3 /tmp/enable_services.py 2>/dev/null || echo "⚠️ Service enabling failed"
fi
echo "✅ Bazarr configuration complete!"


@@ -1,51 +0,0 @@
#!/bin/bash
# Configure Download Clients for *arr Services
echo "🔧 Configuring Download Clients..."
# Get service name from hostname
SERVICE=$(hostname)
# Configure SABnzbd
curl -X POST "http://localhost:$(cat /proc/1/environ | tr '\0' '\n' | grep PORT | cut -d'=' -f2)/api/v3/downloadclient" \
-H "X-Api-Key: $(cat /config/config.xml | grep -o '<ApiKey>[^<]*</ApiKey>' | sed 's/<[^>]*>//g')" \
-H 'Content-Type: application/json' \
-d '{
"enable": true,
"name": "SABnzbd",
"implementation": "Sabnzbd",
"configContract": "SabnzbdSettings",
"fields": [
{"name": "host", "value": "gluetun"},
{"name": "port", "value": 8081},
{"name": "apiKey", "value": "{{ api_keys.sabnzbd }}"},
{"name": "username", "value": ""},
{"name": "password", "value": ""},
{"name": "tvCategory", "value": "tv"},
{"name": "recentTvPriority", "value": 0},
{"name": "olderTvPriority", "value": 0},
{"name": "useSsl", "value": false}
]
}' 2>/dev/null || echo "SABnzbd configuration failed or already exists"
# Configure Deluge
curl -X POST "http://localhost:$(cat /proc/1/environ | tr '\0' '\n' | grep PORT | cut -d'=' -f2)/api/v3/downloadclient" \
-H "X-Api-Key: $(cat /config/config.xml | grep -o '<ApiKey>[^<]*</ApiKey>' | sed 's/<[^>]*>//g')" \
-H 'Content-Type: application/json' \
-d '{
"enable": true,
"name": "Deluge",
"implementation": "Deluge",
"configContract": "DelugeSettings",
"fields": [
{"name": "host", "value": "gluetun"},
{"name": "port", "value": 8112},
{"name": "password", "value": "deluge"},
{"name": "tvCategory", "value": "tv"},
{"name": "recentTvPriority", "value": 0},
{"name": "olderTvPriority", "value": 0},
{"name": "useSsl", "value": false}
]
}' 2>/dev/null || echo "Deluge configuration failed or already exists"
echo "✅ Download clients configuration complete for $SERVICE!"


@@ -1,51 +0,0 @@
#!/bin/bash
# Configure Jellyseerr Services
echo "🔧 Configuring Jellyseerr services..."
# Wait for Jellyseerr to be ready
sleep 10
# Configure Sonarr in Jellyseerr
curl -X POST 'http://localhost:5055/api/v1/service/sonarr' \
-H 'X-Api-Key: {{ api_keys.jellyseerr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Sonarr",
"hostname": "sonarr",
"port": 8989,
"apiKey": "{{ api_keys.sonarr }}",
"useSsl": false,
"baseUrl": "",
"activeProfileId": 1,
"activeLanguageProfileId": 1,
"activeDirectory": "/data/media/tv",
"is4k": false,
"enableSeasonFolders": true,
"externalUrl": "",
"syncEnabled": true,
"preventSearch": false
}' 2>/dev/null || echo "⚠️ Sonarr configuration failed - may already exist"
# Configure Radarr in Jellyseerr
curl -X POST 'http://localhost:5055/api/v1/service/radarr' \
-H 'X-Api-Key: {{ api_keys.jellyseerr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Radarr",
"hostname": "radarr",
"port": 7878,
"apiKey": "{{ api_keys.radarr }}",
"useSsl": false,
"baseUrl": "",
"activeProfileId": 1,
"activeDirectory": "/data/media/movies",
"is4k": false,
"externalUrl": "",
"syncEnabled": true,
"preventSearch": false,
"minimumAvailability": "released"
}' 2>/dev/null || echo "⚠️ Radarr configuration failed - may already exist"
echo "✅ Jellyseerr services configuration complete!"
echo "🌐 Access Jellyseerr at: http://your-server:5055"


@@ -1,70 +0,0 @@
#!/bin/bash
# Configure Prowlarr Applications
echo "🔧 Configuring Prowlarr Applications..."
# Add Sonarr
curl -X POST 'http://localhost:9696/api/v1/applications' \
-H 'X-Api-Key: {{ api_keys.prowlarr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Sonarr",
"syncLevel": "fullSync",
"implementation": "Sonarr",
"configContract": "SonarrSettings",
"fields": [
{"name": "baseUrl", "value": "http://sonarr:8989"},
{"name": "apiKey", "value": "{{ api_keys.sonarr }}"},
{"name": "syncCategories", "value": [5000, 5030, 5040]}
]
}' || echo "Sonarr already configured or error occurred"
# Add Radarr
curl -X POST 'http://localhost:9696/api/v1/applications' \
-H 'X-Api-Key: {{ api_keys.prowlarr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Radarr",
"syncLevel": "fullSync",
"implementation": "Radarr",
"configContract": "RadarrSettings",
"fields": [
{"name": "baseUrl", "value": "http://radarr:7878"},
{"name": "apiKey", "value": "{{ api_keys.radarr }}"},
{"name": "syncCategories", "value": [2000, 2010, 2020, 2030, 2040, 2045, 2050, 2060]}
]
}' || echo "Radarr already configured or error occurred"
# Add Lidarr
curl -X POST 'http://localhost:9696/api/v1/applications' \
-H 'X-Api-Key: {{ api_keys.prowlarr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Lidarr",
"syncLevel": "fullSync",
"implementation": "Lidarr",
"configContract": "LidarrSettings",
"fields": [
{"name": "baseUrl", "value": "http://lidarr:8686"},
{"name": "apiKey", "value": "{{ api_keys.lidarr }}"},
{"name": "syncCategories", "value": [3000, 3010, 3020, 3030, 3040]}
]
}' || echo "Lidarr already configured or error occurred"
# Add Whisparr
curl -X POST 'http://localhost:9696/api/v1/applications' \
-H 'X-Api-Key: {{ api_keys.prowlarr }}' \
-H 'Content-Type: application/json' \
-d '{
"name": "Whisparr",
"syncLevel": "fullSync",
"implementation": "Whisparr",
"configContract": "WhisparrSettings",
"fields": [
{"name": "baseUrl", "value": "http://whisparr:6969"},
{"name": "apiKey", "value": "{{ api_keys.whisparr }}"},
{"name": "syncCategories", "value": [6000, 6010, 6020, 6030, 6040, 6050, 6060, 6070]}
]
}' || echo "Whisparr already configured or error occurred"
echo "✅ Prowlarr applications configuration complete!"


@@ -1,18 +0,0 @@
{
"log-driver": "json-file",
"log-opts": {
"max-size": "{{ log_max_size | default('10m') }}",
"max-file": "{{ log_max_files | default('3') }}"
},
"storage-driver": "overlay2",
"userland-proxy": false,
"experimental": false,
"live-restore": true,
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 65536,
"Soft": 65536
}
}
}
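A syntax error in a rendered daemon.json stops the Docker daemon from starting at all, so it is worth validating the file before restarting the service. A minimal sketch using python3's JSON parser (the helper name and temp path are illustrative):

```shell
# Validate a rendered daemon.json before restarting Docker; a parse
# failure here would otherwise keep dockerd from coming back up.
check_daemon_json() {
  python3 -m json.tool "$1" > /dev/null 2>&1
}

cfg=$(mktemp)
printf '{"log-driver": "json-file", "live-restore": true}\n' > "$cfg"
if check_daemon_json "$cfg"; then
  echo "daemon.json OK"   # now safe to: systemctl restart docker
fi
rm -f "$cfg"
```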


@@ -1,210 +0,0 @@
#!/bin/bash
# Disk usage monitoring script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/system"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
DISK_LOG="$LOG_DIR/disk-usage-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_disk() {
echo "[$TIMESTAMP] $1" >> "$DISK_LOG"
}
# Disk usage thresholds
WARNING_THRESHOLD=80
CRITICAL_THRESHOLD=90
log_disk "=== DISK USAGE MONITORING ==="
# Monitor main directories
DIRECTORIES=(
"{{ docker_root }}"
"{{ media_root }}"
"/var/lib/docker"
"/tmp"
"/var/log"
)
for dir in "${DIRECTORIES[@]}"; do
if [[ -d "$dir" ]]; then
USAGE=$(df "$dir" | tail -1)
FILESYSTEM=$(echo "$USAGE" | awk '{print $1}')
TOTAL=$(echo "$USAGE" | awk '{print $2}')
USED=$(echo "$USAGE" | awk '{print $3}')
AVAILABLE=$(echo "$USAGE" | awk '{print $4}')
PERCENT=$(echo "$USAGE" | awk '{print $5}' | cut -d'%' -f1)
# Convert to human readable
TOTAL_GB=$((TOTAL / 1024 / 1024))
USED_GB=$((USED / 1024 / 1024))
AVAILABLE_GB=$((AVAILABLE / 1024 / 1024))
log_disk "DISK_USAGE $dir - Filesystem: $FILESYSTEM, Total: ${TOTAL_GB}GB, Used: ${USED_GB}GB (${PERCENT}%), Available: ${AVAILABLE_GB}GB"
# Check thresholds
if [[ $PERCENT -ge $CRITICAL_THRESHOLD ]]; then
log_disk "CRITICAL_ALERT $dir disk usage is ${PERCENT}% (>=${CRITICAL_THRESHOLD}%)"
elif [[ $PERCENT -ge $WARNING_THRESHOLD ]]; then
log_disk "WARNING_ALERT $dir disk usage is ${PERCENT}% (>=${WARNING_THRESHOLD}%)"
fi
else
log_disk "DIRECTORY_NOT_FOUND $dir does not exist"
fi
done
# Monitor specific subdirectories in Docker root
log_disk "=== DOCKER SUBDIRECTORY USAGE ==="
DOCKER_SUBDIRS=(
"{{ docker_root }}/sonarr"
"{{ docker_root }}/radarr"
"{{ docker_root }}/lidarr"
"{{ docker_root }}/bazarr"
"{{ docker_root }}/prowlarr"
"{{ docker_root }}/compose"
"{{ docker_root }}/logs"
)
for subdir in "${DOCKER_SUBDIRS[@]}"; do
if [[ -d "$subdir" ]]; then
SIZE=$(du -sh "$subdir" 2>/dev/null | cut -f1)
log_disk "SUBDIR_SIZE $subdir: $SIZE"
fi
done
# Monitor media subdirectories
log_disk "=== MEDIA DIRECTORY USAGE ==="
MEDIA_SUBDIRS=(
"{{ media_root }}/movies"
"{{ media_root }}/tv"
"{{ media_root }}/music"
"{{ media_root }}/downloads"
)
for subdir in "${MEDIA_SUBDIRS[@]}"; do
if [[ -d "$subdir" ]]; then
SIZE=$(du -sh "$subdir" 2>/dev/null | cut -f1)
FILE_COUNT=$(find "$subdir" -type f 2>/dev/null | wc -l)
log_disk "MEDIA_SIZE $subdir: $SIZE ($FILE_COUNT files)"
else
log_disk "MEDIA_DIR_NOT_FOUND $subdir does not exist"
fi
done
# Docker system disk usage
if command -v docker >/dev/null 2>&1; then
log_disk "=== DOCKER SYSTEM USAGE ==="
# Docker system df
DOCKER_DF=$(docker system df --format "{{ '{{.Type}}' }}\t{{ '{{.TotalCount}}' }}\t{{ '{{.Active}}' }}\t{{ '{{.Size}}' }}\t{{ '{{.Reclaimable}}' }}" 2>/dev/null)
if [[ -n "$DOCKER_DF" ]]; then
echo "$DOCKER_DF" | while IFS=$'\t' read -r type total active size reclaimable; do
log_disk "DOCKER_USAGE $type - Total: $total, Active: $active, Size: $size, Reclaimable: $reclaimable"
done
fi
# Container sizes
cd {{ docker_compose_dir }}
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
for service in "${SERVICES[@]}"; do
CONTAINER_ID=$(docker-compose ps -q "$service" 2>/dev/null)
if [[ -n "$CONTAINER_ID" ]]; then
CONTAINER_SIZE=$(docker inspect "$CONTAINER_ID" --format='{{ "{{.SizeRw}}" }}' 2>/dev/null)
if [[ -n "$CONTAINER_SIZE" && "$CONTAINER_SIZE" != "null" ]]; then
CONTAINER_SIZE_MB=$((CONTAINER_SIZE / 1024 / 1024))
log_disk "CONTAINER_SIZE $service: ${CONTAINER_SIZE_MB}MB"
fi
fi
done
fi
# Large files detection
log_disk "=== LARGE FILES DETECTION ==="
LARGE_FILES=$(find {{ docker_root }} -type f -size +100M 2>/dev/null | head -10)
if [[ -n "$LARGE_FILES" ]]; then
echo "$LARGE_FILES" | while IFS= read -r file; do
SIZE=$(du -sh "$file" 2>/dev/null | cut -f1)
log_disk "LARGE_FILE $file: $SIZE"
done
else
log_disk "LARGE_FILES No files larger than 100MB found in {{ docker_root }}"
fi
# Log files size monitoring
log_disk "=== LOG FILES MONITORING ==="
LOG_DIRS=(
"{{ docker_root }}/logs"
"{{ docker_root }}/sonarr/logs"
"{{ docker_root }}/radarr/logs"
"{{ docker_root }}/lidarr/logs"
"{{ docker_root }}/bazarr/logs"
"{{ docker_root }}/prowlarr/logs"
"/var/log"
)
for log_dir in "${LOG_DIRS[@]}"; do
if [[ -d "$log_dir" ]]; then
LOG_SIZE=$(du -sh "$log_dir" 2>/dev/null | cut -f1)
LOG_COUNT=$(find "$log_dir" -name "*.log" -o -name "*.txt" 2>/dev/null | wc -l)
log_disk "LOG_DIR_SIZE $log_dir: $LOG_SIZE ($LOG_COUNT log files)"
# Find large log files
LARGE_LOGS=$(find "$log_dir" \( -name "*.log" -o -name "*.txt" \) -size +10M 2>/dev/null)
if [[ -n "$LARGE_LOGS" ]]; then
echo "$LARGE_LOGS" | while IFS= read -r logfile; do
SIZE=$(du -sh "$logfile" 2>/dev/null | cut -f1)
log_disk "LARGE_LOG $logfile: $SIZE"
done
fi
fi
done
# Disk I/O statistics
if command -v iostat >/dev/null 2>&1; then
log_disk "=== DISK I/O STATISTICS ==="
IOSTAT_OUTPUT=$(iostat -d 1 1 | tail -n +4)
echo "$IOSTAT_OUTPUT" | while IFS= read -r line; do
if [[ -n "$line" && "$line" != *"Device"* ]]; then
log_disk "DISK_IO $line"
fi
done
fi
# Cleanup recommendations
log_disk "=== CLEANUP RECOMMENDATIONS ==="
# Check for old Docker images
if command -v docker >/dev/null 2>&1; then
DANGLING_IMAGES=$(docker images -f "dangling=true" -q | wc -l)
if [[ $DANGLING_IMAGES -gt 0 ]]; then
log_disk "CLEANUP_RECOMMENDATION $DANGLING_IMAGES dangling Docker images can be removed with 'docker image prune'"
fi
UNUSED_VOLUMES=$(docker volume ls -f "dangling=true" -q | wc -l)
if [[ $UNUSED_VOLUMES -gt 0 ]]; then
log_disk "CLEANUP_RECOMMENDATION $UNUSED_VOLUMES unused Docker volumes can be removed with 'docker volume prune'"
fi
fi
# Check for old log files
OLD_LOGS=$(find {{ docker_root }}/logs -name "*.log" -mtime +30 2>/dev/null | wc -l)
if [[ $OLD_LOGS -gt 0 ]]; then
log_disk "CLEANUP_RECOMMENDATION $OLD_LOGS log files older than 30 days can be cleaned up"
fi
# Check for compressed logs
COMPRESSED_LOGS=$(find {{ docker_root }}/logs -name "*.gz" -mtime +90 2>/dev/null | wc -l)
if [[ $COMPRESSED_LOGS -gt 0 ]]; then
log_disk "CLEANUP_RECOMMENDATION $COMPRESSED_LOGS compressed log files older than 90 days can be removed"
fi
log_disk "=== END DISK USAGE MONITORING ==="
# Cleanup old disk usage logs (keep 7 days)
find "$LOG_DIR" -name "disk-usage-*.log" -mtime +7 -delete 2>/dev/null
exit 0
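The 80%/90% threshold check above is the piece most likely to be reused by other monitors; it can be condensed into one helper with the same cutoffs:

```shell
# Classify a disk-usage percentage against the WARNING/CRITICAL
# thresholds used above (80% / 90%).
classify_usage() {
  local pct=$1
  if [ "$pct" -ge 90 ]; then
    echo CRITICAL
  elif [ "$pct" -ge 80 ]; then
    echo WARNING
  else
    echo OK
  fi
}

classify_usage 85   # prints WARNING
```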


@@ -1,32 +0,0 @@
#include <tunables/global>
profile docker-arrs flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/base>
network,
capability,
file,
umount,
deny @{PROC}/* w, # deny write for all files directly in /proc (not in a subdir)
deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
deny @{PROC}/sys/[^k]** w, # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w, # deny everything except shm* in /proc/sys/kernel/
deny @{PROC}/sysrq-trigger rwklx,
deny @{PROC}/mem rwklx,
deny @{PROC}/kmem rwklx,
deny @{PROC}/kcore rwklx,
deny mount,
deny /sys/[^f]*/** wklx,
deny /sys/f[^s]*/** wklx,
deny /sys/fs/[^c]*/** wklx,
deny /sys/fs/c[^g]*/** wklx,
deny /sys/fs/cg[^r]*/** wklx,
deny /sys/firmware/** rwklx,
deny /sys/kernel/security/** rwklx,
# suppress ptrace denials when using 'docker ps' or using 'ps' inside a container
ptrace (trace,read) peer=docker-arrs,
}


@@ -1,575 +0,0 @@
---
# Docker Compose for Arrs Media Stack
# Adapted from Dr. Frankenstein's guide for VPS deployment
# Generated by Ansible - Do not edit manually
version: '3.8'
services:
sonarr:
image: linuxserver/sonarr:latest
container_name: sonarr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/sonarr:/config
- {{ media_root }}:/data
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.sonarr }}:8989/tcp" # Tailscale only
{% else %}
- "{{ ports.sonarr }}:8989/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8989/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
radarr:
image: linuxserver/radarr:latest
container_name: radarr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/radarr:/config
- {{ media_root }}:/data
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.radarr }}:7878/tcp" # Tailscale only
{% else %}
- "{{ ports.radarr }}:7878/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:7878/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
lidarr:
image: linuxserver/lidarr:latest
container_name: lidarr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/lidarr:/config
- {{ media_root }}:/data
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.lidarr }}:8686/tcp" # Tailscale only
{% else %}
- "{{ ports.lidarr }}:8686/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8686/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
bazarr:
image: linuxserver/bazarr:latest
container_name: bazarr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/bazarr:/config
- {{ media_root }}:/data
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.bazarr }}:6767/tcp" # Tailscale only
{% else %}
- "{{ ports.bazarr }}:6767/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:6767/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
prowlarr:
image: linuxserver/prowlarr:latest
container_name: prowlarr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/prowlarr:/config
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.prowlarr }}:9696/tcp" # Tailscale only
{% else %}
- "{{ ports.prowlarr }}:9696/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:9696/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
whisparr:
image: ghcr.io/hotio/whisparr
container_name: whisparr
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
volumes:
- {{ docker_root }}/whisparr:/config
- {{ media_root }}:/data
- {{ media_root }}/xxx:/data/xxx # Adult content directory
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.whisparr }}:6969/tcp" # Tailscale only
{% else %}
- "{{ ports.whisparr }}:6969/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:6969/ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
sabnzbd:
image: linuxserver/sabnzbd:latest
container_name: sabnzbd
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
{% if vpn_enabled and sabnzbd_vpn_enabled %}
- WEBUI_PORT=8081 # Use different port when through VPN to avoid qBittorrent conflict
{% endif %}
volumes:
- {{ docker_root }}/sabnzbd:/config
- {{ media_root }}/downloads:/downloads
- {{ media_root }}/downloads/incomplete:/incomplete-downloads
{% if vpn_enabled and sabnzbd_vpn_enabled %}
network_mode: "service:gluetun" # Route through VPN
depends_on:
- gluetun
{% else %}
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.sabnzbd }}:8080/tcp" # Tailscale only
{% else %}
- "{{ ports.sabnzbd }}:8080/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
{% endif %}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
{% if vpn_enabled and sabnzbd_vpn_enabled %}
test: ["CMD", "curl", "-f", "http://localhost:8081/api?mode=version"]
{% else %}
test: ["CMD", "curl", "-f", "http://localhost:8080/api?mode=version"]
{% endif %}
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
plex:
image: linuxserver/plex:latest
container_name: plex
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- VERSION=docker
- PLEX_CLAIM={{ plex_claim_token | default('') }}
volumes:
- {{ docker_root }}/plex:/config
- {{ media_root }}/movies:/movies:ro
- {{ media_root }}/tv:/tv:ro
- {{ media_root }}/music:/music:ro
ports:
{% if plex_public_access %}
- "{{ ports.plex }}:32400/tcp" # Public access for direct streaming
{% elif bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.plex }}:32400/tcp" # Tailscale only
{% else %}
- "{{ ports.plex }}:32400/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:32400/web"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
labels:
- "com.centurylinklabs.watchtower.enable=true"
tautulli:
image: linuxserver/tautulli:latest
container_name: tautulli
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
volumes:
- {{ docker_root }}/tautulli:/config
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.tautulli }}:8181/tcp" # Tailscale only
{% else %}
- "{{ ports.tautulli }}:8181/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8181/status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
jellyseerr:
image: fallenbagel/jellyseerr:latest
container_name: jellyseerr
environment:
- LOG_LEVEL=debug
- TZ={{ timezone }}
volumes:
- {{ docker_root }}/jellyseerr:/app/config
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.jellyseerr }}:5055/tcp" # Tailscale only
{% else %}
- "{{ ports.jellyseerr }}:5055/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:5055/api/v1/status"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
labels:
- "com.centurylinklabs.watchtower.enable=true"
{% if vpn_enabled %}
gluetun:
image: qmcgaw/gluetun:latest
container_name: gluetun
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun:/dev/net/tun
environment:
- VPN_SERVICE_PROVIDER={{ vpn_provider | default('') }}
- VPN_TYPE={{ vpn_type | default('openvpn') }}
{% if vpn_type == 'wireguard' %}
- WIREGUARD_PRIVATE_KEY={{ wireguard_private_key | default('') }}
- WIREGUARD_ADDRESSES={{ wireguard_addresses | default('') }}
- WIREGUARD_PUBLIC_KEY={{ wireguard_public_key | default('') }}
- VPN_ENDPOINT_IP={{ (wireguard_endpoint | default(':51820')).split(':')[0] }}
- VPN_ENDPOINT_PORT={{ (wireguard_endpoint | default(':51820')).split(':')[1] }}
{% else %}
- OPENVPN_USER={{ openvpn_user | default('') }}
- OPENVPN_PASSWORD={{ openvpn_password | default('') }}
{% if vpn_provider == 'custom' %}
- OPENVPN_CUSTOM_CONFIG=/gluetun/custom.conf
{% endif %}
{% endif %}
{% if vpn_provider != 'custom' and vpn_type != 'wireguard' %}
- SERVER_COUNTRIES={{ vpn_countries | default('') }}
{% endif %}
- FIREWALL_OUTBOUND_SUBNETS={{ docker_network_subnet }}
- FIREWALL_INPUT_PORTS=8112{% if sabnzbd_vpn_enabled %},8081{% endif %} # Allow WebUI access from the host side (FIREWALL_VPN_INPUT_PORTS is only for ports forwarded through the tunnel)
- FIREWALL=on # Enable firewall kill switch
- DOT=off # Disable DNS over TLS to prevent leaks
- BLOCK_MALICIOUS=on # Block malicious domains
- BLOCK_ADS=off # Keep ads blocking off to avoid issues
- UNBLOCK= # No unblocking needed
- TZ={{ timezone }}
volumes:
- {{ docker_root }}/gluetun:/gluetun
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.sabnzbd }}:8081/tcp" # SABnzbd WebUI through VPN (Tailscale only)
- "{{ tailscale_bind_ip }}:{{ ports.deluge }}:8112/tcp" # Deluge WebUI through VPN (Tailscale only)
{% else %}
- "{{ ports.sabnzbd }}:8081/tcp" # SABnzbd WebUI through VPN (all interfaces)
- "{{ ports.deluge }}:8112/tcp" # Deluge WebUI through VPN (all interfaces)
{% endif %}
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://www.google.com/"]
interval: 60s
timeout: 30s
retries: 3
start_period: 120s
labels:
- "com.centurylinklabs.watchtower.enable=true"
{% endif %}
deluge:
image: linuxserver/deluge:latest
container_name: deluge
environment:
- PUID={{ docker_uid }}
- PGID={{ docker_gid }}
- TZ={{ timezone }}
- UMASK=022
- DELUGE_LOGLEVEL=error
volumes:
- {{ docker_root }}/deluge:/config
- {{ media_root }}/downloads:/downloads
{% if vpn_enabled %}
network_mode: "service:gluetun" # Route through VPN
depends_on:
- gluetun
{% else %}
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.deluge }}:8112/tcp" # Tailscale only
{% else %}
- "{{ ports.deluge }}:8112/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
{% endif %}
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8112/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
labels:
- "com.centurylinklabs.watchtower.enable=true"
# TubeArchivist stack - YouTube archiving
tubearchivist-es:
image: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
container_name: tubearchivist-es
environment:
- "ELASTIC_PASSWORD=verysecret"
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- "xpack.security.enabled=true"
- "discovery.type=single-node"
- "path.repo=/usr/share/elasticsearch/data/snapshot"
volumes:
- {{ docker_root }}/tubearchivist/es:/usr/share/elasticsearch/data
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-u", "elastic:verysecret", "-f", "http://localhost:9200/_cluster/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
labels:
- "com.centurylinklabs.watchtower.enable=true"
tubearchivist-redis:
image: redis/redis-stack-server:latest
container_name: tubearchivist-redis
volumes:
- {{ docker_root }}/tubearchivist/redis:/data
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
labels:
- "com.centurylinklabs.watchtower.enable=true"
tubearchivist:
image: bbilly1/tubearchivist:latest
container_name: tubearchivist
environment:
- ES_URL=http://tubearchivist-es:9200
- REDIS_CON=redis://tubearchivist-redis:6379
- HOST_UID={{ docker_uid }}
- HOST_GID={{ docker_gid }}
- TA_HOST=http://{{ tailscale_bind_ip }}:{{ ports.tubearchivist }}
- TA_USERNAME=tubearchivist
- TA_PASSWORD=verysecret
- ELASTIC_PASSWORD=verysecret
- TZ={{ timezone }}
volumes:
- {{ media_root }}/youtube:/youtube
- {{ docker_root }}/tubearchivist/cache:/cache
ports:
{% if bind_to_tailscale_only %}
- "{{ tailscale_bind_ip }}:{{ ports.tubearchivist }}:8000/tcp" # Tailscale only
{% else %}
- "{{ ports.tubearchivist }}:8000/tcp" # All interfaces
{% endif %}
networks:
- arrs_network
depends_on:
- tubearchivist-es
- tubearchivist-redis
security_opt:
- no-new-privileges:true
restart: always
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8000/health/"]
interval: 30s
timeout: 10s
retries: 3
start_period: 60s
labels:
- "com.centurylinklabs.watchtower.enable=true"
{% if watchtower_enabled %}
watchtower:
image: containrrr/watchtower:1.7.1
container_name: watchtower
environment:
- TZ={{ timezone }}
- WATCHTOWER_SCHEDULE={{ watchtower_schedule }}
- WATCHTOWER_CLEANUP={{ watchtower_cleanup | lower }}
- WATCHTOWER_LABEL_ENABLE=true
- WATCHTOWER_INCLUDE_RESTARTING=true
- DOCKER_API_VERSION=1.44
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- {{ docker_root }}/watchtower:/config
networks:
- arrs_network
security_opt:
- no-new-privileges:true
restart: always
labels:
- "com.centurylinklabs.watchtower.enable=false"
{% endif %}
{% if log_rotation_enabled %}
logrotate:
image: blacklabelops/logrotate:latest
container_name: logrotate
environment:
- LOGS_DIRECTORIES=/var/lib/docker/containers /logs
- LOGROTATE_INTERVAL=daily
- LOGROTATE_COPIES=7
- LOGROTATE_SIZE=100M
volumes:
- /var/lib/docker/containers:/var/lib/docker/containers:ro
- {{ docker_root }}/logs:/logs
networks:
- arrs_network
restart: always
labels:
- "com.centurylinklabs.watchtower.enable=true"
{% endif %}
networks:
arrs_network:
driver: bridge
ipam:
config:
- subnet: {{ docker_network_subnet }}
gateway: {{ docker_network_gateway }}
volumes:
sonarr_config:
driver: local
radarr_config:
driver: local
lidarr_config:
driver: local
bazarr_config:
driver: local
prowlarr_config:
driver: local


@@ -1,27 +0,0 @@
# Docker container log rotation configuration
# Generated by Ansible
/var/lib/docker/containers/*/*.log {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
# dockerd does not reopen json-file logs on a signal (SIGUSR1 dumps stack
# traces), so rotate with copytruncate instead of a postrotate kill
copytruncate
}
{{ docker_root }}/logs/*.log {
daily
rotate {{ log_max_files }}
size {{ log_max_size }}
compress
delaycompress
missingok
notifempty
create 0644 {{ docker_user }} {{ docker_group }}
}


@@ -1,9 +0,0 @@
/var/lib/docker/containers/*/*.log {
rotate 7
daily
compress
size=1M
missingok
delaycompress
copytruncate
}
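The `copytruncate` directive above sidesteps signalling the daemon entirely: logrotate copies the live file aside, then truncates it in place, so the writing process keeps its file descriptor on the same inode. A standalone sketch of that mechanism against a throwaway file:

```shell
# Simulate copytruncate: copy the live log aside, then truncate in place
# so any process holding the file descriptor keeps writing to it.
log=$(mktemp)
printf 'line1\nline2\n' > "$log"
cp "$log" "$log.1"     # the rotated copy
: > "$log"             # truncate without replacing the inode
echo "rotated: $(wc -l < "$log.1") lines, live: $(wc -c < "$log") bytes"
```

The trade-off is a small window between the copy and the truncate in which lines can be lost, which is why it is reserved for processes that cannot be told to reopen their logs.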


@@ -1,104 +0,0 @@
#!/bin/bash
# Docker monitoring script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/arrs"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
LOG_FILE="$LOG_DIR/docker-monitor-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_with_timestamp() {
echo "[$TIMESTAMP] $1" >> "$LOG_FILE"
}
# Change to compose directory
cd {{ docker_compose_dir }}
# Check Docker daemon
if ! docker info >/dev/null 2>&1; then
log_with_timestamp "DOCKER_DAEMON FAILED - Docker daemon not responding"
exit 1
fi
log_with_timestamp "DOCKER_DAEMON OK"
# Get container stats
CONTAINER_STATS=$(docker stats --no-stream --format "table {{ '{{.Container}}' }}\t{{ '{{.CPUPerc}}' }}\t{{ '{{.MemUsage}}' }}\t{{ '{{.MemPerc}}' }}\t{{ '{{.NetIO}}' }}\t{{ '{{.BlockIO}}' }}")
# Log container resource usage
while IFS=$'\t' read -r container cpu mem_usage mem_perc net_io block_io; do
if [[ "$container" != "CONTAINER" ]]; then
log_with_timestamp "CONTAINER_STATS $container CPU:$cpu MEM:$mem_usage($mem_perc) NET:$net_io DISK:$block_io"
fi
done <<< "$CONTAINER_STATS"
# Check individual service health
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
for i in "${!SERVICES[@]}"; do
service="${SERVICES[$i]}"
# Get container status
STATUS=$(docker-compose ps -q "$service" | xargs docker inspect --format='{{ "{{.State.Status}}" }}' 2>/dev/null)
if [[ "$STATUS" == "running" ]]; then
# Check container health
HEALTH=$(docker-compose ps -q "$service" | xargs docker inspect --format='{{ "{{.State.Health.Status}}" }}' 2>/dev/null)
if [[ "$HEALTH" == "healthy" ]] || [[ "$HEALTH" == "" ]]; then
log_with_timestamp "SERVICE_$service OK"
else
log_with_timestamp "SERVICE_$service UNHEALTHY - Health status: $HEALTH"
fi
# Check restart count
RESTART_COUNT=$(docker-compose ps -q "$service" | xargs docker inspect --format='{{ "{{.RestartCount}}" }}' 2>/dev/null)
if [[ "$RESTART_COUNT" -gt 5 ]]; then
log_with_timestamp "SERVICE_$service WARNING - High restart count: $RESTART_COUNT"
fi
else
log_with_timestamp "SERVICE_$service FAILED - Status: $STATUS"
# Try to restart the service
log_with_timestamp "SERVICE_$service RESTART_ATTEMPT"
docker-compose restart "$service" 2>/dev/null
fi
done
# Check Docker system resources
DOCKER_SYSTEM_DF=$(docker system df --format "table {{ '{{.Type}}' }}\t{{ '{{.Total}}' }}\t{{ '{{.Active}}' }}\t{{ '{{.Size}}' }}\t{{ '{{.Reclaimable}}' }}")
log_with_timestamp "DOCKER_SYSTEM_DF $DOCKER_SYSTEM_DF"
# Check for stopped containers
STOPPED_CONTAINERS=$(docker ps -a --filter "status=exited" --format "{{ '{{.Names}}' }}" | grep -E "(sonarr|radarr|lidarr|bazarr|prowlarr|watchtower)" || true)
if [[ -n "$STOPPED_CONTAINERS" ]]; then
log_with_timestamp "STOPPED_CONTAINERS $STOPPED_CONTAINERS"
fi
# Check Docker logs for errors (last 5 minutes)
FIVE_MIN_AGO=$(date -d '5 minutes ago' '+%Y-%m-%dT%H:%M:%S')
for service in "${SERVICES[@]}"; do
ERROR_COUNT=$(docker-compose logs --since="$FIVE_MIN_AGO" "$service" 2>/dev/null | grep -i error | wc -l)
if [[ "$ERROR_COUNT" -gt 0 ]]; then
log_with_timestamp "SERVICE_$service ERRORS - $ERROR_COUNT errors in last 5 minutes"
fi
done
# Cleanup old log files (keep 7 days)
find "$LOG_DIR" -name "docker-monitor-*.log" -mtime +7 -delete 2>/dev/null
# Cleanup Docker system if disk usage is high
DISK_USAGE=$(df {{ docker_root }} | tail -1 | awk '{print $5}' | cut -d'%' -f1)
if [[ $DISK_USAGE -gt 85 ]]; then
log_with_timestamp "CLEANUP_ATTEMPT Disk usage ${DISK_USAGE}% - Running Docker cleanup"
docker system prune -f >/dev/null 2>&1
docker image prune -f >/dev/null 2>&1
log_with_timestamp "CLEANUP_COMPLETED Docker cleanup finished"
fi
exit 0
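The cleanup step above relies on `find -mtime +7 -delete` to age out monitor logs; a sketch of that behaviour against a throwaway directory (GNU `touch -d` assumed, file names are examples):

```shell
# Files whose mtime is more than 7 days old are deleted; recent ones survive.
d=$(mktemp -d)
touch "$d/docker-monitor-new.log"
touch -d '10 days ago' "$d/docker-monitor-old.log"
find "$d" -name "docker-monitor-*.log" -mtime +7 -delete
ls "$d"
```

Note `-mtime +7` means "strictly more than 7 whole 24-hour periods", so a file exactly 7 days old is kept.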


@@ -1,9 +0,0 @@
[Service]
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
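A `[Service]` fragment like the one above takes effect as a systemd drop-in: on a real host it belongs at `/etc/systemd/system/docker.service.d/override.conf`, followed by `systemctl daemon-reload && systemctl restart docker`. A sketch that only stages the file in a temp directory (no systemd needed):

```shell
# Stage the drop-in; deploying it for real means writing to
# /etc/systemd/system/docker.service.d/override.conf and then running
# `systemctl daemon-reload && systemctl restart docker`.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/override.conf" <<'EOF'
[Service]
LimitNOFILE=1048576
Delegate=yes
EOF
grep -c '=' "$dropin_dir/override.conf"
```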


@@ -1,18 +0,0 @@
{
"log-driver": "json-file",
"log-opts": {
"max-size": "10m",
"max-file": "3"
},
"storage-driver": "overlay2",
"userland-proxy": false,
"no-new-privileges": true,
"seccomp-profile": "/etc/docker/seccomp.json",
"default-ulimits": {
"nofile": {
"Name": "nofile",
"Hard": 65536,
"Soft": 65536
}
}
}
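Because a malformed `daemon.json` prevents the Docker daemon from starting at all, it is worth syntax-checking it before a restart. A minimal check, assuming `python3` is available (the temp path stands in for `/etc/docker/daemon.json`):

```shell
# Write a candidate config to a temp path and syntax-check it before
# copying it into place and restarting the daemon.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" }
}
EOF
python3 -m json.tool "$cfg" > /dev/null && echo "daemon.json OK"
```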


@@ -1,25 +0,0 @@
# Docker Environment Variables for Arrs Media Stack
# Generated by Ansible on {{ ansible_date_time.iso8601 }}
# User and Group IDs
PUID={{ docker_uid }}
PGID={{ docker_gid }}
# Timezone
TZ={{ timezone }}
# Paths
MEDIA_ROOT={{ media_root }}
DOCKER_ROOT={{ docker_root }}
COMPOSE_DIR={{ docker_compose_dir }}
# Network
DOCKER_NETWORK=arrs-network
# Restart Policy
RESTART_POLICY=unless-stopped
# Logging
LOG_DRIVER=json-file
LOG_MAX_SIZE=10m
LOG_MAX_FILE=3
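docker compose reads a `.env` file next to the compose file automatically; for ad-hoc shell checks the same assignments can be exported with `set -a`, provided the values need no quoting (the file name below is an example):

```shell
# Export every assignment in a simple env file into the current shell.
envfile=$(mktemp)
cat > "$envfile" <<'EOF'
PUID=1000
PGID=1000
TZ=Europe/London
EOF
set -a           # auto-export everything assigned while sourcing
. "$envfile"
set +a
echo "$PUID:$PGID:$TZ"
```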


@@ -1,314 +0,0 @@
#!/bin/bash
# Health check dashboard script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/system"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
DASHBOARD_LOG="$LOG_DIR/health-dashboard-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_health() {
echo "[$TIMESTAMP] $1" >> "$DASHBOARD_LOG"
}
# Colors for terminal output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to display colored output
display_status() {
local service="$1"
local status="$2"
local details="$3"
case "$status" in
"OK"|"RUNNING")
echo -e "${GREEN}✓${NC} $service: ${GREEN}$status${NC} $details"
;;
"WARNING"|"DEGRADED")
echo -e "${YELLOW}⚠${NC} $service: ${YELLOW}$status${NC} $details"
;;
"CRITICAL"|"FAILED"|"DOWN")
echo -e "${RED}✗${NC} $service: ${RED}$status${NC} $details"
;;
*)
echo -e "${BLUE}ℹ${NC} $service: ${BLUE}$status${NC} $details"
;;
esac
}
log_health "=== HEALTH DASHBOARD STARTED ==="
echo "=================================================================="
echo " ARRS MEDIA STACK HEALTH DASHBOARD"
echo "=================================================================="
echo "Generated: $TIMESTAMP"
echo "=================================================================="
# System Health
echo -e "\n${BLUE}SYSTEM HEALTH${NC}"
echo "------------------------------------------------------------------"
# CPU Usage
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
if (( $(echo "$CPU_USAGE > 80" | bc -l) )); then
display_status "CPU Usage" "CRITICAL" "(${CPU_USAGE}%)"
log_health "SYSTEM_HEALTH CPU_USAGE CRITICAL ${CPU_USAGE}%"
elif (( $(echo "$CPU_USAGE > 60" | bc -l) )); then
display_status "CPU Usage" "WARNING" "(${CPU_USAGE}%)"
log_health "SYSTEM_HEALTH CPU_USAGE WARNING ${CPU_USAGE}%"
else
display_status "CPU Usage" "OK" "(${CPU_USAGE}%)"
log_health "SYSTEM_HEALTH CPU_USAGE OK ${CPU_USAGE}%"
fi
# Memory Usage
MEMORY_PERCENT=$(free | grep Mem | awk '{printf "%.1f", $3/$2 * 100.0}')
if (( $(echo "$MEMORY_PERCENT > 90" | bc -l) )); then
display_status "Memory Usage" "CRITICAL" "(${MEMORY_PERCENT}%)"
log_health "SYSTEM_HEALTH MEMORY_USAGE CRITICAL ${MEMORY_PERCENT}%"
elif (( $(echo "$MEMORY_PERCENT > 75" | bc -l) )); then
display_status "Memory Usage" "WARNING" "(${MEMORY_PERCENT}%)"
log_health "SYSTEM_HEALTH MEMORY_USAGE WARNING ${MEMORY_PERCENT}%"
else
display_status "Memory Usage" "OK" "(${MEMORY_PERCENT}%)"
log_health "SYSTEM_HEALTH MEMORY_USAGE OK ${MEMORY_PERCENT}%"
fi
# Disk Usage
DISK_USAGE=$(df -h {{ docker_root }} | tail -1 | awk '{print $5}' | cut -d'%' -f1)
if [[ $DISK_USAGE -gt 90 ]]; then
display_status "Disk Usage" "CRITICAL" "(${DISK_USAGE}%)"
log_health "SYSTEM_HEALTH DISK_USAGE CRITICAL ${DISK_USAGE}%"
elif [[ $DISK_USAGE -gt 80 ]]; then
display_status "Disk Usage" "WARNING" "(${DISK_USAGE}%)"
log_health "SYSTEM_HEALTH DISK_USAGE WARNING ${DISK_USAGE}%"
else
display_status "Disk Usage" "OK" "(${DISK_USAGE}%)"
log_health "SYSTEM_HEALTH DISK_USAGE OK ${DISK_USAGE}%"
fi
# Load Average
LOAD_1MIN=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1 | xargs)
if (( $(echo "$LOAD_1MIN > 2.0" | bc -l) )); then
display_status "Load Average" "WARNING" "(${LOAD_1MIN})"
log_health "SYSTEM_HEALTH LOAD_AVERAGE WARNING ${LOAD_1MIN}"
else
display_status "Load Average" "OK" "(${LOAD_1MIN})"
log_health "SYSTEM_HEALTH LOAD_AVERAGE OK ${LOAD_1MIN}"
fi
# Docker Services
echo -e "\n${BLUE}DOCKER SERVICES${NC}"
echo "------------------------------------------------------------------"
if command -v docker >/dev/null 2>&1; then
cd {{ docker_compose_dir }}
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "whisparr" "deluge" "sabnzbd" "plex" "tautulli" "jellyseerr" "tubearchivist" "gluetun" "watchtower" "logrotate")
for service in "${SERVICES[@]}"; do
CONTAINER_ID=$(docker-compose ps -q "$service" 2>/dev/null)
if [[ -n "$CONTAINER_ID" ]]; then
CONTAINER_STATUS=$(docker inspect "$CONTAINER_ID" --format='{{ "{{.State.Status}}" }}' 2>/dev/null)
CONTAINER_HEALTH=$(docker inspect "$CONTAINER_ID" --format='{{ "{{.State.Health.Status}}" }}' 2>/dev/null)
if [[ "$CONTAINER_STATUS" == "running" ]]; then
if [[ "$CONTAINER_HEALTH" == "healthy" ]] || [[ -z "$CONTAINER_HEALTH" ]] || [[ "$CONTAINER_HEALTH" == "<no value>" ]]; then
display_status "$service" "RUNNING" ""
log_health "DOCKER_SERVICE $service RUNNING"
else
display_status "$service" "DEGRADED" "(health: $CONTAINER_HEALTH)"
log_health "DOCKER_SERVICE $service DEGRADED $CONTAINER_HEALTH"
fi
else
display_status "$service" "DOWN" "(status: $CONTAINER_STATUS)"
log_health "DOCKER_SERVICE $service DOWN $CONTAINER_STATUS"
fi
else
display_status "$service" "NOT_FOUND" ""
log_health "DOCKER_SERVICE $service NOT_FOUND"
fi
done
else
display_status "Docker" "NOT_INSTALLED" ""
log_health "DOCKER_SERVICE docker NOT_INSTALLED"
fi
# Network Connectivity
echo -e "\n${BLUE}NETWORK CONNECTIVITY${NC}"
echo "------------------------------------------------------------------"
# Internet connectivity
if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
display_status "Internet" "OK" ""
log_health "NETWORK_CONNECTIVITY internet OK"
else
display_status "Internet" "FAILED" ""
log_health "NETWORK_CONNECTIVITY internet FAILED"
fi
# DNS resolution
if nslookup google.com >/dev/null 2>&1; then
display_status "DNS Resolution" "OK" ""
log_health "NETWORK_CONNECTIVITY dns OK"
else
display_status "DNS Resolution" "FAILED" ""
log_health "NETWORK_CONNECTIVITY dns FAILED"
fi
# Service ports - Check on Tailscale network interface
TAILSCALE_IP="{{ tailscale_bind_ip }}"
SERVICES_PORTS=(
"sonarr:{{ ports.sonarr }}"
"radarr:{{ ports.radarr }}"
"lidarr:{{ ports.lidarr }}"
"bazarr:{{ ports.bazarr }}"
"prowlarr:{{ ports.prowlarr }}"
"deluge:{{ ports.deluge }}"
"sabnzbd:{{ ports.sabnzbd }}"
"plex:{{ ports.plex }}"
"tautulli:{{ ports.tautulli }}"
"jellyseerr:{{ ports.jellyseerr }}"
"tubearchivist:{{ ports.tubearchivist }}"
"whisparr:{{ ports.whisparr }}"
)
for service_port in "${SERVICES_PORTS[@]}"; do
SERVICE=$(echo "$service_port" | cut -d: -f1)
PORT=$(echo "$service_port" | cut -d: -f2)
# Check on Tailscale IP first, fall back to localhost for services that might bind to both
if nc -z "$TAILSCALE_IP" "$PORT" 2>/dev/null; then
display_status "$SERVICE Port" "OK" "(port $PORT on $TAILSCALE_IP)"
log_health "NETWORK_CONNECTIVITY ${SERVICE}_port OK $PORT $TAILSCALE_IP"
elif nc -z localhost "$PORT" 2>/dev/null; then
display_status "$SERVICE Port" "OK" "(port $PORT on localhost)"
log_health "NETWORK_CONNECTIVITY ${SERVICE}_port OK $PORT localhost"
else
display_status "$SERVICE Port" "FAILED" "(port $PORT)"
log_health "NETWORK_CONNECTIVITY ${SERVICE}_port FAILED $PORT"
fi
done
# Security Status
echo -e "\n${BLUE}SECURITY STATUS${NC}"
echo "------------------------------------------------------------------"
# UFW Status
if command -v ufw >/dev/null 2>&1; then
UFW_STATUS=$(ufw status | head -1 | awk '{print $2}')
if [[ "$UFW_STATUS" == "active" ]]; then
display_status "UFW Firewall" "OK" "(active)"
log_health "SECURITY_STATUS ufw OK active"
else
display_status "UFW Firewall" "WARNING" "(inactive)"
log_health "SECURITY_STATUS ufw WARNING inactive"
fi
fi
# Fail2ban Status
if command -v fail2ban-client >/dev/null 2>&1; then
if systemctl is-active fail2ban >/dev/null 2>&1; then
display_status "Fail2ban" "OK" "(active)"
log_health "SECURITY_STATUS fail2ban OK active"
else
display_status "Fail2ban" "WARNING" "(inactive)"
log_health "SECURITY_STATUS fail2ban WARNING inactive"
fi
fi
# Recent failed login attempts
# auth.log pads single-digit days ("%b %e"), so "%b %d" would miss days 1-9
FAILED_LOGINS=$(grep "Failed password" /var/log/auth.log 2>/dev/null | grep "$(date '+%b %e')" | wc -l)
if [[ $FAILED_LOGINS -gt 10 ]]; then
display_status "Failed Logins" "WARNING" "($FAILED_LOGINS today)"
log_health "SECURITY_STATUS failed_logins WARNING $FAILED_LOGINS"
elif [[ $FAILED_LOGINS -gt 0 ]]; then
display_status "Failed Logins" "OK" "($FAILED_LOGINS today)"
log_health "SECURITY_STATUS failed_logins OK $FAILED_LOGINS"
else
display_status "Failed Logins" "OK" "(none today)"
log_health "SECURITY_STATUS failed_logins OK 0"
fi
# Storage Status
echo -e "\n${BLUE}STORAGE STATUS${NC}"
echo "------------------------------------------------------------------"
# Media directories
MEDIA_DIRS=(
"{{ media_root }}/movies"
"{{ media_root }}/tv"
"{{ media_root }}/music"
"{{ media_root }}/downloads"
)
for media_dir in "${MEDIA_DIRS[@]}"; do
DIR_NAME=$(basename "$media_dir")
if [[ -d "$media_dir" ]]; then
SIZE=$(du -sh "$media_dir" 2>/dev/null | cut -f1)
FILE_COUNT=$(find "$media_dir" -type f 2>/dev/null | wc -l)
display_status "$DIR_NAME Directory" "OK" "($SIZE, $FILE_COUNT files)"
log_health "STORAGE_STATUS ${DIR_NAME}_directory OK $SIZE $FILE_COUNT"
else
display_status "$DIR_NAME Directory" "NOT_FOUND" ""
log_health "STORAGE_STATUS ${DIR_NAME}_directory NOT_FOUND"
fi
done
# Recent Activity Summary
echo -e "\n${BLUE}RECENT ACTIVITY${NC}"
echo "------------------------------------------------------------------"
# Check for recent downloads (last 24 hours)
RECENT_DOWNLOADS=0
for media_dir in "${MEDIA_DIRS[@]}"; do
if [[ -d "$media_dir" ]]; then
COUNT=$(find "$media_dir" -type f -mtime -1 2>/dev/null | wc -l)
RECENT_DOWNLOADS=$((RECENT_DOWNLOADS + COUNT))
fi
done
display_status "Recent Downloads" "INFO" "($RECENT_DOWNLOADS files in last 24h)"
log_health "ACTIVITY_SUMMARY recent_downloads INFO $RECENT_DOWNLOADS"
# System uptime
UPTIME=$(uptime -p)
display_status "System Uptime" "INFO" "($UPTIME)"
log_health "ACTIVITY_SUMMARY system_uptime INFO $UPTIME"
# Overall Health Summary
echo -e "\n${BLUE}OVERALL HEALTH SUMMARY${NC}"
echo "=================================================================="
# Count issues
CRITICAL_ISSUES=$(grep "CRITICAL" "$DASHBOARD_LOG" | wc -l)
WARNING_ISSUES=$(grep "WARNING" "$DASHBOARD_LOG" | wc -l)
if [[ $CRITICAL_ISSUES -gt 0 ]]; then
echo -e "${RED}SYSTEM STATUS: CRITICAL${NC} ($CRITICAL_ISSUES critical issues)"
log_health "OVERALL_HEALTH CRITICAL $CRITICAL_ISSUES"
elif [[ $WARNING_ISSUES -gt 0 ]]; then
echo -e "${YELLOW}SYSTEM STATUS: WARNING${NC} ($WARNING_ISSUES warnings)"
log_health "OVERALL_HEALTH WARNING $WARNING_ISSUES"
else
echo -e "${GREEN}SYSTEM STATUS: HEALTHY${NC}"
log_health "OVERALL_HEALTH HEALTHY 0"
fi
echo "=================================================================="
echo "Dashboard log: $DASHBOARD_LOG"
echo "=================================================================="
log_health "=== HEALTH DASHBOARD COMPLETED ==="
# Cleanup old dashboard logs (keep 7 days)
find "$LOG_DIR" -name "health-dashboard-*.log" -mtime +7 -delete 2>/dev/null
exit 0
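The overall summary above counts CRITICAL/WARNING markers with `grep | wc -l`; `grep -c` does the same in one step, and it still prints `0` when nothing matches (it only *exits* non-zero), so no fallback echo is needed. A sketch of the counting and status decision against a sample log:

```shell
# Count severity markers in a sample dashboard log and derive a status.
log=$(mktemp)
printf 'CPU_USAGE OK\nMEMORY_USAGE WARNING\nDISK_USAGE OK\n' > "$log"
crit=$(grep -c "CRITICAL" "$log" || true)   # "0" when absent
warn=$(grep -c "WARNING" "$log" || true)
if [ "$crit" -gt 0 ]; then
  status=CRITICAL
elif [ "$warn" -gt 0 ]; then
  status=WARNING
else
  status=HEALTHY
fi
echo "status=$status crit=$crit warn=$warn"
```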


@@ -1,26 +0,0 @@
[DEFAULT]
# Ban hosts for one hour:
bantime = 3600
# Override /etc/fail2ban/jail.d/00-firewalld.conf:
banaction = iptables-multiport
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
findtime = 600
{% if plex_public_access | default(false) %}
[plex]
enabled = true
port = 32400
filter = plex
logpath = /home/docker/logs/plex/*.log
maxretry = 5
bantime = 7200
findtime = 600
{% endif %}


@@ -1,94 +0,0 @@
#!/bin/bash
# Log aggregation script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs"
SYSTEM_LOG_DIR="$LOG_DIR/system"
ARRS_LOG_DIR="$LOG_DIR/arrs"
AGGREGATED_LOG="$LOG_DIR/aggregated-$(date '+%Y%m%d').log"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
# Ensure log directories exist
mkdir -p "$SYSTEM_LOG_DIR" "$ARRS_LOG_DIR"
# Function to aggregate logs with source prefix
aggregate_logs() {
local source="$1"
local log_file="$2"
if [[ -f "$log_file" ]]; then
while IFS= read -r line; do
echo "[$TIMESTAMP] [$source] $line" >> "$AGGREGATED_LOG"
done < "$log_file"
fi
}
# Start aggregation
echo "[$TIMESTAMP] [AGGREGATOR] Starting log aggregation" >> "$AGGREGATED_LOG"
# Aggregate system monitoring logs
for log_file in "$SYSTEM_LOG_DIR"/system-monitor-$(date '+%Y%m%d').log; do
if [[ -f "$log_file" ]]; then
aggregate_logs "SYSTEM" "$log_file"
fi
done
# Aggregate Docker monitoring logs
for log_file in "$ARRS_LOG_DIR"/docker-monitor-$(date '+%Y%m%d').log; do
if [[ -f "$log_file" ]]; then
aggregate_logs "DOCKER" "$log_file"
fi
done
# Aggregate Docker Compose logs (last 100 lines)
cd {{ docker_compose_dir }}
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
for service in "${SERVICES[@]}"; do
echo "[$TIMESTAMP] [AGGREGATOR] Collecting logs for $service" >> "$AGGREGATED_LOG"
docker-compose logs --tail=100 "$service" 2>/dev/null | while IFS= read -r line; do
echo "[$TIMESTAMP] [${service^^}] $line" >> "$AGGREGATED_LOG"
done
done
# Aggregate system logs (errors and warnings)
echo "[$TIMESTAMP] [AGGREGATOR] Collecting system errors" >> "$AGGREGATED_LOG"
journalctl --since="1 hour ago" --priority=err --no-pager -q | while IFS= read -r line; do
echo "[$TIMESTAMP] [SYSLOG_ERROR] $line" >> "$AGGREGATED_LOG"
done
journalctl --since="1 hour ago" --priority=warning --no-pager -q | while IFS= read -r line; do
echo "[$TIMESTAMP] [SYSLOG_WARNING] $line" >> "$AGGREGATED_LOG"
done
# Aggregate Docker daemon logs
echo "[$TIMESTAMP] [AGGREGATOR] Collecting Docker daemon logs" >> "$AGGREGATED_LOG"
journalctl -u docker --since="1 hour ago" --no-pager -q | while IFS= read -r line; do
echo "[$TIMESTAMP] [DOCKER_DAEMON] $line" >> "$AGGREGATED_LOG"
done
# Generate summary
echo "[$TIMESTAMP] [AGGREGATOR] Generating summary" >> "$AGGREGATED_LOG"
# Count errors and warnings
# grep -c prints "0" itself when nothing matches (it only exits non-zero),
# so "|| echo 0" would yield "0\n0"; only default when the file is missing
ERROR_COUNT=$(grep -c "ERROR\|FAILED" "$AGGREGATED_LOG" 2>/dev/null || true)
ERROR_COUNT=${ERROR_COUNT:-0}
WARNING_COUNT=$(grep -c "WARNING\|WARN" "$AGGREGATED_LOG" 2>/dev/null || true)
WARNING_COUNT=${WARNING_COUNT:-0}
echo "[$TIMESTAMP] [SUMMARY] Errors: $ERROR_COUNT, Warnings: $WARNING_COUNT" >> "$AGGREGATED_LOG"
# Check for critical issues
CRITICAL_ISSUES=$(grep -E "(FAILED|ERROR|CRITICAL|FATAL)" "$AGGREGATED_LOG" | tail -5)
if [[ -n "$CRITICAL_ISSUES" ]]; then
echo "[$TIMESTAMP] [SUMMARY] Recent critical issues:" >> "$AGGREGATED_LOG"
echo "$CRITICAL_ISSUES" >> "$AGGREGATED_LOG"
fi
echo "[$TIMESTAMP] [AGGREGATOR] Log aggregation completed" >> "$AGGREGATED_LOG"
# Cleanup old aggregated logs (keep 7 days)
find "$LOG_DIR" -name "aggregated-*.log" -mtime +7 -delete 2>/dev/null
# Compress logs older than 1 day
find "$LOG_DIR" -name "*.log" -mtime +1 ! -name "aggregated-$(date '+%Y%m%d').log" -exec gzip {} \; 2>/dev/null
exit 0
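The `aggregate_logs` helper above just prefixes each line with a timestamp and a source tag before appending it to the combined log; a standalone equivalent (the fixed timestamp and SYSTEM tag are example values):

```shell
# Prefix every line of an input log with [timestamp] [SOURCE].
src=$(mktemp); out=$(mktemp)
printf 'service started\nservice healthy\n' > "$src"
ts='2026-01-10 09:00:00'   # fixed stamp for the example
while IFS= read -r line; do
  printf '[%s] [SYSTEM] %s\n' "$ts" "$line"
done < "$src" > "$out"
cat "$out"
```

`IFS= read -r` preserves leading whitespace and backslashes, which matters when re-emitting log lines verbatim.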


@@ -1,218 +0,0 @@
#!/bin/bash
# Arrs Media Stack Management Script
# Generated by Ansible - Customized for your VPS
set -e
COMPOSE_DIR="{{ docker_compose_dir }}"
COMPOSE_FILE="$COMPOSE_DIR/docker-compose.yml"
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
print_header() {
echo -e "${BLUE}=== $1 ===${NC}"
}
# Check if running as docker user
check_user() {
if [[ $EUID -ne $(id -u {{ docker_user }}) ]]; then
print_error "This script must be run as the {{ docker_user }} user"
exit 1
fi
}
# Change to compose directory
cd_compose_dir() {
if [[ ! -d "$COMPOSE_DIR" ]]; then
print_error "Compose directory not found: $COMPOSE_DIR"
exit 1
fi
cd "$COMPOSE_DIR"
}
# Function to show usage
usage() {
echo "Usage: $0 {start|stop|restart|status|logs|update|backup|restore}"
echo ""
echo "Commands:"
echo " start - Start all Arrs services"
echo " stop - Stop all Arrs services"
echo " restart - Restart all Arrs services"
echo " status - Show status of all services"
echo " logs - Show logs for all services"
echo " update - Update all Docker images and restart services"
echo " backup - Create backup of configurations"
echo " restore - Restore configurations from backup"
echo ""
}
# Start services
start_services() {
print_header "Starting Arrs Media Stack"
cd_compose_dir
docker-compose up -d
print_status "All services started"
show_status
}
# Stop services
stop_services() {
print_header "Stopping Arrs Media Stack"
cd_compose_dir
docker-compose down
print_status "All services stopped"
}
# Restart services
restart_services() {
print_header "Restarting Arrs Media Stack"
cd_compose_dir
docker-compose restart
print_status "All services restarted"
show_status
}
# Show status
show_status() {
print_header "Arrs Media Stack Status"
cd_compose_dir
docker-compose ps
echo ""
print_status "Service URLs (via Tailscale):"
# Prefer the Tailscale address; hostname -I may list a public interface first
HOST_IP=$(tailscale ip -4 2>/dev/null || hostname -I | awk '{print $1}')
echo " Sonarr: http://$HOST_IP:{{ ports.sonarr }}"
echo " Radarr: http://$HOST_IP:{{ ports.radarr }}"
echo " Lidarr: http://$HOST_IP:{{ ports.lidarr }}"
echo " Bazarr: http://$HOST_IP:{{ ports.bazarr }}"
echo " Prowlarr: http://$HOST_IP:{{ ports.prowlarr }}"
}
# Show logs
show_logs() {
print_header "Arrs Media Stack Logs"
cd_compose_dir
if [[ -n "$2" ]]; then
docker-compose logs -f "$2"
else
docker-compose logs -f --tail=50
fi
}
# Update services
update_services() {
print_header "Updating Arrs Media Stack"
cd_compose_dir
print_status "Pulling latest images..."
docker-compose pull
print_status "Recreating containers..."
docker-compose up -d --force-recreate
print_status "Cleaning up old images..."
docker image prune -f
print_status "Update complete"
show_status
}
# Backup configurations
backup_configs() {
print_header "Backing up Arrs Configurations"
BACKUP_DIR="{{ backup_dir }}"
BACKUP_FILE="$BACKUP_DIR/arrs-backup-$(date +%Y%m%d-%H%M%S).tar.gz"
mkdir -p "$BACKUP_DIR"
print_status "Creating backup: $BACKUP_FILE"
tar -czf "$BACKUP_FILE" \
-C "{{ docker_root }}" \
sonarr radarr lidarr bazarr prowlarr compose
print_status "Backup created successfully"
ls -lh "$BACKUP_FILE"
}
# Restore configurations
restore_configs() {
print_header "Restoring Arrs Configurations"
if [[ -z "$2" ]]; then
print_error "Please specify backup file to restore"
echo "Usage: $0 restore <backup-file>"
exit 1
fi
BACKUP_FILE="$2"
if [[ ! -f "$BACKUP_FILE" ]]; then
print_error "Backup file not found: $BACKUP_FILE"
exit 1
fi
print_warning "This will overwrite current configurations!"
read -p "Are you sure? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
print_status "Restore cancelled"
exit 0
fi
print_status "Stopping services..."
stop_services
print_status "Restoring from: $BACKUP_FILE"
tar -xzf "$BACKUP_FILE" -C "{{ docker_root }}"
print_status "Starting services..."
start_services
print_status "Restore complete"
}
# Main script logic
check_user
case "$1" in
start)
start_services
;;
stop)
stop_services
;;
restart)
restart_services
;;
status)
show_status
;;
logs)
show_logs "$@"
;;
update)
update_services
;;
backup)
backup_configs
;;
restore)
restore_configs "$@"
;;
*)
usage
exit 1
;;
esac
exit 0
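The `backup_configs`/`restore_configs` pair above is a plain tar round-trip; sketched here against throwaway directories (the `sonarr` directory and file names are examples):

```shell
# Archive config dirs with -C so paths inside the tarball are relative,
# then restore them under the same root.
root=$(mktemp -d); backups=$(mktemp -d)
mkdir -p "$root/sonarr"
echo '<Config/>' > "$root/sonarr/config.xml"
tar -czf "$backups/arrs-backup.tar.gz" -C "$root" sonarr
rm -rf "$root/sonarr"                      # simulate data loss
tar -xzf "$backups/arrs-backup.tar.gz" -C "$root"
cat "$root/sonarr/config.xml"
```

Using `-C` on both sides keeps the archive free of absolute paths, so a backup can be restored to a different root if the stack is moved.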


@@ -1,237 +0,0 @@
#!/bin/bash
# Network monitoring script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/system"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
NET_LOG="$LOG_DIR/network-monitor-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_net() {
echo "[$TIMESTAMP] $1" >> "$NET_LOG"
}
log_net "=== NETWORK MONITORING ==="
# Network interface information
log_net "=== NETWORK INTERFACES ==="
ip addr show | grep -E "^[0-9]+:|inet " | while IFS= read -r line; do
log_net "INTERFACE $line"
done
# Default route
DEFAULT_ROUTE=$(ip route | grep default)
log_net "DEFAULT_ROUTE $DEFAULT_ROUTE"
# Network statistics
log_net "=== NETWORK STATISTICS ==="
MAIN_INTERFACE=$(ip route | grep default | awk '{print $5}' | head -1)
if [[ -n "$MAIN_INTERFACE" ]]; then
# Interface statistics
RX_BYTES=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/rx_bytes 2>/dev/null || echo "0")
TX_BYTES=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/tx_bytes 2>/dev/null || echo "0")
RX_PACKETS=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/rx_packets 2>/dev/null || echo "0")
TX_PACKETS=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/tx_packets 2>/dev/null || echo "0")
RX_ERRORS=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/rx_errors 2>/dev/null || echo "0")
TX_ERRORS=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/tx_errors 2>/dev/null || echo "0")
RX_DROPPED=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/rx_dropped 2>/dev/null || echo "0")
TX_DROPPED=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/tx_dropped 2>/dev/null || echo "0")
# Convert bytes to human readable
RX_MB=$((RX_BYTES / 1024 / 1024))
TX_MB=$((TX_BYTES / 1024 / 1024))
log_net "INTERFACE_STATS $MAIN_INTERFACE - RX: ${RX_MB}MB (${RX_PACKETS} packets, ${RX_ERRORS} errors, ${RX_DROPPED} dropped)"
log_net "INTERFACE_STATS $MAIN_INTERFACE - TX: ${TX_MB}MB (${TX_PACKETS} packets, ${TX_ERRORS} errors, ${TX_DROPPED} dropped)"
# Check for high error rates
if [[ $RX_ERRORS -gt 100 ]]; then
log_net "ALERT_NETWORK High RX errors on $MAIN_INTERFACE: $RX_ERRORS"
fi
if [[ $TX_ERRORS -gt 100 ]]; then
log_net "ALERT_NETWORK High TX errors on $MAIN_INTERFACE: $TX_ERRORS"
fi
fi
# Network connectivity tests
log_net "=== CONNECTIVITY TESTS ==="
# Test DNS resolution
if nslookup google.com >/dev/null 2>&1; then
log_net "DNS_TEST OK - DNS resolution working"
else
log_net "DNS_TEST FAILED - DNS resolution not working"
fi
# Test internet connectivity
if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
log_net "INTERNET_TEST OK - Internet connectivity working"
else
log_net "INTERNET_TEST FAILED - No internet connectivity"
fi
# Test Tailscale connectivity (if configured)
if command -v tailscale >/dev/null 2>&1; then
TAILSCALE_STATUS=$(tailscale status --json 2>/dev/null | jq -r '.BackendState' 2>/dev/null || echo "unknown")
log_net "TAILSCALE_STATUS $TAILSCALE_STATUS"
if [[ "$TAILSCALE_STATUS" == "Running" ]]; then
TAILSCALE_IP=$(tailscale ip -4 2>/dev/null || echo "unknown")
log_net "TAILSCALE_IP $TAILSCALE_IP"
fi
fi
# Port connectivity tests for Arrs services
log_net "=== SERVICE PORT TESTS ==="
SERVICES_PORTS=(
"sonarr:{{ ports.sonarr }}"
"radarr:{{ ports.radarr }}"
"lidarr:{{ ports.lidarr }}"
"bazarr:{{ ports.bazarr }}"
"prowlarr:{{ ports.prowlarr }}"
)
for service_port in "${SERVICES_PORTS[@]}"; do
SERVICE=$(echo "$service_port" | cut -d: -f1)
PORT=$(echo "$service_port" | cut -d: -f2)
if nc -z localhost "$PORT" 2>/dev/null; then
log_net "PORT_TEST $SERVICE (port $PORT) - OK"
else
log_net "PORT_TEST $SERVICE (port $PORT) - FAILED"
fi
done
# Active network connections
log_net "=== ACTIVE CONNECTIONS ==="
ACTIVE_CONNECTIONS=$(netstat -tuln 2>/dev/null | grep LISTEN | wc -l)
log_net "LISTENING_PORTS Total listening ports: $ACTIVE_CONNECTIONS"
# Show listening ports for our services
netstat -tuln 2>/dev/null | grep -E ":{{ ports.sonarr }}|:{{ ports.radarr }}|:{{ ports.lidarr }}|:{{ ports.bazarr }}|:{{ ports.prowlarr }}" | while IFS= read -r line; do
log_net "SERVICE_PORT $line"
done
# Network load monitoring
log_net "=== NETWORK LOAD ==="
if command -v ss >/dev/null 2>&1; then
ESTABLISHED_CONNECTIONS=$(ss -t state established | wc -l)
TIME_WAIT_CONNECTIONS=$(ss -t state time-wait | wc -l)
log_net "CONNECTION_STATS Established: $ESTABLISHED_CONNECTIONS, Time-wait: $TIME_WAIT_CONNECTIONS"
# Check for high connection counts
if [[ $ESTABLISHED_CONNECTIONS -gt 1000 ]]; then
log_net "ALERT_NETWORK High number of established connections: $ESTABLISHED_CONNECTIONS"
fi
if [[ $TIME_WAIT_CONNECTIONS -gt 5000 ]]; then
log_net "ALERT_NETWORK High number of time-wait connections: $TIME_WAIT_CONNECTIONS"
fi
fi
# Docker network information
if command -v docker >/dev/null 2>&1; then
    log_net "=== DOCKER NETWORK ==="
    # Docker networks
    DOCKER_NETWORKS=$(docker network ls --format "{{ '{{.Name}}' }}\t{{ '{{.Driver}}' }}\t{{ '{{.Scope}}' }}" 2>/dev/null)
    if [[ -n "$DOCKER_NETWORKS" ]]; then
        echo "$DOCKER_NETWORKS" | while IFS=$'\t' read -r name driver scope; do
            log_net "DOCKER_NETWORK $name - Driver: $driver, Scope: $scope"
        done
    fi
    # Container network stats
    cd {{ docker_compose_dir }} || exit 1
    SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
    for service in "${SERVICES[@]}"; do
        CONTAINER_ID=$(docker-compose ps -q "$service" 2>/dev/null)
        if [[ -n "$CONTAINER_ID" ]]; then
            # Get container network stats
            NET_STATS=$(docker stats --no-stream --format "{{ '{{.NetIO}}' }}" "$CONTAINER_ID" 2>/dev/null)
            if [[ -n "$NET_STATS" ]]; then
                log_net "CONTAINER_NETWORK $service - Network I/O: $NET_STATS"
            fi
        fi
    done
fi
# Firewall status
log_net "=== FIREWALL STATUS ==="
if command -v ufw >/dev/null 2>&1; then
    UFW_STATUS=$(ufw status 2>/dev/null | head -1)
    log_net "UFW_STATUS $UFW_STATUS"
    # Show rules for our ports
    ufw status numbered 2>/dev/null | grep -E "{{ ports.sonarr }}|{{ ports.radarr }}|{{ ports.lidarr }}|{{ ports.bazarr }}|{{ ports.prowlarr }}" | while IFS= read -r line; do
        log_net "UFW_RULE $line"
    done
fi
# Network security checks
log_net "=== SECURITY CHECKS ==="
# Check for listeners outside the expected set; the port patterns are
# anchored with a trailing space class so ":22" cannot also match ":2222",
# and loopback is matched as "127.0.0.1:" (the address comes before the colon)
UNEXPECTED_PORTS=$(netstat -tuln 2>/dev/null | grep LISTEN | grep -v -E ":22[[:space:]]|:{{ ports.sonarr }}[[:space:]]|:{{ ports.radarr }}[[:space:]]|:{{ ports.lidarr }}[[:space:]]|:{{ ports.bazarr }}[[:space:]]|:{{ ports.prowlarr }}[[:space:]]|127\.0\.0\.1:" | wc -l)
if [[ $UNEXPECTED_PORTS -gt 0 ]]; then
    log_net "SECURITY_ALERT $UNEXPECTED_PORTS unexpected open ports detected"
    netstat -tuln 2>/dev/null | grep LISTEN | grep -v -E ":22[[:space:]]|:{{ ports.sonarr }}[[:space:]]|:{{ ports.radarr }}[[:space:]]|:{{ ports.lidarr }}[[:space:]]|:{{ ports.bazarr }}[[:space:]]|:{{ ports.prowlarr }}[[:space:]]|127\.0\.0\.1:" | while IFS= read -r line; do
        log_net "UNEXPECTED_PORT $line"
    done
fi
# Check for failed connection attempts (from auth.log); %e gives the
# space-padded day-of-month that syslog uses, where %d (zero-padded)
# would miss the first nine days of each month
FAILED_CONNECTIONS=$(grep "Failed" /var/log/auth.log 2>/dev/null | grep "$(date '+%b %e')" | wc -l)
if [[ $FAILED_CONNECTIONS -gt 10 ]]; then
    log_net "SECURITY_ALERT $FAILED_CONNECTIONS failed connection attempts today"
fi
# Bandwidth usage estimation
log_net "=== BANDWIDTH ESTIMATION ==="
if [[ -n "$MAIN_INTERFACE" ]]; then
    # Read current stats
    CURRENT_RX=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/rx_bytes 2>/dev/null || echo "0")
    CURRENT_TX=$(cat /sys/class/net/$MAIN_INTERFACE/statistics/tx_bytes 2>/dev/null || echo "0")
    # Read previous stats if available
    STATS_FILE="/tmp/network_stats_$MAIN_INTERFACE"
    if [[ -f "$STATS_FILE" ]]; then
        PREV_TIMESTAMP=$(head -1 "$STATS_FILE")
        PREV_RX=$(sed -n '2p' "$STATS_FILE")
        PREV_TX=$(sed -n '3p' "$STATS_FILE")
        # Calculate time difference (in seconds)
        TIME_DIFF=$(($(date +%s) - ${PREV_TIMESTAMP:-0}))
        # Skip the calculation if the counters went backwards (e.g. after a reboot)
        if [[ $TIME_DIFF -gt 0 && $CURRENT_RX -ge ${PREV_RX:-0} && $CURRENT_TX -ge ${PREV_TX:-0} ]]; then
            # Calculate bandwidth (bytes per second)
            RX_RATE=$(((CURRENT_RX - PREV_RX) / TIME_DIFF))
            TX_RATE=$(((CURRENT_TX - PREV_TX) / TIME_DIFF))
            # Convert to human readable (Mbps)
            RX_MBPS=$(echo "scale=2; $RX_RATE * 8 / 1024 / 1024" | bc -l 2>/dev/null || echo "0")
            TX_MBPS=$(echo "scale=2; $TX_RATE * 8 / 1024 / 1024" | bc -l 2>/dev/null || echo "0")
            log_net "BANDWIDTH_USAGE RX: ${RX_MBPS} Mbps, TX: ${TX_MBPS} Mbps (over ${TIME_DIFF}s)"
        fi
    fi
    # Save current stats for next run
    echo "$(date +%s)" > "$STATS_FILE"
    echo "$CURRENT_RX" >> "$STATS_FILE"
    echo "$CURRENT_TX" >> "$STATS_FILE"
fi
log_net "=== END NETWORK MONITORING ==="
# Cleanup old network logs (keep 7 days)
find "$LOG_DIR" -name "network-monitor-*.log" -mtime +7 -delete 2>/dev/null
exit 0
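The byte-counter-to-Mbps conversion in the bandwidth section can be checked in isolation. A minimal sketch of the same arithmetic, using integer math only so it has no `bc` dependency; the sample counters and interval are made up for illustration:

```shell
# Standalone check of the RX_RATE/RX_MBPS arithmetic used above.
prev_rx=1000000          # bytes seen at the previous run (sample value)
curr_rx=14107200         # bytes seen now (sample value)
time_diff=10             # seconds between the two samples

# Bytes per second over the sampling interval
rx_rate=$(( (curr_rx - prev_rx) / time_diff ))

# Megabits per second, scaled by 100 so two decimals survive integer math
mbps_x100=$(( rx_rate * 8 * 100 / 1048576 ))
rx_mbps=$(printf '%d.%02d' $(( mbps_x100 / 100 )) $(( mbps_x100 % 100 )))

echo "RX: ${rx_mbps} Mbps"
```

With these samples the delta is 13107200 bytes over 10 s, i.e. 1310720 B/s, which is exactly 10.00 Mbps.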

View File

@@ -1,171 +0,0 @@
#!/bin/bash
# Performance monitoring script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/system"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
PERF_LOG="$LOG_DIR/performance-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_perf() {
echo "[$TIMESTAMP] $1" >> "$PERF_LOG"
}
# System performance metrics
log_perf "=== PERFORMANCE METRICS ==="
# CPU Information
CPU_MODEL=$(grep "model name" /proc/cpuinfo | head -1 | cut -d: -f2 | xargs)
CPU_CORES=$(nproc)
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
LOAD_1MIN=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1 | xargs)
LOAD_5MIN=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $2}' | cut -d',' -f1 | xargs)
LOAD_15MIN=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $3}' | xargs)
log_perf "CPU_INFO Model: $CPU_MODEL, Cores: $CPU_CORES"
log_perf "CPU_USAGE ${CPU_USAGE}%"
log_perf "LOAD_AVERAGE 1min: $LOAD_1MIN, 5min: $LOAD_5MIN, 15min: $LOAD_15MIN"
# Memory Information
MEMORY_TOTAL=$(free -h | grep Mem | awk '{print $2}')
MEMORY_USED=$(free -h | grep Mem | awk '{print $3}')
MEMORY_FREE=$(free -h | grep Mem | awk '{print $4}')
MEMORY_PERCENT=$(free | grep Mem | awk '{printf "%.1f", $3/$2 * 100.0}')
SWAP_USED=$(free -h | grep Swap | awk '{print $3}')
SWAP_TOTAL=$(free -h | grep Swap | awk '{print $2}')
log_perf "MEMORY_USAGE Total: $MEMORY_TOTAL, Used: $MEMORY_USED (${MEMORY_PERCENT}%), Free: $MEMORY_FREE"
log_perf "SWAP_USAGE Used: $SWAP_USED, Total: $SWAP_TOTAL"
# Disk Information
# -P keeps each filesystem on a single line so the awk field positions
# hold even when the device name is long
DISK_USAGE=$(df -hP {{ docker_root }} | tail -1)
DISK_TOTAL=$(echo "$DISK_USAGE" | awk '{print $2}')
DISK_USED=$(echo "$DISK_USAGE" | awk '{print $3}')
DISK_AVAILABLE=$(echo "$DISK_USAGE" | awk '{print $4}')
DISK_PERCENT=$(echo "$DISK_USAGE" | awk '{print $5}')
log_perf "DISK_USAGE {{ docker_root }} - Total: $DISK_TOTAL, Used: $DISK_USED ($DISK_PERCENT), Available: $DISK_AVAILABLE"
# Media directory disk usage if different
MEDIA_DISK_USAGE=$(df -hP {{ media_root }} | tail -1)
MEDIA_DISK_TOTAL=$(echo "$MEDIA_DISK_USAGE" | awk '{print $2}')
MEDIA_DISK_USED=$(echo "$MEDIA_DISK_USAGE" | awk '{print $3}')
MEDIA_DISK_AVAILABLE=$(echo "$MEDIA_DISK_USAGE" | awk '{print $4}')
MEDIA_DISK_PERCENT=$(echo "$MEDIA_DISK_USAGE" | awk '{print $5}')
log_perf "MEDIA_DISK_USAGE {{ media_root }} - Total: $MEDIA_DISK_TOTAL, Used: $MEDIA_DISK_USED ($MEDIA_DISK_PERCENT), Available: $MEDIA_DISK_AVAILABLE"
# Network Statistics
NETWORK_INTERFACE=$(ip route | grep default | awk '{print $5}' | head -1)
if [[ -n "$NETWORK_INTERFACE" ]]; then
    RX_BYTES=$(cat /sys/class/net/$NETWORK_INTERFACE/statistics/rx_bytes)
    TX_BYTES=$(cat /sys/class/net/$NETWORK_INTERFACE/statistics/tx_bytes)
    RX_PACKETS=$(cat /sys/class/net/$NETWORK_INTERFACE/statistics/rx_packets)
    TX_PACKETS=$(cat /sys/class/net/$NETWORK_INTERFACE/statistics/tx_packets)
    # Convert bytes to human readable
    RX_MB=$((RX_BYTES / 1024 / 1024))
    TX_MB=$((TX_BYTES / 1024 / 1024))
    log_perf "NETWORK_STATS Interface: $NETWORK_INTERFACE, RX: ${RX_MB}MB (${RX_PACKETS} packets), TX: ${TX_MB}MB (${TX_PACKETS} packets)"
fi
# Docker Performance
if command -v docker >/dev/null 2>&1; then
    cd {{ docker_compose_dir }} || exit 1
    log_perf "=== DOCKER PERFORMANCE ==="
    # Docker system info
    DOCKER_CONTAINERS_RUNNING=$(docker ps -q | wc -l)
    DOCKER_CONTAINERS_TOTAL=$(docker ps -aq | wc -l)
    DOCKER_IMAGES=$(docker images -q | wc -l)
    log_perf "DOCKER_STATS Running containers: $DOCKER_CONTAINERS_RUNNING, Total containers: $DOCKER_CONTAINERS_TOTAL, Images: $DOCKER_IMAGES"
    # Container resource usage
    SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
    for service in "${SERVICES[@]}"; do
        CONTAINER_ID=$(docker-compose ps -q "$service" 2>/dev/null)
        if [[ -n "$CONTAINER_ID" ]]; then
            # Get container stats (single snapshot)
            STATS=$(docker stats --no-stream --format "{{ '{{.CPUPerc}}' }}\t{{ '{{.MemUsage}}' }}\t{{ '{{.MemPerc}}' }}\t{{ '{{.NetIO}}' }}\t{{ '{{.BlockIO}}' }}" "$CONTAINER_ID" 2>/dev/null)
            if [[ -n "$STATS" ]]; then
                CPU_PERC=$(echo "$STATS" | cut -f1)
                MEM_USAGE=$(echo "$STATS" | cut -f2)
                MEM_PERC=$(echo "$STATS" | cut -f3)
                NET_IO=$(echo "$STATS" | cut -f4)
                BLOCK_IO=$(echo "$STATS" | cut -f5)
                log_perf "CONTAINER_PERF $service - CPU: $CPU_PERC, Memory: $MEM_USAGE ($MEM_PERC), Network: $NET_IO, Disk: $BLOCK_IO"
            fi
        fi
    done
    # Docker system disk usage
    DOCKER_SYSTEM_DF=$(docker system df --format "{{ '{{.Type}}' }}\t{{ '{{.TotalCount}}' }}\t{{ '{{.Active}}' }}\t{{ '{{.Size}}' }}\t{{ '{{.Reclaimable}}' }}" 2>/dev/null)
    if [[ -n "$DOCKER_SYSTEM_DF" ]]; then
        log_perf "DOCKER_DISK_USAGE:"
        echo "$DOCKER_SYSTEM_DF" | while IFS=$'\t' read -r type total active size reclaimable; do
            log_perf " $type - Total: $total, Active: $active, Size: $size, Reclaimable: $reclaimable"
        done
    fi
fi
# Process Information
log_perf "=== TOP PROCESSES ==="
TOP_PROCESSES=$(ps aux --sort=-%cpu | head -6 | tail -5)
echo "$TOP_PROCESSES" | while IFS= read -r line; do
    log_perf "TOP_CPU $line"
done
TOP_MEMORY=$(ps aux --sort=-%mem | head -6 | tail -5)
echo "$TOP_MEMORY" | while IFS= read -r line; do
    log_perf "TOP_MEM $line"
done
# I/O Statistics
if command -v iostat >/dev/null 2>&1; then
    log_perf "=== I/O STATISTICS ==="
    IOSTAT_OUTPUT=$(iostat -x 1 1 | tail -n +4)
    echo "$IOSTAT_OUTPUT" | while IFS= read -r line; do
        if [[ -n "$line" && "$line" != *"Device"* ]]; then
            log_perf "IOSTAT $line"
        fi
    done
fi
# Performance Alerts
log_perf "=== PERFORMANCE ALERTS ==="
# CPU Alert (>80%); empty readings default to 0 so bc never sees a blank
if (( $(echo "${CPU_USAGE:-0} > 80" | bc -l) )); then
    log_perf "ALERT_CPU High CPU usage: ${CPU_USAGE}%"
fi
# Memory Alert (>90%)
if (( $(echo "${MEMORY_PERCENT:-0} > 90" | bc -l) )); then
    log_perf "ALERT_MEMORY High memory usage: ${MEMORY_PERCENT}%"
fi
# Disk Alert (>85%)
DISK_PERCENT_NUM=$(echo "$DISK_PERCENT" | cut -d'%' -f1)
if [[ ${DISK_PERCENT_NUM:-0} -gt 85 ]]; then
    log_perf "ALERT_DISK High disk usage: $DISK_PERCENT"
fi
# Load Average Alert (>2.0)
if (( $(echo "${LOAD_1MIN:-0} > 2.0" | bc -l) )); then
    log_perf "ALERT_LOAD High load average: $LOAD_1MIN"
fi
log_perf "=== END PERFORMANCE MONITORING ==="
# Cleanup old performance logs (keep 7 days)
find "$LOG_DIR" -name "performance-*.log" -mtime +7 -delete 2>/dev/null
exit 0
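The `echo "$VALUE > limit" | bc -l` idiom used for the alerts above errors out if a reading ever comes back empty. A guarded variant of the same float comparison, sketched with awk (this helper is illustrative, not part of the original script):

```shell
# Prints ALERT when the measured value exceeds the limit, OK otherwise;
# an empty or missing reading falls back to 0 instead of crashing.
check_threshold() {
    # $1 = measured value (may be empty), $2 = numeric limit
    if awk -v v="${1:-0}" -v lim="$2" 'BEGIN { exit !(v + 0 > lim + 0) }'; then
        echo "ALERT"
    else
        echo "OK"
    fi
}

check_threshold "85.3" 80   # over the CPU limit
check_threshold ""     80   # missing reading is treated as 0
```

The `v + 0` coercion is what makes blank input safe: awk turns a non-numeric string into 0 before comparing.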

View File

@@ -1,13 +0,0 @@
# Fail2ban filter for Plex Media Server
# Protects against brute force authentication attempts
[Definition]
# Match failed authentication attempts in Plex logs
failregex = ^.*Authentication failed for user.*from <HOST>.*$
            ^.*Invalid credentials.*from <HOST>.*$
            ^.*Failed login attempt.*from <HOST>.*$
            ^.*Unauthorized access attempt.*from <HOST>.*$
# Ignore successful authentications
ignoreregex = ^.*Authentication successful.*$
              ^.*Login successful.*$
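A rough shell equivalent of the first failregex above, for checking the pattern by hand: fail2ban expands `<HOST>` into a host-matching group, approximated here with a sed capture. The log line is invented for illustration.

```shell
# Extract the offending host the way the failregex would.
line='Jun 10 12:00:01 plex: Authentication failed for user bob from 203.0.113.7'
host=$(printf '%s\n' "$line" |
    sed -nE 's/.*Authentication failed for user .* from ([0-9.]+).*/\1/p')
echo "would ban: $host"
```

For real filters, `fail2ban-regex <logfile> <filter>` is the proper way to test match rates.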

View File

@@ -1,9 +0,0 @@
# Docker logging configuration
# Log Docker daemon messages to separate file
:programname, isequal, "dockerd" /var/log/docker.log
& stop
# Log container messages to separate files
$template DockerLogFormat,"/var/log/docker/%programname%.log"
:syslogtag, startswith, "docker/" ?DockerLogFormat
& stop

View File

@@ -1,28 +0,0 @@
#!/bin/bash
# SABnzbd Configuration Fix for Docker Service Communication
# This script fixes the hostname verification issue that prevents
# *arr services from connecting to SABnzbd
SABNZBD_CONFIG="/config/sabnzbd.ini"
# Wait for SABnzbd to create its config file
while [ ! -f "$SABNZBD_CONFIG" ]; do
    echo "Waiting for SABnzbd config file to be created..."
    sleep 5
done
# Check if host_whitelist needs to be updated
if ! grep -q "sonarr, radarr, lidarr" "$SABNZBD_CONFIG"; then
    echo "Updating SABnzbd host_whitelist to allow *arr service connections..."
    # Backup original config
    cp "$SABNZBD_CONFIG" "${SABNZBD_CONFIG}.backup"
    # Update host_whitelist to include all service names
    sed -i 's/host_whitelist = \([^,]*\),/host_whitelist = \1, sonarr, radarr, lidarr, bazarr, prowlarr, whisparr, gluetun, localhost, 127.0.0.1,/' "$SABNZBD_CONFIG"
    echo "SABnzbd host_whitelist updated successfully!"
    echo "Services can now connect to SABnzbd using container hostnames."
else
    echo "SABnzbd host_whitelist already configured for service connections."
fi
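The sed rewrite above can be dry-run against a sample line before letting it touch `/config/sabnzbd.ini`; the service list is shortened here for brevity, and the `nas.local` entry is a made-up example:

```shell
# Insert extra hostnames after the first host_whitelist entry, exactly as
# the in-place sed above does (just on stdin instead of the ini file).
sample='host_whitelist = nas.local,'
rewritten=$(printf '%s\n' "$sample" |
    sed 's/host_whitelist = \([^,]*\),/host_whitelist = \1, sonarr, radarr,/')
echo "$rewritten"
```

Note the pattern only fires when the existing value already ends in a comma; a single-entry whitelist without a trailing comma passes through unchanged, which is worth checking before relying on the rewrite.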

View File

@@ -1,60 +0,0 @@
#!/bin/bash
# Security audit script for Arrs Media Stack
echo "=== Security Audit Report - $(date) ==="
echo
echo "1. System Information:"
hostname
uname -a
uptime
echo
echo "2. User and Group Information:"
whoami
id docker 2>/dev/null || echo "Docker user not found"
getent group docker
echo
echo "3. SSH Configuration:"
systemctl is-active ssh
grep "^PermitRootLogin" /etc/ssh/sshd_config || echo "PermitRootLogin not configured"
grep "^PasswordAuthentication" /etc/ssh/sshd_config || echo "PasswordAuthentication not configured"
echo
echo "4. Firewall Status:"
ufw status
echo
echo "5. Fail2ban Status:"
systemctl is-active fail2ban
fail2ban-client status sshd 2>/dev/null || echo "Fail2ban sshd jail not active"
echo
echo "6. Docker Security:"
systemctl is-active docker
docker --version 2>/dev/null || echo "Docker not available"
docker ps 2>/dev/null || echo "Cannot access Docker"
echo
echo "7. File Permissions:"
ls -l /etc/ssh/sshd_config
ls -l /etc/fail2ban/jail.local 2>/dev/null || echo "jail.local not found"
ls -ld {{ docker_root }}
ls -ld {{ media_root }}
echo
echo "8. System Resources:"
free -h
df -h /
echo
echo "9. Network Connections:"
netstat -tlnp 2>/dev/null | grep -E ":(8989|7878|8686|6767|9696)" || echo "No Arrs ports found"
echo
echo "10. Recent Security Events:"
tail -10 /var/log/auth.log 2>/dev/null | grep sshd || echo "No SSH logs found"
echo
echo "=== End of Security Audit ==="

View File

@@ -1,66 +0,0 @@
#!/bin/bash
# System monitoring script for Arrs Media Stack
# Generated by Ansible
LOG_DIR="{{ docker_root }}/logs/system"
TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
LOG_FILE="$LOG_DIR/system-monitor-$(date '+%Y%m%d').log"
# Ensure log directory exists
mkdir -p "$LOG_DIR"
# Function to log with timestamp
log_with_timestamp() {
echo "[$TIMESTAMP] $1" >> "$LOG_FILE"
}
# System metrics
CPU_USAGE=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)
MEMORY_USAGE=$(free | grep Mem | awk '{printf "%.1f", $3/$2 * 100.0}')
DISK_USAGE=$(df -P {{ docker_root }} | tail -1 | awk '{print $5}' | cut -d'%' -f1)
LOAD_AVG=$(uptime | awk -F'load average:' '{print $2}' | awk '{print $1}' | cut -d',' -f1)
# Log system metrics
log_with_timestamp "SYSTEM_METRICS CPU:${CPU_USAGE}% MEM:${MEMORY_USAGE}% DISK:${DISK_USAGE}% LOAD:${LOAD_AVG}"
# Check Docker service
if systemctl is-active --quiet docker; then
    log_with_timestamp "DOCKER_SERVICE OK"
else
    log_with_timestamp "DOCKER_SERVICE FAILED"
fi
# Check Arrs services
cd {{ docker_compose_dir }} || exit 1
SERVICES=("sonarr" "radarr" "lidarr" "bazarr" "prowlarr" "watchtower")
for service in "${SERVICES[@]}"; do
    if docker-compose ps "$service" | grep -q "Up"; then
        log_with_timestamp "SERVICE_${service^^} OK"
    else
        log_with_timestamp "SERVICE_${service^^} FAILED"
        # Try to restart failed service
        docker-compose restart "$service" 2>/dev/null
        log_with_timestamp "SERVICE_${service^^} RESTART_ATTEMPTED"
    fi
done
# Check disk space warning (>80%)
if [[ ${DISK_USAGE:-0} -gt 80 ]]; then
    log_with_timestamp "DISK_WARNING Disk usage is ${DISK_USAGE}%"
fi
# Check memory warning (>90%); empty readings default to 0 for bc
if (( $(echo "${MEMORY_USAGE:-0} > 90" | bc -l) )); then
    log_with_timestamp "MEMORY_WARNING Memory usage is ${MEMORY_USAGE}%"
fi
# Check load average warning (>2.0)
if (( $(echo "${LOAD_AVG:-0} > 2.0" | bc -l) )); then
    log_with_timestamp "LOAD_WARNING Load average is $LOAD_AVG"
fi
# Cleanup old log files (keep 7 days)
find "$LOG_DIR" -name "system-monitor-*.log" -mtime +7 -delete 2>/dev/null
exit 0
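The 7-day `find -mtime +7 -delete` cleanup used by these monitors can be exercised safely against a throwaway directory; this sketch ages one file artificially with GNU `touch -d` (a GNU coreutils assumption, and the filenames are samples):

```shell
# Create one fresh and one ten-day-old log, then run the same cleanup.
log_dir=$(mktemp -d)
touch "$log_dir/system-monitor-new.log"
touch -d '10 days ago' "$log_dir/system-monitor-old.log"

# -mtime +7 matches files last modified strictly more than 7*24h ago
find "$log_dir" -name "system-monitor-*.log" -mtime +7 -delete 2>/dev/null

ls "$log_dir"   # only the recent log survives
```

Because `-mtime` rounds in 24-hour units, a file exactly 7 days old is not yet matched; only files older than a full 8th day are deleted.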