Sanitized mirror from private repository - 2026-03-21 11:39:16 UTC
docs/advanced/ansible/HOMELAB_STATUS_REPORT.md (new file, 105 lines)
# Homelab Infrastructure Status Report

*Generated: February 8, 2026*

## 🎯 Mission Accomplished: Complete Homelab Health Check

### 📊 Infrastructure Overview

**Tailscale Network Status**: ✅ **HEALTHY**

- **Total Devices**: 28 devices in the tailnet
- **Online Devices**: 12 active devices
- **Core Infrastructure**: All critical systems online

### 🔧 Synology NAS Cluster Status: ✅ **ALL HEALTHY**

| Device | IP | Status | DSM Version | RAID Status | Disk Usage |
|--------|----|--------|-------------|-------------|------------|
| **atlantis** | 100.83.230.112 | ✅ Healthy | DSM 7.3.2 | Normal | 73% |
| **calypso** | 100.103.48.78 | ✅ Healthy | DSM 7.3.2 | Normal | 84% |
| **setillo** | 100.125.0.20 | ✅ Healthy | DSM 7.3.2 | Normal | 78% |

### 🌐 APT Proxy Infrastructure: ✅ **OPTIMAL**

**Proxy Server**: calypso (100.103.48.78:3142), running the apt-cacher-ng service

| Client | OS | Proxy Status | Connectivity |
|--------|----|--------------|--------------|
| **homelab** | Ubuntu 24.04 | ✅ Configured | ✅ Connected |
| **pi-5** | Debian 12.13 | ✅ Configured | ✅ Connected |
| **vish-concord-nuc** | Ubuntu 24.04 | ✅ Configured | ✅ Connected |
| **pve** | Debian 12.13 | ✅ Configured | ✅ Connected |
| **truenas-scale** | Debian 12.9 | ✅ Configured | ✅ Connected |

**Summary**: 5/5 Debian-based clients properly configured and using the apt-cacher-ng proxy
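
A client showing ✅ Configured above typically carries an APT proxy drop-in along these lines (the `01proxy` filename is a common convention, not taken from this report; any file under `apt.conf.d/` is read):

```
# /etc/apt/apt.conf.d/01proxy (conventional filename)
Acquire::http::Proxy "http://100.103.48.78:3142";
```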

### 🔐 SSH Connectivity Status: ✅ **RESOLVED**

**Previous Issues Resolved**:

- ✅ **seattle-tailscale**: fail2ban had banned the homelab IP; the IP was unbanned and the Tailscale subnet added to the ignore list
- ✅ **homeassistant**: SSH access configured and verified

**Current SSH Access**:

- All online Tailscale devices accessible via SSH
- Tailscale subnet (100.64.0.0/10) added to fail2ban ignore lists where needed
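
The fail2ban change amounts to a drop-in like the one below (jail name and file layout are the usual fail2ban defaults, not details from this report):

```
# /etc/fail2ban/jail.local (ignoreip is a space-separated list of addresses/CIDRs)
[DEFAULT]
ignoreip = 127.0.0.1/8 ::1 100.64.0.0/10
```

A one-off unban can be done with `sudo fail2ban-client set sshd unbanip <address>`, followed by `sudo systemctl reload fail2ban` to pick up the new ignore list.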

### 📋 Ansible Infrastructure: ✅ **ENHANCED**

**New Playbooks Created**:

1. **`check_apt_proxy.yml`** - Comprehensive APT proxy health monitoring
   - Tests configuration files
   - Verifies network connectivity
   - Validates APT settings
   - Provides detailed reporting and recommendations

**Updated Inventory**:

- Added homeassistant (100.112.186.90) to the hypervisors group
- Enhanced the debian_clients group with all relevant systems
- Comprehensive host groupings for targeted operations
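
In INI inventory form, the homeassistant addition looks roughly like this (group name and IP are from this report; the exact layout of `hosts.ini` is assumed):

```
# hosts.ini (excerpt)
[hypervisors]
homeassistant ansible_host=100.112.186.90
```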

### 🎯 Key Achievements

1. **Complete Infrastructure Visibility**
   - All Synology devices health-checked and confirmed operational
   - APT proxy infrastructure verified and optimized
   - SSH connectivity issues identified and resolved

2. **Automated Monitoring**
   - Created comprehensive health check playbooks
   - Established a baseline for ongoing monitoring
   - Documented all system configurations

3. **Network Optimization**
   - All Debian/Ubuntu clients using the centralized APT cache
   - Reduced bandwidth usage and improved update speeds
   - Consistent package management across the homelab

### 🔄 Ongoing Maintenance

**Offline Devices** (Expected):

- pi-5-kevin (100.123.246.75) - offline for 114 days
- Various mobile devices and test systems

**Monitoring Recommendations**:

- Run `ansible-playbook playbooks/synology_health.yml` monthly
- Run `ansible-playbook playbooks/check_apt_proxy.yml` weekly
- Monitor Tailscale connectivity via `tailscale status`
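
The recommended cadence can be automated with cron entries along these lines (the schedule and log paths are suggestions, not part of this report):

```
# crontab on the control node
# Weekly APT proxy check, Monday 06:00
0 6 * * 1  cd /home/homelab/organized/projects/homelab/ansible/automation && ansible-playbook playbooks/check_apt_proxy.yml >> /var/log/ansible-apt-proxy.log 2>&1
# Monthly Synology health check, 1st of the month 06:30
30 6 1 * * cd /home/homelab/organized/projects/homelab/ansible/automation && ansible-playbook playbooks/synology_health.yml >> /var/log/ansible-synology.log 2>&1
```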

### 🏆 Infrastructure Maturity Level

**Current Status**: **Level 3 - Standardized**

- ✅ Automated health monitoring
- ✅ Centralized configuration management
- ✅ Comprehensive documentation
- ✅ Reliable connectivity and access controls

---

## 📁 File Locations

- **Ansible Playbooks**: `/home/homelab/organized/projects/homelab/ansible/automation/playbooks/`
- **Inventory**: `/home/homelab/organized/projects/homelab/ansible/automation/hosts.ini`
- **This Report**: `/home/homelab/organized/projects/homelab/ansible/automation/HOMELAB_STATUS_REPORT.md`

---

*Report generated by OpenHands automation - Homelab infrastructure is healthy and optimized! 🚀*

docs/advanced/ansible/README.md (new file, 206 lines)
# Homelab Ansible Playbooks

Automated deployment and management of all homelab services across all hosts.

## 📁 Directory Structure

```
ansible/homelab/
├── ansible.cfg              # Ansible configuration
├── inventory.yml            # All hosts inventory
├── site.yml                 # Master playbook
├── generate_playbooks.py    # Script to regenerate playbooks from compose files
├── group_vars/              # Variables by group
│   ├── all.yml              # Global variables
│   ├── synology.yml         # Synology NAS specific
│   └── vms.yml              # Virtual machines specific
├── host_vars/               # Variables per host (auto-generated)
│   ├── atlantis.yml         # 53 services
│   ├── calypso.yml          # 24 services
│   ├── homelab_vm.yml       # 33 services
│   └── ...
├── playbooks/               # Individual playbooks
│   ├── common/              # Shared playbooks
│   │   ├── install_docker.yml
│   │   └── setup_directories.yml
│   ├── deploy_atlantis.yml
│   ├── deploy_calypso.yml
│   └── ...
└── roles/                   # Reusable roles
    ├── docker_stack/        # Deploy docker-compose stacks
    └── directory_setup/     # Create directory structures
```

## 🚀 Quick Start

### Prerequisites

- Ansible 2.12+
- SSH access to all hosts (via Tailscale)
- Python 3.8+

### Installation

```bash
pip install ansible
```

### Deploy Everything

```bash
cd ansible/homelab
ansible-playbook site.yml
```

### Deploy to Specific Host

```bash
ansible-playbook site.yml --limit atlantis
```

### Deploy by Category

```bash
# Deploy all Synology hosts
ansible-playbook site.yml --tags synology

# Deploy all VMs
ansible-playbook site.yml --tags vms
```

### Check Mode (Dry Run)

```bash
ansible-playbook site.yml --check --diff
```

## 📋 Host Inventory

| Host | Category | Services | Description |
|------|----------|----------|-------------|
| atlantis | synology | 53 | Primary NAS (DS1823xs+) |
| calypso | synology | 24 | Secondary NAS (DS920+) |
| setillo | synology | 2 | Remote NAS |
| guava | physical | 8 | TrueNAS Scale |
| concord_nuc | physical | 11 | Intel NUC |
| homelab_vm | vms | 33 | Primary VM |
| rpi5_vish | edge | 3 | Raspberry Pi 5 |

## 🔧 Configuration

### Vault Secrets

Sensitive data should be stored in Ansible Vault:

```bash
# Create vault password file (DO NOT commit this)
echo "your-vault-password" > .vault_pass

# Encrypt a variable
ansible-vault encrypt_string 'my-secret' --name 'api_key'

# Run playbook with vault
ansible-playbook site.yml --vault-password-file .vault_pass
```

### Environment Variables

Create a `.env` file for each service or use host_vars:

```yaml
# host_vars/atlantis.yml
vault_plex_claim_token: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  ...
```

## 📝 Adding New Services

### Method 1: Add a docker-compose file

1. Add your `docker-compose.yml` to `hosts/<category>/<host>/<service>/`
2. Run the generator:

```bash
python3 generate_playbooks.py
```

### Method 2: Manual addition

1. Add the service to `host_vars/<host>.yml`:

```yaml
host_services:
  - name: my_service
    stack_dir: my_service
    compose_file: hosts/synology/atlantis/my_service.yaml
    enabled: true
```

## 🏷️ Tags

| Tag | Description |
|-----|-------------|
| `synology` | All Synology NAS hosts |
| `vms` | All virtual machines |
| `physical` | Physical servers |
| `edge` | Edge devices (RPi, etc.) |
| `arr-suite` | Media management (Sonarr, Radarr, etc.) |
| `monitoring` | Prometheus, Grafana, etc. |

## 📊 Service Categories

### Media & Entertainment

- Plex, Jellyfin, Tautulli
- Sonarr, Radarr, Lidarr, Prowlarr
- Jellyseerr, Overseerr

### Productivity

- Paperless-ngx, Stirling PDF
- Joplin, Dokuwiki
- Syncthing

### Infrastructure

- Nginx Proxy Manager
- Traefik, Cloudflare Tunnel
- AdGuard Home, Pi-hole

### Monitoring

- Prometheus, Grafana
- Uptime Kuma, Dozzle
- Node Exporter

### Security

- Vaultwarden
- Authentik
- Headscale

## 🔄 Regenerating Playbooks

If you modify docker-compose files directly:

```bash
python3 generate_playbooks.py
```

This will:

1. Scan all `hosts/` directories for compose files
2. Update `host_vars/` with service lists
3. Regenerate individual host playbooks
4. Update the master `site.yml`
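
Service names are derived from each compose file's location; this small sketch mirrors the naming rule used by `generate_playbooks.py`:

```python
from pathlib import Path

def service_name(path: Path) -> str:
    # A docker-compose.yml takes its parent directory's name;
    # a standalone file uses its stem with '-' and '.' mapped to '_'
    if path.name in ("docker-compose.yml", "docker-compose.yaml"):
        return path.parent.name
    return path.stem.replace("-", "_").replace(".", "_")

print(service_name(Path("hosts/synology/atlantis/immich/docker-compose.yml")))  # immich
print(service_name(Path("hosts/physical/anubis/draw.io.yml")))  # draw_io
```

This is why, for example, `draw.io.yml` appears as `draw_io` in `host_vars/anubis.yml`.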

## 🐛 Troubleshooting

### Test connectivity

```bash
ansible all -m ping
```

### Test a specific host

```bash
ansible atlantis -m ping
```

### Verbose output

```bash
ansible-playbook site.yml -vvv
```

### List tasks without running

```bash
ansible-playbook site.yml --list-tasks
```

## 📚 Resources

- [Ansible Documentation](https://docs.ansible.com/)
- [Docker Compose Reference](https://docs.docker.com/compose/compose-file/)
- [Tailscale Documentation](https://tailscale.com/kb/)

docs/advanced/ansible/ansible.cfg (new file, 18 lines)
[defaults]
inventory = inventory.yml
roles_path = roles
host_key_checking = False
retry_files_enabled = False
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_facts_cache
fact_caching_timeout = 86400
stdout_callback = yaml
interpreter_python = auto_silent

[privilege_escalation]
become = False

[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=60s

docs/advanced/ansible/generate_playbooks.py (new file, 296 lines)
#!/usr/bin/env python3
"""
Generate Ansible playbooks from existing docker-compose files in the homelab repo.
This script scans the hosts/ directory and creates deployment playbooks.
"""

import yaml
from pathlib import Path
from collections import defaultdict

REPO_ROOT = Path(__file__).parent.parent.parent
HOSTS_DIR = REPO_ROOT / "hosts"
ANSIBLE_DIR = Path(__file__).parent
PLAYBOOKS_DIR = ANSIBLE_DIR / "playbooks"
HOST_VARS_DIR = ANSIBLE_DIR / "host_vars"

# Mapping of directory names to ansible host names
HOST_MAPPING = {
    "atlantis": "atlantis",
    "calypso": "calypso",
    "setillo": "setillo",
    "guava": "guava",
    "concord-nuc": "concord_nuc",
    "anubis": "anubis",
    "homelab-vm": "homelab_vm",
    "chicago-vm": "chicago_vm",
    "bulgaria-vm": "bulgaria_vm",
    "contabo-vm": "contabo_vm",
    "rpi5-vish": "rpi5_vish",
    "tdarr-node": "tdarr_node",
}

# Host categories for grouping
HOST_CATEGORIES = {
    "synology": ["atlantis", "calypso", "setillo"],
    "physical": ["guava", "concord-nuc", "anubis"],
    "vms": ["homelab-vm", "chicago-vm", "bulgaria-vm", "contabo-vm", "matrix-ubuntu-vm"],
    "edge": ["rpi5-vish", "nvidia_shield"],
    "proxmox": ["tdarr-node"],
}


def find_compose_files():
    """Find all docker-compose files in the hosts directory."""
    compose_files = defaultdict(list)

    for yaml_file in HOSTS_DIR.rglob("*.yaml"):
        if ".git" in str(yaml_file):
            continue
        compose_files[yaml_file.parent].append(yaml_file)

    for yml_file in HOSTS_DIR.rglob("*.yml"):
        if ".git" in str(yml_file):
            continue
        compose_files[yml_file.parent].append(yml_file)

    return compose_files


def get_host_from_path(file_path):
    """Extract the category and host name from a file path."""
    parts = file_path.relative_to(HOSTS_DIR).parts

    # Structure: hosts/<category>/<host>/...
    if len(parts) >= 2:
        category = parts[0]
        host = parts[1]
        return category, host
    return None, None


def extract_service_name(file_path):
    """Extract the service name from a file path."""
    # Get the service name from the parent directory or the filename
    if file_path.name in ["docker-compose.yml", "docker-compose.yaml"]:
        return file_path.parent.name
    else:
        return file_path.stem.replace("-", "_").replace(".", "_")


def is_compose_file(file_path):
    """Check if a file looks like a docker-compose file."""
    try:
        with open(file_path, 'r') as f:
            content = yaml.safe_load(f)
            if content and isinstance(content, dict):
                return 'services' in content or 'version' in content
    except (yaml.YAMLError, OSError):
        pass
    return False


def generate_service_vars(host, services):
    """Generate host_vars with service definitions."""
    service_list = []

    for service_path, service_name in services:
        rel_path = service_path.relative_to(REPO_ROOT)

        # Determine the stack directory name
        if service_path.name in ["docker-compose.yml", "docker-compose.yaml"]:
            stack_dir = service_path.parent.name
        else:
            stack_dir = service_name

        service_entry = {
            "name": service_name,
            "stack_dir": stack_dir,
            "compose_file": str(rel_path),
            "enabled": True,
        }

        # Check for a .env file alongside the compose file
        env_file = service_path.parent / ".env"
        stack_env = service_path.parent / "stack.env"
        if env_file.exists():
            service_entry["env_file"] = str(env_file.relative_to(REPO_ROOT))
        elif stack_env.exists():
            service_entry["env_file"] = str(stack_env.relative_to(REPO_ROOT))

        service_list.append(service_entry)

    return service_list


def generate_host_playbook(host_name, ansible_host, services, category):
    """Generate a playbook for a specific host."""

    # Create header comment
    header = f"""---
# Deployment playbook for {host_name}
# Category: {category}
# Services: {len(services)}
#
# Usage:
#   ansible-playbook playbooks/deploy_{ansible_host}.yml
#   ansible-playbook playbooks/deploy_{ansible_host}.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_{ansible_host}.yml --check

"""

    playbook = [
        {
            "name": f"Deploy services to {host_name}",
            "hosts": ansible_host,
            "gather_facts": True,
            "vars": {
                "services": "{{ host_services | default([]) }}"
            },
            "tasks": [
                {
                    "name": "Display deployment info",
                    "ansible.builtin.debug": {
                        "msg": "Deploying {{ services | length }} services to {{ inventory_hostname }}"
                    }
                },
                {
                    "name": "Ensure docker data directory exists",
                    "ansible.builtin.file": {
                        "path": "{{ docker_data_path }}",
                        "state": "directory",
                        "mode": "0755"
                    }
                },
                {
                    "name": "Deploy each enabled service",
                    "ansible.builtin.include_role": {
                        "name": "docker_stack"
                    },
                    "vars": {
                        "stack_name": "{{ item.stack_dir }}",
                        "stack_compose_file": "{{ item.compose_file }}",
                        "stack_env_file": "{{ item.env_file | default(omit) }}"
                    },
                    "loop": "{{ services }}",
                    "loop_control": {
                        "label": "{{ item.name }}"
                    },
                    "when": "item.enabled | default(true)"
                }
            ]
        }
    ]

    return header, playbook


def main():
    """Generate all playbooks."""
    print("=" * 60)
    print("Generating Ansible Playbooks from Homelab Repository")
    print("=" * 60)

    # Ensure directories exist
    PLAYBOOKS_DIR.mkdir(parents=True, exist_ok=True)
    HOST_VARS_DIR.mkdir(parents=True, exist_ok=True)

    # Find all compose files
    compose_files = find_compose_files()

    # Organize by host
    hosts_services = defaultdict(list)

    for directory, files in compose_files.items():
        category, host = get_host_from_path(directory)
        if not host:
            continue

        for f in files:
            if is_compose_file(f):
                service_name = extract_service_name(f)
                hosts_services[(category, host)].append((f, service_name))

    # Generate playbooks and host_vars
    all_hosts = {}

    for (category, host), services in sorted(hosts_services.items()):
        ansible_host = HOST_MAPPING.get(host, host.replace("-", "_"))

        print(f"\n[{category}/{host}] Found {len(services)} services:")
        for service_path, service_name in services:
            print(f"  - {service_name}")

        # Generate host_vars
        service_vars = generate_service_vars(host, services)
        host_vars = {
            "host_services": service_vars
        }

        host_vars_file = HOST_VARS_DIR / f"{ansible_host}.yml"
        with open(host_vars_file, 'w') as f:
            f.write("---\n")
            f.write(f"# Auto-generated host variables for {host}\n")
            f.write("# Services deployed to this host\n\n")
            yaml.dump(host_vars, f, default_flow_style=False, sort_keys=False)

        # Generate individual host playbook
        header, playbook = generate_host_playbook(host, ansible_host, services, category)
        playbook_file = PLAYBOOKS_DIR / f"deploy_{ansible_host}.yml"
        with open(playbook_file, 'w') as f:
            f.write(header)
            yaml.dump(playbook, f, default_flow_style=False, sort_keys=False)

        all_hosts[ansible_host] = {
            "category": category,
            "host": host,
            "services": len(services)
        }

    # Generate master playbook
    master_playbook = [
        {
            "name": "Deploy all homelab services",
            "hosts": "localhost",
            "gather_facts": False,
            "tasks": [
                {
                    "name": "Display deployment plan",
                    "ansible.builtin.debug": {
                        "msg": "Deploying services to all hosts. Use --limit to target specific hosts."
                    }
                }
            ]
        }
    ]

    # Add imports for each host
    for ansible_host, info in sorted(all_hosts.items()):
        master_playbook.append({
            "name": f"Deploy to {info['host']} ({info['services']} services)",
            "ansible.builtin.import_playbook": f"playbooks/deploy_{ansible_host}.yml",
            "tags": [info['category'], ansible_host]
        })

    master_file = ANSIBLE_DIR / "site.yml"
    with open(master_file, 'w') as f:
        f.write("---\n")
        f.write("# Master Homelab Deployment Playbook\n")
        f.write("# Auto-generated from docker-compose files\n")
        f.write("#\n")
        f.write("# Usage:\n")
        f.write("#   Deploy everything:    ansible-playbook site.yml\n")
        f.write("#   Deploy specific host: ansible-playbook site.yml --limit atlantis\n")
        f.write("#   Deploy by category:   ansible-playbook site.yml --tags synology\n")
        f.write("#\n\n")
        yaml.dump(master_playbook, f, default_flow_style=False, sort_keys=False)

    print(f"\n{'=' * 60}")
    print(f"Generated playbooks for {len(all_hosts)} hosts")
    print(f"Master playbook: {master_file}")
    print("=" * 60)


if __name__ == "__main__":
    main()

docs/advanced/ansible/group_vars/all.yml (new file, 35 lines)
---
# Global variables for all hosts

# Timezone
timezone: "America/Los_Angeles"

# Domain settings
base_domain: "vish.local"
external_domain: "vish.gg"

# Common labels for Docker containers
default_labels:
  maintainer: "vish"
  managed_by: "ansible"

# Docker restart policy
docker_restart_policy: "unless-stopped"

# Common network settings
docker_default_network: "proxy"

# Traefik settings (if used)
traefik_enabled: false
traefik_network: "proxy"

# Portainer settings
portainer_url: "http://vishinator.synology.me:10000"

# Monitoring settings
prometheus_enabled: true
grafana_enabled: true

# Backup settings
backup_enabled: true
backup_path: "/backup"

docs/advanced/ansible/group_vars/homelab_linux.yml (new file, 4 lines)
---
ansible_become: true
ansible_become_method: sudo
ansible_python_interpreter: auto

docs/advanced/ansible/group_vars/synology.yml (new file, 33 lines)
---
# Synology NAS specific variables

# Docker path on Synology
docker_data_path: "/volume1/docker"

# Synology doesn't use sudo
ansible_become: false

# Docker socket location
docker_socket: "/var/run/docker.sock"

# PUID/PGID for Synology (typically the admin user)
puid: 1026
pgid: 100

# Media paths
media_path: "/volume1/media"
downloads_path: "/volume1/downloads"
photos_path: "/volume1/photos"
documents_path: "/volume1/documents"

# Common volume mounts for the arr suite
arr_common_volumes:
  - "{{ downloads_path }}:/downloads"
  - "{{ media_path }}/movies:/movies"
  - "{{ media_path }}/tv:/tv"
  - "{{ media_path }}/music:/music"
  - "{{ media_path }}/anime:/anime"

# Synology specific ports (avoid conflicts with DSM)
port_range_start: 8000
port_range_end: 9999

docs/advanced/ansible/group_vars/vms.yml (new file, 20 lines)
---
# Virtual machine specific variables

# Docker path on VMs
docker_data_path: "/opt/docker"

# Use sudo for privilege escalation
ansible_become: true
ansible_become_method: sudo

# Docker socket location
docker_socket: "/var/run/docker.sock"

# PUID/PGID for VMs (typically 1000:1000)
puid: 1000
pgid: 1000

# VM-specific port ranges
port_range_start: 3000
port_range_end: 9999

docs/advanced/ansible/host_vars/anubis.yml (new file, 37 lines)
---
# Auto-generated host variables for anubis
# Services deployed to this host

host_services:
  - name: element
    stack_dir: element
    compose_file: hosts/physical/anubis/element.yml
    enabled: true
  - name: photoprism
    stack_dir: photoprism
    compose_file: hosts/physical/anubis/photoprism.yml
    enabled: true
  - name: draw_io
    stack_dir: draw_io
    compose_file: hosts/physical/anubis/draw.io.yml
    enabled: true
  - name: conduit
    stack_dir: conduit
    compose_file: hosts/physical/anubis/conduit.yml
    enabled: true
  - name: archivebox
    stack_dir: archivebox
    compose_file: hosts/physical/anubis/archivebox.yml
    enabled: true
  - name: chatgpt
    stack_dir: chatgpt
    compose_file: hosts/physical/anubis/chatgpt.yml
    enabled: true
  - name: pialert
    stack_dir: pialert
    compose_file: hosts/physical/anubis/pialert.yml
    enabled: true
  - name: proxitok
    stack_dir: proxitok
    compose_file: hosts/physical/anubis/proxitok.yml
    enabled: true

docs/advanced/ansible/host_vars/atlantis.yml (new file, 219 lines)
|
||||
---
|
||||
# Auto-generated host variables for atlantis
|
||||
# Services deployed to this host
|
||||
|
||||
host_services:
|
||||
- name: redlib
|
||||
stack_dir: redlib
|
||||
compose_file: hosts/synology/atlantis/redlib.yaml
|
||||
enabled: true
|
||||
- name: repo_nginx
|
||||
stack_dir: repo_nginx
|
||||
compose_file: hosts/synology/atlantis/repo_nginx.yaml
|
||||
enabled: true
|
||||
- name: fenrus
|
||||
stack_dir: fenrus
|
||||
compose_file: hosts/synology/atlantis/fenrus.yaml
|
||||
enabled: true
|
||||
- name: iperf3
|
||||
stack_dir: iperf3
|
||||
compose_file: hosts/synology/atlantis/iperf3.yaml
|
||||
enabled: true
|
||||
- name: vaultwarden
|
||||
stack_dir: vaultwarden
|
||||
compose_file: hosts/synology/atlantis/vaultwarden.yaml
|
||||
enabled: true
|
||||
- name: dynamicdnsupdater
|
||||
stack_dir: dynamicdnsupdater
|
||||
compose_file: hosts/synology/atlantis/dynamicdnsupdater.yaml
|
||||
enabled: true
|
||||
- name: wireguard
|
||||
stack_dir: wireguard
|
||||
compose_file: hosts/synology/atlantis/wireguard.yaml
|
||||
enabled: true
|
||||
- name: youtubedl
|
||||
stack_dir: youtubedl
|
||||
compose_file: hosts/synology/atlantis/youtubedl.yaml
|
||||
enabled: true
|
||||
- name: termix
|
||||
stack_dir: termix
|
||||
compose_file: hosts/synology/atlantis/termix.yaml
|
||||
enabled: true
|
||||
- name: cloudflare_tunnel
|
||||
stack_dir: cloudflare_tunnel
|
||||
compose_file: hosts/synology/atlantis/cloudflare-tunnel.yaml
|
||||
enabled: true
|
||||
- name: ntfy
|
||||
stack_dir: ntfy
|
||||
compose_file: hosts/synology/atlantis/ntfy.yml
|
||||
enabled: true
|
||||
- name: grafana
|
||||
stack_dir: grafana
|
||||
compose_file: hosts/synology/atlantis/grafana.yml
|
||||
enabled: true
|
||||
- name: it_tools
|
||||
stack_dir: it_tools
|
||||
compose_file: hosts/synology/atlantis/it_tools.yml
|
||||
enabled: true
|
||||
- name: calibre_books
|
||||
stack_dir: calibre_books
|
||||
compose_file: hosts/synology/atlantis/calibre-books.yml
|
||||
enabled: true
|
||||
- name: mastodon
|
||||
stack_dir: mastodon
|
||||
compose_file: hosts/synology/atlantis/mastodon.yml
|
||||
enabled: true
|
||||
- name: firefly
|
||||
stack_dir: firefly
|
||||
compose_file: hosts/synology/atlantis/firefly.yml
|
||||
enabled: true
|
||||
- name: invidious
|
||||
stack_dir: invidious
|
||||
compose_file: hosts/synology/atlantis/invidious.yml
|
||||
enabled: true
|
||||
- name: dokuwiki
|
||||
stack_dir: dokuwiki
|
||||
compose_file: hosts/synology/atlantis/dokuwiki.yml
|
||||
enabled: true
|
||||
- name: watchtower
|
||||
stack_dir: watchtower
|
||||
compose_file: hosts/synology/atlantis/watchtower.yml
|
||||
enabled: true
|
||||
- name: netbox
|
||||
stack_dir: netbox
|
||||
compose_file: hosts/synology/atlantis/netbox.yml
|
||||
enabled: true
|
||||
- name: llamagpt
|
||||
stack_dir: llamagpt
|
||||
compose_file: hosts/synology/atlantis/llamagpt.yml
|
||||
enabled: true
|
||||
- name: synapse
|
||||
stack_dir: synapse
|
||||
compose_file: hosts/synology/atlantis/synapse.yml
|
||||
enabled: true
|
||||
- name: uptimekuma
|
||||
stack_dir: uptimekuma
|
||||
compose_file: hosts/synology/atlantis/uptimekuma.yml
|
||||
enabled: true
|
||||
- name: matrix
|
||||
stack_dir: matrix
|
||||
compose_file: hosts/synology/atlantis/matrix.yml
|
||||
enabled: true
|
||||
- name: gitlab
|
||||
stack_dir: gitlab
|
||||
compose_file: hosts/synology/atlantis/gitlab.yml
|
||||
enabled: true
|
||||
- name: jdownloader2
|
||||
stack_dir: jdownloader2
|
||||
compose_file: hosts/synology/atlantis/jdownloader2.yml
|
||||
enabled: true
|
||||
- name: piped
|
||||
stack_dir: piped
|
||||
compose_file: hosts/synology/atlantis/piped.yml
|
||||
enabled: true
|
||||
- name: syncthing
|
||||
stack_dir: syncthing
|
||||
compose_file: hosts/synology/atlantis/syncthing.yml
|
||||
enabled: true
|
||||
- name: dockpeek
|
||||
stack_dir: dockpeek
|
||||
compose_file: hosts/synology/atlantis/dockpeek.yml
|
||||
enabled: true
|
||||
- name: paperlessngx
|
||||
stack_dir: paperlessngx
|
||||
compose_file: hosts/synology/atlantis/paperlessngx.yml
|
||||
enabled: true
|
||||
- name: stirlingpdf
|
||||
stack_dir: stirlingpdf
|
||||
compose_file: hosts/synology/atlantis/stirlingpdf.yml
|
||||
enabled: true
|
||||
- name: pihole
|
||||
    stack_dir: pihole
    compose_file: hosts/synology/atlantis/pihole.yml
    enabled: true
  - name: joplin
    stack_dir: joplin
    compose_file: hosts/synology/atlantis/joplin.yml
    enabled: true
  - name: nginxproxymanager
    stack_dir: nginxproxymanager
    compose_file: hosts/synology/atlantis/nginxproxymanager/nginxproxymanager.yaml
    enabled: true
  - name: baikal
    stack_dir: baikal
    compose_file: hosts/synology/atlantis/baikal/baikal.yaml
    enabled: true
  - name: turnserver_docker_compose
    stack_dir: turnserver_docker_compose
    compose_file: hosts/synology/atlantis/matrix_synapse_docs/turnserver_docker_compose.yml
    enabled: true
  - name: whisparr
    stack_dir: whisparr
    compose_file: hosts/synology/atlantis/arr-suite/whisparr.yaml
    enabled: true
  - name: jellyseerr
    stack_dir: jellyseerr
    compose_file: hosts/synology/atlantis/arr-suite/jellyseerr.yaml
    enabled: true
  - name: sabnzbd
    stack_dir: sabnzbd
    compose_file: hosts/synology/atlantis/arr-suite/sabnzbd.yaml
    enabled: true
  - name: arrs_compose
    stack_dir: arrs_compose
    compose_file: hosts/synology/atlantis/arr-suite/docker-compose.yml
    enabled: true
  - name: wizarr
    stack_dir: wizarr
    compose_file: hosts/synology/atlantis/arr-suite/wizarr.yaml
    enabled: true
  - name: prowlarr_flaresolverr
    stack_dir: prowlarr_flaresolverr
    compose_file: hosts/synology/atlantis/arr-suite/prowlarr_flaresolverr.yaml
    enabled: true
  - name: plex
    stack_dir: plex
    compose_file: hosts/synology/atlantis/arr-suite/plex.yaml
    enabled: true
  - name: tautulli
    stack_dir: tautulli
    compose_file: hosts/synology/atlantis/arr-suite/tautulli.yaml
    enabled: true
  - name: homarr
    stack_dir: homarr
    compose_file: hosts/synology/atlantis/homarr/docker-compose.yaml
    enabled: true
  - name: atlantis_node_exporter
    stack_dir: atlantis_node_exporter
    compose_file: hosts/synology/atlantis/grafana_prometheus/atlantis_node_exporter.yaml
    enabled: true
  - name: monitoring_stack
    stack_dir: monitoring_stack
    compose_file: hosts/synology/atlantis/grafana_prometheus/monitoring-stack.yaml
    enabled: true
  - name: dozzle
    stack_dir: dozzle
    compose_file: hosts/synology/atlantis/dozzle/dozzle.yaml
    enabled: true
  - name: documenso
    stack_dir: documenso
    compose_file: hosts/synology/atlantis/documenso/documenso.yaml
    enabled: true
  - name: theme_park
    stack_dir: theme_park
    compose_file: hosts/synology/atlantis/theme-park/theme-park.yaml
    enabled: true
  - name: jitsi
    stack_dir: jitsi
    compose_file: hosts/synology/atlantis/jitsi/jitsi.yml
    enabled: true
    env_file: hosts/synology/atlantis/jitsi/.env
  - name: immich
    stack_dir: immich
    compose_file: hosts/synology/atlantis/immich/docker-compose.yml
    enabled: true
    env_file: hosts/synology/atlantis/immich/stack.env
  - name: ollama
    stack_dir: ollama
    compose_file: hosts/synology/atlantis/ollama/docker-compose.yml
    enabled: true
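A deploy playbook can drive these per-host manifests directly. The task below is an illustrative sketch only, not taken from this repo: the `repo_root` variable is an assumed path to the checked-out compose tree, and the use of `community.docker.docker_compose_v2` is an assumption about how the stacks are brought up.

```yaml
# Hypothetical task: bring up every enabled stack from host_services.
# `repo_root` is an assumed variable pointing at the compose-file checkout.
- name: Deploy enabled compose stacks
  community.docker.docker_compose_v2:
    project_src: "{{ repo_root }}/{{ item.compose_file | dirname }}"
    files: ["{{ item.compose_file | basename }}"]
    state: present
  loop: "{{ host_services | selectattr('enabled') | list }}"
  loop_control:
    label: "{{ item.name }}"
```

Because every host_vars file shares the same `name` / `stack_dir` / `compose_file` / `enabled` shape, one such loop can serve every host; entries with `env_file` would need an extra parameter passed through.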
45
docs/advanced/ansible/host_vars/bulgaria_vm.yml
Normal file
@@ -0,0 +1,45 @@
---
# Auto-generated host variables for bulgaria-vm
# Services deployed to this host

host_services:
  - name: mattermost
    stack_dir: mattermost
    compose_file: hosts/vms/bulgaria-vm/mattermost.yml
    enabled: true
  - name: nginx_proxy_manager
    stack_dir: nginx_proxy_manager
    compose_file: hosts/vms/bulgaria-vm/nginx_proxy_manager.yml
    enabled: true
  - name: navidrome
    stack_dir: navidrome
    compose_file: hosts/vms/bulgaria-vm/navidrome.yml
    enabled: true
  - name: invidious
    stack_dir: invidious
    compose_file: hosts/vms/bulgaria-vm/invidious.yml
    enabled: true
  - name: watchtower
    stack_dir: watchtower
    compose_file: hosts/vms/bulgaria-vm/watchtower.yml
    enabled: true
  - name: metube
    stack_dir: metube
    compose_file: hosts/vms/bulgaria-vm/metube.yml
    enabled: true
  - name: syncthing
    stack_dir: syncthing
    compose_file: hosts/vms/bulgaria-vm/syncthing.yml
    enabled: true
  - name: yourspotify
    stack_dir: yourspotify
    compose_file: hosts/vms/bulgaria-vm/yourspotify.yml
    enabled: true
  - name: fenrus
    stack_dir: fenrus
    compose_file: hosts/vms/bulgaria-vm/fenrus.yml
    enabled: true
  - name: rainloop
    stack_dir: rainloop
    compose_file: hosts/vms/bulgaria-vm/rainloop.yml
    enabled: true
103
docs/advanced/ansible/host_vars/calypso.yml
Normal file
@@ -0,0 +1,103 @@
---
# Auto-generated host variables for calypso
# Services deployed to this host

host_services:
  - name: adguard
    stack_dir: adguard
    compose_file: hosts/synology/calypso/adguard.yaml
    enabled: true
  - name: gitea_server
    stack_dir: gitea_server
    compose_file: hosts/synology/calypso/gitea-server.yaml
    enabled: true
  - name: headscale
    stack_dir: headscale
    compose_file: hosts/synology/calypso/headscale.yaml
    enabled: true
  - name: arr_suite_wip
    stack_dir: arr_suite_wip
    compose_file: hosts/synology/calypso/arr-suite-wip.yaml
    enabled: true
  - name: rustdesk
    stack_dir: rustdesk
    compose_file: hosts/synology/calypso/rustdesk.yaml
    enabled: true
  - name: seafile_server
    stack_dir: seafile_server
    compose_file: hosts/synology/calypso/seafile-server.yaml
    enabled: true
  - name: wireguard_server
    stack_dir: wireguard_server
    compose_file: hosts/synology/calypso/wireguard-server.yaml
    enabled: true
  - name: openspeedtest
    stack_dir: openspeedtest
    compose_file: hosts/synology/calypso/openspeedtest.yaml
    enabled: true
  - name: syncthing
    stack_dir: syncthing
    compose_file: hosts/synology/calypso/syncthing.yaml
    enabled: true
  - name: gitea_runner
    stack_dir: gitea_runner
    compose_file: hosts/synology/calypso/gitea-runner.yaml
    enabled: true
  - name: node_exporter
    stack_dir: node_exporter
    compose_file: hosts/synology/calypso/node-exporter.yaml
    enabled: true
  - name: rackula
    stack_dir: rackula
    compose_file: hosts/synology/calypso/rackula.yml
    enabled: true
  - name: arr_suite_with_dracula
    stack_dir: arr_suite_with_dracula
    compose_file: hosts/synology/calypso/arr_suite_with_dracula.yml
    enabled: true
  - name: actualbudget
    stack_dir: actualbudget
    compose_file: hosts/synology/calypso/actualbudget.yml
    enabled: true
  - name: iperf3
    stack_dir: iperf3
    compose_file: hosts/synology/calypso/iperf3.yml
    enabled: true
  - name: prometheus
    stack_dir: prometheus
    compose_file: hosts/synology/calypso/prometheus.yml
    enabled: true
  - name: firefly
    stack_dir: firefly
    compose_file: hosts/synology/calypso/firefly/firefly.yaml
    enabled: true
    env_file: hosts/synology/calypso/firefly/stack.env
  - name: tdarr-node
    stack_dir: tdarr-node
    compose_file: hosts/synology/calypso/tdarr-node/docker-compose.yaml
    enabled: true
  - name: authentik
    stack_dir: authentik
    compose_file: hosts/synology/calypso/authentik/docker-compose.yaml
    enabled: true
  - name: apt_cacher_ng
    stack_dir: apt_cacher_ng
    compose_file: hosts/synology/calypso/apt-cacher-ng/apt-cacher-ng.yml
    enabled: true
  - name: immich
    stack_dir: immich
    compose_file: hosts/synology/calypso/immich/docker-compose.yml
    enabled: true
    env_file: hosts/synology/calypso/immich/stack.env
  - name: reactive_resume_v4
    stack_dir: reactive_resume_v4
    compose_file: hosts/synology/calypso/reactive_resume_v4/docker-compose.yml
    enabled: true
  - name: paperless_ai
    stack_dir: paperless_ai
    compose_file: hosts/synology/calypso/paperless/paperless-ai.yml
    enabled: true
  - name: paperless
    stack_dir: paperless
    compose_file: hosts/synology/calypso/paperless/docker-compose.yml
    enabled: true
33
docs/advanced/ansible/host_vars/chicago_vm.yml
Normal file
@@ -0,0 +1,33 @@
---
# Auto-generated host variables for chicago-vm
# Services deployed to this host

host_services:
  - name: watchtower
    stack_dir: watchtower
    compose_file: hosts/vms/chicago-vm/watchtower.yml
    enabled: true
  - name: matrix
    stack_dir: matrix
    compose_file: hosts/vms/chicago-vm/matrix.yml
    enabled: true
  - name: gitlab
    stack_dir: gitlab
    compose_file: hosts/vms/chicago-vm/gitlab.yml
    enabled: true
  - name: jdownloader2
    stack_dir: jdownloader2
    compose_file: hosts/vms/chicago-vm/jdownloader2.yml
    enabled: true
  - name: proxitok
    stack_dir: proxitok
    compose_file: hosts/vms/chicago-vm/proxitok.yml
    enabled: true
  - name: jellyfin
    stack_dir: jellyfin
    compose_file: hosts/vms/chicago-vm/jellyfin.yml
    enabled: true
  - name: neko
    stack_dir: neko
    compose_file: hosts/vms/chicago-vm/neko.yml
    enabled: true
49
docs/advanced/ansible/host_vars/concord_nuc.yml
Normal file
@@ -0,0 +1,49 @@
---
# Auto-generated host variables for concord-nuc
# Services deployed to this host

host_services:
  - name: adguard
    stack_dir: adguard
    compose_file: hosts/physical/concord-nuc/adguard.yaml
    enabled: true
  - name: yourspotify
    stack_dir: yourspotify
    compose_file: hosts/physical/concord-nuc/yourspotify.yaml
    enabled: true
  - name: wireguard
    stack_dir: wireguard
    compose_file: hosts/physical/concord-nuc/wireguard.yaml
    enabled: true
  - name: piped
    stack_dir: piped
    compose_file: hosts/physical/concord-nuc/piped.yaml
    enabled: true
  - name: syncthing
    stack_dir: syncthing
    compose_file: hosts/physical/concord-nuc/syncthing.yaml
    enabled: true
  - name: dyndns_updater
    stack_dir: dyndns_updater
    compose_file: hosts/physical/concord-nuc/dyndns_updater.yaml
    enabled: true
  - name: homeassistant
    stack_dir: homeassistant
    compose_file: hosts/physical/concord-nuc/homeassistant.yaml
    enabled: true
  - name: plex
    stack_dir: plex
    compose_file: hosts/physical/concord-nuc/plex.yaml
    enabled: true
  - name: node_exporter
    stack_dir: node_exporter
    compose_file: hosts/physical/concord-nuc/node-exporter.yaml
    enabled: true
  - name: invidious
    stack_dir: invidious
    compose_file: hosts/physical/concord-nuc/invidious/invidious.yaml
    enabled: true
  - name: invidious
    stack_dir: invidious
    compose_file: hosts/physical/concord-nuc/invidious/invidious_old/invidious.yaml
    enabled: true
9
docs/advanced/ansible/host_vars/contabo_vm.yml
Normal file
@@ -0,0 +1,9 @@
---
# Auto-generated host variables for contabo-vm
# Services deployed to this host

host_services:
  - name: ollama
    stack_dir: ollama
    compose_file: hosts/vms/contabo-vm/ollama/docker-compose.yml
    enabled: true
9
docs/advanced/ansible/host_vars/guava.yml
Normal file
@@ -0,0 +1,9 @@
---
# Auto-generated host variables for guava
# Services deployed to this host

host_services:
  - name: tdarr-node
    stack_dir: tdarr-node
    compose_file: hosts/truenas/guava/tdarr-node/docker-compose.yaml
    enabled: true
6
docs/advanced/ansible/host_vars/homelab.yml
Normal file
@@ -0,0 +1,6 @@
ansible_user: homelab
ansible_become: true

tailscale_bin: /usr/bin/tailscale
tailscale_manage_service: true
tailscale_manage_install: true
137
docs/advanced/ansible/host_vars/homelab_vm.yml
Normal file
@@ -0,0 +1,137 @@
---
# Auto-generated host variables for homelab-vm
# Services deployed to this host

host_services:
  - name: binternet
    stack_dir: binternet
    compose_file: hosts/vms/homelab-vm/binternet.yaml
    enabled: true
  - name: gitea_ntfy_bridge
    stack_dir: gitea_ntfy_bridge
    compose_file: hosts/vms/homelab-vm/gitea-ntfy-bridge.yaml
    enabled: true
  - name: alerting
    stack_dir: alerting
    compose_file: hosts/vms/homelab-vm/alerting.yaml
    enabled: true
  - name: libreddit
    stack_dir: libreddit
    compose_file: hosts/vms/homelab-vm/libreddit.yaml
    enabled: true
  - name: roundcube
    stack_dir: roundcube
    compose_file: hosts/vms/homelab-vm/roundcube.yaml
    enabled: true
  - name: ntfy
    stack_dir: ntfy
    compose_file: hosts/vms/homelab-vm/ntfy.yaml
    enabled: true
  - name: watchyourlan
    stack_dir: watchyourlan
    compose_file: hosts/vms/homelab-vm/watchyourlan.yaml
    enabled: true
  - name: l4d2_docker
    stack_dir: l4d2_docker
    compose_file: hosts/vms/homelab-vm/l4d2_docker.yaml
    enabled: true
  - name: proxitok
    stack_dir: proxitok
    compose_file: hosts/vms/homelab-vm/proxitok.yaml
    enabled: true
  - name: redlib
    stack_dir: redlib
    compose_file: hosts/vms/homelab-vm/redlib.yaml
    enabled: true
  - name: hoarder
    stack_dir: hoarder
    compose_file: hosts/vms/homelab-vm/hoarder.yaml
    enabled: true
  - name: roundcube_protonmail
    stack_dir: roundcube_protonmail
    compose_file: hosts/vms/homelab-vm/roundcube_protonmail.yaml
    enabled: true
  - name: perplexica
    stack_dir: perplexica
    compose_file: hosts/vms/homelab-vm/perplexica.yaml
    enabled: true
  - name: webcheck
    stack_dir: webcheck
    compose_file: hosts/vms/homelab-vm/webcheck.yaml
    enabled: true
  - name: archivebox
    stack_dir: archivebox
    compose_file: hosts/vms/homelab-vm/archivebox.yaml
    enabled: true
  - name: openhands
    stack_dir: openhands
    compose_file: hosts/vms/homelab-vm/openhands.yaml
    enabled: true
  - name: dashdot
    stack_dir: dashdot
    compose_file: hosts/vms/homelab-vm/dashdot.yaml
    enabled: true
  - name: satisfactory
    stack_dir: satisfactory
    compose_file: hosts/vms/homelab-vm/satisfactory.yaml
    enabled: true
  - name: paperminecraft
    stack_dir: paperminecraft
    compose_file: hosts/vms/homelab-vm/paperminecraft.yaml
    enabled: true
  - name: signal_api
    stack_dir: signal_api
    compose_file: hosts/vms/homelab-vm/signal_api.yaml
    enabled: true
  - name: cloudflare_tunnel
    stack_dir: cloudflare_tunnel
    compose_file: hosts/vms/homelab-vm/cloudflare-tunnel.yaml
    enabled: true
  - name: monitoring
    stack_dir: monitoring
    compose_file: hosts/vms/homelab-vm/monitoring.yaml
    enabled: true
  - name: drawio
    stack_dir: drawio
    compose_file: hosts/vms/homelab-vm/drawio.yml
    enabled: true
  - name: mattermost
    stack_dir: mattermost
    compose_file: hosts/vms/homelab-vm/mattermost.yml
    enabled: true
  - name: openproject
    stack_dir: openproject
    compose_file: hosts/vms/homelab-vm/openproject.yml
    enabled: true
  - name: ddns
    stack_dir: ddns
    compose_file: hosts/vms/homelab-vm/ddns.yml
    enabled: true
  - name: podgrab
    stack_dir: podgrab
    compose_file: hosts/vms/homelab-vm/podgrab.yml
    enabled: true
  - name: webcord
    stack_dir: webcord
    compose_file: hosts/vms/homelab-vm/webcord.yml
    enabled: true
  - name: syncthing
    stack_dir: syncthing
    compose_file: hosts/vms/homelab-vm/syncthing.yml
    enabled: true
  - name: shlink
    stack_dir: shlink
    compose_file: hosts/vms/homelab-vm/shlink.yml
    enabled: true
  - name: gotify
    stack_dir: gotify
    compose_file: hosts/vms/homelab-vm/gotify.yml
    enabled: true
  - name: node_exporter
    stack_dir: node_exporter
    compose_file: hosts/vms/homelab-vm/node-exporter.yml
    enabled: true
  - name: romm
    stack_dir: romm
    compose_file: hosts/vms/homelab-vm/romm/romm.yaml
    enabled: true
9
docs/advanced/ansible/host_vars/lxc.yml
Normal file
@@ -0,0 +1,9 @@
---
# Auto-generated host variables for lxc
# Services deployed to this host

host_services:
  - name: tdarr-node
    stack_dir: tdarr-node
    compose_file: hosts/proxmox/lxc/tdarr-node/docker-compose.yaml
    enabled: true
13
docs/advanced/ansible/host_vars/matrix_ubuntu_vm.yml
Normal file
@@ -0,0 +1,13 @@
---
# Auto-generated host variables for matrix-ubuntu-vm
# Services deployed to this host

host_services:
  - name: mattermost
    stack_dir: mattermost
    compose_file: hosts/vms/matrix-ubuntu-vm/mattermost/docker-compose.yml
    enabled: true
  - name: mastodon
    stack_dir: mastodon
    compose_file: hosts/vms/matrix-ubuntu-vm/mastodon/docker-compose.yml
    enabled: true
17
docs/advanced/ansible/host_vars/rpi5_vish.yml
Normal file
@@ -0,0 +1,17 @@
---
# Auto-generated host variables for rpi5-vish
# Services deployed to this host

host_services:
  - name: uptime_kuma
    stack_dir: uptime_kuma
    compose_file: hosts/edge/rpi5-vish/uptime-kuma.yaml
    enabled: true
  - name: glances
    stack_dir: glances
    compose_file: hosts/edge/rpi5-vish/glances.yaml
    enabled: true
  - name: immich
    stack_dir: immich
    compose_file: hosts/edge/rpi5-vish/immich/docker-compose.yml
    enabled: true
13
docs/advanced/ansible/host_vars/setillo.yml
Normal file
@@ -0,0 +1,13 @@
---
# Auto-generated host variables for setillo
# Services deployed to this host

host_services:
  - name: compose
    stack_dir: compose
    compose_file: hosts/synology/setillo/prometheus/compose.yaml
    enabled: true
  - name: adguard_stack
    stack_dir: adguard_stack
    compose_file: hosts/synology/setillo/adguard/adguard-stack.yaml
    enabled: true
8
docs/advanced/ansible/host_vars/truenas-scale.yml
Normal file
@@ -0,0 +1,8 @@
ansible_user: vish
ansible_become: true

tailscale_bin: /usr/bin/tailscale
tailscale_manage_service: true
tailscale_manage_install: true
# If you ever see interpreter errors, uncomment:
# ansible_python_interpreter: /usr/local/bin/python3
75
docs/advanced/ansible/hosts
Normal file
@@ -0,0 +1,75 @@
# ================================
# Vish's Homelab Ansible Inventory
# Tailnet-connected via Tailscale
# ================================

# --- Core Management Node ---
[homelab]
homelab ansible_host=100.67.40.126 ansible_user=homelab

# --- Synology NAS Cluster ---
[synology]
atlantis ansible_host=100.83.230.112 ansible_port=60000 ansible_user=vish
calypso ansible_host=100.103.48.78 ansible_port=62000 ansible_user=Vish
setillo ansible_host=100.125.0.20 ansible_user=vish  # default SSH port 22

# --- Raspberry Pi Nodes ---
[rpi]
pi-5 ansible_host=100.77.151.40 ansible_user=vish
pi-5-kevin ansible_host=100.123.246.75 ansible_user=vish

# --- Hypervisors / Storage ---
[hypervisors]
pve ansible_host=100.87.12.28 ansible_user=root
truenas-scale ansible_host=100.75.252.64 ansible_user=vish
homeassistant ansible_host=100.112.186.90 ansible_user=hassio

# --- Remote Systems ---
[remote]
vish-concord-nuc ansible_host=100.72.55.21 ansible_user=vish
vmi2076105 ansible_host=100.99.156.20 ansible_user=root  # Contabo VM

# --- Offline / Semi-Active Nodes ---
[linux_offline]
moon ansible_host=100.86.130.123 ansible_user=vish
vishdebian ansible_host=100.86.60.62 ansible_user=vish
vish-mint ansible_host=100.115.169.43 ansible_user=vish
unraidtest ansible_host=100.69.105.115 ansible_user=root
truenas-test-vish ansible_host=100.115.110.105 ansible_user=root
sd ansible_host=100.83.141.1 ansible_user=root

# --- Miscellaneous / IoT / Windows ---
[other]
gl-be3600 ansible_host=100.105.59.123 ansible_user=root
gl-mt3000 ansible_host=100.126.243.15 ansible_user=root
glkvm ansible_host=100.64.137.1 ansible_user=root
shinku-ryuu ansible_host=100.98.93.15 ansible_user=Administrator
nvidia-shield-android-tv ansible_host=100.89.79.99
iphone16 ansible_host=100.79.252.108
ipad-pro-12-9-6th-gen-wificellular ansible_host=100.68.71.48
mah-pc ansible_host=100.121.22.51 ansible_user=Administrator

# --- Debian / Ubuntu Clients using Calypso's APT Cache ---
[debian_clients]
homelab
pi-5
pi-5-kevin
vish-concord-nuc
pve
vmi2076105
homeassistant
truenas-scale

# --- Active Group (used by most playbooks) ---
[active:children]
homelab
synology
rpi
hypervisors
remote
debian_clients

# --- Global Variables ---
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
ansible_python_interpreter=/usr/bin/python3
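A quick way to exercise an inventory like this is a minimal reachability play. The play below is an illustrative sketch, not part of the repo; the file name `ping.yml` is hypothetical.

```yaml
---
# Hypothetical smoke test: confirm SSH and Python work on every host
# in the `active` group before running the real playbooks.
- name: Verify reachability across the tailnet
  hosts: active
  gather_facts: false
  tasks:
    - name: Ping each host
      ansible.builtin.ping:
```

It could be run against either inventory file, narrowed with `--limit` (for example `ansible-playbook -i hosts ping.yml --limit debian_clients`), to separate genuinely offline nodes from hosts with SSH or interpreter problems.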
61
docs/advanced/ansible/hosts.ini
Normal file
@@ -0,0 +1,61 @@
# ================================
# Vish's Homelab Ansible Inventory
# Tailnet-connected via Tailscale
# Updated: February 8, 2026
# ================================

# --- Core Management Node ---
[homelab]
homelab ansible_host=100.67.40.126 ansible_user=homelab

# --- Synology NAS Cluster ---
[synology]
atlantis ansible_host=100.83.230.112 ansible_port=60000 ansible_user=vish
calypso ansible_host=100.103.48.78 ansible_port=62000 ansible_user=Vish
setillo ansible_host=100.125.0.20 ansible_user=vish

# --- Raspberry Pi Nodes ---
[rpi]
pi-5 ansible_host=100.77.151.40 ansible_user=vish
pi-5-kevin ansible_host=100.123.246.75 ansible_user=vish

# --- Hypervisors / Storage ---
[hypervisors]
pve ansible_host=100.87.12.28 ansible_user=root
truenas-scale ansible_host=100.75.252.64 ansible_user=vish
homeassistant ansible_host=100.112.186.90 ansible_user=hassio

# --- Remote Systems ---
[remote]
vish-concord-nuc ansible_host=100.72.55.21 ansible_user=vish

# --- Debian / Ubuntu Clients using Calypso's APT Cache ---
[debian_clients]
homelab
pi-5
pi-5-kevin
vish-concord-nuc
pve
homeassistant
truenas-scale

# --- Legacy Group (for backward compatibility) ---
[homelab_linux:children]
homelab
synology
rpi
hypervisors
remote

# --- Active Group (used by most playbooks) ---
[active:children]
homelab
synology
rpi
hypervisors
remote

# --- Global Variables ---
[all:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null'
ansible_python_interpreter=/usr/bin/python3
116
docs/advanced/ansible/inventory.yml
Normal file
@@ -0,0 +1,116 @@
---
# Homelab Ansible Inventory
# All hosts are accessible via Tailscale IPs

all:
  vars:
    ansible_python_interpreter: /usr/bin/python3
    docker_compose_version: "2"

  children:
    # Synology NAS devices
    synology:
      vars:
        docker_data_path: /volume1/docker
        ansible_become: false
        docker_socket: /var/run/docker.sock
      hosts:
        atlantis:
          ansible_host: 100.83.230.112
          ansible_user: vish
          ansible_port: 60000
          hostname: atlantis.vish.local
          description: "Primary NAS - Synology DS1823xs+"

        calypso:
          ansible_host: 100.103.48.78
          ansible_user: vish
          ansible_port: 62000
          hostname: calypso.vish.local
          description: "Secondary NAS - Synology DS920+"

        setillo:
          ansible_host: 100.125.0.20
          ansible_user: vish
          ansible_port: 22
          hostname: setillo.vish.local
          description: "Remote NAS - Synology"

    # Physical servers
    physical:
      vars:
        docker_data_path: /opt/docker
        ansible_become: true
      hosts:
        guava:
          ansible_host: 100.75.252.64
          ansible_user: vish
          hostname: guava.vish.local
          description: "TrueNAS Scale Server"
          docker_data_path: /mnt/pool/docker

        concord_nuc:
          ansible_host: 100.67.40.126
          ansible_user: homelab
          hostname: concord-nuc.vish.local
          description: "Intel NUC"

        anubis:
          ansible_host: 100.100.100.100  # Update with actual IP
          ansible_user: vish
          hostname: anubis.vish.local
          description: "Physical server"

    # Virtual machines
    vms:
      vars:
        docker_data_path: /opt/docker
        ansible_become: true
      hosts:
        homelab_vm:
          ansible_host: 100.67.40.126
          ansible_user: homelab
          hostname: homelab-vm.vish.local
          description: "Primary VM"

        chicago_vm:
          ansible_host: 100.100.100.101  # Update with actual IP
          ansible_user: vish
          hostname: chicago-vm.vish.local
          description: "Chicago VPS"

        bulgaria_vm:
          ansible_host: 100.100.100.102  # Update with actual IP
          ansible_user: vish
          hostname: bulgaria-vm.vish.local
          description: "Bulgaria VPS"

        contabo_vm:
          ansible_host: 100.100.100.103  # Update with actual IP
          ansible_user: vish
          hostname: contabo-vm.vish.local
          description: "Contabo VPS"

    # Edge devices
    edge:
      vars:
        docker_data_path: /opt/docker
        ansible_become: true
      hosts:
        rpi5_vish:
          ansible_host: 100.100.100.104  # Update with actual IP
          ansible_user: vish
          hostname: rpi5-vish.vish.local
          description: "Raspberry Pi 5"

    # Proxmox LXC containers
    proxmox_lxc:
      vars:
        docker_data_path: /opt/docker
        ansible_become: true
      hosts:
        tdarr_node:
          ansible_host: 100.100.100.105  # Update with actual IP
          ansible_user: root
          hostname: tdarr-node.vish.local
          description: "Tdarr transcoding node"
39
docs/advanced/ansible/playbooks/add_ssh_keys.yml
Normal file
@@ -0,0 +1,39 @@
---
- name: Ensure homelab's SSH key is present on all reachable hosts
  hosts: all
  gather_facts: false
  become: true

  vars:
    ssh_pub_key: "{{ lookup('file', '/home/homelab/.ssh/id_ed25519.pub') }}"
    ssh_user: "{{ ansible_user | default('vish') }}"
    ssh_port: "{{ ansible_port | default(22) }}"

  tasks:
    - name: Check if SSH is reachable
      wait_for:
        host: "{{ inventory_hostname }}"
        port: "{{ ssh_port }}"
        timeout: 8
        state: started
      delegate_to: localhost
      ignore_errors: true
      register: ssh_port_check

    - name: Add SSH key for user
      authorized_key:
        user: "{{ ssh_user }}"
        key: "{{ ssh_pub_key }}"
        state: present
      when: not ssh_port_check is failed
      ignore_unreachable: true

    - name: Report hosts where SSH key was added
      debug:
        msg: "SSH key added successfully to {{ inventory_hostname }}"
      when: not ssh_port_check is failed

    - name: Report hosts where SSH was unreachable
      debug:
        msg: "Skipped {{ inventory_hostname }} (SSH not reachable)"
      when: ssh_port_check is failed
127
docs/advanced/ansible/playbooks/ansible_status_check.yml
Normal file
@@ -0,0 +1,127 @@
---
# Check Ansible status across all reachable hosts
# Simple status check and upgrade where possible
# Created: February 8, 2026

- name: Check Ansible status on all reachable hosts
  hosts: homelab,pi-5,vish-concord-nuc,pve
  gather_facts: yes
  become: yes
  ignore_errors: yes

  tasks:
    - name: Display host information
      debug:
        msg: |
          === {{ inventory_hostname | upper }} ===
          IP: {{ ansible_host }}
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
          Architecture: {{ ansible_architecture }}

    - name: Check if Ansible is installed
      command: ansible --version
      register: ansible_check
      changed_when: false
      failed_when: false

    - name: Display Ansible status
      debug:
        msg: |
          Ansible on {{ inventory_hostname }}:
          {% if ansible_check.rc == 0 %}
          ✅ INSTALLED: {{ ansible_check.stdout_lines[0] }}
          {% else %}
          ❌ NOT INSTALLED
          {% endif %}

    - name: Check if apt is available (Debian/Ubuntu only)
      stat:
        path: /usr/bin/apt
      register: has_apt

    - name: Try to install/upgrade Ansible (Debian/Ubuntu only)
      block:
        - name: Update package cache (ignore GPG errors)
          apt:
            update_cache: yes
            cache_valid_time: 0
          register: apt_update
          failed_when: false

        - name: Install/upgrade Ansible
          apt:
            name: ansible
            state: latest
          register: ansible_install
          when: apt_update is not failed

        - name: Display installation result
          debug:
            msg: |
              Ansible installation on {{ inventory_hostname }}:
              {% if ansible_install is succeeded %}
              {% if ansible_install.changed %}
              ✅ {{ 'INSTALLED' if ansible_check.rc != 0 else 'UPGRADED' }} successfully
              {% else %}
              ℹ️ Already at latest version
              {% endif %}
              {% elif apt_update is failed %}
              ⚠️ APT update failed - using cached packages
              {% else %}
              ❌ Installation failed
              {% endif %}

      when: has_apt.stat.exists
      rescue:
        - name: Installation failed
          debug:
            msg: "❌ Failed to install/upgrade Ansible on {{ inventory_hostname }}"

    - name: Final Ansible version check
      command: ansible --version
      register: final_ansible_check
      changed_when: false
      failed_when: false

    - name: Final status summary
      debug:
        msg: |
          === FINAL STATUS: {{ inventory_hostname | upper }} ===
          {% if final_ansible_check.rc == 0 %}
          ✅ Ansible: {{ final_ansible_check.stdout_lines[0] }}
          {% else %}
          ❌ Ansible: Not available
          {% endif %}
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
          APT Available: {{ '✅ Yes' if has_apt.stat.exists else '❌ No' }}

- name: Summary Report
  hosts: localhost
  gather_facts: yes  # facts are needed below for ansible_date_time
  run_once: true

  tasks:
    - name: Display overall summary
      debug:
        msg: |

          ========================================
          ANSIBLE UPDATE SUMMARY - {{ ansible_date_time.date }}
          ========================================

          Processed hosts:
          - homelab (100.67.40.126)
          - pi-5 (100.77.151.40)
          - vish-concord-nuc (100.72.55.21)
          - pve (100.87.12.28)

          Excluded hosts:
          - Synology devices (atlantis, calypso, setillo) - Use DSM package manager
          - homeassistant - Uses Home Assistant OS package management
          - truenas-scale - Uses TrueNAS package management
          - pi-5-kevin - Currently unreachable

          ✅ homelab: Already has Ansible 2.16.3 (latest)
          📋 Check individual host results above for details

          ========================================
193
docs/advanced/ansible/playbooks/check_apt_proxy.yml
Normal file
@@ -0,0 +1,193 @@
---
- name: Check APT Proxy Configuration on Debian/Ubuntu hosts
  hosts: debian_clients
  become: no
  gather_facts: yes

  vars:
    expected_proxy_host: 100.103.48.78  # calypso
    expected_proxy_port: 3142
    apt_proxy_file: /etc/apt/apt.conf.d/01proxy
    expected_proxy_url: "http://{{ expected_proxy_host }}:{{ expected_proxy_port }}/"

  tasks:
    # ---------- System Detection ----------
    - name: Detect OS family
      ansible.builtin.debug:
        msg: "Host {{ inventory_hostname }} is running {{ ansible_os_family }} {{ ansible_distribution }} {{ ansible_distribution_version }}"

    - name: Skip non-Debian systems
      ansible.builtin.meta: end_host
      when: ansible_os_family != "Debian"

    # ---------- APT Proxy Configuration Check ----------
    - name: Check if APT proxy config file exists
      ansible.builtin.stat:
        path: "{{ apt_proxy_file }}"
      register: proxy_file_stat

    - name: Read APT proxy configuration (if it exists)
      ansible.builtin.slurp:
        src: "{{ apt_proxy_file }}"
      register: proxy_config_content
      when: proxy_file_stat.stat.exists
      failed_when: false

    - name: Parse proxy configuration
      ansible.builtin.set_fact:
        proxy_config_decoded: "{{ proxy_config_content.content | b64decode }}"
      # Check .content specifically: the register is always defined, but the
      # slurp may have been skipped or failed.
      when: proxy_file_stat.stat.exists and proxy_config_content.content is defined

    # ---------- Network Connectivity Test ----------
    - name: Test connectivity to expected proxy server
      ansible.builtin.uri:
        url: "http://{{ expected_proxy_host }}:{{ expected_proxy_port }}/"
        method: HEAD
        timeout: 10
      register: proxy_connectivity
      failed_when: false
      changed_when: false

    # ---------- APT Configuration Analysis ----------
    - name: Check current APT proxy settings via apt-config
      ansible.builtin.command: apt-config dump Acquire::http::Proxy
      register: apt_config_proxy
      changed_when: false
      failed_when: false
      become: yes

    - name: Test APT update with current configuration (dry run)
      ansible.builtin.command: apt-get update --print-uris --dry-run
      register: apt_update_test
      changed_when: false
      failed_when: false
      become: yes

    # ---------- Analysis and Reporting ----------
    - name: Analyze proxy configuration status
      ansible.builtin.set_fact:
        proxy_status:
          file_exists: "{{ proxy_file_stat.stat.exists }}"
          file_content: "{{ proxy_config_decoded | default('N/A') }}"
          expected_config: "Acquire::http::Proxy \"{{ expected_proxy_url }}\";"
          # apt-cacher-ng may answer its root URL with 406, which still proves reachability
          proxy_reachable: "{{ proxy_connectivity.status is defined and (proxy_connectivity.status == 200 or proxy_connectivity.status == 406) }}"
          apt_config_output: "{{ apt_config_proxy.stdout | default('N/A') }}"
          using_expected_proxy: "{{ (proxy_config_decoded | default('')) is search(expected_proxy_host) }}"

    # ---------- Health Assertions ----------
    - name: Assert APT proxy is properly configured
      ansible.builtin.assert:
        that:
          - proxy_status.file_exists
          - proxy_status.using_expected_proxy
          - proxy_status.proxy_reachable
        success_msg: "✅ {{ inventory_hostname }} is correctly using APT proxy {{ expected_proxy_host }}:{{ expected_proxy_port }}"
        fail_msg: "❌ {{ inventory_hostname }} APT proxy configuration issues detected"
      failed_when: false
      register: proxy_assertion

    # ---------- Detailed Summary ----------
    - name: Display comprehensive proxy status
      ansible.builtin.debug:
        msg: |
          🔍 APT Proxy Status for {{ inventory_hostname }}:
          ================================================
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}

          📁 Configuration File:
            Path: {{ apt_proxy_file }}
            Exists: {{ proxy_status.file_exists }}
            Content: {{ proxy_status.file_content | regex_replace('\n', ' ') }}

          🎯 Expected Configuration:
            {{ proxy_status.expected_config }}

          🌐 Network Connectivity:
            Proxy Server: {{ expected_proxy_host }}:{{ expected_proxy_port }}
            Reachable: {{ proxy_status.proxy_reachable }}
            Response: {{ proxy_connectivity.status | default('N/A') }}

          ⚙️ Current APT Config:
            {{ proxy_status.apt_config_output }}

          ✅ Status: {{ 'CONFIGURED' if proxy_status.using_expected_proxy else 'NOT CONFIGURED' }}
          🔗 Connectivity: {{ 'OK' if proxy_status.proxy_reachable else 'FAILED' }}

          {% if not proxy_assertion.failed %}
          🎉 Result: APT proxy is working correctly!
          {% else %}
          ⚠️ Result: APT proxy needs attention
          {% endif %}

    # ---------- Recommendations ----------
    - name: Provide configuration recommendations
      ansible.builtin.debug:
        msg: |
          💡 Recommendations for {{ inventory_hostname }}:
          {% if not proxy_status.file_exists %}
          - Create APT proxy config: echo 'Acquire::http::Proxy "{{ expected_proxy_url }}";' | sudo tee {{ apt_proxy_file }}
          {% endif %}
          {% if not proxy_status.proxy_reachable %}
          - Check network connectivity to {{ expected_proxy_host }}:{{ expected_proxy_port }}
          - Verify the calypso apt-cacher-ng service is running
          {% endif %}
          {% if proxy_status.file_exists and not proxy_status.using_expected_proxy %}
          - Update the proxy configuration to use {{ expected_proxy_url }}
          {% endif %}
      when: proxy_assertion.failed

    # ---------- Summary Statistics ----------
    - name: Record results for summary
      ansible.builtin.set_fact:
        host_proxy_result:
          hostname: "{{ inventory_hostname }}"
          configured: "{{ proxy_status.using_expected_proxy }}"
          reachable: "{{ proxy_status.proxy_reachable }}"
          status: "{{ 'OK' if (proxy_status.using_expected_proxy and proxy_status.proxy_reachable) else 'NEEDS_ATTENTION' }}"

# ---------- Final Summary Report ----------
- name: APT Proxy Summary Report
  hosts: localhost
  gather_facts: no
  run_once: true

  vars:
    expected_proxy_host: 100.103.48.78  # calypso
    expected_proxy_port: 3142

  tasks:
    - name: Collect all host results
      ansible.builtin.set_fact:
        all_results: "{{ groups['debian_clients'] | map('extract', hostvars) | selectattr('host_proxy_result', 'defined') | map(attribute='host_proxy_result') | list }}"
      when: groups['debian_clients'] is defined

    - name: Generate summary statistics
      ansible.builtin.set_fact:
        summary_stats:
          total_hosts: "{{ all_results | length }}"
          configured_hosts: "{{ all_results | selectattr('configured', 'equalto', true) | list | length }}"
          reachable_hosts: "{{ all_results | selectattr('reachable', 'equalto', true) | list | length }}"
          healthy_hosts: "{{ all_results | selectattr('status', 'equalto', 'OK') | list | length }}"
      when: all_results is defined

    - name: Display final summary
      ansible.builtin.debug:
        msg: |
          📊 APT PROXY HEALTH SUMMARY
          ===========================
          Total Debian Clients: {{ summary_stats.total_hosts | default(0) }}
          Properly Configured: {{ summary_stats.configured_hosts | default(0) }}
          Proxy Reachable: {{ summary_stats.reachable_hosts | default(0) }}
          Fully Healthy: {{ summary_stats.healthy_hosts | default(0) }}

          🎯 Target Proxy: calypso ({{ expected_proxy_host }}:{{ expected_proxy_port }})

          {% if summary_stats.healthy_hosts | default(0) == summary_stats.total_hosts | default(0) %}
          🎉 ALL SYSTEMS OPTIMAL - APT proxy working perfectly across all clients!
          {% else %}
          ⚠️ Some systems need attention - check individual host reports above
          {% endif %}
      when: summary_stats is defined
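The play above targets a `debian_clients` inventory group that is assumed to exist elsewhere in the repository. A minimal sketch of such a group, using the Tailscale addresses from the status report; the file layout and variable names are assumptions, not taken from this commit:

```yaml
# inventory/hosts.yml (hypothetical layout)
debian_clients:
  hosts:
    homelab:
      ansible_host: 100.67.40.126
    pi-5:
      ansible_host: 100.77.151.40
    vish-concord-nuc:
      ansible_host: 100.72.55.21
    pve:
      ansible_host: 100.87.12.28
```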
26
docs/advanced/ansible/playbooks/cleanup.yml
Normal file
@@ -0,0 +1,26 @@
---
- name: Clean up unused packages and temporary files
  hosts: all
  become: true
  tasks:
    - name: Autoremove unused packages
      apt:
        autoremove: yes
      when: ansible_os_family == "Debian"

    - name: Clean apt cache
      apt:
        autoclean: yes
      when: ansible_os_family == "Debian"

    # Caution: removing /tmp on a live system can break services that hold
    # open files there; the directory is recreated immediately below.
    - name: Clear temporary files
      file:
        path: /tmp
        state: absent
      ignore_errors: true

    - name: Recreate /tmp directory
      file:
        path: /tmp
        state: directory
        mode: '1777'
48
docs/advanced/ansible/playbooks/common/backup_configs.yml
Normal file
@@ -0,0 +1,48 @@
---
# Backup all docker-compose configs and data
- name: Backup Docker configurations
  hosts: "{{ target_host | default('all') }}"
  gather_facts: true

  vars:
    backup_dest: "{{ backup_path | default('/backup') }}"
    backup_timestamp: "{{ ansible_date_time.date }}_{{ ansible_date_time.hour }}{{ ansible_date_time.minute }}"

  tasks:
    - name: Create backup directory
      ansible.builtin.file:
        path: "{{ backup_dest }}/{{ inventory_hostname }}"
        state: directory
        mode: '0755'
      become: "{{ ansible_become | default(false) }}"
      delegate_to: localhost

    - name: Find all docker-compose files
      ansible.builtin.find:
        paths: "{{ docker_data_path }}"
        patterns: "docker-compose.yml,docker-compose.yaml,.env"
        recurse: true
      register: compose_files

    - name: Archive docker configs
      ansible.builtin.archive:
        path: "{{ docker_data_path }}"
        dest: "/tmp/{{ inventory_hostname }}_configs_{{ backup_timestamp }}.tar.gz"
        format: gz
        exclude_path:
          - "*/data/*"
          - "*/logs/*"
          - "*/cache/*"
      become: "{{ ansible_become | default(false) }}"

    - name: Fetch backup to control node
      ansible.builtin.fetch:
        src: "/tmp/{{ inventory_hostname }}_configs_{{ backup_timestamp }}.tar.gz"
        dest: "{{ backup_dest }}/{{ inventory_hostname }}/"
        flat: true

    - name: Clean up remote archive
      ansible.builtin.file:
        path: "/tmp/{{ inventory_hostname }}_configs_{{ backup_timestamp }}.tar.gz"
        state: absent
      become: "{{ ansible_become | default(false) }}"
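The backup play relies on `docker_data_path` and `ansible_become` being defined elsewhere, most likely in group or host variables. A hedged example of what those definitions could look like; the path values here are illustrative only, not the repository's actual configuration:

```yaml
# group_vars/all.yml (illustrative values only)
docker_data_path: /volume1/docker   # actual path varies per host category
ansible_become: true
backup_path: /backup                # overrides the '/backup' default above
```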
55
docs/advanced/ansible/playbooks/common/install_docker.yml
Normal file
@@ -0,0 +1,55 @@
---
# Install Docker on a host (for non-Synology systems)
- name: Install Docker
  hosts: "{{ target_host | default('all:!synology') }}"
  become: true
  gather_facts: true

  tasks:
    - name: Install prerequisites
      ansible.builtin.apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release
          - python3-pip
        state: present
        update_cache: true
      when: ansible_os_family == "Debian"

    - name: Add Docker GPG key
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg
        state: present
      when: ansible_os_family == "Debian"

    - name: Add Docker repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/{{ ansible_distribution | lower }} {{ ansible_distribution_release }} stable"
        state: present
      when: ansible_os_family == "Debian"

    - name: Install Docker
      ansible.builtin.apt:
        name:
          - docker-ce
          - docker-ce-cli
          - containerd.io
          - docker-compose-plugin
        state: present
        update_cache: true
      when: ansible_os_family == "Debian"

    - name: Ensure Docker service is running
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true

    - name: Add user to docker group
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true
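`ansible.builtin.apt_key` is deprecated in recent Ansible releases because apt itself no longer recommends the shared keyring. A sketch of the keyring-file alternative with `signed-by`, achieving the same result as the two tasks above; the keyring path follows current Docker install guidance but should be verified against your Ansible and apt versions:

```yaml
# Hypothetical replacement for the apt_key/apt_repository tasks above
- name: Ensure keyring directory exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    mode: '0755'

- name: Download Docker GPG key to a dedicated keyring file
  ansible.builtin.get_url:
    url: "https://download.docker.com/linux/{{ ansible_distribution | lower }}/gpg"
    dest: /etc/apt/keyrings/docker.asc
    mode: '0644'

- name: Add Docker repository scoped to that key
  ansible.builtin.apt_repository:
    repo: >-
      deb [signed-by=/etc/apt/keyrings/docker.asc]
      https://download.docker.com/linux/{{ ansible_distribution | lower }}
      {{ ansible_distribution_release }} stable
    state: present
```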
27
docs/advanced/ansible/playbooks/common/logs.yml
Normal file
@@ -0,0 +1,27 @@
---
# View logs for a specific service
# Usage: ansible-playbook playbooks/common/logs.yml -e "service_name=plex" -e "target_host=atlantis"
- name: View service logs
  hosts: "{{ target_host }}"
  gather_facts: false

  vars:
    log_lines: 100
    follow_logs: false  # note: --follow streams indefinitely and will block the task

  tasks:
    - name: Validate service_name is provided
      ansible.builtin.fail:
        msg: "service_name variable is required. Use -e 'service_name=<name>'"
      when: service_name is not defined

    - name: Get service logs
      ansible.builtin.command:
        cmd: "docker compose logs --tail={{ log_lines }} {{ '--follow' if follow_logs else '' }}"
        chdir: "{{ docker_data_path }}/{{ service_name }}"
      register: logs_result
      become: "{{ ansible_become | default(false) }}"

    - name: Display logs
      ansible.builtin.debug:
        msg: "{{ logs_result.stdout }}"
23
docs/advanced/ansible/playbooks/common/restart_service.yml
Normal file
@@ -0,0 +1,23 @@
---
# Restart a specific service
# Usage: ansible-playbook playbooks/common/restart_service.yml -e "service_name=plex" -e "target_host=atlantis"
- name: Restart Docker service
  hosts: "{{ target_host }}"
  gather_facts: false

  tasks:
    - name: Validate service_name is provided
      ansible.builtin.fail:
        msg: "service_name variable is required. Use -e 'service_name=<name>'"
      when: service_name is not defined

    - name: Restart service
      ansible.builtin.command:
        cmd: docker compose restart
        chdir: "{{ docker_data_path }}/{{ service_name }}"
      register: restart_result
      become: "{{ ansible_become | default(false) }}"

    - name: Display result
      ansible.builtin.debug:
        msg: "Service {{ service_name }} restarted on {{ inventory_hostname }}"
34
docs/advanced/ansible/playbooks/common/setup_directories.yml
Normal file
@@ -0,0 +1,34 @@
---
# Setup base directories for Docker services
- name: Setup Docker directories
  hosts: "{{ target_host | default('all') }}"
  gather_facts: true

  tasks:
    - name: Create base docker directory
      ansible.builtin.file:
        path: "{{ docker_data_path }}"
        state: directory
        mode: '0755'
      become: "{{ ansible_become | default(false) }}"

    - name: Create common directories
      ansible.builtin.file:
        path: "{{ docker_data_path }}/{{ item }}"
        state: directory
        mode: '0755'
      loop:
        - configs
        - data
        - logs
        - backups
      become: "{{ ansible_become | default(false) }}"

    - name: Create service directories from host_services
      ansible.builtin.file:
        path: "{{ docker_data_path }}/{{ item.stack_dir }}"
        state: directory
        mode: '0755'
      loop: "{{ host_services | default([]) }}"
      when: host_services is defined
      become: "{{ ansible_become | default(false) }}"
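Several plays iterate over `host_services`, whose shape is implied by the fields they access (`name`, `stack_dir`, `compose_file`, `env_file`, `enabled`). A hypothetical host_vars entry matching those accesses; the service and file names are illustrative, not from this commit:

```yaml
# host_vars/atlantis.yml (hypothetical service entry)
host_services:
  - name: plex               # loop label shown in playbook output
    stack_dir: plex          # directory under docker_data_path
    compose_file: docker-compose.yml
    env_file: .env           # optional; plays use default(omit) when absent
    enabled: true            # disabled services are skipped by the loops
```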
49
docs/advanced/ansible/playbooks/common/status.yml
Normal file
@@ -0,0 +1,49 @@
---
# Check status of all Docker containers
- name: Check container status
  hosts: "{{ target_host | default('all') }}"
  gather_facts: true

  tasks:
    # The {{ '{{' }} / {{ '}}' }} pairs escape Jinja so that docker receives
    # literal Go-template placeholders such as {{.Names}} in --format.
    - name: Get list of running containers
      ansible.builtin.command:
        cmd: docker ps --format "table {{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Status{{ '}}' }}\t{{ '{{' }}.Image{{ '}}' }}"
      register: docker_ps
      changed_when: false
      become: "{{ ansible_become | default(false) }}"

    - name: Display running containers
      ansible.builtin.debug:
        msg: |
          === {{ inventory_hostname }} ===
          {{ docker_ps.stdout }}

    - name: Get stopped/exited containers
      ansible.builtin.command:
        cmd: docker ps -a --filter "status=exited" --format "table {{ '{{' }}.Names{{ '}}' }}\t{{ '{{' }}.Status{{ '}}' }}"
      register: docker_exited
      changed_when: false
      become: "{{ ansible_become | default(false) }}"

    - name: Display stopped containers
      ansible.builtin.debug:
        msg: |
          === Stopped containers on {{ inventory_hostname }} ===
          {{ docker_exited.stdout }}
      when: docker_exited.stdout_lines | length > 1

    - name: Get disk usage
      ansible.builtin.command:
        cmd: docker system df
      register: docker_df
      changed_when: false
      become: "{{ ansible_become | default(false) }}"

    - name: Display disk usage
      ansible.builtin.debug:
        msg: |
          === Docker disk usage on {{ inventory_hostname }} ===
          {{ docker_df.stdout }}
46
docs/advanced/ansible/playbooks/common/update_containers.yml
Normal file
@@ -0,0 +1,46 @@
---
# Update all Docker containers (pull new images and recreate)
- name: Update Docker containers
  hosts: "{{ target_host | default('all') }}"
  gather_facts: true

  vars:
    services: "{{ host_services | default([]) }}"

  tasks:
    - name: Display update info
      ansible.builtin.debug:
        msg: "Updating {{ services | length }} services on {{ inventory_hostname }}"

    - name: Pull latest images for each service
      ansible.builtin.command:
        cmd: docker compose pull
        chdir: "{{ docker_data_path }}/{{ item.stack_dir }}"
      loop: "{{ services }}"
      loop_control:
        label: "{{ item.name }}"
      when: item.enabled | default(true)
      register: pull_result
      # Best-effort heuristic: docker compose writes pull progress to stderr,
      # so 'Downloaded' may not always appear on stdout.
      changed_when: "'Downloaded' in pull_result.stdout"
      failed_when: false
      become: "{{ ansible_become | default(false) }}"

    - name: Recreate containers with new images
      ansible.builtin.command:
        cmd: docker compose up -d --remove-orphans
        chdir: "{{ docker_data_path }}/{{ item.stack_dir }}"
      loop: "{{ services }}"
      loop_control:
        label: "{{ item.name }}"
      when: item.enabled | default(true)
      register: up_result
      changed_when: "'Started' in up_result.stdout or 'Recreated' in up_result.stdout"
      failed_when: false
      become: "{{ ansible_become | default(false) }}"

    - name: Clean up unused images
      ansible.builtin.command:
        cmd: docker image prune -af
      when: prune_images | default(true)
      changed_when: false
      become: "{{ ansible_become | default(false) }}"
62
docs/advanced/ansible/playbooks/configure_apt_proxy.yml
Normal file
@@ -0,0 +1,62 @@
---
- name: Configure APT Proxy on Debian/Ubuntu hosts
  hosts: debian_clients
  become: yes
  gather_facts: yes

  vars:
    apt_proxy_host: 100.103.48.78
    apt_proxy_port: 3142
    apt_proxy_file: /etc/apt/apt.conf.d/01proxy

  tasks:
    - name: Verify OS compatibility
      ansible.builtin.assert:
        that:
          - ansible_os_family == "Debian"
        fail_msg: "Host {{ inventory_hostname }} is not Debian-based. Skipping."
        success_msg: "Host {{ inventory_hostname }} is Debian-based."
      tags: verify

    - name: Create APT proxy configuration
      ansible.builtin.copy:
        dest: "{{ apt_proxy_file }}"
        owner: root
        group: root
        mode: '0644'
        content: |
          Acquire::http::Proxy "http://{{ apt_proxy_host }}:{{ apt_proxy_port }}/";
          Acquire::https::Proxy "false";
      register: proxy_conf
      tags: config

    - name: Ensure APT cache directories exist
      ansible.builtin.file:
        path: /var/cache/apt/archives
        state: directory
        owner: root
        group: root
        mode: '0755'
      tags: config

    - name: Test APT proxy connection (dry run)
      ansible.builtin.command: >
        apt-get update --print-uris -o Acquire::http::Proxy="http://{{ apt_proxy_host }}:{{ apt_proxy_port }}/"
      register: apt_proxy_test
      changed_when: false
      failed_when: apt_proxy_test.rc != 0
      tags: verify

    - name: Display proxy test result
      ansible.builtin.debug:
        msg: |
          ✅ {{ inventory_hostname }} is using APT proxy {{ apt_proxy_host }}:{{ apt_proxy_port }}
          {{ apt_proxy_test.stdout | default('') }}
      when: apt_proxy_test.rc == 0
      tags: verify

    - name: Display failure if APT proxy test failed
      ansible.builtin.debug:
        msg: "⚠️ {{ inventory_hostname }} failed to reach APT proxy at {{ apt_proxy_host }}:{{ apt_proxy_port }}"
      when: apt_proxy_test.rc != 0
      tags: verify
35
docs/advanced/ansible/playbooks/deploy_anubis.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for anubis
# Category: physical
# Services: 8
#
# Usage:
# ansible-playbook playbooks/deploy_anubis.yml
# ansible-playbook playbooks/deploy_anubis.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_anubis.yml --check

- name: Deploy services to anubis
  hosts: anubis
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
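The per-host deploy playbooks all delegate to a `docker_stack` role whose source is not part of this commit. A minimal sketch of what its `tasks/main.yml` might contain, inferred from the variables passed in (`stack_name`, `stack_compose_file`, `stack_env_file`) and the `stack_deploy` flag in the usage comments; this is an assumption, not the repository's actual role:

```yaml
# roles/docker_stack/tasks/main.yml (hypothetical sketch)
- name: Ensure stack directory exists
  ansible.builtin.file:
    path: "{{ docker_data_path }}/{{ stack_name }}"
    state: directory
    mode: '0755'

- name: Copy compose file into the stack directory
  ansible.builtin.copy:
    src: "{{ stack_compose_file }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/docker-compose.yml"

- name: Bring the stack up (skipped with -e "stack_deploy=false")
  ansible.builtin.command:
    cmd: docker compose up -d
    chdir: "{{ docker_data_path }}/{{ stack_name }}"
  when: stack_deploy | default(true)
```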
35
docs/advanced/ansible/playbooks/deploy_atlantis.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for atlantis
# Category: synology
# Services: 53
#
# Usage:
# ansible-playbook playbooks/deploy_atlantis.yml
# ansible-playbook playbooks/deploy_atlantis.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_atlantis.yml --check

- name: Deploy services to atlantis
  hosts: atlantis
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_bulgaria_vm.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for bulgaria-vm
# Category: vms
# Services: 10
#
# Usage:
# ansible-playbook playbooks/deploy_bulgaria_vm.yml
# ansible-playbook playbooks/deploy_bulgaria_vm.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_bulgaria_vm.yml --check

- name: Deploy services to bulgaria-vm
  hosts: bulgaria_vm
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_calypso.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for calypso
# Category: synology
# Services: 24
#
# Usage:
# ansible-playbook playbooks/deploy_calypso.yml
# ansible-playbook playbooks/deploy_calypso.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_calypso.yml --check

- name: Deploy services to calypso
  hosts: calypso
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_chicago_vm.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for chicago-vm
# Category: vms
# Services: 7
#
# Usage:
# ansible-playbook playbooks/deploy_chicago_vm.yml
# ansible-playbook playbooks/deploy_chicago_vm.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_chicago_vm.yml --check

- name: Deploy services to chicago-vm
  hosts: chicago_vm
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_concord_nuc.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for concord-nuc
# Category: physical
# Services: 11
#
# Usage:
# ansible-playbook playbooks/deploy_concord_nuc.yml
# ansible-playbook playbooks/deploy_concord_nuc.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_concord_nuc.yml --check

- name: Deploy services to concord-nuc
  hosts: concord_nuc
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_contabo_vm.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for contabo-vm
# Category: vms
# Services: 1
#
# Usage:
# ansible-playbook playbooks/deploy_contabo_vm.yml
# ansible-playbook playbooks/deploy_contabo_vm.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_contabo_vm.yml --check

- name: Deploy services to contabo-vm
  hosts: contabo_vm
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
35
docs/advanced/ansible/playbooks/deploy_guava.yml
Normal file
@@ -0,0 +1,35 @@
---
# Deployment playbook for guava
# Category: truenas
# Services: 1
#
# Usage:
# ansible-playbook playbooks/deploy_guava.yml
# ansible-playbook playbooks/deploy_guava.yml -e "stack_deploy=false"
# ansible-playbook playbooks/deploy_guava.yml --check

- name: Deploy services to guava
  hosts: guava
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}
    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'
    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
**`docs/advanced/ansible/playbooks/deploy_homelab_vm.yml`** (new file, 35 lines)

```yaml
---
# Deployment playbook for homelab-vm
# Category: vms
# Services: 33
#
# Usage:
#   ansible-playbook playbooks/deploy_homelab_vm.yml
#   ansible-playbook playbooks/deploy_homelab_vm.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_homelab_vm.yml --check

- name: Deploy services to homelab-vm
  hosts: homelab_vm
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}

    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'

    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
```
**`docs/advanced/ansible/playbooks/deploy_lxc.yml`** (new file, 35 lines)

```yaml
---
# Deployment playbook for lxc
# Category: proxmox
# Services: 1
#
# Usage:
#   ansible-playbook playbooks/deploy_lxc.yml
#   ansible-playbook playbooks/deploy_lxc.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_lxc.yml --check

- name: Deploy services to lxc
  hosts: lxc
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}

    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'

    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
```
**`docs/advanced/ansible/playbooks/deploy_matrix_ubuntu_vm.yml`** (new file, 35 lines)

```yaml
---
# Deployment playbook for matrix-ubuntu-vm
# Category: vms
# Services: 2
#
# Usage:
#   ansible-playbook playbooks/deploy_matrix_ubuntu_vm.yml
#   ansible-playbook playbooks/deploy_matrix_ubuntu_vm.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_matrix_ubuntu_vm.yml --check

- name: Deploy services to matrix-ubuntu-vm
  hosts: matrix_ubuntu_vm
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}

    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'

    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
```
**`docs/advanced/ansible/playbooks/deploy_rpi5_vish.yml`** (new file, 35 lines)

```yaml
---
# Deployment playbook for rpi5-vish
# Category: edge
# Services: 3
#
# Usage:
#   ansible-playbook playbooks/deploy_rpi5_vish.yml
#   ansible-playbook playbooks/deploy_rpi5_vish.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_rpi5_vish.yml --check

- name: Deploy services to rpi5-vish
  hosts: rpi5_vish
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}

    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'

    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
```
**`docs/advanced/ansible/playbooks/deploy_setillo.yml`** (new file, 35 lines)

```yaml
---
# Deployment playbook for setillo
# Category: synology
# Services: 2
#
# Usage:
#   ansible-playbook playbooks/deploy_setillo.yml
#   ansible-playbook playbooks/deploy_setillo.yml -e "stack_deploy=false"
#   ansible-playbook playbooks/deploy_setillo.yml --check

- name: Deploy services to setillo
  hosts: setillo
  gather_facts: true
  vars:
    services: '{{ host_services | default([]) }}'
  tasks:
    - name: Display deployment info
      ansible.builtin.debug:
        msg: Deploying {{ services | length }} services to {{ inventory_hostname }}

    - name: Ensure docker data directory exists
      ansible.builtin.file:
        path: '{{ docker_data_path }}'
        state: directory
        mode: '0755'

    - name: Deploy each enabled service
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: '{{ item.stack_dir }}'
        stack_compose_file: '{{ item.compose_file }}'
        stack_env_file: '{{ item.env_file | default(omit) }}'
      loop: '{{ services }}'
      loop_control:
        label: '{{ item.name }}'
      when: item.enabled | default(true)
```
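Each of these generated deploy playbooks consumes a per-host `host_services` list and a `docker_data_path` defined in the inventory. A minimal sketch of the expected variable shape, with purely hypothetical service names and paths:

```yaml
# host_vars/contabo_vm.yml — illustrative values only, not from the repo
docker_data_path: /opt/docker
host_services:
  - name: uptime-kuma          # used for loop labels
    stack_dir: uptime-kuma     # becomes stack_name in the docker_stack role
    compose_file: stacks/uptime-kuma/docker-compose.yml
    env_file: stacks/uptime-kuma/.env   # optional; omitted when absent
    enabled: true              # defaults to true when unset
```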
**`docs/advanced/ansible/playbooks/install_tools.yml`** (new file, 17 lines)

```yaml
---
- name: Install common diagnostic tools
  hosts: all
  become: true
  tasks:
    - name: Install essential packages
      package:
        name:
          - htop
          - curl
          - wget
          - net-tools
          - iperf3
          - ncdu
          - vim
          - git
        state: present
```
**`docs/advanced/ansible/playbooks/synology_health.yml`** (new file, 137 lines)

```yaml
---
- name: Synology Healthcheck
  hosts: synology
  gather_facts: yes
  become: false

  vars:
    ts_candidates:
      - /var/packages/Tailscale/target/bin/tailscale
      - /usr/bin/tailscale

  tasks:
    # ---------- System info ----------
    - name: DSM version
      ansible.builtin.shell: |
        set -e
        if [ -f /etc.defaults/VERSION ]; then
          . /etc.defaults/VERSION
          echo "${productversion:-unknown} (build ${buildnumber:-unknown})"
        else
          echo "unknown"
        fi
      register: dsm_version
      changed_when: false
      failed_when: false

    - name: Uptime (pretty)
      ansible.builtin.command: uptime -p
      register: uptime_pretty
      changed_when: false
      failed_when: false

    - name: Load averages
      ansible.builtin.command: cat /proc/loadavg
      register: loadavg
      changed_when: false
      failed_when: false

    - name: Memory summary (MB)
      ansible.builtin.command: free -m
      register: mem
      changed_when: false
      failed_when: false

    # ---------- Storage ----------
    - name: Disk usage of root (/)
      ansible.builtin.shell: df -P / | awk 'NR==2 {print $5}' | tr -d '%'
      register: root_usage
      changed_when: false
      failed_when: false

    - name: Disk usage of /volume1 (if present)
      ansible.builtin.shell: |
        if mountpoint -q /volume1; then
          df -P /volume1 | awk 'NR==2 {print $5}' | tr -d '%'
        fi
      register: vol1_usage
      changed_when: false
      failed_when: false

    - name: RAID status (/proc/mdstat)
      ansible.builtin.command: cat /proc/mdstat
      register: mdstat
      changed_when: false
      failed_when: false

    # ---------- Tailscale (optional) ----------
    - name: Detect Tailscale binary path (first that exists)
      ansible.builtin.shell: |
        for p in {{ ts_candidates | join(' ') }}; do
          [ -x "$p" ] && echo "$p" && exit 0
        done
        echo ""
      register: ts_bin
      changed_when: false
      failed_when: false

    - name: Get Tailscale IPv4 (if tailscale present)
      ansible.builtin.command: "{{ ts_bin.stdout }} ip -4"
      register: ts_ip
      changed_when: false
      failed_when: false
      when: ts_bin.stdout | length > 0

    - name: Get Tailscale self status (brief)
      ansible.builtin.command: "{{ ts_bin.stdout }} status --self"
      register: ts_status
      changed_when: false
      failed_when: false
      when: ts_bin.stdout | length > 0

    # ---------- Assertions (lightweight, no sudo) ----------
    - name: Check RAID not degraded/resyncing
      ansible.builtin.assert:
        that:
          - mdstat.stdout is not search('degraded', ignorecase=True)
          - mdstat.stdout is not search('resync', ignorecase=True)
        success_msg: "RAID OK"
        fail_msg: "RAID issue detected (degraded or resync) — check Storage Manager"
      changed_when: false

    - name: Check root FS usage < 90%
      ansible.builtin.assert:
        that:
          - (root_usage.stdout | default('0')) | int < 90
        success_msg: "Root filesystem usage OK ({{ root_usage.stdout | default('n/a') }}%)"
        fail_msg: "Root filesystem high ({{ root_usage.stdout | default('n/a') }}%)"
      changed_when: false

    - name: Check /volume1 usage < 90% (if present)
      ansible.builtin.assert:
        that:
          - (vol1_usage.stdout | default('0')) | int < 90
        success_msg: "/volume1 usage OK ({{ vol1_usage.stdout | default('n/a') }}%)"
        fail_msg: "/volume1 usage high ({{ vol1_usage.stdout | default('n/a') }}%)"
      when: vol1_usage.stdout is defined and vol1_usage.stdout != ""
      changed_when: false

    # ---------- Summary (shows the results) ----------
    - name: Summary
      ansible.builtin.debug:
        msg: |
          Host: {{ inventory_hostname }}
          DSM: {{ dsm_version.stdout | default('unknown') }}
          Uptime: {{ uptime_pretty.stdout | default('n/a') }}
          Load: {{ loadavg.stdout | default('n/a') }}
          Memory (MB):
          {{ (mem.stdout | default('n/a')) | indent(2) }}
          Root usage: {{ root_usage.stdout | default('n/a') }}%
          Volume1 usage: {{ (vol1_usage.stdout | default('n/a')) if (vol1_usage.stdout is defined and vol1_usage.stdout != "") else 'n/a' }}%
          RAID (/proc/mdstat):
          {{ (mdstat.stdout | default('n/a')) | indent(2) }}
          Tailscale:
            binary: {{ (ts_bin.stdout | default('not found')) if ts_bin.stdout|length > 0 else 'not found' }}
            ip: {{ ts_ip.stdout | default('n/a') }}
            self:
          {{ (ts_status.stdout | default('n/a')) | indent(2) }}
```
**`docs/advanced/ansible/playbooks/system_info.yml`** (new file, 12 lines)

```yaml
---
- name: Display system information
  hosts: all
  gather_facts: yes
  tasks:
    - name: Print system details
      debug:
        msg:
          - "Hostname: {{ ansible_hostname }}"
          - "OS: {{ ansible_distribution }} {{ ansible_distribution_version }}"
          - "Kernel: {{ ansible_kernel }}"
          - "Uptime (hours): {{ (ansible_uptime_seconds | int / 3600) | round(1) }}"
```
**`docs/advanced/ansible/playbooks/tailscale_health.yml`** (new file, 75 lines)

```yaml
---
- name: Tailscale Health Check (Homelab)
  hosts: active # or "all" if you want to check everything
  gather_facts: yes
  become: false

  vars:
    tailscale_bin: "/usr/bin/tailscale"
    tailscale_service: "tailscaled"

  tasks:
    - name: Verify Tailscale binary exists
      stat:
        path: "{{ tailscale_bin }}"
      register: ts_bin
      ignore_errors: true

    - name: Skip host if Tailscale not installed
      meta: end_host
      when: not ts_bin.stat.exists

    - name: Get Tailscale CLI version
      command: "{{ tailscale_bin }} version"
      register: ts_version
      changed_when: false
      failed_when: false

    - name: Get Tailscale status (JSON)
      command: "{{ tailscale_bin }} status --json"
      register: ts_status
      changed_when: false
      failed_when: false

    - name: Parse Tailscale JSON
      set_fact:
        ts_parsed: "{{ ts_status.stdout | from_json }}"
      when: ts_status.rc == 0 and (ts_status.stdout | length) > 0 and ts_status.stdout is search('{')

    - name: Extract important fields
      set_fact:
        ts_backend_state: "{{ ts_parsed.BackendState | default('unknown') }}"
        ts_ips: "{{ ts_parsed.Self.TailscaleIPs | default([]) }}"
        ts_hostname: "{{ ts_parsed.Self.HostName | default(inventory_hostname) }}"
      when: ts_parsed is defined

    - name: Report healthy nodes
      debug:
        msg: >-
          HEALTHY: {{ ts_hostname }}
          version={{ ts_version.stdout | default('n/a') }},
          backend={{ ts_backend_state }},
          ips={{ ts_ips }}
      when:
        - ts_parsed is defined
        - ts_backend_state == "Running"
        - ts_ips | length > 0

    - name: Report unhealthy or unreachable nodes
      debug:
        msg: >-
          UNHEALTHY: {{ inventory_hostname }}
          rc={{ ts_status.rc }},
          backend={{ ts_backend_state | default('n/a') }},
          ips={{ ts_ips | default([]) }},
          version={{ ts_version.stdout | default('n/a') }}
      when: ts_parsed is not defined or ts_backend_state != "Running"

    - name: Always print concise summary
      debug:
        msg: >-
          Host={{ inventory_hostname }},
          Version={{ ts_version.stdout | default('n/a') }},
          Backend={{ ts_backend_state | default('unknown') }},
          IPs={{ ts_ips | default([]) }}
```
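The health verdict keys off three fields of the `tailscale status --json` output: `BackendState`, `Self.TailscaleIPs`, and `Self.HostName`. An abbreviated sketch of that structure, rendered as YAML with illustrative values:

```yaml
# Abbreviated `tailscale status --json` shape — values are placeholders
BackendState: Running          # anything else (e.g. NeedsLogin, Stopped) counts as unhealthy
Self:
  HostName: homelab
  TailscaleIPs:
    - 100.64.0.1               # illustrative; real tailnet addresses are in 100.64.0.0/10
```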
**`docs/advanced/ansible/playbooks/update_ansible.yml`** (new file, 96 lines)

```yaml
---
# Update and upgrade Ansible on Linux hosts
# Excludes Synology devices and handles Home Assistant carefully
# Created: February 8, 2026

- name: Update package cache and upgrade Ansible on Linux hosts
  hosts: debian_clients:!synology
  gather_facts: yes
  become: yes
  vars:
    ansible_become_pass: "{{ ansible_ssh_pass | default(omit) }}"

  tasks:
    - name: Display target host information
      debug:
        msg: "Updating Ansible on {{ inventory_hostname }} ({{ ansible_host }})"

    - name: Check if host is Home Assistant
      set_fact:
        is_homeassistant: "{{ inventory_hostname == 'homeassistant' }}"

    - name: Skip Home Assistant with warning
      debug:
        msg: "Skipping {{ inventory_hostname }} - Home Assistant uses its own package management"
      when: is_homeassistant

    - name: Update apt package cache
      apt:
        update_cache: yes
        cache_valid_time: 3600
      when: not is_homeassistant
      register: apt_update_result

    - name: Display apt update results
      debug:
        msg: "APT cache updated on {{ inventory_hostname }}"
      when: not is_homeassistant and apt_update_result is succeeded

    - name: Check current Ansible version
      command: ansible --version
      register: current_ansible_version
      changed_when: false
      failed_when: false
      when: not is_homeassistant

    - name: Display current Ansible version
      debug:
        msg: "Current Ansible version on {{ inventory_hostname }}: {{ current_ansible_version.stdout_lines[0] if current_ansible_version.stdout_lines else 'Not installed' }}"
      when: not is_homeassistant and current_ansible_version is defined

    - name: Upgrade Ansible package
      apt:
        name: ansible
        state: latest
        only_upgrade: yes
      when: not is_homeassistant
      register: ansible_upgrade_result

    - name: Display Ansible upgrade results
      debug:
        msg: |
          Ansible upgrade on {{ inventory_hostname }}:
          {% if ansible_upgrade_result.changed %}
          ✅ Ansible was upgraded successfully
          {% else %}
          ℹ️ Ansible was already at the latest version
          {% endif %}
      when: not is_homeassistant

    - name: Check new Ansible version
      command: ansible --version
      register: new_ansible_version
      changed_when: false
      when: not is_homeassistant and ansible_upgrade_result is succeeded

    - name: Display new Ansible version
      debug:
        msg: "New Ansible version on {{ inventory_hostname }}: {{ new_ansible_version.stdout_lines[0] }}"
      when: not is_homeassistant and new_ansible_version is defined

    - name: Summary of changes
      debug:
        msg: |
          Summary for {{ inventory_hostname }}:
          {% if is_homeassistant %}
          - Skipped (Home Assistant uses its own package management)
          {% else %}
          - APT cache: {{ 'Updated' if apt_update_result.changed else 'Already current' }}
          - Ansible: {{ 'Upgraded' if ansible_upgrade_result.changed else 'Already latest version' }}
          {% endif %}

  handlers:
    - name: Clean apt cache
      apt:
        autoclean: yes
      when: not is_homeassistant
```
**`docs/advanced/ansible/playbooks/update_ansible_targeted.yml`** (new file, 122 lines)

```yaml
---
# Targeted Ansible update for confirmed Debian/Ubuntu hosts
# Excludes Synology, TrueNAS, Home Assistant, and unreachable hosts
# Created: February 8, 2026

- name: Update and upgrade Ansible on confirmed Linux hosts
  hosts: homelab,pi-5,vish-concord-nuc,pve
  gather_facts: yes
  become: yes
  serial: 1 # Process one host at a time for better control

  tasks:
    - name: Display target host information
      debug:
        msg: |
          Processing: {{ inventory_hostname }} ({{ ansible_host }})
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
          Python: {{ ansible_python_version }}

    - name: Check if apt is available
      stat:
        path: /usr/bin/apt
      register: apt_available

    - name: Skip non-Debian hosts
      debug:
        msg: "Skipping {{ inventory_hostname }} - apt not available"
      when: not apt_available.stat.exists

    - name: Update apt package cache (with retry)
      apt:
        update_cache: yes
        cache_valid_time: 0 # Force update
      register: apt_update_result
      retries: 3
      delay: 10
      when: apt_available.stat.exists
      ignore_errors: yes

    - name: Display apt update status
      debug:
        msg: |
          APT update on {{ inventory_hostname }}:
          {% if apt_update_result is succeeded %}
          ✅ Success - Cache updated
          {% elif apt_update_result is failed %}
          ❌ Failed - {{ apt_update_result.msg | default('Unknown error') }}
          {% else %}
          ⏭️ Skipped - apt not available
          {% endif %}

    - name: Check if Ansible is installed
      command: which ansible
      register: ansible_installed
      changed_when: false
      failed_when: false
      when: apt_available.stat.exists and apt_update_result is succeeded

    - name: Get current Ansible version if installed
      command: ansible --version
      register: current_ansible_version
      changed_when: false
      failed_when: false
      when: ansible_installed is succeeded and ansible_installed.rc == 0

    - name: Display current Ansible status
      debug:
        msg: |
          Ansible status on {{ inventory_hostname }}:
          {% if ansible_installed is defined and ansible_installed.rc == 0 %}
          📦 Installed: {{ current_ansible_version.stdout_lines[0] if current_ansible_version.stdout_lines else 'Version check failed' }}
          {% else %}
          📦 Not installed
          {% endif %}

    - name: Install or upgrade Ansible
      apt:
        name: ansible
        state: latest
        update_cache: no # We already updated above
      register: ansible_upgrade_result
      when: apt_available.stat.exists and apt_update_result is succeeded
      ignore_errors: yes

    - name: Display Ansible installation/upgrade results
      debug:
        msg: |
          Ansible operation on {{ inventory_hostname }}:
          {% if ansible_upgrade_result is succeeded %}
          {% if ansible_upgrade_result.changed %}
          ✅ {{ 'Installed' if ansible_installed.rc != 0 else 'Upgraded' }} successfully
          {% else %}
          ℹ️ Already at latest version
          {% endif %}
          {% elif ansible_upgrade_result is failed %}
          ❌ Failed: {{ ansible_upgrade_result.msg | default('Unknown error') }}
          {% else %}
          ⏭️ Skipped due to previous errors
          {% endif %}

    - name: Verify final Ansible version
      command: ansible --version
      register: final_ansible_version
      changed_when: false
      failed_when: false
      when: ansible_upgrade_result is succeeded

    - name: Final status summary
      debug:
        msg: |
          === SUMMARY FOR {{ inventory_hostname | upper }} ===
          Host: {{ ansible_host }}
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
          APT Update: {{ '✅ Success' if apt_update_result is succeeded else '❌ Failed' if apt_update_result is defined else '⏭️ Skipped' }}
          Ansible: {% if final_ansible_version is succeeded %}{{ final_ansible_version.stdout_lines[0] }}{% elif ansible_upgrade_result is succeeded %}{{ 'Installed/Updated' if ansible_upgrade_result.changed else 'Already current' }}{% else %}{{ '❌ Failed or skipped' }}{% endif %}

  post_tasks:
    - name: Clean up apt cache
      apt:
        autoclean: yes
      when: apt_available.stat.exists and apt_update_result is succeeded
      ignore_errors: yes
```
**`docs/advanced/ansible/playbooks/update_system.yml`** (new file, 8 lines)

```yaml
- hosts: all
  become: true
  tasks:
    - name: Update apt cache and upgrade packages
      apt:
        update_cache: yes
        upgrade: dist
      when: ansible_os_family == "Debian"
```
**`docs/advanced/ansible/roles/directory_setup/tasks/main.yml`** (new file, 30 lines)

```yaml
---
# Directory Setup Role
# Creates necessary directories for Docker services

- name: Create base docker directory
  ansible.builtin.file:
    path: "{{ docker_data_path }}"
    state: directory
    mode: '0755'
  when: create_base_dir | default(true)

- name: Create service directories
  ansible.builtin.file:
    path: "{{ docker_data_path }}/{{ item.name }}"
    state: directory
    mode: "{{ item.mode | default('0755') }}"
    owner: "{{ item.owner | default(omit) }}"
    group: "{{ item.group | default(omit) }}"
  loop: "{{ service_directories | default([]) }}"
  when: service_directories is defined

- name: Create nested service directories
  ansible.builtin.file:
    path: "{{ docker_data_path }}/{{ item.0.name }}/{{ item.1 }}"
    state: directory
    mode: "{{ item.0.mode | default('0755') }}"
    owner: "{{ item.0.owner | default(omit) }}"
    group: "{{ item.0.group | default(omit) }}"
  loop: "{{ service_directories | default([]) | subelements('subdirs', skip_missing=True) }}"
  when: service_directories is defined
```
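The nested-directory task pairs each service with its `subdirs` entries via the `subelements` filter, while `skip_missing=True` tolerates services that define no subdirectories. A hypothetical vars sketch that exercises both loops (names and modes are illustrative):

```yaml
# group_vars example — service names are placeholders, not from the repo
docker_data_path: /opt/docker
service_directories:
  - name: jellyfin
    mode: '0755'
    subdirs:           # expands to jellyfin/config and jellyfin/cache
      - config
      - cache
  - name: redis        # no subdirs key; skip_missing=True skips the nested loop
```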
**`docker_stack` role defaults** (new file, 6 lines)

```yaml
---
# Default variables for docker_stack role

stack_deploy: true
stack_pull_images: true
stack_health_wait: 10
```
**`docs/advanced/ansible/roles/docker_stack/tasks/main.yml`** (new file, 107 lines)

```yaml
---
# Docker Stack Deployment Role
# Deploys docker-compose stacks to hosts
#
# Required variables:
#   stack_name: Name of the stack/directory
#   stack_compose_file: Path to the compose file (relative to repo root)
#
# Optional variables:
#   stack_env_file: Path to .env file (relative to repo root)
#   stack_config_files: List of additional config files to copy
#   stack_deploy: Whether to deploy the stack (default: true)
#   stack_pull_images: Whether to pull images first (default: true)

- name: Ensure stack directory exists
  ansible.builtin.file:
    path: "{{ docker_data_path }}/{{ stack_name }}"
    state: directory
    mode: '0755'
  become: "{{ ansible_become | default(false) }}"

- name: Ensure stack subdirectories exist
  ansible.builtin.file:
    path: "{{ docker_data_path }}/{{ stack_name }}/{{ item }}"
    state: directory
    mode: '0755'
  loop: "{{ stack_subdirs | default(['config', 'data']) }}"
  become: "{{ ansible_become | default(false) }}"

- name: Copy docker-compose file from repo
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../../{{ stack_compose_file }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/docker-compose.yml"
    mode: '0644'
    backup: true
  register: compose_file_result
  when: stack_compose_file is defined
  become: "{{ ansible_become | default(false) }}"

- name: Copy docker-compose content directly
  ansible.builtin.copy:
    content: "{{ stack_compose_content }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/docker-compose.yml"
    mode: '0644'
    backup: true
  register: compose_content_result
  when:
    - stack_compose_content is defined
    - stack_compose_file is not defined
  become: "{{ ansible_become | default(false) }}"

- name: Copy environment file from repo
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../../{{ stack_env_file }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/.env"
    mode: '0600'
    backup: true
  when: stack_env_file is defined
  become: "{{ ansible_become | default(false) }}"

- name: Copy environment content directly
  ansible.builtin.copy:
    content: "{{ stack_env_content }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/.env"
    mode: '0600'
  when:
    - stack_env_content is defined
    - stack_env_file is not defined
  become: "{{ ansible_become | default(false) }}"

- name: Copy additional config files
  ansible.builtin.copy:
    src: "{{ playbook_dir }}/../../{{ item.src }}"
    dest: "{{ docker_data_path }}/{{ stack_name }}/{{ item.dest }}"
    mode: "{{ item.mode | default('0644') }}"
    backup: true
  loop: "{{ stack_config_files | default([]) }}"
  when: stack_config_files is defined
  become: "{{ ansible_become | default(false) }}"

- name: Pull Docker images
  ansible.builtin.command:
    cmd: docker compose pull
    chdir: "{{ docker_data_path }}/{{ stack_name }}"
  register: pull_result
  when: stack_pull_images | default(true)
  changed_when: "'Downloaded' in pull_result.stdout"
  failed_when: false
  become: "{{ ansible_become | default(false) }}"

- name: Deploy stack with docker compose
  ansible.builtin.command:
    cmd: docker compose up -d --remove-orphans
    chdir: "{{ docker_data_path }}/{{ stack_name }}"
  register: deploy_result
  when: stack_deploy | default(true)
  changed_when:
    - "'Started' in deploy_result.stdout or 'Created' in deploy_result.stdout"
    - compose_file_result.changed | default(false) or compose_content_result.changed | default(false)
  become: "{{ ansible_become | default(false) }}"

- name: Wait for stack to be healthy
  ansible.builtin.pause:
    seconds: "{{ stack_health_wait | default(5) }}"
  when:
    - stack_deploy | default(true)
    - stack_health_wait | default(5) > 0
```
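The docker_stack role can also be invoked on its own, outside the generated per-host playbooks, by supplying the two required variables. A minimal sketch with a hypothetical stack name and compose path:

```yaml
# Ad-hoc playbook sketch — stack name and path are illustrative
- name: Deploy a single stack by hand
  hosts: homelab_vm
  tasks:
    - name: Deploy uptime-kuma via docker_stack
      ansible.builtin.include_role:
        name: docker_stack
      vars:
        stack_name: uptime-kuma                                    # hypothetical
        stack_compose_file: stacks/uptime-kuma/docker-compose.yml  # relative to repo root
```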
**`docs/advanced/ansible/scripts/run_healthcheck.sh`** (new executable file, 11 lines)

```bash
#!/usr/bin/env bash
set -euo pipefail
cd "$(dirname "$0")/.."

# update from git (ignore if local changes)
git pull --rebase --autostash || true

# run playbook and save logs
mkdir -p logs
ts="$(date +%F_%H-%M-%S)"
ansible-playbook playbooks/tailscale_health.yml | tee logs/tailscale_health_${ts}.log
```
82
docs/advanced/ansible/site.yml
Normal file
82
docs/advanced/ansible/site.yml
Normal file
@@ -0,0 +1,82 @@
---
# Master Homelab Deployment Playbook
# Auto-generated from docker-compose files
#
# Usage:
#   Deploy everything:    ansible-playbook site.yml
#   Deploy specific host: ansible-playbook site.yml --limit atlantis
#   Deploy by category:   ansible-playbook site.yml --tags synology
#

- name: Deploy all homelab services
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Display deployment plan
      ansible.builtin.debug:
        msg: Deploying services to all hosts. Use --limit to target specific hosts.

- name: Deploy to anubis (8 services)
  ansible.builtin.import_playbook: playbooks/deploy_anubis.yml
  tags:
    - physical
    - anubis

- name: Deploy to atlantis (53 services)
  ansible.builtin.import_playbook: playbooks/deploy_atlantis.yml
  tags:
    - synology
    - atlantis

- name: Deploy to bulgaria-vm (10 services)
  ansible.builtin.import_playbook: playbooks/deploy_bulgaria_vm.yml
  tags:
    - vms
    - bulgaria_vm

- name: Deploy to calypso (24 services)
  ansible.builtin.import_playbook: playbooks/deploy_calypso.yml
  tags:
    - synology
    - calypso

- name: Deploy to chicago-vm (7 services)
  ansible.builtin.import_playbook: playbooks/deploy_chicago_vm.yml
  tags:
    - vms
    - chicago_vm

- name: Deploy to concord-nuc (11 services)
  ansible.builtin.import_playbook: playbooks/deploy_concord_nuc.yml
  tags:
    - physical
    - concord_nuc

- name: Deploy to contabo-vm (1 service)
  ansible.builtin.import_playbook: playbooks/deploy_contabo_vm.yml
  tags:
    - vms
    - contabo_vm

- name: Deploy to guava (1 service)
  ansible.builtin.import_playbook: playbooks/deploy_guava.yml
  tags:
    - truenas
    - guava

- name: Deploy to homelab-vm (33 services)
  ansible.builtin.import_playbook: playbooks/deploy_homelab_vm.yml
  tags:
    - vms
    - homelab_vm

- name: Deploy to lxc (1 service)
  ansible.builtin.import_playbook: playbooks/deploy_lxc.yml
  tags:
    - proxmox
    - lxc

- name: Deploy to matrix-ubuntu-vm (2 services)
  ansible.builtin.import_playbook: playbooks/deploy_matrix_ubuntu_vm.yml
  tags:
    - vms
    - matrix_ubuntu_vm

- name: Deploy to rpi5-vish (3 services)
  ansible.builtin.import_playbook: playbooks/deploy_rpi5_vish.yml
  tags:
    - edge
    - rpi5_vish

- name: Deploy to setillo (2 services)
  ansible.builtin.import_playbook: playbooks/deploy_setillo.yml
  tags:
    - synology
    - setillo
10
docs/advanced/ansible/test-nginx/docker-compose.yml
Normal file
@@ -0,0 +1,10 @@
version: "3.9"

services:
  web:
    image: nginx:alpine
    container_name: test-nginx
    ports:
      - "8080:80"
    command: ["/bin/sh", "-c", "echo '<h1>Hello from Vish! This is hard + Gitea 🚀</h1>' > /usr/share/nginx/html/index.html && nginx -g 'daemon off;'"]
    restart: unless-stopped
1
docs/advanced/ansible/test-nginx/html/index.html
Normal file
@@ -0,0 +1 @@
echo "Hello from Portainer + Gitea deploy test app 🚀"