Sanitized mirror from private repository - 2026-03-21 08:52:36 UTC
# Gitea Actions & Runner Guide

*How to use the `calypso-runner` for homelab automation*

## Overview

The `calypso-runner` is a Gitea Act Runner running on Calypso (`gitea/act_runner:latest`).
It picks up jobs from any workflow in any repo it's registered to and executes them in
Docker containers. A single runner handles all workflows sequentially — for a homelab this
is plenty.

**Runner labels** (which `runs-on:` values work):

| `runs-on:` value | Container used |
|---|---|
| `ubuntu-latest` | `node:20-bookworm` |
| `ubuntu-22.04` | `ubuntu:22.04` |
| `python` | `python:3.11` |

Workflows go in `.gitea/workflows/*.yml`. They use the same syntax as GitHub Actions.

---

## Existing workflows

| File | Trigger | What it does |
|---|---|---|
| `mirror-to-public.yaml` | push to main | Sanitizes repo and force-pushes to `homelab-optimized` |
| `validate.yml` | every push + PR | YAML lint + secret scan on changed files |
| `portainer-deploy.yml` | push to main (hosts/ changed) | Auto-redeploys matching Portainer stacks |
| `dns-audit.yml` | daily 08:00 UTC + manual | DNS resolution, NPM↔DDNS cross-reference, CF proxy audit |

---

## Repo secrets

Stored at: **Gitea → Vish/homelab → Settings → Secrets → Actions**

| Secret | Used by | Notes |
|---|---|---|
| `PUBLIC_REPO_TOKEN` | mirror-to-public | Write access to homelab-optimized |
| `PUBLIC_REPO_URL` | mirror-to-public | URL of the public mirror repo |
| `PORTAINER_TOKEN` | portainer-deploy | `ptr_*` Portainer API token |
| `GIT_TOKEN` | portainer-deploy, dns-audit | Gitea token for repo checkout + Portainer git auth |
| `NTFY_URL` | portainer-deploy, dns-audit | Full ntfy topic URL (optional) |
| `NPM_EMAIL` | dns-audit | NPM admin email for API login |
| `NPM_PASSWORD` | dns-audit | NPM admin password for API login |
| `CF_TOKEN` | dns-audit | Cloudflare API token (same one used by DDNS containers) |
| `CF_SYNC` | dns-audit | Set to `true` to auto-patch CF proxy mismatches (optional) |

> Note: Gitea reserves the `GITEA_` prefix for built-in variables — use `GIT_TOKEN`
> not `GITEA_TOKEN`.

---

## Workflow recipes

### DNS record audit

This is a live workflow — see `.gitea/workflows/dns-audit.yml` and the full
documentation at `docs/guides/dns-audit.md`.

It runs the script at `.gitea/scripts/dns-audit.py`, which does a 5-step audit:

1. Parses all DDNS compose files for the canonical domain + proxy-flag list
2. Queries the NPM API for all proxy host domains
3. Live DNS checks — proxied domains must resolve to CF IPs, unproxied to direct IPs
4. Cross-references NPM ↔ DDNS (flags orphaned entries in either direction)
5. Cloudflare API audit — checks proxy settings match DDNS config; auto-patches with `CF_SYNC=true`
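
Step 3, for example, boils down to a check like this. This is a hypothetical sketch, not the actual `dns-audit.py` code, and the Cloudflare ranges shown are only a subset of the published list:

```python
import ipaddress
import socket

# Subset of Cloudflare's published IPv4 edge ranges (assumption: the real
# script would load the full list from https://www.cloudflare.com/ips/).
CF_RANGES = [ipaddress.ip_network(n) for n in ('104.16.0.0/13', '172.64.0.0/13')]

def is_cloudflare_ip(addr: str) -> bool:
    """True if addr falls inside a known Cloudflare edge range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in CF_RANGES)

def dns_ok(domain: str, proxied: bool, wan_ip: str) -> bool:
    """Proxied domains must resolve to a CF edge IP; unproxied to the WAN IP."""
    addr = socket.gethostbyname(domain)
    return is_cloudflare_ip(addr) if proxied else addr == wan_ip
```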
Required secrets: `GIT_TOKEN`, `NPM_EMAIL`, `NPM_PASSWORD`, `CF_TOKEN` <!-- pragma: allowlist secret -->
Optional: `NTFY_URL` (alert on failure), `CF_SYNC=true` (auto-patch mismatches)

---

### Ansible dry-run on changed playbooks

Validates any Ansible playbook you change before it gets used in production.
Requires your inventory to be reachable from the runner.

```yaml
# .gitea/workflows/ansible-check.yml
name: Ansible Check

on:
  push:
    paths: ['ansible/**']
  pull_request:
    paths: ['ansible/**']

jobs:
  ansible-lint:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2  # needed so `git diff HEAD~1 HEAD` below has a parent commit

      - name: Install Ansible
        run: |
          apt-get update -q && apt-get install -y -q ansible ansible-lint

      - name: Syntax check changed playbooks
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD | grep 'ansible/.*\.yml$' || true)
          if [ -z "$CHANGED" ]; then
            echo "No playbooks changed"
            exit 0
          fi
          for playbook in $CHANGED; do
            echo "Checking: $playbook"
            ansible-playbook --syntax-check "$playbook" -i ansible/homelab/inventory/ || exit 1
          done

      - name: Lint changed playbooks
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD | grep 'ansible/.*\.yml$' || true)
          if [ -z "$CHANGED" ]; then exit 0; fi
          ansible-lint $CHANGED --exclude ansible/archive/
```

---

### Notify on push

Sends an ntfy notification with a summary of every push to main — who pushed,
what changed, and the short commit SHA.

```yaml
# .gitea/workflows/notify-push.yml
name: Notify on Push

on:
  push:
    branches: [main]

jobs:
  notify:
    runs-on: python
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2

      - name: Send push notification
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q  # python:3.11 image doesn't ship requests
          python3 << 'PYEOF'
          import subprocess, requests, os

          ntfy_url = os.environ.get('NTFY_URL', '')
          if not ntfy_url:
              print("NTFY_URL not set, skipping")
              raise SystemExit(0)

          author = subprocess.check_output(
              ['git', 'log', '-1', '--format=%an'], text=True).strip()
          message = subprocess.check_output(
              ['git', 'log', '-1', '--format=%s'], text=True).strip()
          changed = subprocess.check_output(
              ['git', 'diff', '--name-only', 'HEAD~1', 'HEAD'], text=True).strip()
          file_count = len(changed.splitlines()) if changed else 0
          sha = subprocess.check_output(
              ['git', 'rev-parse', '--short', 'HEAD'], text=True).strip()

          body = f"{message}\n{file_count} file(s) changed\nCommit: {sha}"
          requests.post(ntfy_url,
                        data=body,
                        headers={'Title': f'📦 Push by {author}', 'Priority': '2', 'Tags': 'inbox_tray'},
                        timeout=10)
          print(f"Notified: {message}")
          PYEOF
```

---

### Scheduled service health check

Pings all your services and sends an alert if any are down. Runs every 30 minutes.

```yaml
# .gitea/workflows/health-check.yml
name: Service Health Check

on:
  schedule:
    - cron: '*/30 * * * *'  # every 30 minutes
  workflow_dispatch:

jobs:
  health:
    runs-on: python
    steps:
      - name: Check services
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q
          python3 << 'PYEOF'
          import requests, os, sys
          import urllib3
          urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

          # Services to check: (name, url, expected_status)
          SERVICES = [
              ('Gitea', 'https://git.vish.gg', 200),
              ('Portainer', 'https://192.168.0.200:9443', 200),
              ('Authentik', 'https://sso.vish.gg', 200),
              ('Stoatchat', 'https://st.vish.gg', 200),
              ('Vaultwarden', 'https://vault.vish.gg', 200),
              ('Paperless', 'https://paperless.vish.gg', 200),
              ('Immich', 'https://photos.vish.gg', 200),
              ('Uptime Kuma', 'https://status.vish.gg', 200),
              # add more here
          ]

          down = []
          for name, url, expected in SERVICES:
              try:
                  r = requests.get(url, timeout=10, verify=False, allow_redirects=True)
                  if r.status_code == expected or r.status_code in [200, 301, 302, 401, 403]:
                      print(f"OK {name} ({r.status_code})")
                  else:
                      down.append(f"{name}: HTTP {r.status_code}")
                      print(f"ERR {name}: HTTP {r.status_code}")
              except Exception as e:
                  down.append(f"{name}: unreachable ({e})")
                  print(f"ERR {name}: {e}")

          ntfy_url = os.environ.get('NTFY_URL', '')
          if down:
              if ntfy_url:
                  requests.post(ntfy_url,
                                data='\n'.join(down),
                                headers={'Title': '🚨 Services Down', 'Priority': '5', 'Tags': 'rotating_light'},
                                timeout=10)
              sys.exit(1)
          PYEOF
```

---

### Backup verification

Checks that backup files on your NAS are recent and non-empty. Uses SSH to
check file modification times.

```yaml
# .gitea/workflows/backup-verify.yml
name: Backup Verification

on:
  schedule:
    - cron: '0 10 * * *'  # daily at 10:00 UTC (after nightly backups complete)
  workflow_dispatch:

jobs:
  verify:
    runs-on: ubuntu-22.04
    steps:
      - name: Check backups via SSH
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
          SSH_KEY: ${{ secrets.BACKUP_SSH_KEY }}  # add this secret: private SSH key
        run: |
          # ubuntu:22.04 container image ships without an SSH client
          apt-get update -q && apt-get install -y -q openssh-client

          # Write SSH key
          mkdir -p ~/.ssh
          echo "$SSH_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H 192.168.0.200 >> ~/.ssh/known_hosts 2>/dev/null

          # Check that backup directories have non-empty files modified in the last 24h
          ssh -i ~/.ssh/id_rsa homelab@192.168.0.200 << 'SSHEOF'
          MAX_AGE_HOURS=24
          BACKUP_DIRS=(
            "/volume1/backups/paperless"
            "/volume1/backups/vaultwarden"
            "/volume1/backups/immich"
          )
          FAILED=0
          for dir in "${BACKUP_DIRS[@]}"; do
            RECENT=$(find "$dir" -mmin -$((MAX_AGE_HOURS * 60)) -size +0 \( -name "*.tar*" -o -name "*.sql*" \) 2>/dev/null | head -1)
            if [ -z "$RECENT" ]; then
              echo "STALE: $dir (no recent backup found)"
              FAILED=1
            else
              echo "OK: $dir -> $(basename "$RECENT")"
            fi
          done
          exit $FAILED
          SSHEOF
```

> To use this, add a `BACKUP_SSH_KEY` secret containing the private key for a
> user with read access to your backup directories.

---

### Docker image update check

Checks for newer versions of your key container images and notifies you without
automatically pulling — gives you a heads-up to review before Watchtower does it.

```yaml
# .gitea/workflows/image-check.yml
name: Image Update Check

on:
  schedule:
    - cron: '0 9 * * 1'  # every Monday at 09:00 UTC
  workflow_dispatch:

jobs:
  check:
    runs-on: python
    steps:
      - name: Check for image updates
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q
          python3 << 'PYEOF'
          import requests, os

          # Images to track: (friendly name, image, current tag)
          IMAGES = [
              ('Authentik', 'ghcr.io/goauthentik/server', 'latest'),
              ('Gitea', 'gitea/gitea', 'latest'),
              ('Immich', 'ghcr.io/immich-app/immich-server', 'release'),
              ('Paperless', 'ghcr.io/paperless-ngx/paperless-ngx', 'latest'),
              ('Vaultwarden', 'vaultwarden/server', 'latest'),
              ('Stoatchat', 'ghcr.io/stoatchat/backend', 'latest'),
          ]

          updates = []
          for name, image, tag in IMAGES:
              try:
                  # Check Docker Hub or GHCR for the current digest of the tag
                  if image.startswith('ghcr.io/'):
                      repo = image[len('ghcr.io/'):]
                      # GHCR requires a bearer token even for public images;
                      # an anonymous pull-scope token is issued for free
                      token = requests.get(
                          f'https://ghcr.io/token?scope=repository:{repo}:pull',
                          timeout=10).json()['token']
                      r = requests.get(
                          f'https://ghcr.io/v2/{repo}/manifests/{tag}',
                          headers={'Authorization': f'Bearer {token}',
                                   'Accept': 'application/vnd.oci.image.index.v1+json'},
                          timeout=10)
                      digest = r.headers.get('Docker-Content-Digest', 'unknown')
                  else:
                      r = requests.get(
                          f'https://hub.docker.com/v2/repositories/{image}/tags/{tag}',
                          timeout=10).json()
                      digest = r.get('digest', 'unknown')
                  print(f"OK {name}: {digest[:20]}...")
                  updates.append(f"{name}: {digest[:16]}...")
              except Exception as e:
                  print(f"ERR {name}: {e}")

          ntfy_url = os.environ.get('NTFY_URL', '')
          if ntfy_url and updates:
              requests.post(ntfy_url,
                            data='\n'.join(updates),
                            headers={'Title': '📋 Weekly Image Digest Check', 'Priority': '2', 'Tags': 'docker'},
                            timeout=10)
          PYEOF
```

---

## How to add a new workflow

1. Create a file in `.gitea/workflows/yourname.yml`
2. Set `runs-on:` to one of: `ubuntu-latest`, `ubuntu-22.04`, or `python`
3. Use `${{ secrets.SECRET_NAME }}` for any tokens/passwords
4. Push to main — the runner picks it up immediately
5. View results: **Gitea → Vish/homelab → Actions**
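
A minimal example tying the steps together (hypothetical file, job, and secret names):

```yaml
# .gitea/workflows/example.yml
name: Example

on:
  push:
    branches: [main]
  workflow_dispatch:   # also allows manual runs from the UI

jobs:
  example:
    runs-on: ubuntu-latest   # runs in node:20-bookworm per the labels table
    steps:
      - uses: actions/checkout@v4
      - name: Say hello
        env:
          MY_SECRET: ${{ secrets.MY_SECRET }}  # hypothetical secret name
        run: echo "Hello from the calypso-runner"
```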
## How to run a workflow manually

Any workflow with `workflow_dispatch:` in its trigger can be run from the UI:
**Gitea → Vish/homelab → Actions → select workflow → Run workflow**

## Cron schedule reference

```
┌─ minute (0-59)
│ ┌─ hour (0-23, UTC)
│ │ ┌─ day of month (1-31)
│ │ │ ┌─ month (1-12)
│ │ │ │ ┌─ day of week (0=Sun, 6=Sat)
│ │ │ │ │
* * * * *

Examples:
0 8 * * *     = daily at 08:00 UTC
*/30 * * * *  = every 30 minutes
0 9 * * 1     = every Monday at 09:00 UTC
0 2 * * 0     = every Sunday at 02:00 UTC
```
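
To sanity-check an expression before committing it, here is a tiny matcher covering only the patterns used above (`*`, `*/N`, and exact numbers); a sketch, not a full cron parser:

```python
from datetime import datetime

def _field_matches(field: str, value: int) -> bool:
    # Supports '*', '*/N', and a plain number -- nothing else.
    if field == '*':
        return True
    if field.startswith('*/'):
        return value % int(field[2:]) == 0
    return value == int(field)

def cron_matches(expr: str, dt: datetime) -> bool:
    """True if dt satisfies all five fields of the cron expression."""
    minute, hour, dom, month, dow = expr.split()
    return (_field_matches(minute, dt.minute)
            and _field_matches(hour, dt.hour)
            and _field_matches(dom, dt.day)
            and _field_matches(month, dt.month)
            # cron uses 0=Sunday; isoweekday() returns 7 for Sunday
            and _field_matches(dow, dt.isoweekday() % 7))
```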
## Debugging a failed workflow

```bash
# View runner logs on Calypso via Portainer API
curl -sk -H "X-API-Key: $PORTAINER_TOKEN" \
  "https://192.168.0.200:9443/api/endpoints/443397/docker/containers/json?all=true" | \
  jq -r '.[] | select(.Names[0]=="/gitea-runner") | .Id' | \
  xargs -I{} curl -sk -H "X-API-Key: $PORTAINER_TOKEN" \
  "https://192.168.0.200:9443/api/endpoints/443397/docker/containers/{}/logs?stdout=1&stderr=1&tail=50" | strings
```

Or view run results directly in the Gitea UI:
**Gitea → Vish/homelab → Actions → click any run**