# Gitea Actions & Runner Guide

How to use the calypso-runner for homelab automation.
## Overview

The calypso-runner is a Gitea Act Runner running on Calypso (`gitea/act_runner:latest`). It picks up jobs from any workflow in any repo it's registered to and executes them in Docker containers. A single runner handles all workflows sequentially — for a homelab this is plenty.
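For reference, such a runner is typically deployed as a container alongside Gitea. A hypothetical compose sketch (the actual paths, volumes, and token on Calypso may differ; `GITEA_INSTANCE_URL`, `GITEA_RUNNER_REGISTRATION_TOKEN`, and `GITEA_RUNNER_NAME` are the documented act_runner registration variables):

```yaml
# compose.yaml — illustrative sketch, not the live Calypso config
services:
  gitea-runner:
    image: gitea/act_runner:latest
    restart: unless-stopped
    environment:
      GITEA_INSTANCE_URL: https://git.vish.gg
      GITEA_RUNNER_REGISTRATION_TOKEN: ${RUNNER_TOKEN}  # from Gitea → Site Administration → Actions → Runners
      GITEA_RUNNER_NAME: calypso-runner
    volumes:
      - ./runner-data:/data                             # runner state and registration
      - /var/run/docker.sock:/var/run/docker.sock       # runner launches job containers via the host Docker
```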
Runner labels (which `runs-on:` values work):

| `runs-on:` value | Container used |
|---|---|
| `ubuntu-latest` | `node:20-bookworm` |
| `ubuntu-22.04` | `ubuntu:22.04` |
| `python` | `python:3.11` |
Workflows go in `.gitea/workflows/*.yml` and use the same syntax as GitHub Actions.
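As a sanity check that the runner is wired up, a minimal workflow might look like this (filename, workflow name, and job name are all illustrative):

```yaml
# .gitea/workflows/hello.yml — hypothetical smoke-test workflow
name: Hello
on: [push]
jobs:
  hello:
    runs-on: ubuntu-latest   # runs in the node:20-bookworm container
    steps:
      - uses: actions/checkout@v4
      - run: echo "Hello from $(hostname)"
```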
## Existing workflows

| File | Trigger | What it does |
|---|---|---|
| `mirror-to-public.yaml` | push to `main` | Sanitizes repo and force-pushes to `homelab-optimized` |
| `validate.yml` | every push + PR | YAML lint + secret scan on changed files |
| `portainer-deploy.yml` | push to `main` (`hosts/` changed) | Auto-redeploys matching Portainer stacks |
## Repo secrets

Stored at: Gitea → Vish/homelab → Settings → Secrets → Actions

| Secret | Used by | Notes |
|---|---|---|
| `PUBLIC_REPO_TOKEN` | mirror-to-public | Write access to `homelab-optimized` |
| `PUBLIC_REPO_URL` | mirror-to-public | URL of the public mirror repo |
| `PORTAINER_TOKEN` | portainer-deploy | `ptr_*` Portainer API token |
| `GIT_TOKEN` | portainer-deploy | Gitea token for Portainer git auth during redeploy |
| `NTFY_URL` | portainer-deploy | Full ntfy topic URL (optional) |

Note: Gitea reserves the `GITEA_` prefix for built-in variables — use `GIT_TOKEN`, not `GITEA_TOKEN`.
## Workflow recipes

### DNS record audit

Checks that every domain/subdomain resolves to the expected IP. Add your domains to the list, and it will alert via ntfy if anything is wrong.
```yaml
# .gitea/workflows/dns-audit.yml
name: DNS Audit

on:
  schedule:
    - cron: '0 8 * * *'    # daily at 08:00 UTC
  workflow_dispatch:       # also runnable manually from the Gitea UI

jobs:
  dns-check:
    runs-on: python
    steps:
      - name: Check DNS records
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests dnspython -q
          python3 << 'PYEOF'
          import dns.resolver, requests, os, sys

          # Map hostname -> expected IP (or CNAME target)
          EXPECTED = {
              'vish.gg': '1.2.3.4',            # replace with your WAN IP
              'git.vish.gg': '192.168.0.250',
              'st.vish.gg': '1.2.3.4',
              'api.st.vish.gg': '1.2.3.4',
              'events.st.vish.gg': '1.2.3.4',
              # add more here
          }

          resolver = dns.resolver.Resolver()
          resolver.timeout = 5
          resolver.lifetime = 5

          failures = []
          for host, expected_ip in EXPECTED.items():
              try:
                  answers = resolver.resolve(host, 'A')
                  actual = [r.address for r in answers]
                  if expected_ip not in actual:
                      failures.append(f"{host}: expected {expected_ip}, got {', '.join(actual)}")
                  else:
                      print(f"OK {host} -> {expected_ip}")
              except Exception as e:
                  failures.append(f"{host}: lookup failed ({e})")
                  print(f"ERR {host}: {e}")

          ntfy_url = os.environ.get('NTFY_URL', '')
          if failures:
              print(f"\n{len(failures)} failure(s):")
              for f in failures:
                  print(f"  {f}")
              if ntfy_url:
                  requests.post(ntfy_url,
                                data='\n'.join(failures),
                                headers={'Title': '⚠️ DNS Audit Failed', 'Priority': '4', 'Tags': 'warning'},
                                timeout=10)
              sys.exit(1)
          else:
              print(f"\nAll {len(EXPECTED)} records OK")
          PYEOF
```
### Ansible dry-run on changed playbooks

Validates any Ansible playbook you change before it gets used in production. Requires your inventory to be reachable from the runner.
```yaml
# .gitea/workflows/ansible-check.yml
name: Ansible Check

on:
  push:
    paths: ['ansible/**']
  pull_request:
    paths: ['ansible/**']

jobs:
  ansible-lint:
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2   # needed so HEAD~1 exists for the diffs below
      - name: Install Ansible
        run: |
          apt-get update -q && apt-get install -y -q ansible ansible-lint
      - name: Syntax check changed playbooks
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD | grep 'ansible/.*\.yml$' || true)
          if [ -z "$CHANGED" ]; then
            echo "No playbooks changed"
            exit 0
          fi
          for playbook in $CHANGED; do
            echo "Checking: $playbook"
            ansible-playbook --syntax-check "$playbook" -i ansible/homelab/inventory/ || exit 1
          done
      - name: Lint changed playbooks
        run: |
          CHANGED=$(git diff --name-only HEAD~1 HEAD | grep 'ansible/.*\.yml$' || true)
          if [ -z "$CHANGED" ]; then exit 0; fi
          ansible-lint $CHANGED --exclude ansible/archive/
```
### Notify on push

Sends an ntfy notification with a summary of every push to main — who pushed, what changed, and the short commit SHA.
```yaml
# .gitea/workflows/notify-push.yml
name: Notify on Push

on:
  push:
    branches: [main]

jobs:
  notify:
    runs-on: python
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 2   # needed so HEAD~1 exists for the diff below
      - name: Send push notification
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q
          python3 << 'PYEOF'
          import subprocess, requests, os

          ntfy_url = os.environ.get('NTFY_URL', '')
          if not ntfy_url:
              print("NTFY_URL not set, skipping")
              raise SystemExit(0)

          author = subprocess.check_output(
              ['git', 'log', '-1', '--format=%an'], text=True).strip()
          message = subprocess.check_output(
              ['git', 'log', '-1', '--format=%s'], text=True).strip()
          changed = subprocess.check_output(
              ['git', 'diff', '--name-only', 'HEAD~1', 'HEAD'], text=True).strip()
          file_count = len(changed.splitlines()) if changed else 0
          sha = subprocess.check_output(
              ['git', 'rev-parse', '--short', 'HEAD'], text=True).strip()

          body = f"{message}\n{file_count} file(s) changed\nCommit: {sha}"
          requests.post(ntfy_url,
                        data=body,
                        headers={'Title': f'📦 Push by {author}', 'Priority': '2', 'Tags': 'inbox_tray'},
                        timeout=10)
          print(f"Notified: {message}")
          PYEOF
```
### Scheduled service health check

Pings all your services and sends an alert if any are down. Runs every 30 minutes.
```yaml
# .gitea/workflows/health-check.yml
name: Service Health Check

on:
  schedule:
    - cron: '*/30 * * * *'   # every 30 minutes
  workflow_dispatch:

jobs:
  health:
    runs-on: python
    steps:
      - name: Check services
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q
          python3 << 'PYEOF'
          import requests, os, sys
          import urllib3

          # Self-signed certs on the LAN: skip verification and silence the warning
          urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

          # Services to check: (name, url, expected_status)
          SERVICES = [
              ('Gitea', 'https://git.vish.gg', 200),
              ('Portainer', 'https://192.168.0.200:9443', 200),
              ('Authentik', 'https://sso.vish.gg', 200),
              ('Stoatchat', 'https://st.vish.gg', 200),
              ('Vaultwarden', 'https://vault.vish.gg', 200),
              ('Paperless', 'https://paperless.vish.gg', 200),
              ('Immich', 'https://photos.vish.gg', 200),
              ('Uptime Kuma', 'https://status.vish.gg', 200),
              # add more here
          ]

          down = []
          for name, url, expected in SERVICES:
              try:
                  r = requests.get(url, timeout=10, verify=False, allow_redirects=True)
                  # Auth walls (401/403) and redirects still mean the service is up
                  if r.status_code == expected or r.status_code in (200, 301, 302, 401, 403):
                      print(f"OK {name} ({r.status_code})")
                  else:
                      down.append(f"{name}: HTTP {r.status_code}")
                      print(f"ERR {name}: HTTP {r.status_code}")
              except Exception as e:
                  down.append(f"{name}: unreachable ({e})")
                  print(f"ERR {name}: {e}")

          ntfy_url = os.environ.get('NTFY_URL', '')
          if down:
              if ntfy_url:
                  requests.post(ntfy_url,
                                data='\n'.join(down),
                                headers={'Title': '🚨 Services Down', 'Priority': '5', 'Tags': 'rotating_light'},
                                timeout=10)
              sys.exit(1)
          PYEOF
```
### Backup verification

Checks that backup files on your NAS are recent and non-empty. Uses SSH to check file modification times.
```yaml
# .gitea/workflows/backup-verify.yml
name: Backup Verification

on:
  schedule:
    - cron: '0 10 * * *'   # daily at 10:00 UTC (after nightly backups complete)
  workflow_dispatch:

jobs:
  verify:
    runs-on: ubuntu-22.04
    steps:
      - name: Check backups via SSH
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
          SSH_KEY: ${{ secrets.BACKUP_SSH_KEY }}   # add this secret: private SSH key
        run: |
          # Write SSH key
          mkdir -p ~/.ssh
          echo "$SSH_KEY" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          ssh-keyscan -H 192.168.0.200 >> ~/.ssh/known_hosts 2>/dev/null

          # Check that backup directories have non-empty files modified in the last 24h
          ssh -i ~/.ssh/id_rsa homelab@192.168.0.200 << 'SSHEOF'
          MAX_AGE_HOURS=24
          BACKUP_DIRS=(
            "/volume1/backups/paperless"
            "/volume1/backups/vaultwarden"
            "/volume1/backups/immich"
          )
          FAILED=0
          for dir in "${BACKUP_DIRS[@]}"; do
            RECENT=$(find "$dir" -mmin -$((MAX_AGE_HOURS * 60)) -size +0c \
              \( -name "*.tar*" -o -name "*.sql*" \) 2>/dev/null | head -1)
            if [ -z "$RECENT" ]; then
              echo "STALE: $dir (no recent backup found)"
              FAILED=1
            else
              echo "OK: $dir -> $(basename "$RECENT")"
            fi
          done
          exit $FAILED
          SSHEOF
```
To use this, add a `BACKUP_SSH_KEY` secret containing the private key for a user with read access to your backup directories.
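One way to set that up is a dedicated keypair just for CI (the filename and comment below are illustrative; the user and host are the ones the workflow connects to):

```shell
# Generate a passphrase-less keypair used only for backup verification
ssh-keygen -t ed25519 -N "" -C "gitea-backup-verify" -f ./backup_verify_key

# Paste the PRIVATE half into the BACKUP_SSH_KEY repo secret
cat ./backup_verify_key

# Install the PUBLIC half on the NAS user, e.g.:
#   ssh-copy-id -i ./backup_verify_key.pub homelab@192.168.0.200
```

Delete the local copy of the private key once the secret is stored.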
### Docker image update check

Checks for newer versions of your key container images and notifies you without automatically pulling — a heads-up to review before Watchtower does it.
```yaml
# .gitea/workflows/image-check.yml
name: Image Update Check

on:
  schedule:
    - cron: '0 9 * * 1'   # every Monday at 09:00 UTC
  workflow_dispatch:

jobs:
  check:
    runs-on: python
    steps:
      - name: Check for image updates
        env:
          NTFY_URL: ${{ secrets.NTFY_URL }}
        run: |
          pip install requests -q
          python3 << 'PYEOF'
          import requests, os

          # Images to track: (friendly name, image, current tag)
          IMAGES = [
              ('Authentik', 'ghcr.io/goauthentik/server', 'latest'),
              ('Gitea', 'gitea/gitea', 'latest'),
              ('Immich', 'ghcr.io/immich-app/immich-server', 'release'),
              ('Paperless', 'ghcr.io/paperless-ngx/paperless-ngx', 'latest'),
              ('Vaultwarden', 'vaultwarden/server', 'latest'),
              ('Stoatchat', 'ghcr.io/stoatchat/backend', 'latest'),
          ]

          updates = []
          for name, image, tag in IMAGES:
              try:
                  if image.startswith('ghcr.io/'):
                      # GHCR requires a bearer token even for public images;
                      # an anonymous pull-scope token is free to fetch
                      repo = image[len('ghcr.io/'):]
                      token = requests.get(
                          f'https://ghcr.io/token?scope=repository:{repo}:pull',
                          timeout=10).json()['token']
                      r = requests.get(
                          f'https://ghcr.io/v2/{repo}/manifests/{tag}',
                          headers={'Accept': 'application/vnd.oci.image.index.v1+json',
                                   'Authorization': f'Bearer {token}'},
                          timeout=10)
                      digest = r.headers.get('Docker-Content-Digest', 'unknown')
                  else:
                      # Docker Hub exposes tag digests via its public API
                      r = requests.get(
                          f'https://hub.docker.com/v2/repositories/{image}/tags/{tag}',
                          timeout=10).json()
                      digest = r.get('digest', 'unknown')
                  print(f"OK {name}: {digest[:20]}...")
                  updates.append(f"{name}: {digest[:16]}...")
              except Exception as e:
                  print(f"ERR {name}: {e}")

          ntfy_url = os.environ.get('NTFY_URL', '')
          if ntfy_url and updates:
              requests.post(ntfy_url,
                            data='\n'.join(updates),
                            headers={'Title': '📋 Weekly Image Digest Check', 'Priority': '2', 'Tags': 'docker'},
                            timeout=10)
          PYEOF
```
## How to add a new workflow

- Create a file in `.gitea/workflows/yourname.yml`
- Set `runs-on:` to one of: `ubuntu-latest`, `ubuntu-22.04`, or `python`
- Use `${{ secrets.SECRET_NAME }}` for any tokens/passwords
- Push to main — the runner picks it up immediately
- View results: Gitea → Vish/homelab → Actions
## How to run a workflow manually

Any workflow with `workflow_dispatch:` in its trigger can be run from the UI:

Gitea → Vish/homelab → Actions → select workflow → Run workflow
## Cron schedule reference

```
┌─ minute (0-59)
│ ┌─ hour (0-23, UTC)
│ │ ┌─ day of month (1-31)
│ │ │ ┌─ month (1-12)
│ │ │ │ ┌─ day of week (0=Sun, 6=Sat)
│ │ │ │ │
* * * * *
```

Examples:

```
0 8 * * *     = daily at 08:00 UTC
*/30 * * * *  = every 30 minutes
0 9 * * 1     = every Monday at 09:00 UTC
0 2 * * 0     = every Sunday at 02:00 UTC
```
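Cron fields are easy to misread. As a sanity check, here is a small stdlib-only sketch that reports when an expression will next fire; it handles only the `*`, `N`, and `*/N` forms used in this guide (no ranges, lists, or names), and the function names are illustrative:

```python
from datetime import datetime, timedelta

def field_matches(field: str, value: int) -> bool:
    """Match one cron field: '*', a literal number, or a '*/N' step."""
    if field == '*':
        return True
    if field.startswith('*/'):
        return value % int(field[2:]) == 0
    return value == int(field)

def next_run(cron: str, after: datetime) -> datetime:
    """Return the first minute strictly after `after` matching `cron`."""
    minute, hour, dom, month, dow = cron.split()
    t = after.replace(second=0, microsecond=0) + timedelta(minutes=1)
    for _ in range(366 * 24 * 60):  # scan at most one year ahead, minute by minute
        if (field_matches(minute, t.minute) and field_matches(hour, t.hour)
                and field_matches(dom, t.day) and field_matches(month, t.month)
                # cron uses 0=Sun..6=Sat; Python's weekday() uses 0=Mon..6=Sun
                and field_matches(dow, (t.weekday() + 1) % 7)):
            return t
        t += timedelta(minutes=1)
    raise ValueError(f"no match within a year: {cron!r}")

# Example: when does the weekly image check fire next after Mon 2024-01-01 10:00?
print(next_run('0 9 * * 1', datetime(2024, 1, 1, 10, 0)))
```

Times are naive datetimes here; interpret them as UTC to match the Gitea scheduler.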
## Debugging a failed workflow

```shell
# View runner logs on Calypso via the Portainer API
curl -sk -H "X-API-Key: $PORTAINER_TOKEN" \
  "https://192.168.0.200:9443/api/endpoints/443397/docker/containers/json?all=true" | \
  jq -r '.[] | select(.Names[0]=="/gitea-runner") | .Id' | \
  xargs -I{} curl -sk -H "X-API-Key: $PORTAINER_TOKEN" \
  "https://192.168.0.200:9443/api/endpoints/443397/docker/containers/{}/logs?stdout=1&stderr=1&tail=50" | strings
```

Or view run results directly in the Gitea UI: Gitea → Vish/homelab → Actions → click any run.