# Quick Start Guide

## Overview

This guide will help you deploy your first service in the homelab environment within 15 minutes. We'll use Uptime Kuma as the example service since it's lightweight, useful, and demonstrates the core deployment workflow.

## Prerequisites Check

Before starting, ensure you have:

- [ ] SSH access to a homelab server
- [ ] Docker and Docker Compose installed
- [ ] Git repository access
- [ ] Basic understanding of Docker concepts

```bash
# Quick verification
ssh homelab@server-ip
docker --version
docker-compose --version
git --version
```

## Step 1: Choose Your Deployment Method

### Option A: Portainer (Recommended for Beginners)

- Web-based interface
- Visual stack management
- Built-in monitoring
- Easy rollbacks

### Option B: Command Line (Recommended for Advanced Users)

- Direct Docker Compose
- Faster deployment
- Scriptable automation
- Full control

## Step 2: Deploy Uptime Kuma (Portainer Method)

### Access Portainer

1. Navigate to [Portainer](http://atlantis.vish.local:9000)
2. Log in with your credentials
3. Select the **local** endpoint

### Create New Stack

1. Go to **Stacks** → **Add Stack**
2. Name: `uptime-kuma-quickstart`
3. Choose **Web Editor**

### Paste Configuration

```yaml
version: '3.8'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma-quickstart
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - PUID=1000
      - PGID=1000
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.uptime-kuma.rule=Host(`uptime.vish.local`)"
      - "traefik.http.services.uptime-kuma.loadbalancer.server.port=3001"

volumes:
  uptime-kuma-data:
    driver: local
```

### Deploy Stack

1. Click **Deploy the Stack**
2. Wait for deployment to complete
3. Check the **Containers** tab for running status

### Access Service

- Direct: http://server-ip:3001
- Domain: http://uptime.vish.local (if DNS configured)

## Step 3: Deploy Uptime Kuma (Command Line Method)

### Clone Repository

```bash
# Clone homelab repository
git clone https://git.vish.gg/Vish/homelab.git
cd homelab

# Navigate to appropriate server directory
cd hosts/raspberry-pi-5-vish  # or your target server
```

### Create Service File

```bash
# Create uptime-kuma-quickstart.yml
cat > uptime-kuma-quickstart.yml << 'EOF'
version: '3.8'

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma-quickstart
    restart: unless-stopped
    ports:
      - "3001:3001"
    volumes:
      - uptime-kuma-data:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - PUID=1000
      - PGID=1000

volumes:
  uptime-kuma-data:
    driver: local
EOF
```

### Deploy Service

```bash
# Deploy with Docker Compose
docker-compose -f uptime-kuma-quickstart.yml up -d

# Check status
docker-compose -f uptime-kuma-quickstart.yml ps

# View logs
docker-compose -f uptime-kuma-quickstart.yml logs -f
```

## Step 4: Initial Configuration

### First-Time Setup

1. Access Uptime Kuma at http://server-ip:3001
2. Create admin account:
   - Username: `admin`
   - Password: "REDACTED_PASSWORD"
   - Email: `admin@vish.local`

### Add Your First Monitor

1. Click **Add New Monitor**
2. Configure a basic HTTP monitor:
   - **Monitor Type**: HTTP(s)
   - **Friendly Name**: `Homelab Wiki`
   - **URL**: `https://git.vish.gg/Vish/homelab/wiki`
   - **Heartbeat Interval**: `60 seconds`
   - **Max Retries**: `3`
3. Click **Save**

### Configure Notifications (Optional)

Go to **Settings** → **Notifications** and add a notification method.
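Before wiring a notification into Uptime Kuma, it can help to smoke-test the channel itself. The sketch below publishes a test message to an ntfy topic — ntfy's publish API takes the message as a POST body to the topic URL, with an optional `Title` header. The topic URL here is the homelab example used in this guide; adjust it for your own server.

```python
import urllib.request

# Example topic from this guide -- adjust to your own ntfy server/topic.
NTFY_TOPIC_URL = "http://homelab-vm.vish.local/homelab-alerts"

def build_ntfy_request(topic_url: str, message: str,
                       title: str = "Uptime Kuma smoke test") -> urllib.request.Request:
    """Build an ntfy publish request: the message body is POSTed to the
    topic URL, and the optional Title header sets the notification title."""
    return urllib.request.Request(
        topic_url,
        data=message.encode("utf-8"),
        headers={"Title": title},
        method="POST",
    )

if __name__ == "__main__":
    req = build_ntfy_request(NTFY_TOPIC_URL, "Test notification from the quick start guide")
    # Requires the ntfy server to be reachable from this machine.
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
```

If the test message arrives on your subscribed devices, the same topic URL can be pasted straight into Uptime Kuma's NTFY notification form.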
Available notification methods:

- **NTFY**: `http://homelab-vm.vish.local:80/homelab-alerts`
- **Email**: Configure SMTP settings
- **Discord**: Add webhook URL

## Step 5: Verification & Testing

### Health Check

```bash
# Check container health
docker ps | grep uptime-kuma

# Test HTTP endpoint
curl -I http://localhost:3001

# Check logs for errors
docker logs uptime-kuma-quickstart
```

### Monitor Verification

1. Wait 2-3 minutes for the first heartbeat
2. Verify the monitor shows **UP** status
3. Check response time graphs
4. Test notification (if configured)

### Resource Usage

```bash
# Check resource consumption
docker stats uptime-kuma-quickstart

# Expected usage:
# CPU: < 5%
# Memory: < 100MB
# Network: Minimal
```

## Step 6: Integration with Homelab

### Add to Monitoring Stack

```yaml
# Add to existing monitoring docker-compose.yml
services:
  uptime-kuma:
    # ... existing configuration ...
    networks:
      - monitoring
    labels:
      - "monitoring.enable=true"
      - "backup.enable=true"

networks:
  monitoring:
    external: true
```

### Configure Reverse Proxy

```yaml
# Nginx Proxy Manager configuration
# Host: uptime.vish.local
# Forward Hostname/IP: uptime-kuma-quickstart
# Forward Port: 3001
# SSL: Let's Encrypt or self-signed
```

### Add to Backup Schedule

```bash
# Add volume to backup script
echo "uptime-kuma-data" >> /etc/backup/volumes.list

# Test backup
./scripts/backup-volumes.sh uptime-kuma-data
```

## Common Quick Start Issues

### Port Already in Use

```bash
# Check what's using port 3001
netstat -tulpn | grep :3001
```

Solution: change the external port in the compose file:

```yaml
    ports:
      - "3002:3001"  # Use port 3002 instead
```

### Permission Denied

```bash
# Fix volume permissions
sudo chown -R 1000:1000 /var/lib/docker/volumes/uptime-kuma-data
```

Or use a named volume (recommended):

```yaml
volumes:
  uptime-kuma-data:
    driver: local
```

### Container Won't Start

```bash
# Check Docker daemon
systemctl status docker

# Check logs
docker logs uptime-kuma-quickstart

# Restart container
docker-compose restart uptime-kuma
```

### Can't Access Web Interface

```bash
# Check firewall
sudo ufw status
sudo ufw allow 3001/tcp

# Check container port binding
docker port uptime-kuma-quickstart

# Test local connectivity
curl http://localhost:3001
```

## Next Steps

### Expand Monitoring

1. **Add More Monitors**:
   - Internal services (Plex, Nextcloud, etc.)
   - External websites
   - API endpoints
   - Database connections
2. **Configure Status Pages**:
   - Public status page for external services
   - Internal dashboard for homelab services
   - Custom branding and themes
3. **Set Up Alerting**:
   - Email notifications for critical services
   - NTFY push notifications
   - Discord/Slack integration
   - Escalation policies

### Deploy More Services

1. **[Grafana](../services/individual/grafana.md)** - Advanced monitoring dashboards
2. **[Nextcloud](../services/individual/nextcloud.md)** - Personal cloud storage
3. **[Plex](../services/individual/plex.md)** - Media server
4. **[Portainer](../services/individual/portainer.md)** - Container management

### Learn Advanced Concepts

1. **[GitOps Deployment](../admin/gitops-deployment-guide.md)** - Infrastructure as code
2. **[Service Categories](20-Service-Categories.md)** - Explore all available services
3. **[Architecture Overview](03-Architecture-Overview.md)** - Understand the infrastructure
4. **[Security Guidelines](../security/README.md)** - Harden your deployment

## Deployment Templates

### Basic Service Template

```yaml
version: '3.8'

services:
  service-name:
    image: organization/service:latest
    container_name: service-name
    restart: unless-stopped
    ports:
      - "8080:8080"
    volumes:
      - service-data:/data
      - service-config:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York

volumes:
  service-data:
  service-config:
```

### Service with Database

```yaml
version: '3.8'

services:
  app:
    image: app:latest
    container_name: app
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=db
      - DB_USER=appuser
      - DB_PASS="REDACTED_PASSWORD"
      - DB_NAME=appdb

  db:
    image: postgres:15
    container_name: app-db
    restart: unless-stopped
    environment:
      - POSTGRES_USER=appuser
      - POSTGRES_PASSWORD="REDACTED_PASSWORD"
      - POSTGRES_DB=appdb
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```

### Service with Reverse Proxy

```yaml
version: '3.8'

services:
  app:
    image: app:latest
    container_name: app
    restart: unless-stopped
    expose:
      - "8080"
    networks:
      - proxy
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app.rule=Host(`app.vish.local`)"
      - "traefik.http.services.app.loadbalancer.server.port=8080"

networks:
  proxy:
    external: true
```

## Automation Scripts

### Quick Deploy Script

```bash
#!/bin/bash
# quick-deploy.sh

SERVICE_NAME=$1
SERVER=$2

if [ -z "$SERVICE_NAME" ] || [ -z "$SERVER" ]; then
    echo "Usage: $0 <service-name> <server>"
    echo "Example: $0 uptime-kuma raspberry-pi"
    exit 1
fi

echo "Deploying $SERVICE_NAME on $SERVER..."

# Navigate to server directory
cd "hosts/$SERVER" || exit 1

# Check if service file exists
if [ ! -f "$SERVICE_NAME.yml" ]; then
    echo "Error: $SERVICE_NAME.yml not found in hosts/$SERVER/"
    exit 1
fi

# Deploy service
docker-compose -f "$SERVICE_NAME.yml" up -d

# Wait for service to start
sleep 10

# Check status
docker-compose -f "$SERVICE_NAME.yml" ps

echo "Deployment complete!"
echo "Check logs with: docker-compose -f hosts/$SERVER/$SERVICE_NAME.yml logs -f"
```

### Health Check Script

```bash
#!/bin/bash
# health-check.sh

SERVICE_NAME=$1
EXPECTED_PORT=$2

if [ -z "$SERVICE_NAME" ] || [ -z "$EXPECTED_PORT" ]; then
    echo "Usage: $0 <service-name> <expected-port>"
    exit 1
fi

echo "Checking health of $SERVICE_NAME on port $EXPECTED_PORT..."

# Check container status
if docker ps | grep -q "$SERVICE_NAME"; then
    echo "✅ Container is running"
else
    echo "❌ Container is not running"
    exit 1
fi

# Check port accessibility
if curl -f "http://localhost:$EXPECTED_PORT" > /dev/null 2>&1; then
    echo "✅ Service is responding"
else
    echo "❌ Service is not responding"
    exit 1
fi

echo "✅ Health check passed!"
```

## Support & Resources

### Documentation

- **[Full Documentation](../README.md)** - Complete homelab documentation
- **[Service Categories](20-Service-Categories.md)** - All available services
- **[Troubleshooting](40-Common-Issues.md)** - Common issues and solutions

### Community

- **[Homelab Subreddit](https://reddit.com/r/homelab)** - Community discussions
- **[Self-Hosted](https://reddit.com/r/selfhosted)** - Self-hosting community
- **[Docker Community](https://forums.docker.com/)** - Docker support

### Tools

- **[Portainer](http://atlantis.vish.local:9000)** - Container management
- **[Grafana](http://atlantis.vish.local:3000)** - Monitoring dashboards
- **[Uptime Kuma](http://raspberry-pi.vish.local:3001)** - Service monitoring

---

*This quick start guide gets you up and running with your first service deployment. Once comfortable with the basics, explore the comprehensive documentation for advanced configurations and additional services.*
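As a small addendum related to the "Port Already in Use" issue earlier: before remapping a service's external port in a compose file, you can scan for a free host port instead of guessing. This is a sketch, not part of the homelab tooling; the 3001-3010 range is just an example.

```python
import socket

def find_free_port(start: int, end: int, host: str = "127.0.0.1") -> int:
    """Return the first TCP port in [start, end] that can be bound on `host`,
    i.e. one that no other process is currently listening on."""
    for port in range(start, end + 1):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind((host, port))
            except OSError:
                continue  # port is taken, try the next one
            return port
    raise RuntimeError(f"no free TCP port in {start}-{end}")

if __name__ == "__main__":
    # e.g. pick the external half of a "3002:3001" style port mapping
    print(find_free_port(3001, 3010))
```

Note this only checks the moment it runs; another process could grab the port before your container starts, so treat the result as a suggestion, not a reservation.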