# Reactive Resume v5 with AI Integration - Complete Deployment Guide
## 🎯 Overview
This document provides complete deployment instructions for Reactive Resume v5 with AI integration on the Calypso server. The deployment includes Ollama for local AI features and remains compatible with existing v4 configurations.
**Deployment Date**: 2026-02-16
**Status**: ✅ PRODUCTION READY
**External URL**: https://rx.vish.gg
**AI Model**: llama3.2:3b (2GB)
## 🏗️ Architecture
```
Internet (YOUR_WAN_IP)
        ↓ Port 80/443
Router (Port Forwarding)
        ↓ 80→8880, 443→8443
Nginx Proxy Manager (Calypso:8880/8443)
        ↓ Proxy to internal services
Reactive Resume v5 Stack (Calypso:9751)
├── Resume-ACCESS-V5       (Main App)
├── Resume-DB-V5           (PostgreSQL 18)
├── Resume-BROWSERLESS-V5  (PDF Gen)
├── Resume-SEAWEEDFS-V5    (S3 Storage)
└── Resume-OLLAMA-V5       (AI Engine)
```
## 🚀 Quick Deployment
### Prerequisites
1. **Router Configuration**: Port forwarding 80→8880, 443→8443
2. **DNS**: rx.vish.gg pointing to YOUR_WAN_IP
3. **SSH Access**: To Calypso server (192.168.0.250:62000)
### Deploy Everything
```bash
# Clone the repo (if not already done)
git clone https://git.vish.gg/Vish/homelab.git
cd homelab/Calypso
# Deploy NPM first (infrastructure)
cd nginx_proxy_manager
./deploy.sh deploy
# Deploy Reactive Resume v5 with AI
cd ../reactive_resume_v5
./deploy.sh deploy
```
## 🤖 AI Integration Details
### Ollama Configuration
- **Model**: `llama3.2:3b`
- **Size**: ~2GB download
- **Purpose**: Resume assistance, content generation
- **API Endpoint**: `http://ollama:11434` (internal)
- **External API**: `http://192.168.0.250:11434`
### AI Features in Reactive Resume v5
1. **Resume Content Suggestions**: AI-powered content recommendations
2. **Job Description Analysis**: Match skills to job requirements
3. **Skills Optimization**: Suggest relevant skills based on experience
4. **Cover Letter Generation**: AI-assisted cover letter writing
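For orientation, this is roughly what a suggestion request against the local Ollama API looks like (the app makes equivalent calls internally). The `ai_suggest` helper name is hypothetical; the endpoint and model come from the configuration above, and `python3` on the client is an assumption used only to JSON-escape the prompt safely:

```bash
# Hypothetical helper: send a free-form prompt to the local Ollama API.
ai_suggest() {
  # Build the JSON body with python3 so quotes/newlines in the prompt
  # are escaped correctly, then POST it to the generate endpoint.
  payload=$(python3 -c 'import json,sys; print(json.dumps(
      {"model": "llama3.2:3b", "prompt": sys.argv[1], "stream": False}))' "$1")
  curl -s http://192.168.0.250:11434/api/generate -d "$payload"
}
# ai_suggest "Suggest three bullet points for a DevOps engineer resume"
```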
### Model Performance
- **Speed**: Fast inference on CPU (3B parameters)
- **Quality**: Good for resume/professional content
- **Memory**: ~4GB RAM usage during inference
- **Offline**: Fully local, no external API calls
## 📁 Directory Structure
```
homelab/Calypso/
├── reactive_resume_v5/
│   ├── docker-compose.yml    # Main stack definition
│   ├── deploy.sh             # GitOps deployment script
│   ├── README.md             # Service documentation
│   └── MIGRATION.md          # v4 to v5 migration notes
├── nginx_proxy_manager/
│   ├── docker-compose.yml    # NPM configuration
│   ├── deploy.sh             # NPM deployment script
│   └── README.md             # NPM documentation
└── DEPLOYMENT_SUMMARY.md     # This deployment overview
```
## 🔧 Configuration Details
### Environment Variables (Reactive Resume)
```yaml
# Core Configuration
APP_URL: "https://rx.vish.gg"
NODE_ENV: "production"
PORT: "3000"
# Database
DATABASE_URL: "postgresql://resumeuser:REDACTED_PASSWORD@resume-db:5432/resume"
# AI Integration
AI_PROVIDER: "ollama"
OLLAMA_URL: "http://ollama:11434"
OLLAMA_MODEL: "llama3.2:3b"
# Storage (S3-compatible)
S3_ENDPOINT: "http://seaweedfs:8333"
S3_BUCKET: "reactive-resume"
S3_ACCESS_KEY_ID: "seaweedfs"
S3_SECRET_ACCESS_KEY: "seaweedfs"
# PDF Generation
PRINTER_ENDPOINT: "ws://browserless:3000?token=1234567890"
# SMTP (Gmail)
SMTP_HOST: "smtp.gmail.com"
SMTP_PORT: "465"
SMTP_USER: "your-email@example.com"
SMTP_PASS: "REDACTED_PASSWORD"
SMTP_SECURE: "true"
```
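For orientation, a minimal compose sketch of how these variables wire the app to Ollama might look like the following. Image tags, service keys, and volume paths here are illustrative assumptions, not the contents of the deployed `docker-compose.yml`:

```yaml
# Illustrative sketch only - see reactive_resume_v5/docker-compose.yml
# for the actual stack definition.
services:
  app:
    container_name: Resume-ACCESS-V5
    image: amruthpillai/reactive-resume:latest
    ports:
      - "9751:3000"
    environment:
      APP_URL: "https://rx.vish.gg"
      OLLAMA_URL: "http://ollama:11434"   # internal service name
      OLLAMA_MODEL: "llama3.2:3b"
    depends_on:
      - ollama
  ollama:
    container_name: Resume-OLLAMA-V5
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - /volume1/docker/rxv5/ollama:/root/.ollama   # persists the model
```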
### Port Mapping
```yaml
Services:
- Resume-ACCESS-V5: 9751:3000 # Main application
- Resume-OLLAMA-V5: 11434:11434 # AI API
- Resume-SEAWEEDFS-V5: 9753:8333 # S3 API (download service)
- Resume-BROWSERLESS-V5: 4000:3000 # PDF generation
- nginx-proxy-manager: 8880:80, 8443:443, 81:81
```
## 🛠️ Management Commands
### Reactive Resume v5
```bash
cd homelab/Calypso/reactive_resume_v5
# Deployment
./deploy.sh deploy # Full deployment
./deploy.sh setup-ollama # Setup AI model only
# Management
./deploy.sh status # Check all services
./deploy.sh logs # View application logs
./deploy.sh restart # Restart services
./deploy.sh stop # Stop all services
./deploy.sh update # Update images and redeploy
```
### Nginx Proxy Manager
```bash
cd homelab/Calypso/nginx_proxy_manager
# Deployment
./deploy.sh deploy # Deploy NPM
./deploy.sh cleanup # Clean up broken containers
# Management
./deploy.sh status # Check NPM status
./deploy.sh logs # View NPM logs
./deploy.sh restart # Restart NPM
```
## 🌐 Network Configuration
### Router Port Forwarding
Configure your router to forward:
- **Port 80** → **192.168.0.250:8880** (HTTP)
- **Port 443** → **192.168.0.250:8443** (HTTPS)
### NPM Proxy Host Configuration
In NPM Admin UI (http://192.168.0.250:81):
1. **rx.vish.gg**:
   - Forward Hostname/IP: `192.168.0.250`
   - Forward Port: `9751`
   - Enable SSL with Cloudflare Origin Certificate
2. **rxdl.vish.gg** (Download Service):
   - Forward Hostname/IP: `192.168.0.250`
   - Forward Port: `9753`
   - Enable SSL with Cloudflare Origin Certificate
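To verify NPM routing from inside the LAN before public DNS and port forwarding are in place, you can pin the hostname to NPM's HTTPS port with `curl --resolve`. The `npm_check` helper name is hypothetical; `-k` skips certificate verification because the Cloudflare Origin Certificate is not publicly trusted:

```bash
# Hypothetical LAN-side check: resolve the proxy host directly to NPM.
npm_check() {
  # --resolve pins "$1" to Calypso's NPM HTTPS port, bypassing DNS.
  curl -skI --resolve "$1:8443:192.168.0.250" "https://$1:8443" | head -n 1
}
# npm_check rx.vish.gg    # a 200 or redirect status line means NPM routed it
# npm_check rxdl.vish.gg
```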
## 🔍 Troubleshooting
### AI Features Not Working
```bash
# Check Ollama service
./deploy.sh logs | grep ollama
# Verify model is loaded
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"
# Test AI API directly
curl http://192.168.0.250:11434/api/generate -d '{
  "model": "llama3.2:3b",
  "prompt": "Write a professional summary for a software engineer",
  "stream": false
}'
```
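With `"stream": false`, Ollama returns a single JSON object whose `response` field holds the generated text. A small helper (name hypothetical; `python3` on the client is assumed, `jq` would also work) extracts just that field:

```bash
# Hypothetical helper: print only the "response" field from Ollama's
# non-streaming JSON reply, read from stdin.
extract_response() {
  python3 -c 'import json,sys; print(json.load(sys.stdin).get("response", ""))'
}
# Usage: pipe the curl command above through it:
# curl -s http://192.168.0.250:11434/api/generate -d '{...}' | extract_response
```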
### External Access Issues
```bash
# Test DNS resolution
nslookup rx.vish.gg
# Test external connectivity
curl -I https://rx.vish.gg
# Check NPM proxy configuration
./deploy.sh status
```
### Service Health Check
```bash
# Check all containers
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker ps"
# Check specific service logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-ACCESS-V5"
```
## 📊 Performance Metrics
### Resource Usage (Typical)
- **CPU**: 2-4 cores during AI inference
- **RAM**: 6-8GB total (4GB for Ollama + 2-4GB for other services)
- **Storage**: ~15GB (2GB model + 3GB images + data)
- **Network**: Minimal (all AI processing local)
### Response Times
- **App Load**: <2s
- **AI Suggestions**: 3-10s (depending on prompt complexity)
- **PDF Generation**: 2-5s
- **File Upload**: <1s (local S3)
## 🔐 Security Considerations
### Access Control
- All services behind NPM reverse proxy
- External access only via HTTPS
- AI processing completely local (no data leaves network)
- Database credentials environment-specific
### SSL/TLS
- Cloudflare Origin Certificates in NPM
- End-to-end encryption for external access
- Internal services use HTTP (behind firewall)
## 🔄 Backup & Recovery
### Critical Data Locations
```bash
# Database backup
/volume1/docker/rxv5/db/
# File storage backup
/volume1/docker/rxv5/seaweedfs/
# AI model data
/volume1/docker/rxv5/ollama/
# NPM configuration
/volume1/docker/nginx-proxy-manager/data/
```
### Backup Commands
```bash
# Create backup
ssh Vish@192.168.0.250 -p 62000 "sudo tar -czf /volume1/backups/rxv5-$(date +%Y%m%d).tar.gz /volume1/docker/rxv5/"
# Restore from backup
ssh Vish@192.168.0.250 -p 62000 "sudo tar -xzf /volume1/backups/rxv5-YYYYMMDD.tar.gz -C /"
```
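Since each backup is date-stamped, old archives accumulate. A retention helper along these lines could prune them; the function name and the 14-day policy are assumptions, and taking the directory as an argument lets you dry-test it anywhere before pointing it at `/volume1/backups`:

```bash
# Hypothetical retention helper: delete rxv5 backup archives older than
# a given number of days.
prune_backups() {
  # $1 = backup directory, $2 = retention in days
  # -mtime +N matches files modified more than N days ago;
  # -print lists each match before -delete removes it.
  find "$1" -name 'rxv5-*.tar.gz' -mtime "+$2" -print -delete
}
# prune_backups /volume1/backups 14
```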
## 📈 Monitoring
### Health Endpoints
- **Application**: http://192.168.0.250:9751/health
- **Database**: PostgreSQL health checks via Docker
- **AI Service**: http://192.168.0.250:11434/api/tags
- **Storage**: SeaweedFS S3 API health
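The HTTP endpoints above can be probed with a small reachability helper (name hypothetical): `curl -f` treats HTTP status codes of 400 and above as failures, and `-m 5` caps each probe at five seconds so a hung service doesn't stall the check:

```bash
# Hypothetical probe: report OK/FAIL for a single health endpoint.
check() {
  if curl -fsS -m 5 -o /dev/null "$1"; then
    echo "OK   $1"
  else
    echo "FAIL $1"
  fi
}
# check http://192.168.0.250:9751/health
# check http://192.168.0.250:11434/api/tags
```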
### Log Locations
```bash
# Application logs
sudo /usr/local/bin/docker logs Resume-ACCESS-V5
# AI service logs
sudo /usr/local/bin/docker logs Resume-OLLAMA-V5
# Database logs
sudo /usr/local/bin/docker logs Resume-DB-V5
```
## 🎉 Success Criteria
- ✅ **External Access**: https://rx.vish.gg responds with HTTP 200
- ✅ **AI Integration**: Ollama model loaded and responding
- ✅ **PDF Generation**: Browserless service healthy
- ✅ **File Storage**: SeaweedFS S3 API functional
- ✅ **Database**: PostgreSQL healthy and accessible
- ✅ **Proxy**: NPM routing traffic correctly
## 📞 Support
For issues with this deployment:
1. Check service status: `./deploy.sh status`
2. Review logs: `./deploy.sh logs`
3. Verify network connectivity and DNS
4. Ensure router port forwarding is correct
5. Check NPM proxy host configuration
---
**Last Updated**: 2026-02-16
**Deployed By**: OpenHands GitOps
**Version**: Reactive Resume v5.0.9 + Ollama llama3.2:3b