Sanitized mirror from private repository - 2026-03-11 06:48:12 UTC
230
hosts/synology/calypso/reactive_resume_v5/AI_MODEL_GUIDE.md
Normal file
@@ -0,0 +1,230 @@

# Reactive Resume v5 - AI Model Configuration Guide

## 🤖 Current AI Setup

### Ollama Configuration
- **Model**: `llama3.2:3b`
- **Provider**: `ollama`
- **Endpoint**: `http://ollama:11434` (internal)
- **External API**: `http://192.168.0.250:11434`

## 📋 Model Details for Reactive Resume v5

### Environment Variables
Add these to your `docker-compose.yml` environment section:

```yaml
environment:
  # AI Integration (Ollama) - v5 uses the OpenAI-compatible API
  OPENAI_API_KEY: "ollama"                  # Dummy key for local Ollama
  OPENAI_BASE_URL: "http://ollama:11434/v1" # Ollama OpenAI-compatible endpoint
  OPENAI_MODEL: "llama3.2:3b"               # Model name
```
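
Ollama serves the OpenAI-compatible `/v1/chat/completions` route these variables point at, so you can sanity-check the exact settings before wiring up the app. A sketch — the prompt and payload are illustrative, and the commented `curl` assumes you run it from inside the Docker network:

```shell
# Request body in the OpenAI chat-completions shape that points at
# OPENAI_BASE_URL; "model" must match OPENAI_MODEL.
BODY='{
  "model": "llama3.2:3b",
  "messages": [
    {"role": "user", "content": "Improve this resume bullet: Managed servers"}
  ]
}'

# From inside the Docker network (or swap in http://192.168.0.250:11434/v1):
# curl -s http://ollama:11434/v1/chat/completions \
#   -H "Content-Type: application/json" -d "$BODY"

# Offline check that the payload is well-formed JSON:
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload OK"
```

If the `curl` returns a `choices` array, the app-side variables are correct.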

### Model Specifications

#### llama3.2:3b
- **Size**: ~2GB download
- **Parameters**: 3 billion
- **Context Length**: 128K tokens (model maximum; Ollama defaults to a much smaller window)
- **Use Case**: General text generation, resume assistance
- **Performance**: Fast inference on CPU
- **Memory**: ~4GB RAM during inference
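
Those size and memory figures follow from a back-of-the-envelope rule: a 4-bit-quantized model stores roughly 0.5-0.6 bytes per parameter on disk and needs about twice its file size in RAM once the KV cache and runtime overhead are counted. A sketch of the arithmetic (approximations, not measurements):

```shell
# Approximate footprint of a q4-quantized 3B-parameter model.
params_billions=3
bytes_per_param=0.6   # ~4-bit weights plus metadata

awk -v p="$params_billions" -v b="$bytes_per_param" 'BEGIN {
  disk_gb = p * b        # download size on disk
  ram_gb  = disk_gb * 2  # resident size during inference (rule of thumb)
  printf "disk ~%.1f GB, RAM ~%.1f GB\n", disk_gb, ram_gb
}'
```

This lands close to the ~2GB download and ~4GB RAM listed above; rerun it with other parameter counts when sizing alternatives.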

## 🔧 Alternative Models

If you want to use a different model, here are recommended options (set them via `OPENAI_MODEL`, matching the compose file):

### Lightweight Options (< 4GB RAM)
```yaml
# Fastest, smallest
OPENAI_MODEL: "llama3.2:1b" # ~1GB, very fast

# Balanced performance
OPENAI_MODEL: "llama3.2:3b" # ~2GB, good quality (current)

# Better quality, still reasonable
OPENAI_MODEL: "qwen2.5:3b" # ~2GB, good for professional text
```

### High-Quality Options (8GB+ RAM)
```yaml
# Better reasoning
OPENAI_MODEL: "llama3.1:8b" # ~5GB, higher quality

# Excellent for professional content
OPENAI_MODEL: "qwen2.5:7b" # ~5GB, great for business writing

# Best quality (if you have the resources)
OPENAI_MODEL: "qwen2.5:14b" # ~9GB, excellent quality
```

### Specialized Models
```yaml
# Code-focused (good for tech resumes)
OPENAI_MODEL: "codellama:7b" # ~4GB, code-aware

# Instruction-following
OPENAI_MODEL: "mistral:7b" # ~4GB, good at following prompts
```

## 🚀 Model Management Commands

### Pull New Models
```bash
# Pull a different model
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull qwen2.5:3b"

# List available models
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"

# Remove unused models
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama rm llama3.2:1b"
```

### Change Active Model
1. Update `OPENAI_MODEL` in `docker-compose.yml`
2. Redeploy: `./deploy.sh restart`
3. Pull the new model if needed: `./deploy.sh setup-ollama`
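
To catch typos before redeploying, read the configured model back out of the compose file and compare it against `ollama list` on the host. A minimal sketch — the heredoc stands in for the real `docker-compose.yml`, and the helper name `compose_model` is ours, not part of the stack:

```shell
# Print the model configured under OPENAI_MODEL in a compose file.
compose_model() {
  sed -n 's/.*OPENAI_MODEL: *"\([^"]*\)".*/\1/p' "$1" | head -n 1
}

# Illustrative fragment standing in for the real docker-compose.yml:
cat > /tmp/compose-snippet.yml <<'EOF'
      OPENAI_MODEL: "llama3.2:3b" # Model name
EOF

compose_model /tmp/compose-snippet.yml   # prints: llama3.2:3b
```

Feed the result into the `ollama list` command above (e.g. `... ollama list" | grep -F "$model"`) to confirm the tag is actually pulled.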

## 🧪 Testing AI Features

### Direct API Test
```bash
# Test the AI API directly
curl -X POST http://192.168.0.250:11434/api/generate \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.2:3b",
    "prompt": "Write a professional summary for a software engineer with 5 years experience in Python and React",
    "stream": false
  }'
```
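
For scripting, the interesting field is `response`; a tiny extractor avoids a jq dependency. A sketch — `extract_response` is our name, and the sample JSON merely illustrates the `/api/generate` output shape:

```shell
# Pull the generated text out of an Ollama /api/generate JSON reply.
extract_response() {
  python3 -c 'import json, sys; print(json.load(sys.stdin)["response"])'
}

# Illustrative reply; in practice append "| extract_response" to the curl above.
echo '{"model":"llama3.2:3b","response":"Experienced Software Engineer with 5+ years...","done":true}' \
  | extract_response
```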

### Expected Response
```json
{
  "model": "llama3.2:3b",
  "created_at": "2026-02-16T10:00:00.000Z",
  "response": "Experienced Software Engineer with 5+ years of expertise in full-stack development using Python and React. Proven track record of building scalable web applications...",
  "done": true
}
```

## 🎯 AI Features in Reactive Resume v5

### 1. Resume Content Suggestions
- **Trigger**: Click "AI Assist" button in any text field
- **Function**: Suggests professional content based on context
- **Model Usage**: Generates 2-3 sentence suggestions

### 2. Job Description Analysis
- **Trigger**: Paste job description in "Job Match" feature
- **Function**: Analyzes requirements and suggests skill additions
- **Model Usage**: Extracts key requirements and matches to profile

### 3. Skills Optimization
- **Trigger**: "Optimize Skills" button in Skills section
- **Function**: Suggests relevant skills based on experience
- **Model Usage**: Analyzes work history and recommends skills

### 4. Cover Letter Generation
- **Trigger**: "Generate Cover Letter" in Documents section
- **Function**: Creates personalized cover letter
- **Model Usage**: Uses resume data + job description to generate letter

## 📊 Performance Tuning

### Model Performance Comparison
| Model | Size | Speed | Quality | RAM Usage | Best For |
|-------|------|-------|---------|-----------|----------|
| llama3.2:1b | 1GB | Very Fast | Good | 2GB | Quick suggestions |
| llama3.2:3b | 2GB | Fast | Very Good | 4GB | **Recommended** |
| qwen2.5:3b | 2GB | Fast | Very Good | 4GB | Professional content |
| llama3.1:8b | 5GB | Medium | Excellent | 8GB | High quality |

### Optimization Settings
```yaml
# In docker-compose.yml for the Ollama service
environment:
  OLLAMA_HOST: "0.0.0.0"
  OLLAMA_KEEP_ALIVE: "5m"       # Keep model loaded for 5 minutes
  OLLAMA_MAX_LOADED_MODELS: "1" # Only keep one model in memory
  OLLAMA_NUM_PARALLEL: "1"      # Number of parallel requests
```

## 🔍 Troubleshooting AI Issues

### Model Not Loading
```bash
# Check if the model exists
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama list"

# Pull the model manually
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b"

# Check Ollama logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-OLLAMA-V5"
```

### Slow AI Responses
1. **Check CPU usage**: `htop` on Calypso
2. **Reduce model size**: Switch to `llama3.2:1b`
3. **Increase keep-alive**: Set `OLLAMA_KEEP_ALIVE: "30m"`

### AI Features Not Appearing in UI
1. **Check environment variables**: Ensure the `OPENAI_*` variables are set on the app container
2. **Verify connectivity**: Test the API endpoint from the app container
3. **Check app logs**: Look for AI-related errors

### Memory Issues
```bash
# Check memory usage
ssh Vish@192.168.0.250 -p 62000 "free -h"

# If memory is low, switch to a smaller model in docker-compose.yml:
# OPENAI_MODEL: "llama3.2:1b" # Uses ~2GB instead of 4GB
```
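
When deciding whether a model fits, the `available` column of `free` is the number that matters, since it accounts for reclaimable cache (the `free` column does not). A sketch that pulls it out, with sample output standing in for the SSH call above (the numbers are illustrative):

```shell
# Extract the "available" MB from `free -m` output; in practice feed it from:
#   ssh Vish@192.168.0.250 -p 62000 "free -m"
printf 'Mem:  7821  5210  611  102  1999  2310\n' |
  awk '/^Mem:/ { print "available:", $7, "MB" }'
```

Compare that figure against the ~2-4GB inference footprints listed earlier before pulling a larger model.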

## 🔄 Model Updates

### Updating to Newer Models
1. **Check available models**: https://ollama.ai/library
2. **Pull the new model**: `ollama pull model-name`
3. **Update the compose file**: Change the `OPENAI_MODEL` value
4. **Restart services**: `./deploy.sh restart`

### Model Versioning
```yaml
# Pin to a specific quantization tag
OPENAI_MODEL: "llama3.2:3b-q4_0" # Specific quantization

# Use the default tag (resolves to the latest build at pull time)
OPENAI_MODEL: "llama3.2:3b"
```

## 📈 Monitoring AI Performance

### Metrics to Watch
- **Response Time**: Should be < 10s for most prompts
- **Memory Usage**: Monitor RAM consumption
- **Model Load Time**: First request after idle takes longer
- **Error Rate**: Check for failed AI requests

### Performance Commands
```bash
# Check AI API health
curl http://192.168.0.250:11434/api/tags

# Monitor resource usage (--no-stream prints one snapshot and exits)
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker stats --no-stream Resume-OLLAMA-V5"

# Check AI request logs
ssh Vish@192.168.0.250 -p 62000 "sudo /usr/local/bin/docker logs Resume-ACCESS-V5 | grep -i ollama"
```
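
The `/api/tags` reply is JSON; to turn it into a quick model inventory without a jq dependency, pipe it through a short python3 one-liner. A sketch — the echoed sample stands in for the live `curl` output:

```shell
# Summarize the models reported by Ollama's /api/tags endpoint.
# Live use: curl -s http://192.168.0.250:11434/api/tags | python3 -c '...'
echo '{"models":[{"name":"llama3.2:3b"},{"name":"qwen2.5:3b"}]}' |
  python3 -c 'import json, sys
d = json.load(sys.stdin)
print(len(d["models"]), "model(s):", ", ".join(m["name"] for m in d["models"]))'
```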

---

**Current Configuration**: llama3.2:3b (Recommended)
**Last Updated**: 2026-02-16
**Performance**: ✅ Optimized for Calypso hardware
72
hosts/synology/calypso/reactive_resume_v5/MIGRATION.md
Normal file
@@ -0,0 +1,72 @@

# Migration from Reactive Resume v4 to v5

## Migration Summary
Successfully migrated from Reactive Resume v4 to v5 on 2026-02-16.

## Port Configuration
- **Main Application**: Port 9751 (same as v4)
- **S3 API**: Port 9753 (same as v4 MinIO)
- **PDF Service**: Port 4000 (internal)

## Reverse Proxy Compatibility
The migration maintains the same external ports, so existing reverse proxy rules continue to work:
- `http://192.168.0.250:9751` → `rx.vish.gg`
- `http://192.168.0.250:9753` → `rxdl.vish.gg` (S3 API)
|
||||
|
||||
## Changes from v4 to v5
|
||||
|
||||
### Storage Backend
|
||||
- **v4**: MinIO for S3-compatible storage
|
||||
- **v5**: SeaweedFS for S3-compatible storage
|
||||
- Same S3 API compatibility on port 9753
|
||||
|
||||
### Database
|
||||
- **v4**: PostgreSQL 16
|
||||
- **v5**: PostgreSQL 18
|
||||
- Database migration handled automatically
|
||||
|
||||
### PDF Generation
|
||||
- **v4**: Browserless Chrome with HTTP API
|
||||
- **v5**: Browserless Chrome with WebSocket API
|
||||
- Better performance and real-time updates
|
||||
|
||||
### Authentication
|
||||
- **v4**: Custom auth system
|
||||
- **v5**: Better Auth framework
|
||||
- More secure and feature-rich
|
||||
|
||||
## Data Migration
|
||||
- Database data preserved in `/volume1/docker/rxv5/db/`
|
||||
- File storage migrated to SeaweedFS format
|
||||
- User accounts and resumes preserved
|
||||
|
||||
## Removed Services
|
||||
The following v4 containers were stopped and removed:
|
||||
- `Resume-ACCESS` (v4 main app)
|
||||
- `Resume-DB` (v4 database)
|
||||
- `Resume-PRINTER` (v4 PDF service)
|
||||
- `Resume-MINIO` (v4 storage)
|
||||
|
||||
## New Services
|
||||
The following v5 containers are now running:
|
||||
- `Resume-ACCESS-V5` (v5 main app)
|
||||
- `Resume-DB-V5` (v5 database)
|
||||
- `Resume-BROWSERLESS-V5` (v5 PDF service)
|
||||
- `Resume-SEAWEEDFS-V5` (v5 storage)
|
||||
- `Resume-BUCKET-V5` (storage initialization)
|
||||
|
||||
## Configuration Files
|
||||
- v4 configuration archived to: `/home/homelab/organized/repos/homelab/archive/reactive_resume_v4_archived/`
|
||||
- v5 configuration active in: `/home/homelab/organized/repos/homelab/Calypso/reactive_resume_v5/`
|

## Verification
- ✅ Application accessible at http://calypso.vish.local:9751
- ✅ S3 API accessible at http://calypso.vish.local:9753
- ✅ All containers healthy and running
- ✅ Reverse proxy rules unchanged
- ✅ Account creation working (no more "Invalid origin" errors)

## Future Enhancements
- Ollama AI integration (when v5 supports it)
- External domain configuration for https://rx.vish.gg
- Automated backups of SeaweedFS data
134
hosts/synology/calypso/reactive_resume_v5/README.md
Normal file
@@ -0,0 +1,134 @@

# Reactive Resume v5 - GitOps Deployment

This directory contains the GitOps deployment configuration for Reactive Resume v5 on the Calypso server, with AI integration.

## 🚀 Quick Start

```bash
# Deploy the complete stack
./deploy.sh

# Check status
./deploy.sh status

# View logs
./deploy.sh logs
```

## 🌐 Access URLs

- **External**: https://rx.vish.gg
- **Internal**: http://192.168.0.250:9751
- **Download Service**: http://192.168.0.250:9753 (rxdl.vish.gg)
- **Ollama API**: http://192.168.0.250:11434

## 🏗️ Architecture

### Core Services
- **Main App**: Reactive Resume v5 with AI features
- **Database**: PostgreSQL 18
- **Storage**: SeaweedFS (S3-compatible)
- **PDF Generation**: Browserless Chrome
- **AI Engine**: Ollama with the llama3.2:3b model

### Infrastructure
- **Proxy**: Nginx Proxy Manager (ports 8880/8443)
- **Router**: Port forwarding 80→8880, 443→8443

## 🤖 AI Features

Reactive Resume v5 includes AI-powered features:
- Resume content suggestions
- Job description analysis
- Skills optimization
- Cover letter generation

Powered by Ollama running locally with the llama3.2:3b model.

## 📋 Prerequisites

1. **Router Configuration**: Forward ports 80→8880 and 443→8443
2. **DNS**: rx.vish.gg and rxdl.vish.gg pointing to YOUR_WAN_IP
3. **SSL**: Cloudflare Origin certificates in NPM

## 🛠️ Deployment Commands

```bash
# Full deployment
./deploy.sh deploy

# Setup individual components
./deploy.sh setup-npm     # Setup Nginx Proxy Manager
./deploy.sh setup-ollama  # Setup AI model

# Management
./deploy.sh restart  # Restart services
./deploy.sh stop     # Stop services
./deploy.sh update   # Update images and redeploy
./deploy.sh status   # Check service status
./deploy.sh logs     # View application logs
```

## 🔧 Configuration

### Environment Variables
- `APP_URL`: https://rx.vish.gg
- `OPENAI_API_KEY`: ollama (dummy key for local Ollama)
- `OPENAI_BASE_URL`: http://ollama:11434/v1
- `OPENAI_MODEL`: llama3.2:3b

### Volumes
- `/volume1/docker/rxv5/db` - PostgreSQL data
- `/volume1/docker/rxv5/seaweedfs` - File storage
- `/volume1/docker/rxv5/ollama` - AI model data

## 🔄 Migration from v4

This deployment maintains compatibility with v4:
- Same ports (9751, 9753)
- Same SMTP configuration
- Same database credentials
- Preserves existing NPM proxy rules

## 🚨 Troubleshooting

### External Access Issues
1. Check router port forwarding: 80→8880, 443→8443
2. Verify NPM proxy hosts are configured
3. Confirm DNS propagation: `nslookup rx.vish.gg`

### AI Features Not Working
1. Check Ollama service: `docker logs Resume-OLLAMA-V5`
2. Pull the model manually: `docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b`
3. Verify the model is loaded: `docker exec Resume-OLLAMA-V5 ollama list`

### Service Health
```bash
# Check all services
./deploy.sh status

# Check a specific container
ssh Vish@192.168.0.250 -p 62000 "sudo docker logs Resume-ACCESS-V5"
```
|
||||
|
||||
## 📊 Monitoring
|
||||
|
||||
- **Application Health**: http://192.168.0.250:9751/health
|
||||
- **Database**: PostgreSQL on port 5432 (internal)
|
||||
- **Storage**: SeaweedFS S3 API on port 8333 (internal)
|
||||
- **AI**: Ollama API on port 11434
|
||||
|
||||
## 🔐 Security
|
||||
|
||||
- All services run with `no-new-privileges:true`
|
||||
- Database credentials are environment-specific
|
||||
- SMTP uses app-specific passwords
|
||||
- External access only through NPM with SSL
|
||||

## 📈 Status

**Status**: ✅ **ACTIVE DEPLOYMENT** (GitOps with AI integration)
- **Version**: v5.0.9
- **Deployed**: 2026-02-16
- **AI Model**: llama3.2:3b
- **External Access**: ✅ Configured
210
hosts/synology/calypso/reactive_resume_v5/deploy.sh
Executable file
@@ -0,0 +1,210 @@
#!/bin/bash

# Reactive Resume v5 GitOps Deployment Script
# Usage: ./deploy.sh [action]
# Actions: deploy, restart, stop, logs, status, update, setup-npm, setup-ollama

set -e

COMPOSE_FILE="docker-compose.yml"
REMOTE_HOST="Vish@192.168.0.250"
SSH_PORT="62000"
REMOTE_PATH="/volume1/docker/rxv5"
SERVICE_NAME="reactive-resume-v5"

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
|
||||
log() {
|
||||
echo -e "${BLUE}[$(date +'%Y-%m-%d %H:%M:%S')] $1${NC}"
|
||||
}
|
||||
|
||||
success() {
|
||||
echo -e "${GREEN}✅ $1${NC}"
|
||||
}
|
||||
|
||||
warning() {
|
||||
echo -e "${YELLOW}⚠️ $1${NC}"
|
||||
}
|
||||
|
||||
error() {
|
||||
echo -e "${RED}❌ $1${NC}"
|
||||
exit 1
|
||||
}
|
||||

check_prerequisites() {
    if [[ ! -f "$COMPOSE_FILE" ]]; then
        error "docker-compose.yml not found in current directory"
    fi

    if ! ssh -q -p "$SSH_PORT" "$REMOTE_HOST" exit; then
        error "Cannot connect to $REMOTE_HOST"
    fi
}

setup_npm() {
    log "Setting up Nginx Proxy Manager..."

    # Create NPM directories
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "mkdir -p /volume1/homes/Vish/npm/{data,letsencrypt}"

    # Stop existing NPM if running
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker stop nginx-proxy-manager 2>/dev/null || true"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker rm nginx-proxy-manager 2>/dev/null || true"

    # Start NPM with correct port mapping
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker run -d \
        --name nginx-proxy-manager \
        --restart unless-stopped \
        -p 8880:80 \
        -p 8443:443 \
        -p 81:81 \
        -v /volume1/homes/Vish/npm/data:/data \
        -v /volume1/homes/Vish/npm/letsencrypt:/etc/letsencrypt \
        jc21/nginx-proxy-manager:latest"

    success "NPM started on ports 8880/8443"
    warning "Make sure your router forwards port 80→8880 and 443→8443"
}

setup_ollama() {
    log "Setting up Ollama AI model..."

    # Wait for Ollama to be ready
    log "Waiting for Ollama service to start..."
    sleep 30

    # Pull the required model
    log "Pulling llama3.2:3b model (this may take a while)..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b" || {
        warning "Failed to pull model automatically. You can pull it manually later with:"
        warning "docker exec Resume-OLLAMA-V5 ollama pull llama3.2:3b"
    }

    success "Ollama setup complete"
}

deploy() {
    log "Deploying $SERVICE_NAME to $REMOTE_HOST..."

    # Create required directories
    log "Creating required directories..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "mkdir -p $REMOTE_PATH/{db,seaweedfs,ollama}"

    # Copy compose file
    log "Copying docker-compose.yml to $REMOTE_HOST:$REMOTE_PATH/"
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cat > $REMOTE_PATH/docker-compose.yml" < "$COMPOSE_FILE"

    # Deploy services
    log "Starting services..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose up -d"

    # Wait for services to be healthy
    log "Waiting for services to be healthy..."
    sleep 30

    # Setup Ollama model
    setup_ollama

    # Check status
    if ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps | grep -q 'Resume.*V5'"; then
        success "$SERVICE_NAME deployed successfully!"
        log "Local access: http://192.168.0.250:9751"
        log "External access: https://rx.vish.gg"
        log "Ollama API: http://192.168.0.250:11434"
        warning "Make sure NPM is configured for external access"
    else
        warning "Services started but may not be fully healthy yet. Check logs with: ./deploy.sh logs"
    fi
}

restart() {
    log "Restarting $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose restart"
    success "$SERVICE_NAME restarted!"
}

stop() {
    log "Stopping $SERVICE_NAME..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose down"
    success "$SERVICE_NAME stopped!"
}

logs() {
    log "Showing logs for Resume-ACCESS-V5..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker logs Resume-ACCESS-V5 --tail 50 -f"
}

status() {
    log "Checking status of $SERVICE_NAME services..."
    echo
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "sudo /usr/local/bin/docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Ports}}' | grep -E 'Resume.*V5|NAMES'"
    echo

    # Check if the application is responding
    if curl -s -f http://192.168.0.250:9751 > /dev/null; then
        success "Application is responding at http://192.168.0.250:9751"
    else
        warning "Application may not be responding"
    fi
}

update() {
    log "Updating $SERVICE_NAME (pull latest images and redeploy)..."
    ssh -p "$SSH_PORT" "$REMOTE_HOST" "cd $REMOTE_PATH && sudo /usr/local/bin/docker-compose pull"
    deploy
}

# Main script logic
case "${1:-deploy}" in
    deploy)
        check_prerequisites
        deploy
        ;;
    restart)
        check_prerequisites
        restart
        ;;
    stop)
        check_prerequisites
        stop
        ;;
    logs)
        check_prerequisites
        logs
        ;;
    status)
        check_prerequisites
        status
        ;;
    update)
        check_prerequisites
        update
        ;;
    setup-npm)
        check_prerequisites
        setup_npm
        ;;
    setup-ollama)
        check_prerequisites
        setup_ollama
        ;;
    *)
        echo "Usage: $0 [deploy|restart|stop|logs|status|update|setup-npm|setup-ollama]"
        echo
        echo "Commands:"
        echo "  deploy       - Deploy/update the service (default)"
        echo "  restart      - Restart all services"
        echo "  stop         - Stop all services"
        echo "  logs         - Show application logs"
        echo "  status       - Show service status"
        echo "  update       - Pull latest images and redeploy"
        echo "  setup-npm    - Setup Nginx Proxy Manager"
        echo "  setup-ollama - Setup Ollama AI model"
        exit 1
        ;;
esac
157
hosts/synology/calypso/reactive_resume_v5/docker-compose.yml
Normal file
@@ -0,0 +1,157 @@

# Reactive Resume v5 - Upgraded from v4 with same configuration values
# Docs: https://docs.rxresu.me/self-hosting/docker

services:
  db:
    image: postgres:18
    container_name: Resume-DB-V5
    hostname: resume-db
    security_opt:
      - no-new-privileges:true
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "resume", "-U", "resumeuser"]
      timeout: 45s
      interval: 10s
      retries: 10
    volumes:
      - /volume1/docker/rxv5/db:/var/lib/postgresql:rw
    environment:
      POSTGRES_DB: resume
      POSTGRES_USER: resumeuser
      POSTGRES_PASSWORD: "REDACTED_PASSWORD"
    restart: unless-stopped

  browserless:
    image: ghcr.io/browserless/chromium:latest
    container_name: Resume-BROWSERLESS-V5
    ports:
      - "4000:3000"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/pressure?token=1234567890"]
      interval: 10s
      timeout: 5s
      retries: 10
    environment:
      QUEUED: 30
      HEALTH: true
      CONCURRENT: 20
      TOKEN: 1234567890
    restart: unless-stopped

  seaweedfs:
    image: chrislusf/seaweedfs:latest
    container_name: Resume-SEAWEEDFS-V5
    ports:
      - "9753:8333" # S3 API port (same as v4 MinIO)
    healthcheck:
      test: ["CMD", "wget", "-q", "-O", "/dev/null", "http://localhost:8888"]
      start_period: 10s
      interval: 30s
      timeout: 10s
      retries: 3
    command: server -s3 -filer -dir=/data -ip=0.0.0.0
    environment:
      AWS_ACCESS_KEY_ID: seaweedfs
      AWS_SECRET_ACCESS_KEY: seaweedfs
    volumes:
      - /volume1/docker/rxv5/seaweedfs:/data:rw
    restart: unless-stopped

  seaweedfs-create-bucket:
    image: quay.io/minio/mc:latest
    container_name: Resume-BUCKET-V5
    entrypoint: >
      /bin/sh -c "
      sleep 5;
      mc alias set seaweedfs http://seaweedfs:8333 seaweedfs seaweedfs;
      mc mb seaweedfs/reactive-resume;
      exit 0;
      "
    depends_on:
      seaweedfs:
        condition: service_healthy
    restart: on-failure:5

  ollama:
    image: ollama/ollama:latest
    container_name: Resume-OLLAMA-V5
    ports:
      - "11434:11434"
    volumes:
      - /volume1/docker/rxv5/ollama:/root/.ollama:rw
    environment:
      OLLAMA_HOST: "0.0.0.0"
    restart: unless-stopped
    # Uncomment if you have GPU support
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

  resume:
    image: amruthpillai/reactive-resume:v5
    container_name: Resume-ACCESS-V5
    hostname: resume
    security_opt:
      - no-new-privileges:true
    ports:
      - "9751:3000" # Main application port (same as v4)
    environment:
      # --- Server ---
      PORT: 3000
      TZ: "America/Chicago"
      NODE_ENV: production
      APP_URL: "https://rx.vish.gg"
      PRINTER_APP_URL: "http://resume:3000"

      # --- Database ---
      DATABASE_URL: "postgresql://resumeuser:REDACTED_PASSWORD@resume-db:5432/resume"

      # --- Authentication ---
      # Using same secret as v4 for consistency
      AUTH_SECRET: "d5c3e165dafd2d82bf84acacREDACTED_GITEA_TOKEN"

      # --- Printer (v5 uses WebSocket) ---
      PRINTER_ENDPOINT: "ws://browserless:3000?token=1234567890"

      # --- Storage (S3 - SeaweedFS) ---
      S3_ACCESS_KEY_ID: "seaweedfs"
      S3_SECRET_ACCESS_KEY: "seaweedfs"
      S3_ENDPOINT: "http://seaweedfs:8333"
      S3_BUCKET: "reactive-resume"
      S3_FORCE_PATH_STYLE: "true"
      STORAGE_USE_SSL: "false"

      # --- Email (SMTP) - Same as v4 ---
      SMTP_HOST: "smtp.gmail.com"
      SMTP_PORT: "465"
      SMTP_USER: "your-email@example.com"
      SMTP_PASS: "REDACTED_PASSWORD"
      SMTP_FROM: "your-email@example.com"
      SMTP_SECURE: "true"

      # --- OAuth / SSO (Authentik) ---
      OAUTH_PROVIDER_NAME: "Authentik"
      OAUTH_CLIENT_ID: "REDACTED_CLIENT_ID"
      OAUTH_CLIENT_SECRET: "REDACTED_CLIENT_SECRET" # pragma: allowlist secret
      OAUTH_DISCOVERY_URL: "https://sso.vish.gg/application/o/reactive-resume/.well-known/openid-configuration"

      # --- Feature Flags ---
      FLAG_DISABLE_SIGNUPS: "false"
      FLAG_DISABLE_EMAIL_AUTH: "false"

      # --- AI Integration (Ollama) ---
      # v5 uses different environment variable names
      OPENAI_API_KEY: "ollama" # Dummy key for local Ollama
      OPENAI_BASE_URL: "http://ollama:11434/v1" # Ollama OpenAI-compatible endpoint
      OPENAI_MODEL: "llama3.2:3b" # Model name

    depends_on:
      db:
        condition: service_healthy
      seaweedfs:
        condition: service_healthy
    restart: unless-stopped