Sanitized mirror from private repository - 2026-04-20 01:32:01 UTC
400
hosts/vms/seattle/README-ollama.md
Normal file
@@ -0,0 +1,400 @@
# Ollama on Seattle - Local LLM Inference Server

## Overview

| Setting | Value |
|---------|-------|
| **Host** | Seattle VM (Contabo VPS) |
| **Port** | 11434 (Ollama API) |
| **Image** | `ollama/ollama:latest` |
| **API** | http://100.82.197.124:11434 (Tailscale) |
| **Stack File** | `hosts/vms/seattle/ollama.yaml` |
| **Data Volume** | `ollama-seattle-data` |

## Why Ollama on Seattle?

Ollama was deployed on Seattle to provide:

1. **CPU-Only Inference**: Ollama is optimized for CPU inference, unlike vLLM, which requires a GPU
2. **Additional Capacity**: Supplements the main Ollama instance on Atlantis (192.168.0.200)
3. **Geographic Distribution**: Runs on a Contabo VPS, providing inference capability outside the local network
4. **Integration with Perplexica**: Can be added as an additional LLM provider for redundancy

## Specifications

### Hardware
- **CPU**: 16 vCPU AMD EPYC Processor
- **RAM**: 64GB
- **Storage**: 300GB SSD
- **Location**: Contabo Data Center
- **Network**: Tailscale VPN (100.82.197.124)

### Resource Allocation
```yaml
limits:
  cpus: '12'
  memory: 32G
reservations:
  cpus: '4'
  memory: 8G
```

## Installed Models

### Qwen 2.5 1.5B Instruct
- **Model ID**: `qwen2.5:1.5b`
- **Size**: ~986 MB
- **Context Window**: 32K tokens
- **Use Case**: Fast, lightweight inference for search queries
- **Performance**: Excellent on CPU, ~5-10 tokens/second

## Installation History

### February 16, 2026 - Initial Setup

**Problem**: Attempted to use vLLM for CPU inference
- The vLLM container crashed with device detection errors
- vLLM is primarily designed for GPU inference
- CPU mode is not well-supported in recent vLLM versions

**Solution**: Switched to Ollama
- Ollama is specifically optimized for CPU inference
- Provides better performance and reliability on CPU-only systems
- Simpler configuration and management
- Native support for multiple model formats

**Deployment Steps**:
1. Removed the failing vLLM container
2. Created the `ollama.yaml` docker-compose configuration
3. Deployed the Ollama container
4. Pulled the `qwen2.5:1.5b` model
5. Tested API connectivity via Tailscale

## Configuration

### Docker Compose

See `hosts/vms/seattle/ollama.yaml`:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-seattle
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_NUM_PARALLEL=2
      - OLLAMA_MAX_LOADED_MODELS=2
    volumes:
      - ollama-seattle-data:/root/.ollama
    restart: unless-stopped

volumes:
  ollama-seattle-data:
    name: ollama-seattle-data
```

### Environment Variables

- `OLLAMA_HOST`: Bind to all interfaces
- `OLLAMA_KEEP_ALIVE`: Keep models loaded for 24 hours
- `OLLAMA_NUM_PARALLEL`: Allow 2 parallel requests
- `OLLAMA_MAX_LOADED_MODELS`: Cache up to 2 models in memory

## Usage

### API Endpoints

#### List Models
```bash
curl http://100.82.197.124:11434/api/tags
```

#### Generate Completion
```bash
curl http://100.82.197.124:11434/api/generate -d '{
  "model": "qwen2.5:1.5b",
  "prompt": "Explain quantum computing in simple terms"
}'
```

#### Chat Completion
```bash
curl http://100.82.197.124:11434/api/chat -d '{
  "model": "qwen2.5:1.5b",
  "messages": [
    {"role": "user", "content": "Hello!"}
  ]
}'
```

### Model Management

#### Pull a New Model
```bash
ssh seattle-tailscale "docker exec ollama-seattle ollama pull <model-name>"

# Examples:
# docker exec ollama-seattle ollama pull qwen2.5:3b
# docker exec ollama-seattle ollama pull llama3.2:3b
# docker exec ollama-seattle ollama pull mistral:7b
```

#### List Downloaded Models
```bash
ssh seattle-tailscale "docker exec ollama-seattle ollama list"
```

#### Remove a Model
```bash
ssh seattle-tailscale "docker exec ollama-seattle ollama rm <model-name>"
```

## Integration with Perplexica

To add this Ollama instance as an LLM provider in Perplexica:

1. Navigate to **http://192.168.0.210:4785/settings**
2. Click **"Model Providers"**
3. Click **"Add Provider"**
4. Configure as follows:

   ```json
   {
     "name": "Ollama Seattle",
     "type": "ollama",
     "baseURL": "http://100.82.197.124:11434",
     "apiKey": ""
   }
   ```

5. Click **"Save"**
6. Select `qwen2.5:1.5b` from the model dropdown when searching

### Benefits of Multiple Ollama Instances

- **Load Distribution**: Distribute inference load across multiple servers
- **Redundancy**: If one instance is down, use the other
- **Model Variety**: Different instances can host different models
- **Network Optimization**: Use closest/fastest instance

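A minimal sketch of how a client script could exploit that redundancy: probe each instance in order and use the first one whose `/api/tags` endpoint answers. The helper name and URL ordering below are illustrative, not part of the deployment.

```bash
#!/bin/sh
# Print the first reachable Ollama base URL from the arguments, or fail
# if none of them answer within the 3-second timeout.
pick_ollama_url() {
  for url in "$@"; do
    if curl -sf -m 3 "$url/api/tags" > /dev/null 2>&1; then
      echo "$url"
      return 0
    fi
  done
  return 1
}

# Example: prefer the LAN instance on Atlantis, fall back to Seattle:
# pick_ollama_url http://192.168.0.200:11434 http://100.82.197.124:11434
```

Callers can feed the printed URL straight into whatever `OLLAMA_HOST`-style setting their tool expects.
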
## Performance

### Expected Performance (CPU-Only)

| Model | Size | Tokens/Second | Memory Usage |
|-------|------|---------------|--------------|
| qwen2.5:1.5b | 986 MB | 8-12 | ~2-3 GB |
| qwen2.5:3b | ~2 GB | 5-8 | ~4-5 GB |
| llama3.2:3b | ~2 GB | 4-7 | ~4-5 GB |
| mistral:7b | ~4 GB | 2-4 | ~8-10 GB |

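These estimates can be checked against real responses: with `"stream": false`, Ollama's `/api/generate` reply includes `eval_count` (tokens generated) and `eval_duration` (in nanoseconds). A small helper for the arithmetic — the sample values below are made up for illustration:

```bash
#!/bin/sh
# tokens/sec = eval_count / (eval_duration converted from ns to seconds)
tokens_per_second() {
  awk -v c="$1" -v d="$2" 'BEGIN { printf "%.1f\n", c * 1e9 / d }'
}

# e.g. 120 tokens generated with eval_duration = 12000000000 ns (12 s):
tokens_per_second 120 12000000000   # -> 10.0
```

Pull both fields out of a saved response and pass them in to see how a given model actually performs on this host.
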
### Optimization Tips

1. **Use Smaller Models**: 1.5B and 3B models work best on CPU
2. **Limit Parallel Requests**: Set `OLLAMA_NUM_PARALLEL=2` to avoid overload
3. **Keep Models Loaded**: A long `OLLAMA_KEEP_ALIVE` prevents reload delays
4. **Monitor Memory**: Watch RAM usage with `docker stats ollama-seattle`

## Monitoring

### Container Status
```bash
# Check if running
ssh seattle-tailscale "docker ps | grep ollama"

# View logs
ssh seattle-tailscale "docker logs -f ollama-seattle"

# Check resource usage
ssh seattle-tailscale "docker stats ollama-seattle"
```

### API Health Check
```bash
# Test connectivity
curl -m 5 http://100.82.197.124:11434/api/tags

# Test inference
curl http://100.82.197.124:11434/api/generate -d '{
  "model": "qwen2.5:1.5b",
  "prompt": "test",
  "stream": false
}'
```

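For deploy scripts or cron-driven alerting, the one-shot checks above can be wrapped in a retry loop. This is a sketch; the attempt count and delay are arbitrary defaults:

```bash
#!/bin/sh
# Poll the Ollama API until it answers or the attempt budget runs out.
wait_for_ollama() {
  base="$1"; attempts="${2:-5}"; delay="${3:-3}"
  i=1
  while [ "$i" -le "$attempts" ]; do
    if curl -sf -m 5 "$base/api/tags" > /dev/null 2>&1; then
      echo "ollama reachable on attempt $i"
      return 0
    fi
    sleep "$delay"
    i=$((i + 1))
  done
  echo "ollama unreachable after $attempts attempts" >&2
  return 1
}

# wait_for_ollama http://100.82.197.124:11434 10 5
```

The nonzero exit status on failure makes it easy to chain (`wait_for_ollama … || <alert command>`).
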
### Performance Metrics
```bash
# Check response time
time curl -s http://100.82.197.124:11434/api/tags > /dev/null

# Monitor CPU usage
ssh seattle-tailscale "top -b -n 1 | grep ollama"
```

## Troubleshooting

### Container Won't Start

```bash
# Check logs
ssh seattle-tailscale "docker logs ollama-seattle"

# Common issues:
# - Port 11434 already in use
# - Insufficient memory
# - Volume mount permissions
```

### Slow Inference

**Causes**:
- Model too large for available CPU
- Too many parallel requests
- Insufficient RAM

**Solutions**:
```bash
# Use a smaller model
docker exec ollama-seattle ollama pull qwen2.5:1.5b

# Reduce parallel requests
# Edit ollama.yaml: OLLAMA_NUM_PARALLEL=1

# Increase CPU allocation
# Edit ollama.yaml: cpus: '16'
```

### Connection Timeout

**Problem**: Unable to reach Ollama from other machines

**Solutions**:
1. Verify Tailscale connection:
   ```bash
   ping 100.82.197.124
   tailscale status | grep seattle
   ```

2. Check firewall:
   ```bash
   ssh seattle-tailscale "ss -tlnp | grep 11434"
   ```

3. Verify container is listening:
   ```bash
   ssh seattle-tailscale "docker exec ollama-seattle netstat -tlnp"
   ```

### Model Download Fails

```bash
# Check available disk space
ssh seattle-tailscale "df -h"

# Check internet connectivity
ssh seattle-tailscale "curl -I https://ollama.com"

# Try manual download
ssh seattle-tailscale "docker exec -it ollama-seattle ollama pull <model>"
```

## Maintenance

### Updates

```bash
# Pull latest Ollama image
ssh seattle-tailscale "docker pull ollama/ollama:latest"

# Recreate container
ssh seattle-tailscale "cd /opt/ollama && docker compose up -d --force-recreate"
```

### Backup

```bash
# Backup models and configuration (the remote command is single-quoted so
# that $(pwd) expands on the VPS, not on the machine running ssh)
ssh seattle-tailscale 'docker run --rm -v ollama-seattle-data:/data -v "$(pwd)":/backup alpine tar czf /backup/ollama-backup.tar.gz /data'

# Restore
ssh seattle-tailscale 'docker run --rm -v ollama-seattle-data:/data -v "$(pwd)":/backup alpine tar xzf /backup/ollama-backup.tar.gz -C /'
```

### Cleanup

```bash
# Remove unused models
ssh seattle-tailscale "docker exec ollama-seattle ollama list"
ssh seattle-tailscale "docker exec ollama-seattle ollama rm <unused-model>"

# Clean up Docker
ssh seattle-tailscale "docker system prune -f"
```

## Security Considerations

### Network Access

- Ollama is exposed on port 11434
- **Only accessible via Tailscale** (100.82.197.124)
- Not exposed to public internet
- Consider adding authentication if exposing publicly

### API Security

Ollama doesn't have built-in authentication. For production use:

1. **Use a reverse proxy** with authentication (Nginx, Caddy)
2. **Restrict access** via firewall rules
3. **Use Tailscale ACLs** to limit access
4. **Monitor usage** for abuse

## Cost Analysis

### Contabo VPS Costs
- **Monthly Cost**: ~$25-35 USD
- **Inference Cost**: $0 (self-hosted)
- **vs Cloud APIs**: OpenAI costs ~$0.15-0.60 per 1M tokens
- **Break-even Analysis**:
- **Light usage** (<1M tokens/month): Cloud APIs cheaper
- **Medium usage** (1-10M tokens/month): Self-hosted breaks even
- **Heavy usage** (>10M tokens/month): Self-hosted much cheaper

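The raw token-price arithmetic is just the VPS cost divided by the per-million-token API price, so it is easy to recompute when prices change. A sketch of that calculation — the prices are illustrative, and it ignores that this VPS also hosts many other services:

```bash
#!/bin/sh
# Monthly volume (millions of tokens) at which self-hosting matches a cloud API:
#   break-even = vps_cost_per_month / api_price_per_1M_tokens
breakeven_millions() {
  awk -v vps="$1" -v per1m="$2" 'BEGIN { printf "%.0f\n", vps / per1m }'
}

# A $30/month VPS vs. an API priced at $0.60 per 1M tokens:
breakeven_millions 30 0.60   # -> 50
```
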
## Future Enhancements

### Potential Improvements

1. **GPU Support**: Migrate to a GPU-enabled VPS for faster inference
2. **Load Balancer**: Set up Nginx to load balance between Ollama instances
3. **Auto-scaling**: Deploy additional instances based on load
4. **Model Caching**: Pre-warm multiple models for faster switching
5. **Monitoring Dashboard**: Grafana + Prometheus for metrics
6. **API Gateway**: Add rate limiting and authentication

### Model Recommendations

For different use cases on CPU:

- **Fast responses**: qwen2.5:1.5b, phi3:3.8b
- **Better quality**: qwen2.5:3b, llama3.2:3b
- **Code tasks**: qwen2.5-coder:1.5b, codegemma:2b
- **Instruction following**: mistral:7b (slower but better)

## Related Services

- **Atlantis Ollama** (`192.168.0.200:11434`) - Main Ollama instance
- **Perplexica** (`192.168.0.210:4785`) - AI search engine client
- **LM Studio** (`100.98.93.15:1234`) - Alternative LLM server

## References

- [Ollama Documentation](https://github.com/ollama/ollama)
- [Available Models](https://ollama.com/library)
- [Ollama API Reference](https://github.com/ollama/ollama/blob/main/docs/api.md)
- [Qwen 2.5 Model Card](https://ollama.com/library/qwen2.5)

---

**Status:** ✅ Fully operational
**Last Updated:** February 16, 2026
**Maintained By:** Docker Compose (manual)

127
hosts/vms/seattle/README.md
Normal file
@@ -0,0 +1,127 @@
# Seattle VM (Contabo VPS)

## 🖥️ Machine Specifications

| Component | Details |
|-----------|---------|
| **Provider** | Contabo VPS |
| **Hostname** | vmi2076105 (seattle-vm) |
| **OS** | Ubuntu 24.04.4 LTS |
| **Kernel** | Linux 6.8.0-90-generic |
| **Architecture** | x86_64 |
| **CPU** | 16 vCPU AMD EPYC Processor |
| **Memory** | 64GB RAM |
| **Storage** | 300GB SSD (24% used) |
| **Virtualization** | KVM |

## 🌐 Network Configuration

| Interface | IP Address | Purpose |
|-----------|------------|---------|
| **eth0** | YOUR_WAN_IP/21 | Public Internet |
| **tailscale0** | 100.82.197.124/32 | Tailscale VPN |
| **docker0** | 172.17.0.1/16 | Docker default bridge |
| **Custom bridges** | 172.18-20.0.1/16 | Service-specific networks |

## 🚀 Running Services

### Web Services (Docker)
- **[Wallabag](./wallabag/)** - Read-later service at `wb.vish.gg`
- **[Obsidian](./obsidian/)** - Note-taking web interface at `obs.vish.gg`
- **[MinIO](./stoatchat/)** - Object storage for StoatChat at ports 14009-14010

### AI/ML Services
- **[Ollama](./README-ollama.md)** - Local LLM inference server
  - API Port: 11434
  - Tailscale: `100.82.197.124:11434`
  - Models: `qwen2.5:1.5b`
  - Purpose: CPU-based inference for Perplexica integration
- **[HolyClaude](../../../docs/services/individual/holyclaude.md)** - Claude Code web UI workstation (testing)
  - Port: 3059 (bound to Tailscale only)
  - URL: `http://seattle:3059`
  - Compose: `holyclaude.yaml`

### Chat Platform
- **[StoatChat (Revolt)](./stoatchat/)** - Self-hosted chat platform
  - Multiple microservices: Delta, Bonfire, Autumn, January, Gifbox
  - Ports: 14702-14706

### Gaming Services
- **[PufferPanel](./pufferpanel/)** - Game server management panel
  - Web UI: Port 8080
  - SFTP: Port 5657
- **[Garry's Mod PropHunt](./gmod-prophunt/)** - Game server
  - Game Port: 27015
  - RCON: 39903

### System Services
- **Nginx** - Reverse proxy (ports 80, 443)
- **Tailscale** - VPN mesh networking
- **SSH** - Remote access (ports 22, 2222)
- **MariaDB** - Database server (port 3306)
- **Redis** - Cache server (port 6379)
- **Postfix** - Mail server (port 25)

## 📁 Service Directories

```
/opt/
├── wallabag/          # Wallabag installation
├── obsidian/          # Obsidian web interface
├── gmod-prophunt/     # Garry's Mod server files
└── pufferpanel/       # Game server management

/home/gmod/                 # Garry's Mod user directory
/etc/nginx/sites-enabled/   # Nginx virtual hosts
```

## 🔧 Management

### Docker Services
```bash
# View running containers
docker ps

# Restart a service
docker-compose -f /opt/wallabag/docker-compose.yml restart

# View logs
docker logs wallabag
```

### System Services
```bash
# Check service status
systemctl status nginx tailscaled

# Restart nginx
sudo systemctl restart nginx

# View logs
journalctl -u nginx -f
```

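The per-service ports listed above can be spot-checked in one pass with a small helper. This is a sketch; the port list in the example is an assumption and may drift from the service tables:

```bash
#!/bin/sh
# Report whether each given TCP port has a local listener (uses ss).
check_ports() {
  for port in "$@"; do
    if ss -tln 2>/dev/null | grep -q ":$port "; then
      echo "port $port: listening"
    else
      echo "port $port: NOT listening"
    fi
  done
}

# check_ports 22 80 443 2222 3306 6379 11434
```
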
### Game Server Management
- **PufferPanel Web UI**: Access via configured domain
- **Direct SRCDS**: Located in `/home/gmod/gmod-prophunt-server/`

## 🔒 Security Features

- **Tailscale VPN** for secure remote access
- **Nginx reverse proxy** with SSL termination
- **Firewall** configured for specific service ports
- **SSH** on both standard (22) and alternate (2222) ports
- **Local-only binding** for sensitive services (MySQL, Redis)

## 📊 Monitoring

- **System resources**: `htop`, `df -h`, `free -h`
- **Network**: `ss -tlnp`, `netstat -tulpn`
- **Docker**: `docker stats`, `docker logs`
- **Services**: `systemctl status`

## 🔗 Related Documentation

- [StoatChat Deployment Guide](./stoatchat/DEPLOYMENT_GUIDE.md)
- [Service Management Guide](./stoatchat/SERVICE_MANAGEMENT.md)
- [Troubleshooting Guide](./stoatchat/TROUBLESHOOTING.md)

43
hosts/vms/seattle/bookstack/docker-compose.yml
Normal file
@@ -0,0 +1,43 @@
services:
  bookstack:
    image: lscr.io/linuxserver/bookstack:latest
    container_name: bookstack
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - APP_URL=http://100.82.197.124:6875
      - DB_HOST=bookstack-db
      - DB_PORT=3306
      - DB_USER=bookstack
      - DB_PASS="REDACTED_PASSWORD"
      - DB_DATABASE=bookstack
      - APP_KEY=base64:REDACTED_APP_KEY
    volumes:
      - /opt/bookstack/data:/config
    ports:
      - "100.82.197.124:6875:80"
    depends_on:
      - bookstack-db
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/status"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 30s

  bookstack-db:
    image: lscr.io/linuxserver/mariadb:latest
    container_name: bookstack-db
    restart: unless-stopped
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - MYSQL_ROOT_PASSWORD="REDACTED_PASSWORD"
      - MYSQL_DATABASE=bookstack
      - MYSQL_USER=bookstack
      - MYSQL_PASSWORD="REDACTED_PASSWORD"
    volumes:
      - /opt/bookstack/db:/config

44
hosts/vms/seattle/ddns-updater.yaml
Normal file
@@ -0,0 +1,44 @@
# Dynamic DNS Updater — Seattle VM (Contabo VPS, YOUR_WAN_IP)
# Keeps Cloudflare A records current with the VPS public IP.
# Three services: proxied, stoatchat unproxied, and DERP unproxied.
services:
  # vish.gg services behind Cloudflare proxy (HTTP/HTTPS via CF edge)
  ddns-seattle-proxied:
    image: favonia/cloudflare-ddns:latest
    network_mode: host
    restart: unless-stopped
    read_only: true
    cap_drop: [all]
    security_opt: [no-new-privileges:true]
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
      # General Seattle VM services (CF proxy on)
      - DOMAINS=nx.vish.gg,obs.vish.gg,pp.vish.gg,wb.vish.gg
      - PROXIED=true

  # StoatChat WebRTC subdomains — must be unproxied (direct IP for WebSockets / LiveKit UDP)
  ddns-seattle-stoatchat:
    image: favonia/cloudflare-ddns:latest
    network_mode: host
    restart: unless-stopped
    read_only: true
    cap_drop: [all]
    security_opt: [no-new-privileges:true]
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
      # st.vish.gg + all subdomains need direct IP for real-time connections
      - DOMAINS=st.vish.gg,api.st.vish.gg,events.st.vish.gg,files.st.vish.gg,proxy.st.vish.gg,voice.st.vish.gg,livekit.st.vish.gg
      - PROXIED=false

  # DERP relay — must be unproxied (DERP protocol requires direct TLS, CF proxy breaks it)
  ddns-seattle-derp:
    image: favonia/cloudflare-ddns:latest
    network_mode: host
    restart: unless-stopped
    read_only: true
    cap_drop: [all]
    security_opt: [no-new-privileges:true]
    environment:
      - CLOUDFLARE_API_TOKEN=${CLOUDFLARE_API_TOKEN}
      - DOMAINS=derp-sea.vish.gg
      - PROXIED=false

47
hosts/vms/seattle/derper.yaml
Normal file
@@ -0,0 +1,47 @@
# Standalone DERP Relay Server — Seattle VPS
# =============================================================================
# Tailscale/Headscale DERP relay for external fallback connectivity.
# Serves as region 901 "Seattle VPS" in the headscale derpmap.
#
# Why standalone (not behind nginx):
#   The DERP protocol does an HTTP→binary protocol switch inside TLS.
#   It is incompatible with HTTP reverse proxies. Must handle TLS directly.
#
# Port layout:
#   8444/tcp — DERP relay (direct TLS, NOT proxied through nginx)
#   3478/udp — STUN (NAT traversal hints)
#
# TLS cert:
#   Issued by Let's Encrypt via certbot DNS challenge (Cloudflare).
#   Cert path: /etc/letsencrypt/live/derp-sea.vish.gg/
#   Renewal hook at /etc/letsencrypt/renewal-hooks/deploy/derp-sea-symlinks.sh
#   auto-restarts this container after renewal.
#
# UFW rules required (one-time, already applied):
#   ufw allow 8444/tcp   # DERP TLS
#   ufw allow 3478/udp   # STUN
#
# DNS: derp-sea.vish.gg → YOUR_WAN_IP (managed by ddns-updater.yaml, unproxied)
# =============================================================================

services:
  derper:
    image: fredliang/derper:latest
    container_name: derper
    restart: unless-stopped
    ports:
      - "8444:8444"       # DERP TLS — direct, not behind nginx
      - "3478:3478/udp"   # STUN
    volumes:
      # Full letsencrypt mount required — live/ contains symlinks into archive/
      # mounting only live/ breaks symlink resolution inside the container
      - /etc/letsencrypt:/etc/letsencrypt:ro
    environment:
      - DERP_DOMAIN=derp-sea.vish.gg
      - DERP_CERT_MODE=manual
      - DERP_CERT_DIR=/etc/letsencrypt/live/derp-sea.vish.gg
      - DERP_ADDR=:8444
      - DERP_STUN=true
      - DERP_STUN_PORT=3478
      - DERP_HTTP_PORT=-1           # disable plain HTTP, TLS only
      - DERP_VERIFY_CLIENTS=false   # allow any node (headscale manages auth)

28
hosts/vms/seattle/diun.yaml
Normal file
@@ -0,0 +1,28 @@
# Diun — Docker Image Update Notifier
#
# Watches all running containers on this host and sends ntfy
# notifications when upstream images update their digest.
# Schedule: Mondays 09:00 (weekly cadence).
#
# ntfy topic: https://ntfy.vish.gg/diun

services:
  diun:
    image: crazymax/diun:latest
    container_name: diun
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - diun-data:/data
    environment:
      LOG_LEVEL: info
      DIUN_WATCH_WORKERS: "20"
      DIUN_WATCH_SCHEDULE: "0 9 * * 1"
      DIUN_WATCH_JITTER: 30s
      DIUN_PROVIDERS_DOCKER: "true"
      DIUN_PROVIDERS_DOCKER_WATCHBYDEFAULT: "true"
      DIUN_NOTIF_NTFY_ENDPOINT: "https://ntfy.vish.gg"
      DIUN_NOTIF_NTFY_TOPIC: "diun"
    restart: unless-stopped

volumes:
  diun-data:

15
hosts/vms/seattle/dozzle-agent.yaml
Normal file
@@ -0,0 +1,15 @@
services:
  dozzle-agent:
    image: amir20/dozzle:latest
    container_name: dozzle-agent
    command: agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "7007:7007"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "/dozzle", "healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3

176
hosts/vms/seattle/gmod-prophunt/README.md
Normal file
@@ -0,0 +1,176 @@
# Garry's Mod PropHunt Server

## 📋 Overview

A dedicated Garry's Mod server running the PropHunt gamemode, where players hide as props while others try to find and eliminate them.

## 🔧 Service Details

| Property | Value |
|----------|-------|
| **Game** | Garry's Mod |
| **Gamemode** | PropHunt |
| **Server Port** | 27015 |
| **RCON Port** | 39903 |
| **Max Players** | 24 |
| **Tickrate** | 66 |
| **Map** | ph_office |
| **Process User** | `gmod` |

## 🌐 Network Access

- **Game Server**: `YOUR_WAN_IP:27015`
- **RCON**: `127.0.0.1:39903` (localhost only)
- **Steam Server Account**: Configured with Steam Game Server Token

## 📁 Directory Structure

```
/home/gmod/gmod-prophunt-server/
├── srcds_run          # Server startup script
├── srcds_linux        # Server binary
├── garrysmod/         # Game files
│   ├── addons/        # Server addons/plugins
│   ├── gamemodes/     # PropHunt gamemode
│   ├── maps/          # Server maps
│   └── cfg/           # Configuration files
└── docker/            # Docker configuration
    └── docker-compose.yml
```

## 🚀 Management Commands

### Direct Server Control
```bash
# Switch to gmod user
sudo su - gmod

# Navigate to server directory
cd /home/gmod/gmod-prophunt-server/

# Start server (manual)
./srcds_run -game garrysmod -console -port 27015 +ip 0.0.0.0 +maxplayers 24 +map ph_office +gamemode prop_hunt -tickrate 66 +hostname "PropHunt Server" +sv_setsteamaccount YOUR_TOKEN -disableluarefresh -nohltv

# Check if server is running
ps aux | grep srcds_linux
```

### Process Management
```bash
# Find server process
ps aux | grep srcds_linux

# Kill server (if needed)
pkill -f srcds_linux

# Check server logs
tail -f /home/gmod/gmod-prophunt-server/garrysmod/console.log
```

### Docker Management (Alternative)
```bash
# Using Docker Compose
cd /opt/gmod-prophunt/docker/
docker-compose up -d
docker-compose logs -f
docker-compose down
```

## ⚙️ Configuration

### Server Configuration
- **server.cfg**: Located in `/home/gmod/gmod-prophunt-server/garrysmod/cfg/`
- **Steam Token**: Required for public server listing
- **RCON Password**: Set in server configuration

### PropHunt Gamemode
- **Gamemode Files**: Located in `garrysmod/gamemodes/prop_hunt/`
- **Maps**: PropHunt-specific maps in `garrysmod/maps/`
- **Addons**: Additional functionality in `garrysmod/addons/`

## 🎮 Server Features

### PropHunt Gameplay
- **Props Team**: Hide as objects in the map
- **Hunters Team**: Find and eliminate props
- **Round-based**: Automatic team switching
- **Map Rotation**: Multiple PropHunt maps

### Server Settings
- **Friendly Fire**: Disabled
- **Voice Chat**: Enabled
- **Admin System**: ULX/ULib (if installed)
- **Anti-Cheat**: VAC enabled

## 🔧 Maintenance

### Regular Tasks
```bash
# Update server files
cd /home/gmod/gmod-prophunt-server/
./steamcmd.sh +login anonymous +force_install_dir . +app_update 4020 validate +quit

# Backup server data
tar -czf gmod-backup-$(date +%Y%m%d).tar.gz garrysmod/

# Clean old logs
find garrysmod/logs/ -name "*.log" -mtime +30 -delete
```

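The backup and log-cleanup steps above can be combined into one helper that archives the game directory and prunes stale archives. A sketch — the function name, backup location, and 14-day retention are arbitrary choices:

```bash
#!/bin/sh
# Archive a garrysmod/ directory, then delete archives older than N days.
backup_gmod() {
  src="$1"; dest="$2"; keep_days="${3:-14}"
  mkdir -p "$dest"
  tar -czf "$dest/gmod-backup-$(date +%Y%m%d).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
  find "$dest" -name 'gmod-backup-*.tar.gz' -mtime "+$keep_days" -delete
}

# backup_gmod /home/gmod/gmod-prophunt-server/garrysmod /home/gmod/backups
```

Run it from cron (e.g. nightly) so retention stays bounded without manual cleanup.
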
### Performance Monitoring
```bash
# Check server performance (pgrep -d, joins multiple PIDs with commas)
htop -p $(pgrep -d, srcds_linux)

# Monitor network connections
ss -tuln | grep 27015

# Check disk usage
du -sh /home/gmod/gmod-prophunt-server/
```

## 🔒 Security Considerations

- **RCON**: Bound to localhost only (127.0.0.1:39903)
- **User Isolation**: Runs under dedicated `gmod` user
- **File Permissions**: Proper ownership and permissions
- **Steam VAC**: Anti-cheat protection enabled
- **Firewall**: Only game port (27015) exposed publicly

## 🐛 Troubleshooting

### Common Issues

**Server won't start**
- Check if port 27015 is already in use: `ss -tlnp | grep 27015`
- Verify the Steam token is valid
- Check file permissions: `ls -la /home/gmod/gmod-prophunt-server/`

**Players can't connect**
- Verify the firewall allows port 27015
- Check the server is listening: `ss -tulnp | grep 27015`
- Test connectivity: `telnet YOUR_WAN_IP 27015` (note: game traffic itself is UDP, so also confirm UDP 27015 is open)

**Performance issues**
- Monitor CPU/RAM usage: `htop`
- Check for addon conflicts
- Review server logs for errors

### Log Locations
- **Console Output**: `/home/gmod/gmod-prophunt-server/garrysmod/console.log`
- **Error Logs**: `/home/gmod/gmod-prophunt-server/garrysmod/logs/`
- **System Logs**: `journalctl -u gmod-server` (if run as a systemd service)

## 🔗 Related Services

- **PufferPanel**: Can manage this server through its web interface
- **Steam**: Requires a Steam Game Server Account
- **Nginx**: May proxy web-based admin interfaces

## 📚 External Resources

- [Garry's Mod Wiki](https://wiki.facepunch.com/gmod/)
- [PropHunt Gamemode](https://steamcommunity.com/sharedfiles/filedetails/?id=135509255)
- [Server Administration Guide](https://wiki.facepunch.com/gmod/Server_Administration)
- [Steam Game Server Account Management](https://steamcommunity.com/dev/managegameservers)

65
hosts/vms/seattle/gmod-prophunt/docker-compose.yml
Normal file
@@ -0,0 +1,65 @@
services:
  gmod-prophunt:
    build:
      context: ..
      dockerfile: docker/Dockerfile
    container_name: gmod-prophunt
    restart: unless-stopped
    stdin_open: true
    tty: true

    environment:
      - SRCDS_TOKEN=${SRCDS_TOKEN:-}
      - SERVER_NAME=${SERVER_NAME:-PropHunt Server}
      - RCON_PASSWORD="REDACTED_PASSWORD"
      - MAX_PLAYERS=${MAX_PLAYERS:-24}
      - MAP=${MAP:-gm_construct}
      - PORT=${PORT:-27015}
      - GAMEMODE=${GAMEMODE:-prop_hunt}
      - WORKSHOP_COLLECTION=${WORKSHOP_COLLECTION:-}
      - TICKRATE=${TICKRATE:-66}
      - TZ=${TZ:-America/Los_Angeles}
      - AUTO_UPDATE=${AUTO_UPDATE:-true}

    ports:
      - "${PORT:-27015}:27015/tcp"
      - "${PORT:-27015}:27015/udp"
      - "27005:27005/udp"
      - "27020:27020/udp"

    volumes:
      # Persistent server files (includes addons, data, configs)
      - gmod-server:/home/gmod/serverfiles

    networks:
      - gmod-network

    # Required for Source engine servers
    ulimits:
      memlock:
        soft: -1
        hard: -1

    # Resource limits (optional, adjust as needed)
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
        reservations:
          cpus: '1'
          memory: 2G

    # Logging configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  gmod-network:
    driver: bridge

volumes:
  gmod-server:
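Every tunable in the compose file above uses `${VAR:-default}` substitution, which Compose resolves with the same fallback semantics as POSIX shell: the default applies only when the variable is unset or empty. A quick illustration of that behavior:

```shell
# ${VAR:-default} yields the default when VAR is unset or empty
unset MAP
echo "MAP=${MAP:-gm_construct}"   # -> MAP=gm_construct

# Once set, the exported value wins
MAP=ph_office
echo "MAP=${MAP:-gm_construct}"   # -> MAP=ph_office
```

This is why `docker compose up -d` works with no environment prepared at all, and why `MAP=ph_office docker compose up -d` overrides just that one setting.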
25
hosts/vms/seattle/holyclaude.yaml
Normal file
@@ -0,0 +1,25 @@
# HolyClaude - AI coding workstation (Claude Code CLI + web UI + misc AI CLIs)
# Image: coderluii/holyclaude (MIT, https://github.com/CoderLuii/HolyClaude)
# Access: Tailscale-only via http://seattle:3059 (bound to 100.82.197.124 only)
# Status: testing

services:
  holyclaude:
    image: coderluii/holyclaude:latest
    container_name: holyclaude
    ports:
      - "100.82.197.124:3059:3001"
    environment:
      - TZ=America/Los_Angeles
      - PUID=1000
      - PGID=1000
    volumes:
      - holyclaude-data:/home/claude
      - holyclaude-workspace:/workspace
    restart: unless-stopped

volumes:
  holyclaude-data:
    name: holyclaude-data
  holyclaude-workspace:
    name: holyclaude-workspace
199
hosts/vms/seattle/obsidian/README.md
Normal file
@@ -0,0 +1,199 @@
# Obsidian - Web-based Note Taking

## 📋 Overview

Obsidian is a powerful knowledge management and note-taking application. This deployment provides web-based access to Obsidian through a containerized environment, allowing remote access to your notes and knowledge base.

## 🔧 Service Details

| Property | Value |
|----------|-------|
| **Container Name** | `obsidian` |
| **Image** | `lscr.io/linuxserver/obsidian:latest` |
| **Web Port** | 127.0.0.1:3000 |
| **Secondary Port** | 127.0.0.1:3001 |
| **Domain** | `obs.vish.gg` |
| **User** | `admin` |
| **Timezone** | `America/Los_Angeles` |

## 🌐 Network Access

- **Public URL**: `https://obs.vish.gg`
- **Local Access**: `http://127.0.0.1:3000`
- **Secondary Port**: `http://127.0.0.1:3001`
- **Reverse Proxy**: Nginx configuration in `/etc/nginx/sites-enabled/obsidian`

## 📁 Directory Structure

```
/opt/obsidian/
├── docker-compose.yml   # Service configuration
└── config/              # Application configuration
    ├── data/            # Obsidian vaults and notes
    ├── Desktop/         # Desktop environment files
    └── .config/         # Application settings
```

## 🚀 Management Commands

### Docker Operations
```bash
# Navigate to service directory
cd /opt/obsidian/

# Start service
docker-compose up -d

# Stop service
docker-compose down

# Restart service
docker-compose restart

# View logs
docker-compose logs -f

# Update service
docker-compose pull
docker-compose up -d
```

### Container Management
```bash
# Check container status
docker ps | grep obsidian

# Execute commands in container
docker exec -it obsidian bash

# View container logs
docker logs obsidian -f

# Check resource usage
docker stats obsidian
```

## ⚙️ Configuration

### Environment Variables
- **PUID/PGID**: 1000 (user permissions)
- **Timezone**: America/Los_Angeles
- **Custom User**: admin
- **Password**: REDACTED_PASSWORD (change in production!)

### Security Options
- **seccomp**: unconfined (required for GUI applications)
- **Shared Memory**: 1GB (for browser rendering)

### Volume Mounts
- **Config**: `/opt/obsidian/config` → `/config`

## 🔒 Security Considerations

- **Local Binding**: Only accessible via localhost (127.0.0.1)
- **Nginx Proxy**: SSL termination and authentication
- **Default Credentials**: Change the default password immediately
- **Container Isolation**: Runs in an isolated Docker environment
- **File Permissions**: Proper user/group mapping

## 💻 Usage

### Web Interface
1. Access via `https://obs.vish.gg`
2. Log in with configured credentials
3. Use Obsidian's full interface through the browser
4. Create and manage vaults
5. Install community plugins and themes

### Features Available
- **Full Obsidian Interface**: Complete desktop experience in browser
- **Vault Management**: Create and switch between vaults
- **Plugin Support**: Install community plugins
- **Theme Support**: Customize appearance
- **File Management**: Upload and organize files
- **Graph View**: Visualize note connections

## 🔧 Maintenance

### Backup
```bash
# Backup entire configuration
tar -czf obsidian-backup-$(date +%Y%m%d).tar.gz /opt/obsidian/config/

# Backup specific vault
tar -czf vault-backup-$(date +%Y%m%d).tar.gz /opt/obsidian/config/data/YourVaultName/
```
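Before relying on a backup archive, it is worth listing it back to confirm it is readable end-to-end. A self-contained sketch of the round trip, using a throwaway directory in place of the real vault path:

```shell
# Stand in for /opt/obsidian/config/ with a throwaway directory
src=$(mktemp -d)
echo "# daily note" > "$src/daily.md"

# Same date-stamped naming as the backup commands above
backup="/tmp/obsidian-backup-$(date +%Y%m%d).tar.gz"
tar -czf "$backup" -C "$src" .

# Listing the archive verifies the gzip stream and tar index are intact
tar -tzf "$backup" | grep -q daily.md && echo "archive OK"
```

The same `tar -tzf` check works unchanged against the real archives produced above.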
### Updates
```bash
cd /opt/obsidian/
docker-compose pull
docker-compose up -d
```

### Performance Tuning
```bash
# Increase shared memory if needed
# Edit docker-compose.yml and increase shm_size

# Monitor resource usage
docker stats obsidian
```

## 🐛 Troubleshooting

### Common Issues
```bash
# Container won't start
docker-compose logs obsidian

# GUI not loading
# Check shared memory allocation
# Verify seccomp:unconfined is set

# Permission issues
sudo chown -R 1000:1000 /opt/obsidian/config/

# Performance issues
# Increase shm_size in docker-compose.yml
# Check available system resources
```

### Connection Issues
```bash
# Test local endpoint
curl -I http://127.0.0.1:3000

# Test public endpoint
curl -I https://obs.vish.gg

# Check nginx configuration
sudo nginx -t
sudo systemctl reload nginx
```

### File Access Issues
```bash
# Check file permissions
ls -la /opt/obsidian/config/

# Fix ownership
sudo chown -R 1000:1000 /opt/obsidian/config/

# Check disk space
df -h /opt/obsidian/
```

## 🔗 Related Services

- **Nginx**: Reverse proxy with SSL termination
- **Let's Encrypt**: SSL certificate management
- **Docker**: Container runtime

## 📚 External Resources

- [Obsidian Official Site](https://obsidian.md/)
- [LinuxServer.io Documentation](https://docs.linuxserver.io/images/docker-obsidian)
- [Docker Hub](https://hub.docker.com/r/linuxserver/obsidian)
- [Obsidian Community](https://obsidian.md/community)
- [Plugin Directory](https://obsidian.md/plugins)
20
hosts/vms/seattle/obsidian/docker-compose.yml
Normal file
@@ -0,0 +1,20 @@
version: '3.8'
services:
  obsidian:
    image: lscr.io/linuxserver/obsidian:latest
    container_name: obsidian
    restart: unless-stopped
    security_opt:
      - seccomp:unconfined
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - CUSTOM_USER=admin
      - PASSWORD="REDACTED_PASSWORD"
    volumes:
      - /opt/obsidian/config:/config
    ports:
      - "127.0.0.1:3000:3000"
      - "127.0.0.1:3001:3001"
    shm_size: "1gb"
36
hosts/vms/seattle/ollama.yaml
Normal file
@@ -0,0 +1,36 @@
# Ollama - Local LLM inference server
# OpenAI-compatible API for running local language models
# Port: 11434 (Ollama API), 8000 (OpenAI-compatible proxy)
#
# Ollama is much better suited for CPU inference than vLLM.
# It provides efficient CPU-based inference with automatic optimization.

services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama-seattle
    ports:
      - "11434:11434"
    environment:
      - OLLAMA_HOST=0.0.0.0:11434
      - OLLAMA_KEEP_ALIVE=24h
      # CPU-specific optimizations
      - OLLAMA_NUM_PARALLEL=2
      - OLLAMA_MAX_LOADED_MODELS=2
    volumes:
      # Persist model downloads
      - ollama-data:/root/.ollama
    restart: unless-stopped
    deploy:
      resources:
        limits:
          cpus: '12'
          memory: 32G
        reservations:
          cpus: '4'
          memory: 8G

volumes:
  ollama-data:
    name: ollama-seattle-data
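Once the container is up, the API can be exercised from any Tailscale peer with Ollama's documented `/api/generate` endpoint and the installed `qwen2.5:1.5b` model. A minimal sketch (the curl itself is commented out so the snippet stands alone without network access to seattle):

```shell
# Non-streaming generate request payload for the Ollama REST API
payload='{"model": "qwen2.5:1.5b", "prompt": "Why is the sky blue?", "stream": false}'
echo "$payload"

# With Tailscale connectivity to seattle:
# curl -s http://100.82.197.124:11434/api/generate -d "$payload"
```

Setting `"stream": false` returns a single JSON object instead of newline-delimited chunks, which is easier to pipe into `jq`.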
104
hosts/vms/seattle/palworld/README.md
Normal file
@@ -0,0 +1,104 @@
# Palworld Dedicated Server

Palworld dedicated server running on the Seattle VM via Docker, using [thijsvanloef/palworld-server-docker](https://github.com/thijsvanloef/palworld-server-docker).

## Connection Info

| Service | Address                | Protocol |
|---------|------------------------|----------|
| Game    | `100.82.197.124:8211`  | UDP      |
| Query   | `100.82.197.124:27016` | UDP      |
| RCON    | `100.82.197.124:25575` | TCP      |

Connect in-game using the Tailscale IP: `100.82.197.124:8211`

RCON is accessible only over Tailscale (port 25575/tcp).

Query port is set to 27016 instead of the default 27015 to avoid conflict with the Gmod server.

## Server Management

```bash
# Start the server
cd /opt/palworld && docker compose up -d

# Stop the server
docker compose down

# View logs
docker compose logs -f palworld-server

# Restart the server
docker compose restart palworld-server

# Force update the server
docker compose down && docker compose pull && docker compose up -d
```

### RCON Commands

Connect with any RCON client to `100.82.197.124:25575` using the admin password.

Useful commands:

| Command                          | Description            |
|----------------------------------|------------------------|
| `/Info`                          | Server info            |
| `/ShowPlayers`                   | List connected players |
| `/KickPlayer <steamid>`          | Kick a player          |
| `/BanPlayer <steamid>`           | Ban a player           |
| `/Save`                          | Force world save       |
| `/Shutdown <seconds> <message>`  | Graceful shutdown      |
## Configuration

Environment variables are set in `docker-compose.yml`. Key settings:

| Variable          | Default             | Description                           |
|-------------------|---------------------|---------------------------------------|
| `SERVER_NAME`     | Vish Palworld       | Server name shown in server browser   |
| `SERVER_PASSWORD` | *(empty)*           | Set via `SERVER_PASSWORD` env var     |
| `ADMIN_PASSWORD`  | changeme            | RCON password, set via env var        |
| `PLAYERS`         | 16                  | Max concurrent players                |
| `MULTITHREADING`  | true                | Multi-threaded CPU usage              |
| `COMMUNITY`       | false               | Community server listing visibility   |
| `UPDATE_ON_BOOT`  | true                | Auto-update server on container start |
| `TZ`              | America/Los_Angeles | Server timezone                       |

To set passwords without committing them, export env vars before starting:

```bash
export SERVER_PASSWORD="REDACTED_PASSWORD"
export ADMIN_PASSWORD="REDACTED_PASSWORD"
docker compose up -d
```

## Resource Limits

- CPU limit: 8 cores, reservation: 2 cores
- Memory limit: 16 GB, reservation: 4 GB

## Data & Backups

Server data persists in the `palworld-data` Docker volume.

```bash
# Find volume location
docker volume inspect palworld_palworld-data

# Backup the volume
docker run --rm -v palworld_palworld-data:/data -v $(pwd):/backup alpine tar czf /backup/palworld-backup.tar.gz -C /data .
```

## Firewall Rules

The following ports must be open on the Seattle VM:

- `8211/udp` -- Game traffic (open to Tailscale or LAN)
- `27016/udp` -- Steam query (open to Tailscale or LAN)
- `25575/tcp` -- RCON (restrict to Tailscale only)

## Reference

- Image docs: https://github.com/thijsvanloef/palworld-server-docker
- Palworld server wiki: https://tech.palworldgame.com/dedicated-server-guide
48
hosts/vms/seattle/palworld/docker-compose.yml
Normal file
@@ -0,0 +1,48 @@
services:
  palworld-server:
    image: thijsvanloef/palworld-server-docker:latest
    container_name: palworld-server
    restart: unless-stopped
    ports:
      - "8211:8211/udp"    # Game port
      - "27016:27016/udp"  # Query port (27015 used by gmod)
      - "25575:25575/tcp"  # RCON (Tailscale-only access)
    environment:
      - PUID=1000
      - PGID=1000
      - PORT=8211
      - QUERY_PORT=27016
      - PLAYERS=16
      - MULTITHREADING=true
      - COMMUNITY=false
      - SERVER_NAME=Vish Palworld
      - SERVER_PASSWORD="REDACTED_PASSWORD"
      - ADMIN_PASSWORD="REDACTED_PASSWORD"
      - RCON_ENABLED=true
      - RCON_PORT=25575
      - UPDATE_ON_BOOT=true
      - TZ=America/Los_Angeles
    volumes:
      - palworld-data:/palworld
    deploy:
      resources:
        limits:
          cpus: "8"
          memory: 16G
        reservations:
          cpus: "2"
          memory: 4G
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - palworld-network

volumes:
  palworld-data:

networks:
  palworld-network:
    driver: bridge
108
hosts/vms/seattle/pufferpanel/README.md
Normal file
@@ -0,0 +1,108 @@
# PufferPanel - Game Server Management

## 📋 Overview

PufferPanel is a web-based game server management panel that provides an easy-to-use interface for managing game servers, including Minecraft, Garry's Mod, and other popular games.

## 🔧 Service Details

| Property | Value |
|----------|-------|
| **Service Type** | System Service |
| **Binary Location** | `/usr/sbin/pufferpanel` |
| **Configuration** | `/etc/pufferpanel/` |
| **Data Directory** | `/var/lib/pufferpanel/` |
| **Web Port** | 8080 |
| **SFTP Port** | 5657 |
| **Process User** | `pufferpanel` |

## 🌐 Network Access

- **Web Interface**: `http://seattle-vm:8080`
- **SFTP Access**: `sftp://seattle-vm:5657`
- **Reverse Proxy**: Configured via Nginx (check `/etc/nginx/sites-enabled/pufferpanel`)

## 🚀 Management Commands

### Service Control
```bash
# Check status
sudo systemctl status pufferpanel

# Start/stop/restart
sudo systemctl start pufferpanel
sudo systemctl stop pufferpanel
sudo systemctl restart pufferpanel

# Enable/disable autostart
sudo systemctl enable pufferpanel
sudo systemctl disable pufferpanel
```

### Logs and Monitoring
```bash
# View logs
sudo journalctl -u pufferpanel -f

# Check process
ps aux | grep pufferpanel

# Check listening ports
ss -tlnp | grep -E "(8080|5657)"
```

## 📁 Directory Structure

```
/etc/pufferpanel/
├── config.json   # Main configuration
└── ...

/var/lib/pufferpanel/
├── servers/      # Game server instances
├── templates/    # Server templates
├── cache/        # Temporary files
└── logs/         # Application logs
```

## ⚙️ Configuration

### Main Config (`/etc/pufferpanel/config.json`)
- Web interface settings
- Database configuration
- SFTP server settings
- Authentication providers

### Server Management
- Create new game servers via the web interface
- Configure server resources and settings
- Manage server files via SFTP or the web interface
- Monitor server performance and logs

## 🔒 Security Considerations

- **User Access**: Managed through PufferPanel's user system
- **File Permissions**: Servers run under restricted user accounts
- **Network**: SFTP and web ports are exposed; consider firewall rules
- **Updates**: Keep PufferPanel updated for security patches

## 🎮 Supported Games

PufferPanel supports many game servers, including:
- Minecraft (Java & Bedrock)
- Garry's Mod
- Counter-Strike
- Team Fortress 2
- And many more via templates

## 🔗 Related Services

- **Garry's Mod PropHunt Server**: Managed through this panel
- **Nginx**: Provides reverse proxy for web interface
- **System Users**: Game servers run under dedicated users

## 📚 External Resources

- [PufferPanel Documentation](https://docs.pufferpanel.com/)
- [GitHub Repository](https://github.com/PufferPanel/PufferPanel)
- [Community Discord](https://discord.gg/pufferpanel)
87
hosts/vms/seattle/pufferpanel/docker-compose.yml
Normal file
@@ -0,0 +1,87 @@
version: '3.8'

services:
  pufferpanel:
    image: pufferpanel/pufferpanel:latest
    container_name: pufferpanel
    restart: unless-stopped

    environment:
      - PUFFER_WEB_HOST=0.0.0.0:8080
      - PUFFER_DAEMON_SFTP_HOST=0.0.0.0:5657
      - PUFFER_DAEMON_DATA_FOLDER=/var/lib/pufferpanel
      - PUFFER_DAEMON_CONSOLE_BUFFER=50
      - PUFFER_DAEMON_CONSOLE_FORWARD=false
      - PUFFER_LOGS_LEVEL=INFO
      - PUFFER_WEB_SESSION_KEY=changeme-generate-random-key
      - TZ=America/Los_Angeles

    ports:
      - "8080:8080"  # Web interface
      - "5657:5657"  # SFTP server

    volumes:
      # Configuration and data
      - pufferpanel-config:/etc/pufferpanel
      - pufferpanel-data:/var/lib/pufferpanel
      - pufferpanel-logs:/var/log/pufferpanel

      # Docker socket for container management (if needed)
      - /var/run/docker.sock:/var/run/docker.sock:ro

      # Game server files (optional, for direct file access)
      - pufferpanel-servers:/var/lib/pufferpanel/servers

    networks:
      - pufferpanel-network

    # Security context
    user: "1000:1000"

    # Resource limits
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 512M

    # Health check
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/api/self"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s

    # Logging configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

networks:
  pufferpanel-network:
    driver: bridge

volumes:
  pufferpanel-config:
    driver: local
  pufferpanel-data:
    driver: local
  pufferpanel-logs:
    driver: local
  pufferpanel-servers:
    driver: local

# Note: This is a reference Docker Compose configuration.
# The current installation runs as a system service.
# To migrate to Docker:
# 1. Stop the system service: sudo systemctl stop pufferpanel
# 2. Backup current data: sudo cp -r /var/lib/pufferpanel /backup/
# 3. Update this configuration with your specific settings
# 4. Run: docker-compose up -d
# 5. Restore data if needed
482
hosts/vms/seattle/stoatchat/DEPLOYMENT_GUIDE.md
Normal file
@@ -0,0 +1,482 @@
# Stoatchat Complete Deployment Guide - Seattle VM

This guide documents the complete process used to deploy Stoatchat on the Seattle VM. Follow these steps to recreate the deployment on a new server.

## Prerequisites

- Ubuntu/Debian server with root access
- Domain name with Cloudflare DNS management
- Gmail account with an App Password for SMTP
- At least 4GB RAM and 20GB storage

## Step 1: Server Preparation

### 1.1 Update System
```bash
apt update && apt upgrade -y
apt install -y curl wget git build-essential pkg-config libssl-dev nginx certbot python3-certbot-nginx
```

### 1.2 Install Docker
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
systemctl enable docker
systemctl start docker
```

### 1.3 Install Rust
```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source ~/.cargo/env
rustup default stable
```

## Step 2: Clone and Build Stoatchat

### 2.1 Clone Repository
```bash
cd /root
git clone https://github.com/stoatchat/stoatchat.git
cd stoatchat
```

### 2.2 Build Services
```bash
# This takes 15-30 minutes depending on server specs
cargo build --release

# Or for debug builds (faster compilation, used in current deployment):
cargo build
```

## Step 3: Infrastructure Services Setup

### 3.1 Create Docker Compose File
```bash
cat > compose.yml << 'EOF'
services:
  redis:
    image: eqalpha/keydb
    container_name: stoatchat-redis
    ports:
      - "6380:6379"
    volumes:
      - ./data/redis:/data
    restart: unless-stopped

  database:
    image: mongo:7
    container_name: stoatchat-mongodb
    ports:
      - "27017:27017"
    volumes:
      - ./data/mongodb:/data/db
    environment:
      MONGO_INITDB_ROOT_USERNAME: stoatchat
      MONGO_INITDB_ROOT_PASSWORD: "REDACTED_PASSWORD"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    restart: unless-stopped

  minio:
    image: minio/minio:latest
    container_name: stoatchat-minio
    command: server /data --console-address ":9001"
    environment:
      MINIO_ROOT_USER: REDACTED_MINIO_CRED
      MINIO_ROOT_PASSWORD: "REDACTED_PASSWORD"
    volumes:
      - ./data/minio:/data
    ports:
      - "14009:9000"
      - "9001:9001"
    restart: unless-stopped

  livekit:
    image: livekit/livekit-server:v1.9.9
    container_name: stoatchat-livekit
    ports:
      - "7880:7880"
      - "7881:7881"
      - "7882:7882/udp"
    volumes:
      - ./livekit.yml:/livekit.yml:ro
    command: --config /livekit.yml
    restart: unless-stopped
EOF
```

### 3.2 Create LiveKit Configuration
```bash
cat > livekit.yml << 'EOF'
port: 7880
redis:
  address: localhost:6380
  username: ""
  password: ""
webhook:
  api_key: worldwide
  urls:
    - 'http://localhost:8500/worldwide'
logging:
  level: debug
keys:
  worldwide: YOUR_LIVEKIT_API_KEY_GENERATE_RANDOM_32_CHARS
EOF
```
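A random 32-character value for the `keys` entry can be generated with openssl; 16 random bytes hex-encode to exactly 32 characters:

```shell
# 16 random bytes -> 32 lowercase hex characters
key=$(openssl rand -hex 16)
echo "$key"
echo "${#key}"   # 32
```

Substitute the result for `YOUR_LIVEKIT_API_KEY_GENERATE_RANDOM_32_CHARS` in `livekit.yml` (and reuse the same value wherever the API server is configured to talk to LiveKit).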
### 3.3 Start Infrastructure Services
```bash
docker-compose up -d
```

## Step 4: Stoatchat Configuration

### 4.1 Create Configuration Override
```bash
cat > Revolt.overrides.toml << 'EOF'
[database]
redis = "redis://127.0.0.1:6380/"
mongodb = "mongodb://stoatchat:YOUR_SECURE_MONGODB_PASSWORD@127.0.0.1:27017/revolt"

[hosts]
app = "https://YOUR_DOMAIN"
api = "https://api.YOUR_DOMAIN"
events = "wss://events.YOUR_DOMAIN"
autumn = "https://files.YOUR_DOMAIN"
january = "https://proxy.YOUR_DOMAIN"

[hosts.livekit]
worldwide = "wss://voice.YOUR_DOMAIN"

[email]
smtp_host = "smtp.gmail.com"
smtp_port = 587
smtp_username = "YOUR_GMAIL@gmail.com"
smtp_password = "REDACTED_PASSWORD"
from_address = "YOUR_GMAIL@gmail.com"
smtp_tls = true

[files]
s3_region = "us-east-1"
s3_bucket = "revolt-uploads"
s3_endpoint = "http://127.0.0.1:14009"
s3_access_key_id = "REDACTED_MINIO_CRED"
s3_secret_access_key = "YOUR_SECURE_MINIO_PASSWORD"

[security]
vapid_private_key = REDACTED_VAPID_PRIVATE_KEY

[features]
captcha_enabled = false
email_verification = true
invite_only = false

[limits]
max_file_size = 104857600  # 100MB
max_message_length = 2000
max_embed_count = 10
EOF
```
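The byte values in `[limits]` are easiest to sanity-check by expanding the arithmetic; `max_file_size` above is 100 MiB:

```shell
# 100 MiB expressed in bytes
echo $((100 * 1024 * 1024))   # 104857600
```

Keep this value in sync with the `client_max_body_size 100M;` directive in the nginx file-server block below, or uploads will be rejected at the proxy before Stoatchat ever sees them.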
|
||||
|
||||
## Step 5: SSL Certificates Setup
|
||||
|
||||
### 5.1 Configure Cloudflare DNS
|
||||
Set up A records for all subdomains pointing to your server IP:
|
||||
- YOUR_DOMAIN
|
||||
- api.YOUR_DOMAIN
|
||||
- events.YOUR_DOMAIN
|
||||
- files.YOUR_DOMAIN
|
||||
- proxy.YOUR_DOMAIN
|
||||
- voice.YOUR_DOMAIN
|
||||
|
||||
### 5.2 Obtain SSL Certificates
|
||||
```bash
|
||||
# Get certificates for all domains
|
||||
certbot certonly --nginx -d YOUR_DOMAIN -d api.YOUR_DOMAIN -d events.YOUR_DOMAIN -d files.YOUR_DOMAIN -d proxy.YOUR_DOMAIN -d voice.YOUR_DOMAIN
|
||||
|
||||
# Or individually if needed:
|
||||
certbot certonly --nginx -d YOUR_DOMAIN
|
||||
certbot certonly --nginx -d api.YOUR_DOMAIN
|
||||
certbot certonly --nginx -d events.YOUR_DOMAIN
|
||||
certbot certonly --nginx -d files.YOUR_DOMAIN
|
||||
certbot certonly --nginx -d proxy.YOUR_DOMAIN
|
||||
certbot certonly --nginx -d voice.YOUR_DOMAIN
|
||||
```
|
||||
|
||||
## Step 6: Nginx Configuration
|
||||
|
||||
### 6.1 Create Nginx Configuration
|
||||
```bash
|
||||
cat > /etc/nginx/sites-available/stoatchat << 'EOF'
|
||||
# Main app (placeholder/frontend)
|
||||
server {
|
||||
listen 80;
|
||||
server_name YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
location / {
|
||||
return 200 'Stoatchat - Coming Soon';
|
||||
add_header Content-Type text/plain;
|
||||
}
|
||||
}
|
||||
|
||||
# API Server
|
||||
server {
|
||||
listen 80;
|
||||
server_name api.YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name api.YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/api.YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/api.YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:14702;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
}
|
||||
|
||||
# Events WebSocket
|
||||
server {
|
||||
listen 80;
|
||||
server_name events.YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name events.YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/events.YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/events.YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:14703;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_read_timeout 86400;
|
||||
}
|
||||
}
|
||||
|
||||
# File Server
|
||||
server {
|
||||
listen 80;
|
||||
server_name files.YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name files.YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/files.YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/files.YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
client_max_body_size 100M;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:14704;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
}
|
||||
|
||||
# Media Proxy
|
||||
server {
|
||||
listen 80;
|
||||
server_name proxy.YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name proxy.YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/proxy.YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/proxy.YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:14705;
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
}
|
||||
}
|
||||
|
||||
# Voice/Video (LiveKit)
|
||||
server {
|
||||
listen 80;
|
||||
server_name voice.YOUR_DOMAIN;
|
||||
return 301 https://$server_name$request_uri;
|
||||
}
|
||||
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
server_name voice.YOUR_DOMAIN;
|
||||
|
||||
ssl_certificate /etc/letsencrypt/live/voice.YOUR_DOMAIN/fullchain.pem;
|
||||
ssl_certificate_key /etc/letsencrypt/live/voice.YOUR_DOMAIN/privkey.pem;
|
||||
|
||||
location / {
|
||||
proxy_pass http://127.0.0.1:7880;
|
||||
proxy_http_version 1.1;
|
||||
proxy_set_header Upgrade $http_upgrade;
|
||||
proxy_set_header Connection "upgrade";
|
||||
proxy_set_header Host $host;
|
||||
proxy_set_header X-Real-IP $remote_addr;
|
||||
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
|
||||
proxy_set_header X-Forwarded-Proto $scheme;
|
||||
proxy_read_timeout 86400;
|
||||
}
|
||||
}
|
||||
EOF
|
||||
```
|
||||
|
||||
### 6.2 Enable Configuration
|
||||
```bash
|
||||
ln -s /etc/nginx/sites-available/stoatchat /etc/nginx/sites-enabled/
|
||||
nginx -t
|
||||
systemctl reload nginx
|
||||
```
|
||||
|
||||
## Step 7: Start Stoatchat Services

### 7.1 Create Service Startup Script

```bash
cat > /root/stoatchat/start-services.sh << 'EOF'
#!/bin/bash
cd /root/stoatchat

# Start services in the background
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &

echo "All Stoatchat services started"
EOF

chmod +x /root/stoatchat/start-services.sh
```

### 7.2 Start Services

```bash
cd /root/stoatchat
./start-services.sh
```

## Step 8: Verification

### 8.1 Check Services

```bash
# Check processes
ps aux | grep revolt

# Check ports
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Test endpoints
curl -k https://api.YOUR_DOMAIN/
curl -k https://files.YOUR_DOMAIN/
curl -k https://proxy.YOUR_DOMAIN/
curl -k https://voice.YOUR_DOMAIN/
```

### 8.2 Expected Responses

- API: `{"revolt":"0.10.3","features":...}`
- Files: `{"autumn":"Hello, I am a file server!","version":"0.10.3"}`
- Proxy: `{"january":"Hello, I am a media proxy server!","version":"0.10.3"}`
- Voice: `OK`
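These checks can also be scripted. Below is a sketch; `expect_key` is a hypothetical helper (a simple grep, not a full JSON parser), and the expected keys follow the responses documented above:

```shell
#!/bin/bash
# Check a captured response body for an expected top-level JSON key.
expect_key() {
  printf '%s' "$1" | grep -q "\"$2\""
}

# Pure check against a captured body:
expect_key '{"revolt":"0.10.3","features":{}}' revolt && echo "api OK"

# Against the live endpoints (network required):
# expect_key "$(curl -sk https://api.YOUR_DOMAIN/)"   revolt  && echo "api OK"
# expect_key "$(curl -sk https://files.YOUR_DOMAIN/)" autumn  && echo "files OK"
# expect_key "$(curl -sk https://proxy.YOUR_DOMAIN/)" january && echo "proxy OK"
```

A missing key makes `expect_key` return non-zero, so the same helper works in conditionals and exit-code checks.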
## Step 9: Set Up Systemd Services (Optional but Recommended)

### 9.1 Create Systemd Service Files

```bash
# Create a service for each component
cat > /etc/systemd/system/stoatchat-api.service << 'EOF'
[Unit]
Description=Stoatchat API Server
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-delta
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Repeat for other services...
systemctl daemon-reload
systemctl enable stoatchat-api
systemctl start stoatchat-api
```
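Rather than hand-writing five near-identical unit files, they can be generated in a loop. This is a sketch: it writes to `./units` by default so the output can be reviewed, and `UNIT_DIR` would be pointed at `/etc/systemd/system` for a real install.

```shell
#!/bin/bash
# Generate one systemd unit per Stoatchat service (sketch).
# UNIT_DIR defaults to ./units for review; set it to
# /etc/systemd/system when installing for real.
UNIT_DIR="${UNIT_DIR:-./units}"
mkdir -p "$UNIT_DIR"

# Each input line is name:binary:description
while IFS=: read -r name binary desc; do
  [ -z "$name" ] && continue
  cat > "$UNIT_DIR/stoatchat-$name.service" << UNIT
[Unit]
Description=$desc
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/$binary
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
UNIT
done << 'LIST'
api:revolt-delta:Stoatchat API Server
events:revolt-bonfire:Stoatchat Events WebSocket
files:revolt-autumn:Stoatchat File Server
proxy:revolt-january:Stoatchat Media Proxy
gifbox:revolt-gifbox:Stoatchat GIF Service
LIST
```

After copying the generated files into place, enable and start them with `systemctl` as shown above.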

## Step 10: Frontend Setup (Future)

The main domain currently shows a placeholder. To complete the setup:

1. Deploy a Revolt.js frontend or compatible client
2. Update the nginx configuration to serve the frontend
3. Configure the frontend to use your API endpoints
## Security Considerations

1. **Change all default passwords** in the configuration files
2. **Generate new API keys** for LiveKit and VAPID
3. **Set up firewall rules** to restrict access to internal ports
4. **Enable fail2ban** for SSH protection
5. **Apply regular security updates** to the system and Docker images
## Backup Strategy

1. **Database**: Regular MongoDB dumps
2. **Files**: Backup of the MinIO data directory
3. **Configuration**: Backup of all .toml and .yml files
4. **SSL Certificates**: Backup of the Let's Encrypt directory
## Monitoring

Consider setting up monitoring for:

- Service health checks
- Resource usage (CPU, RAM, disk)
- Log aggregation
- SSL certificate expiration
- Database performance

---

This deployment guide captures the complete process used to set up Stoatchat on the Seattle VM. Adjust domain names, passwords, and paths as needed for your specific deployment.

hosts/vms/seattle/stoatchat/MIGRATION_GUIDE.md

# Stoatchat Migration Guide

This guide covers migrating the Stoatchat deployment from the Seattle VM to a new server.

## Pre-Migration Checklist

### 1. Document Current State

```bash
# On Seattle VM - document the current configuration
cd /root/stoatchat

# Save current configuration
cp Revolt.overrides.toml Revolt.overrides.toml.backup
cp livekit.yml livekit.yml.backup
cp compose.yml compose.yml.backup

# Document running services
ps aux | grep revolt > running_services.txt
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)" > port_status.txt

# Check Docker services
docker-compose ps > docker_status.txt
```

### 2. Backup Data

```bash
# Create backup directory
mkdir -p /root/stoatchat-backup/$(date +%Y%m%d)
cd /root/stoatchat-backup/$(date +%Y%m%d)

# Backup MongoDB (dump inside the container, then copy it out -
# mongodump writes to a path inside the container, not on the host)
docker exec stoatchat-mongodb mongodump --uri="mongodb://stoatchat:stoatchat_secure_password_change_me@localhost:27017/revolt" --out /tmp/mongodb-backup
docker cp stoatchat-mongodb:/tmp/mongodb-backup ./mongodb-backup

# Backup MinIO data
docker exec stoatchat-minio tar czf - /data > minio-backup.tar.gz

# Backup Redis data (optional - mostly cache)
docker exec stoatchat-redis redis-cli BGSAVE
docker cp stoatchat-redis:/data/dump.rdb ./redis-backup.rdb

# Backup configuration files
cp /root/stoatchat/Revolt.overrides.toml ./
cp /root/stoatchat/livekit.yml ./
cp /root/stoatchat/compose.yml ./
cp -r /etc/nginx/sites-available/stoatchat ./nginx-config

# Backup SSL certificates
sudo tar czf letsencrypt-backup.tar.gz /etc/letsencrypt/
```

### 3. Test Backup Integrity

```bash
# Verify MongoDB backup
ls -la mongodb-backup/revolt/
mongorestore --dryRun --uri="mongodb://stoatchat:stoatchat_secure_password_change_me@localhost:27017/revolt-test" mongodb-backup/

# Verify MinIO backup
tar -tzf minio-backup.tar.gz | head -10

# Verify configuration files
grep -E "(mongodb|redis|s3_)" Revolt.overrides.toml
```

## Migration Process

### Phase 1: Prepare New Server

#### 1.1 Server Setup
```bash
# On the new server - follow deployment guide steps 1-2:
# install dependencies, Docker, and Rust;
# clone the repository and build the services
```

#### 1.2 DNS Preparation
```bash
# Update Cloudflare DNS to point to the new server IP,
# or use the Cloudflare API with your token (see Vaultwarden → Homelab → Cloudflare)

# Example API call to update DNS:
curl -X PUT "https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/RECORD_ID" \
  -H "Authorization: Bearer <CLOUDFLARE_TOKEN>" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"api.st.vish.gg","content":"NEW_SERVER_IP"}'
```
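Since six records need updating, the payload construction can be scripted. This is a sketch: `dns_payload` is a hypothetical helper, each DNS record has its own `RECORD_ID` in the Cloudflare API, and the IP below is a placeholder.

```shell
#!/bin/bash
# Build the JSON payload for a Cloudflare A-record update (sketch).
dns_payload() {
  printf '{"type":"A","name":"%s","content":"%s"}' "$1" "$2"
}

NEW_SERVER_IP="203.0.113.10"  # placeholder

for name in st.vish.gg api.st.vish.gg events.st.vish.gg \
            files.st.vish.gg proxy.st.vish.gg voice.st.vish.gg; do
  payload=$(dns_payload "$name" "$NEW_SERVER_IP")
  echo "$payload"
  # curl -X PUT "https://api.cloudflare.com/client/v4/zones/ZONE_ID/dns_records/$RECORD_ID" \
  #   -H "Authorization: Bearer <CLOUDFLARE_TOKEN>" \
  #   -H "Content-Type: application/json" --data "$payload"
done
```

Keeping the payload in a function makes it easy to verify the JSON before sending any live API calls.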

### Phase 2: Data Migration

#### 2.1 Transfer Backup Files
```bash
# From the Seattle VM to the new server
scp -r /root/stoatchat-backup/$(date +%Y%m%d)/* root@NEW_SERVER_IP:/root/stoatchat-restore/

# Or use rsync for better reliability
rsync -avz --progress /root/stoatchat-backup/$(date +%Y%m%d)/ root@NEW_SERVER_IP:/root/stoatchat-restore/
```
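Whichever transfer tool is used, checksums catch silent corruption. The round trip can be sketched on a scratch directory (the temp paths below stand in for the real backup and restore directories):

```shell
#!/bin/bash
# Checksum round trip (sketch): record sha256 sums on the source,
# verify them on the destination after transfer.
src=$(mktemp -d); dst=$(mktemp -d)
echo "payload" > "$src/minio-backup.tar.gz"

# 1. On the source: checksum every file
( cd "$src" && find . -type f -exec sha256sum {} + ) > checksums.sha256

# 2. Transfer (rsync/scp in the real migration; cp stands in here)
cp -r "$src"/. "$dst"/

# 3. On the destination: every file must report OK
( cd "$dst" && sha256sum -c ) < checksums.sha256
```

`sha256sum -c` exits non-zero if any file is missing or altered, so the verification step can gate the rest of the migration.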

#### 2.2 Restore Configuration
```bash
# On the new server
cd /root/stoatchat-restore

# Restore configuration files
cp Revolt.overrides.toml /root/stoatchat/
cp livekit.yml /root/stoatchat/
cp compose.yml /root/stoatchat/

# Update the configuration for the new server if needed
sed -i 's/OLD_SERVER_IP/NEW_SERVER_IP/g' /root/stoatchat/Revolt.overrides.toml
```

#### 2.3 Restore SSL Certificates
```bash
# On the new server
cd /root/stoatchat-restore

# Restore Let's Encrypt certificates
sudo tar xzf letsencrypt-backup.tar.gz -C /

# Or obtain new certificates
certbot certonly --nginx -d st.vish.gg -d api.st.vish.gg -d events.st.vish.gg -d files.st.vish.gg -d proxy.st.vish.gg -d voice.st.vish.gg
```

#### 2.4 Set Up Infrastructure Services
```bash
# On the new server
cd /root/stoatchat

# Start infrastructure services
docker-compose up -d

# Wait for services to be ready
sleep 30
```

#### 2.5 Restore Data
```bash
# Restore MongoDB (copy the dump into the container first -
# mongorestore cannot read a host path from inside the container)
docker cp /root/stoatchat-restore/mongodb-backup stoatchat-mongodb:/tmp/mongodb-backup
docker exec stoatchat-mongodb mongorestore --uri="mongodb://stoatchat:stoatchat_secure_password_change_me@localhost:27017" --drop /tmp/mongodb-backup

# Restore MinIO data
docker exec -i stoatchat-minio sh -c 'cd / && tar xzf -' < /root/stoatchat-restore/minio-backup.tar.gz

# Restart MinIO to recognize the new data
docker-compose restart minio
```

### Phase 3: Service Migration

#### 3.1 Configure Nginx
```bash
# On the new server
cp /root/stoatchat-restore/nginx-config /etc/nginx/sites-available/stoatchat
ln -s /etc/nginx/sites-available/stoatchat /etc/nginx/sites-enabled/

# Test and reload nginx
nginx -t
systemctl reload nginx
```

#### 3.2 Start Stoatchat Services
```bash
# On the new server
cd /root/stoatchat

# Start services
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &
```

### Phase 4: Verification and Testing

#### 4.1 Service Health Check
```bash
# Check that all services are running
ps aux | grep revolt
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Test endpoints
curl -k https://api.st.vish.gg/
curl -k https://files.st.vish.gg/
curl -k https://proxy.st.vish.gg/
curl -k https://voice.st.vish.gg/
```

#### 4.2 Data Integrity Check
```bash
# Check MongoDB data (list collections in the revolt database)
docker exec stoatchat-mongodb mongo revolt --eval "db.getCollectionNames()"

# Check MinIO data
docker exec stoatchat-minio mc ls local/revolt-uploads/

# Check Redis connectivity
docker exec stoatchat-redis redis-cli ping
```

#### 4.3 Functional Testing
```bash
# Test API endpoints
curl -X GET https://api.st.vish.gg/users/@me -H "Authorization: Bearer TEST_TOKEN"

# Test file upload (if you have test files)
curl -X POST https://files.st.vish.gg/attachments -F "file=@test.jpg"

# Test the WebSocket connection (using wscat if available)
wscat -c wss://events.st.vish.gg/
```

## Post-Migration Tasks

### 1. Update DNS (if not done earlier)

Update all DNS records to point to the new server:

- api.st.vish.gg → NEW_SERVER_IP
- events.st.vish.gg → NEW_SERVER_IP
- files.st.vish.gg → NEW_SERVER_IP
- proxy.st.vish.gg → NEW_SERVER_IP
- voice.st.vish.gg → NEW_SERVER_IP
- st.vish.gg → NEW_SERVER_IP

### 2. Update Monitoring

- Update any monitoring systems to check the new server
- Update health check URLs
- Update alerting configurations

### 3. Clean Up Old Server
```bash
# On Seattle VM - ONLY after confirming the new server works
# Stop services
pkill -f revolt-

# Stop Docker services
docker-compose down

# Archive data (don't delete immediately)
mv /root/stoatchat /root/stoatchat-archived-$(date +%Y%m%d)
```

## Rollback Plan

If the migration fails, you can roll back quickly:

### 1. Immediate Rollback
```bash
# Update DNS back to the Seattle VM IP,
# then restart services on the Seattle VM:
cd /root/stoatchat
docker-compose up -d
./start-services.sh
```

### 2. Data Rollback
```bash
# If data was corrupted during migration,
# restore from the backup on the Seattle VM:
cd /root/stoatchat-backup/$(date +%Y%m%d)
# Follow the restore procedures above
```

## Migration Checklist

### Pre-Migration
- [ ] Document current state
- [ ] Create complete backup
- [ ] Test backup integrity
- [ ] Prepare new server
- [ ] Plan DNS update strategy

### During Migration
- [ ] Transfer backup files
- [ ] Restore configuration
- [ ] Set up infrastructure services
- [ ] Restore data
- [ ] Configure nginx
- [ ] Start Stoatchat services

### Post-Migration
- [ ] Verify all services running
- [ ] Test all endpoints
- [ ] Check data integrity
- [ ] Update DNS records
- [ ] Update monitoring
- [ ] Archive old server data

### Rollback Ready
- [ ] Keep old server running until confirmed
- [ ] Have DNS rollback plan
- [ ] Keep backup accessible
- [ ] Document any issues found

## Troubleshooting Common Issues

### Services Won't Start
```bash
# Check logs
tail -f /root/stoatchat/*.log

# Check configuration
grep -E "(mongodb|redis)" /root/stoatchat/Revolt.overrides.toml

# Check infrastructure services
docker-compose logs
```

### Database Connection Issues
```bash
# Test MongoDB connection
docker exec stoatchat-mongodb mongo --eval "db.adminCommand('ismaster')"

# Check credentials
grep mongodb /root/stoatchat/Revolt.overrides.toml
```

### SSL Certificate Issues
```bash
# Check certificate validity
openssl x509 -in /etc/letsencrypt/live/api.st.vish.gg/fullchain.pem -text -noout

# Renew certificates if needed
certbot renew --dry-run
```

### DNS Propagation Issues
```bash
# Check DNS resolution
dig api.st.vish.gg
nslookup api.st.vish.gg 8.8.8.8

# Check from different locations
curl -H "Host: api.st.vish.gg" http://NEW_SERVER_IP/
```

---

This migration guide provides a comprehensive process for moving Stoatchat to a new server while minimizing downtime and ensuring data integrity.

hosts/vms/seattle/stoatchat/README.md

# Stoatchat Deployment - Seattle VM

Stoatchat is a self-hosted Discord/Slack alternative (a Revolt.chat fork) deployed on the Seattle VM at st.vish.gg.

## Server Information

- **Host**: Seattle VM (YOUR_WAN_IP)
- **Location**: /root/stoatchat
- **Repository**: https://github.com/stoatchat/stoatchat.git
- **Domain**: st.vish.gg (and subdomains)

## Quick Status Check

```bash
# SSH to the Seattle VM first
ssh root@YOUR_WAN_IP

# Check all services
ps aux | grep revolt
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Test endpoints locally
curl -k https://api.st.vish.gg/ --resolve api.st.vish.gg:443:127.0.0.1
curl -k https://files.st.vish.gg/ --resolve files.st.vish.gg:443:127.0.0.1
curl -k https://proxy.st.vish.gg/ --resolve proxy.st.vish.gg:443:127.0.0.1
curl -k https://voice.st.vish.gg/ --resolve voice.st.vish.gg:443:127.0.0.1
```

## Service URLs

- **Main App**: https://st.vish.gg (frontend - currently a placeholder)
- **API**: https://api.st.vish.gg
- **WebSocket Events**: wss://events.st.vish.gg
- **File Server**: https://files.st.vish.gg
- **Media Proxy**: https://proxy.st.vish.gg
- **Voice/Video**: wss://voice.st.vish.gg
## Architecture on Seattle VM

```
Internet → Cloudflare → Seattle VM (YOUR_WAN_IP)
                          │
                    Nginx (443/80)
                          │
                  ┌───────┼───────┐
                  │       │       │
              Stoatchat Docker  System
              Services Services Services
                  │       │       │
              ┌───┼───┐   │   ┌───┼───┐
              │   │   │   │   │   │   │
             API Events Files Redis MongoDB MinIO
           14702 14703 14704 6380 27017 14009
                          │
                       LiveKit
                        7880
```
## Current Status: ✅ OPERATIONAL

All services are running and tested on the Seattle VM. The setup is production-ready except for the frontend client.

## Files in this Directory

- `docker-compose.yml` - Infrastructure services (Redis, MongoDB, MinIO, LiveKit)
- `Revolt.overrides.toml` - Main configuration file
- `livekit.yml` - LiveKit voice/video configuration
- `nginx-config.conf` - Nginx reverse proxy configuration
- `DEPLOYMENT_GUIDE.md` - Complete step-by-step deployment instructions
- `MIGRATION_GUIDE.md` - Instructions for moving to a new server
- `TROUBLESHOOTING.md` - Common issues and solutions
- `SERVICE_MANAGEMENT.md` - Start/stop/restart procedures
## Service Management

### Starting Services
```bash
cd /root/stoatchat

# Start infrastructure services
docker-compose up -d

# Stoatchat services are built and run as native binaries.
# They should auto-start, but if needed:
./target/debug/revolt-delta &    # API server
./target/debug/revolt-bonfire &  # Events WebSocket
./target/debug/revolt-autumn &   # File server
./target/debug/revolt-january &  # Media proxy
./target/debug/revolt-gifbox &   # GIF service
```

### Checking Status
```bash
# Check processes
ps aux | grep revolt

# Check ports
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Check Docker services
docker-compose ps

# Check nginx
systemctl status nginx
```

Last verified: 2026-02-11

hosts/vms/seattle/stoatchat/SERVICE_MANAGEMENT.md

# Stoatchat Service Management

Complete guide for managing Stoatchat services on the Seattle VM.

## Service Architecture

```
Stoatchat Services (Native Binaries)
├── revolt-delta   (API Server)       → Port 14702
├── revolt-bonfire (Events WebSocket) → Port 14703
├── revolt-autumn  (File Server)      → Port 14704
├── revolt-january (Media Proxy)      → Port 14705
└── revolt-gifbox  (GIF Service)      → Port 14706

Infrastructure Services (Docker)
├── Redis (KeyDB) → Port 6380
├── MongoDB       → Port 27017
├── MinIO         → Port 14009
└── LiveKit       → Port 7880

System Services
└── Nginx → Ports 80, 443
```

## Starting Services

### 1. Start Infrastructure Services
```bash
cd /root/stoatchat

# Start all Docker services
docker-compose up -d

# Check status
docker-compose ps

# Wait for services to be ready (important!)
sleep 30
```
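A fixed `sleep 30` either wastes time or isn't long enough. A small polling helper can wait only as long as needed; this is a sketch using bash's built-in `/dev/tcp` redirection:

```shell
#!/bin/bash
# Wait until host:port accepts TCP connections, or fail after
# `timeout` seconds (sketch).
wait_for_port() {
  local host="$1" port="$2" timeout="${3:-30}" waited=0
  # The (exec ...) subshell attempts a connect; the descriptor is
  # closed automatically when the subshell exits.
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $host:$port" >&2
      return 1
    fi
  done
}

# Example: block until MongoDB and Redis are reachable
# wait_for_port 127.0.0.1 27017 60 && wait_for_port 127.0.0.1 6380 60
```

The helper returns non-zero on timeout, so startup scripts can abort instead of launching the binaries against services that never came up.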

### 2. Start Stoatchat Services
```bash
cd /root/stoatchat

# Start all services in the background
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &

echo "All Stoatchat services started"
```

### 3. Automated Startup Script
```bash
# Create startup script
cat > /root/stoatchat/start-all-services.sh << 'EOF'
#!/bin/bash
cd /root/stoatchat

echo "Starting infrastructure services..."
docker-compose up -d

echo "Waiting for infrastructure to be ready..."
sleep 30

echo "Starting Stoatchat services..."
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &

echo "All services started. Checking status..."
sleep 5
ps aux | grep revolt | grep -v grep
EOF

chmod +x /root/stoatchat/start-all-services.sh
```

## Stopping Services

### 1. Stop Stoatchat Services
```bash
# Stop all revolt processes
pkill -f revolt-

# Or stop them individually
pkill -f revolt-delta    # API
pkill -f revolt-bonfire  # Events
pkill -f revolt-autumn   # Files
pkill -f revolt-january  # Proxy
pkill -f revolt-gifbox   # GIF
```

### 2. Stop Infrastructure Services
```bash
cd /root/stoatchat

# Stop all Docker services
docker-compose down

# Or stop them individually
docker-compose stop redis
docker-compose stop database
docker-compose stop minio
docker-compose stop livekit
```

### 3. Complete Shutdown Script
```bash
# Create shutdown script
cat > /root/stoatchat/stop-all-services.sh << 'EOF'
#!/bin/bash
cd /root/stoatchat

echo "Stopping Stoatchat services..."
pkill -f revolt-

echo "Stopping infrastructure services..."
docker-compose down

echo "All services stopped."
EOF

chmod +x /root/stoatchat/stop-all-services.sh
```

## Restarting Services

### 1. Restart Individual Stoatchat Service
```bash
cd /root/stoatchat

# Example: restart the API server
pkill -f revolt-delta
nohup ./target/debug/revolt-delta > api.log 2>&1 &

# Example: restart the Events service
pkill -f revolt-bonfire
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
```

### 2. Restart Infrastructure Service
```bash
cd /root/stoatchat

# Example: restart Redis
docker-compose restart redis

# Example: restart MongoDB
docker-compose restart database
```

### 3. Complete Restart
```bash
cd /root/stoatchat

# Stop everything
./stop-all-services.sh

# Wait a moment
sleep 5

# Start everything
./start-all-services.sh
```

## Service Status Monitoring

### 1. Check Running Processes
```bash
# Check all Stoatchat processes
ps aux | grep revolt | grep -v grep

# Check a specific service
ps aux | grep revolt-delta

# Check with a process tree
pstree -p | grep revolt
```

### 2. Check Listening Ports
```bash
# Check all Stoatchat ports
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Check a specific port
ss -tlnp | grep 14702

# Check with netstat
netstat -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"
```

### 3. Check Docker Services
```bash
cd /root/stoatchat

# Check all services
docker-compose ps

# Check a specific service
docker-compose ps redis

# Check service logs
docker-compose logs redis
docker-compose logs database
docker-compose logs minio
docker-compose logs livekit
```

### 4. Service Health Check
```bash
# Test all endpoints
curl -s https://api.st.vish.gg/ | jq .revolt
curl -s https://files.st.vish.gg/ | jq .autumn
curl -s https://proxy.st.vish.gg/ | jq .january
curl -s https://voice.st.vish.gg/

# Or use the health check script
/root/stoatchat/health-check.sh
```
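`health-check.sh` is referenced above but not shown in this directory. A minimal version might look like the following sketch, assuming the response keys documented in the deployment guide; live checks run only when invoked with `--run`, so the `check` function can be sourced and exercised on its own:

```shell
#!/bin/bash
# health-check.sh (sketch): prints OK/FAIL per endpoint and exits
# non-zero on any failure, so cron or a monitoring agent can alert.
fail=0

check() {
  local name="$1" url="$2" key="$3"
  # -s silent, -f fail on HTTP errors, -m 10 = ten-second timeout
  if curl -sfm 10 "$url" | grep -q "\"$key\""; then
    echo "OK   $name"
  else
    echo "FAIL $name"
    fail=1
  fi
}

if [ "${1:-}" = "--run" ]; then
  check api   https://api.st.vish.gg/   revolt
  check files https://files.st.vish.gg/ autumn
  check proxy https://proxy.st.vish.gg/ january
  exit "$fail"
fi
```

Invoke it as `./health-check.sh --run`; the exit code is 0 only when every endpoint responds with its expected key.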

## Log Management

### 1. View Service Logs
```bash
cd /root/stoatchat

# View current logs
tail -f api.log     # API server
tail -f events.log  # Events WebSocket
tail -f files.log   # File server
tail -f proxy.log   # Media proxy
tail -f gifbox.log  # GIF service

# View all logs simultaneously
tail -f *.log

# View with timestamps
tail -f api.log | while read line; do echo "$(date): $line"; done
```

### 2. Log Rotation
```bash
# Create log rotation script
cat > /root/stoatchat/rotate-logs.sh << 'EOF'
#!/bin/bash
cd /root/stoatchat

# Rotate logs if they're larger than 100MB
for log in api.log events.log files.log proxy.log gifbox.log; do
    if [ -f "$log" ] && [ $(stat -f%z "$log" 2>/dev/null || stat -c%s "$log") -gt 104857600 ]; then
        mv "$log" "$log.$(date +%Y%m%d-%H%M%S)"
        touch "$log"
        echo "Rotated $log"
    fi
done
EOF

chmod +x /root/stoatchat/rotate-logs.sh

# Add to crontab for daily rotation
# crontab -e
# 0 2 * * * /root/stoatchat/rotate-logs.sh
```
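Note that `mv` + `touch` only takes effect after a service restart: the running binaries keep their file descriptors pointed at the renamed file, so new output keeps flowing into the rotated copy. The system's logrotate avoids this with `copytruncate`. A sketch of `/etc/logrotate.d/stoatchat` (the size and retention values are assumptions):

```
/root/stoatchat/*.log {
    size 100M
    rotate 5
    compress
    missingok
    notifempty
    copytruncate
}
```

With `copytruncate`, logrotate copies the log and truncates the original in place, so the services never need to reopen their log files.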

### 3. Clear Logs
```bash
cd /root/stoatchat

# Truncate all logs in place (safe while services are running)
> api.log
> events.log
> files.log
> proxy.log
> gifbox.log

# Or remove and recreate them (note: running services keep writing
# to the deleted files until they are restarted)
rm -f *.log
touch api.log events.log files.log proxy.log gifbox.log
```

## Configuration Management

### 1. Backup Configuration
```bash
cd /root/stoatchat

# Create backups
cp Revolt.overrides.toml Revolt.overrides.toml.backup.$(date +%Y%m%d)
cp livekit.yml livekit.yml.backup.$(date +%Y%m%d)
cp compose.yml compose.yml.backup.$(date +%Y%m%d)
```

### 2. Apply Configuration Changes
```bash
cd /root/stoatchat

# After editing Revolt.overrides.toml,
# restart the affected services
pkill -f revolt-
./start-all-services.sh

# After editing livekit.yml
docker-compose restart livekit

# After editing compose.yml
docker-compose down
docker-compose up -d
```

### 3. Validate Configuration
```bash
cd /root/stoatchat

# Check TOML syntax
python3 -c "import toml; toml.load('Revolt.overrides.toml')" && echo "TOML valid"

# Check YAML syntax
python3 -c "import yaml; yaml.safe_load(open('livekit.yml'))" && echo "YAML valid"
python3 -c "import yaml; yaml.safe_load(open('compose.yml'))" && echo "Compose valid"

# Check nginx configuration
nginx -t
```

## Systemd Service Setup (Optional)

### 1. Create Systemd Services
```bash
# API Service
cat > /etc/systemd/system/stoatchat-api.service << 'EOF'
[Unit]
Description=Stoatchat API Server
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-delta
Restart=always
RestartSec=10
StandardOutput=append:/root/stoatchat/api.log
StandardError=append:/root/stoatchat/api.log

[Install]
WantedBy=multi-user.target
EOF

# Events Service
cat > /etc/systemd/system/stoatchat-events.service << 'EOF'
[Unit]
Description=Stoatchat Events WebSocket
After=network.target docker.service stoatchat-api.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-bonfire
Restart=always
RestartSec=10
StandardOutput=append:/root/stoatchat/events.log
StandardError=append:/root/stoatchat/events.log

[Install]
WantedBy=multi-user.target
EOF

# Files Service
cat > /etc/systemd/system/stoatchat-files.service << 'EOF'
[Unit]
Description=Stoatchat File Server
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-autumn
Restart=always
RestartSec=10
StandardOutput=append:/root/stoatchat/files.log
StandardError=append:/root/stoatchat/files.log

[Install]
WantedBy=multi-user.target
EOF

# Proxy Service
cat > /etc/systemd/system/stoatchat-proxy.service << 'EOF'
[Unit]
Description=Stoatchat Media Proxy
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-january
Restart=always
RestartSec=10
StandardOutput=append:/root/stoatchat/proxy.log
StandardError=append:/root/stoatchat/proxy.log

[Install]
WantedBy=multi-user.target
EOF

# GIF Service
cat > /etc/systemd/system/stoatchat-gifbox.service << 'EOF'
[Unit]
Description=Stoatchat GIF Service
After=network.target docker.service
Requires=docker.service

[Service]
Type=simple
User=root
WorkingDirectory=/root/stoatchat
ExecStart=/root/stoatchat/target/debug/revolt-gifbox
Restart=always
RestartSec=10
StandardOutput=append:/root/stoatchat/gifbox.log
StandardError=append:/root/stoatchat/gifbox.log

[Install]
WantedBy=multi-user.target
EOF
```

### 2. Enable and Start Systemd Services
```bash
# Reload systemd
systemctl daemon-reload

# Enable services
systemctl enable stoatchat-api
systemctl enable stoatchat-events
systemctl enable stoatchat-files
systemctl enable stoatchat-proxy
systemctl enable stoatchat-gifbox

# Start services
systemctl start stoatchat-api
systemctl start stoatchat-events
systemctl start stoatchat-files
systemctl start stoatchat-proxy
systemctl start stoatchat-gifbox

# Check status
systemctl status stoatchat-api
systemctl status stoatchat-events
systemctl status stoatchat-files
systemctl status stoatchat-proxy
systemctl status stoatchat-gifbox
```
|
||||
|
||||
### 3. Manage with Systemd
|
||||
```bash
|
||||
# Start all services
|
||||
systemctl start stoatchat-api stoatchat-events stoatchat-files stoatchat-proxy stoatchat-gifbox
|
||||
|
||||
# Stop all services
|
||||
systemctl stop stoatchat-api stoatchat-events stoatchat-files stoatchat-proxy stoatchat-gifbox
|
||||
|
||||
# Restart all services
|
||||
systemctl restart stoatchat-api stoatchat-events stoatchat-files stoatchat-proxy stoatchat-gifbox
|
||||
|
||||
# Check status of all services
|
||||
systemctl status stoatchat-*
|
||||
```
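Once the units are managed by systemd, the file logs are complemented by the journal, which captures start/stop events and crashes. A small loop (a sketch; unit names assumed from the units created above) shows recent journal entries for every service:

```bash
# Show the last journal entries for each Stoatchat unit.
for svc in api events files proxy gifbox; do
  unit="stoatchat-$svc"
  echo "== $unit =="
  # Errors are suppressed on hosts where the unit or journal is unavailable.
  journalctl -u "$unit" -n 20 --no-pager 2>/dev/null || true
done
```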

## Maintenance Tasks

### 1. Regular Maintenance
```bash
# Weekly maintenance script
cat > /root/stoatchat/weekly-maintenance.sh << 'EOF'
#!/bin/bash
cd /root/stoatchat

echo "=== Weekly Stoatchat Maintenance ==="
echo "Date: $(date)"

# Rotate logs
./rotate-logs.sh

# Update Docker images
docker-compose pull

# Restart services with new images
docker-compose down
docker-compose up -d

# Clean up old Docker images
docker image prune -f

# Check disk usage
echo "Disk usage:"
df -h /root/stoatchat

echo "Maintenance completed."
EOF

chmod +x /root/stoatchat/weekly-maintenance.sh
```

### 2. Update Procedures
```bash
# Update Stoatchat code
cd /root/stoatchat
git pull origin main

# Rebuild services
cargo build

# Restart services
./stop-all-services.sh
./start-all-services.sh
```

### 3. Backup Procedures
```bash
# Create backup script
cat > /root/stoatchat/backup.sh << 'EOF'
#!/bin/bash
BACKUP_DIR="/root/stoatchat-backups/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"

cd /root/stoatchat

# Backup configuration
cp Revolt.overrides.toml "$BACKUP_DIR/"
cp livekit.yml "$BACKUP_DIR/"
cp compose.yml "$BACKUP_DIR/"

# Backup MongoDB (stream the dump to the host; a plain `mongodump --out`
# inside `docker exec` would write into the container's filesystem)
docker exec stoatchat-mongodb mongodump --archive > "$BACKUP_DIR/mongodb.archive"

# Backup MinIO data
docker exec stoatchat-minio tar czf - /data > "$BACKUP_DIR/minio-data.tar.gz"

echo "Backup completed: $BACKUP_DIR"
EOF

chmod +x /root/stoatchat/backup.sh
```
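Daily backups accumulate quickly. A retention sketch (assuming the date-named `/root/stoatchat-backups/YYYYMMDD` layout created by `backup.sh`) prunes anything older than 30 days:

```bash
# Prune date-named backup directories older than 30 days.
BACKUP_ROOT="${BACKUP_ROOT:-/root/stoatchat-backups}"
if [ -d "$BACKUP_ROOT" ]; then
  find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
fi
```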

## Quick Reference

### Essential Commands
```bash
# Start everything
cd /root/stoatchat && ./start-all-services.sh

# Stop everything
cd /root/stoatchat && ./stop-all-services.sh

# Check status
ps aux | grep revolt && docker-compose ps

# View logs
cd /root/stoatchat && tail -f *.log

# Test endpoints
curl https://api.st.vish.gg/ && curl https://files.st.vish.gg/
```

### Service Ports
- API (revolt-delta): 14702
- Events (revolt-bonfire): 14703
- Files (revolt-autumn): 14704
- Proxy (revolt-january): 14705
- GIF (revolt-gifbox): 14706
- LiveKit: 7880
- Redis: 6380
- MongoDB: 27017
- MinIO: 14009
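A quick way to confirm that every port in the list has a listener is bash's built-in `/dev/tcp`, so no extra tooling is needed (a sketch):

```bash
# Probe every Stoatchat-related port on localhost.
for entry in api:14702 events:14703 files:14704 proxy:14705 gif:14706 \
             livekit:7880 redis:6380 mongodb:27017 minio:14009; do
  name="${entry%%:*}"; port="${entry##*:}"
  # Opening fd 3 on /dev/tcp/<host>/<port> succeeds only if something listens.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "$name ($port): up"
  else
    echo "$name ($port): DOWN"
  fi
done
```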

### Important Files
- Configuration: `/root/stoatchat/Revolt.overrides.toml`
- LiveKit config: `/root/stoatchat/livekit.yml`
- Docker config: `/root/stoatchat/compose.yml`
- Nginx config: `/etc/nginx/sites-available/stoatchat`
- Logs: `/root/stoatchat/*.log`

473
hosts/vms/seattle/stoatchat/TROUBLESHOOTING.md
Normal file
@@ -0,0 +1,473 @@

# Stoatchat Troubleshooting Guide

Common issues and solutions for the Stoatchat deployment on Seattle VM.

## Quick Diagnostics

### Check All Services Status
```bash
# SSH to Seattle VM
ssh root@YOUR_WAN_IP

# Check Stoatchat processes
ps aux | grep revolt

# Check ports
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

# Check Docker services
cd /root/stoatchat
docker-compose ps

# Check nginx
systemctl status nginx
```

### Test All Endpoints
```bash
# Test locally on server
curl -k https://api.st.vish.gg/ --resolve api.st.vish.gg:443:127.0.0.1
curl -k https://files.st.vish.gg/ --resolve files.st.vish.gg:443:127.0.0.1
curl -k https://proxy.st.vish.gg/ --resolve proxy.st.vish.gg:443:127.0.0.1
curl -k https://voice.st.vish.gg/ --resolve voice.st.vish.gg:443:127.0.0.1

# Test externally
curl https://api.st.vish.gg/
curl https://files.st.vish.gg/
curl https://proxy.st.vish.gg/
curl https://voice.st.vish.gg/
```

## Common Issues

### 1. Services Not Starting

#### Symptoms
- `ps aux | grep revolt` shows no processes
- Ports not listening
- Connection refused errors

#### Diagnosis
```bash
cd /root/stoatchat

# Check if binaries exist
ls -la target/debug/revolt-*

# Try starting manually to see errors
./target/debug/revolt-delta

# Check logs
tail -f api.log events.log files.log proxy.log gifbox.log
```

#### Solutions
```bash
# Rebuild if binaries missing
cargo build

# Check configuration
grep -E "(mongodb|redis|s3_)" Revolt.overrides.toml

# Restart infrastructure services
docker-compose down && docker-compose up -d

# Wait for services to be ready
sleep 30

# Start Stoatchat services
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &
```

### 2. Database Connection Issues

#### Symptoms
- Services start but crash immediately
- "Connection refused" in logs
- MongoDB/Redis errors

#### Diagnosis
```bash
# Check Docker services
docker-compose ps

# Test MongoDB connection (recent `mongo` images ship mongosh, not the legacy mongo shell)
docker exec stoatchat-mongodb mongosh --eval "db.adminCommand('ping')"

# Test Redis connection
docker exec stoatchat-redis redis-cli ping

# Check configuration
grep -E "(mongodb|redis)" /root/stoatchat/Revolt.overrides.toml
```

#### Solutions
```bash
# Restart infrastructure
docker-compose restart

# Check MongoDB logs
docker-compose logs database

# Check Redis logs
docker-compose logs redis

# Verify ports are accessible
telnet 127.0.0.1 27017
telnet 127.0.0.1 6380
```

### 3. SSL Certificate Issues

#### Symptoms
- SSL errors in browser
- Certificate expired warnings
- nginx fails to start

#### Diagnosis
```bash
# Check certificate validity
openssl x509 -in /etc/letsencrypt/live/api.st.vish.gg/fullchain.pem -text -noout | grep -A2 "Validity"

# Check nginx configuration
nginx -t

# Check certificate files exist
ls -la /etc/letsencrypt/live/*/
```

#### Solutions
```bash
# Renew certificates
certbot renew

# Or renew specific certificate
certbot renew --cert-name api.st.vish.gg

# Test renewal
certbot renew --dry-run

# Reload nginx after renewal
systemctl reload nginx
```
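The manual reload can be automated: certbot's `--deploy-hook` runs a command only when a certificate was actually replaced. A cron sketch (path and schedule are assumptions, adjust to taste):

```bash
# /etc/cron.d/certbot-renew (config sketch)
0 3,15 * * * root certbot renew --quiet --deploy-hook "systemctl reload nginx"
```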

### 4. File Upload Issues

#### Symptoms
- File uploads fail
- 413 Request Entity Too Large
- MinIO connection errors

#### Diagnosis
```bash
# Check MinIO status
docker-compose logs minio

# Test MinIO connection
curl http://127.0.0.1:14009/minio/health/live

# Check nginx file size limits
grep client_max_body_size /etc/nginx/sites-available/stoatchat

# Check MinIO credentials
grep -A5 "\[files\]" /root/stoatchat/Revolt.overrides.toml
```

#### Solutions
```bash
# Restart MinIO
docker-compose restart minio

# Check MinIO bucket exists
docker exec stoatchat-minio mc ls local/

# Create bucket if missing
docker exec stoatchat-minio mc mb local/revolt-uploads

# Increase nginx file size limit if needed
sed -i 's/client_max_body_size 100M;/client_max_body_size 500M;/' /etc/nginx/sites-available/stoatchat
systemctl reload nginx
```

### 5. WebSocket Connection Issues

#### Symptoms
- Events service returns 502
- WebSocket connections fail
- Real-time features not working

#### Diagnosis
```bash
# Check events service
curl -k https://events.st.vish.gg/ --resolve events.st.vish.gg:443:127.0.0.1

# Check if service is listening
ss -tlnp | grep 14703

# Check nginx WebSocket configuration
grep -A10 "events.st.vish.gg" /etc/nginx/sites-available/stoatchat
```

#### Solutions
```bash
# Restart events service
pkill -f revolt-bonfire
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &

# Check WebSocket headers in nginx
# Ensure these are present:
# proxy_set_header Upgrade $http_upgrade;
# proxy_set_header Connection "upgrade";

# Test WebSocket connection (if wscat available)
wscat -c wss://events.st.vish.gg/
```
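When wscat is not installed, plain curl can at least verify that nginx forwards the Upgrade handshake; a healthy endpoint should answer `HTTP/1.1 101 Switching Protocols` (a sketch):

```bash
# Generate a random Sec-WebSocket-Key (16 bytes, base64-encoded) and attempt the handshake.
key="$(head -c 16 /dev/urandom | base64)"
curl -i -N --max-time 5 \
  -H "Connection: Upgrade" -H "Upgrade: websocket" \
  -H "Sec-WebSocket-Version: 13" -H "Sec-WebSocket-Key: $key" \
  https://events.st.vish.gg/ || true
```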

### 6. LiveKit Voice Issues

#### Symptoms
- Voice/video not working
- LiveKit returns errors
- Connection timeouts

#### Diagnosis
```bash
# Check LiveKit status
docker-compose logs livekit

# Test LiveKit endpoint
curl -k https://voice.st.vish.gg/ --resolve voice.st.vish.gg:443:127.0.0.1

# Check LiveKit configuration
cat /root/stoatchat/livekit.yml

# Check if using correct image
docker images | grep livekit
```

#### Solutions
```bash
# Restart LiveKit
docker-compose restart livekit

# Check Redis connection for LiveKit
docker exec stoatchat-redis redis-cli ping

# Verify LiveKit configuration
# Ensure Redis address matches: localhost:6380

# Check firewall for UDP ports
ufw status | grep 7882
```

### 7. Email/SMTP Issues

#### Symptoms
- Email verification not working
- SMTP connection errors
- Authentication failures

#### Diagnosis
```bash
# Check SMTP configuration
grep -A10 "\[email\]" /root/stoatchat/Revolt.overrides.toml

# Test SMTP connection
telnet smtp.gmail.com 587

# Check logs for SMTP errors
grep -i smtp /root/stoatchat/*.log
```

#### Solutions
```bash
# Verify Gmail App Password is correct
# Check if 2FA is enabled on Gmail account
# Ensure "Less secure app access" is not needed (use App Password instead)

# Test SMTP manually
openssl s_client -starttls smtp -connect smtp.gmail.com:587
```

## Performance Issues

### High CPU Usage
```bash
# Check which service is using CPU
top -p $(pgrep -d',' revolt)

# Check for memory leaks
ps aux --sort=-%mem | grep revolt

# Monitor resource usage
htop
```

### High Memory Usage
```bash
# Check memory usage per service
ps aux --sort=-%mem | grep revolt

# Check Docker container usage
docker stats

# Check system memory
free -h
```

### Slow Response Times
```bash
# Check nginx access logs
tail -f /var/log/nginx/access.log

# Check service logs for slow queries
grep -i "slow\|timeout" /root/stoatchat/*.log

# Test response times
time curl https://api.st.vish.gg/
```
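`time curl` only gives a total; curl's `-w` write-out format can break that total into phases to show whether DNS, TLS, or the backend itself is slow (a sketch):

```bash
# Per-phase timing: name lookup, TLS handshake, time to first byte, total.
fmt='dns=%{time_namelookup}s tls=%{time_appconnect}s ttfb=%{time_starttransfer}s total=%{time_total}s\n'
curl -s -o /dev/null -w "$fmt" https://api.st.vish.gg/ || true
```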

## Log Analysis

### Service Logs Location
```bash
cd /root/stoatchat

# Main service logs
tail -f api.log     # API server
tail -f events.log  # WebSocket events
tail -f files.log   # File server
tail -f proxy.log   # Media proxy
tail -f gifbox.log  # GIF service

# System logs
journalctl -u nginx -f
docker-compose logs -f
```

### Common Log Patterns
```bash
# Database connection errors
grep -i "connection.*refused\|timeout" *.log

# Authentication errors
grep -i "auth\|login\|token" *.log

# File upload errors
grep -i "upload\|s3\|minio" *.log

# WebSocket errors
grep -i "websocket\|upgrade" *.log
```

## Recovery Procedures

### Complete Service Restart
```bash
cd /root/stoatchat

# Stop all Stoatchat services
pkill -f revolt-

# Restart infrastructure
docker-compose down
docker-compose up -d

# Wait for services to be ready
sleep 30

# Start Stoatchat services
nohup ./target/debug/revolt-delta > api.log 2>&1 &
nohup ./target/debug/revolt-bonfire > events.log 2>&1 &
nohup ./target/debug/revolt-autumn > files.log 2>&1 &
nohup ./target/debug/revolt-january > proxy.log 2>&1 &
nohup ./target/debug/revolt-gifbox > gifbox.log 2>&1 &

# Restart nginx
systemctl restart nginx
```

### Emergency Rebuild
```bash
cd /root/stoatchat

# Stop services
pkill -f revolt-

# Clean build
cargo clean
cargo build

# Restart everything
docker-compose down && docker-compose up -d
sleep 30

# Start services with new binaries
./start-services.sh # If you created this script
```

### Database Recovery
```bash
# If MongoDB is corrupted
docker-compose stop database
docker volume rm stoatchat_mongodb_data # WARNING: This deletes data
docker-compose up -d database

# Restore from backup if available
# mongorestore --uri="mongodb://127.0.0.1:27017/revolt" /path/to/backup
# or, for a dump taken with `mongodump --archive`:
# docker exec -i stoatchat-mongodb mongorestore --archive --drop < /path/to/backup/mongodb.archive
```

## Monitoring Commands

### Health Check Script
```bash
#!/bin/bash
# Save as /root/stoatchat/health-check.sh

echo "=== Stoatchat Health Check ==="
echo "Date: $(date)"
echo

echo "=== Process Status ==="
ps aux | grep revolt | grep -v grep

echo -e "\n=== Port Status ==="
ss -tlnp | grep -E "(14702|14703|14704|14705|14706|7880)"

echo -e "\n=== Docker Services ==="
cd /root/stoatchat && docker-compose ps

echo -e "\n=== Nginx Status ==="
systemctl is-active nginx

echo -e "\n=== Endpoint Tests ==="
for endpoint in api files proxy voice; do
  echo -n "$endpoint.st.vish.gg: "
  curl -s -o /dev/null -w "%{http_code}" https://$endpoint.st.vish.gg/ || echo "FAIL"
done

echo -e "\n=== Disk Usage ==="
df -h /root/stoatchat

echo -e "\n=== Memory Usage ==="
free -h
```

### Automated Monitoring
```bash
# Add to crontab for regular health checks
# crontab -e
# */5 * * * * /root/stoatchat/health-check.sh >> /var/log/stoatchat-health.log 2>&1
```

## Contact Information

For additional support:
- Repository: https://github.com/stoatchat/stoatchat
- Documentation: Check /root/stoatchat/docs/
- Logs: /root/stoatchat/*.log
- Configuration: /root/stoatchat/Revolt.overrides.toml

77
hosts/vms/seattle/stoatchat/docker-compose.yml
Normal file
@@ -0,0 +1,77 @@

services:
  # Redis
  redis:
    image: eqalpha/keydb
    ports:
      - "6380:6379"

  # MongoDB
  database:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./.data/db:/data/db
    ulimits:
      nofile:
        soft: 65536
        hard: 65536

  # MinIO
  minio:
    image: minio/minio
    command: server /data
    environment:
      MINIO_ROOT_USER: REDACTED_MINIO_CRED
      MINIO_ROOT_PASSWORD: "REDACTED_PASSWORD"
    volumes:
      - ./.data/minio:/data
    ports:
      - "14009:9000"
      - "14010:9001"
    restart: unless-stopped

  # Create buckets for minio.
  createbuckets:
    image: minio/mc
    depends_on:
      - minio
    entrypoint: >
      /bin/sh -c "while ! /usr/bin/mc ready minio; do
      /usr/bin/mc alias set minio http://minio:9000 REDACTED_MINIO_CRED REDACTED_MINIO_CRED;
      echo 'Waiting minio...' && sleep 1;
      done; /usr/bin/mc mb minio/revolt-uploads; exit 0;"

  # Rabbit
  rabbit:
    image: rabbitmq:4-management
    environment:
      RABBITMQ_DEFAULT_USER: rabbituser
      RABBITMQ_DEFAULT_PASS: "REDACTED_PASSWORD"
    volumes:
      - ./.data/rabbit:/var/lib/rabbitmq
      #- ./rabbit_plugins:/opt/rabbitmq/plugins/
      #- ./rabbit_enabled_plugins:/etc/rabbitmq/enabled_plugins
      # uncomment this if you need to enable other plugins
    ports:
      - "5672:5672"
      - "15672:15672" # management UI, for development

  # Mock SMTP server
  maildev:
    image: maildev/maildev
    ports:
      - "14025:25"
      - "14080:8080"
    environment:
      MAILDEV_SMTP_PORT: 25
      MAILDEV_WEB_PORT: 8080
      MAILDEV_INCOMING_USER: smtp
      MAILDEV_INCOMING_PASS: "REDACTED_PASSWORD"

  livekit:
    image: livekit/livekit-server:v1.9.9
    command: --config /etc/livekit.yml
    network_mode: "host"
    volumes:
      - ./livekit.yml:/etc/livekit.yml

166
hosts/vms/seattle/stoatchat/nginx-config.conf
Normal file
@@ -0,0 +1,166 @@

# Main app - st.vish.gg
server {
    listen 80;
    server_name st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name st.vish.gg;

    ssl_certificate /etc/nginx/ssl/st.vish.gg.crt;
    ssl_certificate_key /etc/nginx/ssl/st.vish.gg.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        # This would proxy to the frontend app when it's set up
        # For now, return a placeholder
        return 200 "Stoatchat Frontend - Coming Soon";
        add_header Content-Type text/plain;
    }
}

# API - api.st.vish.gg
server {
    listen 80;
    server_name api.st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name api.st.vish.gg;

    ssl_certificate /etc/letsencrypt/live/api.st.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/api.st.vish.gg/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:14702;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
    }
}

# Events WebSocket - events.st.vish.gg
server {
    listen 80;
    server_name events.st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name events.st.vish.gg;

    ssl_certificate /etc/letsencrypt/live/events.st.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/events.st.vish.gg/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:14703;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

# Files - files.st.vish.gg
server {
    listen 80;
    server_name files.st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name files.st.vish.gg;

    ssl_certificate /etc/letsencrypt/live/files.st.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.st.vish.gg/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:14704;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Proxy - proxy.st.vish.gg
server {
    listen 80;
    server_name proxy.st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name proxy.st.vish.gg;

    ssl_certificate /etc/letsencrypt/live/proxy.st.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/proxy.st.vish.gg/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:14705;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Voice/LiveKit - voice.st.vish.gg
server {
    listen 80;
    server_name voice.st.vish.gg;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    server_name voice.st.vish.gg;

    ssl_certificate /etc/letsencrypt/live/voice.st.vish.gg/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/voice.st.vish.gg/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA384;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:7880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;
    }
}

19
hosts/vms/seattle/surmai/docker-compose.yml
Normal file
@@ -0,0 +1,19 @@

services:
  surmai:
    image: ghcr.io/rohitkumbhar/surmai:main
    container_name: surmai
    restart: unless-stopped
    environment:
      - SURMAI_ADMIN_EMAIL=admin@surmai.local
      - SURMAI_ADMIN_PASSWORD="REDACTED_PASSWORD"
      - PB_DATA_DIRECTORY=/pb_data
    volumes:
      - /opt/surmai/data:/pb_data
    ports:
      - "100.82.197.124:9497:8080"
    healthcheck:
      test: ["CMD", "nc", "-z", "localhost", "8080"]
      interval: 30s
      timeout: 10s
      retries: 5
      start_period: 10s

51
hosts/vms/seattle/vllm.yaml
Normal file
@@ -0,0 +1,51 @@

# vLLM - High-performance LLM inference server
# OpenAI-compatible API for running local language models
# Port: 8000
#
# This configuration runs vLLM in CPU-only mode since seattle doesn't have a GPU.
# For better performance, consider using a machine with a CUDA-compatible GPU.

services:
  vllm-qwen-1.5b:
    image: vllm/vllm-openai:latest
    container_name: vllm-qwen-1.5b
    ports:
      - "8000:8000"
    environment:
      # Force CPU mode - disable all CUDA detection
      - CUDA_VISIBLE_DEVICES=""
      - VLLM_DEVICE=cpu
      - VLLM_LOGGING_LEVEL=INFO
      # Prevent CUDA/GPU detection attempts
      - VLLM_USE_MODELSCOPE=False
    command:
      - --model
      - Qwen/Qwen2.5-1.5B-Instruct
      - --device
      - cpu
      - --max-model-len
      - "4096"
      - --dtype
      - float16
      - --trust-remote-code
      - --host
      - "0.0.0.0"
      - --port
      - "8000"
    restart: unless-stopped
    volumes:
      # Cache model downloads to avoid re-downloading
      - vllm-cache:/root/.cache/huggingface
    # Resource limits for CPU mode (adjust based on server capacity)
    deploy:
      resources:
        limits:
          cpus: '8'
          memory: 16G
        reservations:
          cpus: '4'
          memory: 8G

volumes:
  vllm-cache:
    name: vllm-cache

182
hosts/vms/seattle/wallabag/README.md
Normal file
@@ -0,0 +1,182 @@

# Wallabag - Read-Later Service

## 📋 Overview

Wallabag is a self-hosted read-later application that allows you to save articles, web pages, and other content to read later. It's similar to Pocket or Instapaper but completely self-hosted.

## 🔧 Service Details

| Property | Value |
|----------|-------|
| **Container Name** | `wallabag` |
| **Image** | `wallabag/wallabag:latest` |
| **Internal Port** | 80 |
| **Host Port** | 127.0.0.1:8880 |
| **Domain** | `wb.vish.gg` |
| **Database** | SQLite (embedded) |

## 🌐 Network Access

- **Public URL**: `https://wb.vish.gg`
- **Local Access**: `http://127.0.0.1:8880`
- **Reverse Proxy**: Nginx configuration in `/etc/nginx/sites-enabled/wallabag`

## 📁 Directory Structure

```
/opt/wallabag/
├── docker-compose.yml   # Service configuration
├── data/                # Application data
│   ├── db/              # SQLite database
│   └── assets/          # User uploads
└── images/              # Article images
```

## 🚀 Management Commands

### Docker Operations
```bash
# Navigate to service directory
cd /opt/wallabag/

# Start service
docker-compose up -d

# Stop service
docker-compose down

# Restart service
docker-compose restart

# View logs
docker-compose logs -f

# Update service
docker-compose pull
docker-compose up -d
```

### Container Management
```bash
# Check container status
docker ps | grep wallabag

# Execute commands in container
docker exec -it wallabag bash

# View container logs
docker logs wallabag -f

# Check container health
docker inspect wallabag | grep -A 10 Health
```

## ⚙️ Configuration

### Environment Variables
- **Database**: SQLite (no external database required)
- **Domain**: `https://wb.vish.gg`
- **Registration**: Disabled (`FOSUSER_REGISTRATION=false`)
- **Email Confirmation**: Disabled (`FOSUSER_CONFIRMATION=false`)

### Volume Mounts
- **Data**: `/opt/wallabag/data` → `/var/www/wallabag/data`
- **Images**: `/opt/wallabag/images` → `/var/www/wallabag/web/assets/images`

### Health Check
- **Endpoint**: `http://localhost:80`
- **Interval**: 30 seconds
- **Timeout**: 10 seconds
- **Retries**: 3

## 🔒 Security Features

- **Local Binding**: Only accessible via localhost (127.0.0.1:8880)
- **Nginx Proxy**: SSL termination and security headers
- **Registration Disabled**: Prevents unauthorized account creation
- **Data Isolation**: Runs in isolated Docker container

## 📱 Usage

### Web Interface
1. Access via `https://wb.vish.gg`
2. Log in with configured credentials
3. Use browser extension or bookmarklet to save articles
4. Organize with tags and categories
5. Export/import data as needed

### Browser Extensions
- Available for Chrome, Firefox, and other browsers
- Allows one-click saving of web pages
- Automatic tagging and categorization

## 🔧 Maintenance

### Backup
```bash
# Backup data directory
tar -czf wallabag-backup-$(date +%Y%m%d).tar.gz /opt/wallabag/data/

# Backup database only
cp /opt/wallabag/data/db/wallabag.sqlite /backup/location/
```
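Copying a SQLite file while the container is writing to it can produce a torn copy; `sqlite3`'s `.backup` command takes a consistent snapshot instead. A sketch, assuming `sqlite3` is installed on the host and the path from the layout above (the snapshot path is an example):

```bash
# Take a consistent snapshot of the live database, then verify it.
DB=/opt/wallabag/data/db/wallabag.sqlite
SNAP="/tmp/wallabag-$(date +%Y%m%d).sqlite"
if [ -f "$DB" ]; then
  sqlite3 "$DB" ".backup '$SNAP'"
  sqlite3 "$SNAP" "PRAGMA integrity_check;"
fi
```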

### Updates
```bash
cd /opt/wallabag/
docker-compose pull
docker-compose up -d
```

### Database Maintenance
```bash
# Access SQLite database
docker exec -it wallabag sqlite3 /var/www/wallabag/data/db/wallabag.sqlite

# Check database size
du -sh /opt/wallabag/data/db/wallabag.sqlite
```

## 🐛 Troubleshooting

### Common Issues
```bash
# Container won't start
docker-compose logs wallabag

# Permission issues
sudo chown -R 33:33 /opt/wallabag/data/
sudo chmod -R 755 /opt/wallabag/data/

# Database corruption
# Restore from backup or recreate container

# Nginx proxy issues
sudo nginx -t
sudo systemctl reload nginx
```

### Health Check
```bash
# Test local endpoint
curl -I http://127.0.0.1:8880

# Test public endpoint
curl -I https://wb.vish.gg

# Check container health
docker inspect wallabag | grep -A 5 '"Health"'
```

## 🔗 Related Services

- **Nginx**: Reverse proxy with SSL termination
- **Let's Encrypt**: SSL certificate management
- **Docker**: Container runtime

## 📚 External Resources

- [Wallabag Documentation](https://doc.wallabag.org/)
- [Docker Hub](https://hub.docker.com/r/wallabag/wallabag)
- [GitHub Repository](https://github.com/wallabag/wallabag)
- [Browser Extensions](https://wallabag.org/en/download)

30
hosts/vms/seattle/wallabag/docker-compose.yml
Normal file
@@ -0,0 +1,30 @@

version: '3.8'
services:
  wallabag:
    image: wallabag/wallabag:latest
    container_name: wallabag
    restart: unless-stopped
    environment:
      - SYMFONY__ENV__DATABASE_DRIVER=pdo_sqlite
      - SYMFONY__ENV__DATABASE_HOST=127.0.0.1
      - SYMFONY__ENV__DATABASE_PORT=~
      - SYMFONY__ENV__DATABASE_NAME=symfony
      - SYMFONY__ENV__DATABASE_USER=~
      - SYMFONY__ENV__DATABASE_PASSWORD=~
      - SYMFONY__ENV__DATABASE_CHARSET=utf8
      - SYMFONY__ENV__DATABASE_TABLE_PREFIX=wallabag_
      - SYMFONY__ENV__DATABASE_PATH=/var/www/wallabag/data/db/wallabag.sqlite
      - SYMFONY__ENV__DOMAIN_NAME=https://wb.vish.gg
      - SYMFONY__ENV__SERVER_NAME="Wallabag"
      - SYMFONY__ENV__FOSUSER_REGISTRATION=false
      - SYMFONY__ENV__FOSUSER_CONFIRMATION=false
    volumes:
      - /opt/wallabag/data:/var/www/wallabag/data
      - /opt/wallabag/images:/var/www/wallabag/web/assets/images
    ports:
      - "127.0.0.1:8880:80"
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80"]
      interval: 30s
      timeout: 10s
      retries: 3