# Api

🟢 Other Service

## 📋 Service Overview
| Property | Value |
|---|---|
| Service Name | api |
| Host | Atlantis |
| Category | Other |
| Difficulty | 🟢 |
| Docker Image | ghcr.io/getumbrel/llama-gpt-api:latest |
| Compose File | Atlantis/llamagpt.yml |
| Directory | Atlantis |
## 🎯 Purpose

`api` runs the LlamaGPT API server (`llama-gpt-api`), which serves a local Llama 2 7B chat model (GGML, q4_0 quantization) for the homelab infrastructure. Other services on the same Docker network can use it as a self-hosted LLM backend.
## 🚀 Quick Start

### Prerequisites

- Docker and Docker Compose installed
- Basic familiarity with Docker Compose and the service configuration
- Access to the host system (Atlantis)
### Deployment

```bash
# Navigate to the service directory
cd Atlantis

# Start the service (the compose file is llamagpt.yml, not docker-compose.yml)
docker-compose -f llamagpt.yml up -d

# Check service status
docker-compose -f llamagpt.yml ps

# View logs
docker-compose -f llamagpt.yml logs -f api
```
## 🔧 Configuration

### Docker Compose Configuration

```yaml
cap_add:
  - IPC_LOCK
container_name: LlamaGPT-api
cpu_shares: 768
environment:
  MODEL: /models/llama-2-7b-chat.bin
  MODEL_DOWNLOAD_URL: https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GGML/resolve/main/nous-hermes-llama-2-7b.ggmlv3.q4_0.bin
  USE_MLOCK: 1
hostname: llamagpt-api
image: ghcr.io/getumbrel/llama-gpt-api:latest
mem_limit: 8g
restart: on-failure:5
security_opt:
  - no-new-privileges:true
```
### Environment Variables

| Variable | Value | Description |
|---|---|---|
| MODEL | /models/llama-2-7b-chat.bin | Path to the model file inside the container |
| MODEL_DOWNLOAD_URL | https://huggingface.co/TheBloke/Nous-Hermes-Llama-2-7B-GGML/resolve/main/nous-hermes-llama-2-7b.ggmlv3.q4_0.bin | URL the model is fetched from if not already present |
| USE_MLOCK | 1 | Lock the model in RAM via mlock (requires the `IPC_LOCK` capability) |
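To run a different GGML model, these variables can be pointed at another file and download URL. A hypothetical compose-snippet sketch; the model file name and URL below are illustrative placeholders, not real artifacts:

```yaml
# Hypothetical override: swap in a different GGML model.
# The file name and URL are placeholders only.
environment:
  MODEL: /models/other-model.q4_0.bin
  MODEL_DOWNLOAD_URL: https://example.com/other-model.q4_0.bin
  USE_MLOCK: 1
```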
### Port Mappings

No ports exposed.

### Volume Mappings

No volumes mounted. Note that without a volume for `/models`, the downloaded model lives in the container's writable layer and is re-downloaded if the container is recreated.
## 🌐 Access Information

This service does not publish any ports to the host, so it has no externally reachable web interface. Other containers on the same Docker Compose network can still reach it by its service name (`api`).
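If the API needs to be reachable from the host, a port mapping could be added to the compose file. A hypothetical sketch, assuming the container listens on port 8000 internally; verify the actual internal port against the llama-gpt-api image documentation before using it:

```yaml
# Hypothetical addition under the api service in Atlantis/llamagpt.yml.
# 8000 is an assumed internal port, 3001 an arbitrary host port.
ports:
  - "3001:8000"
```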
## 🔒 Security Considerations

- ✅ Security options configured (`no-new-privileges:true`)
- ✅ Only the `IPC_LOCK` capability is added, to support `USE_MLOCK=1`
- ⚠️ Consider running as a non-root user
## 📊 Resource Requirements

### Configured Limits

- Memory limit: 8 GB (`mem_limit: 8g`)
- CPU shares: 768 (a relative scheduling weight, not a hard limit)

### Recommended Resources

- Minimum RAM: ~6 GB (a 7B q4_0 model alone occupies roughly 4 GB)
- Recommended RAM: 8 GB+ (matching the configured `mem_limit`)
- CPU: 1 core minimum; more cores speed up inference
- Storage: ~4 GB for the downloaded model, plus image layers
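The memory needs of a 7B GGML model can be sanity-checked with a back-of-the-envelope calculation. A minimal sketch, assuming q4_0 quantization averages roughly 4.5 bits per weight (4-bit weights plus per-block scale factors) and allowing ~1.5 GiB of runtime overhead for the KV cache and buffers; these constants are ballpark assumptions, not values from the compose file:

```python
def estimate_q4_0_ram_gib(n_params_billion: float,
                          bits_per_weight: float = 4.5,
                          overhead_gib: float = 1.5) -> float:
    """Rough RAM estimate for running a GGML q4_0 model.

    bits_per_weight and overhead_gib are ballpark assumptions,
    not measured values.
    """
    weights_gib = n_params_billion * 1e9 * bits_per_weight / 8 / (1024 ** 3)
    return weights_gib + overhead_gib

# A 7B model lands around 5 GiB, comfortably inside the 8 GB mem_limit.
print(round(estimate_q4_0_ram_gib(7), 1))  # → 5.2
```

This is consistent with the 8 GB `mem_limit` set in the compose file, which leaves headroom for request handling.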
### Resource Monitoring

Monitor resource usage with:

```bash
docker stats
```
🔍 Health Monitoring
⚠️ No health check configured Consider adding a health check:
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:PORT/health"]
interval: 30s
timeout: 10s
retries: 3
### Manual Health Checks

```bash
# Check container health
docker inspect --format='{{.State.Health.Status}}' LlamaGPT-api

# View health check logs
docker inspect --format='{{range .State.Health.Log}}{{.Output}}{{end}}' LlamaGPT-api
```
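When scripting health monitoring, parsing the full `docker inspect` JSON is often more robust than Go templates. A minimal sketch operating on a hard-coded sample whose shape mirrors Docker's `State.Health` schema (the sample data itself is made up for illustration):

```python
import json

# Sample shaped like `docker inspect <container>` output (abridged).
sample = json.dumps([{
    "Name": "/LlamaGPT-api",
    "State": {"Health": {"Status": "healthy", "FailingStreak": 0,
                         "Log": [{"ExitCode": 0, "Output": "ok\n"}]}}
}])

def health_status(inspect_json: str) -> str:
    """Return the health status from `docker inspect` JSON output."""
    data = json.loads(inspect_json)
    # docker inspect returns a list with one object per container
    state = data[0].get("State", {})
    return state.get("Health", {}).get("Status", "none")

print(health_status(sample))  # → healthy
```

In practice the JSON would come from `docker inspect LlamaGPT-api` via a subprocess call rather than a hard-coded string.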
## 🚨 Troubleshooting

### Common Issues

**Service won't start**

- Check Docker logs: `docker-compose logs api`
- Verify port availability: `netstat -tulpn | grep PORT`
- Check file permissions on mounted volumes

**Can't access web interface**

- Verify the service is running: `docker-compose ps`
- Check firewall settings
- Confirm correct port mapping

**Performance issues**

- Monitor resource usage: `docker stats`
- Check available disk space: `df -h`
- Review service logs for errors
### Useful Commands

```bash
# Run these from the Atlantis directory; add -f llamagpt.yml if required

# Check service status
docker-compose ps

# View real-time logs
docker-compose logs -f api

# Restart service
docker-compose restart api

# Update service
docker-compose pull api
docker-compose up -d api

# Access service shell
docker-compose exec api /bin/bash
# or, if bash is not available in the image
docker-compose exec api /bin/sh
```
## 📚 Additional Resources

- Official Documentation: check the upstream LlamaGPT project (`getumbrel/llama-gpt` on GitHub)
- Container Registry: ghcr.io/getumbrel/llama-gpt-api:latest (GitHub Container Registry, not Docker Hub)
- Community Forums: search for community discussions and solutions
- GitHub Issues: check the project's GitHub repository for known issues
## 🔗 Related Services

See the other services in the "Other" category on Atlantis.
---

*This documentation is auto-generated from the Docker Compose configuration. For the most up-to-date information, refer to the official documentation and the actual compose file.*

**Last Updated:** 2025-11-17
**Configuration Source:** Atlantis/llamagpt.yml