# Perplexica Integration Status

Last Updated: 2026-02-16 13:58 UTC

## Current Status

🔴 NOT WORKING - configured, but the user reports the web UI is not functioning properly
## Configuration

- Web UI: http://192.168.0.210:4785
- Container: perplexica (itzcrazykns1337/perplexica:latest)
- Data Volume: perplexica-data
### LLM Provider: Groq (Primary)
- Model: llama-3.3-70b-versatile
- API: https://api.groq.com/openai/v1
- Speed: 0.4 seconds per response
- Rate Limit: 30 req/min (free tier)
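To rule Perplexica out when debugging, the Groq side can be exercised directly through its OpenAI-compatible endpoint. A minimal sketch, assuming the API key is exported as `GROQ_API_KEY` (that variable name is an assumption, not from this config):

```shell
# Sanity-check the Groq API and model independently of Perplexica.
# Assumes GROQ_API_KEY is set in the environment.
curl -s https://api.groq.com/openai/v1/chat/completions \
  -H "Authorization: Bearer $GROQ_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "ping"}],
        "max_tokens": 8
      }'
```

If this returns a completion in well under a second but the web UI still fails, the problem is on the Perplexica side rather than the provider.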
### LLM Provider: Seattle Ollama (Fallback)
- Host: seattle (100.82.197.124:11434 via Tailscale)
- Chat Models:
- tinyllama:1.1b (12s responses)
- qwen2.5:1.5b (10min responses - not recommended)
- Embedding Model: nomic-embed-text:latest (used by default)
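The fallback path can be checked the same way. A sketch using the Tailscale address above (model names and timings are from this doc; whether the host is reachable from where you run this is an assumption):

```shell
# Confirm the Seattle Ollama instance is reachable over Tailscale
# and list the models it has pulled.
curl -s http://100.82.197.124:11434/api/tags

# Rough latency check against the recommended chat model.
time curl -s http://100.82.197.124:11434/api/generate \
  -d '{"model": "tinyllama:1.1b", "prompt": "ping", "stream": false}'
```

The `time` output should land in the ~12 s range recorded in the timeline below if nothing has regressed.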
### Search Engine: SearXNG
- URL: http://localhost:8080 (inside container)
- Status: ✅ Working (returns 31+ results)
## Performance Timeline
| Date | Configuration | Result |
|---|---|---|
| 2026-02-16 13:37 | Qwen2.5:1.5b on Seattle CPU | ❌ 10 minutes per query |
| 2026-02-16 13:51 | TinyLlama:1.1b on Seattle CPU | ⚠️ 12 seconds per query |
| 2026-02-16 13:58 | Groq Llama 3.3 70B | ❓ 0.4s API response, but web UI issues |
## Issues

- Initial: CPU-only inference on Seattle was too slow for interactive use
- Current: Groq responds quickly via the API, but the web UI is not working (exact failure not yet captured)
## Related Documentation
## Next Session TODO
- Test web UI and capture exact error
- Check browser console logs
- Check Perplexica container logs during search
- Verify Groq API calls in browser network tab
- Consider alternative LLM providers if needed
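The first three TODO items can be sketched as a pair of commands, assuming the container is named `perplexica` as in the configuration above (the `format=json` query only works if the SearXNG instance has JSON output enabled; that is an assumption):

```shell
# Tail container logs while reproducing a search in the web UI,
# to capture the exact error at the moment it occurs.
docker logs -f perplexica

# From inside the container, confirm SearXNG answers a query on the
# localhost:8080 endpoint Perplexica is configured to use.
docker exec perplexica curl -s "http://localhost:8080/search?q=test&format=json"
```

Running the log tail in one terminal and triggering a search from the browser in another usually pins down whether the failure is in the search step, the LLM call, or the UI itself.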
## Files Modified

- /hosts/vms/homelab-vm/perplexica.yaml - Docker Compose (env vars)
- Docker volume perplexica-data: /home/perplexica/data/config.json - model configuration (not git-tracked)
- /hosts/vms/seattle/ollama.yaml - Ollama deployment