# Perplexica Integration Status

**Last Updated**: 2026-02-16 13:58 UTC

## Current Status

🔴 **NOT WORKING** - Configured, but the user reports the web UI is not functioning properly
## Configuration

- **Web UI**: http://192.168.0.210:4785
- **Container**: `perplexica` (itzcrazykns1337/perplexica:latest)
- **Data Volume**: `perplexica-data`
### LLM Provider: Groq (Primary)

- **Model**: llama-3.3-70b-versatile
- **API**: https://api.groq.com/openai/v1
- **Speed**: 0.4 seconds per response
- **Rate Limit**: 30 req/min (free tier)
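Groq's 30 req/min free-tier ceiling can be enforced client-side so a burst of searches does not trip rate-limit errors. A minimal sketch (a hypothetical helper, not part of Perplexica or any Groq SDK) of a rolling-window limiter with an injectable clock:

```python
import time
from collections import deque

class MinuteRateLimiter:
    """Allow at most `limit` calls per rolling 60-second window."""

    def __init__(self, limit=30, clock=time.monotonic):
        self.limit = limit
        self.clock = clock     # injectable so tests need not sleep
        self.calls = deque()   # timestamps of calls inside the window

    def allow(self):
        """Record and permit a call if under the limit, else refuse."""
        now = self.clock()
        # Drop timestamps that have aged out of the 60 s window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        if len(self.calls) < self.limit:
            self.calls.append(now)
            return True
        return False
```

A caller would sleep or queue the request when `allow()` returns False instead of sending it to the API.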
### LLM Provider: Seattle Ollama (Fallback)

- **Host**: seattle (100.82.197.124:11434 via Tailscale)
- **Chat Models**:
  - tinyllama:1.1b (12 s responses)
  - qwen2.5:1.5b (10 min responses - not recommended)
- **Embedding Model**: nomic-embed-text:latest (used by default)
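The primary/fallback split above (Groq first, Seattle Ollama second) can be sketched as a small wrapper; the function name and provider callables here are illustrative, not Perplexica's actual internals:

```python
def ask_with_fallback(prompt, providers):
    """Try each (name, call) provider in order; return the first success.

    `providers` is a list of (name, fn) pairs, e.g.
    [("groq", groq_fn), ("seattle-ollama", ollama_fn)], where each fn
    takes a prompt string and returns a reply or raises on failure.
    """
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # real code would catch narrower errors
            errors.append((name, exc))
    raise RuntimeError(f"all providers failed: {errors}")
```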
### Search Engine: SearXNG

- **URL**: http://localhost:8080 (inside the container)
- **Status**: ✅ Working (returns 31+ results)
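The "31+ results" figure can be re-checked against SearXNG's JSON API (`/search?q=…&format=json`, which must be enabled in the SearXNG settings). A sketch of the check, with a hardcoded sample payload standing in for the actual HTTP call:

```python
import json

def count_results(raw):
    """Return (count, titles) from a SearXNG JSON search response."""
    data = json.loads(raw)
    results = data.get("results", [])
    return len(results), [r.get("title", "") for r in results]

# Trimmed sample shaped like SearXNG's JSON output.
sample = json.dumps({
    "query": "nixos",
    "results": [
        {"title": "NixOS", "url": "https://nixos.org", "content": "..."},
        {"title": "NixOS Wiki", "url": "https://wiki.nixos.org", "content": "..."},
    ],
})
```

Against the live instance, fetch `http://localhost:8080/search?q=test&format=json` from inside the container and assert the count is nonzero.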
## Performance Timeline

| Date | Configuration | Result |
|------|---------------|--------|
| 2026-02-16 13:37 | Qwen2.5:1.5b on Seattle CPU | ❌ 10 minutes per query |
| 2026-02-16 13:51 | TinyLlama:1.1b on Seattle CPU | ⚠️ 12 seconds per query |
| 2026-02-16 13:58 | Groq Llama 3.3 70B | ❓ 0.4 s API response, but web UI issues |
## Issues

1. **Initial**: CPU-only inference on Seattle was too slow
2. **Current**: Groq is configured, but the web UI is not working (details unclear)
## Related Documentation

- [Setup Guide](./docs/guides/PERPLEXICA_SEATTLE_INTEGRATION.md)
- [Troubleshooting](./docs/guides/PERPLEXICA_TROUBLESHOOTING.md)
- [Ollama Setup](./hosts/vms/seattle/README-ollama.md)
## Next Session TODO

1. Test the web UI and capture the exact error
2. Check browser console logs
3. Check Perplexica container logs during a search
4. Verify Groq API calls in the browser network tab
5. Consider alternative LLM providers if needed
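For verifying the Groq API calls, it helps to know what a well-formed request looks like, since Groq speaks the OpenAI chat-completions format. A sketch that only builds the request body (sending it with curl or an HTTP client is the actual test); the helper name is illustrative:

```python
import json

def groq_chat_body(prompt, model="llama-3.3-70b-versatile"):
    """Build a minimal OpenAI-compatible chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# POST this as JSON to https://api.groq.com/openai/v1/chat/completions
# with an "Authorization: Bearer <GROQ_API_KEY>" header.
body = json.dumps(groq_chat_body("ping"))
```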
## Files Modified

- `/hosts/vms/homelab-vm/perplexica.yaml` - Docker Compose (env vars)
- Docker volume `perplexica-data:/home/perplexica/data/config.json` - model configuration (not git-tracked)
- `/hosts/vms/seattle/ollama.yaml` - Ollama deployment