# Perplexica Integration Status

**Last Updated:** 2026-02-16 13:58 UTC

## Current Status

🔴 **NOT WORKING** - configured, but the user reports the web UI is not functioning properly.

## Configuration

**LLM Provider (primary):** Groq

**LLM Provider (fallback):** Seattle Ollama

- Host: seattle (`100.82.197.124:11434` via Tailscale)
- Chat models:
  - `tinyllama:1.1b` (~12 s responses)
  - `qwen2.5:1.5b` (~10 min responses; not recommended)
- Embedding model: `nomic-embed-text:latest` (used by default)

**Search Engine:** SearXNG
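The model selections above are stored in `config.json` inside the `perplexica-data` volume. The actual schema depends on the Perplexica version and is not shown in this document, so the fragment below is only an illustrative sketch; every key name here is an assumption:

```json
{
  "chatModelProvider": "groq",
  "chatModel": "llama-3.3-70b-versatile",
  "fallbackProvider": "ollama",
  "ollamaApiUrl": "http://100.82.197.124:11434",
  "embeddingModel": "nomic-embed-text:latest"
}
```

Because the file lives inside a Docker volume rather than the repo, it is not git-tracked; any change must be made in the running volume and survives only as long as the volume does.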

## Performance Timeline

| Date | Configuration | Result |
|------|---------------|--------|
| 2026-02-16 13:37 | `qwen2.5:1.5b` on Seattle (CPU) | 10 minutes per query |
| 2026-02-16 13:51 | `tinyllama:1.1b` on Seattle (CPU) | ⚠️ 12 seconds per query |
| 2026-02-16 13:58 | Groq Llama 3.3 70B | 0.4 s API response, but web UI issues |

## Issues

1. **Initial:** CPU-only inference on Seattle was too slow (10 minutes per query).
2. **Current:** Groq is configured and responds quickly via the API, but the web UI is not working (details unclear).

## Next Session TODO

1. Test the web UI and capture the exact error
2. Check browser console logs
3. Check Perplexica container logs during a search
4. Verify Groq API calls in the browser network tab
5. Consider alternative LLM providers if needed
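TODO items 3 and 4 can be started from the command line. A minimal sketch follows; the container name `perplexica` and the `GROQ_API_KEY` environment variable are assumptions, and the commands are guarded so a missing tool or key does not abort the script:

```shell
#!/bin/sh
# Groq's OpenAI-compatible API base URL (public, documented endpoint).
GROQ_BASE="https://api.groq.com/openai/v1"

# TODO 3: pull recent errors/warnings from the Perplexica container.
# "perplexica" is an assumed container name; adjust to the real one.
docker logs --since 15m perplexica 2>&1 | grep -iE 'error|warn' | tail -n 20 || true

# TODO 4 (precheck): confirm the Groq key works outside the web UI.
# A working key should print "Groq /models -> HTTP 200".
curl -s -o /dev/null -w 'Groq /models -> HTTP %{http_code}\n' \
  -H "Authorization: Bearer ${GROQ_API_KEY}" "${GROQ_BASE}/models" || true
```

If the API check returns 200 while the web UI still fails, the problem is likely in Perplexica's frontend-to-backend wiring rather than the Groq credentials, which narrows the browser-console investigation.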

## Files Modified

- `/hosts/vms/homelab-vm/perplexica.yaml` - Docker Compose (env vars)
- Docker volume `perplexica-data:/home/perplexica/data/config.json` - model configuration (not git-tracked)
- `/hosts/vms/seattle/ollama.yaml` - Ollama deployment