# Jellyfin on Olares

## Service Overview

| Property | Value |
|----------|-------|
| **Host** | olares (192.168.0.145) |
| **Platform** | Olares Marketplace (K3s) |
| **Namespace** | `jellyfin-vishinator` |
| **Image** | `docker.io/beclab/jellyfin-jellyfin:10.11.6` |
| **LAN Access** | `http://192.168.0.145:30096` |
| **Olares Proxy** | `https://7e89d2a1.vishinator.olares.com` |
| **GPU** | NVIDIA RTX 5090 Max-Q (24GB) — hardware transcoding |

## Purpose

Jellyfin media server on Olares with NVIDIA GPU hardware transcoding and NFS media from Atlantis. It replaces a previous Plex attempt (Plex had issues with Olares proxy auth and with indirect connections from desktop apps).

## Architecture

```
Atlantis NAS (192.168.0.200)
└─ NFS: /volume1/data/media
   └─ mounted on olares at /mnt/atlantis_media (fstab)
      └─ hostPath volume in Jellyfin pod at /media (read-only)

Olares K3s cluster
└─ jellyfin-vishinator namespace
   └─ Deployment: jellyfin (2 containers)
      ├─ jellyfin (main app, port 8096)
      └─ olares-envoy-sidecar (Olares proxy)
```

## Deployment Patches

The Jellyfin app was installed from the Olares marketplace, then patched with `kubectl patch` for:

### 1. NFS Media Mount

```bash
kubectl patch deployment jellyfin -n jellyfin-vishinator --type=json -p '[
  {"op":"add","path":"/spec/template/spec/volumes/-","value":{"name":"atlantis-media","hostPath":{"path":"/mnt/atlantis_media","type":"Directory"}}},
  {"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-","value":{"name":"atlantis-media","mountPath":"/media","readOnly":true}}
]'
```
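
A quick sanity check after the patch, sketched under the assumption that `deploy/jellyfin` resolves to the patched pod and the library folders already exist on the NFS share:

```bash
# Wait for the patched pod to roll out, then list the mounted media (read-only)
kubectl rollout status deployment/jellyfin -n jellyfin-vishinator
kubectl exec -n jellyfin-vishinator deploy/jellyfin -c jellyfin -- ls /media
```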

### 2. GPU Access (NVIDIA runtime + env vars)

```bash
kubectl patch deployment jellyfin -n jellyfin-vishinator --type=json -p '[
  {"op":"add","path":"/spec/template/spec/REDACTED_APP_PASSWORD","value":"nvidia"},
  {"op":"add","path":"/spec/template/metadata/annotations","value":{"applications.app.bytetrade.io/gpu-inject":"true"}},
  {"op":"replace","path":"/spec/template/spec/containers/0/resources/limits","value":{"cpu":"4","memory":"8Gi"}},
  {"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"NVIDIA_VISIBLE_DEVICES","value":"all"}},
  {"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"NVIDIA_DRIVER_CAPABILITIES","value":"all"}}
]'
```

**Important**: Do NOT request `nvidia.com/gpu` or `nvidia.com/gpumem` resources. HAMI's vGPU interceptor (`libvgpu.so` injected via `/etc/ld.so.preload`) causes ffmpeg to segfault (exit code 139) during CUDA transcode operations (especially `tonemap_cuda`). By omitting GPU resource requests, HAMI doesn't inject its interceptor, and Jellyfin gets direct GPU access via the nvidia runtime class.
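
One way to confirm the interceptor is absent in the running container (a sketch; the preload file may simply not exist, which is the healthy case):

```bash
# If this prints libvgpu.so, HAMI is still intercepting CUDA calls in the pod
kubectl exec -n jellyfin-vishinator deploy/jellyfin -c jellyfin -- \
  sh -c 'cat /etc/ld.so.preload 2>/dev/null || echo "no preload (good)"'
```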

### 3. HAMI Memory Override (if GPU resources are requested)

If you do need HAMI GPU scheduling (e.g., to share the GPU fairly with LLM workloads), override the memory limit:

```bash
kubectl patch deployment jellyfin -n jellyfin-vishinator --type=json -p '[
  {"op":"add","path":"/spec/template/spec/containers/0/env/-","value":{"name":"CUDA_DEVICE_MEMORY_LIMIT_0","value":"8192m"}}
]'
```

Note: This alone does NOT fix the segfault — `libvgpu.so` in `/etc/ld.so.preload` is the root cause.

## LAN Access

Olares's envoy proxy adds ~100ms per request, causing buffering on high-bitrate streams. Direct LAN access bypasses this.

### NodePort Service

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jellyfin-lan
  namespace: jellyfin-vishinator
spec:
  type: NodePort
  externalIPs:
    - 192.168.0.145
  selector:
    app: jellyfin
  ports:
    - port: 8096
      targetPort: 8096
      nodePort: 30096
      name: jellyfin-web
```
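
A sketch of applying and smoke-testing the service (the filename is an assumption; `/health` is Jellyfin's unauthenticated health endpoint, but any HTTP 200 from the NodePort confirms reachability):

```bash
# Apply the Service, then verify the NodePort answers on the LAN
kubectl apply -f jellyfin-lan-service.yaml
curl -fsS http://192.168.0.145:30096/health
```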

### Calico GlobalNetworkPolicy

Olares auto-creates restrictive NetworkPolicies (`app-np`) that block external LAN traffic and cannot be modified (admission webhook reverts changes). A Calico GlobalNetworkPolicy bypasses this:

```yaml
apiVersion: crd.projectcalico.org/v1
kind: GlobalNetworkPolicy
metadata:
  name: allow-lan-to-jellyfin
spec:
  order: 100
  selector: app == 'jellyfin'
  types:
    - Ingress
  ingress:
    - action: Allow
      source:
        nets:
          - 192.168.0.0/24
```
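
Because this uses the CRD-backed API (`crd.projectcalico.org/v1`), plain `kubectl` should suffice to apply and inspect it, no `calicoctl` required (the filename is an assumption):

```bash
kubectl apply -f allow-lan-to-jellyfin.yaml
kubectl get globalnetworkpolicies.crd.projectcalico.org allow-lan-to-jellyfin
```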

This is the **correct** approach for LAN access on Olares. Alternatives that don't work:

- Patching `app-np` NetworkPolicy — webhook reverts it
- Adding custom NetworkPolicy — webhook deletes it
- iptables rules on Calico chains — Calico reconciles and removes them

## Jellyfin Settings

### Hardware Transcoding

In Dashboard > Playback > Transcoding:

- **Hardware acceleration**: NVIDIA NVENC
- **Hardware decoding**: All codecs enabled (H264, HEVC, VP9, AV1, etc.)
- **Enhanced NVDEC**: Enabled
- **Hardware encoding**: Enabled
- **HEVC encoding**: Allowed
- **AV1 encoding**: Allowed (RTX 5090 supports AV1 encode)
- **Tone mapping**: Enabled (bt2390, HDR→SDR on GPU)

### Library Paths

| Library | Path |
|---------|------|
| Movies | `/media/movies` |
| TV Shows | `/media/tv` |
| Anime | `/media/anime` |
| Music | `/media/music` |
| Audiobooks | `/media/audiobooks` |

## NFS Mount

```
# /etc/fstab on olares
192.168.0.200:/volume1/data/media /mnt/atlantis_media nfs rw,async,hard,intr,rsize=131072,wsize=131072 0 0
```
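
After editing fstab on the olares host, the mount can be activated and verified without a reboot:

```bash
sudo mount -a                 # mount everything in fstab, including the new NFS entry
findmnt /mnt/atlantis_media   # confirm the NFS mount is active and shows the right source
```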

### Performance

- Sequential read: 180-420 MB/s (varies by cache state)
- More than sufficient for multiple 4K remux streams (~100 Mbps each)
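
The headroom claim follows from simple arithmetic: even at the worst-case 180 MB/s, the link sustains roughly fourteen ~100 Mbps remux streams:

```bash
# 180 MB/s * 8 bits/byte = 1440 Mbps; divide by ~100 Mbps per 4K remux stream
mb_per_s=180
echo $(( mb_per_s * 8 / 100 ))   # concurrent streams at worst-case throughput
```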

## Known Issues

- **Patches lost on Olares app update** — if Jellyfin is updated via the marketplace, the NFS mount and GPU patches need to be re-applied
- **HAMI vGPU causes ffmpeg segfaults** — do NOT request `nvidia.com/gpu` resources; use the nvidia runtime class without HAMI resource limits
- **Olares proxy buffering** — use direct LAN access (`http://192.168.0.145:30096`) for streaming, not the Olares proxy URL
- **GPU shared with Ollama** — both Jellyfin and Ollama access the full 24GB VRAM without HAMI partitioning; heavy concurrent use (4K transcode + large model inference) may cause OOM

## Maintenance

### Check status

```bash
kubectl get pods -n jellyfin-vishinator
kubectl exec -n jellyfin-vishinator deploy/jellyfin -c jellyfin -- nvidia-smi
```

### Re-apply patches after update

Run the `kubectl patch` commands from the Deployment Patches section above.

### Check transcoding

```bash
# Is ffmpeg using GPU?
kubectl exec -n jellyfin-vishinator deploy/jellyfin -c jellyfin -- nvidia-smi
# Look for ffmpeg process with GPU memory usage

# Check transcode logs
kubectl logs -n jellyfin-vishinator deploy/jellyfin -c jellyfin | grep ffmpeg | tail -5
```

---

**Last Updated**: 2026-04-03