- Some upstream feeds are intermittently unavailable, require source-side authentication, or are still in public preview.
- On CPU-only VPS deployments, LLM tasks may queue during ingestion spikes and complete with a delay.
- Per-item severity, category, and confidence are most reliable when LLM analysis is available; the fallback heuristics produce estimates only.
Found an issue or gap? Report it via the feedback form, including the source URL and a timestamp.