GET /api/v1/trending/ · velocity-ranked topics · sparklines_7d · 120s cache
GET /api/v1/pulse/?topic= · per-topic sentiment · top 3 signals per platform · LLM intelligence · 300s cache
GET /api/v1/divergence/leaderboard/ · top 10 cross-platform disputes · DISTINCT ON · severity · 60s cache · NEW
GET /api/v1/compare/ · divergence events by topic/platform pair
GET/POST/DELETE /api/v1/alerts/watch/ · webhook subscriptions · cooldown · audit log · requires token · NEW
GET /api/v1/signals/ · raw signal feed · filterable by platform/topic/trending
GET /api/v1/stats/totals/ · platform signal counts · all-time + 24h · 300s cache
WS /ws/signals/ · real-time push via Django Channels + Redis pub/sub
AUTH POST /api/token/ → {access, refresh} · access expires in 60 min · POST /api/token/refresh/ to rotate
GET /health/ · health check · no auth required · no DB query
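Since every REST endpoint except /health/ requires a bearer token, a minimal client sketch may help. This is an illustration, not the project's client: the base URL and the helper names are assumptions; the paths and header format come from the table above.

```python
import json
import urllib.request

BASE = "http://localhost:8000"  # assumed dev host; adjust to your deployment

def auth_headers(access_token: str) -> dict:
    """Header required by every REST endpoint except /health/."""
    return {"Authorization": f"Bearer {access_token}"}

def get_trending(access_token: str):
    # Hypothetical call against GET /api/v1/trending/ (120s server-side cache)
    req = urllib.request.Request(
        f"{BASE}/api/v1/trending/", headers=auth_headers(access_token)
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

When the access token expires (60 min), POST the refresh token to /api/token/refresh/ and retry with the new access token.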
Webhook Alert System NEW
→POST /alerts/watch/ → row in watched_topics
→TrendingView fires _dispatch_topic_alerts()
→3 gates: trend threshold, platform count, cooldown
→Race-safe: UPDATE last_fired_at (no read-modify-write)
→All attempts logged in alert_deliveries
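The three dispatch gates can be sketched as a single predicate. The threshold values, the minimum platform count, and the cooldown window below are placeholder assumptions, not the service's actual configuration; only the three-gate structure comes from the source. In production the cooldown is additionally enforced race-safely with a conditional UPDATE on last_fired_at rather than this read-then-check pattern.

```python
from datetime import datetime, timedelta, timezone

# Placeholder values — the real thresholds live in service config.
TREND_THRESHOLD = 0.8
MIN_PLATFORMS = 2
COOLDOWN = timedelta(minutes=30)

def should_dispatch(trend_score, platform_count, last_fired_at, now):
    if trend_score < TREND_THRESHOLD:          # gate 1: trend threshold
        return False
    if platform_count < MIN_PLATFORMS:         # gate 2: platform count
        return False
    if last_fired_at is not None and now - last_fired_at < COOLDOWN:
        return False                           # gate 3: cooldown
    return True
```

Whatever the dispatch decision, the attempt is recorded in alert_deliveries, so suppressed alerts remain auditable.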
Caching Strategy
→trending: 120s · pulse: 300s · stats: 300s
→divergence/leaderboard: 60s (freshness matters)
→Cache keyed by endpoint + query params
→WebSocket bypasses cache entirely — always live
→Redis DB1 (REST cache) · DB2 (Channels pub/sub)
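A sketch of the cache-key scheme, assuming the source's description only: keys combine the endpoint with its query params, and params are sorted so equivalent query strings share one entry. The exact key shape ("rest:" prefix, separator) is an assumption for illustration.

```python
from urllib.parse import urlencode

# Per-endpoint TTLs as listed above (seconds).
TTL = {
    "trending": 120,
    "pulse": 300,
    "stats/totals": 300,
    "divergence/leaderboard": 60,
}

def cache_key(endpoint: str, params: dict) -> str:
    # Sort params so ?a=1&b=2 and ?b=2&a=1 hit the same cache entry.
    qs = urlencode(sorted(params.items()))
    return f"rest:{endpoint}?{qs}" if qs else f"rest:{endpoint}"
```

The WebSocket path never consults this cache; pushed signals go straight from Redis pub/sub (DB2) to connected clients.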
import json

from channels.generic.websocket import AsyncWebsocketConsumer

# WebSocket push after bulk upsert — processing service
redis.publish("asgi:group:posts_feed", signal_payload)

# Django Channels consumer receives and forwards to clients
class SignalConsumer(AsyncWebsocketConsumer):
    # channel-layer events with type "signal_update" are dispatched here
    async def signal_update(self, event):
        await self.send(json.dumps(event["payload"]))
Daphne serves HTTP and WebSocket on the same ASGI process. All REST endpoints require an Authorization: Bearer <access_token> header. Cache TTLs are intentionally shorter on divergence (60s) than on pulse (300s): divergence data drives alerts, so staleness costs more there.