Analytics (per-embed metrics)
Per-embed analytics that show what's working and what to fix
Understand how each AI assistant performs across your sites, from engagement and containment to citation quality and unanswered questions.
Engagement & Resolution
- Impressions -> opens -> chats -> messages
- Session containment vs follow-up intent
- Lead prompt performance and drop-off
- Time to first answer
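The funnel above can be sketched as step-to-step conversion rates. A minimal illustration, assuming simple per-embed counters (the function and field names are hypothetical, not the product's API):

```python
def funnel_rates(impressions, opens, chats, messages):
    """Return step-to-step conversion for the engagement funnel.

    Note: chats -> messages can exceed 1.0, since one chat
    usually contains several messages.
    """
    steps = [("impressions", impressions), ("opens", opens),
             ("chats", chats), ("messages", messages)]
    rates = {}
    for (prev_name, prev), (name, count) in zip(steps, steps[1:]):
        rates[f"{prev_name}->{name}"] = count / prev if prev else 0.0
    return rates

# Example: 10,000 impressions, 1,200 opens, 400 chats, 950 messages
rates = funnel_rates(10_000, 1_200, 400, 950)
```

Drop-off shows up as a low rate at a specific step, e.g. a weak lead prompt depresses `opens->chats` rather than `impressions->opens`.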
Retrieval & Quality Signals
- Fallback reason codes (low relevance, timeout, provider error)
- Unanswered questions and follow-up intent
- Top cited documents and stale content alerts
- Precision sampling to detect hallucinations
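As one concrete sketch, fallback reason codes can be tallied straight from raw events. The event shape below is an assumption for illustration; only the reason codes themselves (low relevance, timeout, provider error) come from the list above:

```python
from collections import Counter

# Illustrative raw events; the "type"/"reason" fields are assumed, not the real schema.
events = [
    {"type": "fallback", "reason": "low_relevance"},
    {"type": "answer"},
    {"type": "fallback", "reason": "timeout"},
    {"type": "fallback", "reason": "low_relevance"},
]

# Tally fallbacks by reason code to see what to fix first.
reasons = Counter(e["reason"] for e in events if e["type"] == "fallback")
print(reasons.most_common())  # [('low_relevance', 2), ('timeout', 1)]
```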
Operational Insight
Per-embed isolation lets product teams and agencies compare performance across sites, brands, or clients. Export raw events for deeper analysis, attribution modeling, or QA workflows.
All metrics are traceable. Every aggregate ties back to raw events (retrieval and generation spans) to support audits, regressions, and investigation.
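A minimal sketch of that traceability, assuming a flat export of retrieval and generation spans (all field names here are illustrative, not the actual export schema):

```python
# Hypothetical raw-event export: one row per span, keyed by embed and session.
raw_events = [
    {"embed_id": "emb_1", "session": "s1", "span": "retrieval",
     "cited_docs": ["doc_9"], "latency_ms": 180},
    {"embed_id": "emb_1", "session": "s1", "span": "generation",
     "latency_ms": 920},
    {"embed_id": "emb_1", "session": "s2", "span": "retrieval",
     "cited_docs": [], "latency_ms": 45},
]

def spans_for_session(events, embed_id, session):
    """Pull the raw spans behind one session, for audits or regression review."""
    return [e for e in events
            if e["embed_id"] == embed_id and e["session"] == session]

# Drilling into session s1 recovers its retrieval and generation spans.
audit_trail = spans_for_session(raw_events, "emb_1", "s1")
```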
This is how you debug trust, not just traffic.
FAQ
What is a containment metric?
Containment measures sessions resolved without human follow-up; it reflects deflection efficiency and assistant trustworthiness.
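In pseudocode terms, containment is a simple ratio over sessions. A sketch assuming each session carries a boolean escalation flag (the field name is hypothetical):

```python
def containment_rate(sessions):
    """Share of sessions resolved without human follow-up.

    Each session is assumed to be a dict with an 'escalated' flag.
    """
    if not sessions:
        return 0.0
    contained = sum(1 for s in sessions if not s["escalated"])
    return contained / len(sessions)

# 3 of 4 sessions resolved without follow-up -> 0.75 containment
rate = containment_rate([
    {"escalated": False},
    {"escalated": False},
    {"escalated": True},
    {"escalated": False},
])
```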
How often are analytics updated?
Event ingestion is near real-time (typically under 5 seconds), with aggregated views updating within minutes.
Can we export raw events?
Yes. Raw event exports are available for advanced analysis, QA workflows, and attribution modeling.