Know what your users are actually asking
Once your assistant is live, real users start asking real questions. Analytics gives you a window into those conversations, so you can understand what’s working, what’s not, and where to improve. Droog has two levels of analytics:

- Workspace Analytics — a bird’s-eye view across all your assistants combined
- Bot-Level Analytics — a detailed breakdown for each individual assistant
Workspace Analytics
The Workspace Analytics page gives you a combined view of activity across all your assistants in one place. This is the right place to start when you want to understand the overall health of your Droog deployment.
How to open it
- Go to your main Dashboard
- Click Analytics in the left navigation
Summary metrics
The five cards at the top give you an instant health check across your entire workspace:

| Metric | What it means |
|---|---|
| Total Sessions | The number of individual conversations started by users across all your assistants |
| Total Messages | The total number of messages exchanged — both user questions and assistant replies |
| Avg Latency | The average time your assistant takes to reply to a message |
| P95 Latency | The response time for the slowest 5% of messages — useful for spotting occasional delays |
| Escalation Rate | The percentage of conversations where the assistant couldn’t fully resolve the user’s query on its own |
A high Escalation Rate means users are frequently hitting dead ends. This is a strong signal to review your knowledge base and add missing content.
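If it helps to see the arithmetic behind these cards, here is an illustrative sketch of how the summary metrics could be derived from raw conversation logs. The data shape is hypothetical and not Droog’s API; the P95 calculation uses the common nearest-rank method, which may differ from Droog’s internal implementation.

```python
import math

# Hypothetical session logs — not Droog's actual data model.
sessions = [
    {"messages": 6, "latencies_ms": [800, 950, 700], "escalated": False},
    {"messages": 2, "latencies_ms": [1200], "escalated": True},
    {"messages": 4, "latencies_ms": [640, 720], "escalated": False},
]

total_sessions = len(sessions)                                 # 3
total_messages = sum(s["messages"] for s in sessions)          # 12

latencies = sorted(l for s in sessions for l in s["latencies_ms"])
avg_latency = sum(latencies) / len(latencies)                  # 835.0 ms

# P95 (nearest-rank): the response time that 95% of messages come in under
p95_latency = latencies[math.ceil(0.95 * len(latencies)) - 1]  # 1200 ms

escalated = sum(1 for s in sessions if s["escalated"])
escalation_rate = 100 * escalated / total_sessions             # ≈ 33.3%
```

Note how one slow reply (1200 ms) dominates P95 while barely moving the average; that is why the dashboard shows both numbers.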
Usage & Performance Trends chart
Below the summary cards is a chart that plots messages, sessions, and average latency over your selected date range, all in one view. Things worth watching for:
- Spikes in sessions or messages — often linked to campaigns, product launches, or events bringing traffic to your site
- Latency increases — if the latency line climbs sharply on a particular day, check if large knowledge base changes were made around that time
- Quiet periods — consistently low engagement may mean users aren’t finding the chat widget easily
Bot Usage Breakdown
At the bottom of the Workspace Analytics page is a table breaking down activity per assistant — so you can see which bots are doing the most work.
| Column | What it tells you |
|---|---|
| Bot Name | The name of the assistant |
| Status | Whether the assistant is currently active or inactive |
| Sessions | Number of conversations started on this assistant |
| Messages | Total messages exchanged on this assistant |
| Storage (MB) | How much knowledge base storage this assistant is using |
Bot-Level Analytics
For a deeper look at any individual assistant, open that assistant from your dashboard. The overview panel shows detailed metrics for that bot alone — broken into four sections.
Performance & Cost
How fast the bot responds and what it costs to run.

| Metric | What it means |
|---|---|
| Avg Latency | The average time taken to respond to a user message |
| P95 Latency | Response time for the slowest 5% of messages — helps identify occasional delays |
| Cost / Session | Estimated cost per conversation |
| Cost / Message | Estimated cost per individual message |
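The two cost metrics are simple ratios of total spend to volume. A quick sketch with hypothetical figures (the numbers are made up, purely to show how the metrics relate):

```python
# Hypothetical monthly figures — not real Droog pricing.
total_cost_usd = 12.50
sessions = 500
messages = 3200

cost_per_session = total_cost_usd / sessions   # $0.025 per conversation
cost_per_message = total_cost_usd / messages   # ≈ $0.0039 per message
```

A rising Cost / Session with flat Cost / Message usually means conversations are getting longer, not more expensive per reply.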
User Engagement
How actively users interact with the bot. This panel shows how users are actually behaving inside conversations.

| Metric | What it means |
|---|---|
| Total Users (Est.) | Estimated number of unique users who have interacted with the assistant |
| Active Users | Users who sent at least one message in the selected period |
| Avg Msg / Session | Average number of messages per conversation — higher means users are engaging more deeply |
| Dropoff Rate | Percentage of users who opened the chat but didn’t send a message |
A high Dropoff Rate may mean users are unsure what to ask when they open the chat. Consider adding a welcome message or example questions to give users a clear starting point.
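For reference, the engagement ratios work out like this. The numbers below are hypothetical, chosen only to show how the figures relate:

```python
# Hypothetical counts — not Droog's actual data.
chats_opened = 40        # users who opened the chat widget
users_messaged = 25      # users who sent at least one message
total_messages = 150
total_sessions = 30

dropoff_rate = 100 * (chats_opened - users_messaged) / chats_opened  # 37.5%
avg_msg_per_session = total_messages / total_sessions                # 5.0
```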
User Feedback
How users rate the quality of your assistant’s responses. Users can rate individual responses with a thumbs up or thumbs down directly inside the chat. This panel aggregates that feedback.

| Metric | What it means |
|---|---|
| Thumbs Up | Number of responses users found helpful |
| Thumbs Down | Number of responses users found unhelpful |
| Avg Rating | Overall satisfaction score out of 5 |
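One simple way to read the thumbs counts is as a helpfulness share. This is an illustrative calculation with made-up counts; the Avg Rating column comes from Droog’s own scoring, not from this formula.

```python
# Hypothetical feedback counts.
thumbs_up, thumbs_down = 42, 8

# Share of rated responses that users found helpful
helpful_share = 100 * thumbs_up / (thumbs_up + thumbs_down)  # 84.0%
```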
Automation Health
The bot’s ability to handle conversations on its own, without needing to escalate.

| Metric | What it means |
|---|---|
| Total Escalations | Number of conversations where the assistant couldn’t resolve the query independently |
| Escalation Rate | Percentage of all conversations that ended in an escalation |
Using analytics to improve your assistant
Analytics isn’t just a report — it’s a feedback loop. Here’s how to act on what you see:

High escalation rate?
Review conversations that escalated. Identify topics the assistant couldn’t answer and upload content to cover those gaps.
Low feedback scores?
Filter for Thumbs Down responses. Find the weak answers and improve the relevant documents in your knowledge base.
Latency spikes?
Check if latency increases coincide with large knowledge base uploads or high-traffic days. Flag to support if spikes persist.
High dropoff rate?
Add a friendly opening message or example questions to your assistant to give users a clear starting point.
A simple weekly review routine
A 10-minute weekly check is enough to keep your assistant improving consistently.

- Open Workspace Analytics → check total sessions and escalation rate at a glance
- Review the trends chart → spot any unusual spikes or drops in the past week
- Open Bot-Level Analytics for your most active assistant → check feedback scores and dropoff rate
- Act on one thing → upload a missing document, adjust a constraint, or improve the welcome message
What’s next?
Update your knowledge base
Add content to cover the gaps your analytics revealed.
Test your changes
After uploading new content, test to confirm the improvements are working.
Have a question? Reach us at hi@droog.io
