Conversations
/conversations is the primary tool for debugging and analysing calls processed by Delphi. It surfaces a chronological log of everything that happened during a call: dialogue turns, AI-model invocations with cost tracking, telephony events, errors, and performance metrics.
The page is read-only. Data is generated by TelPhi and the voice pipeline during live calls and persisted to Postgres. For operator-side debugging (SIP ladder, trace IDs), pair this page with Monitoring in SigNoz.
Page layout
| Panel | Width | Content |
|---|---|---|
| Left | ~33% | App selector + conversations list. |
| Right | ~67% | Detailed log table for the selected conversation. |
Left panel
- App selector — filter conversations by app. Supports deep linking via ?appId={appId}.
- Conversations list — sorted by start time (newest first).
| Column | Description |
|---|---|
| ID | Conversation identifier. |
| Start Time | yyyy-MM-dd HH:mm:ss. |
| Recording Status | Recording indicator (see below). |
Click a row to load its logs.
Right panel
The log detail table. Default sort is timestamp ascending (chronological order).
Recording
When call recording is enabled on the app, conversations show a recording status:
| Status | Indicator | Description |
|---|---|---|
| AVAILABLE | Play button | Recording ready. Click to play back. |
| PROCESSING | Spinner | Upload to S3 pending. |
Playback is accessible from both the conversation list and the detail view.
Call transfers
When an AI agent initiates a transfer (via the transfer() tool), the event is logged in the conversation timeline:
- Transfer target — destination number / endpoint.
- Transfer status — success / failure.
- Transfer metadata — additional context from the transfer.
Transfer events appear inline with other log types.
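The bullets above can be sketched as a log-entry shape. This is illustrative only: the field names mirror the bullets (target, status, metadata), but the exact wire format is not specified here.

```typescript
// Hypothetical shape of a transfer event in the conversation timeline.
// Field names follow the documented bullets; the schema itself is assumed.
interface TransferEvent {
  type: "Transfer";
  target: string;                     // destination number / endpoint
  status: "success" | "failure";      // transfer outcome
  metadata?: Record<string, unknown>; // additional context from the transfer
}

const example: TransferEvent = {
  type: "Transfer",
  target: "+15551234567",
  status: "success",
  metadata: { reason: "escalation" },
};
```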
Token usage
Model Usage entries include detailed token consumption:
| Field | Description |
|---|---|
| Input Tokens | Tokens sent to the model. |
| Output Tokens | Tokens received from the model. |
| Model Name | Specific model used. |
Token counts appear in the summary (Input: 150 → Output: 80) and in the detail dialog. They aggregate into the Dashboard token widgets.
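The aggregation into the Dashboard widgets can be sketched as a simple sum over Model Usage entries. The ModelUsage type and totalTokens() helper below are illustrative assumptions, not the platform's actual API.

```typescript
// Sketch: summing Input/Output tokens across Model Usage entries,
// as the Dashboard token widgets do. Types and values are illustrative.
interface ModelUsage {
  modelName: string;
  inputTokens: number;  // tokens sent to the model
  outputTokens: number; // tokens received from the model
}

function totalTokens(entries: ModelUsage[]): { input: number; output: number } {
  return entries.reduce(
    (acc, e) => ({
      input: acc.input + e.inputTokens,
      output: acc.output + e.outputTokens,
    }),
    { input: 0, output: 0 },
  );
}

// Two calls, the first matching the "Input: 150 → Output: 80" example:
const totals = totalTokens([
  { modelName: "gpt-4o", inputTokens: 150, outputTokens: 80 },
  { modelName: "gpt-4o", inputTokens: 200, outputTokens: 120 },
]);
// totals.input === 350, totals.output === 200
```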
Log types
Every log row has a type that determines its icon, colour, and the data it carries.
| Type | Icon | Captures | Key fields |
|---|---|---|---|
| Conversation | Chat bubble | Individual dialogue turns. | Actor (User / AI), Event Type, Duration, Interrupted, Interruption Reason, Text. |
| Model Usage | Robot | Each AI-model API call with cost. | Provider, Model, Service Type (STT / LLM / TTS / Realtime), Estimated Cost, Tokens, Latency, Duration, Audio Size, Characters, Region, Quality Tier. |
| System | Gear | Internal platform events. | Component, Action, Status. |
| Performance | Lightning | Timing for internal operations. | Component, Operation, Duration (ms). |
| Audio | Music note | Audio processing events. | Direction, Format, Operation, Duration. |
| Error | Error symbol | Errors during the conversation. | Component, Error Message. |
| Telephony | Phone | SIP / call-level events. | Event Type (INVITE, BYE, …), Channel Type, Channel ID, Caller, Called. |
| Sandbox | Island | Sandbox execution events. | — |
| VM | Wrench | JavaScript VM (tool) execution. | Operation, Function Name, Function Count, Execution Time. |
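The type/fields mapping above lends itself to a discriminated union. The sketch below covers four of the types as an assumption about the schema; field names follow the table, but the actual data model may differ.

```typescript
// Illustrative discriminated union over a subset of the log types.
// The "type" tag selects which key fields a row carries.
type LogEntry =
  | { type: "Conversation"; actor: "User" | "AI"; eventType: string;
      durationMs: number; interrupted: boolean; text: string }
  | { type: "ModelUsage"; provider: string; model: string;
      serviceType: "STT" | "LLM" | "TTS" | "Realtime"; estimatedCost: number }
  | { type: "Error"; component: string; errorMessage: string }
  | { type: "Telephony"; eventType: string; channelType: string;
      channelId: string; caller: string; called: string };

// Narrowing on the tag gives typed access to that variant's fields.
function isError(log: LogEntry): boolean {
  return log.type === "Error";
}
```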
Log detail dialog
Clicking the info icon opens a structured view of all data for that log:
- Model Usage — estimated cost, token counts, latency, provider, model.
- Conversation — actor, transcript, duration, interruption details.
- Telephony — caller / called numbers, event type, channel info.
- All types — raw JSON section at the bottom.
Cost calculation
The platform automatically estimates costs for Model Usage logs based on provider and model:
| Pricing method | Used for | How it works |
|---|---|---|
| Token-based | LLM calls | Cost per input token + cost per output token. |
| Time-based | Realtime, STT | Cost per second of audio. |
| Character-based | TTS | Cost per character converted to speech. |
OpenAI and Azure pricing are supported. Costs are estimates based on standard pricing.
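The three pricing methods reduce to simple linear formulas. The sketch below uses placeholder rates; they are not the platform's actual price tables, only an illustration of each method's shape.

```typescript
// Illustrative rates only (assumed, not real provider pricing).
const RATES = {
  llm: { inputPerToken: 2.5e-6, outputPerToken: 10e-6 }, // $ per token
  realtime: { perSecond: 0.0001 },                        // $ per audio second
  tts: { perCharacter: 1.5e-5 },                          // $ per character
};

// Token-based: cost per input token + cost per output token.
function estimateLlmCost(inputTokens: number, outputTokens: number): number {
  return inputTokens * RATES.llm.inputPerToken
       + outputTokens * RATES.llm.outputPerToken;
}

// Time-based: cost per second of audio (Realtime, STT).
function estimateAudioCost(audioSeconds: number): number {
  return audioSeconds * RATES.realtime.perSecond;
}

// Character-based: cost per character converted to speech (TTS).
function estimateTtsCost(characters: number): number {
  return characters * RATES.tts.perCharacter;
}
```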
Log summary format
| Log type | Summary |
|---|---|
| Model Usage | Provider/Model — tokens — latency |
| Conversation | Actor: EventType (duration) [INTERRUPTED] |
| System | Component.Action: Status |
| Performance | Component.Operation: duration ms |
| Audio | Direction Format Operation duration |
| Error | Component: Message |
| Telephony | Event: ChannelType ChannelId (caller -> called) |
| VM | Operation FunctionName (functionCount) executionTime |
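Two of the summary templates above can be rendered like this. The function names and parameter lists are assumptions; the format strings follow the table literally.

```typescript
// "Provider/Model — tokens — latency"
function modelUsageSummary(provider: string, model: string,
                           tokens: string, latency: string): string {
  return `${provider}/${model} — ${tokens} — ${latency}`;
}

// "Actor: EventType (duration) [INTERRUPTED]" — the flag only appears
// when the turn was interrupted.
function conversationSummary(actor: string, eventType: string,
                             duration: string, interrupted: boolean): string {
  const base = `${actor}: ${eventType} (${duration})`;
  return interrupted ? `${base} [INTERRUPTED]` : base;
}
```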
Filtering and navigation
- App filter — selecting an app resets the conversation selection.
- Deep linking — ?appId={appId} pre-selects an app via URL.
- Search — built-in search in the log detail table.
- Server-side filtering — supported via the filter model.
- Server-side sorting — default timestamp ascending; configurable.
- Infinite scroll — logs load in pages of 50.
- Pagination — the conversations list supports 10 / 25 / 50 rows per page.
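The deep-link and paging behaviour above can be sketched as follows. Only appId and the page size of 50 come from this page; the endpoint path style, offset/limit parameter names, and sort string are assumptions.

```typescript
// Logs load in pages of 50 (infinite scroll).
const PAGE_SIZE = 50;

// Deep link: pre-select an app on /conversations via ?appId={appId}.
function conversationsUrl(appId: string): string {
  return `/conversations?appId=${encodeURIComponent(appId)}`;
}

// Hypothetical paging parameters for one page of logs, using the
// default server-side sort (timestamp ascending).
function logPageParams(page: number): { offset: number; limit: number; sort: string } {
  return { offset: page * PAGE_SIZE, limit: PAGE_SIZE, sort: "timestamp,asc" };
}
```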
Workflows
Investigate a call
- Go to /conversations.
- Select the app.
- Pick the conversation by start time.
- Review logs chronologically in the right panel.
- Click the info icon on any log for details.
Check AI costs
- Select a conversation.
- Look at the Cost column on Model Usage entries.
- Click the info icon for token counts, latency, and provider details.
Debug an error
- Select a conversation.
- Filter / scroll to the Error entries (red chip).
- Click the info icon for component + full error message.
- Cross-reference with Telephony and System logs at the same timestamp.
Detail tabs
The conversation detail view exposes a set of tabs:
| Tab | Shows |
|---|---|
| Debug | Span tree and SIP ladder for the call's trace, sourced from SigNoz. |
| Timeline | Channel events ordered by ChannelMessage.timestamp. |
| Flow run | Per-node execution of the Flow Builder graph with inputs / outputs. |
| QA | QA scoring results (gated by qaScoring; enqueued on flow finalisation / hangup). |
| Token | Token usage by provider. |
| Action | Browser action invocations and results. |
The Debug tab is the bridge between this page and the operator-side Monitoring in SigNoz workflow.
See also
- Dashboard — aggregated metrics behind the conversation totals.
- Monitoring in SigNoz — operator-side debugging.
- Endpoints — what determines which conversations land where.