
Metrics Collection

How CallMeter collects 150+ real-time VoIP metrics per endpoint per second, organizes them by direction, and displays them in the dashboard.

CallMeter collects over 150 measurements per endpoint per second during every call. This page explains how metrics are captured, organized, and made available for analysis.

Collection Architecture

Each virtual phone endpoint runs two independent metric collectors during a call:

Collector         | What It Measures              | Data Source
Send collector    | Outbound media stream quality | Local encoding stats, outbound RTP counters, RTCP Sender Reports
Receive collector | Inbound media stream quality  | Incoming RTP analysis, RTCP Receiver Reports, jitter buffer state, decoder stats

Both collectors sample their metrics once per second and report independently. This dual-collector architecture ensures send and receive quality are measured separately — essential for diagnosing asymmetric issues where one direction degrades while the other is fine.
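The dual-collector layout can be pictured as two independent once-per-second sampling loops. The sketch below is illustrative only; the class name, metric keys, and stat callbacks are assumptions, not CallMeter's internal API:

```python
import threading
import time

class Collector:
    """Samples one direction's metrics once per second, independently
    of the other direction's collector (illustrative sketch)."""

    def __init__(self, direction, read_stats):
        self.direction = direction    # "send" or "receive"
        self.read_stats = read_stats  # callable returning a metrics dict
        self.samples = []             # one entry per second of the call
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            self.samples.append({"t": time.monotonic(),
                                 "direction": self.direction,
                                 **self.read_stats()})
            self._stop.wait(1.0)      # 1 Hz sampling interval

    def stop(self):
        self._stop.set()

# Hypothetical stat sources for one endpoint's two collectors:
send = Collector("send", lambda: {"packets_sent": 50, "send_bitrate_kbps": 64})
recv = Collector("receive", lambda: {"packets_received": 49, "jitter_ms": 4.2})
```

Because each collector keeps its own sample list, a degraded receive path never masks (or is masked by) a healthy send path.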

Metric Categories

Metrics are organized into seven categories, each covering a different layer of the VoIP stack:

Category      | Metrics | What It Covers
Quality       | 9       | MOS, R-Factor, jitter, RTT, packet loss, clock drift
Network       | 16      | Packet counts, bitrate, sequence analysis, packet spacing
Feedback      | 5       | NACK, PLI, FIR, SLI counts
Jitter Buffer | 9       | Buffer delay, late packets, retransmission success
Audio         | 11      | PLC events, signal/noise levels, Opus-specific diagnostics
Video         | 10      | Frame rate, resolution, freeze events, keyframe intervals
Call Timing   | 8       | PDD, setup time, call duration, SIP result codes

For the complete list with keys and units, see the Metrics Glossary.

Time-Series vs One-Shot

Most metrics are time-series — sampled every second throughout the call, producing a continuous data stream. These appear as interactive charts in the endpoint detail view where you can zoom, pan, and correlate across metrics.

Call timing metrics are one-shot — captured once per call at specific SIP events (INVITE sent, 200 OK received, etc.). These appear as single values in the call timing summary, not as time-series charts.
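The two metric shapes can be sketched as simple record types. The field and key names below are hypothetical, chosen only to make the distinction concrete:

```python
from dataclasses import dataclass, field

@dataclass
class TimeSeriesMetric:
    """Sampled every second for the full call, e.g. jitter or bitrate."""
    key: str                                     # e.g. "jitter_ms" (illustrative)
    samples: list = field(default_factory=list)  # (second_offset, value) pairs

    def append(self, second, value):
        self.samples.append((second, value))

@dataclass
class OneShotMetric:
    """Captured once per call at a specific SIP event, e.g. post-dial delay."""
    key: str           # e.g. "pdd_ms" (illustrative)
    value: float
    sip_event: str     # the SIP milestone that triggered capture

# A time-series metric accumulates one point per second of the call...
jitter = TimeSeriesMetric("jitter_ms")
for sec, val in enumerate([3.1, 2.8, 4.0]):
    jitter.append(sec, val)

# ...while a one-shot metric is a single value tied to an event.
pdd = OneShotMetric("pdd_ms", 230.0, sip_event="180 Ringing received")
```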

Quality Scoring

Two automated quality scores are computed from collected metrics:

MOS (Mean Opinion Score)

Computed using the E-model algorithm (ITU-T G.107). The E-model takes network impairments (delay, jitter, packet loss) as inputs and produces a quality rating on a 1-5 scale:

MOS Range | Quality
4.0 - 5.0 | Good to excellent
3.5 - 4.0 | Acceptable
2.5 - 3.5 | Poor (users will notice degradation)
1.0 - 2.5 | Bad (unusable)
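The bands above translate directly into a threshold check. A minimal sketch, treating each shared boundary (e.g. 4.0) as belonging to the higher band:

```python
def classify_mos(mos: float) -> str:
    """Map a MOS value onto the quality bands from the table above."""
    if mos >= 4.0:
        return "Good to excellent"
    if mos >= 3.5:
        return "Acceptable"
    if mos >= 2.5:
        return "Poor"
    return "Bad"
```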

R-Factor

The raw transmission rating from the E-model, on a 0-100 scale. MOS is derived from the R-Factor, which provides finer granularity for threshold-based alerting.
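The ITU-T G.107 mapping from R-Factor to MOS is a standard cubic: for 0 < R < 100, MOS = 1 + 0.035R + R(R - 60)(100 - R) * 7e-6, clamped to 1.0 below R = 0 and to 4.5 above R = 100. As a sketch:

```python
def r_factor_to_mos(r: float) -> float:
    """Standard E-model mapping from R-Factor (0-100) to MOS (ITU-T G.107)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6
```

For example, an R-Factor of 93.2 (a typical maximum for narrowband G.711) maps to a MOS of about 4.41.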

MOS During Hold

MOS computation is paused when a call is on hold. Hold periods produce no meaningful quality data since media is intentionally muted. MOS resumes when the call is taken off hold.

Direction Filtering

Every metric has a direction context:

  • Send — What the endpoint transmits (packets sent, send bitrate, encoder stats)
  • Receive — What the endpoint receives (packets received, jitter, decoder stats, buffer state)
  • Send + Receive — Aggregated view (MOS, R-Factor computed from both directions)

The dashboard's direction filter lets you isolate send or receive metrics. This is the primary tool for diagnosing one-way audio, asymmetric quality, or upstream vs downstream issues.
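Conceptually, the direction filter is just a predicate over per-sample direction tags. A sketch, assuming each sample carries a direction field (the sample shape is an assumption, not CallMeter's data model):

```python
def filter_by_direction(samples, direction):
    """Keep only samples for one direction: 'send', 'receive', or 'both'."""
    if direction == "both":
        return list(samples)
    return [s for s in samples if s["direction"] == direction]

# Hypothetical per-second samples from one endpoint:
samples = [
    {"direction": "send", "key": "packets_sent", "value": 50},
    {"direction": "receive", "key": "jitter_ms", "value": 4.2},
    {"direction": "receive", "key": "buffer_delay_ms", "value": 40},
]
```

Isolating one direction this way is how you confirm, for example, that packets leave the endpoint normally while the inbound stream is the one degrading.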

Per-Track Metrics

For multi-track calls (multiple audio or video streams), metrics are collected independently per track. Each track has its own set of 150+ measurements, allowing you to compare quality across tracks within the same call.

Track-level metrics appear in the endpoint detail view grouped by track number and media type.
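Per-track grouping amounts to keying samples by track number and media type. A sketch with hypothetical sample fields:

```python
from collections import defaultdict

def group_by_track(samples):
    """Group metric samples by (track number, media type), mirroring how
    the endpoint detail view organizes them (field names illustrative)."""
    groups = defaultdict(list)
    for s in samples:
        groups[(s["track"], s["media"])].append(s)
    return dict(groups)

# Hypothetical samples from a multi-track call:
samples = [
    {"track": 0, "media": "audio", "key": "jitter_ms", "value": 3.0},
    {"track": 1, "media": "video", "key": "frame_rate", "value": 30},
    {"track": 0, "media": "audio", "key": "mos", "value": 4.3},
]
```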

Zero-Loss Collection

The metric collection pipeline is designed for zero data loss:

  • Metrics are buffered locally on the worker during collection
  • Buffered metrics are streamed to the platform in batches
  • If the network connection to the platform is temporarily interrupted, metrics are held in the local buffer until connectivity resumes
  • No metric samples are dropped due to transient network issues
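The buffer-until-acknowledged behavior described above can be sketched as a queue that is only cleared after a batch upload succeeds. The transport callable and sample shape are assumptions for illustration:

```python
class MetricBuffer:
    """Worker-side buffer: samples stay queued until a batch upload is
    acknowledged, so a transient network failure drops nothing (sketch)."""

    def __init__(self, send_batch):
        self.send_batch = send_batch  # callable(batch) -> True on success
        self.pending = []

    def record(self, sample):
        self.pending.append(sample)

    def flush(self):
        if not self.pending:
            return True
        if self.send_batch(list(self.pending)):  # stream as one batch
            self.pending.clear()                 # clear only after success
            return True
        return False                             # keep samples for retry
```

The key design point is that clearing happens only on a successful send: if connectivity drops mid-call, the next flush simply retries with the accumulated backlog.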

Where Metrics Appear

Location          | What You See
Test run overview | Aggregated quality summary across all endpoints
Group summary     | Per-group quality averages and endpoint counts
Endpoint detail   | Full time-series charts for all 150+ metrics
Endpoint timeline | SIP event milestones with absolute timestamps
Paired endpoint   | Side-by-side comparison with the paired caller/callee
