CallMeter Docs

Running a Test

Execute a SIP load test, monitor real-time progress, understand run statuses, stop tests early, and manage concurrent test limits.

Once you have created a test configuration, you can run it as many times as needed. Each execution produces an independent test run with its own metrics, endpoint data, and SIP traces. This guide covers the entire lifecycle of a test run, from initiation to completion.

Starting a Test Run

  1. Open a test from your project's Tests page
  2. Click Run Test
  3. The platform performs pre-run validation:
    • Checks your subscription plan limits (endpoint count, concurrent runs)
    • Verifies worker availability in the selected regions
    • Confirms sufficient SIP accounts for all groups
    • Validates credit balance for the estimated usage
  4. If all checks pass, the run enters the queue
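The pre-run validation above can be sketched as a single check that collects every blocking reason before the run is queued. This is an illustrative sketch only; all type and field names here (`RunRequest`, `Account`, and so on) are assumptions, not the platform's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class RunRequest:
    endpoint_count: int
    regions: list                 # regions requested by the test's groups
    sip_accounts_needed: int
    estimated_credits: float

@dataclass
class Account:
    max_endpoints: int
    max_concurrent_runs: int
    active_runs: int
    available_workers_by_region: dict = field(default_factory=dict)
    sip_accounts_available: int = 0
    credit_balance: float = 0.0

def validate_run(req: RunRequest, acct: Account) -> list:
    """Return the list of reasons the run cannot start (empty list = all checks pass)."""
    errors = []
    if req.endpoint_count > acct.max_endpoints:
        errors.append("endpoint count exceeds plan limit")
    if acct.active_runs >= acct.max_concurrent_runs:
        errors.append("concurrent run limit reached")
    for region in req.regions:
        if acct.available_workers_by_region.get(region, 0) == 0:
            errors.append(f"no workers available in {region}")
    if req.sip_accounts_needed > acct.sip_accounts_available:
        errors.append("insufficient SIP accounts")
    if req.estimated_credits > acct.credit_balance:
        errors.append("insufficient credit balance")
    return errors
```

If `validate_run` returns an empty list, the run would enter the queue; otherwise every failed check is reported at once, which mirrors how the platform surfaces blocking conditions before queuing.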

Tests do not auto-run

Creating a test only saves the configuration. You must explicitly click Run Test to start each execution. This lets you reuse configurations and run them on demand.

Test Run Status Progression

Every test run moves through a defined sequence of statuses:

| Status | Description | Terminal |
| --- | --- | --- |
| PENDING | The run has been submitted and is waiting for a queue slot | No |
| QUEUED | The run is in the execution queue. Workers are being allocated and endpoints are being assigned | No |
| RUNNING | Workers are actively executing SIP calls. Endpoints are registering, calling, and exchanging media. Metrics are flowing in real time | No |
| STOPPING | You clicked Stop Test. Workers are hanging up active calls and reporting final metrics | No |
| COMPLETED | All endpoints finished their calls and reported final metrics. The run succeeded | Yes |
| FAILED | A critical error stopped the run before all endpoints could complete | Yes |
| CANNOT_RUN_FOR_NOW | The run could not start due to insufficient worker capacity, billing limits, or resource constraints | Yes |

Normal progression:

PENDING --> QUEUED --> RUNNING --> COMPLETED

Manual stop:

PENDING --> QUEUED --> RUNNING --> STOPPING --> COMPLETED

Failure paths:

PENDING --> QUEUED --> RUNNING --> FAILED
PENDING --> CANNOT_RUN_FOR_NOW

What Happens During Execution

Understanding the internal execution flow helps you interpret real-time dashboards and troubleshoot issues.

Phase 1: Allocation (QUEUED)

  • The platform selects workers based on each group's assignment mode (region auto-distribution or explicit worker selection)
  • Endpoints are distributed across the selected workers
  • SIP accounts are assigned to each endpoint from the group's registrar
  • Workers receive their endpoint assignments and begin initialization

Phase 2: Registration

  • Endpoints begin SIP registration with their assigned registrar, staggered over the configured buildup period
  • Each endpoint sends a SIP REGISTER request and waits for a 200 OK response
  • Successful registrations advance the endpoint to the REGISTERED phase
  • Failed registrations set the REGISTRATION_FAILED outcome and are recorded with the SIP response code for diagnosis
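The staggering of REGISTER attempts over the buildup period can be illustrated with a small helper that spreads endpoints evenly across the window (a sketch of the idea, not the platform's actual scheduling algorithm):

```python
def registration_offsets(endpoint_count: int, buildup_seconds: float) -> list:
    """Evenly stagger REGISTER attempts across the buildup period,
    so all endpoints do not hit the registrar at the same instant."""
    if endpoint_count <= 1:
        return [0.0] * endpoint_count
    step = buildup_seconds / (endpoint_count - 1)
    return [round(i * step, 3) for i in range(endpoint_count)]
```

For example, 5 endpoints with a 60-second buildup register at 0, 15, 30, 45, and 60 seconds, which is why a longer buildup prevents registration storms on large tests.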

Phase 3: Call Setup

  • Caller endpoints send SIP INVITE requests to their configured callee targets
  • Callee endpoints (if configured to receive) wait for incoming INVITE requests
  • SDP offer/answer negotiation establishes the media codec and transport parameters
  • Once the call is answered (200 OK + ACK), endpoints advance to the INCALL phase

Phase 4: Media Exchange (RUNNING)

  • Endpoints exchange audio (and optionally video) media using RTP
  • Quality metrics are collected every second: MOS, jitter, packet loss, RTT, bitrate, and 50+ additional metrics
  • RTCP reports are exchanged between endpoints for quality feedback
  • Metrics flow to the platform in real time and appear on the dashboard

Phase 5: Teardown

  • When the configured duration expires, endpoints send SIP BYE requests to terminate calls
  • Final metrics are reported to the platform
  • Workers release their endpoint assignments and become available for other tests
  • The run transitions to COMPLETED

Monitoring a Running Test

While a test is in the RUNNING state, the test run page provides real-time visibility:

Overview Tab

  • Aggregate metrics -- average MOS, jitter, packet loss, and RTT across all endpoints
  • Endpoint phase breakdown -- how many endpoints are in each phase (INITIALIZING, REGISTERED, CALLING, RINGING, NEGOTIATING, INCALL, DISCONNECTING, CLOSED)
  • Progress indicator -- elapsed time relative to configured duration
  • Group breakdown -- per-group status when running multi-group tests

Endpoint List

  • Each endpoint is displayed with its current phase and outcome
  • Color-coded phase indicators: green for INCALL, yellow for CALLING/RINGING/NEGOTIATING, red for error outcomes
  • Click an endpoint to view its individual metrics and SIP trace in real time

Real-Time Metrics

During execution, metrics update every few seconds. Key metrics to watch:

| Metric | What to Watch For |
| --- | --- |
| MOS | Should be above 3.5 for acceptable quality. Below 3.0 indicates problems |
| Jitter | Below 30ms is good. Above 50ms may cause audible artifacts |
| Packet Loss | Below 1% is good. Above 3% degrades call quality significantly |
| RTT | Below 150ms for acceptable quality. Above 300ms causes conversational difficulty |
| Registration Rate | All endpoints should register within the buildup period |
| Call Success Rate | Should approach 100% unless you are stress-testing beyond capacity |
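The "problem" thresholds above can be turned into a simple health check for a metric sample. This is an illustrative sketch using the table's values; it is not part of the platform:

```python
def rate_call_quality(mos: float, jitter_ms: float, loss_pct: float, rtt_ms: float) -> list:
    """Flag metric values that cross the problem thresholds from the table above."""
    issues = []
    if mos < 3.0:
        issues.append("MOS below 3.0: quality problems")
    if jitter_ms > 50:
        issues.append("jitter above 50ms: audible artifacts likely")
    if loss_pct > 3:
        issues.append("packet loss above 3%: significant degradation")
    if rtt_ms > 300:
        issues.append("RTT above 300ms: conversational difficulty")
    return issues or ["OK"]
```

Applying this to each real-time sample makes it easy to spot which endpoints are degrading while the test is still running.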

Stopping a Test Early

You can stop a running test before its configured duration expires:

  1. Click Stop Test on the test run page
  2. The run transitions to STOPPING status
  3. Workers send BYE requests to terminate all active calls
  4. Final metrics are collected and reported
  5. The run transitions to COMPLETED with whatever data was collected up to that point

Stopped runs are marked as COMPLETED (not FAILED) because the stop was intentional. All metrics collected before the stop are preserved and available for analysis.

Stopping takes time

Stopping a large test with hundreds of endpoints may take 10-30 seconds as workers gracefully terminate all active calls and report final metrics. Do not refresh the page during this process.

Re-Running a Test

To run the same test configuration again:

  1. Open the test from your test list
  2. Click Run Test
  3. A new, independent test run is created with a new run ID

Previous runs and their complete metrics history are preserved. You can compare results across runs to identify trends, regressions, or improvements.

Each run is fully independent -- different workers may be allocated, and network conditions may vary. This is by design, as it provides a realistic picture of how your SIP infrastructure performs over time.
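Comparing results across runs can be as simple as diffing the average of each metric between two runs. A minimal sketch, assuming each run's results are available as a mapping of metric name to collected samples (field names here are assumptions for illustration):

```python
def compare_runs(run_a: dict, run_b: dict,
                 metrics=("mos", "jitter_ms", "loss_pct")) -> dict:
    """Return run_b's average minus run_a's average for each metric,
    so positive deltas mean the value increased in the newer run."""
    def avg(samples):
        return sum(samples) / len(samples)
    return {m: round(avg(run_b[m]) - avg(run_a[m]), 3) for m in metrics}
```

A negative MOS delta paired with a positive jitter or loss delta across several runs points to a persistent regression rather than a one-time network blip.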

Concurrent Test Limits

Your subscription plan defines how many tests can run simultaneously:

| Consideration | Details |
| --- | --- |
| Concurrent runs per project | Limited by your plan tier |
| Concurrent runs per organization | Limited by your plan tier |
| Total concurrent endpoints | The sum of endpoints across all running tests cannot exceed your plan limit |

If you attempt to start a run that would exceed these limits, the run enters CANNOT_RUN_FOR_NOW status. Wait for current runs to complete or upgrade your plan.
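The concurrency check described above amounts to two comparisons against plan limits. A sketch of the logic (parameter names are assumptions; the platform's internal accounting is not exposed):

```python
def can_start_run(new_endpoints: int, running_endpoint_counts: list,
                  plan_endpoint_limit: int,
                  running_count: int, plan_run_limit: int) -> bool:
    """True if a new run fits within both the concurrent-run and
    total-concurrent-endpoint limits described above."""
    if running_count + 1 > plan_run_limit:
        return False
    return sum(running_endpoint_counts) + new_endpoints <= plan_endpoint_limit
```

When this returns False, the run would land in CANNOT_RUN_FOR_NOW until capacity frees up or the plan is upgraded.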

Troubleshooting Failed Runs

When a test run fails, check these common causes:

| Issue | Status | What to Check |
| --- | --- | --- |
| No workers available in region | CANNOT_RUN_FOR_NOW | Open the Workers page and verify workers are in ONLINE status in the selected region |
| All endpoints REGISTRATION_FAILED | FAILED | Verify registrar domain, port, transport, and SIP account credentials |
| Billing limit reached | CANNOT_RUN_FOR_NOW | Check your plan limits and credit balance in Organization Settings |
| Worker disconnected during test | FAILED | Check worker connectivity and Docker container health |
| Network unreachable | FAILED | Ensure workers can reach your SIP server (firewall rules, DNS resolution) |
| Insufficient SIP accounts | CANNOT_RUN_FOR_NOW | Add more SIP accounts to the registrar or reduce endpoint count |

For each failed endpoint, the SIP Message Trace in the results view shows the exact signaling exchange, including the SIP response code that caused the failure. See Common Test Failures for detailed diagnosis.

Best Practices

  • Run during controlled conditions -- for comparable results, run tests at the same time of day and under similar network conditions
  • Check workers first -- before running a large test, verify workers are ONLINE on the Workers page
  • Use buildup for large tests -- a 30-60 second buildup prevents SIP registration storms
  • Save test configurations -- create named test configurations for scenarios you run repeatedly
  • Compare across runs -- run the same test multiple times and compare results to distinguish one-time issues from persistent problems
