Running a Test
Execute a SIP load test, monitor real-time progress, understand run statuses, stop tests early, and manage concurrent test limits.
Once you have created a test configuration, you can run it as many times as needed. Each execution produces an independent test run with its own metrics, endpoint data, and SIP traces. This guide covers the entire lifecycle of a test run, from initiation to completion.
Starting a Test Run
- Open a test from your project's Tests page
- Click Run Test
- The platform performs pre-run validation:
  - Checks your subscription plan limits (endpoint count, concurrent runs)
  - Verifies worker availability in the selected regions
  - Confirms sufficient SIP accounts for all groups
  - Validates credit balance for the estimated usage
- If all checks pass, the run enters the queue
Tests do not auto-run
Creating a test only saves the configuration. You must explicitly click Run Test to start each execution. This lets you reuse configurations and run them on demand.
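The pre-run validation steps above can be sketched as a simple check list. This is an illustration, not the platform's actual implementation; the field and type names (`RunRequest`, `PlanLimits`, and their attributes) are assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class RunRequest:
    endpoint_count: int
    regions: list            # regions selected for this run
    sip_accounts_available: int
    estimated_credits: float

@dataclass
class PlanLimits:
    max_endpoints: int
    online_regions: set      # regions with at least one ONLINE worker
    credit_balance: float

def validate_run(req: RunRequest, plan: PlanLimits) -> list:
    """Return a list of validation errors; an empty list means the run may queue."""
    errors = []
    if req.endpoint_count > plan.max_endpoints:
        errors.append("endpoint count exceeds plan limit")
    missing = [r for r in req.regions if r not in plan.online_regions]
    if missing:
        errors.append("no workers available in: " + ", ".join(missing))
    if req.sip_accounts_available < req.endpoint_count:
        errors.append("insufficient SIP accounts for all endpoints")
    if req.estimated_credits > plan.credit_balance:
        errors.append("credit balance too low for estimated usage")
    return errors
```

If any check fails at this stage, the run never reaches the queue, which is why these errors surface immediately rather than as a failed run.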
Test Run Status Progression
Every test run moves through a defined sequence of statuses:
| Status | Description | Terminal |
|---|---|---|
| PENDING | The run has been submitted and is waiting for a queue slot | No |
| QUEUED | The run is in the execution queue. Workers are being allocated and endpoints are being assigned | No |
| RUNNING | Workers are actively executing SIP calls. Endpoints are registering, calling, and exchanging media. Metrics are flowing in real time | No |
| STOPPING | You clicked Stop Test. Workers are hanging up active calls and reporting final metrics | No |
| COMPLETED | All endpoints finished their calls and reported final metrics. The run succeeded | Yes |
| FAILED | A critical error stopped the run before all endpoints could complete | Yes |
| CANNOT_RUN_FOR_NOW | The run could not start due to insufficient worker capacity, billing limits, or resource constraints | Yes |
Normal progression:
PENDING --> QUEUED --> RUNNING --> COMPLETED
Manual stop:
PENDING --> QUEUED --> RUNNING --> STOPPING --> COMPLETED
Failure paths:
PENDING --> QUEUED --> RUNNING --> FAILED
PENDING --> CANNOT_RUN_FOR_NOW
What Happens During Execution
Understanding the internal execution flow helps you interpret real-time dashboards and troubleshoot issues.
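The status progression can be expressed as a small transition map, which is handy when scripting against run statuses. The status names come from the table above; the map itself is an illustrative sketch, not a documented platform API.

```python
# Allowed status transitions, derived from the progression diagrams above.
ALLOWED_TRANSITIONS = {
    "PENDING": {"QUEUED", "CANNOT_RUN_FOR_NOW"},
    "QUEUED": {"RUNNING"},
    "RUNNING": {"COMPLETED", "STOPPING", "FAILED"},
    "STOPPING": {"COMPLETED"},
    "COMPLETED": set(),           # terminal
    "FAILED": set(),              # terminal
    "CANNOT_RUN_FOR_NOW": set(),  # terminal
}

def is_valid_path(path):
    """Check that a sequence of statuses follows the allowed progression."""
    return all(b in ALLOWED_TRANSITIONS[a] for a, b in zip(path, path[1:]))
```

A status with an empty transition set is terminal: once a run reaches it, no further updates will arrive.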
Phase 1: Allocation (QUEUED)
- The platform selects workers based on each group's assignment mode (region auto-distribution or explicit worker selection)
- Endpoints are distributed across the selected workers
- SIP accounts are assigned to each endpoint from the group's registrar
- Workers receive their endpoint assignments and begin initialization
Phase 2: Registration
- Endpoints begin SIP registration with their assigned registrar, staggered over the configured buildup period
- Each endpoint sends a SIP REGISTER request and waits for a 200 OK response
- Successful registrations advance the endpoint to the REGISTERED phase
- Failed registrations set the REGISTRATION_FAILED outcome and are recorded with the SIP response code for diagnosis
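Staggering registrations over the buildup period means the registrar sees a steady trickle of REGISTER requests instead of a burst. A minimal sketch of one plausible even-spacing scheme (the platform's actual scheduling may differ):

```python
def stagger_offsets(endpoint_count, buildup_seconds):
    """Spread endpoint registration start times evenly across the buildup
    period so the registrar is not hit by a REGISTER storm."""
    if endpoint_count <= 1:
        return [0.0] * endpoint_count
    step = buildup_seconds / (endpoint_count - 1)
    return [round(i * step, 3) for i in range(endpoint_count)]
```

For example, 100 endpoints with a 60-second buildup register roughly 0.6 seconds apart instead of all at once.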
Phase 3: Call Setup
- Caller endpoints send SIP INVITE requests to their configured callee targets
- Callee endpoints (if configured to receive) wait for incoming INVITE requests
- SDP offer/answer negotiation establishes the media codec and transport parameters
- Once the call is answered (200 OK + ACK), endpoints advance to the INCALL phase
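The codec half of SDP offer/answer negotiation can be shown in miniature: the answerer picks the first codec in the offer that it also supports. This is a heavily simplified illustration (real SDP carries transport addresses, payload types, and many other parameters), not the platform's negotiation code.

```python
def negotiate_codec(offered, supported):
    """Pick the first codec in the offer that the answerer also supports.
    Returns None when there is no common codec, in which case the call
    setup would fail with a negotiation error."""
    for codec in offered:
        if codec in supported:
            return codec
    return None
```

A `None` result here corresponds to a call that never reaches the INCALL phase because offer/answer could not agree on media.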
Phase 4: Media Exchange (RUNNING)
- Endpoints exchange audio (and optionally video) media using RTP
- Quality metrics are collected every second: MOS, jitter, packet loss, RTT, bitrate, and 50+ additional metrics
- RTCP reports are exchanged between endpoints for quality feedback
- Metrics flow to the platform in real time and appear on the dashboard
Phase 5: Teardown
- When the configured duration expires, endpoints send SIP BYE requests to terminate calls
- Final metrics are reported to the platform
- Workers release their endpoint assignments and become available for other tests
- The run transitions to COMPLETED
Monitoring a Running Test
While a test is in the RUNNING state, the test run page provides real-time visibility:
Overview Tab
- Aggregate metrics -- average MOS, jitter, packet loss, and RTT across all endpoints
- Endpoint phase breakdown -- how many endpoints are in each phase (INITIALIZING, REGISTERED, CALLING, RINGING, NEGOTIATING, INCALL, DISCONNECTING, CLOSED)
- Progress indicator -- elapsed time relative to configured duration
- Group breakdown -- per-group status when running multi-group tests
Endpoint List
- Each endpoint is displayed with its current phase and outcome
- Color-coded phase indicators: green for INCALL, yellow for CALLING/RINGING/NEGOTIATING, red for error outcomes
- Click an endpoint to view its individual metrics and SIP trace in real time
Real-Time Metrics
During execution, metrics update every few seconds. Key metrics to watch:
| Metric | What to Watch For |
|---|---|
| MOS | Should be above 3.5 for acceptable quality. Below 3.0 indicates problems |
| Jitter | Below 30ms is good. Above 50ms may cause audible artifacts |
| Packet Loss | Below 1% is good. Above 3% degrades call quality significantly |
| RTT | Below 150ms for acceptable quality. Above 300ms causes conversational difficulty |
| Registration Rate | All endpoints should register within the buildup period |
| Call Success Rate | Should approach 100% unless you are stress-testing beyond capacity |
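The per-metric thresholds in the table translate directly into an automated check. A sketch that applies those rule-of-thumb values to a metrics snapshot (the function and its signature are illustrative, not a platform API):

```python
def assess_quality(mos, jitter_ms, loss_pct, rtt_ms):
    """Apply the rule-of-thumb thresholds from the table above.
    Returns a list of warnings; an empty list means the snapshot looks healthy."""
    warnings = []
    if mos < 3.0:
        warnings.append(f"MOS {mos} indicates problems (target above 3.5)")
    if jitter_ms > 50:
        warnings.append(f"jitter {jitter_ms} ms may cause audible artifacts")
    if loss_pct > 3:
        warnings.append(f"packet loss {loss_pct}% degrades quality significantly")
    if rtt_ms > 300:
        warnings.append(f"RTT {rtt_ms} ms causes conversational difficulty")
    return warnings
```

Checks like this are useful for gating CI pipelines on test-run quality instead of eyeballing the dashboard.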
Stopping a Test Early
You can stop a running test before its configured duration expires:
- Click Stop Test on the test run page
- The run transitions to STOPPING status
- Workers send BYE requests to terminate all active calls
- Final metrics are collected and reported
- The run transitions to COMPLETED with whatever data was collected up to that point
Stopped runs are marked as COMPLETED (not FAILED) because the stop was intentional. All metrics collected before the stop are preserved and available for analysis.
Stopping takes time
Stopping a large test with hundreds of endpoints may take 10-30 seconds as workers gracefully terminate all active calls and report final metrics. Do not refresh the page during this process.
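Because stopping is asynchronous, any automation should poll until the run reaches a terminal status rather than assume the stop is instantaneous. A sketch of such a polling loop; `get_status` is a caller-supplied function (for example, wrapping the platform's API, whose exact endpoint is not shown here):

```python
import time

TERMINAL = {"COMPLETED", "FAILED", "CANNOT_RUN_FOR_NOW"}

def wait_for_terminal(get_status, poll_seconds=2.0, timeout_seconds=60.0,
                      sleep=time.sleep, clock=time.monotonic):
    """Poll a run's status until it reaches a terminal state.

    `sleep` and `clock` are injectable so the loop can be tested without
    real delays.
    """
    deadline = clock() + timeout_seconds
    while clock() < deadline:
        status = get_status()
        if status in TERMINAL:
            return status
        sleep(poll_seconds)
    raise TimeoutError("run did not reach a terminal status in time")
```

For large tests, size `timeout_seconds` generously: graceful teardown of hundreds of calls takes tens of seconds, as noted above.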
Re-Running a Test
To run the same test configuration again:
- Open the test from your test list
- Click Run Test
- A new, independent test run is created with a new run ID
Previous runs and their complete metrics history are preserved. You can compare results across runs to identify trends, regressions, or improvements.
Each run is fully independent -- different workers may be allocated, and network conditions may vary. This is by design, as it provides a realistic picture of how your SIP infrastructure performs over time.
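Comparing aggregate metrics between a baseline run and a later run can be automated with a simple regression check. A sketch under stated assumptions: inputs are plain dicts of aggregate metrics (the metric names and the 5% threshold are illustrative choices, not platform defaults).

```python
def compare_runs(baseline, current, regression_pct=5.0):
    """Flag metrics that regressed by more than `regression_pct` relative
    to the baseline run. For MOS, a drop is a regression; for jitter,
    packet loss, and RTT, an increase is a regression."""
    regressions = {}
    for name, base in baseline.items():
        cur = current.get(name)
        if cur is None or base == 0:
            continue
        change = (cur - base) / base * 100
        worse = change < -regression_pct if name == "mos" else change > regression_pct
        if worse:
            regressions[name] = round(change, 1)
    return regressions
```

Running the same comparison across several runs helps separate one-time network blips from genuine regressions in your SIP infrastructure.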
Concurrent Test Limits
Your subscription plan defines how many tests can run simultaneously:
| Consideration | Details |
|---|---|
| Concurrent runs per project | Limited by your plan tier |
| Concurrent runs per organization | Limited by your plan tier |
| Total concurrent endpoints | The sum of endpoints across all running tests cannot exceed your plan limit |
If you attempt to start a run that would exceed these limits, the run enters CANNOT_RUN_FOR_NOW status. Wait for current runs to complete or upgrade your plan.
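The concurrency limits above amount to a pre-flight check you can mirror in your own tooling before submitting a run. A sketch (parameter names are assumptions for the example; consult your plan for the actual limits):

```python
def can_start_run(new_endpoints, running_endpoint_counts,
                  plan_endpoint_limit, running_run_count,
                  plan_concurrent_run_limit):
    """Return True if starting a run with `new_endpoints` endpoints would
    stay within both the concurrent-run and total-endpoint plan limits."""
    if running_run_count + 1 > plan_concurrent_run_limit:
        return False
    total = sum(running_endpoint_counts) + new_endpoints
    return total <= plan_endpoint_limit
```

Checking this before submission avoids runs that immediately land in CANNOT_RUN_FOR_NOW.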
Troubleshooting Failed Runs
When a test run fails, check these common causes:
| Issue | Status | What to Check |
|---|---|---|
| No workers available in region | CANNOT_RUN_FOR_NOW | Open the Workers page and verify workers are in ONLINE status in the selected region |
| All endpoints REGISTRATION_FAILED | FAILED | Verify registrar domain, port, transport, and SIP account credentials |
| Billing limit reached | CANNOT_RUN_FOR_NOW | Check your plan limits and credit balance in Organization Settings |
| Worker disconnected during test | FAILED | Check worker connectivity and Docker container health |
| Network unreachable | FAILED | Ensure workers can reach your SIP server (firewall rules, DNS resolution) |
| Insufficient SIP accounts | CANNOT_RUN_FOR_NOW | Add more SIP accounts to the registrar or reduce endpoint count |
For each failed endpoint, the SIP Message Trace in the results view shows the exact signaling exchange, including the SIP response code that caused the failure. See Common Test Failures for detailed diagnosis.
Best Practices
- Run during controlled conditions -- for comparable results, run tests at the same time of day and under similar network conditions
- Check workers first -- before running a large test, verify workers are ONLINE on the Workers page
- Use buildup for large tests -- a 30-60 second buildup prevents SIP registration storms
- Save test configurations -- create named test configurations for scenarios you run repeatedly
- Compare across runs -- run the same test multiple times and compare results to distinguish one-time issues from persistent problems
Next Steps
- Analyzing Results -- Deep dive into test run metrics and charts
- Groups -- Configure multi-group tests
- Common Test Failures -- Diagnose and fix failed test runs
- Endpoint Statuses -- Complete endpoint lifecycle reference