Analyze your reports with indicators: active users, requests and responses over time, and response time distributions.


The Global menu points to consolidated statistics.

The Details menu points to per-request-type statistics.

Overall Simulation charts

Most of these charts are available both in the overall simulation report and in the per-request/group reports.

Response time ranges


This chart shows how response times are distributed among standard ranges. The right panel shows the number of OK/KO requests.
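As a sketch of how such a chart is built, the snippet below buckets response times into ranges. The 800 ms and 1200 ms bounds are assumed default thresholds (they are configurable), and the sample data is invented for illustration.

```python
from collections import Counter

def bucket(response_time_ms, ok, lower=800, higher=1200):
    """Assign a response to a Gatling-style range; bounds are assumed defaults."""
    if not ok:
        return "failed"
    if response_time_ms < lower:
        return f"t < {lower} ms"
    if response_time_ms <= higher:
        return f"{lower} ms <= t <= {higher} ms"
    return f"t > {higher} ms"

# Hypothetical (response_time_ms, ok) samples
samples = [(120, True), (950, True), (2400, True), (300, False)]
counts = Counter(bucket(t, ok) for t, ok in samples)
```

Each range then becomes one bar of the chart, with the failed bucket shown separately.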

Statistics


The top panel shows standard statistics such as min, max, average, standard deviation, and percentiles, both globally and per request.

The bottom panel shows details of the failed requests.

Active users over time


This chart displays the active users during the simulation: total and per scenario.

“Active users” is neither “concurrent users” nor “users arrival rate”. It is a mixed metric that serves both open and closed workload models and represents the users who were active on the system under load at a given second.

It’s computed as:

  (number of alive users at previous second)
+ (number of users that were started during this second)
- (number of users that were terminated during previous second)
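The formula above can be sketched as follows, where `started[i]` and `terminated[i]` are hypothetical per-second counts of users started and terminated during second `i`:

```python
def active_users(started, terminated):
    """Active users per second:
    active[i] = active[i-1] + started[i] - terminated[i-1]."""
    active = []
    previous = 0  # active users at the previous second
    for i in range(len(started)):
        terminated_prev = terminated[i - 1] if i > 0 else 0
        previous = previous + started[i] - terminated_prev
        active.append(previous)
    return active
```

Note that users terminated during a given second still count as active for that second; they are only subtracted at the next one.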

Response time distribution


This chart displays the distribution of the response times.

Response time percentiles over time


This chart displays a variety of response time percentiles over time, but only for successful requests. As failed requests can end prematurely or be caused by timeouts, they would have a drastic effect on the percentiles’ computation.
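To illustrate why failures are excluded, here is a sketch using the nearest-rank percentile definition (one of several common definitions; the data and names are invented). A single timed-out failure can dominate the high percentiles:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: smallest value with at least p% of data at or below it."""
    ordered = sorted(values)
    k = math.ceil(p / 100 * len(ordered)) - 1
    return ordered[max(k, 0)]

# Hypothetical (response_time_ms, ok) samples in one time window;
# the 5000 ms sample is a failed, timed-out request.
window = [(100, True), (200, True), (5000, False), (300, True), (400, True)]
ok_times = [t for t, ok in window if ok]
p95 = percentile(ok_times, 95)
```

With the failure included, the 95th percentile would jump from 400 ms to the 5000 ms timeout, saying more about the timeout setting than about the system under test.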

Requests per second over time


This chart displays the number of requests sent per second over time.
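A minimal sketch of how such a series can be derived from raw send timestamps (epoch milliseconds, a hypothetical log format):

```python
from collections import Counter

def rps(send_times_ms):
    """Count requests sent in each whole second between the first and last sample."""
    seconds = [t // 1000 for t in send_times_ms]
    counts = Counter(seconds)
    start, end = min(seconds), max(seconds)
    return [counts.get(s, 0) for s in range(start, end + 1)]

series = rps([1000, 1100, 1900, 2500, 4200])
# seconds 1, 1, 1, 2 and 4 give [3, 1, 0, 1]
```

Seconds with no traffic appear as zeros, so gaps in load remain visible on the chart.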

Responses per second over time


This chart displays the number of responses received per second over time: total, successes and failures.

Request/group specific charts

Those charts are only available when consulting the details for a request/group.

Response Time against Global RPS


This chart shows how the response time for the given request is distributed, depending on the overall number of requests being performed at the same time.
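A sketch of how such points can be paired up, with invented data and names: each response time of the given request is matched with the global request rate during the second it was sent.

```python
from collections import Counter

def response_time_vs_rps(request_samples, global_send_times_ms):
    """request_samples: (send_time_ms, response_time_ms) for one request type.
    Returns (global_rps_at_that_second, response_time_ms) scatter points."""
    rps = Counter(t // 1000 for t in global_send_times_ms)
    return [(rps[t // 1000], rt) for t, rt in request_samples]
```

Plotting these pairs shows whether the request slows down as the overall load rises.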
