Optional configurations for simulations

Optional configurations for simulations in Gatling Enterprise Edition.

In addition to the standard configuration options, Gatling Enterprise Edition provides several optional configurations for simulations. These options allow you to customize the behavior of your simulations to better suit your testing needs.

Load generator parameters

You can specify load generator parameters in your simulation configuration. This is useful for scenarios where you need to customize the behavior of the load generator, such as adjusting the number of virtual users or the ramp-up time.

This step allows you to define the Java system properties or JS parameters and environment variables used when running this particular simulation. Properties/variables entered here are in addition to the default parameters, unless you ignore the defaults.

If the same key appears in both your simulation configuration and the default load generator parameters table, the simulation value overrides the default.
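Inside a simulation class, these parameters are typically read back with the standard Java system-property API. A minimal sketch, where the property names ("users", "rampSeconds") and defaults are illustrative, not Gatling built-ins:

```java
// Hypothetical example: reading load generator parameters passed as
// Java system properties, with fallbacks when a property is not set.
public class LoadParams {

    // Returns the integer value of a system property, or the fallback
    // when the property is absent.
    static int intProperty(String key, int fallback) {
        String value = System.getProperty(key);
        return value == null ? fallback : Integer.parseInt(value);
    }

    public static void main(String[] args) {
        int users = intProperty("users", 10);        // e.g. -Dusers=500
        int ramp  = intProperty("rampSeconds", 30);  // e.g. -DrampSeconds=60
        System.out.println(users + " users over " + ramp + "s");
    }
}
```

Properties defined in this step would then be picked up the same way as properties passed on the command line.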

Configure Service Level Objectives (SLOs)

A Service Level Objective (SLO) defines a target performance threshold for your test. Gatling Enterprise Edition evaluates each SLO over the duration of the run and reports the percentage of time the condition was met.

You can add multiple SLOs to a test. Each SLO is defined by:

  • Metric: the global statistic to measure:
    • Error ratio: the percentage of failed requests across all scenarios.
    • Response time percentile: the Nth percentile of response times, in milliseconds. Available percentiles: p50, p95, p99, p99.9, p99.99, p99.999, p99.9999.
  • Operator: the comparison to apply: less than (<) or less than or equal (≤).
  • Threshold: the target value — in milliseconds for response time, or as a percentage for error ratio.

The SLO result is expressed as a percentage: the proportion of seconds during the run where the condition was met. Ramp-up and ramp-down periods are excluded from this calculation (see Time window).

SLO results appear in the run summary as circular gauges, color-coded by compliance:

  • Green: the condition was met 99% of the time or more.
  • Orange: the condition was met between 90% and 99% of the time.
  • Red: the condition was met less than 90% of the time.
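The color bands above can be summarized as a simple mapping from the compliance percentage to a gauge color. A sketch of that rule (not Gatling's actual implementation):

```java
public class SloGauge {
    enum Color { GREEN, ORANGE, RED }

    // Maps the share of seconds where the SLO condition held (as a
    // percentage) to the gauge color described above.
    static Color classify(double compliancePercent) {
        if (compliancePercent >= 99.0) return Color.GREEN;
        if (compliancePercent >= 90.0) return Color.ORANGE;
        return Color.RED;
    }
}
```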

Time window

You can configure ramp-up and ramp-down time windows to be excluded from SLOs and assertions computation. This is typically useful when you know that at the beginning of your test run you expect higher response times than when your system is warm (JIT compiler has kicked in, autoscaling has done its work, caches are filled, etc.) and don’t want the warm-up time to cause your SLOs or assertions to fail.

  • Ramp Up: the number of seconds you want to exclude at the beginning of the run.
  • Ramp Down: the number of seconds you want to exclude at the end of the run.
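Putting the two settings together: the SLO compliance percentage is computed over per-second samples after dropping the ramp-up seconds at the start and the ramp-down seconds at the end. A sketch of this behavior under those assumptions (method and parameter names are illustrative):

```java
import java.util.List;

public class SloWindow {
    // perSecond holds one sample per second of the run: true if the SLO
    // condition held during that second. Ramp-up and ramp-down seconds
    // are excluded before computing compliance.
    static double compliance(List<Boolean> perSecond, int rampUp, int rampDown) {
        int from = rampUp;
        int to = perSecond.size() - rampDown;
        if (to <= from) return 100.0; // run shorter than the excluded ramps
        long met = perSecond.subList(from, to).stream().filter(b -> b).count();
        return 100.0 * met / (to - from);
    }
}
```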

Stop criteria

In this step, you can configure specific stop criteria to end the run earlier if certain thresholds are exceeded. This is particularly useful for terminating test runs once key performance metrics exceed acceptable limits.

Each stop criterion must include:

  • Metric: The metric for which the stop criterion is evaluated (Mean CPU, Global Error Ratio, or Global Response Time).
  • Threshold: The value that, when exceeded, triggers the stop condition (e.g., over 30% CPU, or 300 ms at the 99.9th percentile).
  • Timeframe (in seconds): The trailing period during which the metric must continuously exceed the threshold to trigger the stop (e.g., the last 60 seconds).

You can base stop criteria on the following metrics:

  • Mean CPU Usage: The average CPU usage of the load generators, measured as a percentage.
  • Global Error Ratio: The percentage of failed requests across all test scenarios.
  • Global Response Time: The response time of all requests, measured at a specific percentile, in milliseconds.
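The timeframe rule means a single spike does not stop the run: the metric has to stay above the threshold for the whole trailing window. A sketch of that evaluation over per-second metric samples (names and the sampling granularity are assumptions, not Gatling's internals):

```java
import java.util.List;

public class StopCriterion {
    // samples holds one metric value per second, oldest first. The run is
    // stopped only if every sample in the trailing timeframe exceeds the
    // threshold.
    static boolean shouldStop(List<Double> samples, double threshold, int timeframeSeconds) {
        if (samples.size() < timeframeSeconds) return false; // not enough history yet
        List<Double> window = samples.subList(samples.size() - timeframeSeconds, samples.size());
        return window.stream().allMatch(v -> v > threshold);
    }
}
```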

Acceptance criteria (no-code simulations only)

Gatling Enterprise Edition allows you to define acceptance criteria for your simulations. This includes specifying thresholds for key performance indicators (KPIs) such as response time, throughput, and error rate. By setting clear acceptance criteria, you can ensure that your application meets the desired performance standards before it is deployed. For test-as-code simulations, these criteria can be defined in the simulation code itself, while for no-code simulations, they can be configured through the user interface.
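For test-as-code simulations, such thresholds are typically expressed with Gatling's assertions DSL in the simulation itself. A minimal Java fragment, where the scenario `scn`, the injection profile, and the threshold values are all illustrative:

```java
// Inside a Gatling Simulation class (Java DSL); scenario, user counts,
// and thresholds are example values only.
setUp(scn.injectOpen(rampUsers(100).during(60)))
    .assertions(
        global().responseTime().percentile(95.0).lt(800), // p95 under 800 ms
        global().failedRequests().percent().lt(1.0)       // error rate under 1%
    );
```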
