Injection

Injection profiles, differences between open and closed workload models

You define the injection profile of virtual users with the injectOpen and injectClosed methods (simply inject in Scala). These methods take as argument a sequence of injection steps that are processed sequentially. The code examples below repeat each snippet for the different Gatling language DSLs.

Open vs closed workload models

When it comes to load modeling, systems behave in two different ways:

  • Closed systems, where you control the number of concurrent users
  • Open systems, where you control the arrival rate of users

Make sure to use the proper load model that matches the load your live system experiences.

Closed systems are systems where the number of concurrent users is capped. At full capacity, a new user can effectively enter the system only once another one exits.

Typical systems that behave this way are:

  • call centers where all operators are busy
  • ticketing websites where users get placed into a queue when the system is at full capacity

In contrast, open systems have no control over the number of concurrent users: users keep arriving even when the application struggles to serve them. Most websites behave this way.

You can read more about open and closed models here.
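
This distinction maps directly onto the two injection methods. As a minimal, illustrative sketch in the Java DSL (assuming a scenario called scn and the usual static imports), the same steady ten-minute load can be described either way, depending on which quantity you want to control:

// open workload model: you control the arrival rate,
// here 20 new users per second for 10 minutes
scn.injectOpen(constantUsersPerSec(20).during(600));

// closed workload model: you control concurrency,
// here a constant 20 concurrent users for 10 minutes
scn.injectClosed(constantConcurrentUsers(20).during(600));

You would pass one or the other to setUp, as shown in the sections below.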

Open model

setUp(
  scn.injectOpen(
    nothingFor(4), // 1
    atOnceUsers(10), // 2
    rampUsers(10).during(5), // 3
    constantUsersPerSec(20).during(15), // 4
    constantUsersPerSec(20).during(15).randomized(), // 5
    rampUsersPerSec(10).to(20).during(10), // 6
    rampUsersPerSec(10).to(20).during(10).randomized(), // 7
    stressPeakUsers(1000).during(20) // 8
  ).protocols(httpProtocol)
);
setUp(
  scn.injectOpen(
    nothingFor(4), // 1
    atOnceUsers(10), // 2
    rampUsers(10).during(5), // 3
    constantUsersPerSec(20.0).during(15), // 4
    constantUsersPerSec(20.0).during(15).randomized(), // 5
    rampUsersPerSec(10.0).to(20.0).during(10), // 6
    rampUsersPerSec(10.0).to(20.0).during(10).randomized(), // 7
    stressPeakUsers(1000).during(20) // 8
  ).protocols(httpProtocol)
)
setUp(
  scn.inject(
    nothingFor(4), // 1
    atOnceUsers(10), // 2
    rampUsers(10).during(5), // 3
    constantUsersPerSec(20).during(15), // 4
    constantUsersPerSec(20).during(15).randomized, // 5
    rampUsersPerSec(10).to(20).during(10.minutes), // 6
    rampUsersPerSec(10).to(20).during(10.minutes).randomized, // 7
    stressPeakUsers(1000).during(20) // 8
  ).protocols(httpProtocol)
)

The building blocks for open workload model injection profiles are:

  1. nothingFor(duration): Pause for a given duration.
  2. atOnceUsers(nbUsers): Injects a given number of users at once.
  3. rampUsers(nbUsers).during(duration): Injects a given number of users distributed evenly over the given duration.
  4. constantUsersPerSec(rate).during(duration): Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
  5. constantUsersPerSec(rate).during(duration).randomized: Injects users at a constant rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
  6. rampUsersPerSec(rate1).to(rate2).during(duration): Injects users from starting rate to target rate, defined in users per second, during a given duration. Users will be injected at regular intervals.
  7. rampUsersPerSec(rate1).to(rate2).during(duration).randomized: Injects users from starting rate to target rate, defined in users per second, during a given duration. Users will be injected at randomized intervals.
  8. stressPeakUsers(nbUsers).during(duration): Injects a given number of users following a smooth approximation of the Heaviside step function stretched to the given duration.
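
As a rough sanity check, you can estimate how many virtual users the open workload example above starts: atOnceUsers(10) and rampUsers(10).during(5) contribute 10 users each, each constantUsersPerSec(20).during(15) step contributes 20 × 15 = 300 users, each rampUsersPerSec(10).to(20).during(10) step averages 15 users/s over 10 seconds, i.e. about 150 users, and stressPeakUsers(1000).during(20) contributes 1000 users: roughly 1,920 users injected over 79 seconds in total.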

Closed model

setUp(
  scn.injectClosed(
    constantConcurrentUsers(10).during(10), // 1
    rampConcurrentUsers(10).to(20).during(10) // 2
  )
);
setUp(
  scn.injectClosed(
    constantConcurrentUsers(10).during(10), // 1
    rampConcurrentUsers(10).to(20).during(10) // 2
  )
)
setUp(
  scn.inject(
    constantConcurrentUsers(10).during(10), // 1
    rampConcurrentUsers(10).to(20).during(10) // 2
  )
)

The building blocks for closed workload model injection profiles are:

  1. constantConcurrentUsers(nbUsers).during(duration): Injects users so that the number of concurrent users in the system stays constant during the given duration.
  2. rampConcurrentUsers(fromNbUsers).to(toNbUsers).during(duration): Injects users so that the number of concurrent users in the system ramps linearly from one value to another during the given duration.
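
Note that in a closed model the total number of injected users is not fixed by the profile alone: Gatling starts a new user whenever one terminates, so the total depends on how long each virtual user's scenario takes. For example, with constantConcurrentUsers(10).during(60), if each user completes its scenario in about 2 seconds, roughly 10 × 60 / 2 = 300 users will be started over the minute.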

Meta DSL

The meta DSL makes some common injection profiles easier to write. If you want to chain levels and ramps to reach the limit of your application (a test sometimes called capacity load testing), you can build the steps manually with the regular DSL, looping with map and flatMap over a collection, as sketched below, but the meta DSL offers a more concise alternative.
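
For illustration, the manual approach could look roughly like the following sketch in the Java DSL. It is only a sketch: it assumes the usual CoreDsl static imports, java.util.List/ArrayList imports, and the scn and httpProtocol definitions used in the other examples on this page.

// build levels of 10, 15, 20, 25 and 30 users/s, each lasting 10 seconds,
// separated by 10-second linear ramps
List<OpenInjectionStep> steps = new ArrayList<>();
for (int i = 0; i < 5; i++) {
  double level = 10 + 5 * i;
  if (i > 0) {
    // ramp from the previous level up to the new one
    steps.add(rampUsersPerSec(level - 5).to(level).during(10));
  }
  // hold the level
  steps.add(constantUsersPerSec(level).during(10));
}

setUp(
  scn.injectOpen(steps.toArray(new OpenInjectionStep[0])).protocols(httpProtocol)
);

The meta DSL below expresses the same shape in a single expression.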

incrementUsersPerSec

setUp(
  // generate an open workload injection profile
  // with levels of 10, 15, 20, 25 and 30 arriving users per second
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectOpen(
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Double
  )
);
setUp(
  // generate an open workload injection profile
  // with levels of 10, 15, 20, 25 and 30 arriving users per second
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectOpen(
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10.0) // Double
  )
)
setUp(
  // generate an open workload injection profile
  // with levels of 10, 15, 20, 25 and 30 arriving users per second
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.inject(
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Double
  )
)

incrementConcurrentUsers

setUp(
  // generate a closed workload injection profile
  // with levels of 10, 15, 20, 25 and 30 concurrent users
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectClosed(
    incrementConcurrentUsers(5)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Int
  )
);
setUp(
  // generate a closed workload injection profile
  // with levels of 10, 15, 20, 25 and 30 concurrent users
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.injectClosed(
    incrementConcurrentUsers(5)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Int
  )
)
setUp(
  // generate a closed workload injection profile
  // with levels of 10, 15, 20, 25 and 30 concurrent users
  // each level lasting 10 seconds
  // separated by linear ramps lasting 10 seconds
  scn.inject(
    incrementConcurrentUsers(5)
      .times(5)
      .eachLevelLasting(10)
      .separatedByRampsLasting(10)
      .startingFrom(10) // Int
  )
)

incrementUsersPerSec is for open workload models (arrival rate in users per second), while incrementConcurrentUsers is for closed workload models (number of concurrent users).

separatedByRampsLasting and startingFrom are both optional. If you don’t specify a ramp, the test jumps from one level to the next as soon as the current level finishes. If you don’t specify the number of starting users, the test starts at 0 concurrent users or 0 users per second and moves on to the first increment right away.
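
For example, dropping both optional parts gives a profile that starts from 0 users per second and jumps straight from one level to the next (same numbers as above, shown as a sketch in the Java DSL):

setUp(
  scn.injectOpen(
    // same 5 increments of 5 users/s, each level lasting 10 seconds,
    // but without ramps in between and starting from 0 users per second
    incrementUsersPerSec(5.0)
      .times(5)
      .eachLevelLasting(10)
  )
);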

Chaining injection steps

For a given scenario, you can pass a sequence of injection steps. They must all belong to the same workload model: you can’t mix open and closed injection steps in a single injection profile. The next injection step starts once all the virtual users defined by the current one have started.

// open model
setUp(
  scn.injectOpen(
    // first ramp the arrival rate to 100 users/s in 1 minute
    rampUsersPerSec(0).to(100).during(Duration.ofMinutes(1)),
    // then keep a steady rate of 100 users/s during 10 minutes
    constantUsersPerSec(100).during(Duration.ofMinutes(10))
  )
);

// closed model
setUp(
  scn.injectClosed(
    // first ramp the number of concurrent users to 100 users in 1 minute
    rampConcurrentUsers(0).to(100).during(Duration.ofMinutes(1)),
    // then keep a steady number of 100 concurrent users during 10 minutes
    constantConcurrentUsers(100).during(Duration.ofMinutes(10))
  )
);
// open model
setUp(
  scn.injectOpen(
    // first ramp the arrival rate to 100 users/s in 1 minute
    rampUsersPerSec(0.0).to(100.0).during(60),
    // then keep a steady rate of 100 users/s during 10 minutes
    constantUsersPerSec(100.0).during(600)
  )
);

// closed model
setUp(
  scn.injectClosed(
    // first ramp the number of concurrent users to 100 users in 1 minute
    rampConcurrentUsers(0).to(100).during(60),
    // then keep a steady number of 100 concurrent users during 10 minutes
    constantConcurrentUsers(100).during(600)
  )
);
// open model
setUp(
  scn.injectOpen(
    // first ramp the arrival rate to 100 users/s in 1 minute
    rampUsersPerSec(0.0).to(100.0).during(Duration.ofMinutes(1)),
    // then keep a steady rate of 100 users/s during 10 minutes
    constantUsersPerSec(100.0).during(Duration.ofMinutes(10))
  )
)

// closed model
setUp(
  scn.injectClosed(
    // first ramp the number of concurrent users to 100 users in 1 minute
    rampConcurrentUsers(0).to(100).during(Duration.ofMinutes(1)),
    // then keep a steady number of 100 concurrent users during 10 minutes
    constantConcurrentUsers(100).during(Duration.ofMinutes(10))
  )
)
// open model
setUp(
  scn.inject(
    // first ramp the arrival rate to 100 users/s in 1 minute
    rampUsersPerSec(0.0).to(100.0).during(1.minute),
    // then keep a steady rate of 100 users/s during 10 minutes
    constantUsersPerSec(100.0).during(10.minutes)
  )
)

// closed model
setUp(
  scn.inject(
    // first ramp the number of concurrent users to 100 users in 1 minute
    rampConcurrentUsers(0).to(100).during(1.minute),
    // then keep a steady number of 100 concurrent users during 10 minutes
    constantConcurrentUsers(100).during(10.minutes)
  )
)

Concurrent scenarios

You can configure multiple scenarios in the same setUp block to start at the same time and execute concurrently.

setUp(
  scenario1.injectOpen(injectionProfile1),
  scenario2.injectOpen(injectionProfile2)
);
setUp(
  scenario1.injectOpen(injectionProfile1),
  scenario2.injectOpen(injectionProfile2)
)
setUp(
  scenario1.inject(injectionProfile1),
  scenario2.inject(injectionProfile2)
)

Sequential scenarios

It’s also possible to chain scenarios with andThen, so that child scenarios start once all the users in the parent scenario have terminated.

setUp(
  parent.injectClosed(injectionProfile)
    // child1 and child2 will start at the same time, when the last parent user terminates
    .andThen(
      child1.injectClosed(injectionProfile)
        // grandChild will start when the last child1 user terminates
        .andThen(grandChild.injectClosed(injectionProfile)),
      child2.injectClosed(injectionProfile)
    ).andThen(
      // child3 will start when the last grandChild and child2 users terminate
      child3.injectClosed(injectionProfile)
    )
);
setUp(
  parent.injectClosed(injectionProfile)
    // child1 and child2 will start at the same time, when the last parent user terminates
    .andThen(
      child1.injectClosed(injectionProfile)
        // grandChild will start when the last child1 user terminates
        .andThen(grandChild.injectClosed(injectionProfile)),
      child2.injectClosed(injectionProfile)
    ).andThen(
      // child3 will start when the last grandChild and child2 users terminate
      child3.injectClosed(injectionProfile)
    )
)
setUp(
  parent.inject(injectionProfile)
    // child1 and child2 will start at the same time, when the last parent user terminates
    .andThen(
      child1.inject(injectionProfile)
        // grandChild will start when the last child1 user terminates
        .andThen(grandChild.inject(injectionProfile)),
      child2.inject(injectionProfile)
    ).andThen(
      // child3 will start when the last grandChild and child2 users terminate
      child3.inject(injectionProfile)
    )
)

When chaining andThen calls, Gatling makes the new children start only once all the users of the previous children, descendants included, have terminated.

Disabling Gatling Enterprise load sharding

By default, Gatling Enterprise will distribute your injection profile amongst all load generators when running a distributed test from multiple nodes.

This might not be the desired behavior, typically when a first scenario with a single user fetches an auth token to be used by the actual scenario: only one node would run this user, leaving the other nodes without an initialized token.

You can use noShard to disable load sharding. In this case, all the nodes will use the injection and throttling profiles as defined in the Simulation.

setUp(
  // parent load won't be sharded
  parent.injectOpen(atOnceUsers(1)).noShard()
    .andThen(
      // child load will be sharded
      child1.injectClosed(injectionProfile)
    )
);
setUp(
  // parent load won't be sharded
  parent.injectOpen(atOnceUsers(1)).noShard()
    .andThen(
      // child load will be sharded
      child1.injectClosed(injectionProfile)
    )
)
setUp(
  // parent load won't be sharded
  parent.inject(atOnceUsers(1)).noShard
    .andThen(
      // child load will be sharded
      child1.inject(injectionProfile)
    )
)
