Feeders

Inject data into your virtual users from an external source, e.g. a CSV file.

Feeder is a type alias for Iterator<Map<String, T>>, meaning that the component created by the feed method will poll Map<String, T> records and inject their content.

It’s very simple to build a custom one. For example, here’s how one could build a random email generator:

Java:
// import java.util.*;
// import java.util.function.Supplier;
// import java.util.stream.Stream;
// import org.apache.commons.lang3.RandomStringUtils;
Iterator<Map<String, Object>> feeder =
  Stream.generate((Supplier<Map<String, Object>>) () -> {
      String email = RandomStringUtils.randomAlphanumeric(20) + "@foo.com";
      return Map.of("email", email);
    }
  ).iterator();

JavaScript:
The `feeder(Iterator)` method is currently not supported by Gatling JS.

Kotlin:
// import org.apache.commons.lang3.RandomStringUtils
val feeder = generateSequence {
  val email = RandomStringUtils.randomAlphanumeric(20) + "@foo.com"
  mapOf("email" to email)
}.iterator()

Scala:
import scala.util.Random
val feeder = Iterator.continually {
  Map("email" -> s"${Random.alphanumeric.take(20).mkString}@foo.com")
}

The structure DSL provides a feed method that can be called at the same place as exec.

Java:
feed(feeder);

JavaScript:
feed(feeder);

Kotlin:
feed(feeder)

Scala:
feed(feeder)

This defines a workflow step where every virtual user feeds on the same Feeder.

Every time a virtual user reaches this step, it will pop a record out of the Feeder, which will be injected into the user’s Session, resulting in a new Session instance.
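
For instance, a minimal sketch in Java of using a fed attribute later in the chain, either through the Gatling Expression Language or programmatically from the Session (the scenario name and the /signup endpoint are hypothetical):

// import io.gatling.javaapi.core.ScenarioBuilder;
// import static io.gatling.javaapi.core.CoreDsl.*;
// import static io.gatling.javaapi.http.HttpDsl.*;
ScenarioBuilder scn = scenario("Signup")
  .feed(feeder)
  .exec(
    http("signup")
      .post("/signup")
      // Gatling EL resolves the "email" attribute injected by the feeder
      .formParam("email", "#{email}")
  )
  .exec(session -> {
    // the same attribute can also be read directly from the Session
    System.out.println("fed email: " + session.getString("email"));
    return session;
  });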

It’s also possible to feed multiple records at once. In this case, the values will be a Java List or Scala Seq containing all the values for the same key.

Java:
// feed 2 records at once
feed(feeder, 2);
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}");
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, session -> session.getInt("numberOfRecords"));

JavaScript:
// feed 2 records at once
feed(feeder, 2);
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}");
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, (session) => session.get("numberOfRecords"));

Kotlin:
// feed 2 records at once
feed(feeder, 2)
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}")
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder) { session -> session.getInt("numberOfRecords") }

Scala:
// feed 2 records at once
feed(feeder, 2)
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}")
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, session => session("numberOfRecords").as[Int])
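
After such a step, the attribute holds several values. A minimal Java sketch, assuming the email feeder defined earlier, of reading them back (the printing step is purely illustrative):

// import java.util.List;
feed(feeder, 2)
  .exec(session -> {
    // with a multi-record feed, "email" is now a List containing both values
    List<Object> emails = session.getList("email");
    System.out.println("first email: " + emails.get(0));
    return session;
  });
// in Gatling EL, individual elements can also be addressed by index, e.g. "#{email(0)}"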

Strategies

Gatling provides multiple strategies for the built-in feeders:

Java:
// default behavior: use an Iterator on the underlying sequence
csv("foo").queue();
// randomly pick an entry in the sequence
csv("foo").random();
// shuffle entries, then behave like queue
csv("foo").shuffle();
// go back to the top of the sequence once the end is reached
csv("foo").circular();

JavaScript:
// default behavior: use an Iterator on the underlying sequence
csv("foo").queue();
// randomly pick an entry in the sequence
csv("foo").random();
// shuffle entries, then behave like queue
csv("foo").shuffle();
// go back to the top of the sequence once the end is reached
csv("foo").circular();

Kotlin:
// default behavior: use an Iterator on the underlying sequence
csv("foo").queue()
// randomly pick an entry in the sequence
csv("foo").random()
// shuffle entries, then behave like queue
csv("foo").shuffle()
// go back to the top of the sequence once the end is reached
csv("foo").circular()

Scala:
// default behavior: use an Iterator on the underlying sequence
csv("foo").queue()
// randomly pick an entry in the sequence
csv("foo").random()
// shuffle entries, then behave like queue
csv("foo").shuffle()
// go back to the top of the sequence once the end is reached
csv("foo").circular()
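
Note that with queue and shuffle, the simulation stops with an error if the feeder runs out of records, so long-running scenarios typically use circular or random. A minimal Java sketch (the scenario name, file and "term" column are hypothetical):

ScenarioBuilder scn = scenario("Browse")
  // circular wraps around to the top of the file, so it never runs out
  .feed(csv("search_terms.csv").circular())
  .exec(http("search").get("/search?q=#{term}"));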

Using arrays and lists

Gatling provides some converters to use in-memory data structures as Feeders.

Java:
// using an array
arrayFeeder(new Map[] {
  Map.of("foo", "foo1"),
  Map.of("foo", "foo2"),
  Map.of("foo", "foo3")
}).random();

// using a List
listFeeder(List.of(
  Map.of("foo", "foo1"),
  Map.of("foo", "foo2"),
  Map.of("foo", "foo3")
)).random();

JavaScript:
// using an array
arrayFeeder([
  { "foo": "foo1" },
  { "foo": "foo2" },
  { "foo": "foo3" }
]).random();

Kotlin:
// using an array
arrayFeeder(arrayOf(
  mapOf("foo" to "foo1"),
  mapOf("foo" to "foo2"),
  mapOf("foo" to "foo3")
)).random()

// using a List
listFeeder(listOf(
  mapOf("foo" to "foo1"),
  mapOf("foo" to "foo2"),
  mapOf("foo" to "foo3")
)).random()

Scala:
// using an array (implicit conversion)
Array(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
).random

// using an IndexedSeq (implicit conversion)
IndexedSeq(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
).random

File based feeders

Gatling provides various file based feeders.

When using the bundle distribution, files must be in the user-files/resources directory. This location can be overridden, see configuration.

When using a build tool such as Maven, Gradle or sbt, files must be placed in src/main/resources or src/test/resources.

In order to locate the file, Gatling tries the following strategies in sequence:

  1. as a classpath resource from the classpath root, e.g. data/file.csv to target the your_project/src/main/resources/data/file.csv file. This is the recommended strategy.
  2. as an absolute filesystem path to a file. Use this strategy if you want your feeder files to be deployed separately. Both strategies are shown in the sketch below.
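
A minimal sketch of both lookups (the file names and paths are hypothetical):

// 1. classpath resource, e.g. your_project/src/main/resources/data/file.csv
csv("data/file.csv");

// 2. absolute filesystem path, for feeder files deployed separately from the project
csv("/opt/gatling/data/file.csv");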

CSV feeders

Gatling provides several built-ins for reading character-separated values files.

Our parser honors the RFC 4180 specification.

The only difference is that header fields get trimmed of wrapping whitespace.

Java:
// use a comma separator
csv("foo.csv");
// use a tabulation separator
tsv("foo.tsv");
// use a semicolon separator
ssv("foo.ssv");
// use a custom separator
separatedValues("foo.txt", '#');

JavaScript:
// use a comma separator
csv("foo.csv");
// use a tabulation separator
tsv("foo.tsv");
// use a semicolon separator
ssv("foo.ssv");
// use a custom separator
separatedValues("foo.txt", '#');

Kotlin:
// use a comma separator
csv("foo.csv")
// use a tabulation separator
tsv("foo.tsv")
// use a semicolon separator
ssv("foo.ssv")
// use a custom separator
separatedValues("foo.txt", '#')

Scala:
// use a comma separator
csv("foo.csv")
// use a tabulation separator
tsv("foo.tsv")
// use a semicolon separator
ssv("foo.ssv")
// use a custom separator
separatedValues("foo.txt", '#')
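
For instance, a small hypothetical file for the comma-separated csv built-in, where one value contains a comma and must therefore be quoted per RFC 4180, and where the second header has wrapping whitespace:

username,  greeting
john,"Hello, John"
jane,"Hello, Jane"

This yields two records; the second header is read as greeting (wrapping whitespace trimmed) and each quoted value keeps its embedded comma.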

Loading mode

CSV file feeders provide several options for how data should be loaded in memory.

eager loads the whole data in memory before the Simulation starts, saving disk access at runtime. This mode works best with reasonably small files that can be parsed quickly without delaying simulation start time and that easily sit in memory. This behavior was the default prior to Gatling 3.1 and you can still force it.

Java:
csv("foo.csv").eager().random();

JavaScript:
csv("foo.csv").eager().random();

Kotlin:
csv("foo.csv").eager().random()

Scala:
csv("foo.csv").eager

batch works better with large files whose parsing would delay simulation start time and eat a lot of heap space. Data is then read in chunks.

Java:
// use default buffer size (2000 lines)
csv("foo.csv").batch().random();
// tune internal buffer size
csv("foo.csv").batch(200).random();

JavaScript:
// use default buffer size (2000 lines)
csv("foo.csv").batch().random();
// tune internal buffer size
csv("foo.csv").batch(200).random();

Kotlin:
// use default buffer size (2000 lines)
csv("foo.csv").batch().random()
// tune internal buffer size
csv("foo.csv").batch(200).random()

Scala:
// use default buffer size (2000 lines)
csv("foo.csv").batch
// tune internal buffer size
csv("foo.csv").batch(200)

Default behavior is an adaptive policy based on (unzipped, sharded) file size, see gatling.core.feederAdaptiveLoadModeThreshold in the config file. Gatling will use eager below the threshold and batch above.
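
If needed, this threshold can be overridden in gatling.conf; a sketch, where the value shown is purely illustrative (check gatling-defaults.conf for the actual default and unit):

gatling {
  core {
    feederAdaptiveLoadModeThreshold = 50
  }
}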

Zipped files

If your files are very large, you can provide them zipped and ask Gatling to unzip them on the fly:

Java:
csv("foo.csv.zip").unzip();

JavaScript:
csv("foo.csv.zip").unzip();

Kotlin:
csv("foo.csv.zip").unzip()

Scala:
csv("foo.csv.zip").unzip

Supported formats are gzip and zip (but the archive must contain one single file).

Distributed files (Gatling Enterprise only)

If you want to run distributed with Gatling Enterprise and you want to distribute data so that users don’t use the same data when they run on different cluster nodes, you can use the shard option. For example, if you have a file with 30,000 records deployed on 3 nodes, each node will use a 10,000-record slice.

Java:
csv("foo.csv").shard();

JavaScript:
csv("foo.csv").shard();

Kotlin:
csv("foo.csv").shard()

Scala:
csv("foo.csv").shard

JSON feeders

Some users might want to use data in JSON format instead of CSV:

Java:
jsonFile("foo.json");
jsonUrl("http://me.com/foo.json");

JavaScript:
jsonFile("foo.json");
jsonUrl("http://me.com/foo.json");

Kotlin:
jsonFile("foo.json")
jsonUrl("http://me.com/foo.json")

Scala:
jsonFile("foo.json")
jsonUrl("http://me.com/foo.json")

For example, the following JSON:

[
  {
    "id": 19434,
    "foo": 1
  },
  {
    "id": 19435,
    "foo": 2
  }
]

will be turned into:

Map("id" -> 19434, "foo" -> 1) // record #1
Map("id" -> 19435, "foo" -> 2) // record #2

Note that the root element must be an array.
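
Unlike CSV feeders, JSON values keep their type, so the numbers above do not need a transform step. A minimal Java sketch (the scenario name is hypothetical), assuming the file above:

ScenarioBuilder scn = scenario("Json feeder")
  .feed(jsonFile("foo.json"))
  .exec(session -> {
    // "foo" was parsed as a JSON number, so it can be read directly as an Int
    int foo = session.getInt("foo");
    System.out.println("id=" + session.getInt("id") + ", foo=" + foo);
    return session;
  });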

JDBC feeder

Gatling also provides a built-in that reads from a JDBC connection.

Java:
// beware: you need to import the jdbc module
// import static io.gatling.javaapi.jdbc.JdbcDsl.*;

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users");

JavaScript:
Not supported by Gatling JS.

Kotlin:
// beware: you need to import the jdbc module
// import io.gatling.javaapi.jdbc.JdbcDsl.*

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Scala:
// beware: you need to import the jdbc module
import io.gatling.jdbc.Predef._

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Just like the file parser built-ins, this returns a RecordSeqFeederBuilder instance.

  • The databaseUrl must be a JDBC URL (e.g. jdbc:postgresql:gatling),
  • the username and password are the credentials to access the database,
  • sql is the query that will get the values needed.

Only JDBC4 drivers are supported, as they automatically register with the DriverManager.
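
A minimal Java usage sketch (the connection URL, credentials, table, column names and endpoint are hypothetical):

// each record is a Map whose keys are the selected column names
feed(jdbcFeeder(
    "jdbc:postgresql://localhost:5432/gatling",
    "gatling_user",
    "secret",
    "SELECT login, password FROM users"))
  .exec(
    http("login")
      .post("/login")
      .formParam("login", "#{login}")
      .formParam("password", "#{password}")
  );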

Sitemap feeder

Gatling supports a feeder that reads data from a Sitemap file.

Java:
// beware: you need to import the http module
import static io.gatling.javaapi.http.HttpDsl.*;

sitemap("/path/to/sitemap/file");

JavaScript:
// beware: you need to import the http module
import { sitemap } from "@gatling.io/http";

sitemap("/path/to/sitemap/file");

Kotlin:
// beware: you need to import the http module
import io.gatling.javaapi.http.HttpDsl.*

sitemap("/path/to/sitemap/file")

Scala:
// beware: you need to import the http module
import io.gatling.http.Predef._

sitemap("/path/to/sitemap/file")

The following Sitemap file:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
    <changefreq>weekly</changefreq>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>

will be turned into:

// record #1
Map(
  "loc" -> "http://www.example.com/",
  "lastmod" -> "2005-01-01",
  "changefreq" -> "monthly",
  "priority" -> "0.8"
)

// record #2
Map(
  "loc" -> "http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii",
  "changefreq" -> "weekly"
)

// record #3
Map(
  "loc" -> "http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand",
  "lastmod" -> "2004-12-23",
  "changefreq" -> "weekly"
) 

Redis feeder

This feature was originally contributed by Krishnen Chedambarum.

Gatling can read data from Redis using one of the following Redis commands.

  • LPOP - remove and return the first element of the list
  • SPOP - remove and return a random element from the set
  • SRANDMEMBER - return a random element from the set
  • RPOPLPUSH - return the last element of the list and also store it as the first element of another list

By default, RedisFeeder uses the LPOP command:

Java:
// beware: you need to import the redis module
// import io.gatling.javaapi.redis.*;
// import static io.gatling.javaapi.redis.RedisDsl.*;
RedisClientPool redisPool =
  new RedisClientPool("localhost", 6379)
    .withMaxIdle(8)
    .withDatabase(0)
    .withSecret(null)
    .withTimeout(0)
    .withMaxConnections(-1)
    .withPoolWaitTimeout(3000)
    .withSSLContext(null)
    .withBatchMode(false);

// use a list, so there's one single value per record, which is here named "foo"
redisFeeder(redisPool, "foo");
// identical to above, LPOP is the default
redisFeeder(redisPool, "foo").LPOP();

JavaScript:
Not supported by Gatling JS.

Kotlin:
// beware: you need to import the redis module
// import io.gatling.javaapi.redis.*
// import io.gatling.javaapi.redis.RedisDsl.*
val redisPool =
  RedisClientPool("localhost", 6379)
    .withMaxIdle(8)
    .withDatabase(0)
    .withSecret(null)
    .withTimeout(0)
    .withMaxConnections(-1)
    .withPoolWaitTimeout(3000)
    .withSSLContext(null)
    .withBatchMode(false)

// use a list, so there's one single value per record, which is here named "foo"
redisFeeder(redisPool, "foo")
// identical to above, LPOP is the default
redisFeeder(redisPool, "foo").LPOP()

Scala:
// beware: you need to import the redis module
import io.gatling.redis.Predef._
import com.redis._
val redisPool = new RedisClientPool("localhost", 6379)

// use a list, so there's one single value per record, which is here named "foo"
// same as redisFeeder(redisPool, "foo").LPOP
redisFeeder(redisPool, "foo")

You can then override the desired Redis command:

Java:
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP();

JavaScript:
Not supported by Gatling JS.

Kotlin:
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP()

Scala:
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP

Java:
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER();

JavaScript:
Not supported by Gatling JS.

Kotlin:
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER()

Scala:
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER

You can create a circular feeder by using the same keys with RPOPLPUSH:

Java:
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH();
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH();

JavaScript:
Not supported by Gatling JS.

Kotlin:
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH()
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH()

Scala:
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH

Transforming records

Sometimes, you might want to transform the raw data you receive from your feeder.

For example, a CSV feeder would give you only Strings, but you might want to transform one of the attributes into an Int.

transform takes:

  • in Java and Kotlin, a BiFunction<String, T, Object>
  • in Scala a PartialFunction[(String, T), Any] that is defined only for records you want to transform, leaving the other ones as is

For example:

Java:
csv("myFile.csv").transform((key, value) ->
  key.equals("attributeThatShouldBeAnInt") ? Integer.valueOf(value) : value
);

JavaScript:
csv("myFile.csv").transform((key, value) =>
  key === "attributeThatShouldBeAnInt" ? parseInt(value) : value
);

Kotlin:
csv("myFile.csv").transform { key, value ->
  if (key.equals("attributeThatShouldBeAnInt")) Integer.valueOf(value) else value
}

Scala:
csv("myFile.csv").transform {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
}

Loading all the records in memory

Sometimes, you might just want to reuse a convenient built-in feeder for custom needs and get your hands on the actual records.

Java:
List<Map<String, Object>> records = csv("myFile.csv").readRecords();

Kotlin:
val records = csv("myFile.csv").readRecords()

Scala:
val records: Seq[Map[String, Any]] = csv("myFile.csv").readRecords
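
A Java sketch of one possible use: filtering the records and turning them back into a custom feeder (the "role" column is hypothetical):

// import java.util.*;
List<Map<String, Object>> records = csv("myFile.csv").readRecords();

// keep only the records of interest, then expose them again as a feeder
Iterator<Map<String, Object>> adminsOnly = records.stream()
  .filter(record -> "admin".equals(record.get("role")))
  .iterator();

// then, in the scenario: feed(adminsOnly)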

Count the number of records

Sometimes, you want to know the size of your feeder without having to use readRecords and copy all the data in memory.

Java:
int recordsCount = csv("myFile.csv").recordsCount();

JavaScript:
const recordsCount = csv("myFile.csv").recordsCount();

Kotlin:
val recordsCount = csv("myFile.csv").recordsCount()

Scala:
val recordsCount = csv("myFile.csv").recordsCount
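
For instance, a Java sketch of sizing the injection profile from the feeder so that each record is used exactly once (scn and httpProtocol are assumed to be defined elsewhere):

int recordsCount = csv("myFile.csv").recordsCount();

setUp(
  // one virtual user per record
  scn.injectOpen(atOnceUsers(recordsCount))
).protocols(httpProtocol);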
