Feeders

Inject data into your virtual users from an external source, e.g., a CSV file.

Feeders are a stock of data that your virtual users can consume records from.

The SDK provides a feed method that can be called at the same place as exec.

       
// Java
// called directly
feed(feeder);
// attached to a scenario or an exec
scenario("scn").feed(feeder);

// JavaScript
// called directly
feed(feeder);
// attached to a scenario or an exec
scenario("scn").feed(feeder);

// Kotlin
// called directly
feed(feeder)
// attached to a scenario or an exec
scenario("scn").feed(feeder)

// Scala
// called directly
feed(feeder)
// attached to a scenario or an exec
scenario("scn").feed(feeder)

This defines a step where every virtual user feeds on the same Feeder.

Every time a virtual user reaches this step, it collects a record from the Feeder.
This record is then injected into the user’s Session, making new attributes available.
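Conceptually, a feeder behaves like an iterator of records, and the feed step copies one record's entries into the Session. The following plain-Java sketch illustrates the idea; the class and method names are hypothetical, not Gatling's internals:

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Illustration only: a feeder is an iterator over records, and "feed"
// copies the next record's entries into the virtual user's Session,
// modeled here as a plain mutable attribute map.
public class FeedSketch {
    static Map<String, Object> feed(Iterator<Map<String, Object>> feeder,
                                    Map<String, Object> session) {
        Map<String, Object> record = feeder.next(); // collect one record
        session.putAll(record);                     // inject attributes
        return session;
    }

    public static void main(String[] args) {
        Iterator<Map<String, Object>> feeder = List.<Map<String, Object>>of(
            Map.of("username", "alice"),
            Map.of("username", "bob")
        ).iterator();

        Map<String, Object> session = new HashMap<>();
        feed(feeder, session);
        System.out.println(session.get("username")); // alice
    }
}
```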

It’s also possible to feed multiple records at once. In this case, the injected values are Java Lists or Scala Seqs containing all the values for the same key.

       
// Java
// feed 2 records at once
feed(feeder, 2);
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}");
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, session -> session.getInt("numberOfRecords"));

// JavaScript
// feed 2 records at once
feed(feeder, 2);
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}");
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, (session) => session.get("numberOfRecords"));

// Kotlin
// feed 2 records at once
feed(feeder, 2)
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}")
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder) { session -> session.getInt("numberOfRecords") }

// Scala
// feed 2 records at once
feed(feeder, 2)
// feed a number of records that's defined as the "numberOfRecords" attribute
// stored in the session of the virtual user
feed(feeder, "#{numberOfRecords}")
// feed a number of records that's computed dynamically from the session
// with a function
feed(feeder, session => session("numberOfRecords").as[Int])
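To illustrate the multi-record case, here is a plain-Java sketch (not Gatling's actual implementation; names are hypothetical) of how n records could be merged so that each key maps to the list of its n values:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Illustration only: when feeding n records at once, the injected value
// for each key is the list of that key's values across the n records.
public class MultiFeedSketch {
    static Map<String, List<Object>> feedMany(Iterator<Map<String, Object>> feeder, int n) {
        Map<String, List<Object>> merged = new HashMap<>();
        for (int i = 0; i < n; i++) {
            for (Map.Entry<String, Object> e : feeder.next().entrySet()) {
                merged.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());
            }
        }
        return merged;
    }

    public static void main(String[] args) {
        Iterator<Map<String, Object>> feeder = List.<Map<String, Object>>of(
            Map.of("foo", "foo1"), Map.of("foo", "foo2"), Map.of("foo", "foo3")
        ).iterator();
        System.out.println(feedMany(feeder, 2)); // {foo=[foo1, foo2]}
    }
}
```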

Using arrays and lists

Gatling lets you use in-memory data structures as Feeders.

       
// Java
// using an array
arrayFeeder(new Map[] {
  Map.of("foo", "foo1"),
  Map.of("foo", "foo2"),
  Map.of("foo", "foo3")
});

// using a List
listFeeder(List.of(
  Map.of("foo", "foo1"),
  Map.of("foo", "foo2"),
  Map.of("foo", "foo3")
));

// JavaScript
// using an array
arrayFeeder([
  { "foo": "foo1" },
  { "foo": "foo2" },
  { "foo": "foo3" }
]);

// Kotlin
// using an array
arrayFeeder(arrayOf(
  mapOf("foo" to "foo1"),
  mapOf("foo" to "foo2"),
  mapOf("foo" to "foo3")
))

// using a List
listFeeder(listOf(
  mapOf("foo" to "foo1"),
  mapOf("foo" to "foo2"),
  mapOf("foo" to "foo3")
))

// Scala
// using an array (implicit conversion)
Array(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
)

// using an IndexedSeq (implicit conversion)
IndexedSeq(
  Map("foo" -> "foo1", "bar" -> "bar1"),
  Map("foo" -> "foo2", "bar" -> "bar2"),
  Map("foo" -> "foo3", "bar" -> "bar3")
)

File based feeders

Gatling provides multiple file-based feeders.

When using Java, Kotlin or Scala, files must be placed in src/main/resources or src/test/resources (or src/gatling/resources when using Gradle).
When using JavaScript or TypeScript, files must be placed in resources.
You then have to configure the relative path from this root.
This is the recommended strategy.

As an alternative, you can also configure an absolute path if you want to deploy your feeder files separately and have them directly sit on the host’s filesystem.

CSV feeders

Gatling provides several built-ins for reading character-separated values files.

Our parser honors the RFC 4180 specification.

The only difference is that header fields get trimmed of wrapping whitespace.

       
// Java
// use a comma separator
csv("foo.csv");
// use a tabulation separator
tsv("foo.tsv");
// use a semicolon separator
ssv("foo.ssv");
// use a custom separator
separatedValues("foo.txt", '#');

// JavaScript
// use a comma separator
csv("foo.csv");
// use a tabulation separator
tsv("foo.tsv");
// use a semicolon separator
ssv("foo.ssv");
// use a custom separator
separatedValues("foo.txt", '#');

// Kotlin
// use a comma separator
csv("foo.csv")
// use a tabulation separator
tsv("foo.tsv")
// use a semicolon separator
ssv("foo.ssv")
// use a custom separator
separatedValues("foo.txt", '#')

// Scala
// use a comma separator
csv("foo.csv")
// use a tabulation separator
tsv("foo.tsv")
// use a semicolon separator
ssv("foo.ssv")
// use a custom separator
separatedValues("foo.txt", '#')
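As an illustration of the record shape a separated-values feeder produces, here is a simplified plain-Java sketch. It ignores RFC 4180 quoting and only demonstrates how rows become maps and how header whitespace gets trimmed; the class and method names are hypothetical:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustration only: a simplified view of how a separated-values feeder
// turns rows into records. Real parsing follows RFC 4180 (quoting,
// escaped quotes); this sketch only handles unquoted fields.
public class CsvSketch {
    static List<Map<String, String>> parse(List<String> lines, char sep) {
        String[] headers = lines.get(0).split(String.valueOf(sep));
        for (int i = 0; i < headers.length; i++) {
            headers[i] = headers[i].trim(); // header fields are trimmed
        }
        List<Map<String, String>> records = new ArrayList<>();
        for (String line : lines.subList(1, lines.size())) {
            String[] values = line.split(String.valueOf(sep));
            Map<String, String> record = new LinkedHashMap<>();
            for (int i = 0; i < headers.length; i++) {
                record.put(headers[i], values[i]);
            }
            records.add(record);
        }
        return records;
    }

    public static void main(String[] args) {
        List<String> lines = List.of(" username , password", "alice,secret1", "bob,secret2");
        System.out.println(parse(lines, ','));
        // [{username=alice, password=secret1}, {username=bob, password=secret2}]
    }
}
```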

JSON feeders

Some users might want to use data in JSON format instead of CSV:

       
// Java
jsonFile("foo.json");
jsonUrl("http://me.com/foo.json");

// JavaScript
jsonFile("foo.json");
jsonUrl("http://me.com/foo.json");

// Kotlin
jsonFile("foo.json")
jsonUrl("http://me.com/foo.json")

// Scala
jsonFile("foo.json")
jsonUrl("http://me.com/foo.json")

For example, the following JSON:

[
 {
  "id":19434,
  "foo":1
  },
  {
    "id":19435,
    "foo":2
  }
]

is turned into:

Map("id" -> 19434, "foo" -> 1) // record #1
Map("id" -> 19435, "foo" -> 2) // record #2

Note that the root element must be an array.

Sitemap feeder

Gatling supports a feeder that reads data from a Sitemap file.

       
// Java
// beware: you need to import the http module
import static io.gatling.javaapi.http.HttpDsl.*;

sitemap("/path/to/sitemap/file");

// JavaScript
// beware: you need to import the http module
import { sitemap } from "@gatling.io/http";

sitemap("/path/to/sitemap/file");

// Kotlin
// beware: you need to import the http module
import io.gatling.javaapi.http.HttpDsl.*

sitemap("/path/to/sitemap/file")

// Scala
// beware: you need to import the http module
import io.gatling.http.Predef._

sitemap("/path/to/sitemap/file")

The following Sitemap file:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-01-01</lastmod>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii</loc>
    <changefreq>weekly</changefreq>
  </url>

  <url>
    <loc>http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand</loc>
    <lastmod>2004-12-23</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>

will be turned into:

// record #1
Map(
  "loc" -> "http://www.example.com/",
  "lastmod" -> "2005-01-01",
  "changefreq" -> "monthly",
  "priority" -> "0.8"
)

// record #2
Map(
  "loc" -> "http://www.example.com/catalog?item=12&amp;desc=vacation_hawaii",
  "changefreq" -> "weekly"
)

// record #3
Map(
  "loc" -> "http://www.example.com/catalog?item=73&amp;desc=vacation_new_zealand",
  "lastmod" -> "2004-12-23",
  "changefreq" -> "weekly"
) 

Zipped files

If your files are very large, you can provide them zipped and ask Gatling to unzip them on the fly:

       
// Java
csv("foo.csv.zip").unzip();

// JavaScript
csv("foo.csv.zip").unzip();

// Kotlin
csv("foo.csv.zip").unzip()

// Scala
csv("foo.csv.zip").unzip

Supported formats are gzip and zip (but the archive must contain one single file).

Distributed files (Gatling Enterprise)

If you want to run distributed tests with Gatling Enterprise Edition and you want to distribute data so that users don’t use the same data when they run on different cluster nodes, you can use the shard option. For example, if you have a file with 30,000 records deployed on 3 nodes, each node will use a 10,000-record slice.

       
// Java
csv("foo.csv").shard();

// JavaScript
csv("foo.csv").shard();

// Kotlin
csv("foo.csv").shard()

// Scala
csv("foo.csv").shard
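The slicing arithmetic can be sketched as follows (illustrative only; Gatling Enterprise computes the shards for you, and this even-split scheme with the remainder spread over the first nodes is an assumption for the example):

```java
import java.util.Arrays;

// Illustration only: computing one node's slice of a shared record stock.
public class ShardSketch {
    // Returns {offset, length} of node i's slice among k nodes over n records.
    static int[] slice(int n, int k, int i) {
        int base = n / k, extra = n % k;
        int length = base + (i < extra ? 1 : 0);   // first nodes absorb the remainder
        int offset = i * base + Math.min(i, extra);
        return new int[] { offset, length };
    }

    public static void main(String[] args) {
        // 30,000 records over 3 nodes: each node gets a 10,000-record slice
        System.out.println(Arrays.toString(slice(30000, 3, 1))); // [10000, 10000]
    }
}
```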

JDBC feeder

Gatling also provides a built-in that reads from a JDBC connection.

       
// Java
// beware: you need to import the jdbc module
// import static io.gatling.javaapi.jdbc.JdbcDsl.*;

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users");

// JavaScript: not supported by Gatling JS.

// Kotlin
// beware: you need to import the jdbc module
// import io.gatling.javaapi.jdbc.JdbcDsl.*

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

// Scala
// beware: you need to import the jdbc module
import io.gatling.jdbc.Predef._

jdbcFeeder("databaseUrl", "username", "password", "SELECT * FROM users")

Just like File parser built-ins, this returns a RecordSeqFeederBuilder instance.

  • The databaseUrl must be a JDBC URL (e.g. jdbc:postgresql:gatling),
  • the username and password are the credentials to access the database,
  • sql is the query that gets the needed values.

Only JDBC4 drivers are supported, so that they automatically register to the DriverManager.

Redis feeder

This feature was originally contributed by Krishnen Chedambarum.

Gatling can read data from Redis using one of the following Redis commands.

  • LPOP - remove and return the first element of the list
  • SPOP - remove and return a random element from the set
  • SRANDMEMBER - return a random element from the set
  • RPOPLPUSH - return the last element of the list and also store it as the first element of another list

By default, RedisFeeder uses the LPOP command:

       
// Java
// beware: you need to import the redis module
// import io.gatling.javaapi.redis.*;
// import static io.gatling.javaapi.redis.RedisDsl.*;
RedisClientPool redisPool =
  new RedisClientPool("localhost", 6379)
    .withMaxIdle(8)
    .withDatabase(0)
    .withSecret(null)
    .withTimeout(0)
    .withMaxConnections(-1)
    .withPoolWaitTimeout(3000)
    .withSSLContext(null)
    .withBatchMode(false);

// use a list, so there's one single value per record, which is here named "foo"
redisFeeder(redisPool, "foo");
// identical to above, LPOP is the default
redisFeeder(redisPool, "foo").LPOP();

// JavaScript: not supported by Gatling JS.

// Kotlin
// beware: you need to import the redis module
// import io.gatling.javaapi.redis.*
// import io.gatling.javaapi.redis.RedisDsl.*
val redisPool =
  RedisClientPool("localhost", 6379)
    .withMaxIdle(8)
    .withDatabase(0)
    .withSecret(null)
    .withTimeout(0)
    .withMaxConnections(-1)
    .withPoolWaitTimeout(3000)
    .withSSLContext(null)
    .withBatchMode(false)

// use a list, so there's one single value per record, which is here named "foo"
redisFeeder(redisPool, "foo")
// identical to above, LPOP is the default
redisFeeder(redisPool, "foo").LPOP()

// Scala
// beware: you need to import the redis module
import io.gatling.redis.Predef._
import com.redis._

val redisPool = new RedisClientPool("localhost", 6379)

// use a list, so there's one single value per record, which is here named "foo"
// same as redisFeeder(redisPool, "foo").LPOP
redisFeeder(redisPool, "foo")

You can then override the desired Redis command:

       
// Java
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP();

// JavaScript: not supported by Gatling JS.

// Kotlin
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP()

// Scala
// read data using SPOP command from a set named "foo"
redisFeeder(redisPool, "foo").SPOP
       
// Java
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER();

// JavaScript: not supported by Gatling JS.

// Kotlin
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER()

// Scala
// read data using SRANDMEMBER command from a set named "foo"
redisFeeder(redisPool, "foo").SRANDMEMBER

You can create a circular feeder by using the same key for both lists with RPOPLPUSH:

       
// Java
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH();
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH();

// JavaScript: not supported by Gatling JS.

// Kotlin
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH()
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH()

// Scala
// read data using RPOPLPUSH command from a list named "foo" and atomically store in list named "bar"
redisFeeder(redisPool, "foo", "bar").RPOPLPUSH
// identical to above but we create a circular list by using the same keys
redisFeeder(redisPool, "foo", "foo").RPOPLPUSH

Strategies

Gatling provides multiple strategies for the built-in feeders.

queue

This is the default strategy that will be applied if you don’t specify one.

The queue strategy makes the virtual users consume the records, meaning that:

  • no two virtual users can collect the same record
  • at some point, the whole stock of data is consumed

This strategy is suited when:

  • no two virtual users may collect the same record, because that would result in duplicates, e.g. multiple virtual users using the same credentials
  • you want the records to be consumed in the exact order as they are defined in your source, e.g. your CSV file
       
// Java
// default behavior, can be omitted
csv("foo").queue();

// JavaScript
// default behavior, can be omitted
csv("foo").queue();

// Kotlin
// default behavior, can be omitted
csv("foo").queue()

// Scala
// default behavior, can be omitted
csv("foo").queue
For example, given a CSV file with the following content:
key
1
2
3

When using the queue strategy:

  • the 1st virtual user consumes the record ("key", "1")
  • the 2nd virtual user consumes the record ("key", "2")
  • the 3rd virtual user consumes the record ("key", "3")
  • the 4th virtual user will cause Gatling to crash, as the feeder has run out of records
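The queue behavior can be sketched as a plain iterator over the records (illustrative only, not Gatling's internals):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Illustration only: the queue strategy hands out records in order, once
// each; when the stock is exhausted, asking for more fails (in Gatling,
// this makes the run crash).
public class QueueStrategySketch {
    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
            Map.of("key", "1"), Map.of("key", "2"), Map.of("key", "3"));
        Iterator<Map<String, String>> queue = records.iterator();
        System.out.println(queue.next()); // {key=1}
        System.out.println(queue.next()); // {key=2}
        System.out.println(queue.next()); // {key=3}
        System.out.println(queue.hasNext()); // false: a 4th user would fail
    }
}
```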

shuffle

The shuffle strategy is very similar to the queue one, except that the records are collected in a random order.

This strategy is suited when:

  • no two virtual users may collect the same record, because that would result in duplicates, e.g. multiple virtual users using the same credentials
  • you want to introduce some randomness in the order the records are consumed
       
// Java
csv("foo").shuffle();

// JavaScript
csv("foo").shuffle();

// Kotlin
csv("foo").shuffle()

// Scala
csv("foo").shuffle
For example, given a CSV file with the following content:
key
1
2
3

When using the shuffle strategy:

  • the 1st virtual user consumes a random record, e.g. ("key", "3")
  • the 2nd virtual user consumes a random record from the remaining ones, e.g. ("key", "2")
  • the 3rd virtual user consumes the last available record ("key", "1")
  • the 4th virtual user will cause Gatling to crash, as the feeder has run out of records

random

The random strategy makes the virtual users collect records in a random order, meaning that:

  • multiple virtual users can collect the same record
  • your stock of data will never run out

This strategy is suited when:

  • you don’t care that multiple virtual users use the same data, e.g. search keywords
  • you want to introduce some randomness in the order the records are consumed
       
// Java
csv("foo").random();

// JavaScript
csv("foo").random();

// Kotlin
csv("foo").random()

// Scala
csv("foo").random
For example, given a `data.csv` file with the following content:
key
1
2
3

When using the random strategy:

  • the 1st virtual user collects a random record, e.g. ("key", "3")
  • the 2nd virtual user collects a random record, e.g. ("key", "1")
  • the 3rd virtual user collects a random record, e.g. ("key", "3") again
  • the 4th virtual user collects a random record, e.g. ("key", "2")
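A random feeder can be sketched as an iterator that draws a random index on each call (illustrative only, not Gatling's implementation):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Illustration only: the random strategy picks an index at random on every
// call, so records can repeat and the feeder never runs out.
public class RandomStrategySketch {
    static Iterator<Map<String, String>> random(List<Map<String, String>> records, Random rng) {
        return new Iterator<>() {
            public boolean hasNext() { return true; } // never runs out
            public Map<String, String> next() {
                return records.get(rng.nextInt(records.size()));
            }
        };
    }

    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
            Map.of("key", "1"), Map.of("key", "2"), Map.of("key", "3"));
        Iterator<Map<String, String>> feeder = random(records, new Random());
        for (int i = 0; i < 5; i++) {
            System.out.println(feeder.next()); // records may repeat
        }
    }
}
```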

circular

The circular strategy is very similar to the random one, except that it preserves the original record order from your source:

  • multiple virtual users can collect the same record: when the reading index reaches the end of the source, it moves back to the beginning
  • your stock of data will never run out

This strategy is suited when:

  • you don’t care that multiple virtual users use the same data
  • you want the records to be consumed in the exact order as they are defined in your source, e.g. your CSV file
For example, given a `data.csv` file with the following content:
key
1
2
3

When using the circular strategy:

  • the 1st virtual user collects the record ("key", "1")
  • the 2nd virtual user collects the record ("key", "2")
  • the 3rd virtual user collects the record ("key", "3")
  • the 4th virtual user collects the record ("key", "1") again
       
// Java
csv("foo").circular();

// JavaScript
csv("foo").circular();

// Kotlin
csv("foo").circular()

// Scala
csv("foo").circular
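The wrap-around behavior can be sketched as follows (illustrative only, not Gatling's internals):

```java
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Illustration only: the circular strategy wraps the reading index around,
// preserving source order while never running out of records.
public class CircularStrategySketch {
    static Iterator<Map<String, String>> circular(List<Map<String, String>> records) {
        return new Iterator<>() {
            private int i = 0;
            public boolean hasNext() { return true; }
            public Map<String, String> next() {
                Map<String, String> record = records.get(i);
                i = (i + 1) % records.size(); // wrap back to the beginning
                return record;
            }
        };
    }

    public static void main(String[] args) {
        List<Map<String, String>> records = List.of(
            Map.of("key", "1"), Map.of("key", "2"), Map.of("key", "3"));
        Iterator<Map<String, String>> feeder = circular(records);
        for (int i = 0; i < 4; i++) {
            System.out.println(feeder.next().get("key")); // 1, 2, 3, then 1 again
        }
    }
}
```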

Transforming records

Sometimes, you might want to transform the raw data you receive from your feeder.

For example, a CSV feeder would give you only Strings, but you might want to transform one of the attributes into an Int.

transform takes:

  • in Java and Kotlin, a BiFunction<String, T, Object>
  • in Scala a PartialFunction[(String, T), Any] that is defined only for records you want to transform, leaving the other ones as-is

For example:

       
// Java
csv("myFile.csv").transform((key, value) ->
  key.equals("attributeThatShouldBeAnInt") ? Integer.valueOf(value) : value
);

// JavaScript
csv("myFile.csv").transform((key, value) =>
  key === "attributeThatShouldBeAnInt" ? parseInt(value) : value
);

// Kotlin
csv("myFile.csv").transform { key, value ->
  if (key.equals("attributeThatShouldBeAnInt")) Integer.valueOf(value) else value
}

// Scala
csv("myFile.csv").transform {
  case ("attributeThatShouldBeAnInt", string) => string.toInt
}
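Conceptually, transform applies the (key, value) function to every entry of every record. Here is a plain-Java sketch of that mechanic (names are hypothetical, not Gatling's internals):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.BiFunction;

// Illustration only: applying a (key, value) function to every entry of a
// record, e.g. to convert a numeric column from String to Integer.
public class TransformSketch {
    static Map<String, Object> transform(Map<String, String> record,
                                         BiFunction<String, String, Object> f) {
        Map<String, Object> out = new LinkedHashMap<>();
        record.forEach((k, v) -> out.put(k, f.apply(k, v)));
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> raw = Map.of("attributeThatShouldBeAnInt", "42");
        Map<String, Object> cooked = transform(raw,
            (k, v) -> k.equals("attributeThatShouldBeAnInt") ? Integer.valueOf(v) : v);
        System.out.println(cooked.get("attributeThatShouldBeAnInt") instanceof Integer); // true
    }
}
```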

Loading all the records in memory

Sometimes, you might just want to reuse a convenient built-in feeder for custom needs and get your hands on the actual records.

     
// Java
List<Map<String, Object>> records = csv("myFile.csv").readRecords();

// Kotlin
val records = csv("myFile.csv").readRecords()

// Scala
val records: Seq[Map[String, Any]] = csv("myFile.csv").readRecords

Count the number of records

Sometimes, you want to know the size of your feeder without having to use readRecords and copy all the data in memory.

       
// Java
int recordsCount = csv("myFile.csv").recordsCount();

// JavaScript
const recordsCount = csv("myFile.csv").recordsCount();

// Kotlin
val recordsCount = csv("myFile.csv").recordsCount()

// Scala
val recordsCount = csv("myFile.csv").recordsCount

Custom feeders

Feeder is a type alias for Iterator<Map<String, T>>, meaning that the component created by the feed method will poll Map<String, T> records and inject its content.

It’s very simple to build a custom one. For example, here’s how one could build a random email generator:

       
// Java
// import org.apache.commons.lang3.RandomStringUtils
Iterator<Map<String, Object>> feeder =
  Stream.generate((Supplier<Map<String, Object>>) () -> {
      String email = RandomStringUtils.insecure().nextAlphanumeric(20) + "@foo.com";
      return Map.of("email", email);
    }
  ).iterator();

// JavaScript: the feeder(Iterator) method is currently not supported by Gatling JS.

// Kotlin
// import org.apache.commons.lang3.RandomStringUtils
val feeder = generateSequence {
  val email = RandomStringUtils.insecure().nextAlphanumeric(20) + "@foo.com"
  mapOf("email" to email)
}.iterator()

// Scala
import scala.util.Random
val feeder = Iterator.continually {
  Map("email" -> s"${Random.alphanumeric.take(20).mkString}@foo.com")
}
