Use S3 bucket feeders for large-scale data
Learn how to use AWS S3 buckets to store and fetch simulation feeder data.
Introduction
When working with large feeder files or files containing sensitive data, storing them locally in your repository is not ideal. Instead, you can store them in an S3 bucket and access the feeder data directly from your Gatling script. This guide explains how to correctly retrieve data from feeders stored in an S3 bucket.
Prerequisites
This guide requires one of the following execution environments:
- Private locations on Gatling Enterprise
- Local test execution

It also requires the Gatling SDK with Java 11, 17, 21, or 22.
Configuration
Local configuration
If you are running your scripts locally, make sure AWS credentials are configured on your local machine. The AWS SDK resolves them through its default credential provider chain, which checks environment variables, Java system properties, and the shared credentials file, among other sources.
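For example, a minimal ~/.aws/credentials file looks like this (the placeholder values are yours to fill in):

[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>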
Private Locations
To enable secure access to AWS S3, you need to:
- Assign an IAM instance profile to your load generators. This profile must grant permission to retrieve objects from the designated S3 bucket, for example with the following policy. For more information, see Gatling AWS Locations Configuration.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<feeders-bucket-name>/*"
    }
  ]
}
- Attach a new policy to the control plane role that allows it to pass the previously created IAM instance profile to the load generators:
  - Create the following policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["iam:PassRole"],
      "Effect": "Allow",
      "Resource": ["arn:aws:iam::{Account}:role/{RoleNameWithPath}"]
    }
  ]
}

  - Attach it to the control plane role (a CLI sketch follows this list).
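As a sketch, assuming you use the AWS CLI and have saved the policy above as pass-role-policy.json (the file name and the role and policy names below are placeholders), you can attach it as an inline policy:

aws iam put-role-policy \
  --role-name <control-plane-role-name> \
  --policy-name AllowPassLoadGeneratorRole \
  --policy-document file://pass-role-policy.json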
Installation
Install the S3 module of the AWS SDK for Java 2.x into your project using either Maven or Gradle:
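For example, with Maven (the version shown is illustrative; use the latest AWS SDK for Java 2.x release):

<dependency>
  <groupId>software.amazon.awssdk</groupId>
  <artifactId>s3</artifactId>
  <version>2.25.0</version>
</dependency>

Or with Gradle:

implementation "software.amazon.awssdk:s3:2.25.0"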
Suggested implementation
Notes:
- The example below retrieves the feeder file from the S3 bucket, writes it to a temporary file, and deletes that file when the JVM exits (via deleteOnExit), so no feeder data lingers in your project directory.
import static io.gatling.javaapi.core.CoreDsl.*;

import io.gatling.javaapi.core.*;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

import software.amazon.awssdk.core.ResponseInputStream;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.awssdk.services.s3.model.GetObjectResponse;

public class AWSS3BucketFeederSampleJava extends Simulation {

  private final String bucketName = "<bucket-name>";
  private final String objectKey = "<feeder-file-object-key>";

  // Download the feeder file once, when the simulation class is instantiated
  private final Path feederFile = loadFeeder();

  private Path loadFeeder() {
    try (S3Client s3 = S3Client.create()) {
      // Build a request for the feeder object
      GetObjectRequest getObjectRequest = GetObjectRequest.builder()
          .bucket(this.bucketName)
          .key(this.objectKey)
          .build();

      // Create a temporary file that is deleted when the JVM exits
      Path tempFeederFile = Files.createTempFile("feeder", ".tmp");
      tempFeederFile.toFile().deleteOnExit();

      // Stream the object's content into the temporary file
      try (ResponseInputStream<GetObjectResponse> inputStream = s3.getObject(getObjectRequest)) {
        Files.copy(inputStream, tempFeederFile, StandardCopyOption.REPLACE_EXISTING);
      }
      return tempFeederFile;
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }

  // Use the feeder method that matches the feeder file type: csv, json, etc.
  FeederBuilder<String> feeder = csv(feederFile.toFile().getAbsolutePath()).random();
}
Test your S3 feeder
Pass the feeder object to your scenario and verify that the feeder data is retrieved and injected into your virtual users as expected, as sketched below.
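As a minimal sketch, assume the feeder file contains an id column and that your simulation defines an HTTP protocol for your system under test (both are assumptions for illustration, not part of the example above). Inside the simulation class, add:

// Requires: import io.gatling.javaapi.http.*;
// and import static io.gatling.javaapi.http.HttpDsl.*;

// Hypothetical base URL; point this at your system under test
HttpProtocolBuilder httpProtocol = http.baseUrl("https://<your-system-under-test>");

// Each virtual user pulls one record and injects its "id" column into the request
ScenarioBuilder scn = scenario("S3 feeder test")
    .feed(feeder)
    .exec(http("Get item").get("/items/#{id}"));

{
  setUp(scn.injectOpen(atOnceUsers(1))).protocols(httpProtocol);
}

Running this with a single user is enough to confirm that the file was downloaded from S3 and that each record is resolved correctly before scaling up the injection profile.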