Defining Throughput on AWS DLT using JMeter


I'm using AWS Distributed Load Testing (DLT) to test the performance of various APIs. When I upload a script to AWS DLT and set the hold time to 5 minutes, the solution seems to run the HTTP Samplers continuously until the hold time is finished. This looks like a closed workload approach to performance testing, and I have a few questions in case anyone knows the answers, as I can't seem to find much written about AWS DLT anywhere.

  1. When defining Task Count and Concurrency, does the Concurrency value completely override the "Number of Threads" setting in the JMeter script? In tools such as BlazeMeter, for example, you can effectively toggle this by simply not setting a concurrency value.
  2. Is there any standard way to define throughput when uploading a JMeter script? I have tried using a Throughput Shaping Timer in my script, but it doesn't seem to play well with AWS DLT's test settings and often results in a much lower RPS than the timer defines. I've tried making sure that the Concurrency value in AWS DLT is high enough to allow for the target RPS (see the rough calculation after this list), but no luck.
  3. When the upload to AWS is a zip file rather than just a .jmx file, a plugins folder containing the .jar files for all plugins used in the script has to be placed in the root of the zip - are there any other file considerations that need to be made?
  4. Does the concurrency defined in AWS DLT apply to all thread groups in a script?
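
For context, here is a rough sanity check for relating Concurrency to a target RPS - a minimal sketch based on Little's Law, where the numbers are placeholders rather than figures from my actual tests:

```python
# Rough sanity check based on Little's Law:
#   achievable RPS <= concurrency / (avg response time + per-request timer delay)
# so the concurrency needed to sustain a target RPS is roughly:
#   concurrency >= target RPS * (avg response time + timer delay)
# All numbers below are placeholders, not real measurements.
import math

def required_concurrency(target_rps: float,
                         avg_response_s: float,
                         timer_delay_s: float = 0.0) -> int:
    """Minimum number of concurrent virtual users needed to sustain target_rps."""
    return math.ceil(target_rps * (avg_response_s + timer_delay_s))

if __name__ == "__main__":
    # e.g. 200 RPS against an API that averages ~150 ms per response
    print(required_concurrency(target_rps=200, avg_response_s=0.150))  # -> 30
```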

Thanks if anyone knows the answers!

There is 1 answer below:

Have you tried reading the documentation first? It contains all the answers.

  1. It does - the Concurrency value overrides the "Number of Threads" defined in the JMeter script.

  2. Throughput can be defined using one of the suitable timers, for example the Constant Throughput Timer, the Precise Throughput Timer, or the Throughput Shaping Timer.

  3. If you would like to include plugins, any .jar files placed in a /plugins subdirectory of the bundled zip file will be copied to the JMeter extensions directory and will be available during the load test (see the packaging sketch below).
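
If it helps, here is a minimal sketch of packaging a bundle that way, with the plugin .jar files under a /plugins subdirectory as described above. The file and folder names (test_plan.jmx, plugins, bundle.zip) are placeholders for illustration:

```python
# Minimal sketch: package a JMeter test plan plus its plugin .jar files into a
# zip laid out as described above (jars under a /plugins subdirectory).
# File and folder names are placeholders.
import zipfile
from pathlib import Path

def build_dlt_bundle(jmx_path: str, plugins_dir: str, out_zip: str) -> None:
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as bundle:
        # The .jmx test plan sits in the root of the zip.
        bundle.write(jmx_path, arcname=Path(jmx_path).name)
        # Each plugin .jar goes under plugins/ so it gets copied to the
        # JMeter extensions directory on the load generators.
        for jar in sorted(Path(plugins_dir).glob("*.jar")):
            bundle.write(jar, arcname=f"plugins/{jar.name}")

if __name__ == "__main__":
    build_dlt_bundle("test_plan.jmx", "plugins", "bundle.zip")
```

The resulting zip (the .jmx in the root, the jars under plugins/) is what gets uploaded to the test in Distributed Load Testing.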