How to set up a JMeter test to have a certain throughput? - file-upload

I am trying to perform a load test, and according to our stats (that I can't disclose) we expect peaks of 300 users per minute, uploading files of different sizes to our system.
Now, I created a JMeter test, which works fine, but what I don't know is how to fine-tune it to aim for a certain throughput.
I created a test with 150 users and 100 loops, expecting it to simulate 150 users coming and going and to upload 15,000 files in total, but that never happened, because at a certain point the tests started failing.
Looking at our New Relic monitoring, it seems that somehow I reached 1600 requests in a single minute. I am testing a microservice running 12 instances, so that might play a role in the higher number of requests, but even with that I expected the tests to pass. My uploaded file was 600 KB. In the end, I had a 98% failure rate.
I reduced the file size to 13 KB; at that point, I got a 17% failure rate.
So there's obviously something related to the time needed to upload the bigger file, but I don't understand what causes 150 threads/users in X loops to become 1600 at the same time. I'd expect JMeter to never start a new loop with the same thread unless the original user is finished. That being said, I'd expect at most 150 users in a given minute.
Any clarification on how to get an exact number of users/threads running at the same time would be much appreciated.
I tried playing with the KeepAlive checkbox, and I tried setting the request lifetime to 10 seconds (all the uploads get a response earlier than that) - but then JMeter finished the threads, and I had only 150 runs, no loops.
Thanks!

By default JMeter executes Samplers as fast as it can, so there are two main factors which define the actual throughput (the number of requests per unit of time):
JMeter configuration
Application under test response time
So if you're following JMeter Best Practices and JMeter has enough headroom to operate in terms of CPU, RAM, etc., you are only limited by your application's response time, as JMeter waits for the previous request to finish before starting a new one.
If you need to "slow down" your test execution, consider adding e.g. a Constant Throughput Timer to your Test Plan, where you will be able to define the desired number of requests per minute.
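For illustration only, here is a minimal Java sketch of the pacing idea such a timer is built on: each thread sleeps out the remainder of a fixed interval between samples so that all threads together stay near a target rate. The class name, thread count and target rate below are assumptions for the example, not JMeter's actual ConstantThroughputTimer code; in JMeter itself you just enter the target value in the timer and pick the appropriate "calculate throughput based on" mode.

    // Hypothetical pacing sketch - NOT JMeter's ConstantThroughputTimer implementation.
    public class PacingSketch {
        public static void main(String[] args) throws InterruptedException {
            int activeThreads = 150;          // threads in the Thread Group (assumed)
            double targetPerMinute = 300.0;   // desired requests per minute across all threads (assumed)
            // Each thread should fire once every (threads * 60000 / target) milliseconds.
            long intervalMs = (long) (activeThreads * 60000.0 / targetPerMinute); // 30000 ms here

            for (int i = 0; i < 3; i++) {     // a few iterations of one thread's samples
                long start = System.currentTimeMillis();
                // sendUpload();              // placeholder for the actual sampler work
                long elapsed = System.currentTimeMillis() - start;
                Thread.sleep(Math.max(0, intervalMs - elapsed)); // wait out the rest of the interval
            }
        }
    }

The same arithmetic also explains the situation in the question: with no timer at all, throughput is roughly the number of threads divided by the average response time, which is why 150 threads hitting a fast endpoint in a loop can easily produce well over 150 requests per minute.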

Related

What is the maximum thread capacity of JMeter in command-line mode?

I have to run a load test for 32,000 users with a duration of 15 minutes, and I ran it in command-line mode with 300 threads, a ramp-up of 100 and a loop count of 1. But after showing some data it freezes, so I can't get the full HTML report. I can't even run it for 50 users. How can I get rid of this? Please let me know.
From 0 to 2147483647 threads depending on various factors including but not limited to:
Hardware specifications of the machine where you run JMeter
Operating system limitations (if any) of the machine where you run JMeter
JMeter Configuration
The nature of your test (protocol(s) in use, the size of request/response, presence of pre/post processors, assertions and listeners)
Application response time
Phase of the moon
etc.
There is no answer like "on my MacBook I can have about 3000 threads", as it varies from test to test: for GET requests returning a small amount of data the number will be higher; for POST requests uploading huge files and getting huge responses it will be lower.
The approach is the following:
Make sure to follow JMeter Best Practices
Set up monitoring of the machine where you run JMeter (CPU, RAM, swap usage, etc.); if you don't have a better option you can go for the JMeter PerfMon Plugin
Start your test with 1 user and gradually increase the load, at the same time looking at resource consumption
When the consumption of any monitored resource starts exceeding a reasonable threshold, e.g. 80% of the maximum available capacity, stop your test and see how many users were online at that stage. This is how many users you can simulate from this particular machine for this particular test.
For another machine or test, repeat from the beginning.
Most probably, for 32,000 users you will have to go for distributed testing.
If your test "hangs" even for a smaller number of users (300 could be simulated even with default JMeter settings, and maybe even in GUI mode):
take a look at jmeter.log file
take a thread dump and see what threads are doing
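If you would rather capture that thread dump from code than with an external tool, a generic Java sketch (not JMeter-specific, the class name is made up) could look like this:

    import java.util.Map;

    // Generic sketch: print a thread dump of the current JVM.
    public class ThreadDumpSketch {
        public static void main(String[] args) {
            Map<Thread, StackTraceElement[]> dump = Thread.getAllStackTraces();
            for (Map.Entry<Thread, StackTraceElement[]> entry : dump.entrySet()) {
                Thread t = entry.getKey();
                System.out.println(t.getName() + " [" + t.getState() + "]");
                for (StackTraceElement frame : entry.getValue()) {
                    System.out.println("    at " + frame);
                }
            }
        }
    }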

JMeter test with an infinite loop in 'Loop Controller', constant runtime and 'Constant Timer'. What are the advantages and how to tune with this approach?

I have set up a JMeter script with a constant runtime set in a Runtime Controller, an infinite loop in a Loop Controller, and a constant delay between threads in a 'Constant Timer'. How can I perform tuning using this setup? Is there a correlation between 'number of threads', 'ramp-up time' and 'delay' that should be kept in mind while trying different combinations of these values for performance testing?
Number of threads is basically the number of users you will be simulating. Each JMeter thread (or virtual user) must represent a real user using your application, so treat it this way. If you have a requirement that the application must support 1000 concurrent users, stick to this number as the baseline for your testing. With regards to "how much load will my N JMeter users generate", it depends on several factors like the nature of your test, server response time, timers, etc.
Ramp-up is the time for JMeter to kick off the virtual users from point 1. Unless you're doing spike testing you should be increasing the load gradually: if you release all the users right away you will get much less information, whereas with a gradual increase in load you will be able to correlate it with increasing response time, decreasing throughput, the number of errors, etc. Moreover, it will allow the application under test to "warm up" and be better prepared for the stress.
Delay is the time the virtual user is "thinking" between operations. Real users don't hammer an application non-stop; they need some time to "think" before making the next step. Depending on what the user is "doing", the think time might be different, so I would recommend going for a Uniform Random Timer instead of the "Constant" one.
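To illustrate the difference between the two timers, here is a rough Java sketch of such a "think time" pause: a constant minimum plus a uniformly distributed random part. The 2000/3000 ms values and the class name are assumptions, not values taken from the question:

    import java.util.concurrent.ThreadLocalRandom;

    // Sketch of a randomized "think time": constant offset plus a uniform random part.
    public class ThinkTimeSketch {
        public static void main(String[] args) throws InterruptedException {
            long constantDelayMs = 2000;   // minimum think time (assumed)
            long randomRangeMs = 3000;     // extra 0..3000 ms on top of it (assumed)
            long pause = constantDelayMs + ThreadLocalRandom.current().nextLong(randomRangeMs + 1);
            System.out.println("Thinking for " + pause + " ms");
            Thread.sleep(pause);           // a Constant Timer would always sleep the same amount
        }
    }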

How to load test with simultaneous users?

In JMeter:
I have a test plan with 100 virtual users. If I set the ramp-up time to 100, then the whole test takes 100 seconds to complete the whole set. That means each thread takes 1 second to perform for each virtual user, meaning that the threads are carried out step by step: each thread is carried out after the completion of the previous one.
Problem: I need 100 users accessing the website at the same time, concurrently and simultaneously. I read about CSV, but it still acts step-wise, doesn't it? Or maybe I am not clear about it. Please enlighten me.
You're running into the "classic" situation described in the Max Users is Lower than Expected article.
JMeter acts as follows:
Threads are started according to the ramp-up time. If you put 1 there, the threads will be started almost immediately. If you put 100 threads and a 100-second ramp-up time, initially 1 thread will start, and every second the next thread will be kicked off (see the ramp-up sketch after this answer).
Threads start executing samplers from top to bottom (or according to the Logic Controllers)
When a thread doesn't have any more samplers to execute or loops to iterate, it is shut down.
So I would suggest adding more loops at the Thread Group level so that threads kicked off earlier keep looping while others are starting, and eventually you could have 100 threads working at the same time. You can configure the test execution time either in the Thread Group "Scheduler" section or via a Runtime Controller.
Another good option is using the Ultimate Thread Group, available via JMeter Plugins, which provides an easy way of configuring your load scenario.
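As referenced in point 1 above, the ramp-up simply spreads the thread starts evenly over time; a tiny Java sketch of that arithmetic (the class name is made up, the values match the question):

    // Sketch: when each thread starts, given a thread count and ramp-up period.
    public class RampUpSketch {
        public static void main(String[] args) {
            int threads = 100;          // Thread Group "Number of Threads"
            int rampUpSeconds = 100;    // Thread Group "Ramp-up period"
            double step = (double) rampUpSeconds / threads;   // 1 second between thread starts here
            for (int n = 0; n < threads; n++) {
                System.out.printf("Thread %3d starts at ~%.1f s%n", n + 1, n * step);
            }
        }
    }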

Finding an application's scalability point using JMeter

I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users from which any increase no longer increases the throughput per second".
I am using the following technique: schedule my load test to run for an hour, starting a new thread sending SOAP/XML-RPC requests every 30 seconds. I do this by setting my number of threads to 120 and my ramp-up period to 3600 seconds.
Then I look at the TOTAL row's throughput in my Summary Report listener. A new row (thread) is added every 30 seconds, and the total throughput number rises until it plateaus, in my case at about 123 requests per second once 80 of the threads are active. It then slowly drops to 120 requests per second as the last 20 threads are added. I then conclude that my application's scalability point is 123 requests per second with 80 active users.
My question: is this a valid way to find an application's scalability point, or is there a different technique that I should be trying?
From a technical perspective what you're doing does answer your question regarding one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often impact what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is 30 seconds going to give you the same results as 20 seconds or 10 seconds?
From my personal experience it's MUCH easier and more intuitive to look at graphs when trying to analyze app performance. It's not just a question of raw numbers but also of looking at trends and rates of change.
For example, here is a test of the ghost.org blogging platform using JMeter, with an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark
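That said, if you also want to turn your own definition of the scalability point into a number rather than eyeballing the graph, a rough, hypothetical Java sketch over made-up (users, throughput) samples might look like this; the sample data and the 2% improvement threshold are assumptions:

    // Sketch: find the first user count after which throughput stops growing meaningfully.
    public class ScalabilityPointSketch {
        public static void main(String[] args) {
            int[] users = {20, 40, 60, 80, 100, 120};            // made-up samples
            double[] throughput = {40, 75, 105, 123, 122, 120};  // requests/second (made-up)
            double tolerance = 0.02;                              // 2% improvement threshold (assumed)

            for (int i = 1; i < users.length; i++) {
                double gain = (throughput[i] - throughput[i - 1]) / throughput[i - 1];
                if (gain < tolerance) {
                    System.out.println("Scalability point around " + users[i - 1]
                            + " users (~" + throughput[i - 1] + " req/s)");
                    return;
                }
            }
            System.out.println("Throughput was still rising at the last measured load.");
        }
    }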

Processing data while it is loading

We have a tool which loads data from some optical media, and once it's all copied to the hard drive runs it through a third-party tool for processing. I would like to optimise this process so each file is processed as it is read in. Trouble is, the third-party tool (which naturally I cannot change) has a 12 second startup overhead. What is the best way I can deal with this, in terms of finishing the entire process as soon as possible? I can pass any number of files to the processing tool in each run, so I need to be able to determine exactly when to run the tool to get the fastest result overall. The data being copied could be anything from one large file (which can't be processed until it's fully copied) to hundreds of small files.
The simplest approach would be to create and run two threads, one that runs the tool and one that loads data. Start a 12-second timer and trigger both threads. Upon each file-load completion, check the elapsed time. If 12 seconds have passed, hand the data over to the thread running the tool. Restart loading the data in parallel with the processing of the previous batch. Once the previous batch's processing completes, restart the 12-second timer and continue checking it upon every file-load completion. Repeat until no more data remains.
For better results a more complex solution might be required. You can do some benchmarking to get an estimate of the average data-loading time. Since it might differ for small and large files, several estimates may be needed for different categories of files (grouped by size). Optimal resource utilization would be a setup that processes the data at the same rate at which new data arrives; processing time includes the 12-second startup. The benchmarking should give you a ratio of the number of processing threads to the number of reading threads (you can also decrease/increase the number of active reading threads according to the incoming file sizes). Essentially, it's a variation of the producer-consumer problem with multiple producers and consumers.
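As a very rough illustration of that producer-consumer idea, here is a hedged Java sketch: one loader thread drops finished file names onto a queue, and the main thread drains whatever has accumulated and runs the (12-second-startup) tool on the whole batch at once. The file names, timings and the runTool stand-in are placeholders, not details from the original question:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;

    // Rough producer-consumer sketch: batch loaded files so the 12 s tool startup is paid per batch.
    public class BatchProcessingSketch {
        private static final BlockingQueue<String> loaded = new LinkedBlockingQueue<>();

        public static void main(String[] args) throws InterruptedException {
            Thread loader = new Thread(() -> {
                for (int i = 1; i <= 10; i++) {             // pretend we copy 10 files from the media
                    sleepQuietly(1000);                     // placeholder for the copy time
                    loaded.add("file-" + i + ".dat");
                }
            });
            loader.start();

            while (loader.isAlive() || !loaded.isEmpty()) {
                List<String> batch = new ArrayList<>();
                String first = loaded.poll(1, TimeUnit.SECONDS); // wait for at least one file
                if (first == null) continue;
                batch.add(first);
                loaded.drainTo(batch);                      // grab everything copied meanwhile
                runTool(batch);                             // pays the 12 s startup once per batch
            }
        }

        private static void runTool(List<String> files) {
            System.out.println("Running tool on " + files); // placeholder for the third-party tool
            sleepQuietly(12000);                            // stand-in for startup + processing
        }

        private static void sleepQuietly(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }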