I want to run a stress test starting with the anticipated number of users (or just 1 virtual user) and gradually increase the load: 10 threads, 20 threads, ... 100 threads, until the response time starts exceeding the acceptable value or errors start occurring. But across all these test runs, should I increase the Ramp-up Period (seconds), or should it remain the same for every test?
Picture is given below:
No, the ramp-up time shouldn't be the same for all of your tests. You have to set the ramp-up period according to each test's load.
Ramp-up is the time over which all the users arrive at your tested application server.
You can check this thread also: How should I calculate Ramp-up time in Jmeter
As per JMeter documentation:
The ramp-up period tells JMeter how long to take to "ramp-up" to the full number of threads chosen. If 10 threads are used, and the ramp-up period is 100 seconds, then JMeter will take 100 seconds to get all 10 threads up and running. Each thread will start 10 (100/10) seconds after the previous thread was begun. If there are 30 threads and a ramp-up period of 120 seconds, then each successive thread will be delayed by 4 seconds.
Ramp-up needs to be long enough to avoid too large a work-load at the start of a test, and short enough that the last threads start running before the first ones finish (unless one wants that to happen).
Start with Ramp-up = number of threads and adjust up or down as needed.
So if you don't have a better idea - go for the ramp-up period in seconds equal to the number of users.
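The delay between successive thread starts is simply the ramp-up period divided by the thread count. A quick sketch of that arithmetic in plain Python (this is just a model of JMeter's scheduling, not the JMeter API):

```python
def thread_start_delays(threads, ramp_up_seconds):
    """Return the start offset (in seconds) of each thread when a
    ramp-up period is spread evenly across the thread count."""
    interval = ramp_up_seconds / threads
    return [round(i * interval, 2) for i in range(threads)]

# 10 threads over 100 seconds: one new thread every 10 seconds
print(thread_start_delays(10, 100))
# 30 threads over 120 seconds: each successive thread delayed by 4 seconds
print(thread_start_delays(30, 120)[:4])
```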
The point of ramp-up is to increase the load gradually, so you will be able to correlate the increasing load with other website performance metrics such as response time, throughput, number of server hits per second, number of errors per second, etc.
See the JMeter Glossary for an explanation of the metrics JMeter stores.
Related
I'm planning to run my performance test with 100 users for 30 minutes. I noticed it generates a huge number of requests (almost 2M) when checking the results. Is there a way I could simulate 100 requests per second? At the end of the execution I'm only expecting 180,000 requests. Your response is highly appreciated. Thank you.
Expectation Request:
Per seconds = 100
Per Minute = 6,000
30 Minutes = 180,000
Thread Setup:
Your setup kicks off 100 users which are active for 30 minutes, and they execute samplers as fast as they can.
If you need to slow down the requests execution rate to the given number of requests per second you can consider using Throughput Shaping Timer to limit JMeter's samplers execution rate to the desired value.
You can install Throughput Shaping Timer using JMeter Plugins Manager
JMeter's built-in alternatives are Constant Throughput Timer and Precise Throughput Timer
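The expected totals in the question follow directly from the target rate; a quick sanity check of that arithmetic in plain Python:

```python
def expected_requests(rate_per_second, duration_minutes):
    """Total requests for a fixed request rate sustained over a test duration."""
    return rate_per_second * duration_minutes * 60

# 100 requests/second held for 30 minutes
print(expected_requests(100, 30))  # 180000
```

Without a timer capping the rate, the actual count depends only on how fast the application responds, which is why the test produced ~2M requests instead.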
I am trying to perform a load test, and according to our stats (that I can't disclose) we expect peaks of 300 users per minute, uploading files of different sizes to our system.
Now, I created a JMeter test, which works fine, but what I don't know how to fine-tune is how to aim for a certain throughput.
I created a test with 150 users and 100 loops, expecting it to simulate 150 users coming and going and in total upload 15,000 files, but that never happened because at a certain point the tests started failing.
Looking at our New Relic monitoring, it seems that somehow I reached 1,600 requests in a single minute. I am testing a microservice running 12 instances, so that might play a role in the higher number of requests, but even so I expected the tests to pass. My uploaded file was 600 KB. In the end, I had a 98% failure rate.
I reduced the file size to 13 KB; at that point, I got 17% failure.
So, there's obviously something related to the time needed to upload the bigger file, but I don't understand what causes 150 threads/users in X loops to become 1,600 requests in the same minute. I'd expect JMeter never to start a new loop with the same thread unless the original user is finished. That being said, I'd expect at most 150 users in a given minute.
Any clarification on how to get an exact number of users/threads running at the same time is much appreciated.
I tried playing with the KeepAlive checkbox, and I tried adding a lifetime of 10 seconds to the requests (all the uploads get a response earlier than that), but then JMeter finished the threads and I had only 150 runs, no loops.
Thanks!
By default JMeter executes Samplers as fast as it can, so there are 2 main factors which define the actual throughput (number of requests per unit of time):
JMeter configuration
Application under test response time
So if you're following JMeter Best Practices and JMeter has enough headroom to operate in terms of CPU, RAM, etc., you are only limited by your application's response time, as JMeter waits for the previous request to finish before starting a new one.
If you need to "slow down" your test execution, consider adding e.g. a Constant Throughput Timer to your Test Plan, where you will be able to define the desired number of requests per minute.
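This "as fast as it can" behavior is why 150 users can produce 1,600 requests per minute: with no timer, throughput follows Little's Law. A rough back-of-the-envelope model (the response times below are hypothetical, just to illustrate the effect):

```python
def max_throughput(threads, avg_response_time_seconds):
    """Approximate requests/second when every thread fires its next
    request as soon as the previous one finishes (Little's Law)."""
    return threads / avg_response_time_seconds

# Hypothetical: 150 threads, small 13 KB uploads answered in ~5 seconds
# -> ~30 req/s, i.e. ~1800 requests per minute from only 150 users
print(max_throughput(150, 5) * 60)
# Hypothetical: the same 150 threads with 60-second uploads
# -> only ~150 requests per minute
print(max_throughput(150, 60) * 60)
```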
I need to make a Concurrency Thread Group where the given number of concurrent users hits only once; what's happening now is that in a given period of time, they hit multiple times. See graph.
The concurrency thread graph currently looks like this: between 0 and 1 seconds, the concurrent users keep hitting continuously. I want each concurrent user to hit only once and then stop. The graph should look like this.
You can set it up using the Ultimate Thread Group JMeter plugin:
Set a number of rows, each with a different Initial Delay (0, 60, 120, ... seconds), so each will start one minute after the previous.
Set the startup and shutdown times to 0 so you'll get the peaks you want.
In JMeter I have a test plan with 100 virtual users. If I set the ramp-up time to 100, the whole test takes 100 seconds to start the whole set, which means one thread starts every second, one after another, step by step.
Problem: I need 100 users accessing the website at the same time, concurrently and simultaneously. I read about CSV, but doesn't it still act stepwise? Or maybe I'm not understanding it. Please enlighten me.
You're running into the "classic" situation described in the Max Users is Lower than Expected article.
JMeter acts as follows:
Threads are started according to the ramp-up time. If you put 1 there, all threads will be started immediately. If you put 100 threads and a 100-second ramp-up, initially 1 thread will start and then one additional thread will be kicked off every second.
Threads execute samplers from top to bottom (or according to Logic Controllers).
When a thread has no more samplers to execute and no more loops to iterate, it's shut down.
So I would suggest adding more loops at the Thread Group level, so the threads kicked off earlier keep looping while the others are starting; that way you can eventually have 100 threads working at the same time. You can configure the test execution time either in the Thread Group's "Scheduler" section or via a Runtime Controller.
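To see why the loop count matters, here is a hedged back-of-the-envelope model of how many threads are alive at a given moment (the 10-second iteration time is an assumption for illustration, not a JMeter default):

```python
def active_threads(t, threads=100, ramp_up=100, loops=1, iteration_seconds=10):
    """Rough count of threads alive at time t: each thread starts at its
    ramp-up offset and shuts down once it has finished all of its loops."""
    interval = ramp_up / threads
    alive = 0
    for i in range(threads):
        start = i * interval
        end = start + loops * iteration_seconds
        if start <= t < end:
            alive += 1
    return alive

# With 1 loop, early threads die before the later ones start:
print(active_threads(50, loops=1))    # only the 10 most recent threads alive
# With enough loops, all 100 threads eventually overlap:
print(active_threads(99, loops=100))  # 100
```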
Another good option is the Ultimate Thread Group, available via JMeter Plugins, which provides an easy way of configuring your load scenario.
I am trying to find an application's scalability point using JMeter. I define the scalability point as "the minimum number of concurrent users beyond which any increase no longer increases the throughput per second".
I am using the following technique: schedule my load test to run for an hour, starting a new thread sending SOAP/XML-RPC requests every 30 seconds. I do this by setting the number of threads to 120 and the ramp-up period to 3600 seconds.
Then I look at the TOTAL row's throughput in my Summary Report listener. A new thread is added every 30 seconds, and the total throughput rises until it plateaus at about 123 requests per second after 80 of the threads are active, in my case. It then slowly drops to 120 requests per second as the last 20 threads are added. I conclude that my application's scalability point is 123 requests per second with 80 active users.
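The approach above can be framed as a simple plateau search over (active threads, throughput) samples. A sketch with hypothetical numbers loosely mirroring the scenario described (the data points are made up, not JMeter output):

```python
def scalability_point(samples):
    """Given (active_threads, throughput) samples sorted by thread count,
    return the last point at which throughput was still increasing."""
    best = samples[0]
    for point in samples[1:]:
        if point[1] <= best[1]:  # throughput stopped growing
            return best
        best = point
    return best

# Hypothetical measurements: throughput plateaus around 80 threads
data = [(20, 40), (40, 75), (60, 105), (80, 123), (100, 122), (120, 120)]
print(scalability_point(data))  # (80, 123)
```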
My question: is this a valid way to find an application's scalability point, or is there a different technique I should be trying?
From a technical perspective what you're doing does answer your question regarding one specific user scenario, though I think you might be missing the big picture.
First of all, keep in mind that the actual HTTP requests you're sending and the ramp-up times can often affect what you call a scalability point. Are your requests hitting a cache? Are they not random enough? Are they too random? Do they represent real-world requests? Is 30 seconds going to give you the same results as 20 seconds or 10 seconds?
From my personal experience, it's MUCH easier and more intuitive to look at graphs when trying to analyze app performance. It's not just a question of raw numbers, but also of looking at trends and rates of change.
For example, here is a benchmark of the Ghost.org blogging platform tested with JMeter, including an interactive JMeter results graph:
http://blazemeter.com/blog/ghost-performance-benchmark