I have 5 thread groups, each with 3 API requests, and the thread groups should execute one after another. In a 1-hour load test I need to achieve 120 hits per second.
Pacing: 5 sec
Thinktime: 8 sec
Each thread single iteration time: 20 sec
So how many users do I need to achieve the required 120 hits per second, and how can I structure the load test so that the 5 thread groups execute one after another?
It's a matter of simple arithmetic and the question should arguably go to https://math.stackexchange.com/ (or alternatively you can ask a student at the nearest school).
Each thread single iteration time: 20 sec
means that each user executes 3 requests per 20 seconds, to wit 1 request per 6.6 seconds.
So you need 6.6 users to get 1 request per second or 792 users to reach 120 requests per second.
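Checking the arithmetic with a quick script (the 120 req/s target and the 20-second iteration come from the question):

```python
# Each user completes one 20-second iteration containing 3 requests.
requests_per_iteration = 3
iteration_seconds = 20

seconds_per_request = iteration_seconds / requests_per_iteration  # ~6.67 s per request per user
users_needed = 120 * seconds_per_request                          # users for 120 req/s

print(round(users_needed))  # 800
```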
Also, the "pacing" concept is for "dumb" tools which don't support setting a desired throughput, whereas JMeter provides:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
Any of them lets you define the number of requests per second; the latter in particular can be combined with the Concurrency Thread Group.
I ran a JMeter test where I found:
Samples - 26133
99% Line to be - 2061ms
Throughput - 43.6/s
My question is: how can the throughput be 43.6 requests per second when the 99% line is 2061 ms? From my understanding that means 99% of the samples took no more than this time, and the remaining 1% took at least as long.
So shouldn't the throughput be less than 1 request per second? How can it serve 43.6 requests per second when responses take about 2 seconds each?
The 99% line is a response-time percentile.
Throughput is the number of requests divided by the test duration.
Given the number of samplers and the throughput my expectation is that you ran your test for 10 minutes.
If you executed your test with 1 user having a 2-second response time for 10 minutes, you would get 300 samples. Looking at the numbers, I can assume that you had something like 87 users.
And these 87 users (or whatever your number is) are the key point, because throughput reflects concurrency:
1 user which executes 1 request each 2 seconds - throughput will be 0.5 hits per second
2 users which execute 1 request each 2 seconds - throughput will be 1 hit per second
4 users which execute 1 request each 2 seconds - throughput will be 2 hits per second
etc.
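The pattern above can be sanity-checked with a one-line model (assuming each user starts its next request as soon as the previous response arrives):

```python
# Throughput ~= concurrent users / response time, when users loop back-to-back.
def throughput(users, response_time_s):
    return users / response_time_s

print(throughput(1, 2))   # 0.5 hits/s
print(throughput(87, 2))  # 43.5 hits/s, close to the reported 43.6/s
```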
More information:
JMeter Glossary
What is the Relationship Between Users and Hits Per Second?
I have repetitive tasks that I want to process with a number of workers (i.e., competing consumers pattern). The probability of failure during the task is fairly low so in case of such rare events, I would like to try again after a short period of time, say 1 second.
A sequence of consecutive failures is even less probable but still possible, so for a few initial retries, I would like to stick to a 1-second delay.
However, if the sequence of failures reaches some point, then most likely there is some external cause for the failures. From that point on, I would like to start extending the delay.
Let's say that the desired distribution of delays looks like this:
first appearance in the queue - no delay
retry 1 - 1 second
retry 2 - 1 second
retry 3 - 1 second
retry 4 - 5 seconds
retry 5 - 10 seconds
retry 6 - 20 seconds
retry 7 - 40 seconds
retry 8 - 80 seconds
retry 9 - 160 seconds
retry 10 - 320 seconds
another retry - drop the message
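The schedule above can be expressed as a small function (a sketch; the name is mine):

```python
# Delay in seconds before the given retry attempt, or None once the
# message should be dropped.
def retry_delay(attempt):
    if attempt <= 3:
        return 1                       # retries 1-3: fixed 1-second delay
    if attempt <= 10:
        return 5 * 2 ** (attempt - 4)  # retries 4-10: 5, 10, 20, ..., 320 seconds
    return None                        # retry 11 and beyond: drop the message
```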
I have found a lot of information about DLXes (Dead Letter Exchanges), which can partially solve the problem. It appears easy to achieve an infinite number of retries with the same delay, but I haven't found a way to increase the delay or to stop after a certain number of retries.
I'm looking for the purest RabbitMQ solution possible. However, I'm interested in anything that works.
There is a plugin available for this. I think you can use it to achieve what you need.
I've used it for something in a similar fashion for handling custom retries with dynamic delays.
RabbitMQ Delayed Message Plugin
Using a combination of DLXes and expire/TTL times, you can accomplish this except for the case where you want to change the redelivery time, for instance to implement an exponential backoff.
The only way I could make it work using a pure RabbitMQ approach is to set the expire time to the smallest delay needed, then use the x-death header array to figure out how many times the message has been dead-lettered, and either reject it (i.e. DLX it again) or process and ack it accordingly.
Let's say you set the expire time to 1 minute and you need to back off 1 minute the first time, then 5 minutes, then 30 minutes. That translates to attempting the message when x-death.count is 1, then 5, then 30; any other time you just reject the message so it cycles through the wait queue again.
Note that this can create lots of churn if you have many retry-messages. But if retries are rare, go for it.
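A minimal sketch of that decision logic (names and thresholds are mine, and it assumes a wait queue with a 1-minute TTL that dead-letters back to the work queue):

```python
ATTEMPT_AT = {1, 5, 30}  # wait-queue cycles at which to actually try: 1, 5, 30 minutes
MAX_CYCLES = 30          # beyond this, give up and drop the message

def total_death_count(headers):
    """Sum the 'count' fields of the x-death header array."""
    return sum(entry.get('count', 0) for entry in (headers or {}).get('x-death', []))

def decide(headers):
    """Return 'process', 'drop', or 'reject' (send back through the wait queue)."""
    count = total_death_count(headers)
    if count == 0 or count in ATTEMPT_AT:
        return 'process'
    if count > MAX_CYCLES:
        return 'drop'
    return 'reject'

print(decide({}))                          # process (first delivery)
print(decide({'x-death': [{'count': 3}]})) # reject (keep waiting until cycle 5)
```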
I am trying to call the Amadeus API (/v1/shopping/hotel-offers) in parallel in the test environment. Unfortunately, when I start 3 threads simultaneously, only the very first one gets an OK response and the others get HTTP 429 Too Many Requests.
I have not exceeded the monthly quota yet, so the error is really related to the parallel execution.
Does anybody know the exact limits (requests/sec or requests in parallel)? Is it even possible to have more than one request in flight at a time?
The throttling differs depending on the environment:
Test: 10 transactions per second per user (10 TPS/user), with the constraint of no more than 1 request every 100 ms.
Production: 20 transactions per second per user (20 TPS/user), with the constraint of no more than 1 request every 50 ms.
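A simple client-side throttle that respects the minimum-interval constraint could look like this (my own sketch, not an Amadeus SDK feature):

```python
import time

class MinIntervalThrottle:
    """Enforce a minimum interval between requests:
    0.1 s for the test environment, 0.05 s for production."""

    def __init__(self, interval):
        self.interval = interval
        self._last = float('-inf')

    def wait(self):
        """Block until at least `interval` seconds have passed since the last call."""
        now = time.monotonic()
        remaining = self.interval - (now - self._last)
        if remaining > 0:
            time.sleep(remaining)
        self._last = time.monotonic()

throttle = MinIntervalThrottle(0.1)  # test environment: 1 request per 100 ms
throttle.wait()                      # call before each API request
```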
I am new to JMeter and don't have much experience with it. I want to use a plugin named Custom Thread Group -> Arrivals Thread Group, available at https://jmeter-plugins.org/wiki/ArrivalsThreadGroup/, for arrival-rate simulation. I searched a lot for these properties but didn't find clear definitions, so I only have a vague idea about them. I've written what I know about each property as a code comment:
Target Rate(arrivals/min): 60
Ramp Up Time(min): 1 // how long to take to "ramp-up" to the full number of threads
Ramp-Up Steps Count: 10 // It divides Ramp-up time into specified parts and ramp-up threads accordingly
Hold Target Rate Time(min): 2 // It will hold the target rate for the next two minutes
Thread Iterations Limit:
Can anybody help me to understand clearly what is the significance of all these properties?
According to above settings:
Target Rate: 60 arrivals in a minute means there will be one arrival per second. Each second JMeter will kick off a virtual user which will be executing samplers.
Ramp-up time: the time which will be taken to reach the target rate, i.e. JMeter starts from zero arrivals per minute and increases the arrivals rate to 60 arrivals per minute in 60 seconds.
Ramp-up steps: here you can set the “granularity” of increasing arrivals rate, more steps - more smooth pattern, fewer steps - you will have “spikes”
Hold Target Rate: it keeps the arrival rate steady for the duration specified. In your case, it holds 60 arrivals/minute until the end of the run, as explained in the comment above.
So according to these settings, JMeter will ramp up from 0 to 1 arrival per second over one minute, then run at that rate for 2 more minutes.
If I have 1 sampler in the Test Plan, that is something like 153 executions; with 2 samplers, 153 executions per sampler, 306 in total. The approximate request rate will be ~50 requests/minute.
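Those figures can be sanity-checked with a back-of-the-envelope model (assuming a linear ramp from 0 to the target rate, then a constant hold; real counts will vary slightly):

```python
# Expected arrivals = area under the linear ramp + constant-rate plateau.
def expected_arrivals(target_per_min, ramp_min, hold_min):
    ramp = target_per_min * ramp_min / 2  # average rate during ramp is half the target
    hold = target_per_min * hold_min
    return ramp + hold

total = expected_arrivals(60, 1, 2)  # 30 + 120 = 150, close to the observed 153
overall_rate = total / (1 + 2)       # 50 requests/minute over the whole run
print(total, overall_rate)
```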
Let's say I have 5M users (for easy math) who vary widely in their visits per month.
User loyalty, in visits per month
1. 1M <1 visits/month
2. 1M 1-10 visits/month
3. 1M 10-50 visits/month
4. 1M 50-100 visits/month
5. 1M >100 visits/month
The goal for each user is to access data that takes (let's say) 1 CPU cycle to fetch (for example... the reality in our situation is much much less, but it's easier math with 1).
Each data fetch takes too long to load inline, so it's preferred to have data ready for them when they come. (via crons)
Let's say that in order to satisfy our most active users, we need to run the cron 10 times a day so the data is ready when they want it. (I say "when they want it" because typically that's 4 times within an 8-hour work day, not 4 times spread evenly over 24 hours.) That's 1M (users) x 10 (data fetches) per day, or (at 1 CPU cycle per fetch) 10M CPU cycles per day for these 1M most active users. The good news is that they're at least using the fetched data.
However, what about our less active users? What strategy do you recommend to still provide relevant fetched data results while protecting from wasted CPU cycles fetching data that will never or rarely be seen?
Here's a chart of the minimum cycles required based on the chart above. The ideal answer would get as close to this as possible.
Group   #Users   Visits/Month   CPU Cycles/Month
1.      1M       0.1            0.1M
2.      1M       1              1M
3.      1M       10             10M
4.      1M       50             50M
5.      1M       100            100M
-------------------------------------------------
Total   5M       161.1          161.1M
If I did the same cron necessary to keep Group 5 happy for everyone, that'd be 500M CPU cycles (roughly 70% wasted).
What do you recommend to minimize wasted CPU cycles while still keeping infrequent users happy (because we still want them to turn into active users)?
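The waste figure quoted above checks out; here is the arithmetic as a quick script (group numbers taken from the table, refreshing each group at its own visit rate versus refreshing everyone like Group 5):

```python
GROUPS = {  # group: (users, visits per month)
    1: (1_000_000, 0.1),
    2: (1_000_000, 1),
    3: (1_000_000, 10),
    4: (1_000_000, 50),
    5: (1_000_000, 100),
}

tiered = sum(u * v for u, v in GROUPS.values())  # 161.1M cycles/month (the minimum)
flat = 100 * sum(u for u, _ in GROUPS.values())  # 500M cycles/month (everyone at Group 5's rate)
wasted = 1 - tiered / flat                       # ~0.68, i.e. roughly 70% wasted

print(f"{tiered / 1e6:.1f}M vs {flat / 1e6:.0f}M, {wasted:.0%} wasted")
```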