What is the relationship between the 99% Line and throughput in JMeter's Aggregate Report?

I ran a JMeter test case and found:
Samples - 26133
99% Line - 2061 ms
Throughput - 43.6/s
My question is: how can the throughput be 43.6 requests per second when the 99% Line shows 2061 ms? From my understanding that means 99% of the samples took NO MORE THAN this time, and the remaining 1% took at least as long.
So shouldn't the throughput be less than 1 request per second? How is the server able to handle 43.6 requests per second when 99% of the requests take about 2 seconds to respond?

The 99% Line is a response time percentile.
Throughput is the number of requests divided by the test duration.
Given the number of samples and the throughput, my expectation is that you ran your test for about 10 minutes (26133 / 43.6 ≈ 600 seconds).
If you executed your test with 1 user and a 2-second response time for 10 minutes, you would get 300 samples. Looking at the numbers, I can assume you had something like 87 concurrent users (26133 / 300 ≈ 87).
And these 87 users (or whatever your real number is) are the key point, because for a given response time throughput is driven by concurrency (there is a quick arithmetic sketch after the list):
1 user which executes 1 request every 2 seconds - throughput will be 0.5 hits per second
2 users which execute 1 request every 2 seconds - throughput will be 1 hit per second
4 users which execute 1 request every 2 seconds - throughput will be 2 hits per second
etc.
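A quick back-of-the-envelope check of the numbers above (plain Python, not anything JMeter produces; treating the ~2-second figure as an average response time is an assumption, since the 99% line is a percentile, not a mean):

    samples = 26133
    throughput = 43.6             # requests per second, from the Aggregate Report
    response_time = 2.0           # seconds; assumed average, roughly the 99% line

    duration = samples / throughput                # ~599 s, i.e. about 10 minutes
    samples_per_user = duration / response_time    # ~300 samples for a single user
    estimated_users = samples / samples_per_user   # ~87 concurrent users
    print(round(duration), round(samples_per_user), round(estimated_users))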
More information:
JMeter Glossary
What is the Relationship Between Users and Hits Per Second?

Related

Need to achieve 120 hits per second in JMeter during a load test

I have 5 thread groups, each with 3 API requests, and each thread group should execute one after another. In a 1-hour load test I need to achieve 120 hits per second.
Pacing: 5 sec
Think time: 8 sec
Each thread's single iteration time: 20 sec
So how many users do I need to reach the required 120 hits per second, and how can I run the load test across 5 thread groups when each one should execute one after another?
It's a matter of simple arithmetic, and I believe the question should go to https://math.stackexchange.com/ (or alternatively you can catch a student from the nearest school and ask them).
Each thread single iteration time: 20 sec
means that each user executes 3 requests per 20 seconds, i.e. 1 request per ~6.67 seconds.
So you need ~6.67 users to get 1 request per second, or roughly 800 users to reach 120 requests per second (see the sketch below).
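For reference, the same arithmetic as a minimal sketch (plain Python; it assumes the 20-second iteration time already includes pacing, think time and the requests themselves, as stated in the question):

    requests_per_iteration = 3
    iteration_time = 20.0         # seconds per user per iteration (pacing + think time + requests)

    rate_per_user = requests_per_iteration / iteration_time   # 0.15 requests/second per user
    target_rate = 120.0                                        # required requests/second
    users_needed = target_rate / rate_per_user                 # 800 users
    print(users_needed)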
Also, the "pacing" concept is for "dumb" tools which don't support setting the desired throughput directly, whereas JMeter provides:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
Any of them lets you define the number of requests per second, especially the latter one, which can be combined with the Concurrency Thread Group.
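As a rough illustration, if you picked the Constant Throughput Timer (whose "Target throughput" field is expressed in samples per minute), the value to enter for 120 requests per second would be:

    target_per_second = 120
    target_per_minute = target_per_second * 60   # 7200 samples per minute
    print(target_per_minute)                     # value for the timer's "Target throughput" field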

Kusto query gets a timeout error due to the query running time limit

I am running a Kusto query in Azure Diagnostics, querying the logs of the last week, and the query times out after 10 minutes. Is there a way I can increase the timeout limit? If yes, can someone please guide me through the steps? I downloaded Kusto Explorer but couldn't see any easy way of connecting to my Azure cluster. I need help with how to increase this timeout duration from inside the Azure portal for the query I am running.
It seems like 10 minutes is the maximum timeout value.
https://learn.microsoft.com/en-us/azure/azure-monitor/service-limits
Query API limits (Category / Limit / Comments):
Maximum records returned in a single query - 500,000
Maximum size of data returned - ~104 MB (~100 MiB) - The API returns up to 64 MB of compressed data, which translates to up to 100 MB of raw data.
Maximum query running time - 10 minutes - See Timeouts for details.
Maximum request rate - 200 requests per 30 seconds per Azure AD user or client IP address - See Log queries and language.
https://learn.microsoft.com/en-us/azure/azure-monitor/logs/api/timeouts
Timeouts
Query execution times can vary widely based on:
The complexity of the query
The amount of data being analyzed
The load on the system at the time of the query
The load on the workspace at the time of the query
You may want to customize the timeout for the query.
The default timeout is 3 minutes, and the maximum timeout is 10 minutes.
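Per the timeouts page linked above, the timeout is raised per request with a Prefer: wait=<seconds> header, capped at 600 seconds. A minimal sketch against the Log Analytics query API, assuming you already have a workspace ID and a bearer token; the placeholders and the sample KQL are illustrative, not taken from the question:

    import requests

    url = "https://api.loganalytics.io/v1/workspaces/<WORKSPACE_ID>/query"
    headers = {
        "Authorization": "Bearer <ACCESS_TOKEN>",
        "Prefer": "wait=600",     # request the maximum 10-minute server-side timeout
    }
    body = {"query": "AzureDiagnostics | where TimeGenerated > ago(7d) | take 100"}

    # Client-side timeout set slightly higher so the HTTP client does not give up
    # before the server does.
    response = requests.post(url, headers=headers, json=body, timeout=620)
    print(response.status_code)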

What is the API usage (requests per second) limit of the Amadeus test environment?

I am trying to call the Amadeus API in parallel (/v1/shopping/hotel-offers) in the test environment. Unfortunately, when I start 3 threads simultaneously, only the very first one gets an OK response and the others get HTTP 429 Too Many Requests responses.
I have not exceeded the monthly quota yet, so that error is really related to the parallel execution.
Does anybody know the exact limits (requests/sec or requests in parallel)? Is it even possible to have more than one request in flight at a time?
The throttling limits differ depending on the environment:
Test: 10 transactions per second per user (10 TPS/user), with the constraint of no more than 1 request every 100 ms.
Production: 20 transactions per second per user (20 TPS/user), with the constraint of no more than 1 request every 50 ms.
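If reducing the number of threads is not an option, a simple client-side throttle keeps calls at least 100 ms apart in the test environment. A minimal sketch (not part of any Amadeus SDK; send_request is a placeholder for whatever call you already make):

    import threading
    import time

    MIN_INTERVAL = 0.1            # 1 request every 100 ms (test environment constraint)
    _lock = threading.Lock()
    _last_call = [0.0]

    def throttled(send_request):
        """Delay the call so consecutive requests are at least MIN_INTERVAL apart."""
        with _lock:
            wait = MIN_INTERVAL - (time.monotonic() - _last_call[0])
            if wait > 0:
                time.sleep(wait)
            _last_call[0] = time.monotonic()
        return send_request()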

How to set up the Arrivals Thread Group (Custom Thread Groups)

I am new to JMeter and don't have much experience with it. I want to use the Custom Thread Groups plugin's Arrivals Thread Group, available at https://jmeter-plugins.org/wiki/ArrivalsThreadGroup/, for arrival-rate simulation. I searched a lot for these properties but didn't find a clear definition or explanation, so I only have a vague idea of what they mean. I wrote what I know about each property as a code comment:
Target Rate(arrivals/min): 60
Ramp Up Time(min): 1 // how long to take to "ramp-up" to the full number of threads
Ramp-Up Steps Count: 10 // It divides Ramp-up time into specified parts and ramp-up threads accordingly
Hold Target Rate Time(min): 2 // It will repeat the same for the next two minutes
Thread Iterations Limit:
Can anybody help me clearly understand the significance of all these properties?
According to the above settings:
Target Rate: 60 arrivals per minute means there will be one arrival per second. Each second JMeter will kick off a virtual user which will start executing samplers.
Ramp-up Time: the time taken to reach the target rate, i.e. JMeter starts from zero arrivals per minute and increases the arrival rate to 60 arrivals per minute over 60 seconds.
Ramp-up Steps Count: here you set the "granularity" of the increasing arrival rate; more steps gives a smoother pattern, fewer steps gives "spikes".
Hold Target Rate Time: it keeps the target arrival rate steady for the specified duration. In your case it will hold 60 arrivals per minute for the final 2 minutes of the run, as explained in the comment above.
So according to these settings, JMeter will ramp up from 0 to 1 arrival per second over one minute and then hold that rate for another 2 minutes.
If you have 1 sampler in the Test Plan it will be something like 153 executions; if you have 2 samplers - 153 executions per sampler, 306 executions in total. The approximate request rate will be about 50 requests/minute (see the sketch below).
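A rough check of where the ~153 figure comes from, assuming the ramp-up is applied in 10 equal-rate steps (a simplification of what the plugin actually schedules):

    target_rate = 60              # arrivals per minute
    ramp_up_min = 1
    steps = 10
    hold_min = 2

    step_len = ramp_up_min / steps                     # 0.1 min per step
    ramp_arrivals = sum(target_rate * i / steps * step_len
                        for i in range(1, steps + 1))  # 33 arrivals during ramp-up
    hold_arrivals = target_rate * hold_min             # 120 arrivals during the hold
    print(ramp_arrivals + hold_arrivals)               # ~153 arrivals in total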

How should I manage expensive reporting crons for users who visit infrequently?

Let's say I have 5M users (for easy math) who vary widely in their visits per month.
User loyalty, in visits per month
1. 1M <1 visits/month
2. 1M 1-10 visits/month
3. 1M 10-50 visits/month
4. 1M 50-100 visits/month
5. 1M >100 visits/month
The goal for each user is to access data that takes (let's say) 1 CPU cycle to fetch (for example... the reality in our situation is much much less, but it's easier math with 1).
Each data fetch takes too long to load inline, so it's preferred to have the data ready for them when they arrive (via crons).
Let's say that in order to satisfy our most active users, we would need to run the cron 10 times a day to have the data ready for them when they want it. (I say "when they want it" because typically that's 4 times within an 8-hour work day, not 4 times spread evenly over 24 hours.) That's 1M (users) * 10 (data fetches) per day, or (at 1 CPU cycle per fetch) 10M CPU cycles for these 1M most active users. The good news is that they're at least using the fetched data.
However, what about our less active users? What strategy do you recommend to still provide relevant, pre-fetched data while avoiding wasted CPU cycles spent fetching data that will rarely or never be seen?
Here's a chart of the minimum cycles required, based on the loyalty breakdown above. The ideal answer would get as close to this as possible.
Group   # Users   Visits/Month   CPU Cycles/Month
1.      1M        0.1            0.1M
2.      1M        1              1M
3.      1M        10             10M
4.      1M        50             50M
5.      1M        100            100M
-------------------------------------------------
Total   5M        161.1          161.1M
If I did the same cron necessary to keep Group 5 happy for everyone, that'd be 500M CPU cycles (roughly 70% wasted).
What do you recommend to minimize wasted CPU cycles while still keeping infrequent users happy (because we still want them to turn into active users)?
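For what it's worth, the waste figure in the question checks out; a quick sketch of the arithmetic (plain Python, using the per-group cycle counts from the chart above):

    users_per_group = 1_000_000
    cycles_per_user = [0.1, 1, 10, 50, 100]     # CPU cycles per user per month, groups 1-5

    minimum = sum(c * users_per_group for c in cycles_per_user)   # 161.1M cycles/month
    flat = 100 * users_per_group * len(cycles_per_user)           # 500M if everyone got Group 5's schedule
    print(minimum, flat, 1 - minimum / flat)                      # ~0.68, i.e. roughly 70% wasted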