JMeter: fail tests if a threshold is exceeded

I'm hoping I can find help here because I didn't find anything on the internet. I have multiple JMeter plans and I want to fail a plan if a throughput threshold for a group of requests is exceeded. How can I get the real measured value from JMeter and fail the test if the threshold is exceeded? I need to do this per request, like the throughput value displayed in the Summary Report for each group of requests.
Thank you in advance.

You cannot fail the "plan"; you can only fail a sampler, using an Assertion.
The options are:
Using the JMeter AutoStop Plugin, stop the test if the average response time exceeds the threshold. After the test finishes, you can compare the anticipated duration with the real duration and, if the real one is shorter, mark the test as failed somehow (e.g. return a non-zero exit code).
Using the Taurus tool as a wrapper for your JMeter test, you can use its Pass/Fail Criteria subsystem to set the desired failure conditions. Taurus will automatically fail the test if the specified criteria are met.
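For example, a minimal Taurus config wrapping an existing plan might look like the sketch below. The transaction label, thresholds and file name are placeholders, and the exact criteria subjects should be checked against the Taurus passfail documentation:

execution:
- concurrency: 10
  hold-for: 5m
  scenario:
    script: existing-plan.jmx        # your current JMeter test plan

reporting:
- module: passfail
  criteria:
  # response-time threshold for one group of requests (label is illustrative)
  - avg-rt of MyTransaction>500ms for 1m, stop as failed
  # throughput-style threshold: fail if fewer than 600 requests are seen in the window
  - hits of MyTransaction<600 for 1m, continue as failed

Running bzt on this file then exits with a non-zero code when a criterion marks the test as failed, which a CI job can pick up.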

Related

Best way to select test sample size for huge test data set

I am writing an automation test case for lease-time validation for DHCP subscribers. The total number of subscribers will vary from 1000 to 10000. The logic introduces some sleep in between and checks that the value of the lease timer is less than the configured lease time, so this logic cannot be applied to every subscriber. I want to randomly select n subscribers from the total subscriber count.
Can anyone suggest a simple yet effective way to select the test sample size (n) that will give the best coverage of the test samples? Since the verification logic is complex, the test data selection process should be simple in order to keep the test execution time down.

Purpose of setting the loop count

What is the purpose of setting the loop count? Does it just depend on how many times I want to run the test, or does it have some other purpose? Will different loop counts affect the final test result?
"If you give loop count as 2 then every request two times to the server"
I found this online, but I don't understand what it means.
Based on my understanding, the loop count is set to 2 because I want to repeat the test twice only. After the first test ends, the threads from the first round die before the second test starts; then the new thread group sends the requests to the server. Why "every request two times to the server"?
The loop count means each thread of your thread group will run the steps inside the loop twice if the iteration count is set to 2.
Threads start based on the delay and ramp-up settings; that is not related to the loop count.
If your server has a concurrent-user limit, for example 100, and you want to execute more requests, say 600, you can set the loop count to 6 and execute 600 requests within the given server limits.
It's the number of times for each JMeter thread (virtual user) to execute Samplers inside the Thread Group
Each JMeter thread executes the Samplers from top to bottom (or according to the Logic Controllers), so when there are no more Samplers to execute the thread shuts down. It might be the case that you won't be able to achieve the desired concurrency because some threads have already finished execution while others haven't yet started, as described in JMeter Test Results: Why the Actual Users Number is Lower than Expected. So you might want to increase the number of iterations, or even set it to "Infinite" and control the test duration using the "Duration" section of the Thread Group or a Runtime Controller.
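To make the arithmetic above concrete: with 100 threads and a loop count of 6, each Sampler in the Thread Group is executed 600 times in total. In the saved .jmx file this corresponds roughly to the fragment below (values are illustrative):

<ThreadGroup guiclass="ThreadGroupGui" testclass="ThreadGroup" testname="Thread Group" enabled="true">
  <stringProp name="ThreadGroup.num_threads">100</stringProp>   <!-- virtual users -->
  <stringProp name="ThreadGroup.ramp_time">60</stringProp>      <!-- ramp-up, seconds -->
  <elementProp name="ThreadGroup.main_controller" elementType="LoopController"
               guiclass="LoopControlPanel" testclass="LoopController" testname="Loop Controller" enabled="true">
    <boolProp name="LoopController.continue_forever">false</boolProp>
    <stringProp name="LoopController.loops">6</stringProp>       <!-- each thread iterates 6 times -->
  </elementProp>
</ThreadGroup>

Setting the loop count to Infinite in the GUI (stored as -1) and using the Thread Group's scheduler/duration fields is the time-based alternative mentioned above.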

What is the API usage (requests per seconds) limit of Amadeus test environment?

I am trying to call the Amadeus API in parallel (/v1/shopping/hotel-offers) in the test environment. Unfortunately, when I start 3 threads simultaneously, only the very first one gets an OK response and the others get HTTP 429 Too Many Requests responses.
I have not exceeded the monthly limit quota yet, so that error is really related to the parallel execution.
Does anybody know what the exact limits are (#requests/sec or #requests in parallel)? Is it even possible to have more than one request at a time?
The throttling is not the same in each environment:
Test: 10 transactions per second per user (10 TPS/user), with the constraint that you send no more than 1 request every 100 ms.
Production: 20 transactions per second per user (20 TPS/user), with the constraint that you send no more than 1 request every 50 ms.
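So in the test environment the calls have to be spaced out rather than fired in parallel. A rough shell illustration (the query parameters and token handling are placeholders, not the exact Amadeus API contract):

# Illustrative only: respect the test tier's "1 request every 100 ms" constraint
# by sending requests sequentially with a small pause between them.
for i in $(seq 1 30); do
  curl -s -H "Authorization: Bearer $TOKEN" \
    "https://test.api.amadeus.com/v1/shopping/hotel-offers?cityCode=PAR" > /dev/null
  sleep 0.11   # a little more than 100 ms between calls keeps you below 10 TPS
done

In JMeter, the equivalent is to use a single thread or add a timer that paces the samplers instead of starting several threads at once.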

Measure query execution time excluding start-up cost in postgres

I want to measure the total time taken by Postgres to execute my query, excluding the start-up cost. Earlier I was using \timing, but I have now found that \timing includes the start-up cost.
I also tried EXPLAIN ANALYZE, in which I found that the actual time is given in a format like: actual time=12.04..12.09
So does this mean that the time taken to execute the Postgres query, excluding start-up time, is 0.05? If not, is there a way to exclude start-up costs and measure query execution time?
What you want is actually quite ill-defined.
"Startup cost" could mean:
network connection overhead and back-end start cost of establishing a new connection. Avoided by re-using the same session.
network round-trip times for sending the query and getting the results. Avoided by measuring the timing server-side with log_min_duration_statement = 0 or (with timing overhead) using EXPLAIN ANALYZE or the auto_explain module.
Query planning time. Avoided by PREPAREing the query, then timing only the subsequent EXECUTE.
Lock acquisition time. There is not currently any way to exclude this.
Note that using EXPLAIN ANALYZE may not be ideal for your purposes: it throws the query result away, and it adds its own costs because of the detailed timing it does. I would set log_min_duration_statement = 0, set client_min_messages appropriately, and capture the timings from the log output.
So it sounds like you want to PREPARE a query and then EXPLAIN ANALYZE EXECUTE it, or just EXECUTE it with log_min_duration_statement set to 0.
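A minimal sketch of that approach, using a placeholder table and query of your own:

-- Prepare once so that parsing happens up front.
PREPARE my_query (int) AS
    SELECT count(*) FROM my_table WHERE id > $1;

-- EXPLAIN ANALYZE EXECUTE reports the execution timing of the prepared statement
-- (PostgreSQL may still build a custom plan for the first few executions).
EXPLAIN ANALYZE EXECUTE my_query(100);

-- Or log the server-side duration of every statement (superuser required)
-- and read the timings from the server log instead:
SET log_min_duration_statement = 0;
EXECUTE my_query(100);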
For exploring planning costs and execution costs separately, you need to turn on several postgresql.conf parameters:
log_planner_stats = on
log_executor_stats = on
and explore your log file.
Update:
1. Find your config file location by executing:
SHOW config_file;
2. Set the parameters. Don't forget to remove the comment symbol '#'.
3. Restart the PostgreSQL service.
4. Execute your query
5. Explore your log file.
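If you are a superuser, you can also flip the same parameters for just your session instead of editing the config file, and the statistics appear in the server log for the statements you run (the table name is a placeholder):

SET log_planner_stats = on;
SET log_executor_stats = on;
SELECT count(*) FROM my_table;   -- the query you want to profile
-- The planner and executor statistics blocks for this statement now appear
-- in the PostgreSQL server log.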

Running rational performance tester on a schedule

Is it possible to run Rational Performance Tester once every hour and generate a report which contains all response times for every hour for all pages? Like this:
hour 1: hello.html min avg max time
hour 2: hello.html min avg max time
hour 3: hello.html min avg max time
If you use an ordinary schedule and let it iterate once every hour, all response times get lumped together in the report like this:
hello.html min avg max count=24
Would it be possible to start RPT from a script, run a specific project/schedule, and then let cron run that script every hour?
To run Rational Performance Tester tests automatically, you can use the command-line feature that is built into the tool. If you create a Windows scheduled task that runs a .bat file (or a Unix crontab entry that runs a shell script) containing the following command, that solves the first bit of calling the RPT test automatically.
cmdline -workspace "workspace" -project "testproj" -schedule "schedule_or_test"
For more details on the above command, refer to the link below:
Executing Rational Performance tester from command line
For the second bit, producing the response-time report automatically, there seems to be no easy way (which is a shame), but you can write custom Java code to log page response times into a text file.
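For the cron part, the wrapper can be as small as the sketch below; the install path, workspace location and launcher name are placeholders, so use whatever the command-line article above gives you for your RPT version:

#!/bin/sh
# run_rpt.sh -- kick off one run of the schedule from the command line
cd /opt/IBM/SDP   # RPT install directory (assumption)
./cmdline -workspace "/home/perf/workspace" -project "testproj" -schedule "schedule_or_test"

# crontab entry (crontab -e): run the wrapper at the top of every hour and keep a log
# 0 * * * * /home/perf/run_rpt.sh >> /home/perf/rpt_cron.log 2>&1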
You can certainly schedule that task using Rational Quality Manager, IBM's centralized QA management tool. In the same tool you can also start your test plan from Java code, which lets you manage that.
Hope this helps.
Why would you want to do that? It sounds like you are looking for a way to monitor a running website! If so, there are much simpler ways, such as adding %D to the Apache LogFormat to write out the time taken to serve the page, and processing your web logs every hour instead :-)
If you really want to do this, then don't use RPT; use JMeter or something more command-line friendly, and it would be easy. In fact, if it's just loading a page, then curl on a cron job would do it.
Well, it is not a single page; it is a WebSphere Portal running on a mainframe, so it is not just a matter of opening up an Apache config.
I haven't looked into JMeter, but we have a couple of steps that must be done in the test (log in, do some things, and log out) that we want to measure, and we already have a test flow in RPT that we use, so it would be nice to reuse it even if it is not what RPT is meant for.
//Olme
You can use several stages in the schedule (select the schedule and add them in the "User Load" tab):
stage 1, duration 1 hour
stage 2, duration 1 hour
stage 3, duration 1 hour
You will get a test result with several stages. Select all the stages and right-click; there is a "Compare" option. After comparing the stages' results, it looks like this:
stage 1: hello.html min avg max time
stage 2: hello.html min avg max time
stage 3: hello.html min avg max time