How to get the average response time for a request using the JMeter testing tool

I am trying JMeter for load and performance testing, so I created a thread group; below is the output of the Aggregate Report.
The first column, Avg request t/s, I calculated using the formula
((Average / Total Requests) / 1000)
but it does not seem right, as I am logging request times in my code and almost every request takes at least 2-4 seconds.
I also tried Median/1000, but again I am in doubt.
What is the correct way to get the average time for a request?
Avg request t/s | Total Requests | Average | Median | Min | Max | Error % | Throughput (requests per time unit) | Received KB | Sent KB
0.07454 | 100 | 7454 | 6663 | 2464 | 19313 | 0 | 3/sec | 2.062251152 | 1.074506499
1.11322 | 100 | 111322 | 107240 | 4400 | 222042 | 0 | 26.3/min | 0.1408915377 | 0.1271878015
1.19035 | 100 | 119035 | 117718 | - | - | 0.03 | 26.3/min | 0.1309013211 | 0.1279624502
1.21287 | 100 | 121287 | 119198 | - | - | 0 | 0.4136384882 | 0.135725129 | 0.1211831508
1.11943 | 100 | 111943 | 111582 | 5257 | 220004 | 0 | 0.4359482965 | 0.1507086884 | 0.1264420352
1.14289 | 100 | 114289 | 114215 | 4543 | 223947 | 0 | 0.4369846313 | 0.1497867242 | 0.1288763268
0.23614 | 150 | 35421 | 26731 | 4759 | 114162 | 0 | 0.9494271789 | 0.3600282257 | 0.1842358496

Don't you have the Throughput (requests per time unit) column already? What else do you need to "calculate"?
As per the Aggregate Report documentation:
Throughput - the Throughput is measured in requests per second/minute/hour. The time unit is chosen so that the displayed rate is at least 1.0. When the throughput is saved to a CSV file, it is expressed in requests/second, i.e. 30.0 requests/minute is saved as 0.5.
So if you click the Save Table Data button,
you will get the average transactions per second in the CSV file.
It is also possible to generate a CSV file with the calculated aggregate values using the JMeter Plugins Command Line Tool:
JMeterPluginsCMD.bat --generate-csv aggregate-report.csv --input-jtl /path/to/your/results.jtl --plugin-type AggregateReport
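If you would rather compute the figures yourself from the raw results, here is a minimal sketch. It assumes a results.jtl saved in CSV format with a header row and JMeter's default elapsed and timeStamp columns; the file name is only an example.

import csv

# Example path -- point this at your own JMeter results file (CSV format with a header row).
RESULTS_FILE = "results.jtl"

elapsed = []   # per-request response times, in milliseconds
starts = []    # per-request start times, in milliseconds since the epoch

with open(RESULTS_FILE, newline="") as f:
    for row in csv.DictReader(f):
        elapsed.append(int(row["elapsed"]))
        starts.append(int(row["timeStamp"]))

# Average response time: this is what the Aggregate Report's "Average" column shows, in ms.
avg_ms = sum(elapsed) / len(elapsed)

# Throughput: requests divided by the span from the first request start to the last
# response end (roughly how the Aggregate Report computes it).
span_s = (max(s + e for s, e in zip(starts, elapsed)) - min(starts)) / 1000.0
throughput = len(elapsed) / span_s

print(f"Average response time: {avg_ms:.0f} ms ({avg_ms / 1000:.2f} s per request)")
print(f"Throughput: {throughput:.2f} requests/second")

Note that the Average column is already the mean elapsed time per request in milliseconds, so dividing it by 1000 gives seconds per request; dividing it by the number of requests is not needed.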

Related

How to build an ongoing alert that catches sudden spikes for a certain http error code?

I could really use an ongoing alert that catches a sudden rise (spike) in a certain error code (such as 404 or 502).
I tried giving this some thought on how to achieve that, and... well... I could really use your help with the script :-)
From my understanding, the search query should "know" or "sense" the normal traffic (not sure for how long, maybe 1-2 hours) and alert when there is a spike in the error code compared to 1-2 hours ago.
I think the error code spike threshold should be more than 5% of total traffic, while occurring for longer than 90 seconds.
Here is a Splunk Query I use today, I appreciate your help tuning it to what I described above:
tag=NginxLogs host=www1 OR host=www2 |stats count by status|eventstats sum(count) as total|eval perc=round((count/total)*100,2)|where status="404" AND perc>5
The top command automatically provides the count and percent.
http://docs.splunk.com/Documentation/Splunk/7.1.2/SearchReference/Top
tag=NginxLogs host=www1 OR host=www2
| top status
| search percent > 5 AND status > 399
If you have the URL, HTTP request method and user in your Splunk logs, you can add them as part of this alert. Example:
tag=NginxLogs host=www1 OR host=www2
| eventstats distinct_count(userid) as NoOfUsersAffected by requestUri,status,httpmethod
| top status,httpmethod,NoOfUsersAffected by requestUri
| search NoOfUsersAffected > 2 AND ((status>499 AND percent > 5) OR (status=400 AND percent > 95))
You can use the following alert message:
$result.percent$ % ($result.count$ calls) has StatusCode $result.status$ for
$result.requestUri$ - $result.httpmethod$.
$result.NoOfUsersAffected$ users were affected
You will get an alert like:
21.19 % (850 calls) has StatusCode 500 for https://app.test.com/hello - GET.
90 users are affected

jmeter and apachetop - why I see different values?

Probably the explanation is simple, but I couldn't find an answer to my question:
I am running a JMeter test from one VM (worker) to another (target). On the worker I have JMeter with 100 threads (100 users). On the target I have an API that runs on Apache. When I run "apachetop -f access_log" on the target, I see only about 7 req/s.
Can someone explain why I don't see 100 req/s on the target?
In the JMeter test results I always see 200 OK, so all requests are hitting the target, and the target always responds, so I am not dropping any requests here. Network bandwidth between the machines is 1G. What am I missing here?
Thanks,
Daddy
100 users doesn't necessarily mean 100 requests per second; in fact, that is highly unlikely.
According to the JMeter glossary:
Elapsed time. JMeter measures the elapsed time from just before sending the request to just after the last response has been received. JMeter does not include the time needed to render the response, nor does JMeter process any client code, for example Javascript.
Roughly, if JMeter is able to get a response from the server in 1 second, you will get 100 requests/second; if the response time is 2 seconds, throughput will be 50 requests/second; if it is 4 seconds, 25 requests/second, and so on.
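To put numbers on that relationship, here is a minimal sketch; the 100 threads and the 14.2-second response time are just the figures from this question (the latter being the per-request time implied by the observed ~7 req/s), not measured values.

def expected_throughput(threads: int, avg_response_s: float) -> float:
    """Requests per second that `threads` users can generate when each request
    takes avg_response_s seconds and there is no think time between requests."""
    return threads / avg_response_s

print(expected_throughput(100, 1.0))    # 100.0 req/s -> the 100-user expectation
print(expected_throughput(100, 14.2))   # ~7 req/s    -> roughly what apachetop reports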
Also, JMeter configuration matters. If you don't provide enough loops you may run into a situation where some threads have already finished while others have not even started. See the JMeter Test Results: Why the Actual Users Number is Lower than Expected article for a more detailed explanation.
Your target load = 100 threads (you are assuming it should generate 100 req/sec as per your plan)
Your actual load = 7 req/sec = 7 * 3600 = 25200 requests/hour
Per-thread throughput = 25200 / 100 threads = 252 iterations/thread/hour
Per-transaction time = 3600 / 252 = 14.2 secs
This means JMeter is actually sending one request per thread roughly every 14.2 secs, i.e., 100 requests every 14.2 secs.
Now, analyze your JMeter summary report for the transaction timers to find out where the remaining 13.2 secs are being spent.
Possible issues are
1. High DNS resolution time (DNS issue)
2. High connection setup time (indicates load balancer issues)
3. High request send time (indicates network or firewall throttling issues)
4. High request receive time (same as #3)
Now, the time that you see in the Apache logs is mostly visible to JMeter as the time to first byte. I am not sure about the machine you are running your tests from, but if your worker supports curl, use curl to find the timing components of a single request.
echo 'request payload for POST' |
  curl -X POST -H 'User-Agent: myBrowser' -H 'Content-Type: application/json' -d @- -s -w '\nDNS time:\t%{time_namelookup}\nTCP Connect time:\t%{time_connect}\nAppCon Protocol time:\t%{time_appconnect}\nRedirect time:\t%{time_redirect}\nPreXfer time:\t%{time_pretransfer}\nStartXfer time:\t%{time_starttransfer}\n\nTotal time:\t%{time_total}\n' http://mytest.test.com
If the above output indicates no such issues, then the time must be spent within JMeter itself. You should tune your JMeter implementation using options such as Beanshell / JSR223 etc.

Gatling user injection for 50 total users in 1 hour adding 10 users per 5 minutes

I need to set up a Gatling test with a total of 50 concurrent users, but I have a problem because I cannot find an injection profile that achieves it.
I used rampUsers(10) over (60 minutes), but that gives only 10 concurrent users.
Using constantUsersPerSec(users) during (60 minutes) is too stressful.
Is there any suggestion?
Thanks.
This could be done as follows:
val scn = scenario("Test").during(1 hours) {
  exec(http("test").get("/"))
}
setUp(scn.inject(splitUsers(50) into atOnceUsers(10) separatedBy(5 minutes))
  .protocols(httpConf))
see http://gatling.io/docs/2.0.3/general/simulation_setup.html:
splitUsers(nbUsers) into(injectionStep) separatedBy(duration): Repeatedly execute the defined injection step separated by a pause of the given duration until reaching nbUsers, the total number of users to inject.

Mono error when load testing

During load testing (using LoadUI) of a new .NET Web API running on Mono, hosted on a medium-sized Amazon server, I'm receiving the following results (in chronological order over the course of about ten minutes):
5 connections per second for 60 seconds
No errors
50 connections per second for 60 seconds
No errors
100 connections per second for 60 seconds
Received 3 errors, appearing later during the run
2014-02-07 00:12:10Z Error HttpResponseExtensions Error occured while Processing Request: [IOException] Write failure Write failure|The socket has been shut down
2014-02-07 00:12:10Z Info HttpResponseExtensions Failed to write error to response: {0} Cannot be changed after headers are sent.
5 connections per second for 60 seconds
No errors
100 connections per second for 30 seconds
No errors
100 connections per second for 60 seconds
Received 1 error same as above, appearing later during the run
100 connections per second for 45 seconds
No errors
Doing some research on this, the error seems to be a standard one received when a client closes the connection. As it only occurs during the heavier load tests, I am wondering if I am simply reaching the upper limit of what the server instance can support? If not, any suggestions on hunting down the source of the errors?

How to calculate total response time in jmeter?

I have sent 10 consecutive HTTP requests in JMeter.
I have stored the output as a CSV file.
endTimeMillis responseTime latency sentBytes receivedBytes responseCode
1357279943.984 1426 1426 347 287 200
1357279944.685 1888 1888 347 287 200
..............
..............
In the above output file the response time is displayed for each request, but I need to calculate the total response time across the 10 requests.
How do I calculate the total response time in JMeter?
You need a Transaction Controller. Put the elements whose times you want to sum under it. The Transaction Controller will then appear in all your listeners, and its load and latency times will be the sums of those values for its nested elements.
Note that by default this time includes all processing within the controller scope, not just the samples; this can be changed by unchecking "Include duration of timer and pre-post processors in generated sample".
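If you would rather sum the times from the CSV file you already exported, here is a minimal sketch. It assumes a comma-separated file with a header row containing the responseTime column shown in the question; the file name is only an example, and the delimiter may need adjusting to match your export.

import csv

# Example path -- use the CSV file you exported from JMeter.
RESULTS_FILE = "results.csv"

total_ms = 0
with open(RESULTS_FILE, newline="") as f:
    # If your export is not comma-separated, pass delimiter=... to DictReader.
    for row in csv.DictReader(f):
        total_ms += int(row["responseTime"])  # column name taken from the question's output

print(f"Total response time: {total_ms} ms ({total_ms / 1000.0:.2f} s)")

The Transaction Controller gives you the same total inside JMeter itself, without post-processing the file.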