JMeter: Non HTTP response message: Connection to URL refused - ssl

I'm using JMeter to test our backend services via OS Samplers, which invoke curl to generate the load for a four-step process:
1. POST the certificate to receive the token
2. POST the token to receive the session
3. GET session info
4. POST renew session
The issue I'm facing is that JMeter reports much higher response times than the service logs. We need to identify where the extra time (+125 ms with 1 concurrent user) comes from during the transaction. The test environment is all on the same VLAN, with no firewalls or proxy servers between the client and target servers. The median latency between the two servers is 0.3 ms, with an average of 1.2 ms (small sample size). The dev team states that the service logs don't capture the very first moment a request is received, but they can't see how that could account for more than a few ms of difference. Data from a few tests, in which throughput increases while the overhead stays roughly constant, is consistent with that assumption.
So at this point we're focusing on whether JMeter itself is causing the extra overhead. One assumption is that JMeter starts the transaction timer when it begins to spawn the curl process, so the cost of packaging the request is included in the measurement. To rule that out, we want to replace the curl OS Sampler with an HTTP Sampler.
When converting the OS Sampler curl request to an HTTP Sampler HTTPS request, we're running into the error: JMeter: Non HTTP response message: Connection to URL refused. As stated above, we first POST the certificate, then POST the token, followed by steps 3 and 4. The HTTP Sampler fails on the second step, when POSTing the token acquired in the first step. We've verified that the token is good by continuing on error and running the original curl POST for the second step, which succeeds. So there are two things to note here: 1. the error message indicates the handshake never completes, so the request is never processed; 2. the subsequent curl request using the same information completes the handshake and processes the transaction correctly.
Making the conversion boils down to one question: "Why would an OS Sampler curl command complete while an HTTP Sampler fails to complete the handshake?"
The OS Sampler curl command is configured as:
curl -k -d "" -v -H "{token}" {URL}
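(For reference, with the placeholders expanded this looks something like the line below; the header name is an assumption, since the token could travel under any header key:)
curl -k -v -d "" -H "Authorization: Bearer {token}" https://target-host:8443/path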
The HTTP Sampler is configured as:
IP: {URL}
PORT: {Port#}
Implementation: HttpClient4
Protocol: HTTPS
Method: POST
Path: {path}
Use KeepAlive: Checked
Header Manager: {token}
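As a diagnostic, the curl -k behaviour (accept any certificate, skip hostname verification) can be reproduced outside JMeter with a small standalone HttpClient4 program. This is a minimal sketch, not a recommended production setup; the URL and header name are placeholders:

import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.conn.ssl.NoopHostnameVerifier;
import org.apache.http.conn.ssl.SSLConnectionSocketFactory;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.ssl.SSLContextBuilder;

public class CurlKEquivalent {
    public static void main(String[] args) throws Exception {
        // Trust any certificate and skip hostname checks, like curl -k.
        SSLConnectionSocketFactory sslsf = new SSLConnectionSocketFactory(
                SSLContextBuilder.create()
                        .loadTrustMaterial(null, (chain, authType) -> true)
                        .build(),
                NoopHostnameVerifier.INSTANCE);

        try (CloseableHttpClient client = HttpClients.custom()
                .setSSLSocketFactory(sslsf).build()) {
            HttpPost post = new HttpPost("https://target-host:8443/path"); // placeholder URL
            post.setHeader("X-Auth-Token", "{token}");                     // placeholder header
            try (CloseableHttpResponse resp = client.execute(post)) {
                System.out.println(resp.getStatusLine());
            }
        }
    }
}

If this program completes the handshake while the HTTP Sampler does not, the difference lies in the sampler's SSL configuration rather than in HttpClient4 itself.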

There are two separate questions in your post. Regarding the first:
How are you measuring latency between the servers? If you're using ping, you're measuring the round-trip time of a single send and receive. An HTTP POST involves much more than that: the TCP handshake's back and forth, then the content itself, which depending on size can be split across several packets (HTTP responses are usually larger than requests). Latency can also be slightly higher for large payload packets than for a simple ping.
This might not account for the whole difference you're seeing (like you've noted, some of it comes from the delay in launching curl), but it still contributes to the overall latency. You should use a network analyzer of some sort, at the very least a sniffer like Wireshark, to understand the chattiness, i.e. the number of back-and-forth turns, of each HTTP step you're using.

Related

Latency from JMeter between Servers in Same Network

I am testing an API in JMeter from a Linux load generator machine. When executed in non-GUI mode, I see a latency of 35 s, but when I ran a ping from the load generator to the app server the time was just a few milliseconds.
The View Results Tree listener shows 35 s of latency.
Both servers are on the same network, so why is there so much latency?
You're looking at two different metrics.
Ping sends an ICMP packet, which just indicates success or failure in communicating between two machines.
Latency includes:
Time to establish connection
Time to send the request
Time required for the server to process the request
Time to get 1st byte of the response
In other words, latency is time to first byte; if your server needs 35 seconds to process the request, that indicates a server-side issue rather than a network issue.
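To make the distinction concrete, here is a minimal sketch in Java that separates connect time from time-to-first-byte; the URL is a placeholder:

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://example.com/");   // placeholder endpoint
        long t0 = System.nanoTime();
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.connect();                              // TCP (and TLS) setup only
        long t1 = System.nanoTime();
        try (InputStream in = conn.getInputStream()) {
            in.read();                               // first byte of the response
            long t2 = System.nanoTime();
            System.out.printf("connect: %d ms, first byte after: %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t0) / 1_000_000);
        }
    }
}

A ping, by contrast, measures only the single round trip and none of the connection setup or server processing time.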
More information:
JMeter Glossary
Understanding Your Reports: Part 1 - What are KPIs?

JMeter and Connect Times for SSL Connections

For a benchmarking test, I have a very basic setup: a single user looping 100 times (loop delay 100 ms), hitting an HTTPS endpoint (GET) with the HttpClient4 implementation and keep-alive turned on.
In the test results, I have observed a pattern where every 5th or 6th request the connect metric is higher, as if a full SSL handshake were occurring. I am a bit confused by this; any ideas what's going on here and why the connect times are higher every nth request?
[UPDATE]
I was able to troubleshoot this further today after turning on access logs on the load balancer (the target of this test). I can see a pattern in which JMeter switches client-side ports every few requests; the frequency matches the pattern previously observed in the JMeter test results.
This probably explains the elevated connect times; now the question is why JMeter switches ports.
This could be keep-alive; it certainly was for my issue. First, make sure it's enabled on the sampler. There's also a JMeter setting that controls how long connections are kept alive:
httpclient4.time_to_live
I set it to 120000 in jmeter.properties, although per the docs the user.properties file should be used. I know that jmeter.properties with a value of 120000 worked for me.
I set the value high to see whether an HTTP keep-alive timeout was causing the port switch. Whatever you set it to, make sure the client you are emulating does the same.
Since you do get some quick results, I would guess it is a short timer somewhere rather than the server disallowing keep-alive entirely. Wireshark can help you pinpoint this, as it could be the server side resetting the connection after a certain time. The config above extends the client-side time-to-live, which may give you the information you need; if not, look at the server-side equivalent, which will vary depending on what serves the endpoint.
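For reference, the override is a single line (the 120000 ms value was my choice for debugging, not a recommendation):

# user.properties (or jmeter.properties); connection time-to-live in milliseconds
httpclient4.time_to_live=120000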

jmeter HTTP response code: org.apache.http.conn.HttpHostConnectException,Non HTTP response message: Connection refused Error

I am working with JMeter for load testing. I am using an Amazon server. When I test the load with 400 concurrent users, I get the error message:
HTTP response code: org.apache.http.conn.HttpHostConnectException,Non HTTP response message: Connection refused
Up to 400 threads it works fine and returns responses.
We are using XAMPP with the Apache server. Can anyone help me out?
Make sure that your Apache is configured to accept as many as 400+ concurrent users. Here are instructions on calculating and setting this.
Make sure that your JMeter is configured to produce as many as 400+ concurrent users. Here is the guide on proper JMeter tuning.
Make sure that your Apache and JMeter systems are not overloaded and have enough spare CPU, RAM, Network and Disk IO, disk space, swap, etc. You can use PerfMon JMeter Plugin for monitoring systems health.
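On the Apache side, a minimal sketch of the relevant directives, assuming the prefork MPM (the numbers are illustrative, not tuned values):

# httpd.conf - allow enough simultaneous worker processes for the test
<IfModule mpm_prefork_module>
    ServerLimit          500
    MaxRequestWorkers    500   # named MaxClients before Apache 2.4
</IfModule>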

FIN pkt with HTTP connection after some time

I opened two TCP connections:
1. a normal connection (while implementing an echo server and client), and
2. an HTTP connection.
I opened the HTTP connection with a (modified) curl utility, running Apache as the server, where curl does not send its GET request for some time after the connection is established.
On the normal connection, after establishment the server simply waits for a request from the client.
Strangely, on the HTTP connection, if the GET request does not arrive from the client for some time after establishment, the server sends a FIN packet to the client and closes its side of the connection.
Is it mandatory for an HTTP client to send its GET request immediately after the initial connection?
Apache has a parameter called Timeout.
Its manual page (Apache Core - Timeout Directive) states:
The TimeOut directive defines the length of time Apache will wait for I/O in various circumstances:
When reading data from the client, the length of time to wait for a TCP packet to arrive if the read buffer is empty.
When writing data to the client, the length of time to wait for an acknowledgement of a packet if the send buffer is full.
In mod_cgi, the length of time to wait for output from a CGI script.
In mod_ext_filter, the length of time to wait for output from a filtering process.
In mod_proxy, the default timeout value if ProxyTimeout is not configured.
I think you fell into case number one: Apache was waiting for your GET request, hit the timeout, and closed the connection with a FIN.
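If you need your modified curl to idle longer before sending the GET, raising the directive is a one-liner (the value below is illustrative, in seconds):

# httpd.conf - how long Apache waits for I/O before giving up on the connection
Timeout 120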
EDIT
I was digging through the W3C HTTP document and found no direct reference to timeouts,
but in chapter 8 (Connections) I found:
8.1.4 Practical Considerations
Servers will usually have some time-out value beyond which they will no longer maintain an inactive connection. (...) The use of persistent connections places no requirements on the length (or existence) of this time-out for either the client or the server.
That sounds to me like "every server or client is free to choose its own behaviour for inactive connection timeouts".

HTTP server - slow read

I am trying to simulate a slow HTTP read attack against an Apache server running on my localhost.
But it seems the server does not complain and simply waits forever for the client to read.
This is what I do:
Request a huge file (say ~1 MB) from the HTTP server
Read the response from the server in a loop, waiting 100 s between successive reads
Since the file is huge and the client receive buffer is small, the server has to send the file in multiple chunks. But on the client side I wait 100 s between successive reads. As a result, the server often polls the client and finds that the client's receive window is zero, since the client has not yet drained the receive buffer.
But it looks like the server does not bother to break the connection; it silently keeps polling the client, sends data whenever the client window size is > 0, and then goes back to waiting for the client.
I want to know whether there are any Apache config parameters I can set to break the connection from the server side after waiting some time for the client to read the data.
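For context, a minimal sketch of the slow-read client described above, assuming a plain-HTTP Apache on localhost:80 and a hypothetical /bigfile resource:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class SlowReader {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket()) {
            // Shrink the receive buffer before connecting so the advertised
            // TCP window is small from the start.
            s.setReceiveBufferSize(256);
            s.connect(new InetSocketAddress("localhost", 80));
            OutputStream out = s.getOutputStream();
            out.write(("GET /bigfile HTTP/1.1\r\n"
                     + "Host: localhost\r\n"
                     + "Connection: close\r\n\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();

            InputStream in = s.getInputStream();
            byte[] buf = new byte[32];
            int n;
            while ((n = in.read(buf)) != -1) { // read a few bytes...
                System.out.printf("read %d bytes%n", n);
                Thread.sleep(100_000);         // ...then stall for 100 seconds
            }
        }
    }
}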
Perhaps this would be more useful to you (it's simpler and saves you time): http://ha.ckers.org/slowloris/ - a Perl script that sends partial HTTP requests. The Apache server leaves each connection open (making it unavailable to new users), and if executed on a Linux environment (Linux does not limit threads beyond hardware capability) you can effectively tie up all open sockets and in turn prevent other users from accessing the server. It uses minimal bandwidth because it does not flood the server with requests; it simply, slowly takes the sockets hostage. You can download the script here: http://ha.ckers.org/slowloris/slowloris.pl
To prevent (well, mitigate) an attack like this, see here: https://serverfault.com/questions/32361/how-to-best-defend-against-a-slowloris-dos-attack-against-an-apache-web-server
You could also use a load balancer or a round-robin setup.
Try slowhttptest to test the slow read attack you're describing. (It can also be used to test slow sending of headers.)
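A typical slow-read invocation looks something like the line below; the URL and numbers are placeholders, so check slowhttptest -h for the options your version supports:

slowhttptest -X -u http://localhost/bigfile -c 1000 -r 200 -w 512 -y 1024 -n 5 -z 32 -p 3

Here -X selects the slow-read test, -w and -y bound the advertised window size, -n is the interval between read operations, and -z the number of bytes read per read.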