I'm having trouble figuring out why I'm seeing several SSL/TLS handshakes on the same page (i.e. for several resources on the same page, meaning multiple HTTP requests), when both Keep-Alive and session identifiers/tickets are active on the website/server.
I've recently activated TLS (HTTPS) on my website and therefore wanted to check what impact this had on the speed/load performance of the site. When going through the waterfall diagrams from various speed tests on the internet (e.g. tools.pingdom.com and webpagetest.org) and from Chrome Developer Tools, I see multiple SSL handshakes/negotiations on the same page, for different content. You can see an image of this here:
As can be seen, there are multiple SSL negotiations for different HTTP requests within the same domain. I'm wondering about this, as both Keep-Alive and session identifiers/tickets are active (checked via multiple tests such as the ones from webpagetest.org and ssllabs.com/ssltest/). Please also note that I don't have access to the server (Apache) configuration, as I'm on a shared host.
Is what I'm experiencing possibly:
due to the server configuration limiting the number of connections?
a misconfiguration of some sort?
something else entirely?
me having misunderstood something?
Please note that I'm a complete rookie in this field; I've tried to find as much information on this topic as I could, but sadly no answer.
In case you would like to test something for yourself, the website is https://www.aktie-skat.dk
It is normal for a browser to establish multiple parallel connections to the same site, since each connection can only request and load a single resource at a time. Even with HTTP keep-alive these resources do not get loaded in parallel over a single HTTP/1.x connection, but only one after the other. This is only different with HTTP/2. Apart from that, some requests might result in a Connection: close from the server, which requires the client to use a different connection for the next requests.
In detail: the first two handshakes start at 0.362s and 0.333s and each take about 100ms. These are full handshakes. All the other TLS handshakes are much shorter (about 50ms) and are thus abbreviated handshakes using session resumption. The second TCP/TLS connection could not use session resumption yet, since the TLS handshake for the first connection was not finished and thus no session was available to resume.
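If you want to see the difference between a full and an abbreviated handshake for yourself, here is a minimal Python sketch (standard library only, nothing specific to your server; your hostname is only used as an example) that performs one full handshake, saves the TLS session, and then reopens a connection with that session so the second handshake can be resumed:

    import socket
    import ssl

    HOST = "www.aktie-skat.dk"  # hostname taken from the question; any HTTPS site works

    ctx = ssl.create_default_context()

    # First connection: no session is available yet, so this is a full handshake.
    with socket.create_connection((HOST, 443)) as raw1:
        with ctx.wrap_socket(raw1, server_hostname=HOST) as tls1:
            # With TLS 1.3 the session ticket only arrives after the handshake,
            # so do a little I/O before grabbing the session.
            tls1.sendall(b"HEAD / HTTP/1.1\r\nHost: " + HOST.encode() + b"\r\nConnection: close\r\n\r\n")
            tls1.recv(4096)
            print("first connection resumed:", tls1.session_reused)   # False
            saved_session = tls1.session

    # Second connection: offer the saved session so the server can do an
    # abbreviated handshake (session resumption) instead of a full one.
    with socket.create_connection((HOST, 443)) as raw2:
        with ctx.wrap_socket(raw2, server_hostname=HOST, session=saved_session) as tls2:
            print("second connection resumed:", tls2.session_reused)  # usually True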
Related
I've run a speed test with webpagetest. My website is served over SSL. For some reason the SSL negotiation happens twice.
There is one SSL negotiation for the index HTML, which seems correct. The second request is done by fetch. I assume that the second SSL negotiation is not necessary.
fetch("/api/menu")
For the remaining requests to the same domain there are no more negotiations.
There is first a TCP connect for the menu request, which is then followed by the SSL setup. This means the browser is not using the previously established TCP connection for the new request but creates a new one, and this new connection of course needs SSL too.
It is quite normal that browsers have several connections open to the same site when HTTP/1.1 is in use since only a single request can be handled at a time within one connection (this is different with HTTP/2). Since in your case the first connection is still in use for other requests, creating a new connection might speed up the total delivery time.
It can also be seen that the second SSL setup takes less time than the first. This is probably because it is doing a session resume, i.e. using the same SSL session as established in the first connection, which speeds up the TLS handshake.
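By contrast, when a later request can reuse the already-open connection (HTTP keep-alive), there is no new TCP connect and no new SSL setup at all. A minimal Python sketch of that difference, assuming the requests library and using example.com plus a made-up /api/menu path rather than your real site:

    import requests

    # A Session keeps connections in a pool, so the second request to the same
    # host can ride on the already-established TCP/TLS connection: no new TCP
    # connect and no new SSL negotiation.
    with requests.Session() as session:
        index = session.get("https://example.com/")          # TCP connect + TLS handshake
        menu = session.get("https://example.com/api/menu")   # reuses the pooled connection
        print(index.status_code, menu.status_code)

    # Two independent requests.get() calls, on the other hand, each open and
    # close their own connection, so each one pays for its own handshake.
    requests.get("https://example.com/")
    requests.get("https://example.com/api/menu")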
I'm testing SSL/TLS stream proxying within NGINX, which will connect to a web server that uses GnuTLS as the underlying TLS API. Using the command-line test tool in GnuTLS (gnutls-serv) the entire process works, but I can't understand the logic:
the NGINX client (proxying HTTP requests from an actual client to the GnuTLS server) seems to want to handshake the connection multiple times. In fact, in most tests it seems to handshake 3 times without error before the server will respond with a test web page. Using Wireshark, or just debugging messages, it looks like the socket on the client side (from the perspective of the GnuTLS server) is being closed and reopened on different ports. Finally, on the successful connection, GnuTLS uses a resumed session, which I imagine comes from one of the previously mentioned successful handshakes.
I am failing to find any documentation about this sort of behaviour, and am wondering if this is just an 'NGINX thing.'
Though the handshake eventually works with the test programs, it seems kind of wasteful (to have multiple expensive handshakes) and implementing handshake logic in a non-test environment will be tricky without actually understanding what the client is trying to do.
I don't think there are any timeouts or problems on the transport; the test environment is a few different VMs on the same subnet, connected through a single switch.
NGINX version is the latest mainline: 1.11.7. I was originally using 1.10.something, and the behaviour was similar though there were more transport errors. Those errors seemed to get cleaned up nicely with upgrading.
Any info or experience from other people is greatly appreciated!
Use either RSA key exchange between NGINX and the backend server, or use an SSLKEYLOGFILE LD_PRELOAD shim for NGINX, so that Wireshark has the necessary data to decrypt the traffic.
While a single incoming connection should generate just one outgoing connection, there may be some optimisations in NGINX to fetch common files (favicon.ico, robots.txt).
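For reference, SSLKEYLOGFILE just means the TLS client appends its per-session secrets to a text file that Wireshark can read (TLS protocol preferences, "(Pre)-Master-Secret log filename"). NGINX needs the LD_PRELOAD shim for this, but as a rough illustration of the mechanism, a Python client can write the same key log directly:

    import ssl
    import urllib.request

    # Write the TLS secrets in NSS key log format; point Wireshark at this file
    # to decrypt the captured traffic for these connections.
    ctx = ssl.create_default_context()
    ctx.keylog_filename = "/tmp/tls-keys.log"   # path chosen just for this sketch

    with urllib.request.urlopen("https://example.com/", context=ctx) as resp:
        resp.read()

    # Note: create_default_context() also honours the SSLKEYLOGFILE environment
    # variable (Python 3.8+), so exporting SSLKEYLOGFILE=/tmp/tls-keys.log
    # before running achieves the same thing without the explicit attribute.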
When I test my website's performance, I notice SSL handshakes happening as part of connection setup. I understand that the first request (for the page) needs a full SSL handshake.
But, as you can see from the Pingdom test, only certain other resources perform an SSL handshake; the remaining requests on the page do not.
Can someone please explain the logic behind this?
Resources that are loaded via HTTPS will have their own SSL handshake if new TCP connections are used to retrieve them. If HTTP keep-alive is used, resources on the same server may be retrieved over existing connections. Resources that are retrieved via HTTP instead of HTTPS, or that reside on different servers/domains, will have to use separate connections.
Your test results are fairly useless for diagnosing this, without knowing which resources are being retrieved, where they are being retrieved from, and what protocol they are being retrieved with.
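To make the first point concrete, here is a small Python sketch (standard library only, example hosts instead of your real resources): two resources on the same host share one connection and therefore one handshake, while a resource on another domain needs its own connection and its own handshake.

    import http.client

    # Same host: one TCP connection, one TLS handshake, two resources.
    same_host = http.client.HTTPSConnection("example.com")
    same_host.request("GET", "/")
    same_host.getresponse().read()          # response must be read before reuse
    same_host.request("GET", "/style.css")  # illustrative second resource, no new handshake
    same_host.getresponse().read()
    same_host.close()

    # Different domain: a separate connection, so a separate TCP connect and
    # TLS handshake, even if it ends up being the same physical server.
    other_host = http.client.HTTPSConnection("example.net")  # stand-in for a CDN/other domain
    other_host.request("GET", "/logo.png")                   # illustrative resource
    other_host.getresponse().read()
    other_host.close()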
On the same nginx/apache server:
Scenario 1: can a.test.com and b.test.com share a keep-alive connection?
Scenario 2: can localhost and 127.0.0.1 share a keep-alive connection?
Or must the Host header be consistent for keep-alive to be effective?
For Apache it looks like it's based on the IP address rather than the Host header, though I guess it also very much depends on the client implementation.
https://httpd.apache.org/docs/2.4/vhosts/details.html#hostmatching
Persistent connections
The IP lookup described above is only done once for a particular TCP/IP session while the name lookup is done on every request during a KeepAlive/persistent connection. In other words, a client may request pages from different name-based vhosts during a single persistent connection.
Unless you are using tens or hundreds of domains, I'd say you'd struggle to notice either way, though it should be easy enough to test using developer tools or webpagetest.org to see whether time is spent negotiating a new connection.
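One way to test it from the client side is a short Python sketch (standard library only; a.test.com and b.test.com are the placeholder vhost names from the question) that sends requests with different Host headers over a single persistent connection and checks whether both are answered:

    import http.client

    # One TCP (and TLS) connection to the shared server. With Apache, the vhost
    # is selected per request from the Host header, not per connection.
    conn = http.client.HTTPSConnection("a.test.com")

    conn.request("GET", "/", headers={"Host": "a.test.com"})
    resp_a = conn.getresponse()
    resp_a.read()
    print("a.test.com:", resp_a.status)

    # Same connection, different Host header: if keep-alive really is shared
    # across name-based vhosts, this is served by the b.test.com vhost.
    # (Some servers reject a Host that does not match the TLS SNI, so a 421 or
    # 400 here is also an informative result.)
    conn.request("GET", "/", headers={"Host": "b.test.com"})
    resp_b = conn.getresponse()
    resp_b.read()
    print("b.test.com:", resp_b.status)

    conn.close()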
I'm using JMeter to load test my web application. I have two web servers and we are using HAProxy for load balancing. All my tests run fine and are configured correctly. I have three JMeter remote clients so I can run my tests distributed. The problem I'm facing is that ALL my JMeter requests are being processed by only one of the web servers. For some reason it's not balancing, and I'm getting many timeouts and huge response times. I've looked around a lot for a way to get these requests balanced, but I've had no luck so far. Does anyone know what could be the cause of this behavior? Please let me know if you need to know anything about my environment and I will provide the answers.
Check your haproxy configuration:
What is its load balancing policy? If it's not round-robin, is it based on the source IP or some other information that might be common to your 3 remote machines?
Are you sure load balancing is working correctly? Try testing with a browser first; if you can, add some information identifying the web server to the response to help debugging.
Check your test plan:
Are you sure you don't have a hardcoded session ID somewhere in your requests?
How many threads did you configure?
In your JMeter script, the HTTP Request "Use KeepAlive" option is checked by default.
Keep-Alive is a header that maintains a persistent connection between client and server, preventing a connection from breaking intermittently. Also known as HTTP keep-alive, it can be defined as a method to allow the same TCP connection for HTTP communication instead of opening a new connection for each new request.
This may cause all requests to go to the same server. Just uncheck the option, save, stop your script, and re-run.
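The mechanism, in case it helps: with "Use KeepAlive" checked, each JMeter thread keeps one TCP connection open, and a proxy that balances per connection (or by source IP) will keep sending everything on that connection to the same backend. A rough Python sketch of the difference, assuming a placeholder URL and a hypothetical X-Backend debug header that your web servers would add to responses:

    import requests

    URL = "https://myapp.example.com/"   # placeholder for the app behind HAProxy
    BACKEND = "X-Backend"                # hypothetical header identifying the web server

    # Keep-alive: all ten requests reuse one connection, so a proxy balancing
    # per connection reports the same backend every time.
    with requests.Session() as s:
        print("keep-alive:", [s.get(URL).headers.get(BACKEND) for _ in range(10)])

    # One connection per request (the equivalent of unchecking "Use KeepAlive"):
    # each request is balanced independently, so with round-robin the backends
    # should now alternate.
    print("no keep-alive:", [requests.get(URL, headers={"Connection": "close"}).headers.get(BACKEND)
                             for _ in range(10)])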