What is the difference between an HTTPS Load-Balancer w/ non-TLS backend and an HTTPS Load-Balancer w/ TLS backend?

I'm trying to configure a load balancer to serve HTTPS with certificates provided by Let's Encrypt. I haven't managed it yet, but this article gives steps for how to configure:
Self-signed certs
Network Load-Balancer w/ TLS backend
HTTPS Load-Balancer w/ non-TLS backend
HTTPS Load-Balancer w/ TLS backend
As I'm interested only in HTTPS, I wonder what the difference is between these two:
HTTPS Load-Balancer w/ non-TLS backend
HTTPS Load-Balancer w/ TLS backend
But I don't mean the obvious difference, that the first one is not encrypted from the load balancer to the backend; I mean the difference in performance and in the HTTP/2 connection. For example, will I continue to get all the benefits of HTTP/2, like multiplexing and streaming? Or is the first option
HTTPS Load-Balancer w/ non-TLS backend
only an illusion, where I won't actually get HTTP/2?

To talk HTTP/2, all web browsers require the use of HTTPS. And even without HTTP/2, it's still good to have HTTPS for various reasons.
So the point your web browser talks to (often called the edge server) needs to be HTTPS-enabled. This is often a load balancer, but could also be a CDN, or just a single web server (e.g. Apache) in front of an application server (e.g. Tomcat).
So the question is: does the connection from that edge server to any downstream servers need to be HTTPS-enabled too? Ultimately the browser will not know either way, so it's not needed for the browser's sake. That leaves two reasons to encrypt this connection:
Because the traffic is still travelling across an insecure channel (e.g. CDN to origin server, across the Internet).
Many feel it's disingenuous to make the user think they are on a secure connection (with a green padlock) when in fact they are not secured for the full end-to-end connection.
This to me is less of an issue if your load balancer is in a segregated network area (or perhaps even on the same machine!) as the server it is connecting to. For example, if the load balancer and the two (or more) web servers it is connecting to are all in a separate area of a DMZ-segregated network, or in their own VPC.
Ultimately the traffic will be decrypted at some point and the question for server owners is where/when in your networking stack that happens and how comfortable you are with it.
Because you want HTTPS for some other reason (e.g. HTTP/2 all the way through).
On this one I think there is less of a good case to be made. HTTP/2 primarily helps high-latency, low-bandwidth connections (i.e. browser to edge node) and is less important for low-latency, high-bandwidth connections (as connections from a load balancer to web servers often are). My answer to this question discusses this more.
In both of the above scenarios, if you are using HTTPS on your downstream servers, you can use self-signed certificates, including long-lived ones. This means you are not bound by the 90-day Let's Encrypt expiry, nor do you need to purchase longer-lived certificates from another CA. As the browser never sees these certificates, you only need your load balancer to trust them, which is in your control to do for self-signed certificates. This is also useful if the downstream web servers cannot talk to Let's Encrypt to get certificates in the first place.
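As a sketch of that approach, a long-lived self-signed certificate for a backend can be generated with openssl (the hostname `backend.internal` and the file names are placeholders, not anything from your setup):

```shell
# Generate a 10-year self-signed certificate for an internal backend.
# "backend.internal" is an example hostname for illustration only.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout backend.key -out backend.crt \
  -days 3650 -subj "/CN=backend.internal"

# Inspect the result: subject and expiry date.
openssl x509 -in backend.crt -noout -subject -enddate
```

The load balancer can then be told to trust `backend.crt` directly (for example via HAProxy's `ca-file` server option), so no public CA is involved at all.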
The third option, if it really is important to have HTTPS and/or HTTP/2 all the way through, is to use a TCP load balancer (which is option 2 in your question, so apologies for confusing the numbering here!). This just forwards TCP packets to the downstream servers. The packets may still be HTTPS encrypted, but the load balancer does not care: it's just forwarding them on, and if they are HTTPS encrypted then the downstream server is tasked with decrypting them. So you can still have HTTPS and HTTP/2 in this scenario; you just have the end-user certificates (i.e. the Let's Encrypt ones) on the downstream web servers. This can be difficult to manage (should the same certificates be used on all of them, or should they have different ones? Do we need sticky sessions so HTTPS traffic always hits the same downstream server?). It also means the load balancer cannot see or understand any HTTP traffic: it's all just TCP packets as far as it is concerned. So there's no filtering on HTTP headers, and no adding new HTTP headers (e.g. X-Forwarded-For with the original IP address).
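A TCP passthrough load balancer of that kind might look like the following HAProxy sketch (the server names and addresses are made up; in `mode tcp` HAProxy never sees the HTTP layer):

```
frontend tls_passthrough
    bind *:443
    mode tcp
    default_backend web_tls

backend web_tls
    mode tcp
    balance leastconn
    # TLS terminates on the web servers themselves, which hold the
    # Let's Encrypt certificates; HAProxy just forwards the bytes.
    server web1 10.0.0.11:443 check
    server web2 10.0.0.12:443 check
```

Because the balancer can't read HTTP here, it also can't inject X-Forwarded-For; the usual workaround, if the backend supports it, is the PROXY protocol (`send-proxy` on the server lines).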
To be honest, it is absolutely fine, and even quite common, to have HTTPS on the load balancer and HTTP traffic to downstream servers, provided the network between the two is secure. It is usually the easiest to set up (one place to manage HTTPS certificates and renewals) and the best supported (e.g. some downstream servers may not easily support HTTPS or HTTP/2). Using HTTPS on this connection, either with self-signed certificates or CA-issued certificates, is equally fine, though it requires a bit more effort, and the TCP load balancer option is probably the most effort.

Is HTTPS necessary on internal proxies?

I have drawn the following chart to help explain our situation:
We have an API running on the Kestrel web server in the back end, running as several instances in Docker containers. In front of this, we have an HAProxy load balancer which directs the traffic to the Kestrel instance with the least connections.
All of this so far is only available internally on our network.
We have one server accessible from the internet, running IIS on port 80 and 443.
In IIS we have added an SSL certificate which encrypts the communication between the end user and our first entry point.
Is there any further need to encrypt the communication between IIS, which proxies the request to our HAProxy on the internal network, and from there on, is it necessary to encrypt the communication from HAProxy to the backends?
As far as my knowledge goes, the only benefit of this, would be that nobody on the internal network at our office can listen to the connections going between the servers, but please correct me if I am wrong here.

SSL Certificate issue with using HTTP/2 as a communication mechanism inside LAN

We are running servers which run multiple processes each, which in turn currently communicate with each other using HTTP (using 1-2 physical servers per client, and there are multiple clients with separate servers).
The servers are hosted locally per client.
We're thinking of migrating our nginx service, which is serving static files (multiple images, videos), to HTTP/2, in order to speed things up, as it is very common to request 1000 images at a time, which is an area where HTTP/2 excels.
For the client side we're using a Chromium-based (Electron) client.
A problem arises from the above: a TLS certificate is required when using HTTP/2 in the version of Chromium we're using. Since this is a LAN, there's no domain name, and even the IP addresses are not guaranteed to be static.
note: Using TLS is just a bonus, our main goal is to get the latency improvement from HTTP/2.
Is there a way around this?
The solution was to issue a self-signed certificate for a domain, which was added to the hosts file of all affected clients. The certificate authority was manually declared as trusted on all client machines.
For a more general solution, one could use any DNS resolution option, so long as it is consistent across all clients, and any signed certificate; a self-signed certificate additionally requires manually adding the CA file to all the clients.
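A minimal sketch of that setup with openssl (the name `app.lan` and all file names are placeholders for illustration):

```shell
# 1. Create a private CA; its certificate is what each client machine trusts.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ca.key -out ca.crt -subj "/CN=Example LAN CA"

# 2. Create a key and certificate signing request for the internal name.
openssl req -newkey rsa:2048 -nodes \
  -keyout app.key -out app.csr -subj "/CN=app.lan"

# 3. Sign the request with the CA, adding a subjectAltName
#    (Chromium rejects certificates that only carry a CN).
printf "subjectAltName=DNS:app.lan\n" > san.ext
openssl x509 -req -in app.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 825 -out app.crt -extfile san.ext

# 4. Check the chain the way a client would.
openssl verify -CAfile ca.crt app.crt
```

Each client then maps `app.lan` to the right IP in its hosts file and imports `ca.crt` into its trust store; the nginx server gets `app.key` and `app.crt`.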

TCP Load Balancer In Front of TLS/SSL Endpoints

Last week I was playing with a load balancer for my TLS-enabled endpoints (which share the same certificate) and was surprised that it is possible to have a TCP load balancer in place in front of SSL endpoints. Having that configured, it was possible to communicate with the TCP load balancer as if it were configured to support TLS/SSL itself. So, I would like to confirm that such a network configuration is a fully working solution:
TLS/SSL session and handshake workflow are stateless, meaning it is possible to start a handshake with a primary server and end it with a mirror. Is this true?
Are there any hidden dangers I must be aware of?
If the previous statements are true, what is the reason to do all the TLS/SSL work on the load balancer itself?
P.S. The reason I do not do TLS/SSL work on the load balancer is that I need to balance multiple proprietary endpoints that only support SSL/TLS.
TLS/SSL session and handshake workflow are stateless, meaning it is possible to start a handshake with a primary server and end it with a mirror. Is this true?
No. I suspect your load balancer is using TCP keep-alive so that the handshake is completing on the same server every time.
Are there any hidden dangers I must be aware of?
You may be incurring a significant performance penalty. TLS sessions have session keys that are, probably by default, unique to each server. If you aren't able to do something like sticky sessions with the load balancer, then clients will do a full handshake every time they move from one server to the other.
You will also have session tickets that won't work between servers, so session resumption will probably not work either and will fall back to a full handshake. Some servers, like nginx, support configuring a common session ticket key.
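As a sketch of the nginx case: a shared ticket key can be generated once and distributed to every server behind the balancer (the file path is an example; nginx's `ssl_session_ticket_key` directive expects a 48- or 80-byte file):

```shell
# Generate one 80-byte session ticket key and distribute the same
# file to every server behind the load balancer.
openssl rand 80 > ticket.key

# Each nginx instance then points at the shared file, e.g.:
#   ssl_session_ticket_key /etc/nginx/ticket.key;
# Rotate the file periodically: anyone holding the key can decrypt
# recorded sessions that were resumed with tickets derived from it.
```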
If the previous statements are true, what is the reason to do all the TLS/SSL work on the load balancer itself?
Well, they aren't entirely true. There are other benefits, though. The main one is that the load balancer can be more intelligent, since it can see the plaintext of the session. An example might be examining a request's cookies to determine which server to send it to. This is a common need for blue/green deployments.

Is it standard to use HTTPS from client to Load Balancer, but not from LB to app server?

Just wondering if it is standard practice to use an AWS load balancer to handle the HTTPS and forward it to the application as HTTP, so that none of the app instances have to worry about SSL certificates.
Yes, that's a common practice. One of the most important optimizations you can do for a website is to perform the SSL offloading geographically as close as possible to the client.
The SSL handshake consists of a couple of exchanges between the client and the server in order to establish the SSL session. By having the SSL offloading as close as possible to the user, you reduce the network latency. The load balancer can then dispatch the request to your web farm, which could be situated anywhere in the world.

Capturing HTTPS traffic in the clear?

I've got a local application (which I didn't write, and can't change) that talks to a remote web service. It uses HTTPS, and I'd like to see what's in the traffic.
Is there any way I can do this? I'd prefer a Windows system, but I'm happy to set up a proxy on Linux if this makes things easier.
What I'm considering:
Redirecting the web site by hacking my hosts file (or setting up alternate DNS).
Installing an HTTPS server on that site, with a self-signed (but trusted) certificate.
Apparently, Wireshark can see what's in HTTPS if you feed it the private key. I've never tried this.
Somehow, proxy this traffic to the real server (i.e. it's a full-blown man-in-the-middle "attack").
Does this sound sensible? Can Wireshark really see what's in HTTPS traffic? Can anyone point me at a suitable proxy (and configuration for same)?
Does Fiddler do what you want?
What is Fiddler?
Fiddler is a Web Debugging Proxy which logs all HTTP(S) traffic between your computer and the Internet. Fiddler allows you to inspect all HTTP(S) traffic, set breakpoints, and "fiddle" with incoming or outgoing data. Fiddler includes a powerful event-based scripting subsystem, and can be extended using any .NET language.
Fiddler is freeware and can debug traffic from virtually any application, including Internet Explorer, Mozilla Firefox, Opera, and thousands more.
Wireshark can definitely display TLS/SSL encrypted streams as plaintext. However, you will definitely need the private key of the server to do so. The private key must be added to Wireshark as an SSL option under preferences. Note that this only works if you can follow the SSL stream from the start. It will not work if an SSL connection is reused.
For Internet Explorer this (SSL session reuse) can be avoided by clearing the SSL state using the Internet Options dialog. Other environments may require restarting a browser or even rebooting a system (to avoid SSL session reuse).
The other key constraint is that an RSA key exchange must be used. Wireshark cannot decode TLS/SSL streams that use Diffie-Hellman (DH/DHE) key exchange, because with those the server's private key is not sufficient to recover the session keys.
Assuming you can satisfy the constraints above, the "Follow SSL Stream" right-click command works rather well.
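As an aside on the Diffie-Hellman limitation: a different route, which works even for DH(E) cipher suites and without the server's private key, is to have the client itself log its TLS session secrets. Most OpenSSL- and NSS-based clients (curl, mainstream browsers) honour the SSLKEYLOGFILE environment variable, and Wireshark can read the resulting file under Preferences > Protocols > TLS > "(Pre)-Master-Secret log filename". The URL below is just an example; this only works for clients you launch yourself with the variable set:

```shell
# Tell the client's TLS library to append per-session secrets to a log file.
export SSLKEYLOGFILE="$PWD/tls-keys.log"

# Any packet capture of this request can then be decrypted by pointing
# Wireshark at tls-keys.log, regardless of the cipher suite negotiated.
curl -sS -o /dev/null https://example.com/
```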
You need to set up a proxy for your local application, and if it doesn't honour proxy settings, put a transparent proxy in place and route all HTTPS traffic through it before it goes outside. Something like this can be the "man" in the middle: http://crypto.stanford.edu/ssl-mitm
Also, here are brief instructions on how to achieve this with Wireshark: http://predev.wikidot.com/decrypt-ssl-traffic
You should also consider Charles. From the product description at the time of this answer:
Charles is an HTTP proxy / HTTP monitor / Reverse Proxy that enables a developer to view all of the HTTP and SSL / HTTPS traffic between their machine and the Internet. This includes requests, responses and the HTTP headers (which contain the cookies and caching information).
Whether an HTTPS proxy can be used to monitor the traffic depends on the type of handshake. If your local application does not check the server's certificate against a CA signature that you cannot fake, and the server does not require a client certificate from your local application (or you have one you can set up on the HTTPS proxy), then you can set up an HTTPS proxy to monitor the HTTPS traffic. Otherwise, I think it is impossible to monitor the traffic with an HTTPS proxy.
Another way you can try is to add instrumentation probes at the routines of your client program where it sends and receives messages through its HTTPS library. It needs some reverse engineering work, but it should work for all situations.
I would recommend Wireshark; it is the best tool for following different pieces of traffic. Although I am not sure what you can see with SSL turned on. Maybe, if you supply it with a certificate?