Secure connections between client, load balancer and server (SSL)

For the past week or so I have been getting to grips with the world of AWS, and more specifically Elastic Beanstalk and load balancing...
The application I'm developing enforces an SSL/HTTPS connection using a custom RequireHttps attribute that I'm applying globally. I initially had problems configuring the load balancer with this setup, but it does appear to be working as expected.
My problem originates from a blog post I was glancing over around the time I was setting up the Load Balancer/RequireHttps attribute. Quoting this blog post:
When using Elastic Beanstalk ... the connection between the load balancer and application server is not secure. However, you don't need to be concerned with the security of the connection between the load balancer and the server but you do need to be concerned about the connection between the client and the load balancer.
As configuring load balancers is an entirely new area for me, I'm a little sceptical that the above is entirely true.
Is the connection between a load balancer and server truly none of my concern? Would it be better not to terminate SSL at the load balancer and instead pass a secure connection straight through to the server?
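For context on why RequireHttps misbehaves in this setup: once the balancer terminates SSL, the application only ever sees plain HTTP, so naive "is this HTTPS?" checks fail. Elastic Load Balancing forwards the original scheme in the X-Forwarded-Proto header. The application above is ASP.NET, but as a language-neutral illustration, here is a minimal Python/WSGI sketch of honoring that header (this is my own sketch, not the original author's code):

```python
def require_https(app):
    """Hypothetical WSGI middleware: trust the balancer's X-Forwarded-Proto
    header to decide whether the original client connection was HTTPS."""
    def wrapper(environ, start_response):
        scheme = environ.get("HTTP_X_FORWARDED_PROTO",
                             environ.get("wsgi.url_scheme", "http"))
        if scheme != "https":
            # Redirect plain-HTTP clients; the query string is ignored
            # here to keep the sketch short.
            location = ("https://" + environ.get("HTTP_HOST", "localhost")
                        + environ.get("PATH_INFO", "/"))
            start_response("301 Moved Permanently", [("Location", location)])
            return [b""]
        environ["wsgi.url_scheme"] = "https"  # downstream code sees HTTPS
        return app(environ, start_response)
    return wrapper
```

Note this only makes sense when the header is set by a trusted balancer; a client-supplied X-Forwarded-Proto must not be trusted directly.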

After a little further research, I stumbled across the following post/discussion on security.stackexchange: Should SSL be terminated at a load balancer?
makerofthings7:
It seems to me the question is "do you trust your own datacenter". In other words, it seems like you're trying to finely draw the line where the untrusted networks lie, and the trust begins.
In my opinion, SSL/TLS trust should terminate at the SSL offloading device, since the department that manages that device often also manages the networking and infrastructure. There is a certain amount of contractual trust there. There is no point in encrypting data at a downstream server, since the same people who support the network usually have access to that as well (with the possible exceptions of multi-tenant environments, or unique business requirements that require deeper segmentation).
A second reason SSL should terminate at the load balancer is that it offers a centralized place to mitigate SSL attacks such as CRIME or BEAST. If SSL is terminated at a variety of web servers running on different OSs, you're more likely to run into problems due to the additional complexity. Keep it simple, and you'll have fewer problems in the long run.
I can see that what makerofthings7 is saying makes sense. Whether SSL is terminated at the load balancer or at the server should make little difference.

Related

What is the purpose of decrypting data at both the load balancer and then the web server?

I heard that to relieve the web server of the burden of performing SSL termination, it is moved to the load balancer, and a plain HTTP connection is then made from the LB to the web server. However, to ensure security, an accepted practice is to re-encrypt the data on the LB and then transmit it to the web server. If we are eventually sending encrypted data to the web servers anyway, what is the purpose of having an LB terminate SSL in the first place?
A load balancer will spread the load over multiple backend servers so that each backend server takes only part of the load. This balancing can be done in a variety of ways, depending also on the requirements of the web application:
If the application is fully stateless (like only serving static content), each TCP connection can be sent to an arbitrary server. In this case no SSL inspection is needed, since the decision does not depend on the content of the traffic.
If the application is instead stateful, the decision of which backend to use might be based on the session cookie, so that requests end up at the same server as previous requests for the session. Since the session cookie is part of the encrypted content, SSL inspection is needed. Note that in this case a simpler approach can often be used, like basing the decision on the client's source IP address, thus avoiding the costly SSL inspection (see the sketch after this list).
Sometimes load balancers do more than just balance the load. They might incorporate security features, like a Web Application Firewall, or sanitize the traffic. These features work on the content, so SSL inspection is needed.
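To make the source-IP approach concrete, here is a minimal Python sketch; the backend addresses are placeholders, not from the original answer:

```python
import hashlib

# Placeholder backend pool; the addresses are illustrative only.
BACKENDS = ["10.0.0.11:8443", "10.0.0.12:8443"]

def pick_backend(client_ip: str) -> str:
    """Hash the client's source IP so the same client always lands on the
    same backend, with no need to decrypt the TLS stream."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

# The same IP always maps to the same backend:
assert pick_backend("203.0.113.7") == pick_backend("203.0.113.7")
```

Because the decision depends only on the TCP connection's source address, the balancer never has to terminate or inspect SSL.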

SSL Configuration in a Clustered Environment

We have an Oracle application (Agile PLM) deployed in a clustered environment. We have one admin node and two managed nodes supporting our application, where the admin node and one managed node are on the same server. We also have a load balancer which manages traffic across the cluster.
We want to configure SSL in our application so that the application URL is accessible over HTTPS only. We have already configured SSL at the load balancer level (by installing security certificates in the WebLogic server, which is the admin server), but want to know whether we have to configure SSL on the managed servers as well, or whether bringing the load balancer onto HTTPS is sufficient.
All users access the application using the load balancer URL only, but being on the development team, I am aware that we can also connect to the application via the managed server URLs, which are still running on HTTP. Is it a must to bring the managed servers onto HTTPS as well, or is it just good practice but not necessary?
It's not necessary, though probably a good practice.
I believe I have read in Oracle's installation guide that the recommended way is HTTP on the managed servers and terminating SSL on the load balancer. This may have changed.
For what it's worth, I leave HTTP on the managed servers.

TCP Load Balancer In Front of TLS/SSL Endpoints

Last week I was playing with a load balancer for my TLS-enabled endpoints (which share the same certificate) and was surprised that it is possible to have a TCP load balancer in place in front of the SSL endpoints. With that configured, it was possible to communicate with the TCP load balancer as if it were configured to support TLS/SSL itself. So, I would like to make sure such a network configuration is a fully working solution:
The TLS/SSL session and handshake workflow are stateless, meaning it is possible to start a handshake with a primary server and finish it with a mirror. Is this true?
Are there any hidden dangers I must be aware of?
If the previous statements are true, what is the reason to do all the TLS/SSL work on the load balancer itself?
P.S. The reason I do not do TLS/SSL work on the load balancer is that I need to balance multiple proprietary endpoints that only support SSL/TLS.
The TLS/SSL session and handshake workflow are stateless, meaning it is possible to start a handshake with a primary server and finish it with a mirror. Is this true?
No. I suspect your load balancer is using TCP keep-alive, so the handshake completes on the same server every time.
Are there any hidden dangers I must be aware of?
You may be incurring a significant performance penalty. HTTPS has session keys that are, probably by default, unique to each server. If you aren't able to do something like sticky sessions on the load balancer, then you will do a full handshake every time a client moves from one server to the other.
You will also have session tickets that won't work across servers, so session resumption will probably fail as well and fall back to a full handshake. Some servers, like nginx, support configuring a common session ticket key.
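For reference, the nginx directive for this is ssl_session_ticket_key. A minimal sketch, assuming the same key file has been generated once and distributed to every backend (all paths here are placeholders):

```nginx
# Hypothetical server fragment; certificate and key paths are placeholders.
server {
    listen 443 ssl;
    ssl_certificate         /etc/nginx/site.crt;
    ssl_certificate_key     /etc/nginx/site.key;
    # The same ticket key file must be deployed to every node so that
    # a session ticket issued by one server can be resumed on another.
    ssl_session_ticket_key  /etc/nginx/ticket.key;
}
```

With a shared ticket key, clients bounced between backends can still resume sessions instead of paying for a full handshake each time.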
If the previous statements are true, what is the reason to do all the TLS/SSL work on the load balancer itself?
Well, they aren't entirely true. There are other benefits, though, the main one being that the load balancer can be more intelligent since it can see the plaintext of the session. An example is examining the request's cookies to determine which server to send the request to; this is a common need for blue/green deployments.
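To make the blue/green example concrete, here is a minimal Python sketch of the cookie-based routing an SSL-terminating balancer could perform; the pool addresses and cookie name are hypothetical:

```python
# Hypothetical pools and cookie name, for illustration only.
BLUE_POOL = ["10.0.1.11:8080", "10.0.1.12:8080"]
GREEN_POOL = ["10.0.2.11:8080", "10.0.2.12:8080"]

def pick_pool(cookies: dict) -> list:
    """Route clients that opted into the new release to the green pool;
    everyone else stays on blue. This is only possible once the balancer
    can read the decrypted request and see the cookie."""
    return GREEN_POOL if cookies.get("deployment") == "green" else BLUE_POOL
```

A pure TCP balancer could never make this decision, because the cookie is inside the encrypted stream.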

Load balancing for SSL server, NOT web server

I'm finding it pretty difficult to get reliable information on Google about how exactly to do load balancing for anything other than a web server. Here is my situation: I currently have a Python/Twisted SSL server running on one machine. This is not fast enough, so I want to change it so that multiple instances of this server run on multiple machines behind a load balancer. So suppose I have two copies of this server process: TWISTED1 and TWISTED2. TWISTED1 will run on MACHINE1 and TWISTED2 will run on MACHINE2. Both TWISTED1 and TWISTED2 are SSL server processes. A separate machine, LOAD_BALANCER, is used to load balance between the two machines.
Where do I put my existing SSL certificate? Do I put an identical copy on both MACHINE1 and MACHINE2? Do I also need an identical copy on LOAD_BALANCER? I do NOT want unencrypted traffic between LOAD_BALANCER and MACHINE1 or MACHINE2, and the Twisted processes are already set up as SSL servers, so it would be unnecessary work to remove SSL from them. Basically, I want to set up load balancing for SSL traffic with minimal changes to the existing Twisted scripts. What is the solution?
Regarding the load balancer, is it sufficient to use another machine, MACHINE3, and put HAProxy on it as the load balancer, or is it better to use a hardware load balancer like Barracuda?
Note also that most of the connections to the Twisted processes are persistent connections.
Could you keep the certs on one machine and mount them from the other machine, allowing you to have only one set of SSL certs?
The problem with load balancing a TLS server is that without the HTTP X-Forwarded-For header, there's no way to tell where the original connection came from. That's why so much documentation focuses on load balancing HTTP(S).
You can configure TLS termination in more or less all of the ways you've described; it sounds like you can simply have your load balancer act as a TCP load balancer. The only reason for your load balancer to hold your certificate (and, by implication, your private key) would be for it to decrypt the traffic to figure out what to do with it, then re-encrypt it to the machines. If you don't need to do that, put the certificate only on the target machines and not on the LB.
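In HAProxy terms, which the asker mentions, TCP passthrough would look something like the sketch below. The server names mirror the MACHINE1/MACHINE2 naming above; balance source keeps each client pinned to one backend, which suits the persistent connections mentioned, but the exact options are my assumption, not from the original answer:

```
# Hypothetical haproxy.cfg fragment: tcp mode passes the TLS stream
# through untouched, so certificates live only on MACHINE1/MACHINE2.
frontend tls_in
    mode tcp
    bind *:443
    default_backend twisted_pool

backend twisted_pool
    mode tcp
    balance source          # pin each client IP to one back-end
    server twisted1 MACHINE1:443 check
    server twisted2 MACHINE2:443 check
```

Since the balancer never decrypts anything, the Twisted scripts stay exactly as they are.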

Round robin server setup

From what I understand, if you have multiple web servers, then you need some kind of load balancer that will split the traffic amongst your web servers.
Does this mean that the load balancer is the main connecting point on the network, i.e. the load balancer has the IP address of the domain name?
If this is the case, it makes it really easy to add new hardware, since you don't have to wait for any DNS propagation, right?
There are several solutions to this "problem".
You could round-robin at the DNS level, i.e. have www.yourdomain.com point to several IP addresses (all of your servers).
This doesn't give you any intelligence in the load balancing; the load will be more or less randomly distributed. You also wouldn't be resilient to hardware failures, as removing a failed server would still require changes to DNS.
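As a quick way to see round-robin DNS from the client side, a small Python sketch; www.yourdomain.com is the placeholder name from the answer, and with several A records published each one would show up here:

```python
import socket

# Placeholder name from the answer; with round-robin DNS it would
# resolve to several A records, one per server.
for info in socket.getaddrinfo("www.yourdomain.com", 80,
                               proto=socket.IPPROTO_TCP):
    print(info[4][0])  # each published server IP
```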
On the other hand, you could use a load-balancing proxy that has a single IP but distributes the traffic to several back-end boxes. This gives you a single point of failure (the proxy; you could of course run several proxies to defeat that problem), but also the added bonus of being able to use some metric to divide the load more evenly and intelligently than with plain round-robin DNS.
This setup can also handle hardware failure in the back-end pretty seamlessly. The end user never sees the back-end, just the front-end.
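Here is a minimal Python sketch of that round-robin-with-failover idea; the backend addresses are placeholders and the health check is deliberately crude (a bare TCP connect):

```python
import itertools
import socket

BACKENDS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]   # placeholder back-end boxes
_pool = itertools.cycle(BACKENDS)

def next_healthy_backend() -> str:
    """Round-robin over the pool, skipping boxes that refuse a TCP
    connection, so a back-end hardware failure stays invisible to the
    end user."""
    for _ in range(len(BACKENDS)):
        host = next(_pool)
        try:
            socket.create_connection((host, 80), timeout=1).close()
            return host
        except OSError:
            continue
    raise RuntimeError("no healthy back-end available")
```

Real proxies do this with background health checks rather than a connect per request, but the routing logic is the same shape.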
There are other issues to think about as well: if your page uses sessions or other smart logic, you can run into synchronisation problems when your user (potentially) hits a different server on each access.
It does (in general). It depends on what server OS and software you are using, but in general you'll hit the load balancer for each request, and the load balancer will then farm out the work according to the scheme you have in place (round robin, least busy, session controlled, application controlled, etc.).
andy has part of the answer, but for true load balancing and high availability you would want to use a pair of hardware load balancers, like F5 BIG-IPs, in an active-passive configuration.
Yes, your domain's IP would be hosted on these devices, and traffic would connect to them first. BIG-IPs offer a lot of added functionality, including multiple load balancing methods, URL rewriting, SSL acceleration, etc. They also allow you to run your web servers on a separate non-routable address scheme and even run multiple sites on different ports, with the F5s handling the translation.
Once you introduce load balancing, you may have other considerations to take into account for your application(s), like sticky sessions and session state, but that is a different subject.