SSL Termination at F5 or Zuul/Eureka/Services?

We have a few services running in our environment with Spring Cloud Netflix, Eureka, and Zuul; the services themselves are built with Spring Boot.
We also use F5 as the hardware load balancer, which receives external requests and routes them to one of the Zuul instances based on the configured rule.
As of now, we use HTTP for communication between the services. We now want to secure all communication via HTTPS.
All the services, including Zuul and Eureka, are scaled to 2 instances on separate machines for failover.
My question is: should I set up and enable HTTPS for each of the services, including Eureka, Zuul, and the other downstream services, or is it possible to use HTTPS only at the F5 and leave the other instances on HTTP?
I have heard of a feature called SSL termination/offloading which is provided by most load balancers, but I am not sure whether F5 supports it. If it does, would it make sense to use HTTPS only there and leave the rest on HTTP?
I feel this can reduce the complexity of setting up SSL for each of the instances (which can change in the future based on load) and also avoid the overhead inherent in SSL encryption and decryption.
Should I secure every instance, including Eureka/Zuul and the downstream services, or just do SSL termination at the F5 alone?

If the back-end endpoints are HTTPS, then the load balancer needs to balance at the TCP layer, as it cannot inspect the content. If the load balancer endpoints are HTTPS themselves, then there is usually little point in encrypting the internal traffic, and the load balancer can inspect the traffic and make smart decisions about where to route it (e.g. sticky sessions). If the application endpoint needs to know that the original request was HTTPS (which is often the case), then an HTTP header is added to the internal leg to advertise this, the de-facto convention being the X-Forwarded-Proto header.
If you choose to leave the LB-to-app leg in the clear, then you need to make sure that the segment is trustworthy and that your app endpoints are not reachable directly, bypassing the LB.
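On the Spring side, honoring that header usually only needs a configuration flag. A minimal sketch, assuming Spring Boot 2.2+ behind an SSL-terminating load balancer (older versions use a differently named property):

```properties
# Hypothetical application.properties for a service behind an
# SSL-terminating load balancer: tell Spring MVC to trust
# X-Forwarded-Proto / X-Forwarded-For when building links and
# evaluating request.isSecure().
server.forward-headers-strategy=framework
```

With this in place the service can stay on plain HTTP internally while still generating correct https:// links and treating forwarded requests as secure.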

Related

Consul load balancing north south traffic

I am trying to run some of my microservices within a Consul service mesh. Per the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But their documentation also talks about 3rd-party load balancers like NGINX, HAProxy, and F5.
https://learn.hashicorp.com/collections/consul/load-balancing
If Consul takes care of load balancing, then what is the purpose of these load balancers?
My assumptions:
These load balancers replace the built-in load balancing technique of Consul, but the LB still uses Consul's service discovery data. (Why would anyone need this?)
Consul only provides load balancing for east-west traffic (within the service mesh). To load balance north-south traffic (internet traffic), we need external load balancers.
Please let me know which of my assumptions is correct.
Consul service mesh uses Envoy proxy by default for both east-west and north-south load balancing of connections within the mesh. Whereas east-west traffic is routed through a sidecar proxy, north-south connections route through an instance of Envoy which is configured to act as an ingress gateway.
In addition to Consul's native Envoy ingress, Consul also supports integrations with other proxies and API gateways. These can be used if you require functionality which is not available in the native ingress offering.
Third party proxies leverage Consul's service catalog to populate their backend/upstream member pools with endpoint information from Consul. This allows the proxy to always have an up-to-date list of healthy and available services in the data center, and eliminates the need to manually reconfigure the north-south proxy when adding/removing service endpoints.
Some gateways like Ambassador, F5, and (soon) Traefik (see PR https://github.com/traefik/traefik/pull/7407) go a step further by integrating with the service mesh (see Connect custom proxy integration) so that they can utilize mTLS when connecting to backend services.
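As a concrete illustration of Consul's native north-south path, an ingress gateway is declared with a config entry along these lines (a sketch; the gateway name, port, and service name are assumptions):

```hcl
# Hypothetical ingress-gateway config entry, applied with
# `consul config write`. Routes north-south HTTP traffic arriving
# on port 8080 to the mesh service "api".
Kind = "ingress-gateway"
Name = "ingress-gateway"

Listeners = [
  {
    Port     = 8080
    Protocol = "http"
    Services = [
      { Name = "api" }
    ]
  }
]
```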
I checked with one of my colleagues (full disclosure: I work for F5) and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.
For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles you might find interesting that can provide context for your question:
Consul Templating BIG-IP Services
Automate App Delivery with F5, Terraform, and Consul

Amazon Application Load Balancer wss to ws forwarding problem

There is a target group in an AWS Fargate cluster that manages Node.js applications inside Docker containers. Every application serves WebSocket connections (plain WebSocket, not socket.io!).
There is a non-encrypted connection (HTTP / ws) behind the Application Load Balancer. However, outside it's HTTPS / wss. Thus, when an HTTPS request comes to the Application Load Balancer, it decrypts the request and forwards an HTTP request to a selected container.
The question is: how (and where) is it possible to configure wss->ws forwarding for WebSocket requests (there is a specific URL)?
The HTTPS->HTTP rule does a wss->HTTP transformation, which is insanely wrong. How can a wss->ws transformation be implemented, and is it possible at all?

Is reverse proxy actually needed on ASP.NET core?

We're wondering if a reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insights if this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each Docker instance has both Nginx and our ASP.NET Core application running.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefits, does it?
Caching requests
I believe this is the same as serving static content, as dynamic content isn't cached on most scenarios (on our use case - all scenarios).
Compressing requests
ASP.NET Core has response compression middleware; however, the docs claim: "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
HTTPS termination from the HTTP server
I assume most setups with load balancers can skip this part, as HTTPS termination can be done on the load balancer if needed.
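For reference, the offload duties discussed above map to a short Nginx server block (a sketch; paths, ports, and certificate locations are assumptions):

```nginx
# Illustrative nginx reverse-proxy block in front of Kestrel.
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/example.crt;   # HTTPS terminated here
    ssl_certificate_key /etc/ssl/private/example.key;

    gzip on;                         # compress responses instead of Kestrel
    gzip_types text/plain application/json;

    location /static/ {
        root /var/www/app;           # serve static files without hitting Kestrel
    }

    location / {
        proxy_pass http://127.0.0.1:5000;             # Kestrel on plain HTTP
        proxy_set_header X-Forwarded-Proto $scheme;   # preserve original scheme
    }
}
```

When a CDN and an ALB already handle static content and TLS, each of these directives becomes redundant, which is essentially the question being asked.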
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel lacks some features that other web servers may have.
If you don't have a use for an additional nginx reverse proxy, then you don't have to use one.
E.g., Kestrel only recently gained APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
Whether a reverse proxy is needed depends highly on the system architecture, but you don't have to deploy one if you don't require a specific feature.

Route requests via F5 or via Zuul/Eureka/Ribbon

I have a few services in my environment, requests to which are proxied via Zuul.
We have 2 instances each of Zuul, Eureka, and the downstream services for scalability and failover.
For a composite service, we look up the Eureka registry for the Zuul service, build the Zuul proxy endpoint from the returned info plus the actual endpoint of the downstream service, and invoke the Zuul proxy endpoint using RestTemplate.
We also have F5 as our hardware load balancer. Using F5 we intend to receive external requests, load-balance them, and route them to the Zuul instances.
My query is: should internal requests within the firewall - for example, from the composite service - route to the downstream service via F5 or via Zuul (using Eureka/Ribbon to look up Zuul first)?
I am not sure if the approach is correct and makes sense, because it means that for external requests we rely on F5 (a hardware LB) to load balance across Zuul, while for internal requests we rely on Eureka/Ribbon (a software LB) to do it.
Would it be better, for consistency, to route from the composite service to F5 rather than use Eureka/Ribbon? But I see there are extra hops: composite to F5, F5 to Zuul, Zuul to downstream (via Eureka/Ribbon).
Can anyone suggest a better way to handle this?
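The lookup-then-proxy flow described above can be sketched in plain Java. This is a minimal, self-contained illustration: the registry map, the round-robin choice, and all service names, hosts, and ports are hypothetical stand-ins for the Eureka client and Ribbon, not their actual APIs.

```java
import java.util.List;
import java.util.Map;

public class ZuulRouteSketch {
    // Stand-in for the Eureka registry: service name -> healthy instance URLs.
    static final Map<String, List<String>> REGISTRY = Map.of(
            "zuul", List.of("http://zuul-1:8080", "http://zuul-2:8080"));

    static int counter = 0;

    // Ribbon-style client-side round-robin over the registered Zuul instances.
    static String chooseInstance(String serviceName) {
        List<String> instances = REGISTRY.get(serviceName);
        return instances.get(counter++ % instances.size());
    }

    // Build the proxy endpoint: chosen Zuul instance + route prefix + path,
    // which is then what the composite service would call via RestTemplate.
    static String buildProxyUrl(String downstreamService, String path) {
        return chooseInstance("zuul") + "/" + downstreamService + path;
    }

    public static void main(String[] args) {
        // Successive calls alternate between the two Zuul instances.
        System.out.println(buildProxyUrl("orders", "/api/orders/42"));
        System.out.println(buildProxyUrl("orders", "/api/orders/42"));
    }
}
```

Routing via F5 instead would replace `chooseInstance` with a single fixed F5 virtual-server address, which is exactly the consistency-versus-extra-hop trade-off the question raises.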

How does WCF + SSL work with load balancing?

If SSL is handled by a load balancer, do I still need to configure it in the WCF serviceCertificate node? My scenario uses message-level security. If someone can explain how load balancing works with WCF and SSL, that would be very nice.
WCF requires security tokens to be passed over a secure transport if the message itself is not signed/encrypted. Since traffic is HTTP between your BIG-IP and your individual web servers, you need a way for security tokens that you know are secured between the client and the BIG-IP to still be passed to your server farm. There are a couple of ways to do that, depending on what version of WCF you're using:
If you're using WCF 4.0, you can just create a custom binding and set the AllowInsecureTransport property on the built-in SecurityBindingElement to signify that you don't care that the transport isn't secure.
If you're using WCF 3.5, you have to "lie" about security with a custom TransportSecurityBindingElement on the server side. You can read my old post about this here.
FWIW, they created a hotfix release for 3.5 SP1 that adds AllowInsecureTransport to that version, but I don't know if your company will allow you to install custom hotfixes.
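For the WCF 4.0 approach, the custom binding can be expressed in configuration roughly like this (a sketch; the binding name and authentication mode are assumptions for illustration):

```xml
<!-- Hypothetical web.config fragment: allow message-level tokens over
     plain HTTP because TLS is terminated at the load balancer. -->
<bindings>
  <customBinding>
    <binding name="behindLoadBalancer">
      <security authenticationMode="UserNameOverTransport"
                allowInsecureTransport="true" />
      <textMessageEncoding />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```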
If you want to use message security, then each message is encrypted and signed separately - there is no secure connection, and the load balancer behaves as with any other HTTP transport. The load balancer doesn't know about security and doesn't need a certificate.
There are two gotchas:
All load-balanced application servers hosting your WCF service must use the same certificate.
You must ensure that your WCF binding doesn't use sessions (reliable messaging, secure sessions); otherwise you will need a load balancing algorithm with sticky sessions (all requests for a single session always routed to the same server).
It doesn't. Don't bother with this. You will be in a world of hurt. Just install the certs on each machine. We've recently been through this fiasco. WCF is not worth the effort: it thinks it needs SSL but sees that it doesn't have it. Take a look at OpenRasta or something else if you want to do all your SSL on the load balancer. #microsoftfail