Route requests via F5 or via Zuul/Eureka/Ribbon load balancing?

I have a few services in my environment; requests to them are proxied via Zuul.
We have two instances each of Zuul, Eureka, and every downstream service, for scalability and failover.
For a composite service, we look up the Eureka registry for the Zuul service, build the Zuul proxy endpoint from the returned info plus the actual endpoint of the downstream service, and invoke that proxy endpoint using RestTemplate.
We also have an F5 as our hardware load balancer. The F5 receives the external requests, load-balances them, and routes them to the Zuul instances.
My question is: should internal requests within the firewall (for example, from the composite service to a downstream service) route via the F5, or via Zuul (using Eureka/Ribbon to look up Zuul first)?
I am not sure the current approach is correct, because it means that for external requests we rely on the F5 (hardware LB) to load-balance across Zuul, while for internal requests we rely on Eureka/Ribbon (software LB) to do the same.
Would it be better to route from the composite service through the F5 rather than use Eureka/Ribbon, for consistency? But I see extra hops that way: composite to F5, F5 to Zuul, Zuul to downstream (via Eureka/Ribbon).
Can anyone suggest a better way to handle this?
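For context, the internal path described above boils down to client-side load balancing: pick one of the Zuul instances returned by a registry lookup, then build the proxy URL for the downstream endpoint. A minimal sketch of that idea in plain Java (the class and naming scheme are illustrative, not the actual Ribbon API):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of Ribbon-style client-side load balancing: round-robin over the
// Zuul instances (e.g. as returned by a Eureka lookup), then append the
// downstream path to build the proxy URL that RestTemplate would call.
class ZuulEndpointChooser {
    private final List<String> zuulInstances; // base URLs of Zuul instances
    private final AtomicInteger counter = new AtomicInteger();

    ZuulEndpointChooser(List<String> zuulInstances) {
        this.zuulInstances = zuulInstances;
    }

    // Pick the next instance round-robin and build the full proxy URL.
    String proxyUrlFor(String downstreamPath) {
        int i = Math.floorMod(counter.getAndIncrement(), zuulInstances.size());
        return zuulInstances.get(i) + downstreamPath;
    }
}
```

In Spring Cloud this is what a `@LoadBalanced` RestTemplate does for you; the sketch only makes the hop count and the round-robin choice explicit so the two routing options can be compared.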

Related

Consul load balancing north south traffic

I am trying to run some of my microservices within a Consul service mesh. From the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But the documentation also talks about third-party load balancers like NGINX, HAProxy, and F5.
https://learn.hashicorp.com/collections/consul/load-balancing
If Consul takes care of load balancing, then what is the purpose of these load balancers?
My assumptions:
These load balancers replace Consul's built-in load-balancing technique, but the LB still uses Consul's service discovery data. (Why would anyone need this?)
Consul only provides load balancing for east-west traffic (within the service mesh); to load-balance north-south traffic (internet traffic), we need external load balancers.
Please let me know which of my assumptions is correct.
Consul service mesh uses Envoy proxy by default for both east-west and north-south load balancing of connections within the mesh. Whereas east-west traffic is routed through a sidecar proxy, north-south connections route through an instance of Envoy which is configured to act as an ingress gateway.
In addition to Consul's native Envoy ingress, Consul also supports integrations with other proxies and API gateways. These can be used if you require functionality that is not available in the native ingress offering.
Third party proxies leverage Consul's service catalog to populate their backend/upstream member pools with endpoint information from Consul. This allows the proxy to always have an up-to-date list of healthy and available services in the data center, and eliminates the need to manually reconfigure the north-south proxy when adding/removing service endpoints.
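What such an integration does with the catalog data can be sketched simply: keep only the instances whose health checks pass and turn them into the proxy's upstream pool. The `ServiceInstance` type below is illustrative, not a real Consul API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of how a third-party proxy integration consumes Consul's catalog:
// filter the registered instances down to the healthy ones and emit the
// address:port pairs used to populate the backend/upstream member pool.
class UpstreamPoolBuilder {
    // Illustrative stand-in for one catalog entry plus its health status.
    record ServiceInstance(String address, int port, boolean passing) {}

    static List<String> healthyUpstreams(List<ServiceInstance> catalog) {
        return catalog.stream()
                .filter(ServiceInstance::passing)
                .map(s -> s.address() + ":" + s.port())
                .collect(Collectors.toList());
    }
}
```

In practice the proxy re-runs this whenever Consul reports a catalog change, which is what removes the need to reconfigure the north-south proxy by hand.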
Some gateways like Ambassador, F5, and (soon) Traefik (see PR https://github.com/traefik/traefik/pull/7407) go a step further by integrating with the service mesh (see Connect custom proxy integration) so that they can utilize mTLS when connecting to backend services.
I checked with one of my colleagues (full disclosure: I work for F5) and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.
For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles that may provide context for your question.
Consul Templating BIG-IP Services
Automate App Delivery with F5, Terraform, and Consul

Amazon Application Load Balancer wss to ws forwarding problem

There is a target group in an AWS Fargate cluster that manages Node.js applications inside Docker containers. Every application serves WebSocket connections (plain WebSocket, not socket.io!).
Behind the Application Load Balancer the connection is unencrypted (HTTP / ws); outside it is HTTPS / wss. Thus, when an HTTPS request reaches the Application Load Balancer, it decrypts the request and forwards an HTTP request to a selected container.
The question is: how (and where) is it possible to configure wss→ws forwarding for WebSocket requests (there is a specific URL)?
The HTTPS→HTTP rule performs a wss→HTTP transformation, which is wrong. How can a wss→ws transformation be implemented, and is this possible at all?

Is a reverse proxy actually needed with ASP.NET Core?

We're wondering if a reverse proxy is actually required for most use cases and would appreciate additional information.
The Kestrel/Nginx documentation claims:
"Kestrel is great for serving dynamic content from ASP.NET Core. However, the web serving capabilities aren't as feature rich as servers such as IIS, Apache, or Nginx. A reverse proxy server can offload work such as serving static content, caching requests, compressing requests, and HTTPS termination from the HTTP server. A reverse proxy server may reside on a dedicated machine or may be deployed alongside an HTTP server."
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-2.2
Could anyone please share some insight into whether this is actually relevant nowadays?
In our use case, we use Docker instances with external load balancing (AWS ALB).
Each Docker instance runs both Nginx and our ASP.NET Core application.
We couldn't figure out the exact benefits of using Nginx.
Serving static content
As we're using an external CDN (AWS CloudFront), I assume static caching doesn't really have any actual benefits, does it?
Caching requests
I believe this is the same as serving static content, as dynamic content isn't cached in most scenarios (in our use case, all scenarios).
Compressing requests
ASP.NET Core has a response compression middleware; however, the docs claim: "The performance of the middleware probably won't match that of the server modules. HTTP.sys server and Kestrel server don't currently offer built-in compression support."
Perhaps some benchmarks could be created to validate this claim.
https://learn.microsoft.com/en-us/aspnet/core/performance/response-compression?view=aspnetcore-2.2
HTTPS termination from the HTTP server
I assume most clients with load balancers can skip this part, as HTTPS termination can be done on the load balancer if needed.
Thanks!
Effy
This documentation does not tell you that you "should" run ASP.NET Core / Kestrel behind a reverse proxy, just that advanced scenarios can benefit from one, since Kestrel lacks some features that other web servers may have.
If you don't have a use for an additional Nginx reverse proxy, then you don't have to use one.
E.g. Kestrel only recently gained APIs to change certain connection parameters on the fly without requiring a restart - this is helpful for switching certificates acquired via ACME (e.g. the Let's Encrypt service).
Whether a reverse proxy is needed depends highly on the system architecture, but you don't have to deploy one if you don't require a specific feature.

SSL termination at F5 or Zuul/Eureka/services?

We have a few services running in our environment with Spring Cloud Netflix, Eureka and Zuul. Also, we use Spring Boot for developing the services.
We also use an F5 as the hardware load balancer, which receives the external requests and routes them to one of the Zuul instances based on the configured rule.
As of now, we use HTTP for communication between the services. We now want to secure all communications via HTTPS.
All the services including ZUUL and Eureka are scaled up with 2 instances in separate machines for failover.
My question is: should I set up and enable HTTPS for each of the services, including Eureka, Zuul, and the other downstream services, or is it possible to use HTTPS only at the F5 and leave the other instances on HTTP?
I have heard of a feature called SSL termination/offloading, which is provided by most load balancers. I am not sure whether the F5 supports it. If it does, would it make sense to use HTTPS only there and leave the rest on HTTP?
I feel this could reduce the complexity of setting up SSL for each instance (the number of which can change in the future based on load) and also avoid the slowness inherent in SSL encryption and decryption.
Should I secure every instance, including Eureka/Zuul and the downstream services, or just do SSL termination at the F5 alone?
If the back-end endpoints are HTTPS, then the load balancer has to balance at the TCP layer, as it cannot inspect the content. If the load-balancer endpoints are HTTPS themselves, then there is usually little point in encrypting the internal traffic, and the load balancer can inspect the traffic and make smart routing decisions (e.g. sticky sessions). If the application endpoint needs to know that the original request was HTTPS (which is often the case), then an HTTP header is added to the internal leg to advertise this, the de facto convention being the X-Forwarded-Proto header.
If you choose to leave the LB-to-app leg in the clear, then you need to make sure that the network segment is trustworthy and that your app endpoints are not reachable directly, bypassing the LB.
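The header check the answer describes is small enough to sketch. A minimal, illustrative helper (the class name is hypothetical; in Spring you would typically let the framework's forwarded-headers support do this instead):

```java
// Sketch of how an app behind a TLS-terminating load balancer can tell
// whether the original client connection was HTTPS: trust the de facto
// X-Forwarded-Proto header that the LB sets on the internal, cleartext leg.
class ForwardedProto {
    // headerValue is the raw X-Forwarded-Proto value, or null if absent.
    static boolean originalRequestWasHttps(String headerValue) {
        return headerValue != null
                && headerValue.trim().equalsIgnoreCase("https");
    }
}
```

Note this only makes sense when the LB strips any client-supplied X-Forwarded-Proto and the app is not reachable directly, as the answer points out.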

How to build a proxy layer for SOAP/XML webservices

I need to capture and run real-time analysis on messages exchanged between various web services implemented in Java and their client apps. The server code and config cannot be modified and are hosted on various servers.
Is it possible to build a proxy layer that will take all calls from the client apps and route them to the actual web services?
So it needs to do the following:
Accept a config file containing endpoints for various web services that need to be proxied
For each end point, generate a proxy URL
The client apps will point to these proxy URLs
The proxy layer will listen for traffic on these proxy URLs and route it to the real endpoints.
Track all SOAP traffic between the clients and the services and run the necessary analysis.
I considered SoapUI, but it does not seem to provide the control I need for real-time analysis.
You should start with the WCF Routing Service. Once you have working communication, you can add custom message processing through custom behaviors or channels to grab the SOAP messages and do your analysis.
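The first two steps in the question (read a config of real endpoints, generate one proxy URL per service) are straightforward to sketch. The `/proxy/<name>` naming scheme below is an assumption for illustration, not a prescribed convention:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the endpoint-to-proxy-URL mapping from the question's steps:
// given configured real endpoints, derive a local proxy URL per service.
// The proxy layer would listen on each proxy URL, forward the request to
// the mapped real endpoint, and record the SOAP payload for analysis.
class ProxyUrlMapper {
    // endpoints: serviceName -> real endpoint URL (from the config file)
    static Map<String, String> buildProxyUrls(Map<String, String> endpoints,
                                              String proxyBase) {
        Map<String, String> proxyUrls = new LinkedHashMap<>();
        for (String name : endpoints.keySet()) {
            // Assumed naming scheme: expose each service at /proxy/<name>.
            proxyUrls.put(name, proxyBase + "/proxy/" + name);
        }
        return proxyUrls;
    }
}
```

The client apps would then be pointed at the generated URLs, while the forwarding and capture logic (e.g. a servlet filter or an embedded HTTP server) sits behind them.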