Consul load balancing north-south traffic

I am trying to run some of my microservices within a Consul service mesh. Per the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But the documentation also talks about third-party load balancers like NGINX, HAProxy, and F5.
https://learn.hashicorp.com/collections/consul/load-balancing
If Consul takes care of load balancing, then what is the purpose of these load balancers?
My assumptions:
1. These load balancers replace Consul's built-in load balancing, but still use Consul's service discovery data. (Why would anyone need this?)
2. Consul only provides load balancing for east-west traffic (within the service mesh). To load balance north-south traffic (internet traffic), we need external load balancers.
Please let me know which of my assumptions is correct.

Consul service mesh uses the Envoy proxy by default for both east-west and north-south load balancing of connections within the mesh. Whereas east-west traffic is routed through a sidecar proxy, north-south connections route through an instance of Envoy configured to act as an ingress gateway.
In addition to Consul's native Envoy-based ingress, Consul also supports integrations with other proxies and API gateways. These can be used if you require functionality that is not available in the native ingress offering.
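For illustration, here is a minimal sketch of what exposing a mesh service for north-south traffic through Consul's ingress gateway might look like on Kubernetes, using the IngressGateway config-entry CRD (the service name, host, and port below are hypothetical):

    # Sketch: expose the mesh service "web" to north-south traffic
    # through Consul's Envoy-based ingress gateway.
    apiVersion: consul.hashicorp.com/v1alpha1
    kind: IngressGateway
    metadata:
      name: ingress-gateway
    spec:
      listeners:
        - port: 8080
          protocol: http
          services:
            - name: web                    # hypothetical upstream in the mesh
              hosts: ["web.example.com"]   # hostnames this listener serves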
Third-party proxies leverage Consul's service catalog to populate their backend/upstream member pools with endpoint information from Consul. This allows the proxy to always have an up-to-date list of healthy and available services in the data center, and eliminates the need to manually reconfigure the north-south proxy when adding or removing service endpoints.
Some gateways like Ambassador, F5, and (soon) Traefik (see PR https://github.com/traefik/traefik/pull/7407) go a step further by integrating with the service mesh (see Connect custom proxy integration) so that they can utilize mTLS when connecting to backend services.
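As a sketch of that kind of catalog integration, Traefik's consulCatalog provider can populate its routing table directly from Consul; the connectAware option (the feature added by the PR linked above) is what lets it join the mesh and originate mTLS connections to backends. The exact values below are assumptions:

    # Sketch: Traefik static configuration that discovers upstreams from
    # Consul's service catalog instead of a manually maintained server list.
    providers:
      consulCatalog:
        endpoint:
          address: 127.0.0.1:8500   # assumed local Consul agent address
        exposedByDefault: false     # only route to services that opt in
        connectAware: true          # join the mesh and use mTLS to backends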

I checked with one of my colleagues (full disclosure: I work for F5) and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.
For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles that can provide context for your question:
Consul Templating BIG-IP Services
Automate App Delivery with F5, Terraform, and Consul

Related

How to integrate Kubernetes Nginx Ingress with Consul and Consul Connect

I have a k8s cluster with an nginx-based ingress and multiple services (ClusterIP). I want to use Consul as a service mesh, and the documentation is very clear on how to set up and govern communication between services. What is not clear is how to set up the nginx ingress to talk to these services via the injected sidecar Connect proxies using mutual TLS. I'm using cert-manager to automatically provision and terminate SSL at the ingress. I need to secure the communication between the ingress and the services with Consul-provisioned mutual TLS. Any documentation related to this scenario would definitely help.
You would inject the sidecar into the ingress-nginx controller and have it talk to backend services just like any other service-to-service call. This will probably require overriding a lot of the auto-generated config, so I'm not sure it will be as useful as you hope.
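As a rough sketch, the injection is driven by pod annotations; applied to the ingress-nginx controller's pod template it could look like this (the upstream service name and local port are hypothetical):

    # Sketch: annotations on the ingress-nginx controller's pod template.
    # The Connect sidecar is injected and exposes the backend on a local
    # port, so nginx proxies to 127.0.0.1 instead of the cluster IP.
    spec:
      template:
        metadata:
          annotations:
            consul.hashicorp.com/connect-inject: "true"
            # format "<service>:<local-port>"; nginx would proxy_pass
            # to http://127.0.0.1:9090 for this upstream
            consul.hashicorp.com/connect-service-upstreams: "backend-service:9090"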

How to communicate between services in Kubernetes in a secure way

I have a few Node.js microservices running in Kubernetes and I need to find a way for them to communicate. I was thinking of exposing an endpoint that would only be accessible internally, from other pods. I have been searching for hours but didn't find a solution that would be secure enough. Is there a way to make this work? Thank you!
If you want your service to be accessible only from selected pods, you may use Network Policies. They let you define which pods can talk to which pods at the network level. For example, you may expose your service through an ingress and allow only the ingress controller to talk to your application. That way you can be sure that your application is only reachable through the ingress (with authentication) and no other way; a sketch follows the plugin list below.
Network Policies are supported only by some network plugins:
Calico
Open vSwitch
Cilium
Weave
Romana
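A minimal sketch of the ingress-only policy described above (the labels are hypothetical and depend on your deployment):

    # Sketch: allow traffic to "my-app" pods only from the ingress-nginx
    # namespace; all other pod-to-pod ingress to these pods is denied.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ingress-controller-only
    spec:
      podSelector:
        matchLabels:
          app: my-app               # hypothetical app label
      policyTypes:
        - Ingress
      ingress:
        - from:
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: ingress-nginx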
Natively, Kubernetes does not provide a mutual TLS solution for encrypted communication between services; that's where Istio, with its mutual TLS authentication, brings this functionality to the platform.
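For completeness, here is a minimal sketch of what enforcing mesh-wide mutual TLS looks like in Istio (assuming Istio's root namespace is istio-system):

    # Sketch: require mTLS for all workloads in the Istio mesh.
    apiVersion: security.istio.io/v1beta1
    kind: PeerAuthentication
    metadata:
      name: default
      namespace: istio-system   # root namespace => applies mesh-wide
    spec:
      mtls:
        mode: STRICT            # reject plaintext traffic between sidecars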
Simply use ClusterIP as the service type. This keeps your services exposed only within the cluster, and you can reach them by name over HTTP.
For any service that needs to be reachable publicly, you may need the LoadBalancer service type or an ingress controller.
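A minimal sketch of such a ClusterIP service (names and ports are hypothetical):

    # Sketch: cluster-internal service, reachable from other pods as
    # http://my-service.my-namespace.svc.cluster.local
    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
      namespace: my-namespace
    spec:
      type: ClusterIP        # the default; not reachable from outside
      selector:
        app: my-app          # hypothetical pod label
      ports:
        - port: 80           # service port
          targetPort: 8080   # container port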
There are several ways to achieve this:
1. Encryption, which at least needs some kind of certificate management. You could also use a service mesh for encryption, but this needs some effort.
2. Authorization via access tokens.
3. Securing the Kubernetes cluster and the ingress.
4. Network filtering (firewall).
This list is far from complete.
Note that doing just item 2 will not solve your problem. I think you will at least need items 1 and 2 to get some level of security.

Route the request via F5 or via Zuul/Eureka/Ribbon

I have a few services in my environment, requests to which are proxied via Zuul.
We have two instances each of Zuul, Eureka, and the downstream services for scalability and failover.
For a composite service, we look up the Zuul service in the Eureka registry, build the Zuul proxy endpoint from the returned info and the actual endpoint of the downstream service, and invoke the Zuul proxy endpoint using RestTemplate.
We also have F5 as our hardware load balancer. Using F5 we intend to receive the external requests, load balance them, and route them to the Zuul instances.
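For reference, the setup described above corresponds roughly to Spring Cloud configuration like the following (service names and hosts are made up):

    # Sketch: Zuul's application.yml; routes are resolved through Eureka
    # and load balanced across downstream instances by Ribbon.
    zuul:
      routes:
        downstream:
          path: /downstream/**
          serviceId: downstream-service   # looked up in the Eureka registry
    eureka:
      client:
        serviceUrl:
          defaultZone: http://eureka-1:8761/eureka/,http://eureka-2:8761/eureka/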
My query is: should the internal requests within the firewall, for example from the composite service, route to downstream services via F5 or via Zuul (using Eureka/Ribbon to look up Zuul first)?
I am not sure if the approach is correct and makes sense, because it means that for external requests we rely on F5 (a hardware LB) to load balance across Zuul, while for internal requests we rely on Eureka/Ribbon (software LB) to do it.
Would it be better, for consistency, to route from the composite service to F5 rather than use Eureka/Ribbon? But then I see extra hops: composite to F5, F5 to Zuul, Zuul to downstream (via Eureka/Ribbon).
Can anyone suggest a better way to handle this?

SSL Termination at F5 or Zuul/Eureka/Services?

We have a few services running in our environment with Spring Cloud Netflix, Eureka and Zuul. Also, we use Spring Boot for developing the services.
We also use F5 as the hardware load balancer; it receives the external requests and routes them to one of the Zuul instances based on the configured rule.
As of now, we use HTTP for communication between the services. We now want to secure all communications via HTTPS.
All the services, including Zuul and Eureka, are scaled to two instances on separate machines for failover.
My question is: should I set up and enable HTTPS for each of the services, including Eureka, Zuul, and the other downstream services, or is it possible to use HTTPS only for the F5 and leave the other instances on HTTP?
I have heard of a feature called SSL termination/offloading that most load balancers provide. I am not sure whether F5 supports it. If it does, would it make sense to use HTTPS only up to the F5 and leave the rest on HTTP?
I feel this could reduce the complexity of setting up SSL for each of the instances (whose number can change in the future based on load) and also avoid the slowness inherent in SSL decryption and encryption.
Should I secure every instance, including Eureka/Zuul and the downstream services, or just do SSL termination at the F5 alone?
If the back-end endpoints are HTTPS, then the load balancer has to balance at the TCP layer, as it cannot inspect the content. If the load balancer endpoints are HTTPS themselves, then there is usually little point in encrypting the internal traffic, and the load balancer can inspect the traffic and make smart routing decisions (e.g. sticky sessions). If the application endpoint needs to know that the original request was HTTPS (which is often the case), an HTTP header is added on the internal leg to advertise this, the de facto convention being the X-Forwarded-Proto header.
If you choose to leave the LB-to-app leg in the clear, then you need to make sure that the network segment is trustworthy and that your app endpoints are not reachable directly, bypassing the LB.
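If you do terminate at the F5 and forward plain HTTP, the Spring Boot services behind Zuul need to be told to trust the forwarded headers so that redirects and generated URLs carry the original scheme; a minimal sketch:

    # Sketch: make Spring Boot honor X-Forwarded-Proto/X-Forwarded-For
    # set by the F5 (Spring Boot 2.2+; older versions use
    # server.use-forward-headers: true instead).
    server:
      forward-headers-strategy: framework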

Multiple SOA/API gateways to satisfy internal and external customers

I have come across API and service architectures where an API/service gateway is positioned in the DMZ for authorization/authentication of external consumers (not employees). But in my case we have internal consumers as well. The internal consumers are behind an intranet firewall that isolates them from the DMZ. My question is: do I need to deploy a separate instance of the API/service gateway within the intranet for authentication/authorization of internal users, or should internal API access go through the intranet firewall to the DMZ and back into the intranet?
That depends on the gateway architecture. For example, Tyk.io separates the UI/portal from the gateway nodes. If the UI components and the gateway nodes are separate, then you can have a single portal but differentiated gateways to handle traffic from different sources.