How to integrate Kubernetes Nginx Ingress with Consul and Consul Connect - ssl

I have a k8s cluster with an nginx-based ingress and multiple services (ClusterIP). I want to use Consul as a service mesh, and the documentation is very clear on how to set up and govern communication between services. What is not clear, though, is how to set up the nginx ingress to talk to these services via the injected sidecar Connect proxies using mutual SSL. I'm using cert-manager to automatically provision certificates and terminate SSL at the ingress. I need to secure the communication between the ingress and the services with Consul-provisioned mutual SSL. Any documentation related to this scenario would definitely help.

You would inject the sidecar into the ingress-nginx controller pod and have it talk to the backend services the same way as any other service-to-service communication in the mesh. This will probably require overriding a lot of the auto-generated nginx config, so I'm not sure it will be as useful as you hope.
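For concreteness, here is a rough sketch of what that injection looks like, expressed as a strategic merge patch for the controller Deployment. The deployment name, namespace, upstream service name, and local port are illustrative assumptions, not something from this thread; check the consul-k8s docs for the current annotation set.

```typescript
// Sketch only (assumptions: deployment/namespace names, backend service "my-backend", port 9191).
import { writeFileSync } from "node:fs";

const connectPatch = {
  spec: {
    template: {
      metadata: {
        annotations: {
          // Ask the Consul Connect injector to add the sidecar proxy to the controller pod.
          "consul.hashicorp.com/connect-inject": "true",
          // Expose the backend service on a local port served by the sidecar; nginx upstreams
          // must then be overridden to point at 127.0.0.1:9191 instead of the pod IPs.
          "consul.hashicorp.com/connect-service-upstreams": "my-backend:9191",
        },
      },
    },
  },
};

// Apply with something like:
//   kubectl -n ingress-nginx patch deployment ingress-nginx-controller --patch "$(cat connect-patch.json)"
writeFileSync("connect-patch.json", JSON.stringify(connectPatch, null, 2));
```

The awkward part is the last comment: pointing nginx at the sidecar's local listener is exactly the "overriding a lot of the auto-generated config" mentioned above.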

Related

Consul load balancing north south traffic

I am trying to run some of my microservices within the Consul service mesh. As per the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But the documentation also talks about third-party load balancers like NGINX, HAProxy, and F5.
https://learn.hashicorp.com/collections/consul/load-balancing
If Consul takes care of load balancing, then what is the purpose of these load balancers?
My assumptions:
These load balancers replace Consul's built-in load balancing, but the LB still uses Consul's service discovery data. (Why would anyone need this?!)
Consul only provides load balancing for east-west traffic (within the service mesh). To load balance north-south traffic (internet traffic), we need external load balancers.
Please let me know which of my assumptions is correct.
Consul service mesh uses Envoy proxy by default for both east-west and north-south load balancing of connections within the mesh. Whereas east-west traffic is routed through a sidecar proxy, north-south connections route through an instance of Envoy which is configured to act as an ingress gateway.
In addition to Consul's native, Envoy-based ingress, Consul also supports integrations with other proxies and API gateways. These can be used if you require functionality which is not available in the native ingress offering.
Third party proxies leverage Consul's service catalog to populate their backend/upstream member pools with endpoint information from Consul. This allows the proxy to always have an up-to-date list of healthy and available services in the data center, and eliminates the need to manually reconfigure the north-south proxy when adding/removing service endpoints.
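As a rough sketch (not from any particular integration), this is the kind of query a north-south proxy can make against Consul's health API to keep its upstream pool current; the service name and agent address are assumptions.

```typescript
// Sketch: build an upstream list from Consul's catalog (assumes a local agent on 127.0.0.1:8500
// and a registered service called "web"; requires Node 18+ for the global fetch).
interface HealthEntry {
  Node: { Address: string };
  Service: { Address: string; Port: number };
}

async function healthyUpstreams(service: string): Promise<string[]> {
  // passing=true filters the result down to instances that pass their health checks.
  const res = await fetch(`http://127.0.0.1:8500/v1/health/service/${service}?passing=true`);
  const entries = (await res.json()) as HealthEntry[];
  // Prefer the service address if registered, otherwise fall back to the node address.
  return entries.map((e) => `${e.Service.Address || e.Node.Address}:${e.Service.Port}`);
}

// A proxy integration would re-run this (or use a blocking/watch query) and rewrite its
// backend pool whenever the membership changes.
healthyUpstreams("web").then((upstreams) => console.log(upstreams));
```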
Some gateways like Ambassador, F5, and (soon) Traefik (see PR https://github.com/traefik/traefik/pull/7407) go a step further by integrating with the service mesh (see Connect custom proxy integration) so that they can utilize mTLS when connecting to backend services.
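To illustrate what such a Connect integration involves, here is a hedged sketch of the raw building blocks: fetching a leaf certificate and the CA roots from the local Consul agent and dialing a backend with mutual TLS. The service name, upstream address, and port are assumptions, and a real integration would also authorize the connection against Consul's intentions rather than just establishing TLS.

```typescript
// Sketch only: mTLS to a Connect-enabled upstream using material from the local Consul agent.
import * as https from "node:https";

const AGENT = "http://127.0.0.1:8500"; // assumption: local Consul agent address

async function connectTlsMaterial(service: string) {
  // Leaf certificate and key identifying this service to the mesh.
  const leaf = await (await fetch(`${AGENT}/v1/agent/connect/ca/leaf/${service}`)).json();
  // CA roots used to verify the other side of the connection.
  const roots = await (await fetch(`${AGENT}/v1/agent/connect/ca/roots`)).json();
  return {
    cert: leaf.CertPEM as string,
    key: leaf.PrivateKeyPEM as string,
    ca: roots.Roots.map((r: { RootCertPEM: string }) => r.RootCertPEM),
  };
}

async function callBackend() {
  const tls = await connectTlsMaterial("ingress"); // assumption: gateway registered as "ingress"
  const req = https.request(
    { host: "10.0.0.12", port: 20000, path: "/", ...tls }, // assumption: upstream sidecar address
    (res) => console.log("backend answered", res.statusCode),
  );
  req.on("error", console.error);
  req.end();
}

callBackend();
```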
I checked with one of my colleagues (full disclosure: I work for F5) and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.
For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles you might find interesting that can provide context for your question.
Consul Templating BIG-IP Services
Automate App Delivery with F5, Terraform, and Consul

How to communicate between services in Kubernetes in a secure way

I have a few node.js microservices running in Kubernetes and now I need to find a way for them to communicate with each other. I was thinking of exposing an endpoint that would only be accessible internally, from other pods. I have been searching for hours but didn't find a solution that would be secure enough. Is there a way to make this work? Thank you!
If you want your service to be accessible only from selected pods, you can use Network Policies. They allow you to define which pods can talk to which pods at the network level. For example, you may expose your service through an ingress and allow only the ingress controller to talk to your application. That way you can be sure that your application is only reachable through the ingress (with authentication) and no other way; a sketch of such a policy follows the plugin list below.
Network Policies are supported only by some network plugins:
Calico
Open vSwitch
Cilium
Weave
Romana
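Here is a minimal sketch of such a policy, written as a plain object you could serialize and apply with kubectl (it accepts JSON as well as YAML). The labels, names, and namespaces are illustrative assumptions; adjust them to match your ingress controller and application.

```typescript
// Sketch: only pods of the ingress controller may reach the application pods.
import { writeFileSync } from "node:fs";

const policy = {
  apiVersion: "networking.k8s.io/v1",
  kind: "NetworkPolicy",
  metadata: { name: "allow-from-ingress-only", namespace: "default" },
  spec: {
    // The pods this policy protects (assumption: your app is labelled app=my-node-service).
    podSelector: { matchLabels: { app: "my-node-service" } },
    policyTypes: ["Ingress"],
    ingress: [
      {
        from: [
          {
            // Assumption: the controller runs in the "ingress-nginx" namespace with this label.
            namespaceSelector: { matchLabels: { "kubernetes.io/metadata.name": "ingress-nginx" } },
            podSelector: { matchLabels: { "app.kubernetes.io/name": "ingress-nginx" } },
          },
        ],
      },
    ],
  },
};

// Apply with: kubectl apply -f network-policy.json
writeFileSync("network-policy.json", JSON.stringify(policy, null, 2));
```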
Natively, Kubernetes does not provide a mutual TLS solution for services to communicate in a secure, encrypted way; that's where Istio, with mutual TLS authentication, brings this functionality to the platform.
Simply use 'ClusterIP' as the service type. This keeps your services exposed only within the cluster, and you can reach them by their name over HTTP.
For any service that needs to be reachable publicly, you may need the LoadBalancer service type or an ingress controller.
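For example, a rough sketch of one pod calling another through a ClusterIP Service via in-cluster DNS; the service name, namespace, port, and path are assumptions. Note this is plain HTTP inside the cluster unless you add TLS, a mesh, or Network Policies as discussed above.

```typescript
// Sketch: call the "orders" ClusterIP Service from another pod (Node 18+ for global fetch).
async function getOrders(): Promise<unknown> {
  // <service>.<namespace>.svc.cluster.local resolves to the Service's cluster IP,
  // so this call never leaves the cluster network.
  const res = await fetch("http://orders.default.svc.cluster.local:8080/api/orders");
  if (!res.ok) throw new Error(`orders service returned ${res.status}`);
  return res.json();
}

getOrders().then(console.log).catch(console.error);
```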
There are several ways to achieve this:
1. Encryption, which at least needs some kind of certificate management. You could also use a service mesh for encryption, but this takes some effort.
2. Authorisation via access tokens (a minimal sketch follows at the end of this answer).
3. Securing the Kubernetes cluster and the ingress.
4. Network filtering (firewall).
This list is far from complete.
Note that doing just item 2 will not solve your problem. I think you will at least need items 1 and 2 to get some level of security.
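As a minimal sketch of item 2 only, here is a shared-secret bearer-token check in an Express service. The framework, env variable, and port are assumptions; a production setup would more likely use signed JWTs or mesh-issued mTLS identities, and per the note above this alone is not enough without encryption.

```typescript
// Sketch: reject internal requests that don't carry the expected bearer token.
import express from "express";

const app = express();
const EXPECTED_TOKEN = process.env.INTERNAL_API_TOKEN ?? "";

app.use((req, res, next) => {
  const auth = req.header("authorization") ?? "";
  // Refuse everything if no token is configured, or if the caller's token doesn't match.
  if (!EXPECTED_TOKEN || auth !== `Bearer ${EXPECTED_TOKEN}`) {
    res.status(401).json({ error: "unauthorized" });
    return;
  }
  next();
});

app.get("/internal/health", (_req, res) => {
  res.json({ ok: true });
});

app.listen(3000, () => console.log("internal API listening on 3000"));
```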

Per backend configurable forwarding timeout in Traefik proxy?

I have a question on ForwardingTimeout configuration in Traefik (https://docs.traefik.io/configuration/commons/#forwarding-timeouts)
Do any backend providers support configuring it per backend instead of globally in the Traefik proxy?
Most other proxies (e.g. HAProxy) allow configuring dial timeouts and read timeouts per backend.
This is not possible at the moment.
There's a discussion about this feature and it's still open: https://github.com/containous/traefik/issues/3027

HTTPS between Azure Service Fabric services

I have the following scenario:
A stateless service with a self-hosted OWIN WebApi. This provides a RESTful client-facing API.
A stateful service, again with a self-hosted OWIN WebApi.
After locating the correct stateful service partition, the stateless service calls into stateful service to access state. It does so via HTTP/HTTPS into the WebApi.
This configuration works fine running on the local cluster and an Azure cluster over HTTP. I'm running into problems though with HTTPS.
Using a self-signed cert I'm able to use HTTPS between the client and the stateless front-end service. However, I can't seem to get the configuration quite right to allow the stateless service to communicate with the stateful service over HTTPS.
I get an exception when the stateless service makes the request to the stateful service. "The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel." That has an inner exception of "The remote certificate is invalid according to the validation procedure".
I'm a bit fuzzy on security in Service Fabric, but I have read through several articles, SO posts, blogs, etc. on the subject.
Here are my questions:
At a high level, what is the proper way to secure interservice communication in my scenario?
Is a self-signed cert supported in this scenario?
Are the two services in the same cluster? If so, why not just call the stateful service from the stateless one using ServiceProxy?
You can use a self-signed certificate - the error you're seeing is not specific to Service Fabric. There are several ways to bypass it (although obviously it's not recommended to do that in production). Take a look at this SO question: C# Ignore certificate errors?

How does WCF + SSL working with load balancing?

If SSL is handled by a load balancer, do I still need to configure it in the WCF serviceCertificate node? My scenario is to use message-level security. If someone can explain how load balancing with WCF and SSL works, that would be very nice.
WCF requires security tokens to be passed over a secure transport if the message itself is not signed/encrypted. Since traffic is HTTP between your BIG-IP and your individual web servers, you need a way for security tokens that you know are secured between the client and the BIG-IP up front to still be passed to your server farm. There are a couple of ways to do that depending on which version of WCF you're using:
If you're using WCF 4.0 you can just create a custom binding and set the AllowInsecureTransport property on the built in SecurityBindingElement to signify that you don't care that the transport isn't secure.
If you're using WCF 3.5 you have to "lie" about security with a custom TransportSecurityBindingElement on the server side. You can read my old post about this here.
FWIW, they created a hotfix release for 3.5 SP1 that adds the AllowInsecureTransport to that version, but I don't know if your company will allow you to install custom hotfixes.
If you want to use message security, then each message is encrypted and signed separately - there is no secure connection, and the load balancer behaves as it would with any other HTTP transport. The load balancer doesn't know about security and doesn't need a certificate.
There are two gotchas:
All load balanced application servers hosting your WCF service must use the same certificate
You must ensure that your WCF binding doesn't use sessions (reliable sessions, security sessions); otherwise you will need a load-balancing algorithm with sticky sessions (all requests for a single session are always routed to the same server).
It doesn't. Don't bother with this. You will be in a world of hurt. Just install the certs on each machine. We've recently been through this fiasco. WCF is not worth the effort: it thinks it needs SSL but sees that it doesn't have it. Take a look at OpenRasta or something else if you want to do all your SSL on the load balancer. #microsoftfail