I have a few Node.js microservices running in Kubernetes and now I need a way for them to communicate with each other. I was thinking of exposing an endpoint that would only be accessible internally, from other pods. I have been searching for hours but haven't found a solution that would be secure enough. Is there a way to make this work? Thank you!
If you want your service to be accessible only from selected pods, you may use Network Policies. They let you define which pods can talk to which pods at the network level. For example, you may expose your service through an ingress and allow only the ingress controller to talk to your application. That way you can be sure that your application is only reachable through the ingress (with authentication) and no other way. A sketch of such a policy follows the plugin list below.
Network Policies are supported only by some network plugins:
Calico
Open vSwitch
Cilium
Weave
Romana
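For illustration, here is a minimal sketch of such a policy. All names (namespace, labels, port) are placeholders for your own setup; it allows only pods from the ingress controller's namespace to reach the application's pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller-only
  namespace: my-namespace              # placeholder
spec:
  podSelector:
    matchLabels:
      app: my-app                      # placeholder: your application's pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx          # placeholder: assumes you have labeled the controller's namespace
    ports:
    - protocol: TCP
      port: 3000                       # placeholder: your app's container port

With this in place, traffic from any other pod is dropped before it reaches your application.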
communicate between services in Kubernetes in a secure way
Natively, Kubernetes does not provide a mutual TLS solution for encrypted communication between services; that's where Istio, with mutual TLS authentication, brings this functionality to the platform.
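For example, with recent Istio versions you can require mutual TLS for all workloads in a namespace with a PeerAuthentication resource (the namespace name is a placeholder):

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace   # placeholder
spec:
  mtls:
    mode: STRICT            # reject any plain-text traffic to the sidecars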
Simply use ClusterIP as the service type. This keeps your services exposed only within the cluster, and you can reach them by name over HTTP.
For any service that must be reachable publicly, use the LoadBalancer service type or an ingress controller.
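A minimal sketch of such a service (all names and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: orders
spec:
  type: ClusterIP        # the default; not reachable from outside the cluster
  selector:
    app: orders          # placeholder: must match your pods' labels
  ports:
  - port: 80             # port other pods call
    targetPort: 3000     # port your node.js app listens on

Other pods can then reach it at http://orders (same namespace) or http://orders.<namespace>.svc.cluster.local.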
There are several ways to achieve this:
Encryption, which at least needs some kind of certificate management. You could also use a service mesh for encryption, but this needs some effort.
Authorization via access tokens.
Securing the Kubernetes cluster and the ingress.
Network filtering (firewall).
This list is far from complete.
Note that doing just item 2 will not solve your problem. I think you will at least need items 1 and 2 to get some level of security.
Related
I am trying to run some of my microservices within the Consul service mesh. As per the Consul documentation, it is clear that Consul takes care of routing, load balancing, and service discovery. But their documentation also talks about 3rd-party load balancers like NGINX, HAProxy, and F5.
https://learn.hashicorp.com/collections/consul/load-balancing
If Consul takes care of load balancing, then what is the purpose of these load balancers?
My assumptions:
These load balancers are meant to replace Consul's built-in load-balancing technique, but the LB still uses Consul's service discovery data. (Why would anyone need this?!)
Consul only provides load balancing for east-west traffic (within the service mesh). To load balance north-south traffic (internet traffic), we need external load balancers.
Please let me know which of my assumptions is correct.
Consul service mesh uses Envoy proxy by default for both east-west and north-south load balancing of connections within the mesh. Whereas east-west traffic is routed through a sidecar proxy, north-south connections route through an instance of Envoy which is configured to act as an ingress gateway.
In addition to Consul's native Envoy ingress, Consul also supports integrations with other proxies and API gateways. These can be used if you require functionality that is not available in the native ingress offering.
Third party proxies leverage Consul's service catalog to populate their backend/upstream member pools with endpoint information from Consul. This allows the proxy to always have an up-to-date list of healthy and available services in the data center, and eliminates the need to manually reconfigure the north-south proxy when adding/removing service endpoints.
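As a rough illustration of that pattern, a consul-template snippet for an NGINX upstream block might look like this (the service name web is a placeholder); consul-template re-renders the file and reloads NGINX whenever the set of healthy instances changes:

upstream backend {
{{ range service "web" }}
  server {{ .Address }}:{{ .Port }};
{{ end }}
}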
Some gateways like Ambassador, F5, and (soon) Traefik (see PR https://github.com/traefik/traefik/pull/7407) go a step further by integrating with the service mesh (see Connect custom proxy integration) so that they can utilize mTLS when connecting to backend services.
I checked with one of my colleagues (full disclosure: I work for F5) and he mentioned that while it is not a technical requirement to use external services for load balancing, a lot of organizations already have the infrastructure in place, along with the operational requirements, policies, and procedures that come with it.
For some examples of how Consul might work with edge services like the F5 BIG-IP, here are a couple of articles you might find interesting that can provide context for your question.
Consul Templating BIG-IP Services
Automate App Delivery with F5, Terraform, and Consul
I have a k8s cluster with an nginx based ingress and multiple services (ClusterIP). I want to use Consul as a service mesh and documentation is very clear on how to set up and govern communication between services. What is not clear though is how to setup the nginx ingress to talk to these services via the injected sidecar connect proxies using mutual ssl. I'm using cert-manager to automatically provision and terminate ssl at the ingress. I need to secure the communication between the ingress and the services with Consul provisioned mutual SSL. Any documentation related to this scenario will definitely help.
You would inject the sidecar into the ingress-nginx controller and have it talk to backend services just like any other service-to-service connection. This will probably require overriding a lot of the auto-generated config, so I'm not sure it will be as useful as you hope. A sketch of the relevant annotations follows.
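As a rough sketch (the annotation values are placeholders, and the exact annotations depend on your consul-k8s version), the controller's pod template would carry Consul's injection annotations, and the controller would then send backend traffic to the local sidecar port instead of to the service directly:

annotations:
  consul.hashicorp.com/connect-inject: "true"
  consul.hashicorp.com/connect-service-upstreams: "backend-service:9090"

With that in place, the nginx upstream for backend-service would point at localhost:9090, and the sidecar handles the mutual TLS to the destination's sidecar.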
I have a web app that talks to a service layer via WCF. These need to be internal endpoints and should be net.tcp bindings. However, I also have some services in the service layer that don't need to be consumed internally but need to be exposed to the outside world, i.e. HTTP/HTTPS input endpoints. What is the best way to implement this in Azure?
I was hoping someone could provide clarification / advice on the following points:
If I use internal endpoints, are these load balanced? There seems to be a lot of contradictory info around the web. I have read that you need to implement your own algorithm, but I have also read that this has now been implemented by Microsoft and it is automatic.
Should the service layer be a web role or a worker role? It seems that there is a bit of a workaround to get internal TCP bindings working with a web role?
Is there a specific set of guidelines as to which one to use? i.e. web role or worker role.
I am assuming I am going to need two instances regardless of whether I use a web role or a worker role, but wouldn't this depend on the first point? I.e., if there is no load balancer, is there even any point in having two worker role instances?
Would it be better to split my service layer into two layers? One to expose the internal endpoints and another to expose the public endpoints?
Thanks in advance.
Take a look at Azure Service Bus; you can create relays there to expose your internal WCF services.
You can use a Service Bus relay for this; take a look at Azure's relay documentation.
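A minimal sketch with the classic Service Bus relay SDK (the namespace, key, and contract names are all placeholders):

using System;
using System.ServiceModel;
using Microsoft.ServiceBus;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string Ping();
}

public class OrderService : IOrderService
{
    public string Ping() { return "pong"; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(OrderService));

        // Expose the internal WCF service through the relay rather than a public input endpoint.
        var endpoint = host.AddServiceEndpoint(
            typeof(IOrderService),
            new NetTcpRelayBinding(),
            ServiceBusEnvironment.CreateServiceUri("sb", "mynamespace", "orders"));

        // Authenticate this listener against the relay (key name/value are placeholders).
        endpoint.Behaviors.Add(new TransportClientEndpointBehavior
        {
            TokenProvider = TokenProvider.CreateSharedAccessSignatureTokenProvider("keyName", "key")
        });

        host.Open();
        Console.WriteLine("Listening on the relay; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}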
If SSL is handled by a load balancer, do I still need to configure it in the WCF serviceCertificate node? My scenario is to use message-level security. If someone can explain how load balancing works with WCF and SSL, that would be very nice.
WCF requires security tokens to be passed over a secure transport if the message itself is not signed/encrypted. Since traffic is HTTP between your BIG-IP and your individual web servers, you need a way for security tokens that you know are secured between the client and the BIG-IP up front to still be passed to your server farm. There are a couple of ways to do that, depending on what version of WCF you're using:
If you're using WCF 4.0 you can just create a custom binding and set the AllowInsecureTransport property on the built-in SecurityBindingElement to signify that you don't care that the transport isn't secure (a sketch follows below).
If you're using WCF 3.5 you have to "lie" about security with a custom TransportSecurityBindingElement on the server side. You can read my old post about this here.
FWIW, they created a hotfix release for 3.5 SP1 that adds the AllowInsecureTransport to that version, but I don't know if your company will allow you to install custom hotfixes.
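To make the WCF 4.0 option concrete, here is a minimal sketch (the message-security settings are placeholders for whatever you actually use; needs System.ServiceModel and System.ServiceModel.Channels):

// Start from a binding with message security...
var ws = new WSHttpBinding(SecurityMode.Message);
ws.Security.Message.ClientCredentialType = MessageCredentialType.UserName;

// ...then relax the secure-transport requirement on its security element.
var elements = ws.CreateBindingElements();
var security = elements.Find<SecurityBindingElement>();
security.AllowInsecureTransport = true;    // .NET 4.0+: tokens may flow over plain HTTP behind the BIG-IP

var binding = new CustomBinding(elements); // use this binding on the endpoint instead of ws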
If you want to use message security, then each message is encrypted and signed separately: there is no secure connection, and the load balancer behaves as with any other HTTP traffic. The load balancer doesn't know about security and doesn't need a certificate.
There are two gotchas:
All load balanced application servers hosting your WCF service must use the same certificate
You must ensure that your WCF binding doesn't use sessions (reliable sessions or security sessions); otherwise you will need a load-balancing algorithm with sticky sessions (all requests for a single session always routed to the same server). A sketch of a sessionless binding follows.
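A hedged sketch of that second gotcha (certificate configuration elided):

// Message security without any session state, so any server in the farm can handle any message.
var binding = new WSHttpBinding(SecurityMode.Message);
binding.ReliableSession.Enabled = false;                     // no reliable-messaging session
binding.Security.Message.EstablishSecurityContext = false;   // no security session, so no sticky routing needed
binding.Security.Message.NegotiateServiceCredential = false; // clients must know the service cert up front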
It doesn't. Don't bother with this; you will be in a world of hurt. Just install the certs on each machine. We've recently been through this fiasco. WCF is not worth the effort here: it thinks it needs SSL but sees that it doesn't have it. Take a look at OpenRasta or something else if you want to do all your SSL on the load balancer. #microsoftfail
My employer is a software vendor for a specific market. Our customers integrate our system with others using web services. We use Microsoft technology, and our web services are implemented in ASP.NET and WCF.
The time has come to review our current set of services, and come up with company standards for future integrations. I am reading "Enterprise Integration Patterns," and I've also been looking a little bit at NServiceBus and MassTransit. These may simplify issues like contract versioning and unit testing, but they seem most useful for providing an internal service bus, not for exposing services to external clients.
Our customers are on many different platforms, and require our services to be standards compliant. That may mean different things to different people, but I think it is safe to assume that they want to access web services described with WSDL.
In this scenario, is WCF the way to go?
WCF is by far the most standards-compliant stack on the Microsoft platform. The nice thing is that it's very flexible for different clients "out of the box", and if there are things that cause you grief, most of them can be changed via custom behaviors without too much trouble.
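To illustrate that interoperable baseline, here is a minimal self-hosted sketch (the contract, address, and values are invented for the example). BasicHttpBinding speaks plain SOAP 1.1, and the metadata behavior publishes WSDL that non-.NET clients can consume at ?wsdl:

using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IQuoteService
{
    [OperationContract]
    decimal GetQuote(string symbol);
}

public class QuoteService : IQuoteService
{
    // Stub implementation for the example.
    public decimal GetQuote(string symbol) { return 42.0m; }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(QuoteService), new Uri("http://localhost:8080/quotes"));

        // BasicHttpBinding is the most interoperable out-of-the-box choice (SOAP 1.1 over HTTP).
        host.AddServiceEndpoint(typeof(IQuoteService), new BasicHttpBinding(), "");

        // Publish WSDL at http://localhost:8080/quotes?wsdl for clients on other platforms.
        host.Description.Behaviors.Add(new ServiceMetadataBehavior { HttpGetEnabled = true });

        host.Open();
        Console.WriteLine("Service running; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}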
An alternative that I normally recommend is integration over AMQP between your message brokers. That way you can use the push paradigm instead of the polling one (which is much more powerful and scalable in comparison)!
You'd set up your own broker, such as RabbitMQ, locally. Then you'd let your integration partner set up one. (Easy: just download it).
If your partner is integrating from the same data center, you'd be safe to assume few network splits, meaning you could share the broker. On the other hand, if you are on different networks, you can set up the brokers in federation mode. (Run rabbitmq-plugins enable rabbitmq_federation and point to the other broker.)
Now you can use e.g. MassTransit:
ServiceBusFactory.New(sbc =>
{
    // Route messages over RabbitMQ (one exchange per message type).
    sbc.UseRabbitMqRouting();

    // The queue this application consumes from.
    sbc.ReceiveFrom("rabbitmq://rabbitmq.mydomain.local/myvhost/myapplication");

    // Register your subscriptions here:
    // sbc.Subscribe( s => s ... );
});
This is just like what you would do when not doing any integration.
If you look at http://rabbitmq.mydomain.local:55672/ now you will find the administration interface for RabbitMQ. MassTransit creates an exchange for each message type (sending such a message to that exchange will fan out to all subscribers), which you can put authorization rules on.
Authorization rules can take the form of a regex per user, or authorization can be integrated with LDAP. Consult the documentation for this.
You'd also need SSL if you're going over the WAN and you don't have an IPsec tunnel - that documentation is here: http://www.rabbitmq.com/ssl.html - and you enable it in rabbitmq.config, roughly as sketched below.
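The relevant part of rabbitmq.config looks roughly like this (classic Erlang-term format; all paths are placeholders, and the guide above covers generating the files):

[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile, "/path/to/cacert.pem"},
                   {certfile,   "/path/to/cert.pem"},
                   {keyfile,    "/path/to/key.pem"},
                   {verify,     verify_peer},
                   {fail_if_no_peer_cert, false}]}
  ]}
].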
That's it! Enjoy!
Post scriptum: if you are feeling up for an adventure that will, as a side effect, help you manage all of your infrastructure, you can have a look at Puppet. Puppet is a provisioner and configuration manager for servers; in this case you'd be interested in setting up SSL with Puppet. First, order a wildcard certificate for your domain, then use that cert to sign other certificates. You can delegate that: see the RabbitMQ guide where it states "Now we can generate the key and certificates that our test Certificate Authority will use" - generate a certificate signing request for your certificate instead of creating a new authority - and let RabbitMQ use the result for SSL. It will then be valid on the internet.