I have a POC domain in an AWS VPC with a Kerberised Kafka cluster operating with SSL.
Within the VPC, which has Active Directory, we can connect producers/consumers to the cluster over SASL_SSL and everything works fine.
Part of the POC requires an on-prem service to produce to the broker. I'd hoped we could use LdapLoginModule in the jaas.conf and, for now, just use LDAP over SSL passing in the password until we get federated AD.
Can someone confirm if this is possible or any suggested workarounds?
If anyone has a similar question, it would seem SASL/PLAIN is the way to go: http://docs.confluent.io/3.0.0/kafka/sasl.html#authentication-using-sasl-plain
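For reference, a minimal sketch of the client side of SASL/PLAIN over SSL, with placeholder credentials and truststore paths (not values from the cluster above). The JAAS file is passed to the producer JVM via -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf:

KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="onprem-producer"
    password="onprem-producer-secret";
};

And the matching producer properties:

# producer.properties (placeholder paths and passwords)
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=truststore-secret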
Maybe this is a dumb question, but I really don't know whether I have to secure applications with tokens etc. within a Kubernetes cluster.
So, for example, I make a gRPC call from a client within the cluster to a server within the cluster.
I thought this should be secure without authenticating the client with a token or something like that, because (if I understood it right) Kubernetes Pods and Services communicate over a private cluster network which isn't exposed unless you explicitly expose it.
But is this really secure, should I somehow build an authorization system within my cluster?
Also, how can I use a Service to load balance the gRPC calls over the server Pods without exposing the server outside the cluster?
If you have a Service with more than one replica behind it, it already gives you built-in load balancing out of the box.
Also, Kubernetes traffic is internal within the cluster out of the box, unless you explicitly expose it using a LoadBalancer, an Ingress, or a NodePort.
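For the load-balancing part of the question, a minimal sketch of a cluster-internal Service in front of the gRPC server Pods (names, labels and port are placeholders; the default ClusterIP type is only reachable from inside the cluster):

apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  type: ClusterIP            # internal only; not exposed outside the cluster
  selector:
    app: grpc-server         # placeholder Pod label
  ports:
    - protocol: TCP
      port: 50051            # placeholder gRPC port
      targetPort: 50051

One caveat: gRPC uses long-lived HTTP/2 connections, so a plain Service balances connections rather than individual calls.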
Does it mean traffic is safe? No.
By default, everything is allowed within a Kubernetes cluster, so every Service can reach every other Service or Pod (including Pods in StatefulSet apps).
You can use a NetworkPolicy to allow traffic from one service to another service and nothing else, as in the sketch below. That would increase security.
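A minimal sketch of such a NetworkPolicy, reusing the placeholder labels from the Service sketch above and allowing only the gRPC client Pods to reach the server Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-grpc-client-only
spec:
  podSelector:
    matchLabels:
      app: grpc-server          # the Pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: grpc-client  # placeholder label for the allowed clients
      ports:
        - protocol: TCP
          port: 50051

Note that this only takes effect if the cluster's network plugin actually enforces NetworkPolicy.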
Does it mean traffic is safe now? It depends.
Authentication would add an additional security layer in case a container is compromised. There could be more scenarios, but I can't think of any right now.
So internal authentication is usually used to improve security in production systems.
I hope it answers the question.
We have IIB 10.0.0.12 running on Windows Server 2012 R2. We are looking to set up Kerberos token-based authentication for SOAP services that are exposed to internal/external consumers.
We have around 4 system test servers running on the same domain. The test servers are not load balanced; can we create a single user account (say "IIBTestPrincipal") in Active Directory, map multiple SPNs to this user account, and set up the test environments like below?
setspn -A HTTP/server3.somedomain.co.uk@SOMEDOMAIN.CO.UK IIBADPrincipal
setspn -A HTTP/server5.somedomain.co.uk@SOMEDOMAIN.CO.UK IIBADPrincipal
Can somebody please advise/guide on the process for setting up the same in a load-balanced environment?
We have 4 broker servers load balanced via NetScaler. Can the load balancer perform a Kerberos passthrough and the brokers perform all the Kerberos authentication work? If so, should we be creating an SPN for the load balancer host name and mapping all the prod servers as aliases to that SPN?
Couldn't find much info in the Info Center; any thoughts on the above are much appreciated.
NetScaler supports Kerberos impersonation and Kerberos constrained delegation. I'm not that familiar with Kerberos; take a look at their documentation:
https://support.citrix.com/article/CTX222453
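If you do end up putting the SPN on the load balancer's host name as the question suggests, the usual Kerberos pattern is to register the SPN for the virtual host name that clients actually target against the shared service account, following the same pattern as the setspn commands above (the virtual host name here is hypothetical):

setspn -A HTTP/iib-lb.somedomain.co.uk@SOMEDOMAIN.CO.UK IIBADPrincipal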
I am trying to find more documentation and a better understanding of security in Azure Managed Kubernetes Service (AKS).
Does Azure encrypt the containers deployed to the AKS cluster at rest? If so, how is data encryption achieved at rest and in motion?
What are the ways to achieve SSL/TLS in AKS? Any documentation is appreciated.
Thanks in advance
I can definitely tell you TLS termination is supported in AKS. I've been able to implement this.
HTTPS Ingress on Azure Kubernetes Service (AKS)
This documentation is slightly out of date though. You should use cert-manager instead of KUBE-LEGO.
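As a rough sketch of the cert-manager side (assuming the NGINX ingress controller and the current cert-manager.io/v1 API; the name, email and secret below are placeholders), you create a ClusterIssuer and then reference it from the Ingress with the cert-manager.io/cluster-issuer annotation:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory   # Let's Encrypt production endpoint
    email: admin@example.com                   # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-account-key            # Secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx                       # assumes the NGINX ingress controller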
I would welcome a more authoritative answer, but as far as I have determined managed disks are always encrypted (https://azure.microsoft.com/en-us/blog/azure-managed-disks-sse/), but the worker nodes are not encrypted by default. It would be necessary to run az vm encryption enable on every node (quite a chore if you are scaling up and down!). If you do that you should be covered, though.
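For reference, the per-node encryption step mentioned above looks roughly like this (resource group, VM name and Key Vault are placeholders):

az vm encryption enable --resource-group my-aks-node-rg --name aks-nodepool1-12345678-0 --disk-encryption-keyvault myKeyVault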
As for SSL/TLS, Kubernetes supports TLS for Ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress), but I haven't tested it in AKS. We are using our own Nginx server as a gateway, and with that approach you can use any TLS tutorial out there. We feel that we have more control that way.
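For completeness, a minimal sketch of TLS on an Ingress using the current networking.k8s.io/v1 API (host, Secret and backend names are placeholders, and the certificate/key must already exist in a Secret of type kubernetes.io/tls):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  tls:
    - hosts:
        - app.example.com
      secretName: app-example-tls   # kubernetes.io/tls Secret with the cert and key
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web           # placeholder backend Service
                port:
                  number: 80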
I'm setting up a cloud service on Azure and want to buffer logs in Redis. However, running Redis as a web service on Azure means my requests have to leave my virtual network, which means encryption is a must.
I've searched for hours but haven't found any clue as to whether Logstash can read from Redis via SSL. Is that not possible at all?
It seems Redis itself can't talk SSL, and the Azure Redis web service appears to come with its own custom SSL support, which seems to be the reason there is no SSL support in the Redis input.
However, this solution (stunnel) helped me solve my problem: http://bencane.com/2014/02/18/sending-redis-traffic-through-an-ssl-tunnel-with-stunnel/
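Roughly, the idea is to run a local stunnel client that accepts plain-text Redis traffic and forwards it over TLS to the Azure Redis endpoint, then point the Logstash redis input at that local tunnel. A sketch with placeholder host name, key and password:

; stunnel client configuration, e.g. /etc/stunnel/redis.conf
[redis-tls]
client = yes
accept = 127.0.0.1:6379                          ; local plain-text endpoint for Logstash
connect = myredis.redis.cache.windows.net:6380   ; Azure Redis SSL port (placeholder host)

And the matching Logstash input, which talks to the tunnel instead of the Azure host:

input {
  redis {
    host      => "127.0.0.1"
    port      => 6379
    data_type => "list"
    key       => "logstash"                # placeholder list key
    password  => "your-redis-access-key"   # placeholder Azure Redis access key
  }
}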
I have been messing around with OpenShift and reading as much documentation as I can. Yet, the authentication performed by default (using the admin .kubeconfig) puzzles me.
1) Are client-certificate-data and client-key-data the same as the admin certificate and key? I ask because the contents of the certificate/key files are not the same as in .kubeconfig.
2) .kubeconfig (AFAIK) is used to authenticate against a Kubernetes master. Yet, in OpenShift we are authenticating against the OpenShift master (right?). Why use .kubeconfig?
Kind regards, and thank you for your patience.
OpenShift builds on top of Kubernetes - it exposes both the OpenShift APIs (builds, deployments, images, projects) and the Kubernetes APIs (pods, replication controllers, services). A client connecting to OpenShift will use both sets of APIs. OpenShift can run on top of an existing Kubernetes cluster, in which case it will proxy API calls to the Kubernetes master and then apply security policy on top (via the OpenShift policy engine which may eventually become part of Kube).
So, the client is really an extension of kubectl that offers some additional functionality, and it can use .kubeconfig to be consistent with a kubectl setup. You can talk to an OpenShift cluster via kubectl, so vice versa seems fair.
The client-certificate-data and key-data are base64 encoded versions of the files on disk. They should be the same once you decode them. We do that so the .kubeconfig can be shipped around as one unit, but you can also set it up to reference files on disk.
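If you want to check this yourself, you can decode the embedded data and compare it with the file on disk, along these lines (paths are placeholders, and this assumes the value sits on a single line in the config):

# Pull the base64 value out of the kubeconfig and decode it
grep client-certificate-data ~/.kube/config | awk '{print $2}' | base64 --decode > /tmp/client.crt
# Compare with the certificate file on disk
diff /tmp/client.crt /path/to/admin/client.crt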