We want to encrypt Cassandra tables/columns/SSTables/commitlogs/hints and store the keystore in Vault or KMS. All of our Cassandra instances are EC2 machines. Can you please direct me to the documentation on how to achieve this encryption?
I have a Node.js app that uses the kafkajs package to connect to AWS MSK.
We are moving to Strimzi Kafka because we already have a Kubernetes cluster and don't need MSK anymore.
Until now we connected with SSL but didn't have to specify any CA path or anything; we used this way of connecting both in our Node.js apps and in kafka-ui, and it worked with no issues.
We are trying to do the same with Strimzi Kafka, but we get an SSL handshake failed error.
My understanding is that AWS MSK uses well-known Amazon certificates, while Strimzi Kafka generates self-signed certificates, which is fine by us.
How can I keep connecting the way we did with AWS MSK, with just ssl: true in kafkajs? (That works there.)
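For reference, the connection that currently works against MSK looks roughly like this (broker address and client id are just placeholders):

```
import { Kafka } from 'kafkajs';

// Works against AWS MSK because the brokers present certificates
// signed by a public CA that Node.js already trusts.
const kafka = new Kafka({
  clientId: 'my-app',                              // placeholder client id
  brokers: ['b-1.msk.example.amazonaws.com:9094'], // placeholder MSK broker
  ssl: true,                                       // verify against the default CA bundle
});
```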
Thanks.
The easiest way to use a certificate signed by some public CA is to use a listener certificate, which lets you provide your own server certificate for a given listener. I'm not sure how the Amazon CA works, but this blog post shows how to do it, for example using Cert-Manager and Let's Encrypt.
Keep in mind that to use public CAs, you usually need proper domain names and not just internal Kubernetes services. This might, for example, increase costs or latency if your applications run in the same Kubernetes cluster, because the traffic might need to go through a load balancer or ingress.
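For completeness, the other common option is to keep Strimzi's self-signed cluster CA and hand it to kafkajs explicitly instead of relying on the default trust store. A minimal sketch, assuming the CA certificate has been extracted from the <cluster-name>-cluster-ca-cert Secret into a local ca.crt file (bootstrap address and file path are placeholders):

```
import * as fs from 'fs';
import { Kafka } from 'kafkajs';

// Trust the Strimzi cluster CA explicitly so certificate verification still passes.
const kafka = new Kafka({
  clientId: 'my-app',                           // placeholder client id
  brokers: ['my-cluster-kafka-bootstrap:9093'], // placeholder TLS bootstrap address
  ssl: {
    ca: [fs.readFileSync('ca.crt', 'utf-8')],   // cluster CA extracted beforehand
  },
});
```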
We have a service provider that takes a request and creates an Elasticsearch cluster.
What is the best practice for issuing SSL certificates?
1. Should we issue a certificate per cluster?
2. Or is one certificate for my service provider enough, used to access all the clusters?
I am assuming issuing a new certificate when creating each cluster is better.
Please provide your input.
Also, inside the cluster, do I really need to enable SSL so that pods talk to each other using certificates?
Yes, you should definitely use TLS to encrypt network traffic to, from, and within your Elasticsearch clusters when they run on a shared, managed Kubernetes offering (GKE).
Additionally, I would opt for maximum separation of customer spaces with:
Kubernetes namespaces
namespaced serviceaccounts/rolebindings
and even PD-SSD based volumes with customer supplied encryption keys
I'm not sure if you are aware of the existence of 'Elastic Cloud on Kubernetes' (ECK) - it applies the Kubernetes Operator pattern for running and operating Elasticsearch clusters on your own K8S cluster in GCP. Treat it also as a collection of best practices for running an Elasticsearch cluster in the most secure way; here is a quick start tutorial.
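As a sketch of the certificate-per-cluster option (point 1), each tenant's client would then trust only its own cluster's CA. This is a minimal illustration with the official Node.js Elasticsearch client; the endpoint, credentials, and CA file name are placeholders, and the TLS option is named ssl in older 7.x clients:

```
import * as fs from 'fs';
import { Client } from '@elastic/elasticsearch';

// Connect to one tenant's cluster, verifying the server against that cluster's CA only.
const client = new Client({
  node: 'https://cluster-a.example.internal:9200',   // placeholder endpoint
  auth: { username: 'elastic', password: 'secret' }, // placeholder credentials
  tls: {
    ca: fs.readFileSync('cluster-a-ca.crt'),         // per-cluster CA exported beforehand
  },
});
```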
I am trying to get more documentation and a better understanding of security in Azure Managed Kubernetes Service (AKS).
Does Azure encrypt the containers deployed to the AKS cluster at rest? If so, how is data encryption achieved at rest and in motion?
What are the ways to achieve SSL/TLS in AKS? Any documentation is appreciated.
Thanks in advance
I can definitely tell you TLS termination is supported in AKS. I've been able to implement this.
HTTPS Ingress on Azure Kubernetes Service (AKS)
This documentation is slightly out of date though. You should use cert-manager instead of KUBE-LEGO.
I would welcome a more authoritative answer, but as far as I have determined, managed disks are always encrypted (https://azure.microsoft.com/en-us/blog/azure-managed-disks-sse/), while the worker nodes are not encrypted by default. It would be necessary to run az vm encryption enable on every node (quite a chore if you are scaling up and down!). If you do that, you should be covered, though.
As for SSL/TLS, Kubernetes supports TLS for Ingress (see https://kubernetes.io/docs/concepts/services-networking/ingress), but I haven't tested it in AKS. We are using our own Nginx server as a gateway, and with that approach you can use any TLS tutorial out there. We feel that we have more control that way.
I have deployed a Redis cluster on Google Kubernetes Engine using Kubernetes's provided examples. It works as expected.
I am attempting to connect to this cluster from client applications. I am aware that Redis does not provide encryption, nor is it recommended practice to expose the cluster to the world; it's intended to be accessed from private and trusted networks.
Given that, by default, Redis binds to the loopback interface, how can I connect to the cluster with standard (Go or Python) client libraries?
As Carlos described, kubectl proxy might be an approach. Here are some alternatives:
1. Look at how cloud services providing Redis-as-a-Service are doing this. Do they have a password auth model? Do they have TLS certificates? Figure out how they provide auth, and you can configure it that way too.
2. If there's no authentication, kubectl proxy and kubectl port-forward will give you a secure tunnel into the cluster, so you don't have to expose the Redis Service to the public internet.
3. Use the new Internal Load Balancer feature (https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing). This lets you expose your Redis cluster (running on GKE with a non-public IP address) to other GCE VMs in your network. This still doesn't do authentication/authorization, but at least it's not exposed to the public Internet.
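To illustrate the tunnel option: with kubectl port-forward svc/redis-master 6379:6379 running in another terminal (the Service name is a placeholder), a standard client simply connects to localhost. The same idea applies to the Go or Python clients; here is a minimal sketch with the node-redis package:

```
import { createClient } from 'redis';

async function main() {
  // Connect through the local end of the kubectl port-forward tunnel.
  const client = createClient({ url: 'redis://127.0.0.1:6379' });
  await client.connect();

  await client.set('greeting', 'hello from outside the cluster');
  console.log(await client.get('greeting'));

  await client.quit();
}

main().catch(console.error);
```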
I have an Nginx web server running on EC2 and an Amazon RDS MySQL instance with encryption enabled at instance creation time, using an encryption key I created with IAM.
Question 1: What's the purpose of this encryption key? Is it just to encrypt data at rest?
Then I'm trying to use the SSL certificate provided by RDS (rds-ca-2015-root.pem) to encrypt data in motion between the Nginx web server and the RDS MySQL instance.
Question 2: Do I have to copy the .pem file to the Nginx server and do some configuration? Please list the steps if possible.
After that, I want the Nginx web server to communicate with visitors' browsers over HTTPS when they submit login info and other sensitive information.
Question 3: How can I do this? Do I need another SSL certificate from a CA, and how do I configure it on the Nginx server?
Thank you.
Question 1: What's the purpose of this encryption key?
It looks like it's for SSL communication to your MySQL instance. To talk securely to your instance, you must configure your MySQL client with that .pem file.
Is it just to encrypt data at rest?
No, for that, you just check a checkbox at RDS database instance creation time. There is nothing else to do.
to encrypt data in motion between the Nginx web server and the RDS MySQL instance.
Nginx doesn't talk to MySQL. (Well, maybe you have some strange nginx module?) Whatever is talking to MySQL must be configured for SSL mode (and, for security, must be given the .pem file so it can verify it's talking to the right server).
Question 2: Do I have to copy the .pem file to the Nginx server and do some configuration? Please list the steps if possible.
Yes. Since Nginx doesn't talk to MySQL directly, you need to figure out what does. (Is it some nginx module? Look at that module's docs for directives. Otherwise, Nginx is probably fronting for some other app, such as PHP-FPM; that app must be configured for SSL to MySQL.)
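For example, if the app behind Nginx happened to be a Node.js service using the mysql2 package (just an assumption for illustration), pointing it at the RDS CA bundle would look roughly like this (endpoint, credentials, and file location are placeholders):

```
import * as fs from 'fs';
import mysql from 'mysql2/promise';

// Open a TLS connection to RDS, verifying the server against the CA file from the question.
async function connect() {
  return mysql.createConnection({
    host: 'mydb.example.us-east-1.rds.amazonaws.com', // placeholder RDS endpoint
    user: 'app_user',                                 // placeholder credentials
    password: 'secret',
    database: 'app_db',
    ssl: {
      ca: fs.readFileSync('rds-ca-2015-root.pem'),    // the .pem copied onto the app server
    },
  });
}
```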
Question 3: Do I need another SSL certificate from a CA, and how do I configure it on the Nginx server?
Yes, there is a complex dance you need to do to get SSL working. There are many pages on how to do that. You'll need to ask a specific question if you want help with that.