DataStax Cassandra SSL

I'm new to Cassandra and just installed a DataStax Community Edition 3-node cluster in our QA environment. I'd like to secure node-to-node and client-to-node communications within my cluster using a GlobalSign wildcard SSL cert that I already have. So far I've found posts showing how to secure a cluster using your own CA, but wasn't able to find any mention of how to use wildcard certs. Basically, I'd like to install my wildcard cert on all nodes in the cluster and use DNS A records to map each node's IP address to a DNS name (e.g. 10.100.1.1 > node01.domain.com).
Is that even possible? Any help is greatly appreciated!
Mike

Using anything but certificate pinning as described in the reference is insecure, as Cassandra will not validate whether the hostname the certificate was issued for is actually the host trying to connect. See CASSANDRA-9220 for details.
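If you do go down this road, the wildcard cert and its GlobalSign chain would have to be packaged into a Java keystore/truststore on each node and referenced from cassandra.yaml. A minimal sketch with hypothetical file paths and passwords, keeping the hostname-validation caveat above in mind:

# cassandra.yaml (paths and passwords are placeholders)
server_encryption_options:
    internode_encryption: all
    keystore: /etc/cassandra/conf/node.keystore.jks      # contains the wildcard cert + private key
    keystore_password: changeit
    truststore: /etc/cassandra/conf/truststore.jks       # contains the GlobalSign CA chain
    truststore_password: changeit
    require_client_auth: true

client_encryption_options:
    enabled: true
    keystore: /etc/cassandra/conf/node.keystore.jks
    keystore_password: changeit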

Related

Why is my SSL certificate not working on my VMware ESXi server anymore?

My VMware ESXi server had an SSL certificate, and after a restart it stopped working. I am not using a domain name, just the IP address.
What I have done so far:
I have restored the certificates from vSphere.
I have created self-signed certificates and installed them.
Is this a network issue, or am I missing something?
Please advise how I can dig deeper to find the problem.
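One way to dig deeper (a sketch, assuming the host's management interface answers on port 443 at 10.0.0.10) is to check which certificate the host is actually presenting and whether it has expired:

openssl s_client -connect 10.0.0.10:443 -showcerts </dev/null | openssl x509 -noout -subject -issuer -dates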

Kubernetes: mount certificate to pod

I'd like to deploy an ldap server on my kubernetes cluster. The server itself is up and running, but I'd like to enable SSL encryption for it as well.
I already have cert-manager up and running, and I also use a multitude of SSL certificates with my ingresses for my HTTP traffic. It would be really nice if I could just use a CertificateRequest with my ldap server as well, managed and renewed by cert-manager.
My problem is that I have no idea how to mount a Certificate into my kubernetes pod. I know that cert-manager creates a secret and puts the certificate data in it. The problem with that is that I have no visibility into the validity of that certificate, and I can't remount/reapply the certificate once it is renewed.
Has anybody done anything like this? Is there a non-hacky way to incorporate ingresses to terminate SSL encryption?
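For the mounting part: cert-manager writes the issued cert into a TLS Secret (keys tls.crt, tls.key, and usually ca.crt), and that Secret can be mounted into the pod like any other secret volume. A minimal sketch, assuming the Certificate's secretName is ldap-tls and a hypothetical openldap container; when cert-manager renews the certificate it updates the Secret and the kubelet refreshes the mounted files after a short delay, though the LDAP server itself may still need a reload to pick them up:

apiVersion: v1
kind: Pod
metadata:
  name: ldap
spec:
  containers:
  - name: openldap
    image: osixia/openldap:1.5.0          # hypothetical image
    volumeMounts:
    - name: ldap-tls
      mountPath: /etc/ldap/certs          # hypothetical path the server reads its certs from
      readOnly: true
  volumes:
  - name: ldap-tls
    secret:
      secretName: ldap-tls                # Secret created/renewed by cert-manager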

How do I create a TLS cert for a three-node server domain that covers the parent domain as well?

I'm not even sure I asked the question right...
I have three servers running minio in distributed mode. I need all three servers to run with TLS enabled. It's easy enough to run certbot, generate a cert for each node, drop said certs into /etc/minio/certs/ and go! But here's where I start running into issues.
The servers are thus:
node1.files.example.com
node2.files.example.com
node3.files.example.com
I'm launching minio using the following command:
MINIO_ACCESS_KEY=minio \
MINIO_SECRET_KEY=secret \
/usr/local/bin/minio server \
-C /etc/minio --address ":443" \
https://node{1...3}.files.example.com:443/volume/{1...4}/
This works, and I am able to connect to all three servers from a web browser over HTTPS with good certs. However, users will connect to the cluster using the parent domain "files.example.com" (using distributed DNS).
I already ran certbot and generated the certs for the parent domain, and I copied the certs into /etc/minio/certs/ as well as /etc/minio/certs/CAs/ (calling the files "files.example.com-public.crt" and "files.example.com-public.key" respectively)... this did not work. When I try to open the parent domain "files.example.com" I get a cert error (which I can bypass) indicating the certificate is for the node I have connected to and not for the parent domain.
I'm pretty sure this is just a matter of putting the cert in the right place and naming it correctly... right? Does anyone know how to do that? I also have an idea there might be a way to issue a cert that covers multiple domains... is that how I'm supposed to do this? How?
I already hit up minio's Slack channel and posted on their GitHub, but no one's replying to me. Not even a "this won't work."
Any ideas?
I gave up and ran certbot in manual mode. It needed Apache installed on one of the nodes, then certbot had me jump through a couple of minor hoops (namely, it had me create a new TXT record with my DNS provider and then create a file with a text string on the server for verification). I then copied the created certs into my minio config directory (/etc/minio/certs/) on all three nodes. That's it.
To be honest, I'd rather use the plugin, as it allows for automated cert renewal, but I'll live with this for now.
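For reference, a single Let's Encrypt cert can also cover the parent domain and all three node names as Subject Alternative Names, which would avoid juggling separate certs. A sketch of the manual DNS-challenge invocation (names taken from above; you still have to create the TXT records certbot asks for):

certbot certonly --manual --preferred-challenges dns \
  -d files.example.com \
  -d node1.files.example.com \
  -d node2.files.example.com \
  -d node3.files.example.com

minio generally expects the files in its certs directory to be named public.crt and private.key, so the resulting fullchain.pem and privkey.pem would be copied and renamed accordingly on each node (worth double-checking against the minio docs for your version).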
You could also run all of them behind a reverse proxy to handle the TLS termination using a wildcard domain certificate (i.e. *.files.example.com). The reverse proxy would centralize the certificates, DNS, and certbot renewal script on a single node, essentially load balancing the TLS and DNS for the minio nodes. The performance hit of "load-balancing" TLS like this may be acceptable depending on your workload, considering the simplification to your current DNS and TLS cert setup.
[Digital Ocean example using nginx and certbot plugins] https://www.digitalocean.com/community/tutorials/how-to-create-let-s-encrypt-wildcard-certificates-with-certbot
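As a rough illustration of that setup (ports and paths are assumptions, and minio would then listen on plain HTTP behind the proxy rather than on :443 with its own certs), an nginx front end holding the wildcard cert might look like this:

upstream minio_nodes {
    server node1.files.example.com:9000;
    server node2.files.example.com:9000;
    server node3.files.example.com:9000;
}

server {
    listen 443 ssl;
    server_name files.example.com *.files.example.com;

    # wildcard cert covering *.files.example.com (paths assumed)
    ssl_certificate     /etc/letsencrypt/live/files.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/files.example.com/privkey.pem;

    location / {
        proxy_pass http://minio_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}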

Installing a wildcard SSL certificate in Red Hat 6 hosted on Azure

Short version: I have a wildcard certificate for a domain. We presently have 2 Apache servers, using Red Hat 6.8, running on the Azure cloud.
My question is: How do I install the wildcard certificate, and have it work properly, since the URL is (for example) http://mysite-prod01.centralus.cloudapp.azure.com -- but the certificate is for *.mydomain.com?
We're using Traffic Manager for www.mydomain.com as the 'front' for the 2 web servers. Any ideas? I've searched and only found guides on installing SSL certificates on Red Hat (which isn't the issue) or on installing certificates on Windows Azure.
There have to be a fair number of folks hosting their Red Hat servers on Azure, so this must have been solved before ... thanks in advance for your time.
Nothing to do with Azure. Install the certs as you would normally do and CNAME your domain to the traffic manager endpoint.
www.example.com IN CNAME something.trafficmanager.net
Traffic Manager works at the DNS level, so you don't handshake TLS with it; you do it with one of the VMs, depending on which endpoint you get.
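On the Apache side this is the usual mod_ssl virtual host setup; a minimal sketch with assumed file paths (Red Hat 6.8 ships Apache 2.2, hence the separate chain file directive):

<VirtualHost *:443>
    ServerName www.mydomain.com
    ServerAlias mydomain.com

    SSLEngine on
    SSLCertificateFile      /etc/pki/tls/certs/wildcard.mydomain.com.crt
    SSLCertificateKeyFile   /etc/pki/tls/private/wildcard.mydomain.com.key
    SSLCertificateChainFile /etc/pki/tls/certs/mydomain-intermediate.crt
</VirtualHost>

Clients then need to browse to www.mydomain.com (the CNAME above) rather than the *.cloudapp.azure.com name, since the wildcard only matches *.mydomain.com.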

Elasticsearch Shield SSL Certificates

I'm using Elasticsearch 2.2.0 with Shield 2.2 in a 10-node cluster. I need to enable SSL in Elasticsearch for Kibana to work with Shield, and I got stuck on the certificate signing part.
I do not have a wildcard certificate, so I can't sign just one CSR on a node and copy it to all the other nodes. I tried to use Let's Encrypt (with the Elastic tutorial) and sign a certificate with a common name of node1 and alternative names of node2-10, and copy it to all the other nodes (of course, I first created domains for all 10 servers and pointed them to node1, signed the CSR, then pointed the other 9 to the right servers). It didn't work, and I got a lot of "bad certificate" exceptions in the nodes' logs.
As I said, I need SSL for Kibana to work with Shield, and for secure connections in general, and I'm planning to add some more nodes to the cluster...
How can I manage to do so?
What would be the best architecture for that purpose?
The problem was that I tried to use the certificates with the private IP seeds of the nodes, which, as the documentation says, is not possible:
If you use a commercial CA, the DNS names and IP addresses used to identify a node must be publicly resolvable. Internal DNS names and private IP addresses are not accepted due to security concerns.
If you need to use private DNS names and IP addresses, using an internal CA is the most secure option. It enables you to specify node identities and ensure node identities are verified when nodes connect. If you must use a commercial CA and private DNS names or IP addresses, you cannot include the node identity in the certificate, so the only option is to disable hostname verification.
So the solution is to use the certificate only for outside requests (like the Kibana UI) by setting the following in elasticsearch.yml:
shield.transport.ssl: false
shield.http.ssl: true
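For completeness, the HTTP-layer certificate itself is supplied through the Shield keystore settings in the same file; a sketch with assumed path and password (Shield 2.x setting names):

shield.ssl.keystore.path: /etc/elasticsearch/node01.jks     # keystore holding the signed certificate and key
shield.ssl.keystore.password: changeit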