Connecting to Cassandra cqlsh from a remote server with SSL enabled

I have a Kubernetes cluster that runs a 3-node Cassandra cluster. When I try to connect with cqlsh from my local machine it works fine, but after enabling SSL on the cluster I see the error below. I tried setting up cqlshrc on my local machine and also on the Kubernetes pods, but I still get the same error. Can someone help me?
$ kubectl run -i --tty --restart=Never --rm --image cassandra cqlsh -- cqlsh cassandra-0.cassandra.default.svc.cluster.local -u cassandra -p password --ssl
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /root/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable.
pod "cqlsh" deleted
pod default/cqlsh terminated (Error)

Follow the steps below to troubleshoot.
Check the subject, validity, and issuer of the remote node's certificate (host-remote) from host-local:
echo | openssl s_client -showcerts -connect host-remote:cassandra-ssl-port 2>/dev/null | openssl x509 -noout -subject -dates -issuer
Check cqlsh.cer.pem; it may contain only one entry, with subject CN=host-local, which would explain why you can connect to the local host but not to the remote one:
openssl x509 -text -noout -in <path to truststore in cqlshrc file>/cqlsh.cer.pem
Your truststore should also contain the root/CA certificate, so that the certificate chain presented by the remote node can be validated successfully.
You may need to append the root certificate to the truststore.
Refer to this to set up SSL in Cassandra.
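As a minimal sketch of that last point (the file names rootca.pem and cqlsh.cer.pem are assumptions, not from the question): append the root/CA certificate to the certfile that cqlshrc points at, then optionally expose it via the SSL_CERTFILE variable mentioned in the error message:
cat rootca.pem >> ~/.cassandra/cqlsh.cer.pem
export SSL_CERTFILE=~/.cassandra/cqlsh.cer.pem
cqlsh cassandra-0.cassandra.default.svc.cluster.local -u cassandra -p password --ssl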

Related

Certificate issues with CentOS 7 and curl

I have an issue using a certificate with curl. I'm running CentOS 7. We managed to get curl going in other places, but not on our dev machine.
What we are trying to do:
sudo curl -X 'GET' 'https://webpage/document' --cert '/localization.crt.pem' --key '/localization.key.pem' -H 'accept: */*' -k
I'm getting this error:
curl: (58) SSL peer cannot verify your certificate.
What I tried (from the CentOS documentation):
https://access.redhat.com/documentation/en-us/red_hat_certificate_system/9/html/administration_guide_common_criteria_edition/importing_certificate_into_nssdb
# PKICertImport -d . -n "client name" -t ",," -a -i certificate.crt.pem -u C
After echo $? we get a 0, so I think it is installed properly?
Any idea on what's wrong would be great.
I have run into this recently in our Linux environments. I've found that this tends to happen when an SSL certificate is issued together with a chain certificate. If that chain is not also configured on your server, OpenSSL considers the certificate invalid.
I would test this using this command:
openssl s_client -showcerts -verify 5 -connect website.com:443
If you see a block like this, it means the certificate chain is missing from your server configuration:
---
SSL handshake has read 2162 bytes and written 401 bytes
Verification error: unable to verify the first certificate
---
Windows fills in the gaps and doesn't mind this type of configuration, but OpenSSL is very particular.
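If the chain really is missing on the server, the usual fix is to serve the full chain instead of just the leaf certificate. A minimal sketch for an nginx server (the file names are hypothetical; Apache has equivalent certificate directives):
cat server.crt intermediate.crt > fullchain.crt
and in the nginx config:
ssl_certificate     /etc/nginx/ssl/fullchain.crt;
ssl_certificate_key /etc/nginx/ssl/server.key;
After a reload, the openssl s_client check above should end with Verify return code: 0 (ok) instead of the verification error.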
I managed to solve the issue. I recompiled curl with OpenSSL following this tutorial:
Install curl with openssl
Works like a charm :)

HiveMQ Cloud: get SSL fingerprint / cert for MQTT

I'm trying to use the HiveMQ Cloud service (https://console.hivemq.cloud/). Unfortunately I'm forced to use the SSL option, and I can't figure out how I can download the public key / fingerprint.
Somebody familiar with the service?
I created a cluster and got something like this:
somehash.s1.eu.hivemq.cloud:8883
Then I created a user and tested the connection with this service: http://www.hivemq.com/demos/websocket-client/. It only works with the option 'ssl' enabled.
I thought I could fetch the fingerprint via ssh-keyscan:
ssh-keyscan -p 8883 <somehash>.s1.eu.hivemq.cloud
<somehash>.s1.eu.hivemq.cloud: Connection closed by remote host
<somehash>.s1.eu.hivemq.cloud: Connection closed by remote host
<somehash>.s1.eu.hivemq.cloud: Connection closed by remote host
And I got this message. How can I get the public key from a HiveMQ MQTT service?
It didn't work for me with ssh-keyscan, but it did with openssl.
Here is the solution for my problem:
Get certificate fingerprint of HTTPS server from command line?
openssl s_client -connect <somehash>.s1.eu.hivemq.cloud:8883 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin
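If you need the server certificate itself rather than just the fingerprint (for example, to hand to an MQTT client as a CA file), a small variation of the same command saves it as PEM (the output file name is arbitrary):
openssl s_client -connect <somehash>.s1.eu.hivemq.cloud:8883 < /dev/null 2>/dev/null | openssl x509 -outform PEM > hivemq-server.pem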

ArangoDB working together with Let's Encrypt certificates

Is there anyone out there who got an ArangoDB database working with a Let's Encrypt certificate? I just can't figure out how to get this running.
ArangoDB is running on a DigitalOcean droplet, and I could get it running together with a self-signed certificate following this tutorial. So ArangoDB is successfully running on port 8530.
Now my approach was to replace the self-signed certificate with a Let's Encrypt cert.
So I added a subdomain in DigitalOcean for the droplet, e.g. db.example.com, and then generated the cert files:
sudo -H ./letsencrypt-auto certonly --standalone -d db.example.com
You will end up with 4 files: cert.pem, chain.pem, fullchain.pem, privkey.pem
As I understood, these files are:
Private Key --------> privkey.pem
Public Key ---------> cert.pem
Certificate Chain --> chain.pem
As described in the tutorial I mentioned, you need the certificate and the key in one file. So I did
cat chain.pem privkey.pem | sudo tee server.pem
to have a file containing the certificate and the private key.
Then I modified /etc/arangodb3/arangod.conf to let ArangoDB know where the keyfile is, changing the [ssl] section:
[ssl]
keyfile = /etc/letsencrypt/live/db.example.com/server.pem
But after restarting ArangoDB, the server is not available when pointing the browser to https://db.example.com:8530. Firewall settings for the droplet should all be OK, because I could access this address with the self-signed certificate before.
I then tried to modify the endpoint in /etc/arangodb3/arangod.conf from
endpoint = ssl://0.0.0.0:8530
to
endpoint = ssl://db.example.com:8530
and also
tcp://db.example.com:8530
Neither worked. Does somebody out there have an idea what I am doing wrong?
Please use the IP of the interface you want to use when specifying the endpoint, e.g. endpoint = ssl://42.23.13.37:8530 (the command ip address should list your interfaces along with the addresses in use). Then it could help to use fullchain.pem to create the server.pem (cat fullchain.pem privkey.pem > server.pem). Make sure the resulting server.pem is accessible and readable by the arangodb user. If the server is still not starting correctly, please provide the server logs. To access the logs, use systemctl -fu arangodb3.service, or follow the logs with tail -f <logfile> if you use a custom location for logging.
I have just tested a setup with Let's Encrypt certificates, and it was working after ensuring all of the above points.
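For reference, a sketch of the combination described above (paths assume the Let's Encrypt live directory and a default ArangoDB package install; the IP is an example):
cat /etc/letsencrypt/live/db.example.com/fullchain.pem /etc/letsencrypt/live/db.example.com/privkey.pem | sudo tee /etc/arangodb3/server.pem
sudo chown arangodb:arangodb /etc/arangodb3/server.pem
and in /etc/arangodb3/arangod.conf:
[server]
endpoint = ssl://42.23.13.37:8530
[ssl]
keyfile = /etc/arangodb3/server.pem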

Not able to connect to the SSL node through cqlsh

Before explaining the issue I'm facing, I will let you know the points already verified from my local machine.
I have all the Cassandra-related configuration, and I have the required privileges (access) on my machine.
I'm able to connect through cqlsh to a Cassandra node that has SSL/TLS disabled.
E.g. I'm able to connect to the C* node below with this command:
cqlsh -u xxxxx -p xxxxxx 123.abc.com
But at the same time, I'm not able to connect to the node below with the --ssl option:
cqlsh --ssl -u xxxxx -p xxxxxx 123.xyz.com
Below is the content of my cqlshrc file:
[authentication]
username = xxxx
password = xxxx
[connection]
hostname = 123.xyz.com
port = 9042
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
certfile=~/certfiles/xyz.pem
validate = false
I even tried setting the certfile path as the SSL_CERTFILE environment variable.
I'm getting the below exception:
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /XXXX/XXXXX/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable.
I'm going to guess that your path is probably valid, but that your certfile may not be. Here are some quick steps that will generate a valid certfile from the keystore of one of your nodes:
1 - Check your cassandra.yaml for the keystore location and password:
client_encryption_options:
enabled: true
keystore: /etc/cassandra/.keystore
keystore_password: flynnLives
2 - Convert your keystore to a PKCS12 keystore:
$ keytool -importkeystore -srckeystore /etc/cassandra/.keystore \
    -destkeystore ~/.cassandra/p12.keystore -deststoretype PKCS12 \
    -srcstorepass flynnLives -deststorepass flynnLives
3 - Generate a certfile from the PKCS12 keystore:
$ openssl pkcs12 -in ~/.cassandra/p12.keystore -nokeys -out ~/.cassandra/xyz.pem \
    -passin pass:flynnLives
4 - Specify the connection and ssl sections in your cqlshrc, as well as the default transport factory and the name of your certificate. And unless you're using two-way SSL, set validate to false.
[connection]
factory = cqlshlib.ssl.ssl_transport_factory
[ssl]
certfile = ~/.cassandra/xyz.pem
validate = false
5 - Connect via cqlsh:
$ bin/cqlsh 192.168.0.100 -u flynn -p reindeerFlotilla --ssl
Connected to MasterControl at 192.168.0.100:9042.
[cqlsh 5.0.1 | Cassandra 2.2.5 | CQL spec 3.3.1 | Native protocol v4]
Use HELP for help.
flynn@cqlsh>

x509 error when trying to log in to a trusted (?) docker registry

I have set up a docker registry using harbor.
I have copied the appropriate certificates to /usr/share/local/ca-certificates and ran sudo update-ca-certificates successfully (it indicated the number of newly added certs).
When trying to log in to the specific registry:
ubuntu#master1:/home/vagrant$ docker login my.registry.url
Username: pkaramol
Password:
Error response from daemon: Get https://my.registry.url/v2/: x509: certificate signed by unknown authority
However, the following test succeeds:
openssl s_client -connect my.registry.url:443 -CApath /etc/ssl/certs/
...coming back with a lot of verbose output, the certificate itself, and ending in:
Verify return code: 0 (ok)
curl also succeeds against the above HTTPS link (it fails when a site is not trusted).
Any suggestions?
If you read the documentation:
Use self-signed certificates
Warning: Using this along with basic authentication requires to also trust the certificate into the OS cert store for some versions of docker (see below)
This is more secure than the insecure registry solution.
Generate your own certificate:
$ mkdir -p certs
$ openssl req \
-newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
-x509 -days 365 -out certs/domain.crt
Be sure to use the name myregistrydomain.com as a CN.
Use the result to start your registry with TLS enabled.
Instruct every Docker daemon to trust that certificate. The way to do this depends on your OS.
Linux: Copy the domain.crt file to /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt on every Docker host. You do not need to restart Docker.
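For the registry in the question, a sketch of that Linux step (assuming the registry is served on the default HTTPS port, so the directory name carries no port suffix, and that domain.crt is the CA certificate that signed the registry's cert):
sudo mkdir -p /etc/docker/certs.d/my.registry.url
sudo cp domain.crt /etc/docker/certs.d/my.registry.url/ca.crt
docker login my.registry.url
Docker reads this per-registry directory in addition to the OS trust store, which can explain why the openssl and curl checks pass while docker login still fails.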
See the link below for more details:
https://docs.docker.com/registry/insecure/#use-self-signed-certificates