I am trying to configure the Puppet server and agent to use an external CA: a self-signed root CA, with the master and the agent each having their own SSL certificate.
Configurations in puppetserver:
/etc/puppetlabs/puppetserver/bootstrap.cfg
# To enable the CA service, leave the following line uncommented
# puppetlabs.services.ca.certificate-authority-service/certificate-authority-service
# To disable the CA service, comment out the above line and uncomment the line below
puppetlabs.services.ca.certificate-authority-disabled-service/certificate-authority-disabled-service
/etc/puppetlabs/puppetserver/conf.d/webserver.conf
ssl-cert : /usr/cachelogic/var/device-pki/dev_cert.pem
ssl-key : /usr/cachelogic/var/device-pki/dev_key.pem
ssl-ca-cert : /usr/cachelogic/var/device-pki/CAcert.pem
ssl-crl-path : /etc/puppetlabs/puppet/ssl/crl.pem
The puppetserver service started successfully.
Configurations in puppet agent:
/etc/puppetlabs/puppet/puppet.conf
hostcert = /usr/cachelogic/var/device-pki/dev_cert.pem
hostprivkey = /usr/cachelogic/var/device-pki/dev_key.pem
localcacert = /usr/cachelogic/var/device-pki/CAcert.pem
When starting the puppet agent, the following is the error message that I get:
Debug: Using cached certificate for ca
Debug: Creating new connection for https://cp3.zzz152d1.cdn:8140
Debug: Using cached certificate for ca
Error: Could not run: stack level too deep
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:63
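For reference, one check I can run from the agent host is to compare the chain the server actually presents against the CA file the agent trusts (a sketch, assuming the master is reachable on 8140 and using the file paths from the configurations above):

openssl s_client -connect cp3.zzz152d1.cdn:8140 \
    -CAfile /usr/cachelogic/var/device-pki/CAcert.pem \
    -cert /usr/cachelogic/var/device-pki/dev_cert.pem \
    -key /usr/cachelogic/var/device-pki/dev_key.pem

If the verify return code is non-zero, the agent and the server are not actually agreeing on the same CA chain.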
Any pointers on debugging this issue will be helpful. Thanks.
I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using Python's pika library on the client side. My starting point was the pika tutorial example here, but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the enable status with: rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
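For reference, a handshake-level check along the same lines (a sketch, assuming the broker is listening locally on 5671 and using the certificate paths from steps (1) and (4)):

openssl s_client -connect 127.0.0.1:5671 \
    -CAfile /Xyz/sampleNodeCert/tms.crt \
    -cert /Xyz/sampleNodeCert/node.crt \
    -key /Xyz/sampleNodeCert/nodePrivKey.pem

Note that a successful verification here only confirms the TLS layer; it says nothing about the EXTERNAL login step.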
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user which has the same name as the CN field of the TLS certificate on both the server and the client side. To illustrate, below, I go through my example again in detail:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
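For completeness, here is a sketch of how such a passwordless user can be created (the username must match the CN that the broker extracts, pnp-vm2 in my case; the temporary password is only needed because add_user requires one):

rabbitmqctl add_user 'pnp-vm2' 'temporary-password'
rabbitmqctl clear_password 'pnp-vm2'
rabbitmqctl set_permissions -p / 'pnp-vm2' '.*' '.*' '.*'

After clear_password, the account can no longer authenticate with a password, which is exactly what is wanted for certificate-only logins.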
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the enable status with the command: rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called rabbit@pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap@pnp-vm2:openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (which is the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment, it was easy to create a client certificate with identical CN names but, in an operational environment, this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or is it a feature? And does it originate in the particular RabbitMQ library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
    input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...
I'm trying to secure our NiFi environment with SSL. I'm getting the following error:
This site can’t provide a secure connection <I.P> uses an unsupported protocol.
ERR_SSL_VERSION_OR_CIPHER_MISMATCH
I got a Comodo certificate that I requested at my org and got approved. I also have a .key file, which was generated during the CSR. I imported the Comodo cert into the keystore. Then, I imported both the Comodo root cert and the .key into the truststore. The NiFi version is 1.9.2.
nifi.properties:
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=mypassword
nifi.security.keyPasswd=
nifi.security.truststore=./conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=mypassword
nifi.security.user.authorizer=managed-authorizer
nifi.security.user.login.identity.provider=
nifi.security.ocsp.responder.url=
nifi.security.ocsp.responder.certificate=
last few lines of the logs:
2019-07-12 02:29:55,877 INFO [main] o.eclipse.jetty.server.AbstractConnector Started ServerConnector@45e97963{SSL,[ssl, http/1.1]}{0.0.0.0:8443}
2019-07-12 02:29:55,877 INFO [main] org.eclipse.jetty.server.Server Started #28943ms
2019-07-12 02:29:55,906 INFO [main] org.apache.nifi.nar.NarAutoLoader Starting NAR Auto-Loader for directory ./extensions ...
2019-07-12 02:29:55,907 INFO [main] org.apache.nifi.nar.NarAutoLoader NAR Auto-Loader started
2019-07-12 02:29:55,907 INFO [main] org.apache.nifi.web.server.JettyServer NiFi has started. The UI is available at the following URLs:
2019-07-12 02:29:55,907 INFO [main] org.apache.nifi.web.server.JettyServer https://<I.P>:8443/nifi
2019-07-12 02:29:55,907 INFO [main] org.apache.nifi.web.server.JettyServer https://127.0.0.1:8443/nifi
2019-07-12 02:29:55,909 INFO [main] org.apache.nifi.BootstrapListener Successfully initiated communication with Bootstrap
2019-07-12 02:29:55,909 INFO [main] org.apache.nifi.NiFi Controller initialization took 19369037824 nanoseconds (19 seconds).
Can you show the output of using the OpenSSL s_client tool to connect to the host? I'm assuming <I.P> is a manual substitution for the actual host IP? Using this version of NiFi, the certificate must have valid SubjectAlternativeName entries for the hostname(s) and IP address(es) you wish to access the service using.
You also want to ensure that the keystore contains the public certificate and private key. The truststore should contain the public certificate and any CA certificates used to sign it (depending on your threshold for desired specificity on accepting incoming certificates for client certificate authentication).
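As a sketch of both checks (the host is a placeholder here, and I'm assuming the keystore sits next to the truststore in ./conf):

openssl s_client -connect <I.P>:8443 </dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
keytool -list -v -keystore ./conf/keystore.jks

The first command shows the SubjectAlternativeName entries of the certificate the server actually presents, and the keytool listing should contain a PrivateKeyEntry for the server certificate, not just trustedCertEntry items.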
I'm currently configuring Apache Kafka with SSL authentication and am coming across an error when starting the service. It appears that the broker starts up correctly (leader election etc. seems to occur), but as soon as any cluster operations begin to take place, I get the error below continually in the logs.
[2019-05-16 11:04:00,351] INFO [Controller id=1, targetBrokerId=1] Failed authentication with XXXX/YYYY (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2019-05-16 11:04:00,351] DEBUG [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2019-05-16 11:04:00,351] DEBUG An authentication error occurred in broker-to-broker communication. (org.apache.kafka.clients.ManualMetadataUpdater)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLProtocolException: Handshake message sequence violation, 2
I have tried recreating the key and trust stores, and tried dropping SSL from the inter-broker listener (this results in an ANONYMOUS principal that I don't want to grant access to any resource).
To explain my configuration:
Running Kafka 2.2 using the SSL principal builder
I have 3 listeners setup - one on a public interface, and two on private interfaces(one for inter-broker comms and one for internal consumers)
SSL is enabled on all 3 listeners
Each listener is tied to its own key and trust stores (as I need to be able to present different certificates for the internal addresses, as well as being able to trust different signing CAs), and an SSL key password is provided for each key/keystore.
Certificates were created using a locally generated key; a local CSR was generated and then signed by a CA running on CFSSL multiroot.
Keystores were then created using the key (same password), with the signed certificate and CA certificate imported.
Truststore was created and certificate issuing CA(s) added here.
#Kafka Server Properties Configuration
#Broker and listener configuration
broker.id=1
listeners=egress://address1:9093,inter://address1:9094,ingest://address2:9092
advertised.listeners=egress://address1:9093,inter://address1:9094,ingest://address2:9092
listener.security.protocol.map=egress:SSL,inter:SSL,ingest:SSL
inter.broker.listener.name=inter
##
#Listener Trust and Keystore Configurations
#egress configuration
listener.name.egress.ssl.keystore.type=JKS
listener.name.egress.ssl.keystore.location=/data/kafka/pki/egress-keystore.jks
listener.name.egress.ssl.keystore.password=<redacted>
listener.name.egress.ssl.truststore.type=JKS
listener.name.egress.ssl.truststore.location=/data/kafka/pki/egress-truststore.jks
listener.name.egress.ssl.truststore.password=<redacted>
listener.name.egress.ssl.key.password=<redacted>
listener.name.egress.ssl.client.auth=required
listener.name.egress.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#inter configuration
listener.name.inter.ssl.keystore.type=JKS
listener.name.inter.ssl.keystore.location=/data/kafka/pki/inter-keystore.jks
listener.name.inter.ssl.keystore.password=<redacted>
listener.name.inter.ssl.truststore.type=JKS
listener.name.inter.ssl.truststore.location=/data/kafka/pki/inter-truststore.jks
listener.name.inter.ssl.truststore.password=<redacted>
listener.name.inter.ssl.key.password=<redacted>
listener.name.inter.ssl.client.auth=requested
listener.name.inter.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#ingest configuration
listener.name.ingest.ssl.keystore.type=JKS
listener.name.ingest.ssl.keystore.location=/data/kafka/pki/ingest-keystore.jks
listener.name.ingest.ssl.keystore.password=<redacted>
listener.name.ingest.ssl.truststore.type=JKS
listener.name.ingest.ssl.truststore.location=/data/kafka/pki/ingest-truststore.jks
listener.name.ingest.ssl.truststore.password=<redacted>
listener.name.ingest.ssl.key.password=<redacted>
listener.name.ingest.ssl.client.auth=required
listener.name.ingest.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#Generic SSL Configuration
ssl.keystore.type=JKS
ssl.keystore.location=/data/kafka/pki/inter-keystore.jks
ssl.keystore.password=<redacted>
ssl.truststore.type=JKS
ssl.truststore.location=/data/kafka/pki/inter-truststore.jks
ssl.truststore.password=<redacted>
ssl.key.password=<redacted>
ssl.client.auth=requested
ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
ssl.enabled.protocols=TLSv1.2
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:<redacted>
##
#General configuration
auto.create.topics.enable=False
delete.topic.enable=True
log.dir=/var/log/kafka
log.retention.hours=24
log.cleaner.enable=True
log.cleanup.policy=delete
log.retention.check.interval.ms=3600000
min.insync.replicas=2
replication.factor=3
default.replication.factor=3
num.partitions=50
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.min.isr=2
transaction.state.log.num.partitions=50
num.replica.fetchers=4
auto.leader.rebalance.enable=True
leader.imbalance.check.interval.seconds=60
transactional.id.expiration.ms=10000
unclean.leader.election.enable=False
zookeeper.connect=zookeeper:2180
zookeeper.session.timeout.ms=100
controlled.shutdown.enable=True
broker.rack=rack1
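For what it's worth, I can probe what each listener actually negotiates with openssl (a sketch against the inter-broker listener defined above; the other listeners can be checked the same way):

openssl s_client -connect address1:9094 -tls1_2

This shows the certificate chain the listener presents and the negotiated cipher, or the fatal alert if the handshake fails.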
Did you insert the certificates into the keystores in the order you described? It can be important to import the CA first, then the certificate signed by the CA, to get the chain of trust built correctly.
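For example (a sketch; the alias and file names are placeholders, and the signed certificate must be imported under the same alias as the private key it belongs to, otherwise keytool stores it as a separate trusted entry instead of completing the key's chain):

keytool -importcert -keystore inter-keystore.jks -alias CARoot -file ca.crt
keytool -importcert -keystore inter-keystore.jks -alias broker1 -file broker1-signed.crt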
Need your help in setting up the SSL Manager in JMeter for performance testing with IBM DataPower.
I tried the below steps to add the cert:
• Added the (*.jks / *.p12) file in the JMeter GUI > Options > SSL Manager.
• Tried setting the .jks file in the system.properties file too.
Path : *\jMETER\apache-jmeter-3.0\apache-jmeter-3.0\bin\system.properties
# Truststore properties (trusted certificates)
#javax.net.ssl.trustStore=/path/to/[jsse]cacerts
#javax.net.ssl.trustStorePassword
#javax.net.ssl.trustStoreProvider
#javax.net.ssl.trustStoreType [default = KeyStore.getDefaultType()]
# Keystore properties (client certificates)
# Location
javax.net.ssl.keyStore=****.jks -- Added
#
#The password to your keystore
javax.net.ssl.keyStorePassword=****-- Added
#
#javax.net.ssl.keyStoreProvider
#javax.net.ssl.keyStoreType [default = KeyStore.getDefaultType()]
I don't see the SSL handshake between JMeter and DataPower even after I followed the above steps. I am getting the below error from DataPower.
12:47:26 AM ssl error 51751363 10.123.98.73 0x806000ca valcred (###_CVC_Reverse_Server): SSL Proxy Profile '###_SSLPP_Reverse_Server': connection error: peer did not send a certificate
12:47:26 AM mpgw error 51751363 10.123.98.73 0x80e00161 source-https (###_HTTPS_FSH_CON_****): Request processing failed: Connection terminated before request headers read because of the connection error occurs, from URL: 10.123.98.73:58394
12:47:26 AM ssl error 51751363 10.123.98.73 0x8120002f sslproxy (####_SSLPP_Reverse_Server): SSL library error: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not
Can you please advise how to send the cert (.jks / .p12) file from JMeter?
Change "Implementation" of your HTTP Request sampler(s) to Java. The fastest and the easiest way of doing this is using HTTP Request Defaults.
If you're using .p12 keystores you will need an extra line in the system.properties file like:
javax.net.ssl.keyStoreType=pkcs12
JMeter restart is required to pick the properties up.
See the How to Set Your JMeter Load Test to Use Client Side Certificates article for more information.
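And if you would rather keep everything in JKS, a .p12 can be converted with keytool (a sketch; the file names are placeholders):

keytool -importkeystore -srckeystore mycert.p12 -srcstoretype pkcs12 \
    -destkeystore mycert.jks -deststoretype jks

You would then point javax.net.ssl.keyStore at the resulting .jks and drop the keyStoreType line.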
I had set up a 6-node Cassandra cluster spanning two AWS regions / datacenters (3 in each) and everything was working fine. After getting that much working I attempted to enable internode encryption which I cannot get to work properly, despite reading innumerable documents on the subject and fiddling endlessly.
I don't see any errors or anything out of the ordinary in the logs. I do see the following line in the logs which indicates it has started the encrypted messaging service, as expected:
MessagingService.java:482 - Starting Encrypted Messaging Service on SSL port 7001
I have enabled verbose logging for SSL in cassandra-env.sh, however this does not produce any errors or additional information about SSL internode connections that I can see (update below):
JVM_OPTS="$JVM_OPTS -Djavax.net.debug=ssl"
I can connect from one node to all the others on the encrypted messaging port 7001 using nc, so there's no firewall issue.
ubuntu@ip-5-6-7-8:~$ nc -v 1.2.3.4 7001
Connection to 1.2.3.4 7001 port [tcp/afs3-callback] succeeded!
I can connect to each node locally using cqlsh (I haven't enabled client-server encryption) and can query the system keyspace, etc.
However, if I run nodetool status I see that the nodes cannot see each other. Only the node that I'm querying the cluster on is present in the list. This was not the case before internode encryption was enabled, they could all see each other just fine then.
ubuntu@ip-5-6-7-8:~$ nodetool status
Datacenter: us-east_A
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 1.2.3.4 144.75 KB 256 ? 992ae1bc-77e4-4ab1-a18f-4db62bb0ce6f 1b
My process was this:
Created a certificate authority for my cluster
Created a keystore and truststore for each node and added my CA certificate chain to both
Generated a key pair and CSR for each node, signed it with my CA, and added the resulting certificate to each node's keystore
Updated each node's configuration as reads below
Restarted all nodes
The server encryption configuration I'm using is this, with the appropriate values in the $variables.
server_encryption_options:
internode_encryption: all
keystore: $keystore_path
keystore_password: $keystore_passwd
truststore: $truststore_path
truststore_password: $truststore_passwd
require_client_auth: true
protocol: TLS
algorithm: SunX509
store_type: JKS
cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
If anybody could offer some insight or a direction to look in it would be greatly appreciated.
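One more data point I can gather: probing the storage port at the TLS level rather than just the TCP level (a sketch, reusing the same address as the nc test above):

openssl s_client -connect 1.2.3.4:7001

Unlike nc, this shows the certificate chain the node presents and the negotiated protocol and cipher, or the fatal alert if the handshake is rejected.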
Update: Cipher Suite Agreement
Apparently SSL debug logging prints to stdout, which is not logged to Cassandra's logfiles, so I didn't see that output before. Running Cassandra in the foreground I can see a ton of SSL errors tracing out, all of which complain of handshake failure, because:
javax.net.ssl.SSLHandshakeException: no cipher suites in common
In an attempt to solve this problem I have switched to the Oracle JRE (I was being lazy and using OpenJDK before) and installed the JCE unlimited strength cryptography policy files to ensure all possible ciphers would be supported.
It didn't fix anything.
This is especially confusing given that all these nodes are exactly identical: hardware, OS vendor and version, Java vendor and version, Cassandra version, and configuration file. I cannot imagine why they cannot agree on a cipher suite under these circumstances.
The following is the full error that is traced:
*** ClientHello, TLSv1.2
RandomCookie: GMT: 1449074039 bytes = { 205, 93, 27, 38, 184, 219, 250, 8, 232, 46, 117, 84, 69, 53, 225, 16, 27, 31, 3, 7, 203, 16, 133, 156, 137, 231, 238, 39 }
Session ID: {}
Cipher Suites: [TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
Compression Methods: { 0 }
***
%% Initialized: [Session-3, SSL_NULL_WITH_NULL_NULL]
%% Invalidated: [Session-3, SSL_NULL_WITH_NULL_NULL]
ACCEPT-/1.2.3.4, SEND TLSv1.2 ALERT: fatal, description = handshake_failure
ACCEPT-/1.2.3.4, WRITE: TLSv1.2 Alert, length = 2
ACCEPT-/1.2.3.4, called closeSocket()
ACCEPT-/1.2.3.4, handling exception: javax.net.ssl.SSLHandshakeException: no cipher suites in common
ACCEPT-/1.2.3.4, called close()
ACCEPT-/1.2.3.4, called closeInternal(true)
INFO 16:33:59 Waiting for gossip to settle before accepting client requests...
Allow unsafe renegotiation: false
Allow legacy hello messages: true
Is initial handshake: true
Is secure renegotiation: false
ACCEPT-/1.2.3.4, setSoTimeout(10000) called
ACCEPT-/1.2.3.4, READ: SSL v2, contentType = Handshake, translated length = 57
After a great deal more poking and prodding I've finally managed to get this to work. The problem was related to certificates and the keystore.
As a result of these problems the SSL handshake would fail either due to certificate chain problems or cipher suite agreement problems. Cassandra rather unhelpfully discards errors related to SSL and logs nothing.
In any case, I managed to get things working by doing the following:
Ensure that the CA generates node certificates with both client and server key usage attributes. Failing to include one or the other will prevent nodes from authenticating to each other properly. This presents itself as the cipher suite agreement error. If you're using OpenSSL to manage your CA, I've included the -extensions configuration I used below.
Ensure that both the root and any intermediate CA certificates you are using (if you're using an intermediary CA) are imported into both the keystore and truststore.
Ensure that the node certificate imported into the keystore includes the full trust chain from the primary certificate down to the CA root, including any intermediaries – even though you have already imported these CA certificates separately into the keystore. Failing to do this presents itself as invalid certificate chain errors.
OpenSSL CA Config
Here's my extensions section for dual-role client/server certificates. You can include this in your OpenSSL config file and reference it when signing by specifying -extensions dual_cert.
[ dual_cert ]
# Extensions for dual-role user/server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, server
nsComment = "Client/Server Dual-role Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
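Referencing it when signing a node's CSR might then look like this (a sketch; the config and file names are placeholders for whatever your CA setup uses):

openssl ca -config openssl.cnf -extensions dual_cert \
    -days 365 -notext -in node1.csr -out node1.crt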
Creating a PEM containing the full trust chain
To create a single PEM file which contains the full trust chain for your node certificate, simply cat all the certificate files in order, from the node certificate up through any intermediates to the CA root.
cat node1.crt ca-intermediate.crt ca-root.crt > node1-full-chain.crt
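That full-chain file can then be imported into the node's keystore (a sketch; the keystore name is a placeholder, and the alias must match the one used when the key pair was generated so that keytool treats it as a certificate reply rather than a new trusted entry):

keytool -importcert -keystore node1-keystore.jks -alias node1 -file node1-full-chain.crt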