Cassandra nodes cannot see each other when internode encryption is enabled - ssl

I had set up a 6-node Cassandra cluster spanning two AWS regions / datacenters (3 in each) and everything was working fine. After getting that much working I attempted to enable internode encryption which I cannot get to work properly, despite reading innumerable documents on the subject and fiddling endlessly.
I don't see any errors or anything out of the ordinary in the logs. I do see the following line in the logs which indicates it has started the encrypted messaging service, as expected:
MessagingService.java:482 - Starting Encrypted Messaging Service on SSL port 7001
I have enabled verbose logging for SSL in cassandra-env.sh; however, this does not produce any errors or additional information about SSL internode connections that I can see (update below):
JVM_OPTS="$JVM_OPTS -Djavax.net.debug=ssl"
I can connect from one node to all the others on the encrypted messaging port 7001 using nc, so there's no firewall issue.
ubuntu@ip-5-6-7-8:~$ nc -v 1.2.3.4 7001
Connection to 1.2.3.4 7001 port [tcp/afs3-callback] succeeded!
I can connect to each node locally using cqlsh (I haven't enabled client-server encryption) and can query the system keyspace, etc.
However, if I run nodetool status I see that the nodes cannot see each other. Only the node that I'm querying the cluster on is present in the list. This was not the case before internode encryption was enabled, they could all see each other just fine then.
ubuntu@ip-5-6-7-8:~$ nodetool status
Datacenter: us-east_A
=====================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 1.2.3.4 144.75 KB 256 ? 992ae1bc-77e4-4ab1-a18f-4db62bb0ce6f 1b
My process was this:
Created a certificate authority for my cluster
Created a keystore and truststore for each node and added my CA certificate chain to both
Generated a key pair and CSR for each node, signed it with my CA, and added the resulting certificate to each node's keystore (a rough keytool sketch of these steps follows the list)
Updated each node's configuration as shown below
Restarted all nodes
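For concreteness, the keystore/truststore steps above map onto keytool commands roughly like the following. The aliases, file names, and passwords are placeholders of mine, not anything Cassandra requires:
# create the node's key pair and a CSR (aliases, file names, and passwords are placeholders)
keytool -genkeypair -keyalg RSA -keysize 2048 -validity 365 \
    -alias node1 -dname "CN=node1.example.com, OU=cassandra" \
    -keystore node1-keystore.jks -storepass changeit -keypass changeit
keytool -certreq -alias node1 -file node1.csr \
    -keystore node1-keystore.jks -storepass changeit

# after signing node1.csr with the CA: import the CA chain into both stores,
# then the signed certificate into the keystore under the same alias
keytool -importcert -alias ca-root -file ca-root.crt -noprompt \
    -keystore node1-truststore.jks -storepass changeit
keytool -importcert -alias ca-root -file ca-root.crt -noprompt \
    -keystore node1-keystore.jks -storepass changeit
keytool -importcert -alias node1 -file node1.crt \
    -keystore node1-keystore.jks -storepass changeit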
The server encryption configuration I'm using is this, with the appropriate values in the $variables.
server_encryption_options:
    internode_encryption: all
    keystore: $keystore_path
    keystore_password: $keystore_passwd
    truststore: $truststore_path
    truststore_password: $truststore_passwd
    require_client_auth: true
    protocol: TLS
    algorithm: SunX509
    store_type: JKS
    cipher_suites: [TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA]
If anybody could offer some insight or a direction to look in it would be greatly appreciated.
Update: Cipher Suite Agreement
Apparently SSL debug logging prints to stdout, which is not logged to Cassandra's logfiles, so I didn't see that output before. Running Cassandra in the foreground I can see a ton of SSL errors tracing out, all of which complain of handshake failure, because:
javax.net.ssl.SSLHandshakeException: no cipher suites in common
In an attempt to solve this problem I have switched to the Oracle JRE (I was being lazy and using OpenJDK before) and installed the JCE unlimited strength cryptography policy files to ensure all possible ciphers would be supported.
It didn't fix anything.
This is especially confusing given that all these nodes are exactly identical: hardware, OS vendor and version, Java vendor and version, Cassandra version, and configuration file. I cannot imagine why they cannot agree on a cipher suite under these circumstances.
The following is the full error that is traced:
*** ClientHello, TLSv1.2
RandomCookie: GMT: 1449074039 bytes = { 205, 93, 27, 38, 184, 219, 250, 8, 232, 46, 117, 84, 69, 53, 225, 16, 27, 31, 3, 7, 203, 16, 133, 156, 137, 231, 238, 39 }
Session ID: {}
Cipher Suites: [TLS_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_EMPTY_RENEGOTIATION_INFO_SCSV]
Compression Methods: { 0 }
***
%% Initialized: [Session-3, SSL_NULL_WITH_NULL_NULL]
%% Invalidated: [Session-3, SSL_NULL_WITH_NULL_NULL]
ACCEPT-/1.2.3.4, SEND TLSv1.2 ALERT: fatal, description = handshake_failure
ACCEPT-/1.2.3.4, WRITE: TLSv1.2 Alert, length = 2
ACCEPT-/1.2.3.4, called closeSocket()
ACCEPT-/1.2.3.4, handling exception: javax.net.ssl.SSLHandshakeException: no cipher suites in common
ACCEPT-/1.2.3.4, called close()
ACCEPT-/1.2.3.4, called closeInternal(true)
INFO 16:33:59 Waiting for gossip to settle before accepting client requests...
Allow unsafe renegotiation: false
Allow legacy hello messages: true
Is initial handshake: true
Is secure renegotiation: false
ACCEPT-/1.2.3.4, setSoTimeout(10000) called
ACCEPT-/1.2.3.4, READ: SSL v2, contentType = Handshake, translated length = 57
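One cheap sanity check at this point (a suggestion of mine, not something from the Cassandra docs): list what each node's keystore and truststore actually contain and diff the output across nodes, since in my understanding JSSE reports "no cipher suites in common" when the accepting side has no usable private key entry for any of the offered suites. The paths and password here are placeholders:
keytool -list -v -keystore /path/to/node-keystore.jks -storepass changeit | grep -E 'Alias name|Entry type|Owner|Issuer'
keytool -list -v -keystore /path/to/node-truststore.jks -storepass changeit | grep -E 'Alias name|Entry type'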

After a great deal more poking and prodding I've finally managed to get this to work. The problem was related to certificates and the keystore.
Depending on the exact problem, the SSL handshake would fail with either certificate chain errors or cipher suite agreement errors, and Cassandra rather unhelpfully discards SSL-related errors and logs nothing.
In any case, I managed to get things working by doing the following:
Ensure that the CA generates node certificates with both client and server key usage attributes. Failing to include one or the other will prevent nodes from authenticating to each other properly. This presents itself as the cipher suite agreement error. If you're using OpenSSL to manage your CA, I've included the -extensions configuration I used below.
Ensure that both the root and any intermediate CA certificates you are using (if you're using an intermediary CA) are imported into both the keystore and truststore.
Ensure that the node certificate imported into the keystore includes the full trust chain from the primary certificate down to the CA root, including any intermediaries – even though you have already imported these CA certificates separately into the keystore. Failing to do this presents itself as invalid certificate chain errors.
OpenSSL CA Config
Here's my extensions section for dual-role client/server certificates. You can include this in your OpenSSL config file and reference it when signing by specifying -extensions dual_cert.
[ dual_cert ]
# Extensions for dual-role user/server certificates (`man x509v3_config`).
basicConstraints = CA:FALSE
nsCertType = client, server
nsComment = "Client/Server Dual-role Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer:always
keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth, serverAuth
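Signing a node CSR against that section then looks roughly like this (the config path and file names are placeholders for whatever your CA layout uses):
openssl ca -config openssl.cnf -extensions dual_cert \
    -days 365 -notext -in node1.csr -out node1.crt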
Creating a PEM containing the full trust chain
To create a single PEM file which contains the full trust chain for your node certificate, simply cat the certificate files in order, from the node certificate down through any intermediates to the CA root.
cat node1.crt ca-intermediate.crt ca-root.crt > node1-full-chain.crt
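Importing that full-chain file into the keystore under the node's existing key alias is then something like this (alias, file names, and password are placeholders):
keytool -importcert -alias node1 -file node1-full-chain.crt \
    -keystore node1-keystore.jks -storepass changeit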

Related

Kafka SSL Authentication Issues for inter-broker communication

I'm currently configuring Apache Kafka with SSL authentication and am coming across an error when starting the service. It appears that the broker starts up correctly (leader election etc. seems to occur), but as soon as any cluster operations begin to take place, I get the error below continually in the logs.
[2019-05-16 11:04:00,351] INFO [Controller id=1, targetBrokerId=1] Failed authentication with XXXX/YYYY (SSL handshake failed) (org.apache.kafka.common.network.Selector)
[2019-05-16 11:04:00,351] DEBUG [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[2019-05-16 11:04:00,351] DEBUG An authentication error occurred in broker-to-broker communication. (org.apache.kafka.clients.ManualMetadataUpdater)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLProtocolException: Handshake message sequence violation, 2
I have tried recreating the key and trust stores, and tried dropping SSL from the inter-broker listener (this results in an ANONYMOUS principal that I don't want to grant access to any resource).
To explain my configuration:
Running Kafka 2.2 using the SSL principal builder
I have 3 listeners set up - one on a public interface, and two on private interfaces (one for inter-broker comms and one for internal consumers)
SSL is enabled on all 3 listeners
Each listener is tied to its own key and trust stores (as I need to be able to present different certificates for the internal addresses, as well as being able to trust different signing CAs), and an SSL key password is provided for each key/keystore.
Certificates were created using a locally generated key and a local CSR, which was then signed by a CA running on CFSSL multiroot.
Keystores were then created using the key (same password), with the signed certificate and CA certificate imported.
A truststore was created and the issuing CA certificate(s) added to it.
#Kafka Server Properties Configuration
#Broker and listener configuration
broker.id=1
listeners=egress://address1:9093,inter://address1:9094,ingest://address2:9092
advertised.listeners=egress://address1:9093,inter://address1:9094,ingest://address2:9092
listener.security.protocol.map=egress:SSL,inter:SSL,ingest:SSL
inter.broker.listener.name=inter
##
#Listener Trust and Keystore Configurations
#egress configuration
listener.name.egress.ssl.keystore.type=JKS
listener.name.egress.ssl.keystore.location=/data/kafka/pki/egress-keystore.jks
listener.name.egress.ssl.keystore.password=<redacted>
listener.name.egress.ssl.truststore.type=JKS
listener.name.egress.ssl.truststore.location=/data/kafka/pki/egress-truststore.jks
listener.name.egress.ssl.truststore.password=<redacted>
listener.name.egress.ssl.key.password=<redacted>
listener.name.egress.ssl.client.auth=required
listener.name.egress.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#inter configuration
listener.name.inter.ssl.keystore.type=JKS
listener.name.inter.ssl.keystore.location=/data/kafka/pki/inter-keystore.jks
listener.name.inter.ssl.keystore.password=<redacted>
listener.name.inter.ssl.truststore.type=JKS
listener.name.inter.ssl.truststore.location=/data/kafka/pki/inter-truststore.jks
listener.name.inter.ssl.truststore.password=<redacted>
listener.name.inter.ssl.key.password=<redacted>
listener.name.inter.ssl.client.auth=requested
listener.name.inter.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#ingest configuration
listener.name.ingest.ssl.keystore.type=JKS
listener.name.ingest.ssl.keystore.location=/data/kafka/pki/ingest-keystore.jks
listener.name.ingest.ssl.keystore.password=<redacted>
listener.name.ingest.ssl.truststore.type=JKS
listener.name.ingest.ssl.truststore.location=/data/kafka/pki/ingest-truststore.jks
listener.name.ingest.ssl.truststore.password=<redacted>
listener.name.ingest.ssl.key.password=<redacted>
listener.name.ingest.ssl.client.auth=required
listener.name.ingest.ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
##
#Generic SSL Configuration
ssl.keystore.type=JKS
ssl.keystore.location=/data/kafka/pki/inter-keystore.jks
ssl.keystore.password=<redacted>
ssl.truststore.type=JKS
ssl.truststore.location=/data/kafka/pki/inter-truststore.jks
ssl.truststore.password=<redacted>
ssl.key.password=<redacted>
ssl.client.auth=requested
ssl.principal.mapping.rules=RULE:^.*[Oo][Uu]=([a-zA-Z0-9.-]*).*$/$1/L,DEFAULT
ssl.enabled.protocols=TLSv1.2
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false
super.users=User:<redacted>
##
#General configuration
auto.create.topics.enable=False
delete.topic.enable=True
log.dir=/var/log/kafka
log.retention.hours=24
log.cleaner.enable=True
log.cleanup.policy=delete
log.retention.check.interval.ms=3600000
min.insync.replicas=2
replication.factor=3
default.replication.factor=3
num.partitions=50
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3
transaction.state.log.min.isr=2
transaction.state.log.num.partitions=50
num.replica.fetchers=4
auto.leader.rebalance.enable=True
leader.imbalance.check.interval.seconds=60
transactional.id.expiration.ms=10000
unclean.leader.election.enable=False
zookeeper.connect=zookeeper:2180
zookeeper.session.timeout.ms=100
controlled.shutdown.enable=True
broker.rack=rack1
Did you import the certificates into the keystores in the order you described? It can be important to import the CA first and then the certificate signed by that CA, so that the chain of trust is built correctly.
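For example, per keystore that would be something like the following (just a sketch; the aliases, file names, and password are placeholders):
keytool -importcert -alias ca-root -file ca.pem -noprompt \
    -keystore inter-keystore.jks -storepass changeit
keytool -importcert -alias broker -file broker-signed.pem \
    -keystore inter-keystore.jks -storepass changeit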

RabbitMQ Server TLS, client alert: Fatal - Certificate Unknown when starting service

I have RabbitMQ configured to enable TLS with certificates. The key, cert, and CA are defined in the .conf file. Upon service startup, an error is thrown. I cannot find the cause, and logging isn't giving any more information at the debug level.
I get a client alert failure and am not certain of the cause.
2019-03-22 10:04:18.690 [info] <0.7.0> Server startup complete; 4 plugins started.
* rabbitmq_amqp1_0
* rabbitmq_management
* rabbitmq_management_agent
* rabbitmq_web_dispatch
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_connection_helper_sup:start_link() at pid <0.690.0>
2019-03-22 10:04:24.831 [debug] <0.689.0> Supervisor {<0.689.0>,rabbit_connection_sup} started rabbit_reader:start_link(<0.690.0>, {acceptor,{0,0,0,0},5671}) at pid <0.691.0>
2019-03-22 10:04:24.909 [info] <0.688.0> TLS server: In state certify received CLIENT ALERT: Fatal - Certificate Unknown
Our certs didn't have the correct type of X509v3 Extended Key Usage on the cert.
For x509 auth, you'll need to assign TLS Web Client Authentication when creating the certificate.
X509v3 Extended Key Usage:
TLS Web Client Authentication
This won't fix the issue if your certificate CA is broken and can't be verified, but for my issue, this was the resolution.
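To check which extended key usages a certificate actually carries, something like this works (the path is a placeholder):
openssl x509 -noout -text -in client-cert.pem | grep -A 1 "Extended Key Usage"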

openssl 1.0.2j, how to force server to choose ECDH* ciphers

I have a client and server which use openssl 1.0.2j with an RSA:4096 key and certificate, and I want to force the server to use only the following ciphers.
ECDHE-RSA-AES256-GCM-SHA384
ECDHE-RSA-AES256-SHA384
ECDH-RSA-AES128-GCM-SHA256
ECDH-RSA-AES128-SHA256
DHE-RSA-AES256-GCM-SHA384
DHE-RSA-AES256-SHA256
DHE-RSA-AES128-GCM-SHA256
DHE-RSA-AES128-SHA256
My server-side code looks like this:
method = SSLv23_server_method();
ctx = SSL_CTX_new(method);
SSL_CTX_set_cipher_list(ctx, "ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDH-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256");
SSL_CTX_set_ecdh_auto(ctx, 1);
SSL_CTX_set_tmp_dh(ctx, dh);
SSL_CTX_set_tmp_ecdh(ctx, ecdh);
SSL_CTX_use_certificate_file(ctx, certFilePath, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(ctx, privKeyPath, SSL_FILETYPE_PEM)
SSL_accept()
My client-side code looks like this:
method = SSLv23_server_method();
ctx = SSL_CTX_new(method);
SSL_CTX_set_cipher_list(ctx, "ECDH-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-SHA256");
SSL_CTX_set_ecdh_auto(ctx, 1);
SSL_CTX_set_tmp_dh(ctx, dh);
SSL_CTX_set_tmp_ecdh(ctx, ecdh);
SSL_CTX_use_certificate_file(ctx, certFilePath, SSL_FILETYPE_PEM);
SSL_CTX_use_PrivateKey_file(ctx, privKeyPath, SSL_FILETYPE_PEM)
SSL_connect()
The last step, SSL_accept() on the server, fails with:
err: 336027900 'error:140760FC:SSL routines:SSL23_GET_CLIENT_HELLO:unknown protocol'
If I use ECDHE*RSA* or DHE*RSA* ciphers on the client side, it works fine.
Could you please let me know what I am missing?
Edit: the server's certificate (certFilePath) contains an RSA public key, not an ECDH public key.
Meta: this answer covers only up to TLS 1.2; 1.3 no longer has key exchange and authentication in the cipher suite.
First, it makes no sense to call both set_ecdh_auto and set_tmp_ecdh -- those are mutually exclusive. Also, your server doesn't request client authentication, so configuring a cert & key on the client is useless. OTOH your server is using a self-signed cert which probably isn't in the client's default truststore, so you may need to configure the client truststore (there are several approaches to doing that).
Second, 'static' ECDH ciphersuites are quite different from ECDHE suites, just as 'static' DH suites are different from DHE suites. Both static forms are not widely implemented and very little used because they generally offer no benefit; in particular OpenSSL 1.1.0 and up no longer implement them, so your code becomes obsolete in about a year if I remember the schedule correctly.
To be exact, DH-RSA suites require a cert containing a DH key (which permits keyAgreement), and for TLS<=1.1 the cert must be issued by a CA using an RSA key; for 1.2 this latter restriction is removed. No public CA will issue a cert for a DH key, and even doing it yourself isn't easy; see https://security.stackexchange.com/questions/44251/openssl-generate-different-type-of-self-signed-certificate and (my) https://crypto.stackexchange.com/questions/19452/static-dh-static-ecdh-certificate-using-openssl/ .
ECDH-RSA suites similarly require a cert containing an ECC key which permits keyAgreement, and issued by RSA if <=1.1; this is somewhat easier because the key (and CSR) for ECDH is the same as for ECDSA and only KeyUsage needs to differ. For your self-created and self-signed case, it's easy, just generate an ECC key and cert (automatically signed with ECDSA).
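A minimal sketch of that, assuming prime256v1 and a throwaway subject (file names are placeholders; any keyUsage tweak such as keyAgreement would go through an extensions section in your openssl config):
openssl ecparam -name prime256v1 -genkey -noout -out ec.key
openssl req -new -x509 -key ec.key -out ec.crt -days 365 -subj "/CN=test-server"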
But last, a cipher mismatch shouldn't cause 'unknown protocol'; it would cause 'no shared cipher' and handshake_failure. The code you've shown shouldn't produce 'unknown protocol' at all, so you probably need to investigate and fix that first. You might try using command-line s_client instead, especially with its -debug or -msg hooks, or -trace if you have compiled that in.
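For instance (the host, port, and cipher string are placeholders):
openssl s_client -connect server.example.com:4433 \
    -cipher 'ECDH-RSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256' -msg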

Erlang SSL server stops accepting connections

Setup :
Erlang cluster with two Erlang nodes, different names, identical SSL setup (certificates, keys, authority)
the two nodes are listening for connections on the same port
the accept scheme is simple and doesn't have an acceptor pool in front: ListenSocket = ssl:listen() when the app starts; then, in the children, I do AcceptSock = ssl:transport_accept(ListenSocket) + ssl:ssl_accept(AcceptSock) + mysup:start_child(), which starts a new gen_server listening on ListenSocket (in the gen_server init() I set timeout == 0, btw, so the gen_server receives a timeout message which is handled by handle_info(timeout, ...), which performs the accept scheme above).
Expected behavior :
I expect all of this to work all the time :)
Observed behavior :
From time to time, one or both servers stop accepting SSL connections from the iOS apps. telnet to that port works - and it even passes transport_accept().
From the iOS app, I get an "SSLHandshake failed, error -9806" and it doesn't look like transport_accept() was successful (I have error logging before and after that line and I do not see any error messages printed in the log - in theory it looked like the iOS app was not even trying to connect to that port, but it did try, because it reports that the SSL handshake failed).
I followed this thread and got the following:
openssl s_client -connect myserver:4321 -servername myserver -ssl3 -tls1 -prexit
CONNECTED(00000003)
write:errno=60
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 0 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
Start Time: 1460057622
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
The same command executed against the second server (which is still accepting connections) returns a lot more information and doesn't time out.
Any help is appreciated, thank you.

RabbitMQ: handshake error when attempting to use SSL certificates

I am trying to use SSL certificates with RabbitMQ but I keep getting handshake errors with the broker.
The certificates that I have generated work fine when using the openssl 's_client' and 's_server' commands in separate terminal windows and utilizing port 8443 as detailed in the SSL Troubleshooting guide (http://www.rabbitmq.com/troubleshooting-ssl.html).
The problem appears when I attempt to connect to the RabbitMQ SSL port 5671 using the same openssl 's_client' command:
Running this:
openssl s_client -connect localhost:5671 -cert /etc/rabbitmq/ssl/client/cert.pem -key /etc/rabbitmq/ssl/client/key.pem -CAfile /etc/rabbitmq/ssl/certificate_auth/cacert.pem
Produces this:
CONNECTED(00000003)
depth=1 CN = RMQCA
verify return:1
depth=0 CN = roger.xxxxxx.com, O = server
verify return:1
139997248210760:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1256:SSL alert number 40
139997248210760:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
---
The SSL listener starts fine as indicated in the RabbitMQ log:
=INFO REPORT==== 19-May-2014::15:45:34 ===
started TCP Listener on [::]:5672
=INFO REPORT==== 19-May-2014::15:45:34 ===
started SSL Listener on [::]:5671
When attempting to connect to port 5671 with 's_client' the error appears:
=INFO REPORT==== 19-May-2014::17:20:39 ===
accepting AMQP connection <0.3263.0> ([::1]:58538 -> [::1]:5671)
=ERROR REPORT==== 19-May-2014::17:20:39 ===
SSL: certify: ssl_handshake.erl:1346:Fatal error: handshake failure
=ERROR REPORT==== 19-May-2014::17:20:44 ===
error on AMQP connection <0.3263.0>: {ssl_upgrade_error,
{tls_alert,"handshake failure"}} (unknown POSIX error)
RabbitMQ Config file:
[
  {rabbit, [
    {ssl_listeners, [5671]},
    {ssl_options, [{cacertfile, "/etc/rabbitmq/ssl/certificate_auth/cacert.pem"},
                   {certfile, "/etc/rabbitmq/ssl/server/cert.pem"},
                   {keyfile, "/etc/rabbitmq/ssl/server/key.pem"},
                   {verify, verify_peer},
                   {fail_if_no_peer_cert, false}]}
  ]}
].
RabbitMQ info:
[{pid,10375},
{running_applications,
[{rabbitmq_management,"RabbitMQ Management Console","3.2.3"},
{rabbitmq_web_dispatch,"RabbitMQ Web Dispatcher","3.2.3"},
{webmachine,"webmachine","1.10.3-rmq3.2.3-gite9359c7"},
{mochiweb,"MochiMedia Web Server","2.7.0-rmq3.2.3-git680dba8"},
{rabbitmq_management_agent,"RabbitMQ Management Agent","3.2.3"},
{rabbit,"RabbitMQ","3.2.3"},
{ssl,"Erlang/OTP SSL application","5.3.3"},
{public_key,"Public key infrastructure","0.21"},
{crypto,"CRYPTO version 2","3.2"},
{asn1,"The Erlang ASN1 compiler version 2.0.4","2.0.4"},
{os_mon,"CPO CXC 138 46","2.2.14"},
{inets,"INETS CXC 138 49","5.9.8"},
{mnesia,"MNESIA CXC 138 12","4.11"},
{amqp_client,"RabbitMQ AMQP Client","3.2.3"},
{xmerl,"XML parser","1.3.6"},
{sasl,"SASL CXC 138 11","2.3.4"},
{stdlib,"ERTS CXC 138 10","1.19.4"},
{kernel,"ERTS CXC 138 10","2.16.4"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang R16B03-1 (erts-5.10.4) [source] [64-bit] [smp:2:2] [async-threads:30] [hipe] [kernel-poll:true]\n"},
{memory,
[{total,43812088},
{connection_procs,5616},
{queue_procs,42528},
{plugins,451248},
{other_proc,13805200},
{mnesia,72752},
{mgmt_db,10208},
{msg_index,34560},
{other_ets,1159472},
{binary,1030272},
{code,21819091},
{atom,793505},
{other_system,4587636}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,787819724},
{disk_free_limit,50000000},
{disk_free,31267266560},
{file_descriptors,
[{total_limit,924},{total_used,4},{sockets_limit,829},{sockets_used,2}]},
{processes,[{limit,1048576},{used,215}]},
{run_queue,0},
{uptime,7893}]
...done.
Any help would be greatly appreciated
Thanks in advance.
UPDATE:
I get the following errors when trying to connect with the rabbitmqadmin utility.
Log File:
=INFO REPORT==== 20-May-2014::14:39:12 ===
accepting AMQP connection <0.16589.0> ([::1]:58922 -> [::1]:5671)
=ERROR REPORT==== 20-May-2014::14:39:12 ===
SSL: certify: ssl_handshake.erl:1346:Fatal error: handshake failure
=ERROR REPORT==== 20-May-2014::14:39:17 ===
error on AMQP connection <0.16589.0>: {ssl_upgrade_error,
{tls_alert,"handshake failure"}} (unknown POSIX error)
The rabbitmqadmin command produced the following:
*** Could not connect: [Errno 1] _ssl.c:492: error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
I had the same problem as @user3653959, and @Sarah Messer's answer led me to the solution.
Your client certificate must have the TLS Web Client Authentication "X509v3 Extended Key Usage" attribute. Mine had only TLS Web Server Authentication due to an error in my client generation script.
To check your client certificate's capabilities, you can use this command:
openssl x509 -noout -text -in client-certificate.pem
Then look for the "X509v3 extensions:" section and the "X509v3 Extended Key Usage:" subsection.
If you generate your client certificate using the example openssl.conf and client and server commands provided in the official "RabbitMQ - TLS Support" guide, it should work out of the box.
The key here is the extendedKeyUsage = 1.3.6.1.5.5.7.3.2 openssl config option in openssl.conf, as @Sarah Messer points out. This is the "TLS Web Client Authentication" capability. OpenSSL s_server does not require this capability, which is why the certificate works by default with s_server but not with RabbitMQ. keyUsage = digitalSignature is enough for the main usage options. Also, the "Common Name" (CN) of the client certificate is not important.
Just for reference
My environment:
RabbitMQ 3.6.2
Erlang 18.2
Ubuntu 14.04.2 LTS (64-bit)
Only TLSv1.2 enabled.
The error I was seeing in my RabbitMQ log:
=ERROR REPORT==== 21-Jun-2016::13:28:21 ===
SSL: certify: ssl_handshake.erl:1492:Fatal error: handshake failure
The error I was seeing via openssl s_client:
140735165813584:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:s3_pkt.c:1472:SSL alert number 40
140735165813584:error:1409E0E5:SSL routines:ssl3_write_bytes:ssl handshake failure:s3_pkt.c:656:
I worked my way through similar troubles (using RabbitMQ 2.7.1 / Erlang R14B04). Here's what I've found:
The RabbitMQ plugins page and at least one other site recommend enabling the plugin rabbitmq_auth_mechanism_ssl. If rabbitmq-plugins is an invalid command on your system, this page describes how to enable it on Ubuntu. (Apparently the apt-get package doesn't have quite the expected behavior on Debian-based systems.) Your output (from rabbitmqctl report, I presume) says you don't have rabbitmq_auth_mechanism_ssl enabled.
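Assuming rabbitmq-plugins is available on your PATH, enabling it is just the following (older versions may also need a broker restart afterwards):
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl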
For your rabbitmq.config, you'll need to make sure "EXTERNAL" is listed as one of the auth_mechanisms. The line's syntax is {auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']} and appears as one item in the default, "rabbit" portion of the configuration.
You should also make sure the certificate your client presents has the appropriate values set for both keyUsage and extendedKeyUsage, as RabbitMQ is more strict about these than s_server. For debugging / testing purposes, you may want to be extremely permissive with these. You can set the keyUsage in your openssl config. A broadly-acceptable openssl config may have lines like this
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment, keyAgreement, keyCertSign, cRLSign
extendedKeyUsage = 1.3.6.1.5.5.7.3.1, 1.3.6.1.5.5.7.3.2
(I think the .2 OID, "TLS Web Client Authentication" is important for connecting to RabbitMQ, but I haven't done careful tests.)
This will produce certificates with this block near the end:
X509v3 Key Usage:
Digital Signature, Non Repudiation, Key Encipherment, Data Encipherment, Key Agreement, Certificate Sign, CRL Sign
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
There should be more output from s_client. In particular, I'm interested in the final line, which should look something like "Verify return code: 0 (ok)" If you have a non-zero / error message there, post it and pivot off that in your searches. (#19 is surprisingly common, given that it's not really an error.)
When I got to this point and tried to make a simple pika.BlockingConnection, the handshake apparently completed just fine, but Rabbit removed EXTERNAL from the list specified in auth_mechanisms in the config. I confirmed I had rabbitmq_auth_mechanism_ssl enabled, but that by itself wasn't enough. (I discovered this by subclassing pika.credentials.ExternalCredentials, passing an instance as the "credentials" item in ConnectionParameters, and adding a print statement at the top of the subclass's response_for() method.) I fixed that by adding the following line to the rabbit portion of the config file, on the same level as ssl_listeners and ssl_cert_login_from:
{ssl_apps,[asn1,crypto,public_key,ssl]},
(I suspect newer versions of RabbitMQ turn that on by default, but my particular setup did not.)
If you've done all that and you're still having trouble, you might also try replacing "verify_peer" with "verify_none" in your RabbitMQ config. You probably don't want that in production, since it opens you up to anyone with a self-signed certificate, but it's another data point. Also, subclass the relevant things in pika and add in print statements to get more insight on what Rabbit's sending you and how your local client's interpreting it.
Here is the solution which worked for me:
Adding the ciphers below to rabbitmq.config:
{ciphers, ["ECDHE-ECDSA-AES256-GCM-SHA384","ECDHE-RSA-AES256-GCM-SHA384",
"ECDHE-ECDSA-AES256-SHA384","ECDHE-RSA-AES256-SHA384", "ECDHE-ECDSA-DES-CBC3-SHA",
"ECDH-ECDSA-AES256-GCM-SHA384","ECDH-RSA-AES256-GCM-SHA384","ECDH-ECDSA-AES256-SHA384",
"ECDH-RSA-AES256-SHA384","DHE-DSS-AES256-GCM-SHA384","DHE-DSS-AES256-SHA256",
"AES256-GCM-SHA384","AES256-SHA256","ECDHE-ECDSA-AES128-GCM-SHA256",
"ECDHE-RSA-AES128-GCM-SHA256","ECDHE-ECDSA-AES128-SHA256","ECDHE-RSA-AES128-SHA256",
"ECDH-ECDSA-AES128-GCM-SHA256","ECDH-RSA-AES128-GCM-SHA256","ECDH-ECDSA-AES128-SHA256",
"ECDH-RSA-AES128-SHA256","DHE-DSS-AES128-GCM-SHA256","DHE-DSS-AES128-SHA256",
"AES128-GCM-SHA256","AES128-SHA256","ECDHE-ECDSA-AES256-SHA",
"ECDHE-RSA-AES256-SHA","DHE-DSS-AES256-SHA","ECDH-ECDSA-AES256-SHA",
"ECDH-RSA-AES256-SHA","AES256-SHA","ECDHE-ECDSA-AES128-SHA",
"ECDHE-RSA-AES128-SHA","DHE-DSS-AES128-SHA","ECDH-ECDSA-AES128-SHA",
"ECDH-RSA-AES128-SHA","AES128-SHA"]},
{fail_if_no_peer_cert,false}]}
]}
]