ActiveMQ SSL activation

I have an MQTT broker (ActiveMQ) on an Ubuntu server, with Windows clients. Now I want to enable SSL. I found a tutorial, but I have some questions.
Step 1 I perform on the MQTT broker (ActiveMQ).
Step 1 Create a certificate for the broker with keytool:
keytool -genkey -alias broker -keyalg RSA -keystore broker.ks
Step 2: Export the broker's certificate so it can be shared with clients. This is also done on the MQTT broker server; the certificate will be installed on the Windows clients.
keytool -export -alias broker -keystore broker.ks -file broker_cert
Step 3 (see below): Create a certificate/keystore for the client.
Do I need this step? Where do I perform it, on the client or on the MQTT broker server? The clients are Windows machines.
keytool -genkey -alias client -keyalg RSA -keystore client.ks
Step 4: Create a truststore for the client and import the broker's certificate, so that the client "trusts" the broker.
Do I need this step? Where do I perform it, on the client or on the MQTT broker server?
keytool -import -alias broker -keystore client.ts -file broker_cert
What do I have to do now to make the broker and the Windows clients use the certificate?

The instructions cover both the broker side and the client side.
The broker hosts the self-signed SSL certificate to hand out on SSL connections, and the client needs that certificate in a 'truststore' so that it will accept the broker's key, since it is self-signed and not issued by one of the public certificate authorities already trusted by most operating systems and dev stacks.
Keep in mind: SSL encrypts the traffic, but it also establishes who to trust. Just because some server hands out an SSL key doesn't mean the client should simply encrypt and start passing data to that server.
EDIT: Some config samples
At minimum:
<broker ..
  ..
  <sslContext>
    <sslContext keyStore="broker1-keystore.ks"
                keyStorePassword="password"/>
  </sslContext>
  ..
</broker>
Advanced ref: https://activemq.apache.org/ssl-transport-reference
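If any of your Windows clients are Java-based MQTT clients, the client side boils down to loading client.ts and handing the resulting socket factory to the MQTT library. Below is a rough sketch using the Eclipse Paho client; the host name, port, client id and truststore password are assumptions, and it presumes the broker exposes an SSL-enabled MQTT transportConnector on that port:
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;

public class SslMqttClientSketch {
    public static void main(String[] args) throws Exception {
        // Load the client truststore that contains the broker's self-signed certificate
        KeyStore ts = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("client.ts")) {
            ts.load(in, "password".toCharArray()); // truststore password chosen at import time (assumed)
        }

        // Build an SSLContext that trusts only the certificates in client.ts
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(ts);
        SSLContext ctx = SSLContext.getInstance("TLSv1.2");
        ctx.init(null, tmf.getTrustManagers(), null);

        // Connect to the broker's SSL-enabled MQTT transport (host and port are assumptions)
        MqttClient client = new MqttClient("ssl://broker.example.com:8883", "windows-client-1");
        MqttConnectOptions opts = new MqttConnectOptions();
        opts.setSocketFactory(ctx.getSocketFactory());
        client.connect(opts);
        client.publish("test/topic", "hello over TLS".getBytes(), 0, false);
        client.disconnect();
    }
}
GUI clients such as MQTT.fx typically take the exported broker certificate directly in their SSL/TLS settings instead of a Java truststore.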

@Pavlovich
I installed the certificate on the client.
I changed activemq.xml like this:
<transportConnector name="ssl" uri="ssl://0.0.0.0:61714?transport.enabledProtocols=TLSv1.2"/>
I'm trying to test the connection with the certificate using MQTT.fx and it doesn't work.
I keep getting an MQTT exception:
ERROR --- BrokerConnectService : MqttException
Thx

Related

Setup kafka security cluster with SASL/PLAIN

I've been trying to set up a cluster following https://docs.confluent.io/platform/current/security/security_tutorial.html with SSL keys and username/password, as described there.
But I failed to find the proper way to set up the dname of a key and the broker's "super.users" parameter.
The tutorial says to create a key with:
# Without user prompts, pass command line arguments
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -validity {validity} -genkey -storepass {keystore-pass} -keypass {key-pass} -dname {distinguished-name} -ext SAN=DNS:{hostname}
And later on, when configuring the broker's server.properties, we need to set up super.users:
Because this tutorial configures the inter-broker security protocol as SSL, set the super user name to be the distinguished name configured in the broker's certificate. (See other authorization configuration options.)
super.users=User:;User:;User:;User:kafka-broker-metric-reporter
The problem is that the dname must follow the pattern "CN=cName, OU=orgUnit, O=org, L=city, S=state, C=countryCode".
Moreover, there is a restriction on CN for Kafka: it must equal the SAN FQDN setting.
So, the question is:
in the case where we are on localhost and setting up a cluster with a single broker, should we set the dname of the key to "CN=localhost", so that the command becomes:
keytool -keystore kafka.server.keystore.jks -alias localhost -genkey -dname "CN=localhost" -ext SAN=DNS:localhost
and then have this entry in server.properties:
super.users=User:CN=localhost
?
And if that is true, the second question:
if we are still on localhost but set up 2 separate brokers there, will they have the same dname?
Actually, it was correct to add users to the config with their dname:
super.users=User:CN=localhost
It was not so obvious, but it works.
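If you want to double-check exactly which distinguished name ends up in the certificate (and therefore what has to appear after User: in super.users), you can print it straight from the keystore. A small sketch, assuming the keystore name, alias and store password used above:
import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;

public class PrintSubjectDn {
    public static void main(String[] args) throws Exception {
        // Load the broker keystore created with keytool
        KeyStore ks = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("kafka.server.keystore.jks")) {
            ks.load(in, "keystore-pass".toCharArray()); // store password (assumed)
        }

        // Print the subject DN of the broker certificate; with the default
        // principal mapping this is the value that follows "User:" in super.users
        X509Certificate cert = (X509Certificate) ks.getCertificate("localhost");
        System.out.println(cert.getSubjectX500Principal().getName()); // e.g. CN=localhost
    }
}
keytool -list -v on the same keystore shows the same information.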

kafka 2 way ssl authentication

I am trying to set up 2-way SSL authentication. My requirement is that the broker should authenticate only specific clients.
My organization has a CA which issues all certificates in PKCS12 format. The steps I followed are as follows.
Got a certificate for the broker and configured it in the broker keystore:
ssl.keystore.location=/home/kafka/certificate.p12
ssl.keystore.password=xxxxx
ssl.client.auth=required
Got a certificate for the client and configured it in the client keystore:
ssl.keystore.location=/home/kafka/certificate.p12
ssl.keystore.password=xxxxx
Extracted the public certificate from the client certificate using the keytool command:
keytool -export -file cert -keystore certificate.p12 -alias "12345" -storetype pkcs12 -storepass xxxxx
Imported the certificate into the broker truststore. The broker truststore contains only the client 12345 certificate:
keytool -keystore truststore.p12 -import -file cert -alias 12345 -storetype pkcs12 -storepass xxxxx -noprompt
Configured the truststore in the broker:
ssl.truststore.location=/home/kafka/truststore.p12
ssl.truststore.password=xxxxx
Configured the truststore in the client. The client truststore contains the CA certificates:
ssl.truststore.location=/etc/pki/java/cacerts
ssl.truststore.password=xxxxx
When I run the broker and client I expect the broker to authenticate the client and establish an SSL connection, but instead the following error is thrown:
[2021-06-03 23:32:06,864] ERROR [AdminClient clientId=adminclient-1] Connection to node -1 (abc.com/10.129.140.212:9093) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
[2021-06-03 23:32:06,866] WARN [AdminClient clientId=adminclient-1] Metadata update failed due to authentication error (org.apache.kafka.clients.admin.internals.AdminMetadataManager)
org.apache.kafka.common.errors.SslAuthenticationException: SSL handshake failed
Caused by: javax.net.ssl.SSLProtocolException: Unexpected handshake message: server_hello
I tried various things but nothing seems to work. When I replace the broker truststore with the /etc/pki/java/cacerts truststore file, which contains only the CA certificate, then it works fine, but then the broker will authenticate any client whose certificate was issued by the CA.
What could be the issue?
The default keystore format is JKS.
Use keytool to create a Java KeyStore (JKS) with the certificate and key for use by Kafka. You'll be prompted to create a new password for the resulting file as well as to enter the password for the PKCS12 file from the previous step. Hang on to the new JKS password for use in the configuration below.
$ keytool -importkeystore -srckeystore server.p12 -destkeystore kafka.server.keystore.jks -srcstoretype pkcs12 -alias myserver.internal.net
Note: It's safe to ignore the following warning from keytool.
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.p12 -destkeystore kafka.server.keystore.jks -srcstoretype pkcs12"
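For completeness, a minimal sketch of how a client (here an AdminClient) would be configured once the stores are in place; the bootstrap address is an assumption, and the file names and passwords are taken from the question rather than verified values. Note in particular security.protocol=SSL, without which the client talks plaintext to the SSL listener:
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class SslAdminClientSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "abc.com:9093");
        props.put("security.protocol", "SSL");
        // Client identity, required because the broker sets ssl.client.auth=required
        props.put("ssl.keystore.location", "/home/kafka/certificate.p12");
        props.put("ssl.keystore.type", "PKCS12");
        props.put("ssl.keystore.password", "xxxxx");
        // Trust anchors used to verify the broker certificate
        props.put("ssl.truststore.location", "/etc/pki/java/cacerts");
        props.put("ssl.truststore.password", "changeit"); // default cacerts password

        try (AdminClient admin = AdminClient.create(props)) {
            // A simple round trip to prove the TLS handshake succeeds
            System.out.println(admin.describeCluster().nodes().get());
        }
    }
}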

How does a client get an SSL certificate from an ActiveMQ Broker?

I have an ActiveMQ Broker living on AWS. I'm trying to secure connections from clients using SSL. I have set up the broker to use SSL, but I don't quite understand where the clients are supposed to get the certificate from. Do I need to copy the cert from the broker and package it with client code? Or do I remotely retrieve the cert programmatically each time the client is launched?
Relevant SSL Setup in activemq.xml
<sslContext>
  <sslContext keyStore="file:${activemq.base}/conf/broker.ks"
              keyStorePassword="password"
              trustStore="file:${activemq.base}/conf/broker.ts"
              trustStorePassword="password"/>
</sslContext>
<transportConnectors>
  <transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
  <transportConnector name="ssl" uri="ssl://0.0.0.0:61714?transport.enabledProtocols=TLSv1.2"/>
</transportConnectors>
The clients connecting will be Java clients using JMS. At this point I'm using the default cert that comes packaged with the ActiveMQ installation.
As the ActiveMQ documentation states:
ActiveMQ includes key and trust stores that reference a dummy self signed cert. When you create a broker certificate and stores for your installation, either overwrite the values in the conf directory or delete the existing dummy key and trust stores so they cannot interfere)
Therefore, you should delete the existing broker.ks and broker.ts and create new ones for your installation. You've got a couple of options here.
I imagine that AWS has some sort of infrastructure to acquire SSL certificates and that those certificates would be signed by a well-known certificate authority which would be trusted implicitly by your JMS clients. A quick search turned up AWS Certificate Manager.
However, you also have the option of using a "self-signed" certificate which is, by definition, not signed by a well-known certificate authority and therefore must be explicitly trusted by your clients.
You can take the self-signed route by using the following commands:
Using keytool (from the JDK), create a certificate for the broker:
keytool -genkey -alias broker -keyalg RSA -keystore broker.ks
Export the broker's certificate so it can be shared with clients:
keytool -export -alias broker -keystore broker.ks -file broker_cert
Create a certificate/keystore for the client:
keytool -genkey -alias client -keyalg RSA -keystore client.ks
Create a truststore for the client, and import the broker's certificate. This establishes that the client "trusts" the broker:
keytool -import -alias broker -keystore client.ts -file broker_cert
When starting the client's VM, specify the following system properties:
javax.net.ssl.trustStorePassword=password
javax.net.ssl.trustStore=/path/to/client.ts
If you choose the self-signed route then you only need to generate the client.ts once and then copy that to every client. The clients will use the same truststore every time they connect (assuming the broker's certificate doesn't change).
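As a concrete (hedged) example, the same two system properties can also be set programmatically before the first connection is created; the truststore path, password and broker address below are placeholders:
import javax.jms.Connection;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SslJmsClientSketch {
    public static void main(String[] args) throws Exception {
        // Equivalent to -Djavax.net.ssl.trustStore=... -Djavax.net.ssl.trustStorePassword=...
        System.setProperty("javax.net.ssl.trustStore", "/path/to/client.ts");
        System.setProperty("javax.net.ssl.trustStorePassword", "password");

        // Connect over the broker's ssl:// transportConnector
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("ssl://your-broker-host:61714");
        Connection connection = factory.createConnection();
        connection.start();
        // ... create sessions, producers and consumers as usual ...
        connection.close();
    }
}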

Glassfish 4 certificate based client authentication

For a couple of days I've been trying to set up my development environment for certificate-based client authentication and it just doesn't want to work. I'm using the GlassFish 4 documentation (security guide) and, following it, creating a self-signed client certificate for test purposes, but I'm not sure what I'm missing, since there is no complete description of the whole process. When I enable Client Authentication for my HTTP listener I don't get any error message in the server log, but when I try to connect from a browser I just cannot establish a connection with the server. Without this option my web application works just fine. In Chrome I see the following message:
This site can’t be reached
127.0.0.1 refused to connect.
ERR_CONNECTION_REFUSED
And in Firefox:
The connection to 192.168.1.9:8181 was interrupted while the page was loading.
So for me it seems that something (unfortunately I cannot understand what exactly) is happening, but a connection cannot be established.
Since the setup is pretty complex, I'm looking for a tutorial or how-to page with step-by-step instructions, but any help and advice will be highly appreciated.
OK, I finally understood how it works :) I found very good step-by-step instructions in the book Java EE 7 with GlassFish 4 Application Server, Chapter 9, The certificate realm (p. 247).
One basically has to do the following three steps:
Create Client Certificate
1.1 Generate a self-signed certificate:
keytool -genkey -v -alias myalias -keyalg RSA -storetype PKCS12 -keystore clientCert_1.p12 -storepass wonttellyou -keypass wonttellyou
1.2 Import it into a browser.
NB: If the certificate is not imported, the browser doesn't ask for it but instead returns a connection error message, which to me is pretty misleading.
Export the certificate from step 1 into a format that GlassFish can understand:
keytool -export -alias myalias -keystore clientCert_1.p12 -storetype PKCS12 -storepass wonttellyou -rfc -file clientCert_1.cer
RESULT => Certificate stored in file clientCert_1.cer
Since we issued a self-signed certificate, in order for GlassFish to accept our certificate, we need to import it into the cacerts keystore.
keytool -import -v -trustcacerts -alias myalias -file clientCert_1.cer -keystore ../cacerts.jks -keypass changeit -storepass changeit
Note: the -import -v -trustcacerts part is not in the book, but without it keytool may fail with an exception. changeit is the default GlassFish password.
Finally, one needs to set up the application server for certificate-based client authentication, which has two parts. The first one is adding a login configuration to web.xml:
...
<login-config>
<auth-method>CLIENT-CERT</auth-method>
<realm-name>certificate</realm-name>
</login-config>
...
And the second one is configuring the role mapping in glassfish-web.xml, so that your application has a corresponding role for that login. It looks like this:
...
<security-role-mapping>
<role-name>YOUR_ROLE</role-name>
<group-name>YOUR_GROUP</group-name>
<principal-name>CN=Test User, OU=n/a, O=Test User, L=Cologne, ST=NRW, C=DE</principal-name>
</security-role-mapping>
...
For more detailed information, about key generation and setting up your glassfish consult the book.
And finally, one more thing which was confusing for me: in the admin interface there is an SSL configuration tab for an existing HTTP listener. You don't have to enable the Client Authentication option there!
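One way to take the browser out of the equation while testing is a tiny Java client that presents the same PKCS12 keystore through the standard JSSE system properties. This is only a sketch; the URL path, the truststore file and its password are assumptions for illustration:
import java.io.InputStream;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class ClientCertSmokeTest {
    public static void main(String[] args) throws Exception {
        // Client certificate presented during the TLS handshake
        System.setProperty("javax.net.ssl.keyStore", "clientCert_1.p12");
        System.setProperty("javax.net.ssl.keyStoreType", "PKCS12");
        System.setProperty("javax.net.ssl.keyStorePassword", "wonttellyou");
        // Truststore that must contain the GlassFish server certificate (assumed file)
        System.setProperty("javax.net.ssl.trustStore", "cacerts.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

        HttpsURLConnection conn =
                (HttpsURLConnection) new URL("https://127.0.0.1:8181/myapp/").openConnection();
        System.out.println("HTTP status: " + conn.getResponseCode());
        try (InputStream in = conn.getInputStream()) {
            in.transferTo(System.out); // dump the protected page
        }
    }
}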

"unable to find valid certification path to requested target" after adding new Keystore to ActiveMQ

We use ActiveMQ to queue up messages from remote clients.
The clients use the following URL to connect to ActiveMQ on our server:
ssl://www.mydomain.com:61616
This worked fine in the past and was set up by a developer no longer with the company.
Recently we had to update our SSL certificate as the old one had expired. We did this successfully for our HTTP server, but have only now realised that a copy of the original keystore still resided in the ActiveMQ config folders.
We have tried to place the new keystore into the ActiveMQ config folders, overwriting the old keystore. However, this does not appear to work and all connections are rejected with the following stack trace:
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(Unknown Source)
at java.security.cert.CertPathBuilder.build(Unknown Source)
What are we doing wrong here?
We've listed the contents of both the old and new keystore using the keytool -list command and they appear to be very similar (apart from the dates of course).
Are there additional updates we need to make to the clients calling the above URL so that they accept our new keystore?
It may be that your truststore is out of sync with your keystore. Here is the general way to set it up from scratch; your config will differ, so adapt as needed:
Generate certs for each of the clients, and register the client certs with the broker truststore.
> keytool -genkey -alias producer -keyalg RSA -keystore myproducer.ks
> keytool -genkey -alias consumer -keyalg RSA -keystore myconsumer.ks
Export both certs
> keytool -export -alias producer -keystore myproducer.ks -file producer_cert
> keytool -export -alias consumer -keystore myconsumer.ks -file consumer_cert
Import the certs into the broker truststore (a new file):
> keytool -import -alias producer -keystore mybroker.ts -file producer_cert
> keytool -import -alias consumer -keystore mybroker.ts -file consumer_cert
Copy the broker truststore to whichever location you had the old one in, usually {ACTIVEMQ_HOME}/conf. You can generally see this in your broker config:
<broker ...>
<sslContext>
<sslContext keyStore="file:${activemq.base}/conf/mybroker.ks"
keyStorePassword="test123"
trustStore="file:${activemq.base}/conf/mybroker.ts"
trustStorePassword="test123"/>
</sslContext>
</broker>
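On the client question: if those remote clients are Java/JMS clients that pin the broker certificate in their own truststore, that truststore also has to contain the new certificate. Below is a sketch of a client that points at its stores explicitly instead of using JVM-wide system properties; the store file names (including client.ts) and passwords are assumptions:
import javax.jms.Connection;
import org.apache.activemq.ActiveMQSslConnectionFactory;

public class SslClientSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQSslConnectionFactory factory =
                new ActiveMQSslConnectionFactory("ssl://www.mydomain.com:61616");
        // Client keystore, only needed if the broker requires client certificates
        factory.setKeyStore("myproducer.ks");
        factory.setKeyStorePassword("test123");
        // Truststore that must contain the broker's current (new) certificate
        factory.setTrustStore("client.ts");
        factory.setTrustStorePassword("test123");

        Connection connection = factory.createConnection();
        connection.start();
        connection.close();
    }
}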