How to fix ActiveMQ Artemis connection timeouts when sslEnabled=true

I'm trying to enable SSL on an Artemis broker and I always get this exception when trying to connect:
Exception in thread "main" ActiveMQConnectionTimedOutException[errorType=CONNECTION_TIMEDOUT message=AMQ219013: Timed out waiting to receive cluster topology. Group:null]
at org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory(ServerLocatorImpl.java:743)
The code I use to connect is just this:
ActiveMQClient.createServerLocator("tcp://localhost:5500").createSessionFactory();
This is from a fresh install of Artemis 2.23.1 and the only thing I changed from the default broker configuration was to add this acceptor in broker.xml:
<acceptor name="netty-ssl-acceptor">tcp://localhost:5500?sslEnabled=true;keyStorePath=server-keystore.jks;keyStorePassword=securepass</acceptor>
I generated the keystore and truststore using the script provided in this example.
I had first tried a keystore with a cert that is valid for my domain (using a domain-qualified host name in createServerLocator()), but that also gave me the timeout. That is when I went back to a fresh install and tried working through the SSL example.
Various attempts with invalid paths/passwords/certs threw exceptions that pointed me at what to fix, but with this generic timeout while discovering the cluster topology I haven't been able to see what I did wrong.
Anybody have ideas?

You need to specify sslEnabled=true on the client's URL as well so it knows to use SSL, e.g.:
ActiveMQClient.createServerLocator("tcp://localhost:5500?sslEnabled=true").createSessionFactory();
This is done for the JMS connection in the ssl-enabled example which you cited here.
Also, if you're using self-signed certificates then you'll need a truststore for your client as well, and you'll need to configure the truststore settings on the client's URL (just like in the example).
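For example, with the keystore/truststore generated for the SSL example, the client side would look roughly like this (a sketch; the truststore file name and password are placeholders, while trustStorePath and trustStorePassword are the standard Artemis URL parameters for this):
ActiveMQClient.createServerLocator("tcp://localhost:5500?sslEnabled=true&trustStorePath=client-truststore.jks&trustStorePassword=securepass")
        .createSessionFactory();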

Related

Google Cloud TCP external load balancer and TLS not self signed

Is it possible to directly expose a server behind an L4 load balancer, with a public certificate?
This server is inside a Kubernetes pod. There is a TCP load balancer service in front of it which creates the external L4 LB.
My problem is that the TLS traffic does not reach the container inside the pod. So if you succeeded with a similar configuration, I would be interested in knowing how.
Update
I did not mention that the traffic is gRPC.
Here is what I did: I have a domain and a corresponding official certificate. I want to secure the gRPC connection.
I tried two approaches:
With the Google ESP container, I put the cert in an nginx secret, passed it to the container, and set an SSL port. Behind the ESP, in the same pod, I have my gRPC server.
In this case I get a message like this on the client side:
D0610 14:38:46.246248584 32401 security_handshaker.cc:176] Security
handshake failed:
{"created":"#1591792726.246234613","description":"Handshake
failed","file":"../deps/grpc/src/core/lib/security/transport/security_handshaker.cc","file_line":291,"tsi_code":10,"tsi_error":"TSI_PROTOCOL_FAILURE"}
I see some TLS exchanges with Wireshark but no logs from the ESP.
Without the ESP, I put the cert in my gRPC server. In that case the gRPC server fails with something like this:
error:1408F10B:SSL routines:ssl3_get_record:wrong version number
In the Google ESP documentation I see that I have to prove that the domain belongs to me and upload the cert (but where?).
Update 2
As of today, I see no evidence that it is feasible.
IMO, the main issue is that the L4 load balancer has the IP corresponding to the certificate's domain name. The pods therefore don't have the correct IP to prove that they can use the cert, so their requests towards the roots are denied (I have no proof of that, as I can't get debug info from nginx in the ESP; I have seen such a request with the pure gRPC server solution, though).
The issue was in the TLS exchange.
With the cert installed in the ESP, it works fine from a web browser and the certificate is shown as valid, whereas with a gRPC client the TLS handshake fails. Adding additional trace info helped.
By checking my certificate (not self-signed, but attached to my domain), I found that there is an intermediate certificate provided with it. I installed it along with the domain certificate (in the same crt file) and then it worked.
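In practice that just means concatenating the two PEM files into the one crt that gets installed, something like this (file names are placeholders; the domain certificate typically comes first, followed by the intermediate):
cat mydomain.crt intermediate.crt > mydomain-chained.crt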
I don't know exactly why it behaves like this, but maybe it's due to the root_cert file in the gRPC client lib being too old.
By the way, for a domain cert there is no specific requirement regarding CN and subjectAltName; it works without them. So that must only apply to self-signed certs, as I have seen stated elsewhere.
I had another issue that disturbed my task: be careful not to put 'http2' in the name of the L4 load balancer's service port. That had a side effect which made another deployment fail. In short, when you do HTTPS, don't put 'http2' in the port name.
Anyway it is now working and answers the request for the bounty. Thanks to all who tried to help :)

How to use Kafka with TLS peer verification turned off

I'm testing Kafka cluster creation using Let's Encrypt staging certs. After creating the cluster, I run the Kafka-provided kafka-console-consumer.sh and kafka-console-producer.sh scripts on my machine. When I ran with Let's Encrypt production certs, it worked fine. But now that I'm using staging certs, I get this when I run the producer:
ERROR [Producer clientId=console-producer] Connection to node -1 (2.kafka.mysite.com/10.1.17.191:9092) failed authentication due to: SSL handshake failed (org.apache.kafka.clients.NetworkClient)
I use these properties for the producer script:
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="secret";
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
I'd like to give the option to ignore TLS, and I'd like it to be some parameter I can toggle (on the cluster or on the client) to allow it. How can I achieve this? For anyone familiar with Rabbitmq, I think it's similar to VERIFY_PEER=false, aka VERIFY_NONE.
The Kafka broker configuration has the setting
ssl.client.auth
Its value can be set to required, requested, or none. You could set it to requested: this means client authentication is optional, and unlike required, the client can choose not to provide authentication information about itself.
https://docs.confluent.io/current/installation/configuration/broker-configs.html
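As a minimal sketch, that broker-side change is a single line in the broker's server.properties (assuming you go with the requested option described above):
ssl.client.auth=requested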

Startup: Logstash Forwarder

I'm quite confused on how to set up logstash-forwarder.
I currently have it running on my local host, but am unsure how to set it up to handshake with the remote host. I have an SSL certificate and key, and have the configuration paths to them.
I am confused as to what I should be installing on the remote host to get this to work. Is it just a copy of the SSL key and certificate, or some kind of logstash-forwarder package installation as well?
The remote server would normally be an instance of logstash, using the lumberjack input, where you would specify the SSL parameters.
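A minimal sketch of such a pipeline on the remote logstash instance might look something like this (port, paths, and output are placeholders; the certificate and key must match what your logstash-forwarder config trusts):
input {
  lumberjack {
    port => 5043
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
output {
  stdout { codec => rubydebug }
}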

The system cannot infer the transport information from xxxx url

I have been trying to configure a simple pass-through proxy using WSO2 ESB, which points to a REST service on an https port.
I tried doing the same on my development machine (Windows 7) and it was successful.
But when I try repeating the same on the production server, on RHEL, I get a The system cannot infer the transport information error in the system log.
Things Tried
Created a pass-through proxy service pointing to https://some.domain.in/something/something.
Tried CURL to https://some.domain.in/something/something and it shows the response properly.
Imported the certificate from the site into client-truststore.jks. The same was done locally and it worked.
In axis2.xml, edited <parameter name="HostnameVerifier">AllowAll</parameter> under the https transport (shown in context below).
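For context, that parameter sits inside the https transportSender definition in axis2.xml, roughly like this (a sketch; the class name and the other parameters may differ between WSO2 versions, and the '...' stands for the rest of the existing configuration):
<transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
    ...
    <parameter name="HostnameVerifier">AllowAll</parameter>
</transportSender>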
Error Message
When I clicked Test in the configuration console, I got the following message: Invalid address.
CURLed the proxy service URL and got an empty response.
Checked the system logs and saw the logs below.
Am I missing out something?
I could see the following messages in the wso2 error logs:
ERROR {org.apache.synapse.transport.passthru.TargetHandler} - I/O
error: handshake alert: unrecognized_name
javax.net.ssl.SSLProtocolException: handshake alert: unrecognized_name
Then I realised that I was using Java 1.6 locally but 1.7 in production.
And in Java 1.7 there are some changes in SSL handling:
The JDK 7 release supports the Server Name Indication (SNI) extension in the JSSE client. SNI, described in RFC 4366, enables TLS clients to connect to virtual servers.
In order to bypass this, I added JAVA_OPTS="-Djsse.enableSNIExtension=false" in wso2server.sh and restarted.
This solved my problem.
Not sure if this is the correct way, though.
This URL helped me in the end.
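For reference, the change amounts to a line like this in wso2server.sh (a sketch; here it appends to any existing JAVA_OPTS rather than overwriting it, and it must run before the script launches the JVM):
JAVA_OPTS="$JAVA_OPTS -Djsse.enableSNIExtension=false"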

Is there a way to validate the broker's SSL certificate in django-celery?

I'm using django-celery to connect to a RabbitMQ broker through SSL (with the BROKER_USE_SSL setting). Is there a way to:
Verify the certificate of the broker when the connection is established.
Configure a client certificate to use to establish the connection.
The RabbitMQ side is working correctly, but I don't know how to configure Celery for this and I haven't found anything in Celery's documentation either. The settings CELERY_SECURITY_KEY, CELERY_SECURITY_CERTIFICATE and CELERY_SECURITY_CERT_STORE look like they could do this, but it seems that they're only used for message signing.
kombu.Connection accepts an ssl argument as a dictionary of SSL configuration options (ssl=False by default). I suppose the same applies to BROKER_USE_SSL too.
import ssl

BROKER_USE_SSL = {
    # CA bundle used to verify the broker's certificate
    'ca_certs': '/etc/pki/tls/certs/something.crt',
    # client key and certificate presented to the broker
    'keyfile': '/etc/something/system.key',
    'certfile': '/etc/something/system.cert',
    # fail the connection if the broker's certificate cannot be verified
    'cert_reqs': ssl.CERT_REQUIRED,
}