RabbitMQ MQTT SSL connection fails

I am trying to set up a RabbitMQ server with MQTT and AMQP connections.
I have opened an MQTT TCP listener on port 1883 and an MQTT SSL listener on port 8883. Both the TLS and SSL listeners start successfully, as shown in the log below. I am using MQTTBox as the client and can connect to port 1883 over TCP without problems, but I am unable to connect to port 8883 over TLS/SSL.
Here is my config file.
[
 {rabbit,
  [
   {tcp_listeners, [{"127.0.0.1", 5672}, {"::1", 5672}]},
   {default_vhost, <<"/">>},
   {default_user, <<"user">>},
   {default_pass, <<"bitnami">>},
   {default_permissions, [<<".*">>, <<".*">>, <<".*">>]},
   {ssl_options, [{cacertfile, "/opt/bitnami/rabbitmq/tls/result/ca_certificate.pem"},
                  {certfile, "/opt/bitnami/rabbitmq/tls/result/server_certificate.pem"},
                  {keyfile, "/opt/bitnami/rabbitmq/tls/result/server_key.pem"},
                  %% {password, ""},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, true}]}
   %% {ssl_listeners, [5671]}
  ]},
 {kernel, []},
 {rabbitmq_management,
  [
   {listener, [{port, 15672}, {ip, "0.0.0.0"}]}
  ]},
 {rabbitmq_shovel,
  [
   {shovels, []}
  ]},
 {rabbitmq_stomp, []},
 {rabbitmq_mqtt, [{ssl_cert_login, true}, {allow_anonymous, false},
                  {ssl_listeners, [8883]}, {tcp_listeners, [1883]}]},
 {rabbitmq_amqp1_0, []},
 {rabbitmq_auth_backend_ldap, []},
 {rabbit, [{vm_memory_high_watermark, 0.6}]}
].
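(For reference, on RabbitMQ 3.7+ the same setup can be expressed in the newer rabbitmq.conf format. This is only a sketch of the equivalent keys, with the same certificate paths assumed; note that the MQTT SSL listener reuses the broker-wide ssl_options:)

mqtt.listeners.tcp.default = 1883
mqtt.listeners.ssl.default = 8883
mqtt.ssl_cert_login        = true
mqtt.allow_anonymous       = false

# TLS options shared by the AMQP and MQTT TLS listeners
ssl_options.cacertfile           = /opt/bitnami/rabbitmq/tls/result/ca_certificate.pem
ssl_options.certfile             = /opt/bitnami/rabbitmq/tls/result/server_certificate.pem
ssl_options.keyfile              = /opt/bitnami/rabbitmq/tls/result/server_key.pem
ssl_options.verify               = verify_peer
ssl_options.fail_if_no_peer_cert = true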
And my log file.
started MQTT TCP Listener on [::]:1883
started MQTT SSL Listener on [::]:8883
started TCP Listener on [::]:5672
started SSL Listener on [::]:5671
<0.13639.4> MQTT vhost picked using plugin configuration or default
TCP connection successful
<0.13639.4> accepting MQTT connection <0.13639.4> (123.231.123.82:54601 -> 10.128.0.5:1883)
TLS connection failed
<0.13639.4> MQTT detected network error for "123.231.123.82:54601 -> 10.128.0.5:1883": peer closed TCP connection
It seems both the TCP and TLS requests are headed to 10.128.0.5:1883.
How can I fix this?
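(One way to test the TLS handshake independently of MQTTBox is openssl s_client. With verify_peer and fail_if_no_peer_cert set to true, the server will abort the handshake for any client that does not present a certificate signed by the configured CA, which matches the "peer closed TCP connection" symptom. A diagnostic sketch; client_certificate.pem and client_key.pem are hypothetical client credentials issued by the same CA:)

# Without a client certificate - expected to fail given fail_if_no_peer_cert = true
openssl s_client -connect 10.128.0.5:8883 -CAfile ca_certificate.pem

# Presenting a hypothetical client certificate and key issued by the same CA
openssl s_client -connect 10.128.0.5:8883 -CAfile ca_certificate.pem \
    -cert client_certificate.pem -key client_key.pem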
Edit: client configurations (screenshot not reproduced here).

Related

Insufficient Security with RabbitMQ 3.7.15 and Erlang 22.0.1 / 22.0.2 on centOS 7.6

Observing an "Insufficient Security" error after upgrading the RabbitMQ server to 3.7.15 with Erlang 22.0.1 / 22.0.2 on CentOS 7.6.
Initial state of the system, where SSL was working:
CentOS Linux release - 7.5
RMQ - 3.7.7-1.el7
Erlang - 20.3.8.2-1.el7.x86_64
SSL kept working when CentOS was upgraded to 7.6 and RMQ to 3.7.15 (checked after an RMQ restart).
However, when Erlang was upgraded to erlang-22.0.2-1.el7.x86_64.rpm, SSL stopped working (after an RMQ restart).
RabbitMQ config:
[
 {rabbitmq_management,
  [{listener, [{port, 15671},
               {ssl, true},
               {ssl_opts, [{cacertfile, "<path>/cacert.pem"},
                           {certfile, "<path>/cert.pem"},
                           {keyfile, "<path>/key.pem"}]}
  ]}
 ]},
 {rabbit, [
   {log_levels, [{connection,info}]},
   {tcp_listeners, []},
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile, "<path>/all_cacerts.pem"},
                  {certfile, "<path>/cert.pem"},
                  {keyfile, "<path>/key.pem"},
                  {depth, 5},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, false}]},
   {auth_mechanisms, ['PLAIN','AMQPLAIN','EXTERNAL']},
   {loopback_users, []},
   {ssl_cert_login_from, common_name}
 ]}
].
RabbitMQ enabled plugins:
[rabbitmq_auth_mechanism_ssl,rabbitmq_management,rabbitmq_shovel,rabbitmq_shovel_management].
Please help.
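(Background: Erlang/OTP 22 tightened its default TLS version and cipher configuration, so a handshake that succeeded on OTP 20 can fail with an "insufficient security" alert on OTP 22. One way to inspect what the runtime actually offers is to evaluate the Erlang ssl module from rabbitmqctl; a diagnostic sketch, output varies by OTP version:)

# TLS protocol versions supported by the Erlang runtime
rabbitmqctl eval 'ssl:versions().'

# All cipher suites the runtime can offer for TLS 1.2 (ssl:cipher_suites/2, OTP 20.3+)
rabbitmqctl eval "ssl:cipher_suites(all, 'tlsv1.2')."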
Edit 1:
I updated rabbitmq.config in the following manner, and cert-based auth is working now.
[
 {rabbitmq_management,
  [{listener, [{port, 15671},
               {ssl, true},
               {ssl_opts, [{cacertfile, "<path>/cacert.pem"},
                           {certfile, "<path>/cert.pem"},
                           {keyfile, "<path>/key.pem"},
                           {versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']},
                           {ciphers, [{ecdhe_ecdsa,aes_256_gcm,aead,sha384}, {...}]}]}
  ]}
 ]},
 {ssl, [{versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']}]},
 {rabbit, [
   {log_levels, [{connection,info}]},
   {tcp_listeners, [5672]},
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile, "<path>/all_cacerts.pem"},
                  {certfile, "<path>/cert.pem"},
                  {keyfile, "<path>/key.pem"},
                  {versions, ['tlsv1.3', 'tlsv1.2', 'tlsv1.1', 'tlsv1', 'sslv3']},
                  {ciphers, [{ecdhe_ecdsa,aes_256_gcm,aead,sha384}, {...}]},
                  {depth, 5},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, false}]},
   {auth_mechanisms, ['PLAIN','AMQPLAIN','EXTERNAL']},
   {loopback_users, []},
   {ssl_cert_login_from, common_name}
 ]}
].
However, shovels using amqps on port 5671 still error out:
[error] <0.7391.6> Shovel 'ShovelTest' failed to connect (URI: amqps://<ip>:5671/<blah>): {tls_alert,{insufficient_security,"received SERVER ALERT: Fatal - Insufficient Security"}}
Shovels work fine with amqp on port 5672, though.
Please help.
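(One option worth noting: shovel amqps URIs can carry TLS settings as URI query parameters, so the shovel client can be pointed at its own CA bundle, certificate, and key explicitly. A sketch with placeholder paths:)

amqps://<ip>:5671/<vhost>?cacertfile=<path>/all_cacerts.pem&certfile=<path>/cert.pem&keyfile=<path>/key.pem&verify=verify_peer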

Kafka Connect failing to read from Kafka topics over SSL

Running Kafka Connect in our Docker Swarm, with the following compose file:
cp-kafka-connect-node:
  image: confluentinc/cp-kafka-connect:5.1.0
  ports:
    - 28085:28085
  secrets:
    - kafka.truststore.jks
    - source: kafka-connect-aws-credentials
      target: /root/.aws/credentials
  environment:
    CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
    CONNECT_LOG4J_ROOT_LEVEL: TRACE
    CONNECT_REST_PORT: 28085
    CONNECT_GROUP_ID: cp-kafka-connect
    CONNECT_CONFIG_STORAGE_TOPIC: dev_cp-kafka-connect-config
    CONNECT_OFFSET_STORAGE_TOPIC: dev_cp-kafka-connect-offsets
    CONNECT_STATUS_STORAGE_TOPIC: dev_cp-kafka-connect-status
    CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 3
    CONNECT_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_VALUE_CONVERTER_SCHEMAS_ENABLE: 'false'
    CONNECT_INTERNAL_KEY_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_INTERNAL_VALUE_CONVERTER: org.apache.kafka.connect.json.JsonConverter
    CONNECT_REST_ADVERTISED_HOST_NAME: localhost
    CONNECT_PLUGIN_PATH: /usr/share/java/
    CONNECT_SECURITY_PROTOCOL: SSL
    CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
    CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
    KAFKA_HEAP_OPTS: '-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=2'
  deploy:
    replicas: 1
    resources:
      limits:
        cpus: '0.50'
        memory: 4gb
    restart_policy:
      condition: on-failure
      delay: 10s
      max_attempts: 3
      window: 2000s

secrets:
  kafka.truststore.jks:
    external: true
  kafka-connect-aws-credentials:
    external: true
The Kafka Connect node starts up successfully, and I am able to set up tasks and view the status of those tasks...
I created a connector called kafka-sink with the following config:
"config": {
"connector.class": "io.confluent.connect.s3.S3SinkConnector",
"s3.region": "eu-central-1",
"flush.size": "1",
"schema.compatibility": "NONE",
"tasks.max": "1",
"topics": "input-topic-name",
"s3.part.size": "5242880",
"timezone": "UTC",
"directory.delim": "/",
"locale": "UK",
"s3.compression.type": "gzip",
"format.class": "io.confluent.connect.s3.format.bytearray.ByteArrayFormat",
"partitioner.class": "io.confluent.connect.storage.partitioner.DefaultPartitioner",
"schema.generator.class": "io.confluent.connect.storage.hive.schema.DefaultSchemaGenerator",
"name": "kafka-sink",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"storage.class": "io.confluent.connect.s3.storage.S3Storage",
"s3.bucket.name": "my-s3-bucket",
"rotate.schedule.interval.ms": "60000"
}
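(For reference, a connector config like this is normally submitted to the Connect REST API; a sketch using the REST port from the compose file above, with the config body elided:)

curl -X POST http://localhost:28085/connectors \
  -H "Content-Type: application/json" \
  -d '{"name": "kafka-sink", "config": { ... }}'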
This task now says that it is running.
When I did not include the SSL config, specifically:
CONNECT_BOOTSTRAP_SERVERS: kafka01:9093,kafka02:9093,kafka03:9093
CONNECT_SECURITY_PROTOCOL: SSL
CONNECT_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_SSL_TRUSTSTORE_PASSWORD: ********
and instead pointed to a bootstrap server that was exposed with no security:
CONNECT_BOOTSTRAP_SERVERS: insecurekafka:9092
It worked fine, reading from the appropriate input topic and writing to the S3 bucket with default partitioning...
However, when I run it using the SSL config against my secure kafka topic, it logs no errors, throws no exceptions, but does nothing at all despite data continuously being pushed to the input topic...
Am I doing something wrong?
This is my first time using Kafka Connect; normally I connect to Kafka from Spring Boot apps, where you just have to specify the truststore location and password in the config.
Am I missing some configuration in either my compose file or my task config?
I think you need to add SSL config for both consumer and producer. Check here: Kafka Connect Encrypt with SSL.
Something like this
security.protocol=SSL
ssl.truststore.location=~/kafka.truststore.jks
ssl.truststore.password=<password>
ssl.keystore.location=~/kafka.client.keystore.jks
ssl.keystore.password=<password>
ssl.key.password=<password>
producer.security.protocol=SSL
producer.ssl.truststore.location=~/kafka.truststore.jks
producer.ssl.truststore.password=<password>
producer.ssl.keystore.location=~/kafka.client.keystore.jks
producer.ssl.keystore.password=<password>
producer.ssl.key.password=<password>
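(Since this is a sink connector, it is the Connect consumer that reads from the secured topics. In the cp-kafka-connect image, worker properties are set through CONNECT_* environment variables, so the consumer override would look roughly like this in the compose file; the CONNECT_CONSUMER_* names are simply the env-var form of the consumer.* properties above:)

CONNECT_CONSUMER_SECURITY_PROTOCOL: SSL
CONNECT_CONSUMER_SSL_TRUSTSTORE_LOCATION: /run/secrets/kafka.truststore.jks
CONNECT_CONSUMER_SSL_TRUSTSTORE_PASSWORD: ********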

Logstash - RabbitMQ connection Timeout error

I have installed Logstash on 2 nodes to send logs to RabbitMQ. SSL is configured on RabbitMQ, listening on port 5671. I have configured both Logstash instances to push logs to the RabbitMQ server on port 5671.
This is my configuration.
input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}
filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:system_auth_timestamp} %{SYSLOGHOST:system_auth_hostname} %{GREEDYDATA:command_issued}: %{GREEDYDATA:message}" }
    add_tag => "syslog"
  }
}
output {
  rabbitmq {
    exchange => "elasticsearch-exchange"
    exchange_type => "direct"
    key => "logstash-routing_key"
    ssl => true
    #verify_ssl => true
    ssl_certificate_password => 'Password'
    ssl_certificate_path => 'certfile'
    ssl_version => "TLSv1.2"
    host => "10.2.0.0"
    vhost => "es_vhost"
    durable => true
    persistent => true
    port => 5671
    user => "admin"
    password => "password"
    heartbeat => "5"
  }
  stdout {
    codec => rubydebug
  }
}
This is the error I am getting in the logstash log.
{:timestamp=>"2017-12-26T07:22:32.708000+0000", :message=>"Pipeline aborted due to error", :exception=>java.util.concurrent.TimeoutException, :backtrace=>["com.rabbitmq.utility.BlockingCell.get(com/rabbitmq/utility/BlockingCell.java:77)", "com.rabbitmq.utility.BlockingCell.uninterruptibleGet(com/rabbitmq/utility/BlockingCell.java:111)", "com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(com/rabbitmq/utility/BlockingValueOrException.java:37)", "com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(com/rabbitmq/client/impl/AMQChannel.java:367)", "com.rabbitmq.client.impl.AMQConnection.start(com/rabbitmq/client/impl/AMQConnection.java:293)", "com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:648)", "com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:678)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare/session.rb:505)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)", "RUBY.converting_rjc_exceptions_to_ruby(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare/session.rb:467)", "RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare/session.rb:500)", "RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare/session.rb:136)", "RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare/session.rb:109)", "RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.15.0-java/lib/march_hare.rb:20)", "RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-4.1.1-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:174)", "RUBY.connect!(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-4.1.1-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:131)", "RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-rabbitmq-3.1.0-java/lib/logstash/outputs/rabbitmq.rb:40)", "RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/output_delegator.rb:75)", "RUBY.start_workers(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)", "RUBY.start_workers(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:181)", "RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/pipeline.rb:136)", "RUBY.start_pipeline(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.4-java/lib/logstash/agent.rb:473)", "java.lang.Thread.run(java/lang/Thread.java:745)"], :level=>:error}
{:timestamp=>"2017-12-26T07:22:35.710000+0000", :message=>"stopping pipeline", :id=>"main"}
This is the error I am getting in RabbitMQ logs.
=INFO REPORT==== 27-Dec-2017::05:44:27 ===
accepting AMQP connection <0.1228.0> (10.2.0.0:42187 -> 10.24.168.17:5601)
=WARNING REPORT==== 27-Dec-2017::05:44:35 ===
closing AMQP connection <0.1228.0> (10.2.0.0:42187 -> 10.24.168.17:5601):
client unexpectedly closed TCP connection
This is the RabbitMQ config:
% This file managed by Puppet
% Template Path: rabbitmq/templates/rabbitmq.config
[
 {rabbit, [
   {cluster_nodes, {[rabbit@node01, rabbitmq@node02, rabbit@node03], disc}},
   {cluster_partition_handling, ignore},
   {tcp_listen_options,
    [binary,
     {packet, raw},
     {reuseaddr, true},
     {backlog, 128},
     {nodelay, true},
     {exit_on_close, false}]
   },
   {default_user, <<"admin">>},
   {default_pass, <<"passowrd">>},
   {handshake_timeout, 60000},
   {tcp_listeners, []},
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile, "/etc/rabbitmq/ssl_cert/testca/cacert.pem"},
                  {certfile, "/etc/rabbitmq/ssl_cert/server/cert.pem"},
                  {keyfile, "/etc/rabbitmq/ssl_cert/server/key.pem"},
                  {password, "Password"},
                  {verify, verify_peer},
                  {versions, ['tlsv1.2']},
                  {fail_if_no_peer_cert, false}]},
   {ssl_handshake_timeout, 5000},
   {log_levels, [{autocluster, debug}, {connection, info}]}
 ]},
 {kernel, [
 ]},
 {rabbitmq_management, [
   {listener, [
     {port, 15672}
   ]}
 ]}
].
% EOF
I have even changed the SSL listener port to 5601 just to make sure this is not a port conflict. I am hitting the wall every time here.
There was a mismatch in the hostname. I resolved it by providing an FQDN in the /etc/hosts file. SSL is working fine now.
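(A sketch of what such an /etc/hosts entry might look like; the IP and FQDN are placeholders, the point being that the hostname Logstash connects with must match the name in the server certificate:)

# /etc/hosts on the Logstash nodes (placeholder values)
<rabbitmq_server_ip>    rabbitmq01.example.com    rabbitmq01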

Rabbitmqctl command throws error

I am trying to create a 3-node RabbitMQ cluster. I have the first node up and running. When I issue the join_cluster command from node 2, it throws an error saying the node is down.
rabbitmqctl join_cluster rabbit@hostname02
I am getting the following error:
Status of node rabbit@hostname02 ...
Error: unable to connect to node rabbit@hostname02: nodedown
DIAGNOSTICS
===========
attempted to contact: [rabbit@hostname02]
rabbit@hostname02:
  * connected to epmd (port 4369) on hostname02
  * epmd reports: node 'rabbit' not running at all
                  no other nodes on hostname02
  * suggestion: start the node
current node details:
- node name: 'rabbitmq-cli-30@hostname02'
- home dir: /var/lib/rabbitmq
- cookie hash: bygafwoj/ISgb3yKej1pEg==
This is my config file.
[
 {rabbit, [
   {cluster_nodes, {[rabbit@hostname01, rabbitmq@hostname02, rabbit@hostname03], disc}},
   {cluster_partition_handling, ignore},
   {tcp_listen_options,
    [binary,
     {packet, raw},
     {reuseaddr, true},
     {backlog, 128},
     {nodelay, true},
     {exit_on_close, false}]
   },
   {default_user, <<"guest">>},
   {default_pass, <<"guest">>},
   {log_levels, [{autocluster, debug}, {connection, info}]}
 ]},
 {kernel, [
 ]},
 {rabbitmq_management, [
   {listener, [
     {port, 15672}
   ]}
 ]}
].
% EOF
I have updated the /etc/hosts file with the details of all 3 nodes on all 3 servers. I am not sure where I am getting this wrong.
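(For context, the usual join sequence is shown below. It assumes the Erlang cookie in /var/lib/rabbitmq/.erlang.cookie is identical on all nodes and that the rabbit application is actually running on the target node, which is what the "epmd reports: node 'rabbit' not running" diagnostic above points at. hostname01 is used as the join target here on the assumption that node 1 is the running node:)

# On node 2, with matching Erlang cookies and node 1 up
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@hostname01
rabbitmqctl start_app
rabbitmqctl cluster_status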

How to disable RabbitMQ default tcp listening port - 5672

I have configured the RabbitMQ rabbitmq.config file with a new port, i.e. 5671 with SSL.
Now I want to disable the default port, i.e. 5672.
The config file is as below:
[
 {rabbit, [
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cacert.pem"},
                  {certfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cert.pem"},
                  {keyfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/key.pem"},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, false},
                  {ciphers, [{dhe_rsa, aes_256_cbc, sha},
                             {dhe_dss, aes_256_cbc, sha},
                             {rsa, aes_256_cbc, sha}]}
   ]}
 ]}
].
Now it is working on both ports 5671 and 5672, but I need to disable port 5672.
Any comments or suggestions?
Thanks in advance.
To disable the standard RabbitMQ port 5672, add {tcp_listeners, []} to your rabbitmq.config:
[
 {rabbit, [
   {tcp_listeners, []},
   {ssl_listeners, [5671]},
   {ssl_options, [{cacertfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cacert.pem"},
                  {certfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/cert.pem"},
                  {keyfile, "/ay/app/xxx/softwares/rabbitmq_server-3.1.1/etc/ssl/key.pem"},
                  {verify, verify_peer},
                  {fail_if_no_peer_cert, false},
                  {ciphers, [{dhe_rsa, aes_256_cbc, sha},
                             {dhe_dss, aes_256_cbc, sha},
                             {rsa, aes_256_cbc, sha}]}
   ]}
 ]}
].
It works with RabbitMQ 3.1.5
Here's how to do it with the new configuration file format introduced in RabbitMQ 3.7:
Set up the SSL listener in rabbitmq.conf:
listeners.ssl.1 = 5671
ssl_options.cacertfile = /path/to/testca/cacert.pem
ssl_options.certfile = /path/to/server/cert.pem
ssl_options.keyfile = /path/to/server/key.pem
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
Disable the non-SSL listener in advanced.config:
[
 {rabbit,
  [{tcp_listeners, []}]}
].
It appears that to disable non-SSL listening with the new file format, you can do the following:
listeners.tcp = none
This has the same effect as the other 3.7 answer, but removes the need to do it in the advanced.config.
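(Either way, you can verify which listeners are actually open after a restart; rabbitmqctl status includes the active listeners in its output, and newer releases have a dedicated diagnostics command:)

# Listeners appear in the status output on all versions
rabbitmqctl status

# On newer releases (3.8+)
rabbitmq-diagnostics listeners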