RabbitMQ Web-MQTT WSS closes client connection. Insecure WS and other secure protocols work

I have a deployment of RabbitMQ that uses its own certificates for end-to-end encryption. It serves both AMQP and MQTT-over-WSS to connect multiple types of clients. AMQP clients are able to connect securely, so I know the certificate setup is good.
Clients using plain WS at ws://hostname:15675/ws can connect fine, but are obviously not secure. Clients attempting to connect to wss://hostname:15676/ws have the connection closed on them. 15676 is the port I have bound the web-mqtt SSL listener to, as shown below. I've gone through both the networking and TLS troubleshooting guides from RabbitMQ, and I see the port correctly bound and can confirm it is exposed and reachable by the client.
The relevant rabbitmq.conf:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.cacertfile = /path/to/fullchain.pem
ssl_options.certfile = /path/to/cert.pem
ssl_options.keyfile = /path/to/privkey.pem
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = /path/to/fullchain.pem
web_mqtt.ssl.certfile = /path/to/cert.pem
web_mqtt.ssl.keyfile = /path/to/privkey.pem
Basically, I'm wondering: do I have the connection string wrong (wss://hostname:15676/ws)? Do I need to use /wss instead? Is it a problem that my client is a browser served from localhost over plain HTTP, not HTTPS? Do I have a configuration value set incorrectly, or am I missing one?
If there is a better source of documentation/examples of this plugin beyond the RabbitMQ website, I would also be interested.
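For reproduction outside the browser, here is a minimal sketch of the same connection using the Python paho-mqtt client (1.x API; the hostname and credentials are placeholders for my real values):
import ssl
import paho.mqtt.client as mqtt

# MQTT over WebSocket; paho-mqtt 1.x API (2.x additionally takes a CallbackAPIVersion)
client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/ws")         # same /ws path for both WS and WSS
client.tls_set(cert_reqs=ssl.CERT_NONE)   # mirrors ssl_options.verify = verify_none
client.tls_insecure_set(True)             # skip hostname checks; for testing only
client.username_pw_set("guest", "guest")  # placeholder credentials
client.connect("hostname", 15676)         # the web_mqtt.ssl.port from above
client.loop_forever()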

Maybe there is a configuration mismatch. If there is a password on the private key file, you need to add it as well.
Refer to the following sample rabbitmq.conf:
listeners.ssl.default = 5671
ssl_options.cacertfile = <path/ca-bundle (.pem/.cabundle)>
ssl_options.certfile = <path/cert (.pem/.crt)>
ssl_options.keyfile = <path/key (.pem/.key)>
ssl_options.password = <your private key password>
ssl_options.versions.1 = tlsv1.3
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.ciphers.1 = TLS_AES_256_GCM_SHA384
ssl_options.ciphers.2 = TLS_AES_128_GCM_SHA256
ssl_options.ciphers.3 = TLS_CHACHA20_POLY1305_SHA256
ssl_options.ciphers.4 = TLS_AES_128_CCM_SHA256
ssl_options.ciphers.5 = TLS_AES_128_CCM_8_SHA256
ssl_options.honor_cipher_order = true
ssl_options.honor_ecc_order = true
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = <path/ca-bundle (.pem/.cabundle)>
web_mqtt.ssl.certfile = <path/crt (.pem/.crt)>
web_mqtt.ssl.keyfile = <path/key (.pem/.key)>
web_mqtt.ssl.password = <your private key password>
web_mqtt.ssl.honor_cipher_order = true
web_mqtt.ssl.honor_ecc_order = true
web_mqtt.ssl.client_renegotiation = false
web_mqtt.ssl.secure_renegotiate = true
web_mqtt.ssl.versions.1 = tlsv1.2
web_mqtt.ssl.versions.2 = tlsv1.1
web_mqtt.ssl.ciphers.1 = ECDHE-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.2 = ECDHE-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.3 = ECDHE-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.4 = ECDHE-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.5 = ECDH-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.6 = ECDH-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.7 = ECDH-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.8 = ECDH-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.9 = DHE-RSA-AES256-GCM-SHA384
This is a working configuration file for rabbitmq-server on Ubuntu 20.04. After applying it:
restart rabbitmq-server
list the listening ports and make sure the SSL ports are enabled (rabbitmq-diagnostics listeners)
test the TLS handshake (testssl localhost:15676)
also test basic connectivity (telnet localhost 15676)
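You can also inspect the certificate chain the TLS listener actually serves with openssl (assuming it is installed; replace <hostname> with the name on the certificate):
openssl s_client -connect localhost:15676 -servername <hostname>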
Please refer to https://www.rabbitmq.com/ssl.html#erlang-otp-requirements and the troubleshooting guide there.
This worked for me :-)

Related

Can not send tasks to Rabbitmq in Fly.io

The problem I have is that RabbitMQ and Celery are running on Fly (versions and configs are below). Both of them deploy normally and without any problems; however, when I send a task to RabbitMQ on Fly using the public dedicated IPv4 address, I get the following error: "Server has closed the connection unexpectedly".
Versions and configurations:
OS: Ubuntu 20.04.4 LTS
Rabbitmq version: 3.8.2
Celery version: 5.2.7
The fly.toml file for Rabbitmq:
app = "rabbitmqserver"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
[experimental]
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["tls"]
port = 5672
Can you provide a suitable configuration for RabbitMQ so that I can send tasks to it using its IPv4 address?
I tried multiple other configurations for RabbitMQ on Fly and they also did not work. Furthermore, I made sure that all the needed ports are exposed and that the machine is actually alive (checked using the ping command).
Tried configuration:
app = "rabbitmq-app"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
RABBITMQ_MNESIA_DIR = "/var/lib/rabbitmq/mnesia/data"
[experimental]
allowed_public_ports = []
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
# rabbitmq admin
[[services]]
http_checks = []
internal_port = 15672
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["http", "tls"]
port = "15672"
[[services.tcp_checks]]
grace_period = "1s"
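For reference, the task submission that triggers the error looks roughly like this minimal sketch (the broker credentials, the PUBLIC-IPV4 placeholder, and the task itself stand in for the real project code):
from celery import Celery

# Celery producer pointing at the Fly app's public IPv4 address
app = Celery("tasks", broker="amqp://guest:guest@PUBLIC-IPV4:5672//")

@app.task
def add(x, y):
    return x + y

if __name__ == "__main__":
    # This call is where "Server has closed the connection unexpectedly" appears
    add.delay(2, 3)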

InfluxDB refuses connection from telegraf when changing from HTTP to HTTPS

On my CentOS 7 server, I have set up Telegraf and InfluxDB. InfluxDB successfully receives data from Telegraf and stores it in the database. But when I reconfigure both services to use HTTPS, I see the following error in Telegraf's logs:
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [outputs.influxdb] When writing to [https://127.0.0.1:8086]: Post "https://127.0.0.1:8086/write?db=GRAFANA": dial tcp 127.0.0.1:8086: connect: connection refused
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [agent] Error writing to outputs.influxdb: could not write any address
InfluxDB doesn't show any errors in its logs.
Below is my telegraf.conf file:
[agent]
hostname = "local"
flush_interval = "15s"
interval = "15s"
# Input Plugins
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
database = "GRAFANA"
urls = [ "https://127.0.0.1:8086" ]
insecure_skip_verify = true
username = "telegrafuser"
password = "metricsmetricsmetricsmetrics"
And this is the uncommented [http] section of influxdb.conf:
# Determines whether HTTP endpoint is enabled.
enabled = false
# Determines whether the Flux query endpoint is enabled.
flux-enabled = true
# The bind address used by the HTTP service.
bind-address = ":8086"
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false
# Determines whether HTTPS is enabled.
https-enabled = true
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/server-cert.pem"
# Use a separate private key location.
https-private-key = "/etc/ssl/server-key.pem"
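For reference, a quick way to check what the endpoint is actually serving (the -k flag skips certificate verification, matching insecure_skip_verify above):
curl -kv https://127.0.0.1:8086/ping
A healthy InfluxDB 1.x instance answers /ping with HTTP 204.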

DBVisualizer not able to connect to Kerberised Hive

We have an HDP (3.1.0) cluster with Hive (3.0.0.3.1). The cluster is Kerberised.
I am trying to connect to Hive with DbVisualizer, without success. The client (where I am using DbVisualizer from) is a CentOS 7 machine.
Kerberos related
On the client, here is /etc/krb5.conf (copied from one of the cluster's machines):
cat krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = COMPANY.LOC
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[domain_realm]
COMPANY.LOC = COMPANY.LOC
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
COMPANY.LOC = {
admin_server = server.company.loc
kdc = server.company.loc
}
I used kinit and here is the result of klist:
[florianc@localhost etc]$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: castelainf@COMPANY.LOC
Valid starting Expires Service principal
07/24/2020 09:12:03 07/24/2020 19:12:03 krbtgt/COMPANY.LOC@COMPANY.LOC
renew until 07/31/2020 09:11:59
DbVisualizer
Version: 11.0.4 (free)
Tools>Tool Properties>Specify overridden Java VM Properties here:
-Dsun.security.krb5.debug=true
-Djavax.security.auth.useSubjectCredsOnly=false
-Djava.security.krb5.conf="/etc/krb5.conf"
The JAR used for the driver is the one provided by the cluster, from Ambari > Hive > JDBC Standalone jar.
The database URL of the connection is:
jdbc:hive2://server1.company.loc:2181,server2.company.loc:2181,server3.company.loc:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2;principal=hive/_HOST@COMPANY.LOC
The error returned when trying to connect is the following:
Could not open client transport for any of the Server URI's in ZooKeeper: Can't get Kerberos realm
Edit 1
Using these URIs:
jdbc:hive2://server1.company.loc:2181/;principal=hive/_HOST@COMPANY.LOC
jdbc:hive2://server1.company.loc:2181/;principal=hive/server1@COMPANY.LOC
jdbc:hive2://server1.company.loc:2181/;principal=hive/server1.company.loc@COMPANY.LOC
Always return:
Could not open client transport with JDBC Uri <URI>: Can't get Kerberos realm
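For reference, one OS-level check that exercises the KDC and the hive service principal independently of Java (assuming the MIT Kerberos client tools are installed and the TGT from kinit is still valid):
kvno hive/server1.company.loc@COMPANY.LOC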

Rabbit MQ declarative clustering

I have a RabbitMQ node running on a Windows 2012 server (rabbit@my-server-1).
I am creating a second node (rabbit@my-server-2) on a separate server (also Windows 2012) and would like to cluster it with the existing node. The deployment of the second node is via Octopus Deploy, and to make life easier I would like the clustering to happen automatically on startup of the node.
Reading the documentation (https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/configure.html) leads me to believe I just need to add the following to the rabbitmq.conf file:
cluster_nodes.disc.1 = rabbit@my-server-1
However, doing so causes the node to not start. The erl.exe process starts using 100% CPU and I see the following message in the erl_crash.dump file:
Slogan: init terminating in do_boot (generate_config_file)
I believe this is symptomatic of an invalid config file, and indeed removing these config entries lets me start the node fine.
I am able to cluster to the existing node manually via the relevant rabbitmqctl commands, but would prefer the declarative solution if possible.
I'm running RabbitMQ v3.7.4 and Erlang v20.3.
So, what am I doing wrong? I've done some googling but haven't found anything that helps.
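For reference, the manual clustering that does work is the standard sequence, run on the new node:
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@my-server-1
rabbitmqctl start_app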
EDIT
Config file in full is:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_nodes.disc.1 = rabbit@my-server-1
How about adding the clustering information as described in the docs under "Config File Peer Discovery Backend"?
That would leave you with a config file like this:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@my-server-1
cluster_formation.classic_config.nodes.2 = rabbit@my-server-2
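After restarting both nodes, you can verify the membership with:
rabbitmqctl cluster_status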

Different smarthosts for different domains with different credentials

Hello, I have two (and maybe more later) domains:
domain1
domain2
I want to configure Exim (cPanel) to use SendGrid's or Mailgun's SMTP servers. Currently I'm trying with this config in Exim:
**Section: TRANSPORTSTART**
domain1_smtp:
driver = smtp
hosts = smtp.mailgun.org
hosts_require_auth = smtp.mailgun.org
hosts_require_tls = smtp.mailgun.org
domain2_smtp:
driver = smtp
hosts = smtp.mailgun.org
hosts_require_auth = smtp.mailgun.org
hosts_require_tls = smtp.mailgun.org
**Section: AUTH**
domain1_login:
driver = plaintext
public_name = LOGIN
client_send = : postmaster@mg.domain1.com : password
domain2_login:
driver = plaintext
public_name = LOGIN1
client_send = : postmaster@mg.domain2.com : password
**Section: PREROUTER**
send_via_domain1:
driver = manualroute
domains = ! +local_domains
senders = *@domain1.com
transport = domain1_smtp
route_list = "* smtp.mailgun.org::2525 byname"
host_find_failed = defer
send_via_domain2:
driver = manualroute
domains = ! +local_domains
senders = *@domain2.com
transport = domain2_smtp
route_list = "* smtp.mailgun.org::2525 byname"
host_find_failed = defer
When I send email from user@domain1.com, messages are delivered via postmaster@mg.domain1.com, but when I send from user@domain2.com, messages are also delivered via postmaster@mg.domain1.com.
I want a smarthost for each domain, each with its own credentials. Thanks.
I have the same setup (VPS + WHM/cPanel + Exim + Mailgun), and after doing some online research I found a few helpful websites on this topic and managed to come up with the correct configuration. Below is the solution I'm currently using on my VPS; I hope it helps you as well. It should solve your "via" problem, and it might also solve the intermittent "550 5.7.1 Relaying denied" error from Mailgun:
Go to the "Exim Configuration Editor" in WHM. Choose "Advanced Editor" and insert the configuration below:
Section: AUTH
mailgun_login:
driver = plaintext
public_name = LOGIN
hide client_send = ": ${extract{login}{${lookup{$sender_address_domain}lsearch{/etc/exim_mailgun}{$value}fail}}} : ${extract{password}{${lookup{$sender_address_domain}lsearch{/etc/exim_mailgun}{$value}fail}}}"
Section: ROUTERSTART
mailgun:
driver = manualroute
domains = ! +local_domains
transport = mailgun_transport
route_list = "* smtp.mailgun.org::587 byname"
host_find_failed = defer
no_more
Section: TRANSPORTSTART
mailgun_transport:
driver = smtp
hosts = smtp.mailgun.org
hosts_require_auth = smtp.mailgun.org
hosts_require_tls = smtp.mailgun.org
Then create a file named /etc/exim_mailgun and insert content similar to the structure below (replace with the verified login credentials for your Mailgun domains):
domain1.com: login=postmaster@mg.domain1.com password=abcdefghi
domain2.com: login=postmaster@mg.domain2.com password=jklmnopqr
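Before sending any mail, you can verify the lookup from the shell; exim -be only expands a string, so with the file above this should print the login stored for domain1.com:
exim -be '${extract{login}{${lookup{domain1.com}lsearch{/etc/exim_mailgun}{$value}fail}}}'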