RabbitMQ declarative clustering

I have a RabbitMQ node running on a Windows 2012 server (rabbit@my-server-1).
I am creating a second node (rabbit@my-server-2) on a separate server (also Windows 2012) and would like to cluster it with the existing node. The deployment of the second node is via Octopus Deploy, and to make life easier I would like the clustering to happen automatically on startup of the node.
Reading the documentation (https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/configure.html) leads me to believe I just need to add the following to the rabbitmq.conf file:
cluster_nodes.disc.1 = rabbit@my-server-1
However, doing so causes the node not to start. The erl.exe process starts using 100% CPU and I see the following message in the erl_crash.dump file:
Slogan: init terminating in do_boot (generate_config_file)
I believe this is symptomatic of an invalid config file, and indeed removing these config entries allows me to start the node fine.
I am able to cluster to the existing node manually via the relevant rabbitmqctl commands, but would prefer the declarative solution if possible.
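For reference, the manual clustering I do today is essentially the following, run on my-server-2 (a sketch, assuming the default node names):
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@my-server-1
rabbitmqctl start_app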
I'm running RabbitMQ v3.7.4 and Erlang v20.3
So, what am I doing wrong? I've done some googling but haven't found anything that helps.
EDIT
Config file in full is:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_nodes.disc.1 = rabbit@my-server-1

How about adding the clustering information as described in the docs under "Config File Peer Discovery Backend"?
This would leave you with a config file like this:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@my-server-1
cluster_formation.classic_config.nodes.2 = rabbit@my-server-2
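Once both nodes have started with this peer discovery config, membership can be verified with, for example:
rabbitmqctl cluster_status
Note that, as far as I know, classic config peer discovery only runs on a node's very first (blank) start, so a node that has already been started standalone may need to be reset before it will join.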

Related

Cannot send tasks to RabbitMQ on Fly.io

The problem that I have is that I have RabbitMQ and Celery running on Fly (versions and configs are below). Both of them deploy normally and without any problems; however, when I send a task to RabbitMQ on Fly using the public dedicated IPv4 address I get the following error: "Server has closed the connection unexpectedly".
Versions and configurations:
OS: Ubuntu 20.04.4 LTS
RabbitMQ version: 3.8.2
Celery version: 5.2.7
The fly.toml file for RabbitMQ:
app = "rabbitmqserver"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
[experimental]
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["tls"]
port = 5672
Can you provide a suitable configuration for RabbitMQ so that I can send tasks to it using its IPv4 address?
I tried multiple other configurations for RabbitMQ on Fly and they also did not work. Furthermore, I made sure that all the needed ports are exposed and that the machine is actually alive (checked using the ping command).
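For reference, the task is sent roughly like this (a minimal sketch; the broker URL, credentials, and the task itself are placeholders):
from celery import Celery

# point Celery at the broker's public address (credentials and host are placeholders)
app = Celery("tasks", broker="amqp://guest:guest@<public-ipv4>:5672//")

@app.task
def add(x, y):
    return x + y

# producer side: enqueue a task
add.delay(2, 3)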
Tried configuration:
app = "rabbitmq-app"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
RABBITMQ_MNESIA_DIR = "/var/lib/rabbitmq/mnesia/data"
[experimental]
allowed_public_ports = []
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
# rabbitmq admin
[[services]]
http_checks = []
internal_port = 15672
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["http", "tls"]
port = "15672"
[[services.tcp_checks]]
grace_period = "1s"

404 when executing docker push to gitlab-container-registry

I have installed gitlab-ce 13.2.0 on my server and the container registry was immediately available.
From another server (or my local machine) I can log in, but when pushing an image to the container registry I get a 404 error: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE html>\n<html>\n<head>...
In my gitlab.rb I have:
external_url 'https://git.xxxxxxxx.com'
nginx['enable'] = true
nginx['client_max_body_size'] = '250m'
nginx['redirect_http_to_https'] = true
nginx['ssl_certificate'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.crt"
nginx['ssl_certificate_key'] = "/etc/gitlab/trusted-certs/xxxxxxxx.com.key"
nginx['ssl_protocols'] = "TLSv1.1 TLSv1.2"
registry_external_url 'https://git.xxxxxxxx.com'
What is confusing is that the registry_external_url is the same as the external_url. There are also these lines in gitlab.rb:
### Settings used by GitLab application
# gitlab_rails['registry_enabled'] = true
# gitlab_rails['registry_host'] = "git.xxxxxxxx.com"
# gitlab_rails['registry_port'] = "5005"
# gitlab_rails['registry_path'] = "/var/opt/gitlab/gitlab-rails/shared/registry"
But when I uncomment these, I cannot log in.
What can be the problem here?
This is actually because you are using the HTTPS port without proxying the registry in nginx.
Fix these lines in gitlab.rb according to the following:
registry_nginx['enable'] = true
registry_nginx['listen_https'] = true
registry_nginx['redirect_http_to_https'] = true
registry_external_url 'https://registry.YOUR_DOMAIN.gtld'
You don't need to touch the nginx['ssl_*'] parameters when you are using Let's Encrypt, since Chef takes care of them.
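After editing gitlab.rb, the change has to be applied with a reconfigure (restarting the registry explicitly is usually not needed, but does no harm):
sudo gitlab-ctl reconfigure
sudo gitlab-ctl restart registry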
How is your image named? Your image name must exactly match not only the registry URL but the project path too.
You can't just build "myimage:latest" and push it. It must be named like git.xxxxxxxx.com/mygroup/myproject:latest. You can obtain the correct name from the $CI_REGISTRY_IMAGE predefined variable.
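For example (the group and project names here are placeholders):
docker login git.xxxxxxxx.com
docker build -t git.xxxxxxxx.com/mygroup/myproject:latest .
docker push git.xxxxxxxx.com/mygroup/myproject:latest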

RabbitMQ Web-MQTT WSS closes client connection. Insecure WS and other secure protocols work

I have a deployment of RabbitMQ that uses its own certificates for end-to-end encryption. It uses both AMQP and MQTT-over-WSS to connect multiple types of clients. AMQP clients are able to connect securely, so I know that the certificate setup is good.
Clients using WS going to ws://hostname:15675/ws can connect fine, but obviously are not secure. Clients attempting to connect to wss://hostname:15676/ws have the connection closed on them. 15676 is the port I have bound the web-mqtt SSL listener to, as shown below. I've gone through both the networking and TLS help guides from RabbitMQ, and I see the port correctly bound and can confirm it is exposed and available to the client.
The relevant rabbit.conf:
listeners.tcp.default = 5671
listeners.ssl.default = 5671
ssl_options.cacertfile = /path/to/fullchain.pem
ssl_options.certfile = /path/to/cert.pem
ssl_options.keyfile = /path/to/privkey.pem
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = /path/to/fullchain.pem
web_mqtt.ssl.certfile = /path/to/cert.pem
web_mqtt.ssl.keyfile = /path/to/privkey.pem
Basically, I'm wondering if I have the connection string wrong (wss://hostname:15675/ws). Do I need to go to /wss? Is it a problem that my client is a browser running on localhost, not HTTPS? Do I have a configuration option set incorrectly, or am I missing one?
If there is a better source of documentation/examples of this plugin beyond the RabbitMQ website, I would also be interested.
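To take the browser out of the picture, one quick way to exercise the WSS endpoint is a small Python script (a sketch using the paho-mqtt 1.x API; the hostname and the choice to skip certificate verification are assumptions for this test only):
import ssl
import paho.mqtt.client as mqtt

# MQTT over WebSocket, pointing at RabbitMQ's Web-MQTT endpoint
client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/ws")            # the path the Web-MQTT plugin serves
client.tls_set(cert_reqs=ssl.CERT_NONE)      # skip verification for this test only
client.tls_insecure_set(True)
client.on_connect = lambda c, u, flags, rc: print("connected, rc =", rc)
client.connect("hostname", 15676)            # the configured web_mqtt.ssl.port
client.loop_forever()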
Maybe there is a configuration mismatch.
If there is a password for the private key file, you need to add it as well.
Refer to the following sample rabbitmq.conf:
listeners.ssl.default = 5671
ssl_options.cacertfile = <path/ca-bundle (.pem/.cabundle)>
ssl_options.certfile = <path/cert (.pem/.crt)>
ssl_options.keyfile = <path/key (.pem/.key)>
ssl_options.password = <your private key password>
ssl_options.versions.1 = tlsv1.3
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.ciphers.1 = TLS_AES_256_GCM_SHA384
ssl_options.ciphers.2 = TLS_AES_128_GCM_SHA256
ssl_options.ciphers.3 = TLS_CHACHA20_POLY1305_SHA256
ssl_options.ciphers.4 = TLS_AES_128_CCM_SHA256
ssl_options.ciphers.5 = TLS_AES_128_CCM_8_SHA256
ssl_options.honor_cipher_order = true
ssl_options.honor_ecc_order = true
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = <path/ca-bundle (.pem/.cabundle)>
web_mqtt.ssl.certfile = <path/crt (.pem/.crt)>
web_mqtt.ssl.keyfile = <path/key (.pem/.key)>
web_mqtt.ssl.password = <your private key password>
web_mqtt.ssl.honor_cipher_order = true
web_mqtt.ssl.honor_ecc_order = true
web_mqtt.ssl.client_renegotiation = false
web_mqtt.ssl.secure_renegotiate = true
web_mqtt.ssl.versions.1 = tlsv1.2
web_mqtt.ssl.versions.2 = tlsv1.1
web_mqtt.ssl.ciphers.1 = ECDHE-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.2 = ECDHE-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.3 = ECDHE-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.4 = ECDHE-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.5 = ECDH-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.6 = ECDH-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.7 = ECDH-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.8 = ECDH-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.9 = DHE-RSA-AES256-GCM-SHA384
This is a working configuration file for rabbitmq-server on Ubuntu 20.04. Then:
restart the rabbitmq-server
list the listener ports and make sure the SSL ports are enabled (rabbitmq-diagnostics listeners)
test the SSL handshake (testssl localhost:15676)
also test with telnet (telnet localhost 15676)
Please refer to https://www.rabbitmq.com/ssl.html#erlang-otp-requirements and the troubleshooting guide.
This worked for me :-)

What's the use of the [runners.docker] section in config.toml for the docker+machine use case?

Reading the documentation on autoscaling, I can't figure out the role of the [runners.docker] section when using docker+machine as the executor:
[runners.docker]
image = "ruby:2.1" # The default image used for builds is 'ruby:2.1'
In the executors documentation it says:
docker+machine : like docker, but uses auto-scaled docker machines -
this requires the presence of [runners.docker] and [runners.machine]
I get that I have to define this [runners.docker] section to be able to use the [runners.machine] section, but what is the aim of this [runners.docker]?
I can't figure out how to configure it, as I don't understand why it is used.
Our gitlab-runner runs on a vSphere VM and is configured to scale using the docker+machine executor with MachineDriver set to vmwarevsphere. All works nicely, but I would like to understand the configuration file fully.
Here is our "censored with stars" config.toml file with the [runners.docker] section I can't understand (note that the person who wrote it has left the company, so I can't ask him):
[[runners]]
name = "gitlab-runner"
limit = 6
output_limit = 102400
url = "http://gitlab.**************.lan"
token = "*******************"
executor = "docker+machine"
[runners.docker]
tls_verify = false
image = "docker:latest"
dns = ["*.*.*.*"]
privileged = true
disable_cache = false
volumes = ["/etc/localtime:/etc/localtime:ro", "/var/run/docker.sock:/var/run/docker.sock", "/etc/docker/certs.d:/etc/docker/certs.d", "/cache:/cache", "/builds:/builds"]
cache_dir = "cache"
shm_size = 0
[runners.cache]
Type = "s3"
ServerAddress = "*.*.*.*"
AccessKey = "*****************"
SecretKey = "*****************"
BucketName = "runner"
Insecure = true
[runners.machine]
IdleCount = 4
MaxBuilds = 10
IdleTime = 3600
MachineDriver = "vmwarevsphere"
MachineName = "gitlab-runner-pool-1-%s"
MachineOptions = ["vmwarevsphere-username=************", "vmwarevsphere-password=*****************", "vmwarevsphere-vcenter=*.*.*.*", "vmwarevsphere-datastore=*********", "vmwarevsphere-memory-size=3096", "vmwarevsphere-disk-size=40960", "vmwarevsphere-cpu-count=3", "vmwarevsphere-network=*****************", "vmwarevsphere-datacenter=**************", "vmwarevsphere-hostsystem=*******************", "engine-storage-driver=overlay2", "engine-insecure-registry=**************", "engine-insecure-registry=*******************"]
OffPeakPeriods = ["* * 0-8,21-23 * * mon-fri *", "* * * * * sat,sun *"]
OffPeakTimezone = "Local"
OffPeakIdleCount = 1
OffPeakIdleTime = 600
The [runners.machine] section defines how to start and provision your runner machines; the [runners.docker] section then defines how to configure the runner on that machine.
Docker Machine on its own only does the following (as you can read here):
"Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands."
So this has nothing to do with the GitLab runner itself; you still need to configure the runner after that, and that's where the [runners.docker] section comes into play, because the runner needs to know what default image to use, what volumes to mount, and so on.
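A minimal sketch of how the two sections relate (the values are illustrative placeholders, not recommendations):
[[runners]]
  executor = "docker+machine"
  [runners.docker]
    # what runs on each auto-scaled machine: default job image, mounts, ...
    image = "ruby:2.1"
    volumes = ["/cache"]
  [runners.machine]
    # how the machines themselves are created and recycled
    IdleCount = 1
    IdleTime = 600
    MachineDriver = "vmwarevsphere"
    MachineName = "gitlab-runner-%s"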

How can I encrypt (using SSL) Akka Remoting messages?

I forked this simple server-client Akka project:
https://github.com/roclas/akka-irc
which is an IRC-like chat, and I'm trying to encrypt its messages.
In my master branch, if I start a server (sbt run, then select option 2) and then a client (sbt run, then select option 1),
and I write something in the client, the message is correctly sent to the server.
If I start Wireshark and listen for packets that meet these conditions:
tcp.port==1099 and tcp.len>200
I can read the messages in plain text.
How can I encrypt them using SSL?
You can see what I am trying to do by modifying the src/main/resources/application.conf file in the develop branch
What would I have to modify?
How should my src/main/resources/application.conf file look?
Thank you
You should enable SSL in your custom .conf file with:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
  }
  remote {
    enabled-transports = ["akka.remote.netty.ssl"]
    netty.ssl {
      enable-ssl = true
      security {
        key-store = "path-to-your-keystore"
        key-store-password = "your-keystore's-password"
        key-password = "your-key's-password"
        trust-store = "path-to-your-truststore"
        trust-store-password = "your-trust-store's-password"
        protocol = "TLSv1"
        random-number-generator = "AES128CounterSecureRNG"
        enabled-algorithms = ["TLS_RSA_WITH_AES_128_CBC_SHA"]
      }
    }
  }
}
And don't forget to change your actor path's prefix to:
akka.ssl.tcp://YourActorSystemName@ip:port/...
In addition to what J.Santos said, I had forgotten to create the two files referenced by:
trust-store = "path-to-your-truststore"
trust-store-password = "your-trust-store's-password"
which I changed to:
key-store = "src/main/resources/keystore"
trust-store = "src/main/resources/truststore"
in my ./src/main/resources/common.conf,
as J.Santos reminded me after looking at my project.
Thank you very much!!
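For reference, one way to generate such a keystore and truststore is with the JDK's keytool (the alias, passwords, and self-signed certificate below are placeholders, not the project's actual values):
# create a keystore containing a self-signed key pair
keytool -genkeypair -alias akka-node -keyalg RSA -keysize 2048 -validity 365 -dname "CN=localhost" -keystore src/main/resources/keystore -storepass changeme -keypass changeme
# export the certificate and import it into a truststore
keytool -exportcert -alias akka-node -keystore src/main/resources/keystore -storepass changeme -file node.cer
keytool -importcert -alias akka-node -file node.cer -noprompt -keystore src/main/resources/truststore -storepass changeme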