Cannot send tasks to RabbitMQ on Fly.io

The problem I have is that RabbitMQ and Celery are running on Fly.io (versions and configs are below). Both deploy normally and without any problems; however, when I send a task to RabbitMQ on Fly using the dedicated public IPv4 address, I get the following error: "Server has closed the connection unexpectedly".
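The task send described above would typically look something like the minimal Celery sketch below; the broker address and credentials are placeholders, not taken from the question:

from celery import Celery

# Hypothetical broker URL; substitute the Fly.io public IPv4 and real credentials.
app = Celery("tasks", broker="amqp://user:password@203.0.113.10:5672//")

@app.task
def add(x, y):
    return x + y

# Publishing the task is the step that triggers the "connection closed" error described above.
result = add.delay(2, 3)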
Versions and configurations:
OS: Ubuntu 20.04.4 LTS
Rabbitmq version: 3.8.2
Celery version: 5.2.7
The fly.toml file for Rabbitmq:
app = "rabbitmqserver"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
[experimental]
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["tls"]
port = 5672
Can you provide a suitable configuration for RabbitMQ so that I can send tasks to it using its IPv4 address?
I tried multiple other configurations for RabbitMQ on Fly and they did not work either. Furthermore, I made sure that all the needed ports are exposed and that the machine is actually alive (checked using the ping command).
Tried configuration:
app = "rabbitmq-app"
kill_signal = "SIGINT"
kill_timeout = 5
processes = []
[env]
RABBITMQ_MNESIA_DIR = "/var/lib/rabbitmq/mnesia/data"
[experimental]
allowed_public_ports = []
auto_rollback = true
[[services]]
http_checks = []
internal_port = 5672
processes = ["app"]
protocol = "tcp"
script_checks = []
[[services.tcp_checks]]
grace_period = "1s"
interval = "15s"
restart_limit = 0
timeout = "2s"
# rabbitmq admin
[[services]]
http_checks = []
internal_port = 15672
protocol = "tcp"
script_checks = []
[[services.ports]]
handlers = ["http", "tls"]
port = "15672"
[[services.tcp_checks]]
grace_period = "1s"
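For comparison, here is a minimal sketch of a [[services]] block that exposes 5672 as a raw TCP port with no edge handler (this assumes Fly passes unhandled ports straight through to the app). With handlers = ["tls"], the Fly edge expects the client to open a TLS handshake, so a plain amqp:// connection from Celery could well be closed in the way described above:

[[services]]
internal_port = 5672
protocol = "tcp"
[[services.ports]]
# no handlers: raw TCP pass-through so a plain AMQP client can reach RabbitMQ directly
port = 5672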

Related

InfluxDB refuses connection from telegraf when changing from HTTP to HTTPS

In my CentOS 7 server, I have set up Telegraf and InfluxDB. InfluxDB successfully receives data from Telegraf and stores it in the database. But when I reconfigure both services to use HTTPS, I see the following error in Telegraf's logs:
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [outputs.influxdb] When writing to [https://127.0.0.1:8086]: Post "https://127.0.0.1:8086/write?db=GRAFANA": dial tcp 127.0.0.1:8086: connect: connection refused
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [agent] Error writing to outputs.influxdb: could not write any address
InfluxDB doesn't show any errors in its logs.
Below is my telegraf.conf file:
[agent]
hostname = "local"
flush_interval = "15s"
interval = "15s"
# Input Plugins
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
database = "GRAFANA"
urls = [ "https://127.0.0.1:8086" ]
insecure_skip_verify = true
username = "telegrafuser"
password = "metricsmetricsmetricsmetrics"
And this is the uncommented [http] section of the influxdb.conf:
# Determines whether HTTP endpoint is enabled.
enabled = false
# Determines whether the Flux query endpoint is enabled.
flux-enabled = true
# The bind address used by the HTTP service.
bind-address = ":8086"
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false
# Determines whether HTTPS is enabled.
https-enabled = true
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/server-cert.pem"
# Use a separate private key location.
https-private-key = "/etc/ssl/server-key.pem"
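One thing worth noting about the pasted section: in InfluxDB 1.x the enabled key switches the whole [http] service on or off (for plain HTTP and HTTPS alike), so with enabled = false nothing listens on :8086 at all, which would match a "connection refused". A sketch of the same section with the service enabled:

[http]
  enabled = true
  bind-address = ":8086"
  https-enabled = true
  https-certificate = "/etc/ssl/server-cert.pem"
  https-private-key = "/etc/ssl/server-key.pem"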

Splunk: How to enable Splunk SSO

I have Splunk and am trying to enable Splunk SSO instead of normal authentication. My configurations are as follows:
In /opt/splunk/etc/system/local/server.conf
[general]
trustedIP = 192.168.1.208
serverName = Splunk_Core_02
pass4SymmKey = $7$RRvdYDdIlj4P2geQdtHluTRb7OfvZhTFTZGJ7z5JiZAkJ6Q1at6j0Q==
sessionTimeout = 30s
[sslConfig]
sslPassword = $7$m6pB5a0PWFg64VlNZGgunhGElO3qLiAc6NrhfLO+tpX2jR7WC7qm1Q==
[lmpool:auto_generated_pool_download-trial]
description = auto_generated_pool_download-trial
quota = MAX
slaves = *
stack_id = download-trial
[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
quota = MAX
slaves = *
stack_id = forwarder
[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
quota = MAX
slaves = *
stack_id = free
[license]
active_group = Enterprise
[diskUsage]
minFreeSpace = 1024
[lmpool:test_splunk]
quota = MAX
slaves = *
stack_id = enterprise
In /opt/splunk/etc/system/local/web.conf
[settings]
#SSO
SSOMode = permissive
trustedIP = 192.168.1.208,192.168.2.15,127.0.0.1
remoteUser = REMOTE-USER
#tools.proxy.on = False
root_endpoint = /splunk
#SSL
enableSplunkWebSSL = 0
httpport = 8000
mgmtHostPort = 127.0.0.1:8089
appServerPorts = 8065
splunkdConnectionTimeout = 30
enableSplunkWebClientNetloc = False
# SSL certificate files.
privKeyPath = $SPLUNK_HOME/etc/auth/splunkweb/privkey.pem
serverCert = $SPLUNK_HOME/etc/auth/splunkweb/cert.pem
...
When I visit the http://192.168.1.208:8000/debug/sso page, I see that SSO is not enabled. What's wrong with my configurations?
Several pieces of documentation say that in server.conf the trustedIP should be 127.0.0.1, but none of them mention that only 127.0.0.1 is eligible to enable/activate SSO. So do not configure any other IP address; use 127.0.0.1 instead.
In server.conf (/opt/splunk/etc/system/local/), you can only configure one trustedIP, and it must be 127.0.0.1.
https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/ConfigureSplunkSSO
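A sketch of the two files with only the SSO-relevant settings, following the answer above (the proxy addresses stay in web.conf, REMOTE_USER is spelled with an underscore as another reply below notes, and Splunk needs a restart after the change):

# /opt/splunk/etc/system/local/server.conf
[general]
trustedIP = 127.0.0.1

# /opt/splunk/etc/system/local/web.conf
[settings]
SSOMode = permissive
trustedIP = 127.0.0.1,192.168.1.208,192.168.2.15
remoteUser = REMOTE_USER
root_endpoint = /splunk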
Have you restarted splunk after making these changes?
In /opt/splunk/etc/system/local/web.conf,
remoteUser = REMOTE-USER is more likely to be REMOTE_USER
You have to use SAML. I am using SAML for SSO. You need to contact your IT team, who will provide the IdP file; upload it and share your Splunk connection file with them (you can download it from the same window). Go to Users -> Authentication Method -> SAML. Once everything is in place, create the groups in AD and do the same in Splunk under the SAML configuration. Let me know if you need more details.
https://docs.splunk.com/Documentation/Splunk/8.0.3/Security/HowSAMLSSOworks

RabbitMQ Web-MQTT WSS closes client connection. Insecure WS and other secure protocols work

I have a deployment of RabbitMQ that uses its own certificates for end-to-end encryption. It uses both AMQP and MQTT-over-WSS to connect multiple types of clients. AMQP clients are able to connect securely, so I know that the certificate setup is good.
Clients using WS going to ws://hostname:15675/ws can connect fine, but obviously are not secure. Clients attempting to connect to wss://hostname:15676/ws have the connection closed on them. 15676 is the port you will see I have bound the web-mqtt ssl listener to, as shown below. I've gone through both the networking and tls help guide by RabbitMQ, and I see the port correctly bound and can confirm it is exposed and available to the client.
The relevant rabbit.conf:
listeners.tcp.default = 5671
listeners.ssl.default = 5671
ssl_options.cacertfile = /path/to/fullchain.pem
ssl_options.certfile = /path/to/cert.pem
ssl_options.keyfile = /path/to/privkey.pem
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = /path/to/fullchain.pem
web_mqtt.ssl.certfile = /path/to/cert.pem
web_mqtt.ssl.keyfile = /path/to/privkey.pem
Basically, I'm wondering if I have the connection string wrong (wss://hostname:15676/ws)? Do I need to go to /wss? Is it a problem that my client is a browser running on localhost -- not HTTPS? Do I have a configuration set incorrectly -- am I missing one?
If there is a better source of documentation/examples of this plugin beyond the RabbitMQ website, I would also be interested.
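For a quick check outside the browser, a WSS connection of the kind described can be exercised with paho-mqtt. A minimal sketch, assuming the host, credentials, and CA path are placeholders; the WebSocket path stays /ws for the TLS listener as well:

import paho.mqtt.client as mqtt

# paho-mqtt 1.x style constructor; MQTT over WebSockets instead of raw TCP.
client = mqtt.Client(transport="websockets")
client.ws_set_options(path="/ws")                  # same path as the plain-WS endpoint
client.tls_set(ca_certs="/path/to/fullchain.pem")  # trust the broker's certificate chain
client.username_pw_set("guest", "guest")
client.connect("hostname", 15676)                  # the web_mqtt.ssl.port from the question
client.loop_forever()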
Maybe there is a configuration mismatch.
If there is a password on the private key file, you need to add it as well.
Refer to the following sample rabbitmq.conf:
listeners.ssl.default = 5671
ssl_options.cacertfile = <path/ca-bundle (.pem/.cabundle)>
ssl_options.certfile = <path/cert (.pem/.crt)>
ssl_options.keyfile = <path/key (.pem/.key)>
ssl_options.password = <your private key password>
ssl_options.versions.1 = tlsv1.3
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.ciphers.1 = TLS_AES_256_GCM_SHA384
ssl_options.ciphers.2 = TLS_AES_128_GCM_SHA256
ssl_options.ciphers.3 = TLS_CHACHA20_POLY1305_SHA256
ssl_options.ciphers.4 = TLS_AES_128_CCM_SHA256
ssl_options.ciphers.5 = TLS_AES_128_CCM_8_SHA256
ssl_options.honor_cipher_order = true
ssl_options.honor_ecc_order = true
web_mqtt.ssl.port = 15676
web_mqtt.ssl.backlog = 1024
web_mqtt.ssl.cacertfile = <path/ca-bundle (.pem/.cabundle)>
web_mqtt.ssl.certfile = <path/crt (.pem/.crt)>
web_mqtt.ssl.keyfile = <path/key (.pem/.key)>
web_mqtt.ssl.password = <your private key password>
web_mqtt.ssl.honor_cipher_order = true
web_mqtt.ssl.honor_ecc_order = true
web_mqtt.ssl.client_renegotiation = false
web_mqtt.ssl.secure_renegotiate = true
web_mqtt.ssl.versions.1 = tlsv1.2
web_mqtt.ssl.versions.2 = tlsv1.1
web_mqtt.ssl.ciphers.1 = ECDHE-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.2 = ECDHE-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.3 = ECDHE-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.4 = ECDHE-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.5 = ECDH-ECDSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.6 = ECDH-RSA-AES256-GCM-SHA384
web_mqtt.ssl.ciphers.7 = ECDH-ECDSA-AES256-SHA384
web_mqtt.ssl.ciphers.8 = ECDH-RSA-AES256-SHA384
web_mqtt.ssl.ciphers.9 = DHE-RSA-AES256-GCM-SHA384
This is a working configuration file for rabbitmq-server on Ubuntu 20.04.
Restart the rabbitmq-server.
List the listener ports and make sure the SSL ports are enabled (rabbitmq-diagnostics listeners).
Test the SSL setup (testssl localhost:15676).
Also test with telnet (telnet localhost 15676).
Please refer to https://www.rabbitmq.com/ssl.html#erlang-otp-requirements and the troubleshooting guide.
This worked for me :-)

Rabbit MQ declarative clustering

I have a RabbitMQ node running on a Windows 2012 server (rabbit@my-server-1).
I am creating a second node (rabbit@my-server-2) on a separate server (also Windows 2012) and would like to cluster it with the existing node. The deployment of the second node is via Octopus Deploy, and to make life easier I would like the clustering to happen automatically on startup of the node.
Reading the documentation (https://www.rabbitmq.com/clustering.html and https://www.rabbitmq.com/configure.html) leads me to believe I just need to add the following to the rabbitmq.conf file:
cluster_nodes.disc.1 = rabbit@my-server-1
However doing so causes the node to not start. The erl.exe process starts using 100% cpu and I see the following message in the erl_crash.dump file:
Slogan: init terminating in do_boot (generate_config_file)
I believe this is symptomatic of an invalid config file, and indeed removing these config entries allows me start the node fine.
I am able to cluster to the existing node manually via the relevant rabbitmqctl commands, but would prefer the declarative solution if possible.
I'm running RabbitMQ v3.7.4 and Erlang v20.3
So, what am I doing wrong? I've done some googling but haven't found anything that helps.
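For reference, the manual route mentioned above is normally this rabbitmqctl sequence, run on the new node (sketch only; the node name is taken from the question):

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@my-server-1
rabbitmqctl start_app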
EDIT
Config file in full is:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_nodes.disc.1 = rabbit@my-server-1
How about adding the clustering information as described in the docs under "Config File Peer Discovery Backend"?
This would leave you with a config file like this:
listeners.ssl.default = 5671
ssl_options.cacertfile = e:/Rabbit/Certificates/cacert.pem
ssl_options.certfile = e:/Rabbit/Certificates/cert.pem
ssl_options.keyfile = e:/Rabbit/Certificates/key.pem
ssl_options.password = xxxxxxx
ssl_options.verify = verify_none
ssl_options.fail_if_no_peer_cert = false
ssl_options.versions.1 = tlsv1.2
web_stomp.ssl.port = 14879
web_stomp.ssl.backlog = 1024
web_stomp.ssl.certfile = e:/Rabbit/Certificates/cert.pem
web_stomp.ssl.keyfile = e:/Rabbit/Certificates/key.pem
web_stomp.ssl.cacertfile = e:/Rabbit/Certificates/cacert.pem
web_stomp.ssl.password = xxxxxxx
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
cluster_formation.classic_config.nodes.1 = rabbit@my-server-1
cluster_formation.classic_config.nodes.2 = rabbit@my-server-2

What's the use of the [runners.docker] section in config.toml for use case with docker machine?

Reading the documentation on autoscaling, I can't figure out the role of the [runners.docker] section when using docker+machine as the executor:
[runners.docker]
image = "ruby:2.1" # The default image used for builds is 'ruby:2.1'
In the executors documentation it says :
docker+machine : like docker, but uses auto-scaled docker machines -
this requires the presence of [runners.docker] and [runners.machine]
I get that I have to define this [runners.docker] section to be able to use the [runners.machine] section, but what is the aim of this [runners.docker]?
I can't figure out how to configure it, because I don't understand why it is used.
Our gitlab-runner runs on a vSphere VM and is configured to scale using the docker+machine executor with MachineDriver set to vmwarevsphere. All works nicely, but I would like to fully understand the configuration file.
Here is our "censored with stars" config.toml file with the [runners.docker] section I can't understand (note that the person who wrote it has left the company, so I can't ask them):
[[runners]]
name = "gitlab-runner"
limit = 6
output_limit = 102400
url = "http://gitlab.**************.lan"
token = "*******************"
executor = "docker+machine"
[runners.docker]
tls_verify = false
image = "docker:latest"
dns = ["*.*.*.*"]
privileged = true
disable_cache = false
volumes = ["/etc/localtime:/etc/localtime:ro", "/var/run/docker.sock:/var/run/docker.sock", "/etc/docker/certs.d:/etc/docker/certs.d", "/cache:/cache", "/builds:/builds"]
cache_dir = "cache"
shm_size = 0
[runners.cache]
Type = "s3"
ServerAddress = "*.*.*.*"
AccessKey = "*****************"
SecretKey = "*****************"
BucketName = "runner"
Insecure = true
[runners.machine]
IdleCount = 4
MaxBuilds = 10
IdleTime = 3600
MachineDriver = "vmwarevsphere"
MachineName = "gitlab-runner-pool-1-%s"
MachineOptions = ["vmwarevsphere-username=************", "vmwarevsphere-password=*****************", "vmwarevsphere-vcenter=*.*.*.*", "vmwarevsphere-datastore=*********", "vmwarevsphere-memory-size=3096", "vmwarevsphere-disk-size=40960", "vmwarevsphere-cpu-count=3", "vmwarevsphere-network=*****************", "vmwarevsphere-datacenter=**************", "vmwarevsphere-hostsystem=*******************", "engine-storage-driver=overlay2", "engine-insecure-registry=**************", "engine-insecure-registry=*******************"]
OffPeakPeriods = ["* * 0-8,21-23 * * mon-fri *", "* * * * * sat,sun *"]
OffPeakTimezone = "Local"
OffPeakIdleCount = 1
OffPeakIdleTime = 600
The [runners.machine] section defines how to start and provision your runner machines; the [runners.docker] section then defines how to configure the runner on each of those machines.
Docker-machine on its own only does the following (as you can read here):
"Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands."
So this by itself does nothing for the GitLab runner; you still need to configure the runner after that, and that's where the [runners.docker] section comes into play, because the runner needs to know what default image to use, what volumes to mount, and so on.
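Put differently, a stripped-down pairing could look like this sketch (all values are placeholders): [runners.machine] describes how the VMs are created and scaled, while [runners.docker] describes how jobs run inside Docker on each of those VMs:

[[runners]]
  executor = "docker+machine"
  [runners.docker]
    image = "docker:latest"   # default job image when .gitlab-ci.yml does not specify one
    privileged = true         # needed e.g. for docker-in-docker jobs
    volumes = ["/cache"]
  [runners.machine]
    IdleCount = 1             # keep one warm machine waiting for jobs
    IdleTime = 600
    MachineDriver = "vmwarevsphere"
    MachineName = "runner-%s"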