I'm very new to RabbitMQ. I installed rabbitmq-server on one EC2 instance and want to create a consumer on another EC2 instance, but I'm getting this error:
socket.gaierror: [Errno -2] Name or service not known
That's the node status:
ubuntu@ip-10-147-xxx-xxx:~$ sudo rabbitmq-server restart
ERROR: node with name "rabbit" already running on "ip-10-147-xxx-xxx"
DIAGNOSTICS
===========
nodes in question: ['rabbit@ip-10-147-xxx-xxx']
hosts, their running nodes and ports:
- ip-10-147-xxx-xxx: [{rabbit,46074},{rabbitmqprelaunch4603,51638}]
current node details:
- node name: 'rabbitmqprelaunch4603@ip-10-147-xxx-xxx'
- home dir: /var/lib/rabbitmq
- cookie hash: Gsnt2qHd7wWDEOAOFby=
And that's the consumer code:
import pika

cred = pika.PlainCredentials('guest', 'guest')
conn_params = pika.ConnectionParameters('10-147-xxx-xxx', credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)

channel = conn_broker.channel()
channel.exchange_declare(exchange='hello-exchange', type='direct', passive=False, durable=True, auto_delete=False)
channel.queue_declare(queue='hello-queue')
channel.queue_bind(queue='hello-queue', exchange='hello-exchange', routing_key='hola')

def msg_consumer(channel, method, header, body):
    channel.basic_ack(delivery_tag=method.delivery_tag)
    if body == 'quit':
        channel.basic_cancel(consumer_tag='hello-consumer')
        channel.stop_consuming()
    else:
        print body
    return

channel.basic_consume(msg_consumer, queue='hello-queue', consumer_tag='hello-consumer')
channel.start_consuming()
You should check that the security group allows access to the RabbitMQ port. Also, it seems that you are not using RabbitMQ's default port (5672), so the port should be specified in your connection parameters.
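As a side note, pika also needs a resolvable hostname or a dotted IP address; a dashed form such as '10-147-xxx-xxx' cannot be resolved, which is what raises socket.gaierror. Here is a minimal sketch of the connection setup, with placeholder host and port values (adjust both to your actual broker):

import pika

# Host and port below are placeholders: use the broker's private IP or hostname,
# and whatever AMQP port the server is actually listening on.
cred = pika.PlainCredentials('guest', 'guest')
conn_params = pika.ConnectionParameters(host='10.147.xxx.xxx',
                                        port=5672,
                                        credentials=cred)
conn_broker = pika.BlockingConnection(conn_params)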
I'm using the Mercure hub 0.13. Everything works fine on my development machine, but on my test server the hub keeps trying to bind to port 80, resulting in an error, as nginx is already running on port 80.
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm starting the hub with the following command:
MERCURE_PUBLISHER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_PUBLISHER_JWT_ALG=RS256 \
MERCURE_SUBSCRIBER_JWT_KEY=$(cat publisher.key.pub) \
MERCURE_SUBSCRIBER_JWT_ALG=RS256 \
./mercure run -config Caddyfile.dev
Caddyfile.dev is as follows:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    {$GLOBAL_OPTIONS}
}

{$SERVER_NAME:localhost:3000}

log

route {
    redir / /.well-known/mercure/ui/
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt {env.MERCURE_PUBLISHER_JWT_KEY} {env.MERCURE_PUBLISHER_JWT_ALG}
        # Subscriber JWT key
        subscriber_jwt {env.MERCURE_SUBSCRIBER_JWT_KEY} {env.MERCURE_SUBSCRIBER_JWT_ALG}
        # Permissive configuration for the development environment
        cors_origins *
        publish_origins *
        demo
        anonymous
        subscriptions
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
When I provide SERVER_NAME as an environment variable without a domain, SERVER_NAME=:3000, the hub actually starts on port 3000, but it runs in HTTP mode, which only allows anonymous subscriptions and is not what I need.
Server:
Operating System: CentOS Stream 8
Kernel: Linux 4.18.0-383.el8.x86_64
Architecture: x86-64
Full output when trying to start the Mercure hub:
2022/05/10 04:50:29.605 INFO using provided configuration {"config_file": "Caddyfile.dev", "config_adapter": ""}
2022/05/10 04:50:29.606 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "Caddyfile.dev", "line": 3}
2022/05/10 04:50:29.609 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022/05/10 04:50:29.610 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022/05/10 04:50:29.610 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003d6150"}
2022/05/10 04:50:29.627 INFO tls cleaning storage unit {"description": "FileStorage:/root/.local/share/caddy"}
2022/05/10 04:50:29.628 INFO tls finished cleaning storage units
2022/05/10 04:50:29.642 INFO pki.ca.local root certificate is already trusted by system {"path": "storage:pki/authorities/local/root.crt"}
2022/05/10 04:50:29.643 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc0003d6150"}
run: loading initial config: loading new config: http app module: start: tcp: listening on :80: listen tcp :80: bind: address already in use
I'm a bit late, but I hope this will help someone.
As mentioned here, you can specify the http_port manually in your Caddy configuration file.
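For instance, a minimal sketch of the global options block in Caddyfile.dev, assuming the hub's plain-HTTP listener should go on port 3000 instead of the default 80 (the port number is just an example):

{
    {$GLOBAL_OPTIONS}
    http_port 3000
}

With http_port set, Caddy binds that port instead of 80 for its HTTP server and HTTP->HTTPS redirects, so it no longer collides with nginx.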
I am finding it impossible to set up an encrypted connection with a RabbitMQ broker using python's pika library on the client side. My starting point was the pika tutorial example here but I cannot make it work. I have proceeded as follows.
(1) The RabbitMQ configuration file was:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = false
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = PLAIN
auth_mechanisms.2 = AMQPLAIN
auth_mechanisms.3 = EXTERNAL
(2) The rabbitmq-auth-mechanism-ssl plugin was enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling was confirmed by checking the plugin's status with rabbitmq-plugins list.
(3) The correctness of the TLS certificates was verified by using openssl tools as described here.
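For reference, that openssl check could look roughly like the following sketch, reusing the certificate paths from the client program in step (4); the exact arguments depend on your setup:

openssl s_client -connect 127.0.0.1:5671 \
    -cert /Xyz/sampleNodeCert/node.crt \
    -key /Xyz/sampleNodeCert/nodePrivKey.pem \
    -CAfile /Xyz/sampleNodeCert/tms.crt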
(4) The client-side program to set up the connection was:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/Xyz/sampleNodeCert/tms.crt")
context.load_cert_chain("/Xyz/sampleNodeCert/node.crt",
                        "/Xyz/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, '127.0.0.1')
conn_params = pika.ConnectionParameters(host='127.0.0.1',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials())

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
(5) The client-side program failed with the following error message:
pika.exceptions.ProbableAuthenticationError: ConnectionClosedByBroker: (403) 'ACCESS_REFUSED - Login was refused using authentication mechanism EXTERNAL. For details see the broker logfile.'
(6) The log message in the RabbitMQ broker was:
2019-10-15 20:17:46.028 [info] <0.642.0> accepting AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
2019-10-15 20:17:46.032 [error] <0.642.0> Error on AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671, state: starting):
EXTERNAL login refused: user 'CN=www.node.com,O=Node GmbH,L=NodeTown,ST=NodeProvince,C=DE' - invalid credentials
2019-10-15 20:17:46.043 [info] <0.642.0> closing AMQP connection <0.642.0> (127.0.0.1:48252 -> 127.0.0.1:5671)
(7) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
Questions: Does anyone have any idea as to why my test fails? Where can I find a complete working example of setting up an encrypted connection to RabbitMQ using pika?
NB: I am familiar with this post but none of the tips in the post helped me.
After studying the link provided above by Luke Bakken, I am now in a position to answer my own question. The main change with respect to my original example is that I configure the RabbitMQ broker with a passwordless user which has the same name as the CN field of the TLS certificate on both the server and the client side. To illustrate, below, I go through my example again in detail:
(1) The RabbitMQ configuration file is:
listeners.tcp.default = 5672
listeners.ssl.default = 5671
ssl_cert_login_from = common_name
ssl_options.verify = verify_peer
ssl_options.fail_if_no_peer_cert = true
ssl_options.cacertfile = /etc/cert/tms.crt
ssl_options.certfile = /etc/cert/tms.crt
ssl_options.keyfile = /etc/cert/tmsPrivKey.pem
auth_mechanisms.1 = EXTERNAL
auth_mechanisms.2 = PLAIN
auth_mechanisms.3 = AMQPLAIN
Note that, with the ssl_cert_login_from configuration option, I am asking for the username of the RabbitMQ account to be taken from the "common name" (CN) field of the TLS certificate.
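For completeness, here is a sketch of how such a passwordless user can be created (the username pnp-vm2 matches the certificate CN used later in this example; rabbitmqctl clear_password removes the password so the account can only authenticate via a mechanism like EXTERNAL):

# Create the user, then remove its password and grant permissions on the default vhost.
rabbitmqctl add_user 'pnp-vm2' 'temporary-password'
rabbitmqctl clear_password 'pnp-vm2'
rabbitmqctl set_permissions -p / 'pnp-vm2' '.*' '.*' '.*'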
(2) The rabbitmq-auth-mechanism-ssl plugin is enabled with the following command:
rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
Successful enabling can be confirmed by checking the plugin's status with the command: rabbitmq-plugins list.
(3) The signed TLS certificate must have the issuer and subject CN fields equal to each other and equal to the hostname of the RabbitMQ broker node. In my case, inspection of the RabbitMQ log file (in /var/log/rabbitmq) shows that the broker is running on a node called: rabbit#pnp-vm2. The host name is therefore pnp-vm2. In order to check the CN fields of the client-side certificate, I use the following command:
ap@pnp-vm2:openssl x509 -noout -text -in /etc/cert/node.crt | fgrep CN
Issuer: C = CH, ST = CH, L = Location, O = Organization GmbH, CN = pnp-vm2
Subject: C = DE, ST = NodeProvince, L = NodeTown, O = Node GmbH, CN = pnp-vm2
As you can see, both the Issuer CN field and the Subject CN field are equal to "pnp-vm2" (which is the hostname of the RabbitMQ broker, see above). I tried using this name for only one of the two CN fields, but then the connection to the broker could not be established. In my test environment it was easy to create a client certificate with identical CN names, but in an operational environment this may be a lot harder to do. Also, I do not quite understand the reason for this constraint: is it a bug or is it a feature? And does it originate in the particular RabbitMQ library I am using (Python's pika) or in the AMQP protocol? These questions probably deserve a dedicated post.
(4) The client-side program to set up the connection is:
#!/usr/bin/env python
import logging
import pika
import ssl
from pika.credentials import ExternalCredentials

logging.basicConfig(level=logging.INFO)

context = ssl.create_default_context(cafile="/home/ap/RocheTe/cert/sampleNodeCert/tms.crt")
context.load_cert_chain("/home/ap/RocheTe/cert/sampleNodeCert/node.crt",
                        "/home/ap/RocheTe/cert/sampleNodeCert/nodePrivKey.pem")
ssl_options = pika.SSLOptions(context, 'pnp-vm2')
conn_params = pika.ConnectionParameters(host='a.b.c.d',
                                        port=5671,
                                        ssl_options=ssl_options,
                                        credentials=ExternalCredentials(),
                                        heartbeat=0)

with pika.BlockingConnection(conn_params) as conn:
    ch = conn.channel()
    ch.queue_declare("foobar")
    ch.basic_publish("", "foobar", "Hello, world!")
    print(ch.basic_get("foobar"))
    input("Press Enter to continue...")
Here, "a.b.c.d" is the IP address of the machine on which the RabbitMQ broker is running.
(5) The environment in which this test was done is Ubuntu 18.04 using RabbitMQ 3.7.17 on Erlang 22.0.7. On the client side, python3 version 3.6.8 was used.
One final word of warning: with this configuration, I was able to establish a secure connection to the RabbitMQ Broker but, for reasons which I still do not understand, it became impossible to start the RabbitMQ Web Management Tool...
I want to create a cluster of 3 nodes. I have created two nodes with the command:
RABBITMQ_NODE_PORT=5680 RABBITMQ_NODENAME=rabbit1@localhost rabbitmq-server -detached
Now when I try to stop the node in order to join it to the cluster, it gives me an error stating that the node is not started at all.
What I have done till now is install RabbitMQ and start it using rabbitmq-server.
rabbit1@localhost.log
Error description:
init:do_boot/3
init:start_em/1
rabbit:start_it/1 line 480
rabbit:broker_start/0 line 356
rabbit:start_apps/2 line 575
app_utils:manage_applications/6 line 126
lists:foldl/3 line 1263
rabbit:'-handle_app_error/1-fun-0-'/3 line 696
throw:{could_not_start,rabbitmq_mqtt,
{rabbitmq_mqtt,
{{shutdown,
{failed_to_start_child,'rabbit_mqtt_listener_sup_:::1883',
{shutdown,
{failed_to_start_child,
{ranch_listener_sup,{acceptor,{0,0,0,0,0,0,0,0},1883}},
{shutdown,
{failed_to_start_child,ranch_acceptors_sup,
{listen_error,
{acceptor,{0,0,0,0,0,0,0,0},1883},
eaddrinuse}}}}}}},
{rabbit_mqtt,start,[normal,[]]}}}}
Log file(s) (may contain more information):
/usr/local/var/log/rabbitmq/rabbit1@localhost.log
/usr/local/var/log/rabbitmq/rabbit1@localhost_upgrade.log
Terminal:
Most common reasons for this are:
* Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
* CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
* Target node is not running
In addition to the diagnostics info below:
* See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
* Consult server logs on node rabbit1@localhost
* If target node is configured to use long node names, don't forget to use --longnames with CLI tools
DIAGNOSTICS
===========
attempted to contact: [rabbit1@localhost]
rabbit1@localhost:
* connected to epmd (port 4369) on localhost
* epmd reports: node 'rabbit1' not running at all
other nodes on localhost: [rabbit]
* suggestion: start the node
Current node details:
* node name: 'rabbitmqcli-9206-rabbit@localhost'
* effective user's home directory: /Users/yashparekh
* Erlang cookie hash: +/3SPQl4T2w3zA11j1+o4Q==
I expect the stop_app command to work so that I can join this node to the cluster.
Please let me know where I'm going wrong.
Thanks in advance.
{failed_to_start_child,
{ranch_listener_sup,{acceptor,{0,0,0,0,0,0,0,0},1883}},
{shutdown,
{failed_to_start_child,ranch_acceptors_sup,
{listen_error,
{acceptor,{0,0,0,0,0,0,0,0},1883},
eaddrinuse}}}}}}},
It means that port 1883 (the MQTT port) is already in use: the first node is already listening on it, so a second node on the same host cannot bind it. You have to set this port dynamically as well.
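One way to do that (a sketch based on the single-machine clustering approach in the RabbitMQ docs; the ports 1884 and 15680 are arbitrary examples) is to pass alternative listener ports for the plugins when starting the extra node:

RABBITMQ_NODE_PORT=5680 RABBITMQ_NODENAME=rabbit1@localhost \
    RABBITMQ_SERVER_START_ARGS="-rabbitmq_mqtt tcp_listeners [1884] -rabbitmq_management listener [{port,15680}]" \
    rabbitmq-server -detached

If other plugins with listeners (e.g. STOMP) are enabled, their ports need the same treatment.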
I'm trying to configure Celery on my FreeBSD server and I'm running into some issues, judging by the log files.
My configuration:
FreeBSD server
2 Django applications: app1 and app2
Celery is daemonized, with Redis as the broker
Each application has its own Celery tasks
My Celery config file:
In /etc/default/celeryd_app1 I have:
# Names of nodes to start
CELERYD_NODES="worker"
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/www/app1/venv/bin/celery"
# App instance to use
CELERY_APP="main"
# Where to chdir at start.
CELERYD_CHDIR="/usr/local/www/app1/src/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# Set logging level to DEBUG
#CELERYD_LOG_LEVEL="DEBUG"
# %n will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/app1/%n%I.log"
CELERYD_PID_FILE="/var/run/celery/app1/%n.pid"
# Workers should run as an unprivileged user.
CELERYD_USER="celery"
CELERYD_GROUP="celery"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I have exactly the same file for celeryd_app2
Django settings file with Celery settings:
CELERY_BROKER_URL = 'redis://localhost:6379'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_BACKEND = 'redis://localhost:6379'
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_IGNORE_RESULT = False
CELERY_TASK_TRACK_STARTED = True
# Add a one-minute timeout to all Celery tasks.
CELERYD_TASK_SOFT_TIME_LIMIT = 60
Both settings files use the same Redis port.
My issue:
When I execute a Celery task for app1, I find logs from this task in app2's log file, with an error like this:
Received unregistered task of type 'app1.task.my_task_for_app1'
...
KeyError: 'app1.task.my_task_for_app1'
Is there an issue in my Celery config file? Do I have to set a different Redis port? If yes, how can I do that?
Thank you very much
I guess the problem lies in the fact that you are using the same Redis database for both applications:
CELERY_BROKER_URL = 'redis://localhost:6379'
Take a look at the guide for using Redis as a broker. Just use a different database for each application, e.g.
CELERY_BROKER_URL = 'redis://localhost:6379/0'
and
CELERY_BROKER_URL = 'redis://localhost:6379/1'
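To make the split concrete, here is a sketch of the relevant settings per application (assuming the result backend should be separated in the same way; the database numbers 0 and 1 are arbitrary):

# app1 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/0'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'

# app2 settings.py
CELERY_BROKER_URL = 'redis://localhost:6379/1'
CELERY_RESULT_BACKEND = 'redis://localhost:6379/1'

After changing the settings, restart both Celery workers so each one connects to its own database.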
I have a Python program connecting to a RabbitMQ server. When this program starts, it connects fine. But when the RabbitMQ server restarts, my program cannot reconnect to it, leaving only the error "Socket closed" (produced by kombu), which is not very informative.
I want to know the detailed info about the connection failure. On the server side, there is nothing useful in the RabbitMQ log file either; it just says "connection failed" with no reason given.
I tried the trace plugin (https://www.rabbitmq.com/firehose.html), and found there was no trace info published to the amq.rabbitmq.trace exchange when the connection failure happened. I enabled the plugin with:
rabbitmq-plugins enable rabbitmq_tracing
systemctl restart rabbitmq-server
rabbitmqctl trace_on
and then I wrote a client to get messages from the amq.rabbitmq.trace exchange:
#!/usr/bin/env python
from kombu.connection import BrokerConnection
from kombu.messaging import Exchange, Queue, Consumer, Producer

def on_message(body, message):
    print("RECEIVED MESSAGE: %r" % (body, ))
    message.ack()

def main():
    conn = BrokerConnection('amqp://admin:pass@localhost:5672//')
    channel = conn.channel()
    queue = Queue('debug', channel=channel, durable=False)
    queue.bind_to(exchange='amq.rabbitmq.trace', routing_key='publish.amq.rabbitmq.trace')
    consumer = Consumer(channel, queue)
    consumer.register_callback(on_message)
    consumer.consume()
    while True:
        conn.drain_events()

if __name__ == '__main__':
    main()
I also tried to get some debug logs from the RabbitMQ server. I reconfigured rabbitmq.config according to https://www.rabbitmq.com/configure.html, and set log_levels to
{log_levels, [{connection, info}]}
but as a result the RabbitMQ server failed to start. It seems the official doc does not apply to my version; my RabbitMQ server version is 3.3.5. However,
{log_levels, [connection,debug,info,error]}
or
{log_levels, [connection,debug]}
works, but with this there is no DEBUG info showing in the logs, and I don't know whether that is because the log_levels configuration is not taking effect or because no DEBUG messages are being produced at all.
I know that this answer comes massively late, but for future readers, this worked for me:
[
  {rabbit,
    [
      {log_levels, [{connection, debug}, {channel, debug}]}
    ]
  }
].
Basically, you just need to wrap the parameters you want to set in whichever module/plugin they belong to.
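As a usage note (a sketch, assuming a standard package install where the classic-format config lives at /etc/rabbitmq/rabbitmq.config): the node has to be restarted for the change to take effect, and rabbitmqctl environment shows the value actually in use.

systemctl restart rabbitmq-server
rabbitmqctl environment | grep log_levels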