With the following RabbitMQ config:
[
 {mnesia, [{dump_log_write_threshold, 100}]},
 {rabbit, [{vm_memory_high_watermark, 0.4}]},
 {rabbitmq_shovel,
  [{shovels,
    [{devShovel,
      [{sources,      [{broker, "amqp://shoveluser:shoveluser@server2:5672"}]},
       {destinations, [{broker, "amqp://shoveluser:shoveluser@localhost:5672"}]},
       {queue, <<"queue">>},
       {publish_fields, [{exchange, <<"DataExchange">>}]}
      ]}
    ]}
  ]}
].
and all of the relevant queues and exchanges declared, I am able to start my RabbitMQ server. However, when I check the shovel management plugin, it always displays "starting" as the state of the shovel. What causes this, and is there any way to get more information?
Make sure the user is set up correctly on both brokers.
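If you want to rule out a credentials problem quickly, you can try opening a connection to each broker with the same AMQP URIs the shovel uses. A minimal sketch using a Python client (kombu); the URIs are copied from the shovel config above and assume the default vhost:
#!/usr/bin/env python
from kombu import Connection

# The same URIs as in the shovel config above; adjust if your vhost differs.
for uri in ("amqp://shoveluser:shoveluser@server2:5672//",
            "amqp://shoveluser:shoveluser@localhost:5672//"):
    try:
        with Connection(uri, connect_timeout=5) as conn:
            conn.connect()   # raises on bad credentials or an unreachable host
            print("OK:", uri)
    except Exception as exc:
        print("FAILED:", uri, exc)
Also check the broker logs on both machines for authentication errors while the shovel is stuck in "starting".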
RabbitMQ 3.10.1
rabbitmq-diagnostics status
...
Config files
* /etc/rabbitmq/rabbitmq.config
...
rabbitmq.config:
[
 {rabbit,
  [
   {heartbeat, 90}
  ]
 }
].
RabbitMQ Management shows a 5s heartbeat.
And log:
2022-05-13 19:56:43.235925+03:00 [error] <0.5979.0> closing AMQP connection <0.5979.0> (xxx.xxx.xxx.xxx:3555 -> xxx.xxx.xxx.xxx:5672):
2022-05-13 19:56:43.235925+03:00 [error] <0.5979.0> missed heartbeats from client, timeout: 5s
How to fix this?
Set the heartbeat to 90s in the client. Most client libraries allow you to set the heartbeat interval, and RabbitMQ will respect the value suggested by the client. More about that here: https://www.rabbitmq.com/heartbeats.html#heartbeats-timeout
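For example, with a Python client such as kombu the heartbeat can be requested when the connection is created; note that kombu also expects the application to drive heartbeats itself. Host and credentials below are placeholders:
from kombu import Connection

# Ask for a 90-second heartbeat; broker and client negotiate the final value.
conn = Connection("amqp://guest:guest@localhost:5672//", heartbeat=90)
conn.connect()

# kombu does not send heartbeats in the background: the application must call
# heartbeat_check() regularly (a few times per heartbeat interval).
conn.heartbeat_check()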
The RabbitMQ setup in my organization uses LDAP for authentication and authorization.
How can I configure NServiceBus (or RabbitMQ) to use the credentials that the service is running under (like integrated security for SQL connections)?
RabbitMQ configuration:
[
 {rabbit,
  [{auth_backends, [rabbit_auth_backend_ldap]}]},
 {rabbitmq_auth_backend_ldap,
  [{servers, ["ad.xxxx.xxx"]},
   {dn_lookup_attribute, "userPrincipalName"},
   {dn_lookup_base, "OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx"},
   {log, true},
   {group_lookup_base, "OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx"},
   {tag_queries, [{administrator, {in_group, "CN=GRP_Name,OU=XXXX Users,DC=ad,DC=XXXX,DC=XXX"}},
                  {management,    {in_group, "CN=GRP_Name,OU=XXXX Users,DC=ad,DC=XXXX,DC=XXX"}}]}
  ]}
].
NServiceBus Code:
var endpointConfiguration = new EndpointConfiguration("Receiver.Service");
var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
transport.UseConventionalRoutingTopology();
transport.ConnectionString("host=rabbitmq.sb.xxxx.xxx");
RabbitMQ's LDAP support requires that client applications pass a username and password. There is no equivalent to SQL's integrated security.
In your case, users must have a DN that ends with OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx. Your NServiceBus application will have to pass the username and password of an account with the expected DN.
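The LDAP backend only changes how RabbitMQ validates credentials, not how clients supply them, so every client still sends a plain username and password. As an illustration (a Python client rather than NServiceBus; the account name and password are placeholders for an AD user under that OU):
from kombu import Connection

# "svc_receiver"/"secret" are placeholder credentials for an AD account whose DN
# ends with "OU=xxxx Users,DC=ad,DC=xxxx,DC=xxx"; RabbitMQ hands them to the
# LDAP backend for validation.
conn = Connection("amqp://svc_receiver:secret@rabbitmq.sb.xxxx.xxx:5672//")
conn.connect()
print("authenticated via the LDAP backend")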
https://www.rabbitmq.com/ldap.html
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I have several VMs all getting messages from a node running RabbitMQ.
I've hit a bottleneck with the default settings, so I'm starting to tweak them to get better results.
I've added
CONFIGFILE=/etc/rabbitmq/rabbitmq
and set the following rabbitmq.config
[
 {rabbit, [
   {tcp_listeners, [{"0.0.0.0", 5672}]},
   {tcp_listen_options, [
     {nodelay, true}
   ]}
 ]}
].
This is just one of the suggestions from the website.
https://www.rabbitmq.com/networking.html
Without the config file everything runs OK, but after adding the file I keep getting an "IOError: Socket Closed" error.
Is there anything in the configuration file which is causing the socket to be closed?
The solution was found here: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-January/017138.html
After adding the settings described there to the config file, connections could be established again.
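If you want a quick way to confirm that the listener configuration is accepted after each change, a simple connection smoke test from one of the VMs is usually enough. A sketch with a Python client (kombu); host and credentials are placeholders:
from kombu import Connection

# Point this at the node whose tcp_listeners / tcp_listen_options you changed.
with Connection("amqp://guest:guest@rabbit-node:5672//", connect_timeout=5) as conn:
    conn.connect()
    print("port 5672 is accepting AMQP connections")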
I have a Python program connecting to a RabbitMQ server. When the program starts, it connects fine. But when the RabbitMQ server restarts, my program cannot reconnect to it; it only leaves the error "Socket closed" (produced by kombu), which is not very informative.
I want more detailed information about the connection failure. On the server side there is nothing useful in the RabbitMQ log file either; it just says "connection failed" with no reason given.
I tried the trace plugin (https://www.rabbitmq.com/firehose.html) and found there was no trace info published to the amq.rabbitmq.trace exchange when the connection failure happened. I enabled the plugin with:
rabbitmq-plugins enable rabbitmq_tracing
systemctl restart rabbitmq-server
rabbitmqctl trace_on
and then I wrote a client to consume messages from the amq.rabbitmq.trace exchange:
#!/usr/bin/env python
from kombu.connection import BrokerConnection
from kombu.messaging import Exchange, Queue, Consumer, Producer

def on_message(body, message):
    # Print each traced message and acknowledge it.
    print("RECEIVED MESSAGE: %r" % (body, ))
    message.ack()

def main():
    conn = BrokerConnection('amqp://admin:pass@localhost:5672//')
    channel = conn.channel()
    # Declare a throwaway queue and bind it to the firehose exchange.
    queue = Queue('debug', channel=channel, durable=False)
    queue.queue_declare()
    queue.bind_to(exchange='amq.rabbitmq.trace',
                  routing_key='publish.amq.rabbitmq.trace')
    consumer = Consumer(channel, queue)
    consumer.register_callback(on_message)
    consumer.consume()
    while True:
        conn.drain_events()

if __name__ == '__main__':
    main()
I also tried to get some debug logging from the RabbitMQ server. I reconfigured rabbitmq.config according to https://www.rabbitmq.com/configure.html, and set
log_levels to
{log_levels, [{connection, info}]}
but as a result the RabbitMQ server failed to start. It seems like the official doc does not match my setup; my RabbitMQ server version is 3.3.5. However,
{log_levels, [connection,debug,info,error]}
or
{log_levels, [connection,debug]}
works, but with this there is no DEBUG info showing in the logs, and I don't know whether that is because the log_levels configuration is not taking effect or because no DEBUG messages are being produced at all.
I know that this answer comes massively late, but for future readers, this worked for me:
[
 {rabbit,
  [
   {log_levels, [{connection, debug}, {channel, debug}]}
  ]
 }
].
Basically, you just need to wrap the parameters you want to set in the section for whichever module/plugin they belong to.
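On the client side you can also get more detail about reconnection failures by letting kombu retry the connection and logging every attempt. A sketch using kombu's ensure_connection with an errback; the URI is a placeholder:
import logging
from kombu import Connection

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("amqp-reconnect")

def on_connection_error(exc, interval):
    # Called on every failed attempt with the underlying exception,
    # which is usually more informative than a bare "Socket closed".
    log.warning("connection failed: %r, retrying in %ss", exc, interval)

conn = Connection("amqp://admin:pass@localhost:5672//")
conn.ensure_connection(errback=on_connection_error,
                       max_retries=None,   # keep retrying until the broker is back
                       interval_start=2, interval_step=2, interval_max=30)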
I have a RabbitMQ shovel that I have been using for some time.
I have a PC '192.168.7.1' that is shovelling messages from another PC '192.168.7.6'. This works unless '192.168.7.6' reboots; then the shovel on '192.168.7.1' stays in the RUNNING state, never again receives messages, and never reconnects. So messages just buffer up on '192.168.7.6' indefinitely.
Here is an excerpt from my config file showing the shovel config:
[{rabbit, [{disk_free_limit, {mem_relative, 1.0}}]},
 {rabbitmq_shovel,
  [{shovels,
    [{backbone_shovel,
      [{sources,
        [{brokers, ["amqp://guest:guest@192.168.7.6"]},
         {'queue.declare', [{queue, <<"backbone">>},
                            durable]}
        ]},
       {destinations,
        [{broker, "amqp://guest:guest@localhost"},
         {'queue.declare', [{queue, <<"backbone">>},
                            durable]}
        ]},
       {queue, <<"backbone">>},
       {ack_mode, on_confirm},
       {reconnect_delay, 5}
      ]},
Here is an excerpt from the rabbitmq shovel management plugin when the source (192.168.7.6) is rebooting:
backbone_shovel   running
  source:       type: network, virtual_host: /, host: 192.168.7.6, username: guest, ssl: false
  destination:  type: network, virtual_host: /, host: localhost, username: guest, ssl: false
  2012-11-26 11:03:51
How can I force a shovel to restart when the target dies?
Answering my own question just in case anyone else has the same problem.
The simple solution is to put the shovel on the client (PC '192.168.7.6') instead of the server (PC '192.168.7.1'). Then, if the client restarts, the shovel restarts with it, so the shovel never gets out of sync with the state of the client.