RabbitMQ Shovel Frozen in 'Running' State

I have a RabbitMQ shovel that I have been using for some time.
I have a PC '192.168.7.1' that is shovelling messages from another PC '192.168.7.6'. This works, but if '192.168.7.6' reboots, the shovel on '192.168.7.1' stays in the RUNNING state, never receives messages again, and never reconnects. So messages just buffer up on '192.168.7.6' indefinitely.
Here is an excerpt from my config file showing the shovel config:
[{rabbit, [{disk_free_limit, {mem_relative, 1.0}}]},
 {rabbitmq_shovel,
  [{shovels,
    [{backbone_shovel,
      [{sources,
        [{brokers, ["amqp://guest:guest@192.168.7.6"]},
         {declarations,
          [{'queue.declare', [{queue, <<"backbone">>}, durable]}]}]},
       {destinations,
        [{broker, "amqp://guest:guest@localhost"},
         {declarations,
          [{'queue.declare', [{queue, <<"backbone">>}, durable]}]}]},
       {queue, <<"backbone">>},
       {ack_mode, on_confirm},
       {reconnect_delay, 5}]},
Here is an excerpt from the rabbitmq shovel management plugin when the source (192.168.7.6) is rebooting:
backbone_shovel
running
source:      type: network, virtual_host: /, host: 192.168.7.6, username: guest, ssl: false
destination: type: network, virtual_host: /, host: localhost, username: guest, ssl: false
2012-11-26 11:03:51
How can I force a shovel to restart when the remote (source) broker dies?

Answering my own question in case anyone else has the same problem.
The simple solution is to put the shovel on the client (PC '192.168.7.6') instead of the server (PC '192.168.7.1'). Then if the client restarts, the shovel restarts with it, so the shovel never gets out of sync with the state of the client.
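For reference, here is a sketch of the relocated shovel (adapted from the config above, so queue names and credentials stay the same; it now runs on '192.168.7.6', reads from its local broker, and pushes to '192.168.7.1'):

{rabbitmq_shovel,
 [{shovels,
   [{backbone_shovel,
     [{sources,
       [{brokers, ["amqp://guest:guest@localhost"]},
        {declarations,
         [{'queue.declare', [{queue, <<"backbone">>}, durable]}]}]},
      {destinations,
       [{broker, "amqp://guest:guest@192.168.7.1"},
        {declarations,
         [{'queue.declare', [{queue, <<"backbone">>}, durable]}]}]},
      {queue, <<"backbone">>},
      {ack_mode, on_confirm},
      {reconnect_delay, 5}]}]}]}

Because the shovel now lives and dies with the rebooting machine, a reboot restarts it automatically and the stale RUNNING state cannot persist.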

Related

RabbitMQ ignores the config "heartbeat" setting

RabbitMQ 3.10.1
rabbitmq-diagnostics status
...
Config files
* /etc/rabbitmq/rabbitmq.config
...
rabbitmq.config:
[
 {rabbit,
  [
   {heartbeat, 90}
  ]
 }
].
The RabbitMQ Management UI still shows a 5s heartbeat.
And the log:
2022-05-13 19:56:43.235925+03:00 [error] <0.5979.0> closing AMQP connection <0.5979.0> (xxx.xxx.xxx.xxx:3555 -> xxx.xxx.xxx.xxx:5672):
2022-05-13 19:56:43.235925+03:00 [error] <0.5979.0> missed heartbeats from client, timeout: 5s
How can this be fixed?
Set the heartbeat to 90s in the client. Most clients are able to set the heartbeat from the client side, and RabbitMQ will respect the value suggested by the client. More about that here: https://www.rabbitmq.com/heartbeats.html#heartbeats-timeout
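For example, with the RabbitMQ Erlang client (a sketch only; the host is a placeholder, and most other client libraries expose the same setting under a similar name):

%% Request a 90s heartbeat from the client side.
%% Assumes the amqp_client library is on the code path.
-include_lib("amqp_client/include/amqp_client.hrl").

connect() ->
    {ok, Connection} = amqp_connection:start(#amqp_params_network{
        host = "rabbit.example.com",  %% placeholder host
        heartbeat = 90                %% match the server's 90s setting
    }),
    Connection.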

Socket.IO and Kubernetes

I am trying to migrate a socket io service from GCP (App Engine) to a kubernetes cluster.
Everything works fine on the GCP side (we have one instance of the server without replicas).
The migration to k8s is going very well, except that when the client socket connects to the server, it does not receive some of the data:
In 'polling' transport: since there are two pods, this no longer works properly, and the client socket keeps disconnecting and reconnecting in a loop.
In 'websocket' transport: the connection is established correctly and the client can receive data from the server in 'broadcast to all clients' mode => socket.emit('getDeviceList', os.hostname()), but as soon as the server tries to send data only to the relevant client, io.of(namespace).to(socket.id).emit('getDeviceList', JSON.stringify(obj)), that client receives nothing...
Moreover, I changed my service to a single pod for a test: polling mode then works correctly, but I end up in the same situation as with the websocket transport => I can't send a message to one specific client...
Of course, the same code on the App Engine side works correctly and the client receives everything.
I'm working with:
"socket.io": "^3.1.0",
"socket.io-redis": "^5.2.0",
"vue": "^2.5.18",
"vue-socket.io": "3.0.7",
My server side configuration:
// socket.io-redis adapter import (implied by the io.adapter call below)
const redis = require('socket.io-redis');

var io = require('socket.io')(server, {
  pingTimeout: 5000,
  pingInterval: 2000,
  // 'transports' is a top-level option, not a CORS option
  transports: ['websocket', 'polling'],
  cors: {
    origin: true,
    methods: ["GET", "POST"],
    credentials: true
  },
  allowEIO3: true
});
io.adapter(redis({ host: redis_host, port: redis_port }));
My front side configuration:
Vue.use(new VueSocketIO({
  debug: true,
  connection: 'path_to_the_socket_io/namespace',
  options: {
    query: `id=..._timestamp`,
    transports: ['polling']
  }
}));
My ingress annotations:
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.global-static-ip-name: ip-loadbalancer
meta.helm.sh/release-name: xxx
meta.helm.sh/release-namespace: xxx-release
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/affinity-mode: persistent
nginx.ingress.kubernetes.io/force-ssl-redirect: true
nginx.ingress.kubernetes.io/proxy-connect-timeout: 10800
nginx.ingress.kubernetes.io/proxy-read-timeout: 10800
nginx.ingress.kubernetes.io/proxy-send-timeout: 10800
nginx.org/websocket-services: app-sockets-cluster-ip-service
My question is: why can I receive broadcast-to-all messages, but not messages addressed to a specific socket?
Can someone help me? :)
Thanks a lot!
I found the solution during the day and am sharing it.
The problem is not the Kubernetes cluster but the socket.io and socket.io-redis adapter versions.
I was using socket.io 3.x.x with socket.io-redis 5.x.x.
With this version of socket.io I need to use socket.io-redis 6.x.x :)
You can find the compatible versions of socket.io and the redis adapter here:
https://github.com/socketio/socket.io-redis-adapter#compatibility-table
Thanks a lot.
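For reference, the matching dependency pair then looks like this (version ranges are illustrative; check the compatibility table above for the exact pairing):
"socket.io": "^3.1.0",
"socket.io-redis": "^6.0.0",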

Logstash fails to connect to RabbitMQ

I am trying to set up a Logstash configuration to push lines from a file to RabbitMQ. I installed both Logstash 2.1.1 and RabbitMQ 3.6.0 on my local machine to test it. The output configuration is:
output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => "15672"
    user => "logstash"
    password => "logstashpassword"
  }
}
But when I now start Logstash, it fails to start up with the following error (the output is only shown when Logstash is started in debug mode):
Worker threads expected: 4, worker threads started: 4 {:level=>:info, :file=>"logstash/pipeline.rb", :line=>"186", :method=>"start_filters"}
Connecting to RabbitMQ. Settings: {:vhost=>"/", :host=>"127.0.0.1", :port=>15672, :user=>"logstash", :automatic_recovery=>true, :pass=>"logstashpassword", :timeout=>0, :heartbeat=>0} {:level=>:debug, :file=>"logstash/plugin_mixins/rabbitmq_connection.rb", :line=>"135", :method=>"connect"}
The error reported is:
com.rabbitmq.utility.BlockingCell.get(com/rabbitmq/utility/BlockingCell.java:77)
com.rabbitmq.utility.BlockingCell.uninterruptibleGet(com/rabbitmq/utility/BlockingCell.java:111)
com.rabbitmq.utility.BlockingValueOrException.uninterruptibleGetValue(com/rabbitmq/utility/BlockingValueOrException.java:37)
com.rabbitmq.client.impl.AMQChannel$BlockingRpcContinuation.getReply(com/rabbitmq/client/impl/AMQChannel.java:367)
com.rabbitmq.client.impl.AMQConnection.start(com/rabbitmq/client/impl/AMQConnection.java:293)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:621)
com.rabbitmq.client.ConnectionFactory.newConnection(com/rabbitmq/client/ConnectionFactory.java:648)
java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:606)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:505)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.converting_rjc_exceptions_to_ruby(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:467)
RUBY.new_connection_impl(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:500)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:136)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare/session.rb:109)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/march_hare-2.11.0-java/lib/march_hare.rb:20)
RUBY.connect(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:137)
RUBY.connect!(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-mixin-rabbitmq_connection-2.2.0-java/lib/logstash/plugin_mixins/rabbitmq_connection.rb:94)
RUBY.register(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-rabbitmq-3.0.6-java/lib/logstash/outputs/rabbitmq.rb:40)
org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
RUBY.start_outputs(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:192)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/pipeline.rb:102)
RUBY.execute(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/agent.rb:165)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:90)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.run(/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.1.1-java/lib/logstash/runner.rb:95)
org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
RUBY.initialize(/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/task.rb:24)
java.lang.Thread.run(java/lang/Thread.java:745)
And then Logstash "crashes", i.e. stops. Does anyone know whether this is caused by the Logstash config or by a problem in the RabbitMQ setup?
Thanks in advance.
Daniel
Found my error: the port for RabbitMQ has to be 5672 (the AMQP port), not 15672 (the HTTP port of the management UI).
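The corrected output block, with only the port changed:

output {
  rabbitmq {
    exchange => "test_exchange"
    key => "test"
    exchange_type => "direct"
    host => "127.0.0.1"
    port => 5672
    user => "logstash"
    password => "logstashpassword"
  }
}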

RabbitMQ IOError Socket Closed when changing settings

I have several VMs all getting messages from a node running RabbitMQ.
I've hit a bottleneck with the default settings, so I'm starting to tweak them to get better results.
I've added
CONFIGFILE=/etc/rabbitmq/rabbitmq
and set the following rabbitmq.config:
[
 {rabbit, [
   {tcp_listeners, [{"0.0.0.0", 5672}]},
   {tcp_listen_options, [
     {nodelay, true}
   ]}
 ]}
].
This is just one of the suggestions from the website.
https://www.rabbitmq.com/networking.html
Without the config file everything runs OK, but when the file is added I keep getting 'IOError: Socket Closed'.
Is there anything in the configuration file that is causing the socket to be closed?
The solution was found here: http://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-January/017138.html
After adding the settings from that thread to the config file, connections could be established again.
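A likely explanation (an assumption based on a well-known RabbitMQ gotcha, not verified against the linked thread): setting tcp_listen_options replaces the default option list instead of merging with it, so the defaults have to be restated alongside nodelay, e.g.:

[
 {rabbit, [
   {tcp_listeners, [{"0.0.0.0", 5672}]},
   {tcp_listen_options, [
     binary,                 %% defaults lost when the list is overridden
     {packet, raw},
     {reuseaddr, true},
     {backlog, 128},
     {nodelay, true},        %% the option actually being tuned
     {exit_on_close, false}
   ]}
 ]}
].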

rabbitmq shovel state always starting

With the following RabbitMQ config:
[{mnesia, [{dump_log_write_threshold, 100}]},
 {rabbit, [{vm_memory_high_watermark, 0.4}]},
 {rabbitmq_shovel,
  [{shovels,
    [{devShovel,
      [{sources, [{broker, "amqp://shoveluser:shoveluser@server2:5672"}]},
       {destinations, [{broker, "amqp://shoveluser:shoveluser@localhost:5672"}]},
       {queue, <<"queue">>},
       {publish_fields, [{exchange, <<"DataExchange">>}]}
      ]}
   ]}
 ]}
].
and all of the relevant queues and exchanges declared, I am able to start my RabbitMQ server. However, when I check the shovel management plugin, it always displays 'starting' as the state of the shovel. What causes this, and is there any way to get more information?
Make sure the user is set up correctly on both brokers.
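For example, on each broker (a sketch; the user name and password are taken from the URIs above, and the default vhost '/' is assumed):

rabbitmqctl add_user shoveluser shoveluser
rabbitmqctl set_permissions -p / shoveluser ".*" ".*" ".*"

The broker logs on both machines are also worth checking, since the shovel logs the reason each time a connection attempt fails.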