RabbitMQ suddenly does not receive messages from Spring AMQP

I've been moving my dev environment from Linux to Mac and suddenly I'm facing weird RabbitMQ behavior.
I had an Ubuntu box running RabbitMQ 2.8.7-1 and did my dev work on the same box. I ran my test code (provided below) and Rabbit was happy to receive all messages.
// using spring-rabbit 1.1.1.RELEASE
AmqpTemplate amqpTemplate = (AmqpTemplate) context.getBean("amqpTemplate");
amqpTemplate.convertAndSend("bar.queue", "Foo message");
Now I have moved to a Mac (host A), on which I'm running VirtualBox with Linux (host B), which runs RabbitMQ with the same configuration as the previous Linux box. I'm running my dev environment on the Mac, which calls Rabbit inside the VM :). But nothing arrived, so I used Wireshark to trace the communication, which seems OK:
. . . .
B > A: Connection.Start
A > B: Connection.Start-Ok
B > A: Connection.Tune
A > B: Connection.Tune-Ok
A > B: Connection.Open
B > A: Connection.Open-Ok
A > B: Channel.Open
B > A: Channel.Open-Ok
A > B: Basic.Publish (bar.queue)
A > B: Content-Header (text/plain)
A > B: Content-Body (Foo message)
A > B: Channel.Close (200-OK)
B > A: [[TCP ACK for Channel.Close]]
. . . .
So it looks like the message was received, but maybe not processed by the broker? The log on the client side also tells me the message was published.
17:09:37.450 [main] DEBUG o.s.b.f.s.DefaultListableBeanFactory - Returning cached instance of singleton bean 'amqpTemplate'
17:09:37.576 [main] DEBUG o.s.a.r.c.CachingConnectionFactory - Creating cached Rabbit Channel from AMQChannel(amqp://guest@hostB:5672/,1)
17:09:37.611 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Executing callback on RabbitMQ Channel: Cached Rabbit Channel: AMQChannel(amqp://guest@hostB:5672/,1)
17:09:37.612 [main] DEBUG o.s.amqp.rabbit.core.RabbitTemplate - Publishing message on exchange [], routingKey = [bar.queue]
I have absolutely no idea where to start looking. Why was it working before and not now? Where could the problem be?
edit:
OK, I tried to implement the simple sender from the RabbitMQ tutorial, and it looks like it hangs on close(): the application keeps running and the code after close() is never reached.
channel.basicPublish(exchangeName, routingKey, MessageProperties.PERSISTENT_TEXT_PLAIN, messageBodyBytes);
channel.close();
// this is never reached and app still running :-o
conn.close();
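For reference, a complete minimal sender along the lines of the tutorial looks roughly like this (the host name is the question's VM; everything else follows the tutorial):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class Sender {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("hostB"); // the VM running RabbitMQ
        Connection conn = factory.newConnection();
        Channel channel = conn.createChannel();
        byte[] messageBodyBytes = "Foo message".getBytes();
        // publish to the default exchange; the routing key is the queue name
        channel.basicPublish("", "bar.queue", MessageProperties.PERSISTENT_TEXT_PLAIN, messageBodyBytes);
        channel.close(); // this is where the application hangs
        conn.close();
    }
}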

How much free disk space do you have in your VirtualBox VM?
This looks very similar to my problem. I was calling the BasicPublish method from C# under Windows 7 x64, with RabbitMQ installed on the same machine. The code hung at the same line as yours. When I put BasicPublish between TxSelect and TxCommit calls, the program hung at TxCommit instead.
After some time I realized that my HDD was almost full (I can't remember exactly, but around 200 MB of free space). When free disk space drops below RabbitMQ's disk_free_limit threshold, the broker raises a disk alarm and blocks publishing connections, which makes publishers appear to hang. I freed some space and that helped.
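To check for this condition, rabbitmqctl reports the broker's status, including free disk space and any active resource alarms, and the threshold can be adjusted in rabbitmq.config (the figure below is only an illustration):

rabbitmqctl status

%% in rabbitmq.config: free-disk threshold in bytes, below which publishers are blocked
[{rabbit, [{disk_free_limit, 1000000000}]}].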

Related

How to configure Shovel plugin for two instances of RabbitMQ running on same machine

Sorry if this is a repeated question, but I am having a hard time getting this to work even after a lot of searching. Thanks for any help in advance!
I have two instances of RabbitMQ running inside Docker containers on my machine, as shown below.
C:\WINDOWS\system32>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
20c1ceda0013 datafyit/rabbitmq:shovel "docker-entrypoint.s…" 4 hours ago Up About an hour 4369/tcp, 5671/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:32791->5672/tcp, 0.0.0.0:32790->15672/tcp shovel-rabbit-snt
d467418754ef datafyit/rabbitmq:shovel "docker-entrypoint.s…" 4 hours ago Up 4 hours 4369/tcp, 5671/tcp, 15671/tcp, 25672/tcp, 0.0.0.0:32783->5672/tcp, 0.0.0.0:32782->15672/tcp shovel-rabbit-rcv
I am trying to configure the shovel plugin so that messages published to the "shovel-rabbit-snt" broker are passed through the shovel to the "shovel-rabbit-rcv" broker, where my receiver code picks them up.
But I am not able to get the shovel into the running state, and I always get the following error message in the RabbitMQ management UI.
{{badmatch,{error,econnrefused}},
[{rabbit_shovel_worker,make_conn_and_chan,2,
[{file,"src/rabbit_shovel_worker.erl"},{line,238}]},
{rabbit_shovel_worker,handle_cast,2,
[{file,"src/rabbit_shovel_worker.erl"},{line,63}]},
{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1032}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,247}]}]}
My shovel configuration:
Source:          amqp://guest:guest@172.17.0.4:32791/  (queue: Sender)
Destination:     amqp://guest:guest@172.17.0.2:32783/  (queue: Receiver)
Prefetch count:  (not set)
Reconnect delay: (not set)
Add headers:     (unchecked)
Ack mode:        on-confirm
Auto-delete:     never
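For reference, the same shovel can also be declared from the command line as a dynamic shovel parameter. One thing worth checking (an assumption, not verified here): inside the Docker bridge network the brokers listen on the standard port 5672, while 32791/32783 exist only as host-side port mappings, which would explain econnrefused when container IPs are combined with host ports. A sketch, run on the source broker:

rabbitmqctl set_parameter shovel my-shovel '{"src-uri": "amqp://guest:guest@172.17.0.4:5672", "src-queue": "Sender", "dest-uri": "amqp://guest:guest@172.17.0.2:5672", "dest-queue": "Receiver"}'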

Why is "await Publish<T>" hanging / not completing / not finishing

The following piece of code has been working for some time and it has suddenly stopped returning:
await availableChangedPublishEndpoint
.Publish<IAvailableStockChanged>(
AvailableStockCounter.ConvertSkuQtyToAvailableStockChangedEvent(
newAvailable,
absMessage.Warehouse)
);
There is nothing clever in ConvertSkuQtyToAvailableStockChangedEvent - it just maps one simple class to another.
We added logs before and after this code, and it's definitely just stopping at this point. Other systems are publishing fine, and other messages are being sent from this application (e.g. logs are actually sent via RabbitMQ). We have redeployed and upgraded to the latest MassTransit version. We can see that the messages are being published - possibly multiple times - but this Publish method never returns.
We had a broken RabbitMQ node and a clean service restart on one node fixed it. I appreciate there might be other reasons for this behaviour, but this was our problem.
systemctl restart rabbitmq-server
Looking further into RabbitMQ, we saw that some of the empty queues connected to this exchange were not synchronised, and when we tried to synchronise them it wouldn't work.
We also couldn't delete some of these unsynchronised queues.
We believe an unexpected shutdown of one of the nodes caused this problem, although it left most queues/exchanges completely OK.
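For anyone diagnosing a similar state, node and queue health can be inspected from the command line (a sketch assuming classic mirrored queues; column names vary between RabbitMQ versions):

rabbitmqctl cluster_status
rabbitmqctl list_queues name slave_pids synchronised_slave_pids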

Spark Streaming: Inputs are received but not processed

I am running a simple Spark Streaming application that consists of sending messages through a socket server to the Spark Streaming context and printing them.
This is my code, which I am running in the IntelliJ IDE:
SparkConf sparkConfiguration = new SparkConf().setAppName("DataAnalysis").setMaster("spark://IP:7077");
JavaStreamingContext sparkStrContext = new JavaStreamingContext(sparkConfiguration, Durations.seconds(1));
JavaReceiverInputDStream<String> receiveData = sparkStrContext.socketTextStream("localhost", 5554);
I am running this application in a standalone cluster mode, with one worker (an Ubuntu VM) and a master (my Windows host).
This is the problem: when I run the application, I see that it successfully connects to the master, but it doesn't print any lines; it just stays that way permanently.
If I go to the Spark UI, I see that the Spark Streaming context is receiving inputs, but they are not being processed.
Can someone help me, please? Thank you so much.
You need to add the following:
sparkStrContext.start(); // Start the computation
sparkStrContext.awaitTermination(); // Wait for the computation to terminate
Once you do this, you need to post messages on port 5554. For this, first run Netcat (a small utility found in most Unix-like systems) as a data server and start pushing the stream.
For example:
TERMINAL 1:
# Running Netcat
$ nc -lk 5554
hello world
TERMINAL 2: RUNNING Your streaming program
-------------------------------------------
Time: 1357008430000 ms
-------------------------------------------
hello world
...
...
You can check a similar example in the Spark Streaming programming guide.
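Putting the pieces together, a complete minimal version of the program might look like this (the class name is a guess; the master URL and ports are taken from the question):

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class DataAnalysis {
    public static void main(String[] args) throws Exception {
        SparkConf sparkConfiguration = new SparkConf().setAppName("DataAnalysis").setMaster("spark://IP:7077");
        JavaStreamingContext sparkStrContext = new JavaStreamingContext(sparkConfiguration, Durations.seconds(1));
        JavaReceiverInputDStream<String> receiveData = sparkStrContext.socketTextStream("localhost", 5554);
        receiveData.print();                // an output operation is required, or nothing runs
        sparkStrContext.start();            // start the computation
        sparkStrContext.awaitTermination(); // wait for the computation to terminate
    }
}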

How to configure and run remote celery worker correctly?

I'm new to celery and may be doing something wrong, but I have already spent a lot of time trying to figure out how to configure celery correctly.
So, in my environment I have 2 remote servers; one is the main server (it has a public IP address and hosts most of the stack: the database server, the RabbitMQ server, and the web server running my web application), and the other is used for specific tasks which I want to invoke asynchronously from the main server using celery.
I was planning to use RabbitMQ as the broker and as the results back-end.
Celery config is very basic:
CELERY_IMPORTS = ("main.tasks", )
BROKER_HOST = "Public IP of my main server"
BROKER_PORT = 5672
BROKER_USER = "guest"
BROKER_PASSWORD = "guest"
BROKER_VHOST = "/"
CELERY_RESULT_BACKEND = "amqp"
When I run a worker on the main server, tasks are executed just fine, but when I run it on the remote server, only a few tasks are executed and then the worker gets stuck, unable to execute any more tasks. When I restart the worker, it executes a few more tasks and gets stuck again. There is nothing special inside the tasks; I even tried a test task that just adds 2 numbers. I tried running the worker in different ways (daemonized and not, with different concurrency settings, and using celeryd_multi); nothing really helped.
What could be the reason? Did I miss something? Do I have to run something on the main server other than the broker (RabbitMQ)? Or is it a bug in celery (I tried a few versions: 2.2.4, 2.3.3 and dev, but none of them worked)?
Hm... I've just reproduced the same problem on a local worker, so I don't really know what it is... Is it required to restart the celery worker after every N tasks executed?
Any help will be very much appreciated :)
I don't know if you ended up solving the problem, but I had similar symptoms. It turned out that (for whatever reason) print statements from within tasks were causing tasks not to complete (maybe some sort of deadlock situation?). Only some of the tasks had print statements, so as those tasks executed, the worker's pool processes (their number is set by the concurrency option) were eventually all exhausted, which caused tasks to stop executing.
Try setting your celery config to
CELERYD_PREFETCH_MULTIPLIER = 1   # reserve only one task at a time
CELERYD_MAX_TASKS_PER_CHILD = 1   # replace each pool process after every task
(see the Celery docs for both settings)
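For a Celery of that vintage (2.x), the worker was started with the celeryd command; a sketch for watching the remote worker in the foreground while testing:

celeryd --loglevel=DEBUG --concurrency=4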

WCF and MsmqBinding to remote private queue

We have a WCF log service that uses MsmqBinding and WAS. The issue is that when I try to use it from a remote computer, the message never seems to reach the destination queue. Here are the facts:
Server config
destination machine name : logserver.domain.ext
destination queue : private$/logservice.svc (journaling enabled)
security on the queue : everyone : full control, NETWORK SERVICE : Full Control
IgnoreOSNameValidation registry key : set
Client config
client endpoint address : net.msmq://logserver.domain.ext/private/logservice.svc
Observed behaviour
the outgoing queue is created correctly, has status Connected, and shows 0 messages waiting
if I pause the outgoing queue, I see messages appear and then disappear when I resume the queue
no message can be seen in the remote queue or its journal
and the worst part is that:
var queue = new MessageQueue(@"FormatName:DIRECT=OS:logserver.domain.ext\private$\logservice.svc");
queue.Send("hello");
works!
You do not mention permissions for the ANONYMOUS LOGON account. This is the account under which remote private-queue access happens if you are not explicitly using Windows security on the binding.
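For illustration, a client-side configuration sketch that disables transport security on the MSMQ binding (the binding and contract names here are hypothetical); with security off, the queue ACL must then grant Send to ANONYMOUS LOGON:

<endpoint address="net.msmq://logserver.domain.ext/private/logservice.svc"
          binding="netMsmqBinding"
          bindingConfiguration="NoMsmqSecurity"
          contract="ILogService" />
<bindings>
  <netMsmqBinding>
    <binding name="NoMsmqSecurity">
      <!-- remote access then happens under ANONYMOUS LOGON -->
      <security mode="None" />
    </binding>
  </netMsmqBinding>
</bindings>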
I was facing the same issue, and it turned out to be the Distributed Transaction Coordinator configuration; an MSDN document on configuring MSDTC helped me solve it.