Multiple consumers are created after site recycle - RabbitMQ/MassTransit

I use MassTransit in my .NET web application to connect to RabbitMQ.
Sometimes after a site recycle, I see lots of consumers on a single queue. I did not set up competing consumers, and in a normal situation I should have only one consumer per queue.
When this problem happens, my messages get processed very slowly (I assume the time depends on my retry policy) and I have to shut down the site and start it again.
I use MassTransit 5.2.3 / RabbitMQ 3.7.6.
Could anyone give me a clue as to what the problem could be?
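For context, below is a minimal sketch of the kind of setup involved (the message, consumer and queue names are placeholders, not my real ones). My working assumption is that every bus instance that is started but never stopped keeps its own consumer attached to the queue, so an old worker process that survives the recycle would show up as extra consumers:

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

// Placeholder message and consumer types, for illustration only.
public class OrderSubmitted
{
    public Guid OrderId { get; set; }
}

public class OrderSubmittedConsumer : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        // process the message here
        return Task.CompletedTask;
    }
}

public static class BusSetup
{
    public static IBusControl Bus { get; private set; }

    // Called once at application start-up.
    public static void Start()
    {
        Bus = MassTransit.Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
            {
                h.Username("guest");
                h.Password("guest");
            });

            // One named receive endpoint -> one queue, so I expect one consumer.
            cfg.ReceiveEndpoint(host, "order-submitted", e =>
            {
                e.Consumer<OrderSubmittedConsumer>();
            });
        });

        Bus.Start();
    }

    // Called from Application_End (or an IRegisteredObject) so the consumer
    // is removed before the old worker process is torn down.
    public static void Stop()
    {
        Bus?.Stop();
    }
}
```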

Related

RabbitMQ as Message Broker used by Spring Websocket dies under load

I am developing an application where we need to handle 160k concurrent users who are connected to the backend via a WebSocket connection.
We decided to use the Spring WebSocket implementation and RabbitMQ as the message broker.
In our application every user needs to subscribe to its user queue /exchange/amq.direct/update as well as to another queue to which other users can also potentially subscribe, /topic/someUniqueName.
In our first performance test we took the naive approach where every user subscribes to two new queues.
When running the test, RabbitMQ dies silently once around 800 users are connected at the same time, i.e. around 1600 queues are active (see the graph of all RabbitMQ objects here).
I have read, though, that you should be careful about opening many connections to RabbitMQ.
Now I wonder whether the approach anticipated by Spring WebSocket of opening one queue per user is a conceptual problem for systems under high load, or whether there is another error in my system.
Limiting factors for RabbitMQ are usually:
memory (can be checked in the dashboard), which needs to grow with the number of messages and the number of queues (unless you use lazy queues, which go directly to disk);
the maximum number of file descriptors (at least one per connection), which often defaults to a value that is too low on many distributions (ref: https://lists.rabbitmq.com/pipermail/rabbitmq-discuss/2012-April/019615.html);
CPU for routing the messages.
I did find the issue. I had actually misconfigured the RabbitMQ service and given it only a 1024 file descriptor limit. Increasing it solved the issue.
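In case anyone else hits this: on a systemd-based install the fix is roughly the drop-in below (65536 is just an example value; pick one that matches your expected connection count):

```ini
# /etc/systemd/system/rabbitmq-server.service.d/limits.conf
# Apply with: systemctl daemon-reload && systemctl restart rabbitmq-server
# Verify with: rabbitmqctl status  (check the file_descriptors / total_limit value)
[Service]
LimitNOFILE=65536
```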

Why does NServiceBus post empty messages to MSMQ?

Does NServiceBus, at any point, for any reason, have to post empty messages to MSMQ, and if so, why and when does it happen? Longer explanation below.
A project I work on makes use of NServiceBus version 4. That version does not allow "multi-hosting" of event handlers for different queues in a single process, which may be inconvenient if your project contains 40 or so different queues.
To overcome this problem in development, I made a small "router" app, which listens to all the necessary MSMQ queues and simply forwards all messages from them into a single "unified" queue. That "unified" queue is specified as the queue name for the "unified endpoint" process, which references all the handlers for all the messages that would normally be handled from those various queues.
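Roughly, the router does something like the following for each source queue (the queue paths are placeholders and error handling is omitted):

```csharp
using System.Messaging;
using System.Threading.Tasks;

class Router
{
    const string UnifiedPath = @".\private$\unified";

    static void Main()
    {
        // Placeholder queue paths; the real router listens to ~40 queues.
        string[] sourcePaths = { @".\private$\sales", @".\private$\billing" };

        var workers = new Task[sourcePaths.Length];
        for (int i = 0; i < sourcePaths.Length; i++)
        {
            var source = new MessageQueue(sourcePaths[i]);
            // Read every property, including the extension, which is where the
            // NServiceBus MSMQ transport keeps its headers.
            source.MessageReadPropertyFilter.SetAll();

            workers[i] = Task.Run(() =>
            {
                // One MessageQueue instance per worker; instances are not thread-safe.
                var unified = new MessageQueue(UnifiedPath);
                while (true)
                {
                    Message message = source.Receive(); // blocks until a message arrives
                    // Use MessageQueueTransactionType.Single instead if the
                    // unified queue is transactional.
                    unified.Send(message);
                }
            });
        }

        Task.WaitAll(workers);
    }
}
```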
The setup kind of works, it seems (with most handlers, at least), but there is one mysterious behaviour (which, I presume, may have something to do with the setup not working with some other handlers). Namely, as soon as the project starts up, my "router" immediately discovers a number of empty MSMQ messages posted to the queues it has to listen to. Apparently, NSB is publishing those messages during start-up for some reason (and most probably the router is snatching them up before NSB has the chance to look at them again).
I am sure this is not an artefact of my implementation as this does not happen unless NSB is also started. I am curious about the reasons.
NServiceBus, by default, auto-subscribes to all handled events if it knows the endpoint that publishes them. The empty messages you see might be these subscribe messages, since they are sent during the endpoint start-up phase.
The mechanics behind the subscribe messages are documented here. TL;DR: for transports that do not provide publish/subscribe natively (e.g. MSMQ, Azure Storage Queues), NServiceBus emulates it using subscribe messages and internal subscription lists (storages).
You can verify this by checking the message intent header. If they are not subscribe messages, please share the complete list of headers of such a message for further investigation.
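If it helps, a quick-and-dirty way to check this from the router's side is to peek the raw MSMQ messages and dump the extension bytes, which is where the MSMQ transport keeps the serialized NServiceBus headers (the queue path below is a placeholder):

```csharp
using System;
using System.Messaging;
using System.Text;

class InspectIntent
{
    static void Main()
    {
        // Placeholder path - point this at one of the queues the router drains.
        var queue = new MessageQueue(@".\private$\some.endpoint.queue");
        queue.MessageReadPropertyFilter.SetAll();

        foreach (Message message in queue.GetAllMessages())
        {
            // The MSMQ transport serializes the NServiceBus headers into the
            // extension bytes; a plain text dump is enough to spot the intent.
            // (UTF-8 assumed here; adjust if the dump looks garbled.)
            var headers = Encoding.UTF8.GetString(message.Extension);
            var isSubscribe = headers.Contains("NServiceBus.MessageIntent")
                              && headers.Contains("Subscribe");

            Console.WriteLine("{0}: subscribe message = {1}", message.Id, isSubscribe);
        }
    }
}
```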

MassTransit - Distributed Messaging Model - Reliable/Durable - NServiceBus too expensive

I would like to use MassTransit similarly to NServiceBus: every publisher and subscriber has a local queue. However, I want to use RabbitMQ.
So do all my desktop clients have to have RabbitMQ installed? I think so. Then should I just connect the 50 desktop clients and 2 servers into a cluster?
I know the two servers must be in the same cluster. However, 50 client nodes seems a bit much to put in one cluster... Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
The servers are dealing with backend HL7 messages.
Any help and advice here is much appreciated; this is all on Windows machines.
Basically I am leaving NServiceBus behind, as it is now too expensive; they are aiming it at large corporations with big budgets, hence MassTransit.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers.
The desktops also use CQS to update their views.
should I just connect the 50 desktop clients and 2 servers into a cluster?
Yes, you have to connect your clients to the cluster.
However 50 client nodes seems a bit much to put in one cluster.
No (or it depends on how big your servers are); 50 clients is a small number.
Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
I think the cluster is better, because federation and the shovel are asynchronous, which means your LockOrder might not be replicated in time.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers
With RabbitMQ you can create persistent queues and messages, and it does not matter whether the client(s) are connected; they will get the messages when they connect to the broker.
I hope it helps.
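For completeness, this is roughly what a durable queue plus persistent messages look like with the plain RabbitMQ .NET client (queue name, host and payload are placeholders; MassTransit configures the equivalent for you):

```csharp
using System.Text;
using RabbitMQ.Client;

class DurablePublish
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: the queue definition survives a broker restart.
            channel.QueueDeclare(queue: "lock-order",
                                 durable: true,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            // Persistent message: written to disk and kept until a consumer
            // (e.g. a desktop client that is currently offline) picks it up.
            var props = channel.CreateBasicProperties();
            props.Persistent = true;

            var body = Encoding.UTF8.GetBytes("{\"orderId\":42,\"action\":\"LockOrder\"}");
            channel.BasicPublish(exchange: "",
                                 routingKey: "lock-order",
                                 basicProperties: props,
                                 body: body);
        }
    }
}
```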
I have a FOSS ESB project called Shuttle, if you would like to give it a spin: https://github.com/Shuttle/shuttle-esb
I haven't used NServiceBus for a while and actually started Shuttle when it went commercial. The implementation is somewhat different from NServiceBus. I don't know MassTransit at all, though. Currently process managers (sagas) have to be hand-rolled in Shuttle whereas MassTransit and NServiceBus have this incorporated. If I do get around to adding sagas I'll be adding them as a Module that can be plugged into the receiving pipeline. This way one could have various implementations and choose the flavour you like :)
Back to your issue. Shuttle has the concept of an optional outbox for queuing technologies like RabbitMQ. Shuttle does have a RabbitMQ implementation. I believe the outbox works somewhat like 'shovel' does. So the outbox would be local and sending messages would first go to the outbox. It would periodically try to send messages on to the recipients and, after a configurable number of attempts, send the message to an error queue. It can then be returned to the outbox for further attempts, or even moved directly to the recipient queue once it is up.
Documentation here: http://shuttle.github.io/shuttle-esb/

Where are unique ReceiveFrom addresses really necessary on MassTransit with RabbitMQ?

Background
My group are complete noobs with MassTransit and messaging in general. I understand the simple demos found online, but I'm confused on how to set things up for non-trivial scenarios. (many producers, many consumers, with consumers communicating back to producers)
We currently make 3rd party web service calls directly from web code via synchronous calls. Some of them are notoriously slow and unreliable to the point of browser timeouts and YSODs that aren't directly our code's fault. We want to replace these sync calls with messages and eventual consistency for retries and poison queue.
We also want to replace various scheduled/batch tasks with messaging to get closer to real time processing instead of waiting for next batch to run.
Our website runs on a farm of 6 IIS servers behind a hardware load balancer. There are 2 additional "application" servers that run the scheduled tasks. I figure we will put our new worker services on the app servers or maybe even all 8 servers.
Questions
So... the "common gotchas" section of the MT docs says that each application needs its own address. My question is about what exactly the definition of "application" is in this case.
I have 6 web servers running the website. Does each of these need a unique address, or can they all just be "rabbitmq://localhost/MyApp/Website"? What if IIS is configured for multiple worker processes? Does each of those also need a different Rabbit address?
The same question goes for my 2 application servers. If I'm running the same worker on both boxes, does it need different addresses? Some sources say to share an address if you want competing consumers, but that if you want "event" type messages to be delivered to everyone, they need to be different addresses.
What if you need both event (broadcast) and command (consumed once) messages sent to a worker cluster? (Multiple instances of the same workers to handle more load.)
What if I have consumers hosted in the web application directly? (I'm not sure this is a good idea to start with.)
What about request/response messages? I assume the responses should go back to the originating web server; otherwise the MT request call will never unblock, or at best will time out.
Each instance of an IServiceBus needs its own ReceiveFrom address. And yeah, if there are multiple worker processes, each should have its own queue. You can use a temporary queue for this in web apps, though.
For competing consumers, each process/IServiceBus that is one of the consumers should be an exact copy. If there's an event that doesn't need to be competing, then it needs to have its own process.
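A rough sketch of what that can look like with the older IServiceBus-style configuration the gotchas page talks about (the queue names, message type and uniqueness scheme are all illustrative, not something MassTransit mandates):

```csharp
using System;
using MassTransit;

// Placeholder command type, for illustration only.
public class SubmitOrder
{
    public int OrderId { get; set; }
}

public static class BusConfig
{
    // Worker process on the app servers: both boxes run an exact copy and
    // read from the same queue, so they compete for each command message.
    public static IServiceBus CreateWorkerBus()
    {
        return ServiceBusFactory.New(sbc =>
        {
            sbc.UseRabbitMq();
            sbc.ReceiveFrom("rabbitmq://localhost/MyApp/Worker");

            sbc.Subscribe(s => s.Handler<SubmitOrder>(msg =>
            {
                // handle the command
            }));
        });
    }

    // Web process: mainly publishes and waits for request/response replies,
    // so each process (including each IIS worker process) gets its own queue,
    // here made unique with the machine name and process id.
    public static IServiceBus CreateWebBus()
    {
        var address = string.Format("rabbitmq://localhost/MyApp/Website_{0}_{1}",
                                    Environment.MachineName,
                                    System.Diagnostics.Process.GetCurrentProcess().Id);

        return ServiceBusFactory.New(sbc =>
        {
            sbc.UseRabbitMq();
            sbc.ReceiveFrom(address);
        });
    }
}
```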

message deleted from queue

I have used a BlockingQueue implementation to process my events from a queue with my services. However, if the server goes down, all the events in that queue are deleted and I therefore miss events to process. (I am looking for some internal DB where the server can store the events/messages from the queue, so that if the server goes down and comes up again, it can load all events/messages to process again without manual intervention.)
Any help on this? I am not sure whether I should use Apache ActiveMQ. I am using Apache ServiceMix.
Thanks in advance.
I cannot answer how to do this with BlockingQueue.
But ActiveMQ has two features that you will benefit from:
Persistent Queues, and possibly you might also want to look at Durable Queues.
It has a built-in database that does just this under the hood and allows messages to be persisted in the queue even if the broker or consumer has to restart.
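For reference, the "built-in database" is KahaDB. A stock ActiveMQ broker configuration already enables it with something like the snippet below (the directory shown is the default), and since the JMS default delivery mode is PERSISTENT, messages sent to a queue survive a broker restart out of the box:

```xml
<!-- activemq.xml, inside the <broker> element; KahaDB is the stock default -->
<persistenceAdapter>
    <kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
```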