Where are unique ReceiveFrom addresses really necessary on MassTransit with RabbitMQ? - rabbitmq

Background
My group is completely new to MassTransit and messaging in general. I understand the simple demos found online, but I'm confused about how to set things up for non-trivial scenarios (many producers, many consumers, with consumers communicating back to producers).
We currently make 3rd party web service calls directly from web code via synchronous calls. Some of them are notoriously slow and unreliable, to the point of browser timeouts and YSODs that aren't directly our code's fault. We want to replace these synchronous calls with messages and eventual consistency, using retries and a poison queue.
We also want to replace various scheduled/batch tasks with messaging to get closer to real-time processing instead of waiting for the next batch to run.
Our website runs on a farm of 6 IIS servers behind a hardware load balancer. There are 2 additional "application" servers that run the scheduled tasks. I figure we will put our new worker services on the app servers or maybe even all 8 servers.
Questions
So... The "common gotchas" section of the MT docs say that each application needs it's own address. My question is around what exactly is the definition of application in this case.
I have 6 web servers running the website. Does each of these need a unique address or can they all just be "rabbitmq://localhost/MyApp/Website". What if IIS is configured for multiple worker processes? Do each of those also need a different rabbit address?
Same question goes for my 2 application servers. If I'm running the same worker on both boxes does it need different addresses? Some stuff says if you want competing consumers to share an address, but if you want "event" type messages to be delivered to everyone they need to be different addresses.
What if you need both event (broadcast) and command (consumed once) messages sent to a worker cluster? (Multiple instances of the same workers to handle more load.)
What if I have consumers hosted in the web application directly? (I'm not sure this is a good idea to start with.)
What about request/response messages? I assume the responses should go back to the originating web server; otherwise the MT request call will never unblock, or at best will time out.

Each instance of an IServiceBus needs its own ReceiveFrom address. And yes, if there are multiple worker processes, each should have its own queue. In web apps you can use a temporary queue for this, though.
For competing consumers, each process/IServiceBus that is one of the consumers should be an exact copy. If there's an event that doesn't need to be competing, then it needs to have its own process.
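To make that concrete, here is a rough sketch using the MassTransit 2.x-era IServiceBus API the question is based on; the queue names and message types are made up for illustration.

```csharp
using System;
using MassTransit;

public class SubmitOrder { }     // command: should be handled by exactly one worker
public class OrderAccepted { }   // event: every website node wants a copy

// Worker processes on both app servers use the SAME ReceiveFrom address, so they
// act as competing consumers and each SubmitOrder is handled exactly once.
var workerBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseRabbitMq();
    sbc.ReceiveFrom("rabbitmq://localhost/MyApp/Worker");
    sbc.Subscribe(s => s.Handler<SubmitOrder>(msg => { /* call the slow 3rd-party service */ }));
});

// Each website node (and each IIS worker process) gets its OWN address, so a
// published OrderAccepted event reaches every one of them.
var webBus = ServiceBusFactory.New(sbc =>
{
    sbc.UseRabbitMq();
    sbc.ReceiveFrom("rabbitmq://localhost/MyApp/Website_" + Environment.MachineName);
    sbc.Subscribe(s => s.Handler<OrderAccepted>(msg => { /* refresh a local cache, etc. */ }));
});
```

With that split, a worker cluster can take both kinds of traffic: commands arrive once via the shared Worker queue, while published events reach every instance through the per-machine queues.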

Related

Multiple consumers are created after site recycle. RabbitMQ/MassTransit

I use MassTransit in my .NET web application to connect to RabbitMQ.
Sometimes after a site recycle I see lots of consumers on a single queue. I didn't set up competing consumers, and in a normal situation I should have only one consumer per queue.
When this problem happens, my messages get processed very slowly (I assume the time depends on my retry policy) and I have to shut down the site and start it again.
I use MassTransit 5.2.3 / RabbitMQ 3.7.6.
Could anyone give any clue as to what the problem could be?
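For reference, a minimal sketch of the kind of setup described above, using the MassTransit 5.x API with a single bus started once and stopped on shutdown (the queue name, credentials, and consumer are placeholders):

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

public class MyMessage { public string Text { get; set; } }

public class MyConsumer : IConsumer<MyMessage>
{
    public Task Consume(ConsumeContext<MyMessage> context) => Task.CompletedTask;
}

public static class BusConfig
{
    public static IBusControl BusInstance { get; private set; }

    // Called once, e.g. from Application_Start.
    public static void Start()
    {
        BusInstance = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
            {
                h.Username("guest");
                h.Password("guest");
            });

            cfg.ReceiveEndpoint(host, "my-service-queue", e =>
            {
                e.Consumer<MyConsumer>();
            });
        });

        BusInstance.Start();
    }

    // Called from Application_End, so the RabbitMQ connection (and its single
    // consumer) is released before the recycled process starts a new one.
    public static void Stop() => BusInstance?.Stop();
}
```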

Multiple Mule applications are receiving the same message running on different ports

The setup: Several Mule apps (at least 4) running under ESB 3.8.1 CE are deployed with inbound endpoints listening on different ports. Each links a unique .NET application with distinct Salesforce instances. Each listener has SOAP endpoints with an identical webmethod to POST to (although the WSDLs themselves are not necessarily identical).
The mystery: When one of the endpoints is called directly (url:19001, say) the message is processed by that Mule inbound listener, correctly logging the call along with its arguments and dispatching it to flows within that application. That SAME message is also picked up by other inbound listeners (with endpoints of url:19004, url:19007 and url:19010, say) which also log this call (and ultimately fail) simultaneously.
In my time-consuming attempts to track down the cause of this behavior, I've found this sort of "cross talk" also happens on Windows deployments. I've found that widening port ranges significantly has no effect either. The web seems to have nothing to offer in terms of explanation, this being the first S.O. question about it, for example.
The question: What is going on? Has anyone else experienced this and how the heck do we stop it?

MassTransit - Distributed Messaging Model - Reliable/Durable - NServiceBus too expensive

I would like to use MassTransit similarly to NServiceBus, where every publisher and subscriber has a local queue. However, I want to use RabbitMQ.
So do all my desktop clients have to have RabbitMQ installed? I think so. Then should I just connect the 50 desktop clients and 2 servers into a cluster?
I know the two servers must be in the same cluster. However, 50 client nodes seems a bit much to put in one cluster... Or should I use Shovel or Federation to connect them to the server cluster's exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
The servers deal with backend HL7 messages.
Any help and advice here is much appreciated; this is all on Windows machines.
Basically I am leaving NServiceBus behind, as it is now too expensive; they are aiming it at large corporations with big budgets, hence MassTransit.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers.
The desktops also use CQS to update their views.
should I just connect the 50 desktop clients and 2 servers into a cluster?
Yes, you have to connect your clients to the cluster.
However, 50 client nodes seems a bit much to put in one cluster.
No (or rather, it depends on how big your servers are); 50 clients is a small number.
Or should I use Shovel or Federation to connect them to the server cluster's exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
I think the cluster is better, because Federation and Shovel are asynchronous, which means your LockOrder might not be replicated in time.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers
With RabbitMQ you can create durable queues and persistent messages, and the client(s) do not need to be connected: they will get the messages when they connect to the broker.
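For example, with the raw RabbitMQ.Client library a durable queue and a persistent message look roughly like this (the queue name and payload are made up; a durable endpoint in MassTransit gives you broadly the same thing):

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // durable: true -> the queue definition survives a broker restart
    channel.QueueDeclare(queue: "lock-order",
                         durable: true,
                         exclusive: false,
                         autoDelete: false,
                         arguments: null);

    var props = channel.CreateBasicProperties();
    props.Persistent = true; // the message body is written to disk as well

    var body = Encoding.UTF8.GetBytes("LockOrder:12345");
    channel.BasicPublish(exchange: "",
                         routingKey: "lock-order",
                         basicProperties: props,
                         body: body);

    // A consumer that is offline right now will still receive this message
    // when it connects to the broker later.
}
```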
I hope it helps.
I have a FOSS ESB project called Shuttle, if you would like to give it a spin: https://github.com/Shuttle/shuttle-esb
I haven't used NServiceBus for a while and actually started Shuttle when it went commercial. The implementation is somewhat different from NServiceBus. I don't know MassTransit at all, though. Currently process managers (sagas) have to be hand-rolled in Shuttle whereas MassTransit and NServiceBus have this incorporated. If I do get around to adding sagas I'll be adding them as a Module that can be plugged into the receiving pipeline. This way one could have various implementations and choose the flavour you like :)
Back to your issue. Shuttle has the concept of an optional outbox for queuing technologies like RabbitMQ. Shuttle does have a RabbitMQ implementation. I believe the outbox works somewhat like 'shovel' does. So the outbox would be local and sending messages would first go to the outbox. It would periodically try to send messages on to the recipients and, after a configurable number of attempts, send the message to an error queue. It can then be returned to the outbox for further attempts, or even moved directly to the recipient queue once it is up.
Documentation here: http://shuttle.github.io/shuttle-esb/

Connect NServiceBus with an AIX Mainframe

I have a back end system that drops events to my system. It is critical that these events don't get lost (I work for a health care company and lost info can impact a patient's care).
I would like to make this system drop its data into NServiceBus so that it can be published to subscribers that need it. However, the server that is dropping these messages is an AIX machine, so it can't run .NET code.
This system can send the messages via a lot of standard protocols and communication types (TCP, WSDL-based services, calling a database sproc, etc.).
One option I have considered is to setup a WCF service that the AIX mainframe will call. I can then have my WCF service make the call to NServiceBus.
But the events sent by this back-end service can at times be fairly frequent (about 500 messages per minute). I am worried that WCF is not up to this, while NServiceBus says it can handle 1000 messages per second. I am also worried about data loss in the event of downtime; NServiceBus claims it is not going to lose any data.
Am I wrong? Is WCF going to be just fine? Or am I making a weak link in the chain?
Is there a way I can use an established protocol to add items directly to an NServiceBus Queue?
Or should I just write my own .NET app that will allow NServiceBus to use a TCP connection?
Note: Because these messages are critical, the message must be acknowledged or the server will keep sending it.
I would take a look at the WCF integration that comes right out of the box. The WCF service is contained within the same host as NSB. The integration does nothing more than just push the message onto the queue, so I don't think you'll have a throughput issue. Seeing that this is critical data, I would suggest clustering the service. The other option would be to install 2 or more instances of the service on different machines and load balance the HTTP calls across both. In essence you would have 1 logical Publisher with 2 physical components doing the publishing.
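As a rough illustration of that idea (this is a hand-rolled equivalent rather than the stock NServiceBus WCF host, and the contract and message names are hypothetical), the WCF operation does nothing but hand the call to the bus, so the AIX caller gets its acknowledgement as soon as the message is safely on the durable queue:

```csharp
using System.ServiceModel;
using NServiceBus;

// The message that gets placed on the durable queue.
public class PatientEventReceived : IMessage
{
    public string Payload { get; set; }
}

[ServiceContract]
public interface IEventIntake
{
    [OperationContract]
    void Submit(string payload);
}

public class EventIntakeService : IEventIntake
{
    private readonly IBus _bus; // the NServiceBus endpoint, injected by the host

    public EventIntakeService(IBus bus)
    {
        _bus = bus;
    }

    public void Submit(string payload)
    {
        // Enqueueing is fast and durable; the slow processing happens later
        // in an NServiceBus message handler, with retries if it fails.
        _bus.Send(new PatientEventReceived { Payload = payload });
    }
}
```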

Advice on disconnected messages with WCF through firewalls

All,
I'm looking for advice over the following scenario:
I have a component running in one part of the corporate network that sends messages to an application logic component for processing. These components might reside on the same server, on different servers in the same network (LAN or WAN), or live outside in the cloud. The application server should be scalable and resilient.
The messages are related in that the sequence they arrive is important. They are time-stamped with the client timestamp.
My thinking is that I'll get the clients to use WCF basicHttpBinding (some are based on .NET CF, which only has basic) to send messages to the application server (this is because we can guarantee port 80/443 will be open for outgoing connections). The server accepts these and writes them into a queue. This queue can be scaled out over multiple machines if needed.
I'm hesitant to use MSMQ for the queue though, as to scale out properly we would have to install separate private queues on each application server and monitor the queues round-robin. I'm concerned that we could lose a message on a server that's gone down until that server is restored, and that we could end up processing a later message from a different server and disrupt the sequence.
What I'd prefer is a central queue (e.g. a database table) that all application servers monitor.
With this in mind, what I'd like to do is create a custom WCF binding, similar to netMsmqBinding, but one that uses the DB table instead. I'm confused as to whether I can simply create a custom transport or whether I need a full binding, and whether the binding will allow the client to send over HTTP. I've looked around the internet but I'm a little confused as to where to start.
I could skip the custom WCF binding, but it seems a good way to introduce scalability if I do need to separate the servers.
Any suggestions please would be helpful, including alternatives.
Many thanks
I would start with MSMQ because it exists for exactly this purpose. Use a single transactional queue on a clustered machine and let the application servers take messages for processing from this queue. Each message's processing has to be part of a distributed transaction (MSDTC).
This scenario will ensure:
A clustered queue host ensures that if one cluster node fails, the other will still be able to handle requests.
Sending each message as recoverable means that the message is persisted to disk (not only held in memory), so in a critical failure of the whole cluster you will still have all the messages.
A transactional queue ensures that all message transport operations are atomic: moving a message from the outgoing queue to the destination queue is processed as a transaction, meaning the original message is kept in the outgoing queue until an ack from the destination queue arrives. Transactional processing can also ensure in-order delivery.
A distributed transaction allows the application servers to consume messages transactionally: a message will not be deleted from the queue until the application server commits the transaction (or the transaction times out).
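As a small sketch of what the recoverable, transactional send looks like with System.Messaging (the queue path below is a placeholder):

```csharp
using System.Messaging;

// Path of the clustered transactional queue (placeholder).
var path = @"FormatName:DIRECT=OS:myclusterhost\private$\orders";

using (var queue = new MessageQueue(path))
using (var tx = new MessageQueueTransaction())
{
    tx.Begin();

    var message = new Message("order payload")
    {
        Recoverable = true   // persist to disk, not just in memory
    };

    queue.Send(message, tx); // only visible to receivers once the transaction commits
    tx.Commit();
}
```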
MSMQ is also available on .NET CF, so you can send messages directly to the queue without an intermediate, non-reliable web service layer.
It should be possible to configure MSMQ over HTTP (but I have never used it, so I'm not sure how it cooperates with the previously mentioned features).
Your proposed solution will be pretty hard; you will end up building BizTalk's MessageBox. But if you really want to do it, check Omar's post about building a database queue table.
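For reference, the core of such a database queue table is usually a polled dequeue with locking hints so that several application servers can share one table; here is a rough sketch with made-up table, column, and method names:

```csharp
using System.Data.SqlClient;

public static class DbQueue
{
    // Atomically claim and remove the oldest message. READPAST lets other
    // servers skip rows this server has locked, while ROWLOCK/UPDLOCK prevent
    // two servers from taking the same row. Note that READPAST can skip over
    // rows another server is still working on, so strict global ordering
    // across concurrent consumers still needs extra care.
    const string DequeueSql = @"
        WITH next AS (
            SELECT TOP (1) *
            FROM dbo.MessageQueue WITH (ROWLOCK, UPDLOCK, READPAST)
            ORDER BY EnqueuedUtc
        )
        DELETE FROM next
        OUTPUT DELETED.Id, DELETED.Body;";

    public static string TryDequeue(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(DequeueSql, conn))
        {
            conn.Open();
            using (var reader = cmd.ExecuteReader())
            {
                return reader.Read() ? reader.GetString(1) : null; // null = queue empty
            }
        }
    }
}
```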