Worker node handling messages from two distributors - NServiceBus

I have the same question asked in the NServiceBus group. I did not get a firm answer on whether this feature is supported, so I would like to post it here to get the SO community's thoughts.
http://tech.groups.yahoo.com/group/nservicebus/message/16487
I already have Windows worker nodes that handle messages from a
Distributor. Now I would like to extend these worker nodes to handle messages from
another distributor with different queue names. When I looked at the unicast bus
configuration, I found that only one distributor control and data address can be
set. Is there a way to set up multiple distributors in the NServiceBus configuration?
If you could also explain the pros and cons of handling multiple distributors,
that would help.

It sounds like you may be using NServiceBus 2.x, because in NServiceBus 3.0, the Distributor story is very much changed.
Under NServiceBus 2.x, you usually set up multiple endpoints all talking to the same distributor. These endpoints become worker nodes and the distributor divides up the work between them based on each worker node reporting when it has a free thread.
So, if you had the load of messages coming into Queue X handled by X.Worker#Server1 and X.Worker#Server2, I don't see why you would want one of the X.Worker instances to also handle messages coming into Queue Y.
Instead, you should (normally) set up one Distributor per logical service. This is akin to a Network Load Balancer for HTTP traffic. Then the endpoints behind it act as the worker nodes. You can set up a second distributor, with its own worker nodes, for another logical service.
Now, with all that said, in NServiceBus 3.x, the distributor is integrated with the endpoint. So you start off with one endpoint configured as a Master Node. Basically it functions as a distributor AND a worker. Then to scale out, you simply stand up more nodes in Worker role only, pointing at the Master Node to get their work.
In that scenario, there is (generally) no freestanding Distributor. This is why I'm guessing you're referring to V2.
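To make those 2.x mechanics concrete, here is a minimal conceptual sketch in plain Java (the class and method names are invented; NServiceBus itself does this over MSMQ control and data queues, not in-process queues) of a distributor pairing incoming messages with workers that have reported a free thread:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Conceptual sketch only: NServiceBus 2.x implements this over MSMQ
// control/data queues rather than in-process queues like these.
public class DistributorSketch {
    // Workers send a "ready" signal on the control channel whenever a
    // processing thread frees up; we track those signals here.
    private final BlockingQueue<String> readyWorkers = new LinkedBlockingQueue<>();
    private final BlockingQueue<String> dataQueue = new LinkedBlockingQueue<>();

    public void workerReportsFreeThread(String workerAddress) throws InterruptedException {
        readyWorkers.put(workerAddress);
    }

    public void acceptWork(String message) throws InterruptedException {
        dataQueue.put(message);
    }

    // The distributor loop: pair each incoming message with the next worker
    // that has signalled a free thread, so no worker is pushed more work
    // than it asked for.
    public void run() throws InterruptedException {
        while (true) {
            String message = dataQueue.take();
            String worker = readyWorkers.take(); // blocks until a worker is free
            forwardTo(worker, message);
        }
    }

    private void forwardTo(String worker, String message) {
        System.out.printf("forwarding '%s' to %s%n", message, worker);
    }
}
```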

Related

Instance Mapping - NServiceBus

We can use "Instance Mapping" to route messages to the same endpoint instance hosted on multiple physical servers. What would be the impact if one of the physical machines goes down for any reason? Will the NServiceBus framework start routing messages to the remaining active "physical" machines?
Regards
You're talking about MSMQ, which uses store & forward: a message is first stored locally on the sending server before it's forwarded to the destination machine.
There are two options to scale out:
Distributor
Sender Side Distribution
From your question I assume you chose Sender Side Distribution, using endpoint instance mapping. In that same document, there's a section about the limitations, which mentions:
Sender-side distribution does not use message processing confirmations (the distributor approach). Therefore the sender has no feedback on the availability of workers and, by default, sends the messages in a round-robin behavior. Should one of the nodes stop processing, the messages will pile up in its input queue. As such, nodes running in sender-side distribution mode require more careful monitoring compared to distributor workers.
So the messages keep being sent to the machine that is down. If it is entirely unreachable, the messages will remain in the Outgoing queue on the sender machine. Otherwise they'll be stored on the incoming queue on the processing machine.
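To illustrate why the messages pile up, here is a minimal sketch in plain Java (invented names, not the NServiceBus routing API) of round-robin sender-side distribution: the sender cycles through a static instance list with no health feedback, so a down instance keeps receiving its share:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Conceptual sketch of sender-side distribution: the sender round-robins
// over statically mapped instances and gets no confirmation back, so a
// dead instance's input queue simply keeps filling up.
public class RoundRobinSender {
    private final List<String> instances; // e.g. from an instance-mapping file
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinSender(List<String> instances) {
        this.instances = instances;
    }

    public void send(String message) {
        int index = Math.floorMod(counter.getAndIncrement(), instances.size());
        String target = instances.get(index);
        // No availability check: if 'target' is down, the message still heads
        // for that instance's input queue (or waits in the sender's outgoing
        // queue if the machine is entirely unreachable).
        System.out.printf("sending '%s' to %s%n", message, target);
    }

    public static void main(String[] args) {
        RoundRobinSender sender = new RoundRobinSender(List.of("ServerA", "ServerB"));
        for (int i = 0; i < 4; i++) {
            sender.send("order-" + i); // alternates ServerA, ServerB, ...
        }
    }
}
```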

NServiceBus + RabbitMQ and the Distributor

NServiceBus Distributor/Worker pattern makes perfect sense for MSMQ due to the hard requirement of local input queues.
But this is not the case with RabbitMQ, so I am trying to understand how and when the NServiceBus distributor is relevant with RabbitMQ. With RabbitMQ, multiple workers can read from the same remote queue.
The actual scenario is similar to using an AWS auto-scaling group to scale out workers pointing to a high available RabbitMQ cluster. Now avoiding distributor altogether makes the setup much simpler to build, test and provision.
Thoughts?
The RabbitMQ transport falls into the broker-style bus category, so in your use case it would make more sense not to use the distributor.
The same goes for all broker-style transports, where you can use a competing consumer pattern to scale out.
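As a sketch of the competing consumer pattern with the RabbitMQ Java client (the queue name and host are assumptions), each scaled-out worker runs something like the following, and the broker delivers each message to exactly one of them:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

// Competing consumers: run this same program on N machines and the broker
// load-balances the shared "work" queue across them. No distributor needed.
public class CompetingConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.local"); // assumed host
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        channel.queueDeclare("work", true, false, false, null);
        channel.basicQos(1); // at most one unacknowledged message per consumer

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), StandardCharsets.UTF_8);
            System.out.println("processing " + body);
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("work", false, onMessage, consumerTag -> { });
    }
}
```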
NServiceBus is an excellent system and does wonders in most message queuing systems that don't have an integrated distribution mechanism (which you do have with exchanges in RabbitMQ). We use NServiceBus here at our company.
Azure Queues and MSMQ are perfect examples of such queuing technologies.
NServiceBus handles the distribution internally and therefore provides this capability for you.
However, if you are in a position to choose which queuing technology you use, then I would highly encourage you to look into RabbitMQ and an open-source product called MassTransit:
http://masstransit-project.com/
MassTransit can function in both modes and will either delegate or simulate the distribution for you. That said, I nonetheless have a soft spot for NServiceBus, as do our senior devs here.
Per this page...
http://docs.particular.net/nservicebus/load-balancing-with-the-distributor
Using the distributor is only useful when using MSMQ; if you aren't using MSMQ then there is no point. RabbitMQ and other transports allow access to the same queue from multiple consumers, while MSMQ does not. The distributor, in a nutshell, takes messages from the main queue and distributes them across multiple worker queues as the workers report that they are done with whatever they are working on.

MassTransit - Distributed Messaging Model - Reliable/Durable - NServiceBus too expensive

I would like to use MassTransit similarly to NServiceBus, where every publisher and subscriber has a local queue. However, I want to use RabbitMQ.
So do all my desktop clients have to have RabbitMQ installed? I think so. Should I then just connect the 50 desktop clients and 2 servers into a cluster?
I know the two servers must be in the same cluster. However, 50 client nodes seems a bit much to put in one cluster... Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnLock Order.
The servers deal with backend HL7 messages.
Any help and advice here is much appreciated; this is all on Windows machines.
Basically I am leaving NServiceBus behind, as it is now too expensive; they are aiming it at large corporations with big budgets, hence MassTransit.
However, I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers.
The desktops also use CQS to update their views.
should I just connect the 50 desktop clients and 2 servers into a cluster?
Yes, you have to connect your clients to the cluster.
However 50 client nodes seems a bit much to put in one cluster.
No (although it depends on how big your servers are); 50 clients is a small number.
Or should I shovel them or Federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnLock Order.
I think the cluster is better, because federation and shovel are asynchronous, which means your LockOrder might not be replicated in time.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers
With RabbitMQ you can create durable queues and persistent messages, and it does not matter whether the client is connected; it will get the messages when it connects to the broker.
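As a small sketch with the RabbitMQ Java client (queue name and host are assumptions): declare the queue durable and publish messages as persistent, and they survive broker restarts and wait for consumers that are offline:

```java
import java.nio.charset.StandardCharsets;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class DurablePublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("rabbitmq.example.local"); // assumed host
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable=true: the queue definition survives a broker restart.
            channel.queueDeclare("orders", true, false, false, null);
            // PERSISTENT_TEXT_PLAIN marks the message itself as persistent,
            // so it is written to disk and kept until a consumer acks it,
            // even if no client is connected right now.
            channel.basicPublish("", "orders",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    "LockOrder:42".getBytes(StandardCharsets.UTF_8));
        }
    }
}
```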
I hope it helps.
I have a FOSS ESB project called Shuttle, if you would like to give it a spin: https://github.com/Shuttle/shuttle-esb
I haven't used NServiceBus for a while and actually started Shuttle when it went commercial. The implementation is somewhat different from NServiceBus. I don't know MassTransit at all, though. Currently process managers (sagas) have to be hand-rolled in Shuttle whereas MassTransit and NServiceBus have this incorporated. If I do get around to adding sagas I'll be adding them as a Module that can be plugged into the receiving pipeline. This way one could have various implementations and choose the flavour you like :)
Back to your issue. Shuttle has the concept of an optional outbox for queuing technologies like RabbitMQ. Shuttle does have a RabbitMQ implementation. I believe the outbox works somewhat like 'shovel' does. So the outbox would be local and sending messages would first go to the outbox. It would periodically try to send messages on to the recipients and, after a configurable number of attempts, send the message to an error queue. It can then be returned to the outbox for further attempts, or even moved directly to the recipient queue once it is up.
Documentation here: http://shuttle.github.io/shuttle-esb/
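For illustration, here is a minimal conceptual sketch in plain Java of that outbox loop (invented names and retry policy; this is not Shuttle's actual API): messages are stored locally first, a periodic pass retries delivery, and after a configured number of attempts a message moves to an error queue:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual outbox sketch: send() only enqueues locally; deliverPending()
// runs periodically and moves repeatedly failing messages to an error queue.
public class OutboxSketch {
    private static final int MAX_ATTEMPTS = 5; // configurable in a real system

    record Pending(String destination, String body, int attempts) { }

    private final Deque<Pending> outbox = new ArrayDeque<>();
    private final Deque<Pending> errorQueue = new ArrayDeque<>();

    public void send(String destination, String body) {
        outbox.add(new Pending(destination, body, 0)); // store locally first
    }

    public void deliverPending() {
        for (int i = outbox.size(); i > 0; i--) {
            Pending p = outbox.poll();
            if (tryForward(p)) continue; // delivered, done
            if (p.attempts() + 1 >= MAX_ATTEMPTS) {
                errorQueue.add(p); // an operator can return it to the outbox later
            } else {
                outbox.add(new Pending(p.destination(), p.body(), p.attempts() + 1));
            }
        }
    }

    private boolean tryForward(Pending p) {
        // Placeholder: a real implementation would write to the recipient's queue.
        return false;
    }
}
```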

RabbitMQ federation vs ActiveMQ Master/Slave

I am trying to set up a cluster of brokers that should have the same features as a RabbitMQ cluster, but over a WAN (my machines are in different locations), so a RabbitMQ cluster does not work.
I am looking at alternatives. RabbitMQ federation just copies messages to the downstream; it cannot make sure both sides have exactly the same messages available at any time (the downstream still keeps old messages already consumed in the upstream).
How about ActiveMQ Master/Slave? I have found:
http://activemq.apache.org/how-do-distributed-queues-work.html
"queues and topics are all replicated between each broker in the cluster (so often to a master and maybe a single slave). So each broker in the cluster has exactly the same messages available at any time so if a master fails, clients failover to a slave and you don't loose a message."
My concern is whether it updates automatically to make sure master and slaves always have the same messages, meaning messages consumed on the master also disappear from the slaves.
Thanks :)
ActiveMQ has various clustering features.
First there is high availability: "Master/Slave". The idea is that several physical servers act as a single logical ActiveMQ broker. If one goes down, another takes its place without losing data. You can do that by sharing the message store (shared file system or shared JDBC), or you can set up a replicated cluster, which replicates reads/writes from the master down to all slaves (you need three or more servers). ActiveMQ uses LevelDB and Apache ZooKeeper to achieve this.
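On the client side, failing over from a dead master to a slave is typically handled by ActiveMQ's failover transport; a minimal JMS example (the broker hostnames are assumptions):

```java
import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class FailoverClient {
    public static void main(String[] args) throws Exception {
        // The failover: URI lists all brokers in the master/slave group.
        // Only the current master accepts connections; if it dies, the
        // client transparently reconnects to whichever slave takes over.
        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(
                "failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        // ... create producers/consumers on 'session' as usual ...
        session.close();
        connection.close();
    }
}
```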
The other form of clustering available in ActiveMQ lets you distribute load and separate security concerns over several logical brokers, connected in a network of brokers. Messages are by default passed along to the broker that has available consumers for them. However, there is a rich toolbox of features in ActiveMQ to tweak a network of brokers to do things such as always sending a copy of a message to a specific broker, etc. It takes some messing with the more advanced features, though (static network connectors and queue mirroring, maybe more).
Maybe there is a better way to meet your requirements, which are not really specified in the question?

Using Apache Camel for Load Balancing

Can I access a SEDA or VM queue from another machine or JVM?
I actually want to implement load balancing with the help of Camel, but do not want to introduce another messaging framework for this. I just want to distribute load from a producer to different consumers using some built-in queue.
Is it possible? If no then what are my options?
Another approach (pull approach):
I am not sure how optimal this new approach is, or what its advantages and disadvantages are, so please help me analyze it.
Messages will be put into a master queue and all the worker systems will be listening to that master queue. Let's say 100,000 messages are put into the master queue and 5 worker systems are listening to it. The worker systems process the messages one by one from the master queue. There are two big benefits with this approach:
I don't need to worry about registering my worker systems with the producer. A sixth system can just boot up and start listening to the master queue.
I don't need to worry about sending a message to a consumer system that is free. When a worker system is done processing a message, it picks up another one from the master queue.
Let me know your thoughts on it.
SEDA and VM:// endpoints work only within the same JVM.
Load balancing in Java messaging is usually achieved using JMS and the Competing Consumers pattern. You send messages to a queue and multiple consumers compete to process them.
If the broker with its queue becomes a bottleneck, consider using a fan-out pattern and a network of brokers.
SEDA and VM endpoints are valid within the hosting CamelContext and JVM, respectively. To facilitate JVM-to-JVM messaging you will need to use an over-the-wire protocol component such as, but not limited to, Mina, HTTP or JMS.
The easiest way is to use JMS. If you have n routes listening on the same JMS queue, then they will automatically load balance. If one goes away, the load will be balanced over the remaining ones. I recommend starting with ActiveMQ as it is very easy to set up and well integrated with Camel. To make the broker highly available, you can either set up two standalone brokers or set up one embedded broker per Camel instance.
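A minimal Camel route along those lines (Java DSL with the ActiveMQ component; the broker URL and queue name are assumptions): run the same route in several Camel instances, or raise concurrentConsumers within one, and the JMS queue is load balanced automatically:

```java
import org.apache.activemq.camel.component.ActiveMQComponent;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class JmsLoadBalancing {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        // Point the component at the shared broker; every Camel instance
        // running this route becomes one more competing consumer.
        context.addComponent("activemq",
                ActiveMQComponent.activeMQComponent("tcp://broker:61616"));
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // concurrentConsumers adds competing consumers inside this
                // JVM as well; scale out further by starting more instances.
                from("activemq:queue:work?concurrentConsumers=5")
                    .log("processing ${body}");
            }
        });
        context.start();
        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive to consume
    }
}
```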