Configuring an endpoint to act both as worker and subscriber - nservicebus

Is it possible to configure an endpoint to act as a worker retrieving jobs from a distributor AND subscribe to some kind of messages?
I have the following scenario (adapted to sales terminology):
*) A central department publishes a list of new prices every now and then. All workers have to be notified, which means a worker should subscribe to this event.
*) When a new order arrives at the central, it sends it to the distributor, which sends it to the next idle worker to be processed. That means a worker has to be configured to receive messages from the distributor.
I use the following configuration:
<MsmqTransportConfig
    InputQueue="worker"
    ErrorQueue="error"
    NumberOfWorkerThreads="2"
    MaxRetries="5"
/>
<UnicastBusConfig
    DistributorControlAddress="distributorControlBus"
    DistributorDataAddress="distributorDataBus">
  <MessageEndpointMappings>
    <add Messages="Events" Endpoint="messagebus" />
  </MessageEndpointMappings>
</UnicastBusConfig>
When I configure it only as a worker or only as a subscriber everything works as expected, but not when I configure it as both.
I discovered that the message arrives at the input queue of the central with the address of the distributor as the return address instead of the worker's address, and in that case the publisher doesn't recognize any subscriber.
Any ideas? Thanks in advance.

Workers are not supposed to be used in that way, AFAIK. I think the way to go would be to have your central subscribe to the prices, and when a "NewOrderMessage" arrives, enrich that data with the required prices (perhaps only the prices for the products in that particular order) and send a new ProcessOrderRequest to the input queue of the distributor.
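A rough sketch of that first approach, using the old NServiceBus handler API; the PriceCache lookup and the message members are made up for illustration:

public class NewOrderMessageHandler : IHandleMessages<NewOrderMessage>
{
    // Injected by NServiceBus.
    public IBus Bus { get; set; }

    public void Handle(NewOrderMessage message)
    {
        // Hypothetical store that the central fills whenever it handles the price-list event.
        var prices = PriceCache.GetPricesFor(message.ProductIds);

        // Send the enriched request to the distributor's data queue,
        // which hands it to the next idle worker.
        Bus.Send("distributorDataBus", new ProcessOrderRequest
        {
            OrderId = message.OrderId,
            Prices = prices
        });
    }
}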
Another way would be to have the process that sends the order request include the prices in the order request.
Does that make any sense?
/Andreas

Workers behind a distributor is how you scale out a single logical subscriber, not how you handle multiple logical subscribers. The point is that only a single worker out of the pool of workers should get a given message, in which case, you want all workers to look the same to the publisher - which is why the address of the distributor is given.
If you have multiple logical subscribers that you want to scale out, give each one of them their own distributor.
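For example, if Sales and Billing were two logical subscribers that each needed scaling out, each one's workers would point at that subscriber's own distributor in their UnicastBusConfig (the queue names below are made up):

<!-- worker in the Sales pool -->
<UnicastBusConfig
    DistributorControlAddress="salesDistributorControlBus"
    DistributorDataAddress="salesDistributorDataBus">
  ...
</UnicastBusConfig>

<!-- worker in the Billing pool -->
<UnicastBusConfig
    DistributorControlAddress="billingDistributorControlBus"
    DistributorDataAddress="billingDistributorDataBus">
  ...
</UnicastBusConfig>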

Related

Distribute rabbitmq messages evenly

At the moment we have a number of publishers (micro-services) which publish their messages to an exchange. Each message has a serviceId attribute. The queue is connected to a single subscriber (micro-service) which processes the queue messages; processing a single message is a costly operation (it takes about 20-30 secs).
Currently we have the following situation: service A publishes ~200 messages, and a few seconds later service B publishes 2 messages. So the subscriber will only process these 2 messages after the first 200 have been processed.
We want to process the messages in the order they came to the queue, but with respect to the source serviceId.
The obvious solution is to split the queue into separate queues (one per publisher) and subscribe to each queue separately, but the number of publishers can change, so we would need to discover them dynamically and subscribe to (and unsubscribe from) them.
Another approach is to replicate our subscriber app so that there is a one-to-one relationship between publishers and subscribers, but this would require more system resources.
What would be the best approach to handle this situation?
Thanks!
/!\ Be careful, publishers publish to an exchange, not to a queue.
"We want to process the messages in the order they came to the queue, but with respect to the source serviceId."
If I understand correctly, you want to load balance your messages according to a serviceId, and the serviceIds are not known in advance.
The solution I would suggest here is to have a direct exchange, with routing keys such as xxxxx.<serviceId>. Then you can bind one queue per serviceId (that is: one queue for service A, one for service B, ...), with each consumer consuming from all of the queues.
Then you have to handle new publishers announcing themselves: I would have a new publisher publish a "hello" message, which is consumed by each consumer; each consumer in turn binds a new queue for that service (using xxxxx.<newServiceId>) and finally publishes a response back (so that the publisher can start sending messages).
Note: each service queue is shared by all consumers, resulting in the worker (competing consumers) configuration (see this tutorial).
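A minimal sketch of that layout with the .NET RabbitMQ client; the exchange name "jobs" and the jobs.<serviceId> routing keys are made up, and the "hello" handshake is left out:

using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// One direct exchange shared by all publishers; the routing key carries the serviceId.
channel.ExchangeDeclare("jobs", ExchangeType.Direct, durable: true);

// Fetch one message at a time, so a burst from service A
// cannot monopolize a consumer while service B's messages wait.
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

// Called whenever a new serviceId becomes known (e.g. from the "hello" message).
void ConsumeServiceQueue(string serviceId)
{
    var queue = $"jobs.{serviceId}";
    channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind(queue, "jobs", routingKey: $"jobs.{serviceId}");

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (_, ea) =>
    {
        ProcessJob(Encoding.UTF8.GetString(ea.Body.ToArray()));   // the costly 20-30 s work
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };
    channel.BasicConsume(queue, autoAck: false, consumer);
}

// Publisher side: route each message by its own serviceId.
void Publish(string serviceId, string payload) =>
    channel.BasicPublish("jobs", $"jobs.{serviceId}", null, Encoding.UTF8.GetBytes(payload));

void ProcessJob(string payload) { /* handle the message */ }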
Hope this helps.

How to know the last job in a queue

I have a group of jobs that need to be processed.
Some may take 10 min, some may take 1 h.
Now I need to know which job is the last one to finish, because at the end of that group of jobs I need to fire another message.
The message queue in this case is RabbitMQ.
Is there a way I can accomplish this with only RabbitMQ?
What would be a good strategy for this task?
Here's a strategy you can use with any messaging system.
I assume you have a group of workers listening to a single queue of jobs (the "jobs queue") to be processed. Now you can have a service, let's call it the Manager, which duplicates this queue and saves all unfinished messages. When a worker finishes a job, it sends an acknowledgment message to the Manager. The Manager then, for example, discards all finished jobs and keeps only the running ones. (If you want to take possible failures into account, it can track those too.)
When the Manager has no more messages, it publishes a message to an "all messages in the group done" topic. Publishers can then listen to that topic and fire new job messages into the "jobs queue".
Of course, in the simple case you can have a single producer which is also the Manager at the same time.
Example RabbitMQ implementation.
To implement this in RabbitMQ you can, for example, create one fanout exchange (for the producer to send messages to) and two queues: jobsQueue (to send jobs to the workers) and jobTrackingQueue (to send copies to the Manager for tracking). Then you create a second fanout exchange (for the Manager to send "task done" messages to) and one unnamed (server-named) queue per producer that wants to know when all messages are done.
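A rough topology sketch with the .NET RabbitMQ client; the exchange names jobs.fanout and jobsDone.fanout are made up, while jobsQueue and jobTrackingQueue follow the naming above:

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Fanout exchange the producer publishes jobs to; every bound queue gets a copy.
channel.ExchangeDeclare("jobs.fanout", ExchangeType.Fanout, durable: true);
channel.QueueDeclare("jobsQueue", durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare("jobTrackingQueue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("jobsQueue", "jobs.fanout", routingKey: "");          // consumed by the workers
channel.QueueBind("jobTrackingQueue", "jobs.fanout", routingKey: "");   // consumed by the Manager

// Second fanout exchange the Manager publishes the "all jobs done" notification to.
channel.ExchangeDeclare("jobsDone.fanout", ExchangeType.Fanout, durable: true);

// Each producer that wants the notification binds its own server-named, exclusive queue.
var doneQueue = channel.QueueDeclare(queue: "", exclusive: true).QueueName;
channel.QueueBind(doneQueue, "jobsDone.fanout", routingKey: "");

The per-job acknowledgments that the workers send back to the Manager are left out here; they could go over another queue of your choosing.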

How to avoid duplicate delivery to a subscriber cluster in pub/sub

I am thinking about how to avoid duplicate delivery to a subscriber cluster in pub/sub. For example:
There is a service called email, which should send welcome emails after users sign up. Using pub/sub, the email service listens for an event called "signedUp" which is triggered each time a user signs up. However, what if I have 2 email service instances for load balancing? Without any special effort, I think two welcome emails would be sent out. So how do I solve this issue?
I prefer Redis as the pub/sub server for simplicity, or RabbitMQ if Redis doesn't work out.
I don't think it is possible to do this in Redis, but in RabbitMQ it can be done; let me explain below:
RabbitMQ has a separate concept called an 'exchange', distinct from a queue. The server publishes a message to an exchange, and clients can create queues bound to that exchange. Therefore, instances of one service can create (and bind) the same queue; that way the exchange delivers each message to the queue only once, and it is handled by only one instance.
Account service:
channel.assertExchange('signedUp', 'fanout');
channel.publish('signedUp', '', Buffer.from(message));
Email service:
let queue = channel.assertQueue('email');
channel.bindQueue(queue, 'signedUp', ''); // bind this queue to the exchange
channel.consume(queue, logMessage);
By giving the queue a name in the email service, no matter how many email service instances are started, each published message (signedUp in this case) will be handled by one and ONLY ONE email service instance.

Clarification of topology

I understand that Rebus is perfectly capable of transporting messages from point A to B (using MSMQ as the transport layer). To make things perfectly clear, is Rebus also capable of doing one-to-many messaging, i.e. can messages sent from point A end up at both point B and point C?
And if it is possible, how does it do it? I cannot see any centralised distribution site (a post-office), so I assume that the communication will consist of a channel from every endpoint to every other endpoint (so that in a network where a process has to communicate with 5 other endpoints, there will be 5 channels radiating out of this process). Can you confirm this assumption?
Yes, Rebus is indeed capable of publishing messages to virtually any number of subscribers. It's true that MSMQ (at least in its most basic mode of operation) is a simple point-to-point channel, which is why there's a layer on top in order to implement true pub/sub.
The way it works, is that each subscriber has an endpoint mapping pointing to the publisher, and then each subscriber goes
bus.Subscribe<SomethingInterestingHappened>();
which causes an internal SubscriptionMessage to be sent to the publisher. The publisher must then remember who subscribed to each given message type, typically by storing this information in SQL Server. All this happens automatically; it just requires that you configure some kind of subscription storage.
And then, when the time comes to publish something, the publisher goes
bus.Publish(new SomethingInterestingHappened { ... });
which will make Rebus look up all the subscribers of the given message type. This may be 0, 1 or more, and then the event will be sent to each subscriber's input queue.
You can read more about these things in the Rebus docs on the page about routing.
To give you a hint on how subscribers and publishers might be configured, check this out - this is a subscriber:
Configure.With(...)
    .Transport(t => t.UseMsmq....)
    .MessageOwnership(t => t.FromRebusConfigurationSection())
    (...)
which also has an endpoint mapping that maps a bunch of events to a specific publisher:
<endpoints>
  <add messages="SomePublisher.Messages" endpoint="publisher_input_queue" />
</endpoints>
and then the publisher might look like this:
Configure.With(...)
    .Transport(t => t.UseMsmq....)
    .Subscriptions(s => s.StoreInSqlServer(theConnectionString, "subscriptions")
        .EnsureTableIsCreated())
    (...)

How to prevent other programs from listening to the SAME queue with NServiceBus?

Guys:
I want to use NServiceBus to manage messages. I have more than 5 different publishers; every publisher listens to a different queue, and every publisher has more than 3 different subscribers.
Currently the publishers and their subscribers work well, but unfortunately I found that some messages which should be processed by one publisher are being received by another program which only knows the queue's name, and the original publisher doesn't know about it.
So I want to know: is there any solution to prevent another program or publisher from receiving my messages?
If you want to be specific about who subscribes to what, then you need to manually configure the endpoint to subscribe to specific messages (Bus.Subscribe()/Bus.Unsubscribe()). If you don't want a particular endpoint to handle certain messages even though they may show up, then you can also load only the specific handlers you want. This can be done by separating the messages/handlers into separate assemblies and then loading the ones you want with Configure.With(assemblyList).
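A rough sketch of what that can look like with the older self-hosting API; the message and handler type names here are hypothetical:

// Scan only the assemblies whose messages/handlers this endpoint should know about.
var assembliesToScan = new[]
{
    typeof(MyMessages.OrderAccepted).Assembly,
    typeof(MyHandlers.OrderAcceptedHandler).Assembly
};

var bus = Configure.With(assembliesToScan)
    .DefaultBuilder()
    .XmlSerializer()
    .MsmqTransport()
    .UnicastBus()
    .LoadMessageHandlers()
    .CreateBus()
    .Start();

// Explicitly control what this endpoint subscribes to.
bus.Subscribe<MyMessages.OrderAccepted>();
// bus.Unsubscribe<MyMessages.SomeOtherEvent>();   // opt out again if needed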