NServiceBus: handling specific messages in a queue

We use a Windows service based on NServiceBus.Host to handle a certain type of message (say Message A), which is sent from some web services (the messages are used as commands). In the future we want to update our services and introduce a new type of message (say Message B).
Is it possible, with a single queue, to configure the endpoints in the old and new versions of the Windows service so that each handles only the messages it knows about (old version: only Message A, new version: only Message B) and leaves all the rest in the queue?
If that's impossible, then the obvious solution is to have a separate queue for each message type and, I suppose, a separate endpoint for each queue. Okay, let's assume that in the future we want to support not only the new messages (Message B) but also the old ones (Message A). Is there a way to implement this (multiple endpoints) within a single host process, or is the only option to use two host processes (and accordingly two Windows services), one per endpoint?
Thank you.

The nice thing about NServiceBus is its support for inheritance. If you have a look at the documentation, I think you will find what you are after.
http://particular.net/articles/messages-as-interfaces
There is also a detailed example on http://particular.net/articles/versioning-sample
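To make the inheritance idea concrete, here is a minimal sketch using messages-as-interfaces, assuming illustrative contract and handler names (IMessageA, IMessageB, MessageAHandler) and the older synchronous handler signature that NServiceBus.Host-era versions use:
public interface IMessageA : NServiceBus.IMessage
{
    string OrderId { get; set; }
}

// A newer contract can extend the original one. Because NServiceBus dispatches
// polymorphically, a handler registered for IMessageA will also be invoked for
// an incoming IMessageB.
public interface IMessageB : IMessageA
{
    string CustomerId { get; set; }
}

// The old endpoint only declares a handler for the contract it knows about.
public class MessageAHandler : NServiceBus.IHandleMessages<IMessageA>
{
    public void Handle(IMessageA message)
    {
        // existing Message A processing
    }
}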

Related

Mass Transit + Azure Service Bus: Consume some types of messages without creating their corresponding topic

As far as I have been able to verify, in MassTransit with Azure Service Bus each type of object consumed by a "Consumer" generates a topic for that type, regardless of whether it is only consumed on a specific "receive endpoint" (queue). When a message of this type is sent with the "Send()" method, it goes directly to the receive endpoint (queue) without going through the topic. If the same message is published with the "Publish()" method, it is published to the topic and forwarded to the receive endpoint (queue) through the corresponding subscription.
My application uses a CQRS pattern where the messages are divided into commands and events. Commands use the send-receive pattern and are therefore always dispatched in MassTransit with the "Send()" method. The events, however, are based on the publish-subscribe pattern and are therefore always dispatched in MassTransit with the "Publish()" method. As a result, a large number of topics are created on the bus that are never used (one for each type of command), since the messages belonging to these topics are sent directly to the receiver's queue.
For all these reasons, my question is whether it is possible to configure MassTransit so that it does not automatically create topics for certain consumed message types, because those messages will only ever be sent with "Send()". Does this make sense in MassTransit, or is it not possible/recommended?
Thank you!
Regards
Edited 16/04/2021
After doing some testing, I am editing this question to clarify that the intention is to configure MassTransit so that it does not automatically create topics for some of the consumed message types, all of which are received on the same receive endpoint. That is, the intention is to configure (dynamically if possible, based on the message type) which consumed message types get a topic and which do not, within the same receive endpoint. Imagine we have a receive endpoint (a queue) associated with a service, and this service can consume both commands and events. Since the commands are only dispatched through Send(), there is no need to create topics for them; the events, however, are dispatched via Publish(), so their topics (and subscriptions) need to exist for the messages to be delivered and consumed.
Thanks in advance
Yes, for a receive endpoint hosting a consumer that will only receive Sent messages, you can specify ConfigureConsumeTopology = false for that receive endpoint. You can do that via a ConsumerDefinition, or when configuring the receive endpoint directly.
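For the receive-endpoint option, a rough sketch of the bus registration might look like this (SubmitOrderConsumer, the "order-service" queue name, and the connection-string placeholder are all illustrative, not from the question); the same flag can also be set from a ConsumerDefinition:
services.AddMassTransit(x =>
{
    x.AddConsumer<SubmitOrderConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host("<azure-service-bus-connection-string>");   // placeholder

        cfg.ReceiveEndpoint("order-service", e =>
        {
            // Commands on this endpoint arrive via Send(), so skip creating
            // and binding topics for the consumed message types.
            e.ConfigureConsumeTopology = false;
            e.ConfigureConsumer<SubmitOrderConsumer>(context);
        });
    });
});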
UPDATE
It is also possible to disable topology configuration per message type using an attribute on the message contract:
[ConfigureConsumeTopology(false)]
public interface SomeCommand
{
}
This will prevent the topic/exchange from being created and bound to the receive endpoint.
While I can understand the desire to be "pure to the CQRS mantra" and only Send commands, I'd suggest you read this answer and take it into consideration before overburdening your developers with knowing every single endpoint in the system by name...

Can a Pika RabbitMQ client service both consume and publish messages?

Could someone with Pika experience give me a quick yes/no response as to whether the following functionality is possible, or whether my thinking that it is indicates a lack of conceptual understanding of Pika.
My desired functionality:
Python service (single threaded script) has one connection to my RabbitMQ broker using the SelectConnection adapter.
That connection has two channels.
Using one channel, A, the service declares a queue and binds to some exchange E1.
The other channel, B, is used to declare some other exchange, E2.
The service consumes messages from the queue via A.
It does some small processing of those messages, [possibly carries out CRUD operations through its connection to a MongoDB instance,] then publishes a message to exchange E2 via B.
I have read the Pika docs thoroughly, and have not found enough information to understand whether this is doable.
To put it simply: can a single Python script both publish and consume via one SelectConnection adapter connection?
Yes, of course. You can achieve that in many ways (via the same connection or different connections, the same channel or different channels, etc.).
What I have done when implementing this in the past is: I create my connection, get the channel, and set up my consumer with its delegate (callback function). When my consume-message function is called, I get the channel parameter that comes with it, which I subsequently use to publish the next message to a different queue. If you don't want to use the same channel, you can simply set up another one.
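To keep the examples on this page in one language, here is the same consume-then-publish pattern sketched with the RabbitMQ .NET client rather than Pika (queue and exchange names are illustrative); in Pika the equivalent is to publish from the channel object passed into your on-message callback, or from a second channel opened on the same connection:
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();

// One connection, two channels: A for consuming, B for publishing.
var channelA = connection.CreateModel();
var channelB = connection.CreateModel();

channelA.ExchangeDeclare("E1", ExchangeType.Fanout);
channelA.QueueDeclare("work", durable: true, exclusive: false, autoDelete: false);
channelA.QueueBind("work", "E1", routingKey: "");

channelB.ExchangeDeclare("E2", ExchangeType.Fanout);

var consumer = new EventingBasicConsumer(channelA);
consumer.Received += (sender, ea) =>
{
    // ... small processing / MongoDB writes would happen here ...
    channelB.BasicPublish(exchange: "E2", routingKey: "", basicProperties: null, body: ea.Body);
    channelA.BasicAck(ea.DeliveryTag, multiple: false);
};
channelA.BasicConsume("work", autoAck: false, consumer);

Console.ReadLine();   // keep the process alive while consuming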

NServiceBus routing

We have multiple web and Windows applications, deployed to different servers, that we are planning to integrate using NServiceBus so that all the apps can pub/sub messages between them. I think the pub/sub pattern with the MSMQ transport will be a good fit for this, but one thing I am not clear on is whether there is a way to avoid hard-coding the subscription endpoint as an MSMQ QueueName@ServerName address, which embeds the server name, when the publisher is on another server. In the 6-pre release I saw the idea of setting a logical endpoint name and then using routing to map it to a transport-level address; is that a solution for this? Or is the gateway the only solution? Is a broker a good idea? What is the best practice for this scenario?
When using pub/sub, the subscriber currently needs to know the location of the queue of the publisher. The subscriber then sends a subscription-message to that queue, every single time it starts up. It cannot know if it subscribed already and if it subscribed for all the messages, since you might have added/configured some new ones.
The publisher reads these subscriptions messages and stores the subscription in storage. NServiceBus does this for you, so there's no need to write code for this. The only thing you need is configuration in the subscriber as to where the (queue of the) publisher is.
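For the 6-pre routing idea mentioned in the question, a minimal sketch of the subscriber-side configuration might look like the following (OrderAccepted and the endpoint names are illustrative); the point is that the code refers to the publisher by its logical endpoint name rather than by queue@machine:
var endpointConfiguration = new EndpointConfiguration("Sales.Subscriber");
var transport = endpointConfiguration.UseTransport<MsmqTransport>();

// Tell NServiceBus which logical endpoint publishes the event we subscribe to.
var routing = transport.Routing();
routing.RegisterPublisher(typeof(OrderAccepted), "Sales.Publisher");

// With the MSMQ transport, the machine hosting "Sales.Publisher" is supplied
// through the transport's instance-mapping file rather than in code.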
I wrote a tutorial on this myself, which you can find here: http://dennis.bloggingabout.net/2015/10/28/nservicebus-publish-subscribe-tutorial/
That being said, you should take special care with websites that publish messages. More information on that can be found here: http://docs.particular.net/nservicebus/hosting/publishing-from-web-applications
In a scale-out situation with MSMQ, you can also use the distributor: http://docs.particular.net/nservicebus/scalability-and-ha/distributor/
As a final note: It depends on the situation, but I would not worry too much about knowing locations of endpoints (or their queues). I would most likely not use pub/sub just for this 'technical issue'. But again, it completely depends on the situation. I can understand that rich-clients which spawn randomly might want this. But there are other solutions as well, with a more centralized storage and an API that is accessed by all the rich clients.

NServiceBus Pub/Subscribe using SQLServer transport - can the subscriber scale out?

Using the latest version of NServiceBus, 4.4 I believe.
We are looking to implement NServiceBus, and this part of the system uses SQL Server as the transport. We want pub/sub, which is fine, but how would it work when scaling out the subscribers?
I have done a PoC where I ran the receiving endpoint of a SQL Server transport multiple times, and when a message came in, the first running receiver instance got the message and processed it, with the result that the other process did NOT process it, which is correct.
In a pub/sub architecture using SQL Server, would this same approach of running multiple instances of the subscriber work? Since we are using a common queue (a SQL Server table), will it just sort itself out and not process the message multiple times?
When using SQL Server persistence, the subscribers for your events and messages are held in the Subscription table within the NServiceBus database, so you can check which endpoints are subscribing to what messages or events by viewing the contents of that.
It's worth noting that you can only publish message classes with NServiceBus that implement the IEvent interface (unless you make use of unobtrusive mode, where a convention defines which types are events).
When you publish a message or event using bus.Publish, every subscriber of that type receives a copy, one per distinct endpoint name; multiple instances that share the same endpoint name (and therefore the same input queue) compete for that single copy, so scaling out a subscriber does not cause the message to be processed more than once.
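For example, a hedged sketch with NServiceBus 4.x-era APIs (OrderPlaced and the namespace convention are illustrative):
// With the marker interface:
public class OrderPlaced : NServiceBus.IEvent
{
    public System.Guid OrderId { get; set; }
}

// With unobtrusive mode, a convention replaces the marker interface instead, e.g.:
//   Configure.With().DefiningEventsAs(t => t.Namespace != null && t.Namespace.EndsWith(".Events"));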

How to get the delivery path in RabbitMQ to become a message property?

The underlying use case
It is a typical pub/sub use case: consider that we have M news sources and N subscribers who subscribe to the news sources they are interested in and who want to get news updates. However, we want these updates to end up in MongoDB, essentially maintaining the most recent 'k' updates (which can then be indexed, searched, etc.). We want to design for M scaling up to a million publishers and N scaling to a few million.
Subscribers' updates are ultimately received and stored on more than one host, each with its own local MongoDB.
Modeling in RabbitMQ
RabbitMQ will be used to persist the mappings (who subscribes to which news source).
I have set up a pub/sub system in this way: we create one publisher exchange per news source, of type 'fanout'.
For modelling subscribers, there are two options.
In the first option, we have one queue for each subscriber, bound to the relevant publisher exchanges, and let the client process open consumers on all of these subscriber queues to receive the updates (and persist them to MongoDB). Note that in this option, when the client is restarted, it has to manage the list of all subscribers and reopen a consumer on every subscriber queue it is responsible for.
In the second option, we want to remove the overhead of having to explicitly consume from each subscriber queue at startup. Instead, we want to listen on only one queue, representing all the subscribers whose updates are delivered to this client host.
To achieve this, we first create one exchange for each subscriber and bind it to the publisher exchange(s) that the subscriber follows. We then have a single queue for each client and bind the subscriber exchange (type=direct) to this queue if the subscriber belongs to that client.
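For concreteness, the wiring of that second option might look like this with the RabbitMQ .NET client (all names are illustrative, and it assumes publishers publish with an empty routing key so the binding key "" matches):
using RabbitMQ.Client;

using var connection = new ConnectionFactory { HostName = "localhost" }.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("news-source-42", ExchangeType.Fanout);   // publisher exchange
channel.ExchangeDeclare("subscriber-7", ExchangeType.Direct);     // subscriber exchange

// Exchange-to-exchange binding: the subscriber "follows" this news source.
channel.ExchangeBind(destination: "subscriber-7", source: "news-source-42", routingKey: "");

// Every subscriber exchange belonging to this client binds into its single queue.
channel.QueueDeclare("client-host-1", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("client-host-1", "subscriber-7", routingKey: "");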
Once the client receives an update message, it needs to know which subscriber exchange it came from; only then can it store the update in MongoDB for the relevant subscriber. Presumably the subscriber exchange should add this information as a new header on the message.
As far as I can tell from the RabbitMQ docs, there is no way to achieve this (or, more specifically, no way to read a 'delivery path' property from the delivered message, from which we could derive this information).
My questions:
Is it possible to add a new header to a message as it passes through an exchange?
If this is not possible, can we achieve it through a custom exchange and a corresponding plugin? Is there any plugin I can readily use for this purpose?
I am curious as to why RabbitMQ does not provide a delivery-path property as an optional piece of configuration.
Is there any other way I can achieve the same? (See pubsubhubbub note below)
PubSubHubbub
The use case is very similar to what the PubSubHubbub protocol provides, and there is a RabbitMQ plugin for it called RabbitHub. However, our system will be a closed system, and I believe the protocol's webhook approach would be too much overhead compared to listening on a single queue (also from a performance perspective).
The producer (RMQ Client) of the message should add all the required headers (including the originator's identity) before producing (publishing) it on RMQ. These headers are used for routing.
If, while in transit, the message (including headers) needs to be transformed (e.g. adding new headers), it needs to be sent to the transformer (another RMQ Client). This transformer will essentially become the new publisher.
The actual consumer should receive its intended messages (the ones it has subscribed to) through a single queue. The routing of all its subscribed messages should be arranged on the RMQ exchange.
Managing the last 'K' updates should be the responsibility of neither the producer nor the consumer, so it should be done in the transformer. Producers' messages should be routed to this transformer (for storage) before being re-routed to the exchange(s) from which the consumers consume.
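A sketch of the first point with the RabbitMQ .NET client, given an open channel as in the earlier sketches (the header name, exchange name, and payload are illustrative): the producer stamps the originator on the message headers before publishing, and the consumer reads it back from the delivered message's properties.
var props = channel.CreateBasicProperties();
props.Headers = new System.Collections.Generic.Dictionary<string, object>
{
    ["x-news-source"] = "news-source-42"   // originator identity for the consumer
};

channel.BasicPublish(exchange: "news-source-42",
                     routingKey: "",
                     basicProperties: props,
                     body: System.Text.Encoding.UTF8.GetBytes("{\"headline\":\"...\"}"));

// Consumer side: ea.BasicProperties.Headers["x-news-source"] identifies the source.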