All Endpoint Instances subscribe and handle event - rabbitmq

I have a notification service that handles events and publishes them to clients using various technologies, such as SignalR. I want every instance of my notification service to pick up and handle these events. However, NServiceBus delivers each event to only one instance of my notification service endpoint, and the other instances never see it.
My current workaround for this is to create a separate named endpoint for each instance of my notification service (the name has the server host name added to it), but then I have to make sure I unsubscribe from the event when the instance goes down or is moved to another server.
Is there a better way to do this? It would be nice if I could configure NServiceBus to create a separate incoming queue for each endpoint instance in this case, but I can't figure out how to do that, or even if NServiceBus supports such a use case.

You are correct. NServiceBus does not support such a case. Subscribers are always treated as logical endpoints, so individualized queues would not be used even if they were available.
Differentiating the instances by modifying the endpoint name is the most straightforward way to achieve what you want.
Changing your differentiator to a controllable runtime value, for instance an environment variable, would at least alleviate the need to unsubscribe when an instance is moved.
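For illustration, here is a minimal sketch of that approach using the NServiceBus 6+ configuration API (INSTANCE_DISCRIMINATOR is a hypothetical environment variable you would set per deployed instance; adapt the calls to the version you are on):

    // Each instance gets its own logical endpoint name, and therefore its own
    // input queue and its own subscription, so every instance receives the event.
    var discriminator = Environment.GetEnvironmentVariable("INSTANCE_DISCRIMINATOR")
                        ?? Environment.MachineName;

    var endpointConfiguration = new EndpointConfiguration($"NotificationService.{discriminator}");
    endpointConfiguration.UseTransport<RabbitMQTransport>();
    endpointConfiguration.EnableInstallers();

    var endpointInstance = await Endpoint.Start(endpointConfiguration);

With the RabbitMQ transport, subscriptions are native exchange-to-queue bindings, so the remaining housekeeping is cleaning up the queues of retired instances rather than unsubscribing.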
Also, if you want to review the scenario in more detail please don't hesitate to reach out to us directly, we might have other approaches to suggest. Just open a support ticket.

Related

How can I use Reactive Extensions and WCF to process information on a remote server and monitor progress?

I am experimenting with using Reactive Extensions to create a Windows Service.
Essentially, what I want is for the observer to sit on the server, and for the clients to be able to create observables and push them to the server. The server should inform the client of the progress of the job (I'm not sure how to do this or what mechanism to use), and when it's done, send the client the return code and output of the program it called. Can this be done? Is it the best way to do what I'm trying to do? If you need any more information, what would you need to know to help me?
This seems back to front. Generally, clients know about servers (how to find them and connect). In contrast, the Observer pattern (and therefore Rx) is about allowing something to call back another observer that it does not know about.
In your case I think you simply want to have clients call methods on a server, potentially bound to a single connection/session. The client, however, may be an observer of the progress from the server and of the final result.
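As a rough sketch of that shape, the client could surface the server's duplex progress callbacks as an IObservable (IJobCallback and JobProgress are hypothetical contract types, not part of any real API):

    using System;
    using System.Reactive.Subjects;

    public class JobProgress
    {
        public Guid JobId { get; set; }
        public int Percent { get; set; }
    }

    // Hypothetical WCF duplex callback contract implemented on the client.
    public interface IJobCallback
    {
        void OnProgress(JobProgress progress);
        void OnCompleted(JobProgress final);
    }

    public class ObservableJobCallback : IJobCallback
    {
        private readonly Subject<JobProgress> subject = new Subject<JobProgress>();

        // Client code subscribes here instead of wiring WCF callbacks directly.
        public IObservable<JobProgress> Progress => subject;

        public void OnProgress(JobProgress progress) => subject.OnNext(progress);

        public void OnCompleted(JobProgress final)
        {
            subject.OnNext(final);   // final result, e.g. return code and output
            subject.OnCompleted();
        }
    }

The client then just does callback.Progress.Subscribe(p => ...), keeping the server unaware of how progress is consumed.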
See the Reactive Trader project by the team at Adaptive to see a .NET client server app using Rx.

Exposing message queues remotely with NServiceBus

I have a scenario where I need to expose a bunch of event messages that have been created in NServiceBus to third parties over a simple authenticated REST API. The third party may or may not be using .NET (and they might even be JavaScript in the browser).
I understand that pub/sub is a push mechanism, but I'm looking for a polling mechanism. Is this even possible in NServiceBus? Is this what an adapter is for, or is that for accepting inbound messages?
Typically you would not want to expose your service bus to third parties. You could have some transport deliver to those subscribers, but then you would be sending an internal structure to the outside world. You also mentioned that you need a pull mechanism via a REST interface.
What I would suggest is to have a subscriber within your service bus that listens to the relevant messages and then either saves them in a serialized form in a type of 'event store' or de-normalizes them into the resources that the REST interface would expose. These messages/resources would contain the relevant date/time stamp.
It would be up to the consumer of the REST API to specify a point in time from which to retrieve the resources, so the third party would simply keep track of when they last retrieved the data. They could retrieve as much as they need, and new 'subscribers' would be able to retrieve the entire history if required. Each message/resource should also carry a GUID of sorts to aid idempotency.
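A hedged sketch of such a subscriber, assuming an NServiceBus 6+ style handler and hypothetical OrderPlaced, StoredEvent and IEventStore types:

    using System;
    using System.Text.Json;
    using System.Threading.Tasks;
    using NServiceBus;

    public class OrderPlaced { public Guid OrderId { get; set; } }   // hypothetical event

    public class StoredEvent
    {
        public Guid Id { get; set; }
        public DateTime OccurredUtc { get; set; }
        public string Type { get; set; }
        public string Payload { get; set; }
    }

    public interface IEventStore
    {
        Task AppendAsync(StoredEvent storedEvent);
    }

    public class OrderPlacedProjection : IHandleMessages<OrderPlaced>
    {
        private readonly IEventStore store;

        public OrderPlacedProjection(IEventStore store) => this.store = store;

        public Task Handle(OrderPlaced message, IMessageHandlerContext context)
        {
            // De-normalize the event into the store the REST layer reads from.
            return store.AppendAsync(new StoredEvent
            {
                Id = Guid.NewGuid(),              // lets consumers de-duplicate
                OccurredUtc = DateTime.UtcNow,    // the polling cursor
                Type = nameof(OrderPlaced),
                Payload = JsonSerializer.Serialize(message)
            });
        }
    }

The REST layer then exposes something like GET /events?since=<timestamp>, and each consumer keeps track of the latest timestamp it has seen.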

NServiceBus publishing in a multi system environment

I work on a system where we have the same website across multiple countries, and each of these websites has its own services. Everything works well, but I've always found myself sending messages rather than publishing, because the events would otherwise reach other services where I know beforehand they're completely irrelevant. It seems pointless to me to publish to many services and then filter for relevance.
Is there an established practice for publishing messages to only a certain subset of services? How have others dealt with this problem?
By default, endpoints automatically subscribe to all events they have handlers for. If you want only certain endpoints to subscribe to specific sets, you need to configure your endpoint with DoNotAutoSubscribe() and then explicitly subscribe to each message type the endpoint is interested in using Bus.Subscribe(), as in the sketch below.
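A sketch in the classic API named above (v3/v4 era; newer versions disable auto-subscription with endpointConfiguration.DisableFeature<AutoSubscribe>() instead), where CountrySpecificEvent is a hypothetical event type:

    var bus = Configure.With()
        .DefaultBuilder()
        .UnicastBus()
        .DoNotAutoSubscribe()
        .CreateBus()
        .Start();

    // Explicitly subscribe to only the events this endpoint actually cares about.
    bus.Subscribe<CountrySpecificEvent>();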
Could you describe your logic for determining relevance for particular endpoint systems? The purpose of publishing and subscribing is that there are events in a system that other endpoints can subscribe to.
A publisher should not know anything about its subscribers, so how do you determine relevance?
If these messages are not relevant for a specific endpoint, why would you want to subscribe to them?
If it truly is an event message then you need to publish it. If you need to publish to a subset, you could have a separate subscription store that the endpoint in question would use.
Typically it should be up to the subscriber to determine whether a received event is relevant, but if you do have the information up-front then you could go with the separate subscription store.
In my FOSS ESB project (http://shuttle.codeplex.com/) an ISubscriptionManager implementation has to be provided to the ESB to determine the subscriber URIs to send published messages to. Although it may be overkill, one could provide a custom implementation that contains some logic to perform the filtering; otherwise, use the separate subscription store.

Managing WCF Duplex Callback connections for Silverlight frontend

Perhaps I'm going about this the wrong way, but here's my current "setup".
I have a Silverlight client that uses Caliburn.Micro and a MEF container with a "LoadCatalog" class, to keep everything loosely coupled in an MVVM way.
I have a "common" dll where all the interfaces are kept.
All my views and viewmodels are separate projects, that only have a reference to the common dll.
The viewmodels use regular WCF to communicate with the backend. The frontend itself also has a duplex connection to the backend.
Now here's where the question comes to mind. Whenever the backend thinks it's time to have a new screen appear at the frontend, it uses the callback channel to tell the frontend to load the next screen.
Does this seem like a good pattern to use? Or should I leave the management of what screen to load when to the frontend? I think it's nice to have this in the backend, but perhaps this is some kind of anti-pattern I'm not aware of, hence the question.
Now for argument sake, lets say I want to keep this in the backend.
What would be the best way to go about managing the collection of callback channels on the backend? If I enable SessionMode.Required on all the regular WCF endpoints, as well as the duplex channel, does this persist state together over multiple endpoints (regular+duplex)? Or will this persist state only within a single endpoint?
My guess (from the tests I have been able to do so far) is that I need to add some logic, like for example provide the frontend with a guid as soon as the callback connection is made. And then use that guid in the regular endpoint connections so the backend knows which "client" it is.
And would I "ever" be able to reliably collect all the channels and detect current state if I made a collection of the callback channels that I receive? I can intercept the callback channel now (just 1 instance atm, no collections made or anything, so single user) and use that to tell the frontend what to do. But sometimes when the client stops abrubtly (in other words when an error occurs) and I start the client again, it seems like the previous (faulted?) connection is still "re-used" or something, without luck so the communication flow stops after connection the duplex endpoint.
Does this make any sense?
Hope there's someone who has some experience in the matter who can shed some light on this for me. I'm no total newbie, but with regard to multiple connections and keeping them separated, I might need some pointers in the right direction.
Thanks!
Huron.
I managed to get this up and running.
When the frontend creates the duplex connection to the backend, the backend returns a unique Guid. From then on I use this Guid for all communication with the backend.
This lets the backend recognize the client.
In the backend I keep a list of connections (I grab the callback channel and store it together with the Guid).
Just had to make sure to lock the list object whenever I iterate it or when I add/remove items from it, since it will be used from multiple threads by design.
The pattern to take control from the backend seems to work great so far.
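For reference, a hedged sketch of the registry described above, with IClientCallback standing in for the (hypothetical) duplex callback contract:

    using System;
    using System.Collections.Generic;
    using System.ServiceModel;

    public interface IClientCallback
    {
        [OperationContract(IsOneWay = true)]
        void LoadScreen(string screenName);
    }

    public class CallbackRegistry
    {
        private readonly object sync = new object();
        private readonly Dictionary<Guid, IClientCallback> channels =
            new Dictionary<Guid, IClientCallback>();

        // Called when the duplex connection is established; the returned Guid
        // is what the client sends along with every regular WCF call.
        public Guid Register(IClientCallback channel)
        {
            var id = Guid.NewGuid();
            lock (sync) { channels[id] = channel; }
            return id;
        }

        public void Unregister(Guid id)
        {
            lock (sync) { channels.Remove(id); }
        }

        public void Notify(Guid id, Action<IClientCallback> action)
        {
            IClientCallback channel;
            lock (sync)
            {
                if (!channels.TryGetValue(id, out channel)) return;
            }

            try
            {
                action(channel);
            }
            catch (CommunicationException)
            {
                Unregister(id);   // drop faulted channels so they are never re-used
            }
        }
    }

Inside the service, the channel itself would be captured at registration time with OperationContext.Current.GetCallbackChannel<IClientCallback>().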

How can a WCF request be correlated with multiple Workflow instances?

The scenario is as follows:
I have multiple clients that can register themselves on a workflow server, using WCF requests, to receive some kind of notification. The notification information will be received from an external system using another receive activity. The workflow should then take the notification information and call back all registered clients using a send activity and callback correlation (the clients expose callback interfaces implemented on their side, and the endpoint addresses are passed initially with the registration requests). A "long-running workflow service" approach is used with persistent storage.
Now, I'm looking for some way to correlate the incoming notification information received from the external system with the workflow instances persisted previously when the registration requests arrived, so that all clients can be notified using the endpoints that were passed with those requests. Is WF 4.0 capable of resuming and executing multiple workflow instances when the notification information is received, without my manually storing the endpoints and going through them? If yes, how can I do that?
Also, if my approach is not correct, then please advise me on the best practice for building such a system using WCF services.
Your help is highly appreciated.
When you use request correlation with workflow services, the correlation key must always match a single workflow instance; you can't have multiple workflow instances react to a single message. So you either need to multicast the message using all the different correlation keys, or resume your workflow instances in some other way. That other way could be to store the request somewhere, like a SQL table, and have the workflows periodically check that location to see if they need to notify the client.
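A hedged sketch of the multicast option (all names here are hypothetical): when the external notification arrives, fan it out as one correlated call per registered client, so each call resumes exactly one persisted workflow instance.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public interface IRegistrationStore
    {
        // Correlation keys captured when each registration request arrived.
        Task<IEnumerable<Guid>> GetAllIdsAsync();
    }

    public interface INotificationClient
    {
        // Proxy for the workflow service's receive activity; the service
        // correlates on registrationId.
        Task NotifyAsync(Guid registrationId, string payload);
    }

    public class NotificationFanOut
    {
        private readonly IRegistrationStore registrationStore;
        private readonly INotificationClient notificationClient;

        public NotificationFanOut(IRegistrationStore store, INotificationClient client)
        {
            registrationStore = store;
            notificationClient = client;
        }

        public async Task FanOutAsync(string payload)
        {
            foreach (var registrationId in await registrationStore.GetAllIdsAsync())
            {
                // One message per correlation key: each send resumes the single
                // matching persisted workflow instance.
                await notificationClient.NotifyAsync(registrationId, payload);
            }
        }
    }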