Using NServiceBus with Azure Service Fabric

I've read other questions on Stack Overflow regarding using NSB on SF, as well as the (outdated) sample on GitHub, and I'm still not sure how to configure NServiceBus properly for this platform.
I'm looking to set up a send only publish/subscribe workflow. What I can't determine through my research is how to set this up so that only one instance of a particular service responds to the message.
For example: 3 services running on the standard 5 nodes (so pretend 5 instances of each of the 3 services).
Existing load balancer routes an http request to a specific instance of Service A.
Service A publishes the "OrderComplete" event
Services B and C both subscribe to the event.
How can I make sure that only one instance of Services B and C respond instead of all 5 instances of Service B and all 5 instances of Service C?
All the services are currently Stateless services.
I was thinking of using the AzureServiceBus or AzureStorageQueue transport.

The stateless approach is fine. You do not need to go into stateful services with a single partition unless you want to leverage reliable collections for your services. But let's look at both options.
Going with Stateless services
It's ok to have multiple instances of your services. Yes, they will all create subscriptions. I'd argue that is exactly what you want - competing consumers. The more service instances you have, the more throughput you'll get, i.e. the more messages you can handle.
What I can't determine through my research is how to set this up so that only one instance of a particular service responds to the message.
This will happen automatically due to the nature of the competing consumer transport (both ASB and ASQ).
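To make that concrete, here's a minimal sketch of what each stateless instance of, say, Service B might run. It assumes NServiceBus 7 with the Azure Service Bus transport; the endpoint name, connection string and the OrderComplete type are placeholders, not taken from the question. Because every instance of Service B starts the endpoint with the same name, they all compete on the same input queue, so each published OrderComplete event is handled by exactly one instance of Service B (and likewise by one instance of Service C with its own endpoint name).

    // Minimal sketch (assumes NServiceBus 7 and the NServiceBus.Transport.AzureServiceBus package).
    // Every instance of Service B uses the same endpoint name, so all instances compete
    // for messages on the same input queue: each OrderComplete event is handled once per logical service.
    using System.Threading.Tasks;
    using NServiceBus;

    public static class ServiceBHost
    {
        public static async Task<IEndpointInstance> StartAsync(string connectionString)
        {
            var endpointConfiguration = new EndpointConfiguration("ServiceB");

            var transport = endpointConfiguration.UseTransport<AzureServiceBusTransport>();
            transport.ConnectionString(connectionString);

            endpointConfiguration.EnableInstallers();

            return await Endpoint.Start(endpointConfiguration);
        }
    }

    // Handler runs on whichever Service B instance happens to receive the message.
    public class OrderCompleteHandler : IHandleMessages<OrderComplete>
    {
        public Task Handle(OrderComplete message, IMessageHandlerContext context)
        {
            // business logic for Service B goes here
            return Task.CompletedTask;
        }
    }

    // Placeholder event published by Service A.
    public class OrderComplete : IEvent
    {
        public string OrderId { get; set; }
    }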
Going with Stateful services
With stateful services you need to be very careful. Yes, you could go with a single partition per service, hence having a single primary replica handling your messages. But then, arguably, you're wasting your cluster resources by not utilizing them for concurrent processing of many messages. If you decide to partition your service, keep in mind that reliable collections are scoped to a partition; partitions do not share reliable collections among themselves. And should you choose to use partitioned stateful services without reliable collections, well, then you'd be better off with the stateless counterpart.
Note: NSB will provide support for running with stateful services to take advantage of the reliable collections for persistence needs, but even then partitioning is something that would need to be thought through to align with business needs. If you do not have a need like that, I'd suggest sticking with stateless services and Azure Storage persistence.

In the NSB/SF sample on GitHub there is a Stateful service that handles a command. What is important is that in the application it has a PartitionCount of 1. The same goes for all other solutions with NSB I have seen: only one partition or instance of each service handles messages. Otherwise you would end up with one subscription per instance for each message, as you describe.
Perhaps you could adopt the Distributor to achieve load balancing between multiple instances of the same service, but as far as I know the Distributor only works with MSMQ, so you would have to rewrite it to work with SF and Azure Service Bus.
If you stick with single instances, it should work fine for you. You would still get some benefit from SF, as it ensures your services are up and running, but load balancing between multiple instances will require some extra work on your part.

Related

Separate or Merge Kafka Consumer and API services together

After recently reading about event-based architecture, I wanted to change my architecture into one making use of such strengths.
I have two services that expose an API (crud, graphql), each based around a different entity and using a different database.
However, now whenever someone deletes a certain type of row in Service A, I need to delete a coupled row in Service B.
So I added Kafka to my design, and whenever I delete the entity in Service A, it publishes a notification message to Kafka.
In Service B I am currently consuming the same topic, so whenever a new message is received the service also handles the deletion of the matching entity, because it already has access to that table, since the same service already exposes the CRUD API to users.
What I'm not sure about is whether putting the Kafka consumer and the API together in the same service is a good design. It contradicts the point of single responsibility in microservices, and if there is an issue in one part of the service, it will likely affect the other.
However, creating a new service would also cause me issues: I would have 2 different services accessing the same table, and I would have to make sure I always maintain them together whenever making changes to the table or database.
What is the best practice in a situation such as this? Is it inevitable to have different services with data coupling, or is it not so bad to use the same service for two similar usages?
There is nothing wrong with using Kafka... You could, however, do the same with point-to-point service communication (JSON-RPC / gRPC).
The real problem you seem to be asking about is dual-writes or race-conditions leading to data inconsistency.
While you could use a single consumer group and one topic-partition to preserve order and locking across consumers interested in those events, that does not lock out other consumer-groups from interacting with the database to perform the same action. Therefore, Kafka itself won't help with this problem.
You'll need external, distributed locks (e.g. Zookeeper can be used here) that fence off your database clients while you are performing actions against it.
To the original question: Kafka Connect offers an API and is also a Producer and Consumer client (and would be recommended for database interactions). So are the Confluent Schema Registry, ksqlDB, etc.
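For illustration, a rough sketch of the consumer side in Service B using the Confluent.Kafka .NET client; the broker address, topic and group id are placeholders. All instances of Service B sharing one GroupId means each deletion event is processed by only one of them, but as noted above that does not fence off other consumer groups or other clients writing to the same table.

    // Rough sketch using the Confluent.Kafka .NET client (broker, topic and group id are placeholders).
    using System.Threading;
    using Confluent.Kafka;

    public static class EntityDeletedConsumer
    {
        public static void Run(CancellationToken cancellationToken)
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "service-b",               // all Service B instances share this group
                AutoOffsetReset = AutoOffsetReset.Earliest,
                EnableAutoCommit = false             // commit only after the row is actually deleted
            };

            using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe("entity-deleted");

            while (!cancellationToken.IsCancellationRequested)
            {
                var result = consumer.Consume(cancellationToken);

                // Delete the coupled row in Service B's database; make this idempotent,
                // since the message may be redelivered after a crash.
                DeleteCoupledRow(result.Message.Value);

                consumer.Commit(result);
            }
        }

        private static void DeleteCoupledRow(string payload)
        {
            // placeholder for the actual delete against Service B's table
        }
    }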
I believe that the consumer of your Service B would not be considered "a service", or part of the "service", in the sense that it is not called as part of the code that services requests. Yet it does provide functionality that is required for the domain function of your microservice. So yes, I would consider the consumer part of the microservice in terms of team/domain responsibility.
There may be different opinions on if the consumer code should share the same code base/repo as the "service" code. Some people believe that it is better to limit the repo scope to a single "executable", others believe it is beneficial to keep the domain scope and have everything in a single repo. I probably belong to the latter group but do not have a very strong opinion on it. I would argue it is more important to have a central documentation / wiki for the domain that will point to the repos involved etc.

Triggering an update on all microservices

Using ASP.NET Core microservices, both API and worker roles, running in Azure Service Fabric.
We use Service Bus to do inter-microservice communication.
Consider the following situation;
Each microservice holds a local, in-mem copy of cached objects of type X.
One worker role is responsible for processing a message that would result in a rebuild of this cache for all instances.
We have multiple nodes, and thus multiple instances of each microservice in Service Fabric.
What would be the best approach to trigger this update?
I thought of the following approaches:
Calling SF for all service replicas and firing an HTTP POST on each replica to trigger the update
This, however, does not seem to work, as worker roles don't expose any APIs
Creating a specific 'broadcast' topic where each instance registers a subscription, and thus using a pub/sub mechanism
I fail to see how I can make sure each instance has its own subscription, while also avoiding ghost subscriptions when something like a crash happens
You can use the OSS library Service Fabric Pub Sub for this.
Every service partition can create its own subscription for messages of a given type.
It uses the partition identifier for subscriptions, so crashes and moves won't result in ghost subscriptions.
It uses regular SF remoting, so you won't need to expose APIs for messaging.
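If you would rather stay on Service Bus topics (which the question already uses for inter-service communication) instead of SF remoting, a rough alternative sketch is below. It is not the Service Fabric Pub Sub API; it assumes Azure.Messaging.ServiceBus, and the topic name, subscription naming and idle window are placeholders. Marking each per-instance subscription AutoDeleteOnIdle is one way to address the ghost-subscription concern from the question.

    // Rough sketch with Azure.Messaging.ServiceBus: one subscription per instance on a broadcast topic,
    // auto-deleted when idle so crashed instances don't leave ghost subscriptions behind.
    using System;
    using System.Threading.Tasks;
    using Azure.Messaging.ServiceBus;
    using Azure.Messaging.ServiceBus.Administration;

    public static class CacheInvalidationListener
    {
        public static async Task<ServiceBusProcessor> StartAsync(string connectionString, string instanceId)
        {
            const string topicName = "cache-invalidation";       // placeholder topic
            var subscriptionName = $"instance-{instanceId}";      // one subscription per service instance

            var adminClient = new ServiceBusAdministrationClient(connectionString);
            var exists = await adminClient.SubscriptionExistsAsync(topicName, subscriptionName);
            if (!exists.Value)
            {
                await adminClient.CreateSubscriptionAsync(new CreateSubscriptionOptions(topicName, subscriptionName)
                {
                    // If this instance dies, its subscription is removed after the idle window.
                    AutoDeleteOnIdle = TimeSpan.FromMinutes(10)
                });
            }

            var client = new ServiceBusClient(connectionString);
            var processor = client.CreateProcessor(topicName, subscriptionName);

            processor.ProcessMessageAsync += async args =>
            {
                // rebuild the local in-memory cache of X here
                await args.CompleteMessageAsync(args.Message);
            };
            processor.ProcessErrorAsync += _ => Task.CompletedTask;

            await processor.StartProcessingAsync();
            return processor;
        }
    }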

Angular 5 and Message Bus

I have a set of RESTful services that my Angular 5 client uses to perform CRUD and business operations for the application. These are a set of microservices and they use pub/sub message queues to communicate between them, e.g. when a user is created the user service publishes a UserCreated event to the message queue, and subscribers can listen for this event and act upon it as required.
Now, this is all good, but I was thinking: wouldn't it be better if the Angular 5 application itself published the event to the message queue, rather than making HTTP POST/PUT/DELETE requests, and only made GET requests against the API?
So repeating the example above the Angular 5 client would publish a CreateUserEvent to the message bus (in my case cloud pub/sub), I could then have services subscribe to these events and act upon them. My RESTful services would then only expose GET /users and GET /user/:id for example.
I know that this is doable, and I guess what I am describing is CQRS, but I am keen to understand whether publishing events to a message bus from the UI is good practice.
The concept of a messaging bus is very different from microservices. Probably, the answer to your question lies in the way you look at these two from an architectural perspective.
A messaging bus (whether backend-specific or frontend-specific) is designed in such a way that it serves the purpose of communication between entities within the confined boundary of an environment, i.e. backend or frontend.
Microservices architecture, on the other hand, is designed so that two different environments, whether backend-frontend or backend-backend, can "effectively" communicate.
So there is a clear separation of motivation behind both concepts. Now, from your viewpoint, you may use a hybrid approach which might work, and it may also lead to interesting findings related to performance, architectural design or overheads as well.
Publishing directly from the client is possible, but the caveat is that it means that the client needs to have the proper credentials to publish. For this reason, it may be preferable to have the service do the publishing in response to requests sent from the clients.
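As a hedged sketch of that recommendation, assuming Google Cloud Pub/Sub (the question mentions cloud pub/sub) and an ASP.NET Core controller: the Angular client keeps sending a normal POST, and the API publishes the event with its own credentials. The route, topic, project and payload names are placeholders.

    // Sketch: the Angular client still calls POST /users; the API publishes UserCreated itself,
    // so Pub/Sub credentials never leave the backend. Assumes Google.Cloud.PubSub.V1; names are placeholders.
    using System.Text.Json;
    using System.Threading.Tasks;
    using Google.Cloud.PubSub.V1;
    using Microsoft.AspNetCore.Mvc;

    [ApiController]
    [Route("users")]
    public class UsersController : ControllerBase
    {
        private static readonly TopicName Topic = TopicName.FromProjectTopic("my-project", "user-events");

        [HttpPost]
        public async Task<IActionResult> Create([FromBody] CreateUserRequest request)
        {
            // 1. Perform the write this service owns (placeholder persistence).
            var userId = await SaveUserAsync(request);

            // 2. Publish the event on behalf of the client.
            //    (In a real service you would create the publisher once and reuse it.)
            var publisher = await PublisherClient.CreateAsync(Topic);
            await publisher.PublishAsync(JsonSerializer.Serialize(new { userId, request.Email }));

            return Ok(new { id = userId });
        }

        private Task<string> SaveUserAsync(CreateUserRequest request) =>
            Task.FromResult(System.Guid.NewGuid().ToString()); // placeholder persistence

        public record CreateUserRequest(string Email, string Name);
    }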

Load balancing a room-based pub/sub application on Azure

I've got a working Silverlight/WCF application that I need to start thinking about scaling. An obvious target for scaling, of course, is Azure.
The key architectural feature of the application is that 2-10 Silverlight clients will join a given "room" (using a duplex Net.TCP connection), and any of those clients can then send a message (for instance, a chat message), which then needs to be pushed in real-time to every other client connected to the same room, using the underlying duplex WCF connection.
Right now, the way the WCF service works is basically to keep in-memory a list of sessions and the rooms that they're associated with, so that when a message from one session comes in, it can automatically send the message to every other session in the room.
This works fine for a single WCF server instance, but it gets complicated if you need to scale it so that multiple WCF instances are in play. If you use network-layer load balancing, of course, you would typically find that only some of the members of your room are on the same server you're on, which means that when it comes time to push out messages to all those members, only some of them would actually get notified.
Apart from Azure, I had been thinking that I would handle it via some sort of application-layer load balancing. For instance, the web server that each client downloads the Silverlight application from might do a primitive round-robin sort of load-balancing, i.e., "OK, everyone in room x, you use WCF instance 1. Everyone in room y, you use WCF instance 2." That sort of thing.
So I have two questions:
(1) Is there any other, better way to architect this, so as to be able to use network-layer load balancing rather than needing to make the application aware of the underlying infrastructure?
(2) If I have to do the application-layer load balancing, what's the best way to handle this in Azure? Do I have to use IaaS (full VMs), or is there a way to do this using PaaS (worker roles)? My understanding is that it's not possible to independently address worker roles, which would make a roles-based approach difficult, if not impossible.
SignalR, powered by Azure Service Bus, may work for you.
http://vasters.com/clemensv/2012/02/13/SignalR+Powered+By+Service+Bus.aspx
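Roughly, with classic ASP.NET SignalR 2.x and the Service Bus backplane package (Microsoft.AspNet.SignalR.ServiceBus), the wiring could look like the sketch below; the hub, group and connection-string values are placeholders. The backplane relays every hub message through Service Bus, so clients in the same room receive it regardless of which server instance they are connected to, which lets you keep plain network-layer load balancing.

    // Sketch assuming SignalR 2.x + Microsoft.AspNet.SignalR.ServiceBus (names are placeholders).
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var serviceBusConnectionString = "Endpoint=sb://...";   // placeholder
            // Scale-out backplane: all instances publish/receive hub messages through Service Bus.
            GlobalHost.DependencyResolver.UseServiceBus(serviceBusConnectionString, "chat");
            app.MapSignalR();
        }
    }

    public class RoomHub : Hub
    {
        public Task JoinRoom(string room)
        {
            // Group membership replaces the in-memory session/room list.
            return Groups.Add(Context.ConnectionId, room);
        }

        public void SendToRoom(string room, string message)
        {
            // Delivered to every client in the room, whichever server instance they are connected to.
            Clients.Group(room).receiveMessage(message);
        }
    }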

MSMQ between WCF services in a load-balanced environment

I'm thinking of adding a queue function to a product based on a bunch of WCF services. I've read a bit about MSMQ; at first I thought that was what I needed, but I'm not sure, and I am considering just putting the queue in a database table. I wonder if someone here has some feedback on which way to go.
Basically I'm planning to have a facade WCF service called over HTTP. The facade service should only write all incoming messages to a queue, to give a fast response to the calling system. The messages in the queue would then be processed by another component, either a WCF service or a Windows service, depending on my choice of queue.
The product is running in a load-balanced environment with 2 to n web servers.
The options I'm considering and the questions I got are:
To let the facade WCF service write to an MSMQ queue and then have another WCF service reading from this queue to do the processing of the messages. What I don't feel confident about for this alternative, from what I've read, is how it will work in a load-balanced environment.
1A. Where should the MSMQ(s) be placed? One on each web server? One on a separate server? Multiple on a separate server? (Not considering the need for redundancy, and that data in rare cases could be lost and re-sent.)
1B. How is the design affected if I want the system to be redundant? I'd like to be able to lose a server (it never comes back online) holding the MSMQ without losing the data in that queue. From what I've read about MSMQ, that leaves me with the only option of placing the MSMQ on a Windows cluster. Is that correct? (I'd like to avoid using a Windows cluster for this.)
The second design alternative is to let the facade WCF service write the queue to a database, and then have two or more Windows services do the processing of the queue. I don't have any questions on this alternative. If you wonder why I don't just pick this one, since it seems simpler: it's because I'd like to build this without introducing any Windows services in the solution, because I believe MSMQ has functionality I don't want to code myself, and because I'm curious about MSMQ, having never used it before.
Best Regards
Håkan
OK, so you're not using WCF with MSMQ integration, you're using WCF to create MSMQ messages as an end-product. That simplifies things to "how do I load balance MSMQ?"
The arrangement you use is based on what works best for you.
You could have multiple web servers sending messages to a remote queue on a central machine.
Instead, you could have the web servers putting messages in local queues, with a central machine polling those queues for new arrivals.
You don't need to cluster MSMQ to make it resilient. You can instead make your code resilient so that it copes with lost messages using dead letter queues, transactional queues, journaling, and so on. Hardware clustering is the easy option :-)
Load-balancing MSMQ - a brief discussion
Oil and water - MSMQ transactional messages and load balancing
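For reference, a rough System.Messaging sketch of the "make your code resilient" route: a transactional queue with journaling and dead-lettering enabled. The queue path and message shape are placeholders; whether the queue lives on each web server or on a central machine is the arrangement choice discussed above.

    // Rough sketch with System.Messaging: transactional send with journaling and dead-lettering,
    // i.e. the "resilient code" route rather than clustering. Paths and message body are placeholders.
    using System.Messaging;

    public static class OrderQueue
    {
        private const string Path = @".\private$\orders";   // local queue on the web server (placeholder)

        public static void Enqueue(string body)
        {
            if (!MessageQueue.Exists(Path))
            {
                MessageQueue.Create(Path, transactional: true);
            }

            using (var queue = new MessageQueue(Path))
            using (var message = new Message(body))
            {
                message.Recoverable = true;          // persist to disk, survive a restart
                message.UseJournalQueue = true;      // keep a copy in the journal after delivery
                message.UseDeadLetterQueue = true;   // undeliverable messages end up in the dead-letter queue
                queue.Send(message, MessageQueueTransactionType.Single);
            }
        }

        public static string Dequeue()
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
                using (var message = queue.Receive(MessageQueueTransactionType.Single))
                {
                    return (string)message.Body;
                }
            }
        }
    }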
After reading some more on the subject I have decided not to use MSMQ. It seems I really have no reason to go down this road. I need this to be non-transactional, and as I understand it, none of the journaling or dead-letter techniques will help me with my redundancy requirement.
All my components will be online most of the time (maybe a couple of hours per year when they have access problems).
MSMQ would only add complexity to the existing solution: another technique, and maybe another server, to keep track of.
To get full redundancy and prevent data loss in MSMQ I would need a Windows cluster, or would have to implement send/receive to multiple identical queues. I don't want to do either of those.
All this led me to front my receiving application with a WCF facade accepting HTTP calls and writing to a database queue. This database is already protected from data loss. The queue will be polled by multiple active instances of a Windows Service containing all the heavy business logic. With low process priority, these services could be hosted on the already existing nodes used by the load-balanced web application. If I get time to use MSMQ, or if I need it for another reason in my application, I might change my decision.
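As an illustration of that choice, a rough sketch of the dequeue step the competing Windows Service instances could run against SQL Server, assuming a simple queue table that is not part of the original post. The READPAST hint lets multiple active pollers work the same table without handing the same row to two of them.

    // Rough sketch: competing Windows Service instances dequeue from a SQL Server table.
    // READPAST skips rows locked by other pollers, so each row is handed to only one instance.
    // Table/column names and the connection string are placeholders.
    using System.Data.SqlClient;

    public static class DatabaseQueue
    {
        private const string Sql =
            "DELETE TOP (1) FROM dbo.MessageQueue WITH (ROWLOCK, READPAST) " +
            "OUTPUT deleted.Id, deleted.Payload;";

        // Returns the payload of one queued message, or null if the queue is currently empty.
        // A real implementation would wrap the delete and the processing in one transaction
        // (or use a status column) so a crash mid-processing does not lose the message.
        public static string TryDequeue(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(Sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    return reader.Read() ? reader.GetString(1) : null;
                }
            }
        }
    }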