Best practice for setting up a RabbitMQ cluster in production with NServiceBus

Currently we have 2 load-balanced web servers. We are just starting to expose some functionality over NSB. If I create two "app" servers, would I create a cluster between all 4 servers, or should I create 2 clusters?
i.e.
Cluster1: Web Server A, App Server A
Cluster2: Web Server B, App Server B
If it is one cluster, how do I keep a published message from being handled more than once by the same logical subscriber when that subscriber is deployed to both app server A and app server B?
Is message durability the only reason I would put RabbitMQ on the web servers (assuming I didn't have any of the app services running on the web servers as well)? In that case my assumption is that I would then rely on cluster mirroring to get the message to the app servers. Is this correct?

Endpoints vs Servers
NServiceBus uses the concept of endpoints. An endpoint is tied to a queue from which it receives messages. If an endpoint is scaled out, for either high availability or performance, there is still only one queue (with RabbitMQ). So if you have an instance running on server A and another on server B, both (with RabbitMQ) get their messages from the same queue.
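To make that concrete, here is a minimal sketch of an endpoint configured for the RabbitMQ transport (assuming NServiceBus 6/7-style APIs and the NServiceBus.RabbitMQ package; the endpoint name "Sales" and the connection string are just examples, and the exact transport configuration calls vary between transport versions). Deploy this same configuration to server A and server B and both instances compete for messages on the single "Sales" queue:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

class Program
{
    static async Task Main()
    {
        // The endpoint name determines the input queue name ("Sales"),
        // so every instance started with this configuration consumes
        // from the same queue, regardless of which server it runs on.
        var endpointConfiguration = new EndpointConfiguration("Sales");

        var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
        transport.ConnectionString("host=my-rabbit-cluster"); // placeholder broker address

        endpointConfiguration.EnableInstallers(); // create the queue if it doesn't exist

        var endpointInstance = await Endpoint.Start(endpointConfiguration);

        Console.WriteLine("Endpoint started. Press any key to exit.");
        Console.ReadKey();

        await endpointInstance.Stop();
    }
}
```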
I wouldn't think in terms of app servers but in terms of endpoints and their non-functional requirements with regard to deployment, availability, and performance.
Availability vs Performance vs Deployment
It is not required to host all endpoints on both server A and server B. You can also run services X and Y on server A and services U and V on server B. You then scale out for performance but not for availability, although availability is already less of an issue because of the asynchronous nature of messaging. This can also make deployment easier.
Pub/Sub vs Request/Response
If the same logical endpoint has multiple instances deployed, it should not matter which instance processes an event. If it does matter, then it probably isn't pub/sub but async request/response. NServiceBus handles that by creating a queue for each instance (with RabbitMQ) on which the response can be received when the response requires affinity to the requesting instance.
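As a rough sketch of that instance-affinity case (assuming NServiceBus 6+ and the NServiceBus.Callbacks package; the "instance-A" discriminator is just an example), making an instance uniquely addressable gives it its own additional queue alongside the shared one, so replies can be routed back to the instance that made the request:

```csharp
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");

// In addition to the shared "Sales" queue, this instance gets its own
// queue (something like "Sales-instance-A") so that replies requiring
// instance affinity can reach exactly this instance.
endpointConfiguration.MakeInstanceUniquelyAddressable("instance-A");

// Required for the callback-style request/response API.
endpointConfiguration.EnableCallbacks();
```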
Topology
You have:
Load-balanced web farm cluster
Load-balanced RabbitMQ cluster
NServiceBus endpoints, deployed as either:
Highly available multiple instances on different machines
Endpoints spread across various machines (could even be a machine per endpoint)
A combination of both
Infrastructure
You could choose to run the RabbitMQ cluster on the same infrastructure as your web farm or run it separately. That depends on your requirements and available resources. If the web farm and the RabbitMQ cluster are separate, you can more easily scale them out independently.
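Whichever way you split the infrastructure, the application mostly just needs to know where the broker nodes are. As a hedged illustration using the raw RabbitMQ .NET client (host names and credentials are placeholders), a client can be given every node of a separately hosted cluster so it fails over if one node is unavailable:

```csharp
using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory
{
    UserName = "app",      // placeholder credentials
    Password = "secret"
};

// List all nodes of the (separately hosted) RabbitMQ cluster;
// the client tries them in order, so losing a single node
// does not take the connecting application down with it.
var clusterNodes = new List<string> { "rabbit-node-1", "rabbit-node-2", "rabbit-node-3" };

using var connection = factory.CreateConnection(clusterNodes);
```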

Related

How to run an async WCF service in the cloud in a serverless, auto-scaling manner

I need to run the services in an "auto-scaling" manner, i.e. as load increases, I need more of these services to be created. I can use a cloud-based load balancer to handle the routing of the messages from one to many instances.
Would hosting this in IIS (and somehow doing the port sharing, etc.) work?

How to use Kubernetes replication controllers to replicate message-based services

We usually use message passing to send messages to decoupled services. This makes service discovery a non-issue, because (with AMQP in RabbitMQ for instance) you can use the broker's routing capability to dispatch messages to the right queues that feed the correct services. Load balancing is also handled by the message broker.
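To make the broker-side load balancing concrete, here is a rough competing-consumers sketch with the RabbitMQ .NET client (queue name, host name, and prefetch value are just examples): start this same process on several machines and the broker spreads deliveries across the consumers, which is exactly the property described above.

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "rabbit-broker" }; // placeholder host
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Every replica declares (idempotently) and consumes from the same queue,
// so the broker load-balances messages between the running replicas.
channel.QueueDeclare(queue: "orders", durable: true, exclusive: false, autoDelete: false);
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false); // one unacked message at a time

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var body = Encoding.UTF8.GetString(ea.Body.ToArray());
    Console.WriteLine($"Handling: {body}");
    channel.BasicAck(ea.DeliveryTag, multiple: false); // ack so the broker can dispatch the next message
};
channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);

Console.ReadLine(); // keep the consumer alive
```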
Enter Kubernetes.
The use case that is usually laid out when talking about service replication and re-spawning failing services is when your clients use some active protocol like HTTP to contact a service, even if this service handles requests asynchronously. In this context, it is a natural fit to have replication controllers that manage a group of services and a single entry point to load balance between them.
I like Kubernetes' intuitive concepts, like rolling deployments, but how do you control these beasts that don't have an HTTP interface?
UPDATE:
I am not trying to set up a cluster of message brokers. I am looking at message consumers as services. Service clients don't connect directly to the services; they send messages to the message broker. The message broker acts as a load balancer in a way, and dispatches the messages to the subscribed queue consumers. These consumers implement the service.
My question revolves around the fact that most usage patterns in demos cover services that are called via HTTP, and Kubernetes does a good job here of creating a service proxy and a replication controller for those services. Is it possible to create replication controllers for my kind of service, which does not have an HTTP interface per se, and still get all the benefits of rolling updates and minimum instance counts?
I'm not sure I entirely understand the question. Are you asking how to use RabbitMQ with Kubernetes? Or how to set up a RabbitMQ cluster: https://www.rabbitmq.com/clustering.html? Or how rolling updates interact with RabbitMQ? Or something else?
I think you should be able to create one service and one replication controller per server, and then use the service DNS names in the cluster configuration file. This is also the current approach used to run ZooKeeper. We have a long-standing TODO to make this less verbose (https://github.com/GoogleCloudPlatform/kubernetes/issues/260), but the current approach should be straightforward. You do lose the ability to use a single kubectl rolling-update command to update the cluster, but it's also straightforward to update the instances individually.

WebLogic migratable JMS consumer doesn't follow the service to the new managed server if the old server remains running

I have a JMS service targeted at a migratable target (using an Auto-Migrate Exactly-Once policy) in a cluster which consists of 2 managed servers. At any point in time the service is hosted on one of them, and the consumer (which is targeted at the cluster) is supposed to receive messages seamlessly no matter where the service is hosted.
When I manually switch the host of the migratable target (clicking migrate), without turning the hosting managed server off, the consumer fails to receive messages sent to the queues, unless I turn off the previous hosting managed server, which forces the consumer to the new host.
I can rule out sender problems; I can see the messages in the queue right after they are sent.
I'll be grateful if anyone can advise on how to configure either the consumer or the migratable service to work seamlessly when migration happens.
I think that may just be a misunderstanding of how migration works. The docs state for Auto-Migrate Exactly-Once:
indicates that if at least one Managed Server in the candidate list is running, then the JMS service will be active somewhere in the cluster if servers should fail or are shut down (either gracefully or forcibly). For example, a migratable target hosting a path service should use this option so if its hosting server fails or is shut down, the path service will automatically migrate to another server and so will always be active in the cluster. Note that this value can lead to target grouping. For example, if you have five exactly-once migratable targets and only one server member is started, then all five migratable targets will be activated on that server member.
The docs also state:
Manual Service Migration—the manual migration of pinned JTA and JMS-related services (for example, JMS server, SAF agent, path service, and custom store) after the host server instance fails
Your server/service has neither failed nor shut down; you are forcing it to migrate while a healthy host is still running, so it has not met the criteria for migration.
See more here as well.
I have some experience with something that sounds reminiscent of what you're looking at. There was some WLS-specific capability around recognizing reconfiguration of JMS destinations as part of their clustered server design.
In one case I had to call a WLS-specific method: weblogic.jms.extensions.WLSession.setExceptionListener(). This was on their implementation of the JMS Session interface. This is analogous to the standard JMS Connection.setExceptionListener().
With this WLS-specific capability, the WLSession.setExceptionListener() callback would occur at a point where the consuming client should tear down and re-establish the connection / session / consumer in reaction to a reconfiguration (migration) that had happened.

NServiceBus Host and Endpoint Configuration

I have been going through the NServiceBus samples; one point which is not clear to me is the cardinality of NServiceBus Host to Endpoint. Is the relationship 1 NServiceBus.Host to 1 Endpoint? What does this look like in production? 1 Windows Service per 1 Endpoint?
Thanks in advance.
There are 3 main actors. The NServiceBus.Host is the physical host that allows an endpoint (more on that below) to be hosted as a Windows Service on a Windows machine, so there is a 1:1 relation between an NServiceBus.Host and a Windows Service.
Starting with V5, one host can host multiple endpoint instances (we can have more than one bus per service, each listening to a different queue), where an endpoint instance is the physical deployment of an endpoint, the logical definition that owns a set of message types.
So in production you can have 1 Windows Service that hosts 1 endpoint monitoring 1 queue. But you can also have multiple endpoints in the same service, even though by default the on-premises NServiceBus.Host does not support that out of the box. On the other hand, it is supported out of the box on Azure, where we have a dynamic host that allows more than 1 instance per host while keeping the instances isolated in different processes.
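As a hedged sketch of the "multiple endpoints in one Windows Service" case (NServiceBus 6+ self-hosting API; the endpoint names are examples and transport/persistence configuration is omitted), the service's start/stop callbacks would simply start and stop more than one endpoint instance:

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class DualEndpointHost
{
    IEndpointInstance sales;
    IEndpointInstance billing;

    // Called from the Windows Service's OnStart: two logical endpoints,
    // each with its own input queue, hosted in the same process.
    public async Task Start()
    {
        var salesConfig = new EndpointConfiguration("Sales");
        var billingConfig = new EndpointConfiguration("Billing");
        // ... transport, persistence, and routing configured per endpoint ...

        sales = await Endpoint.Start(salesConfig);
        billing = await Endpoint.Start(billingConfig);
    }

    // Called from the Windows Service's OnStop.
    public async Task Stop()
    {
        await sales.Stop();
        await billing.Stop();
    }
}
```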
cross answered: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!msg/particularsoftware/7zOHHOOqDi4/I4p2TbvFGc0J

Windows Service Bus evaluation

My management is evaluating non-Azure Microsoft Windows Service Bus (Azure is out of consideration for security reasons). It will be used to set up a topic/subscription model with a number of WCF services using netMessagingBinding that we are building, so I just have a few basic questions about that.
Are there any specific hardware requirements, like a dedicated server, dedicated database, etc., for WSB to run in a production environment?
It's easy to configure a WCF service to listen on a specific topic subscription. Is there any way for a WCF service to listen to multiple subscriptions?
Appreciate the answers.
You can install the service components and the databases all on one server (that is the default). However, for a number of reasons, we installed the services on a dedicated app server and then created the Service Bus databases on an existing database server. The install package allows you to specify a different database server. Check this article for the minimum server requirements.
Yes, you can get one WCF service to listen to multiple subscriptions. You would need to create two (or more) System.ServiceModel.ServiceHost instances and then run them inside one process. For example, we had one Windows Service running two ServiceHosts. Each host listened on a different queue and therefore implemented a different contract. This meant that where queues were logically grouped, we didn't need a new Windows Service per queue. You could do the same with subscriptions.
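A rough sketch of that layout, with hypothetical contract and service names; each ServiceHost would get its own netMessagingBinding endpoint (typically declared in app.config) pointing at a different queue or subscription, and both hosts run inside the same Windows Service process:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contracts and implementations, one per queue/subscription.
[ServiceContract]
public interface IOrderProcessor { [OperationContract(IsOneWay = true)] void Submit(string order); }

[ServiceContract]
public interface IAuditLogger { [OperationContract(IsOneWay = true)] void Log(string entry); }

public class OrderProcessor : IOrderProcessor
{
    public void Submit(string order) => Console.WriteLine($"Order: {order}");
}

public class AuditLogger : IAuditLogger
{
    public void Log(string entry) => Console.WriteLine($"Audit: {entry}");
}

public class Hosts
{
    ServiceHost orderHost;
    ServiceHost auditHost;

    // Typically called from the Windows Service's OnStart: two hosts,
    // one process, each bound (via config) to a different queue or subscription.
    public void Open()
    {
        orderHost = new ServiceHost(typeof(OrderProcessor));
        auditHost = new ServiceHost(typeof(AuditLogger));

        // The netMessagingBinding endpoints and their addresses would
        // normally be declared in app.config for each service.
        orderHost.Open();
        auditHost.Open();
    }

    public void Close()
    {
        orderHost?.Close();
        auditHost?.Close();
    }
}
```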
For question one, you will have to go through the exercise of hardware sizing. The good news is that WCF services can scale out, so you can add servers if there are issues handling the client load.
To do hardware sizing, you will have to estimate the expected load and then do performance/scalability testing to figure out the load-bearing capacity of your Service Bus/services.
You can find a lot of resources for load testing, like this one: http://seroter.wordpress.com/2011/10/27/testing-out-the-new-appfabric-service-bus-relay-load-balancing/
Once you do load testing and come up with the numbers, you can do sizing using references like this one: http://msdn.microsoft.com/en-us/library/bb310550.aspx