I'm thinking of adding a queue function to a product based on a bunch of WCF services. I've read a bit about MSMQ; at first I thought that was what I needed, but I'm not sure and am also considering just putting the queue in a database table. I wonder if someone here has some feedback on which way to go.
Basically I'm planning to have a facade WCF service called over HTTP. The facade service should only write all incoming messages to a queue, so that it can give a fast response to the calling system. The messages in the queue should then be processed by another component, either a WCF service or a Windows service, depending on my choice of queue.
The product is running in a load balanced environment with 2 to n web servers.
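To make it concrete, the facade contract I have in mind looks roughly like this (a minimal sketch; the names IMessageFacade and EnqueueMessage are just placeholders, nothing is decided yet):

```csharp
using System.ServiceModel;

// Hypothetical facade contract: the operation is one-way so the caller
// gets a fast response; the implementation only writes to the queue.
[ServiceContract]
public interface IMessageFacade
{
    [OperationContract(IsOneWay = true)]
    void EnqueueMessage(string payload);
}

public class MessageFacade : IMessageFacade
{
    public void EnqueueMessage(string payload)
    {
        // Only enqueue here (MSMQ or a database table) and return immediately;
        // the actual processing happens later in a separate component.
    }
}
```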
The options I'm considering and the questions I have are:
1. Let the facade WCF service write to an MSMQ queue and then have another WCF service read from this queue and process the messages. What I don't feel confident about with this alternative, from what I've read, is how it will work in a load balanced environment.
1A. Where should the MSMQ queue(s) be placed? One on each web server? One on a separate server? Multiple on a separate server? (Not considering the need for redundancy, and accepting that data in rare cases could be lost and re-sent.)
1B. How is the design affected if I want the system to be redundant? I'd like to be able to lose a server holding the MSMQ queue (one that never comes back online) without losing the data in that queue. From what I've read about MSMQ, that leaves me with the only option of placing MSMQ on a Windows cluster. Is that correct? (I'd like to avoid using a Windows cluster for this.)
2. The second alternative is to let the facade WCF service write the queue to a database, and then have two or more Windows services process the queue. I don't have any questions on this alternative. If you wonder why I don't just pick this one, as it seems simpler to me, it is because I'd like to build this without introducing any Windows services into the solution, because I believe MSMQ has functionality I don't want to code myself, and because I'm curious about using MSMQ as I've never used it before.
Best Regards
Håkan
OK, so you're not using WCF with MSMQ integration, you're using WCF to create MSMQ messages as an end-product. That simplifies things to "how do I load balance MSMQ?"
The arrangement you use is based on what works best for you.
You could have multiple web servers sending messages to a remote queue on a central machine.
Alternatively, you could have each web server put messages in a local queue, with a central machine polling those queues for new arrivals.
You don't need to cluster MSMQ to make it resilient. You can instead make your code resilient so that it copes with lost messages using dead letter queues, transactional queues, journaling, and so on. Hardware clustering is the easy option :-)
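As a rough illustration of those non-cluster options, assuming a transactional private queue already exists (the queue path and payload are only examples), a resilient send could look like the following; for the "local queues" arrangement you would simply use a local path such as .\private$\orders instead of the remote format name:

```csharp
using System;
using System.Messaging;

// Sketch: send a recoverable, transactional message to a remote private queue,
// with journaling and dead-letter tracking turned on. Names are examples only;
// the target queue must exist and be marked transactional.
var queue = new MessageQueue(@"FormatName:DIRECT=OS:central-server\private$\orders");

using (var tx = new MessageQueueTransaction())
{
    tx.Begin();
    var message = new Message("order payload")
    {
        Recoverable = true,        // persist to disk so a restart doesn't lose it
        UseJournalQueue = true,    // keep a copy in the journal after delivery
        UseDeadLetterQueue = true, // undeliverable messages go to the dead-letter queue
        TimeToReachQueue = TimeSpan.FromMinutes(5)
    };
    queue.Send(message, tx);
    tx.Commit();
}
```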
Load-balancing MSMQ - a brief discussion
Oil and water - MSMQ transactional messages and load balancing
After reading some more on the subject I have decided not to use MSMQ. It seems I really have no reason to go down this road. I need this to be non-transactional, and as I understand it, none of the journaling or dead-letter techniques will help me with my redundancy requirement.
All my components will be online most of the time (with maybe a couple of hours per year when they have access problems).
MSMQ would only add complexity to the existing solution: another technology, and maybe another server, to keep track of.
To get full redundancy and prevent data loss with MSMQ I would need a Windows cluster, or I would have to implement send/receive against multiple identical queues. I don't want to do either of those.
All this leads me to front my receiving application with a WCF facade accepting HTTP calls and writing to a database queue. This database is already protected from data loss. The queue will be polled by multiple active instances of a Windows service containing all the heavy business logic. With low process priority, these services could be hosted on the already existing nodes used by the load balanced web application. If I later have time to use MSMQ, or need it for another reason in my application, I might change my decision.
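For reference, the poll loop I'm planning for the Windows service is roughly like this (a sketch only; the table, column and class names are made up, the real schema isn't decided yet):

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch of the Windows service poll loop. READPAST lets several service
// instances poll the same table without blocking on each other's locked rows.
public class QueuePoller
{
    const string DequeueSql = @"
        DELETE TOP (1) FROM dbo.MessageQueue WITH (ROWLOCK, READPAST)
        OUTPUT deleted.Id, deleted.Payload;";

    private readonly string connectionString;

    public QueuePoller(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void Run()
    {
        while (true)
        {
            if (!TryProcessOne())
                Thread.Sleep(TimeSpan.FromSeconds(5)); // queue empty, back off
        }
    }

    private bool TryProcessOne()
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(DequeueSql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                    return false;
                var payload = reader.GetString(1);
                ProcessMessage(payload); // placeholder for the heavy business logic
                return true;
            }
        }
    }

    private void ProcessMessage(string payload) { /* business logic goes here */ }
}
```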
I haven't found an existing post asking this, but I apologize if I missed it.
I'm trying to get my head round microservices and have come across articles where RabbitMQ is used. I'm confused why RabbitMQ is needed. Is the intention that the services will use a web api to communicate with the outside world and RabbitMQ to communicate with each other?
In Microservices architecture you have two ways to communicate between the microservices:
Synchronous - each service calls the other microservice directly, which results in a dependency between the services.
Asynchronous - you have some central hub (or message queue) where you place all requests between the microservices, and the corresponding service takes a request, processes it and returns the result to the caller. This is what RabbitMQ (or any other message queue - MSMQ and Apache Kafka are good alternatives) is used for. In this case all microservices know only about the existence of the hub (see the sketch below).
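For example, with the RabbitMQ .NET client (the classic 5.x/6.x API; the host and queue names are only examples), a service publishes a request to the hub instead of calling the other microservice directly:

```csharp
using System.Text;
using RabbitMQ.Client;

// Publish a request to a queue instead of calling the other microservice directly.
var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare(queue: "order-requests",
                         durable: true,
                         exclusive: false,
                         autoDelete: false,
                         arguments: null);

    var body = Encoding.UTF8.GetBytes("{ \"orderId\": 42 }");
    channel.BasicPublish("", "order-requests", null, body);
}
```

The consuming microservice reads from the same queue whenever it is ready, so the publisher never has to know whether the consumer is up at the moment the request is made.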
microservices.io has some very nice articles about using microservices
A message queue provides an asynchronous communications protocol - you have the option to send a message from one service to another without having to know whether the other service is able to handle it immediately or not. Messages can wait until the responsible service is ready. A service publishing a message does not need to know anything about the inner workings of the services that will process that message. This way of handling messages decouples the producer from the consumer.
A message queue keeps the processes in your application separate and independent of each other; this way of handling messages can create a system that is easy to maintain and easy to scale.
Simply put, two obvious cases can be used as examples of when message queues really shine:
For long-running processes and background jobs
As the middleman in between microservices
For long-running processes and background jobs:
When requests take a significant amount of time, it is the perfect scenario to incorporate a message queue.
Imagine a web service that handles multiple requests per second and cannot, under any circumstances, lose one. On top of that, the requests are handled through time-consuming processes, but the system cannot afford to be bogged down. Some real-life examples include:
Image scaling
Sending large/many emails (like newsletters)
Search engine indexing
File scanning
Video encoding
Delivering notifications
PDF processing
Calculations
The middleman in between microservices:
For communication and integration within and between applications, i.e. as the middleman between microservices, a message queue is also useful. Think of a system that needs to notify another part of the system to start work on a task, or that has a lot of requests coming in at the same time, as in the following scenarios:
Order handling (Order placed, update order status, send an order, payment, etc.)
Food delivery service (Place an order, prepare an order, deliver food)
Any web service that needs to handle multiple requests
Here is a story explaining how Parkster (a digital parking service) is breaking down its system into multiple microservices by using RabbitMQ.
This guide follows a scenario where a web application allows users to upload information to a web site. The site will handle this information, generate a PDF, and email it back to the user. Handling the information, generating the PDF and sending the email will, in this example, take several seconds, and that is one of the reasons why a message queue will be used.
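The consuming side of that kind of scenario would look roughly like this, assuming the 6.x RabbitMQ .NET client (the queue name and the GeneratePdfAndEmail helper are only examples):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// The web application publishes to "pdf_processing" and returns immediately;
// this worker consumes the queue and does the slow PDF + email work.
var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare("pdf_processing", durable: true, exclusive: false,
                         autoDelete: false, arguments: null);

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        var request = Encoding.UTF8.GetString(ea.Body.ToArray());
        GeneratePdfAndEmail(request);              // placeholder for the slow work
        channel.BasicAck(ea.DeliveryTag, false);   // acknowledge only after success
    };
    channel.BasicConsume("pdf_processing", false, consumer);

    Console.ReadLine(); // keep the worker alive
}

static void GeneratePdfAndEmail(string request)
{
    // render the PDF and send the email here
}
```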
Here is a story about how and why CloudAMQP used message queues and RabbitMQ between microservices.
Here is a story about the usage of RabbitMQ in an event-based microservices architecture to support 100 million users a month.
And finally a link to Kontena, about why they chose RabbitMQ for their microservice architecture: "Because we needed a stable, manageable and highly-available solution for messaging.".
Please note that I work for the company behind CloudAMQP (hosting provider of RabbitMQ).
The same question could be asked about why REST is necessary for microservices. The microservice concept is nothing new under the sun; distributing a workflow across components has long been used in backend engineering and asynchronous request processing. A microservice is the same kind of component, in a separate JVM, matching the S (single responsibility) in SOLID. What makes it a micro SERVICE is that it is load balanced. And that is all!
In particular, it can be a REST service built on Spring Cloud, registered with Eureka, with a proxy gateway and load balancing via Zuul and Ribbon. But that is not the whole world of microservices! Incidentally, asynchronous distributed processing is one of the tasks microservices are used for. A long time ago, services (components) in separate JVMs were integrated over messaging, and that pattern is known as ESB. Microservices are the same kind of participants in that pattern. Because of the fashion for Spring Cloud, REST can seem like the only way to build microservices. Nope! Message-based asynchronous microservice architecture is supported by Vert.x, for example: https://dzone.com/articles/asynchronous-microservices-with-vertx. Why not use RabbitMQ as the message channel? In that case, load balancing can be provided by building a RabbitMQ cluster. For example: https://codeburst.io/using-rabbitmq-for-microservices-communication-on-docker-a43840401819. So the world is much wider than REST.
We have multiple web and Windows applications, deployed to different servers, that we are planning to integrate using NServiceBus so that all the apps can pub/sub messages between them. I think the pub/sub pattern with the MSMQ transport will be a good fit. One thing I am not clear about: is there a way to avoid hard-coding the subscription endpoint as MSMQ QueueName@ServerName, with the server name in it, when the publisher is on another server? In the 6-pre release I saw the idea of setting an endpoint name and then using routing to map it to a transport-level address - is that a solution for this? Or is a gateway the only solution? Is a broker a good idea? What is the best practice for this scenario?
When using pub/sub, the subscriber currently needs to know the location of the publisher's queue. The subscriber then sends a subscription message to that queue, every single time it starts up. It cannot know whether it has already subscribed, or whether it has subscribed to all the messages, since you might have added/configured some new ones.
The publisher reads these subscription messages and stores the subscriptions in storage. NServiceBus does this for you, so there's no need to write code for it. The only thing you need is configuration in the subscriber as to where the (queue of the) publisher is.
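As a rough sketch, assuming the NServiceBus 6 routing API on the MSMQ transport (endpoint and event names here are invented), the subscriber only refers to the publisher by its logical endpoint name; the endpoint-to-machine mapping lives outside the code:

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class OrderPlaced : IEvent { }   // example event type

public static class SubscriberHost
{
    public static async Task Start()
    {
        var endpointConfiguration = new EndpointConfiguration("Sales.Subscriber");
        var transport = endpointConfiguration.UseTransport<MsmqTransport>();
        var routing = transport.Routing();

        // The publisher is referenced by its logical endpoint name, not by
        // QueueName@ServerName; with MSMQ the mapping from endpoint name to
        // physical machine is kept in the instance-mapping file, not in code.
        routing.RegisterPublisher(typeof(OrderPlaced), "Sales.Publisher");

        var endpointInstance = await Endpoint.Start(endpointConfiguration);
    }
}
```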
I wrote a tutorial myself which you can find here : http://dennis.bloggingabout.net/2015/10/28/nservicebus-publish-subscribe-tutorial/
That being said, you should take special care related to issues regarding websites that publish messages. More information on that can be found here : http://docs.particular.net/nservicebus/hosting/publishing-from-web-applications
In a scale out situation with MSMQ, you can also use the distributor : http://docs.particular.net/nservicebus/scalability-and-ha/distributor/
As a final note: It depends on the situation, but I would not worry too much about knowing locations of endpoints (or their queues). I would most likely not use pub/sub just for this 'technical issue'. But again, it completely depends on the situation. I can understand that rich-clients which spawn randomly might want this. But there are other solutions as well, with a more centralized storage and an API that is accessed by all the rich clients.
Simplified... We are using NServiceBus for updating our storage.
In our sagas we first read data from our storage, update the data and put it back into storage. The NServiceBus instance is self-hosted in a Windows service. Calls to storage are separated into their own assembly ('assembly1').
Now we will also need synchronous reads from our storage through WCF. In some cases these will be the same reads that are needed when updating in the sagas.
I have my opinion quite clear but maybe I am wrong and therefore I am asking this question...
Should we set up a separate WCF service that is using a copy of 'assembly1'?
Or, should the WCF instance host NServiceBus?
Or, is there even a better way to do it?
It is in a way two endpoints right now: WCF for the synchronous calls, and the Windows service that hosts NServiceBus (which already exists).
I see no reason to separate into two distinct endpoints in your question or comments. It sounds like you are describing a single logical service, and my default position would be to host each logical service in a single process. This is usually the simplest approach, as it makes deployment and troubleshooting easier.
Edit
Not sure if this is helpful, but my current client runs NSB in an IIS-hosted WCF endpoint. So commands are handled via NSB messages, while queries are still exposed via WCF. To date we have had no problems hosting the two together in a single process.
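Not a prescription, but the hosting arrangement can be as simple as starting the endpoint once when the web application starts, while the WCF services in the same process keep handling the synchronous reads (this sketch assumes NServiceBus 6+; the endpoint name and the BusHost class are made up):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Sketch: start the NServiceBus endpoint once when the (IIS-hosted) application
// starts; WCF services in the same process continue to serve the queries.
public static class BusHost
{
    public static IEndpointInstance Instance { get; private set; }

    public static async Task StartAsync()
    {
        var configuration = new EndpointConfiguration("Storage.Commands"); // example name
        configuration.UseTransport<MsmqTransport>();
        Instance = await Endpoint.Start(configuration);
    }
}

// e.g. called once from Global.asax Application_Start:
//   BusHost.StartAsync().GetAwaiter().GetResult();
```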
Generally speaking, a saga should only update its own state (the Data property) and send messages to other endpoints. It should not update other state or make RPC calls (like to WCF).
Before giving more specific recommendations, it would be best to understand more about the specific responsibilities of your saga and the data being updated by 'assembly1'.
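As a generic illustration of that guideline (all type names here are invented, assuming NServiceBus 6-style handlers), the saga below only touches its own Data and sends a command; the actual storage update would happen in a separate handler or endpoint:

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class OrderSagaData : ContainSagaData
{
    public string OrderId { get; set; }
    public bool IsPaid { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<PaymentReceived>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.ConfigureMapping<PaymentReceived>(message => message.OrderId)
              .ToSaga(sagaData => sagaData.OrderId);
    }

    public Task Handle(PaymentReceived message, IMessageHandlerContext context)
    {
        // Only the saga's own state is updated here...
        Data.IsPaid = true;

        // ...and the actual storage update is delegated to another handler.
        return context.Send(new UpdateOrderStorage { OrderId = message.OrderId });
    }
}

// Example message types
public class PaymentReceived : IMessage { public string OrderId { get; set; } }
public class UpdateOrderStorage : ICommand { public string OrderId { get; set; } }
```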
I've got a working Silverlight/WCF application that I need to start thinking about scaling. An obvious target for scaling, of course, is Azure.
The key architectural feature of the application is that 2-10 Silverlight clients will join a given "room" (using a duplex Net.TCP connection), and any of those clients can then send a message (for instance, a chat message), which then needs to be pushed in real-time to every other client connected to the same room, using the underlying duplex WCF connection.
Right now, the way the WCF service works is basically to keep an in-memory list of sessions and the rooms they're associated with, so that when a message from one session comes in, it can automatically send the message to every other session in the room.
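Simplified, and with the names changed, the current single-instance implementation is roughly:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.ServiceModel;

public interface IRoomClient   // duplex callback contract
{
    [OperationContract(IsOneWay = true)]
    void ReceiveMessage(string message);
}

[ServiceContract(CallbackContract = typeof(IRoomClient))]
public interface IRoomService
{
    [OperationContract(IsOneWay = true)]
    void JoinRoom(string room);

    [OperationContract(IsOneWay = true)]
    void SendToRoom(string room, string message);
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerSession)]
public class RoomService : IRoomService
{
    // Room name -> callback channels of every session currently in that room.
    private static readonly ConcurrentDictionary<string, List<IRoomClient>> Rooms =
        new ConcurrentDictionary<string, List<IRoomClient>>();

    public void JoinRoom(string room)
    {
        var callback = OperationContext.Current.GetCallbackChannel<IRoomClient>();
        var members = Rooms.GetOrAdd(room, _ => new List<IRoomClient>());
        lock (members) members.Add(callback);
    }

    public void SendToRoom(string room, string message)
    {
        if (!Rooms.TryGetValue(room, out var members)) return;
        lock (members)
        {
            foreach (var member in members)
                member.ReceiveMessage(message);   // push via the duplex callback
        }
    }
}
```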
This works fine for a single WCF server instance, but it gets complicated if you need to scale it so that multiple WCF instances are in play. If you use network-layer load balancing, of course, you would typically find that only some of the members of your room are on the same server you're on, which means that when it comes time to push out messages to all those members, only some of them would actually get notified.
Apart from Azure, I had been thinking that I would handle it via some sort of application-layer load balancing. For instance, the web server that each client downloads the Silverlight application from might do a primitive round-robin sort of load-balancing, i.e., "OK, everyone in room x, you use WCF instance 1. Everyone in room y, you use WCF instance 2." That sort of thing.
So I have two questions:
(1) Is there any other, better way to architect this, so as to be able to use network-layer load balancing rather than needing to make the application aware of the underlying infrastructure?
(2) If I have to do the application-layer load balancing, what's the best way to handle this in Azure? Do I have to use IaaS (full VMs), or is there a way to do this using PaaS (worker roles)? My understanding is that it's not possible to independently address worker roles, which would make a role-based approach difficult, if not impossible.
SignalR, powered by the Azure Service Bus, may work for you.
http://vasters.com/clemensv/2012/02/13/SignalR+Powered+By+Service+Bus.aspx
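A rough sketch of what that could look like with classic ASP.NET SignalR 2 and the Service Bus backplane (the hub, group and connection-string values are only examples): rooms map naturally onto SignalR groups, and the backplane relays messages between web role instances, so every member of a room is reached regardless of which instance its connection landed on.

```csharp
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;
using Owin;

public class RoomHub : Hub
{
    // A "room" is just a SignalR group.
    public Task JoinRoom(string room)
    {
        return Groups.Add(Context.ConnectionId, room);
    }

    public void SendToRoom(string room, string message)
    {
        // The backplane fans this out across all server instances.
        Clients.Group(room).receiveMessage(message);
    }
}

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Connection string to an Azure Service Bus namespace (example values).
        GlobalHost.DependencyResolver.UseServiceBus("Endpoint=sb://example-namespace", "signalr-backplane");
        app.MapSignalR();
    }
}
```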
I have two applications, let us call them A and B. Currently A uses WCF to send messages to B. A doesn't need a response and B never sends messages back to A.
Unfortunately, there is a flaky network connection between the servers A and B are running on. This results in A getting timeout errors from time to time.
I would like to use WCF+MSMQ as a buffer between the two applications. That way if B goes down temporarily, or is otherwise inaccessible, the messages are not lost.
From an architectural standpoint, how should I configure this?
I think you might have inflated your question a bit with the inclusion of the word "architectural".
If you truly need an architectural overview of this issue from that high a level, including SLA concerns, your SLA will only be as good as your MSMQ deployment, so if you are concerned about the SLA, just look at the documentation on the internet about MSMQ and SLAs.
If you are looking more for the actual implementation from a code standpoint, this article is excellent:
http://code.msdn.microsoft.com/msmqpluswcf
It goes over a lot of the things you'll need to know, including how to set up MSMQ and how to implement chunking to get around MSMQ's 4MB limit (if that's necessary... I hope it's not).
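For orientation, the basic shape of a WCF-over-MSMQ setup is roughly the following, done in code rather than config for brevity (the queue, contract and class names are only examples; the private queue must exist and, with the binding's default exactly-once delivery, be transactional):

```csharp
using System.ServiceModel;

// Contract shared by A (client) and B (service). MSMQ endpoints require
// one-way operations, which fits since A never needs a response.
[ServiceContract]
public interface IMessageSink
{
    [OperationContract(IsOneWay = true)]
    void Submit(string payload);
}

public class MessageSinkService : IMessageSink
{
    public void Submit(string payload) { /* process the message on B */ }
}

public static class HostAndClientSketch
{
    // On B's side: host the service on a local private queue.
    public static ServiceHost HostServiceB()
    {
        var host = new ServiceHost(typeof(MessageSinkService));
        host.AddServiceEndpoint(typeof(IMessageSink),
                                new NetMsmqBinding(NetMsmqSecurityMode.None),
                                "net.msmq://localhost/private/MessageSinkQueue");
        host.Open();
        return host;
    }

    // On A's side: send through the queue; MSMQ stores the message locally
    // if B (or the network) is unavailable and forwards it later.
    public static void SendFromA(string payload)
    {
        var factory = new ChannelFactory<IMessageSink>(
            new NetMsmqBinding(NetMsmqSecurityMode.None),
            new EndpointAddress("net.msmq://server-b/private/MessageSinkQueue"));
        var channel = factory.CreateChannel();
        channel.Submit(payload);
        factory.Close();
    }
}
```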
Here's a good article about creating a durable and transactional queue that will cross machines using an MSMQ cluster: http://www.devx.com/enterprise/Article/39015/1954