My team wants to move to a microservices architecture. Currently we are using Redis Pub/Sub as a message broker for some legacy parts of our system. My colleagues think it is natural to keep using Redis as the service bus because they don't want to spend time learning a new product. But in my opinion RabbitMQ (especially with MassTransit) is a better fit for microservices. Could you please compare Redis Pub/Sub with RabbitMQ and give me some arguments for RabbitMQ?
Redis is a fast in-memory key-value store with optional persistence. Pub/sub is a marginal feature of Redis as a product, not its core purpose.
RabbitMQ is a dedicated message broker that does nothing else. It is optimized for reliable delivery of messages, both in command style (send to an endpoint exchange/queue) and in publish-subscribe style. RabbitMQ also includes a management plugin that provides a helpful HTTP API to monitor the broker status, inspect queues and so on.
Dealing with Redis pub/sub at the low level of the Redis client can be a painful experience. You could use a library like ServiceStack that offers a higher-level abstraction to make it more manageable.
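To illustrate, here is roughly what raw pub/sub looks like with the StackExchange.Redis client (a sketch only; the channel name and payload are made up, and ServiceStack's API looks different). Note that delivery is fire-and-forget: if no subscriber is connected at the moment you publish, the message is simply gone.

using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var sub = redis.GetSubscriber();

// Subscriber: the handler only fires for messages published while this connection is up.
sub.Subscribe("orders", (channel, message) =>
{
    Console.WriteLine($"Received: {message}");
});

// Publisher: returns how many clients received the message; 0 means it went nowhere.
long receivers = sub.Publish("orders", "order-created:42");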
However, MassTransit adds a lot of value compared to raw messaging over RMQ. As soon as you start doing things for real, no matter which transport you choose, you will hit the typical issues associated with messaging: handling replies, scheduling, long-running processes, redelivery, dead-letter queues and poison messages. MassTransit handles all of that for you; neither the Redis nor the RMQ client delivers any of it. If your team wants to deal with those concerns in their own code, that is reinventing the wheel. Using the argument of "not willing to learn a new product" in this context sounds a bit odd, since instead of delivering value for the product, the developers want to spend their time dealing with infrastructure concerns.
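For a sense of what that buys you, here is a rough sketch of a MassTransit consumer over RabbitMQ. The exact configuration API differs between MassTransit versions, and the OrderSubmitted contract, queue name and retry policy are invented for the example; the point is that redelivery, error queues and serialization live in the framework rather than in your handler.

using System;
using System.Threading.Tasks;
using MassTransit;

public record OrderSubmitted(Guid OrderId);   // hypothetical message contract

public class OrderSubmittedConsumer : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        // Business logic only; retries, error queues and serialization are MassTransit's job.
        Console.WriteLine($"Order {context.Message.OrderId} submitted");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost", "/", h => { h.Username("guest"); h.Password("guest"); });

            cfg.ReceiveEndpoint("order-submitted", e =>
            {
                e.UseMessageRetry(r => r.Interval(3, TimeSpan.FromSeconds(5))); // built-in redelivery
                e.Consumer<OrderSubmittedConsumer>();
            });
        });

        await busControl.StartAsync();
        await busControl.Publish(new OrderSubmitted(Guid.NewGuid()));
        await busControl.StopAsync();
    }
}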
RabbitMQ is far more stable and robust than Redis for passing messages.
RabbitMQ is able to hold and store a message if there is no consumer for it (e.g. your listener crashed).
RabbitMQ offers different communication patterns (pub/sub and queues) that you can use for load balancing, etc.
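For example, with the RabbitMQ .NET client (6.x-style API; queue name and payload are made up) you can declare a durable queue and publish persistent messages. They sit in the queue until a consumer attaches, and several consumers on the same queue have deliveries spread between them, which is what gives you the load balancing.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// A durable queue survives a broker restart and buffers messages while no consumer is attached.
channel.QueueDeclare(queue: "work-items", durable: true, exclusive: false, autoDelete: false, arguments: null);

var props = channel.CreateBasicProperties();
props.Persistent = true;   // ask the broker to write the message to disk as well

channel.BasicPublish(exchange: "", routingKey: "work-items",
                     basicProperties: props,
                     body: Encoding.UTF8.GetBytes("do-something"));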
Redis is convenient for simple cases. If you can afford to lose a message and you don't need queues, then Redis is a good option.
If you cannot afford to lose a message, however, Redis is not a good option.
Related
I have a microservice architecture and now I need to introduce a notification center. The requirements are: any service is able to send a notification; any service is able to subscribe to any kind of notification; the UI (web) is able to subscribe to notifications (WebSockets are preferred). Of course I could write such a service myself, but maybe there is a ready-made, robust solution for this.
UPD: I'm not looking for a pub/sub messaging system, as that is too low-level for a notification center.
What you are looking for is publish-subscribe messaging. If you are using the AWS stack, then I can recommend Amazon SNS or Amazon SQS. I think Amazon SNS is more suitable because it is push-based.
Amazon SNS allows applications to send time-critical messages to multiple subscribers through a “push” mechanism, eliminating the need to periodically check or “poll” for updates.
Amazon SQS is a message queue service used by distributed applications to exchange messages through a polling model, and can be used to decouple sending and receiving components without requiring each component to be concurrently available.
Outside the Amazon Web Services stack, there are a number of free messaging solutions:
RabbitMQ is one of the leading implementations of the AMQP protocol (along with Apache Qpid). It therefore implements a broker architecture, meaning that messages are queued on a central node before being sent to clients. This approach makes RabbitMQ very easy to use and deploy, because advanced scenarios like routing, load balancing or persistent message queuing are supported in just a few lines of code (see the sketch after this list). However, it also makes it less scalable and “slower”, because the central node adds latency and message envelopes are quite big.
ZeroMQ is a very lightweight messaging system designed specifically for high-throughput/low-latency scenarios like the ones you find in the financial world. ZeroMQ supports many advanced messaging scenarios, but contrary to RabbitMQ you will have to implement most of them yourself by combining various pieces of the framework (e.g. sockets and devices).
ActiveMQ is in the middle ground. Like ZeroMQ, it can be deployed with both broker and P2P topologies. Like RabbitMQ, it is easier to implement advanced scenarios, but usually at the cost of raw performance.
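To give an idea of the "few lines of code" point for a notification fan-out: with the RabbitMQ .NET client (6.x-style API; exchange, queue and payload names are made up), every service binds its own queue to a fanout exchange and receives a copy of every notification. A hedged sketch:

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// A fanout exchange copies every message to all bound queues.
channel.ExchangeDeclare(exchange: "notifications", type: ExchangeType.Fanout, durable: true);

// Each subscribing service declares and binds its own queue.
channel.QueueDeclare(queue: "billing-notifications", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind(queue: "billing-notifications", exchange: "notifications", routingKey: "");

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
    Console.WriteLine($"Notification: {Encoding.UTF8.GetString(ea.Body.ToArray())}");
channel.BasicConsume(queue: "billing-notifications", autoAck: true, consumer: consumer);

// Any service publishes to the exchange, not to a specific queue.
channel.BasicPublish(exchange: "notifications", routingKey: "",
                     basicProperties: null,
                     body: Encoding.UTF8.GetBytes("user-registered:42"));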
Now that you know what you need, I would recommend reading up on each technology and deciding which one serves your goal best. If that isn't worth your time and your requirement is specific and relatively small, then you can go for writing something of your own.
In discussions of the differences between Kafka and RabbitMQ, the terms "dumb broker" and "smart broker" keep popping up in descriptions of how they interact with consumers. Kafka is described as having a dumb broker, while RabbitMQ is said to follow a smart broker/dumb consumer model.
What exactly does this mean? I'm familiar with the basics of Kafka and a little bit more about RabbitMQ. However, which features of RabbitMQ make its broker smarter than Kafka's?
This is a question that bothered me for some time too :) Here's what I have understood so far...
In the case of RabbitMQ, the broker makes sure messages are delivered to the consumers and dequeues them only when it gets an acknowledgement from every consumer that needs that message. It also keeps track of consumer state.
Kafka does not keep track of which messages were read by which consumers. The Kafka broker keeps all messages for a configured retention period, and it is the consumer's responsibility to read them from its topic partitions. Kafka therefore avoids the overhead of tracking consumer state.
You can read more about it in this awesome Pivotal blog post comparing RabbitMQ and Kafka.
The point about Kafka using a dumb broker while RabbitMQ uses a smart broker is one of the criteria used when deciding which messaging system to use. Since RabbitMQ is a smart broker, implementing global strategies for retry is far easier and more listener-agnostic than in Kafka.
Given a set of microservices accessed through an API gateway, I believe that the above point, combined with RabbitMQ being much more maintainable and the knowledge that the data passed between microservices will never amount to the same load as streaming data, makes RabbitMQ a far better choice than Kafka for inter-service communication.
Dumb vs. smart broker refers to whether the broker itself can route messages based on certain conditions.
In the case of RabbitMQ, the producer sends a message to an exchange and the exchange routes the message to queues. The exchange does the routing, and that is what is meant by a smart broker. People have also made brokers really smart and ended up with ESBs; we all know how that went, and the industry is moving away from bloated ESBs.
In the case of Kafka, the broker doesn't route messages. It is up to the user to create topics, producers partition events into topic partitions, and you decide which consumer groups listen to which topics.
Smart vs. dumb broker has nothing to do with message acknowledgement. RabbitMQ tracks the status of each message to see whether it has been consumed or not; Kafka does this differently, using offsets on partitions, with the offset stored in Kafka itself (the consumer can also store it). But both provide the functionality.
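To make the contrast concrete, here is a hedged sketch of both acknowledgement styles using the RabbitMQ .NET client and Confluent.Kafka (queue, topic and group names are made up and assumed to exist):

using System;
using Confluent.Kafka;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// RabbitMQ: the broker tracks each delivery and removes the message once it is acked.
var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();
var rabbitConsumer = new EventingBasicConsumer(channel);
rabbitConsumer.Received += (_, ea) =>
{
    // ... process the message body ...
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "orders", autoAck: false, consumer: rabbitConsumer);

// Kafka: the message stays in the log; the consumer group just moves its own offset forward.
var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "order-processors",
    EnableAutoCommit = false          // commit offsets explicitly after processing
};
using var kafkaConsumer = new ConsumerBuilder<Ignore, string>(config).Build();
kafkaConsumer.Subscribe("orders");
var result = kafkaConsumer.Consume(TimeSpan.FromSeconds(5));
if (result != null)
{
    // ... process result.Message.Value ...
    kafkaConsumer.Commit(result);     // stores the offset in Kafka for this consumer group
}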
I am putting together a queue based distributed system, all standard stuff. We are using the latest version of RabbitMQ to provide our messaging transport tier.
I have some questions regarding achieving high availability (for my applications and not actually RabbitMQ) that I couldn't answer by reading the documentation. Would appreciate some advice, it's very likely my lack of understanding of Rabbit/AMQP that is causing the problem :)
Problem: I have a message producer (called the primary). There is one and only one message producer. There is a secondary producer (called the backup) which should take over from the primary should it fail.
How could I achieve this using existing RabbitMQ capabilities?
Thoughts: Use an "exclusive" queue, to which the primary will be connected. The backup will attempt to connect to this queue. When the primary fails, the backup will gain connectivity to the queue and take control of the process.
What is the correct pattern I should be using to achieve this? I couldn't find any documentation on competing producers etc, would appreciate your advice! How do others do this?
Kind regards
TM
If you want to have only one producer at a time, you can't enforce that with RabbitMQ's built-in mechanisms (unless there is some plugin for it, but I don't know of one). You can control the number of producers at the application level.
P.S.:
It looks like you don't quite get the AMQP idea: producers publish messages to exchanges, while consumers get them from queues. The broker (RabbitMQ) routes messages from an exchange to one or more queues (in fact, it can also route messages to another exchange, but that's another story).
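A minimal sketch of that model with the RabbitMQ .NET client (6.x-style API; exchange, queue and routing-key names are made up): the producer only ever talks to the exchange, and the broker copies the message into every queue bound with a matching routing key.

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Producers publish to an exchange; the broker routes to whichever queues are bound to it.
channel.ExchangeDeclare(exchange: "orders", type: ExchangeType.Direct, durable: true);
channel.QueueDeclare(queue: "invoicing", durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare(queue: "shipping", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind(queue: "invoicing", exchange: "orders", routingKey: "order.created");
channel.QueueBind(queue: "shipping", exchange: "orders", routingKey: "order.created");

// This single publish ends up in both bound queues; consumers read from the queues, not the exchange.
channel.BasicPublish(exchange: "orders", routingKey: "order.created",
                     basicProperties: null, body: Encoding.UTF8.GetBytes("order 42"));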
I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would get the competing consumer model as an option. I understand that NServiceBus employs the "Fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers that I would like to compete for messages from a given queue. What I am observing is subscribers blocking access to further message retrieval from the queue until the message is consumed. So having more than one consumer at this point does me no good other than redundancy.
After reading some documentation on RabbitMQ, I'm assuming it's normal to block until the ack is sent by the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(prefetchSize: 0, prefetchCount: prefetchcount, global: false);
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read what prefetch does, I'm really not sure it's what I'm looking for.
Is it possible to use RabbitMQ without a distributor-type pattern like the one used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. Any value above 1 tells the broker to allow that many unacked messages to be out at once. You will need to experiment with this setting to find the optimum for your scenario.
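For NServiceBus 4.x with the RabbitMQ transport, that usually means the transport connection string in app.config; a sketch along these lines (host name is made up, and the exact option name is worth double-checking against the transport documentation):

<connectionStrings>
  <!-- prefetchcount > 1 lets each endpoint instance have several unacked messages in flight -->
  <add name="NServiceBus/Transport"
       connectionString="host=rabbit-server;prefetchcount=2" />
</connectionStrings>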
I have been working on a CQRS project (my first) for the last 9 months, which has been a steep learning curve. I am currently using JOliver's excellent EventStore in my write model and PostgreSQL for my read model.
Both my read and write databases are on the same machine, which means that when a change is made to the write database, the read model is updated in the same synchronous call.
As I was learning CQRS I felt this was the best way to go as I had no experience with message queue/service bus frameworks such as MassTransit, NServiceBus etc.
I am now at a point with most of my architecture in place to introduce a message queue framework.
Today I came across Redis MQ, which is part of ServiceStack, and as we are already using ServiceStack for our REST-based HTTP clients, this seems like the right way to go.
My question is more about understanding what I need to know (or if I have any misunderstandings) to implement Redis MQ and whether Redis MQ is the right choice?
Now, from what I understand, I would use Redis MQ as a durable queue between the write and read databases. Once my event store has recorded that something has happened in my domain, it will publish to Redis MQ. The services listening for events/messages would receive the event/message from Redis MQ, and once a service has processed it (i.e. updated or written to the read model), a notification/response goes back to the event store to tell it that the message has been received and processed by the listener/subscriber.
Does this sound correct?
Also would the Redis MQ architecture give me everything that NSB, RavenDB, MassTransit etc offer?
Also, I will be deploying to windows 2008 and 2003 server. Is Redis stable for these OSs?
I think the ServiceStack implementation of message queueing in Redis is more appropriate for job-queue scenarios - it pushes a message onto the end of a Redis list and then uses Redis pub-sub to notify listening subscribers that there is a message to pull from the queue. Any consumers would be competing for messages.
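ServiceStack's internals aside, the underlying pattern is roughly "push onto a list, publish a notification, workers pop from the list". A hedged sketch of that pattern with StackExchange.Redis (key and channel names are made up; this is not ServiceStack's actual code):

using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();
var sub = redis.GetSubscriber();

// Producer: enqueue the message in a list so it isn't lost if nobody is listening right now,
// then publish a lightweight notification that work is available.
db.ListRightPush("mq:orders", "order-created:42");
sub.Publish("mq:orders:new", "");

// Consumer: on notification, pop from the list; only one competing worker wins each item.
sub.Subscribe("mq:orders:new", (_, __) =>
{
    RedisValue item = db.ListLeftPop("mq:orders");
    if (item.HasValue)
        Console.WriteLine($"Processing {item}");
});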
For event sourcing, you may be more interested in a fanout- or topic-based messaging topology as offered by RabbitMQ, though that doesn't preclude you from building that sort of thing yourself using Redis data structures.
Now from what I understand, I would use Redis MQ as a durable queue between the write and read database.
Yes this is correct.
Once my event store has recorded that something has happened in my domain then it will publish to Redis MQ.
Yes and this can be done in several ways. It can either happen as part of the transaction which persists to the event store or you can have an out of band process which continuously publishes events from the event store.
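As an illustration of the second option, the out-of-band process can be a simple loop that reads events past a stored checkpoint and pushes them onto the queue. The IEventStore and IMessageQueue interfaces below are hypothetical placeholders, not JOliver EventStore or ServiceStack APIs:

using System;
using System.Collections.Generic;
using System.Threading;

// Hypothetical abstractions standing in for the real event store and queue clients.
public interface IEventStore
{
    IEnumerable<(long Sequence, object Event)> LoadEventsAfter(long checkpoint);
}

public interface IMessageQueue
{
    void Publish(object @event);
}

public class OutOfBandPublisher
{
    private readonly IEventStore _store;
    private readonly IMessageQueue _queue;
    private long _checkpoint;   // persist this somewhere durable in a real system

    public OutOfBandPublisher(IEventStore store, IMessageQueue queue) =>
        (_store, _queue) = (store, queue);

    public void Run(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            foreach (var (sequence, @event) in _store.LoadEventsAfter(_checkpoint))
            {
                _queue.Publish(@event);   // fire-and-forget; subscribers update the read model
                _checkpoint = sequence;   // only advance after a successful publish
            }
            Thread.Sleep(TimeSpan.FromMilliseconds(500));   // simple polling interval
        }
    }
}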
a notification/response goes back to the event store to tell the event store that the message has been received and processed by the listener/subscriber.
The response back to the event publisher is usually omitted. This truly decouples the publishers from subscribers. You make the assumption that once the message is published, all interested subscribers will handle it. If something happens, an error should be logged.
Also would the Redis MQ architecture give me everything that NSB, RavenDB, MassTransit etc offer?
I don't have experience running Redis MQ, but I do know that Redis supports pub/sub, which is one of the value propositions of NSB and MassTransit (as opposed to, say, bare-bones MSMQ). What MT and NSB offer beyond pub/sub are sagas, and it doesn't seem like Redis MQ supports those out of the box, at least. You may never have a need for sagas, so this should not automatically be a deterrent. RavenDB is not a message queue, so it doesn't apply here.
Also, I will be deploying to windows 2008 and 2003 server. Is Redis stable for these OSs?
I've run Redis on 2008 R2 and it has been stable so I would think Redis MQ would be stable as well.
You may be interested in a little side project of mine on GitHub which is a queue and persistence implementation for NServiceBus using Redis. https://github.com/mackie1001/NServicebus.Redis
I'd not call it production-ready, and I want to port it to NSB 4 and do some thorough testing, but the meat of it is done.