CQRS using Redis MQ

I have been working on a CQRS project (my first) for the last 9 months, and it has been a steep learning curve. I am currently using JOliver's excellent EventStore in my write model and PostgreSQL for my read model.
Both my read and write databases are on the same machine, which means that when a change is made to the write database, the read model is updated in the same synchronous call.
As I was learning CQRS, I felt this was the best way to go, since I had no experience with message queue/service bus frameworks such as MassTransit, NServiceBus, etc.
I am now at a point with most of my architecture in place to introduce a message queue framework.
Today I came across Redis MQ, which is part of ServiceStack, and as we are already using ServiceStack for our REST-based HTTP clients, this seems like the right way to go.
My question is more about understanding what I need to know (or if I have any misunderstandings) to implement Redis MQ and whether Redis MQ is the right choice?
Now, from what I understand, I would use Redis MQ as a durable queue between the write and read databases. Once my event store has recorded that something has happened in my domain, it publishes the event to Redis MQ. The services listening for events/messages receive the event/message from Redis MQ, and once a listener has processed it (i.e. updated or written to the read model), a notification/response goes back to the event store to tell it that the message has been received and processed by that listener/subscriber.
Does this sound correct?
Also would the Redis MQ architecture give me everything that NSB, RavenDB, MassTransit etc offer?
Also, I will be deploying to Windows Server 2008 and 2003. Is Redis stable on these OSs?

I think the ServiceStack implementation of message queueing in Redis is more appropriate for job-queue scenarios - it pushes a message onto the end of a Redis list and then uses Redis pub-sub to notify listening subscribers that there is a message to pull from the queue. Any consumers would be competing for messages.
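For illustration, here is a minimal sketch of that job-queue style using ServiceStack's RedisMqServer. The OrderPlaced DTO, the UpdateReadModel method and the connection string are placeholders, and the exact namespaces differ between ServiceStack versions:

// Assumes the ServiceStack.Redis and ServiceStack messaging packages are referenced;
// namespaces vary by version (e.g. ServiceStack.Redis.Messaging in v3).
public class OrderPlaced            // hypothetical event DTO
{
    public Guid OrderId { get; set; }
}

static void UpdateReadModel(OrderPlaced evt)
{
    // placeholder: project the event into the PostgreSQL read model
}

var redisFactory = new PooledRedisClientManager("localhost:6379");
var mqServer = new RedisMqServer(redisFactory, retryCount: 2);

// Every host that registers a handler competes for messages popped off the same Redis list.
mqServer.RegisterHandler<OrderPlaced>(m =>
{
    UpdateReadModel(m.GetBody());
    return null;                    // no reply message
});
mqServer.Start();

// Publishing side, e.g. after the event store has persisted the event:
using (var mqClient = mqServer.CreateMessageQueueClient())
{
    mqClient.Publish(new OrderPlaced { OrderId = Guid.NewGuid() });
}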
For event sourcing, you may be more interested in a fanout or topic-based messaging topology, as offered by RabbitMQ; not that that precludes you from building that sort of thing using Redis data structures yourself.
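By contrast, a rough sketch of a fanout topology with the RabbitMQ .NET client, where every bound queue gets its own copy of each event (the exchange and queue names are made up):

using RabbitMQ.Client;
using System.Text;

var factory = new ConnectionFactory { HostName = "localhost" };   // assumed broker address
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // Every queue bound to a fanout exchange receives a copy of each message.
    channel.ExchangeDeclare("domain-events", ExchangeType.Fanout, durable: true);

    // Each subscriber declares its own queue and binds it to the exchange.
    channel.QueueDeclare("read-model-projector", durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind("read-model-projector", "domain-events", routingKey: "");

    // Publishing side; the routing key is ignored by fanout exchanges.
    var body = Encoding.UTF8.GetBytes("{\"orderId\":42}");
    channel.BasicPublish("domain-events", "", null, body);
}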

Now from what I understand, I would use Redis MQ as a durable queue
between the write and read database.
Yes this is correct.
Once my event store has recorded that something has happened in my
domain then it will publish to Redis MQ.
Yes, and this can be done in several ways. It can either happen as part of the transaction that persists to the event store, or you can have an out-of-band process that continuously publishes events from the event store.
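As a rough illustration of the second option, an out-of-band dispatcher could look like the sketch below. GetUndispatchedEvents and MarkAsDispatched are hypothetical names for illustration, not the actual JOliver EventStore API:

// Hypothetical polling dispatcher: publish anything the event store has persisted
// but not yet dispatched, then mark it so it is not published twice.
static void DispatchLoop(IEventStore eventStore, IMessageQueueClient mqClient, CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        foreach (var evt in eventStore.GetUndispatchedEvents())   // hypothetical API
        {
            mqClient.Publish(evt);                                // push onto the Redis MQ queue
            eventStore.MarkAsDispatched(evt);                     // hypothetical API
        }
        Thread.Sleep(TimeSpan.FromSeconds(1));                    // simple polling interval
    }
}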
a notification/response goes back to the event store to tell the event
store that the message has been received and processed by the
listener/subscriber.
The response back to the event publisher is usually omitted. This truly decouples the publishers from the subscribers. You make the assumption that once the message is published, all interested subscribers will handle it. If something goes wrong, an error should be logged.
Also would the Redis MQ architecture give me everything that NSB,
RavenDB, MassTransit etc offer?
I don't have experience running Redis MQ, but I do know that Redis supports pub/sub, which is one of the value propositions of NSB and MassTransit (as opposed to, say, bare-bones MSMQ). What MT and NSB offer beyond pub/sub are sagas, and it doesn't seem like Redis MQ supports those out of the box, at least. You may never have a need for sagas, so this should not automatically be a deterrent. RavenDB is not a message queue, so it doesn't apply here.
Also, I will be deploying to windows 2008 and 2003 server. Is Redis
stable for these OSs?
I've run Redis on 2008 R2 and it has been stable so I would think Redis MQ would be stable as well.

You may be interested in a little side project of mine on GitHub which is a queue and persistence implementation for NServiceBus using Redis. https://github.com/mackie1001/NServicebus.Redis
I wouldn't call it production-ready, and I want to port it to NSB 4 and do some thorough testing, but the meat of it is done.

Related

Redis Pub/Sub vs RabbitMQ

My team wants to move to a microservices architecture. Currently we are using Redis Pub/Sub as the message broker for some legacy parts of our system. My colleagues think it is natural to continue using Redis as the service bus, as they don't want to spend their time studying a new product. But in my opinion, RabbitMQ (especially with MassTransit) is a better approach for microservices. Could you please compare Redis Pub/Sub with RabbitMQ and give me some arguments for Rabbit?
Redis is a fast in-memory key-value store with optional persistence. The pub/sub feature of Redis is a marginal case for Redis as a product.
RabbitMQ is a dedicated message broker and does nothing else. It is optimized for reliable delivery of messages, both in command style (sending to an endpoint exchange/queue) and publish-subscribe. RabbitMQ also includes a management plugin that provides a helpful API to monitor the broker status, check the queues, and so on.
Dealing with Redis pub/sub at the low level of a Redis client can be a painful experience. You could use a library like ServiceStack, which has a higher-level abstraction, to make it more manageable.
However, MassTransit adds a lot of value compared to raw messaging over RMQ. As soon as you start doing stuff for real, no matter what transport you decide to use, you will hit the typical issues associated with messaging, such as handling replies, scheduling, long-running processes, re-delivery, dead-letter queues, and poison queues. MassTransit does all of that for you. Neither the Redis nor the RMQ client delivers any of those. If your team wants to spend time dealing with those concerns in their own code, that's more like reinventing the wheel. Using the argument of "not willing to learn a new product" in this context sounds a bit weird, since, instead of delivering value for the product, the developers want to spend their time dealing with infrastructure concerns.
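To make that concrete, here is a rough MassTransit sketch (the configuration API varies between MassTransit versions; OrderPlaced, the consumer and the queue name are made up) where retry is a one-line policy instead of hand-rolled code:

using System;
using System.Threading.Tasks;
using MassTransit;

public class OrderPlaced { public Guid OrderId { get; set; } }   // hypothetical message type

public class OrderPlacedConsumer : IConsumer<OrderPlaced>
{
    public Task Consume(ConsumeContext<OrderPlaced> context)
    {
        // handle the message: update a read model, call another service, etc.
        return Task.CompletedTask;
    }
}

// inside an async method:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    cfg.ReceiveEndpoint("order-events", e =>
    {
        e.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(10)));   // retry handled by the framework
        e.Consumer<OrderPlacedConsumer>();
    });
});
await bus.StartAsync();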
RabbitMQ is far more stable and robust than Redis for passing messages.
RabbitMQ is able to hold and store a message if there is no consumer for it (e.g. your listener crashed).
RabbitMQ has different methods of communication, pub/sub and queues, which you can use for load balancing, etc.
Redis is convenient for simple cases. If you can afford to lose a message and you don't need queues, then Redis is also a good option.
If, however, you cannot afford to lose a message, then Redis is not a good option.
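To make the "losing a message" point concrete, this is what raw Redis pub/sub looks like with StackExchange.Redis; anything published while a subscriber is down or not yet subscribed is simply never seen by it (the channel name is arbitrary):

using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var sub = redis.GetSubscriber();

// Fire-and-forget: only subscribers connected at publish time receive the message;
// there is no queue to catch up from afterwards.
sub.Subscribe("orders", (channel, message) =>
    Console.WriteLine("received: " + message));

sub.Publish("orders", "order-42 placed");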

Smart Broker vs. Dumb Broker (Kafka and RabbitMQ)

In discussing the differences between Kafka and RabbitMQ, "dumb broker" and "smart broker" keep popping up in descriptions of how they interact with consumers. Kafka is described as having a dumb broker, while RabbitMQ is said to have a smart broker/dumb consumer model.
What exactly does this mean? I'm familiar with the basics of Kafka and a little bit more about RabbitMQ. However, what features of RabbitMQ makes the broker smarter than Kafka's?
This is a question that bothered me for some time too :) Here's what I have understood so far...
In the case of RabbitMQ, the broker makes sure messages are delivered to consumers and dequeues a message only when it gets an acknowledgement from every consumer that needs it. It also keeps track of consumer state.
Kafka does not keep track of which messages were read by which consumers. The Kafka broker retains all messages for a configured retention period, and it is the consumer's responsibility to read them from the log. The broker also avoids the overhead of tracking consumer state.
You can read more about it in this awesome Pivotal blog post comparing RabbitMQ and Kafka.
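To make the offset point concrete, here is a hedged Confluent.Kafka sketch of the consumer side; the topic and group names are arbitrary:

using System;
using Confluent.Kafka;

var config = new ConsumerConfig
{
    BootstrapServers = "localhost:9092",
    GroupId = "read-model-projector",
    AutoOffsetReset = AutoOffsetReset.Earliest,
    EnableAutoCommit = false
};

// The broker does not track per-message "read" state; the consumer simply
// advances an offset and commits it back to Kafka.
using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
{
    consumer.Subscribe("domain-events");
    var result = consumer.Consume(TimeSpan.FromSeconds(5));
    if (result != null)
    {
        Console.WriteLine(result.Message.Value);
        consumer.Commit(result);   // stores the committed offset; nothing is dequeued
    }
}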
The point about Kafka using a dumb broker while RabbitMQ uses a smart broker is one of the criteria for deciding which messaging system to use. Since RabbitMQ is a smart broker, implementing global strategies for retry is far easier and more listener-agnostic than in Kafka.
Given a set of microservices accessed through an API gateway, I believe that the above point, combined with RabbitMQ being much more maintainable and the knowledge that the data passed between microservices will never amount to the same load as streaming data, makes RabbitMQ a far better choice than Kafka for inter-service communication.
Dumb vs. smart broker refers to whether the broker itself can route messages based on certain conditions.
In the case of RabbitMQ, the producer sends a message to an exchange and the exchange routes the message to a queue. Here the exchange does the routing, and that is what is meant by a smart broker. Taken further, people have made brokers really smart and ended up with ESBs; we all know how that went, and the industry is moving away from bloated ESBs.
In the case of Kafka, the broker doesn't route messages. It is up to the user to create topics, producers decide how events are partitioned across a topic's partitions, and consumers decide which consumer group listens to which topic.
Smart vs. dumb broker has nothing to do with message acknowledgment. In the case of RabbitMQ, the broker tracks the status of each message to see whether it has been consumed or not. In the case of Kafka, this happens differently: each consumer tracks an offset per partition, and the offset is stored in Kafka itself (the consumer can also store it elsewhere). But both provide the functionality.
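For comparison, the RabbitMQ side of that tracking with the .NET client: each delivery stays unacked on the broker until the consumer explicitly acknowledges it (the queue name is made up):

using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        // handle ea.Body here (placeholder), then tell the broker to forget the message
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };
    // autoAck is false, so the broker keeps the message until BasicAck arrives
    channel.BasicConsume("read-model-projector", false, consumer);
    Console.ReadLine();   // keep the process alive while consuming
}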

Read all messages from the very begining

Consider a group chat scenario where 4 clients connect to a topic on an exchange. These clients each send and receive messages on this topic.
Now imagine that a 5th client comes in and wants to read everything that was send from the beginning of time (as in, since the topic was first created and connected to).
Is there a built-in functionality in RabbitMQ to support this?
Edit:
For clarification, what I'm really asking is whether or not RabbitMQ supports a State of the World (SOW) feature, since I was unable to find it anywhere in the documentation (http://devnull.crankuptheamps.com/documentation/html/develop/configuration/html/chapters/sow.html).
Specifically, the question is: is there a way for RabbitMQ to output all messages having been sent to a topic upon a new subscriber joining?
The short answer is no.
The long answer is maybe. If all potential "participants" are known up-front, the participant queues can be set up and configured in advance, bound to the topic exchange, and will collect all messages published to the topic (matching the routing key) while the server is running. Additional server configuration can yield queues that persist across server reboots.
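A hedged sketch of that "maybe" with the RabbitMQ .NET client: declare the participant queue as durable up front and bind it to the topic exchange, so it keeps collecting matching messages even while the participant is offline (the names and routing key are made up):

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.ExchangeDeclare("chat", ExchangeType.Topic, durable: true);

    // Durable, non-exclusive, non-auto-delete: the queue outlives any one client
    // and accumulates matching messages published after this point.
    channel.QueueDeclare("participant-5", durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind("participant-5", "chat", routingKey: "room.general");
}

Note that this only captures messages published after the queue exists; anything published before that is gone, which is the "short answer is no" part.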
Note that the original question/feature request as-described is inconsistent with RabbitMQ's architecture. RabbitMQ is supposed to be a transient storage node, where clients connect and disconnect at random. Messages dumped into queues are intended to be processed by only one message consumer, and once processed, the message broker's job is to forget about the message.
One other way of implementing such a functionality is to have an audit queue, where all published messages are distributed to the queue, and a writer service writes them all to an audit log somewhere (usually in a persistent data store or text file). This would be something you would have to build, as there is currently no plug-in to automatically send messages out to a persistent storage (e.g. Couchbase, Elasticsearch).
Alternatively, if used as a debug tool, there is the Firehose plug-in. This is satisfactory when you are able to manually enable/disable it, but is not a good long-term solution as it will turn itself off upon any interruption of the broker.
What you would like to do is not a correct usage of RabbitMQ. Message queues are not databases, and they are not long-term persistence solutions like an RDBMS. You can mainly use RabbitMQ as a buffer for incoming messages, which, after the consumer handles them, get inserted into the database. When a new client connects to your service, the database is read, not the message queue.
Also, unless you are building a really big, highly scalable system, I doubt you actually need RabbitMQ.
Apache Kafka is the right solution for this use case. Log-compaction-enabled topics, a.k.a. compacted topics, are specifically designed for it. The catch is that your messages have to be idempotent full-state updates, strictly no deltas, because Kafka compacts from time to time and may retain only the last message for a given key.
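A hedged sketch of creating such a compacted topic with the Confluent.Kafka admin client (the topic name and partition count are arbitrary):

using System.Collections.Generic;
using Confluent.Kafka;
using Confluent.Kafka.Admin;

// inside an async method:
using (var admin = new AdminClientBuilder(
    new AdminClientConfig { BootstrapServers = "localhost:9092" }).Build())
{
    await admin.CreateTopicsAsync(new[]
    {
        new TopicSpecification
        {
            Name = "chat-room-state",
            NumPartitions = 3,
            ReplicationFactor = 1,
            // compaction keeps (at least) the latest message per key
            Configs = new Dictionary<string, string> { ["cleanup.policy"] = "compact" }
        }
    });
}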

NServiceBus routing

We have multiple web and Windows applications deployed to different servers that we are planning to integrate using NServiceBus, so that all apps can pub/sub messages between them. I think the pub/sub pattern with the MSMQ transport will be a good fit. One thing I am not clear on is whether there is a way to avoid hard-coding the subscription endpoint as MSMQ QueueName#ServerName, which contains the server name directly, when the publisher is on another server. In 6-pre I saw the idea of setting an endpoint name and then using routing to map it to a transport-level address; is that a solution? Or is a gateway the only solution? Is a broker a good idea? What is the best practice for this scenario?
When using pub/sub, the subscriber currently needs to know the location of the publisher's queue. The subscriber then sends a subscription message to that queue every time it starts up. It cannot know whether it has already subscribed, or whether it has subscribed to all the messages, since you might have added/configured new ones.
The publisher reads these subscriptions messages and stores the subscription in storage. NServiceBus does this for you, so there's no need to write code for this. The only thing you need is configuration in the subscriber as to where the (queue of the) publisher is.
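With NServiceBus 6 and the MSMQ transport, that subscriber-side configuration can be expressed in code roughly like this; OrderPlaced and the "Sales" endpoint name are placeholders, the physical machine for "Sales" is mapped separately (e.g. via an instance mapping file), and persistence and other required settings are omitted:

// inside an async method:
var endpointConfiguration = new EndpointConfiguration("ReadModelSubscriber");
var transport = endpointConfiguration.UseTransport<MsmqTransport>();

// Tell this endpoint which logical endpoint publishes the event;
// the subscription message is sent there on start-up.
var routing = transport.Routing();
routing.RegisterPublisher(typeof(OrderPlaced), "Sales");

var endpointInstance = await Endpoint.Start(endpointConfiguration);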
I wrote a tutorial myself which you can find here : http://dennis.bloggingabout.net/2015/10/28/nservicebus-publish-subscribe-tutorial/
That being said, you should take special care related to issues regarding websites that publish messages. More information on that can be found here : http://docs.particular.net/nservicebus/hosting/publishing-from-web-applications
In a scale out situation with MSMQ, you can also use the distributor : http://docs.particular.net/nservicebus/scalability-and-ha/distributor/
As a final note: It depends on the situation, but I would not worry too much about knowing locations of endpoints (or their queues). I would most likely not use pub/sub just for this 'technical issue'. But again, it completely depends on the situation. I can understand that rich-clients which spawn randomly might want this. But there are other solutions as well, with a more centralized storage and an API that is accessed by all the rich clients.

Does NServiceBus 4.x with RabbitMQ support round robin consumers or the competing consumer model?

I'm using NServiceBus 4.x with RabbitMQ 3.2.x as my transport.
I made the assumption that by using RabbitMQ as my transport I would be given the competing consumer model as an option. I understand that NServiceBus employs the "fanout" exchange type for all exchanges and does not support round robin at this time. However, is there a way to configure NServiceBus to take advantage of the levels of indirection via exchanges and channels that RabbitMQ offers?
I have several consumers that I would like to compete for messages from a given queue. What I am observing is a subscriber blocking further message retrieval from the queue until its current message is consumed. So having more than one consumer at this point does me no good other than redundancy.
After reading some documentation on RabbitMQ I'm assuming that it's normal to block until the Ack receipt is sent from the subscriber. But I had assumed that subscriber #2 would have free access to the queue to fetch another message.
There is mention of increasing the prefetch count on the RabbitMQ channel.
Example:
channel.BasicQos(0, prefetchcount, false);
I don't see anywhere that I can change this setting via configuration in NServiceBus. Furthermore, as I read what prefetch does, I'm really not sure this is what I'm looking for.
Is it possible to use RabbitMQ without the distributor-type pattern used with MSMQ? Or should I move to MassTransit or Rebus?
Put prefetchcount=2 in your connection string. A value of X above 1 tells the broker to allow up to X unacked messages to go out at a time. You need to fiddle with this setting to find the optimum for your scenario.
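For the NServiceBus RabbitMQ transport, that typically means adding the option to the transport connection string in app.config, along these lines (the host name and value are illustrative):

<connectionStrings>
  <add name="NServiceBus/Transport"
       connectionString="host=localhost;prefetchcount=20" />
</connectionStrings>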