Synchronize microservices with events (RabbitMQ)

We are using a microservice architecture to implement Web-APIs in nodejs. Every service exposes HTTP endpoints so the app / website can interact with it. To synchronize the different databases, we are currently using RabbitMQ. A microservice can publish a message on a fanout exchange and every subscribed microservice receives the message.
There are two problems with this architecture.
What if we want to add a second instance of a microservice (for load-balancing purposes, etc.)? If the second instance subscribed to the same fanout exchange, the messages would be consumed twice.
Either acknowledgements do not work with fanout exchanges, or I'm doing something wrong. When I publish a message on a fanout exchange that has no subscribers, the message disappears immediately without being acked.
This leads me to my question: is RabbitMQ a good choice for microservice synchronization, or should we change our architecture? Here is a short example of how I would like it to work:
The user creates a new account
The auth-mc inserts the user in its database and publishes a 'user.created' event
a1-mc, a2-mc (the same microservice, just load-balanced) and b1-mc are subscribers of the exchange. Either a1 or a2, as well as b1, receives the event and inserts the user into its respective database
The event is only removed from each microservice's queue after it is acknowledged
This way I can be sure that every microservice (load-balanced or not) receives the message exactly once. Can such a pattern even be implemented using RabbitMQ?
EDIT: Also looking for good literature about microservices if there are any suggestions.

Use a topic exchange instead of fanout for your purpose. Within a single queue, only one consumer will receive each message, rather than every consumer. You can route your messages to different consumers based on the routing_key parameter. For instance, you have an exchange and three different queues bound to it with the same routing key. The message will be duplicated into each queue! Consumers from the different microservices can then read their copy of the message separately and do what they need. A message will not be dropped until you acknowledge it, but it's also good practice to publish messages with a TTL.
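A minimal sketch of that layout in Python with pika, to make the queue/exchange split concrete (the exchange, queue, and routing-key names here are invented for the example):

```python
import pika

# One connection/channel per process in practice; shown together here for brevity.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# auth-mc side: publish the event to a topic exchange.
channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)
channel.basic_publish(
    exchange="events",
    routing_key="user.created",
    body=b'{"id": 42, "email": "user@example.com"}',
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)

# Consumer side: each microservice declares ONE queue of its own and binds it.
# a1-mc and a2-mc both consume from "a-mc.user-events", so RabbitMQ load-balances
# between them; b1-mc has its own queue and receives its own copy of every event.
channel.queue_declare(queue="a-mc.user-events", durable=True)
channel.queue_bind(exchange="events", queue="a-mc.user-events", routing_key="user.*")

def handle_user_event(ch, method, properties, body):
    # ... insert the user into this service's database ...
    ch.basic_ack(delivery_tag=method.delivery_tag)  # removed from the queue only after the ack

channel.basic_consume(queue="a-mc.user-events", on_message_callback=handle_user_event)
channel.start_consuming()
```

The key point is one queue per logical microservice: load-balanced instances (a1-mc, a2-mc) share a queue and each message goes to exactly one of them, while every other service's queue gets its own copy.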

Related

RabbitMQ with pika: different callback for different queue on same channel?

I have a hard time understanding the basic concepts of RabbitMQ. I find the online documentation not perfectly clear.
So far I understand what a channel, a queue, a binding, etc. is.
But how would the following use case be implemented:
Use Case: Sender posts to one exchange with different topics. On the receiver side, depending on the topic, different receivers should be notified.
So the following should somehow be feasible with a topic exchange:
create a channel
within this channel, create a topic exchange
for each topic to be subscribed to, create a queue and a queue binding with this topic as property
My difficulty is that the callback would be related to the channel, not to the queue or the queue binding. I am not 100 % sure if I am right here.
So that's my question: in order to have multiple callbacks (IOW, different message handlers) depending on the subscribed topic, do you have to create multiple channels, one for each "different message handling"? Would all these channels grab the same exchange and define their own queue + queue binding for that specific topic?
Please confirm whether this is correct or if I am straying from the canonical path of AMQP ... "queue" sounds so lightweight, so I intuitively thought of a queue or a queue binding as the right point to attach a consuming event handler to, but it seems that, instead, the channel is my friend in this. Right?
Another aspect of my question:
If I really have to use multiple channels for this, do I have to declare the same exchange (exchange name and exchange type of "topic") for each channel? I hoped there was something like:
define the exchange with this name and the type of "topic" once
for each channel, "grab" this predefined exchange and use it by adding queues and queue bindings to this exchange
I find it helpful to think about the roles of the broker (RabbitMQ) and the clients (your applications) separately.
The broker, RabbitMQ, will receive messages from your publishers, route them to queues, and eventually send them to consumers. The message routing can be simple or complex. In your case, the routing is topic based with a few different queues.
You haven't said much about the publishers, likely because their job is simple. They send messages with a routing key to RabbitMQ.
The consumer side is where things can get interesting. At the simplest level, a consumer subscribes to a queue, receives messages from RabbitMQ, and processes them. The consumer opens a connection to RabbitMQ and will use a channel for a particular use (e.g., subscribing to a queue). The power of message brokers is that they allow designers to break up processes into separate apps if desired.
You don't give much insight into your application, other than the presence of different message topics. An important design choice for you to make is how to define the application(s). Are the different topics suitable for separate applications, or will a single application handle all types of messages?
For the former case, you would have one application for each queue. A single channel that subscribes to the queue is probably the most sensible decision unless your application needs to be threaded. For threaded applications, each thread would have its own channel and all threads can be subscribed to the same queue. Each application would have its own callback function for processing that type of message.
For the latter case (a single application with multiple queues), the best approach would be to have at least one channel per queue. It sounds like each queue would require its own callback function, and you would assign the functions to the channels according to their subscriptions. You might have multiple channels per queue if your application can process multiple messages (of each topic) simultaneously.
Regarding your question about declaring exchanges, queues, and bindings, these items only need to be created once. But it is reasonable practice to have your clients declare them at connection time. Advantages of declaring them are that they will be created again if they were deleted and that any discrepancies between your declaration and what is on the broker will trigger errors.
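As a concrete illustration, here is a rough pika sketch of the single-application case: the exchange is declared once, each topic gets its own queue and binding, and each queue gets its own callback. Whether the basic_consume calls share one channel (as shown here, which pika allows) or each get a channel of their own, as recommended above, is the design choice being discussed; the names are made up for the example.

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Declare the topic exchange once; re-declaring it elsewhere with the same
# name and type is harmless (and mismatches surface as channel errors).
channel.exchange_declare(exchange="app.events", exchange_type="topic")

# One queue + binding per topic the application cares about.
channel.queue_declare(queue="orders")
channel.queue_bind(exchange="app.events", queue="orders", routing_key="order.*")

channel.queue_declare(queue="invoices")
channel.queue_bind(exchange="app.events", queue="invoices", routing_key="invoice.*")

def on_order(ch, method, properties, body):
    print("order message:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

def on_invoice(ch, method, properties, body):
    print("invoice message:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

# A different callback per queue, registered on the same channel.
channel.basic_consume(queue="orders", on_message_callback=on_order)
channel.basic_consume(queue="invoices", on_message_callback=on_invoice)
channel.start_consuming()
```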

Message Delivery Guarantee for Multiple Consumers in Pub/Sub and Messaging Queues

Requirement
A system undergoes some state change, and multiple other parts of the system have to know about it (let's call them observers) so that they can perform some actions based on the current state. The actions of the observers are important: if some of the observers are not online (not listening currently due to some trouble, but will be back soon), the message should not be discarded until all the observers have received it.
Trying to accomplish this with a pub/sub model, here are my findings (please correct me if this understanding is wrong):
The publisher creates an event on a specific topic, and multiple subscribers can consume the same message. This model either provides no delivery guarantee (in Redis), or delivery is guaranteed once (with messaging queues), i.e. when one of the consumers acknowledges a message, the message is discarded (RabbitMQ).
Example
A new Person Profile entity gets created in DB
Now,
A background verification service has to know this to trigger the verification process.
Subscriptions service has to know this to add default subscriptions to the user.
Now both the tasks are important, unrelated and can run in parallel.
Now, in the queue model, if the subscription service is down for some reason and the BG verification process acknowledges the message, the message will be removed from the queue; and if it is fire-and-forget like most pub/sub, delivery is not guaranteed for either service anyway.
One more point: both tasks are unrelated and need not be triggered one after the other.
In short, I need to make sure all the consumers get the same message and can acknowledge it individually, and the message should be evicted only after all the consumers have acknowledged it. Neither of the above approaches does this.
Anything I am missing here ? How should I approach this problem ?
This scenario is explicitly supported by RabbitMQ's model, which separates "exchanges" from "queues":
A publisher always sends a message to an "exchange", which is just a stateless routing address; it doesn't need to know what queue(s) the message should end up in
A consumer always reads messages from a "queue", which contains its own copy of messages, regardless of where they originated
Multiple consumers can subscribe to the same queue, and each message will be delivered to exactly one consumer
Crucially, an exchange can route the same message to multiple queues, and each will receive a copy of the message
The key thing to understand here is that while we talk about consumers "subscribing" to a queue, the "subscription" part of a "pub-sub" setup is actually the routing from the exchange to the queue.
So a RabbitMQ pub-sub system might look like this:
A new Person Profile entity gets created in DB
This event is published as a message to an "events" topic exchange with a routing key of "entity.profile.created"
The exchange routes copies of the message to multiple queues:
A "verification_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.#"
A "subscription_setup_service" queue has been bound to this exchange to receive a copy of all messages matching "entity.profile.created"
The consuming scripts don't know anything about this routing; they just know that messages will appear in the queue for events that are relevant to them:
The verification service picks up the copy of the message on the "verification_service" queue, processes, and acknowledges it
The subscription setup service picks up the copy of the message on the "subscription_setup_service" queue, processes, and acknowledges it
If there are multiple consuming scripts looking at the same queue, they'll share the messages on that queue between them, but still completely independently of any other queue.
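A rough pika sketch of that topology, using the exchange, queue, and routing-key names from the walkthrough above (everything else is an assumption):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="events", exchange_type="topic", durable=True)

# Each service owns a durable queue; both get a copy of matching messages.
channel.queue_declare(queue="verification_service", durable=True)
channel.queue_bind(exchange="events", queue="verification_service",
                   routing_key="entity.profile.#")

channel.queue_declare(queue="subscription_setup_service", durable=True)
channel.queue_bind(exchange="events", queue="subscription_setup_service",
                   routing_key="entity.profile.created")

# Publisher side: one persistent message, routed to both queues above.
channel.basic_publish(
    exchange="events",
    routing_key="entity.profile.created",
    body=b'{"profile_id": 123}',
    properties=pika.BasicProperties(delivery_mode=2),
)
```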
As you mentioned, this is not something you can control with the Redis Pub/Sub data structure.
But you can do it easily with Redis Streams.
Streams will allow you to post messages using the XADD command and then control which consumers are dealing with the message and acknowledge that message has been processed.
You can look at this sample application, which provides (in Java) examples of:
posting and consuming messages
create multiple consumer groups
manage exceptions
Links:
Getting Started with Redis Streams and Java
Redis Streams in Action (a project that shows how to use ADD/ACK/PENDING/CLAIM and build an error-proof streaming application with Redis Streams and Spring Data)
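The linked samples are in Java; for reference, here is a minimal sketch of the same idea in Python with redis-py, where the stream, group, and consumer names are made up for the example:

```python
import redis

r = redis.Redis(decode_responses=True)

# Producer: append an event to the stream (XADD).
r.xadd("profile-events", {"type": "profile.created", "profile_id": "123"})

# Each service gets its own consumer group, so every group sees every message.
for group in ("verification", "subscriptions"):
    try:
        r.xgroup_create("profile-events", group, id="0", mkstream=True)
    except redis.ResponseError:
        pass  # group already exists

# Consumer in the "verification" group: read, process, then acknowledge (XACK).
entries = r.xreadgroup("verification", "worker-1", {"profile-events": ">"}, count=10)
for _stream, messages in entries:
    for msg_id, fields in messages:
        # ... trigger the verification process ...
        r.xack("profile-events", "verification", msg_id)
```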

RabbitMQ same message to each consumer

I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it to have an application with a button to send a message.
Now I started two consumers on two different computers.
When I send messages, the first message is sent to computer1, then the second message is sent to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
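A minimal pika sketch of that setup, along the lines of the fanout tutorial: each consumer declares its own exclusive, server-named queue and binds it to the exchange (the exchange name here is just an example):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="broadcast", exchange_type="fanout")

# Each consumer (e.g. each computer) runs this: it gets its own exclusive,
# server-named queue, so every published message is copied to every consumer.
result = channel.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
channel.queue_bind(exchange="broadcast", queue=queue_name)

def callback(ch, method, properties, body):
    print("received:", body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue=queue_name, on_message_callback=callback)
channel.start_consuming()
```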
You can't; it's controlled by the server. Check the Round-robin dispatching section.
It decides which consumer's turn it is. I'm not sure if there is a set of algorithms you can pick from, but in the end the server will control this (I think the round-robin algorithm is the default),
unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers should create the exchanges and consumers should create the queues; each consumer can create its own queue and hook it up to the exchange. This makes sure every consumer gets its own copy of the message via its private queue.
What you're doing is essentially the 'worker queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, the message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all the subscribers. The following link shows a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html

Fanout exchanges are basically load balancers right?

I have been learning AMQP using RabbitMQ and I came across this concept called fanout exchanges. From the illustration diagram, all I could see is that it's some kind of load balancer. Could anyone please explain what its actual purpose is?
I assume you mean that only one queue will get a message once it arrives at a fanout exchange. So from that point of view:
No, I don't think it's a load balancer (I admit the terminology can be confusing).
In RabbitMQ there are different types of exchanges, it's true, and the fanout exchange is only one of them. The basic model of RabbitMQ assumes that you can connect as many queues as you want to the same exchange. Now, all the queues that are connected to the exchange will get the message (RabbitMQ just replicates the message), so the exchange can't act as a load balancer.
The only difference between the exchange types is the algorithm for matching the routing key. It's like the "to" field on a regular envelope. When a message arrives at an exchange, it checks the routing key against the bindings and, depending on the type of exchange, "finds" which queue(s) the message should be routed to.
When a queue gets bound to an exchange, it always uses this binding. It's as if the queue says to the exchange: "hey, all messages which are addressed to John Smith (that's the routing key), please pass them to me". Then, when a message arrives, it always has a "to" field in the envelope, so the exchange checks whether the message is intended for John Smith and, if so, routes it to the queue.
It's possible that many queues are interested in messages for John Smith; in this case the message will be replicated. As for the fanout exchange, it just doesn't pay any attention to the routing key and instead sends the message to all the connected queues.
Now, there is another abstraction called consumer. Consumers can be connected to the single queue (again, many consumers can be connected to the queue).
The trick is that only one consumer can get the message for processing at a time.
So if you want a load balancer, you can use a single queue connected to your exchange (it can be fanout, of course), but then connect many consumers to that queue. RabbitMQ will send each message to one consumer (it uses round robin internally to pick which one); if that consumer can't handle it, the message will be re-queued and RabbitMQ will attempt to send it to another consumer.
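A small pika sketch of that load-balancing setup: one shared queue, many consumers running this same script, with a prefetch limit so a busy consumer isn't handed more work until it acks (the queue name is invented for the example):

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.queue_declare(queue="work", durable=True)

# Fair dispatch: don't give this consumer a new message until it has acked
# the previous one, instead of blind round-robin.
channel.basic_qos(prefetch_count=1)

def worker(ch, method, properties, body):
    # ... handle the task; unacked messages are re-delivered if this consumer dies ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="work", on_message_callback=worker)
channel.start_consuming()  # run this same script on several machines/processes
```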

PubSub + Reliable message delivery to unreliably present subscribers

I need to build a system that uses a Publish/Subscribe bus (e.g. Mule, ZeroMQ, RabbitMQ), but the literature all implies that subscriber applications are reliably available to receive messages from topics to which they subscribe as soon as the Pub/Sub bus is able to deliver the message.
I have a system where some of the applications will be reliably connected to the Publish/Subscribe bus, but other applications will not be active or connected to the bus all the time.
The obvious solution is to have some sort of "presence" protocol between the unreliable application and the Publish/Subscribe bus so that "present" applications get their messages delivered immediately, and "not present" applications have their messages queued up in a persistent buffer of some kind, and as soon as they complete the "presence handshake", the queued messages are delivered to the newly present application.
Are there any Publish/Subscribe buses which have this kind of feature built in, or are there any open-source add-ons which do this? Can you point me to any URLs which describe this?
You can achieve this behaviour quite easily with any AMQP-compliant broker (such as RabbitMQ).
Choose the correct exchange type for your usage model. You'll want to use a direct exchange if you're always sending to absolutely named destinations, something like chat.messages.
If you want to do pattern-based routing, you'll want to use a topic exchange. Then you can route based on patterns such as chat.messages.*.
Routing is described in more detail in the RabbitMQ Tutorials.
To create the kind of persistent subscription that you mention, have each subscriber create a queue that is private to that subscriber. The queue is then bound to the relevant routing keys on your chosen exchange.
Since each subscriber has its own queue, messages will be consumed by the subscriber when it is active and stored while the subscriber is inactive or disconnected.
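A sketch of such a private, persistent subscription with pika; the exchange, queue, and routing-key names are placeholders:

```python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="chat", exchange_type="topic", durable=True)

# A durable, non-exclusive queue named after this subscriber: it keeps
# accumulating matching messages even while the subscriber is offline.
channel.queue_declare(queue="subscriber-alice", durable=True)
channel.queue_bind(exchange="chat", queue="subscriber-alice",
                   routing_key="chat.messages.*")

def on_message(ch, method, properties, body):
    # ... process the message ...
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="subscriber-alice", on_message_callback=on_message)
channel.start_consuming()
```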
You haven't mentioned your language of choice, but in Java you can accomplish this with JMS using durable subscribers. Any implementation of JMS (there are many, including the aforementioned RabbitMQ) will support this feature.