RabbitMQ, dealing with multiple applications sending and receiving the same data

So I have the following problem:
Three applications are capable of adding, altering, and deleting users. All three of them share their user mutations through RabbitMQ. When I share a user mutation from App1 to App2 and App3 using wildcards, App1 also receives its own message, because it is listening on the same topic exchange. How should I design my queues and exchanges to solve this problem?
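One common way to keep an application from consuming its own messages on a topic exchange is to encode the originating application in the routing key and bind each application's queue only to the other applications' keys. Below is a minimal sketch with amqplib, assuming routing keys of the form user.<app>.<action>; the exchange, queue, and key names are illustrative.

    // App1: publishes with its own name in the routing key and binds its queue
    // only to keys published by App2 and App3, so it never sees its own mutations.
    import amqp from "amqplib";

    async function setupApp1() {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();

      await ch.assertExchange("user.mutations", "topic", { durable: true });
      const { queue } = await ch.assertQueue("app1.user-mutations", { durable: true });

      // Listen to mutations coming from the other two apps only.
      await ch.bindQueue(queue, "user.mutations", "user.app2.*");
      await ch.bindQueue(queue, "user.mutations", "user.app3.*");

      // When App1 publishes, it stamps its own name into the routing key.
      ch.publish(
        "user.mutations",
        "user.app1.updated",
        Buffer.from(JSON.stringify({ id: 42, name: "Alice" }))
      );
    }

    setupApp1().catch(console.error);

App2 and App3 would do the mirror image. An alternative that keeps the wildcard binding is to stamp an app-id header on every published message and have each consumer skip messages whose header matches its own id.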

Related

Load balancing WebSocket with Redis and RabbitMQ

Consider a small chat server. The actual processing of messages is done by nodes of a service called "chat". This service and a "user" service sit behind a "gateway" service in front, which is the only service that actually communicates with users and is in charge of passing the requests it receives on to the other services via the RabbitMQ channel they share.
In a system designed like this, each user is connected to one of the instances of the "gateway" service and, when sending and receiving messages, indirectly communicates with the private "chat" or "user" services behind it. To load balance this, we have an Nginx reverse proxy at the edge that distributes requests across the different "gateway" instances. But since the WebSocket connection is real-time, "chat" instances must also be able to send messages to the specific "gateway" instance in charge of a given user for user-specific messages, and to all "gateway" instances for site-wide messages. This is a problem, because with RabbitMQ I don't believe we can target a specific subscriber, and even if we could, we don't know which instance that specific user is connected to right now.
Therefore, since we are using Socket.io for the WebSocket connections, I am thinking of adding a new Redis node to the stack to allow this communication between the different instances of the "gateway" service. This is directly supported by Socket.io, works well, and removes the limitations imposed by RabbitMQ. However, we are still using RabbitMQ to route a message from a "chat" instance to a "gateway" instance; the message then propagates through Redis and, once the right "gateway" instance with access to the user is found, is delivered to them.
This adds unnecessary lag to user-specific outbound messages. So here I am asking if anyone has a better idea of how this problem should be approached and how to decrease this lag.
Personally, my idea is to add Socket.io to the "chat" services (with no client access) and use its backend to send the message directly to the Redis store, so that the "gateway" instance connected to that user can route it directly to them, bypassing RabbitMQ entirely for this type of message.
It might be important to mention that none of these services exist just to do this specific thing: RabbitMQ is heavily used as the message broker for communication between the different services, and the "gateway" service works with multiple other services for data aggregation, authentication, and data validation and transformation. The above example is a simplified version of the problem at hand, with the minimum number of moving parts that I could easily describe here.
Edit: Apparently the following library can be used to send messages directly to the Socket.io Redis store without loading the whole Socket.io library:
https://github.com/socketio/socket.io-redis-emitter
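For reference, a minimal sketch of what that could look like from a "chat" instance with @socket.io/redis-emitter; the room-per-user convention (user:<id>) and event names are assumptions for illustration.

    // A "chat" instance publishing straight to the Socket.IO Redis adapter,
    // so the "gateway" instance holding the user's socket delivers the message
    // without a RabbitMQ hop.
    import { createClient } from "redis";
    import { Emitter } from "@socket.io/redis-emitter";

    async function main() {
      const redisClient = createClient({ url: "redis://localhost:6379" });
      await redisClient.connect();

      const io = new Emitter(redisClient);

      // Assumes each user's sockets join a room named after their user id.
      io.to("user:1234").emit("chat:message", { from: 42, text: "hello" });

      // Site-wide messages can simply be broadcast to everyone.
      io.emit("announcement", { text: "maintenance at midnight" });
    }

    main().catch(console.error);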

RabbitMQ Connect/Disconnect Notifications

I am new to RabbitMQ and I am working on an application that will receive information from many devices and route all messages into a couple of queues depending on the MQTT topic. I was able to get all of this working easily, but now I am looking into how to push a message to a queue when a client connects or disconnects from RabbitMQ in order to update the current status of the client in my database. Is there a way to do this?
Event Exchange Plugin
Client connections, channels, queues, consumers, and other parts of the system naturally generate events. For example, when a connection is accepted, authenticated, and authorised to access the target virtual host, it will emit an event of type connection_created. When a connection is closed or fails for any reason, a connection_closed event is emitted.
Unfortunately, the exchange provided by rabbitmq_event_exchange is created after the bindings from definition.json are imported, which means that amq.rabbitmq.event cannot be bound to a queue via the configuration and must be bound after startup.
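In practice that means creating the binding from application code at startup, for example with amqplib; the queue name below is just an example, and the plugin must be enabled on the broker.

    // Bind a queue to the amq.rabbitmq.event topic exchange and watch
    // connection lifecycle events (requires the rabbitmq_event_exchange plugin).
    import amqp from "amqplib";

    async function watchConnections() {
      const conn = await amqp.connect("amqp://localhost");
      const ch = await conn.createChannel();

      const { queue } = await ch.assertQueue("connection-events", { durable: true });
      await ch.bindQueue(queue, "amq.rabbitmq.event", "connection.created");
      await ch.bindQueue(queue, "amq.rabbitmq.event", "connection.closed");

      await ch.consume(queue, (msg) => {
        if (!msg) return;
        // Event details (user, client address, etc.) arrive as message headers;
        // the body itself is empty.
        console.log(msg.fields.routingKey, msg.properties.headers);
        ch.ack(msg);
      });
    }

    watchConnections().catch(console.error);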

MQTT backend scaling

I am currently developing a typical IoT service. At the moment multiple devices connect to one MQTT broker (mosquitto) and my java backend also connects to the broker (Paho).
The problem I see is the following:
When I have multiple instances of my Java backend, every backend will receive and process every message. That's a big issue; I want each message to be delivered to only one Java backend. Does anybody have an idea how to deal with this problem?
Btw: Java backends will be added or removed depending on the load.
There are a couple of options:
Place a queuing system between your application and the MQTT broker, possibly something like Apache Kafka.
The HiveMQ and IBM MessageSight brokers support (with different implementations) something called shared subscriptions. This allows messages to be shared out between more than one client. Shared subscriptions are likely to be formally added to the MQTTv5 spec, which should mean that they will be added to more brokers and have a standard implementation.
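As an illustration, this is roughly what a shared subscription looks like from the client side with MQTT.js, assuming a broker that supports them; the broker URL, group name, and topic are placeholders.

    // Every backend instance subscribes with the same $share group, so the
    // broker delivers each message to only one member of the group.
    import mqtt from "mqtt";

    const client = mqtt.connect("mqtt://broker.example.com:1883", {
      protocolVersion: 5, // assumption: the broker speaks MQTT 5
    });

    client.on("connect", () => {
      // "$share/<group>/<topic filter>": all backends in group "backends"
      // split the messages published to devices/+/telemetry between them.
      client.subscribe("$share/backends/devices/+/telemetry", { qos: 1 });
    });

    client.on("message", (topic, payload) => {
      console.log(`processing ${topic}: ${payload.toString()}`);
    });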

MassTransit - Distributed Messaging Model - Reliable/Durable - NServiceBus too expensive

I would like to use MassTransit similarly to NServiceBus, where every publisher and subscriber has a local queue. However, I want to use RabbitMQ.
So do all my desktop clients have to have RabbitMQ installed? I think so. Then should I just connect the 50 desktop clients and 2 servers into a cluster?
I know the two servers must be in the same cluster. However, 50 client nodes seems a bit much to put in one cluster... Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
The servers are dealing with backend HL7 messages.
Any help and advice here is much appreciated; this is all on Windows machines.
Basically I am leaving NServiceBus behind, as it is now too expensive; they are aiming it at large corporations with big budgets, hence MassTransit.
However, I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers.
The desktops also use CQS to update their views.
should I just connect the 50 desktop clients and 2 servers into a cluster?
Yes, you have to connect your clients to the cluster.
However, 50 client nodes seems a bit much to put in one cluster.
No (though it depends on how big your servers are); 50 clients is a small number.
Or should I shovel them or federate them to the server cluster exchange?
The desktop machines send messages like LockOrder and UnlockOrder.
I think the cluster is better, because federation and shovel are asynchronous, which means that your LockOrder might not be replicated in time.
However I want reliable/durable messaging, hence local queues on ALL publishers and ALL subscribers
With RabbitMQ you can create a durable queue and persistent messages, and the client does not need to be connected: it will get the messages when it connects to the broker.
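For what it's worth, a minimal sketch of that with amqplib; the queue name and message shape are made up for illustration.

    // Durable queue + persistent messages: the queue definition survives a
    // broker restart, and the message itself is written to disk, so an
    // offline desktop client receives it when it reconnects.
    import amqp from "amqplib";

    async function publishLockOrder(orderId: number) {
      const conn = await amqp.connect("amqp://rabbit-cluster");
      const ch = await conn.createChannel();

      await ch.assertQueue("orders.lock", { durable: true });

      ch.sendToQueue(
        "orders.lock",
        Buffer.from(JSON.stringify({ type: "LockOrder", orderId })),
        { persistent: true }
      );

      await ch.close();
      await conn.close();
    }

    publishLockOrder(1001).catch(console.error);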
I hope it helps.
I have a FOSS ESB project called Shuttle, if you would like to give it a spin: https://github.com/Shuttle/shuttle-esb
I haven't used NServiceBus for a while and actually started Shuttle when it went commercial. The implementation is somewhat different from NServiceBus. I don't know MassTransit at all, though. Currently process managers (sagas) have to be hand-rolled in Shuttle whereas MassTransit and NServiceBus have this incorporated. If I do get around to adding sagas I'll be adding them as a Module that can be plugged into the receiving pipeline. This way one could have various implementations and choose the flavour you like :)
Back to your issue. Shuttle has the concept of an optional outbox for queuing technologies like RabbitMQ. Shuttle does have a RabbitMQ implementation. I believe the outbox works somewhat like 'shovel' does. So the outbox would be local, and outgoing messages would first go to the outbox. It would periodically try to send messages on to the recipients and, after a configurable number of attempts, send the message to an error queue. It can then be returned to the outbox for further attempts, or even moved directly to the recipient queue once that queue is up.
Documentation here: http://shuttle.github.io/shuttle-esb/
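For readers unfamiliar with the pattern, here is a rough, hypothetical sketch of the outbox flow described above. This is not Shuttle's actual API; all names, the retry count, and the in-memory stores are illustrative stand-ins.

    // Outgoing messages are stored locally first; a pump periodically tries to
    // forward them, and messages that keep failing are parked in an error store
    // from which they can later be returned to the outbox.
    interface OutboxMessage {
      recipientQueue: string;
      body: string;
      attempts: number;
    }

    const MAX_ATTEMPTS = 5;              // assumed to be configurable
    const outbox: OutboxMessage[] = [];  // stand-in for a durable local store
    const errorStore: OutboxMessage[] = [];

    // deliver() stands in for the actual broker send (e.g. a RabbitMQ publish).
    async function deliver(msg: OutboxMessage): Promise<void> {
      // publish msg.body to msg.recipientQueue here
    }

    export function send(recipientQueue: string, body: string): void {
      // Sending only writes to the local outbox; delivery happens asynchronously.
      outbox.push({ recipientQueue, body, attempts: 0 });
    }

    export async function pumpOutboxOnce(): Promise<void> {
      for (const msg of outbox.splice(0)) {
        try {
          await deliver(msg);
        } catch {
          msg.attempts += 1;
          // After too many failed attempts, move the message to the error store.
          (msg.attempts >= MAX_ATTEMPTS ? errorStore : outbox).push(msg);
        }
      }
    }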

Why do duplicate messages happen across multiple brokers of an ActiveMQ cluster?

There are two brokers which are configured as a cluster through a network connector.
Messages are always sent by a producer to broker0 and consumed by a consumer on broker0, but we found that some duplicated messages are sent to broker1, even though broker0 is working well.
That is to say, these duplicated messages are contained in both broker0 and broker1. Could anyone tell me the reason?
Thank you
This kind of situation can occur if you are trying to use two independent ActiveMQ instances in a cluster and the client has been given access to both broker URLs.
The solution is to use the master-slave feature that is designed to provide high availability.