Microservice architecture communication with RabbitMQ message broker

I have started to develop an ecommerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want to use a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that handles all the requests; if a request is not meant to be processed by it (e.g. the request is to add a new product), it should send this information via RabbitMQ to the Node.js microservice and let it perform the action. Is this an acceptable architectural design, or am I completely off track?

Ruby on Rails server-microservice that should handle all the requests (You can do better)
A. For this, what you need is a reverse proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a load balancer: it will distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service).
...
B. You will also need an API gateway for managing authentication & authorization, and for handling security, traceability, logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
Use RabbitMQ in case you want to establish event-based and/or asynchronous communication among your microservices. Here's a simple example: every time a user confirms an order, OrderService informs ProductService to update the quantity of the product that's been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state while waiting for ProductService to acknowledge the message or confirm the quantity update; it moves on and handles the other incoming requests.
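A minimal sketch of that fire-and-forget publish in Python with the pika client (queue name, payload fields, and the lazy import are illustrative assumptions, not from the question):

```python
import json

def make_order_event(order_id, product_id, quantity):
    """Build the event payload OrderService would publish."""
    return json.dumps({
        "event": "order.confirmed",
        "order_id": order_id,
        "product_id": product_id,
        "quantity": quantity,
    })

def publish_order_confirmed(event_body, host="localhost"):
    """Fire-and-forget publish: OrderService does not wait for ProductService."""
    import pika  # imported lazily so the pure helper above works without the client installed
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    channel.queue_declare(queue="product_updates", durable=True)
    channel.basic_publish(
        exchange="",
        routing_key="product_updates",
        body=event_body,
        properties=pika.BasicProperties(delivery_mode=2),  # persist the message
    )
    connection.close()
```

ProductService would consume from the same queue at its own pace, which is exactly why OrderService never blocks.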

Related

Which option is more suitable for microservice? GRPC or Message Brokers like RabbitMQ

I want to develop a project with a microservice structure.
I have to use PHP/Laravel and Node.js/NestJS.
What is the best connection method between my microservices? I read about RabbitMQ and NATS messaging, and also gRPC.
Which option is more suitable for microservices, and why?
Thanks in advance.
The technologies address different needs.
gRPC is a mechanism by which a client invokes methods on a remote (although it needn't be) server. The client is tightly coupled (often through load balancers) with the servers that implement the methods.
E.g. I (client) call Starbucks (service) and order (method) a coffee.
gRPC is an alternative to REST, GraphQL, and other mechanisms used to connect clients with servers though some form of API.
Message brokers (e.g. NATS, RabbitMQ) provide a higher-level abstraction in which a client sends messages to an intermediate service called a broker (this could itself be done using gRPC); the broker may queue messages and either ship them directly to services (push) or wait for a service to check its subscription (pull).
E.g. I (client) post a classified ad on some site (broker). Multiple people may see my ad (subscriber) and offer to buy (method) the items from me. Some software robot may subscribe too and contact me offering to transport or insure the things I'm selling. Someone else may be monitoring sales of widgets on the site in order to determine whether there's a market for opening a store to sell these widgets etc.
With the broker, the client may never know which servers implement the functionality (and vice versa). This is a loosely-coupled mechanism in which services may be added and removed independently of the client.
If you need a synchronous response for a 1:1 service call, use gRPC.
If you don't care which service will consume your messages (asynchronous, no tight coupling between services), use RabbitMQ.
If you need a distributed system that keeps an event history for later reuse by another service, use Kafka.
Basically, it comes down to whether you want asynchronous communication between services or not.
That is how you decide between real-time communication (sync) such as gRPC or RPC, and message queueing (async) such as RabbitMQ, Kafka, or Amazon SQS.
Here are also some good answers by other users:
https://dev.to/hypedvibe_7/what-is-the-purpose-of-using-grpc-and-rabbitmq-in-microservices-c4i#comment-1d43
https://stackoverflow.com/a/63420930/9403963

Why replace the Ocelot API gateway with RabbitMQ?

We are making a cloud-native enterprise business application on the .NET Core MVC platform. Our API gateway between the frontend application and the backend microservices is Ocelot, used in async mode.
We have been advised to use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between the frontend and the microservices. For context, our application will have a few hundred cshtml pages spanning several frontend modules, and we expect over a thousand users using the application concurrently.
Our concern is whether this is the right suggestion. Our development team feels that we should keep the Ocelot API gateway for general request-response exchange between the frontend and the microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay, when the job completes.
If you feel that we can replace Ocelot, our further concern is reliable session-based request and response: we should not have to programmatically correlate responses to session requests. Note that with RabbitMQ we are testing the .NET Core MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
With RabbitMQ, should we create a reply queue for each request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints based on application modules/cshtml pages to serve all our concurrent users efficiently?
Thank you all; we eagerly await your replies.
I recommend implementing RabbitMQ. You might need to replace Ocelot with RabbitMQ.
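On the reply-queue questions: the usual RabbitMQ RPC pattern (RabbitMQ tutorial 6) uses a single exclusive callback queue per client plus a correlation_id property to match replies to requests, rather than one queue per request; MassTransit's request client performs this correlation for you in .NET. A sketch of the mechanics in Python with pika (names are illustrative):

```python
import json
import uuid

def new_request(payload):
    """Tag a request with a correlation id so its reply can be matched later."""
    return str(uuid.uuid4()), json.dumps(payload)

def match_reply(pending, correlation_id, body):
    """Resolve one outstanding request on the shared reply queue; ignore strays."""
    if correlation_id in pending:
        pending[correlation_id] = body
        return True
    return False

def rpc_call(channel, request_queue, payload):
    """Publish a request that routes its reply to one exclusive callback queue."""
    import pika  # lazy import: the pure helpers above work without the client installed
    result = channel.queue_declare(queue="", exclusive=True)  # one callback queue per client
    callback_queue = result.method.queue
    corr_id, body = new_request(payload)
    channel.basic_publish(
        exchange="",
        routing_key=request_queue,
        properties=pika.BasicProperties(reply_to=callback_queue, correlation_id=corr_id),
        body=body,
    )
    return corr_id, callback_queue
```

An exclusive queue is deleted when the client disconnects, which is usually what you want for replies; durable queues are for messages that must survive broker restarts, such as the request queues themselves.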

Kafka + API service Architecture

I'm developing a web application using Express.js and wanted to leverage the latest technology and architecture, i.e. Kafka, microservices, etc. The frontend is React and calls the backend microservices to retrieve data.
My current architecture consists of multiple services serving as REST API endpoints in the backend, such as a user service, account service, company service, etc.
All these services work well and fine, but after introducing Kafka into the mix, I now need to publish a 'new user' event when a client registers for an account. The user service publishes this event, but the accounts service now needs to consume it.
Should I create a new subscriber service to individually consume this event, connecting to the same DB as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)? Or should the accounts service that acts as a REST API endpoint also consume the Kafka event (doesn't this complicate things when there are 20+ microservices, spending time checking which service consumes which event)?
I'd like to know what the best approach is in this kind of situation.
In general, the microservices will have rest apis for providing any business/CRUD capabilities and the Kafka broker will mostly be used for achieving eventual consistency and also for triggering any actions(by dedicated Kafka consumers) asynchronously.
Now to your particular question -
Should I create a new subscriber service to individually consume this event, connecting to the same DB as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)?
The microservices will have their own data stores, which may need to be consistent/in sync with data stores belonging to other microservices. You can create dedicated Kafka topics for relevant events; for example, "User_Resource" could be a Kafka topic where you publish all the events (CRUD) related to the User resource. Other microservices can subscribe to these topics, and their consumers will have the logic to handle these events (update the account service database, trigger notifications to other downstream services, etc.). This also creates a clean separation between CRUD and business services.
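A sketch of such a dedicated consumer using the kafka-python client (the "User_Resource" topic name is from the answer above; the event shape, group id, and handler logic are illustrative assumptions):

```python
import json

def handle_user_event(event, account_store):
    """Apply one User_Resource event to the account service's own data store."""
    if event.get("type") == "USER_CREATED":
        account_store[event["user_id"]] = {"balance": 0}
    elif event.get("type") == "USER_DELETED":
        account_store.pop(event["user_id"], None)
    return account_store

def consume_user_events(bootstrap_servers="localhost:9092"):
    """Subscribe the account service to the User_Resource topic and apply events."""
    from kafka import KafkaConsumer  # lazy import; assumes the kafka-python package
    consumer = KafkaConsumer(
        "User_Resource",
        bootstrap_servers=bootstrap_servers,
        group_id="account-service",  # each service uses its own consumer group
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )
    store = {}
    for message in consumer:
        handle_user_event(message.value, store)
```

Because each service runs its own consumer group against its own store, the one-database-per-service rule is preserved: no second service ever connects to the account DB.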
Or should the accounts service that acts as a REST API endpoint also consume the Kafka event (doesn't this complicate things when there are 20+ microservices, spending time checking which service consumes which event)?
A service which exposes a REST endpoint can also act as a Kafka producer/consumer. If your application is built using Spring Boot and the Spring Cloud framework, you can use spring-cloud-stream to handle Kafka interactions in the simplest way. The services need not be concerned with the state of other services, as they are supposed to be independent.

REST API with active push notifications from server to client

Problem description
I am working on a Xamarin application that consumes a REST API written in Python Flask.
The Xamarin application offers virtual shopping lists where users can collaborate on buying the items they have on a shared list.
To improve the user experience, I want to be able to actively notify users about finished items on the list.
Possible solutions:
Synchronous API polling from client side
Notifications are stored by the API in a relational database and have a flag indicating if the user received the notification already.
The API has an endpoint GET /users/:user_id/notifications/ that queries the database for notifications and returns a JSON response with those.
Advantages
fairly simple to implement
the API service remains stateless, which makes horizontal scaling with a load balancer easier
Problems
synchronous polling creates a huge amount of HTTP requests
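A sketch of that polling endpoint with Flask (route shape taken from the description above; the storage list and seen-flag handling are illustrative):

```python
def unseen_notifications(notifications, user_id):
    """Filter stored notifications down to the given user's unseen ones."""
    return [n for n in notifications if n["user_id"] == user_id and not n["seen"]]

def create_app(notification_store):
    """Wire the helper into the GET /users/<user_id>/notifications/ endpoint."""
    from flask import Flask, jsonify  # lazy import; Flask assumed per the question
    app = Flask(__name__)

    @app.get("/users/<int:user_id>/notifications/")
    def list_notifications(user_id):
        pending = unseen_notifications(notification_store, user_id)
        for n in pending:
            n["seen"] = True  # flag as delivered so the next poll returns only new ones
        return jsonify(pending)

    return app
```

In production the list would be a relational-database query with the seen flag as a column, exactly as described above.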
Websocket endpoint on the API
The API has an endpoint POST /users/:user_id/notifications/register which creates a websocket connection between client and API.
The connection is stored in a global dictionary that maps each client id to its websocket connection.
When a new notification is created, the endpoint looks up the owner id of the notification in this connection dictionary and sends the notification to the appropriate user through the websocket.
Notifications are stored in the database like in the first approach.
When a user calls the endpoint, a new websocket connection will be established first and upon success the API sends all unseen notifications from the database to the user.
Advantages
API can push notifications to clients asynchronously
Problems
When a user terminates the websocket connection, their dictionary entry will persist unless it is explicitly cleaned up
Retaining one websocket connection per user permanently adds additional overhead to the API
Horizontal scalability of the API is more difficult because the service is no longer stateless (websocket connection information is kept in the service's memory)
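The connection-dictionary bookkeeping, including the explicit cleanup the problems list calls for, can be sketched independently of any websocket framework (the send method and the notification shape are assumptions):

```python
class ConnectionRegistry:
    """In-memory mapping of user id -> websocket connection (the 'global dictionary')."""

    def __init__(self):
        self._connections = {}

    def register(self, user_id, websocket):
        self._connections[user_id] = websocket

    def unregister(self, user_id):
        # call this from the socket's close handler so stale entries don't persist
        self._connections.pop(user_id, None)

    def push(self, notification):
        """Deliver a notification to its owner if they are currently connected."""
        ws = self._connections.get(notification["user_id"])
        if ws is None:
            return False  # user offline; fall back to storing it for the next fetch
        ws.send(notification["message"])
        return True
```

The registry is exactly the state that breaks horizontal scaling: a second API instance has its own empty dictionary, which is why a broker or shared pub/sub layer is usually added once there is more than one instance.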
RabbitMQ
The API uses a RabbitMQ service to send notifications to the client. Every client subscribes to its own notification queue to prevent the broadcasting of messages.
Advantages
API remains stateless
Problems
Notifications need to be re-sent to the exchange when a user is offline
The number of queues grows drastically
Additional costs for the RabbitMQ service
High temporary load on the RabbitMQ service when many users come online at the same time
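A sketch of the per-user queue publish with pika (the queue naming scheme is an illustrative assumption):

```python
def user_queue_name(user_id):
    """Derive the per-user queue name; the naming scheme is illustrative."""
    return f"notifications.user.{user_id}"

def publish_to_user(user_id, message, host="localhost"):
    """Publish one notification to a single user's queue (no broadcast)."""
    import pika  # lazy import so user_queue_name works without the client installed
    connection = pika.BlockingConnection(pika.ConnectionParameters(host=host))
    channel = connection.channel()
    queue = user_queue_name(user_id)
    channel.queue_declare(queue=queue, durable=True)  # queue survives broker restarts
    channel.basic_publish(exchange="", routing_key=queue, body=message)
    connection.close()
```

Declaring the queue durable is what lets messages wait for an offline user, which is also exactly why the queue count grows with the user count.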
Final words
It would be interesting to hear the opinion of others.
I believe the active distribution of notifications from backend services to clients is a very common use case.
best,
D
I would use RabbitMQ and consume the events, forwarding them as push notifications. This works even while the user is not actively connected to the website, and it increases engagement, since a notified user will return to the website for more information. For details, see How to setup basic web push notification functionality using a Flask backend, How to send push notifications to a browser in ASP.NET Core, or Sending Notifications with Spring Boot, Angular, and Firebase Cloud Messaging. This way, RabbitMQ will not wait until the user is back online. If the user is online, you can forward the notification directly to the Xamarin application via WebSockets, behind a load balancer like NGINX that can handle many WebSockets in an optimized way.
Synchronous API polling from the client side is the least preferred way, since it overloads the web server with requests even when nothing has changed.
I don't think the scalability of WebSocket is a problem. You can scale up easily with pub/sub. The hotspot of long connections is a kind of serious problem.
For one-way communication, I would suggest Server-Sent Events (SSE). In the end, it usually depends on what your team is confident with.
I can recommend a different approach for an API that provides JSON, called GraphQL.
It supports subscription capabilities that are pushed by the GraphQL API server (using websockets).
GraphQL is considered today to be better than a RESTful API since it's very flexible and you can get exactly the data you need with one query.

Microservice, Queue vs HTTP request

We have a backend with a dozen workers in Python connected to RabbitMQ via Celery. We also have an API gateway in Django with PostgreSQL. The context between workers is handled by the DB.
We would like:
To decouple the DB and the workers,
To be able to make workers in Go.
We looked at the microservice infrastructure and it seems very interesting. What we don't understand is what kind of request/response pattern we should use, and how we can handle the context of a request between workers without using a common DB.
Microservice articles deal with notifications and subscriptions. Is this applicable to our situation, or should we use HTTP requests? Is the RPC pattern used here? That pattern seems very heavy.
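One common way to avoid the shared DB is to carry the request context inside the message itself, so each worker (whether Python or Go) gets everything it needs from the payload. A minimal stdlib sketch of such an envelope (field names are illustrative assumptions):

```python
import json
import uuid

def new_envelope(task, payload):
    """Start a workflow: the envelope carries all the context a worker will need."""
    return {
        "correlation_id": str(uuid.uuid4()),  # ties every step of one request together
        "task": task,
        "payload": payload,
        "history": [],  # results accumulated by previous workers
    }

def advance(envelope, worker_name, result, next_task):
    """A worker records its result and forwards the envelope to the next step."""
    envelope["history"].append({"worker": worker_name, "result": result})
    envelope["task"] = next_task
    return json.dumps(envelope)  # serialized body for the next queue
```

Because the envelope is plain JSON, a Go worker can consume what a Celery worker produced; the correlation id also gives you tracing for free, which makes the heavier RPC pattern unnecessary for pipeline-style work.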