Microservice, queue vs HTTP request - RabbitMQ

We have a backend with a dozen workers in Python connected to RabbitMQ via Celery. We also have an API gateway in Django with PostgreSQL. The context between workers is handled by the DB.
We would like:
To decouple the DB from the workers,
To be able to write workers in Go.
We looked at microservice architecture and it seems very interesting. What we don't understand is what kind of request/response pattern we should use, and how we can handle the context of a request between workers without using a common DB.
Microservice articles deal with notifications and subscriptions. Is this applicable to our situation, or should we use HTTP requests? Is the RPC pattern used here? It seems very heavy.
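For what it's worth, one common way to drop the shared DB is to carry the request context inside the message itself, and to fall back to an RPC-style result only where the caller genuinely needs a reply. A minimal Celery sketch of both shapes (broker URL, task name and payload fields are invented for illustration):

```python
# Minimal sketch (not from the original post): carry the request context in the
# message payload instead of a shared DB, and use a result backend only when the
# caller actually needs a reply. Broker URL, task name and payload fields are made up.
from celery import Celery

app = Celery("workers", broker="amqp://guest@localhost//", backend="rpc://")

@app.task(name="workers.process_document")
def process_document(context: dict) -> dict:
    # Everything the worker needs travels with the message, so there is no
    # shared-database lookup for per-request state.
    return {"request_id": context["request_id"], "status": "done"}

if __name__ == "__main__":
    # Fire-and-forget: no response expected, the caller moves on immediately.
    process_document.delay({"request_id": "abc-123", "text": "hello"})

    # RPC-style request/response: the caller blocks until a worker replies.
    result = process_document.delay({"request_id": "abc-124", "text": "hello"})
    print(result.get(timeout=10))
```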

Related

Microservice architecture communication with RabbitMQ message broker

I have started to develop an e-commerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want to use a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that handles all incoming requests and then, if a request is not meant to be processed by it (e.g. the request is to add a new product), sends this information somehow via RabbitMQ to the Node.js microservice and lets it perform the action. Is this an acceptable architectural design, or am I completely off track?
Ruby on Rails server-microservice that should handle all the requests (You can do better)
A. For this, what you need is a reverse proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a load balancer: it will distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service).
...
B. You will also need an API Gateway for managing Authentication & Authorization, and handling Security, Traceability, Logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
Use RabbitMQ in case you want to establish event-based and/or asynchronous communication among your microservices. Here's a simple example: every time a user confirms an order, OrderService informs ProductService to update the quantity of the product that's been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state waiting for ProductService to confirm that it received the info or updated the quantity; it will move on and handle the other incoming requests.
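To make that OrderService/ProductService example concrete, here is a minimal pika sketch; the exchange, queue, routing key and payload fields are assumptions, not something the answer specifies:

```python
# Hypothetical sketch of the OrderService -> ProductService event, using pika.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="orders", exchange_type="topic", durable=True)

# OrderService side: publish the event and move on, without waiting for ProductService.
channel.basic_publish(
    exchange="orders",
    routing_key="order.confirmed",
    body=json.dumps({"order_id": "42", "product_id": "7", "quantity": 3}),
    properties=pika.BasicProperties(delivery_mode=2),  # persist the message
)

# ProductService side (normally a separate process): consume the event and update stock.
channel.queue_declare(queue="product-service.order-confirmed", durable=True)
channel.queue_bind(queue="product-service.order-confirmed",
                   exchange="orders", routing_key="order.confirmed")

def on_order_confirmed(ch, method, properties, body):
    event = json.loads(body)
    # update_stock(event["product_id"], -event["quantity"])  # placeholder for the real update
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="product-service.order-confirmed",
                      on_message_callback=on_order_confirmed)
# channel.start_consuming()  # blocks; run this in the ProductService process
```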

Which option is more suitable for microservices? gRPC or message brokers like RabbitMQ

I want to develop a project with a microservice structure.
I have to use PHP/Laravel and Node.js/NestJS.
What is the best connection method between my microservices? I have read about RabbitMQ and NATS messaging, and also gRPC.
Which option is more suitable for microservices, and why?
Thanks in advance.
The technologies address different needs.
gRPC is a mechanism by which a client invokes methods on a remote (although it needn't be remote) server. The client is tightly coupled (often through load balancers) with the servers that implement the methods.
E.g. I (client) call Starbucks (service) and order (method) a coffee.
gRPC is an alternative to REST, GraphQL, and other mechanisms used to connect clients with servers through some form of API.
Message brokers (e.g. NATS, RabbitMQ) provide a higher-level abstraction in which a client sends messages to an intermediate service called a broker (this could itself be done using gRPC), and the broker may queue messages and either ship them directly to services (push) or wait for a service to check its subscription (pull).
E.g. I (client) post a classified ad on some site (broker). Multiple people may see my ad (subscriber) and offer to buy (method) the items from me. Some software robot may subscribe too and contact me offering to transport or insure the things I'm selling. Someone else may be monitoring sales of widgets on the site in order to determine whether there's a market for opening a store to sell these widgets etc.
With the broker, the client may never know which servers implement the functionality (and vice versa). This is a loosely-coupled mechanism in which services may be added and removed independently of the client.
If you need a synchronous response on a 1:1 service call, use gRPC.
If you don't care which service will consume the messages (asynchronous, no tight coupling between services), use RabbitMQ.
If you need a distributed system that keeps an event history for reuse later by another service, use Kafka.
Basically, it comes down to whether you want asynchronous communication between your services or not.
That is how you can decide between real-time communication (sync), such as gRPC or RPC, and message queueing (async), such as RabbitMQ, Kafka, or Amazon SQS.
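To make the sync/async split concrete, here is a rough Python sketch of both calling styles. The gRPC half assumes stubs generated (e.g. with grpcio-tools) from a hypothetical orders.proto, so orders_pb2 / orders_pb2_grpc and the service/method names are placeholders; the RabbitMQ half uses pika with equally made-up names:

```python
# Synchronous, 1:1, tightly coupled: the caller blocks until this particular service answers.
import grpc
import orders_pb2, orders_pb2_grpc  # hypothetical modules generated from an assumed orders.proto

def get_order_sync(order_id: str):
    channel = grpc.insecure_channel("order-service:50051")
    stub = orders_pb2_grpc.OrderServiceStub(channel)  # assumed generated stub
    return stub.GetOrder(orders_pb2.GetOrderRequest(id=order_id), timeout=2.0)

# Asynchronous, loosely coupled: the caller neither knows nor cares who consumes the event.
import json
import pika

def publish_order_created(order: dict) -> None:
    connection = pika.BlockingConnection(pika.ConnectionParameters("rabbitmq"))
    channel = connection.channel()
    channel.exchange_declare(exchange="orders", exchange_type="topic", durable=True)
    channel.basic_publish(exchange="orders",
                          routing_key="order.created",
                          body=json.dumps(order))
    connection.close()
```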
Here are also some good answers by other users:
https://dev.to/hypedvibe_7/what-is-the-purpose-of-using-grpc-and-rabbitmq-in-microservices-c4i#comment-1d43
https://stackoverflow.com/a/63420930/9403963

When to NOT use a message broker such as RabbitMQ in a micro-services architecture?

I am new to the concept of messaging brokers such as RabbitMQ and wanted to learn some best practices.
RabbitMQ seems to be a great way to facilitate asynchronous communication between microservices; however, I have a beginner's question that I could not find an answer to anywhere else.
When would one NOT use a message broker such as RabbitMQ in a micro-services architecture?
As an example:
Let's say I have two services. Service A and Service B (auth service)
The client makes a request to service A which in turn must communicate with service B (auth service) to authenticate the user and authorize the request. (using Basic Auth)
          Internet                  HTTP or AMQP??
Client ----------------> Service A ----------------> Service B [Authentication/Authorization]
       (HTTP request)
In my limited understanding, the issue I can foresee with using AMQP in a scenario such as the one outlined above is Service A being able to process the request and send a response to the client within an acceptable timeframe, given that it must wait for Service B to consume and respond to a message.
Essentially, is it a bad idea to make Service A wait for a response from Service B over AMQP?
Or have I missed the point of AMQP entirely?
Well, actually, what you are describing is closest to HTTP.
HTTP is synchronous, which means that you have to wait for a response. The solution to this issue is AMQP, as you mention. With AMQP you don't necessarily need to wait (you can configure it).
It's not necessarily a bad idea, but what most microservices depend on is something called eventual consistency. As this would be a quite long answer with a lot of ifs, I would suggest taking a look into Microservices Architecture.
For example, here is the part about HTTP vs AMQP, since it's mostly a question about synchronous vs asynchronous communication.
It goes into great detail about the different approaches to microservices design, listing pros and cons for your specific question and others.
For example, in your case the auth would happen at the API gateway, as it's not considered best practice to leave the microservices open to all the client applications.
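To make the synchronous path concrete (whether that check lives in Service A or in the API gateway, as suggested above), here is a minimal sketch; the URL and function names are made up:

```python
# Hypothetical sketch of Service A calling the auth service synchronously over HTTP.
import requests
from requests.auth import HTTPBasicAuth

AUTH_SERVICE_URL = "http://service-b.internal/authenticate"  # placeholder URL

def authorize(username: str, password: str) -> bool:
    try:
        resp = requests.get(AUTH_SERVICE_URL,
                            auth=HTTPBasicAuth(username, password),
                            timeout=2)  # fail fast instead of queueing and waiting
        return resp.status_code == 200
    except requests.RequestException:
        # Auth service unreachable: reject (or apply whatever fallback policy you prefer).
        return False
```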

Why replace Ocelot API gateway with RabbitMQ?

We are building a cloud-native enterprise business application on the .NET Core MVC platform. The default .NET Core API gateway between the frontend application and the backend microservices is Ocelot, used in async mode.
We have been advised to use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between the frontend and the microservices. Here we would like to note that our application will have a few hundred cshtml pages spanning several frontend modules. We are expecting over a thousand users using the application concurrently.
Our concern is whether this is the right suggestion or not. Our development team feels that we should continue using the Ocelot API gateway for general request-response exchange between the frontend and microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay when the job completes.
In case you feel that we can replace Ocelot, then our further concern is about reliable session-based request and response. We should not have to programmatically correlate responses to session requests. It may be noted that with RabbitMQ we are testing with the .NET Core MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
In RabbitMQ, should we make a reply queue for each request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints based on application modules/cshtml pages to serve all our concurrent users efficiently?
Thank you all; we eagerly await your replies.
I recommend implementing RabbitMQ. You might need to switch from Ocelot to RabbitMQ.
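On the reply-queue questions above (which the answer doesn't address): the usual RabbitMQ RPC setup is one exclusive, broker-named reply queue per client process, shared by all of that client's requests and matched up via correlation_id, rather than one queue per request. A rough Python/pika sketch of that pattern, purely for illustration since the pattern itself is broker-level; the request queue name is invented:

```python
# Hypothetical sketch: one exclusive reply queue per client, correlation_id per request.
import json
import uuid
import pika

class RpcClient:
    def __init__(self, host="localhost"):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(host))
        self.channel = self.connection.channel()
        # Exclusive + broker-named: private to this connection, deleted when it closes.
        result = self.channel.queue_declare(queue="", exclusive=True)
        self.reply_queue = result.method.queue
        self.responses = {}
        self.channel.basic_consume(queue=self.reply_queue,
                                   on_message_callback=self._on_reply,
                                   auto_ack=True)

    def _on_reply(self, ch, method, props, body):
        # Responses for all outstanding requests arrive on the same queue;
        # the correlation_id tells us which request each one belongs to.
        self.responses[props.correlation_id] = json.loads(body)

    def call(self, request: dict, timeout: float = 5.0):
        corr_id = str(uuid.uuid4())
        self.channel.basic_publish(
            exchange="",
            routing_key="backend.requests",  # made-up request queue name
            properties=pika.BasicProperties(reply_to=self.reply_queue,
                                            correlation_id=corr_id),
            body=json.dumps(request))
        self.connection.process_data_events(time_limit=timeout)  # pump I/O until reply or timeout
        return self.responses.pop(corr_id, None)
```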

ServiceStack Messaging API: Can it make a broadcast?

As I have previously mentioned, I am using ServiceStack Messaging API (IMessageQueueClient.Publish) as well as the more low-level IRedisClient.PublishMessage.
I use the Messaging API when I need a specific message/request to be processed by only one instance of a module/service, so even though I might have several modules running that all listens for MyRequest, only one service receives the message and processes it.
I use the IRedisClient.PublishMessage when I do a broadcast, a pub/sub situation, sending a request that everyone should receive that listens on that specific Redis channel.
However, I am in a situation where it would be useful to use the Messaging API but do a broadcast, so that all instances listening to a specific message type get the message, not just one.
(The reason for this is to streamline our usage of Redis and how we subscribe to events/request, but I will not get into details about this now. A little more background on this is here.)
Is there a "broadcast way" for the Messaging API?
No, the purpose of ServiceStack Messaging is simply to invoke ServiceStack Services via MQ. Any other MQ features are outside the purpose & scope of ServiceStack MQ; you'd need to develop against the MQ provider's APIs directly to access their broadcast features.
Server Events is a ServiceStack feature that supports broadcasting messages to subscribers of user-defined channels, but it's a completely different implementation that serves a different use case: sending "server push" real-time events over HTTP or gRPC. E.g. it doesn't use MQ brokers, and pub/sub messages aren't persistent (i.e. only subscribers at the time messages are sent will receive them).
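For completeness, "developing against the MQ provider APIs directly" for a broadcast would, with RabbitMQ as the provider, typically mean a fanout exchange with one private queue per subscriber instance. A minimal pika sketch (exchange and queue naming are assumptions, and this sits outside ServiceStack's messaging abstraction):

```python
# Hypothetical sketch: broadcast via a RabbitMQ fanout exchange, one queue per instance.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="broadcast.myrequest", exchange_type="fanout")

# Publisher side: every queue bound to the fanout exchange receives a copy of this message.
channel.basic_publish(exchange="broadcast.myrequest", routing_key="", body=b"MyRequest payload")

# Subscriber side (each instance runs this in its own process): bind a private queue.
result = channel.queue_declare(queue="", exclusive=True)  # broker-named, per-instance queue
channel.queue_bind(queue=result.method.queue, exchange="broadcast.myrequest")
channel.basic_consume(queue=result.method.queue,
                      on_message_callback=lambda ch, m, p, body: print(body),
                      auto_ack=True)
# channel.start_consuming()  # blocks in the subscriber process
```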