Why replace Ocelot API gateway with RabbitMQ?

We are building a cloud-native enterprise business application on the .NET Core MVC platform. The API gateway we use between the frontend application and the backend microservices is Ocelot (a common choice on .NET Core), used in async mode.
We have been advised to use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between the frontend and the microservices. We would like to note that our application will have a few hundred cshtml pages spanning several frontend modules, and we expect over a thousand users to use the application concurrently.
Our concern is whether this is the right suggestion or not. Our development team feels that we should continue using the Ocelot API gateway for general request-response exchange between the frontend and the microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay, once the job completes.
If you feel that we can replace Ocelot, our further concern is reliable session-based request and response. We should not have to programmatically correlate responses to session requests. Please note that with RabbitMQ we are testing the .NET Core MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
With RabbitMQ, should we create a reply queue for each request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints based on application modules/cshtml pages in order to serve all our concurrent users efficiently?
Thank you all; we eagerly await your replies.

I recommend implementing RabbitMQ. You might need to switch from Ocelot to RabbitMQ.
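On the reply-queue and correlation concerns: with MassTransit the client does not normally manage reply queues or correlation by hand. Below is a minimal sketch, assuming a MassTransit v8-style setup in the MVC frontend; the contract and controller names are illustrative, not taken from the question.

```csharp
using MassTransit;
using Microsoft.AspNetCore.Mvc;

// Program.cs in the MVC frontend (MassTransit v8-style registration).
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();
builder.Services.AddMassTransit(x =>
{
    x.AddRequestClient<CheckOrderStatus>();
    x.UsingRabbitMq((context, cfg) =>
    {
        cfg.Host("rabbitmq://localhost");
        cfg.ConfigureEndpoints(context);
    });
});
var app = builder.Build();
app.MapDefaultControllerRoute();
app.Run();

// Shared contracts between the frontend and a microservice (names are illustrative).
public record CheckOrderStatus(Guid OrderId);
public record OrderStatusResult(Guid OrderId, string Status);

// In a controller, the request client awaits the matching response.
public class OrderController : Controller
{
    private readonly IRequestClient<CheckOrderStatus> _client;
    public OrderController(IRequestClient<CheckOrderStatus> client) => _client = client;

    public async Task<IActionResult> Status(Guid orderId)
    {
        // MassTransit correlates the response by RequestId and delivers it to the bus's
        // own temporary reply endpoint, so no manual session/request correlation is needed.
        var response = await _client.GetResponse<OrderStatusResult>(new CheckOrderStatus(orderId));
        return View(response.Message);
    }
}
```

With MassTransit, that reply endpoint is typically a single temporary, auto-delete queue per bus instance, so the usual pattern is one reply queue per client process rather than one per request or one per module/cshtml page; the durable queues are the ones owned by the consuming microservices.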

Related

Microservice architecture communication with rabbitmq message broker

I have started to develop an ecommerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want to use a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that should handle all the requests and then, if the request is not meant to be processed by it (e.g. the request is to add a new product), send this information somehow using RabbitMQ to the Node.js microservice and let it perform the action. Is this an acceptable architectural design or am I completely off route?
Ruby on Rails server-microservice that should handle all the requests (You can do better)
A. For this, what you need is a Reverse Proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a Load Balancer: it'll distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service)
...
B. You will also need an API Gateway for managing Authentication & Authorization, and handling Security, Traceability, Logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
Use RabbitMQ in case you want to establish event-based and/or asynchronous communication among your microservices. Here's a simple example: every time a user confirms an Order, OrderService informs ProductService to update the quantity of the product that's been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state while waiting for ProductService to confirm whether it received the info or updated the quantity; it will move on and handle the other incoming requests.
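To make that concrete, here is a minimal sketch of the OrderService/ProductService exchange. It is written in C# with MassTransit to stay consistent with the rest of this page, although the question itself uses Node.js and Rails; the OrderConfirmed contract and handler names are illustrative.

```csharp
using MassTransit;
using Microsoft.AspNetCore.Mvc;

// Event contract published by OrderService (name and fields are illustrative).
public record OrderConfirmed(Guid OrderId, string ProductId, int Quantity);

// OrderService: publish the event and move on; nothing blocks on ProductService.
public class OrdersController : Controller
{
    private readonly IPublishEndpoint _publishEndpoint;
    public OrdersController(IPublishEndpoint publishEndpoint) => _publishEndpoint = publishEndpoint;

    [HttpPost]
    public async Task<IActionResult> Confirm(Guid orderId, string productId, int quantity)
    {
        await _publishEndpoint.Publish(new OrderConfirmed(orderId, productId, quantity));
        return Accepted(); // respond immediately; the broker handles delivery
    }
}

// ProductService: consume the event and update the stock quantity.
public class OrderConfirmedConsumer : IConsumer<OrderConfirmed>
{
    public Task Consume(ConsumeContext<OrderConfirmed> context)
    {
        // Decrement the on-hand quantity for the ordered product (data access omitted).
        return UpdateQuantityAsync(context.Message.ProductId, -context.Message.Quantity);
    }

    private Task UpdateQuantityAsync(string productId, int delta) => Task.CompletedTask;
}
```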

When to NOT use a message broker such as RabbitMQ in a micro-services architecture?

I am new to the concept of messaging brokers such as RabbitMQ and wanted to learn some best practices.
RabbitMQ seems to be a great way to facilitate asynchronous communication between microservices; however, I have a beginner's question that I could not find an answer to anywhere else.
When would one NOT use a message broker such as RabbitMQ in a micro-services architecture?
As an example:
Let's say I have two services: Service A and Service B (an auth service).
The client makes a request to Service A, which in turn must communicate with Service B (the auth service) to authenticate the user and authorize the request (using Basic Auth).
Client --(HTTP request, over the Internet)--> Service A --(HTTP or AMQP??)--> Service B [Authentication/Authorization]
In my limited understanding, the issue I can foresee with using AMQP in scenarios such as the one outlined above is whether Service A can process the request and send a response to the client within an acceptable timeframe, given that it must wait for Service B to consume and respond to a message.
Essentially, is it a bad idea to make Service A wait for a response from Service B via AMQP?
Or have I missed the point of AMQP entirely?
Well, actually, what you are describing is mostly close to HTTP.
HTTP is synchronous, which means that you have to wait for a response. The solution to this issue is AMQP, as you mention. With AMQP you don't necessarily need to wait (you can configure it).
It's not necessarily a bad idea, but what most microservices depend on is something called eventual consistency. As this would be quite a long answer with a lot of ifs, I would suggest taking a look at Microservices Architecture.
For example, here is the part about HTTP vs AMQP, since it's mostly a question about synchronous vs asynchronous communication.
It goes into great detail about the different approaches to microservices design, listing pros and cons for your specific question and others.
For example, in your case the auth would happen at the API gateway, as it's not considered best practice to leave the microservices open to all the client applications.
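For comparison, the synchronous path being discussed looks roughly like the sketch below: Service A blocks on an HTTP call to Service B until the authorization answer comes back. The base address and the /auth/validate endpoint are hypothetical; a request/response exchange over AMQP would block in the same way, just with a broker in between.

```csharp
// Service A: validate the incoming Basic Auth credentials against Service B
// before doing any work. The URL and endpoint below are made up for illustration.
public class AuthClient
{
    private static readonly HttpClient Http = new HttpClient
    {
        BaseAddress = new Uri("http://service-b"),
        Timeout = TimeSpan.FromSeconds(2) // fail fast rather than holding the client's request open
    };

    public async Task<bool> IsAuthorizedAsync(string basicAuthHeader)
    {
        var request = new HttpRequestMessage(HttpMethod.Get, "/auth/validate");
        request.Headers.TryAddWithoutValidation("Authorization", basicAuthHeader);

        // Service A waits here until Service B answers (or the timeout fires).
        using var response = await Http.SendAsync(request);
        return response.IsSuccessStatusCode;
    }
}
```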

REST API with active push notifications from server to client

Problem description
I am working on a Xamarin application that consumes a REST API written in Python Flask.
The Xamarin application offers virtual shopping lists where users can collaborate on buying stuff they have on a shared list.
To improve the user experience, I want to be able to actively notify the user about finished items on the list.
Possible solutions:
Synchronous API polling from client side
Notifications are stored by the API in a relational database and have a flag indicating if the user received the notification already.
The API has an endpoint GET /users/:user_id/notifications/ that queries the database for notifications and returns a JSON response with those.
Advantages
fairly simple to implement
API service remains stateless, making horizontal scaling with a load balancer easier
Problems
synchronous polling creates a huge amount of HTTP requests
Websocket endpoint on the API
The API has an endpoint POST /users/:user_id/notifications/register which creates a websocket connection between client and API.
The connection is stored in a global dictionary in which each entry maps a client id to a websocket connection (see the sketch below).
When a new notification is created, the endpoint makes a lookup in the connection dictionary by comparing the owner id of the notification with the dictionary entries. The notification is sent to appropriate user through the websocket.
Notifications are stored in the database like in the first approach.
When a user calls the endpoint, a new websocket connection will be established first and upon success the API sends all unseen notifications from the database to the user.
Advantages
API can push notifications to clients asynchronously
Problems
When a user terminates the websocket connection, their dictionary entry will persist
Retaining one websocket connection per user permanently adds additional overhead to the API
Horizontal scalability of the API is more difficult because the service is not stateless anymore (websocket connection information is kept in memory on the API instance)
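A rough C# sketch of the connection-dictionary approach described above (the API in the question is Flask, so treat this purely as an analogue; the Register/PushAsync names are made up):

```csharp
using System.Collections.Concurrent;
using System.Net.WebSockets;
using System.Text;

public class NotificationPusher
{
    // The "global dictionary": maps a user id to that user's open WebSocket.
    private readonly ConcurrentDictionary<int, WebSocket> _connections = new();

    public void Register(int userId, WebSocket socket) => _connections[userId] = socket;

    public void Unregister(int userId) => _connections.TryRemove(userId, out _);

    public async Task PushAsync(int userId, string notificationJson)
    {
        // Look up the notification's owner; do nothing if they are not connected.
        if (!_connections.TryGetValue(userId, out var socket) || socket.State != WebSocketState.Open)
            return;

        var bytes = new ArraySegment<byte>(Encoding.UTF8.GetBytes(notificationJson));
        await socket.SendAsync(bytes, WebSocketMessageType.Text, endOfMessage: true, CancellationToken.None);
    }
}
```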
RabbitMQ
The API uses a RabbitMQ service to send notifications to the client. Every client subscribes to its own notification queue to prevent broadcasting of messages (see the sketch below).
Advantages
API remains stateless
Problems
Notifications need to be re-sent to the exchange when a user is offline
Amount of queues grows drastically
Additional costs for RabbitMQ service
High temporary load on the RabbitMQ service when many users come online at the same time
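A rough sketch of the per-user-queue idea, using the RabbitMQ .NET client (v6-style API) for consistency with the rest of this page; the exchange, queue, and routing-key names are made up:

```csharp
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// One direct exchange; each user gets a durable queue bound with their own routing key.
channel.ExchangeDeclare("notifications", ExchangeType.Direct, durable: true);

// Publisher side (the API): route each notification to the owner's queue.
void PublishNotification(int userId, string json) =>
    channel.BasicPublish(
        exchange: "notifications",
        routingKey: $"user.{userId}",
        basicProperties: null,
        body: Encoding.UTF8.GetBytes(json));

// Consumer side (per client): declare and bind this user's queue, then consume.
void SubscribeUser(int userId)
{
    var queue = $"notifications.user.{userId}";
    channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind(queue, "notifications", routingKey: $"user.{userId}");

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (_, ea) =>
    {
        var message = Encoding.UTF8.GetString(ea.Body.ToArray());
        // Forward the message to the Xamarin client here (push notification, WebSocket, ...).
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };
    channel.BasicConsume(queue, autoAck: false, consumer: consumer);
}
```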
Final words
It would be interesting to hear the opinion of others.
I believe the active distribution of notifications from backend services to clients is a very common use case.
best,
D
I would use RabbitMQ and consume events, forwarding them as push notifications. This works while the user is not actively connected to the website and improves engagement, since users return to the website when notified. For more information, see How to setup basic web push notification functionality using a Flask backend, How to send push notifications to a browser in ASP.NET Core, or Sending Notifications with Spring Boot, Angular, and Firebase Cloud Messaging. This way RabbitMQ does not have to wait until the user is back online. If the user is online, you can forward the notification directly to the Xamarin application via WebSockets, behind a load balancer like NGINX that can handle many WebSockets in an optimized way.
Synchronous API polling from the client side is the least preferred way, since it overloads the web server with requests even when nothing has changed.
I don't think the scalability of WebSockets is a problem; you can scale out easily with pub/sub. The hotspot of many long-lived connections is the more serious problem.
For one-way communication, I would suggest Server-Sent Events. In the end, it usually depends on what your team is confident with.
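For reference, a Server-Sent Events endpoint can be as small as the sketch below. It is written as an ASP.NET Core minimal API to stay consistent with the rest of this page (the backend in the question is Flask), and the route and payload are made up:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical SSE stream: one event every 10 seconds until the client disconnects.
app.MapGet("/users/{userId}/notifications/stream", async (int userId, HttpContext ctx) =>
{
    ctx.Response.Headers["Content-Type"] = "text/event-stream";
    ctx.Response.Headers["Cache-Control"] = "no-cache";

    try
    {
        while (!ctx.RequestAborted.IsCancellationRequested)
        {
            // In a real service the payload would come from the notification store or a queue.
            var payload = $"{{\"userId\":{userId},\"text\":\"an item on your list was finished\"}}";
            await ctx.Response.WriteAsync($"data: {payload}\n\n", ctx.RequestAborted);
            await ctx.Response.Body.FlushAsync(ctx.RequestAborted);
            await Task.Delay(TimeSpan.FromSeconds(10), ctx.RequestAborted);
        }
    }
    catch (OperationCanceledException)
    {
        // Client disconnected; nothing more to send.
    }
});

app.Run();
```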
I can recommend a different approach for an API that provides JSON, which is called GraphQL.
It supports subscription capabilities, where data is pushed by the GraphQL API server (using WebSockets).
GraphQL is considered today to be better than a RESTful API, since it's very flexible and you can get exactly the data you need with one query.

Microservice, Queue vs HTTP request

We have a backend with a dozen workers in Python connected to RabbitMQ with Celery. We also have an API gateway in Django with PostgreSQL. The context between workers is handled by the DB.
We would like:
To decouple the DB and the workers,
To be able to make workers in Go.
We looked at the microservice infrastructure and it seems very interesting. What we don't understand is what kind of request/response pattern we should use and how we can handle the context of a request between workers without using a common DB.
Microservice articles deal with notifications and subscriptions. Is this applicable to our situation, or should we use HTTP requests? Is the RPC pattern used for this? It seems to be very heavy.

Is NServiceBus suitable for general as well as specific client notifications?

I am looking at various options for a WCF-based publish/subscribe framework. Say I have one WCF web service that will be the publisher and 1000 clients registered as subscribers. For some published messages all clients will be interested, but at the same time I want the ability to notify a single client with a specific message. On receiving a notification, the client will call other web service methods on the web service.
Is NServiceBus suitable for this kind of scenario?
If I use MSMQ for transport, does it mean that every PC where the client is installed requires a queue to be created?
Some of the challenges include how you want the publisher to behave when a given subscribing client is down - do you want that message to be available when the subscriber comes back up? If so, then some kind of durable messaging is needed between them - like MSMQ.
Your question about notifying a single client, is that as a result of a request sent by that client? If so, then standard NServiceBus calls in the form of Bus.Reply will do it for you. When using WCF, if the response is to be asynchronous you'll need to use callback contracts.
NServiceBus can do all the things you described, and has the ability to automatically install MSMQ and create queues so that greatly simplifies client-side deployments.
You also have the ability with NServiceBus to expose messages over WCF so you can support non-NServiceBus clients if you need to as well. It also has its own http gateway and XSD schemas which can allow clients on non-Windows platforms to interoperate even without using WCF.
Hope that answers your questions.
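To illustrate the reply-to-a-single-client point, here is a rough sketch using the newer NServiceBus handler API (older versions use Bus.Reply, as mentioned above); the message names are made up:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Shared message contracts (names are illustrative).
public class StatusRequest : IMessage
{
    public Guid ClientId { get; set; }
}

public class StatusResponse : IMessage
{
    public string Status { get; set; }
}

// Server side: the reply goes only to the endpoint that sent the request,
// not to every subscriber of published events.
public class StatusRequestHandler : IHandleMessages<StatusRequest>
{
    public Task Handle(StatusRequest message, IMessageHandlerContext context)
    {
        return context.Reply(new StatusResponse { Status = "OK" });
    }
}
```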