I'm developing a web application using Express.js and want to leverage current technology and architecture, i.e. Kafka, microservices, etc. The frontend is React, which calls the backend microservices to retrieve data.
My current architecture consists of multiple backend services exposing REST API endpoints, such as a user service, account service, company service, etc.
All these services work fine, but now that I'm introducing Kafka into the mix, I need to publish a 'new user' event when a client registers for an account: the user service publishes this event, and the accounts service now needs to consume it.
Should I create a new subscriber service dedicated to consuming this event, connecting to the same database as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)? Or should the accounts service, which already acts as a REST API endpoint, also consume the Kafka event (doesn't this complicate things once there are 20+ microservices and you spend time checking which service consumes which event)?
I'd like to know the best approach for this kind of situation.
In general, the microservices will expose REST APIs for business/CRUD capabilities, and the Kafka broker will mostly be used for achieving eventual consistency and for triggering actions asynchronously (via dedicated Kafka consumers).
Now to your particular question -
Should I create a new subscriber service dedicated to consuming this event, connecting to the same database as the account service (though doesn't this defeat the purpose of the one-database-per-service microservice architecture)?
The microservices will have their own data stores, which may need to be kept consistent/in sync with data stores belonging to other microservices. You can create dedicated Kafka topics for the relevant events; for example, "User_Resource" could be a Kafka topic where you publish all the events (CRUD) related to the User resource. Other microservices can subscribe to these topics, and the consumers will contain the logic to handle these events (update the account service database, trigger notifications to other downstream systems, etc.). This also creates a clean separation between CRUD and business services.
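For example, in a Node.js stack like yours, the user service could publish the 'new user' event to such a topic roughly like this (a minimal sketch using the kafkajs client; the broker address, topic name, and payload shape are illustrative assumptions, not a prescribed schema):

```ts
// user-service/user-events.ts
// Minimal kafkajs producer sketch; broker address, topic and payload shape are illustrative.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "user-service", brokers: ["localhost:9092"] });
const producer = kafka.producer();

export async function connectProducer() {
  await producer.connect();
}

// Called by the registration handler after the user row has been written.
export async function publishUserCreated(user: { id: string; email: string }) {
  await producer.send({
    topic: "User_Resource",            // one topic per resource, as described above
    messages: [
      {
        key: user.id,                  // keying by user id keeps a given user's events ordered
        value: JSON.stringify({ type: "USER_CREATED", payload: user }),
      },
    ],
  });
}
```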
Or should the accounts service, which already acts as a REST API endpoint, also consume the Kafka event (doesn't this complicate things once there are 20+ microservices and you spend time checking which service consumes which event)?
A service that exposes a REST endpoint can also act as a Kafka producer/consumer. If your application were built using Spring Boot and the Spring Cloud framework, you could use spring-cloud-stream to handle the Kafka interactions in the simplest way. The services need not be concerned with the state of other services, as they are supposed to be independent.
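Concretely, in an Express setup the accounts service can serve its REST routes and run a Kafka consumer in the same process. A minimal sketch with the kafkajs client (the group id, port, and handler logic are illustrative):

```ts
// accounts-service/index.ts
// One process: Express REST endpoints plus a background Kafka consumer (kafkajs).
import express from "express";
import { Kafka } from "kafkajs";

const app = express();
app.get("/accounts/:id", (req, res) => {
  // ...look up the account in the accounts service's own database (omitted)...
  res.json({ id: req.params.id });
});

const kafka = new Kafka({ clientId: "accounts-service", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "accounts-service" }); // one consumer group per service

async function startConsumer() {
  await consumer.connect();
  await consumer.subscribe({ topic: "User_Resource", fromBeginning: false });
  await consumer.run({
    eachMessage: async ({ message }) => {
      if (!message.value) return;
      const event = JSON.parse(message.value.toString());
      if (event.type === "USER_CREATED") {
        // create the corresponding account row in the accounts service's own database
      }
    },
  });
}

app.listen(3001, () => console.log("accounts-service listening on 3001"));
startConsumer().catch(console.error);
```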
I have started to develop an e-commerce application using a microservices architecture. Every microservice will have a separate database. For now, I know I want to use a Node.js microservice to handle products and also serve as a search engine for them. I plan on having a Ruby on Rails server-microservice that should handle all the requests and then, if the request is not meant to be processed by it (e.g. the request is to add a new product), send this information somehow using RabbitMQ to the Node.js microservice and let it perform the action. Is this an acceptable architectural design, or am I completely off route?
Ruby on Rails server-microservice that should handle all the requests (You can do better)
A. For this, what you need is a reverse proxy.
A reverse proxy is able to forward each incoming request to the microservice that's responsible for processing it.
It can also act as a load balancer: it will distribute the incoming requests across many services (if, for instance, you want to deploy multiple instances of the same service).
...
B. You will also need an API Gateway for managing Authentication & Authorization, and handling Security, Traceability, Logging, ... of the requests.
For (A) & (B), you can use either Nginx or Kong.
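Nginx or Kong is the usual choice for this; purely to make the forwarding idea concrete, here is roughly the same behaviour sketched with Express and http-proxy-middleware (the service addresses are illustrative):

```ts
// reverse-proxy.ts -- illustrative only; Nginx or Kong would normally fill this role.
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// Each incoming request is forwarded to the service responsible for it.
app.use("/products", createProxyMiddleware({ target: "http://product-service:3000", changeOrigin: true }));
app.use("/orders", createProxyMiddleware({ target: "http://order-service:3001", changeOrigin: true }));

app.listen(8080, () => console.log("reverse proxy listening on 8080"));
```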
Use RabbitMQ in case you want to establish event-based and/or asynchronous communication among your microservices. Here's a simple example: every time a user confirms an order, OrderService informs ProductService to update the quantity of the product that's been ordered.
The advantage of using RabbitMQ here is that OrderService won't sit in a blocking state waiting for ProductService to confirm whether it received the message or updated the quantity; it moves on and handles the other incoming requests.
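A minimal sketch of that flow with the amqplib client (the exchange, queue, and routing-key names are illustrative, and the stock update itself is omitted):

```ts
// order-confirmed.ts
// OrderService publishes, ProductService consumes; neither blocks waiting on the other.
import amqp from "amqplib";

const EXCHANGE = "orders";             // illustrative names
const ROUTING_KEY = "order.confirmed";

export async function publishOrderConfirmed(productId: string, quantity: number) {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, "topic", { durable: true });
  ch.publish(EXCHANGE, ROUTING_KEY, Buffer.from(JSON.stringify({ productId, quantity })));
  await ch.close();
  await conn.close();
}

export async function startProductServiceConsumer() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  await ch.assertExchange(EXCHANGE, "topic", { durable: true });
  const { queue } = await ch.assertQueue("product-service.order-confirmed", { durable: true });
  await ch.bindQueue(queue, EXCHANGE, ROUTING_KEY);
  await ch.consume(queue, (msg) => {
    if (!msg) return;
    const { productId, quantity } = JSON.parse(msg.content.toString());
    // ...decrement the product's stock by `quantity` (omitted)...
    ch.ack(msg);
  });
}
```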
I want to develop a project with a microservice architecture.
I have to use PHP/Laravel and Node.js/NestJS.
What is the best way to connect my microservices? I have read about RabbitMQ and NATS messaging,
and also gRPC.
Which option is more suitable for microservices,
and why?
Thanks in advance.
The technologies address different needs.
gRPC is a mechanism by which a client invokes methods on a remote (although it needn't be remote) server. The client is tightly coupled (often through load balancers) to the servers that implement the methods.
E.g. I (client) call Starbucks (service) and order (method) a coffee.
gRPC is an alternative to REST, GraphQL, and other mechanisms used to connect clients with servers through some form of API.
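To make the contrast concrete, this is roughly what such a call looks like from a Node.js client, as a sketch using @grpc/grpc-js and @grpc/proto-loader; the coffee.proto service definition referenced in the comment is hypothetical and would need to exist next to the script:

```ts
// order-coffee.ts
// Assumes a hypothetical coffee.proto next to this script, roughly:
//   syntax = "proto3";
//   package coffee;
//   service CoffeeShop { rpc Order (OrderRequest) returns (OrderReply); }
//   message OrderRequest { string drink = 1; }
//   message OrderReply   { string status = 1; }
import * as grpc from "@grpc/grpc-js";
import * as protoLoader from "@grpc/proto-loader";

const packageDef = protoLoader.loadSync(__dirname + "/coffee.proto");
const proto = grpc.loadPackageDefinition(packageDef) as any;

// The client must know exactly which server implements the method (tight coupling).
const client = new proto.coffee.CoffeeShop("localhost:50051", grpc.credentials.createInsecure());

client.Order({ drink: "latte" }, (err: Error | null, reply: { status: string }) => {
  if (err) return console.error("order failed:", err.message);
  console.log("order status:", reply.status);
});
```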
Message brokers (e.g. NATS, RabbitMQ) provide a higher-level abstraction in which a client sends messages to an intermediate service called a broker (this could be done using gRPC), and the broker may queue messages and either ship them directly to services (push) or wait for a service to check its subscription (pull).
E.g. I (client) post a classified ad on some site (broker). Multiple people (subscribers) may see my ad and offer to buy (method) the items from me. Some software robot may subscribe too and contact me, offering to transport or insure the things I'm selling. Someone else may be monitoring sales of widgets on the site in order to determine whether there's a market for opening a store to sell these widgets, etc.
With the broker, the client may never know which servers implement the functionality (and vice versa). This is a loosely-coupled mechanism in which services may be added and removed independently of the client.
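A small sketch of that loose coupling with the nats.js client (the subject name and payload are illustrative); the publisher never learns who, if anyone, received the message:

```ts
// pubsub.ts -- loose coupling via a broker (nats.js); subject name is illustrative.
import { connect, StringCodec } from "nats";

const sc = StringCodec();

async function main() {
  const nc = await connect({ servers: "localhost:4222" });

  // A subscriber: any number of services can subscribe to the subject
  // without the publisher knowing they exist.
  const sub = nc.subscribe("classifieds.posted");
  (async () => {
    for await (const msg of sub) {
      console.log("saw ad:", sc.decode(msg.data));
    }
  })();

  // The publisher just posts the "ad"; it never learns who consumed it.
  nc.publish("classifieds.posted", sc.encode(JSON.stringify({ item: "widget", price: 10 })));

  await nc.flush();
}

main().catch(console.error);
```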
If you need a synchronous response to a 1:1 service call, use gRPC.
If you don't care which service will consume your messages (asynchronous, no tight coupling between services), use RabbitMQ.
If you need a distributed system that keeps an event history so it can be reused later by another service, use Kafka.
Basically, it comes down to whether or not you want asynchronous communication between services.
That is when you can decide between real-time (synchronous) communication options such as gRPC or RPC, and message-queueing (asynchronous) ones such as RabbitMQ, Kafka, or Amazon SQS.
Here are also some good answers by other users:
https://dev.to/hypedvibe_7/what-is-the-purpose-of-using-grpc-and-rabbitmq-in-microservices-c4i#comment-1d43
https://stackoverflow.com/a/63420930/9403963
I have a fairly small piece of software that uses a microservice architecture. I am currently using RabbitMQ for communication between the UI and the services, and that works great.
However, I am thinking about creating a new microservice, an API gateway, that takes the RabbitMQ logic out of the UI and encapsulates it in a service, which would become the entry point to all the other services.
The benefit is that I would encapsulate the logic that gives access to the services and would also be able to add authentication in the API gateway.
However, I would need to use HTTP requests to interact with the gateway, since I am moving the messaging logic out of the UI. Could there be any major drawbacks to this approach?
I have been able to find examples about RabbitMQ and examples about API gateways, but never the two together; I might just be overthinking it a bit.
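What I have in mind is roughly this: the gateway accepts an HTTP request, publishes it to RabbitMQ with a reply queue and correlation id, and returns the eventual reply. A rough sketch with Express and amqplib, where the queue names and the omitted auth check are placeholders (and there is no timeout handling):

```ts
// api-gateway.ts
// HTTP in, RabbitMQ RPC out: the gateway publishes the request and waits for the reply
// on an exclusive reply queue, matched by correlationId. Names are placeholders.
import express from "express";
import amqp from "amqplib";
import { randomUUID } from "crypto";

async function main() {
  const conn = await amqp.connect("amqp://localhost");
  const ch = await conn.createChannel();
  const { queue: replyQueue } = await ch.assertQueue("", { exclusive: true });

  const pending = new Map<string, (body: string) => void>();
  await ch.consume(replyQueue, (msg) => {
    if (!msg) return;
    pending.get(msg.properties.correlationId)?.(msg.content.toString());
    pending.delete(msg.properties.correlationId);
    ch.ack(msg);
  });

  const app = express();
  app.use(express.json());
  // Authentication/authorization would be checked here before forwarding (omitted).
  app.post("/api/:service/:action", (req, res) => {
    const correlationId = randomUUID();
    pending.set(correlationId, (body) => res.json(JSON.parse(body)));
    ch.sendToQueue(
      `${req.params.service}.requests`,   // e.g. "orders.requests"
      Buffer.from(JSON.stringify({ action: req.params.action, payload: req.body })),
      { correlationId, replyTo: replyQueue }
    );
  });

  app.listen(8080, () => console.log("gateway listening on 8080"));
}

main().catch(console.error);
```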
I have a good understanding of SignalR hubs in a client/server scenario where both the client and server are tightly coupled.
Let's say I have a WCF service that receives an update from some external resource. That service could update the database with a new value. However, the client would need to be notified that an update has occurred. This could be handled through a service proxy that notifies the client (which sounds a bit like polling) or some cache resource.
I could create C#-based clients and connect all the nodes via SignalR hubs, but this creates a closed, non-distributed system.
A SignalR hub attached to a WCF service could use a .NET 4.5 asynchronous WCF service operation, so that a hub client would be notified of any service data changes.
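On the client side I'd expect the subscription to look something like this (a sketch using the @microsoft/signalr npm client, which targets ASP.NET Core SignalR rather than the .NET 4.5-era SignalR 2.x JavaScript client; the hub URL and the 'dataChanged' method name are hypothetical):

```ts
// notifications-client.ts
// Browser-side subscription to a SignalR hub; URL and method name are hypothetical.
import * as signalR from "@microsoft/signalr";

const connection = new signalR.HubConnectionBuilder()
  .withUrl("/hubs/dataChanges")       // hypothetical hub endpoint
  .withAutomaticReconnect()
  .build();

// Invoked by the server whenever the backend service records a new value.
connection.on("dataChanged", (entity: string, newValue: unknown) => {
  console.log(`server pushed update for ${entity}:`, newValue);
});

connection.start().catch((err) => console.error("hub connection failed:", err));
```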
I saw something similar in Push Notifications with NServiceBus and SignalR, but I'm not sure if this is an optimal production-level solution.
What other methods could be used in this scenario and how would they be implemented?
If you are not using push notifications directly to the client or some kind of long polling then it is pretty typical to communicate with clients on another channel altogether. Not knowing the business case, it is hard to tell what would be feasible. Usually this manifests itself in the form of SMS, push notifications to mobile, email, etc. This does not answer your question directly, but you may find that there is another way to achieve your goal.
I am looking at various options for a WCF-based publish/subscribe framework. Say I have one WCF web service that will be the publisher and 1000 clients registered as subscribers. For some published messages all clients will be interested, but I also want the ability to notify a single client with a specific message. On receiving a notification, the client will call other methods on the web service.
Is NServiceBus suitable for this kind of scenario?
If I use MSMQ for transport, does that mean a queue has to be created on every PC where the client is installed?
Some of the challenges include how you want the publisher to behave when a given subscribing client is down - do you want that message to be available when the subscriber comes back up? If so, then some kind of durable messaging is needed between them - like MSMQ.
Your question about notifying a single client: is that a result of a request sent by that client? If so, standard NServiceBus calls in the form of Bus.Reply will do it for you. When using WCF, if the response is to be asynchronous, you'll need to use callback contracts.
NServiceBus can do all the things you described, and it has the ability to automatically install MSMQ and create queues, which greatly simplifies client-side deployments.
You also have the ability with NServiceBus to expose messages over WCF, so you can support non-NServiceBus clients if you need to. It also has its own HTTP gateway and XSD schemas, which can allow clients on non-Windows platforms to interoperate even without using WCF.
Hope that answers your questions.