I have a small project in which tweets from Twitter are consumed by an application, put into a JMS queue on an ActiveMQ server, read by another application that enriches each tweet, and then returned to another JMS queue on the ActiveMQ server.
Currently, all the routing is defined within each application itself: the routing from Twitter into the JMS queue is defined in application A, and the routing from one JMS queue into another is defined in application B.
In my opinion this architecture seems wrong, since I cannot change any route without redeploying one of the applications.
What I want is a solution where I have some (micro)services, like a Twitter adapter and an enricher, which are independent of each other. But where do I put the routing information then? Somehow into ActiveMQ? Is there a way to configure the routes easily? Somehow this sounds like an ESB, doesn't it?
I think you can use a central Java component hosting Camel plus an embedded ActiveMQ server for integrating the various applications. This central component can host your microservices/enrichers as well as all the routes transferring data between applications.
Camel provides lots of components (VM, file, JMS, RMI, web services, etc.) which you can use as endpoints connecting to applications A/B. As for the Twitter feed, you can put the twitter adapter/listener into this central component so that it communicates with the Camel routes through a direct-vm endpoint.
This makes integration easier: all routes are maintained in one central place, and it decouples the MQ server, the microservices, and the applications.
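A minimal sketch of what such central routes could look like in Camel's Java DSL, assuming the camel-twitter and camel-activemq components are on the classpath; the search term, queue names, and the TweetEnricher bean are illustrative:

```java
import org.apache.camel.builder.RouteBuilder;

public class CentralRoutes extends RouteBuilder {
    @Override
    public void configure() {
        // Pull tweets and drop them onto the embedded broker
        // (twitter-search is the Camel 3.x component; credentials go on the component config)
        from("twitter-search:someKeyword")
            .to("activemq:queue:tweets.raw");

        // Pipe raw tweets through the in-JVM enricher, then onto the outgoing queue
        from("activemq:queue:tweets.raw")
            .to("direct-vm:enricher")
            .to("activemq:queue:tweets.enriched");

        // The enricher service itself, hosted in the same central component
        from("direct-vm:enricher")
            .bean(TweetEnricher.class); // hypothetical enrichment bean
    }
}
```

Because every route lives in this one component, changing the wiring means redeploying only the central integrator, not the producing or consuming applications.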
We are building a cloud-native enterprise business application on the .NET Core MVC platform. The default .NET Core API gateway between the frontend application and the backend microservices is Ocelot, used in async mode.
It has been suggested that we use the RabbitMQ message broker instead of Ocelot. The reasoning given for this shift is asynchronous request-response exchange between frontend and microservices. We would like to note that our application will have a few hundred cshtml pages spanning several frontend modules, and we expect over a thousand users using the application concurrently.
Our concern is whether this is the right suggestion. Our development team feels that we should continue using the Ocelot API gateway for general request-response exchange between frontend and microservices, and use RabbitMQ only for events that trigger background processing and respond after a delay when the job completes.
In case you feel that we can replace Ocelot, our further concern is reliable session-based request and response: we should not have to programmatically correlate responses to session requests. Please note that with RabbitMQ we are testing the .NET Core MassTransit library. The Ocelot API gateway is designed to handle session-based request-response communication.
With RabbitMQ, should we create a reply queue for each request, or should the client maintain a single reply queue for all requests? Should the reply queue be exclusive or durable?
Can a single reply queue per client serve all requests, or would it be better to create multiple receive endpoints based on application modules/cshtml pages to serve all our concurrent users efficiently?
Thanking you all, we eagerly wait for your replies.
I recommend implementing RabbitMQ; you might need to replace Ocelot with RabbitMQ.
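On the reply-queue question specifically: rather than declaring a reply queue per request, RabbitMQ's direct reply-to feature lets a client consume the pseudo-queue amq.rabbitmq.reply-to for all replies and correlate them by correlation id. A minimal sketch with the RabbitMQ Java client follows (MassTransit's request/response support builds on the same idea; the host, the rpc.requests queue, and the responder service are assumptions):

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.util.UUID;

public class RpcClient {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // broker host is an assumption
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            String correlationId = UUID.randomUUID().toString();

            // One pseudo-queue serves all replies on this channel; it is never
            // declared, is always exclusive to the consumer, and needs no cleanup.
            String replyQueue = "amq.rabbitmq.reply-to";
            channel.basicConsume(replyQueue, true, (consumerTag, delivery) -> {
                if (correlationId.equals(delivery.getProperties().getCorrelationId())) {
                    System.out.println("reply: " + new String(delivery.getBody()));
                }
            }, consumerTag -> { });

            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .correlationId(correlationId)
                    .replyTo(replyQueue)
                    .build();
            // Assumes some responder service consumes rpc.requests and replies
            channel.basicPublish("", "rpc.requests", props, "ping".getBytes());
            Thread.sleep(1000); // crude wait for the reply, for the sketch only
        }
    }
}
```

With this pattern a single client channel serves all of its requests, avoiding the cost of creating and deleting a queue per request.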
I'm developing a web application using Express.js and wanted to leverage the latest technology and architecture, i.e. Kafka, microservices, etc. The frontend is React and calls the backend microservices to retrieve data.
My current architecture consists of multiple services serving as REST API endpoints in the backend, such as a user service, account service, company service, etc.
All these services work well and fine, but now that I am introducing Kafka into the mix, I need to publish a 'new user' event when a client registers for an account: the user service publishes this event, but I now need the accounts service to consume it.
Should I create a new subscriber service to consume this event individually, connecting to the same DB as the account service (though doesn't this defeat the purpose of 1 database per service in a microservice architecture)? Or should the accounts service that is acting as a REST API endpoint also consume the Kafka event (doesn't this also complicate things when there are 20+ microservices, spending time checking which service is consuming which event)?
I'd like to know what the best approach is in this kind of situation.
In general, the microservices will have REST APIs for providing business/CRUD capabilities, and the Kafka broker will mostly be used for achieving eventual consistency and for triggering actions asynchronously (by dedicated Kafka consumers).
Now to your particular question -
Should I create a new subscriber service to consume this event individually, connecting to the same DB as the account service (though doesn't this defeat the purpose of 1 database per service in a microservice architecture)?
The microservices will have their own data stores, which may need to be consistent/in sync with data stores belonging to other microservices. You can create dedicated Kafka topics for the relevant events; for example, "User_Resource" could be a Kafka topic where you publish all the (CRUD) events related to the User resource. These topics can be subscribed to by other microservices, and the consumers will contain the logic to handle these events (update the account service database, trigger notifications to other downstream systems, etc.). This also creates a clean separation between CRUD and business services.
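For illustration, a minimal Java producer publishing a user event to such a topic with the plain kafka-clients library (the broker address, key, and JSON payload are assumptions):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class UserEventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by user id keeps all events for one user ordered on one partition
            producer.send(new ProducerRecord<>("User_Resource", "user-42",
                    "{\"event\":\"USER_CREATED\",\"userId\":\"user-42\"}"));
        }
    }
}
```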
Or should the accounts service that is acting as a REST API endpoint also consume the Kafka event (doesn't this also complicate things when there are 20+ microservices, spending time checking which service is consuming which event)?
A service which exposes a REST endpoint can also act as a Kafka producer/consumer. If your application is built using Spring Boot and the Spring Cloud framework, you can use spring-cloud-stream to handle the Kafka interactions in the simplest way. The services need not be bothered about the state of other services, as they are supposed to be independent.
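As a sketch of that spring-cloud-stream approach, the accounts service could consume the user events with a functional binding like this (the class, binding, and event names are illustrative; the topic is wired in via configuration):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;
import java.util.function.Consumer;

@SpringBootApplication
public class AccountsApplication {
    public static void main(String[] args) {
        SpringApplication.run(AccountsApplication.class, args);
    }

    // Bound to the topic via application.yml, e.g.:
    // spring.cloud.stream.bindings.userEvents-in-0.destination: User_Resource
    @Bean
    public Consumer<UserEvent> userEvents() {
        return event -> {
            // Update the accounts service's own datastore from the event
            System.out.println("received user event: " + event);
        };
    }

    // Hypothetical payload shape published by the user service
    public record UserEvent(String userId, String action) { }
}
```

The REST controllers of the same Spring Boot application are unaffected; the service simply gains a consumer role alongside its API role.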
As I have previously mentioned, I am using the ServiceStack Messaging API (IMessageQueueClient.Publish) as well as the more low-level IRedisClient.PublishMessage.
I use the Messaging API when I need a specific message/request to be processed by only one instance of a module/service, so even though I might have several modules running that all listen for MyRequest, only one service receives the message and processes it.
I use IRedisClient.PublishMessage when I do a broadcast, a pub/sub situation, sending a request that everyone listening on that specific Redis channel should receive.
However, I am in a situation where it would be useful to use the Messaging API but do a broadcast, so that all instances that are listening to a specific message type get the message, not just one.
(The reason for this is to streamline our usage of Redis and how we subscribe to events/requests, but I will not go into details about this now. A little more background on this is here.)
Is there a "broadcast way" for the Messaging API?
No, the purpose of ServiceStack Messaging is simply to invoke ServiceStack Services via MQ. Any other MQ features are outside the purpose and scope of ServiceStack MQ; you'd need to develop against the MQ provider APIs directly to access their broadcast features.
Server Events is a ServiceStack feature that supports broadcasting messages to subscribers of user-defined channels, but it's a completely different implementation that serves a different use case: sending "server push" real-time events over HTTP or gRPC. For example, it doesn't use MQ brokers, and pub/sub messages aren't persistent (i.e. only subscribers at the time messages are sent will receive them).
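Since the question already publishes through IRedisClient, "developing against the MQ provider APIs directly" here means plain Redis pub/sub. A minimal broadcast sketch in Java with the Jedis client (the library choice, host, and channel name are assumptions; every subscribed instance receives the message):

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class BroadcastDemo {
    public static void main(String[] args) throws Exception {
        // Each service instance runs a subscriber like this on the shared channel
        new Thread(() -> {
            try (Jedis subscriber = new Jedis("localhost", 6379)) {
                subscriber.subscribe(new JedisPubSub() {
                    @Override
                    public void onMessage(String channel, String message) {
                        System.out.println("got broadcast: " + message);
                    }
                }, "mychannel"); // blocks while subscribed
            }
        }).start();

        Thread.sleep(500); // give the subscriber time to attach (sketch only)
        try (Jedis publisher = new Jedis("localhost", 6379)) {
            // Fire-and-forget: only currently subscribed instances receive it
            publisher.publish("mychannel", "hello all instances");
        }
    }
}
```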
I basically have a smaller piece of software that uses the microservice architecture. I am currently using RabbitMQ for the communication between the UI and the services, and that works great.
However, I am thinking about creating a new microservice, an API gateway, that basically takes the RabbitMQ logic from the UI and encapsulates it into a service, which would become the entry point to all the other services.
The benefit is that I would encapsulate the logic that gives access to the services and also be able to add authentication in the API gateway.
However, I would need to use HTTP requests to interact with the API, as I am moving the messaging logic out of the UI. Could there be any major drawbacks to this approach?
I have been able to find examples about RabbitMQ and examples about API gateways, but never the two together; I might just be overthinking it a bit.
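For concreteness, the gateway being described amounts to an HTTP endpoint that republishes requests onto RabbitMQ. A minimal Java sketch using the JDK's built-in HttpServer and the RabbitMQ client (the path, queue name, and fire-and-forget 202 response are illustrative assumptions; authentication would sit in the handler):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class GatewayDemo {
    public static void main(String[] args) throws Exception {
        Connection conn = new ConnectionFactory().newConnection();
        Channel channel = conn.createChannel();
        channel.queueDeclare("orders", true, false, false, null);

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/orders", exchange -> {
            // Authentication/authorization checks would go here
            byte[] body = exchange.getRequestBody().readAllBytes();
            channel.basicPublish("", "orders", null, body); // forward onto the queue
            exchange.sendResponseHeaders(202, -1);          // 202 Accepted: work is async
            exchange.close();
        });
        server.start();
    }
}
```

The main drawback this surfaces is that a synchronous HTTP call now fronts an asynchronous queue, so the gateway must either answer 202 and let the client poll, or correlate replies before responding.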
I am experimenting with Mule API management these days. What I have come to know is that we can deploy our API to one of these:
A Mule Runtime
An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate the implementation of the API from its orchestration. What does that mean?
Can anyone please explain in detail?
Policy management from the API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (the current version is API Gateway 2.1.0, which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
- Proxy running on API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
- Proxy and implementation API running on the same API Gateway
- Implementation API managed directly from API Platform without using the autogenerated proxies
HTH :-)
I'm not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, as your API gateway application could simply proxy another Mule application elsewhere that performs the orchestration. But my understanding of the two options is:
The API Gateway is a limited offering that allows you to use a subset of Mule's connectors, transports, and modules, such as APIkit and HTTP. It lets you expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
Using the Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors etc., and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premise standalone instances, CloudHub, etc.
@Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering, you will lose out on the API management and governance functionality that the API Gateway provides out of the box.
These include:
- Lets you enforce runtime policies and collect data for analytics
- Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
- Extends PingFederate to serve as identity management and OAuth provider for your APIs
- Lets you require or restrict certain behaviors in a few simple steps
- Lets you add or remove policies at runtime with no API downtime
- Manages access to your API by issuing contract keys
- Monitors the API to confirm it is meeting all contract terms
- Ensures compliance with service level agreements (SLAs)
In my opinion, go with API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) fairly easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here