Confusion over aggregator service in microservices pattern - asp.net-core

I need to create a service to collect and consolidate events from other services. From what I found on the internet, an aggregator service helps you find out what's going on in the application flow. I have some confusion here which needs your help: does an aggregator microservice mean that any input or output from a service should be sent to the aggregator service with a time and date? But in the cloud we also have services like Application Insights; doesn't that do the same thing? And even if we store every event, it's going to be a huge amount of data in the database. Is it really the best solution?

Answering your first question:
Does an aggregator microservice mean that any input or output from a service should be sent to the aggregator service with a time and date?
Not really. The Aggregator Microservice is a pattern: basically another service that receives a request, subsequently makes requests to multiple different services, combines the results, and responds to the initiating request.
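For illustration, here is a minimal, hypothetical ASP.NET Core sketch of that pattern. The downstream URLs, DTO shapes, and route names are all made up, and it assumes `services.AddHttpClient()` is registered:

```csharp
using Microsoft.AspNetCore.Mvc;
using System.Net.Http.Json;

// Hypothetical aggregator endpoint: it fans out to two downstream
// services, combines their results, and returns a single response.
[ApiController]
[Route("api/order-details")]
public class OrderAggregatorController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public OrderAggregatorController(IHttpClientFactory httpClientFactory)
        => _httpClientFactory = httpClientFactory;

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(int id)
    {
        var client = _httpClientFactory.CreateClient();

        // Call both downstream services concurrently (URLs are placeholders).
        var orderTask    = client.GetFromJsonAsync<OrderDto>($"http://order-service/api/orders/{id}");
        var customerTask = client.GetFromJsonAsync<CustomerDto>($"http://customer-service/api/customers/by-order/{id}");
        await Task.WhenAll(orderTask, customerTask);

        // Combine the two results into one response for the caller.
        return Ok(new { Order = orderTask.Result, Customer = customerTask.Result });
    }
}

public record OrderDto(int Id, decimal Total);
public record CustomerDto(int Id, string Name);
```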
So I guess what you're looking for is a log aggregator: software that consolidates log data from across the IT infrastructure into a single centralized platform where it can be reviewed and analyzed.
But in the cloud we also have services like Application Insights; doesn't that do the same thing? Yes, you could say it's a similar service.
Even if we store every event, it's going to be a huge amount of data in the database; is it really the best solution? Leave this to your log aggregator tool; it will have a proper mechanism for keeping your data. Most tools store the data in a compact form and index it properly.

Related

Separate or Merge Kafka Consumer and API services together

After recently reading about event-based architecture, I wanted to change my architecture into one that makes use of its strengths.
I have two services that expose an API (crud, graphql), each based around a different entity and using a different database.
However, now whenever someone deletes a certain type of row in service A, I need to delete a coupled row in service B.
So I added Kafka to my design, and whenever I delete the entity in service A, it publishes a notification message into Kafka.
In service B I am currently consuming the same topic, so whenever a new message is received the service also handles the deletion of the matching entity, because it already has access to that table, since the same service already exposes the CRUD API to users.
What I'm not sure about is whether putting the Kafka consumer and the API together in the same service is a good design. It contradicts the single-responsibility principle in microservices, and if there is an issue in one part of the service, it will likely affect the other.
However, creating a new service would also cause me issues: I would have two different services accessing the same table, and I would have to make sure I always maintain them together whenever making changes to the table or database.
What is the best practice in a situation such as this? Is it inevitable for different services to have data coupling, or is it not so bad to use the same service for two similar usages?
There is nothing wrong with using Kafka; you could, however, do the same with point-to-point service communication (JSON-RPC / gRPC).
The real problem you seem to be asking about is dual writes, or race conditions leading to data inconsistency.
While you could use a single consumer group and one topic partition to preserve ordering and locking across the consumers interested in those events, that does not lock out other consumer groups from interacting with the database to perform the same action. Therefore, Kafka itself won't help with this problem.
You'll need external, distributed locks (e.g. ZooKeeper can be used here) that fence off your database clients while you are performing actions against the database.
To the original question: Kafka Connect offers an API and is also a producer and consumer client (and would be recommended for database interactions); the same goes for Confluent Schema Registry, ksqlDB, etc.
I believe the consumer in your service B would not be considered "a service", or part of "the service", in the sense that it is not called as part of the code which serves requests. Yet it does provide functionality that is required for the domain function of your microservice. So yes, I would consider the consumer part of the microservice in terms of team/domain responsibility.
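As a concrete (and entirely hypothetical) sketch of that arrangement, the consumer can live in the same ASP.NET Core process as the API, hosted as a background service using the Confluent.Kafka client. Topic, group, and broker names below are placeholders:

```csharp
using Confluent.Kafka;
using Microsoft.Extensions.Hosting;

// Hypothetical: the Kafka consumer is hosted inside the same process
// (service B) that exposes the CRUD API, registered with
// builder.Services.AddHostedService<EntityDeletedConsumer>();
public class EntityDeletedConsumer : BackgroundService
{
    protected override Task ExecuteAsync(CancellationToken stoppingToken) =>
        Task.Run(() =>
        {
            var config = new ConsumerConfig
            {
                BootstrapServers = "localhost:9092",
                GroupId = "service-b",   // one group, so each event is handled once
                AutoOffsetReset = AutoOffsetReset.Earliest
            };

            using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
            consumer.Subscribe("entity-deleted");

            try
            {
                while (!stoppingToken.IsCancellationRequested)
                {
                    var result = consumer.Consume(stoppingToken);
                    // The message value carries the id of the row deleted in
                    // service A; remove the coupled row in service B's table.
                    DeleteCoupledRow(result.Message.Value);
                }
            }
            catch (OperationCanceledException)
            {
                // normal shutdown
            }
        }, stoppingToken);

    private void DeleteCoupledRow(string entityId)
    {
        // Placeholder for the actual delete against service B's database.
    }
}
```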
There may be different opinions on whether the consumer code should share the same code base/repo as the "service" code. Some people believe it is better to limit the repo scope to a single "executable"; others believe it is beneficial to keep the domain scope and have everything in a single repo. I probably belong to the latter group, but I do not have a very strong opinion on it. I would argue it is more important to have central documentation / a wiki for the domain that points to the repos involved, etc.

NServiceBus and WCF, how do they get along?

Simplified... We are using NServiceBus for updating our storage.
In our sagas we first read data from our storage, update the data, and put it back into storage. The NServiceBus instance is self-hosted in a Windows service. Calls to storage are separated into their own assembly ('assembly1').
Now we will also need synchronous reads from our storage through WCF. In some cases these will be the same reads that were needed when updating in sagas.
I have my opinion quite clear but maybe I am wrong and therefore I am asking this question...
Should we set up a separate WCF service that is using a copy of 'assembly1'?
Or, should the WCF instance host nservicebus?
Or, is there even a better way to do it?
Right now it is, in a way, two endpoints: WCF for the synchronous calls and the Windows service that hosts NServiceBus (which already exists).
Nothing in your question or comments gives me a reason to separate into two distinct endpoints. It sounds like you are describing a single logical service, and my default position would be to host each logical service in a single process. This is usually the simplest approach, as it makes deployment and troubleshooting easier.
Edit
Not sure if this is helpful, but my current client runs NSB in an IIS-hosted WCF endpoint. So commands are handled via NSB messages, while queries are still exposed via WCF. To date we have had no problems hosting the two together in a single process.
Generally speaking, a saga should only update its own state (the Data property) and send messages to other endpoints. It should not update other state or make RPC calls (like to WCF).
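To make that concrete, here is a minimal, hypothetical saga along those lines (the message and data types are made up, and it uses the newer async NServiceBus handler API rather than the version from the question's era): it updates only its own Data and delegates the storage update to another endpoint via a message, rather than calling storage or WCF directly.

```csharp
using NServiceBus;
using System.Threading.Tasks;

public class OrderSagaData : ContainSagaData
{
    public string OrderId { get; set; }
    public bool PaymentReceived { get; set; }
}

public class OrderSaga : Saga<OrderSagaData>,
    IAmStartedByMessages<OrderPlaced>,
    IHandleMessages<PaymentReceived>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<OrderSagaData> mapper)
    {
        mapper.ConfigureMapping<OrderPlaced>(m => m.OrderId).ToSaga(s => s.OrderId);
        mapper.ConfigureMapping<PaymentReceived>(m => m.OrderId).ToSaga(s => s.OrderId);
    }

    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        Data.OrderId = message.OrderId;     // updates only the saga's own state
        return Task.CompletedTask;
    }

    public Task Handle(PaymentReceived message, IMessageHandlerContext context)
    {
        Data.PaymentReceived = true;
        // Delegate the storage update to another endpoint via a message
        // instead of calling the storage assembly (or WCF) directly.
        return context.Send(new UpdateOrderStorage { OrderId = Data.OrderId });
    }
}

public class OrderPlaced : IEvent { public string OrderId { get; set; } }
public class PaymentReceived : IEvent { public string OrderId { get; set; } }
public class UpdateOrderStorage : ICommand { public string OrderId { get; set; } }
```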
Before giving more specific recommendations, it would be best to understand more about the specific responsibilities of your saga and the data being updated by 'assembly1'.

NServiceBus hierarchy and structure

I am just starting out learning NServicebus (and SOA in general) and have a few questions and points I need clarification on regarding how the solution is typically structured and common best practices:
The documentation doesn't really explain what an endpoint is. From what I gather it is a unit of deployment, and your service will have one or more endpoints. Is this correct?
Is it considered best practice to have one VS solution per service you are developing? With a project for messages, then a project for each endpoint, and finally a shared project containing your domain layer, referenced by the endpoints?
From what I read, services are usually comprised of individual components. Can (or should) any component in the service access the same database, or should it be one database per component?
Thanks for any clarification or insight one can offer.
I will try to answer your questions the best I can...
I'm not sure about the term "best practices"; I would rather use the term "best thinking" or "paradigm".
Q1: Yes, an endpoint is effectively a deployed process that consumes the messages from its queue.
It does not have to belong to a single (logical) "service" (in the case of a web endpoint, for example), and an endpoint can have one or more handlers deployed to it, as in the sketch below.
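A minimal, hypothetical example of that (message names are made up, using the newer async handler signature): one endpoint, meaning one process with one input queue, hosting two handlers, each picking up its own message type.

```csharp
using NServiceBus;
using System.Threading.Tasks;

// Two handlers deployed to the same endpoint; NServiceBus dispatches
// each message type from the endpoint's queue to the matching handler.
public class SubmitOrderHandler : IHandleMessages<SubmitOrder>
{
    public Task Handle(SubmitOrder message, IMessageHandlerContext context)
    {
        // business logic for SubmitOrder goes here
        return Task.CompletedTask;
    }
}

public class CancelOrderHandler : IHandleMessages<CancelOrder>
{
    public Task Handle(CancelOrder message, IMessageHandlerContext context)
    {
        // business logic for CancelOrder goes here
        return Task.CompletedTask;
    }
}

public class SubmitOrder : ICommand { public string OrderId { get; set; } }
public class CancelOrder : ICommand { public string OrderId { get; set; } }
```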
Q2: I would go with one solution (and later one repo) per logical domain service. Inside a solution I would create a project per message handler, because as you scale you will need to move your handlers between endpoints, or to their own endpoints, depending on scale. Messages, however, are contracts, so I would put them in their own solution(s), maybe split into commands and events. You may consider something like NuGet to publish your message packages.
Q3: A "service" is a logical composition of autonomous components, each a vertical slice of functionality. They can share the same database, but I would say that only one component has the authority to modify its own data. I would always try to think about what would happen when you need to scale.
Does this make sense?

Best way to queue WCF requests so that only one is processed at a time

I'm building a WCF service to handle all QuickBooks SDK functionality for two companies. Since the QuickBooks SDK needs to open/close the actual QuickBooks application to process a request, only one can be handled at a time or QuickBooks goes into a really bad state. I'm looking for the best way to allow end users to make a QuickBooks data request, and have my WCF application hold that request until the previous request is completed.
If nothing is currently being processed, then the request will go through immediately.
Does anyone know of the best method to handle that type of functionality? Any third-party or built-in .NET libraries?
Thanks!
Use WCF throttling. It's configurable and will solve your problem without code changes.
See my answer for WCF ConcurrencyMode Single and InstanceContextMode PerCall.
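For reference, a hypothetical sketch of what those two suggestions look like together: per-call instancing with single-threaded concurrency, plus a throttle of one concurrent call so requests queue up and only one is dispatched at a time. The service and contract names are made up, and the throttle shown programmatically here can equally be set in config via <serviceThrottling>, as the answer above suggests:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Description;

[ServiceContract]
public interface IQuickBooksService
{
    [OperationContract]
    string ProcessRequest(string request);
}

// ConcurrencyMode.Single: only one thread per service instance at a time.
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall,
                 ConcurrencyMode = ConcurrencyMode.Single)]
public class QuickBooksService : IQuickBooksService
{
    public string ProcessRequest(string request)
    {
        // Only one call is in flight at a time because of the throttle below.
        return "done";
    }
}

public static class Program
{
    public static void Main()
    {
        var host = new ServiceHost(typeof(QuickBooksService),
            new Uri("http://localhost:8080/quickbooks"));
        host.AddServiceEndpoint(typeof(IQuickBooksService), new BasicHttpBinding(), "");

        // The throttle serializes requests across all callers.
        host.Description.Behaviors.Add(new ServiceThrottlingBehavior
        {
            MaxConcurrentCalls = 1,
            MaxConcurrentInstances = 1,
            MaxConcurrentSessions = 1
        });

        host.Open();
        Console.ReadLine();
        host.Close();
    }
}
```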
One way to do this is to place a queue between the user and the QuickBooks application:
The request from the user is placed in a queue or data table.
A background process reads one item at a time out of the queue, sends it to QuickBooks, and places the result in a result table.
The client application reads the result from the result table.
This requires some work, but the user will always be able to submit requests and only one will be processed at a time. A sketch of this shape follows.
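Here is a minimal in-process sketch of that idea, using a BlockingCollection as the queue and a single background thread as the reader. All type names are made up, and in practice a durable queue or database table would replace the in-memory collection:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Requests are enqueued, and a single background worker drains the queue
// one item at a time, so QuickBooks only ever sees one request in flight.
public class QuickBooksRequestQueue
{
    public record QuickBooksRequest(Guid Id, string Payload);

    private readonly BlockingCollection<QuickBooksRequest> _queue = new();

    public QuickBooksRequestQueue()
    {
        var worker = new Thread(() =>
        {
            // GetConsumingEnumerable yields one item at a time and blocks
            // while the queue is empty.
            foreach (var request in _queue.GetConsumingEnumerable())
            {
                var result = SendToQuickBooks(request);  // strictly one at a time
                StoreResult(request.Id, result);         // the client polls for this
            }
        })
        { IsBackground = true };
        worker.Start();
    }

    public void Enqueue(QuickBooksRequest request) => _queue.Add(request);

    private object SendToQuickBooks(QuickBooksRequest request) => new object(); // placeholder
    private void StoreResult(Guid id, object result) { }                        // placeholder
}
```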
The solution given by ErnieL will also work if you use ConcurrencyMode.Single, but in heavy-load scenarios the users will get timeouts.

Can having more WCF methods in a service decrease performance?

What is the best practice for designing WCF services with regard to having more or fewer operations in a single service?
Taking into consideration that a service must be generic and business oriented, I have encountered some SOAP services at work that have too many XML elements per operation in their contracts and too many operations in a single service.
From my point of view, without testing, I think the number of operations within a service will not have any impact on performance in the middleware, since a response is built specifically for each operation, containing only the XML elements concerning that operation.
Or are there any issues with having too many operations within a SOAP service?
There is an issue when trying to do a metadata exchange or proxy creation against a service with many methods (probably in the thousands): since it will be trying to process the entire thing at once, it could time out, or even hit an OutOfMemory exception.
I don't think it will impact performance much, but the important thing is that methods must be logically grouped into different services. A service with a large number of methods usually means it is not logically factored.
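To illustrate the factoring point, a hypothetical example of splitting one oversized contract into smaller, cohesive service contracts (all names are made up):

```csharp
using System.ServiceModel;

// Instead of one contract with hundreds of loosely related operations,
// group them into smaller, logically cohesive services.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);

    [OperationContract]
    void SubmitOrder(OrderDto order);
}

[ServiceContract]
public interface ICustomerService
{
    [OperationContract]
    CustomerDto GetCustomer(int customerId);
}

public class OrderDto { public int Id { get; set; } }
public class CustomerDto { public int Id { get; set; } }
```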