Should an API delegate all work to other services?

Suppose we have an API server with several endpoints that serve user requests. I have been wondering what a good trade-off is between implementing logic in the API server and delegating it to other microservices.
For example, suppose we want to fetch data from a database upon an API call. Should the database query:
Be performed by the API server itself?
Be delegated to a separate microservice that handles database queries?
Be delegated to something even simpler than a microservice, say a lambda function in the cloud?
Thanks for the help.

In the case of microservices, the service itself is the owner of its own data. This implies two things:
This service (call it ServiceA) is the only application which has direct access to its own data.
If ServiceB wants to perform any operation on this data, it has to do that via ServiceA's API (not directly via the database).
If ServiceB needs to retrieve the data frequently and the data is fairly static, then ServiceB could implement a local cache via replication. But the source of truth still remains with ServiceA.
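For illustration, a minimal sketch in Java of what this could look like on ServiceB's side, assuming a hypothetical REST endpoint on ServiceA (the base URL and record shape are made up):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// ServiceB-side client: reads ServiceA's data only through ServiceA's API,
// keeping a local read cache. ServiceA remains the source of truth.
public class ServiceAClient {
    private final HttpClient http = HttpClient.newHttpClient();
    private final Map<String, String> cache = new ConcurrentHashMap<>();
    private final String baseUrl; // e.g. "http://service-a.internal/api" (hypothetical)

    public ServiceAClient(String baseUrl) { this.baseUrl = baseUrl; }

    public String getRecord(String id) throws Exception {
        String cached = cache.get(id);
        if (cached != null) return cached; // serve fairly static data locally

        HttpRequest request = HttpRequest.newBuilder(
                URI.create(baseUrl + "/records/" + id)).GET().build();
        String body = http.send(request, HttpResponse.BodyHandlers.ofString()).body();
        cache.put(id, body); // replicate locally; invalidate on ServiceA's change events
        return body;
    }
}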

As far as I know, microservices split the problem up horizontally, while what you are talking about is splitting the problem up vertically. Of course, it is possible to combine both approaches and have multiple microservices, with each (or some) of them split into smaller services in different layers. This can be both an architectural and a scaling decision, so it is better to check the numbers: what kind of load you expect, what response time you need, and how much you want to spend on it, IMHO. So better not to solve problems which are non-existent at the moment; maybe you will have them in the future, or maybe not...

Related

Separate or Merge Kafka Consumer and API services together

After recently reading about event-based architecture, I wanted to change my architecture into one making use of such strengths.
I have two services that expose an API (crud, graphql), each based around a different entity and using a different database.
However, now whenever someone deletes a certain type of row in service A, I need to delete a coupled row in service B.
So I added Kafka to my design, and whenever I delete the entity in service A, it publishes a notification message into Kafka.
In service B I am currently consuming the same topic, so whenever a new message is received, the service also handles the deletion of the matching entity; it already has access to that table because the same service exposes the CRUD API to users.
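A simplified sketch of that consumer in Java (the topic name, group id, and deletion helper are placeholders):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Service B side: consume deletion events published by service A and
// delete the coupled row in service B's own table.
public class EntityDeletedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "service-b");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("entity-a-deleted")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    deleteCoupledRow(record.key()); // key = deleted entity's id
                }
            }
        }
    }

    static void deleteCoupledRow(String entityId) {
        // reuse the same data-access code the CRUD API already has
    }
}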
What I'm not sure about is whether putting the Kafka consumer and the API together in the same service is good design. It contradicts the single-responsibility point of microservices, and if there is an issue in one part of the service, it will likely affect the other.
However, creating a new service would also cause me issues: I would have two different services accessing the same table, and I would have to make sure I always maintain them together whenever making changes to the table or database.
What is the best practice in a situation such as this? Is it inevitable for different services to have data coupling, or is it not so bad to use the same service for two similar usages?
There is nothing wrong with using Kafka... You could do the same with point-to-point service communication (JSON-RPC / gRPC), however.
The real problem you seem to be asking about is dual writes or race conditions leading to data inconsistency.
While you could use a single consumer group and one topic partition to preserve order and locking across consumers interested in those events, that does not lock out other consumer groups from interacting with the database to perform the same action. Therefore, Kafka itself won't help with this problem.
You'll need external, distributed locks (e.g. ZooKeeper can be used here) that fence off your database clients while you are performing actions against the database.
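For example, with Apache Curator on top of ZooKeeper, a sketch of fencing a database action behind a distributed lock (the connection string and lock path are placeholders):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

// Fence off the database write behind a ZooKeeper-backed distributed lock,
// so only one client performs the delete at a time.
public class FencedDelete {
    public static void main(String[] args) throws Exception {
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        InterProcessMutex lock = new InterProcessMutex(client, "/locks/entity-1234");
        lock.acquire();
        try {
            // perform the database action while holding the lock
        } finally {
            lock.release();
        }
        client.close();
    }
}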
To the original question: Kafka Connect offers an API and is also a producer and consumer client (and would be recommended for database interactions). So are Confluent Schema Registry, ksqlDB, etc.
I believe that the consumer of your service B would not be considered "a service" or part of "the service", in the sense that it is not called as part of the code which serves requests. Yet it does provide functionality that is required for the domain function of your microservice. So yes, I would consider the consumer part of the microservice in terms of team/domain responsibility.
There may be different opinions on whether the consumer code should share the same code base/repo as the "service" code. Some people believe it is better to limit the repo scope to a single "executable"; others believe it is beneficial to keep the domain scope and have everything in a single repo. I probably belong to the latter group but do not have a very strong opinion on it. I would argue it is more important to have central documentation / a wiki for the domain that points to the repos involved, etc.

Need help in selecting the right design pattern

We are in the lead business. We capture leads and pass them on to clients based on some rules. Integration with each client varies in nature, e.g. the nature of the API, and in some cases data mapping is also required. We perform the following steps in order to route leads to the client:
Select the client
Check if any client-specific mapping (master data) is required.
Send the lead to the nearest available dealer (optional step)
Call the client API to send the lead
Update the push status of the lead in the database
Note that some of the steps can be optional.
Which design pattern would be suitable to solve this problem? The motive is to simplify the integration with each client.
You'll want to isolate (and preferably externalize) the aspects that differ between clients, like the data mapping and API, and generalize as much as possible. One possible force to consider is how easily new clients and their APIs can be accommodated in the future.
I assume you have a lot of clients, and a database or other persistent mechanism that holds this client list, so data-driven routing logic that maps leads to clients shouldn't be a problem. The application itself should be as "dumb" as possible.
Data mapping is often easily described with metadata, and is also easily data-driven. Mapping metadata is client-specific, so it could easily be kept in your database, associated with each client, in XML or some other format. If the transformations to leads necessary to conform to specific APIs are very complex, the logic could be isolated through the use of a strategy pattern, with the specific strategy selected according to the target client. If an extremely large number of clients and APIs need to be accommodated, I'd bend over backwards to make the API data-driven as well. If you have just a few client types (say, fewer than 20), I'd employ some distributed asynchronicity: have my application publish the lead and client info to a topic corresponding to the client type, have subscribed external processors specific to each client type do their thing and publish the results on another single queue, and have a consumer listening to the results queue update the database.
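A sketch of the strategy idea in Java, with made-up client types; each strategy encapsulates one client's mapping and API call, and the router picks a strategy from data:

import java.util.Map;

// One strategy per client type isolates API-specific transformation logic;
// the router stays "dumb" and just dispatches by client type.
interface ClientLeadStrategy {
    void sendLead(Lead lead);
}

class AcmeStrategy implements ClientLeadStrategy {
    public void sendLead(Lead lead) { /* map fields, call Acme's API */ }
}

class GlobexStrategy implements ClientLeadStrategy {
    public void sendLead(Lead lead) { /* different mapping, different API */ }
}

class LeadRouter {
    private final Map<String, ClientLeadStrategy> strategies;

    LeadRouter(Map<String, ClientLeadStrategy> strategies) {
        this.strategies = strategies;
    }

    void route(Lead lead, String clientType) {
        strategies.get(clientType).sendLead(lead); // data-driven selection
    }
}

record Lead(String name, String phone) {}

New client types then only require registering one more strategy; the routing code never changes.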
I will divide your problem statement into the three parts mentioned below:
1) Integration of the API with different clients.
2) Performing some steps in order to route leads to the client.
3) Updating the push status of the lead in the database.
Design patterns involved in the above three parts:
1) Integration of the API with different clients: integration with each client varies in nature, e.g. the nature of the API. It seems you have incompatible types of interfaces, so you should design this section using the Adapter design pattern (see the sketch after this list).
2) Performing some steps in order to route leads to the client: you have different steps of execution, and the next step is based on the previous steps. So you should design this section using the State design pattern.
3) Updating the push status of the lead in the database: this statement shows that you want a notification whenever the push status of the lead changes so that the information is updated in the database. So you should design this section using the Observer design pattern.
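A sketch of the adapter idea from part 1, in Java; VendorXApi stands in for a hypothetical client SDK with an incompatible interface:

// Common interface the routing code targets.
interface LeadClient {
    void pushLead(Lead lead);
}

// Hypothetical third-party SDK with an incompatible interface.
class VendorXApi {
    void submit(String payloadJson) { /* calls the vendor's endpoint */ }
}

// Adapter translates our Lead into the vendor's expected call.
class VendorXAdapter implements LeadClient {
    private final VendorXApi api = new VendorXApi();

    public void pushLead(Lead lead) {
        String json = "{\"name\":\"" + lead.name() + "\"}"; // client-specific mapping
        api.submit(json);
    }
}

record Lead(String name) {}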
Sounds like this falls into the workflow realm.
If you're on Amazon Web Services, there's SWF; otherwise, there are a lot of workflow solutions out there for your favorite programming language.

How to connect separate microservice applications?

I am building a huge application using a microservices architecture. The application will consist of multiple backend microservices (deployed on multiple cloud instances), some of which I would like to connect using REST APIs in order to pass data between them.
The application will also expose a public API for third parties, but the above-mentioned endpoints should be restricted ONLY to other microservices within the same application, creating some kind of private network.
So, my question is:
How to achieve that restricted api access to other microservices within the same application?
If there are better ways to connect microservices than using http transport layer, please mention them.
Please keep the answers server/language agnostic if possible.
Thanks.
Yeah, easy. Each client of a microservice has an API key. Microservices only accept requests from clients with a valid API key.
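A minimal sketch of that check in Java, using the JDK's built-in HTTP server; the header name and key values are placeholders:

import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.util.Set;

// Reject any request that does not carry a valid internal API key.
public class InternalApi {
    private static final Set<String> VALID_KEYS = Set.of("key-for-service-a"); // hypothetical

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/internal", exchange -> {
            String key = exchange.getRequestHeaders().getFirst("X-Api-Key");
            if (key == null || !VALID_KEYS.contains(key)) {
                exchange.sendResponseHeaders(403, -1); // forbidden: not a known service
                exchange.close();
                return;
            }
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
    }
}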
Also, it's good to know that REST is simply an architectural style that allows communication between bounded contexts.
It doesn't have to be over HTTP. The requirements are that it has a uniform interface (this is why HTTP is used, with its PUT, POST, GET, DELETE... methods) and that it is stateless (all state being transferred through a URI).
So if all your microservices run on the same box, all you need to do is something like this:
class SomeClass implements RestfulMethods {
    public function get($params)    { /* return something */ }
    public function post($params)   { /* add something */ }
    public function put($params)    { /* update something */ }
    public function delete($params) { /* delete something */ }
}
Microservices then communicate by interacting with the RestfulMethods implementations of other services.
But if your microservices are on different machines, it's probably best to use HTTP as the transport mechanism.
One way is to use HTTPS for internal MS communication. Lock down the access (using a trust store) to only your services. You can share a certificate among the services for backend communication, preferably a wildcard certificate. Then it should work as long as your services can be addressed under the same domain, like *.yourcompany.com.
Once you have it all in place, it should work fine. HTTPS sessions do imply some overhead, but that's primarily in the handshake process. Using keep-alive on your sessions, there shouldn't be much overhead with encrypted channels.
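For example, on the JVM you could point every service at a shared internal trust store via the standard system properties (paths and passwords below are placeholders):

// Point the JVM at an internal trust store that contains only your own CA,
// so HTTPS connections to anything outside *.yourcompany.com fail the handshake.
public class InternalTls {
    public static void main(String[] args) {
        System.setProperty("javax.net.ssl.trustStore", "/etc/pki/internal-truststore.jks");
        System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
        // any HTTPS client created after this point will only trust
        // certificates signed by the CAs in that store
    }
}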
Of course, you can simply add some credentials to your HTTP headers as well. That would be less secure.
A REST API is not the only way to do it. One of the ideas I have seen is the usage of a service registry like Eureka (Netflix), Zookeeper (Apache), and others.
Here is an example:
https://github.com/tiarebalbi/qcon2015-sao-paolo-microservices-workshop
...the above mentioned endpoints should be restricted ONLY to other
microservices within the same application...
What you are talking about in a broad sense is authorisation.
Authorisation is the granting or denying of "powers" or "abilities" within your application to authenticated users.
Therefore the job of any authorisation mechanism is to validate the "claim" implicit in any inbound API request - that the user is allowed to do the thing encoded in the request.
As an example, imagine I turned up at your API with a PUT request for Widget 1234:
PUT /widgetservice/widget/1234 HTTP/1.1
This could be interpreted as me (Bob Smith, a known user) making a claim that I am allowed to make changes to a widget in your system with id 1234.
Whatever you do to validate this claim, I hope you can see this needs to be done at the application level, rather than at the API level. In fact, authorisation is an application-level concern, rather than an API-level concern (unlike authentication, which is very much an API level concern).
To demonstrate, in our example above, it's theoretically possible that I'm allowed to create a new widget, but not to update an existing widget:
POST /widgetservice/widget/1234 HTTP/1.1
Or even that I'm allowed to update only widget 1234, and requests to change other widgets should not be allowed:
PUT /widgetservice/widget/5678 HTTP/1.1
How to achieve that restricted api access to other microservices
within the same application?
So this becomes a question about how you can build authorisation into your application so that you can validate individual requests coming from known users (in your case, the other services in your ecosystem are just another kind of known user).
Well, and apologies, but I'm going to be prescriptive here: you could use a claims-based authorisation service, which stores valid claims based on user identity or membership of roles.
It depends largely on how you are handling authentication and whether or not you are supporting roles as part of that process. You could store claims against individual users, but this becomes arduous as the number of users increases. OAuth, despite being pretty heavy to implement, is a leading platform for this.
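A bare-bones sketch of such a claims check in Java; the claim format and the in-memory map are stand-ins for whatever your identity provider actually issues:

import java.util.Map;
import java.util.Set;

// Claims-based check: look up the caller's claims and validate the one
// implicit in the request ("may update widget 1234").
public class ClaimsAuthorizer {
    // in practice this comes from your identity provider / token, not a map
    private final Map<String, Set<String>> claimsByUser = Map.of(
            "bob", Set.of("widget:create", "widget:update:1234"));

    public boolean isAllowed(String user, String action, String resourceId) {
        Set<String> claims = claimsByUser.getOrDefault(user, Set.of());
        return claims.contains(action + ":" + resourceId)
            || claims.contains(action); // blanket claim covers all resources
    }
}

With the claims above, isAllowed("bob", "widget:create", "9999") passes (blanket create claim), while isAllowed("bob", "widget:update", "5678") is rejected, matching the widget examples earlier.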
I am building huge application using microservices architecture
The only thing I will say here is read this first.
The easiest way is to only enable access from the IP addresses that your microservices are running on.
I know I'm super late to this question :)) but for anyone who comes across this thread, Kafka is a great option for operations similar to the one in this question.
Based on Kafka's own introduction:
Kafka is generally used for two broad classes of applications:
Building real-time streaming data pipelines that reliably get data between systems or applications
Building real-time streaming applications that transform or react to the streams of data
Side note: Kafka was created at LinkedIn and is being used in many huge companies, so it's kind of battle-tested.
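For example, publishing an event from one service is only a few lines with the Java client (the topic and event values are made up):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

// One service publishes an event; any interested service consumes it,
// so the producer never needs to know who its consumers are.
public class EventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("user-events", "user-42", "created"));
        }
    }
}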
You can use RabbitMQ: publish your requests to a queue and then consume the tasks.
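A minimal sketch with the RabbitMQ Java client (the queue name and payload are placeholders):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

// Publish a request to a durable queue; a worker consumes and processes it later.
public class TaskQueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare("tasks", true, false, false, null);
            channel.basicPublish("", "tasks", null, "do-something".getBytes());
        }
    }
}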

Two WCF servers vs. a WCF server with callback

I have got two applications that need to communicate via WCF,
called A and B:
A is supposed to push values to B for storage/update
B is supposed to push a list of the values stored in it to A
The senior programmer on my team wants to open a WCF server in A and another WCF server in B.
I claim that one should be the server and the other should be the client, using a server callback, in order to avoid splitting the interface in two, a circular dependency, and duplication of code and settings. He doesn't understand it. Can anyone help me explain to him why his solution is bad code?
It depends on your criteria. Let's assume a client/server model where A is the client and B is the server. You state that B should "push" data to A.
If you truly need push, then you should make B into a duplex server. This does put some strain on your bandwidth, so if you have a bandwidth restriction, this might not be the right choice.
If you can incur some delay at A, then you might want to opt for a polling mechanism of your own (maybe based on timing, or some other logic).
If both are not an option, you can try to swap roles: make B the client and A the server. It's less intuitive, but it might fit your scenario. If you can incur a delay on storing data, make B poll A for changes in the data and save at an interval.
If there can be no delay on either side and bandwidth is limited, you do end up with two WCF services. Although it may look silly at first glance, keep in mind they are services, not servers. It does make things a bit more complex, so I would keep it as a last resort.
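To illustrate the polling option schematically (sketched in Java rather than WCF; the interval and the "get changes" call are placeholders):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Instead of a duplex callback, the client polls the server at whatever
// interval its tolerated delay allows.
public class PollingClient {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            // call the server's "get changes since X" operation and apply the delta
        }, 0, 5, TimeUnit.SECONDS);
    }
}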
A service should encapsulate a set of functionality that other applications can consume. All it does is wait for and respond to requests from other components; it doesn't initiate actions by itself.
If Application B is storing data, then it can of course be provided to Application A as a service. It provides the "service" of storing data without application A having to worry about how or where, and returns successfully stored data. This is exactly the kind of thing that WCF Services are meant to handle.
I am assuming that application A is the one initiating the requests (unless you have an unmentioned third application, one of them must be the initiator). If application A is initiating actions (for example, it has a UI, or is triggered to do some batch processing, etc.), then it should not be modeled as a "service".
I hope that helps :)

Best practice to handle large WCF service

I'm working on a 4-player network game in WPF and learning WCF in the process. So far, to handle the network communication, I've followed the advice from the YeahTrivia game on Coding4Fun: I use a dualHttpBinding and a CallbackContract interface to send messages back to clients. It works pretty well.
However, my service is getting very large. It has methods/callbacks to handle the game itself, but also the chat system, the login/registration process, the matchmaking, the roster/player info, etc. Both the server and the client are becoming hard to maintain because everything is tied into a single interface. On the client, for example, I have to redirect the callbacks to the game page, the lobby page, etc., and I find that very tedious. I'd prefer being able to handle the game callbacks on the game page, the chat callbacks on the chat window, and so on.
So what's the best way to handle that? I've thought of many things but am not sure which is best: splitting the service into multiple services, having multiple "endpoints" on my service, or some other trick to implement a service partially where appropriate.
Thanks
You should have multiple components, each of which should be limited to one responsibility: not necessarily one method, but handling the state for one of the objects you're dealing with. When you have everything in one service, your service is incredibly coupled to itself. Optimally, each component should be as independent as possible.
I'd say start with splitting it up where it makes sense and things should be MUCH more manageable.
I would support Terry's response - you should definitely split up your big interface into several smaller ones.
Also, you could possibly isolate certain operations, like the registration and/or login process, into simpler services. Not knowing anything about your game, I think this could well be a simple non-duplex service that e.g. provides a valid "player token" as its output, which can then be used by the other services to authenticate the players.
Multiple smaller, leaner interfaces also give you the option to potentially create separate, dedicated front-ends (e.g. in Silverlight or something) that would target / handle just certain parts of the whole system.
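To make the split concrete, a schematic sketch (written as plain Java interfaces for brevity; in WCF each would be its own ServiceContract):

// Instead of one huge contract, split by concern; the client can then wire
// each callback to the page that actually handles it.
interface GameService {
    void submitMove(String playerToken, String move);
}

interface ChatService {
    void sendMessage(String playerToken, String text);
}

interface LobbyService {
    String login(String user, String password); // returns a player token
    void joinMatch(String playerToken, String matchId);
}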
Marc