Let's suppose I have two services in a microservices architecture:
OrderService
CustomerService
Each service has its own database. My frontend makes a request to /api/order/25, and along with the order I want to return the CustomerName as well, but CustomerName lives in a different database. So what is the best, most microservices-friendly approach for fetching the CustomerName along with the Order data?
In this case, I suggest creating an aggregator service or a backend-for-frontend (BFF).
This service fetches data from both services and aggregates it for your frontend (or for your client).
This way, your OrderService doesn't need to depend on CustomerService just to get a name, and your client doesn't need to make another call and do the aggregation itself.
With a BFF:
You can add an API tailored to the needs of each client, removing a lot of the bloat caused by keeping it all in one place.
Frontend requirements are separated from backend concerns, which makes maintenance easier.
The client application knows less about your APIs' structure, which makes it more resilient to changes in those APIs.
However, all of these microservices patterns come with trade-offs, and BFFs are no exception. So always keep in mind that:
A BFF is a translation/aggregation layer between the client and the services. Its purpose is to take the data returned by the service APIs and transform it into the shape the client application expects.
Avoid over-depending on the BFF and don't put application logic in this layer. As noted above, it's just a translator.
Implement a resilient design with timeouts, since this aggregator calls other services to get its data. If one or more service calls take too long, it should time out and return a partial set of data. Consider how your application will handle this scenario.
Monitor your aggregator and its child service calls. Implement distributed tracing using correlation IDs to track each call.
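The timeout-with-partial-data idea above can be sketched as follows. This is a minimal illustration, not a real implementation: the two fetch methods stand in for HTTP calls to OrderService and CustomerService, and the names, the 500 ms timeout, and the "unavailable" fallback are all assumptions made up for the example.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class OrderBff {

    // Stand-in for an HTTP call to OrderService.
    static CompletableFuture<String> fetchOrder(int id) {
        return CompletableFuture.supplyAsync(() -> "order-" + id);
    }

    // Stand-in for an HTTP call to CustomerService.
    static CompletableFuture<String> fetchCustomerName(int customerId) {
        return CompletableFuture.supplyAsync(() -> "Alice");
    }

    // Aggregate both calls; if CustomerService is too slow, time out
    // and return partial data instead of failing the whole request.
    public static Map<String, String> getOrderView(int orderId, int customerId) {
        CompletableFuture<String> order = fetchOrder(orderId);
        CompletableFuture<String> name = fetchCustomerName(customerId)
                .orTimeout(500, TimeUnit.MILLISECONDS)
                .exceptionally(ex -> "unavailable"); // partial result on timeout
        return Map.of("order", order.join(), "customerName", name.join());
    }

    public static void main(String[] args) {
        System.out.println(getOrderView(25, 7));
    }
}
```

The key design point is that a slow or failing downstream call degrades the response rather than propagating the failure to the client.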
You have multiple options here. For example:
Duplicate some Customer data in the OrderService database. You could, for example, save a small subset of the entity as a separate table in the OrderService db, such as CustomerId and CustomerName. This works very well when you have a lot of traffic on the /api/order/{id} endpoint but the number of Customers is small, so duplicating some Customer data in your OrderService db is not very expensive. This option works especially well if you have a message queue for communication between microservices which publishes event messages when the Customer data changes in CustomerService; that way you can update the Customer data in the OrderService db and stay up to date with the source of truth. If you do not have a message queue, this option is less attractive, because you would need to periodically poll CustomerService for changes. That creates additional load on CustomerService, and the Customer data in the OrderService db would be inconsistent for some period. The good thing about this option is that even if you have a lot of calls to /api/order/{id}, CustomerService is not affected at all by that load, because the data is already in the OrderService db.
Call CustomerService from OrderService when you need that data. In this option, every time you need the CustomerName or other CustomerService data, you call CustomerService over some API. For example, when someone calls your /api/order/{id} endpoint, you would call the CustomerService API from your OrderService. This option works well if you do not have a lot of calls to this endpoint and the load on CustomerService is generally small, so the additional calls are not problematic. It becomes a problem if you have a lot of calls to /api/order/{id}, because that load is delegated to CustomerService as well.
Create a third microservice which aggregates data from both CustomerService and OrderService. In this option you have another, read-only microservice with its own database, which holds data from both services. This can be useful if your system has to handle a lot of load, in particular on /api/order/{id} or other endpoints that aggregate multiple entities from multiple microservices. If you want to separate this concern and scale it independently, you can create a microservice like customer-order-service and duplicate data from CustomerService and OrderService into it. As in option 1, you would only duplicate the particular fields/properties of the entities that you need. With this option you could even use a denormalized data structure and store the combined entities in one table/collection (if you need it). You can even pick a data-storage technology that fits your queries better, such as Elasticsearch if you have full-text-search requirements. This special service can be shaped to fit your needs; examples of this go in the direction of the query/read side of CQRS. There are a lot of options here, but this is an approach for a specific use case, so you need to carefully review your business requirements and adjust it to your needs.
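The event-driven duplication in option 1 can be sketched like this. The event type and the in-memory map are illustrative stand-ins (not a real API) for a message-queue consumer and the duplicated table in the OrderService database.

```java
import java.util.HashMap;
import java.util.Map;

public class CustomerReplica {

    // Event published by CustomerService when a customer changes.
    record CustomerChangedEvent(long customerId, String customerName) {}

    // Local table in the OrderService db: just the duplicated subset.
    private final Map<Long, String> customerNameById = new HashMap<>();

    // Called by the message-queue consumer for each event received.
    public void onCustomerChanged(CustomerChangedEvent e) {
        customerNameById.put(e.customerId(), e.customerName());
    }

    // Used when serving /api/order/{id}: no call to CustomerService needed.
    public String customerName(long customerId) {
        return customerNameById.getOrDefault(customerId, "unknown");
    }
}
```

Serving reads from the local copy is what keeps the load on /api/order/{id} away from CustomerService.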
Conclusion
Usually you will be fine with option 1 or option 2, but it is good to know that you can also go with option 3 and have a special read-only/query-only service that serves your particular needs. Keep in mind that the way your services communicate with each other also has an impact on which of these options fit best.
Related
I am working on two different services:
The first one handles all of the write operations through a REST API, it contains all of the required business logic to maintain data in a consistent state, and it persists entities on a database. It also publishes events to a message broker when an entity is changed (creation, update, deletion, etc). It's structured in a DDD fashion.
The second one only handles reads, also with a REST API. It subscribes to the same message broker in order to process the events published by the first service, then it saves the received data to an in-memory database for fast reads.
Nothing fancy, just CQRS with eventual consistency.
For the first service, I had a clear mind on how to structure the application:
I have the domain package with subpackages for each different aggregate. Each aggregate has its own domain objects, and its own repository interface.
I have the application package with different application services, and they basically just orchestrate the domain objects and call repositories to persist/update data, and the event publisher to publish domain events. The event publisher interface is also in this package.
I have the infrastructure package, which includes a persistence package, where the repository implementations reside, and a messaging package, where the event publisher implementation resides.
Finally, the interfaces package is where I keep the controllers/handlers for the REST API.
For the second service, I'm very unsure on how to structure it. My doubts are the following:
Should I use the repository pattern? To be fair, it seems redundant and not very useful in this scenario. There are no domain objects or rules here, because the data to be saved/updated has already been validated by the first service.
If I avoid using the repository pattern, I suppose I'd have to inject the database client in my application service, and access the data directly. Is this a good practice? If yes, where would the returned objects fit? Would they also be part of the application layer?
Would it make sense to skip the application service entirely and inject the database client straight up in the controller/handler? What if the queries are a bit complicated? This would pollute the controllers with a lot of db logic, making it harder to switch implementations (there would be no interface in this case).
What do you think?
The Query side will only contain the methods for getting data, so it can/should be really simple.
You are right, an abstraction on top of your persistence like a repository pattern can feel redundant.
You can actually call the database in your controller. Even when it comes to testing, on the query side you basically only need integration tests against the actual database; unit tests won't test much.
On the other hand, it can make sense to wrap the database calling logic in a query service similar to a repository. You would inject only that query service interface in your controller, which should use your ubiquitous language! You would have all the db logic in this query service and keep the db complexity there, while keeping the controller really simple.
You can avoid complex queries by having multiple read models based on your events depending on your needs.
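The query-service idea above can be sketched as follows. OrderSummary, the method name, and the in-memory read model are all made-up examples; a real implementation would run SQL (or query the read database) behind the same interface, and the controller would be given only the interface.

```java
import java.util.List;

public class OrderQueries {

    // Read model shaped for the client, named in the ubiquitous language.
    public record OrderSummary(long orderId, String customerName) {}

    // The only thing the controller sees.
    public interface OrderQueryService {
        List<OrderSummary> openOrdersFor(String customerName);
    }

    // Simple implementation over an in-memory read model; a real one
    // would hold the db client and keep all query complexity here.
    static class InMemoryOrderQueryService implements OrderQueryService {
        private final List<OrderSummary> readModel;

        InMemoryOrderQueryService(List<OrderSummary> readModel) {
            this.readModel = readModel;
        }

        public List<OrderSummary> openOrdersFor(String customerName) {
            return readModel.stream()
                    .filter(o -> o.customerName().equals(customerName))
                    .toList();
        }
    }
}
```

Swapping the in-memory implementation for a database-backed one changes nothing in the controller, which is exactly the flexibility the repository-like abstraction buys you.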
I am designing an e-commerce application with a microservice approach, using an ORM (JPA) for data persistence in one of the microservices, named OrderService. OrderService owns the functionality related to persisting and reporting orders, which essentially includes customer and product information. Customer and product functionality is managed by different microservices.
My question is: at the ORM layer, OrderService needs POJOs that belong to ProductService and CustomerService. What is the best way to deal with this dependency between services? Should the application be designed in a different way?
There are a few things to take into consideration when trying to find a solution:
1. You cannot access the database of other services; you have to make a call.
2. You should try not to keep data from other services in yours. Data duplication leads to an inconsistent state and should be avoided if you can.
3. You should have a means to query data from other services when asked for it.
With those points in mind, I would mostly restrict data from other services to some reference IDs (which should be immutable). At the ORM layer I would fetch just the reference IDs and then hydrate them by making an API call to the concerned services (at the business layer).
You may realize that you are making far too many calls, say to fetch a customer name from CustomerService by customer ID. If that is the case, you may consider saving some of this information in your own system. But be cautious: the data you save should not be volatile, and make sure you have done due diligence before making that call.
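The reference-ID-plus-hydration approach can be sketched like this. Order, OrderView, and CustomerClient are hypothetical types invented for the example; a real CustomerClient would wrap an HTTP client for the CustomerService API.

```java
public class OrderEnricher {

    // What the ORM persists: only an immutable reference ID, no customer data.
    record Order(long orderId, long customerId) {}

    // What the API returns to callers, after hydration.
    record OrderView(long orderId, String customerName) {}

    // Hypothetical REST client for CustomerService.
    interface CustomerClient {
        String customerName(long customerId);
    }

    // Business layer: hydrate the reference ID via a service call.
    static OrderView enrich(Order order, CustomerClient customers) {
        return new OrderView(order.orderId(),
                customers.customerName(order.customerId()));
    }
}
```

Because CustomerClient is an interface, tests can substitute a stub while production code calls the real service.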
Recently I have gone through many design principles of microservices and realized that CQRS-ES and data replication with eventual consistency is the best solution to this issue. We try to make communication asynchronous as much as possible, and use point-to-point synchronous communication between microservices only when necessary.
This is a fairly common situation when designing microservices. Most microservices will require access to data available through another microservices or an external provider.
The best way to deal with this is to design each microservice as a "separate" application and think of all other microservices as external to it.
So, the developer of Microservice #1 (M1) would consult the Microservice #2 (M2) spec and write simple POJO classes for the data he fetches from there, just as he would if he were using some external API like Facebook's.
Do note that M1 will always talk to M2 (via REST, for example) and never to M2's database directly for the data it needs.
Ideally, each microservice would have its own database (or a partial clone of a central database).
Hi, I would like your help deciding what to do here. At my work we are currently migrating from Web Services to WCF. When we used web services, we had one web service that was in charge of invoking the business logic. I would like to know the best way to achieve the same functionality with WCF: use one unique service to call the different business logic classes, or have multiple services for the different business logic classes? To clarify, by one unique service I mean one that has a single method capable, one way or another, of invoking any of the business logic classes depending on certain parameters (plus some other methods for different tasks). The reason we are considering a single service is to manage, from one place rather than all over the codebase, the commits and rollbacks needed when a database operation blows up. Thanks in advance; I'm kind of new to WCF.
You can migrate your existing service structure into WCF and still have the same functionality. You'll need to create and expose the service(s) according to WCF, but the architectural structure can remain how you have it in Web Services. You may want to revisit your design. There are many features at your disposal, including Entity Framework, that allow you to manage commits, rollbacks, etc.
I am new to Windows Communication Foundation and I am working on a system that serves data to a front end.
The WCF portion of the system consists of hundreds of queries that retrieve specific filtered datasets. These datasets are sent back to the client via over a hundred different classes; it almost seems like there is a separate class for each service operation.
A snapshot of the code would look like
[OperationContract]
IList<A> LoadAdata();

[OperationContract]
IList<B> LoadBdata();

[OperationContract]
IList<C> LoadCdata();
...
In addition, a lot of time and code is spent converting from the datasets into the IList<> objects.
My Questions are:
Is this how WCF is supposed to work?
Is there a better way to structure this service?
The typical structure you describe is not an absolute necessity for WCF to work. It may simply be your company's standard way of dealing with service and data contracts. For example, ServiceResponse ServiceOperation(ServiceRequest request); is a common pattern to see. It allows you to flexibly maintain the input and output parameters of a service operation without changing the outwardly visible signature of the operation. This might seem like overhead, but it can serve a purpose.
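The request/response envelope pattern mentioned above can be sketched as follows (in Java for illustration, since the original context is WCF/C#; all type and field names are invented for the example). The point is that new fields can be added to the request or response types without changing the operation's signature.

```java
public class EnvelopePattern {

    // Request envelope: new fields can be added without breaking callers.
    record ServiceRequest(String query, int pageSize) {}

    // Response envelope: carries status alongside the payload.
    record ServiceResponse(boolean ok, String payload) {}

    // One stable operation signature for the whole contract.
    interface OrderContract {
        ServiceResponse loadOrders(ServiceRequest request);
    }

    static class OrderServiceImpl implements OrderContract {
        public ServiceResponse loadOrders(ServiceRequest request) {
            // A real implementation would query the database using request.query().
            return new ServiceResponse(true, "orders for: " + request.query());
        }
    }
}
```

The trade-off is that the operation's inputs and outputs are no longer self-describing in the signature, which is the "overhead" the answer refers to.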
If the operations are standard CRUD operations that all look the same and have no specific business logic behind them, take a look at WCF Data Services, which exposes your data model as a standardized OData interface. The client can create custom queries, which prevents the service from having to expose a large set of interface operations; it is all handled for you in that case.
I'm having a little difficulty understanding some architectural principles when developing a service. Suppose a call to a WCF service returns a collection of items (Orders, custom classes built from LINQ-to-SQL entity data) to a client, and each Order has a collection of OrderItems (one-to-many) also built from the same LINQ-to-SQL context. If I make another call to the service, request a particular OrderItem, and modify its details on the client side, how does the first collection realise that one of its Orders' OrderItems has changed on the client side? My current approach when changing an OrderItem is to send the OrderItem object to the WCF service for storage via LINQ-to-SQL commands; to update the collection the client first fetched, I use the IList interface to search for and replace each instance of the OrderItem. Subscribing each item to the PropertyChanged event also gives some control. This works, with certain obvious limitations, but how would one more correctly approach this, perhaps by managing all of the data changes from the service itself? An ORM? Static classes? If this question is too difficult to answer here, perhaps there is a link or chat group where I can discuss it, as I understand this site is geared toward quick Q&A topics rather than guided tutorial discussions.
Thanks all the same.
Chris Leach
If you have multiple clients changing the same data at the same time, at the end of the day your system must implement some sort of concurrency control. Broadly, that's going to fall into one of two categories: pessimistic or optimistic.
In your case it sounds like you are venturing down the optimistic route, whereby anyone can access the resource via the service; it does not get locked or accessed exclusively. What that means is that ultimately you need to detect and resolve the conflicts that arise when one client changes the data before another.
The second architectural requirement you seem to be describing is some way to synchronize changes between clients. This is a very difficult problem. One way is to build some sort of publish/subscribe system whereby, after a client retrieves some resources from the service, it also subscribes to updates to those resources. You can do this in either a push- or pull-based fashion (pull is probably simpler, i.e. just poll for changes).
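The optimistic route can be sketched with version numbers (in Java for illustration; the class and record names are invented for the example). Each update carries the version the client last read; if the stored version has moved on, the write is rejected and the client must re-read and resolve the conflict.

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticStore {

    // Every stored value carries a monotonically increasing version.
    record Versioned(int version, String value) {}

    private final AtomicReference<Versioned> current =
            new AtomicReference<>(new Versioned(0, "initial"));

    public Versioned read() {
        return current.get();
    }

    // Returns true if the write won; false means a conflict was detected
    // because another client updated the value since expectedVersion was read.
    public boolean update(int expectedVersion, String newValue) {
        Versioned cur = current.get();
        if (cur.version() != expectedVersion) {
            return false; // stale read: reject and let the client re-read
        }
        return current.compareAndSet(cur, new Versioned(cur.version() + 1, newValue));
    }
}
```

LINQ-to-SQL's built-in optimistic concurrency (timestamp/row-version columns) applies the same idea at the database level.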
Fundamentally you are trying to solve a reasonably complex problem, but it's also one that pops up quite frequently in software.