Background:
I'm thinking about web application organisation. I will separate the front end (website for the browser) from the back end (API): two apps, two repositories, two hosting environments. The front end will call the API for almost everything.
So, if I have two separate domain services within my API (for example, a learning context and a booking context) with no direct link between them, should I build two APIs (with two repositories, two build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question about a concrete case, sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, the priorities you have, and so on. Generally, you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple the domain model across separate modules/bundles/libraries
Create a distributed architecture, such as Service Oriented Architecture (SOA) or Event Driven Architecture (EDA)
One monolithic application
It's the easiest and cheapest way to develop an application in its early stages. You don't have to worry about a complex architecture or complex deployment and development processes. It also works better when there aren't many developers around.
Once the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially with a lot of people working on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment also becomes much, much harder.
Within this approach you have one repository and one build process for all your APIs.
One monolithic application with a decoupled domain model
Within this approach you still have one big platform, but logically separate modules are connected as if they were third-party dependencies. For example, you may extract one module and turn it into a library.
Thanks to that, you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, though it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question, in this way you get a separate API, dev process and repository for each "type of action", as long as you move its domain logic into a separate library.
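To make this concrete, here is a minimal sketch assuming a hypothetical booking context: the domain logic lives in its own class library and the monolith depends only on its public contract (all names are made up for illustration).

```csharp
using System;

// --- Booking.Domain: a separate class library, versioned on its own ---
namespace Booking.Domain
{
    public interface IBookingService
    {
        BookingConfirmation Book(string customerId, string slotId);
    }

    public sealed record BookingConfirmation(string BookingId);

    public sealed class BookingService : IBookingService
    {
        public BookingConfirmation Book(string customerId, string slotId)
        {
            // All booking domain rules live here, invisible to the host app.
            return new BookingConfirmation(Guid.NewGuid().ToString());
        }
    }
}

// --- Monolith: references Booking.Domain as if it were a third-party package ---
namespace WebApp
{
    public class BookingEndpoint
    {
        private readonly Booking.Domain.IBookingService _bookings;

        public BookingEndpoint(Booking.Domain.IBookingService bookings) => _bookings = bookings;

        // HTTP actions would simply delegate to _bookings here.
    }
}
```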
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate front end that consumes data through APIs) and the individual services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In this approach you have a separate API, with a separate repository and separate processes, for each "type of action" (which I understand as a separate domain model/service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question: my suggestion is to keep the APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Whether to separate repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
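For example, one way to keep the contexts separately versioned inside a single application on a Microsoft stack is per-context route prefixes. This is only a hedged ASP.NET Core sketch; the controller names, routes and version numbers are invented.

```csharp
using Microsoft.AspNetCore.Mvc;

// Learning context, currently on v1.
[ApiController]
[Route("api/learning/v1/[controller]")]
public class CoursesController : ControllerBase
{
    [HttpGet]
    public IActionResult List() => Ok(new object[0]); // learning-context data
}

// Booking context, already evolved to v2 independently of learning.
[ApiController]
[Route("api/booking/v2/[controller]")]
public class ReservationsController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult Get(string id) => Ok(new { id });
}
```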
If I missed your point, please describe in more detail what kind of answer you expect.
Best!
Related
I am trying to develop microservices in .NET Core. I am planning to implement a project structure like:
Frontend
Services
  - Product
    - Product.Api
    - Product.Application
    - Product.Domain
    - Product.Infrastructure
  - Basket
    - Basket.Api
    - Basket.Application
    - Basket.Domain
    - Basket.Infrastructure
  - Order
    - Order.Api
    - Order.Application
    - Order.Domain
    - Order.Infrastructure
In the above project structure, the Services folder currently has three modules (Product, Basket and Order). Many more modules will be added later.
Each module has 4 projects: Api, Application, Domain and Infrastructure. Adding more modules increases the number of class libraries and web projects, which hurts Visual Studio load, compile and run times because my hardware is not powerful enough.
Can you recommend any other pattern for optimising the number of projects in the microservices?
If the number of class libraries is the determining factor in your architecture's performance, maybe it is time to consolidate each module's projects into a single one.
If it is absolutely necessary to continue using the microservice architecture and the high number of modules, you should consider investing in more powerful hardware.
Developing software often requires a lot of RAM to house all the processes running the stack locally.
Another approach would be to try to develop on a cloud platform such as Azure, and use the corresponding tools to debug against a cloud instance or even in a GitHub Codespace.
If Product, Basket and Order are different microservices, then they should be in different Visual Studio solutions. Each solution will be small and independent and they'll all load and work fast regardless of how many microservices you have.
If Product, Basket and Order are part of the same microservice and you are planning to add many more modules, your microservice design is probably wrong, as a single microservice appears to have far too many responsibilities. In this case, the solution is to limit the responsibilities of each microservice so that they don't grow to enormous sizes.
If what you are building is a modular monolith (a single deployable unit, but with the code organised in modules), then the solutions are a bit different. If it's a single-developer application, you probably don't need to split the modules into separate projects. For example, the whole API can be a single project with each module in a different folder (see the sketch below). If there will be many developers and teams working on the source code, then you might want to create a separate solution for each module, so each team can work on their own code.
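A minimal sketch of that single-project layout, assuming hypothetical Product and Basket module names: each module keeps its own folder/namespace and exposes one registration entry point, and the host merely composes them.

```csharp
using Microsoft.Extensions.DependencyInjection;

// One web project; each module is just a folder/namespace with one entry point.
namespace Shop.Api.Modules.Product
{
    public static class ProductModule
    {
        public static IServiceCollection AddProductModule(this IServiceCollection services)
        {
            // Register Product services, repositories, etc. here.
            return services;
        }
    }
}

namespace Shop.Api.Modules.Basket
{
    public static class BasketModule
    {
        public static IServiceCollection AddBasketModule(this IServiceCollection services)
        {
            // Register Basket services here.
            return services;
        }
    }
}

// Program.cs (composition root), no extra class libraries involved:
// builder.Services.AddProductModule().AddBasketModule();
```

The module boundaries are then enforced by convention (folders and namespaces) rather than by extra projects, which keeps solution load and build times down.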
Like @abo said: if the number of assemblies is impacting the performance of your application, consider one assembly per module.
If your driver for having multiple assemblies per module was governance of dependencies, then consider using an additional tool like https://github.com/realvizu/NsDepCop, which allows you to enforce architectural/dependency rules without the help of the compiler.
Well, I have some notes on the architecture you are building. If you do it right, one of the payoffs is going to be less impact on the hardware.
Note #1: Make a kernel module that holds the abstractions for the common functionality (which the other microservices take a reference to), such as the base repository, the message-queue base handler, and the command/command-handler contracts (see the sketch after these notes). If you want to disconnect a module from the kernel, you can do that and add its own abstractions inside that module (simplicity is the ultimate sophistication).
Note #2: Not every module needs to have the same projects or layers you are using. For example, generally speaking, Basket doesn't need an Infrastructure project; all it does is tell the Order module whether there is any order ongoing or pending for this user.
Note #3: Microservices are notorious for needing high-end servers to handle the large number of nodes/applications.
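To illustrate note #1, here is a minimal sketch of such a kernel module containing only abstractions that the other services reference; the interface names are just examples, not a prescribed design.

```csharp
using System.Threading.Tasks;

namespace Kernel.Abstractions
{
    // Base repository contract shared by every module.
    public interface IRepository<TEntity, TId>
    {
        Task<TEntity> GetAsync(TId id);
        Task SaveAsync(TEntity entity);
    }

    // Command / command-handler contracts.
    public interface ICommand { }

    public interface ICommandHandler<in TCommand> where TCommand : ICommand
    {
        Task HandleAsync(TCommand command);
    }

    // Message-queue base handler.
    public interface IMessageHandler<in TMessage>
    {
        Task HandleAsync(TMessage message);
    }
}
```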
Finally, I have an awesome blueprint for you to study and follow, which happens to cover the same case you're after. Here is the link.
I'm not sure if my question is clear enough, but I will try to explain it as well as I can. I am currently exploring and playing with a microservices architecture to see how it works and to learn more. I mostly understand how things work, what the role of the API Gateway is in this architecture, and so on.
So I have one more theoretical question. For example, imagine there are two services, i.e. an event service (which manages possible events) and a ticket service which manages tickets related to a specific event (there could be many tickets). These two services really depend on each other, but they have separate databases and are completely isolated and loosely coupled, just as it should be in an "ideal" microservices environment.
Now imagine I want to fetch an event and all tickets related to that event and display them in a mobile application, a web SPA, or whatever. Is calling multiple services/URLs to fetch data and output it to the UI completely okay in this scenario? Or is there a better way to fetch and aggregate this data?
From reading different sources, calling one service from another adds latency, makes the services depend on each other, means future changes in one service can break the other one, etc., so it's not a great idea at all.
I'm sorry if I am repeating a question that was already asked somewhere (although I could not find it), but I need an opinion from someone who has met this question before and can explain the flow here in a better way.
Is calling multiple services / URLs to fetch data and output to UI completely okay in this scenario? Or is there a better way to fetch and aggregate this data.
Yes, it is OK to call multiple services from your UI and aggregate the data in your front-end code for your needs. In this case you would call two REST APIs to get the data from the ticket microservice and the event microservice.
Another option is to have a views/read-optimised microservice which aggregates data from both microservices and serves as a read-only microservice. This of course involves some latency considerations and other things. For example, this approach can be used if you have a requirement like a view that consists of multiple models (something like a denormalised view), which will be accessed a lot and/or has some complex filter options as well. In this approach you would have a third microservice aggregating the data of your two microservices (tickets and events). This microservice would be optimised for reading and could, if needed, use a different storage type (a document DB or similar). If you decided to do this, you could make only one API call to this microservice, which would provide all your data (see the sketch after these options).
Calling one microservice from another. In some cases you cannot really avoid this. Even though there are some sources online that tell you not to do it, sometimes it is inevitable. For your example I would not recommend this approach, as it would introduce coupling and unnecessary latency which can be avoided with other approaches.
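As a rough illustration of options 1 and 2, the sketch below fetches from the two microservices in parallel and aggregates the results into one view model; the URLs, DTO shapes and class names are hypothetical. The same code could live in the front end (option 1) or in a dedicated read-only aggregator service (option 2).

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public record EventDto(string Id, string Name);
public record TicketDto(string Id, string EventId, decimal Price);
public record EventDetailsView(EventDto Event, IReadOnlyList<TicketDto> Tickets);

public class EventDetailsAggregator
{
    private readonly HttpClient _events = new() { BaseAddress = new Uri("https://events-service/") };
    private readonly HttpClient _tickets = new() { BaseAddress = new Uri("https://tickets-service/") };

    public async Task<EventDetailsView> GetAsync(string eventId)
    {
        // Call both microservices in parallel; neither service calls the other.
        var eventTask = _events.GetFromJsonAsync<EventDto>($"api/events/{eventId}");
        var ticketsTask = _tickets.GetFromJsonAsync<List<TicketDto>>($"api/tickets?eventId={eventId}");
        await Task.WhenAll(eventTask, ticketsTask);

        return new EventDetailsView(eventTask.Result!, ticketsTask.Result!);
    }
}
```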
Background info:
You can read this answer, where the topic is whether it is OK to call one microservice from another. For your particular case it is not a good option, but for some cases it might be, so read it for some clarification.
Summary:
I have worked with systems where we did all three of those things. It really depends on your business scenario and the needs of your application. Which approach to pick will depend on a few criteria, such as: usability from the UI, scaling (if you have high demand on the microservices you could consider adding a third microservice that aggregates data from the tickets and events microservices), and domain coupling.
For your case I would suggest option 1 or option 2 (if you have a highly demanding UI) from the user's perspective. For some cases option 1 is enough and having a third microservice would be overkill, but sometimes it is an option as well.
In my experience with cloud based services, primarily Microsoft Azure, the latency of one service calling another does indeed exist, but can be relied upon to be minimal. This is especially true when compared to the unknown latency involved with the users device making the call over whichever internet plan they happen to have.
There will always be a consuming client that is dependent on a service and its defined interface, whether it is the SPA or another service. So in the scenario you described, something has to aggregate the payloads from both services.
Based on this, I have seen improved performance by using a service which handles client requests, aggregates results from n services and responds accordingly. This does indeed introduce dependencies, but as your services evolve, it is possible to have multiple versions of your services active simultaneously, allowing you to deprecate older versions at an appropriate time.
I hope that helps.
Optional Advice
You can maintain the read table (denormalised) inside whichever of the services suits it best. Why? Because CQRS should be applied only where needed; it is best suited to big and complex applications. Otherwise it introduces complexity into your system, and you gain less profit and more headache.
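As a small, hedged illustration of keeping such a denormalised read table inside one of the existing services (all names and shapes invented):

```csharp
using System.Collections.Generic;

// Denormalised read-table row owned by the event (or ticket) service.
public class EventTicketSummaryRow
{
    public string EventId { get; set; } = "";
    public string EventName { get; set; } = "";
    public int TicketsSold { get; set; }
}

// Integration message published by the ticket service when a ticket is sold.
public class TicketSold
{
    public string EventId { get; set; } = "";
}

public class EventTicketSummaryProjector
{
    // Stands in for the real read table / document store.
    private readonly Dictionary<string, EventTicketSummaryRow> _rows = new();

    // Invoked by the messaging infrastructure; the read side is updated in
    // place so queries need no joins or cross-service calls.
    public void When(TicketSold message)
    {
        if (_rows.TryGetValue(message.EventId, out var row))
            row.TicketsSold++;
    }
}
```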
I wonder if anyone could share their thoughts on my question regarding web-based APIs (we use Microsoft stacks). We are currently in the process of building an infrastructure to host web APIs across our business.
As an organisation we have separate business areas that provide services to our customers. These individual areas of our business generally have their own best-of-breed IT systems. Offering APIs is something we've long thought about, and we have now started the design process.
The APIs we aim to offer will be web-based (.NET/Web API/WCF, etc.) and will largely (99%) be consumed within our organisation, but some may be exposed externally in the future should the requirement arise (a new mobile app may need to use the services, etc.).
I'd love to hear your thoughts and experiences around how you architected your farms. I understand it's quite an open question without knowing the crux of our requirements, but it's more general advice/experience I'd like to hear.
In particular, we are trying to decide whether we should design the infrastructure by:
1) Providing each area of the business with its own API server, where we deploy each web API as a new application inside IIS.
or
2) Setting up a load-balanced web API farm where we have, say, 2-3 IIS web servers, all built the same and hosting the same web APIs, so the business areas effectively share the same servers. Each area would have a segregated site within IIS, and new APIs would be set up as new applications inside their respective web sites.
I don't foresee us having thousands of APIs, but some will be business-critical, so I'm certainly bearing resilience in mind. That is why, as much as I like each business area having its own API server, I'm being swayed towards the option of a load-balanced farm which the whole business shares.
Anyone have any thoughts, experiences etc.?
Thanks!
That's a very interesting question, and I'd love to hear what others might think. I'm no big expert, but here are my two cents.
It seems to me that the answer should be somewhere in between the two options you specified. Specifically, each critical business area should get its own resilient, load-balanced farm, while less critical services can use single-machine deployments. A critical business area may not mean only one API; it can actually be a group of APIs with high cohesion among themselves.
Using option 1 to its full extent can be hard to maintain, while using option 2 fully can be inefficient in terms of redeployment if (or rather, when) business logic changes. Furthermore, I think it would be possible for greedy APIs to hog resources at peak traffic, making other services temporarily less performant (unless you have some sort of dynamic scaling mechanism).
We're developing a comprehensive domain model encompassing 7(!) models/bounded contexts spanning several teams. We have yet to decide whether each of the BCs is entirely disconnected from the others (being orchestrated by a layer above) or whether they are going to communicate via domain events.
The application under development is, for all intents and purposes, an SWT/Swing single-threaded application, so no fancy distributed mumbo jumbo between the different BCs is needed.
Yet a big question remains: how do we integrate all those different models? Should the Application Layer undertake the task? If yes, and since in some (hopefully few) cases the wiring and ordering end up being complex, isn't the Application Layer the wrong place to do that?
For instance, consider the use case of assembling a very complex synthetically created human (AssembleHumanoid). We have bounded contexts relating to the circulatory system, the bone structure, the nervous system, the ventilation system, coordination, the immunological and mental systems, and also the sensor system (lol, this was all just made up, as you might imagine).
Wiring up all that stuff in the Application Layer feels kind of wrong. The obvious solution seems to be to create a second domain layer just for orchestration matters. I've looked it up, but Vernon's Implementing Domain-Driven Design doesn't directly touch on the issue (although he gets near it at p. 531, "Composing Multiple Bounded Contexts").
What are your thoughts on the matter?
I'm tackling the same questions as you right now. My role in my project is architect, and we have identified 5 BCs. But we are one team and intend to develop these BCs within one large application. So our BCs are modules within a larger insurance application, where each BC speaks its own ubiquitous language (treaty, reinsurance, security, medical risk assessment, premium).
But I have given this a lot of thought, and I think we'll send updates to other BCs through domain events. Our client is an MVC site that will consume our service layer. My intention is that the application layer has that kind of granularity, so it can perform the main task for the client without making the client MVC project coordinate with other BCs.
We use a shared kernel between BCs, but not for communication. We do use the DDD integration pattern where we reference another BC through value objects. We also have some BCs that act like factories; for example, the Security BC creates the different user roles for other BCs.
But when it comes to executing a use case that actually needs to perform some final task in other BCs, domain events come to the rescue.
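A minimal in-process sketch of that idea follows; the event, handler and BC names are hypothetical. The Treaty BC raises an event and the Premium BC reacts to it, so neither BC references the other's services.

```csharp
using System;
using System.Collections.Generic;

public interface IDomainEvent { }

// Raised by the Treaty BC at the end of its use case.
public record TreatySigned(Guid TreatyId) : IDomainEvent;

// A deliberately simple, single-threaded dispatcher (no bus required).
public static class DomainEvents
{
    private static readonly List<Delegate> Handlers = new();

    public static void Register<T>(Action<T> handler) where T : IDomainEvent
        => Handlers.Add(handler);

    public static void Raise<T>(T domainEvent) where T : IDomainEvent
    {
        foreach (var handler in Handlers)
            if (handler is Action<T> typed)
                typed(domainEvent);
    }
}

// Premium BC subscribes during application start-up:
// DomainEvents.Register<TreatySigned>(e => RecalculatePremiumFor(e.TreatyId));

// Treaty BC raises the event without knowing who listens:
// DomainEvents.Raise(new TreatySigned(treatyId));
```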
I'm looking at designing a commercial web site from scratch in the new year and I was planning on using NServiceBus. Other components would include RavenDB, ASP.NET MVC, Ninject, Bootstrap, etc.
My question is, if I build scalability in from the beginning, particularly if in the first 6 months I plan to run the site from a single server, would it be a foolish thing to use NServiceBus from the outset? Will I experience much of a choke-point by pushing everything through MSMQ rather than direct calls to methods in DLLs? Should NServiceBus only be added to mature systems, or systems that are intended to be deployed to more than one server?
While the fastest possible call would be a direct in-memory invocation, I wouldn't look at optimizing for latency of calls in most web scenarios. Instead, focus on what type of logic can be run asynchronously with respect to the user request. That will be most influential on your overall scalability.
NServiceBus provides a fairly clean programming model for asynchronous invocations that can later on be distributed across multiple processes and machines.
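As a rough illustration of that programming model (shown with the current NServiceBus API; the MSMQ-era versions used an IBus interface instead, and the message and handler names here are invented):

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class SendWelcomeEmail : ICommand
{
    public string CustomerId { get; set; }
}

// Handled asynchronously with respect to the web request; the same handler
// can later run in a separate process or on another machine without change.
public class SendWelcomeEmailHandler : IHandleMessages<SendWelcomeEmail>
{
    public Task Handle(SendWelcomeEmail message, IMessageHandlerContext context)
    {
        // ... send the email here ...
        return Task.CompletedTask;
    }
}

// From the MVC controller: fire-and-forget instead of an in-process DLL call.
// await messageSession.SendLocal(new SendWelcomeEmail { CustomerId = customerId });
```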