Optimizing .NET Core microservice project structure - asp.net-core

I am trying to develop microservices in .NET Core.
I am planning to implement a project structure like this:
Frontend
Services
  - Product
    - Product.Api
    - Product.Application
    - Product.Domain
    - Product.Infrastructure
  - Basket
    - Basket.Api
    - Basket.Application
    - Basket.Domain
    - Basket.Infrastructure
  - Order
    - Order.Api
    - Order.Application
    - Order.Domain
    - Order.Infrastructure
In the above project structure, under the Services folder there are currently three modules (Product, Basket and Order); many more modules will be added later.
Each module has 4 projects: Api, Application, Domain and Infrastructure. Adding more modules increases the number of class libraries and web projects, which slows down Visual Studio loading, compiling and running the project because my hardware is not powerful enough.
Can you recommend another pattern for reducing the number of projects in the microservices?

If the number of class libraries is the determining factor in your architecture's performance, maybe it is time to merge modules together.
If it is absolutely necessary to keep the microservice architecture and the high number of modules, you should consider investing in more powerful hardware. Developing software often requires a lot of RAM to house all the processes running the stack locally.
Another approach would be to develop on a cloud platform such as Azure, and use the corresponding tools to debug against a cloud instance or even in a GitHub Codespace.

If Product, Basket and Order are different microservices, then they should be in different Visual Studio solutions. Each solution will be small and independent and they'll all load and work fast regardless of how many microservices you have.
If Product, Basket and Order are part of the same microservice and you are planning to add many more modules, your microservice design is probably wrong, as a single microservice appears to have far too many responsibilities. In this case, the solution is to limit the responsibilities of each microservice so that they don't grow to enormous sizes.
If what you are building is a modular monolith (a single deployable unit, but with the code organised in modules), then the solutions are a bit different. If it's a single-developer application, you probably don't need to split the modules into separate projects. For example, the whole API can be a single project and each module can live in a different folder. If there will be many developers and teams working on the source code, then you might want to create a separate solution for each module, so each team can work on their own code.
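A minimal sketch of that single-project, folder-per-module layout, assuming a .NET 6+ minimal API (the module names and AddXxxModule methods are hypothetical, just to illustrate the idea):

    // Program.cs - one deployable API project; each module is just a folder/namespace.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddControllers();
    builder.Services.AddProductModule();   // code lives under Modules/Product
    builder.Services.AddBasketModule();    // code lives under Modules/Basket
    var app = builder.Build();
    app.MapControllers();
    app.Run();

    // Modules/Product/ProductModuleExtensions.cs
    namespace MyShop.Api.Modules.Product;

    public static class ProductModuleExtensions
    {
        // Each module registers its own services through a single extension method.
        public static IServiceCollection AddProductModule(this IServiceCollection services)
        {
            services.AddScoped<IProductService, ProductService>();
            return services;
        }
    }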

Like @abo said: if the number of assemblies is impacting the performance of your application, consider one assembly per module.
If your driver for having multiple assemblies per module was governance of dependencies, then consider using an additional tool like https://github.com/realvizu/NsDepCop which allows you to enforce architectural/dependency rules without the help of the compiler.

Well,
I have some notes on the architecture you are making.
If you do it right, one of the payoffs is going to be less impact on the hardware.
Note #1: Make a Kernel module which holds all of the abstractions of the common functionality (that the other microservices can take a reference on), like the base repository, the message-queue base handler, and the command / command handler. If you want to disconnect a module from the kernel you can do that, and add its own abstractions inside that module alone (simplicity is the ultimate sophistication). See the sketch after these notes.
Note #2: Not every module needs to have the same projects or layers as you are putting in. For example, generally speaking, Basket doesn't need Infrastructure; all it does is tell the Order module whether there is any order ongoing or pending for this user.
Note #3: Microservices are notorious for needing high-end servers to handle the large number of nodes/applications.
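To make Note #1 concrete, here is a minimal sketch of what such shared kernel abstractions could look like (the interface names are illustrative, not prescribed):

    // Kernel/Abstractions.cs - shared contracts that every module can reference.
    namespace Kernel.Abstractions;

    // Base repository abstraction shared by all modules.
    public interface IRepository<TEntity, TId>
    {
        Task<TEntity?> GetByIdAsync(TId id, CancellationToken ct = default);
        Task AddAsync(TEntity entity, CancellationToken ct = default);
    }

    // Command marker plus handler contract (CQRS-style).
    public interface ICommand { }

    public interface ICommandHandler<in TCommand> where TCommand : ICommand
    {
        Task HandleAsync(TCommand command, CancellationToken ct = default);
    }

    // Base contract for message-queue consumers.
    public interface IMessageHandler<in TMessage>
    {
        Task HandleAsync(TMessage message, CancellationToken ct = default);
    }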
Finally, I have an awesome blueprint for you to study and follow, which happens to cover the same case you're after.
here is the link

Related

Splitting an Enterprise project (ex. ERP) to multiple small projects (ASP.NET Core)

Is it a good way to split an Enterprise project such as ERP software into multiple small projects?
Our ERP project has some modules:
HRM
Sales & Marketing
Production Planning
Procurement
Inventory & Warehouse
...
Is it a proper way to have one project for each module (with one database for all modules)?
I'd like to know if it is a proper way to create an ASP.NET Core MVC project for each module.
Well, based on your reply, if each module just means the class files of your project (HRM, Sales & Marketing, and so on), then that wouldn't be spot on. A standard ASP.NET Core project could be designed like below:
However, project architecture evolves considerably over time depending on requirements, so it's quite challenging to say which structure is the soundest.
Nevertheless, a few generic approaches followed by developers these days are as below:
Architectures You Can Follow:
Clean Architecture
Onion Architecture
Repository Pattern
Principles You Should Consider:
No matter which architecture you have opted to implement, there are two things you must consider:
SOLID
DRY
Note: if you create a single project holding the class files and other related stuff for all of your modules, the server cost could surge considerably. We must bear cost and maintainability in mind as well.
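As a small illustration of the Repository pattern mentioned above, a sketch assuming EF Core (the type names are hypothetical):

    using Microsoft.EntityFrameworkCore;

    // Generic repository contract plus a thin EF Core-backed implementation.
    public interface IRepository<T> where T : class
    {
        Task<T?> GetAsync(int id);
        Task AddAsync(T entity);
    }

    public class EfRepository<T> : IRepository<T> where T : class
    {
        private readonly DbContext _db;
        public EfRepository(DbContext db) => _db = db;

        public Task<T?> GetAsync(int id) => _db.Set<T>().FindAsync(id).AsTask();

        public async Task AddAsync(T entity)
        {
            _db.Set<T>().Add(entity);
            await _db.SaveChangesAsync();
        }
    }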

Share common dependencies for two .NET Core apps

I have two ASP.NET Core apps running on the same server and they share many dependencies.
I want to put all these common dependencies in one directory in order to save disk space, but I don't know how the apps need to be configured so that they probe this particular directory in order to load them.
Thanks in advance
As far as I can see, there is some conceptual complexity and misunderstanding here. Before I explain, I would like to state the assumptions I am going on.
There are only 2 web applications (they could be API projects or something else). I assume you don't have any other projects; I have already asked you about this.
Dependencies are a big problem throughout the development process, and we developers are responsible for handling them. Evolving tightly-coupled systems into loosely-coupled systems gives us many advantages. For this reason, we aim to reduce the dependencies of applications by using many techniques and design patterns throughout the development process. I recommend reading about the dependency and coupling concepts; I am sharing some resources that will be a starting point for you.
After looking into them, you will become aware that you need to separate dependencies at the application level rather than at the disk level. You will find there are many techniques and approaches, and I am sure that after reviewing them it will be easy for you to take action.
Here are the links:
https://dev.to/franiglesias/dependencies-and-coupling-4365
https://stackify.com/dependency-inversion-principle/
What is the difference between loose coupling and tight coupling in the object oriented paradigm?
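As a quick sketch of the dependency-inversion idea covered by those links (the type names below are made up for illustration):

    // High-level code depends on an abstraction, not on a concrete implementation.
    public interface IMessageSender
    {
        void Send(string recipient, string body);
    }

    public class EmailSender : IMessageSender
    {
        public void Send(string recipient, string body) =>
            Console.WriteLine($"Emailing {recipient}: {body}");
    }

    public class OrderNotifier
    {
        private readonly IMessageSender _sender;

        // The concrete sender is injected, so OrderNotifier stays loosely coupled
        // and can be reused or tested with any IMessageSender implementation.
        public OrderNotifier(IMessageSender sender) => _sender = sender;

        public void NotifyShipped(string customerEmail) =>
            _sender.Send(customerEmail, "Your order has shipped.");
    }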

Testing reusable components / services across multiple systems

I'm currently starting a new project where we are hoping to develop a new system using reusable components and services.
We currently have 30+ systems that all have common elements, but at the moment we develop each system in isolation so it feels like we are often duplicating code and then of course we have 30+ separate code bases to maintain and support.
What we would like to do is create a generic platform using shared components to enable quick development of new collections, reusing code and reusing automated tests and reduce the code base that needs to be maintained.
Our thoughts so far are that we would have a common code base for specific modules for example User Management and Secure System Access, these modules could consist of their own generic web module, API and Context. This would create a generic package of code.
We could then deploy these different components/packages to build up a new system to save coding the same modules over and over again, so if the new system needed to manage users, you could get the User Management package and boom it does what you need. However, because we have 30+ systems we will deploy the components multiple times for each collection. Also we appreciate that some of the systems will need unique functionality so there would be the potential to add extensions to the generic modules for system specific needs OR to choose not to use one of the generic modules and create a new one, but use the rest of the generic components.
For example if we have 4 generic components that make up the system A, B, C and D. These could be deployed to create the following system set ups:
System 1 - A, B, C and D (Happy with all generic components)
System 2 - Aa, B, C and D (extended component A to include specific functionality; see the sketch below)
System 3 - A, E, C and F (Can't reuse components B and D so create specific ones, but still reuse components A and C)
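In code terms, the "extended component A" case (System 2's Aa) might look roughly like this, with purely hypothetical names:

    // Generic component A, shipped as a shared package to every system.
    public class UserManager
    {
        public virtual void RegisterUser(string userName)
        {
            // generic registration logic shared by all systems
        }
    }

    // System 2's "Aa": extends the generic component with system-specific behaviour.
    public class System2UserManager : UserManager
    {
        public override void RegisterUser(string userName)
        {
            base.RegisterUser(userName);
            // extra, system-specific steps (e.g. auditing to System 2's own store)
        }
    }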
This is throwing up a few issues for me as I need to be able to test this platform and each system to ensure it works and this is the first time I've come across having to test a set up like this.
I've done some reading around microservices and how to test them, but these articles often approach the problem for just one system using microservices, whereas we are looking at multiple systems with different configurations.
My thoughts so far lead me to believe that, for the generic components that will be utilised by the different collections, I can create automated tests at the base code level; those tests will confirm the generic functionality, so it will not be necessary to retest these functions again for each component, other than perhaps a manual sense check after deployment. Then, at each system level, additional automated tests can be added to check any specific functionality that may be created.
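One possible shape for those reusable base-level tests, sketched with xUnit (the service and class names are hypothetical):

    using Xunit;

    // Shared contract tests shipped with the generic User Management component.
    // Each system's test project derives from this class and supplies its own setup.
    public abstract class UserManagementContractTests
    {
        // Hypothetical factory each system overrides (e.g. to use its extended component).
        protected abstract IUserService CreateUserService();

        [Fact]
        public void Adding_a_user_makes_it_retrievable()
        {
            var service = CreateUserService();
            service.AddUser("alice");
            Assert.True(service.UserExists("alice"));
        }
    }

    // In System 2's test project: every generic test runs against the extended component.
    public class System2UserManagementTests : UserManagementContractTests
    {
        protected override IUserService CreateUserService() => new ExtendedUserService();
    }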
Ideally what I'd like would be to have some sort of testing platform set up so that if a change is made to a core component such as User Management it would be possible to trigger all the auto tests at the core level and then all of the specific system tests for all systems that will share the component to ensure that any changes don't affect core functionality or create a knock on effect to the specific systems. Then a quick manual check would be required. I'm keen to try and remove a massive manual test overhead checking 30+ systems each time a shared component is changed.
We work in an agile way and for our current projects we have a strong continuous integration process set up: when a developer checks in some code (Visual Studio), this triggers a CI build (TeamCity / Octopus) that runs all of the unit tests; provided that all these tests pass, this then triggers an integration build that runs my QA automated tests, which are a mixture of tests run at an API level and web tests using SpecFlow and PhantomJS or Selenium WebDriver. We would like to keep this sort of framework in place to keep the quick feedback loops.
It all sounds great in theory, but where I'm struggling is trying to put something into practice and create a sound testing strategy to cover this kind of system set up.
So really what I'm hoping is that there is someone out there who has encountered something similar in the past and has thoughts on the best way to tackle this and has proven that they work.
I'm keen to get a better understanding of how I could set up a testing platform / rig to aid the continuous integration for all systems considering that each system could potentially look different, yet have shared code.
Any thoughts or links to blogs / whitepapers etc. that you think might help would be much appreciated!!
Your approach is quite good, and since I'll soon have to face the same issues as you, I can give you my ideas so far. I'm pretty sure that to
create a sound testing strategy to cover this kind of system set up
can't be squeezed into one post. So the big picture looks like this (to me): you're in the middle of an Enterprise Application Integration process, and the fundamental area to be covered by tests will be the data migration. Maybe you need to consider the concept of Service-Oriented Architecture for your
generic platform using shared components
since it'll enable you to provide application functionality as services to other applications. An indirect benefit here is that SOA involves dramatically simplified testing. Services are autonomous, stateless, with fully documented interfaces, and separate from the cross-cutting concerns of the implementation. There are a lot of resources, like this E2E testing one or efficiently testing SOA.

Web apps architecture: 1 or n API

Background:
I'm thinking about web application organisation. I will separate the front end (web site for the browser) from the back end (API): 2 apps, 2 repositories, 2 hosts. The front end will call the API for almost everything.
So, if I have two separate domain services behind my API (example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question for a concrete case, sorry if not. This question and some other about architecture were not closed so there is hope for mine)
It all depends on the application you are working on, its business needs, priorities you have and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple domain model across separate modules/bundles/libraries
Create distributed architecture (like Service Oriented Architecture (SOA) or Event Driven Architecture (EDA))
One monolithic application
It's the easiest and the cheapest way to develop an application in its beginning stage. You don't have to worry about complex architecture or complex deployment and development processes. It also works better when there are not many developers around.
Once the application grows, this model begins to be problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people are working on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much, much harder.
Within this approach you have one repo/build process for all your APIs.
One monolithic application but decouple domain model
Within this approach you still have one big platform, but you connect logically separate modules on a third-party basis. For example, you may extract one module and create a library from it.
Thanks to that you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to keep backward compatibility across libraries over the application's lifespan.
Regarding your question, in this way you have separate API, dev process and repository for each "type of actions" as long as you move its domain logic to separate library.
Distributed architecture (SOA / EDA)
SOA has a lot profits. You can introduce completely different processes for each service: dev, QA, deploying. You can deploy just one service at once. You also can use different technologies for different purposes. QA process gets more reliable as it involves smaller projects. You can version communication (API) between services which makes them even more independent. Moreover you have better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data driven (a separate frontend which uses APIs for consuming data) and particular services don't need to communicate with each other, it may not be that much more complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In this approach you have a separate API, with separate repositories and separate processes, for each "type of action" (which I understand as a separate domain model / service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question, my suggestion is to keep APIs as separate as you can. Even if you have one monolithic application, you should be able to version APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those I mentioned before).
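To illustrate versioning APIs separately even inside one application, a rough sketch using plain versioned routes (the controller names are hypothetical; a dedicated API-versioning package could be used instead):

    using Microsoft.AspNetCore.Mvc;

    // Two versions of the same endpoint living side by side in one app.
    [ApiController]
    [Route("api/v1/bookings")]
    public class BookingsV1Controller : ControllerBase
    {
        [HttpGet("{id}")]
        public IActionResult Get(int id) => Ok(new { id, status = "confirmed" });
    }

    [ApiController]
    [Route("api/v2/bookings")]
    public class BookingsV2Controller : ControllerBase
    {
        // v2 can evolve its contract without breaking v1 consumers.
        [HttpGet("{id}")]
        public IActionResult Get(int id) => Ok(new { id, status = "confirmed", channel = "web" });
    }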
If I missed your point, please describe in more detailed way what answer do you expect.
Best!

Maintaining two versions of a business class library

Our core business application uses a library (C# project) of business objects. Data access is done using the Wilson O/R Mapper (we're migrating to NHibernate this summer). The application has 3 front-end UIs: Windows Forms, ASP.NET, and a Windows Forms app that is installed on tablet PCs. The three front-ends perform different functions but they all access a core subset of the business classes.
The tablet PC application is the problem. We try to limit the amount of data pushed to the tablets to reduce the time it takes them to sync using SQL Server merge replication. The problem we've run into is when we add new functionality to the main application that we have no need to distribute to the tablet PCs or, if it's sensitive data, a strong need to not distribute it. Some of this can be controlled through replication, but we occasionally introduce dependencies in the core business objects that must be present in order for the O/R mapper to work.
Ideally, we would have two versions of the core business object library, Full and Compact. This seems like it would be a maintenance nightmare. Are there any strategies for managing this? Or alternatives? How does Microsoft manage the full and compact .NET Frameworks?
Your question talks about Tablet PC, which is really just XP and therefore the CF really isn't relevant, but for the sake of the question subject itself we can still talk about maintaining code used by the CF and the FFx (assuming you actually meant Windows Mobile or Windows CE).
First thing to know is that CF assemblies are retargetable. This means that a CF assembly can be used directly by a full-framework app without any recompiling (assuming it doesn't use any device-specific stuff like P/Invoking coredll without checking the runtime environment, using the WindowsMobile namespace, etc.).
If retargeting doesn't get you all the way there, then you can deal with the maintenance using compiler directives as well as partial classes. Daniel Moth covers tips on these quite well in his MSDN article.
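A rough sketch of the compiler-directive / partial-class approach (the COMPACT_FRAMEWORK symbol and the class below are hypothetical and would be defined per build configuration):

    // Customer.cs - compiled into both the full and the compact build.
    public partial class Customer
    {
        public string Name { get; set; }
    }

    // Customer.Full.cs - only included in the full-framework project, so the
    // tablet/compact build never ships this extra or sensitive member.
    #if !COMPACT_FRAMEWORK
    public partial class Customer
    {
        public decimal CreditLimit { get; set; }
    }
    #endif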
One thing you may be able to do, if you can compile for each platform separately, is use compiler directives to limit what is needed by the Tablet PC platform. However, since you are using an O/R mapper, that may prove to be difficult.
Now, in an ideal world you would actually have your domain objects (the ones that map via the O/R mapper) shared, with very, very little business logic in them. Then have a BO layer that consumes these domain objects. If you managed to break out your code base this way, you could in theory deploy just the separate layers you need, depending on the situation.
However it sounds to me more like you need to perform an intelligent split.
What you probably need to do is segment your code such that the Tablet PC BOs are in the core root BO assembly. Then have a BO extension assembly that contains the additional objects, rules, etc. that are needed for the WinForms / web app versions.
So while you would have two domain-level business object components at this point, you would not actually have any duplication, as your Tablet PC BO assembly would also be the base for the WinForms / ASP.NET app. The extension DLL would then only contain the extras needed for the bigger versions of the applications.
If you followed this approach it might make things easier to manage. Just look at it from the perspective of the common stuff needed everywhere versus the specialised stuff. :)
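A tiny sketch of that core-plus-extension split (the assembly and class names are made up for illustration):

    // Core.BusinessObjects.dll - deployed everywhere, including the tablet PCs.
    public class WorkOrder
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }

    // Extended.BusinessObjects.dll - referenced only by the WinForms / ASP.NET apps.
    public class WorkOrderWithCosting : WorkOrder
    {
        // Sensitive / desktop-only data that never ships to the tablet build.
        public decimal EstimatedCost { get; set; }
    }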
I can go into much greater detail if you want; I just wanted to give you a basic hint.