Share common dependencies for two .NET Core apps - asp.net-core

I have two ASP.NET Core apps running on the same server and they share many dependencies.
I want to put all these common dependencies in one directory in order to save disk space, but I don't know how the apps need to be configured so that they search this particular directory when loading them.
Thanks in advance

As far as I can see, there is some conceptual complexity and misunderstanding here. Before I explain, let me go over the assumptions I'm making.
There are only two web applications (they could be API projects or something else). I assume you don't have any other projects; I have already asked you about this.
Dependencies are a big problem throughout the development process, and we developers are responsible for handling them. Evolving tightly coupled systems into loosely coupled ones gives us many advantages. For this reason, we aim to reduce the dependencies between applications by using many techniques and design patterns throughout the development process. I recommend reading up on the concepts of dependencies and coupling; I'm sharing some material that will be a starting point for you.
After reading about it, you will become aware that you need to separate dependencies at the application level rather than at the disk level. You will find that there are many techniques and approaches, and I am sure that once you have looked into them, taking action will be easy.
Here are some starting points:
https://dev.to/franiglesias/dependencies-and-coupling-4365
https://stackify.com/dependency-inversion-principle/
What is the difference between loose coupling and tight coupling in the object oriented paradigm?
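To make the coupling/dependency-inversion idea concrete, here is a minimal C# sketch; the types (IMessageSender, EmailSender, OrderService) are invented for illustration and are not from your apps.

    using System;

    // The high-level service depends only on an abstraction it owns...
    public interface IMessageSender
    {
        void Send(string recipient, string body);
    }

    // ...and a low-level detail implements that abstraction.
    public class EmailSender : IMessageSender
    {
        public void Send(string recipient, string body)
            => Console.WriteLine($"Emailing {recipient}: {body}");
    }

    // The consumer is coupled to the interface, not the concrete class,
    // so the implementation can be swapped without touching this code.
    public class OrderService
    {
        private readonly IMessageSender _sender;

        public OrderService(IMessageSender sender) => _sender = sender;

        public void Confirm(string customer)
            => _sender.Send(customer, "Your order is confirmed.");
    }

In ASP.NET Core you would typically wire this up in the container, e.g. services.AddScoped<IMessageSender, EmailSender>(), so each app decides for itself which implementation it uses.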

Related

Optimizing .NET core microservice project structure

I am trying to develop microservices in .NET Core.
I am planning to implement a project structure like this:
Frontend
Services
- Product
  - Product.Api
  - Product.Application
  - Product.Domain
  - Product.Infrastructure
- Basket
  - Basket.Api
  - Basket.Application
  - Basket.Domain
  - Basket.Infrastructure
- Order
  - Order.Api
  - Order.Application
  - Order.Domain
  - Order.Infrastructure
In the above project structure, the Services folder currently has three modules (Product, Basket, and Order); many more modules will be added later.
Each module has four projects: Api, Application, Domain, and Infrastructure. Adding more modules increases the number of class libraries and web projects, which slows down Visual Studio's loading, compile, and run times, because my hardware is not powerful enough.
Can you recommend another pattern for optimizing the number of projects in the microservices?
If the number of class libraries is the determining factor in your architecture's performance, maybe it is time to merge some of the modules into a single one.
If it is absolutely necessary to continue using the microservice architecture and the high number of modules, you should consider investing in more powerful hardware.
Developing software often requires a lot of RAM to house all the processes running the stack locally.
Another approach would be to try to develop on a cloud platform such as Azure, and use the corresponding tools to debug against a cloud instance or even in a GitHub Codespace.
If Product, Basket and Order are different microservices, then they should be in different Visual Studio solutions. Each solution will be small and independent and they'll all load and work fast regardless of how many microservices you have.
If Product, Basket and Order are part of the same microservice and you are planning to add many more modules, your microservice design is probably wrong, as a single microservice appears to have far too many responsibilities. In this case, the solution is to limit the responsibilities of each microservice so that they don't grow to enormous sizes.
If what you are building is a modular monolith (a single deployable unit, but with the code organised in modules), then the solutions are a bit different. If it's a single-developer application, you probably don't need to split the modules into separate projects. For example, the whole API can be a single project with each module in a different folder. If there will be many developers and teams working on the source code, then you might want to create a separate solution for each module, so each team can work on their own code.
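To illustrate the single-project modular monolith idea, here is a rough sketch of per-module registration in ASP.NET Core minimal APIs; the module names and the AddProductModule extension method are hypothetical.

    using Microsoft.AspNetCore.Builder;
    using Microsoft.Extensions.DependencyInjection;

    // Modules/Product/ProductModule.cs -- each module folder exposes one registration entry point.
    public interface IProductService { string GetName(int id); }
    public class ProductService : IProductService { public string GetName(int id) => $"Product {id}"; }

    public static class ProductModule
    {
        public static IServiceCollection AddProductModule(this IServiceCollection services)
            => services.AddScoped<IProductService, ProductService>();
    }

    // Program.cs -- the single API project composes all the modules.
    var builder = WebApplication.CreateBuilder(args);
    builder.Services.AddProductModule();   // .AddBasketModule(), .AddOrderModule(), ...
    var app = builder.Build();
    app.MapGet("/products/{id:int}", (int id, IProductService products) => products.GetName(id));
    app.Run();

Each module keeps its own folder (endpoints, services, data access), but everything still builds into one project and one deployable, so Visual Studio only has to load a handful of projects.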
Like #abo said: if the number of assemblies is impacting the performance of your application, consider one assembly per module.
If your driver for having multiple assemblies per module was governance of dependencies, then consider using an additional tool like https://github.com/realvizu/NsDepCop, which allows you to enforce architectural/dependency rules without the help of the compiler.
Well,
I have some notes on the architecture you are building.
If you do it right, one of the payoffs is going to be less impact on the hardware.
Note #1: Make a Kernel module which has all of the abstractions for the common functionality (that the other microservices take a reference from), like the base repository, the message-queue base handler, and the command/command handler. If you want to disconnect a module from the Kernel, you can do that and add its own abstractions inside that module (simplicity is the ultimate sophistication).
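A rough sketch of what such a Kernel module's shared abstractions might look like (the names IEntity, IRepository<T>, ICommandHandler<T>, and IMessageHandler<T> are just illustrative):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    // Kernel project: abstractions only, no infrastructure details.
    public interface IEntity
    {
        Guid Id { get; }
    }

    public interface IRepository<T> where T : IEntity
    {
        Task<T> GetAsync(Guid id, CancellationToken ct = default);
        Task AddAsync(T entity, CancellationToken ct = default);
    }

    public interface ICommand { }

    public interface ICommandHandler<in TCommand> where TCommand : ICommand
    {
        Task HandleAsync(TCommand command, CancellationToken ct = default);
    }

    // Base contract for message-queue handlers that modules can implement.
    public interface IMessageHandler<in TMessage>
    {
        Task HandleAsync(TMessage message, CancellationToken ct = default);
    }

Each microservice then references only this Kernel assembly and supplies its own implementations.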
Note #2: Not every module needs to have the same projects or layers you are setting up. For example, generally speaking, Basket doesn't need an Infrastructure project; all it does is tell the Order module whether there is any order ongoing or pending for this user.
Note #3: Microservices are notorious for needing high-end servers to handle the massive number of nodes/applications.
Finally, I have an awesome blueprint for you to study and follow, which happens to cover the same case you're after.
Here is the link

What are the disadvantages of the ECS (Entity-Component-System) architectural pattern, compared to OOP (or other paradigms)?

Because of Unity ECS, I've been reading a lot about ECS lately.
There are many obvious advantages to an ECS architecture:
ECS is data-oriented: data tends to be stored linearly, which is the most optimal way for the system to access it. In decent ECS implementations, data is stored and processed sequentially, with few or no interruptions for any given system processing its set of components.
ECS is very compartmentalized: It naturally separates data from behavior, enforces 'composition over inheritance' (google it), etc.
ECS is very friendly to parallel-processing and multi-threading: Because of the way things are structured, many entities and components can avoid conflicts and be processed in parallel to other systems.
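To make the "linear data, processed sequentially by systems" point concrete, here is a tiny hand-rolled sketch in plain C# (this is not Unity's actual ECS API, just an illustration):

    // Components are plain data, stored in tightly packed arrays.
    public struct Position { public float X, Y; }
    public struct Velocity { public float X, Y; }

    public class World
    {
        public Position[] Positions;
        public Velocity[] Velocities;

        public World(int entityCount)
        {
            Positions = new Position[entityCount];
            Velocities = new Velocity[entityCount];
        }
    }

    // A "system" is just behaviour that walks the component arrays in order.
    public static class MovementSystem
    {
        public static void Update(World world, float dt)
        {
            for (int i = 0; i < world.Positions.Length; i++)
            {
                world.Positions[i].X += world.Velocities[i].X * dt;
                world.Positions[i].Y += world.Velocities[i].Y * dt;
            }
        }
    }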
However, disadvantages to ECS (compared to OOP, or Entity-Component [without systems], as is common in game-engines including Unity up until recently) are rarely, if ever, talked about. Do they exist? And if they do, what are they?
Here's a few points I gathered from my research:
Systems are very dependent on their ordering. Introducing new systems in between already existing Systems can be a challenge.
You also need to plan your data ahead as much as possible, since it will potentially be used by a LOT of systems. Changing the content of components could potentially break quite a few systems.
Although it's easy to debug the flow of a system, it's harder to debug single component changes, because you don't have a global view of what happened to the entity across all its components. I'm not sure if Unity has introduced new debug features for this.
If you're planning to use ECS in your team, introducing a new paradigm to devs that are not familiar with it could be a challenge. The onboarding time could be longer with more overhead.
Hope this gives you a good starting point.
When it comes to Unity3D, one disadvantage that comes to my mind is that the ECS there is quite tied to the Unity classes (e.g. MonoBehaviour) and lifecycle. That means the components are not easy to share with other C# code, whereas a well-designed OOP class is reusable on platforms other than Unity.
Another point that comes to mind is that using interfaces with components is sometimes not easy in Unity, because only the newest version supports serialization of interfaces. Without serialization, they don't appear inside the Inspector.

Web apps architecture: 1 or n API

Background:
I'm thinking about web application organisation. I will separate the front end (the web site for the browser) from the back end (the API): 2 apps, 2 repositories, 2 hosts. The front end will call the API for almost everything.
So, if I have two separate domain services within my API (for example: a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with traffic.
(I hope this question will not be closed as not constructive... I think it's a real question for a concrete case; sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, priorities you have and so on. Generally you have several options:
Stay with one monolithic application
Stay with one monolithic application but decouple the domain model across separate modules/bundles/libraries
Create distributed architecture (like Service Oriented Architecture (SOA) or Event Driven Architecture (EDA))
One monolithic application
It's the easiest and cheapest way to develop an application in its beginning stage. You don't have to worry about complex architecture or complex deployment and development processes. It also works better when there aren't many developers around.
Once the application grows, this model starts to become problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people are working on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much, much harder.
Within this approach you have one repo and one build process for all your APIs.
One monolithic application but with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party components. For example, you may extract one module and create a library from it.
Thanks to that, you are able to introduce separate processes (QA, dev) for the different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to keep backward compatibility across the libraries over the application's lifespan.
Regarding your question, in this way you have a separate API, dev process, and repository for each "type of actions", as long as you move its domain logic into a separate library.
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process gets more reliable as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate front end that uses the APIs to consume data) and the particular services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In that approach you have a separate API, with separate repositories and separate processes, for each "type of actions" (which I understand as separate domain models/services).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, coming back to your original question, my suggestion is to keep the APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate. Separating repositories and/or processes depends on the approach you choose (e.g. among those I mentioned above).
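As a rough illustration of "keep the APIs separate even inside one monolith", here is an ASP.NET Core sketch where each bounded context gets its own versioned route prefix and controller, delegating to its own domain library; the controllers and routes are hypothetical:

    using Microsoft.AspNetCore.Mvc;

    // Learning context: its own route prefix, so it can be versioned and evolved on its own.
    [ApiController]
    [Route("api/learning/v1/courses")]
    public class CoursesController : ControllerBase
    {
        // Would delegate to the Learning domain library (not shown here).
        [HttpGet]
        public IActionResult List() => Ok(new[] { "course-1", "course-2" });
    }

    // Booking context: separate prefix and separate domain logic, even though it ships in the same app.
    [ApiController]
    [Route("api/booking/v1/reservations")]
    public class ReservationsController : ControllerBase
    {
        [HttpGet]
        public IActionResult List() => Ok(new[] { "reservation-1" });
    }

If you later decide to split the contexts into separate services, each prefix already maps cleanly onto one of them.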
If I missed your point, please describe in a more detailed way what answer you expect.
Best!

API library decoupling approaches?

Imagine a set of libraries that represent some APIs. By using inversion-of-control mechanisms, concrete implementations will be injected into a consuming project.
Here is the situation: I have some of the API libraries depending on other API libraries for certain functionality, so the API libraries themselves are coupled at some points. This coupling can become an issue later, because changing one API will result in changes to the dependent APIs, and the corresponding implementations will also need to be changed; in the worst case we end up with quite a number of projects that need to be modified to reflect a change from only one of them.
Now I have in mind two possible solutions for this:
Create a monolith API project that unites the related API libraries.
Further decouple the APIs by making each library provide interfaces for all functionality that depends on the other API, so the direct dependency is removed. This might result in similar code in both libraries, but it gives freedom to the implementations chosen via the IoC mechanisms and also allows the APIs to evolve independently of each other (when an API is changed, the changes would affect only its implementation libraries, not other APIs or their implementations).
The problem with the second approach is the duplication of code, and the result might be having too many API libraries that need to be referenced (for instance, in a .NET application each API will be a separate DLL; in some scenarios, like Silverlight applications, this can be an issue for app size, download time, and overall client performance).
Is there a better solution for this situation? When is it better to merge some API libraries into one bigger library, and when not? I know this is a very general question, but let's ignore the due dates, estimations, client requirements and technologies for a moment; I want to be able to determine the right approach based on achieving maximum scalability and minimum maintenance time. So, what could be a good reason to choose either approach, or another one you might suggest?
Edit:
I feel like I must clarify something about the question. I have in mind decoupling the APIs from each other, not the API from its implementation. So, for instance, if I have a security API for validating access permissions, and a user accounts API that uses (references) the security API, then changing the security API will require changing the user accounts API and the implementations of both of them. The more APIs that are coupled this way, the more changes will have to be applied. That is what I want to avoid.
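For the security / user-accounts example, the second approach could look roughly like this: the accounts library declares the narrow interface it needs, and a thin adapter (wired up through your IoC container) maps it onto the security library, so only the adapter changes when the security API changes. All names here are hypothetical:

    using System;

    // Stand-in for your real security library.
    namespace Security.Api
    {
        public interface IAuthorizationService
        {
            bool HasPermission(Guid userId, string permission);
        }
    }

    // UserAccounts library: owns the narrow abstraction it needs, with no reference to Security.Api.
    namespace UserAccounts.Api
    {
        public interface IPermissionChecker
        {
            bool CanAccess(Guid userId, string resource);
        }

        public class AccountService
        {
            private readonly IPermissionChecker _permissions;

            public AccountService(IPermissionChecker permissions) => _permissions = permissions;

            public bool CanEditProfile(Guid userId) => _permissions.CanAccess(userId, "profile:edit");
        }
    }

    // Composition root: the only place that knows about both libraries.
    namespace CompositionRoot
    {
        using Security.Api;
        using UserAccounts.Api;

        public class SecurityPermissionChecker : IPermissionChecker
        {
            private readonly IAuthorizationService _security;

            public SecurityPermissionChecker(IAuthorizationService security) => _security = security;

            public bool CanAccess(Guid userId, string resource) => _security.HasPermission(userId, resource);
        }
    }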
The choice is between few huge libraries and a myriad of small libraries.
If you have a huge library, the code within will tend to be tightly coupled simply because there's no force providing pressure to design the various elements in a loosely coupled way. The risk is that it becomes harder and harder to evolve that library because there are so many interdependencies that must be coordinated. Think about the .NET Base Class Library as an example.
If you have a myriad of small libraries, you might risk dll hell. Yes, we were promised many years ago that this was over, but it's not. Just try to consume a lot of fine-grained open source libraries in your application code base and you'll know what I mean.
Still, the Single Responsibility Principle also applies at the package level, so I'd recommend small, focused libraries instead of huge general-purpose libraries. This also makes it easier to always pick best-of-breed libraries.
Small libraries can always be composed/compiled into larger libraries (in .NET with an Assembly Linker / Merger / Repacker utility), while it's much harder to split a big library.
No matter what you do, the most important thing to keep in mind is backwards compatibility. The fewer breaking changes you introduce, the easier those libraries will be to manage.
I don't see this as a problem, really.
Some libraries will depend on other libraries, and that is fine by me: improving one library will improve all its dependents! The "owner" of a library has the responsibility not to break existing code when making a change, but this is normal and can easily be handled if the code is well designed.
If you have changes rippling through all dependent code you should reconsider your design. If your library surfaces a certain API it should isolate its consumers from changes to underlying classes or libraries.
Update 1:
If your application uses Library1 with API1 it should not have to deal with the fact that Library1 uses Lib2, Lib3, .. , LibX.
E.g. The Moq mocking library depends on CastleDynamicProxy. Why should you have to care about that? You get an assembly where DynamicProxy is already merged in and you can just use Moq. You never see, use or have to care about DynamicProxy. So if the DP API changes, that would not affect your tests written using Moq. Moq isolates your code from changes in the API of the underlying DP.
Update 2:
Finding a problem valid for more than one branch causes modifications of all of them
If that is the case you don't build a library but a helper for a very specific problem that should NEVER be forced upon other projects. Shared libraries tend to degenerate to a collection of "might be useful somewhere in the distant future". Don't! This will always bite you in the a**! If you have a solution for a problem that occurs in more than one place (like Guard classes): share it. If you believe that you might find a use for some solution to a problem: leave it in the project until you really have that situation. Then share it. Never do that "just in case".

Should I be more concerned with coupling between packages or between units of distribution?

I have been looking at metrics for coupling and also at DSMs (dependency structure matrices).
One of the tools I've been using looks at coupling between 'modules' with a module being a unit of distribution (in this case a .net assembly).
I feel that I should be more interested in looking at coupling between packages (or namespaces) than with units of distribution.
Should I be more concerned with coupling between packages/namespaces (ensuring that abstractions only depend on abstractions, concrete types depend on abstractions, and there are no cycles in the dependencies, so that refactoring and extending is easy), or should I be concerned with whether I can deploy new versions without needing to update unchanged units of distribution?
What does anyone else measure?
For what it's worth, my gut feel is that if I focus on the package/namespace coupling then the unit of distribution coupling will come for free or at least be easier.
First, it's easy to go overboard looking at dependencies and coupling. Make sure you aren't over complicating it.
With that disclaimer out of the way, here's what I suggest.
There are really three different views of dependency/coupling management:
1) physical structure (i.e. assembly dependencies)
2) logical structure (i.e. namespace dependencies)
3) implementation structure (i.e. class dependencies)
For large apps, you will need to at least examine all 3, but you can usually prioritize.
For client-deployed apps, #1 can be very important (i.e. for things like plug-ins). For apps deployed inside the enterprise (i.e. asp.net), item #1 usually turns out to be not so important (excluding frameworks reused across multiple apps). You can usually deploy the whole app easily enough that it's not worth taking on the overhead of a complicated structure for #1.
Item #2 tends to be more of a maintainability issue. Know your layer boundaries and their relationship to namespaces (i.e. are you doing 1 layer per namespace or are you packaged differently at the logical level). Sometimes tools can help you enforce your layer boundaries by looking at the logical dependency structure.
Item #3 is really about doing good class design. Every good developer should put forth a pretty good amount of effort into ensuring he is only taking on the proper dependencies in his classes. This is easier said than done, and is typically a skill that has to be acquired over time.
To get a bit closer to the heart of your question: item #1 is really about how the projects are laid out in the VS solution. So this isn't an item to measure; it's more something you set up at the beginning and let run.
Item #2 is something you might use a tool to check during builds, to see if the developers have broken any rules. It's more of a check than a measure, really.
Item #3 is really the one you'd want to take a good look at measuring. The classes in your codebase which have a high amount of coupling are going to be pain points down the road, so make sure of the quality on those guys. Also, measuring at this level gives you some insight into the quality (overall) of the codebase as it evolves. In addition, it can raise a red flag if someone checks some really raunchy code into your codebase.
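If you want a quick-and-dirty feel for item #3 before reaching for a dedicated analysis tool, something like the following rough C# sketch (counting distinct same-assembly type references per class via reflection) can point at hotspot classes; real tools do this far more accurately:

    using System;
    using System.Linq;
    using System.Reflection;

    public static class CouplingReport
    {
        // Counts, for each class, the distinct types from the same assembly that it
        // references through its fields, properties and declared method signatures.
        // Very rough: it ignores method bodies, attributes, generic arguments, etc.
        public static void Print(Assembly assembly)
        {
            var rows = assembly.GetTypes()
                .Where(t => t.IsClass)
                .Select(t => new
                {
                    Type = t,
                    Coupling = t.GetFields(BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic)
                                .Select(f => f.FieldType)
                        .Concat(t.GetProperties().Select(p => p.PropertyType))
                        .Concat(t.GetMethods(BindingFlags.Instance | BindingFlags.Public | BindingFlags.DeclaredOnly)
                                 .SelectMany(m => m.GetParameters().Select(p => p.ParameterType)
                                                   .Append(m.ReturnType)))
                        .Where(rt => rt.Assembly == assembly && rt != t)
                        .Distinct()
                        .Count()
                })
                .OrderByDescending(r => r.Coupling);

            foreach (var row in rows)
                Console.WriteLine($"{row.Type.FullName}: {row.Coupling}");
        }
    }

    // Usage (from any project that references the assembly you want to inspect):
    // CouplingReport.Print(typeof(SomeTypeInThatAssembly).Assembly);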
So, if you want to prioritize, take a quick look at #1 and #2. Know what they should look like. But for most apps, item #3 should be taking the most time.
This answer, of course, excludes huge frameworks (like the .NET BCL). Those babies need very careful attention to #1. :-)
Otherwise, you end up with problems like this:
"Current versions of the .NET Framework include a variety of GUI-based libraries what wouldn't work properly in Server Core"
http://www.winsupersite.com/showcase/win2008_ntk.asp
Where you can't run .NET on a GUI-less install of Windows Server 2008 because the framework takes dependencies on the GUI libraries...
One final thing. Make sure you are familiar with the principles behind good dependency/coupling management. You can find a nice list here:
http://butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod
Coupling and dependency cycles between units of distribution are more "fatal" because they can make it really difficult to deploy your program, and sometimes even to compile it.
You are mostly right: a good top-level design that divides the code into logical packages with clear, predefined dependencies will get you most of the way; the only thing missing is the correct separation of those packages into units of distribution.