Staying open with DI/IoC containers

I am involved with several open source projects which, taken together, provide an application development framework. The question I have is: what mechanism(s) should I provide for integrating them with each other?
On the conceptual level the answer is clear - DI/IoC. The "only" problem is deciding which one. In several installations we used StructureMap, but then a user came along who wanted only one of the components and wanted Ninject.
So, to qualify the question: how should I go about building my components so that they can be integrated with each other (and with third-party code) using a variety of DI/IoC containers?
The best I could come up with was to separate out all integration code into separate projects and then have a project per supported IoC container, but this sounds suspiciously like IoC squared.
Any bright ideas? Or am I just thinking too hard?
P.S. for the curious: NDjango; Bistro; Workflow Server

As long as you develop reusable components, you can implement them in a DI-friendly way without ever referencing any particular DI Container.
It's only when you need to compose an actual, running application that you need a DI Container; but as I understand it, you are developing a framework, and it's best to keep a framework DI-neutral.
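As a minimal sketch of what "DI-friendly without referencing a container" looks like (the names here are hypothetical, not taken from NDjango or Bistro): the component declares its dependency through the constructor and knows nothing about how it gets resolved.

```csharp
using System;

// Hypothetical framework component: it depends only on an abstraction
// and never references any DI container assembly.
public interface ITemplateLoader
{
    string Load(string name);
}

public class TemplateRenderer
{
    private readonly ITemplateLoader loader;

    // Constructor injection: StructureMap, Ninject, or a plain "new"
    // can all satisfy this dependency; the class does not care which.
    public TemplateRenderer(ITemplateLoader loader)
    {
        this.loader = loader ?? throw new ArgumentNullException(nameof(loader));
    }

    public string Render(string name)
    {
        return loader.Load(name); // actual rendering logic elided
    }
}
```

Wiring it up is then the application author's job: a StructureMap user and a Ninject user each register ITemplateLoader in their own container, while your libraries stay container-free.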

See this very related question (almost a duplicate).
For inspiration about integrating several projects while keeping them independent, see the Castle Project.

Related

Share common dependencies for two .NET Core apps

I have two ASP.NET Core apps running on the same server and they share many dependencies.
I want to put all these common dependencies in a single directory in order to save disk space, but I don't know how to configure the apps so that they probe this particular directory when loading them.
Thanks in advance
As far as I can see, there is some conceptual complexity and misunderstanding here. Before I explain, let me go over my assumptions.
There are only two web applications (they could be API projects or something else), and you don't have any other projects; I have already asked you about this.
Dependencies are a big problem throughout the development process, and we developers are responsible for handling them. Evolving tightly-coupled systems into loosely-coupled systems gives us many advantages. For this reason, we aim to reduce the dependencies between applications by using many techniques and design patterns throughout the development process. I recommend reading up on the concepts of dependencies and coupling; I'm sharing some material below that will give you a starting point.
After looking into it, you will see that you need to separate dependencies at the application level rather than at the disk level. You will find that there are many techniques and approaches, and I am sure that after studying them, taking action will be easy.
Here are the links:
https://dev.to/franiglesias/dependencies-and-coupling-4365
https://stackify.com/dependency-inversion-principle/
What is the difference between loose coupling and tight coupling in the object oriented paradigm?
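To make the dependency-inversion idea from those links concrete, here is a minimal, hypothetical C# sketch (all names invented): both applications depend on an abstraction that lives in a shared class library or NuGet package, so the sharing happens at the application level rather than the disk level.

```csharp
using System;

// Lives in a shared class library (or NuGet package) referenced by both apps.
public interface IMessageSender
{
    void Send(string message);
}

// Each application supplies its own implementation.
public class ConsoleSender : IMessageSender
{
    public void Send(string message) => Console.WriteLine(message);
}

public class Notifier
{
    private readonly IMessageSender sender;

    public Notifier(IMessageSender sender) => this.sender = sender;

    // Depends only on the abstraction, so the apps stay loosely coupled.
    public void NotifyUser() => sender.Send("Your report is ready.");
}
```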

Choosing between dnx451 and dnxcore 50 for Azure Web App in terms of functionality, performance, etc

I am creating a new project that will run as an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? There are, for example, far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits to using dnxcore50 over dnx451 when running in an Azure Web App, such as performance, stability, etc.?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), which is why I didn't post my answer earlier; you should also ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: does it make sense to keep both frameworks in a project? Below are my personal thoughts on the subject.
My personal choice is also my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
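For reference, in the project.json format that ASP.NET 5 used at the time, keeping both frameworks is just a matter of listing both targets; framework-specific packages go inside the corresponding block (a sketch only, your dependency list will differ):

```json
{
  "frameworks": {
    "dnx451": { },
    "dnxcore50": {
      "dependencies": { }
    }
  }
}
```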
ASP.NET 5 is still not final. The strategy is not fully fixed, and it can change on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS, and the option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can suppose that other things could change in the future as well. Thus I think that putting all your eggs in one basket would be too dangerous now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, i.e. CoreFX, the .NET Core foundational libraries) doesn't support many features that the full .NET Framework does. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation; I prefer JSON in most other cases. Keeping both frameworks in your project helps you locate the parts of your code that are not yet implemented in CoreFX, so that you can move that code into a separate component or change the implementation.
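For illustration, this is the kind of full-framework XSD validation code that had no counterpart in early CoreFX and is therefore worth isolating (a minimal sketch; the file names are invented):

```csharp
using System;
using System.Xml;
using System.Xml.Schema;

class XsdValidationDemo
{
    static void Main()
    {
        var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
        settings.Schemas.Add(null, "order.xsd");          // attach the schema
        settings.ValidationEventHandler +=
            (sender, e) => Console.WriteLine(e.Message);  // report violations

        using (var reader = XmlReader.Create("order.xml", settings))
        {
            while (reader.Read()) { }                     // reading drives validation
        }
    }
}
```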
About performance: one should distinguish the potential of both frameworks from their current implementations. In general, CoreFX was redesigned and decomposed. Many parts of the monolithic mscorlib were separated out or removed (remoting, AppDomains, and so on). This means the performance of CoreFX should eventually be better: a factored API can, in theory, perform better, and individual parts of CoreFX can be improved and published as new versions more easily. Having many modules instead of one monolith gives us a new path for improving performance and fixing bugs. On the other hand, upgrading dependencies to new versions can introduce new compatibility problems, which increases risk and can decrease stability. By keeping both frameworks, we can test whether a new problem exists in the alternative framework as well, which lets us tell whether the latest dependency changes, rather than the latest changes to our own code, are the origin of new problems.
I could continue with the pros and cons of each framework, but nobody likes reading long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default until I find a real requirement to drop one of them.
No major advantages so far, really.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is being put into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.

SOA Solution Design and Service Encapsulation: Should I keep service codebases in separate solutions, or all together in one?

I am developing an SOA project using principles I learned in Udi Dahan's Advanced Distributed Systems Design course. So far the project has 6 completely independent services, as well as an IT/Ops service for aggregation and integration with third party services. I also have a set of libraries I've developed for use across all of these services with infrastructure code, i/o, and utilities.
Is it better to keep these services' code together under one solution, or to keep them in entirely separate solutions? Or both? Integration testing will be much simpler with one solution, but I'm concerned with service encapsulation - my gut tells me I don't want a developer working on one service to know how another service works, because it can lead to unintentional coupling. Has anyone run into real-world issues with service encapsulation when keeping all codebases under one solution?
I would keep the services in separate solutions or even separate repositories.
I don't think it has anything to do with service encapsulation; it's more about dealing with code contention and making it harder to take dependencies between services.
As for common code (like contracts), put it in a separate repository and use NuGet (or similar) to reference it from your solutions.
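As a sketch of what such a shared contracts package might contain (names hypothetical): only plain message definitions, no behavior, so referencing it cannot create hidden coupling between services.

```csharp
using System;

namespace MyCompany.Contracts
{
    // Published as a NuGet package and referenced by each service solution.
    // Messages are plain data: no logic, no service internals leak out.
    public class OrderPlaced
    {
        public Guid OrderId { get; set; }
        public DateTime PlacedAtUtc { get; set; }
    }
}
```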
Make sense?

Web apps architecture: 1 or n API

Background:
I'm thinking about how to organise a web application. I will separate the front (the web site for the browser) from the back (the API): 2 apps, 2 repositories, 2 hosts. The front will call the API for almost everything.
So, if I have two separate domain services behind my API (for example, a learning context and a booking context) with no direct link between them, should I build 2 APIs (with 2 repositories, 2 build processes, etc.)? Is it good practice to build n APIs for n needs, or one "big" API? I'm speaking about a substantial web app with real traffic.
(I hope this question will not be closed as not constructive... I think it's a real question about a concrete case; sorry if not. This question and some others about architecture were not closed, so there is hope for mine.)
It all depends on the application you are working on, its business needs, the priorities you have, and so on. Generally, you have several options:
Stay with one monolithic application
Stay with one monolithic application, but decouple the domain model across separate modules/bundles/libraries
Create a distributed architecture, such as Service-Oriented Architecture (SOA) or Event-Driven Architecture (EDA)
One monolithic application
This is the easiest and cheapest way to develop an application in its initial stage. You don't have to worry about a complex architecture or complex deployment and development processes. It also works better when there are not many developers around.
Once the application grows, this model becomes problematic. You can't deploy modules separately, and the app is more exposed to anti-patterns and spaghetti code/design (especially when a lot of people work on it). The QA process takes more and more time, which may make it unusable on a CI basis. Introducing approaches like Continuous Integration/Delivery/Deployment is also much, much harder.
Within this approach you have one repo and one build process for all your APIs.
One monolithic application, but with a decoupled domain model
Within this approach you still have one big platform, but you connect logically separate modules as if they were third-party components. For example, you may extract one module and create a library from it.
Thanks to that, you are able to introduce separate processes (QA, dev) for different libraries, but you still have to deploy the whole application at once. It also helps you avoid anti-patterns, but it may be hard to maintain backward compatibility across libraries over the application's lifespan.
Regarding your question: in this approach you have a separate API, dev process, and repository for each "type of action", as long as you move its domain logic into a separate library.
Distributed architecture (SOA / EDA)
SOA has a lot of benefits. You can introduce completely different processes for each service: dev, QA, deployment. You can deploy just one service at a time. You can also use different technologies for different purposes. The QA process becomes more reliable, as it involves smaller projects. You can version the communication (API) between services, which makes them even more independent. Moreover, you have a better ability to scale horizontally.
On the other hand, the complexity of the high-level architecture grows. You have many more components to take care of: authentication/authorisation between services, security, service discovery, distributed transactions, etc. If your application is data-driven (a separate frontend consuming data through APIs) and particular services don't need to communicate with each other, it may not be that complicated (but such an assumption is IMO quite risky; sooner or later you will need them to communicate).
In this approach you have a separate API, with separate repositories and separate processes, for each "type of action" (which I understand as a separate domain model/service).
As I wrote at the beginning, the way you choose depends on the application and its needs. Anyway, back to your original question: my suggestion is to keep the APIs as separate as you can. Even if you have one monolithic application, you should be able to version the APIs separately and keep their domain logic separate, as the sketch below shows. Separating repositories and/or processes depends on the approach you choose (e.g., among those I mentioned above).
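As a small illustration of that last point (a hypothetical ASP.NET Web API sketch; routes and names are invented), each bounded context can carry its own version in its route prefix even inside a monolith:

```csharp
using System.Web.Http;

// The booking context owns its own prefix and version, independent of
// the learning context, so each API can evolve and be versioned separately.
[RoutePrefix("api/booking/v1")]
public class BookingController : ApiController
{
    [HttpGet]
    [Route("reservations/{id:int}")]
    public IHttpActionResult GetReservation(int id)
    {
        return Ok(new { Id = id }); // lookup elided
    }
}
```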
If I missed your point, please describe in more detail what answer you expect.
Best!

OpenSwing Framework

Is OpenSwing a good framework for developing professional desktop applications?
I was recently using the OpenSwing framework. I can say only the best about the functionality provided with the framework. It is a multitier concept with excellent data-binding possibilities. My app uses a small Derby DB in the background, and I'm managing it with Hibernate.
I'm sure you will be able to advance very fast and deliver a working prototype very quickly. I would advise you to read the available documentation first and to run the provided examples (http://oswing.sourceforge.net/).
However, it has another side you should be aware of, and you will probably notice it yourself when you run the examples. The GridFrame, GridFrameControler, DetailFrame, DetailFrameControler, etc. classes are not really generic. There are a lot of dependencies built in, and you will have to customize them again and again for every single implementation (as can be seen in the demos).
I took another approach: I invested some time in building my own generic classes that use the unchanged OpenSwing classes in the background. Now I only set up a properties file where all the details are pre-defined; the rest is generic, and I don't have to re-code every single frame again and again.
I hope this will help.
Regards
I used OpenSwing in a team for more than two years.
It's a pretty nice Swing framework for in-house enterprise development.
It provides great components based on the MVP pattern, such as grid, document...
If you try it, this is a good article for you about Model-View-Presenter.
And try the demos in the source; they're quite good.
JAllInOne is also a good demo application for the framework, also made by mcarniel.
It's a personal project developed solely by mcarniel. Thanks to mcarniel for the great work.