ABP without ORM - asp.net-core

I recently stumbled upon ABP (previously ASP.NET Boilerplate) as a framework to rebuild a web app in a modular way. It's very interesting indeed, and comes with a wide range of basic elements like authentication, logging, security, multi-tenancy, settings and so on...
But, as far as I have understood it by now, ABP is "strongly coupled" with EF Core or Dapper, and I don't like to use an ORM in my code; I have a more "database-driven" approach and like to write queries myself.
So, the main question is: is it possible to use ABP WITHOUT using EF Core/Dapper? Or is it better to switch to another modular framework like OrchardCore or ExtCore?
EDIT: 11/11/2020, after @hikalkan's reply.
Hi @hikalkan, thanks for your kind reply. Maybe I should explain in more detail what I want to achieve, so you can advise me better. My goal is to create a "pluggable" web app, in which I can replace a module with another one with the same functionality but different details.
A little introduction: I have a "complex" web app for HR departments of small-to-medium companies; many customers use it, and each one has its own copy installed on its premises. The app is composed of many functions: personal data, contract data, training data, shifts and so on. But each customer has slightly different modules, while the app itself is an old, monolithic one: it works, but I have to maintain different versions, almost one for each customer, which is very difficult and time consuming. Don't blame me, I "inherited" the app and have to maintain and improve it as it is.
But finally I can spend some time rebuilding it from scratch, and I want it to be "modular", so that the main part (authentication, profiling, DB interaction, theming, security, logging, etc...) stays stable & solid, shared among all installations, and each customer has a selection of modules/plug-ins to choose from. A bit like WordPress, but better.
For example, let's say I have a simple module "contactSimple" for managing contacts (emails, phone numbers, pagers, and so on); each contact has a type and a value field in the database, very basic, and 90% of my customers are happy with it. But the remaining 10% want to add a note field, an "is main contact" flag or other minor changes. Now, what I want to do is: develop the "contactEnhanced" module as a separate class library with the same interface and main functions as "contactSimple", compile it as a DLL, simply swap the DLL in the web app, update the database if needed, reload the app and the new DLL takes its place, without altering any other component.
I was thinking of simply using dynamic reflection to achieve this, but then I found that reflection is not well suited to it, because it is slow and heavy on resources, so while searching the web I found ABP.
Now, THE question: in your opinion, is ABP the framework/solution I was searching for? Please let me know!

ABP is designed to be database provider independent. It currently has two DB provider integration options: EF Core & MongoDB. That means ABP is not strongly coupled to EF Core or Dapper: it works with MongoDB too. You pass -d mongodb when you create your solution with the ABP CLI.
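For example (the solution name here is just a placeholder):

    abp new Acme.MyHrApp -d mongodb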
So, the Framework itself has no relation to any database provider. But the pre-built modules do. For example, ABP provides an Identity module that has user and role management functions; it needs a database and includes some code to interact with that database, so it can't be DB-provider independent. All the pre-built modules provide EF Core & MongoDB integration packages.
If you want to use these modules (when you create a new application from the startup templates, some modules come pre-installed), you have to decide whether to use EF Core or MongoDB for their database operations.
When it comes to your own application code: you are free to use any approach, including ADO.NET with manual SQL queries. Just do it the way you would in a regular application. If you want to isolate the database queries, create your own repository classes. That way, you don't see the ORM in your code. But the modules still use EF Core or MongoDB.
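For instance, here is a minimal sketch of such a hand-written repository using plain ADO.NET (the Contact class, the Contacts table and the "Default" connection string name are assumptions for illustration; ITransientDependency simply lets ABP register the class in the DI container by convention):

    using System.Collections.Generic;
    using Microsoft.Data.SqlClient;
    using Microsoft.Extensions.Configuration;
    using Volo.Abp.DependencyInjection;

    // Hypothetical application entity; a plain class, not an EF Core entity.
    public class Contact
    {
        public int Id { get; set; }
        public string Type { get; set; }
        public string Value { get; set; }
    }

    // Registered automatically by ABP's conventional DI because of ITransientDependency.
    public class ContactRepository : ITransientDependency
    {
        private readonly string _connectionString;

        public ContactRepository(IConfiguration configuration)
        {
            // "Default" is the conventional connection string name in the startup templates.
            _connectionString = configuration.GetConnectionString("Default");
        }

        public List<Contact> GetAll()
        {
            var contacts = new List<Contact>();

            using var connection = new SqlConnection(_connectionString);
            using var command = new SqlCommand("SELECT Id, Type, Value FROM Contacts", connection);

            connection.Open();
            using var reader = command.ExecuteReader();
            while (reader.Read())
            {
                contacts.Add(new Contact
                {
                    Id = reader.GetInt32(0),
                    Type = reader.GetString(1),
                    Value = reader.GetString(2)
                });
            }

            return contacts;
        }
    }

Callers just inject ContactRepository, so no ORM ever appears in your own code.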
Actually, there is a way to completely drop the EF Core references: implement all the repositories needed by the pre-built modules yourself. They will still work, since they only depend on repository interfaces.
BTW, if you use OrchardCore, it uses YesSQL (yes, "YES SQL") as a core dependency and you cannot change it, because the whole OrchardCore framework depends on it everywhere. Also, OrchardCore is UI-dependent: it uses the ASP.NET Core MVC / Razor Pages UI, while the ABP Framework is UI-independent and provides three built-in options: Angular, MVC and Blazor.
Edit: after the question was edited
The story you've explained is one of the goals of the ABP Framework. ABP is highly modular and also extensible. We built all the modules to be extensible. For example, the module entity extension system allows you to add new properties to existing entities of a module (while the module is used as a NuGet package) without touching its source code. You can also override the server-side logic of the module.
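For example, a minimal sketch of that entity extension API (the property name is only an illustration, and this is typically called once at application startup):

    using Volo.Abp.Identity;
    using Volo.Abp.ObjectExtending;

    public static class MyEntityExtensions
    {
        public static void Configure()
        {
            // Adds a new property to the Identity module's IdentityUser entity
            // without touching the module's source code.
            ObjectExtensionManager.Instance
                .AddOrUpdateProperty<IdentityUser, string>("SocialSecurityNumber");
        }
    }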
But modularity is hard in general. I mean the module itself should also be designed to be extensible/replaceable. If you want to declare interfaces for a module so that the module can be completely replaceable, you have a lot of restrictions. For example, you cannot write SQL join queries against that module's tables (because a replacement module may use a different table structure).
However, if the customizations are lighter, you can follow the ABP Framework's own module design to make your modules extensible/customizable. See https://docs.abp.io/en/abp/latest/Customizing-Application-Modules-Guide and https://docs.abp.io/en/commercial/latest/guides/customizing-modules (the commercial docs will be moved to the open source side, since those modules are available as open source now). BTW, ABP supports loading modules as DLLs from a folder: it reads the DLLs and initializes the modules on application initialization.
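As an illustration of that plug-in loading, here's a minimal sketch based on the plug-in module support (the module name and folder path are placeholders, and the exact API may differ slightly between ABP versions):

    using Microsoft.Extensions.DependencyInjection;
    using Volo.Abp.Modularity;
    using Volo.Abp.Modularity.PlugIns;

    // Placeholder host module for the web application itself.
    public class MyWebModule : AbpModule
    {
    }

    public class Startup
    {
        public void ConfigureServices(IServiceCollection services)
        {
            services.AddApplication<MyWebModule>(options =>
            {
                // Scans the folder for DLLs and loads any ABP modules found in them.
                options.PlugInSources.AddFolder(@"C:\MyApp\PlugIns");
            });
        }
    }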
I can only explain what ABP offers. I can't make a recommendation, unfortunately, because a real-life project is complex and I can't predict all the problems & requirements you will have in the future :)

Related

Does Suave include tools for database access?

Is there a built-in way to access databases in Suave?
Suave is a web server library, so it doesn't come with a built-in way to access a database or anything like a SQL abstraction.
If you're looking for a framework that does have data access built in, Saturn is a fine choice. It's also used as the backend for the SAFE stack if you're interested in full-stack F#.
Under the covers it's relatively simple: the template just lays down a CLI that lets you scaffold out some code and run migrations, and Dapper is used as the database access library. But it does at least put things together in a template so that you can see how to connect things.

Choosing between dnx451 and dnxcore50 for Azure Web App in terms of functionality, performance, etc.

I am creating a new project that will run in an Azure Web App on the new ASP.NET 5. We are not planning to run it on Linux or anything like that, at least for now. So the question is: should I try to keep both frameworks if possible, just in case, or should I prefer one of them? There are, for example, far fewer dependencies that I can use with dnxcore50, which is not so nice. So the main question is: are there any benefits of using dnxcore50 over dnx451 when running in an Azure Web App, such as performance, stability, etc.?
I have to start by saying that I'm still a beginner in ASP.NET 5 (like most others), which is why I didn't post my answer earlier, and you should ignore my reputation, because it comes from other subjects that I know better.
I think that everybody who switches to ASP.NET 5 asks the same question: whether it makes sense to keep both frameworks in their projects. Below I post my personal thoughts on the subject.
My personal choice, and my short recommendation to you: keep both frameworks until you find some really important reason to drop one of them.
ASP.NET 5 is still not final. The strategy is not fully fixed and it can change on short notice. Just some examples: previous beta versions supported "Helios" as an option for hosting ASP.NET 5 applications on IIS; that option was dropped later (see the statement). Even the name dnxcore50 has now been renamed to dotnet5.4, at least in all internal Microsoft components (see the announcement). One can suppose that other things could change in the future. Thus I think that putting all your eggs in one basket would be too dangerous right now: keeping both frameworks reduces the risk.
The next thing I found was the following: dnxcore50 (dotnet5.4, or CoreFX, the .NET Core foundational libraries) doesn't support many features supported by the .NET Framework. One important example for me was the missing XSD schema validation (see here and here). I use XML only in combination with XSD schema validation; I prefer JSON in most other cases. Keeping both frameworks in your project helps you locate the parts of your code that are not yet implemented in CoreFX. That in turn helps you move the code into a separate component or change the implementation.
About performance: one should distinguish the potential of both frameworks from their current implementation. In general, CoreFX was redesigned and decomposed. Many parts of the one big mscorlib were separated or removed (remoting, AppDomains and so on). That means the performance of CoreFX should be better; theoretically, the factored API can provide better performance. Moreover, one can more easily improve individual parts of CoreFX and publish a new version with improved performance. Having more modules, instead of one monolith, gives us a new way to improve performance and fix bugs. On the other hand, updating dependencies to new versions can introduce new compatibility problems, which increases the risk and can decrease stability. By keeping both frameworks we can test whether a new problem also exists in the alternative framework. That lets us determine whether the latest dependency changes, rather than the latest changes in our own code, are the origin of new problems.
I could continue with the pros and cons of using each framework, but nobody likes to read long text, and all my arguments lead me to the same practical decision: keep both frameworks in my projects by default, until I find a real requirement to drop one of them.
No major advantages really so far.
This might change in the future, which is why I'm planning to target both (CoreCLR and .NET 4.6). A lot of investment is going into CoreCLR, but also into Docker and Service Fabric.
Just my 2 cents.

Preventing data-loss when deploying a new version of EF website?

I'm building an ASP.NET MVC 4 application. Now, when I deploy a new version of this site to my live environment, I don't want to lose data just because my column names got renamed or something similar.
I'm using Entity Framework 4 to store objects in the database. Now, I am aware that this framework can generate so-called change scripts. However, I do not trust them. Am I just being over-cautious, or do I have reason not to trust them?
I'm designing the models in an EDMX diagram which then generates the tables for me. This makes it complicated for me to generate proper change scripts, especially when I do not know for sure how things are mapped into the database in certain scenarios.
So how do I get around this? If you use the same things as I do, what is your way of preventing data loss on deployment?
I figured out that I probably have to wait for Entity Framework 5.
It seems that such a feature is already planned.

Is Doctrine2 too 'big' for this project?

I'm writing an application that manages something like Drupal's nodes. I'm planning on using the app in various content management systems / applications (Concrete5, Wordpress, custom Zend & Yii applications, etc...).
Since I'm using it in so many different places, I have to package an ORM with the app (i.e. I can't use Concrete5's or Yii's ORM, etc...). I love Doctrine 2, but am concerned that this is too 'big' of an ORM to be packaged with my app.
It gets messy, for example, if I'm incorporating this app into a Zend application which is already running Doctrine 2. I don't want two 'instances' of Doctrine running in the same app. Is this a warranted concern?
Question: Is Doctrine 2 too 'big' for this project? If so, what would be a good alternative ORM?
If you are going to use your application as an extension for other CMSes and/or frameworks, you should definitely use an ORM, for the following reasons that come to mind:
1. CMS database installations are different. Some use MySQL, some use Oracle, etc., so you would either have to create your own adapters, or
2. use each CMS's native database abstraction layer, which means you would have to rewrite your model for every CMS plugin you are going to make.
3. Doctrine can do many big jobs, but using Doctrine is rather easy, and Doctrine is not resource intensive.
4. Using more than one instance of Doctrine will not be a problem, as far as I know.
5. However, Doctrine 2 requires at least PHP 5.3, and some shared servers might have older versions of PHP. This problem will resolve itself as time passes and 5.2 becomes obsolete.
However, in some CMSes more than one connection will be made for your extension to work (one for the CMS's native database queries and one for your Doctrine queries).
Way 1: work with different ORMs via adapters
(+) better integration with frameworks
(-) a lot of work to implement adapters
(-) loss of flexibility (limited by your adapters' interface)
Way 2 (my choice): use PDO with FETCH_CLASS; it's comfortable enough (you can fetch data into instances of your classes). Most modern PHP ORMs work through PDO, so integration should be easy.
Also, about Doctrine 2 & Yii: I tested this combination and it works fine.

Maintaining two versions of a business class library

Our core business application uses a library (C# project) of business objects. Data access is done using the Wilson O/R Mapper (we're migrating to NHibernate this summer). The application has 3 front-end UIs: Windows Forms, ASP.NET, and a Windows Forms app that is installed on tablet PCs. The three front-ends perform different functions but they all access a core subset of the business classes.
The tablet PC application is the problem. We try to limit the amount of data pushed to the tablets to reduce the time it takes them to sync using SQL Server merge replication. The problem we've run into is when we add new functionality to the main application that we have no need to distribute to the tablet PCs or, if it's sensitive data, a strong need to not distribute it. Some of this can be controlled through replication, but we occasionally introduce dependencies in the core business objects that must be present in order for the O/R mapper to work.
Ideally, we would have two versions of the core business object library, Full and Compact. This seems like it would be a maintenance nightmare. Are there any strategies for managing this? Or alternatives? How does Microsoft manage the full and compact .NET Frameworks?
Your question talks about Tablet PC, which is really just XP and therefore the CF really isn't relevant, but for the sake of the question subject itself we can still talk about maintaining code used by the CF and the FFx (assuming you actually meant Windows Mobile or Windows CE).
First thing to know is that CF assemblies are retargetable. This means that a CF assembly can be directly used by a full-framework app without any recompiling (assuming it doesn't use any device-specific stuff like P/Invoking coredll without checking the runtime environment, using the WindowsMobile namespace, etc.).
If using retargeting doesn't get you all the way there, then you can deal with the maintenance using compiler directives as well as partial classes. Daniel Moth covers tips on these quite well in his MSDN article.
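A rough illustration of that directive + partial class technique (the COMPACT symbol and the class members are invented; you would define the symbol only in the compact/tablet build configuration):

    // Customer.Shared.cs - compiled into both the full and the compact builds
    public partial class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }

    #if !COMPACT
        // Only the full-framework build carries this member.
        public decimal Salary { get; set; }
    #endif
    }

    // Customer.Full.cs - included only in the full (desktop/web) project file
    public partial class Customer
    {
        public void ExportToReportingSystem()
        {
            // Full-framework-only logic lives here, so the compact build
            // never takes on this dependency.
        }
    }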
One thing you may be able to do, if you can compile for each platform separately, is use compiler directives to limit what is needed by the Tablet PC platform. However, since you're using an O/R mapper, that may prove to be difficult.
Now, in an ideal world you would have your domain objects (the ones that map through the O/R mapper) shared with very little business logic, and then a BO layer that consumes those domain objects. If you managed to break out your code base this way, you could in theory deploy just the separate layers you need.
However it sounds to me more like you need to perform an intelligent split.
What you probably need to do is segment your code so that the Tablet PC BOs are in the core root BO assembly. Then have a BO extension assembly with the additional objects, rules, etc. that are needed for the WinForms / web app versions.
So while you would have two domain-level business object components at this point, you would not actually have any duplication, as your Tablet PC BO objects would also be the base for the WinForms / ASP.NET app. The extension DLL would only contain the extras needed for the bigger versions of the applications.
If you followed this approach, it might make things easier to manage. Just look at it in terms of the common stuff needed everywhere versus the specialized pieces. :)
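A rough sketch of that core + extension split (the assembly and type names are invented for illustration):

    // Core.BusinessObjects.dll - deployed everywhere, including the tablet PCs
    public class WorkOrder
    {
        public int Id { get; set; }
        public string Description { get; set; }
    }

    // Extended.BusinessObjects.dll - deployed only with the WinForms / ASP.NET apps
    public class WorkOrderExtended : WorkOrder
    {
        // Sensitive or full-app-only data stays out of the core assembly,
        // so it never reaches the tablets via replication.
        public decimal BillingRate { get; set; }
        public string ConfidentialNotes { get; set; }
    }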
I can go into much greater detail if you want, I just wanted to give you a basic hint.