Is it possible to use NHibernate without altering a DDD model that is part of a framework - nhibernate

I dig a lot of things about the DDD approach (Ubiquitous language, Aggregates, Repositories, etc.) and I think that, contrary to what I read a lot, entities should have behavior rather than being agnostic. All the examples I see tend to present entities with virtual automatic properties and an empty constructor (protected or, worse, public) and that's it. I consider this kind of object more like a DTO than an entity.
I'm in the process of creating a framework with its specific API and I don't want to be tied to an ORM. So I built the domain first (without thinking of persistence), and now that I would like to use NHibernate as a persistence tool, I added a new project to my current solution to help ensure that my model isn't altered to support NHibernate. This project should be an implementation of the abstract repositories that live inside my domain. And now the difficulties arise.
Since it is my first time with NHibernate (I'm also trying Fluent NHibernate, but it seems even more restrictive) I would like to know:
Is it possible to use NHibernate without altering a DDD model that is part of a framework
The things (constraints) that are necessary for NHibernate to work as expected and efficiently (virtual properties, empty constructors, etc.). I think this list would be helpful to a lot of people who are starting to learn NHibernate.
Please keep in mind that I'm building a framework so the Open/Closed Principle is very important for me.
P.S.: Sorry if my english is not good, I'm from Montreal and I speak french.
Edit 1: Here is one problem I have with NHibernate now - How to map Type with Nhibernate (and Fluent NHibernate)

For NHibernate:
All mapped classes require a default (no-arguments) constructor. The default constructor does not have to be public (it can be private so that it is not a part of the API), but it must exist. This is because NHibernate must be able to create an instance of the mapped class without passing any arguments. (There are workarounds, but don't do that.)
All mapped properties for which lazy loading will be required must be marked virtual. This includes all reference properties and all collection properties. This is because NHibernate must be able to generate a proxy class deriving from the mapped class and overriding the mapped property.
All mapped collection properties should use an interface as the property type. For example, use IList&lt;T&gt; rather than List&lt;T&gt;. This is because the collection types in the .NET Framework tend to be sealed, and NHibernate must be able to replace a default instance of the collection type with its own internal implementation of that collection type.
For NHibernate, prefer Iesi.Collections.Generic.ISet<T> to System.Collections.Generic.IList<T>, unless you are sure that what you want is actually a list rather than a set. This requires being conversant in the theoretical definitions of list and set and in what your domain model requires. Use a list when you know that the elements must be in some specific order.
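Putting those constraints together, a hypothetical entity shaped for NHibernate might look like the sketch below (the Order/OrderLine names and members are illustrative, not from the question):

```csharp
using System.Collections.Generic;

// Sketch of an entity meeting NHibernate's constraints: a non-public
// no-args constructor, virtual members, and an interface-typed collection.
public class Order
{
    private IList<OrderLine> _lines = new List<OrderLine>();

    // Required by NHibernate; protected keeps it out of the public API.
    protected Order() { }

    public Order(string customerName)
    {
        CustomerName = customerName;
    }

    // virtual so NHibernate can generate a lazy-loading proxy subclass.
    public virtual int Id { get; protected set; }
    public virtual string CustomerName { get; protected set; }

    // Interface type (IList<T>, not List<T>) so NHibernate can substitute
    // its own persistent collection implementation.
    public virtual IList<OrderLine> Lines => _lines;

    public virtual void AddLine(OrderLine line) => _lines.Add(line);
}

public class OrderLine
{
    protected OrderLine() { }
    public OrderLine(string product) { Product = product; }

    public virtual int Id { get; protected set; }
    public virtual string Product { get; protected set; }
}
```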
Also note that it's typically not easy to swap object-relational mapping frameworks, and in many cases it is impossible, when you have anything beyond a trivial domain model.

The short answer to your question is that it is not possible, but if you don't need lazy loading the required alterations are trivial.
No matter what, you will have to add default constructors to classes that do not already have them. If you are willing to forgo lazy loading, those default constructors can be private, and you don't have to make any other changes to your domain model to use NHibernate.
That's awfully close to persistence ignorance.
Having said that, if you want lazy loading, you'll need to make several changes (outlined in other answers to this question) so that NHibernate can create proxies of your aggregated entities. I'm personally still trying to decide whether lazy loading is an enabling technology for DDD or whether it's a premature optimization that requires too many intrusive changes to my POCOs. I'm leaning toward the former, though I really wish NHibernate could be configured to use a specific constructor.
You might also take a look at Davy Brion's blog (I particularly liked Implementing A Value Object With NHibernate), which is really illuminating if you're interested in domain-driven-design and avoiding anemic domain models.

In my experience, the only thing that NHibernate requires of a domain is virtual properties and methods and a default no-argument constructor, which as Jeff mentioned, can be marked private or protected if need be. That's it. NHibernate is my OR/M of choice, and I find the entire NHibernate stack (NHibernate, NHibernate Validator, Fluent NHibernate, LINQ to NHibernate) to be the most compelling framework for persisting POCO domains.
A few things you can do with NHibernate:
Decorate your domain model with NHV attributes. These constraints allow you to do three things: validate your objects, ensure that invalid entities are not persisted via NHibernate, and help autogenerate your schema when using NHibernate's SchemaExport or SchemaUpdate tools.
Map your domain model to your persistent storage using Fluent NHibernate. The main advantage, for me, in using FNH is the ability to automap your entities based on conventions that you set. Additionally, you can override these automappings where necessary, manually write class maps to take full control of the mappings, and use the XML hbm files if you need to.
Once you buy into using NH, you can easily use the SchemaExport or SchemaUpdate tools to create and execute DDL against your database, allowing you to automatically migrate domain changes to your database when initializing the NH session factory. This allows you to forget about the database, for all intents and purposes, and concentrate instead on your domain. Note, this may not be useful or ideal in many circumstances, but for quick, local development of domain-centric apps, I find it convenient.
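As a rough sketch of what that looks like (assuming you have already built an NHibernate Configuration; BuildConfiguration() here is a placeholder for your own setup code):

```csharp
using NHibernate.Cfg;
using NHibernate.Tool.hbm2ddl;

// Assumed: your own method that loads mappings into a Configuration.
Configuration cfg = BuildConfiguration();

// Generate DDL from the mapped model: first argument writes the script
// to the console, second argument executes it against the database.
new SchemaExport(cfg).Create(true, true);

// Or, to non-destructively bring an existing schema up to date:
new SchemaUpdate(cfg).Execute(true, true);
```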
Additionally, I like using generic repositories to handle CRUD scenarios. For example, I usually have an IRepository that defines methods for getting all entities as an IQueryable, getting a single entity by id, saving an entity, and deleting an entity. For anything else, NH offers a rich set of querying mechanisms: you can use LINQ to NHibernate, HQL, Criteria queries, and straight SQL if need be.
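A minimal version of such a generic repository contract might look like this (the exact interface shape is my own sketch, not from the answer):

```csharp
using System.Linq;

// Sketch of a minimal CRUD contract. Anything richer is handled by
// querying NHibernate directly (LINQ, HQL, Criteria, raw SQL).
public interface IRepository<T> where T : class
{
    IQueryable<T> GetAll();
    T GetById(int id);
    void Save(T entity);
    void Delete(T entity);
}
```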
The only compromise you have to make is using NHV attributes in your domain. This is not a deal breaker for me, since NHV is a stand-alone framework which simply adds additional capabilities if you choose to use NHibernate.
I have built a few apps using NH, and each has a persistence ignorant domain with all persistence concerns separated into its own assembly. That means one assembly for your domain, and another for your fluent mappings, session management, and validation integration. It's very nice and clean and does the job well.
By the way: your English is pretty darn good, I wish my French was up to par ;-).

Just to put my two bits in, I struggled with the same thing once but I overcame this by:
Adding a protected default constructor to every entity.
Making Id virtual.
Let's take, for example, upvotes and downvotes for the Vote entity on my experiment website:
http://chucknorrisfacts.co.uk/ (NHibernate + MySQL with Mono)
public class Vote : Entity
{
    private User _user;
    private Fact _fact;

    // true: upvote, false: downvote
    private bool _isupvoted;

    // for NHibernate
    protected Vote() { }

    public Vote(User user, Fact fact, bool is_upvoted)
    {
        Validator.NotNull(user, "user is required.");
        Validator.NotNull(fact, "fact is required.");
        _fact = fact;
        _user = user;
        _isupvoted = is_upvoted;
    }

    public User User
    {
        get { return _user; }
    }

    public Fact Fact
    {
        get { return _fact; }
    }

    public bool Isupvoted
    {
        get { return _isupvoted; }
    }
}
This class inherits from Entity, where we stick the minimum necessary for NHibernate.
public abstract class Entity
{
    protected int _id;
    public virtual int Id { get { return _id; } }
}
and the Fluent mapping, where you use Reveal to map the private field.
public class VoteMap : ClassMap<Vote>
{
    public VoteMap()
    {
        DynamicUpdate();
        Table("vote");
        Id(x => x.Id).Column("id");
        Map(Reveal.Member<Vote>("_isupvoted")).Column("vote_up_down");
        References(x => x.Fact).Column("fact_id").Not.Nullable();
        References(x => x.User).Column("user_id").Not.Nullable();
    }
}
You could probably place the protected default constructor in the Entity class and configure NHibernate to use it instead, but I haven't looked into that yet.

Related

Registering multiple DbContext Instances on startup for use in Generic Repository

I'm trying to create a generic repository which accepts two generic types, e.g.
public class EfRepository<T, TS> : IAsyncRepository<T, TS>
    where T : BaseEntity
    where TS : DbContext
{
    ..........
}
and in my startup.cs I have the usual mapping:
services.AddScoped<DbContext, ConfigDbContext>();
How can I now add another mapping to DbContext? I have tried adding another mapping between DbContext and another context I created, but it only ever uses the first mapping.
I have multiple databases I need to consume and would ideally like to have a DbContext for each one, but I can't see a way of having multiple DI mappings.
In my EfRepository class, the following code throws an exception when I add an extra DbContext to my code and use it:
protected readonly DbContext _dbContext;

public EfRepository(DbContext dbContext)
{
    this._dbContext = (TS)dbContext;
}
The exception is "Unable to convert from Type1 to Type2", and I know this is because DbContext is bound to Type1 in my startup.cs.
How can I (if possible) use multiple DbContext's in a generic fashion?
That's not how you register DbContext, which is the source of your problem. The correct method is:
services.AddDbContext<ConfigDbContext>(o =>
o.UseSqlServer(Configuration.GetConnectionString("Foo")));
Done correctly, adding another is exactly the same:
services.AddDbContext<SomeOtherContext>(o =>
o.UseSqlServer(Configuration.GetConnectionString("OtherConnectionString")));
Then, which one gets pulled depends on which one you inject, which, yes, does mean that you need to specify the actual type you want to inject, not DbContext generically. However, that only needs to be done on the derived class. In other words, you can keep the code you have (though you should not cast the context) and simply do:
public class FooRepository : EFRepository<Foo, ConfigDbContext>
{
    public FooRepository(ConfigDbContext context)
        : base(context) { }
}
You can leave it upcast to DbContext, as you don't need the actual type to do EF things. To get at the DbSets, you can use the generic Set<T>:
var foos = _dbContext.Set<Foo>();
And now, with all that said, throw it all out. It's completely unacceptable to use the repository pattern with an ORM like EF. EF already implements the repository and unit of work patterns. The DbContext is your unit of work, and each DbSet is a repository. Adding an extra layer on top of that does nothing but add maintenance concerns and extra entropy to your code, and frankly, creating a repository/unit of work that plays nice with EF is so trying as to be nearly impossible. More often than not, you're just going to hamstring EF, making it less efficient and harder to use.
Using an ORM like EF is choosing to use a third-party DAL. That is all. There's no need to create your own DAL at that point, because you've outsourced it. I'm not sure why so many people get hung up on this. When was the last time you created your own routing framework or your own templated view preprocessor? Never. You just use third-party libraries for that (the framework), so why is it a problem to use a third-party library for your DAL as well?
Then, you may ask: well, what about abstracting the EF dependency? Well, first, if you're thinking you might switch ORMs some day down the road, you won't. It just never happens. You'll sooner rewrite the whole app from the ground up. Second, the repository pattern doesn't even achieve this. You still have an EF dependency that bubbles all the way up to the front-line app. There's no way around that.
For true abstraction, you can use something like microservices architecture. Other than that, just embrace the dependency or don't use it at all, and really create your own DAL.

How to use and create DTOs in the OOP world?

What is the right way to create DTOs from business objects?
Who should be responsible for creating them? The BO? The DTO itself, built from the BO? Some static factory?
Where should they reside in code if I have, e.g., some core library and a specific service API library that I need the DTO for? In the core library next to the BO (which seems incorrect), or in the specific library?
If I have encapsulated fields in my BO, how does the DTO grab them? (Obviously in the case when the BO is not responsible for creating DTOs.)
As an example assume that I have some Person BO like this:
class Person
{
    private int age;
    public bool isBigEnough => age > 10;
}
I want age to be internal state of Person, but I still need to communicate my BO to some API. Or does having a private field in my class that I want to send somewhere already mean that it should be public?
Are there any general considerations of how to use DTOs alongside business classes with encapsulated data?
Update:
In addition to the approaches that @Alexey Groshev mentioned, I came across another one: we separate the data of our BO class into some Data class with public accessors. The BO wraps this data with its API (probably using composition) and, when needed, it can return its state as a clone of the Data class. So the DTO converter will be able to access the domain object's state but won't be able to modify it (since it's just a copy).
There're multiple options available, but it would be difficult to recommend anything, because I don't know the details about your project/product. Anyway I'll name a few.
You can use AutoMapper to map BOs to DTOs and vice versa. I personally dislike this approach, because it's quite difficult (but possible) to keep it under control in medium/large-sized projects. People don't usually bother to configure mappings properly and just expose the internal state of their objects. For example, your isBigEnough would disappear and age would become public. Another potential risk is that people can map DTOs to/from EF/Hibernate objects. You can find articles which explain why that's considered a bad practice.
As you suggested, a BO can create the DTO by itself, but how would you implement this approach? You can add methods or factory methods to your entities, e.g. public PersonDto ToDto(). Or you can add an interface, e.g. public interface IDtoConvertable&lt;T&gt; { T ToDto(); }, and choose which entity or aggregate root will implement it. Your Person class would look like this: class Person : IDtoConvertable&lt;PersonDto&gt; {... public PersonDto ToDto() {...} }. In both cases the DTO namespace/assembly must be accessible by the entities, which sometimes can be a problem, but usually it's not a biggie. (Make sure that DTOs cannot access entities, which would be much worse.)
(C#) Another option is to return a delegate which creates the DTO. I decided to separate this from (2), because the entity doesn't really create the DTO by itself, but rather exposes a piece of functionality which creates the DTO. So you could have something like public Func&lt;PersonDto&gt; ToDto() {...}. You might want to have an interface as in (2), but you get the idea, don't you? Do I like this approach? No, because it makes code unreadable.
As you see, there are more questions than answers. I'd recommend you to make a few experiments and check what works for you (your project) and what doesn't.
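For concreteness, option 2 might be sketched like this (PersonDto's shape and the constructor are my own illustrative choices; only the interface and method names come from the answer):

```csharp
// The DTO is a plain data carrier with no behaviour.
public class PersonDto
{
    public int Age { get; set; }
}

public interface IDtoConvertable<T>
{
    T ToDto();
}

// The BO keeps `age` encapsulated and exposes its state
// only through the conversion method.
public class Person : IDtoConvertable<PersonDto>
{
    private int age;

    public Person(int age) { this.age = age; }

    public bool IsBigEnough => age > 10;

    public PersonDto ToDto() => new PersonDto { Age = age };
}
```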
I think the answer to question 5 will address the other questions too.
Are there any general considerations of how to use DTOs alongside business classes with encapsulated data?
Remember, a DTO is solely to transfer data. Do not concern yourself with implementing any kind of rules in the DTO. All it is used for is to move data from one subsystem to another (NOT between classes of the same subsystem). How that data is used in the destination system is out of your control -- although as the God programmer you inherently know how it is going to be used, DO NOT let that knowledge influence your design -- and therefore there should be no assumptions expressed as behaviour or knowledge accessors -- so, no isBigEnough.

Entities depend upon Repository abstractions

How to make entities lazy load its relationships?
For example: Post and Comment models, where a Post can have 0 or more Comments. How to make the getComments() method on Post entity lazy load its Comments?
My first thought is to have a CommentRepository injected into my Post entity; how bad is this? Since entities and repositories are both part of my domain, why can't they have two-way knowledge of each other?
Thank you
UPDATE
I know there are many excellent industry-standard ORMs that perform lazy loading for the main languages out there, but I don't want to rely on their magic. I'm looking for an ORM/DBAL-agnostic solution to ensure the application's low coupling.
Aggregates represent a consistency boundary, so there should never be a need to lazy-load related data, as the aggregate as a whole should always be consistent. All objects that belong to an aggregate have no need to exist on their own. If you do have an object that has its own life-cycle, then it needs to be removed from the aggregate.
When you do find that you need to do this, you may want to rethink your design. It may be that you are using your object model to query. You should rather have a lightweight query model that can perform this function.
Injecting repositories or services into entities is generally not the best idea. A double-dispatch mechanism should be preferred.
But in your case I would still try to not lazy-load.
Consider using a proxy that subclasses Post and overrides the getComments() method. Inject the proxy with the CommentRepository and access it in the overridden getComments() method.
This is how an ORM would typically do it. It keeps your domain classes clean as only the proxy is dependent on a data access mechanism.
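A rough sketch of that proxy, in C# (the LazyPost name, the ICommentRepository shape, and the load-on-first-access logic are all illustrative):

```csharp
using System.Collections.Generic;

public class Comment { }

// The domain class stays persistence-ignorant.
public class Post
{
    protected IList<Comment> comments = new List<Comment>();

    public virtual IList<Comment> GetComments() => comments;
}

// Hypothetical repository abstraction for loading comments.
public interface ICommentRepository
{
    IList<Comment> FindByPostId(int postId);
}

// Only the proxy depends on data access; it loads comments on first use.
public class LazyPost : Post
{
    private readonly ICommentRepository _repository;
    private readonly int _postId;
    private bool _loaded;

    public LazyPost(ICommentRepository repository, int postId)
    {
        _repository = repository;
        _postId = postId;
    }

    public override IList<Comment> GetComments()
    {
        if (!_loaded)
        {
            comments = _repository.FindByPostId(_postId);
            _loaded = true;
        }
        return comments;
    }
}
```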
First, you should separate the domain concept from the details of its realization. The Aggregate pattern is about how to organize your domain; lazy loading is an implementation detail.
Also, I disagree with @Eben Roux about the inconsistency of aggregates: lazy loading contradicts nothing, in my opinion. Let me explain why.
Lazy loading itself
To understand how lazy loading can be implemented, you may refer to Martin Fowler's PoEAA 'Lazy Load' pattern. For me, the proxy pattern is the best solution.
Also, it's important to note that most ORMs nowadays support lazy loading, BUT for the data model (not the domain model).
It is a good practice to separate the data model and the domain model and use repositories to hide this transformation:
Separated domain and data models
In this case, objects of the domain model are constructed inside repositories that hide the ORM context. The required data object and all associations are loaded by the ORM, then the transformation to the domain model is performed, and finally the constructed domain object is returned.
The question is how to load some associations not during creation of the domain object, but during its lifetime. You can use a Repository inside the entity, and I see nothing wrong with it. It would look like:
public class Post {
    private ICommentsRepository _commentsRepository;
    private IList<Comments> _comments;

    // necessary to perform lazy loading (the repository always works with ids)
    private IList<int> _commentIds;

    // realize lazy loading
    ...
}
There are problems:
Your model now becomes less clear. It contains 'technical' information like _commentIds.
As soon as you want to define ICommentsRepository, you claim Comment to be an aggregate root. If we introduce the Aggregate pattern into the domain model, repositories should be created only for aggregate roots. Thus it means that Comment and Post are different aggregate roots, and possibly that is not what you want.
There is better solution:
public interface ICommentList {
    ...
}

public class CommentList : ICommentList {
    ...
}

public class CommentListProxy : ICommentList {
    private CommentList _realCommentList;
    private IList<int> _commentIds;

    // realize lazy loading here using the ORM's capabilities!
    // don't use a repository here!
}

public class Post {
    private ICommentList _commentList;
    ...
}
The Post repository will initialize the _commentList field with a proxy object. Also, it is necessary to say:
CommentListProxy belongs to the data model layer, not to the domain model. It uses the ORM's capabilities to implement lazy loading,
and thus doesn't use repositories, and thus you may consider CommentList a part of the Post aggregate.
The only possible disadvantage of this approach is the implicit database querying when operating with domain objects. This must be clear to users of the Post class.
Smart ORMs
Finally, there are ORMs which allow you to use the same model for both domain and data. They realize lazy loading for the domain model in the same way as for the data model. Take a look at DataObjects.Net. For some cases it is a good solution.

DDD with L2S or NHibernate... about persisting data of business objects

I have been working on my first experimental DDD project. The objective of this project is for me to get a feel for the whole DDD concept. Oddly enough, even though I have read it's the more difficult part, I find the ubiquitous language "translation" easier than the whole architecture itself, thus my question.
In my past projects I have only used L2S (LINQ to SQL). My applications were not DDD per se, but they did have business objects (aside from the ones that LINQ to SQL generates) and I have a repository for these objects. For example,
public class Customer
{
    public int ID { get; set; }
    public string Fullname { get; set; }
    public Address Address { get; set; }
    public List<Invoice> Invoices { get; set; }
}
Now, in L2S, I have to break this class down into three different queries and submit them to the database. I have a mapper (extension methods) to make my life "easier". Something like this:
public void AddCustomer(Customer customer)
{
    // This customer I am passing is the business object.
    // For the sake of the demo, I am going to avoid the whole Attach(), check for ID, etc.
    // I think you are going to get what I am trying to do here.
    using (var context = new L2SContext())
    {
        context.CustomerEntity.InsertOnSubmit(customer.ToEntity());
        context.AddressEntity.InsertOnSubmit(customer.Address.ToEntity());
        context.InvoicesEntity.InsertAllOnSubmit(customer.Invoices.ToEntity());
    }
}
OK. Later on I have a SubmitChanges() method in the context where I actually persist the data to the database.
Now, I don't know much, almost anything, about NHibernate. But looking at some examples, I suspect that NHibernate takes care of all that breakdown for you (because of the mapping?), so you only have to pass Customer and it will do the rest. Is that correct?
I am willing to learn NHibernate if I really see a HUGE benefit from it.
Thank you for checking out my question.
EDIT: Have you heard of Codesmithtools.com? They have a framework generator for LINQ to SQL, EF, and NHibernate. Has anyone tried their NHibernate one? I have used PLINQO for LINQ to SQL, but they add so much crap to the classes that I believe is unnecessary. Pretty much the classes are suited to be used, by bad programmers, as business classes, DTOs, ViewModels, etc. All in one :). Terrible. BUT, they are really good at generating all that. I have to give them KUDOS for that.
A few points for NHibernate over Linq-2-SQL for DDD:
Persistence by reachability. This can also be called cascading saves, but it would allow you to persist the customer entity without having to explicitly insert them. This fits nicely with DDD because Customer would be an aggregate and Address would be a value object. In NHibernate, value objects are represented as component mappings and entities and aggregates as class mappings. Aggregates should be persisted and retrieved as single units and NHibernate allows you to do this.
Persistence ignorance. NHibernate allows you to design your classes as pure POCOs, without references to additional libraries. As far as I remember, L2S required a special type for collections, as well as requiring explicit foreign keys as properties. Note, that even with NHibernate persistence ignorance is an ideal, not a goal.
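In Fluent NHibernate terms, the two points above might map roughly like this (a sketch only; the Address columns and the collection cascade choice are my assumptions, not from the answer):

```csharp
using FluentNHibernate.Mapping;

public class Address
{
    public string Street { get; set; }
    public string City { get; set; }
}

// Sketch: Customer as an aggregate root, Address as a value object
// (component mapping), invoices cascaded so the aggregate persists
// as a single unit (persistence by reachability).
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.ID);
        Map(x => x.Fullname);

        // Value object: stored as columns of the customer row.
        Component(x => x.Address, a =>
        {
            a.Map(x => x.Street);
            a.Map(x => x.City);
        });

        // Saving the Customer saves its invoices too.
        HasMany(x => x.Invoices).Cascade.AllDeleteOrphan();
    }
}
```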
As pointed out by others, there is a steep learning curve to NHibernate. For example, lazy loading can be problematic. But it is worth it overall.
Your question is open-ended. It is obvious that you know how LINQ to SQL works. As the first comment already said: yes, NHibernate can provide cascading saves. But this is only the beginning... Please, first of all, check this question and its answers (there is more than one interesting one):
NHibernate vs LINQ to SQL
I am using NHibernate on my private projects as the first choice. If possible, I prefer it on any project. But I would like to append one more NOTE from my experience:
The biggest benefit is that once you learn and work with NHibernate, it won't be so difficult to work with other ORM tools. On some projects you will (I did) meet Entity Framework, on some LLBLGen Pro... and (while we can complain that something is missing in comparison with NHibernate) we can quickly:
understand the domain, because the ORM forces an entity/domain-driven implementation
use standard patterns
Hope this helps, and good luck with NHibernate. The learning curve is maybe slower than expected, but the benefits await.

What classes should I map against with NHibernate?

Currently, we use NHibernate to map business objects to database tables. Said business objects enforce business rules: The set accessors will throw an exception on the spot if the contract for that property is violated. Also, the properties enforce relationships with other objects (sometimes bidirectional!). Well, whenever NHibernate loads an object from the database (e.g. when ISession.Get(id) is called), the set accessors of the mapped properties are used to put the data into the object.
What's good is that the middle tier of the application enforces business logic. What's bad is that the database does not. Sometimes crap finds its way into the database. If crap is loaded into the application, it bails (throws an exception). Sometimes it clearly should bail because it cannot do anything, but what if it can continue working? E.g., an admin tool that gathers real-time reports runs a high risk of failing unnecessarily instead of allowing an admin to even fix a (potential) problem.
I don't have an example on me right now, but in some instances, letting NHibernate use the "front door" properties that also enforce relationships (especially bidi) leads to bugs.
What are the best solutions?
Currently, I will, on a per-property basis, create a "back door" just for NHibernate:
public virtual int Blah {get {return _Blah;} set {/*enforces BR's*/}}
protected virtual int _Blah {get {return blah;} set {blah = value;}}
private int blah;
I showed the above in C# 2 (no auto-implemented properties) to demonstrate how this gives us basically three layers of, or views onto, blah!!! While this certainly works, it does not seem ideal, as it requires the BL to provide one (public) interface for the app at large and another (protected) interface for the data access layer.
There is an additional problem: to my knowledge, NHibernate does not give you a way to distinguish between the name of the property in the BL and the name of the property in the entity model (i.e. the name you use when you query, e.g. via HQL; in short, whenever you give NHibernate the name (string) of a property). This becomes a problem when, at first, the BRs for some property Blah are no problem, so you refer to it in your O/R mapping... but then later you have to add some BRs that do become a problem, so you have to change your O/R mapping to use a new _Blah property, which breaks all existing queries using "Blah" (a common problem with programming against strings).
Has anyone solved these problems?!
While I found most of your architecture problematic, the usual way to deal with this stuff is having NHibernate use the backing field instead of the setter.
In your example above, you don't need to define an additional protected property. Just use this in the mapping:
<property name="Blah" access="nosetter.lowercase"/>
This is described in the docs, http://nhibernate.info/doc/nh/en/index.html#mapping-declaration-property (Table 5.1. Access Strategies)
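If you happen to be on Fluent NHibernate, I believe the same access strategy can be expressed roughly as below (check the Access strategy builder in your FNH version; the Foo class here is illustrative, mirroring the Blah example above):

```csharp
using FluentNHibernate.Mapping;

public class Foo
{
    private int blah;

    public virtual int Id { get; protected set; }

    public virtual int Blah
    {
        get { return blah; }
        set { /* enforce BRs, then */ blah = value; }
    }
}

public class FooMap : ClassMap<Foo>
{
    public FooMap()
    {
        Id(x => x.Id);
        // Read through the Blah property, write straight to the
        // lowercase backing field: the nosetter.lowercase strategy.
        Map(x => x.Blah).Access.ReadOnlyPropertyThroughLowerCaseField();
    }
}
```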