Do I have to implement Add/Delete methods in my NHibernate entities? - nhibernate

This is a sample from the Fluent NHibernate website:
Compared to Entity Framework, I have Add methods in my POCOs in this code sample using NHibernate. With EF I called context.Add or context.AddObject etc.; the context had the methods to put one entity into another entity's collection!
Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?
public class Store
{
    public virtual int Id { get; private set; }
    public virtual string Name { get; set; }
    public virtual IList<Product> Products { get; set; }
    public virtual IList<Employee> Staff { get; set; }

    public Store()
    {
        Products = new List<Product>();
        Staff = new List<Employee>();
    }

    public virtual void AddProduct(Product product)
    {
        product.StoresStockedIn.Add(this);
        Products.Add(product);
    }

    public virtual void AddEmployee(Employee employee)
    {
        employee.Store = this;
        Staff.Add(employee);
    }
}

You don't have to do this for NHibernate; you do it to keep your in-memory objects consistent and to avoid repeating yourself.
Consistency in memory
If you have a two-way relationship, let's say Order has Lines and each Line has a reference back to its Order, you don't want to set the reference on one side and not on the other.
If you just do:
order.Lines.Add(line);
You have created a reference from Order to Line, but the Line.Order property remains null, so your in-memory instances are not consistent.
Don't Repeat Yourself
You can use the following code:
order.Lines.Add(line);
line.Order = order;
but then you would be repeating yourself, so it is better to put this code in only one place, and the best place is a method like order.AddLine(...), as sketched below.
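A minimal sketch of that idea, using the hypothetical Order/Line pair from the example above:

public class Order
{
    public virtual IList<Line> Lines { get; protected set; }

    public Order()
    {
        Lines = new List<Line>();
    }

    public virtual void AddLine(Line line)
    {
        // both sides of the association are set in one place,
        // so callers cannot forget one of them
        line.Order = this;
        Lines.Add(line);
    }
}

public class Line
{
    public virtual Order Order { get; set; }
}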

You don't have to. You could just call SomeStore.Products.Add(someProduct) directly from outside your entity. But it is often good practice to make the collections 'read-only' from a public perspective and to use an Add method on the entity for adding items.
One benefit of this is that you can put additional logic in there. For instance, in your store example you could update a 'storesStockedIn' collection (if there were such a thing) in the same method, and so keep all the logic for creating that relationship in one place.
This isn't really an NHibernate thing, but rather an OOP thing (although I'm not familiar with EF; maybe it automates some of this for you). The design decisions are exactly the same as if it were just an unpersisted POCO (without NHibernate).
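As a rough illustration of the 'read-only from a public perspective' idea (names reused from the Store example above; this is only a sketch, and with NHibernate you would typically have to map the private field, e.g. via a field access strategy):

public class Store
{
    private readonly IList<Product> _products = new List<Product>();

    public virtual int Id { get; protected set; }
    public virtual string Name { get; set; }

    // callers can enumerate the products but cannot add to or replace the collection directly
    public virtual IEnumerable<Product> Products
    {
        get { return _products; }
    }

    public virtual void AddProduct(Product product)
    {
        product.StoresStockedIn.Add(this); // keep both ends of the relationship in sync
        _products.Add(product);
    }
}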

Related

Problem with Include() EntityFramework Core with blazor server side [duplicate]

I have seen some books (e.g. Programming Entity Framework: Code First by Julia Lerman) define their domain classes (POCOs) with no initialization of the navigation properties, like:
public class User
{
    public int Id { get; set; }
    public string UserName { get; set; }
    public virtual ICollection<Address> Address { get; set; }
    public virtual License License { get; set; }
}
Some other books or tools (e.g. Entity Framework Power Tools), when generating POCOs, initialize the navigation properties of the class, like:
public class User
{
    public User()
    {
        this.Addresses = new List<Address>();
        this.License = new License();
    }

    public int Id { get; set; }
    public string UserName { get; set; }
    public virtual ICollection<Address> Addresses { get; set; }
    public virtual License License { get; set; }
}
Q1: Which one is better? why? Pros and Cons?
Edit:
public class License
{
    public License()
    {
        this.User = new User();
    }

    public int Id { get; set; }
    public string Key { get; set; }
    public DateTime Expiration { get; set; }
    public virtual User User { get; set; }
}
Q2: In the second approach there would be a stack overflow if the `License` class had a reference back to the `User` class too. Does that mean we should only have a one-way reference? How should we decide which of the navigation properties to remove?
Collections: It doesn't matter.
There is a distinct difference between collections and references as navigation properties. A reference is an entity. A collection contains entities. This means that initializing a collection is meaningless in terms of business logic: it does not define an association between entities. Setting a reference does.
So it's purely a matter of preference whether or not, or how, you initialize embedded lists.
As for the "how", some people prefer lazy initialization:
private ICollection<Address> _addresses;

public virtual ICollection<Address> Addresses
{
    get { return this._addresses ?? (this._addresses = new HashSet<Address>()); }
}
It prevents null reference exceptions, so it facilitates unit testing and manipulating the collection, but it also avoids unnecessary initialization. The latter may make a difference when a class has relatively many collections. The downside is that it requires a fair amount of plumbing, especially compared to auto-properties without initialization. Also, the advent of the null-propagation operator in C# has made it less urgent to initialize collection properties.
...unless explicit loading is applied
The only thing is that initializing collections makes it hard to check whether or not a collection was loaded by Entity Framework. If a collection is initialized, a statement like...
var users = context.Users.ToList();
...will create User objects having empty, not-null Addresses collections (lazy loading aside). Checking whether the collection is loaded requires code like...
var user = users.First();
var isLoaded = context.Entry(user).Collection(c => c.Addresses).IsLoaded;
If the collection is not initialized a simple null check will do. So when selective explicit loading is an important part of your coding practice, i.e. ...
if (/* check collection isn't loaded */)
    context.Entry(user).Collection(c => c.Addresses).Load();
...it may be more convenient not to initialize collection properties.
Reference properties: Don't
Reference properties are entities, so assigning an empty object to them is meaningful.
Worse, if you initialize them in the constructor, EF won't overwrite them when materializing your object or by lazy loading. They will keep their initial values until you actively replace them. Worse still, you may even end up saving empty entities in the database!
And there is another effect: relationship fixup won't occur. Relationship fixup is the process by which EF connects all entities in the context by their navigation properties. When a User and a License are loaded separately, User.License will still be populated and vice versa, unless, of course, License was initialized in the constructor. This is also true for 1:n associations. If Address initialized a User in its constructor, User.Addresses would not be populated!
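To make the fixup behaviour concrete, here is a small sketch (MyContext and the UserId foreign key are assumptions for illustration, not taken from the question):

using (var context = new MyContext())
{
    // two separate queries, no Include
    var user = context.Users.First(u => u.Id == 1);
    var license = context.Licenses.First(l => l.UserId == 1);

    // Because both entities are now tracked by the same context, relationship fixup
    // sets user.License to the loaded license (and license.User to the loaded user)...
    // ...unless License/User were already initialized to empty objects in a constructor,
    // in which case those initial values stay in place.
}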
Entity Framework Core
Relationship fixup in Entity Framework Core (2.1 at the time of writing) isn't affected by reference navigation properties initialized in constructors. That is, when users and addresses are pulled from the database separately, the navigation properties are populated.
However, lazy loading does not overwrite initialized reference navigation properties.
In EF Core 3, initializing a reference navigation property prevents Include from working properly.
So, in conclusion, in EF Core too, initializing reference navigation properties in constructors may cause trouble. Don't do it. It doesn't make sense anyway.
In all my projects I follow the rule - "Collections should not be null. They are either empty or have values."
The first example is acceptable when the creation of these entities is the responsibility of third-party code (e.g. an ORM) and you are working on a short-term project.
The second example is better, since:
you are sure that the entity has all its properties set
you avoid silly NullReferenceExceptions
you make consumers of your code happier
People who practice Domain-Driven Design expose collections as read-only and avoid setters on them (see What is the best practice for readonly lists in NHibernate).
Q1: Which one is better? why? Pros and Cons?
It is better to expose non-null collections, since you avoid additional checks in your code (e.g. for Addresses); it is a good contract to have in your codebase. But it is OK, in my view, to expose a nullable reference to a single entity (e.g. License).
Q2: In the second approach there would be a stack overflow if the License class had a reference back to the User class too. Does that mean we should only have a one-way reference? How should we decide which of the navigation properties to remove?
When I implemented the data mapper pattern myself, I tried to avoid bidirectional references and only very rarely had a reference from child to parent.
When I use ORMs it is easy to have bidirectional references.
When I need to build a test entity with a bidirectional reference set for my unit tests, I follow these steps:
I build the parent entity with an empty children collection.
Then I add every child, with its reference to the parent entity set, into the children collection.
Instead of having a parameterless constructor in the License type, I would make the User property required:
public class License
{
    public License(User user)
    {
        this.User = user;
    }

    public int Id { get; set; }
    public string Key { get; set; }
    public DateTime Expiration { get; set; }
    public virtual User User { get; set; }
}
It's redundant to new up the list, since your POCO is relying on lazy loading.
Lazy loading is the process whereby an entity or collection of entities is automatically loaded from the database the first time that a property referring to the entity/entities is accessed. When using POCO entity types, lazy loading is achieved by creating instances of derived proxy types and then overriding virtual properties to add the loading hook.
If you removed the virtual modifier, you would turn off lazy loading, and in that case your code would no longer work (because nothing would initialize the list).
Note that lazy loading is a feature supported by Entity Framework; if you create the class outside the context of a DbContext, the dependent code would obviously suffer from a NullReferenceException.
HTH
The other answers fully answer the question, but I'd like to add something since this question is still relevant and comes up in google searches.
When you use the "code first model from database" wizard in Visual Studio all collections are initialized like so:
public partial class SomeEntity
{
    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2214:DoNotCallOverridableMethodsInConstructors")]
    public SomeEntity()
    {
        OtherEntities = new HashSet<OtherEntity>();
    }

    public int Id { get; set; }

    [System.Diagnostics.CodeAnalysis.SuppressMessage("Microsoft.Usage", "CA2227:CollectionPropertiesShouldBeReadOnly")]
    public virtual ICollection<OtherEntity> OtherEntities { get; set; }
}
I tend to take the wizard output as essentially an official recommendation from Microsoft, which is why I'm adding to this five-year-old question. Therefore, I'd initialize all collections as HashSets.
And personally, I think it'd be pretty slick to tweak the above to take advantage of C# 6.0's auto-property initializers:
public virtual ICollection<OtherEntity> OtherEntities { get; set; } = new HashSet<OtherEntity>();
Q1: Which one is better? why? Pros and Cons?
The second variant, where virtual properties are set inside the entity constructor, has a definite problem known as "virtual member call in a constructor".
As for the first variant, with no initialization of navigation properties, there are two situations depending on who/what creates the object:
Entity Framework creates the object
A code consumer creates the object
The first variant is perfectly valid when Entity Framework creates the object, but can fail when a code consumer creates one.
The solution to ensure a code consumer always creates a valid object is to use a static factory method:
Make the default constructor protected. Entity Framework works fine with protected constructors.
Add a static factory method that creates an empty object (e.g. a User), sets all its properties (e.g. Addresses and License) after creation, and returns a fully constructed User object.
This way Entity Framework uses the protected default constructor to create a valid object from data obtained from the data source, and a code consumer uses the static factory method to create a valid object; a sketch follows below.
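A rough sketch of that approach, reusing the User example from the question (the exact properties and factory parameters here are assumptions, not a prescribed design):

public class User
{
    // Entity Framework can materialize objects through this protected constructor.
    protected User() { }

    public int Id { get; private set; }
    public string UserName { get; private set; }
    public virtual ICollection<Address> Addresses { get; private set; }
    public virtual License License { get; private set; }

    // Code consumers must come through here, so a User is always created in a valid state.
    public static User Create(string userName, License license)
    {
        return new User
        {
            UserName = userName,
            License = license,
            Addresses = new List<Address>()
        };
    }
}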
I use the answer from this: Why is my Entity Framework Code First proxy collection null and why can't I set it?
I had problems with constructor initialization. The only reason I do this is to make test code easier; making sure the collection is never null saves me from constantly initializing it in tests, etc.

Object mapper vs Object wrapper

I would appreciate a little help here...
Let's say that in an application we have a Data Access Layer and a Business Logic Layer. In the DAL we have the following entities:
public class Customer {
    public string Name { get; set; }
    public ICollection<Address> Addresses { get; set; }
}

public class Address {
    public string Street { get; set; }
}
In the BLL we have the following POCOs:
public class CustomerDto {
    public string Name { get; set; }
    public ICollection<AddressDto> Addresses { get; set; }
}

public class AddressDto {
    public string Street { get; set; }
}
The entities in the DAL are populated with a lightweight ORM and retrieved by the BLL using a repository. For example:
public class CustomerInformationService {
    private readonly ICustomerRepository _repository;

    public CustomerInformationService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto Get(int id)
    {
        var customerEntity = _repository.Get(id);
        var customerDto = /* SOME TRANSFORMATION HERE */
        return customerDto;
    }
}
My question is about the /* SOME TRANSFORMATION HERE */ part. There is a discussion in our team about how to do the "mapping".
One approach is to use a mapper, either an automapper or manual mapping.
The second approach is to use something like a wrapper around the entity, with the DTO holding a reference to it, in order to avoid a copying operation between objects. Something like this:
public class CustomerDto
{
    private IEntity _customerEntity;

    public IEntity CustomerEntity { get { return _customerEntity; } }

    public CustomerDto(IEntity customerEntity)
    {
        _customerEntity = customerEntity;
    }

    public string Name
    {
        get { return _customerEntity.Name; }
    }

    public ICollection<Address> Addresses
    {
        get { return _customerEntity.Addresses; }
    }
}
The second approach feels a little weird to me because _customerEntity.Addresses feels like a leak of the entity (the _customerEntity reference) from my DAL into my BLL, but I am not sure.
Are there any advantages/disadvantages of using one approach over the other?
Additional info: we usually pull a maximum of 1000 records at a time that would need to be transformed between entity and DTO.
You did not mention your "lightweight ORM". I will answer in two sections.
If you are using an ORM that creates proxies
You should avoid exposing entities outside a certain boundary. ORMs like NHibernate/EF implement lazy loading based on proxies. If you expose entities to the application/UI layer, you will have little control over the ORM's behavior. This may lead to many unexpected issues, and debugging will also be very difficult.
Wrapping entities in DTOs gains you nothing, because you are still accessing the entities through them anyway.
Using DTOs and mapping them with a mapper tool like AutoMapper is a good solution here, for example:
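A minimal sketch using AutoMapper's instance-based configuration API (the older static Mapper.CreateMap style works similarly); the mapping configuration itself is an assumption based on the classes in the question:

public class CustomerInformationService
{
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Customer, CustomerDto>();
        cfg.CreateMap<Address, AddressDto>();
    }).CreateMapper();

    private readonly ICustomerRepository _repository;

    public CustomerInformationService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto Get(int id)
    {
        var customerEntity = _repository.Get(id);
        // copy the data out of the (possibly proxied) entity into a plain DTO,
        // so nothing above this layer keeps a reference to the ORM entity
        return Mapper.Map<CustomerDto>(customerEntity);
    }
}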
If you are using an ORM that does not create proxies
Do NOT use DTOs; use your entities directly. DTOs are useful and recommended in many cases, but the example you have given in the question does not need DTOs at all.
If you do choose to use DTOs, wrapping entities in DTOs does not make sense: if you want to use the entity anyway, why wrap it? Again, a tool like AutoMapper can help.
Refer to this question. It's a bit different (I am asking yes/no and you are asking how), but it will still help you.
I would go for the service layer approach, basically because something that looks like a business object or domain object has nothing to do with DTOs.
And indeed, you and your team should use AutoMapper instead of repeating the same code tons of times, which would just consist of setting some properties from A to B, A to C, C to B...

NHibernate exception: method Add should be 'public/protected virtual' or 'protected internal virtual'

Take this class as an example:
public class Category : PersistentObject<int>
{
    public virtual string Title { get; set; }
    public virtual string Alias { get; set; }
    public virtual Category ParentCategory { get; set; }
    public virtual ISet<Category> ChildCategories { get; set; }

    public /*virtual*/ void Add(Category child)
    {
        if (child != null)
        {
            child.ParentCategory = this;
            ChildCategories.Add(child);
        }
    }
}
When running the application without the virtual keyword on the Add method, I get this error:
method Add should be 'public/protected virtual' or 'protected internal virtual'
I understand why properties need to be declared virtual, because they need to be overridden by the lazy loading feature.
But I don't understand why methods need to be declared virtual... what do they need to be overridden for?
This is very confusing!
Methods need to be virtual as well because they may access fields. Consider this situation:
class Entity
{
    // ...
    private int a;

    public virtual int A
    {
        get { return a; }
    }

    public virtual void Foo()
    {
        // lazy loading is implemented here by the proxy
        // to make the value of a available
        if (a > 7)
        {
            // ...
        }
    }
}
I believe this is required for the lazy-loading feature in NHibernate, where NHibernate creates proxies of your entity and controls all access to it. This is why every single method and property must be virtual. Basically, if a member does anything with the entity, NHibernate needs to know about it and tap into it.
As mentioned earlier, in order for NHibernate to do its 'magic' it creates proxy classes which inherit from your entities (Category in your case). However, if you make your entities implement an interface, it will use that interface to create the proxy instead of the concrete type. This way, you wouldn't have to mark everything virtual.
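For illustration only, a rough sketch of what that interface-based approach might look like (the names are hypothetical, and the proxy interface still has to be declared in the mapping, e.g. via the proxy attribute of the class element in an hbm.xml mapping or Proxy<ICategory>() in a Fluent NHibernate ClassMap):

public interface ICategory
{
    int Id { get; }
    string Title { get; set; }
    void Add(ICategory child);
}

// No virtual members are needed here, because NHibernate can generate the proxy
// from ICategory instead of subclassing Category.
public class Category : ICategory
{
    private readonly ISet<ICategory> _children = new HashSet<ICategory>();

    public int Id { get; protected set; }
    public string Title { get; set; }

    public void Add(ICategory child)
    {
        if (child != null)
            _children.Add(child);
    }
}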
EDIT: Some corrections... According to this, I am compelled to say that it almost looks like NH doesn't really do anything with virtual methods after all. And I even read someone saying that they removed this run-time check from the NH core assembly just to get around it. My assumption would be that it is an older requirement which hasn't been removed. The cool thing is that it looks like there is an initiative to use PostSharp for static proxies, so your classes won't have to declare anything virtual for NH to generate proxies. The bad thing is that it looks like it's been stuck in a branch for almost two years.

Advice on object-oriented design

I would like some help with an OOD question.
Say I have the following Customer class:
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
It's a simple placeholder for customer related data. Next comes the CustomerFactory, which is used to fetch customers:
public static class CustomerFactory
{
    public static Customer GetCustomer(int id)
    {
        return null; // Pretend this goes off to get a customer.
    }
}
Now, if I wanted to write a routine named UpdateCustomer(Customer customer), can someone suggest where I should place this method?
Obviously I don't want to use the Customer class, since that would be against the SRP (Single Responsibility Principle), and I also don't think it's a good idea to attach the method to the CustomerFactory class, since its only role is to get customers from the database.
So it looks like I'm going to need another class, but I don't know what to name it.
Cheers.
Jas.
What you have called a Factory isn't a Factory at all. It's a Repository.
A Factory handles the instantiation of various classes sharing a common interface or class hierarchy, based on some set of parameters.
A Repository handles the retrieval and management of data.
The Repository would definitely have the UpdateCustomer(Customer customer) method in it as well as the GetCustomer(int id) method.
You are more or less on your way to creating a Repository. Do something like this:
public interface ICustomerRepository
{
    Customer SelectCustomer(int id);
    void UpdateCustomer(Customer customer);
    void DeleteCustomer(int id);
    void CreateCustomer(Customer customer);
}
Then create concrete implementations of this interface (the interface is really there because it's good practice to program against interfaces; you could skip it, although I would recommend that you keep it).
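For example, a rough sketch of a concrete implementation on top of NHibernate's ISession, since NHibernate is the ORM in this thread (how the session and transaction are managed is assumed to live elsewhere):

public class CustomerRepository : ICustomerRepository
{
    private readonly ISession _session;

    public CustomerRepository(ISession session)
    {
        _session = session;
    }

    public Customer SelectCustomer(int id)
    {
        return _session.Get<Customer>(id);
    }

    public void UpdateCustomer(Customer customer)
    {
        _session.Update(customer);
    }

    public void DeleteCustomer(int id)
    {
        var customer = _session.Get<Customer>(id);
        if (customer != null)
            _session.Delete(customer);
    }

    public void CreateCustomer(Customer customer)
    {
        _session.Save(customer);
    }
}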
Wouldn't your UpdateCustomer routine be placed in your DAL (Data Access Layer)? You should define a class to handle inserts or updates to the database and then pass a customer object to it.
You could write the DAL class to handle all of this, but I don't see any issue with putting it in your CustomerFactory class, although as mentioned it is not really a factory.

How to simply map an NHibernate ISet to IList using AutoMapper

I'm trying to use AutoMapper to map from DTOs to my domain.
My DTOs might look like this:
public class MyDTO
{
    public string Name { get; set; }
    public bool OtherProperty { get; set; }
    public ChildDTO[] Children { get; set; }
}

public class ChildDTO
{
    public string OtherName { get; set; }
}
My domain objects look like this:
public class MyDomain
{
    public string Name { get; set; }
    public bool OtherProperty { get; set; }
    public ISet<ChildDomain> Children { get; set; }
}

public class ChildDomain
{
    public string OtherName { get; set; }
}
How would I set up AutoMapper to be able to map from these arrays to sets? It seems like AutoMapper is taking the arrays, converting them into ILists, and then failing on the conversion to ISet.
Here's the exception
Unable to cast object of type 'System.Collections.Generic.List`1[DataTranser.ChildDTO]' to type 'Iesi.Collections.Generic.ISet`1[Domain.ChildDomain]'.
I'm hoping to find a simple, generic way to do this so that I can minimize the infrastructure needed to map from DTOs to the domain. Any help is greatly appreciated.
UPDATE:
So then how would I model MyDomain -> ChildDomain without ending up with an anemic domain model? I understand that without business logic in MyDomain or ChildDomain the domain model is currently anemic, but the goal is to add business logic as we move forward. I just want to ensure that my view model can be translated into the domain model and persisted.
What would you suggest for this scenario: moving from a simple mapping between view and domain, and later adding in business rules?
Thanks again for your help.
If your persistence layer is simple, using UseDestinationValue() will tell AutoMapper to not replace the underlying collection:
ForMember(dest => dest.Children, opt => opt.UseDestinationValue())
However, if it's not simple, we just do the updating back into the domain manually; the logic to update the domain model generally gets more complex. Doing reverse mapping puts constraints on the shape of your domain model, which you might not want.
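To show where that call sits, here is a sketch using the static configuration API that was common at the time (Mapper.Map(source, destination) maps onto an existing instance, so the destination's collection is kept; existingDomain and myDto are assumed variables):

Mapper.CreateMap<ChildDTO, ChildDomain>();
Mapper.CreateMap<MyDTO, MyDomain>()
    .ForMember(dest => dest.Children, opt => opt.UseDestinationValue());

// existingDomain is an entity whose Children set has already been created
// (by its constructor or by NHibernate); AutoMapper keeps that instance
// instead of building a List and failing to cast it to ISet.
Mapper.Map(myDto, existingDomain);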
The answer:
You have to create your own IObjectMapper to map a custom collection like ISet.
Create your own configuration instance with all the standard object mappers plus your new set object mapper.
Use an IMappingEngine instance created from that configuration, with your own object mapper, instead of the static AutoMapper.Mapper class.
Some remarks:
It's easy to configure the IMappingEngine construction in an inversion of control container.
The source of AutoMapper itself might help you with creating the IObjectMapper implementation.
You are using AutoMapper in the opposite direction from what it is designed for: it's designed to map complex objects to simple objects, and you are trying to map a simple DTO to a complex entity. (This does not mean that what you want is hard to do with AutoMapper, but you might run into different problems in the future.)
You are using the anemic domain model anti-pattern. The domain should hold all the business logic, so it should not expose a complex collection like ISet (and should have no public setters for collections at all).