Filtering soft-deleted data with Fluent NHibernate

I'm trying to implement simple soft deletes in my application using Fluent NHibernate. All entities have a boolean flag IsDeleted, and the delete operation only sets this flag to true.
I'm struggling with querying more complex entities that reference each other, for example through a many-to-many relationship. Let's say I have a Person entity with a collection of Projects:
class Person : Entity {
    public virtual IList<Project> Projects { get; set; }
}

class Project : Entity {
    // some properties
}
Now imagine that Person p has Projects proj1 and proj2. If proj1 gets soft-deleted, we simply set its IsDeleted property to true. However, when I access p's projects, the collection is lazy-loaded with proj1 included, regardless of its flag. Of course, I can always filter the collection, for example with Projects.Where(x => !x.IsDeleted), but this leads to repetitive, bug-prone code. I want to keep this kind of data juggling out of my presentation layer.
I want to automate this with some global rule saying "load only entities with IsDeleted set to false" that applies to all queries and lazy-loaded collections.
What I have tried:
Overriding events, but I wasn't able to intercept all database reads and filter every entity that is read.
Filters, which I couldn't get to work with lazy-loaded collections.
What would you recommend? What is the easiest way to implement soft deletes without code repetition, cleanly separated from the presentation layer?

To complete @Rippo's answer, off the top of my head, something like this should work:
public abstract class BaseEntity
{
    public virtual bool IsDeleted { get; set; }
}

public class SomeEntity : BaseEntity
{
    ....
}

public abstract class EntityMap<T> : ClassMap<T> where T : BaseEntity
{
    public EntityMap()
    {
        // Applies a default filter to all queries and loads of T.
        Where(x => !x.IsDeleted);
    }
}

public class SomeEntityMap : EntityMap<SomeEntity>
{
    ...
}
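One caveat: the class-level Where() above filters queries and entity loads, but a lazy-loaded collection issues its own SQL, so the collection mapping needs a filter of its own. A minimal sketch for the Person/Projects example, assuming Person derives from BaseEntity and the key column is named Person_id (the collection-level Where() takes a raw SQL fragment):

public class PersonMap : EntityMap<Person>
{
    public PersonMap()
    {
        // Applied to the collection's SQL on every (lazy) load, so
        // soft-deleted Projects never enter the collection. For a
        // many-to-many, the same idea applies to HasManyToMany.
        HasMany(x => x.Projects)
            .KeyColumn("Person_id")
            .Where("IsDeleted = 0");
    }
}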

Related

Object mapper vs Object wrapper

I would appreciate a little help here...
Let's say that in an application we have a Data Layer (DAL) and a Business Logic Layer (BLL). In the DAL we have the following entities:
public class Customer {
    public string Name { get; set; }
    public ICollection<Address> Addresses { get; set; }
}

public class Address {
    public string Street { get; set; }
}
In the BLL we have the following POCOs:
public class CustomerDto {
    public string Name { get; set; }
    public ICollection<AddressDto> Addresses { get; set; }
}

public class AddressDto {
    public string Street { get; set; }
}
The entities in the DAL are populated with a lightweight ORM and retrieved by the BLL through a repository. For example:
public class CustomerInformationService {
    private readonly ICustomerRepository _repository;

    public CustomerInformationService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto Get(int id)
    {
        var customerEntity = _repository.Get(id);
        var customerDto = /* SOME TRANSFORMATION HERE */
        return customerDto;
    }
}
My question is about the /* SOME TRANSFORMATION HERE */ part. There is a discussion in our team about how to do the "mapping".
One approach is to use a mapper, either AutoMapper or manual mapping.
The second approach is to use a sort of wrapper around the entity, with the DTO holding a reference to it, to save a copy operation between objects. Something like this:
public class CustomerDto
{
    private IEntity _customerEntity;
    public IEntity CustomerEntity { get { return _customerEntity; } }

    public CustomerDto(IEntity customerEntity)
    {
        _customerEntity = customerEntity;
    }

    public string Name
    {
        get { return _customerEntity.Name; }
    }

    public ICollection<Address> Addresses
    {
        get { return _customerEntity.Addresses; }
    }
}
The second approach feels a little weird to me, because _customerEntity.Addresses feels like a leak (of _customerEntity's reference) between my DAL and my BLL, but I am not sure.
Are there any advantages/disadvantages of using one approach over the other?
Additional info: we usually pull a maximum of 1000 records at a time that would need to be transformed between Entity and DTO.
You did not mention which "lightweight ORM" you are using, so I will answer in two sections.
If you are using an ORM that creates proxies
You should avoid exposing Entities outside a certain boundary. ORMs like NHibernate/EF implement lazy loading based on proxies. If you expose Entities to the application/UI layer, you will have little control over ORM behavior. This may lead to many unexpected issues, and debugging will also be very difficult.
Wrapping Entities in DTOs will gain nothing. You are accessing Entities anyway.
Using DTOs and mapping them with some mapper tool like AutoMapper is a good solution here.
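For illustration, a minimal sketch of that approach, assuming AutoMapper's instance-based MapperConfiguration API and the Customer/CustomerDto classes from the question:

using AutoMapper;

public static class Mappings
{
    // Configure once at startup. Registering both maps lets AutoMapper
    // convert the nested Addresses collection automatically.
    public static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<Customer, CustomerDto>();
        cfg.CreateMap<Address, AddressDto>();
    }).CreateMapper();
}

The /* SOME TRANSFORMATION HERE */ line in the service then becomes var customerDto = Mappings.Mapper.Map<CustomerDto>(customerEntity);.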
If you are using an ORM that does not create proxies
Do NOT use DTOs; use your Entities directly. DTOs are useful and recommended in many cases, but the example you have given in the question does not need DTOs at all.
In case you choose to use DTOs anyway, wrapping Entities in DTOs does not make sense. If you are going to use the Entity anyway, why wrap it? Again, a tool like AutoMapper can help.
Refer to this question. It's a bit different (I am asking Yes/No and you are asking How), but it should still help you.
I would go for the service layer approach, basically because something that looks like a business object or domain object has nothing to do with DTOs.
And, indeed, you and your team should use AutoMapper instead of repeating the same property-copying code tons of times (from A to B, A to C, C to B...).

Repository OO Design - Multiple Specifications

I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
    void Add(TDomainEntity entity);
    void Update(TDomainEntity entity);
}
We can use various infrastructure implementations in order to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    public TDomainEntity Find(Guid id)
    {
        // Find, map and return entity using Entity Framework
    }

    public void Add(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Insert entity using Entity Framework
    }

    public void Update(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Update entity using Entity Framework
    }
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
    // I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
    // I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
    : IRepository<TDomainEntity>
{
    void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find(), Add() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        entity.IsDeleted = true;
        entity.DeletedDate = DateTime.UtcNow;
        // Update entity using Entity Framework
        // ...
    }
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
    bool IsDeleted { get; set; }
    DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can granularise the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository), and aggregate repositories can then expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository), as sketched below. Further, it standardises the way certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns is universal and not at the hand of the developer).
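For concreteness, a sketch of what that granularisation could look like (IFindableRepository and IInsertableRepository are my illustration; only IDeleteableRepository appears in the original design):

public interface IFindableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
}

public interface IInsertableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Add(TDomainEntity entity);
}

// Each aggregate's repository opts in to exactly the capabilities it
// should expose: users can be inserted, clients cannot.
public interface IUserRepository : IFindableRepository<User>, IInsertableRepository<User> { }
public interface IClientRepository : IFindableRepository<Client> { }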
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g.:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        var deleteableEntity = entity as IDeleteableDataEntity;
        if (deleteableEntity != null)
        {
            deleteableEntity.IsDeleted = true;
            deleteableEntity.DeletedDate = DateTime.UtcNow;
            // entity is a class reference, so the changes above are
            // visible through it; no reassignment is needed.
        }
        // Update entity using Entity Framework
        // ...
    }
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone suggest a better architecture that leverages the C# type system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting and updating).
I do think that you're complicating things a bit too much in trying to find the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure; it has no Domain value and is not known outside the persistence layer. So it can have fields it won't ever use, and the repository is the only one that knows that; it's a persistence detail. You can afford to be 'sloppy' here; things aren't that important at this level.
Even the 'hacky' cast is a good solution, because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere; however, we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions, even if they're a bit smelly.
P.S: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a document db.
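That said, if you do want the compiler's help, one sketch (my own restructuring, not part of the original answer) is to move Delete() into a derived abstract class whose TDataEntity constraint adds IDeleteableDataEntity; repositories for non-deleteable aggregates keep inheriting the plain base, and no cast is needed:

public abstract class DeleteableEntityFrameworkRepository<TDomainEntity, TDataEntity>
    : EntityFrameworkRepository<TDomainEntity, TDataEntity>,
      IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // The constraint guarantees these members exist; no cast needed.
        entity.IsDeleted = true;
        entity.DeletedDate = DateTime.UtcNow;
        // Update entity using Entity Framework
    }
}

// UserRepository : DeleteableEntityFrameworkRepository<User, UserEntity>
// ClientRepository : EntityFrameworkRepository<Client, ClientEntity> (no Delete() exposed)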

Fluent NHibernate HasMany relation with different subtypes of same superclass

I'm using Fluent NHibernate with automapping and am having problems setting up a bi-directional HasMany relationship because of my current inheritance.
A simplified version of my code looks like this:
public abstract class BaseClass
{
    public virtual BaseClass Parent { get; set; }
}

public class ClassA : BaseClass
{
    public virtual IList<ClassB> BChilds { get; protected set; }
    public virtual IList<ClassC> CChilds { get; protected set; }
}

public class ClassB : BaseClass
{
    public virtual IList<ClassD> DChilds { get; protected set; }
}

public class ClassC : BaseClass
{
}

public class ClassD : BaseClass
{
}
Every class can have one parent, and some parents can have children of two types. I'm using table-per-type inheritance, which results in the tables
"BaseClass"
"ClassA"
"ClassB"
"ClassC"
"ClassD"
To get a working bi-directional mapping I have made the following overrides (one example from ClassA):
mapping.HasMany<BaseClass>(x => x.BChilds).KeyColumn("Parent_Id");
mapping.HasMany<BaseClass>(x => x.CChilds).KeyColumn("Parent_Id");
This works fine on classes with only one type of children, but ClassA, with two child types, will get all subtypes of BaseClass in each list, which of course ends in an exception. I have looked at two different workarounds, though neither of them feels really sufficient, and I believe there is a better way to solve it.
Workaround 1: Point to the concrete subtype in the HasMany mapping. (Updated with more info)
mapping.HasMany<ClassB>(x => x.BChilds).KeyColumn("Parent_Id");
(BaseClass replaced with ClassB)
With this mapping, NHibernate will in some cases look in the ClassB table for a column named Parent_Id; obviously there is no such column, as it belongs to the BaseClass table. The problem only occurs if you add a statement based on BChilds during a ClassA select. E.g. loading an entity of ClassA and then calling ClassA.BChilds seems to work, but doing a query (using NHibernate LINQ) something like
Query<ClassA>().Where(c => c.BChilds.Count == 0)
the wrong table will be used. Therefore I have to manually create a new column in this table with the same name and copy all the values. It works, but it's risky and not flexible at all.
Workaround 2: Add a column to the BaseClass that tells the concrete type, and add a where statement to the HasMany mapping.
(After my update to workaround 1, I'm no longer sure this is a workable solution.)
This would add a column the same way as it's done when using table-per-hierarchy inheritance with a discriminator value, i.e. the BaseClass table would get a new column holding ClassA, ClassB, and so on. Though given how well NHibernate handles the inheritance overall, and from reading the NHibernate manual, I believe a discriminator shouldn't be needed in a table-per-type scenario. NHibernate is already doing the hard part and should be able to handle this cleanly without an extra column; I just can't figure out how.
What's your base class mapping and what does your subclass map look like?
You should be able to do
mapping.HasMany(x => x.BChilds);
And with the correct mapping, you shouldn't have a problem.
If it's Fluent NHibernate, look into
UseUnionSubclassForInheritanceMapping();
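For reference, a sketch of where that call sits in a manual ClassMap (the Id property is assumed; the simplified model above doesn't show one). Union-subclass mapping gives each concrete class its own self-contained table, so a collection of ClassB only ever queries ClassB's table:

public class BaseClassMap : ClassMap<BaseClass>
{
    public BaseClassMap()
    {
        Id(x => x.Id);              // assumed identifier property
        References(x => x.Parent);
        // Switch the SubclassMaps from joined-subclass to
        // union-subclass (table-per-concrete-class).
        UseUnionSubclassForInheritanceMapping();
    }
}

public class ClassAMap : SubclassMap<ClassA>
{
    public ClassAMap()
    {
        HasMany(x => x.BChilds).KeyColumn("Parent_Id");
        HasMany(x => x.CChilds).KeyColumn("Parent_Id");
    }
}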

Do I have to implement Add/Delete methods in my NHibernate entities?

This is a sample from the Fluent NHibernate website:
Compared to Entity Framework, I have Add methods in my POCO in this code sample using NHibernate. With EF I did context.Add or context.AddObject etc.; the context had the methods to put one entity into the other entity's collection!
Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?
public class Store
{
    public virtual int Id { get; private set; }
    public virtual string Name { get; set; }
    public virtual IList<Product> Products { get; set; }
    public virtual IList<Employee> Staff { get; set; }

    public Store()
    {
        Products = new List<Product>();
        Staff = new List<Employee>();
    }

    public virtual void AddProduct(Product product)
    {
        product.StoresStockedIn.Add(this);
        Products.Add(product);
    }

    public virtual void AddEmployee(Employee employee)
    {
        employee.Store = this;
        Staff.Add(employee);
    }
}
You don't have to do this for NHibernate; you do it to keep in-memory consistency and to avoid repeating yourself.
Consistency in memory
If you have a two-way relationship, let's say Order has Lines and Line has a relationship back to Order, you don't want to have a reference from one side and not from the other.
If you just do:
order.Lines.Add(line);
You have made a reference from Order to Line, but the Line.Order property remains null. So your in-memory instances are not consistent.
Don't Repeat Yourself
You can use the following code:
order.Lines.Add(line);
line.Order = order;
but you would be repeating yourself, so it is better to put this code in only one place, and the best place is an order.AddLine(...) method, as sketched below.
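In code, that one place could look like this (Order/Line are the names from the example; the shape mirrors the Store sample in the question):

public class Order
{
    public virtual IList<Line> Lines { get; protected set; }

    public Order()
    {
        Lines = new List<Line>();
    }

    // The single spot that wires up both sides of the relationship.
    public virtual void AddLine(Line line)
    {
        line.Order = this;
        Lines.Add(line);
    }
}

public class Line
{
    public virtual Order Order { get; set; }
}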
You don't have to. You could just call SomeStore.Products.Add(someProduct) directly from outside your entity. But it's often good practice to make the collections 'read-only' from a public perspective and to use an add method on the entity for adding items.
One benefit of this is that you can put additional logic in there. For instance, in your store example, you could set a 'storesStockedIn' collection (if there was such a thing) in the same method, and so keep all the logic for creating that relationship in one place.
This isn't really an NHibernate thing, but rather an OOP thing. (Although I'm not familiar with EF; maybe it automates some of this for you.) The design decisions are exactly the same as if it was just an unpersisted POCO (without NHibernate).

NHibernate - Do I have to have a class to interface with a table?

I have a class called Entry. This class has a collection of strings called TopicsOfInterest. In my database, TopicsOfInterest is represented by a separate table, since there is a one-to-many relationship between entries and their topics of interest. I'd like to use NHibernate to populate this collection, but since the table stores very little (only an entry id and a string), I was hoping I could somehow bypass the creation of a class to represent it and all that goes with it (mappings, configuration, etc.).
Is this possible, and if so, how? I'm using Fluent NHibernate, so something specific to that would be even more helpful.
public class Entry
{
    private readonly IList<string> topicsOfInterest;

    public Entry()
    {
        topicsOfInterest = new List<string>();
    }

    public virtual int Id { get; set; }

    public virtual IEnumerable<string> TopicsOfInterest
    {
        get { return topicsOfInterest; }
    }
}

public class EntryMapping : ClassMap<Entry>
{
    public EntryMapping()
    {
        Id(entry => entry.Id);
        HasMany(entry => entry.TopicsOfInterest)
            .Table("TableName")
            .AsList()
            .Element("ColumnName")
            .Cascade.All()
            .Access.CamelCaseField();
    }
}
I had a similar requirement: mapping a collection of floats.
I'm using automapping to generate my entire relational model. You imply that you already have some tables, so this may not apply unless you choose to switch to an automapping approach.
It turns out that NHibernate will NOT automap collections of basic types; you need an override.
See my answer to my own question, How do you automap List or float[] with Fluent NHibernate?.
I've provided a lot of sample code there; you should be able to substitute "string" for "float" and get it working. Note the gotchas in the explanatory text.
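The override has roughly this shape; a sketch with assumed table and column names, reusing the Entry class from the question (the linked answer has the full version):

using FluentNHibernate.Automapping;
using FluentNHibernate.Automapping.Alterations;

public class EntryMappingOverride : IAutoMappingOverride<Entry>
{
    public void Override(AutoMapping<Entry> mapping)
    {
        // Automapping skips collections of basic types, so the element
        // collection has to be mapped explicitly.
        mapping.HasMany(x => x.TopicsOfInterest)
            .Table("TopicsOfInterest")   // assumed table name
            .Element("Topic")            // assumed column name
            .Access.CamelCaseField();
    }
}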