I would like some help with an OOD (object-oriented design) query.
Say I have the following Customer class:
public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
It's a simple placeholder for customer-related data. Next comes the CustomerFactory, which is used to fetch customers:
public static class CustomerFactory
{
    public static Customer GetCustomer(int id)
    {
        return null; // Pretend this goes off to get a customer.
    }
}
Now, if I wanted to write a routine named UpdateCustomer(Customer customer), can someone suggest where I could place this method?
Obviously I don't want to put it in the Customer class, since that would violate the SRP (Single Responsibility Principle). I also don't see it as a good idea to attach the method to the CustomerFactory class, since its only role is to get customers from the database.
So it looks like I'm going to need another class, but I don't know what to name it.
Cheers.
Jas.
What you have called a Factory isn't a Factory at all. It's a Repository.
A Factory handles the instantiation of various classes sharing a common Interface or Class Hierarchy, based on some set of parameters.
A Repository handles the retrieval and management of data.
The Repository would definitely have the UpdateCustomer(Customer customer) method in it as well as the GetCustomer(int id) method.
You are more or less on your way to creating a Repository. Do something like this:
public interface ICustomerRepository
{
    Customer SelectCustomer(int id);
    void UpdateCustomer(Customer customer);
    void DeleteCustomer(int id);
    void CreateCustomer(Customer customer);
}
Then create concrete implementations of this interface (the interface is there mainly because it's good practice to program against interfaces; you could skip it, although I would recommend keeping it).
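A minimal sketch of such a concrete implementation could look like this (the SqlCustomerRepository name and the stubbed-out data access are my assumptions, not part of the original):
public class SqlCustomerRepository : ICustomerRepository
{
    private readonly string _connectionString;

    public SqlCustomerRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Customer SelectCustomer(int id)
    {
        // Query the database and map the result to a Customer here.
        throw new NotImplementedException();
    }

    public void UpdateCustomer(Customer customer)
    {
        // Issue the UPDATE against the database here.
        throw new NotImplementedException();
    }

    public void DeleteCustomer(int id)
    {
        // Issue the DELETE against the database here.
        throw new NotImplementedException();
    }

    public void CreateCustomer(Customer customer)
    {
        // Issue the INSERT against the database here.
        throw new NotImplementedException();
    }
}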
Wouldn't your UpdateCustomer routine be placed in your DAL (Data Access Layer)? You should define a class to handle inserts and updates to the database and then pass a customer object to it.
You could write the DAL class to handle all of this, but I don't see any issue in keeping it in your CustomerFactory class, although, as mentioned, it is not really a factory.
Related
I would appreciate a little help here...
Let's say that in an application we have a Data Layer and a Business Logic Layer. In the DAL we have the following entities:
public class Customer {
    public string Name {get; set;}
    public ICollection<Address> Addresses {get; set;}
}

public class Address {
    public string Street {get; set;}
}
In the BLL we have the following POCOs:
public class CustomerDto {
    public string Name {get; set;}
    public ICollection<AddressDto> Addresses {get; set;}
}

public class AddressDto {
    public string Street {get; set;}
}
The entities in the DAL are populated with a light-weight ORM and retrieved by the BLL using a repository. For example:
public class CustomerInformationService {
    private readonly ICustomerRepository _repository;

    public CustomerInformationService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public CustomerDto Get(int id)
    {
        var customerEntity = _repository.Get(id);
        var customerDto = /* SOME TRANSFORMATION HERE */
        return customerDto;
    }
}
My question is about the /* SOME TRANSFORMATION HERE */ part. There is a discussion in our team about how to do the "mapping".
One approach is to use a mapper, either an automated tool such as AutoMapper or manual mapping.
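For instance, a hand-written mapping for the classes above might look like this (a sketch; the CustomerMapper name and the LINQ usage are my assumptions):
using System.Linq;

// Hypothetical manual mapper for the Customer/CustomerDto shapes above.
public static class CustomerMapper
{
    public static CustomerDto ToDto(Customer entity)
    {
        return new CustomerDto
        {
            Name = entity.Name,
            Addresses = entity.Addresses
                .Select(a => new AddressDto { Street = a.Street })
                .ToList()
        };
    }
}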
The second approach is to use the DTO as a sort of wrapper around the Entity, referencing the Entity directly in order to save a copying operation between objects. Something like this:
public class CustomerDto
{
    private IEntity _customerEntity;

    public IEntity CustomerEntity { get { return _customerEntity; } }

    public CustomerDto(IEntity customerEntity)
    {
        _customerEntity = customerEntity;
    }

    public string Name
    {
        get { return _customerEntity.Name; }
    }

    public ICollection<Address> Addresses
    {
        get { return _customerEntity.Addresses; }
    }
}
The second approach feels a little weird to me, because exposing _customerEntity.Addresses feels like a leak (of _customerEntity's reference) between my DAL and my BLL, but I am not sure.
Are there any advantages/disadvantages of using one approach over the other?
Additional info: We usually pull a maximum of 1000 records at a time that would need to be transformed between Entity and DTO.
You did not mention which light-weight ORM you are using, so I will answer in two sections.
If you are using an ORM that creates proxies
You should avoid exposing Entities outside a certain boundary. ORMs like NHibernate/EF implement lazy loading based on proxies. If you expose Entities to the application/UI layer, you will have little control over the ORM's behavior; this may lead to many unexpected issues, and debugging will also be very difficult.
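As an illustration, consider this sketch (ShopContext and the lazy-loading setup are assumed for the example, not taken from the question):
// Hypothetical illustration of the proxy/lazy-loading pitfall.
Customer customer;
using (var context = new ShopContext())    // assumed EF-style DbContext
{
    customer = context.Customers.Find(1);  // returns a lazy-loading proxy
}
// The context is disposed; touching a lazy collection now fails at runtime.
var addressCount = customer.Addresses.Count;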
Wrapping Entities in DTOs will gain nothing. You are accessing Entities anyway.
Using DTOs and mapping them with some mapper tool like AutoMapper is a good solution here.
If you are using an ORM that does not create proxies
Do NOT use DTOs; use your Entities directly. DTOs are useful and recommended in many cases, but the example you have given in the question does not need DTOs at all.
If you do choose to use DTOs, wrapping Entities in them does not make sense; if you are going to hand out the Entity anyway, why wrap it? Again, a tool like AutoMapper could help.
Refer to this question. It's a bit different (I am asking Yes/No and you are asking How), but it should still help you.
I would go for the service layer approach, basically because something that looks like a business object or domain object has nothing to do with DTOs.
And, indeed, you and your team should use AutoMapper instead of repeating the same property-copying code tons of times: A to B, A to C, C to B...
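A minimal configuration might look like this (a sketch assuming a reasonably recent AutoMapper version and the classes from the question):
using AutoMapper;

// One-time configuration, e.g. at application startup.
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<Customer, CustomerDto>();
    cfg.CreateMap<Address, AddressDto>();
});
var mapper = config.CreateMapper();

// Then, inside CustomerInformationService.Get(id):
// var customerDto = mapper.Map<CustomerDto>(customerEntity);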
I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
    void Add(TDomainEntity entity);
    void Update(TDomainEntity entity);
}
We can use various infrastructure implementations to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    public TDomainEntity Find(Guid id)
    {
        // Find, map and return entity using Entity Framework
    }

    public void Add(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Insert entity using Entity Framework
    }

    public void Update(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Update entity using Entity Framework
    }
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
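For context, the IEntityMapper contract itself is not shown in the question; inferred from the CreateFrom calls above, it might look something like this sketch:
// Assumed shape of the mapping contract, inferred from its usage above;
// the reverse-direction overload is a guess.
public interface IEntityMapper<TDomainEntity, TDataEntity>
{
    TDataEntity CreateFrom(TDomainEntity domainEntity); // domain -> data
    TDomainEntity CreateFrom(TDataEntity dataEntity);   // data -> domain
}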
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
    // I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
    // I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
    : IRepository<TDomainEntity>
{
    void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find(), Add() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        entity.IsDeleted = true;
        entity.DeletedDate = DateTime.UtcNow;
        // Update entity using Entity Framework
        // ...
    }
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
    bool IsDeleted { get; set; }
    DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository; a sketch of such a split follows below), and aggregate repositories can then expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (e.g. the updating of the IsDeleted and DeletedDate columns will be universal and is not left to the individual developer).
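For illustration, the granular split might look something like this (a sketch; these exact interfaces are not defined above):
// Hypothetical granular interfaces; constraints match IRepository above.
public interface IFindableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
}

public interface IInsertableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Add(TDomainEntity entity);
}

public interface IUpdateableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Update(TDomainEntity entity);
}

// An aggregate repository then composes only what it should expose, e.g.:
// public interface IUserRepository :
//     IFindableRepository<User>, IInsertableRepository<User>, IUpdateableRepository<User> { }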
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g.:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find(), Add() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        var deleteableEntity = entity as IDeleteableDataEntity;
        if (deleteableEntity != null)
        {
            deleteableEntity.IsDeleted = true;
            deleteableEntity.DeletedDate = DateTime.UtcNow;
            // entity is a reference type, so these changes are already
            // visible through the original reference; no cast back is needed.
        }
        // Update entity using Entity Framework
        // ...
    }
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone suggest a better architecture that leverages the C# type system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting and updating).
I do think that you're complicating things a bit too much by trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure; it has no Domain value and it's not known outside the persistence layer. So it can have fields it won't ever use; the repository is the only one that knows that, as it's a persistence detail. You can afford to be 'sloppy' here; things aren't that important at this level.
Even the 'hacky' cast is a good solution because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere, however we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions even if they're a bit smelly.
P.S.: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a doc db.
This is such a simple and common scenario that I wonder how I managed until now and why I'm having problems now.
I have this object (part of the Infrastructure assembly):
public class Queue {}
public class QueueItem
{
    public QueueItem(int blogId, string name, Type command, object data)
    {
        if (name == null) throw new ArgumentNullException("name");
        if (command == null) throw new ArgumentNullException("command");
        BlogId = blogId;
        CommandType = command;
        ParamValue = data;
        CommandName = name;
        AddedOn = DateTime.UtcNow;
    }

    public Guid Id { get; internal set; }
    public int BlogId { get; private set; }
    public string CommandName { get; set; }
    public Type CommandType { get; private set; }
    public object ParamValue { get; private set; }
    public DateTime AddedOn { get; private set; }
    public DateTime? ExecutedOn { get; private set; }

    public void ExecuteIn(ILifetimeScope ioc)
    {
        throw new NotImplementedException();
    }
}
This will be created in another assembly like this:
var qi = new QueueItem(1, "myname", typeof(MyCommand), null);
Nothing unusual here. However, this object will be sent to a repository, where it will be persisted. The Queue object will ask the repository for items, so the repository should re-create QueueItem objects.
However, as you can see, the QueueItem properties are immutable; the AddedOn property should be set only once, when the item is created. The Id property will be set by the Queue object (this is not important here).
The question is: how should I re-create the QueueItem in the repository? I could have another constructor which requires a value for ALL the properties, but I don't want that constructor available to the assembly that creates the queue item initially. The repository is part of a different assembly, so internal won't work.
I thought about providing a factory method:
class QueueItem
{
    /* ..rest of definitions.. */

    public static QueueItem Restore(/* list of params*/) { }
}
which at least makes the intent clear, but I don't know why I don't like this approach. I could also enforce item creation only through the Queue, but that means passing the Queue as a dependency to the repo, which again isn't something I'd like. Having a specific factory object just for this also seems like overkill.
Basically my question is: what is the optimum way to re-create an object in the repository, without exposing that specific creational functionality to other consumer objects?
Update
It's important to note that by repository I mean the pattern itself as an abstraction, not a wrapper over an ORM. It doesn't matter how or where the domain objects are persisted; what matters is how they can be re-created by the repository. Another important thing is that my domain model is different from the persistence model. I do use an RDBMS, but I think this is just an implementation detail which should not bear any importance, since I'm looking for a way that doesn't depend on a specific storage mechanism.
While this is a specific scenario, it can be applied to basically every object that will be restored by the repo.
Update2
OK, I don't know how I could forget about AutoMapper. I was under the wrong impression that it can't map private fields/setters, but it can, and I think this is the best solution.
In fact, I'd say the optimum solutions (IMO) are, in order:
Directly deserializing, if available.
AutoMapper.
Factory method on the domain object itself.
The first two don't require the object to do anything in particular, while the third requires the object to provide functionality for that case (a way to enter valid state data). It has clear intent, but it pretty much does a mapper's job.
Answer Updated
To answer myself: in this case the optimum way is to use a factory method. Initially I opted for AutoMapper, but I found myself using the factory method more often. AutoMapper can be useful sometimes, but in quite a lot of cases it's not enough.
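A sketch of what that factory method could look like (the parameter list is assumed from the properties shown earlier, not taken from the original):
public class QueueItem
{
    // ... constructor and properties as shown earlier ...

    // Hypothetical restore factory for the repository's use.
    public static QueueItem Restore(Guid id, int blogId, string name,
        Type command, object data, DateTime addedOn, DateTime? executedOn)
    {
        var item = new QueueItem(blogId, name, command, data);
        item.Id = id;                 // internal setter: fine inside this assembly
        item.AddedOn = addedOn;       // private setters are accessible here,
        item.ExecutedOn = executedOn; // since Restore is a member of the class
        return item;
    }
}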
An ORM framework would take care of that for you. You just have to tell it to rehydrate an object, and a regular instance of the domain class will be served to you (sometimes you only have to declare properties as virtual or protected, in NHibernate for instance). The reason is that, under the hood, ORMs usually operate on proxy objects derived from your base classes, allowing them to keep those base classes intact.
If you want to implement your own persistence layer, though, it's a whole other story. Rehydrating an object from the database without breaking the scope constraints originally defined in the object is likely to involve reflection. You also have to think about a lot of side concerns: if your object has a reference to another object, you must rehydrate that one first, etc.
You can have a look at this tutorial: Build Your Own dataAccess Layer, although I wouldn't recommend reinventing the wheel in most cases.
You talked about a factory method on the object itself. But DDD states that entities should be created by a factory. So you should have a QueueItemFactory that can create new QueueItems and restore existing QueueItems.
OK, I don't know how I could forget about AutoMapper.
I wish I could forget about AutoMapper. Just looking at the hideous API gives me shivers down my spine.
This is a sample from the Fluent NHibernate website:
Compared to Entity Framework, I have Add methods in my POCOs in this code sample using NHibernate. With EF I did context.Add or context.AddObject etc.; the context had the methods to put one entity into the other entity's collection!
Do I really have to implement Add/Delete/Update methods (I do not mean the real database CRUD operations!) in an NHibernate entity?
public class Store
{
    public virtual int Id { get; private set; }
    public virtual string Name { get; set; }
    public virtual IList<Product> Products { get; set; }
    public virtual IList<Employee> Staff { get; set; }

    public Store()
    {
        Products = new List<Product>();
        Staff = new List<Employee>();
    }

    public virtual void AddProduct(Product product)
    {
        product.StoresStockedIn.Add(this);
        Products.Add(product);
    }

    public virtual void AddEmployee(Employee employee)
    {
        employee.Store = this;
        Staff.Add(employee);
    }
}
You don't have to do this for NHibernate; you do it to keep your in-memory objects consistent and to avoid repeating yourself.
Consistency in memory
If you have a two-way relationship, let's say Order has Lines and Line has a relationship back to Order, you don't want to have a reference from one side and not from the other.
If you just do:
order.Lines.Add(line);
You have made a reference from Order to Line, but the Line.Order property remains null, so your in-memory instances are not consistent.
Don't Repeat Yourself
You can use the following code:
order.Lines.Add(line);
line.Order = order;
but you will be repeating yourself, so it is better to put this code in only one place, and the best place is an order.AddLine(...) method.
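A minimal sketch of that method (the Order/Line shapes are assumed for illustration):
public class Order
{
    public virtual IList<Line> Lines { get; protected set; }

    public Order()
    {
        Lines = new List<Line>();
    }

    public virtual void AddLine(Line line)
    {
        line.Order = this; // keep both sides of the relationship in sync
        Lines.Add(line);
    }
}

public class Line
{
    public virtual Order Order { get; set; }
}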
You don't have to. You could just call SomeStore.Products.Add(someProduct) directly from outside of your entity. But it's often good practice to make the collections 'read-only' from a public perspective and to use an add method on the entity for adding items.
One benefit of this is that you can put additional logic in there. For instance, in your store example, you could set a 'storesStockedIn' collection (if there were such a thing) in the same method, and so keep all the logic for creating that relationship in one place.
This isn't really an NHibernate thing, but rather an OOP thing (although I'm not familiar with EF; maybe it automates some of this for you). The design decisions are exactly the same as if it were just an unpersisted POCO (without NHibernate).
Take this class as an example:
public class Category : PersistentObject<int>
{
    public virtual string Title { get; set; }
    public virtual string Alias { get; set; }
    public virtual Category ParentCategory { get; set; }
    public virtual ISet<Category> ChildCategories { get; set; }

    public /*virtual*/ void Add(Category child)
    {
        if (child != null)
        {
            child.ParentCategory = this;
            ChildCategories.Add(child);
        }
    }
}
When running the application without the virtual keyword on the Add method, I get this error:
method Add should be 'public/protected virtual' or 'protected internal virtual'
I understand why properties need to be declared as virtual: they need to be overridden by the lazy loading feature.
But I don't understand why methods need to be declared as virtual... what do they need to be overridden for?
This is very confusing!
Methods need to be virtual as well, because they could access fields. Consider this situation:
class Entity
{
    //...
    private int a;

    public virtual int A
    {
        get { return a; }
    }

    public virtual void Foo()
    {
        // lazy loading is implemented here by the proxy
        // to make the value of a available
        if (a > 7)
            // ...
    }
}
I believe this is required for the lazy-loading feature in NHibernate, where NHibernate creates proxies of your entity and controls all access to it. This is why every single method and property must be virtual: basically, if there is a member doing anything with the entity, NHibernate needs to know about it and tap into it.
As mentioned earlier, in order for NHibernate to do its 'magic', it creates proxy classes which inherit from your entities (Category in your case). However, if you make your entities implement an interface, it will use that interface to create the proxy instead of the concrete types. This way, you wouldn't have to mark everything virtual.
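A rough sketch of that interface-based approach (assumed for illustration; the NHibernate mapping would have to be configured to proxy the interface):
// Hypothetical interface-proxied entity: NHibernate proxies ICategory,
// so the concrete members no longer need to be virtual.
public interface ICategory
{
    string Title { get; set; }
    void Add(ICategory child);
}

public class Category : ICategory
{
    public string Title { get; set; }

    private readonly ISet<ICategory> _childCategories = new HashSet<ICategory>();

    public void Add(ICategory child)
    {
        if (child != null)
        {
            _childCategories.Add(child);
        }
    }
}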
EDIT: Some corrections... According to this, I am compelled to say that it almost looks like NH doesn't really do anything with virtual methods after all. And I even read someone saying that they removed this run-time check from the NH core assembly just to get around it. My assumption would be that it is an older requirement which hasn't been removed. The cool thing is that it looks like there is an initiative to use PostSharp for static proxies, so your classes won't have to declare anything virtual for NH to generate proxies. The bad thing is that it looks like it's been stuck in a branch for almost two years.