I am trying to use repositories in my MVC program designs and I have run up against a problem with how best to structure them.
As an example, say I have a User entity and a UserRepository with functions like getUser(int id), saveUser(Dal.User model), etc.
So if my controller has an EditUser action and I want to display a view with a user details input form, I can do something like this:
User user = _userRepository.getUserDetails(userId);
The benefit is that my controller just deals with processing HTTP requests, while business logic moves into repositories, making testing etc. easier.
Now say I want to display a drop-down list of the possible roles this user could have in my system, i.e. client, admin, staff, etc.
Is it OK to have a function in the _userRepository called getPossibleUserRoles(), or should I have a separate _roleRepository with a function getRoles()?
Is it a bad idea to inject a repository for every entity your controller touches? Or is it a bad idea to mix entities inside your repositories, making them cluttered?
I realise I have presented a very simplistic scenario, but as systems grow in complexity you could be talking about tens of repositories needing to be instantiated in a controller for every page call, and possibly instantiating repositories that are not used by the current controller method simply so they are available to other controller methods.
Any advice on how best to structure a project using repositories is appreciated.
Is it OK to have a function in the _userRepository called getPossibleUserRoles(), or should I have a separate _roleRepository with a function getRoles()?
Let's say some of your controllers call:
_userRepository.getUserDetails(userId);
but they never call:
_userRepository.getPossibleUserRoles(userId);
Then you are forcing your controllers to depend on methods they do not use.
So it's not just OK to split this; you should.
And even if getUserDetails and getPossibleUserRoles are cohesive (they share the same entity, the same business logic, etc.), you can still split the contracts without changing the implementation of UserRepository, beyond introducing a new contract for the roles:
public class UserRepository : IUserRoles, IUserRepository
{
    // existing implementation unchanged; it now just fulfils two narrower contracts
}
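Roughly sketched (User and Role stand in for the question's own types, and the method names are assumptions), the two narrow contracts and a controller that depends on only one of them might look like this:

using System.Collections.Generic;

// Sketch only: User and Role are stand-ins for the question's entities.
public class User { }
public class Role { }

public interface IUserRepository
{
    User GetUserDetails(int userId);
    void SaveUser(User user);
}

public interface IUserRoles
{
    IList<Role> GetPossibleUserRoles();
}

// e.g. the controller with the EditUser action: it only needs the narrower contract
// and never sees the role-related methods.
public class UsersController
{
    private readonly IUserRepository _userRepository;

    public UsersController(IUserRepository userRepository)
    {
        _userRepository = userRepository;
    }
}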
I realise I have presented a very simplistic scenario, but as systems grow in complexity you could be talking about tens of repositories needing to be instantiated in a controller
If a constructor takes too many parameters, there is a high probability of an SRP violation. Mark Seemann shows how to solve this problem here.
In short: if, while implementing a behaviour, you always use two or more repositories together, those repositories are closely related. You can create a service that orchestrates them, and then inject that single service into your controller constructor instead of the two or more repositories, as sketched below.
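As a rough sketch of that idea (every name below is invented for illustration, including the User stub that only exists to make the example hang together):

// Invented types, for illustration only.
public class User { public bool IsActive { get; set; } }
public interface IUserRepository { User GetUserDetails(int userId); void SaveUser(User user); }
public interface IOrderRepository { void CancelOpenOrders(int userId); }

// The two repositories that were always used together are orchestrated here,
// so the controller depends on one service instead of two repositories.
public class AccountClosureService
{
    private readonly IUserRepository _users;
    private readonly IOrderRepository _orders;

    public AccountClosureService(IUserRepository users, IOrderRepository orders)
    {
        _users = users;
        _orders = orders;
    }

    public void CloseAccount(int userId)
    {
        User user = _users.GetUserDetails(userId);
        _orders.CancelOpenOrders(userId);
        user.IsActive = false;
        _users.SaveUser(user);
    }
}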
Is it OK to have a function in the _userRepository called getPossibleUserRoles(), or should I have a separate _roleRepository with a function getRoles()?
Both solutions are acceptable, but consider how you're going to control the proliferation of repositories and of methods on those repositories. IMHO, the typical repository usage scenario tends to end up with too many repositories with too many methods on each. DDD advocates a repository per aggregate root. This is a good rule of thumb... if you're following DDD principles.
Is it a bad idea to inject a repository for every entity you encounter into your controller? Or is it a bad idea to mix entities inside your repositories, making them cluttered?
Inject volatile dependencies, so yes, inject a repository for every entity your controller needs. However, once you start injecting more than four dependencies, chances are you've missed an abstraction somewhere in your design. Some solve this problem with a RepositoryFactory, but this arguably introduces the problem of opaque dependencies and, IMHO, fails to convey the class's real dependencies, reducing its usability and its ability to document itself.
Take a look at using query objects rather than repositories (https://lostechies.com/jimmybogard/2012/10/08/favor-query-objects-over-repositories/, etc.) and at using orchestration/mediation in your controllers (http://codeopinion.com/thin-controllers-cqrs-mediatr/). I think you'll find a better design emerges that will help with your design issues.
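For flavour, here is a minimal query-object sketch; every name in it is invented for illustration, and the linked articles show fuller, production-ready versions:

using System.Collections.Generic;
using System.Linq;

// Each question you ask of the data store becomes its own small class,
// so a controller depends only on the queries it actually executes.
public interface IQuery<TResult>
{
    TResult Execute();
}

public class PossibleUserRolesQuery : IQuery<IList<Role>>
{
    private readonly IQueryable<Role> _roles;   // supplied by your ORM / data context

    public PossibleUserRolesQuery(IQueryable<Role> roles)
    {
        _roles = roles;
    }

    public IList<Role> Execute()
    {
        return _roles.OrderBy(r => r.Name).ToList();
    }
}

// Invented stand-in for the question's Role entity.
public class Role { public string Name { get; set; } }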
All my entities are implementations of interfaces, and most of their properties are read-only.
My repository project holds a reference to the library project where I keep all the interfaces, so technically speaking the repository can save the aggregate root without knowing anything about its de facto implementation (something I believe to be a plus).
The problem here is: if most of the properties are read-only, how can I rehydrate an aggregate root without breaking OOP principles? Should the repository hold a reference to the domain project and be aware of the concrete implementations of the interfaces?
Should the repository hold a reference to the domain project and be aware of the concrete implementations of the interfaces?
As Evans describes in the Blue Book, the Repository is a role played by an implementation, to keep the application from mutating the underlying data directly. Similarly, the Aggregate Root is a role -- we don't let the application touch the actual entity, but instead just a limited part of it.
The implementation of the repository is part of the model, so it can know more about the specific entities being represented; including knowing how to extract from them a representation of state that can be handed off to your persistence component for storage.
To choose a specific context, let's pretend that we are modeling a TradeBook, and one of the interesting use cases is that of a customer placing orders.
In Java, the Repository interface -- the bit that the application knows about -- might look like this:
interface API.TradeBookRepository<TradeBook extends API.TradeBook> {
    TradeBook getById(...);
    void save(TradeBook book);
}

interface API.TradeBook {
    void placeOrder(...);
}
So the application knows that it has access to a repository, but it knows nothing about the implementation beyond the promise that it will provide something that supports placeOrder.
So the application code looks like:
API.TradeBookRepository<? extends API.TradeBook> repo = ...;
API.TradeBook book = repo.getById(...);
book.placeOrder(...);
repo.save(book);
But a given repository implementation is usually coupled to a specific implementation of the book; they are paired together.
class LIFO.TradeBook implements API.TradeBook {
...
}
class LIFO.TradeBookRepository implements API.TradeBookRepository<LIFO.TradeBook> {
...
}
How can I rehydrate an aggregate root without breaking OOP principles?
To some degree, you can't. The good news is, at the boundaries, applications are not object oriented.
The thing you are putting into your durable store isn't an aggregate root; it's some representation of state. I tend to think of it as a Memento. What you really have are two functions: one converts a specific aggregate root implementation (e.g. LIFO.TradeBook) to a Memento, the other converts a Memento back into an aggregate root.
Key idea: you are probably going to want to change your domain model a lot more often than you are going to want to migrate the database. So the Memento needs to be designed to be stable -- in effect, the Memento is a message sent from the old domain model to the new one, so many of the lessons of message versioning apply.
Simply put, something somewhere in your application has to know about concrete implementations. If you really want to shield the repository implementation (not the contract) from knowing the concrete entities, then that responsibility simply has to fall on another collaborator (e.g. the repository would delegate the rehydration to an abstract factory, as sketched below).
However, it's quite uncommon to have separate contracts for aggregates, because you usually have a single implementation of these business concepts and there's usually no scenario where you would want to mock them in unit tests. Therefore, repository contracts and implementations are most of the time defined in terms of concrete aggregates.
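To make the abstract-factory option a bit more concrete, here is a rough C# sketch reusing the TradeBook naming from the answer above; every member shown is invented for illustration and none of it comes from the original posts:

using System;

// Invented contracts: the application-facing abstractions.
public interface ITradeBook { void PlaceOrder(); }
public class TradeBookState { /* raw persisted fields go here */ }

// The domain-owned factory knows the concrete implementation; the repository does not.
public interface ITradeBookFactory
{
    ITradeBook Rehydrate(TradeBookState state);
}

public class SqlTradeBookRepository
{
    private readonly ITradeBookFactory _factory;

    public SqlTradeBookRepository(ITradeBookFactory factory)
    {
        _factory = factory;
    }

    public ITradeBook GetById(int id)
    {
        TradeBookState state = LoadStateFromStorage(id);   // persistence detail
        return _factory.Rehydrate(state);                  // concrete type chosen by the domain
    }

    private TradeBookState LoadStateFromStorage(int id)
    {
        throw new NotImplementedException();               // ADO.NET / ORM call would go here
    }
}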
How do you make entities lazy-load their relationships?
For example, take Post and Comment models, where a Post can have zero or more Comments. How can the getComments() method on the Post entity lazy-load its Comments?
My first thought is to have a CommentRepository injected into my Post entity. Why is this bad? Since entities and repositories are both part of my domain, why can't they have two-way knowledge of each other?
Thank you
UPDATE
I know there are many excellent industry-standard ORMs that perform lazy loading for the main languages out there, but I don't want to rely on their magic. I'm looking for an ORM/DBAL-agnostic solution to keep the application loosely coupled.
Aggregates represent a consistency boundary, so there should never be a need to lazy-load related data, as the aggregate as a whole should always be consistent. All objects that belong to an aggregate have no need to exist on their own. If you do have an object that has its own life-cycle, then it needs to be removed from the aggregate.
When you do find that you need to do this, you may want to rethink your design. It may be that you are using your object model to query. You should rather have a lightweight query model that can perform this function.
Injecting repositories or services into entities is generally not the best idea. A double-dispatch mechanism should be preferred.
But in your case I would still try to not lazy-load.
Consider using a proxy that subclasses Post and overrides the getComments() method. Inject the proxy with the CommentRepository and access it in the overridden getComments() method.
This is how an ORM would typically do it. It keeps your domain classes clean as only the proxy is dependent on a data access mechanism.
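A minimal sketch of that proxy might look like the following; the Post, Comment and ICommentRepository members shown here are assumptions made purely for illustration:

using System.Collections.Generic;

// Assumed shapes, for illustration only.
public class Comment { }
public interface ICommentRepository { IList<Comment> GetByPostId(int postId); }

public class Post
{
    public int Id { get; protected set; }
    protected IList<Comment> Comments = new List<Comment>();

    public virtual IList<Comment> GetComments()
    {
        return Comments;
    }
}

// Only the proxy knows about data access; Post itself stays clean.
public class LazyPostProxy : Post
{
    private readonly ICommentRepository _commentRepository;
    private bool _loaded;

    public LazyPostProxy(ICommentRepository commentRepository)
    {
        _commentRepository = commentRepository;
    }

    public override IList<Comment> GetComments()
    {
        if (!_loaded)
        {
            Comments = _commentRepository.GetByPostId(Id);   // hits the store only on first access
            _loaded = true;
        }
        return Comments;
    }
}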
First, you should separate the domain concept from the details of its realization. The aggregate pattern is about how to organize your domain; lazy loading is an implementation detail.
Also, I disagree with @Eben Roux about the inconsistency of aggregates. In my opinion, lazy loading contradicts nothing here, and I'll explain why.
Lazy loading itself
To understand how lazy loading can be implemented, you may refer to the 'Lazy Load' pattern in Martin Fowler's PoEAA. For me, a proxy is the best solution.
It's also important to note that most modern ORMs support lazy loading, BUT for the data model (not the domain model).
It is good practice to separate the data model from the domain model and use repositories to hide the transformation between them:
[Diagram: separated domain and data models]
In this case, domain-model objects are constructed inside repositories that hide the ORM context. The required data object and all its associations are loaded by the ORM, then the transformation to the domain model is performed, and finally the constructed domain object is returned.
The question is how to load some associations not during the creation of the domain object, but later during its lifetime. You can use a repository inside an entity, and I see nothing inherently wrong with that. It would look like this:
public class Post {
    private ICommentsRepository _commentsRepository;
    private IList<Comment> _comments;

    // necessary to perform lazy loading (the repository always works with ids)
    private IList<int> _commentIds;

    // lazy-loading logic goes here
    ...
}
There are problems with this:
Your model becomes less clear: it now contains 'technical' information like _commentIds.
As soon as you define ICommentsRepository, you claim that Comment is an aggregate root. If we introduce the aggregate pattern into the domain model, repositories should be created only for aggregate roots, which means Comment and Post are separate aggregate roots -- and that is possibly not what you want.
There is a better solution:
public interface ICommentList {
    ...
}

public class CommentList : ICommentList {
    ...
}

public class CommentListProxy : ICommentList {
    private CommentList _realCommentList;
    private IList<int> _commentIds;

    // implement lazy loading here using the ORM's capabilities --
    // do not use a repository here!
}

public class Post {
    private ICommentList _commentList;
    ...
}
The Post repository will initialize the _commentList field with a proxy object. It is also worth noting:
CommentListProxy belongs to the data-model layer, not to the domain model. It uses the ORM's capabilities to implement lazy loading,
so it doesn't use repositories, and you may therefore consider CommentList a part of the Post aggregate.
The only possible disadvantage of this approach is the implicit database querying that happens when operating on domain objects. This must be clear to users of the Post class.
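To illustrate that wiring, here is a rough sketch of the Post repository. The data-model type, the ORM-context interface, the CommentListProxy constructor and the Post constructor used below are all assumptions, since the snippets above deliberately elide them:

using System.Collections.Generic;

// Assumed stand-ins for the elided pieces above.
public interface IOrmContext { T Load<T>(int id); }
public class PostData { public int Id; public string Title; public IList<int> CommentIds; }

public class PostRepository
{
    private readonly IOrmContext _orm;   // whatever ORM context the repository hides

    public PostRepository(IOrmContext orm)
    {
        _orm = orm;
    }

    public Post GetById(int id)
    {
        PostData data = _orm.Load<PostData>(id);                             // data-model object
        ICommentList comments = new CommentListProxy(_orm, data.CommentIds); // lazy, ORM-backed proxy
        return new Post(data.Id, data.Title, comments);                      // fully constructed domain object
    }
}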
Smart ORMs
Finally, there are ORMs that allow you to use the same model for both domain and data. They implement lazy loading for the domain model in the same way as for the data model. Take a look at DataObjects.Net; for some cases it is a good solution.
I am using MVC4, EF5, the repository pattern and Unity for IoC.
Where should the business logic be placed?
in the repository of the specific model,
in the controller, or
in an extension of the model's partial class, as a static function?
In my application each controller holds an instance of the unit of work. If the logic were held inside one of the repositories or inside a partial class, that would require passing the unit of work around as a parameter. What would you recommend as best practice?
thanks :)
As GraemeMiller highlights, controllers should be free of business logic. I think a repository should be fairly light in terms of business logic, too. Dino Esposito recommends a similar pattern to GraemeMiller's, in which the controller hands the viewmodel off to some kind of coordinator that uses various other classes to do its job, generating a modified viewmodel or redirecting to another controller as appropriate. Your coordinator could be dependent on a unit of work, or it might establish one itself; I'd favour the former.
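A bare-bones sketch of that shape might look like the following; every name below is invented, and the coordinator takes the unit of work so the controller never has to pass it around:

using System.Web.Mvc;

// Invented abstractions, for illustration only.
public class User { public int Id { get; set; } public string Name { get; set; } }
public class RegisterUserViewModel { public int Id { get; set; } public string Name { get; set; } }
public interface IUnitOfWork { void Add(User user); void Commit(); }

// Business logic lives here, not in the controller and not in a repository.
public class RegisterUserCoordinator
{
    private readonly IUnitOfWork _unitOfWork;

    public RegisterUserCoordinator(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public RegisterUserViewModel Handle(RegisterUserViewModel input)
    {
        var user = new User { Name = input.Name };
        _unitOfWork.Add(user);
        _unitOfWork.Commit();

        input.Id = user.Id;
        return input;
    }
}

// The controller stays thin: bind, delegate, return a view.
public class UsersController : Controller
{
    private readonly RegisterUserCoordinator _coordinator;

    public UsersController(RegisterUserCoordinator coordinator)
    {
        _coordinator = coordinator;
    }

    [HttpPost]
    public ActionResult Register(RegisterUserViewModel model)
    {
        return View(_coordinator.Handle(model));
    }
}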
I am programming a library that collects some data. It will be able to switch its repositories to change the data destination (database/files). I have several entities to store, such as cities, streets, etc. My plan is to publish an interface that must be implemented to create a custom repository for a custom data store.
From what I have seen, each repository takes care of one entity, but in that case there would have to be multiple interfaces -- one per repository. Is it OK (in terms of the repository design pattern) to create a single repository accepting all the needed entities and publish just one interface? With more interfaces there is the possibility of forgetting to implement some of them and creating an inconsistent data API.
Is there a better way to solve this?
Each repository can return different entities, but if you group everything together in one interface it will be really hard for other developers to read and maintain. In my projects, we try to make sure each repository returns only related entities. Hope this helps.
I usually go for a hybrid, in the sense that I have a base repository plus extending repositories for anything that needs a custom implementation.
i.e.:
public class BaseRepo<T> : IRepo<T> where T : Entity   // Entity: your common base entity type
{
    // common functionality for all repos,
    // such as Find, Add, Remove, etc.
}
However, most of the time you will need more than CRUD, especially for selects.
It is a terrible idea to pass around expression trees; that kills your testability and maintainability.
Moreover, you won't be able to make good use of Dependency Injection with a single repo, which is certainly doable but highly discouraged.
You need to separate the responsibilities of your repositories: follow SOLID principles and create a good API.
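For example (City, ICityRepository, the Entity base class shown here and the protected Query() helper on the base repository are all invented purely for illustration), an extending repository for entity-specific selects might look like:

using System.Collections.Generic;
using System.Linq;

// Stand-ins for the base entity type and one of the library's entities.
public class Entity { public int Id { get; set; } }
public class City : Entity { public string Name { get; set; } public string CountryCode { get; set; } }

public interface ICityRepository : IRepo<City>
{
    IList<City> FindByCountry(string countryCode);
}

public class CityRepository : BaseRepo<City>, ICityRepository
{
    public IList<City> FindByCountry(string countryCode)
    {
        // A named, testable query instead of passing expression trees around.
        // Query() is an assumed protected helper exposing IQueryable<City> from the base repo.
        return Query().Where(c => c.CountryCode == countryCode).ToList();
    }
}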
I suggest creating a GeographicRepository that contains references to multiple data sources and accepts a featureType as a parameter.
A possible way to use this would be (pseudocode):
var rep = new GeoRepository();
var citylist = rep.getEntities(featureType='city');
// or instead:
var citylist = rep.getCities()
EDIT: a suggestion based on the central-repo vs. fragmented-repo question would be to have a RepositoryFaçade act as an aggregator of individual (and individually testable) repositories:
var centralRepo = new GeoRepository();
centralRepo.connectRepository(new GoogleCityRepo());
centralRepo.connectRepository(new YahooVillagesRepo());
centralRepo.connectRepository(new USGSDatabaseRepo('C:\usgs_usa_counties.db'));
Of course the way to create/declare "connections" would vary: hardcoded in the constructor, depending on service availability, explicit (as shown above), whatever. Also, that approach would allow for individual testing by writing a harness façade that calls only a single repo.
Hope this helps!
Short Answer: Yes, you can use a single repository for all operations.
Long Answer: When I first started using repositories, I thought the only approach was one repository per entity. Then I found the excellent article "Query Objects with the Repository Pattern", where the author discusses whether to use a single repository per aggregate root, a repository per entity, or just a single repository for the whole thing. He concludes with a very tempting recommendation to use a single repository for everything, combined with the query object pattern for querying the data source. I really liked the end result, and you might too.
I'm currently trying out a few different ways of implementing repositories in the project I'm working on, and at the moment I have a single repository with generic methods on it, something like this:
public interface IRepository
{
    T GetSingle<T>(IQueryBase<T> query) where T : BaseEntity;
    IQueryable<T> GetList<T>(IQueryBase<T> query) where T : BaseEntity;
    T Get<T>(int id) where T : BaseEntity;
    int Save<T>(T entity) where T : BaseEntity;
    void DeleteSingle<T>(IQueryBase<T> query) where T : BaseEntity;
    void DeleteList<T>(IQueryBase<T> query) where T : BaseEntity;
}
That way I can just inject a single repository into a class and use it to get whatever I need.
(by the way, I'm using Fluent NHibernate as my ORM, with a session-per-web-request pattern, and injecting my repository using Structuremap)
This seems to work for me - the methods I've defined on this repository do everything I need. But in all my web searching, I haven't found other people using this approach, which makes me think I'm missing something ... Is this going to cause me problems as I grow my application?
I read a lot of people talking about having a repository per root entity - but if I identify root entities with some interface and restrict the generic methods to only allow classes implementing that interface, then aren't I achieving the same thing?
thanks in advance for any offerings.
I'm currently using a mix of both generic repositories (IRepository<T>) and custom (ICustomRepository). I do not expose IQueryable or IQueryOver from my repositories though.
Also I am only using my repositories as a query interface. I do all of my saving, updating, deleting through the Session (unit of work) object that I'm injecting into my repository. This allows me to do transactions across different repositories.
I've found that I definitely cannot do everything from a generic repository but they are definitely useful in a number of cases.
To answer your question, though: I do not think it's a bad idea to have a single generic repository if you can get by with it. In my implementation this would not work, but if it works for you then that's great. I think it comes down to what works best for you; I don't think you will ever find a solution out there that works perfectly for your situation. I've found hybrid solutions work best for me.
I've done something similar in my projects. One drawback is that you'll have to be careful you don't create a select n+1 bug. I got around it by passing a separate list of properties to eagerly fetch.
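Something along these lines, for instance; the overload below and the example property in the usage comment are invented, and only the shape matters:

using System;
using System.Linq;
using System.Linq.Expressions;

// An overload of GetList that names the associations to fetch eagerly,
// so a list page doesn't fire one extra query per row (the classic select N+1).
public interface IRepositoryWithEagerFetch : IRepository
{
    IQueryable<T> GetList<T>(IQueryBase<T> query,
                             params Expression<Func<T, object>>[] eagerFetch)
        where T : BaseEntity;
}

// Usage sketch: load orders together with their customers in one round trip.
// var orders = repository.GetList(new ActiveOrdersQuery(), o => o.Customer);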
The main argument you'll hear against wrapping your ORM like this is that it's a leaky abstraction. You'll still have to code around some of the "gotchas" like select N+1, and you don't get to take full advantage of things like NH's caching support (at least not without extra code).
Here's a good thread on the pros and cons of this approach on Ayende's blog. He's more or less opposed to the pattern, but there are a few counter arguments too.
I've implemented this kind of repository for NHibernate; you can see an example here.
In that implementation you are able to do eager loading and fetching. The pitfall is that with NH you will often need to use the QueryOver or Criteria API to access data (unfortunately the LINQ provider is still far from perfect), and behind such an abstraction that can become a problem, leading to a leaky abstraction.
I have actually moved away from the repository pattern and from creating unit-of-work interfaces - I find it limiting.
Unless you anticipate a change in the data store, i.e. going from a database to text files or XML - which has never been the case for me - you are best off using ISession. You are trying to abstract your data access, and that is exactly what NHibernate does. Using a repository limits really cool features like Fetch(), FetchMany(), futures, etc. ISession is your unit of work.
Embrace NHibernate and use the ISession directly!
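For instance, with NHibernate.Linq a query written against the ISession keeps those features available. A rough sketch (Order and Customer are invented entities, and error handling is omitted):

using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

// Invented entities for the sake of the example.
public class Customer { }
public class Order { public virtual Customer Customer { get; set; } public virtual bool IsOpen { get; set; } }

public class OpenOrdersQuery
{
    private readonly ISession _session;   // injected per web request

    public OpenOrdersQuery(ISession session)
    {
        _session = session;
    }

    public IList<Order> Execute()
    {
        return _session.Query<Order>()
                       .Fetch(o => o.Customer)    // eager fetch: awkward to expose through a generic repository
                       .Where(o => o.IsOpen)
                       .ToList();
    }
}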
I've used this approach successfully on a few projects. It gets burdensome passing many IRepository<T> instances into my service layer, one for each BaseEntity, but it works. One thing I would change is to put the where T : constraint on the interface rather than on the methods:
public interface IRepository<T> where T : BaseEntity
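Spelled out in full, the body is the interface from the question, unchanged apart from dropping the per-method constraints:

public interface IRepository<T> where T : BaseEntity
{
    T GetSingle(IQueryBase<T> query);
    IQueryable<T> GetList(IQueryBase<T> query);
    T Get(int id);
    int Save(T entity);
    void DeleteSingle(IQueryBase<T> query);
    void DeleteList(IQueryBase<T> query);
}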