Entity Framework Code First wrapper or repository? - repository

I've seen it mentioned sometimes that Repository Pattern is built into Entity Framework Code First via the DbSet and DbContext objects.
However that leaves a few problems:
1) Injection - hard to inject as there isn't a clear-cut interface
2) Mocking - same as above
3) Multiple references to EntityFramework.dll - let's say I create my Code First model in its own assembly/project and then want to reference it somewhere else; without some wrapper present I also have to reference EntityFramework.dll there
Do you agree with this, and if you do, what do you think is the best solution?

DbSet has an interface (IDbSet<T>), and you usually implement your own context class derived from DbContext, so that class can also implement an interface of your own; this lets you deal with injection without any problem.
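A minimal sketch of that idea (IMyContext, Customer and CustomerService are hypothetical names, not from the question):

using System.Data.Entity;

public class Customer
{
    public int Id { get; set; }
    public bool IsActive { get; set; }
}

// Your own interface over the context; consumers depend on this, not on DbContext.
public interface IMyContext
{
    IDbSet<Customer> Customers { get; }
    int SaveChanges();
}

public class MyContext : DbContext, IMyContext
{
    public IDbSet<Customer> Customers
    {
        get { return Set<Customer>(); }
    }
    // SaveChanges() is inherited from DbContext and already satisfies IMyContext.
}

// A consumer that can receive either the real context or a fake in tests.
public class CustomerService
{
    private readonly IMyContext _context;

    public CustomerService(IMyContext context)
    {
        _context = context;
    }
}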
This is a more complex issue. Mocking the context doesn't make sense, and mocking IDbSet doesn't make sense either, but at the same time mocking any repository or wrapper that exposes IQueryable or accepts an Expression<Func<>> passed on to LINQ to Entities doesn't make sense either (here is a simple example why). So yes, a repository can handle this, but you will have to put more effort into it, and you will not use LINQ to query the database from code calling your repository. If you want your upper layer to use declarative queries (as expected when using a repository), you must implement your own specifications, as sketched below.
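One hedged sketch of what such specifications might look like (all names are made up; it reuses the hypothetical IMyContext and Customer from the sketch above). The calling code states what it wants declaratively and never writes LINQ against EF itself:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public interface ISpecification<T>
{
    Expression<Func<T, bool>> ToExpression();
}

public class ActiveCustomersSpecification : ISpecification<Customer>
{
    public Expression<Func<Customer, bool>> ToExpression()
    {
        return c => c.IsActive;
    }
}

public interface ICustomerRepository
{
    IList<Customer> Find(ISpecification<Customer> specification);
}

// Only the EF-backed repository hands the expression to LINQ to Entities.
public class CustomerRepository : ICustomerRepository
{
    private readonly IMyContext _context;

    public CustomerRepository(IMyContext context)
    {
        _context = context;
    }

    public IList<Customer> Find(ISpecification<Customer> specification)
    {
        return _context.Customers.Where(specification.ToExpression()).ToList();
    }
}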
Imho, if you don't have EntityFramework.dll in the GAC and you reference your first assembly from a new solution, you will still add a reference to EntityFramework.dll to make sure it is deployed with your code. Otherwise you are right: without a wrapper you need the reference.

What criteria should one used to determine if Dependency Injection Framework should be used? [duplicate]

I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). Often it is said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words, unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject the customisations for each client; this could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development phases you will find out you need to access several subsystems from different parts of your program. With DI each of your classes just needs to ask for services and you're free of having to provide all the wiring manually.
This really helps me concentrate on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
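A rough illustration of the "just configure a class to be one" point, using Unity purely as an example container (IClock / SystemClock are invented names):

using System;
using Microsoft.Practices.Unity;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now { get { return DateTime.Now; } } }

public class CompositionRoot
{
    public static void Main()
    {
        var container = new UnityContainer();

        // The class stays a plain class; the container makes it behave as a singleton.
        container.RegisterType<IClock, SystemClock>(new ContainerControlledLifetimeManager());

        var a = container.Resolve<IClock>();
        var b = container.Resolve<IClock>();   // same instance as 'a'

        // A test can still use its own container (or none at all) with a fake IClock.
    }
}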
Before I was using DI I didn't really understand its worth, but trying it was a real eye-opener to me: my designs are a lot more object-oriented than they were before.
By the way, with the current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier moving things around and keeping classes small and simple.
While I semi-agree with you on the DB example, one of the big things I found DI helpful for is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects.
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or, possibly, demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
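A small sketch of that layering with invented names: the domain object depends on an abstraction, so a database-backed or XML-backed source can be swapped in for tests and demos.

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public decimal Total { get; set; }
    public bool IsOpen { get; set; }
}

// The data access abstraction the domain object depends on.
public interface IOrderSource
{
    IEnumerable<Order> GetOrders();
}

// Business domain object: it has no idea whether data comes from SQL, XML or memory.
public class OrderBook
{
    private readonly IOrderSource _source;

    public OrderBook(IOrderSource source)
    {
        _source = source;
    }

    public decimal OpenOrderTotal()
    {
        return _source.GetOrders().Where(o => o.IsOpen).Sum(o => o.Total);
    }
}

// Production wiring might inject a SQL-backed source; a demo or test can inject this:
public class InMemoryOrderSource : IOrderSource
{
    public IEnumerable<Order> GetOrders()
    {
        yield return new Order { Total = 10m, IsOpen = true };
        yield return new Order { Total = 99m, IsOpen = false };
    }
}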
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you inject into it. While it may not truly "process" them, it runs methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo for example that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a Property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like to do is have some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is to have some way of faking an instance of Bar passed to Foo such that we can control the properties set on this fake Bar and achieve what we set out to do, test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. return true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object, mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test, which uses them to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
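To make the example concrete, here is one hedged way the fake could look with a hand-rolled stub. It assumes Foo is refactored to take an IBar abstraction (or, alternatively, that Bar's members are made virtual so a mocking library could fake them) and that PropertyEnum has some other member besides ValidProperty; the test attributes are MSTest-style:

using Microsoft.VisualStudio.TestTools.UnitTesting;

public interface IBar
{
    PropertyEnum SomeProperty { get; }
}

// A stub: it simply feeds whatever value the test needs into the class under test.
public class StubBar : IBar
{
    public PropertyEnum SomeProperty { get; set; }
}

[TestClass]
public class FooTests
{
    [TestMethod]
    public void IsPropertyOfBarValid_ReturnsTrue_ForValidProperty()
    {
        var foo = new Foo(new StubBar { SomeProperty = PropertyEnum.ValidProperty });

        Assert.IsTrue(foo.IsPropertyOfBarValid());
    }

    [TestMethod]
    public void IsPropertyOfBarValid_ReturnsFalse_ForAnyOtherValue()
    {
        var foo = new Foo(new StubBar { SomeProperty = PropertyEnum.SomeOtherProperty });

        Assert.IsFalse(foo.IsPropertyOfBarValid());
    }
}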
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection when you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file and you only need the XML file, you will have to instantiate the object and configure a lot of things if you don't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection because the object is not instantiated in a static way.

Dependency injection - somewhere between constructor and container

I have a situation where I am currently using constructor dependency injection for a small module that fits inside a larger web framework.
It's working fine, but now there is a new class being introduced that requires 2 objects passed to it. However, 1 of the objects requires a lot of work to get set up - essentially it involves around 4 method calls which create other objects in order to get it into a working state, ready to be passed to my object.
My dilemma is that constructor injection is no use due to the work involved, but introducing an IoC container is way over the top, especially for this one-off use case.
So, how should this be handled? Is there some sort of solution that sits in the middle of these two options?
You've effectively got five choices:
Poor Man's DI (create objects manually and pass to constructors)
IoC container
Factory method
Abstract factory
Builder (thanks, Mark Seemann!)
I usually start off with an IoC container, but I do a lot of DI. (I've seen too many tightly-coupled codebases.)
If you don't want to introduce an IoC container, I'd lean towards Poor Man's DI.
If you're working in any object-oriented language (not just C#), I recommend reading the book Dependency Injection in .NET. It covers the patterns and anti-patterns in detail.
1 of the objects requires a lot of work to get set up - essentially it involves around 4 method calls which create other objects in order to get it into a working state, ready to be passed to my object.
OK, then create that object first and pass the completely initialized object to the constructor that needs it.
Creating that object sounds like a job for a Builder or Factory.
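A hedged sketch of what that might look like, with entirely made-up names; the point is that the four-step setup lives in one place and the consuming class still gets plain constructor injection:

public class HeavyDependency
{
    public void LoadConfiguration() { /* creates/configures helper objects */ }
    public void ConnectSubsystems() { /* ... */ }
    public void WarmCaches() { /* ... */ }
    public void Validate() { /* ... */ }
}

public class OtherDependency { }

public class NewClass
{
    private readonly HeavyDependency _heavy;
    private readonly OtherDependency _other;

    public NewClass(HeavyDependency heavy, OtherDependency other)
    {
        _heavy = heavy;
        _other = other;
    }
}

// The builder owns the messy setup...
public class HeavyDependencyBuilder
{
    public HeavyDependency Build()
    {
        var heavy = new HeavyDependency();
        heavy.LoadConfiguration();
        heavy.ConnectSubsystems();
        heavy.WarmCaches();
        heavy.Validate();
        return heavy;
    }
}

// ...so the wiring code (Poor Man's DI) stays readable:
// var instance = new NewClass(new HeavyDependencyBuilder().Build(), new OtherDependency());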
My dilemma is that constructor injection is no use due to the work involved,
I prefer Constructor Injection and see no reasons to avoid it.
Using modern IoC frameworks you can specify creation logic that involves "a lot of work to get set up" via a factory / factory method.
No matter how many steps are needed to build an instance of IMyService, you can simply use a constructor dependency to inject it.
Castle Windsor
container.AddFacility<FactorySupportFacility>()
    .Register(
        Component.For<IMyFactory>().ImplementedBy<MyFactory>(),
        Component.For<IMyService>()
            .UsingFactoryMethod(k => k.Resolve<IMyFactory>().Create())
    );
Unity
var container = new UnityContainer();
container.RegisterType<IMyFactory, MyFactory>();
container.RegisterType<IMyService>(
    new InjectionFactory(c => c.Resolve<IMyFactory>().Create()));
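Both registrations above assume a factory roughly like the following (the IMyService / MyService bodies are placeholders); the expensive multi-step setup is hidden behind Create(), and consumers just declare a constructor dependency:

public interface IMyService { }
public class MyService : IMyService { }

public interface IMyFactory
{
    IMyService Create();
}

public class MyFactory : IMyFactory
{
    public IMyService Create()
    {
        var service = new MyService();
        // the "lot of work to get set up" (the four-or-so method calls) goes here
        return service;
    }
}

// Consumers never see the factory; the container calls it for them.
public class Consumer
{
    private readonly IMyService _service;

    public Consumer(IMyService service)   // arrives fully initialized
    {
        _service = service;
    }
}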

AutoMapper in the DAL: When to use Mapper.Reset()?

I'm using AutoMapper in a generated Data Access Layer. That works fine. It was a little confusing when using AutoMapper in another layer and realizing the mappings created in the DAL with Mapper.CreateMap<T1, T2>() were still present. I see Mapper.Reset() which will remove these however I'd rather not have to have the other layers worry about the DAL. Would the best practice be to put a Mapper.Reset() before and after my mapping operations in the DAL? Or is there a way to give these DAL mappings a non-default key to let them persist but not interfere with the use of AutoMapper in other layers?
Note: The use of AutoMapper in the DAL has some specific options such as a number of .ForMember(...) calls that my other layers should not use (without a Mapper.Reset() they would reuse these options).
AutoMapper works as a singleton/single instance. Does it really matter though?
EDIT: This may help you: Using Profiles in Automapper to map the same types with different logic
If your other layers aren't worried so much about the DAL classes chances are they aren't going to be calling Map on an instance of the DAL class anyway.
If you call Reset() then your DAL classes will need to restate their mappings the next time they need to do some mapping, which adds very unnecessary overhead.
EDIT: If you call Reset at the start of every DAL call then you can only have a single-threaded data access strategy. If you call Reset in the middle of a mapping for another DAL call then you are obviously going to break it - so you would have to lock on every DAL method.
This is not the way AutoMapper is meant to be used, so I would lean towards either looking into those profiles (rough sketch below), or not using it altogether.
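For the profiles route, a rough sketch of the idea: group the DAL-specific CreateMap/ForMember calls into their own Profile so other layers never have to restate them. The types are hypothetical and this uses the classic static API; the exact Profile syntax varies between AutoMapper versions.

using AutoMapper;

public class OrderEntity { public string Number { get; set; } public decimal Total { get; set; } }
public class OrderDto { public string DisplayName { get; set; } public decimal Total { get; set; } }

// All DAL-specific mapping configuration lives here, including the .ForMember() tweaks.
public class DalMappingProfile : Profile
{
    protected override void Configure()
    {
        CreateMap<OrderEntity, OrderDto>()
            .ForMember(d => d.DisplayName, o => o.MapFrom(s => "Order " + s.Number));
    }
}

// Registered once at startup instead of scattering Mapper.CreateMap calls:
// Mapper.Initialize(cfg => cfg.AddProfile<DalMappingProfile>());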
ALSO: Can you post some sample code showing what is wrong with having lots and lots of mappings registered at once? Are there different mapping strategies between two types depending on where in the DAL they are being called from?

Entity Framework - implementing IDbSet

I would like to implement IDbSet to provide my DbContext a custom implementation that will essentially serve to filter my data for the currently logged-in user (I am working on a multi-tenant application). The general idea I am going with is very similar to this post: Can a DbContext enforce a filter policy?
However, it is not clear to me how to make DbContext "know" about how to use my own class that implements IDbSet. I am having a tough time finding documentation on this. Any pointers would be helpful!
TIA,
-jle
I'm almost sure that you cannot create your own implementation of IDbSet and pass it to Entity Framework. Such an implementation would lose all the EF-related functionality that is implemented internally in DbSet itself - and by internally I really mean that there is no public API to replace it. The IDbSet interface is provided not so that you can create your own sets, but because it allows mocking sets when unit testing the application. The only ways you can extend the functionality are:
Inheriting from DbSet, but I'm afraid that will not help you either because its methods / properties are not marked as virtual.
Creating a custom IDbSet implementation which wraps DbSet (rough sketch of the idea below). This looks like the best chance, but you may still find that DbContext doesn't like your new implementation. I gave this a very quick try but was not successful: my implementation worked for persisting but not for querying.
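For what it's worth, a rough illustration of the wrapping idea in the second option: delegate everything to an inner DbSet<T> and expose a filtered query (for example the tenant filter). As said above, this worked for persisting but not for querying in my quick attempt, so treat it as a sketch rather than a recipe:

using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class FilteredDbSet<T> : IDbSet<T> where T : class
{
    private readonly DbSet<T> _inner;
    private readonly Expression<Func<T, bool>> _filter;

    public FilteredDbSet(DbSet<T> inner, Expression<Func<T, bool>> filter)
    {
        _inner = inner;
        _filter = filter;
    }

    // Write-side members simply delegate to the real set.
    public T Add(T entity) { return _inner.Add(entity); }
    public T Remove(T entity) { return _inner.Remove(entity); }
    public T Attach(T entity) { return _inner.Attach(entity); }
    public T Create() { return _inner.Create(); }
    public TDerived Create<TDerived>() where TDerived : class, T { return _inner.Create<TDerived>(); }
    public T Find(params object[] keyValues) { return _inner.Find(keyValues); }
    public ObservableCollection<T> Local { get { return _inner.Local; } }

    // Read-side members expose the filtered query.
    private IQueryable<T> Filtered { get { return _inner.Where(_filter); } }

    public Type ElementType { get { return Filtered.ElementType; } }
    public Expression Expression { get { return Filtered.Expression; } }
    public IQueryProvider Provider { get { return Filtered.Provider; } }
    public IEnumerator<T> GetEnumerator() { return Filtered.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}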

WCF data contract design with dependency injection

So I have a layered application that I am adding a WCF service interface on top of. The service is simply a facade with all of our business logic already existing in Business Objects (BOs) within the Business Logic Layer (BLL) which is a class library. Within the BLL we use constructor injection to inject dependencies into the BOs. This is all working with good unit testing, etc. On to the problem...
Ordinarily I'd simply create a set of Request/Response objects as DataContracts for each service method with the appropriate properties for the operation. If the operation required one of our "entities" to be passed either to or from the method, I'd simply define a property of that type and everything would be fine (all of our BOs are serializable). However when one of these "entities" is passed into a service method, WCF deserializes the object without ever invoking the constructors we've defined and, as a result, the dependencies don't resolve.
Let's use the case of a service method called CreateSomething. I'd normally define this as a service operation with a signature like:
CreateSomethingResponse CreateSomething(CreateSomethingRequest request);
CreateSomethingRequest would be a DataContract and have amongst its properties a property of type Something that represented the "entity" being passed into the service. Something, in this case, is a business object that expects to receive an instance of the ISomethingRepository interface from the DI container when instantiated - which, as I said above, does not happen when WCF deserializes the object on the server.
Option #2 is to remove the Something property from the DataContract and define each of the properties explicitly in my DataContract then inside my service method, create a new instance of the Something class, letting the container inject the dependency, then map the property values from the DataContract object into the BO. And I can certainly do that but I am concerned about now having two places to make changes if, say, I want to add a property to the Something type. And, with a lot of properties, there's a lot of code duplication.
Has anyone crossed this bridge and, if so, can you share your thoughts and how you have or would approach this situation in your own applications? Thx!!!
There are two answers to your problem:
First: Do not send your entities; use data transfer objects instead. Your entities are business objects with their own logic and data. The logic of a business object is most probably there to control its data, so let the business object control its data in the business layer and exchange only dumb data crates (DTOs) across the service.
Second: If you don't want to follow the first approach, check the documentation of your IoC container. There are usually two methods for resolving dependencies. For example, Unity offers:
Resolve - builds a new instance and injects all dependencies (necessary for constructor injection)
BuildUp - takes an existing instance and resolves all property dependencies. This should be your choice (see the sketch below).
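A hedged sketch of the BuildUp route with Unity: the entity arrives from WCF deserialization with its dependency properties null, and BuildUp fills them in afterwards (the [Dependency]-marked repository property and the service shape are illustrative, not your actual code):

using Microsoft.Practices.Unity;

public interface ISomethingRepository { }

public class Something
{
    // Property injection marker: BuildUp will set this from the container.
    [Dependency]
    public ISomethingRepository Repository { get; set; }

    // ...data members populated by the WCF DataContract deserializer...
}

public class SomethingService
{
    private readonly IUnityContainer _container;

    public SomethingService(IUnityContainer container)
    {
        _container = container;
    }

    public void CreateSomething(Something deserialized)
    {
        // The constructor never ran through the container, so resolve the
        // property dependencies on the already-existing instance instead.
        _container.BuildUp(deserialized);

        // deserialized.Repository is now usable.
    }
}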
Thanks, Ladislav, for your answer as you confirmed what was already in my head.
What I ended up doing was to change my approach a little. I realized that my use of a business object, per se, was overkill and unnecessary. Or perhaps, just misdirected. When evaluating my requirements, I realized that I could "simplify" my approach and make everything work. By taking each logical layer in my application and looking at what data needed to pass between the layers, I found a design that works.
First, for my business logic layer, instead of a business object, I implemented a Unit of Work object: SomethingManager. SomethingManager is tied to my root Something entity so that any action I want to perform on or with Something is done through the SomethingManager. This includes methods like GetById, GetAll, Save and Delete.
The SomethingManager class accepts two objects in its constructor: an IValidator<Something> and an ISomethingRepository. These will be injected by the IoC container. The former lets me perform all of the necessary validation using whatever framework we chose (initially the Validation Application Block), and the latter gives me persistence ignorance, abstracting away the use of LINQ to SQL today and making an upgrade to EF4 much easier later on.
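A rough sketch of that shape; the interface members are purely illustrative, and it is written against the ISomething abstraction described further below (the concrete Something works just as well):

public interface ISomething
{
    int Id { get; set; }
}

public interface IValidator<T> { void Validate(T item); }

public interface ISomethingRepository
{
    ISomething GetById(int id);
    void Save(ISomething item);
    void Delete(ISomething item);
}

public interface ISomethingManager
{
    ISomething GetById(int id);
    void Save(ISomething item);
    void Delete(ISomething item);
}

public class SomethingManager : ISomethingManager
{
    private readonly IValidator<ISomething> _validator;
    private readonly ISomethingRepository _repository;

    // Both dependencies are supplied by the container.
    public SomethingManager(IValidator<ISomething> validator, ISomethingRepository repository)
    {
        _validator = validator;
        _repository = repository;
    }

    public ISomething GetById(int id) { return _repository.GetById(id); }

    public void Save(ISomething item)
    {
        _validator.Validate(item);   // e.g. Validation Application Block behind the interface
        _repository.Save(item);
    }

    public void Delete(ISomething item) { _repository.Delete(item); }
}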
For my service layer, I've wired the IoC container (Unity in this case) into WCF so the service instance is created by the container. This allows me to inject an instance of ISomethingManager into my service. Using the interface, I can break the dependency and easily unit test the service class. Plus, because the container is injecting the ISomethingManager instance, it constructs it and automatically resolves its dependencies.
I then created DataContracts to represent how the data should appear when transferred across the wire via the service. Each Request/Response object contains these DataContracts as DataMembers rather than referencing my entity classes (or BOs) directly. It is up to the service method to map the data coming from or going to the Business Logic Layer (via ISomethingManager) - using AutoMapper to make this clean and efficient.
Back in the data layer, I've simply extended the generated entity classes by defining a partial class that implements the desired interface from the BLL. For instance, the Something L2S entity has a partial defined that implements ISomething. ISomething is what the SomethingManager (and the ISomethingManager interface) and ISomethingRepository work with, making it very easy to query the database and pass the L2S entity up the chain for the service layer to consume and pass on (without the service layer having any knowledge of, or dependency on, the L2S implementation).
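And a tiny sketch of the partial-class trick, reusing the hypothetical ISomething from the sketch above (the "generated" half shown here is just a stand-in for what the L2S designer emits):

// Stand-in for the designer-generated half (it really lives in the .designer.cs file).
public partial class Something
{
    public int Id { get; set; }
}

// Hand-written half in the data layer: it only declares the BLL interface,
// so upper layers work with ISomething and stay ignorant of LINQ to SQL.
public partial class Something : ISomething
{
}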
I appreciate any comment, questions, criticisms or suggestions anyone has on this approach.