DbContext, EF, & LINQ - What's the best way to expose methods on DbContext via an Interface? - wcf

I am pretty new to EF and DbContext, and so am looking for some advice as to the best way to set out my code for a WCF service using EF, stored procs, or SQL.
The Background
I have an MVC3 front end that is hooked up to a WCF service layer for the data access (Oracle). The actual data access is via a separate DAO class library.
My goal is to have the service layer consume an interface only, on which it can call a set of methods to return the data. I do not want the service layer to be aware that we are using EF for the queries, as I may replace the slow EF bits with stored procs or plain SQL text as and when required.
Where I'm up to at the moment
I have an interface for my database, IDB, and a concrete implementation of IDB, MyDB, which also derives from DbContext. MyDB then has a couple of derived classes called MyStdDB and MySecureDB. When I want the interface, I call my factory method, which works out whether I need a standard or secure DB and returns that instance into my interface variable.
WCF Code:
public List<string> GetAccount()
{
    IDB _db = DBFactory.GetInstance();
    return _db.GetAccount();
}
DBFactory code:
public class DBFactory
{
    public static IDB GetInstance()
    {
        if (bSecure)
            return new MySecureDB();
        else
            return new MyStdDB();
    }
}
So when I want to run a query I call _db.GetAccount() within my service method. At the moment I have added this as an extension method on the IDB interface. The reason for that was to prevent the service from seeing my EF entities, and it allows separation of the queries into logical files, e.g. a class full of CUSTOMER queries and a class full of ACCOUNT queries.
IDB Code:
public interface IDB : IDisposable
{
    ObjectContext UnderlyingContext { get; }
    int SaveChanges();
}
MyDB Code:
public class MyDB : DbContext, IDB
{
    ObjectContext IDB.UnderlyingContext
    {
        get
        {
            return ((IObjectContextAdapter)this).ObjectContext;
        }
    }

    int IDB.SaveChanges()
    {
        return SaveChanges();
    }

    public DbSet<Customer> Customer { get; set; }
}
Extension Method:
public static List<string> GetAccount(this IDB _db)
{
    // Have to cast the interface back to the concrete context to reach the EF entities.
    var customer = ((MyDB)_db).Customer.AsNoTracking().First();
    // ... build the list of strings the service expects from the query result
    return new List<string> { customer.ToString() };
}
The Issue
As you may see, I have to cast the interface to the concrete object so that I can get to the EF entities. This is because the entities are on the implementation of the class rather than on the interface. The extension method is in my DAO class library, so it would change when my IDB implementation changed, but I still don't like it.
Is there a better way to do this? Am I looking at DI?
The big drivers for me are:
Access to the database must be via an interface only, as we may be replacing the db soon.
The data access methods must be hidden from the service. I should only be able to access data via the methods provided by the interface/extension methods etc.

The workaround is to move your GetAccount to IDB instead of using extension methods:
public interface IDB : IDisposable
{
    ObjectContext UnderlyingContext { get; }
    List<string> GetAccount();
    int SaveChanges();
}
It solves your issue because MyDB will implement the method and all derived classes will use that implementation as well. If they need different behaviour they can simply override it.
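As a minimal sketch of the implementation side, reusing the question's types (the AccountName projection is a placeholder for whatever columns the service actually needs):
public class MyDB : DbContext, IDB
{
    public DbSet<Customer> Customer { get; set; }

    // ... UnderlyingContext and IDB.SaveChanges stay as in the question's MyDB ...

    // Declared virtual so MyStdDB / MySecureDB can override it if they need
    // secure-specific behaviour.
    public virtual List<string> GetAccount()
    {
        // "AccountName" is a placeholder property, not part of the question's model.
        return Customer.AsNoTracking()
                       .Select(c => c.AccountName)
                       .ToList();
    }
}
This way the service still sees only IDB, and no cast is needed inside the query code.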
The data access methods must be hidden from the service. I should only be able to access data via the methods provided by the interface/extension methods etc.
But they are not. A method or property is hidden only if it is not public, and currently any of your services can cast IDB to MyDB and access the DbSet directly.

Related

Repository OO Design - Multiple Specifications

I have a pretty standard repository interface:
public interface IRepository<TDomainEntity>
where TDomainEntity : DomainEntity, IAggregateRoot
{
TDomainEntity Find(Guid id);
void Add(TDomainEntity entity);
void Update(TDomainEntity entity);
}
We can use various infrastructure implementations in order to provide default functionality (e.g. Entity Framework, DocumentDb, Table Storage, etc.). This is what the Entity Framework implementation looks like (without any actual EF code, for simplicity's sake):
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    public TDomainEntity Find(Guid id)
    {
        // Find, map and return entity using Entity Framework
    }

    public void Add(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Insert entity using Entity Framework
    }

    public void Update(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        // Update entity using Entity Framework
    }
}
There is a mapping between the TDomainEntity domain entity (aggregate) and the TDataEntity Entity Framework data entity (database table). I will not go into detail as to why there are separate domain and data entities. This is a philosophy of Domain Driven Design (read about aggregates). What's important to understand here is that the repository will only ever expose the domain entity.
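For reference, a guess at the shape of the IEntityMapper the snippets rely on; the question never shows it, so both member signatures here are assumptions:
public interface IEntityMapper<TDomainEntity, TDataEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    // Map a domain aggregate to the EF data entity that gets persisted.
    TDataEntity CreateFrom(TDomainEntity domainEntity);

    // Map an EF data entity back to the domain aggregate returned by Find().
    TDomainEntity CreateFrom(TDataEntity dataEntity);
}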
To make a new repository for, let's say, "users", I could define the interface like this:
public interface IUserRepository : IRepository<User>
{
// I can add more methods over and above those in IRepository
}
And then use the Entity Framework implementation to provide the basic Find, Add and Update functionality for the aggregate:
public class UserRepository : EntityFrameworkRepository<User, UserEntity>, IUserRepository
{
// I can implement more methods over and above those in IUserRepository
}
The above solution has worked great. But now we want to implement deletion functionality. I have proposed the following interface (which is an IRepository):
public interface IDeleteableRepository<TDomainEntity>
: IRepository<TDomainEntity>
{
void Delete(TDomainEntity item);
}
The Entity Framework implementation class would now look something like this:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity, IDeleteableDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find(), Add() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        entity.IsDeleted = true;
        entity.DeletedDate = DateTime.UtcNow;
        // Update entity using Entity Framework
        // ...
    }
}
As defined in the class above, the TDataEntity generic now also needs to be of type IDeleteableDataEntity, which requires the following properties:
public interface IDeleteableDataEntity
{
bool IsDeleted { get; set; }
DateTime DeletedDate { get; set; }
}
These properties are set accordingly in the Delete() implementation.
This means that, IF required, I can define IUserRepository with "deletion" capabilities which would inherently be taken care of by the relevant implementation:
public interface IUserRepository : IDeleteableRepository<User>
{
}
Provided that the relevant Entity Framework data entity is an IDeleteableDataEntity, this would not be an issue.
The great thing about this design is that I can start granularising the repository model even further (IUpdateableRepository, IFindableRepository, IDeleteableRepository, IInsertableRepository), and aggregate repositories can now expose only the relevant functionality as per our specification (perhaps you should be allowed to insert into a UserRepository but NOT into a ClientRepository). Further to this, it specifies a standardised way in which certain repository actions are done (i.e. the updating of the IsDeleted and DeletedDate columns is universal and not left in the hands of the developer).
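As a rough illustration of what that granularisation could look like (sketched from the description above, not code from the question; the constraints mirror the IRepository<TDomainEntity> definition):
public interface IFindableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    TDomainEntity Find(Guid id);
}

public interface IInsertableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Add(TDomainEntity entity);
}

public interface IUpdateableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
{
    void Update(TDomainEntity entity);
}

// IRepository<T> could then be recomposed from these pieces, and each aggregate
// repository picks only the operations its specification allows.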
PROBLEM
A problem with the above design arises when I want to create a repository for some aggregate WITHOUT deletion capabilities, e.g:
public interface IClientRepository : IRepository<Client>
{
}
The EntityFrameworkRepository implementation still requires TDataEntity to be of type IDeleteableDataEntity.
I can ensure that the client data entity model does implement IDeleteableDataEntity, but this is misleading and incorrect. There will be additional fields that are never updated.
The only solution I can think of is to remove the IDeleteableDataEntity generic condition from TDataEntity and then cast to the relevant type in the Delete() method:
public abstract class EntityFrameworkRepository<TDomainEntity, TDataEntity> : IDeleteableRepository<TDomainEntity>
    where TDomainEntity : DomainEntity, IAggregateRoot
    where TDataEntity : class, IDataEntity
{
    protected IEntityMapper<TDomainEntity, TDataEntity> EntityMapper { get; private set; }

    // Find() and Update() ...

    public void Delete(TDomainEntity item)
    {
        var entity = EntityMapper.CreateFrom(item);
        var deleteableEntity = entity as IDeleteableDataEntity;
        if (deleteableEntity != null)
        {
            deleteableEntity.IsDeleted = true;
            deleteableEntity.DeletedDate = DateTime.UtcNow;
            // No reassignment needed: deleteableEntity and entity are the same instance.
        }
        // Update entity using Entity Framework
        // ...
    }
}
Because ClientRepository does not implement IDeleteableRepository, there will be no Delete() method exposed, which is good.
QUESTION
Can anyone advise a better architecture that leverages the C# type system and does not involve the hacky cast?
Interestingly enough, I could do this if C# supported multiple inheritance (with separate concrete implementations for finding, adding, deleting and updating).
I do think that you're complicating things a bit too much by trying to get the most generic solution of them all; however, I think there's a pretty easy solution to your current problem.
TDataEntity is a persistence data structure: it has no domain value and it's not known outside the persistence layer. So it can have fields it won't ever use; the repository is the only one that knows about them, and that's a persistence detail. You can afford to be 'sloppy' here, because things aren't that important at this level.
Even the 'hacky' cast is a good solution, because it's in one place and a private detail.
It's good to have clean and maintainable code everywhere; however, we can't afford to waste time coming up with 'perfect' solutions at every layer. Personally, for view and persistence models I prefer the quickest and simplest solutions, even if they're a bit smelly.
P.S.: As a rule of thumb, generic repository interfaces are good; generic abstract repositories not so much (you need to be careful), unless you're serializing things or using a doc db.

Using Ninject to bind an interface to multiple implementations unknown at compile time

I just recently started using Ninject (v2.2.0.0) in my ASP.NET MVC 3 application. So far I'm thrilled with it, but I ran into a situation I can't seem to figure out.
What I'd like to do is bind an interface to concrete implementations and have Ninject be able to inject the concrete implementation into a constructor using a factory (that will also be registered with Ninject). The problem is that I'd like my constructor to reference the concrete type, not the interface.
Here is an example:
public class SomeInterfaceFactory<T> where T : ISomeInterface, new()
{
    public T CreateInstance()
    {
        // Activation and initialization logic here
        return new T();
    }
}

public interface ISomeInterface
{
}

public class SomeImplementationA : ISomeInterface
{
    public string PropertyA { get; set; }
}

public class SomeImplementationB : ISomeInterface
{
    public string PropertyB { get; set; }
}

public class Foo
{
    public Foo(SomeImplementationA implA)
    {
        Console.WriteLine(implA.PropertyA);
    }
}

public class Bar
{
    public Bar(SomeImplementationB implB)
    {
        Console.WriteLine(implB.PropertyB);
    }
}
Elsewhere, I'd like to bind using just the interface:
kernel.Bind<Foo>().ToSelf();
kernel.Bind<Bar>().ToSelf();
kernel.Bind(typeof(SomeInterfaceFactory<>)).ToSelf();
kernel.Bind<ISomeInterface>().To ...something that will create and use the factory
Then, when requesting an instance of Foo from Ninject, it would see that one of the constructor's parameters implements a bound interface, fetch the factory, instantiate the correct concrete type (SomeImplementationA), and pass it to Foo's constructor.
The reason behind this is that I will have many implementations of ISomeInterface and I'd prefer to avoid having to bind each one individually. Some of these implementations may not be known at compile time.
I tried using:
kernel.Bind<ISomeInterface>().ToProvider<SomeProvider>();
The provider retrieves the factory based on the requested service type then calls its CreateInstance method, returning the concrete type:
public class SomeProvider : Provider<ISomeInterface>
{
    protected override ISomeInterface CreateInstance(IContext context)
    {
        var factory = context.Kernel.Get(typeof(SomeInterfaceFactory<>)
            .MakeGenericType(context.Request.Service));
        var method = factory.GetType().GetMethod("CreateInstance");
        return (ISomeInterface)method.Invoke(factory, null);
    }
}
However, my provider was never invoked.
I'm curious if Ninject can support this situation and, if so, how I might go about solving this problem.
I hope this is enough information to explain my situation. Please let me know if I should elaborate further.
Thank you!
It seems you have misunderstood how Ninject works. When you create Foo, Ninject sees that it requires a SomeImplementationA and will try to create an instance of it. So you need to define a binding for SomeImplementationA, not for ISomeInterface.
Also, most likely your implementation breaks the Dependency Inversion Principle, because you rely upon concrete instances instead of abstractions.
The way to register all similar types at once (and the preferred way to configure IoC containers) is configuration by convention. See the Ninject.Extensions.Conventions extension; a sketch follows below.
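For example, with a recent version of Ninject.Extensions.Conventions the self-bindings could be set up roughly like this (the exact fluent syntax differs between versions of the extension, so treat it as a guide rather than the definitive call chain):
using Ninject;
using Ninject.Extensions.Conventions;

var kernel = new StandardKernel();

// Bind every concrete ISomeInterface implementation found in this assembly to
// itself, so Foo and Bar can take SomeImplementationA / SomeImplementationB in
// their constructors without a binding being written per class.
kernel.Bind(x => x.FromThisAssembly()
                  .SelectAllClasses()
                  .InheritedFrom<ISomeInterface>()
                  .BindToSelf());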

Repositories and Services, MVC Model

So I've been learning about the Repository model, and it seems that it is expected that Repositories do not do a lot of intricate logic. However I also read that most of the business logic should not be inside of my Controllers. So where do I put it?
I've looked at some sample applications and it seems that they have another layer called Services that do more intricate logic for things. So how does this factor into the MVC pattern?
Do I want to build my services to access my repositories, and then my controllers to access my services? Like this?
interface IMembershipService
{
    bool ValidateUser(string username, string password);
    MembershipCreateStatus Create(string username, string password);
}

interface IMembershipRepository
{
    MembershipCreateStatus Create(string username, string password);
}

class MembershipRepository : IMembershipRepository
{
    public MembershipRepository(ISession session)
    {
        // this is where I am confused...
    }
}

class MembershipService : IMembershipService
{
    private readonly IMembershipRepository membershipRepository;

    public MembershipService(IMembershipRepository membershipRepository)
    {
        this.membershipRepository = membershipRepository;
    }

    public bool ValidateUser(string username, string password)
    {
        // validation logic
    }

    public MembershipCreateStatus Create(string username, string password)
    {
        return membershipRepository.Create(username, password);
    }
}

class MembershipController : Controller
{
    private readonly IMembershipService membershipService;

    public MembershipController(IMembershipService membershipService)
    {
        this.membershipService = membershipService;
    }
}
The marked part of my code is what confuses me. Everything I have read says I should be injecting my ISession into my repositories. This means I should not be injecting ISession into my services, so then how do I do database access from my services? I'm not understanding what the appropriate process is here.
When I put ValidateUser in my IMembershipRepository, I was told that was 'bad'. But the IMembershipRepository is where the database access resides. That's the intention, right? To keep the database access very minimal? But if I can't put other logic in them, then what is the point?
Can someone shed some light on this, and show me an example that might be more viable?
I am using Fluent nHibernate, ASP.NET MVC 3.0, and Castle.Windsor.
Should I instead do something like ...
class MembershipService
{
    private readonly IMembershipRepository membershipRepository;

    public MembershipService(ISession session)
    {
        membershipRepository = new MembershipRepository(session);
    }
}
And never give my Controllers direct access to the Repositories?
Everything I have read said I should be injecting my ISession into my repositories.
That's correct. You need to inject the session into the repository constructor because this is where the data access is made.
This means I could not be injecting ISession into my services, so then how do I do Database access from my Services?
You don't do database access in your services. The service relies on one or more repositories injected into its constructor and uses their respective methods. The service never directly queries the database.
So to recap:
The repository contains the simple CRUD operations on your model. This is where the data access is performed. This data access doesn't necessarily mean a database; it depends on the underlying storage you are using. For example, you could be calling some remote services in the cloud to perform the data access.
The service relies on one or more repositories to implement a business operation. This business operation might depend on one or more CRUD operations on the repositories. A service shouldn't even know about the existence of a database.
The controller uses the service to invoke the business operation.
In order to decrease the coupling between the different layers, interfaces are used to abstract the operations.
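A minimal sketch of that wiring, using the question's own types (the Member entity and its mapping are placeholders, not something from the question):
class MembershipRepository : IMembershipRepository
{
    private readonly ISession session; // NHibernate session, injected by Castle Windsor

    public MembershipRepository(ISession session)
    {
        this.session = session; // data access lives here and only here
    }

    public MembershipCreateStatus Create(string username, string password)
    {
        // "Member" is a placeholder entity; hash the password in real code.
        session.Save(new Member { Username = username, Password = password });
        return MembershipCreateStatus.Success;
    }
}
The service keeps its IMembershipRepository constructor parameter exactly as in the question and never touches ISession.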
interface IMembershipService
{
bool ValidateUser(string username, string password);
MembershipCreateStatus Create(string username, string password);
}
Creating a service like this is an anti-pattern.
How many responsibilities does a service like this have? How many reasons could it have to change?
Also, if you put your logic into services, you are going to end up with an anemic domain. What you will end up with is procedural code in a Transaction Script style. And I am not saying this is necessarily bad.
Perhaps a rich domain model is not appropriate for you, but it should be a conscious decision between the two, and this multiple responsibility service is not appropriate in either case.
This should be a HUGE red flag:
public MembershipCreateStatus Create(string username, string password)
{
return membershipRepository.Create(username, password);
}
What is the point? Layers for the sake of layers? The Service adds no value here, serves no purpose.
There are a lot of concepts missing.
First, consider using a Factory for creating objects:
public interface IMembershipFactory {
MembershipCreateStatus Create(string username, string password);
}
The factory can encapsulate any logic that goes into building an instance or beginning the lifetime of an entity object.
Second, Repositories are an abstraction of a collection of objects. Once you've used a factory to create an object, add it to the collection of objects.
var result = _membershipFactory.Create("user", "pw");
if (result.Failed) { /* do stuff */ }
_membershipRepository.Add(result.NewMembership); // assumes your status includes the newly created object
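The usage above implies that MembershipCreateStatus is a richer result object (with Failed and NewMembership members) rather than the System.Web.Security enum used in the question. A hypothetical shape for such a result, purely for illustration:
public class MembershipCreateStatus
{
    public bool Failed { get; private set; }
    public string FailureReason { get; private set; }
    public Membership NewMembership { get; private set; } // "Membership" is a placeholder entity

    public static MembershipCreateStatus Success(Membership newMembership)
    {
        return new MembershipCreateStatus { NewMembership = newMembership };
    }

    public static MembershipCreateStatus Failure(string reason)
    {
        return new MembershipCreateStatus { Failed = true, FailureReason = reason };
    }
}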
Lastly, a MyEntityService class that contains a method for every operation that can be performed on an entity just seems terribly offensive to my senses.
Instead, I try to be more explicit and better capture intent by modeling each operation not as a method on a single Service class, but as individual Command classes.
public class ChangePasswordCommand {
    public Guid MembershipId { get; set; }
    public string CurrentPassword { get; set; }
    public string NewPassword { get; set; }
}
Then, something has to actually do something when this command is sent, so we use handlers:
public interface IHandle<TMessageType> {
void Execute(TMessageType message);
}
public class ChangePasswordCommandHandler : IHandle<ChangePasswordCommand> {
    private readonly IMembershipRepository repo;

    public ChangePasswordCommandHandler(IMembershipRepository repo)
    {
        this.repo = repo;
    }

    public void Execute(ChangePasswordCommand command) {
        var membership = repo.Get(command.MembershipId);
        membership.ChangePassword(command.NewPassword);
    }
}
Commands are dispatched using a simple class that interfaces with our IoC container.
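A bare-bones version of such a dispatcher might look like the following; the container call is shown with Castle Windsor (which the question mentions using), but any IoC container with a Resolve-style API works:
public interface ICommandDispatcher
{
    void Send<TCommand>(TCommand command);
}

public class CommandDispatcher : ICommandDispatcher
{
    private readonly IWindsorContainer container;

    public CommandDispatcher(IWindsorContainer container)
    {
        this.container = container;
    }

    public void Send<TCommand>(TCommand command)
    {
        // Resolve the handler registered for this command type and execute it.
        var handler = container.Resolve<IHandle<TCommand>>();
        handler.Execute(command);
    }
}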
This helps avoid monolithic service classes and makes a project's structure and the location of logic much clearer.

Sending an Interface definition over the wire (WCF service)

I have a WCF service that loads Entity Framework objects (as well as some other structs and simple classes used to lighten the load) and sends them over to a client application.
I have changed 2 of the classes to implement an interface so that I can reference them in my application as a single object type. Much like this example:
Is it Possible to Force Properties Generated by Entity Framework to implement Interfaces?
However, the interface type is not added to my WCF service proxy client thingymebob as it is not directly referenced in the objects that are being sent back over the wire.
Therefore, in my application that uses the service proxy classes, I can't cast or reference my interface.
Any ideas what I'm missing?
Here's some example code:
//ASSEMBLY/PROJECT 1 -- EF data model
namespace Model
{
    public interface ISecurable
    {
        [DataMember]
        long AccessMask { get; set; }
    }

    // partial class extending the EF generated class
    // there is also a class defined as "public partial class Company : ISecurable"
    public partial class Chart : ISecurable
    {
        private long _AccessMask = 0;

        public long AccessMask
        {
            get { return _AccessMask; }
            set { _AccessMask = value; }
        }

        public void GetPermission(Guid userId)
        {
            ChartEntityModel model = new ChartEntityModel();
            Task task = model.Task_GetMaskForObject(_ChartId, userId).FirstOrDefault();
            _AccessMask = (task == null) ? 0 : task.AccessMask;
        }
    }
}
//ASSEMBLY/PROJECT 2 -- WCF web service
namespace ChartService
{
    public Chart GetChart(Guid chartId, Guid userId)
    {
        Chart chart = LoadChartWithEF(chartId);
        chart.GetPermission(userId); // load chart perms
        return chart; // send it over the wire
    }
}
Interfaces won't come across as separate entities in your WSDL - they will simply have their methods and properties added to the object that exposes them.
What you want to accomplish can be done using abstract classes. These will come across as distinct entities.
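A rough sketch of the shape that suggestion takes, assuming the entities can be given a common base class (with EF-generated classes that depends on how they are generated, so treat this purely as an illustration):
[DataContract]
[KnownType(typeof(Chart))]
[KnownType(typeof(Company))]
public abstract class SecurableBase
{
    [DataMember]
    public long AccessMask { get; set; }
}

// The partial classes would then derive from the base instead of implementing ISecurable:
// public partial class Chart : SecurableBase { ... }
// public partial class Company : SecurableBase { ... }
The [KnownType] attributes are what let the generated proxy see Chart and Company as concrete kinds of SecurableBase.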
Good luck. Let us know how you decided to proceed.

Is it possible to serialize objects without a parameterless constructor in WCF?

I know that a private parameterless constructor works, but what about an object with no parameterless constructor at all?
I would like to expose types from a third party library so I have no control over the type definitions.
If there is a way, what is the easiest? E.g. I don't want to have to create a subtype.
Edit:
What I'm looking for is something like the level of customization shown here: http://msdn.microsoft.com/en-us/magazine/cc163902.aspx
although I don't want to have to resort to streams to serialize/deserialize.
You can't really make arbitrary types serializable; in some cases (XmlSerializer, for example) the runtime exposes options to spoof the attributes. But DataContractSerializer doesn't allow this. Feasible options:
hide the classes behind your own types that are serializable (lots of work)
provide binary formatter surrogates (yeuch)
write your own serialization core (a lot of work to get right)
Essentially, if something isn't designed for serialization, very little of the framework will let you serialize it.
I just ran a little test, using a WCF service that returns a basic object that does not have a default constructor.
//[DataContract]
//[Serializable]
public class MyObject
{
    public MyObject(string _name)
    {
        Name = _name;
    }

    //[DataMember]
    public string Name { get; set; }

    //[DataMember]
    public string Address { get; set; }
}
Here is what the service looks like:
public class MyService : IMyService
{
    #region IMyService Members

    public MyObject GetByName(string _name)
    {
        return new MyObject(_name) { Address = "Test Address" };
    }

    #endregion
}
This actually works, as long as MyObject is either a [DataContract] or [Serializable]. Interestingly, it doesn't seem to need the default constructor on the client side. There is a related post here:
How does WCF deserialization instantiate objects without calling a constructor?
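For illustration, that linked question describes the mechanism at work: the data contract serializer creates an uninitialized instance without running any constructor and then sets the members directly, roughly equivalent to:
using System.Runtime.Serialization;

// No constructor runs here; Name and Address are null until the serializer
// populates the [DataMember] fields/properties from the incoming message.
var obj = (MyObject)FormatterServices.GetUninitializedObject(typeof(MyObject));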
I am not a WCF expert, but it is unlikely that it supports serialization through a constructor with arbitrary types. Namely, what would it pass in for values? You could pass null for reference types and default values for structs, but what good would a type be that is constructed from completely empty data?
I think you are stuck with one of two options:
Subclass the type in question and pass appropriate default values to the non-parameterless constructor.
Create a type that exists solely for serialization. Once deserialized, it can create an instance of the original type that you are interested in. It is a bridge of sorts.
Personally I would go for #2. Make the class a data-only structure and optimize it for serialization and factory purposes.
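A rough sketch of what option #2 could look like, using a made-up third-party type (ThirdPartyThing) whose only constructor takes arguments; the DTO exists purely for the wire and rebuilds the real type on the other side:
[DataContract]
public class ThirdPartyThingDto
{
    [DataMember]
    public string Name { get; set; }

    [DataMember]
    public int Size { get; set; }

    // Flatten the third-party object into the serializable bridge type.
    public static ThirdPartyThingDto From(ThirdPartyThing source)
    {
        return new ThirdPartyThingDto { Name = source.Name, Size = source.Size };
    }

    // Factory side of the bridge: rebuild the real type after deserialization.
    public ThirdPartyThing ToThirdPartyThing()
    {
        return new ThirdPartyThing(Name, Size);
    }
}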