I'm writing a simple movie database which has a three-tier stack of service layer, data access layer and SQL DB.
I'm using LINQ to SQL to access the DB and return a Film from the Film table. This is then returned as a Film object from the DataContracts in the service layer.
I thought this would work OK, but it's resulted in some awkward code which doesn't look right. Can someone sanity-check this please?
Is it best practice to map every LINQ result to its DataContract?
public static class DBConnection
{
    private static RMDB_LINQDataContext _db;

    static DBConnection()
    {
        _db = new RMDB_LINQDataContext();
    }

    public static RMDB.DTO.Film GetFilm(string name)
    {
        var LINQ_film = from film in _db.GetTable<Film>()
                        where film.name == name
                        select film;

        if (LINQ_film.ToList().Count != 1)
        {
            // TODO - faultException
        }
        else
        {
            foreach (Film f in LINQ_film.ToList())
            {
                // Yuck
                return new RMDB.DTO.Film(f.name,
                    f.releaseDate.GetValueOrDefault(), "foo", f.rating.GetValueOrDefault());
            }
        }
        return null;
    }
}
This is a bit neater and more efficient, as your formulation performs the query twice (each call to ToList() executes it again):
public static List<RMDB.DTO.Film> GetFilm(string name)
{
    var LINQ_film = from film in _db.GetTable<Film>()
                    where film.name == name
                    select new RMDB.DTO.Film(film.name,
                        film.releaseDate.GetValueOrDefault(),
                        "foo",
                        film.rating.GetValueOrDefault());

    var list = LINQ_film.ToList();
    if (list.Count != 1)
    {
        // TODO - faultException
    }
    return list;
}
Copying one instance of a type to another is standard practice when mapping from a domain object to a DTO. It may seem like a lot more work, but it is a purely in-memory mapping and provides a layer of isolation between your business layer and your clients. Without it, clients will be impacted by refactoring of the business layer.
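If the mapping is needed in more than one place, it can live in a single spot. Here's a minimal sketch of that idea, assuming the DTO constructor shown above; the ToDto extension method and the FilmNotFoundFault contract are hypothetical names, not from the original code:

public static class FilmMappings
{
    // Centralises the entity-to-DTO mapping so every query reuses it.
    public static RMDB.DTO.Film ToDto(this Film f)
    {
        return new RMDB.DTO.Film(f.name,
            f.releaseDate.GetValueOrDefault(),
            "foo",
            f.rating.GetValueOrDefault());
    }
}

public static RMDB.DTO.Film GetFilm(string name)
{
    // SingleOrDefault materialises at most one row and throws if the
    // name matches more than one film, replacing the manual count check.
    var film = _db.GetTable<Film>().SingleOrDefault(f => f.name == name);
    if (film == null)
        throw new FaultException<FilmNotFoundFault>(new FilmNotFoundFault());
    return film.ToDto();
}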
I have a repository, for example, UserRepository. It returns a user by a given userId. I work on a web application, so objects are loaded into memory, used, and disposed when the request ends.
So far, when I write a repository, I simply retrieve the data from the database. I don't store the retrieved User object in memory (I mean in a collection inside the repository). When the repository's GetById() method is called, I don't check whether the object is already in the collection; I simply query the database.
My questions are:
1) Should I store retrieved objects in memory, and when a repository's Get method is called, should I check if the object exists in memory first before I make any database call?
2) Or is the in-memory collection unnecessary, as a web request is a short-lived session and all objects are disposed afterward?
1) Should I store retrieved objects in memory, and when a repository's Get method is called, should I check if the object exists in memory first before I make any database call?
Since your repository should be abstracted enough to simulate the purpose of an in-memory collection, I think it is really up to you and your use case.
If you store your objects after they have been retrieved from the database, you will probably end up with an implementation of the so-called Identity Map. If you do this, it can get very complicated (well, it depends on your domain).
Depending on the infrastructure layer you rely on, you may be able to use the Identity Map provided by your ORM, if any.
But the real question is: is it worth implementing an Identity Map?
I mean, we agree that repeating a query may be wrong for two reasons: performance and integrity. Here is a quote from Martin Fowler:
An old proverb says that a man with two watches never knows what time it is. If two watches are confusing, you can get in an even bigger mess with loading objects from a database.
But sometimes you need to be pragmatic and just load them every time you need them.
2) Or is the in-memory collection unnecessary, as a web request is a short-lived session and all objects are disposed afterward?
It depends™. For example, in some cases you may have to work with the same object in several places, and then it may be worth it. But say you need to refresh your user session identity by loading the user from the database; there are cases where you only do that once within the whole request.
As is usually the case, I don't think there is going to be a one-size-fits-all answer.
There may be situations where one may implement a form of caching on a repository, when the data is retrieved often or does not go stale too quickly, or simply for efficiency.
However, you could very well implement a generic cache decorator that wraps a repository when you do need this, as shown in the sketch below.
So one should take each use case on its merits.
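A minimal sketch of such a decorator, assuming a hypothetical IUserRepository interface; the caching policy here is deliberately naive (unbounded, lifetime of the decorator):

public interface IUserRepository
{
    User GetById(int userId);
}

// Wraps any IUserRepository and memoises lookups for the decorator's
// lifetime (e.g. one web request when registered per-request in a container).
public class CachedUserRepository : IUserRepository
{
    private readonly IUserRepository _inner;
    private readonly Dictionary<int, User> _cache = new Dictionary<int, User>();

    public CachedUserRepository(IUserRepository inner)
    {
        _inner = inner;
    }

    public User GetById(int userId)
    {
        User user;
        if (!_cache.TryGetValue(userId, out user))
        {
            user = _inner.GetById(userId);
            _cache[userId] = user;
        }
        return user;
    }
}

The calling code keeps depending on IUserRepository, so the caching behaviour can be added or removed purely through container registration.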
When you're using an ORM like Entity Framework or NHibernate, this is already taken care of: all read entities are tracked via the identity map mechanism, and searching by key (DbSet.Find in EF) won't even hit the database if the entity is already loaded.
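For example, with an EF DbContext (the ShopContext and Articles names are hypothetical):

using (var ctx = new ShopContext())
{
    var a1 = ctx.Articles.Find(1); // first call queries the database
    var a2 = ctx.Articles.Find(1); // served from the context's identity map
    Console.WriteLine(ReferenceEquals(a1, a2)); // True: the same tracked instance
}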
If you're using direct database access or a micro-ORM as the basis for your repository, you should be careful: without an identity map you're essentially working with value objects:
using System;
using System.Collections.Generic;
using System.Linq;

namespace test
{
    internal class Program
    {
        static void Main()
        {
            Console.WriteLine("Identity map");
            var artrepo1 = new ArticleIMRepository();
            var o1 = new Order();
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 50 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(1, "a1", 100), Quantity = 30 });
            o1.OrderLines.Add(new OrderLine { Article = artrepo1.GetById(2, "a2", 100), Quantity = 20 });
            o1.ConfirmOrder();
            o1.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 20
            Art. 2/a2, Stock: 80
            */

            Console.WriteLine("Value objects");
            var artrepo2 = new ArticleVORepository();
            var o2 = new Order();
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 50 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(1, "a1", 100), Quantity = 30 });
            o2.OrderLines.Add(new OrderLine { Article = artrepo2.GetById(2, "a2", 100), Quantity = 20 });
            o2.ConfirmOrder();
            o2.PrintChangedStock();
            /*
            Art. 1/a1, Stock: 50
            Art. 1/a1, Stock: 70
            Art. 2/a2, Stock: 80
            */
            Console.ReadLine();
        }

        #region Domain Model
        public class Order
        {
            public List<OrderLine> OrderLines = new List<OrderLine>();

            public void ConfirmOrder()
            {
                foreach (OrderLine line in OrderLines)
                {
                    line.Article.Stock -= line.Quantity;
                }
            }

            public void PrintChangedStock()
            {
                // Distinct() uses reference equality here, so shared instances
                // (identity map) collapse into one line while fresh value objects don't.
                foreach (var a in OrderLines.Select(x => x.Article).Distinct())
                {
                    Console.WriteLine("Art. {0}/{1}, Stock: {2}", a.Id, a.Name, a.Stock);
                }
            }
        }

        public class OrderLine
        {
            public Article Article;
            public int Quantity;
        }

        public class Article
        {
            public int Id;
            public string Name;
            public int Stock;
        }
        #endregion

        #region Repositories
        public class ArticleIMRepository
        {
            private static readonly Dictionary<int, Article> Articles = new Dictionary<int, Article>();

            // Returns the same instance for the same id (identity map behaviour).
            public Article GetById(int id, string name, int stock)
            {
                if (!Articles.ContainsKey(id))
                    Articles.Add(id, new Article { Id = id, Name = name, Stock = stock });
                return Articles[id];
            }
        }

        public class ArticleVORepository
        {
            // Returns a fresh instance on every call (value-object behaviour).
            public Article GetById(int id, string name, int stock)
            {
                return new Article { Id = id, Name = name, Stock = stock };
            }
        }
        #endregion
    }
}
I'm currently working on a WCF services application, and we've created a per-request Object Context for Entity Framework. In the Entity Framework queries we've used the compiled query mechanism; however, the expected performance could not be achieved so far. I suspect this is due to the per-request nature of the Object Context, as compiled queries depend on the Object Context. Is it so?
Code Sample
private static readonly Func<MyContext, IQueryable<Order>> _compiledObjectQuery =
    CompiledQuery.Compile<MyContext, IQueryable<Order>>(
        (ctx) => from Order in ctx.Orders
                     .Include("OrderType")
                     .Include("OrderLines")
                 select Order
    );

protected override IQueryable<Order> OrderQuery
{
    get { return _compiledObjectQuery.Invoke(Context); }
}
Context creation:
public OPDbContext DbContext
{
    get
    {
        if (_dbContext == null)
        {
            _dbContext = new OPDbContext(Context, true);
        }
        return _dbContext;
    }
}
Castle is used to inject the Object Context on a per-request basis.
Hello, can anyone give me advice on how to test my service layer, which uses the NHibernate ISession directly?
public class UserAccountService : IUserAccountService
{
    private readonly ISession _session;

    public UserAccountService(ISession session)
    {
        _session = session;
    }

    public bool ValidateUser(string email, string password)
    {
        var value = _session.QueryOver<UserInfo>()
            .Select(Projections.RowCount()).FutureValue<int>().Value;
        if (value > 0) return true;
        return false;
    }
}
I opt to use NHibernate directly for simple cases like simple queries, validations, and creating/updating records in the database, because I don't want an abstraction like a repository/DAO layer on top of NHibernate that will just add more complexity to my architecture.
You need to decide what you want to actually test on your Service Layer, regardless of the fact that you're using NH.
In your example, a good first test might be to check that the email and password you pass into your service method are actually used as a check in your session.
In this case, you'd simply need to stub your session variable and set up expectations using a mocking framework of some kind (like Rhino Mocks) that would expect a pre-determined email and password and then return an expected result.
Some pseudocode for this might look like:
void ValidateUser_WhenGivenGoodEmailAndPassword_ReturnsTrue()
{
    // arrange
    var stubbedSession = MockRepository.GenerateStub<ISession>();
    stubbedSession
        .Expect(x => x.Query<UserInfo>())
        .Return(new List<UserInfo> {
            new UserInfo { Email = "johns@email.com", Password = "whatever" } });
    var service = new UserAccountService(stubbedSession);

    // act
    var result = service.ValidateUser("johns@email.com", "whatever");

    // assert
    Assert.That(result, Is.True);
}
I think you'll find it difficult to test database interactions in a static way. I'd recommend delegating responsibilities to another layer (that layer that adds complexity, which you mentioned) that can be mocked for testing purposes, if you deem the functionality important enough to test.
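A sketch of that delegation, assuming a hypothetical IUserQueries abstraction (the name, and the email/password filter, are mine, not from the question):

public interface IUserQueries
{
    bool UserExists(string email, string password);
}

// Thin NHibernate-backed implementation; all ISession usage lives here.
public class UserQueries : IUserQueries
{
    private readonly ISession _session;
    public UserQueries(ISession session) { _session = session; }

    public bool UserExists(string email, string password)
    {
        return _session.QueryOver<UserInfo>()
            .Where(u => u.Email == email && u.Password == password)
            .Select(Projections.RowCount())
            .FutureValue<int>().Value > 0;
    }
}

// The service now depends on an interface that is trivial to mock.
public class UserAccountService : IUserAccountService
{
    private readonly IUserQueries _queries;
    public UserAccountService(IUserQueries queries) { _queries = queries; }

    public bool ValidateUser(string email, string password)
    {
        return _queries.UserExists(email, password);
    }
}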
Seduced by the promise that NHibernate respects your domain model, I started using it for persistence and tried to implement a relation manager for my domain objects. Basically, to DRY up my code with respect to managing bidirectional one-to-many and many-to-many relations, I decided to have those relations managed by a separate class. When a one-to-many or many-to-one property is set, an entry for the two objects is made in a dictionary; the key is either the one side, with a collection value to hold the many sides, or a many side, with the one side as its value.
A one-to-many relation for a specific combination of types looks as follows:
public class OneToManyRelation<TOnePart, TManyPart> : IRelation<IRelationPart, IRelationPart>
    where TOnePart : class, IRelationPart
    where TManyPart : class, IRelationPart
{
    private readonly IDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>> _oneToMany;
    private readonly IDictionary<TManyPart, TOnePart> _manyToOne;

    public OneToManyRelation()
    {
        _manyToOne = new ConcurrentDictionary<TManyPart, TOnePart>();
        _oneToMany = new ConcurrentDictionary<TOnePart, Iesi.Collections.Generic.ISet<TManyPart>>();
    }

    public void Set(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
    }

    public void Add(TOnePart onePart, TManyPart manyPart)
    {
        if (onePart == null || manyPart == null) return;
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
        else _manyToOne[manyPart] = onePart;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
        _oneToMany[onePart].Add(manyPart);
    }

    public Iesi.Collections.Generic.ISet<TManyPart> GetManyPart(TOnePart onePart)
    {
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany[onePart] = new HashedSet<TManyPart>();
        return _oneToMany[onePart];
    }

    public TOnePart GetOnePart(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) _manyToOne[manyPart] = default(TOnePart);
        return _manyToOne[manyPart];
    }

    public void Remove(TOnePart onePart, TManyPart manyPart)
    {
        _manyToOne.Remove(manyPart);
        _oneToMany[onePart].Remove(manyPart);
    }

    public void Set(TOnePart onePart, Iesi.Collections.Generic.ISet<TManyPart> manyPart)
    {
        if (onePart == null) return;
        if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, manyPart);
        else _oneToMany[onePart] = manyPart;
    }

    public void Clear(TOnePart onePart)
    {
        var list = new HashedSet<TManyPart>(_oneToMany[onePart]);
        foreach (var manyPart in list)
        {
            _manyToOne.Remove(manyPart);
        }
        _oneToMany.Remove(onePart);
    }

    public void Clear(TManyPart manyPart)
    {
        if (!_manyToOne.ContainsKey(manyPart)) return;
        if (_manyToOne[manyPart] == null) return;
        _oneToMany[_manyToOne[manyPart]].Remove(manyPart);
        _manyToOne.Remove(manyPart);
    }
}
On the many side, a code snippet looks like:
public virtual SubstanceGroup SubstanceGroup
{
    get { return RelationProvider.SubstanceGroupSubstance.GetOnePart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(value, this); }
}
On the one side, so in this case the SubstanceGroup, the snippet looks like:
public virtual ISet<Substance> Substances
{
    get { return RelationProvider.SubstanceGroupSubstance.GetManyPart(this); }
    protected set { RelationProvider.SubstanceGroupSubstance.Set(this, value); }
}
Just using my domain objects, this works excellently. In the domain object I just have to reference an abstract factory that retrieves the appropriate relation, and I can set the relation from one side, which thus automatically becomes bidirectional.
However, when NH kicks in, the problem is that I get duplicate keys in my dictionaries. Somehow NH sets a relation property with a null value(!) on a new copy(?) of a domain object. So when the domain object gets saved, I have two entries for that domain object in, for example, the many side of the relation, i.e. the _manyToOne dictionary.
This problem is making me lose my hair. I just don't get what is happening.
To answer your first, very general question, "Does NHibernate really deliver transparent persistence?", I can only say: nothing is perfect. NH tries its best to be as transparent as possible, while also trying to keep its complexity as low as possible.
There are some assumptions, particularly regarding collections: collections and their implementations are not considered part of your domain model. NH provides its own collection implementations. You are not only expected to use the interfaces like ISet and IList; you should also take the instances given by NH when the object is read from the database and never replace them with your own. (I don't know what your relation class is actually used for, so I don't know whether this is the problem here.)
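To illustrate that point, a minimal sketch of a conventionally mapped bidirectional association (the wiring code is hypothetical, reusing the entity names from the question); the collection NH hands you is only ever mutated, never replaced:

public class SubstanceGroup
{
    public SubstanceGroup()
    {
        // Initialised only for new (transient) objects; on load NH swaps in
        // its own persistent ISet implementation via the setter.
        Substances = new HashedSet<Substance>();
    }

    // Never reassign this after the entity has been loaded by NH.
    public virtual Iesi.Collections.Generic.ISet<Substance> Substances { get; protected set; }

    public virtual void AddSubstance(Substance substance)
    {
        Substances.Add(substance);        // mutate the NH-provided collection in place
        substance.SubstanceGroup = this;  // keep the inverse side in sync
    }
}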
Domain objects are unique within the same session instance. If you get new instances of domain objects each time, you have probably implemented the "session-per-call" anti-pattern, which creates a new session for each database interaction.
I don't have a clue what you are actually doing. What is this OneToManyRelation actually used for? What are you doing when NH doesn't behave as expected? This is a problem very specific to your implementation.
Besides the comments on 'convoluted code' and 'what the heck are you doing': the problem was that I was replacing the persistent collections of NH, as in the code snippet below:
public void Add(TOnePart onePart, TManyPart manyPart)
{
    if (onePart == null || manyPart == null) return;
    if (!_manyToOne.ContainsKey(manyPart)) _manyToOne.Add(manyPart, onePart);
    else _manyToOne[manyPart] = onePart;
    if (!_oneToMany.ContainsKey(onePart)) _oneToMany.Add(onePart, new HashedSet<TManyPart>());
    _oneToMany[onePart].Add(manyPart);
}
I create a new hashed set for the many part, and that was the problem. If I had just set the many part with the collection coming in (in the case of NH's persistent collection implementation), it would have worked.
As an NH newbie, this replacing of NH's special collection implementations has been an important source of errors for me. Consider this a warning to other NH newbies; a sketch of a fix follows.
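A sketch of what the fix might look like for the Add method quoted above, keeping whatever collection instance has already been registered (NH's persistent set) and only creating a fresh set as a fallback for transient objects:

public void Add(TOnePart onePart, TManyPart manyPart)
{
    if (onePart == null || manyPart == null) return;
    _manyToOne[manyPart] = onePart;

    Iesi.Collections.Generic.ISet<TManyPart> set;
    if (!_oneToMany.TryGetValue(onePart, out set))
    {
        // Fallback for transient objects only; for persistent ones the
        // NH-provided collection must already have been registered via
        // Set(onePart, manyParts) and must never be replaced.
        set = new HashedSet<TManyPart>();
        _oneToMany[onePart] = set;
    }
    set.Add(manyPart);
}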
I've been using NH Validator for some time, mostly through ValidationDefs, but I'm still not sure about two things:
Is there any special benefit to using ValidationDef for simple/standard validations (like NotNull, MaxLength, etc.)?
I'm worried about the fact that the two methods throw different kinds of exceptions on validation, for example:
ValidationDef's Define.NotNullable() throws a PropertyValueException
When using the [NotNull] attribute, an InvalidStateException is thrown.
This makes me think mixing these two approaches isn't a good idea; it would be very difficult to handle validation exceptions consistently. Any suggestions/recommendations?
ValidationDef is probably more suitable for business-rules validation, even if, having said that, I have used it even for simple validation. There's more here.
What I like about ValidationDef is the fact that it has a fluent interface.
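For instance, a simple external definition might look like the sketch below (the property names are hypothetical; NotNullableAndNotEmpty and MaxLength are constraint methods from NH Validator's loquacious API, as far as I know):

public class UserValidation : ValidationDef<User>
{
    public UserValidation()
    {
        // Standard constraints expressed fluently instead of via attributes.
        Define(u => u.Email).NotNullableAndNotEmpty();
        Define(u => u.UserName).MaxLength(50);
    }
}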
I've been playing around with this engine for quite a while and I've put together something that works quite well for me.
I've defined an interface:
public interface IValidationEngine
{
    bool IsValid(Entity entity);
    IList<Validation.IBrokenRule> Validate(Entity entity);
}
Which is implemented in my validation engine:
public class ValidationEngine : Validation.IValidationEngine
{
    private NHibernate.Validator.Engine.ValidatorEngine _Validator;

    public ValidationEngine()
    {
        var vtor = new NHibernate.Validator.Engine.ValidatorEngine();
        var configuration = new FluentConfiguration();
        configuration
            .SetDefaultValidatorMode(ValidatorMode.UseExternal)
            .Register<Data.NH.Validation.User, Domain.User>()
            .Register<Data.NH.Validation.Company, Domain.Company>()
            .Register<Data.NH.Validation.PlanType, Domain.PlanType>();
        vtor.Configure(configuration);
        this._Validator = vtor;
    }

    public bool IsValid(DomainModel.Entity entity)
    {
        return (this._Validator.IsValid(entity));
    }

    public IList<Validation.IBrokenRule> Validate(DomainModel.Entity entity)
    {
        var Values = new List<Validation.IBrokenRule>();

        NHibernate.Validator.Engine.InvalidValue[] values = this._Validator.Validate(entity);
        if (values.Length > 0)
        {
            foreach (var value in values)
            {
                Values.Add(
                    new Validation.BrokenRule()
                    {
                        // Entity = value.Entity as BpReminders.Data.DomainModel.Entity,
                        // EntityType = value.EntityType,
                        EntityTypeName = value.EntityType.Name,
                        Message = value.Message,
                        PropertyName = value.PropertyName,
                        PropertyPath = value.PropertyPath,
                        // RootEntity = value.RootEntity as DomainModel.Entity,
                        Value = value.Value
                    });
            }
        }
        return (Values);
    }
}
I plug all my domain rules in there.
I bootstrap the engine at application startup:
For<Validation.IValidationEngine>()
    .Singleton()
    .Use<Validation.ValidationEngine>();
Now, when I need to validate my entities before saving, I just use the engine:
if (!this._ValidationEngine.IsValid(User))
{
    BrokenRules = this._ValidationEngine.Validate(User);
}
and eventually return the collection of broken rules.