NHibernate: mapping single column from many-to-one to a primitive type

I have a following mapping:
<set name="People" lazy="true" table="ProjectPeople">
<key column="ProjectId" />
<composite-element class="PersonRole">
<many-to-one name="Person" column="PersonId" cascade="save-update" not-null="true" />
<many-to-one name="Role" column="RoleId" cascade="save-update" not-null="true" />
</composite-element>
</set>
Now, I do not really want to have a separate class for Role in the domain; I need only the role name. However, in the DB, roles should still be normalized into a separate table Role (Id, Name).
How do I map it so that People uses the following PersonRole class?
public class PersonRole {
public virtual Person Person { get; set; }
public virtual string Role { get; set; }
}
Update: added a bounty; this seems like a question useful not only to me.

You won't actually get the answer you hope for, simply because it is not possible. (N)Hibernate is an Object-Relational Mapping framework and supports three kinds of mapping strategies:
table per class hierarchy
table per subclass
table per concrete class
It also allows you to deviate from this by using formula or sql-insert etc, but as you've found out, these only cause you more pain in the end, are not encouraged by the Hibernate community and are bad for the maintainability of your code.
Solution?
Actually, it is very simple. You do not want to use a class for Role. I assume you mean that you do not want to expose a class of type Role and that you do not want to have to type prObject.Role.Name all the time. Just prObject.Role, which should return a string. You have several options:
Use an inner class in, say, PersonRole; this class can be internal or private. Add a property Role that gets and sets a member field.
Use an internal (namespace-level) class. Add a property Role that gets and sets a member field.
Let's examine option 2:
// mapped to table Role, will not be visible to users of your DAL
// the class can't be private because it's at namespace level; it could be if it were a nested (inner) class
internal class Role
{
// typical mapping, need not be internal/protected when class is internal
// cannot be private, because then virtual is not possible
internal virtual int Id { get; private set; }
internal virtual string Name { get; set; }
}
// the composite element
public class PersonRole
{
// mapped properties public
public virtual Person Person { get; set; }
// mapped properties hidden
internal virtual Role dbRole { get; set; }
// not mapped, but convenience property in your DAL
// for clarity, it is actually better to rename to something like RoleName
public string Role /* need not be virtual, but can be */
{
get
{
return this.dbRole == null ? null : this.dbRole.Name;
}
set
{
if (this.dbRole == null) this.dbRole = new Role(); // guard for newly created instances
this.dbRole.Name = value; /* this works and triggers the cascade */
}
}
}
And the mapping can look as expected. Result: you have not violated the one-table-per-class rule (EDIT: asker says that he explicitly wants to violate that rule, and Hib supports it, which is correct), but you've hidden the objects from modification and access by using typical object oriented techniques. All NH features (cascade etc) still work as expected.
(N)Hibernate is all about this type of decisions: how to make a well thought-through and safe abstraction layer to your database without sacrificing clarity, brevity or maintainability or violating OO or ORM rules.
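To show how calling code then sees this abstraction, here is a small usage sketch (assuming project is a Project loaded through the set mapping from the question):
// usage sketch: consumers of the DAL only ever see the role name as a string
foreach (PersonRole pr in project.People)
{
    string roleName = pr.Role;   // reads the hidden dbRole.Name
    pr.Role = "Tester";          // writes through to the hidden Role entity; the cascade still applies
}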
Update (after q. was closed)
Other excellent approaches I use a lot when dealing with this type of issue are:
Create your mappings normally (i.e., one-class-per-table, I know you don't like it, but it's for the best) and use extension methods:
// trivial general example
public static string GetFullName(this Person p)
{
return String.Format("{0} {1}", p.FirstName, p.LastName);
}
// getter / setter for Role.Name
public static string GetRoleName(this PersonRole pr)
{
return pr.Role == null ? "" : pr.Role.Name;
}
public static void SetRoleName(this PersonRole pr, string name)
{
pr.Role = (pr.Role ?? new Role());
pr.Role.Name = name;
}
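Usage then reads almost like an ordinary property (a small sketch, given an existing PersonRole instance personRole):
// usage sketch for the extension methods above
string role = personRole.GetRoleName();
personRole.SetRoleName("Developer");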
Create your mappings normally but use partial classes, which enable you to "decorate" your class any way you like. The advantage: if you use generated mappings of your tables, you can regenerate as often as you wish. Of course, the partial classes should go in separate files, so considering your wish to diminish "bloat", this probably isn't a good scenario currently.
public partial class PersonRole
{
public string Role {...}
}
Perhaps simplest: just override ToString() for Role, which makes it suitable for use in String.Format and friends, but of course doesn't make it assignable. By default, each entity class or POCO should have a ToString() override anyway.
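For example, the override on the Role entity could be as simple as this (a sketch):
// sketch: on the Role entity, so it renders as its name in String.Format and friends
public override string ToString()
{
    return Name;
}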
Though it is possible to do this with NHibernate directly, the question was closed before I had time to look at it (no one's fault, I just didn't have the time). I'll update if I find the time to do it through a Hibernate HBM mapping, even though I don't agree with the approach. It is not good to wrestle with advanced Hibernate concepts when the end result is less clear to other programmers and less clear overall (where did that table go? why isn't there an IDao abstraction for that table? See also NHibernate Best Practices and S#arp). However, the exercise is interesting nevertheless.
Considering the comments on "best practices": in typical situations, it shouldn't be only "one class per table", but also one IDaoXXX, one DaoConcreteXXX and one GetDaoXXX for each table, where you use a class/interface hierarchy to differentiate between read-only and read/write tables. That's a minimum of four classes/lines of code per table. This is typically auto-generated but gives a very clear access layer (DAO) to your data layer (DAL). The data layer is best kept as spartan as possible. None of these "best practices" prevents you from using extension methods or partial classes for moving Role.Name into Role.
These are general best practices. It's not always possible, feasible or even necessary in certain special or typical situations.
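As a rough illustration of that four-piece-per-table shape (the names are purely illustrative, not from any particular framework; a real DAO would delegate to an NHibernate ISession):
using System;

// illustrative sketch: a read-only interface, a read/write interface,
// a concrete DAO and a factory method per table (Person is the entity from the question)
public interface IReadOnlyDao<T> { T GetById(int id); }
public interface IDao<T> : IReadOnlyDao<T> { void Save(T entity); }

public class PersonDao : IDao<Person>
{
    public Person GetById(int id) { throw new NotImplementedException(); }
    public void Save(Person entity) { throw new NotImplementedException(); }
}

public static class DaoFactory
{
    public static IDao<Person> GetPersonDao() { return new PersonDao(); }
}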

Personally I would create a Role class, as Yassir suggests.
But if you want to use the structure that you have at the moment, then create a view that contains the foreign key to your Person table and the role description.
Modify the set mapping table to point at your new view.
Then modify your Role mapping so that it is a property instead of the many-to-one mapping.
However, taking this approach I think will mean that you will not be able to update your role, as it is referencing a view.
Edit: To update the role you could add <sql-insert>, <sql-update> and <sql-delete> to your mapping file so that cascade-all will work.

I don't think it is possible to map a many-to-one to a primitive type. If I were you I would add a Role class to the model.

This is the biggest turn-off of the whole OO-purist thing.
Surely the goal is to have a working application, not somebody's version of a perfect class hierarchy. So what if you have to code "prObject.Role.Name" instead of "prObject.Role"? How does this help you make a better, more reliable program?
From the application-design purist point of view, what you want is just plain wrong. A person can have several roles, and a role can usually be assigned to several people.
Why go to all this trouble to enforce an unrealistic one-role-per-person class hierarchy when the underlying data model is many roles per person?
If you really do have an "only one role per person" rule then it should be reflected in the underlying data model.

Related

Mapping of Interface is Not Supported, But Linq-Sql Object Already Implements Property

So, I created a DataContext (Linq-Sql) in VS from an existing database. It has a table called Users, thus I have a User object. In particular, I want to focus on the UserID and Username properties.
Now, I have an interface:
interface IUser
{
int Id { get; }
string Username { get; }
}
I want to create a partial User class and implement IUser. The reason for this is so that I can treat any User as an IUser in many places and not be concerned about the precise User class:
public partial class User : IUser
{
public int Id
{
get { return UserID; }
}
}
I don't implement the Username get property because I know that the entity object already implements it.
When I have a query like dc.Users.SingleOrDefault(p => p.Id == 5); I know that it's an error because it'll translate that call to an SQL statement and it's going to try to find the Id column, which doesn't exist - UserID exists. So I understand this mapping issue.
When I query dc.Users.SingleOrDefault(p => p.Username == "admin"), it also throws an error, BUT Username IS indeed an existing column in the database, so my impression is that no custom/additional mapping needs to take place. What am I missing?
Can someone point me to a good source on how to deal with LINQ when partial classes implement a custom interface?
Update Question:
Before I try it, does anyone know whether "rigging" the datacontext.designer.cs file with our custom interfaces (implementing them on the generated classes themselves instead of in a separate partial class file) will work? Is there a consequence of doing this?
I've come across this before using Generics and LINQ, and the way I solved it was to change p.Id == 5 to p.Id.Equals(5) and LINQ was able to map the query.
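In code, the change the answer describes is just this:
// p.Id == 5 failed to translate; Equals() mapped, per the workaround above
var user = dc.Users.SingleOrDefault(p => p.Id.Equals(5));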
In regards to rigging autogenerated code, I have done this in my projects; the only annoyance is having to type all the interfaces again if you regenerate your DBML file. I looked into dynamically adding interfaces to classes and found this SO post, but I haven't tried it out yet:
What is the nicest way to dynamically implement an interface in C#?
Either way, re-typing is a much better trade off for us right now as we've been able to remove a lot of duplication in our implementation code with this method.
Unfortunately I'm not experienced enough with LINQ or .NET to explain why Equals() works when == does not :)

domain design with nhibernate

In my domain I have something called Project which basically holds a lot of simple configuration properties that describe what should happen when the project gets executed. When the Project gets executed it produces a huge amount of LogEntries. In my application I need to analyse these log entries for a given Project, so I need to be able to successively load portions (time frames) of log entries from the database (Oracle). How would you model this relationship as DB tables and as objects?
I could have a Project table and a ProjectLog table with a foreign key to the primary key of Project, and do the "same" thing at the object level: have a class Project with a property
IEnumerable<LogEntry> LogEntries { get; }
and have NHibernate do all the mapping. But how would I design my ProjectRepository in this case? I could have a method
void FillLog(Project projectToFill, DateTime start, DateTime end);
How can I tell NHibernate that it should not load the LogEntries until someone calls this method, and how would I make NHibernate load a specific time frame within that method?
I am pretty new to ORM; maybe this design is not optimal for NHibernate or in general? Maybe I should design it differently?
Instead of having the Project entity as the aggregate root, why not move the reference around and let LogEntry have a Project property and also act as an aggregate root?
public class LogEntry
{
public virtual Project Project { get; set; }
// ...other properties
}
public class Project
{
// remove the LogEntries property from Project
// public virtual IList<LogEntry> LogEntries { get; set; }
}
Now, since both of those entities are aggregate roots, you would have two different repositories: ProjectRepository and LogEntryRepository. LogEntryRepository could have a method GetByProjectAndTime:
IEnumerable<LogEntry> GetByProjectAndTime(Project project, DateTime start, DateTime end);
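A possible implementation of that repository method with LINQ-to-NHibernate might look like the sketch below (the Timestamp property on LogEntry and the session field are assumptions, not part of the original post):
// sketch: assumes 'session' is an open NHibernate ISession and LogEntry has a mapped Timestamp property
// requires: using System.Linq; using NHibernate.Linq;
public IEnumerable<LogEntry> GetByProjectAndTime(Project project, DateTime start, DateTime end)
{
    return session.Query<LogEntry>()
                  .Where(le => le.Project == project
                            && le.Timestamp >= start
                            && le.Timestamp < end)
                  .ToList();
}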
The 'correct' way of loading partial / filtered / criteria-based lists under NHibernate is to use queries. There is lazy="extra" but it doesn't do what you want.
As you've already noted, that breaks the DDD model of Root Aggregate -> Children. I struggled with just this problem for an absolute age, because first of all I hated having what amounted to persistence concerns polluting my domain model, and I could never get the API surface to look 'right'. Filter methods on the owning entity class work but are far from pretty.
In the end I settled for extending my entity base class (all my entities inherit from it, which I know is slightly unfashionable these days but it does at least let me do this sort of thing consistently) with a protected method called Query<T>() that takes a LINQ expression defining the relationship and, under the hood in the repository, calls LINQ-to-NH and returns an IQueryable<T> that you can then query into as you require. I can then facade that call beneath a regular property.
The base class does this:
protected virtual IQueryable<TCollection> Query<TCollection>(Expression<Func<TCollection, bool>> selector)
where TCollection : class, IPersistent
{
return Repository.For<TCollection>().Where(selector);
}
(I should note here that my Repository implementation implements IQueryable<T> directly and then delegates the work down to the NH Session.Query<T>())
And the facading works like this:
public virtual IQueryable<Form> Forms
{
get
{
return Query<Form>(x => x.Account == this);
}
}
This defines the list relationship between Account and Form as the inverse of the actual mapped relationship (Form -> Account).
For 'infinite' collections - where there is a potentially unbounded number of objects in the set - this works OK, but it means you can't map the relationship directly in NHibernate and therefore can't use the property directly in NH queries, only indirectly.
What we really need is a replacement for NHibernate's generic bag, list and set implementations that knows how to use the LINQ provider to query into lists directly. One has been proposed as a patch (see https://nhibernate.jira.com/browse/NH-2319). As you can see the patch was not finished or accepted and from what I can see the proposer didn't re-package this as an extension - Diego Mijelshon is a user here on SO so perhaps he'll chime in... I have tested out his proposed code as a POC and it does work as advertised, but obviously it's not tested or guaranteed or necessarily complete, it might have side-effects, and without permission to use or publish it you couldn't use it anyway.
Until and unless the NH team get around to writing / accepting a patch that makes this happen, we'll have to keep resorting to workarounds. NH and DDD just have conflicting views of the world, here.

Accept Interface into Collection (Covariance) troubles with nHibernate

I am using Fluent nHibernate for my persistence layer in an ASP.NET MVC application, and I have come across a bit of a quandry.
I have a situation where I need to use an abstraction to store objects into a collection, in this situation, an interface is the most logical choice if you are looking at a pure C# perspective.
Basically, an object (Item) can have Requirements. A requirement can be many things. In a native C# situation, I would merely accomplish this with the following code.
interface IRequirement
{
// methods and properties neccessary for evaluation
}
class Item
{
public virtual int Id { get; set; }
public virtual IList<IRequirement> Requirements { get; set; }
}
A crude example. This works fine in native C# - however because the objects have to be stored in a database, it becomes a bit more complicated than that. Each object that implements IRequirement could be a completely different kind of object. Since nHibernate (or any other ORM that I have discovered) cannot really understand how to serialize an interface, I cannot think of, for the life of me, how to approach this scenario. I mean, I understand the problem.
This makes no sense to the database/orm. I understand completely why, too.
class SomeKindOfObject
{
public virtual int Id { get; set; }
// ... some other methods relative to this base type
}
class OneRequirement : SomeKindOfObject, IRequirement
{
public virtual string Name { get; set; }
// some more methods and properties
}
class AnotherKindOfObject
{
public virtual int Id { get; set; }
// ... more methods and properties, different from SomeKindOfObject
}
class AnotherRequirement : AnotherKindOfObject, IRequirement
{
// yet more methods and properties relative to AnotherKindOfObject's intentive hierarchy
}
class OneRequirementMap : ClassMap<OneRequirement>
{
// etc
Table("OneRequirement");
}
class AnotherRequirementMap : ClassMap<AnotherRequirement>
{
//
Table("OtherRequirements");
}
class ItemMap : ClassMap<Item>
{
// ... Now we have a problem.
Map( x => x.Requirements ) // does not compute...
// additional mapping
}
So, does anyone have any ideas? I cannot seem to use generics, either, so making a basic Requirement<T> type seems out. I mean the code works and runs, but the ORM cannot grasp it. I realize what I am asking here is probably impossible, but all I can do is ask.
I would also like to add, I do not have much experience with nHibernate, only Fluent nHibernate, but I have been made aware that both communities are very good and so I am tagging this as both. But my mapping at present is 100% 'fluent'.
Edit
I actually discovered "Programming to interfaces while mapping with Fluent NHibernate", which touches on this a bit, but I'm still not sure it is applicable to my scenario. Any help is appreciated.
UPDATE (02/02/2011)
I'm adding this update in response to some of the answers posted, as my results are ... a little awkward.
Taking the advice, and doing more research, I've designed a basic interface.
interface IRequirement
{
// ... Same as it always was
}
and now I establish my class mapping..
class IRequirementMap : ClassMap<IRequirement>
{
public IRequirementMap()
{
Id( x => x.Id );
UseUnionSubclassForInheritanceMapping();
Table("Requirements");
}
}
And then I map something that implements it. This is where it gets very freaky.
class ObjectThatImplementsRequirementMap : ClassMap<ObjectThatImplementsRequirement>
{
ObjectThatImplementsRequirementMap()
{
Id(x => x.Id); // Yes, I am base-class mapping it.
// other properties
Table("ObjectImplementingRequirement");
}
}
class AnotherObjectThatHasRequirementMap : ClassMap<AnotherObjectThatHasRequirement>
{
AnotherObjectThatHasRequirementMap ()
{
Id(x => x.Id); // Yes, I am base-class mapping it.
// other properties
Table("AnotheObjectImplementingRequirement");
}
}
This is not what people have suggested, but it was my first approach, and I'm showing it because I got some very freaky results - results that really make no sense to me.
It Actually Works... Sort Of
Running the following code yields unanticipated results.
// setup ISession
// setup Transaction
var requirements = new List<IRequirement>
{
new ObjectThatImplementsRequirement
{
// properties, etc..
},
new AnotherObjectThatHasRequirement
{
// other properties.
}
};
// add to session.
// commit transaction.
// close writing block.
// setup new session
// setup new transaction
var requireables = session.Query<IRequirement>();
foreach(var requireable in requireables)
Console.WriteLine( requireable.Id );
Now things get freaky. I get the results...
1
1
This makes no sense to me. It shouldn't work. I can even query the individual properties of each object, and they have retained their type. Even if I run the insertion, close the application, then run the retrieval (so as to avoid the possibility of caching), they still have the right types. But the following does not work.
class SomethingThatHasRequireables
{
// ...
public virtual IList<IRequirement> Requirements { get; set; }
}
Trying to add to that collection fails (as I expect it to). Here is why I am confused.
If I can add to the generic IList<IRequirement> in my session, why not in an object?
How is nHibernate understanding the difference between two entities with the same Id, if they are both mapped as the same kind of object, in one scenario and not the other?
Can someone explain to me what in the world is going on here?
The suggested approach is to use SubclassMap<T>; however, the problem with that is the number of identities and the size of the table. I am concerned about scalability and performance if multiple objects (up to about 8) are referencing identities from one table. Can someone give me some insight on this one specifically?
Take a look at the chapter Inheritance mapping in the reference documentation. In the chapter Limitations you can see what's possible with which mapping strategy.
You've chosen one of the "table per concrete class" strategies, as far as I can see. You may need <one-to-many> with inverse="true" or <many-to-any> to map it.
If you want to avoid this, you need to map IRequirement as a base class into a table; then it is possible to have foreign keys to that table. Doing so turns it into a "table per class hierarchy" or "table per subclass" mapping. This is of course not possible if another base class is already mapped, e.g. SomeKindOfObject.
Edit: some more information about <one-to-many> with inverse=true and <many-to-any>.
When you use <one-to-many>, the foreign key is actually in the requirement tables pointing back to the Item. This works well so far, NH unions all the requirement tables to find all the items in the list. Inverse is required because it forces you to have a reference from the requirement to the Item, which is used by NH to build the foreign key.
<many-to-any> is even more flexible. It stores the list in an additional link table. This table has three columns:
the foreign key to the Item,
the name of the actual requirement type (.NET type or entity name)
and the primary key of the requirement (which can't be a foreign key, because it could point to different tables).
When NH reads this table, it knows from the type information (and the corresponding requirement mapping) in which other tables the requirements are. This is how any-types work.
That it is actually a many-to-many relation shouldn't bother you; it only means that the relation is stored in an additional table which is technically able to link a requirement to more than one item.
Edit 2: freaky results:
You mapped 3 tables: IRequirement, ObjectThatImplementsRequirement, AnotherObjectThatHasRequirement. They are all completely independent. You are still on "table per concrete class with implicit polymorphism". You just added another table containing IRequirements, which may also result in some ambiguity when NH tries to find the correct table.
Of course you get 1, 1 as the result. They have independent tables and therefore independent ids, which both start at 1.
The part that works: NHibernate is able to find all the objects implementing an interface in the whole database when you query for it. Try session.CreateQuery("from object") and you get the whole database.
The part that doesn't work: On the other side, you can't get an object just by id and interface or object. So session.Get<object>(1) doesn't work, because there are many objects with id 1. The same problem applies to the list. And there are some more problems there, for instance the fact that with implicit polymorphism, there is no foreign key specified which points from every type implementing IRequirement to the Item.
The any types: This is where the any type mapping comes in. Any types are stored with additional type information in the database and that's done by the <many-to-any> mapping which stores the foreign key and type information in an additional table. With this additional type information NH is able to find the table where the record is stored in.
The freaky results: Consider that NH needs to find both ways: from the object to a single table, and from the record to a single class. So be careful when mapping both the interface and the concrete classes independently. It could happen that NH uses one or the other table depending on which way you access the data. This may have been the cause of your freaky results.
The other solution: Using any of the other inheritance mapping strategies, you define a single table where NH can start reading and finding the type.
The Id Scope: If you are using Int32 as the id, you can create one record each second for 68 years until you run out of ids. If this is not enough, just switch to long; you'll get ... probably more than the database is able to store in the next few thousand years.
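A quick sanity check of that 68-year figure (just arithmetic):
// Int32.MaxValue ids, consumed at one per second
double years = int.MaxValue / (365.25 * 24 * 3600);
Console.WriteLine(years);   // prints roughly 68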

Mapping list of ints in NHibernate

In the NHibernate manual I found a mapping like this:
<bag name="Sizes" table="SIZES" order-by="SIZE ASC">
<key column="OWNER"/>
<element column="SIZE" type="Int32"/>
</bag>
I can't help wondering why anyone would want to do something like this. Is there something better about mapping plain integers than creating an entity corresponding to a given integer (a Size entity in this scenario) and creating a true one-to-many relationship?
The requirements of the business domain your application operates in will dictate whether you should have a list of ints (a collection of value types) or a list of entities. If there's a good use case for an IList<int> then by all means go for it, and let NHibernate map this association accordingly. Otherwise just remove it.
However, removing it because it seems unfamiliar to you isn't a valid reason.
I use this a lot in my domain models. Right now I have a lot of 'Twitter Applications' that index tweets based on search 'Keywords', so I have mapped it like this:
public class TwitterApplication
{
public virtual int Id { get; set; }
public virtual string ApplicationName { get; set; }
// other properties (snip)
public virtual ISet<string> Keywords { get; set; }
}
I use this mapping because I know that:
The number of Keywords will be small (about 4-6)
I'm not interested in storing Keyword DateAdded etc.
I'm not going to be doing paging or querying on the Keywords, just retrieving them all at the same time, or not at all
On this basis, I decided mapping it as a collection of strings was appropriate.
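For reference, such a collection of strings maps as an element collection; with Fluent NHibernate (which other answers on this page use) a sketch could look like this (the table and column names are assumptions):
using FluentNHibernate.Mapping;

// sketch: Keywords mapped as a set of plain strings, not a child entity
public class TwitterApplicationMap : ClassMap<TwitterApplication>
{
    public TwitterApplicationMap()
    {
        Id(x => x.Id);
        Map(x => x.ApplicationName);
        HasMany(x => x.Keywords)
            .Table("Keyword")
            .KeyColumn("TwitterApplicationId")
            .Element("Keyword")   // element (value) mapping, like <element> in HBM
            .AsSet();
    }
}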
The question is not whether you want to map it like this; the question is whether you need a list of ints in your model.
When you have a list of ints in your model, then you want to map it like this. You don't want to write complex code in your model, just because of the mapping.
So, do you think it is useful to have a list of ints in a class? Or a list of Guids, enums, doubles?
class Graph
{
public IList<double> Values { get; private set; }
}
Doesn't it make sense to you?

NHibernate convert subclass to parent class

Supposing the following entities:
public class AppUser
{
public virtual int Id { get; set; }
public virtual string Login { get; set; }
}
// Mapped as joined-subclass
public class Person : AppUser
{
public virtual int Age { get; set; }
}
If I create an AppUser and save it like this:
var user = new AppUser() { Login = "test" };
session.Save( user ); // let's say Id = 1
How can I cast/convert/"promote" it to a Person, keeping the same ID?
Now, I'm stuck with a row in my AppUser table, with Id = N. How can I populate the Person table with the same Id? I can't delete the AppUser and recreate it as a Person, as AppUser may be referenced by foreign keys.
I could issue a "manual" SQL INSERT, but it's kind of ugly...
This is definitely an NHibernate question. I understand that from an OOP point of view this makes little sense, hence the absence of tags other than nhibernate.
I don't believe nHibernate is going to be able to solve this problem for you. nHibernate deals with your data as objects and, especially with joined-subclass, I don't believe there is anything built in that allows you to change the subclass type on the fly, or at least change the type and retain the original ID.
I think your best bet is to write a stored procedure that, given an ID and a NEW type, removes all entries from subclass tables and adds a new entry to the correct subclass table.
Once that proc runs, reload the object in nHibernate (and make sure you have thrown away any cached data relating to it); it should now be of the correct type you want to work with. Set its NEW properties and save it.
That way you have a relatively generic stored proc that just changes your subclass types, but you don't need to add all the crazy logic to handle the various properties on your subclasses.
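If you do fall back to the "manual" INSERT the asker mentions, it can at least be issued through the NHibernate session so it runs in the same transaction. A sketch (the joined-subclass table and column names are assumptions):
// sketch: add the joined-subclass row for an existing AppUser, then reload it as a Person
session.CreateSQLQuery("INSERT INTO Person (AppUserId, Age) VALUES (:id, :age)")
       .SetParameter("id", user.Id)
       .SetParameter("age", 0)
       .ExecuteUpdate();

session.Evict(user);                        // drop the cached AppUser instance
var person = session.Get<Person>(user.Id);  // now loads as a Person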
This has been discussed on SO before and I am quoting Jon Skeet for posterity:
No. A reference to a derived class must actually refer to an instance of the derived class (or null). Otherwise how would you expect it to behave?
For example:
object o = new object();
string s = (string) o;
int i = s.Length; // What can this sensibly do?
If you want to be able to convert an instance of the base type to the derived type, I suggest you write a method to create an appropriate derived type instance. Or look at your inheritance tree again and try to redesign so that you don't need to do this in the first place.
In Skeet's example, strings are objects but objects are not strings, so the cast would not work.