I'm trying to set up a sample project using NHibernate and Fluent NHibernate.
I am using the example mappings from the Fluent NHibernate website.
My question is about the many-to-many mapping between Store and Product. It seems (when looking at the generated SQL) that when adding a product to the store, NHibernate deletes all the records from the association table (StoreProduct) that belong to that store, and then inserts all the records again, now including the association to the new product I added.
Is this the default behavior or am I missing something? It just doesn't seem very efficient to delete and re-insert all the associations every time I add one.
This is expected behavior. It should only happen when you use the bag mapping strategy, which is what the example appears to use. A bag is an unordered collection that can contain duplicate items. Because bag items are not unique, NHibernate cannot easily tell when you've added or removed an item, so the easiest thing for it to do is delete all the associations and insert them again.
It's been a while since I've played with many-to-many mappings (I usually just map them as two one-to-many relationships), but I believe that if you use a different construct, for example a set (which does not allow duplicates), you should find that the behavior is different. Of course, you should use whatever construct makes the most semantic sense for your application.
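A sketch of what that might look like in Fluent NHibernate (untested, and assuming Store/Product classes like the example's; the Products property would need to be a set type such as ISet<Product>, or Iesi.Collections' ISet on older NHibernate versions, rather than a list):

```csharp
using FluentNHibernate.Mapping;

public class StoreMap : ClassMap<Store>
{
    public StoreMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);

        // AsSet() marks the collection as containing no duplicates, which
        // lets NHibernate detect individual additions and removals and
        // issue a single INSERT or DELETE against StoreProduct instead of
        // rewriting the store's entire association list.
        HasManyToMany(x => x.Products)
            .Table("StoreProduct")
            .AsSet();
    }
}
```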
Related
I am using a where clause on a bag to filter it. Is there any way to make NHibernate insert these filter values into the database?
IMHO, it is not worth trying to share a table between many-to-many relations. You can't access the boolean flags in these tables, so you can't change them from NH except with native SQL, which is not how NHibernate should be used. And it will be very complicated to make changes to these bags. The table is completely invisible from the object-oriented point of view and completely managed by NH.
Just map it to different tables and you'll be happy. Believe me.
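For instance, if the association really does carry extra columns (like those boolean flags), you can promote it to an entity of its own. A sketch (StoreProduct and IsActive are illustrative names, not from the question):

```csharp
using FluentNHibernate.Mapping;

public class StoreProduct
{
    public virtual int Id { get; set; }
    public virtual Store Store { get; set; }
    public virtual Product Product { get; set; }
    public virtual bool IsActive { get; set; }   // the flag is now reachable from NH
}

public class StoreProductMap : ClassMap<StoreProduct>
{
    public StoreProductMap()
    {
        Table("StoreProduct");
        Id(x => x.Id);
        References(x => x.Store);
        References(x => x.Product);
        Map(x => x.IsActive);
    }
}
```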
Hey all, quick NHibernate question.
In my current project, we have a denormalized table that, for a given unique header record, will have one or more denormalized rows.
When the user is accessing a POCO representing the header and performs an update, I need this change to cascade down to all of the denormalized rows. For example, if the user changes field 'A' in the normalized header, I need all denormalized rows to now reflect the new value for field 'A'.
My current thought is to just do a foreach in the normalized header's property setters, since I already have an IList representing the denormalized rows, but I was hoping for a more elegant solution that does not involve writing a foreach loop for each normalized field that needs to propagate down to the denormalized table.
FYI, in the pure sproc world we'd just issue a second update command in a save sproc with an appropriate where clause - but we're also trying to move away from the sproc dependencies and perform most operations in C#.
TIA
Thanks all for the answers above. I looked into the event listener as suggested, and it seemed a bit too heavy for what we were trying to accomplish.
Since we're using a repository pattern and the intent is to embed as much of this kind of behavior in the model as possible, we ultimately went with embedding the cascading updates in the setters of the header object's properties. Since these kinds of cascades can be tough to test, this lets us test everything in the model among the POCOs without ever having to rely on a SQL trigger or NHibernate.
In short, when a header is updated in its setter, I do a quick foreach over the list of detail objects, and also update any other denormalized POCOs in the object tree, then drop this into the database with a simple SaveOrUpdate with NHibernate.
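Roughly, it looks like this (Header, DetailRow and FieldA are stand-in names, not our real model):

```csharp
using System.Collections.Generic;

public class Header
{
    private string fieldA;

    public virtual IList<DetailRow> Details { get; set; }

    public Header()
    {
        Details = new List<DetailRow>();
    }

    public virtual string FieldA
    {
        get { return fieldA; }
        set
        {
            fieldA = value;
            // Cascade the change to every denormalized row so the whole
            // object graph is consistent before session.SaveOrUpdate(header).
            foreach (var row in Details)
                row.FieldA = value;
        }
    }
}

public class DetailRow
{
    public virtual string FieldA { get; set; }
}
```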
-Bob
Struggling between choosing LINQ to SQL and NHibernate.
Let me give you some insight into the application, in point form:
this is an ASP.NET MVC application
it will have lots of tables, maybe 50-60 (SQL Server 2008)
I would want all the basic CRUD logic done for me (which I think NHibernate + repository pattern can give me)
I don't have very complicated mappings; my tables will look something like:
User(userID, username)
UserProfile(userID, ...)
Content(contentID, title, body, date)
Content_User(contentID, userID)
So in general, I will have a PK table, then lots of other tables that reference that PK (i.e. FK tables).
I will also have lots of mapping tables, that will contain PK, FK pairs.
Entity-wise, I want User.cs, UserProfile.cs and then a way to load each object.
I am not looking for a User class that has a UserProfile property and a Content collection property (there will be maybe 10-20 tables that relate to the user; I just like to keep things linear, if that makes sense).
The one thing that makes me lean towards NHibernate is the cross-db potential, and the repository pattern that will give me basic db operations across all my main tables almost instantly!
Since you seem to have a quite straightforward mapping from class to table in mind, Linq to SQL should do the trick without any difficulties. That would let you get started very quickly, without the initial work of mapping the domain manually to the database.
An alternative could be using NHibernate + Fluent NHibernate and its AutoMapping feature, but keep in mind that the Fluent NHibernate AutoMapping is still quite young.
I'm not quite sure I understand what you want your entities to look like, but with Linq to SQL you will get a big generated mess, which you could then extend using partial classes. NHibernate lets you design your classes however you want and doesn't generate anything for you out of the box. You could kind of use POCO classes with Linq to SQL, but that would take away all the benefits of using Linq to SQL rather than NHibernate.
Concerning the repository pattern, and the use of a generic repository, that can be done quite nicely with Linq to SQL as well, and not only with NHibernate. In my opinion that is one of the nice things about Linq to SQL.
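To sketch what I mean, a minimal generic repository over the LINQ to SQL DataContext could look like this (illustrative only; the names are mine, not from any library):

```csharp
using System.Data.Linq;
using System.Linq;

public class Repository<T> where T : class
{
    private readonly DataContext context;

    public Repository(DataContext context)
    {
        this.context = context;
    }

    // GetTable<T>() returns an IQueryable<T>, so queries compose with LINQ.
    public IQueryable<T> All()
    {
        return context.GetTable<T>();
    }

    public void Add(T entity)
    {
        context.GetTable<T>().InsertOnSubmit(entity);
    }

    public void Delete(T entity)
    {
        context.GetTable<T>().DeleteOnSubmit(entity);
    }

    public void Save()
    {
        context.SubmitChanges();
    }
}
```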
If you probably will need support for other databases than SQL Server, NHibernate is the only choice. However, if it probably won't be an issue I would recommend not using that as the primary factor when choosing. The other factors will probably influence your project more.
Conclusion:
All in all, I would recommend Linq to SQL in this case, since it would let you get started quickly and is sufficient for your needs. The precondition for that is that you don't have a problem with the thought of having generated, messy code in your domain, and that you are quite sure there will be no need to support other databases in the future. Otherwise I would recommend NHibernate, since it is truly an awesome ORM.
linq2sql really wants you to work with a one-table-per-class mapping. So if you have a UserMaster and a UserDetail table, you are looking at two objects when using the default Linq object generation. You can get around that by mapping Linq entities to business entities (see Rob Conery's storefront screencasts), but then you are back to writing object mapping code or using something like AutoMapper.
If you want to be able to split your classes across multiple tables, then I'd say go with NHibernate. If not, then Linq has a lower learning curve.
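To illustrate the split, here's a hedged sketch of NHibernate's join mapping via Fluent NHibernate (the User class, UserMaster/UserDetail tables and the Bio column are illustrative):

```csharp
using FluentNHibernate.Mapping;

public class User
{
    public virtual int Id { get; set; }
    public virtual string Username { get; set; }
    public virtual string Bio { get; set; }
}

public class UserMap : ClassMap<User>
{
    public UserMap()
    {
        Table("UserMaster");
        Id(x => x.Id);
        Map(x => x.Username);

        // Columns from UserDetail are folded into the same User entity;
        // LINQ to SQL's designer has no equivalent of this join mapping.
        Join("UserDetail", join =>
        {
            join.KeyColumn("UserID");
            join.Map(x => x.Bio);
        });
    }
}
```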
The only way I'd ever use NHibernate is through Castle Project's ActiveRecord library. Otherwise, NHibernate becomes its own little infrastructure project. Check out some questions in the nhibernate tag to see what I'm talking about.
The only thing I might change about AR is to return the results of SELECT operations as List<T> instead of T[]. Of course, with the source code in C# I can do that if I want.
With ActiveRecord, the mapping information is saved in attributes you decorate your classes with. It's genius and I am a huge proponent of the pattern and this particular library.
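As a taste of the pattern, a minimal illustrative class (the table and column names are made up):

```csharp
using Castle.ActiveRecord;

[ActiveRecord("Users")]
public class User : ActiveRecordBase<User>
{
    [PrimaryKey]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Username { get; set; }
}
```

User.FindAll() then returns a User[] - the array return type mentioned above.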
I'm pretty much a newbie and I need to dig into this matter to write a college article, so I need a starting point.
Here and there I read that NHibernate offers much more flexibility (compared with L2S) in mapping a domain model to the database. Can you give me some hints about what I should explore?
One thing to consider is that L2S "does it for you" by creating the objects in an extremely large DBML file. You can work with your objects by creating partial classes, but if you decide to make any changes to the DBML file itself you are screwed, because L2S will either overwrite your changes when it regenerates itself or force you to re-apply them manually from then on.
So you are kind of stuck: it's a terrible idea to change the DBML, but because of that there are limits to what you can do in terms of naming the properties of your objects. A classic example is using enums that get stored as ints in your database. Let's say you have UserType as an enum in your app; in your user table you would probably store that as an int column named UserType. That's great, except when you create your DBML file you get UserType mapped as an int column... but if you really want the UserType property to return a UserType enum, you are forced to either hack the DBML... or change the naming conventions in your database to match your ORM tool... neither of which is a good option.
Whereas NHibernate is just an XML-based mapping between YOUR objects and YOUR database, which gives you significantly more flexibility in terms of how you want to set things up.
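To make the enum case concrete, with NHibernate the class can simply declare the property as the enum (a sketch; the enum values are made up):

```csharp
public enum UserType
{
    Standard = 0,
    Admin = 1
}

public class User
{
    public virtual int Id { get; set; }

    // Declared as the enum in YOUR class; the XML mapping just points it
    // at the int column, e.g. <property name="UserType" column="UserType" />,
    // and NHibernate persists the underlying int value by default.
    public virtual UserType UserType { get; set; }
}
```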
Another thing to look at is the many-to-many relationships and the table-per-subclass/table-per-class mappings that are referenced here:
http://nhibernate.info/doc/nh/en/index.html
I don't believe that L2S can handle table-per-subclass relationships.
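For reference, a table-per-subclass mapping sketch in Fluent NHibernate might look like this (Account/SavingsAccount are illustrative names):

```csharp
using FluentNHibernate.Mapping;

public class Account
{
    public virtual int Id { get; set; }
    public virtual string Owner { get; set; }
}

public class SavingsAccount : Account
{
    public virtual decimal InterestRate { get; set; }
}

public class AccountMap : ClassMap<Account>
{
    public AccountMap()
    {
        Table("Account");
        Id(x => x.Id);
        Map(x => x.Owner);
    }
}

// With no discriminator declared on the parent, Fluent NHibernate emits a
// joined-subclass (table-per-subclass) mapping: SavingsAccount gets its own
// table keyed back to Account's primary key.
public class SavingsAccountMap : SubclassMap<SavingsAccount>
{
    public SavingsAccountMap()
    {
        Table("SavingsAccount");
        KeyColumn("AccountID");
        Map(x => x.InterestRate);
    }
}
```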
Hope this helps,
-Max
Specifically, you will probably want to look at the limitations LINQ to SQL has when mapping many-to-many relationships. This is a big difference in the mapping capabilities of the two products.
I'm trying to figure out the best strategy for organizing DataContexts. The typical DB we work with has between 50 and 100 tables, usually in third normal form and with many relations between them. I think we have two options:
Put all tables in a single context. This will ensure that anything we do will be committed to the database in the correct order. The problem is that the LINQ designer will be a mess with 50+ tables, and I'm worried performance may be affected.
Create several data contexts based on logical groupings of tables. The problem is that there will be places where one side of a relation is in one context and the other side in another. We'll have to manually take care of committing both contexts in the correct order.
Is there any recommended practice to handle this?
More details:
I want to create my own entities and unit of work on top of LINQ to SQL. Entities will be defined in an XML model file, where the mapping to LINQ entities will also be specified. A custom tool will generate my entities (POCOs) based on the model. The client code will interact only with my entities and my unit of work, never directly with the DataContext or LINQ entities. However, I do not want to duplicate what LINQ to SQL provides out of the box, so I want to use the underlying LINQ DataContext. This means that I cannot have two Orders in different data contexts, because it wouldn't be possible to map my POCO Order to both of them.
This is a common question that has been thoroughly analyzed here: http://craftycode.wordpress.com/2010/07/19/linq-to-sql-single-data-context-or-multiple/
In essence, you should create at most one data context per strongly connected group of tables, or one data context per database.
LINQ-to-SQL mappings are like typed DataSets, in that when you use one, you're dealing with a session containing data. You can have the same tables in several different DataContexts. They're only classes, after all; they don't mean anything until you start interacting with the database, by filling them with existing data or using them to create new data.
So perhaps you have Customer, Address, Phone, etc. tables that you deal with when you're sending out a new catalog. Then you have Invoice, Line Item, Product, etc. tables that you use when you're creating an order. But in that latter set you may want to have Customer as well. That's fine. You should just take care to only have one session active at a time so that you're not using inconsistent data. You shouldn't have problems from overlapping entities in your various DataContexts as long as you're not using them in an overlapping way.
As far as the clutter, you can put your DataContext in a specific namespace, and you can also put your various entities in a specific namespace (albeit only one namespace per set of entities in a DataContext). You can do this in the Properties window. This will let you keep the Intellisense less jumbled.
You should create contexts that allow you to perform units of work. This may involve overlapping table mappings.
Context1 : Customer has many Invoices
Context2 : Customer has many Orders
Context3 : Invoice has many Orders
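Hand-rolled, the overlap might look like this (a sketch only; designer-generated contexts behave the same way, and the entity classes are minimal stand-ins):

```csharp
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table(Name = "Customer")]
public class Customer { [Column(IsPrimaryKey = true)] public int CustomerID; }

[Table(Name = "Invoice")]
public class Invoice { [Column(IsPrimaryKey = true)] public int InvoiceID; }

[Table(Name = "Order")]
public class Order { [Column(IsPrimaryKey = true)] public int OrderID; }

// Customer is mapped in both contexts; that's fine, since nothing happens
// until a context is actually used against the database.
public class InvoicingContext : DataContext
{
    public InvoicingContext(string connection) : base(connection) { }

    public Table<Customer> Customers { get { return GetTable<Customer>(); } }
    public Table<Invoice> Invoices { get { return GetTable<Invoice>(); } }
}

public class OrderingContext : DataContext
{
    public OrderingContext(string connection) : base(connection) { }

    public Table<Customer> Customers { get { return GetTable<Customer>(); } }
    public Table<Order> Orders { get { return GetTable<Order>(); } }
}
```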
I use one datacontext per database.
Tables average up to 100 per database, but from experience I haven't run into any performance issues.
The datacontext is in a separate project, which is compiled; the resulting DLL is referenced from the BLL.