I'm using NHibernate and Fluent NHibernate.
I fell into the trap I think a lot of new users fall for, and ended up with all my varchar columns at 255 chars. For political reasons far too boring to go into, data immediately ended up in these fields that I'm not supposed to delete (boo), so I need to update the column lengths without dropping and re-creating the tables.
However, if I apply a convention for string length to the Fluent configuration and use the NHibernate UpdateSchema method, only new tables seem to get the new varchar length. Is this correct, and is there a way to apply the change to the existing tables?
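For context, the convention I'm applying looks roughly like this (the 1000-character length is just an illustration):

    using FluentNHibernate.Conventions;
    using FluentNHibernate.Conventions.Instances;

    // Applies a default length to every string property; registered via
    // .Conventions.Add<StringLengthConvention>() in the Fluent configuration.
    public class StringLengthConvention : IPropertyConvention
    {
        public void Apply(IPropertyInstance instance)
        {
            if (instance.Property.PropertyType == typeof(string))
                instance.Length(1000);
        }
    }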
You don't necessarily need NHibernate for that.
You didn't mention what the underlying database is, but it almost certainly has a way to alter column properties directly. I think that's the simplest solution.
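For example, widening a varchar might look like the sketch below. I'm assuming SQL Server here and the table/column names are made up; adjust the DDL for your RDBMS, or just run the same statement from your admin tool:

    using System.Data.SqlClient;

    class WidenColumn
    {
        static void Main()
        {
            // Connection string and object names are placeholders.
            var connectionString = "Server=.;Database=MyApp;Integrated Security=true";
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "ALTER TABLE Customer ALTER COLUMN Name NVARCHAR(1000) NOT NULL", connection))
            {
                connection.Open();
                command.ExecuteNonQuery(); // widens the column in place, existing data stays put
            }
        }
    }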
UpdateSchema only applies non-destructive updates. It is not meant as a migration utility, but rather for making rapid changes to models and database tables during development. For more information, see my answer to "NHibernate, ORM : how is refactoring handled? existing data?".
My gut tells me that advanced NHibernate users would be against using CreateSQLQuery, but I have been looking for actual analysis on this and have found nothing. I'd like the answer to address these questions:
What are the pros/cons of using it?
Are there any performance implications, either good or bad (e.g. using it to call stored procedures)?
In which scenarios should we use/avoid it?
Who should use/avoid it?
Basically, what are the reasons to use or avoid it, and why?
CreateSQLQuery exists for a reason: executing queries that are either not supported, or hard to write, using any of the other methods.
Of course it's usually the last choice, because:
It's not object oriented (i.e. you're back to thinking of tables and columns instead of entities, properties and relationships)
It ties you to the physical model
It ties you to a specific RDBMS
It usually forces you to do more work in order to retrieve entities
It doesn't automatically support features like paging
But if you think it's needed for a particular query, go ahead. Make sure to learn all the other methods first (HQL, LINQ, QueryOver, Criteria and Get) to avoid doing unnecessary work.
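For reference, a typical call looks something like this sketch (it assumes an open ISession and a mapped Customer entity; the names are invented):

    // Raw SQL, but the results are still hydrated into mapped entities.
    var customers = session
        .CreateSQLQuery("SELECT * FROM Customer WHERE Region = :region")
        .AddEntity(typeof(Customer))
        .SetParameter("region", "EMEA")
        .List<Customer>();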
One of the main reasons to avoid SQL and use HQL is to avoid making the code base dependent on the RDBMS type (e.g. MySQL, Oracle). Another reason is that raw SQL makes your code dependent on table and column names rather than entity names and properties.
If you are comparing raw SQL to the NHibernate LINQ provider, there are other compelling reasons to go for LINQ queries (when it works), such as type safety and being able to use Visual Studio's reference search to determine in which queries a certain table or column is referenced.
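For instance, here is a rough sketch of the same kind of query via the LINQ provider (again assuming an open ISession and an invented Customer entity):

    using System.Linq;
    using NHibernate.Linq; // provides the session.Query<T>() extension

    // A typo in a property name is a compile error, and "Find All References"
    // on Customer.Region will list this query.
    var customers = session.Query<Customer>()
        .Where(c => c.Region == "EMEA")
        .ToList();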
My opinion is that CreateSQLQuery() is a "last way out" option. It is there because there are things you cannot do with the other NHibernate APIs but it should be avoided since it more or less goes against the whole idea of using NHibernate in the first place.
Do you use SchemaExport and SchemaUpdate in real applications? Do you create the model first and then generate the schema from it? Does that work? Or do you use them only for tests?
Usually I create the database (using a Visual Studio database project) and then create the mappings and persistent classes, or EF entities using the designer. But now I want to try a code-first approach with Fluent NHibernate.
I have researched SchemaExport and SchemaUpdate and found some issues. For example, SchemaUpdate doesn't delete database objects, creates not-null columns as nullable if the table already exists, doesn't generate a primary key on many-to-many tables, and so on. That means I have to recreate the database very often. But what about the data? And how do I deploy changes to the production database?
I want to know whether you really use code-first and SchemaExport/SchemaUpdate in your applications. Maybe you can give me some advice...
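For reference, this is roughly how I'm driving them (the connection string is a placeholder and CustomerMap stands in for one of my mapping classes):

    using FluentNHibernate.Cfg;
    using FluentNHibernate.Cfg.Db;
    using NHibernate.Tool.hbm2ddl;

    class SchemaTool
    {
        static void Main()
        {
            var configuration = Fluently.Configure()
                .Database(MsSqlConfiguration.MsSql2008
                    .ConnectionString("Server=.;Database=MyApp;Integrated Security=true"))
                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<CustomerMap>())
                .BuildConfiguration();

            // Option 1: drop and recreate everything from the mappings (destructive).
            new SchemaExport(configuration).Create(true, true);

            // Option 2: apply non-destructive changes to an existing database.
            // new SchemaUpdate(configuration).Execute(true, true);
        }
    }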
I use SchemaUpdate in production. It is safe precisely because it never does destructive operations like deleting columns. However, it is not a comprehensive solution for updating your database. If you use it you will still have to supplement it with script to update your schema to do things like deleting (as you mention), indexes, changing column type, adding table data, etc. But SchemaUpdate covers the 90% case for me.
The only downside I've discovered is that over time it seems to occasionally add duplicate foreign-key constraints to my table.
One more thing: you should run SchemaUpdate manually from a build tool, not your app itself. It is not safe to give your application the rights to modify your db schema!
I use SchemaUpdate/SchemaExport for rapid evolution of my model, but they are not a replacement for a database migration tool. As you mention, data cannot be migrated in a sensible manner in many cases. The tool does not have enough context. (e.g. How can you automatically migrate a FullName column to FirstName/LastName?) I answered a similar question here where I discuss db migration tools in the context of NHibernate.
NHibernate, ORM : how is refactoring handled? existing data?
Yes, you can use these in real applications; I do.
Of course, almost all the work happens in that first go. My practice has been to create a separate project that references the mappings in my main project assembly and handles database creation and the initial data import, if any.
Once the project is in production, I usually unload that project from the solution, but keep it around for reference or if I ever need to switch from create scripts to update scripts.
As for the way NHibernate creates the database, you have to do a little more specification in your Fluent mappings than you otherwise might. I like to specify null/not null, foreign key constraint names, etc. to have maximum control over the way the database gets created.
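Something along these lines (the entities here are invented purely for illustration):

    using FluentNHibernate.Mapping;

    public class Customer
    {
        public virtual int Id { get; set; }
    }

    public class Order
    {
        public virtual int Id { get; set; }
        public virtual string OrderNumber { get; set; }
        public virtual Customer Customer { get; set; }
    }

    public class OrderMap : ClassMap<Order>
    {
        public OrderMap()
        {
            Table("Orders");
            Id(x => x.Id).GeneratedBy.Identity();
            Map(x => x.OrderNumber).Length(20).Not.Nullable();
            // Explicit column and constraint names give you full control
            // over the schema that gets generated.
            References(x => x.Customer)
                .Column("CustomerId")
                .Not.Nullable()
                .ForeignKey("FK_Orders_Customer");
        }
    }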
I don't think you'd ever want to use automapping in this scenario.
As with any code generation, whether it be POCO generation from a tool or database generation as in your question, it will probably get you 80% of the way there. From there it would be wise to tweak the remaining 20% yourself: add your indexes and any other performance tweaks to get it just right.
Hey all, quick NHibernate question.
In my current project, we have a denormalized table that, for a given unique header record, will have one or more denormalized rows.
When the user is accessing a POCO representing the header and performs an update, I need this change to cascade down to all of the denormalized rows. For example, if the user changes field 'A' in the normalized header, I need all denormalized rows to now reflect the new value for field 'A'.
My current thought is to just do a foreach in the normalized header's property setters, since I already have an IList representing the denormalized rows, but I was hoping for a more elegant solution that does not involve writing a foreach loop for each normalized field that needs to propagate down to the denormalized table.
FYI, in the pure sproc world we'd just issue a second update command in a save sproc with an appropriate where clause, but we're also trying to move away from sproc dependencies and perform most operations in C#.
TIA
Thanks all for the answers above. I looked into the event listener as suggested, and it seemed a bit too heavy for what we were trying to accomplish.
Since we're using a repository pattern and the intent is to embed as much of this kind of behavior in the model as possible, we ultimately went with embedding the cascading updates in the setters of the header object's properties. Because these kinds of cascades can be tough to test, this lets us test everything in the model, among the POCOs, without ever having to rely on a SQL trigger or NHibernate.
In short, when a header is updated in its setter, I do a quick for-each over the list of detail objects, and also update any other denormalized POCOs in the object tree, then drop this into the database with a simple SaveOrUpdate with NHibernate.
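Stripped down, it looks something like this sketch (names invented):

    using System.Collections.Generic;

    public class DetailRow
    {
        public virtual string FieldA { get; set; }
    }

    public class Header
    {
        private string _fieldA;

        public Header()
        {
            Details = new List<DetailRow>();
        }

        public virtual IList<DetailRow> Details { get; set; }

        public virtual string FieldA
        {
            get { return _fieldA; }
            set
            {
                _fieldA = value;
                // Cascade the change down to every denormalized row.
                foreach (var detail in Details)
                    detail.FieldA = value;
            }
        }
    }

    // Later, a plain session.SaveOrUpdate(header) persists the header and its
    // updated detail rows in one go (assuming the collection is mapped with
    // cascade enabled).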
-Bob
I'm pretty much a newbie and I need to dig into this matter to write a college article, so I need a bit of a bootstrap.
Here and there I read that NHibernate offers much more flexibility than L2S in mapping a domain model to a database. Can you give me some hints about what I should explore?
One thing to consider is that L2S "does it for you" by creating the objects in an extremely large DBML file. You can work with your objects by creating partial classes, but if you decide to make any changes to the DBML files you are screwed, because L2S will either overwrite your changes when it regenerates itself, or you will have to re-apply the changes manually going forward.
So you are kind of stuck: it's a terrible idea to change the DBML, but because of that there are limits to what you can do in terms of naming the properties of your objects. A classic example is using enums that get stored as ints in your database. Let's say you have a UserType enum in your app; in your user table you would probably store that as an int column named UserType. That's fine, except that when you create your DBML file you get UserType mapped as an int property... but if you really want the UserType property to return a UserType enum, you are forced to either hack the DBML or change the naming conventions in your database to match your ORM tool, neither of which is a good option.
NHibernate, on the other hand, is just an XML-based mapping between YOUR objects and YOUR database, which gives you significantly more flexibility in terms of how you want to set things up.
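For instance, the UserType case above maps cleanly. Here is a rough sketch using Fluent NHibernate (the same thing can be expressed in plain hbm.xml); the class names are invented:

    using FluentNHibernate.Mapping;

    public enum UserType
    {
        Standard = 0,
        Admin = 1
    }

    public class User
    {
        public virtual int Id { get; set; }
        public virtual UserType UserType { get; set; }
    }

    public class UserMap : ClassMap<User>
    {
        public UserMap()
        {
            Table("User");
            Id(x => x.Id);
            // The int column in the database surfaces as the enum on the entity.
            Map(x => x.UserType).CustomType<UserType>();
        }
    }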
Another thing to look at is the many-to-many relationships and the table-per-subclass/table-per-class mappings that are referenced here:
http://nhibernate.info/doc/nh/en/index.html
I don't believe that L2S can handle table-per-subclass relationships.
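As a rough sketch of what table-per-subclass looks like with Fluent NHibernate (class names are made up):

    using FluentNHibernate.Mapping;

    public class Person
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public class Employee : Person
    {
        public virtual decimal Salary { get; set; }
    }

    public class PersonMap : ClassMap<Person>
    {
        public PersonMap()
        {
            Id(x => x.Id);
            Map(x => x.Name);
        }
    }

    // With no discriminator on the parent map, this produces a joined
    // (table-per-subclass) mapping: Employee gets its own table keyed back to Person.
    public class EmployeeMap : SubclassMap<Employee>
    {
        public EmployeeMap()
        {
            KeyColumn("PersonId");
            Map(x => x.Salary);
        }
    }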
Hope this helps,
-Max
Specifically, you will probably want to look at the limitations that LINQ to SQL has when mapping many-to-many relationships. This is a big difference in the mapping capabilities of the two products.
After reading through many of the questions here about DB schema migration and versioning, I've come up with a scheme to safely update the DB schema during our update process. The basic idea is that during an update, we export the database to a file, drop and re-create all tables, and then re-import everything. Nothing too fancy or risky there.
The problem is that this system is somewhat "viral", meaning that it is only safe to add columns or tables, since removing them would cause problems when re-importing the data. Normally, I would be fine just ignoring these columns, but the problem is that many of the removed items have actually been refactored, and the presence of the old ones in the code fools other programmers into thinking that they can use them.
So, I would like to find a way to be able to mark columns or tables as deprecated. In the ideal case, the deprecated objects would be marked while updating the schema, but then during the next update our backup script would simply not SELECT the objects which have been marked in this way, allowing us to eventually phase out these parts of the schema.
I have found that MySQL (and probably other DB platforms too, but this is the one we are using) supports a COMMENT attribute on both columns and tables. This would be perfect, except that I can't figure out how to actually use it in a meaningful way. How would I go about writing an SQL query to get all column names whose comment does not contain the word "deprecated"? Or am I looking at this problem all wrong, and missing a much better way to do this?
Maybe you should refactor to use views over your tables, where the views never include the deprecated columns.
"Deprecate" usually means (to me at least) that something is marked for removal at some future date, should not used by new functionality and will be removed/changed in existing code.
I don't know of a good way to "mark" a deprecated column, other than to rename it, which is likely to break things! Even if such a facility existed, how much use would it really be?
So do you really want to deprecate or remove? From the content of your question, I'm guessing the latter.
I have the nasty feeling that you may be in one of those "if I wanted to get there I wouldn't start from here" situations. However, here are some ideas that spring to mind:
Read Recipes for Continuous Database Integration which seems to address much of your problem area
Drop the column explicitly. In MySQL 5.0 (and even earlier?) the facility exists as part of DDL: see the ALTER TABLE syntax.
Look at how ActiveRecord::Migration works in Ruby. A migration can include the "remove_column" directive, which will deal with the problem in a platform-appropriate way. It definitely works with MySQL, from personal experience.
Run a script against your export to remove the column from the INSERT statements, both column and values lists. Probably quite viable if your DB is fairly small, which I'm guessing it must be if you export and re-import it as described.
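On your specific question about finding the non-deprecated columns: the comments are exposed through information_schema, so your export script can build its SELECT list from something like the sketch below. The connection string, schema and table names are placeholders, and I'm using MySQL Connector/NET only to keep the example in C#; the embedded SQL works on its own as well.

    using System;
    using MySql.Data.MySqlClient;

    class DeprecatedColumnFilter
    {
        static void Main()
        {
            const string sql = @"SELECT COLUMN_NAME
                                 FROM information_schema.COLUMNS
                                 WHERE TABLE_SCHEMA = @schema
                                   AND TABLE_NAME = @table
                                   AND COLUMN_COMMENT NOT LIKE '%deprecated%'";

            using (var connection = new MySqlConnection("Server=localhost;Database=myapp;Uid=backup;Pwd=secret"))
            using (var command = new MySqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@schema", "myapp");
                command.Parameters.AddWithValue("@table", "customer");
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    // Only these columns get SELECTed during the export.
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }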