Hey all, quick NHibernate question.
In my current project, we have a denormalized table that, for a given unique header record, will have one or more denormalized rows.
When the user is accessing a POCO representing the header and performs an update, I need this change to cascade down to all of the denormalized rows. For example, if the user changes field 'A' in the normalized header, I need all denormalized rows to now reflect the new value for field 'A'.
My current thought is to just do a foreach in the normalized header on each property set, since I already have an IList representing the denormalized rows, but I was hoping for a more elegant solution that does not involve writing a foreach loop for every normalized field that needs to propagate down to the denormalized table.
FYI, in the pure sproc world, we'd just issue a second update command in the save sproc with an appropriate where clause. But we're also trying to move away from sproc dependencies and perform most operations in C#.
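For illustration, that second update would look something like this (table, column, and parameter names are made up):

UPDATE tbl_denormalized_detail
SET field_a = @FieldA
WHERE header_id = @HeaderId;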
TIA
Thanks all for the answers above. I looked into the event listener as suggested, and it seemed a bit too heavy for what we were trying to accomplish.
Since we're using a repository pattern and the intent is to embed as much of this kind of behavior in the model as possible, we ultimately went with embedding the cascading updates in the setters of the header object's properties. Since these kinds of cascades can be tough to test, this lets us test everything in the model among the POCOs without ever having to rely on a SQL trigger or on NHibernate.
In short, when a header property is updated in its setter, I do a quick foreach over the list of detail objects, and also update any other denormalized POCOs in the object tree, then drop the whole thing into the database with a simple SaveOrUpdate in NHibernate.
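The setter pattern boils down to something like this minimal sketch (class and field names are invented, not our real model):

using System.Collections.Generic;

public class HeaderRecord
{
    private string fieldA;
    private IList<DetailRow> detailRows = new List<DetailRow>();

    public virtual IList<DetailRow> DetailRows
    {
        get { return detailRows; }
        set { detailRows = value; }
    }

    public virtual string FieldA
    {
        get { return fieldA; }
        set
        {
            fieldA = value;
            // cascade the change to every denormalized row we already hold
            foreach (DetailRow row in detailRows)
                row.FieldA = value;
        }
    }
}

public class DetailRow
{
    public virtual string FieldA { get; set; }
}

Assuming the detail collection is mapped with a cascade (e.g. cascade="all"), a single session.SaveOrUpdate(header) then persists the header and all the touched detail rows together.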
-Bob
I have just added an IsValidRecord column to a MyClass SQL table.
It will be used as a logical delete / soft delete flag.
Now I need to update my application to query only the valid records, based on the new column.
I use Entity Framework Database First.
Our app uses a business layer that centralizes all methods fetching the MyClass items.
So I have updated all the methods that query the concerned table, adding the appropriate filter based on IsValidRecord.
It works fine.
However, I am pretty sure this is bad practice, because devs will forget to apply this filter to new methods added in the future, which will obviously return incorrect records.
I wonder whether EF has a feature to automatically filter the queries with the appropriate "AND IsValidRecord = 1" condition?
I used to work for a company that did the same thing with NHibernate.
The only supported feature that I have seen for EF is this:
Soft Delete
Unfortunately, it overrides OnModelCreating, so I take it that it only works for a Code First architecture.
We use Database First, so I think it does not work, as OnModelCreating is never called?
I would normally implement this filter using application-specific views in the database (after all, some uses of this data may need to be able to see deleted items).
With a simple definition for the view, it should automatically be considered updatable by SQL Server, so you shouldn't need to write triggers to manage INSERT/UPDATE/DELETE operations. You then lie to Entity Framework about what its "tables" are, and it should mostly be none the wiser.
Depending on how you want the soft-delete to work, you may choose to hide the existence of the IsValidRow column (nit: we have rows in SQL, not records) in this view and implement an INSTEAD OF DELETE trigger on the view allowing your application to soft delete these rows by asking EF to remove them.
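A rough sketch of that approach, assuming SQL Server and inventing a couple of columns for illustration:

CREATE VIEW dbo.v_MyClass
AS
SELECT Id, Name -- every column except IsValidRecord
FROM dbo.MyClass
WHERE IsValidRecord = 1;
GO

CREATE TRIGGER dbo.trg_v_MyClass_Delete
ON dbo.v_MyClass
INSTEAD OF DELETE
AS
BEGIN
    -- soft delete: flip the flag instead of removing the row
    UPDATE m
    SET IsValidRecord = 0
    FROM dbo.MyClass m
    INNER JOIN deleted d ON d.Id = m.Id;
END;

EF then maps to v_MyClass as if it were a table, and an ordinary delete issued by the application just flips the flag.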
The best link I have found is this:
EDMX Mapping
Use the EDMX designer to add the filter condition. It's basically exactly what I want...
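For anyone else who lands here: the designer stores that condition in the mapping (MSL) section of the EDMX, roughly like this (the entity and set names are guesses; the column is the one from my question):

<EntitySetMapping Name="MyClasses">
  <EntityTypeMapping TypeName="MyModel.MyClass">
    <MappingFragment StoreEntitySet="MyClass">
      <!-- only rows with IsValidRecord = 1 are mapped into the entity set -->
      <Condition ColumnName="IsValidRecord" Value="1" />
      <ScalarProperty Name="Id" ColumnName="Id" />
    </MappingFragment>
  </EntityTypeMapping>
</EntitySetMapping>

One constraint to be aware of: a column used in a <Condition> cannot also be mapped as a scalar property, so IsValidRecord disappears from the entity itself.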
Are there any down sides for this solution?
At first sight, it sounds good enough to me.
The only disadvantage I can think of is that the filter is well hidden. Other devs in the future might have a very hard time figuring out why / where / how the entities are filtered.
I am trying to create a QTreeView to display data from a SQL database. This is a large database, so simply loading the data into a QStandardItemModel seems prohibitive.
None of Qt's pre-built SQL model classes are sufficient for the task. Therefore it seems necessary to subclass QAbstractItemModel.
In the first place, I can find no examples where this is done, so I am wondering whether it is the correct approach.
Implementing QAbstractItemModel::data is pretty straightforward. I am uncertain how to implement QAbstractItemModel::parent.
Qt's "Simple Tree Model Example" example would be informative, but in that example the tree structure is represented in memory with the TreeItem class. I could copy that, but if I am going to duplicate the database structure, it would be just as easy to use QStandardItemModel. If I need to maintain a separate data structure (in addition to the database and the QAbstractItemModel subclass) to represent the tree structure, is there any advantage to subclassing QAbstractItemModel over just using a QStandardItemModel?
The challenge in the tree structure is to always be able to identify a model index's parent (i.e., overriding the parent() method). In the Simple Tree example, this is done by storing the tree structure in a separate data structure. For large SQL queries this is impractical. For the right database structure, you might be able to calculate the proper parent node given the child, but that is not guaranteed. The only alternative I can imagine is passing a quint32 to QAbstractItemModel::createIndex which encodes the item's parent.
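For a strictly two-level tree (header rows with flat children), that trick can look like this minimal sketch; the class name and the hard-coded counts are placeholders for real SQL queries, and current Qt takes a quintptr rather than a quint32:

#include <QAbstractItemModel>
#include <QString>
#include <QVariant>

// Encode "no parent" as internal ID 0 and "child of top-level row N" as N + 1,
// so parent() can be answered from the index alone, with no extra tree structure.
class TwoLevelSqlModel : public QAbstractItemModel
{
public:
    QModelIndex index(int row, int column, const QModelIndex &parent = QModelIndex()) const override
    {
        if (!hasIndex(row, column, parent))
            return QModelIndex();
        if (!parent.isValid())
            return createIndex(row, column, quintptr(0));            // top level
        return createIndex(row, column, quintptr(parent.row() + 1)); // child
    }

    QModelIndex parent(const QModelIndex &child) const override
    {
        if (!child.isValid() || child.internalId() == 0)
            return QModelIndex();                                    // top level: no parent
        return createIndex(int(child.internalId()) - 1, 0, quintptr(0));
    }

    int rowCount(const QModelIndex &parent = QModelIndex()) const override
    {
        if (!parent.isValid())
            return 1200;  // placeholder: COUNT(*) of header rows from the DB
        if (parent.internalId() == 0)
            return 4;     // placeholder: child count per header row
        return 0;         // children are leaves
    }

    int columnCount(const QModelIndex & = QModelIndex()) const override { return 1; }

    QVariant data(const QModelIndex &idx, int role = Qt::DisplayRole) const override
    {
        if (!idx.isValid() || role != Qt::DisplayRole)
            return QVariant();
        return QString("row %1").arg(idx.row()); // placeholder: fetch from the DB here
    }
};

This only works because the tree is exactly two levels deep; for arbitrary depth you need the full parent chain, which is where a separate node structure comes back in.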
One performance consideration that might be useful: after giving up on subclassing QAbstractItemModel, I tried populating a QStandardItemModel from the database. I loaded about 1200 items into the model, with four child items for each item, using two separate database calls. This took about 3 seconds on a 2009 laptop. That is faster than I had been expecting. (And there would be performance gains if I used a single query instead of repeated queries.)
In the end I went another route: several QTableViews in the GUI, with signals and slots to show different aspects of the data. My code is much simpler, and the proper functionality is in place, so this feels like the "right" solution.
I have a table that maintains rows of products that are for sale (tbl_products) using PostgreSQL 9.1. There are also several other tables that maintain ratings on the items, comments, etc. We're using JPA/Hibernate for ORM in a Seam application, and have the appropriate entities wired up properly. In an effort to provide better listings of these items, I've created a SQL VIEW (v_product_summary) that aggregates some of the basic product data (name, description, price, etc.) with data from the other tables (number of comments, average rating, etc.). This provides a nice concise view of the data, and I've created a corresponding JPA entity object that provides read-only access to the view data.
Everything is working fine with respect to running JPQL queries on either the Product object (tbl_products) or the ProductSummary (v_product_summary) objects. However, we'd like to provide a richer search experience using Hibernate Search and Lucene. The issue we're running into, though, is how do we query the ProductSummary objects using Hibernate Search? They're not indexed upon creation, because they're never really "created". They're obtained as read-only objects from the v_product_summary VIEW. An index entry is only created on Product when it's persisted to the database, and not for ProductSummary since it's never persisted.
Our thought is that we should be able to:
Persist our Product object to the database
Immediately query the corresponding ProductSummary object using the product's ID
Manually update the Hibernate Search index for the ProductSummary object
Is this possible? Is this even a good idea? I can see there will be a performance impact since we're executing a query for the ProductSummary object every time a new Product is persisted. However, products are not added to the database at a high volume, so I don't think this will be a huge issue.
We'd really like to find a better, more efficient way of accomplishing this. Can anyone provide any tips or recommendations? If we do go the route of updating the search index manually, is that even doable? Can anyone provide a resource explaining how we can add a single ProductSummary to the index?
Any help you can provide is GREATLY appreciated.
If I understand the question correctly, you're trying to do the normal thing of persisting an object and indexing it at that point, but you're dealing with 2 separate objects.
I find myself doing kludgey things in Hibernate all the time; it feels like it almost demands it of you. Yes, there'd be a performance impact, and as you say, it is probably not a big deal, so it might be worth profiling.
A part of me remembers there's a way you can refresh the object upon write, and wonders if there's a way you can wrap the Product and the ProductSummary and tweak the mapping so that you read part and write part of it (waves hands on syntax and mapping). Or create a Hibernate-facing object with read-only fields that can be split and merged into your two objects. I don't know if your design allows Hibernate-only objects; it's a common idiom in my system.
Either way could be useful if you had a lot of objects in this situation; if this is the only object you're searching this way, your three steps look much clearer.
As for the syntax for adding an object manually, I think you're looking for something like this, after your fetch:
FullTextSession textSession = Search.getFullTextSession(session);
textSession.index(myProductSummary);
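One caveat I half-remember from the Hibernate Search API (worth double-checking for your version): manual index work is queued and applied with the transaction, so outside of one you may need to flush explicitly:

textSession.flushToIndexes();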
Was that all you wanted?
Since you are using PostgreSQL, you could insert into the view and use a rule to redirect the insert to the appropriate table.
A PostgreSQL rule is a way to change a query just before it gets executed. I used one in an application that needed a schema change but required the old queries to keep working for a little while.
You can check out the documentation about rules on insert queries on the PostgreSQL site.
Since you'll be inserting into and updating the view, Hibernate Search will work as usual.
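Something along these lines; the column list is invented for the example:

CREATE RULE v_product_summary_insert AS
ON INSERT TO v_product_summary
DO INSTEAD
INSERT INTO tbl_products (name, description, price)
VALUES (NEW.name, NEW.description, NEW.price);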
EDIT
An easier strategy: insert and update ProductSummary whenever you do so on Product, and tell PostgreSQL to ignore the resulting inserts, updates and deletes on the view.
On the database side:
CREATE RULE dontinsert AS ON INSERT TO v_product_summary DO INSTEAD NOTHING;
CREATE RULE dontupdate AS ON UPDATE TO v_product_summary DO INSTEAD NOTHING;
CREATE RULE dontdelete AS ON DELETE TO v_product_summary DO INSTEAD NOTHING;
But I guess you will need to hack a little, since the JDBC executeUpdate call will return 0, and Hibernate will probably freak out.
Technically I think this would be possible, but your entire efficiency dilemma might be better solved using something like memcached, making performance less of an issue and perhaps increasing code maintainability, depending on how you currently have it implemented at the statement level. By updating the search index manually, do you mean the database index? That is not recommended, and I'm not sure it's even doable. Why not index them on creation?
I'm a relative newbie at NHibernate, so I'll beg forgiveness in advance if this is a stupid question. I've googled it and searched the documentation, and am getting all wrapped around the axle.
I'm maintaining/enhancing an existing application that uses NHibernate for a relatively straightforward table. The table has about 10-12 fields, and no foreign key relations. The table contains somewhere around a dozen or so rows, give or take.
Two of the fields are huge blobs (multi-megabytes). As a result, the table is taking an excessive amount of time (4 minutes) to load when working with a remote DB.
The thing is that those two fields are not needed until a user selects one of the rows and begins to work on it, and then they are only needed for the one row that he selects.
This seems like exactly what lazy loading was meant for. I just can't quite figure out how to apply it unless I break up the existing DB schema and put those columns in their own table with one-to-one mapping, which I don't want to.
If it matters, the program is using NHibernate.Mapping.Attributes rather than hbm files, so I need to be able to make alterations in the attributes of the domain objects that will propagate to the hbm.
Thanks for any help.
You need lazy properties, introduced in NHibernate 3, to accomplish this. I assume, but don't know, that you can set that using attributes.
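A sketch of what that might look like, assuming your version of NHibernate.Mapping.Attributes exposes the lazy flag that NHibernate 3 added to <property> (class and member names are invented):

using NHibernate.Mapping.Attributes;

[Class(Table = "config_table")] // hypothetical table
public class ConfigRecord
{
    [Id(Name = "Id")]
    [Generator(1, Class = "native")]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Name { get; set; }

    // Assumption: PropertyAttribute exposes Lazy in the NH3-era attribute library;
    // if yours doesn't, inspect the generated hbm to see whether lazy="true" survives.
    [Property(Lazy = true)]
    public virtual byte[] HugeBlob1 { get; set; }

    [Property(Lazy = true)]
    public virtual byte[] HugeBlob2 { get; set; }
}

One behavior worth knowing: as far as I know, NHibernate fetches all of an entity's lazy properties together the first time any one of them is touched, which is still fine here since they are only needed for the one row the user selects.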
I've shown up at a new job and discovered a database which is in dire need of some help. There are many, many things wrong with it, including:
No foreign keys...anywhere. They're faked by using ints and managing the relationship in code.
Practically every field can be NULL, even though many of the values are actually required
Naming conventions for tables and columns are practically non-existent
Varchars which are storing concatenated strings of relational information
Folks can argue "it works", and it does. But moving forward, it's a total pain to manage all of this in code, and it opens us up to bugs IMO. Basically, the DB is being used as a flat file, since it's not doing a whole lot of work.
I want to fix this. The issues I see now are:
We have a lot of data (migration, possibly tricky)
All of the DB logic is in code (with migration comes big code changes)
I'm also tempted to do something "radical" like moving to a schema-free DB.
What are some good strategies when faced with an existing DB built upon a poorly designed schema?
Enforce Foreign Keys: If a relationship exists in the domain, then it should have a Foreign Key.
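For instance (table and column names are made up), turning one of those faked int relationships into a real constraint:

ALTER TABLE order_item
ADD CONSTRAINT fk_order_item_order
FOREIGN KEY (order_id) REFERENCES orders (id);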
Renaming existing tables/columns is fraught with danger, especially if there are many systems accessing the Database directly. Gotchas include tasks that run only periodically; these are often missed.
Of Interest: Scott Ambler's article: Introduction To Database Refactoring
and Catalog of Database Refactorings
Views are commonly used to transition between changing data models because of the encapsulation they provide. A view looks like a table, but does not exist as a physical object in the database - you can change which column is returned for a given column alias as desired. This allows you to set up your codebase to use a view, so you can move from the old table structure to the new one without the application needing to be updated. But it means the view has to return the data in the existing format. For example, your current data model has:
SELECT t.column -- a list of concatenated strings, assuming comma separated
FROM OLD_TABLE t
...so the first version of the view would be the query above, but once you create the new table that uses 3NF, the query for the view would use:
SELECT GROUP_CONCAT(t.column SEPARATOR ',')
FROM NEW_TABLE t
...and the application code would never know that anything changed.
The problem with MySQL is that its view support is limited - you can't use variables within a view, nor can a view contain a subquery in its FROM clause.
The reality of the changes you wish to make is that you are effectively rewriting the application from the ground up. Moving logic from the codebase into the data model will drastically change how the application gets its data. Model-View-Controller (MVC) is ideal to implement with changes like these, to minimize the cost of similar changes in the future.
I'd say leave it alone until you really understand it. Then make sure you don't start with one of the Things You Should Never Do.
Read Scott Ambler's book on Refactoring Databases. It covers a good many techniques for how to go about improving a database - including the transitional measures needed to allow both old and new programs to work with the changing design.
Create a completely new schema and make sure that it is fully normalized and contains any unique, check and not null constraints etc that are required and that appropriate data types are used.
Prepopulate each table that fills the parent role in a foreign key relationship with a single 'Unknown' record.
Create an ETL (Extract Transform Load) process (I can recommend SSIS (SQL Server Integration Services) but there are plenty of others) that you can use to refill the new schema from the existing one on a regular basis. Use the 'Unknown' record as the parent of any orphaned records - there will be plenty ;). You will need to put some thought into how you will consolidate duplicate records - this will probably need to be on a case by case basis.
Use as many iterations as are necessary to refine your new schema (ensure that the ETL Process is maintained and run regularly).
Create views over the new schema that match the existing schema as closely as possible (see the sketch after this list).
Incrementally modify any clients to use the new schema making temporary use of the views where necessary. You should be able to gradually turn off parts of the ETL process and eventually disable it completely.
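A sketch of the kind of compatibility view mentioned above (all names invented): old clients keep their old shape while the data lives in the new normalized tables.

CREATE VIEW legacy_customer AS
SELECT c.customer_id AS CustID,
       c.full_name   AS CustName,
       a.city        AS City
FROM customer c
LEFT JOIN address a ON a.customer_id = c.customer_id;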
First, see how tightly the code is coupled to the DB. If it is all mixed together with no DAO layer, you shouldn't think about a rewrite; but if there is a DAO layer, then it would be time to rewrite that layer and the DB along with it. If possible, build the migration tool around the two DAOs.
But my guess is there is no DAO, so you need to find which areas of the code you are going to be changing and which parts of the DB those relate to; hopefully you can cut it up into smaller parts that can be updated as you maintain. The biggest deal is to get FKs in there and to start checking for proper indexes; there is a good chance they aren't being done correctly.
I wouldn't worry too much about naming until the rest of the DB is under control. As for the NULLs: if the program chokes on a value being NULL, don't let it be NULL; but if the program can handle it, I wouldn't worry about it at this point. Further down the line, if the program is supplying a default value, move that into the DB, but that is a long way off from the sound of things.
Do something about the varchars sooner rather than later. If anything, make that the first pure background fix to the program.
The other thing to do is to estimate the effort of each area's change and then add that price to the cost of new development on that section of code. That way you can fix the parts as you add new features.