NHibernate-friendly database design for dynamically recreated tables - nhibernate

I have an application design question for the NHibernate gurus out there. My application tracks a list of documents submitted and displays data parsed from those documents to the public as a searchable transaction list.
The people/entities submitting these documents may (at any time) replace a previous version of a document with an updated version. Only transactions from the most recent version of each document should be displayed in the searchable transaction list.
The front end of this application uses NHibernate for data access to perform the display and search of the transactions.
My initial idea is to put a trigger on my table of documents that will drop and recreate the table of transactions after any insert, update, or delete of the documents table using a stored proc.
There seem to be two shortcomings with this idea:
1) Since the stored proc will take maybe a minute to run to regenerate the transaction list, there will be a period of time where the transaction list table will be incomplete or not even exist.
2) The data that NHibernate will be reading will become invalid after a transaction table rebuild, and any cached information will be invalid.
A few alternatives that I have contemplated are:
1) using views as the 'tables' NHibernate uses to read from the database, and updating those views to point to the new data tables after a successful rebuild is complete (a rough mapping sketch follows below). This doesn't resolve the possible caching problem, though.
2) do the transaction table rebuild through the NHibernate session, though I can't figure out how the mechanics of this would work. NHibernate would need to know about the 'live' set of transactions as well as the 'pending' set of transactions, and the pending set of transactions would need to be mapped to some dynamically named table(s).
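To make alternative 1 concrete, here is roughly what I have in mind, sketched with NHibernate mapping-by-code; the class, view and column names are just placeholders, and the eviction call is how I'd deal with stale cached data after a rebuild:

using NHibernate;
using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

// Read-only entity mapped against a view, so the underlying table can be
// swapped out behind the view after each rebuild.
public class Transaction
{
    public virtual int Id { get; protected set; }
    public virtual string DocumentNumber { get; protected set; }
    public virtual decimal Amount { get; protected set; }
}

public class TransactionMap : ClassMapping<Transaction>
{
    public TransactionMap()
    {
        Table("vw_CurrentTransactions");   // a view, not the real table
        Mutable(false);                    // display/search only, never written
        Id(x => x.Id, m => m.Generator(Generators.Assigned));
        Property(x => x.DocumentNumber);
        Property(x => x.Amount);
    }
}

// After a successful rebuild, clear anything NHibernate may have cached.
public static class TransactionCache
{
    public static void Invalidate(ISessionFactory sessionFactory)
    {
        sessionFactory.Evict(typeof(Transaction));   // second-level cache
        sessionFactory.EvictQueries();               // query cache
    }
}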
Maybe I'm just architecting myself into a pretzel.
Anyone have any ideas what the NHibernate best practices for this sort of problem would be?

I recommend using Lucene.Net for indexing and searching those documents. You don't need a trigger or any stored procedure - Lucene will update all indexes automatically.
Here are some tutorials about Lucene:
Lucene.Net Best Practices
What are some good resources on using Lucene.Net?
http://www.lucenetutorial.com/lucene-in-5-minutes.html (it's for java version but the API is the same)
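Here is a rough idea of what indexing and searching the transactions could look like; the field names and index path are made up, and this assumes the Lucene.Net 3.x API:

using System;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

var dir = FSDirectory.Open(@"C:\indexes\transactions");
var analyzer = new StandardAnalyzer(Version.LUCENE_30);

// Index a transaction parsed from the latest version of a document.
using (var writer = new IndexWriter(dir, analyzer, IndexWriter.MaxFieldLength.UNLIMITED))
{
    var doc = new Document();
    doc.Add(new Field("documentId", "D-1001", Field.Store.YES, Field.Index.NOT_ANALYZED));
    doc.Add(new Field("payee", "Acme Corp", Field.Store.YES, Field.Index.ANALYZED));
    writer.AddDocument(doc);
    writer.Commit();
}

// Search the index.
using (var searcher = new IndexSearcher(dir, true))
{
    var parser = new QueryParser(Version.LUCENE_30, "payee", analyzer);
    var hits = searcher.Search(parser.Parse("acme"), 10);
    foreach (var scoreDoc in hits.ScoreDocs)
        Console.WriteLine(searcher.Doc(scoreDoc.Doc).Get("documentId"));
}

When a document is replaced with a newer version, you would delete its old entries (for example with writer.DeleteDocuments(new Term("documentId", "D-1001"))) and add the entries from the new version.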

Related

Create trigger upon each table creation in SQL Server 2008 R2

I need to create an Audit table that is going to track the actions (insert, update, delete) on my tables in the database and add a new row with the date, row ID, table name and a few more details, so I will know what action happened and when.
So basically from my understanding I need a trigger for each table which is going to track insert/update/delete and a trigger on the database which is going to track new table creation.
My main problem is understanding how to connect between those things so when a new table is being created a trigger will be created for that table which is going to track the actions and add new rows for the Audit table as needed.
Is it possible to make a DDL trigger for create_table and inside of it another trigger for insert / update / delete ?
What you're hoping for is not possible. And I'd strongly advise you to think about what you really want to achieve at a business level with auditing. It will yield a much simpler and more practical solution.
First up
...trigger on the database which is going to track new table creation.
I cannot stress enough how terrible this idea is. Who exactly has such unfettered access to your database that they can create tables without going through code review and QA? Which should, of course, be on the gated pathway towards production. Once you realise that schema changes should not happen ad hoc, it's patently obvious that you don't need triggers (which are by their very nature reactive) to do something because the schema changed.
Even if you could write such triggers: it's at a meta-programming level that simply isn't worth the effort of trying to foresee all possible permutations.
Better options include:
Requirements assessment and acceptance: This is new information in the system. What are the audit requirements?
Design review: New table; does it need auditing?
Test design: How do you test the audit requirements?
Code Review: You've added a new table. Does it need auditing?
Not to mention features provided by tools such as:
Source Control.
Db deployment utilities (whether home-grown or third party).
Part two
... a trigger will be created for that table which is going to track the actions and add new rows for the Audit table as needed.
I've already pointed out why doing the above automatically is a terrible idea. Now I'm going a step further to point out that doing the above at all is also a bad idea.
It's a popular approach, and I'm sure to get some flack from people who've nicely compartmentalised their particular flavour of it; swearing blind how much time it "saves" them. (There may even be claims to it being a "business requirement"; which I can assure you is more likely a misstated version of the real requirement.)
There are fundamental problems with this approach:
It's reactive instead of proactive. So it usually lacks context.
You'll struggle to audit attempted changes that get rolled back. (Which can be a nightmare for debugging and usually violates real business audit requirements.)
Interpreting the audit data will be a nightmare because it's just raw data. The information is lost in the detail.
As columns are added/renamed/deleted your audit data loses cohesion. (This is usually the least of problems though.)
These extra tables that always get updated as part of other updates can wreak havoc on performance.
Usually this style of auditing involves: every time a column is added to the "base" table, it's also added to the "audit" table. (This ultimately makes the "audit" table very much like a poorly architected persistent transaction log.)
Most people following this approach overlook the significance of NULLable columns in the "base" tables.
I can tell you from first-hand experience, interpreting such audit trails in any but the simplest of cases is not easy. The amount of time wasted is ridiculous: investigating issues, training others to be able to interpret them correctly, writing utilities to try to make working with these audit trails less painful, painstakingly documenting findings (because the information is not immediately apparent in the raw data).
If you have any sense of self-preservation you'll heed my advice.
Make it great
(Sorry, couldn't resist.)
A better approach is to proactively plan for what needs auditing. Push for specific business requirements. Note that different cases may need different auditing techniques:
If user performs action X, record A details about the action for legal traceability.
If user attempts to do Y but is prevented by system rules, record B details to track rule system integrity.
If user fails to log in, record C details for security purposes.
If system is upgraded, record D details for troubleshooting.
If certain system events occur, record E details ...
The important thing is that once you know the real business requirements, you won't be saying: "Uh, let's just track everything. It might be useful." Instead you'll:
Be able to produce a cleaner more appropriate and reliable design for each distinct kind of auditing.
Be able to test that it behaves as required!
Be able to use the audit data more easily whenever it's needed.
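As a rough illustration of what those requirement-driven cases can look like in code, each concern gets its own narrow, well-understood record; the types and fields below are invented examples, not part of any prescribed design:

using System;

// Case: user performed action X; recorded for legal traceability.
public class PriceOverrideAudit
{
    public Guid OrderId { get; set; }
    public decimal OldPrice { get; set; }
    public decimal NewPrice { get; set; }
    public string ApprovedBy { get; set; }
    public DateTime OccurredAtUtc { get; set; }
}

// Case: user failed to log in; recorded for security purposes.
public class FailedLoginAudit
{
    public string UserName { get; set; }
    public string SourceIpAddress { get; set; }
    public DateTime AttemptedAtUtc { get; set; }
}

Because each record is designed for one purpose, it can be tested against its requirement and read without reverse-engineering a generic change log.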

Programmatically purge document deletion

I have a database with an agent that periodically deletes (via a Java agent, using the "removePermanently" method) all documents in a view and re-creates them.
After some months, I've noticed that the database size has increased considerably.
Showing database information through this command
sh database <dbpath>
it turns out that I have a lot of deleted documents (I suppose they are deletion stubs):
Document Type    Live     Deleted
Documents        1,922    817,378
After compacting the database, 80% of the space was recovered.
Is there a way to programmatically purge the deletion stubs for good, to avoid this "database explosion"? Or, is there a way to correctly manage this scenario (deletion and creation of documents)?
Don't delete the documents! Re-use them. That's the best answer. Seriously.

Take the existing documents, clear the fields and set Form := "Obsolete". Modify the selection formula for all your views by appending & Form != "Obsolete". Create a new hidden view called "Obsolete" with selection formula Form = "Obsolete", and instead of creating new documents, change your code to go to the Obsolete view, grab an available document and set new field values (including changing the Form field). Only create new documents if there are not enough available in the Obsolete view.

Any performance that you lose by doing this, which really should be minimal with the number of documents that you seem to have, will be more than offset by what you will gain by avoiding the growth and fragmentation of the NSF file that you are creating by doing all the deletions and creating new documents.
If, however, there's no possible way for you to do that -- maybe some third party tool that is outside of your control is creating the documents -- then it's important to know if the database you are talking about is replicated. If it is replicated, then you must be very careful, because purging deletion stubs before all replicas are brought up to date will cause deleted documents to "come back to life" if a replica that has been off-line since before the delete occurred comes back on-line.
If the database is not replicated at all, or is reliably replicated across all replicas quickly, then you can reduce the purge interval. Go to the Replication Settings dialog and find the checkbox labeled "Remove documents not modified in the last __ days". Do not check the box, but enter a small number into the number of days. The purge interval for deletion stubs will be set to 1/3 of this number. So if you set it to 3, the effect will be that stubs are kept for 1 day and then purged, giving you 24 hours to assure that all replicas are up to date. If you need more, set the interval higher, maintaining the 3x multiple as needed.

If a server is down for an extended period of time (longer than your purge interval), then adjust your operations procedures so that you will be sure to disable replication of the database to that server before it comes back on line, and the replica can be deleted and recreated. Be aware, though, that user replicas pose the same problem, and it's not really possible to control or be aware of user replicas that might go off-line for longer than the purge interval.

In any case, remember: do not check the box. To reduce the purge interval for deletion stubs only, just reduce the number.
Apart from this, the only way to programmatically delete deletion stubs requires use of the Notes C API. It's possible to call the required routines from LotusScript, but in my experience once the total number of stubs plus documents gets too high you will likely run into an error and may have to create and deploy a new non-replica copy of the database to get past it. You can find code along with my explanation in the answer to this previous question.
I have to second Richard's recommendation to reuse documents. I recently had a similar project, and started the way you did with deleting everything and importing half a million records every night. Deletion stubs and the growth of the FT index quickly became problems, eating up huge amounts of disk space and slowing performance significantly. I tried to manage the deletion stubs, but I was clearly going against the grain of Domino's architecture.
I read Richard's suggestion here, and adopted that approach. Here's what I did:
1) create 2 views based on form - one for 'active' records, and another for 'inactive' records
2) start the agent by setting autoupdate = false for both views
3) use stampall("form", "inactive") to change all of the active records to inactive
4) manually refresh the 2 views using notesview.refresh()
5) start importing data. for each record, pull a document out of the pool of inactive records (by walking the 'inactive' view)
6) if I run out of inactive records in the pool, create new ones
7) when import is complete, manually refresh the views again
8) use db.createftindex(0, true) to re-create the FT index
the code is really not that complex, and it runs in about the same amount of time as my original approach, if not faster.
Thanks Richard!
Also, look at the advanced db properties - several things there that will help optimize the db.
It sounds like you are "refreshing" the contents of the database by periodically deleting all the documents and creating new ones from some other source. Cut that out. If the data are in the Notes database already, leave the document alone. What you're doing is very inefficient.

Is it safe to insert into crm database using sql?

We need to insert data (8k records) into a CRM entity; the data will come from other CRM entities. Currently we are doing it through code, but it takes too much time (hours). I was wondering whether using SQL to insert directly into the CRM database would be a lot easier and take only minutes. But before moving forward I have a few questions:
Is it safe to insert directly into CRM Database, using SQL?
What is the best practice for insert data into CRM using SQL?
What things should I consider before trying it?
EDIT:
4: How do I increase the insert performance?
No, it is not. It is considered unsupported.
Don't do it.
Rollup 12 was just released and contains a new API feature. There is now an ExecuteMultipleRequest which can be used for batched bulk imports. See http://msdn.microsoft.com/en-us/library/jj863631.aspx
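A rough sketch of a batched import with ExecuteMultipleRequest; the batch-size handling and the way the records are supplied are illustrative, not prescribed by the SDK:

using System.Collections.Generic;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

public static class BulkImport
{
    public static void BulkCreate(IOrganizationService service, IEnumerable<Entity> records)
    {
        var batch = new ExecuteMultipleRequest
        {
            Settings = new ExecuteMultipleSettings
            {
                ContinueOnError = true,    // keep going if one record fails
                ReturnResponses = false    // skip per-record responses for speed
            },
            Requests = new OrganizationRequestCollection()
        };

        foreach (var record in records)
        {
            batch.Requests.Add(new CreateRequest { Target = record });

            if (batch.Requests.Count == 1000)   // ExecuteMultiple batch limit
            {
                service.Execute(batch);
                batch.Requests.Clear();
            }
        }

        if (batch.Requests.Count > 0)
            service.Execute(batch);
    }
}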
It shouldn't take hours to insert 8000 records. It would help to see your code, but here are some things to consider to improve performance:
Reuse your IOrganizationService. I've found a 10x increase in performance by reusing an IOrganizationService, rather than creating a new one for each record that is being updated.
Use multi-threading. You have to be careful with this one, because it could lead to worse performance if the function that checks whether the entity exists is your bottleneck.
Tweak your exists function. If the check for the entity existing is taking a long time, consider pulling back the entire table and storing it in memory (assuming it's not ridiculously huge). This would remove 8000 separate select statements (see the sketch below).
Turn off plugins that may be degrading performance. If you have any plugins registered on the entity, see if performance increases if you disable them during the import.
Create a new "How do I increase the insert performance?" question with your code posted for additional help.
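Here is a rough sketch of the first and third points: one shared IOrganizationService and the existence check preloaded into memory. The entity and attribute names are invented for illustration:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public static class ImportHelper
{
    public static void Import(IOrganizationService service, IEnumerable<Entity> incoming)
    {
        // One query up front instead of ~8000 individual "does it exist?" selects.
        // (Add paging if the target table can exceed the 5000-record page size.)
        var query = new QueryExpression("new_transaction")
        {
            ColumnSet = new ColumnSet("new_externalkey")
        };
        var existingKeys = new HashSet<string>(
            service.RetrieveMultiple(query).Entities
                   .Select(e => e.GetAttributeValue<string>("new_externalkey")));

        foreach (var record in incoming)
        {
            var key = record.GetAttributeValue<string>("new_externalkey");
            if (existingKeys.Contains(key))
                continue;                 // already there, skip the create

            service.Create(record);       // same IOrganizationService reused throughout
            existingKeys.Add(key);
        }
    }
}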
I have not used the CRM application you are referring to, but if you bypass the code you might bypass certain restrictions or even triggers that the code has in place based on certain values sent in.
For example, if you sent a number in through the code, it might perform some mathematical function on that number and add it to some other value and end up storing two values in the database (one value for the number you entered, and another value representing the total including the newly added one).
So if you had just inserted the one value straight into the database, then the total wouldn't get updated with it.
That is just a hypothetical scenario. You may not run into any problems like that or any others, but there could be the chance.
Well, I found this article very helpful. It says:
Direct SQL writes to the CRM database are not supported. The reason for this is that creating a record in the CRM database is much more than just an INSERT INTO… statement. The first step of optimizing is to understand what happens behind the scenes and can affect the speed:
1. CRM entities usually consist of 2 physical tables.
2. Cascade rules/Sharing: If the created record has any relationships with cascade rules, the web service will handle the cascades automatically. For example, cascaded sharing will lead to additional records being created in the PrincipalObjectAccess table. In the case of one-time migrations, disabling the cascade rules while the migration runs can save a lot of time.
3. Record Ownership: If you are inserting records, make sure you are setting the owner as an attribute for create and not as an additional owner assign request. Assigning owner actually takes
4. Money/Time: The web service handles currencies and time zones.
5. Workflows/Plugins: If the system has any custom workflows and/or plugins, I strongly recommend pausing them for the duration of the migration.

How to use database triggers in a real world project?

I've learned a lot about triggers and active databases in the last weeks, but I have some questions about real-world examples of these.
At work we use the Entity Framework with ASP.NET and an MSSQL Server. We just use the auto-generated constraints and no triggers.
When I heard about triggers I asked myself the following questions:
Which tasks can be performed by triggers?
e.g.: Generation of reporting data: currently the data for the reports is created in VB, but I think a trigger could handle this as well. The creation in VB takes a lot of time, and the user should not need to wait for it because it's not necessary for his work.
Is this an example for a perfect task for a trigger?
How do OR-mappers handle trigger-manipulated data?
e.g.: Do OR-mappers recognize when a trigger has manipulated data? The Entity Framework seems to cache a lot of data, so I'm not sure whether it reads the updated data if a trigger manipulates the data after the insert/update/delete from the framework is processed.
How much constraint handling should be within the database?
e.g.: Sometimes constraints in the database seem much easier and faster than in the layer above (vb.net, ...), but how do you throw exceptions to the upper layer so they can be handled by the OR-mapper?
Is there a good solution for handling SQL exceptions (from triggers) in any OR-mapper?
Thanks in advance
When you hear about a new tool or feature it doesn't mean you have to use it everywhere. You should think about the design of your application.
Triggers are used a lot when the logic is in the database, but if you build an ORM layer on top of your database you want the logic in the business layer using your ORM. It doesn't mean you should not use triggers. It means you should use them with the ORM in the same way as stored procedures or database functions - only when it makes sense or when it improves performance. If you push a lot of logic into the database you can throw away the ORM and perhaps your whole business layer and use a two-layered architecture where the UI talks directly to the database, which does everything you need - such an architecture is considered "old".
When using an ORM, a trigger can be helpful for some DB-generated data like audit columns or custom sequences of primary key values.
Current ORMs mostly don't like triggers - they can only react to changes to the currently processed record, so for example if you save an Order record and your update trigger modifies all ordered items, there is no automatic way to let the ORM know about that - you must reload the data manually. In EF all data modified or generated in the database must be marked with StoreGeneratedPattern.Identity or StoreGeneratedPattern.Computed - EF fully follows the pattern where logic is either in the database or in the application. Once you define that a value is assigned in the database, you cannot change it in the application (it will not be persisted).
Your application logic should be responsible for data validation and should call persistence only if validation passes. You should avoid unnecessary transactions and roundtrips to the database when you know upfront that the transaction will fail.
I use triggers for two main purposes: auditing and updating modification/insertion times. When auditing, the triggers push data to related audit tables. This doesn't affect the ORM in any way as those tables are not typically mapped in the main data context (there's a separate auditing data context used when needed to look at audit data).
When recording/modifying insert/modification times, I typically mark those properties in the model as [DatabaseGenerated( DatabaseGenerationOptions.Computed )]. This prevents any values set in the data layer from being persisted back to the DB, and allows the trigger to enforce setting the DateTime fields properly.
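For reference, a minimal sketch of that pattern; note that in current EF data-annotation releases the enum is spelled DatabaseGeneratedOption, and the trigger that actually fills these columns lives in the database:

using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;

public class Order
{
    [Key]
    public int Id { get; set; }

    public string CustomerName { get; set; }

    // Filled in by a database trigger; EF reads these back but never writes them.
    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public DateTime CreatedOnUtc { get; set; }

    [DatabaseGenerated(DatabaseGeneratedOption.Computed)]
    public DateTime ModifiedOnUtc { get; set; }
}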
It's not a hard and fast rule that I manage auditing and these dates in this way. Sometimes I need more auditing information than is available in the database itself and handle auditing in the data layer instead. Sometimes I want to force the application to update dates/times (since they may need to be the same over several rows/tables updated at the same time). In those cases I might make the field nullable, but [Required] in the model to force a date/time to be set before the model can be persisted.
The old Infomodeler/Visiomodeler ORM (not what you think - it was Object Role Modeling) provided an alternative when generating the physical model. It would provide all the referential integrity with triggers. For two reasons:
Some dbmses (notably Sybase/SQL Server) didn't have declarative RI yet, and
It could provide much more finely grained integrity - e.g. "no more than two children" or "sons or daughters but not both" or "mandatory son or daughter but not both".
So trigger logic related to the model in the same way that any RI constraint does. In SQL Server it handled violations with RAISERROR.
A conceptual issue with triggers is that they are essentially context-free - they always fire regardless of context (at least not without great pain), and you might be better off including their logic with the rest of the context-specific logic. So global domain constraints are the only place I find them useful - which I guess is another general way to identify "referential integrity".
Triggers are used to maintain integrity and consistency of data (by using constraints), help the database designer ensure certain actions are completed and create database change logs.
For example, given numeric input, if you want the value to be constrained to, say, less than 100, you could write a trigger that fires for every row on update or insert, and raises an application error if the value of that column does not meet that constraint.
Suppose you want to log historical changes to a table. You could create a trigger that fires AFTER each INSERT, UPDATE, and DELETE, which also inserts the data into a logging table. If you need to execute custom logic, then triggers may appeal to you.

Audit Logging Strategies

I am trying to decide on the best method for audit logging within my application. The main reason for the log is reporting the sequence of events (changes).
I have a hierarchy of objects, and I need to create reports when something changes on any part of that hierarchy, at a later date.
I think that I have three options:
Have a log for each table, therefore matching the hierarchy of objects, then create a view for the report.
Flatten the hierarchy and de-normalise the table, making reporting easier - simple select statement.
Have one log table and have a record for each change making reporting harder but more flexible to changes.
I am currently leaning towards option 1.
I have to weigh in on this subject even though it's old.
It is usually a poor idea to have only one audit table as you will create locking problems in the database as everything hits that table. Use separate audit tables for each table.
It is also a poor idea to have the application do the auditing. Auditing must be done at the database level or you risk losing some of the information. Data does not change only from applications in most databases; no one is going to change the prices of all their products one at a time from the user interface when you need a 10% increase to all 10,000,000 of them. Auditing should capture all changes, not just some of them. This should be done in a trigger in most databases (SQL Server 2008 has a built-in auditing function).

Some of the worst potential changes (employees committing fraud or wanting to maliciously destroy data) also frequently come from places other than the application, especially if you allow table-level access to users (which you should not do in any financial database or one that contains personal information). Auditing from the application won't catch this. Developers often forget that in protecting their data, outside sources are not the only threat.
An audit log is basically a chronological list of events that occurred, who performed these events, and what the events were.
I think a flat view would be better as it can be easily ordered and queried. So I'm leaning more towards your option #2/#3.
Include things like the transaction type, the time, the user id, a description of what's changed, and other pertinent information related to your product.
You can also add things to your product over time and you won't need to continually modify your audit log module.
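A sketch of what one such flat audit record might look like; the field names are just examples, not a required schema:

using System;

// One row per change; easy to order chronologically and query across the hierarchy.
public class AuditEntry
{
    public long Id { get; set; }
    public DateTime OccurredAtUtc { get; set; }
    public string UserId { get; set; }
    public string TransactionType { get; set; }  // e.g. "Insert", "Update", "Delete"
    public string EntityName { get; set; }       // which part of the hierarchy changed
    public string EntityKey { get; set; }
    public string Description { get; set; }      // human-readable summary of the change
}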
If it's for auditing purposes I'd use a true append-only medium rather than a table/tables in the same db.
You suggest it's for change history purposes - in which case I would restructure your application/db to record the actual events in the first place rather than just the current state.
I would go with (2) and (3): create a single table for all Audit entries.
A flat view is good, provided the extra work flattening does not impact performance.
You could look into an AOP framework to help with this. It would allow you to inject logging functionality at the beginning or end of any/all methods. If you go down this road, it might help define what would make sense for storing the log data.
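As one concrete flavour of that idea, here is a rough sketch using Castle DynamicProxy-style interception; the interface, the service behind it and the logging sink are all hypothetical:

using System;
using Castle.DynamicProxy;

// Hypothetical business interface whose calls we want in the audit log.
public interface IOrderService
{
    void ChangePrice(int orderId, decimal newPrice);
}

public class AuditInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        // Record the event before and after the real method runs.
        Console.WriteLine($"{DateTime.UtcNow:o} calling {invocation.Method.Name}");
        invocation.Proceed();
        Console.WriteLine($"{DateTime.UtcNow:o} completed {invocation.Method.Name}");
    }
}

public static class AuditProxyFactory
{
    private static readonly ProxyGenerator Generator = new ProxyGenerator();

    public static IOrderService Wrap(IOrderService real)
    {
        // Every call on the returned proxy passes through AuditInterceptor.
        return Generator.CreateInterfaceProxyWithTarget(real, new AuditInterceptor());
    }
}

Wrapping the real service once at composition time means the logging code stays in one place, which also makes it easier to decide later exactly what gets stored for each logged call.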