Database Design for Revisions? - sql

We have a requirement in our project to store all revisions (change history) of the entities in the database. Currently we have two design proposals for this:
e.g. for an "Employee" entity
Design 1:
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- Holds the Employee Revisions in Xml. The RevisionXML will contain
-- all data of that particular EmployeeId
"EmployeeHistories (EmployeeId, DateModified, RevisionXML)"
Design 2:
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- In this approach we have basically duplicated all the fields on Employees
-- in the EmployeeHistories and storing the revision data.
"EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName,
LastName, DepartmentId, .., ..)"
Is there any other way of doing this?
The problem with Design 1 is that we have to parse the XML every time we need to access the data. This slows things down and also adds limitations, such as not being able to join on the revision data fields.
The problem with Design 2 is that we have to duplicate each and every field for all entities (we have around 70-80 entities for which we want to maintain revisions).

I think the key question to ask here is 'Who / What is going to be using the history'?
If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past...
Create a table called 'AuditTrail' or something that has the following fields...
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[OldValue] [varchar](5000) NULL,
[NewValue] [varchar](5000) NULL
You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table.
You can then add a trigger to every table to catch any insert / update that happens and create an entry in this table for each field that has changed. Because each update / insert also supplies the 'LastUpdatedByUserID', you can read that value in the trigger and use it when adding to the audit table.
We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields.
I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions.
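For reference, one generated trigger might look roughly like this (a T-SQL sketch; the Employees column names are assumptions, and a real generator would emit one such INSERT per audited column):
-- Hypothetical generated audit trigger for an Employees table.
CREATE TRIGGER trg_Employees_Audit ON Employees
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- One block like this per audited field.
    INSERT INTO AuditTrail (UserID, EventDate, TableName, RecordID, FieldName, OldValue, NewValue)
    SELECT i.LastUpdatedByUserID, GETDATE(), 'Employees',
           CAST(i.EmployeeId AS varchar(20)), 'FirstName', d.FirstName, i.FirstName
    FROM inserted i
    JOIN deleted d ON d.EmployeeId = i.EmployeeId
    WHERE ISNULL(d.FirstName, '') <> ISNULL(i.FirstName, '');
END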
Just a thought!

Do not put it all in one table with an IsCurrent discriminator attribute. This just causes problems down the line, requiring surrogate keys and all sorts of other headaches.
Design 2 does have problems with schema changes. If you change the Employees table you have to change the EmployeeHistories table and all the related sprocs that go with it, potentially doubling your schema change effort.
Design 1 works well and, if done properly, does not cost much in terms of a performance hit. You could use an XML schema and even XML indexes to get over possible performance problems. Your comment about parsing the XML is valid, but you could easily create a view using XQuery, which you can include in queries and join to. Something like this...
CREATE VIEW EmployeeHistory
AS
SELECT EmployeeId,
       RevisionXML.value('(/employee/FirstName)[1]', 'varchar(50)') AS FirstName,
       RevisionXML.value('(/employee/LastName)[1]', 'varchar(100)') AS LastName,
       RevisionXML.value('(/employee/DepartmentId)[1]', 'int') AS DepartmentId
FROM EmployeeHistories
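If parsing the XML in such a view ever becomes a bottleneck, SQL Server's XML indexes are worth a look; a sketch (this assumes EmployeeHistories has a clustered primary key, which XML indexes require):
-- Primary XML index over the revision payload.
CREATE PRIMARY XML INDEX IX_EmployeeHistories_RevisionXML
ON EmployeeHistories (RevisionXML);
-- Optional secondary index tuned for the .value() lookups used in the view.
CREATE XML INDEX IX_EmployeeHistories_RevisionXML_Value
ON EmployeeHistories (RevisionXML)
USING XML INDEX IX_EmployeeHistories_RevisionXML FOR VALUE;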

The History Tables article on the Database Programmer blog might be useful - it covers some of the points raised here and discusses the storage of deltas.
Edit
In the History Tables essay, the author (Kenneth Downs), recommends maintaining a history table of at least seven columns:
Timestamp of the change,
User that made the change,
A token to identify the record that was changed (where the history is maintained separately from the current state),
Whether the change was an insert, update, or delete,
The old value,
The new value,
The delta (for changes to numerical values).
Columns which never change, or whose history is not required, should not be tracked in the history table to avoid bloat. Storing the delta for numerical values can make subsequent queries easier, even though it can be derived from the old and new values.
The history table must be secure, with non-system users prevented from inserting, updating or deleting rows. Only periodic purging (if permitted by the use case) should be supported, to reduce overall size.
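A minimal sketch of such a table (T-SQL; a single tracked numeric column, Salary, is shown, and all names are illustrative rather than taken from the essay):
CREATE TABLE EmployeeSalaryHistory (
    ChangedAt   datetime      NOT NULL,  -- timestamp of the change
    ChangedBy   int           NOT NULL,  -- user that made the change
    EmployeeId  int           NOT NULL,  -- token identifying the changed record
    ChangeType  char(1)       NOT NULL,  -- insert, update or delete ('I'/'U'/'D')
    OldSalary   decimal(10,2) NULL,      -- the old value
    NewSalary   decimal(10,2) NULL,      -- the new value
    SalaryDelta AS (NewSalary - OldSalary)  -- the delta, derivable but handy
);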

Avoid Design 1; it is not very handy once you need to, for example, roll back to old versions of records - whether automatically or "manually" via an administrator's console.
I don't really see the disadvantages of Design 2. I think the second, History, table should contain all columns present in the first, Records, table. E.g. in MySQL you can easily create a table with the same structure as another table (CREATE TABLE X LIKE Y). And when you are about to change the structure of the Records table in your live database, you have to use ALTER TABLE commands anyway - it is no great effort to run those commands for your History table as well.
Notes
Records table contains only the latest revision;
History table contains all previous revisions of records in Records table;
History table's primary key is a primary key of the Records table with added RevisionId column;
Think about additional auxiliary fields like ModifiedBy - the user who created a particular revision. You may also want a DeletedBy field to track who deleted a particular revision.
Think about what DateModified should mean - either when this particular revision was created, or when this particular revision was replaced by another one. The former requires the field to be in the Records table too, and seems more intuitive at first sight; the latter, however, is more practical for deleted records (the date when that revision was deleted). If you go for the first interpretation, you would probably also need a DateDeleted field (only if you need it, of course). It depends on what you actually want to record.
Operations in Design 2 are very trivial:
Modify
copy the record from the Records table to the History table, give it a new RevisionId (if one is not already present in the Records table), and handle DateModified (depending on how you interpret it; see notes above)
go on with the normal update of the record in the Records table
Delete
do exactly the same as in the first step of Modify operation. Handle DateModified/DateDeleted accordingly, depending on the interpretation you have chosen.
Undelete (or rollback)
take highest (or some particular?) revision from History table and copy it to the Records table
List revision history for particular record
select from History table and Records table
think what exactly you expect from this operation; it will probably determine what information you require from DateModified/DateDeleted fields (see notes above)
If you go for Design 2, all the SQL commands needed to do this will be very, very easy, as will the maintenance! It may be even easier if you use the auxiliary columns (RevisionId, DateModified) in the Records table as well - keeping both tables at exactly the same structure (except for unique keys)! This allows for simple SQL commands that are tolerant to any data structure change:
insert into EmployeeHistory select * from Employee where ID = XX
Don't forget to use transactions!
As for scaling, this solution is very efficient: you don't transform any data from XML back and forth, you just copy whole table rows - very simple queries, using indices - very efficient!
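To make the Modify operation concrete, a minimal sketch assuming MySQL and the shared-structure layout above (the ID, the LastName change and the RevisionId bump are all illustrative):
START TRANSACTION;
-- Step 1 of Modify: archive the current row. Identical table structures
-- mean SELECT * keeps working after schema changes.
INSERT INTO EmployeeHistory
SELECT * FROM Employee WHERE ID = 42;
-- Step 2 of Modify: apply the change, bumping the auxiliary columns.
UPDATE Employee
SET LastName = 'Smith',
    RevisionId = RevisionId + 1,
    DateModified = NOW()
WHERE ID = 42;
COMMIT;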

We have implemented a solution very similar to the one Chris Roberts suggests, and it works pretty well for us.
The only difference is that we only store the new value. The old value is, after all, stored in the previous history row:
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[NewValue] [varchar](5000) NULL
Let's say you have a table with 20 columns. This way you only have to store the exact column that changed, instead of having to store the entire row.
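With only NewValue stored, the old value of any change is simply the most recent earlier entry for the same record and field; a lookup sketch (T-SQL; the table name AuditTrail is reused from the answer above, and the filter values are illustrative):
-- Old value of a change = NewValue of the previous entry
-- for the same table / record / field.
SELECT TOP 1 NewValue AS OldValue
FROM AuditTrail
WHERE TableName = 'Employees'
  AND RecordID  = '42'
  AND FieldName = 'FirstName'
  AND EventDate < '2009-06-01'  -- timestamp of the change being inspected
ORDER BY EventDate DESC;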

If you have to store history, make a shadow table with the same schema as the table you are tracking and a 'Revision Date' and 'Revision Type' column (e.g. 'delete', 'update'). Write (or generate - see below) a set of triggers to populate the audit table.
It's fairly straightforward to make a tool that will read the system data dictionary for a table and generate a script that creates the shadow table and a set of triggers to populate it.
Don't try to use XML for this; XML storage is a lot less efficient than the native database table storage that this type of trigger uses.
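As a starting point for such a generator, the tracked table's column list can be pulled from the standard data dictionary views (ANSI INFORMATION_SCHEMA; a sketch only, not a complete generator):
-- Enumerate the columns of the tracked table; a generator would emit the
-- shadow table's CREATE TABLE from this, add the 'Revision Date' and
-- 'Revision Type' columns, and then emit the triggers.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'Employees'
ORDER BY ORDINAL_POSITION;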

Ramesh, I was involved in the development of a system based on the first approach.
It turned out that storing revisions as XML leads to huge database growth and slows things down significantly.
My approach would be to have one table per entity:
Employee (Id, Name, ... , IsActive)
where IsActive is a sign of the latest version
If you want to associate some additional info with revisions, you can create a separate table
containing that info and link it to the entity tables using a PK/FK relation.
This way you can store all versions of employees in one table.
Pros of this approach:
Simple database structure
No conflicts, since the table becomes append-only
You can rollback to previous version by simply changing IsActive flag
No need for joins to get object history
Note that the Id can then no longer be unique on its own, so it cannot serve as the primary key by itself.
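A sketch of the two core operations under this scheme (generic SQL; RevisionId is a hypothetical column identifying the target version, and the values are illustrative):
-- Current version of every employee (the table is append-only).
SELECT * FROM Employee WHERE IsActive = 1;
-- Rollback to an earlier version: just move the IsActive flag.
UPDATE Employee SET IsActive = 0 WHERE Id = 42 AND IsActive = 1;
UPDATE Employee SET IsActive = 1 WHERE Id = 42 AND RevisionId = 7;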

The way that I've seen this done in the past is to have
Employees (EmployeeId, DateModified, < Employee Fields > , boolean isCurrent );
You never "update" this table (except to change the value of isCurrent); you just insert new rows. For any given EmployeeId, only one row can have isCurrent = 1.
The complexity of maintaining this can be hidden behind views and "instead of" triggers (in Oracle; I presume other RDBMSs have similar things). You can even go to materialized views if the tables are too big to be handled by indexes.
This method is ok, but you can end up with some complex queries.
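For concreteness, a sketch of how an update works under this scheme (generic SQL; in practice the pair of statements would be wrapped in a transaction or in the "instead of" trigger mentioned above, and all values are illustrative):
-- Retire the current row...
UPDATE Employees SET isCurrent = 0
WHERE EmployeeId = 42 AND isCurrent = 1;
-- ...and insert the new version as the current one.
INSERT INTO Employees (EmployeeId, DateModified, FirstName, LastName, isCurrent)
VALUES (42, CURRENT_TIMESTAMP, 'Bob', 'Smith', 1);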
Personally, I'm pretty fond of your Design 2 way of doing it, which is how I've done it in the past as well. It's simple to understand, simple to implement and simple to maintain.
It also creates very little overhead for the database and application, especially when performing read queries, which is likely what you'll be doing 99% of the time.
It would also be quite easy to automate the creation of the history tables and the triggers that maintain them (assuming it would be done via triggers).

Revisions of data is an aspect of the 'valid-time' concept of a Temporal Database. Much research has gone into this, and many patterns and guidelines have emerged. I wrote a lengthy reply with a bunch of references to this question for those interested.

I'm going to share my design, which differs from both of yours in that it requires one table per entity type. I find the best way to describe any database design is through an ERD; here's mine:
In this example we have an entity named employee. The user table holds your users' records, and entity and entity_revision are the two tables which hold the revision history for all the entity types in your system. Here's how this design works:
The two fields of entity_id and revision_id
Each entity in your system will have a unique entity id of its own. Your entity might go through revisions, but its entity_id will remain the same. You need to keep this entity id in your employee table (as a foreign key). You should also store the type of your entity in the entity table (e.g. 'employee'). Now, as for the revision_id: as its name suggests, it keeps track of your entity revisions. The best way I found for this is to use the employee_id as the revision_id. This means you will have duplicate revision ids across different types of entities, but that is no threat to me (I'm not sure about your case). The only important point is that the combination of entity_id and revision_id must be unique.
There's also a state field in the entity_revision table which indicates the state of the revision. It can have one of three states: latest, obsolete or deleted (not relying on the date of revisions helps a great deal to speed up your queries).
One last note on revision_id: I didn't create a foreign key connecting employee_id to revision_id, because we don't want to have to alter the entity_revision table for each entity type that we might add in the future.
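Since the ERD itself is an image, here is a rough DDL reconstruction of the tables described above (MySQL; any column not discussed in the text is my guess):
CREATE TABLE entity (
    entity_id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    type      VARCHAR(32)  NOT NULL,  -- e.g. 'employee'
    PRIMARY KEY (entity_id)
);
CREATE TABLE entity_revision (
    entity_id   INT UNSIGNED NOT NULL,  -- FK to entity
    revision_id INT UNSIGNED NOT NULL,  -- the employee_id of that version
    state       ENUM('latest','obsolete','deleted') NOT NULL,
    created_by  INT UNSIGNED NOT NULL,  -- FK to the user table
    created_at  DATETIME     NOT NULL,
    PRIMARY KEY (entity_id, revision_id)  -- the combination must be unique
);
CREATE TABLE employee (
    employee_id INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- doubles as revision_id
    entity_id   INT UNSIGNED NOT NULL,                 -- FK to entity
    name        VARCHAR(64)  NOT NULL,
    PRIMARY KEY (employee_id)
);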
INSERTION
For each employee that you want to insert into the database, you also add a record to entity and entity_revision. These last two records will help you keep track of by whom and when a record was inserted into the database.
UPDATE
Each update of an existing employee record is implemented as two inserts, one in the employee table and one in entity_revision. The latter tells you by whom and when the record was updated.
DELETION
To delete an employee, a record is inserted into entity_revision stating the deletion, and you're done.
As you can see, in this design no data is ever altered or removed from the database and, more importantly, each entity type requires only one table. Personally I find this design really flexible and easy to work with. But I'm not sure about you, as your needs might be different.
[UPDATE]
Now that newer MySQL versions support partitioning, I believe my design also offers some of the best performance. One can partition the entity table on the type field and entity_revision on its state field. This will boost SELECT queries considerably while keeping the design simple and clean.

If indeed an audit trail is all you need, I'd lean toward the audit table solution (complete with denormalized copies of the important columns of other tables, e.g., UserName). Keep in mind, though, that bitter experience indicates that a single audit table will be a huge bottleneck down the road; it's probably worth the effort to create individual audit tables for all your audited tables.
If you need to track the actual historical (and/or future) versions, then the standard solution is to track the same entity with multiple rows using some combination of start, end, and duration values. You can use a view to make accessing current values convenient. If this is the approach you take, you can run into problems if your versioned data references mutable but unversioned data.

If you want to do the first one, you might want to use XML for the Employees table too. Most newer databases allow you to query XML fields, so this is not always a problem. And it might be simpler to have one way to access employee data regardless of whether it's the latest version or an earlier one.
I would try the second approach, though. You could simplify it by having just one Employees table with a DateModified field. EmployeeId + DateModified would be the primary key, and you can store a new revision by just adding a row. That way, archiving older versions and restoring versions from the archive is easier too.
Another way to do this could be the data vault model by Dan Linstedt. I did a project for the Dutch statistics bureau that used this model, and it works quite well. But I don't think it's directly useful for day-to-day database use. You might get some ideas from reading his papers though.

How about:
EmployeeID
DateModified
and/or revision number, depending on how you want to track it
ModifiedByUserId
plus any other information you want to track
Employee fields
You make the primary key (EmployeeId, DateModified), and to get the "current" record(s) you just select MAX(DateModified) for each EmployeeId. Storing an IsCurrent flag is a very bad idea, because, first of all, it can be calculated, and secondly, it is far too easy for the data to get out of sync.
You can also make a view that lists only the latest records, and mostly use that while working in your app. The nice thing about this approach is that you don't have duplicates of the data, and you don't have to gather data from two different places (current in Employees, archived in EmployeesHistory) to get all the history, roll back, etc.
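That "latest records" view might look like this (a generic SQL sketch over the composite-key table described above; the view name is illustrative):
CREATE VIEW CurrentEmployees AS
SELECT e.*
FROM Employees e
JOIN (
    -- Latest modification date per employee.
    SELECT EmployeeId, MAX(DateModified) AS DateModified
    FROM Employees
    GROUP BY EmployeeId
) latest
  ON latest.EmployeeId = e.EmployeeId
 AND latest.DateModified = e.DateModified;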

If you want to rely on the history data (for reporting reasons), you should use a structure something like this:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds the Employee revisions in rows.
"EmployeeHistories (HistoryId, EmployeeId, DateModified, OldValue, NewValue, FieldName)"
Or global solution for application:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, OldValue, NewValue, FieldName)"
You can also save your revisions in XML; then you have only one record per revision. It will look like this:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, XMLChanges)"

We have had similar requirements, and we found that often the user just wants to see what has been changed, not necessarily roll any changes back.
I'm not sure what your use case is, but what we have done is create an Audit table that is automatically updated with changes to a business entity, including the friendly name of any foreign key references and enumerations.
Whenever the user saves their changes, we reload the old object, run a comparison, record the changes, and save the entity (all done in a single database transaction in case there are any problems).
This seems to work very well for our users and saves us the headache of having a completely separate audit table with the same fields as our business entity.

It sounds like you want to track changes to specific entities over time, e.g. ID 3, "bob", "123 main street", then another ID 3, "bob", "234 elm st", and so on - in essence, being able to spit out a revision history showing every address "bob" has been at.
The best way to do this is to have an "is current" field on each record, and (probably) a timestamp or FK to a date/time table.
Inserts then have to set "is current" and also unset "is current" on the previous current record. Queries have to filter on "is current", unless you want all of the history.
There are further tweaks to this if it's a very large table, or a large number of revisions are expected, but this is a fairly standard approach.

Related

Is it efficient to have 2 SQL tables with the same structure?

In inventory / production systems I usually implement a table structure similar to the following description...
--- Raw Item ---
id INT(10) UNSIGNED AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
description VARCHAR(128) NOT NULL,
ideal INT(10) UNSIGNED,
PRIMARY KEY(id)
And another table with the same fields for the processed items...
Next tables for clients and providers with similar structures.
Then tables for income orders and outcome orders with similar structures.
And finally, a table which establishes relationships between a certain processed item and the types of raw items (with quantities) required to produce a batch...
It performs well... but I wonder if it would be better to merge the similar tables, adding a field such as 'type'. I would appreciate some advice.
In my mind adding an artificial type to combine similar data is not 3NF. Go with separate tables unless you need the data in the same table.
A client and a provider have similar fields but they are two different things.
If an order migrates from PO to Processed, then it is the same thing and it would be appropriate to have a status flag. If you are moving data from one table to the other, then combining them with a flag is preferred.
That's a good question, and as with many good SQL questions the answer is "it depends".
IMHO it's OK to create similarly structured tables for different artifacts (with similar properties).
For example, you can have:
Owner
Id, Name
Pet
Id, Name
Tables with the same columns but different meanings.
Sure, you can put Items and RawItems in the same table with just a flag column to differentiate between them. You can even use a self-referencing FK to relate Items to RawItems. But how does that affect performance?
Well, as your table grows, the engine needs more time (resources, memory, CPU) to retrieve rows/data. If you double the rows... for most DBMSs, doubling the number of tables affects performance a lot less than doubling a table's rows.
It also affects future maintenance. If you later need to add a column for your RawItems but not for your Items, you end up wasting space.
"Merging" similar tables can make your schema harder to understand, not simpler.
I would imagine this would depend slightly on the storage engine you choose. See this link for a little more information: https://dev.mysql.com/doc/refman/5.1/en/storage-engines.html
Each one would offer you various degrees of performance improvements and lack of features in certain areas. I guess the real question is do you want to have to deal with multiple tables, or would it be easier to deal with a single table?
A simple solution to this would be to add something like a status column to the table. Then just change your queries to check against the status. It is a very simple, space-efficient way to merge both tables into one.
--- Raw Item ---
id INT(10) UNSIGNED AUTO_INCREMENT,
name VARCHAR(32) NOT NULL,
description VARCHAR(128) NOT NULL,
ideal INT(10) UNSIGNED,
status VARCHAR(9) NOT NULL,
PRIMARY KEY(id)
SELECT * from items WHERE status="PROCESSED";
Personally I would prefer this approach. It leaves you with only a single table of inventory, as opposed to having multiple tables cluttering up the database schema. Not to mention if it ever expands further (say, you have new, processed, and archived).
How are you going to use the data over time? If you need to combine all these tables together for reporting, one table is likely preferable, with partitioning if it is very large. If you need to move records from one table to the other as items move through a process, it is easier to check on the status of a process if it is all in one table.
Also, if the things are very different entities, they may have different related tables, and then combining them just muddies the waters, makes the database more difficult to understand, and reduces the effectiveness of your PK/FK relationships. In this case, separate tables make the most sense. It also makes the most sense if you think the data will diverge over time as currently planned features are added.
Take for instance a customer and a sales rep. They might have many of the same fields, but what they are related to will be very different, and you would not want a customer to be able to be put into the child table for a rep. So now you have to enforce the relationships with more than an FK. Further, if you have enough child tables, it makes deleting records difficult. The db has to check all the tables even if only half of them might apply to a particular record.
In one database I have seen, the original designer combined two things that were dissimilar but had the same fields at the parent level, and ended up with well over 100 FKs on that table. It was a nightmare to delete from that table, and never fast. And there were occasional data integrity problems where one type of record ended up in the wrong child tables.

Database Design: Stored Record Edit History (Temporal Data)

I want to store temporal information in a database. I have come up with the design below. Is this the best way to do it?
MasterTable
ID
DetailsTable
ID
MasterTableID
CreatedOn
Title
Content
While it works for my purposes, having a MasterTable with just an ID field just does not feel right; however, I can see no other way to link the Details records together.
Is there a cleaner / standard way to do this?
One idea would be to design the two tables as follows:
Entity table: EntityId - PK, Title, Content
EntityHistory table: EntityId - PK, Version - PK, CreatedOn, Title, Content
Some thoughts about:
Usually you'll need to work only with the current version of a row, so your queries should not have to take previous versions into account while joining data, etc. In the long term this could have a huge impact on performance: statistics will not be accurate, data selectivity can negatively impact index selection, etc.
In case you often work with both current and historical values, you can define a view as a union of the two tables.
How to manage adding a new version? Within a transaction, copy the values from Entity into EntityHistory (incrementing the version), then update the Entity row with the new values. Alternatively, you could define a trigger on the Entity table that does the trick behind the scenes.
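A sketch of that version bump inside one transaction (generic SQL; the EntityId and new values are illustrative):
BEGIN;
-- Archive the current state under the next version number.
INSERT INTO EntityHistory (EntityId, Version, CreatedOn, Title, Content)
SELECT EntityId,
       COALESCE((SELECT MAX(Version) FROM EntityHistory h
                 WHERE h.EntityId = e.EntityId), 0) + 1,
       CURRENT_TIMESTAMP, Title, Content
FROM Entity e
WHERE EntityId = 42;
-- Then overwrite the current row.
UPDATE Entity SET Title = 'New title', Content = '...' WHERE EntityId = 42;
COMMIT;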
Use a rowversion column: http://technet.microsoft.com/en-us/library/ms182776(v=sql.105).aspx
Just leave out the MasterTable.
Rows in your DetailsTable will still be "linked together", as you call it, by having the same ID column value.
Any other kind of useful "linking" you might want to do (e.g. linking a row to its immediate successor or predecessor) is not achieved by having that MasterTable anyway. It achieves nothing (unless you want it to hold IDs for which no Details exist, i.e. IDs that have never been used, which seems rather unlikely). Leave it out.

Table referenced by other tables having different PKs

I would like to create a table called "NOTES". I was thinking this table would contain a "table_name" VARCHAR(100) indicating which table the note belongs to, a "key" column (or multiple "key" columns) representing the primary key values of the row that the note applies to, and a "note" field VARCHAR(MAX). When other tables use this table, they would supply THEIR primary key(s) and their "table_name" and get all the notes associated with the primary key(s) they supplied. The problem is that other tables might have 1, 2 or more PKs, so I am looking for ideas on how to design this...
What you're suggesting sounds a little convoluted to me. I would suggest something like this.
Notes
------
Id - PK
NoteTypeId - FK to NoteTypes.Id
NoteContent
NoteTypes
----------
Id - PK
Description - This could replace the "table_name" column you suggested
SomeOtherTable
--------------
Id - PK
...
Other Columns
...
NoteId - FK to Notes.Id
This would allow you to keep your data better normalized, but still get the relationships between data that you want. Note that this assumes a 1:1 relationship between rows in your other tables and Notes. If that relationship will be many to one, you'll need a cross table.
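That cross table could look something like this (a sketch; the table and column names are hypothetical):
-- Junction table allowing any number of notes per row of SomeOtherTable.
CREATE TABLE SomeOtherTableNotes (
    SomeOtherTableId INT NOT NULL,  -- FK to SomeOtherTable.Id
    NoteId           INT NOT NULL,  -- FK to Notes.Id
    PRIMARY KEY (SomeOtherTableId, NoteId)
);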
Have a look at this thread about database normalization
What is Normalisation (or Normalization)?
Additionally, you can check this resource to learn more about foreign keys
http://www.w3schools.com/sql/sql_foreignkey.asp
Instead of putting the other tables' names and primary keys in this table, have the primary key of the NOTES table be NoteId. Create an FK in each other table that will make a note, and store the corresponding NoteId's in those tables. Then you can simply join on NoteId from any of these other tables to the NOTES table.
As I understand your problem, you're attempting to "abstract" the auditing of multiple tables in a way that you might abstract a class in OOP.
While it's a great OOP design principle, it falls flat in databases for multiple reasons. Perhaps the largest single reason is that if you cannot envision it, neither will someone (even you) looking at it later have an easy time reassembling the data. On a smaller scale, though: while you tend to think of a table as a container, and thus similar to an object, in reality tables are implemented instances of this hypothetical container you are trying to put together, and they operate better if you treat them as such. By creating an audit table specific to a table, or to a subset of tables that share structural similarity and data similarity, you increase the performance of your database and you won't run into strange trigger- or select-related issues later.
And you can't envision it not because you're not good at what you're doing, but rather, the structure is not conducive to database logging.
Instead, I would recommend that you create separate logging tables that manage the auditing of each table you want to audit or log. In fact, some quick Google searches show many scripts already written to do much of this task for you: example of one such search
You should create these individual tables, and then, if you want to be able to report on multiple tables or even all tables at once, you can create a stored procedure (if you want to query based on criteria) or a view with a SELECT statement that JOINs and/or UNIONs the tables you are interested in - in a fashion that makes sense for the report type. You'll still have to add new objects to the view, but even with your original table design you'd have to account for that.

Two tables or one table?

A quick question in regards to table design...
Let's say I am designing a loan application database.
As it is right now, I will have 2 tables..
Applicant (ApplicantID, FirstName , LastName, SSN, Email... )
and
Co-Applicant(CoApplicantID, FirstName, LastName , SSN, Email.., ApplicantID)
Should I consider having just one table, since all the fields are the same?
Person( PersonID, FirstName, LastName , SSN, Email...,ParentID (This determines if it is a co-applicant))
What are the pros and cons of these two approaches ?
I suggest the following data model:
PERSON table
PERSON_ID, pk
LOAN_APPLICATIONS table
APPLICATION_ID, pk
APPLICANT_TYPE_CODE table
APPLICANT_TYPE_CODE, pk
APPLICANT_TYPE_CODE_DESCRIPTION
LOAN_APPLICANTS table
APPLICATION_ID, pk, fk
PERSON_ID, pk, fk
APPLICANT_TYPE_CODE, fk
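Expressed as DDL, that model might look like the following (a sketch; the column types are assumptions):
CREATE TABLE PERSON (
    PERSON_ID INT NOT NULL PRIMARY KEY
);
CREATE TABLE LOAN_APPLICATIONS (
    APPLICATION_ID INT NOT NULL PRIMARY KEY
);
CREATE TABLE APPLICANT_TYPE_CODE (
    APPLICANT_TYPE_CODE VARCHAR(10) NOT NULL PRIMARY KEY,
    APPLICANT_TYPE_CODE_DESCRIPTION VARCHAR(100) NOT NULL
);
CREATE TABLE LOAN_APPLICANTS (
    APPLICATION_ID      INT         NOT NULL REFERENCES LOAN_APPLICATIONS (APPLICATION_ID),
    PERSON_ID           INT         NOT NULL REFERENCES PERSON (PERSON_ID),
    APPLICANT_TYPE_CODE VARCHAR(10) NOT NULL REFERENCES APPLICANT_TYPE_CODE (APPLICANT_TYPE_CODE),
    PRIMARY KEY (APPLICATION_ID, PERSON_ID)
);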
Person( PersonID, FirstName, LastName , SSN, Email...,ParentID (This determines if it is a co-applicant))
That works if a person will only ever exist in your system as either an applicant or a co-applicant. A person could be a co-applicant to numerous loans and/or an applicant themselves - you don't want to be re-entering their details every time.
This is the benefit of how & why things are normalized. Based on the business rules & the inherent reality of usage, the tables are set up to stop redundant data from being stored. This is for the following reasons:
Redundant data is a waste of space & resources to support & maintain
The act of duplicating data means it can also differ in subtle ways - capitalization, spaces, etc. - which can all make it complicated to isolate the real data
Data incorrectly stored due to oversight when creating the data model
Foresight & flexibility. Currently there isn't any option other than applicant or co-applicant for an APPLICANT_TYPE_CODE value - it could be stored without using another table & foreign key. But this setup allows support for different applicant codes in the future, as needed - without any harm to the data model.
There's no performance benefit when you risk bad data. Whatever you would gain would be eaten by the hacks you have to perform to get things to work.
If the domain model determines that both people are applicants and that they are related, then they belong in the same table with a self-referential foreign key.
You may want to read up on database normalization.
I think you should have two tables, but not those two. Have a table "loans" which has foreign keys to an applicants table, or just have records in applicants reference the same table.
The advantages:
- Makes searching easier: if you only have a phone number or a name, you can still search a single table and find the corresponding person, regardless of whether he/she is a co-applicant or a main applicant. Otherwise you'd need a UNION construct. (And when you know you are searching for a particular type of applicant, you add a filter on the type and only get such applicants.)
- Generally easier to maintain. Say tomorrow you need to add the Twitter id of the applicant ;-) - only one place to change.
- Also allows entering persons with, say, an "open/undefined" type, and assigning them as applicant or otherwise at a later date.
- Allows you to introduce new types of applicants (say, a collateral warrantor... whatever)...
The disadvantages:
with really huge tables (multi-million person records), there could be a slight performance gain with a two-table approach (depending on indexes and various other things)
SQL queries can be a bit more complicated, for example with two separate joins to the person table, one for the applicant and the other for the co-applicant. (Nothing intractable, but a bit more complexity.)
On the whole, the proper design is in all likelihood the one with a single table. The only possible exception is if, over time, the info kept for one type of applicant started to diverge significantly from the other type(s). (And even then we could deal with this situation in different ways, including the introduction of a related table for the extra fields, as that may make more sense; yes, a two-table system again, but one where the extra fields fit "naturally" together in terms of their semantics, usage, etc...)
Both of your variants have one disadvantage: any person can be an applicant and a co-applicant twice or more. So you should use a table Person (PersonID, FirstName, LastName, SSN, Email...) and a table Co-Applicants (PersonID as Applicant, PersonID as CoApplicant).
How about, since each applicant can have a co-applicant, just going with one table in total? You'd have Applicants, which has an optional foreign key field 'coapplicant' (or similar).
If the fields for applicant and co-applicant are identical, then I would suggest that you put them in the same table and use an "applicant type" field to indicate main or co-applicant. If there's something special about the co-applicant (such as relationship to the main applicant, extra phone numbers, other stuff), you might want to normalize that into a separate table and refer from there back to the (co-)applicant (by applicant ID) in the applicant table.
Keep two tables:
1st: a user type code table, in which you keep the user types, i.e. applicant and co-applicant.
2nd: a User table, where you keep all the fields (the similar columns), with the user type code as a foreign key.
This way you can easily distinguish between the two kinds of user.
I know I'm too late on this... The loan application is your primary entity. You will have one or more applicants for the loan. Drop the idea of Person - you're creating something that you can't control. I've been there, done that and got the T-shirt.

Create DB2 History Table Trigger

I want to create a history table to track field changes across a number of tables in DB2.
I know history is usually done by copying an entire table's structure and giving it a suffixed name (e.g. user --> user_history). Then you can use a pretty simple trigger to copy the old record into the history table on an UPDATE.
However, for my application this would use too much space. It doesn't seem like a good idea (to me at least) to copy an entire record to another table every time a field changes. So I thought I could have a generic 'history' table which would track individual field changes:
CREATE TABLE history
(
history_id BIGINT GENERATED ALWAYS AS IDENTITY,
record_id INTEGER NOT NULL,
table_name VARCHAR(32) NOT NULL,
field_name VARCHAR(64) NOT NULL,
field_value VARCHAR(1024),
change_time TIMESTAMP,
PRIMARY KEY (history_id)
);
OK, so every table that I want to track has a single, auto-generated id field as the primary key, which would be put into the 'record_id' field. And the maximum VARCHAR size in the tables is 1024. Obviously if a non-VARCHAR field changes, it would have to be converted into a VARCHAR before inserting the record into the history table.
Now, this could be a completely misguided way to do things (hey, let me know why if it is), but I think it's a good way of tracking changes that need to be pulled up rarely and stored for a significant amount of time.
Anyway, I need help with writing the trigger to add records to the history table on an update. Let's for example take a hypothetical user table:
CREATE TABLE user
(
user_id INTEGER GENERATED ALWAYS AS IDENTITY,
username VARCHAR(32) NOT NULL,
first_name VARCHAR(64) NOT NULL,
last_name VARCHAR(64) NOT NULL,
email_address VARCHAR(256) NOT NULL,
PRIMARY KEY(user_id)
);
So, can anyone help me with a trigger on an update of the user table to insert the changes into the history table? My guess is that some procedural SQL will need to be used to loop through the fields in the old record, compare them with the fields in the new record and if they don't match, then add a new entry into the history table.
It'd be preferable to use the same trigger action SQL for every table, regardless of its fields, if it's possible.
Thanks!
I don't think this is a good idea, as you generate even more overhead per value in a big table where more than one value changes. But that depends on your application.
Furthermore, you should consider the practical value of such a history table. You have to gather a lot of rows together to even get a glimpse of the context of a changed value, and it requires you to code another application that handles just this complex history logic for the end user. And for a DB admin it would be cumbersome to restore values out of the history.
This may sound a bit harsh, but that is not the intent. An experienced programmer in our shop had a similar idea with table journaling. He got it up and running, but it ate disk space like there's no tomorrow.
Just think about what your history table should really accomplish.
Have you considered doing this as a two step process? Implement a simple trigger that records the original and changed version of the entire row. Then write a separate program that runs once a day to extract the changed fields as you describe above.
This makes the trigger simpler, safer, faster and you have more choices for how to implement the post processing step.
We do something similar on our SQL Server database, but the audit tables are per individual audited table (one central table would be huge, as our database is many, many gigabytes in size).
One thing you need to do is make sure you also record who made the change. You should also record the old and new values together (it makes it easier to put data back if you need to) and the change type (insert, update, delete). You don't mention recording deletes from the table, but we find those are among the things we most frequently use the table for.
We use dynamic SQL to generate the code that creates the audit tables (driven by the table that stores the system information), and all audit tables have the exact same structure (which makes it easier to get data back out).
When you create the code to store the data in your history table, create the code as well to restore the data if need be. This will save tons of time down the road when something needs to be restored and you are under pressure from senior management to get it done now.
Now, I don't know if you were planning to be able to restore data from your history table, but once you have one, I can guarantee that management will want it used that way.
CREATE TABLE HIST.TB_HISTORY (
HIST_ID BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 0, INCREMENT BY 1, NO CACHE) NOT NULL,
HIST_COLUMNNAME VARCHAR(128) NOT NULL,
HIST_OLDVALUE VARCHAR(255),
HIST_NEWVALUE VARCHAR(255),
HIST_CHANGEDDATE TIMESTAMP NOT NULL,
PRIMARY KEY (HIST_ID)
)
GO
CREATE TRIGGER COMMON.TG_BANKCODE AFTER
UPDATE OF FRD_BANKCODE ON COMMON.TB_MAINTENANCE
REFERENCING OLD AS oldcol NEW AS newcol FOR EACH ROW MODE DB2SQL
WHEN(COALESCE(newcol.FRD_BANKCODE,'#null#') <> COALESCE(oldcol.FRD_BANKCODE,'#null#'))
BEGIN ATOMIC
CALL FB_CHECKING.SP_FRAUDHISTORY_ON_DATACHANGED(
newcol.FRD_FRAUDID,
'FRD_BANKCODE',
oldcol.FRD_BANKCODE,
newcol.FRD_BANKCODE,
newcol.FRD_UPDATEDBY
);--
INSERT INTO FB_CHECKING.TB_FRAUDMAINHISTORY(
HIST_COLUMNNAME,
HIST_OLDVALUE,
HIST_NEWVALUE,
HIST_CHANGEDDATE)
VALUES(
'FRD_BANKCODE',
oldcol.FRD_BANKCODE,
newcol.FRD_BANKCODE,
CURRENT TIMESTAMP
);--
END