I have several tables that each need to contain a "Null" record, so that when another table links to that particular record, it effectively gets "Nothing". All of these tables have differing numbers of records, so if I just stick something on the end, it gets messy trying to find the "Null" record. Instead I want to run an INSERT query that appends a record with either 0 for the ID field or a fixed number like 9999999. I've tried both, and Access doesn't let me run the query because of a key violation.
The thing is, I've run the same query before, and it's worked fine. But then I had to delete all the data and re-upload it, and now that I'm trying it again, it's not working. Here is the query:
INSERT INTO [Reading] ([ReadingID], [EntryFK], [Reading], [NotTrueReading])
VALUES (9999999, 0, "", FALSE)
Where 9999999 is, I've also tried 0. Both queries fail because of key violations.
I know that this isn't good db design. Nonetheless, is there any way to make this work? I'm not sure why I can't do it now whereas I could do it before.
I'm not sure if I'm fully understanding the issue here, but there may be a couple of reasons why this isn't working. The biggest thing is that any sort of primary key column has to be unique for every record in your lookup table. Like you mentioned above, 0 is a pretty common value for 'unknown' so I think you're on the right track.
Does 0 or 9999999 already exist in [Reading]? If so, that could be one explanation. When you wiped the table out before, did you completely drop and recreate the table, or just delete the data? Depending on how the table was set up, some databases will 'remember' the keys they handed out in the past if you simply delete all of the data and re-insert it rather than dropping and recreating the table. That is, if I had 100 records in a table and then truncated it (or deleted those records), the next time I insert a record its default PK value will still start at 101.
One thing you could do is drop and recreate the table, and set it up so that the primary key is generated by the database itself if it isn't already (an 'identity' type of column), making sure it starts at 0. Once you do that, the first record you insert should be your 'unknown' value (0), letting the database itself decide what the ReadingID will be:
INSERT INTO [Reading] ([EntryFK], [Reading], [NotTrueReading]) VALUES (0, "", FALSE)
Then insert the rest of your data. If the other table that looks up to [Reading] has a null value in its FK column, you can always join back to [Reading] on coalesce(fk_ReadingID, 0) = Reading.ReadingID.
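For illustration, that lookup might look something like this (a sketch only; OtherTable and fk_ReadingID are placeholder names):

SELECT o.SomeColumn, r.[Reading]
FROM OtherTable AS o
INNER JOIN [Reading] AS r
        ON r.ReadingID = COALESCE(o.fk_ReadingID, 0)
-- in Access itself you'd use Nz(o.fk_ReadingID, 0) in place of COALESCE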
Hope that helps in some capacity.
Let's say I have a table defined as follows:
CREATE TABLE SomeTable
(
P_Id int PRIMARY KEY IDENTITY,
CompoundKey varchar(255) NOT NULL
)
CompoundKey is a string with the primary key P_Id concatenated to the end, like Foo00000001 which comes from "Foo" + 00000001. At the moment, insertions into this table happen in two steps.
Insert a dummy record with a placeholder string for CompoundKey.
Update the CompoundKey column with the generated compound key.
I'm looking for a way to avoid the 2nd update entirely and do it all with one insert statement. Is this possible? I'm using MS SQL Server 2005.
p.s. I agree that this is not the most sensible schema in the world, and this schema will be refactored (and properly normalized) but I'm unable to make changes to the schema for now.
You could use a computed column; change the schema to read:
CREATE TABLE SomeTable
(
P_Id int PRIMARY KEY IDENTITY,
CompoundKeyPrefix varchar(255) NOT NULL,
CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10))
)
This way, SQL Server will automagically give you your compound key in a new column, and will automatically maintain it for you. You may also want to look into the PERSISTED keyword for computed columns, which causes SQL Server to materialise the value in the data files rather than computing it on the fly. You can also add an index against the column should you so wish.
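For instance, the persisted, indexed variant might look like this (a sketch against the same illustrative schema):

CREATE TABLE SomeTable
(
    P_Id int PRIMARY KEY IDENTITY,
    CompoundKeyPrefix varchar(255) NOT NULL,
    -- materialised in the data files instead of computed on read
    CompoundKey AS CompoundKeyPrefix + CAST(P_Id AS VARCHAR(10)) PERSISTED
)

CREATE INDEX IX_SomeTable_CompoundKey ON SomeTable (CompoundKey)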
A trigger would easily accomplish this.
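For example, a minimal sketch of such a trigger (T-SQL; untested, and note it still performs an UPDATE under the hood):

CREATE TRIGGER trg_SomeTable_CompoundKey ON SomeTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- append the generated identity value to the prefix that was inserted
    UPDATE s
    SET s.CompoundKey = s.CompoundKey + CAST(s.P_Id AS VARCHAR(10))
    FROM SomeTable AS s
    INNER JOIN inserted AS i ON i.P_Id = s.P_Id;
END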
This is simply not possible.
The "next ID" doesn't exist and thus cannot be read to fulfill the UPDATE until the row is inserted.
Now, if you were sourcing your autonumbers from somewhere else you could, but I don't think that's a good answer to your question.
Even if you use triggers, an UPDATE is still executed, whether or not you run it manually.
You can obscure the population of the CompoundKey, but at the end of the day it's still going to be an UPDATE.
I think your safest bet is just to make sure the UPDATE is in the same transaction as the INSERT or use a trigger. But, for the academic argument of it, an UPDATE still occurs.
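A sketch of that, in T-SQL (assuming the SomeTable schema from the question):

BEGIN TRANSACTION;

INSERT INTO SomeTable (CompoundKey) VALUES ('Foo');

-- complete the key using the identity value generated by the insert
UPDATE SomeTable
SET CompoundKey = CompoundKey + CAST(P_Id AS VARCHAR(10))
WHERE P_Id = SCOPE_IDENTITY();

COMMIT TRANSACTION;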
Two things:
1) if you end up using the two-step approach, you must use a transaction! Otherwise other processes may see the database in an inconsistent state (i.e. see a record without its CompoundKey).
2) I would refrain from pasting the Id onto the end of CompoundKey in a transaction, trigger, etc. It is much cleaner to do it at output time if you need it, e.g. in queries (select concat(CompoundKey, Id) as CompoundKeyId ...). If you need it as a foreign key in other tables, just use the pair (CompoundKey, Id).
In our database schema, we like to use delete flags. When a record is deleted, we update that field rather than running a DELETE statement. The rest of our queries then check the delete flag when returning data.
Here is the problem:
The delete flag is a date, with a default value of NULL. This is convenient because when a record is deleted we can easily see the date that it was deleted on.
However, to enforce unique constraints properly, we need to include the delete flag in the unique constraint. The problem is that MS SQL behaves the way we want for this design, but in PostgreSQL, if any field in a multi-column unique constraint is NULL, the constraint is not enforced for that row (NULLs are treated as distinct from each other). This behavior fits the SQL standard, but it breaks our design.
The options we are considering are:
make a default value for the deleted field to be some hardcoded date
add a bit flag for deleted, then each table would have 2 delete related fields - date_deleted and is_deleted (for example)
change the date_deleted to is_deleted (bit field)
I suspect option 1 is a performance hit: each query would have to check for the hardcoded date rather than just checking IS NULL. Plus it feels wrong.
Option 2 also feels wrong: two fields for "deleted" is not DRY.
Option 3, we lose the "date" information. There is a modified field, which would, in theory, reflect the date deleted, but only if the last update to the row was the update to the delete bit.
So, any suggestions? What have you done in the past to deal with "delete flags"?
Update
Thanks to everyone for the super quick, and thoughtful responses.
We ended up going with a simple boolean field and a modified date field (with a trigger). I just noticed the partial index suggestion, and that looks like the perfect solution for this problem (but I haven't actually tried it).
If just retaining the deleted records is important to you, have you considered just moving them to a history table?
This could easily be achieved with a trigger (see the sketch after this list).
Application logic doesn't need to account for this deleted flag.
Your tables would stay lean and mean when selecting from it.
It would solve your problem with unique indexes.
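A minimal sketch of such a trigger (T-SQL; YourTable and its columns are illustrative):

CREATE TRIGGER trg_YourTable_Archive ON YourTable
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- copy the deleted rows into the history table, stamping the deletion time
    INSERT INTO YourTable_History (Id, SomeColumn, DeletedAt)
    SELECT Id, SomeColumn, GETDATE()
    FROM deleted;
END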
"Option 3, we lose the "date" information. There is a modified field, which would, in theory, reflect the date deleted, but only assuming the last update to the row was the update to the delete bit."
Is there a business reason that the record would be modified after it was deleted? If not, are you worrying about something that's not actually an issue? =)
In the system I currently work on we have the following "metadata" columns: _Deleted, _CreatedStamp, _UpdatedStamp, _UpdatedUserId, _CreatedUserId ... quite a bit, but it's important for this system to carry that much data. I'd suggest going down the road of a separate Deleted flag alongside a Modified Date / Deleted Date. "Diskspace is cheap", and having two fields to represent a deleted record isn't world-ending, if that's what you have to do for the RDBMS you're using.
What about triggers? When a record is deleted, an after-delete trigger copies the row into an archive table that has the same structure plus additional columns for the date/time of deletion and perhaps the user that deleted it.
That way your "live" table only has records that are actually live, so it performs better, and your application doesn't have to worry about whether a record has been deleted or not.
One of my favourite solutions is an is_deleted bit flag, and a last_modified date field.
The last_modified field is updated automatically every time the row is modified (using any technique supported by your DBMS.) If the is_deleted bit flag is TRUE, then the last_modified value implies the time when the row was deleted.
You will then be able to set the default value of last_modified to GETDATE(). No more NULL values, and this should work with your unique constraints.
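A minimal sketch of the pattern (SQL Server syntax; the table, its columns, and the constraint are illustrative):

CREATE TABLE customer
(
    customer_id   int IDENTITY PRIMARY KEY,
    email         varchar(256) NOT NULL,
    is_deleted    bit NOT NULL DEFAULT 0,
    last_modified datetime NOT NULL DEFAULT GETDATE(),
    -- note: this allows at most one live and one deleted row per email
    CONSTRAINT uq_customer_email UNIQUE (email, is_deleted)
);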
Just create a conditional unique constraint:
CREATE UNIQUE INDEX i_bla ON yourtable (colname) WHERE date_deleted IS NULL;
Would creating a multi-column unique index that includes the deleted date achieve the constraint you need?
http://www.postgresql.org/docs/current/interactive/indexes-unique.html
Alternately, could you store a non-NULL sentinel instead, setting the deleted date to 0 or the minimum SQL date ("1/1/1753") rather than NULL for undeleted records?
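That sentinel approach might look like this (a sketch; SQL Server syntax, illustrative names):

CREATE TABLE customer
(
    customer_id  int IDENTITY PRIMARY KEY,
    email        varchar(256) NOT NULL,
    -- '17530101' is the minimum SQL Server datetime, used as the 'not deleted' sentinel
    date_deleted datetime NOT NULL DEFAULT '17530101',
    CONSTRAINT uq_customer_email UNIQUE (email, date_deleted)
);
-- live rows all share the sentinel, so only one live row per email is allowed;
-- deleted rows carry distinct real timestamps, so multiple deletions can coexist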
Is it possible to exclude the deleted date field from your unique index? In what way does this field contribute to the uniqueness of each record, especially if the field is usually null?
I want to create a history table to track field changes across a number of tables in DB2.
I know history is usually done with copying an entire table's structure and giving it a suffixed name (e.g. user --> user_history). Then you can use a pretty simple trigger to copy the old record into the history table on an UPDATE.
However, for my application this would use too much space. It doesn't seem like a good idea (to me at least) to copy an entire record to another table every time a field changes. So I thought I could have a generic 'history' table which would track individual field changes:
CREATE TABLE history
(
history_id BIGINT GENERATED ALWAYS AS IDENTITY,
record_id INTEGER NOT NULL,
table_name VARCHAR(32) NOT NULL,
field_name VARCHAR(64) NOT NULL,
field_value VARCHAR(1024),
change_time TIMESTAMP,
PRIMARY KEY (history_id)
);
OK, so every table that I want to track has a single, auto-generated id field as the primary key, which would be put into the 'record_id' field. And the maximum VARCHAR size in the tables is 1024. Obviously if a non-VARCHAR field changes, it would have to be converted into a VARCHAR before inserting the record into the history table.
Now, this could be a completely backwards way to do things (hey, let me know why if it is), but I think it's a good way of tracking changes that need to be pulled up rarely and stored for a significant amount of time.
Anyway, I need help with writing the trigger to add records to the history table on an update. Let's for example take a hypothetical user table:
CREATE TABLE user
(
user_id INTEGER GENERATED ALWAYS AS IDENTITY,
username VARCHAR(32) NOT NULL,
first_name VARCHAR(64) NOT NULL,
last_name VARCHAR(64) NOT NULL,
email_address VARCHAR(256) NOT NULL,
PRIMARY KEY(user_id)
);
So, can anyone help me with a trigger on an update of the user table to insert the changes into the history table? My guess is that some procedural SQL will be needed to loop through the fields of the old record, compare them with the fields of the new record, and add a new entry to the history table wherever they don't match.
It'd be preferable to use the same trigger action SQL for every table, regardless of its fields, if it's possible.
Thanks!
I don't think this is a good idea: with a big table where more than one value changes per update, you generate even more overhead per value. But that depends on your application.
Furthermore, you should consider the practical value of such a history table. You have to pull a lot of rows together to even get a glimpse of the context of a changed value, it requires you to write another application just to handle this complex history logic for an end user, and for a DB admin it would be cumbersome to restore values out of the history.
It may sound a bit harsh, but that is not the intent. An experienced programmer in our shop had a similar idea with table journaling. He got it up and running, but it ate disk space like there's no tomorrow.
Just think about what your history table should really accomplish.
Have you considered doing this as a two step process? Implement a simple trigger that records the original and changed version of the entire row. Then write a separate program that runs once a day to extract the changed fields as you describe above.
This makes the trigger simpler, safer, faster and you have more choices for how to implement the post processing step.
We do something similar on our SQL Server database, but the audit tables are for each individual table audited (one central table would be huge, as our database is many, many gigabytes in size).
One thing you need to do is make sure you also record who made the change. You should also record the old and new value together (it makes it easier to put data back if you need to) and the change type (insert, update, delete). You don't mention recording deletes from the table, but we find those are among the things we use the audit tables for most frequently.
We use dynamic SQL to generate the code that creates the audit tables (using the table that stores the system information), and all audit tables have the exact same structure (which makes it easier to get data back out).
When you create the code to store the data in your history table, create the code as well to restore the data if need be. This will save tons of time down the road when something needs to be restored and you are under pressure from senior management to get it done now.
Now I don't know if you were planning to be able to restore data from your history table, but once you have one, I can guarantee that management will want it used that way.
CREATE TABLE HIST.TB_HISTORY (
HIST_ID BIGINT GENERATED ALWAYS AS IDENTITY (START WITH 0, INCREMENT BY 1, NO CACHE) NOT NULL,
HIST_COLUMNNAME VARCHAR(128) NOT NULL,
HIST_OLDVALUE VARCHAR(255),
HIST_NEWVALUE VARCHAR(255),
HIST_CHANGEDDATE TIMESTAMP NOT NULL,
PRIMARY KEY (HIST_ID)
)
GO
CREATE TRIGGER COMMON.TG_BANKCODE AFTER
UPDATE OF FRD_BANKCODE ON COMMON.TB_MAINTENANCE
REFERENCING OLD AS oldcol NEW AS newcol FOR EACH ROW MODE DB2SQL
WHEN(COALESCE(newcol.FRD_BANKCODE,'#null#') <> COALESCE(oldcol.FRD_BANKCODE,'#null#'))
BEGIN ATOMIC
CALL FB_CHECKING.SP_FRAUDHISTORY_ON_DATACHANGED(
newcol.FRD_FRAUDID,
'FRD_BANKCODE',
oldcol.FRD_BANKCODE,
newcol.FRD_BANKCODE,
newcol.FRD_UPDATEDBY
);--
INSERT INTO FB_CHECKING.TB_FRAUDMAINHISTORY(
HIST_COLUMNNAME,
HIST_OLDVALUE,
HIST_NEWVALUE,
HIST_CHANGEDDATE
) VALUES (
'FRD_BANKCODE',
oldcol.FRD_BANKCODE,
newcol.FRD_BANKCODE,
CURRENT TIMESTAMP
);--
END
We have a requirement in a project to store all revisions (change history) for the entities in the database. Currently we have two design proposals for this:
e.g. for "Employee" Entity
Design 1:
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- Holds the Employee Revisions in Xml. The RevisionXML will contain
-- all data of that particular EmployeeId
"EmployeeHistories (EmployeeId, DateModified, RevisionXML)"
Design 2:
-- Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
-- In this approach we have basically duplicated all the fields on Employees
-- in the EmployeeHistories and storing the revision data.
"EmployeeHistories (EmployeeId, RevisionId, DateModified, FirstName,
LastName, DepartmentId, .., ..)"
Is there any other way of doing this thing?
The problem with "Design 1" is that we have to parse the XML each time we need to access the data. This will slow the process and also adds limitations, like not being able to join on the revision data fields.
And the problem with "Design 2" is that we have to duplicate each and every field on all entities (we have around 70-80 entities for which we want to maintain revisions).
I think the key question to ask here is 'Who / What is going to be using the history'?
If it's going to be mostly for reporting / human readable history, we've implemented this scheme in the past...
Create a table called 'AuditTrail' or something that has the following fields...
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[OldValue] [varchar](5000) NULL,
[NewValue] [varchar](5000) NULL
You can then add a 'LastUpdatedByUserID' column to all of your tables which should be set every time you do an update / insert on the table.
You can then add a trigger to every table to catch any insert/update that happens and create an entry in this table for each field that's changed. Because the table is also being supplied with the 'LastUpdatedByUserID' for each update/insert, you can access this value in the trigger and use it when adding to the audit table.
We use the RecordID field to store the value of the key field of the table being updated. If it's a combined key, we just do a string concatenation with a '~' between the fields.
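A sketch of one such per-table trigger (T-SQL; table and column names are illustrative, and a real version would repeat the pattern, or use generated code, for every audited column):

CREATE TRIGGER trg_Employees_Audit ON Employees
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- one AuditTrail row per changed field; shown here for FirstName only
    INSERT INTO AuditTrail (UserID, EventDate, TableName, RecordID, FieldName, OldValue, NewValue)
    SELECT i.LastUpdatedByUserID, GETDATE(), 'Employees',
           CAST(i.EmployeeId AS varchar(20)), 'FirstName', d.FirstName, i.FirstName
    FROM inserted AS i
    INNER JOIN deleted AS d ON d.EmployeeId = i.EmployeeId
    WHERE ISNULL(d.FirstName, '') <> ISNULL(i.FirstName, '');
END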
I'm sure this system may have drawbacks - for heavily updated databases the performance may be hit, but for my web-app, we get many more reads than writes and it seems to be performing pretty well. We even wrote a little VB.NET utility to automatically write the triggers based on the table definitions.
Just a thought!
Do not put it all in one table with an IsCurrent discriminator attribute. This just causes problems down the line, requires surrogate keys, and causes all sorts of other problems.
Design 2 does have problems with schema changes. If you change the Employees table, you have to change the EmployeeHistories table and all the related sprocs that go with it. That potentially doubles your schema change effort.
Design 1 works well and, if done properly, does not cost much in terms of performance. You could use an XML schema and even indexes to get over possible performance problems. Your comment about parsing the XML is valid, but you could easily create a view using XQuery, which you can include in queries and join to. Something like this...
CREATE VIEW EmployeeHistory
AS
SELECT EmployeeId,
       RevisionXML.value('(/employee/FirstName)[1]', 'varchar(50)') AS FirstName,
       RevisionXML.value('(/employee/LastName)[1]', 'varchar(100)') AS LastName,
       RevisionXML.value('(/employee/DepartmentId)[1]', 'integer') AS DepartmentId
FROM EmployeeHistories
The History Tables article in the Database Programmer blog might be useful - covers some of the points raised here and discusses the storage of deltas.
Edit
In the History Tables essay, the author (Kenneth Downs), recommends maintaining a history table of at least seven columns:
Timestamp of the change,
User that made the change,
A token to identify the record that was changed (where the history is maintained separately from the current state),
Whether the change was an insert, update, or delete,
The old value,
The new value,
The delta (for changes to numerical values).
Columns which never change, or whose history is not required, should not be tracked in the history table to avoid bloat. Storing the delta for numerical values can make subsequent queries easier, even though it can be derived from the old and new values.
The history table must be secure, with non-system users prevented from inserting, updating or deleting rows. Only periodic purging should be supported to reduce overall size (and if permitted by the use case).
Avoid Design 1; it is not very handy once you need to, for example, roll back to old versions of records, whether automatically or "manually" via an administrator's console.
I don't really see disadvantages to Design 2. I think the second (History) table should contain all columns present in the first (Records) table. E.g. in MySQL you can easily create a table with the same structure as another table (CREATE TABLE x LIKE y). And when you are about to change the structure of the Records table in your live database, you have to use ALTER TABLE commands anyway; there is no big effort in running those commands for your History table as well.
Notes
Records table contains only the latest revision;
History table contains all previous revisions of records in Records table;
History table's primary key is the primary key of the Records table with an added RevisionId column;
Think about additional auxiliary fields like ModifiedBy - the user who created a particular revision. You may also want a DeletedBy field to track who deleted a particular revision.
Think about what DateModified should mean: either when this particular revision was created, or when this particular revision was replaced by another one. The former requires the field to be in the Records table too, and seems more intuitive at first sight; the latter, however, is more practical for deleted records (the date when this particular revision was deleted). If you go for the former, you would probably need a second field, DateDeleted (only if you need it, of course). Depends on you and what you actually want to record.
Operations in Design 2 are very trivial:
Modify
copy the record from the Records table to the History table, give it a new RevisionId (if it is not already present in the Records table), and handle DateModified (depending on how you interpret it; see notes above)
go on with the normal update of the record in the Records table
Delete
do exactly the same as in the first step of Modify operation. Handle DateModified/DateDeleted accordingly, depending on the interpretation you have chosen.
Undelete (or rollback)
take the highest (or some particular?) revision from the History table and copy it to the Records table
List revision history for a particular record
select from History table and Records table
think about what exactly you expect from this operation; it will probably determine what information you require from the DateModified/DateDeleted fields (see notes above)
If you go for Design 2, all the SQL commands needed to do this are very easy, as is maintenance! It may be even easier if you use the auxiliary columns (RevisionId, DateModified) in the Records table as well, keeping both tables at exactly the same structure (except for unique keys)! That allows for simple SQL commands that are tolerant to any data structure change:
insert into EmployeeHistory select * from Employee where ID = XX
Don't forget to use transactions!
As for the scaling, this solution is very efficient, since you don't transform any data from XML back and forth, just copying whole table rows - very simple queries, using indices - very efficient!
We have implemented a solution very similar to the solution that Chris Roberts suggests, and that works pretty well for us.
The only difference is that we only store the new value. The old value is, after all, stored in the previous history row.
[ID] [int] IDENTITY(1,1) NOT NULL,
[UserID] [int] NULL,
[EventDate] [datetime] NOT NULL,
[TableName] [varchar](50) NOT NULL,
[RecordID] [varchar](20) NOT NULL,
[FieldName] [varchar](50) NULL,
[NewValue] [varchar](5000) NULL
Let's say you have a table with 20 columns. This way you only have to store the exact column that has changed, rather than the entire row.
If you have to store history, make a shadow table with the same schema as the table you are tracking and a 'Revision Date' and 'Revision Type' column (e.g. 'delete', 'update'). Write (or generate - see below) a set of triggers to populate the audit table.
It's fairly straightforward to make a tool that will read the system data dictionary for a table and generate a script that creates the shadow table and a set of triggers to populate it.
Don't try to use XML for this; XML storage is a lot less efficient than the native database table storage that this type of trigger uses.
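A sketch of what such a shadow table might look like for the Employees example (names are illustrative):

CREATE TABLE Employees_Shadow
(
    -- same columns as the tracked table
    EmployeeId   INTEGER NOT NULL,
    FirstName    VARCHAR(64) NOT NULL,
    LastName     VARCHAR(64) NOT NULL,
    DepartmentId INTEGER NOT NULL,
    -- audit metadata populated by the triggers
    RevisionDate TIMESTAMP NOT NULL,
    RevisionType VARCHAR(10) NOT NULL  -- 'insert', 'update' or 'delete'
);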
Ramesh, I was involved in developing a system based on the first approach.
It turned out that storing revisions as XML leads to huge database growth and significantly slows things down.
My approach would be to have one table per entity:
Employee (Id, Name, ... , IsActive)
where IsActive is a sign of the latest version
If you want to associate some additional info with revisions, you can create a separate table containing that info and link it to the entity tables with a PK/FK relation.
This way you can store all version of employees in one table.
Pros of this approach:
Simple data base structure
No conflicts since table becomes append-only
You can rollback to previous version by simply changing IsActive flag
No need for joins to get object history
Note that the Id can then no longer be a unique primary key, since all versions of an employee share it.
The way that I've seen this done in the past is to have
Employees (EmployeeId, DateModified, < Employee Fields > , boolean isCurrent );
You never "update" this table (except to change the value of isCurrent); you just insert new rows. For any given EmployeeId, only one row can have isCurrent == 1.
The complexity of maintaining this can be hidden behind views and "instead of" triggers (in Oracle; I presume other RDBMSs have similar things). You can even go to materialized views if the tables are too big to be handled by indexes.
This method is ok, but you can end up with some complex queries.
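The write path for that design might look like this (a sketch; EmployeeId 42 and the columns are illustrative):

-- supersede the current version...
UPDATE Employees
SET isCurrent = 0
WHERE EmployeeId = 42 AND isCurrent = 1;

-- ...then insert the new version as the current row
INSERT INTO Employees (EmployeeId, DateModified, FirstName, LastName, isCurrent)
VALUES (42, CURRENT_TIMESTAMP, 'Bob', 'Smith', 1);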
Personally, I'm pretty fond of your Design 2 way of doing it, which is how I've done it in the past as well. It's simple to understand, simple to implement and simple to maintain.
It also creates very little overhead for the database and application, especially when performing read queries, which is likely what you'll be doing 99% of the time.
It would also be quite easy to automate the creation of the history tables and the triggers that maintain them (assuming it would be done via triggers).
Revisions of data is an aspect of the 'valid-time' concept of a Temporal Database. Much research has gone into this, and many patterns and guidelines have emerged. I wrote a lengthy reply with a bunch of references to this question for those interested.
I'm going to share my design with you; it's different from both of your designs in that it requires one table per entity type. I found the best way to describe any database design is through an ERD; mine involves the following tables.
In this example we have an entity named employee. The user table holds your users' records, and entity and entity_revision are two tables which hold the revision history for all the entity types in your system. Here's how the design works:
The two fields of entity_id and revision_id
Each entity in your system will have a unique entity id of its own. Your entity might go through revisions, but its entity_id will remain the same. You need to keep this entity id in your employee table (as a foreign key). You should also store the type of your entity in the entity table (e.g. 'employee'). Now as for the revision_id: as its name shows, it keeps track of your entity revisions. The best way I found for this is to use the employee_id as the revision_id. This means you will have duplicate revision ids for different types of entities, but this is no problem for me (I'm not sure about your case). The only important note is that the combination of entity_id and revision_id must be unique.
There's also a state field in the entity_revision table which indicates the state of the revision. It can have one of three states: latest, obsolete or deleted (not relying on the date of revisions helps a great deal in speeding up your queries).
One last note on revision_id: I didn't create a foreign key connecting employee_id to revision_id because we don't want to alter the entity_revision table for each entity type we might add in the future.
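Since I can't reproduce the ERD here, roughly, the tables look like this (a sketch; names and types are illustrative readings of the description above):

CREATE TABLE entity (
    entity_id INT PRIMARY KEY,
    type      VARCHAR(32) NOT NULL       -- e.g. 'employee'
);

CREATE TABLE entity_revision (
    entity_id   INT NOT NULL,            -- FK to entity
    revision_id INT NOT NULL,            -- the employee_id of that revision
    state       VARCHAR(16) NOT NULL,    -- 'latest', 'obsolete' or 'deleted'
    created_by  INT NOT NULL,            -- FK to user
    created_at  TIMESTAMP NOT NULL,
    PRIMARY KEY (entity_id, revision_id),
    FOREIGN KEY (entity_id) REFERENCES entity (entity_id)
);

CREATE TABLE employee (
    employee_id INT PRIMARY KEY,         -- doubles as the revision_id
    entity_id   INT NOT NULL,
    name        VARCHAR(64) NOT NULL,
    FOREIGN KEY (entity_id) REFERENCES entity (entity_id)
);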
INSERTION
For each employee you want to insert into the database, you also add a record to entity and entity_revision. These last two records will help you keep track of by whom and when a record was inserted into the database.
UPDATE
Each update of an existing employee record is implemented as two inserts: one in the employee table and one in entity_revision. The second helps you know by whom and when the record was updated.
DELETION
For deleting an employee, a record is inserted into entity_revision stating the deletion, and you're done.
As you can see in this design, no data is ever altered or removed from the database, and more importantly each entity type requires only one table. Personally I find this design really flexible and easy to work with. But I'm not sure about you, as your needs might be different.
[UPDATE]
Now that the new MySQL versions support partitioning, I believe my design also comes with one of the best performances. One can partition the entity table on the type field and entity_revision on its state field. This will boost SELECT queries by far while keeping the design simple and clean.
If indeed an audit trail is all you need, I'd lean toward the audit table solution (complete with denormalized copies of the important columns of other tables, e.g., UserName). Keep in mind, though, that bitter experience indicates that a single audit table will be a huge bottleneck down the road; it's probably worth the effort to create individual audit tables for all your audited tables.
If you need to track the actual historical (and/or future) versions, then the standard solution is to track the same entity with multiple rows using some combination of start, end, and duration values. You can use a view to make accessing current values convenient. If this is the approach you take, you can run into problems if your versioned data references mutable but unversioned data.
If you want to go with the first option, you might want to use XML for the Employees table too. Most newer databases allow you to query XML fields, so this is not always a problem. And it might be simpler to have one way to access employee data regardless of whether it's the latest version or an earlier one.
I would try the second approach though. You could simplify this by having just one Employees table with a DateModified field. EmployeeId + DateModified would be the primary key, and you can store a new revision by just adding a row. That way archiving older versions and restoring versions from the archive is easier too.
Another way to do this could be the Data Vault model by Dan Linstedt. I did a project for the Dutch statistics bureau that used this model, and it works quite well. But I don't think it's directly useful for day-to-day database use. You might get some ideas from reading his papers though.
How about:
EmployeeID
DateModified
and/or revision number, depending on how you want to track it
ModifiedByUserId
plus any other information you want to track
Employee fields
You make the primary key (EmployeeId, DateModified), and to get the "current" record(s) you just select MAX(DateModified) for each EmployeeId. Storing an IsCurrent flag is a very bad idea, because first of all it can be calculated, and secondly it is far too easy for the data to get out of sync.
You can also make a view that lists only the latest records, and mostly use that while working in your app. The nice thing about this approach is that you don't have duplicates of data, and you don't have to gather data from two different places (current in Employees, archived in EmployeesHistory) to get all the history, roll back, etc.
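The "latest records" view might look something like this (a sketch; names follow the discussion above):

CREATE VIEW CurrentEmployees AS
SELECT e.*
FROM Employees AS e
INNER JOIN (
    -- the newest revision per employee
    SELECT EmployeeId, MAX(DateModified) AS DateModified
    FROM Employees
    GROUP BY EmployeeId
) AS latest
    ON latest.EmployeeId = e.EmployeeId
   AND latest.DateModified = e.DateModified;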
If you want to rely on history data (for reporting reasons), you should use a structure something like this:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds the Employee revisions in rows.
"EmployeeHistories (HistoryId, EmployeeId, DateModified, OldValue, NewValue, FieldName)"
Or global solution for application:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, OldValue, NewValue, FieldName)"
You can also save your revisions in XML; then you have only one record per revision. It will look like this:
// Holds Employee Entity
"Employees (EmployeeId, FirstName, LastName, DepartmentId, .., ..)"
// Holds all entities revisions in rows.
"EntityChanges (EntityName, EntityId, DateModified, XMLChanges)"
We have had similar requirements, and what we found was that often times the user just wants to see what has been changed, not necessarily roll back any changes.
I'm not sure what your use case is, but what we have done is create an Audit table that is automatically updated with changes to a business entity, including the friendly name of any foreign key references and enumerations.
Whenever the user saves their changes, we reload the old object, run a comparison, record the changes, and save the entity (all done in a single database transaction in case there are any problems).
This seems to work very well for our users and saves us the headache of having a completely separate audit table with the same fields as our business entity.
It sounds like you want to track changes to specific entities over time, e.g. ID 3, "bob", "123 main street", then another ID 3, "bob" "234 elm st", and so on, in essence being able to puke out a revision history showing every address "bob" has been at.
The best way to do this is to have an "is current" field on each record, and (probably) a timestamp or FK to a date/time table.
Inserts then have to set the "is current" flag and also unset the "is current" flag on the previous current record. Queries have to filter on "is current", unless you want all of the history.
There are further tweaks to this if it's a very large table, or a large number of revisions are expected, but this is a fairly standard approach.