I have a "controller_variables" table where I save current values of some sensors:
id: the id of the record
controller_id (FK): the id of the controller that provides the data
variable_id (FK): the variable_id
value: the current variable value
created_at: creation date
updated_at: updated date
I also have "history_controller_variables" table where I save "snapshots" of the "controller_variables" table:
id: the id of the record
controller_variable_id (FK): the id of the controller_variables record
value: the "historified" read value
created_at: creation date of the history value
I found myself a few times wondering why I coupled the "history_controller_variables" table to the "controller_variables" table.
If I created the history table as an exact clone of the original table I could:
keep my history in case the referenced "controller_variables" record is deleted.
get history records by directly querying records of a certain controller_id/variable_id.
I can't think of a reason to keep the coupling. Are there obvious reasons not to proceed with this change?
You have a fairly big tradeoff here. I don't know which is better, but I can tell you what the advantages of each are.
If the variables for your controllers will never change, then you want to go with the single foreign key. That makes it easier to ensure correctness: the history record is guaranteed to reference a valid value for a given controller. If, on the other hand, the variables do change and you delete records from the controller_variables table, you run into a problem with no easy solution, so in that case you are better off storing the two keys (controller_id and variable_id) directly in the history table.
Ultimately we never know the future and for that reason I would tend to accept some risk of odd data in exchange for ensuring that operational and historical data is subject to different concerns and that changing the data doesn't mess with history.
This is a good change to do. Have the history table be a clone of the original table, but add a timestamp column to the history table. Any time a variable value changes, create a new record in the history table with the new value, and have the timestamp indicate when the variable was changed to that value. If applicable in your application, you can also include a column in your history table that indicates who (or what) modified the variable to be the new value.
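A minimal sketch of this answer in SQLite (via Python's sqlite3), assuming a controller_variables table like the one described in the question. The history table clones the relevant columns with no FK back to the live table, and a trigger snapshots each value change, so history survives deletes and can be queried directly by controller_id/variable_id:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE controller_variables (
    id            INTEGER PRIMARY KEY,
    controller_id INTEGER NOT NULL,
    variable_id   INTEGER NOT NULL,
    value         REAL,
    updated_at    TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Clone of the columns we care about, with no FK back to the live table,
-- so history survives deletes of the operational record.
CREATE TABLE history_controller_variables (
    id            INTEGER PRIMARY KEY,
    controller_id INTEGER NOT NULL,
    variable_id   INTEGER NOT NULL,
    value         REAL,
    created_at    TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Snapshot every value change automatically.
CREATE TRIGGER snapshot_on_update AFTER UPDATE OF value ON controller_variables
BEGIN
    INSERT INTO history_controller_variables (controller_id, variable_id, value)
    VALUES (NEW.controller_id, NEW.variable_id, NEW.value);
END;
""")

conn.execute("INSERT INTO controller_variables (controller_id, variable_id, value)"
             " VALUES (1, 7, 20.5)")
conn.execute("UPDATE controller_variables SET value = 21.0"
             " WHERE controller_id = 1 AND variable_id = 7")
conn.execute("DELETE FROM controller_variables WHERE controller_id = 1")

# History is still queryable directly by controller_id/variable_id
# even though the live record is gone.
rows = conn.execute(
    "SELECT value FROM history_controller_variables"
    " WHERE controller_id = 1 AND variable_id = 7").fetchall()
print(rows)  # [(21.0,)]
```

The trigger syntax here is SQLite's; in SQL Server or PostgreSQL the same idea applies with their respective trigger dialects.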
I'm developing a change history table where I'll basically record the old and new values for changes to fields of two types: decimal and datetime.
To keep it simple, I was thinking about creating a string field and converting the values to strings before storing them in the table.
My problem is that later I'll have to create a field in the report that shows the difference between the changes (e.g. if a date changed from 01/20/2015 to 01/27/2015 the difference will be 7, and so on). I do not want to store the difference in the table; I want to compute it on the report side.
My question is:
Is there any way to store these two kinds of data (decimal and datetime) that makes it simple to do comparisons later? Because if I store them as strings I'll have to convert them twice: once before creating the record in the DB, and again to compute the difference between them.
I believe the best approach would be what I like to call the never delete, never update approach.
Basically, you add a column to your source table for the record status, which can be current, historic or deleted (use a tinyint for that, just be sure to link it to a row-status table for readability). Then, instead of deleting a record you update its status to deleted, and instead of updating it, you change its status to historic and insert a new record with the new data.
Naturally, this approach has its price, since you will have to write an INSTEAD OF UPDATE trigger, but that is a small price to pay compared to other approaches to keeping history data.
Also, if your primary key is not an identity column, you will need to add the status column to your primary key (and to any other unique constraints you might have).
You also might want to add a filter to your non-clustered indexes so that they only index the records whose status is current.
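A minimal sketch of the never-delete, never-update approach in SQLite (via Python's sqlite3). The table and column names are illustrative, not from the post; an "update" marks the current row historic and inserts a new current row, done here in application code rather than the INSTEAD OF trigger the answer describes:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE row_status (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO row_status VALUES (1, 'current'), (2, 'historic'), (3, 'deleted');

CREATE TABLE sensor (
    id        INTEGER,   -- business key, repeated across versions
    version   INTEGER,   -- part of the primary key once history is kept
    value     REAL,
    status_id INTEGER REFERENCES row_status(id),
    PRIMARY KEY (id, version)
);
""")

def update_sensor(conn, sensor_id, new_value):
    # "Update" = mark the current row historic, insert a new current row.
    (version,) = conn.execute(
        "SELECT MAX(version) FROM sensor WHERE id = ? AND status_id = 1",
        (sensor_id,)).fetchone()
    conn.execute("UPDATE sensor SET status_id = 2 WHERE id = ? AND version = ?",
                 (sensor_id, version))
    conn.execute("INSERT INTO sensor (id, version, value, status_id)"
                 " VALUES (?, ?, ?, 1)", (sensor_id, version + 1, new_value))

conn.execute("INSERT INTO sensor VALUES (1, 1, 10.0, 1)")
update_sensor(conn, 1, 12.5)

current = conn.execute(
    "SELECT value FROM sensor WHERE id = 1 AND status_id = 1").fetchone()
print(current)  # (12.5,)
```

Note how the primary key had to become (id, version), exactly as the answer warns for non-identity keys.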
I want to design a changelog for a few tables. Let's call one of them restaurant. Every time a user modifies the list of restaurants, the change should be logged.
Idea 1
My first idea was to create 2 tables. One which contains all the restaurants RESTAURANT_VALUE (restaurantId*, restaurantValueId*, address, phone, ..., username, insertDate). Every time a change is made it creates a new entry. Then a table RESTAURANT (restaurantId*, restaurantValueId) which will link to the current valid restaurantValueId. So one table that holds the current and the previous version.
Idea 2
It starts with 2 tables as well. One of them contains all current restaurants, e.g. RESTAURANT_CURRENT, and a second table contains all changes, RESTAURANT_HISTORY. Therefore both need to have exactly the same columns. Every time a change occurs, the values from the 'current' table are copied into the history table, and the new version is written to the 'current' table.
My opinion
Idea 1 doesn't care whether columns will ever be added, so maintenance and adding columns would be easy. However, I think as the database grows... wouldn't it slow down? Idea 2 has the advantage that the table with the current values never holds any 'old' stuff and doesn't get crowded.
Theoretically, I think Idea 1 is the one to go with.
What do you think? Would you go for Idea 1 or another one? Are there any other important practical considerations I am not aware of?
The approach strongly depends on your needs. Why would you want a history table?
If it's just for auditing purposes, then make a separate restaurant_history table (idea 2) to keep the history aside. If you want to present the history in the application, then go for a single restaurants table with one of the options below:
seq_no - a record version number incremented with each update. If you need the current data, you must search for the highest seq_no for the given restaurant_id(s), so optionally add a current marker as well, allowing a straightforward current = true filter
valid_from, valid_to - where valid_to is NULL for the current record
Sometimes there is a need to query efficiently which attributes exactly changed. To do this easily, you can consider a history table at the attribute level: (restaurant_id, attribute, old_value, new_value, change_date, user).
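A sketch of that attribute-level history table in SQLite (via Python's sqlite3); the restaurant data and values here are invented for illustration. With one row per changed attribute, "which attributes changed, and when" becomes a plain query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE restaurant_history (
    restaurant_id INTEGER,
    attribute     TEXT,
    old_value     TEXT,
    new_value     TEXT,
    change_date   TEXT,
    user          TEXT
)""")

conn.executemany(
    "INSERT INTO restaurant_history VALUES (?, ?, ?, ?, ?, ?)",
    [(1, 'phone',   '555-1111', '555-2222', '2024-01-05', 'alice'),
     (1, 'address', 'Main St',  'Oak Ave',  '2024-02-10', 'bob'),
     (2, 'phone',   '555-9999', '555-8888', '2024-02-11', 'alice')])

# Which attributes of restaurant 1 changed, and when?
rows = conn.execute("""
    SELECT attribute, old_value, new_value, change_date
    FROM restaurant_history
    WHERE restaurant_id = 1
    ORDER BY change_date
""").fetchall()
for attribute, old, new, when in rows:
    print(f"{when}: {attribute} changed from {old!r} to {new!r}")
```

The trade-off is that every value is stored as text, which circles back to the string-conversion concern raised in the earlier question.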
I need to design a history table to keep track of multiple values that were changed on a specific record when edited.
Example:
The user is presented with a page to edit the record.
Title: Mr.
Name: Joe
Tele: 555-1234
DOB: 1900-10-10
If a user changes any of these values I need to keep track of the old values and record the new ones.
I thought of using a table like this:
History---------------
id
modifiedUser
modifiedDate
tableName
recordId
oldValue
newValue
One problem with this is that it will create multiple entries for each edit.
I was thinking about having another table to group them, but you still have the same problem.
I was also thinking about keeping a full copy of the row in the history table, but that doesn't seem efficient either.
Any ideas?
Thanks!
I would recommend that for each table whose history you want to track, you have a second table (e.g. tblCustomer and tblCustomer_History) with an identical format, plus a date column.
Whenever an edit is made, you insert the old record into the history table along with the date/time. This is very easy to do and requires few code changes (usually just a trigger).
This has the benefit of keeping your 'real' tables as small as possible, while giving you a complete history of all the changes that are made.
Ultimately, however, it will come down to how you want to use this data. If it's just for auditing purposes, this method is simple and has little downside beyond the extra disk space, with little or no impact on your main system.
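A minimal sketch of this per-table history approach in SQLite (via Python's sqlite3). tblCustomer/tblCustomer_History are the names from the answer; the columns are assumptions. The trigger copies the OLD row into the history table on every update:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tblCustomer (
    id    INTEGER PRIMARY KEY,
    name  TEXT,
    tele  TEXT
);

-- Identical format plus a date column.
CREATE TABLE tblCustomer_History (
    id         INTEGER,
    name       TEXT,
    tele       TEXT,
    changed_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Archive the pre-update image of the row.
CREATE TRIGGER customer_history AFTER UPDATE ON tblCustomer
BEGIN
    INSERT INTO tblCustomer_History (id, name, tele)
    VALUES (OLD.id, OLD.name, OLD.tele);
END;
""")

conn.execute("INSERT INTO tblCustomer VALUES (1, 'Joe', '555-1234')")
conn.execute("UPDATE tblCustomer SET tele = '555-5678' WHERE id = 1")

old_rows = conn.execute(
    "SELECT tele FROM tblCustomer_History WHERE id = 1").fetchall()
print(old_rows)  # [('555-1234',)]
```

One history row per edit, no matter how many fields changed, which is exactly the multiple-entries problem the question raised with the column-level design.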
You should define what type of efficiency you're interested in: you can have efficiency of storage space, efficiency of effort required to record the history (transaction cost), or efficiency of time to query for the history of a record in a specific way.
I notice you have a table name in your proposed history table, which implies an intention to record the history of more than one table. That rules out storing an exact copy of the record in your history table, unless all of the tables you're tracking will always have the same structure.
If you deal with columns separately, i.e. you record only one column value for each history record, you'll have to devise a polymorphic data type that is capable of accurately representing every column value you'll encounter.
If efficiency of storage space is your main concern, then I would break the history into multiple tables. This would mean having a new column-value table linked to both an edit-event table and a column-definition table. The edit-event table would record the user and timestamp; the column-definition table would record the table, column, and data type. As @njk noted, you don't need the old column value because you can always query for the previous edit to get it. The main reason this approach would be expected to save space is the assumption that, generally speaking, users will be editing a small subset of the available fields.
If efficiency of querying is your main concern, I would set up a history table for every table you're tracking and add a user and time stamp field to each history table. This should also be efficient in terms of transaction cost for an edit.
You don't need to record old and new value in a history table. Just record the newest value, author and date. You can then just fetch the most recent record for some user_id based on the date of the record. This may not be the best approach if you will be dealing with a lot of data.
user (id, user_id, datetime, author, ...)
Sample data
id user_id datetime author user_title user_name user_tele ...
1 1 2012-11-05 11:05 Bob
2 1 2012-11-07 14:54 Tim
3 1 2012-11-12 10:18 Bob
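A sketch of the "just fetch the most recent record" idea in SQLite (via Python's sqlite3). The column names follow the sample data above; the stored values are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
CREATE TABLE user_history (
    id INTEGER PRIMARY KEY, user_id INTEGER, datetime TEXT,
    author TEXT, user_name TEXT
)""")
conn.executemany("INSERT INTO user_history VALUES (?, ?, ?, ?, ?)", [
    (1, 1, '2012-11-05 11:05', 'Bob', 'Joe'),
    (2, 1, '2012-11-07 14:54', 'Tim', 'Joseph'),
    (3, 1, '2012-11-12 10:18', 'Bob', 'Joe B.')])

# The current state of user 1 is simply the newest row for that user_id.
row = conn.execute("""
    SELECT user_name, author FROM user_history
    WHERE user_id = 1
    ORDER BY datetime DESC
    LIMIT 1
""").fetchone()
print(row)  # ('Joe B.', 'Bob')
```

The "old value" of any field is just the same query one row further back, which is why this schema doesn't need explicit old/new columns.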
Say I have 6 tables.
Workstation
Workstation_CL
Location
Location_CL
Features
Features_CL
I am currently using triggers to insert into the "_CL" version of each table, with an additional field that denotes whether the change was an "UPDATE", "INSERT" or "DELETE".
The Workstation table keeps track of the "modified_by" user. If a user updates the location of a "Workstation" object, the "Location" table gets updated as well as the "Workstation" table. The only modification to the Workstation table is the "modified_by" field, so that I will know who made the change.
The problem I am having is when I think about pulling an audit report: how will I link records in "Location_CL" to the ones in "Workstation_CL" when both are populated by separate triggers?
Somehow my question portion was erased; sorry about that.
Question: how can I get some type of unique identifier into both "Workstation_CL" and "Location_CL" so that I can identify each revision? For instance, when I pull all records from "Location_CL" and see all location changes, how do I pull the username from "Workstation_CL" for the user who made each location change?
Give each revision a GUID generated by the trigger. Populate a field (RevisionId) in both tables with the value.
You need 2, maybe 3 columns on each audit table.
1) Timestamp, so you know when the changes were made.
2) User changed, so you can track who made the changes - I assume that Location can change independently of Workstation.
3) You might need an identifier for the transaction, too. I THINK you can get an id from the DB, though I'm not sure.
I don't think you can have an effective report without timestamps and users, though, and I don't think you can get by with the user recorded on only one table.
During the trigger event, I was able to execute the following:
SELECT @trans_id = transaction_id FROM sys.dm_tran_current_transaction
which gives me the transaction id for the current operation.
With that, I am able to insert it into the corresponding _CL table and then perform selects that match the auto-generated ids.
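A sketch of the shared-revision-id idea, using SQLite via Python's sqlite3 instead of SQL Server. Both audit tables store the same RevisionId, so changes made in one logical edit can be joined back together. (sys.dm_tran_current_transaction is SQL Server specific; here a GUID is generated in application code, playing the role of the transaction id.)

```python
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Workstation_CL (
    workstation_id INTEGER, modified_by TEXT, action TEXT, RevisionId TEXT
);
CREATE TABLE Location_CL (
    location_id INTEGER, location TEXT, action TEXT, RevisionId TEXT
);
""")

rev = str(uuid.uuid4())  # one id shared by every row of this logical change
conn.execute("INSERT INTO Workstation_CL VALUES (10, 'andy', 'UPDATE', ?)",
             (rev,))
conn.execute("INSERT INTO Location_CL VALUES (5, 'Building B', 'UPDATE', ?)",
             (rev,))

# Audit report: who moved which workstation where, joined on RevisionId.
row = conn.execute("""
    SELECT w.modified_by, l.location
    FROM Workstation_CL w
    JOIN Location_CL l ON l.RevisionId = w.RevisionId
""").fetchone()
print(row)  # ('andy', 'Building B')
```

In the SQL Server setup described above, the triggers themselves would populate RevisionId from the current transaction id rather than from application code.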
This is certainly a long shot, and is by no means vital to my development requirements, so if there's not a readily available solution the please note; I won't be too upset ;)
I was wondering if there was a way to see if a field value had been changed or updated within a date range in Access.
For example, I have a status field in, let's say, table1 that may read "active" or "inactive" (enforced simply via validation, with no related tables for this field). I would like to see how many records changed from "inactive" to "active" within 30 days.
I have found a solution for timestamping a form update, and if worst comes to worst, I can amend it to apply to a field, but I would rather be able to search for the value changes themselves than for the date the field was last changed.
Again, if this strikes anyone as impossible, then please don't worry yourself too much.
Regards,
Andy
You need to have a change history.
A separate table stores the key of the row as a foreign key, along with the status and a timestamp. Every change inserts a new row into this table.
Depending on the technology you are using, the easiest way is to use a trigger. The trigger can check whether the field has changed (old.status <> new.status) and insert a new row into the history table.
If you do not want to keep history, then a single field in the same table can do the job.
The field can be a datetime, and the trigger can update it when the status changes.
A rowversion/timestamp column will not do the job, because it also changes when any other field changes.
So in this case, too, a trigger can do the job.
Alternatively, depending on the type of client, the client can detect whether the field changed and update the datetime field itself.