Check for a field value being updated/changed in SQL (Access)

This is certainly a long shot, and is by no means vital to my development requirements, so if there's not a readily available solution then please note: I won't be too upset ;)
I was wondering if there was a way to see if a field value had been changed or updated within a date range in Access.
For example, I have a status field in, let's say, table1 that may read "active" or "inactive" (simply via validation, no related tables for this field). I would like to see how many records changed from "inactive" to "active" within 30 days.
I have found a solution for timestamping a form update, and if worst comes to worst, I can just amend this to apply to a field, but I would rather be able to search for the value changes than for the date the field was last changed.
Again, if this strikes anyone as impossible, then please don't worry yourself too much.
Regards,
Andy

You need to have a change history.
Use a separate table which stores the key of the row as a foreign key, the status, and the timestamp. Every change inserts a new row into this table.
Depending on the technology you are using, the easiest way is to use a trigger. The trigger can check whether the field has changed (old.status <> new.status) and insert a new row into the history table.
If you do not want to keep a history, then a single field in the same table can do the job.
That field can be a datetime, and again the trigger can update it when the status changes.
A timestamp column will not do the job, because it changes whenever any other field in the row changes.
So in this case, too, a trigger can do the job.
Alternatively, depending on the type of client, the client can detect that the field has changed and update the datetime field itself.
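For the Access case in the question, the history table and the 30-day count might look roughly like this (Access SQL; the table and column names are made up for illustration, and since Access has no conventional triggers the insert into the history table would typically be done from the form's BeforeUpdate event or a data macro):

-- Hypothetical history table; one row per status change.
CREATE TABLE StatusHistory (
    HistoryID  COUNTER PRIMARY KEY,   -- Access autonumber (AUTOINCREMENT also works)
    RecordID   LONG,                  -- key of the changed row in table1
    OldStatus  TEXT(20),
    NewStatus  TEXT(20),
    ChangedOn  DATETIME
);

-- How many records changed from "inactive" to "active" in the last 30 days?
SELECT COUNT(*) AS ChangedToActive
FROM StatusHistory
WHERE OldStatus = 'inactive'
  AND NewStatus = 'active'
  AND ChangedOn >= DateAdd("d", -30, Date());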

Related

SQL Server - store datetime and decimal

I'm developing a change history table where I'll basically record the old and new value for changes in fields of two types: decimal and datetime.
To make it simple, I was thinking about creating a string field and converting the values to string before storing them in the table.
My problem is that later I'll have to create a field in the report to show the difference between the changes (for example, if the date has changed from 01/20/2015 to 01/27/2015 the difference will be 7, and so on). I do not want to create a field in the table to record the difference between the fields, I want to do it on the report side.
My question is:
Is there any way to store those two kinds of data (decimal and datetime) to make it simple to do comparisons later? Because if I store them as strings I'll have to convert them twice - once before creating the record in the DB and again to see what the difference between them is.
I believe the best approach would be what I like to call the never delete, never update approach.
Basically, you add a column to your source table for the record status, which can be either current, historic or deleted (use a tinyint for that, and be sure to link it to a row status table for readability). Then, instead of deleting a record you update its status to deleted, and instead of updating it you change its status to historic and insert a new record with the new data.
Naturally, this approach has its price, since you will have to write an INSTEAD OF UPDATE trigger, but that is a small price to pay compared with other approaches to keeping history data.
Also, if your primary key is not an identity column, you will need to add this column to your primary key (and any other unique constraints you might have).
You also might want to add a filter to your non-clustered indexes so that they will only index the records where the status is current.
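A minimal sketch of that pattern in T-SQL (all object names here are illustrative assumptions, not from the original post):

-- Lookup table for readability of the status column.
CREATE TABLE RowStatus (
    StatusId   tinyint     PRIMARY KEY,
    StatusName varchar(20) NOT NULL
);
INSERT INTO RowStatus VALUES (1, 'current'), (2, 'historic'), (3, 'deleted');

CREATE TABLE Price (
    PriceId   int IDENTITY  PRIMARY KEY,
    ItemCode  varchar(20)   NOT NULL,
    Amount    decimal(10,2) NULL,
    ValidFrom datetime      NULL,
    StatusId  tinyint       NOT NULL DEFAULT 1 REFERENCES RowStatus (StatusId)
);
GO

-- Updates never overwrite data: the existing row becomes 'historic'
-- and the new values are inserted as a fresh 'current' row.
CREATE TRIGGER trg_Price_Update ON Price
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE p SET StatusId = 2
    FROM Price p JOIN inserted i ON p.PriceId = i.PriceId;

    INSERT INTO Price (ItemCode, Amount, ValidFrom, StatusId)
    SELECT i.ItemCode, i.Amount, i.ValidFrom, 1
    FROM inserted i;
END;
GO

-- Filtered index: uniqueness is only enforced for current rows.
CREATE UNIQUE INDEX UX_Price_Current ON Price (ItemCode) WHERE StatusId = 1;

Because the historic rows keep their native decimal and datetime types, the report can compute the difference between versions directly (DATEDIFF, subtraction, etc.) without any string conversion.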

What is the best method of logging data changes and user activity in an SQL database?

I'm starting a new application and was wondering what the best method of logging is. Some tables in the database will need to have every change recorded, and the user that made the change. Other tables may just need to have the last modified time recorded.
In previous applications I've used different methods to do this but want to hear what others have done.
I've tried the following:
Add a "modified" date-time field to the table to record the last time it was edited.
Add a secondary table just for recording changes in a primary table. Each row in the secondary table represents a changed field in the primary table. So one record update in the primary could create several records in the secondary table.
Add a table similar to no.2 but it records edits across three or fours tables, reference the table it relates to in an additional field.
What methods do you use and would recommend?
Also, what is the best way to record deleted data? I never like the idea that a user can permanently delete a record from the DB, so usually I have a boolean 'deleted' field which is changed to true when the record is deleted, and it is then filtered out of all queries at the model level. Any other suggestions on this?
Last one: what is the best method for recording user activity? At the moment I have a table which records logins/logouts/password changes etc., and depending on what the action is, gives it a code: 1, 2, 3, etc.
Hope I haven't crammed too much into this question. Thanks.
I know it's a very old question, but I wanted to add a more detailed answer, as this is the first link I got when googling about DB logging.
There are basically two ways to log data changes:
on the application server layer
on the database layer.
If you can, just use logging on server side. It is much more clear and flexible.
If you need to log on the database layer you can use triggers, as #StanislavL said. But triggers can slow down your database and they limit you to storing the change log in the same database.
Also, you can look at transaction log monitoring.
For example, in PostgreSQL you can use the logical replication mechanism to stream changes in JSON format from your database to anywhere.
In a separate service you can then receive, handle and log the changes in any form and in any database (for example, just put the JSON you receive into Mongo).
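A rough sketch of that approach (this assumes wal_level = logical and that the wal2json output plugin is installed; the slot name is arbitrary):

-- Create a logical decoding slot that emits changes as JSON.
SELECT * FROM pg_create_logical_replication_slot('audit_slot', 'wal2json');

-- Peek at pending changes without consuming them.
SELECT lsn, xid, data
FROM pg_logical_slot_peek_changes('audit_slot', NULL, NULL);

A consumer service would instead call pg_logical_slot_get_changes(...), which advances the slot, and forward each JSON document wherever it needs to go (e.g. a MongoDB collection).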
You can add triggers to any tracked table to listen for insert/update/delete. In the triggers, just check the NEW and OLD values and write them to a special table with columns
table_name
entity_id
modification_time
previous_value
new_value
user
It's hard to figure out which user made the change, but it is possible if you add a changed_by column to the table you are listening to.
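A generic sketch of such a trigger in PostgreSQL (the change_log layout follows the columns above; tracked_table and its id column are illustrative assumptions, and rows are logged as whole-row text values to keep the example short):

CREATE TABLE change_log (
    table_name        text,
    entity_id         bigint,
    modification_time timestamptz DEFAULT now(),
    previous_value    text,
    new_value         text,
    changed_by        text
);

CREATE TABLE tracked_table (
    id      bigserial PRIMARY KEY,
    payload text
);

CREATE OR REPLACE FUNCTION log_change() RETURNS trigger AS $$
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO change_log (table_name, entity_id, new_value, changed_by)
        VALUES (TG_TABLE_NAME, NEW.id, NEW::text, current_user);
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO change_log (table_name, entity_id, previous_value, new_value, changed_by)
        VALUES (TG_TABLE_NAME, NEW.id, OLD::text, NEW::text, current_user);
    ELSE  -- DELETE
        INSERT INTO change_log (table_name, entity_id, previous_value, changed_by)
        VALUES (TG_TABLE_NAME, OLD.id, OLD::text, current_user);
    END IF;
    RETURN NULL;  -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_audit
AFTER INSERT OR UPDATE OR DELETE ON tracked_table
FOR EACH ROW EXECUTE FUNCTION log_change();  -- EXECUTE PROCEDURE on PostgreSQL < 11

current_user here records the database user; if the application writes a changed_by column into the tracked table itself, the trigger can copy NEW.changed_by instead.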

use triggers to keep history of relational tables

Say I have 6 tables.
Workstation
Workstation_CL
Location
Location_CL
Features
Features_CL
I am currently using triggers to do inserts into the "_CL" version of each table with an additional field that denotes whether the change was an "UPDATE", "INSERT" or "DELETE".
The Workstation table keeps track of the "modified_by" user. If a user updates the location of a "Workstation" object, the "Location" table gets updated as well as the "Workstation" table. The only modification to the Workstation table is the "modified_by" field, so that I will know who made the change.
The problem I am having is when I think about pulling an audit report: how will I link records in "Location_CL" to the ones in "Workstation_CL", given that both are populated by separate triggers?
Somehow my question portion was erased; sorry about that.
Question: how can I pull some type of unique identifier to have in both "Workstation_CL" and "Location_CL" so that I can identify each revision? For instance, when I pull all records from "Location_CL" and see all the location changes, how do I pull the username from "Workstation_CL" for whoever made each location change?
Give each revision a GUID generated by the trigger. Populate a field (RevisionId) in both tables with the value.
You need 2, maybe 3 columns on each audit table.
1) Timestamp, so you know when the changes were made.
2) User changed, so you can track who made the changes - I assume that Location can change independently of Workstation.
3) You might need an identifier for the transaction, too. I THINK you can get an id from the DB, though I'm not sure.
I don't think you can have an effective report without timestamps and users, though, and I don't think you can get away with having the user on only one table.
During the trigger event, I was able to exec the following:
SELECT @trans_id = transaction_id FROM sys.dm_tran_current_transaction
which gives me the transaction id for the current operation.
With that, I am able to insert it into the corresponding _CL table and then perform selects that match on the auto-generated IDs.
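A sketch of how that fits together in T-SQL (the trans_id and change_type column names on the _CL tables are assumptions):

-- Inside the AFTER UPDATE trigger on Workstation:
DECLARE @trans_id bigint;
SELECT @trans_id = transaction_id FROM sys.dm_tran_current_transaction;

INSERT INTO Workstation_CL (workstation_id, modified_by, change_type, trans_id)
SELECT workstation_id, modified_by, 'UPDATE', @trans_id
FROM inserted;

-- The Location trigger does the same, so the audit report can line the
-- two change logs up on the shared transaction id:
SELECT w.modified_by, l.*
FROM Location_CL l
JOIN Workstation_CL w ON w.trans_id = l.trans_id;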

SQL Schema design question - delete flags

In our database schema, we like to use delete flags. When a record is deleted, we update that field rather than run a delete statement. The rest of our queries then check the delete flag when returning data.
Here is the problem:
The delete flag is a date, with a default value of NULL. This is convenient because when a record is deleted we can easily see the date that it was deleted on.
However, to enforce unique constraints properly, we need to include the delete flag in the unique constraint. The problem is that MS SQL behaves in accordance with what we want (for this design), but in PostgreSQL, if any field in a multi-column unique constraint is NULL, the row is allowed through. This behavior fits the SQL standard, but it breaks our design.
The options we are considering are:
1. Make the default value for the deleted field some hardcoded date.
2. Add a bit flag for deleted, so each table would have two delete-related fields - date_deleted and is_deleted (for example).
3. Change date_deleted to is_deleted (a bit field).
I suspect option 1 would be a performance hit; each query would have to check for the hardcoded date rather than just checking IS NULL. Plus, it feels wrong.
Option 2 also feels wrong - two fields for "deleted" is not DRY.
With option 3, we lose the "date" information. There is a modified field which would, in theory, reflect the date deleted, but only if the last update to the row was the update to the delete bit.
So, Any suggestions? What have you done in the past to deal with "delete flags" ?
Update
Thanks to everyone for the super quick, and thoughtful responses.
We ended up going with a simple boolean field and a modified date field (with a trigger). I just noticed the partial index suggestion, and that looks like the perfect solution for this problem (but I haven't actually tried it).
If retaining the deleted records is important to you, have you considered just moving them to a history table?
This could easily be achieved with a trigger.
Application logic doesn't need to account for this deleted flag.
Your tables would stay lean and mean when selecting from it.
It would solve your problem with unique indexes.
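A minimal sketch of that idea in SQL Server terms (the Customer table and its columns are illustrative assumptions): an AFTER DELETE trigger copies each deleted row into a history table before it disappears, along with when and by whom it was removed.

CREATE TABLE Customer (
    CustomerId int PRIMARY KEY,
    Name       varchar(100)
);

CREATE TABLE Customer_History (
    CustomerId int,
    Name       varchar(100),
    DeletedOn  datetime DEFAULT GETDATE(),
    DeletedBy  sysname  DEFAULT SUSER_SNAME()
);
GO

CREATE TRIGGER trg_Customer_Delete ON Customer
AFTER DELETE
AS
BEGIN
    INSERT INTO Customer_History (CustomerId, Name)
    SELECT CustomerId, Name FROM deleted;
END;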
Option 3, we lose the "date"
information. There is a modified
field, which would, in theory reflect
the date deleted, but only assuming
the last update to the row was the
update to the delete bit.
Is there a business reason that the record would be modified after it was deleted? If not, are you worrying about something that's not actually an issue? =)
In the system I currently work on we have the following "metadata" columns: _Deleted, _CreatedStamp, _UpdatedStamp, _UpdatedUserId, _CreatedUserId ... quite a bit, but it's important for this system to carry that much data. I'd suggest going down the road of having a separate Deleted flag alongside a Modified Date / Deleted Date. "Disk space is cheap", and having two fields to represent a deleted record isn't world-ending, if that's what you have to do for the RDBMS you're using.
What about triggers? When a record is deleted, a post-update trigger copies the row into an archive table which has the same structure, plus additional columns for the date/time and perhaps the user that deleted it.
That way your "live" table only has records that are actually live, so is better performance-wise, and your application doesn't have to worry about whether a record has been deleted or not.
One of my favourite solutions is an is_deleted bit flag, and a last_modified date field.
The last_modified field is updated automatically every time the row is modified (using any technique supported by your DBMS.) If the is_deleted bit flag is TRUE, then the last_modified value implies the time when the row was deleted.
You will then be able to set the default value of last_modified to GETDATE(). No more NULL values, and this should work with your unique constraints.
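In SQL Server terms that could look something like this (the table and trigger names are made up; with the default RECURSIVE_TRIGGERS setting the trigger does not re-fire itself):

CREATE TABLE Account (
    AccountId     int IDENTITY PRIMARY KEY,
    Email         varchar(255) NOT NULL,
    is_deleted    bit      NOT NULL DEFAULT 0,
    last_modified datetime NOT NULL DEFAULT GETDATE()
);
GO

-- Keep last_modified current on every change; once is_deleted is set to 1,
-- last_modified effectively records the deletion time.
CREATE TRIGGER trg_Account_Touch ON Account
AFTER UPDATE
AS
BEGIN
    UPDATE a SET last_modified = GETDATE()
    FROM Account a JOIN inserted i ON a.AccountId = i.AccountId;
END;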
Just create a conditional unique constraint:
CREATE UNIQUE INDEX i_bla ON yourtable (colname) WHERE date_deleted IS NULL;
Would creating a multi column unique index that included the deleted date achieve the same constraint limit you need?
http://www.postgresql.org/docs/current/interactive/indexes-unique.html
Alternatively, could you store a non-NULL value, setting the deleted date to the minimum SQL date (0, i.e. "1/1/1753") instead of NULL for undeleted records?
Is it possible to exclude the deleted date field from your unique index? In what way does this field contribute to the uniqueness of each record, especially if the field is usually null?

SQL - Table Design - DateCreated and DateUpdated columns

For my application there are several entity classes, User, Customer, Post, and so on
I'm about to design the database and I want to store the date when the entities were created and updated. This is where it gets tricky. Sure, one option is to add created_timestamp and update_timestamp columns to each of the entity tables, but isn't that redundant?
Another possibility could be to create a log table that stores this information, and it could be made to keep track of updates for any entity.
Any thoughts? I'm leaning toward implementing the latter.
The single-log-table-for-all-tables approach has two main problems that I can think of:
The design of the log table will (probably) constrain the design of all the other tables. Most likely the log table would have one column named TableName and then another column named PKValue (which would store the primary key value for the record you're logging). If some of your tables have compound primary keys (i.e. more than one column), then the design of your log table would have to account for this (probably by having columns like PKValue1, PKValue2 etc.).
If this is a web application of some sort, then the user identity that would be available from a trigger would be the application's account, instead of the ID of the web app user (which is most likely what you really want to store in your CreatedBy field). This would only help you distinguish between records created by your web app code and records created otherwise.
CreatedDate and ModifiedDate columns aren't redundant just because they're defined in each table. I would stick with that approach and put insert and update triggers on each table to populate those columns. If I also needed to record the end-user who made the change, I would skip the triggers and populate the timestamp and user fields from my application code.
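One plausible per-table shape, with the user fields left to application code as described above (all names here are assumptions):

CREATE TABLE Post (
    PostId       int IDENTITY PRIMARY KEY,
    Title        varchar(200) NOT NULL,
    CreatedDate  datetime NOT NULL DEFAULT GETDATE(),
    CreatedBy    int NULL,           -- web-app user id, set by application code
    ModifiedDate datetime NULL,
    ModifiedBy   int NULL
);

-- Application-side update; @Title, @CurrentUserId and @PostId are supplied by the web app.
UPDATE Post
SET Title = @Title, ModifiedDate = GETDATE(), ModifiedBy = @CurrentUserId
WHERE PostId = @PostId;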
I do the latter, with a "log" or "events" table. In my experience, the "updated" timestamp becomes frustrating pretty quick, because a lot of the time you find yourself in a fix where you want not just the very latest update time.
How often will you need to include the created/updated timestamps in your presentation layer? If the answer is anything more than "once in a great great while", I think you would be better served by having those columns in each table.
On a project I worked on a couple of years ago, we implemented triggers which updated what we called an audit table (it stored basic information about the changes being made, one audit table per table). This included modified date (and last modified).
They were only applied to key tables (not joins or reference data tables).
This removed a lot of the normal frustration of having to account for LastCreated & LastModified fields, but introduced the annoyance of keeping the triggers up to date.
In the end the trigger/audit table design worked well and all we had to remember was to remove and reapply the triggers before ETL(!).
It's for a web based CMS I work on. The creation and last updated dates will be displayed on most pages and there will be lists for the last created (and updated) pages. The admin interface will also use this information.