What's the best way to audit log DELETEs?

The user id in your connection string is not a variable, and it is different from the user id of your program (which can be a GUID, for example). How do you audit log deletes if your connection string's user id is static?
The best place to log inserts/updates/deletes is in triggers. But with a static connection string, it's hard to log who deleted something. What's the alternative?

With SQL Server, you could use CONTEXT_INFO to pass information to the trigger.
I use this in code (called by web apps) where I have to use triggers (e.g. multiple write paths on the table), which is exactly the situation where I can't put my logic into stored procedures.
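A minimal sketch of the idea (the table, audit table, and user value are placeholders, not from the original post): the app stamps its logical user into the session's context info right before the delete, and the trigger reads it back.

-- Trigger on the audited table.
CREATE TRIGGER dbo.Orders_AuditDelete ON dbo.Orders
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- CONTEXT_INFO() is zero-padded to 128 bytes; strip the padding.
    DECLARE @who varchar(128) =
        REPLACE(CONVERT(varchar(128), CONTEXT_INFO()), CHAR(0), '');
    INSERT INTO dbo.Orders_Audit (OrderID, DeletedBy, DeletedAt)
    SELECT d.OrderID, @who, GETDATE()
    FROM deleted AS d;
END;
GO

-- In the application's connection, before issuing the DELETE:
DECLARE @ctx varbinary(128) = CAST('app_user_42' AS varbinary(128));
SET CONTEXT_INFO @ctx;
DELETE FROM dbo.Orders WHERE OrderID = 1;

On SQL Server 2016 and later, sp_set_session_context / SESSION_CONTEXT() is a roomier alternative to the single 128-byte CONTEXT_INFO slot.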

We have a similar situation. Our web application always runs as the same database user, but with different logical users that our application tracks and controls.
We generally pass the logical user ID as a parameter into each stored procedure. To track deletes, we generally don't delete the row; we just mark the status as deleted and set the LastChgID and LastChgDate fields accordingly. For important tables, where we keep an audit log (a copy of every change state), we use the above method and a trigger copies the row to an audit table; the LastChgID is already set properly, so the trigger doesn't need to worry about getting the ID.
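A hedged sketch of that pattern (table, column, and procedure names here are illustrative, not the poster's actual schema):

-- "Delete" is really an update: flag the row and record the logical user.
CREATE PROCEDURE dbo.Customer_Delete
    @CustomerID    int,
    @LogicalUserID int   -- application-level user, not the SQL login
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Customer
    SET    Status      = 'D',
           LastChgID   = @LogicalUserID,
           LastChgDate = GETDATE()
    WHERE  CustomerID = @CustomerID;
    -- An AFTER UPDATE trigger can copy the row to the audit table as-is;
    -- LastChgID is already correct, so the trigger never has to work out
    -- who made the change.
END;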

Related

SQL Audit options for update commands where an existing column does not change

I have a need to audit changes where triggers are not performing well enough to use. In the audit I need to know exactly who made the change, based on a column named LastModifiedBy (gathered at login and used in inserts and updates). We use a single SQL account to access the database, so I can't use that to tie a change to a user.
Scenario: We are now researching the SQL transaction log to determine what has changed. The table has a LastUpdatedBy column that we used with the trigger solution. With the previous solution I had before-and-after transaction data, so I could tell whether the user making the change was the same user or a new user.
Problem: While looking at tools like DBForge Transaction Log and ApexSQL Audit, I can't seem to find a solution that works. I can see the UPDATE command, but I can't tell whether all the fields actually changed (just because SQL says to update a field does not mean its value actually changed). ApexSQL Audit does have a before-and-after capability, but if the LastUpdatedBy field does not change, then I don't know what the original value was.
Trigger problem: Large data updates and inserts are crushing performance because of the triggers. I am gathering before-and-after data in the triggers so I can tell exactly what changed, but this volume of data is turning a 2-second update of 1000 rows into one that takes longer than 3 minutes.

MS ACCESS Lock a table during update, blocking

How can I lock a table to prevent other users from querying it while I update its contents?
Currently my data is updated by wiping the table and re-populating it (I know it's not the best way to update data, but the source data has no unique key to do a record-by-record update, and this is the only way). There exists the unlikely but possible scenario where a user may access the table in the middle of the update and catch it while it is empty, thus returning bad info.
Is there, at the SQL (or code) level, a way to create a blocking statement that will wait for a DB update to complete prior to querying?
Access has very little locking capability. Unless you're storing your data in a different backend, you can only set a database-wide lock or no lock at all.
There is some locking capability, in the form of table locks taken while the structure of a table is being changed, but as far as I can find, that's not available to the user (neither through the GUI nor through VBA).
Note that both ADO and DAO support locking (in ADO by setting the IsolationLevel, in DAO by setting dbDenyRead + dbDenyWrite when executing the query), but in my short testing, these options do absolutely nothing in Access.

In Oracle can you create a table that only exists while the database is running?

Is there a way in Oracle to create a table that only exists while the database is running and is only stored in memory? So if the database is restarted I will have to recreate the table?
Edit:
I want the data to persist across sessions. The reason is that the data is expensive to recreate but is also highly sensitive.
Using a temporary table would probably help performance compared to what happens today, but it's still not a great solution.
You can create a 100% ephemeral table that is usable for the duration of a session (typically shorter than the run time of the database) called a TEMPORARY table. The entire purpose of an in-memory table is to make reads faster. You will have to re-populate the table for each session, as its contents are discarded once the session completes.
Not exactly, no.
Oracle has the concept of a "global temporary table". With a global temporary table, you create the table once, as with any other table, and the table definition persists permanently, as with any other table.
The contents of the table, however, are not permanent. Depending on how you define it, the contents persist for either the life of the session (ON COMMIT PRESERVE ROWS) or the life of the transaction (ON COMMIT DELETE ROWS).
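A minimal example (table name and columns invented for illustration):

-- Definition persists like an ordinary table; each session sees only its
-- own rows, which vanish when the session (PRESERVE ROWS) or the
-- transaction (DELETE ROWS) ends.
CREATE GLOBAL TEMPORARY TABLE work_queue (
    id      NUMBER,
    payload VARCHAR2(4000)
) ON COMMIT PRESERVE ROWS;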
See the documentation for all the details:
http://docs.oracle.com/cd/E11882_01/server.112/e25494/tables003.htm#ADMIN11633
Hope that helps.
You can use Oracle's trigger mechanism to invoke a stored procedure when the database starts up or shuts down.
That way you could have the startup trigger create the table, and the shutdown trigger drop it.
You'd probably also want the startup trigger to handle the case where the table already exists, and truncate it, just in case the server stopped suddenly and the shutdown trigger wasn't called.
Oracle trigger documentation
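A sketch of that arrangement (the table and trigger names are placeholders, and the triggers must be created by a suitably privileged user):

CREATE OR REPLACE TRIGGER scratch_on_startup
AFTER STARTUP ON DATABASE
BEGIN
    -- If the instance died without running the shutdown trigger, a stale
    -- copy of the table may still exist; drop it quietly, then recreate.
    BEGIN
        EXECUTE IMMEDIATE 'DROP TABLE app.scratch_data PURGE';
    EXCEPTION
        WHEN OTHERS THEN NULL;   -- table wasn't there; nothing to do
    END;
    EXECUTE IMMEDIATE
        'CREATE TABLE app.scratch_data (id NUMBER, val VARCHAR2(100))';
END;
/

CREATE OR REPLACE TRIGGER scratch_on_shutdown
BEFORE SHUTDOWN ON DATABASE
BEGIN
    EXECUTE IMMEDIATE 'DROP TABLE app.scratch_data PURGE';
END;
/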
Using Oracle's Global Temporary Tables, you can create a table in memory and have it delete the data at the end of the transaction, or the end of the session.
If I understand correctly, you have some data that needs to be processed when the database is brought online and left available only as long as the database is online. The only use-case I can think of that would require this is if you're encrypting some data and you want to ensure that the unencrypted data is never written to disk.
If this is actually your use-case, I would recommend forgetting about trying to create your own solution for this and, instead, make use of Oracle's encrypted tablespaces or Transparent Data Encryption.
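For example, with a wallet/keystore already configured, an encrypted tablespace keeps its contents encrypted on disk while behaving normally to SQL (a sketch; the file name, sizes, and table are placeholders):

-- Everything stored in this tablespace is encrypted at rest; queries and
-- DML against tables placed in it are unchanged.
CREATE TABLESPACE secure_ts
    DATAFILE 'secure_ts01.dbf' SIZE 100M
    ENCRYPTION USING 'AES256'
    DEFAULT STORAGE (ENCRYPT);

CREATE TABLE sensitive_data (
    id     NUMBER,
    secret VARCHAR2(200)
) TABLESPACE secure_ts;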

What is the best way to maintain a LastUpdatedDate column in SQL?

Suppose I have a database table that has a datetime column recording the last time the row was updated or inserted. Which would be preferable:
Have a trigger update the field.
Have the program that's doing the insertion/update set the field.
The first option seems to be the easiest since I don't even have to recompile to do it, but that's not really a huge deal. Other than that, I'm having trouble thinking of any reasons to do one over the other. Any suggestions?
The first option can be more robust because the database will be maintaining the field. This comes with the possible overhead of using triggers.
If you could have other apps writing to this table in the future, via their own interfaces, I'd go with a trigger so you're not repeating that logic anywhere else.
If your app is pretty much it, or any other apps would access the database through the same datalayer, then I'd avoid that nightmare that triggers can induce and put the logic directly in your datalayer (SQL, ORM, stored procs, etc.).
Of course you'd have to make sure your time source (your app, your users' PCs, your SQL server) is accurate in either case.
Regarding why I don't like triggers:
Perhaps I was rash by calling them a nightmare. Like everything else, they are appropriate in moderation. If you use them for very simple things like this, I could get on board.
It's when the trigger code gets complex (and expensive) that triggers start to cause lots of problems. They are a hidden tax on every insert/update/delete query you execute (depending on the type of trigger). If that tax is acceptable then they can be the right tool for the job.
You didn't mention option 3: use a stored procedure to update the table. The procedure can set timestamps as desired.
Perhaps that's not feasible for you, but I didn't see it mentioned.
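A minimal sketch of option 3, assuming SQL Server and invented table/column names:

CREATE PROCEDURE dbo.Widget_Update
    @WidgetID int,
    @Name     varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- The proc owns the timestamp: every caller that goes through it gets
    -- a consistent, server-side time with no trigger involved.
    UPDATE dbo.Widget
    SET    Name            = @Name,
           LastUpdatedDate = GETDATE()
    WHERE  WidgetID = @WidgetID;
END;

The catch, as other answers note, is that anything writing to the table without going through the proc bypasses the timestamp.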
As long as I'm using a DBMS whose triggers I trust, I'd always go with the trigger option. It allows the DBMS to take care of as many things as possible, which is usually a good thing.
It would make sure, under any circumstances, that the timestamp column has the correct value. The overhead would be negligible.
The only thing against triggers is portability. If that's not an issue, I don't think there is any question which direction to go.
I would say trigger, just in case someone uses something besides your app to update the table. You probably also want a LastUpdatedBy column, using SUSER_SNAME() for the value; this way you can see who did the update.
I'm a proponent of stored procedures for everything. Your update proc could contain a GETDATE() for the column.
And I don't like triggers for this kind of update. Lack of visibility of triggers tends to cause confusion.
This sounds like business logic to me ... I would be more disposed to putting this in the code. Let the database manage the storage of data ... No more and no less.
Triggers are a blessing and a curse.
Blessing: You can use them to enable all kinds of custom constraint checking and data management without backend systems knowledge or changes.
Curse: You don't know what's happening behind your back: concurrency issues and deadlocks from additional objects brought into transactions that were not originally expected; phantom behavior, including session environment changes and unreliable rowcounts; excessive triggering of conditions; additional hotspot/performance penalties.
The answer to this question (update dates implicitly via a trigger, or explicitly in code) usually weighs heavily on context. For example, if you are using the last change date as an informational field, you might want to change it only when a 'user' actually makes salient changes to a row, versus an automated process that simply updates some sort of internal marker users don't care about.
If you are using the trigger for change synchronization, or you have no control over the code that is executing, a trigger makes a lot more sense.
My advice on trigger use is to be careful. Most systems allow you to filter execution based on the operation and the fields changed. Proper use of 'before' vs. 'after' triggers can have significant performance impacts.
Finally, a few systems are capable of executing a single trigger on multiple changes (multiple rows affected within a transaction); your code should be prepared to apply itself as a bulk update to multiple rows.
Normally I'd say do it database-side, but it depends on your application. If you're using LINQ-to-SQL you can just set the field as Timestamp and have your DAL use the Timestamp field for concurrency. It handles it for you automatically, so having to repeat code is a non-event.
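For reference, adding that column is a one-liner (table name invented):

-- rowversion (the type formerly called "timestamp") is maintained by the
-- engine on every insert/update, so the DAL can use it for optimistic
-- concurrency without any application code.
ALTER TABLE dbo.Widget ADD RowVer rowversion;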
If you're writing your DAL yourself, though, I'd be more likely to handle this on the database side, as it makes writing user interfaces far more flexible. I'd likely do it in a stored procedure that has "public" access, with the tables locked down: you don't want just any clown coming along and bypassing your stored procedure by writing to the tables directly. The exception is if you plan on making your DAL a standalone component that every future application must use to access the database, in which case you could code it directly into the DAL; of course, you should only do that if you can guarantee that everyone accessing the database is doing so through your DAL component.
If you're going to allow "public" access to the database to insert into tables, then you'll have to go with the trigger, because otherwise anyone can insert/update a single field in the table and the last-updated field might never get set.
I would have the date maintained at the database, i.e., via a trigger, stored procedure, etc. In most database-driven applications the user app is not going to be the only means by which business users get at the data: there are reporting tools, extracts, user SQL, etc. There are also updates and corrections made by the DBA for which the application won't be providing the date.
But honestly the #1 reason I wouldn't do it from the application is you have no control over the date/time on the client machine. They might be rolling it back to get more days out of a trial license on something or may just want to do bad things to your program.
You can do this without the trigger if your database supports default values on the fields. For example, in SQL Server 2005 I have a table with a field created like this:
create table dbo.Repository
(
    ...
    last_updated datetime default getdate(),
    ...
)
then the insert code just leaves that field out of the insert field list.
I forgot that only worked for the first insert - I do have an update trigger as well, to update the date fields and put a copy of the updated record in my history table - which I would post ... but the editor keeps erroring out on my code ...
Finally:
create trigger dbo.Repository_Upd on dbo.Repository instead of update
as
--**************************************************************************
-- Trigger: Repository_Upd
-- Author:  Ron Savage
-- Date:    09/28/2008
--
-- Description:
-- This trigger sets the last_updated and updated_by fields before the update
-- and puts a copy of the updated row into the Repository_History table.
--
-- Modification History:
-- Date       Init  Comment
-- 10/22/2008 RS    Blocked .prm files from updating the history as they
--                  get updated every time the cfg file is run.
-- 10/21/2008 RS    Updated the insert into the history table to use the
--                  d.last_updated field from the Repository table rather
--                  than getdate() to avoid microsecond differences.
-- 09/28/2008 RS    Created.
--**************************************************************************
begin
    --***********************************************************************
    -- Update the record but fill in the updated_by, updated_system and
    -- last_updated date with current information.
    --***********************************************************************
    update cr set
        cr.filename       = i.filename,
        cr.created_by     = i.created_by,
        cr.created_system = i.created_system,
        cr.create_date    = i.create_date,
        cr.updated_by     = user,
        cr.updated_system = host_name(),
        cr.last_updated   = getdate(),
        cr.content        = i.content
    from
        Repository cr
        JOIN Inserted i
            on (i.config_id = cr.config_id);

    --***********************************************************************
    -- Put a copy in the history table
    --***********************************************************************
    declare @extention varchar(3);

    select @extention = lower(right(filename, 3)) from Inserted;

    if (@extention <> 'prm')
    begin
        Insert into Repository_History
        select
            i.config_id,
            i.filename,
            i.created_by,
            i.created_system,
            i.create_date,
            user        as updated_by,
            host_name() as updated_system,
            d.last_updated,
            d.content
        from
            Inserted i
            JOIN Repository d
                on (d.config_id = i.config_id);
    end
end
Ron

Suggestions for implementing audit tables in SQL Server?

One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then create an update/delete trigger on the main table. Before a record is updated/deleted, the current state is saved to the audit table via the trigger.
While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes?
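For concreteness, my current trigger looks roughly like this (table names simplified):

-- dbo.Account_Audit mirrors dbo.Account's columns plus an audit date.
CREATE TRIGGER dbo.Account_AuditChanges ON dbo.Account
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- 'deleted' holds the pre-change state for both updates and deletes.
    INSERT INTO dbo.Account_Audit (AccountID, Name, Balance, AuditDate)
    SELECT d.AccountID, d.Name, d.Balance, GETDATE()
    FROM deleted AS d;
END;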
There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
How much writing vs. reading of this table(s) do you expect?
I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day.
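Roughly, the table looked like this (a sketch; exact names and types are a matter of taste):

CREATE TABLE dbo.AuditLog (
    AuditID        bigint IDENTITY(1,1) PRIMARY KEY,
    TableName      sysname       NOT NULL,
    ColumnName     sysname       NOT NULL,
    OldValue       nvarchar(max)     NULL,
    NewValue       nvarchar(max)     NULL,
    ChangedBy      nvarchar(128) NOT NULL,
    ChangeDateTime datetime      NOT NULL DEFAULT GETDATE()
);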
Added:
If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server from doing their work.
We are using a two-table design for this.
One table holds data about the transaction (database, table name, schema, column, application that triggered the transaction, host name for the login that started the transaction, date, number of affected rows, and a couple more).
The second table is only used to store the data changes, so that we can undo changes if needed and report on old/new values.
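A sketch of that layout (names and types invented; the real system has a few more columns):

-- One header row per audited statement.
CREATE TABLE dbo.AuditTransaction (
    AuditTxnID   bigint IDENTITY(1,1) PRIMARY KEY,
    DatabaseName sysname       NOT NULL,
    SchemaName   sysname       NOT NULL,
    TableName    sysname       NOT NULL,
    AppName      nvarchar(128)     NULL,
    HostName     nvarchar(128)     NULL,
    TxnDate      datetime      NOT NULL DEFAULT GETDATE(),
    RowsAffected int           NOT NULL
);

-- One detail row per changed value: enough to undo or report old/new.
CREATE TABLE dbo.AuditChange (
    AuditChangeID bigint IDENTITY(1,1) PRIMARY KEY,
    AuditTxnID    bigint NOT NULL
                  REFERENCES dbo.AuditTransaction (AuditTxnID),
    ColumnName    sysname       NOT NULL,
    OldValue      nvarchar(max)     NULL,
    NewValue      nvarchar(max)     NULL
);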
Another option is to use a third-party tool for this, such as ApexSQL Audit, or the Change Data Capture feature in SQL Server.
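If you go the Change Data Capture route, enabling it is two system calls (Enterprise-only until SQL Server 2016 SP1; table name illustrative):

-- Enable CDC for the database, then for each table to track.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Account',
    @role_name     = NULL;   -- NULL: don't gate access behind a role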
I have found these two links useful:
Using CLR and single audit table.
Creating a generic audit trigger with SQL 2005 CLR
Using triggers and separate audit table for each table being audited.
How do I audit changes to SQL Server data?
Are there any built-in audit packages? Oracle has a nice package, which will even send audit changes off to a separate server outside the access of any bad guy who is modifying the SQL.
Their example is awesome... it shows how to alert on anybody modifying the audit tables.
OmniAudit might be a good solution for your needs. I've never used it before because I'm quite happy writing my own audit routines, but it sounds good.
I use the approach described by Greg in his answer and populate the audit table with a stored procedure called from the table triggers.