I'm trying to fetch a list of table modifications by user, but in the ALL_TAB_MODIFICATIONS view I only see tables with their modifications by type.
Is there a way to know how many modifications each user has made? Essentially the same view, but with a USER column?
That's what Oracle's AUDIT functionality is designed for.
First, you have to set the AUDIT_TRAIL initialization parameter - https://docs.oracle.com/database/121/REFRN/GUID-BD86F593-B606-4367-9FB6-8DAB2E47E7FA.htm#REFRN10006
Then you enable auditing on whatever actions you want to audit with the AUDIT SQL statement (documented in the SQL Language Reference).
Then, when audited actions occur, you see them when you query DBA_AUDIT_TRAIL - https://docs.oracle.com/database/121/REFRN/GUID-A9993FAC-12D3-4725-A37D-938CC32D74CC.htm#REFRN23023
The above is the simplest way to get started. It does not cover the new 'unified auditing'. There are also other DBA_* views that give filtered versions of DBA_AUDIT_TRAIL. But the above will give you the starting points.
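For example, a minimal sketch (the SCOTT.EMP table is just an illustration, and this assumes AUDIT_TRAIL has already been set to DB and the instance restarted):

-- Enable auditing of DML on a sample table
AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

-- After some audited activity, see who changed what and when
SELECT username, action_name, obj_name, timestamp
FROM   dba_audit_trail
WHERE  owner = 'SCOTT'
AND    obj_name = 'EMP'
ORDER  BY timestamp;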
Related
I have a need to audit changes where triggers are not performing well enough to use. In the audit I need to know exactly who made the change, based on a column named LastModifiedBy (gathered at login and used in inserts and updates). We use a single SQL account to access the database, so I can't use that to tie a change to a user.
Scenario: we are now researching the SQL transaction log to determine what has changed. The table has a LastUpdatedBy column that we used with the trigger solution. With the previous solution I had before and after data for each transaction, so I could tell whether the user making the change was the same user or a new one.
Problem: while looking at tools like DBForge Transaction Log and ApexSQL Audit, I can't seem to find a solution that works. I can see the UPDATE command, but I can't tell whether all the fields actually changed (just because the statement sets a field does not mean its value actually changed). ApexSQL Audit does have a before-and-after capability, but if the LastUpdatedBy field does not change, then I don't know what the original value was.
Trigger problem: large data updates and inserts are crushing performance because of the triggers. I am gathering before and after data in the triggers so I can tell exactly what changed, but this volume of data turns a 2-second update of 1,000 rows into one that takes longer than 3 minutes.
I'm using SQL Server 2008 and have DELETE, UPDATE, and INSERT auditing enabled on table XYZ. It works great, other than when I query the data:
SELECT * FROM fn_get_audit_file('H:\SQLAudits\*', default, default)
It doesn't actually show me what was deleted, inserted, or updated, only that a deletion (etc.) occurred. The statement column of the above query shows this snippet:
delete [dbo].[XYZ] where ([Name] = #0)
I want it to show me what the value of #0 is. Is there a way of doing this?
From what I've found, SQL Server 2008's "auditing" feature is very lacking. It does not act as a traditional data audit trail, where you store a new row every time something changes (via triggers), with complete information such as the user who made the change. It more or less just tells you something has changed, without much detail. I really wish SQL Server would include full data audit trail features.
Reference
When creating a Database Audit Specification, you select the operations for the audit action types INSERT, UPDATE, and DELETE.
This results in logs saying that a SELECT, INSERT, UPDATE, or DELETE occurred, but the individual values can never be seen.
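For reference, the kind of setup being described looks roughly like this (the audit, specification, and database names are illustrative, not taken from the question):

USE master;
CREATE SERVER AUDIT Audit_XYZ
    TO FILE (FILEPATH = 'H:\SQLAudits\');
ALTER SERVER AUDIT Audit_XYZ WITH (STATE = ON);

USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION Audit_XYZ_DML
    FOR SERVER AUDIT Audit_XYZ
    ADD (INSERT, UPDATE, DELETE ON OBJECT::dbo.XYZ BY public)
    WITH (STATE = ON);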
The SQL Server Audit tool is very powerful; however, it was never designed to record data changes (e.g. col1 was changed from 'fred' to 'santa' in table 'dummy' in database 'test' by 'sa').
For this you will need Change Data Capture (http://msdn.microsoft.com/en-us/library/bb522489.aspx).
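As a rough sketch (the database 'test' and table 'dummy' are just the names from the example above; in SQL Server 2008, CDC requires Enterprise, Developer, or Evaluation edition):

USE test;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'dummy',
    @role_name     = NULL;

-- Changes are then exposed through a generated change table and functions such as
-- cdc.fn_cdc_get_all_changes_dbo_dummy(@from_lsn, @to_lsn, N'all').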
Cheers,
Mark
You can monitor the DELETE statements using SQL Server Profiler. You will be able to see the changes.
Another way to monitor is to use the Change Data Capture (CDC) feature in SQL Server. This feature will let you monitor changes in the tables.
Finally, there are related third-party tools such as ApexSQL Trigger.
Disclaimer: I am unable to implement this properly in the application, as the application I'm working on doesn't do data access in a consistent way, and the refactoring effort would be too great for the scope of the project and the coming deadline.
How would I go about implementing a SQLCLR trigger for an audit trail? I would like it to be as simple as possible, and as easy as possible to remove and replace with a proper implementation later.
I'm planning to write my audit to a single table (the database is not very write heavy), having columns like:
Timestamp (datetime) - when the change happened?
Username (varchar) - who made the change?
AffectedTableName (varchar) - which table has been affected?
AffectedRowKey (varchar) - this will be either a simple or compound key like (Id=42, A=4,B=2)
OperationType (char(1)) - either I, U or D for insert, update and delete respectively.
InsertedXml (xml) - xml-serialized row (SELECT * FROM INSERTED FOR XML AUTO)
DeletedXml(xml) - xml-serialized row (SELECT * FROM DELETED FOR XML AUTO)
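A rough DDL sketch of that table (the table name AuditTrail, the surrogate key, and the exact varchar lengths are assumptions, not part of the question):

CREATE TABLE dbo.AuditTrail (
    AuditId           int IDENTITY(1,1) PRIMARY KEY,   -- assumed surrogate key
    [Timestamp]       datetime     NOT NULL DEFAULT GETUTCDATE(),
    Username          varchar(128) NOT NULL,
    AffectedTableName varchar(256) NOT NULL,
    AffectedRowKey    varchar(400) NOT NULL,            -- e.g. 'Id=42' or 'A=4,B=2'
    OperationType     char(1)      NOT NULL
        CHECK (OperationType IN ('I', 'U', 'D')),
    InsertedXml       xml          NULL,
    DeletedXml        xml          NULL
);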
I'm planning to query this data and resolve it to a user-readable form in the application. I'm planning to implement this as a database trigger, written using SQLCLR. I can see two possible approaches:
Implement this entirely as a SqlTrigger method.
Implement this as a SqlProcedure method taking these parameters:
schemaName
tableName
insertedXml
deletedXml
I will appreciate any constructive criticism and suggestions. My limitation is that I have to implement the audit at the database level using triggers, and I want it to be as maintainable (read: removable and replaceable) as possible. Also, ideally, I don't want to have hundreds of triggers with exactly the same body, in case I have to modify them.
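For the second approach, the per-table glue would be a thin T-SQL trigger forwarding the row images to the shared procedure. A minimal sketch, where dbo.SomeTable and dbo.AuditRow are illustrative names (the procedure itself could be T-SQL or SQLCLR):

CREATE TRIGGER trg_SomeTable_Audit
ON dbo.SomeTable
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Serialize the affected rows; either variable is NULL when its set is empty
    DECLARE @inserted xml = (SELECT * FROM inserted FOR XML AUTO, TYPE);
    DECLARE @deleted  xml = (SELECT * FROM deleted  FOR XML AUTO, TYPE);
    EXEC dbo.AuditRow
        @schemaName  = N'dbo',
        @tableName   = N'SomeTable',
        @insertedXml = @inserted,
        @deletedXml  = @deleted;
END;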
There is a serious restriction in SQLCLR triggers that will prevent you from implementing your audit triggers in SQLCLR: you cannot find out which parent object was changed from inside a SQLCLR trigger. That is, if you have a single SQLCLR trigger routine registered on multiple tables, you cannot find out which table got updated/inserted into/deleted from. At first sight @@PROCID may look useful; however, when called from inside a SQLCLR trigger, @@PROCID returns the same value no matter which table was affected. I have searched the internet and experimented a lot, and I have not found a solution. I have found more people having the same issue; some of the messages date back as far as 2006.
I have created a feature request with Microsoft on Microsoft Connect. Please log in and press the UP arrow to get it implemented, so you can actually use a SQLCLR trigger for your purpose: https://connect.microsoft.com/SQLServer/feedback/details/768358/a-sqlclr-trigger-should-be-given-the-parent-object-in-the-sqltriggercontext
I've been using a variation of this script to create audit triggers for some of my projects for a while now, with great results:
http://www.simple-talk.com/sql/database-administration/pop-rivetts-sql-server-faq-no.5-pop-on-the-audit-trail/
A third-party app is storing data in a huge database (SQL Server 2000/2005). This database has more than 80 tables. How would I find out how many tables are affected when the application stores a new record in the database? Is there something available that would let me retrieve the list of affected tables?
You might be able to tell by running a trace in SQL Profiler on the database - the SQL:StmtCompleted event is probably the one to monitor - i.e. if the application does a series of inserts into multiple tables, you should see them go through in Profiler.
You can use SQL Profiler to trace SQL queries, so you will see the sequence of calls caused by one button click in your application.
You can also use metadata or SQL tools to get the list of triggers, which could perform a lot of additional actions on a simple insert.
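For instance, on SQL Server 2005 the catalog views can list every trigger and its parent table (on SQL Server 2000 you would query sysobjects with type = 'TR' instead):

SELECT OBJECT_NAME(t.parent_id) AS table_name,
       t.name                   AS trigger_name,
       t.is_disabled
FROM   sys.triggers AS t
WHERE  t.parent_class = 1        -- table (object) triggers only
ORDER  BY table_name, trigger_name;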
If you have the SQL script that is used to store the new record (usually an INSERT statement, or another DML statement such as UPDATE or MERGE), then you can find out how many tables are affected by parsing that SQL script.
Take this SQL for example:
Insert into emp(fname, lname)
Values('john', 'reyes')
You can get a result like this:
sstinsert
emp(tetInsert)
Tables:
emp
Fields:
emp.fname
emp.lname
You can add triggers on tables that fire on update - you could use this to update a log table that records what was being updated.
See more here: http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/
Profiler is the way to go, as others have said, especially with an unfamiliar third-party database.
I would also spend some time creating diagrams so you can see the foreign key relationships and understand how the database is put together. I usually know my database structure so well that I can tell from the fields being inserted which tables they affect, and I know what triggers are on my tables and what they affect. There is no substitute for taking the time to understand the database you support.
One simple method I've used in the past is basically just creating a second table whose structure mirrors the one I want to audit, and then creating an update/delete trigger on the main table. Before a record is updated/deleted, its current state is saved to the audit table via the trigger.
While effective, the data in the audit table is not the most useful or simple to report off of. I'm wondering if anyone has a better method for auditing data changes?
There shouldn't be too many updates of these records, but it is highly sensitive information, so it is important to the customer that all changes are audited and easily reported on.
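For reference, the approach described above usually boils down to something like this minimal sketch, where dbo.Customer and dbo.Customer_Audit are illustrative names and the audit table is assumed to have exactly the same columns as the base table:

CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change image for both updates and deletes
    INSERT INTO dbo.Customer_Audit
    SELECT * FROM deleted;
END;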
How much writing vs. reading of this table(s) do you expect?
I've used a single audit table, with columns for Table, Column, OldValue, NewValue, User, and ChangeDateTime - generic enough to work with any other changes in the DB, and while a LOT of data got written to that table, reports on that data were sparse enough that they could be run at low-use periods of the day.
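A sketch of such a table (the column types and the name AuditLog are assumptions):

CREATE TABLE dbo.AuditLog (
    TableName      sysname       NOT NULL,
    ColumnName     sysname       NOT NULL,
    OldValue       nvarchar(max) NULL,
    NewValue       nvarchar(max) NULL,
    [User]         sysname       NOT NULL,
    ChangeDateTime datetime      NOT NULL DEFAULT GETDATE()
);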
Added:
If the amount of data vs. reporting is a concern, the audit table could be replicated to a read-only database server, allowing you to run reports whenever necessary without bogging down the master server.
We are using a two-table design for this.
One table holds data about the transaction (database, table name, schema, column, the application that triggered the transaction, the host name of the login that started the transaction, date, number of affected rows, and a couple more).
The second table is only used to store the data changes, so that we can undo changes if needed and report on old/new values.
Another option is to use a third-party tool for this, such as ApexSQL Audit, or the Change Data Capture feature in SQL Server.
I have found these two links useful:
Using CLR and single audit table.
Creating a generic audit trigger with SQL 2005 CLR
Using triggers and separate audit table for each table being audited.
How do I audit changes to SQL Server data?
Are there any built-in audit packages? Oracle has a nice package, which will even send audit changes off to a separate server outside the access of any bad guy who is modifying the SQL.
Their example is awesome... it shows how to alert on anybody modifying the audit tables.
OmniAudit might be a good solution for your need. I've never used it before because I'm quite happy writing my own audit routines, but it sounds good.
I use the approach described by Greg in his answer and populate the audit table with a stored procedure called from the table triggers.