I have a program that takes a user and updates information about him/her in five tables. The process is fairly involved, as it takes many steps (pages) to complete. I have logging, sysout, and syserr statements that help me find SQL queries in the IDE console, but they don't cover all of the queries. I've already spent many days trying to catch the missing ones by debugging, with no luck so far. The reason I am doing this is that I want to automate the user-information updates, so I don't have to go through every page entering user details manually.
I wonder if there is some technique that will show me the database table changes, since I already know the table names. By changes I mean whether it was an update or an insert statement and what exactly changed (column name and value inserted/updated). Any advice is greatly appreciated. I have IBM RAD and a DB2 database. Thanks.
In DB2 you can track basic auditing information.
DB2 can track what data was modified, who modified the data, and the SQL operation that modified the data.
To track when data was modified, define your table as a system-period temporal table. The row-begin and row-end columns in the associated history table contain information about when data modifications occurred.
To track who and what SQL modified the data, you can use non-deterministic generated expression columns. These columns can contain values that are helpful for auditing purposes, such as the value of the CURRENT SQLID special register at the time that the data was modified. Possible values for non-deterministic generated expression columns are defined in the syntax for the CREATE TABLE and ALTER TABLE statements.
For example
CREATE TABLE TempTable (
    balance   INT,
    userId    VARCHAR(128) GENERATED ALWAYS AS ( SESSION_USER ),
    opCode    CHAR(1)      GENERATED ALWAYS AS ( DATA CHANGE OPERATION ),
    -- the row-begin, row-end, and transaction-start-ID columns are
    -- required for the system period:
    SYS_START TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    SYS_END   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    TRANS_ID  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (SYS_START, SYS_END));
The userId column stores who modified the data. This column is defined as a non-deterministic generated expression column that contains the value of the SESSION_USER special register.
The opCode column stores the SQL operation that modified the data. This column is defined as a non-deterministic generated expression column and stores a value that indicates the type of SQL operation.
Suppose that you then use the following statements to create a history table for TempTable and to associate that history table with TempTable:
CREATE TABLE TempTable_HISTORY
    (balance INT, userId VARCHAR(128), opCode CHAR(1),
     SYS_START TIMESTAMP(12) NOT NULL, SYS_END TIMESTAMP(12) NOT NULL,
     TRANS_ID TIMESTAMP(12));
ALTER TABLE TempTable ADD VERSIONING
    USE HISTORY TABLE TempTable_HISTORY ON DELETE ADD EXTRA ROW;
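With versioning enabled, every data change is recorded without any application changes. A hedged sketch of reading the audit trail back, using the column names defined above:

UPDATE TempTable SET balance = 500;   -- old row version moves to TempTable_HISTORY

SELECT userId, opCode, balance, SYS_START, SYS_END
FROM TempTable_HISTORY;               -- who changed it, the operation (I/U/D), and when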
Capturing SQL statements for a limited number of tables over a limited time - which, as far as I understand it, is your problem - can be done with the DB2 audit facility.
create audit policy tabsql categories execute status both error type normal
audit table <tabname> using policy tabsql
You have to have SECADM authority in the database, and the second statement starts the auditing. You can stop it with
audit table <tabname> remove policy
Check out the db2audit command to configure paths and to extract the data from the audit file into a delimited file, which can then be loaded back into the database. The necessary tables can be created with the provided sqllib/misc/db2audit.ddl script. You will need to query the EXECUTE table for your SQL details.
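Putting the extraction together, a typical session might look like the following sketch; the database name (MYDB), file names, and target schema are assumptions, so check the db2audit documentation for your version:

db2audit flush                                   # write buffered audit records
db2audit archive database MYDB                   # archive the active audit log
db2audit extract delasc category execute from files db2audit.db.MYDB.log.*
# create the audit tables from sqllib/misc/db2audit.ddl, then e.g.:
db2 "IMPORT FROM execute.del OF DEL INSERT INTO AUDIT.EXECUTE"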
Please note that auditing can capture huge amounts of data, so make sure to switch it off again after you have captured the necessary information.
Related
How do I create a trigger in Microsoft SQL Server to keep track of all deleted data, from any table in the database, in a single audit table? I do not want to write a trigger for each and every table in the database. There will be only one single audit table, which keeps track of the deleted data from every table.
For example:
If data is deleted from a person table, get the deleted person rows and store them in XML format in the audit table.
Please check the solution that I tried to describe at SQL Server Log Tool for Capturing Data Changes.
The solution is built on dynamically creating triggers on selected tables to capture data changes (after insert, update, delete) and store them in a general table.
A job then executes periodically and parses the data captured in this general table. After the data is parsed, it is much easier for a human to see which table field changed and its old and new values.
I hope this proposed solution helps you build your own.
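For the XML variant asked about, a minimal sketch of one generated trigger writing into a single shared audit table; all object names here (DeleteAudit, dbo.Person, trg_Person_Delete) are assumptions:

CREATE TABLE dbo.DeleteAudit (
    AuditId   INT IDENTITY(1,1) PRIMARY KEY,
    TableName SYSNAME  NOT NULL,
    DeletedAt DATETIME NOT NULL DEFAULT GETDATE(),
    DeletedBy SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    RowData   XML      NOT NULL
);
GO
CREATE TRIGGER dbo.trg_Person_Delete ON dbo.Person
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    IF NOT EXISTS (SELECT 1 FROM deleted) RETURN;  -- nothing was actually deleted
    INSERT INTO dbo.DeleteAudit (TableName, RowData)
    SELECT N'dbo.Person',
           (SELECT * FROM deleted FOR XML PATH('row'), ROOT('rows'), TYPE);
END;

The linked solution code-gens one such trigger per selected table, so you still end up with a single audit table without hand-writing each trigger.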
Is it possible to find date+time when a row was inserted into a table in SQL Server 2005?
Does SQL Server log insert commands?
Whenever I create a table, I always include the following two columns:
CreatedBy varchar(255) default system_user,
CreatedAt datetime default getdate()
Although this uses a bit of extra space, I've found that the information proves very, very useful over time.
Your question is about the log. The answer is "yes". However, whether you can get at the information depends on your recovery model. If SIMPLE, the log records are overwritten as the space is reused for later transactions. If BULK_LOGGED or FULL, the information is in the log, at least back to the last log backup.
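If the records are still in the log, the undocumented fn_dblog function can surface them. It is unsupported and its output format can change between versions, so treat this as a last resort; a sketch:

-- [Begin Time] is populated on the LOP_BEGIN_XACT row; correlate inserts
-- with that row via [Transaction ID] to derive the insert time.
SELECT [Current LSN], Operation, [Transaction ID], [Begin Time]
FROM fn_dblog(NULL, NULL)
WHERE Operation IN ('LOP_BEGIN_XACT', 'LOP_INSERT_ROWS');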
You can derive the insert date as long as you are using the CDC-created functions to pull the actual data records.
So, for example, you could pull something like:
DECLARE @from_lsn binary(10), @to_lsn binary(10);
SET @from_lsn = sys.fn_cdc_get_min_lsn('name_of_your_cdc_instance_on_cdc_table');
SET @to_lsn = sys.fn_cdc_get_max_lsn();
SELECT * FROM
cdc.fn_cdc_get_net_changes_name_of_your_cdc_instance_on_cdc_table
(
@from_lsn,
@to_lsn,
N'all'
);
You can use the built-in CDC function sys.fn_cdc_map_lsn_to_time to convert a log sequence number (LSN) to a datetime. Below is a use case:
SELECT sys.fn_cdc_map_lsn_to_time(__$start_lsn), * FROM
cdc.fn_cdc_get_net_changes_name_of_your_cdc_instance_on_cdc_table
(
@from_lsn,
@to_lsn,
N'all'
);
You can have an InsertDate column with a default of getdate() on your table; that would be the easiest approach.
On SQL Server 2008 you can use CDC (Change Data Capture) to track changed data on your table:
Change data capture records insert, update, and delete activity that is applied to a SQL Server table. This makes the details of the changes available in an easily consumed relational format. Column information and the metadata that is required to apply the changes to a target environment is captured for the modified rows and stored in change tables that mirror the column structure of the tracked source tables. Table-valued functions are provided to allow systematic access to the change data by consumers.
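For reference, enabling CDC is two stored-procedure calls; this is what creates the cdc.fn_cdc_get_net_changes_* function used above. The database and table names below are placeholders:

USE MyDatabase;
GO
EXEC sys.sp_cdc_enable_db;                    -- enable CDC for the database
GO
EXEC sys.sp_cdc_enable_table
     @source_schema        = N'dbo',
     @source_name          = N'MyTable',      -- table to track
     @role_name            = NULL,            -- NULL = no gating role
     @supports_net_changes = 1;               -- needed for the net-changes function (requires a PK or unique index)
GO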
I am trying to find a highly efficient method of auditing changes to data in a table. Currently I am using a trigger that looks at the INSERTED and DELETED tables to see what rows have changed and inserts these changes into an Audit table.
The problem is that this is proving to be very inefficient (obviously!). It's possible that with 3,000 rows inserted into the database at one time (which wouldn't be unusual), 215,000 rows would have to be inserted in total to audit them.
What is a reasonable way to audit all this data without it taking a long time to insert into the database? It needs to be fast!
Thanks.
A correctly written trigger should be fast enough.
You could also look at Change Data Capture
Auditing in SQL Server 2008
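For the second option, a minimal SQL Server 2008 Audit setup might look like this sketch; the audit name, file path, database, and table are all assumptions:

USE master;
CREATE SERVER AUDIT DataChangeAudit
    TO FILE (FILEPATH = 'C:\AuditLogs\');      -- target directory for audit files
ALTER SERVER AUDIT DataChangeAudit WITH (STATE = ON);
GO
USE MyDatabase;
CREATE DATABASE AUDIT SPECIFICATION DmlAuditSpec
    FOR SERVER AUDIT DataChangeAudit
    ADD (INSERT, UPDATE, DELETE ON dbo.MyTable BY public)
    WITH (STATE = ON);
GO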
I quite often use AutoAudit:
AutoAudit is a SQL Server (2005, 2008, 2012) code-gen utility that creates audit trail triggers with:
- Created, CreatedBy, Modified, ModifiedBy, and RowVersion (incrementing INT) columns added to the table
- Insert events logged to an Audit table
- Updates: old and new values logged to the Audit table
- Deletes: all final values logged to the Audit table
- A view to reconstruct deleted rows
- A UDF to reconstruct row history
- A schema audit trigger to track schema changes
- Re-code-gens triggers when ALTER TABLE changes the table
Update: a major upgrade, version 3.20, was released in November 2013 with these added features:
- Handles tables with up to 5 PK columns
- Performance improvements, up to 90% faster than version 2.00
- Improved historical data retrieval UDF
- Handles column/table names that need quotename [ ]
- Archival process to keep the live audit tables smaller/faster while retaining the older data in archive AutoAudit tables
As others have already mentioned, you can use the Change Data Capture, Change Tracking, and Audit features in SQL Server, but to keep things simple and use one solution to track all SQL Server activities, including these DML operations, I suggest trying ApexSQL Comply. You can disable all other options and leave only DML auditing.
It uses a centralized repository for captured information on multiple SQL Server instances and their databases.
It would be best to read this article first, and then decide on using this tool:
http://solutioncenter.apexsql.com/methods-for-auditing-sql-server-data-changes-part-9-the-apexsql-solution/
SQL Server Notifications on insert update delete table change
The SqlTableDependency C# component provides the low-level implementation to receive database notifications, creating a SQL Server queue and Service Broker objects.
Have a look at http://www.sqltabledependency.it/
For any record change, SqlTableDependency's event handler will receive a notification containing the modified table record's values, as well as the DML operation (insert, update, or delete) executed on your database table.
You could allow the table to be self-auditing by adding additional columns (a sketch of these columns follows the list). For example:
- For an INSERT: it is a new record, and its existence in the table is the audit itself.
- For a DELETE: add columns like IsDeleted BIT / DeletingUserID INT / DeletingTimestamp DATETIME to your table.
- For an UPDATE: add columns like IsLatestVersion BIT / ParentRecordID INT to track version changes.
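A sketch of the columns described above; the table name and exact types are assumptions:

ALTER TABLE dbo.MyTable ADD
    IsDeleted         BIT      NOT NULL DEFAULT 0,   -- soft-delete flag
    DeletingUserID    INT      NULL,                 -- who deleted the row
    DeletingTimestamp DATETIME NULL,                 -- when it was deleted
    IsLatestVersion   BIT      NOT NULL DEFAULT 1,   -- marks the current version
    ParentRecordID    INT      NULL;                 -- key of the previous version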
I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the execute immediate style of dynamic SQL, where you have to delimit quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE )
IS
BEGIN
-- drop table, create table with explicit column list
CreateReportTableForCustomer;
INSERT INTO TEMP_TABLE
VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether 'temp_table' exists, and that it has the correct number of columns, and throws a compile error if it doesn't exist.
So I was wondering if there's any way around that. Essentially I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above) - a mechanism that will execute an arbitrary query but always write to temp_table (in the user's schema). Thus each time the report proc is run, it drops temp_table and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
Note that having the table name vary in a dynamic SQL statement is not very good practice, so if you can ensure the table name stays the same, that would be best.
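For completeness, a sketch of catching the missing-table error (ORA-00942) around the dynamic insert; the variable values and the handler body are assumptions:

DECLARE
  temp_table_name VARCHAR2(30)  := 'TEMP_TABLE';   -- assumed target table
  customer        VARCHAR2(100) := 'ACME';
  reportdate      DATE          := SYSDATE;
  table_missing   EXCEPTION;
  PRAGMA EXCEPTION_INIT(table_missing, -942);      -- ORA-00942: table or view does not exist
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO ' || temp_table_name || ' VALUES ( :c, :d )'
    USING customer, reportdate;
EXCEPTION
  WHEN table_missing THEN
    NULL;  -- e.g. (re)create the table here, then retry the insert
END;
/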
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
(customer number, report_date date)
on commit delete/preserve* rows;
* delete as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
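To illustrate the per-session isolation, a hedged sketch using the my_gtt definition above:

-- Session A
INSERT INTO my_gtt VALUES (1001, SYSDATE);
SELECT COUNT(*) FROM my_gtt;   -- returns 1

-- Session B, at the same time
SELECT COUNT(*) FROM my_gtt;   -- returns 0: each session sees only its own rows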
Using GTTs is much preferable to dropping/recreating tables on the fly - if your application needs a different structure for each report, I strongly suggest you work out all the different structures each report needs, and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.
I have written the following trigger in SQL server:
create trigger test_trigger
on invoice -- This is the invoice table
for insert
as
declare #invoiceAmount int -- This is the amount specified in the invoice
declare #custNumber int -- This is the customer's id
--use the 'inserted' keyword to access the values inserted into the invoice table
select #invoiceAmount = Inv_Amt from inserted
select #custNumber = cust_num from inserted
update customer
set amount = #invoiceAmount
where Id = #custNumber
Will this be able to run in MS Access or is the syntax different?
The Access database engine (formerly called Jet) does not have triggers, and in any case it has no control-of-flow syntax; e.g. a PROCEDURE must consist of exactly one SQL statement.
Tell us what you really want to do and there could be an alternative syntax.
For example, you could create a new key using a UNIQUE constraint on invoice (cust_num, Inv_Amt), add a FOREIGN KEY on customer (id, amount) to reference the new key, create a VIEW that joins the two tables on the FOREIGN KEY columns and exposes all four columns, and then INSERT into the VIEW rather than into the base table 'invoice'. You may want to use privileges to prevent INSERTs into the base table, but note that user-level security was removed from the new Access 2007 engine (called ACE).
But, if you don't mind me saying, I don't think your trigger reflects a real-life scenario. A column vaguely named 'amount' in table 'customer' to hold the most recent invoice amount? What about when the inserted logical table contains rows for more than one customer? As I say, I think you need to tell us what you are really trying to achieve.
Access doesn't have triggers
Also, the trigger you show here will bomb out the moment someone updates more than one row, since it does not take multi-row updates into account (and don't say it won't happen, because it will; better to practice some defensive coding).
Triggers fire once per statement, not once per row - please read Multirow Considerations for DML Triggers.
Join the inserted pseudo-table to the invoice table instead to update the values; that works for one row and for many, as in the sketch below.
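A hedged sketch of that set-based rewrite, keeping the question's table and column names:

create trigger test_trigger
on invoice
for insert
as
begin
    set nocount on;
    -- join inserted to customer so every affected row is updated
    update c
    set c.amount = i.Inv_Amt
    from customer c
    inner join inserted i on i.cust_num = c.Id;
end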
They may be coming in Access 2010? http://blogs.msdn.com/access/archive/2009/08/13/access-2010-data-macros-similar-to-triggers.aspx
MS Access doesn't have triggers.
That is, the Access Jet engine (which creates .mdb files) doesn't. If Access is connecting to a database server, then it will use whatever triggers exist in that database.
I've never come across triggers in Access unless dealing with an ADP on SQL Server. So the answer is: yes, it's the same if you're on SQL Server for the backend, and no if the table is stored in Access.