I have a SQL table from which data is being deleted, and nobody knows how. I added a trigger, so I know the time it happens, but no jobs are running then that would delete the data. The trigger fires whenever rows are deleted from this table, and it inserts the deleted rows and SYSTEM_USER into a log table, but that doesn't help much. Is there anything better I can do to find out who is deleting the data and how? Would it be possible to get the server ID or something? Thanks for any advice.
Sorry: I am using SQL Server 2000.
**Update 1:** It's important to find out how the data gets deleted - preferably I would like to know the DTS package or SQL statement that is being executed.
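For reference, the trigger is along these lines (the real table and column names are replaced with placeholders here):

-- Simplified version of the audit trigger described above
CREATE TRIGGER trg_MyTable_Delete ON MyTable
FOR DELETE
AS
INSERT INTO MyTable_DeleteLog (DeletedAt, LoginName, Col1, Col2)
SELECT GETDATE(), SYSTEM_USER, d.Col1, d.Col2
FROM deleted d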
Just a guess, but do you have delete cascades on one of the parent tables (those referenced by foreign keys)? If so, deleting the parent row also removes the entries in the child table.
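If you want to check for that, a query along these lines should list cascading foreign keys on SQL Server 2000 (just a sketch using the old system tables):

-- List foreign key constraints defined with ON DELETE CASCADE
SELECT name AS constraint_name,
       OBJECT_NAME(parent_obj) AS child_table
FROM sysobjects
WHERE xtype = 'F'
  AND OBJECTPROPERTY(id, 'CnstIsDeleteCascade') = 1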
If the recovery model is set to "Full", you can check the transaction log.
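You can check which model you are in with (the database name is a placeholder):

-- Returns FULL, SIMPLE or BULK_LOGGED
SELECT DATABASEPROPERTYEX('YourDatabase', 'Recovery')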
Beyond that, remove any delete grants on the table. If it still happens, whoever is doing it has dbo/sysadmin access - so change the password...
Try logging all transactions for the time being, even if it hurts performance. Microsoft offers SQL Server Profiler, including for Express editions if needed. With it, you should be able to log the activity. As an alternative to Profiler, you can use the trace_build stored procedure to dump activity into files, then just Ctrl-F for any instance of the word 'delete' or other relevant keywords. For more info, see this SO page:
Logging ALL Queries on a SQL Server 2008 Express Database?
Also, and this may sound stupid, but investigate the possibility that what you are seeing is not deletes. Instead, check whether records are simply being 'updated', 'replaced if they already exist', 'upserted', or whatever you like to call it. In MySQL, this is the 'INSERT ... ON DUPLICATE KEY UPDATE' statement. I'm not sure of the MSSQL variant.
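For what it's worth, the closest T-SQL equivalent before MERGE arrived in SQL Server 2008 is an explicit existence check; a sketch with made-up table and column names:

-- Hypothetical "upsert" pattern in T-SQL (no MERGE before SQL Server 2008)
IF EXISTS (SELECT 1 FROM emp WHERE id = @id)
    UPDATE emp SET fname = @fname WHERE id = @id
ELSE
    INSERT INTO emp (id, fname) VALUES (@id, @fname)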
What recovery model is your database in? If it is Full, Red Gate Log Rescue is free and works against SQL Server 2000, which might help you retrieve the deleted data. The overview video does appear to show a user column.
Or you could roll your own query against fn_dblog.
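A sketch of that (fn_dblog is undocumented, so treat the column names as version-dependent):

-- Find transactions containing deletes, then read the login from each
-- transaction's LOP_BEGIN_XACT record (the SID is only recorded there)
SELECT [Transaction ID], SUSER_SNAME([Transaction SID]) AS login_name
FROM ::fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_BEGIN_XACT'
  AND [Transaction ID] IN (
      SELECT [Transaction ID]
      FROM ::fn_dblog(NULL, NULL)
      WHERE Operation = 'LOP_DELETE_ROWS')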
Change all your passwords. Give as few people delete access as possible.
Can we find the history of values, updates, deletes or inserts of a specific table?
I don't have access to create a stored procedure. Can you please provide a query in the answer, or some other way to find the history?
Two possibilities come to mind.
1.) If you have auditing enabled, you are all set. But, I'm guessing if that was the case, you wouldn't be asking the question. If you think this request will come up again, you should investigate setting up auditing for future requests.
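For future requests, enabling standard auditing on the table can be as simple as this sketch (the schema and table names are placeholders, and the audit_trail parameter must be configured):

-- Audit all DML on one table; results land in DBA_AUDIT_TRAIL
AUDIT INSERT, UPDATE, DELETE ON scott.emp BY ACCESS;

-- Later, review who did what and when
SELECT username, action_name, timestamp
FROM dba_audit_trail
WHERE obj_name = 'EMP';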
2.) If auditing isn't set up, there's LogMiner, which allows you to examine the contents of the archived and online redo logs. This is probably your only solution, if you need the details of inserts, updates, deletes to a specific table.
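A minimal LogMiner session looks roughly like this (the redo log path is a placeholder, and you need EXECUTE on DBMS_LOGMNR):

-- Register a redo log file and start LogMiner with the online catalog as dictionary
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/oradata/redo01.log', DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Inspect the reconstructed SQL for the table of interest
SELECT sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_name = 'EMPLOYEE';

EXECUTE DBMS_LOGMNR.END_LOGMNR;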
Hope that helps.
It is possible if FLASHBACK has been enabled on the schema or table; the most critical tables may have it enabled. Please check with your DBA on this. If you have DBA access, select the table name in SQL Developer, press Shift+F4 and go to the Flashback tab to find the details.
If it is enabled, you can use a query like the one below (just a sample):
SELECT * FROM employee AS OF TIMESTAMP
TO_TIMESTAMP('2003-04-04 09:30:00', 'YYYY-MM-DD HH:MI:SS')
WHERE name = 'JOHN';
If it is not enabled, you may have to write triggers for every DML statement on that table. I agree that the history of the data from before the triggers existed is gone forever, unless the DBA can do some magic with the redo logs!
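A sketch of such a trigger (the audit table and the name column are hypothetical):

-- Record every DML action on EMPLOYEE into a hypothetical audit table
CREATE OR REPLACE TRIGGER trg_employee_audit
AFTER INSERT OR UPDATE OR DELETE ON employee
FOR EACH ROW
BEGIN
  INSERT INTO employee_audit (changed_at, changed_by, action, emp_name)
  VALUES (SYSDATE, USER,
          CASE WHEN INSERTING THEN 'I'
               WHEN UPDATING  THEN 'U'
               ELSE 'D' END,
          COALESCE(:NEW.name, :OLD.name));
END;
/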
Is this what you are looking for?
http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_2103.htm
I am not sure if the transaction log is what I need.
My first problem is that I have a stored procedure that inserts some rows. I was checking ELMAH and I see that some SQL exceptions happen. They are all the same error (a PK constraint was violated).
Other than that, ELMAH is not telling me much more, so I don't know which row caused this primary key violation (I am guessing the same row was, for some reason, being added twice).
So I am not sure if the transaction log would tell me what happened and what data was being inserted. I can't recreate this error; it always works for me.
My second problem is that for some reason, when my page loads, I get a row from that database that I don't think exists anymore (I have a hidden column with the PK in it). When I look for this primary key, it does not exist in the database.
I am using MS SQL 2005.
Thanks
I don't think the transaction log will help you.
SQL Server has two modes for handling inserts that violate a unique key.
There is a setting: IGNORE_DUP_KEY. By default it is OFF. If you turn it ON, SQL Server will ignore duplicate rows and your INSERT statement will succeed.
You can read about it here:
http://msdn.microsoft.com/en-us/library/ms175132.aspx
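For example (the table and index names are made up):

-- With IGNORE_DUP_KEY = ON, duplicate keys raise a warning and are skipped
-- instead of failing the whole INSERT (SQL Server 2005 syntax)
CREATE UNIQUE INDEX ix_emp_id ON emp (id) WITH (IGNORE_DUP_KEY = ON)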
BTW, to view the transaction log, you can use this command:
SELECT * FROM fn_dblog(null, null)
You can inspect the log with the (undocumented) function fn_dblog(), but it won't tell you anything in the case of a duplicate key violation, because the violation happens before the row is inserted, so no log record is generated. It is true, though, that you'll see the other operations around the time of the error, and from those you can possibly reconstruct the actions that led to the error condition. Note that if the database is in the SIMPLE recovery model, the log gets reused and you have likely lost track of anything that happened.
Have a look at the article How do checkpoints work and what gets logged for an example of fn_dblog() usage. Although it is on a different topic, it shows how the function works.
If you can repeat the error I would suggest using SQL Server profiler so you can see exactly what is going on.
If you are using asp.net to load the page are you using any output caching or data caching that might be retaining the row that no longer exists in the db?
I think someone with shared access to my SQL Server '05 DB is deleting records from a table for their own reasons.
Is there any audit table I can check to see manual delete queries which may have been run on the DB in the last X number of days?
Thanks for your help.
Ed
You may want to consider using a trigger temporarily.
I'd add an ON DELETE trigger to the table in question. That would allow you to keep an exact log of deleted records (i.e., in your trigger you insert the deleted rows into another table, etc.). Here's an example.
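Something along these lines (the table, column and log table names are hypothetical; capturing HOST_NAME() and APP_NAME() often narrows down where the delete came from):

-- Hypothetical delete-audit trigger: log who deleted what, and from where
CREATE TRIGGER trg_Orders_Delete ON Orders
FOR DELETE
AS
INSERT INTO Orders_DeleteLog (DeletedAt, LoginName, HostName, AppName, OrderId)
SELECT GETDATE(), SYSTEM_USER, HOST_NAME(), APP_NAME(), d.OrderId
FROM deleted d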
You could also check the plan cache for recently executed statements:
SELECT deqs.last_execution_time AS [Time], dest.TEXT AS [Query]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC
SQL Server Profiler is probably the easiest way to do this. You can set it to dump all executed queries to a table in the database, or to a file, which might be more suitable in your case. You can also set a filter to capture just the queries you're interested in; otherwise the log files become huge.
Unless you've set things up beforehand (via triggers, running Profiler traces, or the like) no, there is no simple native way to "pull out" commands that have been run against a SQL Server database.
@David's idea of querying the procedure cache is one possibility, but it would only work if the execution plan(s) are still in memory.
There are third-party transaction log readers available. They could be used to read the contents of the transaction log, but again that only helps if the data/commands are still in there, and after "X days" that seems unlikely.
Another work-around would depend on backups.
Restore a complete backup from before your problem time, and compare and contrast it with the current version. This would show whether data has been deleted, but not how.
If you are in the Full recovery model and you have transaction log backups, you can perform various incremental restores and actually observe the deletions happening (if they are), but this would probably require a lot of point-in-time recoveries and would be very time-intensive.
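As a sketch, such a point-in-time restore of a copy looks something like this (all names, paths and times are placeholders):

-- Restore the full backup without recovery, then roll the log forward to a point in time
RESTORE DATABASE MyDb_Copy
    FROM DISK = 'C:\backups\MyDb_full.bak'
    WITH MOVE 'MyDb_Data' TO 'C:\data\MyDb_Copy.mdf',
         MOVE 'MyDb_Log' TO 'C:\data\MyDb_Copy.ldf',
         NORECOVERY

RESTORE LOG MyDb_Copy
    FROM DISK = 'C:\backups\MyDb_log_1.trn'
    WITH STOPAT = '2009-06-01 12:00:00', RECOVERY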
A third-party app is storing data in a huge database (SQL Server 2000/2005). This database has more than 80 tables. How can I find out how many tables are affected when the application stores a new record in the database? Is there a way to retrieve the list of affected tables?
You might be able to tell by running a trace in SQL Profiler on the database - the SQL:StmtCompleted event is probably the one to monitor - i.e. if the application does a series of inserts into multiple tables, you should see them go through in Profiler.
You can use SQL Profiler to trace SQL queries, so you will see the sequence of calls caused by one button click in your application.
You can also use metadata or SQL tools to get the list of triggers, which could cause a lot of actions on a simple insert.
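For example, something like this lists the triggers and their tables on SQL Server 2000/2005:

-- List all triggers and the tables they belong to
SELECT o.name AS trigger_name,
       OBJECT_NAME(o.parent_obj) AS table_name
FROM sysobjects o
WHERE o.xtype = 'TR'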
If you have the SQL script that is used to store the new record (usually an INSERT statement, or another DML statement such as UPDATE, MERGE and so on), then you can find out how many tables are affected by parsing that SQL script.
Take this SQL for example:
Insert into emp(fname, lname)
Values('john', 'reyes')
You can get a result like this:
sstinsert
emp(tetInsert)
Tables:
emp
Fields:
emp.fname
emp.lname
You can add triggers on tables that get fired on update - you could use this to update a log table that reports what was being updated.
See more here: http://www.devarticles.com/c/a/SQL-Server/Using-Triggers-In-MS-SQL-Server/
Profiler is the way to go, as others have said, especially with an unfamiliar third-party database.
I would also spend some time creating diagrams so you can see the foreign key relationships and understand how the database is put together. I usually know my database structure so well that I can tell from the fields being inserted which tables they affect, and I know what triggers are on my tables and what they affect. There is no substitute for taking the time to understand the database you support.
I accidentally deleted some rows from the database. But I already have daily database backups.
How can I restore only deleted records from the backup database?
Thanks in advance
You didn't say which database server, so I can't be sure this will work, but I believe the syntax on MSSQL would be:
UPDATE livefile
SET livefile.bloodtypefield = oldfile.bloodtypefield
FROM [hospital].[dbo].[tblPatientFile] livefile
INNER JOIN [hospitalRapor].[dbo].[tblPatientFile] oldfile ON livefile.patientid = oldfile.patientid
I highly recommend running this on a test database first to make sure it has the results you want. You will of course need a user who has access to both databases, and depending on whether you have triggers etc. defined, it may take a long time to run on 400k rows.
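If entire rows were deleted rather than a column overwritten, a variant along these lines could re-insert the rows that exist in the restored copy but not in the live table (same table names as above; this is only a sketch):

-- Re-insert rows present in the restored copy but missing from the live table.
-- If the table has an IDENTITY column you would also need SET IDENTITY_INSERT ON
-- and an explicit column list.
INSERT INTO [hospital].[dbo].[tblPatientFile]
SELECT oldfile.*
FROM [hospitalRapor].[dbo].[tblPatientFile] oldfile
LEFT JOIN [hospital].[dbo].[tblPatientFile] livefile
    ON oldfile.patientid = livefile.patientid
WHERE livefile.patientid IS NULL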
I take it you have a restored copy of the database on the same server. In that case, assuming all the previous data was correct, you could do this, although you would overwrite any updates to the blood type that have been made since your error.
I would suggest you also back up your 'incorrect' database before you go any further, so that any additional mistakes can be rectified or undone easily, and you can at least return to the initial 'error' state instead of compounding the problem.