I'm working with a SQL Server 2008 installation that was maintained for years by another team of programmers.
I'm having a problem where rows of data seem to be mysteriously disappearing from a specific table on my server.
I would like to be able to set up some sort of monitoring system that would tell me when the table is modified, and a summary of the modification.
I think that "triggers" might be what I'm looking for, but I've never used them before. Are triggers what I want to use, and if so, what is a good resource for learning to use them? Is there a better solution?
I should mention that the table I'm referring to is not updated very frequently, so I don't think adding a little bit of overhead should be a big deal, but I would prefer a solution that I can brush away once the problem is resolved.
A FOR DELETE trigger could help you capture the rows that are being deleted. You could create an audit table (copy of the table that you'd like to monitor) and then add this code to your trigger:
INSERT INTO [Your Audit Table]
SELECT * FROM deleted
I've also seen some "more advanced" scenarios involving FOR XML.
I don't know that the trigger would help determine who is deleting the records, but you might be able to PROVE that the records are being deleted, and perhaps what time, etc. That could help you troubleshoot further.
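If you want to try the FOR XML route mentioned above, here is a minimal sketch of what it could look like; the audit table and all names are hypothetical, and SUSER_SNAME() at least records which login ran the delete:
CREATE TABLE MyXmlAudit (TableName sysname, DeletedRows xml, ModDate datetime, ModBy sysname);
GO
CREATE TRIGGER tr_MyTable_D_Xml ON MyTable AFTER DELETE
AS
INSERT MyXmlAudit (TableName, DeletedRows, ModDate, ModBy)
SELECT 'MyTable',
       (SELECT * FROM deleted FOR XML AUTO, TYPE),  -- one XML snapshot of every deleted row
       GETDATE(),
       SUSER_SNAME();
GO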
The following sample should give you a basic idea of what you're looking for.
CREATE TABLE MyTestTable(col1 int, col2 varchar(10));
GO
CREATE TABLE MyLogTable(col1 int, col2 varchar(10), ModDate datetime, ModBy varchar(50));
GO
CREATE TRIGGER tr_MyTestTable_IO_UD ON MyTestTable AFTER UPDATE, DELETE
AS
INSERT MyLogTable
SELECT col1, col2, GETDATE(), SUSER_SNAME()
FROM deleted;
GO
INSERT INTO MyTestTable VALUES (1, 'aaaaa');
INSERT INTO MyTestTable VALUES (2, 'bbbbb');
UPDATE MyTestTable SET col2 = 'bbbcc' WHERE col1 = 2;
DELETE MyTestTable;
GO
SELECT * FROM MyLogTable;
GO
However, keep in mind that there are still ways of deleting records that won't be caught by a trigger. (TRUNCATE TABLE and various bulk update commands.)
Another solution would be to attach SQL Profiler to the database with specific filter conditions. This will log every query that runs, for your inspection.
I like to stay away from triggers, but they could help with your problem, as Draghon said.
I think you have it figured out. A trigger is likely your best bet, as it's as close to the data as you can get. Inspecting the code (application code or even a stored procedure) would not give you as much assurance as a trigger would; a DELETE trigger, in this case.
Check out this article: http://www.go4expert.com/forums/showthread.php?t=15510
I have a script in which I create a temporary table, that I subsequently want to drop.
I simply create the table, fill it using an INSERT INTO statement, but when it comes to dropping it, the script fails, stating that the table is in use.
From reading around it would seem to be caused by the transaction management, but I'm a little confused.
Here is a little script that reproduces the issue:
CREATE TABLE SCRIPT_TEMP (
NAME VARCHAR(100) NOT NULL,
USERNAME VARCHAR(150) NOT NULL);
COMMIT WORK;
INSERT INTO SCRIPT_TEMP (NAME, USERNAME)
SELECT NAME, COALESCE(USERNAME, 'empty')
FROM SALESREPS;
COMMIT WORK;
DROP TABLE SCRIPT_TEMP;
COMMIT WORK;
Or, to make this easily testable for anyone without a SALESREPS table, use this insert statement :o)
INSERT INTO SCRIPT_TEMP (NAME, USERNAME)
SELECT 'Name 1', 'Username 1'
FROM RDB$DATABASE;
COMMIT WORK;
I fail to see what still holds a reference to the SCRIPT_TEMP table by the time the drop call is made. Why would the script's own transaction block it even after the second COMMIT?
If I split the execution in 2 scripts everything is fine.
What am I missing??
Thanks!!
PS: using Firebird 2.5.2, in case that matters
PPS: my script is a bit more involved than this. The temp table is filled with table names and related constraints that need to be manipulated, but that's not the issue; that part is working well. And in fact, the issue I want to resolve is easily reproducible with the code sample here, which I arrived at while debugging. This SO question seems to be about the exact same problem, but its only answer is not helpful for the problem itself.
Thanks to Val Marinov for pointing me in the right direction.
It appears my issue is due to my SQL Manager - there must be a setting I'm not aware of.
Script runs as expected in FlameRobin.
I have a script that runs a SELECT INTO into a table. To my knowledge, there are no other procedures that might be concurrently referencing/modifying this table. Once in a while, however, I get the following error:
Schema changed after the target table was created. Rerun the Select Into query.
What can cause this error and how do I avoid it?
I did some googling, and this link suggests that SELECT INTO cannot be used safely without some crazy try-catch-retry logic. Is this really the case?
I'm using SQL Server 2012.
Unless you really don't know the fields and data types in advance, I'd recommend first creating the table, then adding the data with an INSERT statement. In your link, David Moutray suggests the same thing; here's his example code verbatim:
CREATE TABLE #TempTableY (ParticipantID INT NOT NULL);
INSERT #TempTableY (ParticipantID)
SELECT ParticipantID
FROM TableX;
This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10,000 rows. 25 columns. 1 unique key: the ASSETS column.
Occasionally an external program will delete 1 or more of the rows, but isn't supposed to do that, because we still need to know some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS_BACKUP and update those records in COMPUTERS_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time.
Just create a trigger for inserts on the Computers table:
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
It'll fire when you insert a new computer into the Computers table, and it'll also insert the record into the backup table.
When you update Computers, you can change Computers_BACKUP too with an update trigger:
CREATE TRIGGER computerUpdated
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- the pre-update values are available through SELECT * FROM Deleted
    -- the post-update values are available through SELECT * FROM Inserted
    UPDATE b
    SET b.col1 = i.col1   -- repeat for each column you want to keep in sync
    FROM Computers_BACKUP b
    JOIN Inserted i ON b.ASSETS = i.ASSETS   -- ASSETS is the unique key that never changes
END
In the end, I guess you don't want to delete the backup when the original record is deleted from the Computers table. You can check more examples of using triggers on MSDN.
When a record is removed from the Computers table:
CREATE TRIGGER computerDeleted ON [Computers] AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
Besides creating triggers, you may look into enabling Change Data Capture, which is available in SQL Server Enterprise Edition. It may be overkill, but it should be mentioned, and you may find it useful for other tables and objects.
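If you do go down the CDC path, the setup is a couple of system procedure calls. This is only a hedged sketch; the schema and table name are taken from the question, everything else is left at defaults:
EXEC sys.sp_cdc_enable_db;   -- enable CDC for the current database

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'COMPUTERS',
     @role_name     = NULL;   -- no gating role

-- Deleted (and changed) rows can then be read from the generated change table,
-- e.g. cdc.dbo_COMPUTERS_CT, or via cdc.fn_cdc_get_all_changes_dbo_COMPUTERS.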
IMHO, a possible solution, if your application never deletes records from that table (only updates them), is to introduce an INSTEAD OF DELETE trigger:
CREATE TRIGGER tg_computers_delete ON computers
INSTEAD OF DELETE AS
DELETE computers WHERE 1=2;
It will prevent the deletion of the records.
Here is a SQLFiddle demo.
A FOR DELETE trigger (SQL Server has no true BEFORE DELETE; FOR means AFTER here, and the removed rows are available in the deleted pseudo-table) can help you guard this table:
CREATE TRIGGER backup_row_before_delete ON COMPUTERS_Table FOR Delete
as
INSERT INTO Computers_Backup
SELECT deleted.* from deleted
You can change deleted.* to deleted.col1, deleted.col2 if you want to keep only certain columns.
will delete 1 or more of the rows, but isn't supposed to do that
Then you have permission and integrity issues.
You can most certainly use a trigger to record deletions (and updates of course) but I would not recommend you use it purely to keep a copy of stuff you didn't want deleted in the first place!
Remove delete permissions if you have to or beef up your data integrity if you can. Without your schema it's hard to tell exactly how though.
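If you go the permissions route, it can be as simple as this (the user name for the external application is hypothetical):
DENY DELETE ON dbo.COMPUTERS TO external_app_user;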
Finally, use your (INSTEAD OF) trigger to check whatever conditions you need to prevent the delete when appropriate.
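As a rough sketch of that last idea, assuming the backup table is an identical copy keyed on ASSETS and the "allowed to delete" condition is some hypothetical flag:
CREATE TRIGGER tg_computers_guard ON COMPUTERS
INSTEAD OF DELETE
AS
BEGIN
    -- keep a copy of everything that is about to disappear
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM deleted;

    -- only let through the deletes you are actually willing to allow
    DELETE c
    FROM COMPUTERS c
    JOIN deleted d ON d.ASSETS = c.ASSETS
    WHERE d.OkToDelete = 1;   -- hypothetical condition
END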
I am using a function in a stored procedure. The procedure contains a transaction that updates the table and inserts values into the same table, while the function called in the procedure also fetches data from that same table.
The procedure hangs when it calls the function.
Is there any solution for this?
If I'm hearing you right, you're talking about an insert BLOCKING ITSELF, not two separate queries blocking each other.
We had a similar problem, an SSIS package was trying to insert a bunch of data into a table, but was trying to make sure those rows didn't already exist. The existing code was something like (vastly simplified):
INSERT INTO bigtable
SELECT customerid, productid, ...
FROM rawtable
WHERE NOT EXISTS (SELECT 1 FROM bigtable
                  WHERE bigtable.CustomerID = rawtable.CustomerID
                    AND bigtable.ProductID = rawtable.ProductID)
AND ... (other conditions)
This ended up blocking itself because the select on the WHERE NOT EXISTS was preventing the INSERT from occurring.
We considered a few different options, I'll let you decide which approach works for you:
Change the transaction isolation level (see this MSDN article). Our SSIS package was defaulted to SERIALIZABLE, which is the most restrictive. (note, be aware of issues with READ UNCOMMITTED or NOLOCK before you choose this option)
Create a UNIQUE index with IGNORE_DUP_KEY = ON. This means we can insert ALL rows (and remove the "WHERE NOT EXISTS" clause altogether). Duplicates will be rejected, but the batch won't fail completely, and all other valid rows will still insert.
Change your query logic to do something like put all candidate rows into a temp table, then delete all rows that are already in the destination, then insert the rest.
In our case, we already had the data in a temp table, so we simply deleted the rows we didn't want inserted, and did a simple insert on the rest.
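As a hedged sketch of that last approach, using the (hypothetical) column names from the simplified example above:
-- stage the candidate rows
SELECT customerid, productid
INTO #candidates
FROM rawtable;

-- drop the ones that already exist in the destination
DELETE c
FROM #candidates c
JOIN bigtable b
  ON b.customerid = c.customerid
 AND b.productid = c.productid;

-- insert whatever is left
INSERT INTO bigtable (customerid, productid)
SELECT customerid, productid
FROM #candidates;

DROP TABLE #candidates;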
This can be difficult to diagnose. Microsoft has provided some information here:
INF: Understanding and resolving SQL Server blocking problems
A brute force way to kill the connection(s) causing the lock is documented here:
http://shujaatsiddiqi.blogspot.com/2009/01/killing-sql-server-process-with-x-lock.html
Some more Microsoft info here: http://support.microsoft.com/kb/323630
How big is the table? Do you have the problem if you call the procedure from separate windows? Maybe the problem is related to the amount of data the procedure is working with and a lack of indexes.
I want to insert all the records from the backup table foo_bk into the foo table without specifying the columns.
If I try this query
INSERT INTO foo
SELECT *
FROM foo_bk
I'll get the error "Insert Error: Column name or number of supplied values does not match table definition."
Is it possible to do a bulk insert from one table to another without supplying the column names?
I've googled it but can't seem to find an answer. All the answers require specifying the columns.
You should not ever want to do this. SELECT * should not be used as the basis for an insert, as the columns may get moved around and break your insert (or worse, not break your insert but mess up your data). Suppose someone adds a column to the table in the select but not to the other table; your code will break. Or suppose someone, for reasons that surpass understanding but frequently happen, decides to do a drop and recreate on a table and move the columns around to a different order. Now your last_name is in the place first_name was in originally, and SELECT * will put it in the wrong column in the other table. It is an extremely poor practice to fail to specify columns, and the specific mapping of each column to the column you want in the table you are interested in.
Right now you may have one of several problems: first, the two structures don't match directly; or second, the table being inserted into has an identity column, so even though the insertable columns are a direct match, the destination table has one more column than the other, and by not specifying columns you make the database assume you are going to insert into that column as well. Or you might have the same number of columns, but one is an identity and thus can't be inserted into (although I think that would produce a different error message).
Per this other post: Insert all values of a..., you can do the following:
INSERT INTO new_table (Foo, Bar, Fizz, Buzz)
SELECT Foo, Bar, Fizz, Buzz
FROM initial_table
It's important to specify the column names as indicated by the other answers.
Use this
SELECT *
INTO new_table_name
FROM current_table_name
You need to have at least the same number of columns and each column has to be defined in exactly the same way, i.e. a varchar column can't be inserted into an int column.
For bulk transfer, check the documentation for the SQL implementation you're using. There are often tools available to bulk transfer data from one table to another. For SQL Server 2005, for example, you could use the SQL Server Import and Export Wizard. Right-click on the database you're trying to move data around in and click Export to access it.
SQL 2008 allows you to forgo specifying column names in your SELECT if you use SELECT INTO rather than INSERT INTO / SELECT:
SELECT *
INTO Foo
FROM Bar
WHERE x=y
The INTO clause does exist in SQL Server 2000-2005, but still requires specifying column names. 2008 appears to add the ability to use SELECT *.
See the MSDN articles on INTO (SQL2005), (SQL2008) for details.
The INTO clause only works if the destination table does not yet exist, however. If you're looking to add records to an existing table, this won't help.
All the answers above, for one reason or another, did not work for me on SQL Server 2012. My situation was that I accidentally deleted all rows instead of just one row. After our DBA restored the table to dbo.foo_bak, I used the below to restore. NOTE: This only works if the backup table (represented by dbo.foo_bak) and the table that you are writing to (dbo.foo) have exactly the same column names.
This is what worked for me using a hybrid of a bunch of different answers:
USE [database_name];
GO
SET IDENTITY_INSERT dbo.foo ON;
GO
INSERT INTO [dbo].[foo]
([row0]
,[row1]
,[row2]
,[row3]
,...
,[rown])
SELECT * FROM [dbo].[foo_bak];
GO
SET IDENTITY_INSERT dbo.foo OFF;
GO
This version of my answer is helpful if you have primary and foreign keys.
As you probably understood from previous answers, you can't really do what you're after.
I think you can understand the problem SQL Server is experiencing with not knowing how to map the additional/missing columns.
That said, since you mention that the purpose of what you're trying to do here is backup, maybe we can work with SQL Server and work around the issue.
Not knowing your exact scenario makes it impossible to give an exact answer here, but I assume the following:
You wish to manage a backup/audit process for a table.
You probably have a few of those and wish to avoid altering dependent objects on every column addition/removal.
The backup table may contain additional columns for auditing purposes.
I wish to suggest two options for you:
An efficient practice (IMO) for this is to detect schema changes using DDL triggers and use them to alter the backup table accordingly. This will enable you to use the 'select * from...' approach, because the column list will be consistent between the two tables.
I have used this approach successfully and you can leverage it to have DDL triggers automatically manage your auditing tables. In my case, I used a naming convention for a table requiring audits and the DDL trigger just managed it on the fly.
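Just as a hedged sketch of what such a DDL trigger could look like (all names here are hypothetical, and the actual ALTER of the backup table is left as a placeholder):
CREATE TRIGGER ddl_sync_audit_tables
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    DECLARE @evt xml = EVENTDATA();
    DECLARE @tableName nvarchar(128) =
        @evt.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)');
    DECLARE @command nvarchar(max) =
        @evt.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)');

    -- if the altered table is one we audit (e.g. matched by naming convention),
    -- adapt and replay the change against its backup table here
    IF @tableName = 'foo'
        PRINT 'foo was altered: ' + @command;   -- placeholder for the ALTER on foo_bk
END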
Another option that might be useful for your specific scenario is to create a supporting view for the tables aligning the column list. Here's a quick example:
create table foo (id int, name varchar(50))
create table foo_bk (id int, name varchar(50), tagid int)
go
create view vw_foo as select id,name from foo
go
create view vw_foo_bk as select id,name from foo_bk
go
insert into vw_foo
select * from vw_foo_bk
go
drop view vw_foo
drop view vw_foo_bk
drop table foo
drop table foo_bk
go
I hope this helps :)
You could try this:
SELECT * INTO foo FROM foo_bk
This is a valid question, for example, when you want to append newly imported rows from a CSV file of the same raw structure into an existing table that may have DB constraints set up, such as PKs and FKs.
I would simply do the following, for example:
INSERT INTO roles select * from new_imported_roles_from_csv_file
I also like that if any new rows violate uniqueness during this operation, the INSERT will fail, insert nothing, and in a way 'protect' the target table from bad inbound data.