I have a SQL Server table with only one column, and below is the query I use to insert data. I always COMMIT TRAN after the row has been inserted, but the newly added row disappears a few days after I insert it and commit the transaction. I have never had this issue with any other table, and I use these transaction statements often. Can anyone help as to why this keeps happening with this table?
Here is the query I run:
Begin tran
insert into ur77_licensee ([licensee name])
values ('8210 - J.Crew')
select *
from ur77_licensee
where [licensee name] like '%8210%'
commit tran
You'd have to build some kind of audit system which would be quite a challenge. Your code is not causing the delete. My guess is there is some kind of maintenance job doing it, but who knows? I would:
change the permissions on the table to only allow a specific user to make changes
make your code use that user
wait for somebody to come complain about how they can no longer delete
If everybody is running as an admin, well.. that's why you don't have everybody run as admin!
Alternatively you could:
create a delete trigger. When an item is deleted, have it insert the row into a delete/audit table; that would at least let you see when it happened (sketched below).
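A rough sketch of such a trigger for the table from the question, assuming you also create an audit table (called ur77_licensee_deleted here, with columns for the deleted value, the time, and the login):
CREATE TRIGGER tr_ur77_licensee_delete
ON ur77_licensee
AFTER DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- copy every deleted row into the audit table, along with when and by whom
    INSERT INTO ur77_licensee_deleted ([licensee name], deleted_at, deleted_by)
    SELECT [licensee name], GETDATE(), SUSER_SNAME()
    FROM deleted;
END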
Related
We use a DB2 database. Some data warehouse tables are TRUNCATEd and reloaded every day. We run into deadlock issues when another process is running an INSERT statement against that same table.
Scenario
TRUNCATE is executed on a table.
At the same time, another process INSERTs some data into the same table. (The process is based on a trigger and can start at any time.)
Is there a workaround?
What we have thought of so far is to prioritize the TRUNCATE and then go through with the INSERT. Is there any way to implement this? Any help would be appreciated.
You should request a table lock before you execute the truncate.
If you do this you can't get a deadlock -- the table lock won't be granted until the insert finishes, and once you have the lock another insert can't occur.
Update from comment:
You can use the LOCK TABLE command. The details depend on your situation, but you should be able to get away with SHARE mode. This will allow reads but not inserts (which is the issue you are having, I believe).
It is possible this won't fix your problem. That probably means your insert statement is too complicated -- maybe it is reading from a bunch of other tables or from a federated table. If this is the case, re-architect your solution to include a staging table (first insert into the staging table... slowly... then insert into the target table from the staging table).
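A rough sketch of the lock-first reload on DB2 LUW (the table name dw_table is made up). Note that on DB2 LUW a TRUNCATE must be the first statement in a unit of work, so it cannot follow LOCK TABLE inside the same transaction; a plain DELETE (or a LOAD ... REPLACE) stands in for it here:
LOCK TABLE dw_table IN SHARE MODE;
-- readers are still allowed, but concurrent INSERTs now wait instead of interleaving with the reload
DELETE FROM dw_table;
-- ... reload the table ...
COMMIT;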
Sometimes I run test scenarios across several schemas: deleting/modifying tables, inserting/updating/deleting data. Some schemas are for testing and the others are important production schemas, so sometimes by accident I run queries in the wrong schema. The commit functionality really helps in this scenario.
However, TRUNCATE TABLE tab1 doesn't need a commit, and if I execute it in the wrong schema... well, you know the scenario.
My question: is there a workaround for TRUNCATE TABLE like there is for DML statements? With a DELETE statement you have to include a commit, or in PL/SQL you have to click the green button to commit.
I use a check like the one below, but it's really annoying: every time I want to truncate I have to modify the condition.
select count(1) into cnt from tab1; if cnt = 0 then execute immediate 'Truncate table tab1'; end if;
I am not searching for Flashback. I need a check on TRUNCATE TABLE.
As @Boneist said, TRUNCATE is a DDL statement which implicitly commits. If you are not sure of the action you are performing in a schema, and want to commit only after a manual verification, then do not TRUNCATE; use DELETE instead.
With a DELETE statement you can control the commit. Having said that, TRUNCATE resets the high water mark back to zero, whereas DELETE doesn't: even if you delete all the rows from the table, Oracle will still scan all the blocks under the HWM. Have a look at this AskTom link.
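As a small illustration with the tab1 table from the question, DELETE lets you look before you commit:
DELETE FROM tab1;
-- check that you are in the intended schema and deleted what you meant to
ROLLBACK;   -- if it was the wrong schema
-- or
COMMIT;     -- only after you have verified the result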
If you are looking to bring back the truncated data, and if you are on 11gR2 and up, you could use the Flashback support for DDL statements.
TRUNCATE is a DDL statement, not DML, and DDL statements automatically include commits. See https://asktom.oracle.com/pls/asktom/f?p=100:11:0%3A%3A%3A%3AP11_QUESTION_ID:7072180788422 for more info.
I'm not entirely sure I understand what it is you're trying to do - you could, as Tom suggests, perhaps use an autonomous transaction to keep the truncate separate. If you're after the ability to separate the commit part from the truncate part (i.e. to roll back the truncate if you decide you called it in error), then I'm afraid you're out of luck.
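A sketch of the autonomous-transaction idea (the procedure name is made up): the TRUNCATE and its implicit commit happen inside the autonomous transaction, so your outer transaction is not committed by it, although the truncate itself still cannot be rolled back:
CREATE OR REPLACE PROCEDURE truncate_tab1 AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  -- the implicit commit from this DDL ends only the autonomous transaction
  EXECUTE IMMEDIATE 'TRUNCATE TABLE tab1';
END;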
This is probably laughably easy for an SQL expert, but SQL (although I can use it) is not really my thing.
I've got a table in a DB. (Let's call it COMPUTERS)
About 10,000 rows. 25 columns. One unique key: the ASSETS column.
Occasionally an external program deletes one or more of the rows, but it isn't supposed to, because we still need some info from those rows before we can really delete the items.
We can't control the behavior of the external application so we came up with a different idea:
We want to create a second identical table (COMPUTERS_BACKUP) and initially fill this with a one-on-one copy of COMPUTERS.
After that, once a day copy new records from COMPUTERS to COMPUTERS_BACKUP and update those records in COMPUTERS_BACKUP where the original in COMPUTERS has changed (ASSETS column will never change).
That way we keep the last state of a record deleted from COMPUTERS.
Can someone supply the code for a stored procedure that can be scheduled to run once a day? I can probably figure this out myself, but it would take me several hours or so and I'm very pressed for time.
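Roughly what I have in mind is something like the following sketch (only the ASSETS key is fixed from the description above; the other column names are placeholders for the real 25 columns):
CREATE PROCEDURE dbo.SyncComputersBackup
AS
BEGIN
    SET NOCOUNT ON;
    MERGE COMPUTERS_BACKUP AS b
    USING COMPUTERS AS c
        ON c.ASSETS = b.ASSETS
    WHEN MATCHED THEN
        UPDATE SET b.[Col1] = c.[Col1], b.[Col2] = c.[Col2]   -- repeat for the other columns
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ASSETS, [Col1], [Col2])
        VALUES (c.ASSETS, c.[Col1], c.[Col2]);
    -- no WHEN NOT MATCHED BY SOURCE clause, so rows deleted from COMPUTERS stay in COMPUTERS_BACKUP
END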
Just create a trigger for inserts on the Computers table:
CREATE TRIGGER newComputer
ON [Computers]
AFTER INSERT
AS
BEGIN
    INSERT INTO COMPUTERS_BACKUP
    SELECT * FROM Inserted
END
It'll work when you insert a new computer into the Computers table, and it'll also insert the record into the backup table.
When you update Computers you can change Computers_BACKUP too with an UPDATE trigger:
CREATE TRIGGER computerUpdated
ON [Computers]
AFTER UPDATE
AS
BEGIN
    -- the rows as they were before the update are available via SELECT * FROM Deleted
    -- the rows after the update are available via SELECT * FROM Inserted
    UPDATE b
    SET b.[column1] = i.[column1]   -- repeat for each column you want to mirror
    FROM Computers_BACKUP AS b
    JOIN Inserted AS i ON i.ASSETS = b.ASSETS
END
In the end, I guess you don't want to delete the backup row when the original record is deleted from the Computers table. You can check more examples of using triggers on MSDN.
When a record is removed from the Computers table:
CREATE TRIGGER computerDeleted ON [Computers] AFTER DELETE
AS
BEGIN
    INSERT INTO Computers_BACKUP
    SELECT * FROM Deleted
END
Besides creating triggers, you may look into enabling Change Data Capture, which is available in SQL Server Enterprise Edition. It may be overkill, but it should be mentioned, and you may find it useful for other tables and objects.
IMHO a possible solution, if you never delete records (only update them) from that table in your application, is to introduce an INSTEAD OF DELETE trigger:
CREATE TRIGGER tg_computers_delete ON computers
INSTEAD OF DELETE AS
DELETE computers WHERE 1=2;
It will prevent the deletion of the records.
Here is a SQLFiddle demo.
A trigger on the DELETE event can help you guard this table:
CREATE TRIGGER backup_row_before_delete ON COMPUTERS_Table FOR Delete
as
INSERT INTO Computers_Backup
SELECT deleted.* from deleted
You can replace deleted.* with deleted.col1, deleted.col2 if you want to keep only certain columns.
will delete 1 or more of the rows, but isn't supposed to do that
Then you have permission and integrity issues.
You can most certainly use a trigger to record deletions (and updates of course) but I would not recommend you use it purely to keep a copy of stuff you didn't want deleted in the first place!
Remove delete permissions if you have to or beef up your data integrity if you can. Without your schema it's hard to tell exactly how though.
Finally, use your (INSTEAD OF) trigger to check whatever conditions you need to prevent the delete when appropriate.
I have a Windows service with two methods that insert data into the same table at the same time, but at the time of insert it throws an exception. How can I take a lock in this situation?
Thank you in advance.
If it were an UPDATE statement, I would suggest BEGIN TRANSACTION with the READ COMMITTED isolation level. But since it's an INSERT, you might want to create an extra table that acts like a queue: insert everything into the new table, and after that you can run your validation.
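A rough sketch of that queue-table idea (all names here are made up): each method only appends to the queue, and a single background step moves validated rows into the real table:
CREATE TABLE dbo.InsertQueue
(
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    Payload  NVARCHAR(400) NOT NULL,
    QueuedAt DATETIME NOT NULL DEFAULT GETDATE()
);
-- both methods simply do:
-- INSERT INTO dbo.InsertQueue (Payload) VALUES (@payload);
-- one scheduled job drains the queue:
DECLARE @maxId INT = (SELECT MAX(Id) FROM dbo.InsertQueue);
BEGIN TRANSACTION;
INSERT INTO dbo.TargetTable (Payload)
SELECT Payload FROM dbo.InsertQueue WHERE Id <= @maxId;
DELETE FROM dbo.InsertQueue WHERE Id <= @maxId;
COMMIT TRANSACTION;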
On our live/production database I'm trying to add a trigger to a table, but have been unsuccessful. I have tried a few times, but it has taken more than 30 minutes for the create trigger statement to complete and I've cancelled it.
The table is one that gets read/written to often by a couple different processes. I have disabled the scheduled jobs that update the table and attempted at times when there is less activity on the table, but I'm not able to stop everything that accesses the table.
I do not believe there is a problem with the create trigger statement itself. The create trigger statement was successful and quick in a test environment, and the trigger works correctly when rows are inserted/updated in the table. However, when I created the trigger on the test database there was no load on the table and it had considerably fewer rows, which is different from the live/production database (100 vs. 13,000,000+).
Here is the create trigger statement that I'm trying to run
CREATE TRIGGER [OnItem_Updated]
ON [Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF update(State)
BEGIN
/* do some stuff including for each row updated call a stored
procedure that increments a value in table based on the
UserId of the updated row */
END
END
Can there be issues with creating a trigger on a table while rows are being updated or if it has many rows?
In SQL Server, triggers are created enabled by default. Is it possible to create the trigger disabled by default?
Any other ideas?
The problem may not be in the table itself, but in the system tables that have to be updated in order to create the trigger. If you're doing any other kind of DDL as part of your normal processes they could be holding it up.
Use sp_who to find out where the block is coming from, then investigate from there.
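For example (sys.dm_exec_requests is available on SQL Server 2005 and later):
EXEC sp_who;
-- or, to list only blocked sessions and who is blocking them:
SELECT session_id, blocking_session_id, wait_type, wait_resource
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0;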
I believe CREATE TRIGGER will attempt to put a lock on the entire table.
If you have a lot of activity on that table, it might have to wait a long time and you could be creating a deadlock.
For any schema changes you should really get everyone off the database.
That said, it is tempting to put in "small" changes with active connections. You should take a look at the locks/connections to see where the lock contention is.
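Something like the following shows which sessions hold or are waiting for locks on the table (sys.dm_tran_locks assumes SQL Server 2005 or later; substitute your own table name for Item):
SELECT request_session_id, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID('dbo.Item');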
That's odd. An AFTER UPDATE trigger shouldn't need to check existing rows in the table. I suppose it's possible that you aren't able to obtain a lock on the table to add the trigger.
You might try creating a trigger that basically does nothing. If you can't create that, then it's a locking issue. If you can, then you could disable that trigger, add your intended code to the body, and enable it. (I do not believe you can disable a trigger during creation.)
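For example, a do-nothing trigger like this (using the table from the question, with a made-up name) separates the locking question from the trigger body:
CREATE TRIGGER [OnItem_Updated_Noop]
ON [Item]
AFTER UPDATE
AS
BEGIN
    -- intentionally empty: if even this cannot be created, the problem is locking, not the trigger body
    SET NOCOUNT ON;
    RETURN;
END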
Part of the problem may also be the trigger itself. Could your trigger accidentally be updating all rows of the table? There is a big difference between 100 rows in a test database and 13,000,000. It is a very bad idea to develop code against such a small set when you have such a large dataset, because you have no way to predict performance. SQL that works fine for 100 records can completely lock up a system with millions of records for hours. You really want to find that out in dev, not when you promote to prod.
Calling a stored proc in a trigger is usually a very bad choice. It also means that you have to loop through records, which is an even worse choice in a trigger. Triggers must always account for multiple-record inserts/updates/deletes. If someone inserts 100,000 rows (not unlikely if you have 13,000,000 records), then looping through a record-based stored proc could take hours, lock the entire table, and cause all users to want to hunt down the developer and kill (or at least maim) him because they cannot get their work done.
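As a rough illustration of the set-based alternative, inside the trigger body where the inserted pseudo-table is available (the per-user counter table and its columns are made up here):
UPDATE uc
SET uc.StateChangeCount = uc.StateChangeCount + i.cnt
FROM dbo.UserCounters AS uc
JOIN (SELECT UserId, COUNT(*) AS cnt
      FROM inserted
      GROUP BY UserId) AS i
    ON i.UserId = uc.UserId;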
I would not even consider putting this trigger on prod until you test against a record set similar in size to prod.
My friend Dennis wrote this article that illustrates why testing against a small volume of information when you have a large volume of information can create difficulties on prod that you didn't notice on dev:
http://blogs.lessthandot.com/index.php/DataMgmt/?blog=3&title=your-testbed-has-to-have-the-same-volume&disp=single&more=1&c=1&tb=1&pb=1#c1210
Run DISABLE TRIGGER triggername ON tablename before altering the trigger, then re-enable it with ENABLE TRIGGER triggername ON tablename.
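With the names from the question, that would look like:
DISABLE TRIGGER [OnItem_Updated] ON [Item];
-- ... make your changes ...
ENABLE TRIGGER [OnItem_Updated] ON [Item];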