A simple SQL statement is timing out

This is a very simple SQL statement:
Update FosterHomePaymentIDNo WITH (ROWLOCK) Set FosterHomePaymentIDNo=1296
But it's timing out when I execute it from the context of an ASP.NET WebForms application.
Could this have something to do with the rowlock? How can I make sure that this SQL statement runs in a reasonable amount of time without compromising the integrity of the table? I suspect removing the rowlock would help, but then we could run into different users updating the table at the same time.
And yes, this is a "next ID" table that contains only one column and only one row; I don't know why it was set up this way instead of using an identity column or even select max(id) + 1!

If an UPDATE of a one-row table takes a long time, it's probably blocked by another session that updated it in a transaction that hasn't committed yet. This is why you don't want to generate IDs like this.
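To confirm that, you can look for blocked requests while the UPDATE is hanging. A minimal sketch, assuming SQL Server 2005 or later:
-- requests that are currently blocked, and the session id doing the blocking
SELECT r.session_id, r.blocking_session_id, r.wait_type, r.wait_time, r.command
FROM sys.dm_exec_requests AS r
WHERE r.blocking_session_id <> 0;
-- the oldest active (possibly uncommitted) transaction in the current database
DBCC OPENTRAN;
Once you know which session holds the lock on the one-row table, you can decide whether to wait it out, kill it, or fix the code path that leaves the transaction open.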

Related

Delete first X rows in Ingres ANSI

I have 730,000+ records to delete in an Ingres database (ANSI92), and I need to delete them without overloading the DB. A simple DELETE with a search condition doesn't work: the DB just uses all the memory and throws an error. I'm thinking of running it in a loop and deleting in portions of 10-20K records.
I tried to use TOP and it didn't work:
delete top (10) from TABLE where web_id < 0;
I also tried LIMIT, which didn't work either:
DELETE FROM TABLE where web_id < 0 LIMIT 10;
Any ideas how to do it? Thank you!
You could use a session temporary table to hold the first 10 tids (tuple IDs) and then delete based on those:
declare global temporary table session.tenrows as
select first 10 tid the_tid from "table" where web_id<0
on commit preserve rows with norecovery;
delete from "table" where tid in (select the_tid from session.tenrows);
When you say "without overload db", do you mean avoiding hitting the force-abort limit of the transaction log file? If so what might work for you is:
set session with on_logfull=notify;
delete from table where web_id<0;
This would automatically commit your transaction at points where force-abort is reached and then carry on, rather than rolling back and reporting an error.
A downside of using this setting is that it can be tricky to unpick what has/hasn't been done if any other error should occur (your work will likely be partially committed), but since this appears to be a straight delete from a table it should be quite obvious which rows remain and which don't.
The "set session" statement must be run at the start of a transaction.
I would advise not running concurrent sessions with "on_logfull=notify" (there have been bugs in this area, whether they're fixed in your installation depends on your version/patch level).

Is there any simple way to set max id of ID column using trigger before insert in MS SQL Server

I've seen almost every post concerning this question but haven't found the best answer. Some of them recommend using an identity column, others a trigger that increments an integer column.
I'd also like to use a trigger, as there will be a lot of deletes happening in my table. In addition, I mainly come from the Interbase DBMS, where I used to create a BEFORE INSERT trigger on the table, and this issue has been a pain ever since I migrated from Interbase to MS SQL Server.
This is how I did in Interbase
CREATE trigger currency_bi for currency
active before insert position 0
AS
declare variable m integer;
begin
select max(id)+1 from currency into :m;
if (:m is NULL ) then m=1;
new.id=:m;
end
So, as I will be using this frequently, what is the best way to create a trigger that increments an integer column using max(id)+1?
Don't use triggers to do this; it will either kill performance or cause all sorts of concurrency problems, depending on your use of transactions and locking.
It's better to use one of the mechanisms available in the engine: the identity property or a sequence object.
If you're running a newer version of SQL Server with the sequence feature available, use a sequence. It lets you reserve a range of ids from the client application and assign them to new rows on the client before sending them to the server for insert.
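For example (a sketch, assuming SQL Server 2012 or later; the sequence name and the currency table's columns are made up), you can create a sequence and either use it directly or reserve a whole block of ids for the client with sp_sequence_get_range:
CREATE SEQUENCE dbo.CurrencyIdSeq AS INT START WITH 1 INCREMENT BY 1;
-- use it directly when inserting
INSERT INTO dbo.currency (id, name)
VALUES (NEXT VALUE FOR dbo.CurrencyIdSeq, 'Euro');
-- or reserve a block of 100 ids for the client application in one call
DECLARE @first SQL_VARIANT;
EXEC sys.sp_sequence_get_range
    @sequence_name = N'dbo.CurrencyIdSeq',
    @range_size = 100,
    @range_first_value = @first OUTPUT;
SELECT @first AS first_reserved_id;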
Always use the identity option, because, as you said, you frequently delete records, and in that case a trigger can sometimes give you stale information (this is an isolation-level issue).
Suppose one transaction deletes the row with the highest id just before, or at the same time as, your trigger fires. The trigger then reads a "highest" value that no longer exists a moment later.
So when you run a select afterwards, you see a gap, which is wrong.
SQL Server's built-in mechanism for this kind of situation is the auto-identity option.
http://mrbool.com/understanding-auto-increment-in-sql-server/29171
You don't need to worry about any of that with identity. Another drawback of a trigger is that for a multi-row insert it fires only once, after the whole insert statement, not once per row.
Try to avoid triggers for this; they are error-prone and hard to control.
If you still want to go that way, compute the value in your insert statement instead of using a trigger (see the sketch below the link).
How can I auto-increment a column without using IDENTITY?
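If you do compute the value in the insert itself, one sketch (the column names and sample value are assumptions, and this serializes inserts on the table) is to take the MAX inside the INSERT under UPDLOCK and HOLDLOCK hints so two concurrent sessions cannot read the same maximum:
-- 'name' and the sample value are assumptions about the currency table
INSERT INTO dbo.currency (id, name)
SELECT COALESCE(MAX(id), 0) + 1, 'Euro'
FROM dbo.currency WITH (UPDLOCK, HOLDLOCK);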

Trigger on Audit Table failing due to update conflict

I have a number of tables that get updated through my app which return a lot of data or are difficult to query for changes. To get around this problem, I have created a "LastUpdated" table with a single row and have a trigger on these complex tables which just sets GetDate() against the appropriate column in the LastUpdated table:
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET ListItems = GetDate()
GO
This way, the clients only have to query this table for the last updated value and can then decide whether or not they need to refresh their data from the complex tables. The complex tables are using snapshot isolation to prevent dirty reads.
In busy systems, around once a day we are getting errors writing or updating data in the complex tables due to update conflicts in "LastUpdated". Because this occurs in the statement executed by the trigger, the affected complex table fails to save data. The following error is logged:
Snapshot isolation transaction aborted due to update conflict. You
cannot use snapshot isolation to access table 'dbo.tblLastUpdated'
directly or indirectly in database 'devDB' to update, delete, or
insert the row that has been modified or deleted by another
transaction. Retry the transaction or change the isolation level for
the update/delete statement.
What should I be doing here in the trigger to prevent this failure? Can I use some kind of query hints on the trigger to avoid this - or can I just ignore errors in the trigger? Updating the data in LastUpdated is not critical, but saving the data correctly into the complex tables is.
This is probably something very simple that I have overlooked or am not aware of. As always, thanks for any info.
I would say that you should look into using Change Tracking (http://msdn.microsoft.com/en-gb/library/cc280462%28v=sql.100%29.aspx), which is lightweight builtin SQL Server functionality that you can use to monitor the fact that a table has changed, as opposed to logging each individual change (which you can also do with Change Data Capture). It needs Snapshot Isolation, which you are already using.
Because your trigger is running in your parent transaction, and your snapshot has become out of date, your whole transaction would need to start again. If this is a complex workload, maintaining this last updated data in this way would be costly.
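Enabling Change Tracking is fairly lightweight. A sketch, assuming SQL Server 2008 or later, using your database and table names, and assuming the table has a primary key (the retention settings are illustrative):
ALTER DATABASE devDB
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.tblListItem
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);
-- clients remember the version they last synced at and ask what changed since then
SELECT CHANGE_TRACKING_CURRENT_VERSION() AS current_version;
SELECT ct.SYS_CHANGE_VERSION, ct.SYS_CHANGE_OPERATION -- the table's primary key columns are returned too
FROM CHANGETABLE(CHANGES dbo.tblListItem, 0) AS ct;   -- 0 = the version the client last synced at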
Short answer - don't do that! Making all the updating transactions depend on one single shared row makes them prone to deadlocks, update conflicts, and a whole gamut of nasty things.
You can either use views to determine last update, e.g.:
SELECT
t.name
,user_seeks
,user_scans
,user_lookups
,user_updates
,last_user_seek
,last_user_scan
,last_user_lookup
,last_user_update
FROM sys.dm_db_index_usage_stats i JOIN sys.tables t
ON (t.object_id = i.object_id)
WHERE database_id = db_id()
Or, if you really insist on the solution with LastUpdated, you can implement its update from the trigger in an autonomous transaction. Even though SQL Server doesn't support autonomous transactions natively, it can be done using linked servers: How to create an autonomous transaction in SQL Server 2008
The schema needs to change. If you have to keep your update table, create a row for every table. That would greatly reduce your locking because each table updates its own row instead of competing for the sole row in the table.
LastUpdated
table_name (varchar(whatever)) pk
modified_date (datetime)
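In T-SQL that might look like this (a sketch; the varchar length and the seed row are assumptions):
CREATE TABLE dbo.LastUpdated (
    table_name varchar(128) NOT NULL PRIMARY KEY,
    modified_date datetime NOT NULL
);
-- one row per tracked table
INSERT INTO dbo.LastUpdated (table_name, modified_date)
VALUES ('tblListItem', GETDATE());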
New Trigger for tblListItem
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET modified_date = GetDate() WHERE table_name = 'tblListItem'
GO
Another option that I use a lot is having a modified_date column in every table. Then people know exactly which records to update/insert to sync with your data rather than dropping and reloading everything in the table each time one record changes or is inserted.
Alternatively, you can update the log table inside the same transaction that you use to update your complex tables in your application, and avoid the trigger altogether.
Update
You can also opt for inserting a new row instead of updating the same row in the LastUpdated table. You can then query the max timestamp for the latest update. However, with this approach your LastUpdated table will grow every day, which you need to take care of if the volume of transactions is high.
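With the insert-a-row approach, reading the latest value per table becomes (a sketch against the hypothetical schema above):
SELECT table_name, MAX(modified_date) AS last_update
FROM dbo.LastUpdated
GROUP BY table_name;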

Add an identity column to existing table which is changing always

I have an existing table with 15 million rows in it. I want to add an identity column and make it the primary key. The problem is that this table is always moving (inserts, updates, deletes). Is it possible to add the identity column while this is happening? Or do I have to stop the background processes (a tedious task) that update this table?
Thanks
Vikram
Given that you have 15 million rows it might take some non-trivial amount of time to execute the ALTER TABLE statement.
Since SQL Server doesn't provide table hints for ALTER TABLE, it's pretty safe to assume that SQL Server takes a table lock when it executes an ALTER TABLE statement.
During this time no other process will be allowed to select, insert, update, or delete, so you don't have to worry about a race condition with some other process.
If the process takes long enough, your other processes will experience timeout errors. Depending on how those processes are written this is either a bad thing or a non-issue, but you'll need to figure that out. If it were me, I would turn them off.
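The statements themselves are simple (a sketch; the table and constraint names are made up, and on 15 million rows the ADD is a size-of-data change that holds the table lock while every row is stamped with a value):
ALTER TABLE dbo.YourBigTable
ADD Id INT IDENTITY(1, 1) NOT NULL;
ALTER TABLE dbo.YourBigTable
ADD CONSTRAINT PK_YourBigTable PRIMARY KEY CLUSTERED (Id);
Making the new key clustered rebuilds the table a second time, so consider a nonclustered primary key if the table already has a clustered index.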

CREATE TRIGGER is taking more than 30 minutes on SQL Server 2005

On our live/production database I'm trying to add a trigger to a table, but have been unsuccessful. I have tried a few times, but it has taken more than 30 minutes for the create trigger statement to complete and I've cancelled it.
The table is one that gets read/written to often by a couple different processes. I have disabled the scheduled jobs that update the table and attempted at times when there is less activity on the table, but I'm not able to stop everything that accesses the table.
I do not believe there is a problem with the create trigger statement itself. The create trigger statement was successful and quick in a test environment, and the trigger works correctly when rows are inserted/updated in the table. However, when I created the trigger on the test database there was no load on the table and it had considerably fewer rows, which is different from the live/production database (100 vs. 13,000,000+).
Here is the create trigger statement that I'm trying to run
CREATE TRIGGER [OnItem_Updated]
ON [Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
IF update(State)
BEGIN
/* do some stuff including for each row updated call a stored
procedure that increments a value in table based on the
UserId of the updated row */
END
END
Can there be issues with creating a trigger on a table while rows are being updated or if it has many rows?
In SQL Server, triggers are created enabled by default. Is it possible to create the trigger disabled by default?
Any other ideas?
The problem may not be in the table itself, but in the system tables that have to be updated in order to create the trigger. If you're doing any other kind of DDL as part of your normal processes they could be holding it up.
Use sp_who to find out where the block is coming from then investigate from there.
I believe the CREATE TRIGGER statement will attempt to put a lock on the entire table.
If you have a lot of activity on that table it might have to wait a long time, and you could be creating a deadlock.
For any schema changes you should really get everyone off the database.
That said it is tempting to put in "small" changes with active connections. You should take a look at the locks / connections to see where the lock contention is.
That's odd. An AFTER UPDATE trigger shouldn't need to check existing rows in the table. I suppose it's possible that you aren't able to obtain a lock on the table to add the trigger.
You might try creating a trigger that basically does nothing. If you can't create that, then it's a locking issue. If you can, then you could disable that trigger, add your intended code to the body, and enable it. (I do not believe you can disable a trigger during creation.)
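A do-nothing version of the trigger for that test might look like this (a sketch reusing your trigger and table names):
CREATE TRIGGER [OnItem_Updated]
ON [Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- intentionally empty; if even this times out, the problem is locking, not the trigger body
END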
Part of the problem may also be the trigger itself. Could your trigger accidentally be updating all rows of the table? There is a big difference between 100 rows in a test database and 13,000,000. It is a very bad idea to develop code against such a small data set when you have such a large one in production, because you have no way to predict performance. SQL that works fine for 100 records can completely lock up a system with millions of them for hours. You really want to find that out in dev, not when you promote to prod.
Calling a stored proc in a trigger is usually a very bad choice. It also means that you have to loop through records, which is an even worse choice in a trigger. Triggers must always account for multiple-record inserts, updates, or deletes. If someone inserts 100,000 rows (not unlikely if you have 13,000,000 records), then looping through a row-by-row stored proc could take hours, lock the entire table, and cause all users to want to hunt down the developer and kill (or at least maim) him because they cannot get their work done.
I would not even consider putting this trigger on prod until you test against a record set similar in size to prod.
My friend Dennis wrote this article that illustrates why testing against a small volume of information when you have a large volume can create difficulties on prod that you didn't notice in dev:
http://blogs.lessthandot.com/index.php/DataMgmt/?blog=3&title=your-testbed-has-to-have-the-same-volume&disp=single&more=1&c=1&tb=1&pb=1#c1210
Run DISABLE TRIGGER triggername ON tablename before altering the trigger, then re-enable it with ENABLE TRIGGER triggername ON tablename.
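For this particular trigger, the sequence would be something like (a sketch; the real logic goes where the comment indicates):
DISABLE TRIGGER [OnItem_Updated] ON [Item];
GO
ALTER TRIGGER [OnItem_Updated]
ON [Item]
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
/* real body goes here */
END
GO
ENABLE TRIGGER [OnItem_Updated] ON [Item];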