Insert/Update Table Locks on SQL Server

I have a big table with around 70 columns in SQL Server 2008. A multithreaded .NET application calls a stored proc on the database to insert into / update the table, roughly 3 times a second.
I have created weekly partitions on the table, since almost every query has a datetime constraint on it.
Sometimes it takes a long time to insert into / update the table. I suspect that sometimes an INSERT makes an UPDATE wait, and sometimes an UPDATE makes an INSERT wait. Is that possible?
How can I design the table to avoid such locks? Performance is the main issue here.

You're right that you're probably hitting lock contention that's causing things to wait. A couple of things to check first:
Are your indexes correct?
If your DB is in 'Full' recovery mode, do you need it? Simple recovery really speeds up inserts/updates, but you lose point-in-time restores for backups.
Are you likely to have multiple threads writing the same record? If not, NOLOCK might be your friend here, but it would mean your data might be inconsistent for a second or two on occasion.
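For example, a minimal sketch of those last two suggestions; the database, table, and column names are placeholders, not from the original setup:

-- Switch to simple recovery if point-in-time restore is not required for this database
ALTER DATABASE MyDb SET RECOVERY SIMPLE;

-- Let readers skip shared locks; rows touched by in-flight inserts/updates may read dirty
SELECT TOP (100) *
FROM dbo.MyBigTable WITH (NOLOCK)
WHERE EventDate >= '2013-01-01';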

Related

Best way to do a long-running schema change (or data update) in MS SQL Server?

I need to alter the size of a column on a large table (millions of rows). It will be set to nvarchar(n) rather than nvarchar(max), so from what I understand it should not be a long change. But since I will be doing this on production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 from SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for doing long-running updates? Should it be scheduled as a job on the server, maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the update statement that is created for you actually stores the data in memory, drops the table, creates a new one with the change you want, and repopulates it from memory. However, in my case one of the changes I made was adding a unique constraint, so the repopulation failed, and once the statement finished, the data held in memory was gone. This left me with the new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then SELECT * INTO the new table, then rename the tables in a single statement. If there is potential for data to be entered into the table while this is running and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential population of the new data, then rename the tables.
Sorry for the long post.
Edit:
Also, if the connection times out due to the duration, run the insert statement locally on the DB server. You could also create a job and run that, though it is essentially the same thing.
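A rough sketch of the copy-and-rename approach described above, here using an explicit CREATE TABLE plus INSERT...SELECT so the new column size is picked up; every object name and the nvarchar(100) size are invented for illustration:

-- 1. Create the new table with the changed column definition
CREATE TABLE dbo.MyTable_New (
    Id INT NOT NULL,
    SomeText NVARCHAR(100) NULL
    -- ... remaining columns ...
);

-- 2. Copy the data across (this is the long-running part)
INSERT INTO dbo.MyTable_New (Id, SomeText)
SELECT Id, SomeText
FROM dbo.MyTable;

-- 3. Swap the tables in one transaction so the switch is effectively atomic
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';
COMMIT TRANSACTION;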

SQL Server bulk load into a table

I need to update a table in SQL Server 2008 along the lines of the MERGE statement (deletes, inserts, and updates). The table has 700k rows, and I need users to still have read access to it, assuming an isolation level of read committed.
I tried things like ALTER TABLE table SET (LOCK_ESCALATION=DISABLE) to no avail. I tested by doing a SELECT TOP 50000 * from another window; obviously read uncommitted worked :). Is there any way around this without changing the users' isolation level, while retaining 'all or nothing' transaction behaviour?
My current solution of a cursor that commits in batches of n may allow users to work, but it loses the transactional behaviour. Perhaps I could just make the bulk update fast enough to always take less than 30 seconds (the timeout). The problem is that the users' target DBs are on very slow machines with only 512 MB of RAM. I'm not sure about the processor, but I assume it is really slow, and I don't have access to the machines at this time!
I created a test that causes an update statement to need to run against all 700k rows:
An UPDATE with a LEFT JOIN on my dev box (which is quite fast) took 17 seconds.
The MERGE statement took 10 seconds.
The FORWARD_ONLY cursor was slower than both.
These figures are acceptable on my machine, but I would feel more comfortable if I could get the query time down to less than 5 seconds before allowing locks.
Any ideas on preventing any locking on the table/rows or making it faster still?
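For reference, a sketch of the kind of MERGE being timed here; the table and column names are invented, not from the actual schema:

MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.SomeValue = s.SomeValue
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, SomeValue) VALUES (s.Id, s.SomeValue)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;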
It sounds like this table may be queried a lot but not updated much. If it really is a true read-only table for everyone else, but you want to update it extremely quickly, you could create a script that uses this method (could even wrap it in a transaction, I believe, although it's probably unnecessary):
Make a copy of the table named TABLENAME_COPY
Update the copy table
Rename the original table to TABLENAME_ORIG
Rename the copy table to TABLENAME
You would only experience downtime in between steps 3 and 4, and a scripted table rename would probably be quicker than an update of so many rows.
This does all assume that no one else can update the table while your process is running, but they will still be able to read it fully at any point except between steps 3 and 4.
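A minimal sketch of those four steps, with invented table and column names; the final swap is the only part readers would notice:

-- 1. Make a copy of the table
SELECT *
INTO dbo.TargetTable_COPY
FROM dbo.TargetTable;

-- 2. Run the heavy update against the copy while readers keep using the original
UPDATE c
SET c.SomeValue = s.SomeValue
FROM dbo.TargetTable_COPY AS c
JOIN dbo.SourceTable AS s ON s.Id = c.Id;

-- 3 & 4. Rename both tables back to back; only this moment is visible to readers
EXEC sp_rename 'dbo.TargetTable', 'TargetTable_ORIG';
EXEC sp_rename 'dbo.TargetTable_COPY', 'TargetTable';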

Improve update performance when setting column to null

Bit of a long shot here, but I have a simple query below:
begin transaction
update s
set s.SomeField = null
from someTable s (NOLOCK)
rollback transaction
This runs in ~30 seconds sitting close to the SQL Server box. Are there any tricks I can use to improve the speed? The table has 144,306 rows in it.
thanks.
The single largest component of the performance of a large UPDATE command like this is going to be the speed of your DB log.
For best performance:
Make sure the DB log (LDF file) is on a separate physical spindle from the DB data (MDF file)
Avoid parity RAID for the log volume, such as RAID-5; RAID-1 or RAID-10 are better
Make sure that the DB log file is pre-grown, and that it's physically contiguous on disk (see the sketch after this list)
Make sure your server has enough RAM -- ideally, at least enough to hold all of the DB pages containing the modified rows
Using SSDs for your data drive may also help, because the command will create a large number of dirty buffers, which will be flushed to disk later by the lazy writer; this can make other operations on the DB slow while it's happening.
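On the pre-grown log point, the file can be sized up front; the database and logical file names below are placeholders:

-- Grow the log to its expected working size in one step instead of relying on autogrowth
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_log, SIZE = 8192MB);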
If there's no constraint on it, and you really need to set all values of that column to NULL, then I would test dropping the column and re-adding it.
Not sure if that would be faster or not, but I'd investigate it.
Try disabling the index temporarily.
You could change the syntax of your query slightly, but that made no difference in my testing; I was measuring with STATISTICS IO and STATISTICS TIME.
You mention the column is indexed. You could disable it / re-enable it as part of your transaction. The T-SQL for that is simple; see this - http://blog.sqlauthority.com/2007/05/17/sql-server-disable-index-enable-index-alter-index/
I've had to do that in the past for similar jobs and it has worked out well for me.
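A rough sketch of that disable / re-enable pattern, assuming the column is covered by an index named IX_someTable_SomeField (the index name is made up):

ALTER INDEX IX_someTable_SomeField ON dbo.someTable DISABLE;

UPDATE s
SET s.SomeField = NULL
FROM dbo.someTable AS s;

-- REBUILD re-enables the index once the update is done
ALTER INDEX IX_someTable_SomeField ON dbo.someTable REBUILD;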
Try implementing it like this (a sketch follows the list):
Disable Index
Drop the column
Create the column
Rebuild index
I would guess that it will improve performance.
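Something along those lines, with placeholder index, column, and type names; note that if the column is referenced by the index, dropping and re-creating that index (rather than only disabling it) may be required before the column itself can be dropped:

DROP INDEX IX_someTable_SomeField ON dbo.someTable;
ALTER TABLE dbo.someTable DROP COLUMN SomeField;
ALTER TABLE dbo.someTable ADD SomeField INT NULL;  -- re-created column is NULL in every row
CREATE INDEX IX_someTable_SomeField ON dbo.someTable (SomeField);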

Add an identity column to an existing table which is always changing

I have an existing table with 15 million rows in it. I want to add an identity column and make it the primary key. The problem is that this table is always moving (inserts, updates, deletes). Is it possible to add the identity column while this is going on? Or do I have to stop the background processes (a tedious task) that update this table?
Thanks
Vikram
Given that you have 15 million rows it might take some non-trivial amount of time to execute the ALTER TABLE statement.
Since SQL Server doesn't provide table hints for ALTER TABLE, it's pretty safe to assume that SQL Server takes a table lock when it executes an ALTER TABLE statement.
During this time no other process will be allowed to select, insert, update, or delete, so you don't have to worry about a race condition with some other process.
If the process takes long enough, your other processes will experience timeout errors. Depending on how they are written, this is either a bad thing or a non-issue, but you'll need to figure that out. If it were me, I would turn them off.
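A sketch of the statements involved; the table, column, and constraint names are illustrative only:

-- Adding the identity column has to touch every row, so the table lock can be held for a while
ALTER TABLE dbo.BigTable ADD Id INT IDENTITY(1, 1) NOT NULL;

-- Making it the primary key then builds the supporting index
ALTER TABLE dbo.BigTable ADD CONSTRAINT PK_BigTable PRIMARY KEY (Id);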

Read access to a MyISAM table during a long INSERT?

On MySQL, using only MyISAM tables, I need to access the contents of a table during the course of a long-running INSERT.
Is there a way to prevent the INSERT from locking the table in a way that keeps a concurrent SELECT from running?
This is what I am driving at: I want to inspect how many records have been inserted up to now. Unfortunately, WITH (NOLOCK) does not work on MySQL, and I could only find commands that control transaction locks (e.g., setting the transaction isolation level to READ UNCOMMITTED), which, from my understanding, should not apply to MyISAM tables at all since they don't support transactions in the first place.
MyISAM locking will block selects. Is there a reason for using MyISAM over InnoDB? If you don't want to change your engine, I suspect this might be a solution for you:
1: Create a materialized view of the table using a cron job (or other scheduled task) that your application can query without blocking.
2: Use a trigger to count up the number of inserts that have occurred, and look up the number of inserts from this metadata table (a sketch follows below).
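A rough sketch of that trigger-based counter; the table and trigger names are invented, and the counter row is assumed to exist before the load starts:

-- Metadata table holding a running count of inserts per monitored table
CREATE TABLE insert_progress (
    table_name VARCHAR(64) PRIMARY KEY,
    rows_inserted INT NOT NULL DEFAULT 0
);

INSERT INTO insert_progress (table_name, rows_inserted) VALUES ('big_table', 0);

-- Bump the counter once per inserted row
CREATE TRIGGER big_table_count_insert AFTER INSERT ON big_table
FOR EACH ROW
  UPDATE insert_progress
  SET rows_inserted = rows_inserted + 1
  WHERE table_name = 'big_table';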