I have an SP written in C# which performs calculations on around 2 million rows. The calculations take about 3 minutes. For each row, a result is generated in the form of three numbers.
Those results are inserted into a temporary table which is later processed further.
Results are added in chunks, and inserting sometimes takes over 200 minutes (yes, over 3 hours!). Sometimes it takes "only" 50 minutes.
I have modified it so that the results are kept in memory until the end, and then all 2 million rows are dumped in one loop inside a single transaction. Still, it takes around 20 minutes.
A similar loop written in SQL with BEGIN/COMMIT TRANSACTION takes less than 30 seconds.
Does anyone have an idea where the problem is?
Processing the 2 million rows (selecting them, etc.) takes 3 minutes; inserting the results with the best solution so far takes 20 minutes.
UPDATE: this table has one clustered index on an identity column (to ensure that rows are physically appended at the end), no triggers, no other indexes, and no other process is accessing it.
As long as we are all being vague, here is a vague answer. If inserting 2 million rows takes that long, I would check for four problems, in this order:
Validating foreign key references or uniqueness constraints. You shouldn't need any of these on your temporary table. Do the validation in your CLR before the record gets to the insert step.
Complicated triggers. Please tell me you don't have any triggers on the temporary table. Finish your inserts and then do more processing after everything is in.
Trying to recalculate indexes after each insert. Try dropping the indexes before the insert step and recreating them afterwards (see the sketch after this list).
If these aren't it, you might be dealing with record locking. Do you have other processes that hit the temporary table that might be getting in the way? Can you stop them during your insert?
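A minimal sketch of that drop/recreate idea, using a hypothetical non-clustered index and table name rather than anything from the question (per the update, the temporary table only has its clustered index, so this applies to any additional indexes):
-- Hypothetical names: disable the index before the bulk insert,
-- rebuild it once afterwards instead of maintaining it row by row.
ALTER INDEX IX_Results_Value ON dbo.Results DISABLE;
-- ... run the large insert here ...
ALTER INDEX IX_Results_Value ON dbo.Results REBUILD;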
I've got a seemingly simple stored procedure that is taking too long to run (25 minutes on about 1 million records). I was wondering what I can do to speed it up. It's just deleting records in a given set of statuses.
Here's the entire procedure:
ALTER PROCEDURE [dbo].[spTTWFilters]
AS
BEGIN
DELETE FROM TMW
WHERE STATUS IN ('AVAIL', 'CANCL', 'CONTACTED', 'EDI-IN', 'NOFRGHT', 'QUOTE');
END
I can obviously beef up my Azure SQL instance to run faster, but are there other ways to improve? Is my syntax not ideal? Do I need to index the STATUS column? Thanks!
So the answer, as is generally the case with large data-modification operations, is to break it up into several smaller batches.
Every DML statement runs inside a transaction, whether one is explicitly declared or not. By running a delete that affects a large number of rows as a single batch, locks are held on the indexes and the base table for the duration of the operation, and the log file will continue to grow, internally creating new VLFs for the entire transaction.
Moreover, if the delete is aborted before it completes, the rollback may well take considerably longer, since rollbacks are always single-threaded.
Breaking the work into batches, usually in some form of loop working progressively through a range of key values, allows the deletes to occur in smaller, more manageable chunks. In this case, having a range of different status values to delete separately appears to be enough to yield a worthwhile improvement.
You can use the TOP keyword in a loop to delete a large amount of data, or use the = operator instead of the IN keyword.
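A minimal sketch of such a batching loop, using the TMW table and status list from the question; the batch size is an arbitrary placeholder:
-- Delete in fixed-size chunks so each pass holds locks only briefly
-- and generates a modest amount of log before it commits.
DECLARE @BatchSize INT = 50000;
WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM TMW
    WHERE STATUS IN ('AVAIL', 'CANCL', 'CONTACTED', 'EDI-IN', 'NOFRGHT', 'QUOTE');
    IF @@ROWCOUNT = 0
        BREAK;
END;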
I have 2 tables which need to be refreshed every hour. One table is a truncate-and-load, the other is an incremental load. The total process takes around 30 seconds to complete. There are a couple of applications hitting these tables on a continuous basis, and I can't have the applications showing blank data at any moment. Any idea what could be done so that operations on these tables (including the truncate/load) don't affect the output on the UI? I am thinking of creating an MV on these tables, but is there a better approach?
Convert the TRUNCATE to a DELETE and make the whole process one transaction. If the current process only takes 30 seconds the extra overhead of deleting and conventional inserts shouldn't be too bad.
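A minimal sketch of that idea, with placeholder table and column names; depending on the isolation level, readers either block briefly or keep seeing the old rows, but never an empty table:
BEGIN TRANSACTION;
-- DELETE instead of TRUNCATE so the whole refresh stays in one transaction
DELETE FROM full_load_table;
INSERT INTO full_load_table (col1, col2)
SELECT col1, col2
FROM   staging_source;
-- the incremental load for the second table could go here as well
COMMIT TRANSACTION;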
I am facing an issue with an ever-slowing process which runs every hour and inserts around 3-4 million rows daily into a SQL Server 2008 database.
The schema consists of a large table which contains all of the above data and has a clustered index on a datetime field (by day), a unique index on a combination of fields in order to exclude duplicate inserts, and a couple more indexes on 2 varchar fields.
The typical behavior as of late is that the insert statements get suspended for a while before they complete. The overall process used to take 4-5 minutes and now it usually takes well over 40 minutes.
The inserts are executed by a .NET service which parses a series of XML files, performs some data transformations, and then inserts the data into the DB. The service has not changed at all; it's just that the inserts take longer than they used to.
At this point I'm willing to try everything. Please, let me know whether you need any more info and feel free to suggest anything.
Thanks in advance.
Sounds like you have exhausted the buffer pool's ability to cache all the pages needed for the insert process. Append-style inserts (like those into the clustered index on your datetime field) have a very small working set of just a few pages. Random-style inserts have basically the entire index as their working set. If you insert a row at a random location, the existing page that row is supposed to be written to must be read first.
This probably means tons of disk seeks for inserts.
Make sure to insert all rows in one statement. Use bulk insert or TVPs. This allows SQL Server to optimize the query plan by sorting the inserts by key value, making the IO much more efficient.
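For example, a TVP-based insert might look roughly like this; the type, table, and column names are made up for illustration, not taken from the post:
-- A table type that the .NET service can fill and pass in a single call
CREATE TYPE dbo.ResultRowType AS TABLE
(
    EventTime DATETIME NOT NULL,
    Value1    FLOAT    NOT NULL,
    Value2    FLOAT    NOT NULL
);
GO
CREATE PROCEDURE dbo.InsertResults
    @Rows dbo.ResultRowType READONLY
AS
BEGIN
    -- One set-based insert instead of millions of singleton statements
    INSERT INTO dbo.TargetTable (EventTime, Value1, Value2)
    SELECT EventTime, Value1, Value2
    FROM @Rows;
END;
GO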
This will, however, not fully restore performance on its own, although it can be a significant speedup (I have seen 5x in similar situations). To regain the original performance you must bring the working set back into memory: add RAM, purge old data, or partition such that you only need to touch very few partitions.
Drop the indexes before the insert and recreate them on completion.
I have a table with 20 or so columns. I have approximately 7 non-clustered indexes on that table, on the columns that users filter by most often. The active records (those that the users see on their screen) number no more than 700-800. Twice a day a batch job runs and inserts a few records into that table - maybe 30-100 - and may update the existing ones as well.
I have noticed that the indexes need rebuilding EVERY time the batch operation completes. Their fragmentation level doesn't go from 0-1% step by step up to, say, 50%; it goes from 0-1% to approximately 99% after the batch operation completes. A zillion selects can happen on this table between batch operations, but I don't think that matters.
Is this normal? I don't think it is. What do you think is the problem here? The indexed columns are mostly strings and floats.
Even a few changes can easily alter fragmentation levels:
An insert on a page can cause a page split
Rows can overflow
Rows can be moved (forward pointers)
You'll have quite wide rows too, so your data density (rows per page) is lower. DML on existing rows will cause fragmentation quite quickly if it is distributed across many pages.
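If you want to see exactly which indexes jump after the batch job, a query along these lines (the table name is a placeholder) reports fragmentation per index:
SELECT  i.name AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id  = ps.index_id
ORDER BY ps.avg_fragmentation_in_percent DESC;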
I am processing large amounts of data in iterations; each iteration processes around 10,000-50,000 records. Because of the large number of records, I am inserting them into a global temporary table first and then processing them. Usually, each iteration takes 5-10 seconds.
Would it be wise to truncate the global temporary table after each iteration so that each iteration can start off with an empty table? There are around 5000 iterations.
No! The whole idea of a Global Temporary Table is that the data disappears automatically when you no longer need it.
For example, if you want the data to disappear when you COMMIT, you should use the ON COMMIT DELETE ROWS option when originally creating the table.
That way, you don't need to do a TRUNCATE - you just COMMIT, and the table is all fresh and empty and ready to be reused.
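A minimal Oracle-style sketch, with made-up table and column names:
-- Rows inserted into this table disappear automatically at COMMIT
CREATE GLOBAL TEMPORARY TABLE staging_rows
(
    id      NUMBER,
    payload VARCHAR2(4000)
)
ON COMMIT DELETE ROWS;
-- Each iteration: insert, process, then COMMIT to empty the table
-- INSERT INTO staging_rows (id, payload) SELECT ... ;
-- ... processing ...
COMMIT;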
5000 iterations of 50,000 records per run? If you need to do that much processing, surely you can optimise your processing logic to run more efficiently. That would give you more of a speedup than truncating tables.
However, if you are finished with the data in a temp table, you should truncate it, or at least ensure that the next process to use the table isn't re-processing the same data over again.
E.g. have a 'processed' flag, so new processes don't use the existing data.
OR
remove data when no longer needed.