Large number of UPDATE queries slowing down page - sql

I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into a MS SQL table. Each DB row basically consists of a record_ID (primary key, auto-incrementing) and about 50 varchar fields.
After the insert is done, I run a validation function on the file that checks each field in each row against a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).
I have a 5,000 line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes about 50 seconds to return.
My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:
UPDATE dbo.Records
SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END,
.
. (all 50 varchar field CASE statements here)
.
WHERE dbo.Records.record_ID = @record_ID
Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs, and things are simply slow (it's a quad-processor server with 8 GB of RAM)?
Any suggestions appreciated.
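For reference, the kind of "single transaction" grouping I'm asking about would look roughly like the sketch below (the proc name and the resets collection are placeholders, and my real code is VB.Net rather than C#):
using System.Data;
using System.Data.SqlClient;

// Sketch only: wrap all of the per-field "reset" UPDATE calls in one transaction
// so the work is committed once instead of once per call. "dbo.UpdateRecordField"
// and resetsToApply are placeholders, not my actual code.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        foreach (var reset in resetsToApply)   // placeholder: the 750+ pending resets
        {
            using (SqlCommand cmd = new SqlCommand("dbo.UpdateRecordField", conn, tran))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@record_ID", reset.RecordId);
                cmd.Parameters.AddWithValue("@field_name", reset.FieldName);
                cmd.Parameters.AddWithValue("@field_value", reset.FieldValue);
                cmd.ExecuteNonQuery();
            }
        }
        tran.Commit();   // a single commit for the whole batch
    }
}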

Don't do this in SQL; fix the data up in code, then do your updates.
If you have SQL Server 2008, then look into table-valued parameters. They enable you to pass an entire table as a parameter to a stored proc. From there you just have the one INSERT/UPDATE or MERGE statement.
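A rough sketch of that approach, assuming SQL Server 2008+, a user-defined table type, and a proc that applies the whole batch in one statement (the type name, proc name, and columns below are made up for illustration):
using System.Data;
using System.Data.SqlClient;

// Build an in-memory table whose shape matches a table type such as:
//   CREATE TYPE dbo.RecordFieldReset AS TABLE (record_ID INT, field_name VARCHAR(128), field_value VARCHAR(8000));
// and a proc (e.g. dbo.ApplyResets) that UPDATEs/MERGEs from the TVP in one statement.
DataTable resets = new DataTable();
resets.Columns.Add("record_ID", typeof(int));
resets.Columns.Add("field_name", typeof(string));
resets.Columns.Add("field_value", typeof(string));
resets.Rows.Add(42, "file_ID", "");            // example reset rows
resets.Rows.Add(43, "some_field", "cleaned");

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("dbo.ApplyResets", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@Resets", resets);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.RecordFieldReset";       // must match the table type name
    conn.Open();
    cmd.ExecuteNonQuery();                     // one round trip for all 750+ resets
}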

If you're looping through the lines and doing individual updates/inserts, this can be really expensive... Consider using SqlBulkCopy, which can speed up all your inserts. Similarly, you can create a DataSet, make your updates on the DataSet, and then submit them all in one shot through a SqlDataAdapter.
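For the insert side, a minimal SqlBulkCopy sketch might look like this (the destination table name and the helper that parses the file are assumptions):
using System.Data;
using System.Data.SqlClient;

// Load the parsed file into a DataTable (one column per field), then push it
// to the server in a single bulk operation instead of row-by-row INSERTs.
DataTable rows = BuildDataTableFromFile(filePath);   // hypothetical helper

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.Records";
        bulk.BatchSize = 5000;               // tune as needed
        bulk.WriteToServer(rows);
    }
}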

I believe you are doing 50 CASE statements on every update. Sounds like that would be slow.
It is possible to solve this problem with injection-proof code via parameterized queries and a constant array of query strings.
Quick and dirty example code.
string[] queryList = { "UPDATE records SET col1 = @val WHERE ID = @key",
                       "UPDATE records SET col2 = @val WHERE ID = @key",
                       "UPDATE records SET col3 = @val WHERE ID = @key",
                       ...
                       "UPDATE records SET col50 = @val WHERE ID = @key" };
Then in your call to SQL you just pick the item in the array corresponding to the col you want to update and set the value and key for the parameterized items.
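For example, the call might look roughly like this (conn, columnIndex, newValue and recordId are assumed to already exist in your code):
using System.Data;
using System.Data.SqlClient;

// Pick the statement for the column being reset and bind the value and key
// as parameters, so the SQL text itself never changes per row.
string sql = queryList[columnIndex];   // e.g. "UPDATE records SET col3 = @val WHERE ID = @key"

using (SqlCommand cmd = new SqlCommand(sql, conn))
{
    cmd.Parameters.AddWithValue("@val", newValue);
    cmd.Parameters.AddWithValue("@key", recordId);
    cmd.ExecuteNonQuery();
}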
I'm guessing you will see a significant improvement... let me know how it goes.

Um. Why are you inserting numeric data into VARCHAR fields then trying to run numeric checks on it? This is yucky.
Apply correct data typing and constraints to your table, do the INSERT, and see if it failed. SQL Server will happily report errors back to you.
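Detecting the failure from the app side might look roughly like this sketch (the Errors-table insert and the variable names are assumptions based on the question):
using System.Data.SqlClient;

// If the column types and CHECK constraints enforce the rules, a bad row simply
// fails to INSERT; catch the exception and log SQL Server's own error text.
// insertCmd, conn, and lineNumber are placeholders from the surrounding import loop.
try
{
    insertCmd.ExecuteNonQuery();   // parameterized INSERT for one parsed line
}
catch (SqlException ex)
{
    using (SqlCommand err = new SqlCommand(
        "INSERT INTO dbo.Errors (record_ID, error_message) VALUES (@id, @msg)", conn))
    {
        err.Parameters.AddWithValue("@id", lineNumber);   // or however you identify the failed row
        err.Parameters.AddWithValue("@msg", ex.Message);
        err.ExecuteNonQuery();
    }
}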

I would try changing the recovery model to simple and look at my indexes. Kimberly Tripp did a session showing a scenario with improved performance using a heap.


TSQL: Is there a way to limit the rows returned and count the total that would have been returned without the limit (without adding it to every row)?

I'm working to update a stored procedure that currently selects up to n rows; if the number of rows returned equals n, it does a SELECT COUNT without the limit, and then returns the original select along with the total number of rows.
Kinda like:
SELECT TOP (@rowsToReturn)
    A.data1,
    A.data2
FROM
    mytable A;

SET @maxRows = @@ROWCOUNT;

IF @rowsToReturn = @maxRows
BEGIN
    SET @maxRows = (SELECT COUNT(1) FROM mytable);
END
I want to reduce this to a single select statement. Based on this question, COUNT(*) OVER() allows this, but it is put on every single row instead of in an output parameter. Maybe there is something like FOUND_ROWS() in MySQL, such as an @@TOTALROWCOUNT or similar.
As a side note, since the actual select has an ORDER BY, the database will already need to traverse the entire set (to make sure that it gets the correct first n ordered records), so the database should already have this count somewhere.
As @MartinSmith mentioned in a comment on this question, there is no direct (i.e. pure T-SQL) way of getting the total number of rows that would be returned while at the same time limiting it. In the past I have done the method of:
dump the query to a temp table to grab @@ROWCOUNT (the total set)
use ROW_NUMBER() AS [ResultID] on the ordered results of the main query
SELECT TOP (n) * FROM #Temp ORDER BY [ResultID] or something similar
Of course, the downside here is that you have the disk I/O cost of getting those records into the temp table. Put [tempdb] on SSD? :)
I have also tried the "run COUNT(*) with the same rest of the query first, then run the regular SELECT" method (as advocated by @Blam), and it is not a "free" re-run of the query:
It is a full re-run in many cases. The issue is that when doing COUNT(*) (hence not returning any fields), the optimizer only needs to worry about indexes in terms of the JOIN, WHERE, GROUP BY, ORDER BY clauses. But when you want some actual data back, that could change the execution plan quite a bit, especially if the indexes used to get the COUNT(*) are not "covering" for the fields in the SELECT list.
The other issue is that even if the indexes are all the same and hence all of the data pages are still in cache, that just saves you from the physical reads. But you still have the logical reads.
I'm not saying this method doesn't work, but I think the method in the Question that only does the COUNT(*) conditionally is far less stressful on the system.
The method advocated by @Gordon is actually functionally very similar to the temp table method I described above: it dumps the full result set to [tempdb] (the INSERTED table is in [tempdb]) to get the full @@ROWCOUNT and then it gets a subset. On the downside, the INSTEAD OF TRIGGER method is:
a lot more work to set up (as in 10x - 20x more): you need a real table to represent each distinct result set, you need a trigger, the trigger needs to either be built dynamically, or get the number of rows to return from some config table, or I suppose it could get it from CONTEXT_INFO() or a temp table. Still, the whole process is quite a few steps and convoluted.
very inefficient: first it does the same amount of work dumping the full result set to a table (i.e. into the INSERTED table--which lives in [tempdb]) but then it does an additional step of selecting the desired subset of records (not really a problem as this should still be in the buffer pool) to go back into the real table. What's worse is that second step is actually double I/O as the operation is also represented in the transaction log for the database where that real table exists. But wait, there's more: what about the next run of the query? You need to clear out this real table. Whether via DELETE or TRUNCATE TABLE, it is another operation that shows up (the amount of representation based on which of those two operations is used) in the transaction log, plus is additional time spent on the additional operation. AND, let's not forget about the step that selects the subset out of INSERTED into the real table: it doesn't have the opportunity to use an index since you can't index the INSERTED and DELETED tables. Not that you always would want to add an index to the temp table, but sometimes it helps (depending on the situation) and you at least have that choice.
overly complicated: what happens when two processes need to run the query at the same time? If they are sharing the same real table to dump into and then select out of for the final output, then there needs to be another column added to distinguish between the SPIDs. It could be @@SPID. Or it could be a GUID created before the initial INSERT into the real table is called (so that it can be passed to the INSTEAD OF trigger via CONTEXT_INFO() or a temp table). Whatever the value is, it would then be used to do the DELETE operation once the final output has been selected. And if not obvious, this part influences a performance issue brought up in the prior bullet: TRUNCATE TABLE cannot be used as it clears the entire table, leaving DELETE FROM dbo.RealTable WHERE ProcessID = @WhateverID; as the only option.
Now, to be fair, it is possible to do the final SELECT from within the trigger itself. This would reduce some of the inefficiency as the data never makes it into the real table and then also never needs to be deleted. It also reduces the over-complication as there should be no need to separate the data by SPID. However, this is a very time-limited solution as the ability to return results from within a trigger is going bye-bye in the next release of SQL Server, so sayeth the MSDN page for the disallow results from triggers Server Configuration Option:
This feature will be removed in the next version of Microsoft SQL Server. Do not use this feature in new development work, and modify applications that currently use this feature as soon as possible. We recommend that you set this value to 1.
The only actual way to do:
the query one time
get a subset of rows
and still get the total row count of the full result set
is to use .Net. If the procs are being called from app code, please see "EDIT 2" at the bottom. If you want to be able to randomly run various stored procedures via ad hoc queries, then it would have to be a SQLCLR stored procedure so that it could be generic and work for any query as stored procedures can return dynamic result sets and functions cannot. The proc would need at least 3 parameters:
@QueryToExec NVARCHAR(MAX)
@RowsToReturn INT
@TotalRows INT OUTPUT
The idea is to use "Context Connection = true;" to make use of the internal / in-process connection. You then do these basic steps:
1. call ExecuteReader()
2. before you read any rows, do a GetSchemaTable()
3. from the SchemaTable you get the result set field names and datatypes
4. from the result set structure you construct a SqlDataRecord
5. with that SqlDataRecord you call SqlContext.Pipe.SendResultsStart(_DataRecord)
6. now you start calling Reader.Read()
7. for each row you call:
   Reader.GetValues()
   DataRecord.SetValues()
   SqlContext.Pipe.SendResultsRow(_DataRecord)
   RowCounter++
8. Rather than doing the typical "while (Reader.Read())", you instead include the @RowsToReturn param: while (Reader.Read() && RowCounter < RowsToReturn.Value)
9. After that while loop, call SqlContext.Pipe.SendResultsEnd() to close the result set (the one that you are sending, not the one you are reading)
10. then do a second while loop that cycles through the rest of the result, but never gets any of the fields:
    while (Reader.Read())
    {
        RowCounter++;
    }
11. then just set TotalRows = RowCounter; which will pass back the number of rows for the full result set, even though you only returned the top n rows of it :)
Not sure how this performs against the temp table method, the dual call method, or even @M.Ali's method (which I have also tried and kinda like, but the question was specific to not sending the value as a column), but it should be fine and does accomplish the task as requested.
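For illustration, a condensed sketch of those steps as a SQLCLR proc might look like the following. This is a simplification, not a production implementation: the type mapping only covers a few types, and the class and method names are illustrative.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public class TotalRowsProc
{
    [SqlProcedure]
    public static void ExecWithTotal(SqlString QueryToExec, SqlInt32 RowsToReturn, out SqlInt32 TotalRows)
    {
        using (SqlConnection conn = new SqlConnection("Context Connection = true;"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(QueryToExec.Value, conn);
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Steps 2-4: build a SqlDataRecord from the reader's schema.
                DataTable schema = reader.GetSchemaTable();
                SqlMetaData[] columns = new SqlMetaData[schema.Rows.Count];
                for (int i = 0; i < schema.Rows.Count; i++)
                {
                    columns[i] = MetaDataFromColumn(schema.Rows[i]);
                }
                SqlDataRecord record = new SqlDataRecord(columns);

                // Step 5: open the outgoing result set.
                SqlContext.Pipe.SendResultsStart(record);

                // Steps 6-8: stream back only the first @RowsToReturn rows.
                object[] values = new object[reader.FieldCount];
                int rowCounter = 0;
                while (reader.Read() && rowCounter < RowsToReturn.Value)
                {
                    reader.GetValues(values);
                    record.SetValues(values);
                    SqlContext.Pipe.SendResultsRow(record);
                    rowCounter++;
                }

                // Step 9: close the result set being sent.
                SqlContext.Pipe.SendResultsEnd();

                // Steps 10-11: drain the rest of the reader just to count it.
                while (reader.Read())
                {
                    rowCounter++;
                }
                TotalRows = rowCounter;
            }
        }
    }

    // Very rough type mapping for the sketch; extend it for the types you actually use.
    private static SqlMetaData MetaDataFromColumn(DataRow col)
    {
        string name = (string)col["ColumnName"];
        Type t = (Type)col["DataType"];
        if (t == typeof(int))      return new SqlMetaData(name, SqlDbType.Int);
        if (t == typeof(long))     return new SqlMetaData(name, SqlDbType.BigInt);
        if (t == typeof(DateTime)) return new SqlMetaData(name, SqlDbType.DateTime);
        return new SqlMetaData(name, SqlDbType.NVarChar, 4000);   // fallback for the sketch
    }
}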
EDIT:
Even better! Another option (a variation on the above C# suggestion) is to use the @@ROWCOUNT from the T-SQL stored procedure, sent as an OUTPUT parameter, rather than cycling through the rest of the rows in the SqlDataReader. So the stored procedure would be similar to:
CREATE PROCEDURE SchemaName.ProcName
(
    @Param1 INT,
    @Param2 VARCHAR(05),
    @RowCount INT = -1 OUTPUT -- default so it doesn't have to be passed in
)
AS
SET NOCOUNT ON;

{any ol' query}

SET @RowCount = @@ROWCOUNT;
Then, in the app code, create a new SqlParameter, Direction = Output, for "@RowCount". The numbered steps above stay the same, except the last two (10 and 11), which change to:
Instead of the 2nd while loop, just call Reader.Close()
Instead of using the RowCounter variable, set TotalRows = (int)RowCountOutputParam.Value;
I have tried this and it does work. But so far I have not had time to test the performance against the other methods.
EDIT 2:
If the T-SQL stored procs are being called from the app layer (i.e. no need for ad hoc execution) then this is actually a much simpler variation of the above C# methods. In this case you don't need to worry about the SqlDataRecord or the SqlContext.Pipe methods. Assuming you already have a SqlDataReader set up to pull back the results, you just need to:
Make sure the T-SQL stored proc has a @RowCount INT = -1 OUTPUT parameter
Make sure to SET @RowCount = @@ROWCOUNT; immediately after the query
Register the OUTPUT param as a SqlParameter having Direction = Output
Use a loop similar to: while(Reader.Read() && RowCounter < RowsToReturn) so that you can stop retrieving results once you have pulled back the desired amount.
Remember to not limit the result in the stored proc (i.e. no TOP (n))
At that point, just as mentioned in the first "EDIT" above, close the SqlDataReader and grab the .Value of the OUTPUT param :).
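Put together, the app-layer version might look roughly like this sketch (the connection string, parameter values, and row limit are placeholders):
using System.Data;
using System.Data.SqlClient;

using (SqlConnection conn = new SqlConnection(connectionString))
using (SqlCommand cmd = new SqlCommand("SchemaName.ProcName", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Param1", 123);
    cmd.Parameters.AddWithValue("@Param2", "abc");

    SqlParameter rowCountParam = new SqlParameter("@RowCount", SqlDbType.Int);
    rowCountParam.Direction = ParameterDirection.Output;
    cmd.Parameters.Add(rowCountParam);

    conn.Open();
    int rowCounter = 0;
    int rowsToReturn = 100;   // the "n" you actually want to display

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read() && rowCounter < rowsToReturn)
        {
            // consume the row (e.g. reader.GetInt32(0), reader.GetString(1), ...)
            rowCounter++;
        }
    }   // disposing the reader closes it, which lets the OUTPUT value come back

    int totalRows = (int)rowCountParam.Value;
}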
How about this....
DECLARE @N INT = 10

;WITH CTE AS
(
    SELECT
        A.data1,
        A.data2
    FROM mytable A
)
SELECT TOP (@N) *, (SELECT COUNT(*) FROM CTE) AS Total_Rows
FROM CTE
The last column will be populated with the total number of rows it would have returned without the TOP Clause.
The issue with your requirement is that you are expecting a SINGLE select statement to return a table and also a scalar value, which is not possible.
A single select statement will return a table or a scalar value. Alternatively, you can have two separate selects, one returning a scalar value and the other returning a table. The choice is yours :)
Just because you think T-SQL should have a row count because of a sort does not mean it does. And if it does, it is not currently sharing it with the outside world.
What you are missing is that this is very efficient:
select count(*)
from ...
where ...
select top x
from ...
where ...
order by ...
With the COUNT(*), unless the query is just plain ugly, those indexes should be in memory.
It has to perform a count to sort? Based on what?
Did you actually evaluate any query plans?
If T-SQL has to perform a count to sort, then explain the following.
Why is the COUNT(*) 100% of the cost when the second query had to do a count anyway?
Just where in that second query plan is there a free opportunity to count?
Why are those query plans so different if they both need to count?
I think there is an arcane way to do what you want. It involves triggers and non-temporary tables. And, I should mention, although I have implemented each piece (for different purposes), I have never put them together for this purpose.
The idea starts with this Stack Overflow question. According to this source, @@ROWCOUNT counts the number of attempted inserts, even when they don't really happen. Now, I must admit that a perusal of available documentation doesn't seem to touch on this topic, so this may or may not be "correct" behavior. This method is relying on this "problem".
So, you could do what you want by:
Creating a new table for the output -- but not a table variable or a temporary table.
Creating an "instead of" trigger that prevents more than #maxRows from going into the table.
Select the query results into the table.
Read ##ROWCOUNT after the select.
Note that you can create the table and trigger using dynamic SQL. You could also create it once, and have the trigger read the #maxRows value from some sort of parameter table. As mentioned before, this needs to be a real table that supports triggers.

SQLServer - MERGE with condition is slow, even an "always false" condition

I want to execute the MERGE statement conditionally, so it won't try to match the entire target-table.
My original statement was kinda like this:
MERGE [target_table] USING [table_source]
ON (([target_table].[ID] = [table_source].[ID]) AND (condition))
WHEN MATCHED THEN UPDATE
SET [target_table].[_strField1] = [table_source].[_strField2];
Note: assume '_strField' to be typed as nvarchar(4000), and 'condition' to be something like [target_table].[_strField8] = 'sometext'.
But then I've encountered the following warning in the documentation that dictates "...Do not attempt to improve query performance by filtering out rows in the target table in the ON clause".
So my original query was altered to the following one
MERGE [target_table] USING [table_source]
ON (([target_table].[ID] = [table_source].[ID]))
WHEN MATCHED AND (condition)
THEN UPDATE
SET [target_table].[_strField1] = [table_source].[_strField2];
The problem is that the query now takes a lot more time. Even changing the condition to be "always false", such as 1 = 2, doesn't help at all. On the other hand, setting different fields, such as
SET [target_table].[_intField] = [table_source].[_intField];
or any type other than two nvarchar(4000)s, causes the statement to execute much faster.
To conclude, the things I don't understand are:
If setting the nvarchar(4000) data is the slow part, why doesn't making the condition "1 = 2" speed up the execution time?
If the "row-matching" is the slow part, why does setting INT fields speed up the execution time?
According to the SQL Server documentation:
"...Do not attempt to improve query performance by filtering out rows in the target table in the ON clause, such as by specifying AND NOT target_table.column_x = value. Doing so may return unexpected and incorrect results."
your first query is invalid, so you should not take its timing into consideration.
If you update a column with a fixed-length type (int, date), there will be no "row-overflow data exceeding 8 KB" situation. That situation can occur when you use nvarchar(4000) and can give poor query performance.
We do not know how MERGE works internally or how it processes data, so probably only its developers can answer your question.
I hope that I have helped you with the nvarchar(4000) performance issue.
Marcin Pazgier

Why is the load method of a datatable sometimes so slow?

The project is a web app in ASP/VB.net. The issue is that some pages are mind-bogglingly slow. After trying to track down the bottleneck, it was discovered to be the Load method when filling a DataTable with query results.
We are using an Oracle database and queries are executed in stored procedures. As an example, we have a relatively simple select statement within a procedure which returns 2 columns and 6 rows, and which was determined to take about 0.015 seconds to execute. However, it takes on average 7 seconds to load the OracleDataReader into a DataTable - a ridiculous amount of time for such a small record set. After messing around with the query, I found that a simple DECODE statement appeared to be causing the issue. The DECODE statement is used similarly to the following:
WHERE
DECODE(iBln, 1, column1, column2) BETWEEN iDate1 and iDate2
The iBln variable is simply a number being passed in to act as a boolean variable for determining which column should be between two dates. If I comment this decode statement out and make it simply "column1 BETWEEN iDate1 and iDate2" then the load method takes no time at all as it should, signifying that it is indeed the decode statement causing issues.
So I'm just hoping to hear from anyone that could have an idea as to what's causing this or how to fix it. It's a simple decode, how does it even affect the load method anyway?
I would verify that indexes exist for column1 and column2. If so, the likely problem is that the DECODE is preventing the use of the indexes. Try rewriting as:
WHERE (    (iBln = 1 AND column1 BETWEEN iDate1 AND iDate2)
        OR ((iBln IS NULL OR iBln <> 1) AND column2 BETWEEN iDate1 AND iDate2)
      )
If your stored procedure is returning a REF CURSOR, opening the cursor in the stored procedure will be very fast regardless of the query you're executing. Opening a cursor doesn't require that Oracle do any of the work of actually running the query, it just requires that Oracle determine the query plan which should be more or less instantaneous.
How long does it take to fetch the data from the REF CURSOR in something like SQL*Plus? If it takes something close to 7 seconds (as I suspect it will), you can eliminate the OracleDataReader class as the source of the problem. In that case, the problem would almost certainly be that the query plan is inefficient.
Based on your description, my guess is that column1 is indexed. column2 may also be indexed, it's not clear. But a regular index on either column1 or column2 could not be used to evaluate the predicate that involves the call to the DECODE function. If there are no other predicates on indexed columns, that may force Oracle to do a table scan on the underlying table (posting the full query, the table definition, and the query plan would be helpful).

Improve update SQL query performance

I have an SQL database containing stock bars downloaded from Yahoo!. I'm trying to create some indicators to analyze these stocks (e.g. a Simple Moving Average). I am concerned about the performance of my query, which is simply UPDATE @stockname SET SMA = @value WHERE id = @n. To update 2000 rows it takes 2 minutes. I tried with a stored procedure but the result is almost the same.
for (int i = 0; i < closing_prices.Count - length; i++)
{
    double signalValue = Selector.SignalProcessor(Signal,
        closing_prices.GetRange(i, length), length);

    // Write the value into the database
    string location = Convert.ToString(i + length + 1);
    this.UpdateWithSingleCondition("_" + Instrument, columnName,
        signalValue.ToString(), "id", location, "=", sql_Connection);
}
This loop calls the stored procedure to update the SMA column each time a new value is generated. Is there any way to write the entire column to the database in one go? I think that could save time. Anyway, updating 500 rows in 2 minutes sounds very slow.
Could you tell me how to improve the execution time of my query?
Instead of writing values out one at a time, perhaps you could use a stored proc with a table-valued parameter to ship the data from your app to the DB in a single operation, then MERGE the data into your table, saving a lot of round-tripping.
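A rough sketch of that, reusing the names from the question and assuming a user-defined table type (dbo.SmaValues) plus a proc (dbo.UpdateSma) that UPDATEs/MERGEs from it:
using System.Data;
using System.Data.SqlClient;

// Accumulate every (id, SMA) pair in memory first...
DataTable smaValues = new DataTable();
smaValues.Columns.Add("id", typeof(int));
smaValues.Columns.Add("SMA", typeof(double));

for (int i = 0; i < closing_prices.Count - length; i++)
{
    double signalValue = Selector.SignalProcessor(Signal,
        closing_prices.GetRange(i, length), length);
    smaValues.Rows.Add(i + length + 1, signalValue);
}

// ...then ship the whole set in one call; the proc UPDATEs/MERGEs from the TVP.
using (SqlCommand cmd = new SqlCommand("dbo.UpdateSma", sql_Connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    SqlParameter p = cmd.Parameters.AddWithValue("@Values", smaValues);
    p.SqlDbType = SqlDbType.Structured;
    p.TypeName = "dbo.SmaValues";   // must match the table type name
    cmd.ExecuteNonQuery();
}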
Analyze your performance. You must have SOME bottleneck. Your update count is really low: you should easily be able to do 10-30 updates per second, which would translate to a lot more in 2 minutes... and that is on a stock computer, not even one specced for a database (which would mean many fast disks).
Do a performance analysis on SQL Server and find your bottlenecks. Do you have all the indexes you need?
I would create a stored proc that receives a string. This string is an XML or delimited string.
Then use one of the many string-to-table functions floating around:
(delimited string) http://blogs.x2line.com/al/articles/150.aspx
(xml) http://kennyshu.blogspot.com/2007/12/convert-xml-file-to-table-in-sql-2005.html
and convert the string into a temp table.
Then perform an insert from the temp table into the destination table.
This way you make one call to the DB server and avoid chatter. It's a LOT faster than multiple calls.
Avoid table parameters since you can't call them from code.
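For illustration, the calling side of the delimited-string approach might look like this sketch (the proc name and the "id:value;" format are made up; the proc would split the string into a temp table as described above):
using System.Data;
using System.Data.SqlClient;
using System.Text;

// Pack all (id, value) pairs into one string, e.g. "251:20.15;252:20.33;..."
// ids and values are placeholders for whatever collections hold your results.
StringBuilder packed = new StringBuilder();
for (int i = 0; i < ids.Count; i++)
{
    packed.Append(ids[i]).Append(':').Append(values[i]).Append(';');
}

using (SqlCommand cmd = new SqlCommand("dbo.UpdateFromDelimitedString", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.Add("@Pairs", SqlDbType.NVarChar, -1).Value = packed.ToString();   // NVARCHAR(MAX)
    cmd.ExecuteNonQuery();   // one round trip; the proc splits and updates
}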
First disable foreign key constraints, then enable them again afterwards:
To disable: ALTER TABLE [YourTable] NOCHECK CONSTRAINT ALL
To enable them again: ALTER TABLE [YourTable] WITH CHECK CHECK CONSTRAINT ALL

Adding update SQL queries

I have a script that updates itself every week. I've got a warning from my hosting that I've been overloading the server with the script. The problem, I've gathered is that I use too many UPDATE queries (one for each of my 8000+ users).
It's bad coding, I know. So now I need to lump all the data into one SQL query and update it all at once. I hope that is what will fix my problem.
A quick question: if I simply add UPDATE queries separated by semicolons, like this:
UPDATE table SET something=3 WHERE id=8; UPDATE table SET something=6 WHERE id=9;
and then send the database one large SQL string, as opposed to querying the database for each update, will it be faster?
Is this the best way to "bunch" together UPDATE statements? Would this significantly reduce server load?
Make a delimited file with your values and use your equivalent of MySQL's LOAD DATA INFILE. This will be significantly faster than an UPDATE.
LOAD DATA INFILE '/path/to/myfile'
REPLACE INTO TABLE thetable
-- optional FIELDS and LINES delimiter clauses go here
(field1, field2, field3)
;
Your best bet is to batch these statements by your "something" field:
UPDATE table SET something=3 WHERE id IN (2,4,6,8)
UPDATE table SET something=4 WHERE id IN (1,3,5,7)
Of course, knowing nothing about your requirements, there is likely a better solution out there...
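If the values are generated in app code, the grouping itself is simple; a sketch in C# (assuming a dictionary of id to new value, and integer values so the generated SQL stays safe; with other value types you would want parameters instead):
using System.Collections.Generic;
using System.Linq;
using System.Text;

// Group ids by the value they should be set to, then emit one UPDATE per group.
// newValuesById is a placeholder: id -> new "something" value for that user.
Dictionary<int, int> newValuesById = LoadNewValues();   // hypothetical helper

StringBuilder sql = new StringBuilder();
foreach (var group in newValuesById.GroupBy(kv => kv.Value))
{
    string idList = string.Join(",", group.Select(kv => kv.Key));
    sql.AppendFormat("UPDATE table SET something = {0} WHERE id IN ({1});\n", group.Key, idList);
}
// Send sql.ToString() to the database as one batch instead of 8000 separate queries.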
It will improve IO since there is only one round trip, but the database "effort" will be the same.
A curiosity of SQL is that the following integer expression
(1 - ABS(SIGN(A - B))) = 1 if A == B and 0 otherwise. For convenience let's call this expression _eq(A, B).
So (with _eq expanded inline)
UPDATE table
SET something = 3 * (1 - ABS(SIGN(id - 8))) + 6 * (1 - ABS(SIGN(id - 9)))
WHERE id IN (8, 9);
will do what you want with a single update statement.