SQL queries not 'reporting' back until all queries have finished executing - sql

I'm running a set of SQL queries and they are not reporting the rows affected until all the queries have run. Is there any way I can get incremental feedback?
Example:
DECLARE @HowManyLastTime int
SET @HowManyLastTime = 1
WHILE @HowManyLastTime <> 2400000
BEGIN
    SET @HowManyLastTime = @HowManyLastTime + 1
    PRINT(@HowManyLastTime)
END
This doesn't show the count until the loop has finished. How do I make it show the count as it runs?

To flush record counts and other data to the client, you'll want to use RAISERROR with NOWAIT. Related questions and links:
PRINT statement in T-SQL
http://weblogs.sqlteam.com/mladenp/archive/2007/10/01/SQL-Server-Notify-client-of-progress-in-a-long-running.aspx
In SSMS this will work as expected. With other clients, you might not see the messages until query execution is complete.
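For example, a minimal sketch of the counter loop from the question, rewritten to raise informational messages that are flushed immediately (the 100,000-iteration reporting interval is just an arbitrary choice for illustration):
DECLARE @HowManyLastTime int
SET @HowManyLastTime = 1

WHILE @HowManyLastTime <> 2400000
BEGIN
    SET @HowManyLastTime = @HowManyLastTime + 1

    -- Report progress every 100,000 iterations; severity 0 is informational,
    -- and WITH NOWAIT flushes the message to the client immediately,
    -- unlike PRINT, which is buffered.
    IF @HowManyLastTime % 100000 = 0
        RAISERROR('Processed %d so far', 0, 1, @HowManyLastTime) WITH NOWAIT;
END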

SQL tends to be 'set-based', while you are thinking procedurally and trying to make it behave that way. It really doesn't make sense to do this in SQL.
I would ask what your motivation for doing this is, and whether there is anything better that could be tried.

Related

SQL Server 2014 slow remote insert

I have several linked servers and I want to insert a value into each of them. On my first attempt, the INSERT using a CURSOR took far too long to execute; it ran for about 17 hours. Curious about those INSERT queries, I checked a single line of my INSERT query using Display Estimated Execution Plan, and it showed a cost of 46% for the Remote Insert and 54% for the Constant Scan.
Below is the code snippet I worked with before:
DECLARE @Linked_Servers varchar(100)

DECLARE CSR_STAGGING CURSOR FOR
SELECT [Linked_Servers]
FROM MyTable_Contain_Lists_of_Linked_Server

OPEN CSR_STAGGING
FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC('
        INSERT INTO ['+@Linked_Servers+'].[DB].[Schema].[Table] VALUES (''bla'',''bla'',''bla'')
        ')
    END TRY
    BEGIN CATCH
        DECLARE @ERRORMSG as varchar(8000)
        SET @ERRORMSG = ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers
END
CLOSE CSR_STAGGING
DEALLOCATE CSR_STAGGING
Below is also a figure of how I checked the estimated execution plan of my query; I checked only the INSERT query, not all queries.
What is the best practice for getting the best performance from a remote INSERT?
You can try this, but I think the difference will be only marginally better. I recall when reading about the different approaches to doing inserts across linked servers that most of the standard approaches were basically on par with each other, though it's been a while since I looked that up, so don't quote me.
It will also require some light rewriting due to the obvious differences in approach (assuming you are able to do so anyway). The dynamic SQL required to do this might be tricky, though, as I am not entirely sure whether you can call OPENQUERY within dynamic SQL (I should know this, but I've never needed to).
However, if you can use this approach, the main benefit is that the WHERE clause gets the destination schema without having to select any data (because 1 will never equal 0).
INSERT OPENQUERY (
    [your-server-name],
    'SELECT
        somecolumn
        , anothercolumn
    FROM destinationTable
    WHERE 1 = 0'
    -- this will help reduce the scan as it will
    -- get schema details without having to select data
)
SELECT
    somecolumn
    , anothercolumn
FROM sourceTable
Another approach you could take is to build an insert proc on the destination server/DB. Then you just call the proc, passing the parameters over. While this is a little more work and introduces more objects to maintain, it adds simplicity to your process and potentially reduces I/O when sending things across the linked server, not to mention it might save on the CPU cost of your constant scans as well. I think it's probably a cleaner approach than trying to optimize linked-server behavior.
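A rough sketch of that proc-based idea, with hypothetical object and column names (the real table, columns, and linked server name come from your environment, and the linked server needs the RPC Out option enabled for a remote EXEC):
-- On the destination server/DB: a plain insert proc.
CREATE PROCEDURE [Schema].[usp_InsertIntoTable]
    @col1 varchar(50),
    @col2 varchar(50),
    @col3 varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [Schema].[Table] (col1, col2, col3)
    VALUES (@col1, @col2, @col3);
END

-- On the local server, inside the cursor loop: call the proc through the
-- linked server instead of building a remote INSERT string.
EXEC('EXEC [' + @Linked_Servers + '].[DB].[Schema].[usp_InsertIntoTable] ''bla'', ''bla'', ''bla''')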

Initial value of @@ROWCOUNT varies across databases and servers

We are attempting to run a DELETE statement inside a WHILE loop (to avoid large transaction logs when deleting lots of rows) as follows:
WHILE (@@ROWCOUNT > 0)
BEGIN
DELETE TOP (250000)
FROM
MYDATABASE.MYSCHEMA.MYTABLE
WHERE
MYDATABASE.MYSCHEMA.MYTABLE.DATE_KEY = 20160301
END
When this command is executed inside a new SQL Server Management Studio connection in our development environment, it deletes rows in blocks of 250K, which is the expected behavior.
When this command is executed in the same way on our test server, we get the message
Command completed successfully
That is, the WHILE loop was not entered when the statement was run.
After some additional investigation, we have found that the behavior also varies depending on the database that we connect to. So if the code is run (in our test environment) while SQL Server Management Studio is connected to MYDATABASE, the DELETE statement does not run. If we run the code while connected to SOME_OTHER_DATABASE, it does.
We partially suspect that the value of @@ROWCOUNT is not reliable, and may be different for different connections. But when we run the code multiple times for each database & server combination, we see behavior that is 100% consistent. So random initial values of @@ROWCOUNT do not appear to explain things.
Any suggestions as to what could be going on here? Thanks for your help!
Edit #1
For those asking about the initial value of @@ROWCOUNT and where it is coming from, we're not sure. But in some cases @@ROWCOUNT is definitely being initialized to some value above zero, as the code works on a fresh connection as-is.
Edit #2
For those proposing the declaration of our own variable, for our particular application we are executing SQL commands via a programming language wrapper which only allows for the execution of one statement at a time (i.e., one semicolon).
We have previously tried to establish the value of @@ROWCOUNT by executing one delete statement prior to the loop:
Statement #1:
DELETE TOP (250000)
FROM
MYDATABASE.MYSCHEMA.MYTABLE
WHERE
MYDATABASE.MYSCHEMA.MYTABLE.DATE_KEY = 20160301
Statement #2 (@@ROWCOUNT is presumably now 250,000):
WHILE (@@ROWCOUNT > 0)
BEGIN
DELETE TOP (250000)
FROM
MYDATABASE.MYSCHEMA.MYTABLE
WHERE
MYDATABASE.MYSCHEMA.MYTABLE.DATE_KEY = 20160301
END
However, whatever is causing @@ROWCOUNT to take on a different value on start-up is also affecting the value between commands. So in some cases the second statement never executes.
You should not use a variable before you have set its value. That is equally true for system variables.
The code that you have is very dangerous. Someone could add something like SELECT 'Here I am in the loop' after the delete and it will break.
A better approach? Use your own variable:
DECLARE @RC int;

WHILE (@RC > 0 OR @RC IS NULL)
BEGIN
    DELETE TOP (250000)
    FROM MYDATABASE.MYSCHEMA.MYTABLE
    WHERE MYDATABASE.MYSCHEMA.MYTABLE.DATE_KEY = 20160301;
    SET @RC = @@ROWCOUNT;
END;
Where are you getting your initial @@ROWCOUNT from? You're never going to enter that block, because @@ROWCOUNT would be expected to be zero, so you'd never enter the loop. Also, deleting in 250K batches wouldn't change the size of your transaction log: all of the deletions will be logged if you're logging, so there's no benefit (and some penalty) to doing this within a loop.
Have you traced the session? Since @@ROWCOUNT returns the number of rows affected by the prior statement in the session, I would guess that either the last query SSMS executes as part of establishing the session returns a different number of rows in the two environments or that you have a login trigger in one or the other environments whose last statement returns a different number of rows. Either way, a trace should tell you exactly why the behavior is different.
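For instance, one quick check you could run on each server is a sketch like the following, which lists any server-level logon triggers and their definitions (it assumes you have permission to read the server catalog views):
-- A logon trigger whose final statement affects a different number of rows
-- in the two environments would explain the differing initial @@ROWCOUNT.
SELECT t.name,
       t.is_disabled,
       m.definition
FROM sys.server_triggers AS t
JOIN sys.server_sql_modules AS m
    ON m.object_id = t.object_id;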
Fundamentally, though, it makes no sense to refer to @@ROWCOUNT before you run the statement that you are interested in getting a count for. It's easy enough to fix this using a variable:
DECLARE @cnt int = -1;

WHILE (@cnt != 0)
BEGIN
    DELETE TOP (250000)
    FROM MYDATABASE.MYSCHEMA.MYTABLE
    WHERE MYDATABASE.MYSCHEMA.MYTABLE.DATE_KEY = 20160301;
    SET @cnt = @@ROWCOUNT;
END

SQL Server 2005 stored procedure unexpected behaviour

I have written a simple stored procedure (run as a job) that checks users' subscribed keyword alerts. When an article is posted, the stored procedure sends an email to those users whose subscribed keyword matches the article title.
One section of my stored procedure is:
OPEN @getInputBuffer
FETCH NEXT
FROM @getInputBuffer INTO @String
WHILE @@FETCH_STATUS = 0
BEGIN
    --PRINT @String
    INSERT INTO #Temp(ArticleID,UserID)
    SELECT A.ID, @UserID
    FROM CONTAINSTABLE(Question,(Text),@String) QQ
    JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
    WHERE A.ID > @ArticleID

    FETCH NEXT
    FROM @getInputBuffer INTO @String
END
CLOSE @getInputBuffer
DEALLOCATE @getInputBuffer
This job runs every 5 minutes and checks the last 50 articles.
It was working fine for the last 3 months, but a week ago it behaved unexpectedly.
The problem is that it sends irrelevant results.
The @String contains the user's alert keyword, which is matched against the latest articles using full-text search. The normal execution time is 3 minutes, but while the problem occurred the execution time was 3 days.
The current status is that it is working fine again, but we are unable to find any reason why it sent irrelevant results.
Note: I am already removing noise words from the user alert keywords.
I am using SQL Server 2005 Enterprise Edition.
I don't have the answer, but have you asked all the questions?
Does the long execution time always happen for all queries? (Yes--> corruption? disk problems?)
Or is it only for one @String? (Yes--> anything unusual about the term? Is there a "hidden" character in the string that doesn't show up in your editor?)
Does it work fine for that @String against other sets of records, maybe from a week ago? (Yes--> any unusual strings in the data rows?)
Can you reproduce it at will? (From your question, it seems that the problem is gone and you can't reproduce it.) Was it only for one person, at one time?
Hope this helps a bit!
Does the CONTAINSTABLE(Question,(Text),@String) work in an ad hoc query window? If not, it may be that your full-text search indexes are corrupt and need rebuilding:
Rebuild a Full-Text Catalog
Full-Text Search How-to Topics
Also check any normal indexes on the Article table; they might just need rebuilding for statistical purposes, or they could be corrupt too:
UPDATE STATISTICS (Transact-SQL)
I'd go along with Glen Little's line of thinking here.
If a user has registered a subscribed keyword which coincidentally (or deliberately) contains some of the CONTAINSTABLE search predicates, e.g. NEAR, then the query may take longer than usual. Perhaps not "minutes become days" longer, but longer.
Check for subscribed keywords containing *, ", NEAR, & and so on.
The CONTAINSTABLE function allows for a very complex set of criteria. Consider the FREETEXTTABLE function, which has a lighter matching algorithm.
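For example, a minimal sketch of the same lookup switched to FREETEXTTABLE (same table and variable names as in the question); FREETEXT-style matching treats the whole string as plain search terms rather than as CONTAINS syntax, so user-supplied operators like NEAR, * or " are not interpreted:
INSERT INTO #Temp (ArticleID, UserID)
SELECT A.ID, @UserID
FROM FREETEXTTABLE(Question, (Text), @String) QQ
JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
WHERE A.ID > @ArticleID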
1) How do you know it sends irrelevant results?
If it is because a user reported the problem: are you sure they didn't change their keywords between the mail and the report?
Can you add some automatic check at the end of the procedure to verify whether it gathered bad results? Perhaps then you can trap the cases where the problem occurs.
2) "This job runs every 5 minutes and checks the last 50 articles."
Are you sure it's not related to timing? If the job takes more than 5 minutes one time, what happens? A second job starts...
You do not show your cursor declaration; is it local, or could there be some kind of interference if several processes run simultaneously? Perhaps try adding some kind of locking mechanism.
Since the cursors are nested you will want to try the following. It's my understanding that testing for zero can get you into trouble when the cursors are nested. We recently changed all of our cursors to something like this.
WHILE (@@FETCH_STATUS <> -1)
BEGIN
    IF (@@FETCH_STATUS <> -2)
    BEGIN
        INSERT INTO #Temp(ArticleID,UserID)
        SELECT A.ID, @UserID
        FROM CONTAINSTABLE(Question,(Text),@String) QQ
        JOIN Article A WITH (NOLOCK) ON A.ID = QQ.[Key]
        WHERE A.ID > @ArticleID
    END
    FETCH NEXT FROM @getInputBuffer INTO @String
END

Running Stored Procedure with parameters resulting from query

It's not hard to find developers who think cursors are gauche, but I am wondering how to solve the following problem without one:
Let's say I have a proc called uspStudentDelete that takes as a parameter @StudentID.
uspStudentDelete applies a bunch of cascading soft-delete logic, marking a flag on tables like "classes", "grades", and so on as inactive. uspStudentDelete is well vetted and has worked for some time.
What would be the best way to run uspStudentDelete on the results of a query (e.g. select studentid from students where ... ) in T-SQL?
That's exactly what cursors are intended for:
declare c cursor local for <your query here>
declare @ID int

open c
fetch next from c into @ID
while @@fetch_status = 0
begin
    exec uspStudentDelete @ID
    fetch next from c into @ID
end
close c
deallocate c
Most people who rail against cursors think you should do this in a proper client, like a C# desktop application.
The best solution is to write a set-based proc to handle the delete (try running this through a cursor to delete 10,000 records and you'll see why), or to add the set-based code to the current proc with a parameter that tells it whether to run the set-based or single-record part of the proc (this at least keeps it together for maintenance purposes).
In SQL Server 2008 you can use a table variable as an input parameter. If you rewrite the proc to be set-based, you can have the same logic and run it whether the caller sends in one record or ten thousand. You may need a batch process in there to avoid deleting millions of records in one go and locking up the tables for hours, though. Of course, if you do this you will also need to adjust how the current sp is being called.
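A minimal sketch of that table-valued-parameter approach (SQL Server 2008+; the type name, flag column, and set-based proc here are hypothetical, and the real soft-delete logic would mirror whatever uspStudentDelete already does):
-- A table type to carry the set of student IDs.
CREATE TYPE dbo.StudentIdList AS TABLE (StudentID int NOT NULL PRIMARY KEY);
GO

-- A set-based variant of the delete proc: one pass per affected table.
CREATE PROCEDURE dbo.uspStudentDeleteSet
    @StudentIDs dbo.StudentIdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- One cascading soft-delete step as an example; repeat the same
    -- JOIN pattern for classes, grades, and the other child tables.
    UPDATE s
    SET s.IsActive = 0                     -- hypothetical inactive flag
    FROM dbo.Students AS s
    JOIN @StudentIDs AS ids ON ids.StudentID = s.StudentID;
END
GO

-- Caller: load the IDs from the query, then make a single proc call.
DECLARE @ids dbo.StudentIdList;

INSERT INTO @ids (StudentID)
SELECT StudentID
FROM dbo.Students
-- WHERE <your criteria here>

EXEC dbo.uspStudentDeleteSet @StudentIDs = @ids;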

Transactions within loop within stored procedure

I'm working on a procedure that will update a large number of items on a remote server, using records from a local database. Here's the pseudocode.
CREATE PROCEDURE UpdateRemoteServer
pre-processing
get cursor with ID's of records to be updated
while on cursor
process the item
No matter how much we optimize it, the routine is going to take a while, so we don't want the whole thing to be processed as a single transaction. The items are flagged after being processed, so it should be possible to pick up where we left off if the process is interrupted.
Wrapping the contents of the loop ("process the item") in a begin/commit tran does not do the trick... it seems that the whole statement
EXEC UpdateRemoteServer
is treated as a single transaction. How can I make each item process as a complete, separate transaction?
Note that I would love to run these as "non-transacted updates", but that option is only available (so far as I know) in 2008.
EXEC procedure does not create a transaction. A very simple test will show this:
create procedure usp_foo
as
begin
    select @@trancount;
end
go
exec usp_foo;
The @@trancount inside usp_foo is 0, so the EXEC statement does not start an implicit transaction. If you have a transaction started when entering UpdateRemoteServer it means somebody started that transaction; I can't say who.
That being said, using remote servers and DTC to update items is going to perform quite badly. Is the other server also at least SQL Server 2005? Maybe you can queue the update requests and use messaging between the local and remote server, and have the remote server perform the updates based on the info from the messages. It would perform significantly better because both servers only have to deal with local transactions, and you get much better availability due to the loose coupling of queued messaging.
Updated
Cursors actually don't start transactions. Typical cursor-based batch processing groups the updates into transactions of a certain size. This is fairly common for overnight jobs, as it allows for better performance (log flush throughput due to larger transaction size), and jobs can be interrupted and resumed without losing everything. A simplified version of a batch processing loop is typically like this:
create procedure usp_UpdateRemoteServer
as
begin
    declare @id int, @batch int;
    set nocount on;
    set @batch = 0;

    declare crsFoo cursor
        forward_only static read_only
        for select object_id
            from sys.objects;
    open crsFoo;

    begin transaction;
    fetch next from crsFoo into @id;
    while @@fetch_status = 0
    begin
        -- process here

        declare @transactionId int;
        select @transactionId = transaction_id
        from sys.dm_tran_current_transaction;
        print @transactionId;

        set @batch = @batch + 1;
        if @batch > 10
        begin
            commit;
            print @@trancount;
            set @batch = 0;
            begin transaction;
        end
        fetch next from crsFoo into @id;
    end
    commit;
    close crsFoo;
    deallocate crsFoo;
end
go

exec usp_UpdateRemoteServer;
I omitted the error handling part (begin try/begin catch) and the fancy @@fetch_status checks (static cursors actually don't need them anyway). This demo code shows that several different transactions are started during the run (different transaction IDs). Batches often also place transaction savepoints at each item processed so they can safely skip an item that causes an exception, using a pattern similar to the one in my link, but this does not apply to distributed transactions since savepoints and DTC don't mix.
EDIT: as pointed out by Remus below, cursors do NOT open a transaction by default; thus, this is not the answer to the question posed by the OP. I still think there are better options than a cursor, but that doesn't answer the question.
Stu
ORIGINAL ANSWER:
The specific symptom you describe is due to the fact that a cursor opens a transaction by default, therefore no matter how you work it, you're gonna have a long-running transaction as long as you are using a cursor (unless you avoid locks altogether, which is another bad idea).
As others are pointing out, cursors SUCK. You don't need them for 99.9999% of the time.
You really have two options if you want to do this at the database level with SQL Server:
Use SSIS to perform your operation; very fast, but may not be available to you in your particular flavor of SQL Server.
Because you're dealing with remote servers, and you're worried about connectivity, you may have to use a looping mechanism, so use WHILE instead and commit batches at a time. Although WHILE has many of the same issues as a cursor (looping still sucks in SQL), you avoid creating the outer transaction.
Stu
Are you running this only from within SQL Server, or from an app? If the latter, get the list to be processed, then loop in the app to process only the subsets as required.
The transaction should then be handled by your app, and should only lock the items being updated (or the pages the items are in).
NEVER process one item at a time in a loop when you are doing transactional work. You can loop through records processing groups of them, but never do one record at a time. Do set-based inserts instead and your performance will change from hours to minutes or even seconds. If you are using a cursor to insert, update, or delete and it isn't handling at least 1000 rows in each statement (not one at a time), you are doing the wrong thing. Cursors are an extremely poor practice for such things.
Just an idea ..
Only process a few items when the procedure is called (e.g. only get the TOP 10 items to process)
Process those
Hopefully, this will be the end of the transaction.
Then write a wrapper that calls the procedure as long as there is more work to do (either use a simple count(..) to see if there are items, or have the procedure return true indicating that there is more work to do).
Don't know if this works, but maybe the idea is helpful.
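A rough sketch of that wrapper, assuming a hypothetical proc that processes at most 10 items per call and reports how many it handled through an output parameter:
-- Keep calling the proc while it still finds work; each call runs
-- (and commits) as its own short unit of work.
DECLARE @processed int
SET @processed = 1

WHILE @processed > 0
BEGIN
    EXEC dbo.usp_ProcessNextItems @ItemsProcessed = @processed OUTPUT
END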