Running a stored procedure with parameters resulting from a query - SQL

It's not hard to find developers who think cursors are gauche, but I am wondering how to solve the following problem without one:
Let's say I have a proc called uspStudentDelete that takes @StudentID as a parameter.
uspStudentDelete applies a bunch of cascading soft-delete logic, marking a flag on tables like "classes", "grades", and so on as inactive. uspStudentDelete is well vetted and has worked for some time.
What would be the best way to run uspStudentDelete on the results of a query (e.g. select studentid from students where ...) in T-SQL?

That's exactly what cursors are intended for:
declare c cursor local for <your query here>
declare @id int
open c
fetch next from c into @id
while @@fetch_status = 0
begin
    exec uspStudentDelete @id
    fetch next from c into @id
end
close c
deallocate c
Most people who rail against cursors think you should do this in a proper client, like a C# desktop application.

The best solution is to write a set-based proc to handle the delete (try running this through a cursor to delete 10,000 records and you'll see why), or to add the set-based code to the current proc with a parameter that tells it whether to run the set-based or the single-record part (this at least keeps the logic together for maintenance purposes).
In SQL Server 2008 you can pass a table-valued parameter into the proc. If you rewrite the proc to be set-based, the same logic runs whether the caller sends in one record or ten thousand. You may need to batch the work to avoid deleting millions of records in one go and locking up the tables for hours. Of course, if you do this you will also need to adjust how the current sp is being called.
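To make that concrete, here is a minimal sketch of the set-based rewrite using a table-valued parameter. The type name, the IsActive flag, and the column names on classes and grades are assumptions for illustration, not taken from the original proc:
-- Hypothetical table type to carry the IDs (SQL Server 2008+).
CREATE TYPE StudentIdList AS TABLE (StudentID int NOT NULL PRIMARY KEY)
GO
CREATE PROCEDURE uspStudentDeleteSet
    @StudentIDs StudentIdList READONLY
AS
BEGIN
    SET NOCOUNT ON
    -- Same soft-delete logic, one statement per table instead of one call per ID.
    UPDATE c SET c.IsActive = 0
    FROM classes c
    JOIN @StudentIDs s ON s.StudentID = c.StudentID

    UPDATE g SET g.IsActive = 0
    FROM grades g
    JOIN @StudentIDs s ON s.StudentID = g.StudentID
END
GO
-- The caller fills the parameter from the query instead of looping:
DECLARE @ids StudentIdList
INSERT INTO @ids (StudentID)
SELECT studentid
FROM students  -- add your WHERE clause here
EXEC uspStudentDeleteSet @StudentIDs = @ids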

Related

SQL Server 2014 slow remote insert

I have several linked servers and I want to insert a value into each of them. On my first attempt, the INSERT using a CURSOR took far too long: about 17 hours to finish. Curious about those INSERT queries, I checked a single line of my INSERT query using Display Estimated Execution Plan, and it showed a cost of 46% on Remote Insert and 54% on Constant Scan.
Below is the code snippet I used:
DECLARE @Linked_Servers varchar(100)
DECLARE CSR_STAGGING CURSOR FOR
SELECT [Linked_Servers]
FROM MyTable_Contain_Lists_of_Linked_Server
OPEN CSR_STAGGING
FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC('
        INSERT INTO ['+@Linked_Servers+'].[DB].[Schema].[Table] VALUES (''bla'',''bla'',''bla'')
        ')
    END TRY
    BEGIN CATCH
        DECLARE @ERRORMSG as varchar(8000)
        SET @ERRORMSG = ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM CSR_STAGGING INTO @Linked_Servers
END
CLOSE CSR_STAGGING
DEALLOCATE CSR_STAGGING
Below is a figure showing how I checked the estimated execution plan of my query; I checked only the INSERT query, not all the queries.
What is the best practice for getting the best performance out of a remote INSERT?
You can try this, but I think the difference will be only negligibly better. I recall that when reading up on the different approaches to doing inserts across linked servers, most of the standard approaches were basically on par with each other, though it's been a while since I looked that up, so don't quote me.
It will also require some light rewriting due to the obvious differences in approach (assuming you are able to do so anyway). The dynamic SQL required might be tricky, though, as I am not entirely sure you can call OPENQUERY within dynamic SQL (I should know this, but I've never needed to).
However, if you can use this approach, the main benefit is that the WHERE clause gets the destination schema without having to select any data (because 1 will never equal 0).
INSERT OPENQUERY (
    [your-server-name],
    'SELECT
        somecolumn
        , anothercolumn
    FROM destinationTable
    WHERE 1=0'
    -- this will help reduce the scan as it will
    -- get schema details without having to select data
)
SELECT
    somecolumn
    , anothercolumn
FROM sourceTable
Another approach you could take is to build an insert proc on the destination server/DB and then just call the proc, sending the params over. Yes, this is a little more work and introduces more objects to maintain, but it adds simplicity to your process and potentially reduces I/O when sending things across the linked servers, not to mention it might save on the CPU cost of your Constant Scans as well. I think it's probably a cleaner approach than trying to optimize linked-server behavior.
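As a rough sketch of that idea (every object name below is made up, and it assumes RPC Out is enabled on each linked server):
-- On the destination server/DB, a plain local insert proc:
CREATE PROCEDURE [Schema].[usp_InsertTable]
    @col1 varchar(10), @col2 varchar(10), @col3 varchar(10)
AS
    INSERT INTO [Schema].[Table] (col1, col2, col3)
    VALUES (@col1, @col2, @col3)
GO
-- On the local server, inside the existing cursor loop, call the remote
-- proc by its four-part name instead of running a remote INSERT:
EXEC('EXEC [' + @Linked_Servers + '].[DB].[Schema].[usp_InsertTable] ''bla'', ''bla'', ''bla''')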

Run a DELETE statement on certain table names stored in a table

I have a table which stores the names of certain tables - tableNames. I'd like to run a DELETE statement on some of those tables (deleting all the rows from the tables they represent, not removing them from tableNames). I thought I could just do
DELETE FROM (SELECT tableName FROM tableNames WHERE ...) AS deleteTables
But I keep getting an incorrect syntax error. I also thought about iterating through the table in a WHILE loop using a variable, but I'm hoping there's a simpler way. Specifically, this is for Microsoft SQL Server.
You cannot do it that way because the inner SELECT is simply another set you're deleting from.
Basically you're creating a table of table names and telling the DB to delete from it. Even iterating through them won't work without dynamic SQL and EXEC.
Do you need to automate this process?
What I've done in the past is something like this:
SELECT
    'DELETE ' + tableName
FROM
    tableNames
WHERE
    [conditions]
Your output will look like this:
DELETE myTableName1
DELETE myTableName2
DELETE myTableName3
Then simply copy the results of this query out of the window and run them.
If you need to automate this in SQL, you can concatenate all the output strings in the result and send them as a parameter to an EXEC call.
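A minimal sketch of that concatenation approach (the LIKE filter stands in for your real conditions, and it assumes tableNames holds only trusted names):
DECLARE @sql nvarchar(max)
SET @sql = N''
-- Accumulate one DELETE per matching table name into a single batch.
SELECT @sql = @sql + N'DELETE ' + QUOTENAME(tableName) + N'; '
FROM tableNames
WHERE tableName LIKE N'Staging%'  -- stand-in for your real conditions
EXEC (@sql)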
Try using a cursor:
DECLARE @tableName varchar(255)
DECLARE cur CURSOR FOR SELECT tableName FROM tableNames WHERE (...)
OPEN cur
FETCH NEXT FROM cur INTO @tableName
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC('DELETE ' + @tableName)
    FETCH NEXT FROM cur INTO @tableName
END
CLOSE cur
DEALLOCATE cur
In this respect, you can think of SQL as a compiled language like C/C++. The SQL statement is evaluated by a "compiler", and certain checks are done. One of those checks is for the existence (and permissions) for tables and columns referenced directly in the query. Exact table names must be present in your code at the time you build your query, so that the compiler can validate it.
The good news is that SQL is also a dynamic language. This means you can write a procedure to build a query as a string, and tell the database to execute that string using the EXEC command. At this point, all the same "compiler" rules apply, but since you were able to insert table names directly into your SQL string, the query will pass.
The problem is that this also has security implications. It would be a good idea to also check your table against a resource like information_schema.Tables, to avoid potential injection attacks. Unfortunately, if you're deleting whole tables your whole model may already be suspect, such that you can't guarantee that someone won't inject a table name that you really want to keep. But depending on how these are populated, you may also be just fine.
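A hedged sketch of that check: resolve each candidate name against the catalog and quote it before executing, so only real user tables get deleted.
DECLARE @tableName sysname, @sql nvarchar(max)
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT t.name
    FROM sys.tables AS t
    JOIN tableNames AS tn ON tn.tableName = t.name  -- catalog check
OPEN cur
FETCH NEXT FROM cur INTO @tableName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- QUOTENAME prevents a malicious value from breaking out of the identifier.
    SET @sql = N'DELETE ' + QUOTENAME(@tableName) + N';'
    EXEC sp_executesql @sql
    FETCH NEXT FROM cur INTO @tableName
END
CLOSE cur
DEALLOCATE cur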
Assuming no potential constraint errors exist, one interesting possibility is the undocumented procedure sp_MSforeachtable, which will allow you to apply a given operation against all tables whose names are returned by your query:
EXEC sp_MSforeachtable @command1 = 'delete from ?'
    , @whereand = 'and o.name IN (SELECT tableName FROM tableNames WHERE ...)'
See also http://weblogs.asp.net/nunogomes/archive/2008/08/19/sql-server-undocumented-stored-procedure-sp-msforeachtable.aspx for more reading.
The DELETE statement works with only one table name at a time.
The full syntax is documented here, but it's TL;DR... in short, you'll have to use the loop.
I am using a similar cursor to @Pavel's with a list of my indexes in order to reorganise them. Operations like this are one of the extremely few good reasons for cursors.

SQL queries not 'reporting' back until all queries have finished executing

I'm running a set of SQL queries and they do not report the rows affected until all the queries have run. Is there any way I can get incremental feedback?
Example:
DECLARE @HowManyLastTime int
SET @HowManyLastTime = 1
WHILE @HowManyLastTime <> 2400000
BEGIN
    SET @HowManyLastTime = @HowManyLastTime + 1
    print(@HowManyLastTime)
END
This doesn't show the count until the loop has finished. How do I make it show the count as it runs?
To flush row counts and other messages to the client as they happen, you'll want to use RAISERROR with NOWAIT. Related questions and links:
PRINT statement in T-SQL
http://weblogs.sqlteam.com/mladenp/archive/2007/10/01/SQL-Server-Notify-client-of-progress-in-a-long-running.aspx
In SSMS this will work as expected. With other clients, you might not get a response from the client until the query execution is complete.
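For example, a hedged rewrite of the counting loop above, flushing a progress message every 100,000 iterations (the interval is arbitrary):
DECLARE @HowManyLastTime int, @msg varchar(20)
SET @HowManyLastTime = 1
WHILE @HowManyLastTime <> 2400000
BEGIN
    SET @HowManyLastTime = @HowManyLastTime + 1
    IF @HowManyLastTime % 100000 = 0  -- don't flood the client
    BEGIN
        SET @msg = CAST(@HowManyLastTime AS varchar(20))
        RAISERROR (@msg, 0, 1) WITH NOWAIT  -- severity 0 + NOWAIT flushes immediately
    END
END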
SQL tends to be set-based, and you are thinking procedurally and trying to make it act sequentially. It really doesn't make sense to do this in SQL.
I would ask what your motivation for doing this is, and whether there is anything better that can be tried.

Transactions within loop within stored procedure

I'm working on a procedure that will update a large number of items on a remote server, using records from a local database. Here's the pseudocode.
CREATE PROCEDURE UpdateRemoteServer
    pre-processing
    get cursor with IDs of records to be updated
    while on cursor
        process the item
No matter how much we optimize it, the routine is going to take a while, so we don't want the whole thing to be processed as a single transaction. The items are flagged after being processed, so it should be possible to pick up where we left off if the process is interrupted.
Wrapping the contents of the loop ("process the item") in a begin/commit tran does not do the trick... it seems that the whole statement
EXEC UpdateRemoteServer
is treated as a single transaction. How can I make each item process as a complete, separate transaction?
Note that I would love to run these as "non-transacted updates", but that option is only available (so far as I know) in 2008.
EXEC procedure does not create a transaction. A very simple test will show this:
create procedure usp_foo
as
begin
select @@trancount;
end
go
exec usp_foo;
The @@trancount inside usp_foo is 0, so the EXEC statement does not start an implicit transaction. If you have a transaction started when entering UpdateRemoteServer, it means somebody started that transaction; I can't say who.
That being said, using remote servers and DTC to update items is going to perform quite badly. Is the other server at least SQL Server 2005 as well? Maybe you can queue the update requests, use messaging between the local and remote server, and have the remote server perform the updates based on the info in the message. It would perform significantly better because both servers would only have to deal with local transactions, and you'd get much better availability due to the loose coupling of queued messaging.
Updated
Cursors actually don't start transactions. Typical cursor-based batch processing groups updates into transactions of a certain size. This is fairly common for overnight jobs, as it allows for better performance (log-flush throughput due to larger transaction size), and jobs can be interrupted and resumed without losing everything. A simplified version of a batch-processing loop typically looks like this:
create procedure usp_UpdateRemoteServer
as
begin
    declare @id int, @batch int;
    set nocount on;
    set @batch = 0;
    declare crsFoo cursor
        forward_only static read_only
    for
        select object_id
        from sys.objects;
    open crsFoo;
    begin transaction
    fetch next from crsFoo into @id;
    while @@fetch_status = 0
    begin
        -- process here
        declare @transactionId int;
        SELECT @transactionId = transaction_id
        FROM sys.dm_tran_current_transaction;
        print @transactionId;
        set @batch = @batch + 1
        if @batch > 10
        begin
            commit;
            print @@trancount;
            set @batch = 0;
            begin transaction;
        end
        fetch next from crsFoo into @id;
    end
    commit;
    close crsFoo;
    deallocate crsFoo;
end
go
exec usp_UpdateRemoteServer;
I omitted the error handling part (begin try/begin catch) and the fancy @@fetch_status checks (static cursors actually don't need them anyway). This demo code shows that several different transactions are started during the run (different transaction IDs). Batches often also deploy transaction savepoints at each item processed so they can safely skip an item that causes an exception, using a pattern similar to the one in my link, but this does not apply to distributed transactions since savepoints and DTC don't mix.
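For reference, a minimal sketch of that savepoint pattern as it might appear in the loop body above (usp_ProcessItem is a hypothetical per-item proc):
-- Skip a single failing item without rolling back the whole batch.
SAVE TRANSACTION ItemSave
BEGIN TRY
    EXEC usp_ProcessItem @id  -- hypothetical per-item processing proc
END TRY
BEGIN CATCH
    -- XACT_STATE() = 1 means the transaction is still committable,
    -- so we can roll back to the savepoint and carry on with the next item.
    IF XACT_STATE() = 1
        ROLLBACK TRANSACTION ItemSave
END CATCH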
EDIT: as pointed out by Remus below, cursors do NOT open a transaction by default; thus, this is not the answer to the question posed by the OP. I still think there are better options than a cursor, but that doesn't answer the question.
Stu
ORIGINAL ANSWER:
The specific symptom you describe is due to the fact that a cursor opens a transaction by default; therefore, no matter how you work it, you're going to have a long-running transaction as long as you are using a cursor (unless you avoid locks altogether, which is another bad idea).
As others are pointing out, cursors SUCK. You don't need them 99.9999% of the time.
You really have two options if you want to do this at the database level with SQL Server:
Use SSIS to perform your operation; very fast, but may not be available to you in your particular flavor of SQL Server.
Because you're dealing with remote servers and you're worried about connectivity, you may have to use a looping mechanism, so use WHILE instead and commit a batch at a time (see the sketch below). Although WHILE has many of the same issues as a cursor (looping still sucks in SQL), you avoid creating the outer transaction.
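A minimal sketch of that WHILE-based batching, with placeholder table and column names:
-- Process and commit a batch at a time; each batch is its own transaction.
DECLARE @batchSize int, @rows int
SET @batchSize = 1000
SET @rows = 1
WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION
    UPDATE TOP (@batchSize) ItemsToProcess  -- placeholder table
    SET Processed = 1                       -- plus the real per-item work
    WHERE Processed = 0
    SET @rows = @@ROWCOUNT
    COMMIT
END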
Stu
Are you running this only from within SQL Server, or from an app? If from an app, get the list to be processed, then loop in the app to process only the subsets as required.
Then the transaction will be handled by your app, and will only lock the items being updated / the pages the items are in.
NEVER process one item at a time in a loop when you are doing transactional work. You can loop through records processing groups of them, but never ever do one record at a time. Do set-based inserts instead and your performance will change from hours to minutes or even seconds. If you are using a cursor to insert, update, or delete and it isn't handling at least 1000 rows in each statement (not one at a time), you are doing the wrong thing. Cursors are an extremely poor practice for such things.
Just an idea ..
Only process a few items when the procedure is called (e.g. only get the TOP 10 items to process)
Process those
Hopefully, this will be the end of the transaction.
Then write a wrapper that calls the procedure as long as there is more work to do (either use a simple count(..) to see if there are items, or have the procedure return true to indicate that there is more work to do); see the sketch below.
Don't know if this works, but maybe the idea is helpful.
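A rough sketch of that wrapper idea, with hypothetical table, column, and procedure names:
-- Keep calling the small proc until no unprocessed items remain.
-- Items, Processed, and usp_ProcessTopTen are all placeholder names.
WHILE EXISTS (SELECT 1 FROM Items WHERE Processed = 0)
BEGIN
    EXEC usp_ProcessTopTen  -- processes (and flags) the next TOP 10 items
END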

SQL Server - SQL Cursor vs ADO.NET

I have to compute a value involving data from several tables. I was wondering if using a stored procedure with cursors would offer a performance advantage compared to reading the data into a dataset (using simple SELECT stored procedures) and then looping through the records. The dataset is not large: it consists of 6 tables, each with about 10 records, mainly GUIDs, several nvarchar(100) fields, a float column, and an nvarchar(max).
That would probably depend on the dataset you are retrieving (the larger the set, the more logical it may be to perform the work inside SQL Server instead of passing it around), but I tend to think that if you are looking to perform computations, do it in your code and away from your stored procedures. If you need to use cursors to pull the data together, so be it, but I think using them for calculations and other non-retrieval functions should be shied away from.
Edit: this answer to another related question gives some pros and cons of cursors vs. looping. It would seem to conflict with my previous assertion (read above) about scaling; it suggests that the larger you get, the more you will probably want to move the work into your code instead of the stored procedure.
An alternative to a cursor:
declare @table table (Fields int)
declare @count int
declare @i int

insert into @table (Fields)
select Fields
from Table

select @count = count(*) from @table
set @i = 1

while (@i <= @count)
begin
    -- whatever you need to do for row @i
    set @i = @i + 1
end
Cursors should be faster, but if you have a lot of users running this it will eat up your server's resources. Bear in mind that .NET gives you a more powerful coding language for writing loops than SQL does.
There are very few occasions where a cursor cannot be replaced using standard set based SQL. If you are doing this operation on the server you may be able to use a set based operation. Any more details on what you are doing?
If you do decide to use a cursor, bear in mind that a FAST_FORWARD read-only cursor will give you the best performance, and make sure that you use the DEALLOCATE statement to release it. See here for cursor tips.
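For example, a skeleton of that cheapest-option cursor (table and column names are placeholders):
-- FAST_FORWARD implies FORWARD_ONLY and READ_ONLY, the lightest cursor type.
DECLARE @id uniqueidentifier
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT SomeGuidColumn FROM SomeTable  -- placeholder names
OPEN cur
FETCH NEXT FROM cur INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    -- do the per-row computation here
    FETCH NEXT FROM cur INTO @id
END
CLOSE cur
DEALLOCATE cur  -- always release the cursor when finished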
Cursors should be faster (unless you're doing something weird in SQL and not in ADO.NET).
That said, I've often found that cursors can be eliminated with a little bit of legwork. What's the procedure you need to do?
Cheers,
Eric