Is this SQL update guaranteed to be atomic?

I have the following SQL:
UPDATE Customer SET Count=1 WHERE ID=1 AND Count=0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to avoid using a transaction if it is not needed. Also, this logic runs very infrequently (up to 100 times per day), so I wanted to keep it as simple as possible.

It may depend on the SQL engine you are using. However, for most the answer is yes: a single UPDATE statement like this is atomic. I guess you are implementing an optimistic lock.
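For reference, here is a minimal sketch of the retry pattern the question implies: read the current value, try to claim the next one, and check @@ROWCOUNT (table and column names as in the question; the loop itself is an assumption):
DECLARE @current INT, @rows INT
SET @rows = 0
WHILE @rows = 0
BEGIN
    SELECT @current = Count FROM Customer WHERE ID = 1
    UPDATE Customer SET Count = @current + 1 WHERE ID = 1 AND Count = @current
    SET @rows = @@ROWCOUNT
END
-- @current + 1 is now this caller's unique Count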

Using SQL Server (v11.0.6020), I determined that this is indeed an atomic operation, as best as I can tell.
I wrote some test stored procedures to try to test this logic:
-- Attempt to update a Customer row with a new Count. Returns
-- the current count (used as customer order number) and a bit
-- which indicates success or failure. If @Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
    @Count INT OUTPUT,
    @Success BIT OUTPUT
)
AS
BEGIN
    DECLARE @NextCount INT
    SELECT @Count = Count FROM Customer WHERE ID = 1
    SET @NextCount = @Count + 1
    UPDATE Customer SET Count = @NextCount WHERE ID = 1 AND Count = @Count
    SET @Success = @@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert it into another
-- table. Execute this loop concurrently in several different windows
-- using SSMS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
    DECLARE @Iterations INT
    DECLARE @Counter INT
    DECLARE @Count INT
    DECLARE @Success BIT
    SET @Iterations = 40000
    SET @Counter = 0
    WHILE (@Counter < @Iterations)
    BEGIN
        SET @Counter = @Counter + 1
        EXEC sp_TestUpdate @Count = @Count OUTPUT, @Success = @Success OUTPUT
        IF (@Success = 1)
        BEGIN
            INSERT INTO TestImage (ImageNumber) VALUES (@Count)
        END
    END
END
This code ran, creating unique sequential ImageNumber values in the TestImage table. This proves that the above SQL UPDATE call is indeed atomic. Neither procedure guaranteed that a given update would succeed, but they did guarantee that no duplicates were created and no numbers were skipped.
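As an aside, on SQL Server 2005 and later a single UPDATE with an OUTPUT clause can increment and read the value in one atomic statement, which avoids the retry loop entirely (a sketch against the same Customer table):
DECLARE @NewCount TABLE (Count INT)
UPDATE Customer
SET Count = Count + 1
OUTPUT inserted.Count INTO @NewCount
WHERE ID = 1
-- the claimed value, unique to this caller
SELECT Count FROM @NewCount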

Related

Multiple rows are getting inserted into a table (which is not desired) as part of a stored procedure

Update: This still remains a mystery. We checked the calling code and did not find anything that would make the SP run in a loop.
For now we have split the SP into two, which seems to have stopped the issue, although we are not able to explain how that has helped.
Database: MS SQL Server.
I have an SP which performs a few operations: it inserts a row into 3 tables based on a certain status as part of that SP being called.
It is getting called from our web application based on a user action.
We have cases, a few times a day, where the same row gets inserted multiple times (sometimes more than 50), with the same values in each row except that the datetime of each insert differs by a few milliseconds. So it is unlikely that the user is initiating that action.
This SP is not running in a transaction or with any locks; however, it is probably getting called concurrently multiple times, as we have many concurrent users on the web application invoking this action.
My question is: what is causing the same row to be inserted so many times? If concurrent execution of the SP were the issue and we were updating the same row, then it is understandable that one write might overwrite another. However, in this case each user calls the SP with different parameters.
I have put the said operation in a transaction to monitor the behavior, but I am looking to find out what exactly causes this kind of multiple insert with the same values just a few milliseconds apart.
USE [ABC]
GO
/****** Object: StoredProcedure [dbo].[AddProcessAdmittedDocUploadScrutinyWithLog] ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[AddProcessAdmittedDocUploadScrutinyWithLog]
(
    --Insert using bulk
    @stdfrm_id int,
    @course_id int,
    @stdfrm_scrt_apprvby int,
    @stdfrm_scrt_apprvcomment varchar(max),
    @sRemainingDocs varchar(max),
    @DTProcessAdmittedDocUploadScrutiny AS dbo.MyDTProcessAdmittedDocUploadScrutiny READONLY
)
AS
BEGIN
    DECLARE @result char
    SET @result = 'N'
    --New
    DECLARE @AuditCount int = 0;
    SELECT @AuditCount = count(scrtaudit_id) FROM tbl_ProcessAdmittedScrutinyAuditLog
    WHERE stdfrm_id = @stdfrm_id AND stdfrm_scrt_apprvby = @stdfrm_scrt_apprvby
      AND stdfrm_scrt_apprvcomment = @stdfrm_scrt_apprvcomment AND convert(date, stdfrm_scrt_apprvon, 103) = convert(date, getdate(), 103)
    --Check extra condition to avoid repetition
    IF (@AuditCount = 0)
    BEGIN
        --Call Insert
        BEGIN TRY
            /*Remaining Documents----------*/
            DECLARE @sdtdoc_id TABLE (n int primary key identity(1,1), id int)
            IF (@sRemainingDocs IS NOT NULL)
            BEGIN
                --INSERT INTO @sdtdoc_id (id) SELECT Name FROM splitstring(@sRemainingDocs)
                INSERT INTO @sdtdoc_id (id) SELECT [Value] FROM dbo.FN_ListToTable(@sRemainingDocs, ',')
            END
            DECLARE @isRemaining int = 0;
            SELECT @isRemaining = Count(*) FROM @sdtdoc_id
            /*Calculate stdfrm_scrt_apprvstatus*/
            DECLARE @stdfrm_scrt_apprvstatus char(1) = 'A'; --Approved
            DECLARE @TotalDescripancies int;
            SELECT @TotalDescripancies = count(doc_id) FROM @DTProcessAdmittedDocUploadScrutiny WHERE doc_id_scrtyn = 'Y'
            IF (@isRemaining > 0)
            BEGIN
                SET @stdfrm_scrt_apprvstatus = 'H'; --Discrepancies Found
            END
            ELSE IF EXISTS (SELECT count(doc_id) FROM @DTProcessAdmittedDocUploadScrutiny WHERE doc_id_scrtyn = 'Y')
            BEGIN
                IF (@TotalDescripancies > 0)
                BEGIN
                    SET @stdfrm_scrt_apprvstatus = 'H'; --Discrepancies Found
                END
            END
            /* Check if Discrepancies Found first time, then assign to Checker, otherwise assign directly to college like grievance */
            IF (@stdfrm_scrt_apprvstatus = 'H')
            BEGIN
                DECLARE @countAuditLog int = 0;
                SELECT @countAuditLog = count(stdfrm_id) FROM tbl_ProcessAdmittedScrutinyAuditLog WHERE stdfrm_id = @stdfrm_id
                IF (@countAuditLog = 0)
                BEGIN
                    SET @stdfrm_scrt_apprvstatus = 'G' --'E'; --Discrepancies Found, set Edit request, assign to Checker
                END
                --ELSE IF (@countAuditLog = 1)
                --BEGIN
                --    SET @stdfrm_scrt_apprvstatus = 'G'; --Discrepancies Found, set Grievance, assign to college
                --END
            END
            /*----------------------*/
            /*Update status in original table-----*/
            UPDATE tbl_ProcessAdmitted SET stdfrm_scrt_apprvstatus = @stdfrm_scrt_apprvstatus
                , stdfrm_scrt_apprvon = getdate(), stdfrm_scrt_apprvby = @stdfrm_scrt_apprvby
                , stdfrm_scrt_apprvcomment = @stdfrm_scrt_apprvcomment
            WHERE stdfrm_id = @stdfrm_id
            /*Add in Main Student Log-----------*/
            /********* The row here gets inserted multiple times *******************/
            INSERT INTO tbl_ProcessAdmittedScrutinyAuditLog
                (stdfrm_id, stdfrm_scrt_apprvstatus, stdfrm_scrt_apprvon, stdfrm_scrt_apprvby, stdfrm_scrt_apprvcomment)
            VALUES
                (@stdfrm_id, @stdfrm_scrt_apprvstatus, getdate(), @stdfrm_scrt_apprvby, @stdfrm_scrt_apprvcomment)
            DECLARE @scrtaudit_id int = @@identity
            /*Completed -------------------------*/
            DELETE FROM tbl_ProcessAdmittedDocUploadScrutiny WHERE stdfrm_id = @stdfrm_id
            SET NOCOUNT ON;
            /********* The row here gets inserted multiple times *******************/
            INSERT tbl_ProcessAdmittedDocUploadScrutiny
                (stdfrm_id, course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment)
            SELECT @stdfrm_id, @course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment
            FROM @DTProcessAdmittedDocUploadScrutiny;
            /*Scrutiny Document Log -------------------------*/
            /********* The row here gets inserted multiple times *******************/
            INSERT tbl_ProcessAdmittedDocUploadScrutinyAuditLog
                (scrtaudit_id, stdfrm_id, course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment)
            SELECT @scrtaudit_id, @stdfrm_id, @course_id, doc_id, doc_id_scrtyn, doc_id_scrtrmrk, doc_id_comment
            FROM @DTProcessAdmittedDocUploadScrutiny;
            /*Remaining Documents Insert into table*/
            DELETE FROM tbl_ProcessAdmittedDocUploadScrutinyRemiaing WHERE stdfrm_id = @stdfrm_id
            DECLARE @Id int, @doc_id int
            WHILE (SELECT Count(*) FROM @sdtdoc_id) > 0
            BEGIN
                SELECT TOP 1 @Id = n, @doc_id = id FROM @sdtdoc_id
                --Do some processing here
                INSERT INTO tbl_ProcessAdmittedDocUploadScrutinyRemiaing (stdfrm_id, doc_id)
                VALUES (@stdfrm_id, @doc_id)
                INSERT INTO tbl_ProcessAdmittedDocUploadScrutinyRemiaingAuditLog
                    (scrtaudit_id, stdfrm_id, doc_id)
                VALUES (@scrtaudit_id, @stdfrm_id, @doc_id)
                DELETE FROM @sdtdoc_id WHERE n = @Id
            END --End While
            /*End Remaining Documents-----------*/
            SET @result = @stdfrm_scrt_apprvstatus
        END TRY
        BEGIN CATCH
            SET @result = 'N'
            INSERT INTO tbl_ErrorSql (ErrorMessage, stdfrm_id)
            VALUES (coalesce(Error_Message(), ERROR_LINE()), @stdfrm_id)
        END CATCH;
        --End of Call Insert
    END
    SELECT @result
END
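If concurrent duplicate calls turn out to be the cause, one way to hedge is to serialize the count-then-insert check with an application lock keyed on the form id, so two executions for the same @stdfrm_id cannot interleave (a sketch only; the resource name and timeout are assumptions):
DECLARE @res nvarchar(255), @rc int
SET @res = N'DocUploadScrutiny_' + CAST(@stdfrm_id AS nvarchar(20))
BEGIN TRANSACTION
EXEC @rc = sp_getapplock @Resource = @res, @LockMode = 'Exclusive',
     @LockOwner = 'Transaction', @LockTimeout = 5000
IF @rc >= 0
BEGIN
    -- existing duplicate check and inserts go here
    COMMIT
END
ELSE
    ROLLBACK -- could not acquire the lock in time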

About lock behavior when updating a row in SQL Server

Now I'm trying to increment a sequential number in SQL Server, starting from a number provided by users.
I have a problem when multiple users insert a row at the same time with the same number.
I tried updating the number the user provided into a table, and I expected that when I update the same table under the same condition, SQL Server would block other modifications to this row until the current update finished, but it does not.
Here is the update statement I used:
UPDATE GlobalParam
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
Could you tell me any way to force the other update command wait until the current command finished ?
This is my entire command:
DECLARE @Result bigint;
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
DECLARE @SelectTopStm nvarchar(MAX);
DECLARE @ExistRow int
SET @SelectTopStm = N'SELECT @ExistRow = 1 FROM (SELECT TOP 1 Code FROM Item WHERE Code = ''999'') temp'
EXEC sp_executesql @SelectTopStm, N'@ExistRow int output', @ExistRow output
IF (@ExistRow is not null)
BEGIN
    DECLARE @MaxValue bigint
    DECLARE @ReturnUpdateTbl table (ValueString nvarchar(max));
    UPDATE GlobalParam SET ValueString = (CAST(ValueString as bigint) + 1)
    OUTPUT inserted.ValueString INTO @ReturnUpdateTbl
    WHERE [Id] = '333A8E1F-16DD-E411-8280-D4BED9D726B3'
    SELECT TOP 1 @MaxValue = CAST(ValueString as bigint) FROM @ReturnUpdateTbl
    SET @Result = @MaxValue
END
ELSE
BEGIN
    SET @Output = 999
END
END
I wrote the code above as a stored procedure.
Here is the real code when I insert one Item:
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
, (some parameters)..
, @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
I created 3 threads and ran them at the same time.
The returned result:
Id Code
1 999
2 1000
3 1000
Thanks
If I understood your requirements, try the ROWLOCK hint to tell the optimizer to lock the rows one by one as the update needs them:
UPDATE GlobalParam WITH(ROWLOCK)
SET ValueString = (CAST(ValueString as bigint) + 1)
WHERE Id = 'xxxx'
By default SQL Server uses READ COMMITTED locking, which releases read locks as soon as each read operation completes. Once the UPDATE statement below is complete, all read locks are released from the table Item.
UPDATE GlobalParam SET ValueString = (SELECT MAX(Code) FROM Item)
Since your INSERT into Item is outside the scope of your procedure, you can run the thread in the SERIALIZABLE isolation level. Something like this:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
DECLARE @IncrementResult BIGINT
EXEC IncrementNumberUnique
, (some parameters)..
, @Result = @IncrementResult OUTPUT
INSERT INTO ITEM (Id, Code) VALUES ('xxxx', @IncrementResult)
Changing the isolation level to SERIALIZABLE will increase blocking and contention for resources on the Item table.
To know more about isolation levels, refer to this.
You should look into identity columns and remove such manual computation of incremental columns if possible.
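On SQL Server 2012 and later, a SEQUENCE object is the simplest way to get an atomic increment without manual locking (a sketch; the sequence name and seed are assumptions):
CREATE SEQUENCE dbo.ItemCodeSeq AS bigint START WITH 1000 INCREMENT BY 1
GO
DECLARE @Code bigint
-- each call atomically returns the next value, even under concurrency
SET @Code = NEXT VALUE FOR dbo.ItemCodeSeq
INSERT INTO Item (Id, Code) VALUES ('xxxx', @Code)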

In T-SQL / SQL Server 2000, referencing a particular row of a result set

I want to reference the nth row of #temptable (at the second SQL comment below). What expression will allow me to do so?
DECLARE @counter INT
SET @counter = 0
WHILE (@counter < (SELECT COUNT(*) FROM #temptable))
--#temptable has one column and 0 or more rows
BEGIN
    DECLARE @variab INT
    EXEC @variab = get_next_ticket 3906, 'n', 1
    INSERT INTO Student_Course_List
    SELECT @student_id,
        -- nth result set row in #temptable, where n is @counter+1
        @variab
    SET @counter = @counter + 1
END
Cursor (will this work?):
for record in (select id from #temptable) loop
--For statements, use record.id
end loop;
Normally in a relational database like SQL Server, you prefer to do set operations. So it would be best to simply have INSERT INTO tbl SOMECOMPLEXQUERY, even with very complex queries. This is far preferable to row processing. In a complex system, using a cursor should be relatively rare.
In your case, it would appear that the get_next_ticket procedure performs some significant logic which cannot be done in a set-oriented fashion. If you cannot perform its function in an alternative set-oriented way, then you would use a CURSOR.
You would declare a CURSOR on your set SELECT whatever FROM #temptable, OPEN it, FETCH from the cursor into variables for each column, and then use them in the INSERT.
Instead of using a while loop (with a counter, as you are doing) to iterate the table, you should use a cursor.
The syntax would be:
DECLARE @id int
DECLARE c CURSOR FOR SELECT id FROM #temptable
BEGIN
    OPEN c
    FETCH NEXT FROM c INTO @id
    WHILE (@@FETCH_STATUS = 0)
    BEGIN
        --Do stuff here
        FETCH NEXT FROM c INTO @id
    END
    CLOSE c
    DEALLOCATE c
END
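Alternatively, a SQL Server 2000-era option is to number the rows yourself with SELECT ... IDENTITY(...) INTO and then pick the nth row directly (a sketch; it assumes #temptable's single column is called id):
-- build an indexed copy of the temp table
SELECT IDENTITY(int, 1, 1) AS rn, id
INTO #indexed
FROM #temptable

-- fetch the nth row, where n is @counter + 1
DECLARE @nth_id INT
SELECT @nth_id = id FROM #indexed WHERE rn = @counter + 1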

SQL Batched Delete

I have a table in SQL Server 2005 which has approx 4 billion rows in it. I need to delete approximately 2 billion of these rows. If I try and do it in a single transaction, the transaction log fills up and it fails. I don't have any extra space to make the transaction log bigger. I assume the best way forward is to batch up the delete statements (in batches of ~ 10,000?).
I can probably do this using a cursor, but is there a standard/easy/clever way of doing this?
P.S. This table does not have an identity column as a PK. The PK is made up of an integer foreign key and a date.
You can 'nibble' the deletes, which also means that you don't put a massive load on the database. If your t-log backups run every 10 minutes, then you should be OK to run this once or twice over the same interval. You can schedule it as a SQL Agent job.
Try something like this:
DECLARE @count int
SET @count = 10000

DELETE FROM table1
WHERE table1id IN (
    SELECT TOP (@count) table1id
    FROM table1
    WHERE x = 'y'
)
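If you need to clear all matching rows rather than one batch per run, a variation is to wrap the same statement in a loop that stops when a batch deletes nothing (same hypothetical table1 / x = 'y' names as above):
DECLARE @count int
SET @count = 10000
WHILE 1 = 1
BEGIN
    DELETE FROM table1
    WHERE table1id IN (
        SELECT TOP (@count) table1id
        FROM table1
        WHERE x = 'y'
    )
    -- stop once a batch removes no rows
    IF @@ROWCOUNT = 0 BREAK
END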
What distinguishes the rows you want to delete from those you want to keep? Will this work for you:
while exists (select 1 from your_table where <your_condition>)
delete top(10000) from your_table
where <your_condition>
In addition to putting this in a batch with a statement to truncate the log, you also might want to try these tricks:
Add criteria that matches the first column in your clustered index in addition to your other criteria
Drop any indexes from the table and then put them back after the delete is done if that's possible and won't interfere with anything else going on in the DB, but KEEP the clustered index
For the first point above, for example, if your PK is clustered then find a range which approximately matches the number of rows that you want to delete each batch and use that:
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM My_Table
SET @interval = 100000 -- You need to determine the right number here
SET @end_id = @start_id + @interval

WHILE (@start_id <= @max_id)
BEGIN
    DELETE FROM My_Table WHERE id BETWEEN @start_id AND @end_id AND <your criteria>
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
END
Sounds like this is a one-off operation (I hope for you) and you don't need to go back to a state that's halfway through this batched delete. If that's the case, why don't you just switch to the SIMPLE recovery model before running, and then back to FULL when you're done?
This way the transaction log won't grow as much. This might not be ideal in most situations, but I don't see anything wrong here (assuming, as above, you don't need to go back to a state that's in between your deletes).
You can do this in your script with something like:
ALTER DATABASE myDB SET RECOVERY SIMPLE -- and back to FULL afterwards
Alternatively, you can set up a job to shrink the transaction log at a given interval while your delete is running. This is kind of bad, but I reckon it'd do the trick.
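A sketch of what that could look like, assuming nothing such as log shipping or mirroring depends on the FULL log chain, and that you take a fresh full backup afterwards to restart it:
ALTER DATABASE myDB SET RECOVERY SIMPLE
GO
-- run the batched delete here
ALTER DATABASE myDB SET RECOVERY FULL
GO
-- restart the log backup chain (path is hypothetical)
BACKUP DATABASE myDB TO DISK = 'D:\Backups\myDB_full.bak'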
Well, if you were using SQL Server partitioning, say based on the date column, you would possibly have switched out the partitions that are no longer required. A consideration for a future implementation, perhaps.
I think the best option may be as you say, to delete the data in smaller batches, rather than in one hit, so as to avoid any potential blocking issues.
You could also consider the following method:
Copy the data to keep into a temporary table
Truncate the original table to purge all data
Move everything from the temporary table back into the original table
Your indexes would also be rebuilt as the data was added back to the original table.
I would do something similar to the temp table suggestions but I'd select into a new permanent table the rows you want to keep, drop the original table and then rename the new one. This should have a relatively low tran log impact. Obviously remember to recreate any indexes that are required on the new table after you've renamed it.
Just my two p'enneth.
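A sketch of that keep-and-rename approach (the table name and keep predicate are placeholders):
-- copy only the rows to keep; SELECT ... INTO is minimally logged
-- under the SIMPLE or BULK_LOGGED recovery model
SELECT * INTO dbo.My_Table_Keep
FROM dbo.My_Table
WHERE keep_flag = 1 -- whatever identifies the rows to keep

DROP TABLE dbo.My_Table
EXEC sp_rename 'dbo.My_Table_Keep', 'My_Table'
-- recreate the PK, indexes and permissions on the renamed table here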
Here is my example:
-- configure script
-- Script limits - transactions per commit (default 10,000)
-- and time to allow script to run (in seconds, default 2 hours)
--
DECLARE @MAX INT
DECLARE @MAXT INT
--
-- These 4 variables are substituted by shell script.
--
SET @MAX = $MAX
SET @MAXT = $MAXT
SET @TABLE = $TABLE
SET @WHERE = $WHERE

-- step 1 - Main loop
DECLARE @continue INT
-- deleted in one transaction
DECLARE @deleted INT
-- deleted total in script
DECLARE @total INT
SET @total = 0
DECLARE @max_id INT, @start_id INT, @end_id INT, @interval INT
SET @interval = @MAX
SELECT @start_id = MIN(id), @max_id = MAX(id) FROM @TABLE
SET @end_id = @start_id + @interval

-- timing
DECLARE @start DATETIME
DECLARE @now DATETIME
DECLARE @timee INT
SET @start = GETDATE()
--
SET @continue = 1
IF OBJECT_ID (N'EntryID', 'U') IS NULL
BEGIN
    CREATE TABLE EntryID (startid INT)
    INSERT INTO EntryID (startid) VALUES (@start_id)
END
ELSE
BEGIN
    SELECT @start_id = startid FROM EntryID
END

WHILE (@continue = 1 AND @start_id <= @max_id)
BEGIN
    PRINT 'Start issued: ' + CONVERT(varchar(19), GETDATE(), 120)
    BEGIN TRANSACTION
        DELETE
        FROM @TABLE
        WHERE id BETWEEN @start_id AND @end_id AND @WHERE
        SET @deleted = @@ROWCOUNT
        UPDATE EntryID SET EntryID.startid = @end_id + 1
    COMMIT
    PRINT 'Deleted issued: ' + STR(@deleted) + ' records. ' + CONVERT(varchar(19), GETDATE(), 120)
    SET @total = @total + @deleted
    SET @start_id = @end_id + 1
    SET @end_id = @end_id + @interval
    IF @end_id > @max_id
        SET @end_id = @max_id

    SET @now = GETDATE()
    SET @timee = DATEDIFF(second, @start, @now)
    IF @timee > @MAXT
    BEGIN
        PRINT 'Time limit exceeded for the script, exiting'
        SET @continue = 0
    END
    -- ELSE
    -- BEGIN
    --     SELECT @total 'Removed now', @timee 'Total time, seconds'
    -- END
END

SELECT @total 'Removed records', @timee 'Total time sec', @start_id 'Next id', @max_id 'Max id', @continue 'COMPLETED?'
SELECT * FROM EntryID next_start_id
GO
The short answer is, you can't delete 2 billion rows without incurring some kind of major database downtime.
Your best option may be to copy the data to a temp table and truncate the original table, but this will fill your tempDB and would use no less logging than deleting the data.
You will need to delete as many rows as you can until the transaction log fills up, then truncate it each time. The answer provided by Stanislav Kniazev could be modified to do this by increasing the batch size and adding a call to truncate the log file.
I agree with the people who want you to loop over a smaller set of records; this will be faster than trying to do the whole operation in one step. You may need to experiment with the number of records you include in the loop. About 2,000 at a time seems to be the sweet spot in most of the tables I do large deletes from, although a few need smaller amounts, like 500. It depends on the number of foreign keys, the size of the record, triggers, etc., so it really will take some experimenting to find what you need. It also depends on how heavy the use of the table is. A heavily accessed table will need each iteration of the loop to run a shorter amount of time. If you can run during off hours, or best yet in single-user mode, then you can have more records deleted in one loop.
If you don't think you can do this in one night during off hours, it might be best to design the loop with a counter and only do a set number of iterations each night until it is done.
Further, if you use an implicit transaction rather than an explicit one, you can kill the loop query at any time and records already deleted will stay deleted, except those in the current round of the loop. Much faster than trying to roll back half a million records because you've brought the system to a halt.
It is usually a good idea to backup a database immediately before undertaking an operation of this nature.

How do I execute a stored procedure once for each row returned by a query?

I have a stored procedure that alters user data in a certain way. I pass it a user_id and it does its thing. I want to run a query on a table and then, for each user_id I find, run the stored procedure once on that user_id.
How would I write a query for this?
use a cursor
ADDENDUM: [MS SQL cursor example]
declare @field1 int
declare @field2 int
declare cur CURSOR LOCAL for
    select field1, field2 from sometable where someotherfield is null
open cur
fetch next from cur into @field1, @field2
while @@FETCH_STATUS = 0 BEGIN
    --execute your sproc on each row
    exec uspYourSproc @field1, @field2
    fetch next from cur into @field1, @field2
END
close cur
deallocate cur
in MS SQL, here's an example article
note that cursors are slower than set-based operations, but faster than manual while-loops; more details in this SO question
ADDENDUM 2: if you will be processing more than just a few records, pull them into a temp table first and run the cursor over the temp table; this will prevent SQL from escalating into table-locks and speed up operation
ADDENDUM 3: and of course, if you can inline whatever your stored procedure is doing to each user ID and run the whole thing as a single SQL update statement, that would be optimal
Try to change your method if you need to loop!
Within the parent stored procedure, create a #temp table that contains the data that you need to process. Call the child stored procedure; the #temp table will be visible and you can process it, hopefully working with the entire set of data and without a cursor or loop.
This really depends on what the child stored procedure is doing. If you are UPDATE-ing, you can "update from", joining in the #temp table, and do all the work in one statement without a loop. The same can be done for INSERTs and DELETEs. If you need to do multiple updates with IFs, you can convert those to multiple UPDATE FROMs with the #temp table and use CASE statements or WHERE conditions.
When working in a database, try to lose the mindset of looping: it is a real performance drain, will cause locking/blocking, and slows down processing. If you loop everywhere, your system will not scale well and will be very hard to speed up when users start complaining about slow refreshes.
Post the content of the procedure you want to call in a loop, and I'll bet 9 times out of 10 you could write it to work on a set of rows.
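For instance, the "update from" shape described above might look like this (all table and column names are hypothetical):
-- apply the per-row change to every row in one set-based statement
UPDATE u
SET u.status = t.new_status
FROM dbo.Users AS u
INNER JOIN #temp AS t
    ON t.user_id = u.user_id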
You can do it with a dynamic query.
declare @cadena varchar(max) = ''
select @cadena = @cadena + 'exec spAPI ' + ltrim(id) + ';'
from sysobjects;
exec(@cadena);
Something like this; substitutions will be needed for your tables and field names:
Declare @TableUsers Table (User_ID nVarchar(50), MyRowCount Int Identity(1,1))
Declare @i Int, @MaxI Int, @UserID nVarchar(50)

Insert into @TableUsers
Select User_ID
From Users
Where (My Criteria)

Select @MaxI = @@RowCount, @i = 1

While @i <= @MaxI
Begin
    Select @UserID = User_ID from @TableUsers Where MyRowCount = @i
    Exec prMyStoredProc @UserID
    Select @i = @i + 1, @UserID = null
End
Use a table variable or a temporary table.
As has been mentioned before, a cursor is a last resort. Mostly because it uses lots of resources, issues locks and might be a sign you're just not understanding how to use SQL properly.
Side note: I once came across a solution that used cursors to update rows in a table. After some scrutiny, it turned out the whole thing could be replaced with a single UPDATE command. However, in this case, where a stored procedure should be executed, a single SQL command won't work.
Create a table variable like this (if you're working with lots of data or are short on memory, use a temporary table instead):
DECLARE @menus AS TABLE (
    id INT IDENTITY(1,1),
    parent NVARCHAR(128),
    child NVARCHAR(128));
The id is important.
Replace parent and child with some good data, e.g. relevant identifiers or the whole set of data to be operated on.
Insert data in the table, e.g.:
INSERT INTO @menus (parent, child)
VALUES ('Some name', 'Child name');
...
INSERT INTO @menus (parent, child)
VALUES ('Some other name', 'Some other child name');
Declare some variables:
DECLARE @id INT = 1;
DECLARE @parentName NVARCHAR(128);
DECLARE @childName NVARCHAR(128);
And finally, create a while loop over the data in the table:
WHILE @id IS NOT NULL
BEGIN
    SELECT @parentName = parent,
           @childName = child
    FROM @menus WHERE id = @id;

    EXEC myProcedure @parent = @parentName, @child = @childName;

    SELECT @id = MIN(id) FROM @menus WHERE id > @id;
END
The first SELECT fetches data from the table variable. The second SELECT advances @id; MIN returns NULL if no rows match.
An alternative approach is to loop while the table has rows, SELECT TOP 1, and remove the selected row from the table:
WHILE EXISTS (SELECT 1 FROM @menuIDs)
BEGIN
    SELECT TOP 1 @menuID = menuID FROM @menuIDs;
    EXEC myProcedure @menuID = @menuID;
    DELETE FROM @menuIDs WHERE menuID = @menuID;
END;
Can this not be done with a user-defined function to replicate whatever your stored procedure is doing?
SELECT dbo.udfMyFunction(user_id), someOtherField, etc FROM MyTable WHERE WhateverCondition
where udfMyFunction is a function you make that takes in the user ID and does whatever you need to do with it.
See http://www.sqlteam.com/article/user-defined-functions for a bit more background
I agree that cursors really ought to be avoided where possible. And it usually is possible!
(Of course, my answer presupposes that you're only interested in getting the output from the SP and that you're not changing the actual data. I find "alters user data in a certain way" a little ambiguous in the original question, so I thought I'd offer this as a possible solution. It utterly depends on what you're doing!)
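For illustration, the kind of function the query above assumes might look like the sketch below (entirely hypothetical; note that a scalar UDF cannot modify table data, which is why this only fits read-style logic):
CREATE FUNCTION dbo.udfMyFunction (@user_id INT)
RETURNS INT
AS
BEGIN
    DECLARE @result INT
    -- hypothetical read-only computation per user
    SELECT @result = COUNT(*) FROM dbo.Orders WHERE user_id = @user_id
    RETURN @result
END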
I like the dynamic query approach from Dave Rincon, as it does not use cursors and is small and easy. Thank you, Dave, for sharing.
But for my needs on Azure SQL, and with a "distinct" in the query, I had to modify the code like this:
Declare @SQL nvarchar(max);
-- Set SQL Variable
-- Prepare exec command for each distinct tenantid found in Machines
SELECT @SQL = (Select distinct 'exec dbo.sp_S2_Laser_to_cache ' +
    convert(varchar(8), tenantid) + ';'
    from Dim_Machine
    where iscurrent = 1
    FOR XML PATH(''))

--for debugging, print the sql
print @SQL;

--execute the generated sql script
exec sp_executesql @SQL;
I hope this helps someone...