Generating custom ID for my application - sql

I want to generate a custom ID for one of the features in my application. Here is the procedure that does it:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
Declare @StartingVendorInvoiceNo int = 0
Select @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
From SystemSettings WITH (TABLOCK)
Update SystemSettings
Set StartingVendorInvoiceNo = @StartingVendorInvoiceNo
Select @StartingVendorInvoiceNo
END
Would there be any issue if multiple users end up calling this procedure? Obviously I don't want multiple users to get the same ID. I am using TABLOCK, but I am not sure whether this is the right way or whether anything else is required.

SQL Server 2012 has the SEQUENCE feature, which is definitely safe for a multi-user environment. It is based on an integer type, though.
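For illustration, a minimal sketch (the sequence name is made up):
CREATE SEQUENCE dbo.VendorInvoiceNo
AS int
START WITH 1
INCREMENT BY 1;
GO
-- each call hands out a distinct value, even under concurrent callers
DECLARE @NextVendorInvoiceNo int;
SET @NextVendorInvoiceNo = NEXT VALUE FOR dbo.VendorInvoiceNo;
SELECT @NextVendorInvoiceNo;
One caveat: sequences are not transactional, so a rolled-back transaction still consumes its value and gaps can appear.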
If you have a complex procedure that generates a "next" ID and you want to make sure that only one instance of the procedure runs at any moment (at the expense of throughput), I'd use sp_getapplock. It is easy to use and understand, and you don't need to worry about placing correct query hints.
Your procedure would look like this:
CREATE PROCEDURE [dbo].[GetNextVendorInvoiceNo]
AS
BEGIN
SET NOCOUNT ON;
SET XACT_ABORT ON;
BEGIN TRANSACTION;
BEGIN TRY
DECLARE @VarLockResult int;
EXEC @VarLockResult = sp_getapplock
@Resource = 'GetNextVendorInvoiceNo_app_lock',
@LockMode = 'Exclusive',
@LockOwner = 'Transaction',
@LockTimeout = 60000,
@DbPrincipal = 'public';
Declare @StartingVendorInvoiceNo int = 0;
IF @VarLockResult >= 0
BEGIN
-- Acquired the lock, generate the "next" ID
Select @StartingVendorInvoiceNo = MAX(StartingVendorInvoiceNo) + 1
From SystemSettings;
Update SystemSettings
Set StartingVendorInvoiceNo = @StartingVendorInvoiceNo;
END ELSE BEGIN
-- TODO: handle the case when it takes too long to acquire the lock,
-- i.e. return some error code
-- For example, return 0
SET @StartingVendorInvoiceNo = 0;
END;
Select @StartingVendorInvoiceNo;
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION;
-- TODO: handle the error
END CATCH;
END
A simple TABLOCK as you wrote it is definitely not enough. You need to wrap everything in a transaction. Then make sure the lock is held till the end of the transaction; see HOLDLOCK. Then make sure that the lock you are getting is the correct one; you may need TABLOCKX. So, overall, you need a pretty good understanding of all these hints and of how locking works. It is definitely possible to achieve the same effect with these hints, but if the logic in the procedure is more complicated than your simplified example, it can easily get pretty ugly. A hint-based sketch is below.
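For comparison, a hint-based sketch (untested, against the same SystemSettings table as above). Because the compound-assignment UPDATE is a single statement, it reads and increments the counter atomically, with no separate SELECT:
BEGIN TRANSACTION;
DECLARE @StartingVendorInvoiceNo int;
-- TABLOCKX + HOLDLOCK: exclusive table lock held to the end of the transaction
UPDATE SystemSettings WITH (TABLOCKX, HOLDLOCK)
SET @StartingVendorInvoiceNo = StartingVendorInvoiceNo = StartingVendorInvoiceNo + 1;
COMMIT TRANSACTION;
SELECT @StartingVendorInvoiceNo;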
To my mind, sp_getapplock is easy to understand and maintain.

Properly understanding the error Cannot use the ROLLBACK statement within an INSERT-EXEC statement - Msg 50000, Level 16, State 1

I understand there is a regularly quoted answer that is meant to address this question, but I believe there is not enough explanation on that thread to really answer the question.
Why earlier answers are inadequate
The first (and accepted) answer simply says this is a common problem and talks about having only one active insert-exec at a time (which is only the first half of the question asked there and doesn't address the ROLLBACK error). The given workaround is to use a table-valued function - which does not help my scenario where my stored procedure needs to update data before returning a result set.
The second answer talks about using openrowset but notes you cannot dynamically specify argument values for the stored procedure - which does not help my scenario because different users need to call my procedure with different parameters.
The third answer provides something called "the old single hash table approach" but does not explain whether it is addressing part 1 or 2 of the question, nor how it works, nor why.
No answer explains why the database is giving this error in the first place.
My use case / requirements
To give specifics for my scenario (although simplified and generic), I have procedures something like below.
In a nutshell though - the first procedure will return a result set, but before it does so, it updates a status column. Effectively these records represent records that need to be synchronised somewhere, so when you call this procedure the procedure will flag the records as being "in progress" for sync.
The second stored procedure calls that first one. Of course the second stored procedure wants to take those records and perform inserts and updates on some tables - to keep those tables in sync with whatever data was returned from the first procedure. After performing all the updates, the second procedure then calls a third procedure - within a cursor (ie. row by row on all the rows in the result set that was received from the first procedure) - for the purpose of setting the status on the source data to "in sync". That is, one by one it goes back and says "update the sync status on record id 1, to 'in sync'" ... and then record 2, and then record 3, etc.
The issue I'm having is that calling the second procedure results in the error
Msg 50000, Level 16, State 1, Procedure getValuesOuterCall, Line 484 [Batch Start Line 24]
Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
but calling the first procedure directly causes no error.
Procedure 1
-- Purpose here is to return a result set,
-- but for every record in the set we want to set a status flag
-- to another value as well.
alter procedure getValues @username, @password, @target
as
begin
set xact_abort on;
begin try
begin transaction;
declare @tableVariable table (
...
);
update someOtherTable
set something = somethingElse
output
someColumns
into @tableVariable
from someTable
join someOtherTable
join etc
where someCol = @username
and etc
;
select
someCols
from @tableVariable
;
commit;
end try
begin catch
if @@trancount > 0 rollback;
declare @msg nvarchar(2048) = error_message() + ' Error line: ' + CAST(ERROR_LINE() AS nvarchar(100));
raiserror (@msg, 16, 1);
return 55555
end catch
end
Procedure 2
-- Purpose here is to obtain the result set from earlier procedure
-- and then do a bunch of data updates based on the result set.
-- Lastly, for each row in the set, call another procedure which will
-- update that status flag to another value.
alter procedure getValuesOuterCall @username, @password, @target
as
begin
set xact_abort on;
begin try
begin transaction;
declare @anotherTableVariable table ( ... );
insert into @anotherTableVariable
exec getValues @username = 'blah', @password = @somePass, @target = ''
;
with CTE as (
select someCols
from @anotherTableVariable
join someOtherTables, etc
)
merge anUnrelatedTable as target
using CTE as source
on target.someCol = source.someCol
when matched then update set
target.yetAnotherCol = source.yetAnotherCol,
etc
when not matched then
insert (someCols, andMoreCols, etc)
values ((select someSubquery), source.aColumn, source.etc)
;
declare @myLocalVariable int;
declare @mySecondLocalVariable int;
declare lcur_myCursor cursor for
select keyColumn
from @anotherTableVariable
;
open lcur_myCursor;
fetch next from lcur_myCursor into @myLocalVariable;
while @@fetch_status = 0
begin
select @mySecondLocalVariable = someCol
from someTable
where someOtherCol = @myLocalVariable;
exec thirdStoredProcForSettingStatusValues @id = @mySecondLocalVariable, etc
fetch next from lcur_myCursor into @myLocalVariable;
end
close lcur_myCursor;
deallocate lcur_myCursor;
commit;
end try
begin catch
if @@trancount > 0 rollback;
declare @msg nvarchar(2048) = error_message() + ' Error line: ' + CAST(ERROR_LINE() AS nvarchar(100));
raiserror (@msg, 16, 1);
return 55555
end catch
end
The parts I don't understand
Firstly, I have no explicit 'rollback' (well, except in the catch block) - so I have to presume that an implicit rollback is causing the issue - but it is difficult to understand where the root of this problem is; I am not even entirely sure which stored procedure is causing the issue.
Secondly, I believe the statements to set xact_abort and begin transaction are required - because in procedure 1 I am updating data before returning the result set. In procedure 2 I am updating data before I call a third procedure to update further data.
Thirdly, I don't think procedure 1 can be converted to a table-valued function because the procedure performs a data update (which would not be allowed in a function?)
Things I have tried
I removed the table variable from procedure 2 and actually created a permanent table to store the results coming back from procedure 1. Before calling procedure 1, procedure 2 would truncate the table. I still got the rollback error.
I replaced the table variable in procedure 1 with a temporary table (ie. single #). I read the articles about how such a table persists for the lifetime of the connection, so within procedure 1 I had drop table if exists... and then create table #.... I still got the rollback error.
Lastly
I still don't understand exactly what is the problem - what is Microsoft struggling to accomplish here? Or what is the scenario that SQL Server cannot accommodate for a requirement that appears to be fairly straightforward: One procedure returns a result set. The calling procedure wants to perform actions based on what's in that result set. If the result set is scoped to the first procedure, then why can't SQL Server just create a temporary copy of the result set within the scope of the second procedure so that it can be acted upon?
Or have I missed it completely and the issue has something to do with the final call to a third procedure, or maybe to do with using try ... catch - for example, perhaps the logic is totally fine but for some reason it is hitting the catch block and the rollback there is the problem (ie. so if I fix the underlying reason leading us to the catch block, all will resolve)?
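For what it's worth, here is a minimal sketch (all names made up) of what I suspect is happening: the inner procedure reaches its catch block while it is the target of an insert ... exec, and the rollback there is the statement SQL Server rejects:
create procedure dbo.innerProc
as
begin
begin try
begin transaction;
select 1 as col;
raiserror('force a failure', 16, 1); -- jump into the catch block
commit;
end try
begin catch
-- this rollback is the statement SQL Server rejects while the
-- procedure is the target of insert ... exec
if @@trancount > 0 rollback;
end catch
end
go
declare @t table (col int);
insert into @t exec dbo.innerProc; -- should raise: Cannot use the ROLLBACK statement within an INSERT-EXEC statement
If that is right, the Msg 50000 I am seeing would just be my own raiserror in the outer procedure's catch block re-raising the underlying Msg 3915.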

Is this sql update guaranteed to be atomic?

I have the following sql:
UPDATE Customer SET Count=1 WHERE ID=1 AND Count=0
SELECT @@ROWCOUNT
I need to know if this is guaranteed to be atomic.
If 2 users try this simultaneously, will only one succeed and get a return value of 1? Do I need to use a transaction or something else in order to guarantee this?
The goal is to get a unique 'Count' for the customer. Collisions in this system will almost never happen, so I am not concerned with the performance if a user has to query again (and again) to get a unique Count.
EDIT:
The goal is to not use a transaction if it is not needed. Also, this logic runs very infrequently (up to 100 times per day), so I wanted to keep it as simple as possible.
It may depend on the SQL server you are using. However, for most, the answer is yes. I guess you are implementing a lock.
Using SQL Server (v 11.0.6020), I have found that this is indeed an atomic operation, as best as I can determine.
I wrote some test stored procedures to try to test this logic:
-- Attempt to update a Customer row with a new Count, returns
-- The current count (used as customer order number) and a bit
-- which determines success or failure. If #Success is 0, re-run
-- the query and try again.
CREATE PROCEDURE [dbo].[sp_TestUpdate]
(
@Count INT OUTPUT,
@Success BIT OUTPUT
)
AS
BEGIN
DECLARE @NextCount INT
SELECT @Count=Count FROM Customer WHERE ID=1
SET @NextCount = @Count + 1
UPDATE Customer SET Count=@NextCount WHERE ID=1 AND Count=@Count
SET @Success=@@ROWCOUNT
END
And:
-- Loop (many times) trying to get a number and insert it into another
-- table. Execute this loop concurrently in several different windows
-- using SSMS.
CREATE PROCEDURE [dbo].[sp_TestLoop]
AS
BEGIN
DECLARE @Iterations INT
DECLARE @Counter INT
DECLARE @Count INT
DECLARE @Success BIT
SET @Iterations = 40000
SET @Counter = 0
WHILE (@Counter < @Iterations)
BEGIN
SET @Counter = @Counter + 1
EXEC sp_TestUpdate @Count = @Count OUTPUT , @Success = @Success OUTPUT
IF (@Success=1)
BEGIN
INSERT INTO TestImage (ImageNumber) VALUES (@Count)
END
END
END
This code ran, creating unique sequential ImageNumber values in the TestImage table, which demonstrates that the UPDATE call above is indeed atomic. Neither procedure guaranteed that any particular update succeeded, but together they did guarantee that no duplicates were created and no numbers were skipped.
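As an aside, the read-increment-retry round trip can be collapsed into one atomic statement using the OUTPUT clause (a sketch against the same Customer table), which removes the need for a retry loop entirely:
DECLARE @NewCount TABLE ([Count] int);
-- a single UPDATE both increments and returns the new value atomically
UPDATE Customer
SET [Count] = [Count] + 1
OUTPUT inserted.[Count] INTO @NewCount
WHERE ID = 1;
SELECT [Count] FROM @NewCount;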

sql locking for batch updates

I'm using SQL Server 2008 R2 and C#.
I'm inserting a batch of rows in SQL Server with a column Status set with value P.
Afterwards, I check how many rows already have the status R and if there are less than 20, I update the row to Status R.
While inserting and updating, more rows are getting added and updated all the time.
I've tried transactions and locking in multiple ways but still: at the moment that a new batch is activated, there are more than 20 rows with status R for a few milliseconds. After those few milliseconds it stabilizes back to 20.
Does anyone have an idea why at bursts the locking doesn't seem to work?
Sample code, reasons, whatever you can share on this subject can be useful!
Thanks!
Following is my stored proc:
DECLARE @return BIT
SET @return = -1
DECLARE @previousValue INT
--insert the started orchestration
INSERT INTO torchestrationcontroller WITH (ROWLOCK)
([flowname],[orchestrationid],[status])
VALUES (@FlowName, @OrchestrationID, 'P')
--check settings
DECLARE @maxRunning INT
SELECT @maxRunning = maxinstances
FROM torchestrationflows WITH (NOLOCK)
WHERE [flowname] = @FlowName
--if running is 0, then you can pass, no limitation here
IF( @maxRunning = 0 )
BEGIN
SET @return = 1
UPDATE torchestrationcontroller WITH(ROWLOCK)
SET [status] = 'R'
WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- BEGIN
RETRY: -- Label RETRY
BEGIN TRY
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION T1
--else: check how many orchestrations are now running
--start lock table
DECLARE @currentRunning INT
SELECT @currentRunning = Count(*)
FROM torchestrationcontroller WITH (TABLOCKX) --Use an exclusive lock that will be held until the end of the transaction on all data processed by the statement
WHERE [flowname] = @FlowName
AND [status] = 'R'
--CASE
IF( @currentRunning < @maxRunning )
BEGIN
-- fewer orchestrations are running than allowed
SET @return = 1
UPDATE torchestrationcontroller WITH(TABLOCKX)
SET [status] = 'R'
WHERE [orchestrationid] = @OrchestrationID
END
ELSE
-- more or equal orchestrations are running than allowed
SET @return = 0
--end lock table
SELECT @return
COMMIT TRANSACTION T1
END TRY
BEGIN CATCH
--PRINT 'Rollback Transaction'
ROLLBACK TRANSACTION
IF ERROR_NUMBER() = 1205 -- Deadlock Error Number
BEGIN
WAITFOR DELAY '00:00:00.05' -- Wait for 50 ms
GOTO RETRY -- Go to Label RETRY
END
END CATCH
I've been able to fix it by setting the isolation level to serializable plus a transaction on the three stored procedures (two of which I didn't mention because I didn't think they were relevant to this problem). Apparently it was the combination of multiple stored procs interfering with each other.
If you know a better way to fix it that could give me a better performance, please let me know!
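One possibility, sketched here against the same table and untested, is to serialize only the rows of the flow in question with UPDLOCK, HOLDLOCK instead of taking TABLOCKX on the whole table; different flows could then proceed concurrently:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
BEGIN TRANSACTION
DECLARE @currentRunning INT
-- UPDLOCK + HOLDLOCK locks this flow's rows (and the matching key range)
-- until the transaction ends, without blocking other flows
SELECT @currentRunning = COUNT(*)
FROM torchestrationcontroller WITH (UPDLOCK, HOLDLOCK)
WHERE [flowname] = @FlowName
AND [status] = 'R'
-- ... same IF/ELSE update logic as in the procedure above ...
COMMIT TRANSACTION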

Add column to table and then update it inside transaction

I am creating a script that will be run on a MS SQL server. This script will run multiple statements and needs to be transactional: if one of the statements fails, the overall execution is stopped and any changes are rolled back.
I am having trouble creating this transactional model when issuing ALTER TABLE statements to add columns to a table and then updating the newly added column. In order to access the newly added column right away, I use a GO command to execute the ALTER TABLE statement, and then call my UPDATE statement. The problem I am facing is that I cannot issue a GO command inside an IF statement. The IF statement is important within my transactional model. This is a sample of the script I am trying to run. Also notice that issuing a GO command will discard the @errorCode variable, which will need to be declared again further down in the code before being used (this is not shown in the code below).
BEGIN TRANSACTION
DECLARE @errorCode INT
SET @errorCode = @@ERROR
-- **********************************
-- * Settings
-- **********************************
IF @errorCode = 0
BEGIN
BEGIN TRY
ALTER TABLE Color ADD [CodeID] [uniqueidentifier] NOT NULL DEFAULT ('{00000000-0000-0000-0000-000000000000}')
GO
END TRY
BEGIN CATCH
SET @errorCode = @@ERROR
END CATCH
END
IF @errorCode = 0
BEGIN
BEGIN TRY
UPDATE Color
SET CodeID= 'B6D266DC-B305-4153-A7AB-9109962255FC'
WHERE [Name] = 'Red'
END TRY
BEGIN CATCH
SET @errorCode = @@ERROR
END CATCH
END
-- **********************************
-- * Check @errorCode to issue a COMMIT or a ROLLBACK
-- **********************************
IF @errorCode = 0
BEGIN
COMMIT
PRINT 'Success'
END
ELSE
BEGIN
ROLLBACK
PRINT 'Failure'
END
So what I would like to know is how to go around this problem, issuing ALTER TABLE statements to add a column and then updating that column, all within a script executing as a transactional unit.
GO is not a T-SQL command. It is a batch delimiter. The client tool (SSMS, sqlcmd, osql, etc.) uses it to effectively cut the file at each GO and send the server the individual batches. So obviously you cannot use GO inside an IF, nor can you expect variables to span scope across batches.
Also, you cannot catch exceptions without checking XACT_STATE() to ensure the transaction is not doomed.
Using GUIDs for IDs is always at least suspicious.
Using NOT NULL constraints and providing a default 'guid' like '{00000000-0000-0000-0000-000000000000}' also cannot be correct.
Updated:
Separate the ALTER and UPDATE into two batches.
Use sqlcmd extensions to break the script on error. This is supported by SSMS when sqlcmd mode is on, by sqlcmd, and it is trivial to support in client libraries too: dbutilsqlcmd.
Use XACT_ABORT to force errors to interrupt the batch. This is frequently used in maintenance scripts (schema changes). Stored procedures and application logic scripts generally use TRY-CATCH blocks instead, but with proper care: see Exception handling and nested transactions.
example script:
:on error exit
set xact_abort on;
go
begin transaction;
go
if columnproperty(object_id('Color'), 'CodeID', 'AllowsNull') is null
begin
alter table Color add CodeID uniqueidentifier null;
end
go
update Color
set CodeID = '...'
where ...
go
commit;
go
Only a successful script will reach the COMMIT. Any error will abort the script and roll back.
I used COLUMNPROPERTY to check for column existence; you could use any method you like instead (e.g. look up sys.columns).
Orthogonal to Remus's comments, what you can do is execute the update in an sp_executesql.
ALTER TABLE [Table] ADD [Xyz] NVARCHAR(256);
DECLARE @sql NVARCHAR(2048) = 'UPDATE [Table] SET [Xyz] = ''abcd'';';
EXEC sys.sp_executesql @stmt = @sql;
We've needed to do this when creating upgrade scripts. Usually we just use GO but it has been necessary to do things conditionally.
I almost agree with Remus but you can do this with SET XACT_ABORT ON and XACT_STATE
Basically
SET XACT_ABORT ON will abort each batch on error and ROLLBACK
Each batch is separated by GO
Execution jumps to the next batch on error
XACT_STATE() will test whether the transaction is still valid
Tools like Red Gate SQL Compare use this technique
Something like:
SET XACT_ABORT ON
GO
BEGIN TRANSACTION
GO
IF COLUMNPROPERTY(OBJECT_ID('Color'), 'CodeID', 'ColumnId') IS NULL
ALTER TABLE Color ADD CodeID [uniqueidentifier] NULL
GO
IF XACT_STATE() = 1
UPDATE Color
SET CodeID= 'B6D266DC-B305-4153-A7AB-9109962255FC'
WHERE [Name] = 'Red'
GO
IF XACT_STATE() = 1
COMMIT TRAN
--else would be rolled back
I've also removed the default. No value = NULL for GUID values. It's meant to be unique: don't try and set every row to all zeros because it will end in tears...
Have you tried it without the GO?
Normally you should not mix table changes and data changes in the same script.
Another alternative, if you don't want to split the code into separate batches, is to use EXEC to create a nested scope/batch, as here:
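For illustration, a minimal sketch of that approach, reusing the Color/CodeID example from above; the UPDATE inside the EXEC string is compiled only when it executes, by which point the ALTER TABLE has already taken effect:
ALTER TABLE Color ADD CodeID uniqueidentifier NULL;
-- nested batch: parsed and compiled after the ALTER TABLE has run
EXEC('UPDATE Color
SET CodeID = ''B6D266DC-B305-4153-A7AB-9109962255FC''
WHERE [Name] = ''Red''');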

How to Suppress the SELECT Output of a Stored Procedure called from another Stored Procedure in SQL Server?

I'm not talking about doing a "SET NOCOUNT OFF". But I have a stored procedure which I use to insert some data into some tables. This procedure creates an XML response string; let me give you an example:
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int) AS
DECLARE @reply varchar(2048)
... Do a bunch of inserts/updates...
SET @reply = '<xml><big /><outputs /></xml>'
SELECT @reply
GO
So I put together a script which uses this SP a bunch of times, and the xml "output" is getting to be too much (it's crashed my box once already).
Is there a way to suppress or redirect the output generated from this stored procedure? I don't think that modifying this stored procedure is an option.
thanks.
I guess I should clarify. The SP above is being called by a T-SQL update script that I wrote, to be run through SQL Server Management Studio, etc.
And it's not the most elegant SQL I've ever written either (some pseudo-SQL):
WHILE unprocessedRecordsLeft
BEGIN
SELECT top 1 record from updateTable where Processed = 0
EXEC insertSomeData @param = record_From_UpdateTable
END
So let's say the UpdateTable has some 50k records in it. That SP gets called 50k times, writing 50k XML strings to the output window. It didn't bring the SQL server to a stop, just my client app (SQL Server Management Studio).
The answer you're looking for is found in a similar SO question by Josh Burke:
-- Assume this table matches the output of your procedure
DECLARE @tmpNewValue TABLE ([Id] int, [Name] varchar(50))
INSERT INTO @tmpNewValue
EXEC [ProcedureB]
-- SELECT [Id], [Name] FROM @tmpNewValue
I think I found a solution.
So what I can do now in my SQL script is something like this (pseudo-SQL):
create table #tmp(xmlReply varchar(2048))
while not_done
begin
select top 1 record from updateTable where processed = 0
insert into #tmp exec insertSomeData @param=record
end
drop table #tmp
Now, is there an even more efficient way to do this? Does SQL Server have something similar to /dev/null? A null table or something?
Answering the question, "How do I suppress stored procedure output?" really depends on what you are trying to accomplish. So I want to contribute what I encountered:
I needed to suppress the stored procedure (USP) output because I just wanted the row count (@@ROWCOUNT) from the output. What I did, and this may not work for everyone, is since my query was already going to be dynamic SQL I added a parameter called @silentExecution to the USP in question. This is a bit parameter which I defaulted to zero (0).
Next, if @silentExecution was set to one (1) I would insert the table contents into a temporary table, which is what suppresses the output, and then execute @@ROWCOUNT with no problem.
USP Example:
CREATE PROCEDURE usp_SilentExecutionProc
@silentExecution bit = 0
AS
BEGIN
SET NOCOUNT ON;
DECLARE @strSQL VARCHAR(MAX);
SET @strSQL = '';
SET @strSQL = 'SELECT TOP 10 * ';
IF @silentExecution = 1
SET @strSQL = @strSQL + 'INTO #tmpDevNull ';
SET @strSQL = @strSQL +
'FROM dbo.SomeTable ';
EXEC(@strSQL);
END
GO
Then you can execute the whole thing like so:
EXEC dbo.usp_SilentExecutionProc @silentExecution = 1;
SELECT @@ROWCOUNT;
The purpose behind doing it like this is that you may need the USP to be able to return a result set in other cases, while still being able to use it for just the row count.
Just wanted to share my solution.
I recently came across a similar issue while writing a migration script, and since the issue was resolved in a different way, I want to record it.
I nearly killed my SSMS client by running a simple while loop 3000 times, calling a procedure on each iteration.
DECLARE @counter INT
SET @counter = 10
WHILE @counter > 0
BEGIN
-- call a procedure which returns some resultset
SELECT @counter -- (simulating the effect of a stored proc returning some resultset)
SET @counter = @counter - 1
END
The script was executed using SSMS, with the query window's default option set to show "Results to Grid" [Ctrl+D shortcut].
Easy Solution:
Try setting the results to file, to avoid the grid being built and painted on the SSMS client [Ctrl+Shift+F keyboard shortcut to send query results to file].
This issue is related to: stackoverflow query
Man, this is seriously a case of a computer doing what you told it to do instead of what you wanted it to do.
If you don't want it to return results, then don't ask it to return results. Refactor that stored procedure into two:
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int) AS
BEGIN
--... Do a bunch of inserts/updates...
EXEC SelectOutput
END
GO
CREATE PROCEDURE SelectOutput AS
BEGIN
DECLARE @reply varchar(2048)
SET @reply = '<xml><big /><outputs /></xml>'
SELECT @reply
END
From which client are you calling the stored procedure? Say it was from C#, and you're calling it like:
var com = myConnection.CreateCommand();
com.CommandText = "exec insertSomeData 1";
var read = com.ExecuteReader();
This will not yet retrieve the result from the server; you have to call Read() for that:
read.Read();
var myBigString = read[0].ToString();
So if you don't call Read, the XML won't leave the Sql Server. You can even call the procedure with ExecuteNonQuery:
var com = myConnection.CreateCommand();
com.CommandText = "exec insertSomeData 1";
com.ExecuteNonQuery();
Here the client won't even ask for the result of the select.
You could create a SQL CLR stored procedure that execs this. Should be pretty easy.
I don't know if SQL Server has an option to suppress output (I don't think it does), but the SQL Query Analyzer has an option (under results tab) to "Discard Results".
Are you running this through isql?
You said your server is crashing. What is crashing: the application that consumes the output of this SQL, or SQL Server itself (assuming SQL Server)?
If you are using a .NET Framework application to call the stored procedure, then take a look at SqlCommand.ExecuteNonQuery. This executes the stored procedure with no results returned. If the problem is at the SQL Server level then you are going to have to do something different (i.e. change the stored procedure).
You can include in the SP a parameter to indicate whether you want it to do the select or not, but of course you need to have access to the SP in order to reprogram it.
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int, @doSelect bit=1) AS
DECLARE @reply varchar(2048)
... Do a bunch of inserts/updates...
SET @reply = '<xml><big /><outputs /></xml>'
if @doSelect = 1
SELECT @reply
GO
Ever tried SET NOCOUNT ON as an option?