Constructing SQL Server stored procedure for array Input - sql

I am struggling with this. I have looked at table-valued parameters, but I am thinking they are way beyond my simple understanding at this stage of SQL.
The situation I have created is that I have an array of ID values generated inside MS Access as a result of some other tasks in there, and I want to send these over to SQL Server to grab the jobs whose ID numbers match.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[get_Job]
    @jobID VARCHAR,
    @JobIDs id_List READONLY
AS
BEGIN
    SELECT @JobID AS JobID;
    SELECT *
    FROM Job
END;
That is my current stored procedure. While I have been able to get it to return the JobID variable, any list I add generates an error, and if I insert only one ID into JobIDs, that doesn't generate a result either.
As I said, I think I am punching well above my weight and am getting a bit lost in all this. Perhaps I can be directed to a better training resource, a site that explains this in baby steps, or a book I can purchase to help me understand it? I would appreciate help with fixing the errors above, but teaching me to fish is probably better.
Thanks in advance

The issue comes down to how long the list of IDs you are going to pass to T-SQL is going to be.
You could take the passed list (assume it is a string) and send it from Access as a pass-through (PT) query, say like this:
exec GetHotels '1,2,3,4,5,6,7,10,20,30'
So, the above is the PT query you could send to SQL Server from Access.
And given the above, we want to return records based on that list.
The T-SQL would thus become:
CREATE PROCEDURE GetHotels
    @IdList nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @MySQL nvarchar(max)
    SET @MySQL = 'select * from tblHotels where ID in (' + @IdList + ')'
    EXECUTE sp_executesql @MySQL
END
GO
Now, in Access, say you have that array of IDs. Your code will look like this:
Sub MyListQuery(MyList() As String)

    ' above assumes an array of IDs
    ' take the array - convert it to a string list
    Dim strMyList As String
    strMyList = "'" & Join(MyList, ",") & "'"

    Dim rst As DAO.Recordset
    With CurrentDb.QueryDefs("qryPassR")
        .SQL = "GetHotels " & strMyList
        Set rst = .OpenRecordset
    End With

    rst.MoveLast
    Debug.Print rst.RecordCount

End Sub
Unfortunately, creating T-SQL on the fly is a less than ideal approach. In most cases, because the table is not known at runtime, you have to specifically grant EXEC permissions to the user.
eg:
GRANT EXECUTE ON dbo.GetHotels TO USERTEST3
You will find that such users can execute and use most stored procedures, but in this case you have to add specific rights with the above GRANT, because the table is not known or resolved until runtime.
So, the above is a way to send a given array that you have, but from a general permissions point of view, and that of creating T-SQL on the fly, I can't recommend this approach unless you are stuck and have no other choice.
Edit
Here is a solution that works the same as above, but we don't have to create a SQL statement as a string.
CREATE PROCEDURE [dbo].[GetHotels2]
    @IdList nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    -- create a table from the passed list
    DECLARE @List table (ID int)
    WHILE CHARINDEX(',', @IdList) > 0
    BEGIN
        INSERT INTO @List (ID) VALUES (LEFT(@IdList, CHARINDEX(',', @IdList) - 1))
        SET @IdList = RIGHT(@IdList, LEN(@IdList) - CHARINDEX(',', @IdList))
    END
    INSERT INTO @List (ID) VALUES (@IdList)

    SELECT * FROM tblHotels WHERE ID IN (SELECT ID FROM @List)
END
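The call from Access stays the same as the earlier pass-through example, for instance:
exec GetHotels2 '1,2,3,4,5,6,7,10,20,30'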

You didn't show us what that table-valued parameter looks like - but assuming id_List contains a column called Id, then you need to join this TVP to your base table something like this:
ALTER PROCEDURE [dbo].[get_Job]
    @jobID VARCHAR,
    @JobIDs id_List READONLY
AS
BEGIN
    SELECT (list of columns)
    FROM Job j
    INNER JOIN @JobIDs l ON j.JobId = l.Id;
END;
Seems pretty easy to me - and not really all that difficult to handle! Agree?
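For completeness, a minimal sketch of what the id_List type and a call could look like - the column name Id and the sample values here are assumptions, since the actual type definition wasn't shown:
CREATE TYPE id_List AS TABLE (Id INT NOT NULL PRIMARY KEY);
GO

DECLARE @ids id_List;
INSERT INTO @ids (Id) VALUES (101), (102), (103);   -- hypothetical job IDs

EXEC [dbo].[get_Job] @jobID = 'X', @JobIDs = @ids;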
Also, check out Bad habits to kick : declaring VARCHAR without (length) - you should always provide a length for any varchar variables and parameters that you use. Otherwise, as in your case, that @jobID VARCHAR parameter will be exactly ONE character long - and this is typically not what you expect or want.
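A quick illustration of that pitfall, runnable on its own:
DECLARE @jobID VARCHAR;        -- no length given, so it defaults to 1 character
SET @jobID = 'JOB-12345';
SELECT @jobID;                 -- returns just 'J'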

Related

Store a database name in variable & then using it dynamically

I have a table in my database which holds the names of all the databases on my server.
The table looks like:
create Table #db_name_list(Did INT IDENTITY(1,1), DNAME NVARCHAR(100))
INSERT INTO #db_name_list
SELECT 'db_One ' UNION ALL
SELECT 'db_Two' UNION ALL
SELECT 'db_Three' UNION ALL
SELECT 'db_four' UNION ALL
SELECT 'db_five'
select * from #db_name_list
I have many stored procedures in my database which use multiple tables and join them.
At present I am using SQL code like:
Select Column from db_One..Table1
Left outer join db_two..Table2
on ....some Condition ....
REQUIREMENT
But I do not want to HARDCODE the database name.
I want to store the database name in a variable and use that.
Reason: I want to restore the same database under a different name and run those stored procedures. At present we can't do that, because I have used db_One..Table1
or db_two..Table2.
I want something like ...
/* SAMPLE SP */
CREATE PROCEDURE LOAD_DATA
AS
BEGIN
    DECLARE @dbname nvarchar(500), @dbname2 nvarchar(500)
    SET @dbname  = (SELECT DNAME FROM #db_name_list WHERE Did = 1)
    SET @dbname2 = (SELECT DNAME FROM #db_name_list WHERE Did = 2)
    PRINT @dbname
    SELECT * FROM @dbname..table1
    /* or */
    SELECT * FROM @dbname2.dbo.table1
END
i.e. using a variable instead of the database name.
But it throws the error
"Incorrect syntax near '.'."
P.S. This was posted by someone else on MSDN, but the answer there was not clear and I had the same kind of doubt, so please help.
You can't use a variable like this in a static SQL query. You have to use the variable in dynamic SQL instead, in order to build the query you want to execute, like:
DECLARE @sql nvarchar(500) = 'SELECT * FROM ' + @dbname + '.dbo.mytable'
EXEC(@sql);
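If the database name comes from a table or from user input, it is worth wrapping it in QUOTENAME and executing via sp_executesql; a sketch along those lines, reusing the #db_name_list table from the question:
DECLARE @dbname sysname = (SELECT DNAME FROM #db_name_list WHERE Did = 1);
-- QUOTENAME adds [brackets] and guards against a malicious or odd name
DECLARE @sql nvarchar(max) = N'SELECT * FROM ' + QUOTENAME(@dbname) + N'.dbo.mytable';
EXEC sp_executesql @sql;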
There seem to be a couple of options for you depending on your circumstances.
1. Simple - Generalise your procedures
Simply take out the database references in your stored procedure, as there is no need to have an explicit reference to the database if it is running against the database it is stored in. Your select queries will look like:
SELECT * from schema.table WHERE x = y
Rather than
SELECT * from database.schema.table WHERE x = y
Then just create the stored procedure in the new database and away you go. Simply connect to the new database and run the SP. This method would also allow you to promote the procedure to being a system stored procedure, which would mean they were automatically available in every database without having to run CREATE beforehand. For more details, see this article.
2. Moderate - Dynamic SQL
Change your stored procedure to take a database name as a parameter, such as this example:
CREATE PROCEDURE example (@DatabaseName VARCHAR(200))
AS
BEGIN
    DECLARE @SQL VARCHAR(MAX) = 'SELECT * FROM [' + @DatabaseName + '].schema.table WHERE x = y'
    EXEC (@SQL)
END
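Calling it would then look something like this (db_Two is just one of the names from the question's list):
EXEC example @DatabaseName = 'db_Two';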

Dynamic SQL not working. Regular SQL working [duplicate]

It looks like #temp tables created using dynamic SQL via the EXECUTE string method have a different scope and can't be referenced by "fixed" SQL in the same stored procedure.
However, I can reference a temp table created by a dynamic SQL statement in a subsequent dynamic SQL statement, but it seems that a stored procedure does not return a query result to the calling client unless the SQL is fixed.
A simple 2 table scenario:
I have 2 tables. Let's call them Orders and Items. Orders has a primary key of OrderId and Items has a primary key of ItemId. Items.OrderId is the foreign key identifying the parent Order. An Order can have 1 to n Items.
I want to be able to provide a very flexible "query builder" type interface to the user to allow the user to select which Items he wants to see. The filter criteria can be based on fields from the Items table and/or from the parent Orders table. If an Item meets the filter conditions, including any condition on the parent Order if one exists, the Item should be returned by the query as well as the parent Order.
Usually, I suppose, most people would construct a join between the Items table and the parent Orders table. I would like to perform 2 separate queries instead: one to return all of the qualifying Items and the other to return all of the distinct parent Orders. The reason is twofold, and you may or may not agree.
The first reason is that I need to query all of the columns in the parent Orders table, and if I did a single query joining the Orders table to the Items table, I would be repeating the Order information multiple times. Since there are typically a large number of Items per Order, I'd like to avoid this because it would result in much more data being transferred to a fat client. Instead, as mentioned, I would like to return the two tables individually in a dataset and use the two tables within it to populate custom Order and child Items client objects. (I don't know enough about LINQ or Entity Framework yet. I build my objects by hand.) The second reason I would like to return two tables instead of one is that I already have another procedure that returns all of the Items for a given OrderId along with the parent Order, and I would like to use the same 2-table approach so that I can reuse the client code that populates my custom Order and Item client objects from the 2 datatables returned.
What I was hoping to do was this:
Construct a dynamic SQL string on the client which joins the Orders table to the Items table and filters appropriately on each table, as specified by the custom filter created in the WinForms fat-client app. The SQL built on the client would have looked something like this:
TempSQL = "
INSERT INTO #ItemsToQuery
OrderId, ItemsId
FROM
Orders, Items
WHERE
Orders.OrderID = Items.OrderId AND
/* Some unpredictable Order filters go here */
AND
/* Some unpredictable Items filters go here */
"
Then, I would call a stored procedure,
CREATE PROCEDURE GetItemsAndOrders (@tempSql as text)
AS
Execute (@tempSql) -- to create the #ItemsToQuery table
SELECT * FROM Items WHERE Items.ItemId IN (SELECT ItemId FROM #ItemsToQuery)
SELECT * FROM Orders WHERE Orders.OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery)
The problem with this approach is that the #ItemsToQuery table, since it was created by dynamic SQL, is inaccessible from the following 2 static SQL statements, and if I change the static SQL to dynamic, no results are passed back to the fat client.
Three workarounds come to mind, but I'm looking for a better one:
1) The first SQL could be performed by executing the dynamically constructed SQL from the client. The results could then be passed as a table to a modified version of the above stored procedure. I am familiar with passing table data as XML. If I did this, the stored proc could then insert the data into a temporary table using static SQL which, because the table was not created by dynamic SQL, could then be queried without issue. (I could also investigate passing the new table type parameter instead of XML; a sketch of the XML variant is shown after this question.) However, I would like to avoid passing up potentially large lists to a stored procedure.
2) I could perform all the queries from the client.
The first would be something like this:
SELECT Items.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
SELECT Orders.* FROM Orders, Items WHERE Orders.OrderId = Items.OrderId AND (dynamic filter)
This still provides me with the ability to reuse my client-side object-population code because the Orders and Items continue to be returned in two different tables.
3) I have a feeling, too, that I might have some options using a table data type within my stored proc, but that is also new to me and I would appreciate a little bit of spoon-feeding on that one.
If you even scanned this far in what I wrote, I am surprised, but if so, I would appreciate any of your thoughts on how to accomplish this best.
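For what it's worth, a rough sketch of the XML idea from option 1 might look like the following; the parameter name, element names, and the #ItemsToQuery columns are only illustrative assumptions:
-- expects XML shaped like: <Items><Item><OrderId>1</OrderId><ItemId>2</ItemId></Item>...</Items>
CREATE PROCEDURE GetItemsAndOrdersFromXml (@itemXml xml)
AS
BEGIN
    CREATE TABLE #ItemsToQuery (OrderId int, ItemId int);

    -- shred the XML into the temp table using static SQL
    INSERT INTO #ItemsToQuery (OrderId, ItemId)
    SELECT T.c.value('(OrderId)[1]', 'int'),
           T.c.value('(ItemId)[1]', 'int')
    FROM @itemXml.nodes('/Items/Item') AS T(c);

    SELECT * FROM Items  WHERE ItemId  IN (SELECT ItemId FROM #ItemsToQuery);
    SELECT * FROM Orders WHERE OrderId IN (SELECT DISTINCT OrderId FROM #ItemsToQuery);
END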
You need to create your table first; then it will be available in the dynamic SQL.
This works:
CREATE TABLE #temp3 (id INT)
EXEC ('insert #temp3 values(1)')
SELECT *
FROM #temp3
This will not work:
EXEC (
'create table #temp2 (id int)
insert #temp2 values(1)'
)
SELECT *
FROM #temp2
In other words:
Create temp table
Execute proc
Select from temp table
Here is a complete example:
CREATE PROC prTest2 @var VARCHAR(100)
AS
EXEC (@var)
GO
CREATE TABLE #temp (id INT)
EXEC prTest2 'insert #temp values(1)'
SELECT *
FROM #temp
1st Method - Enclose multiple statements in the same Dynamic SQL Call:
DECLARE @DynamicQuery NVARCHAR(MAX)
SET @DynamicQuery = 'Select * into #temp from (select * from tablename) alias
select * from #temp
drop table #temp'
EXEC sp_executesql @DynamicQuery
2nd Method - Use Global Temp Table:
(Careful - you need to take extra care with a global temp table.)
IF OBJECT_ID('tempdb..##temp2') IS NULL
BEGIN
EXEC (
'create table ##temp2 (id int)
insert ##temp2 values(1)'
)
SELECT *
FROM ##temp2
END
Don't forget to delete the ##temp2 object manually once you're done with it:
IF (OBJECT_ID('tempdb..##temp2') IS NOT NULL)
BEGIN
DROP Table ##temp2
END
Note: Don't use method 2 if you don't know the full structure of the database.
I had the same issue that @Muflix mentioned. When you don't know the columns being returned, or they are being generated dynamically, what I've done is create a global table with a unique id and then delete it when I'm done with it. That looks something like what's shown below:
DECLARE @DynamicSQL NVARCHAR(MAX)
DECLARE @DynamicTable VARCHAR(255) = 'DynamicTempTable_' + CONVERT(VARCHAR(36), NEWID())
DECLARE @DynamicColumns NVARCHAR(MAX)

--Get "@DynamicColumns", example: SET @DynamicColumns = '[Column1], [Column2]'

SET @DynamicSQL = 'SELECT ' + @DynamicColumns + ' INTO [##' + @DynamicTable + ']' +
    ' FROM [dbo].[TableXYZ]'
EXEC sp_executesql @DynamicSQL

SET @DynamicSQL = 'IF OBJECT_ID(''tempdb..##' + @DynamicTable + ''' , ''U'') IS NOT NULL ' +
    ' BEGIN DROP TABLE [##' + @DynamicTable + '] END'
EXEC sp_executesql @DynamicSQL
Certainly not the best solution, but this seems to work for me.
I would strongly suggest you have a read through http://www.sommarskog.se/arrays-in-sql-2005.html
Personally I like the approach of passing a comma-delimited text list, then parsing it with a text-to-table function and joining to it. The temp table approach can work if you create the table first in the connection, but it feels a bit messier.
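On SQL Server 2016 and later you can lean on the built-in STRING_SPLIT instead of writing your own split function; a sketch of that, reusing the tblHotels example from the first answer:
CREATE PROCEDURE GetHotels3
    @IdList nvarchar(max)
AS
BEGIN
    SET NOCOUNT ON;
    SELECT h.*
    FROM tblHotels h
    WHERE h.ID IN (SELECT TRY_CAST(value AS int)   -- STRING_SPLIT returns a column named "value"
                   FROM STRING_SPLIT(@IdList, ','));
END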
Result sets from dynamic SQL are returned to the client. I have done this quite a lot.
You're right about issues with sharing data through temp tables and variables and things like that between the static SQL and the dynamic SQL it generates.
I think that in trying to get your temp table working, you have probably got some things confused, because you can definitely get data back from an SP which executes dynamic SQL:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + ''''
    EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO
Also:
USE SandBox
GO
CREATE PROCEDURE usp_DynTest(@table_type AS VARCHAR(255))
AS
BEGIN
    DECLARE @sql AS VARCHAR(MAX) = 'SELECT * INTO #temp FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE = ''' + @table_type + '''; SELECT * FROM #temp;'
    EXEC (@sql)
END
END
GO
EXEC usp_DynTest 'BASE TABLE'
GO
EXEC usp_DynTest 'VIEW'
GO
DROP PROCEDURE usp_DynTest
GO

IN-clause with optional parameter SQL

I have a stored procedure that returns a set of data based on 2 input parameters. One of the parameters is optional, so I am using:
WHERE
    (tbl_Process.ProjectID = @ProjectID)
AND
    (tbl_AnalysisLookup.AnalysisCodeID = 7)
AND
    (tbl_ProcessSubStep.ProcessID = ISNULL(@ProcessID, tbl_ProcessSubStep.ProcessID))
The @ProcessID parameter is optional, so the user may or may not provide it.
Now I need to change my stored procedure to accommodate multiple ProcessIDs, i.e. the user can now select a list of multiple ProcessIDs, a single ProcessID, or no ProcessID, and the stored proc should handle all these scenarios. What is the best way to achieve this without using dynamic queries unless absolutely required?
In a nutshell, I wanted my stored proc to handle optional parameters with multiple values (a WHERE ... IN clause). The solution and the link to the webpage I got it from are provided below. It's a very good article and will help you choose the right solution based on your requirements.
I have finally figured out how to achieve this. There are a couple of ways to do it; what I am using now is a function that splits a string of ProcessIDs on a delimiter and inserts them into a table, and then that table is used in my stored proc. Here is the code and the link to the webpage.
http://www.codeproject.com/Articles/58780/Techniques-for-In-Clause-and-SQL-Server
CREATE FUNCTION [dbo].[ufnDelimitedBigIntToTable]
(
    @List varchar(max), @Delimiter varchar(10)
)
RETURNS @Ids TABLE
    (Id bigint) AS
BEGIN
    DECLARE @list1 VARCHAR(MAX), @Pos INT, @rList VARCHAR(MAX)
    SET @List = LTRIM(RTRIM(@List)) + @Delimiter
    SET @Pos = CHARINDEX(@Delimiter, @List, 1)
    WHILE @Pos > 0
    BEGIN
        SET @list1 = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))
        IF @list1 <> ''
            INSERT INTO @Ids(Id) VALUES (CAST(@list1 AS bigint))
        SET @List = SUBSTRING(@List, @Pos + 1, LEN(@List))
        SET @Pos = CHARINDEX(@Delimiter, @List, 1)
    END
    RETURN
END
Once made, the table-function can be used in a query:
CREATE PROCEDURE [dbo].[GetUsingDelimitedFunctionTable]
    @Ids varchar(max)
AS
BEGIN
    SET NOCOUNT ON
    SELECT s.Id, s.SomeString
    FROM SomeString s (NOLOCK)
    WHERE EXISTS ( SELECT *
                   FROM ufnDelimitedBigIntToTable(@Ids, ',') Ids
                   WHERE s.Id = Ids.Id )
END
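A call then looks like this (the ID values are arbitrary):
EXEC [dbo].[GetUsingDelimitedFunctionTable] @Ids = '1,5,42';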
The Link also provides more ways to achieve this.
Not the best, but one way is to convert both sides to varchar and use the LIKE operator to compare them. It doesn't need any huge modifications; just change the datatype of your parameter to varchar. Something like the code below:
'%[,]' + Convert(varchar(10), tbl_ProcessSubStep.ProcessID) + '[,]%' Like @ProcessIDs
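Spelled out in context, that might look like the sketch below; note that the delimited list itself is the string being searched, the pattern is built from the column value, and commas are added at both ends of the list so boundary values match:
WHERE
    (tbl_Process.ProjectID = @ProjectID)
AND (tbl_AnalysisLookup.AnalysisCodeID = 7)
AND (   @ProcessIDs IS NULL
     OR ',' + @ProcessIDs + ',' LIKE '%,' + CONVERT(varchar(10), tbl_ProcessSubStep.ProcessID) + ',%')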
Hope it helps.
You didn't specify your database product in your question, but I'm going to guess from the @Parameter naming style that you're using SQL Server.
Except for the unusual requirement of interpreting empty input to mean 'all', this is a restatement of the problem of arrays in SQL, explored thoroughly by Erland Sommarskog. Read all his articles on the subject for a good analysis of all the techniques you can use.
Here I'll explain how to use a table-valued parameter to solve your problem.
Execute the following scripts all together to set up the test environment in an idempotent way.
Creating a sample solution
First create a new empty test database StackOverFlow13556628:
USE master;
GO
IF DB_ID('StackOverFlow13556628') IS NOT NULL
BEGIN
ALTER DATABASE StackOverFlow13556628 SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE StackOverFlow13556628;
END;
GO
CREATE DATABASE StackOverFlow13556628;
GO
USE StackOverFlow13556628;
GO
Next, create a user-defined table type PrincipalList with one column, principal_id. This type contains the input values with which to query the system table sys.database_principals.
CREATE TYPE PrincipalList AS TABLE (
principal_id INT NOT NULL PRIMARY KEY
);
GO
After that, create the stored procedure GetPrincipals which takes a PrincipalList table-valued parameter as input, and returns a result set from sys.database_principals.
CREATE PROCEDURE GetPrincipals (
    @principal_ids PrincipalList READONLY
)
AS
BEGIN
    IF EXISTS (SELECT * FROM @principal_ids)
    BEGIN
        SELECT *
        FROM sys.database_principals
        WHERE principal_id IN (
            SELECT principal_id
            FROM @principal_ids
        );
    END
    ELSE
    BEGIN
        SELECT *
        FROM sys.database_principals;
    END;
END;
GO
If the table-valued parameter contains rows, then the procedure returns all the rows in sys.database_principals that have a matching principal_id value. If the table-valued parameter is empty, it returns all the rows.
Testing the solution
You can query multiple principals like this:
DECLARE @principals PrincipalList;
INSERT INTO @principals (principal_id) VALUES (1);
INSERT INTO @principals (principal_id) VALUES (2);
INSERT INTO @principals (principal_id) VALUES (3);
EXECUTE GetPrincipals
    @principal_ids = @principals;
GO
Result:
principal_id name
1 dbo
2 guest
3 INFORMATION_SCHEMA
You can query a single principal like this:
DECLARE @principals PrincipalList;
INSERT INTO @principals (principal_id) VALUES (1);
EXECUTE GetPrincipals
    @principal_ids = @principals;
GO
Result:
principal_id name
1 dbo
You can query all principals like this:
EXECUTE GetPrincipals;
Result:
principal_id name
0 public
1 dbo
2 guest
3 INFORMATION_SCHEMA
4 sys
16384 db_owner
16385 db_accessadmin
16386 db_securityadmin
16387 db_ddladmin
16389 db_backupoperator
16390 db_datareader
16391 db_datawriter
16392 db_denydatareader
16393 db_denydatawriter
Remarks
This solution is inefficient because you always have to read from the table-valued parameter twice. In practice, unless your table-valued parameter has millions of rows, it will probably not be the major bottleneck.
Using an empty table-valued parameter in this way feels unintuitive. A more obvious design might simply be to have two stored procedures - one that returns all the rows, and one that returns only rows with matching ids. It would be up to the calling application to choose which one to call.

Print Dynamic Parameter Values

I've used dynamic SQL for many tasks and continuously run into the same problem: Printing values of variables used inside the Dynamic T-SQL statement.
EG:
Declare @SQL nvarchar(max), @Params nvarchar(max), @DebugMode bit, @Foobar int
select @DebugMode = 1, @Foobar = 364556423
set @SQL = 'Select @Foobar'
set @Params = N'@Foobar int'
if @DebugMode = 1 print @SQL
exec sp_executesql @SQL, @Params
    , @Foobar = @Foobar
The print result of the above code is simply "Select @Foobar". Is there any way to dynamically print the values and variable names of the SQL being executed? Or, when doing the print, to replace the parameters with their actual values so the SQL is re-runnable?
I have played with creating a function or two to accomplish something similar, but ran into data type conversions, pattern-matching truncation issues, and non-dynamic solutions. I'm curious how other developers solve this issue without printing each and every variable manually.
I don't believe the evaluated statement is available, meaning your example query 'Select @Foobar' is never persisted anywhere as 'Select 364556423'.
Even in a profiler trace you would see the statement hit the cache as '(@Foobar int)select @foobar'.
This makes sense, since a big benefit of using sp_executesql is that it is able to cache the statement in a reliable form without the variables evaluated; otherwise, if it replaced the variables and executed that statement, we would just see execution plan bloat.
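You can see that cached, parameterized form yourself by querying the plan cache (a sketch; it needs VIEW SERVER STATE permission):
SELECT st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%select @Foobar%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%';   -- exclude this query itself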
Updated: here's a step in the right direction.
All of this could be cleaned up and wrapped in a nice function, with inputs (@Statement, @ParamDef, @ParamVal), that would return the "prepared" statement. I'll leave some of that as an exercise for you, but please post back when you improve it!
It uses the split function from here: link.
set nocount on;

declare @Statement varchar(100),    -- the raw sql statement
        @ParamDef varchar(100),     -- the raw param definition
        @ParamVal xml               -- the ParamName -to- ParamValue mapping as xml

-- the internal params:
declare @YakId int,
        @Date datetime

select  @YakId = 99,
        @Date = getdate();

select  @Statement = 'Select * from dbo.Yak where YakId = @YakId and CreatedOn > @Date;',
        @ParamDef = '@YakId int, @Date datetime';

-- you need to construct this xml manually... maybe use a table var to clean this up
set @ParamVal = ( select *
                  from ( select '@YakId', cast(@YakId as varchar(max)) union all
                         select '@Date', cast(@Date as varchar(max))
                       ) d (Name, Val)
                  for xml path('Parameter'), root('root')
                )

-- do the work
declare @pStage table (pName varchar(100), pType varchar(25), pVal varchar(100));

;with
c_p (p)
as  ( select replace(ltrim(rtrim(s)), ' ', '.')
      from dbo.Split(',', @ParamDef) d
    ),
c_s (pName, pType)
as  ( select parsename(p, 2), parsename(p, 1)
      from c_p
    ),
c_v (pName, pVal)
as  ( select p.n.value('Name[1]', 'varchar(100)'),
             p.n.value('Val[1]', 'varchar(100)')
      from @ParamVal.nodes('root/Parameter') p(n)
    )
insert into @pStage
select  s.pName, s.pType,
        case when s.pType = 'datetime' then quotename(v.pVal, '''') else v.pVal end -- expand this case to deal with other types
from c_s s
join c_v v on
    s.pName = v.pName

-- replace pName with pValue in statement
select @Statement = replace(@Statement, pName, isnull(pVal, 'null'))
from @pStage
where charindex(pName, @Statement) > 0;

print @Statement;
On the topic of how most people do it, I will only speak to what I do:
Create a test script that will run the procedure using a wide range of valid and invalid input. If the parameter is an integer, I will send it '4' (instead of 4), but I'll only try 1 oddball string value like 'agd'.
Run the values against a data set of representative size and data value distribution for what I'm doing. Use your favorite data generation tool (there are several good ones on the market) to speed this up.
I'm generally debugging like this on a more ad hoc basis, so collecting the results from the SSMS results window is as far as I need to take it.
The best way I can think of is to capture the query as it comes across the wire using a SQL Trace. If you place something unique in your query string (as a comment), it is very easy to apply a filter for it in the trace so that you don't capture more than you need.
However, it isn't all peaches & cream.
This is only suitable for a Dev environment, maybe QA, depending on how rigid your shop is.
If the query takes a long time to run, you can mitigate that by adding "TOP 1", "WHERE 1=2", or a similar limiting clause to the query string if @DebugMode = 1. Otherwise, you could end up waiting a while for it to finish each time.
For long queries where you can't add something the query string only for debug mode, you could capture the command text in a StmtStarted event, then cancel the query as soon as you have the command.
If the query is an INSERT/UPDATE/DELETE, you will need to force a rollback if @DebugMode = 1 and you don't want the change to occur. In the event you're not currently using an explicit transaction, doing that would be extra overhead.
Should you go this route, there is some automation you can achieve to make life easier. You can create a template for the trace creation and start/stop actions. You can log the results to a file or table and process the command text from there programmatically.

Is my stored procedure executing out of order?

Brief history:
I'm writing a stored procedure to support a legacy reporting system (using SQL Server Reporting Services 2000) on a legacy web application.
In keeping with the original implementation style, each report has a dedicated stored procedure in the database that performs all the querying necessary to return a "final" dataset that can be rendered simply by the report server.
Due to the business requirements of this report, the returned dataset has an unknown number of columns (it depends on the user who executes the report, but may have 4-30 columns).
Throughout the stored procedure, I keep a column UserID to track the user's ID to perform additional querying. At the end, however, I do something like this:
UPDATE #result
SET Name = ppl.LastName + ', ' + ppl.FirstName
FROM #result r
LEFT JOIN Users u ON u.id = r.userID
LEFT JOIN People ppl ON ppl.id = u.PersonID
ALTER TABLE #result
DROP COLUMN [UserID]
SELECT * FROM #result r ORDER BY Name
Effectively I set the Name varchar column (that was previously left NULL while I was performing some pivot logic) to the desired name format in plain text.
When finished, I want to drop the UserID column as the report user shouldn't see this.
Finally, the data set returned has one column for the username, and an arbitrary number of INT columns with performance totals. For this reason, I can't simply exclude the UserID column since SQL doesn't support "SELECT * EXCEPT [UserID]" or the like.
With this known (any style pointers are appreciated but not central to this problem), here's the problem:
When I execute this stored procedure, I get an execution error:
Invalid column name 'userID'.
However, if I comment out my DROP COLUMN statement and retain the UserID, the stored procedure performs correctly.
What's going on? It certainly looks like the statements are executing out of order and it's dropping the column before I can use it to set the name strings!
[Edit 1]
I defined UserID previously (the whole stored procedure is about 200 lines of mostly irrelevant logic), so I'll paste snippets:
CREATE TABLE #result ([Name] NVARCHAR(256), [UserID] INT);
Case sensitivity isn't the problem but did point me to the right line - there was one place in which I had userID instead of UserID. Now that I fixed the case, the error message complains about UserID.
My "broken" stored procedure also works properly in SQL Server 2008 - this is either a 2000 bug or I'm severely misunderstanding how SQL Server used to work.
Thanks everyone for chiming in!
For anyone searching this in the future, I've added an extremely crude workaround to be 2000-compatible until we update our production version:
DECLARE @workaroundTableName NVARCHAR(256), @workaroundQuery NVARCHAR(2000)
SET @workaroundQuery = 'SELECT [Name]';

DECLARE cur_workaround CURSOR FOR
    SELECT COLUMN_NAME FROM [tempdb].INFORMATION_SCHEMA.Columns WHERE TABLE_NAME LIKE '#result%' AND COLUMN_NAME <> 'UserID'
OPEN cur_workaround;
FETCH NEXT FROM cur_workaround INTO @workaroundTableName
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @workaroundQuery = @workaroundQuery + ',[' + @workaroundTableName + ']'
    FETCH NEXT FROM cur_workaround INTO @workaroundTableName
END
CLOSE cur_workaround;
DEALLOCATE cur_workaround;

SET @workaroundQuery = @workaroundQuery + ' FROM #result ORDER BY Name ASC'
EXEC(@workaroundQuery);
Thanks everyone!
A much easier solution would be to not drop the column, but don't return it in the final select.
There are all sorts of reasons why you shouldn't be returning select * from your procedure anyway.
EDIT: I see now that you have to do it this way because of an unknown number of columns.
Based on the error message, is the database case sensitive, and so there's a difference between userID and UserID?
This works for me:
CREATE TABLE #temp_t
(
myInt int,
myUser varchar(100)
)
INSERT INTO #temp_t(myInt, myUser) VALUES(1, 'Jon1')
INSERT INTO #temp_t(myInt, myUser) VALUES(2, 'Jon2')
INSERT INTO #temp_t(myInt, myUser) VALUES(3, 'Jon3')
INSERT INTO #temp_t(myInt, myUser) VALUES(4, 'Jon4')
ALTER TABLE #temp_t
DROP Column myUser
SELECT * FROM #temp_t
DROP TABLE #temp_t
It says the column is invalid for you. Did you check the spelling and ensure that the column even exists in your temp table?
You might try wrapping everything preceding the DROP COLUMN in a BEGIN...COMMIT transaction.
At compile time, SQL Server is probably expanding the * into the full list of columns. Thus, at run time, SQL Server executes "SELECT UserID, Name, LastName, FirstName, ..." instead of "SELECT *". Dynamically assembling the final SELECT into a string and then EXECing it at the end of the stored procedure may be the way to go.