If you’re writing dynamic SQL, always add a Debug mode. It doesn’t have to be anything fancy at first, just something like:
IF @Debug = 1 BEGIN PRINT @MySQLInjectionGift END;
How can we use the above snippet in a stored procedure?
It can be wired in via a parameter, which is then used to inject the debug code.
DROP PROC IF EXISTS dbo.usp_myproc
GO
CREATE PROC dbo.usp_myproc (@Debug bit = 0)
AS
BEGIN
DECLARE @MySQLInjectionGift varchar(max) = 'a = ''HI THERE'','
DECLARE @SQL varchar(max) =
'
SELECT TOP 3 ' + IIF(@Debug = 1, @MySQLInjectionGift, '') + ' * FROM sys.tables
'
EXEC (@SQL)
END
GO
EXEC usp_myproc
GO
EXEC usp_myproc @Debug = 1
You can do anything with it: build WHERE clauses on the fly, or add columns with calculations that are not part of the actual query but are helpful in debugging. Typical injections include arbitrary SELECT statements that show progress from one statement to the next; when the stored procedure is large, the execution path may not be clear, especially when there are lots of conditional statements. The debug mode can also run start-up code that prepares the data, or execute a unit test at the end of the query.
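For instance, here is a minimal sketch of a debug-aware procedure that builds a WHERE clause on the fly and prints the finished statement before running it. The procedure and parameter names are made up for illustration:
DROP PROC IF EXISTS dbo.usp_find_tables
GO
CREATE PROC dbo.usp_find_tables (@NamePattern varchar(100) = NULL, @Debug bit = 0)
AS
BEGIN
    DECLARE @SQL varchar(max) = 'SELECT name, object_id FROM sys.tables'
    -- Build the WHERE clause on the fly, only when a filter was supplied
    IF @NamePattern IS NOT NULL
        SET @SQL = @SQL + ' WHERE name LIKE ''' + REPLACE(@NamePattern, '''', '''''') + ''''
    -- Debug mode: print the finished statement instead of guessing what ran
    IF @Debug = 1 PRINT @SQL
    EXEC (@SQL)
END
GO
EXEC dbo.usp_find_tables @NamePattern = 'usp%', @Debug = 1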
Related
I'm currently trying to write a default procedure template for reporting from a T-SQL Datawarehouse.
The idea is to wrap each query in a procedure, so that permissions and logging can be managed easily.
Since this will be done by the DBAs, I would like to have this solution work by only pasting some standard code before and after the main query. I'd prefer if the DBA didn't have to modify any part of the logging-code.
I've solved this for most parts, however, I need to log which parameters the user has submitted to the procedure.
The obvious solution would be to hardcode the parameters into the logging. However, the procedures can have a varying number of parameters, so I'd like a catch-all solution.
My understanding is that there is no easy way of iterating through all the parameters.
I can, however, access the parameter names from the sys.parameters catalog view.
The closest I've come to a solution is this minimal example:
CREATE TABLE #loggingTable (
[ProcedureID] INT
, [paramName] NVARCHAR(128)
, [paramValue] NVARCHAR(128)
)
;
go
CREATE PROCEDURE dbo.[ThisIsMyTestProc] (
@param1 TINYINT = NULL
, @Param2 NVARCHAR(64) = NULL
)
AS
BEGIN
-- Do some logging here
DECLARE @query NVARCHAR(128)
DECLARE @paramName NVARCHAR(128)
DECLARE @paramValue NVARCHAR(128)
DECLARE db_cursor CURSOR FOR
SELECT [name] FROM [sys].[parameters] WHERE object_id = @@PROCID
OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @paramName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @query = 'SELECT @paramValue = cast(' + @paramName + ' as nvarchar(128))';
SELECT @query;
-- Following line doesn't work due to variable scope, and is prone to SQL injection.
--EXEC sp_executesql @query; -- Uncomment for error
INSERT INTO #loggingTable(ProcedureID, paramName, paramValue)
VALUES(@@PROCID, @paramName, @paramValue)
FETCH NEXT FROM db_cursor INTO @paramName
END
CLOSE db_cursor
DEALLOCATE db_cursor
-- Run the main query here (Dummy statement)
SELECT @param1 AS [column1], @Param2 AS [column2]
-- Do more logging after statement has run
END
GO
-- test
EXEC dbo.[ThisIsMyTestProc] 1, 'val 2';
select * from #loggingTable;
-- Cleanup
DROP PROCEDURE dbo.[ThisIsMyTestProc];
DROP table #loggingTable;
However, this has two major drawbacks:
It doesn't work, due to variable scoping
It is prone to SQL injection, which is unacceptable
Is there any way to solve this issue?
The values of the parameters are not available in a generic way. You can either create a code generator which uses sys.parameters to produce a chunk of code you'd have to copy into each of your SPs, or you can read this or this about tracing and XEvents. The SQL Server Profiler works this way to show you statements together with the parameter values...
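If you do go the XEvents route, a minimal sketch of a session that captures completed procedure calls (the parameter values appear inside the statement text) could look like this; the session name, target file name, and database name are placeholders:
CREATE EVENT SESSION TraceProcCalls ON SERVER
ADD EVENT sqlserver.rpc_completed(
    ACTION (sqlserver.username)
    WHERE (sqlserver.database_name = N'MyDatabase'))
ADD TARGET package0.event_file(SET filename = N'TraceProcCalls');
GO
ALTER EVENT SESSION TraceProcCalls ON SERVER STATE = START;
-- The statement field of rpc_completed holds the full call, e.g.
-- EXEC dbo.ThisIsMyTestProc @param1 = 1, @Param2 = N'val 2'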
If you don't want to get into tracing or XEvents, you might try something along these lines:
--Create a dummy proc
CREATE PROCEDURE dbo.[ThisIsMyTestProc] (
@param1 TINYINT = NULL
, @Param2 NVARCHAR(64) = NULL
)
AS
BEGIN
SELECT @@PROCID;
END
GO
--call it to see the value of @@PROCID
EXEC dbo.ThisIsMyTestProc; --See the proc-id
GO
--Now this is the magic part. It will create a command, which you can copy and paste into your SP:
SELECT CONCAT('INSERT INTO YourLoggingTable(LogType,ObjectName,ObjectId,Parameters) SELECT ''ProcedureCall'', ''',o.[name],''',',o.object_id,','
,'(SELECT'
,STUFF((
SELECT CONCAT(',''',p.[name],''' AS [parameter/@name],',p.[name],' AS [parameter/@value],''''')
FROM sys.parameters p
WHERE p.object_id=o.object_id
FOR XML PATH('')
),1,1,'')
,' FOR XML PATH(''''),ROOT(''parameters''),TYPE)'
)
FROM [sys].[objects] o
WHERE o.object_id = 525244926; --<-- Use the proc-id here
--Now we can copy the string into our procedure
--I out-commented the INSERT part, the SELECT is enough to show the effect
ALTER PROCEDURE dbo.[ThisIsMyTestProc] (
@param1 TINYINT = NULL
, @Param2 NVARCHAR(64) = NULL
)
AS
BEGIN
--The generated code comes in one single line
--INSERT INTO YourLoggingTable(LogType,ObjectName,ObjectId,Parameters)
SELECT 'ProcedureCall'
,'ThisIsMyTestProc'
,525244926
,(SELECT '@param1' AS [parameter/@name],@param1 AS [parameter/@value],''
,'@Param2' AS [parameter/@name],@Param2 AS [parameter/@value],''
FOR XML PATH(''),ROOT('parameters'),TYPE)
END
GO
Hint: We need the empty element (,'') at the end of each line to allow multiple elements with the same name.
--Now we can call the SP with some param values
EXEC dbo.ThisIsMyTestProc 1,'hello';
GO
As a result, your Log-Table will get an entry like this
ProcedureCall ThisIsMyTestProc 525244926 <parameters>
<parameter name="@param1" value="1" />
<parameter name="@Param2" value="hello" />
</parameters>
Just add typical logging data like UserID, DateTime, whatever you need...
Scope is the killer issue for this approach. I don't think there's a way to reference the values of parameters by anything but their variable names. If there were a way to retrieve variable values from a collection or by declared ordinal position, it could work on the fly.
I understand wanting to keep the overhead for the DBAs low and to eliminate opportunities for error, but I think the best solution is to generate the required code and supply it to the DBAs, or give them a tool that generates the needed blocks of code. That's about as lightweight as we can make it for the DBA, and it has the added benefit of reducing processing load in the procedure by turning the logging into a static statement with some conditional validity checks and concatenation work. Cursors and loops should be avoided as much as possible.
Write a SQL script that generates your pre- and post-query blocks. Generate them in bulk, with a comment at the top of each set of blocks naming the stored procedure, and hand the output to the DBAs to copy/paste into the respective procs. Alternatively, give them the script and let them run it as needed to generate the pre- and post-blocks themselves.
I would include some checks in the generated script to help make sure it still works at execution time. This will detect mismatches in the generated code caused by subsequent modifications to the procedure itself. We could go the extra mile and include the names of the parameters when the code is generated, then verify them against sys.parameters to make sure the parameter names hard-coded into the generated code haven't changed since generation.
-- Log execution details pre-execution
IF object_name(@@PROCID) = 'ThisIsMyTestProc' AND (SELECT COUNT(*) FROM [sys].[parameters] WHERE object_id = @@PROCID) = 2
BEGIN
-- EXEC arguments must be constants or variables, so capture the expressions first
DECLARE @Params varchar(max) = CONCAT('parm1: ', @param1, ' parm2: ', @Param2)
DECLARE @Time datetime = getdate(), @User nvarchar(50) = system_user
EXEC LogProcPreExecution @Parameters = @Params, @ProcName = 'ThisIsMyTestProc', @ExecutionTime = @Time, @ExecutionUser = @User
END
ELSE
BEGIN
--Do error logging for proc name and parameter mismatch
END
--The log procedure would look like this
CREATE PROCEDURE LogProcPreExecution
@Parameters varchar(max),
@ProcName nvarchar(128),
@ExecutionTime datetime,
@ExecutionUser nvarchar(50)
AS
BEGIN
--Do the logging
END
We have a process that updates certain tables based on a parameter passed in, specifically a certain state. I know organizationally this problem would be eliminated by using a single table for this data, but that is not an option -- this isn't my database.
To update these tables, we run a stored procedure. The only issue is that there was a stored procedure for each state, and this made code updates horrible. In order to minimize the amount of code needing to be maintained, we wanted to move towards a single stored procedure that takes in a state parameter and updates the correct tables. We wanted this without 50 IF statements, so the only way I could think to do it was to save the SQL code as text and then execute the string, i.e.:
SET @SSQL = 'UPDATE TBL_' + @STATE + ' SET BLAH = FOO'
EXEC (@SSQL);
I was wondering if there was a way to do this without using strings to update the correct tables based on that parameter. These stored procedures are thousands of lines long.
Thanks all!
Instead of saving the entire script as SQL text and executing it, just update the required table with code like the line below wherever you need it, and let the rest of the procedure continue as it is:
EXEC('UPDATE TBL_' + @STATE + ' SET BLAH = FOO')
You could, indeed, use dynamic SQL (EXEC on a string), but with long, complex stored procedures that can be horrible.
When faced with a similar problem many years ago, we created the stored procedures by running a sort of "mail-merge". We'd write the procedure to work against a single table, then replace the table names with variables and used a PHP script to output a stored procedure for each table by storing the table names in a CSV file.
You could replicate that in any scripting language of your choice - it took about a day to get this to work. It had the added benefit of allowing us to easily store the stored proc templates in source code control.
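For illustration, a rough sketch of the same mail-merge idea done directly in T-SQL instead of PHP. It assumes the per-state tables all match the TBL_% pattern and that CREATE OR ALTER is available (SQL Server 2016 SP1 or later); all names are placeholders:
DECLARE @template nvarchar(max) = N'
CREATE OR ALTER PROCEDURE dbo.usp_Update_{TABLE} AS
BEGIN
    UPDATE {TABLE} SET BLAH = FOO;
END';
DECLARE @name sysname, @sql nvarchar(max);
DECLARE tbl_cursor CURSOR FOR
    SELECT name FROM sys.tables WHERE name LIKE N'TBL\_%' ESCAPE N'\';
OPEN tbl_cursor;
FETCH NEXT FROM tbl_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- "Mail-merge": substitute the table name into the template and deploy
    SET @sql = REPLACE(@template, N'{TABLE}', @name);
    EXEC sp_executesql @sql;
    FETCH NEXT FROM tbl_cursor INTO @name;
END;
CLOSE tbl_cursor;
DEALLOCATE tbl_cursor;
The generated procedure text can just as easily be printed and checked into source control instead of being deployed directly.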
You can safely use sp_executesql, which is rather more appropriate than a plain EXEC, and it supports both input and output parameters:
DECLARE @sql nvarchar(4000),
@tablename nvarchar(4000) = 'YOUR_TABLE_NAME',
@params nvarchar(4000),
@count int
SELECT @sql =
N' UPDATE ' + @tablename +
N' SET Bar = @Foo;' +
N' SELECT @count = @@ROWCOUNT'
SELECT @params =
N'@Foo int, ' +
N'@count int OUTPUT'
EXEC sp_executesql @sql, @params, 2, @count OUTPUT
SELECT @count [Row(s) updated]
I encourage you to read the related part of the article mentioned here.
According to this, running sp_recompile forces the object to be recompiled the next time it is run.
I need it to be recompiled the moment I run the sp_recompile command, mainly to check for syntax errors and the existence of objects on which the stored procedure depends.
On SQL Server 2008 there's the sys.sp_refreshsqlmodule module...
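Usage is a one-liner; the module name below is a placeholder. Note it re-parses the module's definition immediately, though deferred name resolution still applies to tables referenced inside procedures:
EXEC sys.sp_refreshsqlmodule @name = N'dbo.usp_myproc';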
Probably the simplest way to do this is to re-deploy the stored procedure, which would (as far as I'm aware) remove the need to recompile the procedure.
Something along these lines:
DECLARE @ProcedureName NVARCHAR(128)
SET @ProcedureName = 'SampleProcedure'
CREATE TABLE #ProcedureContent (Text NVARCHAR(MAX))
INSERT INTO #ProcedureContent
EXEC sp_helptext @ProcedureName
DECLARE @ProcedureText NVARCHAR(MAX)
SET @ProcedureText = ''
SELECT @ProcedureText = @ProcedureText + [Text] FROM #ProcedureContent
EXEC ('DROP PROCEDURE ' + @ProcedureName);
EXEC (@ProcedureText)
DROP TABLE #ProcedureContent
In SQL Server 2005, is there a concept of a one-time-use, or local function declared inside of a SQL script or Stored Procedure? I'd like to abstract away some complexity in a script I'm writing, but it would require being able to declare a function.
Just curious.
You can create temp stored procedures like:
create procedure #mytemp as
begin
select getdate() as [now] into #mytemptable;
end
in a SQL script, but not functions. You could have the proc store its result in a temp table, though, and then use that information later in the script ..
You can call CREATE Function near the beginning of your script and DROP Function near the end.
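A minimal sketch of that pattern, with a made-up function:
CREATE FUNCTION dbo.fn_AddOne (@x int)
RETURNS int AS
BEGIN
    RETURN @x + 1
END
GO
-- ...use it throughout the script...
SELECT dbo.fn_AddOne(41);
GO
DROP FUNCTION dbo.fn_AddOne;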
Common Table Expressions let you define what are essentially views that last only within the scope of your select, insert, update and delete statements. Depending on what you need to do they can be terribly useful.
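For instance, a sketch with made-up table names, where the CTE acts as a one-off view for the single statement that follows it:
WITH RecentOrders AS (
    SELECT CustomerId, COUNT(*) AS OrderCount
    FROM dbo.Orders
    WHERE OrderDate >= DATEADD(month, -1, GETDATE())
    GROUP BY CustomerId
)
SELECT c.Name, r.OrderCount
FROM dbo.Customers AS c
JOIN RecentOrders AS r ON r.CustomerId = c.CustomerId;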
I know I might get criticized for suggesting dynamic SQL, but sometimes it's a good solution. Just make sure you understand the security implications before you consider this.
DECLARE @add_a_b_func nvarchar(4000) = N'SELECT @c = @a + @b;';
DECLARE @add_a_b_parm nvarchar(500) = N'@a int, @b int, @c int OUTPUT';
DECLARE @result int;
EXEC sp_executesql @add_a_b_func, @add_a_b_parm, 2, 3, @c = @result OUTPUT;
PRINT CONVERT(varchar, @result); -- prints '5'
The below is what I have used in the past to fill the need for a scalar UDF in MS SQL:
IF OBJECT_ID('tempdb..##fn_Divide') IS NOT NULL DROP PROCEDURE ##fn_Divide
GO
CREATE PROCEDURE ##fn_Divide (@Numerator Real, @Denominator Real) AS
BEGIN
SELECT Division =
CASE WHEN @Denominator != 0 AND @Denominator IS NOT NULL AND @Numerator != 0 AND @Numerator IS NOT NULL THEN
@Numerator / @Denominator
ELSE
0
END
RETURN
END
GO
Exec ##fn_Divide 6,4
This approach, which uses a global temporary procedure (the ## prefix), allows you to make use of the "function" not only in your scripts, but also in your dynamic SQL needs.
In scripts you have more options and a better shot at rational decomposition. Look into SQLCMD mode (SSMS -> Query Menu -> SQLCMD mode), specifically the :setvar and :r commands.
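A small sketch of what that looks like; the file path and table name are placeholders, and the script must be run with SQLCMD mode enabled:
:setvar TableName dbo.Orders
:r C:\scripts\shared-helpers.sql
SELECT COUNT(*) FROM $(TableName);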
Within a stored procedure your options are very limited. You can't define a function directly within the body of a procedure. The best you can do is something like this, with dynamic SQL:
create proc DoStuff
as begin
declare @sql nvarchar(max)
/*
define function here, within a string
note the underscore prefix, a good convention for user-defined temporary objects
*/
set @sql = '
create function dbo._object_name_twopart (@object_id int)
returns nvarchar(517) as
begin
return
quotename(object_schema_name(@object_id))+N''.''+
quotename(object_name(@object_id))
end
'
/*
create the function by executing the string, with a conditional object drop upfront
*/
if object_id('dbo._object_name_twopart') is not null drop function _object_name_twopart
exec (@sql)
/*
use the function in a query
*/
select object_id, dbo._object_name_twopart(object_id)
from sys.objects
where type = 'U'
/*
clean up
*/
drop function _object_name_twopart
end
go
This approximates a global temporary function, if such a thing existed. It's still visible to other users. You could append the @@SPID of your connection to make the name unique, but then the rest of the procedure would have to use dynamic SQL too.
Just another idea for anyone looking this up now: you could always create a permanent function in tempdb. That function would not be prefixed with ## or # to mark it as a temporary object. It would persist "permanently" until it is dropped, or until the server is restarted and tempdb is rebuilt without it. The key is that it will eventually disappear once the server is restarted, even if your own garbage collection fails.
The scope of the function would be within tempdb, but it could reference another database on the server using three-part names (dbname.schema.objectname). Better yet, you can pass in all the parameters the function needs to do its work, so it doesn't need to look at objects in other databases.
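A quick sketch of the tempdb approach, with the same made-up function as before:
USE tempdb;
GO
CREATE FUNCTION dbo.fn_AddOne (@x int)
RETURNS int AS
BEGIN
    RETURN @x + 1
END
GO
-- Callable from any database via a three-part name, until tempdb is rebuilt
SELECT tempdb.dbo.fn_AddOne(41);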
I'm not talking about doing a "SET NOCOUNT OFF". But I have a stored procedure which I use to insert some data into some tables. This procedure creates a xml response string, well let me give you an example:
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int) AS
DECLARE @reply varchar(2048)
... Do a bunch of inserts/updates...
SET @reply = '<xml><big /><outputs /></xml>'
SELECT @reply
GO
So I put together a script which uses this SP a bunch of times, and the xml "output" is getting to be too much (it's crashed my box once already).
Is there a way to suppress or redirect the output generated from this stored procedure? I don't think that modifying this stored procedure is an option.
thanks.
I guess I should clarify. The SP above is being called by a T-SQL update script that I wrote, to be run through SQL Server Management Studio, etc.
And it's not the most elegant SQL I've ever written either (some pseudo-SQL):
WHILE unprocessedRecordsLeft
BEGIN
SELECT top 1 record from updateTable where Processed = 0
EXEC insertSomeData @param = record_From_UpdateTable
END
So let's say the UpdateTable has some 50k records in it. That SP gets called 50k times, writing 50k XML strings to the output window. It didn't bring the SQL Server to a stop, just my client app (SQL Server Management Studio).
The answer you're looking for is found in a similar SO question by Josh Burke:
-- Assume this table matches the output of your procedure
DECLARE @tmpNewValue TABLE ([Id] int, [Name] varchar(50))
INSERT INTO @tmpNewValue
EXEC [ProcedureB]
-- SELECT [Id], [Name] FROM @tmpNewValue
I think I found a solution.
So what I can do now in my SQL script is something like this (pseudo-SQL):
create table #tmp(xmlReply varchar(2048))
while not_done
begin
select top 1 record from updateTable where processed = 0
insert into #tmp exec insertSomeData @param=record
end
drop table #tmp
Now, is there an even more efficient way to do this? Does SQL Server have something similar to /dev/null? A null table or something?
The answer to "How do I suppress stored procedure output?" really depends on what you are trying to accomplish, so I want to contribute what I encountered:
I needed to suppress the stored procedure (USP) output because I just wanted the row count (@@ROWCOUNT) from the output. What I did, and this may not work for everyone, is: since my query was already going to be dynamic SQL, I added a parameter called @silentExecution to the USP in question. This is a bit parameter which I defaulted to zero (0).
Next, if @silentExecution was set to one (1), I would insert the table contents into a temporary table, which is what suppresses the output, and then execute @@ROWCOUNT with no problem.
USP Example:
CREATE PROCEDURE usp_SilentExecutionProc
@silentExecution bit = 0
AS
BEGIN
SET NOCOUNT ON;
DECLARE @strSQL VARCHAR(MAX);
SET @strSQL = '';
SET @strSQL = 'SELECT TOP 10 * ';
IF @silentExecution = 1
SET @strSQL = @strSQL + 'INTO #tmpDevNull ';
SET @strSQL = @strSQL +
'FROM dbo.SomeTable ';
EXEC(@strSQL);
END
GO
Then you can execute the whole thing like so:
EXEC dbo.usp_SilentExecutionProc @silentExecution = 1;
SELECT @@ROWCOUNT;
The purpose of doing it this way is that the USP can still return a result set in other uses or cases, but can also be used for just the row count.
Just wanted to share my solution.
I recently came across a similar issue while writing a migration script, and since the issue was resolved in a different way, I want to record it.
I nearly killed my SSMS client by running a simple WHILE loop 3,000 times, calling a procedure that returns a result set on each iteration.
DECLARE @counter INT
SET @counter = 10
WHILE @counter > 0
BEGIN
-- call a procedure which returns some resultset
SELECT @counter -- (simulating the effect of a stored proc returning some resultset)
SET @counter = @counter - 1
END
The script was executed using SSMS, and the default option on the query window is set to show “Results to Grid” (the Ctrl+D shortcut).
Easy Solution:
Try setting the results to file, to avoid the grid being built and painted on the SSMS client (the Ctrl+Shift+F shortcut sets the query results to file).
This issue is related to this Stack Overflow question.
Man, this is seriously a case of a computer doing what you told it to do instead of what you wanted it to do.
If you don't want it to return results, then don't ask it to return results. Refactor that stored procedure into two:
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int) AS
BEGIN
DECLARE @reply varchar(2048)
--... Do a bunch of inserts/updates...
SET @reply = '<xml><big /><outputs /></xml>'
-- Pass the reply along; callers that don't want the XML simply never call SelectOutput
EXEC SelectOutput @reply
END
GO
CREATE PROCEDURE SelectOutput (@reply varchar(2048)) AS
BEGIN
SELECT @reply
END
From which client are you calling the stored procedure? Say it was from C#, and you're calling it like:
var com = myConnection.CreateCommand();
com.CommandText = "exec insertSomeData 1";
var read = com.ExecuteReader();
This will not yet retrieve the result from the server; you have to call Read() for that:
read.Read();
var myBigString = read[0].ToString();
So if you don't call Read, the XML won't leave the SQL Server. You can even call the procedure with ExecuteNonQuery:
var com = myConnection.CreateCommand();
com.CommandText = "exec insertSomeData 1";
com.ExecuteNonQuery();
Here the client won't even ask for the result of the select.
You could create a SQL CLR stored procedure that execs this. Should be pretty easy.
I don't know if SQL Server has an option to suppress output (I don't think it does), but the SQL Query Analyzer has an option (under results tab) to "Discard Results".
Are you running this through isql?
You said your server is crashing. What is crashing: the application that consumes the output of this SQL, or SQL Server itself (assuming SQL Server)?
If you are using a .NET Framework application to call the stored procedure, then take a look at SqlCommand.ExecuteNonQuery. This executes the stored procedure with no results returned. If the problem is at the SQL Server level, then you are going to have to do something different (i.e. change the stored procedure).
You can include in the SP a parameter to indicate whether you want it to do the select or not, but of course, you need to have access to and reprogram the SP.
CREATE PROCEDURE [dbo].[insertSomeData] (@myParam int, @doSelect bit = 1) AS
DECLARE @reply varchar(2048)
... Do a bunch of inserts/updates...
SET @reply = '<xml><big /><outputs /></xml>'
if @doSelect = 1
SELECT @reply
GO
Ever tried SET NOCOUNT ON; as an option?