A reliable way to verify T-SQL stored procedures

We're upgrading from SQL Server 2005 to 2008. Almost every database in the 2005 instance is set to 2000 compatibility mode, but we're jumping to 2008. Our testing is complete, but what we've learned is that we need to get faster at it.
I've discovered some stored procedures that either SELECT data from missing tables or try to ORDER BY columns that don't exist.
Wrapping the SQL to create the procedures in SET PARSEONLY ON and trapping errors in a try/catch only catches the invalid columns in the ORDER BYs. It does not find the error with the procedure selecting data from the missing table. SSMS 2008's intellisense, however, DOES find the issue, but I can still go ahead and successfully run the ALTER script for the procedure without it complaining.
So, why can I even get away with creating a procedure that fails when it runs? Are there any tools out there that can do better than what I've tried?
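Here's a minimal repro of what I mean (hypothetical names):
-- This CREATE succeeds even though the table doesn't exist, because SQL Server
-- defers name resolution of referenced objects until the procedure is executed.
CREATE PROCEDURE dbo.BrokenProc AS
SELECT col1 FROM dbo.TableThatDoesNotExist;
GO
EXEC dbo.BrokenProc; -- only now: 'Invalid object name'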
The first tool I found, DbValidator from CodeProject, wasn't very useful: it finds fewer problems than this script I found on SqlServerCentral, which did catch the invalid column references.
-------------------------------------------------------------------------
-- Check Syntax of Database Objects
-- Copyrighted work. Free to use as a tool to check your own code or in
-- any software not sold. All other uses require written permission.
-------------------------------------------------------------------------
-- Turn on ParseOnly so that we don't actually execute anything.
SET PARSEONLY ON
GO
-- Create a table to iterate through
declare @ObjectList table (ID_NUM int NOT NULL IDENTITY (1, 1), OBJ_NAME varchar(255), OBJ_TYPE char(2))
-- Get a list of most of the scriptable objects in the DB.
insert into @ObjectList (OBJ_NAME, OBJ_TYPE)
SELECT name, type
FROM sysobjects WHERE type in ('P', 'FN', 'IF', 'TF', 'TR', 'V')
order by type, name
-- Var to hold the SQL that we will be syntax checking
declare @SQLToCheckSyntaxFor varchar(max)
-- Var to hold the name of the object we are currently checking
declare @ObjectName varchar(255)
-- Var to hold the type of the object we are currently checking
declare @ObjectType char(2)
-- Var to indicate our current location in iterating through the list of objects
declare @IDNum int
-- Var to indicate the max number of objects we need to iterate through
declare @MaxIDNum int
-- Set the initial value and max value
select @IDNum = Min(ID_NUM), @MaxIDNum = Max(ID_NUM)
from @ObjectList
-- Begin iteration
while @IDNum <= @MaxIDNum
begin
    -- Load per iteration values here
    select @ObjectName = OBJ_NAME, @ObjectType = OBJ_TYPE
    from @ObjectList
    where ID_NUM = @IDNum
    -- Get the text of the db object (i.e. create script for the sproc)
    SELECT @SQLToCheckSyntaxFor = OBJECT_DEFINITION(OBJECT_ID(@ObjectName, @ObjectType))
    begin try
        -- Run the create script (remember that PARSEONLY has been turned on)
        EXECUTE(@SQLToCheckSyntaxFor)
    end try
    begin catch
        -- See if the object name is the same in the script and the catalog (kind of a special error)
        if (ERROR_PROCEDURE() <> @ObjectName)
        begin
            print 'Error in ' + @ObjectName
            print '  The Name in the script is ' + ERROR_PROCEDURE() + '. (They don''t match)'
        end
        -- If the error is just that this already exists then we don't want to report that.
        else if (ERROR_MESSAGE() <> 'There is already an object named ''' + ERROR_PROCEDURE() + ''' in the database.')
        begin
            -- Report the error that we got.
            print 'Error in ' + ERROR_PROCEDURE()
            print '  ERROR TEXT: ' + ERROR_MESSAGE()
        end
    end catch
    -- Setup to iterate to the next item in the table
    select @IDNum = case
                        when Min(ID_NUM) is NULL then @IDNum + 1
                        else Min(ID_NUM)
                    end
    from @ObjectList
    where ID_NUM > @IDNum
end
-- Turn the ParseOnly back off.
SET PARSEONLY OFF
GO

There are several ways to approach this. First of all, SQL Server 2008 tracks the dependencies that exist in a database, including stored procedure dependencies (see http://msdn.microsoft.com/en-us/library/bb677214%28v=SQL.100%29.aspx, http://msdn.microsoft.com/en-us/library/ms345449.aspx and http://msdn.microsoft.com/en-us/library/cc879246.aspx). You can use sys.sql_expression_dependencies and sys.dm_sql_referenced_entities to inspect and verify them.
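For example, here is a minimal sketch (dbo.spMyStoredProcedure is a placeholder name) of checking one module with sys.dm_sql_referenced_entities; note that it raises an error if a referenced object cannot be resolved:
-- Lists every entity (and column) the module references.
SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name
FROM sys.dm_sql_referenced_entities(N'dbo.spMyStoredProcedure', 'OBJECT');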
But the simplest way to verify all stored procedures is the following:
export all stored procedures
drop the old existing stored procedures
import the just-exported stored procedures.
When you upgrade a database, the existing stored procedures are not re-verified, but a newly created procedure is. So after exporting and re-importing all stored procedures, you get every existing error reported.
You can also see and export the code of a Stored Procedure with a code like following
SELECT definition
FROM sys.sql_modules
WHERE object_id = (OBJECT_ID(N'spMyStoredProcedure'))
UPDATED: To see objects (like tables and views) referenced by the stored procedure spMyStoredProcedure, you can use the following:
SELECT OBJECT_NAME(referencing_id) AS referencing_entity_name
,referenced_server_name AS server_name
,referenced_database_name AS database_name
,referenced_schema_name AS schema_name
, referenced_entity_name
FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID(N'spMyStoredProcedure');
UPDATED 2: In a comment on my answer, Martin Smith suggested using sys.sp_refreshsqlmodule instead of recreating each stored procedure. So with the code
SELECT 'EXEC sys.sp_refreshsqlmodule ''' + OBJECT_SCHEMA_NAME(object_id) +
'.' + name + '''' FROM sys.objects WHERE type in (N'P', N'PC')
you receive a script that can be used to verify stored procedure dependencies. The output will look like the following (example with AdventureWorks2008):
EXEC sys.sp_refreshsqlmodule 'dbo.uspGetManagerEmployees'
EXEC sys.sp_refreshsqlmodule 'dbo.uspGetWhereUsedProductID'
EXEC sys.sp_refreshsqlmodule 'dbo.uspPrintError'
EXEC sys.sp_refreshsqlmodule 'HumanResources.uspUpdateEmployeeHireInfo'
EXEC sys.sp_refreshsqlmodule 'dbo.uspLogError'
EXEC sys.sp_refreshsqlmodule 'HumanResources.uspUpdateEmployeeLogin'
EXEC sys.sp_refreshsqlmodule 'HumanResources.uspUpdateEmployeePersonalInfo'
EXEC sys.sp_refreshsqlmodule 'dbo.uspSearchCandidateResumes'
EXEC sys.sp_refreshsqlmodule 'dbo.uspGetBillOfMaterials'
EXEC sys.sp_refreshsqlmodule 'dbo.uspGetEmployeeManagers'
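One caveat: running the generated script stops at the first error it hits. Here is a sketch (my addition, assuming you want a full report rather than a one-off check) that refreshes each module inside TRY/CATCH so every broken module gets listed:
DECLARE @name nvarchar(517)
DECLARE refresh_cur CURSOR FAST_FORWARD FOR
    SELECT QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' + QUOTENAME(name)
    FROM sys.objects WHERE type IN (N'P', N'PC')
OPEN refresh_cur
FETCH NEXT FROM refresh_cur INTO @name
WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        EXEC sys.sp_refreshsqlmodule @name  -- fails if a referenced object is missing
    END TRY
    BEGIN CATCH
        PRINT @name + ': ' + ERROR_MESSAGE()
    END CATCH
    FETCH NEXT FROM refresh_cur INTO @name
END
CLOSE refresh_cur
DEALLOCATE refresh_cur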

Here is what worked for me:
-- Based on comment from http://blogs.msdn.com/b/askjay/archive/2012/07/22/finding-missing-dependencies.aspx
-- Check also http://technet.microsoft.com/en-us/library/bb677315(v=sql.110).aspx
select o.type, o.name, ed.referenced_entity_name, ed.is_caller_dependent
from sys.sql_expression_dependencies ed
join sys.objects o on ed.referencing_id = o.object_id
where ed.referenced_id is null
You should get all missing dependencies for your SPs, solving the problems caused by late binding.
Exception: is_caller_dependent = 1 does not necessarily mean a broken dependency. It just means that the dependency is resolved at runtime because the schema of the referenced object is not specified. You can avoid it by specifying the schema of the referenced object (another SP, for example).
Credits to Jay's blog and the anonymous commenter...

I am fond of using Display Estimated Execution Plan. It highlights many errors without ever having to actually run the proc.
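If you want the same check in script form, SET SHOWPLAN_XML is (as far as I know) the programmatic equivalent: statements in the batch are compiled and the estimated plan is returned, but nothing is executed. The procedure name below is a placeholder:
SET SHOWPLAN_XML ON;
GO
-- Compiled but not executed; missing tables/columns surface as compile errors.
EXEC dbo.spMyStoredProcedure;
GO
SET SHOWPLAN_XML OFF;
GO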

I had the same problem in a previous project and wrote a T-SQL checker on SQL 2005, and later a Windows program implementing the same functionality.

When I came across this question I was interested in finding a safe, non-invasive, and fast technique for validating syntax and object (table, column) references.
While I agree that actually executing each stored procedure will likely turn up more issues than just compiling them, one must exercise caution with the former approach. That is, you need to know that it is, in fact, safe to execute each and every stored procedure (i.e. does it erase some tables, for example?). This safety issue can be addressed by wrapping the execution in a transaction and rolling it back so no changes are permanent, as suggested in devio's answer. Still, this approach could potentially take quite a long time depending on how much data you are manipulating.
The code in the question, and the first portion of Oleg's answer, both suggest re-instantiating each stored procedure, as that action recompiles the procedure and does just such syntactic validation. But this approach is invasive: it's fine for a private test system, but could disrupt the work of other developers on a heavily used test system.
I came across the article Check Validity of SQL Server Stored Procedures, Views and Functions, which presents a .NET solution, but it is the follow-up post at the bottom by "ddblue" that intrigued me more. This approach obtains the text of each stored procedure, converts the create keyword to alter so that it can be compiled, then compiles the proc. And that accurately reports any bad table and column references. The code runs, but I quickly ran into some issues because of the create/alter conversion step.
The conversion from "create" to "alter" looks for "CREATE" and "PROC" separated by a single space. In the real world, there could be spaces or tabs, and there could be one or more of them. I added a nested "replace" sequence (thanks to this article by Jeff Moden!) to convert all such whitespace runs to a single space, allowing the conversion to proceed as originally designed. Then, since the cleaned-up text needed to be used wherever the original "sm.definition" expression was used, I added a common table expression to avoid massive, unsightly code duplication. So here is my updated version of the code:
DECLARE @Schema NVARCHAR(100),
        @Name NVARCHAR(100),
        @Type NVARCHAR(100),
        @Definition NVARCHAR(MAX),
        @CheckSQL NVARCHAR(MAX)
DECLARE crRoutines CURSOR FOR
WITH System_CTE ( schema_name, object_name, type_desc, type, definition, orig_definition )
AS -- Define the CTE query.
( SELECT OBJECT_SCHEMA_NAME(sm.object_id) ,
         OBJECT_NAME(sm.object_id) ,
         o.type_desc ,
         o.type ,
         -- Collapse tabs and runs of spaces to single spaces so 'CREATE PROC' etc. are found reliably.
         REPLACE(REPLACE(REPLACE(LTRIM(RTRIM(REPLACE(sm.definition, CHAR(9), ' '))), '  ', ' ' + CHAR(7)), CHAR(7) + ' ', ''), CHAR(7), '') [definition],
         sm.definition [orig_definition]
  FROM sys.sql_modules (NOLOCK) AS sm
  JOIN sys.objects (NOLOCK) AS o ON sm.object_id = o.object_id
  -- Add a WHERE clause here as indicated if you want to test on a subset before running the whole list.
  --WHERE OBJECT_NAME(sm.object_id) LIKE 'xyz%'
)
-- Define the outer query referencing the CTE name.
SELECT schema_name ,
       object_name ,
       type_desc ,
       CASE WHEN type_desc = 'SQL_STORED_PROCEDURE'
            THEN STUFF(definition, CHARINDEX('CREATE PROC', definition), 11, 'ALTER PROC')
            WHEN type_desc LIKE '%FUNCTION%'
            THEN STUFF(definition, CHARINDEX('CREATE FUNC', definition), 11, 'ALTER FUNC')
            WHEN type_desc = 'VIEW'
            THEN STUFF(definition, CHARINDEX('CREATE VIEW', definition), 11, 'ALTER VIEW')
            WHEN type_desc = 'SQL_TRIGGER'
            THEN STUFF(definition, CHARINDEX('CREATE TRIG', definition), 11, 'ALTER TRIG')
       END
FROM System_CTE
ORDER BY 1 , 2;
OPEN crRoutines
FETCH NEXT FROM crRoutines INTO @Schema, @Name, @Type, @Definition
WHILE @@FETCH_STATUS = 0
BEGIN
    IF LEN(@Definition) > 0
    BEGIN
        -- Uncomment to see every object checked.
        -- RAISERROR ('Checking %s...', 0, 1, @Name) WITH NOWAIT
        BEGIN TRY
            SET PARSEONLY ON ;
            EXEC ( @Definition ) ;
            SET PARSEONLY OFF ;
        END TRY
        BEGIN CATCH
            PRINT @Type + ': ' + @Schema + '.' + @Name
            PRINT ERROR_MESSAGE()
        END CATCH
    END
    ELSE
    BEGIN
        RAISERROR ('Skipping %s...', 0, 1, @Name) WITH NOWAIT
    END
    FETCH NEXT FROM crRoutines INTO @Schema, @Name, @Type, @Definition
END
CLOSE crRoutines
DEALLOCATE crRoutines

Nine years after I first posed this question, I've just discovered an amazing tool built by Microsoft themselves that can not only reliably verify stored procedure compatibility between SQL Server versions, but all other internal aspects as well. It's been renamed a few times, but they currently call it:
Microsoft® Data Migration Assistant v5.4*
* Version as of 6/17/2021
https://www.microsoft.com/en-us/download/details.aspx?id=53595
Data Migration Assistant (DMA) enables you to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality on your new version of SQL Server. It recommends performance and reliability improvements for your target environment. It allows you to not only move your schema and data, but also uncontained objects from your source server to your target server.
The answers above that use EXEC sys.sp_refreshsqlmodule were a great start, but we ran into one MAJOR problem running it on 2008 R2: any stored procedure or function that was renamed (using sp_rename, and not a DROP/CREATE pattern) REVERTED to its prior definition after running the refresh procedure, because the internal metadata isn't refreshed under the new name. It's a known bug that was fixed in SQL Server 2012, but we had a fun day of recovery afterwards. (One workaround, future readers, is to issue a ROLLBACK if the refresh throws an error.)
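A minimal guard along those lines (a sketch, not the exact code we used; the module name is a placeholder):
BEGIN TRANSACTION;
BEGIN TRY
    EXEC sys.sp_refreshsqlmodule N'dbo.spMyRenamedProc';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Roll back so a failed refresh cannot leave the module's metadata in a bad state.
    ROLLBACK TRANSACTION;
    PRINT ERROR_MESSAGE();
END CATCH;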
Anyway, times have changed, new tools are available -- and good ones at that -- thus the late addition of this answer.

Related

Dynamically iterate through passed in parameter-value(s) in T-SQL procedure

I'm currently trying to write a default procedure template for reporting from a T-SQL Datawarehouse.
The idea is to wrap each query in a procedure, so that permissions and logging can be managed easily.
Since this will be done by the DBAs, I would like to have this solution work by only pasting some standard code before and after the main query. I'd prefer if the DBA didn't have to modify any part of the logging-code.
I've solved this for most parts, however, I need to log which parameters the user has submitted to the procedure.
The obvious solution would be hardcode the parameters into the logging. However, the procedures can have a varying amount of parameters, and I'd therefore like a catch-all solution.
My understanding is that there is no easy way of iterating through all the parameters.
I can, however, access the parameter names via sys.parameters.
The closest to a solution I've come, is this minimal example:
CREATE TABLE #loggingTable (
    [ProcedureID] INT
    , [paramName] NVARCHAR(128)
    , [paramValue] NVARCHAR(128)
)
;
go
CREATE PROCEDURE dbo.[ThisIsMyTestProc] (
    @param1 TINYINT = NULL
    , @Param2 NVARCHAR(64) = null
)
AS
BEGIN
    -- Do some logging here
    DECLARE @query NVARCHAR(128)
    DECLARE @paramName NVARCHAR(128)
    DECLARE @paramValue nvarchar(128)
    DECLARE db_cursor CURSOR FOR
        SELECT [name] FROM [sys].[parameters] WHERE object_id = @@PROCID
    OPEN db_cursor
    FETCH NEXT FROM db_cursor INTO @paramName
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @query = 'SELECT @paramValue = cast(' + @paramName + ' as nvarchar(128))';
        SELECT @query;
        -- Following line doesn't work due to scope out of bounds, and is prone to SQL injection.
        --EXEC SP_EXECUTESQL @query; -- Uncomment for error
        insert into #loggingTable(ProcedureID, paramName, paramValue)
        values(@@PROCID, @paramName, @paramValue)
        FETCH NEXT FROM db_cursor INTO @paramName
    END
    CLOSE db_cursor
    DEALLOCATE db_cursor
    -- Run the main query here (Dummy statement)
    SELECT @param1 AS [column1], @Param2 AS [column2]
    -- Do more logging after statement has run
END
GO
-- test
EXEC dbo.[ThisIsMyTestProc] 1, 'val 2';
select * from #loggingTable;
-- Cleanup
DROP PROCEDURE dbo.[ThisIsMyTestProc];
DROP table #loggingTable;
However, this does have two major drawbacks:
It doesn't work, due to variable scopes.
It is prone to SQL injection, which is unacceptable.
Is there any way to solve this issue?
The values of the parameters are not available in a generic approach. You can either create a code generator, which uses sys.parameters to create a chunk of code you'd have to copy into each of your SPs, or you might read this or this about tracing and XEvents. The SQL Server Profiler works this way to show you statements together with their parameter values...
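For instance, a rough Extended Events sketch (SQL Server 2012+ syntax; session, file, and database names are placeholders) that captures completed procedure calls, parameter values included, without touching the procedures at all:
CREATE EVENT SESSION [TraceProcCalls] ON SERVER
ADD EVENT sqlserver.rpc_completed(
    ACTION(sqlserver.username)
    WHERE sqlserver.database_name = N'MyDatabase')
ADD TARGET package0.event_file(SET filename = N'TraceProcCalls');
GO
ALTER EVENT SESSION [TraceProcCalls] ON SERVER STATE = START;
-- The statement field of rpc_completed holds the full call, e.g.
-- exec dbo.ThisIsMyTestProc @param1=1,@Param2=N'val 2'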
If you don't want to get into tracing or XEvents, you might try something along these lines:
--Create a dummy proc
CREATE PROCEDURE dbo.[ThisIsMyTestProc] (
    @param1 TINYINT = NULL
    , @Param2 NVARCHAR(64) = null
)
AS
BEGIN
    SELECT @@PROCID;
END
GO
--call it to see the value of @@PROCID
EXEC dbo.ThisIsMyTestProc; --See the proc-id
GO
--Now this is the magic part. It will create a command, which you can copy and paste into your SP:
SELECT CONCAT('INSERT INTO YourLoggingTable(LogType,ObjectName,ObjectId,Parameters) SELECT ''ProcedureCall'', ''',o.[name],''',',o.object_id,','
    ,'(SELECT'
    ,STUFF((
        SELECT CONCAT(',''',p.[name],''' AS [parameter/@name],',p.[name],' AS [parameter/@value],''''')
        FROM sys.parameters p
        WHERE p.object_id=o.object_id
        FOR XML PATH('')
    ),1,1,'')
    ,' FOR XML PATH(''''),ROOT(''parameters''),TYPE)'
)
FROM [sys].[objects] o
WHERE o.object_id = 525244926; --<-- Use the proc-id here
--Now we can copy the string into our procedure
--I out-commented the INSERT part, the SELECT is enough to show the effect
ALTER PROCEDURE dbo.[ThisIsMyTestProc] (
    @param1 TINYINT = NULL
    , @Param2 NVARCHAR(64) = null
)
AS
BEGIN
    --The generated code comes in one single line
    --INSERT INTO YourLoggingTable(LogType,ObjectName,ObjectId,Parameters)
    SELECT 'ProcedureCall'
          ,'ThisIsMyTestProc'
          ,525244926
          ,(SELECT '@param1' AS [parameter/@name],@param1 AS [parameter/@value],''
                  ,'@Param2' AS [parameter/@name],@Param2 AS [parameter/@value],''
            FOR XML PATH(''),ROOT('parameters'),TYPE)
END
GO
Hint: We need the empty element (,'') at the end of each line to allow multiple elements with the same name.
--Now we can call the SP with some param values
EXEC dbo.ThisIsMyTestProc 1,'hello';
GO
As a result, your Log-Table will get an entry like this
ProcedureCall ThisIsMyTestProc 525244926 <parameters>
<parameter name="@param1" value="1" />
<parameter name="@Param2" value="hello" />
</parameters>
Just add typical logging data like UserID, DateTime, whatever you need...
Scope is the killer issue for this approach. I don't think there's a way to reference the values of parameters by anything but their variable names. If there was a way to retrieve variable values from a collection or by declared ordinal position, it could work on the fly.
I understand wanting to keep the overhead for the DBAs low and eliminating opportunities for error, but I think the best solution is to generate the required code and supply it to the DBAs or give them a tool that generates the needed blocks of code. That's about as lightweight as we can make it for the DBA, but I think it has the added benefit of eliminating processing load in the procedure by turning it into a static statement with some conditional checking for validity and concatenation work. Cursors and looping things should be avoided as much as possible.
Write a SQL script that generates your pre- and post- query blocks. Generate them in mass with a comment at the top of each set of blocks with the stored procedure name and hand it to the DBAs to copy/paste into the respective procs. Alternatively, give them the script and let them run it as needed to generate the pre- and post- blocks themselves.
I would include some checks in the generated script to help make sure it works during execution. This will detect mismatches in the generated code due to subsequent modifications to the procedure itself. We could go the extra mile and include the names of the parameters when the code is generated and verify them against sys.parameters to make sure the parameter names hard-coded into the generated code haven't changed since code generation.
-- Log execution details pre-execution
IF object_name(@@PROCID) = 'ThisIsMyTestProc' AND (SELECT COUNT(*) FROM [sys].[parameters] WHERE object_id = @@PROCID) = 2
BEGIN
    -- EXEC can't take expressions as arguments, so stage them in variables first
    DECLARE @Params varchar(max) = CONCAT('parm1: ', @param1, ' parm2: ', @Param2),
            @Now datetime = getdate(),
            @User nvarchar(50) = system_user
    EXEC LogProcPreExecution @Parameters = @Params, @ProcName = 'ThisIsMyTestProc', @ExecutionTime = @Now, @ExecutionUser = @User
END
ELSE
BEGIN
    --Do error logging for proc name and parameter mismatch
END
--Log procedure would look like this
CREATE PROCEDURE LogProcPreExecution
    @Parameters varchar(max),
    @ProcName nvarchar(128),
    @ExecutionTime datetime,
    @ExecutionUser nvarchar(50)
AS
BEGIN
    --Do the logging
END

Drop all objects in SQL Server database that belong to different schemas?

Is there a way to drop all objects in a db, with the objects belonging to two different schemas?
I had been previously working with one schema, so I query all objects using:
Select * From sysobjects Where type=...
then dropped everything using:
Drop Table ...
Now that I have introduced another schema, every time I try to drop an object it says something about me not having permission, or that the object does not exist. BUT, if I prefix the object as [schema].[object] it works. I don't know how to automate this, because I don't know which objects there are, or which of the two schemas each object will belong to. Does anyone know how to drop all objects inside a db, regardless of which schema they belong to?
(The user used is owner of both schemas, the objects in the DB were created by said user, as well as the user who is removing the objects - which works if the prefix I used IE. Drop Table Schema1.blah)
Use sys.objects in combination with OBJECT_SCHEMA_NAME to build your DROP TABLE statements, review, then copy/paste to execute:
SELECT 'DROP TABLE ' +
QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
QUOTENAME(name) + ';'
FROM sys.objects
WHERE type_desc = 'USER_TABLE';
Or use sys.tables to avoid need of the type_desc filter:
SELECT 'DROP TABLE ' +
QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.' +
QUOTENAME(name) + ';'
FROM sys.tables;
Neither of the other answers seems to have addressed the all objects part of the question.
I'm amazed you have to roll your own for this - I expected there to be a drop schema blah cascade. Surely every single person who sets up a dev server has to do this, and having to do some meta-programming before being able to do normal programming is seriously horrible. Anyway... rant over!
I started looking at some of these articles as a way to do it by clearing out a schema: there's an old article about doing this, however the tables mentioned there are now marked as deprecated. I've also looked at the documentation for the new tables to help understand what is going on here.
There's another answer and a great dynamic sql resource it links to.
After looking at all this stuff for a while it just all seemed a bit too messy.
I think the better option is to go for
ALTER DATABASE blah SET SINGLE_USER WITH ROLLBACK IMMEDIATE
DROP DATABASE blah
CREATE DATABASE blah
instead. The extra incantation at the top is basically to force drop the database as mentioned here
It feels a bit wrong but the amount of complexity involved in writing the drop script is a good reason to avoid it I think.
If there seem to be problems with dropping the database I might revisit some of the links and post another answer
Try this with SQL 2012 or above;
this script may help to delete all objects in a selected schema.
Note: the script below targets the dbo schema for all objects, but you may change @MySchemaName in the very first line.
DECLARE @MySchemaName VARCHAR(50)='dbo', @sql VARCHAR(MAX)='';
DECLARE @SchemaName VARCHAR(255), @ObjectName VARCHAR(255), @ObjectType VARCHAR(255), @ObjectDesc VARCHAR(255), @Category INT;
DECLARE cur CURSOR FOR
SELECT (s.name)SchemaName, (o.name)ObjectName, (o.type)ObjectType,(o.type_desc)ObjectDesc,(so.category)Category
FROM sys.objects o
INNER JOIN sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN sysobjects so ON so.name=o.name
WHERE s.name = @MySchemaName
AND so.category=0
AND o.type IN ('P','PC','U','V','FN','IF','TF','FS','FT','PK','TT')
OPEN cur
FETCH NEXT FROM cur INTO @SchemaName,@ObjectName,@ObjectType,@ObjectDesc,@Category
SET @sql='';
WHILE @@FETCH_STATUS = 0 BEGIN
    IF @ObjectType IN('FN', 'IF', 'TF', 'FS', 'FT') SET @sql=@sql+'Drop Function '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('V') SET @sql=@sql+'Drop View '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('P') SET @sql=@sql+'Drop Procedure '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    IF @ObjectType IN('U') SET @sql=@sql+'Drop Table '+@MySchemaName+'.'+@ObjectName+CHAR(13)
    --PRINT @ObjectName + ' | ' + @ObjectType
    FETCH NEXT FROM cur INTO @SchemaName,@ObjectName,@ObjectType,@ObjectDesc,@Category
END
CLOSE cur;
DEALLOCATE cur;
SET @sql=@sql+CASE WHEN LEN(@sql)>0 THEN 'Drop Schema '+@MySchemaName+CHAR(13) ELSE '' END
PRINT @sql
EXECUTE (@sql)
I do not know which version of SQL Server you are using, but assuming it is 2008 or later, the following command may be very useful (note that you can drop ALL TABLES in one simple line):
sp_MSforeachtable "USE DATABASE_NAME DROP TABLE ?"
This will execute DROP TABLE ... for all tables in database DATABASE_NAME. It is very simple and works perfectly. The command can also be used to execute other SQL statements, for example:
sp_MSforeachtable "USE DATABASE_NAME SELECT * FROM ?"
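One caveat (my addition): DROP TABLE fails for tables that are referenced by foreign keys, so you may need to drop the constraints first. A generate-and-review sketch:
-- Generates the ALTER statements to drop every foreign key; review, then run the output.
SELECT 'ALTER TABLE ' + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(parent_object_id))
     + ' DROP CONSTRAINT ' + QUOTENAME(name) + ';'
FROM sys.foreign_keys;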

How to Execute SQL Query without Displaying results

Is it possible to execute a SQL query without displaying the results?
like
Select * from Table_Name
after running this query, the result should not be displayed in SQL Server.
I'm surprised nobody came up with the answer: switch on the "discard query results after execution" option; I'm pretty sure that was what the interviewer was after. SET FMTONLY is a totally different thing IMHO.
In SSMS
open a new query
in the menu select Query / Query options
select the Results pane
check "Discard results after execution"
The reason you might want to do this is to avoid having to wait and waste resources for the results to be loaded into the grid but still be able to have e.g. the Actual Execution Plan.
Executing a query will return a recordset. It may have no rows, of course, but you still get a result.
You can suppress rows but not the resultset with SET FMTONLY
SET FMTONLY ON
SELECT * FROM sys.tables
SET FMTONLY OFF
SELECT * FROM sys.tables
Never had a use for it personally though...
Edit 2018: as noted, see @deroby's answer for a better solution these days.
Sounds like a dubious interview question to me. I've done it, I've needed to do it, but you'd only need to do so under pretty obscure circumstances. Obscure, but sometimes very important.
As @gbn says, one programmatic way is with SET FMTONLY (thanks, now I don't have to dig it out of my old script files). Some programs and utilities do this when querying SQL; first they submit a query with FMTONLY ON to determine the layout of the resulting table structure, then, when they've prepared that, they run it again with FMTONLY OFF to get the actual data. (I found this out when a procedure called a second procedure, the second procedure returned the data set, and for obscure reasons the whole house of cards fell down.)
This can also be done in SSMS. For all query windows, under Tools/Options, Query Results/SQL Server/Results to XX, check "Discard results after query executes"; for only the current window, under Query/Query Options, Results/XX, same checkbox. The advantage here is that the query will run on the database server, but the data results will not be returned. This can be invaluable if you're checking the query plan but don't want to receive the resulting 10GB of data (across the network onto your laptop), or if you're doing some seriously looped testing, as SSMS can only accept so many result sets from a given "run" before stopping the query with a "too many result sets" message. [Hmm, double-check me on that "query plan only" bit--I think it does this, but it's been a long time.]
insert anothertable
Select * from Table_Name
Executes the select but returns nothing
set noexec on
Select * from Table_Name
Parses but does not execute and so returns nothing.
Perhaps the interviewer intended to ask a different question:
How would you execute a SQL query without returning the number of results?
In that case the answer would be SET NOCOUNT ON.
If you need the query to execute but don't need the actual resultset, you can wrap the query in an EXISTS (or NOT EXISTS) statement: IF EXISTS(SELECT * FROM TABLE_NAME...). Or alternately, you could select INTO #temp, then later drop the temp table.
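A quick sketch of both variants (Table_Name stands in for a real table):
-- Runs the query but returns only a message, not the rows.
IF EXISTS (SELECT * FROM Table_Name)
    PRINT 'Rows exist';
-- Or materialize the rows into a temp table and throw them away.
SELECT * INTO #discard FROM Table_Name;
DROP TABLE #discard;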
Is the goal to suppress all rows? Then use a filter that evaluates to false for every row:
SELECT * FROM Table_Name WHERE 1 = 2
In my case I was testing that the data was behaving in all views, e.g. that any cast() functions weren't causing conversion errors, etc., so suppressing the actual data wasn't an option. Displaying wasn't too bad, but a bit of a wasted resource, and better not to display when sending results only to text.
I came up with the following script to test all the views in this way; the only problem is when it encounters views that have text/ntext columns.
declare csr cursor local for select name from sys.views order by name
declare @viewname sysname
declare @sql nvarchar(max)
open csr
fetch next from csr into @viewname
while @@fetch_status = 0 begin
    --set @sql = 'select top 1 * from ' + @viewname
    set @sql = 'declare @test nvarchar(max) select @test = checksum(*) from ' + @viewname
    print @viewname
    exec sp_executesql @sql
    fetch next from csr into @viewname
end
close csr
deallocate csr
If you are using PostgreSQL, you can put your select in a function and use PERFORM.
The PERFORM statement evaluates an expression or query and discards the result.
A PERFORM statement sets FOUND true if it produces (and discards) one or more rows, false if no row is produced.
https://www.postgresql.org/docs/9.1/plpgsql-statements.html#:~:text=A%20PERFORM%20statement%20sets%20FOUND,if%20no%20row%20is%20returned.
Yet another use case is when you just want to read all the rows of a table, for example to test for corruption. In this case you don't need the data itself, only the fact that it is readable or not.
However, the option name "Discard results AFTER execution" is a bit confusing - it suggests that the result is fetched and only then discarded. On the contrary, the data is certainly fetched, but it is not stored anywhere (by default the rows are put into the grid, or whatever output you have chosen) - the received rows are discarded on the fly (not AFTER execution).
I am surprised the community can't easily find a use case for this. Large result sets take memory on the client, which may become a problem if many SSMS windows are active (it is not unusual for me to have 2-3 instances of SSMS opened, each with 50-70 active windows). In some cases, like in Cyril's example, SSMS can run out of memory and simply unable to handle a large result set. For instance, I had a case when I needed to debug a stored procedure returning hundreds of millions of rows. It would be impossible to run in SSMS on my development machine without discarding results. The procedure was for an SSIS package where it was used as a data source for loading a data warehouse table. Debugging in SSMS involved making non-functional changes (so the result set was of no interest to me) and inspecting execution statistics and actual query execution plans.
I needed a proc to return all records updated by a specified user after a certain point in time, only showing results where records existed. Here it is:
-- Written by David Zanke
-- Return all records modified by a specified user on or after a specified date.
-- If mod date does not exist, return row anyhow
Set Nocount on
Declare @UserName varchar(128) = 'zanked'
    , @UpdatedAfterDate Varchar( 30) = '2016-10-08'
    , @TableName varchar( 128)
    , @ModUser varchar( 128)
    , @ModTime varchar( 128)
    , @sql varchar( 2000 )
-- In a perfect world, the left join would be unnecessary, since every row that captures the last mod user would have a last mod date.
-- Unfortunately, I do not work in a perfect world and rows w/ last mod user exist w/o last mod date
Declare UserRows Cursor for Select distinct c1.table_name, c1.column_name, c2.column_name From INFORMATION_SCHEMA.COLUMNS c1
    Left Join INFORMATION_SCHEMA.COLUMNS c2 On c1.Table_Name = c2.Table_Name And c2.Column_name like '%DTTM_RCD_LAST_UPD%'
    Where c1.column_name like '%UPDATED_BY_USER%'
Open UserRows
Fetch UserRows Into @tablename, @ModUser, @ModTime
While ( @@FETCH_STATUS = 0 )
Begin
    -- capture output from query into a global temp table
    Select @sql = 'Select ''' + @TableName + ''' TableName, * Into ##HoldResults From ' + @TableName + ' Where ' + @ModUser + ' = ''' + @userName + ''''
        + Case When @ModTime Is Null Then '' Else ' And ' + @ModTime + ' >= ''' + @UpdatedAfterDate + '''' End
    Exec ( @sql)
    -- only output where rows exist
    If @@ROWCOUNT > 0
    Begin
        Select * from ##HoldResults
    End
    Drop Table ##HoldResults
    Fetch UserRows Into @tablename, @ModUser, @ModTime
End
Close UserRows;
Deallocate UserRows

How to force SQL Server 2005 objects to be recompiled NOW

According to this, running sp_recompile forces the object to be recompiled the next time that it is run.
I need it to be recompiled the moment I run the sp_recompile command, mainly to check for syntax errors and the existence of objects on which the stored procedure depends.
On SQL 2008 there's the sys.sp_refreshsqlmodule procedure...
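Called for a single module, it looks like this (the procedure name is a placeholder); it errors immediately if a referenced object is missing:
EXEC sys.sp_refreshsqlmodule N'dbo.SampleProcedure';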
Probably the simplest way to do this is to re-deploy the stored procedure, which would (as far as I'm aware) remove the need to recompile the procedure.
Something along these lines:
DECLARE @ProcedureName NVARCHAR(128)
SET @ProcedureName = 'SampleProcedure'
CREATE TABLE #ProcedureContent (Text NVARCHAR(MAX))
INSERT INTO #ProcedureContent
EXEC sp_helptext @ProcedureName
DECLARE @ProcedureText NVARCHAR(MAX)
SET @ProcedureText = ''
SELECT @ProcedureText = @ProcedureText + [Text] FROM #ProcedureContent
EXEC ('DROP PROCEDURE ' + @ProcedureName);
EXEC (@ProcedureText)
DROP TABLE #ProcedureContent

How do I programatically perform a Modify on all stored procedures in my database in SQL 2008

What I want to do is simulate right-clicking a stored procedure and selecting Modify, then Execute, so that my stored procedure runs.
Some of the tables in our database have changed and not all the sp's have been modified.
i.e. old SP:
ALTER PROCEDURE [dbo].[myProcedure]
AS
SELECT name, address, typename from names
GO
Then the names table was modified and the typename column removed.
If I click Modify on the SP and then Execute, I get an error message in the Messages output window.
I would like to do this for every SP in my database so I can see that each one runs without errors.
(We have 200 SPs and it would take a long time to do it manually.)
Any ideas would be much appreciated.
You should compose a text file of test cases in the form:
exec <stored proc> [args]
if (@@error <> 0)
begin
    print 'Fail'
end
go
Unfortunately there is no way to automate this further unless either:
None of your stored procedures take parameters.
Your stored procedure parameters are derivable (highly unlikely).
Even if you do supply one particular set of parameter values, this isn't comprehensively testing that all stored procs in your database are bug free. It just verifies that the sproc runs for those particular arguments. The bottom line: There are no short-cuts when it comes to proper unit testing.
You could write a cursor to run through each of them, executing them. But how would you know what values to provide for the input parameters? If none of them have parameters, something like this will work:
DECLARE @proc sysname
DECLARE cur CURSOR FOR SELECT '[' + schema_name(schema_id) + '].[' + name + ']'
FROM sys.procedures
OPEN cur
FETCH NEXT FROM cur INTO @proc
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (@proc)
    FETCH NEXT FROM cur INTO @proc
END
CLOSE cur
DEALLOCATE cur
Handling parameters (assuming you can figure out the values to use) would be along the same lines with an inner loop to get the parameter names, then supply them with values.
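A starting point for that inner loop might be sys.parameters; a sketch (the procedure name is a placeholder) listing what each proc would need:
-- List each parameter's name, type, and direction so values can be chosen for it.
SELECT p.name, TYPE_NAME(p.user_type_id) AS type_name, p.max_length, p.is_output
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID(N'dbo.myProcedure');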