We have recently encountered a strange problem in our staging and production environments. We ran an update script against a stored procedure on SQL Server 2005, verified the change, and started using it in our products. After some time, the same stored procedure went missing from the database. The stored procedure is not used by any task other than the one we intend it for. We have checked every bit of code and every deployment script, but cannot find anything that would drop the stored procedure.
This issue doesn't occur in our DEV and QA environments, only in Staging and Production.
Could anybody help on this?
Kind Regards,
Mafaz
If you've ruled out the obvious (e.g. deliberate sabotage), then I would suggest having a look through sys.sql_modules for a reference to the procedure - possibly there is an accidental slip like:
IF EXISTS (SELECT 1 FROM SYS.PROCEDURES WHERE NAME = 'Proc1')
DROP PROCEDURE Proc1
GO
CREATE PROC dbo.Proc1
AS
...
-- << MISSING GO!
IF EXISTS (SELECT 1 FROM SYS.PROCEDURES WHERE NAME = 'Proc2')
DROP PROCEDURE Proc2
GO
CREATE PROC dbo.Proc2
AS
...
i.e. in the above, the DROP PROCEDURE Proc2 code gets appended INTO the definition of the unrelated Proc1 because of the missing GO at the end of the Proc1 definition. Every time Proc1 is run, it will drop Proc2 (and the IF EXISTS will inconveniently hide any error if Proc2 has already been dropped).
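To hunt for this kind of slip, something like the following searches every module definition for a DROP of the vanishing procedure (substitute your own procedure name; the joins are only there to show the schema as well):
-- Find any module whose definition contains a DROP of the missing procedure
SELECT s.name AS schema_name, o.name AS module_name
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE m.definition LIKE '%DROP PROCEDURE%Proc2%'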
Similarly, another common issue is leaving GRANT EXEC at the bottom of the PROC - if permissions are lax, this can destroy the performance of a procedure.
The best advice here is to run applications with minimal permissions, so that they are not able to execute DDL such as DROP or GRANT. This way, the app will break the moment Proc1 is executed, allowing you to quickly track down the offending code.
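As a rough sketch of what that lock-down could look like (the role and user names here are invented for the example):
-- A role that can only execute procedures in the dbo schema, nothing more
CREATE ROLE app_execute_only
GRANT EXECUTE ON SCHEMA::dbo TO app_execute_only

-- Put the application's database user in that role instead of db_owner
EXEC sp_addrolemember 'app_execute_only', 'AppUser'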
Related
I have three stored procedures Sp1, Sp2 and Sp3.
The first one (Sp1) executes the second one (Sp2) and saves the returned data into #tempTB1, and the second one executes the third one (Sp3) and saves its data into #tempTB2.
If I execute Sp2 on its own it works and returns all my data from Sp3, but the problem is with Sp1: when I execute it, it displays this error:
INSERT EXEC statement cannot be nested
I tried changing the place where I execute Sp2, and it displays another error:
Cannot use the ROLLBACK statement
within an INSERT-EXEC statement.
This is a common issue when attempting to 'bubble up' data from a chain of stored procedures. A restriction in SQL Server is that you can only have one INSERT-EXEC active at a time. I recommend looking at How to Share Data Between Stored Procedures, which is a very thorough article on patterns to work around this type of problem.
For example, a workaround could be to turn Sp3 into a table-valued function.
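A rough sketch of that approach, assuming Sp3 simply selects the rows the caller needs (the table and column names here are invented):
-- Sp3 rewritten as an inline table-valued function
CREATE FUNCTION dbo.fnGetSp3Data ()
RETURNS TABLE
AS
RETURN
(
    SELECT ID, Data
    FROM dbo.SomeTable
)
GO

-- Sp2 can now capture the rows without an INSERT-EXEC
INSERT INTO #tempTB2 (ID, Data)
SELECT ID, Data
FROM dbo.fnGetSp3Data()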
This is the only "simple" way to do this in SQL Server without some giant, convoluted function or an executed SQL string call, both of which are terrible solutions:
create a temp table
openrowset your stored procedure data into it
EXAMPLE:
INSERT INTO #YOUR_TEMP_TABLE
SELECT * FROM OPENROWSET('SQLOLEDB', 'Server=(local);TRUSTED_CONNECTION=YES;',
    'set fmtonly off EXEC [DatabaseName].dbo.[StoredProcedureName] 1,2,3')
Note: you MUST use 'set fmtonly off', and you CANNOT use dynamic SQL inside the OPENROWSET call, either for the string containing your stored procedure parameters or for the table name. That's why you have to use a temp table rather than a table variable, which would have been better, as it outperforms a temp table in most cases.
OK, encouraged by jimhark, here is an example of the old single-hash temp table approach:
CREATE PROCEDURE SP3 as
BEGIN
SELECT 1, 'Data1'
UNION ALL
SELECT 2, 'Data2'
END
go
CREATE PROCEDURE SP2 as
BEGIN
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
INSERT INTO #tmp1
EXEC SP3
else
EXEC SP3
END
go
CREATE PROCEDURE SP1 as
BEGIN
EXEC SP2
END
GO
/*
--I want some data back from SP3
-- Just run the SP1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--Try run this - get an error - can't nest Execs
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
INSERT INTO #tmp1
EXEC SP1
*/
/*
--I want some data back from SP3 into a table to do something useful
--However, if we run this single hash temp table it is in scope anyway so
--no need for the exec insert
if exists (select * from tempdb.dbo.sysobjects o where o.xtype in ('U') and o.id = object_id(N'tempdb..#tmp1'))
DROP TABLE #tmp1
CREATE TABLE #tmp1 (ID INT, Data VARCHAR(20))
EXEC SP1
SELECT * FROM #tmp1
*/
My workaround for this problem has always been to use the principle that single-hash temp tables are in scope for any called procs. So, I have an option switch in the proc parameters (defaulted to off). If this is switched on, the called proc will insert its results into the temp table created in the calling proc. I think in the past I have taken it a step further and put some code in the called proc to check whether the single-hash table exists in scope; if it does, insert into it, otherwise return the result set. Seems to work well - it's the best way of passing large data sets between procs that I've found.
This trick works for me.
You don't have this problem when calling across a remote server, because on a remote server the last INSERT command waits for the result of the previous command to execute. That's not the case on the same server.
Take advantage of that situation for a workaround.
If you have the right permission to create a Linked Server, do it.
Create the same server as a linked server.
In SSMS, log into your server
Go to "Server Objects"
Right-click on "Linked Servers", then "New Linked Server"
In the dialog, give your linked server any name, e.g. THISSERVER
Server type is "Other data source"
Provider: Microsoft OLE DB Provider for SQL Server
Data source: your IP; it can also be just a dot (.), since it's localhost
Go to the "Security" tab and choose the third option, "Be made using the login's current security context"
You can edit the server options (3rd tab) if you want
Press OK, and your linked server is created
Now your SQL command in SP1 is:
insert into #myTempTable
exec THISSERVER.MY_DATABASE_NAME.MY_SCHEMA.SP2
Believe me, it works even if you have a dynamic INSERT in SP2.
A workaround I found is to convert one of the procs into a table-valued function. I realize that this is not always possible, and it introduces its own limitations. However, I have always been able to find at least one of the procedures that is a good candidate for it. I like this solution because it doesn't introduce any "hacks" into the solution.
I encountered this issue when trying to import the results of a stored proc into a temp table, where that stored proc itself inserted into a temp table as part of its own operation. The issue is that SQL Server does not allow the same process to write to two different temp tables at the same time.
The accepted OPENROWSET answer works fine, but I needed to avoid using any Dynamic SQL or an external OLE provider in my process, so I went a different route.
One easy workaround I found was to change the temporary table in my stored procedure to a table variable. It works exactly the same as it did with a temp table, but no longer conflicts with my other temp table insert.
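For illustration, the change inside the inner procedure amounts to something like this (the table and column names are just placeholders):
-- Before: a temp table
-- CREATE TABLE #Results (ID INT, Data VARCHAR(20))

-- After: a table variable, which no longer conflicts with the outer insert
DECLARE @Results TABLE (ID INT, Data VARCHAR(20))

INSERT INTO @Results (ID, Data)
SELECT ID, Data
FROM dbo.SomeTable

SELECT ID, Data FROM @Results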
Just to head off the comment I know that a few of you are about to write, warning me off Table Variables as performance killers... All I can say to you is that in 2020 it pays dividends not to be afraid of Table Variables. If this was 2008 and my Database was hosted on a server with 16GB RAM and running off 5400RPM HDDs, I might agree with you. But it's 2020 and I have an SSD array as my primary storage and hundreds of gigs of RAM. I could load my entire company's database to a table variable and still have plenty of RAM to spare.
Table Variables are back on the menu!
I recommend reading this entire article. Below is the most relevant section of the article, which addresses your question:
Rollback and Error Handling is Difficult
In my articles on Error and Transaction Handling in SQL Server, I suggest that you should always have an error handler like
BEGIN CATCH
IF @@trancount > 0 ROLLBACK TRANSACTION
EXEC error_handler_sp
RETURN 55555
END CATCH
The idea is that even if you do not start a transaction in the procedure, you should always include a ROLLBACK, because if you were not able to fulfil your contract, the transaction is not valid.
Unfortunately, this does not work well with INSERT-EXEC. If the called procedure executes a ROLLBACK statement, this happens:
Msg 3915, Level 16, State 0, Procedure SalesByStore, Line 9 Cannot use the ROLLBACK statement within an INSERT-EXEC statement.
The execution of the stored procedure is aborted. If there is no CATCH handler anywhere, the entire batch is aborted, and the transaction is rolled back. If the INSERT-EXEC is inside TRY-CATCH, that CATCH handler will fire, but the transaction is doomed, that is, you must roll it back. The net effect is that the rollback is achieved as requested, but the original error message that triggered the rollback is lost. That may seem like a small thing, but it makes troubleshooting much more difficult, because when you see this error, all you know is that something went wrong, but you don't know what.
I had the same issue and concern over duplicate code in two or more sprocs. I ended up adding an additional parameter for "mode". This allowed common code to exist inside one sproc, with the mode directing the flow and result set of the sproc.
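A rough sketch of the idea with invented names - depending on the mode, the same proc either returns its result set or loads a temp table that the caller has already created:
CREATE PROCEDURE dbo.GetData
    @Mode TINYINT = 0   -- 0 = return result set, 1 = fill the caller's #Results table
AS
BEGIN
    IF @Mode = 1
        INSERT INTO #Results (ID, Data)   -- #Results must exist in the caller's scope
        SELECT ID, Data FROM dbo.SomeTable
    ELSE
        SELECT ID, Data FROM dbo.SomeTable
END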
What about just storing the output in a static table? Like:
-- SubProcedure: subProcedureName
---------------------------------
-- Save the value
DELETE lastValue_subProcedureName
INSERT INTO lastValue_subProcedureName (Value)
SELECT @Value
-- Return the value
SELECT @Value
-- Procedure
--------------------------------------------
-- get last value of subProcedureName
SELECT Value FROM lastValue_subProcedureName
It's not ideal, but it's so simple and you don't need to rewrite everything.
UPDATE:
The previous solution does not work well with parallel queries (async and multi-user access), therefore I am now using temp tables.
-- A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished.
-- The table can be referenced by any nested stored procedures executed by the stored procedure that created the table.
-- The table cannot be referenced by the process that called the stored procedure that created the table.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NULL
CREATE TABLE #lastValue_spGetData (Value INT)
-- trigger stored procedure with special silent parameter
EXEC dbo.spGetData 1 --silent mode parameter
Nested spGetData stored procedure content:
-- Save the output if temporary table exists.
IF OBJECT_ID('tempdb..#lastValue_spGetData') IS NOT NULL
BEGIN
DELETE #lastValue_spGetData
INSERT INTO #lastValue_spGetData(Value)
SELECT Col1 FROM dbo.Table1
END
-- stored procedure return
IF @silentMode = 0
SELECT Col1 FROM dbo.Table1
Declare an output cursor variable in the inner sp:
@c CURSOR VARYING OUTPUT
Then declare a cursor c for the SELECT you want to return.
Then open the cursor.
Then set the reference:
DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
SELECT ...
OPEN c
SET @c = c
DO NOT close or deallocate.
Now call the inner sp from the outer one, supplying a cursor parameter like:
exec sp_abc a, b, c, @cOUT OUTPUT
Once the inner sp executes, your @cOUT is ready to fetch. Loop and then close and deallocate.
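Pieced together, a minimal end-to-end sketch of this pattern might look like the following (the procedure, table and column names are invented):
CREATE PROCEDURE dbo.InnerProc
    @c CURSOR VARYING OUTPUT
AS
BEGIN
    DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY FOR
        SELECT ID, Data FROM dbo.SomeTable
    OPEN c
    SET @c = c   -- hand the open cursor back to the caller; do not close or deallocate here
END
GO

CREATE PROCEDURE dbo.OuterProc
AS
BEGIN
    DECLARE @cOut CURSOR
    DECLARE @ID INT, @Data VARCHAR(20)

    EXEC dbo.InnerProc @c = @cOut OUTPUT

    FETCH NEXT FROM @cOut INTO @ID, @Data
    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- do something useful with @ID and @Data here
        FETCH NEXT FROM @cOut INTO @ID, @Data
    END

    CLOSE @cOut
    DEALLOCATE @cOut
END
GO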
If you are able to use other associated technologies such as C#, I suggest using the built-in SqlCommand with a transaction parameter.
var sqlCommand = new SqlCommand(commandText, null, transaction);
I've created a simple Console App that demonstrates this ability which can be found here:
https://github.com/hecked12/SQL-Transaction-Using-C-Sharp
In short, C# lets you overcome this limitation: you can inspect the output of each stored procedure and use that output however you like, for example by feeding it to another stored procedure. If the output is OK, you can commit the transaction; otherwise, you can revert the changes with a rollback.
On SQL Server 2008 R2, I had a mismatch in table columns that caused the Rollback error. It went away when I fixed my sqlcmd table variable, populated by the INSERT-EXEC statement, to match what the stored proc returned. It was missing org_code. In a Windows cmd file, the following loads the result of the stored procedure and selects it:
set SQLTXT= declare @resets as table (org_id nvarchar(9), org_code char(4), ^
tin char(9), old_strt_dt char(10), strt_dt char(10)); ^
insert @resets exec rsp_reset; ^
select * from @resets;
sqlcmd -U user -P pass -d database -S server -Q "%SQLTXT%" -o "OrgReport.txt"
This seems like it would be trivial, but I have not been able to come up with a solution to this small problem.
I am attempting to create a stored procedure in my application's database. This stored procedure just executes a job that has been set up in SSMS on the same server (this seemed to be the only way to programmatically execute these jobs).
The simple code is shown below:
USE ApplicationsDatabase
GO
CREATE PROCEDURE [dbo].[procedure]
AS
BEGIN
EXEC dbo.sp_start_job N'Nightly Download'
END
When run as is, the procedure technically gets created but cannot be executed, because it cannot find 'sp_start_job' since it is using ApplicationsDatabase. If I try to create the procedure again (after deleting the previously created one) but change the USE to MSDB, it tries to add it to that system database, for which I do not have permissions. Finally, I attempted to keep the original CREATE statement but add a USE MSDB within the procedure (just to use the 'sp_start_job' procedure), but it errored saying USE statements cannot be placed within procedures.
After pondering on the issue for a little (I'm obviously no SQL database expert), I could not come up with a solution and decided to solicit the advice of my peers. Any help would be greatly appreciated, thanks!
You will have to fully qualify the path to the procedure. Of course, you can only execute it if the application has permissions.
Try this:
USE ApplicationsDatabase
GO
CREATE PROCEDURE [dbo].[procedure]
AS
BEGIN
EXEC msdb.dbo.sp_start_job N'Nightly Download'
END
We have a stored procedure that runs hourly and requires heavy modification. There is an issue where someone edits it while it is running, which breaks the running execution and ends it. I am looking for an error to pop up when someone tries to edit a stored procedure while it is running, rather than the execution breaking.
It's a SQL Server Agent job that runs hourly; when this happens I get "The definition of object 'stored_procedure' has changed since it was compiled."
Is there something I can add to the procedure? A setting?
I think you can use a database-level DDL trigger to prevent changes, checking inside the trigger whether the stored procedure is currently running, something like this:
USE [YourDatabase]
GO
ALTER TRIGGER [DDLTrigger_Sample]
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
BEGIN
IF EXISTS (SELECT TOP 1 1
           FROM sys.dm_exec_requests req
           CROSS APPLY sys.dm_exec_query_plan(req.plan_handle) sqlplan
           WHERE sqlplan.objectid = OBJECT_ID(N'GetFinanceInformation'))
BEGIN
PRINT 'GetFinanceInformation is running and cannot be changed'
ROLLBACK
END
END
That way you can prevent the stored procedure from being changed while it is executing; if it's not being executed, changes are reflected as usual. Hope this helps.
You should do some research and testing to confirm this is actually the case. Altering a sproc while it is executing should not impact the in-flight run.
Open two SSMS windows; run Query 1 first, then switch to window 2 and run that query.
Query 1
CREATE PROCEDURE sp_altertest
AS
BEGIN
SELECT 'This is a test'
WAITFOR DELAY '00:00:10'
END
GO
EXEC sp_altertest
Query2
alter procedure sp_altertest AS
BEGIN
SELECT 'This is a test'
WAITFOR DELAY '00:00:06'
END
GO
Exec sp_altertest
Query 1 should continue to run with a 10 sec execution time while Query 2 runs with a 6 sec runtime. The sproc's plan is cached at the time of the run and held in memory, so the ALTER should have no impact on the execution already in progress.
I have created a stored procedure and can see it under the Stored Procedures node, but when I try to execute it, it doesn't find the procedure.
Under stored procedure node it is called dbo.CopyTable
exec CopyTable
CopyTable is underlined in red, saying it does not exist. Why?
Even if I right-click on the procedure and choose "Script Stored Procedure as > EXECUTE To" - the code it generates is underlined in red and can't find the stored procedure either.
Ensure that the database selected contains the stored procedure CopyTable
USE YourDatabase
EXEC CopyTable
Try adding dbo and selecting the right database,
USE databaseName
GO
EXEC dbo.CopyTable
GO
Execute a Stored Procedure
Most likely you are just in the wrong database in the query window, you can specify the database like this:
EXEC [yourDBName].dbo.CopyTable
Reading on how to Execute a Stored Procedure
Considering your updated question:
Even if I right-click on the procedure and choose "Script Stored Procedure as > EXECUTE To" - the code it generates is underlined in red and can't find the stored procedure either.
This could happen if your stored procedure is invalid. Please double-check the validity of the SPROC and ensure the tables it references exist, etc.
Try running your CREATE PROCEDURE. Highlight it, f5 it, and then make sure it runs before you call it elsewhere.
Maybe in your procedure you've accidentally cut-pasted your script name (dbo.CopyTable), say something like...
SELECT * FROM dbo.CopyTable
WHERE ClientId = @ClientId
RETURN
Then when you call your proc you get 'invalid object name dbo.CopyTable' and assume SQL is having trouble finding the stored proc ... which isn't the problem: it is finding and running the proc, but there's actually a problem within the proc.
When it comes to creating stored procedures, views, functions, etc., is it better to do a DROP...CREATE or an ALTER on the object?
I've seen numerous "standards" documents stating to do a DROP...CREATE, but I've seen numerous comments and arguments advocating for the ALTER method.
The ALTER method preserves security, while I've heard that the DROP...CREATE method forces a recompile of the entire SP the first time it's executed instead of just a statement-level recompile.
Can someone please tell me if there are other advantages / disadvantages to using one over the other?
ALTER will also force a recompile of the entire procedure. Statement-level recompile applies to statements inside procedures, e.g. a single SELECT, that are recompiled because the underlying tables change, without any change to the procedure. It wouldn't even be possible to selectively recompile just certain statements on ALTER PROCEDURE; in order to understand what changed in the SQL text after an ALTER PROCEDURE, the server would have to ... compile it.
For all objects ALTER is always better because it preserves all security, all extended properties, all dependencies and all constraints.
This is how we do it:
if object_id('YourSP') is null
exec ('create procedure dbo.YourSP as select 1')
go
alter procedure dbo.YourSP
as
...
The code creates a "stub" stored procedure if it doesn't exist yet, otherwise it does an alter. In this way any existing permissions on the procedure are preserved, even if you execute the script repeatedly.
Starting with SQL Server 2016 SP1, you now have the option to use CREATE OR ALTER syntax for stored procedures, functions, triggers, and views. See CREATE OR ALTER – another great language enhancement in SQL Server 2016 SP1 on the SQL Server Database Engine Blog. For example:
CREATE OR ALTER PROCEDURE dbo.MyProc
AS
BEGIN
SELECT * FROM dbo.MyTable
END;
Altering is generally better. If you drop and create, you can lose the permissions associated with that object.
If you have a function/stored proc that is called very frequently from a website for example, it can cause problems.
The stored proc will be dropped for a few milliseconds/seconds, and during that time, all queries will fail.
If you do an alter, you don't have this problem.
The templates for newly created stored procs are usually of this form:
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'P' AND name = '<name>')
BEGIN
DROP PROCEDURE <name>
END
GO
CREATE PROCEDURE <name>
......
However, the opposite is better, imo:
If the storedproc/function/etc doesn't exist, create it with a dummy select statement. Then, the alter will always work - it will never be dropped.
We have a stored proc for that, so our stored procs/functions usually like this:
EXEC Utils.pAssureExistance 'Schema.pStoredProc'
GO
ALTER PROCEDURE Schema.pStoredProc
...
and we use the same stored proc for functions:
EXEC Utils.pAssureExistance 'Schema.fFunction'
GO
ALTER FUNCTION Schema.fFunction
...
In Utils.pAssureExistance we do an IF and look at the first character after the ".": if it's an "f", we create a dummy function; if it's a "p", we create a dummy stored proc.
Be careful though, if you create a dummy scalar function, and your ALTER is on a table-valued function, the ALTER FUNCTION will fail, saying it's not compatible.
Again, Utils.pAssureExistance can be handy, with an additional optional parameter
EXEC Utils.pAssureExistance 'Schema.fFunction', 'TableValuedFunction'
will create a dummy table-valued function.
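The helper itself isn't shown above, but a minimal sketch of what such a Utils.pAssureExistance procedure might look like, following the convention described (the first character after the "." decides the object type), could be:
CREATE PROCEDURE Utils.pAssureExistance
    @qualifiedName NVARCHAR(256),          -- e.g. 'Schema.pStoredProc' or 'Schema.fFunction'
    @objectType    NVARCHAR(50) = NULL     -- optional, e.g. 'TableValuedFunction'
AS
BEGIN
    IF OBJECT_ID(@qualifiedName) IS NOT NULL
        RETURN   -- the object already exists, nothing to do

    DECLARE @firstChar NCHAR(1)
    SET @firstChar = SUBSTRING(@qualifiedName, CHARINDEX('.', @qualifiedName) + 1, 1)

    IF @firstChar = 'p'
        EXEC ('CREATE PROCEDURE ' + @qualifiedName + ' AS SELECT 1')
    ELSE IF @objectType = 'TableValuedFunction'
        EXEC ('CREATE FUNCTION ' + @qualifiedName + ' () RETURNS TABLE AS RETURN (SELECT 1 AS Dummy)')
    ELSE
        EXEC ('CREATE FUNCTION ' + @qualifiedName + ' () RETURNS INT AS BEGIN RETURN 1 END')
END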
Additionally, I might be wrong, but I think if you do a DROP PROCEDURE while a query is currently using the stored proc, it will fail.
However, an alter procedure will wait for all queries to stop using the stored proc, and then alter it. If the queries are "locking" the stored proc for too long (say a couple seconds), the ALTER will stop waiting for the lock, and alter the stored proc anyway: the queries using the stored proc will probably fail at that point.
DROP generally loses permissions AND any extended properties.
On some UDFs, ALTER will also lose extended properties (definitely on SQL Server 2005 multi-statement table-valued functions).
I typically do not DROP and CREATE unless I'm also recreating those things (or know I want to lose them).
I don't know if it's possible to make such a blanket statement and say "ALTER is better". I think it all depends on the situation. If you require this sort of granular permissioning down to the procedure level, you should probably handle it in a separate procedure. There are benefits to dropping and recreating: it cleans out existing security and resets it to something predictable.
I've always preferred drop/recreate, and I've found it easier to store them that way in source control, instead of doing ... if exists do alter and if not exists do create.
With that said... if you know what you're doing... I don't think it matters too much.
If you perform a DROP and then use a CREATE, you have almost the same effect as using an ALTER VIEW statement. The problem is that you need to entirely re-establish your permissions on who can and can't use the view. ALTER retains any dependency information and set permissions.
You've asked a question specifically relating to DB objects that do not contain any data, and theoretically should not be changed that often.
It's likely you may need to edit these objects, but not every 5 minutes. Because of this I think you've already hit the nail on the head - permissions.
Short answer: it's not really an issue, as long as permissions are not an issue.
We used to use ALTER while we were working in development, either creating new functionality or modifying existing functionality. When we were done with our development and testing we would then do a drop and create. This modifies the date/time stamp on the procs so you can sort them by date/time.
It also allowed us to see what was bundled by date for each deliverable we sent out.
A create with a drop-if-exists is better because, if you have multiple environments, when you move the script to QA, test or prod you don't know whether the object already exists in that environment. By adding a drop (if it already exists) and then the create, you are covered regardless of whether it exists or not. You then have to reapply permissions, but that's better than hearing your install script errored out.
From a usability point of view, a drop and create is better than an alter. ALTER will fail in a database that doesn't contain that object, but an IF EXISTS DROP followed by a CREATE will work in a database where the object already exists as well as in a database where it doesn't. In Oracle and PostgreSQL you normally create functions and procedures with CREATE OR REPLACE, which does the same as a SQL Server IF EXISTS DROP followed by a CREATE. It would be nice if SQL Server picked up this small but very handy syntax.
This is how I would do it. Put all this in one script for a given object.
IF EXISTS ( SELECT 1
FROM information_schema.routines
WHERE routine_schema = 'dbo'
AND routine_name = '<PROCNAME>'
AND routine_type = 'PROCEDURE' )
BEGIN
DROP PROCEDURE <PROCNAME>
END
GO
CREATE PROCEDURE <PROCNAME>
AS
BEGIN
END
GO
GRANT EXECUTE ON <PROCNAME> TO <ROLE>
GO