I want to return output from a stored procedure by using INSERT INTO ... EXEC. For performance reasons, the target table is a memory-optimized table type.
I have now discovered that while the stored procedure is running, all rows affected inside the stored procedure are kept locked until the stored procedure completes.
Example:
insert into @ModifiedSecurities (SecurityID, AttributeTypeID)
exec Securities.spSecuritiesImportBody
@ProcessingID = @ProcessingID
During the execution of Securities.spSecuritiesImportBody (which takes up to 10 minutes), all table rows affected by spSecuritiesImportBody are locked until the stored procedure completes (even in tables that have nothing to do with the output of the stored procedure).
While this behavior might make sense within a single INSERT statement, I see no use for it here and therefore want to get rid of these locks.
Is there any way to execute the stored procedure without creating these locks?
Here a code sample I made:
Execute the preparation
Run the code
Try to select from dbo.ProcessingsTesting while the code is running. It won't be possible, as the table is locked. The lock is created during dbo.UpdProcessing. However, for some reason, the lock is not released.
select *
from dbo.ProcessingsTesting
-- start of preparation
drop procedure dbo.UpdProcessing
drop table dbo.ProcessingsTesting
drop procedure dbo.spSecuritiesImportBody
go
create table dbo.ProcessingsTesting
(
ProcessingID int,
EndDate datetime
)
insert into dbo.ProcessingsTesting
(
ProcessingID
)
select 1 union all
select 2 union all
select 3 union all
select 4 union all
select 5
-- stored procedure
go
create procedure dbo.spSecuritiesImportBody
(
@ProcessingID int
)
as
begin
exec dbo.UpdProcessing
@ProcessingID = @ProcessingID
WAITFOR DELAY '00:03:00'
-- return data
select 1, 2
end
-- stored procedure
go
create procedure dbo.UpdProcessing
(
@ProcessingID int
)
as
begin
update dbo.ProcessingsTesting
set EndDate = null
where ProcessingID = @ProcessingID
end
-- end of preparation
-- run the code
declare @ModifiedSecurities table
(
[SecurityID] [int] NOT NULL,
[AttributeTypeID] [smallint] NOT NULL
)
insert into @ModifiedSecurities (SecurityID, AttributeTypeID)
exec dbo.spSecuritiesImportBody
@ProcessingID = 1
Unless you begin and commit an explicit transaction, locks will be held on the modified rows until the outermost INSERT...EXEC statement completes. You can add an explicit transaction to the dbo.UpdProcessing proc (or surround the EXEC dbo.UpdProcessing with BEGIN TRAN and COMMIT) to release locks on the updated rows before the INSERT...EXEC completes:
ALTER PROCEDURE dbo.UpdProcessing
(
@ProcessingID int
)
AS
BEGIN TRAN;
UPDATE dbo.ProcessingsTesting
SET EndDate = null
WHERE ProcessingID = @ProcessingID
COMMIT;
GO
Although this will provide the desired results, it doesn't make much sense to me that one would update data unrelated to the SELECT results in the same stored procedure. It seems the procs should be called independently since they perform different functions.
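If the two functions were split as suggested, the caller would run the update and the data retrieval as two separate statements. This is only a sketch, reusing the sample object names, and it assumes spSecuritiesImportBody no longer performs the update itself:

```sql
-- Step 1: the update runs on its own; its implicit transaction
-- commits immediately, releasing the row locks.
EXEC dbo.UpdProcessing @ProcessingID = 1;

-- Step 2: capture the output of the (now read-only) body procedure.
DECLARE @ModifiedSecurities TABLE
(
    SecurityID int NOT NULL,
    AttributeTypeID smallint NOT NULL
);
INSERT INTO @ModifiedSecurities (SecurityID, AttributeTypeID)
EXEC dbo.spSecuritiesImportBody @ProcessingID = 1;
```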
In the procedure spSecuritiesImportBody you should also change the call
exec dbo.UpdProcessing @ProcessingID = @ProcessingID
to
exec dbo.UpdProcessing @ProcessingID
since naming the parameter adds nothing here.
You cannot avoid the locks themselves: SQL Server takes locks at the row, page, or table level to ensure that nobody changes the data while the server is executing statements against those records.
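If the real pain point is that readers are blocked for the whole run, one workaround is on the reader side rather than the writer side. This is a sketch only: NOLOCK implies dirty reads, and READ_COMMITTED_SNAPSHOT is a database-wide setting that should not be flipped casually:

```sql
-- Option A: dirty read; may see uncommitted or inconsistent rows.
SELECT *
FROM dbo.ProcessingsTesting WITH (NOLOCK);

-- Option B: row versioning; readers get the last committed version
-- of the rows instead of blocking. Enable once per database:
ALTER DATABASE CURRENT
SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```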
Related
I have a stored procedure, usp_region, and it has a SELECT statement with 50 columns in the result set. This procedure is called by multiple other stored procedures in our application.
Most of the stored procedures pass a parameter to this procedure and display the result set that it returns. I have one stored procedure, usp_calculatedDisplay, that takes the columns from this stored procedure, inserts the values into a temp table, and does some more calculations on the columns.
Here's a part of the code in usp_calculatedDisplay.
Begin Procedure
/* some sql statements */
Declare @tmptable table
(
-- all the 50 columns that are returned from the usp_region procedure
)
Insert Into @tmptable
exec usp_region @regionId = @id
Select t.*, /* a few calculated columns here */
From @tmptable t
End of procedure
Every time I add a column to the usp_region procedure, I also have to add it to this procedure; otherwise it breaks. This has become difficult to maintain, since it is quite possible for someone to miss adding a column to the usp_calculatedDisplay procedure when a column is added to usp_region.
In order to overcome this problem, I decided to do this:
Select *
Into #tmptable
From OPENROWSET('SQLNCLI',
'Server=localhost;Trusted_Connection=yes;',
'EXEC [dbo].[usp_region]')
The problem is that the 'Ad Hoc Distributed Queries' component is turned off, so I can't use this approach to overcome the issue. I was wondering if there are any other ways of solving this problem. I would really appreciate any help. Thank you!
Every time I add a column to the usp_region procedure
SQL Server is a structured, schema-bound database; it is not meant to handle cases where you need to change your structure every day.
If you add/remove columns that often, then you probably did not choose the right type of database, and you had better re-design your system.
It has become difficult to maintain it since it is highly possible for someone to miss adding a column to the usp_calculatedDisplay procedure when the column is added to the usp_region.
There are two simple solutions for this: (1) using DDL triggers - a very bad idea, but simple to implement and it works; (2) using my trick to select from a stored procedure.
Option 1: using DDL trigger
You can automate the entire procedure and ALTER the stored procedure usp_calculatedDisplay every time that the stored procedure usp_region is changed
https://learn.microsoft.com/en-us/sql/relational-databases/triggers/ddl-triggers
The basic approach is
CREATE OR ALTER TRIGGER NotGoodSolutionTrig ON DATABASE FOR ALTER_PROCEDURE AS BEGIN
DECLARE @var_xml XML = EVENTDATA();
IF(
@var_xml.value('(EVENT_INSTANCE/DatabaseName)[1]', 'sysname') = 'tempdb'
and
@var_xml.value('(EVENT_INSTANCE/SchemaName)[1]', 'sysname') = 'dbo'
and
@var_xml.value('(EVENT_INSTANCE/ObjectName)[1]', 'sysname') = 'usp_region'
)
BEGIN
-- Here you can parse the text of the stored procedure
-- and execute ALTER on the first SP
-- To make it simpler, you can design the procedure usp_region so that the column names sit in a specific row or between two comments, which helps us find them
-- The code of the stored procedure which you need to parse is in the value of:
-- @var_xml.value('(EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
-- For example we can print it
DECLARE @SP_Code NVARCHAR(MAX)
SET @SP_Code = CONVERT(NVARCHAR(MAX), @var_xml.value('(EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)'))
PRINT @SP_Code
-- In your case, you need to execute ALTER on the usp_calculatedDisplay procedure using the text from usp_region
END
END
Option 2: trick to select from stored procedure using sys.dm_exec_describe_first_result_set
This is a simple and direct way to get what you need.
CREATE OR ALTER PROCEDURE usp_calculatedDisplay AS
-- Option: using a regular table, so it will exist outside the scope of the dynamic query
DROP TABLE IF EXISTS MyTable;
DECLARE @sqlCommand NVARCHAR(MAX)
select @sqlCommand = 'CREATE TABLE MyTable(' + STRING_AGG ([name] + ' ' + system_type_name, ',') + ');'
from sys.dm_exec_describe_first_result_set (N'EXEC usp_region', null, 0)
PRINT @sqlCommand
EXECUTE sp_executesql @sqlCommand
INSERT MyTable EXECUTE usp_region;
SELECT * FROM MyTable;
GO
Note!!! Neither solution is recommended in production. My advice is to avoid such needs by redesigning your system. If you need to re-write 20 SPs, then do it and don't be lazy! Your goal should be what is best for the database usage.
I have a trigger for executing two procedures.
ALTER TRIGGER [dbo].[TRG_SP_SYNCH_CAB]
ON [VTBO_INTERFACE].[dbo].[T_TRIGGER_TABLE_FOR_SYNCH]
INSTEAD OF INSERT
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
INSERT INTO T_TRIGGER_TABLE_FOR_SYNCH (DT)
VALUES (GETDATE());
exec PUMPOMAT_HO.DBO.SP_CM_TransferCAB
exec PUMPOMAT_HO.DBO.SP_CM_UpdateCAB
END
Execution time for the two procedures is 5 minutes. When I try to insert a value into the T_TRIGGER_TABLE_FOR_SYNCH table, the other tables used in the stored procedures are locked for the whole 5 minutes. But when I execute the two procedures directly, like
exec SP_CM_TransferCAB
exec SP_CM_UpdateCAB
no lock happens. What should I write in the trigger to avoid these table locks?
Thanks.
Try calling the second procedure inside (at the end of) the first procedure, since I see no parameters are given.
Is this table [VTBO_INTERFACE].[dbo].[T_TRIGGER_TABLE_FOR_SYNCH] used in any of the procedures?
You should try to change the design/data flow to mimic this direct procedure call.
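A common way to change that data flow is to let the trigger only record that work is needed and have a SQL Agent job run the slow procedures outside the trigger's transaction, so the original INSERT returns immediately and holds no long locks. This is a sketch; the queue table and the job are invented names:

```sql
-- Queue table written by the trigger; the INSERT returns instantly.
CREATE TABLE dbo.SynchQueue
(
    QueueID   int IDENTITY(1,1) PRIMARY KEY,
    QueuedAt  datetime NOT NULL DEFAULT GETDATE(),
    Processed bit NOT NULL DEFAULT 0
);
GO
ALTER TRIGGER [dbo].[TRG_SP_SYNCH_CAB]
ON [dbo].[T_TRIGGER_TABLE_FOR_SYNCH]
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO T_TRIGGER_TABLE_FOR_SYNCH (DT) VALUES (GETDATE());
    INSERT INTO dbo.SynchQueue DEFAULT VALUES; -- just record the request
END
GO
-- A SQL Agent job polls the queue on a schedule and, when it finds
-- unprocessed rows, runs the two slow procedures and marks them done:
-- EXEC PUMPOMAT_HO.DBO.SP_CM_TransferCAB;
-- EXEC PUMPOMAT_HO.DBO.SP_CM_UpdateCAB;
-- UPDATE dbo.SynchQueue SET Processed = 1 WHERE Processed = 0;
```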
I have three stored procedures A, B, C
and definition of A is like
StoredProcedure A
As
Begin
--Some Stuff
Exec DBO.B [Derived Conditions]
Exec DBO.C [Derived Conditions]
END
but whenever I tried to execute stored procedure A, at parse time it gave these warnings:
The module 'A' depends on the missing object 'B'. The module will still be created;
however, it cannot run successfully until the object exists.
The module 'A' depends on the missing object 'C'. The module will still be created;
however, it cannot run successfully until the object exists.
At execution time it throws exception
Could not find stored procedure 'dbo.B'.
Could not find stored procedure 'dbo.C'.
I found many answers about calling a stored procedure from within a stored procedure, but none of them worked for me.
You certainly can execute multiple procedures from within a single SP. You can even use the results from one SP as parameters in another.
In your specific case, I suspect that there is a permissions/security or collation error which is stopping you from accessing the B and C stored procs.
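One way to verify that suspicion is to check that B and C exist under the dbo schema and that the caller holds EXECUTE permission on them. A sketch, to be run in the database where A was created:

```sql
-- NULL here means the object does not exist in this database/schema.
SELECT OBJECT_ID('dbo.B', 'P') AS B_ObjectId,
       OBJECT_ID('dbo.C', 'P') AS C_ObjectId;

-- 1 = the current principal holds EXECUTE permission, 0 = it does not.
SELECT HAS_PERMS_BY_NAME('dbo.B', 'OBJECT', 'EXECUTE') AS CanExecB,
       HAS_PERMS_BY_NAME('dbo.C', 'OBJECT', 'EXECUTE') AS CanExecC;
```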
Here is an example of SP chaining at work.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[DerivedProcedures]
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Temporary table used to store results from SP1
DECLARE @Results_ForStoredProcedure1 TABLE
(
[SPID] INT,
[Status] NVARCHAR(50),
[Login] NVARCHAR(50),
[HostName] NVARCHAR(50),
[BlkBy] NVARCHAR(5),
[DBName] NVARCHAR(50),
[Command] NVARCHAR(50),
[CPUTime] INT,
[DiskIO] INT,
[LastBatch] NVARCHAR(50),
[ProgramName] NVARCHAR(50),
[SPID2] INT,
[RequestId] INT
)
-- Execute SP1
INSERT INTO @Results_ForStoredProcedure1
EXEC sp_who2
-- Temporary table to store the results from SP2
DECLARE @Results_ForStoredProcedure2 TABLE
(
[DatabaseName] NVARCHAR(50),
[DatabaseSize] INT,
[Remarks] NVARCHAR(50)
)
-- Execute SP2
INSERT INTO @Results_ForStoredProcedure2
EXEC sp_databases
-- do something with both SP results
SELECT DISTINCT SP2.*
FROM @Results_ForStoredProcedure1 AS SP1
INNER JOIN @Results_ForStoredProcedure2 AS SP2 ON SP2.DatabaseName = SP1.DBName
WHERE SP1.DBName IS NOT NULL
END
GO
-- TEST
EXECUTE [dbo].[DerivedProcedures]
Perhaps it sounds hilarious, but I was getting the mentioned issue because I was using the wrong DB name (for example, USE XYZ). In my case, I was transferring an SP from one environment to another, but afterwards I did not change the corresponding DB name. I was getting the error because the SPs involved were present in different DBs in the two environments.
In a nutshell, please check the DB name, which should be the very first line of your SP.
For example: USE XYZ.
I am working with SQL Server 2005 and an old stored proc written by someone else. I just want to call that stored proc to get data, not modify it.
The problem is that the stored proc returns multiple result sets with exactly the same fields; just the data is a bit different. So when the stored proc is called in the front end, it populates two different data tables, which is fine.
But now I need to work with the combined results in Excel, so I don't have the advantage of multiple data tables.
Basically, I want to create a new stored proc that returns the union of the two result sets just by calling the existing stored proc. I don't want to create another copy of the stored proc, as I would then have to keep track of every change to the other stored proc and mirror it in mine.
Is there a way to access the second result set in SQL Server itself?
Thanks,
--Abhi
Create a proxy procedure that throws both sets into a temp table, and select from that.
Here's a test example...
/*Create first proc that returns two data sets*/
IF OBJECT_ID('ReturningTwoDataSets') IS NOT NULL
BEGIN
DROP PROCEDURE ReturningTwoDataSets
END
GO
CREATE PROCEDURE dbo.ReturningTwoDataSets
AS
BEGIN
SET NOCOUNT ON
SELECT '1' AS [col1]
,'2' AS [col2]
,'3' AS [col3]
SELECT '4' AS [col1]
,'5' AS [col2]
,'6' AS [col3]
END
GO
/*
Create new proc that combines both data sets
into a temp table and returns a single dataset
*/
IF OBJECT_ID('ReturningOneDataSet') IS NOT NULL
BEGIN
DROP PROCEDURE ReturningOneDataSet
END
GO
CREATE PROCEDURE dbo.ReturningOneDataSet
AS
BEGIN
SET NOCOUNT ON
IF OBJECT_ID('TempDB..#OneDataSet') IS NOT NULL
BEGIN
DROP TABLE #OneDataSet
END
CREATE TABLE #OneDataSet
(
[col1] VARCHAR(100)
,[col2] VARCHAR(100)
,[col3] VARCHAR(100)
)
INSERT INTO #OneDataSet
(
col1
,col2
,col3
)
EXEC ReturningTwoDataSets
SELECT * FROM #OneDataSet
END
GO
/*Execute the old proc*/
EXEC ReturningTwoDataSets
/*Execute the new proc*/
EXEC ReturningOneDataSet
How do I count, in the fastest way, the number of rows a stored procedure would return? The stored procedure returns around 100K to 1M records.
Select @@ROWCOUNT:
SELECT @@ROWCOUNT;
After executing the stored procedure.
You can define output variable:
create procedure x
(@p1 int output)
as
select @p1 = count(*)
from Table
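The caller then supplies a variable with the OUTPUT keyword and reads the count from it afterwards, for example:

```sql
DECLARE @rowCount int;
EXEC x @p1 = @rowCount OUTPUT;
SELECT @rowCount AS NumberOfRows;
```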
The answer to use @@ROWCOUNT is still valid, but I would not recommend running it directly after EXEC as in the existing answer.
A SELECT statement is not always the last statement in a stored procedure, and you could have multiple SELECT statements:
Scenario:
CREATE PROCEDURE p
AS
BEGIN
CREATE TABLE #t(i INT);
INSERT INTO #t(i) VALUES (1),(2);
SELECT i FROM #t;
DROP TABLE IF EXISTS #t;
END
EXEC p;
-- i
-- 1
-- 2
SELECT @@ROWCOUNT;
-- 0 instead of 2
db<>fiddle demo
One way is to use output parameter(as many as stored procedure resultset):
CREATE PROCEDURE p(@cnt INT OUT)
AS
BEGIN
CREATE TABLE #t(i INT);
INSERT INTO #t(i) VALUES (1),(2);
SELECT i FROM #t;
SET @cnt = @@ROWCOUNT; -- immediately after SELECT
DROP TABLE IF EXISTS #t;
END
DECLARE @i INT;
EXEC p @cnt = @i OUT;
SELECT @i;
-- 2
db<>fiddle demo
Create procedure procedurename
AS
Begin
Select * from Table --if you want where condition write here
End
Exec Procedurename
Select @@ROWCOUNT
I have a similar task with a restriction that I must not alter the SP to get the count. Hence:
sp_configure 'show advanced options', 1;
reconfigure;
go
sp_configure 'ad hoc distributed queries', 1;
reconfigure;
go
select count(*) from
openrowset('SQLOLEDB','Data Source=localhost;Trusted_Connection=yes;
Integrated Security=SSPI','exec DBNAME..SPName')
Another way to get the same result
CREATE PROCEDURE NOMBRE_PROCEDIMIENTO
as
BEGIN
if EXISTS (SELECT * from NOMBRE_TABLA WHERE CONDITIONS HERE)
BEGIN
SELECT @@ROWCOUNT
END
END
So far the only thing that has worked for me is to:
a. Temporarily modify the stored procedure to dump the resulting dataset into a table. If changing the stored procedure is not an option, replace ALTER PROCEDURE with DECLARE, remove the END, provide parameter values if they are not optional, and execute it as a query, dumping the dataset into a table.
b. Script the table and drop it using SSMS.
c. Use the script to create a table in the query and use INSERT INTO with EXEC stored procedure to populate it.
d. Count the records.
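Steps (a) through (d) can then be wrapped up in one script once the table has been scripted. A sketch; the procedure name and the column list are placeholders for your own:

```sql
-- (c) recreate the scripted table as a temp table and fill it
CREATE TABLE #ProcOutput
(
    -- paste the column list scripted by SSMS in step (b) here
    SampleColumn int
);
INSERT INTO #ProcOutput
EXEC dbo.YourStoredProc; -- placeholder name

-- (d) count the records
SELECT COUNT(*) AS TotalRows FROM #ProcOutput;
```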
/* immediately after the main result query, follow its execution with */
SELECT @mySProcRowcount = @@ROWCOUNT;
This has been reliable with non-cloud versions of MS SQL Server for many years.
Not so much anymore with the cloud-based versions of MS SQL Server. I, for one, feel MS earned a 'for shame' comment here in their rush to make all MS products cloud-based first and foremost. In their haste, they reveal that their cloud-based tools are less ready for prime time than they think.
It isn't insurmountable; it's just more work on your part for a quality, ready-for-prime-time solution.