After inserting default values into the table, I store the result of SCOPE_IDENTITY() in a variable:
insert into OrderPlaced default values;
declare @id bigint;
set @id = SCOPE_IDENTITY();
After this, I have to run some other pieces of code that change the value of SCOPE_IDENTITY(), and after running them I need to use the value of @id again. But it shows an error saying that I must declare the variable, which I have already done above.
EXEC dbo.GetRecieptById @ID = @id;
Unfortunately, I can't just select the whole code block and execute it at once, as this is for a presentation and I have to show each individual step.
What you are asking is how to persist the variable across batches, not within a batch.
One way would be to use SESSION_CONTEXT:
declare @id bigint;
insert into OrderPlaced default values;
set @id = SCOPE_IDENTITY();
EXEC sys.sp_set_session_context @key = N'@id', @value = @id;
GO
declare @id bigint = CAST(SESSION_CONTEXT(N'@id') AS BIGINT);
EXEC dbo.GetRecieptById @ID = @id;
What do you want to do?
If you want to access the latest row in your OrderPlaced table, you can get it with this code:
SELECT MAX(ID) FROM OrderPlaced
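Applied to the example above, a later batch could then look like this (a sketch that assumes the identity column is named ID and that nothing else inserts into the table in between):

declare @id bigint = (SELECT MAX(ID) FROM OrderPlaced);
EXEC dbo.GetRecieptById @ID = @id;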
A local variable cannot be used in a separate batch. You have to store the value in a temporary table.
These tables are stored in tempdb. Use a local temporary table with one # or a global temporary table with two ## at the beginning of the table name, as follows:
create table #local_temp_table
(Id bigint not null);
...
insert into #local_temp_table ...
...
select Id from #local_temp_table;
OR
create table ##global_temp_table
(Id bigint not null);
...
insert into ##global_temp_table ...
...
select Id from ##global_temp_table;
They are automatically dropped when they go out of scope; however, you can also drop them manually.
Take a look at the following link:
Temporary Tables in SQL Server
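Applied to the original example, a minimal sketch of the local temp table approach might look like this (the table name #ids is just an illustration); the temp table survives the GO because it lives for the whole session:

create table #ids (Id bigint not null);

insert into OrderPlaced default values;
insert into #ids (Id) values (SCOPE_IDENTITY());
GO

declare @id bigint = (select top (1) Id from #ids);
EXEC dbo.GetRecieptById @ID = @id;

drop table #ids;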
I have a stored procedure like this:
CREATE PROCEDURE [dbo].[create_myNewId]
(@parentId BIGINT)
AS
BEGIN
SET NOCOUNT ON;
INSERT INTO [Mapping] (ParentId)
VALUES (@parentId)
SELECT SCOPE_IDENTITY();
END
This, when run on its own, returns the new id that has been assigned to the new row that's inserted with the parent id. However, when I do something like this:
DECLARE @NewId int
EXEC @NewId = create_myNewId @parentId = 33333
SELECT @NewId
When running this, the output window shows the result of the stored procedure, which returns an Id, but @NewId is always 0. I fixed this by changing the stored procedure to use RETURN SCOPE_IDENTITY(), but I was wondering why SELECT didn't work in this case.
My suspicion is that the 0 is the success status returned first from the stored procedure rather than the result, but I was curious why this doesn't happen when the procedure is called directly from the client.
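A minimal sketch of the behavior in question (the procedure name is made up for illustration): EXEC @x = proc captures only the integer RETURN status, while a SELECT inside the procedure produces a result set that is sent to the client instead.

CREATE PROCEDURE dbo.demo_ReturnVsSelect
AS
BEGIN
    SELECT 42 AS SomeResult; -- goes to the client as a result set
    RETURN 7;                -- this is what EXEC @x = ... receives
END
GO

DECLARE @x int;
EXEC @x = dbo.demo_ReturnVsSelect;
SELECT @x; -- 7, not 42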
No! Write the procedure the right way:
CREATE PROCEDURE [dbo].[create_myNewId] (
    @parentId bigint,
    @outId bigint OUTPUT
) AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ids TABLE (id bigint);

    INSERT INTO [Mapping] (ParentId)
        OUTPUT INSERTED.id INTO @ids (id)
        VALUES (@parentId);

    SELECT @outId = id
    FROM @ids;
END;
Then call this as:
DECLARE @NewId bigint;
EXEC create_myNewId @parentId = 33333, @outId = @NewId OUTPUT;
SELECT @NewId;
The OUTPUT clause is the recommended way to get results from a data-modification statement. The older methods using the *_IDENTITY() functions should be considered obsolete.
Stored procedures do return values. These are integers that are designed to return status information. Other information should be returned via OUTPUT parameters.
Microsoft's design intent for stored procedures is that they always return an int to describe how successful the process undertaken by the procedure was. It's not intended to return result data, and you're free to define the values you return to describe success, partial success, etc. You could abuse it to return integer result data (a count query, for example) if you wanted, but that's not the design intention.
Executing a SELECT query within a stored procedure creates a result set that you can read on your client, if the procedure is the kind that is intended to return data.
My suggestion is to use an OUTPUT parameter. Not only will it be 'easier' to use when calling the stored procedure, it will also be clearer to the person calling the stored procedure.
CREATE PROCEDURE [dbo].[create_myNewId]
    (@parentId BIGINT,
     @myNewId BIGINT OUTPUT)
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO [Mapping] ([ParentId])
    VALUES (@parentId);
    SET @myNewId = SCOPE_IDENTITY();
END;
GO
You would then call your stored procedure like this:
DECLARE @myNewId BIGINT;
EXECUTE [dbo].[create_myNewId] @parentId = 0,              -- bigint
                               @myNewId = @myNewId OUTPUT; -- bigint
SELECT [This was just inserted] = @myNewId;
For anyone who gets 0 as the return value from a stored procedure, check that the procedure executes against the right database and that only one procedure with that name exists in the given context. Also, output parameters aren't much use if you ever plan to access the database with an ORM and the procedure is expected to return an object's property.
I am trying to set up a stored proc that will have three parameters:
FK_List
String_of_Info
CreateId
I need to insert one entry into the table per foreign key from the FK_List. I was curious what the best way would be to structure the stored procedure to do this efficiently.
EDIT: Code snippet added
CREATE PROCEDURE StackOverFlowExample_BulkAdd
    @FKList VARCHAR(MAX),
    @Notes NVARCHAR(1000),
    @CreateId VARCHAR(50)
AS
BEGIN
    INSERT INTO [dbo].[StackOverflowTable] WITH (ROWLOCK)
        ([FKID], [Notes], [CreateId], [UpdateId])
    VALUES (@FKList, <---- this is the problem spot
            @Notes, @CreateId, @CreateId)
END
GO
Based on your comments, you simply need a slight edit:
CREATE PROCEDURE StackOverFlowExample_BulkAdd
    @Notes nvarchar(1000),
    @CreateId varchar(50)
AS
BEGIN
    INSERT INTO [dbo].[StackOverflowTable] WITH (ROWLOCK)
        ([FKID]
        ,[Notes]
        ,[CreateId]
        ,[UpdateId])
    select
        someID
        ,@Notes
        ,@CreateId
        ,@CreateId
    from FKListTable
END
GO
Here is a simple demo
This will insert a row into your table for each FK reference in the reference table with the parameters you pass in. That's all there is to it!
Here's another demo that may be more clear as I use a GUID for the primary key on the secondary table.
SECOND EDIT
Based on your comments, you will need a string splitter. I have added a common one, created by Jeff Moden. See the example here.
The final proc, after you create the function, will look like the one below. You need to change the comma in the function call to whatever the delimiter is in your application. Also, you should start using table-valued parameters.
CREATE PROCEDURE StackOverFlowExample_BulkAdd
    @FKList VARCHAR(MAX),
    @Notes nvarchar(1000),
    @CreateId varchar(50)
AS
BEGIN
    INSERT INTO [dbo].[StackOverflowTable] WITH (ROWLOCK)
        ([FKID]
        ,[Notes]
        ,[CreateId]
        ,[UpdateId])
    select item
        ,@Notes
        ,@CreateId
        ,@CreateId
    from dbo.DelimitedSplit8K(@FKList, ',')
END
And you can call it like so:
declare @FKList varchar(1000) = '1,2,3,4,5,6'
declare @Notes varchar(1000) = 'here is my note'
declare @CreatedId int = 1

exec StackOverFlowExample_BulkAdd @FKList, @Notes, @CreatedId
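If you are on SQL Server 2016 or later, the built-in STRING_SPLIT function could stand in for the custom splitter; a sketch under that assumption (the order of the split values is not guaranteed, which does not matter for this insert):

INSERT INTO [dbo].[StackOverflowTable] WITH (ROWLOCK)
    ([FKID], [Notes], [CreateId], [UpdateId])
SELECT value, @Notes, @CreateId, @CreateId
FROM STRING_SPLIT(@FKList, ',');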
I have a table with a trigger that is fired whenever an insert or update operation is performed on that table.
The trigger inserts a new row into another physical table.
First I load all the data to be inserted into a temporary table, and then I insert that data into my physical table (which has the trigger).
After the insert, all the records from the temporary table end up in the physical table, but the trigger executes only for the first record; for the rest of the records it does not execute.
Can anyone please help me with this issue?
NOTE: It works fine with a cursor, but for performance reasons I don't want to use one.
ALTER TRIGGER [dbo].[MY_TRG]
ON [dbo].[T_EMP_DETAILS]
FOR INSERT, UPDATE
AS
BEGIN
    IF UPDATE(S_EMPLOYEE_ID) OR UPDATE(S_GRADE_ID) OR UPDATE(D_EFFECTIVE_DATE) OR UPDATE(S_EMPLOYEE_STATUS)
    BEGIN
        DECLARE @EmpId varchar(6)
        DECLARE @HeaderId Int
        DECLARE @FYStartYear varchar(4)
        DECLARE @EffDate Smalldatetime
        DECLARE @UpdatedBy varchar(10)
        DECLARE @ActionType varchar(1)
        DECLARE @RowCount Int
        DECLARE @EmpRowCount Int
        DECLARE @AuditRowsCount Int
        DECLARE @EMP_STATUS VARCHAR(1)
        DECLARE @D_FIN_START_YEAR DATETIME
        DECLARE @Food_Count int

        SELECT @FYStartYear = CAST(YEAR(D_CURRENT_FY_ST_DATE) AS VARCHAR) FROM dbo.APPLICATION WHERE B_IS_CURRENT_FY = 1
        SELECT @UpdatedBy = 'SHARDUL'

        select @EmpId = S_EMPLOYEE_ID from inserted
        select @HeaderId = N_HEADER_TXN_ID from inserted
        select @EffDate = D_EFFECTIVE_DATE from inserted
        select @FLEXI_AMT = N_FLEX_BASKET_AMT from inserted
        select @EMP_STATUS = S_EMPLOYEE_STATUS from inserted
        select @D_FIN_START_YEAR = D_FIN_START_DATE from inserted

        SELECT @RowCount = count(*) from T_EMP_DETAILS
        WHERE S_EMPLOYEE_ID = @EmpId and
              SUBSTRING(CAST(D_EFFECTIVE_DATE AS VARCHAR), 1, 11) = SUBSTRING(CAST(@EffDate AS VARCHAR), 1, 11)

        BEGIN
            exec INSERT_DEFAULT_VALUES @EmpId, @HeaderId, @UpdatedBy
        END
    END
END
That's one of many reasons BULK INSERT is so fast :). Read the BULK INSERT syntax and you'll see the FIRE_TRIGGERS option. Use it.
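A sketch of what that looks like (the file path and format options are placeholders for whatever your bulk load actually uses):

BULK INSERT dbo.T_EMP_DETAILS
FROM 'C:\data\emp_details.csv'      -- placeholder path
WITH (FIELDTERMINATOR = ',',
      ROWTERMINATOR = '\n',
      FIRE_TRIGGERS);               -- without this, BULK INSERT skips the trigger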
As I wrote in my comment, you are using inserted in an improper way. As written, it will work for only one row.
The second issue is the WEIRD number of variables, of which only a few are used. Why?
Third, you are calling a stored procedure at the end of the trigger; you need to post its code. I bet there is some insert in it; maybe you could avoid the procedure and insert into the target table directly from inserted.
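To illustrate that last point, here is a set-based sketch of the trigger that handles every row of inserted at once; the target table and its columns are made up here, since the real insert is hidden inside the stored procedure:

ALTER TRIGGER [dbo].[MY_TRG]
ON [dbo].[T_EMP_DETAILS]
FOR INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    IF UPDATE(S_EMPLOYEE_ID) OR UPDATE(S_GRADE_ID)
       OR UPDATE(D_EFFECTIVE_DATE) OR UPDATE(S_EMPLOYEE_STATUS)
    BEGIN
        -- one INSERT covers all rows affected by the triggering statement
        INSERT INTO dbo.T_EMP_AUDIT (S_EMPLOYEE_ID, N_HEADER_TXN_ID, S_UPDATED_BY)
        SELECT i.S_EMPLOYEE_ID, i.N_HEADER_TXN_ID, 'SHARDUL'
        FROM inserted AS i;
    END
END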
I am using a VB.Net transaction to execute two queries. There are two tables, and the following is an example structure.
USER
----
1. USER_ID - int (PK) AUTO_INCREMENT
2. USER_NAME - varchar(20)
ADDRESS
-----
1. USER_ID
2. USER_ADDRESS
As this basic structure shows, a USER can have many child rows (addresses here). Whenever I insert a new record into the USER table, the related rows should be saved with the USER_ID that was automatically created.
I know that I need to use SCOPE_IDENTITY() for this purpose, but I always get NULL for the SCOPE_IDENTITY() value. This isn't because of a trigger or anything else; the issue lies in how VB.Net creates my INSERT statement.
Here is how the queries look in SQL Server Profiler.
Insert to the USER Table
exec sp_executesql N'INSERT INTO USER(USER_NAME) VALUES (@USER_NAME)',N'@USER_NAME nvarchar(4)',@USER_NAME=N'ABCD'
Insert to the ADDRESS Table
exec sp_executesql N'INSERT INTO ADDRESS(USER_ID,USER_ADDRESS) VALUES ((SELECT SCOPE_IDENTITY()),@USER_ADDRESS)',N'@USER_ADDRESS nvarchar(10)',@USER_ADDRESS=N'ABCDEFGHIJ'
I have appended SELECT SCOPE_IDENTITY() directly into the second query, and I think SQL treats the SCOPE_IDENTITY() command as a string. How do I prevent this from happening?
As the name implies, SCOPE_IDENTITY() is local to a scope, and sp_executesql runs inside its own scope. A later call to sp_executesql has no memory of the earlier scope.
The most logical solution would be to run both queries in the same scope. I'm not sure why you are using sp_executesql; perhaps you can omit that. Most clients run something like:
INSERT INTO USER(USER_NAME) VALUES (@USER_NAME);
SELECT SCOPE_IDENTITY() AS ID;
The client program can get the ID from the resulting row set. It can then pass the ID as a parameter to the INSERT queries for addresses. No sp_executesql required.
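A sketch of the single-scope version, using the column names from the question (the whole batch is sent to the server as one command):

DECLARE @NewUserId int;

INSERT INTO [USER] (USER_NAME) VALUES (@USER_NAME);
SET @NewUserId = SCOPE_IDENTITY();

INSERT INTO ADDRESS (USER_ID, USER_ADDRESS)
VALUES (@NewUserId, @USER_ADDRESS);

SELECT @NewUserId AS ID;   -- returned to the client if it still needs the new id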
If you must use more than one sp_executesql, consider using an output parameter to ferry the identity out:
declare @ID bigint;

exec sp_executesql
    N'INSERT INTO USER(USER_NAME) VALUES (@USER_NAME);
      SELECT @ID = SCOPE_IDENTITY();',
    N'@USER_NAME nvarchar(4), @ID bigint output',
    @USER_NAME = N'ABCD',
    @ID = @ID output;
You can now pass @ID as a variable to your second sp_executesql.
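For example, the second call might look like this (parameter names assumed from the question):

exec sp_executesql
    N'INSERT INTO ADDRESS (USER_ID, USER_ADDRESS) VALUES (@USER_ID, @USER_ADDRESS);',
    N'@USER_ID bigint, @USER_ADDRESS nvarchar(10)',
    @USER_ID = @ID,
    @USER_ADDRESS = N'ABCDEFGHIJ';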
I have a stored procedure that has an argument named Id:
CREATE PROCEDURE [TargetSp](
@Id [bigint]
)
AS
BEGIN
Update [ATable]
SET [AColumn] =
(
Select [ACalculatedValue] From [AnotherTable]
)
Where [ATable].[Member_Id] = @Id
END
So I need to use it for a list of Ids, not for a single Id, like:
Exec [TargetSp]
    @Id IN (Select [M].[Id] From [Member] AS [M] Where [M].[Title] = 'Example');
First: How can I execute it for a list?
Second: Is there any performance difference between executing the SP many times and rewriting the logic in the target script?
You could use a table-valued parameter (see http://msdn.microsoft.com/en-us/library/bb510489.aspx). Generally, if you send only one request to the server instead of a list of requests you will see a shorter execution time.
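A sketch of that approach for this procedure; the type name dbo.IdList and the procedure name TargetSp_ForList are made up for illustration:

CREATE TYPE dbo.IdList AS TABLE (Id bigint NOT NULL);
GO

CREATE PROCEDURE dbo.TargetSp_ForList (
    @Ids dbo.IdList READONLY
)
AS
BEGIN
    UPDATE [ATable]
    SET [AColumn] =
    (
        Select [ACalculatedValue] From [AnotherTable]
    )
    WHERE [ATable].[Member_Id] IN (SELECT Id FROM @Ids);
END
GO

-- Fill the parameter with the ids from the question and make a single call:
DECLARE @Ids dbo.IdList;
INSERT INTO @Ids (Id)
SELECT [M].[Id] FROM [Member] AS [M] WHERE [M].[Title] = 'Example';

EXEC dbo.TargetSp_ForList @Ids = @Ids;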
I normally pass information like that in as XML; then you can use it just as if it were a table, selecting, inserting, and updating as necessary.
DECLARE @IDS NVARCHAR(MAX), @IDOC INT
SET @IDS = N'<ROOT><ID>1</ID><ID>2</ID></ROOT>'

EXEC sp_xml_preparedocument @IDOC OUTPUT, @IDS

SELECT [ID] FROM OPENXML (@IDOC, '/ROOT/ID', 2) WITH ([ID] INT '.') AS XMLDOC

EXEC sp_xml_removedocument @IDOC
Similar to freefaller's example, but using the xml type instead and inserting into a table variable @ParsedIds:
DECLARE @IdXml XML = N'<root><id value="1"/><id value="2"/></root>'
DECLARE @ParsedIds TABLE (parsedId int not null)

INSERT INTO @ParsedIds (parsedId)
SELECT v.parsedId.value('@value', 'int')
FROM @IdXml.nodes('/root/id') as v(parsedId)

SELECT * FROM @ParsedIds
Interestingly, I've worked on a large-scale system with thousands of users, and we found that this method outperformed the table-valued parameter approach for small lists of ids (no more than, say, 5 ids). The table-valued parameter approach was faster for larger lists of ids.
EDIT following edited question:
Looking at your example, it looks like you want to update ATable based on the Title value. If you can, you'd benefit from rewriting your stored procedure to accept the title parameter instead:
create procedure [TargetSP](
@title varchar(50)
)
as
begin
update [ATable]
set [AColumn] =
(
select [ACalculatedValue] from [AnotherTable]
)
where [ATable].[Member_Id] in (select [M].[Id] from [Member] as [M] where [M].[Title] = @title);
end
Since you only care about all the rows with a title of 'Example', you shouldn't need to determine the list first and then tell SQL Server the list you want to update, since you can already identify those with a query. So why not do this instead (I'm guessing at some data types here):
ALTER PROCEDURE dbo.TargetSP
    @title VARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;

    -- only do this once instead of as a subquery:
    DECLARE @v VARCHAR(255) = (SELECT [ACalculatedValue] FROM [AnotherTable]);

    UPDATE a
        SET AColumn = @v
        FROM dbo.ATable AS a
        INNER JOIN dbo.Member AS m
            ON a.Member_Id = m.Id
        WHERE m.Title = @title;
END
GO
Now call it as:
EXEC dbo.TargetSP @title = 'Example';
DECLARE @VId BIGINT;

DECLARE [My_Cursor] CURSOR FAST_FORWARD READ_ONLY FOR
    Select [M].[Id] From [Member] AS [M] Where [M].[Title] = 'Example'

OPEN [My_Cursor]
FETCH NEXT FROM [My_Cursor] INTO @VId
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC [TargetSp] @Id = @VId
    FETCH NEXT FROM [My_Cursor] INTO @VId
END
CLOSE [My_Cursor]
DEALLOCATE [My_Cursor];
GO
If the parameter is an integer, you can only pass one value at a time.
Your options are:
Call the proc several times, once for each id.
Change the proc to accept a structure in which you can pass more than one id, such as a varchar holding a comma-separated list of values (not so good) or a table-valued parameter.
About the performance question: it would be faster to rewrite the proc to work through a list of ids than to call it several times, once per id, BUT unless you are dealing with a HUGE list of ids, I don't think you will see much of a difference.