In a stored procedure, how to skip to the next query statement even if the previous statement fails - sql

I'm hoping someone can give me an idea on how to handle this situation. I have a stored procedure that updates various tables. Some queries require connecting to different linked servers. Sometimes those linked servers are down, and I need the procedure to still run the subsequent statements regardless. Below is an example:
--Stored Procedure
BEGIN
INSERT INTO table1
SELECT *
FROM Z1;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
INSERT INTO table1
SELECT *
FROM Z2;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
INSERT INTO table1
SELECT *
FROM Z3;
END

You can probably do what you want with TRY/CATCH blocks:
BEGIN
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z1;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z2;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z3;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
END;
This handles runtime errors. If you have compile-time errors -- such as tables not existing or columns not matching between the tables -- then this doesn't help.
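If the failure is a compile-time one (for example, a linked server being unreachable when the statement is compiled), a common workaround -- a sketch, not part of the answer above -- is to run each statement as dynamic SQL. The statement is then compiled at a lower execution level inside the TRY block, so the error becomes catchable:

```sql
BEGIN TRY
    -- Compiling the statement inside sp_executesql means a missing table
    -- or an unreachable linked server surfaces as a catchable runtime error
    -- instead of aborting the whole procedure.
    EXEC sp_executesql N'INSERT INTO table1 SELECT * FROM Z1;';
END TRY
BEGIN CATCH
    PRINT ERROR_MESSAGE();  -- log the failure and continue with the next statement
END CATCH;
```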

If this were run from, say, SSMS as a batch of ordinary queries, I would put batch separators (GO) between them to treat them separately. However, since this is a stored procedure, you can't do that. One way around it is to make each query its own stored procedure and add all of them as steps inside a SQL Server Agent job, with each step configured to continue on failure. You run the job and each step runs in order from top to bottom even if some in the middle fail.
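As a sketch of that job setup (the job name and child procedure are hypothetical), each step can be added with msdb's sp_add_jobstep, where @on_fail_action = 3 means "go to the next step":

```sql
-- Hedged sketch: register one job step per child procedure, and tell
-- SQL Server Agent to continue to the next step even on failure.
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'NightlyLinkedServerLoads',  -- hypothetical job name
    @step_name = N'Load from Z1',
    @subsystem = N'TSQL',
    @command = N'EXEC dbo.LoadFromZ1;',       -- hypothetical child procedure
    @on_success_action = 3,   -- go to the next step
    @on_fail_action = 3;      -- go to the next step even if this one fails
```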

This will also work. @@ROWCOUNT is SQL Server's equivalent of Oracle's SQL%ROWCOUNT:
--Stored Procedure
BEGIN
INSERT INTO table1
SELECT *
FROM Z1;
IF @@ROWCOUNT <> 1
INSERT INTO table1
SELECT *
FROM Z2;
IF @@ROWCOUNT <> 1
INSERT INTO table1
SELECT *
FROM Z3;
END

Related

Could a SELECT inside of a transaction lock the table?

I would like to know whether it's possible for a SELECT to block a table when it's inside a transaction.
It's something like this:
CREATE PROCEDURE InsertClient (@name NVARCHAR(256))
AS
BEGIN
DECLARE @id INT = 0;
BEGIN TRY
BEGIN TRAN InsertingClient
SELECT @id = MAX(ID) + 1 FROM Clients;
INSERT INTO Clients (Id, Name)
VALUES (@id, @name);
SELECT id, name
FROM Clients;
COMMIT TRAN InsertingClient
END TRY
BEGIN CATCH
ROLLBACK TRAN InsertingClient
END CATCH;
END
It's a dummy example, but if there are a lot of records in that table, and an API is receiving a lot of requests and calling this stored procedure, could it be blocked by the initial and final SELECT? Should I use the BEGIN and COMMIT only around the INSERT to avoid the blocking?
Thanks!
Based on the sample code you have provided, it is critical that the first SELECT stays within the transaction, because you are manually creating an id from the max id in the table, and without locking the table you could end up with duplicates. One assumes your actual code has locking hints (e.g. WITH (UPDLOCK, HOLDLOCK)) to ensure that.
However, your second SELECT should not be in the transaction: all it does is hold the locks acquired earlier in the transaction for the additional duration of the SELECT, when (again, based on the sample code) there is no need to do that.
As an aside there are much better ways to generate an id such as using an identity column.
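As a sketch of that aside (table and column names assumed from the question), an identity column removes the need for the MAX(ID) + 1 pattern entirely:

```sql
-- With an IDENTITY column, SQL Server assigns the id atomically at insert
-- time, so no SELECT MAX(ID) + 1 -- and no table lock -- is needed.
CREATE TABLE Clients (
    Id   INT IDENTITY(1,1) PRIMARY KEY,
    Name NVARCHAR(256) NOT NULL
);

DECLARE @name NVARCHAR(256) = N'Acme';  -- example value
INSERT INTO Clients (Name) VALUES (@name);

-- SCOPE_IDENTITY() returns the id generated for the row just inserted
-- in the current scope.
SELECT SCOPE_IDENTITY() AS NewId;
```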

Why is rollback not working for a table variable in SQL Server 2012?

I have created a table variable. My stored procedure contains lots of transactions.
Now whenever an error occurs, I want to roll back a specific transaction whose statements insert, update, or delete records in the table variable.
This is just an example of my actual problem :
declare @tab table (val int)
insert into @tab select 2
insert into @tab select 3
insert into @tab select 4
select * from @tab
begin tran
begin try
update @tab set val = 1
select 1/0;
commit
end try
begin catch
rollback
end catch
select * from @tab
Actual output: the final SELECT returns val = 1 for all three rows.
My expected output is: the original values 2, 3, 4.
So the rollback of the transaction is not working here. Why is it not working? Am I doing something wrong?
You are not using a temp table, you are using a table variable. There is a difference.
Temp tables participate in transactions; table variables don't. See http://blog.sqlauthority.com/2009/12/28/sql-server-difference-temp-table-and-table-variable-effect-of-transaction/
If you were to change your table variable @tab to a temporary table #tab, you would get your desired behavior.
Differences between temp and table variables: https://dba.stackexchange.com/questions/16385/whats-the-difference-between-a-temp-table-and-table-variable-in-sql-server/16386#16386
I have modified my question. Thanks for sharing your knowledge. But the question remains the same: why does it not work for a table variable?
The links I posted above go through that with more detail than I could.
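For illustration, a minimal sketch of the same repro rewritten with a temp table, which does roll back:

```sql
-- Same scenario, but with a temp table (#tab) instead of a table variable.
-- Temp tables participate in the transaction, so the UPDATE is undone.
create table #tab (val int);
insert into #tab values (2), (3), (4);

begin tran;
begin try
    update #tab set val = 1;
    select 1/0;        -- forces a divide-by-zero error
    commit;
end try
begin catch
    rollback;          -- undoes the UPDATE on #tab
end catch;

select * from #tab;    -- returns the original values 2, 3, 4
drop table #tab;
```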

sql identity insert interruption

This might be an obvious question.
I have a SQL stored procedure which runs a cursor.
The cursor inserts a certain number of records into a table.
The problem is that if the cursor runs, say, 1000 insert statements, those records' identity values must follow on from each other. If someone else runs an insert while the cursor is running, it will take up an identity value between two of the records the cursor is inserting.
Can anyone give me some tips to ensure that all the cursor's inserts follow each other?
Please note that my cursor might do 50,000 inserts, which means it will take a while to complete, so the table must not be interrupted while performing the inserts.
You can try this:
INSERT INTO YourTable WITH (TABLOCK)
...
...
...
BEGIN TRANSACTION t_Transaction
BEGIN TRY
INSERT INTO Table
SELECT *
FROM tablx
WITH (HOLDLOCK)
COMMIT t_Transaction
END TRY
BEGIN CATCH
ROLLBACK t_Transaction
END CATCH

ERROR in TRIGGER AFTER INSERT

I have a problem with the following TRIGGER in SQL Server 2008:
AFTER INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Order int,
        @Payment varchar(15)
select @Order = OrderID, @Payment = PaymentType from inserted
IF @Payment = 'paypal'
begin
    exec PaypalTransactionsUpdate @OrderID = @Order
end
ELSE
begin
    Return
end
END
I can't understand why the SELECT from the inserted table does not work. If I use something like:
select TOP 1 @order = OrderID, @payment = PaymentType from TABLENAME where PaymentType = 'Paypal' order by OrderID desc
it works perfectly. Can anybody help me? I want to use the inserted pseudo-table to prevent data errors, because the second statement is not safe: the top row may already have changed by the time the trigger runs.
I was using the TOP 1 select this weekend, and it worked for some orders but not for others, for no apparent reason. I saw this morning that the row I want to synchronize is only processed by the trigger on the next insert; that is, when a PayPal order comes in, it stays unsynchronized until the next order arrives and reruns the trigger. I am using the AFTER INSERT approach and can't understand why this happens. Any ideas?
I would also appreciate it if someone could give me a clue how to test triggers or see their error log.
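A common pitfall with this pattern: inserted can hold multiple rows when a single statement inserts several orders, and assigning its columns to scalar variables captures only one arbitrary row. A hedged sketch that processes every PayPal row (assuming PaypalTransactionsUpdate must be called once per order, as in the question):

```sql
-- Sketch: iterate over every PayPal row in the inserted pseudo-table
-- instead of assigning it to scalar variables (which sees only one row).
DECLARE @Order int;

DECLARE paypal_orders CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM inserted WHERE PaymentType = 'paypal';

OPEN paypal_orders;
FETCH NEXT FROM paypal_orders INTO @Order;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC PaypalTransactionsUpdate @OrderID = @Order;
    FETCH NEXT FROM paypal_orders INTO @Order;
END
CLOSE paypal_orders;
DEALLOCATE paypal_orders;
```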

Cursor performance issue on SQL Server 2008

I have a legacy stored procedure that creates a cursor to go through each row of a query result. The performance is pretty bad. When I check the query plan, most of the cost (> 47%) is on an object [tempdb].[CWT_PrimaryKey], which is created by the cursor in the stored procedure. I'm not sure how to improve performance in this case, since there is no way to do anything with this object that SQL Server creates in tempdb.
The pseudo-code in stored procedure like:
BEGIN TRY
BEGIN TRANSACTION
declare mycusorr cursor local fast_forward
for SELECT * From MyTab Where a = b;
open mycusorr;
fetch next from mycusorr into @v1, @v2, ...;
while @@fetch_status = 0
begin
    --some query to check rules from different tables
    Update AnotherTab Set column = value where id = @v1;
    if (there is error)
        insert error to error user log table;
    fetch next from mycusorr into @v1, @v2, ...;
end
close mycusorr;
deallocate mycusorr;
COMMIT;
END TRY
BEGIN CATCH
close mycusorr;
deallocate mycusorr;
SELECT ERROR_NUMBER() AS ErrorNumber, ERROR_MESSAGE() AS ErrorMessage;
ROLLBACK TRAN;
END CATCH
There is no primary key on MyTab, but an index is created on the columns used in the condition.
There are about 10,000 rows in MyTab. Running the stored procedure takes more than 3 hours and still doesn't finish. If I remove the transaction from the stored procedure, it is fast.
When I check locks with sp_lock, there are more than ten thousand X or IX locks on keys or pages of the table updated in the loop.
How about:
UPDATE t SET t.column = m.value
FROM dbo.AnotherTab AS t
INNER JOIN dbo.MyTab AS m
ON t.id = ... no idea what the join criteria is because your cursor uses SELECT *
WHERE m.a = m.b; -- I also don't think this is described well enough to guess
You can get a much, much, much better answer if you provide real code instead of pseudo-code.