MS SQL Try Catch Transaction - sql

I need to know if the SQL below is correct to roll back if something bad happens.
I first create a temp table:
IF OBJECT_ID(N'tempdb..#LocalSummery') IS NOT NULL
DROP TABLE #LocalSummery
CREATE TABLE #LocalSummery (
then I populate the table:
INSERT INTO #LocalSummery
SELECT *....
then I start my TRY/CATCH:
BEGIN TRY
BEGIN TRANSACTION
delete from LocalSummeryRealTable
INSERT INTO LocalSummeryRealTable
SELECT S.*
FROM #LocalSummery AS S
COMMIT
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
...
I noticed that when I did the insert into LocalSummeryRealTable, the fields didn't line up and I got an error. I expected things to roll back, but the SSMS query window gave me a message that the transaction wasn't committed and asked me if I wanted to commit it. Why would it ask me when the transaction should have been rolled back?
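The behavior described usually means the error aborted the batch before the CATCH block could run, leaving the transaction open. A defensive pattern (a sketch using the table names from the question; `Col1`/`Col2`/`Col3` are placeholder column names, since the real ones aren't shown) is to enable `XACT_ABORT` and guard the rollback with `@@TRANCOUNT`:

```sql
SET XACT_ABORT ON;  -- any error dooms the transaction and forces a rollback

BEGIN TRY
    BEGIN TRANSACTION;

    DELETE FROM LocalSummeryRealTable;

    -- Name the columns explicitly so source and target line up
    INSERT INTO LocalSummeryRealTable (Col1, Col2, Col3)
    SELECT S.Col1, S.Col2, S.Col3
    FROM #LocalSummery AS S;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW;  -- re-raise so the caller still sees the original error
END CATCH;
```

Listing the columns explicitly also turns the "fields didn't line up" problem into a clear compile-time mismatch instead of silently shifting data.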

Related

In a stored procedure how to skip to next query statement even if the previous statement fails

I'm hoping someone can give me an idea of how to handle this situation. I have a stored procedure that updates various tables. Some queries require connecting to different linked servers. Sometimes those linked servers are down, and I need the procedure to still run the next statements regardless. Below is an example:
--Stored Procedure
BEGIN
INSERT INTO table1
SELECT *
FROM Z1;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
INSERT INTO table1
SELECT *
FROM Z2;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
INSERT INTO table1
SELECT *
FROM Z3;
END
You can probably do what you want with TRY/CATCH blocks:
BEGIN
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z1;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z2;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
-- IF ABOVE FAILS GO TO NEXT QUERY ANYWAY
BEGIN TRY
INSERT INTO table1 SELECT * FROM Z3;
END TRY
BEGIN CATCH
-- you can do something here if you want
END CATCH;
END;
This handles runtime errors. If you have compile time errors -- such as tables not existing or the columns not matching between the tables, then this doesn't help.
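One way to make those compile-time failures catchable (a sketch, using `table1` and `Z1` from the question) is to push each statement into dynamic SQL. The statement is then compiled only when executed, so an unreachable linked server or a missing table surfaces as a runtime error that lands in the CATCH block:

```sql
BEGIN TRY
    -- Deferred compilation: if Z1 (or its linked server) is unreachable,
    -- the error is raised at EXEC time and is caught below.
    EXEC sys.sp_executesql N'INSERT INTO table1 SELECT * FROM Z1;';
END TRY
BEGIN CATCH
    PRINT 'Z1 failed: ' + ERROR_MESSAGE();  -- log and carry on to the next statement
END CATCH;
```

Repeat the same wrapper for Z2 and Z3; each failure is isolated to its own TRY/CATCH.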
If this were run from, say, SSMS as a set of ordinary queries, I would put batch separators (GO) between them to treat them separately. However, since this is a stored procedure, you can't do that. One way around it is to make one stored procedure per query and add all of them as steps in a SQL Server Agent job. When you run the job, each step runs in order from top to bottom even if some in the middle fail.
Even this will also work (@@ROWCOUNT is SQL Server's equivalent of Oracle's SQL%ROWCOUNT):
--Stored Procedure
BEGIN
INSERT INTO table1
SELECT *
FROM Z1;
IF @@ROWCOUNT <> 1
INSERT INTO table1
SELECT *
FROM Z2;
IF @@ROWCOUNT <> 1
INSERT INTO table1
SELECT *
FROM Z3;
END

what is the best way to insert multiple rows to production table in sql

I want to know the best practice for inserting multiple rows from a source table into a destination table. Both source and destination tables are on a production database, meaning delete and update transactions are being performed on the tables 24/7. Also, the destination table has an auto-incremented (IDENTITY) column, which means that if a rollback executes, the rollback does not decrement the identity value - find details here
BEGIN TRY
BEGIN TRANSACTION
INSERT INTO destinationTable (col1, col2, col3)
SELECT col1, col2, col3 FROM sourceTable
COMMIT TRANSACTION
PRINT 'TRANSACTION COMMITTED'
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION
PRINT 'TRANSACTION ROLLED BACK'
END CATCH
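On a table that is written to 24/7, one common refinement (a sketch; it assumes `sourceTable` has a key column `id` that identifies rows not yet copied, which isn't stated in the question) is to copy in small batches so each transaction is short and blocks concurrent writers as little as possible:

```sql
DECLARE @batch int = 5000;   -- rows per transaction; tune for your workload
DECLARE @rows  int;

WHILE 1 = 1
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO destinationTable (col1, col2, col3)
        SELECT TOP (@batch) s.col1, s.col2, s.col3
        FROM sourceTable AS s
        WHERE NOT EXISTS (SELECT 1 FROM destinationTable AS d WHERE d.id = s.id);

        -- Capture @@ROWCOUNT before COMMIT, which resets it to 0
        SET @rows = @@ROWCOUNT;

        COMMIT TRANSACTION;

        IF @rows = 0 BREAK;  -- nothing left to copy
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;
        THROW;
    END CATCH;
END;
```

A failed batch rolls back only that batch; already-committed batches stay in place, and identity gaps from any rollback are harmless, as the linked question explains.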

How to lock the transaction until a single query completes its execution, to avoid deadlock errors

I have the following code about which I have a doubt:
Update Statement on Table 1
Update Statement on Table 2
Select Statement which includes both tables
The code above returns the result to the application; it is the get-all function for the application.
I am getting deadlock errors in the application frequently.
I have hundreds of users fetching the same tables at the same time.
So I have to make sure that the select statement will not fire until the update statements complete, or find some way to lock during the update statements.
One more doubt: if I am updating one row and another user tries to select from that table, will he get a deadlock? (The user was trying to select a different row, one not touched by the update statement.) What will happen in this scenario?
Please help me.
Thanks in advance
You should use a transaction:
BEGIN TRANSACTION [Tran1]
BEGIN TRY
Update Statement on Table 1
Update Statement on Table 2
Select Statement which include both the Table 1
COMMIT TRANSACTION [Tran1]
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION [Tran1]
END CATCH
GO
If you want nobody to update/delete the row, I would go with the UPDLOCK on the SELECT statement. This is an indication that you will update the same row shortly, e.g.
SELECT @Bar = Bar FROM oFoo WITH (UPDLOCK) WHERE Foo = @Foo;
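Put in context, the read-then-update pattern looks like the sketch below (using the `oFoo`/`Foo`/`Bar` names from the answer; the column types are assumptions). Taking the update lock at SELECT time avoids the classic deadlock where two sessions each hold a shared lock on the same row and then both try to upgrade to an exclusive lock:

```sql
DECLARE @Foo int = 1, @Bar int;

BEGIN TRANSACTION;

-- UPDLOCK reserves the row for our coming UPDATE; ROWLOCK keeps the
-- lock's scope narrow so readers of other rows are not blocked.
SELECT @Bar = Bar
FROM oFoo WITH (UPDLOCK, ROWLOCK)
WHERE Foo = @Foo;

UPDATE oFoo
SET Bar = @Bar + 1
WHERE Foo = @Foo;

COMMIT TRANSACTION;
```

As to the other doubt: under the default READ COMMITTED isolation level, a SELECT of a different, un-updated row normally is not blocked by and does not deadlock with a single-row update, unless lock escalation or scans widen the footprint.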

sql identity insert interruption

This might be an obvious question.
I have a SQL stored procedure which runs a cursor.
The cursor inserts a certain number of records into a table.
The problem is that if the cursor runs, say, 1000 insert statements, those records' identity values must follow on from each other. If someone else runs an insert while the cursor is running, it will take an identity value between two of the records the cursor is inserting.
Can anyone give me some tips to ensure that all the cursor's inserts follow each other?
Please note that my cursor might do 50,000 inserts, which means it will take a while to complete, so the table must not receive other inserts while mine are in progress.
You can try this:
INSERT INTO YourTable WITH (TABLOCK)
...
...
...
BEGIN TRANSACTION t_Transaction
BEGIN TRY
INSERT INTO Table
SELECT *
FROM tablx
WITH (HOLDLOCK)
COMMIT TRANSACTION t_Transaction
END TRY
BEGIN CATCH
ROLLBACK TRANSACTION t_Transaction
END CATCH
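To actually guarantee no interleaved inserts, the load has to hold an exclusive lock on the target table for its whole duration. A sketch (the table and column names are assumptions, since the original procedure isn't shown):

```sql
BEGIN TRANSACTION;

-- TABLOCKX takes an exclusive lock on the target table and HOLDLOCK keeps
-- it until the transaction ends, so no other session can insert and claim
-- identity values in between. Note this blocks every other writer for the
-- entire load, which may be a long time for 50,000 rows.
INSERT INTO YourTable WITH (TABLOCKX, HOLDLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM YourSource;

COMMIT TRANSACTION;
```

Even so, the values are only contiguous for this load; identity values burned by earlier rolled-back inserts can still leave historical gaps.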

Why does the Try/Catch not complete in SSMS query window?

This sample script is supposed to create two tables and insert a row into each of them.
If all goes well, we should see OK and have two tables with data. If not, we should see FAILED and have no tables at all.
Running this in a query window displays an error for the second insert (as it should), but does not display either a success or a failed message. The window just sits waiting for a manual rollback. What am I missing in the transaction handling or the TRY/CATCH?
begin try
begin transaction
create table wpt1 (id1 int, junk1 varchar(20))
create table wpt2 (id2 int, junk2 varchar(20))
insert into wpt1 select 1,'blah'
insert into wpt2 select 2,'fred',0 -- <<< deliberate error on this line
commit transaction
print 'OK'
end try
begin catch
rollback transaction
print 'FAILED'
end catch
The problem is that your error is raised at compile (recompile) time, and that class of error aborts the batch immediately. TRY...CATCH handles most runtime errors, but it does not catch all errors.
Look for the documentation section "Errors Unaffected by a TRY...CATCH Construct".
What happens here is that after the tables are created, the following inserts are parsed and recompiled; the column-count mismatch triggers a statement-level recompilation error that aborts the batch, so the CATCH block never runs and the transaction is left open.
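A common mitigation (a sketch based on the script in the question) is to set `XACT_ABORT ON`. The CATCH block still may not run for this class of batch-aborting error, so 'FAILED' may never print, but the open transaction is rolled back automatically instead of leaving the window waiting on a manual commit:

```sql
SET XACT_ABORT ON;  -- batch-aborting errors now roll the transaction back too

BEGIN TRY
    BEGIN TRANSACTION;
    CREATE TABLE wpt1 (id1 int, junk1 varchar(20));
    CREATE TABLE wpt2 (id2 int, junk2 varchar(20));
    INSERT INTO wpt1 SELECT 1, 'blah';
    INSERT INTO wpt2 SELECT 2, 'fred', 0;  -- deliberate error: too many columns
    COMMIT TRANSACTION;
    PRINT 'OK';
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    PRINT 'FAILED';
END CATCH;
```

Another option is to split the DDL and the inserts into separate batches (or dynamic SQL) so the inserts are compiled only after the tables exist, making the mismatch a catchable error.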