Stored procedure calls a 2nd SP with transaction commit - SQL

I have one stored procedure that will need to call a second stored procedure, passing over one input parameter. Due to the nature of both stored procs, I'd like to be able to use transactions and a commit to ensure all elements are carried out before being committed.
If the relevant contents of the first stored proc are within the transaction, as well as the call to the 2nd stored proc, will this suffice, or would the events of the 2nd SP be committed separately?
I hope that makes sense, and thanks in advance for the help.

Yes, they will be part of the same transaction. In fact, even if you started a separate transaction inside the second procedure, it would not matter: nested transactions in SQL Server are not truly nested. The whole thing is committed or rolled back as one unit.
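A minimal sketch of that behaviour (the table and procedure names here are made up for illustration): the inner COMMIT only decrements @@TRANCOUNT, and a single ROLLBACK anywhere undoes all of the work.

```sql
CREATE PROCEDURE dbo.InnerProc @Value int
AS
BEGIN
    BEGIN TRANSACTION;          -- @@TRANCOUNT goes from 1 to 2, nothing more
    INSERT INTO dbo.SomeTable (Col1) VALUES (@Value);
    COMMIT TRANSACTION;         -- only decrements @@TRANCOUNT back to 1
END
GO

CREATE PROCEDURE dbo.OuterProc @Value int
AS
BEGIN
    BEGIN TRANSACTION;          -- the one real transaction
    UPDATE dbo.SomeTable SET Col2 = 1 WHERE Col1 = @Value;
    EXEC dbo.InnerProc @Value;  -- runs inside the same transaction
    COMMIT TRANSACTION;         -- both the UPDATE and the INSERT become
                                -- permanent here; a ROLLBACK instead would
                                -- undo both
END
```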

Related

Trigger calls a stored procedure; if we do a SELECT, will the return values be the new or old ones?

Using MS SQL Server, if a trigger calls a stored procedure which internally does a SELECT, will the returned values be the new or the old ones?
I know that inside the trigger I can access them via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse an existing stored procedure (which I cannot change) that internally does a SELECT on the triggered table and processes some logic with the results. I just want to know whether I can be sure that the existing logic will work (by accessing the NEW values).
I could simply try to simulate it with one update... but maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and never test, which could produce a different result.
I decided to ask someone who might know better. Thank you.
AFTER triggers (the default) fire after the DML action completes. When the proc is called within the trigger, the tables will reflect the changes made by the statement that fired the trigger, as well as any changes made within the trigger before calling the proc.
Note that these changes remain uncommitted until the trigger completes, or until an explicit transaction is later committed.
Since the procedure is running in the same transaction as the (presumably, "after") trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger executes as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction. A "complicated" stored procedure therefore keeps that transaction open longer, holds locks longer, makes responses back to users slower, and so on.
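As a sketch (the trigger, table, and procedure names are invented for illustration), the procedure called here will already see the NEW values, even though they are still uncommitted:

```sql
CREATE TRIGGER trg_Orders_AfterUpdate
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    -- Any SELECT inside dbo.ExistingProc against dbo.Orders sees the
    -- updated rows: the proc runs inside the same (still open)
    -- transaction as the UPDATE that fired this trigger.
    EXEC dbo.ExistingProc;
END
```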
Also, you said
internally makes a select on the triggered table and processes some logic with them.
If you just mean that the procedure selects the data in order to do some complex processing and then write it somewhere else inside the database, OK: that's not great (for the reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning it to the client application: don't do that.
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.

Revert SQL Operation Over Multiple Stored Procedures

We are working on an application where, sometimes, we need to delete (some) entries from an SQL table and replace them with a new list of rows. In our app, the logic goes:
Delete relevant_entries using delete_stored_procedure
For Each new_entry in new_entries
Insert new_entry using insert_stored_procedure
This logic works, but there is a catch. What if the DELETE works, but one of the INSERTS fail? We need to be able to revert the database back to before the table rows were deleted and any previous entries were inserted. Is there a way to do this and ONLY revert these things, without affecting any other unrelated operations that may have happened simultaneously? (i.e., a row gets created in another table- we don't want to revert that).
I understand that you can wrap multiple SQL statements in a transaction, but in this case, the logic is happening over multiple stored procedure calls. Any advice?
Create another stored procedure that:
creates a transaction
calls the stored procedures that do the work
commits or rolls back the transaction
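The steps above can be sketched as a wrapper procedure. The proc and table names are taken from the question's pseudocode, and the per-row loop is elided; SET XACT_ABORT ON makes most runtime errors doom and roll back the whole transaction, so the delete and the inserts succeed or fail together without touching unrelated work in other sessions:

```sql
CREATE PROCEDURE dbo.ReplaceEntries
AS
BEGIN
    SET XACT_ABORT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        EXEC dbo.delete_stored_procedure;        -- step 1: delete
        EXEC dbo.insert_stored_procedure;        -- step 2: insert
                                                 -- (call once per new_entry)
        COMMIT TRANSACTION;                      -- both steps stick together
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;                -- undoes only this work
        THROW;                                   -- re-raise for the caller
    END CATCH
END
```

Other sessions' concurrent operations on other tables are unaffected: a rollback undoes only the changes made inside this transaction.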

How to pass values from a table parameter in a Stored Procedure to another Stored Procedure?

I've written a stored procedure called FooUpsert that inserts and updates data in various tables. It takes a number of numeric and string parameters that provide the data. This procedure is in good shape and I don't want to modify it.
Next, I'm writing another stored procedure that serves as a sort of bulk insert/update.
It is of paramount importance that the procedure do its work as an atomic transaction. It would be unacceptable for some data to be inserted/updated and some not.
It seemed to me that the appropriate way of doing this would be to write a procedure with a table-valued parameter, say FooUpsertBulk. I began to write this stored procedure with a table parameter that holds data similar to what is passed to FooUpsert, the idea being that I can read it one row at a time and invoke FooUpsert with the values in each row. I realize that this may not be the best practice, but once again, FooUpsert is already written, plus FooUpsertBulk will be run at most a few times a day.
The problem is that in FooUpsertBulk, I don't know how to iterate the rows and pass the values in each row as parameters to FooUpsert. I do realize that I could change FooUpsert to accept a table-valued parameter as well, but I don't want to rewrite FooUpsert.
Can one of you SQL ninjas out there please show me how to do this?
My SQL server is MS-SQL 2008.
Wrapping various queries into an explicit transaction (i.e. BEGIN TRAN ... COMMIT or ROLLBACK) makes all of it an atomic operation. You can:
start the transaction from the app code (assuming that FooUpsert is called by app code) and hence handle the commit and rollback there as well. This still leaves lots of small operations, but in a single transaction and with no code changes needed.
start the transaction in a proc, calling FooUpsert in a loop that is contained in a TRY / CATCH so that you can handle the ROLLBACK if any call to FooUpsert fails.
copy the code from FooUpsert into a new FooUpsertBulk that accepts a TVP from the app code and handles everything as set-based operations. Adapt each of the queries in FooUpsertBulk from handling various input params to getting fields from the TVP table variables once the TVP is joined into the query. Keep FooUpsert in place until FooUpsertBulk is working.
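The second option can be sketched as follows. The TVP type, column names, and FooUpsert's parameter list are assumptions (adapt them to the real signature); the error handling uses RAISERROR rather than THROW because the question specifies SQL Server 2008:

```sql
CREATE TYPE dbo.FooTableType AS TABLE (FooId int, FooName nvarchar(100));
GO
CREATE PROCEDURE dbo.FooUpsertBulk
    @Rows dbo.FooTableType READONLY
AS
BEGIN
    DECLARE @FooId int, @FooName nvarchar(100);
    BEGIN TRY
        BEGIN TRANSACTION;
        -- iterate the TVP one row at a time
        DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
            SELECT FooId, FooName FROM @Rows;
        OPEN cur;
        FETCH NEXT FROM cur INTO @FooId, @FooName;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC dbo.FooUpsert @FooId, @FooName;   -- assumed signature
            FETCH NEXT FROM cur INTO @FooId, @FooName;
        END
        CLOSE cur; DEALLOCATE cur;
        COMMIT TRANSACTION;                        -- all rows, or none
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;   -- undo every FooUpsert
        DECLARE @Msg nvarchar(2048) = ERROR_MESSAGE();
        RAISERROR(@Msg, 16, 1);                    -- re-raise (2008-safe)
    END CATCH
END
```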

Transaction starting in stored procedure 1 and ending in SP3

I have several stored procedures in a job. In one of them I begin a transaction to delete some rows, and if the row count is greater than 10, I roll back. If it is not, however, I don't want to commit straight away, because two stored procedures later I do something similar, and if the count is greater than 10 in that instance, I want to roll back all the way to when I started the transaction (two stored procedures ago).
Is it possible to start a transaction in a stored procedure and have multiple rollbacks with a commit right at the end somewhere, or do I have to put all the code into one stored procedure to do that?
This sounds incredibly prone to failure.
Regardless, you will need to start the transaction in your code then, while using the same connection, execute the procs. The code would then commit or rollback once all the procs have executed.
Assuming this is c#, see the following question for answers: Call multiple SQL Server stored procedures in a transaction
You can write several stored procedures and then execute them as nested calls.
You can declare variables to capture the results, and use IF statements to either commit or raise an error so that a CATCH block rolls back the transaction.
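One way to sketch that as a controlling procedure (the step procedure names are invented; each step proc would raise an error when its count check exceeds 10): a single transaction spans all three steps, and a failure in any step rolls back everything to the start.

```sql
CREATE PROCEDURE dbo.RunJob
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        EXEC dbo.Step1_DeleteRows;   -- raises an error if its count > 10
        EXEC dbo.Step2_DoWork;
        EXEC dbo.Step3_DeleteRows;   -- raises an error if its count > 10
        COMMIT TRANSACTION;          -- reached only if every step passed
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;    -- undoes steps 1-3 in one go
        THROW;                       -- surface the failure to the job
    END CATCH
END
```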

C# SQL transaction

I am using a C# class that is calling a SQL stored procedure in a serializable transaction.
So if something goes wrong in the stored procedure, everything is rolled back.
I have one statement in the SQL stored procedure that should be always executed (even if the stored procedure fails at some point and a rollback occurs). The statement is an update of a record.
I cannot change the C# library, so I need to do this in my stored procedure.
Is there some way I can execute that one statement outside the transaction?
You could perhaps use SAVE TRANSACTION. It is not supported in distributed transactions, and your statement must be executed first, so it might not be what you are looking for.
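A sketch of that suggestion inside the procedure (the table and column names are made up): the statement that must survive runs before the savepoint, and on error only the work after the savepoint is undone. Note this only helps while the outer transaction ultimately commits; if the C# class rolls back the whole transaction, the UPDATE is lost too.

```sql
CREATE PROCEDURE dbo.DoWork
AS
BEGIN
    -- must come first: this survives a rollback to the savepoint
    UPDATE dbo.JobStatus SET LastRun = GETDATE() WHERE Id = 1;
    SAVE TRANSACTION BeforeWork;
    BEGIN TRY
        -- ... the statements that may fail go here ...
        PRINT 'work done';
    END TRY
    BEGIN CATCH
        -- undoes only the work after the savepoint, not the UPDATE,
        -- and leaves the caller's transaction open
        ROLLBACK TRANSACTION BeforeWork;
    END CATCH
END
```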
I have found the solution.
I didn't realize that SQL Server knew it was being called in a transactional manner by the C# class.
The update statement that should always be executed is the first step (and also last step) in the procedure. Let me clarify:
I have an IF function. If it is true, the update should occur no matter what. If it is false, some transactional logic should be executed.
Now, the C# class expects a result from the stored proc. If the proc doesn't return a result (as happens in the update branch), it rolls back the transaction.
So by just adding the following lines right after the update statement, the update occurs :)
IF @@TRANCOUNT > 0
BEGIN
COMMIT TRANSACTION
END
Your solution does not sound like a good one. For example, if your stored procedure becomes part of a bigger transaction, it will commit all changes made before it. Also, I believe no one would guess that your proc has such behaviour without first seeing the code.
The need to always execute some part of a proc sounds like a need for a security audit. So maybe you should use a trace, Extended Events, or SQL Server Audit instead.
If you really need what you say you need, you can use the method described here: How to create an autonomous transaction in SQL Server 2008