Revert SQL Operation Over Multiple Stored Procedures

We are working on an application where, sometimes, we need to delete (some) entries from an SQL table and replace them with a new list of rows. In our app, the logic goes:
Delete relevant_entries using delete_stored_procedure
For Each new_entry in new_entries
    Insert new_entry using insert_stored_procedure
This logic works, but there is a catch. What if the DELETE works, but one of the INSERTs fails? We need to be able to revert the database back to before the table rows were deleted and any of the new entries were inserted. Is there a way to do this and ONLY revert these things, without affecting any other unrelated operations that may have happened simultaneously? (e.g., a row gets created in another table; we don't want to revert that).
I understand that you can wrap multiple SQL statements in a transaction, but in this case, the logic is happening over multiple stored procedure calls. Any advice?

Create another stored procedure that:
creates a transaction
calls the stored procedures that do the work
commits or rolls back the transaction
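A minimal T-SQL sketch of such a wrapper; the inner procedure names, parameters, and values here are placeholders for your own:

CREATE PROCEDURE dbo.replace_entries
    @key INT
AS
BEGIN
    SET NOCOUNT ON;
    SET XACT_ABORT ON;  -- any runtime error dooms the whole transaction

    BEGIN TRY
        BEGIN TRANSACTION;

        EXEC dbo.delete_stored_procedure @key;

        -- one call per new entry; loop however your app requires
        EXEC dbo.insert_stored_procedure @key, N'first entry';
        EXEC dbo.insert_stored_procedure @key, N'second entry';

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- reverts the delete and any completed inserts
        THROW;  -- SQL Server 2012+; on older versions re-raise with RAISERROR
    END CATCH;
END;

The ROLLBACK only undoes work performed inside this transaction; rows written concurrently by other sessions (e.g. in other tables) are unaffected.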

Related

Trigger calls Stored Procedure and if we do a select will the return values be the new or old?

Using MS SQL Server: if a Trigger calls a Stored Procedure which internally makes a select, will the returned values be the new or the old ones?
I know that inside the trigger I can access them via FROM INSERTED i INNER JOIN DELETED, but in this case I want to reuse an existing Stored Procedure (which I cannot change) that internally makes a select on the triggered table and processes some logic with the results. I just want to know if I can be sure that the existing logic will work or not (by accessing the NEW values).
I can simply try to simulate it with one update... but maybe there are other cases (for example, involving transactions or something else) that I may not be aware of and never test, which could result in different behavior.
I decided to ask someone else that might know better. Thank you.
AFTER triggers (the default) fire after the DML action. When the proc is called within the trigger, the tables will reflect changes made by the statement that fired the trigger, as well as changes made within the trigger before calling the proc.
Note that the changes are uncommitted until the trigger completes, or until the enclosing explicit transaction is later committed.
Since the procedure is running in the same transaction as the (presumably, "after") trigger, it will see the uncommitted data.
I hope you see the implications of that: the trigger is executing as part of the transaction started by the DML statement that caused it to fire, so the stored procedure is part of the same transaction, so a "complicated" stored procedure means that transaction stays open longer, holding locks longer, making responses back to users slower, etc etc.
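A small illustration of that point, with a hypothetical table dbo.Orders and an existing proc dbo.RecalcTotals standing in for your real objects:

CREATE TRIGGER trg_orders_after_update
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Executes inside the transaction of the UPDATE that fired the trigger,
    -- so any SELECT inside dbo.RecalcTotals sees the NEW (uncommitted) rows.
    EXEC dbo.RecalcTotals;
END;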
Also, you said
internally makes a select on the triggered table and processes some logic with them.
if you just mean that the procedure is selecting the data in order to do some complex processing and then write it to somewhere else inside the database, ok, that's not great (for reasons given above), but it will "work".
But just in case you mean you are doing some work on the data in the procedure and then returning that back to the client application: don't do that.
The ability to return results from triggers will be removed in a future version of SQL Server. Triggers that return result sets may cause unexpected behavior in applications that aren't designed to work with them. Avoid returning result sets from triggers in new development work, and plan to modify applications that currently do. To prevent triggers from returning result sets, set the disallow results from triggers option to 1.
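Per that documentation note, the setting is a server-level advanced option, so it has to be exposed before it can be changed:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'disallow results from triggers', 1;
RECONFIGURE;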

Oracle 10g - Lock table where two procedures might update this same table synchronously

I want to lock a table in Oracle 10g, so that e.g. procedure A has to wait until procedure B is finished with updating. I read about the LOCK TABLE command, but I am not sure whether the other procedure will wait for the lock to be acquired.
It is also possible that another thread calls the same stored procedure B during the update process. I guess that, since the stored procedure runs in a single thread, this would also be a problem?
You wouldn't normally want to lock a whole table in Oracle, though of course locking in general is an important issue. By default, if two sessions try to update the same row, the second will be blocked and will have to wait for the first to commit or roll back its change. You can use a SELECT with the FOR UPDATE clause to lock a row without updating it.
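For example (the accounts table and its columns are invented for illustration), session A runs:

SELECT balance
FROM   accounts
WHERE  account_id = 42
FOR UPDATE;  -- takes a row lock, released at COMMIT or ROLLBACK

A second session issuing the same SELECT ... FOR UPDATE, or an UPDATE of that row, then blocks until session A commits or rolls back.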
Instead of using a single regular table shared by all sessions, you could use a Global Temporary Table: then each session has its own copy.

How to pass values from a table parameter in a Stored Procedure to another Stored Procedure?

I've written a stored procedure called FooUpsert that inserts and updates data in various tables. It takes a number of numeric and string parameters that provide the data. This procedure is in good shape and I don't want to modify it.
Next, I'm writing another stored procedure that serves as a sort of bulk insert/update.
It is of paramount importance that the procedure do its work as an atomic transaction. It would be unacceptable for some data to be inserted/updated and some not.
It seemed to me that the appropriate way of doing this would be to set up a stored procedure with a table-valued parameter, say FooUpsertBulk. I began to write this stored procedure with a table parameter that holds data similar to what is passed to FooUpsert, the idea being that I can read it one row at a time and invoke FooUpsert for the values in each row. I realize that this may not be the best practice for this, but once again, FooUpsert is already written, plus FooUpsertBulk will be run at most a few times a day.
The problem is that in FooUpsertBulk, I don't know how to iterate over the rows and pass the values in each row as parameters to FooUpsert. I do realize that I could change FooUpsert to accept a table-valued parameter as well, but I don't want to rewrite FooUpsert.
Can one of you SQL ninjas out there please show me how to do this?
My SQL server is MS-SQL 2008.
Wrapping the various queries into an explicit transaction (i.e. BEGIN TRAN ... COMMIT or ROLLBACK) makes all of it an atomic operation. You can:
1. Start the transaction from the app code (assuming that FooUpsert is called by app code) and hence deal with the commit and rollback there as well. This still leaves lots of small operations, but a single transaction and no code changes needed.
2. Start the transaction in a proc, calling FooUpsert in a loop that is contained in a TRY / CATCH so that you can handle the ROLLBACK if any call to FooUpsert fails (a sketch of this option follows after this list).
3. Copy the code from FooUpsert into a new FooUpsertBulk that accepts a TVP from the app code and handles everything as set-based operations. Adapt each of the queries in FooUpsertBulk from handling various input params to getting fields from the TVP table variable once the TVP is joined into the query. Keep FooUpsert in place until FooUpsertBulk is working.
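A sketch of the second option, kept compatible with SQL Server 2008 (which has no THROW). The table type's columns and FooUpsert's parameters are assumptions; adjust them to match your actual procedure:

CREATE TYPE dbo.FooRowType AS TABLE (Id INT, Name NVARCHAR(100));
GO
CREATE PROCEDURE dbo.FooUpsertBulk
    @Rows dbo.FooRowType READONLY
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Id INT, @Name NVARCHAR(100);

    BEGIN TRY
        BEGIN TRANSACTION;

        -- iterate the TVP one row at a time, feeding each row to FooUpsert
        DECLARE row_cursor CURSOR LOCAL FAST_FORWARD FOR
            SELECT Id, Name FROM @Rows;
        OPEN row_cursor;
        FETCH NEXT FROM row_cursor INTO @Id, @Name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            EXEC dbo.FooUpsert @Id, @Name;
            FETCH NEXT FROM row_cursor INTO @Id, @Name;
        END;
        CLOSE row_cursor;
        DEALLOCATE row_cursor;

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION;  -- all-or-nothing: undo every FooUpsert call
        DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);    -- re-raise so the caller sees the failure
    END CATCH;
END;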

Do queries executed in a Postgres trigger procedure run in the same transaction?

I have a BEFORE DELETE trigger which inserts rows into another table using SPI_exec.
Do these INSERT queries run in the same transaction as the one in which the original delete is executing? Hence, will the delete and all inserts roll back or commit together?
If not, how can I make that happen?
Yes, everything in triggers is in the same transaction as the triggering event.
Not directly related to the question, but normally you want to put side-effects in the AFTER trigger, rather than the BEFORE trigger.
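A compact PL/pgSQL illustration (table and function names invented; a C-level trigger using SPI_exec behaves the same way transactionally):

CREATE TABLE items (id INT PRIMARY KEY);
CREATE TABLE deleted_items_log (id INT, deleted_at TIMESTAMPTZ DEFAULT now());

CREATE FUNCTION log_deleted_item() RETURNS trigger AS $$
BEGIN
    -- runs in the same transaction as the DELETE that fired the trigger
    INSERT INTO deleted_items_log (id) VALUES (OLD.id);
    RETURN OLD;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER items_log_delete
AFTER DELETE ON items
FOR EACH ROW EXECUTE PROCEDURE log_deleted_item();

If the surrounding transaction rolls back, the DELETE from items and the INSERT into deleted_items_log are rolled back together.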

Local Temporary table in Oracle 10 (for the scope of Stored Procedure)

I am new to Oracle. I need to process a large amount of data in a stored proc. I am considering using temporary tables. I am using connection pooling and the application is multi-threaded.
Is there a way to create temporary tables in a way that different table instances are created for every call to the stored procedure, so that data from multiple stored procedure calls does not mix up?
You say you are new to Oracle. I'm guessing you are used to SQL Server, where it is quite common to use temporary tables. Oracle works differently so it is less common, because it is less necessary.
Bear in mind that using a temporary table imposes the following overheads:
read data to populate the temporary table
write the temporary table data to file
read the data back from the temporary table as your process starts
Most of that activity is useless in terms of helping you get stuff done. A better idea is to see if you can do everything in a single action, preferably pure SQL.
Incidentally, your mention of connection pooling raises another issue. A process munging large amounts of data is not a good candidate for running in an OLTP mode. You really should consider initiating a background (i.e. asynchronous) process, probably a database job, to run your stored procedure. This is especially true if you want to run this job on a regular basis, because we can use DBMS_SCHEDULER to automate the management of such things.
If you're using transaction-level (rather than session-level) temporary tables, then this may already do what you want... so long as each call only contains a single transaction? (You don't quite provide enough detail to make it clear whether this is the case or not.)
So, to be clear: as long as each call only contains a single transaction, it won't matter that you're using a connection pool, since the data will be cleared out of the temporary table after each COMMIT or ROLLBACK anyway.
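For reference, a transaction-level GTT is the one declared with ON COMMIT DELETE ROWS (the table name and columns here are invented):

CREATE GLOBAL TEMPORARY TABLE work_rows (
    id      NUMBER,
    payload VARCHAR2(100)
) ON COMMIT DELETE ROWS;  -- rows vanish at COMMIT or ROLLBACK

Each session sees only its own rows, and they disappear at the end of each transaction, so a pooled connection never inherits data from a previous caller.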
(Another option would be to create a uniquely named temporary table in each call using EXECUTE IMMEDIATE. Not sure how performant this would be though.)
In Oracle, it's almost never necessary to create objects at runtime.
Global Temporary Tables are quite possibly the best solution for your problem, however since you haven't said exactly why you need a temp table, I'd suggest you first check whether a temp table is necessary; half the time you can do with one SQL what you might have thought would require multiple queries.
That said, I have used global temp tables in the past quite successfully in applications that needed to maintain a separate "space" in the table for multiple contexts within the same session; this is done by adding an additional ID column (e.g. CALL_ID) that is initially set to 1, with subsequent calls to the procedure incrementing this ID. The ID needs to be remembered in a global variable somewhere, e.g. a package global variable declared in the package body. For example:
PACKAGE BODY gtt_ex IS
  last_call_id INTEGER;  -- package state, remembered for the life of the session

  PROCEDURE myproc IS
    l_call_id INTEGER;
  BEGIN
    -- allocate a fresh call ID for this invocation
    last_call_id := NVL(last_call_id, 0) + 1;
    l_call_id := last_call_id;

    INSERT INTO my_gtt VALUES (l_call_id, ...);
    ...
    SELECT ... FROM my_gtt WHERE call_id = l_call_id;
  END;
END;
You'll find GTTs perform very well even with high concurrency, certainly better than using ordinary tables. Best practice is to design your application so that it never needs to delete the rows from the temp table, since the GTT is automatically cleared when the session ends.
I used a global temporary table recently and it behaved in a very unwanted manner.
I was using the temp table to format some complex data in a procedure call and, once the data was formatted, pass it to the front end (ASP.NET).
On the first call to the procedure I would get the proper data, but any subsequent call would give me the data from the last procedure call in addition to the current call.
I investigated on the net and found the ON COMMIT DELETE ROWS option.
I thought that would fix the problem... guess what? When I used the ON COMMIT DELETE ROWS option, I always got 0 rows back from the database. So I had to go back to the original approach of ON COMMIT PRESERVE ROWS, which preserves the rows even after committing the transaction. That option clears rows from the temp table only after the session is terminated.
Then I found this post and came to know about the column to track the call_id of a session.
I implemented that solution and it still didn't fix the problem.
Then I wrote the following statement in my procedure before starting any processing:
Delete From Temp_table;
The above statement did the trick. My front end was using connection pooling; after each procedure call it was committing the transaction but still keeping the connection in the pool, and a subsequent request would use the same connection, so the database session was not terminated after every call.
Deleting the rows from the temp table before starting any processing made it work.
It drove me nuts till I found this solution.