I am developing a PL function in Postgres in which I modify records of a table according to some logic, then execute a final query (basically a count). If the number I get is positive, I throw an exception to roll back the transaction (since PostgreSQL functions don't support explicit transaction control).
So is this a good method to emulate transactions? Do you have better suggestions?
PS: I am using PostgreSQL 9.2 but am migrating to 9.3 soon, if that helps somehow.
If you wish to abort a transaction within a function, then yes, raising an exception is a good choice.
You can use subtransactions within PL/pgSQL functions by using BEGIN ... EXCEPTION blocks: within an inner block, RAISE an exception with a user-defined SQLSTATE, and catch it in that block's EXCEPTION clause.
A RAISE (of ERROR or higher) within a BEGIN ... EXCEPTION block will roll back work done within that block, as if you had used a SAVEPOINT and ROLLBACK TO SAVEPOINT.
Functions cannot force a rollback of the top level transaction; if you RAISE an exception, an outer pl/pgsql function can catch the exception in a BEGIN ... EXCEPTION block, or the client can use a ROLLBACK TO SAVEPOINT.
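A minimal sketch of this pattern (the table, columns, and condition here are hypothetical, not from the question):

```sql
CREATE OR REPLACE FUNCTION apply_changes() RETURNS void AS $$
BEGIN
    -- Inner block acts as a subtransaction (implicit savepoint)
    BEGIN
        UPDATE some_table SET processed = true WHERE pending;

        IF (SELECT count(*) FROM some_table WHERE invalid) > 0 THEN
            -- User-defined SQLSTATE so we catch only our own error
            RAISE EXCEPTION 'validation failed' USING ERRCODE = 'P0001';
        END IF;
    EXCEPTION
        WHEN SQLSTATE 'P0001' THEN
            -- Work done inside the inner block is rolled back,
            -- as with ROLLBACK TO SAVEPOINT; the outer transaction continues
            RAISE NOTICE 'changes rolled back: %', SQLERRM;
    END;
END;
$$ LANGUAGE plpgsql;
```

If you instead let the exception escape the function, the client (or an outer function) decides whether the whole transaction is rolled back.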
I have used this solution in the past. The issue is that the PL/pgSQL language does not support SAVEPOINT and ROLLBACK; transactions wrap functions, not the other way around.
RAISE is the proper method to notify the client of some internal condition within a function. There are several levels of RAISE; the "worst" is EXCEPTION, which will abort/roll back your transaction UNLESS something catches the exception and suppresses it.
For testing, use
`RAISE DEBUG 'some comment %', var;`
If you want to put something in your logs but don't want to roll back, you can (typically) raise a WARNING. (Any level of RAISE can go to your logs; it depends on your configuration.)
Related
I'm looking for a list of all possible exceptions that can occur for each SQL command.
For example, if I have the following code:
Procedure p1
as
  l_cnt number;
Begin
  Select count(*)
    Into l_cnt
    From xyz;
Exception
  When ... Then
    ...
End;
Now I'm wondering which exceptions can occur in this SELECT INTO statement. I know a few, but are those all there are? That's why I'm looking for an overview of possible exceptions per SQL statement.
A follow-up question: is it recommended to catch all possible exceptions raised by SQL code inside PL/SQL?
I know of WHEN OTHERS, which is the only way I currently have to catch "unknown" exceptions.
Of course, if the list of possible exceptions for a statement is very long, I would handle only the relevant exceptions and catch the rest with WHEN OTHERS.
To be honest, your question is pretty common.
For example, your query can lead to the following exceptions:
TIMEOUT_ON_RESOURCE - a timeout occurred while the database was waiting for a resource
STORAGE_ERROR - PL/SQL ran out of memory or memory was corrupted
I would suggest learning the predefined exceptions from the official documentation and selecting the ones suitable for each case.
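For a SELECT ... INTO specifically, the most commonly raised predefined exceptions are NO_DATA_FOUND and TOO_MANY_ROWS. A sketch of handling them (the column and filter are hypothetical; the table name is from the question):

```sql
CREATE OR REPLACE PROCEDURE p1 AS
    l_name xyz.name%TYPE;
BEGIN
    -- SELECT ... INTO expects exactly one row
    SELECT name INTO l_name FROM xyz WHERE id = 1;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE('no row matched');
    WHEN TOO_MANY_ROWS THEN
        DBMS_OUTPUT.PUT_LINE('more than one row matched');
    WHEN OTHERS THEN
        -- catch-all for anything not handled above; re-raise so the
        -- caller still sees the error
        DBMS_OUTPUT.PUT_LINE('unexpected: ' || SQLERRM);
        RAISE;
END;
/
```

(Note that a SELECT COUNT(*) as in the question always returns exactly one row, so it cannot raise either of these two.)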
I have a file with INSERTs, UPDATEs, and DELETEs. I want to execute each of the DML statements in this file, but in case any exception occurs, I want to print that exception and continue. Is there a simple solution for this? Below is a solution that involves wrapping each DML in an anonymous block and printing the exception, but I think it is not simple (or elegant) enough:
BEGIN
<<DML statement goes here>>
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(DBMS_UTILITY.FORMAT_ERROR_BACKTRACE);
END;
Needless to say, this cannot be done (easily) for hundreds of DMLs.
A possible solution is to add an error logging clause to your statements. To each statement you have to add the following (for example, for an INSERT):
insert into my_table (...)
values (...)
LOG ERRORS INTO err$_my_table ('INSERT') REJECT LIMIT UNLIMITED;
Here err$_my_table is a table for error logging. To create it, execute the following (once per table):
begin
DBMS_ERRLOG.CREATE_ERROR_LOG ('MY_TABLE');
end;
/
The error logging clause suppresses any exception and puts into the error logging table all rows that fired exceptions. After executing, you can query these tables; they will also contain the values of the SQLCODE and SQLERRM functions. The disadvantages of this method: you need to change all your statements and create a logging table for each table.
More about this clause in the documentation.
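After the run, the rejected rows and their errors can be inspected in the log table; the ORA_ERR_* columns are added automatically by DBMS_ERRLOG:

```sql
SELECT ora_err_number$,   -- Oracle error code for the rejected row
       ora_err_mesg$,     -- full error message text
       ora_err_tag$       -- the tag supplied in LOG ERRORS ('INSERT' above)
  FROM err$_my_table;
```

The log table also mirrors the columns of the base table, so you can see exactly which values were rejected.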
In short, I am wondering which of the following is the better practice:
to encapsulate my code in a TRY/CATCH block and display the error message
to write my own checks and display custom error messages
From what I have read, the TRY/CATCH block does not handle all types of errors. In my situation this is not an issue.
I am building dynamic SQL that executes a specific stored procedure. I can make my own checks, like:
whether the procedure that is going to be executed exists
whether all parameters are supplied
whether all values can be converted properly
or just to encapsulate the statement like this:
BEGIN TRY
    EXEC sp_executesql @DynamicSQLStatement
    SET @Status = 'OK'
END TRY
BEGIN CATCH
    SET @Status = ERROR_MESSAGE()
END CATCH
Is there any difference if I make the checks on my own (for example, a performance difference), or should I leave this work to the server?
The reason I am asking is that in some languages (like JavaScript) the use of try/catch blocks is considered bad practice.
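A sketch of what the "own checks" approach might look like here (@ProcName and @InputDate are hypothetical variables, not from the question; TRY_CONVERT assumes SQL Server 2012 or later):

```sql
-- Verify the procedure exists before building the dynamic SQL
IF OBJECT_ID(@ProcName, 'P') IS NULL
BEGIN
    SET @Status = 'Procedure not found: ' + @ProcName;
    RETURN;
END;

-- Verify a value converts cleanly before passing it as a parameter
IF TRY_CONVERT(date, @InputDate) IS NULL
BEGIN
    SET @Status = 'Invalid date: ' + @InputDate;
    RETURN;
END;

EXEC sp_executesql @DynamicSQLStatement;
SET @Status = 'OK';
```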
Typically it is going to be better to check for these things yourself than letting SQL Server catch an exception. Exception handling is not cheap, as I demonstrate in these posts:
Checking for potential constraint violations before entering SQL Server TRY and CATCH logic
Performance impact of different error handling techniques
Depending on the schema, volume, concurrency, and the frequency of failures you expect, your tipping point may be different, so at very high scale this may not always be the absolute most efficient approach. But beyond the fact that in most cases you're better off, writing your own error handling and prevention, while it takes time, allows you to fully understand all of your error conditions.
That all said, you should still have TRY / CATCH as a failsafe, because there can always be exceptions you didn't predict, and having it blow up gracefully is a lot better than throwing shrapnel everywhere.
A try/catch should only be used for critical errors. If there are possible errors that you can prevent by doing a simple if statement, you should do it. Don't rely on try/catch to make sure, for example, that a user entered a date in the correct format. Check that yourself.
Personally, if I get a critical exception like that, I'd prefer to see the call stack and where it happened. So, I only do a try/catch in the topmost layer of my project.
Using a TRY/CATCH block you can still raise custom error messages. In fact, if you raise a custom error message, only your custom message is shown, not SQL Server's default error message. You can also make use of several other error functions to get a lot more information about your error, like:
ERROR_LINE()
ERROR_MESSAGE()
ERROR_NUMBER()
ERROR_STATE()
ERROR_SEVERITY()
These functions can only be used in the CATCH block of a TRY/CATCH and make our lives a lot easier as developers :)
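A minimal sketch of using those functions inside CATCH (variable names follow the earlier example in this thread):

```sql
BEGIN TRY
    EXEC sp_executesql @DynamicSQLStatement;
    SET @Status = 'OK';
END TRY
BEGIN CATCH
    -- These functions return NULL outside a CATCH block
    SET @Status = 'Error ' + CAST(ERROR_NUMBER() AS varchar(10))
                + ' at line ' + CAST(ERROR_LINE() AS varchar(10))
                + ': ' + ERROR_MESSAGE();
END CATCH;
```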
Semantics:
I am using PostgreSQL 9.0.3 as my DBMS. I was trying to accomplish one of the objectives assigned to me, which is to raise an exception with a predefined message when a condition fails in an IF statement inside my stored procedure. As a result of that exception, the process needs to be rolled back.
Syntax:
FOR r3 IN SELECT itemname, stock
            FROM stock s, items it
           WHERE s.itemno = it.itemno
             AND it.itemno = $2[I].itemno
             AND s.stockpointno = $1.stockpointno
LOOP
    IF xStock > r3.stock THEN
        RAISE EXCEPTION '%', r3.itemname || ' decreased down from the low stock level';
    END IF;
END LOOP;
where r3 is a record and xStock is a Double Precision Variable.
Then at the end of the stored procedure, I included the following code in order to roll back the transactions that happened:
Exception when raise_exception then rollback transaction;
The problem I am facing is that whenever the manual exception is raised, the following error comes up:
DA00014:ERROR: XX000: SPI_execute_plan_with_paramlist failed executing query "rollback transaction": SPI_ERROR_TRANSACTION
Though the above error occurred, the transactions had not happened when I checked the tables. I don't know the exact reason why this particular error is raised while the rollback is in progress. Can anybody tell me what mistake I may have made in my code, and suggest solutions to fix this issue?
While some database engines allow COMMIT or ROLLBACK inside a function or procedure, PostgreSQL does not. Any attempt to do that leads to an error:
ERROR: cannot begin/end transactions in PL/pgSQL
That includes code inside an exception block.
On the other hand, a RAISE EXCEPTION alone will abort the transaction with the function-supplied error message, so there's no need to trap your own exception. It would work as expected if you just removed your exception block.
As said by the plpgsql documentation in Trapping Errors:
By default, any error occurring in a PL/pgSQL function aborts
execution of the function, and indeed of the surrounding transaction
as well
Your current code raises the exception, then traps it and fails in the exception block itself because of the forbidden ROLLBACK statement, which leads to the SQL engine aborting the transaction.
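Applied to the code in the question, the loop reduces to a bare RAISE EXCEPTION with no trapping; the unhandled error aborts the whole transaction on its own:

```sql
FOR r3 IN SELECT itemname, stock
            FROM stock s, items it
           WHERE s.itemno = it.itemno
             AND it.itemno = $2[I].itemno
             AND s.stockpointno = $1.stockpointno
LOOP
    IF xStock > r3.stock THEN
        -- Aborts the surrounding transaction; no EXCEPTION block needed
        RAISE EXCEPTION '% decreased down from the low stock level', r3.itemname;
    END IF;
END LOOP;
-- No "Exception when raise_exception then rollback transaction" at the end
```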
Would like to know whether rollback is required when a SQL exception (exception when others) is detected:
declare
cursor c_test is
select *
from tesing;
begin
for rec in c_test loop
begin
update test1 set test1.name=rec.name where test1.id=rec.id;
IF sql%rowcount = 1 THEN
commit;
ELSIF sql%rowcount =0 THEN
dbms_output.put_line('No Rows Updated');
else
dbms_output.put_line('More than 1 row exists');
rollback;
END IF;
exception when others then
dbms_output.put_line(Exception');
rollback;
end;
end;
First, I'm assuming we can ignore the syntax errors (for example, there is no END LOOP, the dbms_output.put_line call is missing the first single quote, etc.)
As to whether it is necessary to roll back changes, it depends.
In general, you would not have interim commits in a loop. That is generally a poor architecture, both because it is much more costly in terms of I/O and elapsed time and because it makes it much harder to write restartable code. What happens, for example, if your SELECT statement selects 10 rows, you issue (and commit) 5 updates, and then the 6th update fails? The only way to be able to restart with the 6th row after you've fixed the exception would be to have a separate table where you stored (and updated) your code's progress. It also creates problems for any code that calls this block, which then has to handle the case that half the work was done (and committed) and the other half was not.
In general, you would only put transaction control statements in the outermost blocks of your code. Since a COMMIT or a ROLLBACK in a procedure commits or rolls back any work done in the session whether or not it was done by the procedure, you want to be very cautious about adding transaction control statements. You generally want to let the caller make the determination about whether to commit or roll back. Of course, that only goes so far-- eventually, you're going to be in the outer-most block that will never be called from some other routine and you need to have appropriate transaction control-- but it's something to be very wary about if you're writing code that might be reused.
In this case, since you have interim commits, the only effect of your ROLLBACK would be that if the first update statement failed, the work that had been done in your session prior to calling this block would be rolled back. The interim commit would commit those previous changes if the first update statement was successful. That's the sort of side effect that people worry about when they talk about why interim commits and transaction control in reusable blocks are problematic.
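The "transaction control only in the outermost block" pattern might look like this (the inner procedure names are hypothetical):

```sql
BEGIN
    inner_proc_1;    -- contains no COMMIT or ROLLBACK of its own
    inner_proc_2;
    COMMIT;          -- single decision point, in the outermost block
EXCEPTION
    WHEN OTHERS THEN
        ROLLBACK;    -- all the work succeeds or none of it does
        RAISE;       -- re-raise so the failure is still visible
END;
/
```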