PL/SQL Validate/Insert/Update Status - retval vs. exception - sql

I'm always looking to improve and to apply best practices, and I've read quite a bit about refactoring in the last few weeks. I have to work with a lot of awful code, and I've produced some not-so-nice stuff too, but I'm trying to change that. That's no problem for most languages, but I'm pretty new to PL/SQL, so I just copied the style of the code that was already there.
After reading some tutorials I realized that a lot of our code is pretty much C-style code, using retval instead of exceptions, etc.
We have a lot of functions that open a cursor, loop through it, validate the data, trim it or do some string manipulation, insert it into another table, update the status, and so on. I wonder what a best-practice solution for something like this would look like. At the moment most functions look like this:
LOOP
    FETCH C_ABC INTO R_ABC;
    EXIT WHEN C_ABC%NOTFOUND OR C_ABC%NOTFOUND IS NULL;
    SAVEPOINT SAVE_LOOP;
    retval := plausibilty_check(r_ABC);
    IF (retval = STATUS_OK) THEN
        retval := convert_ABC_TO_XYZ(r_ABC, r_XYZ);
    END IF;
    IF (retval = STATUS_OK) THEN
        retval := insert_XYZ(r_XYZ);
    END IF;
    retval := update_ABC(r_ABC.PK_Id, retval);
END LOOP;
If I were using exceptions, I guess I'd have to raise them inside the functions so I can handle them in the main function; otherwise everyone would have to crawl through every sub-function to understand the program and where the updates happen, etc. So I guess I would have to use another PL/SQL block inside the loop? Like:
LOOP
    FETCH C_ABC INTO R_ABC;
    EXIT WHEN C_ABC%NOTFOUND OR C_ABC%NOTFOUND IS NULL;
    SAVEPOINT SAVE_LOOP;
    BEGIN
        plausibilty_check(r_ABC);
        convert_ABC_TO_XYZ(r_ABC, r_XYZ);
        insert_XYZ(r_XYZ);
        update_ABC(r_ABC.PK_Id, STATUS_OK);
    EXCEPTION
        WHEN ERROR_CODE_XYZ THEN
            update_ABC(r_ABC.PK_Id, ERROR_CODE_XYZ);
    END;
END LOOP;
I guess that function handles a pretty common problem, but I still haven't found any tutorial covering something like this. Maybe someone more experienced with PL/SQL can give me a hint as to what best practice for a task like that would look like.

I like to use exceptions to jump out of a block of code if an exceptional event (e.g. an error) occurs.
The problem with the "retval" method of detecting error conditions is that it introduces a second layer of semantics on what a function is and what it's for.
In principle, a function should be used to perform a calculation and return a result; in this sense, a function doesn't do anything, i.e. it makes no changes to any state - it merely returns a value.
If it cannot for some reason calculate that value, that would be an exceptional circumstance, so I'd want the function to raise an exception so that the calling program will not blindly continue on its merry way, thinking it got a valid value from the function.
On the other hand, a procedure is a method by which action is done - something is changed, something is validated, or something is sent. The normal expected path is that the procedure is executed, it does its thing, then it finishes. If an error occurs, I want it to raise an exception so that the calling program will not blindly continue thinking that the procedure has successfully done its thing.
Thus, the original intent of the fundamental difference between "procedures" and "functions" is preserved.
In languages like C there are no procedures - everything is a function in a sense (even functions that return "void") - but there is also no real concept of an "exception", so these semantics don't apply. It's for this reason that the C style of returning an error/success flag doesn't translate well into languages like PL/SQL.
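To make the distinction concrete, here's a rough sketch (hypothetical names and a made-up xyz_table, not your actual routines) of how the two kinds of subprogram might look when exceptions carry the error semantics:

-- Sketch with hypothetical names: the function only computes and raises if it can't;
-- the procedure performs an action and lets any failure propagate as an exception.
CREATE OR REPLACE FUNCTION to_xyz_code(p_abc_code VARCHAR2) RETURN VARCHAR2 IS
BEGIN
    IF p_abc_code IS NULL THEN
        RAISE_APPLICATION_ERROR(-20001, 'ABC code is missing, cannot convert');
    END IF;
    RETURN UPPER(TRIM(p_abc_code));  -- purely a calculation, no state is changed
END to_xyz_code;
/

CREATE OR REPLACE PROCEDURE store_xyz(p_xyz_code VARCHAR2) IS
BEGIN
    INSERT INTO xyz_table (xyz_code)  -- xyz_table is made up for the example
    VALUES (p_xyz_code);
    -- no return value; if the INSERT fails, the exception propagates to the caller
END store_xyz;
/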
In your example, I'd consider doing it something like this:
BEGIN
    LOOP
        FETCH c_ABC INTO r_ABC;
        EXIT WHEN c_ABC%NOTFOUND;
        IF record_is_plausible(r_ABC) THEN
            r_XYZ := convert_ABC_TO_XYZ(r_ABC);
            insert_or_update_XYZ(r_XYZ);
        ELSE
            update_as_implausible(r_ABC);
        END IF;
    END LOOP;
EXCEPTION
    WHEN OTHERS THEN
        -- log the error or something, then:
        RAISE;
END;
So where the semantics of the operation are "do some validation and return a result", I converted plausibilty_check into a function, record_is_plausible, that returns a BOOLEAN.
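For instance, record_is_plausible could be as simple as this (a sketch only; it assumes c_ABC is visible where the function is declared, and some_column is made up):

FUNCTION record_is_plausible(p_abc IN c_ABC%ROWTYPE) RETURN BOOLEAN IS
BEGIN
    -- validation expressed as a pure yes/no answer, no state is changed
    RETURN p_abc.PK_Id IS NOT NULL
       AND TRIM(p_abc.some_column) IS NOT NULL;
END record_is_plausible;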

I'd pull the call to update_ABC out of the BEGIN block and make it common at the bottom of the loop, as in:
DECLARE
    nFinal_status NUMBER;
BEGIN
    LOOP
        FETCH C_ABC INTO R_ABC;
        EXIT WHEN C_ABC%NOTFOUND OR C_ABC%NOTFOUND IS NULL;
        SAVEPOINT SAVE_LOOP;
        nFinal_status := nSTATUS_OK;
        BEGIN
            plausibilty_check(r_ABC);
            convert_ABC_TO_XYZ(r_ABC, r_XYZ);
            insert_XYZ(r_XYZ);
        EXCEPTION
            WHEN excpERROR_CODE_XYZ THEN
                nFinal_status := nERROR_CODE_XYZ;
        END;
        update_ABC(r_ABC.PK_Id, nFinal_status);
    END LOOP;
END;
You might want to have each of the procedures throw its own exception so you could more easily identify where the issue(s) are coming from - or use different exceptions/error codes for each possible issue (for example, the plausibility check might raise different exceptions depending on what implausible condition it found).

However, in my experience plausibility checks will often uncover multiple conditions (if the data's bad it's often really bad :-), so it might be nice to tabularize the errors and relate them to the base ABC data through a foreign key, thus allowing each individual error to be identified with just one pass through the data.

Then the 'status' field on ABC becomes moot; you either have errors associated with the ABC row or you don't. If errors exist, do whatever is needed. If no errors, proceed with 'normal' processing.
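A sketch of that error-table idea (all names here are made up; adjust to your actual ABC table):

-- one row per individual problem found, keyed back to the ABC row
CREATE TABLE abc_errors (
    abc_id     NUMBER         NOT NULL REFERENCES abc (pk_id),
    error_code NUMBER         NOT NULL,
    error_text VARCHAR2(4000),
    created_at DATE           DEFAULT SYSDATE
);

-- called from the plausibility check for every condition it finds,
-- instead of stopping at the first one
PROCEDURE log_abc_error(p_abc_id NUMBER, p_code NUMBER, p_text VARCHAR2) IS
BEGIN
    INSERT INTO abc_errors (abc_id, error_code, error_text)
    VALUES (p_abc_id, p_code, p_text);
END log_abc_error;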
Just a thought.
Share and enjoy.

Related

TADOQuery - Edit mode inserts new record rather than editing

I'm puzzled at the behavior of TADOQuery; let's just call it Q. When I use Q.Edit, populate some fields, then Post, it ends up actually inserting a new record.
The code is simple, and reading the ID from an object:
Q.SQL.Text := 'select * from SomeTable where ID = :id';
Q.Parameters.ParamValues['id'] := MyObject.ID;
Q.Open;
try
  Q.Edit;
  try
    Q['SomeField'] := MyObject.SomeField;
  finally
    Q.Post;
  end;
finally
  Q.Close;
end;
To my surprise, rather than updating the intended record, it decided to insert a new record. Stepping through the code, immediately after Q.Edit, the query is actually in Insert mode.
What could I be doing wrong here?
I think the comments that this behaviour is documented are off the point. What the docs don't make clear (possibly because the point never occurred to the author) is that this behaviour is not guaranteed to be deterministic.
The innards of TDataSet.Edit have scarcely changed in decades. Here is the Seattle version:
procedure TDataSet.Edit;
begin
  if not (State in [dsEdit, dsInsert]) then
    if FRecordCount = 0 then Insert else
    begin
      CheckBrowseMode;
      CheckCanModify;
      DoBeforeEdit;
      CheckParentState;
      CheckOperation(InternalEdit, FOnEditError);
      GetCalcFields(ActiveBuffer);
      SetState(dsEdit);
      DataEvent(deRecordChange, 0);
      DoAfterEdit;
    end;
end;
Now, notice that the if .. then .. is predicated on the value of FRecordCount, which at various points in the TDataSet code is forced to an assumed value (variously 1, 0 or something else) by code such as SetBufferCount, and that behaviour isn't documented at all. So on reflection I think Jerry was probably right to expect that attempting to edit a non-existent record should be treated as an error condition, and not be fudged around by silently calling Insert, whether or not it is documented.
I'm posting both a question and an answer, because the cause of the problem was totally unexpected behavior, and surely someone else had the same bewildering thing happen.
This happens when the dataset you're trying to edit doesn't have any records. Personally, I would think it should raise an exception saying that you can't edit when there are no records, but TADOQuery decides to append a new record instead.
The root cause of this issue was that the object where I supplied the ID actually had a value of 0, and since there's no record in the database with ID 0, the query returned nothing.

How to execute SQL Script which may or may not return data?

This is an extension of an old question of mine where the answer wasn't quite what I was asking. What I'm doing is executing SQL Script on an MS SQL Server database. This script may or may not return any recordsets. The problem is that the way that ADO components work, at least to my knowledge, I can only explicitly request one or the other.
If I know a query will return data, I use TADOQuery.Open
If I know a query will not return data, I use TADOConnection.Execute
If I don't know whether the query will return data or not... ???
How can I execute any query and read the response to determine whether it has any recordsets or not so I can read that recordset?
What I've tried:
1. Calling TADOQuery.Open, but it raises an exception if there's no recordset.
2. Calling TADOQuery.ExecSql, but it never returns any data.
3. Calling TADOConnection.Execute, but it never returns any data.
4. Using option 3 and reverting to option 1 on exceptions, but this is double the work (script files over 38,000 lines) and kinda nasty.
5. Using TADOCommand.Execute, but it keeps raising "Parameter object is improperly defined. Inconsistent or incomplete information was provided" when creating some stored procedures (which doesn't happen when using TADOConnection.Execute).
6. Calling the TADOConnection.Execute overload which returns _Recordset, but then TADOConnection.Errors comes back empty (and I depend on it).
Just as some background, the purpose is to implement something like the SQL Query tool in the SQL Server Management Studio. It allows you to execute any SQL script, and if it returns any recordsets, it displays them, otherwise it just displays output messages. The tool I'm working on automatically breaks SQL Script at GO statements, and executes each individual block separately. Some of those blocks might return data, and others might not. It's already obvious that I cannot make this determination prior to execution, so I'm looking for a way to go ahead with the execution and observe the result. TADOConnection.Execute provides some useful information, including the Errors (or output messages).
As of now, the only option I have is to supply an option in the user interface to allow the user to choose which type of execution to use - but this is what I'm trying to eliminate.
EDIT
The TADOCommand.Execute method is the closest to what I want. However, it fails on some bits of script which otherwise work perfectly fine using TADOConnection.Execute. See #5 above in "What I've tried". I almost wrote that as my answer, until I found this error happens on almost everything.
EDIT
After posting my answer below, I then came to learn that the Errors property no longer returns anything when I use this other overload of Execute. See #6 above in "What I've tried".
Calling...
ADOConnection1.Execute('select * from something', cmdText, []);
...does not return anything in ADOConnection1.Errors, whereas...
var
  R: Integer;
begin
  ADOConnection1.Execute('select * from something', R);
...does return messages in ADOConnection1.Errors, which is what I need - but it doesn't return any recordsets.
EDIT: Not the right solution
I discovered my solution finally after digging even deeper. The answer is to use the TADOConnection.Execute() overload which supports returning the recordset:
function TADOConnection.Execute(const CommandText: WideString;
  const CommandType: TCommandType = cmdText;
  const ExecuteOptions: TExecuteOptions = []): _Recordset;
Then, just assign the resulting _Recordset to the Recordset property of supported dataset components.
var
  RS: _Recordset;
begin
  RS := ADOConnection1.Execute('select * from something', cmdText, []);
  if Assigned(RS) then begin
    ADODataset1.Recordset := RS;
    ...
  end;
end;
The downside is that you cannot use the other overload, which supports returning RowsAffected. Also, nothing is returned in the Errors property of the TADOConnection when using this version of Execute, whereas the other one does populate it - but that other overload doesn't return a recordset.

Does Oracle 12 have problems with local collection types in SQL?

To make a long story short, I propose to discuss the code you see below.
When running it:
The Oracle 11 compiler raises
"PLS-00306: wrong number or types of arguments in call to 'PIPE_TABLE'"
"PLS-00642: Local Collection Types Not Allowed in SQL Statement"
Oracle 12 compiles the following package with no such warnings, but there is a surprise at runtime:
when executing the anonymous block as is, everything is fine
(we may pipe some rows in the pipe_table function - it makes no difference);
now uncomment the line with hello; (or put a call to any procedure there) and run the changed anonymous block again:
we get "ORA-22163: left hand and right hand side collections are not of same type"
And the question is:
Does Oracle 12 allow local collection types in SQL?
If yes, then what's wrong with the code of the buggy_report package?
CREATE OR REPLACE PACKAGE buggy_report IS
    SUBTYPE t_id IS NUMBER(10);
    TYPE t_id_table IS TABLE OF t_id;

    TYPE t_info_rec IS RECORD ( first NUMBER );
    TYPE t_info_table IS TABLE OF t_info_rec;
    TYPE t_info_cur IS REF CURSOR RETURN t_info_rec;

    FUNCTION pipe_table(p t_id_table) RETURN t_info_table PIPELINED;
    FUNCTION get_cursor RETURN t_info_cur;
END buggy_report;
/
CREATE OR REPLACE PACKAGE BODY buggy_report IS
    FUNCTION pipe_table(p t_id_table) RETURN t_info_table PIPELINED IS
        l_table t_id_table;
    BEGIN
        l_table := p;
    END;

    FUNCTION get_cursor RETURN t_info_cur IS
        l_table  t_id_table;
        l_result t_info_cur;
    BEGIN
        OPEN l_result FOR SELECT * FROM TABLE (buggy_report.pipe_table(l_table));
        RETURN l_result;
    END;
END;
/
DECLARE
    l_cur buggy_report.t_info_cur;
    l_rec l_cur%ROWTYPE;
    PROCEDURE hello IS BEGIN NULL; END;
BEGIN
    l_cur := buggy_report.get_cursor();
    -- hello;
    LOOP
        FETCH l_cur INTO l_rec;
        EXIT WHEN l_cur%NOTFOUND;
    END LOOP;
    CLOSE l_cur;
    dbms_output.put_line('success');
END;
In further experiments we found out that the problems go even deeper than we had assumed.
For example, by varying the elements used in the buggy_report package we can get an "ORA-03113: end-of-file on communication channel" when running the script in the question. It can be done by changing the type of t_id_table to a VARRAY or to TABLE .. INDEX BY ... There are a lot of ways and variations leading to different exceptions, which are off topic for this post.
One more interesting thing is that compiling the buggy_report package specification can take up to 25 seconds, when normally it takes about 0.05 seconds. I can definitely say that it depends on the presence of the t_id_table parameter in the pipe_table function declaration, and the long compilation happens in about 40% of installations. So it seems that the problem with local collection types in SQL already appears latently during compilation.
So we see that Oracle 12.1.0.2 obviously has a bug in its implementation of local collection types in SQL.
The minimal examples to get ORA-22163 and ORA-03113 follow. They assume the same buggy_report package as in the question.
-- produces 'ORA-03113: end-of-file on communication channel'
DECLARE
    l_cur buggy_report.t_info_cur;
    FUNCTION get_it RETURN buggy_report.t_info_cur IS BEGIN RETURN buggy_report.get_cursor(); END;
BEGIN
    l_cur := get_it();
    dbms_output.put_line('');
END;
/
-- produces 'ORA-22163: left hand and right hand side collections are not of same type'
DECLARE
    l_cur buggy_report.t_info_cur;
    PROCEDURE hello IS BEGIN NULL; END;
BEGIN
    l_cur := buggy_report.get_cursor;
    -- comment `hello` and exception disappears
    hello;
    CLOSE l_cur;
END;
/
Yes, in Oracle 12c you are allowed to use local collection types in SQL.
The Database New Features Guide documentation says:
PL/SQL-Specific Data Types Allowed Across the PL/SQL-to-SQL Interface
The table operator can now be used in a PL/SQL program on a collection whose data type is declared in PL/SQL. This also allows the data type to be a PL/SQL associative array. (In prior releases, the collection's data type had to be declared at the schema level.)
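As an illustration of what that documented feature looks like when it does work (separate from the repro in the question; the names are just for the example), something like this is expected to run fine on 12c:

-- 12c: TABLE() over a collection type declared in a package spec,
-- including an associative array
CREATE OR REPLACE PACKAGE demo_pkg AS
    TYPE num_tab IS TABLE OF NUMBER INDEX BY PLS_INTEGER;
END demo_pkg;
/
DECLARE
    l_nums demo_pkg.num_tab;
    l_cnt  PLS_INTEGER;
BEGIN
    l_nums(1) := 10;
    l_nums(2) := 20;
    SELECT COUNT(*) INTO l_cnt FROM TABLE(l_nums);
    dbms_output.put_line('rows: ' || l_cnt);
END;
/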
However, I don't know why your code is not working; maybe this new feature still has a bug.
I fiddled around with your example. The trick by which Oracle 12c can use PL/SQL collections in SQL statements is that Oracle creates surrogate schema object types with compatible SQL type attributes and uses these surrogate types in the query. Your case looks like a bug. I traced the execution: the surrogate types are created only once, if they don't already exist, so the effective type doesn't change and isn't recompiled (I don't know whether implicit recompilation is done using an ALTER statement) during execution of the pipelined function. And the issue only occurs if you use the p parameter in the pipe_table function: if you don't call l_table := p; the code executes successfully even with the procedure call enabled.
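If it is indeed a bug, the pre-12c workaround should still apply: declare the collection type at schema level so that no surrogate type is involved. A sketch (the new type name is made up):

-- schema-level type instead of the package-local t_id_table
CREATE OR REPLACE TYPE t_id_table_sql AS TABLE OF NUMBER(10);
/
-- and declare the pipelined function against it:
-- FUNCTION pipe_table(p t_id_table_sql) RETURN t_info_table PIPELINED;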

Avoid hardcoding in If statement with multiple conditions

I have a snippet of PL/SQL code that looks similar to this:
IF l_order = 'Cancelled At Order Stage' OR l_order = 'Stopped at Billing Stage' THEN
    Do something
END IF;

IF l_type = 'Internal' OR l_type = 'Contracted' THEN
    Do something else
END IF;
I'd like to avoid having the hard coded strings, so I considered a simple array. However, it seems like a lot of extra lines of code (creating a type, creating an array of the type, iterating through that array) just to do what I've got here.
What are the recommendations on what to do here? I know this is a very minimal issue and I'm probably micro-optimising, but I'd like to know what good practice is.
Hard-coding is not always bad. You have to weigh the advantage of configurability against the cost of reduced readability.
People can only keep a small number of new variables in their head at once. It's up to you to decide what in your system has meaning. For example, if developers see the phrase "Cancelled At Order Stage" a dozen times a day, reading it requires no thought. If you replace that phrase with C_CANCEL_STATUS your code becomes less readable.
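If you do decide the literals deserve names, a package of constants keeps the cost low; a sketch (names assumed):

CREATE OR REPLACE PACKAGE order_constants AS
    c_cancelled_at_order CONSTANT VARCHAR2(30) := 'Cancelled At Order Stage';
    c_stopped_at_billing CONSTANT VARCHAR2(30) := 'Stopped at Billing Stage';
END order_constants;
/
-- usage:
-- IF l_order IN (order_constants.c_cancelled_at_order,
--                order_constants.c_stopped_at_billing) THEN ...

Whether that reads better than the literal strings is exactly the readability-vs-configurability trade-off described above.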
Or maybe you just need a little PL/SQL syntactic sugar. For example, using a pre-defined collection and the member of operator may help simplify things:
declare
    c_cancelled_statuses constant sys.dbms_debug_vc2coll :=
        sys.dbms_debug_vc2coll('Cancelled At Order Stage', 'Stopped at Billing Stage');
begin
    if 'Cancelled At Order Stage' member of c_cancelled_statuses then
        dbms_output.put_line('Cancelled!');
    end if;
end;
/
All programmers understand that hard-coding can be bad (right?). Hard-coding can hurt your code but probably won't kill it.
But I've seen several systems completely destroyed by softcoding. I know too many managers who think any line of code is "hard-coded", and that all logic should be stored in a configuration table. Many horrible, proprietary programming languages have been built because of a fear of hard-coding. Don't feel too bad about occasionally hard-coding.

Selecting large sequence value into global variable not working in PL/SQL code

I have a script where I would like to use a global variable to store a transaction number for later use. The code works fine on one schema where the sequence value I'm fetching is relatively low. It does not work on another schema with a higher sequence value, where I get a "Numeric Overflow". If I change that sequence value to a lower number it works there as well, but that is not an option.
VAR TRANSACTIONNR NUMBER;

BEGIN
    --Works with NEXTVAL being around 946713241
    --Doesn't work with NEXTVAL being around 2961725541
    SELECT MY_SEQUENCE.NEXTVAL INTO :TRANSACTIONNR FROM DUAL;
    MY_PACKAGE.STARTTRANSACTION(:TRANSACTIONNR);
END;
/

-- SQL Statements

BEGIN
    MY_PACKAGE.ENDTRANSACTION;
    MY_PACKAGE.DO_SOMETHING(:TRANSACTIONNR);
END;
/
What is also working is selecting the sequence into a variable declared in the DECLARE block:
DECLARE
    TRANSACTIONNR NUMBER;
BEGIN
    SELECT MY_SEQUENCE.NEXTVAL INTO TRANSACTIONNR FROM DUAL;
    MY_PACKAGE.STARTTRANSACTION(TRANSACTIONNR);
END;
/
But that means I won't be able to reuse it in the block at the end. Setting the size of the number is not possible.
VAR TRANSACTIONNR NUMBER(15)
is not valid.
Any ideas what I could try or other ways to store global state?
On further investigation this looks like it might be a SQL Developer bug (making assumptions about what you're doing again, of course...). I can get the same error with:
VAR TRANSACTIONNR NUMBER;
BEGIN
    SELECT 2961725541 INTO :TRANSACTIONNR FROM DUAL;
END;
/
It appears that SQL Developer's NUMBER is limited to 2^31, which isn't the case normally for Oracle.
A possible workaround is to use BINARY_DOUBLE to store the value, but you'll run into precision problems eventually (not sure exactly where, but it looks OK up to around 2^53), and you'll need to cast() it back to NUMBER when using it.
VAR TRANSACTIONNR BINARY_DOUBLE;

BEGIN
    SELECT 2961725541 INTO :TRANSACTIONNR FROM DUAL;
    -- dbms_output.put_line(cast(:TRANSACTIONNR as NUMBER)); -- null for some reason
END;
/

...

BEGIN
    dbms_output.put_line(cast(:TRANSACTIONNR as NUMBER));
END;
/
For some reason I don't seem to be able to refer to the bind variable again in the anonymous block I set it in - it's null in the commented-out code - which seems to be another SQL Developer quirk, regardless of the var type; but as you were doing so in your code, I may again have assumed too much...
Original answer for posterity, as it may still be relevant in other circumstances...
Presumably you're doing something to end the current transaction in endtransaction, e.g. a commit; otherwise you could just refer to my_sequence.currval in the do_something call. A NUMBER variable is fine for this size of value, though: it won't have a problem with a sequence that size, and it makes no difference that the value comes from a sequence rather than being manually assigned. I don't think the problem is with the storage or the sequence.
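For reference, that currval alternative would look roughly like this (same session assumed, and nothing else advancing the sequence in between):

BEGIN
    MY_PACKAGE.STARTTRANSACTION(MY_SEQUENCE.NEXTVAL);
END;
/
-- SQL Statements
BEGIN
    MY_PACKAGE.ENDTRANSACTION;
    MY_PACKAGE.DO_SOMETHING(MY_SEQUENCE.CURRVAL);
END;
/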
It seems rather more likely that the error is coming from one of the package procedures you're calling, though I can't quite imagine what you might be doing with it; something like this will cause the same error though:
create sequence my_sequence start with 2961725541;

create package my_package as
    procedure starttransaction(v_num number);
    procedure endtransaction;
    procedure do_something(v_num number);
end my_package;
/

create package body my_package as
    procedure starttransaction(v_num number) is
    begin
        dbms_output.put_line('starttransaction(): ' || v_num);
        for i in 1..v_num loop
            null;
        end loop;
    end starttransaction;

    procedure endtransaction is
    begin
        dbms_output.put_line('endtransaction()');
    end endtransaction;

    procedure do_something(v_num number) is
    begin
        dbms_output.put_line('do_something(): ' || v_num);
    end do_something;
end my_package;
When your code is run against that it throws your error:
BEGIN
*
ERROR at line 1:
ORA-01426: numeric overflow
ORA-06512: at "STACKOVERFLOW.MY_PACKAGE", line 6
ORA-06512: at line 5
endtransaction()
do_something():
Note the error is reported against line 6 in the package, which is the for ... loop line, not from the assignment in your anonymous block.
Looping like that would be an odd thing to do, of course, but there are probably other ways to generate that error. The cutoff is whether the nextval is above 2^31: if I start the sequence with 2147483647 it works; with 2147483648 it errors.
I'm assuming you are actually getting an ORA-01426 in the original question; if it's actually an ORA-01438 or ORA-06502 then it's easier to reproduce, by trying to assign the value to a NUMBER(9) column or variable. 'Numeric overflow' is pretty specific though.
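For comparison, that precision-related error is trivial to reproduce (a sketch; the value is the one from the question):

-- raises ORA-06502: PL/SQL: numeric or value error: number precision too large
DECLARE
    v_small NUMBER(9);
BEGIN
    v_small := 2961725541;
END;
/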