Call Function within User-Defined Function in SQL

I want to include the following function inside of another user-defined function in Oracle.
DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN,TABLE_IN)
SCHEMA_IN and TABLE_IN are arguments to the user-defined function. However, I get the following error.
ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
How can I resolve this? Below is my SQL script.
CREATE OR REPLACE FUNCTION GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
  RETURN VARCHAR2
IS
  L_TEXT VARCHAR2(32767) := NULL;
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN, NAME_IN);
  FOR CUR_REC IN (SELECT COLUMN_NAME
                    FROM USER_TAB_COLUMNS
                   WHERE TABLE_NAME = NAME_IN
                     AND NUM_NULLS = 0) LOOP
    L_TEXT := L_TEXT || ',' || CUR_REC.COLUMN_NAME;
  END LOOP;
  RETURN LTRIM(L_TEXT, ',');
END;

gather_table_stats is a procedure, not a function. And it is a procedure that includes transaction control logic (presumably at least a commit). You cannot, therefore, call it in a function that is called from SQL. You could call your function from PL/SQL rather than SQL:
DECLARE
  l_text VARCHAR2(4000);
BEGIN
  l_text := get_columns( <<schema>>, <<table>> );
END;
I would, however, be very, very dubious about the approach you're taking.
First, dbms_stats gathers statistics that are used by the optimizer. Using those statistics in other contexts is generally dangerous. Most dbms_stats calls involve some level of indeterminism: you're generally gathering data from a sample of rows and extrapolating. That is perfectly appropriate for giving the optimizer information so that it can judge things like roughly how many rows a table scan will return. It may not be appropriate if you're trying to differentiate between a column that is never NULL and one that is very rarely NULL. Some samples may catch a NULL value, others may not. It may seem to work correctly for months or years and then start to fail either consistently or intermittently.
Second, when you gather fresh statistics, you're potentially forcing Oracle to do hard parses on all the existing SQL statements that reference the table. That can be a major performance hit if you do this in the middle of the day. If you happen to force a query plan to change in a bad way, you'll likely cause the DBA a great deal of grief. If the DBA is gathering statistics in a particular way (locking statistics on some tables, forcing histograms on others, forcing a lack of histograms on still others, etc.) to deal with performance issues, it's highly likely that you'll either be working at cross purposes or actively undermining that work.
Third, if a column never has NULL values, it really ought to be marked as NOT NULL. Then you can simply look at the data dictionary to see which columns are nullable and which are not without bothering to gather statistics.
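For example, a simple data-dictionary query (using binds named after the function's own parameters) needs no statistics at all:
SELECT column_name
  FROM all_tab_columns
 WHERE owner      = :schema_in
   AND table_name = :name_in
   AND nullable   = 'N';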

You need to make your function an autonomous transaction in order to execute gather_table_stats:
CREATE OR REPLACE FUNCTION GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
  RETURN VARCHAR2
AS
  PRAGMA AUTONOMOUS_TRANSACTION;
  L_TEXT VARCHAR2(32767) := NULL;
BEGIN
  -- the gather_table_stats call (and its internal commit) is now legal here,
  -- because the function runs in its own autonomous transaction
  DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN, NAME_IN);
  -- ... rest of the function body as in the question ...
END;

Related

PL/SQL procedure with a variable number of parameters

I want to know if I can create a PL/SQL procedure whose number of parameters and their types can change.
For example, a procedure p1 that I can call like this:
p1(param1, param2, ..., param_n);
I want to pass a table name and data to the procedure, but the attributes change for every table.
CREATE OR REPLACE PROCEDURE INSERTDATA(NOMT IN VARCHAR2) IS
  num INT;
BEGIN
  EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
    INTO num USING NOMT;
  IF num < 1 THEN
    dbms_output.put_line('table does not exist!');
  ELSE
    dbms_output.put_line('');
    -- here I want to insert the parameters into the table,
    -- but the table attributes are not the same!
  END IF;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter; see the sketch below.
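For instance, a minimal sketch (the vc2_list type name is illustrative):
CREATE OR REPLACE TYPE vc2_list IS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE PROCEDURE p1 (p_params IN vc2_list) IS
BEGIN
  -- one collection parameter carries any number of values
  FOR i IN 1 .. p_params.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE('param ' || i || ' = ' || p_params(i));
  END LOOP;
END p1;
/
BEGIN
  p1(vc2_list('a', 'b', 'c'));  -- three values
  p1(vc2_list('a'));            -- one value
END;
/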
Where would you want to use such a procedure?
If you need to store, update and query a variable amount of information, might I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, both in SQL and PL/SQL.
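A minimal sketch of that idea, assuming Oracle 12.2 or later (the procedure name and the keys in the payload are illustrative):
CREATE OR REPLACE PROCEDURE p1 (p_payload IN CLOB) IS
  l_doc  JSON_OBJECT_T := JSON_OBJECT_T.parse(p_payload);
  l_keys JSON_KEY_LIST := l_doc.get_keys;
BEGIN
  -- read whatever keys happen to be present in this particular call
  FOR i IN 1 .. l_keys.COUNT LOOP
    DBMS_OUTPUT.PUT_LINE(l_keys(i) || ' = ' || l_doc.get(l_keys(i)).to_string());
  END LOOP;
END p1;
/
BEGIN
  p1('{"id": 1, "name": "test"}');          -- two "parameters"
  p1('{"id": 2, "name": "x", "extra": 3}'); -- three
END;
/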
"I want to pass a table name and data to the procedure, but the attributes change for every table."
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters in the correct order.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore it breaks the Law of Demeter, because the calling program needs to understand things like primary keys (sequences, identity columns, etc.), foreign-key lookups, and so on.
This approach mandates that all columns must be populated; it makes no allowance for virtual columns, optional columns, etc.
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
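A rough sketch of such a generator (MY_TABLE and ins_my_table are hypothetical; a real Table API generator would also handle defaults, identity columns and error handling):
DECLARE
  l_cols VARCHAR2(32767);
  l_args VARCHAR2(32767);
  l_vals VARCHAR2(32767);
BEGIN
  -- read the table's projection from the data dictionary
  FOR r IN (SELECT column_name
              FROM user_tab_columns
             WHERE table_name = 'MY_TABLE'
             ORDER BY column_id) LOOP
    l_cols := l_cols || r.column_name || ', ';
    l_args := l_args || 'p_' || LOWER(r.column_name)
                     || ' IN my_table.' || r.column_name || '%TYPE, ';
    l_vals := l_vals || 'p_' || LOWER(r.column_name) || ', ';
  END LOOP;
  -- emit the dedicated insert procedure for this one table
  DBMS_OUTPUT.PUT_LINE(
       'CREATE OR REPLACE PROCEDURE ins_my_table ('
    || RTRIM(l_args, ', ') || ') IS'                              || CHR(10)
    || 'BEGIN'                                                    || CHR(10)
    || '  INSERT INTO my_table (' || RTRIM(l_cols, ', ') || ')'   || CHR(10)
    || '  VALUES (' || RTRIM(l_vals, ', ') || ');'                || CHR(10)
    || 'END ins_my_table;');
END;
/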

Using variable as column%TYPE (ORACLE)

I need to make a procedure for editing a table exemplare:
edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value ???)
which updates the value in the given column of the row with this ID.
The problem is that the columns of this table have different data types. Can I do something like p_value exemplare.p_column%TYPE? Or do I have to set it to VARCHAR2 and then (somehow) convert it to the correct data type?
Can I do something like p_value exemplare.p_column%TYPE?
No. The signature of a procedure must be static. What you could do is overload the procedure in a package:
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value VARCHAR2);
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value DATE);
procedure edit_exemplare(p_id_exemplare IN NUMBER, p_column VARCHAR2, p_value NUMBER);
However, you'd still need dynamic SQL to interpret the metadata of p_column, so your code will remain clunky.
This approach reminds me of the getter-and-setter paradigm which is still prevalent in object-oriented programming. It is not an approach that fits SQL. To edit three table columns you would make three procedural calls, which would generate and execute three dynamic UPDATE statements. This does not scale well, and it is the sort of thing which causes OO developers to assert that databases are slow, when in fact the problem is at the calling end.
There are various ways to solve this, and which is the correct one will depend on the precise details of what you're trying to do. The key point is: a single transaction should execute no more than one UPDATE statement per record. A properly set-based operation which updates multiple records in one statement is even better.
You can use {table}%ROWTYPE as an input; see the sketch below.
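A minimal sketch (assuming the table's key column is named id_exemplare, which is a guess based on the parameter name in the question):
CREATE OR REPLACE PROCEDURE edit_exemplare (p_row IN exemplare%ROWTYPE) IS
BEGIN
  -- one statement updates every column of the row at once
  UPDATE exemplare
     SET ROW = p_row
   WHERE id_exemplare = p_row.id_exemplare;
END edit_exemplare;
/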

How to check updating of column value in oracle trigger

I'm using UPDATING('col_name') to check inside the trigger whether a column's value has been updated. The big problem is that this check does not look at the :old and :new values: UPDATING('col_name') is true whenever col_name appears in the SET clause of the UPDATE statement, even if it is set to its old value.
I don't want to check :old.col1 <> :new.col1 for each column separately.
How can I detect a real change in a column's value?
I want to do this in a generic way, like:
SELECT col_name BULK COLLECT INTO included_columns
  FROM trigger_columns
 WHERE tbl_name = 'name';

l_idx := included_columns.first;
WHILE l_idx IS NOT NULL LOOP
  IF UPDATING(included_columns(l_idx)) THEN
    -- do sth
    RETURN;
  END IF;
  l_idx := included_columns.next(l_idx);
END LOOP;
Thanks
In a comment you said:
"I want to do this in a generic way and manage it safer. put columns which are important to trigger in a table and don't put many IF in my trigger. "
I suspected that was what you wanted. The only way you can make that work is to use dynamic SQL to assemble and execute a PL/SQL block. That is a complicated solution, for no material benefit.
I'm afraid I laughed at your use of "safer" there. Triggers are already horrible: they make it harder to reason about what is happening in the database and can lead to unforeseen scalability issues. Don't make them worse by injecting dynamic SQL into the mix. Dynamic SQL is difficult because it turns compilation errors into runtime errors.
What is your objection to hardcoding column names and IF statements in a trigger? It's safer because the trigger is compiled. It's easier to verify the trigger logic because the code is right there.
If this is just about not wanting to type, then you can generate the trigger source from the data dictionary views (such as all_tab_cols) or even your own metadata tables if you must (i.e. trigger_columns).
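For example, a quick generator over the trigger_columns table from the question; the output is pasted into the trigger source, so the trigger itself stays static and compiled:
BEGIN
  FOR r IN (SELECT col_name
              FROM trigger_columns
             WHERE tbl_name = 'name') LOOP
    -- print one IF-block per column of interest
    DBMS_OUTPUT.PUT_LINE('IF UPDATING(''' || r.col_name || ''') THEN');
    DBMS_OUTPUT.PUT_LINE('  -- do sth');
    DBMS_OUTPUT.PUT_LINE('END IF;');
  END LOOP;
END;
/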
You can define a global function similar to the following:
CREATE OR REPLACE FUNCTION NUMBER_HAS_CHANGED(pinVal_1 IN NUMBER,
                                              pinVal_2 IN NUMBER)
  RETURN CHAR
IS
BEGIN
  IF (pinVal_1 IS NULL AND pinVal_2 IS NOT NULL) OR
     (pinVal_1 IS NOT NULL AND pinVal_2 IS NULL) OR
     pinVal_1 <> pinVal_2
  THEN
    RETURN 'Y';
  ELSE
    RETURN 'N';
  END IF;
END NUMBER_HAS_CHANGED;
Now in your trigger you just write
IF NUMBER_HAS_CHANGED(:OLD.COL1, :NEW.COL1) = 'Y' THEN
  -- whatever
END IF;
Note that this function is defined to return CHAR so it can also be called from SQL statements, if needed - for example, in a CASE expression. Remember that in Oracle, there is no BOOLEAN type in the database - only in PL/SQL.
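For instance (the audit_log table and its columns are hypothetical):
SELECT t.id,
       CASE
         WHEN NUMBER_HAS_CHANGED(t.old_value, t.new_value) = 'Y' THEN 'changed'
         ELSE 'unchanged'
       END AS status
  FROM audit_log t;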
You'll probably want to create additional versions of this function to handle VARCHAR2 and DATE values, for a start, but since it's a matter of replacing the data types and changing the name of the function I'll let you have the fun of writing them. :-)
Best of luck.

How to execute an oracle procedure with an out sys_refcursor parameter?

I have a proc in my package body:
create or replace package body MYPACKAGE is

  procedure "GetAllRules"(p_rules out sys_refcursor)
  is
  begin
    open p_rules for
      select * from my_rules;
  end "GetAllRules";

  -- etc
And I'm exposing this in my package spec.
How do I execute this procedure in a new SQL Window in PL SQL Developer (or similar)?
You can execute the procedure relatively easily:
DECLARE
  l_rc sys_refcursor;
BEGIN
  mypackage."GetAllRules"( l_rc );
END;
Of course, that simply returns the cursor to the calling application. It doesn't do anything to fetch the data from the cursor, to do something with that data, or to close the cursor. Assuming that your goal is to write some data to dbms_output (which is useful sometimes for prototyping but isn't something that production code should be relying on), you could do something like
DECLARE
  l_rc  sys_refcursor;
  l_rec my_rules%rowtype;
BEGIN
  mypackage."GetAllRules"( l_rc );
  LOOP
    FETCH l_rc INTO l_rec;
    EXIT WHEN l_rc%NOTFOUND;
    dbms_output.put_line( <<print data from l_rec>> );
  END LOOP;
  CLOSE l_rc;
END;
If you're really doing something like this with the cursor in PL/SQL, I'd strongly suggest returning a strongly-typed ref cursor rather than a weakly-typed one, so that you can declare a record in terms of the cursor's %ROWTYPE rather than forcing the caller to know exactly what type to declare and hoping that the query in the procedure doesn't change. The weakly-typed approach also requires you to explicitly write code to display the data, which gets annoying.
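A sketch of the strongly-typed variant (the package and type names are illustrative, shown with a standard case-insensitive name; see the note below):
CREATE OR REPLACE PACKAGE my_rules_api IS
  -- strongly typed: the compiler knows each fetched row is my_rules%ROWTYPE
  TYPE rules_rc IS REF CURSOR RETURN my_rules%ROWTYPE;
  PROCEDURE get_all_rules (p_rules OUT rules_rc);
END my_rules_api;
/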
If you're using SQL*Plus (or something that supports some SQL*Plus commands), you can simplify things a bit
VARIABLE rc REFCURSOR;
EXEC mypackage."GetAllRules"( :rc );
PRINT :rc;
As an aside, I'm not a fan of using case-sensitive identifiers. It gets very old to have to surround identifiers like "GetAllRules" with double quotes every time you want to call them. Unless you have really compelling reasons, I'd suggest using standard case-insensitive identifiers. It's perfectly fine to capitalize identifiers consistently in your code, of course; it just doesn't make a lot of sense to go to the effort of forcing them to be case-sensitive in the data dictionary.

Pipelined Table Function - behind the scenes

1. How does it make sense that I can access the nested table's data after using the PIPE ROW command? Where is the data actually saved?
I know that in pipelined functions the data is sent directly to the invoking client, and this reduces the dynamic memory of the process on the SGA. But when I tried to access that information after pipelining, I succeeded. This surprises me, because it means the data is actually saved somewhere, but not on the local SGA. So where?
Here is an example I've made:
CREATE OR REPLACE TYPE collection_t IS OBJECT (
  ID   NUMBER,
  NAME VARCHAR2(50));
/
CREATE OR REPLACE TYPE collection_nt_t IS TABLE OF collection_t;
/
CREATE OR REPLACE FUNCTION my_test_collection_p(rownumvar NUMBER)
  RETURN collection_nt_t
  PIPELINED
IS
  collection_nt collection_nt_t := collection_nt_t();
BEGIN
  FOR i IN 1..rownumvar LOOP
    collection_nt.EXTEND(1);
    collection_nt(i) := collection_t(i, 'test');
    PIPE ROW (collection_nt(i));
  END LOOP;
  -- "queries" the collection successfully! Where is the collection being saved?
  DBMS_OUTPUT.PUT_LINE(collection_nt(3).id);
  RETURN;
END;
/
SELECT * FROM TABLE(my_test_collection_p(100));
2. I have tried the idea of batches vs row-by-row ("slow by slow") using BULK COLLECT and FORALL. But doesn't pipelining mean LOTS of context switches by itself? Returning one row at a time sounds like a bad idea. In addition, how do I choose the bulk size of a pipelined function, if any?
3. Why does SQL*Plus display the data in batches of 15 when querying SELECT * FROM TABLE(..), while PL/SQL Developer displays batches of 100? What happens behind the scenes - batches of 1 row, no?
4. Why does all the world use SQL*Plus, when there are nice IDEs like the convenient PL/SQL Developer?
Even small clues from you would help me. Thanks a lot!
Details for 1: When I do the same with a regular table function, the memory in use is something like 170 MB per 1M rows. But when using the above pipelined table function, the memory does not increase. I am trying to understand what exactly takes up the memory - the fact that the collection is stored in a variable?
CREATE OR REPLACE FUNCTION my_test_collection(rownumvar NUMBER)
  RETURN collection_nt_t
IS
  collection_nt collection_nt_t := collection_nt_t();
BEGIN
  FOR i IN 1..rownumvar LOOP
    collection_nt.EXTEND(1);
    collection_nt(i) := collection_t(i, 'test');
  END LOOP;
  RETURN collection_nt;
END;
/
SELECT * FROM TABLE(my_test_collection(1000000));   -- 170MB of SGA delta!
SELECT * FROM TABLE(my_test_collection_p(1000000)); -- none. But both of them store the whole collection!