Pipelined Table Function - behind the scenes - sql

How does it make sense that I can access the nested table's data after
using the PIPE ROW command? Where is the data actually saved?
I know that in pipelined functions the data is sent directly to the
invoking client, and that this reduces the dynamic memory of the process
on the SGA. But when I tried to access that information after pipelining, I succeeded. This surprises me, because it means the data is actually saved somewhere, just not in the local SGA. So where?
Here is an example I've made:
CREATE OR REPLACE TYPE collection_t IS OBJECT (
    ID   NUMBER,
    NAME VARCHAR2(50));
/
CREATE OR REPLACE TYPE collection_nt_t IS TABLE OF collection_t;
/
CREATE OR REPLACE FUNCTION my_test_collection_p(rownumvar NUMBER)
    RETURN collection_nt_t
    PIPELINED
IS
    collection_nt collection_nt_t := collection_nt_t();
BEGIN
    FOR i IN 1..rownumvar LOOP
        collection_nt.EXTEND(1);
        collection_nt(i) := collection_t(i, 'test');
        PIPE ROW (collection_nt(i));
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(collection_nt(3).id); -- "queries" the collection successfully! Where is the collection being saved?
    RETURN;
END;
/
SELECT * FROM TABLE(my_test_collection_p(100));
I have tried the idea of batches vs. row-by-row ("slow by slow") processing using BULK COLLECT and FORALL. But doesn't pipelining mean LOTS of context switches by itself? Returning one row at a time sounds like a bad idea. In addition, how do I choose the bulk size of a pipelined function, if there is one?
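For example, this is roughly how I tried fetching from the pipelined function in batches - as far as I understand (my assumption), the batch size here is chosen by the caller through the LIMIT clause, not by PIPE ROW:
-- rough sketch of what I tried (uses the type and function defined above)
DECLARE
    CURSOR c IS SELECT * FROM TABLE(my_test_collection_p(1000));
    TYPE row_tab_t IS TABLE OF c%ROWTYPE;
    l_rows row_tab_t;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO l_rows LIMIT 100;  -- rows arrive 100 at a time
        EXIT WHEN l_rows.COUNT = 0;
        DBMS_OUTPUT.PUT_LINE('fetched ' || l_rows.COUNT || ' rows');
    END LOOP;
    CLOSE c;
END;
/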
Why, when querying "SELECT * FROM TABLE(..)", does SQL*Plus display the
data in batches of 15 while PL/SQL Developer displays batches of
100? What happens behind the scenes - batches of 1 row, no?
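I did notice that I can change the SQL*Plus batch myself with ARRAYSIZE, which makes me assume the batching is the client's fetch size rather than something the function decides:
SET ARRAYSIZE 100
SET AUTOTRACE TRACEONLY STATISTICS
SELECT * FROM TABLE(my_test_collection_p(1000));
-- with a bigger ARRAYSIZE the "SQL*Net roundtrips to/from client" statistic should drop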
And why does everyone use SQL*Plus when there are convenient IDEs like PL/SQL Developer? Even small clues from you would help me. Thanks a lot!
Details for 1: When I do the same with a regular table function, the memory in use is something like 170 MB per 1M rows. But when using the pipelined table function above, the memory does not increase. I am trying to understand what exactly takes the memory - the fact that the collection is stored in a variable?
CREATE OR REPLACE FUNCTION my_test_collection(rownumvar NUMBER)
    RETURN collection_nt_t
IS
    collection_nt collection_nt_t := collection_nt_t();
BEGIN
    FOR i IN 1..rownumvar LOOP
        collection_nt.EXTEND(1);
        collection_nt(i) := collection_t(i, 'test');
    END LOOP;
    RETURN collection_nt;
END;
/
SELECT * FROM TABLE(my_test_collection(1000000));   -- 170MB of SGA delta!
SELECT * FROM TABLE(my_test_collection_p(1000000)); -- none, but both of them store the whole collection!
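(In case it matters, this is roughly how I was measuring the memory delta - it assumes SELECT access to v$mystat and v$statname; I run it before and after the query and compare:)
-- session memory statistics for the current session
SELECT sn.name, ROUND(ms.value / 1024 / 1024, 1) AS mb
FROM   v$mystat ms
       JOIN v$statname sn ON sn.statistic# = ms.statistic#
WHERE  sn.name LIKE 'session%memory%';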

Related

pl/sql procedure with variable numbers of parameters

I want to know if I can create a PL/SQL procedure where the number of parameters and their types change.
For example, procedure p1.
I can use it like this:
p1 (param1, param2,......., param n);
I want to pass the table name and data to the procedure, but the attributes change for every table:
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
    num int;
BEGIN
    EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
        into num using NOMT;
    IF (num < 1) THEN
        dbms_output.put_line('table not exist !!! ');
    ELSE
        dbms_output.put_line('');
        -- here i want to insert parameters in the table,
        -- but the table attributes are not the same !!
    END IF;
    NULL;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter.
Where would you want to use such a procedure?
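For illustration, a minimal sketch of the collection-as-a-single-parameter idea (the type and procedure names here are made up):
CREATE OR REPLACE TYPE t_str_list IS TABLE OF VARCHAR2(4000);
/
CREATE OR REPLACE PROCEDURE p1 (p_values IN t_str_list) IS
BEGIN
    FOR i IN 1 .. p_values.COUNT LOOP
        DBMS_OUTPUT.PUT_LINE(i || ': ' || p_values(i));
    END LOOP;
END;
/
-- call it with however many values you like - still one parameter
BEGIN
    p1(t_str_list('a'));
    p1(t_str_list('a', 'b', 'c'));
END;
/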
If you need to store, update and query a variable amount of information, might I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, in both SQL and PL/SQL.
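As a rough sketch of that JSON approach (assumes 12c or later; the procedure name and document shape are just an example):
-- pass a variable set of name/value pairs as one JSON document
CREATE OR REPLACE PROCEDURE insert_from_json (p_doc IN CLOB) IS
    l_name  VARCHAR2(100);
    l_value VARCHAR2(100);
BEGIN
    SELECT JSON_VALUE(p_doc, '$.name'), JSON_VALUE(p_doc, '$.value')
    INTO   l_name, l_value
    FROM   dual;
    DBMS_OUTPUT.PUT_LINE(l_name || ' = ' || l_value);
    -- a real version would build and run the INSERT from the document here
END;
/
BEGIN
    insert_from_json('{"name":"DEPTNO","value":"10"}');
END;
/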
I want to pass the table name and data to the procedure, but the attributes change for every table
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller has to discover the projection of the table and arrange the parameters in a correct fashion.
In no particular order:
This is bad practice because it requires the calling program to do the hard work regarding the data dictionary.
Furthermore it breaks the Law Of Demeter because the calling program needs to understand things like primary keys (sequences, identity columns, etc), foreign key lookups, etc
This approach mandates that all columns must be populated; it makes no allowance for virtual columns, optional columns, etc
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues but it is much safer than what your question proposes.
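To give a taste of how little code such a generator needs, here is a rough sketch that builds the source of an insert procedure from user_tab_columns (it only prints the DDL; a real generator would EXECUTE IMMEDIATE it and handle identity and virtual columns; the table name is hypothetical):
DECLARE
    l_tab  VARCHAR2(30) := 'EMP';   -- hypothetical table name
    l_args VARCHAR2(4000);
    l_cols VARCHAR2(4000);
    l_vals VARCHAR2(4000);
BEGIN
    FOR c IN (SELECT column_name
              FROM   user_tab_columns
              WHERE  table_name = l_tab
              ORDER BY column_id) LOOP
        l_args := l_args || ', p_' || LOWER(c.column_name) || ' IN ' || l_tab || '.' || c.column_name || '%TYPE';
        l_cols := l_cols || ', ' || c.column_name;
        l_vals := l_vals || ', p_' || LOWER(c.column_name);
    END LOOP;
    DBMS_OUTPUT.PUT_LINE(
        'CREATE OR REPLACE PROCEDURE ins_' || LOWER(l_tab) ||
        ' (' || LTRIM(l_args, ', ') || ') IS BEGIN ' ||
        'INSERT INTO ' || l_tab || ' (' || LTRIM(l_cols, ', ') || ') VALUES (' ||
        LTRIM(l_vals, ', ') || '); END;');
END;
/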

How to retrieve PL/SQL record type in SQL? [duplicate]

This question already has answers here:
Oracle: Select From Record Datatype
(6 answers)
Closed 5 years ago.
A shared package exists which defines a student record type and a function which returns a student:
CREATE OR REPLACE PACKAGE shared.student_utils IS
--aggregates related data from several tables
TYPE student_rec IS RECORD (
id student.id%TYPE,
username student.username%TYPE,
name student.name%TYPE,
phone phone.phone%TYPE
/*etc.*/
);
FUNCTION get_student(student_id IN student.id%TYPE) RETURN student_rec;
END;
Now, I'm writing a package that provides an API for an Apex application to consume. In particular, I need to provide the student record and other relevant data in a format that can be selected via SQL (and displayed in a report page in Apex).
So far I've been trying to find the most direct way to select the data in SQL. Obviously, a record type cannot be used in SQL, so my quick-and-dirty idea was to define a table type in my package spec and a PIPELINED function in my package spec/body:
CREATE OR REPLACE PACKAGE my_schema.api IS
TYPE student_tab IS TABLE OF shared.student_utils.student_rec;
FUNCTION get_all_student_data(student_id IN student.id%TYPE) RETURN student_tab PIPELINED;
END;
/
CREATE OR REPLACE PACKAGE BODY my_schema.api IS
FUNCTION get_all_student_data(student_id IN student.id%TYPE) RETURN student_tab PIPELINED IS
BEGIN
PIPE ROW(shared.student_utils.get_student(student_id));
END;
END;
...which lets me select it like so:
SELECT * FROM TABLE(my_schema.api.get_all_student_data(1234));
This works, but building a pipelined table just for one row is overkill, and Oracle's explain plan seems to agree.
Supposedly in Oracle 12c, there should be more options available:
More PL/SQL-Only Data Types Can Cross PL/SQL-to-SQL Interface
...but I can't seem to figure it out in my scenario. Changing the function to:
FUNCTION get_all_student_data RETURN student_tab IS
r_student_tab student_tab;
BEGIN
r_student_tab(1) := shared.student_utils.get_student(student_id);
RETURN r_student_tab;
END;
...will compile, but I cannot SELECT from it as before.
OK, enough rambling, here's my actual question - what is the most direct method to call a PL/SQL function that returns a record type and select/manipulate the result in SQL?
The killer line in the documentation is this one:
A PL/SQL function cannot return a value of a PL/SQL-only type to SQL.
That appears to rule out querying directly from a function which returns a PL/SQL Record or associative array like this:
select * from table(student_utils.get_students(7890));
What does work is this, which is technically SQL (because the docs define anonymous blocks as SQL not PL/SQL):
declare
ltab student_utils.students_tab;
lrec student_utils.student_rec;
rc sys_refcursor;
begin
ltab := student_utils.get_students(1234);
open rc for select * from table(ltab);
fetch rc into lrec;
dbms_output.put_line(lrec.name);
close rc;
end;
/
This is rather lame. I suppose there are a few times when we want to open a ref cursor from an array rather than just opening it for the SQL we would use to populate the array, but it's not the most pressing of use cases.
The problem is Oracle's internal architecture: the kernel has C modules for SQL and C modules for PL/SQL (this is why you will hear people talking about "context switches"). Exposing more PL/SQL capabilities to the SQL engine requires modifying the interfaces. We can only imagine how difficult it is to allow the SQL compiler to work against the definitions of PL/SQL data structures, which are extremely unstable (potentially they can change every time we run create or replace package ...).
Of course this does work for PIPELINED functions but that's because under the bonnet Oracle creates SQL types for the PL/SQL types referenced in the function. It cannot create objects on the fly for an arbitrary function we might wish to slip into a table() call. Some day it might be possible but just consider this one sticking point: what happens when a user who has execute on our package but lacks CREATE TYPE privilege tries to use the function?
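Incidentally, depending on the version you can sometimes spot those generated types in the data dictionary with something along these lines (the SYS_PLSQL_ naming is version-dependent and undocumented, so treat this as an experiment, not an API):
SELECT type_name, typecode
FROM   user_types
WHERE  type_name LIKE 'SYS_PLSQL_%';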

Any SIMPLE way to fetch ALL results from a PL/SQL cursor?

The second part of the question: How to do the same (get ALL results, without any loops) with SQL*Plus.
I'm writing some PL/SQL scripts to test the data integrity using Jenkins.
I'm having a script like this:
declare
    temp_data SOME_PACKAGE.someRefCurFunction; -- type: CURSOR
    DATA1 NUMBER;
    DATA2 NUMBER;
    DATA3 SOMETHING.SOMETHING_ELSE%TYPE;
begin
    temp_data := SOME_PACKAGE.someFunction('some',parameters,here);
    LOOP
        FETCH temp_data INTO DATA1,DATA2,DATA3;
        EXIT WHEN temp_data%NOTFOUND;
        dbms_output.put_line(DATA1||','||DATA2||','||DATA3);
    END LOOP;
end;
Results look like this:
Something1,,Something2
Something3,Something4,Something5
Something6,Something7,Something8
Sometimes the results are null, as in the 1st line. It doesn't matter; they should be.
The purpose of this script is simple - to fetch EVERYTHING from the cursor, comma-separate the data, and print lines with the results.
The example here is simple as hell, but it's just an example. The "real life" packages sometimes contain hundreds of variables and process enormous database tables.
I need it to be as simple as possible.
Is there any method to fetch EVERYTHING from the cursor, comma-separate the single results if possible, and send it to the output? The final output in the Jenkins test should be a text file, so that I can diff it with other results.
Thanks in advance :)
If you're truly open to a SQL*Plus script, rather than a PL/SQL block
SQL> set colsep ','
SQL> variable rc refcursor;
SQL> exec :rc := SOME_PACKAGE.someFunction('some',parameters,here);
SQL> print rc;
should execute the procedure and fetch all the data from your cursor. You could spool the resulting CSV output to a file using the spool command. Of course, you may then encounter issues where SQL*Plus isn't displaying the data in a clean format because of the linesize or other similar issues - that may force you to add some additional SQL*Plus formatting commands (e.g. set linesize, column <<column name>> format <<format>>, etc.)
It's not obvious to me that a SQL*Plus script buys you much over writing some dynamic SQL that generates the PL/SQL script that you posted initially or (if you're on 12c) writing some code that uses dbms_sql to fetch data from the cursor that is returned.
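For completeness, a sketch of that dbms_sql route - converting the ref cursor to a dbms_sql cursor and fetching every column generically as a string (the call to SOME_PACKAGE.someFunction is the placeholder from your question):
DECLARE
    l_rc   SYS_REFCURSOR;
    l_cur  INTEGER;
    l_cols INTEGER;
    l_desc DBMS_SQL.DESC_TAB;
    l_val  VARCHAR2(4000);
    l_line VARCHAR2(32767);
BEGIN
    l_rc  := SOME_PACKAGE.someFunction('some', parameters, here);
    l_cur := DBMS_SQL.TO_CURSOR_NUMBER(l_rc);
    DBMS_SQL.DESCRIBE_COLUMNS(l_cur, l_cols, l_desc);
    FOR i IN 1 .. l_cols LOOP
        DBMS_SQL.DEFINE_COLUMN(l_cur, i, l_val, 4000);  -- fetch everything as VARCHAR2
    END LOOP;
    WHILE DBMS_SQL.FETCH_ROWS(l_cur) > 0 LOOP
        l_line := NULL;
        FOR i IN 1 .. l_cols LOOP
            DBMS_SQL.COLUMN_VALUE(l_cur, i, l_val);
            l_line := l_line || CASE WHEN i > 1 THEN ',' END || l_val;
        END LOOP;
        DBMS_OUTPUT.PUT_LINE(l_line);
    END LOOP;
    DBMS_SQL.CLOSE_CURSOR(l_cur);
END;
/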
The answer seems obvious. You currently have a function which returns a cursor, itself returning a data set of hundreds of fields (though you show only three). You want instead a single string with the comma-separated values. So change the function, or write another one based on the same query.
package body SOME_PACKAGE is
    ...
    -- The current function
    function someFunction ...
        return sys_refcursor
    is
        generated_cursor sys_refcursor;
    begin
        open generated_cursor for
            select f1, f2, f3                                   --> three (or n) fields
            from   ...
            where  ...;
        return generated_cursor;
    end someFunction;
    -- The new function
    function someOtherFunction ...
        return sys_refcursor
    is
        generated_cursor sys_refcursor;
    begin
        open generated_cursor for
            select f1 || ',' || f2 || ',' || f3 as StringField  --> one string field
            from   ...
            where  ...;
        return generated_cursor;
    end someOtherFunction;
end SOME_PACKAGE;
This isn't quite all that you asked for. It does save declaring variables (one instead of hundreds) to read the data in, but you still read it one row at a time instead of, as I read your question, reading every row in one operation. If such a super-fetch were possible, it would require massive amounts of memory. Sometimes we do things that just require massive amounts of memory and we work with that as best we can, but your "requirement" seems to be only a matter of convenience for the developers. That, imnsho, lies way down the list of priorities for consuming resources.
Developing a cursor that returns the data in the final form you want would seem to me to be the best of all the alternatives.

Call Function within User-Defined Function in SQL

I want to include the following function inside of another user-defined function in Oracle.
DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN,TABLE_IN)
SCHEMA_IN and TABLE_IN are arguments to the user-defined function. However, I get the following error.
ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
How can I resolve this? Below is my SQL script.
CREATE OR REPLACE Function GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
    RETURN VARCHAR2
is
    L_TEXT VARCHAR2(32767) := NULL;
BEGIN
    DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN, NAME_IN);
    FOR CUR_REC IN (SELECT COLUMN_NAME FROM USER_TAB_COLUMNS
                    WHERE TABLE_NAME = name_in AND NUM_NULLS = 0) LOOP
        L_TEXT := L_TEXT || ',' || CUR_REC.COLUMN_NAME;
    END LOOP;
    return(ltrim(l_text,','));
END;
gather_table_stats is a procedure, not a function. And it is a procedure that includes transaction control logic (presumably, at least a commit). You cannot, therefore, call it in a function that is called from SQL. You could call your function from PL/SQL rather than SQL:
DECLARE
l_text varchar2(4000);
BEGIN
l_text := get_columns( <<schema>>, <<table>> );
END;
I would, however, be very, very dubious about the approach you're taking.
First, dbms_stats gathers statistics that are used by the optimizer. Using those statistics in other contexts is generally dangerous. Most dbms_stats calls involve some level of indeterminism - you're generally gathering data from a sample of rows and extrapolating. That is perfectly appropriate for giving the optimizer information so that it can judge things like roughly how many rows a table scan will return. It may not be appropriate if you're trying to differentiate between a column that is never NULL and one that is very rarely NULL. Some samples may catch a NULL value, others may not. It may seem to work correctly for months or years and then start to fail either consistently or intermittently.
Second, when you gather fresh statistics, you're potentially forcing Oracle to do hard parses on all the existing SQL statements that reference the table. That can be a major performance hit if you do this in the middle of the day. If you happen to force a query plan to change in a bad way, you'll likely cause the DBA a great deal of grief. If the DBA is gathering statistics in a particular way (locking statistics on some tables, forcing histograms on others, forcing a lack of histograms on others, etc.) to deal with performance issues, it's highly likely that you'll either be working at cross purposes or actively breaking each other's work.
Third, if a column never has NULL values, it really ought to be marked as NOT NULL. Then you can simply look at the data dictionary to see which columns are nullable and which are not without bothering to gather statistics.
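For example, something along these lines (no statistics involved; assumes the table is in your own schema, and the table name is just a placeholder):
SELECT column_name
FROM   user_tab_columns
WHERE  table_name = 'MY_TABLE'
AND    nullable   = 'N';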
You need to set your function to be an autonomous transaction to execute gather_table_stats:
CREATE OR REPLACE Function GET_COLUMNS (SCHEMA_IN IN VARCHAR2, NAME_IN IN VARCHAR2)
    RETURN VARCHAR2
as
    pragma autonomous_transaction;
    L_TEXT VARCHAR2(32767) := NULL;
BEGIN
    -- same body as in the question, plus a COMMIT before returning,
    -- since an autonomous transaction must not leave a transaction open
    DBMS_STATS.GATHER_TABLE_STATS(SCHEMA_IN, NAME_IN);
    ...
    COMMIT;
    return(ltrim(l_text,','));
END;

How do I return all values from a stored procedure?

Forgive my naivety, but I am new to using Delphi with databases (which may seem odd to some).
I have setup a connection to my database (MSSQL) using a TADOConnection. I am using TADOStoredProc to access my stored procedure.
My stored procedure returns 2 columns, a column full of server names, and a 2nd column full of users on the server. It typically returns about 70 records...not a lot of data.
How do I enumerate this stored procedure programmatically? I am able to drop a DBGrid on my form and attach it to a TDataSource (which is then attached to my ADOStoredProc) and I can verify that the data is correctly being retrieved.
Ideally, I'd like to enumerate the returned data and move it into a TStringList.
Currently, I am using the following code to enumerate the ADOStoredProc, but it only returns '#RETURN_VALUE':
ADOStoredProc1.Open;
ADOStoredProc1.ExecProc;
ADOStoredProc1.Parameters.Refresh;
for i := 0 to AdoStoredProc1.Parameters.Count - 1 do
begin
Memo1.Lines.Add(AdoStoredProc1.Parameters.Items[i].Name);
Memo1.Lines.Add(AdoStoredProc1.Parameters.Items[i].Value);
end;
Call Open to get a dataset returned
StoredProc.Open;
while not StoredProc.EOF do
begin
Memo1.Lines.Add(StoredProc.FieldByName('xyz').Value);
StoredProc.Next;
end;
Use Open to get the records from the StoredProc
Use either design-time Fields, ad-hoc Fields grabbed with FieldByName before the loop or Fields[nn] to get the values.
procedure GetADOResults(AStoredProc: TADOStoredProc; AStrings: TStrings);
var
fldServer, fldUser: TField;
begin
AStoredProc.Open;
fldServer := AStoredProc.FieldByName('ServerName');
fldUser := AStoredProc.FieldByName('User');
while not AStoredProc.EOF do
begin
AStrings.Add(Format('Server: %s - / User: %s',[fldServer.AsString, fldUser.AsString]));
// or with Fields and an index (assuming ServerName is the 1st and User the 2nd) and no local vars
AStrings.Add(Format('Server: %s - / User: %s',[AStoredProc.Fields[0].AsString, AStoredProc.Fields[1].AsString]));
AStoredProc.Next;
end;
end;
//use like
GetADOResults(ADOStoredProc1, Memo1.Lines);
Note: Fields[nn] allows you to write less code, but beware if the StoredProc changes the order of the returned columns.
If your stored procedure returns a result set (rows of data), don't use ExecProc. It's designed to execute procedures with no result set. Use Open or Active instead, and then you can process them just as you are using Parameters:
ADOStoredProc1.Open;
for i := 0 to ADOStoredProc1.Parameters.Count - 1 do
begin
Memo1.Lines.Add(ADOStoredProc1.Parameters.Items[i].Name);
Memo1.Lines.Add(ADOStoredProc1.Parameters.Items[i].Value);
end;
BTW, calling Open and then ExecProc causes problems; Open returns a result set, ExecProc then clears it because you're running the procedure a second time with no result set expected. I also don't think you need the Parameters.Refresh, but I'm not 100% sure of that.
Take a look at this (just Googled it):
http://www.scip.be/index.php?Page=ArticlesDelphi12&Lang=EN#Procedure
Basically, a SQL Server stored procedure always returns one return value, but it can also create a result set, which you need to process like the data set returned from a regular select statement.