Oracle: results from tables using a function (SQL)

I'm doing some testing to see if I can speed up a particular result set, but can't seem to get this particular solution working. I have data coming from a few different tables and want to combine it. I want to try this without using a UNION select to see if I get a performance improvement.
When I have a custom table/object type in a function, the subsequent select seems to wipe out the data already loaded into the collection. Is there a way to do subsequent selects into the collection without having the previous data deleted?
SQL Fiddle

I don't think that approach will be faster, in fact I expect it to be much slower.
But if you do want to do it, you need to put the rows from the second select into an intermediate collection and then join both using multiset union.
Something like this:
create or replace function academic_history(p_student_id number)
  return ah_tab_type
is
  result ah_tab_type;
  t      ah_tab_type;
begin
  select ah_obj_type(student_id, course_code, grade)
  bulk collect into result
  from completed_courses
  where student_id = p_student_id;

  select ah_obj_type(student_id, course_code, 'P')
  bulk collect into t
  from trans_courses
  where student_id = p_student_id;

  result := result multiset union t;
  return result;
end;
/

As well as the multiset approach, if you really wanted to do this you could also make it a pipelined function:
create or replace function academic_history(p_student_id number)
  return ah_tab_type pipelined
is
  t ah_tab_type;
begin
  select ah_obj_type(student_id, course_code, grade)
  bulk collect into t
  from completed_courses
  where student_id = p_student_id;

  for i in 1 .. t.count loop
    pipe row (t(i));
  end loop;

  select ah_obj_type(student_id, course_code, 'P')
  bulk collect into t
  from trans_courses
  where student_id = p_student_id;

  for i in 1 .. t.count loop
    pipe row (t(i));
  end loop;

  return;
end;
/
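The pipelined version is queried in the same way as the collection version, for example:
select *
from table(academic_history(1));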
SQL Fiddle.

Thanks to a_horse_with_no_name for pointing out that doing the multiple selects one at a time will probably be slower. I was able to reduce the execution time by filtering each select by student_id and then union-ing the results (rather than union-ing everything and then filtering). On the data set I'm working with, this solution was the fastest, taking less than 1/10 of a second...
create or replace function academic_history(p_student_id number)
  return ah_tab_type
is
  t ah_tab_type;
begin
  select ah_obj_type(student_id, course_code, grade)
  bulk collect into t
  from (
    select student_id, course_code, grade
    from completed_courses
    where student_id = p_student_id
    union
    select student_id, course_code, 'P'
    from trans_courses
    where student_id = p_student_id
  );
  return t;
end;
/
select *
from table(academic_history(1));
and this took 2-3 seconds to execute...
create view vw_academic_history as
select student_id, course_code, grade
from completed_courses
union
select student_id, course_code, 'P'
from trans_courses;
select *
from vw_academic_history
where student_id = 1;
SQLFiddle.

Related

How to store multiple rows in a variable in pl/sql function?

I'm writing a PL/SQL function. I need to select multiple rows with a SELECT statement:
SELECT pel.ceid
FROM pa_exception_list pel
WHERE trunc(pel.creation_date) >= trunc(SYSDATE-7)
If I use:
SELECT pel.ceid
INTO v_ceid
it only stores one value, but I need to store all the values that this select returns. Given that this is a function, I can't just use a plain SELECT because I get the error "an INTO clause is expected."
You can use a collection type to do that. The example below should work for you:
DECLARE
  TYPE v_array_type IS VARRAY (10) OF NUMBER;
  var v_array_type;
BEGIN
  SELECT x
  BULK COLLECT INTO var
  FROM (
    SELECT 1 x FROM dual
    UNION
    SELECT 2 x FROM dual
    UNION
    SELECT 3 x FROM dual
  );
  FOR i IN 1 .. var.COUNT LOOP
    dbms_output.put_line(var(i));
  END LOOP;
END;
/
So in your case, it would be something like
select pel.ceid
BULK COLLECT INTO <variable which you create>
from pa_exception_list pel
where trunc(pel.creation_date) >= trunc(sysdate-7);
If you really need to store multiple rows, check the BULK COLLECT INTO statement and its examples. But maybe a FOR cursor LOOP with row-by-row processing would be a better decision.
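For instance, a minimal sketch of the BULK COLLECT approach inside a function; the nested table type t_ceid_tab and the function name are illustrative, and ceid is assumed to be numeric:
create or replace type t_ceid_tab is table of number;
/

create or replace function get_recent_ceids return t_ceid_tab
is
  v_ceids t_ceid_tab;
begin
  -- collect every matching ceid in a single fetch
  select pel.ceid
  bulk collect into v_ceids
  from pa_exception_list pel
  where trunc(pel.creation_date) >= trunc(sysdate - 7);

  return v_ceids;
end;
/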
You may store everything in a %ROWTYPE variable and show whichever columns you want (assuming ceid is your primary key column and col1 and col2 are some other columns of your table):
SQL> set serveroutput on;
SQL> declare
  l_exp pa_exception_list%rowtype;
begin
  for c in ( select *
             from pa_exception_list pel
             where trunc(pel.creation_date) >= trunc(SYSDATE-7)
           ) -- to select multiple rows
  loop
    select *
    into l_exp
    from pa_exception_list
    where ceid = c.ceid; -- to render only one row (ceid is the primary key)
    dbms_output.put_line(l_exp.ceid||' - '||l_exp.col1||' - '||l_exp.col2); -- to show the results
  end loop;
end;
/
SET SERVEROUTPUT ON
BEGIN
  FOR rec IN (
    -- an implicit cursor is created here
    SELECT pel.ceid AS ceid
    FROM pa_exception_list pel
    WHERE trunc(pel.creation_date) >= trunc(SYSDATE-7)
  )
  LOOP
    dbms_output.put_line(rec.ceid);
  END LOOP;
END;
/
Notes from here:
In this case, the cursor FOR LOOP declares, opens, fetches from, and closes an implicit cursor. However, the implicit cursor is internal; therefore, you cannot reference it.
Note that Oracle Database automatically optimizes a cursor FOR LOOP to work similarly to a BULK COLLECT query. Although your code looks as if it fetched one row at a time, Oracle Database fetches multiple rows at a time and allows you to process each row individually.
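For comparison, an explicit BULK COLLECT with a LIMIT clause over the same query might look like the sketch below; the batch size of 100 and the local type name are illustrative:
DECLARE
  TYPE t_ceid_tab IS TABLE OF pa_exception_list.ceid%TYPE;
  v_ceids t_ceid_tab;
  CURSOR c_ceids IS
    SELECT pel.ceid
    FROM pa_exception_list pel
    WHERE trunc(pel.creation_date) >= trunc(SYSDATE - 7);
BEGIN
  OPEN c_ceids;
  LOOP
    FETCH c_ceids BULK COLLECT INTO v_ceids LIMIT 100; -- fetch in batches of 100
    FOR i IN 1 .. v_ceids.COUNT LOOP
      dbms_output.put_line(v_ceids(i));
    END LOOP;
    EXIT WHEN c_ceids%NOTFOUND; -- checked after processing so the last partial batch is not lost
  END LOOP;
  CLOSE c_ceids;
END;
/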

Delete all data from a table after selecting all data from the same table

All I want is to select all rows from a table and, once they have been selected and displayed, to have the data in the table completely deleted. The main constraint is that this must be done using SQL only and not PL/SQL. Is there a way we can do this inside a package and call that package in a select statement? Please enlighten me here.
Dummy Table is as follows:
ID  NAME  SALARY  DEPT
======================
1   Sam   50000   HR
2   Max   45000   SALES
3   Lex   51000   HR
4   Nate  66000   DEV
Any help would be greatly appreciated.
select * from Table_Name;
Delete from Table_Name
To select the data from a SQL query, try using a pipelined function.
The function can define a cursor for the data you want (or all the data in the table) and loop through the cursor, piping each row as it goes.
When the cursor loop ends, i.e. all the data has been consumed by your query, the function can TRUNCATE the table.
To select from the function, use the following syntax:
SELECT *
FROM TABLE(my_function)
See the following Oracle documentation for information on pipelined functions - https://docs.oracle.com/cd/B28359_01/appdev.111/b28425/pipe_paral_tbl.htm
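One possible sketch of that approach, using the dummy table from the question as emp_dummy (the table, type, and function names are all illustrative). Because TRUNCATE is DDL, inside PL/SQL it has to be issued with EXECUTE IMMEDIATE, and the function needs an autonomous transaction to be allowed to do that while it is being queried:
CREATE TYPE emp_row_type AS OBJECT (
  id     NUMBER,
  name   VARCHAR2(50),
  salary NUMBER,
  dept   VARCHAR2(20)
);
/
CREATE TYPE emp_tab_type IS TABLE OF emp_row_type;
/

CREATE OR REPLACE FUNCTION select_and_clear RETURN emp_tab_type PIPELINED
AS
  PRAGMA AUTONOMOUS_TRANSACTION; -- allows DDL/DML inside a function called from a query
BEGIN
  FOR rec IN (SELECT id, name, salary, dept FROM emp_dummy) LOOP
    PIPE ROW (emp_row_type(rec.id, rec.name, rec.salary, rec.dept));
  END LOOP;
  -- runs only after every row has been piped, i.e. fully consumed by the caller
  EXECUTE IMMEDIATE 'TRUNCATE TABLE emp_dummy';
  RETURN;
END;
/

SELECT * FROM TABLE(select_and_clear);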
This cannot be done inside a package, because "this must be done using sql only and not plsql". A package is PL/SQL.
However it is very simple. You want two things: select the table data and delete it. Two things, two commands.
select * from mytable;
truncate mytable;
(You could replace truncate mytable; with delete from mytable;, but this is slower and needs to be followed by commit; to confirm the deletion and end the transaction.)
Without PL/SQL it's not possible.
Using PL/SQL you can create a function which returns the rows and then deletes them.
Here is an example:
drop table tempdate;

create table tempdate as
select '1' id from dual
UNION
select '2' id from dual;
CREATE TYPE t_tf_row AS OBJECT (
  id NUMBER
);
/

CREATE TYPE t_tf_tab IS TABLE OF t_tf_row;
/
CREATE OR REPLACE FUNCTION get_tab_tf RETURN t_tf_tab PIPELINED AS
  PRAGMA AUTONOMOUS_TRANSACTION; -- needed so the function may run DML while it is being queried
BEGIN
  FOR rec IN (SELECT * FROM tempdate) LOOP
    PIPE ROW (t_tf_row(rec.id));
  END LOOP;
  DELETE FROM tempdate;
  COMMIT;
  RETURN;
END;
/
select * from table(get_tab_tf); -- it will return the rows and then delete them
select * from tempdate; -- you can check the result of the delete here
You can use the queries below:
select * from Table_demo;
delete from Table_demo;
The feature you seek is SERIALIZABLE ISOLATION LEVEL. This feature enables repeatable reads, which in particular guarantee that both SELECT and DELETE will read and process the same identical data.
Example
Alter session set isolation_level=serializable;
select * from tempdate;
-- now insert a new record from another session
delete from tempdate;
commit;
-- re-query the table: the old records are deleted, the new record is preserved.

Performance of Local PL/SQL Array Vs SQL Call

If I'm using Oracle SQL and PL/SQL to do computations on an employee, and then selecting more data based on the result of those computations... will it be faster to select all the data I may need at once when selecting the employee (assume something like 20 rows with 5 columns each) and then use that data as a local array, or to select just the one row I will need when finished?
-- Example with multiple selects
declare
  l_employeeID number;
  l_birthday   date;
  l_horoscope  varchar2(100);
begin
  select employeeID into l_employeeID from employeeTbl where rownum = 1;
  l_birthday := get_birthdayFromID(l_employeeID);
  select horoscope into l_horoscope from horoscopeTable t where l_birthday between t.fromDate and t.toDate;
  return l_horoscope;
end;
-- Example with table selection and loop iteration
declare
  l_employeeID     number;
  l_birthday       date;
  l_horoscope      varchar2(100);
  l_horoscopeDates date_table;
begin
  select employeeID,
         cast(multiset(select horoscopeDate from horoscopeTable) as date_table)
  into l_employeeID, l_horoscopeDates
  from employeeTbl
  where rownum = 1;
  l_birthday := get_birthdayFromID(l_employeeID);
  for i in 1 .. l_horoscopeDates.count - 1 loop
    if l_birthday between l_horoscopeDates(i) and l_horoscopeDates(i + 1) then
      return l_horoscopeDates(i);
    end if;
  end loop;
  return null;
end;
I understand that I'm paying more RAM and I/O to select additional data, but is it more efficient than incurring another context switch to run the SQL when the extra data is not significantly larger than what is needed?
Context switches are considered very expensive in Oracle. If the table doesn't contain large amounts of data, you should definitely query more data in order to reduce the number of times PL/SQL makes an SQL call.
Having said that, I think your question should be the other way around: why are you using PL/SQL at all, when your entire logic can be expressed in a single SQL statement?
example -
select horoscope
from horoscopeTable h
where (select get_birthdayFromID(employeeID)
       from employeeTbl
       where rownum = 1) between h.fromDate and h.toDate;
The syntax might need a little touching up, but the main idea seems right to me.
You can also find more observations in this piece about tuning PL/SQL code - http://logicalread.solarwinds.com/sql-plsql-oracle-dev-best-practices-mc08/

PL/SQL loop through cursor

My problem isn't overly complicated, but I am a newbie to PL/SQL.
I need to make a selection from a COMPANIES table based on certain conditions. I then need to loop through these and convert some of the fields into a different format (I have created functions for this), and finally use this converted version to join to a reference table to get the score variable I need. So basically:
select id, total_emps, bank from COMPANIES where turnover > 100000
loop through this selection
insert into MY_TABLE (select score from REF where conversion_func(MY_CURSOR.total_emps) = REF.total_emps)
This is basically what I am looking to do. It's slightly more complicated but I'm just looking for the basics and how to approach it to get me started!
Here's the basic syntax for cursor loops in PL/SQL:
BEGIN
FOR r_company IN (
SELECT
ID,
total_emps,
bank
FROM
companies
WHERE
turnover > 100000
) LOOP
INSERT INTO
my_table
SELECT
score
FROM
ref_table
WHERE
ref_table.total_emps = conversion_func( r_company.total_emps )
;
END LOOP;
END;
/
You don't need to use PL/SQL to do this:
insert into my_table
select score
from ref r
join companies c
on r.total_emps = conversion_func(c.total_emps)
where c.turnover > 100000;
If you have to do this in a PL/SQL loop as asked, then I'd ensure that you do as little work as possible. I would, however, recommend bulk collect instead of the loop (a sketch of that follows the loop example below).
begin
  for xx in ( select conversion_func(total_emps) as tot_emp
              from companies
              where turnover > 100000 ) loop
    insert into my_table
    select score
    from ref
    where total_emps = xx.tot_emp;
  end loop;
end;
/
For either method you need one index on ref.total_emps and preferably one on companies.turnover.
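A sketch of the bulk collect alternative mentioned above; the inline collection type and its NUMBER element type are assumptions about what conversion_func returns:
declare
  type t_num_tab is table of number;
  l_emps t_num_tab;
begin
  -- one SQL call to collect all the converted values
  select conversion_func(total_emps)
  bulk collect into l_emps
  from companies
  where turnover > 100000;

  -- one bulk DML statement instead of an insert per loop iteration
  forall i in 1 .. l_emps.count
    insert into my_table
    select score
    from ref
    where total_emps = l_emps(i);
end;
/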

plsql - get first row - which one is better?

LV_id number;

Cursor CR_test Is
  select t.id
  from table1 t
  where t.foo = p_foo
  order by t.creation_date;

Open CR_test;
Fetch CR_test Into LV_id;
Close CR_test;
or this one:
select x.id
from (select t.id
      from table1 t
      where t.foo = p_foo
      order by t.creation_date) x
where rownum = 1
Both of the above produce a similar result, but I need to know which one is more efficient.
This is Tom Kyte's mantra:
You should do it in a single SQL statement if at all possible.
If you cannot do it in a single SQL Statement, then do it in PL/SQL.
If you cannot do it in PL/SQL, try a Java Stored Procedure.
If you cannot do it in Java, do it in a C external procedure.
If you cannot do it in a C external routine, you might want to seriously think about why it is you need to do it…
http://tkyte.blogspot.com/2006/10/slow-by-slow.html
The easiest way to find out in this case is to test your queries.
Make sure to test this yourself; the indexes and the data in your table may produce different results.
Without any index, it looks like there is a better approach using MIN ... KEEP (DENSE_RANK FIRST):
SELECT MIN(id) KEEP (DENSE_RANK FIRST ORDER BY creation_date)
INTO lv_id
FROM table1
WHERE foo = p_foo;
I used the following code to test the time consumed by your queries (execute this block several times, results may vary):
DECLARE
p_foo table1.foo%TYPE := 'A';
lv_id table1.id%TYPE;
t TIMESTAMP := SYSTIMESTAMP;
BEGIN
FOR i IN 1 .. 100 LOOP
-- Query here
END LOOP;
dbms_output.put_line(SYSTIMESTAMP - t);
END;
Results:
Using cursor, fetching first row: 2.241 s
Using query with ROWNUM: 1.483 s
Using DENSE_RANK: 1.168 s