Display Number of Rows based on input parameter - sql

CREATE OR REPLACE PROCEDURE test_max_rows (
    max_rows IN NUMBER DEFAULT 1000
)
IS
    CURSOR cur_test (max_rows IN NUMBER) IS
        SELECT id FROM test_table
        WHERE user_id = 'ABC'
        AND ROWNUM <= max_rows;
    id test_table.id%TYPE;
BEGIN
    OPEN cur_test(max_rows);
    LOOP
        FETCH cur_test INTO id;
        EXIT WHEN cur_test%NOTFOUND;
        DBMS_OUTPUT.PUT_LINE('ID:' || id);
    END LOOP;
    CLOSE cur_test; -- close the explicitly opened cursor
END;
My requirement is to modify the above code so that when I pass -1 for max_rows, the proc should return all the rows returned by the query. Otherwise, it should limit the rows as per max_rows.
For example:
EXECUTE test_max_rows(-1);
This command should return all the rows returned by the SELECT statement above.
EXECUTE test_max_rows(10);
This command should return only 10 rows.

You can do this with an OR clause; change:
AND ROWNUM <= max_rows;
to:
AND (max_rows < 1 OR ROWNUM <= max_rows);
Then passing zero, -1, or any negative number will fetch all rows, and any positive number will return a restricted list. You could also replace the default 1000 clause with default null, and then test for null instead, which might be a bit more obvious:
AND (max_rows is null OR ROWNUM <= max_rows);
Note that which rows you get with a passed value will be indeterminate because you don't have an order by clause at the moment.
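If you do add an ORDER BY, bear in mind that ROWNUM is assigned before the sort, so the ordering has to happen in an inline view before the row limit is applied. A minimal sketch of the cursor query, assuming (purely for illustration) you want the lowest IDs first:
CURSOR cur_test (max_rows IN NUMBER) IS
    SELECT id FROM (
        SELECT id FROM test_table
        WHERE user_id = 'ABC'
        ORDER BY id        -- sort inside the inline view first...
    )
    WHERE max_rows IS NULL OR ROWNUM <= max_rows;  -- ...then limit the rows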
Doing this in a procedure also seems a bit odd, and you're assuming whoever calls it will be able to see the output - i.e. will have done set serveroutput on or the equivalent for their client - which is not a very safe assumption. An alternative, if you can't specify the row limit in a simple query, might be to use a pipelined function instead - you could at least then call that from plain SQL.
CREATE OR REPLACE FUNCTION test_max_rows (max_rows IN NUMBER DEFAULT NULL)
RETURN sys.odcinumberlist PIPELINED
AS
BEGIN
    FOR r IN (
        SELECT id FROM test_table
        WHERE user_id = 'ABC'
        AND (max_rows IS NULL OR ROWNUM <= max_rows)
    ) LOOP
        PIPE ROW (r.id);
    END LOOP;
    RETURN; -- a pipelined function ends with a bare RETURN
END;
/
And then call it as:
SELECT * FROM TABLE(test_max_rows);
or
SELECT * FROM TABLE(test_max_rows(10));
But you should still consider whether you can do the whole thing in plain SQL and avoid PL/SQL altogether.

Related

How to store multiple rows in a variable in pl/sql function?

I'm writing a PL/SQL function. I need to select multiple rows from a select statement:
SELECT pel.ceid
FROM pa_exception_list pel
WHERE trunc(pel.creation_date) >= trunc(SYSDATE-7)
If I use:
SELECT pel.ceid
INTO v_ceid
it only stores one value, but I need to store all the values that this select returns. Given that this is a function, I can't just use a plain select because I get the error "INTO - is expected."
You can use a collection type to do that. The example below should work for you:
DECLARE
    TYPE v_array_type IS VARRAY (10) OF NUMBER;
    var v_array_type;
BEGIN
    SELECT x
    BULK COLLECT INTO var
    FROM (
        SELECT 1 x FROM dual
        UNION
        SELECT 2 x FROM dual
        UNION
        SELECT 3 x FROM dual
    );
    FOR i IN 1 .. var.COUNT LOOP
        dbms_output.put_line(var(i));
    END LOOP;
END;
So in your case, it would be something like
SELECT pel.ceid
BULK COLLECT INTO <variable which you create>
FROM pa_exception_list pel
WHERE trunc(pel.creation_date) >= trunc(SYSDATE-7);
If you really need to store multiple rows, check the BULK COLLECT INTO statement and its examples. But maybe a cursor FOR LOOP with row-by-row processing would be a better choice.
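For example, a minimal self-contained sketch against the question's pa_exception_list table (the type and variable names are just for illustration):
DECLARE
    TYPE t_ceid_tab IS TABLE OF pa_exception_list.ceid%TYPE; -- nested table type
    l_ceids t_ceid_tab;
BEGIN
    SELECT pel.ceid
    BULK COLLECT INTO l_ceids
    FROM pa_exception_list pel
    WHERE trunc(pel.creation_date) >= trunc(SYSDATE - 7);

    FOR i IN 1 .. l_ceids.COUNT LOOP -- COUNT avoids hard-coding the row count
        dbms_output.put_line(l_ceids(i));
    END LOOP;
END;
/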
You may store everything in a %ROWTYPE variable and show whichever columns you want (assuming ceid is your primary key column, and col1 and col2 are some other columns of your table):
SQL> set serveroutput on;
SQL> declare
         l_exp pa_exception_list%rowtype;
     begin
         for c in ( select *
                    from pa_exception_list pel
                    where trunc(pel.creation_date) >= trunc(SYSDATE-7)
                  ) -- to select multiple rows
         loop
             select *
             into l_exp
             from pa_exception_list
             where ceid = c.ceid; -- to render only one row (ceid is the primary key)
             dbms_output.put_line(l_exp.ceid||' - '||l_exp.col1||' - '||l_exp.col2); -- to show the results
         end loop;
     end;
     /
SET SERVEROUTPUT ON
BEGIN
    FOR rec IN (
        -- an implicit cursor is created here
        SELECT pel.ceid AS ceid
        FROM pa_exception_list pel
        WHERE trunc(pel.creation_date) >= trunc(SYSDATE-7)
    )
    LOOP
        dbms_output.put_line(rec.ceid);
    END LOOP;
END;
/
Notes from the documentation:
In this case, the cursor FOR LOOP declares, opens, fetches from, and closes an implicit cursor. However, the implicit cursor is internal; therefore, you cannot reference it.

Note that Oracle Database automatically optimizes a cursor FOR LOOP to work similarly to a BULK COLLECT query. Although your code looks as if it fetched one row at a time, Oracle Database fetches multiple rows at a time and allows you to process each row individually.

foreach rows of my table and update my ROW_NUMBER column

I would like to create a PL/SQL script that modifies the value of my ROW_NUMBER column (initially the value of ROW_NUMBER is NULL).
This is the structure of my table 'A':
CREATE TABLE A
(
"NAME" VARCHAR2(25 BYTE),
"NUM" NUMBER(10,0)
)
I would like to loop over all rows of table A and increment my 'NUM' column by 1 whenever the 'NAME' column equals 'DEB'.
I created this PL/SQL script:
DECLARE
    INcrmt NUMBER(4) := 1;
    line WORK_ODI.TEST_SEQ%ROWTYPE;
    CURSOR c_select IS
        SELECT ROW_NUMBER, VALUE FROM WORK_ODI.TEST_SEQ;
BEGIN
    OPEN c_select;
    LOOP
        FETCH c_select INTO line;
        EXIT WHEN c_select%NOTFOUND; -- exit straight after the fetch, before processing the row
        DBMS_OUTPUT.PUT_LINE(line.VALUE);
        IF line.VALUE LIKE '%DEB%' THEN
            UPDATE WORK_ODI.TEST_SEQ SET ROW_NUMBER = INcrmt WHERE VALUE = line.VALUE;
            INcrmt := INcrmt + 1;
        END IF;
        IF line.VALUE NOT LIKE '%DEB%' THEN
            UPDATE WORK_ODI.TEST_SEQ SET ROW_NUMBER = INcrmt WHERE VALUE = line.VALUE;
        END IF;
    END LOOP;
    CLOSE c_select;
    COMMIT;
END;
but this does not work well. Can anybody help me?
First, you should have an aid column of some sort. In Oracle 12+, you can use an identity column. In earlier versions, you can use a sequence. This provides an ordering for the rows in the table, based on insert order.
Second, you can do what you want on output:
select a.*,
       sum(case when a.name like 'DEB%' then 1 else 0 end)
           over (order by aid) as row_number
from a;
If you really need to keep the values in the table, then you can use a merge statement to assign values to existing rows (the aid column is very handy for this). You will need a trigger afterwards to maintain it.
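A hedged sketch of that merge, assuming the aid column has been added to table A as suggested:
MERGE INTO a t
USING (
    SELECT aid,
           SUM(CASE WHEN name LIKE 'DEB%' THEN 1 ELSE 0 END)
               OVER (ORDER BY aid) AS new_num
    FROM a
) s
ON (t.aid = s.aid)
WHEN MATCHED THEN UPDATE SET t.num = s.new_num;
Since aid is unique, each target row matches exactly one row of the USING subquery, so the merge is deterministic.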
My suggestion is to do the calculation on the data, rather than storing the value in the data. Maintaining the values with updates and deletes seems like a real pain.

How to transpose a table from a wide format to narrow, using the values as a filter?

I get a table X (with 1 row):
COL_XA  COL_VG  COL_LF  COL_EQ  COL_PP  COL_QM  ...
1       0       0       0       1       1
Each column COL_x can have only values 0 or 1.
I want to transform this table into this form Y:
NAME
"COL_XA"
"COL_PP"
"COL_QM"
...
This table should contain only the names of those columns of table X whose (single) row has the value 1.
This question is similar to other questions about transposition, with the difference that I don't want the actual values, but the column names, which are not known in advance.
I could use Excel or PL/SQL to create a list of strings of the form
MIN(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' ELSE null END) as NAME, but this solution is inefficient (EXECUTE IMMEDIATE) and difficult to maintain. And the string passed to EXECUTE IMMEDIATE is limited to 32700 characters, which can be easily exceeded in production, where the table X can have well over 500 fields.
To completely automate the query you must be able to read the column names of the actual cursor. In PL/SQL this is possible using DBMS_SQL (another way would be JDBC). Based on an OTN thread, here is a basic table function.
The important parts are:
1) dbms_sql.parse the query given as a text string and dbms_sql.execute it
2) dbms_sql.describe_columns to get the list of column names returned by the query on table x
3) dbms_sql.fetch_rows to fetch the first row
4) loop over the columns and, where dbms_sql.column_value equals 1, output the column_name (with PIPE ROW)
create or replace type str_tblType as table of varchar2(30);
/
create or replace function get_col_name_on_one return str_tblType
PIPELINED
as
    l_theCursor   integer default dbms_sql.open_cursor;
    l_columnValue varchar2(2000);
    l_status      integer;
    l_colCnt      number default 0;
    l_colDesc     dbms_sql.DESC_TAB;
begin
    dbms_sql.parse( l_theCursor, 'SELECT * FROM X', dbms_sql.native );
    -- define up to 1000 columns; ORA-01007 means we have run out of columns
    for i in 1 .. 1000 loop
        begin
            dbms_sql.define_column( l_theCursor, i, l_columnValue, 2000 );
            l_colCnt := i;
        exception
            when others then
                if ( sqlcode = -1007 ) then
                    exit;
                else
                    raise;
                end if;
        end;
    end loop;
    l_status := dbms_sql.execute(l_theCursor);
    dbms_sql.describe_columns(l_theCursor, l_colCnt, l_colDesc);
    if dbms_sql.fetch_rows(l_theCursor) > 0 then
        for lColCnt in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, lColCnt, l_columnValue );
            if (l_columnValue = '1') then
                pipe row(upper(l_colDesc(lColCnt).col_name));
            end if;
        end loop;
    end if;
    dbms_sql.close_cursor(l_theCursor); -- release the cursor
    return;
end;
/
select * from table(get_col_name_on_one);
COLUMN_LOOOOOOOOOOOOOONG_100
COLUMN_LOOOOOOOOOOOOOONG_200
COLUMN_LOOOOOOOOOOOOOONG_300
COLUMN_LOOOOOOOOOOOOOONG_400
COLUMN_LOOOOOOOOOOOOOONG_500
COLUMN_LOOOOOOOOOOOOOONG_600
COLUMN_LOOOOOOOOOOOOOONG_700
COLUMN_LOOOOOOOOOOOOOONG_800
COLUMN_LOOOOOOOOOOOOOONG_900
COLUMN_LOOOOOOOOOOOOOONG_1000
You should not get into trouble with wide tables using this solution; I tested it against a 1000-column table with long column names.
Here is a solution, but I had to break it into two parts.
First you extract all the column names of the table; I have used LISTAGG to collect the column names separated by commas.
The output of the first query is then used in the second query.
select listagg(column_name,',') WITHIN GROUP (ORDER BY column_name )
from user_tab_cols where upper(table_name)='X'
The output of the above query will look like COL_XA,COL_VG,COL_LF,COL_EQ,COL_PP,COL_QM ... and so on.
Copy that output and use it in the query below, replacing <outputvaluesfromfirstquery>:
select NAME from X
unpivot ( bit for NAME in (<outputvaluesfromfirstquery>))
where bit=1
I tried to merge the two queries, but while there is a PIVOT XML option, there is no UNPIVOT XML.
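One way to combine the two steps is to build just the UNPIVOT column list dynamically and open a ref cursor over the generated statement. A sketch, assuming table X as above (this still uses dynamic SQL, but only the column list is generated, so the statement stays short):
DECLARE
    l_cols VARCHAR2(32767);
    l_cur  SYS_REFCURSOR;
    l_name VARCHAR2(128);
BEGIN
    -- step 1: collect the column names, as in the first query
    SELECT listagg(column_name, ',') WITHIN GROUP (ORDER BY column_name)
    INTO l_cols
    FROM user_tab_cols
    WHERE table_name = 'X';

    -- step 2: splice the list into the unpivot query and open a cursor over it
    OPEN l_cur FOR
        'select name from x unpivot (bit for name in (' || l_cols || ')) where bit = 1';

    LOOP
        FETCH l_cur INTO l_name;
        EXIT WHEN l_cur%NOTFOUND;
        dbms_output.put_line(l_name);
    END LOOP;
    CLOSE l_cur;
END;
/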
You can do this with a bunch of union alls:
select 'COL_XA' as name from table t where col_xa = 1 union all
select 'COL_VG' as name from table t where col_vg = 1 union all
. . .
EDIT:
If you have only one row, then you do not need:
MIN(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' ELSE null END) as NAME
You can simply use:
(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' END)
The MIN() isn't needed for one row and the ELSE null is redundant.

How to structure SQL - select first X rows for each value of a column?

I have a table with the following type of data:
create table store (
n_id serial not null primary key,
n_place_id integer not null references place(n_id),
dt_modified timestamp not null,
t_tag varchar(4),
n_status integer not null default 0
...
(about 50 more fields)
);
There are indices on n_id, n_place_id, dt_modified and all other fields used in the query below.
This table contains about 100,000 rows at present, but may grow to closer to a million or even more. Yet, for now let's assume we're staying at around the 100K mark.
I'm trying to select rows from this table where one of two conditions is met:
All rows where n_place_id is in a specific subset (this part is easy); or
For all other n_place_id values, the first ten rows sorted by dt_modified (this is where it becomes more complicated).
Doing it in one SQL seems to be too painful, so I'm happy with a stored function for this. I have my function defined thus:
create or replace function api2.fn_api_mobile_objects()
  returns setof store as
$body$
declare
    maxres_free integer := 10;
    resulter store%rowtype;
    mcnt integer := 0;
    previd integer := 0;
begin
    create temporary table paid on commit drop as
        select n_place_id
        from payments
        where t_reference is not null
          and now()::date between dt_paid and dt_valid;

    for resulter in
        select *
        from store
        where n_status > 0 and t_tag is not null
        order by n_place_id, dt_modified desc
    loop
        if resulter.n_place_id in (select n_place_id from paid) then
            return next resulter;
        else
            if previd <> resulter.n_place_id then
                mcnt := 0;
                previd := resulter.n_place_id;
            end if;
            if mcnt < maxres_free then
                return next resulter;
                mcnt := mcnt + 1;
            end if;
        end if;
    end loop;
end;
$body$
language 'plpgsql' volatile;
The problem is that
select * from api2.fn_api_mobile_objects()
takes about 6-7 seconds to execute. Considering that after that this resultset needs to be joined to 3 other tables with a bunch of additional conditions applied and further sorting applied, this is clearly unacceptable.
Well, I still do need to get this data, so either I am missing something in the function or I need to rethink the entire algorithm. Either way, I need help with this.
CREATE TABLE store
( n_id serial not null primary key
, n_place_id integer not null -- references place(n_id)
, dt_modified timestamp not null
, t_tag varchar(4)
, n_status integer not null default 0
);
INSERT INTO store(n_place_id,dt_modified,n_status)
SELECT n,d,n%4
FROM generate_series(1,100) n
, generate_series('2012-01-01'::date ,'2012-10-01'::date, '1 day'::interval ) d
;
WITH zzz AS (
SELECT n_id AS n_id
, rank() OVER (partition BY n_place_id ORDER BY dt_modified) AS rnk
FROM store
)
SELECT st.*
FROM store st
JOIN zzz ON zzz.n_id = st.n_id
WHERE st.n_place_id IN ( 1,22,333)
OR zzz.rnk <=10
;
Update: here is the same self-join construct as a subquery (CTEs are treated a bit differently by the planner):
SELECT st.*
FROM store st
JOIN ( SELECT sx.n_id AS n_id
, rank() OVER (partition BY sx.n_place_id ORDER BY sx.dt_modified) AS zrnk
FROM store sx
) xxx ON xxx.n_id = st.n_id
WHERE st.n_place_id IN ( 1,22,333)
OR xxx.zrnk <=10
;
After much struggle, I managed to get the stored function to return the results in just over 1 second (which is a huge improvement). Now the function looks like this (I added the additional condition, which didn't affect the performance much):
create or replace function api2.fn_api_mobile_objects(t_search varchar)
  returns setof store as
$body$
declare
    maxres_free integer := 10;
    resulter store%rowtype;
    mid integer := 0;
begin
    create temporary table paid on commit drop as
        select n_place_id
        from payments
        where t_reference is not null
          and now()::date between dt_paid and dt_valid
        union
        select n_place_id
        from store
        where n_status > 0 and t_tag is not null
        group by n_place_id
        having count(1) <= 10;

    for resulter in
        select *
        from store
        where n_status > 0 and t_tag is not null
          and (t_name ~* t_search or t_description ~* t_search)
          and n_place_id in (select n_place_id from paid)
    loop
        return next resulter;
    end loop;

    for mid in
        select distinct n_place_id
        from store
        where n_place_id not in (select n_place_id from paid)
    loop
        for resulter in
            select *
            from store
            where n_status > 0 and t_tag is not null and n_place_id = mid
            order by dt_modified desc
            limit maxres_free
        loop
            return next resulter;
        end loop;
    end loop;
end;
$body$
language 'plpgsql' volatile;
This runs in just over 1 second on my local machine and in about 0.8-1.0 seconds on live. For my purpose, this is good enough, although I am not sure what will happen as the amount of data grows.
As a simple suggestion, the way I like to do this sort of troubleshooting is to construct a query that gets me most of the way there, and optimize it properly, and then add the necessary pl/pgsql stuff around it. The major advantage to this approach is that you can optimize based on query plans.
Also if you aren't dealing with a lot of rows, array_agg() and unnest() are your friends as they allow you (on Pg 8.4 and later!) to dispense with the temporary table management overhead and simply construct and query an array of tuples in memory as a relation. It may perform better also if you are just hitting an array in memory instead of a temp table (less planning overhead and less query overhead too).
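For illustration, a sketch of the array approach against the question's tables (wrapped in a DO block, which needs Pg 9.0+; inside a plpgsql function the body would look the same):
DO $$
DECLARE
    paid_ids integer[];
    cnt bigint;
BEGIN
    -- build the array once, in memory, instead of a temp table
    SELECT array_agg(n_place_id) INTO paid_ids
    FROM payments
    WHERE t_reference IS NOT NULL
      AND now()::date BETWEEN dt_paid AND dt_valid;

    -- unnest() turns the array back into a relation you can query
    SELECT count(*) INTO cnt
    FROM store
    WHERE n_place_id IN (SELECT unnest(paid_ids));

    RAISE NOTICE 'rows for paid places: %', cnt;
END
$$;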
Also on your updated query I would look at replacing that final loop with a subquery or a join, allowing the planner to decide when to do a nested loop lookup or when to try to find a better way.
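A hedged sketch of that rewrite, using the question's schema and the paid temp table from the function: the second loop collapses into one window-function query (Pg 8.4+), and the planner is free to choose the join strategy:
SELECT s.*
FROM (
    SELECT st.*,
           row_number() OVER (PARTITION BY st.n_place_id
                              ORDER BY st.dt_modified DESC) AS rn
    FROM store st
    WHERE st.n_status > 0
      AND st.t_tag IS NOT NULL
      AND st.n_place_id NOT IN (SELECT n_place_id FROM paid)
) s
WHERE s.rn <= 10;  -- maxres_free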

how to return some default value in oracle cursor when nothing is found

I have following statement in one of the oracle SP:
OPEN CUR_NUM FOR
SELECT RTRIM(LTRIM(num)) as num
FROM USER WHERE USER_ID = v_user_id;
When the above does not return anything, the SP result simply has nothing in it. But I'd like the cursor to still have a num column with some default value like 'NOTHING' when no data is found.
Is this doable?
If you really need to do that, you could use a UNION ALL to select your dummy row in case the user_id does not exist:
OPEN CUR_NUM FOR
    SELECT RTRIM(LTRIM(num)) AS num
    FROM user
    WHERE user_id = v_user_id
    UNION ALL
    SELECT 'NOTHING' AS num
    FROM dual
    WHERE NOT EXISTS (
        SELECT 1
        FROM user
        WHERE user_id = v_user_id
    );
Two choices, I think:
add NVL() around the value if a row is being returned but the column is null;
otherwise, add a MIN or MAX function if possible - this can force a row to be returned (see the sketch below).
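A sketch of the second option applied to the question's query (an aggregate with no GROUP BY always returns exactly one row; MAX over zero rows is NULL, which NVL then defaults):
OPEN CUR_NUM FOR
    SELECT NVL(MAX(RTRIM(LTRIM(num))), 'NOTHING') AS num
    FROM user
    WHERE user_id = v_user_id;
Note that this collapses the result to a single row, so it only fits if num is expected to be unique per user_id.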
If this code block is being used in a function, the following would work like a charm:
First, define your return variable
v_ret VARCHAR2(24);
Now, define cursor CUR_NUM as your select statement
CURSOR CUR_NUM IS
    SELECT RTRIM(LTRIM(num)) AS num
    FROM USER WHERE USER_ID = v_user_id;
BEGIN
    OPEN CUR_NUM;
    FETCH CUR_NUM INTO v_ret;
    CLOSE CUR_NUM;
    RETURN NVL(v_ret, 'NOTHING');
END;