I am using Oracle 10g.
My Scenario:
I am receiving more than 4,000 records in a comma-separated string ('ord0000,ord0001,ord0002,......') as a parameter. I need to compare these values against table1 and find the matching record set.
For that purpose I created a function as below:
function get_split_values (
    csv_string varchar2
) return split_table pipelined
as
    Delimit_String varchar2(32767) := csv_string;
    Delimit_Index  integer;
begin
    loop
        Delimit_Index := instr(Delimit_String, ',');
        if Delimit_Index > 0 then
            pipe row(substr(Delimit_String, 1, Delimit_Index - 1));
            Delimit_String := substr(Delimit_String, Delimit_Index + 1);
        else
            pipe row(Delimit_String);
            exit;
        end if;
    end loop;
    return;
end get_split_values;
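For reference, split_table is a simple collection type; a minimal definition would be something like this (the element length here is arbitrary):
create or replace type split_table as table of varchar2(4000);
/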
Now I use this function to join with table1 in a procedure, as below:
create procedure abc (parameter_csv varchar2, ...)
as
begin
    open cursor for
        select t.col1
        from table1 t
        join table(get_split_values(parameter_csv)) x
          on x.column_value = t.col1;
    ...
end abc;
It works fine when parameter_csv contains around 300 or 400 IDs like ('ord0000,ord0001,ord0002,......'), but when it contains more records than that I get the error
"ORA-01460: unimplemented or unreasonable conversion requested."
I don't understand what raises this error. Any ideas?
Or is there a better way to accomplish this task?
Initially I thought you were overflowing your varchar2(32767), but a quick look at your sample IDs indicates that you shouldn't be maxing out that early (400 IDs).
A quick Google of the error led me to this forum thread on OTN: http://forums.oracle.com/forums/thread.jspa?threadID=507725&start=15&tstart=0
and to this blog post: http://oraclequirks.blogspot.com/2008/10/ora-01460-unimplemented-or-unreasonable.html
which indicates that this might be an Oracle bug.
If it's a bug with using the PL/SQL function, you can just split the string as part of an inline view. Something like:
SELECT T.col1
FROM   table1 T
JOIN ( SELECT REGEXP_SUBSTR( parameter_csv, '[^,]+', 1, LEVEL ) AS id
       FROM   DUAL
       CONNECT BY LEVEL <=
           LENGTH( REGEXP_REPLACE( parameter_csv, '[^,]+', '' ) ) + 1
     ) X
ON   X.id = T.col1;
NOTE: This doesn't handle things like duplicate IDs in the CSV, empty values in the CSV (,,), etc.
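If you need that, a variation of the same inline view can skip empty elements and remove duplicate IDs (a sketch over the same parameter_csv):
SELECT T.col1
FROM   table1 T
JOIN ( SELECT DISTINCT REGEXP_SUBSTR( parameter_csv, '[^,]+', 1, LEVEL ) AS id
       FROM   DUAL
       -- [^,]+ never matches an empty element, and the CONNECT BY stops
       -- once there are no more non-empty elements to extract
       CONNECT BY REGEXP_SUBSTR( parameter_csv, '[^,]+', 1, LEVEL ) IS NOT NULL
     ) X
ON   X.id = T.col1;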
Related
I am having trouble getting a block of PL/SQL code to work. At the top of my procedure I get some data from my Oracle APEX application about which checkboxes are checked. Because the report that contains the checkboxes is generated dynamically, I have to loop through the
APEX_APPLICATION.G_F01
list and generate a comma-separated string, which looks like this:
v_list VARCHAR2(255) := '1,3,5,9,10';
I then want to query against that list later and place v_list in an IN clause, like so:
SELECT * FROM users
WHERE user_id IN (v_list);
This of course throws an error. My question is: what can I convert v_list to so that I can use it in an IN clause in a query within a PL/SQL procedure?
If users is small and user_id doesn't contain commas, you could use:
SELECT * FROM users WHERE ',' || v_list || ',' LIKE '%,'||user_id||',%'
This query is not optimal though because it can't use indexes on user_id.
I advise you to use a pipelined function that returns a table of NUMBER that you can query directly. For example:
CREATE TYPE tab_number IS TABLE OF NUMBER;
/
CREATE OR REPLACE FUNCTION string_to_table_num(p VARCHAR2)
  RETURN tab_number
  PIPELINED IS
BEGIN
  FOR cc IN (SELECT rtrim(regexp_substr(str, '[^,]*,', 1, level), ',') res
             FROM (SELECT p || ',' str FROM dual)
             CONNECT BY level <= length(str)
                               - length(replace(str, ',', ''))) LOOP
    PIPE ROW(cc.res);
  END LOOP;
  RETURN;
END;
/
You would then be able to build queries such as:
SELECT *
FROM users
WHERE user_id IN (SELECT *
                  FROM TABLE(string_to_table_num('1,2,3,4,5')));
You can use XMLTABLE as follows:
SELECT * FROM users
WHERE user_id IN (SELECT to_number(column_value) FROM XMLTABLE(v_list));
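For example, a self-contained sketch with a literal list in place of v_list (the values are just placeholders):
SELECT u.*
FROM users u
WHERE u.user_id IN (SELECT to_number(column_value)
                    FROM XMLTABLE('1,3,5,9,10'));
Each number in the XQuery sequence comes back as one row, with its value in the default COLUMN_VALUE column.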
I have tried to find a solution for that too but never succeeded. You can build the query as a string and then run it with EXECUTE IMMEDIATE; see http://docs.oracle.com/cd/B19306_01/appdev.102/b14261/dynamic.htm#i14500.
That said, it just occurred to me that the argument of an IN clause can be a sub-select:
SELECT * FROM users
WHERE user_id IN (SELECT something FROM somewhere)
So, is it possible to expose the checkbox values as a stored function? Then you might be able to do something like:
SELECT * FROM users
WHERE user_id IN (SELECT my_package.checkbox_func FROM dual)
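If you go that route, the function would need to return a collection rather than a single value so that TABLE() can expand it into rows. A rough sketch, reusing the tab_number type from the answer above (the package name and the loop over APEX_APPLICATION.G_F01 are assumptions based on the question):
CREATE OR REPLACE PACKAGE my_package AS
  FUNCTION checkbox_func RETURN tab_number;
END my_package;
/
CREATE OR REPLACE PACKAGE BODY my_package AS
  FUNCTION checkbox_func RETURN tab_number IS
    l_vals tab_number := tab_number();
  BEGIN
    -- collect every checked checkbox value posted by the APEX report
    FOR i IN 1 .. APEX_APPLICATION.G_F01.COUNT LOOP
      l_vals.EXTEND;
      l_vals(l_vals.LAST) := TO_NUMBER(APEX_APPLICATION.G_F01(i));
    END LOOP;
    RETURN l_vals;
  END checkbox_func;
END my_package;
/
-- the IN clause then reads the collection with TABLE()
SELECT * FROM users
WHERE user_id IN (SELECT column_value FROM TABLE(my_package.checkbox_func));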
Personally, I like this approach:
with t as (select 'a,b,c,d,e' str from dual)
--
select val
from t, xmltable('/root/e/text()'
                 passing xmltype('<root><e>' || replace(t.str, ',', '</e><e>') || '</e></root>')
                 columns val varchar2(10) path '/'
       )
This can be found, among other examples, in the OTN thread "Split Comma Delimited String Oracle".
If you feel like wading through even more options, visit the OTN PL/SQL forums.
I have a table with columns
BIN_1_1
BIN_1_2
BIN_1_3
all the way to BIN_10_10
The user enters a value, and the value needs to be checked against all the columns from BIN_1_1 to BIN_10_10.
If the value is a duplicate, it should print a message and exit the procedure/function.
How do I go about this?
Do you mean something like this?
create or replace
procedure check_duplicate( p_val yourtable.bin_1_1%type ) is
    v_dupl number;
begin
    begin
        select 1 into v_dupl
        from yourtable
        where p_val in (bin_1_1, bin_1_2, ... bin_10_10)
          and rownum <= 1;
    exception
        when no_data_found then
            v_dupl := 0;
    end;
    if v_dupl = 1 then
        dbms_output.put_line('your message about duplication');
        return;
    else
        dbms_output.put_line('here you can do anything');
    end if;
end;
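A quick way to test it from SQL*Plus might be the following (serveroutput must be enabled to see the message; 'some value' is just a placeholder):
SET SERVEROUTPUT ON
BEGIN
    check_duplicate('some value');
END;
/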
Try a query along these lines, using INSERT ... SELECT so the value is only inserted when it is not already present in any of the BIN columns:
INSERT INTO yourTable (BIN_1_1)
SELECT 'your value'
FROM dual
WHERE 'your value' NOT IN (
    SELECT BIN_1_1 FROM yourTable
    UNION
    SELECT BIN_1_2 FROM yourTable
    UNION
    SELECT BIN_1_3 FROM yourTable
    -- extend the UNION list up to BIN_10_10
)
P.S. I didn't run this query.
Unpivot your table, then it's easy. You may want to write a query that will write the query below for you. ("Dynamic" SQL just to save yourself some work.)
select case when count(*) > 1 then 'Duplicate Found' end as result
from ( select *
from your_table
unpivot (val for col in (BIN_1_1, BIN_1_2, ........, BIN_10_10))
)
where val = :user_input;
Here :user_input is a bind variable - use whatever mechanism works for you (end-user interface, SQL Developer, whatever).
You need to decide what outcome you want when the value is not duplicated in the table - you didn't mention anything about that.
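If you do want to report those other cases as well, the same unpivot can feed a three-way CASE (a sketch; the column list is abbreviated and should run through BIN_10_10):
select case when count(*) > 1 then 'Duplicate Found'
            when count(*) = 1 then 'Found Once'
            else 'Not Found'
       end as result
from ( select *
       from your_table
       unpivot (val for col in (BIN_1_1, BIN_1_2, BIN_1_3))  -- extend the list to BIN_10_10
     )
where val = :user_input;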
I get a table X (with 1 row):
COL_XA  COL_VG  COL_LF  COL_EQ  COL_PP  COL_QM  ...
     1       0       0       0       1       1
Each column COL_x can have only values 0 or 1.
I want to transform this table into this form Y:
NAME
"COL_XA"
"COL_PP"
"COL_QM"
...
This table should list only those columns of table X for which the first (and only) row has the value 1.
This question is related to other questions about transposition, with the difference that I don't want the actual values but the column names, which are not known in advance.
I could use Excel or PL/SQL to create a list of expressions of the form
MIN(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' ELSE null END) as NAME, but this solution is inefficient (EXECUTE IMMEDIATE) and difficult to maintain. Also, the string passed to EXECUTE IMMEDIATE is limited to 32700 characters, which can easily be exceeded in production, where table X can have well over 500 fields.
To completely automate the query you must be able to read the column names of the actual cursor. In PL/SQL this is possible using DBMS_SQL (another way would be JDBC). Based on this OTN thread, here is a basic table function.
The important parts are:
1) dbms_sql.parse the query given as a text string and dbms_sql.execute it
2) dbms_sql.describe_columns to get the list of column names returned by the query on table X
3) dbms_sql.fetch_rows to fetch the first row
4) loop over the columns, check each dbms_sql.column_value, and if it equals 1, output the column name (with PIPE ROW)
create or replace type str_tblType as table of varchar2(30);
/
create or replace function get_col_name_on_one return str_tblType
    PIPELINED
as
    l_theCursor    integer default dbms_sql.open_cursor;
    l_columnValue  varchar2(2000);
    l_columnOutput varchar2(4000);
    l_status       integer;
    l_colCnt       number default 0;
    l_colDesc      dbms_sql.DESC_TAB;
begin
    dbms_sql.parse( l_theCursor, 'SELECT * FROM X', dbms_sql.native );
    for i in 1 .. 1000 loop
        begin
            dbms_sql.define_column( l_theCursor, i, l_columnValue, 2000 );
            l_colCnt := i;
        exception
            when others then
                if ( sqlcode = -1007 ) then
                    exit;
                else
                    raise;
                end if;
        end;
    end loop;
    dbms_sql.define_column( l_theCursor, 1, l_columnValue, 2000 );
    l_status := dbms_sql.execute(l_theCursor);
    dbms_sql.describe_columns(l_theCursor, l_colCnt, l_colDesc);
    if dbms_sql.fetch_rows(l_theCursor) > 0 then
        for lColCnt in 1 .. l_colCnt loop
            dbms_sql.column_value( l_theCursor, lColCnt, l_columnValue );
            --DBMS_OUTPUT.PUT_LINE( l_columnValue );
            IF (l_columnValue = '1') THEN
                DBMS_OUTPUT.PUT_LINE(Upper(l_colDesc(lColCnt).col_name));
                pipe row(Upper(l_colDesc(lColCnt).col_name));
            END IF;
        end loop;
    end if;
    return;
end;
/
select * from table(get_col_name_on_one);
COLUMN_LOOOOOOOOOOOOOONG_100
COLUMN_LOOOOOOOOOOOOOONG_200
COLUMN_LOOOOOOOOOOOOOONG_300
COLUMN_LOOOOOOOOOOOOOONG_400
COLUMN_LOOOOOOOOOOOOOONG_500
COLUMN_LOOOOOOOOOOOOOONG_600
COLUMN_LOOOOOOOOOOOOOONG_700
COLUMN_LOOOOOOOOOOOOOONG_800
COLUMN_LOOOOOOOOOOOOOONG_900
COLUMN_LOOOOOOOOOOOOOONG_1000
You should not get into trouble with wide tables using this solution; I tested it with a 1000-column table with long column names.
Here is a solution, but I have to break it into two parts.
First, extract all the column names of the table. I have used LISTAGG to collect the column names separated by commas.
The output of the first query is then used in the second query.
select listagg(column_name,',') WITHIN GROUP (ORDER BY column_name )
from user_tab_cols where upper(table_name)='X'
The output of the above query will be something like COL_XA,COL_VG,COL_LF,COL_EQ,COL_PP,COL_QM ... and so on.
Copy that output and use it in the query below, replacing <outputvaluesfromfirstquery>:
select NAME from X
unpivot ( bit for NAME in (<outputvaluesfromfirstquery>))
where bit=1
I am trying to merge the above two into a single query, but while there is a PIVOT XML option, there is no UNPIVOT XML option.
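For the sample columns shown in the question, the second query with the LISTAGG output pasted in would look something like this (only the six columns listed in the question are shown):
select NAME
from X
unpivot ( bit for NAME in (COL_XA, COL_VG, COL_LF, COL_EQ, COL_PP, COL_QM) )
where bit = 1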
You can do this with a bunch of union alls:
select 'COL_XA' as name from X t where col_xa = 1 union all
select 'COL_VG' as name from X t where col_vg = 1 union all
. . .
EDIT:
If you have only one row, then you do not need:
MIN(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' ELSE null END) as NAME
You can simply use:
(CASE WHEN t.COL_XA = 1 THEN 'COL_XA' END)
The MIN() isn't needed for one row and the ELSE null is redundant.
I have a table that contains a VARCHAR2 column called COMMANDS.
The data in this column is a bunch of difficult-to-read ZPL code that will be sent to a label printer, and amid the ZPL there are several tokens of the form {TABLE.COLUMN}.
I would like a nice list of all the distinct {TABLE.COLUMN} tokens found in COMMANDS. I wrote the following regex to match the token format:
SELECT REGEXP_SUBSTR(COMMANDS,'\{\w+\.\w+\}') FROM MYTABLE;
The regex works, but it only returns the first matched token per row. Is there a way to return all regex matches for each row?
I'm using Oracle 11gR2.
Edit - Here is a small sample of data from a single row -- there are many such lines in each row:
^FO360,065^AEN,25,10^FD{CUSTOMERS.CUST_NAME}^FS
^FO360,095^AAN,15,12^FD{CUSTOMERS.CUST_ADDR1}^FS
So if that was the only row in table, I'd like to have returned:
{CUSTOMERS.CUST_NAME}
{CUSTOMERS.CUST_ADDR1}
You've provided a sample of data, saying that it is a single row, but have presented it as two different rows, so this example is based on your words.
-- Sample of data from your question + extra row for the sake of demonstration
-- id column is added to distinguish the rows (I assume you have one)
with t1(id, col) as(
    select 1, '^FO360,065^AEN,25,10^FD{CUSTOMERS1.CUST_NAME}^FS^FO360,095^AAN,15,12^FD{CUSTOMERS1.CUST_ADDR1}^FS' from dual union all
    select 2, '^FO360,065^AEN,25,10^FD{CUSTOMERS2.CUST_NAME}^FS^FO360,095^AAN,15,12^FD{CUSTOMERS2.CUST_ADDR2}^FS' from dual
),
cnt(c) as(
    select level
    from (select max(regexp_count(col, '{\w+.\w+}')) as o_c
          from t1
         ) z
    connect by level <= z.o_c
)
select t1.id,
       listagg(regexp_substr(t1.col, '{\w+.\w+}', 1, cnt.c)) within group(order by t1.id) res
from t1
cross join cnt
group by t1.id
Result:
ID  RES
--  ---------------------------------------------
 1  {CUSTOMERS1.CUST_ADDR1}{CUSTOMERS1.CUST_NAME}
 2  {CUSTOMERS2.CUST_ADDR2}{CUSTOMERS2.CUST_NAME}
As per a_horse_with_no_name's comment on the question, it's really much simpler to just replace everything that doesn't match the pattern. Here is an example:
with t1(col) as(
select '^FO360,065^AEN,25,10^FD{CUSTOMERS.CUST_NAME}^FS^FO360,095^AAN,15,12^FD{CUSTOMERS.CUST_ADDR1}^FS' from dual
)
select regexp_replace(t1.col, '({\w+.\w+})|.', '\1') res
from t1
Result:
RES
-------------------------------------------
{CUSTOMERS.CUST_NAME}{CUSTOMERS.CUST_ADDR1}
I don't think there is. You would need to write some PL/SQL to get the other matching tokens. My best advice is to use a pipelined function.
First, create a type:
create type strings as table of varchar2(200);
Then the function:
CREATE OR REPLACE function let_me_show
    return strings PIPELINED as
    l_n number;
    l_r varchar2(200);
begin
    for r_rec in ( SELECT commands
                   FROM MYTABLE )
    loop
        l_n := 1;
        l_r := REGEXP_SUBSTR(r_rec.COMMANDS, '\{\w+\.\w+\}', 1, l_n);
        while l_r is not null
        loop
            pipe row(l_r);
            l_n := l_n + 1;
            l_r := REGEXP_SUBSTR(r_rec.COMMANDS, '\{\w+\.\w+\}', 1, l_n);
        end loop;
    end loop;
    return;
end;
Now you can use the function to return the results:
select *
from table(let_me_show())
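Since the question asks for the distinct tokens, you can simply add DISTINCT on top of the pipelined function:
select distinct column_value
from table(let_me_show())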
I'm working on a PL/SQL stored procedure.
What I need is to do a select, use a cursor, and for every record build a string from its values.
At the end I need to write all of this into a file.
I tried to use dbms_output.put_line('toto'), but the buffer size is too small because I have about 14 million lines.
I call my procedure from a Unix ksh script.
I'm thinking of something like using "spool on" (on the ksh side) to dump the result of my procedure, but I don't know how to do it (if it is even possible).
Anyone have any ideas?
Unless it is really necessary, I would not use a procedure.
If you call the script using SQL*Plus, just put the following into your test.sql (the SETs are from the SQL*Plus FAQ and remove noise from the output):
SET ECHO OFF
SET NEWPAGE 0
SET SPACE 0
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF
SET TRIMSPOOL ON
SET TAB OFF
Select owner || ';' || object_name
From all_objects;
QUIT
and redirect output to a file (test.txt):
sqlplus user/passwd@instance @test.sql > test.txt
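Since you mentioned spool: instead of redirecting standard output, you can also spool from inside test.sql itself, replacing the SELECT/QUIT portion above (the file name is arbitrary):
SPOOL test.txt
Select owner || ';' || object_name
From all_objects;
SPOOL OFF
QUIT
and then invoke it without the redirection: sqlplus user/passwd@instance @test.sql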
If you really need to do stuff in PL/SQL, consider putting that into a function and calling it per record:
Create Or Replace Function calculate_my_row( in_some_data In Varchar2 )
Return Varchar2
As
Begin
Return in_some_data || 'something-complicated';
End calculate_my_row;
Call:
Select owner || ';' || calculate_my_row( object_name )
From all_objects;
Performance could suffer, but it should work. Make sure that what you're trying to do can't be done in pure SQL, though.
Reading your comment, I think the analytic function LAG is what you need.
This example appends * whenever the value of val changes:
With x As (
    Select 1 id, 'A' val FROM dual
    Union Select 2 id, 'A' val FROM dual
    Union Select 3 id, 'B' val FROM dual
    Union Select 4 id, 'B' val FROM dual
)
--# End of test-data
Select
    id,
    val,
    Case When ( val <> prev_val Or prev_val Is Null ) Then '*' End As changed
From (
    Select id, val, Lag( val ) Over ( Order By id ) As prev_val
    From x
)
Order By id
Returns
        ID V C
---------- - -
         1 A *
         2 A
         3 B *
         4 B
If every line of your output is the result of an operation on one row in the table, then a stored function, combined with Peter Lang's answer, can do what you need.
create function create_string(p_foobar foobar%rowtype) return varchar2 as
begin
do_some_stuff(p_foobar);
return p_foobar.foo || ';' ||p_foobar.bar;
end;
/
If it is more complicated than that, maybe you can use a pipelined table function
create type varchar_array
    as table of varchar2(2000)
/
create function output_pipelined return varchar_array PIPELINED as
    v_line varchar2(2000);
begin
    for r_foobar in (select * from foobar)
    loop
        v_line := create_string(r_foobar);
        pipe row(v_line);
    end loop;
    return;
end;
/
select * from TABLE(output_pipelined);
utl_file is your friend
http://www.adp-gmbh.ch/ora/plsql/utl_file.html
But it writes the data to the filesystem on the server, so you will probably need your DBA's help for this.
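A minimal utl_file sketch (the MY_DIR directory object and file name are assumptions; the directory must exist on the database server and be writable):
DECLARE
    l_file UTL_FILE.FILE_TYPE;
BEGIN
    l_file := UTL_FILE.FOPEN('MY_DIR', 'output.txt', 'w', 32767);
    FOR r IN (SELECT owner || ';' || object_name AS line FROM all_objects) LOOP
        UTL_FILE.PUT_LINE(l_file, r.line);
    END LOOP;
    UTL_FILE.FCLOSE(l_file);
END;
/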
Tom Kyte has answered this; see "Flat" from this question on Ask Tom.