Error using two cursors in PL/SQL code - SQL

I have two tables, ERROR_DESCRIPTION and ERROR_COLUMN.
ERROR_DESCRIPTION has the following data:
"error processing column a_type"
"error processing column a_type"
"error processing column a_type"
ERROR_COLUMN has the following data:
"abc",123334,"jdjjd"
"jdjd",2344,"djjd"
"djjd",234,"kkfkf"
In the end my data should look like this:
error processing column a_type - "abc",123334,"jdjjd"
error processing column a_type - "jdjd",2344,"djjd"
and so on...
"a_type" is a column name from the ERROR_COLUMN table.
I am trying to achieve this using cursors.
declare
    cursor c_log is
        select * from ERROR_DESCRIPTION
        where error_data_log like 'error%'
        order by error_data_log;
    r_log ERROR_DESCRIPTION%ROWTYPE;
    v_error varchar2(1000);
    cursor c_dsc is select * from ERROR_COLUMN;
    r_dsc ERROR_COLUMN%ROWTYPE;
begin
    open c_log;
    loop
        fetch c_log into v_error;
        open c_dsc;
        fetch c_dsc into r_dsc;
        dbms_output.put_line('error is' || v_error || '-' || r_dsc.xyz);
        close c_dsc;
    end loop;
    close c_log;
end;
I am not able to get the desired result.
r_dsc.xyz is a column defined for that record type.
Can anyone tell me how I can get the above result?

I'd prefer not to use a cursor when you can achieve the result with a simple query. You can get the result you mentioned with the join below, using the SUBSTR function:
select d.val || substr(d.val,24) || c.val2 || c.val3
from ERROR_DESCRIPTION d
join ERROR_COLUMN c on substr(d.val,24)=c.val1
assuming your structure is ERROR_DESCRIPTION(val), ERROR_COLUMN(val1,val2,val3), according to the sample data you provided.
EDIT (after comments and edit of the question): if you don't have a specific formula or pattern for the join and you want to join the tables only based on row position, then use ROWNUM within a subquery:
select d.val || '-' || c.val1,c.val2,c.val3
from (select rownum rn,val from ERROR_DESCRIPTION) d
join (select rownum rn,val1,val2,val3 from ERROR_COLUMN) c
on d.rn=c.rn
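Note that ROWNUM over an unordered subquery gives no guaranteed pairing between the two tables. If the pairing order matters, a hedged variant (still assuming the hypothetical val and val1..val3 columns from above, and that each table has some column that defines its order) is to number both sides explicitly with ROW_NUMBER():
select d.val || ' - ' || c.val1 || ',' || c.val2 || ',' || c.val3 as result
from (select row_number() over (order by val)  rn, val
      from ERROR_DESCRIPTION) d
join (select row_number() over (order by val1) rn, val1, val2, val3
      from ERROR_COLUMN) c
  on d.rn = c.rn;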

Related

Oracle SQL - table type in cursor causing ORA-21700: object does not exist or is marked for delete

I have a problem with my function; I'm getting the ORA-21700: object does not exist or is marked for delete error. It's caused by the table type parameter in the cursor, but I have no idea how to fix it.
I've read that the table type part should be assigned to a variable, but that can't be done in a cursor, right? I've marked the part that is causing the issue.
Can anyone help? Is there any other way I can do this?
My package looks something like this:
FUNCTION createCSV(DateFrom date
,DateTo date)
RETURN clob IS
CURSOR c_id (c_DateFrom date
,c_DateTo date) IS
SELECT id
FROM limits
WHERE utcDateFrom <= NVL(c_DateTo, utcDateFrom)
AND NVL(utcDateTo, c_DateFrom + 1) >= c_DateFrom + 1;
CURSOR c (c_DateFrom date
,c_DateTo date
,pc_tDatePeriods test_pkg.t_date_periods) IS -- this is table type (TYPE xx AS TABLE OF records)
SELECT l.id limit_id
,TO_CHAR(time_cond.utcDateFrom, og_domain.cm_yyyymmddhh24mi) time_stamp_from
,TO_CHAR(time_cond.utcDateTo, og_domain.cm_yyyymmddhh24mi) time_stamp_to
FROM limits l
JOIN (SELECT limit_id, utcDateFrom, utcDateTo FROM TABLE(pc_tDatePeriods) --This part is causing the issue
) time_cond
ON l.id = time_cond.limit_id
WHERE l.utcDateFrom <= NVL(c_DateTo, l.utcDateFrom)
AND NVL(l.utcDateTo, c_DateFrom + 1) >= c_DateFrom + 1;
CSV clob;
tDatePeriods test_pkg.t_date_periods := test_pkg.t_date_periods();
BEGIN
FOR r_id IN c_id(DateFrom, DateTo)
LOOP
tDatePeriods := test_pkg.includeTimeGaps(p_Id => r_id.id); --this loop is ok
FOR r IN c(DateFrom, DateTo, tDatePeriods) --here I'm getting error
LOOP
CSV := CSV || chr(13) || r.limit_id || ',' || r.time_stamp_from || ',' || r.time_stamp_to;
END LOOP;
END LOOP;
RETURN CSV;
END createCSV;
Your problem should be solved by declaring the type test_pkg.t_date_periods at schema level instead of inside the package.
A similar answer, with more details, can be found here.
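A minimal sketch of what that might look like, assuming the collection currently holds records with the limit_id/utcDateFrom/utcDateTo fields used in the cursor (the object type name here is made up):
-- Schema-level equivalent of the package record/table types (names are illustrative)
CREATE TYPE t_date_period_obj AS OBJECT (
  limit_id    NUMBER,
  utcDateFrom DATE,
  utcDateTo   DATE
);
/
CREATE TYPE t_date_periods AS TABLE OF t_date_period_obj;
/
With the schema-level type in place, the TABLE(pc_tDatePeriods) call in the cursor can reference it directly from SQL; presumably test_pkg.includeTimeGaps would then return this schema-level type instead of the package-level one.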

Cursor in Oracle - after passing parameters, select does not filter result rows

I am facing a strange (to me) issue with the following parameterized cursor.
I have defined the cursor in this way:
CURSOR cur_action ( product_code VARCHAR2(100) , action_master_list VARCHAR2(100))
IS
SELECT
act.ACTION_DETAIL_KEY,
act.ACTION_MASTER_KEY,
act.PRODUCT_CODE,
act.REF_ACTION_DETAIL_KEY
FROM XMLTABLE(action_master_list) x
JOIN ETDW.MFE_AR_ACTION_DETAILS act ON TO_NUMBER(x.COLUMN_VALUE) = act.ACTION_MASTER_KEY
WHERE 1=1
AND act.LAST_FLAG = 'Y'
AND act.PRODUCT_CODE = product_code;
Then I am using it in the following way:
OPEN cur_action ( iFromProductCode , iActionMasterKeyList);
LOOP
FETCH cur_action BULK COLLECT INTO vActionDetailKey, vActionMasterKey, vProductCode, vRefActionDetailKey LIMIT 100;
FOR j IN 1..cur_action%ROWCOUNT
LOOP
dbms_output.put_line('vActionDetailKey: ' || vActionDetailKey (j) ||' vActionMasterKey: '|| vActionMasterKey (j) || ' vProductCode: ' || vProductCode (j));
END LOOP;
END LOOP;
The result seems to be unfiltered. It doesn't return the 3 rows I expect (that result is returned by the cursor's query when I run it outside the procedure/PL block); instead it returns all rows for the actions in the list. So it seems that the WHERE condition "act.PRODUCT_CODE = product_code" was not applied. Why?
Thank you
Why? Because you named the parameter the same as the column, so within the query Oracle resolves product_code to the column and reads the condition as if it were WHERE 1 = 1, i.e. no filtering at all.
Rename the parameters, e.g.
CURSOR cur_action ( par_product_code VARCHAR2(100) , ...
                    ----------------
                    this
and, later,
AND act.PRODUCT_CODE = par_product_code;
                       ----------------
                       this
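To illustrate the name-resolution rule this relies on, here is a minimal hedged sketch (table and column names borrowed from the question, the literal product code is made up): inside a SQL statement in PL/SQL, an unqualified name that matches a column resolves to the column, not to the cursor parameter.
DECLARE
  -- Parameter shadows the column: the predicate compares the column to itself,
  -- so every row with a non-null PRODUCT_CODE passes.
  CURSOR cur_bad (product_code VARCHAR2) IS
    SELECT COUNT(*) FROM ETDW.MFE_AR_ACTION_DETAILS act
     WHERE act.PRODUCT_CODE = product_code;

  -- Distinct parameter name: the predicate really filters on the parameter value.
  CURSOR cur_good (par_product_code VARCHAR2) IS
    SELECT COUNT(*) FROM ETDW.MFE_AR_ACTION_DETAILS act
     WHERE act.PRODUCT_CODE = par_product_code;

  n PLS_INTEGER;
BEGIN
  OPEN cur_good('SOME_PRODUCT');   -- hypothetical product code
  FETCH cur_good INTO n;
  CLOSE cur_good;
  DBMS_OUTPUT.PUT_LINE('filtered count: ' || n);
END;
/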

Performance improvement of plsql using Bulk collect

I am using bulk collect to improve execution time. When I do not use bulk collect, the block executes in 4 minutes.
But when I use bulk collect there is no output, nor is any error message shown in the console. I can see a blank spool file created.
Please let me know if I have used BULK COLLECT incorrectly; also, can this clause be used in a SELECT statement with a LIMIT?
The table contains at most 1 million records.
SET SERVEROUTPUT ON FORMAT WRAPPED
SET VERIFY OFF
SET FEEDBACK OFF
SET TERMOUT OFF
SPOOL C:\Temp\spool_1.txt
DECLARE
cursor c2 is (
select count(distinct e.cdb_pref_event_id)
,e.supp_cd
from (select distinct eh.cdb_customer_id cdb_customer_id
,eh.cdb_pref_event_id cdb_pref_event_id
,eh.supp_cd supp_cd
from (select *
from cdb_stg.cpm_pref_event_stg_arc
where trunc(load_date) = trunc(sysdate - 1)) eh
Left outer join cdb_admin.cpm_pref_result er on (eh.cdb_customer_id =
er.cdb_customer_id and
eh.cdb_pref_event_id =
er.cdb_pref_event_id)
where er.cdb_pref_event_id is null
and er.cdb_customer_id is null) r
join cdb_admin.cpm_pref_event_exception e on (r.cdb_customer_id =
e.cdb_customer_id and
r.cdb_pref_event_id =
e.cdb_pref_event_id)
group by e.supp_cd);
TYPE totalprefresults is table of NUMBER(20);
TYPE supcd_1 is table of cdb_admin.cpm_pref_event_stg.supp_cd%TYPE;
total_prefresults totalprefresults;
supcd1 supcd_1;
--Total_prefresults NUMBER(20);
--SUPCD1 CDB_ADMIN.CPM_PREF_EVENT_STG.supp_cd%TYPE;
profile_counts NUMBER(20);
iter Integer := 0;
BEGIN
select count(distinct cdb_customer_id)
into profile_counts
from cdb_admin.cpm_pref_event_exception h
where cdb_customer_id in
(Select distinct e.cdb_customer_id
from (Select distinct eh.cdb_customer_id cdb_customer_id
,eh.cdb_pref_event_id cdb_pref_event_id
,eh.supp_cd supp_cd
from (select *
from cdb_stg.cpm_pref_event_stg_arc
where trunc(load_date) = trunc(sysdate - 1)) eh
Left outer join cdb_admin.cpm_pref_result er on (eh.cdb_customer_id =
er.cdb_customer_id and
eh.cdb_pref_event_id =
er.cdb_pref_event_id)
where er.cdb_pref_event_id is null
and er.cdb_customer_id is null) r
join cdb_admin.cpm_pref_event_exception e on (r.cdb_customer_id =
e.cdb_customer_id and
r.cdb_pref_event_id =
e.cdb_pref_event_id)
where e.supp_cd = 'PROFILE-NOT-FOUND')
and h.supp_cd != 'PROFILE-NOT-FOUND';
dbms_output.put_line('TOTAL EVENTS VALIDATION');
dbms_output.put_line('-------------------------------------------------------------');
dbms_output.put_line('');
dbms_output.put_line(rpad('Pref_Counts', 25) || rpad('Supp_CD', 25));
OPEN c2;
LOOP
FETCH c2 BULK COLLECT
INTO total_prefresults
,supcd1 limit 100;
EXIT WHEN c2%NOTFOUND;
dbms_output.put_line(rpad(total_prefresults, 25) || rpad(supcd1, 25));
IF (supcd1 = 'PROFILE-NOT-FOUND')
then
dbms_output.put_line('');
dbms_output.put_line('Profile not found records count : ' ||
total_prefresults);
dbms_output.put_line(profile_counts ||
' : counts moved to other exceptions ');
dbms_output.put_line((total_prefresults - profile_counts) ||
' : are still in Profile_not_found exception');
END IF;
iter := iter + 1;
END LOOP;
CLOSE c2;
dbms_output.put_line('');
dbms_output.put_line('Number of missing Records: ' || iter);
END;
/
SPOOL OFF
I think the bottleneck is this condition: where trunc(load_date) = trunc(sysdate - 1)
Do you have an index on trunc(load_date)? Either create a function-based index on trunc(load_date), or if you already have an index on load_date then try
WHERE load_date >= trunc(sysdate - 1) AND load_date < trunc(sysdate)
Also check whether DISTINCT is really needed in your queries; remove it where possible.
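For reference, the function-based index might be created like this (the index name is made up; the table name is taken from the question):
CREATE INDEX cpm_pref_stg_arc_trunc_ld_idx
  ON cdb_stg.cpm_pref_event_stg_arc (TRUNC(load_date));
With the range form of the predicate shown above, a plain index on load_date can be used instead, which avoids maintaining a separate function-based index.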
I have reframed your code from OPEN c2; to CLOSE c2;
BULK COLLECT should be executed to store all the data in the collection only once (in one go), and then this collection can be used, via its index (I in the following case), in FOR loops as follows:
OPEN C2;
FETCH C2 BULK COLLECT INTO
TOTAL_PREFRESULTS,
SUPCD1;
--EXIT WHEN C2%NOTFOUND;
CLOSE C2;
-- To list down all the values before processing the logic
FOR I IN TOTAL_PREFRESULTS.FIRST..TOTAL_PREFRESULTS.LAST LOOP
DBMS_OUTPUT.PUT_LINE(RPAD(TOTAL_PREFRESULTS(I), 25)
|| RPAD(SUPCD1(I), 25));
END LOOP;
FOR I IN TOTAL_PREFRESULTS.FIRST..TOTAL_PREFRESULTS.LAST LOOP
IF ( SUPCD1(I) = 'PROFILE-NOT-FOUND' ) THEN
DBMS_OUTPUT.PUT_LINE('');
DBMS_OUTPUT.PUT_LINE('Profile not found records count : ' || TOTAL_PREFRESULTS(I));
DBMS_OUTPUT.PUT_LINE(PROFILE_COUNTS || ' : counts moved to other exceptions ');
DBMS_OUTPUT.PUT_LINE((TOTAL_PREFRESULTS(I) - PROFILE_COUNTS)
|| ' : are still in Profile_not_found exception');
END IF;
ITER := ITER + 1;
END LOOP;
Replace the above code snippet in your code and try to execute it.
Refer to the guide on using BULK COLLECT.
Cheers!!
Bulk collect can provide a considerable performance gain. However, there are a couple of gotchas involved.
First off, there is the difference in the meaning of %NOTFOUND.
On a standard cursor, %NOTFOUND means all rows have already been fetched and there are no more. With bulk collect this changes to "there were insufficient rows to reach the LIMIT specified (if present)".
That does NOT mean no rows were fetched, just that the specified limit was not reached. For example, if your limit is 100 and the fetch retrieved only 50, then %NOTFOUND would return TRUE. This is where the referenced guide fails.
The second gotcha is what happens without the LIMIT clause: all rows from the cursor are returned into session memory (the PGA). So what's the problem with that?
If there are 100 rows, or 1000, then most likely you're OK; but suppose there are 100,000 or 1M rows: they are still all loaded into memory. Finally (at least for now), when the LIMIT clause is used, the entire fetch-plus-process must itself be enclosed within a loop, or you process only the first fetch, meaning only the specified limit number of rows, no matter how many actually exist. Another point where the referenced guide fails.
The following skeleton accommodates the above.
declare
    max_bulk_rows constant integer := 1000;  -- define the max number of rows for each fetch
    cursor c_bulk is
        Select ... ;
    type bulk_row_t is table of c_bulk%rowtype;
    bulk_row bulk_row_t;
begin
    open c_bulk;
    loop
        fetch c_bulk                             -- fill buffer
            bulk collect into bulk_row
            limit max_bulk_rows;
        exit when bulk_row.count = 0;            -- nothing left to process
        for i in bulk_row.first .. bulk_row.last -- process each row in buffer
        loop
            null; -- "process individual row here"
        end loop;
        -- FORALL ...                            -- bulk output of rows here if needed
        exit when bulk_row.count < max_bulk_rows; -- exit process loop once all rows are processed
    end loop;                                    -- loop back and fetch the next buffer if needed
    close c_bulk;
    ...
end;

How to Insert data using cursor into new table having single column containing XML type data in Oracle?

I'm able to insert values into table 2 from table 1 and execute the PL/SQL block successfully, but somehow the output is clunky and I don't know why.
Below is the code:
create table airports_2_xml
(
airport xmltype
);
declare
cursor insert_xml_cr is select * from airports_1_orcl;
begin
for i in insert_xml_cr
loop
insert into airports_2_xml values
(
xmlelement("OneAirport",
xmlelement("Rank", i.Rank) ||
xmlelement("airport",i.airport) ||
xmlelement("Location",i.Location) ||
xmlelement("Country", i.Country) ||
xmlelement("Code_iata",i.code_iata) ||
xmlelement("Code_icao", i.code_icao) ||
xmlelement("Total_Passenger",i.Total_Passenger) ||
xmlelement("Rank_change", i.Rank_change) ||
xmlelement("Percent_Change", i.Percent_change)
));
end loop;
end;
/
select * from airports_2_xml;
Output:
Why is it showing &lt; and &gt; in the output? And why am I unable to see the output fully?
Expected output:
<OneAirport>
<Rank>3</Rank>
<Airport>Dubai International</Airport>
<Location>Garhoud</Location>
<Country>United Arab Emirates</Country>
<Code_IATA>DXB</Code_IATA>
<Code_ICAO>OMDB</Code_ICAO>
<Total_passenger>88242099</Total_passenger>
<Rank_change>0</Rank_change>
<Percent_Change>5.5</Percent_Change>
</OneAirport>
The main issue is how you are constructing the XML. You have an outer XMLElement for OneAirport, and the content of that element is a single string.
You are generating individual XMLElements from the cursor fields, but then you are concatenating those together, which gives you a single string that still has the angle brackets you're expecting. So you're trying to do something like this, simplified a bit:
select
xmlelement("OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT("ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKSON</AIRPORT>')
--------------------------------------------------------------------------------
<OneAirport><Rank>1</Rank><airport>Hartsfield-Jackson</airp
and by default XMLElement() escapes entities in the passed-in values, so the angle brackets are being converted to 'safe' equivalents like &lt;. If it didn't do that, or you told it not to with noentityescaping:
select xmlelement(noentityescaping "OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT(NOENTITYESCAPING"ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKS
--------------------------------------------------------------------------------
<OneAirport><Rank>1</Rank><airport>Hartsfield-Jackson</airport></OneAirport>
then that would appear to be better, but you still actually have a single element containing a single string (with characters that are likely to cause problems down the line), rather than the XML structure you almost certainly intended.
A simple way to get an actual structure is with XMLForest():
xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
You don't need the cursor loop, or any PL/SQL; you can just do:
insert into airports_2_xml (airport)
select xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
from airports_1_orcl i;
The secondary issue is the display. You'll see more data if you issue some formatting commands, such as:
set lines 120
set long 32767
set longchunk 32767
Those will tell your client to retrieve and show more of the long (XMLType here) data, rather than the default 80 characters it's giving you now.
Once you are generating a nested XML structure, you can use XMLSerialize() to display it more readably when you query your second table.
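For example, a hedged sketch of such a query, assuming the table and column from the question and an Oracle version that supports the INDENT clause:
-- Pretty-print the stored XML when querying the target table
select xmlserialize(document airport as clob indent size = 2) as airport_xml
from airports_2_xml;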
Try the block below:
declare
cursor insert_xml_cr is select * from airports_1_orcl;
v_airport_xml SYS.XMLTYPE;
begin
for i in insert_xml_cr
loop
SELECT XMLELEMENT ( "OneAirport",
XMLFOREST(i.Rank as "Rank"
,i.airport as "Airport"
,i.Location as "Location"
,i.Country as "Country"
,i.code_iata as "Code_iata"
,i.code_icao as "code_icao"
,i.Total_Passenger as "Total_Passenger"
, i.Rank_change as "Rank_change"
,i.Percent_change as "Percent_Change"
))
into v_airport_xml
FROM DUAL;
insert into airports_2_xml values (v_airport_xml);
end loop;
end;

Oracle : String Concatenation is too long

I have the SQL below as part of a view. In one of the schemas I am getting the "string concatenation is too long" error and am not able to execute the view.
Hence I tried TO_CLOB(), and now the view is not throwing the error, but it is not returning the result either; it just keeps on running.
Please suggest.
SQL:
SELECT Iav.Item_Id Attr_Item_Id,
LISTAGG(La.Attribute_Name
||'|~|'
|| Lav.Attribute_Value
||' '
|| Lau.Attribute_Uom, '}~}') WITHIN GROUP (
ORDER BY ICA.DISP_SEQ,LA.ATTRIBUTE_NAME) AS ATTR
FROM Item_Attribute_Values Iav,
Loc_Attribute_Values Lav,
Loc_Attribute_Uoms Lau,
Loc_Attributes La,
(SELECT *
FROM Item_Classification Ic,
CATEGORY_ATTRIBUTES CA
WHERE IC.DEFAULT_CATEGORY='Y'
AND IC.TAXONOMY_TREE_ID =CA.TAXONOMY_TREE_ID
) ICA
WHERE IAV.ITEM_ID =ICA.ITEM_ID(+)
AND IAV.ATTRIBUTE_ID =ICA.ATTRIBUTE_ID(+)
AND Iav.Loc_Attribute_Id =La.Loc_Attribute_Id
AND La.Locale_Id =1
AND Iav.Loc_Attribute_Uom_Id =Lau.Loc_Attribute_Uom_Id(+)
AND Iav.Loc_Attribute_Value_Id=Lav.Loc_Attribute_Value_Id
GROUP BY Iav.Item_Id;
Error:
ORA-01489: result of string concatenation is too long
01489. 00000 - "result of string concatenation is too long"
*Cause: String concatenation result is more than the maximum size.
*Action: Make sure that the result is less than the maximum size.
You can use the COLLECT() function to aggregate the strings into a collection and then use a User-Defined function to concatenate the strings:
Oracle Setup:
CREATE TYPE stringlist IS TABLE OF VARCHAR2(4000);
/
CREATE FUNCTION concat_List(
strings IN stringlist,
delim IN VARCHAR2 DEFAULT ','
) RETURN CLOB DETERMINISTIC
IS
value CLOB;
i PLS_INTEGER;
BEGIN
IF strings IS NULL THEN
RETURN NULL;
END IF;
value := EMPTY_CLOB();
IF strings IS NOT EMPTY THEN
i := strings.FIRST;
LOOP
IF i > strings.FIRST AND delim IS NOT NULL THEN
value := value || delim;
END IF;
value := value || strings(i);
EXIT WHEN i = strings.LAST;
i := strings.NEXT(i);
END LOOP;
END IF;
RETURN value;
END;
/
Query:
SELECT Iav.Item_Id AS Attr_Item_Id,
CONCAT_LIST(
CAST(
COLLECT(
La.Attribute_Name || '|~|' || Lav.Attribute_Value ||' '|| Lau.Attribute_Uom
ORDER BY ICA.DISP_SEQ,LA.ATTRIBUTE_NAME
)
AS stringlist
),
'}~}'
) AS ATTR
FROM your_table
GROUP BY iav.item_id;
LISTAGG is limited to the maximum VARCHAR2 size (4000 bytes by default), unfortunately, so you may want to use another approach to concatenate the values.
Anyway ...
It is strange to see LISTAGG, which is a rather new feature, combined with the error-prone old-style comma joins. I'd suggest you re-write this. Are the tables even properly joined? It looks strange that there seems to be no relation between Loc_Attributes and, say, Loc_Attribute_Values. Doesn't Loc_Attribute_Values have a Loc_Attribute_Id, so that an attribute value relates to an attribute? It would be hard to believe that there is no such relation.
Moreover: is it guaranteed that your classification subquery doesn't return more than one record per attribute?
Here is your query re-written:
select
iav.item_id as attr_item_id,
listagg(la.attribute_name || '|~|' || lav.attribute_value || ' ' || lau.attribute_uom,
'}~}') within group (order by ica.disp_seq, la.attribute_name) as attr
from item_attribute_values iav
join loc_attribute_values lav
on lav.loc_attribute_value_id = iav.loc_attribute_value_id
and lav.loc_attribute_id = iav.loc_attribute_id -- <== maybe?
join loc_attributes la
on la.loc_attribute_id = lav.loc_attribute_id
and la.loc_attribute_id = lav.loc_attribute_id -- <== maybe?
and la.locale_id = 1
left join loc_attribute_uoms lau
on lau.loc_attribute_uom_id = iav.loc_attribute_uom_id
and lau.loc_attribute_id = iav.loc_attribute_id -- <== maybe?
left join
(
-- aggregation needed to get no more than one sortkey per item attribute?
select ic.item_id, ca.attribute_id, min (ca.disp_seq) as disp_seq
from item_classification ic
join category_attributes ca on ca.taxonomy_tree_id = ic.taxonomy_tree_id
where ic.default_category = 'y'
group by ic.item_id, ca.attribute_id
) ica on ica.item_id = iav.item_id and ica.attribute_id = iav.attribute_id
group by iav.item_id;
Well, you get the idea; check your keys and alter your join criteria where necessary. Maybe this gets rid of duplicates, so LISTAGG has to concatenate fewer attributes, and maybe the result even stays within 4000 characters.
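As an aside, and only if you are on Oracle 12.2 or later (the question doesn't say), LISTAGG can also be told to truncate instead of raising ORA-01489; you lose the overflowing attributes, but the view stays runnable:
LISTAGG(la.attribute_name || '|~|' || lav.attribute_value || ' ' || lau.attribute_uom,
        '}~}' ON OVERFLOW TRUNCATE '...' WITH COUNT)
  WITHIN GROUP (ORDER BY ica.disp_seq, la.attribute_name) AS attr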
XQuery approach.
Creating extra types or functions isn't necessary.
with test_tab
as (select object_name
from all_objects
where rownum < 1000)
, aggregate_to_xml as (select xmlagg(xmlelement(val, object_name)) xmls from test_tab)
select xmlcast(xmlquery('for $row at $idx in ./*/text() return if($idx=1) then $row else concat(",",$row)'
passing aggregate_to_xml.xmls returning content) as Clob) as list_in_lob
from aggregate_to_xml;
I guess you need to write a small function to concatenate the strings into a CLOB, because even when you wrap the LISTAGG in TO_CLOB() at the end, this might not work.
Here's a sample function that takes a SELECT statement (which MUST return only one string column!) and a separator, and returns the collected values as a CLOB:
CREATE OR REPLACE FUNCTION listAggCLob(p_stringSelect VARCHAR2
, p_separator VARCHAR2)
RETURN CLOB
AS
cur SYS_REFCURSOR;
s VARCHAR2(4000);
c CLOB;
i INTEGER;
BEGIN
dbms_lob.createtemporary(c, FALSE);
IF (p_stringSelect IS NOT NULL) THEN
OPEN cur FOR p_stringSelect;
LOOP
FETCH cur INTO s;
EXIT WHEN cur%NOTFOUND;
dbms_lob.append(c, s || p_separator);
END LOOP;
END IF;
i := length(c);
IF (i > 0) THEN
RETURN dbms_lob.substr(c,i-length(p_separator));
ELSE
RETURN NULL;
END IF;
END;
This function can be used, for example, like this:
WITH cat AS (
SELECT DISTINCT t1.category
FROM lookup t1
)
SELECT cat.category
, listAggCLob('select t2.name from lookup t2 where t2.category = ''' || cat.category || '''', '|') allcategorynames
FROM cat;