How to insert data using a cursor into a new table having a single column containing XML type data in Oracle?

I'm able to insert values into table 2 from table 1 and execute the PL/SQL procedure successfully, but somehow the output is clunky and I don't know why.
Below is the code:
create table airports_2_xml
(
airport xmltype
);
declare
cursor insert_xml_cr is select * from airports_1_orcl;
begin
for i in insert_xml_cr
loop
insert into airports_2_xml values
(
xmlelement("OneAirport",
xmlelement("Rank", i.Rank) ||
xmlelement("airport",i.airport) ||
xmlelement("Location",i.Location) ||
xmlelement("Country", i.Country) ||
xmlelement("Code_iata",i.code_iata) ||
xmlelement("Code_icao", i.code_icao) ||
xmlelement("Total_Passenger",i.Total_Passenger) ||
xmlelement("Rank_change", i.Rank_change) ||
xmlelement("Percent_Change", i.Percent_change)
));
end loop;
end;
/
select * from airports_2_xml;
Output:
Why is it showing &lt; and &gt; in the output? And why am I unable to see the output fully?
Expected output:
<OneAirport>
<Rank>3</Rank>
<Airport>Dubai International</Airport>
<Location>Garhoud</Location>
<Country>United Arab Emirates</Country>
<Code_IATA>DXB</Code_IATA>
<Code_ICAO>OMDB</Code_ICAO>
<Total_passenger>88242099</Total_passenger>
<Rank_change>0</Rank_change>
<Percent_Change>5.5</Percent_Change>
</OneAirport>

The main issue is how you are constructing the XML. You have an outer XMLElement for OneAirport, and the content of that element is a single string.
You are generating individual XMLElements from the cursor fields, but then you are concatenating those together, which gives you a single string which still has the angle brackets you're expecting. So you're trying to do something like, simplified a bit:
select
xmlelement("OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT("ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKSON</AIRPORT>')
--------------------------------------------------------------------------------
<OneAirport>&lt;Rank&gt;1&lt;/Rank&gt;&lt;airport&gt;Hartsfield-Jackson&lt;/airp
and by default XMLElement() escapes entities in the passed-in values, so the angle brackets are being converted to 'safe' equivalents like &lt;. If it didn't do that, or you told it not to with noentityescaping:
select xmlelement(noentityescaping "OneAirport", '<Rank>1</Rank><airport>Hartsfield-Jackson</airport>')
from dual;
XMLELEMENT(NOENTITYESCAPING"ONEAIRPORT",'<RANK>1</RANK><AIRPORT>HARTSFIELD-JACKS
--------------------------------------------------------------------------------
<OneAirport><Rank>1</Rank><airport>Hartsfield-Jackson</airport></OneAirport>
then that would appear to be better, but you still actually have a single element with a single string (with characters that are likely to cause problems down the line), rather than the XML structure you almost certainly intended.
A simple way to get an actual structure is with XMLForest():
xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
You don't need the cursor loop, or any PL/SQL; you can just do:
insert into airports_2_xml (airport)
select xmlelement("OneAirport",
xmlforest(i.Rank, i.airport, i.Location, i.Country, i.code_iata,
i.code_icao, i.Total_Passenger, i.Rank_change, i.Percent_change)
)
from airports_1_orcl i;
The secondary issue is the display. You'll see more data if you issue some formatting commands, such as:
set lines 120
set long 32767
set longchunk 32767
Those will tell your client to retrieve and show more of the long (XMLType here) data, rather than the default 80 characters it's giving you now.
Once you are generating a nested XML structure, you can use XMLSerialize() to display it more readably when you query your second table.
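For instance, a minimal sketch assuming the airports_2_xml table above (the INDENT clause needs 11g or later):
select xmlserialize(document airport as clob indent size = 2) as airport_xml
from airports_2_xml;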

Try the block below:
declare
  cursor insert_xml_cr is select * from airports_1_orcl;
  v_airport_xml SYS.XMLTYPE;
begin
  for i in insert_xml_cr
  loop
    SELECT XMLELEMENT( "OneAirport",
             XMLFOREST(i.Rank as "Rank"
                      ,i.airport as "Airport"
                      ,i.Location as "Location"
                      ,i.Country as "Country"
                      ,i.code_iata as "Code_iata"
                      ,i.code_icao as "Code_icao"
                      ,i.Total_Passenger as "Total_Passenger"
                      ,i.Rank_change as "Rank_change"
                      ,i.Percent_change as "Percent_Change"))
      into v_airport_xml
    FROM DUAL;
    insert into airports_2_xml values (v_airport_xml);
  end loop;
end;
/


Look up SDO_GEOMETRY validation error code using SQL

I have a query that validates an SDO_GEOMETRY in Oracle 18c:
select
sdo_geom.validate_geometry_with_context(
sdo_geometry ('polygon ((676832.320 4857578.086, 665287.423 4857578.086, 665277.423 4878109.585,
676832.320 4878119.585, 676842.320 4857588.086))', 26917)
, 0.005) as validation
from
dual
VALIDATION
-----------------------------
13348 [Element <1>] [Ring <1>]
(1 row selected.)
db<>fiddle
The query produces an error code in a text column, but it doesn't describe what the code means.
I am able to look up the error manually in the docs (82 ORA-12700 to ORA-19400):
ORA-13348: polygon boundary is not closed
Cause: The boundary of a polygon does not close.
Action: Alter the coordinate values or the definition of the SDO_GTYPE or SDO_ETYPE attribute of the geometry.
But manually looking up those error codes is inconvenient.
Is there a way to enhance the query so that it returns the full error description? (get the description from the database)
Assuming you can parse the string to pull out the error code, you can pass it to sqlerrm to get the text of the error (note that you're apparently getting a positive value; you'd need to negate it before passing it to sqlerrm). I would assume that you could just take everything before the first space to get the error number, but I don't have a huge sample set to work with.
declare
l_message varchar2(1000);
begin
l_message := sqlerrm( -13348 );
dbms_output.put_line( l_message );
end;
/
will print
ORA-13348: polygon boundary is not closed
Building on @JustinCave's answer, here's a custom function that gets the error description from the validation text:
with function error_description(validation in varchar2) return varchar2 is
begin
return sqlerrm(substr(validation, 1, instr(validation,' ') - 1) * -1); --Multiply by -1. Oracle error codes seem to be "negative".
end;
select
error_description(validation) as error_description
from
(select
sdo_geom.validate_geometry_with_context(
sdo_geometry ('polygon ((676832.320 4857578.086, 665287.423 4857578.086, 665277.423 4878109.585, 676832.320 4878119.585, 676842.320 4857588.086))', 26917), 0.005) as validation
from dual)
ERROR_DESCRIPTION
-------------------
ORA-13348: polygon boundary is not closed
Edit:
As pointed out by @SolomonYakobson in a related post (Are certain kinds of Oracle functions only available in PL/SQL, not SQL?), the SQLERRM() function can also be used in a SELECT query, without the need for a custom function.
Many of the functions are defined in the Oracle-supplied package SYS.STANDARD.
Example: SELECT SYS.STANDARD.SQLERRM(-1422) FROM DUAL;
So we just need to fully qualify the function: SYS.STANDARD.SQLERRM()
select
sys.standard.sqlerrm(substr(validation, 1, instr(validation,' ') - 1) * -1) error_description
from
(select
sdo_geom.validate_geometry_with_context(
sdo_geometry ('polygon ((676832.320 4857578.086, 665287.423 4857578.086, 665277.423 4878109.585, 676832.320 4878119.585, 676842.320 4857588.086))', 26917), 0.005) as validation
from dual)
ERROR_DESCRIPTION
---------------------
ORA-13348: polygon boundary is not closed

How to use Oracle JSON_VALUE

I'm working on a trigger.
declare
v_time number(11, 0);
begin
for i in 0..1
loop
select JSON_VALUE(body, '$.sections[0].capsules[0].timer.seconds') into v_time from bodycontent where contentid=1081438;
dbms_output.put_line(v_time);
end loop;
end;
However, I cannot make the index references dynamic, as in JSON_VALUE(body, '$.sections[i].capsules[i].timer.seconds').
Is there any way I can do this?
You can use JSON_TABLE:
declare
v_time number(11, 0);
begin
for i in 0..1 loop
SELECT time
INTO v_time
FROM bodycontent b
CROSS APPLY
JSON_TABLE(
b.body,
'$.sections[*]'
COLUMNS (
section_index FOR ORDINALITY,
NESTED PATH '$.capsules[*]'
COLUMNS (
capsule_index FOR ORDINALITY,
time NUMBER(11,0) PATH '$.timer.seconds'
)
)
) j
WHERE j.section_index = i + 1
AND j.capsule_index = i + 1
AND b.contentid=1081438;
dbms_output.put_line(v_time);
end loop;
end;
/
Which, for the test data:
CREATE TABLE bodycontent ( body CLOB CHECK ( body IS JSON ), contentid NUMBER );
INSERT INTO bodycontent ( body, contentid ) VALUES (
'{"sections":[
{"capsules":[{"timer":{"seconds":0}},{"timer":{"seconds":1}},{"timer":{"seconds":2}}]},
{"capsules":[{"timer":{"seconds":3}},{"timer":{"seconds":4}},{"timer":{"seconds":5}}]},
{"capsules":[{"timer":{"seconds":6}},{"timer":{"seconds":7}},{"timer":{"seconds":8}}]}]}',
1081438
);
Outputs:
0
4
Or, you can just use a query:
SELECT section_index, capsule_index, time
FROM bodycontent b
CROSS APPLY
JSON_TABLE(
b.body,
'$.sections[*]'
COLUMNS (
section_index FOR ORDINALITY,
NESTED PATH '$.capsules[*]'
COLUMNS (
capsule_index FOR ORDINALITY,
time NUMBER(11,0) PATH '$.timer.seconds'
)
)
) j
WHERE ( j.section_index, j.capsule_index) IN ( (1,1), (2,2) )
AND b.contentid=1081438;
Which outputs:
SECTION_INDEX | CAPSULE_INDEX | TIME
------------: | ------------: | ---:
1 | 1 | 0
2 | 2 | 4
(Note: the indexes from FOR ORDINALITY are 1 higher than the array indexes in the JSON path.)
db<>fiddle here
You would need to concatenate the variable in the json path:
JSON_VALUE(body, '$.sections[' || to_char(i) || '].capsules[0].timer.seconds')
I don't really see how your question relates to a trigger.
So, the problem is that you want to loop over an index (to go diagonally through the array of arrays, picking up just two elements) - but the JSON_* functions don't work with variables as array indices - they require hard-coded indexes.
PL/SQL has an answer for that - native dynamic SQL, as demonstrated below.
However, note that this approach makes repeated calls to JSON_VALUE() over the same document. Depending on the actual requirement (I assume the one in your question is just for illustration) and the size of the document, it may be more efficient to make a single call to JSON_TABLE(), as illustrated in MT0's answer. If in fact you are indeed only extracting two scalar values from a very large document, two calls to JSON_VALUE() may be justified, especially if the document is much larger than shown here; but if you are extracting many scalar values from a document that is not too complicated, then a single call to JSON_TABLE() may be better, even if in the end you don't use all the data it produces.
Anyway - as an illustration of native dynamic SQL, here's the alternative solution (using MT0's table):
declare
v_time number(11, 0);
begin
for i in 0..1 loop
execute immediate q'#
select json_value(body, '$.sections[#' || i ||
q'#].capsules[#' || i || q'#].timer.seconds')
from bodycontent
where contentid = 1081438#'
into v_time;
dbms_output.put_line(v_time);
end loop;
end;
/
0
4
PL/SQL procedure successfully completed.
You can see the example below.
Just run this query in your console and try to understand the implementation.
SELECT JSON_VALUE('{
"increment_id": "2500000043",
"item_id": "845768",
"options": [
{
"firstname": "Kevin"
},
{
"firstname": "Okay"
},
{
"lastname": "Test"
}
]
}', '$.options[0].firstname') AS value
FROM DUAL;
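This should return:
VALUE
--------------------
Kevin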

Import array type into HANA?

I am importing data into SAP HANA using CSV files.
When I try to import a column which has an array type, it results in the following error:
ARRAY type is not compatible with PARAMETER TYPE
For example
CREATE COLUMN TABLE "SCHEMA"."TABLE"
( "ID" INT,
"SUBJECTS" INT ARRAY)
The above query creates the table and when I run
INSERT INTO "SCHEMA"."TABLE" VALUES (1,ARRAY(1,2,3))
It inserts successfully into the HANA database.
But when I try
INSERT INTO "SCHEMA"."TABLE" VALUES (1,"{1,2,3}")
It does not work. So how can I import the array values in the CSV file into the column in the HANA database?
Array storage types can currently only be created by using the ARRAY() function.
You could construct the INSERT statement in a loop but you still need to construct the ARRAY() call for every record.
Ok, here's the example you asked for.
By now you should understand that there is no simple IMPORT command that would automatically insert arrays into a HANA table.
That leaves you with two options as I see it:
1. You write a loader program that reads your CSV file, parses the array data {..., ..., ...} and makes INSERT statements with ARRAY() functions out of it.
or
2. You load the data in two steps:
2.1 Load the data from the CSV as-is and put the array data into a CLOB column.
2.2 Add the array columns to the table and run a loop that takes the CLOB data, replaces the curly brackets with normal brackets and creates a dynamic SQL statement.
Like so:
create column table array_imp_demo as (
select owner_name, object_type, to_clob('{' || string_agg(object_oid, ', ') || '}') array_text
from ownership
group by owner_name, object_type);
select top 10 * from array_imp_demo;
/*
OWNER_NAME OBJECT_TYPE ARRAY_TEXT
SYS TABLE {142540, 142631, 142262, 133562, 185300, 142388, 133718, 142872, 133267, 133913, 134330, 143063, 133386, 134042, 142097, 142556, 142688, 142284, 133577, 185357, 142409, 133745, 142902, 133308, 133948, 134359, 143099, 133411, 134135, 142118, 142578, 142762, 142306, 133604, 142429, 133764, 142928, 133322, 133966, 134383, 143120, 133443, 134193, 142151, 142587, 142780, 142327, 133642, 142448, 133805, 142967, 185407, 133334, 133988, 134624, 143134, 133455, 134205, 142173, 142510, 142606, 142798, 142236, 133523, 142359, 133663, 142465, 133825, 142832, 133175, 133866, 134269, 143005, 133354, 134012, 134646, 143148, 133497, 134231, 142195, 142526, 142628, 142816, 142259, 133551, 142382, 133700, 142493, 133855, 142862, 133235, 133904, 134309, 143051, 133373, 134029, 142082, 143306, 133513, 134255, 142216, 142553, 142684, 142281, 133572, 185330, 142394, 133738, 142892, 133299, 133929, 134351, 143080, 133406, 134117, 142115, 142576, 142711, 142303, 133596, 142414, 133756, 142922, 185399, 133319, 133958, 134368, 143115,
SYSTEM TABLE {154821, 146065, 146057, 173960, 174000, 173983, 144132, 173970}
_SYS_STATISTICS TABLE {151968, 146993, 147245, 152026, 147057, 147023, 151822, 147175, 147206, 151275, 151903, 147198, 147241, 151326, 151798, 152010, 147016, 147039, 147068, 151804, 151810, 147002, 147202, 151264, 151850, 147186, 147114, 147228, 151300, 151792, 151992, 147030, 147062, 151840, 147181, 147210, 151289, 151754, 149419, 150274, 147791, 150574, 147992, 149721, 150672, 148132, 149894, 147042, 148434, 149059, 150071, 147518, 148687, 149271, 150221, 150877, 147716, 148913, 150388, 147957, 149675, 150634, 148102, 149829, 148267, 148979, 149976, 147494, 148663, 151107, 149224, 150178, 147667, 148855, 149532, 150306, 147925, 149602, 150598, 148080, 149792, 148179, 149926, 147417, 148547, 149186, 150107, 147568, 148804}
_SYS_EPM TABLE {143391, 143427, 143354, 143385, 143411, 143376, 143398, 143367, 143414, 143344, 175755}
_SYS_REPO TABLE {145086, 151368, 171680, 145095, 152081, 169183, 151443, 149941, 154366, 143985, 145116, 152104, 151496, 169029, 177728, 179047, 145065, 169760, 178767, 151659, 169112, 169258, 153736, 177757, 174923, 145074, 150726, 151697, 169133, 178956, 145083, 169171, 168992, 145092, 177665, 169178, 151378, 169271, 178881, 174911, 154128, 143980, 145101, 152098, 151481, 177720, 152504, 145062, 151570, 169102, 154058, 145071, 170733, 151687, 169130, 145080, 171629, 169166, 178846, 145089, 149588, 151373, 177614, 143976, 145098, 152087, 151458, 149955, 178907, 154386, 145059, 169605, 151529, 169035, 178579, 151176, 179178, 145068, 150709, 151670, 169124, 174905, 177778, 154244, 145077, 170883, 169158, 144072, 152681, 144285, 154415, 144620, 145268, 168884, 144895, 143512, 151428, 168774, 143750, 152337, 168558, 144114, 149559, 152719, 144327, 144674, 145508, 168924, 144939, 143578, 152135, 143793, 152392, 168587, 144151, 152753, 144370, 144720, 145722, 168960, 144990, 143626, 152174, 143832, 152435, 168620, 144188, 152785,
_SYS_TASK TABLE {146851, 146847, 146856, 146231, 146143, 146839, 146854, 146834, 146430, 146464, 146167, 146505, 146205, 146257, 146384, 146313, 146420, 146356, 146454, 146155, 146495, 146193, 146525, 146244, 146368, 146296, 146410, 146346, 146443, 146148, 146484, 146181, 146517, 146234, 146275, 146395, 146325}
_SYS_AFL TABLE {177209, 176741, 176522, 177243, 176777, 176692, 176201, 177294, 176929, 177328, 176967, 177383, 177105, 177015, 176826, 177143, 177056, 176866, 177215, 176748, 176550, 176474, 177249, 176784, 176699, 176218, 146871, 177300, 176935, 177335, 176973, 177389, 177111, 177021, 176835, 177156, 177062, 176883, 185427, 177221, 176755, 176572, 176487, 177261, 176790, 176706, 176398, 177306, 176941, 177341, 176979, 177397, 177119, 177033, 176843, 177162, 177069, 176889, 177228, 176762, 176589, 177267, 176799, 176713, 176416, 177313, 176953, 177348, 176991, 177404, 177127, 177040, 176849, 177173, 177075, 176895, 177195, 176732, 176507, 177234, 176768, 177274, 176805, 176720, 176448, 177285, 176921, 177319, 176959, 177354, 176997, 177375, 177095, 177007, 176818, 177411, 177133, 177047, 176858, 177179, 177081, 176911, 177201, 176738, 176519, 177241, 176775, 176690, 176196, 177280, 176814, 176727, 176464, 177291, 176927, 177326, 176965, 177360, 177003, 177381, 177103, 177013, 176824, 177141, 177053, 176864, 177191, 177091,
_SYS_XB TABLE {146971, 146957}
DEVDUDE TABLE {167121, 165374, 182658, 156616, 173111, 181568, 174901, 183413, 184432, 183498, 183470, 184464, 155821, 173102, 183613, 184495, 155857, 166744, 180454, 184547, 156234, 172096, 166765, 165548, 184649, 183399, 184357, 184577, 183477, 181594, 183537, 181572, 167201, 184685, 185467, 183406, 184422, 184610, 183491, 155842, 172923, 157723, 182636, 167895, 183463, 184454, 183505, 165542, 183606, 184488, 155849, 172749, 157626, 184527, 183449, 166759, 184627, 182827, 184347, 184568, 157619, 172118, 183530, 181556, 167137, 184670, 182642, 184411, 184598, 183484, 155835, 183456, 183593, 181584, 167328, 183421, 184443, 183586, 184474, 155828, 166392, 183620, 184517, 183435, 183442, 183512, 166753, 184557, 156598, 172106, 183523, 166771, 166568, 184660, 182630, 184381, 184587, 183428, 157681, 182649, 167264, 168236}
ADMIN TABLE {158520, 158925, 158982, 158492, 158571, 158583, 158560, 158541, 158744}
*/
Ok, let's just assume that this is the data that you managed to load from your CSV file into a table, and that the ARRAY_TEXT column is a big CLOB.
Now, the data is in the table, but you need to put it into an ARRAY column.
alter table array_imp_demo add (array_data integer array);
The inconvenient part follows now: creating a new UPDATE command for each record to transform the CLOB data into a proper ARRAY() function call.
Be aware that this update command only works correctly because (OWNER_NAME, OBJECT_TYPE) is used as a key in this case.
do begin
declare vsqlmain nvarchar (50) :='UPDATE array_imp_demo set array_data = ARRAY ';
declare vsql nvarchar(5000);
declare cursor c_upd for
select OWNER_NAME, OBJECT_TYPE,
replace (replace (ARRAY_TEXT, '{', '(' ), '}', ')' ) array_data
from array_imp_demo;
for cur_row as c_upd do
vsql = :vsqlmain || cur_row.array_data;
vsql = :vsql || ' WHERE owner_name = ''' || cur_row.owner_name || '''';
vsql = :vsql || ' AND object_type = ''' || cur_row.object_type || '''';
exec :vsql;
end for;
end;
select * from array_imp_demo;
OWNER_NAME OBJECT_TYPE ARRAY_TEXT ARRAY_DATA
SYS TABLE {142540, 142631, 142262,... 142540, 142631, 142262,...
SYSTEM TABLE {154821, 146065, 146057,... 154821, 146065, 146057,...
_SYS_STATISTICS TABLE {151968, 146993, 147245,... 151968, 146993, 147245,...
_SYS_EPM TABLE {143391, 143427, 143354,... 143391, 143427, 143354,...
_SYS_REPO TABLE {145086, 151368, 171680,... 145086, 151368, 171680,...
After that you can drop the CLOB column.
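For instance, a minimal sketch against the demo table above (HANA's ALTER TABLE takes the dropped columns as a list in parentheses):
alter table array_imp_demo drop (array_text);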
Ok, that's about it from my side.
Have you tried using the CTL (control file) method to import the data? I hope that might help.
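For reference, a plain CSV import of the question's table would look something like this (a sketch; the file path is an assumption, and as discussed above it will not populate ARRAY columns directly):
IMPORT FROM CSV FILE '/tmp/table.csv' INTO "SCHEMA"."TABLE"
WITH RECORD DELIMITED BY '\n' FIELD DELIMITED BY ',';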

Oracle: String concatenation is too long

I have the below SQL as part of a view. In one of the schemas I am getting a "string concatenation is too long" error and am not able to execute the view.
Hence I tried TO_CLOB(), and now the view is not throwing the error, but it is not returning the result either; it keeps on running.
Please suggest....
SQL:
SELECT Iav.Item_Id Attr_Item_Id,
LISTAGG(La.Attribute_Name
||'|~|'
|| Lav.Attribute_Value
||' '
|| Lau.Attribute_Uom, '}~}') WITHIN GROUP (
ORDER BY ICA.DISP_SEQ,LA.ATTRIBUTE_NAME) AS ATTR
FROM Item_Attribute_Values Iav,
Loc_Attribute_Values Lav,
Loc_Attribute_Uoms Lau,
Loc_Attributes La,
(SELECT *
FROM Item_Classification Ic,
CATEGORY_ATTRIBUTES CA
WHERE IC.DEFAULT_CATEGORY='Y'
AND IC.TAXONOMY_TREE_ID =CA.TAXONOMY_TREE_ID
) ICA
WHERE IAV.ITEM_ID =ICA.ITEM_ID(+)
AND IAV.ATTRIBUTE_ID =ICA.ATTRIBUTE_ID(+)
AND Iav.Loc_Attribute_Id =La.Loc_Attribute_Id
AND La.Locale_Id =1
AND Iav.Loc_Attribute_Uom_Id =Lau.Loc_Attribute_Uom_Id(+)
AND Iav.Loc_Attribute_Value_Id=Lav.Loc_Attribute_Value_Id
GROUP BY Iav.Item_Id;
Error:
ORA-01489: result of string concatenation is too long
01489. 00000 - "result of string concatenation is too long"
*Cause: String concatenation result is more than the maximum size.
*Action: Make sure that the result is less than the maximum size.
You can use the COLLECT() function to aggregate the strings into a collection and then use a user-defined function to concatenate the strings:
Oracle Setup:
CREATE TYPE stringlist IS TABLE OF VARCHAR2(4000);
/
CREATE FUNCTION concat_List(
strings IN stringlist,
delim IN VARCHAR2 DEFAULT ','
) RETURN CLOB DETERMINISTIC
IS
value CLOB;
i PLS_INTEGER;
BEGIN
IF strings IS NULL THEN
RETURN NULL;
END IF;
value := EMPTY_CLOB();
IF strings IS NOT EMPTY THEN
i := strings.FIRST;
LOOP
IF i > strings.FIRST AND delim IS NOT NULL THEN
value := value || delim;
END IF;
value := value || strings(i);
EXIT WHEN i = strings.LAST;
i := strings.NEXT(i);
END LOOP;
END IF;
RETURN value;
END;
/
Query:
SELECT Iav.Item_Id AS Attr_Item_Id,
CONCAT_LIST(
CAST(
COLLECT(
La.Attribute_Name || '|~|' || Lav.Attribute_Value ||' '|| Lau.Attribute_Uom
ORDER BY ICA.DISP_SEQ,LA.ATTRIBUTE_NAME
)
AS stringlist
),
'}~}'
) AS ATTR
FROM your_table
GROUP BY iav.item_id;
LISTAGG is limited to 4000 characters unfortunately. So you may want to use another approach to concatenate the values.
Anyway ...
It is strange to see LISTAGG, which is a rather new feature, combined with error-prone SQL-1992 joins. I'd suggest you re-write this. Are the tables even properly joined? It looks strange that there seems to be no relation between Loc_Attributes and, say, Loc_Attribute_Values. Doesn't Loc_Attribute_Values have a Loc_Attribute_Id, so that an attribute value relates to an attribute? It would be hard to believe that there is no such relation.
Moreover: is it guaranteed that your classification subquery doesn't return more than one record per attribute?
Here is your query re-written:
select
iav.item_id as attr_item_id,
listagg(la.attribute_name || '|~|' || lav.attribute_value || ' ' || lau.attribute_uom,
'}~}') within group (order by ica.disp_seq, la.attribute_name) as attr
from item_attribute_values iav
join loc_attribute_values lav
on lav.loc_attribute_value_id = iav.loc_attribute_value_id
and lav.loc_attribute_id = iav.loc_attribute_id -- <== maybe?
join loc_attributes la
on la.loc_attribute_id = lav.loc_attribute_id
and la.loc_attribute_id = lav.loc_attribute_id -- <== maybe?
and la.locale_id = 1
left join loc_attribute_uoms lau
on lau.loc_attribute_uom_id = iav.loc_attribute_uom_id
and lau.loc_attribute_id = iav.loc_attribute_id -- <== maybe?
left join
(
-- aggregation needed to get no more than one sortkey per item attribute?
select ic.item_id, ca.attribute_id, min (ca.disp_seq) as disp_seq
from item_classification ic
join category_attributes ca on ca.taxonomy_tree_id = ic.taxonomy_tree_id
where ic.default_category = 'y'
group by ic.item_id, ca.attribute_id
) ica on ica.item_id = iav.item_id and ica.attribute_id = iav.attribute_id
group by iav.item_id;
Well, you get the idea; check your keys and alter your join criteria where necessary. Maybe this gets rid of duplicates, so LISTAGG has to concatenate less attributes, and maybe the result even stays within 4000 characters.
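As an aside, if you are on Oracle 12.2 or later and a truncated list would be acceptable, LISTAGG can be told not to raise ORA-01489 at all via its overflow clause; a minimal sketch, using the same your_table placeholder as above:
select iav.item_id as attr_item_id,
       listagg(la.attribute_name || '|~|' || lav.attribute_value || ' ' || lau.attribute_uom,
               '}~}' on overflow truncate '...' with count)
         within group (order by ica.disp_seq, la.attribute_name) as attr
from your_table
group by iav.item_id;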
XQuery approach.
Creating extra types or functions isn't necessary.
with test_tab
as (select object_name
from all_objects
where rownum < 1000)
, aggregate_to_xml as (select xmlagg(xmlelement(val, object_name)) xmls from test_tab)
select xmlcast(xmlquery('for $row at $idx in ./*/text() return if($idx=1) then $row else concat(",",$row)'
passing aggregate_to_xml.xmls returning content) as Clob) as list_in_lob
from aggregate_to_xml;
I guess you need to write a small function to concatenate the strings into a CLOB, because even when you cast the LISTAGG result with TO_CLOB() at the end, this might not work.
Here's a sample function that takes a SELECT statement (which MUST return only one string column!) and a separator, and returns the collected values as a CLOB:
CREATE OR REPLACE FUNCTION listAggCLob(p_stringSelect VARCHAR2
, p_separator VARCHAR2)
RETURN CLOB
AS
cur SYS_REFCURSOR;
s VARCHAR2(4000);
c CLOB;
i INTEGER;
BEGIN
dbms_lob.createtemporary(c, FALSE);
IF (p_stringSelect IS NOT NULL) THEN
OPEN cur FOR p_stringSelect;
LOOP
FETCH cur INTO s;
EXIT WHEN cur%NOTFOUND;
dbms_lob.append(c, s || p_separator);
END LOOP;
END IF;
i := length(c);
IF (i > 0) THEN
-- dbms_lob.substr would truncate results longer than 32767 characters,
-- so trim off the trailing separator and return the full CLOB instead
dbms_lob.trim(c, i - length(p_separator));
RETURN c;
ELSE
RETURN NULL;
END IF;
END;
This function can be used, for example, like this:
WITH cat AS (
SELECT DISTINCT t1.category
FROM lookup t1
)
SELECT cat.category
, listAggCLob('select t2.name from lookup t2 where t2.category = ''' || cat.category || '''', '|') allcategorynames
FROM cat;

Error using two cursors in PL/SQL code

I have two tables ERROR_DESCRIPTION and ERROR_COLUMN.
ERROR_DESCRIPTION has the below data:
"error processing column a_type"
"error processing column a_type"
"error processing column a_type"
ERROR_COLUMN has the below data:
"abc",123334,"jdjjd"
"jdjd",2344,"djjd"
"djjd",234,"kkfkf"
In the end my data should look like this:
error processing column a_type - "abc",123334,"jdjjd"
error processing column a_type - "jdjd",2344,"djjd"
and so on ...
"a_type" is a column name from the ERROR_COLUMN table.
I am trying to achieve this using cursors.
declare
cursor c_log is select * from ERROR_DESCRIPTION where error_data_log like'error%' ORDER BY error_data_log;
r_log ERROR_DESCRIPTION %ROWTYPE;
v_error varchar2(1000);
cursor c_dsc is select * from ERROR_COLUMN;
r_dsc ERROR_COLUMN%ROWTYPE;
begin
open c_log;
loop
fetch c_log into v_error;
open c_dsc ;
fetch c_dsc into r_dsc;
dbms_output.put_line( 'error is'||v_error||'-'||r_dsc.xyz);
close c_dsc ;
end loop;
close c_log;
end ;
I am not able to get the desired result.
r_dsc.xyz is a column defined for that record type.
Can anyone tell me how I can get the above result?
I would prefer not to use cursors when you can achieve the result with simple queries; you can get the result you mentioned with the below join using the substr function:
select d.val || substr(d.val,24) || c.val2 || c.val3
from ERROR_DESCRIPTION d
join ERROR_COLUMN c on substr(d.val,24)=c.val1
assuming your structure is ERROR_DESCRIPTION(val) and ERROR_COLUMN(val1,val2,val3), according to the sample data you provided.
EDIT (after comments and edit of the question): if you don't have a specific formula or pattern for the join, and you want to join the tables only based on the record number, then use rownum within a subquery:
select d.val || '-' || c.val1,c.val2,c.val3
from (select rownum rn,val from ERROR_DESCRIPTION) d
join (select rownum rn,val1,val2,val3 from ERROR_COLUMN) c
on d.rn=c.rn
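With the sample data above, this should return rows like:
error processing column a_type-"abc" | 123334 | "jdjjd"
error processing column a_type-"jdjd" | 2344 | "djjd"
error processing column a_type-"djjd" | 234 | "kkfkf"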