performance issue when inserting large records - sql

I am parsing a string into comma-separated tokens and inserting them into a global temporary table. Performance is good when inserting around 5k records, but it degrades badly once the input reaches around 40k+ records. The global temporary table has only one column. I thought using BULK COLLECT and FORALL would improve performance, but so far that has not been the case. How can I rewrite the insert logic below, or is there any other way to handle inserting this many records? Any help would be highly appreciated. I tested the insert query on its own, and it takes a long time to process when the data size is large.
-- emp_refno is the large comma-separated input string (CLOB)
CREATE OR REPLACE PROCEDURE employee (
    emp_refno   IN  CLOB,
    p_resultset OUT SYS_REFCURSOR  -- used below but missing from the original signature
) AS
    c_limit PLS_INTEGER := 1000;

    CURSOR token_cur IS
        WITH inputs(str) AS (
            SELECT to_clob(emp_refno)
            FROM   dual
        ),
        prep(s, n, token, st_pos, end_pos) AS (
            SELECT ',' || str || ',', -1, NULL, NULL, 1
            FROM   inputs
            UNION ALL
            SELECT s, n + 1, substr(s, st_pos, end_pos - st_pos),
                   end_pos + 1, instr(s, ',', 1, n + 3)
            FROM   prep
            WHERE  end_pos != 0
        )
        SELECT token
        FROM   prep
        WHERE  n > 0;

    TYPE token_t IS TABLE OF CLOB;
    rec_token_t token_t;
BEGIN
    OPEN token_cur;
    LOOP
        FETCH token_cur BULK COLLECT INTO rec_token_t LIMIT c_limit;
        IF rec_token_t.count > 0 THEN
            FORALL rec IN rec_token_t.first .. rec_token_t.last
                INSERT INTO globaltemp_emp
                VALUES (rec_token_t(rec));
            COMMIT;
        END IF;
        EXIT WHEN rec_token_t.count = 0;
    END LOOP;
    CLOSE token_cur;

    OPEN p_resultset FOR
        SELECT e.empname,
               e.empaddress,
               f.department
        FROM   employee e
        JOIN   department f
        ON     e.emp_id = f.emp_id  -- was t.emp_id; no alias t exists
        AND    e.emp_refno IN (SELECT emp_refno
                               FROM   globaltemp_emp);  -- using GTT in subquery
END;

I have adapted a function which gives better performance. For 90k records, it returns in 13 seconds. Also, reduce c_limit to 250.
You can adapt the below:
CREATE OR REPLACE FUNCTION pipe_clob (
    p_clob        IN CLOB,
    p_max_lengthb IN INTEGER DEFAULT 4000,
    p_rec_delim   IN VARCHAR2 DEFAULT '
' )
    RETURN sys.odcivarchar2list pipelined authid current_user
AS
/*
   Break CLOB into VARCHAR2 sized bites.
   Reduce p_max_lengthb if you need to expand the VARCHAR2
   in later processing.
   Last record delimiter in each bite is not returned,
   but if it is a newline and the output is spooled
   the newline will come back in the spooled output.
   Note: this cannot work if the CLOB contains more than
   <p_max_lengthb> consecutive bytes without a record delimiter.
*/
    l_amount           INTEGER;
    l_offset           INTEGER;
    l_buffer           VARCHAR2(32767 byte);
    l_out              VARCHAR2(32767 byte);
    l_buff_lengthb     INTEGER;
    l_occurence        INTEGER;
    l_rec_delim_length INTEGER := length(p_rec_delim);
    l_max_length       INTEGER;
    l_prev_length      INTEGER;
BEGIN
    IF p_max_lengthb > 4000 THEN
        raise_application_error(-20001, 'Maximum record length (p_max_lengthb) cannot be greater than 4000.');
    ELSIF p_max_lengthb < 10 THEN
        raise_application_error(-20002, 'Maximum record length (p_max_lengthb) cannot be less than 10.');
    END IF;
    IF p_rec_delim IS NULL THEN
        raise_application_error(-20003, 'Record delimiter (p_rec_delim) cannot be null.');
    END IF;
    /* This version is limited to 4000 byte output, so I can afford to ask for 4001
       in case the record is exactly 4000 bytes long. */
    l_max_length  := dbms_lob.instr(p_clob, p_rec_delim, 1, 1) - 1;
    l_prev_length := 0;
    l_amount      := l_max_length + l_rec_delim_length;
    l_offset      := 1;
    WHILE (l_amount = l_max_length + l_rec_delim_length AND l_amount > 0)
    LOOP
        BEGIN
            dbms_lob.read(p_clob, l_amount, l_offset, l_buffer);
        EXCEPTION
            WHEN no_data_found THEN
                l_amount := 0;
        END;
        IF l_amount = 0 THEN
            EXIT;
        ELSIF lengthb(l_buffer) <= l_max_length THEN
            PIPE ROW (rtrim(l_buffer, p_rec_delim));
            EXIT;
        END IF;
        l_buff_lengthb := l_max_length + l_rec_delim_length;
        l_occurence    := 0;
        WHILE l_buff_lengthb > l_max_length
        LOOP
            l_occurence    := l_occurence + 1;
            l_buff_lengthb := instrb(l_buffer, p_rec_delim, -1, l_occurence) - 1;
        END LOOP;
        IF l_buff_lengthb < 0 THEN
            IF l_amount = l_max_length + l_rec_delim_length THEN
                raise_application_error(-20004,
                    'Input clob at offset ' || l_offset
                    || ' for lengthb ' || l_max_length
                    || ' has no record delimiter');
            END IF;
        END IF;
        l_out := substrb(l_buffer, 1, l_buff_lengthb);
        PIPE ROW (l_out);
        l_prev_length := dbms_lob.instr(p_clob, p_rec_delim, l_offset, 1) - 1;
        l_offset      := l_offset + nvl(length(l_out), 0) + l_rec_delim_length;
        l_max_length  := dbms_lob.instr(p_clob, p_rec_delim, l_offset, 1) - 1;
        l_max_length  := l_max_length - l_prev_length;
        l_amount      := l_max_length + l_rec_delim_length;
    END LOOP;
    RETURN;
END;
and then use it like the below in the cursor in your procedure:
CURSOR token_cur IS
    SELECT * FROM table(pipe_clob(emp_refno || ',', 10, ','));
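Wired into the original procedure, the fetch loop would look something like this sketch. Note the collection can now be a table of VARCHAR2 rather than CLOB, since the pipelined function returns sys.odcivarchar2list (emp_refno and globaltemp_emp as in the question; the literal is just a stand-in so the block runs on its own):
DECLARE
    emp_refno   CLOB := 'ref1,ref2,ref3';  -- stand-in for the procedure parameter
    c_limit     PLS_INTEGER := 250;
    CURSOR token_cur IS
        SELECT * FROM table(pipe_clob(emp_refno || ',', 10, ','));
    TYPE token_t IS TABLE OF VARCHAR2(4000);
    rec_token_t token_t;
BEGIN
    OPEN token_cur;
    LOOP
        FETCH token_cur BULK COLLECT INTO rec_token_t LIMIT c_limit;
        EXIT WHEN rec_token_t.count = 0;
        FORALL rec IN rec_token_t.first .. rec_token_t.last
            INSERT INTO globaltemp_emp VALUES (rec_token_t(rec));
    END LOOP;
    CLOSE token_cur;
    COMMIT;
END;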

Three quick suggestions:
Perform the commit for around 1000 records (or in batches) rather than for each row.
Replace IN with EXISTS for the ref cursor query (see the sketch below).
Index globaltemp_emp.emp_refno if it isn't indexed already.
Additionally, I recommend running an explain plan for each DML operation to check for any odd behaviour.
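A sketch of suggestions 2 and 3 against the query from the question (the index name is hypothetical):
CREATE INDEX globaltemp_emp_refno_idx ON globaltemp_emp (emp_refno);

SELECT e.empname,
       e.empaddress,
       f.department
FROM   employee e
JOIN   department f
ON     e.emp_id = f.emp_id
WHERE  EXISTS (SELECT 1
               FROM   globaltemp_emp g
               WHERE  g.emp_refno = e.emp_refno);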

The user uploads a text file, and I parse that text file as a comma-separated string and pass it to the Oracle DB.
You are doing a bunch of work to turn that file into a string and then another bunch of work to convert that string into a table. As many people have observed before me, the best performance comes from not doing work we don't have to do.
In this case this means you should load the file's contents directly into the database. We can do this with an external table. This is a mechanism which allows us to query data from a file on the server using SQL. It would look something like this:
create table emp_refno_load (
    emp_refno varchar2(24)
)
organization external (
    type oracle_loader
    default directory file_upload_dir
    access parameters (
        records delimited by newline
        fields (emp_refno char(24))  -- field name must match the column name
    )
    location ('some_file.txt')
);
Then you can discard your stored procedure and temporary table and re-write your query to something like this:
SELECT e.empname,
e.empaddress,
f.department
FROM emp_refno_load l
join employee e ON l.emp_refno = e.emp_refno
join department f ON e.emp_id = f.emp_id
The one snag with external tables is that they require access to an OS directory (file_upload_dir in my example above), and some database security policies are weird about that. However, the performance benefits and the simplicity of the approach should carry the day.
Find out more.
An external table is undoubtedly the most performant approach (until you hit millions of rows, and then you need SQL*Loader).
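For reference, the SQL*Loader route uses a control file; a minimal sketch reusing the hypothetical names from the example above:
-- emp_refno_load.ctl
LOAD DATA
INFILE 'some_file.txt'
APPEND
INTO TABLE emp_refno_load
FIELDS TERMINATED BY ','
(emp_refno CHAR(24))
and run with something like sqlldr userid=your_user control=emp_refno_load.ctl.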

Related

Can fields be removed from an ongoing cursor in PL/SQL Oracle?

I'm trying to create a massive insert from a 'temporary file', so I'm using a cursor to modify some values. I added a field based on a 'row_number()' column to get a sequential number for each record. That number & my 'lot number' together constitute the new lot value (e.g. for lot 'Alpha', I would have 'Alpha01', 'Alpha02', 'Alpha03', &c.).
But I don't know how to remove that extra column after I've made the changes, so that I don't get an issue with the insert process (my cursor now has more columns than the original table).
So the current code reads:
SET SERVEROUTPUT ON;
DECLARE
    -- Array of lot numbers & how I want to name them --
    TYPE VARR_LOTN IS TABLE OF VARCHAR(8);
    VAR_LOTN VARR_LOTN;
    TOTAL    INTEGER;
    -- Application-relevant variables --
    MAX_VAL  NUMBER := &&Maximum_Values.;
    VAR_MMCU VARCHAR(12) := '&&Branch.';
    VAR_ITM  NUMBER := '&&Item.';
    VAR_DATE NUMBER := TO_CHAR(SYSDATE, 'YYYYDDD') - 1900000;
    VAR_TIME NUMBER := TO_CHAR(SYSDATE, 'HH24MISS');
    -- This section is the cursor I'm creating --
    -- Note the ROW_NUMBER() analytic function, which I want to use as a counter --
    CURSOR VAR_LOTN_C IS
        SELECT LPAD(ROW_NUMBER() OVER (ORDER BY IOITM), 2, 0) IOID, T1.*
        FROM   TESTDTA.F4108 T1
        WHERE  IOITM = VAR_ITM
        AND    IOLOTS = ' '
        AND    TRIM(IOMCU) = VAR_MMCU
        AND    IOMMEJ >= TO_CHAR(SYSDATE + 365, 'YYYYDDD') - 1900000
        AND    ROWNUM <= MAX_VAL;
    -- I'm having some trouble understanding how the %ROWTYPE attribute works, & which others are available --
    VARC_LOTN VAR_LOTN_C%ROWTYPE;
BEGIN
    VAR_LOTN := VARR_LOTN('Alpha', 'Beta', 'Gamma', 'Delta');
    TOTAL := VAR_LOTN.COUNT;
    FOR T1 IN 1 .. TOTAL
    LOOP
        -- I'm fetching the cursor into the "variable" --
        OPEN VAR_LOTN_C;
        FETCH VAR_LOTN_C INTO VARC_LOTN;
        CLOSE VAR_LOTN_C;
        -- This is why I added column IOID, to have records as 'Alpha01', 'Alpha02', &c --
        VARC_LOTN.IOLOTN := VAR_LOTN(T1) || VARC_LOTN.IOID;  -- VAR_LOTN(T1), not bare T1, which is only the loop index
        -- Other relevant variable changes... --
        VARC_LOTN.IODOCO := 0;
        VARC_LOTN.IODCTO := NULL;
        -- UA0
        VARC_LOTN.IOUA01 := VAR_DATE;
        VARC_LOTN.IOUA02 := 0;
        VARC_LOTN.IOUA03 := 0;
        VARC_LOTN.IOUA04 := VAR_DATE;
        VARC_LOTN.IOUA05 := 0;
        VARC_LOTN.IOUA06 := VAR_DATE;
        -- UB0
        VARC_LOTN.IOUB01 := 0;
        VARC_LOTN.IOUB02 := 0;
        VARC_LOTN.IOUB03 := 0;
        VARC_LOTN.IOUB04 := 0;
        VARC_LOTN.IOUB05 := 0;
        VARC_LOTN.IOUB06 := 0;
        -- AUDIT
        VARC_LOTN.IOUSER := 'Me';
        VARC_LOTN.IOPID  := 'SQL';
        VARC_LOTN.IOUPMJ := VAR_DATE;
        VARC_LOTN.IOTDAY := VAR_TIME;
        -- ***In here is where I need to get rid of column IOID, so I can insert the batch I just created into the table. I cannot insert it now because this column does not exist in table F4108, I created it only for my own numbering.*** --
        {{{ALTER TABLE VARC_LOTN DROP COLUMN IOID; or something like that...}}}
        -- Here I insert the final fetch --
        INSERT INTO TESTDTA.F4108
        VALUES VARC_LOTN;
    END LOOP;
END;
I can't find out whether it's possible to make those changes, & the other options that come to mind are not really user-friendly (entering all the columns one by one...).
Do you know if that is feasible?
Thanks!!!
No, columns cannot be removed from a cursor at runtime.
But I don't think you need the IOID column in the cursor at all.
Use the following cursor query:
SELECT T1.*  -- removed ROW_NUMBER() from here
FROM   TESTDTA.F4108 T1
WHERE  IOITM = VAR_ITM AND IOLOTS = ' '
AND    TRIM(IOMCU) = VAR_MMCU AND IOMMEJ >= TO_CHAR(SYSDATE + 365, 'YYYYDDD') - 1900000
AND    ROWNUM <= MAX_VAL
ORDER BY IOITM;  -- added this ORDER BY clause
You need to declare one local variable:
LOCAL_VARIABLE NUMBER := 0;
And where IOID is used, you can replace it with:
LOCAL_VARIABLE := LOCAL_VARIABLE + 1;
VARC_LOTN.IOLOTN := VAR_LOTN(T1) || LPAD(LOCAL_VARIABLE, 2, 0);
It will achieve the same result as your code, and the cursor will be free of the extra column.
Cheers!!

Oracle PL SQL: Comparing ref cursor results returned by two stored procs

I was given a stored proc which generates an open cursor which is passed as output to a reporting tool. I re-wrote this stored proc to improve performance. What I'd like to do is to show that the two result sets are the same for a given set of input parameters.
Something that is the equivalent of:
select * from CURSOR_NEW
minus
select * from CURSOR_OLD
union all
select * from CURSOR_OLD
minus
select * from CURSOR_NEW
Each cursor returns several dozen columns from a large subset of tables. Each row has an id value, and a long list of other column values for that id. I would want to check:
Both cursors are returning the same set of ids (I already checked this)
Both cursors have the same list of values for each id they have in common
If it was just one or two columns, I could concatenate them and find a hash and then sum it up over the cursor. Or another way might be to create a parent program that inserted the cursor results into a global temp table and compared the results. But since it's several dozen columns I'm trying to find a less brute force approach to doing the comparison.
Also it would be nice if the solution was scalable for other situations that involved different cursors, so it wouldn't have to be manually re-written each time, since this is a situation I'm running into more often.
I figured out a way to do this. It was a lot more complicated than I expected. I ended up using some DBMS_SQL procedures that allow converting REFCURSORs to defined cursors. Oracle has documentation on it here:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/dynamic.htm#LNPLS00001
After that I concatenated the row values into a string and printed the hash. For bigger cursors, I will change concat_col_vals to use a CLOB to prevent it from overflowing.
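As an aside (not part of the original code): one option for hashing the CLOB variant is DBMS_CRYPTO.HASH, which accepts a CLOB directly and requires EXECUTE on DBMS_CRYPTO; a minimal sketch:
declare
    concat_col_vals clob := to_clob('~col1~col2~col3');  -- stand-in for the accumulated row string
    h raw(20);
begin
    h := dbms_crypto.hash(concat_col_vals, dbms_crypto.hash_sh1);  -- 20-byte SHA-1 of the whole row
    dbms_output.put_line(rawtohex(h));
end;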
p_testCursors returns a simple refcursor for example purposes.
declare
    cx_1            sys_refcursor;
    c               NUMBER;
    desctab         DBMS_SQL.DESC_TAB;
    colcnt          NUMBER;
    stringvar       VARCHAR2(4000);
    numvar          NUMBER;
    datevar         DATE;
    concat_col_vals varchar2(4000);
    col_hash        number;
    h               raw(32767);
    n               number;
BEGIN
    p_testCursors(cx_1);
    c := DBMS_SQL.TO_CURSOR_NUMBER(cx_1);
    DBMS_SQL.DESCRIBE_COLUMNS(c, colcnt, desctab);
    -- Define columns:
    FOR i IN 1 .. colcnt LOOP
        IF desctab(i).col_type = 2 THEN
            DBMS_SQL.DEFINE_COLUMN(c, i, numvar);
        ELSIF desctab(i).col_type = 12 THEN
            DBMS_SQL.DEFINE_COLUMN(c, i, datevar);
        ELSE
            DBMS_SQL.DEFINE_COLUMN(c, i, stringvar, 4000);
        END IF;
    END LOOP;
    -- Fetch rows with the DBMS_SQL package:
    WHILE DBMS_SQL.FETCH_ROWS(c) > 0 LOOP
        concat_col_vals := '~';
        FOR i IN 1 .. colcnt LOOP
            IF (desctab(i).col_type = 1) THEN
                DBMS_SQL.COLUMN_VALUE(c, i, stringvar);
                concat_col_vals := concat_col_vals || '~' || stringvar;
            ELSIF (desctab(i).col_type = 2) THEN
                DBMS_SQL.COLUMN_VALUE(c, i, numvar);
                concat_col_vals := concat_col_vals || '~' || to_char(numvar);
            ELSIF (desctab(i).col_type = 12) THEN
                DBMS_SQL.COLUMN_VALUE(c, i, datevar);
                concat_col_vals := concat_col_vals || '~' || to_char(datevar);
            END IF;
        END LOOP;
        DBMS_OUTPUT.PUT_LINE(concat_col_vals);
        col_hash := DBMS_UTILITY.GET_SQL_HASH(concat_col_vals, h, n);
        DBMS_OUTPUT.PUT_LINE('Return Value: ' || TO_CHAR(col_hash));
        DBMS_OUTPUT.PUT_LINE('Hash: ' || h);
    END LOOP;
    DBMS_SQL.CLOSE_CURSOR(c);
END;
/
This is not an easy task in Oracle.
You can find a very good article on the dba-oracle site:
Sql patterns symmetric diff
and Convert set to join sql parameter
If you need this often, you can:
add a "hash column" and keep it filled on every insert using a trigger (a sketch follows below), or
for each table in the cursor output get a unique value (create a unique index) and compare only this column with an anti-join
and you can find other possibilities in the article.
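A minimal sketch of the trigger idea, assuming a hypothetical table t with business columns col1..col3 and an added row_hash column:
create or replace trigger t_row_hash_trg
before insert or update on t
for each row
begin
    -- keep a hash of the business columns so result sets can later be compared by hash alone
    :new.row_hash := ora_hash(:new.col1 || '~' || :new.col2 || '~' || :new.col3);
end;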

how to sort the contents of CLOB field

I have a table where some fields are CLOB type, and the content in the CLOB is delimited by some separator such as '|', so the content in the field usually looks like this: name2|name1|name3... The length of the content is more than 40000 characters. Is there any way to sort the content ascending? I want the content to look like this: name1|name2|name3...
Can anybody help me?
If it's even remotely possible, I'd strongly suggest you change your data model - add a details table for the names. This will save you a lot of pain in the future.
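A sketch of such a model, with hypothetical table and column names:
create table my_table_names (
    my_table_id number not null,   -- FK to the table that currently holds the CLOB
    position    number not null,   -- preserves the original order if it matters
    name        varchar2(4000)
);
Sorting then becomes a plain ORDER BY name.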
Anyhow, if you absolutely need to store a pipe-separated list of names in your CLOB field, I'd suggest this approach:
break the CLOB into separate rows (using a pipelined function)
sort the rows
aggregate the rows into a new CLOB
A (somewhat naive and untested) implementation of this approach:
create type stringtabletype as table of varchar2(4000);

create or replace function split_CLOB(p_Value     in CLOB,
                                      p_Separator in varchar2 default '|')
    return stringtabletype
    pipelined as
    l_Offset number default 1;
    l_Str    varchar2(4000);
    idx      number;
begin
    idx := dbms_lob.instr(lob_loc => p_Value,
                          pattern => p_Separator,
                          offset  => l_Offset);
    while (idx > 0)
    loop
        l_Str := dbms_lob.substr(p_Value, idx - l_Offset, l_Offset);
        pipe row(l_Str);
        l_Offset := idx + 1;
        idx := dbms_lob.instr(p_Value, p_Separator, l_Offset);
    end loop;
    -- pipe the remainder of the string
    l_Str := dbms_lob.substr(p_Value,
                             dbms_lob.getlength(p_Value) - l_Offset + 1,
                             l_Offset);
    pipe row(l_Str);
    return;
end;

create or replace function sort_stringtabletype(p_Values in stringtabletype)
    return stringtabletype as
    l_Result stringtabletype;
begin
    select column_value
    bulk collect into l_Result
    from table(p_Values)
    order by column_value;
    return l_Result;
end;

create or replace function stringtabletype_to_CLOB(p_Values    in stringtabletype,
                                                   p_Separator in varchar2 default '|')
    return CLOB as
    l_Result CLOB;
begin
    dbms_lob.createtemporary(l_Result, false);
    for i in 1 .. p_Values.count - 1
    loop
        dbms_lob.writeappend(l_Result, length(p_Values(i)), p_Values(i));
        dbms_lob.writeappend(l_Result, length(p_Separator), p_Separator);
    end loop;
    dbms_lob.writeappend(l_Result,
                         length(p_Values(p_Values.count)),
                         p_Values(p_Values.count));
    return l_Result;
end;
Example usage:
select stringtabletype_to_CLOB(
           sort_stringtabletype(
               split_CLOB('def|abc|ghic', '|')
           )
       )
from dual;
You could then use an UPDATE statement like:
update my_table
set clob_field = stringtabletype_to_CLOB(
        sort_stringtabletype(
            split_CLOB(clob_field, '|')
        )
    );

How to remove more than one space in Oracle

I have an Oracle table which contains data like 'Shiv------Shukla' (consider '-' as a space).
Now I need to write a program which keeps just one space and removes all the other spaces.
Here is the program I've written, but it is not giving me the expected result.
DECLARE
    MAX_LIMIT VARCHAR2(50) := NULL;
    REQ       VARCHAR2(20) := NULL;
    CURSOR C1 IS
        SELECT *
        FROM   ASSET_Y;
BEGIN
    FOR REC IN C1
    LOOP
        MAX_LIMIT := LENGTH(REC.NAME) - LENGTH(REPLACE(REC.NAME, '-'));
        FOR I IN 1 .. MAX_LIMIT
        LOOP
            UPDATE ASSET_Y
            SET    NAME = REPLACE(REC.NAME, '--', '-')
            WHERE  REC.SNO = ASSET_Y.SNO;
            COMMIT;
            SELECT ASSET_Y.NAME INTO REQ FROM ASSET_Y WHERE ASSET_Y.SNO = REC.SNO;
            DBMS_OUTPUT.PUT_LINE(REQ);
        END LOOP;
    END LOOP;
    COMMIT;
END;
/
My table is:
SQL> select * from asset_y;

       SNO NAME                 FL
---------- -------------------- --
         1 Shiv------Shukla     y
         2 Jinesh               y
After running the procedure I'm getting the following output:
Shiv---Shukla
Shiv---Shukla
Shiv---Shukla
Shiv---Shukla
Shiv---Shukla
Shiv---Shukla
PL/SQL procedure successfully completed.
Since regexp_replace is not available in Oracle 9i, maybe you can use the owa_pattern routines for simple regex replaces:
owa_pattern.change(fStr, '\s+', ' ', 'g');
More info about the owa_pattern package can be found here.
Bear in mind that '\s' will match tabs and newlines as well.
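In context, a minimal block might look like this (a sketch; fStr is just a local variable):
declare
    fStr varchar2(200) := 'Shiv      Shukla';
begin
    owa_pattern.change(fStr, '\s+', ' ', 'g');  -- collapse every run of whitespace to one space
    dbms_output.put_line(fStr);                 -- Shiv Shukla
end;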
With Oracle 9 you could write your own function:
CREATE FUNCTION remove_multi_spaces( in_value IN VARCHAR2 )
    RETURN VARCHAR2
AS
    v_result VARCHAR2(32767);
BEGIN
    IF( in_value IS NOT NULL ) THEN
        FOR i IN 1 .. ( LENGTH(in_value) - 1 ) LOOP
            -- append the character only if it does not start a run of two spaces
            IF( SUBSTR( in_value, i, 2 ) <> '  ' ) THEN
                v_result := v_result || SUBSTR( in_value, i, 1 );
            END IF;
        END LOOP;
        v_result := v_result || SUBSTR( in_value, -1 );
    END IF;
    RETURN v_result;
END;
and call it in a single update-statement:
UPDATE asset_y
SET    name = remove_multi_spaces( name );
BTW: With Oracle 10 you could use REGEXP_REPLACE.
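For example, something like:
UPDATE asset_y
SET    name = REGEXP_REPLACE( name, ' {2,}', ' ' );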
Your problem is this part:
SET NAME=REPLACE(REC.NAME,'--','-')
However many times you do that within the inner loop, it starts with the same value of REC.NAME as before. Changing it to this would fix it:
SET NAME=REPLACE(NAME,'--','-')
However, it is a pretty inefficient way to do this job if the table is large. You could instead do this:
BEGIN
    LOOP
        UPDATE ASSET_Y
        SET    NAME = REPLACE(NAME, '--', '-')
        WHERE  NAME LIKE '%--%';
        EXIT WHEN SQL%ROWCOUNT = 0;
    END LOOP;
END;
/
Another way:
CREATE OR REPLACE
FUNCTION remove_multi_spaces( in_value IN VARCHAR2 )
    RETURN VARCHAR2 IS
    v_result VARCHAR2(32767) := in_value;
BEGIN
    LOOP
        EXIT WHEN INSTR(v_result, '  ') = 0;       -- no double spaces left
        v_result := REPLACE(v_result, '  ', ' ');  -- collapse double spaces to single
    END LOOP;
    RETURN v_result;
END remove_multi_spaces;
Ack, loops! No need to loop for this.
This will work in T-SQL...unfortunately I have no PL/SQL environment to write this in. PL/SQL will have equivalents to everything used here (I think substr instead of substring and || instead of +):
declare @name varchar(200)
set @name = 'firstword secondword'
select left(@name,(patindex('% %',@name)-1)) + ' ' + ltrim(substring(@name,(patindex('% %',@name)+1),len(@name)))
You'll have to retool it to work for Oracle, and you'll need to replace any reference to @name with asset_y.name:
select left(asset_y.name,(patindex('% %',asset_y.name)-1)) || ' ' || ltrim(substring(asset_y.name,(patindex('% %',asset_y.name)+1),len(asset_y.name)))
Sorry if it won't run as is; as I mentioned, I lack an Oracle install here to confirm...
Just to add...I normally turn the query above into a function named formatname and call it as select formatname(asset_y.name) from... This allows me to include some form of error handling. The query will fail if patindex('% %',asset_y.name) returns a null...meaning there is no space. You could do the same in a select statement using a case, I guess:
select case when patindex('% %',asset_y.name) > 0 then
            left(asset_y.name,(patindex('% %',asset_y.name)-1)) || ' ' || ltrim(substring(asset_y.name,(patindex('% %',asset_y.name)+1),len(asset_y.name)))
       else asset_y.name
       end
from...

How to create a view that explodes a csv field into multiple rows?

I have a table structured as:
create table a (
a bigint primary key,
csv varchar(255)
)
I would like to be able to query a view ("b") such that:
select * from b;
Yields something like:
a | b
------
1 | A
1 | B
1 | C
For the case where the initial table has one row of data (1, 'A,B,C').
Is this possible with a postgres view?
In Postgres 8.4 (and I believe 8.3 as well), regexp_split_to_table is available. That would work; however, I needed something for 8.1 as well.
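For 8.3/8.4, the whole view can be this simple (a sketch):
create view b as
select a.a, regexp_split_to_table(a.csv, ',') as b
from a;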
This seems to work ok:
create or replace function split_xmcuser_groups_to_tuples() RETURNS SETOF RECORD AS $$
DECLARE
    r        a%rowtype;
    strLen   integer;
    curIdx   integer;
    commaCnt integer;
    curCSV   varchar;
BEGIN
    FOR r IN SELECT * FROM a
    LOOP
        -- count the commas in this row's csv (field count = commaCnt, counting from 1)
        strLen   := char_length(r.csv);
        curIdx   := 1;
        commaCnt := 1;  -- reset per row (the original initialised this once, outside the loop)
        WHILE curIdx <= strLen LOOP
            IF substr(r.csv, curIdx, 1) = ',' THEN
                commaCnt := commaCnt + 1;
            END IF;
            curIdx := curIdx + 1;  -- increment after the test so position 1 is checked too
        END LOOP;
        -- emit one tuple per non-empty field
        curIdx := 1;
        WHILE curIdx <= commaCnt LOOP
            curCSV := split_part(r.csv, ',', curIdx);
            IF curCSV != '' THEN
                RETURN QUERY SELECT r.a, curCSV;
            END IF;
            curIdx := curIdx + 1;
        END LOOP;
    END LOOP;
    RETURN;
END
$$ LANGUAGE 'plpgsql';
(and yes, I know about the performance implications and reasons not to do this)
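Since the function returns SETOF RECORD, it has to be called with a column definition list; the view from the question could then be something like (a sketch):
create view b as
select * from split_xmcuser_groups_to_tuples() as t(a bigint, b varchar);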
I would say that this should be handled in application code if possible. Since it is a CSV field, I'm assuming that the number of entries is small, say <1000 per database row, so the memory and CPU costs wouldn't be prohibitive to split on commas and iterate as needed.
Is there a compelling reason this has to be done in Postgres instead of the application? If so, perhaps you could write a PL/pgSQL procedure to fill a temporary table with the results of splitting each row. Here's an example of comma splitting: http://archives.postgresql.org/pgsql-novice/2004-04/msg00117.php