Identifying changes to data over time - SQL

Using Oracle Database 11.2.
Problem: Compare data from two sources and show only the differences.
I'm looking for some really slick solution to automate this comparison for hundreds of tables, each with hundreds of columns, that will work within the context of a query in a report developed in Crystal Reports. And, yes, I have considered that I took a wrong turn somewhere (Not the Crystal Reports part, though. I'm stuck with that.) and everything in this description after that point is meaningless.
Set aside thoughts about query or report performance. I intend to force filters to limit the amount of data that could be processed in a single request. What I'm asking about here is how to make this generic. In other words, I don't want to list any specific columns in my query code except, maybe, to distinguish between known grouping or lookup columns -- updated_by, updated_date, etc. I want to have queries that automatically gather those names for me.
For the sake of simplicity, let's say I want to compare data, based on filter criteria, from adjacent rows within a grouping in a table. Here is simplified example input data:
with source_data as (
select 'a' grp
, 'b' b
, 'c' c
, date '2022-12-01' record_date
, 'joe' updated_by
from dual
union all
select 'a'
, 'b'
, 'd'
, date '2022-12-02'
, 'sally' updated_by
from dual
union all
select 'a'
, 'a'
, 'd'
, date '2022-12-04'
, 'joe' updated_by
from dual
union all
select 'z' grp
, 'b' b
, 'c' c
, date '2022-12-01'
, 'joe' updated_by
from dual
union all
select 'z'
, 'e'
, 'c'
, date '2022-12-08'
, 'joe' updated_by
from dual
union all
select 'z'
, 'f'
, 'c'
, date '2022-12-09'
, 'sally' updated_by
from dual
)
GRP  B  C  RECORD_DATE          UPDATED_BY
a    b  c  2022-12-01 00:00:00  joe
a    b  d  2022-12-02 00:00:00  sally
a    a  d  2022-12-04 00:00:00  joe
z    b  c  2022-12-01 00:00:00  joe
z    e  c  2022-12-08 00:00:00  joe
z    f  c  2022-12-09 00:00:00  sally
The need is to see what changes were made by people in certain categories. For this example, let's say Sally is a member of that group and Joe is not. So, the only changes I care about are on rows 2 and 6. But I need to compare each to the previous row, so...
,
changed as (
select sd.*
from source_data sd
where updated_by = 'sally'
),
changes as (
select 'current' as status
, c.*
from changed c
union all
select 'previous'
, sd.grp
, sd.b
, sd.c
, c.record_date
, c.updated_by
from source_data sd
inner join changed c on c.grp = sd.grp
and sd.record_date = (select max(record_date) from source_data where grp = c.grp and record_date < c.record_date)
)
Output from this trivial example seems simple enough. But when I have hundreds of rows by hundreds of columns to compare, it's not so easy to identify the change.
I have many tables to compare that have the same issue. Many of the tables have hundreds of columns. Usually, the difference is in only one or a few of the columns.
This will be done in a report. I don't have access to create functions or stored procedures, so I doubt I can use dynamic SQL in any way. This likely has constraints similar to developing a view.
I am NOT using PL/SQL. (Kinda tired of nearly every Oracle question related to my searches on SO having some relationship to PL/SQL, but no way to filter those out.)
I was thinking that in order to compare the data I'll first want to unpivot it to get a column/value pair on a row...
(Building on the answer to this question: ORACLE unpivot columns to rows)
, unpivot as (
Select *
From (
Select grp
, status
, record_date
, updated_by
, Case When C.lvl = 1 Then 'B'
When C.lvl = 2 Then 'C'
End col
, Case When C.lvl = 1 Then coalesce(B, '<null>')
When C.lvl = 2 Then coalesce(C, '<null>')
End val
From changes
cross join (
select level lvl
from dual
connect by level <= 2
) c
)
where val is not null
order by 1, 3, 2 desc
)
(Yes, for non-trivial data I'll need to cast the data going into val to something more generic, like a string.)
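For instance, a minimal sketch of that cast with made-up column names (num_col, date_col are just for illustration), converting every branch to a string with to_char() so mixed data types can share the single val column:

with t as (
  select 42 num_col, date '2022-12-01' date_col from dual
)
select case when c.lvl = 1 then 'NUM_COL'
            when c.lvl = 2 then 'DATE_COL'
       end col
     , case when c.lvl = 1 then coalesce(to_char(num_col), '<null>')
            when c.lvl = 2 then coalesce(to_char(date_col, 'YYYY-MM-DD HH24:MI:SS'), '<null>')
       end val
from t
cross join (select level lvl from dual connect by level <= 2) c;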
But how do I programmatically determine the number of columns, the column order, and generate the column names for both the value of col and for the reference for the CASE statement in val?
I suppose I could use something like this as part of the solution:
SELECT COLUMN_NAME
, COLUMN_ID
FROM ALL_tab_columns
WHERE OWNER = 'MY_OWNER_NAME'
AND TABLE_NAME = 'SOURCE_TABLE'
ORDER BY COLUMN_ID
But I'm not sure how to dovetail that into the solution in a meaningful way without involving dynamic SQL, which I'm pretty sure I can't do. And it would probably require referencing columns based on ordinal position, which doesn't appear to be possible in SQL. Of course, if that would work I could use a similar query to figure out how to handle data types for the val column.
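I suppose I could even let that dictionary query generate the CASE branches for me as text and paste the result into the report query by hand (code generation rather than dynamic SQL). Something like this sketch, where the owner/table names and the excluded key columns are just the placeholders from above:

select listagg('When C.lvl = ' || column_id || ' Then ''' || column_name || '''', chr(10))
         within group (order by column_id) as col_branches
     , listagg('When C.lvl = ' || column_id || ' Then coalesce(to_char(' || column_name || '), ''<null>'')', chr(10))
         within group (order by column_id) as val_branches
from   all_tab_columns
where  owner = 'MY_OWNER_NAME'
and    table_name = 'SOURCE_TABLE'
and    column_name not in ('GRP', 'RECORD_DATE', 'UPDATED_BY');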
Then I need to pivot that to show the before and after values in different columns. Then I can filter that to only what changed.
,
pivot as (
select grp
, record_date
, col
, updated_by
, max("'previous'") val_prev
, max("'current'") val_curr
from unpivot
pivot (
max(val)
for status
in (
'previous',
'current'
)
)
group by grp
, record_date
, col
, updated_by
)
select grp
, record_date
, col
, updated_by
, val_prev
, val_curr
from pivot
where val_curr <> val_prev
order by grp
, record_date
GRP  RECORD_DATE          COL  UPDATED_BY  VAL_PREV  VAL_CURR
a    2022-12-02 00:00:00  C    sally       c         d
z    2022-12-09 00:00:00  B    sally       e         f

You can't do this with pure SQL alone. But you can achieve what you want in a single statement using SQL macros - provided you're on an up-to-date version of Oracle Database.
This is an example of a dynamic unpivot macro that converts all the unlisted columns to rows:
create or replace function unpivot_macro (
tab dbms_tf.table_t,
keep_cols dbms_tf.columns_t
) return clob sql_macro is
sql_stmt clob;
unpivot_list clob;
select_list clob;
begin
for col in tab.column.first .. tab.column.last loop
if tab.column ( col ).description.name
not member of keep_cols then
unpivot_list := unpivot_list ||
',' || tab.column ( col ).description.name;
end if;
select_list := select_list ||
', to_char (' || tab.column ( col ).description.name || ') as ' ||
tab.column ( col ).description.name;
end loop;
sql_stmt :=
'select * from (
select ' || trim ( both ',' from select_list ) || ' from tab
)
unpivot (
val for col
in ( ' || trim ( both ',' from unpivot_list ) || ' )
)';
return sql_stmt;
end unpivot_macro;
/
select * from unpivot_macro (
source_data, columns ( grp, updated_by, record_date )
);
GRP RECORD_DATE UPDATED_BY COL VAL
a 01-DEC-2022 00:00 joe B b
a 01-DEC-2022 00:00 joe C c
a 02-DEC-2022 00:00 sally B z
a 02-DEC-2022 00:00 sally C d
a 04-DEC-2022 00:00 joe B a
a 04-DEC-2022 00:00 joe C d
...
If the reason for avoiding PL/SQL is you don't have permission to create functions, you can place the macro in the with clause.
Here's an example running on 21.3:
with function unpivot_macro (
tab dbms_tf.table_t,
keep_cols dbms_tf.columns_t
) return clob sql_macro is
sql_stmt clob;
unpivot_list clob;
select_list clob;
begin
for col in tab.column.first .. tab.column.last loop
if tab.column ( col ).description.name
not member of keep_cols then
unpivot_list := unpivot_list ||
',' || tab.column ( col ).description.name;
end if;
select_list := select_list ||
', to_char (' || tab.column ( col ).description.name || ') as ' ||
tab.column ( col ).description.name;
end loop;
sql_stmt :=
'select * from (
select ' || trim ( both ',' from select_list ) || ' from tab
)
unpivot (
val for col
in ( ' || trim ( both ',' from unpivot_list ) || ' )
)
where status is not null';
return sql_stmt;
end unpivot_macro;
source_data as (
select 'a' grp, 'b' b, 'c' c, date '2022-12-01' record_date, 'joe' updated_by
from dual union all
select 'a', 'z', 'd', date '2022-12-02', 'sally' updated_by
from dual union all
select 'a', 'a', 'd', date '2022-12-04', 'joe' updated_by
from dual union all
select 'z' a, 'b' b, 'c' c, date '2022-12-01', 'joe' updated_by
from dual union all
select 'z', 'e', 'c', date '2022-12-08', 'joe' updated_by
from dual union all
select 'z', 'f', 'c', date '2022-12-09', 'sally' updated_by
from dual
), changes as (
select s.grp, b, c,
'sally' updated_by,
case
when updated_by = 'sally' then record_date
else lead ( record_date ) over ( partition by grp order by record_date )
end record_date,
case
when updated_by = 'sally' then 'current'
when lead ( updated_by ) over ( partition by grp order by record_date ) = 'sally'
then 'previous'
end status
from source_data s
)
select * from unpivot_macro (
changes, columns ( grp, record_date, updated_by, status )
)
pivot (
max ( val ) for status
in ( 'previous' prev_val, 'current' curr_val )
)
where prev_val <> curr_val;
GRP  UPDATED_BY  RECORD_DATE  COL  PREV_VAL  CURR_VAL
---  ----------  -----------  ---  --------  --------
a    sally       02-DEC-22    B    b         z
a    sally       02-DEC-22    C    c         d
z    sally       09-DEC-22    B    e         f

Related

Oracle SQL: How to remove duplicate in listagg

After using listagg to combine my data, there are many duplicates that I want to remove.
Original Table
There will be only 3 types of technologies in total, with no specific pattern in my data. I am wondering whether it is possible to remove all the duplicates and keep only one of each type in the respective row?
select
NAME,
RTRIM(
REGEXP_REPLACE(
(LISTAGG(
NVL2(Membrane_column, 'Membrane, ', NULL)
|| NVL2(SNR_column, 'SNR, ', NULL)
|| NVL2(SMR_column, 'SMR, ', NULL)
) within group (ORDER BY name)),
'Membrane, |SNR, |SMR, ', '', '1', '1', 'c')
, ', ')
as TECHNOLOGY
from
tableA
The current table I have for now:

NAME  TECHNOLOGY
A     SNR, SMR, SMR, SNR
B     Membrane, SNR, SMR, Membrane
C     SMR, SMR, Membrane

Desired Table:

NAME  TECHNOLOGY
A     SNR, SMR
B     Membrane, SNR, SMR
C     SMR, Membrane
This could be an easy way:
select name, listagg(technology, ', ') within group (order by 1) -- or whatever order you need
from
(
select distinct name, technology
from tableA
)
group by name
Maybe just create the SUM of the SNR/SMR/Membrane columns, group them by name, and replace the numbers with the strings that you want to see in the output.
Query (first step ...)
select name
, sum( snr_column ), sum( smr_column ), sum( membrane_column )
from original
group by name
;
-- output
NAME  SUM(SNR_COLUMN)  SUM(SMR_COLUMN)  SUM(MEMBRANE_COLUMN)
2     1                1                2
3     null             2                1
1     2                2                null
Replace the sums, concatenate, remove the trailing comma with RTRIM()
select
name
, rtrim(
case when sum( snr_column ) >= 1 then 'SNR, ' end
|| case when sum( smr_column ) >= 1 then 'SMR, ' end
|| case when sum( membrane_column ) >= 1 then 'Membrane' end
, ' ,'
) as technology
from original
group by name
order by name
;
-- output
NAME  TECHNOLOGY
1     SNR, SMR
2     SNR, SMR, Membrane
3     SMR, Membrane
Code the CASEs in the required order.
DBfiddle
Starting from Oracle 19c listagg supports distinct keyword. Also within group became optional.
with a as (
select column_value as a
from table(sys.odcivarchar2list('A', 'B', 'A', 'B', 'C')) q
)
select listagg(distinct a, ',')
from a
LISTAGG(DISTINCTA,',')
----------------------
A,B,C
livesql example here.

Oracle SQL find missing sequence in varchar2 field

I am new to Oracle and to this forum. I have searched and found answers on how to do this with a column of just numbers, but my column has text at the beginning followed by a sequence number.
I have a table with a varchar2 column named myid whose values are a text prefix with a number at the end; the number is always 6 digits with leading zeros and runs in sequence.
Hello_002190
Hello_002188
Bye_000187
Bye_000185
Bye_000184
Get_008133
Get_008131
Gone_001112
Gone_001110
Gone_001109
I need an Oracle SQL script that will show me all the missing rows.
The result for the above should be:
Hello_002189
Bye_000186
Get_008132
Gone_001111
Thanks in advance for the help
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value ) AS
SELECT 'Hello_002190' FROM DUAL UNION ALL
SELECT 'Hello_002188' FROM DUAL UNION ALL
SELECT 'Bye_000187' FROM DUAL UNION ALL
SELECT 'Bye_000185' FROM DUAL UNION ALL
SELECT 'Bye_000184' FROM DUAL UNION ALL
SELECT 'Get_008133' FROM DUAL UNION ALL
SELECT 'Get_008131' FROM DUAL UNION ALL
SELECT 'Gone_001112' FROM DUAL UNION ALL
SELECT 'Gone_001110' FROM DUAL UNION ALL
SELECT 'Gone_001109' FROM DUAL;
Query 1:
WITH data ( prefix, suffix ) AS (
SELECT SUBSTR( value, 1, INSTR( value, '_' ) ),
TO_NUMBER( SUBSTR( value, INSTR( value, '_' ) + 1 ) )
FROM table_name
),
bounds ( prefix, min_suffix, max_suffix ) AS (
SELECT prefix, MIN( suffix ), MAX( suffix )
FROM data
GROUP BY prefix
)
SELECT prefix || TO_CHAR( column_value, 'FM000000' ) AS missing_value
FROM bounds b
CROSS JOIN
TABLE(
CAST(
MULTISET(
SELECT b.min_suffix + LEVEL - 1
FROM DUAL
CONNECT BY b.min_suffix + LEVEL - 1 <= b.max_suffix
) AS SYS.ODCINUMBERLIST
)
)
MINUS
SELECT value FROM table_name
Results:
| MISSING_VALUE |
|---------------|
| Bye_000186 |
| Get_008132 |
| Gone_001111 |
| Hello_002189 |

Optimize nested for loop in Oracle to find similarity

I have two tables; both hold the same values, but they come from different sources.
Table 1
------------
ID Title
1 Introduction to Science
2 Introduction to C
3 Let is C
4 C
5 Java
Table 2
------------------------
ID Title
a Intro to Science
b Intro to C
c Let is C
d C
e Java
I want to compare all the titles in table 1 with the titles in table 2 and find similarity matches.
I have used the built-in Oracle function UTL_MATCH.edit_distance_similarity(LS_Title, LSO_Title).
Script:
DECLARE
LS_count NUMBER;
LSO_count NUMBER;
percentage NUMBER;
LS_Title VARCHAR2 (4000);
LSO_Title VARCHAR2 (4000);
LS_CPNT_ID VARCHAR2 (64);
LSO_CPNT_ID VARCHAR2 (64);
BEGIN
SELECT COUNT (*) INTO LS_count FROM tbl_zim_item;
SELECT COUNT (*) INTO LSO_count FROM tbl_zim_lso_item;
DBMS_OUTPUT.put_line ('value of a: ' || LS_count);
DBMS_OUTPUT.put_line ('value of a: ' || LSO_count);
FOR i IN 1 .. LS_count
LOOP
SELECT cpnt_title
INTO LS_Title
FROM tbl_zim_item
WHERE iden = i;
SELECT cpnt_id
INTO LS_CPNT_ID
FROM tbl_zim_item
WHERE iden = i;
FOR j IN 1 .. lso_count
LOOP
SELECT cpnt_title
INTO LSO_Title
FROM tbl_zim_lso_item
WHERE iden = j;
SELECT cpnt_id
INTO LSO_CPNT_ID
FROM tbl_zim_lso_item
WHERE iden = j;
percentage :=
UTL_MATCH.edit_distance_similarity (LS_Title, LSO_Title);
IF percentage > 50
THEN
INSERT INTO title_sim
VALUES (ls_cpnt_id,
ls_title,
lso_cpnt_id,
lso_title,
percentage);
END IF;
END LOOP;
END LOOP;
END;
This is running for more than 15 hours. Kindly provide a better solution.
Note : My table 1 has 20000 records and table 2 has 10000 records.
Unless I'm missing something, you don't need all of the looping and row-by-row lookups since SQL can do cross joins. Therefore my first try would be just:
insert into title_sim
( columns... )
select ls_cpnt_id
, ls_title
, lso_cpnt_id
, lso_title
, percentage
from ( select i.cpnt_id as ls_cpnt_id
, i.cpnt_title as ls_title
, li.cpnt_id as lso_cpnt_id
, li.cpnt_title as lso_title
, case -- Using Boneist's suggestion:
when i.cpnt_title = li.cpnt_title then 100
else utl_match.edit_distance_similarity(i.cpnt_title, li.cpnt_title)
end as percentage
from tbl_zim_item i
cross join tbl_zim_lso_item li )
where percentage > 50;
If there is much repetition in the titles, you might benefit from some scalar subquery caching by wrapping the utl_match.edit_distance_similarity function in a ( select ... from dual ).
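For example, the wrapped call might look something like this (just the select; the insert wrapper stays as above):
select ls_cpnt_id
     , ls_title
     , lso_cpnt_id
     , lso_title
     , percentage
from ( select i.cpnt_id     as ls_cpnt_id
            , i.cpnt_title  as ls_title
            , li.cpnt_id    as lso_cpnt_id
            , li.cpnt_title as lso_title
              -- scalar subquery wrapper: Oracle can cache the result for repeated inputs
            , ( select utl_match.edit_distance_similarity(i.cpnt_title, li.cpnt_title)
                from dual ) as percentage
       from tbl_zim_item i
       cross join tbl_zim_lso_item li )
where percentage > 50;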
If the titles are often exactly the same and assuming in those cases percentage should be 100%, you might avoid calling the function when the titles are an exact match:
declare
ls_count number;
lso_count number;
begin
select count(*) into ls_count from tbl_zim_item;
select count(*) into lso_count from tbl_zim_lso_item;
dbms_output.put_line('tbl_zim_item contains ' || ls_count || ' rows.');
dbms_output.put_line('tbl_zim_lso_item contains ' || lso_count || ' rows.');
for r in (
select i.cpnt_id as ls_cpnt_id
, i.cpnt_title as ls_title
, li.cpnt_id as lso_cpnt_id
, li.cpnt_title as lso_title
, case
when i.cpnt_title = li.cpnt_title then 100 else 0
end as percentage
from tbl_zim_item i
cross join tbl_zim_lso_item li
)
loop
if r.percentage < 100 then
r.percentage := utl_match.edit_distance_similarity(r.ls_title, r.lso_title);
end if;
if r.percentage > 50 then
insert into title_sim (columns...)
values
( r.ls_cpnt_id
, r.ls_title
, r.lso_cpnt_id
, r.lso_title
, r.percentage );
end if;
end loop;
end;
Rather than looping through all the data, I'd merely join the two tables together, eg:
WITH t1 AS (SELECT 1 ID, 'Introduction to Science' title FROM dual UNION ALL
SELECT 2 ID, 'Introduction to C' title FROM dual UNION ALL
SELECT 3 ID, 'Let is C' title FROM dual UNION ALL
SELECT 4 ID, 'C' title FROM dual UNION ALL
SELECT 5 ID, 'Java' title FROM dual UNION ALL
SELECT 6 ID, 'Oracle for Newbies' title FROM dual),
t2 AS (SELECT 'a' ID, 'Intro to Science' title FROM dual UNION ALL
SELECT 'b' ID, 'Intro to C' title FROM dual UNION ALL
SELECT 'c' ID, 'Let is C' title FROM dual UNION ALL
SELECT 'd' ID, 'C' title FROM dual UNION ALL
SELECT 'e' ID, 'Java' title FROM dual UNION ALL
SELECT 'f' ID, 'PL/SQL rocks!' title FROM dual)
SELECT t1.title t1_title,
t2.title t2_title,
UTL_MATCH.edit_distance_similarity(t1.title, t2.title)
FROM t1
INNER JOIN t2 ON UTL_MATCH.edit_distance_similarity(t1.title, t2.title) > 50;
T1_TITLE T2_TITLE UTL_MATCH.EDIT_DISTANCE_SIMILA
----------------------- ---------------- ------------------------------
Introduction to Science Intro to Science 70
Introduction to C Intro to C 59
Let is C Let is C 100
C C 100
Java Java 100
By doing that, you can then reduce the whole thing to a single DML statement, something like:
INSERT INTO title_sim (t1_id,
t1_title,
t2_id,
t2_title,
percentage)
SELECT t1.id t1_id,
t1.title t1_title,
t2.id t2_id,
t2.title t2_title,
UTL_MATCH.edit_distance_similarity(t1.title, t2.title) percentage
FROM t1
INNER JOIN t2 ON UTL_MATCH.edit_distance_similarity(t1.title, t2.title) > 50;
which ought to be a good deal faster than your row-by-row attempt, particularly as you are unnecessarily selecting from each table twice.
As an aside, you know that you can select multiple columns into multiple variables in the same query, right?
So instead of having:
SELECT cpnt_title
INTO LS_Title
FROM tbl_zim_item
WHERE iden = i;
SELECT cpnt_id
INTO LS_CPNT_ID
FROM tbl_zim_item
WHERE iden = i;
you could instead do:
SELECT cpnt_title, cpnt_id
INTO LS_Title, LS_CPNT_ID
FROM tbl_zim_item
WHERE iden = i;
https://www.techonthenet.com/oracle/intersect.php
this will give you data which is similar in both queries
select title from table_1
intersect
select title from table_2

Oracle XE, count and display different combinations of rows based on one column

need help with a complicated query. This is an extract from my table:
USERID SERVICE
1 A
1 B
2 A
3 A
3 B
4 A
4 C
5 A
6 A
7 A
7 B
Ok, I would like the query to return and display all possible combinations that exist in my table with their respective counts based on the SERVICE column. For example first user has A and B service, this is one combination which occurred once. Next user has only service A, this is one more combination which occurred once. Third user has service A and B, this has happened once already and the count for this combination is 2 now, etc. So my output based on this particular input would be a table like this:
A AB AC ABC B BC
3 3 1 0 0 0
So to clarify a bit more, if there are 3 services, then there are 2^3 - 1 = 7 possible non-empty combinations, and they are A, B, C, AB, AC, BC and ABC. And my table should contain the count of users which have these combinations of services assigned to them.
I have tried building a matrix using this query and then getting all counts using the CUBE function:
select service_A, service_B, service_C from
(select USERID,
max(case when SERVICE = 'A' then 1 else null end) service_A,
max(case when SERVICE = 'B' then 1 else null end) service_B,
max(case when SERVICE = 'C' then 1 else null end) service_C
from SOME_TABLE)
group by CUBE(service_A, service_B,service_C);
But I don't get the count of all combinations. I need only combinations which happened, so counts 0 are not necessary but it is ok to display them. Thanks.
Don't output it as dynamic columns (it is difficult to do without using PL/SQL and dynamic SQL) but output it as rows instead (if you have a front-end then it can usually translate rows to columns much easier than oracle can):
Oracle Setup:
CREATE TABLE some_table ( USERID, SERVICE ) AS
SELECT 1, 'A' FROM DUAL UNION ALL
SELECT 1, 'B' FROM DUAL UNION ALL
SELECT 2, 'A' FROM DUAL UNION ALL
SELECT 3, 'A' FROM DUAL UNION ALL
SELECT 3, 'B' FROM DUAL UNION ALL
SELECT 4, 'A' FROM DUAL UNION ALL
SELECT 4, 'C' FROM DUAL UNION ALL
SELECT 5, 'A' FROM DUAL UNION ALL
SELECT 6, 'A' FROM DUAL UNION ALL
SELECT 7, 'A' FROM DUAL UNION ALL
SELECT 7, 'B' FROM DUAL;
Query:
SELECT service,
COUNT( userid ) AS num_users
FROM (
SELECT userid,
LISTAGG( service ) WITHIN GROUP ( ORDER BY service ) AS service
FROM some_table
GROUP BY userid
)
GROUP BY service;
Output:
SERVICE NUM_USERS
------- ----------
AC 1
A 3
AB 3
PL/SQL for dynamic columns:
VARIABLE cur REFCURSOR;
DECLARE
TYPE string_table IS TABLE OF VARCHAR2(4000);
TYPE int_table IS TABLE OF INT;
t_services string_table;
t_counts int_table;
p_sql CLOB;
BEGIN
SELECT service,
COUNT( userid ) AS num_users
BULK COLLECT INTO t_services, t_counts
FROM (
SELECT userid,
CAST( LISTAGG( service ) WITHIN GROUP ( ORDER BY service ) AS VARCHAR2(2) ) AS service
FROM some_table
GROUP BY userid
)
GROUP BY service;
p_sql := EMPTY_CLOB() || 'SELECT ';
p_sql := p_sql || t_counts(1) || ' AS "' || t_services(1) || '"';
FOR i IN 2 .. t_services.COUNT LOOP
p_sql := p_sql || ', ' || t_counts(i) || ' AS "' || t_services(i) || '"';
END LOOP;
p_sql := p_sql || ' FROM DUAL';
OPEN :cur FOR p_sql;
END;
/
PRINT cur;
Output:
AC A AB
--- --- ---
1 3 3

Concatenate results from a SQL query in Oracle

I have data like this in a table
NAME PRICE
A 2
B 3
C 5
D 9
E 5
I want to display all the values in one row; for instance:
A,2|B,3|C,5|D,9|E,5|
How would I go about making a query that will give me a string like this in Oracle? I don't need it to be programmed into something; I just want a way to get that line to appear in the results so I can copy it over and paste it in a word document.
My Oracle version is 10.2.0.5.
-- Oracle 10g --
SELECT deptno, WM_CONCAT(ename) AS employees
FROM scott.emp
GROUP BY deptno;
Output:
10 CLARK,MILLER,KING
20 SMITH,FORD,ADAMS,SCOTT,JONES
30 ALLEN,JAMES,TURNER,BLAKE,MARTIN,WARD
I know this is a little late but try this:
SELECT LISTAGG(CONCAT(CONCAT(NAME,','),PRICE),'|') WITHIN GROUP (ORDER BY NAME) AS CONCATDATA
FROM your_table
Usually when I need something like that quickly and I want to stay on SQL without using PL/SQL, I use something similar to the hack below:
select sys_connect_by_path(col, ', ') as concat
from
(
select 'E' as col, 1 as seq from dual
union
select 'F', 2 from dual
union
select 'G', 3 from dual
)
where seq = 3
start with seq = 1
connect by prior seq+1 = seq
It's a hierarchical query which uses the "sys_connect_by_path" special function, which is designed to get the "path" from a parent to a child.
What we are doing is simulating that the record with seq=1 is the parent of the record with seq=2 and so forth, and then getting the full path of the last child (in this case, the record with seq = 3), which will effectively be a concatenation of all the "col" columns.
Adapted to your case:
select sys_connect_by_path(to_clob(col), '|') as concat
from
(
select name || ',' || price as col, rownum as seq, max(rownum) over (partition by 1) as max_seq
from
(
/* Simulating your table */
select 'A' as name, 2 as price from dual
union
select 'B' as name, 3 as price from dual
union
select 'C' as name, 5 as price from dual
union
select 'D' as name, 9 as price from dual
union
select 'E' as name, 5 as price from dual
)
)
where seq = max_seq
start with seq = 1
connect by prior seq+1 = seq
Result is: |A,2|B,3|C,5|D,9|E,5
As you're in Oracle 10g you can't use the excellent listagg(). However, there are numerous other string aggregation techniques.
There's no particular need for all the complicated stuff. Assuming the following table
create table a ( NAME varchar2(1), PRICE number);
insert all
into a values ('A', 2)
into a values ('B', 3)
into a values ('C', 5)
into a values ('D', 9)
into a values ('E', 5)
select * from dual
The unsupported function wm_concat should be sufficient:
select replace(replace(wm_concat (name || '#' || price), ',', '|'), '#', ',')
from a;
REPLACE(REPLACE(WM_CONCAT(NAME||'#'||PRICE),',','|'),'#',',')
--------------------------------------------------------------------------------
A,2|B,3|C,5|D,9|E,5
But, you could also alter Tom Kyte's stragg, also in the above link, to do it without the replace functions.
Here is another approach, using model clause:
-- sample of data from your question
with t1(NAME1, PRICE) as(
select 'A', 2 from dual union all
select 'B', 3 from dual union all
select 'C', 5 from dual union all
select 'D', 9 from dual union all
select 'E', 5 from dual
) -- the query
select Res
from (select name1
, price
, rn
, res
from t1
model
dimension by (row_number() over(order by name1) rn)
measures (name1, price, cast(null as varchar2(101)) as res)
(res[rn] order by rn desc = name1[cv()] || ',' || price[cv()] || '|' || res[cv() + 1])
)
where rn = 1
Result:
RES
----------------------
A,2|B,3|C,5|D,9|E,5|
SQLFiddle Example
Something like the following, which is grossly inefficient and untested.
create or replace function foo return varchar2 as
bar varchar2(8000); -- arbitrary size
cursor cur is
select name, price
from my_table;
r cur%rowtype;
begin
open cur;
loop
fetch cur into r;
exit when cur%notfound;
bar := bar || r.name || ',' || r.price || '|';
end loop;
close cur;
dbms_output.put_line(bar);
return bar;
end;
Managed to get this far using XMLAgg (Oracle 11g, via SQL Fiddle):
Data Table:
COL1 COL2 COL3
1 0 0
1 1 1
2 0 0
3 0 0
3 1 0
SELECT
RTRIM(REPLACE(REPLACE(
XMLAgg(XMLElement("x", col1,',', col2, col3)
ORDER BY col1), '<x>'), '</x>', '|')) AS COLS
FROM ab
;
Results:
COLS
1,00| 3,00| 2,00| 1,11| 3,10|
* SQLFIDDLE DEMO
Reference to read on XMLAGG