Nested for loop in Oracle to find similarity - optimize SQL

I have two tables that contain the same values, but each comes from a different source.
Table 1
-------
ID  Title
1   Introduction to Science
2   Introduction to C
3   Let is C
4   C
5   Java

Table 2
-------
ID  Title
a   Intro to Science
b   Intro to C
c   Let is C
d   C
e   Java
I want to compare every title in table 1 with every title in table 2 and find the similarity match.
I have used the built-in Oracle function UTL_MATCH.edit_distance_similarity(LS_Title, LSO_Title).
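For reference, UTL_MATCH.edit_distance_similarity returns a similarity score where 100 means an exact match; for one of the sample pairs it gives the following (the value matches the output shown in the answers below):
SELECT UTL_MATCH.edit_distance_similarity ('Introduction to Science', 'Intro to Science') AS similarity
  FROM dual;
-- similarity = 70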
Script:
DECLARE
   LS_count      NUMBER;
   LSO_count     NUMBER;
   percentage    NUMBER;
   LS_Title      VARCHAR2 (4000);
   LSO_Title     VARCHAR2 (4000);
   LS_CPNT_ID    VARCHAR2 (64);
   LSO_CPNT_ID   VARCHAR2 (64);
BEGIN
   SELECT COUNT (*) INTO LS_count FROM tbl_zim_item;
   SELECT COUNT (*) INTO LSO_count FROM tbl_zim_lso_item;

   DBMS_OUTPUT.put_line ('value of LS_count: ' || LS_count);
   DBMS_OUTPUT.put_line ('value of LSO_count: ' || LSO_count);

   FOR i IN 1 .. LS_count
   LOOP
      SELECT cpnt_title
        INTO LS_Title
        FROM tbl_zim_item
       WHERE iden = i;

      SELECT cpnt_id
        INTO LS_CPNT_ID
        FROM tbl_zim_item
       WHERE iden = i;

      FOR j IN 1 .. LSO_count
      LOOP
         SELECT cpnt_title
           INTO LSO_Title
           FROM tbl_zim_lso_item
          WHERE iden = j;

         SELECT cpnt_id
           INTO LSO_CPNT_ID
           FROM tbl_zim_lso_item
          WHERE iden = j;

         percentage :=
            UTL_MATCH.edit_distance_similarity (LS_Title, LSO_Title);

         IF percentage > 50
         THEN
            INSERT INTO title_sim
                 VALUES (LS_CPNT_ID,
                         LS_Title,
                         LSO_CPNT_ID,
                         LSO_Title,
                         percentage);
         END IF;
      END LOOP;
   END LOOP;
END;
This has been running for more than 15 hours. Kindly provide a better solution.
Note: table 1 has 20,000 records and table 2 has 10,000 records, so the nested loops perform roughly 200 million comparisons.

Unless I'm missing something, you don't need all of the looping and row-by-row lookups since SQL can do cross joins. Therefore my first try would be just:
insert into title_sim
  ( columns... )
select ls_cpnt_id
     , ls_title
     , lso_cpnt_id
     , lso_title
     , percentage
from   ( select i.cpnt_id     as ls_cpnt_id
              , i.cpnt_title  as ls_title
              , li.cpnt_id    as lso_cpnt_id
              , li.cpnt_title as lso_title
              , case -- Using Boneist's suggestion:
                   when i.cpnt_title = li.cpnt_title then 100
                   else utl_match.edit_distance_similarity(i.cpnt_title, li.cpnt_title)
                end as percentage
         from   tbl_zim_item i
         cross join tbl_zim_lso_item li )
where  percentage > 50;
If there is much repetition in the titles, you might benefit from some scalar subquery caching by wrapping the utl_match.edit_distance_similarity function in a ( select ... from dual ).
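For instance, a sketch of that change (same query as above, with the only difference being that the function call moves into a scalar subquery so Oracle can cache its result for repeated title pairs):
select ls_cpnt_id, ls_title, lso_cpnt_id, lso_title, percentage
from   ( select i.cpnt_id     as ls_cpnt_id
              , i.cpnt_title  as ls_title
              , li.cpnt_id    as lso_cpnt_id
              , li.cpnt_title as lso_title
                -- scalar subquery: Oracle caches the result for repeated input values
              , ( select utl_match.edit_distance_similarity(i.cpnt_title, li.cpnt_title)
                  from   dual ) as percentage
         from   tbl_zim_item i
         cross join tbl_zim_lso_item li )
where  percentage > 50;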
If the titles are often exactly the same and assuming in those cases percentage should be 100%, you might avoid calling the function when the titles are an exact match:
begin
  select count(*) into ls_count from tbl_zim_item;
  select count(*) into lso_count from tbl_zim_lso_item;

  dbms_output.put_line('tbl_zim_item contains ' || ls_count || ' rows.');
  dbms_output.put_line('tbl_zim_lso_item contains ' || lso_count || ' rows.');

  for r in (
    select i.cpnt_id     as ls_cpnt_id
         , i.cpnt_title  as ls_title
         , li.cpnt_id    as lso_cpnt_id
         , li.cpnt_title as lso_title
         , case
              when i.cpnt_title = li.cpnt_title then 100 else 0
           end as percentage
    from   tbl_zim_item i
    cross join tbl_zim_lso_item li
  )
  loop
    if r.percentage < 100 then
      r.percentage := utl_match.edit_distance_similarity(r.ls_title, r.lso_title);
    end if;

    if r.percentage > 50 then
      insert into title_sim (columns...)
      values
        ( r.ls_cpnt_id
        , r.ls_title
        , r.lso_cpnt_id
        , r.lso_title
        , r.percentage );
    end if;
  end loop;
end;

Rather than looping through all the data, I'd merely join the two tables together, e.g.:
WITH t1 AS (SELECT 1 ID, 'Introduction to Science' title FROM dual UNION ALL
            SELECT 2 ID, 'Introduction to C' title FROM dual UNION ALL
            SELECT 3 ID, 'Let is C' title FROM dual UNION ALL
            SELECT 4 ID, 'C' title FROM dual UNION ALL
            SELECT 5 ID, 'Java' title FROM dual UNION ALL
            SELECT 6 ID, 'Oracle for Newbies' title FROM dual),
     t2 AS (SELECT 'a' ID, 'Intro to Science' title FROM dual UNION ALL
            SELECT 'b' ID, 'Intro to C' title FROM dual UNION ALL
            SELECT 'c' ID, 'Let is C' title FROM dual UNION ALL
            SELECT 'd' ID, 'C' title FROM dual UNION ALL
            SELECT 'e' ID, 'Java' title FROM dual UNION ALL
            SELECT 'f' ID, 'PL/SQL rocks!' title FROM dual)
SELECT t1.title t1_title,
       t2.title t2_title,
       UTL_MATCH.edit_distance_similarity(t1.title, t2.title)
FROM   t1
       INNER JOIN t2 ON UTL_MATCH.edit_distance_similarity(t1.title, t2.title) > 50;
T1_TITLE                T2_TITLE         UTL_MATCH.EDIT_DISTANCE_SIMILA
----------------------- ---------------- ------------------------------
Introduction to Science Intro to Science                             70
Introduction to C       Intro to C                                   59
Let is C                Let is C                                    100
C                       C                                           100
Java                    Java                                        100
By doing that, you can then reduce the whole thing to a single DML statement, something like:
INSERT INTO title_sim (t1_id,
                       t1_title,
                       t2_id,
                       t2_title,
                       percentage)
SELECT t1.id t1_id,
       t1.title t1_title,
       t2.id t2_id,
       t2.title t2_title,
       UTL_MATCH.edit_distance_similarity(t1.title, t2.title) percentage
FROM   t1
       INNER JOIN t2 ON UTL_MATCH.edit_distance_similarity(t1.title, t2.title) > 50;
which ought to be a good deal faster than your row-by-row attempt, particularly as you are unnecessarily selecting from each table twice.
As an aside, you know that you can select multiple columns into multiple variables in the same query, right?
So instead of having:
SELECT cpnt_title
INTO LS_Title
FROM tbl_zim_item
WHERE iden = i;
SELECT cpnt_id
INTO LS_CPNT_ID
FROM tbl_zim_item
WHERE iden = i;
you could instead do:
SELECT cpnt_title, cpnt_id
INTO LS_Title, LS_CPNT_ID
FROM tbl_zim_item
WHERE iden = i;

https://www.techonthenet.com/oracle/intersect.php
This will give you the titles that appear in both tables (INTERSECT only finds exact matches, not similar ones):
select title from table_1
intersect
select title from table_2
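Applied to the tables from the question, that would look like the following (a sketch assuming the title column is cpnt_title, as in the posted script):
select cpnt_title from tbl_zim_item
intersect
select cpnt_title from tbl_zim_lso_item;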

Related

Identifying changes to data over time

Using Oracle Database 11.2.
Problem: Compare data from two sources and show only the differences.
I'm looking for some really slick solution to automate this comparison for hundreds of tables, each with hundreds of columns, that will work within the context of a query in a report developed in Crystal Reports. And, yes, I have considered that I took a wrong turn somewhere (Not the Crystal Reports part, though. I'm stuck with that.) and everything in this description after that point is meaningless.
Set aside thoughts about query or report performance. I intend to force filters to limit the amount of data that could be processed in a single request. What I'm asking about here is how to make this generic. In other words, I don't want to list any specific columns in my query code except, maybe, to distinguish between known grouping or lookup columns -- updated_by, updated_date, etc. I want to have queries that automatically gather those names for me.
For the sake of simplicity, let's say I want to compare data, based on filter criteria, from adjacent rows within a grouping in a table. Here is simplified example input data:
with source_data as (
select 'a' grp
, 'b' b
, 'c' c
, date '2022-12-01' record_date
, 'joe' updated_by
from dual
union all
select 'a'
, 'b'
, 'd'
, date '2022-12-02'
, 'sally' updated_by
from dual
union all
select 'a'
, 'a'
, 'd'
, date '2022-12-04'
, 'joe' updated_by
from dual
union all
select 'z' a
, 'b' b
, 'c' c
, date '2022-12-01'
, 'joe' updated_by
from dual
union all
select 'z'
, 'e'
, 'c'
, date '2022-12-08'
, 'joe' updated_by
from dual
union all
select 'z'
, 'f'
, 'c'
, date '2022-12-09'
, 'sally' updated_by
from dual
)
GRP  B  C  RECORD_DATE          UPDATED_BY
a    b  c  2022-12-01 00:00:00  joe
a    b  d  2022-12-02 00:00:00  sally
a    a  d  2022-12-04 00:00:00  joe
z    b  c  2022-12-01 00:00:00  joe
z    e  c  2022-12-08 00:00:00  joe
z    f  c  2022-12-09 00:00:00  sally
The need is to see what changes were made by people in certain categories. For this example, let's say Sally is a member of that group and Joe is not. So, the only changes I care about are on rows 2 and 6. But I need to compare each to the previous row, so...
,
changed as (
select sd.*
from source_data sd
where updated_by = 'sally'
),
changes as (
select 'current' as status
, c.*
from changed c
union all
select 'previous'
, sd.grp
, sd.b
, sd.c
, c.record_date
, c.updated_by
from source_data sd
inner join changed c on c.grp = sd.grp
and sd.record_date = (select max(record_date) from source_data where grp = c.grp and record_date < c.record_date)
)
Output from this trivial example seems simple enough. But when I have hundreds of rows by hundreds of columns to compare, it's not so easy to identify the change.
I have many tables to compare that have the same issue. Many of the tables have hundreds of columns. Usually, the difference is in only one or a few of the columns.
This will be done in a report. I don't have access to create functions or stored procedures, so I doubt I can use dynamic SQL in any way. This likely has constraints similar to developing a view.
I am NOT using PL/SQL. (Kinda tired of nearly every Oracle question related to my searches on SO having some relationship to PL/SQL, but no way to filter those out.)
I was thinking that in order to compare the data I'll first want to unpivot it to get a column/value pair on a row...
(Building on the answer to this question: ORACLE unpivot columns to rows)
, unpivot as (
Select *
From (
Select grp
, status
, record_date
, updated_by
, Case When C.lvl = 1 Then 'B'
When C.lvl = 2 Then 'C'
End col
, Case When C.lvl = 1 Then coalesce(B, '<null>')
When C.lvl = 2 Then coalesce(C, '<null>')
End val
From changes
cross join (
select level lvl
from dual
connect by level <= 2
) c
)
where val is not null
order by 1, 3, 2 desc
)
(Yes, for non-trivial data I'll need to cast the data going into val to something more generic, like a string.)
But how do I programmatically determine the number of columns, the column order, and generate the column names for both the value of col and for the reference for the CASE statement in val?
I suppose I could use something like this as part of the solution:
SELECT COLUMN_NAME
, COLUMN_ID
FROM ALL_tab_columns
WHERE OWNER = 'MY_OWNER_NAME'
AND TABLE_NAME = 'SOURCE_TABLE'
ORDER BY COLUMN_ID
But I'm not sure how to dovetail that into the solution in a meaningful way without involving dynamic SQL, which I'm pretty sure I can't do. And it would probably require referencing columns based on ordinal position, which doesn't appear to be possible in SQL. Of course, if that would work I could use a similar query to figure out how to handle data types for the val column.
Then I need to pivot that to show the before and after values in different columns. Then I can filter that to only what changed.
,
pivot as (
select grp
, record_date
, col
, updated_by
, max("'previous'") val_prev
, max("'current'") val_curr
from unpivot
pivot (
max(val)
for status
in (
'previous',
'current'
)
)
group by grp
, record_date
, col
, updated_by
)
select grp
, record_date
, col
, updated_by
, val_prev
, val_curr
from pivot
where val_curr <> val_prev
order by grp
, record_date
GRP  RECORD_DATE          COL  UPDATED_BY  VAL_PREV  VAL_CURR
a    2022-12-02 00:00:00  C    sally       c         d
z    2022-12-09 00:00:00  B    sally       e         f
You can't do this with pure SQL alone. But you can achieve what you want in a single statement using SQL macros - provided you're on an up-to-date version of Oracle Database.
This is an example of a dynamic unpivot macro that converts all the unlisted columns to rows:
create or replace function unpivot_macro (
tab dbms_tf.table_t,
keep_cols dbms_tf.columns_t
) return clob sql_macro is
sql_stmt clob;
unpivot_list clob;
select_list clob;
begin
for col in tab.column.first .. tab.column.last loop
if tab.column ( col ).description.name
not member of keep_cols then
unpivot_list := unpivot_list ||
',' || tab.column ( col ).description.name;
end if;
select_list := select_list ||
', to_char (' || tab.column ( col ).description.name || ') as ' ||
tab.column ( col ).description.name;
end loop;
sql_stmt :=
'select * from (
select ' || trim ( both ',' from select_list ) || ' from tab
)
unpivot (
val for col
in ( ' || trim ( both ',' from unpivot_list ) || ' )
)';
return sql_stmt;
end unpivot_macro;
/
select * from unpivot_macro (
source_data, columns ( grp, updated_by, record_date )
);
GRP RECORD_DATE UPDATED_BY COL VAL
a 01-DEC-2022 00:00 joe B b
a 01-DEC-2022 00:00 joe C c
a 02-DEC-2022 00:00 sally B z
a 02-DEC-2022 00:00 sally C d
a 04-DEC-2022 00:00 joe B a
a 04-DEC-2022 00:00 joe C d
...
If the reason for avoiding PL/SQL is you don't have permission to create functions, you can place the macro in the with clause.
Here's an example running on 21.3:
with function unpivot_macro (
tab dbms_tf.table_t,
keep_cols dbms_tf.columns_t
) return clob sql_macro is
sql_stmt clob;
unpivot_list clob;
select_list clob;
begin
for col in tab.column.first .. tab.column.last loop
if tab.column ( col ).description.name
not member of keep_cols then
unpivot_list := unpivot_list ||
',' || tab.column ( col ).description.name;
end if;
select_list := select_list ||
', to_char (' || tab.column ( col ).description.name || ') as ' ||
tab.column ( col ).description.name;
end loop;
sql_stmt :=
'select * from (
select ' || trim ( both ',' from select_list ) || ' from tab
)
unpivot (
val for col
in ( ' || trim ( both ',' from unpivot_list ) || ' )
)
where status is not null';
return sql_stmt;
end unpivot_macro;
source_data as (
select 'a' grp, 'b' b, 'c' c, date '2022-12-01' record_date, 'joe' updated_by
from dual union all
select 'a', 'z', 'd', date '2022-12-02', 'sally' updated_by
from dual union all
select 'a', 'a', 'd', date '2022-12-04', 'joe' updated_by
from dual union all
select 'z' a, 'b' b, 'c' c, date '2022-12-01', 'joe' updated_by
from dual union all
select 'z', 'e', 'c', date '2022-12-08', 'joe' updated_by
from dual union all
select 'z', 'f', 'c', date '2022-12-09', 'sally' updated_by
from dual
), changes as (
select s.grp, b, c,
'sally' updated_by,
case
when updated_by = 'sally' then record_date
else lead ( record_date ) over ( partition by grp order by record_date )
end record_date,
case
when updated_by = 'sally' then 'current'
when lead ( updated_by ) over ( partition by grp order by record_date ) = 'sally'
then 'previous'
end status
from source_data s
)
select * from unpivot_macro (
changes, columns ( grp, record_date, updated_by, status )
)
pivot (
max ( val ) for status
in ( 'previous' prev_val, 'current' curr_val )
)
where prev_val <> curr_val;
G UPDAT RECORD_DATE C P C
- ----- ------------------ - - -
a sally 02-DEC-22 B b z
a sally 02-DEC-22 C c d
z sally 09-DEC-22 B e f

Doing an UPDATE statement involving a join in Oracle SQL

I tried the following code, but it did not work
BEGIN
For i in (select BUS_RPT_ID, BUS_RPT_PRIMARY_POC_ID from BUS_RPT_DTL )
LOOP
update BUS_RPT_DTL
set BUS_RPT_DTL.BUS_RPT_PRIMARY_POC_ID = (select usr_id
from BUS_RPT_DTL b
join FNM_USR u
on LOWER(trim(u.FRST_NAME || ' ' || u.LST_NAME)) =LOWER(trim(b.BUS_RPT_PRIMARY_POC_NME))
where b.BUS_RPT_ID = i.BUS_RPT_ID
and i.BUS_RPT_PRIMARY_POC_ID is not null
);
END LOOP;
END;
I basically have a report table with a POC id and a POC name. The POC name is filled out, but I want to pull the POC id from a user table and plug it into the POC id column in the report table. Can anyone help me out?
You don't need a loop. A single update statement would be sufficient:
update BUS_RPT_DTL b
set    b.BUS_RPT_PRIMARY_POC_ID = (select usr_id
                                   from   FNM_USR u
                                   where  LOWER(trim(u.FRST_NAME || ' ' || u.LST_NAME)) = LOWER(trim(b.BUS_RPT_PRIMARY_POC_NME)))
where  b.BUS_RPT_PRIMARY_POC_ID is not null;
Cheers!!
create table BUS_RPT_DTL as
(select 1 bus_rpt_id, 101 bus_rpt_primary_poc_id, 'Joe Dubb' BUS_RPT_PRIMARY_POC_NME from dual union
select 2 bus_rpt_id, 202, 'Bernie Bro' BUS_RPT_PRIMARY_POC_NME from dual union
select 3 bus_rpt_id, null, 'Don Junior' BUS_RPT_PRIMARY_POC_NME from dual
)
;
create table FNM_USR as
( select 909 usr_id, 'Joe' frst_name, 'Dubb' lst_name from dual union
select 808 usr_id, 'Bernie' frst_name, 'Bro' lst_name from dual union
select 707 usr_id, 'Don' frst_name, 'Junior' lst_name from dual
)
;
select * from BUS_RPT_DTL;
update BUS_RPT_DTL b set bus_rpt_primary_poc_id = (select usr_id from fnm_usr u where LOWER(trim(u.FRST_NAME || ' ' || u.LST_NAME)) = LOWER(trim(b.BUS_RPT_PRIMARY_POC_NME)))
where BUS_RPT_PRIMARY_POC_ID is not null
;
select * from BUS_RPT_DTL;
You can alternatively use a MERGE statement, which updates BUS_RPT_PRIMARY_POC_ID for the rows that match your condition and inserts new rows otherwise. (Note that the IS NOT NULL filter has to sit inside the USING subquery, because MERGE cannot update a column that is referenced in its ON clause.)
MERGE INTO BUS_RPT_DTL bb
USING ( SELECT u.USR_ID,
               b.BUS_RPT_ID,
               b.BUS_RPT_PRIMARY_POC_ID,
               b.BUS_RPT_PRIMARY_POC_NME
        FROM BUS_RPT_DTL b
        JOIN FNM_USR u
          ON LOWER(TRIM(u.FRST_NAME || ' ' || u.LST_NAME)) =
             LOWER(TRIM(b.BUS_RPT_PRIMARY_POC_NME))
        WHERE b.BUS_RPT_PRIMARY_POC_ID IS NOT NULL ) b
ON ( bb.BUS_RPT_ID = b.BUS_RPT_ID )
WHEN MATCHED THEN UPDATE SET bb.BUS_RPT_PRIMARY_POC_ID = b.USR_ID
WHEN NOT MATCHED THEN INSERT (bb.BUS_RPT_PRIMARY_POC_NME, bb.BUS_RPT_ID, bb.BUS_RPT_PRIMARY_POC_ID)
                      VALUES (b.BUS_RPT_PRIMARY_POC_NME, b.BUS_RPT_ID, b.BUS_RPT_PRIMARY_POC_ID);

Oracle XE, count and display different combinations of rows based on one column

need help with a complicated query. This is an extract from my table:
USERID SERVICE
1 A
1 B
2 A
3 A
3 B
4 A
4 C
5 A
6 A
7 A
7 B
Ok, I would like the query to return and display all possible combinations that exist in my table, with their respective counts, based on the SERVICE column. For example, the first user has services A and B; this is one combination, which occurred once. The next user has only service A; this is one more combination, which occurred once. The third user has services A and B; this has happened once already, so the count for this combination is now 2, etc. So my output based on this particular input would be a table like this:
A AB AC ABC B BC
3 3 1 0 0 0
So to clarify a bit more: if there are 3 services, then there are 2^3 - 1 = 7 possible non-empty combinations, namely A, B, C, AB, AC, BC and ABC. And my output should contain the count of users which have each combination of services assigned to them.
I have tried building a matrix using this query and then getting all counts using the CUBE function:
select service_A, service_B, service_C from
  (select USERID,
          max(case when SERVICE = 'A' then 1 else null end) service_A,
          max(case when SERVICE = 'B' then 1 else null end) service_B,
          max(case when SERVICE = 'C' then 1 else null end) service_C
   from SOME_TABLE
   group by USERID)
group by CUBE(service_A, service_B, service_C);
But I don't get the count of all combinations. I need only combinations which happened, so counts 0 are not necessary but it is ok to display them. Thanks.
Don't output it as dynamic columns (it is difficult to do without using PL/SQL and dynamic SQL) but output it as rows instead (if you have a front-end then it can usually translate rows to columns much easier than oracle can):
Oracle Setup:
CREATE TABLE some_table ( USERID, SERVICE ) AS
SELECT 1, 'A' FROM DUAL UNION ALL
SELECT 1, 'B' FROM DUAL UNION ALL
SELECT 2, 'A' FROM DUAL UNION ALL
SELECT 3, 'A' FROM DUAL UNION ALL
SELECT 3, 'B' FROM DUAL UNION ALL
SELECT 4, 'A' FROM DUAL UNION ALL
SELECT 4, 'C' FROM DUAL UNION ALL
SELECT 5, 'A' FROM DUAL UNION ALL
SELECT 6, 'A' FROM DUAL UNION ALL
SELECT 7, 'A' FROM DUAL UNION ALL
SELECT 7, 'B' FROM DUAL;
Query:
SELECT service,
COUNT( userid ) AS num_users
FROM (
SELECT userid,
LISTAGG( service ) WITHIN GROUP ( ORDER BY service ) AS service
FROM some_table
GROUP BY userid
)
GROUP BY service;
Output:
SERVICE NUM_USERS
------- ----------
AC 1
A 3
AB 3
PL/SQL for dynamic columns:
VARIABLE cur REFCURSOR;
DECLARE
TYPE string_table IS TABLE OF VARCHAR2(4000);
TYPE int_table IS TABLE OF INT;
t_services string_table;
t_counts int_table;
p_sql CLOB;
BEGIN
SELECT service,
COUNT( userid ) AS num_users
BULK COLLECT INTO t_services, t_counts
FROM (
SELECT userid,
CAST( LISTAGG( service ) WITHIN GROUP ( ORDER BY service ) AS VARCHAR2(2) ) AS service
FROM some_table
GROUP BY userid
)
GROUP BY service;
p_sql := EMPTY_CLOB() || 'SELECT ';
p_sql := p_sql || t_counts(1) || ' AS "' || t_services(1) || '"';
FOR i IN 2 .. t_services.COUNT LOOP
p_sql := p_sql || ', ' || t_counts(i) || ' AS "' || t_services(i) || '"';
END LOOP;
p_sql := p_sql || ' FROM DUAL';
OPEN :cur FOR p_sql;
END;
/
PRINT cur;
Output:
AC A AB
--- --- ---
1 3 3

How to use Varray or cursor or any other array in decode function to compare the values

I want to compare the first 3 characters of the last_name values in the employee table with the name column in the emp_x table; if any record matches, I want to return that value from the emp_x table.
Below are my tables and records:
select substr(x.last_name,1,3) from employee x;
Mathews
Smith
Rice
Black
Green
Larry
Cat
select * from emp_x;
Mataaa
Mmitta
Smitta
Riceeeee
Expected Result:
select decode(substr(x.last_name,1,3), substr(x.last_name,1,3), (select name from emp_x y where y.name like substr(x.last_name,1,3)||'%'),x.last_name) from employee x;
Mataaa
Smitta
Riceeeee
NULL
NULL
NULL
NULL
I am getting the exact result, but is there a better way to do this in a PL/SQL procedure?
Say, for example, I take the 'Mathews' last_name value from the employee table, read the first 3 characters, compare them in the emp_x table, and get the 'Mataaa' value as the result in the decode function above.
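For instance, the single-name lookup described above can be checked on its own like this (a sketch using the sample data shown; 'Mat%' matches only 'Mataaa' in emp_x):
select name
from   emp_x
where  name like substr('Mathews', 1, 3) || '%';
-- returns 'Mataaa'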
Instead of selecting values from the table, can we use an array or cursor to compare and get the values from a variable in a PL/SQL procedure?
Please help to resolve this.
I know this code is not the best solution; it's work(ish) in nature. Anyway, I wrote it to pass some time at work, and I hope that even if it's not the complete solution, you get an idea of what you want to achieve:
DECLARE
  CURSOR c1 IS SELECT substr(x.last_name, 1, 3) AS last_name FROM employee x;
  CURSOR c2 IS SELECT * FROM emp_x;
  l_name employee.last_name%TYPE;
BEGIN
  FOR c1_rec IN c1
  LOOP
    FOR c2_rec IN c2
    LOOP
      -- compare the 3-character prefix from employee with the prefix of emp_x.name
      IF c1_rec.last_name = substr(c2_rec.name, 1, 3) THEN
        SELECT last_name
          INTO l_name
          FROM employee
         WHERE substr(last_name, 1, 3) = c1_rec.last_name;
        dbms_output.put_line(l_name);
      ELSE
        dbms_output.put_line('NULL');
      END IF;
    END LOOP;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    dbms_output.put_line('exception occurred');
END;
I would rather stick with this approach:
with employee as
 (select 'Mathews' name from dual
  union all
  select 'Smith' from dual
  union all
  select 'Rice' from dual
  union all
  select 'Black' from dual
  union all
  select 'Green' from dual
  union all
  select 'Larry' from dual
  union all
  select 'Cat' from dual),
emp_x as
 (select 'Mataaa' pattern from dual
  union all
  select 'Mmitta' from dual
  union all
  select 'Smitta' from dual
  union all
  select 'Riceeeee' from dual)
select nvl(ex.pattern, 'NULL') result
from employee e
left outer join emp_x ex
  on substr(ex.pattern, 1, 3) = substr(e.name, 1, 3);
RESULT
--------
Mataaa
Smitta
Riceeeee
NULL
NULL
NULL
NULL
7 rows selected
You didn't provide any information about how big your tables are, but a hash join in this query would be much faster than any procedural code anyway. Of course, if you need to use it in a procedure, you could wrap it in a cursor (a fuller sketch follows the skeleton below):
for c1 in (select ...)
loop
dbms_output.put_line(c1.result);
end loop;
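Here is a fuller version of that wrapper, as a sketch only: it reuses the left outer join from above and assumes the real tables are employee (with last_name) and emp_x (with name), as described in the question.
begin
  for c1 in ( select nvl(ex.name, 'NULL') as result
              from   employee e
              left outer join emp_x ex
                on   substr(ex.name, 1, 3) = substr(e.last_name, 1, 3) )
  loop
    dbms_output.put_line(c1.result);
  end loop;
end;
/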
This can be used for comparing collections using cursors:
declare
cursor c1 is
select last_name ls from empc;
type x is table of employees.last_name%type;
x1 x := x();
cnt integer :=0;
begin
for z in c1 loop
cnt := cnt +1;
x1.extend;
x1(cnt) := z.ls;
if x1(cnt) is NULL then
DBMS_OUTPUT.PUT_LINE('ASHISH');
end if;
dbms_output.put_line(cnt || ' '|| x1(cnt));
end loop;
end;

Concatenate results from a SQL query in Oracle

I have data like this in a table
NAME PRICE
A 2
B 3
C 5
D 9
E 5
I want to display all the values in one row; for instance:
A,2|B,3|C,5|D,9|E,5|
How would I go about making a query that will give me a string like this in Oracle? I don't need it to be programmed into something; I just want a way to get that line to appear in the results so I can copy it over and paste it in a word document.
My Oracle version is 10.2.0.5.
-- Oracle 10g --
SELECT deptno, WM_CONCAT(ename) AS employees
FROM scott.emp
GROUP BY deptno;
Output:
10 CLARK,MILLER,KING
20 SMITH,FORD,ADAMS,SCOTT,JONES
30 ALLEN,JAMES,TURNER,BLAKE,MARTIN,WARD
I know this is a little late but try this:
SELECT LISTAGG(CONCAT(CONCAT(NAME,','),PRICE),'|') WITHIN GROUP (ORDER BY NAME) AS CONCATDATA
FROM your_table
Usually when I need something like that quickly and I want to stay on SQL without using PL/SQL, I use something similar to the hack below:
select sys_connect_by_path(col, ', ') as concat
from
(
select 'E' as col, 1 as seq from dual
union
select 'F', 2 from dual
union
select 'G', 3 from dual
)
where seq = 3
start with seq = 1
connect by prior seq+1 = seq
It's a hierarchical query which uses the "sys_connect_by_path" special function, which is designed to get the "path" from a parent to a child.
What we are doing is simulating that the record with seq=1 is the parent of the record with seq=2 and so forth, and then getting the full path of the last child (in this case, the record with seq = 3), which will effectively be a concatenation of all the "col" column values.
Adapted to your case:
select sys_connect_by_path(to_clob(col), '|') as concat
from
(
select name || ',' || price as col, rownum as seq, max(rownum) over (partition by 1) as max_seq
from
(
/* Simulating your table */
select 'A' as name, 2 as price from dual
union
select 'B' as name, 3 as price from dual
union
select 'C' as name, 5 as price from dual
union
select 'D' as name, 9 as price from dual
union
select 'E' as name, 5 as price from dual
)
)
where seq = max_seq
start with seq = 1
connect by prior seq+1 = seq
Result is: |A,2|B,3|C,5|D,9|E,5
As you're in Oracle 10g you can't use the excellent listagg(). However, there are numerous other string aggregation techniques.
There's no particular need for all the complicated stuff. Assuming the following table
create table a ( NAME varchar2(1), PRICE number);
insert all
into a values ('A', 2)
into a values ('B', 3)
into a values ('C', 5)
into a values ('D', 9)
into a values ('E', 5)
select * from dual
The unsupported function wm_concat should be sufficient:
select replace(replace(wm_concat (name || '#' || price), ',', '|'), '#', ',')
from a;
REPLACE(REPLACE(WM_CONCAT(NAME||'#'||PRICE),',','|'),'#',',')
--------------------------------------------------------------------------------
A,2|B,3|C,5|D,9|E,5
But, you could also alter Tom Kyte's stragg, also in the above link, to do it without the replace functions.
Here is another approach, using model clause:
-- sample of data from your question
with t1(NAME1, PRICE) as(
select 'A', 2 from dual union all
select 'B', 3 from dual union all
select 'C', 5 from dual union all
select 'D', 9 from dual union all
select 'E', 5 from dual
) -- the query
select Res
from (select name1
, price
, rn
, res
from t1
model
dimension by (row_number() over(order by name1) rn)
measures (name1, price, cast(null as varchar2(101)) as res)
(res[rn] order by rn desc = name1[cv()] || ',' || price[cv()] || '|' || res[cv() + 1])
)
where rn = 1
Result:
RES
----------------------
A,2|B,3|C,5|D,9|E,5|
SQLFiddle Example
Something like the following, which is grossly inefficient and untested.
create or replace function foo return varchar2 as
  bar varchar2(8000);          -- arbitrary size
  cursor cur is
    select name, price
    from   my_table;
begin
  for r in cur loop
    bar := bar || r.name || ',' || r.price || '|';
  end loop;
  dbms_output.put_line(bar);
  return bar;
end foo;
/
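A hypothetical call, assuming the data is in a table named my_table as above and server output is enabled:
set serveroutput on
select foo from dual;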
Managed to get this far using XMLAgg (using Oracle 11g on SQL Fiddle).
Data Table:
COL1 COL2 COL3
1 0 0
1 1 1
2 0 0
3 0 0
3 1 0
SELECT
RTRIM(REPLACE(REPLACE(
XMLAgg(XMLElement("x", col1,',', col2, col3)
ORDER BY col1), '<x>'), '</x>', '|')) AS COLS
FROM ab
;
Results:
COLS
1,00| 3,00| 2,00| 1,11| 3,10|
* SQLFIDDLE DEMO
Reference to read on XMLAGG