How to search for a comma-delimited string in Oracle SQL? [duplicate]

I'm using Oracle APEX 4.2. I have a table with a column in it called 'versions'. In each row, the 'versions' column holds a list of values separated by commas, e.g. '1,2,3,4'.
I'm trying to create a Select List whose list of values will be each of the values that are separated by commas for one of the rows. What would the SQL query for this be?
Example:
Table Name: Products
Name | Versions
--------------------
myProd1 | 1,2,3
myProd2 | a,b,c
Desired output:
Two Select Lists.
The first one is obvious, I just select the name column from the products table. This way the user can select whatever product they want.
The second one is the one I'm not sure about. Let's say the user has selected 'myProd1' from the first Select List. Then the second Select List should contain the following list of values for the user to select from: '1', '2' or '3'.

After reading your latest comments I understand that what you want is not an LOV but rather a list item, although it could be an LOV too. The first list item/LOV will have all products, and the user selects from it, e.g. Prod1, Prod2, Prod3... The second list item will have all versions, converted from comma-separated values (as in your example) to a table (as in my examples below), because in my understanding the user may pick only a single value per product from this list. A single product may have many values, e.g. Prod1 has values 1, 2, 3, 4, but the user needs to select only one. Correct? This is why you need to convert the comma values to a table. The first query looks something like this:
SELECT prod_id
FROM your_prod_table
/
id
--------
myProd1
myProd2
.....
The second query should select all versions where product_id is in your_prod_table:
SELECT version FROM your_versions_table
WHERE prod_id IN (SELECT prod_id FROM your_prod_table)
/
Versions
--------
1,2,3,4 -- myProd1 values
a,b,c,d -- myProd2 values
.....
The above will return all versions for the product, e.g. all values for myProd1 etc...
Use my examples below for converting comma-separated values to a table: replace the hardcoded '1,2,3,4' with the value column from your table, and replace dual with your table name.
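For instance, the second LOV query might end up looking something like this (a sketch, assuming the hypothetical your_versions_table above and an APEX page item :P1_PROD_ID holding the product picked in the first list; filtering in a subquery before CONNECT BY keeps the split from multiplying across rows):
SELECT trim(regexp_substr(version, '[^,]+', 1, LEVEL)) version
FROM (SELECT version FROM your_versions_table WHERE prod_id = :P1_PROD_ID)
CONNECT BY LEVEL <= regexp_count(version, ',') + 1
/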
If you need products and versions in a single query and a single result, then simply join/outer join (left or right join) both tables.
SELECT p.prod_id, v.version
FROM your_prod_table p
, your_versions_table v
WHERE p.prod_id = v.prod_id
/
In this case you will get something like this in the output:
id | Values
------------------
myProd1 | 1,2,3,4
myProd2 | a,b,c,d
If you convert the comma-separated values to a table in the above query, then you will get this - all in one list or LOV:
id | Values
------------------
myProd1 | 1
myProd1 | 2
myProd1 | 3
myProd1 | 4
myProd2 | a
myProd2 | b
myProd2 | c
myProd2 | d
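For completeness, here is a sketch of that conversion applied to every row at once (same hypothetical tables as above; the PRIOR clauses stop CONNECT BY from cycling across rows when splitting a multi-row table):
SELECT v.prod_id,
       trim(regexp_substr(v.version, '[^,]+', 1, LEVEL)) version
FROM your_versions_table v
CONNECT BY LEVEL <= regexp_count(v.version, ',') + 1
   AND PRIOR v.prod_id = v.prod_id
   AND PRIOR sys_guid() IS NOT NULL
/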
I hope this helps. Again, you may use an LOV or list items if available in APEX. Two separate lists of values - one for products, the other for versions - make more sense to me. In the case of list items you will need two separate queries as above, and it will be easier to do the comma-to-table conversion for the values/versions only. But it is up to you.
Comma to table examples:
-- Comma to table - regexp_count --
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) str_2_tab
FROM dual
CONNECT BY LEVEL <= regexp_count('1,2,3,4', ',')+1
/
-- Comma to table - Length -
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) token
FROM dual
CONNECT BY LEVEL <= length('1,2,3,4') - length(REPLACE('1,2,3,4', ',', ''))+1
/
-- Comma to table - instr --
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) str_2_tab
FROM dual
CONNECT BY LEVEL <= instr('1,2,3,4', ',', 1, LEVEL - 1)
/
The output of all of the above is the same:
STR_2_TAB
----------
1
2
3
4
Comma to table - PL/SQL/APEX example. For an LOV you need SQL, not PL/SQL.
DECLARE
  v_array  apex_application_global.vc_arr2;
  v_string varchar2(2000);
BEGIN
  -- Convert delimited string to array
  v_array := apex_util.string_to_table('alpha,beta,gamma,delta', ',');
  FOR i IN 1..v_array.count LOOP
    dbms_output.put_line('Array: ' || v_array(i));
  END LOOP;
  -- Convert array to delimited string
  v_string := apex_util.table_to_string(v_array, '|');
  dbms_output.put_line('String: ' || v_string);
END;
/
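As a side note: on later APEX versions (5.1 and up, so not the asker's 4.2), apex_string.split returns a collection that can be queried directly in SQL, which makes the LOV query trivial:
-- apex_string.split is SQL-usable, unlike apex_util.string_to_table
SELECT column_value version
FROM table(apex_string.split('alpha,beta,gamma,delta', ','))
/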


How can the SUBSTR function help me remove values from my column?

25779101724|GTG1105-Kibimba - these are telephone numbers, and there are more like this. The characters after each number are the location of that telephone number. I want to remove everything after the number (the bar, the location, and the space) so that I can query the DB for which of these numbers are active. I don't want to lose the locations, though, because I will need them to report the active numbers AND their locations. How can I remove the locations, query the active numbers, and then put the locations back?
I am hoping for a response.
From my point of view, by far your best option is to normalize that table and store each piece of information in its own column. Even though you can easily split that string into several parts, joining it to another table will suffer as the number of rows gets higher.
Anyway, here you are.
Sample data:
SQL> with test (msisdn) as
  2    (select '25779101724|GTG1105-Kibimba' from dual union all
  3     select '25776030896|BRR1351-Kaberenge2' from dual
  4    )
Query begins here:
  5  select
  6    substr(msisdn, 1, instr(msisdn, '|') - 1) phone_number,
  7    substr(msisdn, instr(msisdn, '|') + 1) the_rest
  8  from test;
PHONE_NUMBER THE_REST
------------------------------ ------------------------------
25779101724 GTG1105-Kibimba
25776030896 BRR1351-Kaberenge2
SQL>
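A sketch of the normalization suggested at the top, reusing the same split expressions (table names are hypothetical):
-- one-time split into a normalized table; reports can then join on phone_number
create table phone_location as
select substr(msisdn, 1, instr(msisdn, '|') - 1) phone_number,
       substr(msisdn, instr(msisdn, '|') + 1) location
from your_source_table;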

Translating an Excel concept into SQL

Let's say I have the following range in Excel, B3:D6, named MyRange:
1    | 1     | 1
2    | other | 2
TRUE | 3     | 3
4    | 4     | 4
This isn't a table by any means; it's more a collection of Variant values entered into cells. Excel makes it easy to sum these values with =SUM(B3:D6), which gives 25. Let's not go into the details of type checking or anything like that, and just figure that SUM will easily skip values that don't make sense.
If we were translating this concept into SQL, what would be the most natural way to do this? The few approaches that came to mind are (ignore type errors for now):
MyRange returns an array of values:
-- myRangeAsList = [1,1,1,2, ...]
SELECT SUM(elem) FROM UNNEST(myRangeAsList) AS r (elem);
MyRange returns a table-valued function of a single column (basically the opposite of a list):
-- myRangeAsCol = (SELECT 1 UNION ALL SELECT 1 UNION ALL ...
SELECT SUM(elem) FROM myRangeAsCol as r (elem);
Or, perhaps more 'correctly', return a 3-columned table such as:
-- myRangeAsTable = (SELECT 1,1,1 UNION ALL SELECT 2,'other',2 UNION ALL ...
SELECT SUM(a+b+c) FROM myRangeAsTable AS r (a,b,c)
Unfortunately, I think this makes things the most difficult to work with, as we now have to combine an unknown number of columns.
Perhaps returning a single column is the easiest of the above to work with, but even that takes a very simple concept -- SUM(myRange) -- and converts it into something that is anything but that: SELECT SUM(elem) FROM myRangeAsCol as r (elem).
Perhaps this could also just be rewritten as a function for convenience.
Just a possible direction to think in:
create temp function extract_values (input string)
returns array<string> language js as """
return Object.values(JSON.parse(input));
""";
with myrangeastable as (
select '1' a, '1' b, '1' c union all
select '2', 'other', '2' union all
select 'true', '3', '3' union all
select '4', '4', '4'
)
select sum(safe_cast(value as float64)) range_sum
from myrangeastable t,
unnest(extract_values(to_json_string(t))) value
with output:
RANGE_SUM
25.0
Note: no columns are explicitly used, so this should work for any sized range without any changes in the code.
Depending on the specific use case, I think the above can be wrapped into something more friendly for someone who knows Excel.
I'll try to pose atomic, pure SQL principles that start with the obvious items and go to the more complicated ones. The intention is that all items can be used in any RDBMS:
SQL is basically designed to query tabular data which has relations (hence the name, Structured Query Language).
The range in Excel is a table for SQL. (Yes, you can have some other types in different DBs, but keep it simple so you can use the concept in different types of DBs.)
Now we accept that a range in Excel is a table in a database. The next step is how to map the columns and rows of an Excel range to a DB table. It is straightforward: an Excel range column is a column in the DB, and a row is a row. So why is this a separate item? Because of the main difference between the two: in DBs, adding a new column is usually a pain; DB tables are almost exclusively designed for new rows, not for new columns. (Of course there are methods to add new columns, and there even exist column-based DBs, but these are out of the scope of this answer.)
Items 2 and 3 in Excel and in a DB:
/*
Item 2: Table
the range in the excel is modeled as the below test_table
Item 3: Columns
id keeps the excel row number
b, c, d are the corresponding b, c, d columns of the excel
*/
create table test_table
(
id integer,
b varchar(20),
c varchar(20),
d varchar(20)
);
-- Item 3: Adding the rows in the DB
insert into test_table values (3 /* same as excel row number */ , '1', '1', '1');
insert into test_table values (4 /* same as excel row number */ , '2', 'other', '2');
insert into test_table values (5 /* same as excel row number */ , 'TRUE', '3', '3');
insert into test_table values (6 /* same as excel row number */ , '4', '4', '4');
Now we have a similar structure. The first thing we want to do is to have an equal number of rows between the Excel range and the DB table. At the DB side this is called filtering, and your tool is the where condition. The where condition goes through all rows (or indexes for the sake of speed, but this is beyond this answer's scope) and filters out those which do not satisfy the boolean logic in the condition. (So, for example, where 1 = 1 brings all rows, because the condition is always true for all rows.)
The next thing to do is to sum the related columns. For this purpose you have two options: sum(column_a + column_b) (row-by-row summation) or sum(a) + sum(b) (column-by-column summation). If we assume all the data are not null, then both give the same output (a caveat on NULLs follows the outputs below).
Items 4 and 5 in Excel and in a DB:
select sum(b + c + d) -- Item 5, first option: We sum row by row
from test_table
where id between 3 and 6; -- Item 4: We simply get all rows, because for all rows above the id is between 3 and 6; if we had another row with id 7, it would be filtered out
+----------------+
| sum(b + c + d) |
+----------------+
| 25 |
+----------------+
select sum(b) + sum(c) + sum(d) -- Item 5, second option: We sum column by column
from test_table
where id between 3 and 6; -- Item 4: We simply get all rows, because for all rows above the id is between 3 and 6; if we had another row with id 7, it would be filtered out
+--------------------------+
| sum(b) + sum(c) + sum(d) |
+--------------------------+
| 25 |
+--------------------------+
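The NULL caveat deserves a small sketch of its own: b + c + d is NULL for a whole row as soon as one operand is NULL, while sum(b) + sum(c) + sum(d) skips NULLs column by column. Wrapping each column in coalesce makes the row-by-row option behave like the column-by-column one (same test_table as above):
select sum(coalesce(b, 0) + coalesce(c, 0) + coalesce(d, 0))
from test_table
where id between 3 and 6;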
At this point it is better to go one step further. In Excel you have the "pivot table" structure. The corresponding structure in SQL is the powerful group by mechanism. The group by basically groups a table according to its condition, and each group behaves like a sub-table. For example, if you say group by column_a for a table, the rows are grouped according to the distinct values of column_a.
SQL is so powerful that you can even filter the sub-groups using the having clause, which acts the same as where but works over the columns in the group by or the functions over those columns (a small example follows the Item 6 output below).
Items 6 and 7 in Excel and in a DB:
-- Item 6: We can have group by clause to simulate a pivot table
insert into test_table values (7 /* same as excel row */ , '4', '2', '2');
select b, sum(d), min(d), max(d), avg(d)
from test_table
where id between 3 and 7
group by b;
+------+--------+--------+--------+--------+
| b | sum(d) | min(d) | max(d) | avg(d) |
+------+--------+--------+--------+--------+
| 1 | 1 | 1 | 1 | 1 |
| 2 | 2 | 2 | 2 | 2 |
| TRUE | 3 | 3 | 3 | 3 |
| 4 | 6 | 2 | 4 | 3 |
+------+--------+--------+--------+--------+
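Item 7, the having clause, deserves its own tiny sketch: it filters the groups themselves after group by has formed them, e.g. keeping only the groups whose total exceeds 2 (same test_table as above):
select b, sum(d)
from test_table
where id between 3 and 7
group by b
having sum(d) > 2;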
Beyond this point, the following are details which are not directly related to the question's purpose:
SQL has the ability to join tables (the relations). Joins can be thought of like the VLOOKUP functionality in Excel.
RDBMSs have indexing mechanisms to fetch rows as quickly as possible (this is where RDBMSs start to go beyond the purpose of Excel).
RDBMSs keep huge amounts of data (whereas Excel's maximum number of rows is limited).
Both RDBMSs and Excel can be used by most frameworks as a persistent data layer. But of course Excel is not the one you pick, because its reason for existence is more on the presentation layer.
The excel file and the SQL used in this answer can be found in this github repo: https://github.com/MehmetKaplan/stackoverflow-72135212/
PS: I used SQL for more than two decades, then reduced using it and started using Excel much more frequently because of job changes. Each time I use Excel I still think of DBs and "relational algebra", which is the mathematical foundation of RDBMSs.
So in Snowflake:
Strings as input:
if you have your data in a table represented by this CTE, and the data is strings of comma-separated values:
WITH data(raw) as (
select * from values
('null,null,null,null,null,null'),
('null,null,null,null,null,null'),
('null,1,1,1,null,null'),
('null,2, other,2,null,null'),
('null,true,3,3,null,null'),
('null,4,4,4,null,null')
)
this SQL will select the sub-part, try to parse it, and sum the valid values:
select sum(nvl(try_to_double(r.value::text), try_to_number(r.value::text))) as sum_total
from data as d
,table(split_to_table(d.raw,',')) r
where r.index between 2 and 4 /* the column B,C,D filter */
and r.seq between 3 and 6 /* the row 3-6 filter */
;
giving:
SUM_TOTAL
25
Arrays as input:
if you already have arrays... here I smash those strings through STRTOK_TO_ARRAY in the CTE to make some arrays:
WITH data(_array) as (
select STRTOK_TO_ARRAY(column1, ',') from values
('null,null,null,null,null,null'),
('null,null,null,null,null,null'),
('null,1,1,1,null,null'),
('null,2, other,2,null,null'),
('null,true,3,3,null,null'),
('null,4,4,4,null,null')
)
thus again almost the same SQL, but note the array indexes are 0-based, and I have used FLATTEN:
select sum(nvl(try_to_double(r.value::text), try_to_number(r.value::text))) as sum_total
from data as d
,table(flatten(input=>d._array)) r
where r.index between 1 and 3 /* the column B,C,D filter */
and r.seq between 3 and 6 /* the row 3-6 filter */
;
gives:
SUM_TOTAL
25
With JSON driven data:
This time using semi-structured data, we can include the filter ranges with the data, plus some extra "out of bounds" values just to show we are not simply converting it all.
WITH data as (
select parse_json('{ "col_from":2,
"col_to":4,
"row_from":3,
"row_to":6,
"data":[[101,102,null,104,null,null],
[null,null,null,null,null,null],
[null,1,1,1,null,null],
[null,2, "other",2,null,null],
[null,true,3,3,null,null],
[null,4,4,4,null,null]
]}') as json
)
select
sum(try_to_double(c.value::text)) as sum_total
from data as d
,table(flatten(input=>d.json:data)) r
,table(flatten(input=>r.value)) c
where r.index+1 between d.json:row_from::number and d.json:row_to::number
and c.index+1 between d.json:col_from::number and d.json:col_to::number
;
Here is another solution using Snowflake Scripting (Snowsight format). This code can easily be wrapped as a stored procedure.
declare
table_name := 'xl_concept'; -- input
column_list := 'a,b,c'; -- input
total resultset; -- result output
pos int := 0; -- position for delimiter
sql := ''; -- sql to be generated
col := ''; -- individual column names
begin
sql := 'select sum('; -- initialize sql
loop -- repeat until column list is empty
col := replace(split_part(:column_list, ',', 1), ',', ''); -- get the column name
pos := position(',' in :column_list); -- find the delimiter
sql := sql || 'coalesce(try_to_number('|| col ||'),0)'; -- add to the sql
if (pos > 0) then -- more columns in the column list
sql := sql || ' + ';
column_list := right(:column_list, len(:column_list) - :pos); -- update column list
else -- last entry in the columns list
break;
end if;
end loop;
sql := sql || ') total from ' || table_name||';'; -- finalize the sql
total := (execute immediate :sql); -- run the sql and store total value
return table(total); -- return total value
end;
Only these two variables need to be set: table_name and column_list.
It generates the following SQL to sum up the values:
select sum(coalesce(try_to_number(a),0) + coalesce(try_to_number(b),0) + coalesce(try_to_number(c),0)) from xl_concept
prep steps
create or replace temp table xl_concept (a varchar,b varchar,c varchar)
;
insert into xl_concept
with cte as (
select '1' a, '1' b, '1' c union all
select '2', 'other', '2' union all
select 'true', '3', '3' union all
select '4', '4', '4'
)
select * from cte
;
Result for the run with no changes:
TOTAL
25
Result after changing the column list to column_list := 'a,c';
TOTAL
17
Also, this can be enhanced by setting column_list to * and reading the column names from information_schema.columns to include all the columns of the table.
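A sketch of that enhancement (the listagg call builds the comma-separated column_list straight from Snowflake's catalog; the table name literal is the part to adapt):
select listagg(column_name, ',') within group (order by ordinal_position)
from information_schema.columns
where table_name = 'XL_CONCEPT';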
In PostgreSQL, a regular expression can be used to filter out non-numeric values before the sum:
select sum(e::Numeric) from (
select e
from unnest((Array[['1','2w','1.2e+4'],['-1','2.232','zz']])) as t(e)
where e ~ '^[-+]?[0-9]*\.?[0-9]+([eE][-+]?[0-9]+)?$'
) a
The expression for validating a numeric value was taken from the post Return Just the Numeric Values from a PostgreSQL Database Column.
A more robust option is to define a function, as in PostgreSQL alternative to SQL Server's try_cast function.
Function (simplified for this example):
create function try_cast_numeric(p_in text)
returns Numeric
as
$$
begin
begin
return $1::Numeric;
exception
when others then
return 0;
end;
end;
$$
language plpgsql;
Select
select
sum(try_cast_numeric(e))
from
unnest((Array[['1','2w','1.2e+4'],['-1','2.232','zz']])) as t(e)
Most modern RDBMSs support lateral joins and table value constructors. You can use them together to convert arbitrary columns to rows (3 columns per row become 3 rows with 1 column) and then sum. In SQL Server you would:
create table t (
id int identity not null primary key,
a int,
b int,
c int
);
insert into t(a, b, c) values
( 1, 1, 1),
( 2, null, 2),
(null, 3, 3),
( 4, 4, 4);
select sum(value)
from t
cross apply (values
(a),
(b),
(c)
) as x(value);
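For reference, the same unpivot in PostgreSQL-flavored syntax, since cross apply is T-SQL-specific (a sketch against the same table t):
select sum(x.value)
from t
cross join lateral (values (t.a), (t.b), (t.c)) as x(value);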
Below is the implementation of this concept in some popular RDBMS:
SQL Server
PostgreSQL
MySQL
Generic solution, ANSI SQL
Unpivot solution, Oracle
Using a regular expression to extract all numeric values from a row could be another option, I guess.
DECLARE rectangular_table ARRAY<STRUCT<A STRING, B STRING, C STRING>> DEFAULT [
('1', '1', '1'), ('2', 'other', '2'), ('TRUE', '3', '3'), ('4', '4', '4')
];
SELECT SUM(SAFE_CAST(v AS FLOAT64)) AS `sum`
FROM UNNEST(rectangular_table) t,
UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r':"?([-0-9.]*)"?[,}]')) v
output:
+------+------+
| Row | sum |
+------+------+
| 1 | 25.0 |
+------+------+
You could use a CTE with a SELECT FROM VALUES
with xlary as
(
select val from (values
('1')
,('1')
,('1')
,('2')
,('OTHER')
,('2')
,('TRUE')
,('3')
,('3')
,('4')
,('4')
,('4')
) as tbl (val)
)
select sum(try_cast(val as number)) from xlary;

Finding a value in multiple columns in Oracle table

I have a table like the one below:
ID         | NUMBER 1 | NUMBER 2 | NUMBER 3 | LOC
-----------|----------|----------|----------|----
1-14H-4950 |          | 0616167  | 4233243  | CA
A-522355   |          | 1234567  |          | TN
A-522357   |          | 9876543  |          | WY
A-522371   |          | 1112223  |          | WA
A-522423   | 1234567  | 2345678  | 1234567  | NJ
A-A-522427 | 9876543  | 6249853  | 6249853  | NJ
and I have a bunch of values (1234567, 9876543, 0616167, 1112223, 999999, ...etc.) which will be used in the where clause. If a value from the where clause is found in one of the three number columns (Number 1, Number 2, or Number 3), then I will have to write that row to output1 (it's like VLOOKUP in Excel).
If the value is found in more than one of the three columns, then it goes to a different output2 with a flag of MultipleMatches. If the value is not found in any of the three columns, then it should be in output2 with a flag of No Match. I tried using self joins and OR clauses, but was not able to get what I want.
I want to write the SQL to generate both outputs. The outputs will include all the columns from the above table. For example:
Output 1 from the above sample data will look like:
ID         | NUMBER 1 | NUMBER 2 | NUMBER 3 | LOC
-----------|----------|----------|----------|----
1-14H-4950 |          | 0616167  | 4233243  | CA
A-522371   |          | 1112223  |          | WA
Output 2 will be like:
ID         | NUMBER 1 | NUMBER 2 | NUMBER 3 | LOC | Flag
-----------|----------|----------|----------|-----|---------------
A-522423   | 1234567  | 2345678  | 1234567  | NJ  | Multiple Match
A-A-522427 | 9876543  | 6249853  | 6249853  | NJ  | Multiple Match
1234       |          |          |          |     | No Match
One SELECT statement cannot produce two result sets.
The main question is: why split the output when the difference is only in the FLAG column? If you really need two different outputs of the result, then you can do this:
(Rightly) create a common cursor for the query, in which the FLAG column is calculated, and split the output into screens in the UI.
drop table test_dt;
create table test_dt as
select '1-14h-4950' id,null num1,616167 num2,4233243 num3,'ca' loc from dual
union all
select 'a-522355',null ,1234567,null,'tn' from dual union all
select 'a-522357',null ,9876543,null,'wy' from dual union all
select 'a-522371',null ,1112223,null,'wa' from dual union all
select 'a-522423',1234567,2345678,1234567,'nj' from dual union all
select 'a3-522423',null,null,null,'nj' from dual union all
select 'a-a-522427',9876543,6249853,6249853,'nj' from dual;
--
select
d.*,
case when t.cc_ndv=0 and t.cc_null=3 then 'Not matching'
when t.cc_ndv=(3-t.cc_null) then 'Once'
else 'Multiple match'
end flag
--t.cc_ndv,
--t.cc_null
from test_dt d ,lateral(
select
count(distinct case level when 1 then num1
when 2 then num2
when 3 then num3
end ) cc_ndv,
count(distinct case level when 1 then nvl2(num1,null,1)
when 2 then nvl2(num2,null,2)
when 3 then nvl2(num3,null,3)
end ) cc_null
from dual connect by level<=3 and sys_guid()is not null
) t;
Or
Create a procedure (see dbms_sql.return_result) that returns several result sets.
Process the data of these cursors/result sets separately.
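Separately, here is a sketch of the literal match counting the question describes, built on the test_dt table above (sys.odcinumberlist is a built-in Oracle collection type; the value list is taken from the question's examples):
with vals as (
  select column_value val
  from table(sys.odcinumberlist(1234567, 9876543, 616167, 1112223, 1234))
)
select v.val, d.id, d.num1, d.num2, d.num3, d.loc,
       case
         when d.id is null then 'No Match'
         when (case when d.num1 = v.val then 1 else 0 end
             + case when d.num2 = v.val then 1 else 0 end
             + case when d.num3 = v.val then 1 else 0 end) > 1 then 'Multiple Match'
         else 'Single Match'
       end flag
from vals v
left join test_dt d
  on v.val in (d.num1, d.num2, d.num3);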

DB2: fill a dummy field with values in for loop while a select

I want to fill a dummy field with values in a for loop during a select.
Something like this (the account table, e.g., has a field "login"):
select login,(for i= 1 to 3 {list=list.login.i.","}) as list from account
The result should be
login | list
aaa | aaa1,aaa2,aaa3
bbb | bbb1,bbb2,bbb3
ccc | ccc1,ccc2,ccc3
Can someone please tell me if that is possible!
Many thanks!
If this is a one-off task and the size of your loop is fixed, you can make up a table of integers and do a cartesian product with the table containing the column login:
SELECT ACC.LOGIN || NUMBRS.NUM FROM
ACCOUNT ACC, TABLE (
SELECT '1' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
SELECT '2' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
SELECT '3' AS NUM FROM SYSIBM.SYSDUMMY1
) NUMBRS
which will give you strings like 'aaa1', 'aaa2', 'aaa3', one string per row. Then you can aggregate these strings with LISTAGG.
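To land directly on the output shown in the question, the aggregation might look like this (a sketch; LISTAGG is available from DB2 9.7 on):
SELECT ACC.LOGIN,
       LISTAGG(ACC.LOGIN || NUMBRS.NUM, ',') AS LIST
FROM ACCOUNT ACC, TABLE (
    SELECT '1' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
    SELECT '2' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
    SELECT '3' AS NUM FROM SYSIBM.SYSDUMMY1
) NUMBRS
GROUP BY ACC.LOGIN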
If the size is not fixed, you can always make up a temporary table and fill it up with appropriate data and use it instead of the NUMBRS table above.

Comparing list of values against table

I tried to find a solution to this problem for some time but without success, so any help would be much appreciated. A list of IDs needs to be compared against a table to find out which records exist (along with one of their values) and which are non-existent. There is a list of IDs, in text format:
100,
200,
300
a DB table:
ID(PK) value01 value02 value03 .....
--------------------------------------
100 Ann
102 Bob
300 John
304 Marry
400 Jane
and output I need is:
100 Ann
200 missing or empty or whatever indication
300 John
The obvious solution is to create a table and join, but I have only read access (the DB is a closed vendor product; I'm just a user). Writing a PL/SQL function also seems complicated, because the table has 200+ columns and 100k+ records, and I had no luck with creating a dynamic array of records. Also, the list of IDs to be checked contains hundreds of IDs, and I need to do this periodically, so any solution where each ID has to be changed in a separate line of code wouldn't be very useful.
Database is Oracle 10g.
There are many built-in public collection types; you can leverage one of them like this:
with ids as (select /*+ cardinality(a, 1) */ column_value id
from table(UTL_NLA_ARRAY_INT(100, 200, 300)) a
)
select ids.id, case when m.id is null then '**NO MATCH**' else m.value end value
from ids
left outer join my_table m
on m.id = ids.id;
To see a list of public types on your DB, run:
select owner, type_name, coll_type, elem_type_name, upper_bound, precision, scale from all_coll_types
where elem_type_name in ('FLOAT', 'INTEGER', 'NUMBER', 'DOUBLE PRECISION')
The hint
/*+ cardinality(a, 1) */
is just used to tell Oracle how many elements are in our array (if not specified, the default will be an assumption of 8k elements). Just set it to a reasonably accurate number.
You can transform a variable into a query using CONNECT BY (tested on 11g, should work on 10g+):
SQL> WITH DATA AS (SELECT '100,200,300' txt FROM dual)
2 SELECT regexp_substr(txt, '[^,]+', 1, LEVEL) item FROM DATA
3 CONNECT BY LEVEL <= length(txt) - length(REPLACE(txt, ',', '')) + 1;
ITEM
--------------------------------------------
100
200
300
You can then join this result to the table as if it were a standard view:
SQL> WITH DATA AS (SELECT '100,200,300' txt FROM dual)
2 SELECT v.id, dbt.value01
3 FROM dbt
4 RIGHT JOIN
5 (SELECT to_number(regexp_substr(txt, '[^,]+', 1, LEVEL)) ID
6 FROM DATA
7 CONNECT BY LEVEL <= length(txt) - length(REPLACE(txt, ',', '')) + 1) v
8 ON dbt.id = v.id;
ID VALUE01
---------- ----------
100 Ann
300 John
200
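If you want an explicit indication instead of an empty value, wrap the joined column in NVL (a sketch on the same query):
SQL> WITH DATA AS (SELECT '100,200,300' txt FROM dual)
  2  SELECT v.id, nvl(dbt.value01, '** missing **') value01
  3  FROM dbt
  4  RIGHT JOIN
  5  (SELECT to_number(regexp_substr(txt, '[^,]+', 1, LEVEL)) ID
  6  FROM DATA
  7  CONNECT BY LEVEL <= length(txt) - length(REPLACE(txt, ',', '')) + 1) v
  8  ON dbt.id = v.id;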
One way of tackling this is to dynamically create a common table expression that can then be included in the query. The final syntax you'd be aiming for is:
with list_of_values as (
select 100 val from dual union all
select 200 val from dual union all
select 300 val from dual union all
...)
select
lov.val,
...
from
list_of_values lov left outer join
other_data t on (lov.val = t.val)
It's not very elegant, particularly for large sets of values, but compatibility with a database on which you might have few privileges is very good.