Finding a value in multiple columns in Oracle table - sql

I have a table like below
ID          NUMBER 1  NUMBER 2  NUMBER 3  LOC
1-14H-4950            0616167   4233243   CA
A-522355              1234567             TN
A-522357              9876543             WY
A-522371              1112223             WA
A-522423    1234567   2345678   1234567   NJ
A-A-522427  9876543   6249853   6249853   NJ
I also have a bunch of values (1234567, 9876543, 0616167, 1112223, 999999, etc.) that will be used in the WHERE clause. If a value from the WHERE clause is found in exactly one of the three number columns (Number 1, Number 2, or Number 3), that row has to go to Output 1 (much like Excel's VLOOKUP).
If the value is found in more than one of the three columns, the row should go to a separate Output 2 with the flag "Multiple Match". If the value is not found in any of the three columns, it should also appear in Output 2 with the flag "No Match". I tried using self joins and OR clauses, but was not able to get what I want.
I want to write the SQL to generate both outputs. The outputs will include all the columns from the above table. For example:
Output 1 from the above sample data will look like:
ID          NUMBER 1  NUMBER 2  NUMBER 3  LOC
1-14H-4950            0616167   4233243   CA
A-522371              1112223             WA
Output 2 will look like:
ID          NUMBER 1  NUMBER 2  NUMBER 3  LOC  Flag
A-522423    1234567   2345678   1234567   NJ   Multiple Match
A-A-522427  9876543   6249853   6249853   NJ   Multiple Match
1234                                           No Match

A single SELECT statement cannot produce two result sets.
The main question is: why split the output when the only difference is the FLAG column? If you really need two separate outputs, you can do this:
(Preferably) create one common query in which the FLAG column is calculated, and split the output into two screens in the UI.
drop table test_dt;
create table test_dt as
select '1-14h-4950' id,null num1,616167 num2,4233243 num3,'ca' loc from dual
union all
select 'a-522355',null ,1234567,null,'tn' from dual union all
select 'a-522357',null ,9876543,null,'wy' from dual union all
select 'a-522371',null ,1112223,null,'wa' from dual union all
select 'a-522423',1234567,2345678,1234567,'nj' from dual union all
select 'a3-522423',null,null,null,'nj' from dual union all
select 'a-a-522427',9876543,6249853,6249853,'nj' from dual;
--
select
  d.*,
  case
    when t.cc_ndv = 0 and t.cc_null = 3 then 'Not matching'
    when t.cc_ndv = (3 - t.cc_null)     then 'Once'
    else 'Multiple match'
  end flag
  --t.cc_ndv,
  --t.cc_null
from test_dt d,
     lateral (
       select
         -- number of distinct non-null values among num1..num3
         count(distinct case level when 1 then num1
                                   when 2 then num2
                                   when 3 then num3
               end) cc_ndv,
         -- number of num1..num3 columns that are null
         count(distinct case level when 1 then nvl2(num1, null, 1)
                                   when 2 then nvl2(num2, null, 2)
                                   when 3 then nvl2(num3, null, 3)
               end) cc_null
       from dual
       connect by level <= 3 and sys_guid() is not null
     ) t;
Or:
Create a procedure (see dbms_sql.return_result) that returns several result sets.
Process the data from those cursors/result sets separately.
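A minimal sketch of that second option, assuming Oracle 12c or later (where dbms_sql.return_result is available) and the test_dt table created above; the flag logic is reduced to a single hypothetical search value p_val for brevity, so this is an illustration rather than the answer's exact method:
create or replace procedure get_match_outputs(p_val in number) as
  c_single sys_refcursor;
  c_other  sys_refcursor;
begin
  -- Output 1: rows where the value appears in exactly one of the three columns
  open c_single for
    select id, num1, num2, num3, loc
      from test_dt
     where decode(num1, p_val, 1, 0)
         + decode(num2, p_val, 1, 0)
         + decode(num3, p_val, 1, 0) = 1;
  dbms_sql.return_result(c_single);
  -- Output 2: rows with multiple matches or no match at all, plus a flag column
  open c_other for
    select id, num1, num2, num3, loc,
           case when decode(num1, p_val, 1, 0)
                   + decode(num2, p_val, 1, 0)
                   + decode(num3, p_val, 1, 0) > 1
                then 'Multiple Match'
                else 'No Match'
           end as flag
      from test_dt
     where decode(num1, p_val, 1, 0)
         + decode(num2, p_val, 1, 0)
         + decode(num3, p_val, 1, 0) <> 1;
  dbms_sql.return_result(c_other);
end;
/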

Related

SQL - split numeric into 2 columns?

I am trying to split some numeric keys in my table into separate columns (to help save space in SSAS by lowering cardinality).
My data looks like the below:
LeadKey
1
2
3
5522
83746623
I want to split these into 2 columns, with 4 digits in each column (where applicable; anything from 1 to 9999 won't have anything populated in the 2nd column).
So an example output of the above would be the below:
LeadKey   Split1  Split2
1         1
2         2
35566     3556    6
5522      5522
83746623  8374    6623
How could I achieve this? I have split columns easily before using SUBSTRING and a known character, but never had to do a split like this. Does anyone have an approach to handle this?
Here is a solution in case you have the LeadKey numbers as int.
select LeadKey
,left(LeadKey, 4) Split1
,right(LeadKey, case when len(LeadKey)-4 < 0 then 0 else len(LeadKey)-4 end) Split2
from t
LeadKey   Split1  Split2
1         1
2         2
35566     3556    6
5522      5522
83746623  8374    6623
In this example, I used LEFT for Split1 and show the values past the 4th position for Split2.
I've included a table variable to hold the testing values.
Feel free to adjust the code to work with your situation.
DECLARE @thelist TABLE
(
    LeadKey int
);

INSERT INTO @thelist (LeadKey)
select 1 union all
select 2 union all
select 35566 union all
select 5522 union all
select 83746623;

select cast(x1.LeadKey as varchar(19)) as LeadKey,
       left(x1.LeadKey, 4) as Split1,
       case when len(x1.LeadKey) > 4 then right(x1.LeadKey, len(x1.LeadKey) - 4)
            else '' end as Split2
from @thelist as x1;
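As a quick sanity check (a sketch, not part of the original answer; it relies on the same implicit int-to-varchar conversion and must run in the same batch as the table variable above), recombining the two pieces should reproduce every key:
select LeadKey
from @thelist
where cast(LeadKey as varchar(19)) <>
      left(LeadKey, 4) +
      case when len(LeadKey) > 4 then right(LeadKey, len(LeadKey) - 4) else '' end;
-- returns no rows if the split is lossless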

How to search for comma delimited string Oracle SQL? [duplicate]

I'm using Oracle APEX 4.2. I have a table with a column in it called 'versions'. In the 'versions' column for each row there is a list of values separated by commas, e.g. '1,2,3,4'.
I'm trying to create a Select List whose list of values will be each of the values that are separated by commas for one of the rows. What would the SQL query for this be?
Example:
Table Name: Products
Name | Versions
--------------------
myProd1 | 1,2,3
myProd2 | a,b,c
Desired output:
Two Select Lists.
The first one is obvious, I just select the name column from the products table. This way the user can select whatever product they want.
The second one is the one I'm not sure about. Let's say the user has selected 'myProd1' from the first Select List. Then the second Select List should contain the following list of values for the user to select from: '1.0', '1.1' or '1.2'.
After reading your latest comments, I understand that what you want is not an LOV but rather a list item (although it can be an LOV too). The first list item/LOV will have all the products, and the user only selects from it, e.g. Prod1, Prod2, Prod3... The second list item will have all the versions, converted from comma-separated values (as in your example) to a table (as in my examples below), because in my understanding the user may pick only a single value per product from this list. A single product may have many values, e.g. Prod1 has values 1, 2, 3, 4, but the user needs to select only one. Correct? This is why you need to convert the comma-separated values to a table. The first query is something like this:
SELECT prod_id
FROM your_prod_table
/
id
--------
myProd1
myProd2
.....
The second query should select all versions where product_id is in your_prod_table:
SELECT version FROM your_versions_table
WHERE prod_id IN (SELECT prod_id FROM your_prod_table)
/
Versions
--------
1,2,3,4 -- myProd1 values
a,b,c,d -- myProd2 values
.....
The above will return all versions for the product, e.g. all values for myProd1 etc...
Use my examples below for converting comma-separated values to a table. Replace the hardcoded '1,2,3,4' with the value column from your table, and replace dual with your table name.
If you need products and versions in a single query and single result then simply join/outer join (left, right join) both tables.
SELECT p.prod_id, v.version
FROM your_prod_table p
, your_versions_table v
WHERE p.prod_id = v.prod_id
/
In this case you will get something like this in the output:
id | Values
------------------
myProd1 | 1,2,3,4
myProd2 | a,b,c,d
If you convert the comma list to a table in the above query, then you will get this, all in one list or LOV:
id | Values
------------------
myProd1 | 1
myProd1 | 2
myProd1 | 3
myProd1 | 4
myProd2 | a
myProd2 | b
myProd2 | c
myProd2 | d
I hope this helps. Again, you may use an LOV or list items, whichever is available in APEX. Two separate lists of values, one for products and the other for versions, make more sense to me. In the case of list items you will need two separate queries as above, and it will be easier to do the comma-to-table conversion for the values/versions only. But it is up to you.
Comma to table examples:
-- Comma to table - regexp_count --
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) str_2_tab
FROM dual
CONNECT BY LEVEL <= regexp_count('1,2,3,4', ',')+1
/
-- Comma to table - Length -
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) token
FROM dual
CONNECT BY LEVEL <= length('1,2,3,4') - length(REPLACE('1,2,3,4', ',', ''))+1
/
-- Comma to table - instr --
SELECT trim(regexp_substr('1,2,3,4', '[^,]+', 1, LEVEL)) str_2_tab
FROM dual
CONNECT BY LEVEL <= instr('1,2,3,4', ',', 1, LEVEL - 1)
/
The output of all that above is the same:
STR_2_TAB
----------
1
2
3
4
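Applied to the Products table from the question, the same split can be correlated per row so that each version becomes its own row, as in the expanded output shown earlier. This is a sketch, assuming Oracle 11g or later (regexp_count, sys.odcinumberlist) and the question's column names:
SELECT p.name,
       trim(regexp_substr(p.versions, '[^,]+', 1, t.column_value)) AS version
FROM   products p,
       TABLE(CAST(MULTISET(
                SELECT level
                FROM   dual
                CONNECT BY level <= regexp_count(p.versions, ',') + 1
              ) AS sys.odcinumberlist)) t
/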
Comma to table - PL/SQL-APEX example. For LOV you need SQL not PL/SQL.
DECLARE
v_array apex_application_global.vc_arr2;
v_string varchar2(2000);
BEGIN
-- Convert delimited string to array
v_array:= apex_util.string_to_table('alpha,beta,gamma,delta', ',');
FOR i in 1..v_array.count LOOP
dbms_output.put_line('Array: '||v_array(i));
END LOOP;
-- Convert array to delimited string
v_string:= apex_util.table_to_string(v_array,'|');
dbms_output.put_line('String: '||v_string);
END;
/
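For the cascading case in APEX, the LOV query of the second select list can bind the product chosen in the first one. A sketch, assuming the Products table from the question and a hypothetical page item P1_PRODUCT holding the selected name (APEX dynamic LOVs expect a display column and a return column):
SELECT trim(regexp_substr(versions, '[^,]+', 1, level)) AS display_value,
       trim(regexp_substr(versions, '[^,]+', 1, level)) AS return_value
FROM   (SELECT versions FROM products WHERE name = :P1_PRODUCT)
CONNECT BY level <= regexp_count(versions, ',') + 1
/
Setting the first select list as the cascading LOV parent item makes the second list refresh whenever the product changes.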

redshift regex get multiple matches and expand rows

I'm working on the URL extraction on AWS Redshift. The URL column looks like this:
url                      item    origin
http://B123//ajdsb       apple   US
http://BYHG//B123        banana  UK
http://B325//BF89//BY85  candy   CA
The result I want is the series that start with B, expanded into separate rows when a URL contains more than one series.
extracted  item    origin
B123       apple   US
BYHG       banana  UK
B123       banana  UK
B325       candy   CA
BF89       candy   CA
BY85       candy   CA
My current code is:
select REGEXP_SUBSTR(url, '(B[0-9A-Z]{3})') as extracted, item, origin
from data
The regex part works well but I have problems with extracting multiple values and expand them to new rows. I tried to use REGEXP_MATCHES(url, '(B[0-9A-Z]{3})', 'g') but function regexp_matches does not exist on Redshift...
The solution I use is fairly ugly but achieves the desired results. It involves using REGEXP_COUNT to determine the maximum number of matches in a row then joining the resulting table of numbers to a query using REGEXP_SUBSTR.
-- Get the distinct match counts that occur across the rows
-- e.g. for the sample data (rows with 1, 2 and 3 matches) this returns 1, 2, 3
WITH n_table AS (
    SELECT DISTINCT REGEXP_COUNT(url, '(B[0-9A-Z]{3})') AS n
    FROM data
)
-- Join that number table to the data table and use n in the REGEXP_SUBSTR call
-- to pull out the nth match
SELECT
    REGEXP_SUBSTR(url, '(B[0-9A-Z]{3})', 1, n) AS extracted,
    item,
    origin
FROM data, n_table
-- Only keep non-null matches
WHERE n > 0
  AND REGEXP_COUNT(url, '(B[0-9A-Z]{3})') >= n
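Note that the distinct counts only cover 1..max when every intermediate count happens to occur in some row, which is the case for the sample data. A sketch of the same idea with an explicit index list; the fixed list of 1 to 5 (i.e. at most 5 matches per URL) is an assumption for illustration:
WITH numbers AS (
    SELECT 1 AS n UNION ALL SELECT 2 UNION ALL SELECT 3
    UNION ALL SELECT 4 UNION ALL SELECT 5
)
SELECT
    REGEXP_SUBSTR(d.url, '(B[0-9A-Z]{3})', 1, numbers.n) AS extracted,
    d.item,
    d.origin
FROM data d
JOIN numbers
  ON numbers.n <= REGEXP_COUNT(d.url, '(B[0-9A-Z]{3})')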
IronFarm's answer inspired me, though I wanted to find a solution that didn't require a cross join. Here's what I came up with:
with
-- raw data
src as (
    select 1 as id, 'abc def ghi' as stuff
    union all
    select 2 as id, 'qwe rty' as stuff
),
-- for each id, get a series of indexes for
-- each match in the string
match_idxs as (
    select
        id,
        generate_series(1, regexp_count(stuff, '[a-z]{3}')) as idx
    from src
)
select
    src.id,
    match_idxs.idx,
    regexp_substr(src.stuff, '[a-z]{3}', 1, match_idxs.idx) as stuff_match
from src
join match_idxs using (id)
order by id, idx;
This yields:
id | idx | stuff_match
----+-----+-------------
1 | 1 | abc
1 | 2 | def
1 | 3 | ghi
2 | 1 | qwe
2 | 2 | rty
(5 rows)

DB2: fill a dummy field with values in for loop while a select

I want to fill a dummy field with values in a for loop during a select.
Something like this (the account table, for example, has a field "login"):
select login, (for i = 1 to 3 {list = list.login.i.","}) as list from account
The result should be:
login | list
aaa   | aaa1,aaa2,aaa3
bbb   | bbb1,bbb2,bbb3
ccc   | ccc1,ccc2,ccc3
Can someone please tell me whether that is possible?
Many thanks!
If this is a one-off task and the size of your loop is fixed, you can make up a table of integers and do a Cartesian product with the table containing the login column:
SELECT ACC.LOGIN || NUMBRS.NUM
FROM ACCOUNT ACC,
     TABLE (
       SELECT '1' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
       SELECT '2' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
       SELECT '3' AS NUM FROM SYSIBM.SYSDUMMY1
     ) NUMBRS
which will give you strings like 'aaa1', 'aaa2', 'aaa3', one string per row. Then you can aggregate these strings with LISTAGG, as sketched below.
If the size is not fixed, you can always create a temporary table, fill it with the appropriate numbers, and use it instead of the NUMBRS table above.
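A sketch of that aggregation step, assuming DB2 9.7 or later (where LISTAGG is available) and the ACCOUNT table used above:
SELECT ACC.LOGIN,
       LISTAGG(ACC.LOGIN || NUMBRS.NUM, ',')
         WITHIN GROUP (ORDER BY NUMBRS.NUM) AS LIST
FROM ACCOUNT ACC,
     TABLE (
       SELECT '1' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
       SELECT '2' AS NUM FROM SYSIBM.SYSDUMMY1 UNION
       SELECT '3' AS NUM FROM SYSIBM.SYSDUMMY1
     ) NUMBRS
GROUP BY ACC.LOGIN
which should produce one row per login with the comma-separated list, matching the desired output in the question.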

SQL - How to returns values outside a date range query

Hoping someone can help out here, I have the following data
Field 1  Field 2  Date      Data
1        1        12/09/14  1
2        2        12/09/14  1
3        1        11/09/14  1
4        3        11/09/14  1
I need to write an SQL query that sums all "Data" based on a date range, plus anything that matches in Field 2. So if a line is outside the date range but its Field 2 value matches another line that is within the date range, it should be included.
For example, if I were to query everything for 12/09/14, I want to see the sum of lines 1, 2 and 3, as line 3 is outside of the date range but matches line 1 in the "Field 2" column. Line 4 should not be included, as it is outside the range and does not have a matching value in "Field 2".
Any ideas?
I've been playing around with variations of queries, but they either select only the date-range values or everything :(
EDIT:
OK, I've given Rajesh's answer a try and it doesn't seem to include the data outside the date range. I was expecting the final sum in this example to equal 3, but it's only showing 2:
select sum(a) from (
    select sum(batch_m2_nett) as a
    from batch_inf
    where batch_date = to_date('30/09/15','DD/MM/RR')
    union
    select sum(f2.batch_m2_nett) as a
    from batch_inf f1
    inner join batch_inf f2
      on f1.batch_date = to_date('30/09/15','DD/MM/RR')
     and f1.batch_opt_start_batch = f2.batch_opt_start_batch
     and f2.batch_date != to_date('30/09/15','DD/MM/RR')
);
SUM(A)
------
2
SQL> select batch_no, batch_opt_start_batch, batch_date, batch_m2_nett from batch_inf where batch_no in (8811,8812,8814);
BATCH_NO BATCH_OPT_START_BATCH BATCH_DATE BATCH_M2_NETT
-------- --------------------- --------------- -------------
8811 8814 30-SEP-15 1
8812 8814 30-SEP-15 1
8814 8814 01-OCT-15 1
The first statement gets the sum of the data values where the date matches.
The second statement gets the sum of the data values where Field 2 of a matched-date row also matches other rows, using a self join.
select SUM(s)
from (
    select SUM(data) as s
    from fields
    where date = '12/09/14'
    union
    select sum(f2.data) as s
    from fields f1
    inner join fields f2
      on f1.date = '12/09/14'
     and f1.field2 = f2.field2
     and f2.date != '12/09/14'
) T
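One observation on the edit above (not part of the original answer): UNION removes duplicate rows, so when the two branch sums happen to be equal only one of them survives, which is consistent with getting 2 instead of 3. A sketch that counts each qualifying row exactly once, using the question's column names:
select sum(data)
from fields f
where f.field2 in (select f1.field2
                   from fields f1
                   where f1.date = '12/09/14')
Every in-range row matches its own Field 2 value, so the IN condition alone covers both the in-range rows and the out-of-range rows that share a Field 2 value with one of them; for the sample data this gives 3.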