How to unpivot a single row in Oracle 11?

I have a row of data, and I want to turn that row into a column so I can use a cursor to run through the data one value at a time. I have tried to use
SELECT * FROM TABLE(PIVOT(TEMPROW))
but I get a
'PIVOT' Invalid Identifier error.
I have also tried the same syntax but with
('select * from TEMPROW')
Every PIVOT example I see uses COUNT or SUM, but I just want this one single row of VARCHAR2 values to turn into a column.
My row would look something like this:
ABC | 123 | aaa | bbb | 111 | 222 |
And I need it to turn into this:
ABC
123
aaa
bbb
111
222
My code is similar to this:
BEGIN
  OPEN C_1 FOR SELECT * FROM TABLE(PIVOT('SELECT * FROM TEMPROW'));
  LOOP
    FETCH C_1 INTO TEMPDATA;
    EXIT WHEN C_1%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(TEMPDATA);
  END LOOP;
  CLOSE C_1;
END;

You have to use UNPIVOT to convert a whole row into a single column. The IN list names the source columns (here assumed to be col1..col6):
select col_value
from temprow
unpivot (col_value for col_name in (col1, col2, col3, col4, col5, col6))
or use UNION, but for that you need to add the column names manually, like:
select * from (
  select col1 from table
  union
  select col2 from table
  union ...
  select coln from table
)

One option for unpivoting would be to number the columns with decode() and cross join with a query containing the column numbers:
select decode(myId, 1, col1,
              2, col2,
              3, col3,
              4, col4,
              5, col5,
              6, col6) as result_col
from temprow
cross join (select level as myId from dual connect by level <= 6);
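With the sample row from the question, this should return the six values as rows (no ORDER BY is applied, so the order is not guaranteed):
RESULT_COL
----------
ABC
123
aaa
bbb
111
222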
Or use a query with the UNPIVOT keyword, keeping in mind that the common expression for the column (namely col in this case) must have the same datatype across all listed columns:
select result_col from
(
select col1, to_char(col2) as col2, col3, col4,
to_char(col5) as col5, to_char(col6) as col6
from temprow
)
unpivot (result_col for col in (col1,col2,col3,col4,col5,col6));
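Putting this together with the cursor loop from the question, a minimal sketch (assuming TEMPROW has columns col1..col6 as above):
DECLARE
  C_1      SYS_REFCURSOR;
  TEMPDATA VARCHAR2(4000);
BEGIN
  OPEN C_1 FOR
    SELECT result_col FROM (
      SELECT col1, TO_CHAR(col2) AS col2, col3, col4,
             TO_CHAR(col5) AS col5, TO_CHAR(col6) AS col6
      FROM temprow
    )
    UNPIVOT (result_col FOR col IN (col1, col2, col3, col4, col5, col6));
  LOOP
    FETCH C_1 INTO TEMPDATA;
    EXIT WHEN C_1%NOTFOUND;
    DBMS_OUTPUT.PUT_LINE(TEMPDATA);  -- prints each value on its own line
  END LOOP;
  CLOSE C_1;
END;
/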

Related

Snowflake SQL - OBJECT_CONSTRUCT from COUNT and GROUP BY

I'm trying to summarize data in a table:
counting total rows
counting values of specific fields
getting the distinct values of specific fields
and, more importantly, I'm struggling with:
getting the count of each value, nested in an object
Given this data:
COL1   COL2
A      0
null   1
B      null
B      null
the expected result from this query would be:
with dummy as (
select 'A' as col1, 0 as col2
union all
select null, 1
union all
select 'B', null
union all
select 'B', null
)
select
count(1) as total
,count(col1) as col1
,array_agg(distinct col1) as dist_col1
--,object_construct(???) as col1_object_count
,count(col2) as col2
,array_agg(distinct col2) as dist_col2
--,object_construct(???) as col2_object_count
from
dummy
TOTAL  COL1  DIST_COL1   COL1_OBJECT_COUNT          COL2  DIST_COL2  COL2_OBJECT_COUNT
4      3     ["A", "B"]  {"A": 1, "B": 2, null: 1}  2     [0, 1]     {0: 1, 1: 1, null: 2}
I've tried several functions inside OBJECT_CONSTRUCT mixed with ARRAY_AGG, but they all failed.
OBJECT_CONSTRUCT can work with several columns, but only when given all of them (*); if you try a SELECT statement inside it, it will fail.
Another issue is that analytic functions are not easily accepted by the object and array functions in Snowflake.
You could use Snowflake Scripting or Snowpark for this, but here's a solution that is somewhat flexible, so you can apply it to different tables and column sets.
Create test table/view:
Create or Replace View dummy as (
select 'A' as col1, 0 as col2
union all
select null, 1
union all
select 'B', null
union all
select 'B', null
);
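As a point of reference for the steps below (an illustration, not part of the original answer): OBJECT_CONSTRUCT_KEEP_NULL(*) turns each row into an object keyed by column name and, unlike OBJECT_CONSTRUCT, keeps keys whose value is null. A quick check against the view above:
-- Returns {"COL1":"A","COL2":0}, {"COL1":null,"COL2":1}, {"COL1":"B","COL2":null}, ...
select object_construct_keep_null(*) as row_object from dummy;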
Set session variables for table and colnames.
set tbname = 'DUMMY';
set colnames = '["COL1", "COL2"]';
Create view that generates the required table_column_summary data:
Create or replace View table_column_summary as
with
-- Create table of required column names
cn as (
select VALUE::VARCHAR CNAME
from table(flatten(input => parse_json($colnames)))
)
-- Convert rows into objects
,ro as (
select
object_construct_keep_null(*) row_object
-- using identifier on session variable to dynamically supply table/view name
from identifier($tbname) )
-- Flatten row objects into key/values
,rof as (
select
key col_name,
ifnull(value,'null')::VARCHAR col_value
from ro, lateral flatten(input => row_object), cn
-- You will only need this filter if you need a subset
-- of columns from the source table/query summarised
where col_name = cn.cname)
-- Get the column value distinct value counts
,cdv as (
select col_name,
col_value,
sum(1) col_value_count
from rof
group by 1,2
)
-- and derive required column level stats and combine with cdv
,cv as (
select
-- use identifier($tbname) here too, so the view is not tied to DUMMY
(select count(1) from identifier($tbname)) total,
col_name,
object_construct('COL_COUNT', count(col_value) ,
'COL_DIST', array_agg(distinct col_value),
'COL_OBJECT_COUNT', object_agg(col_value,col_value_count)) col_values
from cdv
group by 1,2)
-- Return result
Select * from cv;
Use this final query if you want a solution that works flexibly with any table/columns provided as input...
Select total, object_agg(col_name, col_values) col_values_obj
From table_column_summary
Group by 1;
Or use this final query if you want the fixed-columns output described in your question...
Select total,
COL1[0]:COL_COUNT COL1,
COL1[0]:COL_DIST DIST_COL1,
COL1[0]:COL_OBJECT_COUNT COL1_OBJECT_COUNT,
COL2[0]:COL_COUNT COL2,
COL2[0]:COL_DIST DIST_COL2,
COL2[0]:COL_OBJECT_COUNT COL2_OBJECT_COUNT
from table_column_summary
PIVOT ( ARRAY_AGG ( col_values )
FOR col_name IN ( 'COL1', 'COL2' ) ) as pt (total, col1, col2);

Find min max over all columns without listing down each column name in SQL

I have a SQL table (actually a BigQuery table) that has a huge number of columns (over a thousand). I want to quickly find the min and max value of each column. Is there a way to do that?
It is impossible for me to list all the columns. I'm looking for a way to do something like
SELECT MAX(*) FROM mytable;
and then running
SELECT MIN(*) FROM mytable;
I have been unable to Google a way of doing that, and I'm not sure it's even possible.
For example, if my table has the following schema:
col1 col2 col3 .... col1000
the (say, max) query should return
Row col1 col2 col3 ... col1000
1 3 18 0.6 ... 45
and the min query should return (say)
Row col1 col2 col3 ... col1000
1 -5 4 0.1 ... -5
The numbers are just for illustration. The column names could be different strings and not easily scriptable.
See the example below for BigQuery Standard SQL; it works for any number of columns and does not require explicitly listing the column names.
#standardSQL
WITH `project.dataset.mytable` AS (
SELECT 1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4 UNION ALL
SELECT 7,6,5,4 UNION ALL
SELECT -1, 11, 5, 8
)
SELECT
MIN(CAST(value AS INT64)) AS min_value,
MAX(CAST(value AS INT64)) AS max_value
FROM `project.dataset.mytable` t,
UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'":(.*?)(?:,"|})')) value
with result
Row min_value max_value
1 -1 11
Note: if your columns are of STRING data type, remove the CAST ... AS INT64; if they are FLOAT64, replace INT64 with FLOAT64 in the CAST.
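To see why the regular expression works, here is what TO_JSON_STRING produces for a row (an illustration, not part of the original answer):
#standardSQL
-- For the row (1, 2, 3, 4), TO_JSON_STRING(t) returns {"col1":1,"col2":2,"col3":3,"col4":4}.
-- The pattern r'":(.*?)(?:,"|})' captures each value between a '":' and the
-- following ',"' or closing '}', yielding ["1","2","3","4"].
SELECT REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'":(.*?)(?:,"|})') AS all_values
FROM (SELECT 1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4) t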
Update
Below is an option that gets the MIN/MAX for each column and presents the result as an array of the respective values, in column order:
#standardSQL
WITH `project.dataset.mytable` AS (
SELECT 1 AS col1, 2 AS col2, 3 AS col3, 14 AS col4 UNION ALL
SELECT 7,6,5,4 UNION ALL
SELECT -1, 11, 5, 8
), temp AS (
SELECT pos, MIN(CAST(value AS INT64)) min_value, MAX(CAST(value AS INT64)) max_value
FROM `project.dataset.mytable` t,
UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'":(.*?)(?:,"|})')) value WITH OFFSET pos
GROUP BY pos
)
SELECT 'min_values' stats, TO_JSON_STRING(ARRAY_AGG(min_value ORDER BY pos)) vals FROM temp UNION ALL
SELECT 'max_values', TO_JSON_STRING(ARRAY_AGG(max_value ORDER BY pos)) FROM temp
with this result:
Row stats vals
1 min_values [-1,2,3,4]
2 max_values [7,11,5,14]
Hope this is something you can apply to whatever your final goal is.

First query result to be modified and used in the second query SQL

I have a table1, say ABC, from which I am getting output from a column like 'MYname_GKS_50'. I want only the MYname part of that result to be used as a condition to fetch another column of a different table, as if I had given columnname = 'MYname' against table2.xyz. These two queries should be combined into a single SQL query in Oracle DB.
e.g.:
Table1 (col1, col2)
col2 has data MYname_GKS_50, MYname_GKS_51, MYname_GKS_52, Ora_10, Ora_11...
From col2 I want only the MYname_GKS and Ora parts for my search condition; the changing numbers are not required.
Table2 (col3, col4)
The value from col2, i.e. MYname_GKS or Ora, should now be compared with col3 of table2.
If it matches, it should give me col4 of table2.
Any suggestions, gurus!
You could use LIKE in the join condition:
on t1.col2 like t2.col3||'%'
Like here:
with table1 (col1, col2) as (
select 1, 'MYname_GKS_50' from dual union all
select 2, 'MYname_GKS_51' from dual union all
select 3, 'MYname_GKS_52' from dual union all
select 4, 'Ora_10' from dual union all
select 5, 'Ora_11' from dual ),
table2 (col3, col4) as (
select 'MYname_GKS', 'XYZ' from dual union all
select 'Ora', 'PQR' from dual )
select *
from table1 t1 join table2 t2 on t1.col2 like t2.col3||'%'
Result:
COL1 COL2 COL3 COL4
---------- ------------- ---------- ----
1 MYname_GKS_50 MYname_GKS XYZ
2 MYname_GKS_51 MYname_GKS XYZ
3 MYname_GKS_52 MYname_GKS XYZ
4 Ora_10 Ora PQR
5 Ora_11 Ora PQR
There is a risk that table2 contains more than one matching row, for instance both MYname and MYname_GKS, and I don't know what you want in that situation.
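If that case can occur, one option (a sketch, not from the original answer) is to keep only the longest matching prefix per table1 row:
select col1, col2, col3, col4
from (
  select t1.col1, t1.col2, t2.col3, t2.col4,
         row_number() over (partition by t1.col1
                            order by length(t2.col3) desc) rn
  from table1 t1 join table2 t2 on t1.col2 like t2.col3 || '%'
)
where rn = 1;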

Value in column containing comma delimited values

My SQL query returns something like this:
ID Col1 Col2
1 xsy 4,5,6
2 abc 4
3 hello 5,4
I want to filter the results by a search string that is also passed comma-separated and should match col2 in all scenarios.
E.g. if the search string is 4, it should return rows 1, 2 & 3; if the search string is 5, it should return rows 1 & 3.
If I pass multiple search values separated by a comma, like 4,5, it should return rows 1, 2 & 3, because each of these rows has at least one of the matching numbers.
Any idea how to accomplish this?
Add commas before and after the column value (to take care of the first and last items), then search for "comma value comma":
select * from tablename where ',' || col2 || ',' like '%,4,%'
|| is ANSI SQL for concat.
But you should really re-design your db and store values in a proper way!!!
Output when executed:
SQL>create table tso (id int, col1 varchar(10), col2 varchar(10));
SQL>insert into tso values (1,'xsy','4,5,6');
SQL>insert into tso values (2,'abc','4');
SQL>insert into tso values (3,'hello','5,4');
SQL>select * from tso where ',' || col2 || ',' like '%,4,%';
id col1 col2
=========== ========== ==========
1 xsy 4,5,6
2 abc 4
3 hello 5,4
3 rows found
SQL>insert into tso values (4,'extra','5,14');
SQL>insert into tso values (5,'extra','54,41');
SQL>select * from tso where ',' || col2 || ',' like '%,4,%';
id col1 col2
=========== ========== ==========
1 xsy 4,5,6
2 abc 4
3 hello 5,4
3 rows found
Another example:
SQL>select * from tso where ',' || col2 || ',' like '%,4,%'
SQL& and ',' || col2 || ',' like '%,5,%';
id col1 col2
=========== ========== ==========
1 xsy 4,5,6
3 hello 5,4
2 rows found
SQL>select * from tso where ',' || col2 || ',' like '%,4,%'
SQL& or ',' || col2 || ',' like '%,5,%';
id col1 col2
=========== ========== ==========
1 xsy 4,5,6
2 abc 4
3 hello 5,4
4 extra 5,14
4 rows found
That data model is horrible and will cause you a lot of pain in the future. If possible, you should introduce a detail table for the col2 values.
However, if you absolutely cannot change your data model, here's one way to do it:
split col2 into separate values (e.g. using regular expressions, if your RDBMS supports it)
use an IN list to filter out only the values you want
use a GROUP BY with HAVING to get only the rows that have all the values you want
Using Oracle syntax:
with v_data(id, col1, col2) as (
select 1, 'xsy', '4,5,6' from dual union all
select 2, 'abc', '4' from dual union all
select 3, 'hello', '5,4' from dual union all
select 4, 'hi', '45,511' from dual
),
v_data_with_numbers as (
select id, col1, col2,
regexp_substr(col2, '[[:digit:]]+', 1, level) as val
from v_data
connect by level <= 3
),
v_distinct_data_with_numbers as (
select distinct id, col1, col2, val
from v_data_with_numbers
where val is not null
order by id, col1, col2, val
)
select id, col1, col2
from v_distinct_data_with_numbers
where val in (4,5)
group by id, col1, col2
having count(distinct val) = 2 -- require both 4 and 5 to be present; omitting this line will give you all rows that have either one present
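For completeness, the detail table both answers recommend might look like this (a hypothetical sketch; the table and column names are made up):
-- One row per value instead of a comma-separated list
create table t_main   (id int primary key, col1 varchar2(10));
create table t_values (id int references t_main(id), val number);
-- "Rows containing both 4 and 5" then becomes a plain GROUP BY / HAVING:
select m.id, m.col1
from t_main m
join t_values v on v.id = m.id
where v.val in (4, 5)
group by m.id, m.col1
having count(distinct v.val) = 2;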
Or, on PostgreSQL, you can use string_to_array with ANY:
select *
from TABLENAME f
where 'searchvalue' = ANY (string_to_array(COLOUMNNAME,','))
Example
select *
from customer f
where '11' = ANY (string_to_array(customerids,','))
Or, simplest of all, though it would also match values like 14 or 41:
select * from table_name where col2 like '%4%'

Count comma separated values of all columns in Oracle SQL

I have already gone through a number of questions, and I couldn't find exactly what I am looking for.
Suppose I have a table as follows :
Col1 Col2 Col3
1,2,3 2,3,4,5,1 5,6
I need to get a result as follows using a select statement:
Col1 Col2 Col3
1,2,3 2,3,4,5,1 5,6
3 5 2
Note the added third column is the count of comma separated values.
Finding the count for a single column is simple, but this seems difficult if not impossible.
Thanks in advance.
select
col1,
regexp_count(col1, ',') + 1 as col1count,
col2,
regexp_count(col2, ',') + 1 as col2count,
col3,
regexp_count(col3, ',') + 1 as col3count
from t
Per "Count the number of elements in a comma separated string in Oracle", an easy way to do this is to count the number of commas and then add 1.
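One caveat worth adding (not in the original answer): Oracle treats the empty string as NULL, so regexp_count on a NULL or empty column yields NULL rather than a count; guard for it if your columns are nullable:
-- regexp_count(null, ',') + 1 is NULL, not 1
select case when col1 is null then 0
            else regexp_count(col1, ',') + 1 end as col1count
from t;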
You just need the result unioned onto your original data. So, do that:
SQL> with the_data (col1, col2, col3) as (
2 select '1,2,3', '2,3,4,5,1', '5,6' from dual
3 )
4 select a.*
5 from the_data a
6 union all
7 select to_char(regexp_count(col1, ',') + 1)
8 , to_char(regexp_count(col2, ',') + 1)
9 , to_char(regexp_count(col3, ',') + 1)
10 from the_data;
COL1 COL2 COL
----- --------- ---
1,2,3 2,3,4,5,1 5,6
3 5 2
You need to convert the result to a character because you're unioning a character to a number, which Oracle will complain about.
It's worth noting that storing data in this manner violates the first normal form. This makes it far more difficult to manipulate and almost impossible to constrain to be correct. It's worth considering normalising your data model to make this, and other queries, simpler.
Finding the count for a single column is simple, but this seems difficult if not impossible.
So you don't want to look at each column manually? You want it done dynamically.
The design is actually flawed since it violates normalization. But if you are willing to stay with it, then you could do it in PL/SQL using REGEXP_COUNT.
Something like,
SQL> CREATE TABLE t AS
2 SELECT '1,2,3' Col1,
3 '2,3,4,5,1' Col2,
4 '5,6' Col3
5 FROM dual;
Table created.
SQL>
SQL> DECLARE
2 cnt NUMBER;
3 BEGIN
4 FOR i IN
5 (SELECT column_name FROM user_tab_columns WHERE table_name='T'
6 )
7 LOOP
8 EXECUTE IMMEDIATE 'select regexp_count('||i.column_name||', '','') + 1 from t' INTO cnt;
9 dbms_output.put_line(i.column_name||' has cnt ='||cnt);
10 END LOOP;
11 END;
12 /
COL3 has cnt =2
COL2 has cnt =5
COL1 has cnt =3
PL/SQL procedure successfully completed.
SQL>
There is probably also an XML solution in SQL itself, without using PL/SQL.
In SQL -
SQL> WITH data AS
  2  ( SELECT '1,2,3' Col1, '2,3,4,5,1' Col2, '5,6' Col3 FROM dual
  3  )
  4  SELECT regexp_count(col1, ',') + 1 cnt1,
  5         regexp_count(col2, ',') + 1 cnt2,
  6         regexp_count(col3, ',') + 1 cnt3
  7  FROM data;
CNT1 CNT2 CNT3
---------- ---------- ----------
3 5 2
SQL>