Display query results based on condition - sql

More of a conceptual question. I have a query that calculates a sum of some values and checks it against a template value X, something like:
SELECT
SUM(...),
X,
SUM(...) - X AS delta
FROM
...
Now the query by itself works fine. The problem is that I need it to display results only if delta is non-zero, meaning there is a difference between the calculated sum and the template value X. If delta is zero, it should display nothing.
Is this possible to achieve in SQL? If yes, how would I go about it?

You have an aggregation query. Some databases (MySQL, for example) support column aliases in the HAVING clause:
select . . .
from . . .
group by . . .
having delta <> 0;
For those that don't, it is probably simplest to repeat the expressions:
having sum( . . . ) <> X
You can also put the query into a CTE or subquery, and then use where on the subquery.
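For example, wrapping the aggregate in an inline view and filtering on the alias might look like the sketch below (the table and column names order_lines, line_amount and expected_total are hypothetical, since the question elides the real ones):
SELECT *
FROM (
    SELECT order_id,
           SUM(line_amount) AS calc_total,
           MAX(expected_total) AS expected_total,
           SUM(line_amount) - MAX(expected_total) AS delta
    FROM order_lines
    GROUP BY order_id
) totals
WHERE delta <> 0;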

Oracle does not support column aliases in the HAVING clause, so in Oracle you have to repeat the aggregation in the HAVING clause.
See this:
SQL> SELECT MAX(COL1) AS RES,
2 COL2
3 FROM (select 1 as col1, 1 as col2 from dual
4 union all
5 select 10 as col1, 2 as col2 from dual)
6 GROUP BY COL2
7 HAVING RES > 5;
HAVING RES > 5
*
ERROR at line 7:
ORA-00904: "RES": invalid identifier
SQL> SELECT MAX(COL1) AS RES,
2 COL2
3 FROM (select 1 as col1, 1 as col2 from dual
4 union all
5 select 10 as col1, 2 as col2 from dual)
6 GROUP BY COL2
7 HAVING MAX(COL1) > 5;
RES COL2
---------- ----------
10 2
SQL>
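If you want to keep the RES alias in Oracle, wrap the aggregate in an inline view and filter in an outer query instead; a sketch reusing the same sample data (not part of the original answer):
SELECT res, col2
FROM (SELECT MAX(col1) AS res, col2
      FROM (select 1 as col1, 1 as col2 from dual
            union all
            select 10 as col1, 2 as col2 from dual)
      GROUP BY col2)
WHERE res > 5;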

Related

How to unpivot a single row in Oracle 11?

I have a row of data and I want to turn this row into a column so I can use a cursor to run through the data one by one. I have tried to use
SELECT * FROM TABLE(PIVOT(TEMPROW))
but I get
'PIVOT' Invalid Identifier error.
I have also tried that same syntax but with
('select * from TEMPROW')
Every example I see using PIVOT uses COUNT or SUM, but I just want this one single row of all VARCHAR2 values turned into a column.
My row would look something like this:
ABC | 123 | aaa | bbb | 111 | 222 |
And I need it to turn into this:
ABC
123
aaa
bbb
111
222
My code is similar to this:
BEGIN
OPEN C_1 FOR SELECT * FROM TABLE(PIVOT( 'SELECT * FROM TEMPROW'));
LOOP
FETCH C_1 INTO TEMPDATA;
EXIT WHEN C_1%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(1);
END LOOP;
CLOSE C_1;
END;
You have to unpivot to convert a whole row into a single column. The IN list must contain the column names of TEMPROW (assumed here to be col1 through col6, as in the answer below):
select * from TEMPROW
UNPIVOT
(col_value for col_name in (col1, col2, col3, col4, col5, col6))
or use UNION, but for that you need to add the column names manually, like this (a fuller sketch follows the snippet):
Select * from ( Select col1 from table
union
select col2 from table union...
Select coln from table)
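Spelled out, the UNION approach might look like the sketch below. It assumes the six columns are named col1 through col6 (the question does not give the real names), and uses UNION ALL so duplicate values are not silently removed:
SELECT col1 FROM temprow
UNION ALL SELECT col2 FROM temprow
UNION ALL SELECT col3 FROM temprow
UNION ALL SELECT col4 FROM temprow
UNION ALL SELECT col5 FROM temprow
UNION ALL SELECT col6 FROM temprow;
If the columns had mixed datatypes you would need TO_CHAR conversions here as well, as in the UNPIVOT variant below.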
One option for unpivoting would be numbering the columns with decode() and cross joining with a query that generates the column numbers:
select decode(myId, 1, col1,
2, col2,
3, col3,
4, col4,
5, col5,
6, col6 ) as result_col
from temprow
cross join (select level AS myId FROM dual CONNECT BY level <= 6 );
or use a query with the UNPIVOT keyword, bearing in mind that all of the unpivoted columns must share the same datatype (hence the TO_CHAR conversions for the numeric columns):
select result_col from
(
select col1, to_char(col2) as col2, col3, col4,
to_char(col5) as col5, to_char(col6) as col6
from temprow
)
unpivot (result_col for col in (col1,col2,col3,col4,col5,col6));

Find min max over all columns without listing down each column name in SQL

I have a SQL table (actually a BigQuery table) that has a huge number of columns (over a thousand). I want to quickly find the min and max value of each column. Is there a way to do that?
It is impossible for me to list all the columns. Looking for ways to do something like
SELECT MAX(*) FROM mytable;
and then running
SELECT MIN(*) FROM mytable;
I have been unable to Google a way of doing that. Not sure that's even possible.
For example, if my table has the following schema:
col1 col2 col3 .... col1000
the (say, max) query should return
Row col1 col2 col3 ... col1000
1 3 18 0.6 ... 45
and the min query should return (say)
Row col1 col2 col3 ... col1000
1 -5 4 0.1 ... -5
The numbers are just for illustration. The column names could be different strings and not easily scriptable.
See the example below for BigQuery Standard SQL; it works for any number of columns and does not require referencing the column names explicitly:
#standardSQL
WITH `project.dataset.mytable` AS (
SELECT 1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4 UNION ALL
SELECT 7,6,5,4 UNION ALL
SELECT -1, 11, 5, 8
)
SELECT
MIN(CAST(value AS INT64)) AS min_value,
MAX(CAST(value AS INT64)) AS max_value
FROM `project.dataset.mytable` t,
UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'":(.*?)(?:,"|})')) value
with result
Row min_value max_value
1 -1 11
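The trick is that TO_JSON_STRING(t) serialises the whole row as a single JSON object, so the regular expression can capture every column value without naming any column. A quick illustration of what the regex sees (a sketch, not part of the original answer):
#standardSQL
SELECT TO_JSON_STRING(t) AS row_as_json
FROM (SELECT 1 AS col1, 2 AS col2, 3 AS col3, 4 AS col4) t
-- returns {"col1":1,"col2":2,"col3":3,"col4":4}, from which r'":(.*?)(?:,"|})' captures 1, 2, 3 and 4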
Note: if your columns are of STRING data type - you should remove CAST ... AS INT64
Or if they are of FLOAT64 - replace INT64 with FLOAT64 in the CAST function
Update
Below is an option to get the MIN/MAX for each column and present the result as an array of the respective values, in the order of the columns:
#standardSQL
WITH `project.dataset.mytable` AS (
SELECT 1 AS col1, 2 AS col2, 3 AS col3, 14 AS col4 UNION ALL
SELECT 7,6,5,4 UNION ALL
SELECT -1, 11, 5, 8
), temp AS (
SELECT pos, MIN(CAST(value AS INT64)) min_value, MAX(CAST(value AS INT64)) max_value
FROM `project.dataset.mytable` t,
UNNEST(REGEXP_EXTRACT_ALL(TO_JSON_STRING(t), r'":(.*?)(?:,"|})')) value WITH OFFSET pos
GROUP BY pos
)
SELECT 'min_values' stats, TO_JSON_STRING(ARRAY_AGG(min_value ORDER BY pos)) vals FROM temp UNION ALL
SELECT 'max_values', TO_JSON_STRING(ARRAY_AGG(max_value ORDER BY pos)) FROM temp
with result as
Row stats vals
1 min_values [-1,2,3,4]
2 max_values [7,11,5,14]
Hope this is something you can still apply to whatever your final goal is.

Count comma separated values of all columns in Oracle SQL

I have already gone through a number of questions and I couldn't find exactly what I am looking for.
Suppose I have a table as follows :
Col1 Col2 Col3
1,2,3 2,3,4,5,1 5,6
I need to get a result as follows using a select statement:
Col1 Col2 Col3
1,2,3 2,3,4,5,1 5,6
3 5 2
Note the added third column is the count of comma separated values.
Finding the count for a single column is simple, but this seems difficult if not impossible.
Thanks in advance.
select
col1,
regexp_count(col1, ',') + 1 as col1count,
col2,
regexp_count(col2, ',') + 1 as col2count,
col3,
regexp_count(col3, ',') + 1 as col3count
from t
Per "Count the number of elements in a comma separated string in Oracle", an easy way to do this is to count the number of commas and then add 1.
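A quick sanity check of that arithmetic on one of the sample values (a sketch, not part of the original answer):
select regexp_count('2,3,4,5,1', ',') + 1 as cnt from dual
-- '2,3,4,5,1' contains 4 commas, so the count is 5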
You just need the result unioned onto your original data. So, do that:
SQL> with the_data (col1, col2, col3) as (
2 select '1,2,3', '2,3,4,5,1', '5,6' from dual
3 )
4 select a.*
5 from the_data a
6 union all
7 select to_char(regexp_count(col1, ',') + 1)
8 , to_char(regexp_count(col2, ',') + 1)
9 , to_char(regexp_count(col3, ',') + 1)
10 from the_data;
COL1 COL2 COL
----- --------- ---
1,2,3 2,3,4,5,1 5,6
3 5 2
You need to convert the result to a character because you're unioning a character to a number, which Oracle will complain about.
It's worth noting that storing data in this manner violates the first normal form. This makes it far more difficult to manipulate and almost impossible to constrain to be correct. It's worth considering normalising your data model to make this, and other queries, simpler.
Finding the count for a single column is simple, but this seems difficult if not impossible.
So you don't want to look at each column manually? You want it done dynamically.
The design is actually flawed since it violates normalization. But if you are willing to stay with it, then you could do it in PL/SQL using REGEXP_COUNT.
Something like,
SQL> CREATE TABLE t AS
2 SELECT '1,2,3' Col1,
3 '2,3,4,5,1' Col2,
4 '5,6' Col3
5 FROM dual;
Table created.
SQL>
SQL> DECLARE
2 cnt NUMBER;
3 BEGIN
4 FOR i IN
5 (SELECT column_name FROM user_tab_columns WHERE table_name='T'
6 )
7 LOOP
8 EXECUTE IMMEDIATE 'select regexp_count('||i.column_name||', '','') + 1 from t' INTO cnt;
9 dbms_output.put_line(i.column_name||' has cnt ='||cnt);
10 END LOOP;
11 END;
12 /
COL3 has cnt =2
COL2 has cnt =5
COL1 has cnt =3
PL/SQL procedure successfully completed.
SQL>
There is probably also an XML-based solution in SQL itself, without using PL/SQL.
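For completeness, one XML-based way to stay in plain SQL is to let DBMS_XMLGEN run the per-column count dynamically. This is only a sketch (it assumes the table T created above; note that EXTRACTVALUE is deprecated in recent Oracle releases) and was not part of the original answer:
SELECT column_name,
       TO_NUMBER(EXTRACTVALUE(
         dbms_xmlgen.getxmltype('select regexp_count(' || column_name || ', '','') + 1 c from t'),
         '/ROWSET/ROW/C')) AS cnt
FROM user_tab_columns
WHERE table_name = 'T';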
In SQL -
SQL> WITH DATA AS
2 ( SELECT '1,2,3' Col1, '2,3,4,5,1' Col2, '5,6' Col3 FROM dual
3 )
4 SELECT regexp_count(col1, ',') + 1 cnt1,
5 regexp_count(col2, ',') + 1 cnt2,
6 regexp_count(col3, ',') + 1 cnt3
7 FROM DATA;
CNT1 CNT2 CNT3
---------- ---------- ----------
3 5 2
SQL>

SQL - Group by numbers according to their difference

I have a table and I want to group rows that have at most x difference in col2.
For example,
col1 col2
abg 3
abw 4
abc 5
abd 6
abe 20
abf 21
After query I want to get groups such that
group 1: abg 3
abw 4
abc 5
abd 6
group 2: abe 20
abf 21
In this example the difference is 1.
How can I write such a query?
For Oracle (or anything that supports window functions) this will work:
select col1, col2, sum(group_gen) over (order by col2) as grp
from (
select col1, col2,
case when col2 - lag(col2) over (order by col2) > 1 then 1 else 0 end as group_gen
from some_table
)
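Since the question asks for an arbitrary gap x rather than a fixed 1, the same query can take the gap as a bind variable; a sketch (:x is an assumed bind variable, not part of the original answer):
select col1, col2, sum(group_gen) over (order by col2) as grp
from (
select col1, col2,
case when col2 - lag(col2) over (order by col2) > :x then 1 else 0 end as group_gen
from some_table
)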
This should get what you need (it uses MySQL-style user variables), and changing the gap to 5, or any other number, is a single change at the @lastVal +1 comparison (use whatever difference you need). The prequery "PreSorted" is required to make sure the data is processed sequentially, so you don't get out-of-order entries.
As each row is processed, its column 2 value is stored in @lastVal for the comparison on the next row, but it is still returned as the column "Col2". There is no "group by" because you just want a column identifying which group each row belongs to, not any aggregation.
select
@grp := if( PreSorted.col2 > @lastVal +1, @grp +1, @grp ) as GapGroup,
PreSorted.col1,
@lastVal := PreSorted.col2 as Col2
from
( select
YT.col1,
YT.col2
from
YourTable YT
order by
YT.col2 ) PreSorted,
( select @grp := 1,
@lastVal := -1 ) sqlvars
Try this query; you can use 1 and 2 as input and get your groups:
var grp number(5)
exec :grp :=1
select * from YourTABLE
where (:grp = 1 and col2 < 20) or (:grp = 2 and col2 > 6);

Is there better Oracle operator to do null-safe equality check?

According to this question, the way to perform an equality check in Oracle where I want NULL to be considered equal to NULL is something like:
SELECT COUNT(1)
FROM TableA
WHERE
wrap_up_cd = val
AND ((brn_brand_id = filter) OR (brn_brand_id IS NULL AND filter IS NULL))
This can really make my code dirty, especially if I have a lot of WHERE conditions like this applied to several columns. Is there a better alternative?
Well, I'm not sure if this is better, but it might be slightly more concise to use LNNVL, a function (that you can only use in a WHERE clause) which returns TRUE if a given expression is FALSE or UNKNOWN (NULL). For example...
WITH T AS
(
SELECT 1 AS X, 1 AS Y FROM DUAL UNION ALL
SELECT 1 AS X, 2 AS Y FROM DUAL UNION ALL
SELECT 1 AS X, NULL AS Y FROM DUAL UNION ALL
SELECT NULL AS X, 1 AS Y FROM DUAL
)
SELECT
*
FROM
T
WHERE
LNNVL(X <> Y);
...will return all but the row where X = 1 and Y = 2.
As an alternative you can use the NVL function and a designated literal which will be returned if a value is null:
-- both are not nulls
SQL> with t1(col1, col2) as(
2 select 123, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
-- one of the values is null
SQL> with t1(col1, col2) as(
2 select null, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
no rows selected
-- both values are nulls
SQL> with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
As @Codo has noted in the comments, the above approach requires choosing a literal that the compared columns can never contain. If the compared columns are of a number datatype (for example) and can accept any value, then choosing -1 won't be an option. To eliminate that restriction we can use the DECODE function (for numeric or character datatypes):
SQL> with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where decode(col1, col2, 'same', 'different') = 'same'
7 ;
RES
----------
1
With the LNNVL function, you still have a problem when only one of col1 and col2 (x and y in the answer) is null: that row is treated as a match even though the values differ. With nvl it works, but it is inefficient (not understood by the optimizer) and you have to find a value that cannot appear in the data (and the optimizer should know it cannot).
For strings you can choose a value that has more characters than the maximum length of the columns, but it is dirty.
The truly efficient way to do it is to use the (undocumented) function SYS_OP_MAP_NONNULL().
like this:
where SYS_OP_MAP_NONNULL(col1) <> SYS_OP_MAP_NONNULL(col2)
SYS_OP_MAP_NONNULL(a) is equivalent to nvl(a,'some internal value that cannot appear in the data but that is not null')
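Applied to the predicate from the question, that might look like the sketch below (table, column and variable names are the ones the question uses); it matches when both sides are equal or both are NULL:
SELECT COUNT(1)
FROM TableA
WHERE
wrap_up_cd = val
AND SYS_OP_MAP_NONNULL(brn_brand_id) = SYS_OP_MAP_NONNULL(filter)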