According to this question, the way to perform an equality check in Oracle where I want NULL to be considered equal to NULL is something like
SELECT COUNT(1)
FROM TableA
WHERE
wrap_up_cd = val
AND ((brn_brand_id = filter) OR (brn_brand_id IS NULL AND filter IS NULL))
This can really make my code dirty, especially if I have a lot of WHERE clauses like this and the condition is applied to several columns. Is there a better alternative for this?
Well, I'm not sure if this is better, but it might be slightly more concise to use LNNVL, a function (that you can only use in a WHERE clause) which returns TRUE if a given expression is FALSE or UNKNOWN (NULL). For example...
WITH T AS
(
SELECT 1 AS X, 1 AS Y FROM DUAL UNION ALL
SELECT 1 AS X, 2 AS Y FROM DUAL UNION ALL
SELECT 1 AS X, NULL AS Y FROM DUAL UNION ALL
SELECT NULL AS X, 1 AS Y FROM DUAL
)
SELECT
*
FROM
T
WHERE
LNNVL(X <> Y);
...will return all but the row where X = 1 and Y = 2.
As an alternative, you can use the NVL function with a designated literal that is returned when a value is null:
-- both are not nulls
SQL> with t1(col1, col2) as(
2 select 123, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
-- one of the values is null
SQL> with t1(col1, col2) as(
2 select null, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
no rows selected
-- both values are nulls
SQL> with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
As @Codo has noted in the comments, the above approach of course requires choosing a literal that the compared columns can never contain. If the compared columns are of NUMBER datatype (for example) and can accept any value, then choosing -1 of course won't be an option. To eliminate that restriction we can use the DECODE function (for numeric or character datatypes) instead:
with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where decode(col1, col2, 'same', 'different') = 'same'
7 ;
RES
----------
1
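Applied to the predicate from the original question, a minimal sketch (reusing the question's column names) relies on DECODE treating two NULLs as equal:
SELECT COUNT(1)
FROM TableA
WHERE wrap_up_cd = val
  AND DECODE(brn_brand_id, filter, 1, 0) = 1;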
With the LNNVL function, you still have a problem when col1 and col2 (x and y in the answer) are both null. With NVL it works, but it is inefficient (not understood by the optimizer) and you have to find a value that can never appear in the data (and the optimizer should know it cannot).
For strings you can choose a value that has more characters than the maximum length of the columns, but it is dirty.
The truly efficient way to do it is to use the (undocumented) function SYS_OP_MAP_NONNULL(), like this:
where SYS_OP_MAP_NONNULL(col1) <> SYS_OP_MAP_NONNULL(col2)
SYS_OP_MAP_NONNULL(a) is equivalent to nvl(a,'some internal value that cannot appear in the data but that is not null')
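For example, rewriting the predicate from the question as an equality check might look like this (a sketch only; since the function is undocumented, test it on your version before relying on it):
SELECT COUNT(1)
FROM TableA
WHERE wrap_up_cd = val
  AND SYS_OP_MAP_NONNULL(brn_brand_id) = SYS_OP_MAP_NONNULL(filter);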
Related
After every group / row I want to insert a hardcoded dummy row with a bunch of 'xxxx' to act as a separator.
I would like to use Oracle SQL to do this query. I can execute it using a loop, but I don't want to use PL/SQL.
As the others suggest, it is best to do it on the front end.
However, if you have a burning need to be done as a query, here is how.
Here I did not use the rownum function, as you have already done that. I assume your data is returned by a query, and you can replace my table with your query.
I made a few more assumptions, since you have data with row numbers in it.
[I am not sure what you mean by "not PL/SQL".]
Select Case When MOD(rownm, 2) = 0 Then ' '
            Else to_char((rownm + 1) / 2) End as rownm,
       name, total, column1
From
(
  select (rownm * 2 - 1) rownm, name, to_char(total) total, column1 from t
  union
  select (rownm * 2) rownm, 'XXX' name, 'XXX' total, 'The row act .... ' column1 from t
) Q
Order by Q.rownm;
and here is the fiddle
Since you're already grouping the data, it might be easier to use GROUPING SETS instead of a UNION.
Grouping sets let you group by multiple sets of columns, including the same set twice to duplicate rows. Then the GROUP_ID function can be used to determine when the fake values should be used. This code will be a bit smaller than a UNION approach, and should be faster since it doesn't need to reference the table multiple times.
select
case when group_id() = 0 then name else '' end name,
case when group_id() = 0 then sum(some_value) else null end total,
case when group_id() = 1 then 'this rows...' else '' end column1
from
(
select 'jack' name, 22 some_value from dual union all
select 'jack' name, 1 some_value from dual union all
select 'john' name, 44 some_value from dual union all
select 'john' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual union all
select 'harry' name, 1 some_value from dual
) raw_data
group by grouping sets (name, name)
order by raw_data.name, group_id();
You can use the row generator technique (using CONNECT BY) and then use CASE..WHEN as follows:
SQL> SELECT CASE WHEN L.LVL = 1 THEN T.ROWNM END AS ROWNM,
2 CASE WHEN L.LVL = 1 THEN T.NAME
3 ELSE 'XXX' END AS NAME,
4 CASE WHEN L.LVL = 1 THEN TO_CHAR(T.TOTAL)
5 ELSE 'XXX' END AS TOTAL,
6 CASE WHEN L.LVL = 1 THEN T.COLUMN1
7 ELSE 'This row act as separator..' END AS COLUMN1
8 FROM T CROSS JOIN (
9 SELECT LEVEL AS LVL FROM DUAL CONNECT BY LEVEL <= 2
10 ) L ORDER BY T.ROWNM, L.LVL;
     ROWNM NAME       TOTAL COLUMN1
---------- ---------- ----- ---------------------------
         1 Jack       23
           XXX        XXX   This row act as separator..
         2 John       45
           XXX        XXX   This row act as separator..
         3 harry      2
           XXX        XXX   This row act as separator..
         4 roy        45
           XXX        XXX   This row act as separator..
         5 Jacob      26
           XXX        XXX   This row act as separator..
10 rows selected.
SQL>
I just had this idea: how can I loop in SQL?
For example
I have this column
PARAMETER_VALUE
E,C;S,C;I,X;G,T;S,J;S,F;C,S;
I want to store every value before the (,) in a temp column and also store every value after the (;) in another column,
and it should not stop until there is no more value after the (;).
Expected Output for Example
COL1 E S I G S S C
COL2 C C X T J F S
etc . . .
You can get this by using the regexp_substr() function together with a connect by level <= clause:
with t1(PARAMETER_VALUE) as
(
select 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual
), t2 as
(
select level as rn,
regexp_substr(PARAMETER_VALUE,'([^,]+)',1,level) as str1,
regexp_substr(PARAMETER_VALUE,'([^;]+)',1,level) as str2
from t1
connect by level <= regexp_count(PARAMETER_VALUE,';')
)
select listagg( regexp_substr(str1,'([^;]+$)') ,' ') within group (order by rn) as col1,
listagg( regexp_substr(str2,'([^,]+$)') ,' ') within group (order by rn) as col2
from t2;
COL1 COL2
------------- -------------
E S I G S S C C C X T J F S
Demo
Assuming that you need to separate the input into rows, at the ; delimiters, and then into columns at the , delimiter, you could do something like this:
-- WITH clause included to simulate input data. Not part of the solution;
-- use actual table and column names in the SELECT statement below.
with
t1(id, parameter_value) as (
select 1, 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual union all
select 2, ',U;,;V,V;' from dual union all
select 3, null from dual
)
-- End of simulated input data
select id,
level as ord,
regexp_substr(parameter_value, '(;|^)([^,]*),', 1, level, null, 2) as col1,
regexp_substr(parameter_value, ',([^;]*);' , 1, level, null, 1) as col2
from t1
connect by level <= regexp_count(parameter_value, ';')
and id = prior id
and prior sys_guid() is not null
order by id, ord
;
 ID ORD COL1 COL2
--- --- ---- ----
  1   1 E    C
  1   2 S    C
  1   3 I    X
  1   4 G    T
  1   5 S    J
  1   6 S    F
  1   7 C    S
  2   1      U
  2   2
  2   3 V    V
  3   1
Note - this is not the most efficient way to split the inputs (nothing will be very efficient - the data model, which is in violation of First Normal Form, is the reason). This can be improved using standard instr and substr, but the query will be more complicated, and for that reason, harder to maintain.
I generated more input data, to illustrate a few things. You may have several inputs that must be broken up at the same time; that must be done with care. (Note the additional conditions in CONNECT BY). I also illustrate the handling of NULL - if a comma comes right after a semicolon, that means that the "column 1" part of that pair must be NULL. That is shown in the output.
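For completeness, here is a minimal sketch of the INSTR/SUBSTR alternative mentioned above. It only handles a single input string (handling several input rows would need the same PRIOR tricks as the regexp version) and assumes the input is well formed, i.e. each ";"-terminated pair contains exactly one comma:
with t1(parameter_value) as (
  select 'E,C;S,C;I,X;G,T;S,J;S,F;C,S;' from dual
)
select level as ord,
       -- piece before the comma of the level-th "a,b;" pair;
       -- the string is prefixed with ';' so the first pair starts at position 1
       substr(parameter_value,
              instr(';' || parameter_value, ';', 1, level),
              instr(parameter_value, ',', 1, level)
                - instr(';' || parameter_value, ';', 1, level)) as col1,
       -- piece between the level-th ',' and the level-th ';'
       substr(parameter_value,
              instr(parameter_value, ',', 1, level) + 1,
              instr(parameter_value, ';', 1, level)
                - instr(parameter_value, ',', 1, level) - 1) as col2
from t1
connect by level <= length(parameter_value) - length(replace(parameter_value, ';'));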
I have a problem in Oracle SQL: I'm trying to create a view that contains values with letters and numbers, and I want to sort them in a specific order.
Here is my query:
create or replace view table1_val (val, msg_text) as
select
val, msg_text
from
table_val
where
val in ('L1','L2','L3','L4','L5','L6','L7','L8','L9','L10','L11','L12','L13','L14','G1','G2','G3','G4')
order by lpad(val, 3);
The values are displayed like this:
G1,G2,G3,G4,L1,L2,L3,L4,L5,L6,L7,L8,L9,L10,L11,L12,L13
The thing is that I want to display the L values first and then the G values like in the where condition. The 'val' column is VARCHAR2(3 CHAR). The msg_text column is irrelevant. Can someone help me with that? I use Oracle 12C.
You must interpret the second part of the val column as a number:
order by
case when val like 'L%' then 0 else 1 end,
to_number(substr(val,2))
This works fine for your current data, but may fail in the future if a new record is added with a non-numeric structure.
More conservative (and harder to write), but safe, would be to use a DECODE for all the current keys, ordering unknown keys last (position 18 in the example):
order by
decode(val,
'L1',1,
'L2',2,
'L3',3,
'L4',4,
'L5',5,
'L6',6,
'L7',7,
'L8',8,
'L9',9,
'L10',10,
'L11',11,
'L12',12,
'L13',13,
'G1',14,
'G2',15,
'G3',16,
'G4',17,18)
You can't do anything based on the order of the WHERE condition.
But you can use a CASE expression in the ORDER BY:
ORDER BY CASE
WHEN SUBSTR(val, 1, 1) = 'L' THEN 1
WHEN SUBSTR(val, 1, 1) = 'G' THEN 2
ELSE 3
END,
TO_NUMBER (SUBSTR(val, 2, 10));
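Plugged into the view from the question, that might look like the following (a sketch only, reusing the question's table and column names):
CREATE OR REPLACE VIEW table1_val (val, msg_text) AS
SELECT val, msg_text
FROM   table_val
WHERE  val IN ('L1','L2','L3','L4','L5','L6','L7','L8','L9','L10',
               'L11','L12','L13','L14','G1','G2','G3','G4')
ORDER BY CASE
           WHEN SUBSTR(val, 1, 1) = 'L' THEN 1
           WHEN SUBSTR(val, 1, 1) = 'G' THEN 2
           ELSE 3
         END,
         TO_NUMBER(SUBSTR(val, 2, 10));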
Another option to consider might be using regular expressions, such as
SQL> with table1_val (val) as
2 (select 'L1' from dual union all
3 select 'L26' from dual union all
4 select 'L3' from dual union all
5 select 'L21' from dual union all
6 select 'L11' from dual union all
7 select 'L4' from dual union all
8 select 'G88' from dual union all
9 select 'G10' from dual union all
10 select 'G2' from dual
11 )
12 select val
13 from table1_val
14 order by regexp_substr(val, '^[[:alpha:]]+') desc,
15 to_number(regexp_substr(val, '\d+$'));
VAL
---
L1
L3
L4
L11
L21
L26
G2
G10
G88
9 rows selected.
SQL>
Is there a concept (with an implementation - in Oracle SQL for starters) which behaves like a 'universal' matcher ?
What I mean is; I know NULL is not equal to anything - including NULL.
Which is why you have to be careful to use 'IS NULL' rather than '= NULL' in SQL expressions.
I also know it is useful to use the NVL (in Oracle) function to detect a NULL and replace it with something in the output.
However: what you replace the NULL with using NVL has to match the datatype of the underlying column; otherwise you'll (rightly) get an error.
An example:
I have a table with a NULLABLE column 'name' of type VARCHAR2; and this contains a NULL row.
I can fetch out the NULL and replace it with an NVL like this:
SELECT NVL(name, 'NullyMcNullFace') from my_table;
Great.
But if the column happens to be a NUMBER (say 'age'), then I have to change my NVL:
SELECT NVL(age, 32) from my_table;
Also great.
Now if the column happens to be a DATE (say 'somedate'), then I have to change my NVL again:
SELECT NVL(somedate, sysdate) from my_table;
What I'm getting at here is that in order to deal with NULLs you have to replace them with a specific something, and that specific something has to 'fit' the datatype.
So is there a construct/concept (for want of a better word) like 'ANY' here?
Where 'ANY' would fit into a column of any datatype (like NULL), but (unlike NULL and unlike all other specific values) would match ANYTHING (including NULL - ? probably urghhh dunno).
So that I could do:
SELECT NVL(whatever_column, ANY) from my_table;
I think the answer is probably no; and probably 'go away, NULLs are bad enough - never mind this monster you have half-thought of'.
No, there's no "universal acceptor" value in SQL that is equal to everything.
What you can do is move the NVL into your comparison. For example, if you're trying to do a JOIN:
SELECT ...
FROM my_table m
JOIN other_table o ON o.name = NVL(m.name, o.name)
So if m.name is NULL, then the join will compare o.name to o.name, which is true whenever o.name itself is not NULL.
For other uses of NULL, you might have to use another technique that suits the situation.
Addressing the question in the comment on Bill Karwin's answer:
I want to output a 1 if the NEW and OLD value differ and a 0 if they are the same. But (for my purposes) I want to also return 0 for two NULLS.
select
Case When (:New = :Old) or
(:New is NULL and :Old is NULL) then 0
Else
1
End
from dual
In a WHERE clause you can put a condition like this:
WHERE column1 LIKE NVL(any_column_or_param, '%')
Note that rows where column1 itself is NULL will still be filtered out, since NULL LIKE '%' does not evaluate to true.
Perhaps DECODE() would suit your purpose here?
WITH t1 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
SELECT 2 ID, NULL val FROM dual UNION ALL
SELECT 3 ID, 1 val FROM dual UNION ALL
SELECT 4 ID, 2 val FROM dual UNION ALL
SELECT 5 ID, 5 val FROM dual),
t2 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
SELECT 2 ID, 3 val FROM dual UNION ALL
SELECT 3 ID, 1 val FROM dual UNION ALL
SELECT 4 ID, 4 val FROM dual UNION ALL
SELECT 6 ID, 5 val FROM dual)
SELECT t1.id t1_id,
t1.val t1_val,
t2.id t2_id,
t2.val t2_val,
DECODE(t1.val, t2.val, 0, 1) different_vals
FROM t1
FULL OUTER JOIN t2 ON t1.id = t2.id
ORDER BY COALESCE(t1.id, t2.id);
     T1_ID     T1_VAL      T2_ID     T2_VAL DIFFERENT_VALS
---------- ---------- ---------- ---------- --------------
         1                     1                         0
         2                     2          3              1
         3          1          3          1              0
         4          2          4          4              1
         5          5                                    1
                                6          5              1
I have a coworker looking for this, and I don't recall ever running into anything like that.
Is there a reasonable technique that would let you simulate it?
SELECT PRODUCT(X)
FROM
(
SELECT 3 X FROM DUAL
UNION ALL
SELECT 5 X FROM DUAL
UNION ALL
SELECT 2 X FROM DUAL
)
would yield 30
select exp(sum(ln(col)))
from table;
Edit: this works only if col is always > 0.
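A quick sanity check against the sample values from the question (just a sketch; ROUND is used because EXP/LN arithmetic may not return exactly 30):
select round(exp(sum(ln(x)))) as product
from (
  select 3 x from dual union all
  select 5 x from dual union all
  select 2 x from dual
);
-- returns 30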
DECLARE @a int
SET @a = 1
-- re-assign @a for each row in the result
-- as what @a was before * the value in the row
SELECT @a = @a * amount
FROM theTable
There's a way to do string concat that is similar:
DECLARE @b varchar(max)
SET @b = ''
SELECT @b = @b + CustomerName
FROM Customers
Here's another way to do it. This is definitely the longer way to do it but it was part of a fun project.
You've got to reach back to school for this one, lol. The key to remember here is that LOG is the inverse of the exponent.
LOG10(X*Y) = LOG10(X) + LOG10(Y)
or
ln(X*Y) = ln(X) + ln(Y) (ln = natural log, i.e. log base e)
Example
If X=5 and Y=6
X * Y = 30
ln(5) + ln(6) = 3.4
ln(30) = 3.4
e^3.4 = 30, which is also 5 x 6
EXP(3.4) = 30
So above, if 5 and 6 each occupied a row in the table, we take the natural log of each value, sum up the rows, then take the exponent of the sum to get 30.
Below is the code in a SQL statement for SQL Server. Some editing is likely required to make it run on Oracle. Hopefully it's not a big difference but I suspect at least the CASE statement isn't the same on Oracle. You'll notice some extra stuff in there to test if the sign of the row is negative.
CREATE TABLE DUAL (VAL INT NOT NULL)
INSERT DUAL VALUES (3)
INSERT DUAL VALUES (5)
INSERT DUAL VALUES (2)
SELECT
CASE SUM(CASE WHEN SIGN(VAL) = -1 THEN 1 ELSE 0 END) % 2
WHEN 1 THEN -1
ELSE 1
END
* CASE
WHEN SUM(VAL) = 0 THEN 0
WHEN SUM(VAL) IS NOT NULL THEN EXP(SUM(LOG(ABS(CASE WHEN SIGN(VAL) <> 0 THEN VAL END))))
ELSE NULL
END
* CASE MIN(ABS(VAL)) WHEN 0 THEN 0 ELSE 1 END
AS PRODUCT
FROM DUAL
The accepted answer by tuinstoel is correct, of course:
select exp(sum(ln(col)))
from table;
But notice that if col is of type NUMBER, you will find tremendous performance improvement when using BINARY_DOUBLE instead. Ideally, you would have a BINARY_DOUBLE column in your table, but if that's not possible, you can still cast col to BINARY_DOUBLE. I got a 100x improvement in a simple test that I documented here, for this cast:
select exp(sum(ln(cast(col as binary_double))))
from table;
Is there a reasonable technique that would let you simulate it?
One technique could be using LISTAGG to generate a product_expression string and XMLTABLE + dbms_xmlgen.getXMLType to evaluate it:
WITH cte AS (
SELECT grp, LISTAGG(l, '*') AS product_expression
FROM t
GROUP BY grp
)
SELECT c.*, s.val AS product_value
FROM cte c
CROSS APPLY(
SELECT *
FROM XMLTABLE('/ROWSET/ROW/*'
PASSING dbms_xmlgen.getXMLType('SELECT ' || c.product_expression || ' FROM dual')
COLUMNS val NUMBER PATH '.')
) s;
db<>fiddle demo
Output:
+------+---------------------+---------------+
| GRP | PRODUCT_EXPRESSION | PRODUCT_VALUE |
+------+---------------------+---------------+
| b | 2*6 | 12 |
| a | 3*5*7 | 105 |
+------+---------------------+---------------+
A more robust version that handles a single NULL value in the group:
WITH cte AS (
SELECT grp, LISTAGG(l, '*') AS product_expression
FROM t
GROUP BY grp
)
SELECT c.*, s.val AS product_value
FROM cte c
OUTER APPLY(
SELECT *
FROM XMLTABLE('/ROWSET/ROW/*'
passing dbms_xmlgen.getXMLType('SELECT ' || c.product_expression || ' FROM dual')
COLUMNS val NUMBER PATH '.')
WHERE c.product_expression IS NOT NULL
) s;
db<>fiddle demo
*CROSS/OUTER APPLY (Oracle 12c) is used for convenience and could be replaced with nested subqueries.
This approach could be used for generating different aggregation functions.
There are many different implementations of "SQL". When you say "does SQL have", are you referring to a specific ANSI version of SQL, or a vendor-specific implementation? DavidB's answer is one that works in a few different environments I have tested, but depending on your environment you could write or find a function exactly like what you are asking for. Say you were using Microsoft SQL Server 2005; then a possible solution would be to write a custom aggregate in .NET code named PRODUCT, which would allow your original query to work exactly as you have written it.
In SQL Server, for example, you might have to do:
SELECT EXP(SUM(LOG([col])))
FROM table;