Is there a concept (with an implementation, in Oracle SQL for starters) which behaves like a 'universal' matcher?
What I mean is: I know NULL is not equal to anything, including NULL.
Which is why you have to be careful to write 'IS NULL' rather than '= NULL' in SQL expressions.
I also know it is useful to use the NVL function (in Oracle) to detect a NULL and replace it with something in the output.
However: what you replace the NULL with using NVL has to match the datatype of the underlying column; otherwise you'll (rightly) get an error.
An example:
I have a table with a NULLABLE column 'name' of type VARCHAR2; and this contains a NULL row.
I can fetch out the NULL and replace it with an NVL like this:
SELECT NVL(name, 'NullyMcNullFace') from my_table;
Great.
But if the column happens to be a NUMBER (say 'age'), then I have to change my NVL:
SELECT NVL(age, 32) from my_table;
Also great.
Now if the column happens to be a DATE (say 'somedate'), then I have to change my NVL again:
SELECT NVL(somedate, sysdate) from my_table;
What I'm getting at here is that in order to deal with NULLs you have to replace them with a specific something, and that specific something has to 'fit' the datatype.
So is there a construct/concept (for want of a better word) like 'ANY' here?
Where 'ANY' would fit into a column of any datatype (like NULL), but (unlike NULL and unlike all other specific values) would match ANYTHING (including NULL - ? probably urghhh dunno).
So that I could do:
SELECT NVL(whatever_column, ANY) from my_table;
I think the answer is probably no; and probably 'go away, NULLs are bad enough - never mind this monster you have half-thought of'.
No, there's no "universal acceptor" value in SQL that is equal to everything.
What you can do is move the NVL into your comparison. For example, if you're trying to do a JOIN:
SELECT ...
FROM my_table m
JOIN other_table o ON o.name = NVL(m.name, o.name)
So if m.name is NULL, then the join will compare o.name to o.name, which is true whenever o.name is itself not NULL.
For other uses of NULL, you might have to use another technique that suits the situation.
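For example, a common NULL-tolerant pattern for optional filters (a sketch; :name_filter is a hypothetical bind variable) lets the condition switch itself off when the parameter is NULL:

SELECT *
FROM my_table
WHERE (:name_filter IS NULL OR name = :name_filter);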
Addressing the question in the comment on Bill Karwin's answer:
I want to output a 1 if the NEW and OLD value differ and a 0 if they are the same. But (for my purposes) I want to also return 0 for two NULLs.
select case
           when (:New = :Old)
             or (:New is NULL and :Old is NULL) then 0
           else 1
       end
from dual
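A more compact alternative relies on DECODE, which treats two NULLs as a match (a sketch of the equivalent expression):

select decode(:New, :Old, 0, 1)
from dual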
In a WHERE clause you can put a condition like this:
WHERE column1 LIKE NVL(any_column_or_param, '%')
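As a sketch of how that behaves (:name_filter is a hypothetical parameter): when it is NULL the condition degrades to column1 LIKE '%', which matches every non-NULL value. Note that rows where column1 itself is NULL still drop out, since NULL LIKE '%' is not true.

SELECT *
FROM my_table
WHERE name LIKE NVL(:name_filter, '%');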
Perhaps DECODE() would suit your purpose here?
WITH t1 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
            SELECT 2 ID, NULL val FROM dual UNION ALL
            SELECT 3 ID, 1 val FROM dual UNION ALL
            SELECT 4 ID, 2 val FROM dual UNION ALL
            SELECT 5 ID, 5 val FROM dual),
     t2 AS (SELECT 1 ID, NULL val FROM dual UNION ALL
            SELECT 2 ID, 3 val FROM dual UNION ALL
            SELECT 3 ID, 1 val FROM dual UNION ALL
            SELECT 4 ID, 4 val FROM dual UNION ALL
            SELECT 6 ID, 5 val FROM dual)
SELECT t1.id t1_id,
       t1.val t1_val,
       t2.id t2_id,
       t2.val t2_val,
       DECODE(t1.val, t2.val, 0, 1) different_vals
FROM t1
FULL OUTER JOIN t2 ON t1.id = t2.id
ORDER BY COALESCE(t1.id, t2.id);
     T1_ID     T1_VAL      T2_ID     T2_VAL DIFFERENT_VALS
---------- ---------- ---------- ---------- --------------
         1                     1                         0
         2                     2          3              1
         3          1          3          1              0
         4          2          4          4              1
         5          5                                    1
                               6          5              1
Hello, this is what I feel to be a simple question, but I cannot figure it out. I am trying to find the max value in relation to another column and group by that column; the issue is that one of the values is a string.
Name Value
Nate 0
Nate 1
Jeff 2
Nate 2
Nate 'Test'
For the data, I actually want 'Test' to be equal to 1. However, if I use the MAX() function here I will get:
Name Value
Nate 'Test'
Jeff 2
I can only think that maybe I could read 'Test' as 1 and then use the MAX() function (which I am not sure how to do), or possibly overload MAX() with my own definition somehow.
Thank you for any help you can give.
Storing mixed data in a string column is generally a bad idea.
You can convert a specific string to a fixed value with a case expression:
select max(case when value = 'Test' then '1' else value end) from ...
But you are still dealing with strings, so you probably want to convert them to numbers, to prevent '10' sorting before '2' for instance:
select max(to_number(case when value = 'Test' then '1' else value end)) from ...
or
select max(case when value = 'Test' then 1 else to_number(value) end) from ...
Using a CTE for your sample data:
-- CTE for dummy data
with your_table (name, value) as (
select 'Nate', '0' from dual
union all select 'Nate', '1' from dual
union all select 'Jeff', '2' from dual
union all select 'Nate', '2' from dual
union all select 'Nate', 'Test' from dual
)
-- actual query
select name,
       max(case when value = 'Test' then 1 else to_number(value) end) as value
from your_table
group by name;
NAME VALUE
---- ----------
Nate 2
Jeff 2
But you have to cover all values that cannot be explicitly or implicitly converted to numbers.
It would be slightly easier if you wanted to ignore non-numeric values, or treat them all as the same fixed value, rather than mapping individual strings to their own numeric values. Then you could write a function that attempts to convert any string, and returns null (or some other fixed value) if it gets an exception.
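Such a standalone function might look like this (a minimal sketch; the name safe_to_number is mine):

create or replace function safe_to_number(p_string varchar2) return number is
begin
  return to_number(p_string);
exception
  when others then
    return null; -- or some other fixed value of your choosing
end;
/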
From 12cR1 you can even do this with a PL/SQL declaration in the WITH clause, rather than a permanent standalone or packaged function, if this is an occasional thing:
with
  function hack_to_number(string varchar2) return number is
  begin
    return to_number(string);
  exception
    when others then
      return 1;
  end;
select name,
       max(hack_to_number(value)) as value
from your_table
group by name;
NAME VALUE
---- ----------
Nate 2
Jeff 2
You'd probably be better off going back and redesigning the data model to prevent this kind of issue by using the correct data types.
As @DrYWit pointed out in a comment, from 12cR2 you don't even need to do that, as the to_number() function has this handling built in, if you call it explicitly:
select name,
       max(to_number(value default 1 on conversion error)) as value
from your_table
group by name;
How about this regular expression "trick"?
SQL> with your_table (name, value) as (
2 select 'Nate', '0' from dual
3 union all select 'Nate', '1' from dual
4 union all select 'Jeff', '2' from dual
5 union all select 'Nate', '2' from dual
6 union all select 'Nate', 'Test' from dual
7 )
8 select name, max(to_number(value)) mv
9 from your_table
10 where regexp_like (value, '^\d+$')
11 group by name;
NAME MV
---- ----------
Nate 2
Jeff 2
SQL>
I have this table MyTable:
PROG VALUE
-------------
1 aaaaa
1 bbbbb
2 ccccc
4 ddddd
4 eeeee
Now I'm checking the existence of a tuple with a certain id with a query like:
SELECT COUNT(1) AS IT_EXISTS
FROM MyTable
WHERE ROWNUM = 1 AND PROG = {aProg}
For example, with aProg = 1 I obtain:
IT_EXISTS
---------
1
With aProg = 3 I get:
IT_EXISTS
---------
0
The problem is that I must do multiple queries, one for every value of PROG to check.
What I want is something that with a query like
SELECT PROG, ??? AS IT_EXISTS
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5) AND {some other condition}
I can get something like
PROG IT_EXISTS
------------------
1 1
2 1
3 0
4 1
5 0
The database is Oracle...
Hope I'm clear.
Regards,
Paolo
Take a step back and ask yourself this: Do you really need to return the rows that don't exist to solve your problem? I suspect the answer is no. Your application logic can determine that records were not returned which will allow you to simplify your query.
SELECT PROG
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5)
If you get a row back for a given PROG value, it exists. If not, it doesn't exist.
Update:
In your comment in the question above, you stated:
the prog values are from other tables. The table in the question has only a subset of all the prog values
This suggests to me that a simple left outer join could do the trick. Assuming your other table with the PROG values you're interested in is called MyOtherTable, something like this should work:
SELECT a.PROG,
       CASE WHEN b.PROG IS NOT NULL THEN 1 ELSE 0 END AS IT_EXISTS
FROM MyOtherTable a
LEFT OUTER JOIN MyTable b ON b.PROG = a.PROG
A WHERE clause could be tacked on to the end if you need to do some further filtering.
I would recommend something like this. If at most one row can match a prog in your table:
select p.prog,
       (case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
      select 2 as prog from dual union all
      select 3 as prog from dual union all
      select 4 as prog from dual union all
      select 5 as prog from dual
     ) p left join
     mytable t
     on p.prog = t.prog and <some conditions>;
If more than one row could match, you'll want to use aggregation to avoid duplicates:
select p.prog,
       max(case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
      select 2 as prog from dual union all
      select 3 as prog from dual union all
      select 4 as prog from dual union all
      select 5 as prog from dual
     ) p left join
     mytable t
     on p.prog = t.prog and <some conditions>
group by p.prog
order by p.prog;
One solution is to use (arguably abuse) a hierarchical query to create an arbitrarily long list of numbers (in my example, I've set the largest number to max(PROG), but you could hardcode this if you knew the top range you were looking for). Then select from that list and use EXISTS to check if it exists in MYTABLE.
select
    PROG
  , case when exists (select 1 from MYTABLE where PROG = A.PROG) then 1 else 0 end IT_EXISTS
from (
    select level PROG
    from dual
    connect by level <= (select max(PROG) from MYTABLE) -- or hardcode, if you have a max range in mind
) A
;
It's still not very clear where you get the prog values to check. But if you can read them from a table, and assuming that the table doesn't contain duplicate prog values, this is the query I would use:
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
  on a.prog = b.prog
 and exists (select null
             from MyTable t
             where t.prog = b.prog)
If you do need to hard code the values, you can do it rather simply by taking advantage of the SYS.DBMS_DEBUG_VC2COLL function, which allows you to convert a comma-delimited list of values into rows.
with prog_values_to_check(prog) as (
  select to_number(column_value) as prog
  from table(SYS.DBMS_DEBUG_VC2COLL(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)) -- type your values here
)
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
  on a.prog = b.prog
 and exists (select null
             from MyTable t
             where t.prog = b.prog)
Note: The above queries take into account that the MyTable table may have multiple rows with the same prog value, but that you only want one row in the result. I make this assumption based on the WHERE ROWNUM = 1 condition in your question.
According to this question, the way to perform an equality check in Oracle when I want NULL to be considered equal to NULL is something like:
SELECT COUNT(1)
FROM TableA
WHERE wrap_up_cd = val
  AND ((brn_brand_id = filter) OR (brn_brand_id IS NULL AND filter IS NULL))
This can really make my code dirty, especially if I have a lot of WHERE conditions like this, applied to several columns. Is there a better alternative for this?
Well, I'm not sure if this is better, but it might be slightly more concise to use LNNVL, a function (that you can only use in a WHERE clause) which returns TRUE if a given expression is FALSE or UNKNOWN (NULL). For example...
WITH T AS
(
    SELECT 1 AS X, 1 AS Y FROM DUAL UNION ALL
    SELECT 1 AS X, 2 AS Y FROM DUAL UNION ALL
    SELECT 1 AS X, NULL AS Y FROM DUAL UNION ALL
    SELECT NULL AS X, 1 AS Y FROM DUAL
)
SELECT *
FROM   T
WHERE  LNNVL(X <> Y);
...will return all but the row where X = 1 and Y = 2.
As an alternative you can use the NVL function with a designated literal which will be returned if a value is null:
-- both are not nulls
SQL> with t1(col1, col2) as(
2 select 123, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
-- one of the values is null
SQL> with t1(col1, col2) as(
2 select null, 123 from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
no rows selected
-- both values are nulls
SQL> with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where nvl(col1, -1) = nvl(col2, -1)
7 ;
RES
----------
1
As @Codo has noted in a comment, the above approach of course requires choosing a literal that the compared columns can never contain. If the compared columns are of NUMBER datatype (for example) and can accept any value, then choosing -1 of course won't be an option. To eliminate that restriction we can use the DECODE function (for numeric or character datatypes) instead:
SQL> with t1(col1, col2) as(
2 select null, null from dual
3 )
4 select 1 res
5 from t1
6 where decode(col1, col2, 'same', 'different') = 'same'
7 ;
RES
----------
1
With the LNNVL function you still have a problem when only one of col1 and col2 (x and y in the answer) is null: the inequality then evaluates to UNKNOWN, so LNNVL returns TRUE and the pair is treated as equal. With nvl it works, but it is inefficient (not understood by the optimizer) and you have to find a value that cannot appear in the data (and the optimizer should know it cannot).
For strings you can choose a value that has more characters than the maximum length of the columns, but it is dirty.
The truly efficient way to do it is to use the (undocumented) function SYS_OP_MAP_NONNULL().
like this:
where SYS_OP_MAP_NONNULL(col1) <> SYS_OP_MAP_NONNULL(col2)
SYS_OP_MAP_NONNULL(a) is equivalent to nvl(a,'some internal value that cannot appear in the data but that is not null')
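A quick sanity check of that behaviour (a sketch; the function is undocumented, so treat it with care):

-- two NULLs map to the same non-NULL internal value, so the row is returned
select 1 as res
from dual
where sys_op_map_nonnull(null) = sys_op_map_nonnull(null);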
I have a table with test fields. Example:
id    | test1 | test2 | test3 | test4 | test5
------+-------+-------+-------+-------+------
12345 | P     | P     | F     | I     | P
So for each record I want to know how many Pass, Failed or Incomplete (P,F or I)
Is there a way to GROUP BY value?
Pseudo:
SELECT ('P' IN (fields)) AS pass
WHERE id = 12345
I have about 40 test fields that I need to somehow group together and I really don't want to write this super ugly, long query. Yes I know I should rewrite the table into two or three separate tables but this is another problem.
Expected Results:
passed | failed | incomplete
-------+--------+-----------
3      | 1      | 1
Suggestions?
Note: I'm running PostgreSQL 7.4 and yes we are upgrading
I may have come up with a solution:
SELECT id
      ,l - length(replace(t, 'P', '')) AS nr_p
      ,l - length(replace(t, 'F', '')) AS nr_f
      ,l - length(replace(t, 'I', '')) AS nr_i
FROM  (SELECT id, test::text AS t, length(test::text) AS l FROM test) t
The trick works like this:
Transform the rowtype into its text representation.
Measure character-length.
Replace the character you want to count and measure the change in length.
Compute the length of the original row in the subselect for repeated use.
This requires that P, F, I are present nowhere else in the row. Use a sub-select to exclude any other columns that might interfere.
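For instance, a sub-select restricting the cast to just the test columns might look like this (a sketch using a ROW constructor; I have not verified it on 7.4):

SELECT id
      ,length(t) - length(replace(t, 'P', '')) AS nr_p
      ,length(t) - length(replace(t, 'F', '')) AS nr_f
      ,length(t) - length(replace(t, 'I', '')) AS nr_i
FROM  (SELECT id
             ,ROW(test1, test2, test3, test4, test5)::text AS t
       FROM   test) sub;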
Tested in 8.4 - 9.1. Nobody uses PostgreSQL 7.4 any more, so you'll have to test it yourself. I only use basic functions, but I am not sure if casting the rowtype to text is feasible in 7.4. If that doesn't work, you'll have to concatenate all test columns once by hand:
SELECT id
      ,length(t) - length(replace(t, 'P', '')) AS nr_p
      ,length(t) - length(replace(t, 'F', '')) AS nr_f
      ,length(t) - length(replace(t, 'I', '')) AS nr_i
FROM  (SELECT id, test1||test2||test3||test4||test5 AS t FROM test) t
This requires all columns to be NOT NULL.
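If they can be NULL, coalesce each column first (a sketch):

SELECT id
      ,coalesce(test1, '') || coalesce(test2, '') || coalesce(test3, '')
       || coalesce(test4, '') || coalesce(test5, '') AS t
FROM   test;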
Essentially, you need to unpivot your data by test:
id    | test  | result
------+-------+-------
12345 | test1 | P
12345 | test2 | P
12345 | test3 | F
12345 | test4 | I
12345 | test5 | P
...
so that you can then group it by test result.
Unfortunately, PostgreSQL doesn't have pivot/unpivot functionality built in, so the simplest way to do this would be something like:
select id, 'test1' test, test1 result from mytable union all
select id, 'test2' test, test2 result from mytable union all
select id, 'test3' test, test3 result from mytable union all
select id, 'test4' test, test4 result from mytable union all
select id, 'test5' test, test5 result from mytable union all
...
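The grouping step on top of that unpivot could then look like this (a sketch; unpivoted is my name for the union-all subquery):

select result, count(*) as cnt
from (
  select id, 'test1' test, test1 result from mytable union all
  select id, 'test2' test, test2 result from mytable union all
  select id, 'test3' test, test3 result from mytable union all
  select id, 'test4' test, test4 result from mytable union all
  select id, 'test5' test, test5 result from mytable
) unpivoted
where id = 12345
group by result;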
There are other ways of approaching this, but with 40 columns of data this is going to get really ugly.
EDIT: an alternative approach -
select r.result,
       sum(char_length(replace(replace(test1||test2||test3||test4||test5, excl1, ''), excl2, '')))
from mytable m,
     (select 'P' result, 'F' excl1, 'I' excl2 union all
      select 'F' result, 'P' excl1, 'I' excl2 union all
      select 'I' result, 'F' excl1, 'P' excl2) r
group by r.result
You could use an auxiliary on-the-fly table to turn columns into rows, then you would be able to apply aggregate functions, something like this:
SELECT
  SUM(CASE WHEN fields = 'P' THEN 1 ELSE 0 END) AS passed,
  SUM(CASE WHEN fields = 'F' THEN 1 ELSE 0 END) AS failed,
  SUM(CASE WHEN fields = 'I' THEN 1 ELSE 0 END) AS incomplete
FROM (
  SELECT
    t.id,
    CASE x.idx
      WHEN 1 THEN t.test1
      WHEN 2 THEN t.test2
      WHEN 3 THEN t.test3
      WHEN 4 THEN t.test4
      WHEN 5 THEN t.test5
    END AS fields
  FROM atable t
  CROSS JOIN (
    SELECT 1 AS idx
    UNION ALL SELECT 2
    UNION ALL SELECT 3
    UNION ALL SELECT 4
    UNION ALL SELECT 5
  ) x
  WHERE t.id = 12345
) s
Edit: just saw the comment about 7.4, I don't think this will work with that ancient version (unnest() came a lot later). If anyone thinks this is not worth keeping, I'll delete it.
Taking Erwin's idea to use the "row representation" as a base for the solution a bit further and automatically "normalize" the table on-the-fly:
select id,
       sum(case when flag = 'F' then 1 else null end) as failed,
       sum(case when flag = 'P' then 1 else null end) as passed,
       sum(case when flag = 'I' then 1 else null end) as incomplete
from (
    select id,
           unnest(string_to_array(trim(trailing ')' from substr(all_column_values, strpos(all_column_values, ',') + 1)), ',')) flag
    from (
        SELECT id,
               not_normalized::text AS all_column_values
        FROM not_normalized
    ) t1
) t2
group by id
The heart of the solution is Erwin's trick to make a single value out of the complete row using the cast not_normalized::text. The string functions are applied to strip off the leading id value and the parentheses around it.
The result of that is transformed into an array and that array is transformed into a result set using the unnest() function.
To understand that part, simply run the inner selects step by step.
Then the result is grouped and the corresponding values are counted.
I have the following table:
ID Data
1 A
2 A
2 B
3 A
3 B
4 C
5 D
6 A
6 B
etc. In other words, I have groups of data per ID. You will notice that the data group (A, B) occurs multiple times. I want a query that can identify the distinct data groups and number them, such as:
DataID Data
101 A
102 A
102 B
103 C
104 D
So DataID 102 would resemble data (A,B), DataID 103 would resemble data (C), etc. In order to be able to rewrite my original table in this form:
ID DataID
1 101
2 102
3 102
4 103
5 104
6 102
How can I do that?
PS. Code to generate the first table:
CREATE TABLE #t1 (id INT, data VARCHAR(10))
INSERT INTO #t1
SELECT 1, 'A'
UNION ALL SELECT 2, 'A'
UNION ALL SELECT 2, 'B'
UNION ALL SELECT 3, 'A'
UNION ALL SELECT 3, 'B'
UNION ALL SELECT 4, 'C'
UNION ALL SELECT 5, 'D'
UNION ALL SELECT 6, 'A'
UNION ALL SELECT 6, 'B'
In my opinion you have to create a custom aggregate that concatenates data (in the case of strings, the CLR approach is recommended for performance reasons).
Then I would group by ID and select distinct from the grouping, adding a ROW_NUMBER() or a DENSE_RANK(), your choice. Anyway, it should look something like this:
with groupings as (
    select concat(data) as groups
    from Table1
    group by ID
)
select groups, row_number() over (order by groups) from groupings
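For what it's worth, here is a sketch of the same idea without a custom CLR aggregate, using FOR XML PATH to do the concatenation (names are illustrative, and it assumes the #t1 sample table from the question):

with groupings as (
    select distinct
           t1.id,
           (select '' + t2.data
            from #t1 t2
            where t2.id = t1.id
            order by t2.data
            for xml path('')) as groups
    from #t1 t1
)
select id,
       groups,
       100 + dense_rank() over (order by groups) as DataID
from groupings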
The following query using CASE will give you the result shown below.
From there on, getting the distinct datagroups and proceeding further should not really be a problem.
SELECT
    id,
    MAX(CASE data WHEN 'A' THEN data ELSE '' END) +
    MAX(CASE data WHEN 'B' THEN data ELSE '' END) +
    MAX(CASE data WHEN 'C' THEN data ELSE '' END) +
    MAX(CASE data WHEN 'D' THEN data ELSE '' END) AS DataGroups
FROM #t1
GROUP BY id
ID DataGroups
1 A
2 AB
3 AB
4 C
5 D
6 AB
However, this kind of logic will only work if the "Data" values are both fixed and known beforehand.
In your case, you do say that is the case. However, considering that you also say there are 1000 of them, this will frankly be a ridiculous looking query for sure :-)
LuckyLuke's suggestion above would be the more generic, and probably saner, way to go about implementing the solution in your case.
From your sample data (having added the missing 2,'A' tuple), the following gives the renumbered (and uniquified) data:
with NonDups as (
    select t1.id
    from #t1 t1 left join #t1 t2
      on t1.id > t2.id and t1.data = t2.data
    group by t1.id
    having COUNT(t1.data) > COUNT(t2.data)
), DataAddedBack as (
    select ID, data
    from #t1 where id in (select id from NonDups)
), Renumbered as (
    select DENSE_RANK() OVER (ORDER BY id) as ID, Data from DataAddedBack
)
select * from Renumbered
Giving:
1 A
2 A
2 B
3 C
4 D
I think then, it's a matter of relational division to match up rows from this output with the rows in the original table.
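That matching step could be sketched as exact relational division, comparing set sizes on both sides (this assumes the Renumbered output above is available as a CTE or has been materialized, and that there are no duplicate (id, data) pairs):

select t.id, r.ID as DataID
from #t1 t
join Renumbered r on r.Data = t.data
group by t.id, r.ID
having count(*) = (select count(*) from #t1 x where x.id = t.id)
   and count(*) = (select count(*) from Renumbered y where y.ID = r.ID)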
Just to share my own dirty solution that I'm using for the moment:
SELECT DISTINCT t1.id, D.Data
FROM #t1 t1
CROSS APPLY (
    SELECT CAST(Data AS VARCHAR) + ','
    FROM #t1 t2
    WHERE t2.id = t1.id
    ORDER BY Data ASC
    FOR XML PATH('')
) D (Data)
And then proceeding analogously to LuckyLuke's solution.