Detect all columns in an Oracle table which have the same value in each row - SQL

Every day, the requests get weirder and weirder.
I have been asked to put together a query to detect which columns in a table contain the same value for all rows. I said, "That needs to be done by a program, so that we can do it in one pass of the table instead of N passes."
I have been overruled.
So, long story short: I have this very simple query which demonstrates the problem. It makes 4 passes over the test set. I am looking for ideas for SQL Magery which do not involve adding indexes on every column, or writing a program, or taking a full human lifetime to run.
And, sigh, it needs to be able to work on any table.
Thanks in advance for your suggestions.
WITH TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
),
KOUNTS AS
(
SELECT SQRT(COUNT(*)) S, 'Column A' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.A = Q.A OR (P.A IS NULL AND Q.A IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column B' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.B = Q.B OR (P.B IS NULL AND Q.B IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column C' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.C = Q.C OR (P.C IS NULL AND Q.C IS NULL)
UNION ALL
SELECT SQRT(COUNT(*)) S, 'Column D' COLUMNS_WITH_SINGLE_VALUES
FROM TEST_CASE P, TEST_CASE Q
WHERE P.D = Q.D OR (P.D IS NULL AND Q.D IS NULL)
)
SELECT COLUMNS_WITH_SINGLE_VALUES
FROM KOUNTS
WHERE S = (SELECT COUNT(*) FROM TEST_CASE)

Do you mean something like this?
WITH
TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
select case when min(A) = max(A) THEN 'A'
when min(B) = max(B) THEN 'B'
when min(C) = max(C) THEN 'C'
when min(D) = max(D) THEN 'D'
else 'No one'
end
from TEST_CASE
Edit: this works:
WITH
TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
select case when min(nvl(A,0)) = max(nvl(A,0)) THEN 'A ' end ||
case when min(nvl(B,0)) = max(nvl(B,0)) THEN 'B ' end ||
case when min(nvl(C,0)) = max(nvl(C,0)) THEN 'C ' end ||
case when min(nvl(D,0)) = max(nvl(D,0)) THEN 'D ' end c
from TEST_CASE
Bonus: I have also added the check for the null values, so the result now is: A and D
And the SQLFiddle demo for you.
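Since the requirement is that it has to work on any table, the same MIN/MAX idea can be generated from the data dictionary. The block below is only a sketch, not part of the answer above: the table name TEST_CASE is a placeholder, LOB and LONG columns are skipped because MIN/MAX cannot be applied to them, and an extra COUNT comparison is added so that a column mixing NULL and non-NULL values is not reported as single-valued.
DECLARE
  v_sql    VARCHAR2(32767);
  v_result VARCHAR2(4000);
BEGIN
  FOR c IN (SELECT column_name
            FROM   user_tab_columns
            WHERE  table_name = 'TEST_CASE'   -- placeholder: your table here
            AND    data_type NOT IN ('CLOB', 'BLOB', 'LONG')
            ORDER  BY column_id)
  LOOP
    -- one CASE expression per column: single-valued when every row is NULL,
    -- or when no row is NULL and MIN = MAX
    v_sql := v_sql || CASE WHEN v_sql IS NOT NULL THEN ' || ' END ||
      'CASE WHEN COUNT("' || c.column_name || '") = 0 OR ' ||
           '(COUNT("' || c.column_name || '") = COUNT(*) AND ' ||
            'MIN("'   || c.column_name || '") = MAX("' || c.column_name || '")) ' ||
      'THEN ''' || c.column_name || ' '' END';
  END LOOP;
  EXECUTE IMMEDIATE 'SELECT ' || v_sql || ' FROM TEST_CASE' INTO v_result;
  dbms_output.put_line('Columns with a single value: ' || v_result);
END;
/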

Optimizer statistics can easily identify columns with more than one distinct value. After statistics are gathered a simple query against the data dictionary will return the results almost instantly.
The results will only be accurate on 10g if you use ESTIMATE_PERCENT = 100. The results will be accurate on 11g+ if you use ESTIMATE_PERCENT = 100 or AUTO_SAMPLE_SIZE.
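For example, to force a full sample on a single table (an illustration only; the schema-level call in the code below just uses the defaults):
begin
  dbms_stats.gather_table_stats(
    ownname          => user,
    tabname          => 'TEST_CASE',
    estimate_percent => 100);
end;
/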
Code
create table test_case(a varchar2(1), b number, c varchar2(3),d number,e number);
--I added a new test case, E. E has null and not-null values.
--This is a useful test because null and not-null values are counted separately.
insert into test_case
SELECT 'X' A, 5 B, 'FRI' C, NULL D, NULL E FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D, NULL E FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D, 1 E FROM DUAL;
--Gather stats with default settings, which uses AUTO_SAMPLE_SIZE.
--One advantage of this method is that you can quickly get information for many
--tables at one time.
begin
dbms_stats.gather_schema_stats(user);
end;
/
--All columns with at most one distinct value (i.e. the same value in every row).
--Note that nulls and not-nulls are counted differently:
--not-null values are counted distinctly, nulls are only counted in total.
select owner, table_name, column_name
from dba_tab_columns
where owner = user
and num_distinct + least(num_nulls, 1) <= 1
order by column_name;
OWNER TABLE_NAME COLUMN_NAME
------- ---------- -----------
JHELLER TEST_CASE A
JHELLER TEST_CASE D
Performance
On 11g, this method might be about as fast as mucio's SQL statement. Options like cascade => false would improve performance by not analyzing indexes.
But the great thing about this method is that it also produces useful statistics. If the system is already gathering statistics at regular intervals, the hard work may already be done.
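A quick way to verify that (illustrative; the STALE_STATS column comes from DBA_TAB_STATISTICS) is to look at when and how the statistics were last gathered:
select table_name, last_analyzed, num_rows, sample_size, stale_stats
from dba_tab_statistics
where owner = user
order by table_name;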
Details about AUTO_SAMPLE_SIZE algorithm
AUTO_SAMPLE_SIZE was completely changed in 11g. It does not use sampling for estimating number of distinct values (NDV). Instead it scans the whole table and uses a hash-based distinct algorithm. This algorithm does not require large amounts of memory or temporary tablespace. It's much faster to read the whole table than to sort even a small part of it. The Oracle Optimizer blog has a good description of the algorithm here. For even more details, see this presentation by Amit Podder. (You'll want to scan through that PDF if you want to verify the details in my next section.)
Possibility of a wrong result
Although the new algorithm does not use simple sampling, it still does not count the number of distinct values 100% correctly. It's easy to find cases where the estimated number of distinct values is not the same as the actual number. But if the estimate can be clearly inaccurate, how can it be trusted in this solution?
The potential inaccuracy comes from two sources - hash collisions and synopsis splitting. Synopsis splitting is the main source of inaccuracy, but it does not apply here: it only happens when there are more than 13,864 distinct values, and it never throws out all of the values, so the final estimate will certainly be much larger than 1.
The only real concern is the chance of there being exactly 2 distinct values that produce a hash collision. With a 64-bit hash the chance could be as low as 1 in 18,446,744,073,709,551,616. Unfortunately I don't know the details of their hashing algorithm and don't know the real probability. I was unable to produce any collisions from some simple testing and from previous real-life tests. (One of my tests was to use large values, since some statistics operations only use the first N bytes of data.)
Now also consider that this will only happen if all of the distinct values in the table collide. What are the chances of there being a table with only two values that just happen to collide? Probably much less than the chance of winning the lottery and getting struck by a meteorite at the same time.

If you can live with the result on a single line, this should only scan once:
WITH TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
SELECT
CASE WHEN COUNT(DISTINCT A) +
COUNT(DISTINCT CASE WHEN A IS NULL THEN 1 END) = 1
THEN 1 ELSE 0 END SAME_A,
CASE WHEN COUNT(DISTINCT B) +
COUNT(DISTINCT CASE WHEN B IS NULL THEN 1 END) = 1
THEN 1 ELSE 0 END SAME_B,
CASE WHEN COUNT(DISTINCT C) +
COUNT(DISTINCT CASE WHEN C IS NULL THEN 1 END) = 1
THEN 1 ELSE 0 END SAME_C,
CASE WHEN COUNT(DISTINCT D) +
COUNT(DISTINCT CASE WHEN D IS NULL THEN 1 END) = 1
THEN 1 ELSE 0 END SAME_D
FROM TEST_CASE
An SQLfiddle to test with.
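If you prefer the original one-row-per-column output, the single-line result can be unpivoted on 11g+. This is only a sketch (the literal inner row stands in for the SAME_A..SAME_D query above):
SELECT 'Column ' || col AS columns_with_single_values
FROM (
  -- replace this literal row with the SAME_A..SAME_D query above
  SELECT 1 SAME_A, 0 SAME_B, 0 SAME_C, 1 SAME_D FROM dual
)
UNPIVOT (flag FOR col IN (SAME_A AS 'A', SAME_B AS 'B', SAME_C AS 'C', SAME_D AS 'D'))
WHERE flag = 1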

This will be done in a single scan:
WITH
TEST_CASE AS
(
SELECT 'X' A, 5 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 3 B, 'FRI' C, NULL D FROM DUAL UNION ALL
SELECT 'X' A, 7 B, 'TUE' C, NULL D FROM DUAL
)
select decode(count(distinct nvl(A,0)),1,'SINGLE','MULTP') COL_A,
decode(count(distinct nvl(B,0)),1,'SINGLE','MULTP') COL_B,
decode(count(distinct nvl(C,0)),1,'SINGLE','MULTP') COL_C,
decode(count(distinct nvl(D,0)),1,'SINGLE','MULTP') COL_D
from TEST_CASE

Related

How to choose table based on parameterized database name?

My code takes in a parameter ${ID}$ (a string), and based on what ID evaluates to I want to choose a different table to use. I guess I can't use a CASE inside a FROM clause. Some example code looks like:
select *
from ${ID}$_charges.transaction_charge
where execution_date = '2011-03-22'
So if ID is 'N', I want to use the transaction_charge table, so the statement resolves to N_charges.transaction_charge.
However, if ID is 'B' or 'P', I want to use a different table called conformity_charge, and the statement would evaluate to B_charges.conformity_charge or P_charges.conformity_charge.
How can I write this statement?
If you have a low number of possible tables to target, the closest you can get, apart from dynamic SQL, is:
NOTE: Depending on the capabilities of your database engine and the size of your tables, there might be performance penalties that may or may not matter.
SELECT a, b, c
FROM (
SELECT 'N' as TableName, a, b, c
FROM N_charges.transaction_charge
UNION ALL
SELECT 'P' as TableName, a, b, c
FROM P_charges.conformity_charge
UNION ALL
SELECT 'B' as TableName, a, b, c
FROM B_charges.conformity_charge
) t
WHERE TableName = '${ID}$'
-- Another variation
SELECT a, b, c
FROM N_charges.transaction_charge
WHERE 'N' = '${ID}$'
UNION ALL
SELECT a, b, c
FROM P_charges.conformity_charge
WHERE 'P' = '${ID}$'
UNION ALL
SELECT a, b, c
FROM B_charges.conformity_charge
WHERE 'B' = '${ID}$'
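And if dynamic SQL is acceptable after all, a PL/SQL sketch (assuming an Oracle-style environment; the ${ID}$ substitution, the column list, and the bind type are assumptions you would adjust to your templating mechanism) could pick the table name first and open a cursor on the composed statement:
DECLARE
  v_id    VARCHAR2(1) := '${ID}$';   -- assumed to be substituted by your tool
  v_table VARCHAR2(100);
  v_cur   SYS_REFCURSOR;
BEGIN
  v_table := CASE v_id
               WHEN 'N' THEN 'N_charges.transaction_charge'
               WHEN 'B' THEN 'B_charges.conformity_charge'
               WHEN 'P' THEN 'P_charges.conformity_charge'
             END;
  OPEN v_cur FOR
    'SELECT * FROM ' || v_table || ' WHERE execution_date = :dt'
    USING '2011-03-22';   -- bind value type should match execution_date's datatype
  -- fetch from v_cur here, or return it to the caller
END;
/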

How to unnest and pivot two columns in BigQuery

Say I have a BQ table containing the following information
id  test.name  test.score
--  ---------  ----------
1   a          5
    b          7
2   a          8
    c          3
Where test is nested. How would I pivot test into the following table?
id  a  b  c
--  -  -  -
1   5  7
2   8     3
I cannot pivot test directly, as I get the following error message at pivot(test): Table-valued function not found. Previous questions (1, 2) don't deal with nested columns or are outdated.
The following query looks like a useful first step:
select a.id, t
from `table` as a,
unnest(test) as t
However, this just provides me with:
id  test.name  test.score
--  ---------  ----------
1   a          5
1   b          7
2   a          8
2   c          3
Conditional aggregation is a good approach. If your tables are large, you might find that this has the best performance:
select t.id,
(select max(tt.score) from unnest(t.test) tt where tt.name = 'a') as a,
(select max(tt.score) from unnest(t.test) tt where tt.name = 'b') as b,
(select max(tt.score) from unnest(t.test) tt where tt.name = 'c') as c
from `table` t;
The reason I recommend this is because it avoids the outer aggregation. The unnest() happens without shuffling the data around -- and I have found that this is a big win in terms of performance.
One option could be using conditional aggregation:
select id,
max(case when t.name='a' then t.score end) as a,
max(case when t.name='b' then t.score end) as b,
max(case when t.name='c' then t.score end) as c
from
(
select a.id, t
from `table` as a,
unnest(test) as t
)A group by id
Below is a generic/dynamic way to handle your case:
EXECUTE IMMEDIATE (
SELECT """
SELECT id, """ ||
STRING_AGG("""MAX(IF(name = '""" || name || """', score, NULL)) AS """ || name, ', ')
|| """
FROM `project.dataset.table` t, t.test
GROUP BY id
"""
FROM (
SELECT DISTINCT name
FROM `project.dataset.table` t, t.test
ORDER BY name
)
);
If applied to the sample data from your question, the output is:
Row  id  a  b     c
1    1   5  7     null
2    2   8  null  3
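As a side note (not part of either answer above): if the PIVOT operator is available in your project and the names are known up front, you can also unnest first and then pivot the flattened rows, which avoids the "Table-valued function not found" error from the question:
select *
from (
  select a.id, t.name, t.score
  from `project.dataset.table` a, unnest(test) t
)
pivot (max(score) for name in ('a', 'b', 'c'))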

How to update a column for all rows after each time one row is processed by a UDF in BigQuery?

I'm trying to update a column for all rows after each time one row is processed by a UDF.
The example has 3 rows with 6 columns. Column "A" has the same value across all 3 rows; columns "A" and "B" together identify each row; column "C" holds arrays of letters drawn from a,b,c,d,e; column "D" is the target array to be filled in; column "E" holds integers; column "abcde" is an integer array of 5 counts, one for each letter a,b,c,d,e.
Each row will be passed into a UDF to update column "D" and column "abcde" according to columns "C" and "E". The rule is: select the number of items specified by "E" from "C" and put them into "D"; the selection is random; after each selection is done for a row, the column "abcde" is updated across all rows.
For example, to process the first row, we randomly select one item from ('a','b','c') to put into "D". Let's say the system picked 'c' from column "C", so the value in "D" for this row becomes ['c'] and 'abcde' gets updated to [1,3,1,1,1] (it was [1,3,2,1,1] before) for all three rows.
Example data:
#StandardSQL in BigQuery
#code to generate the example table
with sample as (
select 'y1' as A, 'x1' as B, ['a','b','c'] as C, [] as D, 1 as E, [1,3,2,1,1] as abcde union all
select 'y1','x2',['a','b'],[],2,[1,3,2,1,1] union all
select 'y1','x3',['c','d','e'],[],3,[1,3,2,1,1])
select * from sample order by B
After the first row is processed:
with sample as (
select 'y1' as A, 'x1' as B, ['a','b','c'] as C, ['c'] as D, 1 as E, [1,3,1,1,1] as abcde union all
select 'y1','x2',['a','b'],[],2,[1,3,1,1,1] union all
select 'y1','x3',['c','d','e'],[],3,[1,3,1,1,1])
select * from sample order by B
After the second row is processed:
with sample as (
select 'y1' as A, 'x1' as B, ['a','b','c'] as C, ['c'] as D, 1 as E, [0,2,1,1,1] as abcde union all
select 'y1','x2',['a','b'],['a','b'],2,[0,2,1,1,1] union all
select 'y1','x3',['c','d','e'],[],3,[0,2,1,1,1])
select * from sample order by B
After the third row is processed:
with sample as (
select 'y1' as A, 'x1' as B, ['a','b','c'] as C, ['c'] as D, 1 as E, [0,2,0,0,0] as abcde union all
select 'y1','x2',['a','b'],['a','b'],2,[0,2,0,0,0] union all
select 'y1','x3',['c','d','e'],['c','d','e'],3,[0,2,0,0,0])
select * from sample order by B
Don't worry about how the UDF will do the random selection. I'm just wondering if it's possible in BigQuery to update the column 'abcde' in the way I want.
I've tried using UDFs, but I'm struggling to get it working because my understanding of a UDF is that it can only take one row in and produce multiple rows out. So, I can't update the other rows. Is it possible just using SQL?
Additional information:
create temporary function selection(A string, B string, C ARRAY<STRING>, D ARRAY<STRING>, E INT64, abcde ARRAY<INT64>)
returns STRUCT<A STRING, B STRING, C ARRAY<STRING>, D ARRAY<STRING>, E INT64, abcde ARRAY<INT64>>
language js AS """
/*
for the row i in the data:
select the number i.E of items (randomly) from i.C where the number associated with the item in i.abcde is bigger than 0 (i.e. only the items with counts in abcde bigger than 0 can be candidates for the random selection);
put the selected items in i.D and deduct the amount of selected items from the number for the corresponding item in the column 'abcde' FOR ALL ROWS;
proceed to the next row i+1 until every row is processed;
*/
return {A,B,C,D,E,abcde}
""";
with sample as (
select 'y1' as A, 'x1' as B, ['a','b','c'] as C, CAST([] AS ARRAY<STRING>) as D, 1 as E, [1,3,2,1,1] as abcde union all
select 'y1','x2',['a','b'],[],2,[1,3,2,1,1] union all
select 'y1','x3',['c','d','e'],[],2,[1,3,2,1,1])
select selection(A,B,C,D,E,abcde) from sample order by B
Below is for BigQuery Standard SQL
#StandardSQL
WITH sample AS (
SELECT 'y1' AS A, 'x1' AS B, ['a','b','c'] AS C, ['c'] AS D, 1 AS E, [1,3,2,1,1] AS abcde UNION ALL
SELECT 'y1','x2',['a','b'],['a','b'],2,[1,3,2,1,1] UNION ALL
SELECT 'y1','x3',['c','d','e'],['c','d','e'],3,[1,3,2,1,1] UNION ALL
SELECT 'y2' AS A, 'x1' AS B, ['a','b','c'] AS C, ['a','b'] AS D, 2 AS E, [1,3,2,1,1] AS abcde UNION ALL
SELECT 'y2','x2',['a','b'],['b'],1,[1,3,2,1,1] UNION ALL
SELECT 'y2','x3',['c','d','e'],['d','e'],2,[1,3,2,1,1]
),
counts AS (
SELECT A AS AA, dd, COUNT(1) AS cnt
FROM sample, UNNEST(D) AS dd
GROUP BY AA, dd
),
processed AS (
SELECT A, B, ARRAY_AGG(aa - IFNULL(cnt, 0) ORDER BY pos) AS abcde
FROM sample, UNNEST(abcde) AS aa WITH OFFSET AS pos
LEFT JOIN counts ON A = counts.AA
AND CASE dd
WHEN 'a' THEN 0
WHEN 'b' THEN 1
WHEN 'c' THEN 2
WHEN 'd' THEN 3
WHEN 'e' THEN 4
END = pos
GROUP BY A, B
)
SELECT s.A, s.B, s.C, s.D, s.E, p.abcde
FROM sample AS s
JOIN processed AS p
USING (A, B)
-- ORDER BY A, B
As for "Don't worry about how the UDF will do the random selection" - as you can see, I just put "random" values into the sample data to mimic D.

SQL query : how to check existence of multiple rows with one query

I have this table MyTable:
PROG VALUE
-------------
1 aaaaa
1 bbbbb
2 ccccc
4 ddddd
4 eeeee
Now I'm checking for the existence of a tuple with a certain id with a query like:
SELECT COUNT(1) AS IT_EXISTS
FROM MyTable
WHERE ROWNUM = 1 AND PROG = {aProg}
For example, with aProg = 1 I obtain:
IT_EXISTS
---------
1
With aProg = 3 I get:
IT_EXISTS
---------
0
The problem is that I must do multiple queries, one for every value of PROG to check.
What I want is something that with a query like
SELECT PROG, ??? AS IT_EXISTS
FROM MyTable
WHERE PROG IN {1, 2,3, 4, 5} AND {some other condition}
I can get something like
PROG IT_EXISTS
------------------
1 1
2 1
3 0
4 1
5 0
The database is Oracle...
Hope I'm clear.
Regards,
Paolo
Take a step back and ask yourself this: Do you really need to return the rows that don't exist to solve your problem? I suspect the answer is no. Your application logic can determine that records were not returned which will allow you to simplify your query.
SELECT PROG
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5)
If you get a row back for a given PROG value, it exists. If not, it doesn't exist.
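Since the sample data lists some PROG values more than once (1 and 4 both appear twice), adding DISTINCT keeps the result to one row per value:
SELECT DISTINCT PROG
FROM MyTable
WHERE PROG IN (1, 2, 3, 4, 5)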
Update:
In your comment in the question above, you stated:
the prog values are from other tables. The table in the question has only a subset of all the prog values
This suggests to me that a simple left outer join could do the trick. Assuming your other table with the PROG values you're interested in is called MyOtherTable, something like this should work:
SELECT a.PROG,
CASE WHEN b.PROG IS NOT NULL THEN 1 ELSE 0 END AS IT_EXISTS
FROM MyOtherTable a
LEFT OUTER JOIN MyTable b ON b.PROG = a.PROG
A WHERE clause could be tacked on to the end if you need to do some further filtering.
I would recommend something like this. If at most one row can match a prog in your table:
select p.prog,
(case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
select 2 as prog from dual union all
select 3 as prog from dual union all
select 4 as prog from dual union all
select 5 as prog from dual
) p left join
mytable t
on p.prog = t.prog and <some conditions>;
If more than one row could match, you'll want to use aggregation to avoid duplicates:
select p.prog,
max(case when t.prog is null then 0 else 1 end) as it_exists
from (select 1 as prog from dual union all
select 2 as prog from dual union all
select 3 as prog from dual union all
select 4 as prog from dual union all
select 5 as prog from dual
) p left join
mytable t
on p.prog = t.prog and <some conditions>
group by p.prog
order by p.prog;
One solution is to use (arguably abuse) a hierarchical query to create an arbitrarily long list of numbers (in my example, I've set the largest number to max(PROG), but you could hardcode this if you knew the top range you were looking for). Then select from that list and use EXISTS to check if it exists in MYTABLE.
select
PROG
, case when exists (select 1 from MYTABLE where PROG = A.PROG) then 1 else 0 end IT_EXISTS
from (
select level PROG
from dual
connect by level <= (select max(PROG) from MYTABLE) --Or hardcode, if you have a max range in mind
) A
;
It's still not very clear where you get the prog values to check. But if you can read them from a table, and assuming that the table doesn't contain duplicate prog values, this is the query I would use:
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
on a.prog = b.prog
and exists (select null
from MyTable t
where t.prog = b.prog)
If you do need to hard code the values, you can do it rather simply by taking advantage of the SYS.DBMS_DEBUG_VC2COLL function, which allows you to convert a comma-delimited list of values into rows.
with prog_values_to_check(prog) as (
select to_number(column_value) as prog
from table(SYS.DBMS_DEBUG_VC2COLL(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)) -- type your values here
)
select a.prog, case when b.prog is null then 0 else 1 end as it_exists
from prog_values_to_check a
left join prog_values_to_check b
on a.prog = b.prog
and exists (select null
from MyTable t
where t.prog = b.prog)
Note: The above queries take into account that the MyTable table may have multiple rows with the same prog value, but that you only want one row in the result. I make this assumption based the WHERE ROWNUM = 1 condition in your question.

Simplest SQL expression to check if two columns have same value accounting for NULL [duplicate]

This question already has answers here:
Is there better Oracle operator to do null-safe equality check?
(3 answers)
Closed 7 years ago.
I am trying to figure out the simplest generalized SQL expression that can check if two columns a and b are the same. In other words, an expression that evaluates to true when:
a is NULL and b is NULL; or
a is not NULL and b is not NULL and a = b
Assume columns a and b have exactly the same data type.
The most obvious solution, which I'm using in the below example, is horribly convoluted, particularly because I need to repeat this clause 15x in a 15-column table:
SELECT * FROM (
SELECT 'x' a, 'x' b FROM dual
UNION ALL
SELECT 'x' a, NULL b FROM dual
UNION ALL
SELECT NULL a, 'x' b FROM dual
UNION ALL
SELECT NULL a, NULL b FROM dual
UNION ALL
SELECT 'x' a, 'y' b FROM dual
UNION ALL
SELECT 'x' a, NULL b FROM dual
UNION ALL
SELECT NULL a, 'y' b FROM dual
UNION ALL
SELECT NULL a, NULL b FROM dual
)
WHERE (a IS NULL AND b IS NULL) OR
(a IS NOT NULL AND b IS NOT NULL AND a = b)
/
And the expected result is:
+--------+--------+
| a | b |
+--------+--------+
| x | x |
| (null) | (null) |
| (null) | (null) |
+--------+--------+
tl;dr - Can I simplify my WHERE clause, i.e. make it more compact, while keeping it logically correct?
P.S.: I couldn't give a damn about any SQL purist insistence that "NULL is not a value". For my practical purposes, if a contains NULL and b does not, then a differs from b. It is not "unknown" whether they differ. So please, in advance, no arguments up that alley!
P.P.S.: My SQL flavour is Oracle 11g.
P.P.P.S.: Someone decided this question is a duplicate of "Is there better Oracle operator to do null-safe equality check?" but a cursory check in that question will show that the answers are less helpful than the ones posted on this thread and do not satisfy my particular, and explicitly-stated criteria. Just because they are similar doesn't make them duplicates. I've never understood why people on SO work so hard to force my problem X to be someone else's problem Y.
You can readily simplify it as:
WHERE (a IS NULL AND b IS NULL) OR
(a = b)
The IS NOT NULL is not needed.
If you have a "safe" value (i.e. one that is never used), you can do this:
WHERE COALESCE(a, ' ') = COALESCE(b, ' ')
This assumes that ' ' is not a valid value.
I have found the Ask Tom article "Safely Comparing NULL Columns As Equal" to be the most helpful. In Oracle, you can use the DECODE function to do this:
WHERE 1 = DECODE(a, b, 1, 0)
And this is the most compact solution I have seen so far.
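A quick way to convince yourself: running the question's own sample rows through the DECODE form returns the same three rows as the original WHERE clause.
SELECT * FROM (
  SELECT 'x' a, 'x' b FROM dual UNION ALL
  SELECT 'x' a, NULL b FROM dual UNION ALL
  SELECT NULL a, 'x' b FROM dual UNION ALL
  SELECT NULL a, NULL b FROM dual UNION ALL
  SELECT 'x' a, 'y' b FROM dual UNION ALL
  SELECT 'x' a, NULL b FROM dual UNION ALL
  SELECT NULL a, 'y' b FROM dual UNION ALL
  SELECT NULL a, NULL b FROM dual
)
WHERE 1 = DECODE(a, b, 1, 0)
/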
Simple is not necessarily performant.
Consider this possibility:
WHERE X || 'x' = Y || 'x'
If you want to really push the envelope, use SYS_OP_MAP_NONNULL:
SELECT * FROM (
SELECT 'x' a, 'x' b FROM dual
UNION ALL
SELECT 'x' a, NULL b FROM dual
UNION ALL
SELECT NULL a, 'x' b FROM dual
UNION ALL
SELECT NULL a, NULL b FROM dual
UNION ALL
SELECT 'x' a, 'y' b FROM dual
UNION ALL
SELECT 'x' a, NULL b FROM dual
UNION ALL
SELECT NULL a, 'y' b FROM dual
UNION ALL
SELECT NULL a, NULL b FROM dual
)
WHERE SYS_OP_MAP_NONNULL(a) = SYS_OP_MAP_NONNULL(b)