Checking distinct values at column level - SQL

I have four columns coming from my query. My requirement is to select a row only when the values of all four columns differ from one another; a NULL in either column of a pair lets that pair pass.
I have written this query and it is working fine, but I was wondering whether there is a better or shorter way to achieve this:
select FO, AFO, CO, ACO
from mytable
where (fo <> afo or fo is null or afo is null)
  and (fo <> co or fo is null or co is null)
  and (fo <> aco or fo is null or aco is null)
  and (afo <> co or afo is null or co is null)
  and (afo <> aco or afo is null or aco is null)
  and (co <> aco or co is null or aco is null)

Hmmm . . . you seem to want the four values to be different or NULL. A different method uses apply:
select t.*
from mytable t cross apply
     (select count(*)
      from (values (t.afo), (t.fo), (t.co), (t.aco)) v(val)
      where val is not null
      having count(*) = count(distinct val)
     ) x;
This removes the NULL values and then checks that the remaining ones are all distinct.
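A quick way to see the filter in action, with hypothetical sample data (not from the question): the first row below survives because its non-NULL values are all distinct, while the second is removed because FO and AFO collide.
create table mytable (FO int, AFO int, CO int, ACO int);
insert into mytable values
    (1, 2, 3, NULL),  -- kept: the NULL is ignored and 1, 2, 3 are distinct
    (1, 1, 2, 3);     -- removed: FO = AFO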

Related

Use MERGE with certain conditions and the ROW_NUMBER function

I am trying to do a MERGE (insert and update) with the ROW_NUMBER function, so that the ID_TRANS field holds unique values while certain conditions apply to the remaining fields. But when executing it I get a "missing right parenthesis" error. It is worth mentioning that I have modified and added parentheses and it remains unresolved.
MERGE INTO TBL_TRANSAC trans
USING (
SELECT ID_TRANS,
TIT,
BEN,
BAN,
CTA_EMI,
CTA_REC,
INST,
TYPE_TRANS,
TYPE_MOV,
CONC,
DATE_OPER,
MONT,
DIV,
ID_CONT
FROM (
SELECT T1.*
, ROW_NUMBER() OVER (PARTITION BY T1.ID_TRANS ORDER BY T1.ID_TRANS DESC)ENUMERADO
FROM (
SELECT
'speibco1_'||UPPER(REPLACE( AREA,' ',''))||
UPPER(REPLACE( FVALOR,' ',''))||
UPPER(REPLACE( CLAVE_RASTREO,' ','')) ,
TIT ,
BEN,
BAN_EM_DES,
REPLACE(UPPER(NO_TP_CTA_EMISOR),' ',''),
LTRIM(CTA_REC,'0'),
'BAN ACTINVER',
'CTA_EXTER',
'SPEI ENTRADA BCO',
CONC_2 ,
TO_DATE(TO_CHAR(FVALOR),'YYYY-MM-DD'),
REPLACE(REPLACE(IMPORTE,'-',''),' ',''),
'MXN' ,
LTRIM(CTA_REC,'0')||'0999'
FROM (SELECT *
FROM IBM_I2.I2_SPEI WHERE REPLACE(NO_TP_CTA_EMISOR,' ','') IS NOT NULL
AND ID_OPERACION='0007'
AND ESTATUS='06'
AND CTA_REC NOT IN ('70000997', '7909567'))
)
WHERE ENUMERADO=1
AND ID_TRANS IS NOT NULL
)SPEI
ON (
trans.ID_TRANS = SPEI.ID_TRANS
)
WHEN MATCHED THEN
UPDATE SET
ID_TRANS = SPEI.ID_TRANS,
TIT = SPEI.TIT ,
BEN= SPEI.BEN,
BAN=SPEI.BAN_EMISOR,
CTA_EMI=SPEI.CTA_EMI,
CTA_REC =SPEI.CTA_REC,
INST= SPEI.INST,
TYPE_TRANS=SPEI.TYPE_TRANS,
TYPE_MOV=SPEI.TYPE_MOV,
CONC=SPEI.CONC,
DATE_OPER=SPEI.DATE_OPER,
MONT=SPEI.MONT,
DIV= SPEI.DIV,
ID_CONT= SPEI.ID_CONT
WHEN NOT MATCHED THEN
INSERT (
ID_TRANS,
TIT,
BEN,
BAN,
CTA_EMI,
CTA_REC,
INST,
TYPE_TRANS,
TYPE_MOV,
CONC,
DATE_OPER,
MONT,DIV,
ID_CONT
)
VALUES
(
SPEI.ID_TRANS,
SPEI.TIT ,
SPEI.BEN,
SPEI.BAN_EMISOR,
SPEI.CTA_EMI,
SPEI.CTA_REC,
SPEI.INST,
SPEI.TYPE_TRANS,
SPEI.TYPE_MOV,
SPEI.CONC ,
SPEI.DATE_OPER,
SPEI.MONT,
SPEI.DIV ,
SPEI.ID_CONT
);
It marks the error that a right parenthesis is missing after AND ID_TRANS IS NOT NULL, even after placing the parenthesis.
Looks like a closing parenthesis is missing, here:
)) --> here; should be 2, not only 1
WHERE ENUMERADO=1
AND ID_TRANS IS NOT NULL
I added the missing parentheses and an alias to each subquery, as well as aliases to the fields referenced in USING, and that worked:
FROM (SELECT *
FROM IBM_I2.I2_SPEI WHERE REPLACE(NO_TP_CTA_EMISOR,' ','') IS NOT NULL
AND ID_OPERACION='0007'
AND ESTATUS='06'
AND CTA_REC NOT IN ('70000997', '7909567')
)
)T1
)T2
WHERE ENUMERADO=1
AND ID_TRANS IS NOT NULL
)SPEI
ON (
trans.ID_TRANS = SPEI.C1_ID_TRANS
)
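The rule this illustrates, on a minimal sketch (the table and column names here are hypothetical): a computed column inside an inline view must be given an alias before any outer query can refer to it.
SELECT x.full_id                          -- outer query uses the alias
FROM (
    SELECT 'prefix_' || id AS full_id     -- alias required on the expression
    FROM some_table
) x
WHERE x.full_id IS NOT NULL;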

Count of null values in all columns in total

I have 100 columns and want to find the total number of NULL values across all of them.
A FindNull function helps me convert NULL to 1 so that I can count them.
Here is my code:
(select totalAmountOfNull =
        (select count(*)
         from (select countOfNull =
                      [dmt].[findNull](column1) +
                      [dmt].[findNull](column2) +
                      [dmt].[findNull](column3))
         from dmt.tableName) as t10
 where t10.totalAmountOfNull = 3
And the answer is wrong because of the hard-coded '3'. The main problem is that I have 100 columns in one table and want to find all NULL values in total, but this code gives me the wrong number.
In SQL Server, you can use apply:
select count(*)
from t cross apply
(values (t.col1), (t.col2), (t.col3), . . . ) v(col)
where v.col is null;
You need to list all the columns in the values() clause.
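Spelled out for a hypothetical three-column table, the pattern looks like this; with 100 columns, only the values() list grows.
create table t (col1 int, col2 int, col3 int);
insert into t values (1, NULL, 3), (NULL, NULL, 6);

select count(*)   -- returns 3: the total number of NULLs
from t cross apply
     (values (t.col1), (t.col2), (t.col3)) v(col)
where v.col is null;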

COALESCE function in OVER statement not working

Can someone please tell me why the COALESCE works in the first SELECT here but not in the other two? I'm still getting NULL values from the last two statements.
(SELECT COALESCE(DEFax, NULL, '') FROM Debtor d
 WHERE d.DEIsPrimary = 1 AND d.CApKey = c.CApKey) AS FaxNumberOne,
(SELECT COALESCE(DEFax, NULL, '')
 FROM (SELECT ROW_NUMBER() OVER (ORDER BY DEpKey ASC) AS rownumber, DEFax
       FROM Debtor d
       WHERE d.CApKey = c.CApKey AND d.DEIsPrimary <> 1) AS foo
 WHERE rownumber = 1) AS FaxNumberTwo,
(SELECT COALESCE(DEFax, NULL, '')
 FROM (SELECT ROW_NUMBER() OVER (ORDER BY DEpKey ASC) AS rownumber, DEFax
       FROM Debtor d
       WHERE d.CApKey = c.CApKey AND d.DEIsPrimary <> 1) AS foo
 WHERE rownumber = 2) AS FaxNumberThree
Thanks!
Sample data and desired results would really help.
But a scalar subquery is a subquery that returns one column and zero or one rows. If it returns zero rows, then the value is NULL regardless of the expression in the SELECT. In other words, the COALESCE() needs to go outside, something like this:
coalesce( (select . . . . ),
''
)
Including NULL in the coalesce() list is not a good practice. It is unnecessary and misleading -- and always ignored.
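Applied to the question's second column, that rewrite would look something like this (a sketch built from the original subquery):
COALESCE(
    (SELECT DEFax
     FROM (SELECT ROW_NUMBER() OVER (ORDER BY DEpKey ASC) AS rownumber, DEFax
           FROM Debtor d
           WHERE d.CApKey = c.CApKey AND d.DEIsPrimary <> 1) AS foo
     WHERE rownumber = 1),
    '') AS FaxNumberTwo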

Getting Number of Common Values from 2 comma-separated strings

I have a table in Postgres that contains comma-separated values in a column.
ID   PRODS
--------------------
1    ,142,10,75,
2    ,142,87,63,
3    ,75,73,2,58,
4    ,142,2,
Now I want a query where I can give a comma-separated string and it will tell me the number of matches between the input string and the string present in the row.
For instance, for input value ',142,87,', I want the output like
ID   PRODS          No. of Match
--------------------------------
1    ,142,10,75,    1
2    ,142,87,63,    2
3    ,75,73,2,58,   0
4    ,142,2,        1
Try this:
SELECT *,
       ARRAY(
           SELECT *
           FROM unnest(string_to_array(trim(both ',' from prods), ','))
           WHERE unnest = ANY(string_to_array(',142,87,', ','))
       )
FROM prods_table;
Output is:
1   ,142,10,75,    {142}
2   ,142,87,63,    {142,87}
3   ,75,73,2,58,   {}
4   ,142,2,        {142}
Add the cardinality(anyarray) function to the last column to get just a number of matches.
And consider changing your database design.
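For reference, a normalized design would store one product per row, after which the match count is a plain filter and GROUP BY (a sketch with hypothetical names; ids with no match simply don't appear unless you left join them back in):
create table id_prod (
    id   int not null,
    prod int not null,
    primary key (id, prod)
);

select id, count(*) as matches
from id_prod
where prod in (142, 87)
group by id;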
Check this:
select T.*,
       COALESCE(No_of_Match, 0) as No_of_Match
from TT T
left join (
    select ID, count(ID) as No_of_Match
    from (select ID,
                 unnest(string_to_array(trim(t.prods, ','), ',')) as A
          from TT t) a
    where A in ('142','87')
    group by ID
) B on T.Id = B.id;
If you install the intarray extension, this gets quite easy:
select id, prods, cardinality(string_to_array(trim(prods, ','), ',')::int[] & array[142,87])
from bad_design;
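The extension only needs to be enabled once per database (assuming you have the privileges to do so):
create extension if not exists intarray;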
Otherwise it's a bit more complicated:
select bd.id, bd.prods, m.matches
from bad_design bd
join lateral (
    select bd.id, count(v.p) as matches
    from unnest(string_to_array(trim(bd.prods, ','), ',')) as l(p)
    left join (
        values ('142'),('87')  --<< these are your input values
    ) v(p) on l.p = v.p
    group by bd.id
) m on m.id = bd.id
order by bd.id;
Online example: http://rextester.com/ZIYS97736
But you should really fix your data model.
with data as (
    select *,
           unnest(string_to_array(trim(both ',' from prods), ',')) as v
    from myTable
),
counts as (
    select id, count(t) as c
    from data
    left join (select unnest(string_to_array(',142,87,', ',')) as t) tmp
           on tmp.t = data.v
    group by id
    order by id
)
select t1.id, t1.prods, t2.c as "No. of Match"
from myTable t1
inner join counts t2 on t1.id = t2.id;

Two or more results of one CASE statement in SQL

Is it possible to SELECT the values of two or more columns with a single CASE expression? I mean, instead of:
select
ColumnA = case when CheckColumn='condition' then 'result1' end
,ColumnB = case when CheckColumn='condition' then 'result2' end
Something like:
select case when CheckColumn='condition' then ColumnA='result1', ColumnB='result2' end
UPDATE
Just the same as we can do with the UPDATE statement:
update CTE
set ColumnA='result1', ColumnB='result2'
where CheckColumn='condition'
It is not possible with a CASE expression. You need a new CASE for every column.
It is not possible, but you could use a table value constructor as a workaround, storing each value for ColumnA and ColumnB against your check column:
SELECT t.CheckColumn,
       v.ColumnA,
       v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
('Condition1', 'Result1', 'Result2'),
('Condition2', 'Result3', 'Result4'),
('Condition3', 'Result5', 'Result6')
) AS v (CheckColumn, ColumnA, ColumnB)
ON v.CheckColumn = t.CheckColumn;
If you have more complex conditions, then you can still apply this logic, but just use a pseudo-result for the join:
SELECT t.CheckColumn,
       v.ColumnA,
       v.ColumnB
FROM dbo.YourTable AS t
LEFT JOIN
(VALUES
(1, 'Result1', 'Result2'),
(2, 'Result3', 'Result4'),
(3, 'Result5', 'Result6')
) AS v (ConditionID, ColumnA, ColumnB)
ON v.ConditionID = CASE WHEN <some long expression> THEN 1
WHEN <some other long expression> THEN 2
ELSE 3
END;
The equivalent select to the update is:
select 'result1', 'result2'
. . .
where CheckColumn = 'condition';
Your select is different because it produces NULL values. There is an arcane way you can essentially do this with outer apply:
select t2.*
from . . . outer apply
     (select t.*
      from (select 'result1' as col1, 'result2' as col2) t
      where CheckColumn = 'condition'
     ) t2;
This will return NULL values when there is no match. And, you can have as many columns as you would like.
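Filled in against a hypothetical table (the table and column names are assumptions, not from the question), the pattern reads:
select t2.col1, t2.col2
from mytable t outer apply
     (select v.col1, v.col2
      from (select 'result1' as col1, 'result2' as col2) v
      where t.CheckColumn = 'condition'
     ) t2;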
What I understood from your question is that you want to update multiple columns if a certain condition is true.
For such a situation you can use a MERGE statement.
An example of using MERGE is given on MSDN.
Code example:
-- MERGE statement for update.
USE [Database Name];
GO
MERGE Inventory ity
USING [Order] ord  -- Order is a reserved word, so it needs brackets
ON ity.ProductID = ord.ProductID
WHEN MATCHED THEN
UPDATE
SET ity.Quantity = ity.Quantity - ord.Quantity;
You could maybe solve this with a CTE or a CROSS APPLY, something like:
DECLARE @tbl2 TABLE(inx INT, val1 VARCHAR(10), val2 VARCHAR(10));
INSERT INTO @tbl2 VALUES (1,'value1a','value1b'), (2,'value2a','value2b'), (3,'value3a','value3b');

UPDATE yourTable
SET col1 = subTable.val1,
    col2 = subTable.val2
FROM yourTable
CROSS APPLY (
    SELECT val1, val2
    FROM @tbl2
    WHERE inx = 1 -- your condition
) AS subTable;