SQL: determine if more than one column on a given row has the same value

I have a SQL Server table that includes 9 columns used to indicate what things to include or exclude in part of the UI. Each column can have the value 'A', 'X', or blank. Each row should have at most one 'A' across all of the columns.
Due to an error, many rows have an 'A' in more than one column. How can I write a query that returns every row that breaks this constraint?
All I have is something like:
SELECT PrimaryKey
FROM Criteria C
WHERE (C.First = 'A' AND C.Second = 'A')
OR (C.First = 'A' AND C.Third = 'A')
OR (C.First = 'A' AND C.Fourth = 'A')
...
OR (C.Eighth = 'A' AND C.Ninth = 'A')
Is there any cleaner or more elegant way to write this code?

You can use APPLY:
SELECT C.*
FROM Criteria C CROSS APPLY
(SELECT COUNT(*) as num_a_s
FROM (VALUES (First), (Second), . . . -- list all the columns here
) V(x)
WHERE v.x = 'A'
) v
WHERE v.num_a_s >= 2;
Note: Something is probably wrong with your data model if you are storing these values in columns rather than in separate rows.
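For illustration, a hedged sketch of what that row-per-slot shape could look like (the table and column names here are hypothetical):
CREATE TABLE CriteriaValue (
PrimaryKey int NOT NULL,      -- references Criteria
Position   tinyint NOT NULL,  -- which of the nine slots (1-9)
Value      char(1) NULL,      -- 'A', 'X', or NULL
CONSTRAINT PK_CriteriaValue PRIMARY KEY (PrimaryKey, Position)
);
-- With one row per slot, "at most one 'A' per Criteria row" can be enforced directly,
-- for example with a filtered unique index:
CREATE UNIQUE INDEX UX_CriteriaValue_OneA ON CriteriaValue (PrimaryKey) WHERE Value = 'A';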

Here is one way to do this
create table dbo.t1(x int, col1 varchar(10), col2 varchar(10))
insert into dbo.t1 values(1,'A','A')
insert into dbo.t1 values(2,'A','')
insert into dbo.t1 values(3,'','')
insert into dbo.t1 values(4,'','X')
select x
,count(case when n='A' then 1 end) as cnt
from dbo.t1
cross apply(values ('col1',col1)
,('col2',col2)
--repeat this for columns up to col9...
)v(m,n)
group by x
having count(case when n='A' then 1 end)>1

Try the following:
select
* from YourTable
where REPLACE(TRIM('X' FROM col1),'A','1')
+ REPLACE(TRIM('X' FROM col2),'A','1')
+ REPLACE(TRIM('X' FROM col3),'A','1')
+ REPLACE(TRIM('X' FROM col4),'A','1')
+ REPLACE(TRIM('X' FROM col5),'A','1')
+ REPLACE(TRIM('X' FROM col6),'A','1')
+ REPLACE(TRIM('X' FROM col7),'A','1')
+ REPLACE(TRIM('X' FROM col8),'A','1')
+ REPLACE(TRIM('X' FROM col9),'A','1') > 1
I have also used TRIM, whose FROM syntax requires SQL Server 2017 or later.
or the following:
select *
from YourTable
where replace(replace(concat(col1,col2,col3,col4,col5,col6,col7,col8,col9), 'X', ''), 'A', '1') > 1

SELECT *
FROM MyTable C
WHERE LEN(CONCAT(C.First, C.... , C.Eighth)) - LEN(REPLACE(CONCAT(C.First, C.... , C.Eighth), 'A', '')) > 1
As an example, here is a minimal check (with hypothetical, abridged columns) showing how the LEN difference counts the 'A' values in each row:
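-- Minimal sketch: the sample rows and three-column layout are made up for illustration
SELECT *
FROM (VALUES
('A', 'A', ''),   -- two 'A's: should be flagged
('A', 'X', ''),   -- one 'A': OK
('',  'X', 'X')   -- no 'A': OK
) AS C(First, Second, Third)
WHERE LEN(CONCAT(C.First, C.Second, C.Third))
- LEN(REPLACE(CONCAT(C.First, C.Second, C.Third), 'A', '')) > 1;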

Related

SQL case when with equality and count conditions

How can I perform a query that filters on both the value and the count of an element? Something like:
Select
(case
when element = 'data1' and (select count(element) from mytable where element='data1') > 15 then '1'
when element = 'data2' and (select count(element) from mytable where element='data2') > 15 then '2'
.
.
.
)
from mytable
where conditions
Are there any quick and simple ways to implement this?
I think you want window functions:
select (case when element = 'data1' and
count(*) over (partition by element) > 15
then '1'
when element = 'data2' and
count(*) over (partition by element) > 15
then '2'
.
.
.
end)
from mytable
where conditions
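A minimal, self-contained sketch of the window-function approach (the sample data and the > 1 threshold are hypothetical; the real query uses your own mytable and > 15):
DECLARE @mytable TABLE (element varchar(10));
INSERT INTO @mytable VALUES ('data1'), ('data1'), ('data2');
SELECT element,
CASE WHEN element = 'data1' AND COUNT(*) OVER (PARTITION BY element) > 1 THEN '1'
WHEN element = 'data2' AND COUNT(*) OVER (PARTITION BY element) > 1 THEN '2'
END AS result
FROM @mytable;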
For both code clarity and performance reasons, I would separate the aggregation into a CTE and then use it in a join. If the table is big, it may make sense to put the result in a temporary table instead of a CTE for performance reasons.
;WITH ElementCTE
AS
(
SELECT element, COUNT(element) AS count_Element
FROM mytable
WHERE Conditions
GROUP BY element
)
SELECT
CASE
WHEN mt.element = 'Data1' AND el.count_Element > 15 THEN '1'
WHEN mt.element = 'Data2' AND el.count_Element > 15 THEN '2'
END AS YourColumn
FROM mytable AS mt
INNER JOIN ElementCTE AS el
ON mt.element = el.element
WHERE mt.conditions
Assuming you have a table:
CREATE TABLE NULLTEST
(
TransactioNo INT,
Code VARCHAR(25)
)
INSERT INTO NULLTEST VALUES (NULL, 'TEST1');
INSERT INTO NULLTEST VALUES (NULL, 'TEST2');
INSERT INTO NULLTEST VALUES (1, 'TEST2');
The query could look like this:
SELECT
CASE
WHEN Code = 'TEST2' AND
(SELECT COUNT(1) FROM dbo.NULLTEST n WHERE n.Code = 'TEST2')> 1 THEN '1'
WHEN Code = 'TEST1' AND
(SELECT COUNT(1) FROM dbo.NULLTEST n WHERE n.Code ='TEST1')> 1 THEN '2'
ELSE '3' END AS YourColumn
FROM dbo.NULLTEST t
WHERE ...

Possible to Search Partial Matched Strings from same table?

I have a table, and let's say the table has items with the item numbers:
12345
12345_DDM
345653
2345664
45567
45567_DDM
I am having trouble creating a query that will get all of the _DDM items and the corresponding items that have the same prefix digits.
So in this case I'd want both 12345 and 12345_DDM, etc., to be returned.
Use LIKE to find rows ending with _DDM.
Use EXISTS to find the plain numbers that also have a _DDM row.
select *
from tablename t1
where columnname LIKE '%[_]DDM' -- [_] escapes the underscore, which is otherwise a LIKE wildcard
or exists (select 1 from tablename t2
where t1.columnname + '_DDM' = t2.columnname)
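A minimal, self-contained sketch of this approach using the sample item numbers (the table variable @items and column itemno are hypothetical names):
DECLARE @items TABLE (itemno varchar(20));
INSERT INTO @items VALUES ('12345'), ('12345_DDM'), ('345653'), ('2345664'), ('45567'), ('45567_DDM');
SELECT itemno
FROM @items t1
WHERE t1.itemno LIKE '%[_]DDM'
OR EXISTS (SELECT 1 FROM @items t2 WHERE t1.itemno + '_DDM' = t2.itemno);
-- returns 12345, 12345_DDM, 45567 and 45567_DDM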
Try this query:
--sample data
;with tbl as (
select col from (values ('12345'),('12345_DDM'),('345653'),('2345664'), ('45567'),('45567_DDM')) A(col)
)
--select query
select col from (
select col,
prefix,
max(case when charindex('_DDM', col) > 0 then 1 else 0 end) over (partition by prefix) [prefixGroupWith_DDM]
from (
select col,
case when charindex('_DDM', col) - 1 > 0 then substring(col, 1, charindex('_DDM', col) - 1) else col end [prefix]
from tbl
) a
) a where [prefixGroupWith_DDM] = 1

How to merge two columns from CASE STATEMENT of DIFFERENT CONDITION

My expected result should be like
----invoiceNo----
T17080003,INV14080011
But right now, I've come up with the following query:
SELECT AccountDoc.jobCode,AccountDoc.shipmentSyskey,AccountDoc.docType,
CASE AccountDoc.docType
WHEN 'M' THEN
JobInvoice.invoiceNo
WHEN 'I' THEN
(STUFF((SELECT ', ' + RTRIM(CAST(AccountDoc.docNo AS VARCHAR(20)))
FROM AccountDoc LEFT OUTER JOIN JobInvoice
ON AccountDoc.principalCode = JobInvoice.principalCode AND
AccountDoc.jobCode = JobInvoice.jobCode
WHERE (AccountDoc.isCancelledByCN = 0)
AND (AccountDoc.docType = 'I')
AND (AccountDoc.jobCode = @jobCode)
AND (AccountDoc.shipmentSyskey = @shipmentSyskey)
AND (AccountDoc.principalCode = @principalCode) FOR XML
PATH(''), TYPE).value('.','NVARCHAR(MAX)'),1,2,' '))
END AS invoiceNo
FROM AccountDoc LEFT OUTER JOIN JobInvoice
ON JobInvoice.principalCode = AccountDoc.principalCode AND
JobInvoice.jobCode = AccountDoc.jobCode
WHERE (AccountDoc.jobCode = @jobCode)
AND (AccountDoc.isCancelledByCN = 0)
AND (AccountDoc.shipmentSyskey = @shipmentSyskey)
AND (AccountDoc.principalCode = @principalCode)
OUTPUT:
----invoiceNo----
T17080003
INV14080011
Explanation:
I want to select docNo from table AccountDoc if AccountDoc.docType = 'I', or select invoiceNo from table JobInvoice if AccountDoc.docType = 'M'.
The problem is: if the same jobCode has both docTypes, 'M' and 'I', how do I display both invoices?
You can achieve this by using a CTE and FOR XML PATH. Below is sample code I created using tables similar to yours:
Create table #AccountDoc (
id int ,
docType char(1),
docNo varchar(10)
)
Create table #JobInvoice (
id int ,
invoiceNo varchar(10)
)
insert into #AccountDoc
select 1 , 'M' ,'M1234'
union all select 2 , 'M' ,'M2345'
union all select 3 , 'M' ,'M3456'
union all select 4 , 'I' ,'I1234'
union all select 5 , 'I' ,'I2345'
union all select 6 , 'I' ,'I3456'
insert into #JobInvoice
select 1 , 'INV1234'
union all select 2 , 'INV2345'
union all select 3 , 'INV3456'
select *
from #AccountDoc t1 left join #JobInvoice t2
on t1.id = t2.id
with cte as
(
select isnull( case t1.docType WHEN 'M' THEN t2.invoiceNo WHEN 'I' then
t1.docNo end ,'') invoiceNo
from #AccountDoc t1 left join #JobInvoice t2
on t1.id = t2.id )
select invoiceNo + ',' from cte For XML PATH ('')
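If you want the combined list without the trailing comma, a hedged variant of the last step (using the same cte) builds the string with STUFF:
;with cte as
(
select isnull(case t1.docType WHEN 'M' THEN t2.invoiceNo WHEN 'I' then t1.docNo end, '') invoiceNo
from #AccountDoc t1 left join #JobInvoice t2
on t1.id = t2.id
)
-- STUFF removes the leading comma produced by FOR XML PATH('')
select STUFF((select ',' + invoiceNo from cte FOR XML PATH('')), 1, 1, '') as invoiceNo;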
You need to pivot your data if you have situations where there are two rows and you want two columns. Your SQL is a bit messy, particularly the part where you put an entire SELECT statement inside a CASE WHEN in the select list of another query. These two queries are virtually the same; you should look for a more optimal way of writing them. However, you can wrap your entire SQL in the following:
select
Jobcode, shipmentsyskey, [M],[I]
from(
--YOUR ENTIRE SQL GOES HERE BETWEEN THESE BRACKETS. Do not alter anything else, just paste your entire sql here
) yoursql
pivot(
max(invoiceno)
for docType in([M],[I])
)pvt
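For illustration, a minimal sketch of the pivot idea with hypothetical data; the two rows from the question's output become two columns:
SELECT jobCode, [M], [I]
FROM (VALUES ('JOB1', 'M', 'INV14080011'),
('JOB1', 'I', 'T17080003')) AS src(jobCode, docType, invoiceNo)
PIVOT (MAX(invoiceNo) FOR docType IN ([M], [I])) AS pvt;
-- returns one row: JOB1, INV14080011, T17080003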

dynamically add column names and add their values as a row sql server

I have a table table1 with columns a, b, c, d, e, f.
The task is to take the value of each column (the table definitely has a single row) and insert those values into another table, table2 (columns x, y, z). So my query would be like:
insert into table2 (x, y, z)
select a, '', '' from table1
union all
select b, '', '' from table1
union all
select c, '', '' from table1
union all
select d, '', '' from table1
union all
select e, '', '' from table1
.
.
.
union all
select f, '', '' from table1
Now if a new column is added to table1, I have to add another SELECT statement to this. I want to avoid that: how can I write a dynamic query that automatically considers all the columns and keeps this shorter?
Seems like you're looking for a dynamic EAV (Entity-Attribute-Value) structure. The cool part is that @YourTable could be any query.
Declare @YourTable table (ID int,Col1 varchar(25),Col2 varchar(25),Col3 varchar(25))
Insert Into @YourTable values
(1,'a','z','k')
,(2,'g','b','p')
,(3,'k','d','a')
Select A.ID
,C.*
From @YourTable A
Cross Apply (Select XMLData=cast((Select A.* for XML Raw) as xml)) B
Cross Apply (
Select Attribute = attr.value('local-name(.)','varchar(100)')
,Value = attr.value('.','varchar(max)') -- change datatype if necessary
From B.XMLData.nodes('/row') as A(r)
Cross Apply A.r.nodes('./#*') AS B(attr)
Where attr.value('local-name(.)','varchar(100)') not in ('ID','OtherFieldsToExclude') -- Field Names case sensitive
) C
Returns
ID Attribute Value
1 Col1 a
1 Col2 z
1 Col3 k
2 Col1 g
2 Col2 b
2 Col3 p
3 Col1 k
3 Col2 d
3 Col3 a
A simpler way to do this uses cross apply:
insert into table2 (x, y, z)
select v.x, '', ''
from table1 t1 cross apply
(values (t1.a), (t1.b), (t1.c), (t1.d), (t1.e), (t1.f)
) v(x);
If you want to insert new values when new columns are added to the table, then you would want a DDL and probably a DML trigger. DML triggers are the "standard" triggers.
You can read about DDL triggers in the documentation.
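As a rough, hypothetical sketch of what such a DDL trigger could look like (the trigger name and body are placeholders, not a tested solution):
CREATE TRIGGER trg_table1_columns_changed
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
-- EVENTDATA() returns XML describing the ALTER TABLE statement;
-- inspect it and, for example, log the change or re-run the INSERT ... CROSS APPLY step above
DECLARE @evt xml = EVENTDATA();
-- ... react to the new column here ...
END;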
That said, I am highly suspicious of database systems that encourage new columns and new tables to be added. There is probably a better way to design the application, for instance, using an EAV data model that provides greater flexibility with attributes.
Try this:
insert into table2
select Tmp.id, tb1.* from table1 tb1,
((SELECT B.id FROM (SELECT [value] = CONVERT(XML ,'<v>' + REPLACE('a,b,c,d,e,f' , ',' , '</v><v>')+ '</v>')) A
OUTER APPLY
(SELECT id = N.v.value('.' , 'varchar(100)') FROM A.[value].nodes('/v') N ( v )) B)) Tmp
This, if I am reading it correctly, looks like a perfect time to use UNPIVOT, since you are turning columns into rows.
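A minimal sketch of what that could look like here (table1 and its columns come from the question; UNPIVOT assumes the listed columns share a data type, and it skips NULL values):
INSERT INTO table2 (x, y, z)
SELECT colvalue, '', ''
FROM (SELECT a, b, c, d, e, f FROM table1) src
UNPIVOT (colvalue FOR colname IN (a, b, c, d, e, f)) AS u;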

get the rows of a table in 1 row

I have a first table (the number of rows is variable) and I want to create the second table. What is an efficient way to do this?
First you have to bring your data to a more 'friendly' format:
;with
data as
(
-- replace this with your select
select * from
(
VALUES ('1', 'a', 'b'),
('2', 'c', 'd'),
('3', 'e', 'f')
) as data(aa,bb,cc)
--------------------------------
),
dataAsXml as
(
select CAST(STUFF((SELECT '<i>' + d.[aa] + '</i><i>' + d.[bb] + '</i><i>' + d.[cc] + '</i>' FROM data d FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)'),1,0,'') as XML) as data
),
dataAsList as
(
select x.i.value('for $i in . return count(../*[. << $i]) + 1', 'int') as 'Ord',
x.i.value('.', 'NVARCHAR(100)') AS 'Value'
from dataAsXml
CROSS APPLY [data].nodes('//i') x(i)
),
normalized AS
(
select
case (Ord - 1) % 3 + 1
when 1 then 'aa'
when 2 then 'bb'
when 3 then 'cc'
end + cast((Ord - 1) / 3 + 1 as varchar(10)) as columnName, --fix here
value
from dataAsList
)
select * from normalized
In the query above you can plug your own data into the data CTE to see the result.
The output will have two columns one that stores your column names and one with values.
From here you have to use a dynamic query where you pivot the obtained table for columnName in the list of all the column names. I won't describe this process because it has been done many times. Take a look at this answer:
Convert Rows to columns using 'Pivot' in SQL Server
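For completeness, a hedged sketch of that dynamic step, assuming the normalized output above was first saved into a temp table named #normalized(columnName, value) (the temp table name is hypothetical):
DECLARE @cols nvarchar(max), @sql nvarchar(max);
-- build the comma-separated, quoted column list from the normalized rows
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(columnName)
FROM #normalized
GROUP BY columnName
ORDER BY columnName
FOR XML PATH('')), 1, 1, '');
SET @sql = N'SELECT ' + @cols + N'
FROM #normalized
PIVOT (MAX(value) FOR columnName IN (' + @cols + N')) p;';
EXEC sp_executesql @sql;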
Note:
I haven't tested the performance of this method with large sets of data, but from some points of view it is efficient.
Try this one. I pivoted each of the columns and then joined them together into one row.
SELECT aa1,bb1,cc1,aa2,bb2,cc2,aa3,bb3,cc3 FROM
(SELECT 1 id,[2]aa1,[3]aa2,[4]aa3 FROM(SELECT aa FROM tablea) AS A
PIVOT(SUM(aa) FOR aa in([2],[3],[4])) AS pvt) A
INNER JOIN
(SELECT 1 id,[400]bb1,[200]bb2,[500]bb3 FROM(SELECT bb FROM tablea) AS A
PIVOT(SUM(bb) FOR bb in([400],[200],[500])) AS pvt) B ON A.id=B.id
INNER JOIN
(SELECT 1 id,[20]cc1,[25]cc2,[20]cc3 FROM(SELECT cc FROM tablea) AS A
PIVOT(MIN(cc) FOR cc in([20],[25])) AS pvt) C ON B.id=C.id