Identify which columns are different in the two queries - sql

I currently have a query that looks like this:
Select val1, val2, val3, val4 from Table_A where someID = 10
UNION
Select oth1, val2, val3, oth4 from Table_B where someId = 10
I initially run the same query above but with EXCEPT to identify which IDs are returned with differences, and then I run the UNION query to find which columns specifically differ.
My goal is to compare the values between the two tables (some columns have different names), and that's what I'm doing.
However, the two queries above have about 250 field names, so it is quite tedious to scroll through them to find the differences.
Is there a better and quicker way to identify which column names are different after running the two queries?
EDIT: Here's my current process:
DROP TABLE IF EXISTS #Table_1
DROP TABLE IF EXISTS #Table_2
SELECT 'Dave' AS Name, 'Smih' AS LName, 18 AS Age, 'Alabama' AS State
INTO #Table_1
SELECT 'Dave' AS Name, 'Smith' AS LName, 19 AS Age, 'Alabama' AS State
INTO #Table_2
--Find differences
SELECT Name, LName,Age,State FROM #Table_1
EXCEPT
SELECT Name, LName,Age,State FROM #Table_2
--How I compare differences
SELECT Name, LName,Age,State FROM #Table_1
UNION
SELECT Name, LName,Age,State FROM #Table_2
Is there any way to streamline this so I can get a column list of differences?

Here is a generic way to handle differences between two tables.
We just need to know their primary key column.
It is based on JSON, and will work starting from SQL Server 2016 onwards.
SQL
-- DDL and sample data population, start
DECLARE @TableA TABLE (rowid INT IDENTITY(1,1), FirstName VARCHAR(100), LastName VARCHAR(100), Phone VARCHAR(100));
DECLARE @TableB TABLE (rowid INT IDENTITY(1,1), FirstName VARCHAR(100), LastName VARCHAR(100), Phone VARCHAR(100));
INSERT INTO @TableA(FirstName, LastName, Phone) VALUES
('JORGE','LUIS','41514493'),
('JUAN','ROBERRTO','41324133'),
('ALBERTO','JOSE','41514461'),
('JULIO','ESTUARDO','56201550'),
('ALFREDO','JOSE','32356654'),
('LUIS','FERNANDO','98596210');
INSERT INTO @TableB(FirstName, LastName, Phone) VALUES
('JORGE','LUIS','41514493'),
('JUAN','ROBERTO','41324132'),
('ALBERTO','JOSE','41514461'),
('JULIO','ESTUARDO','56201551'),
('ALFRIDO','JOSE','32356653'),
('LUIS','FERNANDOO','98596210');
-- DDL and sample data population, end
SELECT rowid
,[key] AS [column]
,Org_Value = MAX( CASE WHEN Src=1 THEN Value END)
,New_Value = MAX( CASE WHEN Src=2 THEN Value END)
FROM (
SELECT Src=1
,rowid
,B.*
FROM @TableA A
CROSS APPLY ( SELECT [Key]
,Value
FROM OpenJson( (SELECT A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES))
) AS B
UNION ALL
SELECT Src=2
,rowid
,B.*
FROM @TableB A
CROSS APPLY ( SELECT [Key]
,Value
FROM OpenJson( (SELECT A.* For JSON Path,Without_Array_Wrapper,INCLUDE_NULL_VALUES))
) AS B
) AS A
GROUP BY rowid,[key]
HAVING MAX(CASE WHEN Src=1 THEN Value END)
<> MAX(CASE WHEN Src=2 THEN Value END)
ORDER BY rowid,[key];
Output

rowid  column     Org_Value  New_Value
-----  ---------  ---------  ---------
2      LastName   ROBERRTO   ROBERTO
2      Phone      41324133   41324132
4      Phone      56201550   56201551
5      FirstName  ALFREDO    ALFRIDO
5      Phone      32356654   32356653
6      LastName   FERNANDO   FERNANDOO
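Applied to the original Table_A / Table_B pair, a minimal sketch of the same idea might look like the following, assuming someID is the key, that oth1/oth4 line up with val1/val4, and that the remaining columns are listed and aliased the same way inside the FOR JSON selects:
SELECT someID
,[key] AS [column]
,Org_Value = MAX(CASE WHEN Src = 1 THEN Value END)
,New_Value = MAX(CASE WHEN Src = 2 THEN Value END)
FROM (
SELECT Src = 1, A.someID, J.[key], J.Value
FROM Table_A AS A
CROSS APPLY OPENJSON((SELECT A.val1, A.val2, A.val3, A.val4
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER, INCLUDE_NULL_VALUES)) AS J
WHERE A.someID = 10
UNION ALL
SELECT Src = 2, B.someID, J.[key], J.Value
FROM Table_B AS B
CROSS APPLY OPENJSON((SELECT val1 = B.oth1, B.val2, B.val3, val4 = B.oth4
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER, INCLUDE_NULL_VALUES)) AS J
WHERE B.someID = 10
) AS X
GROUP BY someID, [key]
HAVING MAX(CASE WHEN Src = 1 THEN Value END)
<> MAX(CASE WHEN Src = 2 THEN Value END)
ORDER BY someID, [key];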

Related

How to update data as json array and select data as json array in sql server

I am new to JSON support in SQL Server 2017. I am storing a JSON array in one column of my table. I am saving an array of IDs in that table, but I want to update the corresponding text values from another table, so please help me with this.
create table #subjectList(subjectID int identity(1,1),subjectName varchar(50))
insert into #subjectList(subjectName)
select 'Math' union all
select 'English' union all
select 'Hindi' union all
select 'PC' union all
select 'Physics'
select * from #subjectList
Create table #studentList(studentID int identity(1,1), subjectName varchar(50), choseSubjectList varchar(max))
insert into #studentList(subjectName, choseSubjectList)
Select 'A','["1","2"]'
select * from #studentList
create table #studentWithSubject(studentID int,subjectName varchar(50),choseSubjectIDList varchar(max),choseSubjectNameList varchar(max))
insert into #studentWithSubject(studentID,subjectName,choseSubjectIDList)
Select a.studentID,a.studentID,a.choseSubjectList
from #studentList a
Update #studentWithSubject set choseSubjectNameList=''
select * from #studentWithSubject
Here is #studentWithSubject output
studentID subjectName choseSubjectIDList choseSubjectNameList
1 1 ["1","2"] ''
Now I want to update subjectname from #subjectList and output should be like this:
studentID subjectName choseSubjectIDList choseSubjectNameList
1 1 ["1","2"] ["Math","English"]
One possible approach is to parse the JSON array with IDs using OPENJSON() with the default schema and then generate the JSON array with names. OPENJSON() with the default schema returns a table with columns key, value and type, and the key column holds the index of each item. Note that the important part here is to generate the names in the same order as they exist in the IDs JSON array. You need to use an approach based on string aggregation, because I don't think you can generate a JSON array of scalar values using FOR JSON.
Tables:
create table #subjectList(subjectID int identity(1,1),subjectName varchar(50))
insert into #subjectList(subjectName)
select 'Math' union all
select 'English' union all
select 'Hindi' union all
select 'PC' union all
select 'Physics'
Create table #studentList(studentID int identity(1,1), subjectName varchar(50), choseSubjectList varchar(max))
insert into #studentList(subjectName, choseSubjectList)
Select 'A','["1","2"]' union all
Select 'B','["3","2","5"]' union all
Select 'C','["6","2"]'
create table #studentWithSubject(studentID int,subjectName varchar(50),choseSubjectIDList varchar(max),choseSubjectNameList varchar(max))
insert into #studentWithSubject(studentID,subjectName,choseSubjectIDList)
Select a.studentID,a.studentID,a.choseSubjectList
from #studentList a
Statement:
UPDATE #studentWithSubject
SET choseSubjectNameList = (
CONCAT(
'["',
STUFF(
(SELECT CONCAT('","', COALESCE(s.subjectName, ''))
FROM OPENJSON(#studentWithSubject.choseSubjectIDList) j
LEFT JOIN #subjectList s ON j.[value] = s.subjectID
ORDER BY CONVERT(int, j.[key])
FOR XML PATH('')), 1, 3, ''
),
'"]'
)
)
Result:
studentID subjectName choseSubjectIDList choseSubjectNameList
1 1 ["1","2"] ["Math","English"]
2 2 ["3","2","5"] ["Hindi","English","Physics"]
3 3 ["6","2"] ["","English"]

SQL Server stored procedure looping through a comma delimited cell

I am trying to figure out how to go about getting the values of a comma separated string that's present in one of my cells.
This is the query I am currently trying to figure out in my stored procedure:
SELECT
uT.id,
uT.permissions
FROM
usersTbl AS uT
INNER JOIN
usersPermissions AS uP
/*Need to loop here I think?*/
WHERE
uT.active = 'true'
AND
uT.email = 'bbarker#thepriceisright.com'
The usersPermissions table looks like this:
id | type |
--------------
1 | Read |
2 | Write |
3 | Upload |
4 | Admin |
And so a row in the usersTbl table looks like this for permissions:
1,3
I need to find a way to loop through that cell, get each number, and place the corresponding permission name in my returned results for usersTbl.permissions.
So instead of returning this:
Name | id | permissions | age |
------------------------------------
Bbarker | 5987 | 1,3 | 87 |
It needs to returns this:
Name | id | permissions | age |
------------------------------------
Bbarker | 5987 | Read,Upload | 87 |
Really just replacing 1,3 with Read,Upload.
Any help would be great from a SQL GURU!
Reworked query
SELECT
*
FROM
usersTbl AS uT
INNER JOIN
usersPermissionsTbl AS uPT
ON
uPT.userId = uT.id
INNER JOIN
usersPermissions AS uP
ON
uPT.permissionId = uP.id
WHERE
uT.active='true'
AND
uT.email='bBarker#thepriceisright.com'
I agree with all of the comments... but strictly trying to do what you want, here's a way with a splitter function
declare @usersTbl table ([Name] varchar(64), id int, [permissions] varchar(64), age int)
insert into @usersTbl
values
('Bbarker',5987,'1,3',87)
declare @usersTblpermissions table (id int, [type] varchar(64))
insert into @usersTblpermissions
values
(1,'Read'),
(2,'Write'),
(3,'Upload'),
(4,'Admin')
;with cte as(
select
u.[Name]
,u.id as UID
,p.id
,p.type
,u.age
from @usersTbl u
cross apply dbo.DelimitedSplit8K([permissions],',') x
inner join @usersTblpermissions p on p.id = x.Item)
select distinct
[Name]
,UID
,age
,STUFF((
SELECT ',' + t2.type
FROM cte t2
WHERE t.UID = t2.UID
FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '')
from cte t
Jeff Moden Splitter
CREATE FUNCTION [dbo].[DelimitedSplit8K] (@pString VARCHAR(8000), @pDelimiter CHAR(1))
--WARNING!!! DO NOT USE MAX DATA-TYPES HERE! IT WILL KILL PERFORMANCE!
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
/* "Inline" CTE Driven "Tally Table" produces values from 1 up to 10,000...
enough to cover VARCHAR(8000)*/
WITH E1(N) AS (
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL
SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1 UNION ALL SELECT 1
), --10E+1 or 10 rows
E2(N) AS (SELECT 1 FROM E1 a, E1 b), --10E+2 or 100 rows
E4(N) AS (SELECT 1 FROM E2 a, E2 b), --10E+4 or 10,000 rows max
cteTally(N) AS (--==== This provides the "base" CTE and limits the number of rows right up front
-- for both a performance gain and prevention of accidental "overruns"
SELECT TOP (ISNULL(DATALENGTH(@pString),0)) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) FROM E4
),
cteStart(N1) AS (--==== This returns N+1 (starting position of each "element" just once for each delimiter)
SELECT 1 UNION ALL
SELECT t.N+1 FROM cteTally t WHERE SUBSTRING(@pString,t.N,1) = @pDelimiter
),
cteLen(N1,L1) AS(--==== Return start and length (for use in substring)
SELECT s.N1,
ISNULL(NULLIF(CHARINDEX(@pDelimiter,@pString,s.N1),0)-s.N1,8000)
FROM cteStart s
)
--===== Do the actual split. The ISNULL/NULLIF combo handles the length for the final element when no delimiter is found.
SELECT ItemNumber = ROW_NUMBER() OVER(ORDER BY l.N1),
Item = SUBSTRING(@pString, l.N1, l.L1)
FROM cteLen l
;
GO
First, you should read Is storing a delimited list in a database column really that bad?, where you will see a lot of reasons why the answer to this question is Absolutely yes!
Second, you should add a table for user permissions since this is clearly a many to many relationship.
Your tables might look something like this (pseudo code):
usersTbl
(
Id int primary key
-- other user related columns
)
usersPermissionsTbl
(
UserId int, -- Foreign key to usersTbl
PermissionId int, -- Foreign key to permissionsTbl
Primary key (UserId, PermissionId)
)
permissionsTbl
(
Id int primary key,
Name varchar(20)
)
Once you have your tables correct, it's quite easy to get a list of comma separated values from the permissions table.
Adapting scsimon's sample data script to a correct many to many relationship:
declare @users table ([Name] varchar(64), id int, age int)
insert into @users values
('Bbarker',5987,87)
declare @permissions table (id int, [type] varchar(64))
insert into @permissions values
(1,'Read'),
(2,'Write'),
(3,'Upload'),
(4,'Admin')
declare @usersPermissions as table (userId int, permissionId int)
insert into @usersPermissions values (5987, 1), (5987, 3)
Now the query looks like this:
SELECT u.Name,
u.Id,
STUFF(
(
SELECT ','+ [type]
FROM @permissions p
INNER JOIN @usersPermissions up ON p.id = up.permissionId
WHERE up.userId = u.Id
FOR XML PATH('')
)
, 1, 1, '') As Permissions,
u.Age
FROM @users As u
And the results:
Name Id Permissions Age
Bbarker 5987 Read,Upload 87
You can see a live demo on rextester.
I concur with much of the advice being presented to you in the other responses. The structure you're starting with is not going to be fun to maintain and work with. However, your situation may mean you are stuck with it so maybe some of the tools below will help you.
You can parse the delimiter with charindex() as others demonstrated here- MSSQL - How to split a string using a comma as a separator
... and even better here (several functions are provided) - Split function equivalent in T-SQL?
If you still want to do it with raw inline SQL and are committed to a loop, then pair the string manipulation with a CURSOR. Cursors have their own controversies BTW. The code below will work if your permission syntax remains consistent, which it probably doesn't.
They used charindex(',',columnName) and fed the location into the left() and right() functions, along with some additional string evaluation, to pull values out. You should be able to piece those together with a cursor (a rough sketch of that cursor approach appears after the example query below).
Your query might look like this...
--creating my temp structure
declare @userPermissions table (id int, [type] varchar(16))
insert into @userPermissions (id, [type]) values (1, 'Read')
insert into @userPermissions (id, [type]) values (2, 'Write')
insert into @userPermissions (id, [type]) values (3, 'Upload')
insert into @userPermissions (id, [type]) values (4, 'Admin')
declare @usersTbl table ([Name] varchar(16), id int, [permissions] varchar(8), age int)
insert into @usersTbl ([Name], id, [permissions], age) values ('Bbarker', 5987, '1,3', 87)
insert into @usersTbl ([Name], id, [permissions], age) values ('Mmouse', 5988, '2,4', 88)
--example query
select
ut.[Name]
, (select [type] from @userPermissions where [id] = left(ut.[permissions], charindex(',', ut.[permissions])-1) )
+ ','
+ (select [type] from @userPermissions where [id] = right(ut.[permissions], len(ut.[permissions])-charindex(',', ut.[permissions])) )
from @usersTbl ut
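And, for completeness, here is a rough, hypothetical sketch of the cursor/CHARINDEX loop mentioned above. It is not from the original answer; it handles any number of comma-separated IDs rather than exactly two and returns one row per user, reusing the @usersTbl and @userPermissions sample data declared above.
--rough cursor sketch: walk each permissions string with charindex()
declare @id int, @perms varchar(8000), @pos int, @piece varchar(10), @names varchar(1000)
declare perm_cur cursor local fast_forward for
select id, [permissions] from @usersTbl
open perm_cur
fetch next from perm_cur into @id, @perms
while @@FETCH_STATUS = 0
begin
set @names = ''
while len(@perms) > 0
begin
set @pos = charindex(',', @perms)
if @pos = 0
begin set @piece = @perms; set @perms = ''; end
else
begin set @piece = left(@perms, @pos - 1); set @perms = right(@perms, len(@perms) - @pos); end
--look up the permission name for this piece and append it
select @names = @names + case when @names = '' then '' else ',' end + [type]
from @userPermissions
where id = cast(@piece as int)
end
select @id as id, @names as [permissions]
fetch next from perm_cur into @id, @perms
end
close perm_cur
deallocate perm_cur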

How can I get around differences in column types when using unpivot?

I am having problems using unpivot on columns that are not the exact same datatype, and I can't figure out how to convert the columns on the fly, because the UNPIVOT syntax does not seem to support it.
Consider this example:
DECLARE @People TABLE
(PersonId int, Firstname varchar(50), Lastname varchar(50))
-- Load Sample Data
INSERT INTO @People VALUES (1, 'Abe', 'Albertson')
INSERT INTO @People VALUES (2, 'Benny', 'Boomboom')
SELECT PersonId, ColumnName, Value FROM @People
UNPIVOT
(
ColumnName FOR
Value IN (FirstName, LastName)
) AS unpvt
The result would be this:
PersonId ColumnName Value
----------- ----------------- ----------------
1 Abe Firstname
1 Albertson Lastname
2 Benny Firstname
2 Boomboom Lastname
Everything is unicorns and rainbows. Now I change the datatype of Lastname to varchar(25) and everything breaks. The output is:
The type of column "Lastname" conflicts with the type of other columns
specified in the UNPIVOT list.
How can I get around this and convert everything to say a varchar(50) on the fly, without tampering with the actual data types on the table?
SqlFiddle working example (same datatype): http://sqlfiddle.com/#!3/f3719
SqlFiddle broken example (diff datatypes): http://sqlfiddle.com/#!3/5dca13/1
You cannot convert inside the UNPIVOT syntax but you can convert the data inside a subquery similar to the following:
select PersonId, ColumnName, Value
from
(
select personid,
firstname,
cast(lastname as varchar(50)) lastname
from People
) d
unpivot
(
Value FOR
ColumnName in (FirstName, LastName)
) unpiv;
See SQL Fiddle with Demo
Another way to do this would be to use CROSS APPLY; depending on your version of SQL Server you can use CROSS APPLY with VALUES or UNION ALL:
select PersonId, ColumnName, Value
from People
cross apply
(
select 'firstname', firstname union all
select 'lastname', cast(lastname as varchar(50))
) c (ColumnName, value)
See SQL Fiddle with Demo.
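On SQL Server 2008 and later, the VALUES form of that CROSS APPLY might look like this (a sketch against the same People table):
select PersonId, ColumnName, Value
from People
cross apply
(
values ('firstname', firstname),
('lastname', cast(lastname as varchar(50)))
) c (ColumnName, Value)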
select col1, col2, Answer, Question_Col
from (
SELECT
col1,
col2,
col3,
CAST(col4 AS NVARCHAR(MAX)) col4,
-- Note: Here datatype should be same as that of previous columns
CAST(col5 AS NVARCHAR(MAX)) col5
FROM Table_Name )a
unpivot
(
Answer
for Question_Col in (
col3,
col4,
col5
)
) unpiv;

Union temporary tables to a final temporary table

I have about 10 different temporary tables created in SQL Server. What I am looking to do is union them all into a final temporary table holding them all in one table. All the tables have only one row and look pretty much exactly like the two temp tables below.
Here is what I have so far. This is an example of just two of the temp tables, since they're all exactly like this one; #final is the table I want to union them all into:
create table #lo
(
mnbr bigint
)
insert into #lo (mnbr)
select distinct (_ID)
FROM [KDB].[am_LOGS].[dbo].[_LOG]
WHERE time >= '2012-7-26 9:00:00'
Select count(*) as countreject
from #lo
create table #pffblo
(
mber_br bigint
)
insert into #pffblo (mber_br)
select distinct (mber_br)
from individ ip with (nolock)
join memb mp with (nolock)
on( ip.i_id=mp.i_id and mp.p_type=101)
where ip.times >= '2012-9-26 11:00:00.000'
select count(*) as countaccept
from #pffblo
create table #final
(
countreject bigint
, Countaccept bigint
.....
)
insert into #final (Countreject, Countaccept....more rows here...)
select Countreject, Countaccept, ...more rows selected from temp tables.
from #final
union
(select * from #lo)
union
(select * from #pffblo)
select *
from #final
drop table #lo
drop table #pffblo
drop table #final
Is this the right form to union the rows from those temp tables into this final one, and is this the correct way to show all the rows that were unioned? When I do this union I get a message that the number of columns in the union needs to match the number of columns selected in the union.
I think you're using a union the wrong way. A union is used when you have two datasets with the same structure and you want to put them into one dataset.
e.g.:
CREATE TABLE #Table1
(
col1 BIGINT
)
CREATE TABLE #Table2
(
col1 BIGINT
)
--populate the temporary tables
CREATE TABLE #Final
(
col1 BIGINT
)
INSERT INTO #Final (col1)
SELECT *
FROM #Table1
UNION
SELECT *
FROM #Table2
drop table #table1
drop table #table2
drop table #Final
I think what you're trying to do is get one dataset with the count from each of your tables in it. Union won't do this.
The easiest way (although not very performant) would be to do select statements like the following:
CREATE TABLE #Table1
(
col1 BIGINT
)
CREATE TABLE #Table2
(
col1 BIGINT
)
--populate the temporary tables
CREATE TABLE #Final
(
col1 BIGINT,
col2 BIGINT
)
INSERT INTO #Final (col1, col2)
select (SELECT Count(*) FROM #Table1) as a, (SELECT Count(*) FROM #Table2) as b
select * From #Final
drop table #table1
drop table #table2
drop table #Final
It appears that you want to take the values from each of the temp tables and then place them into a single row of data. This is basically a PIVOT; you can use something like this:
create table #final
(
countreject bigint
, Countaccept bigint
.....
)
insert into #final (Countreject, Countaccept....more rows here...)
select [Countreject], [countaccept]
from
(
select count(*) value, 'Countreject' col -- your UNION ALL's here
from #lo
union all
select count(*) value, 'countaccept' col
from #pffblo
) x
pivot
(
max(value)
for col in ([Countreject], [countaccept])
) p
Explanation:
You will create a subquery similar to this that will contain the COUNT for each of your individual temp tables. There are two columns in the subquery: one column contains the count(*) from the table and the other column holds the label that becomes the pivoted column name:
select count(*) value, 'Countreject' col
from #lo
union all
select count(*) value, 'countaccept' col
from #pffblo
You then PIVOT these values to insert into your final temp table.
If you do not want to use PIVOT, then you can use a CASE statement with an aggregate function:
insert into #final (Countreject, Countaccept....more rows here...)
select max(case when col = 'Countreject' then value end) Countreject,
max(case when col = 'countaccept' then value end) countaccept
from
(
select count(*) value, 'Countreject' col -- your UNION ALL's here
from #lo
union all
select count(*) value, 'countaccept' col
from #pffblo
) x
Or you might be able to JOIN all of the temp tables similar to this, where you create a row_number() for the one record in the table and then you join the tables with the row_number():
insert into #final (Countreject, Countaccept....more rows here...)
select isnull(lo.Countreject, 0) Countreject,
isnull(pffblo.Countaccept, 0) Countaccept
from
(
select count(*) Countreject,
row_number() over(order by (SELECT 0)) rn
from #lo
) lo
left join
(
select count(*) Countaccept,
row_number() over(order by (SELECT 0)) rn
from #pffblo
) pffblo
on lo.rn = pffblo.rn
SELECT *
INTO #1
FROM TABLE2
UNION
SELECT *
FROM TABLE3
UNION
SELECT *
FROM TABLE4
If you would like to get the count for each temporary table in the resulting table, you just need to calculate it for each column in a subquery:
INSERT INTO result (col1, col2,...
SELECT
(SELECT COUNT(*) FROM tbl1) col1
,(SELECT COUNT(*) FROM tbl2) col2
..

I need to split string in select statement and insert to table

I have data in one table that I need to copy to another table. One of the columns is a delimited text string. So what I'm thinking is to select all the columns, insert them, get the identity value, and then with a subquery split the string based on the delimiter and insert it into another table.
Here is the data example
ID Name City Items
1 Michael Miami item|item2|item3|item4|item5
2 Jorge Hallandale item|item2|item3|item4|item5
Copy Name and City to one table and get the identity,
then split Items and copy them to another table with that identity column value.
So output should be
Users table
UserID Name City
1 Michael Miami
2 Jorge Hallandale
...
Items table
ItemID UserID Name
1 1 Item
2 1 Item2
3 1 Item3
4 1 Item4
5 2 Item
6 2 Item2
7 2 Item3
8 2 Item4
Not really sure how to do it with T-SQL. Answers with examples would be appreciated
You may create your custom function to split the string in T-SQL. You could then use the split function as part of a JOIN with your base table to generate the final results for your INSERT statement. Have a look at this post. Hope this helps.
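To sketch that idea concretely (hypothetical table names, assuming the source rows live in a table called SourceData; STRING_SPLIT needs SQL Server 2016+, and on older versions a custom splitter such as the DelimitedSplit8K function shown earlier works the same way):
-- copy the parent rows first and let the identity column generate UserID
INSERT INTO Users (Name, City)
SELECT Name, City FROM SourceData;
-- then split Items and join back to pick up the generated UserID
INSERT INTO Items (UserID, Name)
SELECT u.UserID, s.value
FROM SourceData d
JOIN Users u ON u.Name = d.Name AND u.City = d.City
CROSS APPLY STRING_SPLIT(d.Items, '|') s;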
You can do this using xml and cross apply.
See the following:
DECLARE @t table (ID int, Name varchar(20), City varchar(20), Items varchar(max));
INSERT @t
SELECT 1,'Michael','Miami' ,'item|item2|item3|item4|item5' UNION
SELECT 2,'Jorge' ,'Hallandale','item|item2|item3|item4|item5'
DECLARE @u table (UserID int identity(1,1), Name varchar(20), City varchar(20));
INSERT @u (Name, City)
SELECT DISTINCT Name, City FROM @t
DECLARE @i table (ItemID int identity(1,1), UserID int, Name varchar(20));
WITH cte_Items (Name, Items) as (
SELECT
Name
,CAST(REPLACE('<r><i>' + Items + '</i></r>','|','</i><i>') as xml) as Items
FROM
@t
)
INSERT @i (UserID, Name)
SELECT
u.UserID
,s.Name as Name
FROM
cte_Items t
CROSS APPLY (SELECT i.value('.','varchar(20)') as Name FROM t.Items.nodes('//r/i') as x(i) ) s
INNER JOIN @u u ON t.Name = u.Name
SELECT * FROM @i
See more here:
http://www.kodyaz.com/articles/t-sql-convert-split-delimeted-string-as-rows-using-xml.aspx
Can you accomplish this with recursion? My T-SQL is rusty but this may help send you in the right direction:
WITH CteList AS (
SELECT 0 AS ItemId
, 0 AS DelimPos
, 0 AS Item_Num
, CAST('' AS VARCHAR(100)) AS Item
, Items AS Remainder
FROM Table1
UNION ALL
SELECT Row_Number() OVER(ORDER BY UserID) AS ItemId
, UserID
, CASE WHEN CHARINDEX('|', Remainder) > 0
THEN CHARINDEX('|', Remainder)
ELSE LEN(Remainder)
END AS dpos
, Item_num + 1 as Item_Num
, REPLACE(Remainder, '|', '') AS Element
, right(Remainder, dpos+1) AS Remainder
FROM CteList
WHERE dpos > 0
AND Item_Num < 20 /* Force a MAX depth for recursion */
)
SELECT ItemId
, Item
FROM CteList
WHERE item_num > 0
ORDER BY ItemID, Item_Num