Updating a column with JSON data in another table - sql

I have seen a lot on JSON and SQL Server but haven't been able to find what I am looking for.
I want to update columns in one table by retrieving JSON values from another table.
Let's say I have the table below:
table : people
+-------+-----------+
| id | name |
+-------+-----------+
| 1 | John |
| 2 | Mary |
| 3 | Jeff |
| 4 | Bill |
| 5 | Bob |
+-------+-----------+
And let's pretend I have another table filled with rows of JSON like the following:
table : archive
+-------+----------------------------------------------------------------+
| id | json |
+-------+----------------------------------------------------------------+
| 1 |[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}] |
| 2 |[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}] |
+-------+----------------------------------------------------------------+
Now the idea is to change John's name to Jeff.
UPDATE people
SET name = JSON_QUERY(archive.json, '$values.old')
WHERE ID = 1
The above SQL may make no sense, but I'm just trying to get across my current logic of what I'm trying to do. I hope it makes some sense.
If more information is needed please ask.
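For reference, a corrected single-row form of the attempt above could look like this. It is only a sketch: it assumes (as in the sample data) that the JSON is a one-element array and that archive.id lines up with people.id, and it uses JSON_VALUE, which extracts scalars, rather than JSON_QUERY, which extracts objects and arrays:

```sql
-- Sketch: JSON_VALUE (not JSON_QUERY) extracts scalar values,
-- and the path must index into the array: '$[0].values.old'.
-- Assumes archive.id lines up with people.id, as in the sample data.
UPDATE p
SET p.[name] = JSON_VALUE(a.[json], '$[0].values.old')
FROM people p
INNER JOIN archive a ON a.id = p.id
WHERE p.id = 1;
```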

You can read your JSON using OPENJSON and a double CROSS APPLY with a WITH clause. Then you can use an UPDATE ... FROM to change the values in the people table variable:
declare @people table (id int, [name] varchar(50))
insert into @people values
 (1, 'John')
,(2, 'Mary')
,(3, 'Jeff')
,(4, 'Bill')
,(5, 'Bob' )

declare @json table (id int, [json] nvarchar(max))
insert into @json values
 (1,'[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}]')
,(2,'[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}]')

update p
set [name] = d.old
from @people p
inner join
(
    select a.id
         , c.old
         , c.new
    from @json a
    cross apply openjson(a.[json]) with
    (
        [Column] nvarchar(50)
      , [values] nvarchar(max) as json
    ) b
    cross apply openjson(b.[values]) with
    (
        old nvarchar(50)
      , new nvarchar(50)
    ) c
) d
on p.id = d.id
Before the update, id 1 holds 'John'; after the update it holds 'Jeff'.

I asked you some questions in a comment above
Two remarks: Am I correct that you mixed up the old and new values? And am I correct that the above is just a sample and you are looking for a generic solution, where the updates might affect different columns, maybe even more than one per row? At least the JSON would allow more elements in the object-array.
But - as a start - you can try this:
--mock up your set-up (thanks @Andrea, I used yours)
declare @people table (id int, [name] varchar(50))
insert into @people values
 (1, 'John')
,(2, 'Mary')
,(3, 'Jeff')
,(4, 'Bill')
,(5, 'Bob' )

declare @json table (id int, [json] nvarchar(max))
insert into @json values
 (1,'[{ "Column":"name","values": { "old": "Jeff", "new": "John"}}]')
,(2,'[{ "Column":"name","values": { "old": "Rose", "new": "Mary"}}]')
--This will - at least - return everything you need. The rest is - presumably - dynamic statement building and EXEC():
SELECT p.*
      ,A.[Column]
      ,JSON_VALUE(A.[values],'$.old') AS OldValue
      ,JSON_VALUE(A.[values],'$.new') AS NewValue
FROM @people p
INNER JOIN @json j ON p.id = j.id
CROSS APPLY OPENJSON(j.[json])
     WITH([Column] VARCHAR(100), [values] NVARCHAR(MAX) AS JSON) A;
The result (old and new seem to be swapped):
id  name  Column  OldValue  NewValue
1   John  name    Jeff      John
2   Mary  name    Rose      Mary
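The dynamic statement building alluded to above could be sketched as follows. This is illustrative only: it assumes one change per JSON row and that [Column] always names a real column of the people table; QUOTENAME guards both the identifier and the literal:

```sql
-- Sketch of the dynamic UPDATE step (assumption: [Column] names a real column).
DECLARE @sql nvarchar(max) = N'';

SELECT @sql += N'UPDATE people SET ' + QUOTENAME(A.[Column])
             + N' = ' + QUOTENAME(JSON_VALUE(A.[values], '$.old'), '''')
             + N' WHERE id = ' + CAST(j.id AS nvarchar(10)) + N';'
FROM archive j
CROSS APPLY OPENJSON(j.[json])
     WITH ([Column] VARCHAR(100), [values] NVARCHAR(MAX) AS JSON) A;

EXEC sys.sp_executesql @sql;
```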

Related

Can I use OUTPUT INTO to add data to a relational table with additional values?

I have two tables. One holds common data for articles, and the other holds translations for text. Something like this:
Articles Table
id | key | date
Translations Table
id | article_key | lang | title | content
key is a string and is the primary key.
article_key is a foreign key relating it to articles on the key column.
When I add a new row to the Articles, I'd like to be able to use the key that was just inserted and add a new row to the Translations Table.
I've read about OUTPUT INTO, but it doesn't seem like I can add other values to the Translations table. Also, I get an error about the target table being on either side of a relationship.
Is my only course of action to INSERT into Articles followed by an INSERT with a SELECT subquery to get the key?
Edit: Expected output would be something like:
Articles
id | key | date
---------------
1 | somekey | 2018-05-31
Article Translations
id | article_key | lang | title | content
-----------------------------------------
1 | somekey | en | lorem | ipsum
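For comparison, the two-statement course of action the question mentions would look roughly like this (values taken from the expected output above; note that OUTPUT ... INTO cannot target a table that sits on either side of a foreign key relationship, which is why the error above appears on the real tables, and why this baseline may be the only option there):

```sql
-- Two-statement baseline: insert the article, then reuse its key
-- in a second insert. Sample values from the question.
INSERT INTO Articles (id, [key], [date])
VALUES (1, 'somekey', GETDATE());

INSERT INTO Translations (id, article_key, lang, title, content)
SELECT 1, a.[key], 'en', 'lorem', 'ipsum'
FROM Articles a
WHERE a.id = 1;
```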
Well this could work based on your description:
SET NOCOUNT ON;

DECLARE @Articles TABLE (id INT NOT NULL
                       , [key] VARCHAR(50) NOT NULL
                       , [date] DATE NOT NULL);

DECLARE @ArticleTranslations TABLE (id INT NOT NULL
                                  , article_key VARCHAR(50) NOT NULL
                                  , lang VARCHAR(50) NOT NULL
                                  , title VARCHAR(50) NOT NULL
                                  , content VARCHAR(50) NOT NULL);

INSERT @Articles (id, [key], [date]) -- This is the insert into @Articles
OUTPUT INSERTED.id, INSERTED.[key], 'en', 'lorem', 'ipsum' -- This is the insert into @ArticleTranslations
INTO @ArticleTranslations (id, article_key, lang, title, content)
VALUES (1, 'somekey', GETDATE()); -- These values go into @Articles

SELECT *
FROM @Articles;

SELECT *
FROM @ArticleTranslations;
Try it out on Stack Exchange Data Explorer: https://data.stackexchange.com/stackoverflow/query/857925
Maybe it's not as simple as that, so let me know whether this works or not.

SQL Server : using a LEN or variable in where clause that contains a join

I have created a map table to find various unique strings within a large list of unique hostnames.
The initial code works if I enter the various lengths, i.e. varchar(2), varchar(11), etc. Trying to reference the variable lengths is where my issues began.
I have tried several different combinations before attempting to use a variable.
For example, in the WHERE clause, substituting the varchar(2) with m.[HostNameAlias_IDLength].
I am also having difficulty using variables.
Any thoughts would be much appreciated.
TM
P.S. The code and sample tables are listed below.
Table1
HostNameAlias_id (pk, varchar(5), not null)
ProjectName_ID (int, not null)
HostnameAlias_IDLength (computed, int, null)
Data
HostNameAlias_ID ProjectName_ID HostNameAlias_IDLength
----------------------------------------------------------
H123456789023456 16009 16
B123456789023 16005 13
C1234567890 16009 11
d12345678 16009 9
e123456 16009 8
f12345 16003 6
g1234 16035 5
h123 16035 4
j12 16005 3
k1 16007 2
Table2
[host name] (pk, nvarchar(50), not null)
Projectname_id (int, not null)
Sample data:
Host name Title projectname_ID
--------------------------------------------------
C1234567890a1 vp 16009
C1234567890a2 avp 16009
h12335 student 16009
h12356 teacher 16009
h12357 prof 16009
Query
DECLARE @len INT;
DECLARE @slen VARCHAR(2);
SELECT DISTINCT
@len = m.[HostNameAlias_IDLength],
@slen = CONVERT(varchar(2), m.[HostNameAlias_ID]),
c.[Host Name],
m.[projectname_id]
FROM
[table1] c
JOIN
[table2] m ON c.[projectname_id] = m.[projectname_id]
WHERE
CONVERT(varchar(2), [Host Name]) IN (SELECT [HostNameAlias_ID]
FROM [table2])
The length of a result cannot be known in the WHERE clause that is used to discover that length, so I fail to see why you are attempting this. In addition, the [HostNameAlias_ID] column holds up to 16 characters, so just use that maximum ... if the conversion is needed at all.
Below I have just used LIKE instead of IN, perhaps that will assist.
SQL Fiddle
MS SQL Server 2014 Schema Setup:
CREATE TABLE Table1
([HostNameAlias_ID] varchar(16), [ProjectName_ID] int, [HostNameAlias_IDLength] int)
;
INSERT INTO Table1
([HostNameAlias_ID], [ProjectName_ID], [HostNameAlias_IDLength])
VALUES
('H123456789023456', 16009, 16),
('B123456789023', 16005, 13),
('C1234567890', 16009, 11),
('d12345678', 16009, 9),
('e123456', 16009, 8),
('f12345', 16003, 6),
('g1234', 16035, 5),
('h123', 16035, 4),
('j12', 16005, 3),
('k1', 16007, 2)
;
CREATE TABLE Table2
([HostName] varchar(13), [Title] varchar(7), [projectname_ID] int)
;
INSERT INTO Table2
([HostName], [Title], [projectname_ID])
VALUES
('C1234567890a1', 'vp', 16009),
('C1234567890a2', 'avp', 16009),
('h12335', 'student', 16009),
('h12356', 'teacher', 16009),
('h12357', 'prof', 16009)
;
Query 1:
SELECT
m.[HostName]
, c.[HostNameAlias_ID]
, m.[projectname_id]
, c.[HostNameAlias_IDLength]
FROM [table1] c
JOIN [table2] m ON c.[projectname_id] = m.[projectname_id]
WHERE [HostName] LIKE ([HostNameAlias_ID] + '%')
Results:
| HostName | HostNameAlias_ID | projectname_id | HostNameAlias_IDLength |
|---------------|------------------|----------------|------------------------|
| C1234567890a1 | C1234567890 | 16009 | 11 |
| C1234567890a2 | C1234567890 | 16009 | 11 |
Re [Host name]: including spaces in column names is a complication that can and should be avoided, so I have used [HostName] instead.
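If the computed length column must be used, one alternative (a sketch against the fiddle tables above, dropping the projectname join for brevity) is to compare a prefix of the host name of exactly that length:

```sql
-- Sketch: take the leading HostNameAlias_IDLength characters of the
-- host name and compare them to the alias directly.
SELECT m.[HostName], c.[HostNameAlias_ID], c.[HostNameAlias_IDLength]
FROM Table1 c
JOIN Table2 m
  ON LEFT(m.[HostName], c.[HostNameAlias_IDLength]) = c.[HostNameAlias_ID];
```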

SQL Merge n rows from source table to two targets

We have an old database that we maintain, and a new one that we have started using. We need to periodically transfer data from the old db to the new one. At the moment, we need to transfer, or merge as it might also be called, data from one table - Student, in the old database to two tables (ie two targets) in the new one - Person and Student. Now the catch is that the data from the old, source, database should be divided among the two tables in the new one. For example (just for the sake of this post),
Old table 'Student'
------------------------------
IdNo | FirstName | LastName |
578 | John | Doe |
645 | Sara | Doe |
New table 'Person'
-----------
Id | IdNo |
11 | 578 |
23 | 645 |
New table 'Student'
--------------------------------------
Id | PersonId | FirstName | LastName |
101| 11 | John | Doe |
102| 23 | Sara | Doe |
And the procedure should take a parameter of the number of rows to merge.
How can this be accomplished?
Update
Perhaps it would be easier for you guys to know what I mean by pseudo code:
MERGE [NewDB].[dbo].[Person] p, [NewDB].[dbo].[Student] ns -- 2 targets; this does not work
USING [OldDB].[dbo].[Student] os -- source table, old student
ON p.IdNo = os.IdNo
WHEN MATCHED THEN -- Update existing rows
UPDATE p
SET p.SomeColumn1 = os.SomeColumn1 -- works; os (old student) is known here
UPDATE ns
SET ns.SomeColumn2 = os.SomeColumn2 -- does not work; os is not known here
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
INSERT INTO p (IdNo, SomeColumn1)
VALUES (os.IdNo, os.SomeColumn1); -- os (old student) is known here
INSERT INTO ns (SomeColumn2)
VALUES (os.SomeColumn2); -- does not work; os is not known here
I hope that makes it somewhat clearer.
May we assume the reason you want to do this in one statement instead of two is that one of the fields in the first table you are inserting into is an identity field (Id in the Person table in your example) that needs to be inserted into the second table?
If so, add an OUTPUT clause in the first merge statement so that you have the relationship and fields you require for the second merge statement.
declare @OldStudent table (IdNo int, FirstName varchar(30), LastName varchar(30))
declare @Person table (Id int identity, IdNo int)
declare @NewStudent table (Id int identity, PersonId int, FirstName varchar(30), LastName varchar(30))

insert @OldStudent (IdNo, FirstName, LastName)
select 578, 'John', 'Doe'
union all select 645, 'Sara', 'Doe'

declare @output table ([Action] varchar(20), PersonId int, IdNo int)

MERGE @Person p
USING @OldStudent os
ON p.IdNo = os.IdNo
WHEN MATCHED THEN -- Update existing rows
    UPDATE SET IdNo = os.IdNo
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
    INSERT (IdNo) VALUES (os.IdNo)
OUTPUT $action, inserted.Id, inserted.IdNo into @output;

WITH src AS
(
    select o.IdNo, o.PersonId, os.FirstName, os.LastName
    from @output o
    inner join @OldStudent os on os.IdNo = o.IdNo
)
MERGE INTO @NewStudent as ns
USING src
ON src.PersonId = ns.PersonId
WHEN MATCHED THEN
    UPDATE SET FirstName = src.FirstName, LastName = src.LastName
WHEN NOT MATCHED BY TARGET THEN -- Add new rows
    INSERT (PersonId, FirstName, LastName) VALUES (src.PersonId, src.FirstName, src.LastName);

select * from @Person
select * from @NewStudent

SQL Server 2000 equivalent of GROUP_CONCAT function

I tried to use the GROUP_CONCAT function in SQL Server 2000, but it returns an error:
"'group_concat' is not a recognized function name"
So I guess there is another function for GROUP_CONCAT in SQL Server 2000? Can you tell me what it is?
Unfortunately since you are using SQL Server 2000 you cannot use FOR XML PATH to concatenate the values together.
Let's say we have the following sample Data:
CREATE TABLE yourtable ([id] int, [name] varchar(4));
INSERT INTO yourtable ([id], [name])
VALUES (1, 'John'), (1, 'Jim'),
(2, 'Bob'), (3, 'Jane'), (3, 'Bill'), (4, 'Test'), (4, '');
One way you could generate the list together would be to create a function. A sample function would be:
CREATE FUNCTION dbo.List
(
    @id int
)
RETURNS VARCHAR(8000)
AS
BEGIN
    DECLARE @r VARCHAR(8000)

    SELECT @r = ISNULL(@r + ', ', '') + name
    FROM dbo.yourtable
    WHERE id = @id
      and name > '' -- add filter if you think you will have empty strings

    RETURN @r
END
Then when you query the data, you will pass a value into the function to concatenate the data into a single row:
select distinct id, dbo.list(id) Names
from yourtable;
See SQL Fiddle with Demo. This gives you a result:
| ID | NAMES |
-------------------
| 1 | John, Jim |
| 2 | Bob |
| 3 | Jane, Bill |
| 4 | Test |
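For readers on SQL Server 2005 or later, the same grouped list can be built inline with FOR XML PATH and no scalar function (a sketch against the same yourtable sample; STUFF strips the leading separator, and special XML characters such as & would need extra handling):

```sql
SELECT t.id,
       STUFF((SELECT ', ' + t2.name
              FROM dbo.yourtable t2
              WHERE t2.id = t.id
                AND t2.name > ''
              FOR XML PATH('')), 1, 2, '') AS Names
FROM dbo.yourtable t
GROUP BY t.id;
```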

How do I join an unknown number of rows to another row?

I have this scenario:
Table A:
---------------
ID| SOME_VALUE|
---------------
1 | 123223 |
2 | 1232ff |
---------------
Table B:
------------------
ID | KEY | VALUE |
------------------
23 | 1 | 435 |
24 | 1 | 436 |
------------------
KEY is a reference to to Table A's ID. Can I somehow join these tables so that I get the following result:
Table C
-------------------------
ID| SOME_VALUE| | |
-------------------------
1 | 123223 |435 |436 |
2 | 1232ff | | |
-------------------------
Table C should be able to have any given number of columns depending on how many matching values that are found in Table B.
I hope this enough to explain what I'm after here.
Thanks.
You need to use a Dynamic PIVOT clause in order to do this.
EDIT:
Ok so I've done some playing around and based on the following sample data:
Create Table TableA
(
IDCol int,
SomeValue varchar(50)
)
Create Table TableB
(
IDCol int,
KEYCol int,
Value varchar(50)
)
Insert into TableA
Values (1, '123223')
Insert Into TableA
Values (2,'1232ff')
Insert into TableA
Values (3, '222222')
Insert Into TableB
Values( 23, 1, 435)
Insert Into TableB
Values( 24, 1, 436)
Insert Into TableB
Values( 25, 3, 45)
Insert Into TableB
Values( 26, 3, 46)
Insert Into TableB
Values( 27, 3, 435)
Insert Into TableB
Values( 28, 3, 437)
You can execute the following Dynamic SQL.
declare @sql varchar(max)
declare @pivot_list varchar(max)
declare @pivot_select varchar(max)

Select
    @pivot_list = Coalesce(@pivot_list + ', ', '') + '[' + Value + ']',
    @pivot_select = Coalesce(@pivot_select, ', ') + 'IsNull([' + Value + '],'''') as [' + Value + '],'
From
(
    Select distinct Value From dbo.TableB
) PivotCodes

Set @sql = '
;With p as (
    Select a.IdCol,
           a.SomeValue,
           b.Value
    From dbo.TableA a
    Left Join dbo.TableB b on a.IdCol = b.KeyCol
)
Select IdCol, SomeValue ' + Left(@pivot_select, Len(@pivot_select) - 1) + '
From p
Pivot ( Max(Value) for Value in (' + @pivot_list + ') ) as pvt
'

exec (@sql)
This gives you the pivoted output, with one column per distinct value in TableB.
Although this works at the moment, it would be a nightmare to maintain. I'd recommend trying to achieve these results somewhere else, i.e. not in SQL!
Good luck!
As Barry has amply illustrated, it's possible to get multiple columns using a dynamic pivot.
I've got a solution that puts all of the values into a single VARCHAR column; if you can split those results, you can get what you need.
It uses a trick available from SQL Server 2005 onward to form a string out of a column of values.
CREATE TABLE #TableA (
ID INT,
SomeValue VARCHAR(50)
);
CREATE TABLE #TableB (
ID INT,
TableAKEY INT,
BValue VARCHAR(50)
);
INSERT INTO #TableA VALUES (1, '123223');
INSERT INTO #TableA VALUES (2, '1232ff');
INSERT INTO #TableA VALUES (3, '222222');
INSERT INTO #TableB VALUES (23, 1, 435);
INSERT INTO #TableB VALUES (24, 1, 436);
INSERT INTO #TableB VALUES (25, 3, 45);
INSERT INTO #TableB VALUES (26, 3, 46);
INSERT INTO #TableB VALUES (27, 3, 435);
INSERT INTO #TableB VALUES (28, 3, 437);
SELECT
a.ID
,a.SomeValue
,RTRIM(bvals.BValues) AS ValueList
FROM #TableA AS a
OUTER APPLY (
-- This has the effect of concatenating all of
-- the BValues for the given value of a.ID.
SELECT b.BValue + ' ' AS [text()]
FROM #TableB AS b
WHERE a.ID = b.TableAKEY
ORDER BY b.ID
FOR XML PATH('')
) AS bvals (BValues)
ORDER BY a.ID
;
You'll get this as a result:
ID SomeValue ValueList
--- ---------- --------------
1 123223 435 436
2 1232ff NULL
3 222222 45 46 435 437
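On SQL Server 2017 and later, the same single-column list can be produced more directly with STRING_AGG (a sketch against the same temp tables; groups with no matches still come back as NULL):

```sql
SELECT a.ID,
       a.SomeValue,
       STRING_AGG(b.BValue, ' ') WITHIN GROUP (ORDER BY b.ID) AS ValueList
FROM #TableA AS a
LEFT JOIN #TableB AS b ON b.TableAKEY = a.ID
GROUP BY a.ID, a.SomeValue
ORDER BY a.ID;
```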
This looks like something a database shouldn't do. First, a table cannot have an arbitrary number of columns depending on whatever you store, so you would have to set a maximum number of values anyway. You can get around this by using comma-separated values in that cell (or a similar pivot-like solution).
However, if you do have tables A and B, I recommend keeping those two tables as they are, since they seem to be fairly normalized. Should you need the list of b.value for a given a.some_value, the following query returns it:
select b.value from a join b on b.[key] = a.id where a.some_value = 'INPUT_VALUE';