How to use JSON_MODIFY to change all of the keys in a column that has an array of JSON objects? - sql

I have a column in my database that looks like this (3 separate rows of data)
Columns
[{"header":"C", "value":"A"},{"header":"D","value":"A2"},{"header":"E","value":"A3"}]
[{"header":"C", "value":"B"},{"header":"D","value":"B2"},{"header":"E","value":"B3"}]
[{"header":"C", "value":"C"},{"header":"D","value":"C2"},{"header":"E","value":"C3"}]
I want to null out all of the values of the "header" key and rename that key to test.
I also want to rename all of the "value" keys to newHeader.
I tried running a script like this to change all of the headers inside the array, but JSON_MODIFY does not accept the '*' wildcard character:
UPDATE Files
SET Columns = JSON_MODIFY(
JSON_MODIFY(Columns,'$.test', JSON_VALUE(Columns,'$[*].header')),
'$[*].header',
NULL
)
The end result I want looks like this:
Columns
[{"test":"", "newHeader":"A"},{"test":"","newHeader":"A2"},{"test":"","newHeader":"A3"}]
[{"test":"", "newHeader":"B"},{"test":"","newHeader":"B2"},{"test":"","newHeader":"B3"}]
[{"test":"", "newHeader":"C"},{"test":"","newHeader":"C2"},{"test":"","newHeader":"C3"}]

Related

PostgreSQL: updating an existing table row with another table's data

I am trying to update a null column using another table's value, but it doesn't seem to work right. The code below is what I tried:
SET
"Test name "= "Test"(
SELECT Transformertest.Test,Transformertest.TestID
FROM public.Transformertest WHERE TestID='Tes3')
WHERE test2table.Type='Oil Immersed Transformers'
UPDATE
public.test2table
SET
"Test name" = subquery."Test"
FROM
(
SELECT
"Test"
FROM Transformertest WHERE "TestID"='Tes2'
) AS subquery
WHERE
"Type"='Auto Transformer' AND "Phase"='3' AND "Rated Frequency"='60';
Don't use spaces in column names.
Integers don't need to be quoted.
What you need to do here (assuming Phase and Rated Frequency are integers) is remove the unnecessary double quotes and the spaces from the column names:
UPDATE
public.test2table
SET
test_name = subquery.Test
FROM
(
SELECT
test
FROM Transformertest WHERE Test_ID='Tes2'
) AS subquery
WHERE
Type='Auto Transformer' AND Phase=3 AND Rated_Frequency=60;
This should work now.

Multiple columns from a single column in Python

I am trying to split a column into multiple columns
The column has values like this:
message
------------
time=15:45:19 devname="FG3H0E3917903319" devid="FG3H0E3917903319"
logid="1059028705" type="utm" subtype="app-ctrl" eventtype="app-ctrl-all"
level="warning" vd="root" eventtime=1564226119 appid=16009
srcip=172.24.208.2 dstip=93.184.221.240 srcport=4832 dstport=80
srcintf="LAN-RahaNet" srcintfrole="lan" dstintf="WAN-RahaNet"
dstintfrole="lan" proto=6 service="HTTP" direction="outgoing" policyid=43
sessionid=493024483 applist="LanAppControl" appcat="Update"
app="MS.Windows.Update" action="block"
hostname="www.download.windowsupdate.com" incidentserialno=1522726002
url="/msdownload/update/v3/static/trustedr/en/authrootseq.txt" msg="Update:
MS.Windows.Update," apprisk="elevated"
Basically I need to split this column into:
time devname devid ...
--------------------------------------------------------------
15:45:19 FG3H0E3917903319 FG3H0E3917903319 ...
Short answer:
Split the message on spaces to get a list of key=value pairs.
Split every key=value pair on the = sign.
Add the corresponding keys to their respective columns.
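A minimal sketch of that idea, assuming the data sits in a pandas DataFrame named df (an assumption, since the question doesn't say). A regex is used instead of a literal split on spaces so that quoted values containing spaces, like msg="Update: ...", stay intact:
import re
import pandas as pd

df = pd.DataFrame({"message": [
    'time=15:45:19 devname="FG3H0E3917903319" devid="FG3H0E3917903319" srcip=172.24.208.2'
]})

def parse_message(msg):
    # Find every key=value pair; quoted values may contain spaces.
    pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', msg)
    return {key: value.strip('"') for key, value in pairs}

# Expand the parsed dictionaries into one column per key.
parsed = df["message"].apply(parse_message).apply(pd.Series)
print(parsed[["time", "devname", "devid", "srcip"]])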

Get all entries for a specific json tag only in postgresql

I have a database with a JSON field that has multiple parts, including one called tags. There are other entries, as shown below, but I want to return only the rows whose field is exactly "{"tags":{"+good":true}}".
"{"tags":{"+good":true}}"
"{"has_temps":false,"tags":{"+good":true}}"
"{"tags":{"+good":true}}"
"{"has_temps":false,"too_long":true,"too_long_as_of":"2016-02-12T12:28:28.238+00:00","tags":{"+good":true}}"
I can get part of the way there with trips.metadata->'tags'->>'+good' = 'true' in my WHERE clause, but that returns every row where tags contains "+good": true, including all of the entries above. I want to return only the rows that are exactly "{"tags":{"+good":true}}", i.e. excluding the two entries that begin with has_temps.
Any thoughts on how to do this?
With a jsonb column the solution is straightforward:
with trips(metadata) as (
values
('{"tags":{"+good":true}}'::jsonb),
('{"has_temps":false,"tags":{"+good":true}}'),
('{"tags":{"+good":true}}'),
('{"has_temps":false,"too_long":true,"too_long_as_of":"2016-02-12T12:28:28.238+00:00","tags":{"+good":true}}')
)
select *
from trips
where metadata = '{"tags":{"+good":true}}';
metadata
-------------------------
{"tags":{"+good":true}}
{"tags":{"+good":true}}
(2 rows)
If the column's type is json then you should cast it to jsonb:
...
where metadata::jsonb = '{"tags":{"+good":true}}';
If I understand you correctly, you can check the text value of the "tags" key, like here:
select true
where '{"has_temps":false,"too_long":true,"too_long_as_of":"2016-02-12T12:28:28.238+00:00","tags":{"+good":true}}'::json->>'tags'
= '{"+good":true}'

Update all rows where the column contains 5 keys

I have a Ticket table that has columns like this:
ID : int
Body : nvarchar
Type : int
I have many rows where the Body column has a value like this:
IPAddress = sometext, ComputerName = sometext , GetID = sometext, CustomerName=sometext-sometext , PharmacyCode = 13162900
I want to update the Type column of all rows where the Body column has at least five of the following keys:
IPAddress, ComputerName, GetID, CustomerName, PharmacyCode
You could do it with a simple UPDATE statement like this:
UPDATE Ticket
SET Type = 4
WHERE Body LIKE '%IPAddress%'
and Body LIKE '%ComputerName%'
and Body LIKE '%GetID%'
and Body LIKE '%CustomerName%'
and Body LIKE '%PharmacyCode%'
If you know the keys are always in the same order, you could collapse the LIKE conditions into a single pattern, like so:
UPDATE Ticket
SET Type = 4
WHERE Body LIKE '%IPAddress%ComputerName%GetID%CustomerName%PharmacyCode%'
If you have the possibility to change the data model, it would be much better to explode this key/value column into its own table and link it back to this table, as is done in a proper relational model.
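A rough sketch of what that could look like; the TicketAttribute table and its column names are placeholders, not something from the original post:
-- One row per key/value pair, linked back to the Ticket table.
CREATE TABLE TicketAttribute (
    TicketID  int NOT NULL REFERENCES Ticket(ID),
    AttrKey   nvarchar(100) NOT NULL,   -- e.g. 'IPAddress', 'PharmacyCode'
    AttrValue nvarchar(400) NULL,
    PRIMARY KEY (TicketID, AttrKey)
);

-- The "has all five keys" check then becomes a count instead of LIKE scans.
UPDATE Ticket
SET Type = 4
WHERE ID IN (
    SELECT TicketID
    FROM TicketAttribute
    WHERE AttrKey IN ('IPAddress', 'ComputerName', 'GetID', 'CustomerName', 'PharmacyCode')
    GROUP BY TicketID
    HAVING COUNT(DISTINCT AttrKey) = 5
);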
If you can determine the number of key=value pairs from the number of = signs present in the string, you could use this query:
UPDATE Ticket SET Type = 4 WHERE LEN(Body) - LEN(REPLACE(Body, '=', '')) >= 5
The LEN/REPLACE expression in the WHERE clause counts the number of equals signs present in the string.

SQL Server - XQuery for XML

Similar to other posts, I need to retrieve rows from a table by applying criteria to an XML column. For instance, suppose you have an XML column like this:
<DynamicProfile xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://schemas.datacontract.org/2004/07/WinTest">
<AllData xmlns:d2p1="http://schemas.microsoft.com/2003/10/Serialization/Arrays">
<d2p1:KeyValueOfstringstring>
<d2p1:Key>One</d2p1:Key>
<d2p1:Value>1</d2p1:Value>
</d2p1:KeyValueOfstringstring>
<d2p1:KeyValueOfstringstring>
<d2p1:Key>Two</d2p1:Key>
<d2p1:Value>2</d2p1:Value>
</d2p1:KeyValueOfstringstring>
</AllData>
</DynamicProfile>
My query should return all rows where the <d2p1:Key> node value = 'some key value' AND the <d2p1:Value> node value = 'some value value'.
Think of it as a dynamic table where the Key node represents the column name and the Value node represents the column's value.
The following query does not work, because the Key and Value checks are evaluated independently and so are not guaranteed to match within the same key/value element:
select * from MyTable where
MyXmlField.exist('//d2p1:Key[.="One"]') = 1
AND MyXmlField.exist('//d2p1:Value[.="1"]') = 1
Instead of looking for //d2p1:Key[.="One"] and //d2p1:Value[.="1"] as two separate searches, do a single query that looks for both at once within the same element, like so:
//d2p1:KeyValueOfstringstring[./d2p1:Key="One"][./d2p1:Value=1]
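Wrapped into a full query it could look roughly like this (MyTable and MyXmlField follow the question's naming; the namespace URI is the one declared in the sample XML):
-- Declare the d2p1 prefix so the XPath below can use it.
WITH XMLNAMESPACES ('http://schemas.microsoft.com/2003/10/Serialization/Arrays' AS d2p1)
SELECT *
FROM MyTable
WHERE MyXmlField.exist(
    '//d2p1:KeyValueOfstringstring[d2p1:Key="One"][d2p1:Value="1"]'
) = 1;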