One of my columns is jsonb and holds values in the following format. The value of a single row of the column is below.
{
"835": {
"cost": 0,
"name": "FACEBOOK_FB1_6JAN2020",
"email": "test.user@silverpush.co",
"views": 0,
"clicks": 0,
"impressions": 0,
"campaign_state": "paused",
"processed":"in_progress",
"modes":["obj1","obj2"]
},
"876": {
"cost": 0,
"name": "MARVEL_BLACK_WIDOW_4DEC2019",
"email": "test.user@silverpush.co",
"views": 0,
"clicks": 0,
"impressions": 0,
"campaign_state": "paused",
"processed":"in_progress",
"modes":["obj1","obj2"]
}
}
I want to update the campaign_info column's inner keys "processed" and "modes" where the campaign id is "876".
I have tried this query:
update safe_vid_info
set campaign_info -> '835' --> 'processed'='completed'
where cid = 'kiywgh';
But it didn't work.
Any help is appreciated. Thanks.
Is this what you want?
jsonb_set(campaign_info, '{876,processed}', '"completed"')
This updates the value at path "876" > "processed" with value 'completed'.
In your update query:
update safe_vid_info
set campaign_info = jsonb_set(campaign_info, '{876,processed}', '"completed"')
where cid = 'kiywgh';
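Since the question actually asks about two inner keys ("processed" and "modes"), note that jsonb_set calls can be nested, e.g. `jsonb_set(jsonb_set(campaign_info, '{876,processed}', '"completed"'), '{876,modes}', '["obj3"]')`. A language-neutral sketch of the same path-based update logic (the `["obj3"]` replacement value is just a placeholder):

```python
import copy
import json

def jsonb_set(doc, path, new_value):
    """Pure-Python illustration of what Postgres jsonb_set does:
    return a copy of `doc` with the value at `path` replaced."""
    result = copy.deepcopy(doc)
    target = result
    for key in path[:-1]:
        target = target[key]
    target[path[-1]] = new_value
    return result

campaign_info = {
    "876": {"processed": "in_progress", "modes": ["obj1", "obj2"]},
}

# Chaining two calls updates both inner keys, just as nesting two
# jsonb_set() calls does in the SQL UPDATE above.
updated = jsonb_set(campaign_info, ["876", "processed"], "completed")
updated = jsonb_set(updated, ["876", "modes"], ["obj3"])
print(json.dumps(updated))
```

Like jsonb_set, this returns a new document rather than mutating the original, which is why the result must be assigned back to the column in the UPDATE.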
I have a database that has a column with a long string and I'm looking for a way to extract just a certain portion of it.
Here is a sample:
{
"vendorId": 53,
"externalRef": "38828059 $567.82",
"lines": [{
"amount": 0,
"lineType": "PURCHASE",
"lineItemType": "INVENTORY",
"inventory": {
"cost": 0,
"quantity": 1,
"row": "6",
"seatType": "CONSECUTIVE",
"section": "102",
"notes": "http://testurl/0F005B52CE7F5892 38828059 $567.82 ,special",
"splitType": "ANY",
"stockType": "ELECTRONIC",
"listPrice": 0,
"publicNotes": " https://brokers.123.com/wholesale/event/146489908 https://www.123.com/buy-event/4897564 ",
"eventId": 3757669,
"eventMapping": {
"eventDate": "",
"eventName": "Brandi Carlile: Beyond These Silent Days Tour",
"venueName": "Gorge Amphitheatre"
},
"tickets": [{
"seatNumber": 1527
}]
}
}]
}
What I'm looking to extract is just http://testurl/0F005B52CE7F5892
Would someone be able to assist me with the syntax for a query that creates a new temp column and gives me just this extracted value for each row?
I use SQL Server 2008, so some newer functions won't work for me.
Upgrade your SQL Server to a supported version.
But till then, we pity those who dare to face the horror of handling JSON with only the old string functions.
select
[notes_url] =
CASE
WHEN [json_column] LIKE '%"notes": "http%'
THEN substring([json_column],
patindex('%"notes": "http%', [json_column])+10,
charindex(' ', [json_column] ,
patindex('%"notes": "http%', [json_column])+15)
- patindex('%"notes": "http%', [json_column])-10)
END
from [YourTable];
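The PATINDEX/SUBSTRING arithmetic above is easier to sanity-check outside the database. Here is a Python sketch of the same logic, run against the sample notes value (the +10 and +15 offsets mirror the T-SQL):

```python
# Sample row content, abbreviated from the question's JSON.
json_column = ('{"notes": "http://testurl/0F005B52CE7F5892 '
               '38828059 $567.82 ,special", "splitType": "ANY"}')

marker = '"notes": "http'
notes_url = None
start = json_column.find(marker)
if start != -1:
    url_start = start + 10                        # skip past '"notes": "'
    url_end = json_column.index(" ", start + 15)  # first space after the URL begins
    notes_url = json_column[url_start:url_end]
print(notes_url)
```

Like the T-SQL version, this assumes the URL is always terminated by a space inside the notes string; if a notes value ever ends with the URL immediately before the closing quote, both versions would need an extra delimiter check.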
I am trying to create a SQL query to gather information from a column in a database called "CarOptions". This column is an array that contains 1 or more JSON objects. Below is an example of the array.
I want to grab only the values of the name and the price. Could anyone provide a query that produces a column with the name and price so that it looks like the example below, or any readable format?
"Clear Guard 89500, Tint 0"
[
{
"id": 5,
"name": "Clear Guard",
"type": "ANY",
"grouping": "PREFER",
"price": 89500,
"oemOffering": false,
"learnMoreUrl": null,
"pricePercent": null,
"optionGroupId": 2,
"percentSource": null
},
{
"id": 119600,
"name": "Tint (Lifetime Warranty)",
"type": "NEW",
"grouping": "PREFER",
"price": 0,
"oemOffering": false,
"learnMoreUrl": null,
"pricePercent": null,
"optionGroupId": 18,
"percentSource": null
}
]
You can use OPENJSON to pull the data out. Note you don't state your database; this is for SQL Server.
A very quick hacky example:
declare @json varchar(max)='[ { "id": 5, "name": "Clear Guard", "type": "ANY", "grouping": "PREFER", "price": 89500, "oemOffering": false, "learnMoreUrl": null, "pricePercent": null, "optionGroupId": 2, "percentSource": null }, { "id": 119600, "name": "Tint (Lifetime Warranty)", "type": "NEW", "grouping": "PREFER", "price": 0, "oemOffering": false, "learnMoreUrl": null, "pricePercent": null, "optionGroupId": 18, "percentSource": null } ]'
select j.[key] Id, x.[key], x.[value]
from OpenJson(@json) j
outer apply (
    select [key], [value]
    from OpenJson(j.[value])
    where [key] in ('name','price')
) x
Id key value
---- ---------- -------------------------
0 name Clear Guard
0 price 89500
1 name Tint (Lifetime Warranty)
1 price 0
(4 rows affected)
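For readers not on SQL Server, the same flattening is easy to express in ordinary code. A Python sketch over the question's array (trimmed to the relevant fields), producing both the key/value rows and the one-line readable summary the asker mentioned:

```python
import json

car_options = json.loads("""[
  {"id": 5, "name": "Clear Guard", "price": 89500},
  {"id": 119600, "name": "Tint (Lifetime Warranty)", "price": 0}
]""")

# One (array-index, key, value) row per wanted field, like the
# OPENJSON + OUTER APPLY output above.
rows = [
    (idx, key, obj[key])
    for idx, obj in enumerate(car_options)
    for key in ("name", "price")
]

# The "readable format" from the question: "name price, name price".
summary = ", ".join(f"{obj['name']} {obj['price']}" for obj in car_options)
print(summary)
```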
This is my JSON object, already stored in my Postgres 11 database:
[{"user_id": 0, "user_name": "", "user_role": "", "start_timestamp":
"2020-09-08 04:01:31.636848", "end_timestamp": "2020-09-08
04:01:31.636848", "hold_timestamp_list": [], "handover_link": "",
"curr_state": "To be Alloted", "is_complete": 1, "is_onhold": 0},
{"user_id": 910, "user_name": "INM", "user_role": "",
"start_timestamp": "2020-09-09 05:11:06.476766", "end_timestamp": "",
"hold_timestamp_list": [{"s": "2020-09-09 05:11:07.359749", "e": ""}],
"handover_link": "", "curr_state": "Authoring", "is_complete": 0,
"is_onhold": 1}]
How can I get the user_name from the last index (i.e. index 2, value =
{"user_id": 910, "user_name": "INM", "user_role": "",
"start_timestamp": "2020-09-09 05:11:06.476766", "end_timestamp": "",
"hold_timestamp_list": [{"s": "2020-09-09 05:11:07.359749", "e": ""}],
"handover_link": "", "curr_state": "Authoring", "is_complete": 0,
"is_onhold": 1}
You can use a negative index to get the last element of the json array, like so:
mycol -> -1
If you want the corresponding user name:
mycol -> -1 ->> 'user_name'
Or, if you want the user id as an integer:
(mycol -> -1 ->> 'user_id')::int
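The negative-index behaviour matches Python list indexing, which makes the expected result easy to check. A sketch on a trimmed copy of the question's array:

```python
import json

mycol = json.loads("""[
  {"user_id": 0, "user_name": "", "curr_state": "To be Alloted"},
  {"user_id": 910, "user_name": "INM", "curr_state": "Authoring"}
]""")

last = mycol[-1]               # Postgres: mycol -> -1
user_name = last["user_name"]  # Postgres: mycol -> -1 ->> 'user_name'
user_id = int(last["user_id"]) # Postgres: (mycol -> -1 ->> 'user_id')::int
print(user_name, user_id)
```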
I have a table with this schema:
I'm trying to upload some data from Google Cloud Storage using the Python client. The file is newline-delimited JSON. Most of my lines don't have the field "passenger_origin.accuracy", but when the field is present I get the following errors:
Error while reading
data, error message: JSON parsing error in row starting at position
2122510: No such field: driver_origin.accuracy. (error code: invalid)
Error while reading
data, error message: JSON parsing error in row starting at position
2126317: No such field: passenger_origin.accuracy. (error code:
invalid)
Example of an invalid row :
{
"id": 1479443,
"is_obsolete": 0,
"seat_count": 1,
"is_ticket_checked": 0,
"score": 0.3709318902,
"is_multimodal": 0,
"fake_paths": 0,
"passenger_origin": {
"id": 2204,
"poi_uuid": "15b4e52c-7c58-442c-98df-1eb06079f6bb",
"user_id": 1987,
"accuracy": 250.0,
"disabled": 0,
"last_update": "2017-03-10T15:15:39",
"created": "2016-02-05T17:06:26",
"modified_by_user": 1,
"is_recurrent": 0,
"source": 1,
"hidden_by_user": 0,
"kind": 2
},
"driver_origin": {
"id": 412491,
"poi_uuid": "47e90b6d-e178-4e02-9f02-f4ea5f8beaa1",
"user_id": 71471,
"disabled": 0,
"last_update": "2017-11-02T10:09:09",
"created": "2017-11-02T10:09:09",
"modified_by_user": 0,
"is_recurrent": 0,
"source": 1,
"hidden_by_user": 0,
"kind": 2
},
"passenger_destination": {
"id": 2203,
"poi_uuid": "c531c3ca-47f0-4003-8098-1272fee8d018",
"user_id": 1987,
"accuracy": 250.0,
"disabled": 0,
"last_update": "2017-03-10T15:12:42",
"created": "2016-02-05T17:06:19",
"modified_by_user": 1,
"is_recurrent": 0,
"source": 1,
"hidden_by_user": 0,
"kind": 1
}
}
The table is created before the upload of the data and is not modified since. I don't understand why the upload is failing on these fields. Do the RECORD fields have to be REPEATED?
To ignore the fields that aren't present in the schema, use a combination of:
configuration.load.ignoreUnknownValues
configuration.load.maxBadRecords
Setting the first to true and the second to some arbitrarily-high number, e.g. 100000, will enable the load to succeed even if there are extra fields.
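If you would rather not rely on the load-job flags, another option is to strip unknown fields client-side before uploading. A minimal stdlib sketch; the `schema` allow-list below is hypothetical and only covers a few fields from the question:

```python
# Hypothetical allow-list: top-level field -> allowed RECORD sub-fields
# (None means the field is a plain scalar, not a RECORD).
schema = {
    "id": None,
    "passenger_origin": {"id", "poi_uuid", "user_id"},  # no "accuracy"
}

def strip_unknown(row, schema):
    """Drop fields (and RECORD sub-fields) not present in the schema,
    mimicking what ignoreUnknownValues does server-side."""
    cleaned = {}
    for key, value in row.items():
        if key not in schema:
            continue
        allowed = schema[key]
        if isinstance(value, dict) and allowed is not None:
            cleaned[key] = {k: v for k, v in value.items() if k in allowed}
        else:
            cleaned[key] = value
    return cleaned

row = {
    "id": 1479443,
    "passenger_origin": {"id": 2204, "poi_uuid": "15b4...", "accuracy": 250.0},
    "fake_paths": 0,  # not in the hypothetical schema, so dropped
}
cleaned = strip_unknown(row, schema)
print(cleaned)
```

This trades upload-time flexibility for an explicit, versioned allow-list, which can be easier to audit than a high maxBadRecords threshold.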
The problem was that configuration.load.autodetect was set to True. I set it to False and the problem was fixed.
I am implementing a Solr project with the structure below for my indexed object.
{
"OBJECT_HEADER_ID": 173604,
"CHARACTERISTIC_VALUE_ID": 143287,
"OBJECT_TYPE_ID": 1,
"SEQUENCE": 0,
"CHARACTERISTIC_ID": 1488,
"OBJECT_VARIANT_ID": 169941,
"ID": "84445897",
"TYPE": 0
},
{
"OBJECT_HEADER_ID": 173604,
"CHARACTERISTIC_VALUE_ID": 23502,
"OBJECT_TYPE_ID": 1,
"SEQUENCE": 0,
"CHARACTERISTIC_ID": 992,
"OBJECT_VARIANT_ID": 169941,
"ID": "84445898",
"TYPE": 0
}
And I need to make an intersection between the results of various sub-queries, but I found nothing on the web about how to write the query. For example:
-> Get all results that have (CHARACTERISTIC_ID = 1488 and CHARACTERISTIC_VALUE_ID = 143287) INTERSECT BY OBJECT_VARIANT_ID WITH (CHARACTERISTIC_ID = 992 and CHARACTERISTIC_VALUE_ID = 23502).