I have a column which saves JSON values as text. I want to select rows in a query based on a certain value contained in the text field.
To be clear, I have a column server_response which saves data as follows:
{
"Success": true,
"PasswordNotExpired": true,
"Exists": true,
"Status": "A",
"Err": null,
"Statuscode": 200,
"Message": "Login Denied"
}
How can I filter rows in the WHERE clause based on whether the message was, or contained, "Login Denied"?
This should do the trick, at least this is what I understood you want:
SELECT * FROM table WHERE server_response LIKE '%Login Denied%'
I think you need a query like this:
SELECT *
FROM yourTable
WHERE server_response::jsonb ->> 'Message' LIKE '%Login Denied%'
Note: the -> operator returns a JSON value (json or jsonb), while the ->> operator returns text.
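For example (a minimal sketch against the question's table; yourTable is a placeholder, and the cast is needed because the column stores its JSON as text):
-- -> keeps the JSON type, so further JSON operators can be chained onto it
SELECT server_response::jsonb -> 'Message' FROM yourTable;   -- "Login Denied" (jsonb, quoted)
-- ->> unwraps to text, so LIKE and = comparisons work directly
SELECT server_response::jsonb ->> 'Message' FROM yourTable;  -- Login Denied (text, unquoted)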
{
"TaskType": "kkkk",
"Status": "SUCCESS",
"jobID": "18056",
"DownloadFilePath": "https://abcd",
"accountId": "1234",
"customerId": "hhff"
}
This is sample data from the message column in the notification table. I need to search for a keyword inside 3 or 4 of the 6 fields.
One way is to use multiple OR statements like:
SELECT message AS note
FROM notification
WHERE message::jsonb->>'TaskType' LIKE '%1234%'
OR message::jsonb->>'jobID' LIKE '%1234%'
OR message::jsonb->>'Status' LIKE '%1234%';
The results are satisfactory, but is there a way to optimize the query in case I need to search in more fields?
You could just cast the message column to text and search inside the text:
SELECT message AS note
FROM notification
WHERE message::text like '%1234%'
You need to iterate through all key/value pairs:
select n.*
from notification n
where exists (select *
from jsonb_each_text(n.message) as m(key,value)
where key in ('TaskType', 'jobID', 'Status')
and value like '%1234%');
If you want to search in all values, just leave out the key in (..) part in the sub-select.
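For example, the all-values variant of the query above would be (same assumptions as before):
select n.*
from notification n
where exists (select *
              from jsonb_each_text(n.message) as m(key,value)
              where value like '%1234%');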
Searching in all values can be done using a JSON path operator in Postgres 12 or later:
select *
from notification
where message @@ '$.* like_regex "1234"'
This assumes that message is defined as jsonb, which it should be. If it's not, you need to cast it: message::jsonb
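A minimal sketch of that casted variant, in case message is stored as text:
select *
from notification
where message::jsonb @@ '$.* like_regex "1234"'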
I have the below sample JSON:
{
"Id1": {
"name": "Item1.jpg",
"Status": "Approved"
},
"Id2": {
"name": "Item2.jpg",
"Status": "Approved"
}
}
and I am trying to get the following output:
| _key | name      | Status   |
| :--- | :-------- | :------- |
| Id1  | Item1.jpg | Approved |
| Id2  | Item2.jpg | Approved |
Is there any way I can achieve this in Snowflake using SQL?
You should use Snowflake's VARIANT data type in any column holding JSON data. Let's break this down step by step:
create temporary table FOO(v variant); -- Temp table to hold the JSON. Often you'll see a variant column simply called "V"
-- Insert into the variant column. Parse the JSON because variants don't hold string types. They hold semi-structured types.
insert into FOO select parse_json('{"Id1": {"name": "Item1.jpg", "Status": "Approved"}, "Id2": {"name": "Item2.jpg", "Status": "Approved"}}');
-- See how it looks in its raw state
select * from FOO;
-- Flatten the top-level JSON. The flatten function breaks down the JSON into several usable columns
select * from foo, lateral flatten(input => (foo.v)) ;
-- Now traverse the JSON using the column name and : to get to the property you want. Cast to string using ::string.
-- If you must have exact case on your column names, you need to double quote them.
select KEY as "_key",
VALUE:name::string as "name",
VALUE:Status::string as "Status"
from FOO, lateral flatten(input => (FOO.V)) ;
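If the JSON arrives in a plain string column rather than a VARIANT, the same pattern works once you wrap the column in PARSE_JSON (a sketch; MY_TABLE and RAW_JSON are hypothetical names):
select f.KEY as "_key",
       f.VALUE:name::string as "name",
       f.VALUE:Status::string as "Status"
from MY_TABLE, lateral flatten(input => parse_json(MY_TABLE.RAW_JSON)) f;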
I was going through the Postgres jsonb documentation but was unable to find a solution for a small issue I'm having.
I've got a table : MY_TABLE
that has the following columns:
User, Name, Data and Purchased
One thing to note is that "Data" is a jsonb column with multiple fields. One of the fields inside "Data" is "Attributes", but it is currently a string. How can I go about changing this to a list of strings?
I have tried using json_build_array but have not had any luck.
So for example, I'd want my jsonb to look like :
{
"Id": 1,
"Attributes": ["Test"]
}
instead of
{
"Id": 1,
"Attributes": "Test"
}
I only care about the "Attributes" field inside of the Json, not any other fields.
I also want to ensure that Attributes holding an empty string ("Attributes": "") get mapped to an empty list, not a list containing an empty string ([] not [""]).
You can use jsonb_set(), and some conditional logic for the empty string:
jsonb_set(
    data,
    '{Attributes}',
    case when data ->> 'Attributes' <> ''
        then jsonb_build_array(data ->> 'Attributes')
        else '[]'::jsonb
    end
)
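Wrapped in a full statement with the table and column names from the question (a sketch; the WHERE guard assumes you only want to touch rows where Attributes is still a plain string):
update my_table
set data = jsonb_set(
               data,
               '{Attributes}',
               case when data ->> 'Attributes' <> ''
                    then jsonb_build_array(data ->> 'Attributes')
                    else '[]'::jsonb
               end
           )
where jsonb_typeof(data -> 'Attributes') = 'string';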
I have a table, say types, which has a JSON column, say location, that looks like this:
{ "attribute":[
{
"type": "state",
"value": "CA"
},
{
"type": "distance",
"value": "200.00"
} ...
]
}
Each row in the table has this data, and every row has "type": "state" in it. I want to extract just the value of "type": "state" from every row in the table and put it in a new column. I checked out several questions on SO, like:
Query for element of array in JSON column
Index for finding an element in a JSON array
Query for array elements inside JSON type
but could not get it working. I do not need to query on this. I need the value of this column. I apologize in advance if I missed something.
create table t(data json);
insert into t values('{"attribute":[{"type": "state","value": "CA"},{"type": "distance","value": "200.00"}]}'::json);
select elem->>'value' as state
from t, json_array_elements(t.data->'attribute') elem
where elem->>'type' = 'state';
| state |
| :---- |
| CA |
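To actually put the value in a new column, as the question asks, the same expression works inside an update (a sketch; state is a hypothetical column name):
alter table t add column state text;

update t
set state = (select elem ->> 'value'
             from json_array_elements(t.data -> 'attribute') elem
             where elem ->> 'type' = 'state'
             limit 1);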
I mainly use Redshift, where there are built-in functions to do this (json_extract_path_text and json_extract_array_element_text). So on the off-chance you're there, check those out.
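A rough sketch of the same extraction in Redshift, using the same your_field / your_table placeholders as the Postgres query below:
SELECT
  json_extract_path_text(
    json_extract_array_element_text(
      json_extract_path_text(your_field, 'attribute'), 0),
    'value') AS state
FROM your_table;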
It looks like Postgres has a similar function set:
https://www.postgresql.org/docs/current/static/functions-json.html
I think you'll need to chain three functions together to make this work.
SELECT
your_field::json->'attribute'->0->'value'
FROM
your_table
What this does is a JSON extract by key name, followed by a JSON array extract by index (always the first, if your example is consistent with the full data), followed finally by another extract by key name.
Edit: got it working for your example
SELECT
'{ "attribute":[
{
"type": "state",
"value": "CA"
},
{
"type": "distance",
"value": "200.00"
}
]
}'::json->'attribute'->0->'value'
Returns "CA"
2nd edit: nested querying
@McNets' answer is the right, better one. But in this dive, I discovered you can nest queries in Postgres! How frickin' cool!
I stored the json as a text field in a dummy table and successfully ran this:
SELECT
(SELECT value FROM json_to_recordset(
my_column::json->'attribute') as x(type text, value text)
WHERE
type = 'state'
)
FROM dummy_table
I have an issue where I have some JSON stored in my oracle database, and I need to extract values from it.
The problem is, there are some fields that are duplicated.
When I try this, it works as there is only one firstname key in the options array:
SELECT
JSON_VALUE('{"increment_id":"2500000043","item_id":"845768","options":[{"firstname":"Kevin"},{"lastname":"Test"}]}', '$.options.firstname') AS value
FROM DUAL;
Which returns 'Kevin'.
However, when there are two values for the firstname field:
SELECT JSON_VALUE('{"increment_id":"2500000043","item_id":"845768","options":[{"firstname":"Kevin"},{"firstname":"Okay"},{"lastname":"Test"}]}', '$.options.firstname') AS value
FROM DUAL;
It only returns NULL.
Is there any way to select the first occurrence of 'firstname' in this context?
JSON_VALUE returns one SQL value from the JSON data (or SQL NULL if the key does not exist).
If you have a collection of values (a JSON array) and you want one specific item of the array, you use array subscripts (square brackets) like in JavaScript, for example [2] to select the third item. [0] selects the first item.
To get the first array item in your example you have to change the path expression from '$.options.firstname' to '$.options[0].firstname'
You can use this query:
SELECT JSON_VALUE('{
"increment_id": "2500000043",
"item_id": "845768",
"options": [
{
"firstname": "Kevin"
},
{
"firstname": "Okay"
},
{
"lastname": "Test"
}
]
}', '$.options[0].firstname') AS value
FROM DUAL;
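If you ever need every firstname rather than just the first, JSON_TABLE (Oracle 12c or later) can unnest the array; a sketch against the same literal:
SELECT jt.firstname
FROM JSON_TABLE(
       '{"increment_id":"2500000043","item_id":"845768","options":[{"firstname":"Kevin"},{"firstname":"Okay"},{"lastname":"Test"}]}',
       '$.options[*]'
       COLUMNS (firstname VARCHAR2(100) PATH '$.firstname')
     ) jt
WHERE jt.firstname IS NOT NULL;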