Below is the JSON stored in the 'details' column of my PostgreSQL database. I am trying to do a login check that returns the username matching a given password.
{
"id":"11a8581b-b56f-426e-92f6-a426ba635b98",
"firstName":"Ryan",
"lastName":"Bob",
"username":"ryan",
"address":"Flat 7, 8 Clisssld Road, London, N16 9AB",
"email":"ryan#abc.com",
"password":"$s0$e0801$M/lNYD/JsVN4FoOjs7BwBA==$+C3+A9lAYPMd1YM0FsSbaIzw0wFito4OSEvSrMM/34k="
}
SELECT details -> 'username'
FROM users WHERE
details -> 'password' = '$s0$e0801$M/lNYD/JsVN4FoOjs7BwBA==$+C3+A9lAYPMd1YM0FsSbaIzw0wFito4OSEvSrMM/34k=';
PostgreSQL provides two native operators -> and ->> to help you query JSON data.
The operator -> returns a JSON object field as JSON.
The operator ->> returns a JSON object field as text.
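For example, a sketch of the corrected query, assuming details is a json or jsonb column: using ->> keeps both sides of the comparison as text, whereas -> yields a json value that cannot be compared to the string literal.
SELECT details ->> 'username' AS username
FROM users
WHERE details ->> 'password' = '$s0$e0801$M/lNYD/JsVN4FoOjs7BwBA==$+C3+A9lAYPMd1YM0FsSbaIzw0wFito4OSEvSrMM/34k=';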
I think this link could be really helpful in your case.
Imported database tables:
id           | JSON
-------------|----------
Signed 32int | Raw JSON
It is easier to search via the properties of the JSON data than by the id of the row itself. Each piece of JSON data contains (for this demo):
json: {
  displayProperties: {},
  hash: "foo",
  itemType: "bar"
}
When I select, I would like to match on hash, and then filter those results by a matching itemType.
My query :
SELECT json_extract(ItemDefinition.json, '$')
FROM ItemDefinition, json_tree(ItemDefinition.json, '$')
WHERE json_tree.key = 'hash' AND json_tree.value IN ${hashList}
However, this returns every item that has a matching hash value. From here, I would like to also filter by key: itemType and value: "19". So I tried:
SELECT json_extract(ItemDefinition.json, '$')
FROM ItemDefinition, json_tree(ItemDefinition.json, '$')
WHERE json_tree.key = 'hash' AND json_tree.value IN ${hashList}
AND WHERE json_tree.key = 'itemType' AND json_tree.value = 19
But this isn't syntactically correct, let alone output what I am looking for. Error:
SQLITE_ERROR: near "WHERE": syntax error
The title of the question turned out not to be accurate to what I was looking for. I misunderstood what json_tree actually does: json_tree walks the JSON recursively and returns a row for each key/value pair, rather than building a filtered object.
What I was actually looking for was to filter by a specific value in the json column, which can be achieved with json_extract. json_extract({column}, '$.{property}') pulls that property's value out of the JSON column.
This is the query that is working for me now:
SELECT json_extract(ItemDefinition.json, '$')
FROM ItemDefinition, json_tree(ItemDefinition.json, '$')
WHERE json_tree.key = 'hash'
AND json_tree.value IN ${hashList}
AND json_extract(ItemDefinition.json, '$.itemType') = 19
This query:
- selects the json column from ItemDefinition
- creates a json_tree from the json column
- filters results by json_tree key and value
- finally filters by the itemType property from the raw json column
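If hash is a top-level property (as in the demo shape above), the json_tree join isn't strictly needed; here is a sketch of the same filter using only json_extract, with ${hashList} again standing in for the bound values:
SELECT json_extract(ItemDefinition.json, '$')
FROM ItemDefinition
WHERE json_extract(ItemDefinition.json, '$.hash') IN ${hashList}
  AND json_extract(ItemDefinition.json, '$.itemType') = 19;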
I have a database of resume data in JSON format which I am trying to transform.
One of the sections in each JSON file is work_experience; this is a JSON array, i.e.
"work_experience":[
{
"job_title":"title",
"job_description":"description"
},
{
"job_title":"title",
"job_description":"description"
}
]
I am iterating over each resume (JSON file) and importing this data into a new table using dbt and PostgreSQL, with each element of the array becoming a new row carrying the associated metadata of the resume. Here is the code I used for this:
select
json_array_elements(rjt.raw_json::json -> 'data' -> 'work_experience') as we,
json_array_elements(rjt.raw_json::json -> 'data' -> 'work_experience') -> 'job_title' as "name",
rjt.uuid as uuid
from raw_json_table rjt
The last thing that I need to do is add a column that lists the index each job came from within its individual work_experience array, i.e. if a job was the third element in the array it would have a 2 in the "source_location" column. How can I generate this index so that it starts at 0 for each new JSON file?
Move the function to the FROM clause (where set-returning functions should be used). Then you can use WITH ORDINALITY, which adds a column indicating the element's index inside the array. Note that ordinality counts from 1, so subtract 1 if you need the 0-based index described in the question.
select w.experience as we,
w.experience ->> 'job_title' as "name",
w.experience ->> 'job_description' as "description",
w.idx as "index",
rjt.uuid as uuid
from raw_json_table rjt
left join json_array_elements(rjt.raw_json::json -> 'data' -> 'work_experience') with ordinality
as w(experience, idx) on true
The left join is necessary so that rows from raw_json_table that don't contain array elements are still included.
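For the 0-based source_location column the question asks for, a sketch of the same query with the ordinality shifted down by one:
select w.experience as we,
       w.experience ->> 'job_title' as "name",
       w.experience ->> 'job_description' as "description",
       w.idx - 1 as source_location,  -- ordinality is 1-based, so shift to start at 0
       rjt.uuid as uuid
from raw_json_table rjt
left join json_array_elements(rjt.raw_json::json -> 'data' -> 'work_experience') with ordinality
     as w(experience, idx) on true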
I have the following type of data in my rows:
{'new_value_formatted': 'Committed/Test Transaction', 'old_value_formatted': 'Completed'}
I would like to have new_value_formatted and old_value_formatted as individual columns, with the data stored in a columnar manner.
I tried the regex '(([A-Z/ ])\w+)', but still can't isolate the values.
select a.id, a.title, additional_data,
       substring(b.additional_data ->> 'old_value_formatted' FROM '[0-9a-zA-Z]+') as "Old Value",
       substring(b.additional_data ->> 'new_value_formatted' FROM '[0-9a-zA-Z]+') as "New Value"
from table a, b ...
Old Value | New_Value
----------|-----------
xxxxx     | xxxxx
The format presented here is not valid JSON; JSON uses double quotes. If on PostgreSQL 9.2+, I would do replace(data, '''', '"')::json to get valid JSON, taking care of quoting issues. Then you could do select new_data ->> 'new_value_formatted'.
I used new_data as a placeholder for the valid json version.
If you need this for an older version, please come back to me.
P.S.: this is not tested, as I currently have no machine at hand.
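Putting that together, a sketch (untested, per the above), assuming the column holding this text is additional_data as in the question and your_table stands in for the real table name:
select replace(additional_data, '''', '"')::json ->> 'old_value_formatted' as "Old Value",
       replace(additional_data, '''', '"')::json ->> 'new_value_formatted' as "New Value"
from your_table;  -- swap the single quotes for double quotes so the text casts to json, then extract each key as text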
I am using Postgres 9.6.3 and need to convert the following Python code to a SQL query:
data = response.json()
activities = data['Response']['data']['activities']
for activity in activities:
    activityHash = int(activity['activityHash'])
    if activityHash == 2659248071:
        clears = int(activity['values']['activityCompletions']['basic']['value'])
The table has two columns: (membershipid integer primary key, data jsonb). I am not sure how to handle an array like this in SQL. The array is variable length and might or might not include an entry where activityHash equals the desired value.
The desired result from the query would be something like SELECT membershipid, clears FROM table.
I was looking for jsonb_array_elements(activities)
I recommend you check out this link that walks you through how to traverse JSONB in Postgres.
Try the following query and see if that works for you:
SELECT
    membershipid,
    activity ->> 'activityHash' AS activityhash,
    activity -> 'values' -> 'activityCompletions' -> 'basic' ->> 'value' AS clears
FROM yourtablename,
    jsonb_array_elements(data -> 'Response' -> 'data' -> 'activities') AS activity  -- expand the activities array, one row per element
WHERE (activity ->> 'activityHash')::bigint = 2659248071;  -- bigint, since the hash exceeds the int4 range
I am using Postgres 9.3.2 to make a database of contacts.
Example: if I have a row in my table that looks something like this:
{
firstName : "First name"
lastName : "Last name"
emails : ["email#one.com", "email#two.com", "email#three.com]
}
PS: firstName, lastName and emails are columns in my db and the value associated is the value for that column for that specific row.
I want to be able to query the db so that if I query for the email "email#four.com" the result is nothing, but if I query for "email#two.com" the result is the above row entry.
I don't think the query
"Select * from contactTable where emails="email#two.com""
will work. Instead I want to do something like
"Select * from contactTable where emails contains "email#two.com""
Any ideas on how to do this?
"Select * from contactTable where emails contains "email#two.com""
I think you want to search inside the json field itself, e.g. thejsonfield -> 'emails', rather than comparing a plain column.
Example setup, after fixing up your totally broken json:
CREATE TABLE contacts AS SELECT '{
"firstName" : "First name",
"lastName" : "Last name",
"emails" : ["email#one.com", "email#two.com", "email#three.com"]
}'::json AS myjsonfield;
The following will work in PostgreSQL 9.4, but unfortunately does not in 9.3 due to the oversight of the missing json_array_elements_text function:
select *
from contacts,
lateral json_array_elements_text(myjsonfield -> 'emails') email
where email = 'email#two.com';
For 9.3, you have to use a clumsier method to scan the json array for matching values:
select *
from contacts,
lateral json_array_length(myjsonfield -> 'emails') numemails,
lateral generate_series(0, numemails - 1) n
WHERE json_array_element_text(myjsonfield -> 'emails', n) = 'email#two.com';
You can't use the simple IN or = ANY constructs because (at this point) PostgreSQL doesn't understand that you might have a json array, so it'll fail with:
regress=> SELECT * FROM contacts WHERE 'email#two.com' = ANY (myjsonfield->'emails');
ERROR: op ANY/ALL (array) requires array on right side
LINE 1: SELECT * FROM contacts WHERE 'email#two.com' = ANY (myjsonfi...
^
as it expects a PostgreSQL array, not a json array, and there's no convenient builtin to turn a json array into a PostgreSQL array yet.
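If you can cast the field to jsonb (PostgreSQL 9.4+), a containment check is another way to express this; a sketch, not part of the original answer:
select *
from contacts
where myjsonfield::jsonb -> 'emails' @> '"email#two.com"'::jsonb;  -- true if the json array contains that scalar element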
Postgres has support for parsing JSON. Here is the documentation: http://www.postgresql.org/docs/9.3/static/functions-json.html. I can't give you a more detailed answer since you didn't provide exact data and a schema, but it's easy to find the right function in the documentation.