Select rows where an item in a JSON array has a specific value - sql

I have a table with a JSONB column. The column contains a number of topics as an array. Example:
select id, topics from c;
id | topics
---------+-----------------------------------------------------------------------------------------------------------------------------------------
7783263 | [{"id": "ddded8f7-1a72-4e43-b040-86a01e82d2c6", "name": "Finance"}]
7783556 | [{"id": "7bad8662-a07b-45c5-bea5-1aa6050c0dfb", "name": "Politics"}]
7783795 |
7785318 | [{"id": "7bad8662-a07b-45c5-bea5-1aa6050c0dfb", "name": "Politics"}, {"id": "ddded8f7-1a72-4e43-b040-86a01e82d2c6", "name": "Finance"}]
I have tried the #> operator and some others, but to no avail. I need to be able to select all rows that have either one specific topic like "Finance", or several such as ["Finance", "Politics"].
For example, I tried topics #> '{"name": ["Finance"]}', but that didn't work.

One way is to use an OR condition:
select *
from the_table
where topics @> '[{"name": "Finance"}]'
   or topics @> '[{"name": "Politics"}]';
If you are using Postgres 12 or later, you can also collect all names into a (JSON) array and use the ?| operator:
select *
from the_table
where jsonb_path_query_array(topics, '$.name') ?| array['Finance', 'Politics']
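On Postgres versions before 12, a similar result can be had by unnesting the array with jsonb_array_elements() and checking each element; a minimal sketch, assuming the same table and column names:
select *
from the_table t
where exists (
   select 1
   from jsonb_array_elements(t.topics) as e(topic)
   -- match any of the wanted topic names
   where e.topic ->> 'name' in ('Finance', 'Politics')
);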

Related

Join Postgresql jsonb object with an item from an array in values

I have two tables in my postgresql database which I want to join:
Table 1) platforms:
ID (INT) | data (JSONB)
------------+---------------------------------------------------------
1 | {"identity": "1", "platformName": "Teradata" }
2 | {"identity": "2", "platformName": "MSSQL" }
Table 2) users:
ID (INT) | data (JSONB)
------------+------------------------------------------------------------------------------------
12 | { "role": "developer", "identity": "12", "accessRights": {"platforms": ["1"]} }
13 | { "role": "admin", "identity": "13", "accessRights": {"platforms": ["1", "2"]}" }
I need to get the list of platforms along with the list of users who have access to them. Something like this:
Platform ID | data (JSONB)
------------+---------------------------------------------------------------------------------
1 | [{"role": "developer", "identity": "12"}, {"role": "admin", "identity": "13"}]
2 | [{"role": "admin", "identity": "13"}]
I thought maybe something like this can help:
SELECT p.id, u.id, u.role
FROM users u
INNER JOIN platforms p ON (u.data->>'accessRights'->'platforms')::text::int = p.id
GROUP BY p.id
But I can't make it work. So is there any way to get the result I need?
A simple join is not enough because you need to aggregate information from the users.data JSON for each platform.
select p.id as platform_id, u.*
from platforms p
  cross join lateral (
    select jsonb_agg(u.data - 'accessRights') as data
    from users u
    where u.data -> 'accessRights' -> 'platforms' ? p.id::text
  ) as u
Note that this only works because the platform IDs are stored in the array as strings, not as (JSON) numbers; the ? operator only works on strings.
Online example
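If the IDs were stored as JSON numbers instead, a containment check with @> could replace the ? operator; a minimal sketch under that assumption, keeping the same table and column names:
select p.id as platform_id, u.*
from platforms p
  cross join lateral (
    select jsonb_agg(u.data - 'accessRights') as data
    from users u
    -- assumes "platforms" holds JSON numbers, e.g. [1, 2]
    where u.data -> 'accessRights' -> 'platforms' @> to_jsonb(p.id)
  ) as u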

PostgreSQL: Sorting the rows based on value of a JSON in an array of JSON

A table, say products, has a JSONB column called identifiers that stores an array of JSON objects.
Sample data in products
id | name | identifiers
-----|-------------|---------------------------------------------------------------------------------------------------------------
1 | umbrella | [{"id": "productID-umbrella-123", "domain": "ecommerce.com"}, {"id": "amzn-123", "domain": "amzn.com"}]
2 | ball | [{"id": "amzn-234", "domain": "amzn.com"}]
3 | bat | [{"id": "productID-bat-234", "domain": "ecommerce.com"}]
Now, I have to write a query that sorts the rows in the table based on the "id" value for the domain "amzn.com".
Expected result
id | name | identifiers
----- |--------------|---------------------------------------------------------------------------------------------------------------
3 | bat | [{"id": "productID-bat-234", "domain": "ecommerce.com"}]
1 | umbrella | [{"id": "productID-umbrella-123", "domain": "ecommerce.com"}, {"id": "amzn-123", "domain": "amzn.com"}]
2 | ball | [{"id": "amzn-234", "domain": "amzn.com"}]
The ids for amzn.com are "amzn-123" and "amzn-234".
When sorted by the ids of amzn.com, "amzn-123" appears first, followed by "amzn-234".
Ordering the table by the values of "id" for the domain "amzn.com",
the record with id 3 appears first since its id for amzn.com is NULL,
followed by the records with ids 1 and 2, which have valid ids and are sorted.
I am genuinely clueless as to how I could write a query for this use case.
If it were a plain JSONB object and not an array of JSON objects, I would have tried it myself.
Is it possible to write a query for such a use case in PostgreSQL?
If yes, please at least give me pseudo code or a rough query.
As you don't know the position in the array, you will need to iterate over all array elements to find the amazon ID.
Once you have the ID, you can use it in an order by. Using nulls first puts the products that don't have an amazon ID at the top.
select p.*, a.amazon_id
from products p
  left join lateral (
    select item ->> 'id' as amazon_id
    from jsonb_array_elements(p.identifiers) as x(item)
    where x.item ->> 'domain' = 'amzn.com'
    limit 1 --<< safeguard in case there is more than one amazon id
  ) a on true --<< we don't really need a join condition
order by a.amazon_id nulls first;
Online example
With Postgres 12 this would be a bit shorter:
select p.*
from products p
order by jsonb_path_query_first(identifiers, '$[*] ? (@.domain == "amzn.com").id') nulls first
After a few tweaks, this is the query that finally worked:
select p.*, amzn -> 'id' AS amzn_id
from products p
  left join lateral JSONB_ARRAY_ELEMENTS(p.identifiers) amzn ON amzn ->> 'domain' = 'amzn.com'
order by amzn_id nulls first

How can I query a jsonb array to find if it contains a value from a query list?

Currently my table looks something like this
id | companies
----+-------------------------------------------------------------------
1 | {"companies": [{"name": "Google", "industry": "TECH"},
| {"name": "FOX News", "industry": "MEDIA"}]}
----+--------------------------------------------------------------------
2 | {"companies": [{"name": "Honda", "industry": "AUTO"}]}
----+--------------------------------------------------------------------
3 | {"companies": [{"name": "Nike", "industry": "SPORTS"}]}
I want to grab all the rows where the companies JSONB array contains a company with an industry in the list ["TECH", "SPORTS"].
In this example, the query would return rows 1 and 3.
I'm unsure of how to do this due to the nesting involved.
You can use jsonb_array_elements() and exists:
select t.*
from mytable t
where exists (
   select 1
   from jsonb_array_elements(t.companies -> 'companies') x(obj)
   where x.obj ->> 'industry' in ('TECH', 'SPORTS')
)
Another way to write this is to use the containment operator @>:
select *
from mytable t
where t.companies -> 'companies' @> '[{"industry": "TECH"}]'
   or t.companies -> 'companies' @> '[{"industry": "SPORTS"}]'
This could make use of a GIN index on the companies array.
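For instance, a minimal sketch of such an index (the index name is an assumption):
-- GIN index on the nested array, so the @> checks above can use it
create index mytable_companies_gin_idx
   on mytable using gin ((companies -> 'companies'));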

Postgres get multiple rows into a single json object

I have a users table with columns like id, name, email, etc. I want to retrieve information of some users in the following format in a single json object:
{
  "userId_1" : {"name" : "A", "email": "A#gmail.com"},
  "userId_2" : {"name" : "B", "email": "B#gmail.com"}
}
Wherein the user's unique id is the key, and a JSON object containing their information is its corresponding value.
I am able to get this information in two separate rows using json_build_object, but I want to get it in a single row in the form of one single JSON object.
You can use json aggregation functions:
select jsonb_object_agg(id, to_jsonb(t) - 'id') res
from mytable t
jsonb_object_agg() aggregates key/value pairs into a single object. The key is the id of each row, and the value is a jsonb object made of all columns of the table except id.
Demo on DB Fiddle
Sample data:
id | name | email
:------- | :--- | :----------
userid_1 | A | A#gmail.com
userid_2 | B | B#gmail.com
Results:
| res |
| :----------------------------------------------------------------------------------------------------- |
| {"userid_1": {"name": "A", "email": "A#gmail.com"}, "userid_2": {"name": "B", "email": "B#gmail.com"}} |
Try:
select row_to_json(col) from T
This link might help: https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
Try this:
SELECT json_object(array_agg(id), array_agg(json::text)) FROM (
  SELECT id, json_build_object('name', name, 'email', email) as json
  FROM users_table
) some_alias_name
If your id is not of text type then you have to cast it to text too.
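For example, a sketch of that cast, assuming id is an integer column:
SELECT json_object(array_agg(id::text), array_agg(json::text)) FROM (
  SELECT id, json_build_object('name', name, 'email', email) as json
  FROM users_table
) some_alias_name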

How to unwind jsonb array into object per jsonb column based on object id?

Given an existing data structure similar to the following:
CREATE TEMP TABLE sample (id int, metadata_array jsonb, text_id_one jsonb, text_id_two jsonb);
INSERT INTO sample
VALUES ('1', '[{"id": "textIdOne", "data": "foo"},{"id": "textIdTwo", "data": "bar"}]'), ('2', '[{"id": "textIdOne", "data": "baz"},{"id": "textIdTwo", "data": "fiz"}]');
I'm trying to unwind the jsonb array of objects from the existing metadata_array column into new jsonb columns in the same table, which I've already created based on the known, fixed list of id keys (textIdOne, textIdTwo, etc.).
I thought I was close using jsonb_populate_recordset(), but then realized that it populates columns from all of the jsonb object's keys, which is not what I want. The desired result is one object per column, based on the object's id.
The only other tricky part of this operation is that my JSON objects' id values use camelCase, and it seems one should avoid quoted/cased column names. But I don't mind quoting or modifying the column names as a means to an end; once the update query is completed I can manually change the column names as needed.
I'm using PostgreSQL 9.5.2
Existing data & structure:
id | metadata_array jsonb | text_id_one jsonb | text_id_two jsonb
---------------------------------------------------------------------------------------------
1 | [{"id": "textIdOne"...}, {"id": "textIdTwo"...}] | NULL | NULL
2 | [{"id": "textIdOne"...}, {"id": "textIdTwo"...}] | NULL | NULL
Desired result:
id | metadata_array jsonb | text_id_one jsonb | text_id_two jsonb
-------------------------------------------------------------------------------
1 | [{"id": "textIdOne",... | {"id": "textIdOne"...} | {"id": "textIdTwo"...}
2 | [{"id": "textIdOne",... | {"id": "textIdOne"...} | {"id": "textIdTwo"...}
Clarifications:
Thanks for the answers thus far, everyone! Though I do know the complete list of keys (about 9), I cannot count on the ordering being consistent.
If all of the json arrays contain two elements for the two new columns then use fixed paths like in dmfay's answer. Otherwise you should unnest the arrays using jsonb_array_elements() twice, for text_id_one and text_id_two separately.
update sample t set
   text_id_one = value1,
   text_id_two = value2
from sample s,
   jsonb_array_elements(s.metadata_array) as e1(value1),
   jsonb_array_elements(s.metadata_array) as e2(value2)
where s.id = t.id
   and value1->>'id' = 'textIdOne'
   and value2->>'id' = 'textIdTwo'
returning t.*
Test the query in SqlFiddle.
If the arrays have more than two elements, this variant may be more efficient (and more convenient, too):
update sample t
set
   text_id_one = arr1->0,
   text_id_two = arr2->0
from (
   select
      id,
      jsonb_agg(value) filter (where value->>'id' = 'textIdOne') as arr1,
      jsonb_agg(value) filter (where value->>'id' = 'textIdTwo') as arr2
   from sample,
      jsonb_array_elements(metadata_array)
   group by id
) s
where t.id = s.id
returning t.*
SqlFiddle.
You said the id list is 'fixed'; is the ordering of objects in metadata_array also consistent? If so, you can do that with plain traversal:
UPDATE sample
SET text_id_one = metadata_array->0,
    text_id_two = metadata_array->1;