I have a users table with columns such as id, name, email, etc. I want to retrieve information for some users as a single JSON object in the following format:
{
"userId_1" : {"name" : "A", "email": "A@gmail.com"},
"userId_2" : {"name" : "B", "email": "B@gmail.com"}
}
That is, the user's unique id is the key, and a JSON object containing their information is the corresponding value.
I am able to get this information in two separate rows using json_build_object, but I want to get it in a single row, as one single JSON object.
You can use json aggregation functions:
select jsonb_object_agg(id, to_jsonb(t) - 'id') res
from mytable t
jsonb_object_agg() aggregates key/value pairs into a single object. The key is the id of each row, and the value is a jsonb object built from all columns of the table except id.
Demo on DB Fiddle
Sample data:
id | name | email
:------- | :--- | :----------
userid_1 | A    | A@gmail.com
userid_2 | B    | B@gmail.com
Results:
| res |
| :----------------------------------------------------------------------------------------------------- |
| {"userid_1": {"name": "A", "email": "A@gmail.com"}, "userid_2": {"name": "B", "email": "B@gmail.com"}} |
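For illustration, the same aggregation can be sketched outside the database in Python, assuming the driver returns each row as a dict (the sample rows below mirror the demo data):

```python
# Rows as a driver might return them; values are the question's sample data.
rows = [
    {"id": "userid_1", "name": "A", "email": "A@gmail.com"},
    {"id": "userid_2", "name": "B", "email": "B@gmail.com"},
]

# Equivalent of jsonb_object_agg(id, to_jsonb(t) - 'id'):
# key each row by its id, dropping the id from the value.
result = {row["id"]: {k: v for k, v in row.items() if k != "id"}
          for row in rows}
```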
Try:
select row_to_json(col) from T
This link might help: https://hashrocket.com/blog/posts/faster-json-generation-with-postgresql
Try this:
SELECT json_object(array_agg(id), array_agg(json::text)) FROM (
SELECT id, json_build_object('name', name, 'email', email) as json
FROM users_table
) some_alias_name
If your id is not of type text, you will have to cast it to text as well.
I have the following two tables:
+----------------------------+
| Parent Table |
+----------------------------+
| uuid (PK) |
| caseId |
| param |
+----------------------------+
+----------------------------+
| Child Table |
+----------------------------+
| uuid (PK) |
| parentUuid (FK) |
+----------------------------+
My goal is to do a (left?) join and get all matching rows from the child table, based on the FK, as an array on the parent row, rather than inline in the parent row itself on matching column names (see the desired output further down).
Examples of values in tables:
Parent table:
1. uuid: "10dd617-083-e5b5-044b-d427de84651", caseId: 1, param: "test1"
2. uuid: "5481da7-8b7-22db-d326-b6a0a858ae2f", caseId: 1, param: "test1"
3. uuid: "857dec3-aa3-1141-b8bf-d3a8a3ad28a7", caseId: 2, param: "test1"
Child table:
1. uuid: 7eafab9f-5265-4ba6-bb69-90300149a87d, parentUuid: 10dd617-083-e5b5-044b-d427de84651
2. uuid: f1afb366-2a6b-4cfc-917b-0794af7ade85, parentUuid: 10dd617-083-e5b5-044b-d427de84651
What my desired output should look like:
Something like this query (with pseudo-ish SQL code):
SELECT *
FROM Parent_table
LEFT JOIN Child_table ON Child_table.parentUuid = Parent_table.uuid
WHERE Parent_table.caseId = '1'
Desired output (in JSON)
[
{
"uuid": "10dd617-083-e5b5-044b-d427de84651",
"caseId": "1",
// DESIRED FORMAT HERE
"childRows": [
{
"uuid": "7eafab9f-5265-4ba6-bb69-90300149a87d",
"parentUuid": "10dd617-083-e5b5-044b-d427de84651"
},
{
"uuid": "f1afb366-2a6b-4cfc-917b-0794af7ade85",
"parentUuid": "10dd617-083-e5b5-044b-d427de84651"
}
]
},
{
"uuid": "5481da7-8b7-22db-d326-b6a0a858ae2f",
"caseId": "1"
}
]
You can use nested FOR JSON clauses to achieve this.
SELECT
p.uuid,
p.caseId,
childRows = (
SELECT
c.uuid,
c.parentUuid
FROM Child_table c
WHERE c.parentUuid = p.uuid
FOR JSON PATH
)
FROM Parent_table p
WHERE p.caseId = '1'
FOR JSON PATH;
SQL does not support rows inside rows as you actually want; instead you have to return the entire result set (either as a join or as two separate datasets) from SQL Server and then create the objects in your backend. If you are using .NET and EF/LINQ, this is as simple as getting all the parents with an Include to also fetch the children. Other backends do this in other ways.
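As a rough sketch of that backend assembly, in Python with illustrative row values (uuids shortened, field names from the question):

```python
from collections import defaultdict

# Illustrative rows, as two separate result sets from the database.
parents = [
    {"uuid": "p1", "caseId": "1"},
    {"uuid": "p2", "caseId": "1"},
]
children = [
    {"uuid": "c1", "parentUuid": "p1"},
    {"uuid": "c2", "parentUuid": "p1"},
]

# Bucket children by their FK, then attach the arrays to the parents.
by_parent = defaultdict(list)
for child in children:
    by_parent[child["parentUuid"]].append(child)

# Parents without children simply get no "childRows" key,
# matching the desired output in the question.
result = [
    dict(p, childRows=by_parent[p["uuid"]]) if p["uuid"] in by_parent else dict(p)
    for p in parents
]
```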
My previous question has been answered, thanks to @Erwin Brandstetter for the help:
Query individual values in a nested json record
I have a follow-up:
Aurora Postgres - PostgreSQL 13.1. My jsonb column value looks like this:
'{
"usertype": [
{
"type": "staff",
"status": "active",
"permissions": {
"1": "add user",
"2": "add account"
}
},
{
"type": "customer",
"status": "suspended",
"permissions": {
"1": "add",
"2": "edit",
"3": "view",
"4": "all"
}
}
]
}'
I would like to produce a table-style output where each permission item is shown as a column. It should show the value if present, otherwise NULL.
type | status | perm1 | perm2 | perm3 | perm4 | perm5 | perm6
----------+-----------+---------+------------+-------+-------+-------+-------
staff | active | adduser | addaccount | null | null | null | null
customer | suspended | add | edit | view | all | null | null
In other words, I would like a way to find out the maximum permissions count and show that many columns in the select query.
An SQL query has to return a fixed number of columns. The return type has to be known at call time (at the latest). Number, names and data types of columns in the returned row(s) are fixed by then. There is no way to get a truly dynamic number of result columns in SQL. You'd have to use two steps (two round trips to the DB server):
Determine the list of result columns.
Send a query to produce that result.
Notably, that leaves a time window for race conditions under concurrent write load.
Typically, it's simpler to just return an array or a list or a document type (like JSON) for a variable number of values. Or a set of rows.
If there is a low, well-known maximum of possible values, say 6, like in your added example, just over-provision:
SELECT id
, js_line_item ->> 'type' AS type
, js_line_item ->> 'status' AS status
, js_line_item #>> '{permissions, 1}' AS perm1
, js_line_item #>> '{permissions, 2}' AS perm2
-- , ...
, js_line_item #>> '{permissions, 6}' AS perm6
FROM newtable n
LEFT JOIN LATERAL jsonb_array_elements(n.column1 -> 'usertype') AS js_line_item ON true;
LEFT JOIN to retain rows without any permissions.
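The over-provisioning idea can also be sketched outside the database; here in Python for a single usertype element (the column layout and the cap of 6 are taken from the question):

```python
MAX_PERMS = 6  # the low, well-known maximum from the question

item = {"type": "staff", "status": "active",
        "permissions": {"1": "add user", "2": "add account"}}

# One fixed slot per possible permission; missing slots become None,
# mirroring perm1..perm6 with NULLs in the SQL output.
row = [item["type"], item["status"]] + [
    item["permissions"].get(str(i)) for i in range(1, MAX_PERMS + 1)
]
```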
A table named products has a JSONB column called identifiers that stores an array of JSON objects.
Sample data in products
id | name | identifiers
-----|-------------|---------------------------------------------------------------------------------------------------------------
1 | umbrella | [{"id": "productID-umbrella-123", "domain": "ecommerce.com"}, {"id": "amzn-123", "domain": "amzn.com"}]
2 | ball | [{"id": "amzn-234", "domain": "amzn.com"}]
3 | bat | [{"id": "productID-bat-234", "domain": "ecommerce.com"}]
Now, I have to write a query that sorts the rows in the table based on the "id" value for the domain "amzn.com".
Expected result
id | name | identifiers
----- |--------------|---------------------------------------------------------------------------------------------------------------
3 | bat | [{"id": "productID-bat-234", "domain": "ecommerce.com"}]
1 | umbrella | [{"id": "productID-umbrella-123", "domain": "ecommerce.com"}, {"id": "amzn-123", "domain": "amzn.com"}]
2 | ball | [{"id": "amzn-234", "domain": "amzn.com"}]
ids of amzn.com are "amzn-123" and "amzn-234".
When sorted by ids of amzn.com "amzn-123" appears first, followed by "amzn-234"
Ordering the table by the values of "id" for the domain "amzn.com", the record with id 3 appears first since its id for amzn.com is NULL, followed by the records with ids 1 and 2, which have valid ids that are sorted.
I am genuinely clueless as to how I could write a query for this use case.
If it were a plain JSONB object and not an array of objects, I would have tried it myself.
Is it possible to write a query for such a use case in PostgreSQL?
If yes, please at least give me a pseudo code or the rough query.
As you don't know the position in the array, you will need to iterate over all array elements to find the amazon ID.
Once you have the ID, you can use it with an order by. Using nulls first puts those products at the top that don't have an amazon ID.
select p.*, a.amazon_id
from products p
left join lateral (
select item ->> 'id' as amazon_id
from jsonb_array_elements(p.identifiers) as x(item)
where x.item ->> 'domain' = 'amzn.com'
limit 1 --<< safe guard in case there is more than one amazon id
) a on true --<< we don't really need a join condition
order by a.amazon_id nulls first;
Online example
With Postgres 12 this would be a bit shorter:
select p.*
from products p
order by jsonb_path_query_first(identifiers, '$[*] ? (@.domain == "amzn.com").id') nulls first
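Outside the database, the same nulls-first ordering by the extracted amzn.com id can be sketched in Python (data from the question):

```python
products = [
    {"id": 1, "name": "umbrella",
     "identifiers": [{"id": "productID-umbrella-123", "domain": "ecommerce.com"},
                     {"id": "amzn-123", "domain": "amzn.com"}]},
    {"id": 2, "name": "ball",
     "identifiers": [{"id": "amzn-234", "domain": "amzn.com"}]},
    {"id": 3, "name": "bat",
     "identifiers": [{"id": "productID-bat-234", "domain": "ecommerce.com"}]},
]

def amazon_id(product):
    # First array element for amzn.com, if any (mirrors the LIMIT 1 safeguard).
    return next((e["id"] for e in product["identifiers"]
                 if e["domain"] == "amzn.com"), None)

# NULLS FIRST: rows without an amazon id sort before the rest.
products.sort(key=lambda p: (amazon_id(p) is not None, amazon_id(p) or ""))
```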
After a few tweaks, this is the query that finally did it:
select p.*, amzn -> 'id' AS amzn_id
from products p left join lateral JSONB_ARRAY_ELEMENTS(p.identifiers) amzn ON amzn->>'domain' = 'amzn.com'
order by amzn_id nulls first
I have a struct/model
type User struct {
gorm.Model
Name string `gorm:"unique;not null" json:"name"`
Data postgres.Jsonb `json:"data"`
}
I can query in postgres
db=# select id,name,data from users where data @> '{"foo": "bar"}';
id | name | data
----+-------+------------------
6 | user01 | {"foo": "bar"}
7 | user02 | {"foo": "bar"}
8 | user03 | {"foo": "bar"}
How do I construct a query on the JSONB column for a particular key (or keys)? I was not able to find any documentation on querying with model objects. I understand it's possible with a raw query, but I wanted to see how it can be done using the model object, i.e.
users := []model.User{}
db.Find(&users, map[string]interface{}{"foo": "bar"})
http://gorm.io/docs/dialects.html
http://gorm.io/docs/query.html
In your example you are not specifying which field the map should filter on. Try:
db.Find(&users, "data @> ?", map[string]interface{}{"foo": "bar"})
You make the query like this :
users := []model.User{}
db.Where("data ->> 'foo' = ?", "bar").Find(&users)
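For reference, the key-equality filter that data ->> 'foo' = ? expresses can be sketched in plain Python (the sample rows are illustrative):

```python
# Illustrative rows, shaped like the users table from the question.
users = [
    {"id": 6, "name": "user01", "data": {"foo": "bar"}},
    {"id": 9, "name": "user04", "data": {"foo": "baz"}},
]

# Equivalent of: WHERE data ->> 'foo' = 'bar'
matches = [u for u in users if u["data"].get("foo") == "bar"]
```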
I don't know if this works for the postgres.Jsonb type, but the datatypes package worked for me with JSON saved as a string:
import "gorm.io/datatypes"
db.Where(datatypes.JSONQuery("data").Equals("bar", "foo")).Find(&users)
Given an existing data structure similar to the following:
CREATE TEMP TABLE sample (id int, metadata_array jsonb, text_id_one jsonb, text_id_two jsonb);
INSERT INTO sample
VALUES ('1', '[{"id": "textIdOne", "data": "foo"},{"id": "textIdTwo", "data": "bar"}]'), ('2', '[{"id": "textIdOne", "data": "baz"},{"id": "textIdTwo", "data": "fiz"}]');
I'm trying to unwind the jsonb array of objects from the existing metadata column into new jsonb columns in the same table, which I've already created based on the known fixed list of id keys (textIdOne, textIdTwo, etc.).
I thought I was close using jsonb_populate_recordset(), but then realized that it populates columns from all of the jsonb object's keys, which is not what I want. The desired result is one object per column, based on the object's id.
The only other tricky part of this operation is that my JSON objects' id values use camelCase, and it seems one should avoid quoted/cased column names. But I don't mind quoting or modifying the column names as a means to an end; once the update query is completed I can manually rename the columns as needed.
I'm using PostgreSQL 9.5.2
Existing data & structure:
id | metadata_array jsonb | text_id_one jsonb | text_id_two jsonb
---------------------------------------------------------------------------------------------
1 | [{"id": "textIdOne"...}, {"id": "textIdTwo"...}] | NULL | NULL
2 | [{"id": "textIdOne"...}, {"id": "textIdTwo"...}] | NULL | NULL
Desired result:
id | metadata_array jsonb | text_id_one jsonb | text_id_two jsonb
-------------------------------------------------------------------------------
1 | [{"id": "textIdOne",... | {"id": "textIdOne"...} | {"id": "textIdTwo"...}
2 | [{"id": "textIdOne",... | {"id": "textIdOne"...} | {"id": "textIdTwo"...}
Clarifications:
Thanks for the answers thus far everyone! Though I do know the complete list of keys (about 9) I cannot count on the ordering being consistent.
If all of the json arrays contain two elements for the two new columns, then use fixed paths as in dmfay's answer. Otherwise you should unnest the arrays using jsonb_array_elements() twice: once for text_id_one and once for text_id_two.
update sample t set
text_id_one = value1,
text_id_two = value2
from sample s,
jsonb_array_elements(s.metadata_array) as e1(value1),
jsonb_array_elements(s.metadata_array) as e2(value2)
where s.id = t.id
and value1->>'id' = 'textIdOne'
and value2->>'id' = 'textIdTwo'
returning t.*
Test the query in SqlFiddle.
In case of more than two elements of the arrays this variant may be more efficient (and more convenient too):
update sample t
set
text_id_one = arr1->0,
text_id_two = arr2->0
from (
select
id,
jsonb_agg(value) filter (where value->>'id' = 'textIdOne') as arr1,
jsonb_agg(value) filter (where value->>'id' = 'textIdTwo') as arr2
from sample,
jsonb_array_elements(metadata_array)
group by id
) s
where t.id = s.id
returning t.*
SqlFiddle.
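The element-by-id lookup that both variants rely on can be sketched in Python (abbreviated sample data from the question):

```python
# One row's metadata_array, abbreviated from the question's sample data.
metadata_array = [{"id": "textIdOne", "data": "foo"},
                  {"id": "textIdTwo", "data": "bar"}]

def pick(elements, wanted_id):
    # Equivalent of filtering jsonb_array_elements() on value->>'id';
    # the ordering of objects inside the array does not matter.
    return next((e for e in elements if e["id"] == wanted_id), None)

text_id_one = pick(metadata_array, "textIdOne")
text_id_two = pick(metadata_array, "textIdTwo")
```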
You said the id list is 'fixed'; is the ordering of objects in metadata_array consistent? You can do that with plain traversal:
UPDATE sample
SET text_id_one = metadata_array->0,
text_id_two = metadata_array->1;