I'm quite new to SQL and have two PostgreSQL tables:
CREATE TABLE project (
id uuid DEFAULT uuid_generate_v4 (),
name VARCHAR(100) NOT NULL,
creator_id uuid NOT NULL
);
CREATE TABLE task (
id uuid DEFAULT uuid_generate_v4 (),
name VARCHAR(100) NOT NULL,
project_id uuid NOT NULL
);
I'm running a pretty simple join on them:
SELECT project.*, task.name as task_name
FROM project
INNER JOIN task ON task.project_id = project.id
WHERE project.id = $1
The result is:
[
{
id: '5936d843-aca0-4453-ad24-a7b3a6b90393',
name: 'Test project',
creator_id: '2e0e73af-e824-46a2-89ee-c08cf9c5de7a',
task_name: 'Test task'
},
{
id: '5936d843-aca0-4453-ad24-a7b3a6b90393',
name: 'Test project',
creator_id: '2e0e73af-e824-46a2-89ee-c08cf9c5de7a',
task_name: 'Test task 2'
}
]
My question is: is it possible to merge those rows on id to get a result looking more like this:
[
{
id: '5936d843-aca0-4453-ad24-a7b3a6b90393',
name: 'Test project',
creator_id: '2e0e73af-e824-46a2-89ee-c08cf9c5de7a',
tasks: [
{
task_name: 'Test task'
},
{
task_name: 'Test task 2'
}
]
}
]
I know there are a few things that can help me achieve that, like COALESCE, json_build_object or json_agg. But those make me build "complex" queries for something that looks pretty simple. Is there a simpler way to do this, or should I just take the first result and process it in my language of choice (here JavaScript) to merge as needed?
You need to group by project and aggregate project tasks.
SELECT p.*,
jsonb_agg(jsonb_build_object('task_name', t.name)) tasks
FROM project p
INNER JOIN task t ON t.project_id = p.id
WHERE p.id = $1 -- or whatever your selection condition is
group by p.id, p.name, p.creator_id;
If project.id is the primary key, then
group by p.id, p.name, p.creator_id
can be simplified as
group by p.id;
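For example, assuming id is declared as the primary key of project (the DDL in the question does not actually declare one), the whole query could be written as:
SELECT p.*,
       jsonb_agg(jsonb_build_object('task_name', t.name)) AS tasks
FROM project p
INNER JOIN task t ON t.project_id = p.id
WHERE p.id = $1
GROUP BY p.id;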
I assume that the expected JSON array result is shaped in the logic tier or by something like an ORM. If you would rather have the query itself return the result as a JSON array (which is, by the way, much better), then
select jsonb_agg(to_jsonb(t.*)) from
(
... the query above ...
) t;
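For example, with the query from above inlined (a sketch; the outer alias is renamed to sub so it does not clash with the task alias t):
SELECT jsonb_agg(to_jsonb(sub.*)) AS projects
FROM (
    SELECT p.*,
           jsonb_agg(jsonb_build_object('task_name', t.name)) AS tasks
    FROM project p
    INNER JOIN task t ON t.project_id = p.id
    WHERE p.id = $1
    GROUP BY p.id, p.name, p.creator_id
) sub;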
Say I have this json in a SQL Column called MyJson in a table called StoreTeams
{
MyTeams: [
{
id: 1
},
{
id: 2
},
{
id: 3
}
]
}
I want to take all these ids and then do an inner join against another table.
User Table
- id <pk>
- firstName
- lastName
I am not sure how I would do this; I would probably be running this code via ADO.NET.
You can use openjson(). You don't specify the exact result you want, but the logic is:
select *
from mytable t
cross apply openjson(t.myjson, '$.MyTeams') with (id int '$.id') as x
inner join users u on u.id = x.id
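For example, if you want each team id next to the user's name, the query might look like this (a sketch; the names StoreTeams, MyJson, firstName and lastName are taken from the question):
SELECT x.id, u.firstName, u.lastName
FROM StoreTeams t
CROSS APPLY OPENJSON(t.MyJson, '$.MyTeams') WITH (id int '$.id') AS x
INNER JOIN [User] u ON u.id = x.id;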
I have the following structure
task -> hasMany links -> hasOne user
What I would like to see returned is:
task.id | [
  { linkType: 'foo', userid: 1, taskid: 1,
    user: { name: 'jon', lastname: 'smith' } },
  ...
]
Being a SQL noob, I have only managed to get as far as this:
select task.id, json_agg(links.*) as links from task
left outer join links on task.id = links.taskid
join "user" on "user".id = links.userid
group by task.id ;
which gives me
task.id | [{ linkType: 'foo', userid:1, taskid:1}, ... ]
but obviously missing the user
I'm kinda stuck now on how to add the user property to each of the link array items. I've read several documents, but they always seem to stop at the first join.
The schema design is:
create table task (id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY);
create table "user" (id int GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, username text);
create table links (taskid integer references task (id), userid integer references "user" (id) );
Your example does not match the table structure you provided: "user" has name and lastname columns in the former, but a single username column in the latter. Still, you provided enough information to give an answer. I am using the two-column user because it allows me to show how to construct nested JSON objects.
SELECT task.id,
json_agg(
json_build_object(
'linkType', 'foo',
'taskid', links.taskid,
'userid', links.userid,
'user', json_build_object('name', u.name, 'lastname', u.lastname)
)
) AS links
FROM task
LEFT OUTER JOIN links ON task.id = links.taskid
JOIN "user" u ON u.id = links.userid
GROUP BY task.id;
The general problem of building json objects with the appropriate key values is discussed in this excellent DBA.SE answer. I have adopted what is listed there as solution number 3 because I think it is the most flexible and the most readable. Your tastes might differ.
Hello guys, I am trying to retrieve data using this command:
SELECT TOP 1000 [TS]
,[Id]
,[Name]
,[Email]
FROM [tblData].[dbo].[Info]
This gives me a result:
TS: QWRTY
ID: 191
Name: Henrol
Email: Email
I also have another table "tblNickName" with ID and NickName columns:
ID: 191
NickName: Henjoe
Now I want to change my retrieved data to be something like this:
TS: QWRTY
ID: Henjoe -- The ID now is changed to their nick name from another table.
Name: Henrol
Email: #email
I don't really know what the right syntax or query to do this would be.
Hope you can help me. Thanks!
You need to use an INNER JOIN to retrieve data from the other table.
SELECT TOP 1000 I.[TS]
,N.NickName AS [Id]
,I.[Name]
,I.[Email]
FROM [tblData].[dbo].[Info] I
INNER JOIN [tblData].[dbo].[tblNickName] N ON I.[Id] = N.[Id]
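Note that an INNER JOIN drops rows from Info that have no match in tblNickName. If you want to keep those rows and fall back to the original Id, one option is a LEFT JOIN with COALESCE (a sketch, assuming Id is numeric and therefore needs a cast):
SELECT TOP 1000 I.[TS]
      ,COALESCE(N.NickName, CAST(I.[Id] AS VARCHAR(20))) AS [Id]
      ,I.[Name]
      ,I.[Email]
FROM [tblData].[dbo].[Info] I
LEFT JOIN [tblData].[dbo].[tblNickName] N ON I.[Id] = N.[Id]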
(I'm playing around with using PostgreSQL 9.3 to do some of the lifting required to assemble a JSON data structure.)
Given the following schema:
person
id integer,
name text,
age integer
job
id references person,
title text
is it possible to use PostgreSQL's JSON functions to return something like
| id | personalia | jobs |
|----|----------------------------|----------------------------------------------|
| 1 | {"name": "kim", "age": 55} | [{"title": "Plumber"}, {"title": "manager"}] |
i.e. to select a subset of columns and even do a subquery/join to produce an array based on data from another table that matches some criteria (here: person.id = job.id).
Reading through the PostgreSQL JSON documentation, I see the building blocks are there, but I don't see how to do more advanced stuff like the above scenario – possibly because I lack the SQL know-how.
If using Postgres >= 9.4 this can be done using json_build_object and json_agg:
SELECT
p.id,
json_build_object(
'name', p.name,
'age', p.age
) AS personalia,
json_agg(
json_build_object(
'title', j.title
)
) AS jobs
FROM person p
LEFT JOIN job j USING (id)
GROUP BY p.id;
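One caveat with the LEFT JOIN: a person without any job rows ends up with jobs = [null]. On 9.4+ you can filter those out and fall back to an empty array (a sketch, assuming person.id is the primary key, as the GROUP BY above already does):
SELECT
    p.id,
    json_build_object('name', p.name, 'age', p.age) AS personalia,
    COALESCE(
        json_agg(json_build_object('title', j.title))
            FILTER (WHERE j.id IS NOT NULL),   -- drop the null from unmatched rows
        '[]'::json
    ) AS jobs
FROM person p
LEFT JOIN job j USING (id)
GROUP BY p.id;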
If you are still on 9.3, where json_build_object is not yet available, you can format the JSON manually:
select
id,
format('{"name": %s, "age", %s}', to_json(name), to_json(age))::json as personalia,
array_to_json(array_agg(title)) as jobs
from
person p
left join
(
select id, format('{"title": %s}', to_json(title))::json as title
from job
)
j using (id)
group by id, name, age
Let's say I create two tables using the following SQL,
such that post has many comment:
CREATE TABLE IF NOT EXISTS post (
id SERIAL PRIMARY KEY,
title VARCHAR NOT NULL,
text VARCHAR NOT NULL
);
CREATE TABLE IF NOT EXISTS comment (
id SERIAL PRIMARY KEY,
text VARCHAR NOT NULL,
post_id SERIAL REFERENCES post (id)
);
I would like to be able to query these tables so as to serve a response that
looks like this:
{
"post" : [
{ id: 100,
title: "foo",
text: "foo foo",
comment: [1000,1001,1002] },
{ id: 101,
title: "bar",
text: "bar bar",
comment: [1003] }
],
"comment": [
{ id: 1000,
text: "bla blah foo",
post: 100 },
{ id: 1001,
text: "bla foo foo",
post: 100 },
{ id: 1002,
text: "foo foo foo",
post: 100 },
{ id: 1003,
text: "bla blah bar",
post: 101 },
]
}
Doing this naively would involve two SELECT statements,
the first along the lines of
SELECT DISTINCT ON (post.id) post.id, post.title, post.text, comment.id
FROM post, comment
WHERE post.id = comment.post_id
... and the second something along the lines of
SELECT DISTINCT ON (comment.id) comment.id, comment.text, post.id
FROM post, comment
WHERE post.id = comment.post_id
However, I cannot help but think that there is a way to do this involving
only one SELECT statement - is this possible?
Notes:
I am using Postgres, but I do not require a Postgres-specific solution. Any standard SQL solution should do.
The queries above are illustrative only; they do not give us exactly what is needed at the moment.
It looks like what the naive solution here does is perform the same join on the same two tables, just doing a distinct on a different table each time. This definitely leaves room for improvement.
It appears that ActiveModel Serializers in Rails already do this - if someone familiar with them would like to chime in on how they work under the hood, that would be great.
You need two queries to get the form you laid out:
SELECT p.id, p.title, p.text, array_agg(c.id) AS comments
FROM post p
JOIN comment c ON c.post_id = p.id
WHERE p.id = ???
GROUP BY p.id;
Or faster, if you really want to retrieve all or most of your posts:
SELECT p.id, p.title, p.text, c.comments
FROM post p
JOIN (
SELECT post_id, array_agg(id) AS comments
FROM comment
GROUP BY 1
) c ON c.post_id = p.id;
Plus:
SELECT id, text, post_id
FROM comment
WHERE post_id = ??;
Single query
SQL can only send one result type per query. For a single query, you would have to combine both tables, listing columns for post redundantly. That conflicts with the desired response in your question. You have to give up one of the two conflicting requirements.
SELECT p.id, p.title, p.text AS p_text, c.id, c.text AS c_text
FROM post p
JOIN comment c ON c.post_id = p.id
WHERE p.id = ???
Aside: The column comment.post_id should be integer, not serial! Also, the column names are probably just for a quick showcase. You wouldn't use the non-descriptive text as a column name, which also conflicts with a basic data type.
Compare this related case:
Foreign key of serial type - ensure always populated manually
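Applied to the schema above, a corrected comment table might look like this (a sketch; the NOT NULL on post_id is my assumption):
CREATE TABLE IF NOT EXISTS comment (
  id      SERIAL PRIMARY KEY,
  text    VARCHAR NOT NULL,
  post_id integer NOT NULL REFERENCES post (id)  -- plain integer, not serial
);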
However, I cannot help but think that there is a way to do this involving only one SELECT statement - is this possible?
Technically: yes. If you really want your data in json anyway, you could use PostgreSQL (9.2+) to generate it with the json functions, like:
SELECT row_to_json(sq)
FROM (
SELECT array_to_json(ARRAY(
SELECT row_to_json(p)
FROM (
SELECT *, ARRAY(SELECT id FROM comment WHERE post_id = post.id) AS comment
FROM post
) AS p
)) AS post,
array_to_json(ARRAY(
SELECT row_to_json(comment)
FROM comment
)) AS comment
) sq;
But I'm not sure it's worth it -- usually not a good idea to dump all your data without limit / pagination.
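For instance, you could page through the posts and fetch only the comments that belong to them (a sketch; the ORDER BY id and the page size of 50 are assumptions):
SELECT row_to_json(sq)
FROM (
   SELECT array_to_json(ARRAY(
             SELECT row_to_json(p)
             FROM (
                SELECT *,
                       ARRAY(SELECT id FROM comment WHERE post_id = post.id) AS comment
                FROM post
                ORDER BY id
                LIMIT 50 OFFSET 0          -- first page of 50 posts
             ) AS p
          )) AS post,
          array_to_json(ARRAY(
             SELECT row_to_json(c)
             FROM (
                SELECT *
                FROM comment
                WHERE post_id IN (SELECT id FROM post ORDER BY id LIMIT 50 OFFSET 0)
             ) AS c
          )) AS comment
) sq;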