Writing Sequelize queries for geometries in Postgres

I have the following query and want to write it as a sequelize query
`SELECT "Gigs"."id",
"gigType",
"gigCategory",
"gigTitle",
"gigDescription",
"minOrderAmount",
unit,
"unitPrice",
stock,
sold,
"expireDate",
"Gigs".userid AS "sellerId",
"growerType" AS "sellerType",
"points",
json_build_object('id', "locationId", 'lat', lat, 'lng', lng) AS location
FROM (SELECT DISTINCT ON ("gigid") "gigid", "locationId", lat, lng
FROM (SELECT "gigid",
id as "locationId",
st_x(coordinates::geometry) as lat,
st_y(coordinates::geometry) as lng
FROM "Locations"
WHERE ST_DWithin(coordinates,
ST_MakePoint(${location.lat}, ${location.lng})::geography,
${distance})
ORDER BY coordinates <-> ST_MakePoint(${location.lat}, ${location.lng})::geography
LIMIT ${limit}) AS nearGigIds) AS distinctGigIds
INNER JOIN "Gigs"
ON distinctGigIds."gigid" = "Gigs"."id"
INNER JOIN "Users" U
ON U.id = "Gigs".userid
INNER JOIN "Customers" C on U.id = C.userid
INNER JOIN "Growers" G on C.userid = G.userid
WHERE "expireDate" > ${today}::text::date
ORDER BY points DESC
OFFSET ${offset} LIMIT 10;`
I want to know how to write the ORDER BY coordinates <-> ST_MakePoint(${location.lat}, ${location.lng})::geography part in the query. I referred to the docs, and there is a way to call functions in ORDER BY as follows:
order: [
  // Will order by otherfunction(`col1`, 12, 'lalala') DESC
  [sequelize.fn('otherfunction', sequelize.col('col1'), 12, 'lalala'), 'DESC'],
],
But I am confused about how to express that <-> ordering in this form.

db.Table.findAll({
  attributes: {
    include: [
      [
        // Compute the distance from the stored point to the supplied point
        Sequelize.fn(
          'ST_Distance',
          Sequelize.fn('point', Sequelize.col('longitude'), Sequelize.col('latitude')),
          Sequelize.fn('point', longitude, latitude),
        ),
        'distanceAttribute',
      ],
    ],
  },
  // Then order by the computed attribute
  order: [['distanceAttribute', 'DESC']],
});
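If you specifically want the KNN <-> operator in the ORDER BY rather than an ST_Distance attribute, there is no Sequelize helper for operators, so the usual escape hatch is Sequelize.literal inside order. A minimal sketch, assuming a hypothetical Location model mapped to the "Locations" table, Sequelize in scope, and location.lat / location.lng values that are already validated numbers (interpolating raw user input into a literal is an SQL injection risk). Note that PostGIS's ST_MakePoint takes (x, y), i.e. (lng, lat); the argument order below simply mirrors your original query.
db.Location.findAll({
  attributes: [
    'id',
    // st_x / st_y of the geography cast to geometry, as in the raw SQL
    [Sequelize.fn('ST_X', Sequelize.cast(Sequelize.col('coordinates'), 'geometry')), 'lat'],
    [Sequelize.fn('ST_Y', Sequelize.cast(Sequelize.col('coordinates'), 'geometry')), 'lng'],
  ],
  order: [
    // ORDER BY coordinates <-> ST_MakePoint(...)::geography (KNN distance ordering)
    Sequelize.literal(
      `coordinates <-> ST_MakePoint(${location.lat}, ${location.lng})::geography`
    ),
  ],
  limit,
});
The ST_Distance/distanceAttribute approach above works as well; the literal is only needed because <-> is an operator, not a function.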

Related

Make SELECT subquery COUNT the total subscribers of a subscriber

I am trying to create a query that counts the total subscribers of a subscriber. It currently looks like this:
await this.queryInstance.query(
'SELECT all_users_subbed_to.* , (SELECT COUNT(??????)) AS subscribers_sub_count
FROM
(SELECT publisher_id, subscriber_id, u2.username
AS username, u2.user_photo AS user_photo
FROM subscribers s
INNER JOIN users u
ON (u.id = s.subscriber_id)
INNER JOIN users u2 ON (u2.id = s.publisher_id)
WHERE subscriber_id = ($1)
LIMIT 20
OFFSET ($2))
AS all_users_subbed_to;'
,
[currentUserId = 80, offset]
);
The FROM clause, aka all_users_subbed_to, is working correctly and displays all the subscribers the current user has. The data comes back like this:
"subscribedToCurrentUser": [
{
"publisher_id": 84,
"subscriber_id": 80,
"username": "supercoookie",
"user_photo": "profile-pic-for-supercoookie.jpeg"
},
{
"publisher_id": 88,
"subscriber_id": 80,
"username": "GERPAL1",
"user_photo": "profile-pic-for-GERPAL1.jpeg"
}
]
The issue I am having is getting the total subscriber count for each of those subscribers. I need to use each subscriber's publisher_id, i.e. all_users_subbed_to.publisher_id, and get their total subs (using COUNT) from the subscribers table. I would like to create a new column called subscribers_sub_count that contains that total.
Any ideas?
It should look like this:
"subscribedToCurrentUser": [
{
"publisher_id": 84,
"subscriber_id": 80,
"username": "supercoookie",
"user_photo": "profile-pic-for-supercoookie.jpeg",
"subscribers_sub_count": 3
},
{
"publisher_id": 88,
"subscriber_id": 80,
"username": "GERPAL1",
"user_photo": "profile-pic-for-GERPAL1.jpeg",
"subscribers_sub_count": 70
}
]
The subscribers table looks like this:
await this.queryInstance.query(
'SELECT all_users_subbed_to.*, COUNT(all_users_subbed_to.id) AS subscribers_sub_count
FROM
(SELECT publisher_id, subscriber_id, u2.username
AS username, u2.user_photo AS user_photo
FROM subscribers s
INNER JOIN users u
ON (u.id = s.subscriber_id)
INNER JOIN users u2 ON (u2.id = s.publisher_id)
WHERE subscriber_id = ($1)
LIMIT 20
OFFSET ($2))
AS all_users_subbed_to;'
,
[currentUserId = 80, offset]
);
Fixed it. It just needed a WHERE clause that uses data from all_users_subbed_to:
await this.queryInstance.query(
'SELECT all_users_subbed_to.* ,
(SELECT COUNT(*) FROM subscribers s2 WHERE s2.publisher_id = all_users_subbed_to.publisher_id) AS subscribers_sub_count
FROM
(SELECT publisher_id, subscriber_id, u2.username
AS username, u2.user_photo AS user_photo
FROM subscribers s
INNER JOIN users u
ON (u.id = s.subscriber_id)
INNER JOIN users u2 ON (u2.id = s.publisher_id)
WHERE subscriber_id = ($1)
LIMIT 20
OFFSET ($2))
AS all_users_subbed_to;'
,
[currentUserId = 80, offset]
);
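For what it's worth, the same per-publisher count can also be produced by joining a pre-aggregated derived table instead of running the correlated subquery once per row. A sketch using the same table and column names as above (not tested against the real schema):
SELECT all_users_subbed_to.*,
       COALESCE(pub_counts.subscribers_sub_count, 0) AS subscribers_sub_count
FROM (SELECT s.publisher_id, s.subscriber_id, u2.username, u2.user_photo
      FROM subscribers s
      INNER JOIN users u ON u.id = s.subscriber_id
      INNER JOIN users u2 ON u2.id = s.publisher_id
      WHERE s.subscriber_id = ($1)
      LIMIT 20 OFFSET ($2)) AS all_users_subbed_to
LEFT JOIN (SELECT publisher_id, COUNT(*) AS subscribers_sub_count
           FROM subscribers
           GROUP BY publisher_id) AS pub_counts
  ON pub_counts.publisher_id = all_users_subbed_to.publisher_id;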

Postgres query - return flattened JSON

I have the following working query:
const getPromos = async (limit = 10, site: string, branch: string) => {
const query = `SELECT
json_build_object(
'id', p.id,
'description', p.description,
'discounted_price', p.discounted_price,
'items', jsonb_agg((i.id, i.price, i.title))
)
FROM promotions p
INNER JOIN promotion_items pi ON p.id = pi.promotion_id
INNER JOIN items i ON pi.item_code = i.item_code WHERE site_id = ${site} and store_id = ${branch}
GROUP BY p.id LIMIT ${limit}`;
return await db.query(query);
};
The issue is simple: each item (in this example, a promotion) is returned wrapped in an object named json_build_object. I don't want any object wrapping my promotions, just this:
[{id:1, .... items: [...items here...]}, {id:2, .... items: [...items here...]}]
Any idea?
You can get the desired result directly from the query by aggregating the result set with jsonb_agg (just like items further down in the query).
SELECT
jsonb_agg(jsonb_build_object(
'id', p.id,
'description', p.description,
'discounted_price', p.discounted_price,
'items', jsonb_agg((i.id, i.price, i.title))
))
--- the rest of your query
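One caveat: Postgres rejects aggregate calls nested directly inside one another ("aggregate function calls cannot be nested"), so in practice the outer jsonb_agg usually wraps the original grouped query as a derived table. A sketch reusing the question's tables and interpolated parameters:
SELECT jsonb_agg(promo) AS results
FROM (SELECT jsonb_build_object(
               'id', p.id,
               'description', p.description,
               'discounted_price', p.discounted_price,
               'items', jsonb_agg((i.id, i.price, i.title))
             ) AS promo
      FROM promotions p
      INNER JOIN promotion_items pi ON p.id = pi.promotion_id
      INNER JOIN items i ON pi.item_code = i.item_code
      WHERE site_id = ${site} AND store_id = ${branch}
      GROUP BY p.id
      LIMIT ${limit}) sub;
This returns a single row whose results column is the plain array [{...}, {...}] described in the question.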

SQL - Combine 4 columns into one new column as a single JSON object

Edit: FYI, my PrimaryImage is actually an int and is used as a foreign key to my Images table. I just wanted to explain that so it is clear why I join on f.PrimaryImage = i.Id. For some reason all my rows are getting populated with every i.Id, i.EntityId, i.ImageTypeId, and i.ImageUrl instead of just the row where f.PrimaryImage = i.Id.
I am writing a SQL stored procedure to SELECT ALL, and I want to combine my last 4 columns Id, EntityId, ImageTypeId, and ImageUrl into one new column PrimaryImage as a single JSON object. I was able to do that for my Skills column, but there I needed an array of JSON objects, so FOR JSON AUTO took care of it. For PrimaryImage I need a single JSON object containing Id, EntityId, ImageTypeId, and ImageUrl. I included a picture of my result after executing this proc, and right below the table I drew a representation of what I want the column to look like. Just to clarify, I have four tables, Friends, FriendSkills, Skills, and Images, which I've joined accordingly. Basically my schema needs to look like this:
{
"id": 0000,
"userId": "String"
"bio": "String",
"title": "String",
"summary": "String",
"headline": "String",
"statusId": "String",
"slug": "String",
"skills": [{id: 0, name: "String"},{id: 0, name: "String"}],
"primaryImage": {
"id": 0,
"entityId": 0,
"imageTypeId": 0,
"imageUrl": "String"
}
}
Here is my stored procedure
ALTER PROC [dbo].[Friends_SelectAllV2]
AS
/* --- Test Proc ------
Execute dbo.Friends_SelectAllV2
*/
BEGIN
Select f.Id
,f.UserId
,f.DateAdded
,f.DateModified
,f.Title
,f.Bio
,f.Summary
,f.Headline
,f.Slug
,f.StatusId
,Skills = ( SELECT s.Id,
s.Name
From dbo.Skills as s inner join dbo.FriendSkills fs
on s.Id = fs.SkillId
Where f.Id = fs.FriendId
FOR JSON AUTO
),
PrimaryImage = (SELECT i.Id,
i.EntityId,
i.ImageTypeId,
i.ImageUrl
From dbo.Friends f left join dbo.Images as i
on f.PrimaryImage = i.Id
Where f.PrimaryImage = i.Id
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER
)
From dbo.Friends f
END
You don't actually need another subquery if the inner property is a single object; you can use FOR JSON PATH with explicit path syntax.
Select f.Id
,f.UserId
,f.DateAdded
,f.DateModified
,f.Title
,f.Bio
,f.Summary
,f.Headline
,f.Slug
,f.StatusId
,Skills = ( SELECT s.Id,
s.Name
From dbo.Skills as s inner join dbo.FriendSkills fs
on s.Id = fs.SkillId
Where f.Id = fs.FriendId
FOR JSON AUTO
),
i.Id AS [PrimaryImage.Id],
i.EntityId AS [PrimaryImage.EntityId],
i.ImageTypeId AS [PrimaryImage.ImageTypeId],
i.ImageUrl AS [PrimaryImage.ImageUrl]
From dbo.Friends f
left join dbo.Images as i on f.PrimaryImage = i.Id
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
If, however, there are multiple images and you do need a subquery, the issue is that you are pulling in the whole Friends table again instead of correlating with the outer table. You would also remove WITHOUT_ARRAY_WRAPPER from that subquery so the images come back as an array.
Select f.Id
,f.UserId
,f.DateAdded
,f.DateModified
,f.Title
,f.Bio
,f.Summary
,f.Headline
,f.Slug
,f.StatusId
,Skills = ( SELECT s.Id,
s.Name
From dbo.Skills as s inner join dbo.FriendSkills fs
on s.Id = fs.SkillId
Where f.Id = fs.FriendId
FOR JSON AUTO
),
PrimaryImage = (SELECT i.Id,
i.EntityId,
i.ImageTypeId,
i.ImageUrl
FROM dbo.Images as i
WHERE f.PrimaryImage = i.Id
FOR JSON PATH
)
From dbo.Friends f
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;

Sequelize is automatically adding a subquery within the where clause. Is there a way to make it skip adding the where clause?

I have a Sequelize query that uses INNER JOINs. The issue is that Sequelize is internally adding another where clause with a subquery on the child table, and that is eating up the query performance. Below is an example of my code and the raw query output.
Is there a way to make sequelize skip adding this where clause?
Sequelize version: 6.x
PostModel.findAll({
  where: {
    id: 1,
  },
  include: [
    {
      model: CommentsModel,
      required: true,
    },
  ],
})
This generates the SQL query below.
SELECT "post".*
FROM (SELECT "post"."*"
FROM "posts" AS "post"
WHERE "post"."id" = 2
AND (SELECT "post_id"
FROM "comments" AS "c"
WHERE "comments"."post_id" = "post"."id" AND ("c"."text_search" ## 'who:*')) IS NOT NULL
ORDER BY "post"."id" DESC
LIMIT 50 OFFSET 0) AS "post"
LEFT OUTER JOIN "post_tags" AS "tags" ON "post"."id" = "tags"."post_id"
LEFT OUTER JOIN "tag" AS "tags->tag" ON "tags"."tag_id" = "tags->tag"."id"
INNER JOIN "comments" AS "c" ON "post"."id" = "c"."post_id" AND ("c"."text_search" ## 'who:*')
ORDER BY "post"."id" DESC;
As you can see, the WHERE clause now has this added subquery:
(SELECT "post_id"
FROM "comments" AS "c"
WHERE "comments"."post_id" = "post"."id" AND ("c"."text_search" ## 'who:*'))
This is basically killing the performance of the query.
After a lot of research I figured out the solution.
We need to add subQuery: false within the association.
PostModel.findAll({
  where: {
    id: 1,
  },
  include: [
    {
      subQuery: false,
      model: CommentsModel,
      required: true,
    },
  ],
})
Query output:
SELECT "post".*
FROM (SELECT "post"."*"
FROM "posts" AS "post"
WHERE "post"."id" = 2
ORDER BY "post"."id" DESC
LIMIT 50 OFFSET 0) AS "post"
LEFT OUTER JOIN "post_tags" AS "tags" ON "post"."id" = "tags"."post_id"
LEFT OUTER JOIN "tag" AS "tags->tag" ON "tags"."tag_id" = "tags->tag"."id"
INNER JOIN "comments" AS "c" ON "post"."id" = "c"."post_id" AND ("c"."text_search" ## 'who:*')
ORDER BY "post"."id" DESC;

PSQL Join alternative to return all rows

I've got a PSQL function that has 3 joins in it and the data is returned in a json object. I have a 4th table that I need to get data from but it has a one-to-many relationship with the table I wish to join on.
This is my current code:
select json_agg(row_to_json(s)) as results from (
select g.*,row_to_json(o.*) as e_occurence,
row_to_json(d.*) as e_definition,
row_to_json(u.*) as e_e_updates,
cardinality(o.m_ids) as m_count
from schema.e_group g
join schema.e_occurrence o on g.id = o.e_group_id
join schema.e_definition d on g.e_id = d.id
left join schema.e_e_updates u on d.id = u.e_id
) s
This gets me an array of objects that follows this rough structure:
[
{
"id": 11308158,
"e_id": 16,
"created_on": "2020-09-09T12:08:07.556062",
"event_occurence": {
"id": 9081887,
"e_id": 16,
"e_group_id": 11308158
},
"e_definition": {
"id": 16,
"name": "Placeholder name"
},
"e_e_updates": {
"id": 22,
"user_id": "7281057e-2876-1673-js7d-7cqj611b4557",
"e_id": 16
},
"m_count": 0
}
]
My problem is that the table e_e_updates can have multiple records for each corresponding e_definition.id.
Clearly the join will not work as hoped in this instance as I'd like e_e_updates to be an array of all the linked rows.
Is there an alternative means of solving this issue?
Basically, you need another level of aggregation. This should do what you want:
select json_agg(row_to_json(s)) as results
from (
select
g.*,
row_to_json(o.*) as e_occurence,
row_to_json(d.*) as e_definition,
u.u_arr as e_e_updates,
cardinality(o.m_ids) as m_count
from schema.e_group g
join schema.e_occurrence o on g.id = o.e_group_id
join schema.e_definition d on g.e_id = d.id
left join (
select eu.e_id, json_agg(row_to_json(eu.*)) as u_arr
from schema.e_e_updates eu
group by eu.e_id
) u on d.id = u.e_id
) s
You could also do this with a subquery:
select json_agg(row_to_json(s)) as results
from (
select
g.*,
row_to_json(o.*) as e_occurence,
row_to_json(d.*) as e_definition,
(
select json_agg(row_to_json(u.*))
from schema.e_e_updates u
where u.e_id = d.id
) as e_e_updates,
cardinality(o.m_ids) as m_count
from schema.e_group g
join schema.e_occurrence o on g.id = o.e_group_id
join schema.e_definition d on g.e_id = d.id
) s