Postgres query optimization on joins and WHERE IN clauses

So I am trying to build a backend that sends users notifications from time to time. In order to do that I need to pull some data from different Postgres tables. I wrote this query, but it is taking 12-14 seconds to return the data.
When run without the WHERE ... IN clauses, I get the data in about 700 ms.
SELECT DISTINCT ON (t."playerId") t."gzpId", t."pubCode", t."playerId" as token, t."provider",
COALESCE(p."preferenceValue",'en') as lang,
s."segmentId"
FROM "userPlayerIdMap" t LEFT JOIN
"userPreferences" p
ON t."gzpId" = p."gzpId" LEFT JOIN
"segment" s
ON t."gzpId" = s."gzpId"
WHERE t."pubCode" IN ('hyrmas','ayqioa','rj49as99') and
t."provider" IN ('FCM','ONE_SIGNAL') and
s."segmentId" IN (0,1,2,3,4,5,6) and
p."preferenceValue" IN ('en','hi')
ORDER BY t."playerId" desc;
Rows in "userPlayerIdMap" = 650000
Rows in "userPreferences" = 1456466
Rows in "segment" = 5674186
I have already added indexes on the required columns.
Would really appreciate some help.

Use subqueries:
SELECT t."gzpId", t."pubCode", t."playerId" as token, t."provider",
COALESCE((SELECT p."preferenceValue"
FROM "userPreferences" p
WHERE t."gzpId" = p."gzpId" AND
p."preferenceValue" IN ('en', 'hi')
LIMIT 1
), 'en'
) as lang,
(SELECT s."segmentId"
FROM "segment" s
WHERE t."gzpId" = s."gzpId" AND
s."segmentId" IN (0, 1, 2, 3, 4, 5, 6)
LIMIT 1
) as segmentId
FROM "userPlayerIdMap"
WHERE t."pubCode" IN ('hyrmas', 'ayqioa', 'rj49as99') and
t."provider" IN ('FCM', 'ONE_SIGNAL')
-- ORDER BY t."playerId" desc;
I'm not sure the ORDER BY is necessary. If it was only being used for the DISTINCT ON, then it is not necessary in this version of the logic.
At the very least (with the ORDER BY) this will reduce the number of rows that need to be sorted. If you don't need the ORDER BY, then there is no sort -- a significant performance gain.
Then, you want indexes on:
userPreferences(gzpId, preferenceValue)
segment(gzpId, segmentId)
The index on userPlayerIdMap is trickier. I don't think Postgres can use the index for both IN lists without a scan. You want the more selective column first, so one of:
userPlayerIdMap(provider, pubCode, gzpId)
userPlayerIdMap(pubCode, provider, gzpId)
I threw gzpId in so Postgres can use the index to look up the values in the subqueries.
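As DDL, those suggestions would look something like the following (a sketch; the index names are placeholders, and only one of the two userPlayerIdMap variants is needed):
CREATE INDEX idx_userpreferences_gzpid_pref ON "userPreferences" ("gzpId", "preferenceValue");
CREATE INDEX idx_segment_gzpid_segmentid ON "segment" ("gzpId", "segmentId");
-- one of the following, with the more selective column first:
CREATE INDEX idx_upim_provider_pubcode ON "userPlayerIdMap" ("provider", "pubCode", "gzpId");
CREATE INDEX idx_upim_pubcode_provider ON "userPlayerIdMap" ("pubCode", "provider", "gzpId");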

Related

Aggregating a subquery

I am trying to find what I missed in the code to retrieve the value of "Last_Maintenance" in a table called "Interventions".
I have tried to understand the evaluation order of SQL and the particularities of subqueries, without success.
Did I miss something, something basic or an important step?
---Interventions with PkState "Schedule_Visit" with the Last_Maintenance aggregation
SELECT Interventions.ID AS Nro_Inter,
--Interventions.PlacesList AS Nro_Place,
MaintenanceContracts.Num AS Nro_Contract,
Interventions.TentativeDate AS Schedule_Visit,
--MaintenanceContracts.NumberOfVisits AS Number_Visits_Contracts,
--Interventions.VisitNumber AS Visit_Number,
(SELECT MAX(Interventions.AssignmentDate)
FROM Interventions
WHERE PkState = 'AE4B42CF-0003-4796-89F2-2881527DFB26' AND PkMaintenanceContract IS NOT NULL) AS Last_Maintenance --PkState "Maintenance Executed"
FROM Interventions
INNER JOIN MaintenanceContracts ON MaintenanceContracts.Pk = Interventions.PkMaintenanceContract
WHERE PkState = 'AE4B42CF-0000-4796-89F2-2881527ABC26' AND PkMaintenanceContract IS NOT NULL --PkState "Schedule_Visit"
GROUP BY Interventions.AssignmentDate,
Interventions.ID,
Interventions.PlacesList,
MaintenanceContracts.Num,
Interventions.TentativeDate,
MaintenanceContracts.NumberOfVisits,
Interventions.VisitNumber
ORDER BY Nro_Contract
I tried to use GROUP BY and a HAVING clause in a subquery, but I did not succeed. Clearly I am lacking some understanding.
Output
The output of "Last_Maintenance" is the last date of entire contracts in the DB, which is not the desirable output. The desirable output is to know the last date the maintenance was executed for each row, meaning, for each "Nro-Contract". Somehow I need to aggregate like I did below.
In opposition of what mention I did succeed in another table.
In the table Contracts I did had success as you can see.
SELECT
MaintenanceContracts.Num AS Nro_Contract,
MAX(Interventions.AssignmentDate) AS Last_Maintenance
--MaintenanceContracts.Name AS Place
--MaintenanceContracts.StartDate,
--MaintenanceContracts.EndDate
FROM MaintenanceContracts
INNER JOIN Interventions ON Interventions.PkMaintenanceContract = MaintenanceContracts.Pk
WHERE MaintenanceContracts.ActiveContract = 2 OR MaintenanceContracts.ActiveContract = 1 --// 2 = Inactive; 1 = Active
GROUP BY MaintenanceContracts.Num, MaintenanceContracts.Name,
MaintenanceContracts.StartDate,
MaintenanceContracts.EndDate
ORDER BY Nro_Contract
I am struggling to understand how nested queries work and how I can leverage them in a simple manner.
I think you're mixed up in how aggregation works. The MAX function will get a single MAX value over the entire dataset. What you're trying to do is get a MAX for each unique ID. For that, you either use derived tables, subqueries or windowed functions. I'm a fan of using the ROW_NUMBER() function to assign a sequence number. If you do it correctly, you can use that row number to get just the most recent record from a dataset. From your description, it sounds like you always want to have the contract and then get some max values for that contract. If that is the case, then your second query is closer to what you need. Using windowed functions in derived queries has the added benefit of not having to worry about using the GROUP BY clause. Try this:
SELECT
MaintenanceContracts.Num AS Nro_Contract,
--MaintenanceContracts.Name AS Place
--MaintenanceContracts.StartDate,
--MaintenanceContracts.EndDate
i.AssignmentDate as Last_Maintenance
FROM MaintenanceContracts
INNER JOIN (
SELECT *
--This function will order the records for each maintenance contract.
--The most recent intervention will have a row_num = 1
, ROW_NUMBER() OVER(PARTITION BY PkMaintenanceContract ORDER BY AssignmentDate DESC) as row_num
FROM Interventions
) as i
ON i.PkMaintenanceContract = MaintenanceContracts.Pk
AND i.row_num = 1 --Used to get the most recent intervention.
WHERE MaintenanceContracts.ActiveContract = 2
OR MaintenanceContracts.ActiveContract = 1 --// 2 = Inactive; 1 = Active
ORDER BY Nro_Contract
;
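For comparison, the same per-contract MAX can be sketched with a correlated subquery - one of the other options mentioned above. This is untested (no schema available); the table and column names are taken from the question:
SELECT
MaintenanceContracts.Num AS Nro_Contract,
(SELECT MAX(i.AssignmentDate)
 FROM Interventions i
 WHERE i.PkMaintenanceContract = MaintenanceContracts.Pk) AS Last_Maintenance
FROM MaintenanceContracts
WHERE MaintenanceContracts.ActiveContract IN (1, 2) --// 2 = Inactive; 1 = Active
ORDER BY Nro_Contract;
The correlation in the inner WHERE is what ties each MAX to its own contract; the uncorrelated subquery in the original query computed a single MAX over the whole table, which explains the output described above.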

Please help to optimise the query

I am new to SQL. The SQL query below takes almost 4 minutes to run, because of which the page remains stuck for that long. We need to optimise this query so that the user is not stuck waiting for that much time.
Please help to optimise.
We are using an Oracle DB.
Some improvements can be made by having 2 pre-computed columns in the aaa_soldto_xyz table:
aaa_soldto_xyz.ID1 = Substr(aaa_soldto_xyz.xyz_id, 1, 6)
aaa_soldto_xyz.ID2 = Substr(aaa_soldto_xyz.xyz_id, 1, Length(aaa_soldto_xyz.xyz_id) - 5)
Those can make better use of existing or new indexes.
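One way to maintain such pre-computed columns is as virtual columns (available since Oracle 11g). A sketch, assuming the table and expressions above; the column and index names are illustrative:
ALTER TABLE aaa_soldto_xyz ADD (
  id1 GENERATED ALWAYS AS (SUBSTR(xyz_id, 1, 6)) VIRTUAL,
  id2 GENERATED ALWAYS AS (SUBSTR(xyz_id, 1, LENGTH(xyz_id) - 5)) VIRTUAL
);
-- indexes on virtual columns behave like function-based indexes:
CREATE INDEX idx_xyz_id1 ON aaa_soldto_xyz (id1);
CREATE INDEX idx_xyz_id2 ON aaa_soldto_xyz (id2);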
We cannot help you optimize the query without the explain plan.
But an obvious improvement in this query is to remove the abc_customer_address table from the subquery in the SELECT clause, do a left join in the FROM list instead, and check the result.
You need to change the following clauses.
From clause:
Left join (SELECT ADDR.zip_code,
                  ADDR.ID,
                  ROW_NUMBER() OVER (PARTITION BY ADDR.ID ORDER BY 1) AS RN
           FROM abc_customer_address ADDR) ADDR
ON (ADDR.id = abc_customer.billing AND ADDR.RN = 1)
Select clause:
CASE WHEN abc_customer_address.zip_code IS NULL THEN ADDR.zip_code
ELSE abc_customer_address.zip_code
END AS ZIP_CODE,
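Equivalently, that CASE expression can be written more compactly with the standard COALESCE function:
COALESCE(abc_customer_address.zip_code, ADDR.zip_code) AS ZIP_CODE,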
Cheers!!

Query including subquery and group by slower than expected

The whole query below runs incredibly slowly.
The subquery [alias Stage_1] takes only 1.37 minutes, returning 9514 records; however, the whole query takes over 20 minutes, returning 2606 records.
I could use a #temp table to hold the subquery results to improve the performance; however, I would prefer not to.
An overview of the query: table WeeklySpace inner joins to Spaceblock_Name_to_PG on SpaceblockName_SID; this cuts down the results in WeeklySpace and adds PG_Code to them. WeeklySpace is then full outer joined to Sales_PG_Wk across 3 fields. The WHERE clause focuses the results, and may be changed. The results from the subquery are then summed. You cannot do the final summing in the subquery due to the GROUP BY and SUM OVER used.
I believe the issue is that the subquery is recalculated repeatedly during the GROUP BY in the final summing. The field SpaceblockName_SID also appears to be involved in causing the issue, as without it the run time with a GROUP BY in the subquery isn't affected.
I have read through loads of suggestions, trying them all to resolve the issue.
These include:
Adding TOP 2147483647 with an ORDER BY to force intermediate materialization, both in the subquery and using a CTE.
Adding a join after stage_1.
Casting SpaceblockName_SID from an int to a varchar and back again.
The execution plan (cut in two parts, shown below the code) for both the subquery and the whole query appear similar. The cost is around the Full Outer Join (Hash Match), which I expected.
The query is running on SQL Server 2005.
Any help greatly appreciated!
select
Cost_centre
, Fin_week
, SpaceblockName_SID
, sum(Propor_rep_SRV) as Total_SpaceblockName_SID_SRV
from
(
select
coalesce(space_side.fin_week , sales_side.fin_week) as Fin_week
,coalesce(space_side.cost_centre , sales_side.cost_Centre) as Cost_centre
,space_side.SpaceblockName_SID
,case
when space_side.SpaceblockName_SID is null
then sales_side.SalesExVAT
else sum(space_side.TLM)
/nullif(sum (sum(space_side.TLM) ) over (partition by coalesce(space_side.fin_week , sales_side.fin_week)
, coalesce(space_side.cost_centre , sales_side.cost_Centre)
, coalesce( Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)) ,0)*sales_side.SalesExVAT
end as Propor_rep_SRV
from
WeeklySpace as space_side
INNER JOIN
Spaceblock_Name_to_PG
ON space_side.SpaceblockName_SID = Spaceblock_Name_to_PG.SpaceblockName_SID
and Spaceblock_Name_to_PG.PG_Code < 10000
full outer join
sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
where
coalesce(space_side.fin_week, sales_side.fin_week) between 201538 and 201550
and
coalesce(space_side.cost_centre, sales_side.cost_Centre) in (3, 2800)
group by
coalesce(space_side.fin_week, sales_side.fin_week)
,coalesce(space_side.cost_centre, sales_side.cost_Centre)
,coalesce( Spaceblock_Name_to_PG.PG_Code, sales_side.PG_Code)
,sales_side.SalesExVAT
,space_side.SpaceblockName_SID
) as stage_1
group by
Cost_centre
, Fin_week
, SpaceblockName_SID
(Execution plan screenshots, left-hand and right-hand sides, omitted.)
You didn't mention whether indexes are created on the columns you used in your query. If not, create them and check the performance of the query.
Looking at your logic, I think you could split this in two with a UNION:
one with Spaceblock_Name_to_PG.PG_Code < 10000 and the other with Spaceblock_Name_to_PG.PG_Code >= 10000.
And consider this change: the query may be doing a bunch of joins that you are going to throw out anyway, so push the filters into the join conditions:
full outer join sales_pg_wk as sales_side
on space_side.fin_week = sales_side.fin_week
and space_side.Cost_Centre = sales_side.Cost_Centre
and Spaceblock_Name_to_PG.PG_code = sales_side.pg_code
and space_side.fin_week between 201538 and 201550
and sales_side.fin_week between 201538 and 201550
and space_side.cost_centre in (3, 2800)
and sales_side.cost_Centre in (3, 2800)

optimizing a large "distinct" select in postgres

I have a rather large dataset (millions of rows). I'm having trouble introducing a "distinct" concept to a certain query. (I'm putting "distinct" in quotes, because this could be provided by the Postgres keyword DISTINCT or a GROUP BY form.)
A non-distinct search takes 1 ms - 2 ms; all attempts to introduce a "distinct" concept have grown this to the 50,000 ms - 90,000 ms range.
My goal is to show the latest resources based on their most recent appearance in an event stream.
My non-distinct query is essentially this:
SELECT
resource.id AS resource_id,
stream_event.event_timestamp AS event_timestamp
FROM
resource
JOIN
resource_2_stream_event ON (resource.id = resource_2_stream_event.resource_id)
JOIN
stream_event ON (resource_2_stream_event.stream_event_id = stream_event.id)
WHERE
stream_event.viewer = 47
ORDER BY event_timestamp DESC
LIMIT 25
;
I've tried many different forms of queries (and subqueries) using DISTINCT, GROUP BY and MAX(event_timestamp). The issue isn't getting a query that works, it's getting one that works in a reasonable execution time. Looking at the EXPLAIN ANALYZE output for each one, everything is running off of indexes. The problem seems to be that with any attempt to deduplicate my results, Postgres must assemble the entire result set on disk; since each table has millions of rows, this becomes a bottleneck.
Update: here's a working GROUP BY query:
EXPLAIN ANALYZE
SELECT
resource.id AS resource_id,
max(stream_event.event_timestamp) AS stream_event_event_timestamp
FROM
resource
JOIN resource_2_stream_event ON (resource_2_stream_event.resource_id = resource.id)
JOIN stream_event ON stream_event.id = resource_2_stream_event.stream_event_id
WHERE (
(stream_event.viewer_id = 57) AND
(resource.condition_1 IS NOT True) AND
(resource.condition_2 IS NOT True) AND
(resource.condition_3 IS NOT True) AND
(resource.condition_4 IS NOT True) AND
(
(resource.condition_5 IS NULL) OR (resource.condition_6 IS NULL)
)
)
GROUP BY (resource.id)
ORDER BY stream_event_event_timestamp DESC LIMIT 25;
Looking at the query planner (via EXPLAIN ANALYZE), it seems that adding in the MAX + GROUP BY clause (or a DISTINCT) forces a sequential scan, which is taking about half the time to compute. There already is an index that contains every "condition", and I tried creating a set of indexes (one for each element); none work.
In any event, the difference is between 2 ms and 72,000 ms.
Often, DISTINCT ON is the most efficient way to get one row per something. I would suggest trying:
SELECT DISTINCT ON (r.id) r.id AS resource_id, se.event_timestamp
FROM resource r JOIN
resource_2_stream_event r2se
ON r.id = r2se.resource_id JOIN
stream_event se
ON r2se.stream_event_id = se.id
WHERE se.viewer = 47
ORDER BY r.id, se.event_timestamp DESC
LIMIT 25;
An index on stream_event(viewer, event_timestamp) might help performance (event_timestamp lives in stream_event, so an index on resource alone cannot cover this query).
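As DDL, that might look like the following sketch (the index name is a placeholder; note the question's queries use both viewer and viewer_id for this column):
CREATE INDEX idx_stream_event_viewer_ts ON stream_event (viewer, event_timestamp DESC);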
EDIT:
You might try using a CTE to get what you want:
WITH CTE as (
SELECT r.id AS resource_id,
se.event_timestamp AS stream_event_event_timestamp
FROM resource r JOIN
resource_2_stream_event r2se
ON r2se.resource_id = r.id JOIN
stream_event se
ON se.id = r2se.stream_event_id
WHERE ((se.viewer_id = 57) AND
(r.condition_1 IS NOT True) AND
(r.condition_2 IS NOT True) AND
(r.condition_3 IS NOT True) AND
(r.condition_4 IS NOT True) AND
( (r.condition_5 IS NULL) OR (r.condition_6 IS NULL)
)
)
)
SELECT resource_id, max(stream_event_event_timestamp) as stream_event_event_timestamp
FROM CTE
GROUP BY resource_id
ORDER BY stream_event_event_timestamp DESC
LIMIT 25;
Postgres materializes the CTE. So, if there are not that many matches, this may speed the query by using indexes for the CTE.
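(One caveat: since PostgreSQL 12, plain CTEs are no longer always materialized; writing WITH CTE AS MATERIALIZED (...) forces the behavior described here.)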

Select first or random row in group by

I have this query using PostgreSQL 9.1 (9.2 as soon as our hosting platform upgrades):
SELECT
media_files.album,
media_files.artist,
ARRAY_AGG(media_files.id) AS media_file_ids
FROM
media_files
INNER JOIN playlist_media_files ON media_files.id = playlist_media_files.media_file_id
WHERE
playlist_media_files.playlist_id = 1
GROUP BY
media_files.album,
media_files.artist
ORDER BY
media_files.album ASC
and it's working fine; the goal was to extract album/artist combinations and have, in the result set, an array of media file ids for that particular combo.
The problem is that I have another column in media_files, which is artwork.
artwork is unique for each media file (even in the same album), but in the result set I need to return just the first of the set.
So, for an album that has 10 media files, I also have 10 corresponding artworks, but I would like to return just the first (or a randomly picked one for that collection).
Is that possible to do with only SQL/Window Functions (first_value over..)?
Yes, it's possible. First, let's tweak your query by adding aliases and explicit column qualifiers so it's clear what comes from where - assuming I've guessed correctly, since I can't be sure without table definitions:
SELECT
mf.album,
mf.artist,
ARRAY_AGG (mf.id) AS media_file_ids
FROM
"media_files" mf
INNER JOIN "playlist_media_files" pmf ON mf.id = pmf.media_file_id
WHERE
pmf.playlist_id = 1
GROUP BY
mf.album,
mf.artist
ORDER BY
mf.album ASC
Now you can either use a subquery in the SELECT list or maybe use DISTINCT ON, though it looks like any solution based on DISTINCT ON will be so convoluted as not to be worth it.
What you really want is something like a pick_arbitrary_value_agg aggregate that just picks the first value it sees and throws the rest away. There is no such aggregate and it isn't really worth implementing for this job. You could use min(artwork) or max(artwork), and you may find that this actually performs better than the later solutions.
To use a subquery, leave the ORDER BY as it is and add the following as an extra column in your SELECT list:
(SELECT mf2.artwork
FROM media_files mf2
WHERE mf2.artist = mf.artist
AND mf2.album = mf.album
LIMIT 1) AS picked_artwork
You can, at a performance cost, randomize the selected artwork by adding ORDER BY random() before the LIMIT 1 above.
Alternately, here's a quick and dirty way to implement selection of a random row in-line:
(array_agg(artwork))[width_bucket(random(),0,1,count(artwork)::integer)]
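(For reference: width_bucket(random(), 0, 1, n) returns an integer from 1 to n, so this expression indexes a pseudo-random element of the aggregated artwork array.)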
Since there's no sample data I can't test these modifications. Let me know if there's an issue.
"First" pick
Wouldn't it be simpler / cheaper to just use min():
SELECT m.album
,m.artist
,array_agg(m.id) AS media_file_ids
,min(m.artwork) AS artwork
FROM playlist_media_files p
JOIN media_files m ON m.id = p.media_file_id
WHERE p.playlist_id = 1
GROUP BY m.album, m.artist
ORDER BY m.album, m.artist;
Arbitrary / random pick
If you are looking for a random selection, @Craig already provided a solution with truly random picks.
You could also use a CTE to avoid additional scans on the (possibly big) base table and then run two separate (cheap) subqueries on the small result set.
For arbitrary selection - not truly random; the result will depend on the physical order of rows in the table and implementation specifics:
WITH x AS (
   SELECT m.album, m.artist, m.id, m.artwork
   FROM   playlist_media_files p
   JOIN   media_files m ON m.id = p.media_file_id
   WHERE  p.playlist_id = 1
   )
SELECT a.album, a.artist, a.media_file_ids, b.artwork
FROM  (
   SELECT album, artist, array_agg(id) AS media_file_ids
   FROM   x
   GROUP  BY album, artist
   ) a
JOIN  (
   SELECT DISTINCT ON (1, 2) album, artist, artwork
   FROM   x
   ) b USING (album, artist);
For truly random results, you can add an ORDER BY .. random() like this to subquery b:
JOIN (
SELECT DISTINCT ON (1, 2) album, artist, artwork
FROM x
ORDER BY 1, 2, random()
) b USING (album, artist);