I have this query in PostgreSQL:
SELECT COUNT("contacts"."id")
FROM "contacts"
INNER JOIN "phone_numbers" ON "phone_numbers"."id" = "contacts"."phone_number_id"
INNER JOIN "companies" ON "companies"."id" = "contacts"."company_id"
WHERE (
(
(
CAST("phone_numbers"."value" AS VARCHAR) ILIKE '%a%'
OR CAST("contacts"."first_name" AS VARCHAR) ILIKE '%a%'
)
OR CAST("contacts"."last_name" AS VARCHAR) ILIKE '%a%'
)
OR CAST("companies"."name" AS VARCHAR) ILIKE '%a%'
)
When I run the query it takes 19 seconds. I need to improve its performance.
Note: I already have indexes on the relevant columns.
EXPLAIN ANALYZE report
Finalize Aggregate (cost=209076.49..209076.54 rows=1 width=8) (actual time=6117.381..6646.477 rows=1 loops=1)
-> Gather (cost=209076.42..209076.48 rows=4 width=8) (actual time=6117.370..6646.473 rows=5 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Partial Aggregate (cost=209066.42..209066.47 rows=1 width=8) (actual time=5952.710..5952.723 rows=1 loops=5)
-> Hash Join (cost=137685.37..208438.42 rows=251200 width=8) (actual time=3007.055..5945.571 rows=39193 loops=5)
Hash Cond: (contacts.company_id = companies.id)
Join Filter: (((phone_numbers.value)::text ~~* '%as%'::text) OR ((contacts.first_name)::text ~~* '%as%'::text) OR ((contacts.last_name)::text ~~* '%as%'::text) OR ((companies.name)::text ~~* '%as%'::text))
Rows Removed by Join Filter: 763817
-> Parallel Hash Join (cost=137684.86..201964.34 rows=1003781 width=41) (actual time=3006.633..4596.987 rows=803010 loops=5)
Hash Cond: (contacts.phone_number_id = phone_numbers.id)
-> Parallel Seq Scan on contacts (cost=0.00..59316.85 rows=1003781 width=37) (actual time=11.032..681.124 rows=803010 loops=5)
-> Parallel Hash (cost=68914.22..68914.22 rows=1295458 width=20) (actual time=1632.770..1632.770 rows=803184 loops=5)
Buckets: 65536 Batches: 64 Memory Usage: 4032kB
-> Parallel Seq Scan on phone_numbers (cost=0.00..68914.22 rows=1295458 width=20) (actual time=10.780..1202.242 rows=803184 loops=5)
-> Hash (cost=0.30..0.30 rows=4 width=40) (actual time=0.258..0.258 rows=4 loops=5)
Buckets: 1024 Batches: 1 Memory Usage: 9kB
-> Seq Scan on companies (cost=0.00..0.30 rows=4 width=40) (actual time=0.247..0.248 rows=4 loops=5)
Planning Time: 1.895 ms
Execution Time: 6646.558 ms
Please help me with this performance issue.
I tried the function row_count_estimate(query text), and it does not give the exact count.
Solution tried:
I tried Robert's solution and it took 16 seconds to run.
My query is:
SELECT Count(id) AS id
FROM (
SELECT contacts.id AS id
FROM contacts
WHERE (
contacts.last_name ilike '%as%')
OR (
contacts.last_name ilike '%as%')
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.phone_number_id IN
(
SELECT phone_numbers.id AS phone_number_id
FROM phone_numbers
WHERE phone_numbers.value ilike '%as%')
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.company_id IN
(
SELECT companies.id AS company_id
FROM companies
WHERE companies.name ilike '%as%' )) AS ID
Report:
Aggregate (cost=395890.08..395890.13 rows=1 width=8) (actual time=5942.601..5942.667 rows=1 loops=1)
-> Unique (cost=332446.76..337963.57 rows=1103362 width=8) (actual time=5929.800..5939.658 rows=101989 loops=1)
-> Sort (cost=332446.76..335205.17 rows=1103362 width=8) (actual time=5929.799..5933.823 rows=101989 loops=1)
Sort Key: contacts.id
Sort Method: external merge Disk: 1808kB
-> Append (cost=10.00..220843.02 rows=1103362 width=8) (actual time=1.158..5900.926 rows=101989 loops=1)
-> Gather (cost=10.00..61935.48 rows=99179 width=8) (actual time=1.158..569.412 rows=101989 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Parallel Seq Scan on contacts (cost=0.00..61826.30 rows=24795 width=8) (actual time=0.446..477.276 rows=20398 loops=5)
Filter: ((last_name)::text ~~* '%as%'::text)
Rows Removed by Filter: 782612
-> Nested Loop (cost=0.84..359.91 rows=402 width=8) (actual time=5292.088..5292.089 rows=0 loops=1)
-> Index Scan using idx_phone_value on phone_numbers (cost=0.41..64.13 rows=402 width=8) (actual time=5292.087..5292.087 rows=0 loops=1)
Index Cond: ((value)::text ~~* '%as%'::text)
Rows Removed by Index Recheck: 4015921
-> Index Scan using index_contacts_on_phone_number_id on contacts contacts_1 (cost=0.43..0.69 rows=1 width=16) (never executed)
Index Cond: (phone_number_id = phone_numbers.id)
-> Gather (cost=10.36..75795.48 rows=1003781 width=8) (actual time=26.298..26.331 rows=0 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=0.36..74781.70 rows=250945 width=8) (actual time=3.758..3.758 rows=0 loops=5)
Hash Cond: (contacts_2.company_id = companies.id)
-> Parallel Seq Scan on contacts contacts_2 (cost=0.00..59316.85 rows=1003781 width=16) (actual time=0.128..0.128 rows=1 loops=5)
-> Hash (cost=0.31..0.31 rows=1 width=8) (actual time=0.726..0.727 rows=0 loops=5)
Buckets: 1024 Batches: 1 Memory Usage: 8kB
-> Seq Scan on companies (cost=0.00..0.31 rows=1 width=8) (actual time=0.726..0.726 rows=0 loops=5)
Filter: ((name)::text ~~* '%as%'::text)
Rows Removed by Filter: 4
Planning Time: 0.846 ms
Execution Time: 5948.330 ms
I also tried the following:
EXPLAIN ANALYZE SELECT
count(id) AS id
FROM
(SELECT
contacts.id AS id
FROM
contacts
WHERE
(
position('as' in LOWER(last_name)) > 0
)
UNION
SELECT
contacts.id AS id
FROM
contacts
WHERE
EXISTS (
SELECT
1
FROM
phone_numbers
WHERE
(
position('as' in LOWER(phone_numbers.value)) > 0
)
AND (
contacts.phone_number_id = phone_numbers.id
)
)
UNION
SELECT
contacts.id AS id
FROM
contacts
WHERE
EXISTS (
SELECT
1
FROM
companies
WHERE
(
position('as' in LOWER(companies.name)) > 0
)
AND (
contacts.company_id = companies.id
)
)
UNION DISTINCT SELECT
contacts.id AS id
FROM
contacts
WHERE
(
position('as' in LOWER(first_name)) > 0
)
) AS ID;
Report:
Aggregate (cost=1609467.66..1609467.71 rows=1 width=8) (actual time=1039.249..1039.330 rows=1 loops=1)
-> Unique (cost=1320886.03..1345980.09 rows=5018811 width=8) (actual time=999.363..1030.500 rows=195963 loops=1)
-> Sort (cost=1320886.03..1333433.06 rows=5018811 width=8) (actual time=999.362..1013.818 rows=198421 loops=1)
Sort Key: contacts.id
Sort Method: external merge Disk: 3520kB
-> Gather (cost=10.00..754477.62 rows=5018811 width=8) (actual time=0.581..941.210 rows=198421 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Parallel Append (cost=0.00..749448.80 rows=5018811 width=8) (actual time=290.521..943.736 rows=39684 loops=5)
-> Parallel Hash Join (cost=101469.35..164569.24 rows=334587 width=8) (actual time=724.841..724.843 rows=0 loops=2)
Hash Cond: (contacts.phone_number_id = phone_numbers.id)
-> Parallel Seq Scan on contacts (cost=0.00..59315.91 rows=1003762 width=16) (never executed)
-> Parallel Hash (cost=78630.16..78630.16 rows=431819 width=8) (actual time=723.735..723.735 rows=0 loops=2)
Buckets: 131072 Batches: 32 Memory Usage: 0kB
-> Parallel Seq Scan on phone_numbers (cost=0.00..78630.16 rows=431819 width=8) (actual time=723.514..723.514 rows=0 loops=2)
Filter: ("position"(lower((value)::text), 'as'::text) > 0)
Rows Removed by Filter: 2007960
-> Hash Join (cost=0.38..74780.48 rows=250940 width=8) (actual time=0.888..0.888 rows=0 loops=1)
Hash Cond: (contacts_1.company_id = companies.id)
-> Parallel Seq Scan on contacts contacts_1 (cost=0.00..59315.91 rows=1003762 width=16) (actual time=0.009..0.009 rows=1 loops=1)
-> Hash (cost=0.33..0.33 rows=1 width=8) (actual time=0.564..0.564 rows=0 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 8kB
-> Seq Scan on companies (cost=0.00..0.33 rows=1 width=8) (actual time=0.563..0.563 rows=0 loops=1)
Filter: ("position"(lower((name)::text), 'as'::text) > 0)
Rows Removed by Filter: 4
-> Parallel Seq Scan on contacts contacts_2 (cost=0.00..66844.13 rows=334588 width=8) (actual time=0.119..315.032 rows=20398 loops=5)
Filter: ("position"(lower((last_name)::text), 'as'::text) > 0)
Rows Removed by Filter: 782612
-> Parallel Seq Scan on contacts contacts_3 (cost=0.00..66844.13 rows=334588 width=8) (actual time=0.510..558.791 rows=32144 loops=3)
Filter: ("position"(lower((first_name)::text), 'as'::text) > 0)
Rows Removed by Filter: 1306206
Planning Time: 2.115 ms
Execution Time: 1040.620 ms
It's hard to help you because I don't have access to your data. Let me try...
Your EXPLAIN ANALYZE report shows that:
Your query is not using indexes. The full scan on the phone_numbers table takes 1.202 seconds, and the one on the contacts table takes 0.681 seconds.
"Rows Removed by Join Filter: 763817".
"Parallel Hash Join (cost=137684.86..201964.34 rows=1003781 width=41) (actual time=3006.633..4596.987 rows=803010 loops=5)". So this query joins ~800k rows and then filters out ~764k of them.
Maybe you can reverse that: filter first, then join. This should speed things up (but it needs to be checked).
For example, you can test rewriting your query in this direction:
SELECT COUNT(id)
FROM
(
    SELECT "contacts"."id"
    FROM "contacts"
    WHERE <filters on contacts here>
    UNION
    SELECT "contacts"."id"
    FROM "contacts"
    WHERE phone_number_id IN ( SELECT "phone_numbers"."id"
                               FROM "phone_numbers"
                               WHERE <filters on phone_numbers here> )
    UNION
    SELECT "contacts"."id"
    FROM "contacts"
    WHERE company_id IN ( SELECT "companies"."id"
                          FROM "companies"
                          WHERE <filters on companies here> )
) AS B
Two indexes, one on column contacts.phone_number_id and another on contacts.company_id, might help.
EDIT:
The index scan on "phone_numbers" combined with a nested loop takes about 5 seconds.
Try to avoid that.
Please check what this does:
SELECT Count(id) AS id
FROM (
SELECT contacts.id AS id
FROM contacts
WHERE (
contacts.last_name ilike '%as%')
OR (
contacts.last_name ilike '%as%')
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.phone_number_id IN
(
SELECT NULLIF(phone_numbers.id::text, '')::int /* cast to prevent an index scan on this column */ AS phone_number_id
FROM phone_numbers
WHERE phone_numbers.value ilike '%as%')
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.company_id IN
(
SELECT companies.id AS company_id
FROM companies
WHERE companies.name ilike '%as%' )) AS ID
Aggregate (cost=419095.35..419095.40 rows=1 width=8) (actual time=13235.986..13236.335 rows=1 loops=1)
-> Unique (cost=346875.23..353155.24 rows=1256002 width=8) (actual time=13211.350..13230.729 rows=195963 loops=1)
-> Sort (cost=346875.23..350015.24 rows=1256002 width=8) (actual time=13211.349..13219.607 rows=195963 loops=1)
Sort Key: contacts.id
Sort Method: external merge Disk: 3472kB
-> Append (cost=2249.63..218658.27 rows=1256002 width=8) (actual time=5927.019..13164.421 rows=195963 loops=1)
-> Gather (cost=2249.63..48279.58 rows=251838 width=8) (actual time=5927.019..6911.795 rows=195963 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Parallel Bitmap Heap Scan on contacts (cost=2239.63..48017.74 rows=62960 width=8) (actual time=5861.480..6865.957 rows=39193 loops=5)
Recheck Cond: (((first_name)::text ~~* '%as%'::text) OR ((last_name)::text ~~* '%as%'::text))
Rows Removed by Index Recheck: 763815
Heap Blocks: exact=10860 lossy=6075
-> BitmapOr (cost=2239.63..2239.63 rows=255705 width=0) (actual time=5917.966..5917.966 rows=0 loops=1)
-> Bitmap Index Scan on idx_trgm_contacts_first_name (cost=0.00..1291.57 rows=156527 width=0) (actual time=2972.404..2972.404 rows=4015039 loops=1)
Index Cond: ((first_name)::text ~~* '%as%'::text)
-> Bitmap Index Scan on idx_trgm_contacts_last_name (cost=0.00..822.14 rows=99177 width=0) (actual time=2945.560..2945.560 rows=4015038 loops=1)
Index Cond: ((last_name)::text ~~* '%as%'::text)
-> Nested Loop (cost=81.96..384.33 rows=402 width=8) (actual time=6213.028..6213.028 rows=0 loops=1)
-> Unique (cost=81.52..83.53 rows=402 width=8) (actual time=6213.027..6213.027 rows=0 loops=1)
-> Sort (cost=81.52..82.52 rows=402 width=8) (actual time=6213.027..6213.027 rows=0 loops=1)
Sort Key: ((NULLIF((phone_numbers.id)::text, ''::text))::integer)
Sort Method: quicksort Memory: 25kB
-> Index Scan using idx_trgm_phone_value on phone_numbers (cost=0.41..64.13 rows=402 width=8) (actual time=6213.006..6213.006 rows=0 loops=1)
Index Cond: ((value)::text ~~* '%as%'::text)
Rows Removed by Index Recheck: 4015921
-> Index Scan using index_contacts_on_phone_number_id on contacts contacts_1 (cost=0.44..0.70 rows=1 width=16) (never executed)
Index Cond: (phone_number_id = (NULLIF((phone_numbers.id)::text, ''::text))::integer)
-> Gather (cost=10.36..75794.22 rows=1003762 width=8) (actual time=25.691..25.709 rows=0 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=0.36..74780.46 rows=250940 width=8) (actual time=2.653..2.653 rows=0 loops=5)
Hash Cond: (contacts_2.company_id = companies.id)
-> Parallel Seq Scan on contacts contacts_2 (cost=0.00..59315.91 rows=1003762 width=16) (actual time=0.244..0.244 rows=1 loops=5)
-> Hash (cost=0.31..0.31 rows=1 width=8) (actual time=0.244..0.244 rows=0 loops=5)
Buckets: 1024 Batches: 1 Memory Usage: 8kB
-> Seq Scan on companies (cost=0.00..0.31 rows=1 width=8) (actual time=0.244..0.244 rows=0 loops=5)
Filter: ((name)::text ~~* '%as%'::text)
Rows Removed by Filter: 4
Planning Time: 1.458 ms
Execution Time: 13236.949 ms
I tried the below:
SELECT Count(id) AS id
FROM (
SELECT contacts.id AS id
FROM contacts
WHERE (substring(LOWER(contacts.first_name), position('as' in LOWER(first_name)), 2) = 'as')
OR (substring(LOWER(contacts.last_name), position('as' in LOWER(last_name)), 2) = 'as')
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.phone_number_id IN
(
SELECT NULLIF(CAST(phone_numbers.id AS text), '')::int AS phone_number_id
FROM phone_numbers
WHERE (substring(LOWER(phone_numbers.value), position('as' in LOWER(phone_numbers.value)), 2) = 'as'))
UNION
SELECT contacts.id AS id
FROM contacts
WHERE contacts.company_id IN
(
SELECT companies.id AS company_id
FROM companies
WHERE (substring(LOWER(companies.name), position('as' in LOWER(companies.name)), 2) = 'as') )) AS ID
Aggregate (cost=508646.88..508646.93 rows=1 width=8) (actual time=1455.892..1455.995 rows=1 loops=1)
-> Unique (cost=447473.09..452792.55 rows=1063892 width=8) (actual time=1431.464..1450.434 rows=195963 loops=1)
-> Sort (cost=447473.09..450132.82 rows=1063892 width=8) (actual time=1431.464..1439.267 rows=195963 loops=1)
Sort Key: contacts.id
Sort Method: external merge Disk: 3472kB
-> Append (cost=10.00..340141.41 rows=1063892 width=8) (actual time=0.391..1370.557 rows=195963 loops=1)
-> Gather (cost=10.00..84460.02 rows=40050 width=8) (actual time=0.391..983.457 rows=195963 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Parallel Seq Scan on contacts (cost=0.00..84409.97 rows=10012 width=8) (actual time=1.696..987.285 rows=39193 loops=5)
Filter: (("substring"(lower((first_name)::text), "position"(lower((first_name)::text), 'as'::text), 2) = 'as'::text) OR ("substring"(lower((last_name)::text), "position"(lower((last_name)::text), 'as'::text), 2) = 'as'::text))
Rows Removed by Filter: 763817
-> Nested Loop (cost=85188.17..100095.23 rows=20080 width=8) (actual time=364.076..364.125 rows=0 loops=1)
-> HashAggregate (cost=85187.73..86191.73 rows=20080 width=8) (actual time=364.074..364.123 rows=0 loops=1)
Group Key: (NULLIF((phone_numbers.id)::text, ''::text))::integer
Batches: 1 Memory Usage: 793kB
-> Gather (cost=10.00..85137.53 rows=20080 width=8) (actual time=363.976..364.025 rows=0 loops=1)
Workers Planned: 3
Workers Launched: 3
-> Parallel Seq Scan on phone_numbers (cost=0.00..85107.45 rows=6477 width=8) (actual time=357.030..357.031 rows=0 loops=4)
Filter: ("substring"(lower((value)::text), "position"(lower((value)::text), 'as'::text), 2) = 'as'::text)
Rows Removed by Filter: 1003980
-> Index Scan using index_contacts_on_phone_number_id on contacts contacts_1 (cost=0.44..0.64 rows=1 width=16) (never executed)
Index Cond: (phone_number_id = (NULLIF((phone_numbers.id)::text, ''::text))::integer)
-> Gather (cost=10.40..75794.26 rows=1003762 width=8) (actual time=6.889..6.910 rows=0 loops=1)
Workers Planned: 4
Workers Launched: 4
-> Hash Join (cost=0.40..74780.50 rows=250940 width=8) (actual time=0.138..0.139 rows=0 loops=5)
Hash Cond: (contacts_2.company_id = companies.id)
-> Parallel Seq Scan on contacts contacts_2 (cost=0.00..59315.91 rows=1003762 width=16) (actual time=0.004..0.004 rows=1 loops=5)
-> Hash (cost=0.35..0.35 rows=1 width=8) (actual time=0.081..0.081 rows=0 loops=5)
Buckets: 1024 Batches: 1 Memory Usage: 8kB
-> Seq Scan on companies (cost=0.00..0.35 rows=1 width=8) (actual time=0.081..0.081 rows=0 loops=5)
Filter: ("substring"(lower((name)::text), "position"(lower((name)::text), 'as'::text), 2) = 'as'::text)
Rows Removed by Filter: 4
Planning Time: 0.927 ms
Execution Time: 1456.742 ms
I'm looking for suggestions/direction on how I can improve this large query.
When I EXPLAIN ANALYZE it, I see some weak spots, such as large over-estimations and slow sequential scans on joins.
However, after checking some indexes and digging in, I'm still at a loss as to how I can improve this:
Query:
WITH my_activities AS (
WITH people_relations AS (
SELECT people.id AS people_relations_id, array_agg(DISTINCT type) AS person_relations, companies.id AS company_id, companies.name AS company_name, companies.platform_url AS company_platform_url FROM people
INNER JOIN relationships AS person_relation ON platform_user_id = 6 AND person_relation.person_id = people.id AND person_relation.type != 'Suppressee'
LEFT OUTER JOIN companies ON people.company_id = companies.id
GROUP BY people.id, people.title, companies.id, companies.name, companies.platform_url)
SELECT owner_person.id,
owner_person.full_name,
owner_person.first_name,
owner_person.last_name,
owner_person.title,
owner_person.headshot AS owner_headshot,
owner_person.public_identifier AS owner_public_identifier,
owner_relations.person_relations AS owner_relationships,
owner_relations.company_id AS owner_company_id,
owner_relations.company_name AS owner_company_name,
owner_relations.company_platform_url AS owner_company_platform_url,
recipient_relations.person_relations AS recipient_relationships,
activities.id AS activity_id,
activities.key AS activity_key,
recipient.id AS recipient_id,
recipient.full_name AS recipient_full_name,
recipient.title AS recipient_title,
recipient.headshot AS recipient_headshot,
recipient.public_identifier AS recipient_public_identifier,
recipient_relations.company_name AS recipient_company_name,
recipient_relations.company_platform_url AS recipient_company_platform_url,
recipient_person.type AS recipient_relation,
coalesce(t_posts.id, t_post_likes.id, t_post_comments.id) AS trackable_id,
trackable_type,
coalesce(t_posts.post_date, t_post_comments.created_time, t_post_likes_post.post_date, activities.occurred_at) AS trackable_date,
coalesce(t_posts.permalink, t_post_comments.permalink, t_post_likes_post.permalink) AS trackable_permalink,
coalesce(t_posts.content, t_post_comments_post.content, t_post_likes_post.content) AS trackable_content,
trackable_companies.name AS trackable_company_name,
trackable_companies.platform_url AS trackable_company_platform_url,
t_post_comments.comment as trackable_comment FROM people AS owner_person
INNER JOIN activities ON activities.owner_id = owner_person.id AND activities.owner_type = 'Person'
AND ((activities.key = 'job.changed' AND activities.occurred_at > '2022-01-31 15:09:54') OR
(activities.key != 'job.changed' AND activities.occurred_at > '2022-04-24 14:09:54'))
LEFT OUTER JOIN li_user_activities ON activities.id = li_user_activities.activity_id AND li_user_activities.platform_user_id = 6
AND li_user_activities.dismissed_at IS NULL
LEFT OUTER JOIN icp_ids ON owner_person.id = icp_ids.icp_id
LEFT OUTER JOIN companies as trackable_companies ON trackable_companies.id = activities.trackable_id AND activities.trackable_type = 'Company'
LEFT OUTER JOIN posts as t_posts ON activities.trackable_id = t_posts.id AND activities.trackable_type = 'Post'
LEFT OUTER JOIN post_likes as t_post_likes ON activities.trackable_id = t_post_likes.id AND activities.trackable_type = 'PostLike'
LEFT OUTER JOIN posts as t_post_likes_post ON t_post_likes.post_id = t_post_likes_post.id
LEFT OUTER JOIN post_comments as t_post_comments ON activities.trackable_id = t_post_comments.id AND activities.trackable_type = 'PostComment'
LEFT OUTER JOIN posts as t_post_comments_post ON t_post_comments.post_id = t_post_comments_post.id
LEFT OUTER JOIN people AS recipient ON recipient.id = activities.recipient_id
LEFT OUTER JOIN relationships AS recipient_person ON recipient_person.person_id = recipient.id
INNER JOIN people_relations AS owner_relations ON owner_relations.people_relations_id = owner_person.id
LEFT OUTER JOIN people_relations AS recipient_relations ON recipient_relations.people_relations_id = recipient.id
WHERE ((recipient.id IS NULL OR recipient.id != owner_person.id) ) AND (key != 'asdasd'))
SELECT owner_relationships AS owner_relationships,
json_agg(DISTINCT recipient_relationships) AS recipient_relationships,
id,
jsonb_build_object('id', id, 'first_name', first_name, 'last_name', last_name, 'full_name', full_name, 'title', title, 'headshot', owner_headshot, 'public_identifier', owner_public_identifier, 'profile_url', ('https://' || owner_public_identifier), 'company', jsonb_build_object( 'id', owner_company_id, 'name', owner_company_name, 'platform_url', owner_company_platform_url )) AS owner,
json_agg( DISTINCT jsonb_build_object('id', activity_id,
'key', activity_key,
'recipient', jsonb_build_object('id', recipient_id, 'full_name', recipient_full_name, 'title', recipient_title, 'headshot', recipient_headshot, 'public_identifier', recipient_public_identifier, 'profile_url', ('https://' || recipient_public_identifier), 'relation', recipient_relationships, 'company', jsonb_build_object('name', recipient_company_name, 'platform_url', recipient_company_platform_url)),
'trackable', jsonb_build_object('id', trackable_id, 'type', trackable_type, 'comment', trackable_comment, 'permalink', trackable_permalink, 'date', trackable_date, 'content', trackable_content, 'company_name', trackable_company_name, 'company_platform_url', trackable_company_platform_url)
)) AS data
FROM my_activities
GROUP BY id, first_name, last_name, full_name, title, owner_headshot, owner_public_identifier, owner_relationships, owner_company_id, owner_company_name, owner_company_platform_url
Explain (also seen here: https://explain.dalibo.com/plan/3pJg):
GroupAggregate (cost=654190.74..655692.10 rows=21448 width=298) (actual time=3170.209..3267.033 rows=327 loops=1)
Group Key: my_activities.id, my_activities.first_name, my_activities.last_name, my_activities.full_name, my_activities.title, my_activities.owner_headshot, my_activities.owner_public_identifier, my_activities.owner_relationships, my_activities.owner_company_id, my_activities.owner_company_name, my_activities.owner_company_platform_url
-> Sort (cost=654190.74..654244.36 rows=21448 width=674) (actual time=3168.944..3219.547 rows=2733 loops=1)
Sort Key: my_activities.id, my_activities.first_name, my_activities.last_name, my_activities.full_name, my_activities.title, my_activities.owner_headshot, my_activities.owner_public_identifier, my_activities.owner_relationships, my_activities.owner_company_id, my_activities.owner_company_name, my_activities.owner_company_platform_url
Sort Method: external merge Disk: 3176kB
-> Subquery Scan on my_activities (cost=638222.87..646193.71 rows=21448 width=674) (actual time=3142.221..3210.966 rows=2733 loops=1)
-> Hash Right Join (cost=638222.87..645979.23 rows=21448 width=706) (actual time=3142.219..3210.753 rows=2733 loops=1)
Hash Cond: (recipient_relations.people_relations_id = recipient.id)
CTE people_relations
-> GroupAggregate (cost=142850.94..143623.66 rows=34343 width=152) (actual time=1556.908..1593.594 rows=33730 loops=1)
Group Key: people.id, companies.id
-> Sort (cost=142850.94..142936.80 rows=34343 width=129) (actual time=1556.875..1560.123 rows=33780 loops=1)
Sort Key: people.id, companies.id
Sort Method: external merge Disk: 3816kB
-> Gather (cost=1647.48..137915.08 rows=34343 width=129) (actual time=1405.433..1537.693 rows=33780 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Nested Loop Left Join (cost=647.48..133480.78 rows=14310 width=129) (actual time=570.743..710.682 rows=11260 loops=3)
-> Nested Loop (cost=647.05..104036.25 rows=14310 width=55) (actual time=570.719..655.804 rows=11260 loops=3)
-> Parallel Bitmap Heap Scan on relationships person_relation (cost=646.62..13074.28 rows=14310 width=13) (actual time=570.627..579.277 rows=11260 loops=3)
Recheck Cond: (platform_user_id = 6)
Filter: ((type)::text <> 'Suppressee'::text)
Rows Removed by Filter: 12
Heap Blocks: exact=1642
-> Bitmap Index Scan on index_relationships_on_platform_user_id_and_person_id (cost=0.00..638.03 rows=34347 width=0) (actual time=2.254..2.254 rows=33829 loops=1)
Index Cond: (platform_user_id = 6)
-> Index Scan using people_pkey on people (cost=0.43..6.36 rows=1 width=46) (actual time=0.006..0.006 rows=1 loops=33780)
Index Cond: (id = person_relation.person_id)
-> Index Scan using companies_pkey on companies (cost=0.43..2.06 rows=1 width=82) (actual time=0.005..0.005 rows=1 loops=33780)
Index Cond: (id = people.company_id)
-> CTE Scan on people_relations recipient_relations (cost=0.00..686.86 rows=34343 width=104) (actual time=0.018..4.247 rows=33730 loops=1)
-> Hash (cost=488466.12..488466.12 rows=21448 width=2209) (actual time=3142.015..3191.555 rows=2733 loops=1)
Buckets: 2048 Batches: 16 Memory Usage: 655kB
-> Merge Join (cost=487925.89..488466.12 rows=21448 width=2209) (actual time=3094.438..3187.748 rows=2733 loops=1)
Merge Cond: (owner_relations.people_relations_id = activities.owner_id)
-> Sort (cost=5272.71..5358.57 rows=34343 width=112) (actual time=1622.739..1626.249 rows=33730 loops=1)
Sort Key: owner_relations.people_relations_id
Sort Method: external merge Disk: 4128kB
-> CTE Scan on people_relations owner_relations (cost=0.00..686.86 rows=34343 width=112) (actual time=1556.912..1610.745 rows=33730 loops=1)
-> Materialize (cost=482653.17..482746.77 rows=18719 width=2113) (actual time=1471.676..1552.408 rows=69702 loops=1)
-> Sort (cost=482653.17..482699.97 rows=18719 width=2113) (actual time=1471.672..1543.930 rows=69702 loops=1)
Sort Key: owner_person.id
Sort Method: external merge Disk: 84608kB
-> Gather (cost=64235.86..464174.85 rows=18719 width=2113) (actual time=1305.158..1393.927 rows=81045 loops=1)
Workers Planned: 2
Workers Launched: 2
-> Parallel Hash Left Join (cost=63235.86..461302.95 rows=7800 width=2113) (actual time=1289.165..1311.400 rows=27015 loops=3)
Hash Cond: (t_post_comments.post_id = t_post_comments_post.id)
-> Nested Loop Left Join (cost=51190.69..443455.30 rows=7800 width=1700) (actual time=443.623..511.046 rows=27015 loops=3)
Join Filter: ((activities.trackable_type)::text = 'PostComment'::text)
Rows Removed by Join Filter: 1756
-> Parallel Hash Left Join (cost=51190.26..395642.27 rows=7800 width=1408) (actual time=443.580..471.580 rows=27015 loops=3)
Hash Cond: (recipient.id = recipient_person.person_id)
-> Nested Loop Left Join (cost=26667.49..366532.83 rows=7800 width=1408) (actual time=214.602..348.548 rows=6432 loops=3)
Filter: ((recipient.id IS NULL) OR (recipient.id <> owner_person.id))
Rows Removed by Filter: 249
-> Nested Loop Left Join (cost=26667.06..310170.84 rows=7800 width=1333) (actual time=214.591..338.396 rows=6681 loops=3)
Join Filter: ((activities.trackable_type)::text = 'Company'::text)
Rows Removed by Join Filter: 894
-> Hash Left Join (cost=26666.63..257110.20 rows=7800 width=1259) (actual time=214.566..324.738 rows=6681 loops=3)
Hash Cond: (activities.id = li_user_activities.activity_id)
-> Nested Loop (cost=25401.21..255737.89 rows=7800 width=1259) (actual time=208.406..315.896 rows=6681 loops=3)
-> Parallel Hash Left Join (cost=25400.78..199473.40 rows=7800 width=1161) (actual time=208.367..216.663 rows=6681 loops=3)
Hash Cond: (t_post_likes.post_id = t_post_likes_post.id)
-> Nested Loop Left Join (cost=12700.61..182373.75 rows=7800 width=623) (actual time=143.176..167.675 rows=6681 loops=3)
Join Filter: ((activities.trackable_type)::text = 'PostLike'::text)
Rows Removed by Join Filter: 1095
-> Parallel Hash Left Join (cost=12700.17..131647.07 rows=7800 width=611) (actual time=143.146..156.428 rows=6681 loops=3)
Hash Cond: (activities.trackable_id = t_posts.id)
Join Filter: ((activities.trackable_type)::text = 'Post'::text)
Rows Removed by Join Filter: 1452
-> Parallel Seq Scan on activities (cost=0.00..115613.42 rows=7800 width=61) (actual time=0.376..80.040 rows=6681 loops=3)
Filter: (((key)::text <> 'asdasd'::text) AND ((owner_type)::text = 'Person'::text) AND ((((key)::text = 'job.changed'::text) AND (occurred_at > '2022-01-31 15:09:54'::timestamp without time zone)) OR (((key)::text <> 'job.changed'::text) AND (occurred_at > '2022-04-24 14:09:54'::timestamp without time zone))))
Rows Removed by Filter: 27551
-> Parallel Hash (cost=8996.19..8996.19 rows=44719 width=550) (actual time=57.638..57.639 rows=35776 loops=3)
Buckets: 8192 Batches: 16 Memory Usage: 4032kB
-> Parallel Seq Scan on posts t_posts (cost=0.00..8996.19 rows=44719 width=550) (actual time=0.032..14.451 rows=35776 loops=3)
-> Index Scan using post_likes_pkey on post_likes t_post_likes (cost=0.43..6.49 rows=1 width=12) (actual time=0.001..0.001 rows=1 loops=20042)
Index Cond: (id = activities.trackable_id)
-> Parallel Hash (cost=8996.19..8996.19 rows=44719 width=550) (actual time=35.322..35.322 rows=35776 loops=3)
Buckets: 8192 Batches: 16 Memory Usage: 4000kB
-> Parallel Seq Scan on posts t_post_likes_post (cost=0.00..8996.19 rows=44719 width=550) (actual time=0.022..10.427 rows=35776 loops=3)
-> Index Scan using people_pkey on people owner_person (cost=0.43..7.21 rows=1 width=98) (actual time=0.014..0.014 rows=1 loops=20042)
Index Cond: (id = activities.owner_id)
-> Hash (cost=951.58..951.58 rows=25107 width=4) (actual time=6.115..6.116 rows=25698 loops=3)
Buckets: 32768 Batches: 1 Memory Usage: 1160kB
-> Seq Scan on li_user_activities (cost=0.00..951.58 rows=25107 width=4) (actual time=0.011..3.578 rows=25698 loops=3)
Filter: ((dismissed_at IS NULL) AND (platform_user_id = 6))
Rows Removed by Filter: 15722
-> Index Scan using companies_pkey on companies trackable_companies (cost=0.43..6.79 rows=1 width=82) (actual time=0.002..0.002 rows=0 loops=20042)
Index Cond: (id = activities.trackable_id)
-> Index Scan using people_pkey on people recipient (cost=0.43..7.21 rows=1 width=83) (actual time=0.001..0.001 rows=1 loops=20042)
Index Cond: (id = activities.recipient_id)
-> Parallel Hash (cost=16874.67..16874.67 rows=466168 width=4) (actual time=79.735..79.736 rows=372930 loops=3)
Buckets: 131072 Batches: 16 Memory Usage: 3840kB
-> Parallel Seq Scan on relationships recipient_person (cost=0.00..16874.67 rows=466168 width=4) (actual time=0.021..35.805 rows=372930 loops=3)
-> Index Scan using post_comments_pkey on post_comments t_post_comments (cost=0.42..6.12 rows=1 width=300) (actual time=0.001..0.001 rows=0 loops=81045)
Index Cond: (id = activities.trackable_id)
-> Parallel Hash (cost=8996.19..8996.19 rows=44719 width=425) (actual time=726.076..726.076 rows=35776 loops=3)
Buckets: 16384 Batches: 16 Memory Usage: 3264kB
-> Parallel Seq Scan on posts t_post_comments_post (cost=0.00..8996.19 rows=44719 width=425) (actual time=479.054..488.703 rows=35776 loops=3)
Planning Time: 5.286 ms
JIT:
Functions: 304
Options: Inlining true, Optimization true, Expressions true, Deforming true
Timing: Generation 22.990 ms, Inlining 260.865 ms, Optimization 1652.601 ms, Emission 1228.811 ms, Total 3165.267 ms
Execution Time: 3303.637 ms
UPDATE:
Here's the plan with jit=off:
https://explain.dalibo.com/plan/EXn
It looks like essentially all your time is going to just-in-time compilation. Turn off JIT (jit=off in the config file, or set jit=off; to do it in the session).
I thought turning JIT off would make it fall a lot more than that, since the original plan attributed all but 3303.637 - 3165.267 = 138 ms to JIT. You should alternate a few times between JIT on and off to see whether the times you originally reported are reproducible or might just be due to differences in caching effects.
Also, the times you report are 2-3 times longer than the times the plan itself reports. That is another thing you should check to see how reproducible it is. Maybe most of the time is spent formatting the data to send, or sending it over the network. (That seems unlikely with only 240 rows, but I don't know what else would explain it.)
The time spent is spread thinly throughout the plan now, so there is no one change that could be made to any of the nodes that would make a big difference to the overall time. And I don't see that the estimation errors are driving any plan choices where better estimates would lead to better choices.
Given the lack of a clear bottleneck, opportunities to speed it up would probably be faster drives, more RAM for caching, or increasing max_parallel_workers_per_gather so you can get more work done in parallel.
Looking at the text of the query, I don't understand its motivation, so that limits my ability to make suggestions. But there are a lot of DISTINCTs there. Are some of the joins generating needless duplicate rows, which are then condensed back down with the DISTINCTs? If so, maybe using WHERE EXISTS (...) could improve things.
The query tries to retrieve videos ordered by how many tags they share with a specific video.
The following query takes about 800 ms, even though the index appears to be used.
If I remove COUNT, GROUP BY, and ORDER BY from the SQL query, it runs super fast (1-5 ms).
In such a case, improving the SQL query alone will not speed up the process. Do I need to use a MATERIALIZED VIEW?
SELECT "videos_video"."id",
"videos_video"."title",
"videos_video"."thumbnail_url",
"videos_video"."preview_url",
"videos_video"."embed_url",
"videos_video"."duration",
"videos_video"."views",
"videos_video"."is_public",
"videos_video"."published_at",
"videos_video"."created_at",
"videos_video"."updated_at",
COUNT("videos_video"."id") AS "n"
FROM "videos_video"
INNER JOIN "videos_video_tags" ON ("videos_video"."id" = "videos_video_tags"."video_id")
WHERE ("videos_video_tags"."tag_id" IN
(SELECT U0."id"
FROM "videos_tag" U0
INNER JOIN "videos_video_tags" U1 ON (U0."id" = U1."tag_id")
WHERE U1."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'))
GROUP BY "videos_video"."id"
ORDER BY "n" DESC
LIMIT 20;
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=1040.69..1040.74 rows=20 width=24) (actual time=738.648..738.654 rows=20 loops=1)
-> Sort (cost=1040.69..1044.29 rows=1441 width=24) (actual time=738.646..738.650 rows=20 loops=1)
Sort Key: (count(videos_video.id)) DESC
Sort Method: top-N heapsort Memory: 27kB
-> HashAggregate (cost=987.93..1002.34 rows=1441 width=24) (actual time=671.006..714.322 rows=188818 loops=1)
Group Key: videos_video.id
Batches: 1 Memory Usage: 28689kB
-> Nested Loop (cost=35.20..980.73 rows=1441 width=16) (actual time=0.341..559.034 rows=240293 loops=1)
-> Nested Loop (cost=34.78..340.88 rows=1441 width=16) (actual time=0.278..92.806 rows=240293 loops=1)
-> HashAggregate (cost=34.35..34.41 rows=6 width=32) (actual time=0.188..0.200 rows=4 loops=1)
Group Key: u0.id
Batches: 1 Memory Usage: 24kB
-> Nested Loop (cost=0.71..34.33 rows=6 width=32) (actual time=0.161..0.185 rows=4 loops=1)
-> Index Only Scan using videos_video_tags_video_id_tag_id_f8d6ba70_uniq on videos_video_tags u1 (cost=0.43..4.53 rows=6 width=16) (actual time=0.039..0.040 rows=4 loops=1)
Index Cond: (video_id = '748b1814-f311-48da-a1f5-6bf8fe229c7f'::uuid)
Heap Fetches: 0
-> Index Only Scan using videos_tag_pkey on videos_tag u0 (cost=0.28..4.97 rows=1 width=16) (actual time=0.035..0.035 rows=1 loops=4)
Index Cond: (id = u1.tag_id)
Heap Fetches: 0
-> Index Scan using videos_video_tags_tag_id_2673cfc8 on videos_video_tags (cost=0.43..35.90 rows=1518 width=32) (actual time=0.029..16.728 rows=60073 loops=4)
Index Cond: (tag_id = u0.id)
-> Index Only Scan using videos_video_pkey on videos_video (cost=0.42..0.44 rows=1 width=16) (actual time=0.002..0.002 rows=1 loops=240293)
Index Cond: (id = videos_video_tags.video_id)
Heap Fetches: 46
Planning Time: 1.980 ms
Execution Time: 739.446 ms
(26 rows)
Time: 742.145 ms
---------- Results of the execution plan for the query as suggested by Edouard ----------
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Nested Loop (cost=30043.90..30212.53 rows=20 width=746) (actual time=239.142..239.219 rows=20 loops=1)
-> Limit (cost=30043.48..30043.53 rows=20 width=24) (actual time=239.089..239.093 rows=20 loops=1)
-> Sort (cost=30043.48..30607.15 rows=225467 width=24) (actual time=239.087..239.090 rows=20 loops=1)
Sort Key: (count(*)) DESC
Sort Method: top-N heapsort Memory: 26kB
-> HashAggregate (cost=21789.21..24043.88 rows=225467 width=24) (actual time=185.710..219.211 rows=188818 loops=1)
Group Key: vt.video_id
Batches: 1 Memory Usage: 22545kB
-> Nested Loop (cost=20.62..20187.24 rows=320395 width=16) (actual time=4.975..106.839 rows=240293 loops=1)
-> Index Only Scan using videos_video_tags_video_id_tag_id_f8d6ba70_uniq on videos_video_tags vvt (cost=0.43..4.53 rows=6 width=16) (actual time=0.033..0.043 rows=4 loops=1)
Index Cond: (video_id = '748b1814-f311-48da-a1f5-6bf8fe229c7f'::uuid)
Heap Fetches: 0
-> Bitmap Heap Scan on videos_video_tags vt (cost=20.19..3348.60 rows=1518 width=32) (actual time=4.311..20.663 rows=60073 loops=4)
Recheck Cond: (tag_id = vvt.tag_id)
Heap Blocks: exact=34757
-> Bitmap Index Scan on videos_video_tags_tag_id_2673cfc8 (cost=0.00..19.81 rows=1518 width=0) (actual time=3.017..3.017 rows=60073 loops=4)
Index Cond: (tag_id = vvt.tag_id)
-> Index Scan using videos_video_pkey on videos_video v (cost=0.42..8.44 rows=1 width=738) (actual time=0.005..0.005 rows=1 loops=20)
Index Cond: (id = vt.video_id)
Planning Time: 0.854 ms
Execution Time: 241.392 ms
(21 rows)
Time: 242.909 ms
Here below are some ideas to simplify the query. An EXPLAIN ANALYZE will then confirm their potential impact on query performance.
Starting from the subquery:
SELECT U0."id"
FROM "videos_tag" U0
INNER JOIN "videos_video_tags" U1 ON (U0."id" = U1."tag_id")
WHERE U1."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'
According to the JOIN clause, U0."id" = U1."tag_id", so SELECT U0."id" can be replaced by SELECT U1."tag_id".
In this case, the table "videos_tag" U0 is no longer used in the subquery, which can be simplified to:
SELECT U1."tag_id"
FROM "videos_video_tags" U1
WHERE U1."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'
And the WHERE clause of the main query becomes:
WHERE "videos_video_tags"."tag_id" IN
( SELECT U1."tag_id"
FROM "videos_video_tags" U1
WHERE U1."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'
)
which can be transformed into a self join on the table "videos_video_tags", added to the FROM clause of the main query:
FROM "videos_video" AS v
INNER JOIN "videos_video_tags" AS vt
ON v."id" = vt."video_id"
INNER JOIN "videos_video_tags" AS vvt
ON vvt."tag_id" = vt."tag_id"
WHERE vvt."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'
Finally, the GROUP BY "videos_video"."id" clause can be replaced by GROUP BY "videos_video_tags"."video_id" according to the JOIN clause between the two tables. This new GROUP BY, together with the ORDER BY and LIMIT clauses, can then be applied to a subquery that involves only the table "videos_video_tags", before joining with the table "videos_video":
SELECT v."id",
v."title",
v."thumbnail_url",
v."preview_url",
v."embed_url",
v."duration",
v."views",
v."is_public",
v."published_at",
v."created_at",
v."updated_at",
w."n"
FROM "videos_video" AS v
INNER JOIN
( SELECT vt."video_id"
, count(*) AS "n"
FROM "videos_video_tags" AS vt
INNER JOIN "videos_video_tags" AS vvt
ON vvt."tag_id" = vt."tag_id"
WHERE vvt."video_id" = '748b1814-f311-48da-a1f5-6bf8fe229c7f'
GROUP BY vt."video_id"
ORDER BY "n" DESC
LIMIT 20
) AS w
ON v."id" = w."video_id"
This query currently takes 4 minutes to run:
with name1 as (
select col1 as a1, col2 as a2, sum(FEE) as a3
from s1, date
where return_date = datesk and year = 2000
group by col1, col2
)
select c_id
from name1 ala1, ss, cc
where ala1.a3 > (
select avg(a3) * 1.2 from name1 ctr2
where ala1.a2 = ctr2.a2
)
and s_sk = ala1.a2
and s_state = 'TN'
and ala1.a1 = c_sk
order by c_id
limit 100;
I have set work_mem = '1000MB' and enable_nestloop = off.
EXPLAIN ANALYZE of this query is: http://explain.depesz.com/s/DUa
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=59141.02..59141.09 rows=28 width=17) (actual time=253707.928..253707.940 rows=100 loops=1)
CTE name1
-> HashAggregate (cost=11091.33..11108.70 rows=1390 width=14) (actual time=105.223..120.358 rows=50441 loops=1)
Group Key: s1.col1, s1.col2
-> Hash Join (cost=2322.69..11080.90 rows=1390 width=14) (actual time=10.390..79.897 rows=55820 loops=1)
Hash Cond: (s1.return_date = date.datesk)
-> Seq Scan on s1 (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.005..33.801 rows=287514 loops=1)
-> Hash (cost=2318.11..2318.11 rows=366 width=4) (actual time=10.375..10.375 rows=366 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
-> Seq Scan on date (cost=0.00..2318.11 rows=366 width=4) (actual time=5.224..10.329 rows=366 loops=1)
Filter: (year = 2000)
Rows Removed by Filter: 72683
-> Sort (cost=48032.32..48032.39 rows=28 width=17) (actual time=253707.923..253707.930 rows=100 loops=1)
Sort Key: cc.c_id
Sort Method: top-N heapsort Memory: 32kB
-> Hash Join (cost=43552.37..48031.65 rows=28 width=17) (actual time=253634.511..253696.291 rows=18976 loops=1)
Hash Cond: (cc.c_sk = ala1.a1)
-> Seq Scan on cc (cost=0.00..3854.00 rows=100000 width=21) (actual time=0.009..18.527 rows=100000 loops=1)
-> Hash (cost=43552.02..43552.02 rows=28 width=4) (actual time=253634.420..253634.420 rows=18976 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 668kB
-> Hash Join (cost=1.30..43552.02 rows=28 width=4) (actual time=136.819..253624.375 rows=18982 loops=1)
Hash Cond: (ala1.a2 = ss.s_sk)
-> CTE Scan on name1 ala1 (cost=0.00..43548.70 rows=463 width=8) (actual time=136.756..253610.817 rows=18982 loops=1)
Filter: (a3 > (SubPlan 2))
Rows Removed by Filter: 31459
SubPlan 2
-> Aggregate (cost=31.29..31.31 rows=1 width=32) (actual time=5.025..5.025 rows=1 loops=50441)
-> CTE Scan on name1 ctr2 (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.860 rows=8241 loops=50441)
Filter: (ala1.a2 = a2)
Rows Removed by Filter: 42200
-> Hash (cost=1.15..1.15 rows=12 width=4) (actual time=0.036..0.036 rows=12 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 1kB
-> Seq Scan on ss (cost=0.00..1.15 rows=12 width=4) (actual time=0.025..0.033 rows=12 loops=1)
Filter: (s_state = 'TN'::bpchar)
Planning time: 0.316 ms
Execution time: 253708.351 ms
(36 rows)
With enable_nestloop=on:
EXPLAIN ANALYZE result is: http://explain.depesz.com/s/NPo
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=54916.36..54916.43 rows=28 width=17) (actual time=257869.004..257869.015 rows=100 loops=1)
CTE name1
-> HashAggregate (cost=11091.33..11108.70 rows=1390 width=14) (actual time=92.354..104.103 rows=50441 loops=1)
Group Key: s1.col1, s1.col2
-> Hash Join (cost=2322.69..11080.90 rows=1390 width=14) (actual time=9.371..68.156 rows=55820 loops=1)
Hash Cond: (s1.return_date = date.datesk)
-> Seq Scan on s1 (cost=0.00..7666.14 rows=287514 width=18) (actual time=0.011..25.637 rows=287514 loops=1)
-> Hash (cost=2318.11..2318.11 rows=366 width=4) (actual time=9.343..9.343 rows=366 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 13kB
-> Seq Scan on date (cost=0.00..2318.11 rows=366 width=4) (actual time=4.796..9.288 rows=366 loops=1)
Filter: (year = 2000)
Rows Removed by Filter: 72683
-> Sort (cost=43807.66..43807.73 rows=28 width=17) (actual time=257868.994..257868.998 rows=100 loops=1)
Sort Key: cc.c_id
Sort Method: top-N heapsort Memory: 32kB
-> Nested Loop (cost=0.29..43806.98 rows=28 width=17) (actual time=120.358..257845.941 rows=18976 loops=1)
-> Nested Loop (cost=0.00..43633.22 rows=28 width=4) (actual time=120.331..257692.654 rows=18982 loops=1)
Join Filter: (ala1.a2 = ss.s_sk)
Rows Removed by Join Filter: 208802
-> CTE Scan on name1 ala1 (cost=0.00..43548.70 rows=463 width=8) (actual time=120.316..257652.636 rows=18982 loops=1)
Filter: (a3 > (SubPlan 2))
Rows Removed by Filter: 31459
SubPlan 2
-> Aggregate (cost=31.29..31.31 rows=1 width=32) (actual time=5.105..5.105 rows=1 loops=50441)
-> CTE Scan on name1 ctr2 (cost=0.00..31.27 rows=7 width=32) (actual time=0.032..3.952 rows=8241 loops=50441)
Filter: (ala1.a2 = a2)
Rows Removed by Filter: 42200
-> Materialize (cost=0.00..1.21 rows=12 width=4) (actual time=0.000..0.001 rows=12 loops=18982)
-> Seq Scan on ss (cost=0.00..1.15 rows=12 width=4) (actual time=0.007..0.012 rows=12 loops=1)
Filter: (s_state = 'TN'::bpchar)
-> Index Scan using cc_pkey on cc (cost=0.29..6.20 rows=1 width=21) (actual time=0.007..0.007 rows=1 loops=18982)
Index Cond: (c_sk = ala1.a1)
Planning time: 0.453 ms
Execution time: 257869.554 ms
(34 rows)
Many other queries run quickly with enable_nestloop=off; for this query there is no big difference. The raw data is not really big, so 4 minutes is too much; I was expecting around 4-5 seconds.
Why is it taking so long!?
I tried this in both Postgres versions 9.4 and 9.5, and it is the same. Maybe I could create BRIN indexes, but I am not sure which columns to create them on.
Configuration settings:
effective_cache_size | 89GB
shared_buffers | 18GB
work_mem | 1000MB
maintenance_work_mem | 500MB
checkpoint_segments | 32
constraint_exclusion | on
checkpoint_completion_target | 0.5
Like John Bollinger commented, your sub-query gets evaluated for each row of the main query. But since you are averaging over a simple column, you can easily move the sub-query out to a CTE and calculate the average once, which should speed things up tremendously:
with name1 as (
select col1 as a1, col2 as a2, sum(FEE) as a3
from s1, date
where return_date = datesk and year = 2000
group by col1, col2
), avg_a3_by_a2 as (
select a2, avg(a3) * 1.2 as avg12
from name1
group by a2
)
select c_id
from name1, avg_a3_by_a2, ss, cc
where name1.a3 > avg_a3_by_a2.avg12
and name1.a2 = avg_a3_by_a2.a2
and s_sk = name1.a2
and s_state = 'TN'
and name1.a1 = c_sk
order by c_id
limit 100;
The new CTE calculates the average + 20% for every distinct value of a2.
Please also use the JOIN syntax instead of comma-separated FROM items, as it makes your code far more readable. And if you start using aliases in your query, use them consistently on all tables and columns. I could not correct either of these two issues because of the lack of information.
I have a relation like this:
R (EDGE INTEGER, DIHEDRAL INTEGER, FACE INTEGER, VALENCY INTEGER)
I tested twice, once with 64 rows in table R and once with 128 rows, but the smaller one takes much more time than the larger one. The plans are below (they show an error on explain.depesz.com). Could anyone help me check why? Thanks.
plan for 64 rows:
HashAggregate (cost=260.16..260.17 rows=1 width=12) (actual rows=64 loops=1)
-> Nested Loop (cost=89.44..260.15 rows=1 width=12) (actual rows=256 loops=1)
Join Filter: ((f1.face < f2.face) AND (e3.edge <> f1.edge) AND (e4.edge <> e3.edge) AND (f1.edge = f2.edge) AND (f1.face = e3.face))
Rows Removed by Join Filter: 142606080
-> Nested Loop (cost=41.91..167.59 rows=1 width=16) (actual rows=557056 loops=1)
-> Nested Loop (cost=41.91..125.71 rows=1 width=8) (actual rows=256 loops=1)
Join Filter: ((e5.edge <> f2.edge) AND (e5.edge <> e2.edge) AND (e2.face = e5.face))
Rows Removed by Join Filter: 1113856
-> Hash Join (cost=41.91..83.73 rows=1 width=16) (actual rows=512 loops=1)
Hash Cond: (f2.face = e2.face)
Join Filter: (e2.edge <> f2.edge)
Rows Removed by Join Filter: 256
-> Seq Scan on r f2 (cost=0.00..41.76 rows=12 width=8) (actual rows=384 loops=1)
Filter: (valency = 3)
Rows Removed by Filter: 1920
-> Hash (cost=41.76..41.76 rows=12 width=8) (actual rows=2176 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 85kB
-> Seq Scan on r e2 (cost=0.00..41.76 rows=12 width=8) (actual rows=2176 loops=1)
Filter: (dihedral = 2)
Rows Removed by Filter: 128
-> Seq Scan on r e5 (cost=0.00..41.76 rows=12 width=8) (actual rows=2176 loops=512)
Filter: (dihedral = 2)
Rows Removed by Filter: 128
-> Seq Scan on r e3 (cost=0.00..41.76 rows=12 width=8) (actual rows=2176 loops=256)
Filter: (dihedral = 2)
Rows Removed by Filter: 128
-> Hash Join (cost=47.53..92.32 rows=11 width=16) (actual rows=256 loops=557056)
Hash Cond: (e4.face = f1.face)
Join Filter: (e4.edge <> f1.edge)
Rows Removed by Join Filter: 128
-> Seq Scan on r e4 (cost=0.00..36.01 rows=2301 width=8) (actual rows=2304 loops=557056)
-> Hash (cost=47.52..47.52 rows=1 width=8) (actual rows=128 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 5kB
-> Seq Scan on r f1 (cost=0.00..47.52 rows=1 width=8) (actual rows=128 loops=1)
Filter: ((valency = 3) AND (dihedral = 1))
Rows Removed by Filter: 2176
Total runtime: 159268.541 ms
(37 rows)
plan for 128 rows:
HashAggregate (cost=501.28..501.29 rows=1 width=12) (actual rows=128 loops=1)
-> Nested Loop (cost=171.98..501.27 rows=2 width=12) (actual rows=512 loops=1)
Join Filter: ((e3.edge <> f1.edge) AND (e4.edge <> e3.edge) AND (f1.face = e3.face))
Rows Removed by Join Filter: 2227712
-> Seq Scan on r e3 (cost=0.00..80.31 rows=22 width=8) (actual rows=4352 loops=1)
Filter: (dihedral = 2)
Rows Removed by Filter: 256
-> Materialize (cost=171.98..420.08 rows=2 width=20) (actual rows=512 loops=4352)
-> Nested Loop (cost=171.98..420.07 rows=2 width=20) (actual rows=512 loops=1)
Join Filter: ((f1.face < f2.face) AND (f1.edge = f2.edge))
Rows Removed by Join Filter: 261632
-> Nested Loop (cost=80.59..242.23 rows=1 width=8) (actual rows=512 loops=1)
Join Filter: ((e5.edge <> f2.edge) AND (e5.edge <> e2.edge) AND (e2.face = e5.face))
Rows Removed by Join Filter: 4455936
-> Seq Scan on r e5 (cost=0.00..80.31 rows=22 width=8) (actual rows=4352 loops=1)
Filter: (dihedral = 2)
Rows Removed by Filter: 256
-> Materialize (cost=80.59..161.05 rows=2 width=16) (actual rows=1024 loops=4352)
-> Hash Join (cost=80.59..161.04 rows=2 width=16) (actual rows=1024 loops=1)
Hash Cond: (f2.face = e2.face)
Join Filter: (e2.edge <> f2.edge)
Rows Removed by Join Filter: 512
-> Seq Scan on r f2 (cost=0.00..80.31 rows=22 width=8) (actual rows=768 loops=1)
Filter: (valency = 3)
Rows Removed by Filter: 3840
-> Hash (cost=80.31..80.31 rows=22 width=8) (actual rows=4352 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 170kB
-> Seq Scan on r e2 (cost=0.00..80.31 rows=22 width=8) (actual rows=4352 loops=1)
Filter: (dihedral = 2)
Rows Removed by Filter: 256
-> Hash Join (cost=91.39..177.51 rows=22 width=16) (actual rows=512 loops=512)
Hash Cond: (e4.face = f1.face)
Join Filter: (e4.edge <> f1.edge)
Rows Removed by Join Filter: 256
-> Seq Scan on r e4 (cost=0.00..69.25 rows=4425 width=8) (actual rows=4608 loops=512)
-> Hash (cost=91.38..91.38 rows=1 width=8) (actual rows=256 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 10kB
-> Seq Scan on r f1 (cost=0.00..91.38 rows=1 width=8) (actual rows=256 loops=1)
Filter: ((valency = 3) AND (dihedral = 1))
Rows Removed by Filter: 4352
Total runtime: 1262.761 ms
(41 rows)
The query planner uses statistics on row counts/index sizes/etc. to estimate how to get the best performance out of a query. A bulk insertion of rows immediately followed by a query may not show best performance, because these statistics may be out of date.
To make sure the planner makes informed choices, you need to issue a call to ANALYZE prior to running your EXPLAIN query.
In your specific scenario, chances are the planner made a bad choice in the first case (the 64 rows) and a good one in the second case (the 128 rows).