Primary key scanning in partitioned table - sql

I have a really big table which I need to have partitioned by date (via a trigger, in my case).
The problem I've encountered: I can get data through a timestamp filter pretty fast, but I can't get good performance when extracting a certain row by primary key.
The main table is:
CREATE TABLE parent_table (
guid uuid NOT NULL DEFAULT uuid_generate_v4(), -- This is gonna be the primary key
tm timestamptz NOT NULL, -- Timestamp, on which paritions are based
value int4 not null default -1, -- Just a value
CONSTRAINT parent_table_pk PRIMARY KEY (guid)
);
CREATE INDEX parent_table_tm_idx ON parent_table USING btree (tm DESC);
Then I create a simple trigger function that creates a new partition whenever a new date arrives:
CREATE OR REPLACE FUNCTION parent_table_insert_fn()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
    schema_name varchar(255) := 'public';
    table_master varchar(255) := 'parent_table';
    table_part varchar(255) := '';
    table_date_underscore varchar(255) := '';
    constraint_tm_start timestamp with time zone;
    constraint_tm_end timestamp with time zone;
BEGIN
    table_part := table_master || '_' || to_char(timezone('utc', new.tm), 'YYYY_MM_DD');
    table_date_underscore := to_char(timezone('utc', new.tm), 'YYYY_MM_DD');
    PERFORM 1
    FROM information_schema.tables
    WHERE table_schema = schema_name
      AND table_name = table_part
    LIMIT 1;
    IF NOT FOUND THEN
        constraint_tm_start := to_char(timezone('utc', new.tm), 'YYYY-MM-DD')::timestamp AT TIME ZONE 'utc';
        constraint_tm_end := constraint_tm_start + interval '1 day';
        EXECUTE '
            CREATE TABLE ' || schema_name || '.' || table_part || ' (
                CONSTRAINT parent_table_' || table_date_underscore || '_pk PRIMARY KEY (guid),
                CONSTRAINT parent_table_' || table_date_underscore || '_ck CHECK ( tm >= ' || QUOTE_LITERAL(constraint_tm_start) || ' AND tm < ' || QUOTE_LITERAL(constraint_tm_end) || ' )
            ) INHERITS (' || schema_name || '.' || table_master || ');
            CREATE INDEX parent_table_' || table_date_underscore || '_tidx ON ' || schema_name || '.' || table_part || ' USING btree (tm DESC);
        ';
    END IF;
    EXECUTE '
        INSERT INTO ' || schema_name || '.' || table_part || '
        SELECT ( (' || QUOTE_LITERAL(NEW) || ')::' || schema_name || '.' || TG_RELNAME || ' ).*;';
    RETURN NULL;
END;
$function$;
Enable the trigger on the parent table:
create trigger parent_table_insert_fn_trigger before insert
on parent_table for each row execute function parent_table_insert_fn();
And insert some data into it:
insert into parent_table(guid, tm, value)
values
('1f4835c0-2b22-4cfc-ab3c-940af679ace6', '2021-04-06 14:00:00+03:00', 1),
('5ca37d57-e79e-4e1f-ace7-91eb671f3a82', '2021-04-07 15:30:00+03:00', 2),
('b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808', '2021-04-07 17:10:00+03:00', 3),
('ad69cd35-5b20-466f-9d5c-61fa5d41bc5f', '2021-04-08 16:50:00+03:00', 66),
('bb0ec87a-72bb-438e-8f4c-2cdc3ae7d525', '2021-03-21 19:00:00+03:00', -10);
After those manipulations I've got the parent table plus 4 partition tables:
parent_table
parent_table_2021_03_21
parent_table_2021_04_06
parent_table_2021_04_07
parent_table_2021_04_08
Checking that the index works well for a timestamp filter:
explain analyze
select * from parent_table where tm > '2021-04-07 10:00:00+03:00' and tm <= '2021-04-07 16:30:00+03:00';
Append (cost=0.00..14.43 rows=8 width=28) (actual time=0.017..0.020 rows=1 loops=1)
-> Seq Scan on parent_table parent_table_1 (cost=0.00..0.00 rows=1 width=28) (actual time=0.002..0.002 rows=0 loops=1)
Filter: ((tm > '2021-04-07 10:00:00+03'::timestamp with time zone) AND (tm <= '2021-04-07 16:30:00+03'::timestamp with time zone))
-> Bitmap Heap Scan on parent_table_2021_04_07 parent_table_2 (cost=4.22..14.39 rows=7 width=28) (actual time=0.013..0.015 rows=1 loops=1)
Recheck Cond: ((tm > '2021-04-07 10:00:00+03'::timestamp with time zone) AND (tm <= '2021-04-07 16:30:00+03'::timestamp with time zone))
Heap Blocks: exact=1
-> Bitmap Index Scan on parent_table_2021_04_07_tidx (cost=0.00..4.22 rows=7 width=0) (actual time=0.008..0.008 rows=1 loops=1)
Index Cond: ((tm > '2021-04-07 10:00:00+03'::timestamp with time zone) AND (tm <= '2021-04-07 16:30:00+03'::timestamp with time zone))
Planning Time: 0.381 ms
Execution Time: 0.053 ms
This is fine and works as I expected.
But selecting by a certain primary key gives me the following output:
explain analyze
select * from parent_table where guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808';
Append (cost=0.00..32.70 rows=5 width=28) (actual time=0.021..0.035 rows=1 loops=1)
-> Seq Scan on parent_table parent_table_1 (cost=0.00..0.00 rows=1 width=28) (actual time=0.003..0.004 rows=0 loops=1)
Filter: (guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'::uuid)
-> Index Scan using parent_table_2021_04_06_pk on parent_table_2021_04_06 parent_table_2 (cost=0.15..8.17 rows=1 width=28) (actual time=0.008..0.008 rows=0 loops=1)
Index Cond: (guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'::uuid)
-> Index Scan using parent_table_2021_04_07_pk on parent_table_2021_04_07 parent_table_3 (cost=0.15..8.17 rows=1 width=28) (actual time=0.008..0.009 rows=1 loops=1)
Index Cond: (guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'::uuid)
-> Index Scan using parent_table_2021_04_08_pk on parent_table_2021_04_08 parent_table_4 (cost=0.15..8.17 rows=1 width=28) (actual time=0.004..0.004 rows=0 loops=1)
Index Cond: (guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'::uuid)
-> Index Scan using parent_table_2021_03_21_pk on parent_table_2021_03_21 parent_table_5 (cost=0.15..8.17 rows=1 width=28) (actual time=0.006..0.006 rows=0 loops=1)
Index Cond: (guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'::uuid)
Planning Time: 0.345 ms
Execution Time: 0.076 ms
And this query gives me bad performance (I guess?), especially on really big partitioned tables with 10M+ rows in each partition.
So my question is: what should I do to avoid scanning every partition for a simple primary key lookup?
Note: I'm using PostgreSQL 13.1
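A side note: PostgreSQL 13 also supports declarative partitioning, sketched below for the same layout. It does not by itself solve the lookup problem, because a unique constraint on a partitioned table must include the partition key, so guid alone cannot be the primary key:
CREATE TABLE parent_table (
guid uuid NOT NULL DEFAULT uuid_generate_v4(),
tm timestamptz NOT NULL,
value int4 NOT NULL DEFAULT -1,
PRIMARY KEY (guid, tm) -- the partition key must be part of the key
) PARTITION BY RANGE (tm);
CREATE TABLE parent_table_2021_04_07 PARTITION OF parent_table
FOR VALUES FROM ('2021-04-07 00:00:00+00') TO ('2021-04-08 00:00:00+00');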
UPDATE 2021-04-07 15:22+03:00:
So, on a semi-production table I have these results:
Timestamp filter
Append (cost=0.00..809.35 rows=16616 width=32) (actual time=0.037..5.612 rows=16865 loops=1)
-> Seq Scan on wifi_logs t_1 (cost=0.00..0.00 rows=1 width=32) (actual time=0.010..0.011 rows=0 loops=1)
Filter: ((tm >= '2020-04-07 14:00:00+03'::timestamp with time zone) AND (tm <= '2020-04-07 17:00:00+03'::timestamp with time zone))
-> Index Scan using wifi_logs_tm_idx_2020_04_07 on wifi_logs_2020_04_07 t_2 (cost=0.29..726.27 rows=16615 width=32) (actual time=0.026..4.655 rows=16865 loops=1)
Index Cond: ((tm >= '2020-04-07 14:00:00+03'::timestamp with time zone) AND (tm <= '2020-04-07 17:00:00+03'::timestamp with time zone))
Planning Time: 14.869 ms
Execution Time: 6.151 ms
GUID (primary key filter)
-> Seq Scan on wifi_logs t_1 (cost=0.00..0.00 rows=1 width=32) (actual time=0.015..0.016 rows=0 loops=1)
Filter: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
-> Seq Scan on wifi_logs_2014_12_04 t_4 (cost=0.00..1.01 rows=1 width=32) (actual time=0.006..0.006 rows=0 loops=1)
Filter: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
Rows Removed by Filter: 1
--
-- TONS OF PARTITION TABLE SCANS
--
-> Index Scan using wifi_logs_2021_03_18_pk on wifi_logs_2021_03_18 t_387 (cost=0.42..8.44 rows=1 width=32) (actual time=0.011..0.011 rows=0 loops=1)
Index Cond: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
-> Seq Scan on wifi_logs_1970_01_01 t_388 (cost=0.00..3.60 rows=1 width=32) (actual time=0.020..0.020 rows=0 loops=1)
Filter: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
Rows Removed by Filter: 119
-> Index Scan using wifi_logs_2021_03_19_pk on wifi_logs_2021_03_19 t_389 (cost=0.42..8.44 rows=1 width=32) (actual time=0.012..0.012 rows=0 loops=1)
Index Cond: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
--
-- MANY MORE PARTITION TABLE SCANS
--
-> Index Scan using wifi_logs_2021_04_07_pk on wifi_logs_2021_04_07 t_408 (cost=0.42..8.44 rows=1 width=32) (actual time=0.010..0.010 rows=0 loops=1)
Index Cond: (guid = '78bc5537-4f2f-4e83-8abd-4241ac3f9f27'::uuid)
Planning Time: 97.662 ms
Execution Time: 36.756 ms

This is normal, and there is no way to avoid it, except to:
create fewer partitions, so that you have to scan fewer partitions
add a condition on tm to the query, to avoid scanning them all
You will notice that the planning time greatly exceeds the query execution time. To help with that, you can:
create fewer partitions, so that the optimizer has less work to do
use prepared statements to avoid the planning effort
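A minimal sketch of the "add a condition on tm" suggestion, assuming the application can derive the (UTC) day a row was written, since the trigger builds the CHECK constraints on UTC days:
SELECT *
FROM parent_table
WHERE guid = 'b57bfbf6-7ed0-4dde-a40b-9fa2e6f24808'
  AND tm >= '2021-04-07 00:00:00+00'
  AND tm <  '2021-04-08 00:00:00+00';
With the tm bounds visible at plan time, constraint exclusion skips every partition whose CHECK constraint contradicts them. Note that this interacts with prepared statements: with inheritance partitioning, exclusion happens at plan time, so a generic plan (which does not see parameter values) may again scan all partitions.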


Postgres performance issue with query based on filter values

I am not an expert in Postgres, but I am trying to understand this strange behaviour and perhaps some of you might give me some insight.
These are the tables and indexes involved:
Tables
CREATE TABLE swp_am_hcbe_pro.submissions
(
id bigint NOT NULL DEFAULT nextval('swp_am_hcbe_pro.submissions_id_seq'::regclass),
application_id bigint NOT NULL,
transaction_names_id bigint NOT NULL,
"timestamp" timestamp without time zone NOT NULL,
submission_status character varying(32) COLLATE pg_catalog."default" NOT NULL,
submission_type character varying(16) COLLATE pg_catalog."default" NOT NULL,
exit_code character varying(32) COLLATE pg_catalog."default",
ignore_partner_status boolean NOT NULL DEFAULT false,
ignore_sell_partner_status boolean NOT NULL DEFAULT false,
ignore_exclusion_rules boolean NOT NULL DEFAULT false,
dpa_iban character varying(32) COLLATE pg_catalog."default",
dpa_bic character varying(32) COLLATE pg_catalog."default",
dpa_id bigint,
dpa_blz bigint,
dda_iban character varying(32) COLLATE pg_catalog."default",
dda_bic character varying(32) COLLATE pg_catalog."default",
dda_id bigint,
dda_blz bigint,
dda_sepa_mandate_ref character varying(128) COLLATE pg_catalog."default",
use_different_sepa_mandate character varying(34) COLLATE pg_catalog."default",
use_manual_limit_extension boolean NOT NULL DEFAULT false,
use_automatic_limit_extension boolean NOT NULL DEFAULT false,
json_payload text COLLATE pg_catalog."default" NOT NULL,
final_timestamp timestamp without time zone,
CONSTRAINT submissions_pkey PRIMARY KEY (id),
CONSTRAINT submission_app_id FOREIGN KEY (application_id)
REFERENCES swp_am_hcbe_pro.applications (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE CASCADE,
CONSTRAINT submission_transaction_names_id FOREIGN KEY (transaction_names_id)
REFERENCES swp_am_hcbe_pro.transaction_names (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION,
CONSTRAINT chk_submission_status CHECK (submission_status::text = ANY (ARRAY['ERROR'::character varying, 'DENIED'::character varying, 'PROCESSED'::character varying, 'REJECTED'::character varying, 'PROCESSING'::character varying, 'SCHEDULED'::character varying]::text[])),
CONSTRAINT submission_types CHECK (submission_type::text = ANY (ARRAY['AUTO'::character varying, 'MANUAL'::character varying]::text[]))
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
CREATE TABLE swp_am_hcbe_pro.applications
(
id bigint NOT NULL DEFAULT nextval('swp_am_hcbe_pro.applications_id_seq'::regclass),
correlation_id character varying(64) COLLATE pg_catalog."default" NOT NULL,
incoming_timestamp timestamp without time zone NOT NULL,
source_input character varying(16) COLLATE pg_catalog."default" NOT NULL,
source_file_path character varying(255) COLLATE pg_catalog."default",
application_type character varying(127) COLLATE pg_catalog."default" NOT NULL,
loan_id bigint,
vin character varying(17) COLLATE pg_catalog."default",
cooperation_name character varying(255) COLLATE pg_catalog."default",
cooperation_id bigint,
submitter_name character varying(255) COLLATE pg_catalog."default",
submitter_id bigint,
dealer_name character varying(255) COLLATE pg_catalog."default",
dealer_id bigint,
dealer_ext_id character varying(25) COLLATE pg_catalog."default",
invoice_id character varying(25) COLLATE pg_catalog."default",
stock_id character varying(20) COLLATE pg_catalog."default",
payment_term character varying(20) COLLATE pg_catalog."default",
reg_document_id character varying(25) COLLATE pg_catalog."default",
invoice_amount numeric(20,4),
application_status character varying(64) COLLATE pg_catalog."default",
dealer_group_id bigint,
approver text COLLATE pg_catalog."default",
approve_timestamp timestamp without time zone,
payload text COLLATE pg_catalog."default" NOT NULL,
auto_resub_attempts integer NOT NULL DEFAULT 0,
row_number bigint,
email_sent boolean DEFAULT false,
modified_date timestamp(6) without time zone DEFAULT CURRENT_TIMESTAMP,
product_name text COLLATE pg_catalog."default",
priority smallint,
CONSTRAINT applications_pkey PRIMARY KEY (id),
CONSTRAINT chk_application_status CHECK (application_status::text = ANY (ARRAY['PROCESSED'::character varying, 'PROCESSING'::character varying, 'WAIT_NEXT_SUBMISSION'::character varying, 'WAIT_MANUAL_SUBMISSION'::character varying, 'WAIT_AUTOMATIC_SUBMISSION'::character varying, 'WAIT_IN_QUEUE'::character varying, 'SUBMISSION_NOT_FOUND'::character varying, 'WAIT_FOR_ASYNC_ACTIVATION'::character varying, 'WAIT_FOR_ASYNC_SHIPMENT'::character varying, 'WAIT_FOR_BOOKING_CONFIRMATION'::character varying, 'WAIT_FOR_ACTIVATION_CONFIRMATION'::character varying, 'REJECTED'::character varying, 'NOT_IN_QUEUE'::character varying, 'SCHEDULED'::character varying]::text[])),
CONSTRAINT chk_source CHECK (source_input::text = ANY (ARRAY['LM'::character varying, 'KOSYFA'::character varying, 'SWPII'::character varying, 'ADM'::character varying]::text[]))
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
CREATE TABLE swp_am_hcbe_pro.transaction_names
(
id bigint NOT NULL DEFAULT nextval('swp_am_hcbe_pro.transaction_names_id_seq'::regclass),
name character varying(32) COLLATE pg_catalog."default" NOT NULL,
sub_name character varying(32) COLLATE pg_catalog."default",
CONSTRAINT transaction_names_pkey PRIMARY KEY (id)
)
WITH (
OIDS = FALSE
)
TABLESPACE pg_default;
Indexes
CREATE INDEX submissions_app_id_asc_timestamp_desc_idx
ON swp_am_hcbe_pro.submissions USING btree
(application_id, "timestamp" DESC)
TABLESPACE pg_default;
CREATE INDEX submissions_app_id_timestamp_trans_name_id_idx
ON swp_am_hcbe_pro.submissions USING btree
(application_id, "timestamp", transaction_names_id)
TABLESPACE pg_default;
CREATE INDEX submissions_timestamp_asc_app_id_asc_idx
ON swp_am_hcbe_pro.submissions USING btree
("timestamp", application_id)
TABLESPACE pg_default;
CREATE INDEX application_correlation_id_idx
ON swp_am_hcbe_pro.applications USING btree
(correlation_id COLLATE pg_catalog."default")
TABLESPACE pg_default;
CREATE INDEX application_correlation_row_number_idx
ON swp_am_hcbe_pro.applications USING btree
(correlation_id COLLATE pg_catalog."default", row_number)
TABLESPACE pg_default;
CREATE INDEX applications_application_status_idx
ON swp_am_hcbe_pro.applications USING btree
(application_status COLLATE pg_catalog."default")
TABLESPACE pg_default;
CREATE INDEX applications_invoice_idx
ON swp_am_hcbe_pro.applications USING btree
(invoice_id COLLATE pg_catalog."default")
TABLESPACE pg_default;
CREATE INDEX applications_vin_idx
ON swp_am_hcbe_pro.applications USING btree
(vin COLLATE pg_catalog."default")
TABLESPACE pg_default;
I have the following view
CREATE OR REPLACE VIEW swp_am_hcbe_pro.application_list_simple AS
WITH subm AS (
SELECT DISTINCT ON (s.application_id) s.application_id,
s."timestamp",
s.exit_code,
s.transaction_names_id
FROM swp_am_hcbe_pro.submissions s
ORDER BY s.application_id, s."timestamp" DESC
)
SELECT app.id,
app.correlation_id,
app.source_input,
app.source_file_path,
app.application_type,
app.loan_id,
app.vin,
app.cooperation_name,
app.cooperation_id,
app.submitter_name,
app.submitter_id,
app.dealer_id,
app.dealer_name,
app.dealer_ext_id,
app.invoice_id,
app.stock_id,
app.payment_term,
app.reg_document_id,
app.invoice_amount,
app.application_status,
app.incoming_timestamp,
app.dealer_group_id,
app.approver,
app.approve_timestamp,
subm.exit_code,
tn.name AS transaction_name,
tn.sub_name AS sub_transaction_name,
tn.id AS transaction_type_id,
subm."timestamp" AS last_submission_timestamp,
app.modified_date
FROM swp_am_hcbe_pro.applications app
LEFT JOIN subm ON app.id = subm.application_id
LEFT JOIN swp_am_hcbe_pro.transaction_names tn ON tn.id = subm.transaction_names_id;
If I run this statement, the execution time is 2481.333 ms:
explain analyze
SELECT *, count(*) OVER () AS total FROM swp_am_hcbe_pro.application_list_simple
WHERE INCOMING_TIMESTAMP >= '2021-11-08' AND INCOMING_TIMESTAMP <= '2021-11-09'
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC LIMIT 100 OFFSET 0
;
I got the following
"Limit (cost=461799.85..461800.10 rows=100 width=490) (actual time=2473.878..2474.618 rows=100 loops=1)"
" -> Sort (cost=461799.85..461803.13 rows=1311 width=490) (actual time=2473.877..2474.612 rows=100 loops=1)"
" Sort Key: app.approve_timestamp DESC, app.incoming_timestamp DESC"
" Sort Method: top-N heapsort Memory: 112kB"
" -> WindowAgg (cost=458791.38..461749.74 rows=1311 width=490) (actual time=2471.792..2473.247 rows=1620 loops=1)"
" -> Hash Left Join (cost=458791.38..461720.25 rows=1311 width=482) (actual time=2456.132..2470.895 rows=1620 loops=1)"
" Hash Cond: (subm.transaction_names_id = tn.id)"
" CTE subm"
" -> Unique (cost=0.43..333656.64 rows=129297 width=31) (actual time=0.036..1846.992 rows=645062 loops=1)"
" -> Index Scan using submissions_app_id_asc_timestamp_desc_idx on submissions s (cost=0.43..329433.26 rows=1689349 width=31) (actual time=0.033..1621.049 rows=1699582 loops=1)"
" -> Hash Right Join (cost=125133.09..128058.44 rows=1311 width=459) (actual time=2456.083..2470.337 rows=1620 loops=1)"
" Hash Cond: (subm.application_id = app.id)"
" -> CTE Scan on subm (cost=0.00..2585.94 rows=129297 width=106) (actual time=0.038..2135.256 rows=645062 loops=1)"
" -> Hash (cost=125116.71..125116.71 rows=1311 width=361) (actual time=237.582..238.310 rows=1620 loops=1)"
" Buckets: 2048 Batches: 1 Memory Usage: 483kB"
" -> Gather (cost=1000.00..125116.71 rows=1311 width=361) (actual time=11.959..236.468 rows=1620 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Parallel Seq Scan on applications app (cost=0.00..123985.61 rows=546 width=361) (actual time=2.880..97.484 rows=540 loops=3)"
" Filter: ((incoming_timestamp >= '2021-11-08 00:00:00'::timestamp without time zone) AND (incoming_timestamp <= '2021-11-09 00:00:00'::timestamp without time zone))"
" Rows Removed by Filter: 214530"
" -> Hash (cost=1.29..1.29 rows=29 width=31) (actual time=0.033..0.033 rows=29 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 10kB"
" -> Seq Scan on transaction_names tn (cost=0.00..1.29 rows=29 width=31) (actual time=0.011..0.015 rows=29 loops=1)"
"Planning time: 0.587 ms"
"Execution time: 2481.333 ms"
If I run this one, which only changes the date filter, the execution time is 365817.271 ms:
explain analyze
SELECT *, count(*) OVER () AS total FROM swp_am_hcbe_pro.application_list_simple
WHERE INCOMING_TIMESTAMP >= '2021-11-09' AND INCOMING_TIMESTAMP <= '2021-11-10'
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC LIMIT 100 OFFSET 0
;
"Limit (cost=462844.68..462844.69 rows=1 width=490) (actual time=365809.554..365810.419 rows=100 loops=1)"
" -> Sort (cost=462844.68..462844.69 rows=1 width=490) (actual time=365809.553..365810.411 rows=100 loops=1)"
" Sort Key: app.approve_timestamp DESC, app.incoming_timestamp DESC"
" Sort Method: top-N heapsort Memory: 125kB"
" -> WindowAgg (cost=334656.77..462844.67 rows=1 width=490) (actual time=365806.595..365808.483 rows=2140 loops=1)"
" -> Nested Loop Left Join (cost=334656.77..462844.65 rows=1 width=482) (actual time=2094.856..365793.839 rows=2140 loops=1)"
" CTE subm"
" -> Unique (cost=0.43..333656.64 rows=129297 width=31) (actual time=0.036..1771.818 rows=645068 loops=1)"
" -> Index Scan using submissions_app_id_asc_timestamp_desc_idx on submissions s (cost=0.43..329433.26 rows=1689349 width=31) (actual time=0.034..1563.614 rows=1699595 loops=1)"
" -> Nested Loop Left Join (cost=1000.00..129187.86 rows=1 width=459) (actual time=2094.836..365762.361 rows=2140 loops=1)"
" Join Filter: (app.id = subm.application_id)"
" Rows Removed by Join Filter: 1380443382"
" -> Gather (cost=1000.00..124985.71 rows=1 width=361) (actual time=8.475..33.996 rows=2140 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Parallel Seq Scan on applications app (cost=0.00..123985.61 rows=1 width=361) (actual time=1.809..103.597 rows=713 loops=3)"
" Filter: ((incoming_timestamp >= '2021-11-09 00:00:00'::timestamp without time zone) AND (incoming_timestamp <= '2021-11-10 00:00:00'::timestamp without time zone))"
" Rows Removed by Filter: 214359"
" -> CTE Scan on subm (cost=0.00..2585.94 rows=129297 width=106) (actual time=0.030..125.740 rows=645068 loops=2140)"
" -> Index Scan using transaction_names_pkey on transaction_names tn (cost=0.14..0.16 rows=1 width=31) (actual time=0.009..0.009 rows=1 loops=2140)"
" Index Cond: (id = subm.transaction_names_id)"
"Planning time: 0.414 ms"
"Execution time: 365817.271 ms"
I can't really understand why this is happening. I also tried to run queries with filters spanning more than one day (like one week, or one month), and all of them work fine.
I vacuumed the affected tables, even though there were not so many rows. What else can I check?
If you need further information, don't hesitate to ask me.
UPDATE
If I change the query to this, using to_timestamp on the string, then it works. But why does it work in all the other cases and not in this one? Why does it always happen with the current date?
explain analyze
SELECT * FROM swp_am_hcbe_pro.application_list_simple
WHERE INCOMING_TIMESTAMP >= to_timestamp('2021-11-09 00:00:00','YYYY-MM-DD HH24:MI:SS')
AND INCOMING_TIMESTAMP <= to_timestamp('2021-11-10 00:00:00','YYYY-MM-DD HH24:MI:SS')
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC LIMIT 100 OFFSET 0 ;
I obtain the following
"Limit (cost=463151.72..463151.97 rows=100 width=481) (actual time=2743.036..2743.923 rows=100 loops=1)"
" -> Sort (cost=463151.72..463153.01 rows=517 width=481) (actual time=2743.035..2743.918 rows=100 loops=1)"
" Sort Key: app.approve_timestamp DESC, app.incoming_timestamp DESC"
" Sort Method: top-N heapsort Memory: 121kB"
" -> Hash Left Join (cost=460200.05..463126.79 rows=517 width=481) (actual time=2730.684..2741.744 rows=2382 loops=1)"
" Hash Cond: (subm.transaction_names_id = tn.id)"
" CTE subm"
" -> Unique (cost=0.43..333658.84 rows=129297 width=31) (actual time=0.020..1669.678 rows=645311 loops=1)"
" -> Index Scan using submissions_app_id_asc_timestamp_desc_idx on submissions s (cost=0.43..329435.46 rows=1689349 width=31) (actual time=0.019..1476.827 rows=1700028 loops=1)"
" -> Hash Right Join (cost=126539.56..129464.91 rows=517 width=458) (actual time=2730.642..2740.999 rows=2382 loops=1)"
" Hash Cond: (subm.application_id = app.id)"
" -> CTE Scan on subm (cost=0.00..2585.94 rows=129297 width=106) (actual time=0.023..1924.458 rows=645311 loops=1)"
" -> Hash (cost=126533.10..126533.10 rows=517 width=360) (actual time=736.655..737.534 rows=2382 loops=1)"
" Buckets: 4096 (originally 1024) Batches: 1 (originally 1) Memory Usage: 864kB"
" -> Gather (cost=1000.00..126533.10 rows=517 width=360) (actual time=18.882..734.265 rows=2382 loops=1)"
" Workers Planned: 2"
" Workers Launched: 2"
" -> Parallel Seq Scan on applications app (cost=0.00..125481.40 rows=215 width=360) (actual time=15.908..610.513 rows=794 loops=3)"
" Filter: ((incoming_timestamp >= to_timestamp('2021-11-09 00:00:00'::text, 'YYYY-MM-DD HH24:MI:SS'::text)) AND (incoming_timestamp <= to_timestamp('2021-11-10 00:00:00'::text, 'YYYY-MM-DD HH24:MI:SS'::text)))"
" Rows Removed by Filter: 214359"
" -> Hash (cost=1.29..1.29 rows=29 width=31) (actual time=0.026..0.026 rows=29 loops=1)"
" Buckets: 1024 Batches: 1 Memory Usage: 10kB"
" -> Seq Scan on transaction_names tn (cost=0.00..1.29 rows=29 width=31) (actual time=0.012..0.018 rows=29 loops=1)"
"Planning time: 0.370 ms"
"Execution time: 2751.279 ms"
So, the question remains:
Why does this query take 360 seconds?
SELECT * FROM swp_am_hcbe_pro.application_list_simple
WHERE INCOMING_TIMESTAMP >= '2021-11-09' AND INCOMING_TIMESTAMP <= '2021-11-10'
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC LIMIT 100 OFFSET 0
;
But this one takes 3 seconds:
SELECT * FROM swp_am_hcbe_pro.application_list_simple
WHERE INCOMING_TIMESTAMP >= to_timestamp('2021-11-09 00:00:00','YYYY-MM-DD HH24:MI:SS')
AND INCOMING_TIMESTAMP <= to_timestamp('2021-11-10 00:00:00','YYYY-MM-DD HH24:MI:SS')
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC LIMIT 100 OFFSET 0 ;
In any other case, it works whether or not I use to_timestamp. Just a remark: I removed the count(*) over () in my last update to show that it is not relevant; the problem still remains.
Thank you for your support.
Try to avoid the (non-indexed) CTE scan by using a (TEMP) view instead of a CTE [I also replaced the DISTINCT ON (...) with a NOT EXISTS (...); ztimestamp below stands in for the original "timestamp" column]:
CREATE OR REPLACE VIEW vsubm AS
SELECT -- DISTINCT ON (s.application_id)
s.application_id
, s.ztimestamp
, s.exit_code
, s.transaction_names_id
FROM submissions s
WHERE NOT EXISTS ( SELECT *
FROM submissions nx
WHERE nx.application_id = s.application_id
AND nx.ztimestamp > s.ztimestamp
)
-- ORDER BY s.application_id, s.ztimestamp DESC
;
CREATE OR REPLACE VIEW application_list_simple2 AS
SELECT app.id
, app.correlation_id
, app.source_input
, app.source_file_path
, app.application_type
, app.loan_id
, app.vin
, app.cooperation_name
, app.cooperation_id
, app.submitter_name
, app.submitter_id
, app.dealer_id
, app.dealer_name
, app.dealer_ext_id
, app.invoice_id
, app.stock_id
, app.payment_term
, app.reg_document_id
, app.invoice_amount
, app.application_status
, app.incoming_timestamp AS INCOMING_TIMESTAMP
, app.dealer_group_id
, app.approver
, app.approve_timestamp AS APPROVE_TIMESTAMP
, vsubm.exit_code
, tn.name AS transaction_name
, tn.sub_name AS sub_transaction_name
, tn.id AS transaction_type_id
, vsubm.ztimestamp AS last_submission_timestamp
, app.modified_date
FROM applications app
LEFT JOIN vsubm ON app.id = vsubm.application_id
LEFT JOIN transaction_names tn ON tn.id = vsubm.transaction_names_id
;
-- EXPLAIN
-- explain analyze
SELECT *
-- , count(*) OVER () AS total
FROM application_list_simple2
WHERE INCOMING_TIMESTAMP >= '2021-11-08' AND INCOMING_TIMESTAMP < '2021-11-09'
ORDER BY APPROVE_TIMESTAMP DESC, INCOMING_TIMESTAMP DESC
-- LIMIT 100 OFFSET 0
WRT the observed behaviour:
selecting a date range at the edge of the data may cause a different plan to be generated than for a timespan in the middle
maybe the statistics for today's records are still incomplete (the statistics collector could be lagging behind)
bad plans (lots of hash joins and seq scans) may be caused by a lack of statistics, a lack of indexes, or random_page_cost set too high
the row sizes for the tables are rather large; maybe some normalisation is needed, especially for the applications table
mixing timestamps with/without time zone could cause some confusion [general advice: always use timestamps with time zone]
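On the statistics point, a quick check might look like this (a sketch, using the schema and table from the question):
ANALYZE swp_am_hcbe_pro.applications;
SELECT relname, last_analyze, last_autoanalyze, n_mod_since_analyze
FROM pg_stat_user_tables
WHERE relname = 'applications';
A large n_mod_since_analyze would explain the rows=1 estimate on the current day's seq scan in the slow plan, which is what pushed the planner into the nested loop over the CTE.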

Separate PostgreSQL partitions join

I'm using PostgreSQL 10.6. I have several tables partitioned by day. Each day has its own data. I want to select rows from these tables within a day.
drop table IF EXISTS request;
drop table IF EXISTS request_identity;
CREATE TABLE IF NOT EXISTS request (
id bigint not null,
record_date date not null,
payload text not null
) PARTITION BY LIST (record_date);
CREATE TABLE IF NOT EXISTS request_p1 PARTITION OF request FOR VALUES IN ('2001-01-01');
CREATE TABLE IF NOT EXISTS request_p2 PARTITION OF request FOR VALUES IN ('2001-01-02');
CREATE INDEX IF NOT EXISTS i_request_p1_id ON request_p1 (id);
CREATE INDEX IF NOT EXISTS i_request_p2_id ON request_p2 (id);
do $$
begin
for i in 1..100000 loop
INSERT INTO request (id,record_date,payload) values (i, '2001-01-01', 'abc');
end loop;
for i in 100001..200000 loop
INSERT INTO request (id,record_date,payload) values (i, '2001-01-02', 'abc');
end loop;
end;
$$;
CREATE TABLE IF NOT EXISTS request_identity (
record_date date not null,
parent_id bigint NOT NULL,
identity_name varchar(32),
identity_value varchar(32)
) PARTITION BY LIST (record_date);
CREATE TABLE IF NOT EXISTS request_identity_p1 PARTITION OF request_identity FOR VALUES IN ('2001-01-01');
CREATE TABLE IF NOT EXISTS request_identity_p2 PARTITION OF request_identity FOR VALUES IN ('2001-01-02');
CREATE INDEX IF NOT EXISTS i_request_identity_p1_payload ON request_identity_p1 (identity_name, identity_value);
CREATE INDEX IF NOT EXISTS i_request_identity_p2_payload ON request_identity_p2 (identity_name, identity_value);
do $$
begin
for i in 1..100000 loop
INSERT INTO request_identity (parent_id,record_date,identity_name,identity_value) values (i, '2001-01-01', 'NAME', 'somename'||i);
end loop;
for i in 100001..200000 loop
INSERT INTO request_identity (parent_id,record_date,identity_name,identity_value) values (i, '2001-01-02', 'NAME', 'somename'||i);
end loop;
end;
$$;
analyze request;
analyze request_identity;
When I select within one day, I see a good query plan:
explain analyze select *
from request
where record_date between '2001-01-01' and '2001-01-01'
and exists (select * from request_identity where parent_id = id and identity_name = 'NAME' and identity_value = 'somename555' and record_date between '2001-01-01' and '2001-01-01')
limit 100;
Limit (cost=8.74..16.78 rows=1 width=16)
-> Nested Loop (cost=8.74..16.78 rows=1 width=16)
-> HashAggregate (cost=8.45..8.46 rows=1 width=8)
Group Key: request_identity_p1.parent_id
-> Append (cost=0.42..8.44 rows=1 width=8)
-> Index Scan using i_request_identity_p1_payload on request_identity_p1 (cost=0.42..8.44 rows=1 width=8)
Index Cond: (((identity_name)::text = 'NAME'::text) AND ((identity_value)::text = 'somename555'::text))
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-01'::date))
-> Append (cost=0.29..8.32 rows=1 width=16)
-> Index Scan using i_request_p1_id on request_p1 (cost=0.29..8.32 rows=1 width=16)
Index Cond: (id = request_identity_p1.parent_id)
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-01'::date))
But if I make a select for 2 days or more, then PostgreSQL first appends the rows of all partitions of request_identity and all partitions of request, and then joins them.
So this is the SQL that is not working as I want:
explain analyze select *
from request
where record_date between '2001-01-01' and '2001-01-02'
and exists (select * from request_identity where parent_id = id and identity_name = 'NAME' and identity_value = 'somename1777' and record_date between '2001-01-01' and '2001-01-02')
limit 100;
Limit (cost=17.19..50.21 rows=2 width=16)
-> Nested Loop (cost=17.19..50.21 rows=2 width=16)
-> Unique (cost=16.90..16.91 rows=2 width=8)
-> Sort (cost=16.90..16.90 rows=2 width=8)
Sort Key: request_identity_p1.parent_id
-> Append (cost=0.42..16.89 rows=2 width=8)
-> Index Scan using i_request_identity_p1_payload on request_identity_p1 (cost=0.42..8.44 rows=1 width=8)
Index Cond: (((identity_name)::text = 'NAME'::text) AND ((identity_value)::text = 'somename1777'::text))
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-02'::date))
-> Index Scan using i_request_identity_p2_payload on request_identity_p2 (cost=0.42..8.44 rows=1 width=8)
Index Cond: (((identity_name)::text = 'NAME'::text) AND ((identity_value)::text = 'somename1777'::text))
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-02'::date))
-> Append (cost=0.29..16.63 rows=2 width=16)
-> Index Scan using i_request_p1_id on request_p1 (cost=0.29..8.32 rows=1 width=16)
Index Cond: (id = request_identity_p1.parent_id)
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-02'::date))
-> Index Scan using i_request_p2_id on request_p2 (cost=0.29..8.32 rows=1 width=16)
Index Cond: (id = request_identity_p1.parent_id)
Filter: ((record_date >= '2001-01-01'::date) AND (record_date <= '2001-01-02'::date))
In my case it doesn't make sense to join (with nested loops) these appends, since matching rows exist only within the same one-day partition group.
The desired result for me is that PostgreSQL first joins request_p1 to request_identity_p1, and request_p2 to request_identity_p2, and only after that appends the results.
The question is:
Is there a way to perform joins between partitions separately, within each one-day partition group?
Thanks.
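A hedged note, since the question targets 10.6: PostgreSQL 11 introduced partition-wise join (enable_partitionwise_join, off by default). It requires both tables to be partitioned identically and the join clause to include the partition key, so the EXISTS would also need to correlate record_date. A sketch of what that would look like on 11+:
SET enable_partitionwise_join = on; -- PostgreSQL 11+ only
explain analyze select *
from request r
where r.record_date between '2001-01-01' and '2001-01-02'
and exists (select *
            from request_identity ri
            where ri.parent_id = r.id
              and ri.record_date = r.record_date -- also correlate the partition key
              and ri.identity_name = 'NAME'
              and ri.identity_value = 'somename1777')
limit 100;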

SQL query based on historical events and its performance

Below are the tables and sample data. For an ngram, I want to get the data up to a provided event_id - 1, or up to the latest if no event_id is provided, and to return the data only if the row at that max(event_id) actually has some data.
Please look at examples and scenarios below. It's easier to explain that way.
CREATE TABLE NGRAM_CONTENT
(
event_id BIGINT PRIMARY KEY,
ref TEXT NOT NULL,
data TEXT
);
CREATE TABLE NGRAM
(
event_id BIGINT NOT NULL, -- not unique here: one event can touch several ngrams
ngram TEXT NOT NULL,
ref TEXT NOT NULL,
name_length INT NOT NULL
);
insert into NGRAM_CONTENT(event_id, ref, data) values (1, 'p1', 'a data 1');
insert into NGRAM_CONTENT(event_id, ref, data) values (2, 'p1', 'a data 2');
insert into NGRAM_CONTENT(event_id, ref, data) values (3, 'p1', null);
insert into NGRAM_CONTENT(event_id, ref, data) values (4, 'p2', 'b data 1');
insert into NGRAM_CONTENT(event_id, ref, data) values (5, 'p2', 'b data 2');
insert into NGRAM_CONTENT(event_id, ref, data) values (6, 'p2', 'c data 1');
insert into NGRAM_CONTENT(event_id, ref, data) values (7, 'p2', 'c data 2');
insert into NGRAM(ngram, ref, event_id, name_length) values ('a', 'p1', 1, 10);
insert into NGRAM(ngram, ref, event_id, name_length) values ('a', 'p1', 2, 12);
insert into NGRAM(ngram, ref, event_id, name_length) values ('b', 'p1', 2, 13);
insert into NGRAM(ngram, ref, event_id, name_length) values ('b', 'p2', 4, 8);
insert into NGRAM(ngram, ref, event_id, name_length) values ('b', 'p2', 5, 10);
insert into NGRAM(ngram, ref, event_id, name_length) values ('c', 'p2', 6, 20);
insert into NGRAM(ngram, ref, event_id, name_length) values ('c', 'p2', 7, 50);
Here are example inputs and the desired output:
| ngram   | event_id | output     |
| 'a','b' | < 2      | 'a data 1' |
| 'a','b' | < 3      | 'a data 2' |
| 'a','b' | < 4      | null       |
| 'a'     | -        | null       |
| 'a'     | -        | null       |
| 'b'     | -        | null       |
| 'a','b' | -        | null       |
| 'b'     | < 5      | 'b data 1' |
| 'c'     | < 7      | 'c data 1' |
| 'c'     | -        | 'c data 2' |
I've got the following query that works. (I didn't mention name_length above, as it would just complicate the example.) The 1000000000 needs to be replaced with the provided event_id, and likewise the ngram values being searched:
with max_matched_event_id_and_ref_from_index as
(
-- getting max event id and ref for all the potential matches
select max(event_id) as max_matched_event_id, ref
from ngram
where name_length between 14 and 18 and
ngram in ('a', 'b')
and event_id < 1000000000
group by ref
having count(*) >= 2
),
max_current_event_id as
(
select max(event_id) as max_current_event_id
from ngram_content w
inner join max_matched_event_id_and_ref_from_index n on w.ref = n.ref
where w.event_id >= n.max_matched_event_id and event_id < 1000000000
group by n.ref
)
select nc.data
from ngram_content nc
inner join max_current_event_id m on nc.event_id = m.max_current_event_id
inner join max_matched_event_id_and_ref_from_index mi on nc.event_id = mi.max_matched_event_id;
I have about 450 million rows in the ngram table and about 55 million rows in the ngram_content table.
The current query takes over a minute to return, which is too slow for our usage.
I've got the following indexes:
CREATE INDEX combined_index ON NGRAM (ngram, name_length, ref, event_id);
CREATE INDEX idx_ref_ngram_content ON ngram_content (ref);
CREATE INDEX idx_ngram_content_event_id ON ngram_content (event_id);
And here is the detailed query plan:
Hash Join (cost=4818702.89..4820783.06 rows=1 width=365) (actual time=29201.537..29227.214 rows=15081 loops=1)
Hash Cond: (mi.max_matched_event_id = nc.event_id)
Buffers: shared hit=381223 read=342422, temp read=2204 written=2213
CTE max_matched_event_id_and_ref_from_index
-> Finalize GroupAggregate (cost=3720947.79..3795574.47 rows=87586 width=16) (actual time=19163.811..19978.720 rows=43427 loops=1)
Group Key: ngram.ref
Filter: (count(*) >= 2)
Rows Removed by Filter: 999474
Buffers: shared hit=35270 read=225113, temp read=1402 written=1410
-> Gather Merge (cost=3720947.79..3788348.60 rows=525518 width=24) (actual time=19163.620..19649.679 rows=1048271 loops=1)
Workers Planned: 2
Workers Launched: 2
Buffers: shared hit=104576 read=669841 written=1, temp read=4183 written=4207
-> Partial GroupAggregate (cost=3719947.77..3726690.76 rows=262759 width=24) (actual time=19143.782..19356.718 rows=349424 loops=3)
Group Key: ngram.ref
Buffers: shared hit=104576 read=669841 written=1, temp read=4183 written=4207
-> Sort (cost=3719947.77..3720976.62 rows=411540 width=16) (actual time=19143.770..19231.539 rows=362406 loops=3)
Sort Key: ngram.ref
Sort Method: external merge Disk: 11216kB
Worker 0: Sort Method: external merge Disk: 11160kB
Worker 1: Sort Method: external merge Disk: 11088kB
Buffers: shared hit=104576 read=669841 written=1, temp read=4183 written=4207
-> Parallel Index Only Scan using combined_index on ngram (cost=0.57..3674535.28 rows=411540 width=16) (actual time=1.122..18715.404 rows=362406 loops=3)
Index Cond: ((ngram = ANY ('{ORA,AN,MG}'::text[])) AND (name_length >= 14) AND (name_length <= 18) AND (event_id < 1000000000))
Heap Fetches: 1087219
Buffers: shared hit=104560 read=669841 written=1
CTE max_current_event_id
-> GroupAggregate (cost=1020964.39..1021403.41 rows=200 width=40) (actual time=7631.312..7674.228 rows=43427 loops=1)
Group Key: n.ref
Buffers: shared hit=174985 read=70887, temp read=1179 written=273
-> Sort (cost=1020964.39..1021110.06 rows=58270 width=40) (actual time=7631.304..7644.203 rows=71773 loops=1)
Sort Key: n.ref
Sort Method: external merge Disk: 2176kB
Buffers: shared hit=174985 read=70887, temp read=1179 written=273
-> Nested Loop (cost=0.56..1016352.18 rows=58270 width=40) (actual time=1.093..7574.448 rows=71773 loops=1)
Buffers: shared hit=174985 read=70887, temp read=907
-> CTE Scan on max_matched_event_id_and_ref_from_index n (cost=0.00..1751.72 rows=87586 width=40) (actual time=0.000..838.522 rows=43427 loops=1)
Buffers: temp read=907
-> Index Scan using idx_ref_ngram_content on ngram_content w (cost=0.56..11.57 rows=1 width=16) (actual time=0.104..0.154 rows=2 loops=43427)
Index Cond: (ref = n.ref)
Filter: ((event_id < 1000000000) AND (event_id >= n.max_matched_event_id))
Rows Removed by Filter: 0
Buffers: shared hit=174985 read=70887
-> CTE Scan on max_matched_event_id_and_ref_from_index mi (cost=0.00..1751.72 rows=87586 width=8) (actual time=19163.813..19168.081 rows=43427 loops=1)
Buffers: shared hit=35270 read=225113, temp read=495 written=1410
-> Hash (cost=1722.50..1722.50 rows=200 width=381) (actual time=10035.797..10035.797 rows=43427 loops=1)
Buckets: 32768 (originally 1024) Batches: 2 (originally 1) Memory Usage: 3915kB
Buffers: shared hit=345953 read=117309, temp read=1179 written=704
-> Nested Loop (cost=0.56..1722.50 rows=200 width=381) (actual time=7632.365..9994.328 rows=43427 loops=1)
Buffers: shared hit=345953 read=117309, temp read=1179 written=273
-> CTE Scan on max_current_event_id m (cost=0.00..4.00 rows=200 width=8) (actual time=7631.315..7695.869 rows=43427 loops=1)
Buffers: shared hit=174985 read=70887, temp read=1179 written=273
-> Index Scan using idx_ngram_content_event_id on ngram_content nc (cost=0.56..8.58 rows=1 width=373) (actual time=0.052..0.052 rows=1 loops=43427)
Index Cond: (event_id = m.max_current_event_id)
Buffers: shared hit=170968 read=46422
Planning Time: 7.872 ms
Execution Time: 29231.222 ms
(57 rows)
Any suggestions on how to optimise the query or indexes so that the query runs faster, please?
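Two observations from the plan itself that are cheap to try (a sketch; the work_mem value is a guess):
-- "Heap Fetches: 1087219" on the index-only scan means many visibility
-- checks had to go to the heap; VACUUM refreshes the visibility map:
VACUUM (ANALYZE) ngram;
-- "Sort Method: external merge Disk: ..." shows sorts spilling to disk;
-- a larger work_mem (set per session here) can keep them in memory:
SET work_mem = '64MB';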

Missing table access in PostgreSQL query plan

I have two identical tables, one with 10k rows and the second with 1M rows. I use the following script to populate them.
CREATE TABLE Table1 (
id int NOT NULL primary key,
groupby int NOT NULL,
orderby int NOT NULL,
local_search int NOT NULL,
global_search int NOT NULL,
padding varchar(100) NOT NULL
);
CREATE TABLE Table2 (
id int NOT NULL primary key,
groupby int NOT NULL,
orderby int NOT NULL,
local_search int NOT NULL,
global_search int NOT NULL,
padding varchar(100) NOT NULL
);
INSERT
INTO Table1
WITH t1 AS
(
SELECT id
FROM generate_series(1, 10000) id
), t2 AS
(
SELECT id,
id % 100 groupby
FROM t1
), t3 AS
(
SELECT b.id, b.groupby, row_number() over (partition by groupby order by id) orderby
FROM t2 b
)
SELECT id,
groupby,
orderby,
orderby % 50 local_search,
id % 1000 global_search,
RPAD('Value ' || id || ' ' , 100, '*') as padding
FROM t3;
INSERT
INTO Table2
WITH t1 AS
(
SELECT id
FROM generate_series(1, 1000000) id
), t2 AS
(
SELECT id,
id % 100 groupby
FROM t1
), t3 AS
(
SELECT b.id, b.groupby, row_number() over (partition by groupby order by id) orderby
FROM t2 b
)
SELECT id,
groupby,
orderby,
orderby % 50 local_search,
id % 1000 global_search,
RPAD('Value ' || id || ' ' , 100, '*') as padding
FROM t3;
I also created a secondary index on Table2:
CREATE INDEX ix_Table2_groupby_orderby ON Table2 (groupby, orderby);
Now, I have the following query
select b.id, b.groupby, b.orderby, b.local_search, b.global_search, b.padding
from Table2 b
join Table1 a on b.orderby = a.id
where a.global_search = 1 and b.groupby < 10;
which leads to the following query plan using explain(analyze):
"Nested Loop (cost=0.42..17787.05 rows=100 width=121) (actual time=0.056..34.722 rows=100 loops=1)"
" -> Seq Scan on table1 a (cost=0.00..318.00 rows=10 width=4) (actual time=0.033..1.313 rows=10 loops=1)"
" Filter: (global_search = 1)"
" Rows Removed by Filter: 9990"
" -> Index Scan using ix_table2_groupby_orderby on table2 b (cost=0.42..1746.81 rows=10 width=121) (actual time=0.159..3.337 rows=10 loops=10)"
" Index Cond: ((groupby < 10) AND (orderby = a.id))"
"Planning time: 0.296 ms"
"Execution time: 34.775 ms"
and my question is: how come it does not access table2 in the query plan? It uses just ix_table2_groupby_orderby, but that index contains only the groupby and orderby (and maybe id) columns. How does it get the remaining columns of Table2, and why is that not shown in the query plan?
** EDIT **
I have tried explain(verbose) as suggested by @laurenzalbe. This is the result:
"Nested Loop (cost=0.42..17787.05 rows=100 width=121) (actual time=0.070..35.678 rows=100 loops=1)"
" Output: b.id, b.groupby, b.orderby, b.local_search, b.global_search, b.padding"
" -> Seq Scan on public.table1 a (cost=0.00..318.00 rows=10 width=4) (actual time=0.031..1.642 rows=10 loops=1)"
" Output: a.id, a.groupby, a.orderby, a.local_search, a.global_search, a.padding"
" Filter: (a.global_search = 1)"
" Rows Removed by Filter: 9990"
" -> Index Scan using ix_table2_groupby_orderby on public.table2 b (cost=0.42..1746.81 rows=10 width=121) (actual time=0.159..3.398 rows=10 loops=10)"
" Output: b.id, b.groupby, b.orderby, b.local_search, b.global_search, b.padding"
" Index Cond: ((b.groupby < 10) AND (b.orderby = a.id))"
"Planning time: 16.201 ms"
"Execution time: 35.754 ms"
Actually, I do not fully understand why the access to the heap of table2 is not shown, but I accept this as an answer.
An index scan in PostgreSQL accesses not only the index, but also the table. This is not explicitly shown in the execution plan and is necessary to find out if a row is visible to the transaction or not.
Try EXPLAIN (VERBOSE) to see what columns are returned.
See the documentation for details:
All indexes in PostgreSQL are secondary indexes, meaning that each index is stored separately from the table's main data area (which is called the table's heap in PostgreSQL terminology). This means that in an ordinary index scan, each row retrieval requires fetching data from both the index and the heap.
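For contrast, a covering index can enable an index-only scan that skips the heap for most rows. A sketch (the index name is made up; INCLUDE requires PostgreSQL 11+, and the visibility map must be fresh):
CREATE INDEX ix_table2_covering ON Table2 (groupby, orderby) INCLUDE (id);
VACUUM Table2;
EXPLAIN SELECT id, groupby, orderby
FROM Table2
WHERE groupby < 10 AND orderby = 5;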

Would a partial index be used on a query?

Given this partial index:
CREATE INDEX orders_id_created_at_index
ON orders(id) WHERE created_at < '2013-12-31';
Would this query use the index?
SELECT *
FROM orders
WHERE id = 123 AND created_at = '2013-10-12';
As per the documentation, "a partial index can be used in a query only if the system can recognize that the WHERE condition of the query mathematically implies the predicate of the index".
Does that mean that the index will or will not be used?
You can check, and yes, it would be used. I've created an sql fiddle to check it, with a setup like this:
create table orders(id int, created_at date);
CREATE INDEX orders_id_created_at_index ON orders(id) WHERE created_at < '2013-12-31';
insert into orders
select
(random()*500)::int, '2013-01-01'::date + ((random() * 200)::int || ' day')::interval
from generate_series(1, 10000) as g;
SELECT * FROM orders WHERE id = 123 AND created_at = '2013-10-12';
SELECT * FROM orders WHERE id = 123 AND created_at = '2014-10-12';
sql fiddle demo
If you check the execution plans for these queries, you'll see for the first query:
Bitmap Heap Scan on orders (cost=4.39..40.06 rows=1 width=8)
  Recheck Cond: ((id = 123) AND (created_at < '2013-12-31'::date))
  Filter: (created_at = '2013-10-12'::date)
  -> Bitmap Index Scan on orders_id_created_at_index (cost=0.00..4.39 rows=19 width=0)
       Index Cond: (id = 123)
and for the second query:
Seq Scan on orders (cost=0.00..195.00 rows=1 width=8)
  Filter: ((id = 123) AND (created_at = '2014-10-12'::date))
Here created_at = '2014-10-12' does not imply the index predicate created_at < '2013-12-31', so the partial index cannot be used and the planner falls back to a sequential scan.