Tuning query using index. Best approach? - sql

I am running into a problem here. I'm using Oracle 11g and I have this query:
SELECT /*+ PARALLEL(16) */
prdecdde,
prdenusi,
prdenpol,
prdeano,
prdedtpr
FROM stat_pro_det
WHERE prdeisin IS NULL AND PRDENUSI IS NOT NULL AND prdedprv = '20160114'
GROUP BY prdecdde,
prdenusi,
prdenpol,
prdeano,
prdedtpr;
I get the next execution plan:
--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 53229 | 2287K| | 3652 (4)| 00:00:01 |
| 1 | HASH GROUP BY | | 53229 | 2287K| 3368K| 3652 (4)| 00:00:01 |
|* 2 | TABLE ACCESS BY INDEX ROWID| STAT_PRO_DET | 53229 | 2287K| | 3012 (3)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | STAT_PRO_DET_08 | 214K| | | 626 (4)| 00:00:01 |
--------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("PRDENUSI" IS NOT NULL AND "PRDEISIN" IS NULL)
3 - access("PRDEDPRV"='20160114')
Note
-----
- Degree of Parallelism is 1 because of hint
I still have a lot of CPU cost. The STAT_PRO_DET_08 index is:
CREATE INDEX STAT_PRO_DET_08 ON STAT_PRO_DET(PRDEDPRV)
I've tried adding PRDEISIN and PRDENUSI to the index, putting the most selective column first, but got worse results.
This table has 128 million records (yes... maybe we need a partitioned table), but I cannot partition it for now.
Do you have other suggestions? Could a different index get better results, or is this as good as it gets?
Thanks in advance!!!!
EDIT1:
Thanks a lot for all your help, especially #Marmite.
I have a follow-up question, adding these two queries to the subject: should I create one index for each of them, or can a single index solve the performance problem for all three queries?
SELECT /*+ PARALLEL(16) */
prdecdde,
prdenuau,
prdenpol,
prdeano,
prdedtpr
FROM stat_pro_det
WHERE prdeisin IS NULL AND PRDENUSI IS NULL AND prdedprv = '20160114'
GROUP BY prdecdde,
prdenuau,
prdenpol,
prdeano,
prdedtpr;
and
SELECT /*+ PARALLEL(16) */
prdeisin, prdenuau
FROM stat_pro_det, mtauto
WHERE prdedprv = '20160114' AND prdenuau = autonuau AND autoisin IS NULL
GROUP BY prdenuau, prdeisin

First, you might as well rewrite the query as:
SELECT /*+ PARALLEL(16) */ DISTINCT
prdecdde, prdenusi, prdenpol, prdeano, prdedtpr
FROM stat_pro_det
WHERE prdeisin IS NULL AND PRDENUSI IS NOT NULL AND prdedprv = '20160114';
(This is shorter and makes it easier to change the list of columns you are interested in.)
The best index for this query is: stat_pro_det(prdedprv, prdeisin, prdenusi, prdecdde, prdenpol, prdeano, prdedtpr).
The first three columns are important for the WHERE clause and filtering the data. The remaining columns "cover" the query, meaning that the index itself can resolve the query without having to access data pages.
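For reference, the DDL for such an index would look roughly like this (the index name stat_pro_det_cov is only an illustrative choice):
-- Covering index: filter columns first, then the selected columns.
CREATE INDEX stat_pro_det_cov ON stat_pro_det
  (prdedprv, prdeisin, prdenusi, prdecdde, prdenpol, prdeano, prdedtpr);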

First, make the following decisions:
whether to access via an index or a full table scan
whether to use parallel query or no_parallel
The general rule is that index access works fine for a small number of accessed records, but does not scale well to a high number.
So the best approach is to test all the options and see the results.
For parallel FULL TABLE SCAN
use a hint as follows (replace tab with your table name or alias):
SELECT /*+ FULL(tab) PARALLEL(16) */
This scales better, but is not instant for a small number of records.
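Applied to the query in the question, that would look something like this (the alias t is arbitrary):
SELECT /*+ FULL(t) PARALLEL(16) */
       prdecdde, prdenusi, prdenpol, prdeano, prdedtpr
FROM   stat_pro_det t
WHERE  prdeisin IS NULL
AND    prdenusi IS NOT NULL
AND    prdedprv = '20160114'
GROUP BY prdecdde, prdenusi, prdenpol, prdeano, prdedtpr;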
For index access
Note that this will not be done in parallel. Check the note in your explain plan in the question.
By defining an index containing all columns (as proposed by Gordon), you will perform a (serial) index range scan without accessing the table.
As noted above, depending on the number of accessed keys this will be quick or slow.
For parallel index access
You need to define a GLOBAL partitioned index
create index tab_idx on tab (col3,col2,col1,col4,col5)
global partition by hash (col3,col2,col1,col4,col5) PARTITIONS 16;
Then hint:
SELECT /*+ INDEX(tab tab_idx) PARALLEL_INDEX(tab,16) */
You will perform the same index range scan, but this time in parallel, so there is a chance it will respond better than serial execution. Whether you can really open DOP 16 depends, of course, on your database hardware settings and configuration...
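Translated to the table in the question, a sketch might look like this (the index name, column order and partition count are only examples to adapt and test):
-- Global hash-partitioned covering index, hashed over the key columns
-- so that entries for a single prdedprv value spread across partitions.
CREATE INDEX stat_pro_det_gp ON stat_pro_det
  (prdedprv, prdeisin, prdenusi, prdecdde, prdenpol, prdeano, prdedtpr)
  GLOBAL PARTITION BY HASH (prdedprv, prdeisin, prdenusi, prdecdde, prdenpol, prdeano, prdedtpr)
  PARTITIONS 16;

SELECT /*+ INDEX(t stat_pro_det_gp) PARALLEL_INDEX(t, 16) */
       prdecdde, prdenusi, prdenpol, prdeano, prdedtpr
FROM   stat_pro_det t
WHERE  prdeisin IS NULL
AND    prdenusi IS NOT NULL
AND    prdedprv = '20160114'
GROUP BY prdecdde, prdenusi, prdenpol, prdeano, prdedtpr;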

Related

Index is not being used by optimizer

I have a query which is performing very badly due to a full scan of a table. I have checked the statistics and rebuilt the indexes, but it's not working.
SQL Statement:
select distinct NA_DIR_EMAIL d, NA_DIR_EMAIL r
from gcr_items , gcr_deals
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
and
decode(:P55_DIRECT,'ALL','Y',trim(upper(NA_ORG_OWNER_EMAIL)))=
decode(:P55_DIRECT,'ALL','Y',trim(upper(:P55_DIRECT)))
order by 1
Execution Plan :
Plan hash value: 3180018891
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 | 00:11:42 |
| 1 | SORT ORDER BY | | 8 | 00:11:42 |
| 2 | HASH UNIQUE | | 8 | 00:11:42 |
|* 3 | HASH JOIN | | 7385 | 00:11:42 |
|* 4 | VIEW | index$_join$_002 | 10462 | 00:00:05 |
|* 5 | HASH JOIN | | | |
|* 6 | INDEX RANGE SCAN | GCR_DEALS_IDX12 | 10462 | 00:00:01 |
| 7 | INDEX FAST FULL SCAN| GCR_DEALS_IDX1 | 10462 | 00:00:06 |
|* 8 | TABLE ACCESS FULL | GCR_ITEMS | 7386 | 00:11:37 |
-------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("GCR_DEALS"."GCR_DEALS_ID"="GCR_ITEMS"."GCR_DEALS_ID")
4 - filter("GCR_DEALS"."BU_ID"=TO_NUMBER(:P0_BU_ID))
5 - access(ROWID=ROWID)
6 - access("GCR_DEALS"."BU_ID"=TO_NUMBER(:P0_BU_ID))
8 - filter(DECODE(:P55_DIRECT,'ALL','Y',TRIM(UPPER("NA_ORG_OWNER_EMAI
L")))=DECODE(:P55_DIRECT,'ALL','Y',TRIM(UPPER(:P55_DIRECT))))
To begin with, part of the condition in the WHERE clause must be decomposed (or "decompiled", or "re-engineered") into a simpler form that does not use the decode function, a form the query optimizer can understand:
AND
decode(:P55_DIRECT,'ALL','Y',trim(upper(NA_ORG_OWNER_EMAIL)))=
decode(:P55_DIRECT,'ALL','Y',trim(upper(:P55_DIRECT)))
into:
AND (
:P55_DIRECT = 'ALL'
OR
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
)
To find rows in the table based on values stored in the index, Oracle uses an access method named index scan; see this link for details:
https://docs.oracle.com/cd/B19306_01/server.102/b14211/optimops.htm#i52300
One of the most common access methods is the index range scan; see here:
https://docs.oracle.com/cd/B19306_01/server.102/b14211/optimops.htm#i45075
The documentation says (in the latter link) that:
The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions, such as the following:
col1 = :b1
col1 < :b1
col1 > :b1
AND combination of the preceding conditions for leading columns in the index
col1 like 'ASD%' (wild-card searches should not be in a leading position; the condition col1 like '%ASD' does not result in a range scan)
The above means that the optimizer can use the index to find rows only for query conditions that use basic comparison operators (=, <, >, <=, >=, LIKE) comparing simple values with plain column names. What the documentation doesn't say clearly, and what you need to deduce by reading between the lines, is that when some function is applied to a column in a condition, in the form function(column_name) or function(expression_involving_column_names), then an index range scan cannot be used. In that case the query optimizer must evaluate the expression individually for each row in the table, and therefore must read all rows (perform a full table scan).
A short conclusion and a rule of thumb: functions in the WHERE clause can prevent the optimizer from using indexes.
If you see a function somewhere in the WHERE clause, it is a sign that you are running a red light: STOP immediately and think three times about how this function impacts the query optimizer and the performance of your query, and try to rewrite the condition into a form the optimizer is able to understand.
Now take a look at our rewritten condition:
AND (
:P55_DIRECT = 'ALL'
OR
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
)
and STOP: there are still two functions, trim and upper, applied to a column named NA_ORG_OWNER_EMAIL. We need to think about how they can impact the query optimizer.
I assume that you have created a plain index on a single column: CREATE INDEX somename ON GCR_ITEMS( NA_ORG_OWNER_EMAIL ). If so, the index contains only plain values of NA_ORG_OWNER_EMAIL.
But the query is trying to find trim(upper(NA_ORG_OWNER_EMAIL)) values, which are not stored in the index, so this index cannot be used in this case.
This condition requires a function based index:
https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_indexes.htm#ADFNS00505
CREATE INDEX somename ON GCR_ITEMS( trim( upper( NA_ORG_OWNER_EMAIL )))
Unfortunately, even the function-based index will still not help, because the condition in the query is too general: if the value of :P55_DIRECT is 'ALL', the query must retrieve all rows from the table (perform a full table scan); otherwise it must use the index to search for the value.
This is because the query is planned (think of it as "compiled") by the query optimizer only once, during its first execution. The plan is then stored in the cache and reused for all further executions. The value of the parameter is not known in advance, so the plan must cover every possible case, and will therefore always perform a full table scan.
In 12c there is a new feature, "Adaptive Query Optimization":
https://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL94982
where the query optimizer analyzes the query's parameters on each run, can detect that the plan is not optimal for some runtime parameter values, and can choose better "subplans" depending on the actual values... but you must use 12c and additionally pay for Enterprise Edition, because only that edition includes the feature. And it's still not certain whether an adaptive plan would help in this case.
What you can do without paying for 12c EE is to DIVIDE this general query into two separate variants, one for the case where :P55_DIRECT = 'ALL' and the other for the remaining cases, and run the appropriate variant from the client (your application) depending on the value of this parameter.
A version for :P55_DIRECT = 'ALL', which will perform a full table scan:
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
order by 1
and a version for the other cases, which will use the function-based index (the full statement is sketched below):
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
and
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
order by 1
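Written out in full, the second variant might look roughly like this (the SELECT list is taken from the original query, and it assumes the function-based index above has been created):
SELECT DISTINCT NA_DIR_EMAIL d, NA_DIR_EMAIL r
FROM gcr_items, gcr_deals
WHERE gcr_deals.GCR_DEALS_ID = gcr_items.GCR_DEALS_ID
  AND gcr_deals.bu_id = :P0_BU_ID
  AND trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
ORDER BY 1;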

PostgreSQL Index not used when data rows are large

Hi, I'm curious why the index is not used once the number of data rows grows, even to just 100.
Here's the select with 10 rows of data:
mydb> explain select * from data where user_id=1;
+-----------------------------------------------------------------------------------+
| QUERY PLAN |
|-----------------------------------------------------------------------------------|
| Index Scan using ix_data_user_id on data (cost=0.14..8.15 rows=1 width=2043) |
| Index Cond: (user_id = 1) |
+-----------------------------------------------------------------------------------+
EXPLAIN
Here's the select with 100 rows of data:
mydb> explain select * from data where user_id=1;
+------------------------------------------------------------+
| QUERY PLAN |
|------------------------------------------------------------|
| Seq Scan on data (cost=0.00..44.67 rows=1414 width=945) |
| Filter: (user_id = 1) |
+------------------------------------------------------------+
EXPLAIN
How can I get the index to be used when there are 100 rows?
100 is not a large amount of data. Think 10,000 or 100,000 rows for a respectable amount.
To put it simply, records in a table are stored on data pages. A data page typically has about 8k bytes (it depends on the database and on settings). A major purpose of indexes is to reduce the number of data pages that need to be read.
If all the records in a table fit on one page, there is no need to reduce the number of pages being read: the one page will be read regardless. Hence, the index may not be particularly useful.
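If you want to confirm that this is a cost-based choice rather than the index being unusable, you can temporarily discourage sequential scans in a session and re-run EXPLAIN (a diagnostic sketch only, not something to leave enabled):
-- Diagnostic only: make sequential scans look expensive to the planner.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM data WHERE user_id = 1;
-- Restore the default behaviour afterwards.
RESET enable_seqscan;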

Adding Postgres index to one table locks up another

I look after a single Postgres 9.3.3 (Amazon RDS instance: db.m3.2xlarge), which is the back-end of a system that logs incoming statistics and provides reports based on those data - yes, from the same DB node.
Performance is generally very good, but upon adding an extra index on table R to improve reporting performance, logging performance collapsed, as both INSERTs and UPDATEs on a different table L used by the logging process immediately began to lock - seemingly on one another, according to pg_locks, although no deadlocks were reported. Immediately, all available connections (according to pg_stat_activity) locked in the same way, DB CPU rose quickly to 100%. The logger's load-balancer took all of its nodes out of use, but as the INSERTs and UPDATEs refused to complete or to time out, all connections stayed locked.
Note that this isn't a problem during index creation, only during usage. Nor is this an issue of load: throttling logging by 90% and starting the system completely afresh again immediately locked it up. No reporting whatsoever was happening at the same time.
Dropping the R index immediately releases all L locks.
I create the index with:
CREATE INDEX idxForGroup ON R (group,article_id,month);
where the columns are:
'group' type: VARCHAR(64) defaultValue: "" nullable: false
'month' type: TIMESTAMP nullable: false
'article_id' type: BIGINT defaultValue: 0 nullable: false
There is already a composite primary key, of which the above is just a subset:
customer_resource_id (a FK), subtype (a VARCHAR), group, article_id, month
I should add that there is a relationship between R and L: a trigger updates the reporting table R based upon updates to L:
CREATE TRIGGER on_event_report AFTER INSERT OR UPDATE ON L FOR EACH ROW EXECUTE PROCEDURE resource_event_trigger();
I accept that adding any index imposes a small (microseconds?) cost/load, but there are already indexes on R, so I don't understand how a 'little' extra indexing on R could have such a huge impact as to cause lockups for L.
Update:
If I investigate the L queries that are getting locked:
EXPLAIN (analyze,buffers) update L set count=count+1 where customer_resource_id=911657 and item_type_id='type' and event_subtype='subtype' and reporting_date='2014-04-13 00:00:00' AND group='';
Update on L (cost=0.57..20.18 rows=5 width=49) (actual time=70.968..70.968 rows=0 loops=1)
Buffers: shared hit=170 read=16 dirtied=15
-> Index Scan using L_pkey on L (cost=0.57..20.18 rows=5 width=49) (actual time=0.067..0.525 rows=19 loops=1)
Index Cond: ((customer_resource_id = 911657) AND ((group)::text = ''::text) AND ((item_type_id)::text = 'type'::text) AND ((event_subtype)::text = 'subtype'::text) AND (article_id = 0))
Buffers: shared hit=24
Trigger on_L: time=11626.219 calls=19 <---
Total runtime: 11697.285 ms
So, you'd think the trigger that updates R must be the problem - and yet when I EXPLAIN the trigger queries, they all check out fine: indexes hit, no scans, etc.
Update 2:
Not sure if this is really a locking issue, or just a massive performance degradation, but here's pg_locks with the index present:
SELECT mode, granted, COUNT(*) FROM pg_locks GROUP BY mode, granted;
mode | granted | count
------------------+---------+-------
AccessShareLock | t | 24715
ExclusiveLock | t | 1504
ExclusiveLock | f | 138
RowExclusiveLock | t | 5901
RowShareLock | t | 185
ShareLock | f | 95
Drop the index, and within seconds:
mode | count
-----------------+-------
ExclusiveLock | 3
AccessShareLock | 31
Update 3:
Here's the source of the trigger on the logging table L that updates the reporting table R:
CREATE OR REPLACE FUNCTION resource_event_trigger()
RETURNS TRIGGER AS $$
DECLARE
cre_row R%ROWTYPE;
delta INTEGER;
BEGIN
SELECT * INTO cre_row FROM R cre WHERE cre.customer_resource_id = NEW.customer_resource_id AND cre.group = NEW.group_id AND cre.subtype = NEW.event_subtype AND cre.date = date_trunc('month', NEW.date) AND cre.article_id = NEW.article_id;
IF cre_row IS null THEN
INSERT INTO R (customer_resource_id, group, subtype, article_id, date) VALUES (NEW.customer_resource_id, NEW.group_id, NEW.event_subtype, NEW.article_id, date_trunc('month', NEW.date));
END IF;
IF TG_OP = 'INSERT' THEN
delta = NEW.event_count;
ELSE
delta = NEW.event_count - OLD.event_count;
END IF;
CASE
WHEN NEW.item_type_id = 'typeA' THEN
UPDATE R SET count_A = count_A + delta WHERE customer_resource_id = NEW.customer_resource_id AND group = NEW.group_id AND subtype = NEW.event_subtype AND article_id = NEW.article_id AND date = date_trunc('month', NEW.date);
[...]
END CASE;
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
It's long-ish, but pretty straightforward. When 'EXPLAIN'ed individually, all the individual queries use primary keys / indexes, use few buffers, etc.
Update 4:
If I examine the created index, I notice:
SELECT tablename, attname, n_distinct, correlation from pg_stats where tablename='R' AND attname IN ('group','article_id','date','customer_resource_id','subtype') ORDER BY attname;
tablename | attname | n_distinct | correlation
-----------+----------------------+------------+-------------
R | article_id | 25886 | 0.756468
R | group | 165 | 0.227023
R | customer_resource_id | -0.304469 | 0.729134
R | date | 53 | 0.943593
R | subtype | 2 | 0.657429
... which looks plausible. And if I look at cardinality I get:
SELECT relname, relkind, reltuples as cardinality, relpages FROM pg_class where relkind='i' [...] order by relname;
relname | relkind | cardinality | relpages
-------------+---------+-------------+----------
R_pkey | i | 2.69955e+07 | 293035
idxForGroup | i | 2.70333e+07 | 134149
L_pkey | i | 7.14889e+07 | 771581
Both the PK and the newly added index have values that are almost the same as the row count which, again, should be fine...
Well, while I can't say EXACTLY what your problem may be, it does obviously seem to hinge on this new index. It would certainly be easier if I could look thoroughly through things, but I will take a couple of shots in the dark.
Indexes and Postgres performance are a big subject, but fundamentally there are a few things I see that could be wrong and that you should check:
When changing or adding an index on a table, the query optimizer (which fires milliseconds before the query runs and evaluates how best to execute it) of course looks at the table in a different way. It sees: "Hey, there is a new index here and maybe this is better than the old index I was using", and in some cases the query optimizer can be wrong.
And so instead of using a past index that was working just fine, the optimizer then starts doing full table scans or something stupid. This is in an extreme case, but could of course occur.
The other thing is that you may be dealing with a lot of "dead rows". Whenever you do an update to a table, it creates a "dead row" and inserts a new one (your updated row). The "dead row" doesn't really go anywhere, and over time this can bloat your table and make the Postgres query optimizer go a little haywire trying to understand it.
One time I had a table that just went CRAZY like this and I could not, for the life of me, understand why it was going SO SLOW. I then looked at the table statistics and saw hundreds of thousands of dead rows. I took a shot in the dark and just dropped and re-created the table (albeit that takes some work on a live database) and magically everything worked completely fine after that (NOTHING was changed in the table structure or the data in the table, other than that the dead rows were completely released when dropping that instance of the table).
While that is a bit extreme, what I would do immediately is:
a) run VACUUM on the table, which does a pretty good job of getting the "dead rows" out of the way;
b) then run ANALYZE on the table. This resets the table's summarized statistics, which is where the optimizer gets a lot of the data it uses to determine how best to query the table.
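For example, assuming R and L stand for your real table names, the statements are simply:
-- Reclaim dead-row space and refresh planner statistics in one pass.
VACUUM ANALYZE R;
VACUUM ANALYZE L;
-- Check how many dead rows remain (use your actual table names).
SELECT relname, n_dead_tup FROM pg_stat_user_tables;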

Vertica and joins

I'm adapting a web analysis tool to use Vertica as the DB. I'm having real problems optimizing joins. I tried creating pre-join projections for some of my queries, and while it did make the queries blazing fast, it slowed data loading into the fact table to a crawl.
A simple INSERT INTO ... SELECT * FROM which we use to load data into the fact table from a staging table goes from taking ~5 seconds to taking 20+ minutes.
Because of this I dropped all pre-join projections and tried using the Database Designer to design query specific projections but it's not enough. Even with those projections a simple join is taking ~14 seconds, something that takes ~1 second with a pre-join projection.
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
We're running Vertica on a 5 node cluster, each node having 2 x quad core CPU and 32 GB of memory. The tables in my example query have 188,843,085 and 25,712,878 rows respectively.
The EXPLAIN output looks like this:
EXPLAIN SELECT referer_via_.url as referralPageUrl, COUNT(DISTINCT session.id) as visits
FROM owa_session as session
JOIN owa_referer AS referer_via_ ON session.referer_id = referer_via_.id
WHERE session.yyyymmdd BETWEEN '20121123' AND '20121123'
AND session.site_id = '49'
GROUP BY referer_via_.url
ORDER BY visits DESC LIMIT 250;
Access Path:
+-SELECT LIMIT 250 [Cost: 1M, Rows: 250 (STALE STATISTICS)] (PATH ID: 0)
| Output Only: 250 tuples
| Execute on: Query Initiator
| +---> SORT [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 1)
| | Order: count(DISTINCT "session".id) DESC
| | Output Only: 250 tuples
| | Execute on: All Nodes
| | +---> GROUPBY PIPELINED (RESEGMENT GROUPS) [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 2)
| | | Aggregates: count(DISTINCT "session".id)
| | | Group By: referer_via_.url
| | | Execute on: All Nodes
| | | +---> GROUPBY HASH (SORT OUTPUT) (RESEGMENT GROUPS) [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 3)
| | | | Group By: referer_via_.url, "session".id
| | | | Execute on: All Nodes
| | | | +---> JOIN HASH [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 4) Outer (RESEGMENT)
| | | | | Join Cond: ("session".referer_id = referer_via_.id)
| | | | | Execute on: All Nodes
| | | | | +-- Outer -> STORAGE ACCESS for session [Cost: 463, Rows: 1 (STALE STATISTICS)] (PUSHED GROUPING) (PATH ID: 5)
| | | | | | Projection: public.owa_session_projection
| | | | | | Materialize: "session".id, "session".referer_id
| | | | | | Filter: ("session".site_id = '49')
| | | | | | Filter: (("session".yyyymmdd >= 20121123) AND ("session".yyyymmdd <= 20121123))
| | | | | | Execute on: All Nodes
| | | | | +-- Inner -> STORAGE ACCESS for referer_via_ [Cost: 293K, Rows: 26M] (PATH ID: 6)
| | | | | | Projection: public.owa_referer_DBD_1_seg_Potency_20121122_Potency_20121122
| | | | | | Materialize: referer_via_.id, referer_via_.url
| | | | | | Execute on: All Nodes
To speed up the join:
Design the session table as partitioned on column "yyyymmdd". This will enable partition pruning.
Add a condition on column "yyyymmdd" to referer_via_ and partition on it, if that is possible (most likely not).
Have column site_id as close as possible to the beginning of the ORDER BY list in the (super)projection of session that is used.
Have both tables segmented on referer_id and id respectively (a rough sketch of these points follows below).
And having more nodes in the cluster does help.
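A rough sketch of those points (projection names and column lists are illustrative; adapt them to your schema and verify against Database Designer's output):
-- Partition the fact table by day so the yyyymmdd filter can prune partitions.
ALTER TABLE owa_session PARTITION BY yyyymmdd;

-- Projection for owa_session: site_id early in the sort order,
-- segmented on the join key referer_id.
CREATE PROJECTION owa_session_join_p AS
SELECT id, referer_id, site_id, yyyymmdd
FROM owa_session
ORDER BY site_id, yyyymmdd, referer_id
SEGMENTED BY HASH(referer_id) ALL NODES;

-- Matching projection for owa_referer, segmented on its id.
CREATE PROJECTION owa_referer_join_p AS
SELECT id, url
FROM owa_referer
ORDER BY id
SEGMENTED BY HASH(id) ALL NODES;

-- Populate the new projections.
SELECT START_REFRESH();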
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
I guess the amount affected would vary depending on data sets and structures you are working with. But, since this is the variable you changed, I believe it is safe to say the pre-join projection is causing the slowness. You are gaining query time at the expense of insertion time.
Someone please correct me if any of the following is wrong; I'm going by memory and by information picked up in conversations with others.
You can speed up your joins without a pre-join projection in a few ways. In this case, the join key is the referrer ID. I believe that segmenting the projections for both tables on the join predicate would help, as would anything you can do to filter the data.
Looking at your explain plan, you are doing a hash join instead of a merge join, which you probably want to look at.
Lastly, I would like to know via the explain plan or through system tables if your query is actually using the projections Database Designer has recommended. If not, explicitly specify them in your query and see if that helps.
You seem to have a lot of STALE STATISTICS.
Responding to stale statistics is important, because that is why your queries are slow: without statistics about the underlying data, Vertica's query optimizer cannot choose the best execution plan. Note that fixing stale statistics only improves SELECT performance, not update performance.
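A sketch of refreshing the statistics with the standard meta-function (schema and table names assumed from the plan above):
-- Collect fresh statistics so the optimizer stops reporting STALE STATISTICS.
SELECT ANALYZE_STATISTICS('public.owa_session');
SELECT ANALYZE_STATISTICS('public.owa_referer');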
If you update your tables regularly, do remember there are additional things you have to consider in Vertica. Please check the answer that I posted to this question.
I hope that helps improve your update speed.
Explore the AHM settings as explained in that answer. If you don't need to be able to select deleted rows in a table later, it is often a good idea not to keep them around: there are ways to keep only the latest epoch version of the data, or you can manually purge deleted data.
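If you do decide to purge, the relevant meta-functions look roughly like this (run with care; whether this is appropriate depends on your recovery and historical-query needs):
-- Advance the ancient history mark so deleted rows become purgeable.
SELECT MAKE_AHM_NOW();
-- Physically remove deleted rows from a specific table.
SELECT PURGE_TABLE('public.owa_session');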
Let me know how it goes.
I think your query could stand to be more explicit. Also, don't use that devil BETWEEN. Try this:
EXPLAIN SELECT
referer_via_.url as referralPageUrl,
COUNT(DISTINCT session.id) as visits
FROM owa_session as session
JOIN owa_referer AS referer_via_
ON session.referer_id = referer_via_.id
WHERE session.yyyymmdd >= '20121123'
AND session.yyyymmdd <= '20121123'
AND session.site_id = '49'
GROUP BY referer_via_.url
-- this `visits` column needs a table name
ORDER BY visits DESC LIMIT 250;
I'll say I'm really perplexed as to why you would use the same date on both sides of BETWEEN; you may want to look into that.
This is my view coming from an academic background working with column databases, including Vertica (I am a recent PhD graduate in database systems).
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
Yes, updating projections is very slow and you should ideally do it only in large batches to amortize the update cost. The fundamental reason is that each projection represents another copy of the data (of each table column that is part of the projection).
A single row insert requires adding one value (one attribute) to each column in the projection. For example, a single row insert into a table with 20 attributes requires at least 20 column updates. To make things worse, each column is sorted and compressed. This means that inserting the new value into a column requires multiple operations on large chunks of data: read data / decompress / update / sort / compress data / write data back. Vertica has several optimizations for updates but cannot completely hide the cost.
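In practice that means loading in large batches rather than row by row, for example with COPY or with a direct-load hint on the INSERT ... SELECT from the staging table (a sketch; the table names are placeholders and the exact options depend on your Vertica version):
-- Bulk-load a file straight into ROS containers.
COPY fact_table FROM '/data/batch.csv' DELIMITER ',' DIRECT;
-- Or load from a staging table in one large batch.
INSERT /*+ DIRECT */ INTO fact_table SELECT * FROM staging_table;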
Projections can be thought of as the equivalent of multi-column indexes in a traditional row store (MySQL, PostgreSQL, Oracle, etc.). The upside of projections versus traditional B-tree indexes is that reading them (using them to answer a query) is much faster than using traditional indexes. The reasons are multiple: no need to access heap data as with non-clustered indexes, smaller size due to compression, etc. The flip side is that they are way more difficult to update. Trade-offs...

Optimize postgresql query

I have two tables in PostgreSQL 9.1: flight_2012_09_12, containing approx 500,000 rows, and position_2012_09_12, containing about 5.5 million rows. I'm running a simple join query and it's taking a long time to complete, and despite the fact that the tables aren't small, I'm convinced there are major gains to be made in the execution.
The query is:
SELECT f.departure, f.arrival,
p.callsign, p.flightkey, p.time, p.lat, p.lon, p.altitude_ft, p.speed
FROM position_2012_09_12 AS p
JOIN flight_2012_09_12 AS f
ON p.flightkey = f.flightkey
WHERE p.lon < 0
AND p.time BETWEEN '2012-9-12 0:0:0' AND '2012-9-12 23:0:0'
The output of explain analyze is:
Hash Join (cost=239891.03..470396.82 rows=4790498 width=51) (actual time=29203.830..45777.193 rows=4403717 loops=1)
Hash Cond: (f.flightkey = p.flightkey)
-> Seq Scan on flight_2012_09_12 f (cost=0.00..1934.31 rows=70631 width=12) (actual time=0.014..220.494 rows=70631 loops=1)
-> Hash (cost=158415.97..158415.97 rows=3916885 width=43) (actual time=29201.012..29201.012 rows=3950815 loops=1)
Buckets: 2048 Batches: 512 (originally 256) Memory Usage: 1025kB
-> Seq Scan on position_2012_09_12 p (cost=0.00..158415.97 rows=3916885 width=43) (actual time=0.006..14630.058 rows=3950815 loops=1)
Filter: ((lon < 0::double precision) AND ("time" >= '2012-09-12 00:00:00'::timestamp without time zone) AND ("time" <= '2012-09-12 23:00:00'::timestamp without time zone))
Total runtime: 58522.767 ms
I think the problem lies with the sequential scan on the position table but I can't figure out why it's there. The table structures with indexes are below:
Table "public.flight_2012_09_12"
Column | Type | Modifiers
--------------------+-----------------------------+-----------
callsign | character varying(8) |
flightkey | integer |
source | character varying(16) |
departure | character varying(4) |
arrival | character varying(4) |
original_etd | timestamp without time zone |
original_eta | timestamp without time zone |
enroute | boolean |
etd | timestamp without time zone |
eta | timestamp without time zone |
equipment | character varying(6) |
diverted | timestamp without time zone |
time | timestamp without time zone |
lat | double precision |
lon | double precision |
altitude | character varying(7) |
altitude_ft | integer |
speed | character varying(4) |
asdi_acid | character varying(4) |
enroute_eta | timestamp without time zone |
enroute_eta_source | character varying(1) |
Indexes:
"flight_2012_09_12_flightkey_idx" btree (flightkey)
"idx_2012_09_12_altitude_ft" btree (altitude_ft)
"idx_2012_09_12_arrival" btree (arrival)
"idx_2012_09_12_callsign" btree (callsign)
"idx_2012_09_12_departure" btree (departure)
"idx_2012_09_12_diverted" btree (diverted)
"idx_2012_09_12_enroute_eta" btree (enroute_eta)
"idx_2012_09_12_equipment" btree (equipment)
"idx_2012_09_12_etd" btree (etd)
"idx_2012_09_12_lat" btree (lat)
"idx_2012_09_12_lon" btree (lon)
"idx_2012_09_12_original_eta" btree (original_eta)
"idx_2012_09_12_original_etd" btree (original_etd)
"idx_2012_09_12_speed" btree (speed)
"idx_2012_09_12_time" btree ("time")
Table "public.position_2012_09_12"
Column | Type | Modifiers
-------------+-----------------------------+-----------
callsign | character varying(8) |
flightkey | integer |
time | timestamp without time zone |
lat | double precision |
lon | double precision |
altitude | character varying(7) |
altitude_ft | integer |
course | integer |
speed | character varying(4) |
trackerkey | integer |
the_geom | geometry |
Indexes:
"index_2012_09_12_altitude_ft" btree (altitude_ft)
"index_2012_09_12_callsign" btree (callsign)
"index_2012_09_12_course" btree (course)
"index_2012_09_12_flightkey" btree (flightkey)
"index_2012_09_12_speed" btree (speed)
"index_2012_09_12_time" btree ("time")
"position_2012_09_12_flightkey_idx" btree (flightkey)
"test_index" btree (lon)
"test_index_lat" btree (lat)
I can't think of any other way to rewrite the query, so I'm stumped at this point. If the current setup is as good as it gets, so be it, but it seems to me that it should be much faster than it currently is. Any help would be much appreciated.
The row count estimates are pretty reasonable, so I doubt this is a stats issue.
I'd try the following (a sketch of the first three follows the list):
Creating an index on position_2012_09_12(lon,"time"), or possibly a partial index on position_2012_09_12("time") WHERE (lon < 0) if you routinely search for lon < 0.
Setting random_page_cost lower, maybe 1.1. See if (a) this changes the plan and (b) the new plan is actually faster. For testing purposes, to see whether avoiding a seqscan would be faster, you can SET enable_seqscan = off; if it is, change the cost parameters.
Increasing work_mem for this query: SET work_mem = '10MB' or so before running it.
Running the latest PostgreSQL if you aren't already. Always specify your PostgreSQL version in questions. (Update after edit): You're on 9.1; that's fine. The biggest performance improvement in 9.2 was index-only scans, and it doesn't seem likely that you'd benefit massively from index-only scans for this query.
You'll also somewhat improve performance if you can get rid of columns to narrow the rows. It won't make tons of difference, but it'll make some.
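A sketch of the first three suggestions (index names are arbitrary; tune the values to your hardware):
-- Composite index for the lon/time filter...
CREATE INDEX position_2012_09_12_lon_time_idx
    ON position_2012_09_12 (lon, "time");
-- ...or a partial index if you routinely filter on lon < 0.
CREATE INDEX position_2012_09_12_time_west_idx
    ON position_2012_09_12 ("time") WHERE lon < 0;

-- Session-level experiments before re-running EXPLAIN ANALYZE.
SET random_page_cost = 1.1;
SET work_mem = '10MB';
SET enable_seqscan = off;   -- testing only, to see what an index plan costs
-- (re-run the query with EXPLAIN ANALYZE here)
RESET enable_seqscan;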
The reason you are getting a sequential scan is that Postgres believes it will read fewer disk pages that way than by using indexes. It is probably right. Consider: if you use a non-covering index, you need to read all the matching index pages, which essentially yields a list of row identifiers. The DB engine then needs to read each of the matching data pages.
Your position table uses 71 bytes per row, plus whatever a geom type takes (I'll assume 16 bytes for illustration), making 87 bytes. A Postgres page is 8192 bytes, so you have approximately 90 rows per page.
Your query matches 3,950,815 out of 5,563,070 rows, or about 70% of the total. Assuming the data is randomly distributed with regard to your WHERE filters, there is pretty much a 30% ^ 90 chance of finding a data page with no matching row, which is essentially nothing. So regardless of how good your indexes are, you're still going to have to read all the data pages. If you're going to have to read all the pages anyway, a table scan is usually a good approach.
The one get-out here is that I said non-covering index. If you are prepared to create indexes that can answer queries in and of themselves, you can avoid looking up the data pages at all, so you are back in the game. I'd suggest the following are worth looking at:
flight_2012_09_12 (flightkey, departure, arrival)
position_2012_09_12 (flightkey, time, lon, ...)
position_2012_09_12 (lon, time, flightkey, ...)
position_2012_09_12 (time, lon, flightkey, ...)
The dots here represent the rest of the columns you are selecting. You'll only need one of the indexes on position, but it's hard to tell which will prove the best. The first approach may permit a merge join on presorted data, at the cost of reading the whole second index to do the filtering. The second and third allow the data to be prefiltered, but require a hash join. Given how much of the cost appears to be in the hash join, the merge join might well be a good option (one of these is written out below).
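For example, the second suggestion spelled out with the columns this query actually selects (the index name is illustrative):
CREATE INDEX position_2012_09_12_cover_idx
    ON position_2012_09_12 (flightkey, "time", lon, callsign, lat, altitude_ft, speed);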
As your query requires 52 of the 87 bytes per row, and indexes have overheads, you may not end up with the index taking much, if any, less space than the table itself.
Another approach is to attack the "randomly distributed" side of it, by looking at clustering.
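For the clustering idea, PostgreSQL can physically reorder the table along an index, which concentrates matching rows onto fewer pages. It is a one-off operation that takes an exclusive lock on the table, so this is only a sketch of what it would look like using one of the existing indexes:
-- Physically reorder the table by time so a day's rows sit on contiguous pages.
CLUSTER position_2012_09_12 USING index_2012_09_12_time;
-- Refresh planner statistics afterwards.
ANALYZE position_2012_09_12;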