Adding Postgres index to one table locks up another

I look after a single Postgres 9.3.3 database (an Amazon RDS instance: db.m3.2xlarge) which is the back-end of a system that logs incoming statistics and provides reports based on those data - yes, from the same DB node.
Performance is generally very good, but when I added an extra index on table R to improve reporting performance, logging performance collapsed: both INSERTs and UPDATEs on a different table L, used by the logging process, immediately began to lock - seemingly on one another, according to pg_locks, although no deadlocks were reported. Within moments, all available connections (according to pg_stat_activity) were locked in the same way, and DB CPU rose quickly to 100%. The logger's load-balancer took all of its nodes out of use, but because the INSERTs and UPDATEs neither completed nor timed out, all connections stayed locked.
Note that this isn't a problem during index creation, only during usage. Nor is it an issue of load: throttling logging by 90% and restarting the system completely afresh locked it up again immediately. No reporting whatsoever was happening at the time.
Dropping the R index immediately releases all L locks.
I create the index with:
CREATE INDEX idxForGroup ON R (group,article_id,month);
where the columns are:
'group' type: VARCHAR(64) defaultValue: "" nullable: false
'month' type: TIMESTAMP nullable: false
'article_id' type: BIGINT defaultValue: 0 nullable: false
There is already a composite primary key, of which the above is just a subset:
customer_resource_id (a FK), subtype (a VARCHAR), group, article_id, month
I should add that there is a relationship between R and L: a trigger updates the reporting table R based upon updates to L:
CREATE TRIGGER on_event_report AFTER INSERT OR UPDATE ON L FOR EACH ROW EXECUTE PROCEDURE resource_event_trigger();
I accept that adding any index imposes a small (microseconds?) cost/load, but there are already indexes on R, so I don't understand how a 'little' extra indexing on R could have such a huge impact as to cause lockups for L.
Update:
If I investigate the L queries that are getting locked:
EXPLAIN (analyze,buffers) update L set count=count+1 where customer_resource_id=911657 and item_type_id='type' and event_subtype='subtype' and reporting_date='2014-04-13 00:00:00' AND group='';
Update on L (cost=0.57..20.18 rows=5 width=49) (actual time=70.968..70.968 rows=0 loops=1)
Buffers: shared hit=170 read=16 dirtied=15
-> Index Scan using L_pkey on L (cost=0.57..20.18 rows=5 width=49) (actual time=0.067..0.525 rows=19 loops=1)
Index Cond: ((customer_resource_id = 911657) AND ((group)::text = ''::text) AND ((item_type_id)::text = 'type'::text) AND ((event_subtype)::text = 'subtype'::text) AND (article_id = 0))
Buffers: shared hit=24
Trigger on_L: time=11626.219 calls=19 <---
Total runtime: 11697.285 ms
So, you'd think the trigger that updates R must be the problem - and yet when I EXPLAIN the trigger queries, they all check out fine: indexes hit, no scans, etc.
Update 2:
Not sure if this is really a locking issue, or just a massive performance degradation, but here's pg_locks with the index present:
SELECT mode, granted, COUNT(*) FROM pg_locks GROUP BY mode, granted;
mode | granted | count
------------------+---------+-------
AccessShareLock | t | 24715
ExclusiveLock | t | 1504
ExclusiveLock | f | 138
RowExclusiveLock | t | 5901
RowShareLock | t | 185
ShareLock | f | 95
Drop the index, and within seconds:
mode | count
-----------------+-------
ExclusiveLock | 3
AccessShareLock | 31
Update 3:
Here's the source of the trigger on the logging table L that updates the reporting table R:
CREATE OR REPLACE FUNCTION resource_event_trigger()
RETURNS TRIGGER AS $$
DECLARE
cre_row R%ROWTYPE;
delta INTEGER;
BEGIN
SELECT * INTO cre_row FROM R cre WHERE cre.customer_resource_id = NEW.customer_resource_id AND cre.group = NEW.group_id AND cre.subtype = NEW.event_subtype AND cre.date = date_trunc('month', NEW.date) AND cre.article_id = NEW.article_id;
IF cre_row IS null THEN
INSERT INTO R (customer_resource_id, group, subtype, article_id, date) VALUES (NEW.customer_resource_id, NEW.group_id, NEW.event_subtype, NEW.article_id, date_trunc('month', NEW.date));
END IF;
IF TG_OP = 'INSERT' THEN
delta = NEW.event_count;
ELSE
delta = NEW.event_count - OLD.event_count;
END IF;
CASE
WHEN NEW.item_type_id = 'typeA' THEN
UPDATE R SET count_A = count_A + delta WHERE customer_resource_id = NEW.customer_resource_id AND group = NEW.group_id AND subtype = NEW.event_subtype AND article_id = NEW.article_id AND date = date_trunc('month', NEW.date);
[...]
END CASE;
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
It's long-ish, but pretty straightforward. When EXPLAINed individually, the queries all use primary keys/indexes, touch few buffers, etc.
Update 4:
If I examine the created index, I notice:
SELECT tablename, attname, n_distinct, correlation from pg_stats where tablename='R' AND attname IN ('group','article_id','date','customer_resource_id','subtype') ORDER BY attname;
tablename | attname | n_distinct | correlation
-----------+----------------------+------------+-------------
R | article_id | 25886 | 0.756468
R | group | 165 | 0.227023
R | customer_resource_id | -0.304469 | 0.729134
R | date | 53 | 0.943593
R | subtype | 2 | 0.657429
... which looks plausible. And if I look at cardinality I get:
SELECT relname, relkind, reltuples as cardinality, relpages FROM pg_class where relkind='i' [...] order by relname;
relname | relkind | cardinality | relpages
-------------+---------+-------------+----------
R_pkey | i | 2.69955e+07 | 293035
idxForGroup | i | 2.70333e+07 | 134149
L_pkey | i | 7.14889e+07 | 771581
Both the PK and the newly added index have values that are almost the same as the row count which, again, should be fine...

Well, while I can't say EXACTLY what your problem is, it does obviously seem to hinge on this new index. It would certainly be easier if I could look through things thoroughly, but I'll offer a couple of shots in the dark.
Indexes and Postgres performance are a big subject, but fundamentally there are a few things I can see that could be wrong and that you should check:
When changing or adding an index on a table, the query optimizer (which fires milliseconds before the query runs and evaluates how best to execute it) naturally looks at the table in a different way. It sees: "Hey, there is a new index here - maybe it is better than the old index I was using" - and in some cases the query optimizer can be wrong.
So instead of using a past index that was working just fine, the optimizer starts doing full table scans or something equally silly. That is an extreme case, but it can of course occur.
The other thing is that you may be dealing with a lot of "dead rows". Whenever you do an update on a table, it marks the old row as a "dead row" and inserts a new one (your updated row). The dead rows don't really go anywhere; they can bloat your table, and the Postgres query optimizer can go a little haywire trying to account for them.
One time I had a table that just went CRAZY like this and I could not, for the life of me, understand why it was going SO SLOW. I then looked at the table statistics and saw hundreds of thousands of dead rows. I took a shot in the dark and simply dropped and re-created the table (albeit that takes some work on a live database) and magically everything worked completely fine after that (NOTHING was changed in the table structure or the data in the table - other than that the dead rows were completely released when dropping that instance of the table).
While that is a bit extreme, what I would do immediately is (see the sketch below):
a) run VACUUM on the table, which does a pretty good job of getting the "dead rows" out of the way;
b) then run ANALYZE on the table. This rebuilds the table's summary statistics, which is where the optimizer gets much of the data it uses to decide how best to query the table.
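A minimal sketch of both steps (R is the anonymized reporting-table name from the question; substitute your real table):
VACUUM R;           -- reclaims dead rows; plain VACUUM takes no exclusive lock, so it is safe on a live table
ANALYZE R;          -- refreshes the planner's statistics for the table
-- or both in one statement:
VACUUM ANALYZE R;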

Related

Efficiency problem querying postgresql table

I have the following PostgreSQL table with about 67 million rows, which stores the EOD prices for all the US stocks starting in 1985:
Table "public.eods"
Column | Type | Collation | Nullable | Default
--------+-----------------------+-----------+----------+---------
stk | character varying(16) | | not null |
dt | date | | not null |
o | integer | | not null |
hi | integer | | not null |
lo | integer | | not null |
c | integer | | not null |
v | integer | | |
Indexes:
"eods_pkey" PRIMARY KEY, btree (stk, dt)
"eods_dt_idx" btree (dt)
I would like to query efficiently the table above based on either the stock name or the date. The primary key of the table is stock name and date. I have also defined an index on the date column, hoping to improve performance for queries that retrieve all the records for a specific date.
Unfortunately, I see a big difference in performance for the queries below. While getting all the records for a specific stock takes a decent amount of time to complete (2 seconds), getting all the records for a specific date takes much longer (about 56 seconds). I have tried to analyze these queries using explain analyze, and I have got the results below:
explain analyze select * from eods where stk='MSFT';
QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on eods (cost=169.53..17899.61 rows=4770 width=36) (actual time=207.218..2142.215 rows=8364 loops=1)
Recheck Cond: ((stk)::text = 'MSFT'::text)
Heap Blocks: exact=367
-> Bitmap Index Scan on eods_pkey (cost=0.00..168.34 rows=4770 width=0) (actual time=187.844..187.844 rows=8364 loops=1)
Index Cond: ((stk)::text = 'MSFT'::text)
Planning Time: 577.906 ms
Execution Time: 2143.101 ms
(7 rows)
explain analyze select * from eods where dt='2010-02-22';
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------------------
Index Scan using eods_dt_idx on eods (cost=0.56..25886.45 rows=7556 width=36) (actual time=40.047..56963.769 rows=8143 loops=1)
Index Cond: (dt = '2010-02-22'::date)
Planning Time: 67.876 ms
Execution Time: 56970.499 ms
(4 rows)
I really cannot understand why the second query runs 28 times slower than the first. They retrieve a similar number of records, and both seem to be using an index. So could somebody please explain this difference in performance, and can I do something to improve the queries that retrieve all the records for a specific date?
I would guess that this has to do with the data layout. I am guessing that you are loading the data by stk, so the rows for a given stk are on a handful of pages that pretty much only contain that stk.
So, the execution engine is only reading about 25 pages.
On the other hand, no single page contains two records for the same date. When you read by date, you have to read about 7,556 pages. That is, about 300 times the number of pages.
The scaling must also take into account the work for loading and reading the index. This should be about the same for the two queries, so the ratio is less than a factor of 300.
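If the layout is indeed the cause, one way to confirm and fix it - a hedged sketch, not something to run casually, since CLUSTER takes an ACCESS EXCLUSIVE lock and rewrites the whole table - is to physically reorder the table by date:
CLUSTER eods USING eods_dt_idx;   -- rewrite the table in dt order
ANALYZE eods;                     -- refresh statistics, including the dt correlation
After this, fetching one date should touch far fewer pages, at the cost of scattering each stk across pages.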
There can be more than one issue, so it is hard to say exactly where the problem is. An index scan should usually be faster than a bitmap heap scan; if it is not, then the following problems are candidates:
an unhealthy index - try running REINDEX INDEX indexname
bad statistics - try running ANALYZE tablename
a suboptimal state of the table - try running VACUUM tablename
a too-low or too-high setting of effective_cache_size
I/O issues - some systems have problems with heavy random I/O; try increasing random_page_cost
Investigating the cause is a little bit of alchemy, but it is possible: there is only a closed set of likely issues. A good start is:
VACUUM ANALYZE tablename
benchmark your I/O if possible (e.g. with bonnie++); see the sketch after this list
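A minimal sequence covering those checks, using the table and index names from the question (the random_page_cost value is only an illustrative assumption for fast storage - tune it to your own I/O subsystem):
VACUUM ANALYZE eods;          -- clean up dead rows and refresh statistics
REINDEX INDEX eods_dt_idx;    -- rebuild the index in case it is bloated
SET random_page_cost = 1.5;   -- hypothetical value; the default is 4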
To find the difference, you'll probably have to run EXPLAIN (ANALYZE, BUFFERS) on the query so that you see how many blocks are touched and where they come from.
I can think of two reasons:
Bad statistics that make PostgreSQL believe that dt has a high correlation when it does not. If the correlation is low, a bitmap index scan is often more efficient.
To see if that is the problem, run
ANALYZE eods;
and see if that changes the execution plans chosen.
Caching effects: perhaps the first query finds all required blocks already cached, while the second doesn't.
At any rate, it might be worth experimenting to see if a bitmap index scan would be cheaper for the second query:
SET enable_indexscan = off;
Then repeat the query.
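Putting both experiments together, a sketch using the slow query from the question:
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM eods WHERE dt = '2010-02-22';   -- baseline plan and buffer counts
SET enable_indexscan = off;   -- session-local; steers the planner toward a bitmap scan
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM eods WHERE dt = '2010-02-22';   -- compare plans and timings
RESET enable_indexscan;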

Index is not being used by optimizer

I have a query which is performing very badly due to a full scan of a table. I have checked the statistics and rebuilt the indexes, but it is not working.
SQL Statement:
select distinct NA_DIR_EMAIL d, NA_DIR_EMAIL r
from gcr_items , gcr_deals
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
and
decode(:P55_DIRECT,'ALL','Y',trim(upper(NA_ORG_OWNER_EMAIL)))=
decode(:P55_DIRECT,'ALL','Y',trim(upper(:P55_DIRECT)))
order by 1
Execution Plan :
Plan hash value: 3180018891
-------------------------------------------------------------------------
| Id | Operation | Name | Rows | Time |
-------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 8 | 00:11:42 |
| 1 | SORT ORDER BY | | 8 | 00:11:42 |
| 2 | HASH UNIQUE | | 8 | 00:11:42 |
|* 3 | HASH JOIN | | 7385 | 00:11:42 |
|* 4 | VIEW | index$_join$_002 | 10462 | 00:00:05 |
|* 5 | HASH JOIN | | | |
|* 6 | INDEX RANGE SCAN | GCR_DEALS_IDX12 | 10462 | 00:00:01 |
| 7 | INDEX FAST FULL SCAN| GCR_DEALS_IDX1 | 10462 | 00:00:06 |
|* 8 | TABLE ACCESS FULL | GCR_ITEMS | 7386 | 00:11:37 |
-------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
3 - access("GCR_DEALS"."GCR_DEALS_ID"="GCR_ITEMS"."GCR_DEALS_ID")
4 - filter("GCR_DEALS"."BU_ID"=TO_NUMBER(:P0_BU_ID))
5 - access(ROWID=ROWID)
6 - access("GCR_DEALS"."BU_ID"=TO_NUMBER(:P0_BU_ID))
8 - filter(DECODE(:P55_DIRECT,'ALL','Y',TRIM(UPPER("NA_ORG_OWNER_EMAI
L")))=DECODE(:P55_DIRECT,'ALL','Y',TRIM(UPPER(:P55_DIRECT))))
First, part of the condition in the WHERE clause must be decomposed (or "decompiled", or "re-engineered") into a simpler form that does not use the DECODE function - a form the query optimizer can understand:
AND
decode(:P55_DIRECT,'ALL','Y',trim(upper(NA_ORG_OWNER_EMAIL)))=
decode(:P55_DIRECT,'ALL','Y',trim(upper(:P55_DIRECT)))
into:
AND (
:P55_DIRECT = 'ALL'
OR
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
)
To find rows in the table based on values stored in the index, Oracle uses an access method named index scan; see this link for details:
https://docs.oracle.com/cd/B19306_01/server.102/b14211/optimops.htm#i52300
One of the most common access methods is the index range scan; see here:
https://docs.oracle.com/cd/B19306_01/server.102/b14211/optimops.htm#i45075
The documentation says (in the latter link) that:
The optimizer uses a range scan when it finds one or more leading columns of an index specified in conditions, such as the following:
col1 = :b1
col1 < :b1
col1 > :b1
AND combination of the preceding conditions for leading columns in the index
col1 LIKE 'ASD%' (wild-card searches should not be in a leading position; the condition col1 LIKE '%ASD' does not result in a range scan).
The above means that the optimizer is able to use the index to find rows only for query conditions that contain the basic comparison operators: = < > <= >= LIKE, used to compare plain column names with simple values. What the documentation doesn't clearly say - you need to deduce it by reading between the lines - is that when some function is used in the condition, in the form function( column_name ) or function( expression_involving_column_names ), then an index range scan cannot be used. In that case the database must evaluate the expression individually for each row in the table, and thus must read all rows (perform a full table scan).
A short conclusion and a rule of thumb:
Functions in the WHERE clause can prevent the optimizer from using indexes.
If you see a function somewhere in the WHERE clause, it is a sign that you are running a red light: STOP immediately and think three times about how this function impacts the query optimizer and the performance of your query, and try to rewrite the condition into a form the optimizer is able to understand.
Now take a look at our rewritten condition:
AND (
:P55_DIRECT = 'ALL'
OR
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
)
and STOP - there are still two functions, trim and upper, applied to a column named NA_ORG_OWNER_EMAIL. We need to think about how they can impact the query optimizer.
I assume that you have created a plain index on a single column: CREATE INDEX somename ON GCR_ITEMS( NA_ORG_OWNER_EMAIL ). If so, then the index contains only the plain values of NA_ORG_OWNER_EMAIL.
But the query is trying to find trim(upper(NA_ORG_OWNER_EMAIL)) values, which are not stored in the index, so this index cannot be used in this case.
This condition requires a function based index:
https://docs.oracle.com/cd/E11882_01/appdev.112/e41502/adfns_indexes.htm#ADFNS00505
CREATE INDEX somename ON GCR_ITEMS( trim( upper( NA_ORG_OWNER_EMAIL )))
Unfortunately even the function-based index will still not help, because the condition in the query is too general: if the value of :P55_DIRECT is 'ALL' the query must retrieve all rows from the table (perform a full table scan), otherwise it must use the index to search for the value.
This is because the query is planned (think of it as "compiled") by the query optimizer only once, during its first execution. The plan is then stored in the cache and reused for all further executions. The parameter's value is not known in advance, so the plan must cover every possible case, and will therefore always perform a full table scan.
In 12c there is a new feature, "adaptive query optimization":
https://docs.oracle.com/database/121/TGSQL/tgsql_optcncpt.htm#TGSQL94982
where the query optimizer analyses the query's parameters on each run, is able to detect that the plan is not optimal for some runtime parameter values, and can choose better "subplans" depending on the actual values ... but you must use 12c, and additionally pay for Enterprise Edition, because only that edition includes the feature. And it is still not certain whether the adaptive plan would work in this case or not.
What you can do without paying for 12c EE is to DIVIDE this general query into two separate variants: one for the case where :P55_DIRECT = 'ALL', and another for the remaining cases, and run the appropriate variant from the client (your application) depending on the value of this parameter.
A version for :P55_DIRECT = ALL, that will perform a full table scan
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
order by 1
and a version for other cases, that will use the function based index:
where gcr_deals.GCR_DEALS_ID=gcr_items.GCR_DEALS_ID
and
gcr_deals.bu_id=:P0_BU_ID
and
trim(upper(:P55_DIRECT)) = trim(upper(NA_ORG_OWNER_EMAIL))
order by 1
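For clarity, here is a hedged sketch of both complete variants, assembled from the original SELECT list and the rewritten conditions above (column, table and bind names are all taken from the question):
-- Variant 1: run when :P55_DIRECT = 'ALL'; a full scan of GCR_ITEMS is unavoidable here
SELECT DISTINCT NA_DIR_EMAIL d, NA_DIR_EMAIL r
FROM gcr_items, gcr_deals
WHERE gcr_deals.GCR_DEALS_ID = gcr_items.GCR_DEALS_ID
AND gcr_deals.bu_id = :P0_BU_ID
ORDER BY 1;
-- Variant 2: run for all other values; can use the function-based index
SELECT DISTINCT NA_DIR_EMAIL d, NA_DIR_EMAIL r
FROM gcr_items, gcr_deals
WHERE gcr_deals.GCR_DEALS_ID = gcr_items.GCR_DEALS_ID
AND gcr_deals.bu_id = :P0_BU_ID
AND trim(upper(NA_ORG_OWNER_EMAIL)) = trim(upper(:P55_DIRECT))
ORDER BY 1;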

Determine Oracle query execution time and proposed datasize without actually executing query

In Oracle, is there any way to determine how long a SQL query will take to fetch all of its records, and what the size of the result will be, without actually executing it and waiting for the entire result?
I repeatedly have to download data and provide it to users using normal Oracle SQL SELECTs (not Data Pump, import, etc.). Sometimes the rows number in the millions.
The actual run time cannot be known unless you run the query, but you can try to estimate it.
First you can do an EXPLAIN PLAN (explain only) - this will NOT run the query; based on your current statistics it will show you, more or less, how it would be executed.
This will not reflect the actual time and effort needed to read the data from data blocks. Consider:
do you have a large block size?
is the schema normalized/de-normalized for querying/reporting?
how large is a row - does it fit in a single block, so only one fetch is needed?
the number of rows you are expecting
the amount of data multiplied by your network latency
Based on all this you can try to estimate the time.
This requires good statistics, EXPLAIN PLAN FOR ..., adjusting sys.aux_stats$, and then adjusting your expectations.
Good statistics: The explain plan estimates are based on optimizer statistics. Make sure that tables and indexes have up-to-date statistics. On 11g this usually means sticking with the default settings and tasks, and only manually gathering statistics after large data loads.
Explain plan for ...: Use a statement like this to create and store the explain plan for any SQL statement. This even works for creating indexes and tables.
explain plan set statement_id = 'SOME_UNIQUE_STRING' for
select * from dba_tables cross join dba_tables;
This is usually the best way to visualize an explain plan:
select * from table(dbms_xplan.display);
Plan hash value: 2788227900
-------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Time |
-------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 12M| 5452M| 00:00:19 |
|* 1 | HASH JOIN RIGHT OUTER | | 12M| 5452M| 00:00:19 |
| 2 | TABLE ACCESS FULL | SEG$ | 7116 | 319K| 00:00:01 |
...
The raw data is stored in PLAN_TABLE. The first row of the plan usually sums up the estimates for the other steps:
select cardinality, bytes, time
from plan_table
where statement_id = 'SOME_UNIQUE_STRING'
and id = 0;
CARDINALITY BYTES TIME
12934699 5717136958 19
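A small follow-up query (same hypothetical statement_id) turns those raw numbers into a rough answer to the original question - estimated rows, megabytes, and seconds:
SELECT cardinality AS est_rows,
       ROUND(bytes / 1024 / 1024) AS est_mb,
       time AS est_seconds
FROM plan_table
WHERE statement_id = 'SOME_UNIQUE_STRING'
AND id = 0;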
Adjust sys.aux_stats$: The time estimate is based on system statistics stored in sys.aux_stats$. These are numbers for metrics like CPU speed, single-block I/O read time, etc. For example, on my system:
select * from sys.aux_stats$ order by sname
SNAME PNAME PVAL1 PVAL2
SYSSTATS_INFO DSTART 09-11-2014 11:18
SYSSTATS_INFO DSTOP 09-11-2014 11:18
SYSSTATS_INFO FLAGS 1
SYSSTATS_INFO STATUS COMPLETED
SYSSTATS_MAIN CPUSPEED
SYSSTATS_MAIN CPUSPEEDNW 3201.10192837466
SYSSTATS_MAIN IOSEEKTIM 10
SYSSTATS_MAIN IOTFRSPEED 4096
SYSSTATS_MAIN MAXTHR
SYSSTATS_MAIN MBRC
SYSSTATS_MAIN MREADTIM
SYSSTATS_MAIN SLAVETHR
SYSSTATS_MAIN SREADTIM
The numbers can be automatically gathered by dbms_stats.gather_system_stats. They can also be manually modified. It's a SYS table but relatively safe to modify. Create some sample queries, compare the estimated time with the actual time, and adjust the numbers until they match, as sketched below.
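A hedged sketch of gathering and then hand-tuning two of those metrics (the millisecond values are purely illustrative; calibrate them against queries whose real run time you already know):
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
EXEC DBMS_STATS.SET_SYSTEM_STATS('SREADTIM', 8);   -- single-block read time in ms (hypothetical)
EXEC DBMS_STATS.SET_SYSTEM_STATS('MREADTIM', 30);  -- multi-block read time in ms (hypothetical)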
Discover you probably wasted a lot of time
Predicting run time is theoretically impossible to get right in all cases, and in practice it is horribly difficult to forecast for non-trivial queries. Jonathan Lewis wrote a whole book about those predictions, and that book only covers the "basics".
Complex explain plans are typically "good enough" for tuning even if the estimates are off by one or two orders of magnitude. But that level of accuracy is typically not good enough to show to a user, or to use for making any important decisions.

Vertica and joins

I'm adapting a web analysis tool to use Vertica as the DB. I'm having real problems optimizing joins. I tried creating pre-join projections for some of my queries, and while it did make the queries blazing fast, it slowed data loading into the fact table to a crawl.
A simple INSERT INTO ... SELECT * FROM which we use to load data into the fact table from a staging table goes from taking ~5 seconds to taking 20+ minutes.
Because of this I dropped all pre-join projections and tried using the Database Designer to design query specific projections but it's not enough. Even with those projections a simple join is taking ~14 seconds, something that takes ~1 second with a pre-join projection.
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
We're running Vertica on a 5 node cluster, each node having 2 x quad core CPU and 32 GB of memory. The tables in my example query have 188,843,085 and 25,712,878 rows respectively.
The EXPLAIN output looks like this:
EXPLAIN SELECT referer_via_.url as referralPageUrl, COUNT(DISTINCT session.id) as visits
FROM owa_session as session
JOIN owa_referer AS referer_via_ ON session.referer_id = referer_via_.id
WHERE session.yyyymmdd BETWEEN '20121123' AND '20121123'
AND session.site_id = '49'
GROUP BY referer_via_.url
ORDER BY visits DESC
LIMIT 250;
Access Path:
+-SELECT LIMIT 250 [Cost: 1M, Rows: 250 (STALE STATISTICS)] (PATH ID: 0)
| Output Only: 250 tuples
| Execute on: Query Initiator
| +---> SORT [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 1)
| | Order: count(DISTINCT "session".id) DESC
| | Output Only: 250 tuples
| | Execute on: All Nodes
| | +---> GROUPBY PIPELINED (RESEGMENT GROUPS) [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 2)
| | | Aggregates: count(DISTINCT "session".id)
| | | Group By: referer_via_.url
| | | Execute on: All Nodes
| | | +---> GROUPBY HASH (SORT OUTPUT) (RESEGMENT GROUPS) [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 3)
| | | | Group By: referer_via_.url, "session".id
| | | | Execute on: All Nodes
| | | | +---> JOIN HASH [Cost: 1M, Rows: 1 (STALE STATISTICS)] (PATH ID: 4) Outer (RESEGMENT)
| | | | | Join Cond: ("session".referer_id = referer_via_.id)
| | | | | Execute on: All Nodes
| | | | | +-- Outer -> STORAGE ACCESS for session [Cost: 463, Rows: 1 (STALE STATISTICS)] (PUSHED GROUPING) (PATH ID: 5)
| | | | | | Projection: public.owa_session_projection
| | | | | | Materialize: "session".id, "session".referer_id
| | | | | | Filter: ("session".site_id = '49')
| | | | | | Filter: (("session".yyyymmdd >= 20121123) AND ("session".yyyymmdd <= 20121123))
| | | | | | Execute on: All Nodes
| | | | | +-- Inner -> STORAGE ACCESS for referer_via_ [Cost: 293K, Rows: 26M] (PATH ID: 6)
| | | | | | Projection: public.owa_referer_DBD_1_seg_Potency_20121122_Potency_20121122
| | | | | | Materialize: referer_via_.id, referer_via_.url
| | | | | | Execute on: All Nodes
To speed up the join:
Design the session table as partitioned on column "yyyymmdd". This will enable partition pruning.
Add a condition on column "yyyymmdd" to referer_via_ and partition on it too, if that is possible (most likely it is not).
Have column site_id as close as possible to the beginning of the ORDER BY list in the (super)projection of session that is used.
Have both tables segmented on referer_id and id correspondingly - see the sketch below.
And having more nodes in the cluster does help.
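As a hedged illustration of the last two points, a projection sketch in Vertica DDL (the projection name is hypothetical and the column list is reconstructed from the query in the question; treat this as a starting point, not a definitive design):
CREATE PROJECTION owa_session_join_p AS
SELECT id, referer_id, site_id, yyyymmdd
FROM owa_session
ORDER BY site_id, yyyymmdd, referer_id   -- site_id near the front of the sort order
SEGMENTED BY HASH(referer_id) ALL NODES; -- segment on the join key
A matching projection on owa_referer would be SEGMENTED BY HASH(id) ALL NODES, so that joining rows are co-located on the same node.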
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
I guess the amount affected would vary depending on data sets and structures you are working with. But, since this is the variable you changed, I believe it is safe to say the pre-join projection is causing the slowness. You are gaining query time at the expense of insertion time.
Someone please correct me if any of the following is wrong. I'm going by memory and by information picked up with conversations with others.
You can speed up your joins without a pre-join projection in a few ways. In this case the join key is the referrer ID. I believe that segmenting the projections of both tables on the join predicate would help, as would anything you can do to filter the data.
Looking at your explain plan, you are doing a hash join instead of a merge join, which you probably want to look at.
Lastly, I would like to know via the explain plan or through system tables if your query is actually using the projections Database Designer has recommended. If not, explicitly specify them in your query and see if that helps.
You seem to have a lot of STALE STATISTICS.
Responding to stale statistics is important, because that is the reason why your queries are slow: without statistics about the underlying data, Vertica's query optimizer cannot choose the best execution plan. Note that refreshing statistics only improves SELECT performance, not update performance.
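In Vertica, refreshing statistics means calling ANALYZE_STATISTICS; a minimal example with the two tables from the question (assuming they live in the public schema, as the plan output suggests):
SELECT ANALYZE_STATISTICS('public.owa_session');
SELECT ANALYZE_STATISTICS('public.owa_referer');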
If you update your tables regularly, do remember that there are additional things you have to consider in Vertica. Please check the answer that I posted to this question.
I hope that should help improve your update speed.
Explore the AHM settings as explained in that answer. If you don't need to be able to select deleted rows in a table later, it is often a good idea to not keep them around. There are ways to keep only the latest epoch version of the data. Or manually purge deleted data.
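A hedged sketch of the manual route (both are built-in Vertica functions; use with care, since purged rows are gone for good):
SELECT MAKE_AHM_NOW();                      -- advance the Ancient History Mark
SELECT PURGE_TABLE('public.owa_session');   -- physically remove deleted rows from the table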
Let me know how it goes.
I think your query could stand to be more explicit. Also, don't use that devil BETWEEN. Try this:
EXPLAIN SELECT
referer_via_.url as referralPageUrl,
COUNT(DISTINCT session.id) as visits
FROM owa_session as session
JOIN owa_referer AS referer_via_
ON session.referer_id = referer_via_.id
WHERE session.yyyymmdd >= '20121123'
AND session.yyyymmdd <= '20121123'
AND session.site_id = '49'
GROUP BY referer_via_.url
-- this `visits` column needs a table name
ORDER BY visits DESC LIMIT 250;
I'll say I'm really perplexed as to why you would use the same date on both sides of BETWEEN - you may want to look into that.
This is my view coming from an academic background working with column databases, including Vertica (I am a recent PhD graduate in database systems).
My question is this: Is it normal for a pre-join projection to slow data insertion this much and if not, what could be the culprit? If it is normal, then it's a show stopper for us and are there other techniques we could use to speed up the joins?
Yes, updating projections is very slow and you should ideally do it only in large batches to amortize the update cost. The fundamental reason is that each projection represents another copy of the data (of each table column that is part of the projection).
A single row insert requires adding one value (one attribute) to each column in the projection. For example, a single row insert in a table with 20 attributes requires at least 20 column updates. To make things worse, each column is sorted and compressed. This means that inserting the new value in a column requires multiple operations on large chunks of data: read data / decompress / update / sort / compress data / write data back. Vertica has several optimization for updates but cannot hide completely the cost.
Projections can be thought of as the equivalent of multi-column indexes in a traditional row store (MySQL, PostgreSQL, Oracle, etc.). The upside of projections versus traditional B-tree indexes is that reading them (using them to answer a query) is much faster than using traditional indexes. The reasons are multiple: no need to access heap data as with non-clustered indexes, smaller size due to compression, etc. The flip side is that they are far more difficult to update. Tradeoffs...

Oracle <> , != , ^= operators

I want to know the difference between those operators, mainly their performance difference.
I have had a look at Difference between <> and != in SQL, it has no performance related information.
Then I found this on dba-oracle.com,
it suggests that from 10.2 onwards the performance can be quite different.
I wonder why? Does != always perform better than <>?
NOTE: Our tests, and performance on the live system, show that changing from <> to != has a big impact on the time queries take to return. I am here to ask WHY this is happening, not whether the operators are the same. I know they are semantically identical, but in reality they perform differently.
I have tested the performance of the different syntax for the not equal operator in Oracle. I have tried to eliminate all outside influence to the test.
I am using an 11.2.0.3 database. No other sessions are connected and the database was restarted before commencing the tests.
A schema was created with a single table and a sequence for the primary key
CREATE TABLE loadtest.load_test (
id NUMBER NOT NULL,
a VARCHAR2(1) NOT NULL,
n NUMBER(2) NOT NULL,
t TIMESTAMP NOT NULL
);
CREATE SEQUENCE loadtest.load_test_seq
START WITH 0
MINVALUE 0;
The table was indexed to improve the performance of the query.
ALTER TABLE loadtest.load_test
ADD CONSTRAINT pk_load_test
PRIMARY KEY (id)
USING INDEX;
CREATE INDEX loadtest.load_test_i1
ON loadtest.load_test (a, n);
Ten million rows were added to the table using the sequence, SYSDATE for the timestamp and random data via DBMS_RANDOM (A-Z) and (0-99) for the other two fields.
SELECT COUNT(*) FROM load_test;
COUNT(*)
----------
10000000
1 row selected.
The schema was analysed to provide good statistics.
EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'LOADTEST', estimate_percent => NULL, cascade => TRUE);
The three simple queries are:-
SELECT a, COUNT(*) FROM load_test WHERE n <> 5 GROUP BY a ORDER BY a;
SELECT a, COUNT(*) FROM load_test WHERE n != 5 GROUP BY a ORDER BY a;
SELECT a, COUNT(*) FROM load_test WHERE n ^= 5 GROUP BY a ORDER BY a;
These are exactly the same with the exception of the syntax for the not equals operator (not just <> and != but also ^= )
First each query is run without collecting the result in order to eliminate the effect of caching.
Next timing and autotrace were switched on to gather both the actual run time of the query and the execution plan.
SET TIMING ON
SET AUTOTRACE TRACE
Now the queries are run in turn. First up is <>
> SELECT a, COUNT(*) FROM load_test WHERE n <> 5 GROUP BY a ORDER BY a;
26 rows selected.
Elapsed: 00:00:02.12
Execution Plan
----------------------------------------------------------
Plan hash value: 2978325580
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 26 | 130 | 6626 (9)| 00:01:20 |
| 1 | SORT GROUP BY | | 26 | 130 | 6626 (9)| 00:01:20 |
|* 2 | INDEX FAST FULL SCAN| LOAD_TEST_I1 | 9898K| 47M| 6132 (2)| 00:01:14 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N"<>5)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
22376 consistent gets
22353 physical reads
0 redo size
751 bytes sent via SQL*Net to client
459 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
26 rows processed
Next !=
> SELECT a, COUNT(*) FROM load_test WHERE n != 5 GROUP BY a ORDER BY a;
26 rows selected.
Elapsed: 00:00:02.13
Execution Plan
----------------------------------------------------------
Plan hash value: 2978325580
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 26 | 130 | 6626 (9)| 00:01:20 |
| 1 | SORT GROUP BY | | 26 | 130 | 6626 (9)| 00:01:20 |
|* 2 | INDEX FAST FULL SCAN| LOAD_TEST_I1 | 9898K| 47M| 6132 (2)| 00:01:14 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N"<>5)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
22376 consistent gets
22353 physical reads
0 redo size
751 bytes sent via SQL*Net to client
459 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
26 rows processed
Lastly ^=
> SELECT a, COUNT(*) FROM load_test WHERE n ^= 5 GROUP BY a ORDER BY a;
26 rows selected.
Elapsed: 00:00:02.10
Execution Plan
----------------------------------------------------------
Plan hash value: 2978325580
--------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 26 | 130 | 6626 (9)| 00:01:20 |
| 1 | SORT GROUP BY | | 26 | 130 | 6626 (9)| 00:01:20 |
|* 2 | INDEX FAST FULL SCAN| LOAD_TEST_I1 | 9898K| 47M| 6132 (2)| 00:01:14 |
--------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
2 - filter("N"<>5)
Statistics
----------------------------------------------------------
0 recursive calls
0 db block gets
22376 consistent gets
22353 physical reads
0 redo size
751 bytes sent via SQL*Net to client
459 bytes received via SQL*Net from client
3 SQL*Net roundtrips to/from client
1 sorts (memory)
0 sorts (disk)
26 rows processed
The execution plans for the three queries are identical, and the timings were 2.12, 2.13 and 2.10 seconds.
It should be noted that whichever syntax is used in the query, the execution plan always displays <>.
The tests were repeated ten times for each operator syntax. These are the timings:-
<>
2.09
2.13
2.12
2.10
2.07
2.09
2.10
2.13
2.13
2.10
!=
2.09
2.10
2.12
2.10
2.15
2.10
2.12
2.10
2.10
2.12
^=
2.09
2.16
2.10
2.09
2.07
2.16
2.12
2.12
2.09
2.07
Whilst there is some variance of a few hundredths of a second, it is not significant. The results for each of the three syntax choices are the same.
The syntax choices are parsed, optimised and returned with the same effort in the same time. There is therefore no perceivable benefit from using one over another in this test.
"Ah BC", you say, "in my tests I believe there is a real difference and you can not prove it otherwise".
Yes, I say, that is perfectly true. You have not shown your tests, query, data or results. So I have nothing to say about your results. I have shown that, with all other things being equal, it doesn't matter which syntax you use.
"So why do I see that one is better in my tests?"
Good question. There are several possibilities:
Your testing is flawed (you did not eliminate outside factors - other workload, caching, etc. - and you have given no information from which we can make an informed decision).
Your query is a special case (show me the query and we can discuss it).
Your data is a special case (Perhaps - but how - we don't see that either).
There is some other outside influence.
I have shown via a documented and repeatable process that there is no benefit to using one syntax over another. I believe that <>, != and ^= are synonymous.
If you believe otherwise fine, so
a) show a documented example that I can try myself
and
b) use the syntax which you think is best. If I am correct and there is no difference it won't matter. If you are correct then cool, you have an improvement for very little work.
"But Burleson said it was better and I trust him more than you, Faroult, Lewis, Kyte and all those other bums."
Did he say it was better? I don't think so. He didn't provide any definitive example, test or result, but only linked to someone saying that != was better and then quoted part of their post.
Show don't tell.
You reference the article on the Burleson site. Did you follow the link to the Oracle-L archive? And did you read the other emails replying to the email Burleson cites?
I don't think you did, otherwise you wouldn't have asked this question. Because there is no fundamental difference between != and <>. The original observation was almost certainly a fluke brought about by ambient conditions in the database. Read the responses from Jonathan Lewis and Stephane Faroult to understand more.
" Respect is not something a programmer need to have, its the basic
attitude any human being should have"
Up to a point. When we meet a stranger in the street then of course we should be courteous and treat them with respect.
But if that stranger wants me to design my database application in a specific way to "improve performance" then they should have a convincing explanation and some bulletproof test cases to back it up. An isolated anecdote from some random individual is not enough.
The writer of the article, although a book author and the purveyor of some useful information, does not have a good reputation for accuracy. In this case the article was merely a mention of one person's observations on a well-known Oracle mailing list. If you read through the responses you will see the assumptions of the post challenged, but no presumption of accuracy. Here are some excerpts:
Try running your query through explain plan (or autotrace) and see
what that says...
According to this, "!=" is considered to be the same as "<>"...
Jonathan Lewis
Jonathan Lewis is a well respected expert in the Oracle community.
Just out of curiosity... Does the query optimizer generate a different
execution plan for the two queries? Regards, Chris
.
Might it be bind variable peeking in action? The certain effect of
writing != instead of <> is to force a re-parse. If at the first
execution the values for :id were different and if you have an
histogram on claws_doc_id it could be a reason. And if you tell me
that claws_doc_id is the primary key, then I'll ask you what is the
purpose of counting, in particular when the query in the EXISTS clause
is uncorrelated with the outer query and will return the same result
whatever :id is. Looks like a polling query. The code surrounding it
must be interesting.
Stéphane Faroult
.
I'm pretty sure the lexical parse converts either != to <> or <> to
!=, but I'm not sure whether that affects whether the sql text will
match a stored outline.
.
Do the explain plans look the same? Same costs?
The following response is from the original poster.
Jonathan, Thank you for your answer. We did do an explain plan on
both versions of the statement and they were identical, which is what
is so puzzling about this. According to the documentation, the two
forms of not equal are the same (along with ^= and one other that I
can't type), so it makes no sense to me why there is any difference in
performance.
Scott Canaan
.
Not an all inclusive little test but it appears at least in 10.1.0.2
it gets pared into a "<>" for either (notice the filter line for each
plan)
.
Do you have any Stored Outline ? Stored Outlines do exact (literal)
matches so if you have one Stored Outline for, say, the SQL with a
"!=" and none for the SQL with a "<>" (or a vice versa), the Stored
Outline might be using hints ? (although, come to think of it, your
EXPLAIN PLAN should have shown the hints if executing a Stored Outline
?)
.
Have you tried going beyond just explain & autotrace and running a
full 10046 level 12 trace to see where the slower version is spending
its time? This might shed some light on the subject, plus - be sure
to verify that the explain plans are exactly the same in the 10046
trace file (not the ones generated with the EXPLAIN= option), and in
v$sqlplan. There are some "features" of autotrace and explain that
can cause it to not give you an accurate explain plan.
Regards, Brandon
.
Is the phenomenon totally reproducible ?
Did you check the filter_predicates and access_predicates of the plan,
or just the structure. I don't expect any difference, but a change in
predicate order can result in a significant change in CPU usage if you
are unlucky.
If there is no difference there, then enable rowsource statistics
(alter session set "_rowsource_execution_statistics"=true) and run the
queries, then grab the execution plan from V$sql_plan and join to
v$sql_plan_statistics to see if any of the figures about last_starts,
last_XXX_buffer_gets, last_disk_reads, last_elapsed_time give you a
clue about where the time went.
If you are on 10gR2 there is a /*+ gather_plan_statistics */ hint you
can use instead of the "alter session".
Regards Jonathan Lewis
At this point the thread dies and we see no further posts from the original poster, which leads me to believe that either the OP discovered an assumption they had made that was not true or did no further investigation.
I will also point out that if you do an explain plan or autotrace, you will see that the comparison is always displayed as <>.
Here is some test code. Increase the number of loop iterations if you like. You may see one side or the other come out higher depending on other activity on the server, but in no way will you see one operator come out consistently better than the other.
DROP TABLE t1;
DROP TABLE t2;
CREATE TABLE t1 AS (SELECT level c1 FROM dual CONNECT BY level <=144000);
CREATE TABLE t2 AS (SELECT level c1 FROM dual CONNECT BY level <=144000);
SET SERVEROUTPUT ON FORMAT WRAPPED
DECLARE
vStart Date;
vTotalA Number(10) := 0;
vTotalB Number(10) := 0;
vResult Number(10);
BEGIN
For vLoop In 1..10 Loop
vStart := sysdate;
For vLoop2 In 1..2000 Loop
SELECT count(*) INTO vResult FROM t1 WHERE t1.c1 = 777 AND EXISTS
(SELECT 1 FROM t2 WHERE t2.c1 <> 0);
End Loop;
vTotalA := vTotalA + ((sysdate - vStart)*24*60*60);
vStart := sysdate;
For vLoop2 In 1..2000 Loop
SELECT count(*) INTO vResult FROM t1 WHERE t1.c1 = 777 AND EXISTS
(SELECT 1 FROM t2 WHERE t2.c1 != 0);
End Loop;
vTotalB := vTotalB + ((sysdate - vStart)*24*60*60);
DBMS_Output.Put_Line('Total <>: ' || RPAD(vTotalA,8) || '!=: ' || vTotalB);
vTotalA := 0;
vTotalB := 0;
End Loop;
END;
/
A Programmer will use !=
A DBA will use <>
If there is a different execution plan, it may be that there are differences in the query cache or statistics for each notation. But I don't really think that is so.
Edit:
What I mean above: in complex databases there can be strange side effects. I don't know Oracle well enough, but I think there is a query compilation cache like in SQL Server 2008 R2.
If a query is compiled as a new query, the database optimiser calculates a new execution plan based on the current statistics. If the statistics have changed, that can result in a different, and possibly worse, plan.