Redshift UPDATE uses Seq Scan and is very slow - sql

I have to update about 300 rows in a large table (600m rows) and I'm trying to make it faster.
The query I am using is a bit tricky:
UPDATE my_table
SET name = CASE WHEN (event_name in ('event_1', 'event_2', 'event_3'))
THEN 'deleted' ELSE name END
WHERE uid IN ('id_1', 'id_2')
Running EXPLAIN on this query, I get:
XN Seq Scan on my_table (cost=0.00..103935.76 rows=4326 width=9838)
Filter: (((uid)::text = 'id_1'::text) OR ((uid)::text = 'id_2'::text))
I have an interleaved sortkey, and uid is one of the columns included in this sortkey.
The reason the query looks like this is that in the real context the number of columns in SET (along with name) might vary, but it probably won't be more than 10.
The basic idea is that I don't want a cross join (the update rules are specific to each column; I don't want to mix them together).
For example in future there will be a query like:
UPDATE my_table
SET name = CASE WHEN (event_name in ('event_1', 'event_2', 'event_3')) THEN 'deleted' ELSE name END,
address = CASE WHEN (event_name in ('event_1', 'event_4')) THEN 'deleted' ELSE address END
WHERE uid IN ('id_1', 'id_2')
Anyway, back to the first query, it runs for a very long time (about 45 minutes) and takes 100% CPU.
I tried to check an even simpler query:
explain UPDATE my_table SET name = 'deleted' WHERE uid IN ('id_1', 'id_2')
XN Seq Scan on my_table (cost=0.00..103816.80 rows=4326 width=9821)
Filter: (((uid)::text = 'id_1'::text) OR ((uid)::text = 'id_2'::text))
I don't know what else I can add to the question to make it clearer; I would be happy to hear any advice.

Have you tried removing the interleaved sort key and replacing it with a simple sort key on uid or a compound sort key with uid as the first column?
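For illustration, a rough deep-copy sketch for switching to a compound sort key on uid (the column list, types, and any distribution settings are placeholders and would have to match the real table):
CREATE TABLE my_table_new (
    uid        VARCHAR(64),
    event_name VARCHAR(64),
    name       VARCHAR(256)
    -- ... remaining columns of my_table, plus the original DISTKEY/encodings ...
)
COMPOUND SORTKEY (uid);

INSERT INTO my_table_new SELECT * FROM my_table;  -- column lists must match
-- After validating row counts:
-- ALTER TABLE my_table RENAME TO my_table_old;
-- ALTER TABLE my_table_new RENAME TO my_table;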
Also, the name uid makes me think that you may be using a GUID/UUID as the value. I would suggest that this is an anti-pattern for an id value in Redshift, and especially for a sort key.
Problems with GUID/UUID ids:
Do not occur in a predictable sequence
Often trigger a full sequential scan
New rows always disrupt the sort
Compress very poorly
Require more disk space for storage
Require more data to be read when queried

An UPDATE in Redshift is a delete followed by an insert. By design, Redshift only marks the old rows as deleted and does not remove them physically (ghost rows). An explicit VACUUM DELETE ONLY <table_name> is required to reclaim the space.
The sequential scan is affected by these ghost rows. I would suggest running the command above and then checking the performance of the query again.
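Using the table name from the question (the ANALYZE afterwards is an optional extra, not part of the suggestion above):
VACUUM DELETE ONLY my_table;
-- Optionally refresh planner statistics afterwards:
ANALYZE my_table;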

Related

Performance impact of view on aggregate function vs result set limiting

The problem
Using PostgreSQL 13, I ran into a performance issue selecting the highest id from a view that joins two tables, depending on the select statement I execute.
Here's a sample setup:
CREATE TABLE test1 (
id BIGSERIAL PRIMARY KEY,
joincol VARCHAR
);
CREATE TABLE test2 (
joincol VARCHAR
);
CREATE INDEX ON test1 (id);
CREATE INDEX ON test1 (joincol);
CREATE INDEX ON test2 (joincol);
CREATE VIEW testview AS (
SELECT test1.id,
test1.joincol AS t1charcol,
test2.joincol AS t2charcol
FROM test1, test2
WHERE test1.joincol = test2.joincol
);
What I found out
I'm executing two statements which result in completely different execution plans and runtimes. The following statement executes in less than 100 ms. As far as I understand the execution plan, the runtime is independent of the row count, since Postgres iterates over the rows one by one (starting at the highest id, using the index) until a join on a row is possible, then immediately returns.
SELECT id FROM testview ORDER BY ID DESC LIMIT 1;
However, this one takes over 1 second on average (depending on rowcount), since the two tables are "joined completely", before Postgres uses the index to select the highest id.
SELECT MAX(id) FROM testview;
Please refer to this sample on dbfiddle to check the explain plans:
https://www.db-fiddle.com/f/bkMNeY6zXqBAYUsprJ5eWZ/1
My real environment
In my real environment test1 contains only a handful of rows (< 100), having unique values in joincol. test2 contains up to ~10M rows, where joincol always matches a value of test1's joincol. test2's joincol is not nullable.
The actual question
Why does Postgres not recognize that it could use an Index Scan Backward on a per-row basis for the second select? Is there anything I could improve on the tables/indexes?
Queries not strictly equivalent
why does Postgres not recognize that it could use an Index Scan Backward on a per-row basis for the second select?
To make the context clear:
max(id) excludes NULL values. But ORDER BY ... LIMIT 1 does not.
NULL values sort last in ascending sort order, and first in descending. So an Index Scan Backward might not find the greatest value (according to max()) first, but any number of NULL values.
The formal equivalent of:
SELECT max(id) FROM testview;
is not:
SELECT id FROM testview ORDER BY id DESC LIMIT 1;
but:
SELECT id FROM testview ORDER BY id DESC NULLS LAST LIMIT 1;
The latter query doesn't get the fast query plan. But it would with an index with matching sort order: (id DESC NULLS LAST).
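A minimal sketch of such an index, assuming table test1 from the question (the index name is made up):
CREATE INDEX test1_id_desc_nulls_last_idx ON test1 (id DESC NULLS LAST);
-- With a matching sort order available, this variant can get the fast plan as well:
SELECT id FROM testview ORDER BY id DESC NULLS LAST LIMIT 1;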
That's different for the aggregate functions min() and max(). Those get a fast plan when targeting table test1 directly using the plain PK index on (id). But not when based on the view (or the underlying join-query directly - the view is not the blocker). An index sorting NULL values in the right place has hardly any effect.
We know that id in this query can never be NULL. The column is defined NOT NULL. And the join in the view is effectively an INNER JOIN which cannot introduce NULL values for id.
We also know that the index on test.id cannot contain NULL values.
But the Postgres query planner is not an AI. (Nor does it try to be; that could get out of hand quickly.) I see two shortcomings:
min() and max() get the fast plan only when targeting the table directly; regardless of index sort order, an index condition is added: Index Cond: (id IS NOT NULL)
ORDER BY ... LIMIT 1 gets the fast plan only with the exactly matching index sort order.
Not sure whether that might be improved (easily).
db<>fiddle here - demonstrating all of the above
Indexes
Is there anything I could improve on the tables/indexes?
This index is completely useless:
CREATE INDEX ON "test" ("id");
The PK on test.id is implemented with a unique index on the column, that already covers everything the additional index might do for you.
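If that is the case, the redundant index can simply be dropped. As a sketch (assuming the question's test1 table, for which Postgres would auto-generate the name test1_id_idx; verify the actual name first, e.g. with \d test1):
DROP INDEX IF EXISTS test1_id_idx;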
There may be more, waiting for the question to clear up.
Distorted test case
The test case is too far away from actual use case to be meaningful.
In the test setup, each table has 100k rows, there is no guarantee that every value in joincol has a match on the other side, and both columns can be NULL
Your real case has 10M rows in table1 and < 100 rows in table2, every value in table1.joincol has a match in table2.joincol, both are defined NOT NULL, and table2.joincol is unique. A classical one-to-many relationship. There should be a UNIQUE constraint on table2.joincol and a FK constraint t1.joincol --> t2.joincol.
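Sketched in SQL with that table1/table2 naming (constraint names are placeholders):
ALTER TABLE table2 ADD CONSTRAINT table2_joincol_uni UNIQUE (joincol);
ALTER TABLE table1 ADD CONSTRAINT table1_joincol_fkey
    FOREIGN KEY (joincol) REFERENCES table2 (joincol);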
But that's currently all twisted in the question. Standing by till that's cleaned up.
This is a very good problem, and a good test case.
I tested it on Postgres 9.3; perhaps 13 can do it faster.
I used Occam's razor and excluded some possibilities:
The view (it is just as slow without the view)
The JOIN can filter some rows (unfortunately not in your test, but with longer md5 values, length 5-6, it would)
Other basically equivalent SELECT statements do not solve your problem (a subquery or EXISTS)
I managed to get a plan that uses just the indexes, but because the tables aren't bigger than the indexes, that was not the solution.
I think
CREATE INDEX on "test" ("id");
is useless, because of the PK!
If you change this
CREATE INDEX on "test" ("joincol");
to this
CREATE INDEX ON TEST (joincol, id);
Then the second query uses just the indexes.
After you run this
REINDEX table test;
REINDEX table test2;
VACUUM ANALYZE test;
VACUUM ANALYZE test2;
you can achieve some additional performance, because you created the indexes before the inserts.
I think the reason is the two competing aims of the database.
The first aim is to optimize for just a few rows, so it runs a Nested Loop. You can force it with LIMIT x.
The second aim is to optimize for the whole table, i.e. run the query fast over the whole table.
In this situation the Postgres optimizer did not notice that a simple MAX can run with a nested loop. Or perhaps Postgres cannot push the limit into an aggregate (it runs on the whole partial select that the query filters).
And this is very expensive. But it gives you the possibility to use other aggregates there, like SUM, MIN, AVG, etc.
Perhaps window functions can help you too.

Hint for SQL Query Joined tables with million records

I have the query below, which takes more than 5 seconds on average to fetch the data, in a transaction that is triggered innumerable times via the application. I am looking for a hint that can possibly help me reduce the time taken by this query every time it is fired. My constraints are that I cannot add any indexes or change any application settings for this query. Hence Oracle hints or changing the structure of the query are the only choices I have. Please find my query below.
SELECT SUM(c.cash_flow_amount) FROM CM_CONTRACT_DETAIL a ,CM_CONTRACT b,CM_CONTRACT_CASHFLOW c
WHERE a.country_code = Ip_country_code
AND a.company_code = ip_company_code
AND a.dealer_bp_id = ip_bp_id
AND a.contract_start_date >= ip_start_date
AND a.contract_start_date <= ip_end_date
AND a.version_number = b.current_version
AND a.status_code IN ('00','10')
AND a.country_code = b.country_code
AND a.company_code = b.company_code
AND a.contract_number = b.contract_number
AND a.country_code = c.country_code
AND a.company_code = c.company_code
AND a.contract_number = c.contract_number
AND a.version_number = c.version_number
AND c.cash_flow_type_code IN ('07','13');
The things to know about the tables are that they are all transactional tables and their data keeps changing every day. They each have roughly 1 to 10 lakh (100,000 to 1,000,000) records.
This is the explain plan currently on the query:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Hint=RULE
SORT AGGREGATE
TABLE ACCESS BY INDEX ROWID CM_CONTRACT_CASHFLOW
NESTED LOOPS
NESTED LOOPS
TABLE ACCESS BY INDEX ROWID CM_CONTRACT_DETAIL
INDEX RANGE SCAN XIF760CT_CONTRACT_DETAIL
TABLE ACCESS BY INDEX ROWID CM_CONTRACT
INDEX UNIQUE SCAN XPKCM_CONTRACT
INDEX RANGE SCAN XPKCM_CONTRACT_CASHFLOW
Indexes on CM_CONTRACT_DETAIL:
XPKCM_CONTRACT_DETAIL is a composite unique index on country_code, company_code, contract_number and version_number
XIF760CT_CONTRACT_DETAIL is a non unique index on dealer_bp_id
Indexes on CM_CONTRACT:
XPKCM_CONTRACT is a composite unique index on country_code, company_code, contract_number
Indexes on CM_CONTRACT_CASHFLOW:
XPKCM_CONTRACT_CASHFLOW is a composite unique index on country_code, company_code, contract_number, version_number, supply_sequence_number, cash_flow_type_code, and payment_date.
Could you please help improve this query? Please let me know if anything else about the tables is required. Stats are not gathered on these tables either.
Your query plan says HINT=RULE. Why is that? Is this the standard setting in your dbms? Why not make use of the optimizer? You can use /*+CHOOSE*/ for that. This may be all that's needed. (Why are there no Stats on the tables, though?)
EDIT: The above was nonsense. By not gathering any statistics you prevent the optimizer from doing its work. It will always fall back to the good old rules, because it has no basis to calculate costs on and find a better plan. It is strange to see that you voluntarily keep the dbms from getting your queries fast. You can use hints in your queries of course, but be careful always to check and alter them when table data changes significantly. Better gather statistics and have the optimizer doing this work. As to useful hints:
My feeling says: With that many criteria on CM_CONTRACT_DETAIL this should be the driving table. You can force that with /*+LEADING(a)*/. Maybe even use a full table scan on that table /*+FULL(a)*/, which you can still speed up with parallel execution: /*+PARALLEL(a,4)*/.
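Put together as a sketch (hint placement as described above; the WHERE clause is abbreviated here, and the schema owner in the statistics calls is a placeholder):
SELECT /*+ LEADING(a) FULL(a) PARALLEL(a,4) */
       SUM(c.cash_flow_amount)
FROM   CM_CONTRACT_DETAIL a, CM_CONTRACT b, CM_CONTRACT_CASHFLOW c
WHERE  a.country_code = Ip_country_code
  -- ... remaining predicates unchanged from the question ...
  AND  c.cash_flow_type_code IN ('07','13');
Better still, if gathering statistics ever becomes an option:
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'CM_CONTRACT_DETAIL');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'CM_CONTRACT');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'CM_CONTRACT_CASHFLOW');
END;
/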
Good luck :-)

Adding string comparison where clause to inner join takes huge performance hit with SQLite

I'm playing around with Philadelphia transit data and I have an sqlite database storing the gtfs data. I have this query looking for departure times at a particular stop:
SELECT "stop_time".departure_time FROM "stop_time"
INNER JOIN "trip" ON "trip".trip_id = "stop_time".trip_id
WHERE
(trip.route_id = '10726' )
-- AND (trip.service_id = '1')
AND (stop_time.stop_id = '220')
AND (time( stop_time.departure_time ) > time('08:30:45'))
AND (time( stop_time.departure_time ) < time('09:30:45'));
The clause matching service_id to 1 is currently commented out. If I run the query as it is now, without matching service_id, it takes 2 seconds. If I uncomment the service_id clause, it takes 30 seconds. I'm clueless as to why, since I'm already looking into the trip table for the route_id.
Any thoughts?
This generally happens when an index defined on (route_id, stop_id, departure_time) is used while there is no filter on service_id. Once you include service_id in the WHERE clause, since it is not present in the index, the query requires a TABLE SCAN, hence the jump in execution time. If you also include service_id in the index definition, the TABLE SCAN would be replaced with an INDEX SCAN.
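As a hedged sketch for the trip table (the index name and exact column choice are assumptions, not taken from the question's schema):
CREATE INDEX IF NOT EXISTS idx_trip_route_service ON trip (route_id, service_id, trip_id);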
The reason is that you have an index on service_id that is being chosen in preference to another index. Since there are not many different values of service_id, using that index isn't very useful, because there are so many rows with service_id = '1'.

Need to index column in AND statement?

I have to do a SELECT on a table like this:
id
username
speed
is_running
The statement is like:
SELECT *
FROM mytable
WHERE username = 'foo'
AND is_running = 1
I have an index on "username". If I'm running the above statement, do I also need to index "is_running" for best performance? Or does only the first column in the WHERE clause make a difference? I'm using MySQL 5.0.
It depends on the type of data you're storing. If it's bool, you may not see a gain from an index on that column alone. You may want to try to add a composite index on the two columns:
ALTER TABLE mytable ADD INDEX `IDX_USERNAME_IS_RUNNING` ( `username` , `is_running` );
It will ultimately depend on the amount of data in the table as to whether you require the index. In many cases, the engine might just do a table scan and skip your index altogether if it thinks that is faster. Do you have 100 users, or 100,000 users?
On a bit/bool column you are not going to use a ton of storage space for the index, so it probably won't hurt unless you have a really high insertion rate.
If you have a table with 1 million users and only 1 or 2 are running at any one time - sure, index by is_running and it will give you fantastic performance. This specific use case would do best with 2 separate single-column indexes - one on username, one on is_running. The reason for 2 indexes is the case where you ask for is_running = 0, in which case it uses the username index instead.
Otherwise, there is 0% chance of a composite index on (username, is_running) helping anything. Stick to a single index on username.
Finally, do you really need the whole record (SELECT *)? In some scenarios close to the tipping point (when MySQL thinks the index + lookups become less efficient than a straight scan), you can make this query run faster than the original. Have an index on (username, id):
SELECT mytable.*
FROM (
SELECT id
FROM mytable
WHERE username = 'foo'
AND is_running = 1
) X
INNER JOIN mytable on mytable.id = X.id
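For completeness, the index the trick above relies on could be created like this (the index name is made up):
ALTER TABLE mytable ADD INDEX idx_username_id (username, id);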

Slow MySQL retrieval of the row with the largest value in indexed column

I have a SQL table readings something like:
id int
client_id int
device_id int
unique index(client_id, device_id)
I do not understand why the following query is so slow:
SELECT client_id FROM `readings` WHERE device_id = 10 ORDER BY client_id DESC LIMIT 1
My understanding of the index is that MySQL keeps an ordered list (one property of a B-tree) of each row in the table, sorted first by client_id and then by device_id. When I execute an EXPLAIN on this query it says that it will use the index, but that it will need to look at every row. This makes sense since in the worst case, there may only be one row with device_id = 10, and that may also be the row with the smallest client_id and thus at the end of its search. However, in practice, this is not true. My table has ~10 million rows, and rows with device_id = 10 are spread fairly evenly throughout the table. Why then doesn't MySQL start at the end of the index and scan until it finds the first row with device_id = 10, stop, and return that value? It does not seem possible that this is what is happening, since the query takes ~30 seconds to execute.
Is it that my unique key is implemented as a hash somehow and thus not accessible in a list form? PHPMyAdmin is telling me that it is implemented as a b-tree, which makes me think that it should be able to do the scan as I mentioned above and quit with the first instance.
Where is my error and how can I make this query execute more quickly?
Thanks
Try switching the column order in the index:
unique index(device_id, client_id)
Since you are filtering on device_id, you would want that to be the first column in the index.
First, I'm assuming that you have good statistics for this table. If not, you'll want to analyze the table to make sure the optimizer can figure out what the best option is.
Here's another approach you could try that might work better. It could be that MySQL is not understanding your intent well enough to optimize correctly:
SELECT MAX(client_id) from readings where device_id = 10
Otherwise you could modify the index to be by device_id first, then client_id. Or you could add another index by just device_id.
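Sketched out, those two options might look like this (index names are made up; the existing unique index's name would have to be looked up with SHOW INDEX before replacing it):
ALTER TABLE readings ADD UNIQUE INDEX uniq_device_client (device_id, client_id);
-- or simply:
ALTER TABLE readings ADD INDEX idx_device (device_id);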
You have a compound index on (client_id, device_id); these will (more or less) be concatenated for the purpose of indexing, and the index will only be considered if you use the first of the column(s). Your query is using device_id, which is the last of them, so you could provide a separate index on that column, or swap the columns around in the index.
Also, check the output of EXPLAIN on your queries.