We have a table in production that has been there for quite some time, and its volume is huge (close to 3 TB). Since most of the data in this table is stale and unused, we are planning to get rid of historical data that does not have any references.
There is a boolean column "active" that we can use to identify this data; however, this column is not indexed.
Considering the volume of the table, I am not sure whether creating a new index will help. I tried to incrementally delete the inactive rows 100K at a time, but the volume is so huge that this would take months to clear up.
The primary key of the table is of type UUID. I thought of creating a new table and inserting only the rows with active = true:
insert into mytable_active
select *
from mytable
where is_active = true;
But, as expected, this approach also fails because of the volume and keeps running forever.
Any suggestions or approaches would be most welcome.
When you need to delete a lot of rows quickly, partitioning is great... when the table is already partitioned.
If there is no index on the column you need, then at least one full table scan will be required, unless you can use another index (like a date column) to narrow things down.
I mean, you could create an index WHERE active, but building it would also require the full table scan you're trying to avoid, so... meh.
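For reference, such a partial index would look like this (the index name and indexed column are placeholders; CONCURRENTLY avoids locking out writes while it builds):

CREATE INDEX CONCURRENTLY mytable_active_idx
    ON mytable (id)
    WHERE active;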
First, DELETE. Just don't, not even in small bits with LIMIT. Not only will it write most of the table (3 TB of writes), it will also write all of that to the WAL (3 more TB), update the indexes, and write those updates to the WAL too. This will take forever, and the random IO from the index updates will nuke your performance. And if it ever finishes, you will still have a 3 TB file, most of it unallocated. Plus the indexes.
So, no DELETE. Uh, wait.
Scenario with DELETE:
Swap the table with a view, SELECT * FROM humongous WHERE active = true, and add triggers or rules on the view to redirect inserts/updates/deletes to the underlying table (see the sketch after these steps). Make sure the triggers set active = true on all new rows.
Re-create each index (concurrently), except the primary key, adding WHERE active = true. This will require a full table scan for the first index, even if you create the index on "active", because CREATE INDEX ... WHERE doesn't seem to be able to use another index to speed things up when a WHERE clause is specified.
Drop the old indices.
Note that the purpose of the view is only to ensure absolutely all queries have active = true in the WHERE clause; otherwise they wouldn't be able to use the conditional indices we just created, so each query would be a full table scan, and that would be undesirable.
And now, you can DELETE, bit by bit, with:

delete from mytable
where id in (select id from mytable where active = false limit 100000);
It's a tradeoff: you'll have a number of full table scans to recreate the indices, but you'll avoid the random IO from index updates during a huge delete, which is the real reason it would take months.
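A minimal sketch of the view swap, assuming the real table is renamed to humongous and the view takes its old name; the trigger body, the created_at column, and the index are illustrative, and matching UPDATE/DELETE triggers would be needed as well:

-- hide the real table behind a view that always filters on active
ALTER TABLE mytable RENAME TO humongous;
CREATE VIEW mytable AS
    SELECT * FROM humongous WHERE active = true;

-- redirect writes to the underlying table, forcing active = true on new rows
CREATE FUNCTION mytable_insert() RETURNS trigger AS $$
BEGIN
    NEW.active := true;
    INSERT INTO humongous VALUES (NEW.*);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER mytable_insert
    INSTEAD OF INSERT ON mytable
    FOR EACH ROW EXECUTE FUNCTION mytable_insert();
    -- (EXECUTE FUNCTION is Postgres 11+ syntax; older versions use EXECUTE PROCEDURE)

-- one conditional index per old index, e.g.:
CREATE INDEX CONCURRENTLY humongous_created_at_idx
    ON humongous (created_at) WHERE active;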
Scenario with INSERT INTO new_table SELECT...
If you have inserts and updates running on this huge table, then you have a problem, because those will not be transferred to the new table during the operation. So a solution would be to:
turn off all the scripts and services that run long queries
lock everything
create new_table
rename huge_table to huge_old
create a view, named huge_table, that is a UNION ALL of new_table and huge_old. From the application's point of view, this view replaces huge_table. It must handle priority: if a row is present in the new table, a row with the same id in the old table must be ignored, so it will need a join (see the sketch after these steps). This step should be tested carefully beforehand.
unlock
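A sketch of that priority view (assuming id is the primary key), using an anti-join so rows in huge_old that were superseded in new_table are skipped:

CREATE VIEW huge_table AS
    SELECT * FROM new_table
    UNION ALL
    SELECT o.*
    FROM huge_old o
    WHERE NOT EXISTS (SELECT 1 FROM new_table n WHERE n.id = o.id);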
Then let it run for a while and see whether the view destroys your performance. At this point, if it breaks, you can easily go back by dropping the view and renaming the table back to its old name. I said to turn off all the scripts and services that run long queries because those might fail against the view, and you don't want to take a big lock while a long query is running, because that would halt everything until it's done.
add insert/update/delete triggers on the view to redirect the writes to new_table. Inserts go directly to the new table, updates will have to transfer the row, deletes will have to hit both tables, and UNIQUE constraints will be... interesting. This will be a bit complicated.
Now to transfer the data.
Even if it takes a while, who cares? It will finish eventually. I suppose if you have a 3 TB table you must have some decent storage; even if it's those old spinning things we used to put data on, it shouldn't take more than a few hours if the IO is not random. So the idea is to use only linear IO.
Fingers crossed the table doesn't have a big text column stored in a separate TOAST table, which would require one random access per row. Did you check?
Now, you might actually want it to run for longer so it uses less IO bandwidth, both for reads and writes, and especially WAL writes. It doesn't matter how long the query runs as long as it doesn't degrade performance for the rest of the users.
Postgres will probably go for a parallel table scan to use all the cores and all the IO in the box, so maybe disable that first.
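One way to do that for the current session (max_parallel_workers_per_gather is the standard knob for this):

SET max_parallel_workers_per_gather = 0;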
Then I think you should try to avoid the hilarious (for onlookers) scenario where it reads from the table for half a day without finding any rows that match, so the disks handle the reads just fine, and then it finds all the matching rows at the end and proceeds to write 300 GB to the WAL and the destination table, causing huge write contention, and you have to Ctrl-C it when you know, you just know it in your gut, that it was THIS CLOSE to finishing.
So:
create table bogus_table (like mytable);  -- same columns as mytable, no indices
insert into bogus_table select * from mytable;
10% of "active" rows is still 300 GB, so better check the server can handle writing a 300 GB table without slowing down. Watch vmstat and check whether iowait goes crazy; watch transactions per second, query latency, web server responsiveness, the usual database health stuff. If the phone rings, hit Ctrl-C and say "Fixed!"
After it's done a few checkpoints, Ctrl-C. Time to do the real thing.
Now, to make this query take much longer (and therefore destroy much less IO bandwidth), you can add this to the columns in your select:
pg_sleep((random()<0.000001)::INTEGER * 0.1)
That will make it sleep for 0.1 s once every million rows, on average. Adjust to taste while watching vmstat.
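Note that for an INSERT ... SELECT * the extra sleep column won't fit, since the column lists must match; one variant (a sketch, untested at this scale) hides the sleep in the WHERE clause instead, using CASE so the sleep is only evaluated on the rare matching draws:

insert into mytable_active
select *
from mytable
where is_active = true
  and case when random() < 0.000001
           then pg_sleep(0.1) is not null  -- pg_sleep returns void, which is never null
           else true
      end;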
You can also monitor query progress with hacks, for example by watching the destination table's files grow on disk.
It should work fine.
Once the interesting rows have been extracted from the accursed table, you could move the old data to a data warehouse or cold storage, or have fun loading it into ClickHouse if you want to run some analytics.
Maybe partitioning the new table would also be a good idea, before it grows back to 3 TB. Or periodically moving out old rows.
Now, I wonder how you backup this thing...
-- EDIT
OK, I have another idea, maybe simpler, but you'll need a box.
Get a second server with fast storage and set up logical replication. On this replica, create an empty UNLOGGED copy of the huge table, with only one index: the primary key. Logical replication will copy the entire table, so it will take a while. A second network card in the original server, or some QoS tuning, would help avoid saturating the ethernet connection you actually use to serve queries.
Logical replication is row-based and identifies rows by primary key, so you absolutely need to create that PK index on the replica manually.
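A sketch of the setup (names and the connection string are placeholders; logical replication requires Postgres 10+ and wal_level = logical on the master):

-- on the master
CREATE PUBLICATION huge_pub FOR TABLE mytable;

-- on the replica: same columns, unlogged, PK index only
CREATE UNLOGGED TABLE mytable (
    id     uuid PRIMARY KEY,
    active boolean
    -- ...the rest of the columns, matching the master
);

CREATE SUBSCRIPTION huge_sub
    CONNECTION 'host=master dbname=mydb user=repl'
    PUBLICATION huge_pub;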
I've tested it on my home box just now and it works very well. The initial data transfer was a bit slow, but that may be my network. Pausing and then resuming replication transferred the rows inserted or updated on the master during the pause. However, renaming the table seems to break replication, so you won't be able to do INSERT INTO ... SELECT; you'll have to DELETE on the replica. With SSDs, only one PK index, and the table set to UNLOGGED, it should not take forever. Maybe using btrfs would turn the random index-write IO into linear IO, thanks to its copy-on-write nature. Or, if the PK index fits in shared_buffers, just YOLO it and set checkpoint_timeout to its maximum of one day so it barely writes anything. You'll probably need to do the delete in chunks so the replicated updates can keep up.
When I dropped the PK index to speed up the deletion and then recreated it before re-enabling replication, it didn't catch up on the updates. So you can't drop the index.
But is there a way to transfer only the rows you want to keep, instead of transferring everything and then deleting, while still having the replica keep up with the master's updates? It's possible for inserts (just disable the initial data copy), but unfortunately not for updates. You'd need an integer primary key, so you could generate bogus rows on the replica that would then be overwritten during replication... and you can't do that with a UUID PK.
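For the insert-only variant, skipping the initial copy is a standard subscription option (connection details are placeholders):

CREATE SUBSCRIPTION huge_sub
    CONNECTION 'host=master dbname=mydb user=repl'
    PUBLICATION huge_pub
    WITH (copy_data = false);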
Anyway. Once this is done, set the amount of WAL kept on the master to a very high value, so you can resume replication later without missing any updates.
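For example (wal_keep_size is the Postgres 13+ spelling of this setting; older versions use wal_keep_segments; the 200GB figure is an arbitrary illustration):

ALTER SYSTEM SET wal_keep_size = '200GB';
SELECT pg_reload_conf();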
And now you can run your big DELETE on the replica. When it's done, VACUUM, maybe CLUSTER, re-create all the indexes, etc., and set the table back to LOGGED.
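Something like this, with a placeholder index name for the CLUSTER step:

VACUUM ANALYZE mytable;           -- or: CLUSTER mytable USING mytable_pkey;
ALTER TABLE mytable SET LOGGED;   -- start WAL-logging again before failover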
Then you can fail over to the new server. Or, if you're feeling adventurous, you could replicate the replica's table back to the master; since it will have the same name, it will have to live in another schema.
That should allow for very little downtime, since all updates are replicated and the replica will always be up to date.
I would suggest the following (see the sketch after these steps):
Copy the active records to a temporary table
Drop the main table
Rename the temporary table to the main table name
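A minimal sketch, assuming you can afford downtime and nothing writes to the table in the meantime (indexes and constraints must be re-created afterwards):

BEGIN;
CREATE TABLE mytable_active AS
    SELECT * FROM mytable WHERE is_active = true;
DROP TABLE mytable;
ALTER TABLE mytable_active RENAME TO mytable;
COMMIT;
-- then re-create the primary key, indexes, and constraints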
I have existing data in a class (within a database). After creating a new index (of type "all"), I am not able to get back the data that was inserted before the index was created.
I have tried both programmatically (Python) and via the web interface.
I expect to be able to retrieve pre-existing data after a new index is created.
Fauna automatically builds indexes on creation, adding any prior records covered by the index, without user intervention. If you ever see indexes missing data, you should contact us! As it happens, today we had a brief outage that, while it didn't prevent reads, writes, or index updates, did stall index rebuilds.
Does deleting data from a database (using an actual DELETE SQL query) cause huge problems with re-indexing of the table data (say, tens of millions of rows), thereby increasing system overhead and consuming more resources?
Most databases do not immediately delete the index nodes associated with rows deleted from the table. Depending on the specifics of how duplicate index keys are handled, this may have no effect at all. For example, one scheme for building indexes with duplicate keys is to keep a single B+Tree node for each key value and have it point to a list of the rows that contain that value in the indexed column(s). In that case, deleting one or even many of the rows in the table does not affect the efficiency of the index tree at all, until all of the rows with that key value have been deleted, at which point the key node is flagged as deleted but not necessarily removed from the tree. Of course, with a unique index key, any deletion results in a node flagged as deleted. When that happens to many key values near each other on disk, the index tree may become inefficient.
One solution is to rebuild the index from scratch, either by dropping and recreating it or, if the DBMS has the feature, with a "reindex" command. Another solution, used by some more advanced database systems, is to track whenever a search of an index actually encounters a deleted node. If this happens often enough to exceed a configured threshold, an automated thread "cleans" the index, actually removing deleted nodes and possibly compressing mostly-empty index pages or even rebalancing the index tree. The advantage of this "cleaner thread" feature is that inefficient indexes that are rarely used, or whose subtrees containing deleted nodes are no longer accessed (imagine purging out-dated rows via an index whose leading column is the date), do not consume resources being cleaned or rebuilt, since they are not affecting performance.
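In PostgreSQL, for instance, both flavours of rebuild can be done without blocking readers and writers; a sketch with placeholder names (REINDEX ... CONCURRENTLY needs PostgreSQL 12+):

REINDEX INDEX CONCURRENTLY my_bloated_idx;

-- or, on older versions, the manual equivalent:
CREATE INDEX CONCURRENTLY my_bloated_idx_new ON mytable (some_column);
DROP INDEX CONCURRENTLY my_bloated_idx;
ALTER INDEX my_bloated_idx_new RENAME TO my_bloated_idx;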
I have a large Redis sorted set. We need to re-index the data in the set daily, while clients actively request data from the set. My plan is to simply build a second set using a different key and then replace the existing key with the new one:
Build new "indexed" sorted set
RENAME "indexed" set to "live" to replace existing "live" set.
Looking at the RENAME documentation, it states:
If newkey already exists it is overwritten, when this happens RENAME executes an implicit DEL operation, so if the deleted key contains a very big value it may cause high latency even if RENAME itself is usually a constant-time operation.
I'm wondering, then, if it's better to rename the "live" sorted set (e.g., to "dead"), then rename the new "indexed" sorted set to "live", pipelining those requests, and only then issue a separate DEL command to delete the "dead" set:
Build new "indexed" sorted set
pipeline: RENAME existing "live" set to "dead"
pipeline: RENAME new "indexed" set to "live"
DEL "dead" set
Ideas?
Using DEL, you are only postponing the problem: during the DEL, Redis blocks other clients.
First, I'd investigate how big the problem actually is. It can be significant; for example, deleting a 3.5 GB ZSET key takes about 2 seconds on our staging system.
If it's a problem, split up the DEL using ZREMRANGEBYRANK and ZCARD.
Pipelining is efficient (non-transactional, of course), so it helps to determine the total size upfront with ZCARD and then issue N pipelined ZREMRANGEBYRANK commands, each removing a chunk of members by rank (for example, the last 10,000 with -10000 -1), ending with 0 -1 to remove whatever remains. As soon as all members are deleted, Redis automatically deletes the key (the sorted set) itself.
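Concretely, for a set of about a million members and chunks of 10,000, the pipelined sequence would look something like this (key name illustrative):

ZCARD dead
ZREMRANGEBYRANK dead -10000 -1    (repeated ~100 times in one pipeline)
ZREMRANGEBYRANK dead 0 -1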
Hope this helps, TW
If Redis stores data as key-value pairs in memory, what is the size of the hash table Redis creates initially to store them? Does it create a table of a size equivalent to the maxmemory parameter in the config file?
No, the size of the main dictionary's hash table is dynamic.
The initial size is 4 entries. It then grows to accommodate the data, following powers of 2. Growth is dynamic, so rehashing is performed incrementally in the background; an expensive rehashing operation cannot block a simple SET command.