Most efficient way of updating ~100 million rows in Postgresql database? - sql

I have a database with a single table. This table will need to be updated every few weeks. We need to ingest third-party data into it and it will contain 100-120 million rows. So the flow is basically:
Get the raw data from the source
Detect inserts, updates & deletes
Make updates and ingest into the database
What's the best way of detecting and performing updates?
Some options are:
Compare the incoming data with the current database row by row and make individual updates. This seems very slow and not feasible.
Ingest the incoming data into a new table, then swap out the old table for the new one.
Bulk updates in-place in the current table. Not sure how to do this.
What do you suggest is the best option, or if there's a different option out there?

Postgres has a helpful guide for improving performance of bulk loads. From your description, you need to perform a bulk INSERT in addition to a bulk UPDATE and DELETE. Below is a rough step-by-step guide for making this efficient:
Configure Global Database Configuration Variables Before the Operation
ALTER SYSTEM SET max_wal_size = <size>;
You can additionally reduce WAL logging to the minimum by disabling archiving and replication:
ALTER SYSTEM SET wal_level = 'minimal';
ALTER SYSTEM SET archive_mode = 'off';
ALTER SYSTEM SET max_wal_senders = 0;
Note that these changes will require a database restart to take effect.
Start a Transaction
You want all work to be done in a single transaction in case anything goes wrong. Running COPY in parallel across multiple connections does not usually increase performance, since disk is typically the limiting factor.
Optimize Other Configuration Variables at the Transaction level
SET LOCAL maintenance_work_mem = <size>
...
You may need to set other configuration parameters if you are doing any additional special processing of the data inside Postgres (work_mem is usually the most important there, especially if using the PostGIS extension). See this guide for the most important configuration variables for performance.
CREATE a TEMPORARY table with no constraints.
CREATE TEMPORARY TABLE changes(
id bigint,
data text
) ON COMMIT DROP; -- ensures this table will be dropped at end of transaction
Bulk Insert Into changes using COPY FROM
Use the COPY FROM Command to bulk insert the raw data into the temporary table.
COPY changes(id,data) FROM ..
DROP Relations That Can Slow Processing
On the target table, DROP all foreign key constraints, indexes and triggers (where possible). Don't drop your PRIMARY KEY, as you'll want that for the INSERT.
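For illustration, the drops might look something like this (the constraint, index and trigger names are hypothetical; look up the real ones in pg_constraint, pg_indexes and pg_trigger):
ALTER TABLE target DROP CONSTRAINT target_parent_fk; -- hypothetical foreign key
DROP INDEX target_data_idx;                          -- hypothetical secondary index
DROP TRIGGER target_audit_trg ON target;             -- hypothetical trigger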
Add a Tracking Column to target Table
Add a column to the target table to record whether each row was present in the changes table:
ALTER TABLE target ADD COLUMN seen boolean;
UPSERT from the changes table into the target table:
UPSERTs are performed by adding an ON CONFLICT clause to a standard INSERT statement. This avoids the need to perform two separate operations.
INSERT INTO target(id,data,seen)
SELECT
id,
data,
true
FROM
changes
ON CONFLICT (id) DO UPDATE SET data = EXCLUDED.data, seen = true;
DELETE Rows Not In changes Table
DELETE FROM target WHERE seen IS NOT TRUE;
DROP Tracking Column and Temporary changes Table
DROP TABLE changes;
ALTER TABLE target DROP COLUMN seen;
Add Back Relations You Dropped For Performance
Add back all constraints, triggers and indexes that were dropped to improve bulk upsert performance.
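Continuing the hypothetical names from the DROP step above, that could look like:
ALTER TABLE target ADD CONSTRAINT target_parent_fk FOREIGN KEY (parent_id) REFERENCES parent(id);
CREATE INDEX target_data_idx ON target (data);
CREATE TRIGGER target_audit_trg AFTER INSERT OR UPDATE OR DELETE ON target
FOR EACH ROW EXECUTE FUNCTION audit_fn(); -- audit_fn is a placeholder for your trigger function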
Commit Transaction
The bulk upsert/delete is complete and the following commands should be performed outside of a transaction.
Run VACUUM ANALYZE on the target Table.
This will allow the query planner to make appropriate inferences about the table and reclaim space taken up by dead tuples.
SET maintenance_work_mem = <size>
VACUUM ANALYZE target;
SET maintenance_work_mem = <original size>
Restore Original Values of Database Configuration Variables
ALTER SYSTEM SET max_wal_size = <size>;
...
You may need to restart your database again for these settings to take effect.

Related

Safely replace table with new data and schema

I am trying to create a stored procedure to recreate a table from scratch, with a possible change of schema (including possible additions/removals of columns), by using a DROP TABLE followed by a SELECT INTO, like this:
BEGIN TRAN
DROP TABLE [MyTable]
SELECT (...) INTO [MyTable] FROM (...)
COMMIT
My concern is that errors could be generated if someone tries to access the table after it has been dropped but before the SELECT INTO has completed. Is there a way to lock [MyTable] in a way that will persist through the DROP?
Instead of DROP/SELECT INTO, I could TRUNCATE/INSERT INTO, but this would not allow the schema to be changed. SELECT INTO is convenient in my situation because it allows the new schema to be automatically determined. Is there a way to make this work safely?
Also, I would like to be sure that the source tables in "FROM (...)" are not locked during this process.
If you try to make a significant change to the table (like adding a column in the middle of existing columns, not at the end) using SSMS, and look at the script it generates, you'll see that SSMS uses sp_rename.
The general structure of the SSMS's script:
create a new table with temporary name
populate the new table with data
drop the old table
rename the new table to the correct name.
All this in a transaction.
This should keep the time when tables are locked to a minimum.
BEGIN TRANSACTION
SELECT (...) INTO dbo.Temp_MyTable FROM (...)
DROP TABLE dbo.MyTable
EXECUTE sp_rename N'dbo.Temp_MyTable', N'MyTable', 'OBJECT'
COMMIT
DROP TABLE MyTable acquires a schema modification (Sch-M) lock on it until the end of transaction, so all other queries using MyTable would wait. Even if other queries use the READ UNCOMMITTED isolation level (or the infamous WITH (NOLOCK) hint).
See also MSDN Lock Modes:
Schema Locks
The Database Engine uses schema modification (Sch-M) locks during a table data definition language (DDL) operation, such as adding a column or dropping a table. During the time that it is held, the Sch-M lock prevents concurrent access to the table. This means the Sch-M lock blocks all outside operations until the lock is released.

Populating a table from a view in Oracle with "locked" truncate/populate

I would like to populate a table from a (potentially large) view on a scheduled basis.
My process would be:
Disable indexes on table
Truncate table
Copy data from view to table
Enable indexes on table
In SQL Server, I can wrap the process in a transaction such that when I truncate the table a schema modification lock will be held until I commit. This effectively means that no other process can insert/update/whatever until the entire process is complete.
However I am aware that in Oracle the truncate table statement is considered DDL and will thus issue an implicit commit.
So my question is how can I mimic the behaviour of SQL Server here? I don't want any other process trying to insert/update/whatever whilst I am truncating and (re)populating the table. I would also prefer my other process to be unaware of any locks.
Thanks in advance.
Make your table a partitioned table with a single partition and local indexes only. Then whenever you need to refresh:
Copy data from view into a new temporary table
CREATE TABLE tmp AS SELECT ... FROM some_view;
Exchange the partition with the temporary table:
ALTER TABLE some_table
EXCHANGE PARTITION part WITH TABLE tmp
WITHOUT VALIDATION;
The table is only locked for the duration of the partition exchange, which, without validation and global index update, should be instant.
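For completeness, the single-partition table assumed above might be declared roughly like this (the column list is an assumption; some_table and part match the names used in the exchange statement):
CREATE TABLE some_table (
  id   NUMBER,
  data VARCHAR2(4000)
)
PARTITION BY RANGE (id)
(
  PARTITION part VALUES LESS THAN (MAXVALUE)
);
CREATE INDEX some_table_id_ix ON some_table (id) LOCAL; -- local index only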

Trigger on Audit Table failing due to update conflict

I have a number of tables that get updated through my app which return a lot of data or are difficult to query for changes. To get around this problem, I have created a "LastUpdated" table with a single row and have a trigger on these complex tables which just sets GetDate() against the appropriate column in the LastUpdated table:
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET ListItems = GetDate()
GO
This way, the clients only have to query this table for the last updated value and can then decide whether or not they need to refresh their data from the complex tables. The complex tables are using snapshot isolation to prevent dirty reads.
In busy systems, around once a day we are getting errors writing or updating data in the complex tables due to update conflicts in "LastUpdated". Because this occurs in the statement executed by the trigger, the affected complex table fails to save data. The following error is logged:
Snapshot isolation transaction aborted due to update conflict. You
cannot use snapshot isolation to access table 'dbo.tblLastUpdated'
directly or indirectly in database 'devDB' to update, delete, or
insert the row that has been modified or deleted by another
transaction. Retry the transaction or change the isolation level for
the update/delete statement.
What should I be doing here in the trigger to prevent this failure? Can I use some kind of query hints on the trigger to avoid this - or can I just ignore errors in the trigger? Updating the data in LastUpdated is not critical, but saving the data correctly into the complex tables is.
This is probably something very simple that I have overlooked or am not aware of. As always, thanks for any info.
I would say that you should look into using Change Tracking (http://msdn.microsoft.com/en-gb/library/cc280462%28v=sql.100%29.aspx), which is lightweight, built-in SQL Server functionality that you can use to monitor the fact that a table has changed, as opposed to logging each individual change (which you can also do with Change Data Capture). It needs Snapshot Isolation, which you are already using.
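A minimal sketch of enabling it (the retention settings, the primary-key column name ListItemID, and the @last_sync_version variable are assumptions for illustration):
ALTER DATABASE devDB SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);
ALTER TABLE dbo.tblListItem ENABLE CHANGE_TRACKING;
-- clients remember the version from their previous sync and ask only for changes since then:
SELECT CHANGE_TRACKING_CURRENT_VERSION();
SELECT ct.ListItemID, ct.SYS_CHANGE_OPERATION
FROM CHANGETABLE(CHANGES dbo.tblListItem, @last_sync_version) AS ct;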
Because your trigger is running in your parent transaction, and your snapshot has become out of date, your whole transaction would need to start again. If this is a complex workload, maintaining this last updated data in this way would be costly.
Short answer - don't do that! Making the updating transactions dependent on one single shared row makes them prone to deadlocks, update conflicts and a whole gamut of nasty things.
You can either use views to determine last update, e.g.:
SELECT
t.name
,user_seeks
,user_scans
,user_lookups
,user_updates
,last_user_seek
,last_user_scan
,last_user_lookup
,last_user_update
FROM sys.dm_db_index_usage_stats i JOIN sys.tables t
ON (t.object_id = i.object_id)
WHERE database_id = db_id()
Or, if you really insist on the solution with LastUpdated, you can implement its update from the trigger in an autonomous transaction. Even though SQL Server doesn't support autonomous transactions natively, it can be done using linked servers: How to create an autonomous transaction in SQL Server 2008
The schema needs to change. If you have to keep your update table, make a row for every table. That would greatly reduce your locking, because each trigger updates its own table's row instead of competing for the sole row in the table.
LastUpdated
table_name (varchar(whatever)) pk
modified_date (datetime)
New Trigger for tblListItem
CREATE TRIGGER [dbo].[trg_ListItem_LastUpdated] ON [dbo].[tblListItem]
FOR INSERT, UPDATE, DELETE
AS
UPDATE LastUpdated SET modified_date = GetDate() WHERE table_name = 'tblListItem'
GO
Another option that I use a lot is having a modified_date column in every table. Then clients know exactly which records to update/insert to sync with your data, rather than dropping and reloading everything in the table each time one record changes or is inserted.
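For example (the constraint name and the @last_sync_time variable are just illustrative):
ALTER TABLE dbo.tblListItem ADD modified_date datetime
    CONSTRAINT DF_tblListItem_modified_date DEFAULT (GETDATE());
-- clients then pull only rows changed since their last sync time:
SELECT * FROM dbo.tblListItem WHERE modified_date > @last_sync_time;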
Alternatively, you can update the log table inside the same transaction that your application uses to update the complex tables, and avoid the trigger altogether.
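That might look roughly like this in the application's data-access code (the column names and parameters are placeholders):
BEGIN TRANSACTION;
    UPDATE dbo.tblListItem SET SomeColumn = @value WHERE ListItemID = @id;
    UPDATE dbo.LastUpdated SET ListItems = GETDATE();
COMMIT TRANSACTION;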
Update
You can also opt to insert a new row instead of updating the same row in the LastUpdated table, and then query the max timestamp for the latest update. However, with this approach the LastUpdated table grows every day, which you need to take care of if the volume of transactions is high.
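A sketch of that variant, assuming the per-table layout suggested above:
INSERT INTO dbo.LastUpdated (table_name, modified_date) VALUES ('tblListItem', GETDATE());
-- latest change for a given table:
SELECT MAX(modified_date) FROM dbo.LastUpdated WHERE table_name = 'tblListItem';
-- periodic cleanup so the table does not grow without bound:
DELETE FROM dbo.LastUpdated WHERE modified_date < DATEADD(DAY, -7, GETDATE());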

Automatically dropping PostgreSQL tables once per day

I have a scenario where I have a central server and a node. Both server and node are capable of running PostgreSQL but the storage space on the node is limited. The node collects data at a high speed and writes the data to its local DB.
The server needs to replicate the data from the node. I plan on accomplishing this with Slony-I or Bucardo.
The node needs to be able to delete all records from its tables at a set interval in order to minimize disk space used. Should I use pgAgent with a job consisting of a script like
DELETE FROM tablex, tabley, tablez;
where the actual batch file to run the script would be something like
#echo off
C:\Progra~1\PostgreSQL\9.1\bin\psql -d database -h localhost -p 5432 -U postgres -f C:\deleteFrom.sql
?
I'm just looking for opinions if this is the best way to accomplish this task or if anyone knows of a more efficient way to pull data from a remote DB and clear that remote DB to save space on the remote node. Thanks for your time.
The most efficient command for you is the TRUNCATE command.
With TRUNCATE you can list multiple tables, as in your example:
TRUNCATE tablex, tabley, tablez;
Here's the description from the postgres docs:
TRUNCATE quickly removes all rows from a set of tables. It has the same effect as an unqualified DELETE on each table, but since it does not actually scan the tables it is faster. Furthermore, it reclaims disk space immediately, rather than requiring a subsequent VACUUM operation. This is most useful on large tables.
You may also add CASCADE as a parameter:
CASCADE Automatically truncate all tables that have foreign-key references to any of the named tables, or to any tables added to the group due to CASCADE.
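For example, to also empty any tables that reference these via foreign keys (only do this if losing those rows is acceptable):
TRUNCATE tablex, tabley, tablez CASCADE;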
The two best options, depending on your exact needs and workflow, would be truncate, as @Bohemian suggested, or to create a new table, rename, then drop.
We use something much like the latter create/rename/drop method in one of our major projects. This has an advantage where you need to be able to delete some data, but not all data, from a table very quickly. The basic workflow is:
Create a new table with a schema identical to the old one
CREATE TABLE new_table (LIKE ...);
In a transaction, rename the old and new tables simultaneously:
BEGIN;
ALTER TABLE my_table RENAME TO old_table;
ALTER TABLE new_table RENAME TO my_table;
COMMIT;
[Optional] Now you can do stuff with the old table, while the new table is happily accepting new inserts. You can dump the data to your centralized server, run queries on it, or whatever.
Delete the old table
DROP TABLE old_table;
This is an especially useful strategy when you want to keep, say, 7 days of data around, and only discard the 8th day's data all at once. Doing a DELETE in this case can be very slow. By storing the data in partitions (one for each day), it is easy to drop an entire day's data at once.
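On current PostgreSQL versions (10+), declarative partitioning makes this pattern straightforward; a sketch with made-up table and column names:
CREATE TABLE measurements (
    logged_at timestamptz NOT NULL,
    payload   text
) PARTITION BY RANGE (logged_at);

CREATE TABLE measurements_2024_01_01 PARTITION OF measurements
    FOR VALUES FROM ('2024-01-01') TO ('2024-01-02');

-- discarding a whole day later is nearly instant:
DROP TABLE measurements_2024_01_01;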

Add new column without table lock?

My project has a table with 23 million records, and around 6 fields of that table are indexed.
Earlier I tried to add a delta column for Thinking Sphinx search, but it ended up holding a lock on the whole table for an hour. Afterwards, once the column was added and I tried to rebuild the indexes, this is the query that held the lock for around 4 hours:
"update user_messages set delta = false where delta = true"
To bring the server back up, I created a new database from a db dump and promoted it as the database so the server could be turned live again.
Now what I am looking for is: is it possible to add the delta column to my table without a table lock? And once the delta column is added, why does the above query run when I execute the index rebuild command, and why does it block the server for so long?
PS.: I am on Heroku and using Postgres with ika db model.
Postgres 11 or later
Since Postgres 11, only volatile default values still require a table rewrite. The manual:
Adding a column with a volatile DEFAULT or changing the type of an existing column will require the entire table and its indexes to be rewritten.
Bold emphasis mine. false is immutable. So just add the column with DEFAULT false. Super fast, job done:
ALTER TABLE tbl ADD column delta boolean DEFAULT false;
Postgres 10 or older, or for volatile DEFAULT
Adding a new column without DEFAULT or DEFAULT NULL will not normally force a table rewrite and is very cheap. Only writing actual values to it creates new rows. But, quoting the manual:
Adding a column with a DEFAULT clause or changing the type of an
existing column will require the entire table and its indexes to be rewritten.
UPDATE in PostgreSQL writes a new version of the row. Your question does not provide all the information, but that probably means writing millions of new rows.
If you do the UPDATE in place, a major portion of the table is affected, and you are free to lock the table exclusively, remove all indexes before doing the mass UPDATE and recreate them afterwards. It's faster this way. Related advice in the manual.
If your data model and available disk space allow for it, CREATE a new table in the background and then, in one transaction: DROP the old table, and RENAME the new one. Related:
Best way to populate a new column in a large table?
While creating the new table in the background: Apply all changes to the same row at once. Repeated updates create new row versions and leave dead tuples behind.
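A minimal sketch of that swap approach (the table name tbl and the delta default are taken from this question; indexes, constraints and defaults still have to be recreated on the new table):
BEGIN;
CREATE TABLE tbl_new AS
SELECT *, false AS delta
FROM tbl;
-- recreate indexes, constraints and defaults on tbl_new here
DROP TABLE tbl;
ALTER TABLE tbl_new RENAME TO tbl;
COMMIT;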
If you cannot remove the original table because of constraints, another fast way is to build a temporary table, TRUNCATE the original one and mass INSERT the new rows - sorted, if that helps performance. All in one transaction. Something like this:
BEGIN;
SET temp_buffers = '1000MB'; -- or whatever you can spare temporarily
-- write-lock table here to prevent concurrent writes - if needed
LOCK TABLE tbl IN SHARE MODE;
CREATE TEMP TABLE tmp AS
SELECT *, false AS delta
FROM tbl; -- copy existing rows plus new value
-- ORDER BY ??? -- opportune moment to cluster rows
-- DROP all indexes here
TRUNCATE tbl; -- empty table - truncate is super fast
ALTER TABLE tbl ADD column delta boolean DEFAULT FALSE; -- NOT NULL?
INSERT INTO tbl
TABLE tmp; -- insert back surviving rows.
-- recreate all indexes here
COMMIT;
You could add another table with just that one column; there won't be any such long locks. Of course there should also be another column, a foreign key referencing the original table's primary key.
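Something along these lines (the table and column names are assumptions based on the question):
CREATE TABLE user_message_deltas (
    user_message_id bigint PRIMARY KEY REFERENCES user_messages (id),
    delta           boolean NOT NULL DEFAULT false
);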
For the indexes, you could use CREATE INDEX CONCURRENTLY; it doesn't take such heavy locks on the table: http://www.postgresql.org/docs/9.1/static/sql-createindex.html.
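For example (the index name and indexed column are illustrative; note that CREATE INDEX CONCURRENTLY cannot run inside a transaction block):
CREATE INDEX CONCURRENTLY index_user_messages_on_delta ON user_messages (delta);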