How do I replace a table in Postgres?

Basically I want to do this:
begin;
lock table a;
alter table a rename to b;
alter table a1 rename to a;
drop table b;
commit;
i.e. gain control and replace my old table while no one has access to it.

Simpler:
BEGIN;
DROP TABLE a;
ALTER TABLE a1 RENAME TO a;
COMMIT;
DROP TABLE acquires an ACCESS EXCLUSIVE lock on the table anyway. An explicit LOCK command is no better. And renaming a dead guy is just a waste of time.
You may want to write-lock the old table while preparing the new, to prevent writes in between. Then you'd issue a lock like this earlier in the process:
LOCK TABLE a IN SHARE MODE;
What happens to concurrent transactions trying to access the table? It's not that simple, read this:
Best way to populate a new column in a large table?
Explains why you may have seen error messages like this:
ERROR: could not open relation with OID 123456
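Putting it together, a minimal sketch of the whole swap; how a1 gets built here (LIKE plus a plain copy) is only an assumption for illustration:
BEGIN;
-- Block concurrent writes to a while the replacement is being prepared;
-- plain reads are still allowed under SHARE mode.
LOCK TABLE a IN SHARE MODE;

-- Build the replacement however you need to; this copy is just a placeholder.
CREATE TABLE a1 (LIKE a INCLUDING ALL);
INSERT INTO a1 SELECT * FROM a;

-- The swap itself; DROP TABLE takes its ACCESS EXCLUSIVE lock on its own.
DROP TABLE a;
ALTER TABLE a1 RENAME TO a;
COMMIT;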

Create an SQL backup, make the changes you need directly in the backup.sql file, and restore the database. I used this trick when I added INHERIT for a group of tables (Postgres DBMS) to remove the inherited fields from a subtable.

I would use answer#13, but I agree that it will not carry over the constraints, and the DROP TABLE might fail, so:
line up the relevant constraints first (e.g. from pg_dump --schema-only),
drop the constraints,
do the swap per answer#13,
apply the constraints again (SQL snippets from the schema dump; a sketch follows below).
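For illustration, the drop/re-apply steps could look like the following; the constraint and referencing table names are made up, and the real ALTER TABLE ... ADD CONSTRAINT statements would be copied from the pg_dump --schema-only output:
-- Before the swap: drop constraints that point at the old table (hypothetical names).
ALTER TABLE orders DROP CONSTRAINT orders_a_id_fkey;

-- ... do the swap as described above ...

-- After the swap: re-apply the constraint, copied from the schema dump.
ALTER TABLE orders ADD CONSTRAINT orders_a_id_fkey
    FOREIGN KEY (a_id) REFERENCES a (id);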

Related

The "proper" way to atomically replace all contents in a PostgreSQL table?

In the project I have recently been working on, many (PostgreSQL) database tables are just used as big lookup arrays. We have several background worker services which periodically pull the latest data from a server and then replace the entire contents of a table with that data. The replacement has to be atomic, because we don't want a partially loaded table to be seen by readers doing lookups.
I thought the simplest way to do the replacing is something like this:
BEGIN;
DELETE FROM some_table;
COPY some_table FROM 'source file';
COMMIT;
But I found a lot of production code use this method instead:
BEGIN;
CREATE TABLE some_table_tmp (LIKE some_table);
COPY some_table_tmp FROM 'source file';
DROP TABLE some_table;
ALTER TABLE some_table_tmp RENAME TO some_table;
COMMIT;
(I omit some logic, such as changing the owner of a sequence, etc.)
I just can't see any advantage of this method, especially after some discoveries and experiments: SQL statements like ALTER TABLE and DROP TABLE acquire an ACCESS EXCLUSIVE lock, which blocks even a SELECT.
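For what it's worth, the blocking is easy to observe from a second session while such a transaction is open; a sketch using pg_locks (relation name as in the example above):
SELECT locktype, relation::regclass AS relation, mode, granted
FROM pg_locks
WHERE relation = 'some_table'::regclass;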
Can anyone explain what problem the latter SQL pattern is trying to solve? Or is it wrong, and should we avoid using it?

Oracle creating table from another table created partially; unable to extend temp space

We are trying to create a table from another table with this method:
create table tab1 as select * from tab2;
But the process failed with the error:
ORA-01652: unable to extend temp segment by 8192 in tablespace
However, the table tab1 was created with partial data only. There is a count mismatch between tab1 and tab2. Neither of the two tables is being populated or updated by any transaction. This happened with a couple of tables.
To my knowledge, a CREATE TABLE should either create the table completely or not at all; there should be no possibility of a table being created partially.
Any insight from experts is appreciated.
Putting the cause of the error aside (addressed by #Leo in his answer):
I have not found anything specific on transactions for CREATE TABLE ... AS SELECT. Any CREATE TABLE statement is a DDL operation, and DDL operations are generally non-transactional.
This is just speculation, but I'd say that the table creation did succeed. The statement you issued is basically two operations in one: the first is the actual table creation, which does work (and, not being transactional, cannot be rolled back by the second), and the second is a variant of a bulk insert from a select (with implicit commits for batches), which breaks at some point.
This is probably not answering your question, but as the operation is apparently two-phase anyway, if you need a more transactional approach you would benefit from splitting it into two separate statements:
first:
CREATE TABLE tab1 AS SELECT * FROM tab2 WHERE 1 = 2;
second:
INSERT INTO tab1 SELECT * FROM tab2;
This way if the second part fails, you will not end up with a partial insert. You will still have the table in place though.
As a sysadmin, execute the following to determine the data file name for the existing tablespace:
SELECT * FROM DBA_DATA_FILES;
Then extend the size of the datafile as follows (replace the filename with the one from the previous query):
ALTER DATABASE DATAFILE 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' RESIZE 4096M;
You can first try the command below, or ask your DBA to grant the privilege:
grant unlimited tablespace to <schema_name>;

Safely replace table with new data and schema

I am trying to create a stored procedure to recreate a table from scratch, with a possible change of schema (including possible additions/removals of columns), by using a DROP TABLE followed by a SELECT INTO, like this:
BEGIN TRAN
DROP TABLE [MyTable]
SELECT (...) INTO [MyTable] FROM (...)
COMMIT
My concern is that errors could be generated if someone tries to access the table after it has been dropped but before the SELECT INTO has completed. Is there a way to lock [MyTable] in a way that will persist through the DROP?
Instead of DROP/SELECT INTO, I could TRUNCATE/INSERT INTO, but this would not allow the schema to be changed. SELECT INTO is convenient in my situation because it allows the new schema to be automatically determined. Is there a way to make this work safely?
Also, I would like to be sure that the source tables in "FROM (...)" are not locked during this process.
If you make a significant change to a table using SSMS (like adding a column in the middle of the existing columns, not at the end) and look at the script it generates, you'll see that SSMS uses sp_rename.
The general structure of the SSMS script:
create a new table with temporary name
populate the new table with data
drop the old table
rename the new table to the correct name.
All this in a transaction.
This should keep the time when tables are locked to a minimum.
BEGIN TRANSACTION
SELECT (...) INTO dbo.Temp_MyTable FROM (...)
DROP TABLE dbo.MyTable
EXECUTE sp_rename N'dbo.Temp_MyTable', N'MyTable', 'OBJECT' -- the new name must not be schema-qualified
COMMIT
DROP TABLE MyTable acquires a schema modification (Sch-M) lock on it until the end of the transaction, so all other queries using MyTable would wait, even if they use the READ UNCOMMITTED isolation level (or the infamous WITH (NOLOCK) hint).
See also MSDN Lock Modes:
Schema Locks
The Database Engine uses schema modification (Sch-M) locks during a table data definition language (DDL) operation, such as adding a column or dropping a table. During the time that it is held, the Sch-M lock prevents concurrent access to the table. This means the Sch-M lock blocks all outside operations until the lock is released.
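If you want to verify this while testing, here is a sketch of a query you could run from another session while the transaction above is still open; it simply lists object-level locks, so filter it further as needed:
-- Shows the Sch-M lock held by the dropping session and any waiting requests.
SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT';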

How to rename/recreate a table without disrupting service?

I have a table I need to purge without disrupting the service. About 99.99% of the data should be deleted, so I'm trying to recreate the table and move the 0.01% of useful data into the new table as follows (and I will truncate the old table later):
BEGIN ISOLATION LEVEL SERIALIZABLE;
LOCK TABLE table1 IN ACCESS EXCLUSIVE MODE;
/* I rename the old table */
ALTER TABLE table1 RENAME TO table1_to_be_deleted;
/* And I recreate the table */
CREATE TABLE table1 (
...
);
/* Restore useful data from the old table into the new one */
INSERT INTO table1 SELECT * FROM table1_to_be_deleted WHERE toBeKept = 1;
COMMIT;
But when I run my transaction I get client errors about rows that are not found in the new table even though they are present in the old one. These rows are tagged to be kept, so they should have been copied from the old table to the new one inside the transaction and found by the clients' requests...
When other requests are waiting for a lock acquired on a table, do they keep a pointer to the targeted object? That's the only explanation I have for the old table still being used after I commit my transaction...
PS: I'm using Postgres 9.1
To do that I'd rather:
create an auxiliary table
create rules to redirect DML from the original table to the auxiliary one (see the sketch below)
create a rule to SELECT instead of the original, with both tables UNIONed
move the good data from ONLY the original to the auxiliary table
truncate the original
either move the data back (so references need not be rebuilt) or rename
drop the obsolete rules and objects
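A rough sketch of the insert-redirect step (the auxiliary table and rule names are made up, and the toBeKept flag is borrowed from the question; a SELECT rule is more involved and is left out):
CREATE TABLE table1_aux (LIKE table1 INCLUDING ALL);

-- Send new inserts to the auxiliary table instead of table1.
CREATE RULE table1_insert_redirect AS
    ON INSERT TO table1
    DO INSTEAD INSERT INTO table1_aux VALUES (NEW.*);

-- Keep the good rows, then empty the original.
INSERT INTO table1_aux SELECT * FROM ONLY table1 WHERE toBeKept = 1;
TRUNCATE ONLY table1;

-- Later: move the data back (or rename), then drop the rule.
DROP RULE table1_insert_redirect ON table1;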
But really, I'd just do a DELETE FROM ... WHERE for the 99%, not reinvent the wheel.

Define FK where target table does not exist

When I create a table that has FK definitions directly in the CREATE command and the target table does not exist yet, it results in an error.
Can the check that the target table exists somehow be suspended?
My DBMS is Postgres.
Example (pseudocode):
create table "Bar" (
    foo_id integer references "Foo" ("id"),
    someattr text
);
create table "Foo" (
    id integer
);
The example is in the wrong order; that's why it won't run.
I'm trying to recreate the database in batch, based on definitions in many SQL files.
When I create a table that has FK definitions directly in the CREATE command and the target table does not exist yet, it results in an error.
Can the check that the target table exists somehow be suspended?
The best ways to deal with this are likely:
Create your tables in the correct order, or
Create the constraints outside the table creation, after all tables are created (see the sketch below).
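For the second option, a minimal sketch based on the pseudocode above (the column definitions and constraint name are assumptions; a primary key is added to "Foo".id so the FK has something to reference):
create table "Bar" (
    foo_id integer,
    someattr text
);

create table "Foo" (
    id integer primary key
);

-- Add the FK only after both tables exist.
alter table "Bar"
    add constraint bar_foo_id_fkey foreign key (foo_id) references "Foo" (id);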
Brute force is always an option.
Keep on running your DDL scripts until you get a run with no errors.
More elegance requires a sequential structuring of your scripts.
Adding existence checks is possible, but I am not too familiar with the Postgres metadata.