Is it possible to wrap DDL changes in a transaction in PostgreSQL? - sql

I know that in MySQL, DDL statements such as ALTER TABLE / CREATE TABLE / etc. cause an implicit transaction commit.
As we are moving to PostgreSQL, is it possible to wrap multiple DDL statements in a transaction?
This would make migration scripts a lot more robust: a failed DDL change would cause everything to roll back.

DDL statements are covered by transactions. I can't find the relevant section in the official documentation, but I have provided a link to the wiki page which covers it.
Just remember that transactions aren't automatically opened in PostgreSQL; you must start them with BEGIN or START TRANSACTION.
PostgreSQL Wiki: Transactional DDL
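For illustration, a minimal sketch of what such a migration might look like in PostgreSQL (the table, column, and index names here are made up):

BEGIN;

CREATE TABLE accounts (
    id   serial PRIMARY KEY,
    name text NOT NULL
);
ALTER TABLE accounts ADD COLUMN balance numeric NOT NULL DEFAULT 0;
CREATE INDEX accounts_name_idx ON accounts (name);

-- If any statement above had failed, nothing would be applied;
-- you could also end with ROLLBACK; to discard everything, including the CREATE TABLE.
COMMIT;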

Not every PostgreSQL DDL statement can be wrapped in a transaction. Statements like DROP DATABASE / DROP TABLESPACE and some other file-system-related commands cannot be rolled back.
Also:
ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) cannot be executed inside a transaction block.
Also, some statements like TRUNCATE are not 'MVCC-safe': changes made by that kind of statement can affect other queries even if they are rolled back.
So read the official manual for your version of PostgreSQL to find out whether your DDL statements are transaction-safe.
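A rough sketch of the kind of errors you can expect (names are made up; the exact behaviour and wording depend on your PostgreSQL version):

BEGIN;
DROP DATABASE app_db;
-- ERROR:  DROP DATABASE cannot run inside a transaction block
ROLLBACK;

BEGIN;
ALTER TYPE order_status ADD VALUE 'cancelled';
-- Older releases reject this inside a transaction block; newer ones accept it
-- but forbid using the new enum value before the transaction commits.
ROLLBACK;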

Related

Why use Implicit and Explicit Transaction Modes in SQL Server?

I am reading the MS documentation on the different transaction modes in SQL Server.
Autocommit mode does everything the Implicit and Explicit Transaction modes do with less code, so why should I use Implicit or Explicit Transaction mode in my code?
An autocommit transaction covers only a single statement. If you need a transaction involving multiple statements, you must use Implicit or Explicit Transaction mode.
As you know, SQL Server automatically does the job of committing each statement. But sometimes we need to commit or roll back based on a particular condition, logic, or business rule(s).
For example, suppose we have one master table and one or more child/detail tables.
We must save the master table entry along with all detail rows that reference the master table's PK id; if anything goes wrong, the whole thing must be reverted.
In this scenario we need an explicit transaction to commit or roll back the work as a unit. We can use a TRY...CATCH block for error handling and roll back the transaction, as sketched below.
If we don't use such a transaction, SQL Server auto-commits each inserted row after every INSERT statement and nothing can ever be rolled back.
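A sketch of that pattern (the table and column names are invented; THROW needs SQL Server 2012 or later, use RAISERROR on older versions):

BEGIN TRY
    BEGIN TRANSACTION;

    INSERT INTO dbo.OrderHeader (CustomerId, OrderDate)
    VALUES (42, GETDATE());

    DECLARE @OrderId int = SCOPE_IDENTITY();

    INSERT INTO dbo.OrderLine (OrderId, ProductId, Qty) VALUES (@OrderId, 1, 2);
    INSERT INTO dbo.OrderLine (OrderId, ProductId, Qty) VALUES (@OrderId, 7, 1);

    COMMIT TRANSACTION;        -- master and detail rows become visible together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- undo the master row and any detail rows as one unit
    THROW;                     -- re-raise the error to the caller
END CATCH;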

Why does Drop Index require commit?

In my multi-threaded program, one thread drops indexes on a table (this happens first), and other threads insert records into the same table. It so happened that when the index drop is attempted, the table gets locked and the insert transactions end up "waiting".
After wasting a lot of time on non-solutions to the problem, I found the real solution is to commit immediately after dropping the index. When commit is issued, the table is unlocked and the insert transactions complete successfully.
My question is, why? I was under the impression that Drop Index is a DDL statement and therefore does not need to be committed. Postgres seems to prove me wrong.
In PostgreSQL, all DDL commands are transactional. So if you start a transaction block, or your driver starts a transaction block for you, or your driver is not in autocommit mode, you need to commit all DDL commands, just like other SQL commands.
Other SQL databases do this differently.
(Nitpicking: Some DDL commands in PostgreSQL cannot be run in a transaction block, only in a transaction by themselves. So you may consider those to be exceptions to the above "all DDL commands" claim. But that's not quite the same thing as your question: Those commands still need to be committed, they just can't be run in a transaction together with other commands.)
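For example, a sketch of what the original scenario probably looks like when autocommit is off (the index and table names are assumptions):

-- e.g. in psql after \set AUTOCOMMIT off, or inside an explicit block:
BEGIN;
DROP INDEX IF EXISTS items_name_idx;   -- takes an ACCESS EXCLUSIVE lock on the parent table
-- concurrent INSERTs into the table now wait here ...
COMMIT;                                -- ... and proceed once the commit releases the lock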
I don't know about Postgres, but DDL statements are not always auto-committed.
In Oracle, for example, they are, but in DB2 they are not (you can do a CREATE TABLE plus indexes and then roll back the whole lot). I think SQL Server also needs the commit (unless autocommit is on).
Basically, depending on the DB flavour, a DDL statement is not always auto-committed.

Undoing sql scripts

I have a problem to solve which requires undoing each executed SQL file in an Oracle database.
I execute them from an XML file with MSBuild's Exec task, which runs sqlplus with a login and @*.sql.
Obviously ROLLBACK won't do, because it can't roll back an already committed transaction.
I have been searching for several days and still can't find the answer. What I have learned about is Oracle Flashback and Point in Time Recovery. The problem is that I want the changes to be undone only for the current user, i.e. if another user makes some changes at the same time, my solution should undo only changes made by user 'X', not by 'Y'.
I found start_scn and commit_scn in flashback_transaction_query. But do they identify only one user? What if I flashback to a given SCN? Will that undo only my changes, or other users' changes as well? I have selected
SELECT start_scn FROM flashback_transaction_query WHERE logon_user = 'MY_USER_NAME'
and
AND table_name = 'MY_TABLE_NAME'
and performed
FLASHBACK TO SCN <the number found above>
on a chosen operation's SCN. Will that work for me?
I also found out about Point in Time Recovery, but from what I read it makes the whole database unavailable, so other users would be unable to work with it.
So I need something that will undo a whole *.sql file.
This is possible, but maybe not with the tools that you use. sqlplus can roll back your transaction; you just have to make sure autocommit isn't enabled and that your scripts contain only a single commit, right before you end the sqlplus session (if you don't commit at all, sqlplus will always roll back all changes when it exits). A minimal sketch follows.
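A minimal sketch of such a script, assuming it is run as sqlplus user/password @migration.sql (all object names are made up):

SET AUTOCOMMIT OFF
WHENEVER SQLERROR EXIT SQL.SQLCODE ROLLBACK

INSERT INTO employees (id, name) VALUES (1, 'Alice');
UPDATE departments SET budget = budget * 1.1;

-- the only commit, right at the end:
COMMIT;
EXIT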
The problems start when you have several scripts and you want, for example, to rollback a script that you ran yesterday. This is a whole new can of worms and there is no general solution that will always work (it's part of the "merge problem" group of problems, i.e. how can you merge transactions by different users when everyone can keep transactions open for as long as they like).
It can be done but you need to carefully design your database for it, the business rules must be OK with it, etc.
The general approach would be to have a table which records which rows were modified (created, updated, or deleted) by the script, plus the script name and the time when it was executed.
With this information, you can generate SQL which undoes the changes made by a script. To fill such a table, use triggers, or generate your scripts in such a way that they write this information as well (note: this is probably beyond a "simple" sqlplus solution; you will have to write your own data loader for this). A sketch of the trigger-based variant follows.
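As an illustration only, a sketch of such a tracking table and a per-table trigger (every name here is an assumption; the migration script would identify itself by calling DBMS_APPLICATION_INFO.SET_CLIENT_INFO before running):

CREATE TABLE script_audit (
    script_name VARCHAR2(200),
    executed_at TIMESTAMP DEFAULT SYSTIMESTAMP,
    table_name  VARCHAR2(128),
    pk_value    VARCHAR2(100),
    operation   VARCHAR2(10)
);

CREATE OR REPLACE TRIGGER employees_script_audit
AFTER INSERT OR UPDATE OR DELETE ON employees
FOR EACH ROW
DECLARE
    v_op script_audit.operation%TYPE;
BEGIN
    IF INSERTING THEN
        v_op := 'INSERT';
    ELSIF UPDATING THEN
        v_op := 'UPDATE';
    ELSE
        v_op := 'DELETE';
    END IF;

    INSERT INTO script_audit (script_name, table_name, pk_value, operation)
    VALUES (SYS_CONTEXT('USERENV', 'CLIENT_INFO'),  -- set by the migration script
            'EMPLOYEES',
            TO_CHAR(COALESCE(:NEW.id, :OLD.id)),
            v_op);
END;
/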
OK, I solved the problem by creating a DDL and a DML trigger. The first one takes the "extra" column (which is the DDL statement you have just entered) from v$open_cursor and inserts it into my table. The second gets undo_sql from flashback_transaction_query, which is the opposite of your DML action: if the action was an INSERT, then undo_sql is a DELETE with all the necessary data.
The triggers fire before DELETE/INSERT (DML) on a specific table and on ALTER/DROP/CREATE (DDL) on a specific SCHEMA or VIEW.

How to update SQL Batch?

At my client's site they have a database. Once I complete the incremental changes to the database, I prepare the list of SQL object changes in one SQL file.
The script is like this:
IF SQL object 1 is present in the database
    DROP SQL object 1
GO
CREATE SQL object 1
IF SQL object 2 is present in the database
    DROP SQL object 2
CREATE SQL object 2
Every time, I drop the existing object and re-create it.
Now, this batch may contain some errors.
My requirement is that if there is any error in the file, none of the SQL objects should be re-created; everything should roll back to the old SQL objects.
If there is no error, then it should create all the SQL objects.
Because of the GO statements in the middle, I am not able to use a TRANSACTION in SQL.
How can this be solved?
Don't use GO, then. Simply remove it from your script, and add your BEGIN and COMMIT TRANSACTION commands where you need them.
BEGIN TRAN;

IF EXISTS Object1
BEGIN
    DROP Object1;
END
CREATE Object1;

IF EXISTS Object2
BEGIN
    DROP Object2;
END
CREATE Object2;

COMMIT TRAN;
Modifying the database schema via DROP/CREATE has many problems:
it may lose data
it loses permissions and extended properties added to the dropped objects
cross-object dependencies (e.g. foreign keys) require a certain drop/create order
It is usually better to ALTER the object from schema version to schema version. This requires you to know which schema version is currently deployed, but that problem is easily solvable (use a database extended property; see Version Control and your Database). A small example follows.
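For instance, a database-level extended property could track the deployed version like this (the property name 'SchemaVersion' is just an example):

EXEC sys.sp_addextendedproperty @name = N'SchemaVersion', @value = N'42';

-- bump it as part of each migration:
EXEC sys.sp_updateextendedproperty @name = N'SchemaVersion', @value = N'43';

-- read it back before deciding which migration to run:
SELECT value
FROM sys.extended_properties
WHERE class = 0 AND name = N'SchemaVersion';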
Back to your question: a naive approach is to wrap your entire script in a big BEGIN TRAN/COMMIT, but that seldom works:
it creates a potentially large transaction that requires a lot of log space
the result is impossible to validate until after the commit, when it is too late to do anything about it
the behavior when mingling exceptions and transactions is messy at best; XACT_ABORT ON helps somewhat, but only so much
not all DDL statements can be run inside a transaction
For these reasons I would recommend a much simpler and safer approach: take a backup, WITH COPY_ONLY, of the database before modifying the schema. If anything goes wrong, restore from that copy. Alternatively, a database snapshot can be used as the fallback; see How to: Revert a Database to a Database Snapshot.
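A sketch of both options (the database name, logical file name, and paths are assumptions):

BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_pre_migration.bak'
    WITH COPY_ONLY, INIT;

-- or take a database snapshot instead:
CREATE DATABASE MyDb_pre_migration
    ON (NAME = MyDb_Data, FILENAME = N'D:\Snapshots\MyDb_pre_migration.ss')
    AS SNAPSHOT OF MyDb;

-- if the schema change goes wrong, revert to the snapshot:
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = N'MyDb_pre_migration';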
Note that BEGIN TRAN/COMMIT can span batches (i.e. they can be separated by multiple GO statements), so that particular concern about GO is not an issue.

How do transactions within Oracle stored procedures work? Is there an implicit transaction?

In an Oracle stored procedure, how do I write a transaction? Do I need to do it explicitly or will Oracle automatically lock rows?
You might want to browse the concept guide, in particular the chapter about transactions:
A transaction is a logical unit of work that comprises one or more SQL statements run by a single user. [...] A transaction begins with the user's first executable SQL statement. A transaction ends when it is explicitly committed or rolled back by that user.
You don't have to explicitly start a transaction; it is started automatically. You do have to end the transaction explicitly with a COMMIT (or a ROLLBACK).
The locking mechanism is a fundamental part of the DB, read about it in the chapter Data Concurrency and Consistency.
Regarding stored procedures
A stored procedure is a set of statements; they are executed in the same transaction as the calling session (*). Usually, transaction control (commit and rollback) belongs to the calling application. The calling app has a wider view of the process (which may involve several stored procedures) and is therefore in a better position to determine whether the data is in a consistent state. While you can commit in a stored procedure, it is not the norm.
(*) except if the procedure is declared as an autonomous transaction, in which case it is executed as an independent session (thanks, be here now; now I see your point).
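A small sketch of an autonomous-transaction procedure (the app_log table and the procedure name are made up):

CREATE OR REPLACE PROCEDURE log_message (p_text IN VARCHAR2) AS
    PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
    INSERT INTO app_log (logged_at, message)
    VALUES (SYSTIMESTAMP, p_text);
    COMMIT;   -- commits only this procedure's own work;
              -- the caller's open transaction is left untouched
END;
/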
@AdamStevenson Concerning DDL, here's a quote from the Concepts Guide:
If the current transaction contains any DML statements, Oracle first commits the transaction, and then runs and commits the DDL statement as a new, single statement transaction.
So if you have started a transaction before the DDL statement (e.g. issued an INSERT, UPDATE, DELETE, or MERGE statement), that transaction will be implicitly committed by the DDL; you should always keep that in mind when processing DML statements.
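For example (the table and index names are invented):

INSERT INTO employees (id, name) VALUES (1, 'Alice');   -- DML: a transaction is now open

CREATE INDEX employees_name_idx ON employees (name);    -- DDL: Oracle first commits the INSERT,
                                                        -- then runs and commits the CREATE INDEX

ROLLBACK;   -- has no effect; the INSERT was already committed by the DDL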
I agree with Vincent Malgrat; you will find some very useful information about transaction processing in the Concepts Guide.