Why does DROP INDEX require a commit?

In my multi-threaded program, one thread drops indexes on a table (this happens first), and other threads insert records into the same table. What happened is that when the index drop was attempted, the table got locked and the insert transactions just sat there "waiting".
After wasting a lot of time on non-solutions, I found that the real fix is to commit immediately after dropping the index. Once the commit is issued, the table is unlocked and the insert transactions complete successfully.
My question is: why? I was under the impression that DROP INDEX is a DDL statement and therefore does not need to be committed. Postgres seems to prove me wrong.

In PostgreSQL, all DDL commands are transactional. So if you start a transaction block, or your driver starts a transaction block for you, or your driver is not in autocommit mode, you need to commit all DDL commands, just like other SQL commands.
Other SQL databases do this differently.
(Nitpicking: Some DDL commands in PostgreSQL cannot be run in a transaction block, only in a transaction by themselves. So you may consider those to be exceptions to the above "all DDL commands" claim. But that's not quite the same thing as your question: Those commands still need to be committed, they just can't be run in a transaction together with other commands.)
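For example, with autocommit off the drop only takes effect, and the table lock is released, at commit. A minimal sketch, with an invented index name:
BEGIN;
DROP INDEX my_table_idx;   -- takes an exclusive lock on the indexed table
COMMIT;                    -- the lock is released and the waiting inserts proceed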

I don't know about Postgres, but DDL statements are not always auto-committed.
In Oracle, for example, they are; in DB2 they are not (you can do a CREATE TABLE plus indexes and then roll back the whole lot). I think SQL Server also needs the commit (unless auto-commit is on).
Basically, whether a DDL statement is auto-committed depends on the DB flavour.
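For instance, in DB2 (with autocommit off) something like the following can be rolled back as a whole; the names are made up:
CREATE TABLE staging_orders (id INTEGER NOT NULL PRIMARY KEY, note VARCHAR(100));
CREATE INDEX staging_orders_note_ix ON staging_orders (note);
ROLLBACK;   -- both the table and the index are gone again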

Related

Rollback to savepoint doesn't release locks

I think I have a misunderstanding about how to use savepoints. Perhaps someone can clear it up for me. Here is an example of what I am trying to do and what I have experienced.
My app is doing a certain procedure.
Before that procedure (and associated DB operations) I create a savepoint.
During that procedure, I initiate a SELECT FOR UPDATE, which creates a number of locks:
lock1 - duration=transaction, class=row, type=intent row=big number
lock2 - duration=transaction, class=row, type=WriteNoPK row=big number
Should that Java procedure succeed, the associated DB transaction is completed via a commit.
However, if the Java procedure fails, I also want to roll back any associated DB operations.
I have been attempting this by:
conn.rollback(mySavepoint);
However, this has not been releasing the table locks created (above) by the DB operations (which I thought I had just rolled back with conn.rollback(mySavepoint)).
I have tested this behaviour with two databases: Sybase and Derby.
Why is this the case?
Do I really need to commit after the conn.rollback(mySavepoint)?
It just seems a bit counter-intuitive.
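In plain SQL terms, what I am doing boils down to this (a sketch; the table name, id value and savepoint name are placeholders):
SAVEPOINT before_procedure;
SELECT * FROM orders WHERE id = 1 FOR UPDATE;    -- acquires the row locks listed above
ROLLBACK TO SAVEPOINT before_procedure;          -- undoes the work done since the savepoint
-- ...yet the locks still appear to be held here
COMMIT;                                          -- only ending the transaction releases them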

Is it possible to wrap DDL changes in a transaction in PostgreSQL?

I know that in MySQL, DDL statements such as ALTER TABLE / CREATE TABLE / etc. cause an implicit transaction commit.
As we are moving to PostgreSQL, is it possible to wrap multiple DDL statements in a transaction?
This would make migration scripts a lot more robust: a failed DDL change would cause everything to roll back.
DDL statements are covered by transactions. I can't find the relevant section in the official documentation, but have provided a link to the wiki which covers it.
Just remember that transaction blocks aren't automatically opened in PostgreSQL; you must start them with BEGIN or START TRANSACTION.
PostgreSQL wiki about Transactional DDL
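So a migration script can be wrapped like this (a sketch; the table and column names are invented), and if any statement fails the whole block rolls back:
BEGIN;
ALTER TABLE accounts ADD COLUMN last_login timestamptz;
CREATE INDEX accounts_last_login_idx ON accounts (last_login);
COMMIT;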
Not every Postgres DDL statement can be wrapped in a transaction. Statements like DROP DATABASE / DROP TABLESPACE and some other file-system-related commands can't be rolled back.
Also:
ALTER TYPE ... ADD VALUE (the form that adds a new value to an enum type) cannot be executed inside a transaction block.
Also, some statements like TRUNCATE are "not MVCC-safe": once such a statement commits, its effects can be visible to concurrent queries even if they are running with a snapshot taken before it.
So read the official manual for your version of Postgres to find out whether your DDL statements are transaction-safe.

How do transactions within Oracle stored procedures work? Is there an implicit transaction?

In an Oracle stored procedure, how do I write a transaction? Do I need to do it explicitly or will Oracle automatically lock rows?
You might want to browse the Concepts Guide, in particular the chapter about transactions:
A transaction is a logical unit of work that comprises one or more SQL statements run by a single user. [...] A transaction begins with the user's first executable SQL statement. A transaction ends when it is explicitly committed or rolled back by that user.
You don't have to explicitly start a transaction; it is done automatically. You do have to specify the end of the transaction with a commit (or a rollback).
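In plain SQL that looks like this (a sketch; the table and column names are invented): the transaction begins with the first DML statement and ends only at your COMMIT or ROLLBACK:
UPDATE employees SET salary = salary * 1.10 WHERE dept_id = 10;    -- the transaction begins here
DELETE FROM employee_audit WHERE created_at < DATE '2020-01-01';   -- still the same transaction
COMMIT;                                                             -- or ROLLBACK; this ends it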
The locking mechanism is a fundamental part of the DB, read about it in the chapter Data Concurrency and Consistency.
Regarding stored procedures
A stored procedure is a set of statements, they are executed in the same transaction as the calling session (*). Usually, transaction control (commit and rollback) belongs to the calling application. The calling app has a wider vision of the process (which may involve several stored procedures) and is therefore in a better position to determine if the data is in a consistent state. While you can commit in a stored procedure, it is not the norm.
(*) except if the procedure is declared as an autonomous transaction, in which case it is executed as an independent transaction (thanks Be Here Now, now I see your point).
@AdamStevenson Concerning DDL, there's a quote from the Concepts Guide:
If the current transaction contains any DML statements, Oracle first commits the transaction, and then runs and commits the DDL statement as a new, single statement transaction.
So if you have started a transaction before the DDL statement (e.g. executed an INSERT, UPDATE, DELETE, or MERGE statement), that transaction will be implicitly committed - you should always keep that in mind when processing DML statements.
I agree with Vincent Malgrat; you will find some very useful information about transaction processing in the Concepts Guide.
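A quick way to see the effect described in that quote (a sketch with invented names):
INSERT INTO employee_audit (msg) VALUES ('before the DDL');    -- starts a transaction
CREATE INDEX employee_audit_msg_ix ON employee_audit (msg);    -- first commits the INSERT, then commits itself
ROLLBACK;                                                      -- too late: the INSERT is already committed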

Truncate Table Within Transaction

Can the SQL "truncate table" command be used within a transaction? I am creating an app and my table has a ton of records. I want to delete all the records, but if the app fails I want to roll back my transaction. Deleting each record takes a very long time. I'm wondering whether, if I use TRUNCATE TABLE, I can still roll back the transaction and get my data back in the event of a failure. I realize that TRUNCATE TABLE doesn't write each delete to the transaction log, but I'm wondering if it writes the page deallocation to the log so that rollback works.
In SQL Server, you can rollback a TRUNCATE from a transaction. It does write page deallocation to the log, as you mentioned.
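For example (a sketch; the table name is made up):
BEGIN TRANSACTION;
TRUNCATE TABLE big_table;            -- the page deallocations are written to the log
ROLLBACK TRANSACTION;                -- the deallocations are undone
SELECT COUNT(*) FROM big_table;      -- the original row count is back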
In Oracle, TRUNCATE TABLE is a DDL statement that cannot be used in a transaction (or, more accurately, cannot be rolled back). AFAIK, if there is a transaction in progress when the statement is executed, the transaction is committed and then the TRUNCATE is executed and cannot be undone.
In Informix, the behaviour of TRUNCATE is slightly different; you can use TRUNCATE in a transaction, but the only statements permissible after that are COMMIT and ROLLBACK.
Other DBMS probably have their own idiosyncratic interpretations of the behaviour of TRUNCATE TABLE.
If you read the official documentation of PostgreSQL, it says that TRUNCATE is transaction-safe with respect to the table data: the truncation is safely rolled back if the surrounding transaction does not commit.

Sybase ASE: "Your server command encountered a deadlock situation"

When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) get this error at random:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
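For example, switching a table to datarows locking and refreshing its statistics could look like this (a sketch; the table name is a placeholder):
alter table my_table lock datarows   -- change the locking scheme
go
update statistics my_table           -- gather fresh statistics
go
sp_recompile my_table                -- flag dependent procedures and triggers for recompilation
go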
I have a set of long-running apps which occasionally overlap in their table access, and Sybase will throw this error. If you check the Sybase server log it will give you the complete info on why it happened: the SQL that was involved and the two processes trying to get a lock, usually one trying to read and the other doing something like a delete. In my case the apps are running in separate JVMs, so I can't synchronize them and just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes; always worth checking via the query plan), you could try breaking the stored procedure down into its component parts and wrapping them in separate transactions, so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go