Is it possible to roll back CREATE TABLE and ALTER TABLE statements in major SQL databases? - sql

I am working on a program that issues DDL. I would like to know whether CREATE TABLE and similar DDL can be rolled back in
Postgres
MySQL
SQLite
et al
Describe how each database handles transactions with DDL.

http://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis provides an overview of this issue from PostgreSQL's perspective.
According to that document, here is how each database handles transactional DDL:
PostgreSQL - yes
MySQL - no; DDL causes an implicit commit
Oracle Database 11g Release 2 and above - by default, no, but an alternative called edition-based redefinition exists
Older versions of Oracle - no; DDL causes an implicit commit
SQL Server - yes
Sybase Adaptive Server - yes
DB2 - yes
Informix - yes
Firebird (Interbase) - yes
SQLite appears to have transactional DDL as well. I was able to ROLLBACK a CREATE TABLE statement in SQLite, and its CREATE TABLE documentation does not mention any special transactional 'gotchas'.
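For example, the following minimal sketch (the table name demo_rollback is invented) rolls back cleanly in SQLite and PostgreSQL:

BEGIN;
CREATE TABLE demo_rollback (id INTEGER PRIMARY KEY, name TEXT);
-- the table exists inside this transaction
ROLLBACK;
-- after the rollback the table is gone again; the next statement fails
-- with "no such table" (SQLite) / "relation does not exist" (PostgreSQL):
SELECT * FROM demo_rollback;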

PostgreSQL has transactional DDL for most database objects (certainly tables, indices, etc., but not databases or users). However, practically any DDL will take an ACCESS EXCLUSIVE lock on the target object, making it completely inaccessible until the DDL transaction finishes. Also, not all situations were handled gracefully; for example, if you tried to select from table foo while another transaction was dropping it and creating a replacement table foo, the blocked transaction would eventually receive an error rather than finding the new foo table. (Edit: this was fixed in or before PostgreSQL 9.3.)
CREATE INDEX ... CONCURRENTLY is an exception: it uses three transactions to add an index to a table while allowing concurrent updates, so it cannot itself be performed inside a transaction.
Also the database maintenance command VACUUM cannot be used in a transaction.
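As a concrete illustration (a sketch only; the table foo and its id column are assumed to exist), ordinary DDL rolls back cleanly in PostgreSQL, while CREATE INDEX ... CONCURRENTLY refuses to run inside a transaction block:

BEGIN;
ALTER TABLE foo ADD COLUMN bar integer;
ROLLBACK;   -- the new column is gone again

BEGIN;
CREATE INDEX CONCURRENTLY foo_id_idx ON foo (id);
-- rejected: CREATE INDEX CONCURRENTLY cannot run inside a transaction block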

Can't be done with MySQL, it seems; very dumb, but true (as per the accepted answer):
"The CREATE TABLE statement in InnoDB is processed as a single
transaction. This means that a ROLLBACK from the user does not undo
CREATE TABLE statements the user made during that transaction."
https://dev.mysql.com/doc/refman/5.7/en/implicit-commit.html
Tried a few different ways and it simply won't roll back.
The workaround is to set a failure flag and do "drop table tblname" if one of the queries fails.
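A rough sketch of that workaround (the table name tblname is just a placeholder; the failure flag itself lives in application code):

CREATE TABLE tblname (id INT PRIMARY KEY);
-- ... run the remaining statements, remembering in the application whether any of them failed ...
-- ROLLBACK will not remove the table, so on failure clean up explicitly:
DROP TABLE IF EXISTS tblname;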

Looks like the other answers are pretty outdated.
As of 2019:
Postgres has supported transactional DDL for many releases.
SQLite has supported transactional DDL for many releases.
MySQL has supported atomic DDL since 8.0 (released in 2018). Note, however, that atomic DDL makes each DDL statement crash-safe and all-or-nothing; DDL still causes an implicit commit, so it still cannot be rolled back as part of a larger transaction.

While it is not strictly speaking a "rollback", in Oracle the FLASHBACK command can be used to undo these types of changes, if the database has been configured to support it.
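For example, a dropped table can be restored from the recycle bin, assuming the recycle bin is enabled (a sketch; the table name is invented):

DROP TABLE important_stuff;
-- later, undo the drop by restoring the table from the recycle bin:
FLASHBACK TABLE important_stuff TO BEFORE DROP;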

Related

SQL operations to Database catalog

Are we able to perform SQL operations like INSERT, UPDATE, and DELETE on the database catalog? (This is more a theory question than a practical one.)
If a database supports INFORMATION_SCHEMA and provides mechanisms for altering the database catalog, then yes, you can use SQL operations on it normally.
For example, in PostgreSQL documentation you can read:
The system catalogs are the place where a relational database management system stores schema metadata, such as information about tables and columns, and internal bookkeeping information. PostgreSQL's system catalogs are regular tables. You can drop and recreate the tables, add columns, insert and update values, and severely mess up your system that way. Normally, one should not change the system catalogs by hand, there are always SQL commands to do that. (For example, CREATE DATABASE inserts a row into the pg_database catalog — and actually creates the database on disk.)
So, you change the catalog indirectly by creating a new database. Nonetheless, with PostgreSQL you can also change the catalog directly, using SQL commands like DROP, INSERT, UPDATE, and so on.
Some RDBMSs don't provide such a possibility at all, for example Oracle Database, IBM DB2, SQLite, and Sybase ASE. Others, such as MySQL, provide INFORMATION_SCHEMA but keep it read-only, so you can't do anything crazy. The MySQL documentation reads:
Although you can select INFORMATION_SCHEMA as the default database with a USE statement, you can only read the contents of tables, not perform INSERT, UPDATE, or DELETE operations on them.
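For instance, reading the catalog with plain SELECTs works in any system that exposes INFORMATION_SCHEMA, while PostgreSQL additionally allows a superuser to modify its catalogs directly. A sketch (the schema name 'public' is PostgreSQL's default and the table names are invented; the direct catalog update is dangerous and shown only to illustrate the point):

-- reading is fine everywhere:
SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';

-- PostgreSQL also lets you write to the catalogs directly,
-- e.g. renaming a table by updating pg_class (ALTER TABLE ... RENAME is the sane way):
UPDATE pg_class SET relname = 'new_name' WHERE relname = 'old_name';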

Can I specify the locking scheme for a table, or does it depend on the transaction?

In Sybase, I can specify the locking scheme for a table: datarows, datapages, or allpages locking.
Below is an example in Sybase of how to create a table and specify its locking scheme:
create table dbo.EX_EMPLOYEE(
TEXT varchar(1000) null
)
alter table EX_EMPLOYEE lock allpages
go
In SQL Server there are such table locks (see this SO answer), but can I specify the lock for a table?
My question: can I specify the type of table lock, or does SQL Server work differently? Does it depend on the query that I run?
In this link it says:
As Andreas pointed out, there is no default locking level; locks are taken according to the operation you are trying to perform in the database. Just some examples: if it is a delete/update of a particular row, an exclusive lock will be taken on that row; if it is a select operation, a shared lock will be taken; if it is an alter table, a schema modification lock will be taken; and so on and so forth. As Jeremy pointed out, if you are looking for the isolation level, it is read committed.
Are they right? Can I say that table locking in Sybase is different from SQL Server?
The locking mechanisms are not the same, but you do have some control over locking in SQL Server: for example, you can specify hints such as WITH (ROWLOCK), WITH (PAGLOCK), or WITH (TABLOCKX) on a query; the last takes an exclusive table lock.
As with all such locks, when you take control you also take responsibility for the blocking you can cause, so use them carefully.
Docs with full descriptions : https://learn.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-table
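A minimal sketch of such hints (the table and column names are taken from the question above and may not match your schema):

-- take an exclusive table-level lock for this statement:
SELECT * FROM dbo.EX_EMPLOYEE WITH (TABLOCKX);

-- or ask the engine to prefer row-level locks for an update:
UPDATE dbo.EX_EMPLOYEE WITH (ROWLOCK)
SET TEXT = 'updated'
WHERE TEXT IS NULL;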

Is there a difference in the way DDLs and DMLs are implemented by a database?

DDL and DML are two strict categories of statements used for interacting with a database. I am not sure why this categorization exists.
Is there a fundamental difference in the way an Oracle database would work internally with respect to a DDL and DML statement?
One major (technical) difference between DDL and DML in Oracle is that DDL is not transactional, i.e. DDL statements cannot be rolled back and don't require a commit. As a matter of fact, DDL in Oracle does an implicit commit before it is executed.
Other databases (e.g. Postgres, DB2) make no difference between DDL and DML with regard to transactions.
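A quick way to see the implicit commit in Oracle (a sketch; the table names are invented):

INSERT INTO orders (id) VALUES (1);    -- not yet committed
CREATE TABLE scratch_tab (id NUMBER);  -- DDL: implicit commit happens here
ROLLBACK;
-- the ROLLBACK undoes nothing: the INSERT was already committed by the DDL,
-- and the new table remains as well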
After all, it's just a categorization, similar to the terms "application" and "server" (as in database server). From the operating system's point of view, OpenOffice and Oracle are both simply "applications", yet we classify them into different categories.
DDL statements are used to define database structures, objects, and schemas, whereas DML statements are used for managing data within schema objects. At the end of the day, Oracle (or any other data management system) processes each type of statement according to security permissions and object availability (i.e. locks on tables/views and isolation levels).
Also, schema definitions are held in internal master tables, so your DDL statements actually affect the data stored in those tables and can perhaps be considered "master DML" statements in that sense.
If your question amounts to "is there a reason why it is necessary for DDL and DML to be implemented differently?", the answer is "no".
However, the definers of the SQL language have opted to make DDL syntactically distinct. As a consequence, adding a column to a table must be done through the appropriate ALTER TABLE command. A side-effect of that command is that a row gets inserted into the catalog table that documents all columns. Note the stress on "side-effect".
But there is no fundamental reason why the insertion of a row in the catalog table could not be the trigger itself for the column addition, thus entirely eliminating the need for any "dedicated DDL".

How to drop all triggers in a Firebird 1.5 database

For debugging purposes I need to send one table of an existing Firebird 1.5 database to someone.
Instead of sending the whole DB, I want to send the DB with just this table: no triggers, no constraints. I can't copy the data to another DB, because it's exactly this that we want to check: why this one table is giving us trouble.
I am just wondering if there is a way to drop all triggers, all constraints, and all but one table (using some clever trick with the system tables, perhaps)?
Using a GUI tool (I personally prefer IBExpert), execute the following command:
select 'DROP TRIGGER ' || rdb$trigger_name || ';' from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null)
Copy the result to the clipboard, then paste and execute it within the Script Executive window.
If you can switch your database to Firebird 2.1, there are switches in gbak and isql for suppressing triggers:
Some Firebird command-line tools have been supplied with new switches to suppress the automatic firing of database triggers:
gbak -nodbtriggers
isql -nodbtriggers
nbackup -T
These switches can only be used by the database owner and SYSDBA.
You can drop all triggers by directly deleting them from the system table, like so:
delete from rdb$triggers
where (rdb$system_flag = 0 or rdb$system_flag is null);
Note that the normal way, using DROP TRIGGER statements, is certainly preferable, but it can be done.
You can also drop constraints by executing DDL statements, but to enumerate constraints and drop them in a SQL script you would need the execute block functionality that Firebird 1.5 doesn't have.
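You can, however, generate the statements with a query and run the result by hand, following the same copy-and-paste approach as for the triggers above. A sketch (restricted to foreign keys to avoid ordering problems; adjust the filter as needed):

select 'ALTER TABLE ' || rdb$relation_name
       || ' DROP CONSTRAINT ' || rdb$constraint_name || ';'
from rdb$relation_constraints
where rdb$constraint_type = 'FOREIGN KEY';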
There are similar statements to delete other database objects, but actually running these successfully may be much more difficult because of dependencies between objects. You can't drop any object as long as another object depends on it. This can become really tricky due to circular references, where two (or even more) objects depend on one another, forming a cycle, so there isn't a single one that may be dropped first.
The way around this is to break one of the dependencies. A procedure, for example, that has dependencies on other objects can be altered to have an empty body, after which it no longer depends on those other objects, so they can then be dropped. Dropping foreign keys is another way of eliminating dependencies between tables.
I don't know of any tool implementing such a partial delete of database objects, your use case is IMO far from common. You could however have a look at the FlameRobin source code which has a certain amount of dependency detection in the code that is used to create DDL scripts or modification statements for database objects. Armed with that information you could write your own tool to do it.
If it's a one time thing it may be enough to do this manually, though. Use any Firebird management tool of your choice for that.

read-access to a MyISAM table during a long INSERT?

On MySQL, using only MyISAM tables, I need to access the contents of a table during the course of a long-running INSERT.
Is there a way to prevent the INSERT from locking the table in a way that keeps a concurrent SELECT from running?
This is what I am driving at: to inspect how many records have been inserted so far. Unfortunately, WITH (NOLOCK) does not work on MySQL, and I could only find commands that control transaction locks (e.g., setting the transaction isolation level to READ UNCOMMITTED), which, from my understanding, should not apply to MyISAM tables at all, since they don't support transactions in the first place.
MyISAM locking will block selects. Is there a reason for using MyISAM over InnoDB? If you don't want to change your engine, I suspect one of these might be a solution for you:
1: Create a materialized view of the table using a cron job (or other scheduled task) that your application can query without blocking.
2: Use a trigger to count up the number of inserts that have occurred, and look up the number of inserts using this meta-data table.
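A sketch of the second idea (all names are invented; big_table stands for the table receiving the long INSERT): keep a separate counter table that a trigger updates, and poll that table instead of the busy one, since MyISAM's table lock on big_table does not block reads on a different table.

CREATE TABLE insert_progress (
    tbl VARCHAR(64) PRIMARY KEY,
    cnt BIGINT NOT NULL
) ENGINE=MyISAM;

INSERT INTO insert_progress VALUES ('big_table', 0);

CREATE TRIGGER big_table_count AFTER INSERT ON big_table
FOR EACH ROW
    UPDATE insert_progress SET cnt = cnt + 1 WHERE tbl = 'big_table';

-- while the long INSERT is running, poll:
SELECT cnt FROM insert_progress WHERE tbl = 'big_table';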