Why did HSQLDB 2.5.x drop support for `ROLLBACK ON DEADLOCK`? - hsqldb

I use HSQLDB 2.4 with a database setup script that contains
SET DATABASE TRANSACTION ROLLBACK ON DEADLOCK TRUE
After I updated to 2.5, this now fails with:
error in script file line: X org.hsqldb.HsqlException:
unexpected token: DEADLOCK required: CONFLICT
The release notes do not contain a single word about it.
Was the removal of this syntax without a migration period an intentional decision, or a bug – and why isn't it documented anywhere?

In 2012, CONFLICT became the default token and was persisted in the database .script files. This token has been used in the Guide since that year.
The older token DEADLOCK was still accepted as a synonym for a number of years; it was finally removed in 2019, so there was a seven-year migration period.
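Given the error in the question ("required: CONFLICT"), the fix for the setup script is simply to switch to the current token:

SET DATABASE TRANSACTION ROLLBACK ON CONFLICT TRUE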

Related

Running updateSQL for the first time gives database returned ROLLBACK error DatabaseException

I am using Liquibase v3.9 with PostgreSQL v11 for the first time.
When testing out my changelog for the very first time I run updateSQL to see the output of the SQL that will be run against the database. I get this error:
Unexpected error running Liquibase: liquibase.exception.DatabaseException: org.postgresql.util.PSQLException: The database returned ROLLBACK, so the transaction cannot be committed. Transaction failure cause is <<ERROR: relation "public.databasechangeloglock" does not exist
Position: 22>>
For more information, please use the --logLevel flag
This happens because updateSQL expects the DATABASECHANGELOG table to exist, and if this is the first time you are running Liquibase against the database then those tables won't exist yet (they get created the first time you run liquibase update).
I do think this is a valid use case for running updateSQL; you can request this feature here:
https://github.com/liquibase/liquibase/issues
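A quick way to confirm that this is the cause (an illustrative, PostgreSQL-specific check) is to ask whether the tracking tables exist at all; to_regclass returns NULL for a relation that does not exist:

-- NULL means the Liquibase tracking tables have never been created
SELECT to_regclass('public.databasechangelog'),
       to_regclass('public.databasechangeloglock');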
relation "public.databasechangeloglock" does not exist
I had this issue using PostgreSQL in a container.
Then I realized the memory limit given to the PostgreSQL container was insufficient.
After increasing the memory limit to 512 MiB the problem was solved.

Rollback to tag not working if tag applied to runOnChange changeset

I have been able to do rollback to tag with just schema changes, but I ran into a scenario that does not work when I mix in stored procedures.
I am using SQL changelogs against an Oracle database. Here is the scenario:
Release 1.0.0
I have a script r-1.0.0.sql that creates a table, and a script proc.sql that creates a stored procedure. The proc changeset is marked with runOnChange=true.
I am happy with the changes, and I tag the database with tag 1.0.0
In the end the DATABASECHANGELOG table shows:
1 - r-1.0.0.sql-EXECUTED
2 - proc.sql-EXECUTED-(tag)1.0.0
Release 2.0.0
I have a script r-2.0.0.sql that renames a column, and I also updated proc.sql with the new column name. After running this, DATABASECHANGELOG is:
1 - r-1.0.0.sql-EXECUTED
4 - proc.sql-RERAN-(tag)1.0.0
3 - r-2.0.0.sql-EXECUTED
Notice that the re-run proc script has a new order number, but it still keeps the 1.0.0 tag.
If I now want to roll back to tag 1.0.0, the rollback command does nothing, because tag 1.0.0 is on the very latest change in the log.
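For reference, the tag placement can be verified directly in the tracking table with a query along these lines (column names are the standard Liquibase ones):

SELECT orderexecuted, filename, id, tag
FROM databasechangelog
ORDER BY orderexecuted;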
This seems to be by design. Is there a different way to organize my changes to make this work?
I found a solution based on the article I linked above. Due to constraints in my environment I did not have an easy way to pass Java system properties, so I ended up installing a custom batch file (I named it Liquibase-sp.bat) with the following content:
@echo off
REM Run Liquibase with a separate changelog table for stored procedures
IF NOT DEFINED JAVA_OPTS set JAVA_OPTS=
set JAVA_OPTS=-Dliquibase.databaseChangeLogTableName=STOREDPROCCHANGELOG %JAVA_OPTS%
liquibase %*
It sets the parameter in a variable used by the Liquibase batch file, then calls the batch file passing the entire command line.
During my deployment I apply schema changes by calling "liquibase", and then apply stored proc changes by calling "liquibase-sp". The schema changes get logged in the default DATABASECHANGELOG table, while the proc changes get logged in a separate STOREDPROCCHANGELOG table. All tagging is done calling "liquibase" so it uses the default table, and only schema changes get tagged with a version.
Rollback works in the scenario I mentioned.
I expect to see an additional problem if I have a release 2.0.0 with no changes: when I tag the database with 2.0.0, the tag on the last changeset is modified from 1.0.0 to 2.0.0, which means any rollback to tag 1.0.0 would fail. I'm not worried though; there are procedural workarounds for this.

How to continue sql script on error?

We have a couple of migration scripts, which alter the schema from version to version.
Sometimes it happens that a migration step (e.g. adding a column to a table) was already done manually or by a patch installation, and thus the migration script fails.
How do I prevent the script from stopping on error (ideally at specific expected errors) and instead log a message and continue with the script?
We use PostgreSQL 9.1; both a PostgreSQL-specific solution and a general SQL solution would be fine.
Although @LucM's answer seems to be a good recommendation, @TomasGreif pointed me to an answer that went more in the direction of my original request.
For the given example of adding a column, this can be done by using the DO statement for catching the exception:
DO $$
BEGIN
    BEGIN
        ALTER TABLE mytable ADD COLUMN counter integer default 0;
    EXCEPTION
        WHEN duplicate_column THEN RAISE NOTICE 'counter column already exists';
    END;
END;
$$;
The hint led me to the right page in the PostgreSQL documentation, describing the error codes one can trap, so that was the right solution for me.
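The same pattern works for other expected, trappable errors; for example, a sketch using the duplicate_table condition to create a table that may already exist:

DO $$
BEGIN
    BEGIN
        CREATE TABLE mytable (id integer);
    EXCEPTION
        WHEN duplicate_table THEN RAISE NOTICE 'table mytable already exists';
    END;
END;
$$;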
I don't think you have any option other than running the entire script outside of a transaction.
What I would do in that situation:
Do the modifications to the metadata (drop/create table/column...) outside of a transaction.
Do all modifications to the data (update/insert/delete) inside a transaction.

Does Liquibase support dry run?

We have a couple of database schemas and we are investigating migrating them to Liquibase. (One of the schemas has already been migrated to Liquibase.)
An important question for us is whether Liquibase supports a dry run:
We need to run the database changes on all schemas without committing, to ensure we do not have problems.
On success, all database changes are run once again and committed.
(The question is similar to this SQL Server query dry run, but related to Liquibase.)
Added after the answer
I read the documentation related to updateSQL and it does not answer the requirements of a “dry run”.
It just generates the SQL (on the command line, in the Ant task and in the Maven plugin).
I will clarify my question:
Does Liquibase support control over transactions?
I want to open a transaction before executing the Liquibase changelog, and roll back the transaction after the changelog execution.
Of course, I need to verify the result of the execution.
Is it possible?
Added
Without control over transactions (or a dry run) we cannot migrate all our schemas to Liquibase.
Please help.
You can try "updateSQL" mode: it will connect to the database (checking your access rights), acquire the database lock, generate and print the SQL statements to be applied (based on the database state and your current Liquibase changesets), print the changeset IDs missing from the current state of the database, and release the database lock.
Unfortunately, no.
By default, Liquibase commits the transaction after executing all statements of a changeset. I assume that the migration paths you have in mind usually involve more than a single changeset.
The only way you can modify the transaction behavior is the runInTransaction attribute of the <changeSet> tag, as documented here. By setting it to false, you effectively disable transaction management, i.e. it enables auto-commit mode, as you can see in ChangeSet.java.
I think that this feature could be a worthwhile addition to Liquibase, so I opened a feature request: CORE-1790.
I think the answer to your question is "it does not support dry runs", but the problem is primarily with the database and not with Liquibase.
Liquibase does run each changeSet in a transaction and commits it after inserting into the DATABASECHANGELOG table, so in theory you could override Liquibase's logic to roll back that transaction instead of committing it, but you will run into the problem that most SQL run by Liquibase is auto-committing.
For example, if you had a changeSet of:
<changeSet>
    <createTable tableName="test">
        ...
    </createTable>
</changeSet>
What is run is:
START TRANSACTION
CREATE TABLE test ...
INSERT INTO DATABASECHANGELOG...
COMMIT
but even if you changed the last command to ROLLBACK, the CREATE TABLE call will auto-commit when it runs and the only thing that will actually roll back is the INSERT.
NOTE: there are some databases that will roll back DDL, such as PostgreSQL, but the majority do not.
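For example, in PostgreSQL (illustrative sketch) the DDL takes part in the surrounding transaction and disappears on rollback:

BEGIN;
CREATE TABLE dry_run_test (id integer);
ROLLBACK;
-- dry_run_test does not exist after the rollback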
INSERT/UPDATE commands would run in a transaction and could be automatically rolled back at the end, but Liquibase does not have a postCondition command to do the in-transaction check of the state that would be required. That would be a useful feature (https://liquibase.jira.com/browse/CORE-1793), but even it would not be usable if there are any auto-committing change tags in the changeset. If you added a postcondition to the createTable example above, the postcondition would fail and the update would fail, but the table would still be there.
If your Liquibase migration is sufficiently database agnostic, you can just run it on an in-memory H2 database (or some other "throwaway database") that you can spin up easily using a few lines of code.
// Assumes the Liquibase 3.x-style API and the H2 driver on the classpath
import java.util.Properties;
import liquibase.Liquibase;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.FileSystemResourceAccessor;

// Spin up a throwaway in-memory H2 database and run the changelog against it
var info = new Properties();
info.put("user", "sa");
info.put("password", "");
try (var con = new org.h2.Driver().connect("jdbc:h2:mem:db", info)) {
    var accessor = new FileSystemResourceAccessor();
    var jdbc = new JdbcConnection(con);
    var database = DatabaseFactory.getInstance().findCorrectDatabaseImplementation(jdbc);
    Liquibase liquibase = new Liquibase("/path/to/liquibase.xml", accessor, database);
    liquibase.update(""); // empty string = no context filtering
}
I've blogged about this approach in more detail here.

uncommittable transaction is detected at the end of batch. the transaction is rolled back

We are having a problem with a server migration. We have one application that runs a lot of transactions.
It works fine on one database server, but when we transfer the same database to another server, we get the following error.
Server: Msg 3998, Level 16, State 1, Line 1
Uncommittable transaction is
detected at the end of the batch. The
transaction is rolled back.
The same database is copied to the other server with all the data. If we change the connection string back to the old server then it works fine.
Can anybody suggest anything?
This message means one of the other participants in the transaction voted to roll back. After that, the transaction must fail.
So this message is a consequence, rather than a cause. Are you receiving any earlier / other error messages?
What happens when you run the query from Management Studio?
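While investigating, a TRY/CATCH block with XACT_STATE() can at least confirm that the transaction has been doomed before the batch ends; an illustrative T-SQL sketch, assuming the failing statements are wrapped as shown:

BEGIN TRY
    BEGIN TRANSACTION;
    -- ... the statements that fail on the new server go here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- XACT_STATE() = -1 means the transaction is uncommittable and can only be rolled back
    IF XACT_STATE() = -1
        ROLLBACK TRANSACTION;
    -- surface the underlying error that doomed the transaction
    PRINT ERROR_MESSAGE();
END CATCH;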
What you seem to have is a problem where the record is acceptable in one database but not the other. I suggest you look at the differences between the two database structures (yes, I know they are supposed to be the same, but clearly they are not). I suspect you will find either a collation difference, a data type difference, or a data length difference between the two. You might also have a table where the identity definition is missing, and thus it can't insert because it is a required field and the value is missing. Tools like SQL Compare are easy to use to find differences.