execute parameterized liquibase changeset multiple times with different parameters - sql

I have been struggling with a liquibase challenge for some time and I hope somebody here can help me:
I would like to execute a simple parameterized liquibase script multiple times on the same db schema with different parameters:
<changeSet id="1" author="me" dbms="Oracle" runOnChange="false" failOnError="true">
<sql splitStatements="true">
GRANT SELECT on SOME_VIEW to ${db_user};
</sql>
</changeSet>
Now I execute Liquibase once with -Ddb_user=first_user and then with -Ddb_user=second_user. The second run fails, because Liquibase calculates the checksum after replacing the ${db_user} parameter (which makes perfect sense), and therefore the id/author/filename combination already present in the DATABASECHANGELOG table no longer matches the newly calculated checksum.
Is there a best practice way to solve this problem?
Thanks in advance.

The changeSet element has a runOnChange attribute, which re-runs the changeset every time it changes. Maybe this does what you are looking for?

You can always use runOnChange="true" or <validCheckSum>ANY</validCheckSum>.
There is an issue that explains this decision here: https://liquibase.jira.com/browse/CORE-2506
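As a sketch of what the runOnChange variant could look like for the changeset above (the <validCheckSum> alternative would be added as a child element of the changeSet instead; behaviour can differ slightly between Liquibase versions):
<changeSet id="1" author="me" dbms="oracle" runOnChange="true" failOnError="true">
    <!-- runOnChange="true": a changed checksum re-runs the changeset instead of failing validation -->
    <sql splitStatements="true">
        GRANT SELECT ON SOME_VIEW TO ${db_user};
    </sql>
</changeSet>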

Related

Update an existing trigger in Liquibase

I'm very new to Liquibase, and I need some help.
I have an existing trigger that was not capturing all the data, so I made some changes to it in my local Oracle database. Now I need to add those changes to Liquibase, but I'm lost on how to do that.
I know you cannot breach the contract in liquibase by updating the original .xml file directly.
From my understanding, I need to create a new changelog .xml file and then include its path in the post_migration file.
My confusion is: do I have to drop the original trigger and then create a new changelog file, or is there another way?
Thanks!
I never create triggers, procedures or even views within the XML file exactly because this makes things more complicated (I think).
I typically move the actual trigger definition into a SQL script (that I can also run separately during development and testing), then include that SQL file from within the Liquibase changelog:
<changeSet id="42" author="arthur" runOnChange="true">
<sqlFile path="triggers/some_trigger.sql"
stripComments="false"
splitStatements="true"
endDelimiter="/"
relativeToChangelogFile="true"/>
</changeSet>
The some_trigger.sql script is stored in Git (SVN, ...) together with the XML changelog. The runOnChange="true" is the "magical ingredient" here: you don't have to touch the XML file, you just edit the SQL script. During deployment, Liquibase checks whether the (SQL) file has changed and runs the script if needed.
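For illustration, triggers/some_trigger.sql could look roughly like this (the trigger, table and column names are made up); note the trailing / that matches the endDelimiter:
CREATE OR REPLACE TRIGGER some_trigger
BEFORE INSERT ON some_table
FOR EACH ROW
BEGIN
  -- hypothetical body: stamp the row with its insert time
  :new.created_at := SYSDATE;
END;
/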
So, I believe you have created/updated/replaced the SQL trigger in your local developer database and now want to include the Liquibase script in the release distribution package of your product.
Liquibase doesn't provide special XML syntax to create triggers, so you just add a new changeset that holds your PL/SQL script inside a <sql> tag. The script will be the same one you ran on your local database.
Example code:
<changeSet id="1" author="me">
<sql endDelimiter="/">
CREATE OR REPLACE TRIGGER trigger_name before insert
on table1_name for each row
BEGIN
select seq_myseq.nextval
into :new.myid
from dual;
END;
/
</sql>
</changeSet>
This code simply compiles the trigger in the target database when you call liquibase update. In most cases that is enough, but I strongly recommend asking your DBA or team lead about the rules your team enforces for writing Liquibase scripts; depending on those, the result may turn out to be more complicated.

Liquibase preconditions for an entire SQL changelog file

I would like to run a SQL precondition check for each changeSet in my SQL changelog file. It is actually a precondition on the changelog itself.
Here is an extract of it:
--liquibase formatted sql
--preconditions onFail:HALT onError:HALT
--precondition-sql-check expectedResult:"1.0" SELECT VERSION FROM VERSION_TABLE;
--changeset bob:1 failOnError:true dbms:oracle
ALTER INDEX XXX RENAME TO YYY;
--rollback ALTER INDEX YYY RENAME TO XXX;
Even if the precondition is not met, Liquibase still runs all the changesets.
Does anybody know whether this is an error on my side, or whether Liquibase does not allow preconditions on an entire changelog for SQL-format changelog files?
Thanks in advance !
If you go through the documentation, it states that in formatted SQL changelogs preconditions can only be applied to a specific changeset, and only the SQL check precondition is supported.
Liquibase doc for sql changelog files - https://www.liquibase.org/documentation/sql_format.html
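As a sketch (reusing the statements from the question), the precondition lines would therefore move below each --changeset header:
--liquibase formatted sql

--changeset bob:1 failOnError:true dbms:oracle
--preconditions onFail:HALT onError:HALT
--precondition-sql-check expectedResult:"1.0" SELECT VERSION FROM VERSION_TABLE
ALTER INDEX XXX RENAME TO YYY;
--rollback ALTER INDEX YYY RENAME TO XXX;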
One thing you could do is have a top-level master changelog in XML/YAML/JSON format and then use <include> or <includeAll> elements to include your liquibase formatted sql changelogs. If you do that, then you can have changelog-level preconditions.
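A minimal sketch of such a master changelog (the included file name and the XSD version here are placeholders):
<?xml version="1.0" encoding="UTF-8"?>
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.8.xsd">

    <!-- changelog-level precondition: evaluated before the included changesets run -->
    <preConditions onFail="HALT" onError="HALT">
        <sqlCheck expectedResult="1.0">SELECT VERSION FROM VERSION_TABLE</sqlCheck>
    </preConditions>

    <!-- placeholder path to the liquibase-formatted SQL changelog -->
    <include file="changes/rename_index.sql" relativeToChangelogFile="true"/>
</databaseChangeLog>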

Postgres: How to create index concurrently using liquibase

Looking at the Liquibase documentation http://www.liquibase.org/documentation/changes/create_index.html, CREATE INDEX CONCURRENTLY is not possible with createIndex, as Liquibase doesn't have an attribute to specify the concurrent option.
Is there a way to create index concurrently with liquibase?
You can specify runInTransaction as false to create the index concurrently.
Creating a concurrent index must be done with the arbitrary sql change:
<changeSet runInTransaction="false" id="10-add-widgets-kind-index" author="username">
<sql dbms="postgresql">
CREATE INDEX CONCURRENTLY
IF NOT EXISTS idx_widgets_kind
ON widgets(kind)
</sql>
</changeSet>
This is a combination of a_horse_with_no_name's comment and TheDude's answer.
The previous answers do the job. I would like to offer an alternative that doesn't directly answer the OP's question, but achieves the same end result with some added advantages. I feel it is good to show other options for people who stumble upon this answer like I did.
In order to create the index using only Liquibase, you would need to use the <sql> tag. I caution against this, as it can have undesired consequences if you use a different database for any reason (development, evaluation, testing, etc.): the SQL statement will be skipped, and you can be left thinking that the index was added when in reality it was not.
Additionally, this can lead to a less controlled migration, assuming you are running this on a production system without taking it down for maintenance and the migration is part of the build process.
I would propose creating the index directly on Postgres and adding the index migration normally using Liquibase and a precondition check.
First, add the index manually:
CREATE INDEX CONCURRENTLY widgets_kind_idx ON widgets (kind);
And then add to your Liquibase changeSet:
<changeSet id="10-add-widgets-kind-index" author="username">
<preConditions onFail="MARK_RAN">
<not>
<indexExists indexName="widgets_kind_idx" />
</not>
</preConditions>
<createIndex tableName="widgets" indexName="widgets_kind_idx">
<column name="kind" />
</createIndex>
</changeSet>
This offers the ability to add the index in any manner desired and keeps your Liquibase migrations in a known state. A fresh database being set up would not require the CONCURRENTLY keyword.

Liquibase: Convert createTable changeSet entry to DDL SQL statement

I'd like to use JDBC to create tables in a database agnostic way. I'm pretty sure that Liquibase has solved this problem since it can take a generic createTable XML changeSet element and convert it into a database specific SQL DDL statement.
Can someone please tell me which Liquibase classes / utilities are involved in converting a generic createTable changeSet into a database-specific CREATE TABLE SQL script? Sample usage (i.e. a test case) would be great.
Please note that I do not wish to invoke the entire Liquibase pipeline. In particular, I do NOT want the DATABASECHANGELOG table.
I'd recommend reading the liquibase unit tests.
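As a rough sketch of the kind of API involved (class and method names as in Liquibase 3.x; details vary between versions, so treat this as a starting point rather than a definitive recipe), something along these lines builds the database-specific DDL without touching the DATABASECHANGELOG table:
import liquibase.change.ColumnConfig;
import liquibase.change.core.CreateTableChange;
import liquibase.database.core.PostgresDatabase;
import liquibase.sql.Sql;
import liquibase.sqlgenerator.SqlGeneratorFactory;

public class GenerateCreateTableDdl {
    public static void main(String[] args) {
        // Build the equivalent of a <createTable> changeSet entry programmatically
        CreateTableChange change = new CreateTableChange();
        change.setTableName("person");

        ColumnConfig id = new ColumnConfig();
        id.setName("id");
        id.setType("bigint");
        change.addColumn(id);

        ColumnConfig name = new ColumnConfig();
        name.setName("name");
        name.setType("varchar(255)");
        change.addColumn(name);

        // Let Liquibase render the change for a concrete database dialect
        Sql[] statements = SqlGeneratorFactory.getInstance()
                .generateSql(change, new PostgresDatabase());
        for (Sql sql : statements) {
            System.out.println(sql.toSql());
        }
    }
}
Swapping PostgresDatabase for e.g. OracleDatabase or MySQLDatabase yields the DDL for that dialect.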

Liquibase changeSets with failOnError="false" are always run?

I'm trying to execute the following changeSet in Liquibase, which should create an index. If the index cannot be created, the changeSet should fail silently:
<changeSet failOnError="false" author="sys" id="1">
<createIndex unique="true" indexName="key1" tableName="Table1">
<column name="name" />
</createIndex>
</changeSet>
So far, so good. The problem is that this changeSet doesn't get logged into the DATABASECHANGELOG table and is therefore executed every time Liquibase runs. According to the Liquibase documentation and e.g. this answer from Nathen Voxland, I thought that the changeset should be marked as ran in the DATABASECHANGELOG table. Instead it isn't logged at all and, as I said before, is executed every time Liquibase runs (and fails every time again).
Am I missing something?
(I'm using MySQL as DBMS)
In the answer given by Nathen Voxland, he recommended the more correct approach of using a precondition to check the state of the database before running the changeset.
It seems to me that ignoring a failure is a bad idea... it means you don't fully control the database configuration. The failOnError parameter only allows Liquibase to continue. Wouldn't it be a bad idea for a build to record a changeset as executed if in fact it wasn't, because an error occurred?
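For reference, a sketch of that precondition-based approach for the changeset from the question (mirroring the indexExists pattern shown in the Postgres answer above):
<changeSet author="sys" id="1">
    <preConditions onFail="MARK_RAN">
        <!-- only create the index if it is not already there; otherwise mark the changeSet as ran -->
        <not>
            <indexExists tableName="Table1" indexName="key1"/>
        </not>
    </preConditions>
    <createIndex unique="true" indexName="key1" tableName="Table1">
        <column name="name"/>
    </createIndex>
</changeSet>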