How to recalculate checksums without re-running the statements in liquibase?

We upgraded our Liquibase from 1.9.0.0 to 3.6.3. When running the migration, the MD5SUM for 3.6.3 was updated, but Liquibase was trying to re-run the changesets that had already been executed with Liquibase 1.9.0.0. How do I update only the checksums, without re-running the statements?
Thanks.

Liquibase has a command line interface, and the CLI has a clearCheckSums command.
clearCheckSums clears all checksums and nullifies the MD5SUM column of
the DATABASECHANGELOG table so they will be re-computed on the next
database update.
Changesets that have already been deployed will only have their checksums
re-computed; pending changesets will be deployed.
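A minimal sketch of that sequence, assuming a liquibase.properties file (or the equivalent command-line flags) already points at your changelog and database:

# nulls the MD5SUM column for every row in DATABASECHANGELOG
liquibase clearCheckSums
# recomputes checksums for already-deployed changesets and deploys any pending ones
liquibase update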

Posting a link to the answer on the Liquibase forums for other users, in case they ever run into the same question in the future.

Related

Liquibase over SSH: Unexpected error running Liquibase: <file> does not exist

We have been using Liquibase successfully for about six months. I'm moving to a new CI/CD pipeline using CircleCI and running into an error when running liquibase update over SSH.
Here's the command (after many iterations and much exploration of Liquibase documentation):
ssh $SSH_USER@$TEST_JOB_SSH_HOST "cd /var/www/html/liquibase ; liquibase --url=jdbc:$TEST_DB_URL/$TEST_DB_SCHEMA?user=$TEST_DB_USERNAME --username=$TEST_DB_USERNAME --password="\""$TEST_DB_PASSWORD"\"" --changelog-file=cl-main.xml --search-path=.,./ update --log-level 1"
The result is the error from the title: Unexpected error running Liquibase: <file> does not exist.
However, the file does exist on the server.
It was successfully executed several months ago using our old approach. Now I think Liquibase is just parsing files and somehow failing, likely because it's running from a different directory.
Here's a snippet from the changeset file:
<sqlFile dbms="mysql, mariadb"
encoding="UTF-8"
endDelimiter=";"
path="/../data/regional_integration_details-ingest_day-01.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="true"/>
I think the issue is the leading slash.
The command I pasted above was based on reviewing this help document: https://docs.liquibase.com/concepts/changelogs/how-liquibase-finds-files.html
I'm struggling with the proper syntax to include in the --search-path parameter -- if that's even the correct parameter -- to make this work.
The nuclear option (yet to be tested) is to update all of our changesets, removing the leading slash. I'd prefer not to go that route.
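For reference, a rough, untested sketch of what that bulk edit might look like (assuming GNU sed and that all the changelog files live under the current directory):

# strip the leading slash from path="/../..." references in every changelog file
grep -rl 'path="/\.\./' . | xargs sed -i 's|path="/\.\./|path="../|g'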
Suggestions?
Edit 1
Updating to mention that the first four changesets are parsed successfully. They have path values like ../dirname/sqlscript_00.sql. Liquibase chokes on the first script with /../dirname/sqlscript_01.sql.
Also, we have no problems running Liquibase in local development, when we cd to /var/www/html/liquibase in our Docker containers and execute the liquibase update command.
Edit 2
Having CircleCI SSH directly into the server doesn't work, as the environment variables don't carry over with it.
Passing the commands via SSH preserves those variables.
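A small sketch of why that works, using an abridged form of the command above: the variables are expanded by the local (CircleCI) shell before ssh sends the command, so the remote shell only ever sees their literal values.

# $SSH_USER, $TEST_JOB_SSH_HOST and the DB variables are all resolved on the
# CircleCI side inside the double quotes; an interactive session on the server
# would not have them defined
ssh $SSH_USER@$TEST_JOB_SSH_HOST "cd /var/www/html/liquibase ; liquibase --url=jdbc:$TEST_DB_URL/$TEST_DB_SCHEMA --changelog-file=cl-main.xml update"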
Liquibase removed support for absolute paths in v4.x.

Liquibase - Test changeset before executing

I have a Jenkins pipeline that executes Liquibase scripts. However, a lot of the time the pipeline fails because there are errors in the scripts.
I would like to test my scripts locally before running the pipeline, to detect errors (syntax problems, a column that doesn't exist, etc.) without creating an entry in the databasechangelog table.
One option is to run updateSQL, which will display the SQL that liquibase update WOULD run. You can take that SQL and run it in any SQL IDE of your choice to test the syntax.
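A minimal sketch, assuming the connection settings already live in liquibase.properties (the output file name here is just an example):

# write the SQL that `liquibase update` would execute, without touching the database
liquibase updateSQL > preview.sql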

I'm having trouble with extended entities

This question is related to I need help upgrading OroCommerce to 4.1.1.
I'm getting several errors related to extended entities... I believe there must be something wrong with cache building but I can't find the root cause (nor a solution :( ).
I checked the db structure in my production server against the VM where everything is working just fine and I can't see any significant difference (meaning the new fields such as digitalAsset_id for oro_attachment_file table or wysiwyg for oro_fallback_localization_val are there).
I just ran an extra php bin/console oro:migration:load --force -e prod but it didn't make a difference...
Edit:
Just checked the differences in the var/cache directory of both installations and in fact I see that the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.
In case the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore from the database dump,
and run the command again.
Otherwise there could be migrations that ended up with errors but are not executed again on the second run, which could lead to a mess with the database schema, entity metadata, or entity config.
Also, the oro:migration:load command is not self-sufficient: there can be a need to warm up some entity configuration caches after the schema change. Please try to run oro:platform:update; even if all the migrations are already executed, it will try to warm up all the caches and could fix the error.
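A minimal sketch of that, reusing the --force/prod flags from the command in the question:

# safe to re-run even when all migrations are already applied; it warms up the
# entity config and other caches mentioned above
php bin/console oro:platform:update --env=prod --force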

Dropwizard Liquibase - databasechangelog does not exist

Who knows how to start migrations from scratch?
I'm running my migrations as usual, with the command
**java -jar yourService.jar db migrate -i dev --dry-run dev**
But instead of running the migrations, I receive
**Error: relation "MyScheme.databasechangelog" does not exist**
Who knows what the problem could be?
Do I need to add databasechangelog and databasechangeloglock manually? It's strange to me, because when I use Liquibase separately from other frameworks, it generates these tables for me.
I needed to run my service with the following command:
**java -jar yourService.jar db migrate -i dev dev**
The difference from the command above is that --dry-run has been dropped: with --dry-run the migration SQL is only printed and nothing is executed, so the databasechangelog and databasechangeloglock tables never get created. Anyway, if someone runs into a similar problem, feel free to use the command above.
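Side by side, the only difference is the --dry-run flag (both commands are taken verbatim from the question and answer above):

# only prints the migration SQL, so databasechangelog is never created
java -jar yourService.jar db migrate -i dev --dry-run dev
# actually applies the migrations and creates the tracking tables
java -jar yourService.jar db migrate -i dev dev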

Liquibase generateChangeLog is failing - with Table already exists

I am getting a "table already exists" error from Liquibase when I run my Jhipster project:
[ERROR] liquibase - classpath:config/liquibase/master.xml: classpath:config/liquibase/changelog/db-changelog-001.xml::1::jhipster: Change Set classpath:config/liquibase/changelog/db-changelog-001.xml::1::jhipster failed. Error: Error executing SQL CREATE TABLE fc.T_USER (login VARCHAR(50) NOT NULL, .....: Table 't_user' already exists
I have generated the Liquibase changelog file into the config\liquibase\changelog directory using
liquibase --driver=com.mysql.jdbc.Driver ^
--classpath=C:\Users\Greg\.IntelliJIdea13\config\jdbc-drivers\mysql-connector-java-5.1.31-bin.jar ^
--changeLogFile=db-changelog-001.xml ^
--url="jdbc:mysql://localhost/fc" ^
--username=root ^
generateChangeLog
So something is tricking Liquibase into trying to re-create the database, when I thought the changelog was setting a baseline of the existing database.
Jhipster version: when I run yo jhipster -v it says 1.2. When I run npm update jhipster it says I am on the latest = 17.2.
Liquibase versions tried: 3.0, 3.1 and 3.2.
MySQL database from XAMPP.
Two tables are created in MySQL: databasechangelog and databasechangeloglock.
databasechangelog remains empty, and databasechangeloglock has a record added when the Jhipster app is run.
This process was working, but not since moving to a new computer. When it was working, I saw that databasechangelog had a couple of records in it, as well as one in databasechangeloglock.
Tips on how to debug are as welcome as an answer. Thanks.
Running generateChangeLog as described above, and then changeLogSync in situ, results in the FILENAME field in the databasechangelog table having the value db-changelog-001.xml.
What it needs to be is the full path from which Liquibase is run. When running inside a Jhipster app, I am seeing classpath:config/liquibase/changelog/db-changelog-001.xml. So Liquibase does not seem to be using only the ID as the row identifier, as I was expecting.
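A hedged workaround sketch, not something confirmed above: if rows already synced with the short filename should be kept, their FILENAME values could be rewritten to match what the Jhipster app reports at runtime (this assumes the MySQL CLI and the fc schema from the question; back up the table first):

mysql -u root fc -e "UPDATE DATABASECHANGELOG SET FILENAME = CONCAT('classpath:config/liquibase/changelog/', FILENAME) WHERE FILENAME NOT LIKE 'classpath:%';"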