We have been using Liquibase successfully for about six months. I'm moving to a new CI/CD pipeline using CircleCI and running into an error when running liquibase update over SSH.
Here's the command (after many iterations and much exploration of Liquibase documentation):
ssh $SSH_USER@$TEST_JOB_SSH_HOST "cd /var/www/html/liquibase ; liquibase --url=jdbc:$TEST_DB_URL/$TEST_DB_SCHEMA?user=$TEST_DB_USERNAME --username=$TEST_DB_USERNAME --password="\""$TEST_DB_PASSWORD"\"" --changelog-file=cl-main.xml --search-path=.,./ update --log-level 1"
The result is an error indicating that Liquibase cannot find the SQL file referenced by the changelog. However, the file does exist on the server.
The same changelog was executed successfully several months ago using our old approach. My current theory is that Liquibase is parsing the changelog but failing to resolve the referenced files, likely because it's now running from a different directory.
Here's a snippet from the changeset file:
<sqlFile dbms="mysql, mariadb"
encoding="UTF-8"
endDelimiter=";"
path="/../data/regional_integration_details-ingest_day-01.sql"
relativeToChangelogFile="true"
splitStatements="true"
stripComments="true"/>
I think the issue is the leading slash.
The command I pasted above was based on reviewing this help document: https://docs.liquibase.com/concepts/changelogs/how-liquibase-finds-files.html
I'm struggling with the proper syntax to include in the --search-path parameter -- if that's even the correct parameter -- to make this work.
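For example, one variant I've experimented with looks roughly like this (the extra search-path entries are guesses on my part, not a known-good configuration):
liquibase --search-path=/var/www/html,/var/www/html/liquibase --changelog-file=cl-main.xml update --log-level 1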
The nuclear option (yet to be tested) is to update all of our changesets, removing the leading slash. I'd prefer not to go that route.
Suggestions?
Edit 1
Updating to mention that the first four changesets are parsed successfully. They have path values like ../dirname/sqlscript_00.sql. Liquibase chokes on the first script with /../dirname/sqlscript_01.sql.
Also, we have no problems running Liquibase in local development, when we cd to /var/www/html/liquibase in our Docker containers and execute the liquibase update command.
Edit 2
Having CircleCI SSH into the server and run the commands there interactively doesn't work, because the environment variables aren't carried over to the remote shell.
Passing the commands as arguments to ssh preserves those variables, since they are expanded locally before the command is sent.
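A rough illustration of the difference (the variable names are the ones from the command above):
# Expanded locally on the CircleCI runner before the command is sent, so the remote shell sees the literal values:
ssh "$SSH_USER@$TEST_JOB_SSH_HOST" "echo $TEST_DB_SCHEMA"
# A plain interactive session starts a fresh remote shell in which $TEST_DB_SCHEMA is not defined:
ssh "$SSH_USER@$TEST_JOB_SSH_HOST"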
Liquibase removed support for absolute paths in v4.x.
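If the changesets do need to be updated to drop the leading slash, something like this one-liner could do it in bulk (illustrative only; the directory and file glob are assumptions, and back everything up first):
find /var/www/html/liquibase -name '*.xml' -exec sed -i 's|path="/\.\./|path="../|g' {} +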
Related
We upgraded our Liquibase from 1.9.0.0 to 3.6.3. When running the migration, the MD5SUM for 3.6.3 was updated, but it was trying to re-run the previously executed changesets that had been run with Liquibase 1.9.0.0. How do I only update the checksums without re-running the statements?
Thanks.
Liquibase has a command-line interface, and the CLI has a clearCheckSums command.
clearCheckSums clears all checksums and nullifies the MD5SUM column of
the DATABASECHANGELOG table so they will be re-computed on the next
database update.
Changesets that have been deployed will have their checksums
re-computed, and pending changesets will be deployed.
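In practice it's just a normal CLI run followed by an update (the connection details below are placeholders, not your real ones):
liquibase --url=jdbc:mysql://dbhost:3306/mydb --username=myuser --password=mypass clearCheckSums
liquibase --url=jdbc:mysql://dbhost:3306/mydb --username=myuser --password=mypass --changeLogFile=changelog.xml update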
Posting a link to the answer on the Liquibase forums for other users, in case they ever run into the same question in the future.
This question is related to I need help upgrading OroCommerce to 4.1.1.
I'm getting several errors related to extended entities... I believe there must be something wrong with cache building but I can't find the root cause (nor a solution :( ).
I checked the DB structure on my production server against the VM where everything is working just fine, and I can't see any significant difference (meaning the new fields, such as digitalAsset_id on the oro_attachment_file table or wysiwyg on oro_fallback_localization_val, are there).
I just ran an extra php bin/console oro:migration:load --force -e prod, but it didn't make a difference...
Edit:
Just checked the differences in the var/cache directory of both installations and in fact I see that the VM version has the methods that are missing from the prod one.
I uploaded the working code to the production server and re-ran the platform upgrade, but I'm still running into issues.
If the oro:migration:load command (or oro:platform:update, which actually triggers the migration load) failed the first time, you have to:
fix the errors,
restore from a database dump,
and run the command again.
Otherwise, there could be migrations that ended up with errors but are not executed again on the second run, which could leave the database schema, entity metadata, or entity config in a mess.
Also, the oro:migration:load command is not self-sufficient: there can be a need to warm up some entity configuration after a schema change. Try running oro:platform:update even if all the migrations have already been executed; it will warm up all the caches and could fix the error.
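If it helps, the sequence described above boils down to something like this (the dump file name and database credentials are placeholders for your own setup):
mysql -u oro_user -p oro_db < backup_before_upgrade.sql
php bin/console oro:platform:update --force --env=prod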
Is it possible to delete old builds in Gitlab CI?
I tested a few things and now have about 20 builds that are useless (most of them failed anyway).
It also shows stages that I don't have anymore, which clutters the Pipelines page, and some of the uploaded artifacts are fairly big.
I wasn't able to find any documentation on this, only that disabling CI in the settings doesn't remove the builds.
Using Gitlab 8.10 Community (hosted by Gitlab.com)
There is currently no option in the GUI to completely get rid of a build, other than expunging the data related to the build (the Erase option on the build page).
If you had a local installation, you could modify the database directly, but I would advise caution. (I'll put the steps here for completeness' sake.)
Log in to the GitLab database. If you use the default PostgreSQL:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production
Check whether there is a ci_builds table. In psql: \dt
Delete the builds with normal SQL. For example: DELETE FROM ci_builds WHERE id = 2
(Optional) If you want to clean up the list of commits that triggered a build, you also need to modify the ci_commits table.
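For example, on a self-hosted Omnibus install (not applicable to projects hosted on gitlab.com), a single delete could look like this; the build IDs are placeholders, and back up the database first:
sudo -u gitlab-psql /opt/gitlab/embedded/bin/psql -h /var/opt/gitlab/postgresql -d gitlabhq_production -c "DELETE FROM ci_builds WHERE id IN (2, 3, 4);"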
Does anyone know how to start migrations from scratch?
I'm running my migrations as usual, with the command:
**java -jar yourService.jar db migrate -i dev --dry-run dev**
But instead of running the migrations, I receive:
**Error: relation "MyScheme.databasechangelog" does not exist**
Does anyone know what the problem could be?
Do I need to add databasechangelog and databasechangeloglock manually? It seems strange to me, because when I use Liquibase separately from other frameworks, it generates these tables for me.
I needed to run my service with the following command:
**java -jar yourService.jar db migrate -i dev dev**
Anyway, if anyone runs into a similar problem, feel free to use the command above.
Take a look at Get puppet build to fail when the contained SQL script fails execution
I was attempting to run a Vagrant build which installs Oracle XE in an Ubuntu VirtualBox VM and then runs an SQL script to initialize the Oracle schema. The Vagrant build is here: https://github.com/ajorpheus/vagrant-ubuntu-oracle-xe. The setup.sql is run as part of the oracle module's init.pp (right at the bottom; search for 'oracle-script').
When running the SQL script as a part of the vagrant build, I see the following error:
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: Error 6 initializing SQL*Plus
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0667: Message file sp1<lang>.msb not found
notice: /Stage[main]/Oracle::Xe/Exec[oracle-script]/returns: SP2-0750: You may need to set ORACLE_HOME to your Oracle software directory
Two things were instrumental in finding a workaround for the problem:
As suggested in this answer, setting the logoutput attribute to true for the exec block in question immediately showed me the error, whereas before the exec was just failing silently.
It seemed strange that I was able to run the command (sqlplus system/manager@xe < /tmp/setup.sql) after manually logging in as the 'vagrant' user. That suggested there was something missing in the environment. Therefore, I copied all the ORACLE environment variables into the exec, as seen on line 211 here.
That worked; however, setting up the env vars manually seems a bit brittle. Is there a better way to set up the ORACLE environment for the vagrant user? Or is there a way to get Puppet to set up the environment for the vagrant user, similar to an interactive shell?
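For reference, the environment the exec ends up needing looks roughly like this (the paths are assumptions for a default Oracle XE install; check the oracle user's profile on the VM for the real values):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/xe
export ORACLE_SID=XE
export PATH="$ORACLE_HOME/bin:$PATH"
sqlplus system/manager@xe < /tmp/setup.sql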
If some profile has been set up to give the user a working interactive shell, you should be able to pass your action through such a shell:
command => 'bash -i -c "<actual command>"'
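With the sqlplus invocation from the question substituted in, the wrapped command (which would go in the exec's command attribute) looks like:
bash -i -c "sqlplus system/manager@xe < /tmp/setup.sql"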
As an aside about logoutput, since you mentioned it: the documentation advises that "on_failure" is a sane default, as it will only bloat your output when there are actual errors to analyze. It is the actual default in the latest versions of Puppet.