Liquibase: Deleting SQL file used in a changeset

We are using Liquibase for our DB deployments and are currently doing some refactoring. There is an old changeset that runs an SQL script; it was executed in the past. Now this SQL script (or the statement in it) is outdated and I would like to delete it from our VCS. I did that and tried to deploy again, but I am getting an exception saying that the SQL file is not there, even though my changeset has runAlways="false" and runOnChange="false".
Does this mean that I always have to keep this specific SQL script in my VCS even though it is deprecated?

This follows from how checksums work in Liquibase: you can't delete the file. You ran that changeset once, it changed something in the database, and Liquibase stored a checksum for the file. If you now change (or remove) any part of it, Liquibase will detect that the file was modified and refuse to continue.
So, in summary, the file is part of the history of your application builds; even if it is no longer useful, you can't get rid of it.
If you are not yet in production, one option is to create a dump of your current database as init.sql and start over from that.
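As a rough sketch (assuming an XML changelog; the id, author and file layout below are placeholders, not from the question), the fresh changelog could then be a single baseline changeset that replays the dump:
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
    <!-- Single baseline changeset that replays the dumped schema/data -->
    <changeSet id="baseline-1" author="example">
        <sqlFile path="init.sql" relativeToChangelogFile="true"/>
    </changeSet>
</databaseChangeLog>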
I hope this helps you.

Related

How to run Sequelize down migration for single file

I'm working on a migration using Sequelize. If the migration's up method throws an error, the migration is not logged in the database as having completed. So, if I run db:migrate:undo, it instead runs down on the previous (and working) migration. As a result, I have a half-executed migration whose schema remains in the database, because the corresponding down method is never run by Sequelize. So I need to either somehow force a single down method to run (which I'm not seeing an option for), or manually clean up my database every time I run a failing migration, which can be a real pain for complicated migrations where I'm constantly going through trial and error. Is there an easier way to do this?
sequelize db:migrate:undo --name 20200409091823-example_table.js
Use this command to undo any particular migration
Manually insert the migration into your migrations table so that Sequelize will think it has completed.
To verify, check the status of your migrations before and after you edit the table:
sequelize db:migrate:status
Everything listed as "up" is something that can go "down", and vice versa.
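For example, with the default meta table used by the Sequelize CLI ("SequelizeMeta") and the example file name from the answer above, marking a migration as applied is a one-row insert:
-- Assumes the default meta table name; adjust if you configured a custom one.
INSERT INTO "SequelizeMeta" (name) VALUES ('20200409091823-example_table.js');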
There is no way to do it as of now; there is an open issue for this in the Sequelize CLI repo.
I tried something and it worked for me:
1. Rename your migration file so that it comes first alphabetically (make it the first of the migration files).
2. Comment out the code in the second migration file.
3. Run sequelize db:migrate.
This will run only the first migration file.
Don't forget to uncomment the migration file you commented out before.

Does liquibase have a method to test a rollback strategy without actually executing it?

I am brand new to Liquibase...
Today I wrote a Liquibase changeset using --liquibase formatted sql.
I created two tables where the second had a foreign key dependency on the first.
My rollback strategy was (mistakenly) drop table1; drop table2. When I ran the update and tested the rollback it failed because of the foreign key constraint. However, when I corrected my mistake and attempted to rerun it, it failed because the checksum didn't match.
I know that the obvious answer is to make more atomic changesets, however...
Does Liquibase support a way to test this sort of thing without actually running it so I can avoid the problem with the checksum on the edited rollback?
Failing that: is there a workaround for the checksum problem that will let me edit my files after running the update? (ctrl+z?)
The short answer to your question is: no, Liquibase doesn't have such a thing.
Liquibase is a great toolkit, but it doesn't have a lot of bells and whistles, and it doesn't have much of an 'opinion' on how it should be used or what your workflow should be. In your case, one way to deal with the problem is simply to drop the database and then re-create it from the changelog. If you have already deployed the changelog in multiple places that might not be possible, and if you aren't prepared to drop everything, that could be a problem.
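If dropping and rebuilding is acceptable, that is just two CLI calls (a sketch; it assumes your connection settings live in liquibase.properties, and the changelog file name is a placeholder):
liquibase --changeLogFile=changelog.sql dropAll
liquibase --changeLogFile=changelog.sql update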
There is an option to specify valid checksums on a changeset (the validCheckSum tag in XML changelogs), so you could use that, but in general relying on it makes your changelog more complicated.
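In an XML changelog that would look roughly like the sketch below (the checksum value is a placeholder to be replaced with what Liquibase reports in its checksum error; the table definition is just an illustration):
<changeSet id="create-tables" author="example">
    <!-- Accept the checksum of the edited changeset -->
    <validCheckSum>8:d41d8cd98f00b204e9800998ecf8427e</validCheckSum>
    <createTable tableName="table1">
        <column name="id" type="int"/>
    </createTable>
</changeSet>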
If you wanted to look at something that is more full-featured and that has the ability to forecast changes before they are deployed, please check out my company's product, Datical DB. It uses liquibase at its core, but adds a whole lot more (and is priced accordingly).
Liquibase provides the updateTestingRollback command, which updates the database, then tests the rollback and, if that succeeds, applies the changes again.
Your problem with invalid checksums can be solved with the clearCheckSums command: it removes the current checksums from the database, and on the next update, change sets that have already been deployed have their checksums recomputed, while change sets that have not been deployed are deployed.
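Both are plain CLI calls (again assuming connection settings in liquibase.properties; the changelog file name is a placeholder):
liquibase --changeLogFile=changelog.sql updateTestingRollback
liquibase clearCheckSums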
For more details check Liquibase commands.

Play framework ignores evolution script

I'm currently covering the basics of SQL databases and how to use them in the Play framework. I have created a Postgres database and successfully configured it in my application.conf:
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://database.example.com/playdb"
db.default.user=postgres
db.default.password=qwerty
I have also created a 1.sql file in the conf/evolutions/default directory and wrote some example SQL code there to create a simple table. The problem is that Play seems to ignore the existence of this file. When I run my server and connect to localhost, I am supposed to be asked by Play whether I would like to have my script applied to my database or not. Unfortunately I'm not, and the only thing Play does is load my home page (the CREATE TABLE in 1.sql is not executed and I don't have any tables created). Any ideas what I am doing wrong?
Make sure you have the following line in your build.sbt file:
libraryDependencies += evolutions
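It is also worth double-checking that 1.sql follows the structure evolutions expect, with !Ups and !Downs markers; a minimal sketch (the users table is just an illustration, not from the question):
# --- !Ups
CREATE TABLE users (
    id   BIGINT PRIMARY KEY,
    name VARCHAR(255)
);

# --- !Downs
DROP TABLE users;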
In my case, evolutions ignored 2.sql (and 1.sql).
To resolve this I had to remove these comments from 1.sql:
#--- Created by Ebean DDL
# To stop Ebean DDL generation, remove this comment and start using Evolutions
and add my own comment such as:
# Initial version
Also, viewing the 1.sql file in vi revealed that it contained ^M characters, which had to be removed.
After those two steps, evolutions stopped overwriting 1.sql and finally used the provided 2.sql file.
Evolutions seem to overwrite 1.sql even if you do not apply the DB update, so make sure you are editing the original version.
I am using Play 2.4.1.

Can you control Liquibase updateSQL output by major release?

We use liquibase to generate database changes but need these scripted into SQL files because our DB server has neither Liquibase nor even a JVM installed.
We use the updateSQL command to create the DDL script needed to make the changes; however, if we first run a 'dropAll' (on our development server), we get change sets for every release in the output.
Is there a way to run a regular Liquibase 'update' for one collection of changesets (i.e. all prior releases) and then produce updateSQL output only for the last release? Essentially, can we parameterize our build process to indicate the release we are targeting and automatically produce only the SQL for that release?
Thanks
We have a similar setup.
The way we handle this is to have an "integration" DB that always stays at the last official release.
When we have a new release candidate, we run Liquibase (updateSQL) against that integration DB. Since it is at the last (i.e., the currently released) version, updateSQL only writes out the difference between the new release candidate and the last release.
So you get a delta DDL that needs to be applied to go from release x to release y.
Once the release candidate is released, we let Liquibase update the integration DB as well.
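As a rough sketch of that flow (URLs and file names are placeholders; credentials are assumed to come from liquibase.properties):
# Generate the delta DDL against the integration DB, which still holds the last release
liquibase --url=jdbc:postgresql://integration-host/app --changeLogFile=master.changelog.xml updateSQL > release-x-to-y.sql
# Once the release candidate is accepted, bring the integration DB up to the new release as well
liquibase --url=jdbc:postgresql://integration-host/app --changeLogFile=master.changelog.xml update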
There are several ways to run a portion of your changelog.
If you break your changelog up by version (changelog-1.0.xml, changelog-1.1.xml, changelog-2.0.xml) and have a master changelog.xml that <include>'s them, then the easiest solution may be to run updateSQL passing in just the changelog version you want. If you want to generate SQL for the entire database, run "liquibase --changeLogFile=master.changelog.xml updateSQL". If you want to generate SQL for just version 2.1, run "liquibase --changeLogFile=changelog-2.1.xml updateSQL".
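A master changelog along those lines might look like this (adjust the XSD version to your Liquibase release):
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.1.xsd">
    <!-- One include per release, in order -->
    <include file="changelog-1.0.xml"/>
    <include file="changelog-1.1.xml"/>
    <include file="changelog-2.0.xml"/>
    <include file="changelog-2.1.xml"/>
</databaseChangeLog>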
Jens' answer works well and is what I often suggest. Compared to the option above, it has the advantage of also catching new changeSets that were introduced into old changelog versions through a merged-in patch release, for example.
Beyond that, you could use contexts or preconditions to dynamically control what is run. Depending on your setup, there may be ways to get those to work well for you.
Finally, there is always the extension system, where you can write custom logic around how changelogs are parsed and executed, to pull out older-version changeSets based on file name or some other mechanism that works for you.

HSQLDB: delete all data and configuration files

I have a question about an HSQLDB database. First, where does HSQLDB keep its data in the file system? It keeps it in the directory we give in the JDBC URL, doesn't it? Second, after the test has finished, I want to remove all of the data and files that were created for the test. Which procedure should I follow?
The answer to your first question is yes: HSQLDB creates a directory and stores everything it needs for the database there.
For the second question: if you want to start over from scratch with an empty database, you can delete the created directory, and the JDBC driver will recreate it when you try to connect. My guess, though, is that you want to keep the database intact and just remove your test data? In that case you will need a cleanup script that explicitly removes the rows you added; this usually lives in a 'teardown' function of your test case.
You might want to use an in-memory HSQL database.
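The storage location comes from the JDBC URL: a file URL like the first line below makes HSQLDB create files such as testdb.properties, testdb.script and testdb.log in that directory, while the mem: form keeps everything in memory so there is nothing left to delete after the test run (path and database name are placeholders):
jdbc:hsqldb:file:/path/to/data/testdb
jdbc:hsqldb:mem:testdb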