Test data snapshot using Liquibase

I have very basic knowledge of Liquibase. My project team is planning to use Liquibase for test data snapshots. We have cloud databases (PostgreSQL) for different microservices. We plan to take a test data snapshot before executing the scripts and roll back to the original state after execution. Any insights on this would be really helpful. Thanks!

Liquibase has this capability as a Pro feature. You can use the liquibase rollbackOneUpdate command to roll the database back to the state of a specific deployment ID. Read more about the Pro rollback features here: https://docs.liquibase.com/commands/pro/rollbackoneupdate.html
NOTE: Before running liquibase rollbackOneUpdate, review the SQL output of everything that will be rolled back. Use liquibase rollbackOneUpdateSQL to get that output in SQL format: https://docs.liquibase.com/commands/pro/rollbackoneupdatesql.html
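A typical flow is to preview the rollback SQL first and only then perform the rollback. This is a sketch; the deployment ID is a placeholder, connection details are assumed to come from liquibase.properties, and the --force flag is what recent Liquibase Pro versions require to confirm a destructive rollback:

```shell
# Preview exactly what would be rolled back for a given deployment ID (Pro feature)
liquibase rollbackOneUpdateSQL --deploymentId=5760858

# After reviewing the generated SQL, perform the actual rollback (Pro feature)
liquibase rollbackOneUpdate --deploymentId=5760858 --force
```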

You can use the context attribute of Liquibase, e.g. context="dev", context="test", context="faker", and so on.
In production you must execute Liquibase with a different context so that the fake data is skipped.
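For example, a changeset that loads fake data can be tagged with contexts so it only runs in the matching environments (a sketch; the table and file names are made up):

```xml
<changeSet id="load-fake-users" author="dev-team" context="dev, test, faker">
    <loadData tableName="users" file="fake-users.csv"/>
</changeSet>
```

Running `liquibase --contexts=dev update` applies the changeset, while running with a production context (for example `liquibase --contexts=production update`) skips it because the contexts don't match.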

Related

Backup restore and apply liquibase changes

We have a project where database support is provided for SQL Server and Oracle, and to version the database we are using Liquibase. Sometimes we need to bring a backup of a customer's database into our own infrastructure to investigate issues and work on it. We currently have more than 1000 changesets (and counting..) for database versioning, which takes a lot of time to run.
The problem: When we bring a backup from the customer and restore it into our local environment, we need to clear the DATABASECHANGELOG table and re-run all the changesets to force Liquibase to calculate the correct checksums. We don't know exactly how Liquibase calculates them, but we suppose it involves environment variables such as the database and instance name, which differ between the customer's environment and ours.
Question: How could we improve this process? Perhaps by configuring how Liquibase calculates the checksum (maybe considering just the ID, author, and script), or by recalculating the checksums for our environment. Clearing the DATABASECHANGELOG and re-running all the changesets consumes a lot of time and makes maintenance difficult.
Thank you.
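The workflow described above, clearing the tracking table and re-running everything after a restore, looks roughly like this (a sketch for the SQL Server case; server, database, and changelog names are placeholders):

```shell
# Wipe Liquibase's tracking table so all changesets are considered un-run
sqlcmd -S localhost -d restoredCustomerDb -Q "DELETE FROM DATABASECHANGELOG"

# Re-run all 1000+ changesets; this is the slow step the question wants to avoid
liquibase --changeLogFile=changelog.xml update
```

For what it's worth, Liquibase also ships a clearCheckSums command that nulls out the stored checksums so they are recomputed on the next update, without re-executing changesets that are already recorded as run.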

Use Liquibase autogenerated xml for Corda Enterprise DB migration

I switched to Corda Enterprise mainly to try how it handles automated database migration.
In the documentation here it says that tools-database-manager generates only an SQL version of the Liquibase script for the initial DB, and since the SQL version is database specific it should not be used for production.
But it is also possible to generate the XML with the liquibase command, using:
/snap/bin/liquibase --url="jdbc:h2:tcp://localhost:10039/node" --driver=org.h2.Driver --classpath=/home/corda/Downloads/h2.jar generateChangeLog
which I did. I then had to remove all the changesets related to Corda's internal tables, leaving only the ones for my own, and everything seems to work.
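For illustration, after generateChangeLog the trimming described above means deleting the generated changesets for Corda's internal tables and keeping only the ones for your own schema. A made-up example of a kept changeset (the id, author, and columns are placeholders, not real generated output):

```xml
<!-- Changesets generated for Corda internal tables (NODE_INFOS, VAULT_STATES, ...) were deleted -->
<changeSet id="1565108244154-42" author="corda (generated)">
    <createTable tableName="my_custom_state">
        <column name="transaction_id" type="VARCHAR(64)"/>
        <column name="output_index" type="INT"/>
    </createTable>
</changeSet>
```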
So the question is: might this approach have hidden dangers that I'm not aware of? Why else would the Corda team have developed tools-database-manager, and why don't they yet support XML generation with it?
This leads to another question: what if I, for example, forget to include one of my tables in the initial script? Corda does not seem to complain about it. Won't my table be created? Will I ever be able to migrate that table if it is missing from the initial script?
Firstly, tools-database-manager is a helper tool intended to make it easy for developers to perform database migrations.
Let's say you have 2 nodes in your network, each using a different database: PartyA uses PostgreSQL and PartyB uses Oracle. If PartyA uses this tool to create the migration script by connecting to PostgreSQL, it will output SQL statements specific to PostgreSQL.
This is not portable, which is why the generated script is said to be database specific.
Also, you do not want to blindly trust a script and fire it directly at your production database; it contains DDL statements, so it is strongly recommended that every time a script is generated, you make sure you know what it is doing by inspecting it manually.
There are a lot of enhancements going on in this space, supporting XML for migration scripts being one of them.
As mentioned earlier, you should manually review the migration script. If you forget to add one of your tables, Corda will not complain; it will fail later, when your code tries to access that table.
Yes, you can stop the node and create the table by adding a create-table script.
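A follow-up migration for the forgotten table might look like this (a sketch; the changeset id, author, and table/column names are made up):

```xml
<changeSet id="add-missing-table" author="your-name">
    <createTable tableName="forgotten_states">
        <column name="transaction_id" type="VARCHAR(64)">
            <constraints nullable="false"/>
        </column>
        <column name="output_index" type="INT"/>
    </createTable>
</changeSet>
```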

how to use liquibase diffChangeLog with the current changelog as reference (to generate incremental change set)

I have an existing database and have used the generateChangeLog command line to create the initial changelog. This works fine :-)
But now I want the developers to use all the tools/processes they know/use already to develop the database and code and use a script to generate any incremental change sets as appropriate.
That is: do a diff against the current state of the developer's database (url/username/password in the properties file) using the current changelog (changeLogFile in the properties file) as the base reference.
There seems no easy way to do this - the best I've come up with is:
Create a new temporary database.
Use liquibase to initialise the temp database (to what is currently in the changelog) by overriding the connection url: liquibase --url=jdbc:mysql://localhost:3306/tempbase update
Use liquibase to generate a changeset in the changelog by diff'ing the two databases:
liquibase --referenceUrl=jdbc:mysql://localhost:3306/tempbase --referenceUsername=foo --referencePassword=baz diffChangeLog
Drop the temporary database.
Synchronise the changeset: liquibase changelogSync
but there must be a better way...
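The steps above can be wired into a single script, for illustration (a sketch: credentials and database names are placeholders, and the connection details for the developer database are assumed to come from liquibase.properties as described):

```shell
#!/bin/sh
# 1. Create a scratch database and bring it to the state of the current changelog
mysql -u foo -pbaz -e "CREATE DATABASE tempbase"
liquibase --url=jdbc:mysql://localhost:3306/tempbase update

# 2. Diff the developer database (from liquibase.properties) against the scratch
#    database, appending any differences to the changelog as new changesets
liquibase --referenceUrl=jdbc:mysql://localhost:3306/tempbase \
          --referenceUsername=foo --referencePassword=baz diffChangeLog

# 3. Drop the scratch database and mark the new changesets as already applied
mysql -u foo -pbaz -e "DROP DATABASE tempbase"
liquibase changelogSync
```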
You are right that Liquibase cannot compare a changelog file with a database. The only real option is to compare your developer database with an actual Liquibase-managed database, or at least a temporarily created one.
What I would suggest as the better way is to shift the developers to authoring Liquibase changesets in the first place. It is different tooling than they may be used to, but it has the huge advantage that they will know the change they wanted to make is the one that makes it all the way to production. Any diff-based process (such as using diffChangeLog) will usually guess right about what changed, but not always, and those differences are often not noticed until production.
Liquibase has various features, such as formatted SQL changelogs, designed to ease the transition from developers working directly against their database to tracking changes through Liquibase, because once that transition is made many things get much easier.
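A formatted SQL changelog lets developers keep writing plain SQL while Liquibase tracks it (an illustrative example; the author, changeset id, and table are made up):

```sql
--liquibase formatted sql

--changeset alice:add-status-column
ALTER TABLE orders ADD COLUMN status VARCHAR(20);
--rollback ALTER TABLE orders DROP COLUMN status;
```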
With Liquibase Pro you can create a snapshot file that accomplishes the same thing, and then use the snapshot file as the comparison base for your database updates.
https://www.liquibase.org/documentation/snapshot.html
I mention Pro because it takes care of stored logic comparisons as well.
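A snapshot-based flow looks roughly like this (a sketch: the file name is made up, connection details are assumed to come from liquibase.properties, and the offline: URL syntax is Liquibase's offline-database support for comparing against a stored snapshot):

```shell
# Capture the current state of the database as a JSON snapshot file
liquibase --outputFile=baseline.json snapshot --snapshotFormat=json

# Later, diff the live database against the stored snapshot
liquibase --referenceUrl="offline:mysql?snapshot=baseline.json" diff
```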

Schema Change/Update script for Database deploy

I need to change the database schema. I'm planning to write schema change and update scripts to track database changes and apply them. As a start I followed
Versioning Databases – Change Scripts
and got the gist of what the author is getting at; however, since I haven't worked much with SQL scripts before, a tutorial or something to start with would be good. I did some research on the web and learned that most people use automatic comparison tools to generate the scripts, which I don't want to do, for the obvious reason that I wouldn't learn anything in the process.
I'm looking for tutorials/links on how to write change scripts and update scripts. Especially update scripts, as I couldn't find even a single script or piece of pseudo-code showing how to update a schema by comparing against a SchemaChangeLog table, connecting to the table from scripts, and so on.
Thanks in advance!
I would recommend using a database migration tool like Liquibase.
Each change to the database is captured as a changeset, and Liquibase automatically keeps track of which changesets have been applied to the database, enabling updates and rollbacks.
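A minimal changeset with an explicit rollback looks like this (illustrative names):

```xml
<changeSet id="1" author="bob">
    <createTable tableName="schema_demo">
        <column name="id" type="INT"/>
    </createTable>
    <rollback>
        <dropTable tableName="schema_demo"/>
    </rollback>
</changeSet>
```

Liquibase records each applied changeset in its DATABASECHANGELOG table, which is how it knows what to skip on the next update and what to undo on a rollback.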

Groovy: how to do a two-phase commit? Can Sql.withTransaction manage transaction scope across multiple databases?

Well, I think my question says it all. I need to know whether Groovy SQL supports two-phase commits. I'm programming a Grails service where I want to define a method which does the following:
Get an SQL instance for database 1,
Get an SQL instance for database 2,
Open a transaction somehow:
Within the transaction, call two different stored procedures on each database respectively.
Then commit somehow, or roll back on both connections if needed.
I haven't found any useful information about this anywhere on the web.
I have to program two-phase commits anyway, so even if this is supported by some other means (e.g. getting help from Spring artifacts and using them in Grails), please guide me. This has become a showstopper for me at the moment.
Note: I'm using MySQL and the mysql-connector driver.
Thanks,
Alam Sher
The current version of MySQL seems to support two-phase commits as long as you're using the InnoDB storage engine. There are other restrictions.
MySQL reference for two-phase commit
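MySQL's mechanism for this is the XA statement family. A sketch of the protocol on one connection follows (the transaction id and procedure are made up; in practice a transaction manager drives both connections and only commits after every branch has prepared successfully):

```sql
-- Do the work inside an XA transaction branch
XA START 'txn1';
CALL my_stored_procedure();  -- hypothetical procedure
XA END 'txn1';

-- Phase 1: this branch votes to commit
XA PREPARE 'txn1';

-- Phase 2: commit everywhere, or XA ROLLBACK 'txn1' if any branch failed to prepare
XA COMMIT 'txn1';
```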
Groovy added "transaction support" in 1.7, but I'm not certain what they mean by that.