Do we need to run DML commands like insert in Liquibase scripts of a Corda state schema during migration from v1 to v2?

Imagine I have added a new field called token in my CarV2 state (version 2), and suppose the value to populate for token should be derived from an existing field: token = carNo + 10.
My thinking is that in the Liquibase schema script for version 2 of the CarV2 state we only need to add the new column, and the data to populate in the schema table will be handled by the state migration transaction inside Corda (i.e. the new CarStateV2 output in the migration transaction can be created with this logic).
Is that correct?
Or do I need to add a DML command after the column-addition changeset in the version 2 Liquibase script itself for this (carNo + 10) logic?

My understanding of the migration scenario, which I verified in the Slack channel, is:
Case 1: If the added field exists both in the state and in the schema table (i.e. there is a mapping between them), then the logic for populating the data for the version 2 release should be handled in a Corda transaction that migrates state v1 to v2, and the schema table data will be updated along with it. (In that case no Liquibase DML scripts are needed for v2.)
Case 2: If we need to add and populate a field in the schema table alone, one that is not mapped in the state, then its data population for the version 2 release should be handled in the Liquibase version2.xml file with the appropriate logic (a sketch of such a changelog follows below).
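For Case 2, a minimal sketch of what such a version 2 changelog could look like, assuming a hypothetical car_states table with an existing car_no column and the new token column to backfill (table and column names are illustrative only, not taken from the question):

    <databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
                       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                           http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.6.xsd">

        <!-- DDL: add the new column for the v2 schema -->
        <changeSet id="car-schema-v2-add-token" author="example">
            <addColumn tableName="car_states">
                <column name="token" type="BIGINT"/>
            </addColumn>
        </changeSet>

        <!-- DML: backfill token = car_no + 10 for rows written under the v1 schema -->
        <changeSet id="car-schema-v2-backfill-token" author="example">
            <sql>UPDATE car_states SET token = car_no + 10 WHERE token IS NULL</sql>
        </changeSet>
    </databaseChangeLog>

In Case 1 only the first (DDL) changeset would be needed, since the Corda migration transaction that produces the v2 states repopulates the mapped schema rows.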

Related

How to generate and store the "initial" state of an existing database in Liquibase DATABASECHANGELOG table?

I'm in the process of integrating Spring Boot microservices with Liquibase. Prior to executing any changesets, I would like to extract the "initial" state of an existing database (Oracle) and store it in the Liquibase DATABASECHANGELOG table. Is there a way to do this?
What you would do is use the diffChangeLog command to generate a changelog.xml that contains all the changes needed to update a pristine database to the existing state of your database. If you already have a changelog, this would append to the end of that changelog, and you might want to manually rearrange the changesets so they are in the correct order.
You then use the changeLogSync command to populate the existing database with a DATABASECHANGELOG table that shows all of those changes have been deployed to that database.
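For illustration, with the older camelCase CLI syntax the two steps could look roughly like this; the connection details and file names are placeholders, and the Oracle JDBC driver is assumed to be on the classpath:

    # Generate a changelog that would bring an empty (pristine) database up to the
    # state of the existing database; the existing database is the reference here
    liquibase --changeLogFile=db.changelog.xml \
              --url=jdbc:oracle:thin:@//pristine-host:1521/ORCL \
              --username=app --password=secret \
              --referenceUrl=jdbc:oracle:thin:@//existing-host:1521/ORCL \
              --referenceUsername=app --referencePassword=secret \
              diffChangeLog

    # Mark every changeset in that changelog as already applied to the existing
    # database (fills DATABASECHANGELOG without executing the changes)
    liquibase --changeLogFile=db.changelog.xml \
              --url=jdbc:oracle:thin:@//existing-host:1521/ORCL \
              --username=app --password=secret \
              changeLogSync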

Liquibase validation failed after modifying the entity

I wanted to change the data type of one field from string to date, so I dropped the table in the database, modified the Liquibase file, and ran the application. Now it fails with the following message:
liquibase.exception.ValidationFailedException: Validation Failed:
After that I reverted the Liquibase file changes and ran the application again. This time there is no error, but it does not create the table.
Please help me solve this issue.
I assume the failed validation was an error about checksums. This happens when you modify a changeset which was already executed and try to execute it again.
Liquibase keeps all executed changesets in a table called databasechangelog, so it can find out which changesets can be skipped during execution.
To execute a changeset again, delete the corresponding row from this table first, and then run Liquibase again.
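For example, assuming the changeset's id, author, and changelog path are known (the values below are placeholders), the row can be removed like this:

    -- Remove the record of the previously executed changeset so Liquibase will run it again
    DELETE FROM DATABASECHANGELOG
    WHERE ID = 'create-person-table'                          -- changeset id (placeholder)
      AND AUTHOR = 'jdoe'                                     -- changeset author (placeholder)
      AND FILENAME = 'db/changelog/db.changelog-master.xml';  -- changelog path (placeholder)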
When using Liquibase, you shouldn't (in general) modify the database outside of Liquibase; the main exception is a developer working on their own private development database. If you are in that state (working on your own private database) and you modify the database outside of Liquibase (e.g. by dropping a table), you will also need to delete the row in the DATABASECHANGELOG table that corresponds to the table create statement, so that when you re-run liquibase update it will re-create the table.

Redgate migration scripts not running on deployment

I've been reading through the Redgate documentation on migration scripts, and I'm trying to add a new column, with a foreign key to another table, to an existing table.
Here's what I have done:
1. Added the new column, made it nullable, and created the relationship to a new table, then committed the changes.
2. I then add static data to the new table so that the migration can run. I commit this static data.
3. I then add a blank migration script and set all null values on the column I've created in the last commit to be the Id of one of the records in the related table (a sketch of such a script follows after this list). I then commit this change.
4. I then run a deployment of both commits to my testing environment, where records already exist.
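A backfill script of the kind described in step 3 might look roughly like the following; the table and column names are placeholders, not taken from the question:

    -- Point existing rows at one record in the new related table
    UPDATE dbo.Orders
    SET CategoryId = (SELECT TOP 1 Id FROM dbo.Categories WHERE Name = 'Default')
    WHERE CategoryId IS NULL;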
The problem I'm having is that the column gets created, but the script doesn't seem to run, because the column values stay null. I've verified that the script does change the column values: when I run it manually it executes successfully.
Am I doing something wrong when using these scripts? Thanks.
I was creating blank migration scripts, which led SQL Compare to set the column as not null. You have to specifically create a migration script on the schema change that requires it, or SQL Compare will override all changes.

Does liquibase recognize if table is already up to date before executing changeSet?

In our case Liquibase is used to update databases for existing installations; new installations are already up to date.
Assume we have a new installation. Starting the application will force the Liquibase changesets to execute (e.g. change the type of a column), but as mentioned before there is nothing to update, because the column was already created with the correct type.
Does Liquibase recognize that the table column is already up to date, or does it try to execute the changeset because there is no entry for it in the DATABASECHANGELOG table?
Liquibase uses an alternative approach that avoids the need to analyze the target database's data dictionary. This makes DB operations simpler and more cross-platform.
A special table, "DATABASECHANGELOG", keeps a record of the changesets applied to the target database instance. This table also contains a checksum (calculated at runtime) to determine whether changesets were altered between runs of Liquibase.
So if you altered the type of a table column, Liquibase can detect this and can throw an error when run against an existing database. (Obviously, on a new DB, the table would be created as expected.)
Finally, the changeset documentation describes two optional attributes ("runAlways" and "runOnChange") which tell Liquibase to reapply a changeset more than once to a database. There is also a "clearCheckSums" command that can be used to reset the checksums on an existing database. Obviously you need to know what you're doing when using such an option :-)
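For illustration, a changeset that should be re-applied whenever its body changes might look like this (the id, author, and object names are placeholders):

    <!-- Re-executed by Liquibase every time the changeset contents are modified -->
    <changeSet id="refresh-active-users-view" author="example" runOnChange="true">
        <createView viewName="active_users_v" replaceIfExists="true">
            SELECT id, name FROM users WHERE active = 1
        </createView>
    </changeSet>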
Liquibase will not recognize anything automatically.
But you can use <preConditions/> in your changeSet to check if your changeSet must be applied or not.
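For example, a precondition can skip a changeset when the column is already in the desired state; the table and column names below are placeholders:

    <changeSet id="add-email-column" author="example">
        <!-- If the column already exists, mark the changeset as ran instead of failing -->
        <preConditions onFail="MARK_RAN">
            <not>
                <columnExists tableName="person" columnName="email"/>
            </not>
        </preConditions>
        <addColumn tableName="person">
            <column name="email" type="VARCHAR(255)"/>
        </addColumn>
    </changeSet>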

BigQuery Schema error despite updating schema

I'm trying to run multiple simultaneous jobs in order to load around 700K records into a single BigQuery table. My code (Java) creates the schema from the records of its job, and updates the BigQuery schema if needed.
Workflow is as follows:
A single job creates the table and sets the (initial) schema.
For each load job we create the schema from the records of the job. Then we pull the existing table schema from BigQuery, and if it is not a superset of the schema associated with the job, we update the schema with the new merged schema. The last part (starting from pulling the existing schema) is synchronized (using a lock): only one job performs it at a time. The schema update uses the UPDATE method, and the lock is released only after the client's update method returns.
I was expecting this workflow to avoid schema update errors. I'm assuming that once the client returns from the update call, the table is updated, and that jobs already in progress can't be hurt by the schema update.
Nevertheless, I still get schema update errors from time to time. Is the update method atomic? How do I know when a schema has actually been updated?
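A rough sketch of the synchronized merge-and-update step with the google-cloud-bigquery Java client, assuming hypothetical isSuperset/mergeSchemas helpers; this illustrates the workflow described above rather than the asker's actual code:

    import com.google.cloud.bigquery.BigQuery;
    import com.google.cloud.bigquery.BigQueryOptions;
    import com.google.cloud.bigquery.Schema;
    import com.google.cloud.bigquery.StandardTableDefinition;
    import com.google.cloud.bigquery.Table;
    import com.google.cloud.bigquery.TableId;

    public class SchemaUpdater {
        private final BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        private final Object schemaLock = new Object();

        /** Ensure the table schema covers the fields required by one load job. */
        void ensureSchema(TableId tableId, Schema jobSchema) {
            synchronized (schemaLock) {                 // only one job updates the schema at a time
                Table table = bigquery.getTable(tableId);
                Schema existing = table.getDefinition().getSchema();
                if (!isSuperset(existing, jobSchema)) {
                    Schema merged = mergeSchemas(existing, jobSchema);  // hypothetical merge helper
                    table.toBuilder()
                         .setDefinition(StandardTableDefinition.of(merged))
                         .build()
                         .update();                     // client call returns after the metadata update
                }
            }
        }

        private boolean isSuperset(Schema existing, Schema required) { /* field-by-field check */ return false; }
        private Schema mergeSchemas(Schema existing, Schema required) { /* union of fields */ return existing; }
    }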
Updates in BigQuery are atomic, but they are applied at the end of the job. When a job completes, it makes sure that the schemas are equivalent. If there was a schema update while the job was running, this check will fail.
We should probably make sure that the schemas are compatible instead of equivalent. If you do an append with a compatible schema (i.e. you have a subset of the table schema) that should succeed, but currently BigQuery doesn't allow this. I'll file a bug.