How to define an optional change set in Liquibase?

We use Liquibase as a database refactoring tool in a cloud service, and would now like to employ it for some lightweight data migration, which would be realized as a CustomTaskChange and would take just a few seconds. This data migration is 'nice to have', but it is by no means mandatory for the service to function properly: if it fails for some reason, the change set should just be skipped, the service started nevertheless, and the change set retried on the next restart of the service until it finally succeeds. In other words, errors when executing the change set should be ignored, but the set should be marked as ran only after it has actually run successfully once.
We wonder how we could implement this kind of behavior with Liquibase. The <changeSet> attribute failOnError="false" continues in case of an error, but according to the documentation and an answer given by Nathan Voxland here on Stack Overflow, it always marks the change set as ran - hence Liquibase wouldn't retry it during the next startup of the service. The <preConditions> attribute onFail seems to be concerned only with failing preconditions, so startup would still fail in case of an error even when setting onFail to, say, CONTINUE.
Is there any other option or attribute that we have overlooked, or a recommended way to solve this kind of situation?

You may be able to achieve the "retry until successful" behaviour if you implement the optional data migration inside the code of a custom precondition. Then you could configure onFail of that precondition to CONTINUE, which will give you the behaviour you want (source):
CONTINUE – Skip over the change set. Execution of the change set will be attempted again on the next update. Continue with the change log.
I'm not entirely sure whether implementing the migration in the precondition code is technically possible - it certainly wasn't meant for such things. You may also want to verify that the custom precondition is not executed again once the change set has been marked as ran.
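Untested, but a minimal sketch of that idea might look like the following (the interface and exception types are Liquibase's; the class name and the runMigration helper are made up):

    import liquibase.database.Database;
    import liquibase.exception.CustomPreconditionErrorException;
    import liquibase.exception.CustomPreconditionFailedException;
    import liquibase.precondition.CustomPrecondition;

    public class OptionalMigrationPrecondition implements CustomPrecondition {

        @Override
        public void check(Database database)
                throws CustomPreconditionFailedException, CustomPreconditionErrorException {
            try {
                runMigration(database);
            } catch (Exception e) {
                // Report a precondition *failure* (not an error), so that
                // onFail="CONTINUE" applies and the change set is skipped now
                // and retried on the next update run.
                throw new CustomPreconditionFailedException(
                        "Optional migration not yet successful: " + e.getMessage());
            }
        }

        private void runMigration(Database database) throws Exception {
            // ... the few seconds of data migration go here ...
        }
    }

The change set would then reference it with onFail="CONTINUE":

    <changeSet id="optional-data-migration" author="team">
        <preConditions onFail="CONTINUE">
            <customPrecondition
                className="com.example.OptionalMigrationPrecondition"/>
        </preConditions>
        <!-- intentionally empty: the actual work happens in the precondition -->
    </changeSet>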

Related

Facing errors during SQL deployment in Azure DevOps pipeline

I am using the SQL DACPAC type of deployment in an Azure DevOps release pipeline, but I am getting the error below. I have no idea about SQL. Any suggestions?
Publishing to database 'database_name' on server 'Server_name'.
Initializing deployment (Start)
*** The column [dbo].[xxxx].[yyyy] is being dropped, data loss could occur.
Initializing deployment (Complete)
Analyzing deployment plan (Start)
*** If this deployment is executed, changes to [dbo].[xxx2] might introduce run-time errors in [dbo].[yyyy2].
Analyzing deployment plan (Complete)
Updating database (Start)
An error occurred while the batch was being executed.
Updating database (Failed)
I agree with Michael's comment.
The column [***] is being dropped, data loss could occur.
and
If this deployment is executed, changes to [] might introduce run-time errors in [].
are both expected: they are raised as safety checks. I assume you made some changes to your database that the tooling cannot verify as safe for the target database. The deployment is blocked because the server can't determine whether the changes are destructive.
The first solution is to set /p:BlockOnPossibleDataLoss=false.
BlockOnPossibleDataLoss defaults to true, which stops the deployment if possible data loss is detected; setting it to false makes SqlPackage.exe ignore such warnings.
So go to the task and enter the above argument in the Additional SqlPackage.exe Arguments field.
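For reference, the same property applied outside the pipeline, in a direct SqlPackage.exe call, might look like this (server, database, and dacpac names are placeholders):

    SqlPackage.exe /Action:Publish /SourceFile:"MyDatabase.dacpac" /TargetServerName:"Server_name" /TargetDatabaseName:"database_name" /p:BlockOnPossibleDataLoss=false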
The second solution is to input
/p:TreatVerificationErrorsAsWarnings=true
Note: use the second solution only if the first one does not work for you.
Setting TreatVerificationErrorsAsWarnings=true treats verification errors as warnings, so you get a complete list of issues instead of the publish action stopping at the first error.
See this doc for more publish action properties.

Wix Installer - how to run a custom action just after a rollback has happened?

I have created a WiX installer with multiple responsibilities, such as deploying a service to Tomcat, adding and updating that service's config files on the Tomcat server, creating a web application in IIS, creating a MongoDB database, and so on.
For the config-update kind of tasks I have written custom actions with deferred execution. Sometimes a custom action fails for some reason, which causes a rollback, and sometimes this rollback leaves footprints behind - the service on Tomcat, configuration files, or a MongoDB database that needs to be removed.
So here I want to remove the leftover footprints using a custom action that runs just after the rollback has happened.
I have added a custom action with Execute="rollback" and scheduled it Before="InstallFinalize", but it just gets called before the rollback happens.
Is there any way to deal smartly with this kind of situation?
Rollback custom actions only execute after a failure has occurred, and only the subset that was scheduled (and thus armed) before the error. Assuming you need elevated privileges, they are your only clean option, so I would start by verifying the order in which your actions are scheduled.
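For illustration, a minimal sketch (all IDs, binaries, and entry points here are invented): the rollback action is scheduled immediately before the deferred action it compensates, so it is armed by the time that action can fail:

    <CustomAction Id="RollbackTomcatDeploy" BinaryKey="CustomActionBinary"
                  DllEntry="CleanupTomcat" Execute="rollback" Return="ignore" />
    <CustomAction Id="DeployTomcatService" BinaryKey="CustomActionBinary"
                  DllEntry="DeployTomcat" Execute="deferred" Impersonate="no"
                  Return="check" />

    <InstallExecuteSequence>
      <!-- Armed first: runs during rollback if anything scheduled after it fails -->
      <Custom Action="RollbackTomcatDeploy" After="InstallFiles">NOT Installed</Custom>
      <Custom Action="DeployTomcatService" After="RollbackTomcatDeploy">NOT Installed</Custom>
    </InstallExecuteSequence>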
There's one more place you can try to run an action after rollback occurs: at the end of the InstallUISequence. When an error occurs in an installation that's showing full UI, the installer then runs the UI sequence entry msiDoActionStatusFailure (-3). Typically this shows a dialog box explaining that the installation (or uninstallation) failed. And it's hard to do more, as properties do not flow back from the execute sequence to the UI sequence.
In theory you can schedule any action at that entry in order to do something first. However, this action will run with the same privileges as the UI (typically limited), and will only run when the UI is shown. So this probably will not help your scenario. (Also, unless you're careful, you'll mess up the expected UI experience if you fail to invoke the dialog box it would otherwise show.)

programmatically set show_sql without recreating SessionFactory?

In NHibernate, I have show_sql turned on for running unit tests. Each of my unit tests clears the database and refills it, and this results in lots of SQL queries that I don't want NHibernate to output.
Is it possible to control show_sql without destroying the SessionFactory? If possible, I'd like to turn it off when running setup for a test, then turn it on again when the body of the test starts to run.
Is this possible?
The only place you can set this is when building a NHibernate.Cfg.Configuration.
Once you've created a SessionFactory from your Configuration, there's no way to access the configuration settings, which I think is one of the reasons for using a factory pattern: to ensure that instances once successfully built can't be messed up by runtime re- or mis-configuration.
If you really need that feature, get the NH source code and find the place where the show_sql setting is evaluated.
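To make that concrete, here is a minimal sketch (the property name comes from NHibernate's Environment class; the rest is illustrative). Because show_sql is fixed at factory-build time, the only way to change it is to build a second factory - which is exactly the expensive step the question wants to avoid:

    using NHibernate;
    using NHibernate.Cfg;

    var cfg = new Configuration().Configure();   // reads hibernate.cfg.xml
    cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "false");  // quiet, for test setup
    ISessionFactory quietFactory = cfg.BuildSessionFactory();

    // The factory exposes no setter for this; to see SQL again you must
    // reconfigure and build another factory:
    cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");
    ISessionFactory verboseFactory = cfg.BuildSessionFactory();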
Another option, although it may or may not be as good, is to use NHProf and just initialise NHProf when testing.
NHProf doesn't log the database setup/clearing, just the queries used.

How to override edit locks

I'm writing a WLST script to deploy some WAR's and an EAR. However, intermittently, the script will time out because it can't seem to get an edit lock (this script is part of a chain of many other scripts). I was wondering, is there a way to override or stop any current locks on the server? This is only a temporary solution, but in the interest of time, it will do for now.
Thanks.
You could try setting a wait period and timeout:
startEdit([waitTimeInMillis], [timeoutInMillis], [exclusive]).
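For instance (connection details are placeholders), waiting up to a minute for the lock before giving up:

    connect('weblogic', 'password', 't3://adminhost:7001')
    edit()
    # wait up to 60s to acquire the lock; the edit session itself times out after 5 minutes
    startEdit(60000, 300000, 'false')
    # ... make changes ...
    save()
    activate(block='true')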
Are other scripts erroring out, leaving the session locked? You could try adding exception handling around those. Also, if you have 'Automatically acquire lock' enabled in the Admin Console and you use the console while running scripts, it can sometimes cause problems, even though you are not making 'lock-requiring' changes.
Also, are you using the same user for the chained scripts?
Within WLST, you can pass a number as a parameter to gain an exclusive lock. This allows the script to grab a different lock than the regular one that's used whenever an administrator locks from the console. It also prevents two instances of the same script from stepping on each other.
However, this creates complex change merge scenarios that are best avoided (by processes).
Oracle's documentation on configuration locks can be found here.
Alternatively, if you want the script to temporarily relieve any existing locks regardless of the pending changes, you may as well disable change management from the console, minimizing the inconvenience caused.
WLST also contains the cancelEdit command that you could run before you startEdit. Hope one of these options pan out!
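A rough sketch of the cancelEdit approach (the 'y' answers the confirmation prompt; note this throws away whatever changes the existing session holds):

    edit()
    try:
        cancelEdit('y')   # discard any session left over by an earlier, failed script
    except:
        pass              # no session to cancel
    startEdit()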
To take the configuration change lock from another administrator:
If another administrator already has the configuration lock, the following message appears: Another user already owns the lock. You will need to either wait for the lock to be released, or take the lock.
1. Locate the Change Center in the upper left corner of the Administration Console.
2. Click Take Lock & Edit.
3. Make your configuration changes.
4. In the Change Center, click Activate Changes. Not all changes take effect immediately; some require a restart (see Use the Change Center).
As long as you're running WLST as an administrative user, you should be able to jump into an existing edit session with the edit() command - I've done a quick test with two admin users, one in the Admin Console, and one using WLST, and it appears to work fine - I can see the changes in the Admin Console session inside the WLST interpreter.
You could put a very simple exception handler around your calls to startEdit that logs the exception's stack trace but does nothing else, and then rely on the edit call to pop you into the change session.
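Something like this (untested; dumpStack() just prints the last error's stack trace):

    edit()            # puts us into the edit tree and any existing change session
    try:
        startEdit()
    except:
        dumpStack()   # log the stack trace, but do nothing else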
Relying on that is going to be tricky though if another script has started an edit session and is expecting to be able to commit that change session itself - you'll be getting exceptions and unreliable behaviour across multiple invocations.

TeamCity: Managing deployment dependencies for acceptance tests?

I'm trying to configure a set of build configurations in TeamCity 6, and am trying to model a specific requirement in the cleanest possible manner enabled by TeamCity.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes, and I can't afford to spin up a completely isolated site just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
The website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite, which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB is in the middle of using.
While I can say Limit the number of simultaneously running builds to force only one DeploySite to run at a time, I need to have only one at a time, and none while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' marker flag and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this 'set a flag over here and have this bit check it' arrangement is exactly the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, and then set the flag when a deploy starts and clear it after the tests have run.
Seems you should go look on the JetBrains DevNet and the YouTrack tracker first, and remember to use the magic word 'clobber' in your search.
Then you install the Groovy plugin (groovy-plug) and use the StartBuildPrecondition facility therein:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
Use those locks to make the dependent configs 'readers' and the DeploySite config the 'writer' of the shared item, as sketched below.
(This is not a full productised solution hence the tracker item remains open)
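If the plugin behaves as described, the wiring would presumably be something like this (whether these go under Build Parameters | System Properties, and the exact name format, are exactly the open questions below - the lock name is made up):

DeploySite build configuration:
    system.locks.writeLock.DeployedSite=1

AcceptanceSuiteA and AcceptanceSuiteB build configurations:
    system.locks.readLock.DeployedSite=1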
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, and what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one give builds triggered by the completion of a writeLock task read access - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?