How to prevent failure to provide Liquibase Context in PROD?

I have a major concern about Liquibase's behavior of ignoring changeset contexts when no context is supplied as a run-time parameter.
I'm setting up my first Liquibase project, using "dev, test, prod" as contexts in changesets. I'm passing in the context from a Spring Boot application.properties, which will have different versions for dev, test, etc. So the PROD version will have spring.liquibase.contexts=prod. So far, so good.
But what will happen if somehow, years from now, that line gets accidentally deleted, or commented out? Or what if someone happens to run Liquibase against PROD and doesn't supply "prod" as context?
It seems to me that ALL prior changesets NOT marked with "prod" will then run. This would include any marked only "test" that insert test data or, God forbid, drop tables... Worse, they'll run out of order.
I understand Liquibase DOES recommend including "test"-only changesets along with everything else, and using the "test" context (only) to distinguish them.
So. Am I right that this is a potential disaster waiting to happen? Is there a way to prevent this?
Thank you, StackOverflows!!

Yes, you are right that a potential disaster can happen, and it could happen many other ways in the process you describe as well. This design is intentional, because most people don't use contexts, so the majority want all changesets to run when they do a liquibase update.
A safety net I have seen at various places: create a check for the context around the liquibase command in your CI/CD automation layer. For example, for those using Jenkins, make sure there is a mandatory context parameter before the build can even run.
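You can also put a guard inside the application itself. Here is a minimal sketch for the Spring Boot case, assuming the standard autoconfiguration where Liquibase runs via a SpringLiquibase bean built from the spring.liquibase.* properties (the class name RequireLiquibaseContext is mine, not a Liquibase API):

    import liquibase.integration.spring.SpringLiquibase;
    import org.springframework.beans.BeansException;
    import org.springframework.beans.factory.config.BeanPostProcessor;
    import org.springframework.stereotype.Component;

    // Fails application startup if Liquibase is about to run with no context at all.
    // postProcessBeforeInitialization runs before SpringLiquibase's
    // afterPropertiesSet(), which is what triggers the update.
    @Component
    public class RequireLiquibaseContext implements BeanPostProcessor {

        @Override
        public Object postProcessBeforeInitialization(Object bean, String beanName)
                throws BeansException {
            if (bean instanceof SpringLiquibase) {
                String contexts = ((SpringLiquibase) bean).getContexts();
                if (contexts == null || contexts.trim().isEmpty()) {
                    throw new IllegalStateException(
                            "Refusing to run Liquibase without an explicit context; "
                            + "set spring.liquibase.contexts");
                }
            }
            return bean;
        }
    }

With something like that in place, deleting or commenting out the spring.liquibase.contexts line stops the application from starting instead of silently running every changeset.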

Related

Ordering tests in TFS 2012

There are a few tests in my testing solution that must be run first or else later tests will fail. I want a way to ensure these are run first and in a specific order. Is there any way of doing this other than using a .orderedtest file?
Some problems with the .orderedtest:
Certain tests should be run in a random order after the "set up" tests are finished
Ordered test does not seem to call the ClassInitialize method
Isn't an orderedtest a form of test list, which is deprecated in VS/TFS 2012?
My advice would be to fix your tests to remove the dependencies (i.e. make them proper "unit" tests) - otherwise they are bound to cause problems later, e.g.:
causing a simple failure to cascade so that hundreds of tests fail and make it hard to find the root cause
failing unexpectedly because someone has inadvertently modified the execution order
reporting passes when in fact they should be failing, just because the initial state is not as they required
You could try approaches like:
keep the tests separate, but make each of them set up and tear down the test environment it requires (a shared class to provide the initial state would be helpful here; see the sketch after this list)
merge the related tests into a single one, so that you can control the setup, execution, and close-down in a robust way.
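As a sketch of the first approach, in JUnit terms (your TFS setup would use MSTest, where [TestInitialize] plays the same role as @Before; TestEnvironment is a hypothetical stand-in for your shared state class):

    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class OrderIndependentTest {

        // Hypothetical shared class that builds the initial state every test needs.
        static class TestEnvironment {
            private boolean ready;
            void resetToKnownState() { ready = true; } // e.g. reload fixtures, clear caches
            boolean isReady()        { return ready; }
        }

        private TestEnvironment env;

        @Before
        public void setUp() {
            // Each test rebuilds its own state, so execution order no longer matters.
            env = new TestEnvironment();
            env.resetToKnownState();
        }

        @Test
        public void behavesTheSameInAnyOrder() {
            assertTrue(env.isReady());
        }
    }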

Should I commit when tests are still failing?

Our Rails development team tries to follow continuous integration. We have decided to adopt a policy of only committing features whose tests pass. Is that a good way to go? Should I delay integrating with others' features until my tests pass (even if part of the feature works OK)? Thanks in advance
The tests should pass--if you're running a CI server it'll just spam people with emails until they do. Without a CI server everyone else will have to figure out if those tests are "supposed" to fail. Boo.
Another option is to only check in tests for actually-written features; if you're using tests as an executable specification they wouldn't all pass until the entire app was done and nobody would be able to check anything in ever.
You may also be able to mark tests as "pending" or indicate they should be skipped, but remembering to un-pend or un-skip them later is often problematic.
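In JUnit terms that looks like the sketch below (RSpec has its own pending mechanism for the Rails case); note the trap mentioned above, that nothing forces anyone to remove the annotation once the feature is written:

    import org.junit.Ignore;
    import org.junit.Test;
    import static org.junit.Assert.fail;

    public class UnfinishedFeatureTest {

        // Reported as "ignored" rather than red in CI.
        @Ignore("pending: feature not implemented yet")
        @Test
        public void calculatesTheNewThing() {
            fail("not implemented");
        }
    }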
The tests SHOULD PASS; that's the reason you are writing them in the first place. If for some reason one or more tests do not pass, it indicates that something went wrong (obviously), and you and your team should be working on the solution.
If code is committed with test failures, send spam emails blaming the programmer who did it; that way, the next time they will pay more attention before committing code.
I have heard of one way to avoid committing code with test failures, though I have not personally tried it. It involves having two repositories (one could be a branch); the theory behind it is:
The developers' commits target a branch whose only purpose is to guarantee that all tests pass; you should configure your CI server to build and run tests from this branch
When all the tests pass on the branch, a merge is done to the trunk; since everyone should be working on this branch, the merge should be transparent and automatic
I repeat, I have not tried this approach, and in my opinion it involves more problems than it solves
Another alternative would be to add a hook to the commit event in your VCS that forces all tests to run, but this could make even a single commit time-consuming
For additional information, you could check this response:
https://stackoverflow.com/a/7110774/1268570
Personally, I would wait until the tests pass before integrating other features.

Grails integration tests failing in a (seemingly) random and non-repeatable way

We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at other times. Running 'test-app' sometimes results in all tests passing, and sometimes results in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
Grails 2.0 integration test pollution? seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit Grails-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have sort of proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this, I like to try to find the unit test(s) causing the problem. This might be kinda tricky since yours seem to only be failing on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up/torn down properly. It's not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order, which might not help you solve your current problem directly. But what might help is that it prints out all of your tests, so you can take what it gives you, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s) (http://grails.org/plugin/random-test-order). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on the rendered view name), but it should still print out your test names for you (which is better than doing it yourself :)
The fact that integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional tests (Selenium) executing in unpredictable order, some of them not cleaning up the database properly. Sure, the situation with functional tests is different, since it is more difficult to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing the execution order as mentioned by Jarred in 3). Assuming you can then reproduce the behavior, I would check the transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are.
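In a Grails 2.0 app you would normally do that with a line in the log4j closure in Config.groovy; purely as a sketch of the same idea in code (assuming Log4j 1.x, which Grails 2.0 uses, and that an appender is already configured):

    import org.apache.log4j.Level;
    import org.apache.log4j.Logger;

    public class EnableTransactionLogging {
        public static void main(String[] args) {
            // Logs transaction begin/commit/rollback from Hibernate, making it
            // visible when a test commits for real instead of rolling back.
            Logger.getLogger("org.hibernate.transaction").setLevel(Level.DEBUG);
        }
    }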
Sorry, I don't yet have a good explanation of why wiping out the unit tests gets rid of the symptoms, besides a general "possibly metaclassing issues". :)

How to break a maven build when dependencies are out of date?

I love the maven-versions-plugin but sometimes I forget to run it for a while. Is there a way to make a maven build fail (and thus have a continuous build fail) when certain important dependencies are out of date?
I think you're approaching this incorrectly. Mail yourself the output of the maven-versions-plugin if you want, but don't fail the build due to changes outside of your control.
More to the point, why would you want to needlessly update to the latest versions? I have seen many tricky problems appear due to upgrades that brought slight changes to previous behaviour.
Updating versions automatically is, in general, a bad practice. There is no practical reason to use the latest version of any package. If the library you're using satisfies your requirements, you should stay with that version for security/stability reasons. And forever.
I think that maven-versions-plugin is an anti-pattern itself.
PS: When and if you want to do integration testing of modules developed by different teams/programmers, that is "integration testing". Even in this case, I still think that on-the-fly version updating is the wrong approach. The root project should not do this integration testing; instead, every sub-module (or JAR, in your case) has to be responsible for integration testing of itself together with the rest of the system. When a sub-module increases its version, it has to validate whether everything is still fine, and only then release a new version to the repository. And while the sub-module is doing that validation, it has to depend on statically specified version numbers.

Check SQL script validity

As part of a release we run a load of PL/SQL scripts against a database. Recently someone left the ; off the end of a line in one script that called another script, which meant the called script did not get run. Because this did not cause an error (the script just didn't get run), it took quite a while to track down what had happened.
I want to check the scripts, before they are run, for lines that are missing either a ; at the end or a / on the line after. This is made more complicated because a 'line' in a script can actually span more than one physical line if it is a statement or block of code.
It seems to me that to do this I'm going to have to parse the scripts and then check that they meet the above.
I've found ANTLR and wonder if this might be a way to do it, since there seem to be existing PL/SQL grammars, but it looks like that's going to be a steep learning curve for what's just a simple check.
Does anyone know an easy way or any other tools, eclipse plugins etc that I can use to check for lines in the scripts that are missing either a ; at the end or a / on the line after?
Update
We already do most of the stuff Tom H suggested. The scripts are run against our test server, and we have a version table that gets updated at the end. The problem was that the missing semicolon in the container script meant one script did not get run, but the rest, including the one to update the version number, ran without errors. Therefore the problem only got picked up quite a way into testing, and the database had to be restored before re-running the scripts with the missing semicolon added, so basically half a day of testing time was lost. If there were a simple way to check this before running the scripts against the test server, it could save quite a bit of time.
I agree with MattH that you may be going about this the wrong way. I would just add an insert statement to the end of all of your scripts that inserts a "version" row into a table in the database. At the end of your deployment scripts it's then an easy task to check that the version table has all of the correct rows in it.
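A minimal sketch of such a check over JDBC (the script_versions table, its column name, and the script names are all illustrative, assuming each script ends with an INSERT of its own name):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.Arrays;
    import java.util.List;

    public class VerifyScriptsRan {
        public static void main(String[] args) throws Exception {
            // Scripts expected in this release; each one's last statement is assumed
            // to be: INSERT INTO script_versions (script_name) VALUES ('...');
            List<String> expected = Arrays.asList("010_add_table.sql", "020_load_data.sql");

            // args: JDBC URL, user, password
            try (Connection con = DriverManager.getConnection(args[0], args[1], args[2]);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT COUNT(*) FROM script_versions WHERE script_name = ?")) {
                for (String script : expected) {
                    ps.setString(1, script);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        if (rs.getInt(1) == 0) {
                            throw new IllegalStateException("Script never ran: " + script);
                        }
                    }
                }
            }
            System.out.println("All expected scripts ran.");
        }
    }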
Also, you should run all of your release scripts against your QA server exactly as they will be run in production. That's where all of the testing takes place. You never do anything to the server besides what is in your release steps: you only run the release scripts, and if those release scripts are ever changed, you refresh the QA server with them and redo the testing.
When you go to production, your release process has then been fully tested. As a fail-safe measure you can also use tools like Red Gate's SQL Compare and SQL Data Compare to check that production matches the QA server. The data compare would only be against certain tables (look-up tables, etc.). If you have data changes to major tables (1M rows, etc.), then you can write a custom script to check that they are correct.
Even if the scripts are different for every release (and not part of a defined source control structure that creates or replaces database objects) I would adopt a practice of breaking the scripts down into the most fundamental units of work per file and deploying them through Ant with the standard sql task. You probably have these types of scripts:
CREATE or REPLACE dbobject...
SQL DML scripts
Anonymous PL/SQL blocks
If you standardize on a consistent statement delimiter (I suggest using "/" since it works with all of the cases above) and set the deployment to fail on error, then Ant will either deploy all of the files or indicate why it couldn't.
I think it would be very difficult to otherwise parse files of one or more SQL and/or PLSQL statements and find missing delimiters if there are no standards on delimiter choice or statements per file.
Just a thought, but are you going about this the wrong way?
I assume that, at the file level, the lack of a semicolon was not a problem, but that it only became a problem when run via the batch processing? If that's the case, maybe you can change your batch processing to cope with it.
If it was the file, then testing should have picked it up. You don't want to parse your input files to make sure they compile etc.