Bamboo Specs: create plans with a circular dependency

I am trying to specify multiple plans in Bamboo Specs (8.1.3), but those plans happen to have a circular dependency:
Plan A (Unit tests) when finished - emits artifacts and triggers Plan B
Plan B (Sonar scanner) requires artifacts from Plan A
So in specs: A references B (child) and B references A (artifacts).
Everything is fine as long as those plans already exist, but when I define specs for a brand-new project I get errors that the referenced plan doesn't exist.
If I ever need to migrate the entire specs, there would be a lot of changes: first disable the references, then enable them again. I would like to avoid that.
Is there any way to make Bamboo create the plans first and only then configure them? Maybe this is already supported in a newer specs version? (I couldn't find any mention of it and cannot test it on my instance.)

Can Liquibase or Flyway handle a non-linear, multi-branch versioning scenario?

Here is a tough one.
v1.1 has a table with index i.
v2.1 contains this table and index as well.
A bug was discovered, and in v1.1.0.1 we changed the code and, as a result, decided to drop the index.
We created a corresponding patch for v2.1, v2.1.0.6.
The customer applied patch v1.1.0.1 and a few weeks later upgraded to v2.1 (without patch 6).
As the v2.1 code base performs better with the index, we have a "broken" application.
I can't force my customers to apply the latest patch.
I can't force the developers to avoid such scenarios.
Can Liquibase or Flyway handle this scenario?
I guess these kinds of problems are more organizational than tool-specific. If you support multiple versions (a 1.0 branch and a newer 2.0 branch) and provide patches for both (which is a totally legitimate approach - don't get me wrong here), you will probably have to provide upgrade notes for all these versions, and maybe a matrix that shows from which version to which you can go (and what you can't do).
I just happened to upgrade an older version of Atlassian's Jira bug tracker and found that they do provide upgrade notes for all versions.
That would have meant going from one version to the next to finally arrive at the latest version (I was on 4.x and wanted to go to the latest 5.x) and obeying all the upgrade notes in between. (Btw, I skipped all this and set it up as a completely fresh installation to avoid it.)
Just to give you an impression, here is a page that shows all these upgrade notes:
https://confluence.atlassian.com/display/JIRA/Important+Version-Specific+Upgrade+Notes
So I guess you could provide a small script that recreates the index if somebody wants to go from version 1.1.0.1 to 2.1, and state in the upgrade notes that it needs to be applied.
Since you asked whether liquibase (or flyway) can support this, it may be helpful to mention that liquibase (I only know liquibase) has something called preConditions. That means you can run a changeset (resp. an SQL statement) based on whether, e.g., an index exists (<indexExists>).
That could help to re-create the index if it is missing.
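For illustration, a minimal sketch of such a guarded changeset (the table, column, and changeset identifiers are invented and would need to match your schema):

<changeSet id="recreate-index-i" author="upgrade-script">
    <!-- run only if the index is missing; otherwise record the changeset as executed -->
    <preConditions onFail="MARK_RAN">
        <not>
            <indexExists tableName="my_table" indexName="i"/>
        </not>
    </preConditions>
    <createIndex tableName="my_table" indexName="i">
        <column name="my_column"/>
    </createIndex>
</changeSet>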
But since version 2.1 has already been released (before knowing that the index might be dropped in a future bugfix) there is no chance to add this feature to the upgrade procedure of version 2.1.
Liquibase will handle the drop-index change across branches fine, but since you are going from a version that contains a change (the drop-index change) to one that does not expect it, you are going to end up with your broken app state.
With liquibase, changes are completely independent of each other and independent of any versioning. You can think of the liquibase changelog as an ordered list of changes to make, each with a unique identifier. When you do an update, liquibase checks each change in turn to see if it has been run, and runs it if it has not.
Any "versioning" is purely within your codebase and branching scheme; liquibase does not care.
Imagine you start out with your 1.1.0 release that looks like:
change a
change b
change c
When you deploy 1.1.0, the customer database will know changes a, b, and c were run.
In v2.1 you have added new changesets to the end of your changelog file, so it looks like:
change a
change b
change c
change x
change y
change z
and all 2.1 customers' databases know that a, b, c, x, y, z are applied.
When you create 1.1.0.1 with changeset d that drops your index, you end up with this changelog in the 1.1.0.1 branch:
change a
change b
change c
change d
But when you upgrade your 1.1.0.1 customers to 2.1, liquibase just compares the defined changesets (a, b, c, x, y, z) against the known changesets (a, b, c, d) and runs x, y, z. It doesn't care that there is an already-run changeset d; it does nothing about that.
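As a concrete sketch (authors, file names, and ids are placeholders), the 2.1 master changelog might look like the following. Liquibase records each executed id in its DATABASECHANGELOG table and only runs ids it has not seen, which is why the stray d on a 1.1.0.1 database is simply left alone:

<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
    <!-- shipped with 1.1.0 -->
    <changeSet id="a" author="team"><sqlFile path="a.sql"/></changeSet>
    <changeSet id="b" author="team"><sqlFile path="b.sql"/></changeSet>
    <changeSet id="c" author="team"><sqlFile path="c.sql"/></changeSet>
    <!-- appended for 2.1; a 1.1.0.1 database receives exactly these on upgrade -->
    <changeSet id="x" author="team"><sqlFile path="x.sql"/></changeSet>
    <changeSet id="y" author="team"><sqlFile path="y.sql"/></changeSet>
    <changeSet id="z" author="team"><sqlFile path="z.sql"/></changeSet>
</databaseChangeLog>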
The liquibase diff support can be used as a bit of a sanity check and would be able to report that there is a missing index compared to some "correct" database, but that is not something you would normally do in a production deployment scenario.
The answer may be a bit late, but I will share my experience. We also came across the same problem in our project. We dealt with it in the following way:
Since releases in our project were not made often, we marked each changeset in liquibase with a particular context. The value was the exact version migration (like v6.2.1-v6.2.2). We passed the value to liquibase through JNDI properties, so the customer was able to specify it. During an upgrade, the customer was responsible for setting the right value for the migration scope. A liquibase context can accept a list of values, so in the end the context looked like this:
context=v5.1-5.2,v5.3-5.3.1,v5.3.1-5.4,v6.2.1-v6.2.2
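For illustration, such a scoped changeset could look like this (table and index names invented); the same contexts can alternatively be passed on the liquibase command line via --contexts instead of JNDI:

<!-- runs only when the update is invoked with a matching context -->
<changeSet id="drop-index-i" author="team" context="v6.2.1-v6.2.2">
    <dropIndex tableName="my_table" indexName="i"/>
</changeSet>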

What relationships does "View Dependencies" in SSMS not show?

I've heard that you cannot rely on SSMS's View Dependencies: dependencies on objects on linked servers and dependencies in dynamic code aren't shown.
Is there anything else that is not recognized as a dependency and therefore not shown? Which dependencies are covered, and which are not?
I've never done a "proper" investigation into the behaviour, but I've certainly come to distrust the feature - I find that even local dependencies get out of date, and I simply can't base critical decisions on the results (e.g. can I really delete that object?).

How do I run just a single stage in my bamboo build?

I have a bamboo build with 2 stages: Build&Test and Publish. The way bamboo works, if Build&Test fails, Publish is not run. This is usually the way that I want things.
However, sometimes Build&Test will fail, but I still want Publish to run. Typically this is a manual process: even though there is a failing test, I want to push a button and just run the Publish stage.
In the past, I had two separate plans, but I want to keep them together as one. Is this possible?
From the Atlassian help forum, here:
https://answers.atlassian.com/questions/52863/how-do-i-run-just-a-single-stage-of-a-build
Short answer: no. If you want to run a stage, all prior stages have to finish successfully, sorry.
What you could do is to use the Quarantine functionality, but that involves re-running the failed job (in yet-unreleased Bamboo 4.1, you may have to press "Show more" on the build result screen to see the re-run button).
Another thing that could be helpful in such a situation (but not for the OP) is disabling jobs.
Generally speaking, the best solution to most Bamboo problems is to rely on Bamboo as little as possible because you ultimately can't patch it.
In this case, I would just quickly write or re-use an asynchronous dependency-resolution mechanism (something like GNU Make and its targets), and run that from a single stage.
Then just run everything on the default all-like target, and let users select the target via a custom run variable.

How to break a maven build when dependencies are out of date?

I love the maven-versions-plugin but sometimes I forget to run it for a while. Is there a way to make a maven build fail (and thus have a continuous build fail) when certain important dependencies are out of date?
I think you're approaching this incorrectly. Mail yourself the output of the maven-versions-plugin if you want, but don't fail the build due to changes outside of your control.
Even more, why would you want to needlessly update to the latest versions? I have seen many tricky problems appear due to upgrades which have brought slight changes to previous behaviour.
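For what it's worth, a sketch of how that reporting could be wired into the build without failing it, assuming a recent versions-maven-plugin that supports writing the report to a file (the phase and version here are just one reasonable choice):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>versions-maven-plugin</artifactId>
    <version>2.16.2</version>
    <executions>
        <execution>
            <id>report-dependency-updates</id>
            <!-- runs early in every build; this goal only reports and never fails -->
            <phase>validate</phase>
            <goals>
                <goal>display-dependency-updates</goal>
            </goals>
            <configuration>
                <!-- write the report to a file so CI can mail or archive it -->
                <outputFile>${project.build.directory}/dependency-updates.txt</outputFile>
            </configuration>
        </execution>
    </executions>
</plugin>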
This, in general, is a bad practice - updating versions automatically. There is no practical reason to use the latest version of any package. If the library you're using satisfies your requirements, you should stay with that version for security/stability reasons. And forever.
I think the maven-versions-plugin is an anti-pattern itself.
P.S. When and if you want to test modules developed by different teams/programmers together, that is "integration testing". Even in this case I still think that on-the-fly version updating is the wrong approach. The root project should not do this integration testing; instead, every sub-module (or JAR, in your case) has to be responsible for integration testing of itself together with the rest of the system. When a sub-module increases its version, it has to validate whether everything is still fine, and only then release a new version to the repository. And when the sub-module is doing that validation, it has to depend on statically specified version numbers.

Static code analysis: integrate into debug and release builds, or just one or the other?

As a best practice, do you run code analysis on both debug and release builds, or just one or the other?
If for some reason the two builds are different (and they really shouldn't be for static analysis purposes), you should ensure that your metrics are running against what's actually going out to production.
Ideally, you should have a CI server, and the commands that developers run to initiate such analysis are no different from what the CI server does.
I usually pick one, and that one is the release build. I guess it doesn't really matter, but I tend to think that when gathering information about what will run in production, it is best to test exactly what will go to production (this goes for analysis, profiling, benchmarking, etc.).
Static Code Analysis will show the same results regardless of your build type.
Debug/Release only changes the resulting assembly and the inclusion or exclusion of debugging information at runtime.
I don't have separate ‘debug’ and ‘release’ builds (see Separate ‘debug’ and ‘release’ builds?).
The LLVM folks actually recommend analyzing the debug configuration:
ALWAYS analyze a project in its "debug" configuration
Most projects can be built in a "debug" mode that enables assertions. Assertions are picked up by the static analyzer to prune infeasible paths, which in some cases can greatly reduce the number of false positives (bogus error reports) emitted by the tool.
In addition, debug builds tend to be faster (no need for optimization), and in the CI world faster is always better (all else being equal).