We'd like to split a version into several sprints; what is the best way to do this? Currently it seems to me that sprint == version, since every sprint is used as an affected version, which doesn't feel right to me. What I'd like to do is the following:
I'd like to specify a version, say 1.0.0. This version is split into several sprints, A, B, C. Now if a bug occurs after the final release, I'd like to specify 1.0.0 as the affected version, and not sprint A, B, or C.
Is this somehow possible?
We use the YouTrack cloud hosting.
First of all, usually sprint == Fix version (the version in which the feature/task is meant to be implemented). Affected version is the one that contains the given bug (this version is already implemented). These two kinds of versions can use the same set of values (the standard behaviour) or different ones (this can be changed in the custom fields settings).
So, your Fix versions set of values should contain A, B and C, and the Affected versions set should contain 1.0.0. If you prefer using the same set of versions in both cases, you can keep sprint 1.0.0 (corresponding to Fix version 1.0.0).
Now, when you create a swimlane/task on an Agile Board in a selected sprint, the corresponding version is set as the Fix version (e.g. when you create a task in sprint B, Fix version B is set on that task).
When you create a bug on the issue list, you can set the Affected version to 1.0.0.
If you also want to set the Affected version for Bugs automatically, you can write a simple workflow rule:
rule Set affected version
when issue.becomesReported() && issue.Type == {Bug} {
  issue.Affected versions.add({1.0.0});
}
Related
I'm trying to understand semantic versioning. Currently my module has 2 major versions, as shown below.
1.0.0, 1.1.0, 1.1.1, 1.1.2
....
2.0.0, 2.1.0
So here I have a couple of questions:
I found one bug in all versions. Do I need to fix that bug in all versions and update each of them? Or fix it and release updated versions like 1.1.3 and 2.1.1?
If a new release has a feature and a bug fix, what should I increment?
When in doubt, the SemVer spec should always be referenced.
Say you find a bug in the following feature sets:
1.0.x
1.1.x
1.3.x
2.0.x
2.1.x
In each case, a bug fix for that feature level would look like:
1.0.x+1
1.1.x+1
1.3.x+1
2.0.x+1
2.1.x+1
Where x is the highest patch number for each of the feature sets.
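To make the x+1 bump concrete, here is a minimal Python sketch; the feature lines and current patch numbers are hypothetical, not taken from the question:

# Highest patch released so far on each supported feature line (hypothetical numbers).
supported_lines = {
    (1, 0): 4,
    (1, 1): 7,
    (1, 3): 2,
    (2, 0): 1,
    (2, 1): 0,
}

def next_patch_release(line, highest_patch):
    # The same bug fix produces a patch bump on every line that still gets fixes.
    major, minor = line
    return f"{major}.{minor}.{highest_patch + 1}"

for line, patch in sorted(supported_lines.items()):
    print(next_patch_release(line, patch))  # 1.0.5, 1.1.8, 1.3.3, 2.0.2, 2.1.1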
Decide whether you need to support earlier versions with bug fixes. At some point, most teams limit down-level work to bug fixes and only go back two or three minor releases in each major series they still support. It's not uncommon to halt all version 1 work after one or two releases in the version 2 series.
The SemVer 2.0.0 spec, item 7, specifies:
Minor version Y (x.Y.z | x > 0) MUST be incremented if new, backwards compatible functionality is introduced to the public API. It MUST be incremented if any public API functionality is marked as deprecated. It MAY be incremented if substantial new functionality or improvements are introduced within the private code. It MAY include patch level changes. Patch version MUST be reset to 0 when minor version is incremented.
Basically, you bump either Minor or Major depending on whether you added back-compat features or made breaking changes. You can include all the bug fixes and new features you want in a single release. All lower version fields reset to zero when you bump Major or Minor.
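As a rough sketch of those rules (my own helper, not anything from the spec), a version bump in Python might look like:

def bump(version, change):
    # change is "major" for breaking changes, "minor" for backwards-compatible
    # features, "patch" for backwards-compatible bug fixes (SemVer reset rules).
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "major":
        return f"{major + 1}.0.0"
    if change == "minor":
        return f"{major}.{minor + 1}.0"
    if change == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")

print(bump("2.1.0", "minor"))  # -> 2.2.0

A release that contains both backwards-compatible features and bug fixes only needs the single Minor bump shown on the last line.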
Let's say I have a function that can be called via an API, $MyFunction, and for brevity $MyFunction returns 12. Now let's say I rename $MyFunction to $The12Function but it still returns the same result (in this example the integer 12). Does this warrant a bump to the major or minor SemVer version number?
One could argue that I am not allowing for backwards compatibility, because $MyFunction no longer works. However, one could also argue that there is backwards compatibility, because you can still get the same result via $The12Function.
From http://semver.org:
Given a version number MAJOR.MINOR.PATCH, increment the:
MAJOR version when you make incompatible API changes,
MINOR version when you add functionality in a backwards-compatible manner, and
PATCH version when you make backwards-compatible bug fixes.
So, in your case, if you don't also keep the old function name to retain compatibility with older versions of the API, you should increment the major version number.
One way to look at it, in order to know if compatibility is broken, would be to imagine that your API and functionality is encapsulated in a library which offers this functionality to other programs. You now make changes to that API. If the programs which linked to the old version of your API need to be changed in order to use the new version of your library, you have broken compatibility and the major version should be changed. You may solve this problem by overriding and maintaining the deprecated old function calls, but it would increase the complexity of the API.
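For example, here is a minimal sketch of keeping the deprecated old call around (the function names are adapted from the question to Python; the warning mechanism is my choice):

import warnings

def the_12_function():
    # New name for the API call; still returns 12.
    return 12

def my_function():
    # Deprecated alias kept so existing callers keep working; this is a MINOR change.
    # Removing the alias outright would break callers and require a MAJOR bump.
    warnings.warn("my_function() is deprecated, use the_12_function() instead",
                  DeprecationWarning, stacklevel=2)
    return the_12_function()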
Here is a tough one.
v1.1 has a table with index i.
v2.1 contains this table and index as well.
A bug was discovered, and in v1.1.0.1 we changed the code and, as a result, decided to drop the index.
We created a corresponding patch for v2.1, v2.1.0.6.
The customer applied patch v1.1.0.1 and a few weeks later upgraded to v2.1 (without patch 6).
As the v2.1 code base performs better with the index, we have a "broken" application.
I can't force my customers to apply the latest patch.
I can't force the developers to avoid such scenarios.
Can Liquibase or Flyway handle this scenario?
I guess these kinds of problems are more organizational than tool-specific. If you support multiple versions (a 1.0 branch and a newer 2.0 one) and provide patches for both (which is a totally legitimate approach - don't get me wrong here), you will probably have to provide upgrade notes for all these versions and maybe a matrix that shows from which version to which you can go (and what you can't do).
I just happened to upgrade an older version of Atlassian's Jira bug tracker and found out that they do provide upgrade notes for all versions.
That would have meant going from one version to the next to finally arrive at the latest version (I was on version 4.x and wanted to go to the latest 5.x), obeying all the upgrade notes in between. (By the way, I skipped all this and set it up as a completely fresh installation to avoid it.)
Just to give you an impression, here is a page that shows all these upgrade notes:
https://confluence.atlassian.com/display/JIRA/Important+Version-Specific+Upgrade+Notes
So I guess you could provide a small script that recreates the index if somebody wants to go from version 1.1.0.1 to 2.1, and state in the upgrade notes that it needs to be applied.
Since you asked whether Liquibase (or Flyway) can support this, it may be helpful to mention that Liquibase (I only know Liquibase) has something called preconditions. This means you can run a changeset (resp. an SQL statement) conditionally, based for example on whether an index exists (<indexExists>).
That could help to re-create the index if it is missing.
But since version 2.1 has already been released (before it was known that the index might be dropped in a future bugfix), there is no chance to add this to the upgrade procedure of version 2.1.
Liquibase will handle the drop-index change across branches fine, but since you are going from a version that contains a change (the drop-index change) to one that does not expect it, you are going to end up with your broken app state.
With Liquibase, changes are completely independent of each other and independent of any versioning. You can think of the Liquibase changelog as an ordered list of changes to make, each with a unique identifier. When you do an update, Liquibase checks each change in turn to see if it has been run, and runs it if it has not.
Any "versioning" is purely within your codebase and branching scheme; Liquibase does not care.
Imagine you start out with your 1.1.0 release that looks like:
change a
change b
change c
when you deploy 1.1.0, the customer database will know changes a, b, and c were run.
You have v2.1 with new changesets added to the end of your changelog file, so it looks like:
change a
change b
change c
change x
change y
change z
and all 2.1 customers' databases know that a, b, c, x, y, z are applied.
When you create 1.1.0.1 with changeset d that drops your index, you end up with this changelog in the 1.1.0.1 branch:
change a
change b
change c
change d
But when you upgrade your 1.1.0.1 customers to 2.1, Liquibase just compares the defined changesets (a, b, c, x, y, z) against the known changesets (a, b, c, d) and runs x, y, z. It doesn't care that there is an already-run changeset d; it does nothing about that.
The liquibase diff support can be used as a bit of a sanity check and would be able to report that there is a missing index compared to some "correct" database, but that is not something you would normally do in a production deployment scenario.
The answer may be a bit late, but I will share my experience. We also came across the same problem in our project. We dealt with it in the following way:
Since releases in our project were not made often, we marked each changeset in Liquibase with a particular context. The value was the exact version migration (like v6.2.1-v6.2.2). We passed the value to Liquibase through JNDI properties, so the customer was able to specify it. So during an upgrade the customer was responsible for setting the right value for the migration scope. A Liquibase context can accept a list of values, so in the end the context looked like this:
context=v5.1-5.2,v5.3-5.3.1,v5.3.1-5.4,v6.2.1-v6.2.2
Is there a tool to run unit tests on previous versions of software that's in source control?
The idea would be: a bug has surfaced and I want to know when it was introduced, so I write a new test and the tool checks out each previous version from source control, running the test on each one, until the test no longer fails or we reach the beginning.
We use Subversion, but I'm curious whether anything like this exists in general.
Mercurial has a built in command called bisect that essentially does what you are looking for.
It is designed to work with a user-written script, but in a nutshell it does a binary search: your script (which runs the unit tests) tells bisect whether the checked-out revision passes or fails, and based on that it moves through the history until it finds the revision where the bug was introduced.
I'm not sure if such a tool exists for SVN, but I've found bisect with Mercurial to be very useful for this sort of thing.
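For what it's worth, the user-written script can be very small. Here is a rough Python sketch (the test module name is hypothetical, and the exit-code convention of 0 = good / non-zero = bad / 125 = skip is the one git bisect run uses - check hg help bisect for Mercurial's exact expectations):

#!/usr/bin/env python
# check.py: exit 0 if the bug is absent at the checked-out revision, 1 if present.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "unittest", "tests.test_regression"])
sys.exit(0 if result.returncode == 0 else 1)

After marking an initial good and bad revision (hg bisect --good REV, hg bisect --bad REV), something like hg bisect --command ./check.py can drive the search automatically.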
Bisect in Mercurial (and Git) performs exactly this job, except that instead of checking each previous version it does a binary search, so it finds the source of the problem faster.
Just about any version control system lets you check out a specific version of an entire build, and lets you track the history/changes of any specific file(s) in the build.
Normally, I just take a simple "divide and conquer" approach:
a) Check out a really old version into a scratch directory
b) Build and confirm it DOESN'T have the bug
c) Manually compare the old and current versions and make "educated guesses" as to "what changed".
d) Check out a version between the old and current version (based on what I found in step c).
e) Build and test.
f) If it has the bug, check out a version between a) and d).
If it doesn't have the bug, check out a version between d) and the current.
g) Rinse and repeat
And yes, some or all of this can certainly be scripted.
In bash (if you're on Linux), or in the scripting language of your choice.
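Here is a rough Python sketch of that binary search against a Subversion working copy (the test command, revision numbers, and the assumption that the working copy is already checked out are mine):

#!/usr/bin/env python
# Binary-search SVN revisions for the first one where the regression test fails.
import subprocess

def run_tests():
    # Return True if the new test passes (bug absent) at the current revision.
    result = subprocess.run(["python", "-m", "unittest", "tests.test_regression"])
    return result.returncode == 0

def first_bad_revision(good, bad):
    # good: revision known to pass, bad: revision known to fail, good < bad.
    while bad - good > 1:
        mid = (good + bad) // 2
        subprocess.run(["svn", "update", "-r", str(mid)], check=True)
        if run_tests():
            good = mid  # bug not present yet, look in later revisions
        else:
            bad = mid   # bug already present, look in earlier revisions
    return bad

print("bug introduced in revision", first_bad_revision(1200, 1400))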
I'm looking for a version numbering scheme that expresses the extent of change, especially compatibility.
Apache APR, for example, uses the well-known version numbering scheme
<major>.<minor>.<patch>
example: 4.5.11
Maven suggests a similar but more detailed scheme:
<major>.<minor>.<patch>-<qualifier>-<build number>
example: 4.5.11-RC1-3732
Where is the Maven versioning scheme defined? Are there conventions for the qualifier and build number? It is probably a bad idea to use Maven but not follow the Maven version scheme ...
What other version numbering schemes do you know? What scheme would you prefer and why?
I would recommend the Semantic Versioning standard, which the Maven versioning system also appears to follow. Please check out:
http://semver.org/
In short it is <major>.<minor>.<patch><anything_else>, and you can add additional rules to the anything-else part as seems fit to you, e.g. -<qualifier>-<build_number>.
Here is the current Maven version comparison algorithm, and a discussion of it. As long as versions only grow, and all fields except the build number are updated manually, you're good. Qualifiers work like this: if one is a prefix of the other, longer is older. Otherwise they are compared alphabetically. Use them for pre-releases.
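As a sketch of that qualifier rule only (my own illustration, not Maven's actual comparison code):

def compare_qualifiers(a, b):
    # Per the rule above: if one qualifier is a prefix of the other, the longer
    # one is older (smaller); otherwise compare alphabetically. Returns -1, 0 or 1.
    if a == b:
        return 0
    if a.startswith(b):
        return -1  # a is longer, so a is older
    if b.startswith(a):
        return 1   # b is longer, so b is older
    return -1 if a < b else 1

print(compare_qualifiers("RC1", "RC"))      # -1: RC1 is older than RC
print(compare_qualifiers("alpha", "beta"))  # -1: alphabetical order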
Seconding the use of semantic versioning for expressing compatibility; major is for non-backwards compatible changes, minor for backward-compatible features, patch for backward-compatible bugfixes. Document it so your library users can express dependencies on your library correctly. Your snapshots are automated and don't have to increment these, except the first snapshot after a release because of the way prefixes are compared.
Purely for completeness, I will mention the old Apple standard for version numbers. It looks like major version.minor version.bug version.stage.non-release revision. Stage is a code drawn from the set d (development), a (alpha), b (beta), or fc (final customer ship - more or less the same as release candidate, I think).
The stage and non-release revision are only used for versions short of proper releases.
So, the first version of something might be 1.0.0. You might have released a bugfix as 1.0.1, a new version (with more features) as 1.1, and a rewrite or major upgrade as 2.0. If you then wanted to work towards 2.0.1, you might start with 2.0.1d1, 2.0.1d2, on to 2.0.1d153 or whatever it took you, then send out 2.0.1a1 to QA, and after they approved 2.0.1a37, send 2.0.1b1 to some willing punters, then after 2.0.1b9 survived a week in the field, burn 2.0.1fc1 and start getting signoffs. When 2.0.1fc17 got enough, it would become 2.0.1, and there would be much rejoicing.
This format was standardised enough that there was a packed binary format for it, and helper routines in the libraries for doing comparisons.
After reading a lot of articles/QAs/FAQs/books, I have come to think that [MAJOR].[MINOR].[REV] is the most useful versioning scheme to describe compatibility between project versions (a versioning scheme for developers, not for marketing).
MAJOR changes are backward incompatible and require changing the project name, paths to files, GUIDs, etc.
MINOR changes are backward compatible and mark the introduction of new features.
REV is for security/bug fixes, and is backward and forward compatible.
This versioning scheme is inspired by libtool versioning semantics and by articles such as:
http://www106.pair.com/rhp/parallel.html
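As a small sketch of a compatibility check following these rules (the function name and tuple representation are my own):

def is_compatible(provided, required):
    # provided/required are (MAJOR, MINOR, REV) tuples. Per the rules above:
    # MAJOR must match exactly, the provided MINOR must be at least the required
    # one, and REV is ignored because fixes are compatible both ways.
    p_major, p_minor, _ = provided
    r_major, r_minor, _ = required
    return p_major == r_major and p_minor >= r_minor

print(is_compatible((2, 6, 34), (2, 4, 0)))  # True: same MAJOR, newer MINOR
print(is_compatible((3, 0, 0), (2, 4, 0)))   # False: MAJOR changed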
NOTE: I also recommend providing build/date/customer/quality as additional info (build number, build date, customer name, release quality):
Hello app v2.6.34 for National bank, 2011-05-03, beta, build 23545
But this info is not versioning info!
Note that a version number scheme (like x.y.0 vs. x.y) can be constrained by external factors.
Consider this announcement for Git 1.9 (January 2014):
A release candidate Git v1.9-rc2 is now available for testing at the usual places.
I've heard rumours that various third-party tools do not like the two-digit version numbers (e.g. "Git 2.0") and started barfing left and right when the users install v1.9-rc1.
While it is tempting to laugh at them for their sloppy assumption, I am also practical and
do not mind calling the upcoming release v1.9.0 to help them.
If we go that route (and I am inclined to go that route at this moment), the versioning scheme will be:
The next release candidate will be v1.9.0-rc3, not v1.9-rc3;
The first maintenance release for v1.9.0 will be v1.9.1 (and Nth one be v1.9.N); and
The feature release after v1.9.0 will be either v1.10.0 or v2.0.0, depending on how big the feature jump we are looking at.