What is the difference between release and iteration on JIRA?

As Jira is a tool that manages Agile teams, it uses the same language;
just have a look here to understand the meaning of iteration and release in an Agile context.

Why are chained variables no longer involved in the task assigning example solution from OptaPlanner 8.17?

From OptaPlanner 8.17, it seems that the code of the task assigning example project has been refactored a lot. I didn't succeed in finding any comment about these changes in the release notes or on GitHub.
In particular, since this version the implementation of the problem to solve no longer involves chained variables. Could someone from the OptaPlanner team explain why? I'm also a bit confused because the latest version of the documentation for this example project still references the now-deleted classes from versions before 8.17 (e.g. org/optaplanner/examples/taskassigning/domain/TaskOrEmployee.java).
It's using @PlanningListVariable, a new (experimental) alternative to chained planning variables, which is far easier to understand and maintain.
Documentation for this new feature hasn't been written yet. We're finishing up the ListVariableListener interface, and then the documentation will be updated to cover @PlanningListVariable too. At that point, it will be ready for announcement.
Unlike a normal feature, this big, complex feature took more than a year to bake, which is why it's been delivered in portions. One could argue the task assignment example shouldn't have escaped the feature branch, but not merging the stable parts of the feature branch sooner rather than later was proving extremely expensive.
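For readers who want a concrete picture, here is a minimal sketch of the two modelling styles. The annotations and PlanningVariableGraphType.CHAINED are real OptaPlanner API; the fields and value range names are illustrative and do not match the actual example code exactly:

    import java.util.ArrayList;
    import java.util.List;
    import org.optaplanner.core.api.domain.entity.PlanningEntity;
    import org.optaplanner.core.api.domain.variable.PlanningListVariable;
    import org.optaplanner.core.api.domain.variable.PlanningVariable;
    import org.optaplanner.core.api.domain.variable.PlanningVariableGraphType;

    // Task.java - chained style (before 8.17): each Task points at the element
    // before it, forming a linked list anchored at an Employee.
    @PlanningEntity
    public class Task extends TaskOrEmployee {
        @PlanningVariable(graphType = PlanningVariableGraphType.CHAINED,
                valueRangeProviderRefs = {"employeeRange", "taskRange"})
        private TaskOrEmployee previousTaskOrEmployee;
        // getters/setters omitted
    }

    // Employee.java - list style (8.17+): the Employee simply owns an ordered
    // list of tasks; the solver handles the ordering bookkeeping for you.
    @PlanningEntity
    public class Employee {
        @PlanningListVariable(valueRangeProviderRefs = {"taskRange"})
        private List<Task> tasks = new ArrayList<>();
        // getters/setters omitted
    }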

Development/QA/Production Environment

I am the QA Test Lead for a large enterprise software company, with a team of over 30 developers and a small team of QA testers. We currently use SVN for all our code and schema check-ins, which are then built out each night after hours.
My dilemma is this: all of development's code is promoted daily from their machines into a single branch in the central repository. This branch is our production code for the next software release. Each day when code is checked in, the stable branch is destabilized by that new piece of code until QA can get to testing it. It can sometimes take weeks for QA to get to a specific piece of code. The worst part is that we identify months ahead of time which code will go into the standard release and which will be bumped to the next branch, which has us coding almost all the way up to the actual release date.
I'm really starting to see the effects of this process (put in place by my predecessors), and I'm trying to come up with a way, one that won't piss off development, whereby they can promote code to a QA environment without holding up another developer's piece of code. A lot of our code has shared libraries, and as I mentioned, it can sometimes take QA a while to get to a piece of code to test. I don't want to hold up development in a certain area while that piece of code is waiting to be tested.
My question is this: what is the best methodology to adopt here? Is there software out there that can help? All I really want is to ensure QA has enough time to test a release without any new code going in until it's tested. I don't want to end up on the street looking for a new job because "QA is doing a crappy job" according to a lot of people in the organization.
Any suggestions are greatly appreciated and will help with our testing and product.
It's a broad question which takes a broad answer, and I'm not sure I know all it takes (I've been working as a dev lead and architect, not as a test manager). I see several problems in the process you describe, each requiring its own solution:
Test team working on intermediate versions
This should be handled by working with the dev guys on splitting their work effort into meaningful iterations (called sprints in agile methodology) and delivering a working version every few weeks. Moreover, it should be established that features are implemented in priority order. This has the benefit of keeping the "test gap" fixed: you always test the latest version, which is a few weeks old, and devs understand that any problem you find there is more important than new features for the next version.
Test team working on non-stable versions
There is absolutely no reason why the test team should invest time in versions which are "dead on arrival". Continuous Integration is a methodology by which "breaking the code" is found as soon as possible. It requires some investment in products like Hudson, or a home-grown solution, to make sure build failures are noticed as they occur and some "smoke testing" is applied to each build.
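For example, a smoke test can be as small as a JUnit test that the CI server runs on every build. A hypothetical sketch (the Application class and its methods are made up for illustration):

    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class SmokeTest {
        // Fails the nightly build immediately if the application
        // cannot even start up and answer a trivial liveness check.
        @Test
        public void applicationStartsAndAnswers() {
            Application app = Application.start(); // hypothetical entry point
            assertTrue("application did not come up", app.isAlive());
            app.shutdown();
        }
    }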
Your test cycle is long
Invest in automated testing. This is not to say your testers need to learn to program; rather, you should invest in recruiting or growing people with the knowledge of, and passion for, writing stable automated tests.
You choose "coding all the way up until almost the actual release date"
That's right; it's a choice made by you and your management, favoring more features over stability and quality. It's a fine choice in some companies that need to get to market ASAP or keep a key customer satisfied, but it's a poor long-term investment. Once you convince your management that it is a choice, you can stop making it when it's not really needed.
Again, it's my two cents.
You need a continuous integration server that can automate the build, testing, and deployment. I would look at a combination of Hudson, JUnit (with DBUnit), Selenium, and code quality tools like Sonar.
To ensure that the code QA is testing is fixed and not constantly changing, you should make use of tags. A tag is like a branch except that its contents are immutable. Once a set of files has been checked in / committed, you cannot change them and commit on top of them. This way QA has a stable version of the code to work with.
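Creating a tag in SVN is a cheap, server-side copy; for example (the repository URL and tag name are illustrative):

    svn copy http://svnserver/repo/trunk \
             http://svnserver/repo/tags/qa-build-20090501 \
             -m "Snapshot handed to QA for testing"

QA then checks out the tag and tests a version that cannot shift underneath them, while development keeps committing to trunk.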
Using SVN without branching seems like a wasted resource. They should set up a stable branch and a test branch (i.e. the daily build). When code is tested in the daily build, it can then be pushed up to the development release branch.
As Albert mentioned, depending on what your code is, you might also look into automated tests for some of the shared libraries (which, depending on where you are in development, really shouldn't be changing all that much, or your dev team is doing a crappy job of organization, IMHO).
You might also talk with your dev team leads (or whoever manages them) and discuss where they see QA and what QA can do to help them best. Ask: does the dev team have a set cut-off time before releases? Do you test every single line of code? Are there places where you might be spending too much detailed testing time? It shouldn't all fall on QA; QA and dev need to work together to get the product out.

Tickets for integration tests in Pivotal Tracker - chore or feature?

Say you're using Pivotal Tracker and creating tickets for integration tests that are missing in a project. Would you create them as Chores or as Features? Curious.
I'm just kind of learning this myself, but the way I understand it, you would mark this as a chore.
The reasoning is that the end user won't notice it directly. Sure, it'll provide them with better quality in the end, but they can't actually "see" it.

How do you handle technology updates in long running projects?

Let's assume you're in the middle of a long running project (long running = several years) and, as expected, there will be several things coming up with brand new releases. There might be a new .Net Framework with brand new features (e.g. Linq, Entity Framework, WPF, WF...), a new Visual Studio or V.next of your favorite Control Library, a new Mock Framework and a lot more things.
What are your guidelines for handling these technology updates? Do you adopt them instantly or do you ignore them until the end of the project? Do you have different guidelines for different things (Tools, Frameworks, supporting stuff)?
In my experience, these decisions are always made on a case-by-case basis. Several factors are considered, including:
How mature is the new technology? Does the organization like to be at the forefront working with bleeding edge new technologies, or does it prefer to work with proven tools and methodologies?
What skill sets do your people have? Are they consistent with use of the new technology, or is more training needed? Will improved productivity outweigh the time it takes to come up to speed?
What investment do you have in the existing technology? What is the cost of moving to the new technology? How much rework and rewriting of code is involved?
What is the requirement? Is it supported by the existing technology, or are new tools needed to fulfill it?
What are the performance expectations? Does the new technology provide a performance improvement that cannot be met with the old technology?
What about the technological culture? Is the organization vendor specific (e.g. a Microsoft shop)? Can open-source code be used?
What is the scope of the project? Is it a large project that would benefit from supporting technologies like frameworks and tools, or is it a small project that would be unduly weighed down and complicated by these things?
How is the new technology supported? Does the vendor have good documentation? Is there someone you can talk to if you have problems? Or are you an organization that has people that know how to solve problems without a support contract?
Is the technology comfortable to work with? Does it seem to make sense? Is it clean and elegant? Do other people seem to like it? Are other people having problems with it?
Is the technology the latest flavor of the week? Has it proven itself on the battlefield, producing tangible results, or is it just a religion?
How much time do you have to learn the new technology and iron out the kinks? Do the benefits outweigh the costs?
As a very brief example, I chose Linq to SQL for my most recent project because the project was complex enough to warrant an ORM, L2S performs well and is lightweight, we are a Microsoft shop, and it is my sense that the Entity Framework is not quite ready for prime time (even though Microsoft says it will be the go-to framework for the future).
Stick with what you've started with.
A large and long running project often comes with a huge and highly complex code-base. Any change or upgrade to a new version of a library can add bugs in very subtle and unexpected ways.
Also: for large projects, the tools and libraries used should have been tested and evaluated in the design phase. Unless you find a show-stopper or a security issue, it's best not to upgrade.
Always remember: Don't change horses in the middle of a stream. :-)
I would say different factors come into play, like:
Say a piece of software is nearing its end of life; for example, last April Microsoft retired mainstream support for SQL Server 2000. If your product uses it, then it's wiser to go for the next version of SQL Server in your next release.
Another factor is how much value the new features in the latest release of a software would bring to your product. It may well be the case that the new release of the .NET Framework has nothing that adds value to your product; that doesn't build a strong case to upgrade.
Budget is also an important factor. You typically need to upgrade licenses in order to step up to the next release, unless you are already part of something like Software Assurance.
Training for the team is also a factor. If the latest release is going to add to your product, then you will have to train your team as well.
Well, there could be other telling factors too; these were the ones off the top of my head. I hope it helps.
cheers
If you're talking about a framework-specific example, the biggest piece of advice I'll give you is to keep the system and your application separate. This is why I love patterns such as Model-View-Controller: they keep your code modular and mean you can upgrade sections without breaking the app as a whole.
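One concrete way to get that separation is to hide every third-party dependency behind a small interface you own. A hedged sketch in Java, with made-up names (ReportRenderer, VendorPdfLib):

    // The rest of the application codes against this interface only.
    public interface ReportRenderer {
        byte[] render(String reportContent);
    }

    // Only this thin adapter touches the third-party library, so upgrading
    // or replacing VendorPdfLib means revisiting one class, not the whole app.
    public class VendorPdfRenderer implements ReportRenderer {
        @Override
        public byte[] render(String reportContent) {
            return new VendorPdfLib().toPdf(reportContent); // hypothetical vendor call
        }
    }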
On a more practical level, if your framework has a Git or SVN repository, check out the usual 'system' directory from the repo; then you can run 'svn update' occasionally to keep up with the latest and greatest builds.
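In practice that can look like checking the framework's system directory out next to your own code and pulling upstream changes only when you decide to (the URL is illustrative):

    svn checkout http://framework.example.org/repo/trunk/system system
    # ...later, when you choose to take a newer build:
    svn update system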
I would suggest that the project not last that long. Develop the application in smaller pieces, with iterations every couple of months. That way, as new technology comes out, you can make the necessary changes and implement updates as you go, rather than having to decide whether to redevelop the whole application. As you say, trying to develop the whole application as things change just doesn't work.
As another poster said, these are certainly case-by-case decisions. What you can upgrade, and when, is determined mostly by how hard or easy it is to test the new version of the system. Having a comprehensive automated test suite for your application helps a lot with this.
Generally, I try to update to the latest stable release of libraries and so on as often as possible, because that makes maintenance easier. If you don't update, you may find yourself patching or working around bugs in the version of the library you are using. If you update less frequently, each update will be more work because you have more changes to deal with, and it's been longer since you last touched the system, and thus you remember less about it.

Release notes, what for? [closed]

What are release notes for and who reads them? Should/could they be automated by just spitting out bug fixes for the current release, or do they warrant careful human editing?
So, does anybody have a link to best practices (and the reasoning behind them) with regard to software release notes?
Bug fixes and added features. Users will read them to determine whether they should go to the trouble of installing an incremental upgrade, or wait until the next release because this one doesn't add any features they need or fix any bugs in features they were using.
I'd say they at least require a human to read through them and make sure that each note is useful. It depends how good the comments on your bug fixes are.
Release notes are also important to your testing organization (if you have one), so that they know what's changed in the release and needs testing.
This really depends on who your application is built for and on your organization's goals. However, I tend to believe that release notes need to be a concise listing of the important key additions, enhancements, or fixes included in the particular release.
Sometimes a simple dump of the bug tracking system information is enough, other times, I find that they need to be refined.
The key is that release notes are typically seen as the "Hey, look at what we did" type of listing of changes.
Our release notes are human-created instead of machine-generated. They cover three main topics:
1. What is included in the release (list of files)
2. How to install it
3. What changed since the last version (especially if the changes are not in the manual yet)
Items 1 and 2 don't change much from version to version, but they need to be reviewed. Item 3 takes the most work.
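As a rough illustration (the product name and items are made up), a skeleton following those three topics might look like:

    Release Notes - WidgetPro 2.4.1

    1. What is included in this release
       widgetpro.exe, widgetpro.chm, upgrade.sql
    2. How to install it
       Back up the database, run upgrade.sql, then run the installer.
    3. Changes since 2.4.0
       FIXED: crash when importing empty files
       NEW:   export to CSV (not yet covered in the manual)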
Release notes are also very important in a production environment.
They help answer the age-old question:
What the heck is currently running in production?
Or the more refined question: has this bug actually been fixed in this release?
I habitually read release notes. I want a comprehensive list of feature changes (or as close to one as is practical) in order to make better use of the new product.
I want to see when certain critical bugs or critical security issues are fixed.
Release notes and READMEs can be really important if your customer has to take special action, in addition to normal procedures, in order to upgrade. It can also be useful to warn customers/users of any DB upgrades that may happen automatically as a result of installing a newer patch. The way I see it, release notes and READMEs should be written for the system administrator audience, so include the kinds of things they would want to know about: summaries of important changes, how to install, known bugs, anything your software might do that would make someone pull their hair out, etc.
Release notes depend on your organization.
I can speak for mine. We produce release notes in PDF format, and every time we publish a ClickOnce or back-end version we send the release notes to the office managers. This is a document used by the top administrators of the business (not only IT). It is a way for them to know what is going on: what has changed, the new features that are now in production, the bugs fixed, and other things they might want to explain to their users.
It is a document of three to four pages, briefly describing the work done in this version.
This is of course highly dependent on the type of application/service/whatnot, but I've found that reading the release notes of my favorite development tools often makes me stumble upon nice, interesting, or even killer features that I'd probably miss if I didn't at least skim the notes... well, perhaps not killer, but you get my drift ;-)
Like most computer power users (now what kind of expression is that...), I never bother much with ordinary documentation, so release notes give me that little something extra besides clicking, hovering, and FAQ'ing...
Release notes are for testers and users to know what's new and what's changed. Additionally, release notes can be used as supporting documentation when billing a client for a new version of software you are building for them; "v1.31" is a lot easier to relate to and drill into.
Rather than compiling both lists manually, if you can use your release notes to do that for you, it's great.
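Since several answers in this thread already assume SVN, note that a first draft of the "what changed" list can be pulled straight from the commit log between two release revisions and then edited by hand (the revision numbers and URL are illustrative):

    svn log -r 1500:1620 http://svnserver/repo/trunk > changes-draft.txt

The human editing pass recommended above still applies; the raw log is only source material.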