How to find all build plans I have created? - bamboo

I created a build plan in Bamboo recently for something I'm working on. I accidentally created it within the wrong project. We have so many projects with so many plans each that I lost the plan I had created in the wrong spot.
Fortunately, I just found the plan, but I wasted too much time and effort looking for it, since we have so many plans with similar names across lots of similar projects.
How could I have done a search by plan creator so that I could see all plans created by me?
I have Googled various things, including "bamboo find plans I've created" and "bamboo search by plan creator" and others, but everything turns up results about creating plans or other unrelated pages.

This capability is not built into Bamboo as of their latest version, 6.10. The only auditing I am aware of is an audit log for task changes.
You may be able to create something custom using the REST API though.
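For example, Bamboo's REST API can list every plan on the server, which at least lets you filter by name pattern from a script. Here's a minimal Python sketch; the endpoint and field names are from memory and may differ by Bamboo version, so check the REST API docs (and note the plan resource doesn't expose a creator field, which is exactly the gap here):

```python
import json
import urllib.request

def extract_plans(payload):
    """Pull (key, name) pairs out of one page of Bamboo's plan listing."""
    return [(p["key"], p["name"]) for p in payload["plans"]["plan"]]

def fetch_all_plans(base_url, auth_header):
    """Page through GET /rest/api/latest/plan.json and collect all plans."""
    plans, start = [], 0
    while True:
        url = (f"{base_url}/rest/api/latest/plan.json"
               f"?start-index={start}&max-result=100")
        req = urllib.request.Request(url, headers={"Authorization": auth_header})
        with urllib.request.urlopen(req) as resp:
            page = json.load(resp)
        batch = extract_plans(page)
        plans.extend(batch)
        if len(batch) < 100:  # short page means we've reached the end
            return plans
        start += 100
```

You could then filter the result with something like `[p for p in plans if "billing" in p[1].lower()]` to narrow down plans by name.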

Related

Writing a Test Plan for Requirements without Domain Knowledge

I'm writing up a test plan to verify that software, which I didn't write, satisfies a bunch of requirements.
What advice do you have? Has anyone been in this situation before? I've reached out for help from the developer who wrote the code and from the requirements expert, but getting a lot of their time is tough.
Probably the first thing is to figure out what each requirement means for end-users.
Then I'd start by writing several tests (without much detail) against each requirement you have. Once you have the basics, try to group them into sections.
Don't forget checks such as an antivirus scan, install/uninstall (if applicable), and that all files, including documentation, are in place; also write some negative cases (the application shouldn't crash if the user makes a typo in the input).
Then try to formalize the tests: exactly what you do and exactly what you check.
Write it up in a readable document and send it to the developer, the requirements expert, and probably some others for review and feedback. That would probably be a good start.

Optimization tool for Rails 3 in development?

I'm developing a Rails 3 app deployed on Heroku which I'd like to optimize. I've explored different solutions such as query_reviewer and New Relic.
I couldn't make query_reviewer work with Rails 3.0.1 (I also had to switch to MySQL, because PostgreSQL is not supported).
Regarding New Relic, it looks like a great free tool, but it works only in production. I first need to improve many DB queries in development before getting to tune the app in production.
So none of these tools fit my needs.
Any advice? Maybe I should just rely on log traces and reduce the number of SQL queries?
You want to find out which activities aren't absolutely necessary and would save a good amount of time if you could "prune" them?
Forgive me for being a one-track answerer, but there's an easy way to do that, and it's easy to demonstrate.
While the code is running slowly and making you wait, manually interrupt it with Ctrl-C or whatever, and examine the stack trace. Do this a few times.
Anything you see it doing on more than one stack trace is responsible for a substantial percent of time, and it doesn't really matter exactly how much. If it's something you could prune, it will have that much less work to do.
If the efficacy of this method seems doubtful because it's low-tech, that's understandable, but in fact it can quickly find any problem any profiler can find.
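The manual-interrupt method above is really just statistical stack sampling; if you collect the samples programmatically, tallying which frames recur does the same job. Here's a hypothetical sketch of the tallying step in Python (the samples themselves would come from your Ctrl-C stack traces or a signal handler; the function name is mine):

```python
import collections

def hot_frames(samples):
    """Rank frames by the number of stack samples they appear in.

    samples -- a list of stack traces, each a sequence of 'file:function'
               strings; a frame is counted at most once per sample.
    """
    counts = collections.Counter(
        frame for stack in samples for frame in set(stack))
    return counts.most_common()
```

Any frame that shows up in more than one sample is, per the argument above, responsible for a substantial share of the runtime and is a pruning candidate.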
I found that New Relic has a Development mode, which looks like an ideal setup for optimizing an application in development phase: http://support.newrelic.com/kb/docs/developer-mode

Process for publishing test-versions of product internally, and keeping a list of "what is in this version", when using Mercurial?

I need some concrete ideas for this.
We're looking at changing our version control system, and I'd like for us to use Mercurial. It would ease a lot of pain related to some internal processes, as well as pose some challenges.
One of those challenges is that the version control system we're currently using is not a distributed one, and thus has the concept of revision numbers for each changeset, which we've used internally.
Basically, when the programmer checks in the final change that fixes a case in our case management system, Visual Studio responds with the changeset number this became, and the programmer then affixes that number to the case. This basically says: "If you're running a version of our product whose last changeset number is this value or higher, then you have all the changes for that case."
However, with Mercurial that doesn't work, as revision numbers can and will change as commits come in from different branches.
So I'm wondering how I can get something similar.
Note, this is not about release management. Releases are much more controlled, but ongoing tests are more fluid, so we'd like to avoid having a tester continuously trying to figure out whether the test cases on his list are really available in the version he's testing.
Basically, I need the ability for someone testing to see if the version they're testing on has the changes related to a test-case, or not.
I was considering the following:
The programmer commits, and grabs the hash of the changeset
The programmer affixes this to the case in our case tracker
The build process will have to tag (not in the Mercurial way) the version so that it knows which changeset it was built from
I have to make it easy to take the hash of the changeset our product was built from, look it up in the changeset log of the repository that is used for our build machine, and then figure out if the product changeset is the same as, or an ancestor of, each case in the test list.
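Point 4 maps directly onto Mercurial revsets: `ancestors(BUILD)` includes the build changeset itself plus everything it descends from, so "is this case's changeset in the build?" becomes a single `hg log` query. A sketch in Python (this assumes `hg` is on the PATH; the helper names are mine):

```python
import subprocess

def ancestor_revset(build_rev, case_rev):
    """Revset that is non-empty iff case_rev is build_rev or an ancestor of it."""
    return f"ancestors({build_rev}) and {case_rev}"

def case_is_in_build(repo_path, build_rev, case_rev):
    """True if the case's changeset is included in the build's changeset."""
    result = subprocess.run(
        ["hg", "log", "-R", repo_path,
         "-r", ancestor_revset(build_rev, case_rev),
         "--template", "{node}\n"],
        capture_output=True, text=True)
    return bool(result.stdout.strip())
```

A small web app wrapping this would let a tester paste the build's hash and see at a glance which cases on the test list are included.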
So I have two questions:
Is this a feasible approach? I'm not averse to creating a web application that makes this easy to handle.
Does anyone know of an alternate process that would help me? I've looked at tagging, but it seems that tagging ends up adding merge pressure; is that something I want? (i.e. adding/moving a tag ends up as a commit, which needs to be merged with the rest of the system)
Is there anything out there that would help me out of the box, that is, have someone made, or know of, something like this already?
Any other ideas?
Is it right to say that you're looking for a lightweight tagging process linked to your build process?
I'm not keen on the idea of the programmer grabbing the last hash and sticking it somewhere else - sounds like the sort of manual process that you couldn't rely on happening. Would you be able to build a process around programmers adding the case number to their commit message so something could later link the commit to the original case? When the case was marked as "closed" you could pick up all commits against the case.
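For instance, if the team agrees to reference cases as "case #1234" in commit messages, linking commits back to cases is a one-regex job. The exact convention and pattern here are just an assumption to illustrate:

```python
import re

# Matches e.g. "case #1234", "Case 1234" or "case:1234" in a commit message.
CASE_RE = re.compile(r"\bcase[ #:]*(\d+)\b", re.IGNORECASE)

def case_ids(commit_message):
    """Return every case number referenced in a commit message."""
    return [int(n) for n in CASE_RE.findall(commit_message)]
```

A hook or a nightly job could then walk the changelog, call this on each message, and record the changeset hash against each referenced case automatically, with no manual copy-and-paste by the programmer.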
Lots of case control systems have this - Fogbugz, for example.
Both bitbucket and google code keep a branching timeline, that shows visually what has been merged, by who, and when. I suspect that this might be what you want to do: it's a very simple way to resolve issue 4.
How they do that, I don't know, but the tools are out there. BitBucket offers commercial code hosting.

Why automate builds?

So, I'm a firm believer in having automated builds that run nightly (or even more often), especially during the late phases of a project. I was trying to convince a colleague tonight that we need to make some changes to facilitate this, and he challenged the whole premise of having automated builds in the first place. It is late on a Friday night, I've had a long week, I'm tired, and I honestly couldn't come up with a good answer. So, good people of the amazingly awesome Stack Overflow community, I come to you with this simple question:
Why have an automated build (or why not)?
I have a continuous integration server set up in a VM that mimics my production environment; by running automated builds, I know a LOT sooner when I've done something to screw up the code, and can make moves to fix it.
In a project with multiple people, especially larger projects, there are no guarantees that every user is running the tests and doing a full build. The longer you go without a full build, the greater the chances that some bug will sneak its way into the system while each dev is plugging away at his branch. Automated builds negate this issue by making sure the whole team knows, within a day or so, when something went wrong, and who was responsible.
For more backup, especially when tired, you might send over this article from our own Jeff Atwood, or this one from Joel Spolsky. From the latter:
Here are some of the many benefits of daily builds:
When a bug is fixed, testers get the new version quickly and can retest to see if the bug was really fixed.
Developers can feel more secure that a change they made isn't going to break any of the 1024 versions of the system that get produced, without actually having an OS/2 box on their desk to test on.
Developers who check in their changes right before the scheduled daily build know that they aren't going to hose everybody else by checking in something which "breaks the build" -- that is, something that causes nobody to be able to compile. This is the equivalent of the Blue Screen of Death for an entire programming team, and happens a lot when a programmer forgets to add a new file they created to the repository. The build runs fine on their machines, but when anyone else checks out, they get linker errors and are stopped cold from doing any work.
Outside groups like marketing, beta customer sites, and so forth who need to use the immature product can pick a build that is known to be fairly stable and keep using it for a while.
By maintaining an archive of all daily builds, when you discover a really strange, new bug and you have no idea what's causing it, you can use binary search on the historical archive to pinpoint when the bug first appeared in the code. Combined with good source control, you can probably track down which check-in caused the problem.
When a tester reports a problem that the programmer thinks is fixed, the tester can say which build they saw the problem in. Then the programmer looks at when he checked in the fix and figures out whether it's really fixed.
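The "binary search on the historical archive" point deserves emphasis, because it turns bug hunting from checking O(n) builds into O(log n). A sketch of that search, assuming the archive is ordered oldest-first and the newest build exhibits the bug:

```python
def first_bad_build(builds, is_buggy):
    """Binary-search an ordered archive for the first build showing a bug.

    builds   -- build identifiers, oldest first; the newest must exhibit the bug
    is_buggy -- callable that reproduces the bug against one build
    """
    lo, hi = 0, len(builds) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_buggy(builds[mid]):
            hi = mid        # bug already present: look earlier
        else:
            lo = mid + 1    # bug not yet present: look later
    return builds[lo]
```

With an archive of 1024 nightly builds you reproduce the bug about ten times instead of up to a thousand; `hg bisect` and `git bisect` automate the same idea at changeset granularity.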
Allow me to begin by blatantly ripping off Wikipedia. Bear in mind, these are the general benefits of continuous integration, of which nightly builds should be considered a partial implementation. Obviously, your system will be more powerful if you couple nightly builds with your suite of automated (unit, functional, etc.) tests.
Advantages:
when unit tests fail or a bug emerges, developers might revert the codebase back to a bug-free state, without wasting time debugging
developers detect and fix integration problems continuously - avoiding last-minute chaos at release dates, (when everyone tries to check in their slightly incompatible versions).
early warning of broken/incompatible code
early warning of conflicting changes
immediate unit testing of all changes
constant availability of a "current" build for testing, demo, or release purposes
immediate feedback to developers on the quality, functionality, or system-wide impact of code they are writing
frequent code check-in pushes developers to create modular, less complex code
metrics generated from automated testing and CI (such as metrics for code coverage, code complexity, and features complete) focus developers on developing functional, quality code, and help develop momentum in a team
If we're just talking about a nightly build strategy in isolation, what you get is a constant sanity check that your codebase compiles on the test platform(s), along with a snapshot in time detailing who to blame. Couple this with automated testing and a sane strategy of continuous integration, and suddenly you have a robust suite that gives you who failed the tests in addition to who broke the build. Good deal, if you ask me.
You can read about the disadvantages in the remainder of the article, but remember, this is Wikipedia we're talking about here.
I think that...
So that you know when you've broken something as soon as possible and can fix it while it's still fresh in your head, rather than weeks later.
is easily my favorite, but here are some other reasons blatantly stolen when I was just searching for reasons why you wouldn't use CI:
Code you cannot deploy is useless code.
Integrating your code changes with the code changes of other people on the team.
I sometimes forget to run ALL the unit tests before I check in. My CI server never forgets.
Centralized status of your code, which can help with communication. (If I check in broken code and someone else has to do a deployment... well, this goes back to my favorite reason.)
Because,
The integrity of your unit tests is verified automatically, so you don't need to worry that your program's functionality has been broken by changes made by others.
It automatically gets the latest checked-in files and compiles them, so any compile errors caused by others are reported.
Instant e-mail notification on failure and on successful execution of the build, so you know who broke the build.
It can be integrated with code-standard tools like FxCop and StyleCop for .NET, so the build automatically checks coding standards.
If one doesn't do full builds on a regular basis, one can end up with a situation where some part of a program that should have been recompiled isn't, that the failure to compile that part of the program conceals a breaking change. Partial builds will continue to work fine, but the next full build will cause things to break for no apparent reason. Getting things to work after that can be a nightmare.
One potential social benefit: automated builds could decrease toxicity among team members. If developers are repeatedly carrying out a multi-step process one or more times per day, mistakes are going to creep in. With manual builds, teammates might have the attitude, "My incompetent developers can't remember how to do builds right every day. You'd think they have it down by now." With automated builds, any problems that come up are problems with a program - granted, a program that someone wrote, but still.

Test planning, results, search, compare and reports

I'm looking for a Tool to do
Test planning
Inserting test results
Searching previous tests and results
Comparing multiple test results
Making reports out of existing test data
The tests could be almost anything. For example testing performance of a specific software in a specific hardware. The point is that it would be possible to search earlier test procedures and results to be able to reproduce the test conditions. For example new results could be written using the same procedure only with different hardware.
This tool would be used to record test plans and results. The tools would NOT be used for executing the tests. The tool would act more as a database for developers to insert test plans and result, search existing tests and compare results.
How about retrofitting an existing blog, wiki or CMS engine for doing this?
Say for example, in a wiki each wiki article could represent a test. You could have page templates set up, with required sections like "purpose", "scenario", "results".
Pick a system you're already familiar with, so you'll have something running quickly, use it for a while, see what customizations it needs. Once the list of hard-to-implement things gets bigger, you can look for a custom tool, and you'll have a solid list of requirements by then.
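If you do end up sketching a custom tool, the data model behind those requirements is small: a test record carrying a procedure and per-run results, plus search and compare over a collection of them. A hypothetical sketch (all names here are mine, purely for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class TestRecord:
    name: str
    procedure: str                  # steps needed to reproduce the conditions
    hardware: str
    results: dict = field(default_factory=dict)  # metric name -> measured value

def search(records, keyword):
    """Find earlier tests by keyword, so their procedure can be reused."""
    kw = keyword.lower()
    return [r for r in records
            if kw in r.name.lower() or kw in r.procedure.lower()]

def compare(run_a, run_b):
    """Pair up the metrics two runs share, e.g. same procedure on new hardware."""
    shared = run_a.results.keys() & run_b.results.keys()
    return {k: (run_a.results[k], run_b.results[k]) for k in shared}
```

Two runs of the same procedure on different hardware then compare directly, which is exactly the "reproduce the conditions, swap the hardware" workflow described in the question.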
You are probably looking for a test (case) management software. A good test management software lets you plan tests (and cases), record results and print reports/provide relevant metrics. Assigning tests to team members and email notifications similar to defect/bug tracking tools should also be included in a good tool. There are quite a few tools for this out there. A modern web-based test management app is our tool TestRail, feel free to give it a try.