How do I run just a single stage in my Bamboo build?

I have a Bamboo build with two stages: Build&Test and Publish. The way Bamboo works, if Build&Test fails, Publish is not run. This is usually what I want.
Sometimes, though, Build&Test will fail but I still want Publish to run. Typically this is a manual decision: even though there is a failing test, I want to be able to push a button and run just the Publish stage.
In the past, I had two separate plans, but I want to keep them together as one. Is this possible?

From the Atlassian help forum, here:
https://answers.atlassian.com/questions/52863/how-do-i-run-just-a-single-stage-of-a-build
Short answer: no. If you want to run a stage, all prior stages have to finish successfully, sorry.
What you could do is use the Quarantine functionality, but that involves re-running the failed job (in the not-yet-released Bamboo 4.1, you may have to press "Show more" on the build result screen to see the re-run button).
Another thing that could be helpful in such a situation (though not for the OP) is disabling jobs.

Generally speaking, the best solution to most Bamboo problems is to rely on Bamboo as little as possible because you ultimately can't patch it.
In this case, I would just quickly write or re-use a dependency resolution mechanism (something like GNU Make and its targets) and run it from a single stage.
Then run everything via a default all-like target, and let users select a specific target through a custom run variable.
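A minimal sketch of that idea, assuming a plan variable named "target" (here assumed to reach the script as an environment variable bamboo_target); the target names and shell scripts are hypothetical:

```python
# build.py -- a minimal Make-style target resolver, run from a single Bamboo stage.
# The "bamboo_target" environment variable and the shell scripts are assumptions.
import os
import subprocess

# target -> (dependencies, command)
TARGETS = {
    "test":    ([], "run_tests.sh"),
    "publish": ([], "publish.sh"),
    "all":     (["test", "publish"], None),
}

def run(target, done=None):
    """Run a target's dependencies first, then its own command."""
    done = set() if done is None else done
    if target in done:
        return
    deps, cmd = TARGETS[target]
    for dep in deps:
        run(dep, done)
    if cmd:
        subprocess.run(["sh", cmd], check=True)  # a failing command fails the stage
    done.add(target)

if __name__ == "__main__":
    # Default to "all"; a custom run lets a user pick e.g. "publish" alone.
    run(os.environ.get("bamboo_target", "all"))
```

Running just Publish after a failed Build&Test then becomes a custom run with the variable set to publish, without splitting the plan in two.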

Related

VNext build system: don't trigger the build if the code has not changed

In the XAML build system in VS2012, there is a check box option, "Build even if nothing has changed since the previous build", available when the Schedule trigger is selected.
This option is missing in the VNext build system, and the problem is that the nightly build fires even if no code was committed during the day. In my case it takes two hours to build with the complete test suite.
Is there an easy workaround or plugin to achieve this?
Thanks
In the TFS VNext build system, Microsoft removed this option.
As a workaround, you can create a build definition with a CI trigger instead, so a build only runs when something is actually checked in.
Another benefit of this practice is that you get fast feedback if a change cannot be integrated into the code base.
In this case you may need more than one build agent to run parallel builds when there are parallel check-ins.
There is a feature request for it, but I don't think Microsoft will implement it:
https://visualstudio.uservoice.com/forums/121579-visual-studio-2015/suggestions/16300498-add-build-event-if-nothing-has-changed-since-prev
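If you do want to keep a nightly schedule, another DIY option (not a TFS feature, just a sketch assuming a git repository and a writable marker file on the agent) is to let the build itself exit early when nothing has changed:

```python
# skip_if_unchanged.py -- exit early if nothing was committed since the last build.
# Sketch only: the marker-file location is a made-up convention.
import pathlib
import subprocess
import sys

MARKER = pathlib.Path("last_built_commit.txt")  # hypothetical marker file

head = subprocess.run(
    ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
).stdout.strip()

if MARKER.exists() and MARKER.read_text().strip() == head:
    print("No changes since the last build; skipping.")
    sys.exit(0)  # treat as a successful no-op build

MARKER.write_text(head)
print(f"Changes detected; building {head}.")
# ... the real build and test steps would follow here ...
```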

Should I commit while tests are still failing?

Our Rails development team tries to follow continuous integration. We have decided to adopt a policy of only committing features whose tests pass. Is that a good way to go? Should I delay integrating with other people's features until my tests pass, even if part of the feature already works OK? Thanks in advance
The tests should pass: if you're running a CI server, it will just spam people with emails until they do. Without a CI server, everyone else will have to figure out whether those tests are "supposed" to fail. Boo.
Another option is to only check in tests for actually-written features; if you're using tests as an executable specification they wouldn't all pass until the entire app was done and nobody would be able to check anything in ever.
You may also be able to mark tests as "pending" or indicate that they should be skipped, but remembering to un-pend or un-skip them later is often problematic.
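(The question is about Rails, where RSpec has pending and skip; purely as an illustration of the same idea in another stack, here is a Python/pytest sketch.)

```python
# pytest sketch: unfinished work stays visible in the report without
# breaking the build, and "strict" xfail nags you once the test starts passing.
import pytest

def feature_x():
    raise NotImplementedError  # stub: the real feature is not written yet

@pytest.mark.skip(reason="feature X not implemented yet")
def test_feature_x():
    assert feature_x() == "expected"

@pytest.mark.xfail(reason="known bug, tracked separately", strict=True)
def test_known_bug():
    # Reported as "expected failure" while broken; with strict=True an
    # unexpected pass is reported as a failure, reminding you to un-mark it.
    assert 1 + 1 == 3  # placeholder for the buggy behaviour
```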
The tests SHOULD PASS; that's the reason you are writing them in the first place. If for some reason one or more tests do not pass, it indicates that something went wrong (obviously), and you and your team should be working on a solution.
If code does get committed with test failures, send mails calling out the programmer who did it; that way, next time they will pay more attention before committing code.
I have heard of one way to avoid committing code with test failures, though I have not personally tried it. It involves having two repositories (or just a branch). The theory behind it is:
Developers commit to a branch whose sole purpose is to guarantee that all tests pass; you configure your CI server to build and run tests from this branch.
When all the tests pass on the branch, it is merged to the trunk. Since everyone works on this branch, the merge should be transparent and automatic.
I repeat, I have not tried this approach, and in my opinion it involves more problems than it solves.
Another alternative is to add a hook on the commit event in your VCS that forces all tests to run, but this can make even a single commit time-consuming.
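As a sketch of that hook idea (git runs .git/hooks/pre-commit if the file is executable; the pytest command is just an assumed runner, substitute your real one):

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit -- block the commit if the test suite fails.
# The file must be executable; the test command below is an assumption.
import subprocess
import sys

result = subprocess.run(["python", "-m", "pytest", "-q"])
if result.returncode != 0:
    print("Tests failed; commit aborted. (git commit --no-verify overrides.)")
    sys.exit(1)  # a non-zero exit makes git abort the commit
```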
For additional information, you could check this response:
https://stackoverflow.com/a/7110774/1268570
Personally, I would wait until the tests pass before integrating other features.

Is there a way to 'test run' an ant build?

Is there a way to run an Ant build such that you get output describing what the build would do, but without actually doing it?
That is to say, it would list all of the commands that would be submitted to the system, output the expansion of all filesets, etc.
When I search for 'ant' and 'test', I get overwhelming hits for running tests with Ant. Any suggestions on actually testing Ant build files?
It seems that you are looking for a "dry run".
I googled a bit and found no evidence that this is supported.
Here's a Bugzilla request for that feature, which explains things a bit:
https://issues.apache.org/bugzilla/show_bug.cgi?id=35464
This is impossible in theory and in practice. In theory, you cannot test a program meaningfully without actually running it (basically the halting problem).
In practice, since individual Ant tasks very often depend on each other's output, a dry run would be quite pointless for the vast majority of Ant scripts. Most of them compile some source code and build JARs from the class files; but what would the fileset for the JAR contain if the compiler didn't actually run?
The proper way to test an Ant script is to run it regularly, but on a test system, possibly a VM image that you can restore to its original state easily.
Here's the problem: you have target #1 that builds a bunch of stuff, then target #2 that copies it.
You run your Ant script in test mode; it pretends to do target #1. Now it comes to target #2 and there's nothing to copy. What should target #2 do? Things can get even more confusing when you have if and unless clauses in your Ant targets.
I know that Make has a command-line parameter, -n (--dry-run), that tells it to print the commands without running them, but I never found it all that useful. Maybe that's why Ant doesn't have one.
Ant does have a -k parameter to tell it to keep going if something failed. You might find that useful.
As Michael already said, that's what test systems are for; VM images come in handy there.
From my Ant bookmarks: some years ago a tool called "Virtual Ant" was announced. I never tried it, so don't regard this as a tip, just as something someone heard of.
From what the site says:
"With Virtual Ant you no longer have to get your hands dirty with XML to create or edit Ant build scripts. Work in a completely virtualized environment similar to Windows Explorer and run your tasks on a Virtual File System to see what they do, in real time, without affecting your real file system*. The actual Ant build script is generated in the background."
Hm, sounds too good to be true ;-)
"...without affecting your real file system..." might be what you asked for!?
They provide a 30-day trial license, so you won't lose any money, only the time it takes to have a look.

Existing solutions to test an NSIS script

Does anyone know of an existing solution to help write tests for an NSIS script?
The motivation is the benefit of knowing whether modifying an existing installation script breaks it or has undesired side effects.
Unfortunately, I think the answer to your question depends at least partially on what you need to verify.
If all you are worried about is that the installation copies the right file(s) to the right places, sets the correct registry information etc., then almost any unit testing tool would probably meet your needs. I'd probably use something like RSpec2, or Cucumber, but that's because I am somewhat familiar with Ruby and like the fact that it would be an xcopy deployment if the scripts needed to be run on another machine. I also like the idea of using a BDD-based solution because the use of a domain-specific language that is very close to readable text would mean that others could more easily understand, and if necessary modify, the test specification when necessary.
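To make that concrete, here is a minimal sketch of such a post-install check; the answer suggests RSpec or Cucumber, and this is only the same idea expressed in Python with pytest, with entirely hypothetical paths:

```python
# test_install.py -- minimal post-install checks (run with pytest).
# All paths are hypothetical; point INSTALL_DIR at your real install location.
import os
import pathlib

INSTALL_DIR = pathlib.Path(os.environ.get("INSTALL_DIR", r"C:\Program Files\MyApp"))

def test_main_executable_installed():
    assert (INSTALL_DIR / "myapp.exe").is_file()

def test_config_file_installed():
    assert (INSTALL_DIR / "config.ini").is_file()
```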
If, however, you are concerned about the user experience (what progress messages are shown, etc.), then I'm not sure the tests you would need could be expressed as easily, or at least not without a certain level of pain.
Good Luck! Don't forget to let other people here know when/if you find a solution you like.
Check out Pavonis.
With Pavonis you can compile your NSIS script and get the output of any errors and warnings.
Another solution would be AutoIT.
You can compile your installer using Jenkins and the NSIS command-line compiler, set up an AutoIT test script, and have Jenkins run the test.
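As a sketch of the compile-check step (makensis is the real NSIS command-line compiler, but the script name and the warning-detection heuristic below are assumptions):

```python
# check_installer.py -- compile an NSIS script and fail the build on problems.
# Scanning stdout for the word "warning" is a heuristic, not official behaviour.
import subprocess
import sys

result = subprocess.run(
    ["makensis", "installer.nsi"],  # hypothetical script name
    capture_output=True, text=True,
)
print(result.stdout)

if result.returncode != 0:
    sys.exit("NSIS compilation failed")  # hard errors
if "warning" in result.stdout.lower():
    sys.exit("NSIS compilation produced warnings")  # stricter policy

print("Installer compiled cleanly")
```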

Is there a tool for creating historical reports out of JUnit/NUnit results?

Looking for a way to get a visual report about:
overall test success percentage over time (information about if and how quickly tests are going greener)
visualised single test results over time (to easily notice test gone red that has been green for long time or vice versa to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically a tool that would generate results from the whole test results directory not just off the single (daily) run.
Generally it seems it could be done using XSLT, but it doesn't seem to have much flexibility to work with multiple files at the same time.
Does such a tool exist already?
I feel fairly confident claiming that most continuous integration engines, such as Hudson (for Java), provide such a capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it draws basic graphs from unit test results automatically by itself.
Oh, and remember to configure the CI properly: for example, our Hudson polls CVS every 10 minutes, and if it sees any changes it does all the associated tricks (gets the updated .java files, compiles, runs tests, verifies dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does all that you require and more. Even if you want it to ONLY run tests and give you feedback on those, it can.
There's a newer reporting tool supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.
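If none of those fit, aggregating the whole results directory yourself is not much code. A minimal sketch (the results/<run-id>/*.xml layout is an assumption) that computes the pass percentage per run from the standard JUnit XML attributes tests, failures and errors:

```python
# trend.py -- pass rate over time from a directory of JUnit XML results.
# Assumed layout: results/<run-id>/*.xml, one subdirectory per (daily) run.
import pathlib
import xml.etree.ElementTree as ET

def pass_rate(run_dir):
    """Percentage of passing tests across all JUnit XML files in one run."""
    total = failed = 0
    for report in run_dir.glob("*.xml"):
        root = ET.parse(report).getroot()
        # A report is either a single <testsuite> or a <testsuites> wrapper.
        suites = [root] if root.tag == "testsuite" else root.iter("testsuite")
        for suite in suites:
            total += int(suite.get("tests", 0))
            failed += int(suite.get("failures", 0)) + int(suite.get("errors", 0))
    return 100.0 * (total - failed) / total if total else None

for run_dir in sorted(pathlib.Path("results").iterdir()):
    if run_dir.is_dir():
        rate = pass_rate(run_dir)
        if rate is not None:
            print(f"{run_dir.name}: {rate:.1f}% passing")
```

Printed in run order, that gives the over-time view the question asks for; plotting it is a small step from there.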