Should I commit when tests are still failing (yet to pass)? - ruby-on-rails-3

Our Rails development team tries to follow continuous integration. We have decided to adopt a policy of only committing features whose tests pass. Is that a good way to go? Should I delay integrating with other people's features until my tests pass (even if the part of the feature that is done works fine)? Thanks in advance.

The tests should pass. If you're running a CI server, it'll just spam people with emails until they do. Without a CI server, everyone else will have to figure out whether those tests are "supposed" to fail. Boo.
Another option is to only check in tests for actually-written features; if you're using tests as an executable specification they wouldn't all pass until the entire app was done and nobody would be able to check anything in ever.
You may also be able to mark tests as "pending" or indicate they should be skipped, but remembering to un-pend/un-skip them is often problematic.
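For example, with RSpec (assuming RSpec here, since the question is tagged ruby-on-rails-3; the Invoice model and ticket number below are made up), a spec can be marked pending or skipped roughly like this:

    # spec/models/invoice_spec.rb - a sketch; Invoice and TICKET-123 are hypothetical
    require 'spec_helper'

    describe Invoice do
      it "applies the bulk discount" do
        pending "discount calculation not implemented yet (TICKET-123)"
        Invoice.new(quantity: 100).discount.should eq(0.1)
      end

      # xit marks the whole example as pending without running its body
      xit "emails the customer when the invoice is overdue" do
        # ...
      end
    end

RSpec reports these as pending rather than failing, which keeps the build green but still lists them on every run.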

The tests SHOULD PASS; that's the reason you are writing them in the first place. If for some reason one or more tests do not pass, it indicates that something went wrong (obviously) and you and your team should be working on the fix.
If code is committed with test failures, send spam mails blaming the programmer who did it; that way, next time they will pay more attention before committing code.
I have heard of one way to avoid committing code with test failures, though I have not personally tried it. It involves two repositories (or it could just be a branch); the theory behind it is:
Developers commit to a branch whose only purpose is to guarantee that all tests pass; you configure your CI server to build and run the tests from this branch.
When all the tests pass on that branch, it is merged into trunk. Since everyone works on the branch, the merge should be transparent and automatic.
I repeat, I have not tried this approach, and in my opinion it creates more problems than it solves.
Another alternative could be to add a hook on the commit event in your VCS that forces all tests to run, but this could be time-consuming just to perform a single commit.
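For instance, with Git that could be a pre-commit hook along these lines (just a sketch; it assumes the suite runs via 'bundle exec rake test', and as said it makes every commit as slow as the full suite):

    #!/usr/bin/env ruby
    # .git/hooks/pre-commit - abort the commit if the test suite fails.
    # Assumes the project's tests run with 'bundle exec rake test'.
    exit 0 if system("bundle exec rake test")

    warn "Test suite failed; commit aborted (bypass with 'git commit --no-verify')."
    exit 1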
As additional info, you could check this response:
https://stackoverflow.com/a/7110774/1268570

Personally, I would wait until the tests pass before integrating other people's features.

Related

Previously implemented feature goes missing after a few builds - testing

Say there are user stories 1-10. All tested okay -> to production. Then comes a CR with 5 more user stories. All tested okay -> to production.
Then come 5 more user stories. Tested okay -> to production. Now a user story or two from the first 1-10 breaks down. Obviously the testers will have to carry the blame for it.
Developers have direct access to the QA environment's build path; any developer can go and put a code file there. It is just a simple folder structure.
How do we fix this and keep 'our' hands clean?
Also, please note that we do ad-hoc testing due to stringent timelines.
The situation where something new breaks something old is rather common; I cannot see what the problem is. The QA environment is perfectly good for catching such a regression.
What I can suggest is:
1. Having Development / QA / Production environments
Try to set up a proper process: if something new has been coded up and developer-tested, it can go to 'QA'; only when the new stuff has been QA-tested can it go to 'Production'.
2. Continuous Build Integration
It's also nice to have the key features covered with unit tests and/or to have a suite of automated tests. One button click can then show you the general state of your app and even whose check-in broke the build.
3. Regression testing
Ensure you have a thorough regression suite. These tests are run mainly to avoid such problems and to verify that no critical issues leak into production.
Hope this helps a little.

Best practice for writing tests that reproduce bugs

I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; then, once I fix the issue, I will immediately get a notification that the test fails. If I disable it, I won't see it failing and it will probably stay disabled until someone rediscovers the test.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming that you use a clear variable name / method name / comments to indicate that the test is more of a placeholder than the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date a few weeks/months out from now, by which I expect to be able to get back to it or have it fixed. If I end up having to push the date out 2 or 3 times, I end up deleting the test because it must not be that important.
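A minimal sketch of such a "time bomb" in JUnit 4 (the issue number and the date are made up):

    import static org.junit.Assert.assertTrue;

    import java.util.Calendar;
    import java.util.GregorianCalendar;

    import org.junit.Test;

    public class Issue1234ReminderTest {

        // "Time bomb" reminder: passes quietly until the chosen date, then starts failing.
        @Test
        public void remindMeToRevisitIssue1234() {
            Calendar deadline = new GregorianCalendar(2012, Calendar.JUNE, 1); // made-up date
            assertTrue("Reminder: revisit issue #1234, the grace period has expired",
                    Calendar.getInstance().before(deadline));
        }
    }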
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see that the test passes.
If that's not an option, then remember that tests are there not only to test but also to document. So write a test that documents how your program actually works; if necessary, add a comment to the test. And don't write tests that are ignored - it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue-tracking system - that's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group so you can instantly exclude them if needed.
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible - that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, etc. Tests are not meant to track and remind you about issues; that's the job of the issue-tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So what one can do is write the test with the desired expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expected exception.
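A sketch of that idea (DiscountCalculator and the bug number are hypothetical):

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class DiscountCalculatorTest {

        // Bug XYZ-42 (hypothetical): bulk orders currently get no discount.
        // The assertion below states the desired behaviour, so today it throws
        // AssertionError and the test passes. Once the bug is fixed the assertion
        // stops throwing, JUnit reports a missing expected exception, and the
        // developer has to remove the expected attribute.
        @Test(expected = AssertionError.class)
        public void appliesTenPercentDiscountForBulkOrders() {
            DiscountCalculator calculator = new DiscountCalculator();
            assertEquals(10, calculator.discountFor(100)); // the correct expectation
        }
    }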

How do I run just a single stage in my bamboo build?

I have a Bamboo build with 2 stages: Build&Test and Publish. The way Bamboo works, if Build&Test fails, Publish is not run. This is usually the way that I want things.
However, sometimes, Build&Test will fail, but I still want Publish to run. Typically, this is a manual process where even though there is a failing test, I want to push a button so that I can just run the Publish stage.
In the past, I had two separate plans, but I want to keep them together as one. Is this possible?
From the Atlassian help forum, here:
https://answers.atlassian.com/questions/52863/how-do-i-run-just-a-single-stage-of-a-build
Short answer: no. If you want to run a stage, all prior stages have to finish successfully, sorry.
What you could do is to use the Quarantine functionality, but that involves re-running the failed job (in yet-unreleased Bamboo 4.1, you may have to press "Show more" on the build result screen to see the re-run button).
Another thing that could be helpful in such a situation (but not for the OP) is disabling jobs.
Generally speaking, the best solution to most Bamboo problems is to rely on Bamboo as little as possible because you ultimately can't patch it.
In this case, I would just quickly write or re-use an asynchronous dependency resolution mechanism (something like GNU Make and its targets), and run that from a single stage.
Then just run everything on the default all-like target, and let users select the target via a custom run variable.

Grails integration tests failing in a (seemingly) random and non-repeatable way

We are writing integration tests for our Grails 2.0.0 application with the help of the Fixtures and Build-Test-Data plugins.
During testing, it was discovered that the integration tests fail at certain times and pass at other times. Running 'test-app' sometimes results in all tests passing, and sometimes results in some of our tests failing.
When the tests fail, they are caused by a unique constraint being violated during the insert of an instance of a domain class. This would indicate that there are still records in the test DB. I am running the H2 db, and have definitely got 'dbCreate = "create-drop"' in my DataSource.groovy.
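For reference, a typical Grails 2.0 test-environment block in DataSource.groovy looks roughly like this (a sketch based on the default template; the exact H2 URL options may differ):

    // grails-app/conf/DataSource.groovy (excerpt)
    environments {
        test {
            dataSource {
                dbCreate = "create-drop"             // drop and recreate the schema for each run
                url = "jdbc:h2:mem:testDb;MVCC=TRUE" // in-memory H2 database
            }
        }
    }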
The question 'Grails 2.0 integration test pollution?' seems to indicate there is a significant test-pollution problem in Grails. Are there any solutions to this? Have I hit GRAILS-8530?
[Edit] The test pollution seems to be caused by the unit tests. We have more or less proved this by deleting the unit tests and successfully running 'test-app' repeatedly.
When I run into errors like this, I like to try to find the unit test(s) causing the problem. This might be kind of tricky since yours seem to fail only on occasion.
1) I'd look at unit tests that were recently added. If this problem just started happening then that's a good place to look.
2) Metaclassing seems to be good at causing these types of errors, so I'd look for metaclassing that isn't set up / torn down properly. It is not as much of an issue with 2.0 as with <= 1.3.7, but it could be the problem.
3) I wrote a plugin that executes your tests in a random order, which might not help you solve your current problem. But what might help is that it prints out all of your tests, so you can take what it gives you, run grails test-app <pasted list of unit tests> IntegrationTestThatIsFailing, and then start removing unit tests to find the culprit(s) (http://grails.org/plugin/random-test-order). I found a bug in this with 2.0 that I haven't had time to fix yet (integration tests fail when asserting on the rendered view name), but it should still print out your test names for you (which is better than doing it yourself :)
The fact that the integration tests fail with a constraint violation due to existing records reminds me of a situation I once encountered with functional tests (Selenium) executing in unpredictable order, some of them not cleaning up the database properly. Sure, the situation with functional tests is different, since it is more difficult to restore the database state (the test case cannot roll back a transaction in another JVM).
Although integration tests usually roll back transactions, it is still possible to break this behavior if your code controls transactions (commits) explicitly.
First, I would try forcing the execution order as mentioned by Jarred in 3). Assuming you can then reproduce the behavior, I would check the transactional behaviour next. Setting the logging level of org.hibernate.transaction to debug should show you where the transaction boundaries are.
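In Grails 2.0 that can be done in Config.groovy, roughly like this (sketch):

    // grails-app/conf/Config.groovy (excerpt) - log transaction begin/commit/rollback
    log4j = {
        debug 'org.hibernate.transaction'
    }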
Sorry, I don't yet have a good explanation for why wiping out the unit tests gets rid of the symptoms, besides a general "possibly metaclassing issues". :)

Software testing advice?

Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately, the release given to us is rarely tested by the developers, and that's "the relationship" we have had with them these last 7 years....
As a result, if the patch/release fails the tests at the functional-testing level or at the quality level, then with each patch given we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules, so that a patch/change in one module means you only have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit testing or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process).
Implement a continuous testing framework that you and the developers can access. Something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see nightly test results on the build. Heck, they don't even need to test it themselves; your tests will be run every night (or regularly), and they'll know straight away what faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, all new code must have a certain score in some static analysis checker.
Of course anything like this can be gamed, so have regular post-release debriefs where you discuss areas of concern and put contingencies in place to avoid them in the future.
Implement a GO server with a dashboard and manage it with the GO agent GUI at your end.
http://www.thoughtworks-studios.com/forms/form/go/download