Say there are user stories 1-10. All tested okay -> to production. Then comes a CR with 5 more user stories. All tested okay -> to production.
Then come 5 more user stories. Tested okay -> to production. Now a user story or two from the first 1-10 breaks down. Obviously the testers will have to carry the blame for it.
Developers have direct access to the QA environment's build path; any developer can go and put a code file there. It's just a simple folder structure.
How do we fix this and keep 'our' hands clean?
Please also note that we do ad-hoc testing due to the stringent timelines.
The situation where something new breaks something old is rather common; I cannot see the problem here. A QA environment is perfectly good for catching such a regression.
What I can suggest is:
1. Having Development / QA / Production environments
Try to set up a proper process: once something new has been coded and developer-tested, it can go to QA, and only when the new stuff has been QA-tested can it go to Production;
2. Continuous Build Integration
It's also nice to have the key features covered by unit tests and/or a suite of automated tests. One button click can show you the general state of your app and even whose check-in has failed the build.
3. Regression testing
Ensure you have a thorough regression suite. These are run mainly to avoid exactly such problems and to verify that no critical issues leak into production.
Hope this helps a little.
We are switching from a classic 'Waterfall' model to a more Agile-oriented philosophy. We decided to give BDD a try (Cucumber), but we have some issues migrating some of our 'old' methodologies. The biggest question mark is how manual tests integrate into the cycle.
Let's say the Project Manager defined the Feature and some basic Scenario Outlines. With the test team, we defined around 40 Scenarios for this feature. Some are not possible to test automatically, which means they will have to be tested manually. Executing manual testing when all you have is the feature file feels wrong. We want to be able to see the past failure rate of tests, for example. Most Test-Case managers support such features, but they can't work with feature files. Maintaining the manual test cases in an external Test-Case manager will cause never-ending synchronization issues between the feature file and the Test-Case manager.
I'm interested to hear if anyone is able to cover this middle ground, and how.
This is not a very unusual case. Even in Agile it may not be possible to automate every scenario. The scrum teams I am working with usually tag such scenarios as @manual in the feature file. We have configured our automation suite (Cucumber - Ruby) to ignore this tag when running nightly jobs. One problem with this is that, as you have mentioned, we don't know the outcome of the manual tests, as the testers document the results locally.
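For illustration, one way to wire this up, assuming cucumber-ruby and a Rakefile (the task names here are made up, and older Cucumber versions write the tag filter as ~@manual rather than 'not @manual'):

```ruby
# Rakefile -- sketch only; adjust task names and tags to your project
require 'cucumber/rake/task'

# Nightly/CI run: execute everything except scenarios tagged @manual
Cucumber::Rake::Task.new(:nightly) do |t|
  t.cucumber_opts = ['--tags', 'not @manual']
end

# List the @manual scenarios for the test team without executing any steps
Cucumber::Rake::Task.new(:manual) do |t|
  t.cucumber_opts = ['--tags', '@manual', '--dry-run']
end
```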
My suggestion for this was to document the results of each iteration in a YAML file or any other format that suits the purpose. This file should be part of the automation suite and checked into the repository, so to start with you have the results documented alongside the automation suite. Later, when you have the resources and time, you can add functionality to your automation suite to read this file and generate a report, either together with the other automation results or separately. Until then, your version control should help you track all previous results.
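As a rough sketch of what that later reporting step could look like (the file name and YAML layout here are only an assumption, not an established convention):

```ruby
# report_manual_results.rb -- sketch; expects a hand-maintained file such as:
#   - scenario: "Exported PDF matches the approved layout"
#     iteration: "sprint-12"
#     result: pass
#     tester: "AB"
require 'yaml'

results = YAML.load_file('manual_results.yml')

results.group_by { |r| r['iteration'] }.each do |iteration, entries|
  passed = entries.count { |e| e['result'].to_s.downcase == 'pass' }
  puts "#{iteration}: #{passed}/#{entries.size} manual scenarios passed"
  entries.each do |e|
    next if e['result'].to_s.downcase == 'pass'
    puts "  FAILED: #{e['scenario']} (tester: #{e['tester']})"
  end
end
```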
Hope this helps.
To add to #Eswar's answer, if you're using Cucumber (or one of its siblings), one option would be to execute the test runner manually and include prompts for the tester to check certain aspects. They then pass or fail the test according to their judgement.
This is often useful for aesthetic aspects, e.g. cross-browser rendering, element alignment, correct images used, etc.
As #Eswar mentioned, you can exclude these tests from your automated runs by tagging them.
See this article for an example.
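In cucumber-ruby, such a prompting step might look roughly like this (the step wording is invented, and it only makes sense when the manual profile is run interactively by a tester, not headless on CI):

```ruby
# features/step_definitions/manual_check_steps.rb -- sketch only
Then(/^the tester confirms "([^"]*)"$/) do |aspect|
  $stdout.print "MANUAL CHECK: #{aspect}. Does it look correct? [y/N] "
  $stdout.flush
  answer = $stdin.gets.to_s.strip.downcase
  # Fail the scenario unless the tester explicitly approves
  raise "Manual check rejected by tester: #{aspect}" unless answer == 'y'
end
```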
Test cases that cannot be automated are a poor fit for a cucumber test. We have a bunch of these edge cases. It is nigh impossible to get Selenium to verify PDF documents well. Same thing for CSV downloads (not impossible, but not worth the effort). Look and feel tests simply require human eyes at this point. Accessibility testing with screen readers is best done manually as well.
For that, be sure to record the acceptance criteria in the user story in whichever tool you use to track work items, and write a manual test case. Azure DevOps, Jira, IBM Rational Team Concert and their ilk have ways to record manual test plans, link them to stories, and record the results of executing a manual test.
I would remove the manual test cases from the cucumber tests, and rely on the acceptance criteria for the story, and link the story to some sort of manual test case, be it in a tool or a spreadsheet.
Sometimes you just need to compromise.
We use Azure DevOps with Test Plans plus some custom code to synchronize cucumber tests to ADO. I can describe how we've implemented it in our projects:
We start with the cucumber file first. Each User Story has its own Feature file. The scenarios in the Feature are the acceptance criteria for the story. We end up with lots of Feature files, so we use naming conventions and folders to organize them.
We annotate the top of the Feature file with a tag linking it to the User Story, e.g. @Story-1234.
We've written a command-line utility that reads the cucumber files with these tags (a rough sketch of the parsing side appears after this list). It then fetches all the Test Suites in the Test Plan that are linked to Stories. In ADO, a story can only be linked to a single Test Suite. If a Test Suite doesn't exist for that Story, our tool creates one.
For each Scenario, the tool creates an ADO Test Case and then annotates the Scenario with the Test Case ID. This creates amazing traceability for each User Story, as the related Test Cases are automatically linked to the Story in the Azure DevOps UI.
Although we don’t do this, we could populate the TestCase with the step definitions from our cucumber Scenario. It’s a basic XML structure that describes the steps to take. This would be useful if we wanted to manually execute the test case using the Azure DevOps Test Case UI. Since we focus primarily on automation, we rely on the steps in our Feature files and our ADO Test Cases end up being symbolic links back to cucumber Scenarios.
Because our cucumber tests are written in C# (SpecFlow), we can get the full class name and method for the cucumber test code. Our tool is able to update the Azure DevOps Test Case with the automation details.
Any test case that isn't ready for automation or must be done manually, we annotate with an @ignore or @manual tag on the Scenario.
Using Azure DevOps Pipelines, we use the Visual Studio Test task to run our tests. The important point here is that we execute the Test Plan option. This option fetches the Test Cases in the Test Plan that have automation and then executes the specific cucumber tests. The out-of-the-box functionality updates the Test Case statuses with the test results.
After running through automation, we use the Test Plan Report in Azure DevOps, which shows the Test Case execution status over time and can distinguish between automated and manual test cases.
We execute any remaining manual test cases to complete the Test Plan.
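Our utility is written in C# alongside the SpecFlow tests, but to give an idea of the parsing half, a stripped-down sketch could look like this (shown in Ruby for brevity; the folder layout and tag format are just our conventions, and the ADO REST calls are omitted):

```ruby
# sync_sketch.rb -- illustration only, not our actual tool
require 'find'

story_map = Hash.new { |h, k| h[k] = [] }

Find.find('features') do |path|
  next unless path.end_with?('.feature')
  text  = File.read(path)
  story = text[/@Story-(\d+)/, 1]   # tag at the top of the Feature file
  next unless story
  text.scan(/^\s*Scenario(?: Outline)?:\s*(.+)$/) do |(name)|
    story_map[story] << name.strip
  end
end

# In the real tool, each scenario would be turned into an ADO Test Case here
# and the Feature file annotated with the returned Test Case ID.
story_map.each do |story, scenarios|
  puts "Story #{story}: #{scenarios.size} scenario(s)"
  scenarios.each { |s| puts "  - #{s}" }
end
```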
For us, we often found that the manual cases that cannot be automated are exception cases, or cases that depend on the external environment (for example malformed data, network connection not available, maintenance, first-time guides...). These cases require special setup to simulate the environment in which they happen.
Ideally, I believe it is possible to cover everything, given that you are prepared to go as far as necessary to make it happen. But in reality the effort needed is most often too great, so we prefer a hybrid approach of mixed manual and automatic test cases. We do, however, try to convert those exception cases to automatic ones over time, by setting up separate environments to simulate the exception cases and writing automated tests against them.
Nevertheless, even with that effort, there will be cases that are impossible to simulate, and I believe those should be covered by technical tests from the engineers.
You could use an approach similar to the following example:
http://concordion.org/Example.html
When you use a build or continuous integration system to track your test runs, you could add simple specifications/tests for your manual cases that contain a text comparison (e.g. "pass" or "fail"). You would then need to update the spec after each manual test run, check it in, and start the tests in your build / continuous integration system. The manual results would then be recorded together with the results of the automated test execution.
If you used a tool like Concordion+ (https://code.google.com/p/concordion-plus/) you could even write a summary specification containing scenarios for each of your manual tests. Each one would be reported as an individual test result in your test execution environment.
Cheers
Taking screenshots seems to be a good idea; you can still automate the verification, but you will need to go the extra mile. For instance, when using Selenium you can add Sikuli (NB: you can't run headless tests) to compare results (images), or take a screenshot with Robot (java.awt), use OCR to read the text, and assert or verify it (TestNG).
I've read a lot of questions about multiple asserts in tests. Some are against it and some think it's OK. But I'm starting to wonder how I should handle longer tests that have many steps.
For example this test with an Android device:
Start wifi
Install app
Uninstall app
Stop wifi
Run the test a couple of times
As I want to run it multiple times and always in this order, it has to be a single test(?). So I'm forced to do four asserts along the way:
Check that wifi is on.
Check that the app got installed.
Check that the app got uninstalled.
Check that wifi is off.
The test is OK.
Is this wrong or ugly? I don't see how I could get away from it without splitting up the test, and since I see it as a single test case, splitting also seems wrong.
From what I understand from the description: yes, this is wrong, because of this part:
always in this order
A good unit test is isolated (not dependent on other tests) and its results are not dependent on a particular order of execution. This is important because many frameworks simply make no guarantees about the order of execution.
I think you can split that test up into multiple tests. Keep in mind that in order to test something you might have to change the state prior to it (which is what you do by starting/stopping wifi), so this is hard to overcome entirely.
This could be your layout of tests:
StartWifi
StopWifi
InstallApp_WithWifiStarted_InstallsSuccesfully
InstallApp_WithoutWifiStarted_AbortsInstallation
and continue like this for uninstall (I'm not sure what the requirements for that are).
With these tests you will now have knowledge of the following:
The wifi service can be started
The wifi service can be stopped
Installing the app with wifi works
Installing the app without wifi doesn't work
Whereas with your single test, you could only deduce from a failure that something went wrong somewhere along the line, but it would be unclear where. The problem could have been located at:
Starting wifi
Installing app
Uninstalling app
Stopping wifi
With separate, smaller tests you can rule out the ones that aren't applicable because they work themselves.
[At this point I notice you changed the tag from unit-testing to integration-testing]
It's important to note, though, that what you do isn't bad per se: larger units are good to test as well, although, as you indicate yourself, this is where you're getting close to integration testing.
It's important that you use unit testing and integration testing as complementary methods: by having these smaller unit tests and your bigger integration test, you can verify both that the smaller parts work and that the combination of them works.
Conclusion: yes, having several asserts in your test is okay, but make sure you also have smaller tests to test the independent units.
Yes, it's fine to use multiple asserts in a single test. Your test is an integration test and it looks like an acceptance test, and it is normal for those (which exercise a big part of the system) to have many assertions. There should only be one block of assertions, however.
To illustrate that, here are the four tests I think you need to test the functionality you're testing (considering only happy paths):
Test that the wifi can be turned on.
Turn the wifi on.
Assert that the wifi is on.
Turn the wifi off.
Test that the wifi can be turned off:
Turn the wifi on.
Turn the wifi off.
Assert that the wifi is off.
Test that the application can be installed:
Turn the wifi on.
Install the application
Assert that the application is installed.
Turn the wifi off.
Uninstall the application (if you need to do that to clean up).
Test that the application can be uninstalled:
Turn the wifi on.
Install the application.
Uninstall the application.
Assert that the application is uninstalled.
Turn the wifi off.
Each test tests only one action. It might take multiple language-level assertions to test that that action did everything it was supposed to; that's fine. The point is that there's only one block of assertions, and it's at the end of the test (not counting cleanup steps). Tests that need setup code don't need to assert anything about whether that setup code succeeded; that was already done in another test. Likewise, actions that are used in cleanup steps (the steps which follow the assertions) are tested in one place and don't need to be tested again when they're used for cleanup. Each action is tested in one place. The result is that you only need to read one test to find out how a piece of functionality should behave, and you're more likely to need to change only one test if the way that functionality should behave changes.
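To make that layout concrete, here is a minimal RSpec sketch of the structure; FakeDevice and its methods are invented stand-ins for whatever driver actually controls the phone (Appium, adb, etc.):

```ruby
require 'rspec/autorun'

# Stand-in for the real device driver, so the example runs on its own.
class FakeDevice
  def initialize
    @wifi = false
    @apps = []
  end

  def enable_wifi
    @wifi = true
  end

  def disable_wifi
    @wifi = false
  end

  def install_app(id)
    @apps << id if @wifi
  end

  def uninstall_app(id)
    @apps.delete(id)
  end

  def app_installed?(id)
    @apps.include?(id)
  end
end

RSpec.describe 'App installation' do
  let(:device) { FakeDevice.new }

  before { device.enable_wifi }   # setup: wifi itself is asserted in its own tests
  after  { device.disable_wifi }  # cleanup: no assertions after this point

  it 'installs the application over wifi' do
    device.install_app('com.example.app')
    expect(device.app_installed?('com.example.app')).to be true   # single assertion block
  end

  it 'uninstalls the application' do
    device.install_app('com.example.app')
    device.uninstall_app('com.example.app')
    expect(device.app_installed?('com.example.app')).to be false
  end
end
```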
Our Rails development team tries to follow continuous integration. We have decided to adopt a policy of only committing features whose tests pass. Is that a good way to go? Should I delay integrating with other people's features until my tests pass (even if a partial part of the feature works okay)? Thanks in advance.
The tests should pass; if you're running a CI server it'll just spam people with emails until they do. Without a CI server, everyone else will have to figure out whether those tests are "supposed" to fail. Boo.
Another option is to only check in tests for features that have actually been written; if you're using tests as an executable specification, they wouldn't all pass until the entire app was done and nobody would ever be able to check anything in.
You may also be able to mark tests as "pending" or indicate they should be skipped, but remembering to un-pend/un-skip them is often problematic.
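In RSpec (given the Rails context) that option looks roughly like the following; the spec content is purely illustrative:

```ruby
RSpec.describe 'Upcoming discount rules' do
  it 'applies the new discount to the invoice total' do
    pending 'feature not implemented yet'
    expect(100 * 0.9).to eq(85)   # deliberately failing until the feature exists
  end

  # `xit` (or `skip`) keeps the example in the suite but never runs it
  xit 'exports the invoice to PDF' do
    # never executed
  end
end
```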
The tests SHOULD PASS; that's the reason you are writing them in the first place. If for some reason one or more tests do not pass, it indicates that something went wrong (obviously) and you and your team should be working on the solution.
If code gets committed with test failures, spam mails blaming the programmer who did it; that way, next time he will pay more attention before committing code.
I have heard of one way to avoid committing code with test failures, but I have not personally tried it. It involves having two repositories (or it could be a branch); the theory behind it is:
The developers' commits target a branch whose sole purpose is to guarantee that all tests pass; you should configure your CI server to build and run tests from this branch.
When all the tests pass on that branch, a merge is done to the trunk. Since everyone should be working on this branch, the merge should be transparent and automatic.
I repeat, I have not tested this approach, and in my opinion it involves more problems than it solves.
Another alternative could be to add a hook to the commit event in your VCS and force all tests to run, but this could be time-consuming just to perform a single commit.
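Since this is a Rails project, the hook itself could simply be a small Ruby script; a rough sketch (the rake task name is whatever your suite actually uses):

```ruby
#!/usr/bin/env ruby
# .git/hooks/pre-commit -- make it executable; sketch only.
# Runs the test suite before every commit and aborts the commit on failure.
exit 0 if ENV['SKIP_TESTS'] == '1'   # escape hatch, e.g. SKIP_TESTS=1 git commit

puts 'Running test suite before commit...'
unless system('bundle exec rake test')   # or: bundle exec rspec
  warn 'Tests failed; commit aborted.'
  exit 1
end
```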
As additional info you could check this response
https://stackoverflow.com/a/7110774/1268570
Personally, I would wait for the tests to pass before integrating other features.
Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes those tests, it is given to the quality department, where they test the new release and the entire system to check whether something else was broken.
Just to mention that we outsource our development. Unfortunately, the release given to us is rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years...
As a result, if the patch/release fails the tests at the functional-testing level or at the quality level, then with each patch given we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit testing or TDD; integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process).
Implement a continuous testing framework that you and the developers can access, something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly tests on the build. Heck, they don't even need to test it themselves; your tests will be run every night (or regularly), and they'll know straight away what faults they've caused, or fixed, if any.
Define a 'Quality SLA', namely that all unit tests must pass, all new code must have a certain level of coverage, and all new code must have a certain score in some static analysis checker.
Of course anything like this can be gamed, so have regular post-release debriefs where you discuss areas of concern and put contingencies in place to avoid them in future.
Implement a Go (ThoughtWorks) server with its dashboard and use the Go agent GUI at your end.
http://www.thoughtworks-studios.com/forms/form/go/download
Looking for a way to get a visual report about:
overall test success percentage over time (information about whether, and how quickly, tests are getting greener)
visualised individual test results over time (to easily notice a test that has gone red after being green for a long time, or, vice versa, to pay attention to a test that has just gone green)
any other visual statistics that would benefit testers and the project as a whole
Basically, a tool that would generate results from the whole test-results directory, not just from a single (daily) run.
Generally it seems this could be done using XSLT, but XSLT doesn't seem to offer much flexibility for working with multiple files at the same time.
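To make the idea concrete, something like the following per-run aggregation is the kind of thing I could script myself (a rough Ruby sketch assuming one sub-directory of JUnit-style XML files per daily run), but I'd prefer an existing tool:

```ruby
# trend.rb -- sketch; expects a layout like results/<run-name>/*.xml
require 'rexml/document'

Dir.glob('results/*').sort.each do |run_dir|
  total = failed = 0
  Dir.glob(File.join(run_dir, '*.xml')).each do |file|
    doc = REXML::Document.new(File.read(file))
    doc.elements.each('//testsuite') do |suite|
      total  += suite.attributes['tests'].to_i
      failed += suite.attributes['failures'].to_i + suite.attributes['errors'].to_i
    end
  end
  next if total.zero?
  pct = 100.0 * (total - failed) / total
  puts format('%s  %5.1f%% passing (%d/%d)', File.basename(run_dir), pct, total - failed, total)
end
```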
Does such a tool exist already?
I feel fairly confident in claiming that most continuous integration engines, such as Hudson (for Java), provide such a capability either natively or through plugins. In Hudson's case there are a few code coverage plugins available already, and I think it draws basic graphs from unit test results automatically by itself.
Oh, and remember to configure the CI properly; for example, our Hudson polls CVS every 10 minutes and, if it sees any changes, does all the associated tricks (gets the updated .java files, compiles, runs tests, verifies dependencies, etc.) to see whether the build is still OK.
Hudson will do this, and it will work with NUnit (here), JUnit (natively), and MSTest.exe tests using the steps I outline here. It does everything you require and more. Even if you want it to ONLY run tests and give you feedback on those, it can.
There's a new report supporting NUnit/JUnit called Allure. To retrieve information from NUnit you need to use the NUnit adapter; for JUnit, read the following wiki page. You can use it with Jenkins via the respective plugin.