We use Test Cases in Microsoft Test Manager 2010 to handle much of our acceptance testing; however, we want to change the description field for Test Cases to HTML (something we have already done for other work item types) and possibly make further changes...
Is it possible to change the Test Case work item template without breaking the Test Manager?
If changes can be made, are there any limitations?
Yes, with some restrictions. Test Steps cannot be changed at all.
This post gives a good example of customizing.
Note that one of the new features in MTM 11 is Rich Text support for Test Cases.
So just hold on :)
We are switching from a classic 'Waterfall' model to a more Agile-oriented philosophy. We decided to give BDD a try (Cucumber), but we have some issues with migrating some of our 'old' methodologies. The biggest question mark is how manual tests integrate into the cycle.
Let's say the Project Manager defined the Feature and some basic Scenario Outlines. With the test team, we defined around 40 Scenarios for this feature. Some cannot be tested automatically, which means they will have to be tested manually. Executing manual tests when all you have is the feature file feels wrong. We want to be able to see the past failure rate of a test, for example. Most test-case managers support such features, but they can't work with Feature files. Maintaining the manual test cases in an external test-case manager will cause never-ending synchronization issues between the Feature file and the test-case manager.
I'm interested to hear whether anyone has managed to cover this middle ground, and how.
This is not a very unusual case. Even in Agile it may not be possible to automate every scenario. The scrum teams I am working with usually tag such scenarios with @manual in the feature file. We have configured our automation suite (Cucumber - Ruby) to ignore these tags when running the nightly jobs. One problem with this, as you have mentioned, is that we don't know the outcome of the manual tests, because the testers document the results locally.
My suggestion for this was to document the results of each iteration in a YAML file (or any other format that suits the purpose). This file should be part of the automation suite and checked into the repository. So to start with, you have the results documented alongside the automation suite. Later, when you have the resources and time, you can add functionality to your automation suite to read this file and generate a report, either together with the other automation results or separately. Until then, version control will help you track all previous results.
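As a rough illustration, such a checked-in results file might look something like this (the file name and fields are hypothetical):

# manual_results/iteration_42.yml (hypothetical path and schema)
iteration: 42
executed_on: 2016-03-01
results:
  - scenario: "Exported PDF contains the customer address"
    tag: "@manual"
    status: passed
    tester: jdoe
  - scenario: "Report renders correctly in IE11"
    tag: "@manual"
    status: failed
    notes: "Logo overlaps the navigation bar"

Each iteration gets its own file, so the history of manual results stays reviewable in version control.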
Hope this helps.
To add to @Eswar's answer, if you're using Cucumber (or one of its siblings), one option would be to execute the test runner manually and include prompts for the tester to check certain aspects. They then pass/fail the test according to their judgement.
This is often useful for aesthetic aspects e.g. cross-browser rendering, element alignment, correct images used, etc.
As @Eswar mentioned, you can exclude these tests from your automated runs by tagging them.
See this article for an example.
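For illustration, a manual check might be tagged in the feature file like this (the tag name is only a convention):

@manual
Scenario: Report renders correctly in print preview
  Given the monthly report is open
  When the tester opens the browser print preview
  Then the tester confirms the layout matches the approved design

The automated run can then skip it with a tag filter, e.g. cucumber --tags "not @manual" (or --tags ~@manual on older Cucumber versions).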
Test cases that cannot be automated are a poor fit for a cucumber test. We have a bunch of these edge cases. It is nigh impossible to get Selenium to verify PDF documents well. Same thing for CSV downloads (not impossible, but not worth the effort). Look and feel tests simply require human eyes at this point. Accessibility testing with screen readers is best done manually as well.
For that, be sure to record the acceptance criteria in the user story in whichever tool you use to track work items. Write a manual test case. The likes of Azure DevOps, Jira, IBM Rational Team Concert and their ilk have ways to record manual test plans, link them to stories, and record the results of executing a manual test.
I would remove the manual test cases from the cucumber tests, and rely on the acceptance criteria for the story, and link the story to some sort of manual test case, be it in a tool or a spreadsheet.
Sometimes you just need to compromise.
We use Azure DevOps with Test Plans + some custom code to synchronize cucumber tests to ADO. I can describe how we’ve implemented it in our projects:
We start with the cucumber file first. Each User Story has its own Feature file. The scenarios in the Feature are the acceptance criteria for the story. We end up with lots of Feature files, so we use naming conventions and folders to organize them.
We annotate the top of the Feature file with a tag linking it to the User Story, e.g. @Story-1234
We've written a command-line utility that reads the cucumber files with these tags. It then fetches all the Test Suites in the Test Plan that are linked to Stories. In ADO, a story can only be linked to a single Test Suite. If a Test Suite doesn't exist for that Story, our tool creates one.
For each Scenario, the tool creates an ADO Test Case and then annotates the Scenario with the Test Case ID. This creates amazing traceability for each User Story, as the related Test Cases are automatically linked to the Story in the Azure DevOps UI.
Although we don’t do this, we could populate the TestCase with the step definitions from our cucumber Scenario. It’s a basic XML structure that describes the steps to take. This would be useful if we wanted to manually execute the test case using the Azure DevOps Test Case UI. Since we focus primarily on automation, we rely on the steps in our Feature files and our ADO Test Cases end up being symbolic links back to cucumber Scenarios.
Because our cucumber tests are written in C# (SpecFlow), we can get the full class name and method for the cucumber test code. Our tool is able to update the Azure DevOps Test Case with the automation details.
For any test case that isn't ready for automation or must be done manually, we annotate the Scenario with an @ignore or @manual tag.
Using Azure DevOps Pipelines, we use the Visual Studio Test task to run our tests. The important point here is that we use the Test Plan option. This option fetches the Test Cases in the Test Plan that have automation and then executes the specific cucumber tests. The out-of-the-box functionality updates the Test Case statuses with the test results.
After running through automation, we use the Test Plan Report in Azure DevOps which shows the Test Case execution status over time and can distinguish between test automated and manual test cases.
We execute any remaining manual test cases to complete the Test Plan.
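A minimal sketch of what one of these annotated Feature files might look like (the story and Test Case IDs, and the exact tag format, are illustrative):

@Story-1234
Feature: Customer self-service password reset

  @TestCase-5678
  Scenario: Reset link is emailed to a registered user
    Given a registered user on the login page
    When they request a password reset
    Then a reset link is emailed to their registered address

  @manual @TestCase-5679
  Scenario: Reset email renders correctly in Outlook
    Given a reset email has been sent
    Then a tester confirms the layout in the Outlook desktop client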
For us, we often found that the manual cases that cannot be automated are exception cases, or cases that depend on the external environment (for example malformed data, an unavailable network connection, maintenance windows, a first-time guide...). These cases require special setup to simulate the environment in which they happen.
Ideally, I believe it is possible to cover everything, given that you are prepared to go as far as necessary to make it happen. But in reality, the effort required is usually too great, so we prefer the hybrid approach of mixed manual and automated test cases. We do, however, try to convert those exception cases to automated ones over time, by setting up separate environments to simulate the exceptions and writing automated tests against them.
Nevertheless, even with that effort, there will be cases that are impossible to simulate, and I believe those should be covered by technical tests from the engineers.
You could use an approach similar to the following example:
http://concordion.org/Example.html
When you use a build or continuous integration system to track your test runs, you could add simple specifications/tests for your manual cases that contain a text comparison (e.g. "pass" or "fail"). You would then update the spec after each manual test run, check it in, and start the tests in your build / continuous integration system. The manual results would then be recorded together with the results of the automated test execution.
If you use a tool like Concordion+ (https://code.google.com/p/concordion-plus/) you could even write a summary specification containing a scenario for each of your manual tests. Each one would be reported as an individual test result in your test execution environment.
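A minimal sketch of the idea with a Concordion fixture (class, file, and method names are hypothetical; the returned value is what you would update and check in after each manual run):

import org.concordion.integration.junit4.ConcordionRunner;
import org.junit.runner.RunWith;

// Backs a spec file such as ManualChecks.html containing, for example:
//   <span concordion:assertEquals="crossBrowserRendering()">pass</span>
@RunWith(ConcordionRunner.class)
public class ManualChecksTest {

    // Edit this result (or read it from a checked-in data file) after each
    // manual test run; the CI build then reports it next to the automated results.
    public String crossBrowserRendering() {
        return "pass";
    }
}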
Cheers
Taking screenshots seems to be a good idea; you can still automate the verification but will need to go the extra mile. For instance, when using Selenium you can add Sikuli (note: you can't run headless tests) to compare results (images), or take a screenshot with Robot (java.awt), use OCR to read the text, and assert or verify it (TestNG).
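A rough sketch of the screenshot part with Selenium WebDriver in Java (the URL is a placeholder, and the image-comparison/OCR step is left as a comment because the exact API depends on the library you pick):

import java.io.File;
import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class ScreenshotCheck {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        driver.get("https://example.com/report");

        // Capture the rendered page as an image for later comparison.
        File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(shot, new File("report.png"));

        // From here, hand report.png to an image-comparison tool (e.g. Sikuli)
        // or an OCR library, then assert/verify the extracted text with TestNG.

        driver.quit();
    }
}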
Can anyone please tell me whether there are any tools or Eclipse-based plugins available for generating relevant test cases for Apex classes on the Salesforce platform? It seems that with code coverage they are not expecting outcomes the way we expect with JUnit; they want to check whether the test cases exercise the flows of the source classes (i.e. which code paths are executed).
Please don't get this post wrong, I'm not asking anyone to write test cases for my code :). I posted this question because of the way Salesforce expects code coverage to work. Thanks.
Although Salesforce requires a certain percentage of code coverage for your test cases, you really need to be writing cases that check the results to ensure that the code behaves as designed.
So, even if there was a tool that could generate code to get 100% coverage of your test class, it wouldn't be able to test the results of those method calls, leaving you with a false sense of having "tested code".
I've found that breaking up long methods into separate, sometimes static, methods makes it easier to do unit testing. You can test each individual method, and not worry so much about tweaking parameters to a single method so that it covers all execution paths.
It's now possible to generate test classes automatically for your class/trigger/batch. You can install the "Test Class Generator" app from the AppExchange and see it working.
This will really help you generate test classes and save a lot of development time.
Here's the situation that I'm working with:
Build tests in Selenium
Get all the tests running correctly (in Firefox)
Export all the tests to MSTest (so that each test can be run in IE, Chrome and FF)
If any test needs to be modified, do that editing in Selenium IDE
So it's a very one-way workflow. However, I'd now like to do a bit more automation. For instance, I'd like every test to run under each of two accounts. I'm running into a maintenance issue: if I have 6 tests that I want to run under two accounts, suddenly I'd need 12 Selenium IDE tests. That's too much editing, and a ton of that code is exactly the same.
How can I share chunks of Selenium tests among tests? Should I use Selenium IDE to develop each test the first time and then never use it again (doing all subsequent edits in VS)?
Selenium code is very linear after you export it from the IDE.
For example (ignore syntax):
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    selenium.type("usernameField", "foo");
    selenium.type("passwordField", "bar");
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * foo"));
}
This is the login page. Every single one of your tests will have to use it. You should refactor it into a method.
someTestMethod() {
    selenium.open("http://someLoginPage.com");
    String username = "foo";
    String password = "bar";
    performLogin(username, password);
}
performLogin(String username, String password) {
    selenium.type("usernameField", username);
    selenium.type("passwordField", password);
    selenium.click("loginButton");
    selenium.waitForPageToLoad("30000");
    assertTrue(selenium.isTextPresent("Welcome * " + username));
}
The performLogin() method does not have to be in the same file as your test code itself. You can create a separate class for it with your methods and share it between your tests.
We have classes that correspond to certain functionalities in our UI. For example, we have many ways to search in our app. All methods that help with the search functionality live in the SearchUtil class.
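A rough sketch of such a utility class, in the same loose pseudocode style as above (names and locators are made up):

public class SearchUtil {
    private final Selenium selenium;

    public SearchUtil(Selenium selenium) {
        this.selenium = selenium;
    }

    // Runs a simple keyword search and waits for the results page.
    public void searchByKeyword(String keyword) {
        selenium.type("searchField", keyword);
        selenium.click("searchButton");
        selenium.waitForPageToLoad("30000");
    }

    // Returns true if the results page reports at least one hit.
    public boolean hasResults() {
        return selenium.isTextPresent("results found");
    }
}

Tests then call searchUtil.searchByKeyword("foo") instead of repeating the same three Selenium calls.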
Structuring your tests similarly will give you the following advantages:
If the UI changes (an id of a field), you go to your one method, update the id and you are good to go
If the flow of your logic changes you also have only one place to update
To test whether your changes worked, you only have to run one of the tests to verify. All other tests use the same code so it should work.
A lot more expressive as you look at the code. With well named methods, you create a higher level of abstraction that is easier to read and understand.
Flexible and extensible! The possibilities are limitless. At this point you can use conditions, loops, exceptions, you can do your own reporting, etc...
This website is an excellent resource on what you are trying to accomplish.
Good Luck!
There are two aspects to consider regarding code reuse:
Eliminating code duplication in your own code base -- c_maker touched on this.
Eliminating code duplication from code generated by Selenium IDE.
I should point out that my comments lean heavily toward the one-way workflow that you are using, jcollum, but even more so: I use the IDE to generate code just once for a given test case. I never go back to the IDE to modify the test case and re-export it. (I do keep the IDE test case around as a diagnostic tool when I want to experiment with things while I am fine-tuning and customizing my test case in code (in my case, C#).)
The reasons I favor using IDE tests only as a starting point are:
IDE tests will always have a lot of code duplication from one test to another; sometimes even within one test. That is just the nature of the beast.
In code I can make the test case more "user-friendly", i.e. I can encapsulate arcane locators within a meaningfully named property or method so it is much clearer what the test case is doing.
Working in code rather than the IDE just provides much greater flexibility.
So back to IDE-generated code: it always has massive amounts of duplication. Example:
verifyText "//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span" Home
generates this block of code:
try
{
    Assert.AreEqual("Home",
        selenium.GetText("//form[@id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
}
catch (AssertionException e)
{
    verificationErrors.Append(e.Message);
}
Each subsequent verifyText command generates an identical block of code, differing only by the two parameters.
My solution to this pungent code smell was to develop Selenium Sushi, a Visual Studio C# project template and library that lets you eliminate most if not all of this duplication. With the library I can simply write this one line of code to match the original line of code from the IDE test case:
Verify.AreEqual("Home",
selenium.GetText("//form[#id='aspnetForm']/div[2]/div/div[2]/div[1]/span"));
I have an extensive article covering this (Web Testing with Selenium Sushi: A Practical Guide and Toolset) that was just published on Simple-Talk.com in February, 2011.
You can also put some fragments or one-liners, e.g.
note( "now on page: " . $sel->get_location() . ", " . $sel->get_title() ;
into the "code snippets" collection of your IDE ( I use Eclipse).
That's not true reuse, but hey, it works for me for throwaway test scripts or quick enhancements of existing test scripts.
Where I am working we have the following issue:
Our current test procedure is that our business analysts test the release based on their specifications/tests. If it passes these tests, it is given to the quality department, where they test the new release and the entire system to check whether anything else was broken.
I should mention that we outsource our development. Unfortunately, the releases given to us are rarely tested by the developers, and that's "the relationship" we have had with them for the last 7 years...
As a result, if the patch/release fails the tests at the functional-testing level or at the quality level, then with each patch delivered we need to test the whole thing again, not just the release.
Is there a way we can prevent this from happening?
You have two options:
Separate the code into independent modules so that a patch/change in one module only means you have to re-test that one module. However, due to dependencies this is effective only to a very limited degree.
Introduce automated tests so that re-testing is not as expensive. It takes some more work at first, but will definitely pay off in your scenario. You don't have to do unit testing or TDD - integration tests based on capture-replay tools are often easier to introduce in your scenario (an established project with a manual testing process).
Implement a continuous testing framework that you and the developers can access. Something like CruiseControl.NET and NUnit to automate the functional tests.
Given access, they'll be able to see the nightly tests on the build. Heck, they don't even need to test it themselves: your tests will run every night (or regularly), and they'll know straight away what faults they've caused, or fixed, if any.
Define a 'Quality SLA' - namely that all unit tests must pass, all new code must have a certain level of coverage, all new code must have a certain score in some static analysis checker.
Of course anything like this can be gamed, so have regular post release debriefs where you discuss areas of concern and put in place contingency to avoid it in future.
Implement a Go (ThoughtWorks) server with its dashboard and use the Go agent at your end.
http://www.thoughtworks-studios.com/forms/form/go/download
While preparing a test strategy, I have run into this simple (or not so simple, since I could not find the right name for it) problem: is there a name for this kind of testing?
I defined the testing needed as manual testing done according to a gray-box strategy... but as it is testing against test cases, what is it called? The opposite would be exploratory testing. Thanks :)
If you mean testing where a test case is already specified and someone is just executing the predefined scripts, I've usually seen this referred to as scripted testing.
If you are referring to the testing of test cases and other testing-related products (such as test data sets and test scripts), that is often called meta-testing.