Test planning, results, search, compare and reports - testing

I'm looking for a tool to do:
Test planning
Inserting test results
Searching previous tests and results
Comparing multiple test results
Making reports out of existing test data
The tests could be almost anything, for example testing the performance of specific software on specific hardware. The point is that it would be possible to search earlier test procedures and results in order to reproduce the test conditions. For example, new results could be recorded using the same procedure but with different hardware.
This tool would be used to record test plans and results. The tool would NOT be used for executing the tests. It would act more as a database for developers to insert test plans and results, search existing tests and compare results.
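Purely for illustration, a minimal sketch of the kind of records I'd want the tool to keep (the table and column names here are made up) might look like this:

```python
import sqlite3

# Hypothetical schema: a test plan describes the procedure, a test run records
# one execution of that plan (e.g. on different hardware) with its results.
conn = sqlite3.connect("test_archive.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS test_plan (
    id        INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    purpose   TEXT,
    procedure TEXT           -- the steps needed to reproduce the test conditions
);
CREATE TABLE IF NOT EXISTS test_run (
    id        INTEGER PRIMARY KEY,
    plan_id   INTEGER REFERENCES test_plan(id),
    hardware  TEXT,          -- e.g. the machine the software was tested on
    result    TEXT,          -- free-form or structured results
    run_date  TEXT DEFAULT CURRENT_TIMESTAMP
);
""")

# Search earlier procedures so a new run can reuse them on different hardware.
for plan_id, title in conn.execute(
        "SELECT id, title FROM test_plan WHERE title LIKE ?", ("%performance%",)):
    print(plan_id, title)

# Compare the results of several runs of the same plan side by side.
for hardware, result in conn.execute(
        "SELECT hardware, result FROM test_run WHERE plan_id = ? ORDER BY run_date", (1,)):
    print(hardware, result)
```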

How about retrofitting an existing blog, wiki or CMS engine for doing this?
Say for example, in a wiki each wiki article could represent a test. You could have page templates set up, with required sections like "purpose", "scenario", "results".
Pick a system you're already familiar with so you'll have something running quickly; use it for a while and see what customizations it needs. Once the list of hard-to-implement things grows, you can look for a custom tool, and by then you'll have a solid list of requirements.

You are probably looking for test (case) management software. A good test management tool lets you plan tests (and cases), record results, and print reports / provide relevant metrics. Assigning tests to team members and email notifications, similar to defect/bug tracking tools, should also be included in a good tool. There are quite a few tools for this out there. A modern web-based test management app is our tool TestRail; feel free to give it a try.


Documenting functional tests

I'm about to start writing e2e tests for a web app I've been working on for the last few months, and I'm currently investigating how best to document these tests. In my company, the way it's been done before (on older, non-web programs) is to have a big Word document that describes the action of each test and the expected result. Tests are then run with third-party software, and if any test fails, we can use the documentation to troubleshoot.
This works fine, but I'm wondering if there is a more efficient, "web-based" way of documenting the e2e tests. We have no prior experience with web-based apps, and my research led me to observablehq's JavaScript-based notebooks. I thought it might be possible to integrate the actual tests into them, along with the test specifications, and then run the code blocks from there. But I'm not sure this approach is worth the extra effort compared to the way we currently do things.
I guess what I'm asking is: how are other developers documenting e2e tests for web-based apps, and what lessons have they learned?
If you can, use an automation framework that makes you build the tests from a specification. This is typically a markdown file which describes the business case being tested. Each of the steps is executed by the framework, which means you can re-use the steps as you build out the specifications. An example of this is Gauge. You can read their documentation on building specifications to get a better idea of what I mean.
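As a rough sketch (the spec wording, file names and step functions below are invented for the example), a Gauge specification is plain markdown and each step maps to a small function:

```python
# registration.spec (Gauge specifications are plain markdown):
#
#   # User registration
#
#   ## A visitor can register with a valid password
#   * Open the registration page
#   * Enter the password "password123#"
#   * Submit the form
#   * A thank-you note is shown
#
# step_impl/registration_steps.py -- each markdown step maps to a function:

from getgauge.python import step

@step("Open the registration page")
def open_registration_page():
    ...  # drive the browser / API here

@step("Enter the password <password>")
def enter_password(password):
    ...  # steps with <parameters> can be reused across specifications

@step("Submit the form")
def submit_form():
    ...

@step("A thank-you note is shown")
def assert_thank_you_note():
    ...
```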
There are a few advantages to following this approach:
The specifications are stored alongside the code. This means the test cases follow the code as it evolves. In the 'old days', when this was stored in separate documents, it was a challenge to keep them in sync with versions of the code.
The tests are self-documenting: the specification both drives the test and documents it.
The test reports are produced in HTML and therefore are easier to understand.
Good documentation is key; with end-to-end testing it can be a little more challenging. Use cases and their data organization are the first things to address. You want your test case inputs and output verification organized in a cohesive way, including the specification and use case description.
Some projects with e2e test case documentation examples:
Cloud storage mirror
Cross vendor database synchronizer
Finally, you might be interested in test data organization.

Need help regarding Test Management tool Testlink

Our company is small, and only 1 or 2 testers are assigned to a project. All our test-related material is maintained in Excel sheets, and for bug tracking we use Mantis. We create test cases in an Excel sheet and execute them from the same sheet.
Would TestLink or any other test management tool be helpful to us or not? Since the number of testers is small, no merging of test cases is done; a single QA person develops the test cases and executes them. Please tell me whether it would be of any help to us or not.
If so, please suggest only free applications.
I am working for a startup and we use TestLink. Our QA team has always been pretty small (between 1 and 3 people). It's very helpful for organizing and keeping the test cases for your whole system. It becomes more useful when you go for a release: you can assign your testers to a test build so that they can go through the test cases one after another and mark which tests pass or fail. Finally, you can generate a report based on those results for your build.
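If you later want to push results into TestLink from a script instead of clicking through the UI, it also has an XML-RPC API. A rough sketch, assuming the API is enabled on your instance (the URL, key and IDs are placeholders, and the endpoint path can differ between TestLink versions):

```python
import xmlrpc.client

# Placeholder values -- substitute your own server, API key and IDs.
TESTLINK_URL = "http://testlink.example.com/lib/api/xmlrpc/v1/xmlrpc.php"
DEV_KEY = "your-personal-api-key"

server = xmlrpc.client.ServerProxy(TESTLINK_URL)

# Mark one test case as passed ('p') for a given plan and build.
result = server.tl.reportTCResult({
    "devKey": DEV_KEY,
    "testcaseexternalid": "PROJ-1",   # the ID shown next to the case in TestLink
    "testplanid": 42,
    "buildname": "release-2.3",
    "status": "p",                    # 'p' = pass, 'f' = fail, 'b' = blocked
    "notes": "Executed manually; see run notes for details.",
})
print(result)
```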
Hope that helps.
Regardless of whether there is one tester or many, it is still good practice to use a test management tool, and a lightweight solution will make you more productive.
There are many benefits over a static Excel file; we recently put together a short blog post that goes into a little detail on the benefits of organizing your testing process with a test management tool, which may be of interest.
If you are using Mantis to track your issues, you will often find that test management tools integrate with it, so that when a test fails a ticket is created automatically; this is a huge time saver.
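As a rough illustration of that kind of integration, assuming a MantisBT 2.x instance with the REST API enabled (the server URL, token, project and category below are placeholders, and the required fields depend on your configuration), a failing test could file an issue automatically:

```python
import requests

MANTIS_URL = "https://mantis.example.com/api/rest/issues"  # placeholder server
API_TOKEN = "your-api-token"                               # created in Mantis under API Tokens

def report_failure(test_name: str, details: str) -> None:
    """File a Mantis issue for a failed test (sketch only)."""
    payload = {
        "summary": f"Test failed: {test_name}",
        "description": details,
        "project": {"name": "MyProject"},   # placeholder project name
        "category": {"name": "General"},    # category names vary per installation
    }
    response = requests.post(
        MANTIS_URL,
        json=payload,
        headers={"Authorization": API_TOKEN},
        timeout=10,
    )
    response.raise_for_status()

# Example: called from a test runner hook when a test case fails.
# report_failure("registration form - password rules",
#                "Expected rejection of 'bob', got success.")
```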

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment; however, there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking about making this list the first column of a spreadsheet which also records when the feature was last tested, whether it worked, and if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information on what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. It would also give me something to start automating the testing process with (currently we don't have much test automation apart from unit tests).
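For instance, the password rules from Feature 1.2 look like the kind of check that could be automated first; a sketch with pytest (validate_password here is just a made-up stand-in for whatever the application really uses):

```python
import pytest

def validate_password(password: str) -> bool:
    """Hypothetical stand-in for the application's real password rules:
    at least 8 characters, with letters, digits and a special character."""
    return (
        len(password) >= 8
        and any(c.isalpha() for c in password)
        and any(c.isdigit() for c in password)
        and any(not c.isalnum() for c in password)
    )

# The same inputs the manual test case lists: only the last one should pass.
@pytest.mark.parametrize("password,accepted", [
    ("a", False),
    ("bob", False),
    ("password", False),
    ("password123", False),
    ("password123#", True),
])
def test_password_rules(password, accepted):
    assert validate_password(password) == accepted
```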
I'm looking for some examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to get the client to agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features still work, and to report it to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA); this would basically be the testing documentation. This is the lifecycle I had in mind:
PM makes a list of features. The customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for more 'standard' websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, resulting expectations of the test, and edge/boundary cases, you should have a pretty solid requirements spec and testing spec in one.
For the steps, don't forget to include an evaluate step, where you, the testers, developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec, both from a requirements perspective and a testing one).
I also highly recommend using mind mapping / a work-breakdown structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion web site has a very good page on techniques for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation: Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even tests that must be checked by a human, for example cross-browser testing to make sure things look right, can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA, rather than undertaking the time-consuming, and often flawed, task of writing down instructions. Save the recordings as part of the project. Give them good descriptions of the intent of the test. Link them to a ticket. Should the GUI change so that the test doesn't work any more, and it will happen, you can rewrite the test to cover its intention.
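As a small sketch of what such an automated registration check can look like with Selenium (the URL, element IDs and expected text are invented; real locators would come from your own markup):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import Select

driver = webdriver.Firefox()          # or Chrome(), Edge(), ...
try:
    driver.get("https://example.com/register")               # placeholder URL

    # Feature 1.1: the country dropdown is populated.
    country = Select(driver.find_element(By.ID, "country"))  # placeholder id
    assert len(country.options) > 1, "country dropdown is empty"

    # Feature 1.2: a valid password is accepted and the form submits.
    driver.find_element(By.ID, "password").send_keys("password123#")
    driver.find_element(By.ID, "submit").click()

    # Feature 2: the thank-you note appears.
    note = driver.find_element(By.ID, "thank-you").text
    assert "thank you" in note.lower()
finally:
    driver.quit()
```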
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing of the functionality. Have a function which generates the country list and test it thoroughly. Then write a function which uses that to generate HTML or AJAX or whatever; you only have to check that it looks about right, because the function doing the actual work is well tested. User login, password checks, emails: these can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
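A tiny sketch of that separation, with invented function names and data: the logic that builds the country list is tested directly, and the rendering layer that wraps it in HTML stays trivial:

```python
def country_list(locale: str) -> list:
    """Pure logic: return localized country names, sorted for the given locale.
    (Hypothetical; a real app would pull these from a locale database.)"""
    names = {"en": ["Germany", "France", "Spain"],
             "fr": ["Allemagne", "France", "Espagne"]}
    return sorted(names[locale])

def render_country_dropdown(locale: str) -> str:
    """Thin presentation layer: just wraps the tested logic in HTML."""
    options = "".join(f"<option>{name}</option>" for name in country_list(locale))
    return f"<select name='country'>{options}</select>"

# The interesting behaviour is tested without any GUI at all.
def test_country_list_is_sorted_and_localized():
    assert country_list("en") == ["France", "Germany", "Spain"]
    assert country_list("fr") == ["Allemagne", "Espagne", "France"]
```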

How do you organise/layout your test scripts

I'm interested in how others organise their test scripts, or in good test script organisation you've seen anywhere you've worked. Also, what level of detail is in those test scripts? This specifically relates to test scripts created for manual testing, as opposed to those created for automated test purposes.
The problem as I see it is this: there is a lot of complexity in test scripts, but without the benefit of the principles used to organise a complex or large code base. You need to be able to specify what a piece of code should do without boring someone to death as they read it.
Also, how do you lay out test scripts? I'm not keen to create fully specified scripts suitable to be run by data-entry types, as that isn't the team we have and the overhead of maintaining them seems too high. It also feels to me that specifying the process in such detail removes responsibility for the quality of the product from the person actually doing the testing. Do people specify every button click and value to be entered? If not, what level of detail is specified?
Tests executed by humans should be at a very high level of abstraction.
E.g. a test case for stackoverflow registration:
Good:
A site visitor with an existing OpenId account registers as a stackoverflow user and posts an answer.
Bad:
1) Navigate to http://stackoverflow.com
2) Click on the login link
3) Etc...
This is important for several reasons:
a) it keeps the tests maintainable. So you don't have to update your test script every time navigation elements are relabeled (e.g. 'login' changes to 'sign in').
b) it saves your testers from going insane from the tedium of minute details.
c) writing detailed manual test scripts is a poor use of your finite test resources.
Detailed manual test scripts will divert your testers into writing bugs for minor documentation issues. You want to use your time to find the real bugs that will impact customers.
Tests can be grouped by priority. The BVT/smoke tests could have the highest priority, with functional, integration, regression, localization, stress, and performance tests having lower priorities. Depending on your test pass, you would select a priority and run all tests with that or higher priorities. All you need to do is determine which priority a particular test has.
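If the automated part of the suite happens to use pytest, markers are one lightweight way to express those priorities (the marker names below are just examples):

```python
# pytest.ini would register the markers, e.g.:
#   [pytest]
#   markers =
#       smoke: highest-priority build verification tests
#       regression: lower-priority regression tests

import pytest

@pytest.mark.smoke
def test_application_starts():
    assert True   # placeholder for a real BVT check

@pytest.mark.regression
def test_old_bug_1234_stays_fixed():
    assert True   # placeholder for a regression check

# Select by priority on the command line:
#   pytest -m smoke                  # only the highest-priority tests
#   pytest -m "smoke or regression"  # this priority plus the next one down
```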
I try to make manual tests fit into an automated structure; you can have both.
The organization schemes used by automated tests (e.g., the xUnit frameworks) work for me. In fact, they can be used to semi-automate the tests, by stopping and calling for a manual test to be run, input to be entered, or a GUI to be inspected. The scheme is usually to mirror the directory structure of the production code, or to include the tests inside the production code, sometimes as inner classes. Tests above the unit level can often fit into the higher-level directories (assuming you have a deep enough directory tree). These higher-level tests can go in (mirrored) directories that have no production code but are there for organizational purposes.
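A sketch of that semi-automated style (run with pytest -s so the prompts reach the tester; the manual checks here are invented examples):

```python
# Semi-automated checks: the framework does the setup and bookkeeping,
# a human confirms what is hard to assert automatically.

def ask_tester(question: str) -> bool:
    """Pause the run and ask the tester to confirm a manual observation."""
    answer = input(f"{question} [y/n]: ")
    return answer.strip().lower().startswith("y")

def test_invoice_pdf_layout():
    # ... automated part: generate the document, open it, etc. ...
    assert ask_tester("Does the printed invoice layout look correct?")

def test_error_dialog_wording():
    # ... automated part: trigger the error condition ...
    assert ask_tester("Is the error dialog wording clear and localized?")
```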
The level of detail? Well, that depends, right?
Matt Andresen has provided a good answer for the general case, but there are situations when you can't do it that way. For example, when you are working on validated applications that must comply with regulations from other parties such as the FDA, and everything goes through very intensive audit, review and sign-off, then the second, detailed style from that example is required. Although in this case I would opt for moving to automation with HP QuickTest Pro or IBM Rational Robot.
Maybe you should try a test repository? Again, there are tools from HP (Quality Center) and IBM, but these can be expensive. You can find cheaper ones that will let you organize tests into tree structures by requirement/feature, assign them priorities, group them into test suites for releases, group them into regression testing suites, etc.

How do you organize your release tests?

In the company where I work we have major releases twice a year. Extensive testing (automated and manual) is done in the weeks before.
The automated tests produce log files, and the results of the manual tests are written down in test plans (Word documents). As you can imagine, this results in a lot of different files to be managed and interpreted by the test engineers.
How do you organize your release tests?
E.g. do you use a bug tracker? Do you use any other tools? How do you specify what has to be tested? Who does the testing? What is the ratio of developers to testers?
You could use a combination of a bug tracker (JIRA, Mantis, Bugzilla) and a test case management tool like TestLink.
It's almost impossible to properly organise the testing without keeping good track of your tests and their results.
We use PMC suite(1) and it has a very useful organisation structure for the tests:
Test Scenarios (batteries of tests)
Test Cases (linked to the Requirements)
Test runs with their respective results
These are linked to the Bugs which are in their turn linked to the Tasks etc.
When a manual test is run, the tester executes a scenario and goes through the test cases, with the results being tracked. All issues found are documented as bugs.
1. It's developed by my company, but please don't consider this to be an ad :)
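As a generic sketch of that kind of structure (not the PMC suite's actual schema; all names below are invented), the relationships boil down to a few linked record types:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Bug:
    summary: str
    linked_task: str = ""                 # bugs link onward to tasks, etc.

@dataclass
class TestCase:
    title: str
    requirement_id: str                   # test cases are linked to requirements

@dataclass
class TestRun:
    case: TestCase
    result: str                           # e.g. "pass" / "fail"
    bugs: List[Bug] = field(default_factory=list)

@dataclass
class TestScenario:                       # a battery of tests
    name: str
    cases: List[TestCase] = field(default_factory=list)
    runs: List[TestRun] = field(default_factory=list)

# A manual run: the tester walks the scenario's cases and records results.
scenario = TestScenario("Release 2.0 smoke battery")
case = TestCase("User can log in", requirement_id="REQ-17")
scenario.cases.append(case)
scenario.runs.append(TestRun(case, "fail", bugs=[Bug("Login button disabled in Firefox")]))
```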
If you develop with MS products and technologies, you could always look at Team Foundation Server. I find it fits perfectly for managing automated unit testing/builds, managing bugs, managing test results, assigning testing tasks, etc. That is what we use. It's not cheap though, but worth the investment if it's in the budget.