I'm building a command-line tool where I can execute commands like this at the prompt:
PROMPT>userName=Seán<CR>
PROMPT>zodiacSign=Virgo<CR>
where userName is a string and zodiacSign is an enumerated type.
I also have auto-completion, so I can hit the Tab key and get suggestions, like this:
PROMPT>zodiacSign=C<TAB>
Cancer
Capricorn
PROMPT>zodiacSign=Ca
The thing is, I'm getting more and more subtle requirements which I'm finding increasingly difficult to document as user stories. For example, I just received a requirement that when I hit carriage return on the following:
PROMPT>zodiacSign=Can<CARRIAGE-RETURN>
the software should auto-complete the command to zodiacSign=Cancer and execute it, since that is the only remaining option.
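In code terms, the rule being asked for is roughly this (a Python sketch; any sign other than those mentioned above is invented):

    SIGNS = ["Aries", "Cancer", "Capricorn", "Virgo"]  # illustrative subset

    def complete(prefix):
        # All enum values that start with the typed prefix.
        return [s for s in SIGNS if s.startswith(prefix)]

    def on_carriage_return(prefix):
        # The new requirement: a unique match is completed and executed.
        matches = complete(prefix)
        return matches[0] if len(matches) == 1 else None

    assert complete("Ca") == ["Cancer", "Capricorn"]  # TAB shows both clues
    assert on_carriage_return("Can") == "Cancer"      # <CR> completes and runs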
I will put functional tests in place to cover each of these nuances. By doing this, I can demo user stories via my functional tests.
But what tool would you recommend for storing requirements / user stories, ideally linking them to functional tests? Perhaps such a tool even includes coverage graphs.
Who is the audience for the requirements? If it is a developer, I'd say that the version control system is a great place to store them. :-)
I would recommend the use of Cucumber or FitNesse. Using the tests as requirements is the way to go.
Cucumber example:
    Scenario: If a single match is available and carriage return is pressed, auto-complete should accept the match
      Given valid Zodiac Signs are "Cancer,Capricorn"
      When the user enters "zodiacSign=Can<CARRIAGE-RETURN>" at the prompt
      Then the shell should auto-complete to "zodiacSign=Cancer"
This is a completely executable test and describes the required functionality well.
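If you wire this up with behave (a Python analogue of Cucumber; shown here only as a sketch), the matching step definitions might look like the following. The prompt-driving helper run_prompt is hypothetical:

    from behave import given, when, then

    @given('valid Zodiac Signs are "{signs}"')
    def step_set_signs(context, signs):
        context.signs = signs.split(",")

    @when('the user enters "{text}" at the prompt')
    def step_enter_text(context, text):
        # run_prompt would spawn the CLI (e.g. via pexpect) and feed it keys.
        context.result = run_prompt(context.signs, text)  # hypothetical helper

    @then('the shell should auto-complete to "{expected}"')
    def step_check_completion(context, expected):
        assert context.result.completed_command == expected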
Hope that helps!
Brandon
Take a look at FitNesse. It's a combination of a requirements wiki and a functional-test execution framework.
When you write the requirements, you put them in a table containing sample data and expected results. Click "Test" and FitNesse parses the table and makes the calls. Pretty cool.
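For example, a decision table for the auto-complete requirement from the question might look like this (fixture and column names invented):

    !|Zodiac Sign Completion|
    |entered text|key pressed|completed command?|
    |zodiacSign=Can|ENTER|zodiacSign=Cancer|

FitNesse runs each row through a fixture class and colours the "completed command?" cell green or red.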
FitNesse is indeed a popular tool, but some would argue that FitNesse is evil (it can be misused easily, and suffers from numerous issues). A good open-source cross-platform alternative would be soapUI.
soapUI can manage functional testing, as well as keep track of your system's requirements, use cases and user stories, and link them to the tests.
It has a nice GUI with what-not (including coverage graphs, like you want!). Most of the features are included in the free version.
For your needs, take a look at QMetry.
It's a very complete tool that allows you to define requirements, test cases and test scenarios, and also to launch test scenarios.
Reporting is also nice, and the UI is very user-friendly.
Hope this helps.
We are building a large CRM system based on the SalesForce.com cloud. I am trying to put together a test plan for the system but I am unsure how to create system-wide tests. I want to use some behaviour-driven testing techniques for this, but I am not sure how I should apply them to the platform.
For the custom parts we will build in the system, I plan to approach this with either Cucumber or SpecFlow driving Selenium actions on the UI. But for the SalesForce UI customisations, I am not sure how deep to go in testing. Customisations such as workflows and validation rules can encapsulate a lot of complex logic that I feel should be tested.
Writing Selenium tests for this out-of-the-box functionality in SalesForce seems overly burdensome for the value. Can you share your experiences of system testing on the SalesForce.com platform, and how we should approach it?
That is the problem with a detailed test plan up front: you are trying to guess what kinds of errors you will get, how many, and in which areas. This can be tricky.
Maybe you should have an overall master test plan specifying only the test strategy, the main tool set, the risks, and the relative amount of testing you want to put into given areas (based on risk).
Then, when you start working on a given piece of functionality or iteration (I hope you are working in iterations, not waterfall), you prepare a detailed test plan for that set of work. You adjust your tools/estimates/test coverage based on experience from the previous parts.
This way you can state your general approach and priorities at the beginning, but let yourself adapt as the project progresses.
The question of how much testing to put into COTS is the same as with any software: you need to evaluate the risk.
If your software needs to be validated because of external regulations (FDA, DoD, ...), you will need to go deep with your tests, almost testing the entire app. One problem here may be assuring the external regulator that the tools you used for validation are themselves validated (and that is troublesome).
If your application is mission-critical for your company, then you still need to do a lot of testing, based on extensive risk analysis.
If your application is not covered by either of the above, you can go with lighter testing. You can probably skip functionality that was tested by the platform manufacturer and focus on your customisations. On the other hand, I would still write tests (at least happy paths) for the workflows you will be using in your business processes.
When we started learning Selenium testing in 2008, we built the Recruiting application from the SalesForce handbook, created a suite of tests, and described our path step by step on our blog. It may help you get started if you decide to write Selenium code to test your app.
I believe the problem with SalesForce is that you have unit and UI testing, but no service-level testing. The SpecFlow I've seen driving the Selenium UI is brittle and doesn't encapsulate what I'm after in engineering a service-level test solution:
    When I navigate to "/Selenium-Testing-Cookbook-Gundecha-Unmesh/dp/1849515743"
    And I click the 'buy now' button
    And then I click the 'proceed to checkout' button
That is not the spirit or intent of SpecFlow; it should read more like:
    Given I have not selected a product
    When I select Proceed to Checkout
    Then ensure I am presented with a message
To test that with Selenium, you essentially have to translate it into clicks and typing, whereas in the .NET realm you can instantiate objects in the middle tier and run hundreds of instances and derivations against the same Background (mock setup).
I'm told that you can expose SF through an API, at some security risk. I'd love to find out more about THAT.
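For what it's worth, Salesforce does expose SOAP and REST APIs (access is gated by profile permissions and a security token). A rough sketch of a service-level check through the REST API using the third-party simple_salesforce Python client; the credentials, and the assumption that Contact requires LastName, are illustrative:

    from simple_salesforce import Salesforce

    sf = Salesforce(username="user@example.com",
                    password="password",
                    security_token="token")

    # Exercise server-side validation directly, with no browser in the loop:
    try:
        sf.Contact.create({"LastName": ""})  # should be rejected server-side
        print("FAIL: record with empty LastName was accepted")
    except Exception as exc:
        print("PASS: create was rejected as expected:", exc)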
I'm beginning a new project: about one year of development (for the first version) with multiple developers, testers, etc.
I'm wondering if something exists that could help me do the following:
List all user goals
Associate functions to these user goals
Associate requirements to these functions
Associate design activities to these requirements
Associate development tasks to these requirements
Associate tests to these requirements
Qualify tests (system test, regression test, developer test, automated or not)
This way, I could:
Track whether the developed program fulfills all user goals
Track if all functions are tested
Build a requirements traceability matrix to know whether each requirement is tested
Track tests to do if a function is to be changed
Track the time needed to develop a function (it can serve later to estimate the time needed to modify it or to add a similar function to the program)
List all system tests to do when a new version is shipped
List all regression tests to do
List all developer tests to run when there is a change in a function
List all automated tests, so we could know what percentage of the functions are automatically tested
etc.
You can suggest open source or commercial programs.
The Atlassian suite of software would seem to be a good fit and is very cheaply priced for a few users ($10 for up to ten users). I've direct (and good!) experience of using JIRA and find it very simple to use and flexible enough for my needs. Another alternative would be FogBugz, but I've no first-hand experience of using this.
Re FogBugz: it is well worth having a look at the processes behind it. Having worked on many non-software projects, I believe it is a universally sound methodology (even if Joel is a little quirky in his thinking...).
I use SmartSheet because it is simple but still has hierarchical tasking, as you have set out in the question. It is good at dealing with people; it is unlikely to be good at managing code, whereas FogBugz presumably does that.
A key feature of SmartSheet over Atlassian and the others is that additional users cost nothing.
One decision you have to make is whether you want the project plan output in a simple form that many stakeholders can understand, or in detail so that you can track more activity. Obviously the detail will require effort.
You have made a good start by setting out the issues, your culture of management may well be more valuable than the tool you choose.
ciao
Is keyword driven testing something that could be implemented using Selenium? If so, how exactly and where can I learn more about it? A simple example might help me get started :)
Thanks!
The Page Object Model is a way to represent your pages using Selenium 1 or 2/WebDriver that might be of interest to you. With proper setup, your tests become human-readable and, within an IDE that supports code completion, simple to write.
I know this isn't quite what you are asking for, but it provides excellent abstraction and makes tests readable and powerful. You can mock out your test flow in somewhat plain language and then fill it in later; see the sketch below.
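A minimal Page Object Model sketch in Python with Selenium; the pages, URL and element ids are invented:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class LoginPage:
        def __init__(self, driver):
            self.driver = driver

        def open(self):
            self.driver.get("https://example.com/login")  # hypothetical URL
            return self

        def log_in(self, username, password):
            self.driver.find_element(By.ID, "username").send_keys(username)
            self.driver.find_element(By.ID, "password").send_keys(password)
            self.driver.find_element(By.ID, "submit").click()
            return HomePage(self.driver)

    class HomePage:
        def __init__(self, driver):
            self.driver = driver

        def greeting(self):
            return self.driver.find_element(By.CSS_SELECTOR, ".greeting").text

    # The test itself reads almost like prose:
    driver = webdriver.Chrome()
    home = LoginPage(driver).open().log_in("alice", "s3cret")
    assert "Welcome" in home.greeting()
    driver.quit()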
You can look into Robot Framework. The documentation is also available on the wiki page.
It promises to replace code with keywords.
http://code.google.com/p/robotframework/
Yes. But keyword-driven testing is not something particular to Selenium. Selenium is just the tool/framework for interacting with browser UI elements in an automated fashion. Keyword-driven testing frameworks are typically independent of the automation tool. Try googling "keyword-driven test automation frameworks" to get started.
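To make the idea concrete, here is a minimal keyword-driven sketch in Python with Selenium; the keywords and the sample table are made up, and in practice the table would come from a spreadsheet or CSV maintained by testers:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def open_url(driver, url):
        driver.get(url)

    def type_text(driver, element_id, text):
        driver.find_element(By.ID, element_id).send_keys(text)

    def click(driver, element_id):
        driver.find_element(By.ID, element_id).click()

    # The keyword vocabulary: plain names mapped to automation code.
    KEYWORDS = {"open": open_url, "type": type_text, "click": click}

    # Each row: a keyword followed by its arguments.
    test_table = [
        ("open", "https://example.com/login"),
        ("type", "username", "alice"),
        ("type", "password", "s3cret"),
        ("click", "submit"),
    ]

    driver = webdriver.Chrome()
    for keyword, *args in test_table:
        KEYWORDS[keyword](driver, *args)
    driver.quit()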
Using TestPlan with Selenium as a backend is a good option for such testing. I have written several scripts which load CSV files, use hand-coded tables, or use automatic generators to drive the testing.
The TestPlan language is, however, clear enough that a typical non-programmer can pick it up and work with it, which further alleviates the trouble. For example, below is a simple script that submits a form.
    GotoURL http://mydomain.com/
    SubmitForm with
        %Params% with
            %name% Tom
            %age% 45
        end
    end
    Check //p[@class='success']
That goes to a page, submits the form, and ensures that the result has a specific element (XPath is used, but there are other predicates to locate things).
Open2Test is an add-on-style keyword-driven framework.
It aims to replace writing test code with keywords.
But... I'm not sure anybody is really using it. There is very little info on the web.
http://www.open2test.org/index.html
Take a look at the Test Automation Framework, which comes with a plug-and-play keyword-driven model.
http://menonvarun.github.io/taf/index.html
http://menonvarun.github.io/taf/pages/keyword_model_in_taf.html
For small-to-large teams developing software together, what tools are used to form a comprehensive team development framework?
Specifically, I'm looking for a comprehensive list of all the individual functions involved (e.g. source control, bug management, testing tools, project management), not specific product recommendations. I'm also not restricting the list to a particular methodology (e.g. Scrum).
Source control (obviously) including branch management
Issue tracking (features and bugs), possibly with task reassignment and forwarding, and often things like screen recording
Individual task management, sometimes integrated with the issue tracking system
Communication software. Some teams use email and IM (or even tweets) within the same building. There are tools integrated with the code so you can "chat around a piece of code". Screen and application sharing are also useful.
A good build tool.
Distributed pair-programming tools if applicable, shared editors otherwise.
Similar support in CASE tools.
Less commonly used but promising tools (from academic backgrounds), some of which now have IDE-based versions:
Real-time awareness (prevent merge conflicts by letting you know somebody is working on the same file before you actually write code)
In-code social tagging, useful for bookmarking specific items
In-code contract communication tools (e.g., making a caller aware of special expectations in the invoked method as a way of avoiding errors)
You've hit the major ones in your post:
IDE (Integrated Development Environment)
Coding Guidelines (sometimes looked over, but it still helps tremendously)
Source Control
Testing Suite (Unit Testing, Test Case/Test Script Management and Tracking)
Issue Tracking/Bug Reporting
Build Management
...I'm sure I'm missing something obvious, but somebody around here will correct me.
And the one I missed...
Diagramming software (e.g. Rational Software Modeler)
A few more:
Requirements management software
Code review software
Continuous integration tool
Documentation repository - e.g. Wiki
I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment; however, there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off on as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking of making it the first column of a spreadsheet which also records when the feature was last tested, whether it worked, and, if it didn't, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information on what doesn't work and when it broke.
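For illustration, a few spreadsheet rows might look like this (the dates and outcomes are invented):

    Feature                 | Last tested | Passed? | How it broke
    user registration form  | 2008-10-01  | yes     |
    country dropdown        | 2008-10-01  | no      | list is empty when the server is slow
    password validation     | 2008-09-24  | yes     |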
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. It would also give me something with which to start automating the testing process (currently we don't have much test automation apart from unit tests).
I'm looking for examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to get the client to agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report this to the programmers.
This is mostly internal testing material, which should be a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA), this would basically be testing documentation. This is lifecycle I had in mind:
PM makes list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for more 'standard' websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test-case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 . If you add steps to test, the expected results of each test, and edge/boundary cases, you should have a pretty solid requirements spec and testing spec in one.
For the steps, don't forget to include an evaluation step, where you, the testers, the developers, etc. evaluate the testing results and update the requirements/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec, both from a requirements perspective and a testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion web-site has a very good page on techniques for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation; Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated (see the sketch below). Even if tests must be checked by a human, for example cross-browser testing to make sure things look right, the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass those on to QA, rather than undertaking the time-consuming, and often flawed, task of writing down instructions. Save the tests as part of the project. Give them good descriptions as to their intent. Link them to a ticket. Should the GUI change so that a test no longer works, and that will happen, you can rewrite the test to cover its intention.
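A rough sketch of that automation in Python; the URL and element ids are invented:

    # Automated checks for parts of the registration form, using Selenium.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Chrome()
    driver.get("https://example.com/register")  # hypothetical URL

    # Feature 1.1: country dropdown is populated and sorted.
    countries = [o.text for o in Select(driver.find_element(By.ID, "country")).options]
    assert countries, "country dropdown is empty"
    assert countries == sorted(countries), "countries are not sorted"

    # Feature 1.2: weak passwords are rejected.
    for weak in ["a", "bob", "password", "password123"]:
        field = driver.find_element(By.ID, "password")
        field.clear()
        field.send_keys(weak)
        driver.find_element(By.ID, "submit").click()
        assert driver.find_element(By.CLASS_NAME, "error").is_displayed()

    driver.quit()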
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing of the functionality. Have a function which generates the country list, and test that thoroughly. Then have a function which uses it to generate HTML or AJAX or whatever, and you only have to check that it looks about right, because the function doing the actual work is well tested. User login, password checks, emails: these can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
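A minimal sketch of the idea in Python; the data and names here are made up for illustration:

    # Separate logic (tested thoroughly) from presentation (only eyeballed).
    COUNTRIES_BY_LOCALE = {
        "en": ["Germany", "France", "Ireland"],    # stand-in for a real lookup
        "fr": ["Allemagne", "France", "Irlande"],
    }

    def country_list(locale="en"):
        # Business logic: localized country names in sorted order.
        return sorted(COUNTRIES_BY_LOCALE[locale])

    def country_dropdown_html(locale="en"):
        # Thin presentation layer: just wraps the well-tested function.
        options = "".join("<option>%s</option>" % c for c in country_list(locale))
        return "<select name='country'>" + options + "</select>"

    # Unit tests hit the logic directly; no browser required. Run with pytest.
    def test_country_list_is_sorted_per_locale():
        for locale in ("en", "fr"):
            names = country_list(locale)
            assert names == sorted(names)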