What are various methods for discovering test cases - testing

All,
I am a developer but would like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test Driven Development and the exploratory testing approach to software projects.
Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test.
Say, for example, take a basic user registration form like the ones we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach; I may be wrong here, though.
I would appreciate your views on this.
Thanks,
Byte

Bugs! One of my favorite starting places on a project for adding new test cases is to take a look at the bug tracking system. The existing bugs are test cases in their own right, but they also can steer you towards new test cases. If a particular module is buggy, it can lead you to develop more test cases in that area. If a particular developer seems to introduce a certain class of bugs, it can guide testing of future projects by that developer.
Another useful consideration is to look more at testing techniques than test cases. In your example of a registration form, how would you attack it from a business requirements perspective? Security? Concurrency? Valid/invalid input?

Testing Computer Software is a good book on how to do many different types of testing: black box, white box, test case design, planning, managing a testing project, and probably a lot more that I've missed.
For the example you give, I would do something like this:
For each field, I would think about the possible values you can enter, both valid and invalid. I would look for boundary cases; if a field is numeric, what happens if I enter a value one less than the lower bound? What happens if I enter the lower bound as a value? Etc.
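As a rough sketch of what those boundary checks can look like when automated (the age field and its 18-120 limits below are invented purely for illustration, and pytest is assumed):

import pytest

def validate_age(value: int) -> bool:
    # Stand-in for the form's real validation rule.
    return 18 <= value <= 120

@pytest.mark.parametrize("value,expected", [
    (17, False),   # one below the lower bound
    (18, True),    # exactly the lower bound
    (19, True),    # just inside the range
    (120, True),   # exactly the upper bound
    (121, False),  # one above the upper bound
])
def test_age_boundaries(value, expected):
    assert validate_age(value) == expected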
I would then use a tool like Microsoft's Pairwise Independent Combinatorial Testing (PICT) Tool to generate as few test scenarios as I could across the cases for all input fields.
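For reference, a PICT model is just a plain-text file listing each parameter with its values (the fields and values below are invented for the registration-form example); you feed it to the tool (for example, pict model.txt) and it emits a much smaller set of test cases that still covers every pair of values:

Country:    US, UK, Germany, Other
Email:      valid, missing @, empty
Password:   too short, no digit, valid
Newsletter: checked, unchecked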
I would also write an automated test to pound away on the form using random input, capture the results and see if the responses made sense (virtual monkeys at a keyboard).
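A very rough sketch of that kind of monkey test, assuming the form can be driven by a plain HTTP POST and using the Python requests library (the URL and field names are placeholders, not the real form):

import random
import string
import requests

URL = "http://localhost:8000/register"  # placeholder endpoint

def random_text(max_len=50):
    alphabet = string.printable
    return "".join(random.choice(alphabet) for _ in range(random.randint(0, max_len)))

for _ in range(1000):
    payload = {
        "username": random_text(),
        "email": random_text(),
        "password": random_text(),
        "country": random_text(10),
    }
    response = requests.post(URL, data=payload)
    # Anything other than "accepted" or "politely rejected" deserves a closer look.
    if response.status_code >= 500:
        print("Server error for input:", payload)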

Ask questions. Keep a list of question words and force yourself to come up with questions about the product or a feature. Lists like this can help you get out of the proverbial box or rut. Don't spend too much time on a question word if nothing comes to you.
Who
Whose
What
Where
When
Why
How
How much
Then, when you answer them, ask "else" questions. This forces you to distrust, for a moment at least, your initial conclusions.
Who else
Whose else
etc..
Then, ask the "not" questions--negate or refute your assumptions, and challenge them.
Who not (eg, Who might not need access to this secure feature, and why?)
What not (what data will the user not care about? What will the user not put in this text box? Are you sure?)
etc...
Other modifiers to the questions could be:
W else
W not
W risks
W different
Combine two question words, e.g., "who" and "when".

In the case of the form, I'd look at what I can enter into it and test various boundary conditions there, e.g. what happens if no username is supplied? I'm reminded that there are a few different forms of testing:
Black box testing - This is where you test without looking inside what is being tested. The challenge here is that not being able to see inside can make it hard to decide which tests are useful and how many different tests are worthwhile. This is, of course, what some default testing can look like.
White box testing - This is where you can look at the code and use metrics like code coverage to ensure that you are covering a percentage of the code base. This is generally better, since in this case you know more about what is being done.
There are also performance tests, as opposed to logic tests, that are worth noting somewhere, e.g. how fast does the form validate my input, rather than just whether the form does the right thing.

Identify your assumptions from different perspectives:
How can users possibly misunderstand this?
Why do I think it acts or should act this way?
What biases might I have about how this software should work?
How do I know the requirements/design/implementation is what's needed?
What other perspectives (users, administrators, managers, developers, legal) might exist on priority, importance, goals, etc, of this software?
Is the right software being built?
Do I really know what a valid name/phone number/ID number/address/etc looks like?
What am I missing?
How might I be mistaken about (insert noun here)?
Also, use any of the mnemonics and testing lists noted here:
http://www.qualityperspectives.ca/resources_mnemonics.html

Discussing test ideas with others. When you explain your ideas to someone else, you tend to see ways to refine or expand on them.

Group brainstorming sessions. (or informally in pairs when necessary)
see these brainstorming techniques

Make data tables with major features listed across the top and side, and consider possible interactions between each pair. Doing this in three dimensions can get unwieldy.
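A quick way to enumerate those pairs so none of them is overlooked (the feature names here are invented):

from itertools import combinations

features = ["registration", "login", "password reset", "profile edit", "account deletion"]
for a, b in combinations(features, 2):
    print(f"Consider: how does '{a}' interact with '{b}'?")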

Keep test catalogs with common questions and problem types for different kinds of tasks, such as integer validation, workflow steps, and so on.

Make use of Exploratory Testing Dynamics and Satisfice Heuristic Test Strategy Model by James Bach. Both offer general ways to start thinking more broadly or differently about the product, which can help you switch between boxes and heuristics in testing.


Can BDD work for Big Data ETL testing?

I was wondering if anyone uses BDD for testing a Big Data ETL application?
I can see how BDD can be used for testing applications that have a client interacting with them, but in the case of a Big Data ETL application there is no client interaction, so it's hard to see what 'When' I might use.
For example:
Given 100 events of type A occur
And 50 events of type B occur after 5 minutes
Then database rows should be:
|Type|Count|Bucket|
|A|100|1|
|B|50|2|
But that seems wrong.
Anyone with an insight?
Can you give me an example of what you'd expect to see in an ETL output?
There are a couple of responses you could give to this. One might be the different kinds of database rows you'd expect, and the fact that some of them will probably be repeated, but not others. That was something that struck me as weird, but if you're used to working with star schemas then you'll probably notice other differences instead.
Normally I'd steer people away from talking about the database, but if you're working with star schemas, I think it's OK to mention the facts and dimensions (I haven't worked with ETL a lot, but I do remember talking through specific examples of these and what I would expect to see).
The alternative is to use the client.
I saw that you said there was no client; however, there's always a client, even if it's one that might exist in the future. There are implications for ETL which run across security, performance and access, amongst others. It's worth having a client, even if it's a string-based or SQL-based toy, to explore the things which might trip you up.
Why are you doing this? What's new about the thing the business or users or customers will be able to do when this is in place, that they can't do already? And can you get hold of an example of that?
"We'll be able to understand how X is performing against Y standard."
Great. Can you give me an example of some X, some Y, and some standard? How will you measure the performance? What data will you be looking for? Should everyone be able to see that data? Can you think of any scenario where someone shouldn't be able to access that?
Those examples become the ETL equivalent of scenarios; the conversations retain the same pattern. You just end up automating them at a different level, since your API is machine-oriented rather than human-oriented, and some of your conversations will be about monitoring instead of testing. Your conversations should still be with the people.
Your "when" will be the query or report that you run, within the data, permission and security context in which you run it.
BDD always works for application logic in the Big Data space. Remember the testing-triangle principle: have your unit tests, and practice BDD to build your integration and acceptance tests within your sprint. It's not recommended to have your test data maintained externally, so validating the E2E flow with all the moving pieces needs to be lightweight. Practice the TDD model if time permits.

How do I systematically test and think like a real tester

My friend asked me this question today: how would you test a vending machine, and what are its test cases? I am able to give some test cases, but those are just random thoughts. I want to know how to systematically test a product or a piece of software. There are lots of kinds of tests, like unit testing, functional testing, integration testing, stress testing, etc. But I would like to know: how do I systematically test and think like a real tester? Can someone please explain to me how all these kinds of testing can be differentiated and which one can be applied in a real scenario? For example, test a file system.
Even long-time, well respected, professional testers will tell you: It is an art more than a science.
My trick for designing new test cases starts with the various types of tests you mention (to be thorough it must include all of them), but beyond that I try to find a list of all the ways I can interact with the code/product.
For the vending machine example, there are tons of parts, inside and out.
Simple testing, exercising the product as it is designed to work, gives plenty of cases:
Does it give the correct change
How fast can it process the request
What if an item is out of stock
What if it is overfilled
What if the change drawer is full
What if the items are too big, or badly racked
What if the user puts in too little money
What if it is out of change
Then there are the interesting cases, which normal users wouldn't think about.
What if you try to tip it over
Give it a fake coin
Steal from it
Put a coin in with a string
Give it funny amounts of change
Give it half-ripped bills
Pry it open with a crow-bar
Feed it bad power/brownout
Turn it off in the middle of various operations
The way to think like a tester is to figure out every possible way you can attack it, from all the "funny cases" in usual scenarios to all the methods that are completely outside of how it should be used. Any point of input, including ones you might think the developers/owners have control over, is fair game.
You can also use many automated test tools, such as pairwise test selection, model-based test toolkits, or for software, various stress/load and security tools.
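To give a flavour of the model-based angle, here is a toy sketch (everything in it is invented): a crude vending machine model plus one property that should hold for any sequence of coins, namely that money in always equals money spent plus change given back.

import random

PRICE = 75  # cents; arbitrary for the sketch

def simulate(coins):
    inserted = sum(coins)
    dispensed = inserted >= PRICE
    change = inserted - PRICE if dispensed else inserted  # refund everything if not enough
    return dispensed, change

for _ in range(10000):
    coins = [random.choice([5, 10, 25]) for _ in range(random.randint(0, 10))]
    dispensed, change = simulate(coins)
    spent = PRICE if dispensed else 0
    assert spent + change == sum(coins), (coins, dispensed, change)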
I feel like this answer was a good start, but I now realize it was only half of the story.
Coming up with every single way you can possibly test the system is important. You need to learn to stretch the limits of your imagination, your problem decomposition skills, your understanding of chains of functionality/failure, and your domain knowledge about the thing you are testing. This is the point I was attempting to make above. With the right mindset, and with enough vigilance, these skills will start to improve very quickly - within a year, or within a few years (depending on the complexity of the domain).
The second level of becoming a very competent tester is to determine which tests you should care about. You will always be able to break every system, in a ton of different ways. Whether those failures are important or not is a more interesting question, and is often much more difficult to answer. The benefit to answering this question, though, is two-fold.
First, if you know why it is important to fix pieces of the system that break (or to skip fixing them!), then you can understand where you should focus your efforts. You know what you can afford to spend less time testing, and what you must spend more time on.
Second, and more importantly, you will help your team expose where they should be focusing their efforts. You will start to uncover things that are called "second-order unknowns". Your team doesn't know what it doesn't know.
The primary trick that helps you accomplish this is to always ask "why?", until whoever you are asking is stumped.
An example:
Q: Why this test?
A: Because I want to exercise all functionality in the system.
Q: Why does this system function this way?
A: Because of the decisions that the programmer made, based on the product specifications.
Q: Why did our product specifications ask for this?
A: Because the company that we are writing the software for had a requirement that the software works this way.
Q: Why did that company we are contracting for add that as a requirement?
A: Because their users need to do :thing:
Q: Why do the users need to do :thing:?
A: Because they are trying to accomplish :xyz:
Q: Why do they need to accomplish :xyz:?
A: Because they save money by doing :abc:
Q: Why did they choose :xyz: to solve :abc:?
A: ... good question.
Q: What could they do instead?
A: ... now that I think about it, there's a ton of options! Maybe one of them works better?
With practice, you will start knowing which specific "why" questions to ask, and which to focus on. You will also learn to start deeper down the chain, and be less mechanical in your approach.
This is no longer just about ensuring that the product matches the specifications that the dev, pm, customer, or end user provided. It also helps determine if the solution you are providing is the highest quality solution that your team could provide.
A hidden requirement of this is that you must learn that half your job as a tester is to ask questions all the time. You might think that your team mates will be annoyed at this, but hopefully I've shown that it is both crucial to your development, and the quality of the product you are testing. Smart and curious teammates who care about the product (who aren't busy and frustrated) will love your questions.
@brett:
Suppose you have the system with you that you want to test. The main thing that comes into the picture is to make sure you have a test scenario or test plan. Once you have that, it becomes much clearer how and what to test about the system.
Once you have a test plan, your vision becomes clear regarding what is expected and what is unexpected. For unexpected behavior you can recheck once and file an issue if you think it is not correct. I have given you an answer for the general case; if you have a real-world scenario, then it may be possible to give more specific guidelines for that.

Hallway usability testing: How much of the UI do you actually make functional?

When doing hallway usability tests, do most of you make your apps fully or nearly fully functional? Or do you just make sure the links or flow chain correctly? Or do you just draw on paper and go with that?
I would like to test a prototype early on and am trying to find a good balance, but at the same time I'm worried that some non-functional parts might not give representative results.
Thanks.
Usability tests, hallway or otherwise, only need the functionality that you need to test. In most usability tests, you should go in with specific design questions to answer and develop your prototype to the point where it can answer those questions. For example, if you need to test if users understand your indication of the sort order for a table, all you need is a paper picture of the table showing the sort indication (with the table contents blurred) and ask them how the table is sorted. If you need to test the IA, all you need is a bunch of web pages, empty except for a title, that are linked through the navigation menus.
You only need the pages relevant for the tasks you give your users. If you’re just testing the IA, then you only need the pages on the normative path. If you are also testing error recovery, then you need the pages off the normative path along with the full navigation controls. If you are also testing error detection, then you need content on the pages as well.
You can also simulate functionality when that’s easier to do. For example, in testing if users can figure out how to get a desired sort order, when the user clicks on a non-functioning control for sorting the table, you can say, “Okay, doing that will get you this,” and you take the mouse and select a bookmark that shows the table in the new sort order.
In hallway testing, if users breach the fidelity envelope, you can simply say, “I haven’t made that part yet. Let’s go back to A, and continue from there.” Of course, you should note that the user made a wrong turn in the task you intended for them. I haven’t had any problems with users complaining about non-functional features when I tell them up front it’s an incomplete prototype and we’re only testing the UI for features x, y, and z at the moment.
For low fidelity prototypes, I often call them “mockups” or “drawings” to users rather than “prototypes” to indicate the low functionality. You can put obvious placeholders in for missing content (e.g., “Blah, blah, blah…”, “TODO: Picture of product about here.”). If a user comments on something outside the fidelity envelope (e.g., “This symbol should be red to stand out more”), simply note it, and say that topic is under development (e.g., “Thanks. We haven’t started work on the colors yet. We’re just trying to figure out how to organize the site right now.”).
Usability testing with limited-fidelity prototypes is really necessary for iterative design to be feasible for most projects. Otherwise, you waste too much work developing things that have to be redone.
A couple things to remember:
Test early and often.
The goal of usability testing is to find problems with the UI, not Q/A your code.
Therefore, if users can see the parts of your UI you are interested in testing and interact with them in a realistic way (e.g., click on buttons and links), you should be able to collect useful data. If some links are dead-ends, that's okay, as long as there's some way for users to recover and continue on. Basically, with prototypes, the "correct" path should work, but it's okay if incorrect paths don't (as long as there's a reasonably quick way to get back on the correct path). Even static storyboards (non-functioning drawings of a UI) can provide you with some information if you ask the right questions (e.g., "What would you do on this screen if you wanted to view your shopping cart?").
I would suggest a couple rounds of usability testing. First on paper, perhaps later on screen, generally throughout the application lifecycle (take an Agile approach to it).
There is a good argument to be made for paper prototypes. When users see a screen, even limited functionality, they may be hesitant to suggest changes since it looks "done."
Make no mistake, it's not trivial to get it all down on paper, but that's where I would start. Probably start with just a section or two of the application. And make sure somebody with good people skills and/or explaining skills is there to walk the user through it. Have a second person on-hand to take notes. Try to ask open-ended questions, etc.
For a hallway test, I would test with NONE of the functionality implemented.
Test against designs done on a whiteboard or on paper. You'll be surprised at how much you find out in these minimal mockups. And they are very inexpensive to make!
Functional prototypes are for later. If you give your usability subject a functional interface, they are much less likely to question whether you've implemented the right set of features in the first place.
I would make the UI functional, so that the user can really play with it; that will be much better than a static image. People can tell you whether they feel comfortable with the UI.
I would make sure everything in the UI works, or at least takes you to a clear, unambiguous message pointing out that the feature isn't implemented yet.
Showing prototypes to clients with a disclaimer up front that feature X doesn't work yet will usually be ignored. They'll try out the prototype, click on feature X and indignantly reply, "Feature X doesn't work! This really needs to work in the final version! Why doesn't it work?" The client is confused and unhappy about the product, and it's frustrating for you because it overshadows the positive feedback. Besides, you told them it didn't work; why can't they use their imagination to envision how it would work in the final version?
Make it work, be it with a rough version, dummy data, or even a simple message saying "would show results sorted alphabetically now".

Where to find good test case templates/examples? [closed]

I'm trying to establish more formal requirements and testing procedures than we have now, but I can't find any good reference examples of the documents involved.
At the moment, after feature freeze, testers "click through the application" before deployment, but there is no formal specification of what needs to be tested.
First, I'm thinking about a document which specifies every feature that needs to be tested, something like this (making this up):
user registration form
country dropdown (are countries fetched from the server correctly?)
password validation (are all password rules observed, is user notified if password is too weak?)
thank-you-for-registration
...and so on. This could also serve as something the client can sign off on as part of the requirements before programmers start coding. After the feature list is complete, I'm thinking about making this list the first column in a spreadsheet which also says when the feature was last tested, whether it worked, and, if it didn't work, how it broke. This would give me a document testers could fill in after each testing cycle, so that programmers have a to-do list with information about what doesn't work and when it broke.
Secondly, I'm thinking of test cases for testers, with detailed steps like:
Load user registration form.
(Feature 1.1) Check country dropdown menu.
Is country dropdown populated with countries?
Are names of countries localized?
Is the sort order correct for each language?
(Feature 1.2) Enter these passwords: "a", "bob", "password", "password123", "password123#". Only the last password should be accepted.
Press "OK".
(Feature 2) Check thank-you note.
Is the text localized to every supported language?
This would give testers specific cases and a checklist of what to pay attention to, with pointers to the features in the first document. This would also give me something to start automating the testing process with (currently we don't have much test automation apart from unit tests).
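For example, the password step above could translate almost directly into an automated check; a sketch, where is_valid_password is a made-up stand-in for our real validation rules and pytest is assumed:

import pytest

def is_valid_password(pw: str) -> bool:
    # Hypothetical rules: at least 8 characters, a digit and a punctuation character.
    return len(pw) >= 8 and any(c.isdigit() for c in pw) and any(not c.isalnum() for c in pw)

@pytest.mark.parametrize("password,accepted", [
    ("a", False),
    ("bob", False),
    ("password", False),
    ("password123", False),
    ("password123#", True),  # only the last password should be accepted
])
def test_password_rules(password, accepted):
    assert is_valid_password(password) == accepted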
I'm looking for some examples of how others have done this, without too much paperwork. Typically, a tester should be able to go through all the tests in an hour or two. I'm looking for a simple way to get the client to agree on which features we should implement for the next version, and for testers to verify that all new features are implemented and all existing features are working, and to report this to the programmers.
This is mostly internal testing material, which should amount to a couple of Word/Excel documents. I'm trying to keep one testing/bugfixing cycle under two days. I'm tracking programming time, implementation of new features and customer tickets in other ways (JIRA); this would basically be the testing documentation. This is the lifecycle I had in mind:
PM makes list of features. Customer signs it. (Document 1 is created.)
Test cases are created. (Document 2.)
Programmers implement features.
Testers test features according to test cases. (And report bugs through Document 1.)
Programmers fix bugs.
GOTO 4 until all bugs are fixed.
End of internal testing; product is shown to customer.
Does anyone have pointers to where some sample documents with test cases can be found? Also, all tips regarding the process I outlined above are welcome. :)
I've developed two documents I use.
One is for your more "standard" websites (e.g. a business web presence):
http://pm4web.blogspot.com/2008/07/quality-test-plan.html
The other one I use for web-based applications:
http://pm4web.blogspot.com/2008/07/writing-system-test-plan.html
Hope that helps.
First, I think combining the requirements document with the test case document makes the most sense, since much of the information is the same for both, and having the requirements in front of the testers and the test cases in front of the users and developers reinforces the requirements and provides varying viewpoints on them. Here's a good starting point for the document layout: http://www.volere.co.uk/template.htm#anchor326763 - if you add steps to test, expected results of the test, and edge/boundary cases, you should have a pretty solid requirement spec and testing spec in one.
For the steps, don't forget to include an evaluation step, where you, the testers, the developers, etc. evaluate the testing results and update the requirement/test doc for the next round (you will often run into things that you could not have thought of and should add to the spec, both from a requirements perspective and a testing one).
I also highly recommend using mindmapping/work-breakdown-structure to ensure you have all of the requirements properly captured.
David Peterson's Concordion web-site has a very good page on technique for writing good specifications (as well as a framework for executing said specifications). His advice is simple and concise.
You may also want to check out Dan North's classic blog post on Behavior-Driven Development (BDD). Very helpful!
You absolutely need a detailed specification before starting work; otherwise your developers don't know what to write or when they have finished. Joel Spolsky has written a good essay on this topic, with examples. Don't expect the spec to remain unchanged during development though: build revisions into the plan.
meade, above, has recommended combining the spec with the tests. This is known as Test Driven Development and is a very good idea. It pins things down in a way that natural language often doesn't, and cuts down the amount of work.
You also need to think about unit tests and automation. This is a big time saver and quality booster. The GUI level tests may be difficult to automate, but you should make the GUI layer as thin as possible, and have automated tests for the functions underneath. This is a huge time saver later in development because you can test the whole application thoroughly as often as you like. Manual tests are expensive and slow, so there is a strong temptation to cut corners: "we only changed the Foo module, so we only need to repeat tests 7, 8 and 9". Then the customer phones up complaining that something in the Bar module is broken, and it turns out that Foo has an obscure side effect on Bar that the developers missed. Automated tests would catch this because automated tests are cheap to run. See here for a true story about such a bug.
If your application is big enough to need it then specify modules using TDD, and turn those module tests into automated tests.
An hour to run through all the manual tests sounds a bit optimistic, unless it's a very simple application. Don't forget you have to test all the error cases as well as the main path.
Go through old bug reports and build up your test cases from them. You can test for specific old bugs and also make more generalizations. Since the same sorts of bugs tend to crop up over and over again this will give you a test suite that's more about catching real bugs and less about the impossible (or very expensive) task of full coverage.
Make use of GUI and web automation; Selenium, for example. A lot can be automated, much more than you think. Your user registration scenario, for example, is easily automated. Even for tests that must be checked by a human, for example cross-browser testing to make sure things look right, the test can be recorded and replayed later while the QA engineer watches. Developers can even record the steps to reproduce hard-to-automate bugs and pass that on to QA rather than taking on the time-consuming, and often flawed, task of writing down instructions. Save the recorded tests as part of the project. Give them good descriptions of the intent of the test. Link them to a ticket. Should the GUI change so that a test doesn't work any more, and that will happen, you can rewrite the test to cover its intention.
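A minimal sketch of what such an automated registration check might look like with Selenium's Python bindings (the URL, element IDs and success message are invented; the real form will differ):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8000/register")  # placeholder URL
    driver.find_element(By.ID, "username").send_keys("test_user")
    driver.find_element(By.ID, "email").send_keys("test@example.com")
    driver.find_element(By.ID, "password").send_keys("password123#")
    driver.find_element(By.ID, "submit").click()
    assert "Thank you" in driver.page_source  # hypothetical success message
finally:
    driver.quit()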
I will amplify what Paul Johnson said about making the GUI layer as thin as possible. Separate form (the GUI or HTML or formatting) from functionality (what it does) and automate testing of the functionality. Have a function which generates the country list and test that thoroughly. Then the function which uses it to generate HTML or AJAX or whatever only needs a check that it looks about right, because the function doing the actual work is well tested. User login, password checks, emails: these can all be written to work without a GUI. This will drastically cut down on the amount of slow, expensive, flawed manual testing which has to be done.
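A sketch of that separation (names and data invented): the list-building logic gets tested exhaustively on its own, and the thin rendering wrapper only needs a sanity check.

# Logic layer: easy to test thoroughly without any GUI.
def country_list():
    return sorted(["Germany", "United Kingdom", "United States"])  # stand-in data source

# Presentation layer: deliberately thin, only needs a "looks about right" check.
def country_options_html():
    return "".join(f"<option>{c}</option>" for c in country_list())

def test_country_list_is_sorted_and_non_empty():
    countries = country_list()
    assert countries and countries == sorted(countries)

def test_html_contains_every_country():
    html = country_options_html()
    assert all(c in html for c in country_list())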

How much a tester should know about internal details of code?

How useful is it, if at all, for the testers on a product team to know about the internal code details of a product? This does not mean they need to know every line of code, but rather have a good idea of how the code is structured, what the object model is, how the various modules are inter-linked, what the inter-dependencies between various features are, and so on. This can arguably help them in finding related issues or defects once they hit one. On the other side, this can potentially 'bias' their "user-centric" approach towards evaluating and certifying the product, and can affect the testing results in the end.
I have not heard of any specific model for such interaction. (Let's assume a product that users, potentially non-technical ones, consume, and not a framework or API that the testers are testing - in the latter case the testers may need to understand the code in order to test it, because the user is another programmer.)
That entirely depends upon the type of testing being done.
For functional system testing, the testers can and probably should be oblivious to the details of the implementation -- if they know the details they may inadvertently account for that in their test strategy and not properly test the product.
For performance and scalability testing it's often helpful for the testers to have some high-level knowledge of the structure of the codebase, as it helps in identifying potential performance hotspots and therefore in writing targeted test cases. The reason this is important is that performance testing is generally a broad, open-ended process, so anything that can be done to focus the testing and get results is beneficial to everybody.
This sounds similar to this previous question: Should QA test from a strictly black-box perspective?
I've never seen a circumstance where a tester who knew a lot about the internals of system was disadvantaged.
I would assert that there are self-justifying myths that an uninformed tester is adequate, or even better than a deeply technical one, because:
It allows project managers to use 'random or low quality resources' for testing - the 'as uninformed as the user' myth. If you want this type of testing, get some real users to test your stuff.
Testers are still often seen as cheaper and less valuable than developers - the 'anybody can do black-box testing' myth.
Development can defer proper testing to the test team - two myths in one: 'we don't need to train testers' and 'only the test team does testing'.
What you are looking at here is the difference between black box (no knowledge of the internals), white box (all knowledge) and grey box (some select knowledge).
The answer really depends on the purpose of the code. For integration-heavy projects, knowing where and how the pieces communicate, even if it is entirely behind the scenes, allows testers to produce appropriate non-functional test cases.
These test cases are determining whether or not a component will gracefully handle the lack of availability of a dependency. It can also be used to identify performance related issues.
For example: as a tester, if I know that the Web UI component defers a request to an orchestration service that does the real work, then I can construct a scenario where the orchestration takes a long time (high load). If the user then performs another request (simulating user impatience), the web service receives a second request while the first is still in progress. If we continually repeat this, the web service will eventually die from the stress. Without knowing the underlying model it would not be easy to find the problem.
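A rough sketch of that scenario (the endpoint and numbers are invented), firing overlapping "impatient" requests and counting how many fall over:

from concurrent.futures import ThreadPoolExecutor
import requests

URL = "http://localhost:8000/report"  # placeholder for the call that hits the slow orchestration

def hit(_):
    return requests.get(URL, timeout=60).status_code

with ThreadPoolExecutor(max_workers=20) as pool:
    statuses = list(pool.map(hit, range(200)))

errors = [s for s in statuses if s >= 500]
print(f"{len(errors)} of {len(statuses)} requests failed under load")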
In most cases black box is preferred for functional testing; as soon as you move towards non-functional or system integration testing, understanding the interactions can assist in ensuring appropriate test coverage.
Not all testers are skilled at, or comfortable with, working with and understanding component interactions or internals, so whether it is appropriate is a per-tester, per-system decision.
In almost all cases we start with black box and head towards white box as the need arises.
A tester does not need to know internal details.
The application should be tested without any knowledge of its internal structure, development problems, or external dependencies.
If you encumber the tester with that additional information, you push them into a particular testing scheme; a tester should never be pushed in a direction, and should just test from a non-coder's point of view.
There are multiple testing methodologies that require code reviewing, and also those that don't.
The advantage of white-box testing (i.e. reading the code) is that you can tailor your testing to cover only the areas that you know (from reading the code) will fail.
Disadvantages include the time taken away from actual testing in order to understand the code.
Black-box testing (i.e. not reading the code) can be just as good (or better?) at finding bugs than white-box.
Normally both types of testing can happen on one project, developers white-box unit testing, and testers black-box integration testing.
I prefer Black Box testing for final test regimes
In an ideal world...
Testers should know nothing about the internals of the code
They should know everything the customer will - i.e. have the documents/help required to use the system/application (this definitely includes the API description/documents if it's some sort of code deliverable).
If the testers can't manage to find the defects with these limitations, you haven't documented your API/application enough.
If they are dedicated testers (testing is the only thing they do), then I think they should know as little as possible about the code they are attempting to test.
Too often they try to determine why it's failing; that is the responsibility of the developer, not the tester.
That said I think developers make great testers, because we tend to know the edge cases for certain types of functionality.
Here's an example of a bug which you can't find if you don't know the code internals, because you simply can't test all inputs:
long long int increment(long long int l) {
    if (l == 475636294934LL) return 3;  /* wrong result for exactly one input */
    return l + 1;
}
However, in this case it would be found if the tester had 100% code coverage as a target, and looked at only enough of the internals to write tests to achieve that.
Here's an example of a bug which you quite likely won't find if you do know the code internals, because false confidence is contagious. In particular, it is usually not possible for the author of the code to write a test which catches this bug:
int MyConnect(socket *sock) {
    /* socket must have been bound already, but that's OK */
    return RealConnect(sock);
}
If the documentation of MyConnect fails to mention that the socket must be bound, then something unexpected will happen some day (someone will call it unbound, and presumably the socket implementation will select an arbitrary local address). But a tester who can see the code often doesn't have the mindset of "testing" the documentation. Unless they're really on form, they won't notice that there's an assumption in the code not mentioned in the docs, and will just accept the assumption. In contrast, a tester writing from the docs could easily spot the bug, because they'll think "what possible states can a socket be in? I'll do a test for each". Since no constraints are mentioned, there's no reason they won't try the case that fails.
Answer: do both. One way to do this is to write a test suite before you see/write the code, and then add more tests to cover any special cases you introduce in your implementation. This applies whether or not the tester is the same person as the programmer, although obviously if the programmer writes the second kind of test, then only one person in the organisation has to understand the code. It's arguable whether it's a good long-term strategy to have code only one person has ever understood, but it's widespread, because it certainly saves time getting something out the door.
[Edit: I decline to say how these bugs came about. Maybe the programmer of the first one was clinically insane, and for the second one there are some restrictions on the port used, in order to workaround some weird network setup known to occur, and the socket is supposed to have been created via some de-weirdifying API whose existence is mentioned in the general sockets docs, but they neglect to require its use. Clearly in both these cases the programmer has been very careless. But that doesn't affect the point: the examples don't need to be realistic, since if you don't catch bugs that only a very careless programmer would make, then you won't catch all the actual bugs in your code unless you never have a bad day, make a crazy typo, etc.]
I guess it depends on how thorough you want your testing to be. If you just want to sanity-check the common scenarios, then by all means, just give the testers / pizza-eaters the application and tell them to go crazy.
However, if you'd like to have a chance at finding edge cases, performance or load issues, or a whole lot of other issues that hide in the depths of your code, you'd probably be better off hiring testers who know how and when to use white box techniques.
Your call.
IMHO, I think the industry view of testers is completely wrong.
Think about it... you have two plumbers. One is extremely experienced, knows all the rules and the building codes, and can quickly look at something and know whether the work is done right or not. The other plumber is good, and gets the job done reliably.
Which one would you want to do the final inspection to make sure you don't come home to a flooded house? In fact, in what other industry do they allow someone who knows hardly anything about the system they are inspecting to actually do the inspection?
I have seen the bar for QA go up over the years, and that makes me happy. In time, QA may become something that devs aspire to be.
In short, not only should they be familiar with the code being tested, but they should have an understanding that rivals the architects of the product, as well as be able to effectively interface with the product owner(s) / customers to ensure that what is being created is actually what they want. But now I am getting into a whole separate conversation...
Will it happen? Probably sooner than you think. I have been able to reduce the number of people needed to do QA, increase the overall effectiveness of the team, and increase the quality of the product simply by hiring very skilled people with dev / architect backgrounds with a strong aptitude for QA. I have lower operating costs, and since the software going out is higher quality, I end up with lower support costs. FWIW ... I have found that while I can backfill the QA guys effectively into a dev role when needed, the opposite is almost always not true.
If there is time, a tester should definitely go through the developer's code. This way, you can improve your tests to get better coverage.
So, if you write your black box tests by looking at the spec, and you think you will have time left over after executing all of those, then going through the code cannot be a bad idea.
Basically it all depends on how much time you have. Another thing you can do to improve coverage is to look at the developers' design documents. Those should give you a good idea of what the code is going to look like.
Testers have the advantage of being familiar with both the dev code and the test code!
I would say they don't need to know the internal code details at all. However they do need to know the required functionality and system rules in full detail - like an analyst. Otherwise they won't test all the functionality, or won't realise when the system misbehaves.
For user acceptance testing the tester does not need to know the internal code details of the app. They only need to know the expected functionality and the business rules. When a bug is reported, whoever is fixing it should know the inter-dependencies between the various features.