I am working on the black box test cases as part of a Software Test Document and I am not quite sure how to do it. My professor states that we don't need to provide actual results. I am just confused as to what I am supposed to do and how. Are there any good examples out there that I can reference? I looked at IEEE 829, but that's not really helpful.
Perhaps your professor is asking you to apply black box design techniques to derive test cases for certain functionality or requirements.
Some examples:
equivalence partitioning
state transition
boundary value analysis
pairwise testing
Definition:
Black box testing is testing, either functional or non-functional, without reference to the internal structure of the component or system. In this method the internal structure of the program is not considered: the tester provides a set of inputs to the program and tests whether the program produces the expected output or not.
This method is called black box because the tester is not aware of the internals of the software program. The software program is like a black box, inside which the tester cannot see.
BLACK BOX TESTING TECHNIQUES
Following are some techniques that can be used for designing black box tests:
Equivalence partitioning
Equivalence Partitioning is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
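For instance, a minimal sketch in Python (the 1..100 quantity rule and the function are invented for illustration, not taken from any particular system):

```python
def accept_quantity(qty):
    """Hypothetical requirement: order quantity must be between 1 and 100 inclusive."""
    return 1 <= qty <= 100

# Equivalence partitioning: one representative value per partition covers that partition.
assert accept_quantity(-5) is False   # invalid partition: below 1
assert accept_quantity(50) is True    # valid partition: 1..100
assert accept_quantity(250) is False  # invalid partition: above 100
```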
Boundary Value Analysis
Boundary Value Analysis is a software test design technique that involves determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
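Continuing the same invented quantity rule, a boundary value analysis sketch picks the values at and immediately outside each boundary:

```python
def accept_quantity(qty):
    """Hypothetical requirement: order quantity must be between 1 and 100 inclusive."""
    return 1 <= qty <= 100

# Boundary value analysis: each boundary plus the values just outside it.
assert accept_quantity(0) is False    # just below the lower boundary
assert accept_quantity(1) is True     # lower boundary (valid)
assert accept_quantity(100) is True   # upper boundary (valid)
assert accept_quantity(101) is False  # just above the upper boundary
```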
Graph Based Testing Methods
Every application is built up from a set of objects. All such objects are identified and a graph is prepared. From this object graph, each object relationship is identified and test cases are written accordingly to discover errors.
Error Guessing
This is based purely on the previous experience and judgment of the tester. Error guessing is the art of guessing where errors may be hidden. There are no specific tools or rules for this technique; the tester writes test cases aimed at the places where defects are most likely to lurk.
Example of Black Box Testing
A tester, without knowledge of the internal structure of a website, tests the web pages by using a browser, providing inputs (e.g. clicks, keystrokes) and verifying whether the output produced is the expected output.
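A minimal sketch of that kind of test in Python, assuming Selenium WebDriver and a hypothetical login page (the URL, element names and expected text are invented):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Drive the page purely through the browser, as a user would.
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")                       # hypothetical page
    driver.find_element(By.NAME, "username").send_keys("alice")   # input: keystrokes
    driver.find_element(By.NAME, "password").send_keys("s3cret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()  # input: click

    # Verify only the externally visible result, not how it was produced.
    assert "Welcome" in driver.page_source
finally:
    driver.quit()
```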
Black box testing is a software testing method wherein testers are not required to know the code or internal structure of the software. The method relies on exercising the software with various inputs and validating the results against the expected output. You can write the Software Test Document using various black box techniques like equivalence partitioning, state transition, boundary value analysis, etc., depending on your application's scope.
Related
Is testing based on an API (like a Javadoc) a black box or grey box test?
What I think
I think it is grey box testing.
Why
A black box test is when we DON'T have knowledge of the system and its inner workings. However, since we are given the API, we know the return types, the parameters passed, etc., so we have a general, albeit basic, understanding of what each method should do and of the inner workings of the system.
Also, recall the meaning of grey box testing: a test is designed based on knowledge of algorithms, architectures, internal states, or other high-level descriptions of the program's behavior.
Since we have the API, we can design some test cases with relatively high/medium coverage.
API Testing is not inherently black, grey, or white-box testing. As you say, it's all about the knowledge. If I'm working with the API in the same way anyone in the public would, we could call that black box testing because I'm at the same knowledge level. On the other hand, if I'm an internal tester and I can open up the source code and ask the developers questions, it's white-box. And honestly, grey is just in-between.
All that said, there is no strict standard definition for these terms, and therefore no real demarcation line for when testing switches from one to the other.
As long as your testing works only with the inputs and outputs of the API, it is black box testing.
When you start to also look at source code coverage while testing the API, it becomes grey box testing, or even white box.
I am looking for:
A general explanation of the different types/branches of automation, particularly in regard to computers and programming.
More specifically, what type of automation would writing a program to automatically fill out an online form be considered?
I haven't been able to find a solid answer online, because most results are about types of machine automation.
White box = structure-based automated testing
You know how the application works from a technical perspective. You might know the workflow (you can see into the system). E.g. structure-based test design to achieve 100% coverage [code coverage, decision coverage, and statement coverage; decision tables or state transition testing].
Black box = specification-based automated testing
You don't know how the application works, but you know what the expected outcome should be. (You view the software as a black box with inputs and outputs, but have no knowledge of how the system or component is structured inside the box. Here, the tester concentrates on WHAT the software does, not HOW it does it.) E.g. equivalence partitioning: only test one representative condition from each partition.
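To make the contrast concrete, here is a minimal sketch (the shipping rule and values are invented): the black box cases come from the stated requirement alone, while the white box cases come from reading the code and covering both branches of its decision.

```python
def shipping_cost(order_total):
    """Requirement (hypothetical): orders of 50 or more ship free, otherwise shipping is 5."""
    if order_total >= 50:
        return 0
    return 5

# Black box: derived from the requirement alone, one value per partition.
assert shipping_cost(80) == 0    # partition: order_total >= 50
assert shipping_cost(20) == 5    # partition: order_total < 50

# White box: derived by reading the code, so the >= decision is exercised
# on both its true and false branches, right at the boundary.
assert shipping_cost(50) == 0    # decision is true exactly at the boundary
assert shipping_cost(49) == 5    # decision is false just below the boundary
```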
I've searched the web, but each source says something different.
So I've made two kinds of tests. The first is the 'data cycle test' from TMap and the second an input-output black box test.
Now I know that a black box test checks the input-output values without looking at the code.
Below is a template of a Black box test:
Nr. | Definition | Expected value | Actual value
But TMap says that a black box test is a collection of different kinds of test techniques, like the 'data cycle test'.
So what is a black box test exactly? Is it ONE test technique or a collection of test techniques? And if it is a collection of test techniques, what is this expected-vs-actual technique called?
Black Box Testing:
Approach to testing where the program is considered as a black-box.
Testing based solely on analysis of the requirements [specification, user documentation, etc.]
Also called:
Functional testing (Testing all the features)
Data-Driven Testing (Same action for different set of data)
I/O-Driven Testing
Black box testing applies to all levels of testing (e.g. unit, component and system) and is conducted during integration, system and acceptance testing.
Test case design methods:
Commonly used methods:
Equivalence partitioning: the process of dividing the input domain into valid and invalid classes and picking one representative value per class, which reduces the number of test cases.
Boundary value analysis: the process of checking inputs at the boundaries, plus one value just below and one just above each boundary (a combined sketch of these two methods follows below).
Error guessing: an ad hoc approach, based on intuition and experience, to identify the tests that are likely to expose errors.
Reference: http://en.wikipedia.org/wiki/Exploratory_testing
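As a combined sketch of the first two methods (the 18..65 age rule, the function and the pytest usage are my own invention, not part of the answer), the partitions and boundary values translate directly into a parametrised test:

```python
import pytest

def accepts_age(age):
    """Hypothetical system under test: ages 18..65 inclusive are valid."""
    return 18 <= age <= 65

# Equivalence partitioning: one value per class (invalid-low, valid, invalid-high).
# Boundary value analysis: the boundaries 18 and 65 plus the values just outside them.
@pytest.mark.parametrize("age, expected", [
    (5, False),    # invalid partition: below the range
    (40, True),    # valid partition: inside the range
    (90, False),   # invalid partition: above the range
    (17, False),   # just below the lower boundary
    (18, True),    # lower boundary
    (65, True),    # upper boundary
    (66, False),   # just above the upper boundary
])
def test_accepts_age(age, expected):
    assert accepts_age(age) is expected
```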
I always thought of it using an analogy. Imagine you’re a mechanic testing whether a car engine works.
Black box testing is like having the hood/bonnet closed, getting in the car and pressing all the buttons and pedals and driving it around to see if it all works correctly. You might not know what type of engine is in the car or exactly how that specific engine works, but you can test whether the engine is working as you’d expect it to by messing around with all the external parts which interact with the engine.
Black box testing is specification-based testing. There are various black box testing techniques, like:
1. Equivalence Partitioning
2. Boundary Value Analysis
3. Decision Table
4. State Transition
5. Use Case Testing
Black box testing is a dynamic testing technique. In this type of testing the tester does not know about the code; he or she tests on the basis of inputs and outputs. Both functional and non-functional testing are included. A small decision table sketch follows below.
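Here is that decision table sketch (the loan rule, conditions and function are invented for illustration):

```python
# Hypothetical decision table for a loan approval rule:
#
#   Conditions            R1     R2     R3     R4
#   good credit score?    Y      Y      N      N
#   income >= 30000?      Y      N      Y      N
#   Action: approve?      Y      N      N      N
#
# Each column (rule) becomes one black box test case.

def approve_loan(good_credit, income):
    """Hypothetical system under test implementing the table above."""
    return good_credit and income >= 30000

assert approve_loan(True, 40000) is True    # R1
assert approve_loan(True, 20000) is False   # R2
assert approve_loan(False, 40000) is False  # R3
assert approve_loan(False, 20000) is False  # R4
```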
Can you give me some examples in which black box testing gives the impression that "everything is OK" but white box testing might uncover an error? And examples where white box testing gives the impression that "everything is OK" but black box testing might uncover an error?
Thanks in advance.
Black box testing can miss pretty much anything that isn't clearly documented or intuitive. For example, in this SO answer entry section, I have a toolbar that I can "test", but without taking a look at the code I may not discover that I need to test the hotkeys, or understand how highlighted text responds to bold and italic attributes in random combinations. I can experiment and figure this out, but it's not as efficient.
In larger applications, control flow issues are often missed: think of obscure logic flows, or even rarely executed case statements.
However, if you do white box testing only, usability is typically the first thing to suffer. A perfectly functional piece of software can still be difficult to use, have unaligned UI elements, etc.
Why do you ask?
I recently came across it while studying for an exam; wish me luck.
Let's suppose you, as a programmer, are keeping track of users logging into your website, and the counter you have used is a 16-bit unsigned integer, whose maximum value is 65,535. If your number of users exceeds the range of the type, a black box test might be unable to detect what is going on inside, but a white box test will.
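A tiny sketch of that failure mode (the counter and numbers are invented; the 16-bit mask just mimics a fixed-width integer, since plain Python ints never overflow): a black box check that the site still responds would not notice the wrap-around, while reading the code makes the narrow counter type an obvious thing to test.

```python
# Hypothetical login counter stored in 16 bits.
counter = 0
for _ in range(70_000):            # more logins than 16 bits can represent
    counter = (counter + 1) & 0xFFFF

print(counter)                     # 4464, not 70000: the counter silently wrapped around
```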
For a specific input an error occurs internally, yet "correct" output is produced, so a black box test passes. The internal error results in:
Improper data placed in a global data area;
Improper flags that will be tested in a subsequent series of tests;
Improper hardware control that can only be uncovered during system test.
Error detection by white box testing that black box testing would miss; white box testing includes:
Testing to ensure that all independent paths within a module will be executed at least once.
Testing to exercise all logical decisions on their true and false branches.
Testing to ensure that all loops execute at their boundaries and within their operational bounds.
Error detection by black box testing that white box testing would miss; black box testing includes:
Testing for interface functionality.
Testing system behavior and performance.
Testing for classes of input.
Could you give me some examples of when white box testing would find errors that black box testing will not?
Neither is necessarily better than the other. A black box approach tends to be a user-focused approach, so it is a good way to ensure the usability and correctness of an application from a user perspective. The drawback of testing from just a black box perspective is that many code paths may remain unexercised. This is where white box testing comes into play. Using both together is frequently referred to as grey box testing, and it allows you to build user-focused scenarios as well as verify that you are getting good code coverage and efficient use of your test cycles.
A couple of good resources for additional information are How We Test Software at Microsoft and Testing Computer Software.
Black box: here, verification is based on the requirements of the design. Thus, anything external to the design, and the performance of the design in terms of its requirements, must be verified. In terms of assertions, that means the interfaces and responses. They can also include assumptions if the configurations are fixed, and coverage of important test cases with reference to the requirements. The implementation of the design is not considered. These assertions are typically written by a verification engineer, not by the designer, and they can, and should, be written prior to the actual design tasks.
White box: this deals with the actual implementation. Typically, a designer may add information about assumptions, and assertions about expected results particular to the design. For example, if the design uses a FIFO, it would be good to add assertions that the FIFO never reads a value when it's empty, or pushes data when it's full. If the design has an EDAC, assertions should be added to check that the EDAC is indeed performing its duties. These assertions are typically written by the designer, and they are important.
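A rough software analogue of those FIFO assertions (the class and capacity are invented; in a real design these would be SystemVerilog/VHDL assertions rather than Python):

```python
from collections import deque

class BoundedFifo:
    """Toy FIFO with the white box invariants asserted inside the implementation."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self._items = deque()

    def push(self, value):
        # White box assertion: never push when the FIFO is full.
        assert len(self._items) < self.capacity, "push on full FIFO"
        self._items.append(value)

    def pop(self):
        # White box assertion: never pop when the FIFO is empty.
        assert self._items, "pop on empty FIFO"
        return self._items.popleft()

# A black box test only sees push/pop behaviour from the outside:
fifo = BoundedFifo(capacity=2)
fifo.push(1)
fifo.push(2)
assert fifo.pop() == 1 and fifo.pop() == 2
```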
See the Wikipedia entry on Software testing. I think the most important point regarding white-box vs. black-box is:
White box testing methods can also be used to evaluate the completeness of a test suite that was created with black box testing methods. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.
Basically, white box testing allows you to test execution paths that you might have overlooked with black-box testing simply because you wouldn't have known they existed.