Is JUnit black-box or white-box testing? I think it is white-box, but I am not sure. I have been looking into this but can't find a clear answer. Even a simple discussion of the question would be useful.
The use of the word "unit" in the name of "JUnit" might indicate that JUnit is only suitable for unit testing, and since unit testing is practically synonymous with white-box testing, your suspicion is in the right direction. However, things are slightly more nuanced.
Unit testing, when done according to the book, is necessarily white-box testing in all cases except when testing units that have no dependencies.
In the case of units that have no dependencies, there is no difference between unit testing and integration testing (there are no dependencies to integrate), so the notion of white-box vs. black-box testing is inapplicable.
In the case of units that do have dependencies, unit testing is necessarily white-box testing. To test a unit and only that unit, you are not allowed to test it in integration with its dependencies; you have to strip all of its dependencies away and replace them with mocks. In doing so you are claiming to know not only what the dependencies of your unit are, but, more importantly, precisely how it interacts with them: which methods it calls, with what parameters, and so on.
However, despite "unit" being part of its name, JUnit by itself is not limiting you to Unit testing, so it does not impose either a white-box nor a black-box approach. You can use JUnit to do any kind of testing that you like.
It is the addition of a mocking framework such as JMock or Mockito that makes your tests necessarily of the white-box kind.
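To make that concrete, here is a minimal sketch using Mockito and JUnit 5; the OrderService and PaymentGateway types are invented for illustration. Note how the test encodes knowledge of exactly which method the unit calls on its dependency:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

// Hypothetical types, only to illustrate the point.
interface PaymentGateway {
    boolean charge(String account, long amountInCents);
}

class OrderService {
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    boolean placeOrder(String account, long amountInCents) {
        return gateway.charge(account, amountInCents);
    }
}

class OrderServiceWhiteBoxTest {
    @Test
    void placeOrderChargesTheGateway() {
        PaymentGateway gateway = mock(PaymentGateway.class);
        when(gateway.charge("acct-1", 500L)).thenReturn(true);

        boolean accepted = new OrderService(gateway).placeOrder("acct-1", 500L);

        assertTrue(accepted);
        // This verify call is the white-box part: the test asserts *how* the
        // unit talks to its dependency, not just what the unit returns.
        verify(gateway).charge("acct-1", 500L);
    }
}
```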
When I use JUnit I only do what I call incremental integration testing: first I test all units with no dependencies, then I do integration testing on units whose dependencies have already been tested, and so on until everything has been tested. In some cases I substitute some of the dependencies with special implementations geared towards testing (for example, HSQLDB instead of an actual on-disk RDBMS), but I never use mocks. Therefore, I never do unit testing except in the fringe case of units with no dependencies, where, as I have already explained, there is no distinction between unit testing and integration testing. Thus I never do white-box testing, only black-box testing, and I use JUnit for all of it. (Or my own testing platform, which is largely compatible with JUnit.)
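As an illustration of that kind of "special implementation geared towards testing", here is a rough sketch of a JUnit test talking to an in-memory HSQLDB over plain JDBC, assuming the HSQLDB jar is on the test classpath. In a real suite the code under test would be a DAO or repository class rather than raw SQL, and the table here is made up:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import org.junit.jupiter.api.Test;

class CustomerStorageIntegrationTest {

    @Test
    void storedCustomerCanBeReadBack() throws Exception {
        // The in-memory HSQLDB stands in for the production RDBMS; the test
        // still goes through a real JDBC connection, so nothing is mocked.
        try (Connection db = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "SA", "")) {
            try (Statement ddl = db.createStatement()) {
                ddl.execute("CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100))");
                ddl.execute("INSERT INTO customer VALUES (1, 'Alice')");
            }
            try (Statement query = db.createStatement();
                 ResultSet rs = query.executeQuery("SELECT name FROM customer WHERE id = 1")) {
                rs.next();
                assertEquals("Alice", rs.getString("name"));
            }
        }
    }
}
```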
Most of the industry out there appears to be using JUnit both for extensive unit testing (white-box testing) and for integration (black-box) testing.
Based on the definition of white-box testing, JUnit tests fall into this form of testing when they use its techniques: for example, we create tests based on the code itself, or on information that lets us exercise all possible pathways. This includes not only correct inputs but also incorrect inputs, so that error handlers can be verified as well.
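For example, a minimal JUnit 5 sketch that exercises both a correct input and an incorrect one; the AgeParser class is hypothetical and only there to show the error-handler path:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical parser used only to illustrate exercising both paths.
class AgeParser {
    static int parse(String text) {
        int age = Integer.parseInt(text);
        if (age < 0) {
            throw new IllegalArgumentException("age must not be negative");
        }
        return age;
    }
}

class AgeParserTest {
    @Test
    void acceptsValidInput() {
        assertEquals(42, AgeParser.parse("42"));
    }

    @Test
    void rejectsNegativeInput() {
        // The incorrect-input path: the error handler itself is under test.
        assertThrows(IllegalArgumentException.class, () -> AgeParser.parse("-1"));
    }
}
```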
In the last few days I have been studying testing with Jest, but I don't understand the following:
When I write integration tests, should I not use mocks? Are mocks used only in unit tests?
UPDATE
Today, in my company, the approach we follow is: unit tests always mock all external data access, and integration tests should not mock.
It is interesting to associate integration tests with the HLG (staging) environment, because that makes it easy to discover what broke the software and where.
Mocks are a kind of Test Double - a test-specific replacement of a dependency, for purposes of making automated tests deterministic.
There's no universally accepted formal definition of what constitutes a unit test, but in this context, I find the following definition (essentially my own wording) useful:
A unit test is an automated test that tests a unit in isolation of its dependencies.
This definition, however, conveniently avoids defining what a unit is, but that's less important in this context.
Likewise, we can define an integration test as an automated test that exercises the System Under Test (SUT) with its real dependencies. Thus, instead of replacing the database dependency with a Test Double, the test exercises the SUT integrated with a real database, and so on.
Thus, with this view of integration testing, no Test Doubles are required because all real dependencies are integrated.
There's another view of integration testing that considers integration testing the exercise of various software components (units, if you will) with each other, while still replacing out-of-process resources like databases or web services with Test Doubles. This is often easier to accomplish, and can be a valuable technique, but whether you decide to call these unit tests or integration tests is largely a question of personal preference.
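As a minimal sketch of that second view (all names below are invented): two real in-process components are integrated with each other, while a hand-written in-memory fake stands in for the out-of-process database:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

// Hypothetical abstraction over the out-of-process database.
interface CustomerRepository {
    void save(String id, String name);
    String findName(String id);
}

// Test Double: an in-memory fake replacing the real database.
class InMemoryCustomerRepository implements CustomerRepository {
    private final Map<String, String> rows = new HashMap<>();
    public void save(String id, String name) { rows.put(id, name); }
    public String findName(String id) { return rows.get(id); }
}

// Two real units integrated with each other.
class RegistrationService {
    private final CustomerRepository repository;
    RegistrationService(CustomerRepository repository) { this.repository = repository; }
    void register(String id, String name) { repository.save(id, name.trim()); }
}

class GreetingService {
    private final CustomerRepository repository;
    GreetingService(CustomerRepository repository) { this.repository = repository; }
    String greet(String id) { return "Hello, " + repository.findName(id); }
}

class RegistrationAndGreetingTest {
    @Test
    void registeredCustomerIsGreetedByName() {
        CustomerRepository repository = new InMemoryCustomerRepository();
        new RegistrationService(repository).register("42", "  Alice  ");
        assertEquals("Hello, Alice", new GreetingService(repository).greet("42"));
    }
}
```

Whether you call such a test a unit test or an integration test is, as noted above, largely a matter of vocabulary.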
Unfortunately, there's no universally accepted consistent definition of these terms. I usually try to stick with the vocabulary documented in xUnit Test Patterns, which is the most comprehensive and internally consistent body of work on the topic (that I know of).
From the ISTQB definition, integration testing is “A test level that focuses on interactions between components or systems.”
So you can have integration tests between units, between different components, or between subsystems. You may also integrate systems of systems.
You can read about unit testing on Wikipedia.
So you can also use a unit test framework (with mocks/stubs) to do integration testing, but integration testing of a whole application usually requires a full environment setup, which a unit test framework cannot provide.
Here are my 2 cents:
Unit tests - always use mocks. The "unit" of test is a method.
Integration tests - never use mocks. The "unit" of test is a class.
End-to-end tests - uses the actual program. The "unit" of test is a single "happy path".
I have a C# API. Now I also need a Java implementation of it, and then I will have to maintain both APIs.
To ensure both implementations don't diverge with time, I'd like to share the same tests between them (not necessarily the unit tests, but at least the end-to-end tests).
So far, I can think of two different ways to do it:
Put both implementations behind a REST API, and test through this API. The advantage is that I can have the exact same tests for both implementations. The drawback is that it’s a bit heavy to put in place.
Use Behavior Driven Development tests (Cucumber for Java, SpecFlow for C#), and use the same feature files for the different implementations. The drawback is that I’ll have to provide an implementation of the steps for each language (a sketch of the Java side follows below), so there’s the risk that the tests are actually different in subtle ways.
I would be grateful for any idea to handle it in a more satisfactory way.
IMHO the facade in front of both APIs is the better option. It'll allow you to reuse the same test suites above unit testing (integration, system, E2E, UAT).
The drawback is that it’s a bit heavy to put in place.
I doubt that having two BDD frameworks will be less effort. Your language bindings will have to be different (e.g. *.feature.cs and *.feature.java files). The unit test frameworks that you'll have to put in place (NUnit, MSTest, JUnit, TestNG, etc.) will require separate CI server handling.
At the same time, having only one implementation for all your tests will be a lot more feasible. It's not mandatory to write the tests in the same language as the SUT. All the tests will stay in sync with the requirements of both APIs, regardless of the language.
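As a rough sketch of what I mean, a single JUnit suite could target whichever implementation is currently deployed behind the facade, selected by a base URL. The /ping endpoint and the api.baseUrl property below are placeholders, not something from your API:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

import org.junit.jupiter.api.Test;

class SharedApiContractTest {

    // Points at whichever implementation is deployed for this run:
    // the Java service or the C# service behind the same REST facade.
    private static final String BASE_URL = System.getProperty("api.baseUrl", "http://localhost:8080");

    @Test
    void pingEndpointAnswersOk() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(BASE_URL + "/ping")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertEquals("pong", response.body());
    }
}
```

Because both implementations sit behind the same HTTP contract, the exact same test classes run against either one.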
I have been trying to understand these but every article, wiki etc. says something else. My understanding is that:
Functional test means testing a new piece of functionality in isolation and against the docs. Or maybe also exploratory testing?
Functional testing means validating the application as a whole against the specifications, from a functional point of view only.
And according to one book, functional testing is a part of system testing, when the whole app is tested and checked against the functional requirements or design documents.
Could anyone experienced in this field make it clear for me?
Thank you
Functional testing is a type of testing, whereas system testing defines the scope of the test. The two are orthogonal concepts, though functional tests are usually performed on the system as a whole (hence the confusion in some people's minds).
Functional test - a test for a single function of the system
Functional testing - performing functional tests
Functional tests can be performed on any part of the system - e.g. individual classes or clusters of classes, sub-systems - or the system as a whole.
System testing - performing tests on the system as a whole, as opposed to a unit or module. These might be functional tests, usability tests, exploratory tests etc.
Also note that the distinction between tests and requirements documentation is blurred with frameworks such as Concordion allowing documentation to define and perform functional tests.
What is the real difference between acceptance tests and functional tests?
What are the highlights or aims of each? Everywhere I read they are ambiguously similar.
In my world, we use the terms as follows:
functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?
For this type of testing we have test cases that cover all the possible scenarios we can think of, even if that scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time; it doesn't have to be "production" caliber, so long as it's usable.
acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?
This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":
Reliability, Availability: Validated via a stress test.
Scalability: Validated via a load test.
Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?
Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.
Maintainability: Validated via demonstration of how we will deliver software updates/patches.
Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.
I like the answer of Patrick Cuff. What I'd like to add is the distinction between a test level and a test type, which was an eye-opener for me.
test levels
A test level is easy to explain using the V-model. Each test level has its corresponding development level and a typical place in time: it is executed at a certain phase of the development life cycle. For example:
component/unit testing => verifying detailed design
component/unit integration testing => verifying global design
system testing => verifying system requirements
system integration testing => verifying system requirements
acceptance testing => validating user requirements
test types
A test type is a characteristic: it focuses on a specific test objective. Test types emphasize particular quality aspects, also known as technical or non-functional aspects. Test types can be executed at any test level. As test types, I like to use the quality characteristics mentioned in ISO/IEC 25010:2011.
functional testing
reliability testing
performance testing
operability testing
security testing
compatibility testing
maintainability testing
transferability testing
To make it complete: there is also something called regression testing. This is an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product. It is in fact a subset of the tests you defined for each test level. If there is a small bug fix in your product, one doesn't always have the time to repeat all tests; regression testing is the answer to that.
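With JUnit 5, such a regression subset can be marked with tags; here is a minimal sketch (the invoice rule itself is invented):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class InvoiceTotalTest {

    // Hypothetical pricing rule: a discount may never push the total below zero.
    private long totalAfterDiscount(long totalCents, long discountCents) {
        return Math.max(0, totalCents - discountCents);
    }

    // Part of the regression subset: this rule broke in an earlier release.
    @Tag("regression")
    @Test
    void discountNeverMakesTotalNegative() {
        assertEquals(0L, totalAfterDiscount(500, 700));
    }

    @Test
    void discountIsSubtractedFromTotal() {
        assertEquals(300L, totalAfterDiscount(500, 200));
    }
}
```

A build can then run only the tagged subset, for example via the JUnit Platform's tag filtering (Maven Surefire's groups setting or Gradle's includeTags), though how you wire that up is project-specific.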
The difference is between testing the problem and the solution. Software is a solution to a problem, both can be tested.
The functional test confirms the software performs a function within the boundaries of how you've solved the problem. This is an integral part of developing software, comparable to the testing that is done on mass produced product before it leaves the factory. A functional test verifies that the product actually works as you (the developer) think it does.
Acceptance tests verify the product actually solves the problem it was made to solve. This can best be done by the user (customer), for instance performing his/her tasks that the software assists with. If the software passes this real world test, it's accepted to replace the previous solution. This acceptance test can sometimes only be done properly in production, especially if you have anonymous customers (e.g. a website). Thus a new feature will only be accepted after days or weeks of use.
Functional testing - test the product, verifying that it has the qualities you've designed or built (functions, speed, errors, consistency, etc.).
Acceptance testing - test the product in its context; this requires (a simulation of) human interaction and tests that it has the desired effect on the original problem(s).
The answer is a matter of opinion. I have worked on a lot of projects, as test manager, issue manager and in various other roles, and the descriptions in the different books differ, so here is my variation:
functional testing: take the business requirements and test all of it well and thoroughly from a functional viewpoint.
acceptance testing: the "paying" customer does whatever testing he likes so that he can accept the delivered product. It depends on the customer, but usually the tests are not as thorough as the functional testing, especially if it is an in-house project, because the stakeholders review and trust the test results from the earlier test phases.
As I said, this is my viewpoint and experience. Functional testing is systematic, while acceptance testing is rather the business department testing the thing.
Audience. Functional testing is to assure members of the team producing the software that it does what they expect. Acceptance testing is to assure the consumer that it meets their needs.
Scope. Functional testing only tests the functionality of one component at a time. Acceptance testing covers any aspect of the product that matters to the consumer enough to test before accepting the software (i.e., anything worth the time or money it will take to test it to determine its acceptability).
Software can pass functional testing, integration testing, and system testing; only to fail acceptance tests when the customer discovers that the features just don't meet their needs. This would usually imply that someone screwed up on the spec. Software could also fail some functional tests, but pass acceptance testing because the customer is willing to deal with some functional bugs as long as the software does the core things they need acceptably well (beta software will often be accepted by a subset of users before it is completely functional).
Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.
In my view the main difference is who says if the tests succeed or fail.
A functional test tests that the system meets predefined requirements. It is carried out and checked by the people responsible for developing the system.
An acceptance test is signed off by the users. Ideally the users will say what they want to test, but in practice it is likely to be a subset of the functional tests, as users don't invest enough time. Note that this view comes from the business users I deal with; other sets of users, e.g. in aviation and other safety-critical fields, might well not have this difference.
Acceptance testing:
... is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.
Though this goes on to say:
It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing
with a "citation needed" mark.
Functional testing (which actually redirects to System Testing):
conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
So from this definition they are pretty much the same thing.
In my experience acceptance tests are usually a subset of the functional tests and are used in the formal sign-off process by the customer, while functional/system tests are those run by the developers or the QA department.
Acceptance testing is just testing carried out by the client, and includes other kinds of testing:
Functional testing: "this button doesn't work"
Non-functional testing: "this page works but is too slow"
For functional testing vs non-functional testing (their subtypes) - see my answer to this SO question.
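To illustrate that split in JUnit 5 terms, here is a minimal sketch; the page-rendering method is a hypothetical stand-in for driving the real page:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTimeout;

import java.time.Duration;

import org.junit.jupiter.api.Test;

class SearchPageTest {

    // Stand-in for the page under test; a real suite would drive the deployed app.
    private String renderSearchPage() {
        return "<html>results</html>";
    }

    @Test
    void functionalCheckPageRenders() {
        // Functional: does the page work at all?
        assertEquals("<html>results</html>", renderSearchPage());
    }

    @Test
    void nonFunctionalCheckPageIsFastEnough() {
        // Non-functional: it works, but is it fast enough?
        assertTimeout(Duration.ofMillis(200), () -> { renderSearchPage(); });
    }
}
```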
The relationship between the two:
Acceptance test usually includes functional testing, but it may include additional tests. For example checking the labeling/documentation requirements.
Functional testing is when the product under test is placed into a test environment that can produce the variety of stimuli (within the scope of the test) that the target environment typically produces, or even beyond, while the response of the device under test is examined.
For a physical product (not software) there are two major kinds of acceptance tests: design tests and manufacturing tests. Design tests typically use a large number of product samples that have passed the manufacturing test. Different consumers may test the design in different ways.
Acceptance tests are referred to as verification when the design is tested against the product specification, and as validation when the product is placed in the consumer's real environment.
They are the same thing.
Acceptance testing is performed on the completed system, in an environment as close as possible to the real production/deployment environment, before the system is deployed or delivered.
You can do acceptance testing in an automated manner, or manually.
I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between scenario tests and the number or distribution of other tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application I could have a single web test (or several, for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing a part of the scenario in a functional test or a unit test. In that case I normally never group the tests by scenario, but choose the functional grouping of tests I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" that is called "testScenario1" or maybe "testScenarioInsufficientCredit". I prefer to give my scenarios names.
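As a rough sketch of such a named scenario backed by a well-defined dataset (the account and checkout classes below are invented for illustration):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.Map;

import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

// Hypothetical account store and checkout component, named only for the example.
class AccountStore {
    private final Map<String, Long> balances;
    AccountStore(Map<String, Long> balances) { this.balances = balances; }
    long balanceOf(String account) { return balances.getOrDefault(account, 0L); }
}

class Checkout {
    private final AccountStore accounts;
    Checkout(AccountStore accounts) { this.accounts = accounts; }
    boolean purchase(String account, long priceInCents) {
        return accounts.balanceOf(account) >= priceInCents;
    }
}

class CheckoutScenarioTest {
    private Checkout checkout;

    @BeforeEach
    void loadScenarioData() {
        // The well-defined scenario dataset: one solvent and one broke customer.
        checkout = new Checkout(new AccountStore(Map.of("rich", 10_000L, "broke", 50L)));
    }

    @Test
    void testScenarioSufficientCredit() {
        assertTrue(checkout.purchase("rich", 2_000L));
    }

    @Test
    void testScenarioInsufficientCredit() {
        assertFalse(checkout.purchase("broke", 2_000L));
    }
}
```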
In addition to korsenvoid's response, in my experience scenario-based testing will often be automated, as it will be included in regression testing. Regression testing is regularly automated because doing it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI based automation tools. For example, I use TestComplete in this role with a scripting framework to good effect. Scenario tests are typically carried out from a user/client perspective which can be difficult to accurately replicate at a unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or Fitnesse.