Difference between acceptance test and functional test?

What is the real difference between acceptance tests and functional tests?
What are the highlights or aims of each? Everywhere I read, the definitions seem ambiguously similar.

In my world, we use the terms as follows:
functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?
For this type of testing we have test cases that cover all the possible scenarios we can think of, even if that scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time, it doesn't have to be "production" caliber, so long as it's usable.
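As a rough illustration (my own sketch, not part of the original answer; the discount rule and names are invented), a functional test at this level might exhaustively cover a business rule, including scenarios that are unlikely in the real world:

    import pytest

    # Hypothetical business rule: orders of 100 items or more get a 10% discount,
    # and a non-positive quantity is rejected.
    def order_discount(quantity):
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        return 0.10 if quantity >= 100 else 0.0

    # Functional (verification) tests: cover every scenario we can think of,
    # even the unlikely ones, aiming for maximum coverage of the rule.
    @pytest.mark.parametrize("quantity, expected", [
        (1, 0.0),        # smallest valid order
        (99, 0.0),       # just below the threshold
        (100, 0.10),     # exactly at the threshold
        (10_000, 0.10),  # unrealistically large, but still specified
    ])
    def test_order_discount(quantity, expected):
        assert order_discount(quantity) == expected

    def test_order_discount_rejects_invalid_quantity():
        with pytest.raises(ValueError):
            order_discount(0)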
acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?
This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":
Reliability, Availability: Validated via a stress test.
Scalability: Validated via a load test.
Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?
Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.
Maintainability: Validated via demonstration of how we will deliver software updates/patches.
Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
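To make the load-test point concrete, here is a minimal sketch (my own illustration, not from the original answer; the URL, concurrency level and threshold are placeholders you would replace with agreed values):

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    BASE_URL = "https://staging.example.com/health"  # placeholder for the production-like endpoint

    def timed_request(_):
        start = time.monotonic()
        with urlopen(BASE_URL, timeout=10) as response:
            assert response.status == 200
        return time.monotonic() - start

    def test_endpoint_under_concurrent_load():
        # Fire 50 concurrent requests and check the slowest response time
        # stays under an agreed threshold (both numbers are made up here).
        with ThreadPoolExecutor(max_workers=50) as pool:
            durations = list(pool.map(timed_request, range(50)))
        assert max(durations) < 2.0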
This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.

I like Patrick Cuff's answer. What I'd like to add is the distinction between a test level and a test type, which was an eye opener for me.
test levels
Test levels are easy to explain using the V-model; for example:
Each test level has its corresponding development level and a typical time characteristic: it is executed at a certain phase of the development life cycle.
component/unit testing => verifying detailed design
component/unit integration testing => verifying global design
system testing => verifying system requirements
system integration testing => verifying system requirements
acceptance testing => validating user requirements
test types
A test type is a characteristic; it focuses on a specific test objective. Test types emphasize quality aspects, also known as technical or non-functional aspects. Test types can be executed at any test level. As test types, I like to use the quality characteristics mentioned in ISO/IEC 25010:2011.
functional testing
reliability testing
performance testing
operability testing
security testing
compatibility testing
maintainability testing
transferability testing
To make it complete: there's also something called regression testing. This is an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product. It's in fact a subset of the tests you defined for each test level. If there's a small bug fix in your product, you don't always have time to repeat all tests; regression testing is an answer to that.
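For example (a sketch of one common convention, not something the answer prescribes; the test names are invented), with pytest you can tag the critical tests with a marker and run only that subset after a small bug fix:

    import pytest

    @pytest.mark.regression
    def test_invoice_total_includes_tax():
        # Critical behaviour we always want to re-check.
        assert round(100 * 1.21, 2) == 121.0

    def test_invoice_pdf_layout():
        # Less critical; skipped when we only have time for the regression subset.
        ...

    # Run only the regression subset:
    #   pytest -m regression
    # (registering the marker in pytest.ini avoids an "unknown marker" warning)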

The difference is between testing the problem and the solution. Software is a solution to a problem, both can be tested.
The functional test confirms the software performs a function within the boundaries of how you've solved the problem. This is an integral part of developing software, comparable to the testing done on mass-produced products before they leave the factory. A functional test verifies that the product actually works as you (the developer) think it does.
Acceptance tests verify that the product actually solves the problem it was made to solve. This can best be done by the user (customer), for instance by performing the tasks that the software assists with. If the software passes this real-world test, it's accepted to replace the previous solution. This acceptance test can sometimes only be done properly in production, especially if you have anonymous customers (e.g. a website). Thus a new feature will only be accepted after days or weeks of use.
Functional testing - test the product, verifying that it has the qualities you've designed or built (functions, speed, errors, consistency, etc.).
Acceptance testing - test the product in its context; this requires (simulating) human interaction and testing that it has the desired effect on the original problem(s).
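A hedged sketch of that contrast (the invoice domain and names are invented for illustration): the functional test checks a piece of the solution, while the acceptance-style check walks through the user's task; in practice the latter runs against the deployed product or is done by the user by hand:

    # Functional test: verifies the solution works the way the developer thinks it does.
    def filter_invoices(invoices, customer):
        return [inv for inv in invoices if inv["customer"] == customer]

    def test_filter_returns_only_matching_invoices():
        invoices = [{"customer": "ACME"}, {"customer": "Globex"}]
        assert filter_invoices(invoices, "ACME") == [{"customer": "ACME"}]

    # Acceptance-style check: walks through the user's actual task. The tiny
    # in-memory BillingApp below only stands in for the deployed system.
    class BillingApp:
        def __init__(self, invoices):
            self._invoices = invoices

        def find_invoice(self, customer, month):
            return next(i for i in self._invoices
                        if i["customer"] == customer and i["month"] == month)

    def test_clerk_can_look_up_last_months_invoice():
        app = BillingApp([{"customer": "ACME", "month": "2024-01", "total": 120}])
        assert app.find_invoice("ACME", month="2024-01")["total"] == 120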

The answer is a matter of opinion. I have worked on a lot of projects, in roles such as test manager and issue manager, and the descriptions in various books differ, so here is my variation:
functional-testing: take the business requirements and test all of them well and thoroughly from a functional viewpoint.
acceptance-testing: the "paying" customer does the testing he likes to do so that he can accept the delivered product. It depends on the customer, but usually the tests are not as thorough as the functional testing, especially if it is an in-house project, because the stakeholders review and trust the test results from earlier test phases.
As I said, this is my viewpoint and experience. Functional testing is systematic, while acceptance testing is rather the business department testing the thing.

Audience. Functional testing is to assure members of the team producing the software that it does what they expect. Acceptance testing is to assure the consumer that it meets their needs.
Scope. Functional testing only tests the functionality of one component at a time. Acceptance testing covers any aspect of the product that matters to the consumer enough to test before accepting the software (i.e., anything worth the time or money it will take to test it to determine its acceptability).
Software can pass functional testing, integration testing, and system testing, only to fail acceptance testing when the customer discovers that the features just don't meet their needs. This would usually imply that someone screwed up on the spec. Software could also fail some functional tests but pass acceptance testing, because the customer is willing to deal with some functional bugs as long as the software does the core things they need acceptably well (beta software will often be accepted by a subset of users before it is completely functional).

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.

In my view the main difference is who says if the tests succeed or fail.
A functional test tests that the system meets predefined requirements. It is carried out and checked by the people responsible for developing the system.
An acceptance test is signed off by the users. Ideally the users will say what they want to test, but in practice it is likely to be a subset of a functional test, as users don't invest enough time. Note that this view comes from the business users I deal with; other sets of users, e.g. in aviation and other safety-critical domains, might well not have this difference.

Acceptance testing:
... is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.
Though this goes on to say:
It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing
with a "citation needed" mark.
Functional testing (which actually redirects to System Testing):
conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
So from this definition they are pretty much the same thing.
In my experience acceptance tests are usually a subset of the functional tests and are used in the formal sign-off process by the customer, while functional/system tests will be those run by the developer/QA department.

Acceptance testing is just testing carried out by the client, and includes other kinds of testing:
Functional testing: "this button doesn't work"
Non-functional testing: "this page works but is too slow"
For functional testing vs non-functional testing (their subtypes) - see my answer to this SO question.

The relationship between the two:
Acceptance testing usually includes functional testing, but it may include additional tests, for example checking labeling/documentation requirements.
Functional testing is when the product under test is placed into a test environment that can produce a variety of stimuli (within the scope of the test), matching or even exceeding what the target environment typically produces, while the response of the device under test is examined.
For a physical product (not software) there are two major kinds of acceptance tests: design tests and manufacturing tests. Design tests typically use a large number of product samples that have passed the manufacturing test. Different consumers may test the design in different ways.
Acceptance tests are referred to as verification when the design is tested against the product specification, and as validation when the product is placed in the consumer's real environment.

They are the same thing.
Acceptance testing is performed on the completed system, in an environment as identical as possible to the real production/deployment environment, before the system is deployed or delivered.
You can do acceptance testing in an automated manner, or manually.

Related

Differences between User Acceptance Test and Test Case Scenario and Functional Test [closed]

In the context of Agile software development, what's the difference between User Acceptance Test (UAT), Test Case Scenario and Functional Test?
The members of the team I'm part of consider the three things to be different, but I see them as exactly the same thing. In fact, all of them are designed with the end user in mind.
There's a lot of different sorts of testing. Many of them overlap. Many use the same tools. Many are specializations of other more general terms. Often they blur together. People argue about the terminology all the time.
You're correct that they all have the end user in mind, but they are different.
User Acceptance Test
This is a specific form of an acceptance test where a subject-matter expert, ideally the client or their representative, tests the software. This is in addition to functional and acceptance testing done by QA. It's designed to simulate, as closely as possible, an actual end-user using the software; the tester is asked to perform a bunch of common tasks with the new system, but not given specific instructions nor coaching on how to do it.
For example, if you were creating a site for an airline, they might be asked to register, login, book a flight, make a payment, check in, check their flight status, and so on.
Functional Test
This is blackbox testing done by the QA role. It verifies the thing does what it's supposed to do; you give it inputs, you check the outputs. Typically this is testing against the specification and/or requirements document.
"Functional" here doesn't refer to code functions, but that the system functions as expected. Testing specific functions is unit testing.
They can be purely functional, "when I do X I get Y". They can be about resource use, "when I do X it uses no more than Y memory/time". Or about error checking, "when I give it garbage I get a well formed error". Anything that validates it meets the requirements.
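Sketched in code (the parser and the budgets are invented for illustration), those three flavours might look like this:

    import time
    import pytest

    def parse_age(text):
        # Hypothetical function under test.
        value = int(text)
        if not 0 <= value <= 150:
            raise ValueError("age out of range")
        return value

    def test_when_i_do_x_i_get_y():
        assert parse_age("42") == 42

    def test_stays_within_a_time_budget():
        start = time.monotonic()
        for _ in range(10_000):
            parse_age("42")
        assert time.monotonic() - start < 1.0  # made-up budget

    def test_garbage_input_gives_a_well_formed_error():
        with pytest.raises(ValueError):
            parse_age("not a number")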
Test Case Scenario
Sounds like Scenario Testing: this uses stories, similar to user stories, that help a tester work through a complicated testing scenario. Scenario testing tests complicated combinations of things which might arise during actual use and often cut across multiple systems.
An example of a test scenario might be: "in the middle of processing the system runs out of disk space; verify an admin is notified, that processing resumes once space is cleared, and that no data is lost".
A User Acceptance Test might use Scenario Testing.
These are my rules of thumb:
Unit testing: does this one function work?
Integration testing: do the functions work together?
Functional testing: does it function as required?
Acceptance testing: is it acceptable to the client?
Regression testing: does it still work like it used to?
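To illustrate the first three rules of thumb in code (a rough sketch with invented names and numbers, not a prescription); acceptance and regression testing are more about who runs the tests and when than about how they are written:

    # Unit: does this one function work?
    def vat(amount, rate=0.21):
        return round(amount * rate, 2)

    def test_vat_of_100_is_21():
        assert vat(100) == 21.0

    # Integration: do the functions work together?
    def invoice_total(net_amount):
        return net_amount + vat(net_amount)

    def test_invoice_total_includes_vat():
        assert invoice_total(100) == 121.0

    # Functional: does it function as required?
    # (example business requirement: an invoice for 200.00 totals 242.00 including VAT)
    def test_invoice_for_200_totals_242():
        assert invoice_total(200) == 242.0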
User Acceptance Testing is having business users trying out your app.
There is also acceptance testing done by QA when they check new functionality; you can call it Story Acceptance Testing to distinguish between the two. These are not necessarily functional tests (they could be security, performance testing, etc.).
A Test Case is a number of steps to check a small piece of functionality. It has Prerequisites, Steps, Expected Result, Actual Result. This is one of the ways of carrying out Functional Testing. Others could be exploratory testing or checklists.
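As a rough sketch of that structure (my own illustration; the field names follow the answer), such a test case record could be represented like this:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class TestCase:
        title: str
        prerequisites: List[str]
        steps: List[str]
        expected_result: str
        actual_result: str = ""   # filled in during execution

    login_case = TestCase(
        title="User can log in with valid credentials",
        prerequisites=["An active user account exists"],
        steps=["Open the login page", "Enter valid credentials", "Press 'Log in'"],
        expected_result="The dashboard is shown",
    )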
Test Scenario - steps that cover a bigger picture. Often they cover cases of how real users would use the app, but these are carried out by the QA team.
Functional Test - a test that checks the functionality, as opposed to e.g. performance. This can be a unit test as well, but since this terminology is mostly used by QA, when people talk about functional tests they usually mean functional system tests.
Note, that different authorities may use different definitions of the same terms. Check out Holes in testing terminology: Test Types and Test Levels. Since it's impossible to find the one true terminology it's more important that you use terms consistently within your team even if they are used differently in other companies and teams.
User acceptance testing is a process that obtains confirmation that a system meets agreed customer/product manager requirements.
Functional testing is the actual functionality test of the software. There can be many different types of testing, but in simple words it is testing that the functionality behaves as expected.
A test scenario is a higher-level grouping of test cases: a module is first divided into scenarios, and each scenario is then broken down into small, specific test steps with expected results, which are the test cases. So a test scenario is a group of test cases limited to a specific functionality or module.

Different Types of Testing (eg. Unit, Functional, Integration, etc.) Document [closed]

A few years ago I saw this great (PDF) document from Google. The document explained, in a single page, what all the various types of software testing mean (eg. what separates a functional test from an integration test from a unit test from a ...). It was a very handy reference, but of course I didn't save a link to it, and now I can't find it when I google for it (oh the irony of not being able to google a Google document).
Now I know there are great SO answers for exactly this question, but I was specifically looking for a single-page, print-formatted guide that I could hang in the office, rather than an SO answer.
Can anyone point me to either the Google document I'm thinking of, or any other good single-page breakdown of software testing types?
Software Testing Types
The software testing life cycle is the process that describes the flow of tests to be carried out on each product. The V-Model, i.e. the Verification and Validation Model, is a widely used way to structure this: it places the software development life cycle on one side and the software testing life cycle on the other. Checklists for software testers set a baseline that guides their day-to-day activities.
Black Box Testing: This is the process of giving input to the system and checking the output, without considering how the system generates the output. It is also known as Behavioral Testing.
Functional Testing: The software is tested for the functional requirements. This checks whether the application is behaving according to the specification.
Performance Testing: This testing checks whether the system performs properly according to the user's requirements. Performance testing relies on load and stress testing, applied internally or externally to the system.
Load Testing: In this type of performance testing, the load on the system is increased in order to check the performance of the system when higher loads are applied.
Stress Testing: In this type of performance testing, the system is tested beyond the normal expectations or operational capacity.
Usability Testing: This is also known as 'Testing for User Friendliness'. It checks the ease of use of an application.
Regression Testing: Regression testing is one of the most important types of testing; it checks whether a small change in any component of the application affects the unchanged components. This is done by re-executing previously run tests against the new version of the application.
Smoke Testing: It is used to check the testability of the application, and is also called 'Build Verification Testing or Link Testing'. That means, it checks whether the application is ready for further testing and working, without dealing with the finer details.
Sanity Testing: Sanity testing checks for the behavior of the system. This is also called Narrow Regression Testing.
Parallel Testing: Parallel testing is done by comparing results from two different systems like old vs new or manual vs automated.
Recovery Testing: Recovery testing is very necessary to check how fast the system is able to recover against any hardware failure, catastrophic problems or any type of system crash.
Installation Testing: This type of software testing identifies the ways in which the installation procedure can lead to incorrect results.
Compatibility Testing: Compatibility testing determines if an application under supported configurations performs as expected, with various combinations of hardware and software packages.
Configuration Testing: This testing is done to test for compatibility issues. It determines minimal and optimal configuration of hardware and software, and determines the effect of adding or modifying resources such as memory, disk drives, and CPU.
Compliance Testing: This checks whether the system was developed in accordance with standards, procedures, and guidelines.
Error-Handling Testing: This determines the ability of the system to properly process erroneous transactions.
Manual-Support Testing: This type of software testing covers the manual processes at the interface between people and the application system.
Inter-Systems Testing: This method tests the interfaces between two or more application systems.
Exploratory Testing: Exploratory testing is similar to ad-hoc testing, and is performed to explore the software features.
Volume Testing: This testing is done when a huge amount of data is processed through the application.
Scenario Testing: Scenario testing provides a more realistic and meaningful combination of functions, rather than artificial combinations that are obtained through domain or combinatorial test design.
User Interface Testing: This type of testing is performed to check how user-friendly the application is. The user should be able to use the application without any assistance from system personnel.
System Testing: This testing is conducted on a complete, integrated system to evaluate the system's compliance with the specified requirements. It checks whether the system meets its functional and non-functional requirements, and is also intended to test beyond the bounds defined in the software/hardware requirement specifications.
User Acceptance Testing: Acceptance testing is performed to verify that the product is acceptable to the customer and if it's fulfilling the specified requirements of that customer. This testing includes Alpha and Beta testing.
Alpha Testing: Alpha testing is performed at the developer's site by the customer in a closed environment. This is done after the system testing.
Beta Testing: This is done at the customer's site by the customer in the open environment. The presence of the developer, while performing these tests, is not mandatory. This is considered to be the last step in the software development life cycle as the product is almost ready.
White Box Testing: It is the process of giving the input to the system and checking, how the system processes the input to generate the output. It is mandatory for a tester to have the knowledge of the source code.
Unit Testing: Unit testing is done at the developer's site to check whether a particular piece / unit of code is working fine. It tests the unit of the program as a whole.
Static and Dynamic Analysis: In static analysis, it is required to go through the code in order to find out any possible defect in the code. Whereas, in dynamic analysis, the code is executed and analyzed for the output.
Statement Coverage: It assures that the code is executed in such a way that every statement of the application is executed at least once.
Decision Coverage: This ensures that each decision in the code is executed at least once with both a true and a false outcome.
Condition Coverage: In this type of software testing, each individual condition within a decision is evaluated as both true and false at least once.
Path Coverage: Each and every path within the code is executed at least once to achieve full path coverage, which is one of the important parts of white box testing. A small code example illustrating these coverage criteria follows this list.
Integration Testing: Integration testing is performed when various modules are integrated with each other to form a sub-system or a system. It mostly focuses on the design and construction of the software architecture. This is further classified into Bottom-Up Integration and Top-Down Integration testing.
Bottom-Up Integration Testing: Here the lowest level components are tested first and then the testing of higher level components is done using 'Drivers'. The entire process is repeated till the time all the higher level components are tested.
Top-Down Integration Testing: This is the opposite of the bottom-up approach: the top-level modules are tested first, and the lower-level branches of modules are tested step by step using 'Stubs', until the lowest-level modules are reached.
Security Testing: This confirms how well a system protects itself against unauthorized internal or external access and against willful damage to code or data. Security testing assures that the program is accessed by authorized personnel only.
Mutation Testing: In mutation testing, small changes (mutations) are deliberately introduced into the application's code, and the existing tests are re-run to check whether they detect the modified code. It is used to evaluate how effective the test suite is.
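As referenced above, here is a small sketch (my own example, not from the original answer) of what the white-box coverage criteria demand for one tiny function:

    def shipping_cost(weight, express):
        # Hypothetical function with one compound decision (two conditions).
        if weight > 10 or express:
            return 15.0
        return 5.0

    # Statement coverage: every statement executes at least once.
    def test_statement_coverage():
        assert shipping_cost(20, False) == 15.0  # executes the 'if' branch
        assert shipping_cost(1, False) == 5.0    # executes the fall-through return

    # Decision coverage: the whole 'if' decision evaluates to both true and false.
    def test_decision_coverage():
        assert shipping_cost(20, False) == 15.0  # decision true
        assert shipping_cost(1, False) == 5.0    # decision false

    # Condition coverage: each individual condition is true and false at least once.
    def test_condition_coverage():
        assert shipping_cost(20, False) == 15.0  # weight > 10 true, express false
        assert shipping_cost(1, True) == 15.0    # weight > 10 false, express true

    # Path coverage: every distinct path is executed; here the paths coincide with
    # the decision outcomes, but the number of paths grows quickly with more branching.
    # A mutation testing tool (e.g. mutmut for Python) would flip 'or' to 'and' and
    # check whether these tests notice the change.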
Use the following link to get the printed format:
Types of Software Testing
Maybe the following can help you:
http://www.kostcare.com/pdf/Testing%20at%20Different%20Phase%20of%20Software%20Development%20Life%20Cycle.pdf
http://ijcsi.org/papers/7-3-1-11-16.pdf
http://www.softwaretestinghelp.com/types-of-software-testing/
http://rajeevprabhakaran.wordpress.com/2008/11/20/different-types-of-testing/

which kind of testing is required for this scenario

The software product is integrated and complete. Now, to check whether it meets the intended specifications and functional requirements specified in the requirements documentation, which of these applies:
integration testing or functional testing or user acceptance testing
Yes, this is a discussion in many projects: what is the scope of the different test phases? I think you ask a very valid question that I have discussed very often in projects.
The answer is a matter of opinion, because I have read different answers in different books and standards, and it also depends on the size and kind of the software.
Here are good answers
In my world, integration testing normally checks whether the system works with all upstream and downstream systems, while functional testing is done on the system alone. But often the functional testing is an end-to-end test that cannot be done standalone, so integration and functional testing become the same test phase.
User acceptance testing is usually done by someone else: it is the client who gives the sign-off and runs their own set of test cases.

what is the difference and advantage of use case based testing and system testing

In what way is use case based testing different from system testing?
Can we consider system testing a subset of use case based testing, i.e. does system testing consider only the use cases of components or subsystems within the system?
I think you are mixing two terms. System testing is a testing phase, while Use Case Testing is a technique for designing test cases based on use cases, which can be used at many testing levels. For example:
1) In Use Case Testing you create test cases based on use cases. The system, or at least the components involved in a given use case, should be developed, built and integrated. You may want to check whether two modules involved in a given use case work together properly; so in your integration test you prepare a test case, based on the use case, that exercises the cooperation of those two modules (a small sketch follows this list).
2) When you are doing system tests, you can do use case testing as part of them, to confirm that the behavior specified by a use case works as it should. But as Robert Harvey pointed out, system testing examines compliance with requirements, so it involves both positive and negative testing. Therefore system testing not only covers the expected behavior described in use cases, but also tries to 'break' the system from a specific requirement's point of view.
3) Additionally, it should be mentioned that since use cases contain expected user actions, they make a good starting point for User Acceptance Testing. On the other hand, a user doesn't want to check a 'Login' use case in isolation, but rather to log in, do some work and observe the effects as part of their business process, so merely checking use cases is not enough. Use cases are a starting point, but UAT will usually require tests that go deeper into the business process that the given software should support.
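To make point 1 concrete, here is a minimal sketch (the 'withdraw cash' use case and module names are invented for illustration) of an integration test derived from a use case that exercises the cooperation of two modules:

    # Two collaborating modules involved in the use case.
    class AccountService:
        def __init__(self, balances):
            self._balances = balances

        def withdraw(self, account, amount):
            if self._balances[account] < amount:
                raise ValueError("insufficient funds")
            self._balances[account] -= amount
            return self._balances[account]

    class AuditLog:
        def __init__(self):
            self.entries = []

        def record(self, message):
            self.entries.append(message)

    def withdraw_cash(accounts, audit, account, amount):
        # The interaction the use case describes: withdraw, then record the event.
        new_balance = accounts.withdraw(account, amount)
        audit.record(f"withdrew {amount} from {account}")
        return new_balance

    def test_use_case_withdraw_cash_updates_balance_and_audit_trail():
        accounts = AccountService({"alice": 100})
        audit = AuditLog()
        assert withdraw_cash(accounts, audit, "alice", 40) == 60
        assert audit.entries == ["withdrew 40 from alice"]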
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the "integrated" software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limiting type of testing; it seeks to detect defects both within the "inter-assemblages" and also within the system as a whole.
System testing is an investigatory testing phase, where the focus is to have almost a destructive attitude and tests not only the design, but also the behaviour and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
Use Case testing is a specialized form of Verification and Validation testing, where the use cases become the test cases. The purpose of this kind of testing is to see if the software does what it is supposed to do, i.e. it meets its functional specifications.
Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements.
In other words, validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place, while verification is ensuring that the product has been built according to the requirements and design specifications. Validation ensures that ‘you built the right thing’. Verification ensures that ‘you built it right’. Validation confirms that the product, as provided, will fulfil its intended use.

Are scenario tests groups of sequential unit tests?

I read the Wikipedia article on scenario testing, but I am sad to say it is very short. I am left wondering: are scenario tests a collection of sequential unit tests? Or, perhaps, like a single multi-step unit test? Do many frameworks support scenario tests, or are they covered by unit testing?
If they have nothing to do with automation, what are they?
I don't think there's any fixed relationship between the number and distribution of tests and scenario tests.
I think the most common code-representation of a scenario is a specific set of business data required to support a specific story (scenario). This is often provided in the form of database data, fake stub data or a combination of both.
The idea is that this dataset has known and well-defined characteristics that will provide well defined results all across a given business process.
For a web application I could have a single web test (or several, for variations) that clicks through the full scenario. In other cases the scenario is used at a lower level, possibly testing a part of the scenario in a functional test or a unit test. In that case I normally never group the tests by scenario, but choose the functional grouping of tests I normally use for unit/functional tests. Quite often there's a method within "Subsystem1Test" that is called "testScenario1" or maybe "testScenarioInsufficientCredit"; I prefer to give my scenarios names.
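A rough sketch of that naming style (the credit-limit domain and numbers are invented) could look like this with unittest:

    import unittest

    # A well-defined scenario dataset: a customer whose credit is nearly exhausted.
    SCENARIO_INSUFFICIENT_CREDIT = {
        "customer": "ACME",
        "credit_limit": 100,
        "open_orders_total": 95,
    }

    def place_order(scenario, amount):
        # Hypothetical business rule exercised by the scenario.
        if scenario["open_orders_total"] + amount > scenario["credit_limit"]:
            return "rejected"
        return "accepted"

    class Subsystem1Test(unittest.TestCase):
        def test_scenario_insufficient_credit(self):
            # Walks the scenario data through the business process and checks
            # the well-defined outcomes the scenario was designed to produce.
            self.assertEqual(place_order(SCENARIO_INSUFFICIENT_CREDIT, 10), "rejected")
            self.assertEqual(place_order(SCENARIO_INSUFFICIENT_CREDIT, 5), "accepted")

    if __name__ == "__main__":
        unittest.main()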
In addition to korsenvoid's response, in my experience scenario based testing will often be automated as it will be included in regression testing. Regression testing is regularly automated as to do it manually does not scale well with regular releases.
In commercial software, good examples of scenario tests are the tutorials included with the user documentation. These obviously must work in each release or be removed from the docs, and hence must be tested.
While you can carry out scenario testing using sequenced unit tests, my guess is that it is more common to use GUI based automation tools. For example, I use TestComplete in this role with a scripting framework to good effect. Scenario tests are typically carried out from a user/client perspective which can be difficult to accurately replicate at a unit level.
IMHO, scenario testing is a testing activity, as opposed to a development activity; hence it's about testing a product, not the unit(s) of that product. The test scenarios are end-to-end scenarios, using the natural interfaces of the product. If the product has programmatic interfaces, then you could use a unit test framework, or FitNesse.