What is the difference between smoke testing and sanity testing? When do we perform smoke testing and when do we perform sanity testing?
Sanity testing
Sanity testing is a subset of regression testing, performed when we do not have enough time for full testing.
Sanity testing is surface-level testing in which the QA engineer verifies that all the menus, functions and commands available in the product are working fine.
Example
For example, in a project there are 5 modules: Login Page, Home Page, User's Details Page, New User Creation and Task Creation.
Suppose we have a bug on the login page: the username field accepts usernames shorter than 6 alphanumeric characters, which violates the requirements, since they specify that the username should be at least 6 alphanumeric characters.
The bug is reported by the testing team to the development team to fix. After the development team fixes the bug and passes the app back, the testing team also checks the other modules of the application to verify that the bug fix does not affect their functionality.
But keep one point in mind: the testing team only checks the main functionality of the modules; it does not go deep into the details, because of the short time available.
Sanity testing is performed after the build has cleared the smoke tests and has been accepted by the QA team for further testing. Sanity testing checks the major functionality in finer detail.
Sanity testing is performed when the development team needs to know the state of the product quickly after a code change, or when a controlled code change has been made in a feature to fix a critical issue and a stringent release time-frame does not allow complete regression testing.
Smoke testing
Smoke Testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build.
The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.
In smoke testing, the test cases chosen cover the most important functionalities or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
For example, typical smoke tests would be:
verify that the application launches successfully,
verify that the GUI is responsive.
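As a rough illustration (not tied to any particular product), a launch-level smoke check in Python might look like the sketch below; the application path is a placeholder assumption, and a real GUI-responsiveness check would additionally need a UI driver.

```python
# A minimal, hypothetical smoke check: launch the application binary and
# verify the process is still alive a few seconds later (i.e. it did not
# crash on startup). The path below is a placeholder, not a real product.
import subprocess
import time

APP_PATH = "./myapp"  # hypothetical application binary


def test_application_launches():
    proc = subprocess.Popen([APP_PATH])
    try:
        time.sleep(5)                      # give the app time to initialise
        assert proc.poll() is None, "application exited (crashed?) on startup"
    finally:
        proc.terminate()                   # clean up after the check
        proc.wait(timeout=10)
```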
Smoke testing
Smoke testing comes from the hardware world, where a newly built piece of hardware is powered on for the first time to check that it does not catch fire or start smoking.
In the software world, smoke testing is done to verify whether newly built functionality is stable enough to be considered for further testing.
Sanity testing
A subset of regression test cases is executed after receiving a build with small or minor changes to the functionality or code, to check whether the changes resolve the reported issues or software bugs and introduce no new ones.
Difference between smoke testing and sanity testing
Smoke testing
Smoke testing is used to test all areas of the application without going too deep.
A smoke test always uses an automated test or a written set of tests; it is always scripted.
Smoke testing is designed to touch every part of the application in a quick, non-detailed way.
Smoke testing ensures that the most crucial functions of a program are working, without bothering with finer details.
Sanity testing
Sanity testing is a narrow test that focuses on one or a few areas of functionality, but not thoroughly or in-depth.
A sanity test is usually unscripted.
Sanity testing is used to ensure that after a minor change a small part of the application is still working.
Sanity testing is cursory testing, performed to prove that the application is functioning according to the specifications. This level of testing is a subset of regression testing.
Hope these points help you to understand the difference between smoke testing and sanity testing.
References
http://www.softwaretestinghelp.com/smoke-testing-and-sanity-testing-difference/
https://www.guru99.com/smoke-sanity-testing.html
Smoke and sanity testing
In general, smoke and sanity testing seem very similar to many testers who have just started, because in both we talk about the build, about functionality, and about rejecting the build if its health is not good enough for feasible testing.
After going through several projects, from startups to product-based companies, I figured out the basic difference between smoke and sanity testing.
I am writing the difference between smoke testing and sanity testing here to help you answer at least one question that nearly all testers get asked in interviews.
Smoke testing
Smoke testing is done to test the health of builds.
It is also known as shallow and wide testing, in that we normally include those test cases which cover all the functionality of the product.
We can say that it's the first step of testing and, after this, we normally do other kinds of functional and system testing, including regression testing.
It's normally done by a developer with the help of certain scripts or certain tools, but in some cases it can be performed by a tester too.
It is suited to the initial stage of confirming a build. For example, suppose we have started the development of a certain product and we are producing a build for the first time; then smoke testing becomes a necessity for the product.
Sanity testing
It is a subset of regression testing.
Sanity testing is done for builds that have gone through many regression cycles and then received a minor code change. In this case, we normally do intensive testing of the functionality where the change occurred or which it may have influenced.
Due to this, it is also known as "narrow" and "deep" testing.
It's performed by a tester
It's done for mature builds, like those that are just going to hit production, and have gone through multiple regression processes.
It can be removed from the testing process, if regression is already being performed.
If a build doesn't pass the sanity tests, it is sent back to the developers for correction.
Try to understand both with this example.
Suppose you're buying a car from a showroom.
The first thing you will check is whether the car has the basics: four tires, a steering wheel, headlights and so on. This is like smoke testing.
If you're checking how much mileage the car gives or what its top speed is, that is like sanity testing.
Smoke Testing
Smoke testing is a wide approach where all areas of the software application are tested without going too deep.
The test cases for smoke testing of the software can be either manual or automated
Smoke testing is done to ensure whether the main functions of the software application are working or not. During smoke testing of the software, we do not go into finer details.
Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing.
This testing is performed by the developers or testers
Smoke testing exercises the entire system from end to end
Smoke testing is like a general health check-up.
Smoke testing is usually documented or scripted
Sanity Testing
Sanity software testing is a narrow regression testing with a focus on one or a small set of areas of functionality of the software application.
Sanity test is generally without test scripts or test cases.
Sanity testing is a cursory software testing type. It is done whenever a quick round of software testing can prove that the software application is functioning according to business / functional requirements.
Sanity testing of the software is to ensure whether the requirements are met or not.
Sanity testing is usually performed by testers
Sanity testing exercises only the particular component of the entire system
Sanity testing is like a specialized health check-up.
Sanity testing is usually not documented and is unscripted
Smoke testing is about checking if the requirements are satisfied or not.
Smoke testing is a general health check up.
Sanity testing is about checking whether a particular module is completely working or not. Sanity testing is like a specialized health check-up of a particular area.
Smoke tests are tests whose aim is to check that everything was built correctly; I mean integration and connections. You check, from a technical point of view, whether you can run wider tests. You execute some test cases and check that the results are positive.
Sanity tests in general have the same aim: to check whether we can run further tests. But in a sanity test you focus on business value, so you execute some test cases and check the logic.
In general, people say "smoke tests" for both of the above, because they are executed around the same time (sanity after smoke) and their aims are similar.
Smoke testing
Suppose a new build of an app is ready from the development phase.
We check that we are able to open the app without a crash. We log in to the app. We check that the user is redirected to the proper URL and that the environment is stable. If the main aim of the app is to provide a "purchase" function to the user, we check that the user's ID is redirected to the buying page.
After the smoke testing we confirm the build is in a testable form and is ready to go through sanity testing.
Sanity testing
In this phase, we check the basic functionalities, like:
login with valid credentials,
login with invalid credentials,
the user's info is properly displayed after logging in,
making a purchase order with a certain user's ID,
the "thank you" page is displayed after the purchase.
As per ISTQB, there is no difference between smoke and sanity: sanity testing is listed as a synonym of smoke testing.
Check it here: https://glossary.istqb.org/en/search/sanity
Smoke Testing:-
A smoke test is scripted, i.e., you have either manual test cases or automated scripts for it.
Sanity Testing:-
Sanity tests are mostly non-scripted.
My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, and mobile.
What would you experienced folks recommend starting from, in terms of resources?
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Also, don't try to test EVERYTHING. This is what I see fail over and over. Most automated test suites die under their own weight. Someone makes the decision that EVERYTHING must be tested, and so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises what it wants because the tests cost so much to update. In the first case the investment in tests was a complete waste; the second case is even more dangerous, because it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you ensure all these, you can enhance later as you add new tests
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should be executed after each commit to the master branch.
Identify in which areas of your system test automation brings the biggest value.
Start from integration tests and leave end-to-end tests for a while.
Try to keep every automated test very small, checking only one function.
Prefer low-level test interfaces like an API or CLI over the GUI (see the sketch after this list).
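To make the last point concrete, here is a hedged sketch of the same check ("the user can see their balance") done at API level versus through the GUI; the URL, bearer token, element IDs and JSON shape are assumptions, not a real application:

```python
# Same logical check ("user can see their account balance") done two ways.
# Everything below (URL, element IDs, JSON shape) is hypothetical.
import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.test"


def test_balance_via_api():
    # Fast and stable: one HTTP call, no browser involved.
    r = requests.get(f"{BASE_URL}/api/accounts/42/balance",
                     headers={"Authorization": "Bearer test-token"})
    assert r.status_code == 200
    assert r.json()["balance"] >= 0


def test_balance_via_gui():
    # Slower and more brittle: needs a browser and depends on page layout.
    driver = webdriver.Chrome()
    try:
        driver.get(f"{BASE_URL}/login")
        driver.find_element(By.ID, "username").send_keys("validuser")
        driver.find_element(By.ID, "password").send_keys("correct-pass")
        driver.find_element(By.ID, "submit").click()
        balance = driver.find_element(By.ID, "balance").text
        assert balance != ""
    finally:
        driver.quit()
```

The API variant needs one HTTP call and no browser, which is exactly why the low-level interface is preferred where one exists.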
I'm curious about which path you chose. We run UI automated tests for mobile, desktop applications, and web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
created a mono repo
installed selenium webdriver for web
installed winapp driver for desktop
installed appium for mobile
created an api for each system
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams.
This framework now lets us write tests that go across the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as the user and check their balance
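As a rough sketch of what such a shared business-function layer could look like (the class name, URLs and locators here are invented for illustration, not the actual code behind DesktopApi/WebApi/MobileApi), the web side might wrap Selenium like this:

```python
# Hypothetical sketch of a "WebApi" business-function layer wrapping Selenium.
# Class name, URLs and locators are invented for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By


class WebApi:
    """Business-level actions on the web app, shared across test teams."""

    def __init__(self, base_url="https://example.test"):
        self.base_url = base_url
        self.driver = webdriver.Chrome()

    def login(self, username, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def get_balance(self):
        return self.driver.find_element(By.ID, "balance").text

    def close(self):
        self.driver.quit()


def test_web_user_can_check_balance():
    web = WebApi()
    try:
        web.login("user-created-on-mobile", "secret")
        assert web.get_balance() != ""
    finally:
        web.close()
```

A DesktopApi (WinAppDriver) and MobileApi (Appium) could expose the same kind of business-level methods, so a cross-system test reads as a sequence of business steps rather than raw driver calls.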
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate, early on, tests that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex tests. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and manage bulk test runs.
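For the data-driven tests mentioned above, one common pattern (sketched here with made-up data and a stand-in validation function that echoes the earlier username-length example) is to parametrize a single test over a table of inputs:

```python
# Generic data-driven test sketch using pytest.mark.parametrize.
# The validation rule (username must be at least 6 alphanumeric characters)
# echoes the login example earlier; the function under test is a stand-in.
import re
import pytest


def is_valid_username(username):
    # Stand-in for the real validation logic under test.
    return re.fullmatch(r"[A-Za-z0-9]{6,}", username) is not None


@pytest.mark.parametrize("username, expected", [
    ("abc123", True),    # exactly 6 alphanumerics
    ("ab12", False),     # too short
    ("abcdef7", True),   # longer than 6
    ("abc 12", False),   # contains a space
])
def test_username_validation(username, expected):
    assert is_valid_username(username) is expected
```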
I don't have a clear idea about smoke testing and sanity testing. Some books say that both are the same, but testers on some projects call it smoke testing and testers on other projects call it sanity testing. So please give me a clear-cut answer to my question.
Sorry, but there is no clear cut. As you explain in your question, there is no consensus on the definition, or at least on the difference between sanity and smoke.
Now about smoke tests (or sanity tests!): those are the tests that you can run quickly to get a general idea of how your System Under Test (SUT) behaves. For software testing, this will obviously contain some kind of installation, setup, playing around with the features and shutdown. If nothing goes wrong, then you know you can go on with your testing. This provides quick feedback to the team and avoids starting a longer test campaign only to realise that some major features are broken and the SUT is not really usable.
This definition stands for both manual and automated tests. For example, if you use Jenkins (for CI) and Robot Framework (for test automation), you could create 2 jobs in Jenkins: smoke tests and full tests (using tags, this is straightforward). The smoke test job could last a couple of minutes (or at most 15 minutes, let's say) and the full tests job could last as long as needed. Thus the smoke test job gives you quick feedback on the SUT build (if your smoke test job is a child project of the SUT build, of course).
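The same tag-based split can be illustrated with pytest markers instead of Robot Framework tags (an analogy under that assumption, not the setup described above):

```python
# Analogy to the tag-based split described above, using pytest markers.
# Register the marker in pytest.ini:
#   [pytest]
#   markers = smoke: quick checks run by the short CI job
import pytest


@pytest.mark.smoke
def test_homepage_is_reachable():
    # Placeholder quick check that belongs in the short smoke job.
    assert True


def test_full_checkout_flow():
    # Longer scenario that only the full-suite job would run.
    assert True


# Jenkins "smoke tests" job would run:  pytest -m smoke
# Jenkins "full tests" job would run:   pytest
```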
Smoke testing is also known as build verification testing.
Smoke testing is the initial testing process, exercised to check whether the software under test is ready/stable enough for further testing.
Sanity testing is a type of testing that checks whether a new software version performs well enough to be accepted for a major testing effort.
Think of the analogy of testing a new electronic device. The first thing you do is turn it on to see if it starts smoking. If it does, there's something fundamentally wrong, so no additional testing either can be done or is worth doing.
For a website, the simplest smoke test is to go to the website and see if the http response is 200. If not, there's no point in testing further. A more useful smoke test might be to hit every page once.
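A minimal sketch of that idea, assuming a placeholder site URL and page list:

```python
# Simplest website smoke test: the front page answers with HTTP 200,
# and optionally every known page is hit once. URLs are placeholders.
import requests

BASE_URL = "https://example.test"
PAGES = ["/", "/login", "/products", "/contact"]  # hypothetical page list


def test_front_page_responds():
    assert requests.get(BASE_URL, timeout=10).status_code == 200


def test_every_page_responds_once():
    for page in PAGES:
        response = requests.get(BASE_URL + page, timeout=10)
        assert response.status_code == 200, f"{page} returned {response.status_code}"
```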
Smoke tests should run as fast as possible. The goal is quick feedback so that you can make a decision.
As for the difference between smoke tests and sanity tests... There is no significant difference. What you call them is irrelevant, as long as everyone in your organization has the same basic understanding. What's important is a quick verification that the system under test is running and has no blatantly obvious flaws.
The smoke test is designed to see whether the device seems to work at all; it determines whether we can go on with more extensive testing or whether something fundamental is broken.
The sanity tests are designed to test the most frequent use cases.
Example:
You are testing a cellphone.
Smoke test: does it start up without crashing or starting to smoke? Does it seem to work well enough to perform more extensive testing?
Sanity test: can you place/receive calls and messages, the most basic and most-used features?
These are both done often and should be quick to run through; they are NOT extensive tests.
Smoke Testing is testing the basic and critical features of an application, before going ahead and doing thorough testing of that application.
Note: only if the smoke testing passes can we carry on with the other stages of testing; otherwise the product is not fit to be tested and should be sent back to the development team.
Sanity Testing: there is no clear definition as such, but this is one I picked up from the Internet:
Check the entire application at a basic level, focusing on breadth rather than depth.
The software product is integrated and complete. Now, to check whether it meets the intended specifications and the functional requirements specified in the requirements documentation, which applies: integration testing, functional testing, or user acceptance testing?
Yes, this is a discussion in many projects: what is the scope of the different test phases? I think you ask a very valid question that I have discussed very often in projects.
The answer is a matter of opinion, because I have read different answers in different books and standards, and it also depends on the size and kind of the software.
Here are good answers
In my world, integration testing is normally about seeing whether the system works with all upstream and downstream systems, while functional testing is done on the system alone. But often the functional testing is an end-to-end test and cannot be done standalone, so integration and functional testing become the same test phase.
User acceptance testing is usually done by someone else: the client, who gives the sign-off and runs their own set of test cases.
Are functional testing and integration testing the same?
You begin your testing with unit testing; after completing unit testing you go for integration testing, where you test the system as a whole. Is functional testing the same as integration testing? You are still taking the system as a whole and testing it for conformance to its functionality.
Integration testing is when you test more than one component and how they function together. For instance, how another system interacts with your system, or the database interacts with your data abstraction layer. Usually, this requires a fully installed system, although in its purest forms it does not.
Functional testing is when you test the system against the functional requirements of the product. Product/Project management usually writes these up and QA formalizes the process of what a user should see and experience, and what the end result of those processes should be. Depending on the product, this can be automated or not.
Functional Testing:
Yes, we test the product or software as a whole to see whether it is functionally working properly or not (testing buttons, links, etc.).
For example: the login page.
You provide the username and password and test whether it takes you to the home page or not.
Integration Testing:
Yes, you test the integrated software, but you focus on where the data flow happens and whether any changes happen in the database.
For example: Sending e-mail
You send a mail to someone; there is data flow and also a change in the database (the count in the sent table increases by 1).
Remember: clicking links and images is not integration testing. Hopefully you understand why: there is no change in the database from just clicking a link.
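A hedged sketch of such a database-flow check, using an in-memory SQLite table as a stand-in for the real mail database (the schema and the send_mail helper are invented for illustration):

```python
# Integration-style check: after the "send mail" action, the data flow is
# verified by asserting the 'sent' table grew by one row. The schema and
# the send_mail() helper are hypothetical stand-ins for the real system.
import sqlite3


def send_mail(conn, recipient, body):
    # Stand-in for the real action that triggers the data flow.
    conn.execute("INSERT INTO sent (recipient, body) VALUES (?, ?)",
                 (recipient, body))
    conn.commit()


def test_sending_mail_increments_sent_table():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sent (recipient TEXT, body TEXT)")

    before = conn.execute("SELECT COUNT(*) FROM sent").fetchone()[0]
    send_mail(conn, "someone@example.test", "hello")
    after = conn.execute("SELECT COUNT(*) FROM sent").fetchone()[0]

    # Functional testing would stop at "the mail was sent successfully";
    # the integration check below verifies the database side effect.
    assert after == before + 1
```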
Hope this helped you.
Functional Testing: a process of testing where each and every component of the module is tested. E.g., if a web page contains a text field, radio buttons, buttons, drop-downs, etc., those components need to be checked.
Integration Testing: a process where the data flow between two modules is checked.
This is an important distinction, but unfortunately you will never find agreement. The problem is that most developers define these from their own point of view. It's very similar to the debate over Pluto. (If it were closer to the Sun, would it be a planet?)
Unit testing is easy to define. It tests the CUT (Code Under Test) and nothing else. (Well, as little else as possible.) That means mocks, fakes, and fixtures.
At the other end of the spectrum there is what many people call system integration testing. That's testing as much as possible, but still looking for bugs in your own CUT.
But what about the vast expanse between?
For example, what if you test just a little bit more than the CUT? What if you include a Fibonacci function, instead of using a fixture which you had injected? I would call that functional testing, but the world disagrees with me.
What if you include time() or rand()? Or what if you call http://google.com? I would call that system testing, but again, I am alone.
Why does this matter? Because system-tests are unreliable. They are necessary, but they will sometimes fail for reasons beyond your control. On the other hand, functional tests should always pass, not fail randomly; if they are fast, they might as well be used from the start in order to use Test-Driven Development without writing too many tests for your internal implementation. In other words, I think that unit-tests can be more trouble than they are worth, and I have good company.
I put tests on 3 axes, with all their zeroes at unit-testing:
Functional-testing: using real code deeper and deeper down your call-stack.
Integration-testing: higher and higher up your call-stack; in other words, testing your CUT by running the code which would use it.
System-testing: more and more unrepeatable operations (O/S scheduler, clock, network, etc.)
A test can easily be all 3, to varying degrees.
I would say that both are tightly linked to each other and very tough to distinguish between them.
In my view, Integration testing is a subset of functional testing.
Functional testing is based on the initial requirements you receive. You test that the application's behaviour is as expected, per the requirements.
When it comes to integration testing, it is about the interaction between modules: if module A sends an input, is module B able to process it?
Integration testing: integration testing is nothing but testing across different modules; you have to test the relationships between modules. For example, you open Facebook and see the login page; after entering the login ID and password you see the Facebook home page. The login page is one module and the home page is another, and you have to check the relationship between them: when you have logged in, only the home page must open, not a message box or anything else. There are two main approaches to integration testing: top-down and bottom-up.
Functional testing: in functional testing you only have to think about input and output, and you have to think like an actual user. Testing what input you gave and what output you got is functional testing; you only observe the output. In functional testing you don't need to test the code of the application or software.
In functional testing, the tester focuses only on the functionality and sub-functionality of the application: whether the app's functionality is working properly or not.
In integration testing, the tester has to check the dependencies between modules or sub-modules. For example, records from one module should be fetched and displayed correctly in another module.
Integration Test:-
When unit testing is done and the issues in the related components are resolved, all the required components need to be integrated into one system so that it can perform an operation.
After combining the components of the system, testing whether the system works properly or not is called integration testing.
Functional Testing:-
Testing is mainly divided into two categories:
1. Functional Testing
2. Non-Functional Testing
Functional Testing: to test whether the software works according to the requirements of the user or not.
Non-Functional Testing: to test whether the software satisfies quality criteria such as stress tests, security tests, etc.
Usually the customer provides requirements for functional testing only; for non-functional testing the requirements are often not stated explicitly, but the application still has to perform those activities.
Integration testing
It can be seen as how the different modules of the system work together.
It mostly refers to the integrated functionality of the different modules, rather than the different components of the system.
For any system or software product to work efficiently, every component has to be in sync with the others.
Most of the time, the tool used for integration testing is chosen to be the same one used for unit testing.
It is used in complex situations, when unit testing proves to be insufficient to test the system.
Functional Testing
It can be defined as testing the individual functionality of modules.
It refers to testing the software product at an individual level, to check its functionality.
Test cases are developed to check the software for expected and unexpected results.
This type of testing is carried out more from a user perspective. That is to say, it considers the expectation of the user for a type of input.
It is also referred to as black-box testing, as well as closed-box testing.
Checking the functionality of the application is generally known as functional testing, whereas integration testing checks the flow of data from one module to another.
Let's take the example of a money-transfer app. Suppose we have a page in which we enter all the credentials, press the transfer button and see a success message; this is functional testing. In the same example, if we verify that the amount was actually transferred, it is integration testing.
Authors diverge a lot on this. I don't believe there is "the" correct interpretation for this. It really depends.
For example: most Rails developers consider unit tests as model tests, functional tests as controller tests and integration tests as those using something like Capybara to explore the application from a final user's perspective - that is, navigating through the page's generated HTML, using the DOM to check for expectations.
There are also acceptance tests, which in turn are a "live" documentation of the system (usually they use Gherkin to make it possible to write them in natural language), describing all of the application's features through multiple scenarios, which are in turn automated by a developer. Those, IMHO, could be considered as both functional tests and integration tests.
Once you understand the key concept behind each of those, you get to be more flexible regarding right or wrong. So, again IMHO, a functional test could also be considered an integration test. For an integration test, depending on the kind of integration it's exercising, it may not be considered a functional test, but you generally have some requirements in mind when you are writing an integration test, so most of the time it can also be considered a functional test.
What is the real difference between acceptance tests and functional tests?
What are the highlights or aims of each? Everywhere I read they are ambiguously similar.
In my world, we use the terms as follows:
functional testing: This is a verification activity; did we build a correctly working product? Does the software meet the business requirements?
For this type of testing we have test cases that cover all the possible scenarios we can think of, even if that scenario is unlikely to exist "in the real world". When doing this type of testing, we aim for maximum code coverage. We use any test environment we can grab at the time, it doesn't have to be "production" caliber, so long as it's usable.
acceptance testing: This is a validation activity; did we build the right thing? Is this what the customer really needs?
This is usually done in cooperation with the customer, or by an internal customer proxy (product owner). For this type of testing we use test cases that cover the typical scenarios under which we expect the software to be used. This test must be conducted in a "production-like" environment, on hardware that is the same as, or close to, what a customer will use. This is when we test our "ilities":
Reliability, Availability: Validated via a stress test.
Scalability: Validated via a load test.
Usability: Validated via an inspection and demonstration to the customer. Is the UI configured to their liking? Did we put the customer branding in all the right places? Do we have all the fields/screens they asked for?
Security (aka, Securability, just to fit in): Validated via demonstration. Sometimes a customer will hire an outside firm to do a security audit and/or intrusion testing.
Maintainability: Validated via demonstration of how we will deliver software updates/patches.
Configurability: Validated via demonstration of how the customer can modify the system to suit their needs.
This is by no means standard, and I don't think there is a "standard" definition, as the conflicting answers here demonstrate. The most important thing for your organization is that you define these terms precisely, and stick to them.
I like the answer of Patrick Cuff. What I'd like to add is the distinction between a test level and a test type, which was an eye-opener for me.
test levels
Test levels are easy to explain using the V-model; an example:
Each test level has its corresponding development level. It also has a typical time characteristic: it is executed at a certain phase in the development life cycle.
component/unit testing => verifying detailed design
component/unit integration testing => verifying global design
system testing => verifying system requirements
system integration testing => verifying system requirements
acceptance testing => validating user requirements
test types
A test type is a characteristic: it focuses on a specific test objective. Test types emphasize your quality aspects, also known as technical or non-functional aspects. Test types can be executed at any test level. I like to use as test types the quality characteristics mentioned in ISO/IEC 25010:2011.
functional testing
reliability testing
performance testing
operability testing
security testing
compatibility testing
maintainability testing
transferability testing
To make it complete: there's also something called regression testing. This is an extra classification next to test level and test type. A regression test is a test you want to repeat because it touches something critical in your product. It is in fact a subset of the tests you defined for each test level. If there's a small bug fix in your product, one doesn't always have the time to repeat all tests; regression testing is an answer to that.
The difference is between testing the problem and the solution. Software is a solution to a problem, both can be tested.
The functional test confirms the software performs a function within the boundaries of how you've solved the problem. This is an integral part of developing software, comparable to the testing done on a mass-produced product before it leaves the factory. A functional test verifies that the product actually works as you (the developer) think it does.
Acceptance tests verify the product actually solves the problem it was made to solve. This can best be done by the user (customer), for instance performing his/her tasks that the software assists with. If the software passes this real world test, it's accepted to replace the previous solution. This acceptance test can sometimes only be done properly in production, especially if you have anonymous customers (e.g. a website). Thus a new feature will only be accepted after days or weeks of use.
Functional testing - test the product, verifying that it has the qualities you've designed or built (functions, speed, errors, consistency, etc.)
Acceptance testing - test the product in its context, this requires (simulation of) human interaction, test it has the desired effect on the original problem(s).
The answer is a matter of opinion. I have worked on a lot of projects, as test manager, issue manager and in various other roles, and the descriptions in various books differ, so here is my variation:
functional-testing: take the business requirements and test all of them well and thoroughly from a functional viewpoint.
acceptance-testing: the "paying" customer does the testing he likes to do so that he can accept the product delivered. It depends on the customer, but usually the tests are not as thorough as the functional-testing, especially if it is an in-house project, because the stakeholders review and trust the test results from earlier test phases.
As I said, this is my viewpoint and experience. The functional-testing is systematic and the acceptance-testing is rather the business department testing the thing.
Audience. Functional testing is to assure members of the team producing the software that it does what they expect. Acceptance testing is to assure the consumer that it meets their needs.
Scope. Functional testing only tests the functionality of one component at a time. Acceptance testing covers any aspect of the product that matters to the consumer enough to test before accepting the software (i.e., anything worth the time or money it will take to test it to determine its acceptability).
Software can pass functional testing, integration testing, and system testing; only to fail acceptance tests when the customer discovers that the features just don't meet their needs. This would usually imply that someone screwed up on the spec. Software could also fail some functional tests, but pass acceptance testing because the customer is willing to deal with some functional bugs as long as the software does the core things they need acceptably well (beta software will often be accepted by a subset of users before it is completely functional).
Functional Testing: application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.
Acceptance Testing: formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.
In my view the main difference is who says if the tests succeed or fail.
A functional test tests that the system meets predefined requirements. It is carried out and checked by the people responsible for developing the system.
An acceptance test is signed off by the users. Ideally the users will say what they want to test, but in practice it is likely to be a subset of a functional test, as users don't invest enough time. Note that this view comes from the business users I deal with; other sets of users, e.g. in aviation and other safety-critical fields, might well not have this distinction.
Acceptance testing:
... is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.
Though this goes on to say:
It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing
with a "citation needed" mark.
Functional testing (which actually redirects to System Testing):
conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
So from this definition they are pretty much the same thing.
In my experience, acceptance tests are usually a subset of the functional tests and are used in the formal sign-off process by the customer, while functional/system tests are those run by the developer/QA department.
Acceptance testing is just testing carried out by the client, and includes other kinds of testing:
Functional testing: "this button doesn't work"
Non-functional testing: "this page works but is too slow"
For functional testing vs non-functional testing (their subtypes) - see my answer to this SO question.
The relationship between the two:
Acceptance test usually includes functional testing, but it may include additional tests. For example checking the labeling/documentation requirements.
Functional testing is when the product under test is placed into a test environment that can produce a variety of stimuli (within the scope of the test), like what the target environment typically produces or even beyond, while the response of the device under test is examined.
For a physical product (not software) there are two major kinds of acceptance tests: design tests and manufacturing tests. Design tests typically use a large number of product samples which have passed the manufacturing test. Different consumers may test the design in different ways.
Acceptance tests are referred to as verification when the design is tested against the product specification, and as validation when the product is placed in the consumer's real environment.
They are the same thing.
Acceptance testing is performed on the completed system, in an environment as close as possible to the real production/deployment environment, before the system is deployed or delivered.
You can do acceptance testing in an automated manner, or manually.