What is smoke testing? And in what circumstances can we use smoke testing in our project? [closed] - testing

I don't have a clear idea about smoke testing and sanity testing. Some books say that both are the same, but some testers on some projects call it smoke testing and some testers on other projects call it sanity testing. So please give me a clear-cut idea about my question.

Sorry, but there is no clear-cut answer. As you explain in your question, there is no consensus on the definition, or at least on the difference between sanity and smoke.
Now about smoke tests (or sanity tests!): those are the tests that you can run quickly to get a general idea of how your System Under Test (SUT) behaves. For software testing, this will typically include some kind of installation, setup, playing around with the features, and shutdown. If nothing goes wrong, then you know you can go on with your testing. This provides quick feedback to the team and avoids starting a longer test campaign only to realise that some major features are broken and the SUT is not really usable.
This definition stands for both manual and automated tests. For example, if you use Jenkins (for CI) and Robot Framework (for test automation), you could create two jobs in Jenkins: smoke tests and full tests (using tags, this is straightforward). The smoke test job could last a couple of minutes (or at most 15 minutes, say) and the full test job could last as long as needed. Thus the smoke test job gives you quick feedback on the SUT build (if your smoke test job is a child project of the SUT build, of course).
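As a rough sketch of the same idea using pytest markers instead of Robot Framework tags (the marker name, URL and test bodies are hypothetical, not from the original answer), a handful of fast tests can be tagged so that the CI smoke job selects only those:

```python
# Hypothetical split of a suite into a fast "smoke" subset and a full run,
# using pytest markers (register the "smoke" marker in pytest.ini to avoid warnings).
import pytest
import requests

BASE_URL = "http://localhost:8080"  # assumed address of the SUT

@pytest.mark.smoke
def test_application_responds():
    # The most basic check: the SUT answers at all.
    response = requests.get(BASE_URL, timeout=5)
    assert response.status_code == 200

def test_full_purchase_flow():
    # A longer end-to-end scenario that only runs in the full job.
    ...
```

The Jenkins smoke job would then run pytest -m smoke while the full job runs plain pytest; with Robot Framework the equivalent selection is robot --include smoke.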

Smoke testing is also known as build verification testing.
Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Sanity testing is a type of testing that checks whether a new software version performs well enough to accept it for a major testing effort.

Think of the analogy of testing a new electronic device. The first thing you do is turn it on to see if it starts smoking. If it does, there's something fundamentally wrong, so no additional testing either can be done or is worth doing.
For a website, the simplest smoke test is to go to the website and see if the http response is 200. If not, there's no point in testing further. A more useful smoke test might be to hit every page once.
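A minimal sketch of that kind of check, assuming Python with the requests library (the site address and page list are hypothetical):

```python
# Hypothetical website smoke test: fail fast if the site does not answer
# with HTTP 200, then touch each main page once.
import requests

BASE_URL = "https://example.com"          # assumed site under test
PAGES = ["/", "/login", "/pricing"]       # hypothetical page list

def smoke_test():
    for page in PAGES:
        response = requests.get(BASE_URL + page, timeout=10)
        assert response.status_code == 200, f"{page} returned {response.status_code}"

if __name__ == "__main__":
    smoke_test()
    print("Smoke test passed")
```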
Smoke tests should run as fast as possible. The goal is quick feedback so that you can make a decision.
As for the difference between smoke tests and sanity tests... There is no significant difference. What you call them is irrelevant, as long as everyone in your organization has the same basic understanding. What's important is a quick verification that the system under test is running and has no blatantly obvious flaws.

The smoke test is designed to see if the device seems to work at all. This is to determine whether we can go on with more extensive testing or whether something fundamental is broken.
The sanity tests are designed to test the most frequent use cases.
Example:
You are testing a cellphone.
Smoke test - Does it start up without crashing, starting to smoke, etc.? Does it seem to work well enough to perform more extensive testing?
Sanity test - Can you place/receive calls and messages - the most basic and most used features.
These are both done often and should be quick to run through; they are NOT extensive tests.

Smoke Testing is testing the basic and critical features of an application, before going ahead and doing thorough testing of that application.
Note: Only if the smoke testing passes can we carry on with the other stages of testing; otherwise the product is not fit to be tested and should be sent back to the development team.
Sanity Testing: There is no clear definition as such, but this is one I picked up from the Internet:
Check the entire application at a basic level; the focus is on breadth rather than depth.

Related

How to fit automation (System or E2E) tests in agile development lifecycle? [closed]

I am an automation test engineer and have never found a good answer on how to fit system integration (E2E) tests into the agile development life cycle.
We are a team of 10 developers and 2 QAs. The team is currently trying to baseline a process for the verification & validation of user stories once they have been implemented.
The current process we are following is a mixture of static reviews and manual/automated tests.
This is how our process goes:
1. Whenever a story is ready, the lead conducts a story preparation meeting where we discuss the requirements, ensure everybody is on the same page, estimate the effort, etc.
2. The story comes onto the board and is picked up by a developer.
3. The story is implemented by the developer. The implementation includes the necessary unit and integration tests as well.
4. The story then goes for a code review.
5. Once the code review is passed, it is deployed & released into production.
6. If something goes wrong in production, the code is reverted.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved). The automation test framework is still not mature enough for us to write the automation tests quickly enough before the developers implement their code.
In such situations, we are compromising on quality and releasing the code without properly testing it.
What would be the best approach in this situation? Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
Any good suggestions around this process are highly appreciated.
Here are some suggestions.
The real problem with validation & verification by QA comes when there is no way to test the changes manually (as there are a lot of micro-services involved).
This is where you need to invest time and effort. Some possible approaches include:
Creating mock micro-services
Creating a test environment which runs versions of the micro-services
Both of these approaches will be challenging, but once solved they will typically pay off in the medium to long term.
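As a rough illustration of the first approach (a sketch only, using Python's standard library; the endpoint and payload are hypothetical), a dependent micro-service can be stubbed out so that E2E tests do not need the real deployment:

```python
# Hypothetical stub of a downstream micro-service, serving canned responses
# so that end-to-end tests can run without the real dependency.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AccountServiceStub(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/accounts/42":           # assumed endpoint
            body = json.dumps({"id": 42, "balance": 100.0}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # The system under test is pointed at this address instead of the real service.
    HTTPServer(("localhost", 8081), AccountServiceStub).serve_forever()
```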
Currently, we are adding all these automation tests to our QA backlog and slowly creating our regression test pack.
The value from automated regression tests comes when they have reasonable levels of coverage (say 50-70% of important features are covered). You may want to consider spending some time getting the coverage up before working on new requirements. This short-term hit on the team's output will be more than offset by:
Savings in time spent manually testing
More frequent running of tests (possibly using continuous integration) which improves quality
A greater confidence amongst the developers to make changes to the code and to refactor
The automation test framework is still not mature enough for us to write the automation tests quickly enough before the developers implement their code.
Why not get the developers involved in writing automation tests? This would allow you to balance the creation of tests with the coding of new requirements. This may give the appearance of reducing the output of the team, but the team will become increasingly efficient as the coverage improves.
We are a team of 10 developers and 2 QAs
I like to think you are a team of 12 with development and QA skills. Share knowledge and spread the workload until you have a team that can deliver requirements and quality.
For our team, we lose a little time, but after a development story is done, the corresponding test automation story is put into the next sprint.
Finished stories are unit tested and run through the current test automation scripts to make sure we haven't regressed with our past tests/code.
Once the new tests are constructed, we run our completed code via HP UFT and, if successful, set it up for deployment to production.
This probably isn't the best way to get things done currently, but it has been a way for us to make sure everything gets automated and tested before heading to Production.

What is the difference between smoke testing and sanity testing?

What is the difference between smoke testing and sanity testing? When do we perform smoke testing and when do we perform sanity testing?
Sanity testing
Sanity testing is a subset of regression testing, and it is performed when we do not have enough time for full testing.
Sanity testing is surface-level testing where a QA engineer verifies that all the menus, functions and commands available in the product are working fine.
Example
For example, in a project there are 5 modules: Login Page, Home Page, User's Details Page, New User Creation and Task Creation.
Suppose we have a bug in the login page: the login page's username field accepts usernames which are shorter than 6 alphanumeric characters, and this is against the requirements, as in the requirements it is specified that the username should be at least 6 alphanumeric characters.
Now the bug is reported by the testing team to the development team to fix it. After the development team fixes the bug and passes the build back to the testing team, the testing team also checks the other modules of the application in order to verify that the bug fix does not affect the functionality of the other modules.
But always keep one point in mind: the testing team only checks the modules at a surface level; it does not go deep into the details because of the short time available.
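As a rough sketch of the kind of narrow check this implies (the validation function and rule below are hypothetical, lifted from the example requirement above):

```python
# Hypothetical sanity check for the username rule described above:
# usernames must be at least 6 alphanumeric characters.
import re

def is_valid_username(username: str) -> bool:
    # Assumed validation rule; the real application logic may differ.
    return bool(re.fullmatch(r"[A-Za-z0-9]{6,}", username))

def test_rejects_short_username():
    assert not is_valid_username("abc12")      # 5 characters: must be rejected

def test_accepts_minimum_length_username():
    assert is_valid_username("abc123")         # 6 characters: must be accepted
```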
Sanity testing is performed after the build has cleared the smoke tests and has been accepted by QA team for further testing. Sanity testing checks the major functionality with finer details.
Sanity testing is performed when the development team needs to know quickly the state of the product after they have made changes in the code, or when some controlled code change has been made in a feature to fix a critical issue and a stringent release time frame does not allow complete regression testing.
Smoke testing
Smoke Testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build.
The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing the software application.
In smoke testing, the test cases chosen cover the most important functionalities or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
For example, typical smoke tests would be (a minimal sketch follows this list):
verify that the application launches successfully,
check that the GUI is responsive.
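A rough sketch of such a launch-and-respond check using Selenium WebDriver (the URL, page title and element id are hypothetical):

```python
# Hypothetical GUI smoke test: the application must launch and its main
# screen must respond before any deeper testing starts.
from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://example.com/app"   # assumed application URL

def smoke_test_gui():
    driver = webdriver.Chrome()
    try:
        driver.get(APP_URL)                                  # application launches
        assert "MyApp" in driver.title                       # assumed page title
        login_button = driver.find_element(By.ID, "login")   # assumed element id
        assert login_button.is_displayed()                   # GUI is responsive
    finally:
        driver.quit()

if __name__ == "__main__":
    smoke_test_gui()
```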
Smoke testing
Smoke testing came from the hardware environment, where a test is done to check that a newly developed piece of hardware does not catch fire or produce smoke the first time it is powered on.
In the software environment, smoke testing is done to verify whether newly built functionality can be considered for further testing.
Sanity testing
A subset of regression test cases is executed after receiving a build with small or minor changes in the functionality or code, to check whether the changes resolved the reported issues or software bugs and no other software bug was introduced by the new changes.
Difference between smoke testing and sanity testing
Smoke testing
Smoke testing is used to test all areas of the application without going too deep.
A smoke test always uses an automated test or a written set of tests; it is always scripted.
Smoke testing is designed to cover every part of the application in a shallow, non-detailed way.
Smoke testing ensures that the most crucial functions of a program are working, without bothering with finer details.
Sanity testing
Sanity testing is a narrow test that focuses on one or a few areas of functionality, but not thoroughly or in-depth.
A sanity test is usually unscripted.
Sanity testing is used to ensure that after a minor change a small part of the application is still working.
Sanity testing is cursory testing, performed to prove that the application is functioning according to the specifications. This level of testing is a subset of regression testing.
Hope these points help you to understand the difference between smoke testing and sanity testing.
References
http://www.softwaretestinghelp.com/smoke-testing-and-sanity-testing-difference/
https://www.guru99.com/smoke-sanity-testing.html
Smoke and sanity testing
In general, smoke and sanity testing seem very similar to many testers who have just started, because in both we talk about the build, we talk about functionality, and we talk about rejecting the build if its health is not good enough for feasible testing.
After going through several projects, from start-ups to product-based companies, I figured out the basic difference between smoke and sanity testing.
I am writing the difference between smoke testing and sanity testing here to help you answer at least one question that all testers normally get asked in interviews.
Smoke testing
Smoke testing is done to test the health of builds.
It is also known as shallow and wide testing, in that we normally include those test cases which cover all the functionality of the product.
We can say that it's the first step of testing and, after this, we normally do other kinds of functional and system testing, including regression testing.
It's normally done by a developer with the help of certain scripts or tools, but in some cases it can be performed by a tester too.
It's valid for the initial stage of build confirmation. For example, suppose we have started the development of a certain product and are producing a build for the first time; then smoke testing becomes a necessity for the product.
Sanity testing
It is a subset of regression testing.
Sanity testing is done for builds which have gone through many regression cycles and where a minor change in the code has happened. In this case, we normally do intensive testing of the functionalities where this change has occurred or which it may have influenced.
Due to this, it is also known as "narrow" and "deep" testing.
It's performed by a tester.
It's done for mature builds, like those that are just about to hit production and have gone through multiple regression cycles.
It can be removed from the testing process if regression is already being performed.
If a build doesn't pass the sanity tests, it is sent back to the developers for correction.
Try to understand both with this example.
Suppose you're buying a car from a showroom.
The first thing you will check is that the car has the basic things: four tyres, a steering wheel, headlights and so on. This is like smoke testing.
If you're checking how much mileage the car gives or what its maximum speed is, then this is like sanity testing.
Smoke Testing
Smoke testing is a wide approach where all areas of the software application are tested without going too deep.
The test cases for smoke testing of the software can be either manual or automated.
Smoke testing is done to ensure whether the main functions of the software application are working or not. During smoke testing of the software, we do not go into finer details.
Smoke testing of the software application is done to check whether the build can be accepted for thorough software testing.
This testing is performed by developers or testers.
Smoke testing exercises the entire system from end to end.
Smoke testing is like a general health check-up.
Smoke testing is usually documented or scripted.
Sanity Testing
Sanity software testing is narrow regression testing with a focus on one or a small set of areas of functionality of the software application.
A sanity test is generally done without test scripts or test cases.
Sanity testing is a cursory software testing type. It is done whenever a quick round of software testing can prove that the software application is functioning according to business/functional requirements.
Sanity testing of the software ensures whether the requirements are met or not.
Sanity testing is usually performed by testers.
Sanity testing exercises only a particular component of the entire system.
Sanity testing is like a specialized health check-up.
Sanity testing is usually not documented and is unscripted.
Smoke testing is about checking whether the requirements are satisfied or not.
Smoke testing is like a general health check-up.
Sanity testing is about checking whether a particular module is completely working or not. Sanity testing is like a specialized health check-up.
Smoke tests are tests whose aim is to check whether everything was built correctly; I mean integration and connections. So you check, from a technical point of view, whether you can run wider tests. You execute some test cases and check that the results are positive.
Sanity tests in general have the same aim: to check whether we can do further testing. But in sanity tests you focus on business value, so you execute some test cases but you check the logic.
In general, people say "smoke tests" for both of the above because they are executed at the same time (sanity after smoke tests) and their aims are similar.
Smoke testing
Suppose a new build of an app is ready from the development phase.
We check whether we are able to open the app without a crash. We log in to the app. We check whether the user is redirected to the proper URL and that the environment is stable. If the main aim of the app is to provide a "purchase" feature to the user, we check whether the user is redirected to the buying page.
After the smoke testing we confirm the build is in a testable form and is ready to go through sanity testing.
Sanity testing
In this phase, we check the basic functionalities (a small sketch follows this list), like:
login with valid credentials,
login with invalid credentials,
the user's info is properly displayed after logging in,
making a purchase order with a certain user's ID,
the "thank you" page is displayed after the purchase.
There is NO DIFFERENCE between smoke and sanity as per the ISTQB.
Sanity is a synonym of smoke.
Check it here: https://glossary.istqb.org/en/search/sanity
Smoke Testing:
A smoke test is scripted, i.e. you have either manual test cases or automated scripts for it.
Sanity Testing:
Sanity tests are mostly non-scripted.

Test Automation architecture [closed]

My company is at the beginning of building a test automation architecture.
There are different types of apps: Windows desktop, web, mobile.
What would you experienced folks recommend starting from?
I mean resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Also, don't try to test EVERYTHING. This is what I see fail over and over. Most automated test suites die under their own weight. Someone makes the decision that EVERYTHING must be tested, and so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises what it wants because the tests cost so much to update. In the first case, the investment in tests was a complete waste; the second case is even more dangerous, as it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you have ensured all of these, you can enhance the architecture later as you add new tests.
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally, they should be executed after each commit to the master branch.
Identify in which areas of your system test automation brings the biggest value.
Start from integration tests and leave end-to-end tests for a while.
Try to keep every automated test very small, checking only one function.
Prefer low-level test interfaces like an API or CLI over the GUI (a small sketch follows this list).
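As a rough illustration of that last point (a sketch only; the health endpoint and CLI command are hypothetical), checking behaviour through an API or CLI is usually faster and less brittle than driving the GUI:

```python
# Hypothetical low-level checks: one through a REST API, one through a CLI,
# both avoiding the GUI entirely.
import subprocess
import requests

def test_health_endpoint():
    # Assumed health endpoint of the service under test.
    response = requests.get("http://localhost:8080/health", timeout=5)
    assert response.status_code == 200

def test_cli_reports_version():
    # Assumed command-line interface of the product.
    result = subprocess.run(["myapp", "--version"], capture_output=True, text=True)
    assert result.returncode == 0
    assert result.stdout.strip() != ""
```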
I'm curious about what path you chose. We run UI automated tests for mobile, desktop applications, and web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
create a mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an API for each system:
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams (a minimal sketch of such a layer follows the example below).
This framework now lets us write tests that go across the different systems, such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as the user and check the balance
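A very rough sketch of what such a shared business-function layer could look like (class and method names, URLs and element ids are hypothetical; the real framework wraps Selenium, WinAppDriver and Appium sessions):

```python
# Hypothetical shared business-function layer: tests call these high-level
# methods instead of talking to Selenium/Appium/WinAppDriver directly.
from selenium import webdriver
from selenium.webdriver.common.by import By

class WebApi:
    """Business functions exposed for web tests."""

    def __init__(self, base_url: str):
        self.base_url = base_url
        self.driver = webdriver.Chrome()

    def login(self, username: str, password: str) -> None:
        # Assumed page structure; element ids are illustrative only.
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def get_balance(self) -> str:
        return self.driver.find_element(By.ID, "balance").text

    def close(self) -> None:
        self.driver.quit()

# A cross-system test would combine MobileApi, DesktopApi and WebApi,
# e.g. create the user on mobile, enter a case on desktop, then:
#   web = WebApi("https://example.com")
#   web.login("new_user", "password")
#   assert web.get_balance() != ""
#   web.close()
```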
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate tests early that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex tests. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and run tests in bulk.

Difference System Acceptance Test and User Acceptance Test [closed]

I've read the terms System Acceptance Test and User Acceptance Test in a document.
But I can't really figure out what's the difference between these two.
Can anybody explain the difference?
There is no official terminology in testing. Usually, the context in which they are used in the document should help you find out the exact meaning the author has in mind.
From my experience though, I would say:
System acceptance testing is more about platforms, OS, browser type, etc. It is about using the SUT in a close-to-reality set of environments that resemble the one the SUT is going to be used in. The actual test effort might be to have a set of end-to-end tests that you run in those different environments.
User acceptance testing focuses more on the end-user experience. What you check is that the user gets what he wants from the SUT, feature by feature. Here you take a single platform/environment and run many different smaller tests to check the features one by one. You can do this by following test plans or with a more exploratory approach.
User acceptance testing is done by the client or customer. It takes place at the client's site. They check whether the application meets the requirements or not.
System acceptance testing is testing done on a particular application in different environments, such as different operating systems, browsers, browser versions, etc. It is usually done at the developer's location only.
UAT - after completion of the testing cycle, the application goes to user acceptance testing; this means the client or user tests the application, and then the application goes live.
SAT - checks developed to verify the functionality of a subsystem before sending the acceptance to the provider, i.e. to check whether the equipment to be accepted fulfils its specifications.
System Acceptance Testing (SAT):
It is end-to-end testing where the testing environment is similar to the production environment; we can also call it end-to-end testing. Here, we navigate through all the features of the software and test whether the end business flows and features work. We just test the end features and don't check the data flow or do detailed functional testing.
User Acceptance Testing (UAT):
Acceptance testing is done by end users. Here, they use the software for the business for a particular period of time and check whether the software can handle all kinds of real-time business scenarios and situations.
System testing: finding defects when the system is tested as a whole; this is also called end-to-end testing. In this testing, the tester needs to test the application from log-in to log-out.
UAT: user acceptance testing is done to get acceptance from the client. UAT is generally done in the client's environment. Before UAT, a pre-UAT should be done.
Lifted from http://www.geekinterview.com/question_details/19127

what is PAT (Pre Acceptance Testing)? [closed]

What exactly is PAT? When do we do pre-acceptance testing?
I don't think it's a widely-used term or part of a standard. Therefore, what exactly it means is organization-specific and should be defined in a glossary somewhere. More likely though you'll just have to ask people what it means.
Any testing done before acceptance testing.
This would include:
Unit tests
Stress tests
Integration tests
Performance tests
There's no standardised meaning for the term; it often depends on your process, be it Agile or Extreme Programming, etc.
Generally, however, there are a number of tests done by developers or testers in a developer test environment. These can be unit tests, developer tests, sanity regression tests, performance tests, i.e. tests that the QA team wants done before they'll even look at it. At a bare minimum, it might be just testing that the software builds (although it's frightening how often I've had a developer fail to even check this).
Well, I would like to share something which everyone may not agree with, but this is what I feel pre-acceptance testing would be:
The testing done to confirm that the system under test functions as per the designed requirements covering the customer's business areas, before entering the User Acceptance Test phase, where users from the customer's side are invited to perform the testing at the vendor's location, with development team assistance available when any flaw occurs in the expected business flow. That phase would be called an alpha test. Please feel free to correct me if I have said something wrong.
Acceptance testing is a testing technique performed to determine whether or not the software system has met the requirement specifications. In the SDLC, acceptance testing fits as follows:
Requirement Analysis - Acceptance Testing
High Level Design - System Testing
Low Level Design - Integration Testing
Coding - Unit Testing
Simply put, PAT is any test done before acceptance testing. There are various forms of acceptance testing, such as user acceptance testing, business acceptance testing, alpha testing and beta testing.