What is a smoke test? [duplicate] - testing

"Smoke test" is a term that comes from electrical engineering. It refers to a very basic, very simple test, where you just plug the device in and see if smoke comes out.
It doesn't tell you anything about whether or not the device actually works. The only thing it tells you is that it is not completely broken.
It is usually used as a time-saving step before the more thorough regression/integration/acceptance tests, since there is no point in running the full test suite if the thing catches fire anyway.
In programming, the term is used in the same sense: it is a very shallow, simple test that checks basic properties such as: does the program even start? Can it print its help message? If those simple things don't work, there is no point in even attempting to run the full test suite, which can sometimes take minutes or even hours.
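To make the "does it even start, can it print its help message" idea concrete, here is a minimal sketch in Python; the myapp binary name is just a placeholder for whatever program you are testing:

```python
import subprocess
import sys

def smoke_test(command):
    """Run the program with --help and check that it exits cleanly."""
    try:
        result = subprocess.run(
            command + ["--help"],
            capture_output=True,
            text=True,
            timeout=10,  # a hung process is also a smoke-test failure
        )
    except (OSError, subprocess.TimeoutExpired) as exc:
        print(f"Smoke test FAILED: could not run {command}: {exc}")
        return False
    if result.returncode != 0 or not result.stdout.strip():
        print(f"Smoke test FAILED: exit code {result.returncode}, no help output")
        return False
    print("Smoke test passed: the program starts and prints its help message.")
    return True

if __name__ == "__main__":
    sys.exit(0 if smoke_test(["myapp"]) else 1)  # "myapp" is a placeholder
```

If this fails, running the full suite would be a waste of time, which is the whole point of the smoke test.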

A smoke test is basically just a sanity check to see if the software functions on the most basic level.
If your smoke test fails, it means there is no point in running your other functional tests.

A smoke test is a quick, lightweight test or set of tests, usually automated, that confirms that the basic functionality of a system is working correctly. It tends to emphasize broad rather than deep coverage, and it is usually run before launching a more extensive set of tests.

From Wikipedia:
It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.
In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.

A smoke test is a quick test to see if the application will "catch on fire" when you run it the first time. In other words, it checks that the application isn't fundamentally broken.

Related

How automated testing works and what to test?

How does automated testing work, and how can I try it? How do the tools for automated testing work, and what do they do?
If possible, please post examples to clarify the ideas.
Any help on this topic is very welcome! Thanks.
Automated testing means writing a script for the tasks that we would otherwise test manually.
The tools are programs in which we write a few lines of code in the sequence of the particular test we want to perform, then run that script to execute the tests and generate results.
Automated testing saves the hours we would otherwise spend manually repeating a series of test cases.
Probably the best place to start is the xUnit libraries (JUnit, PHPUnit, jsUnit, etc.). If you're interested in testing web interfaces, there's something called Selenium. They provide lots of code examples to look at. These tools let you set up some input values, run some code, and then verify that the final output of that code matches your expectations (aka assertions).
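As a minimal sketch of that arrange/act/assert pattern, here is an xUnit-style test written with Python's unittest module; the add function is just a stand-in for whatever code you are actually testing:

```python
import unittest

def add(a, b):
    # Stand-in for the production code under test
    return a + b

class AddTest(unittest.TestCase):
    def test_adds_two_numbers(self):
        # Arrange: set up some input values
        a, b = 2, 3
        # Act: run the code under test
        result = add(a, b)
        # Assert: verify the output matches expectations
        self.assertEqual(result, 5)

if __name__ == "__main__":
    unittest.main()
```

Running the file executes the test and reports a pass or a failure; a CI server such as Hudson can run the same command on every commit.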
In more sophisticated development teams, these automated tests are run every time new code gets submitted to the project to make sure no new bugs are introduced. As Priyanka mentioned, they save lots of time and eliminate the possibility of human error because the tests are run automatically where they would otherwise be done manually.
I'm sorry for not being more specific. This is a very broad topic of discussion.

Testing phases, where is the smoke testing?

So I was looking at a book and I don't really understand its classification:
Unit tests
Integration tests
Smoke and Sanity tests
System tests
Acceptance tests
I thought the smoke test would come right after the integration tests? I also thought that a sanity test means a quick check of the application when a new part is deployed.
So the question is: is this order correct, or should the smoke and sanity tests come in a different order? If so, why?
Smoke tests should be performed before sanity tests - that is correct. The purpose of smoke tests is just to quickly check whether the SUT (system under test) is runnable and whether its interfaces and main components respond to user actions. There is no deep insight into the app during these tests.
The sanity tests can be a subset of regression tests. Their main goal is to quickly test the application's logic against the requirements provided. They should be done after each major change to the way some part of the system works, and if the results are negative, there is simply no point in going through more detailed tests. They should tell us whether the tested parts of the system match the requirements and specification.
And now the thing is that sanity tests can be placed at the unit test level as well as the system test level. You can simply run a few unit tests specifically designed to check only the basics of functionality, and these can then be called sanity tests. The same applies to the system test level. So there is no strict definition of where sanity tests belong. I believe you should not take it for granted, because every case is different, and the context of the tests and the application should be the main consideration.
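One common way to get a "sanity subset" out of an existing unit-test suite is to tag a handful of fast, basic tests and run only those. A minimal sketch, assuming pytest is your test runner; the test names and marker registration are illustrative:

```python
import pytest

# Register the marker in pytest.ini to silence the "unknown marker" warning:
#   [pytest]
#   markers = sanity: quick checks of basic functionality

@pytest.mark.sanity
def test_basic_arithmetic_still_works():
    # A fast, shallow check that belongs to the sanity subset
    assert 1 + 1 == 2

def test_detailed_edge_case():
    # Part of the full regression suite, not the sanity subset
    assert sum(range(100)) == 4950
```

Running `pytest -m sanity` executes only the tagged tests, while a plain `pytest` run still executes everything.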
A smoke test is a quick-and-dirty test of the most important features, usually done by someone other than the developer after unit and integration testing, to see whether there is any point in doing more specific or rigorous testing.
Basic test of key features.
Only test happy path.
Major features.
For example, if you're smoke testing an API (see the sketch after this list):
Test login with valid details.
Hit the main endpoints.
Check that the correct responses are returned.
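A minimal sketch of such an API smoke test, using Python's requests library; the base URL, the /login payload, and the endpoint paths are placeholders for whatever your API actually exposes:

```python
import sys
import requests

BASE_URL = "https://api.example.com"  # placeholder base URL

def main():
    failures = []

    # Happy path only: log in with valid details (endpoint and payload are hypothetical)
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": "smoke", "password": "secret"},
                         timeout=10)
    if resp.status_code != 200:
        failures.append(f"/login returned {resp.status_code}")

    # Hit the main endpoints and check that a correct (2xx) response comes back
    for path in ("/health", "/users", "/orders"):  # placeholder endpoints
        resp = requests.get(f"{BASE_URL}{path}", timeout=10)
        if not resp.ok:
            failures.append(f"{path} returned {resp.status_code}")

    if failures:
        print("Smoke test FAILED:", "; ".join(failures))
        sys.exit(1)
    print("Smoke test passed: the happy path looks fine.")

if __name__ == "__main__":
    main()
```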
Smoke testing is the first and foremost testing done by any QA personnel. It is done once unit testing has been completed by the developer.
The main goal of smoke testing is to confirm that your application can handle the positive flow, at the very least. Once this is done, QA gradually proceeds with the following:
1. Functional testing
2. Link and download options
3. UI
4. System testing
5. Regression against previous builds
Happy Testing :)

Looking for "test execution manager" software to manage automated tests [closed]

We develop several products and already have extensive unit tests and fully automated functional tests for them. The problem is that those tests don't run frequently; they are run manually by a developer, or just before shipping a new version.
I'm looking for a "test execution manager" software which will allow:
defining test suites as collections of my existing tests;
executing the test suites on multiple machines in our test lab;
collecting results and presenting them nicely;
preserving test execution history and results.
Most "testing solutions" I've found concentrate on "writing automated tests" (which we already have working) or closely integrate with other aspects of software development, like defining requirements and filing bugs (which we have and don't want to change).
Can anyone recommend a simple and flexible software to do the above without forcing specific development processes?
I thought of using (or abusing) Hudson CI for this. Hudson can already run tests, collect results, and present them, either periodically or on code commit; but it was not designed for test suite definition. Any input from experienced Hudson users on this idea is appreciated.
First of all, our developers are not allowed to check in code without running the unit tests. We also run a CI server (Hudson), which builds after a commit and runs the unit tests. We are working on getting the functional tests implemented for the nightly builds.
You said your developers test the software? This is a bad thing. At least let a developer who is not familiar with the code test your app; otherwise you are likely to overlook some bugs, because their existence was ruled out by the developer who wrote the code. Additionally, who writes the functional tests? Developers again? You should get your BAs to write them. Always remember, four eyes see more than two.
With all that said, I assume that the unit tests will always be run before code is checked into your SCM. The following is targeted primarily at the functional tests.
Simple solution:
You can always create scripts to bundle your tests (a batch or shell script that runs the individual tests; see the sketch after this list).
Executing test suites is actually one of the purposes of Hudson.
Collecting and presenting results is exactly what Hudson is for.
Preserving history: see above, it can be done with Hudson without abusing it.
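As a sketch of the "bundle your tests with a script" option, here is a small Python driver that runs each existing test command in turn and exits non-zero if any of them fail; the pytest invocations are placeholders for whatever test runners you already use. Hudson can run this as a build step and will mark the build as failed on a non-zero exit code:

```python
import subprocess
import sys

# Placeholder commands for the individual test runs you already have;
# swap in your real unit and functional test invocations.
TEST_SUITE = [
    ["python", "-m", "pytest", "tests/unit"],
    ["python", "-m", "pytest", "tests/functional"],
]

def main():
    exit_code = 0
    for command in TEST_SUITE:
        print("Running:", " ".join(command))
        result = subprocess.run(command)
        if result.returncode != 0:
            print("FAILED:", " ".join(command))
            exit_code = 1  # keep running the rest, but remember the failure
    sys.exit(exit_code)

if __name__ == "__main__":
    main()
```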
A good solution:
Did you look at tools like IBM Rational Quality Manager? Depending on the test tools you use, you might want a different test management tool; Oracle also offers one. Don't be mistaken: these tools are usually fairly expensive and offer far more than you want to use. With a little help from Google you should find something that suits your needs. My keywords were "centralized test management".
In case you use FitNesse for your functional tests: you can define suites in FitNesse, and I think a suite can be part of a larger suite. FitNesse definitely keeps historic test data. The tests can be run from the command line, which enables you to run them from Ant or Maven.
If you use a unit test framework for your functional testing, you can also run those tests as part of a nightly build and schedule it with your CI server (Hudson or Cruise Control or ...).

Is Selenium a good piece of testing software to use? [closed]

On my last project, I created some test cases through Selenium, then automated them so they would run on every build launched from hudson. It worked fantastic, and was consistent for about a month.
Then the tests started failing. Most of the time it was timing issues that caused the failures. After about two weeks of effort spread over the next two months, it was decided to drop the Selenium tests. They should have been passing, but the responses and timing of the web application varied to the point where tests would fail when they should have passed.
Did you have a similar experience? Is Selenium still a good tool to use for Web Application testing?
Selenium is a great tool for web testing, although it's important to make sure your tests are reliable. Timing issues are common, so I would suggest the following:
Make sure you set a sensible timeout value. I find that between 1 and 2 minutes works well.
Don't use fixed pauses in your tests; they are the main cause of timing issues. Instead, use the waitFor* commands; waitForCondition is very useful (see the sketch after this list).
Identify external calls that can cause timeouts and block that traffic from the machine running tests. You can do this on a firewall level or simply redirect the domain to localhost in your hosts file.
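The same idea in the newer Selenium WebDriver Python bindings: an explicit wait instead of a fixed pause. The URL and element ID below are placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/login")  # placeholder URL
    # Poll for up to 60 seconds until the element appears, instead of sleeping
    form = WebDriverWait(driver, 60).until(
        EC.presence_of_element_located((By.ID, "login-form"))  # placeholder ID
    )
    form.submit()
finally:
    driver.quit()
```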
Update:
You should also consider using Selenium Grid. It won't directly help with your timeouts, but it can provide a quicker feedback loop for your failures. If you're using TestNG to run your tests, you can get it to automatically rerun failures; this gives tests that fail due to timeouts a second chance.
At my previous job we investigated using it as a test tool but found it too fussy to bother integrating into our process. Pretty much the same experience as you.
This was two or three years ago in version 0.8 or so though, I would have expected it to get better since then.
I've had a similar experience. We created a project that would bootstrap a selenium proxy and run an automated suite of tests, but unfortunately it clashed with our build server in a huge way. There were too many browser inconsistencies and third party dependencies for us to reliably add it to our build. It was also too slow for us, and added too much time to our builds.
Most of the errors we would run into would be timeouts.
We ended up keeping the project and use it for integration tests on major releases. The bootstrapping code that we used has proved invaluable in other areas as well.
Probably best run after a nightly build, when there's time for it. It, or Watin, could be integrated with your build scripts.
Very much depends on your team, but if you've a small testing team this can be priceless for picking up some very obvious runtime issues.
I'd keep the scope modest and really use them for some sanity testing that at least each page can load.
I did have a similar experience with Selenium. We had a legacy system which we built a sort of testing framework around so that we could test the changes we were making. This worked great at the start but eventually some of the earlier tests began to fail (or take too long to run) so we started to turn off more and more of the tests.
To fix some of the issues, we stopped Selenium from opening and closing a browser for each test; the tests were broken up into blocks, and for each block of tests the browser would only be opened once. This reduced the time taken to run the tests from several hours to 30 minutes.
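One common way to get that "one browser per block of tests" behaviour, sketched here with Python's unittest and the Selenium WebDriver bindings; the URL and element ID are placeholders:

```python
import unittest
from selenium import webdriver
from selenium.webdriver.common.by import By

class CheckoutPageTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Open one browser for the whole block of tests ...
        cls.driver = webdriver.Firefox()

    @classmethod
    def tearDownClass(cls):
        # ... and close it only after the last test in the block has run.
        cls.driver.quit()

    def test_page_loads(self):
        self.driver.get("https://example.com/checkout")  # placeholder URL
        self.assertIn("Checkout", self.driver.title)

    def test_cart_is_present(self):
        self.driver.get("https://example.com/checkout")
        self.assertTrue(self.driver.find_elements(By.ID, "cart"))  # placeholder ID

if __name__ == "__main__":
    unittest.main()
```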
Despite the issues I think Selenium is a great tool for testing web-based applications. Many of the problems we experienced centered on the fact that the system we were testing was a legacy system. If you like test-driven development then Selenium fits in very well with that development practice.
EDIT:
Another good thing about Selenium is the ability to track which developer introduced an error, as well as where the error is (the source file). This makes life much easier when it comes to fixing it.
We initially tried to use selenium on our build machine but tests were very brittle and we found we spent a lot of time trying to keep old tests running when changes occurred to unrelated functionality accessed through the same set of pages. We were automating the tests through nunit.
I would use selenium more as a customer acceptance and integration testing tool. I'd agree with using it for a nightly build on functionality that is stable.
At first glance, Selenium looks great. Unfortunately, as sometimes happens with open source projects, the developers rush to implement new features instead of making the existing ones more stable.

Is automated testing still referred to as smoke testing?

If not, is smoke testing still used?
It's sort of a Venn diagram. Some automated tests are smoke tests, and some smoke tests are automated (insofar as they are run by a computer program). "Smoke test" is a play (if I recall correctly) on the saying "Where there's smoke, there's usually fire." It's a set of preliminary tests that the program must pass to be considered for 'real' (viz. fire) testing.
A smoke test can also be manual, insofar as a tester has a list of steps to follow that aren't automated with a computer program.
Smoke testing is still used -- in places I've worked, it's usually automated.
Automated testing can do smoke testing (shallow, wide), but it can also do other testing like regression testing, and unit testing. Basically automated testing can be any repeatable test.
Yes, smoke testing is still being used. I've generally seen two scenarios. The first is to determine whether the software is ready for more in-depth testing. The second, and IMO more common, is to skimp on fully testing functionality that should not have been affected by the changes in the new build.
I don't think smoke tests are usually automated. The smoke test, in my experience, is really just a basic sanity test to make sure that subsequent tests can actually be run and that nothing basic got broken, like startup code or menu entries. This would usually be done manually by a person. I suppose it could be automated, but a release usually involves the addition of new features, so the automated tests would have to be changed as well, and you'd still have the same problem: you'd need a person to verify that the automated tests were modified to test the new features properly. In contrast, automated tests (like unit tests) represent a regression test suite and are created to test well-established functionality that should not change much from release to release, although of course you would add unit tests to cover new functionality as well.
Probably more so in companies with a hardware background, where the smoke test was taken literally. Few people call them that anymore. It's usually just a small yet broad subset of a larger acceptance or system test suite. These tests are automated and are run automatically against code before it is submitted, or on submission to source code control.
I am not sure we can compare Smoke and Automated testing. Smoke testing is a way to run a set of basic tests on a build, covering all the basic features but not going in depth on any. The purpose is to determine whether a build can be used for more detailed testing or not. It is also a set of steps that can be run quickly even on a developer build to determine if there are any issues due to some significant or core changes that are about to go in a build. We consider Smoke test to be one of our 'test plans' but one that is run on every build.
Automated testing is not specific to smoke tests but can be applied there as well. It is done to 'automate' redundant or repetitive steps that a tester always performs, in order to save time. That is the primary purpose of automation: it allows a tester to spend more time on other tests.
It can never be a replacement for testing by a real brain, nor can everything be automated. It is an activity that supplements the testing process in place, not a replacement for it.
Since the smoke test is potentially run on every build, there is good value in automating it. If a smoke test run manually takes 4 hours, and after automation it takes 1 hour, you have saved an effort of 3 man-hours times the number of builds.
There are several tools on the market for test automation - AutoIT and SilkTest, to name a few.
In very simple words: smoke testing can be automated, but automated testing is not always smoke testing.
Yes, smoke testing is a popular way of testing any application/software.
My understanding of "smoke testing" is different than the wikipedia article. I understand smoke testing to be the developer opening the app and testing the basic functionality to verify that the app looks right & is doing the basics. So I always thought it was a manual process, not an automated one.
A test automation suite contains various levels, like smoke test, acceptance test, nightly build, and so on. It's up to the tester to decide which test cases need to run at each level. Each test case is numbered according to the level at which it should run. Say there are two automated test cases, numbered 1 and 2 respectively to indicate their levels; if you define the test level as 2 in the configuration file, it will run only the second test case and give you the result (a minimal sketch of this kind of level-based selection follows below). A smoke test generally has fewer test cases than an acceptance test.
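A minimal sketch of that level-based selection in Python; the test cases, level numbers, and the testlevels.ini configuration file are all hypothetical, just to illustrate the mechanism:

```python
import configparser
import sys

# Hypothetical test cases, each tagged with the level at which it runs
# (say, level 1 = smoke, level 2 = acceptance).
TEST_CASES = [
    {"name": "app_starts",         "level": 1, "run": lambda: True},
    {"name": "full_checkout_flow", "level": 2, "run": lambda: True},
]

def main():
    config = configparser.ConfigParser()
    config.read("testlevels.ini")  # hypothetical configuration file
    level = config.getint("run", "level", fallback=1)

    failed = False
    for case in TEST_CASES:
        if case["level"] != level:
            continue  # only run the test cases tagged for the chosen level
        ok = case["run"]()
        print(f"{case['name']}: {'PASS' if ok else 'FAIL'}")
        failed = failed or not ok
    sys.exit(1 if failed else 0)

if __name__ == "__main__":
    main()
```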
A smoke test can be automated, but not all automated tests are smoke tests.