Is automated testing still referred to as smoke testing?

If not, is smoke testing still used?

It's sort of a Venn diagram: some automated tests are smoke tests, and some smoke tests are automated (insofar as they are run by a computer program). A "smoke test" is a take-off (if I recall correctly) on the saying "Where there's smoke, there's usually fire." It's a set of preliminary tests that the program must pass to be considered for 'real' (viz. fire) testing.
A smoke test can also be manual, insofar as a tester follows a list of steps by hand rather than having a computer program run them.
Smoke testing is still used -- in places I've worked, it's usually automated.

Automated testing can do smoke testing (shallow, wide), but it can also do other kinds of testing, like regression testing and unit testing. Basically, automated testing can be any repeatable test.
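As a rough sketch of a shallow-but-wide automated smoke test (using pytest; the `app` module and its `create()`/`test_client()` calls are made-up placeholders, not a real API):

```python
# Sketch of a shallow, wide smoke test suite (pytest). The `app` module,
# `create()`, and `test_client()` are hypothetical placeholders.
import pytest

app = pytest.importorskip("app")  # if the app can't even be imported, skip everything

def test_app_starts():
    # Shallowest possible check: the application object can be created at all.
    assert app.create() is not None

@pytest.mark.parametrize("path", ["/", "/login", "/search"])
def test_major_pages_respond(path):
    # Wide, not deep: touch each major page once and assert it renders at all.
    client = app.create().test_client()
    assert client.get(path).status_code == 200
```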
Yes, smoke testing is still being used. I've generally seen two scenarios. The first is to determine whether the software is ready for more in-depth testing. The second, and IMO more common, is to skimp on fully testing functionality that should not have been affected by the changes in the new build.

I don't think smoke tests are usually automated. The smoke test in my experience is really just a basic sanity test to make sure that subsequent tests can actually be run, and that nothing basic got broken, like startup code or menu entries. This would usually be done manually by a person. I suppose it could be automated, but a new build usually involves the addition of new features, so the automated tests would have to be changed as well, and you'd still have the same problem: you'd need a person to verify that the automated tests were modified to test the new features properly. In contrast, automated tests (like unit tests) represent a regression test suite and are created to test well-established functionality that should not change much from release to release, although of course you would add unit tests to cover new functionality as well.

Probably more in companies with a hardware background, where the smoke test was taken literally. Few people call them that anymore. It's usually just a small yet broad subset of a larger acceptance or system test suite. These tests are automated and are run automatically against code before it is submitted, or on submission to source code control.

I am not sure we can compare smoke testing and automated testing. Smoke testing is a way to run a set of basic tests on a build, covering all the basic features but not going into depth on any of them. The purpose is to determine whether a build can be used for more detailed testing or not. It is also a set of steps that can be run quickly, even on a developer build, to determine if there are any issues due to some significant or core changes that are about to go into a build. We consider the smoke test to be one of our 'test plans', but one that is run on every build.
Automated testing is not specific to smoke tests but can be applied there as well. It is done to 'automate' redundant or repetitive steps that a tester always does, to save time. That is the primary purpose of automation: it allows a tester to spend more time on other tests.
It can never replace testing by a real brain, nor can everything be automated. It is an activity that supplements the testing process in place, not a replacement for it.
Since the smoke test is potentially run on every build, there is good value in automating it. If a smoke test run manually takes 4 hours, and after automation it takes 1 hour, you have saved 3 man-hours per build.
There are several tools on the market for test automation - AutoIt and SilkTest, to name a few.

In very simple words: smoke testing can be automated, but automated testing is not always smoke testing.
Yes, smoke testing is a popular way of testing any application or piece of software.

My understanding of "smoke testing" differs from the Wikipedia article. I understand smoke testing to be the developer opening the app and testing the basic functionality to verify that the app looks right and is doing the basics. So I always thought of it as a manual process, not an automated one.

A test automation suite contains various levels, like smoke test, acceptance test, nightly build, and so on. It's up to the tester to decide which test cases need to run at each level. Each test case is numbered according to the levels at which it should be run. Say there are two automated test cases, numbered 1 and 2 respectively to indicate their levels; if you define the test level as 2 in the configuration file, it's going to run only the second test case and give you the result. A smoke test generally has fewer test cases than an acceptance test. A sketch of this idea is below.
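As a minimal sketch of that scheme (the `@level` decorator and the `TEST_LEVEL` config value are made up for illustration, not any particular framework's API):

```python
# Illustrative sketch of level-numbered test selection (unittest).
# The @level decorator and TEST_LEVEL value are hypothetical.
import unittest

TEST_LEVEL = 2  # in practice this would be read from a configuration file

def level(*levels):
    """Tag a test with the suite levels (1 = smoke, 2 = acceptance, ...) it belongs to."""
    def decorator(fn):
        fn._levels = levels
        return fn
    return decorator

class CheckoutTests(unittest.TestCase):
    @level(1, 2)  # broad enough for the smoke suite, also part of acceptance
    def test_cart_page_opens(self):
        self.assertTrue(True)  # placeholder body

    @level(2)     # deeper check: acceptance suite only
    def test_discount_code_applied(self):
        self.assertTrue(True)  # placeholder body

def load_tests(loader, standard_tests, pattern):
    # Keep only the test cases tagged with the configured level.
    suite = unittest.TestSuite()
    for class_suite in standard_tests:
        for test in class_suite:
            method = getattr(test, test._testMethodName)
            if TEST_LEVEL in getattr(method, "_levels", ()):
                suite.addTest(test)
    return suite

if __name__ == "__main__":
    unittest.main()
```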
Smoke test can be automated but not all automated tests are smoke tests.

Related

How to get the code coverage for the Web App?

I have repo A, where we have our application code, and repo B, where we have our Selenium code. Now we need to get the code coverage.
Any possible solutions?
You're kind of going down a rat-hole trying to calculate code coverage from system tests. Code coverage, as measured with tools like JaCoCo, is typically done on unit tests as part of the source-code build. That is, it's generated in the 'test' or 'integration-test' phase of the same Maven build that did the 'compile' phase. JaCoCo is very easy to use in this scenario.
Selenium tests are more at the system-test level, in that they work against a running system. Instrumentation of the .class files is more difficult in this realm, so you would have to jump through painful hoops to get JaCoCo results from Selenium.
Further, chasing code coverage in Selenium is a bad idea. When you want to ensure all branches are covered, you have to write a bunch of tests to cover the permutations. You want a lightweight framework, i.e. unit tests, to verify permutations; using a heavyweight framework like Selenium means you spend a LOT of time spinning containers up and down. That's not to say Selenium is bad. You should do measured code coverage in unit-test land, and then demonstrate that those unit tests are meaningful with a handful of system tests. Selenium (unmeasured) lends credibility to the idea that a statement like "we have unit tests with 80% code coverage" actually means "our system has reliable tests".
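To illustrate the permutation point with a toy example (the `shipping_cost` function is invented for this sketch): branch permutations like these run in milliseconds as unit tests, whereas each one would be a full browser session if driven through Selenium.

```python
# Toy example: branch permutations are cheap to cover in a unit test.
import pytest

def shipping_cost(weight_kg, express):
    # Stand-in for real application logic with a few branches.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg < 10 else 12.0
    return base * (2 if express else 1)

@pytest.mark.parametrize("weight,express,expected", [
    (1, False, 5.0),
    (1, True, 10.0),
    (10, False, 12.0),
    (10, True, 24.0),
])
def test_shipping_permutations(weight, express, expected):
    # Four branch permutations verified in milliseconds.
    assert shipping_cost(weight, express) == expected

def test_invalid_weight():
    with pytest.raises(ValueError):
        shipping_cost(0, False)
```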

How to get combined test coverage from functional and unit tests

I have an existing Spring MVC webapp, built with Ant, set up in Jenkins for CI builds.
I am getting nice code coverage reports from my unit tests with Cobertura.
I recently added some functional/UI tests with Selenium. Does anyone have suggestions for how I could get a single code coverage report from both functional and unit tests? Has anyone done this successfully?
My end goal is to count code coverage holistically, so each class/method can be tested with the technique that makes the most sense and I hope to get close to 100% across all forms of testing. A specific example: it might make more sense to cover controllers through end-to-end UI testing, when they don't have any real logic of their own to test in isolation. I would then still report the code as "covered".
I am not trying to start a debate about unit tests being good/bad or TDD vs. BDD - I am asking a question about how to accomplish my goal with a given set of technologies.
I think Grails handles this nicely, but I haven't figured out how to do this with a regular webapp (Spring MVC, Java EE/JSF, etc.)

Testing phases, where is the smoke testing?

So I was looking at a book and I don't really understand its classification:
Unit tests
Integration tests
Smoke and Sanity tests
System tests
Acceptance tests
I thought the smoke test would come right after the integration tests? Also, I thought that "sanity" means a quick check of the application when a new part is deployed.
So the question: is this correct, or should the smoke and sanity tests be in a different order? If so, why?
Smoke tests should be performed before sanity tests - that is correct. The purpose of smoke tests is just to quickly check whether the SUT is runnable and whether its interfaces and main components respond to the user's actions. There is no deep insight into the app during these tests.
Sanity tests can be a subset of regression tests. Their main goal is to quickly test the logic of the application against the requirements provided. They should be done after each major change in the way some part of the system works, and if the results are negative, there is simply no point in going through more detailed tests. They should tell us whether the tested parts of the system match the requirements and specification.
And now the thing is that sanity tests can be put at the unit test level as well as the system test level. You can simply run a few unit tests specifically designed to check only basic functionality, and these can then be called sanity tests. The same applies to the system test level. So there is no strict definition of where the place for sanity tests is. I believe you should not take it for granted, because every case is different, and the context of the tests and the application should be the major consideration. A small sketch of the unit-level variant follows.
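A sketch of that unit-level idea (the marker name is our own choice, and it would need registering in pytest.ini to avoid warnings):

```python
# Sketch: tagging a few unit tests as "sanity" checks (pytest markers), so the
# quick sanity pass can be run on its own with `pytest -m sanity`.
import pytest

def cart_total(prices):
    # Stand-in for real application logic.
    return sum(prices)

@pytest.mark.sanity
def test_cart_total_basic():
    # Basic functionality only: part of the quick sanity pass.
    assert cart_total([1, 2, 3]) == 6

def test_cart_total_empty():
    # Edge case: ordinary unit test, skipped by `pytest -m sanity`.
    assert cart_total([]) == 0
```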
A smoke test is a quick-and-dirty test of the most important features, usually done by someone other than the developer after unit and integration testing, to see if there's a point in doing more specific/rigorous testing.
Basic test of key features.
Only test happy path.
Major features.
For example, if you're smoke testing an API (see the sketch after this list):
Check responses are correct.
Test login with valid details.
Test main endpoints.
Check the correct response is returned.
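A minimal sketch of such an API smoke test (the base URL, endpoints, and credentials are all made up for illustration):

```python
# Minimal API smoke test sketch: happy path only, major endpoints only.
# BASE_URL, the endpoints, and the credentials are hypothetical.
import requests

BASE_URL = "https://api.example.com"

def test_login_with_valid_details():
    resp = requests.post(f"{BASE_URL}/login",
                         json={"user": "demo", "password": "demo"},
                         timeout=5)
    assert resp.status_code == 200

def test_main_endpoints_respond():
    for path in ("/health", "/users", "/orders"):
        resp = requests.get(f"{BASE_URL}{path}", timeout=5)
        # Correct response returned: 200 and a JSON body, nothing deeper.
        assert resp.status_code == 200
        assert resp.headers["Content-Type"].startswith("application/json")
```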
Smoke testing is the first and foremost testing done by any QA personnel. It is done once unit testing has been completed by the developer.
The main agenda of performing smoke testing is to confirm that your application can handle the positive flow at the very least. Once this is done, QA gradually proceeds with the following:
1. Functional testing
2. Link & download options
3. UI
4. System testing
5. Regression, for better results from previous builds
Happy Testing :)

Robot Framework integrated with a load testing tool

Is anyone aware of any ongoing open source project that integrates Robot Framework with a load testing tool such as Grinder, JMeter, FunkLoad, etc.?
Thanks
Yes. There is a Python library for integrating Robot Framework and JMeter: Robot Framework JMeter Library. It can be used for running JMeter and for parsing and converting results. I am the author of this library, so I might not be objective.
No, and that's likely not to happen. Robot Framework is for functional testing, not load testing. How would you deem a load test pass/fail, and how long does it run?
Robot Framework and functional tests have a finite, set execution time (a test takes as long as it needs to finish testing the particular feature, or times out before doing so in case it hangs), and strict criteria as to what counts as pass/fail when the test runs.
With load testing, at least during exploratory runs and test design, you don't run for a fixed time; and even if the time is fixed, it's usually not short (except for trial runs and scalable burst increases). And the criteria for pass/fail are usually ranges rather than yes/no.
So it's harder to integrate and design a test library that can offer pass/fail and run within some set time for load testing - unless someone can define a good architectural design for such a test and test library with Robot Framework.
I think the idea would be that a test case is created only once and can be used in functional tests as well as in load tests, and even in end-user monitoring. In this (utopian) way, a test case can be used during the whole lifecycle of an application. With a tag (for instance), a test case could be promoted to also be a load testing test case with another type of response validation. It would be nice to run Robot Framework and have it create a LoadRunner TruClient (or another browser-driven load test tool) script. The main purpose of the integration would be to automate the scripting.

What is a smoke test? [duplicate]

"Smoke test" is a term that comes from electrical engineering. It refers to a very basic, very simple test, where you just plug the device in and see if smoke comes out.
It doesn't tell you anything about whether or not the device actually works. The only thing it tells you is that it is not completely broken.
It is usually used as a time-saving step before the more thorough regression/integration/acceptance tests, since there is no point in running the full test suite if the thing catches fire anyway.
In programming, the term is used in the same sense: it is a very shallow, simple test that checks some simple properties, like: does the program even start? Can it print its help message? If those simple things don't work, there is no point in even attempting to run the full test suite, which can sometimes take minutes or even hours. A tiny sketch of such a check is below.
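A tiny sketch of the shallowest possible automated version of that check (`myprog` is a placeholder program name, not a real tool):

```python
# Sketch of the shallowest smoke check: does the program even start, and can
# it print its help message? "myprog" is a hypothetical placeholder.
import subprocess

def test_program_starts_and_prints_help():
    result = subprocess.run(["myprog", "--help"],
                            capture_output=True, text=True, timeout=10)
    assert result.returncode == 0            # it didn't crash outright
    assert "usage" in result.stdout.lower()  # it produced plausible help text
```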
A smoke test is basically just a sanity check to see if the software functions on the most basic level.
If your smoke test fails, it means there is no point in running your other functional tests.
A smoke test is a quick, lightweight test or set of tests, usually automated, that confirms that the basic functionality of a system is working correctly. It tends to emphasize broad tests rather than deep ones, and is usually done before launching a more extensive set of tests.
From Wikipedia:
It refers to the first test made after repairs or first assembly to provide some assurance that the system under test will not catastrophically fail. After a smoke test proves that "the pipes will not leak, the keys seal properly, the circuit will not burn, or the software will not crash outright," the assembly is ready for more stressful testing.
In computer programming and software testing, smoke testing is a preliminary to further testing, which should reveal simple failures severe enough to reject a prospective software release. In this case, the smoke is metaphorical.
A smoke test is a quick test to see if the application will "catch on fire" when you run it for the first time - in other words, a check that the application isn't broken.