How to measure functional test coverage for automation tests? - testing

I am working on a project that has automation tests for each feature/module, written in JavaScript with Cypress. I want to measure the functional test coverage of these automation tests. Is there a tool, technique, or approach for measuring functional test coverage? This is not about code coverage of the automation test code; we know a number of tools and libraries are available for test code coverage. This question is about functional test coverage.

Related

Regarding regression suite and smoke test suite implementation in the NUnit framework: C# Selenium

We have been using the NUnit framework for our test automation project.
Programming Language : C#
Automation IDE : Visual Studio, Selenium Libraries
Currently we are running all tests in a single namespace and class file.
We have a requirement to divide the test cases as follows:
Implement a test suite concept, e.g. a smoke suite and a regression suite.
Divide the test cases by functionality and keep them in the regression suite.
For example: Smoke suite: all general test cases.
The regression suite should contain Functionality1, Functionality2, ... Functionality n test cases, like we see in HP ALM or Microsoft Test Manager.
For instance, under the regression suite: login test cases, book ticket test cases, ticket cancellation test cases, and so on.
Could you please have a look and let me know the attributes and implementation approaches available in the NUnit framework?
Regards,
Khaja Shaik
Very general question, so a very general answer. :-)
There are a number of approaches. I will list three...
Divide your tests into separate assemblies. Run each assembly separately when you need it... like SmokeTests.dll, RegressionSuite.dll. I prefer to use this method to divide unit tests from functional or integration tests. It's particularly useful if different people are responsible for each, like programmers for unit tests and testers for final functional tests.
Use Categories to flag fixtures as belonging to each group, like [Category("SmokeTest")], etc. When you run the tests, you need to specify a filter to select the category; if you don't, you'll run all of them. This has the advantage that the same fixture can be run in more than one category (a minimal sketch of this approach follows after the third option).
Use namespaces to separate your tests, like namespace SmokeTests, etc. Personally, I don't like this because namespaces are more useful for grouping tests that need common setup, so I prefer to reserve the namespace for dividing the tests according to the nature of what they are testing. However, I've seen some people do it this way.
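As a minimal sketch of the category approach, assuming NUnit 3 (the fixture and test names below are hypothetical, and the Selenium steps are elided):

```csharp
using NUnit.Framework;

namespace MyProject.Tests
{
    [TestFixture]
    [Category("Regression")]            // every test in this fixture is part of the regression suite
    public class LoginTests
    {
        [Test]
        [Category("Smoke")]             // this test also belongs to the smoke suite
        public void CanLogInWithValidCredentials()
        {
            // ... Selenium steps against the application under test ...
            Assert.Pass();
        }

        [Test]
        public void LockedOutUserSeesAnErrorMessage()
        {
            // ... Selenium steps ...
            Assert.Pass();
        }
    }
}
```

With the NUnit 3 console runner you would then select a suite with a category filter, for example nunit3-console MyProject.Tests.dll --where "cat == Smoke" (older runners use /include and /exclude options instead).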

How to get combined test coverage from functional and unit tests

I have an existing Spring MVC webapp, built with Ant, set up in Jenkins for CI builds.
I am getting nice code coverage reports from my unit tests with Cobertura.
I recently added some functional/UI tests with Selenium. Does anyone have suggestions for how I could get a single code coverage report from both functional and unit tests? Has anyone done this successfully?
My end goal is to count code coverage holistically, so each class/method can be tested with the technique that makes the most sense and I hope to get close to 100% across all forms of testing. A specific example: it might make more sense to cover controllers through end-to-end UI testing, when they don't have any real logic of their own to test in isolation. I would then still report the code as "covered".
I am not trying to start a debate about unit tests being good/bad or TDD vs. BDD - I am asking a question about how to accomplish my goal with a given set of technologies.
I think Grails handles this nicely, but I haven't figured out how to do this with a regular webapp (Spring MVC, Java EE/JSF, etc.)

Testing phases: where does smoke testing fit?

So I was looking at a book and I don't really understand its classification:
Unit tests
Integration tests
Smoke and Sanity tests
System tests
Acceptance tests
I thought the smoke test would come right after the integration tests? Also, I thought that a sanity test means a quick check of the application when a new part is deployed.
So the question: is this correct, or should the smoke and sanity tests be in a different order? If so, why?
Smoke tests should be performed before sanity tests - that is correct. The purpose of smoke tests is just to quickly check whether the SUT is runnable and whether its interfaces and main components respond to the user's actions. There is no deep insight into the app during these tests.
Sanity tests can be a subset of regression tests. Their main goal is to quickly test the logic of the application against the requirements provided. They should be run after each major change in the way some part of the system works; if the results are negative, there is no point in going through more detailed tests. They should tell us whether the tested parts of the system match the requirements and specification.
And now the thing is that sanity tests can sit at the unit test level as well as the system test level. You can simply run a few unit tests specifically designed to check only basic functionality, and these can then be called sanity tests. The same applies to the system test level. So there is no strict definition of where sanity tests belong. I believe you should not take the ordering for granted, because every case is different, and the context of the tests and the application should be the main consideration.
A smoke test is a quick-and-dirty test of the most important features, usually done by someone other than the developer after unit and integration testing, to see whether there's a point in doing more specific/rigorous testing.
Basic test of key features.
Only test happy path.
Major features.
For example, if you're smoke testing an API (a code sketch follows this list):
Check responses are correct.
Test login with valid details.
Test main endpoints.
Check the correct response is returned.
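A minimal sketch of what such an API smoke test could look like, assuming NUnit and HttpClient; the base address, endpoint names, and credentials are placeholders:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
[Category("Smoke")]
public class ApiSmokeTests
{
    // Placeholder base address for the system under test.
    private static readonly HttpClient Client =
        new HttpClient { BaseAddress = new Uri("https://example.com/api/") };

    [Test]
    public async Task MainEndpointReturnsOk()
    {
        // Happy path only: the main endpoint should respond with 200.
        var response = await Client.GetAsync("tickets");
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }

    [Test]
    public async Task LoginWithValidDetailsSucceeds()
    {
        // Valid credentials are placeholders; a real suite would read them from configuration.
        var body = new StringContent("{\"user\":\"demo\",\"password\":\"demo\"}",
                                     Encoding.UTF8, "application/json");
        var response = await Client.PostAsync("login", body);
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
    }
}
```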
Smoke testing is the first and foremost testing done by QA personnel. It is done once unit testing has been completed by the developer.
The main goal of smoke testing is to gain confidence that the application can handle the positive flow at the very least. Once this is done, QA gradually proceeds with the following:
1. Functional testing
2. Link & download options
3. UI
4. System testing
5. Regression, for better results from previous builds
Happy Testing :)

Selenium IDE: How do I create a script to be executed before/after every test case in a given test suite?

I'm looking for something equivalent to JUnit setUp() and tearDown() methods. In other words: I have a test suite; I would like to write a setup test case and a teardown test case. The setup test case would be executed before each test in the suite. The teardown test case would be executed after each test in the suite.
How?
It sounds to me like you're at the point where you need to export your tests from Selenium IDE into another format/language. Selenium IDE is great for quick prototyping of tests or for showing off what Selenium can do, but when you actually begin to build a library of tests, you need to use a real programming language. Setup and Teardown are a part of every major testing suite (you mentioned JUnit but also TestNG, NUnit and MSTest for C#, etc) so use one! Using a real programming language also allows you to refactor your tests, extracting common functionality into classes and methods so that when your Application Under Test changes, you only need to change one method and not 100 tests. Most testing frameworks also support some sort of data driven testing which many Selenium users find useful.
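For example, here is one possible shape such a test could take after exporting to C# with NUnit and Selenium WebDriver; the class name, URL, and page elements are hypothetical:

```csharp
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

[TestFixture]
public class CheckoutTests
{
    private IWebDriver _driver;

    [SetUp]
    public void StartBrowser()
    {
        // Runs before each test: fresh browser session and a known starting page.
        _driver = new ChromeDriver();
        _driver.Navigate().GoToUrl("https://example.com/");
    }

    [TearDown]
    public void StopBrowser()
    {
        // Runs after each test, even when the test fails.
        _driver.Quit();
    }

    [Test]
    public void CanReachLoginPage()
    {
        _driver.FindElement(By.LinkText("Log in")).Click();
        Assert.IsTrue(_driver.Url.Contains("login"));
    }
}
```

Here [SetUp] runs before every test in the fixture and [TearDown] runs after every test, even when a test fails, which is the behavior the question asks for.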
Are you generating Java code to drive your test cases?
I ended up writing a custom format for C# to handle integrating Selenium test cases with MbUnit which are then just pulled to a Team City server and run after our nightly builds.
I suggest you check out Robot Framework. There is a Selenium library available for Robot Framework so you get almost all Selenium functionality plus you get a great framework to create your test suite.
In Robot Framework you can simply define Test Setup in the initial settings and it will be executed before every test case. Similarly Test Teardown will be executed after every test case in your test suite.

Is automated testing still referred to as smoke testing?

If not, is smoke testing still used?
It's sort of a Venn diagram. Some automated tests are smoke tests, and some smoke tests are automated (insofar as they are run by a computer program). A smoke test is a take-off (if I recall correctly) on the saying "Where there's smoke, there's usually fire." It's a set of preliminary tests that the program must pass to be considered for 'real' (viz. fire) testing.
A smoke test can be manual insomuch as a tester has a list of steps he follows, but these aren't automated with a computer program.
Smoke testing is still used -- in places I've worked, it's usually automated.
Automated testing can do smoke testing (shallow, wide), but it can also do other testing like regression testing, and unit testing. Basically automated testing can be any repeatable test.
Yes, smoke testing is still being used. I've generally seen two scenarios. The first is to determine whether the software is ready for more in-depth testing. The second, and IMO more common, is to skimp on fully testing functionality that should not have been affected by the changes in the new build.
I don't think smoke tests are usually automated. In my experience, the smoke test is really just a basic sanity test to make sure that subsequent tests can actually be run and that nothing basic got broken, like startup code or menu entries. This would usually be done manually by a person. I suppose it could be automated, but a new build usually involves the addition of new features, so the automated tests would have to be changed as well, and you'd still have the same problem: you'd need a person to verify that the automated tests were modified to test the new feature properly. In contrast, automated tests (like unit tests) represent a regression test suite and are created to test well-established functionality that should not change much from release to release, although of course you would add unit tests to cover new functionality as well.
Probably more in companies from a hardware background, where the smoke test was taken literally. Few people call them that anymore. It's usually just a small yet broad subset of a larger acceptance or system test suite. These tests are automated and are automatically run against code before it is submitted, or on submission to source code control.
I am not sure we can compare smoke and automated testing. Smoke testing is a way to run a set of basic tests on a build, covering all the basic features but not going in depth on any of them. The purpose is to determine whether a build can be used for more detailed testing or not. It is also a set of steps that can be run quickly, even on a developer build, to determine if there are any issues caused by significant or core changes that are about to go into a build. We consider the smoke test to be one of our 'test plans', but one that is run on every build.
Automated testing is not specific to smoke tests but can be applied there as well. It is done to automate redundant or repetitive steps that a tester always performs, in order to save time. That is the primary purpose of automation: it allows a tester to spend more time on other tests.
It can never be a replacement for testing by a real brain, nor can everything be automated. It is an activity that supplements the testing process in place, not a replacement for it.
Since the smoke test is potentially run on every build, there is good value in automating it. If a smoke test run manually takes 4 hours and after automation it takes 1 hour, you have saved 3 man-hours multiplied by the number of builds.
There are several tools on the market for automation testing - AutoIt and SilkTest to name a few.
In very simple words, smoke testing can be automated, but automated testing is not always smoke testing.
Yes, smoke testing is a popular way of testing any application/software.
My understanding of "smoke testing" is different from the Wikipedia article. I understand smoke testing to be the developer opening the app and testing the basic functionality to verify that the app looks right and is doing the basics. So I always thought it was a manual process, not an automated one.
A test automation suite contains various levels, like smoke test, acceptance test, nightly build, and so on. It is up to the tester to decide which test cases need to run at each level. Each test case is numbered according to the levels at which it should run. Say there are two automated test cases, numbered 1 and 2 respectively to indicate their levels; if you define the test level as 2 in the configuration file, it will run only the second test case and give you the result. A smoke test generally has fewer test cases than an acceptance test.
Smoke tests can be automated, but not all automated tests are smoke tests.