How to get code coverage for a web app? - selenium

I have repo A, where we have our application code, and repo B, where we have our Selenium code. Now we need to get code coverage.
Any possible solutions?

You're kind of going down a rat-hole trying to calculate code coverage from system tests. Code coverage, as measured with tools like JaCoCo, is typically done on unit tests as part of the source-code build. That is, it's generated in the 'test' or 'integration-test' phase of the same Maven build that did the 'compile' phase. JaCoCo is very easy to use in this scenario.
Selenium tests are more at the system-test level in that they work on a running system. Instrumenting the .class files is more difficult in this realm, so you would have to jump through painful hoops to get JaCoCo results from Selenium.
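If you do go down that road, the usual trick is to attach the JaCoCo agent to the JVM serving the web app and dump the execution data once the Selenium suite has finished. A minimal sketch, assuming the app was started with -javaagent:jacocoagent.jar=output=tcpserver,port=6300 and that org.jacoco.core is on the classpath (the port and file names here are just placeholders):

```java
import java.io.File;
import java.io.IOException;

import org.jacoco.core.tools.ExecDumpClient;
import org.jacoco.core.tools.ExecFileLoader;

public class CoverageDump {
    public static void main(String[] args) throws IOException {
        // Connect to the JaCoCo agent running inside the application's JVM
        ExecDumpClient client = new ExecDumpClient();
        ExecFileLoader loader = client.dump("localhost", 6300);
        // Write the collected execution data; point the jacoco report task
        // at this file plus the application's .class files for HTML output
        loader.save(new File("target/jacoco-selenium.exec"), true);
    }
}
```

Because the class files live in repo A and the Selenium suite in repo B, the report step needs access to repo A's compiled classes, which is exactly the kind of cross-repo plumbing that makes this painful.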
Further, chasing code coverage with Selenium is a bad idea. When you want to ensure all branches are covered, you have to write a bunch of tests for the permutations, and you want a lightweight framework, i.e. unit tests, to verify those permutations. Using a heavyweight framework like Selenium means you will spend a LOT of time spinning containers up and down. That's not to say Selenium is bad. Do measured code coverage in unit-test land, and then demonstrate that those unit tests are meaningful with a handful of system tests. A handful of (unmeasured) Selenium tests lends credibility to the claim that "we have unit tests with 80% code coverage" actually means "our system has reliable tests".
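To make the contrast concrete, here is a hypothetical sketch of the branch-permutation testing that is cheap at the unit level but slow through a browser (DiscountCalculator and its thresholds are invented for illustration):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DiscountCalculatorTest {

    // Hypothetical class under test with three branches to cover
    static class DiscountCalculator {
        double discountFor(int items) {
            if (items >= 100) return 0.20; // bulk branch
            if (items >= 10)  return 0.10; // volume branch
            return 0.0;                    // default branch
        }
    }

    private final DiscountCalculator calc = new DiscountCalculator();

    // Each permutation is a millisecond-fast test; driving the same three
    // branches through Selenium would mean three full browser round-trips
    @Test public void noDiscountBelowTen()    { assertEquals(0.0,  calc.discountFor(9),   0.0001); }
    @Test public void volumeDiscountAtTen()   { assertEquals(0.10, calc.discountFor(10),  0.0001); }
    @Test public void bulkDiscountAtHundred() { assertEquals(0.20, calc.discountFor(100), 0.0001); }
}
```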

Related

Are there any advantages of using Testng with cucumber?

When creating automated tests with Selenium, I thought one would simply use Cucumber with Selenium, TestNG with Selenium, or just JUnit with Selenium (although using only JUnit is not very popular). I have recently found out that you can use Cucumber with TestNG, but I don't see what the gain of doing this is. If someone is using both of them together, can you tell me why?
EDIT:
Using TestNG over JUnit has many advantages. My question is: if I use Cucumber, does it still make a difference?
P.S. I am not trying to start a tool-vs-tool war.
The answer you seem to be looking for concerns what Cucumber, as a tool, adds to existing test frameworks.
The answer:
Cucumber adds an extra level of communication between you (the development team) and the management team. You are able to link test cases to scenarios that are now understandable by the business, which means that everybody is on the same page. You can even use the BDD tool to start talking about behaviours of the feature:
What things should be included?
Do we need more information?
Let's add that to the file, so that we can test that use case later.
Any new functionality added to the feature later?
Need to understand which section has gone wrong quickly, without having to decipher code written by the intern that was in for 2 months in the summer?
Cucumber helps with all of this, and that's just scraping the surface.
TestNG, JUnit, Selenium? If you can imagine it, you can do it. With Cucumber as your helpful neighbourhood BDD tool, you can pull together your test suite and bolt an abstraction layer on top. The business will now be able to look at the test results, and where tests have failed, they will be able to describe to other members of management exactly which section has gone wrong, without having to go too far into technical details.
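As a rough illustration of that abstraction layer, here is a hypothetical Java step-definition class; the step text, URL, and page logic are all made up, and the Selenium calls are stubbed out:

```java
import static org.junit.Assert.assertTrue;

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;

// Glue code: each plain-English step from the .feature file maps to a method,
// so a failure reads as business language rather than raw Selenium calls.
public class LoginSteps {

    private boolean loggedIn;

    @Given("a registered user on the login page")
    public void aRegisteredUserOnTheLoginPage() {
        // e.g. driver.get("https://example.test/login"); -- hypothetical URL
    }

    @When("they sign in with valid credentials")
    public void theySignInWithValidCredentials() {
        loggedIn = true; // stand-in for the real Selenium interaction
    }

    @Then("they see their dashboard")
    public void theySeeTheirDashboard() {
        assertTrue(loggedIn);
    }
}
```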
If you're wondering whether to use JUnit or TestNG for this, it is mostly a matter of preference. If you have an existing suite, bolting Cucumber on top of whatever test tool you currently use is the best option.
Also, make sure you are using the right language for your team. For instance:
Are you introducing a team of manual testers to developing test automation?
Maybe you should use Ruby or JavaScript, as they are easier to pick up as a first language.
Are you a development team, using cucumber to add an abstraction layer to your unit tests?
Use the language that you are using for development, with the unit test tool that you are using.
Are you developers in test, using cucumber for automating tests for your website?
Use the language that you and your team are most comfortable with, preferring the language used for development over any others that tie with it (based on a team vote).
I think it depends on what your other tests are (unit tests, for example) and how you run them.
If your current tests are already using TestNG, then it will be easier to run Cucumber tests with TestNG engine.
Conversely, if you already have JUnit tests, it could be easier to use JUnit for the Cucumber run (but TestNG is able to run JUnit tests, so you can use TestNG in that case too).
And if you have no other tests, the choice of test runner will depend on your own taste.
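For reference, wiring Cucumber to the TestNG engine is a one-class affair with the cucumber-testng artifact; a minimal sketch (the feature path and glue package are hypothetical):

```java
import io.cucumber.testng.AbstractTestNGCucumberTests;
import io.cucumber.testng.CucumberOptions;

// Runs every scenario under the given features directory through TestNG,
// using step definitions from the (hypothetical) "steps" package.
@CucumberOptions(features = "src/test/resources/features", glue = "steps")
public class RunCucumberTest extends AbstractTestNGCucumberTests {
}
```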
Yes, I understand your question. I had the same doubt:
We use Selenium for automation testing. Since it doesn't provide proper reports, we add TestNG to it (and also for its other features). But now we have Cucumber, which gives proper reports. So why do we need TestNG?
I realized that, though we get proper results with Cucumber, TestNG provides many other features which Cucumber cannot, like setting priority, setting method dependency, timeouts, grouping, etc.
Though Cucumber provides a tag feature, it does not provide all the features offered by TestNG. Maybe when Cucumber incorporates all those features, we can eliminate TestNG.
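To illustrate the kind of features meant here, a small hypothetical TestNG class combining priority, method dependency, timeout, and grouping:

```java
import org.testng.annotations.Test;

public class CheckoutFlowTest {

    // Lower priority values run earlier; this test is also in the smoke group
    @Test(priority = 1, groups = "smoke")
    public void login() { /* ... */ }

    // Skipped automatically if login() fails; fails if it runs over 5 seconds
    @Test(priority = 2, dependsOnMethods = "login", timeOut = 5000, groups = "regression")
    public void addItemToCart() { /* ... */ }
}
```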

How to get combined test coverage from functional and unit tests

I have an existing Spring MVC webapp, built with Ant, set up in Jenkins for CI builds.
I am getting nice code coverage reports from my unit tests with Cobertura.
I recently added some functional/UI tests with Selenium. Does anyone have suggestions for how I could get a single code coverage report from both functional and unit tests? Has anyone done this successfully?
My end goal is to count code coverage holistically, so each class/method can be tested with the technique that makes the most sense and I hope to get close to 100% across all forms of testing. A specific example: it might make more sense to cover controllers through end-to-end UI testing, when they don't have any real logic of their own to test in isolation. I would then still report the code as "covered".
I am not trying to start a debate about unit tests being good/bad or TDD vs. BDD - I am asking a question about how to accomplish my goal with a given set of technologies.
I think Grails handles this nicely, but I haven't figured out how to do this with a regular webapp (Spring MVC, Java EE/JSF, etc.)

Frontend testing: what and how to test, and what tool to use?

I have been writing tests for my Ruby code for a while, but as a frontend developer I am obviously interested in bringing this to the code I write for the frontend. There are quite a few different options I have been playing around with:
CasperJS
Capybara & Rspec
Jasmine
Cucumber or just Rspec
What are people using for testing? And beyond that, what do people test? Just JavaScript? Links? Forms? Hardcoded content?
Any thoughts would be greatly appreciated.
I had the same questions a few months ago and, after talking to many developers and doing a lot of research, this is what I found out. You should unit test your JavaScript, write a small set of UI integration tests and avoid record and playback testing tools. Let me explain that in more detail.
First, consider the test pyramid. This is an interesting analogy created by Mike Cohn that will help you decide which kinds of testing you should be doing. At the bottom of the pyramid are the unit tests, which are solid and provide fast feedback. These should be the foundation of your test strategy and thus occupy the largest part of the pyramid. At the top, you have the UI tests. Those are the tests that interact with your UI directly, as Selenium does, for example. Although these tests might help you find bugs, they are more expensive and provide very slow feedback. Also, depending on the tool you use, they become very brittle, and you will end up spending more time maintaining these tests than writing actual production code. The service layer, in the middle, includes integration tests that do not require a UI. In Rails, for instance, you would test your REST interface directly instead of interacting with the DOM elements.
Now, back to your question. I found out that I could greatly reduce the number of bugs in my project, which is a web application written in Spring Roo (Java) with tons of JavaScript, simply by writing enough unit tests for the JS. In my application, there is a lot of logic written in JS, and that is the kind of thing that I am testing here. I am not concerned about how the page will actually look or whether the animations play as they should. I test whether the modules I write in JS execute the expected logic, whether element classes are correctly assigned, and whether error conditions are handled well. For these tests, I've been using Jasmine. This is a great tool. It is very easy to learn and has nice mocking capabilities, which are called spies. Jasmine-jQuery adds more great functionality if you are using jQuery. In particular, it allows you to specify fixtures, which are snippets of HTML code, so you don't have to manually mock the DOM. I have integrated this tool with Maven, and these tests are part of my CI strategy.
You have to be careful with UI tests, especially if you rely on record/playback tools like Selenium. Since the UI changes often, these tests keep breaking, and you will spend a lot of time finding out whether the tests really failed or are just outdated. Also, they don't add as much value as unit tests. Since they need an integrated environment to run, you will most likely run them only after you have finished developing, when the cost of fixing things is higher.
For smoke/regression tests, however, UI tests are very useful. If you need to automate them, watch out for some dangers:
Write your tests, don't record them. Recorded tests usually rely on automatically generated XPaths that break with every little change you make to your code. I believe Cucumber is a good framework for writing these tests, and you can use it along with WebDriver to automate the browser interaction.
Code with tests in mind. In UI tests, you will have to make elements easier to find so that you don't have to rely on complex XPaths. Adding class and id attributes where you usually wouldn't will be frequent.
Don't write tests for every small corner case. These tests are expensive to write and take too long to run. You should focus on the cases that exercise most of your functionality. If you write too many tests at this level, you will probably retest functionality already covered by your unit tests (supposing you have written them).
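As an illustration of coding with tests in mind, compare a recorded XPath locator with an id-based one in Java WebDriver (the path and id here are hypothetical):

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class LocatorExample {

    // Brittle: a recorded, auto-generated XPath that breaks as soon as the
    // surrounding markup changes
    static WebElement recorded(WebDriver driver) {
        return driver.findElement(By.xpath("/html/body/div[2]/div/form/div[3]/button"));
    }

    // Robust: survives layout changes because the id was added for testability
    static WebElement testable(WebDriver driver) {
        return driver.findElement(By.id("checkout-submit"));
    }
}
```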
In my current project I am using Spock and Geb to write the UI tests. I find these tools amazing. They are written in Groovy, which suits my Java-based project well.
There are lots of options and tools for that, but the choice depends on whether you have a web UI or a desktop app.
Judging from the tools you've mentioned, it's a web UI. I would suggest Selenium (aka WebDriver): http://seleniumhq.org/docs/
It supports a variety of languages (Ruby is on the list), it can be run against a variety of browsers, and it's quite easy to use, with lots of tutorials and tips available.
Oh, and it's free, of course :)
Since this post gets a lot of attention, I thought I would post an answer to my own question, as I write lots of tests now and frontend testing has moved on a lot.
In terms of FE testing, I spent a lot of time using Karma with Jasmine, although Karma works nicely with other test suites like Mocha and QUnit. These are great, and Karma lets you run your tests directly against real browsers. The downside is that as your test suite gets large, it can become quite slow.
So recently I have moved to Jest, which is much faster, and if you're writing a React app, using Enzyme with snapshot testing gives you really good coverage. Speaking of coverage, Jest has Istanbul coverage built in and already set up, and mocking is really simple to use. The downside is that it doesn't test in a browser; it uses something called jsdom, which is fast but has a few nuances. Personally I don't find this a big deal, particularly as I compile my code through webpack/babel, which means cross-browser bugs are fairly few and far between, so it generally isn't an issue if you test manually anyway (and IMO you should).
In terms of working within the Rails stack, this is much easier now that the webpacker gem is available, and using npm and Node is generally much more accepted. I would recommend using nvm to manage your Node versions.
While this isn't strictly testing, I would also recommend using linting, as it picks up a lot of issues in your code. For JS I use ESLint with Prettier, and for SCSS/CSS I use stylelint.
In terms of what to test, I think the test pyramid Carlos talks about is still relevant; after all, the theory doesn't change, just the tools. I would also add: be practical about testing. I would always test, but to what level and coverage will depend on the project. It is important to manage your time and avoid spending hours or days testing a short-lifecycle project. For larger/longer-term projects, the benefits of a larger test suite are obviously greater.
Anyway, I hope that helps people who look at this question.

Haskell testing workflow

I just started a new Haskell project and wanted to set up a good testing workflow from the beginning. It seems like Haskell has a lot of excellent and unique testing tools and many different ways to integrate them.
I have looked into:
HUnit
QuickCheck
benchpress
HPC
complexity
These all seem to work very well in their domains, but I'm looking for a comprehensive approach to testing and was wondering what has worked well for other people.
Getting unit testing, code coverage, and benchmarks right is mostly about picking the right tools.
test-framework provides a one-stop shop to run all your HUnit test-cases and QuickCheck properties all from one harness.
Code coverage is built into GHC in the form of the HPC tool.
Criterion provides some pretty great benchmarking machinery.
I'll use as a running example a package that I just started enabling with unit testing, code coverage, and benchmarks:
http://github.com/ekmett/speculation
You can integrate your tests and benchmarks directly into your cabal file by adding sections for them and hiding them behind flags, so that not every user of your library has to have access to (or want to use) the exact version of the testing tools you've chosen.
http://github.com/ekmett/speculation/blob/master/speculation.cabal
Then, you can tell cabal how to run your test suite. As cabal test doesn't yet exist -- we have a student working on it for this year's Summer of Code! -- the best mechanism we have is to use cabal's user hook mechanism. This means switching to a 'Custom' build with cabal and setting up a testHook. An example of a testHook that runs a test program written with test-framework and then applies hpc to profile it can be found here:
http://github.com/ekmett/speculation/blob/master/Setup.lhs
And then you can use test-framework to bundle up QuickCheck and HUnit tests into one program:
http://github.com/ekmett/speculation/blob/master/Test.hs
The cabal file there is careful to turn on -fhpc to enable code coverage testing, and then the testHook in Setup.lhs manually runs hpc and writes its output into your dist dir.
For benchmarking, the story is a little more manual; there is no 'cabal benchmark' option. You could wire your benchmarks into your test hook, but I like to run them by hand, since Criterion has so many graphical reporting options. You can add your benchmarks to the cabal file as shown above, give them separate compilation flags, hide them behind a cabal flag, and then use Criterion to do all the heavy lifting:
http://github.com/ekmett/speculation/blob/master/Benchmark.hs
You can then run your benchmarks from the command line and get pop-up KDE windows with benchmark results, etc.
Since in practice you're living in cabal anyway while developing Haskell code, it makes a lot of sense to integrate your toolchain with it.
Edit: Cabal test support now does exist. See http://www.haskell.org/cabal/release/cabal-latest/doc/users-guide/developing-packages.html#test-suites
The approach advocated in RWH ch. 11 and in XMonad is approximately:
State all properties of the system in QuickCheck
Show test coverage with HPC.
Confirm space behavior with heap profiling.
Confirm thread/parallel behavior with ThreadScope.
Confirm microbenchmark behavior with Criterion.
Once your major invariants are established via QuickCheck, you can start refactoring, moving those tests into type invariants.
Practices to support your efforts:
Run a simplified QuickCheck regression on every commit.
Publish HPC coverage details.
The test-framework package is really awesome. You can easily integrate HUnit and QuickCheck tests, and get executables that run specified suites only, based on command-line flags, with multiple output targets.
Testing and profiling are different beasts though. For profiling, I'd set up a separate executable that stresses just the section you want to profile, and look carefully at the results of profiling builds and runs (with -prof -auto-all for compilation and +RTS -p as a runtime flag).
For testing, I rely on HUnit and QuickCheck properties and use the Haskell Test Framework to collect all unit tests and all QuickCheck properties automatically.
Disclaimer: I'm the main developer of the Haskell Test Framework.

Is automated testing still referred to as smoke testing?

If not, is smoke testing still used?
It's sort of a Venn diagram. Some automated tests are smoke tests, and some smoke tests are automated (insofar as they are run by a computer program). A smoke test is a take-off (if I recall correctly) on the saying "Where there's smoke, there's usually fire." It's a set of preliminary tests that the program must pass to be considered for 'real' (viz. fire) testing.
A smoke test can be manual, insofar as a tester follows a list of steps that aren't automated with a computer program.
Smoke testing is still used -- in places I've worked, it's usually automated.
Automated testing can do smoke testing (shallow, wide), but it can also do other testing like regression testing, and unit testing. Basically automated testing can be any repeatable test.
Yes, smoke testing is still being used. I've generally seen two scenarios. The first is to determine whether the software is ready for more in-depth testing. The second, and IMO more common, is to skimp on fully testing functionality that should not have been affected by the changes in the new build.
I don't think smoke tests are usually automated. The smoke test, in my experience, is really just a basic sanity test to make sure that subsequent tests can actually be run and that nothing basic got broken, like startup code or menu entries. This would usually be done manually by a person. I suppose it could be automated, but usually a new build involves the addition of new features, so the automated tests would have to be changed as well, and you'd still have the same problem: you'd need a person to verify that the automated tests were modified to test the new feature properly. In contrast, automated tests (like unit tests) represent a regression test suite and are created to test well-established functionality that should not change much from release to release, although of course you would add unit tests to cover new functionality as well.
Probably more so in companies with a hardware background, where the smoke test was taken literally. Few people call them that anymore. It's usually just a small yet broad subset of a larger acceptance or system test suite. These tests are automated and are automatically run against code before it is submitted, or on submission, to source code control.
I am not sure we can compare smoke and automated testing. Smoke testing is a way to run a set of basic tests on a build, covering all the basic features but not going into depth on any. The purpose is to determine whether a build can be used for more detailed testing or not. It is also a set of steps that can be run quickly, even on a developer build, to determine whether there are any issues from significant or core changes that are about to go into a build. We consider the smoke test to be one of our 'test plans', but one that is run on every build.
Automated testing is not specific to smoke tests but can be applied there as well. It is done to 'automate' redundant or repetitive steps that a tester always performs, in order to save time. That is the primary purpose of automation: it allows a tester to spend more time on other tests.
It can never replace testing by a real brain, nor can everything be automated. It is an activity that supplements the testing process in place, not a replacement for it.
Since the smoke test is potentially run on every build, there is good value in automating it. If a smoke test run manually takes 4 hours and after automation takes 1 hour, you have saved an effort of 3 person-hours times the number of builds.
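As a sketch of what that automation might look like with TestNG groups (the class and test names are hypothetical), tag the broad-but-shallow checks and run only that group on every build:

```java
import org.testng.annotations.Test;

public class BuildVerificationTest {

    // The smoke subset: wide coverage, no depth, fast to run
    @Test(groups = "smoke")
    public void applicationStartsAndHomePageLoads() { /* ... */ }

    @Test(groups = "smoke")
    public void userCanLogIn() { /* ... */ }

    // Deeper scenario, reserved for the full regression pass
    @Test(groups = "regression")
    public void fullCheckoutFlow() { /* ... */ }
}
```

Running just the smoke group (for example with Maven Surefire's -Dgroups=smoke) gives the quick per-build verdict, while the full suite runs less often.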
There are several tools on the market for test automation - AutoIt and SilkTest, to name a few.
In very simple words: smoke testing can be automated, but automated testing is not always smoke testing.
Yes, smoke testing is a popular way of testing any application/software.
My understanding of "smoke testing" is different from the Wikipedia article's. I understand smoke testing to be the developer opening the app and exercising the basic functionality to verify that the app looks right and does the basics. So I always thought of it as a manual process, not an automated one.
A test automation suite contains various levels, like smoke tests, acceptance tests, nightly builds, and so on. It's up to the tester to decide which test cases need to run at each level. Each test case is numbered according to the levels at which it should run. Say there are 2 automated test cases, numbered 1 and 2 respectively to indicate their levels; if you define the test level as 2 in the configuration file, it will run only the second test case and give you the result. A smoke test generally has fewer test cases than an acceptance test.
Smoke test can be automated but not all automated tests are smoke tests.