Unit tests in Sonar - maven-2

Is it possible to run unit tests when a non-maven project is analyzed with Sonar, in Sonar light mode?

Sonar doesn't run unit tests, but it should be able to analyze existing unit test reports. From Reuse in Sonar unit test reports generated by other systems:
2. Using Sonar in its full capability in an ANT environment
If you are using ANT to build your applications, the main weakness in Sonar so far was that it did not allow displaying unit test results or code coverage. I am sure that, having read the first use case, you know that by using the “-Dsonar.dynamicAnalysis=reuseReports” parameter, this limitation no longer exists. You simply need to specify where the reports to reuse can be found, using the following properties: sonar.cobertura.reportPath, sonar.clover.reportPath, sonar.surefire.reportsPath...
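For a non-Maven project analyzed in light mode, the same idea applies: you point the analysis at reports your own build has already produced. A minimal sketch of the properties involved, assuming the standalone Sonar Runner and illustrative paths (exact property names vary between Sonar versions):

# sonar-project.properties -- sketch for a non-Maven ("light") analysis
sonar.projectKey=org.example:myproject
sonar.projectName=My Project
sonar.projectVersion=1.0
sonar.sources=src
sonar.binaries=build/classes
# reuse reports produced by the existing build instead of running tests
sonar.dynamicAnalysis=reuseReports
sonar.surefire.reportsPath=build/test-reports
sonar.cobertura.reportPath=build/coverage/cobertura.xml

Running sonar-runner from the project root would then pick up these reports during the analysis.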

Automated Testing for testers with no coding required

I'm trying to improve the testing process where I work, but without adjusting the structure.
What we have: VSTS, Selenium IDE, Testers who write test cases, but not code.
What I'd like to do is manage a way to marry our TFS continuous integration with the Selenium tests we write. These are NOT the code-driven Selenium tests, but rather the IDE version where users click through and set assertions using the IDE (all are just UI tests). I know we can export those test plans as a .side file, but what I can't figure out is how to have our TFS server execute those as part of a deployment or build pipeline.
Ideally, developers/devops would setup projects in TFS from the onset with whatever solution makes sense to execute these Selenium .SIDE files, but afterwards, the testers would manage adding/modifying those tests cases elsewhere.
The real goal here is to not have testers writing code, or checking in code. Only writing these UI Selenium tests, but having TFS execute those as part of CI.
Researching this on the internet almost always leads me to something that requires testers to write code.
I don't think you can automate testing without code; at the least, you need a test project containing your automated tests.
Generally, in Azure DevOps, we use Visual Studio Test task to run tests. This task supports using the following tests:
Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify a filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with them. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
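For reference, this is roughly how the Visual Studio Test task is declared in a YAML pipeline; the classic build editor exposes the same options in its UI. The task version and patterns below are assumptions, not taken from the question:

steps:
- task: VSTest@2
  inputs:
    testSelector: 'testAssemblies'        # or 'testPlan' / 'testRun'
    testAssemblyVer2: |
      **\*Tests*.dll
      !**\obj\**
    searchFolder: '$(System.DefaultWorkingDirectory)'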
This was a question that I had as well, and I think I found an imperfect but better solution.
I wasn't able to get my Selenium IDE tests running with Jenkins, but I was able to get them to run with TeamCity, another CI.
I created a build step like the following:
Runner type: Command Line
Working Directory: where the selenium IDE .side file is located
Run: Custom Script
With the build script content that I usually use to run my Selenium IDE tests, such as selenium-side-runner sidefile.side
I also added the following so I could output the results in JUnit or another format: --output-directory=results --output-format=junit
You can also add the following so the tests run headlessly (this only works in Chrome): -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"
Finally, I also use --filter to run one test suite at a time, but that is optional.
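Putting those options together, the build script content ends up looking something like this (the file name and suite filter are just examples):

selenium-side-runner sidefile.side --filter "MySuite" --output-directory=results --output-format=junit -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"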
I then used another build step to export the results to our test manager, Xray, but I think that is beyond the scope of this question.
The problem with this solution is that it still runs directly from a user's individual machine, but that can be worked around.

BDD Cucumber test management tool

Is there an open source tool available to control the running of BDD cucumber tests?
We are developing BDD cucumber tests and would like the option to control the tests when running them (start/stop/pause/restart) using an open source (or proprietary) test tool.
The short answer is: yes.
The somewhat longer answer is that it depends on your ecosystem.
If you are using Java, then any build tool will be sufficient, that is, Maven, Gradle, or similar. These are easy to integrate into your Continuous Integration (CI) environment. With a tool chain like that, you can execute Cucumber on every build and will always know whether your system works or not.
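For example, if the Cucumber scenarios are wired into the test phase of a Maven project, the CI server only has to invoke the build, and a tag expression gives you coarse control over which scenarios run (the @smoke tag below is just an illustration, and the cucumber.filter.tags property assumes Cucumber-JVM 5 or later):

mvn clean test
mvn clean test -Dcucumber.filter.tags="@smoke"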
Yes, but only in a small scope (automation tests) and with limited process control for running and controlling the tests. For a larger scope with multiple branches and projects, I think you have to move to Jenkins for full control.
The following link describes the comparison: https://www.saashub.com/compare-jenkins-vs-cucumber

TFS: Create individual Bug items when XUnit tests fail

Our goal is to implement CI testing and deployment for our DEV web environments:
Goal
Run XUnit tests on check-in.
If tests fail, create individual, associated Bug work items. Stop.
If tests pass, deploy the build to a UNC file path.
Current Setup
CI is on for the branch, and the build definition currently has enabled Create Work Item on Failure on the Options panel.
XUnit was integrated into the Visual Studio Test build step by providing the necessary Path to Custom Test Adapters.
Problem
Tests run and display results correctly in the build, but no bugs are created for the failed tests, only one for the overall build fail.
Question
How can I create individual Bugs (and include details about the bug in its description)?
You would have to write your own code to create Bugs for each test failure.
I would, however, recommend against it, as this creates unnecessary work items that may not really be bugs. Maybe a single test fails and the other 200 tests fail as a result; we only have one bug. You will overwhelm people.
You can easily create bugs as you investigate failures using the failed test list that is part of the build results.
https://www.visualstudio.com/en-us/docs/test/continuous-testing/getting-started/getting-started-with-continuous-testing
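If you do decide to create work items programmatically, the Work Item Tracking REST API is the usual hook. A rough sketch with curl, where the collection URL, project name, API version, and field values are all placeholders; a script step would still have to parse the .trx results and call it once per failed test:

curl -u user:PAT \
  -H "Content-Type: application/json-patch+json" \
  -X POST "https://tfs.example.com/DefaultCollection/MyProject/_apis/wit/workitems/\$Bug?api-version=3.0" \
  -d '[
        {"op": "add", "path": "/fields/System.Title", "value": "Failing test: MyTest"},
        {"op": "add", "path": "/fields/Microsoft.VSTS.TCM.ReproSteps", "value": "Stack trace and message from the test run"}
      ]'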

sonar integration test coverage

We have a separate integration test project which fires the integration test cases against different modules. At present we do not have unit test cases within each module. We would like to ensure that the integration tests cover most of the domain functionality.
Since we have the integration tests in a different project, Sonar always reports the test coverage as zero for the modules under test.
Is there any way to have the test coverage reported on a project when the actual tests are run from a different project?
Thanks
You should be able to achieve what you want by reading the Code Coverage documentation page on the wiki. Most notably, you'll be able to use the following sample project to see how it works:
IT JaCoCo Sonar Runner sample project
Basically, you have to run your integration tests first using JaCoCo to generate the coverage report (jacoco.exec) and then you reuse this report during the SonarQube analysis.
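In practice that means attaching the JaCoCo agent to the JVM that runs the modules while the integration test project executes, and then pointing the analysis at the resulting file. A rough sketch with example paths (the property below, sonar.jacoco.itReportPath, is the one used by older SonarQube versions and has since changed):

# start the module under test with the JaCoCo agent attached
java -javaagent:jacocoagent.jar=destfile=target/jacoco-it.exec,append=false -jar my-module.jar

# then, in the analysis configuration, reuse that report
sonar.dynamicAnalysis=reuseReports
sonar.jacoco.itReportPath=target/jacoco-it.exec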

Code coverage for GUI based functional tests

I am trying to get bytecode coverage analysis using a code coverage tool (like Emma or Jacoco) after testing with a GUI based functional testing tool (like HP QuickTest Pro or Selenium).
Anyone who has done this could please give me an idea to start this project?
I am doing this now. My approach is to use the JaCoCo Ant tasks to instrument the binary byte-code files, and to use a specific CLASSPATH to execute the instrumented binaries from an Ant build.xml run by Jenkins.
The reason for doing the code coverage from byte-code is that there is an existing set-up that runs test scripts for a large application using HP QuickTest Pro. I would imagine that the test coverage is in the single digits, but we need an empirical baseline to demonstrate the possible improvements in code coverage from doing unit tests during a build.
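For anyone starting down the same path, this is a minimal sketch of the Ant wiring for that approach; the task names are JaCoCo's own Ant tasks, while the directories and jar locations are assumptions:

<!-- build.xml fragment: make the JaCoCo Ant tasks available -->
<taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml"
         classpath="lib/jacocoant.jar"/>

<!-- instrument the compiled classes into a separate directory -->
<jacoco:instrument destdir="build/classes-instr" xmlns:jacoco="antlib:org.jacoco.ant">
    <fileset dir="build/classes" includes="**/*.class"/>
</jacoco:instrument>

The application is then started with build/classes-instr ahead of build/classes on the CLASSPATH, plus lib/jacocoagent.jar for the JaCoCo runtime. While the QuickTest Pro or Selenium scripts exercise the GUI, the instrumented classes record execution data to jacoco.exec, and the jacoco:report task turns that file into the coverage report after the run.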