JUnit report visualization tools - testing

I'm looking for a JUnit report visualization tool. I have a set of XML reports generated by a custom test-suite tool and I want to be able to visualize their history. A plugin for Jenkins would be ideal, but a standalone application is also fine.
The one thing I have found so far that seems to fit the spec:
http://junitth.sourceforge.net/
Though I am a bit wary of using it, as development seems to have stopped a year or two ago.
Any suggestions welcome. Thanks

There's a new tool called Allure. It is not exactly what you described, because it uses its own XML files, which can be generated automatically during the test run via the provided JUnit RunListener. However, it supports a lot of test-related features such as custom test descriptions, grouping by features and stories, adding attachments, etc. You could try to adapt your custom XML to their schema and then generate the visualization with the standalone tool or the Jenkins plugin.
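As a rough illustration of the RunListener hook that the Allure adapter plugs into, here is a minimal sketch in plain JUnit 4. The listener below only prints events; Allure ships its own listener (the class name varies by adapter version) that writes the Allure XML instead, and MyTests is a placeholder for your existing test class:

```java
import org.junit.runner.Description;
import org.junit.runner.JUnitCore;
import org.junit.runner.Result;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Stand-in for Allure's listener, which would write result XML instead of printing.
class ReportingListener extends RunListener {
    @Override
    public void testStarted(Description description) {
        System.out.println("started: " + description.getDisplayName());
    }

    @Override
    public void testFailure(Failure failure) {
        System.out.println("failed: " + failure.getMessage());
    }
}

public class Runner {
    public static void main(String[] args) {
        JUnitCore core = new JUnitCore();
        core.addListener(new ReportingListener()); // swap in the Allure listener here
        Result result = core.run(MyTests.class);   // MyTests: placeholder test class
        System.exit(result.wasSuccessful() ? 0 : 1);
    }
}
```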

Related

Displaying test results in a better way in Azure DevOps with Pipelines

Azure Pipelines has a way of displaying test results after builds, described here. This seems to be very limited, however, and doesn't display much of the information we want.
We're using NUnit 3 with a .NET Core project, and NUnit outputs a lot of detail in its test results. At a minimum we want to display the fully qualified name (FQN) of each test, but being able to easily view tests by category would also suffice. This doesn't seem to be possible with the default Tests tab.
My question: does Azure DevOps support something like this, or do I need to build my own site/API for viewing test results? I'm open to pretty much anything; we just want to avoid anything that requires manual offline processing.
As per the Microsoft comment on the question, this is not currently possible. I added a suggestion on their tracker here.

What are the main points that must be documented for a data integration project?

I am working on a data integration project using Talend.
I have many heterogeneous input sources; I apply transformations and write the output data to many targets. In effect, I am doing Extraction, Transformation and Load (ETL).
My Talend job runs every day on a Linux server (production). I have development and test environments on a Windows VM, ... In fact, there are many things I want to document and I don't really know how. I used to document web development projects (just the frontend), but not data integration projects.
Can you help me with some keywords, examples, or templates so that I can produce clear documentation for my client?
Thanks in advance :)
While I can't speak to your organizational needs, Talend is designed to be largely self-documenting. If you have been diligent about filling in the documentation and description fields in your job, you can right-click the job and select 'Generate Doc as HTML'.

Gradle tooling api get task outputs

I've managed to get a project's task list thanks to the Gradle Tooling API's GradleProject.getTasks(). It's quite handy: I can read each task's name, description, group, and whether it is public or not.
I was wondering whether it is possible to get a task's output directories, especially for tests or code-coverage tasks, the kind of tasks that produce HTML-like reports. It would be nice to display these reports in a web UI.
Does anyone know if this is possible, or at least planned for a future release of the Tooling API?
Thanks a lot :)
In order to get additional information about tasks, such as TaskOutputs, from the Tooling API, you would have to implement a custom Tooling API model plugin like this:
https://github.com/bmuschko/tooling-api-custom-model
See here: https://github.com/bmuschko/tooling-api-custom-model/blob/master/plugin/src/main/java/org/gradle/sample/plugins/toolingapi/custom/ToolingApiCustomModelPlugin.java#L31-L39. This method is where you can collect the information you are interested in and surface it in your "model" class.
I have successfully done this for one of the projects I work on: https://github.com/liferay/liferay-blade-cli/tree/master/gradle-tooling/src/main/java/com/liferay/blade/gradle/tooling
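As a very rough sketch of that plugin side, assuming a hypothetical custom model interface TaskOutputsModel and a serializable implementation DefaultTaskOutputsModel (both names are made up; the registry injection mirrors the linked example):

```java
import java.io.File;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;
import javax.inject.Inject;
import org.gradle.api.Plugin;
import org.gradle.api.Project;
import org.gradle.api.Task;
import org.gradle.tooling.provider.model.ToolingModelBuilder;
import org.gradle.tooling.provider.model.ToolingModelBuilderRegistry;

public class TaskOutputsPlugin implements Plugin<Project> {

    private final ToolingModelBuilderRegistry registry;

    @Inject
    public TaskOutputsPlugin(ToolingModelBuilderRegistry registry) {
        this.registry = registry;
    }

    @Override
    public void apply(Project project) {
        registry.register(new TaskOutputsModelBuilder());
    }

    private static class TaskOutputsModelBuilder implements ToolingModelBuilder {
        @Override
        public boolean canBuild(String modelName) {
            return modelName.equals(TaskOutputsModel.class.getName());
        }

        @Override
        public Object buildAll(String modelName, Project project) {
            // Collect each task's declared output files/directories,
            // e.g. build/reports/tests for the `test` task.
            Map<String, Set<File>> outputs = new LinkedHashMap<>();
            for (Task task : project.getTasks()) {
                outputs.put(task.getPath(), task.getOutputs().getFiles().getFiles());
            }
            return new DefaultTaskOutputsModel(outputs); // hypothetical serializable model
        }
    }
}
```

On the client side, once the plugin is applied (for example via an init script), the model can then be fetched with ProjectConnection.getModel(TaskOutputsModel.class).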
If my understanding is correct, the Gradle Tooling API does not currently expose HTML-like reports. These reports are produced by the tasks you use in your build.
For example, for the Android testing tasks (unit tests and connectedAndroidTest for UI automation), you can find the HTML test results under [your project path]/app/build/reports.

Looking for an open source web testing automation framework [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I am looking for a generic web testing automation framework that can be used to automate testing of various web-based applications. I would prefer a C#-based framework, since that is the language I am most familiar with, but a framework in any other language will also do, as long as it does not rely on a proprietary/licensed language.
The framework should have an open source, free-of-cost license model.
I searched Google and SO for Selenium-based frameworks but could not find any with source code available. It would be good if the framework encapsulated all the functionality provided by Selenium WebDriver and/or Selenium RC and empowered functional testers to create and maintain tests as human-readable scripts.
Requirements of the framework:
The framework code should avoid hard-coding test steps. My idea is to maintain the test scripts outside the automation framework code, so that they can easily be modified if needed. The framework should read through the step tables and data tables and run the tests accordingly, roughly as in the sketch below.
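To make the step-table idea concrete, here is a hedged sketch in Java with Selenium WebDriver (any non-proprietary language would do); the keywords and column names are made up for illustration:

```java
import java.util.List;
import java.util.Map;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Minimal keyword dispatcher: each row of the step table is a keyword plus arguments,
// maintained outside the framework code (CSV, Excel, etc.) and interpreted at run time.
public class KeywordRunner {

    private final WebDriver driver;

    public KeywordRunner(WebDriver driver) {
        this.driver = driver;
    }

    public void run(List<Map<String, String>> stepTable) {
        for (Map<String, String> step : stepTable) {
            String keyword = step.get("keyword");
            switch (keyword) {
                case "open":
                    driver.get(step.get("target"));
                    break;
                case "type":
                    driver.findElement(By.cssSelector(step.get("target")))
                          .sendKeys(step.get("value"));
                    break;
                case "click":
                    driver.findElement(By.cssSelector(step.get("target"))).click();
                    break;
                default:
                    throw new IllegalArgumentException("Unknown keyword: " + keyword);
            }
        }
    }
}
```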
If no such framework is available right now, then we could collectively build one in an open-source community model.
P.S.
I have read a little about the Hermes Framework and Robot Framework, but have not tried them yet; any help is welcome.
The good side of this problem: there are a lot of flexible tools and approaches you can put together to build a flexible, reliable and robust test automation framework.
The hard part: yes, there is no out-of-the-box solution, and you'll need to find and combine lots of tools in order to solve this test automation puzzle.
What I would recommend:
First you need to choose a unit-test framework. This is the tool that identifies separate methods in code as tests, so you can run them together or separately and get the results, such as pass or fail.
My personal opinion is that MSTest, the testing tool which ships with Visual Studio 2013 (including the Express edition), is good enough. Other alternatives are NUnit or Gallio Icarus.
All unit-testing frameworks include a mechanism for making assertions inside tests, and the capabilities of the assertion class depend on the framework. Here I would like to recommend a popular library that works well with any of these unit-testing frameworks.
It is Fluent Assertions (also available from the NuGet repository).
Now comes a hard decision. You need to choose: are you going to use the Page Object approach to build your test automation framework, or are you going to take a simpler approach without heavy use of object-oriented programming?
Properly designed Page Objects make your test automation code much more maintainable. Using OOP you can work magic in your code: write less to do more. However, this approach requires more skill.
Here are some good articles on this topic:
Maintainable Automated UI Tests
And this one:
Tips to Avoid Brittle UI Tests
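For reference, a minimal Page Object sketch (shown with Selenium's Java bindings; the C# bindings follow the same pattern, and the page and locators here are made up):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.FindBy;
import org.openqa.selenium.support.PageFactory;

// The page class owns the locators and exposes intent-level methods,
// so tests read as user actions and are easier to keep up to date when markup changes.
public class LoginPage {

    @FindBy(id = "username")               // hypothetical locator
    private WebElement usernameField;

    @FindBy(id = "password")               // hypothetical locator
    private WebElement passwordField;

    @FindBy(css = "button[type='submit']") // hypothetical locator
    private WebElement loginButton;

    public LoginPage(WebDriver driver) {
        PageFactory.initElements(driver, this);
    }

    public void loginAs(String user, String password) {
        usernameField.sendKeys(user);
        passwordField.sendKeys(password);
        loginButton.click();
    }
}
```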
The alternative to Page Objects is a scripted approach. This approach can also be successful and requires less time to get started.
Coypu is a good, usable example of such a framework for Selenium WebDriver.
All the popular unit-testing frameworks support data-driven tests. NUnit has the best support: you can run or re-run the tests generated for each individual data row and see them in the test tree.
MSTest supports reading data from different data sources (text files, Excel, MS SQL, etc.), but it is not possible to re-run the test for an individual data row. There is, however, a hack for this: MsTest Rows.
For my data-driven tests I am using a great library, Linq to Excel.
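To show the row-per-test idea outside the .NET libraries named above, here is a small sketch with TestNG's DataProvider in Java, where each data row becomes an individually reported test (Discounts.rateFor is a made-up method under test):

```java
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DiscountTest {

    // Each row in this provider becomes its own reported (and re-runnable) test case.
    @DataProvider(name = "orders")
    public Object[][] orders() {
        return new Object[][] {
                {100.0, 0.00},
                {500.0, 0.05},
                {1000.0, 0.10},
        };
    }

    @Test(dataProvider = "orders")
    public void discountIsApplied(double orderTotal, double expectedRate) {
        // Discounts.rateFor is hypothetical; substitute the code under test.
        Assert.assertEquals(Discounts.rateFor(orderTotal), expectedRate, 0.0001);
    }
}
```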
I have a lot more to say. There are so many approaches to building a test automation framework, and there is no ready-made solution yet.
I am trying to build one according to my testing methodology: SWD.Starter.
The project is still in its early development stages, but you will probably find there at least a few tips on how to build and organize test automation code.
I've implemented https://github.com/leblancmeneses/RobustHaven.IntegrationTests based on my prior experience on large projects "trying" to implement full end-to-end testing.
I've been using this and it has a lot of useful extensions for general Selenium, AngularJS, and Kendo UI work. Since the framework is not obtrusive, you could use just these extensions without anything else.
I'm using this on my latest project and everyone is loving it.
There are a lot of BDD/spec frameworks (SpecFlow, MSpec, NSpec, StoryQ) to help wire the behavior of your system to tests.
What I've learned:
Make it frictionless for any .NET developer/tester to begin writing and running tests.
Most frameworks fail here because they require installing additional plugins into Visual Studio.
Mine uses standard NUnit.
Logically you would think that a feature is a class file and scenarios are [Test] methods; to support some of these frameworks, each scenario has to become its own class file.
Use the original spec to create stubs of your tests, hopefully yielding readable code.
I used SpecFlow back in 2010, so things might have changed. I generated my tests from my BDD document. A year later, when I went to add more tests and update existing ones, I felt I wasted more time on ceremony than on writing the code I really wanted, so I stopped using it.
My approach uses T4 to generate stubs; the developer can choose to generate from the feature file, generate for a specific scenario, or not use generated code at all.
How is state shared across steps / nested steps?
Most frameworks use a Dictionary<string, object>, accessed from a context object, to keep data from being hard-coded in your tests.
Mine uses view models and references to those view models; if you are using something like AngularJS, you already have view models in your server-side display/editor templates and in your AngularJS controllers, so why not reuse them in your tests?
Start early with CI and make development transparent.
My project has ResultDiff, which takes the NUnit testresult.xml file, the folder containing your Gherkin feature files, and an output JSON file; read the description of why this is important alongside the screenshot: https://github.com/leblancmeneses/RobustHaven.IntegrationTests#step-5-ci-setup-resultdiff
Example:
"Modified" means the business and the developers have mismatched Gherkin statements: did something change that we need to talk about?
What is missing? A dashboard to render the .json file created by ResultDiff. It's on my backlog.
With a centralized dashboard that supports multiple environments (branches of your code), all stakeholders (business and developers) can see the status of the features being developed.
There is a framework named "omelet", built in Java on top of TestNG, for Selenium.
It supports cross-browser parallel testing, blends easily with your CI tools, and has some nice reporting features with step-level reports.
Running your test cases on BrowserStack or Selenium Grid takes only a few config changes with omelet.
If you want to give it a try, follow the 5-minute tutorial available on the website; there is an archetype available on Maven Central, plus many more features.
The stable version is 1.0.4 and we are currently looking for people to contribute to the project.
Documentation over here
Github link
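As a rough sketch of the underlying TestNG-plus-WebDriver pattern that such frameworks build on (plain TestNG here, not omelet's own API; the browser parameter would come from the suite XML and the URL is a placeholder):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.Assert;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

// Cross-browser test: the browser name is injected from the TestNG suite XML,
// so the same class can run in parallel against several browsers.
public class SearchTest {

    private WebDriver driver;

    @Parameters("browser")
    @BeforeClass
    public void setUp(String browser) {
        driver = "firefox".equalsIgnoreCase(browser) ? new FirefoxDriver() : new ChromeDriver();
    }

    @Test
    public void homePageLoads() {
        driver.get("https://example.org"); // placeholder URL
        Assert.assertNotNull(driver.getTitle());
    }

    @AfterClass
    public void tearDown() {
        driver.quit();
    }
}
```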

Jenkins Internationalization (I18N) testing tools/plugin?

Does anyone know of any good internationalization testing tools or plugins that can be automated and triggered by Jenkins?
Thanks in advance.
Lingoport's Globalyzer ( http://www.lingoport.com/globalyzer ) is a tool which does static code analysis to identify internationalization issues and help mitigate them. It runs the analysis in two modes: interactively with a UI, or from the command line.
The command line can be integrated into a continuous build environment. Some configuration work is needed to adapt how the static analysis is performed for specific projects and languages (Java, JavaScript, etc.) and to tune what should be flagged and what should be filtered, as this differs for each project.
See http://www.globalyzer.com/gzserver/help/commandline.htm; the related help files may give you a more in-depth answer to your question.