Test Result Management and Reporting tool

I'm looking for some sort of management/reporting tool that collects test results, stores them, and then lets users generate reports based on those tests.
We have numerous test-running tools on a variety of platforms, but all of them output test results in the JUnit format. The tests are not specific to hardware or platform, but rather generic. What we would like is for an automated (or manual) test run to be able to submit its results to a central location along with additional information such as platform, OS, hardware configuration, and perhaps user-defined data. The management/reporting tool would store this data.
Then a manager would be able to go to the tool and request an update on the current status (or, more likely, access a dashboard that developers have set up). This could be a list of test results for a particular configuration, a hardware status, or just the results of specific tests.
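To make that concrete, a submitted result could be a standard JUnit XML file whose properties block carries the extra run metadata; the property names below are only examples of what a submission might contain, not a required schema:

    <testsuite name="SmokeTests" tests="2" failures="1" time="3.42">
      <!-- Illustrative run metadata; property names are hypothetical -->
      <properties>
        <property name="platform" value="linux-x86_64"/>
        <property name="os" value="Ubuntu 12.04"/>
        <property name="hardware" value="8GB RAM, 4 cores"/>
        <property name="custom.buildId" value="nightly-1234"/>
      </properties>
      <testcase classname="com.example.LoginTest" name="validLogin" time="1.20"/>
      <testcase classname="com.example.LoginTest" name="invalidLogin" time="2.22">
        <failure message="expected error banner not shown"/>
      </testcase>
    </testsuite>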
Any suggestions?

We built a test management tool, Enterprise Tester (www.enterprisetester.com), that allows users to pull in automated test results in NUnit, JUnit, XSLT, Selenium, and other formats and report on those results.
In addition to pulling in the results and reporting on them, you are able to trace these tests back to requirements that have been created, giving you the ability to see test coverage and the status of that coverage on dashboards. If you are using JIRA (or Google, or anything else that supports OpenSocial gadgets), you can share these gadgets with those other tools as well.
Feel free to contact me directly if you would like to talk further about it
Regards
Bryce

You can also try an open-source project called Allure Framework. It was created specifically for presenting test execution results in a readable form. A set of popular test frameworks, such as JUnit, PHPUnit, TestNG, py.test, RSpec, and ScalaTest, is already supported. Others, such as NUnit, will be supported soon.

Check out Hudson. It's very useful and configurable.

Related

Dashboard to display test automation results from different tools

I have to create one common dashboard that can display the execution results, along with execution status and pass percentage, from various test automation tools such as:
API - Mochawesome reports
Selenium - Extent reports
Selenium - Cucumber reports
Selenium - TestNG reports
PIT (pitest) reports
etc.
Which tool can be used for this requirement?
Datadog might work if you can upload the reports: https://docs.datadoghq.com/continuous_integration/setup_tests/junit_upload/?tab=linux
Still looking for a solution myself too.

Automating Sequence of Manual Steps

I have a sequence of steps that a user performs, e.g. logging on to a remote UNIX shell, creating files/directories, changing permissions, running remote shell scripts and commands, deleting and moving files, running DB queries and, based on the query results, performing certain tasks such as exporting the results to a file or running further shell commands/scripts or DB insert statements, etc.
By performing these steps, users accomplish various processes, data processing, and validation.
What is the best way to automate the above scenario? Should we go for a workflow tool like Activiti, or is there a better framework/way to achieve these requirements?
My requirement is to work with open-source tools, and preferably Java-based ones.
I am completely new to this, so any help or pointers would be appreciated.
The scenario you describe is certainly possible with a workflow tool like Activiti. Apache Camel or Spring Integration would be another possibility, as all the steps you mention are automatic system tasks (see the sketch after this list).
A workflow framework would be a good option if you need one of these:
you want to store history data for audit purposes: who did what, when, and how long it took;
you want to model your steps visually, perhaps to discuss them with business people;
there is a need for human interaction between some of the steps.
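To give a flavour of the Camel option, a minimal route might chain a remote shell step and write its output to a file. The host, credentials, and paths below are placeholders, and you would need the camel-ssh and camel-file components on the classpath:

    // Minimal sketch: run a remote shell command over SSH and save its output to a file.
    // Host, credentials, and paths are placeholders, not real values.
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class RemoteStepsRoute {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    from("direct:provision")
                        // the message body is the shell command the ssh component executes
                        .setBody(constant("mkdir -p /tmp/data && chmod 755 /tmp/data && ls -l /tmp"))
                        .to("ssh://user@unix-host:22?password=secret")
                        // write whatever the remote command returned to a local result file
                        .to("file:/tmp/results?fileName=provision-output.txt");
                }
            });
            context.start();
            context.createProducerTemplate().sendBody("direct:provision", "");
            context.stop();
        }
    }

Running DB queries and branching on their results would be further steps in the same route (or better modelled in Activiti if humans need to intervene between them).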
Your description reminds me of a software/account provisioning process.
There are a large number of provisioning tools on the market, both open source and otherwise (Dell Crowbar is one option).
However, a couple of the comments you made in your response to Joram indicate that a more general-purpose tool such as Activiti may be an option:
"Swivel chair" tasks - user tasks that may one day be automated
Visual model of process state
Most provisioning tools don't allow for generic user tasks and don't provide a (good) visual model of the process state.
However, they generally include remote script execution, which would need to be cobbled together as a service task if you were using a BPM tool.
I would certainly expand my research to include provisioning tools, as they sound like a better fit; however, if you can't find anything that works for you, a BPM platform provides a generic framework to build what you need.

Should I put my automated regression tests inside a web application?

Question
Is it worth building a web application front-end for my department's automated regression tests? I've searched quite a bit and I don't think anything like this exists. Basically, the web application would allow a user to specify a URL, expected inputs, expected outputs, and an expected return URL. On the back end, a headless browser running on the server would test the scenario the user just defined. I've searched to see whether something this simple already exists, but I haven't had any luck. I've found lots of tools that allow programmatic operation of browser commands, but not a web front-end for testing another web application.
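The server-side piece I have in mind would be roughly a headless Selenium run that feeds the inputs and checks the return URL; something like the sketch below, where all URLs and selectors are made up:

    // Sketch of the server-side check: drive a headless browser through one user-defined
    // scenario and verify the expected return URL. URLs and selectors are placeholders.
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import org.openqa.selenium.chrome.ChromeOptions;

    public class ScenarioCheck {
        public static void main(String[] args) {
            ChromeOptions options = new ChromeOptions();
            options.addArguments("--headless=new");          // run without a display on the server
            WebDriver driver = new ChromeDriver(options);
            try {
                driver.get("https://app.example.com/login");             // URL specified by the user
                driver.findElement(By.name("username")).sendKeys("qa");  // expected inputs
                driver.findElement(By.name("password")).sendKeys("secret");
                driver.findElement(By.cssSelector("button[type=submit]")).click();
                String expectedReturnUrl = "https://app.example.com/dashboard";
                if (!driver.getCurrentUrl().startsWith(expectedReturnUrl)) {
                    throw new AssertionError("Expected " + expectedReturnUrl
                            + " but got " + driver.getCurrentUrl());
                }
            } finally {
                driver.quit();
            }
        }
    }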
Background
My team has dedicated automated regression tests that the testers run on their local machines. The tests are written in Python, utilize some Selenium integration plugins, and use an Excel spreadsheet as input for what to test. They are maintained by the QA department.
Problem
Nobody outside the QA team knows how extensive these regression tests are, because they exist only on individual laptops.
They have no central repository, and the dev team has no means of actively updating these tests as we build new features. We must leave it 100% up to the QA department.
The business analysts don't have access to the results of these tests. Because of all this, a lot of uncertainty exists around our automated testing, increasing the reluctance to change things without instructing the QA team to perform full-scale manual regression tests.
This has led me to consider putting all of our Selenium tests in the cloud behind a user-friendly web front-end that anyone can use and access from anywhere. They could then easily create new tests using dropdown menus. Everyone (developers, testers, and business analysts) could see what's covered in a test sequence and update the tests as we add new features. I believe this would also make it easier to have Jenkins jobs trigger test runs at timed intervals, if the web application exposed web service hooks for Jenkins. But I feel like perhaps I'm reinventing the wheel. Is what I'm proposing to build worth it?
Personally, I would not spend too much time creating a website that accepts user input to create a test script. Instead, I would spend that time creating a solid test framework and use Jenkins to trigger the tests.
You also need to consider the website's maintenance in the future. What will happen if some new feature has to be included in the website? The QA/BA team will depend on the developers to add the feature.
I think it is better to use a keyword-driven framework, where you can write your entire test in a spreadsheet; a minimal sketch follows below. (In my project, QA people who are not familiar with programming create test scripts with this approach.)
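As a rough sketch of the keyword-driven idea (the keywords, locator strategy, and class name here are made up for illustration, not taken from a specific framework), each spreadsheet row maps to one dispatch call:

    // Hypothetical keyword dispatcher: each spreadsheet row supplies (keyword, locator, value).
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    public class KeywordRunner {
        private final WebDriver driver;

        public KeywordRunner(WebDriver driver) {
            this.driver = driver;
        }

        public void execute(String keyword, String locator, String value) {
            switch (keyword) {
                case "open":  driver.get(value); break;
                case "type":  driver.findElement(By.cssSelector(locator)).sendKeys(value); break;
                case "click": driver.findElement(By.cssSelector(locator)).click(); break;
                case "assertText": {
                    String actual = driver.findElement(By.cssSelector(locator)).getText();
                    if (!actual.contains(value)) {
                        throw new AssertionError("Expected '" + value + "' at " + locator);
                    }
                    break;
                }
                default: throw new IllegalArgumentException("Unknown keyword: " + keyword);
            }
        }
    }

A small reader class would then loop over the spreadsheet rows and call execute for each one.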
As Jenkins is a web-based application, anyone can trigger your automated regression tests, even the BAs (in my project, that is what I have done). No technical skill is required. You can also pass parameters through Jenkins; parameters can be anything from text to a file. So you can upload a file containing the steps to be executed to the Jenkins job, and the rest should be taken care of by your test framework.
You would definitely need a central repository; it is a must-have. You can take a look at VisualSVN Server. It is easy to set up and free.
Keyword Driven framework using Selenium:
http://www.testautomationguru.com/keyword-driven-framework-for-localization-testing-using-selenium-webdriver/
Continuous regression & results:
http://www.testautomationguru.com/continuous-regression-testing-best-practises/
Smoke Test after each build:
http://www.testautomationguru.com/automated-smoke-test-best-practises/

IntelliJ: running one test in TestNG

So my typical workflow is:
I write a data-driven test using TestNG in IntelliJ.
I supply hundreds of data items.
I run the test and one or two of the data items fail.
I see the list of passed/failed tests in the "Run" pane.
I would like the ability to just right-click that "instance" of the test and run it alone (with breakpoints). Currently IntelliJ does not seem to have that feature. If I right-click the test and run it, it runs the whole set of tests with hundreds of data points.
Is this possible?
TestNG supports this at the testng.xml level, where you can specify which indices of your data provider should be used. It's called "invocation-numbers" and you can see what it looks like by running a test with a data provider, failing some of its invocation numbers and looking at the testng-failed.xml that gets generated.
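For reference, the testng-failed.xml that TestNG generates looks roughly like the snippet below, with invocation-numbers listing the failed data provider indices (class and method names are placeholders):

    <!-- Sketch of a generated testng-failed.xml; class/method names are placeholders -->
    <suite name="Failed suite [MySuite]">
      <test name="MyTest(failed)">
        <classes>
          <class name="com.example.DataDrivenTest">
            <methods>
              <!-- re-run only data provider rows 3 and 17 -->
              <include name="shouldHandleInput" invocation-numbers="3 17"/>
            </methods>
          </class>
        </classes>
      </test>
    </suite>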
Back to your question: your IDE needs to support this feature in order to make it available in the UI, so I suggest you ask on the IDEA forums.
The feature has been added as of IntelliJ 142.1217: https://youtrack.jetbrains.com/issue/IDEA-57906

Using only its API, how can I post test results to Team Foundation Server 2010?

I have a process for running automated functional tests which is external to Microsoft Team Foundation Server (TFS) 2010. Test cases are tracked as Test Case work items within TFS, however. After running these tests, how can I publish the results to TFS using the TFS API? Can someone point me to sample code that demonstrates this?
Please note that I expressly want to avoid a solution that requires transformation of my test results into the .trx file format. Searches have turned up dead links, or solutions that rely on this method.
It sounds like the following blogs may be almost what you're after:
http://blogs.msdn.com/b/jpricket/archive/2010/02/23/creating-fake-builds-in-tfs-build-2010.aspx
http://msmvps.com/blogs/vstsblog/archive/2011/04/26/creating-fake-builds-in-tfs-build-2010-using-the-command-line.aspx
They don't actually include code for adding test results; however, they do say this:
"In order to associate test results and the like, you have to create build project nodes with the fake build."
You should be able to create a Microsoft.TeamFoundation.Build.Client.TestSummary with a summary of your test results.
There are a couple of internal classes that look interesting, specifically Microsoft.TeamFoundation.Build.Controls.TestRunDetails, which could potentially be useful if you don't mind using some reflection.
However, what I would recommend is using the API to look at the nodes in a standard TFS build to see how they are built up.