We have a main CRM application that used to be hosted externally but has been brought in-house. We have a number of stand-alone websites/apps/etc. that update and read from this main application. Each of those individual projects has its own Selenium tests associated with it. What I assume each of those projects does is spin up the resources, build the tests, run them against the existing websites, and then tear down the resources because they are no longer needed.
It seems like we should be able to reference those build pipelines from one project in another. Is there a way to do this? It seems like it would be easier to manage that way.
From what I have read, you copy the test DLLs into the main CRM project to run them. Is that what needs to be done? Will the test step on the CRM application find all those test DLLs? And would the reporting then end up under the main CRM project some of the time and under the stand-alone projects at other times? I would much rather have the test results associated with the repos/projects for the relevant functional area.
One solution is to add a deployment group; with a deployment group you can test the applications on different machines.
In your scenario, you could see all the projects included in that deployment group under your main CRM project, and then kick off each build pipeline that has your tests in it.
Please refer to Automating Selenium Tests in Azure Pipelines for details.
Related
Just wondering if there is a way to integrate TFS with TestRail so that the Test Hub within TFS is replaced (gotten rid of entirely) and TestRail is used to record Test Plans?
My concern with removing the Test Hub would be whether TestRail can still reference the IDs of Bugs and Stories within TFS, and vice versa.
You currently cannot remove the standard hubs in Visual Studio Team Services / TFS or replace them with something else. You can enable an extension that either adds its own functionality under that hub as a separate tab or adds another TestRail hub at the top level (if one exists), or you can write your own. Extensions currently cannot leave their sandbox to override standard functionality.
There is nothing preventing any tool from keeping track of work item numbers, so as for the second part of your question, whether this would break any form of integration: that's unlikely.
If you are on TFS, you could try creating a custom process template that doesn't have the Test Case, Shared Step, Test Suite, and Test Plan work item types; that will likely at least completely cripple the existing functionality. In the on-premises version you can also customize the files on disk. I've never tried it, but you could probably hack the Test Hub away that way. That would be a totally unsupported scenario, though.
Question
Is it worth building a web application front-end for my department's automated regression tests? I've searched quite a bit and I don't think anything like this exists. Basically, the web application would allow a user to specify a URL, expected inputs, expected outputs, and an expected return URL. On the back end, a headless browser running on the server would test the scenario the user just defined. I've searched to see if something this simple already exists but haven't had any luck; I've found lots of tools for driving a browser programmatically, but not a web front-end for testing another web application.
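To make the idea concrete, here is a rough sketch of the kind of scenario check such a back end could run. It is purely illustrative: the scenario fields, selectors, and URLs are invented, and it assumes the Python Selenium bindings driving headless Chrome.

    # Hypothetical sketch: run one user-defined scenario in headless Chrome.
    # The scenario fields (url, inputs, expected output, expected return URL)
    # mirror what the proposed web front-end would collect; nothing here is
    # an existing tool's API.
    from selenium import webdriver
    from selenium.webdriver.chrome.options import Options
    from selenium.webdriver.common.by import By

    def run_scenario(scenario):
        options = Options()
        options.add_argument("--headless")  # no visible browser on the server
        driver = webdriver.Chrome(options=options)
        try:
            driver.get(scenario["url"])
            # Fill in each expected input (CSS selector -> value), then submit.
            for selector, value in scenario["inputs"].items():
                driver.find_element(By.CSS_SELECTOR, selector).send_keys(value)
            driver.find_element(By.CSS_SELECTOR, scenario["submit"]).click()
            # Check the expected output text and the expected return URL.
            body_text = driver.find_element(By.TAG_NAME, "body").text
            return (scenario["expected_output"] in body_text
                    and driver.current_url == scenario["expected_return_url"])
        finally:
            driver.quit()

    if __name__ == "__main__":
        ok = run_scenario({
            "url": "https://app.example.com/login",  # placeholder values
            "inputs": {"#user": "qa_user", "#pass": "secret"},
            "submit": "#login-button",
            "expected_output": "Welcome",
            "expected_return_url": "https://app.example.com/home",
        })
        print("PASS" if ok else "FAIL")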
Background
My team has dedicated automated regression tests that the testers run on their local machines. The tests are written in Python, use some Selenium integration plugins, and take an Excel spreadsheet as input describing what to test. They are maintained by the QA department.
Problem
Nobody outside the QA team knows how extensive these regression tests are, because they exist only on individual laptops.
They have no central repository, and the dev team has no means of actively updating these tests as we build new features. We must leave it 100% up to the QA department.
The business analysts don't have access to the results of these tests.
Because of all this, a lot of uncertainty exists around our automated testing, which increases the reluctance to change things without asking the QA team to perform full-scale manual regression tests...
This has led me to consider putting all of our Selenium tests in the cloud behind a user-friendly web front-end that anyone can use and access from anywhere. Users could then easily create new tests using dropdown menus. Everyone (developers, testers, and business analysts) could see what's covered by a test sequence and update tests as we add new features. I believe this would also make it easier to have Jenkins jobs trigger tests at timed intervals, if the web application exposed web service hooks for Jenkins... But I feel like perhaps I'm re-inventing the wheel. Is what I'm proposing to build worth it?
Personally, I would not spend too much time creating a website that accepts user input to create a test script. Instead, I would spend that time creating a solid test framework and use Jenkins to trigger the tests.
You also need to consider maintenance of the 'website' in the future. What happens when a new feature has to be added to the website? The QA/BA team will depend on a developer to add it.
I think it is better to use a keyword-driven framework, where you can write your entire test in a spreadsheet. [In my project, QA people who are not familiar with programming create test scripts with this approach.]
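A minimal sketch of the keyword-driven idea, assuming Python with the Selenium bindings and a CSV export of the spreadsheet; the keywords and column layout here are invented for illustration and will differ from the framework linked below.

    # Each spreadsheet row is: keyword, target, value. A small dispatcher maps
    # keywords to Selenium actions, so non-programmers only edit the spreadsheet.
    import csv
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def run_keyword_test(path):
        driver = webdriver.Chrome()
        try:
            with open(path, newline="") as f:
                for keyword, target, value in csv.reader(f):
                    if keyword == "open":
                        driver.get(target)
                    elif keyword == "type":
                        driver.find_element(By.CSS_SELECTOR, target).send_keys(value)
                    elif keyword == "click":
                        driver.find_element(By.CSS_SELECTOR, target).click()
                    elif keyword == "assert_text":
                        element = driver.find_element(By.CSS_SELECTOR, target)
                        assert value in element.text, f"'{value}' not found in {target}"
                    else:
                        raise ValueError(f"Unknown keyword: {keyword}")
        finally:
            driver.quit()

    # Example spreadsheet (saved as CSV) that a non-programmer could edit:
    #   open,https://app.example.com/login,
    #   type,#user,qa_user
    #   type,#pass,secret
    #   click,#login-button,
    #   assert_text,body,Welcome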
Since Jenkins is a web-based application, anyone can trigger your automated regression tests, even the BAs (in my project, that is what I have done). No technical skill is required. You can also pass parameters through Jenkins; a parameter can be anything from text to a file. So you can upload a file containing the steps to be executed to the Jenkins job, and the rest should be taken care of by your test framework.
You would definitely need a central repository; it is a must-have. You can take a look at VisualSVN Server. It is easy to use and free.
Keyword Driven framework using Selenium:
http://www.testautomationguru.com/keyword-driven-framework-for-localization-testing-using-selenium-webdriver/
Continuous regression & results:
http://www.testautomationguru.com/continuous-regression-testing-best-practises/
Smoke Test after each build:
http://www.testautomationguru.com/automated-smoke-test-best-practises/
I am trying to create a unit test that runs on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, with the client-side test depending on the server-side test working successfully.
When putting together a Test Suite in Test Manager, I want to be able to give both tests the same order value (so they run at the same time), but the validation prevents me from setting the order that way.
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I missed the notification about your answers to my question :-( Sorry for that!
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (for server and client).
If so, you will not be able to run tests on both of them, or rather, you will not have enough control over how the tests run. Check How to Run automated tests on multiple computers at the same time.
Actually, I posted a related question to the Visual Studio Development Forum; you could check the answers I got here: Is it possible to run test on several virtual machines, which belong to the same environment, using build-deploy-test workflow
That all means you will end up creating two environments, each consisting of one machine (one for the server and one for the client).
But then you will not be able to reference both environments in your build definition, since you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
Create two lab environments.
Create three build definitions:
the first one will only build your test code;
the second one will deploy the last successful build from the first one and start the tests on the server environment;
the third one will deploy the last successful build from the first one and start the tests on the client environment.
Run the first build definition automatically at night.
Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
Use a single test environment consisting of two machines and assign them different roles (server and client, respectively).
In MTM create two Test Settings (one for the server role and one for the client role).
Create a bat file that starts the tests using the tcm.exe tool (see How to: Run Automated Tests from the Command Line Using Tcm for more details); a rough sketch of such a script is shown after this list.
You will need two tcm.exe calls, one for each of the Test Settings you have created.
Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously.
Create a build definition using DefaultLabTemplate.
This definition will:
build the test code
deploy it to both machines in your environment
run your batch script as the last deployment step
(you will have to make sure this script is located on the build machine, or deploy it there, or make it accessible from the build machine)
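As mentioned above, here is a rough sketch of the script that queues the two test runs. It is written in Python only for readability (the suggestion is a plain bat file); all IDs, names, and the collection URL are placeholders, and the exact tcm.exe switches should be checked against the MSDN article linked above.

    # Queue two test runs via tcm.exe, one per Test Settings (server/client role).
    import subprocess

    COMMON = [
        "tcm.exe", "run", "/create",
        "/planid:42", "/suiteid:7", "/configid:3",            # placeholder IDs
        "/collection:http://tfsserver:8080/tfs/Collection",   # placeholder URL
        "/teamproject:MyProject",
    ]

    # tcm.exe returns as soon as each run is queued, so the two runs start
    # (more or less) at the same time.
    subprocess.run(COMMON + ["/title:Server tests", "/settingsname:ServerSettings"], check=True)
    subprocess.run(COMMON + ["/title:Client tests", "/settingsname:ClientSettings"], check=True)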
As I've said, I have not tried it yet.
The disadvantage of this approach is that you will not see the test part in the build log, since the tests are not started by the means provided by the DefaultLabTemplate. So the build will not fail when the tests fail.
But you will still be able to see test outcomes in MTM and will have test results for each machine.
But depending on what is more important to you (having test results, having a build definition that fails if tests fail, or both), it could be a solution for you.
Yes, you can, with a modified TestSettings file:
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx
I'm looking for some sort of management/reporting tool that collects test results, stores them for reporting, and then lets users generate reports based on those tests.
We have numerous test-running tools that run on a variety of platforms, but all of them output test results in the JUnit format. The tests are not specific to particular hardware or a platform, but rather generic. What we would like is for an automated (or manual) test run to be able to submit its results to a central location, along with additional information like platform, OS, hardware configuration, and maybe user-defined data. The management/reporting tool would store this data.
Then a manager would be able to go to the tool and request an update on the current status (or, more likely, access a dashboard that developers have set up). This could be a list of test results for runs in a particular configuration, a hardware status, or just the results of specific tests.
Any suggestions?
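For a concrete picture of the submission side being described, a test runner might push its JUnit results plus metadata to the central store with something like the sketch below. The endpoint and payload fields are invented; no particular tool's API is implied.

    # Hypothetical client: read a JUnit XML result file and POST a summary plus
    # environment metadata to a central reporting service.
    import json
    import platform
    import urllib.request
    import xml.etree.ElementTree as ET

    def submit_results(junit_xml_path, endpoint, extra=None):
        suite = ET.parse(junit_xml_path).getroot()
        if suite.tag == "testsuites":        # some runners wrap suites in <testsuites>
            suite = suite[0]
        payload = {
            "tests": int(suite.get("tests", 0)),
            "failures": int(suite.get("failures", 0)),
            "errors": int(suite.get("errors", 0)),
            "os": platform.platform(),       # example of the extra metadata
            "user_defined": extra or {},
        }
        req = urllib.request.Request(
            endpoint,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    # e.g. submit_results("results.xml", "http://reports.example.com/api/runs",
    #                     extra={"hardware": "rack-12", "build": "1.4.2"})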
We built a test management tool, Enterprise Tester (www.enterprisetester.com), that allows users to pull in automated test results from NUnit, XSLT, Selenium, etc. and report on those results.
In addition to pulling in the results and reporting on them, you are able to trace these tests back to the requirements that have been created, giving you the ability to see test coverage and the status of that coverage on dashboards. If you are using JIRA (or Google) or anything else that uses OpenSocial gadgets, you can pass these gadgets to other tools as well.
Feel free to contact me directly if you would like to talk further about it
Regards
Bryce
You can also try an open-source project called Allure Framework. It was created specifically for presenting test execution results in a nice form. A set of popular test frameworks such as JUnit, PHPUnit, TestNG, py.test, RSpec, and ScalaTest is already supported; others, such as NUnit, will be supported soon.
Check out Hudson. It's very useful and configurable.
I have a process for running automated functional tests which is external to Microsoft Team Foundation Server (TFS) 2010. Test cases are tracked as Test Case work items within TFS, however. After running these tests, how can I publish the results to TFS using the TFS API? Can someone point me to sample code that demonstrates this?
Please note that I expressly want to avoid a solution that requires transformation of my test results into the .trx file format. Searches have turned up dead links, or solutions that rely on this method.
It sounds like the following blogs may almost be what you're after:
http://blogs.msdn.com/b/jpricket/archive/2010/02/23/creating-fake-builds-in-tfs-build-2010.aspx
http://msmvps.com/blogs/vstsblog/archive/2011/04/26/creating-fake-builds-in-tfs-build-2010-using-the-command-line.aspx
It doesn't actually have code for adding test results; however, it does say this:
"In order to associate test results and the like, you have to create build project nodes with the fake build."
You should be able to create a Microsoft.TeamFoundation.Build.Client.TestSummary with a summary of your test results.
There are a couple of internal classes that look interesting, specifically Microsoft.TeamFoundation.Build.Controls.TestRunDetails, which could potentially be useful if you don't mind using some reflection.
However, what I would recommend is using the API to look at the nodes in a standard TFS build to see how they are built up.