Azure DevOps - Connect Test Case and Test Run in Report

I am trying to create a report that shows the link between a test case and its latest test run, as well as any linked defects.
At the moment I can only see the test run linked to the result in the test plan, and the defect link is missing as well.
How can I set up a query that displays the test case, the test run status, and the status of linked defects?
Best Regards

Related

How can I clone Test Cases between Test Plans?

I need to clone certain Test Suites from one test plan to another.
A test plan is made up of test suites and those test suites are made up of test cases.
So I would like to take a test suite ID and clone/copy it over to a new test plan.
We are using Azure DevOps online.
Is this possible, or am I looking at it the wrong way?
This was something I noticed when upgrading from MTM for test case management; this option seemed to be missing. I made a Developer Community post asking for clarity on whether this was a gap.
Current Response:
Thank you for your feedback!
According to your description, I suggest you install the Test Case Explorer extension; then you can go to Test Case -> Pivot by Test plan -> click “Clone test plan”, set the Area path and Iteration path, then clone.
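If you would rather script a shallow copy than use the extension, the Test Management REST API can list the test case references in a source suite and add them to a suite in the target plan. A minimal sketch in Python, where the organization, project, PAT, and all IDs are placeholder assumptions (the api-version may also differ on your server):

import requests

# All of these values are placeholders; substitute your own.
ORG = "https://dev.azure.com/your-organization"
PROJECT = "YourProject"
PAT = "your-personal-access-token"
SOURCE_PLAN, SOURCE_SUITE = 101, 202   # suite to copy from
TARGET_PLAN, TARGET_SUITE = 303, 404   # suite to copy into

auth = ("", PAT)  # a PAT goes in the password field of basic auth
base = f"{ORG}/{PROJECT}/_apis/test/Plans"

# List the test case references in the source suite.
resp = requests.get(f"{base}/{SOURCE_PLAN}/suites/{SOURCE_SUITE}/testcases",
                    params={"api-version": "5.0"}, auth=auth)
resp.raise_for_status()
ids = [str(tc["testCase"]["id"]) for tc in resp.json()["value"]]

# Add the same work items to the target suite (this adds references
# to the existing test case work items, not deep copies).
resp = requests.post(f"{base}/{TARGET_PLAN}/suites/{TARGET_SUITE}/testcases/{','.join(ids)}",
                     params={"api-version": "5.0"}, auth=auth)
resp.raise_for_status()
print(f"Referenced {len(ids)} test cases in suite {TARGET_SUITE}")

Note this only re-references the same test case work items; if you need independent copies with their own history, the extension's clone is the way to go.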

Why one TFS build summary shows test results on one machine, not on another

I have a TFS build and some unit tests. The build runs, and the tests run on the build server.
But on one machine I can see the test results, and on another I cannot. See the screenshots. (Ignore the failed tests; the missing test results are the issue.)
The same happens in the web UI: with one user account I can see "Test Results" and stats, while on another machine (with another user account) there is the message "Test result with ID 123 not found or deleted".
I don't even know where the test result file might be located. Is this a permission problem?
The test results are stored in the collection database, so given that the two users have the same rights on the team project, they should be able to see the same build details.
You could check whether the two users have been assigned different access levels (license levels) through the admin pages: http://TFS_SERVER:8080/tfs/_admin/_licenses. I am not completely sure whether a Stakeholder is allowed to see test results from builds.

Run selected test cases in Squish for BDD design

Squish is a UI automation tool. Here I wanted to apply filters to select test cases, or to execute test cases only when a condition is met.
Squish offers a few ways to run test cases from the command line and to have some control over them.
To run a test suite from the command line:
squishrunner --testsuite /home/MyProject/suite_UI
To run a particular test case (or cases):
squishrunner --testsuite /home/reggie/suite_addressbook [--testcase test_case_name]*
To run particular scenarios (using tags):
Here you need to do some work. Let's say you want to split your tests between a smoke set and full regression.
You can put tags on top of scenarios; it is quite easy. Note that Gherkin tags start with @ (a line starting with # is a comment).
Example:
@smoke
Scenario: To connect to the device, start the emulator
Given I am in the Start Screen
When I click on the Manual connection option
Then I should be able to connect to the device
@FullRegression
Scenario: To connect to the device using the Manual connection option with connection type Ethernet Only
Given I am in the Start Screen
When I click on the Manual connection option for an Ethernet connection
Then I should be able to connect to the sensor over an Ethernet Only connection
To run all scenarios tagged @smoke from a particular test case:
squishrunner --testsuite /home/MyProject/suite_UI --testcase "tst_com_device_ManualConnect" --tags @smoke
Skipping tests
You can skip one or many test cases:
squishrunner --testsuite /home/MyProject/suite_UI --skip-testcase "tst_com_device_ManualConnect" --tags @smoke
All except the scenarios with the @smoke tag will be run.
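If you end up calling squishrunner from CI with varying filters, a small wrapper can keep these options in one place. A minimal sketch in Python, reusing the suite path and test case name from the examples above (both are placeholders):

import subprocess

SUITE = "/home/MyProject/suite_UI"  # suite path from the examples above

def run_suite(testcases=(), skip=(), tags=()):
    """Assemble a squishrunner command from optional filters and run it."""
    cmd = ["squishrunner", "--testsuite", SUITE]
    for tc in testcases:
        cmd += ["--testcase", tc]
    for tc in skip:
        cmd += ["--skip-testcase", tc]
    for tag in tags:
        cmd += ["--tags", tag]
    return subprocess.run(cmd).returncode

# Run only the @smoke scenarios of one test case:
run_suite(testcases=["tst_com_device_ManualConnect"], tags=["@smoke"])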

Can you run two test cases simultaneously in a Test Suite in Microsoft Test Manager 2010?

I am trying to create a unit test to run on two machines in Microsoft Test Manager 2010. In this test I want some client-side and server-side test code to run simultaneously, the client-side test being dependent on the server-side test working successfully.
When putting together a test suite in Test Manager, I want to be able to give both tests the same order value (so they run at the same time), but validation prevents this when setting the order.
Is there any way I can achieve the simultaneous test execution I am after?
Sorry for the late answer... I missed the notification about your replies to my question :-( Sorry for that!
In case you are still looking for a solution, here is my suggestion.
I suppose you have a test environment consisting of two machines (for server and client).
If so, you will not be able to run tests on both of them, or, better said, you will not have enough control over running the tests. Check "How to: Run automated tests on multiple computers at the same time".
Actually, I posted a related question to the Visual Studio Development Forum; you can check the answers I got here: "Is it possible to run tests on several virtual machines, which belong to the same environment, using the build-deploy-test workflow".
That all means you will end up creating two environments, each consisting of one machine (one for the server and one for the client).
But then you will not be able to reference both environments in your build definition, as you can only select one environment in the DefaultLabTemplate.
That leads to the solution I can suggest:
Create two lab environments
Create three build definitions:
the first one will only build your test code;
the second one will deploy the last successful build from the first one and start the tests on the server environment;
the third one will deploy the last successful build from the first one and start the tests on the client environment.
Run the first build definition automatically at night.
Trigger the latter two simultaneously later.
It's not really nice, I know...
You will have to synchronize the build definition building the test code with the two build definitions running the tests.
I was thinking about setting up similar tests some months ago and it was the best solution I came up with...
Another option I have not tried yet could be:
Use a single test environment consisting of two machines and use different roles for them (server and client respectively).
In MTM create two Test Settings (one for the server role and one for the client role).
Create a batch file that starts the tests using the tcm.exe tool (see "How to: Run Automated Tests from the Command Line Using Tcm" for more details; a sketch follows this list).
You will need two tcm.exe calls, one for each of the Test Settings you have created.
Since a tcm.exe call just queues a test run and returns (more or less) immediately, this batch file will start the tests (more or less) simultaneously.
Create a build definition using DefaultLabTemplate.
This definition will:
build test code
deploy them to both machines in your environment
run your batch script as the last deployment step
(you will have to make sure this script is located on the build machine, deploy it there, or make it accessible from the build machine)
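As a sketch of that batch-file step, here is the pair of queued runs written in Python rather than a .bat file for readability; the collection URL, project, IDs, and settings names are all placeholder assumptions:

import subprocess

# Placeholders: substitute your collection URL, project, and IDs.
COLLECTION = "http://tfsserver:8080/tfs/DefaultCollection"
PROJECT = "MyTeamProject"
COMMON = ["/planid:1", "/suiteid:2", "/configid:3",
          f"/collection:{COLLECTION}", f"/teamproject:{PROJECT}"]

# One queued run per Test Settings; tcm.exe returns as soon as the run
# is queued, so the server and client runs start almost simultaneously.
for settings, title in [("ServerSettings", "Server run"),
                        ("ClientSettings", "Client run")]:
    subprocess.run(["tcm.exe", "run", "/create", f"/title:{title}",
                    f"/settingsname:{settings}", *COMMON], check=True)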
As I've said, I have not tried it yet.
The disadvantage of this approach will be that you will not see the test part in the build log, since the tests are not started by means provided by the DefaultLabTemplate, so the build will not fail when tests fail.
But you will still be able to see the test outcomes in MTM and will have test results for each machine.
Depending on what is more important to you (having test results, having a build definition that fails if tests fail, or both), it could be a solution for you.
Yes, you can, with a modified test settings file.
http://blogs.msdn.com/b/vstsqualitytools/archive/2009/12/01/executing-unit-tests-in-parallel-on-a-multi-cpu-core-machine.aspx

Cross-browser testing - how to ensure uniqueness of test data?

My team is new to automation and plans to automate our cross-browser testing.
One thing we are not sure about: how do we make sure the test data is unique for each browser's tests? The test data needs to be unique due to some business rules.
I have a few options in mind:
Run the tests in sequential order and restore the database after each test completes. The testing report for each test is kept individually; if any error occurs, we have to reproduce it ourselves (the data has been reset).
Run the tests concurrently or sequentially, and add a prefix to each piece of test data to uniquely identify it per browser, e.g. FF_User1, IE_User1.
Run the tests concurrently or sequentially on several test nodes connected to different databases, so each test node runs the tests in a different browser and stores its test data in its own database.
Can anyone enlighten me as to which approach is best, or offer another suggestion?
Do you need to run every test in all browsers? Otherwise, mix and match: pick which tests you want to run in which browser. You can organize your test data as in option 2 above.
Depending on which automation tool you're using, the data used during execution can be organized as iterations:
Browser | Username | VerifyText (example)                  # headers
FF      | FF_User1 | User FF_User1 successfully logged in
IE      | IE_User1 | User IE_User1 successfully logged in
If you want to randomly pick any data that works for a test and only want to ensure that each browser uses its own data set, then separate the tables/data sources by browser type. The automation tool should have a conditional you can use to select which data set gets picked for that test.
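As a sketch of option 2 combined with that per-browser split, here is one way to generate data that is unique per browser and per run. This is plain Python, independent of any particular automation tool; the browser list and naming scheme are illustrative assumptions:

import itertools
import time

BROWSERS = ["FF", "IE"]          # one data set per browser
_counter = itertools.count(1)    # keeps rows unique within a run
RUN_ID = time.strftime("%Y%m%d%H%M%S")  # keeps runs unique over time

def unique_user(browser: str) -> str:
    """Return a username unique per browser, per row, and per run."""
    return f"{browser}_User{next(_counter)}_{RUN_ID}"

for browser in BROWSERS:
    user = unique_user(browser)
    expected = f"User {user} successfully logged in"
    # hand `user` to the login test and assert `expected` afterwards
    print(browser, user, expected)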