Using TFS, is there any way to find out when a test first failed?
I am new to TFS and to the project, where a couple of unit tests have been failing for a while, but we do not know which commit first broke them. When committing to TFS, the tests are run automatically. So is there any way TFS can save that info in a log, so we can take a close look at exactly what happened to the code when the tests started to fail?
Alternatively, suggestions for other tools that can do this are appreciated.
As you are aware, ultimately it doesn't matter when a unit test first failed. What's important is (1) that you fix the cause of the broken tests, and (2) that the team gets notified immediately when any test breaks. That said, I agree that knowing what changed can help find the cause.
I'm a bit rusty with TFS (and don't have one in front of me), but:
Test results are stored in tbl_TestResult in the collection database.
Test results' attachments are stored in tbl_Attachment also in the collection database.
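If you want to poke at that data directly (completely unsupported, so read-only queries only), something along these lines might get you the oldest failing result for a given test. I'm going from memory on the column names, and Outcome may well be stored as a numeric code rather than text, so inspect the actual schema of tbl_TestResult for your TFS version first and adjust:

// Unsupported, read-only peek at the collection database.
// Column names (TestCaseTitle, Outcome, CreationDate) are assumptions from memory.
using System;
using System.Data.SqlClient;

class FirstFailureLookup
{
    static void Main()
    {
        const string connectionString =
            "Data Source=YourTfsSqlServer;Initial Catalog=Tfs_DefaultCollection;Integrated Security=True";

        const string query =
            @"SELECT TOP 50 TestCaseTitle, Outcome, CreationDate
              FROM tbl_TestResult
              WHERE TestCaseTitle = @title AND Outcome <> 'Passed'
              ORDER BY CreationDate ASC";   // oldest failures first

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(query, connection))
        {
            command.Parameters.AddWithValue("@title", "MyProject.Tests.MyFailingTest");
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}  {1}  {2}",
                        reader["CreationDate"], reader["Outcome"], reader["TestCaseTitle"]);
                }
            }
        }
    }
}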
Do you have the TFS cube set up?
Might be of use to you: Test steps and results in TFS 2010
Related
I am trying to formalize the development workflow, and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new when it comes to setting up processes, and it would be great to have feedback. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration/e2e test cases will be written in the same branch as the developer's code. The e2e test cases will be written in a separate branch, as it's a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding a capability in the CI system to automatically move the test case to Success when it passes in CI (this should be possible using the JIRA API; a rough sketch follows after these steps). This is completely optional and we will most probably be doing it manually.
When all the test cases related to a user story have moved to Success, the user story can then be moved to Done.
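For reference, a rough sketch of what that optional CI step might look like (assuming Jira Server with basic auth; the base URL, issue key, credentials and transition id are placeholders - the transition id would be looked up first via GET /rest/api/2/issue/{key}/transitions):

// Hypothetical CI step: move a Testcase issue to Success once its automated test passes.
// All values below are placeholders for illustration.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class JiraTransitioner
{
    static async Task Main()
    {
        const string baseUrl = "https://jira.example.com";
        const string issueKey = "TEST-123";
        const string successTransitionId = "31";   // id of the "Success" transition in our workflow

        using var client = new HttpClient();
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("ci-user:api-token"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        var body = "{ \"transition\": { \"id\": \"" + successTransitionId + "\" } }";
        var response = await client.PostAsync(
            baseUrl + "/rest/api/2/issue/" + issueKey + "/transitions",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // Jira returns 204 No Content when the transition is applied.
        Console.WriteLine((int)response.StatusCode);
    }
}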
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 for adding the test cases. Working in the same branch will enable the QA and the developer to always be in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states, and you may have other outcomes like Cancelled or Duplicate. You can use the Resolved field for the outcomes, or create a custom field for Success/Failure and decouple it from both the outcome and the status. Ideally you do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would let you see more specifically where work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time goes into actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving everyone from product to developers a glimpse of the story's testing progress.
You can also create testing tasks and get a glimpse of the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.
I found a bug in an open source project on GitHub, and wrote a failing test for it, but haven't suggested a fix due to insufficient familiarity with the code.
How does one usually contribute such tests? Shall I create a pull request? Note that the continuous integration would fail for my commit, as it adds a (currently) failing test.
(For reference here's the actual test)
You can try using the "Issues" functionality of GitHub: create an issue as a bug report instead of creating a pull request.
I am working with a TFS 2017 environment with test agent 2015. Before this we had a TFS 2013 environment with test agent 2013 and MTM (this worked fine).
At the moment we have the following problem:
We run a set of around 40 tests, all of them with multiple iterations. If the first iteration fails we see this in TFS: the test status is set to Failed, which is perfect. However, if the first iteration succeeds and the second one fails, the test case is set to Passed in TFS, but if a later iteration fails we want the whole test to be set to Failed. The way it is now, it looks like almost all our tests pass, even though a lot of later iterations fail, which means we get false reporting.
When I open the .TRX file belonging to one machine I can see what iterations failed and which one succeeded.
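In the meantime I am considering post-processing the .TRX files myself to flag tests that are reported Passed even though an iteration failed. Something like this (the element and attribute names are based on how I read the 2010 TRX schema, where iterations appear under InnerResults, so it may need adjusting against your own files):

// Scans a .trx file and reports tests marked Passed overall although an iteration failed.
// Element/attribute names are assumptions based on the 2010 TRX schema.
using System;
using System.Linq;
using System.Xml.Linq;

class TrxIterationScanner
{
    static void Main(string[] args)
    {
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        var doc = XDocument.Load(args[0]);   // path to the .trx file

        var suspicious =
            from result in doc.Descendants(ns + "UnitTestResult")
            let inner = result.Element(ns + "InnerResults")
            where inner != null
            let failedIterations = inner.Elements(ns + "UnitTestResult")
                .Count(r => (string)r.Attribute("outcome") != "Passed")
            where failedIterations > 0 && (string)result.Attribute("outcome") == "Passed"
            select new { Name = (string)result.Attribute("testName"), Failed = failedIterations };

        foreach (var test in suspicious)
            Console.WriteLine(test.Name + ": " + test.Failed + " failed iteration(s) but reported Passed");
    }
}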
So, the problem in a nutshell:
If the first iteration of a test passes and the second one fails, the whole test is set to Passed instead of Failed, which gives us false reporting.
I have absolutely no idea what we are doing wrong, but right now it gives us false information about our runs.
Is there anyone here that has experienced the same problem?
Any help would be really appreciated, as I have not been able to find any information about this subject on Google.
I have posted this on the Microsoft forum. They have answered that they can reproduce it, which means it's probably an issue in TFS or the test agent. More information can be found here:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/4a384376-feae-46a9-a3da-e4445bc905d8/tfs-automated-tests-with-multiple-iterations-show-as-passed-even-when-the-second-iteration-fails?forum=tfsgeneral
I'm trying to get acquainted with test automation using the Microsoft TFS API.
I've created a program which runs my test set - it uses code similar to the one described here, e.g.:
var testRun = _testPoint.Plan.CreateTestRun(false);
testRun.DateStarted = DateTime.Now;
// ...
testRun.Save();
I believe this forces them to start as soon as any of the agents can run them, instead of being delayed to a certain time. Am I wrong? Anyway, it works all right.
But I was told by my lead that the task should be started each time new input files are copied to a certain folder (on the network I think, or perhaps in TFS).
So I'm searching for a way that allows triggering tests on some condition - but currently without any luck. Probably I'm missing the proper keywords.
I only found something vaguely related here but it seems they say it is not possible in a proper way.
So are there any facilities in TFS / MTM, any ways or approaches to achieve my goal? Thanks in advance for any hints / links.
You would need to write a Windows service (or similar) that uses a FileSystemWatcher. Then, when the file changes, you can run your code above.
There is no built-in feature in TFS to watch a folder for changes.
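A minimal sketch of that approach, as a console host rather than a full Windows service; KickOffTestRun() is just a placeholder for the CreateTestRun code from your question:

// Watches a folder for new input files and starts a test run when one appears.
// The path is a placeholder; in real use, debounce and wait for the copy to finish.
using System;
using System.IO;

class InputFolderWatcher
{
    static void Main()
    {
        var watcher = new FileSystemWatcher(@"\\server\share\test-inputs")
        {
            IncludeSubdirectories = false,
            NotifyFilter = NotifyFilters.FileName | NotifyFilters.LastWrite
        };

        watcher.Created += (sender, e) =>
        {
            Console.WriteLine("New input detected: " + e.FullPath);
            KickOffTestRun();
        };

        watcher.EnableRaisingEvents = true;
        Console.WriteLine("Watching for new input files. Press Enter to exit.");
        Console.ReadLine();
    }

    static void KickOffTestRun()
    {
        // var testRun = _testPoint.Plan.CreateTestRun(false);
        // testRun.DateStarted = DateTime.Now;
        // ...
        // testRun.Save();
    }
}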
We have a particular file, say X.zip, that is only modified by 1 or 2 people, so we don't want the build to trigger on every check-in, as the other files are mostly untouched.
I need to check a condition prior to building: whether the checked-in item is "X.zip" or not. If yes, trigger a build; if not, don't. We use only CI builds.
Any idea on how to trigger the build only when this particular file is checked in? Any other approaches would be greatly appreciated, as I am a newbie in TFS...
Tara.
I don't know of any OOTB feature which can do this; what you would need to do is write your own custom MSBuild task which is executed prior to the build running (a pre-build action).
The task would then need to use the TFS API to check the current check-in for the file you want, and if it's not found you'll have to set the task to failed.
This isn't really ideal, as it'll report a build failure to Team Build, which, depending on whether you're using check-in policies, may be unhelpful. It'd also be harder to work out at a glance which builds failed because of the task and which failed because of a real problem.
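Roughly, such a pre-build task might look like the sketch below. The property names are illustrative, the changeset id would have to be fed in from the build, and the Microsoft.TeamFoundation client assemblies need to be referenced:

// Illustrative pre-build task: fails unless the triggering changeset touched the given file.
// Property names and usage are a sketch, not a drop-in solution.
using System;
using System.Linq;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

public class RequireFileInChangeset : Task
{
    [Required]
    public string CollectionUrl { get; set; }   // e.g. http://tfs:8080/tfs/DefaultCollection

    [Required]
    public int ChangesetId { get; set; }        // the changeset that triggered the build

    [Required]
    public string FileName { get; set; }        // e.g. X.zip

    public override bool Execute()
    {
        var collection = new TfsTeamProjectCollection(new Uri(CollectionUrl));
        var versionControl = collection.GetService<VersionControlServer>();
        Changeset changeset = versionControl.GetChangeset(ChangesetId);

        bool touched = changeset.Changes.Any(change =>
            change.Item.ServerItem.EndsWith(FileName, StringComparison.OrdinalIgnoreCase));

        if (!touched)
        {
            Log.LogError("Changeset {0} did not modify {1}; failing the build deliberately.", ChangesetId, FileName);
            return false;
        }

        return true;
    }
}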
You can change the build to occur less frequently rather than on every check-in, which will reduce load on your build server.
Otherwise you may want to dig into CruiseControl.NET; it may support better conditional builds.
If you could move X.zip into its own folder, then you could set up a CI build with a workspace that only looked at the folder containing X.zip.
You would then need to add an explicit call to tf get to download the rest of the code as Team Build only downloads what the workspace is looking at.
But this might be simpler than the custom task approach?