How to create and link a bug to a specific iteration on a Test Result in Microsoft Test Manager (Microsoft Team Foundation Server, TFS)

I have a manual test result with various iterations. Some iterations passed the test but others didn't. I need to create and link a bug to a specific test result iteration (the one that didn't pass), but when I choose to create and link a bug, it always defaults to the first iteration. I can't see how to choose the specific iteration I want to link the bug to.
[Screenshots: before the click / after the click]
The MSDN article at http://msdn.microsoft.com/en-us/library/dd380693.aspx shows how to link a bug to a test result. However, it doesn't explain how to link a bug to a particular iteration.
Thanks in advance!

You'd select it in the Iteration dropdown, under Classification.

Apparently there is no way to do this once the test run has finished. It's a usability bug that Microsoft needs to fix; at least, that's what a testing Microsoft Partner says on the MSDN forums. The only way to do this is from the Test Runner itself.

Related

(MS Dynamics test automation) Cannot switch to iframe, frames changing automatically

I am trying to switch frames in an MS Dynamics 365 system using Selenium WebDriver. I will explain one of the issues below. Here is the HTML element code:
element code here
Usually I used id=contentIFrame0 or 1, and the frames switched fine. The problem is that MS Dynamics generates those iframes dynamically. There are usually at most three (contentIFrame0, contentIFrame1, contentIFrame2), but you never know whether there will be one or two on the page, or why; so if you target one of them directly today, tomorrow your tests will fail because of the changes.
It seems like I should always switch to the last frame, but that only works randomly, because sometimes the first one contains the element and another one contains only scripts. Another thing I tried is switching to the one iframe whose style attribute is visibility: visible (before that, I tried printing to the console how many visible frames the driver sees, but it always printed 0). Also, if I print how many iframes there are on the page, the counter says 2, but I can see 3.
Has anyone who has tried to automate MS Dynamics 365 run into the same problem?
I have probably described all the cases; maybe you will notice the logic and the difference.
I am not sure if this works in your case, but please give it a try.
If you know one of the elements inside the frame you are trying to switch to, then use a CSS selector or XPath:
driver.switchTo().frame(driver.findElement(By.cssSelector("iframe[title='test']")));
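Since the frames are generated dynamically, a hedged variant is to wait for a matching frame to become available before switching. Below is a minimal sketch; the CSS selector (matching whichever contentIFrame is currently visible) and the 10-second timeout are assumptions you would adapt to your page:
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class FrameHelper {
    // Wait up to 10 seconds for a visible contentIFrame, then switch to it.
    // The selector and timeout are assumptions: adjust them for your form.
    public static void switchToVisibleContentFrame(WebDriver driver) {
        new WebDriverWait(driver, 10).until(
                ExpectedConditions.frameToBeAvailableAndSwitchToIt(
                        By.cssSelector("iframe[id^='contentIFrame'][style*='visible']")));
    }
}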
It is very hard to test in this fashion, as Microsoft doesn't guarantee that the objects being rendered will stay the same. It may be 3 frames today, but in the next version the dev team may introduce more or fewer; working with the DOM directly is no longer supported.
I would highly recommend the following framework for testing Dynamics: https://github.com/Microsoft/EasyRepro
It will elevate your testing one level: it introduces a layer of abstraction that minimizes the need to work with the HTML directly by isolating all that low-level work in the framework code.
Here is a great post about EasyRepro: http://www.itaintboring.com/dynamics-crm/easy-repro-what-is-it/
Good luck.
This XPath finds the main pane reliably:
//iframe[contains(@id,'contentIFrame') and contains(@style,'visible')]
Note: this is not applicable to the Dynamics 365 Unified Interface, which has a completely different DOM.
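Used from Selenium's Java bindings, that XPath would look like this (a sketch combining it with the switchTo pattern from the earlier answer):
driver.switchTo().frame(driver.findElement(
        By.xpath("//iframe[contains(@id,'contentIFrame') and contains(@style,'visible')]")));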

Best practice for writing tests that reproduce bugs

I am struggling a bit with how to write tests that reproduce an issue that has not yet been fixed.
Should one write the test with wrong expectations, so that once the bug is fixed the developer sees the failure and adjusts the expectations? Or should one write the test with the correct expectations and disable it, re-enabling it once the bug is fixed?
I would prefer to define the wrong expectations and add the correct ones in comments; then, once I fix the issue, I immediately get notified that the test fails. If I disable it, I won't see it failing, and it will probably stay disabled until someone rediscovers it.
Are there any other ways of doing this?
Thanks for your comments.
Martin
Ideally you would write a test that reproduces the bug and then fix said bug.
If for whatever reason that is not currently an option, I would say that your approach of having the wrong expectations is better than having an ignored test, assuming you use a clear variable name / method name / comment to indicate that the test is more a placeholder than the desired outcome.
One thing that I've done is write a test that is a "time bomb" reminder. I pick a date a few weeks or months out, by which I expect to be able to get back to it or have it fixed. If I end up having to push the date out two or three times, I delete the test, because it must not be that important.
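For illustration, here is a minimal JUnit sketch of such a time-bomb reminder; the cut-off date and ticket number are hypothetical placeholders:
import static org.junit.Assert.assertTrue;

import java.time.LocalDate;

import org.junit.Test;

public class TimeBombReminderTest {
    // Passes until the chosen date, then starts failing as a reminder
    // that the bug is still open. Date and ticket number are hypothetical.
    @Test
    public void remindMeToRevisitTicket1234() {
        assertTrue("Deadline passed: revisit ticket #1234",
                LocalDate.now().isBefore(LocalDate.of(2025, 1, 1)));
    }
}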
As @Jarred said, the best way is to write a test that expresses the correct expectations, check that it fails, then fix the production code and see the test pass.
If that's not an option, then remember that tests are not only there to test but also to document. So write a test that documents how your program actually works, and if necessary add a comment to the test. And don't write tests that are ignored: it's pointless. In the future you may refactor your code many times; you could accidentally fix this test or introduce even more errors in this area. Writing tests that are intended to be ignored long-term is just a waste of time.
Don't be afraid that you will forget about that particular bug/test; just create a ticket in your issue-tracking system. That's what it's made for.
If you use a testing framework that supports groups, you can add all those tests to a group, so you can instantly exclude them if needed.
Also, I really don't like the concept of 'time bomb tests'. Your build MUST be reproducible; that's a fundamental assumption of release management, continuous integration, the ability to hand your code to another team, etc. Tests are not meant to track and remind you about issues; that's the job of the issue-tracking system. Seriously, don't do it.
Actually, I thought about this again. We are using JUnit, and it supports defining expectations on exceptions via @Test(expected=Exception.class).
So one can write the test with the desired (correct) expectations and annotate it with @Test(expected=AssertionError.class). Once the bug is fixed, the test starts failing and the developer has to remove the expectation.
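A minimal sketch of that pattern (the buggy add method here is a hypothetical stand-in for the real production code):
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class KnownBugTest {
    // Hypothetical buggy production method standing in for the real code under test.
    private static int add(int a, int b) {
        return a + b - 1; // deliberate bug for illustration
    }

    // The assertion states the correct expectation; expected = AssertionError.class
    // documents that it currently fails. Once the bug is fixed, the assertion passes,
    // no AssertionError is thrown, and JUnit fails this test, prompting the
    // developer to remove the expectation.
    @Test(expected = AssertionError.class)
    public void addShouldReturnTheSum_knownBug() {
        assertEquals(100, add(99, 1));
    }
}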

Is there any Java Testing Tool for manual tests?

Basically, I want a free testing program that will allow me to create MANUAL tests with a series of steps where I can mark steps as passed or failed. I would like it to behave much like SpiraTest, but it must be non-web-based.
In other words, I might write a test like:
Step  Description   Expected              Actual  Pass/Fail
1.    Run Program   Program should start          Pass
2.    Click Open    Open dialog displays          Pass
3.    Select file   Program opens file            Pass
Anyone know if such a thing exists?
And no, I do not want any automated testing stuff; this must be manual user testing. Thanks!
I use an Excel spreadsheet for recording detailed test steps. I think that should solve your problem pretty well.

Print complete control flow through gdb including values of variables

The idea is that, given a specific input to the program, I want to automatically step through the complete program and dump its control flow along with all the data being used, such as classes and their variables. Is there a straightforward way to do this? Can it be done by scripting gdb, or does it require modifying gdb itself?
OK, the reason for this question is an idea for a debugging tool. What it does is this: given two different inputs to a program, one causing an incorrect output and the other a correct one, it tells you which parts of the control flow differ.
So what I think is needed is a complete dump of these two control flows going into a diff engine. If the two inputs follow similar control flows, their diff would (in many cases) give a good idea of why the bug exists.
This could be made into a very engaging tool, with many features built on top of it.
Tell us a little more about the environment. dtrace, for example, will do a marvelous job of this on Solaris or Leopard. gprof is another possibility.
A rough-and-ready version of this could be done with yes(1) or expect(1).
If you want to get fancy, GDB can be scripted with Python in some versions.
What you are describing sounds a bit like gdb's "tracepoint debugging"; see gdb's internal help ("help tracepoint"). You can also see a whitepaper here: http://sourceware.org/gdb/talks/esc-west-1999/
Unfortunately, this functionality is not currently implemented for native debugging, but I believe that CodeSourcery is doing some work on it.
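In brief, and assuming a target that supports tracepoints (historically this meant remote targets via gdbserver): you set them with trace, attach collect expressions via actions, record a run with tstart/tstop, and then step through the recorded frames with tfind and tdump.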
Check this out: unlike Coverity, Fenris is free and widely used.
See also: How to print the next N executed lines automatically in GDB?

How do I run a subset of OCUnit tests in Xcode

I have a suite of unit tests that I use before checking in my project. However, very often it's the case that only one of them finds some regression in the code. In these cases I'd like to only run that particular unit test while debugging the failure. I haven't found any way to do this in Xcode. Is it possible?
If you're happy restricting your testing to a single test class, a simple option is to create a second test target (duplicate the existing target, change the product name and remove the contents of the "Compile Sources" build phase, if you wish) and add only the test source file you're trying to fix to it.
Alternatively, you can use the "Other Test Flags" option to pass a -SenTest argument to otest, the test runner:
% /Developer/Tools/otest
2009-08-29 22:28:39.555 otest[70089:10b] Usage: otest [-SenTest Self | All | None | <TestCaseClassName/testMethodName>] <path of unit to be tested>
More information about using this method is here.
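For example, following the usage line above, setting "Other Test Flags" to -SenTest MyTestCase/testSomething (hypothetical class and method names) runs just that one test.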
Thanks for that push in the right direction. I ended up using the same basic concept, but I added a GUI that lets you select what gets run, as well as a nice red/green status for each test. If anyone is interested, the code is at the URL below. The UI needs more spit and polish, but it seems to be working.
http://github.com/nall/XcodeUnitTestGUI/tree/master
After I started the project above, I found this project which is really fantastic.
http://github.com/gabriel/gh-unit
For new readers: a much better way, now available in Xcode, is to edit the scheme for the target to be tested and select "Test" in the left-hand column of the scheme pane.
Use the widgets in the Tests column to expand targets and suites.
You can disable/enable tests on a per-target, per-suite, or per-test basis using the check boxes on the right.