Show test results from test suites using the TFS API - tfs-sdk

I am working on a school project where I am going to analyse a company's defect database.
They are using Microsoft Team Foundation Server (TFS).
I am all new to TFS and the TFS API.
I am having some trouble getting the right data from TFS using the TFS Client Object Model.
I can retrieve all test plans, their respective test suites and every test case that a specific test suite uses, but the problem comes when I want to see in which test suite a specific test result from a test case was produced. Since more than one suite can use the same test case, I can't see which suite the result came from.
This is how I am getting test cases from suites:
foreach (ITestSuiteEntry testcase in suiteEntrys)
{
    Console.WriteLine("\t \t Test Case: {0}", testcase.Title + ", Test case priority: " + testcase.TestCase.Priority + ", Test case Id: " + testcase.TestCase.Id);
    //Console.Write(", Test Case: {0}", testcase.ToString());
    // Get each test result:
    getTestResult(testcase, proj);
}

private void getTestResult(ITestSuiteEntry testcase, ITestManagementTeamProject proj)
{
    var testResults = proj.TestResults.ByTestId(testcase.TestCase.Id);
    foreach (ITestCaseResult result in testResults)
    {
        Console.WriteLine("\t \t \t" + result.DateCreated + "," + result.Outcome);
    }
}
So my question is: how can I see which test suite was used to execute the test?

Have a look at the following code snippet.
It shows you how to get test results for a specific test suite using test point IDs.
You can use a similar approach to achieve your goal.
var tfsCollection = new TfsTeamProjectCollection(
    new Uri(tfsUrl),
    new System.Net.NetworkCredential(<user>, <password>));
tfsCollection.EnsureAuthenticated();

var testManagementService = tfsCollection.GetService<ITestManagementService>();
var teamProject = testManagementService.GetTeamProject(projectName);
var testPlan = teamProject.TestPlans.Find(testPlanId);

// Get all Test Points belonging to a particular Test Suite.
// (If you are using several Test Configurations and want to take only one of them into account,
// you will have to add 'AND ConfigurationId = <your Test Configuration Id>' to the WHERE clause.)
string queryForTestPointsForSpecificTestSuite = string.Format("SELECT * FROM TestPoint WHERE SuiteId = {0}", suiteId);
var testPoints = testPlan.QueryTestPoints(queryForTestPointsForSpecificTestSuite);

// We are going to use these ids when analyzing Test Results.
List<int> testPointsIds = (from testPoint in testPoints select testPoint.Id).ToList();

var testResults = teamProject.TestResults.ByTestId(testCaseId);
var testResultsForSpecificTestSuite = testResults.Where(testResult => testPointsIds.Contains(testResult.TestPointId));
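To get from a result back to the suite it ran in (the original question), you can invert the same idea: query the test points of each suite in the plan once, build a point-id-to-suite lookup, and resolve each result's TestPointId against it. This is a minimal sketch continuing the snippet above; the pointToSuite dictionary is illustrative, and only the plan's top-level suites are walked (recurse into SubSuites for nested ones):

// Map every test point in the plan to the suite that owns it.
var pointToSuite = new Dictionary<int, IStaticTestSuite>();
foreach (IStaticTestSuite suite in testPlan.RootSuite.SubSuites.OfType<IStaticTestSuite>())
{
    var points = testPlan.QueryTestPoints(
        string.Format("SELECT * FROM TestPoint WHERE SuiteId = {0}", suite.Id));
    foreach (ITestPoint point in points)
        pointToSuite[point.Id] = suite;
}

// Now each result can be traced back to the suite it was executed in.
foreach (ITestCaseResult result in teamProject.TestResults.ByTestId(testCaseId))
{
    IStaticTestSuite owningSuite;
    if (pointToSuite.TryGetValue(result.TestPointId, out owningSuite))
        Console.WriteLine("{0}, {1} -> suite: {2}", result.DateCreated, result.Outcome, owningSuite.Title);
}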
This blog post will help you when creating queries: WIQL for Test

Related

How to connect to local docker db to test Get async methods

I want to set up a connection to the database so that I can test two methods in two repositories: the getAllGamesasync() method, which returns a list of all entities from the database, and the getGamesByNameasync() method, which returns games by their names.
I am running the DB in Docker and have populated rows with dummy data. I want to connect to it and run the test, so the question is: what connection string do I configure so the code can talk to the SQL Server instance running in Docker?
The methods work fine; I have tested them using an in-memory DB to manually insert entities and run them against the methods using the unit test below. The unit test for Get_GamesByName looks like this:
public async Task Get_GamesByName()
{
    var options = new DbContextOptionsBuilder<GamesDbContext>()
        .UseSqlServer(Configuration.GetConnectionString("GamesDbContext"))
        .Options;

    using (var context = new GamesDbContext(options))
    {
        GamesRepository gamesRepository = new GamesRepository(context);
        var result = await gamesRepository.GetGamesByNameAsync("Witcher");
        Assert.Equal(2, result.Count);
    }
}
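As for the connection string itself: a SQL Server container is usually reached through localhost and whatever port the container publishes. A minimal sketch, assuming the default port 1433 is published and the sa account is used; the database name and password below are placeholders, not values from your setup:

// Hypothetical values -- adjust to how the container was started, e.g.:
// docker run -e ACCEPT_EULA=Y -e SA_PASSWORD=Your_password123 -p 1433:1433 mcr.microsoft.com/mssql/server
var connectionString =
    "Server=localhost,1433;Database=GamesDb;User Id=sa;Password=Your_password123;TrustServerCertificate=True";

var options = new DbContextOptionsBuilder<GamesDbContext>()
    .UseSqlServer(connectionString)
    .Options;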

How to run a specific test case in the selected environment in SoapUI

I have multiple environments and a lot of test cases, but not all test cases need to be run in every environment. Is there a way to run only specific test cases from a test suite based on the selected environment?
For Example
If I select Environment1, it will run the following test cases
TC0001
TC0002
TC0003
TC0004
TC0005
If I select Environment2, it will run only the following test cases
TC0001
TC0003
TC0005
There can be different solutions to achieve this; since you have multiple environments, you are presumably using the Pro software (environments are a ReadyAPI feature).
I would achieve it using the test suite's Setup Script:
Create a test-suite-level custom property with the same name as your environment. For instance, if DEV is the environment defined, use DEV as the test suite property name and provide the list of test case names, separated by commas, as its value, say TC1, TC2 etc.
Define the other environments and their values in the same way.
Copy the script below into the test suite's Setup Script; when executed it enables or disables the test cases according to the environment and property value.
Test Suite's Setup Script
/**
 * This is the test suite's Setup Script,
 * which enables / disables the required
 * test cases based on the user-defined list
 * for the specific environment.
 **/
def disableTestCase(testCaze) {
    testCaze.disabled = true
}

def enableTestCase(testCaze) {
    testCaze.disabled = false
}

def getEnvironmentSpecificList(def testSuite) {
    def currentEnv = testSuite.project.activeEnvironment.NAME
    def enableList = testSuite.getPropertyValue(currentEnv).split(',').collect { it.trim() }
    log.info "List of tests to enable: ${enableList}"
    enableList
}

def userList = getEnvironmentSpecificList(testSuite)
testSuite.testCaseList.each { kase ->
    if (userList.contains(kase.name)) {
        enableTestCase(kase)
    } else {
        disableTestCase(kase)
    }
}
Another way to achieve this is the Events feature of ReadyAPI: use a TestRunListener.beforeRun() event handler and filter whether each test case should be executed or ignored.
EDIT:
If you are using ReadyAPI, you can use the new feature that lets you tag test cases. A test case can be tagged with multiple values, and you can execute tests by specific tags. In that case you may not need the Setup Script above, as that approach is meant for the open-source edition. Refer to the documentation for more details.
Note that this tag feature is specific to the Pro software; the open-source edition does not have it.

Hadoop MapReduce testing - custom record reader

I have written a custom record reader and am looking for sample test code to test it using MRUnit or any other testing framework. It works fine functionally, but I would like to add test cases before I make an install. Any help would be appreciated.
In my opinion, a custom record reader is like any iterator. For testing my record readers I have been able to work without MRUnit or any other Hadoop JUnit framework. The tests execute quickly and the footprint is small too. Initialize the record reader in your test case and keep iterating on it. Here is pseudocode from one of my tests; I can provide more details if you want to proceed in this direction.
Configuration conf = new Configuration();
MyInputFormat myInputFormat = new MyInputFormat();

// Configure the job and provide the input format configuration.
Job job = Job.getInstance(conf, "test");
conf = job.getConfiguration();

// Verify split type and count if you want to verify the input format as well.
List<InputSplit> splits = myInputFormat.getSplits(job);

TaskAttemptContext context = new TaskAttemptContextImpl(conf, new TaskAttemptID());
RecordReader<LongWritable, Text> reader = myInputFormat.createRecordReader(splits.get(1), context);
reader.initialize(splits.get(1), context);

for (int i = 0; i < expectedRecordCount; i++) {
    assertTrue(reader.nextKeyValue());
    // Verify key and value.
    assertEquals(expectedLong, reader.getCurrentKey().get());
}

How to get result history of specific testcase using tfs-sdk

I have an automated test case in Test Manager. This test case was executed several times in different builds (it appears in several test runs). I can see the history of test execution through the Test Manager UI (Test Manager -> Analyze Test Runs -> Open Test Run -> View Results for Testcase -> Result History table).
How to get same data using TFS API?
I would do it this way:
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.TestManagement.Client;

var tfsCollection = new TfsTeamProjectCollection(
    new Uri(@"http://<yourTFS>:8080/tfs/<your collection>"),
    new System.Net.NetworkCredential(<user who can access to TFS>, <password>));
tfsCollection.EnsureAuthenticated();

ITestManagementService testManagementService = tfsCollection.GetService<ITestManagementService>();
var testRuns = testManagementService.QueryTestRuns("SELECT * FROM TestRun WHERE TestRun.TestPlanId = <your test plan ID>");

IEnumerable<ITestCaseResult> testResultHistoryYouWant =
    from testRun in testRuns
    from testResult in testRun.QueryResults()
    where testResult.TestCaseId == <your test case ID>
    select testResult;
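To reproduce the Result History table you can then print the fields you need from each historical result. A small sketch; DateCreated and Outcome are used elsewhere in this thread, while BuildNumber is an assumption and can be dropped if your TFS version's ITestCaseResult lacks it:

// One line per historical execution, oldest first.
foreach (var result in testResultHistoryYouWant.OrderBy(r => r.DateCreated))
{
    Console.WriteLine("{0}: {1} (build {2})", result.DateCreated, result.Outcome, result.BuildNumber);
}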

Generate custom steps with behat

I am trying to write a custom step that generates other steps.
My code looks like:
/**
 * @Then /^Check_raoul$/
 */
public function checkRaoul()
{
    // grab the content ...
    // get players ...
    $to_return = array();
    foreach ($players as $player) {
        $player = $player->textContent;
        if (preg_match('/^.*video=([^&]*)&.*$/', $player, $matches)) {
            array_push($to_return, new Step\Then('I check the video of id "'.$matches[1].'"'));
        }
    }
    return $to_return;
}

/**
 * @Then /^I check the video of id "([^"]*)"$/
 */
public function iCheckTheVideoOfId($id)
{
    // ...
}
This works fine, but when integrating with Jenkins or running on the CLI, if several executions of iCheckTheVideoOfId fail, I see just one error. I would like the number of reported steps to equal the number of iCheckTheVideoOfId calls.
What am I doing wrong?
We abandoned using Jenkins for BDD checks due to the differences in how test feedback is presented and what Jenkins is capable of. We found that running our suites locally, plus a full check before pushing code to the repo, produced better results and helped everyone get better at using the framework.
To answer your question directly, I would suggest configuring your Jenkins job not to fail when a test fails.
This can be accomplished by not outputting results at all: modify your command-line options so failures are not printed and results are just logged to an output file. You can then run a script at the end to check for failures.