Management of a TFS test plan for a new iteration of testing

In the TFS Test Hub, I have a reference test plan in which a few hundred test cases are ordered and sorted in a hierarchy of folders:
- FrontOffice
-- UserManagement
--- TestCase 1234
--- TestCase 5678
- BackOffice
-- etc.
When a new iteration has to be tested, I have two choices:
1- Add the existing test cases to a new test plan, which is good but makes me lose the folder hierarchy
2- Clone the reference test plan, which preserves the folders but creates clones of the test cases
In the latter case, the link to the requirement becomes indirect (second order):
Requirement --TestedBy -> ReferenceTestCase --Cloned-> ThisIterationTestCase
Option #1 is good for reporting, but tedious for execution
Option #2 is good for execution, but makes it impossible to query test results bound to a requirement
Do you guys have any advice regarding this situation?

For your requirement, you can create test suites programmatically through the REST API or the client API (the suite structure can be defined in a JSON or XML file):
Create a test suite
The Test Management API – Part 2: Creating & Modifying Test Plans
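For example, the folder hierarchy of the reference plan can be rebuilt in the new plan by creating static suites through REST calls, and the existing test cases (not clones) can then be added to them, keeping the direct Tested By link. A minimal C# sketch, assuming the classic endpoint from the "Create a test suite" documentation above (collection URL, project, plan ID, parent suite ID and PAT are placeholders):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CreateSuite
{
    static async Task Main()
    {
        var collection = "https://tfs.example.com/DefaultCollection";   // placeholder
        var project = "MyProject";                                      // placeholder
        int planId = 42, parentSuiteId = 43;                            // root suite of the new plan
        var pat = Environment.GetEnvironmentVariable("TFS_PAT");        // personal access token

        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
                "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes(":" + pat)));

            // One call per folder; repeat it (e.g. driven by a JSON/XML description of the
            // reference plan) to rebuild the FrontOffice / UserManagement hierarchy.
            var url = $"{collection}/{project}/_apis/test/plans/{planId}/suites/{parentSuiteId}?api-version=1.0";
            var body = new StringContent(
                "{ \"suiteType\": \"StaticTestSuite\", \"name\": \"UserManagement\" }",
                Encoding.UTF8, "application/json");

            var response = await client.PostAsync(url, body);
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}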

Related

Azure DevOps - Multiple manual test runs in one

I have a manual test plan in Azure DevOps with a tree of suites that correspond to different functions in my app.
Now I need one place where I can review test results from the whole test plan run for a particular build, like acceptance tests.
There seems to be no way to run multiple suites in one run; at least I didn't find such a possibility. Tests run suite by suite produce multiple test runs, which is understandable.
What I want to achieve is one link to all test results for a specific build which I can pass on to the PM.

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) that does things like download a bunch of test data and check that the new version works with that data.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say 6 branches and 6 different types of testing mean I need 36 testing jobs, and when I want to make a change, say to save more builds, I have to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that separating the building and testing jobs like this is not recommended, or is there some other method to filter the test results of a job based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
So in particular I wouldn't look at all the Jenkins job results for an overview, but rather create a web page or something like that.
I wouldn't expect to need the full matrix of builds and tests; the combinations that are actually used will become clear from the use cases.

Cross-browser testing - how to ensure uniqueness of test data?

My team is new to automation and plans to automate our cross-browser testing.
The thing we are not sure about is how to make sure the test data is unique for each browser's test run. The test data needs to be unique due to some business rules.
I have a few options in mind:
1. Run the tests in sequential order and restore the database after each test completes.
The test report for each test will be kept individually. If any error occurs, we have to reproduce the error ourselves (the data has been reset).
2. Run the tests concurrently/sequentially and add a prefix to each test data record to uniquely identify it for each browser's testing, e.g. FF_User1, IE_User1.
3. Run the tests concurrently/sequentially with several test nodes set up and connected to different databases. Each test node will run the tests in a different browser and its test data will be stored in a different database.
Can anyone enlighten me on which is the best approach to use, or suggest any other option?
Do you need to run every test in all browsers? Otherwise, mix and match - pick which tests you want to run in which browser. You can organize your test data as in option 2 above.
Depending on which automation tool you're using, the data used during execution can be organized as iterations:
Browser | Username | VerifyText(example) #headers
FF | FF_User1 | User FF_User1 successfully logged in
IE | IE_User1 | User IE_User1 successfully logged in
If you want to randomly pick any data that works for a test and only want to ensure that the browsers use their own data set, then separate the tables/data sources by browser type. The automation tool should have an if clause you can use to then select which data set gets picked for that test.
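A minimal NUnit sketch of that organisation, where each data row is tied to one browser and carries the browser prefix (the page objects and login flow are placeholders; only the data layout is the point):

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class CrossBrowserLoginTests
{
    // Hypothetical per-browser data sets; in practice these rows could come from
    // an Excel sheet, a CSV, or a separate database per test node.
    private static IEnumerable<TestCaseData> BrowserData()
    {
        yield return new TestCaseData("FF", "FF_User1", "User FF_User1 successfully logged in");
        yield return new TestCaseData("IE", "IE_User1", "User IE_User1 successfully logged in");
    }

    [TestCaseSource(nameof(BrowserData))]
    public void Login_ShowsConfirmation(string browser, string username, string expectedText)
    {
        // In a real test: start the given browser, log in as username and
        // assert that expectedText is shown, e.g.
        // var driver = StartBrowser(browser);                    // placeholder
        // var message = new LoginPage(driver).LogInAs(username); // placeholder
        // Assert.That(message, Is.EqualTo(expectedText));

        // The prefix guarantees that no two browsers ever share a record.
        Assert.That(username, Does.StartWith(browser + "_"));
    }
}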

Selenium setup/teardown best practices - returning data to original state

[Test]
public void ChangeUserState_PersistsAfterSave()
{
    // Original value before the change
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("Illinois"));
    _editUserPage.State.SelectedText = "New York";
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
    _editUserPage.SaveChanges();
    // Value is still selected after saving
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
}
In my example above, I am changing the user's state from Illinois to New York; my question is: should I change the state back to the original value of Illinois at the end of the test?
I have roughly 20 other independent tests in the same file and I wanted to know what the best practice is for returning data to the original state. We are using setup/teardown for the entire test suite, just not within each individual test.
The best practice I have seen so far is this (a short sketch follows the list):
The test had one test data input (an Excel sheet)
Each run added a prefix to the data (e.g. name Pavel => Test01_Pavel)
The test verified that such data did not already exist in the system
The test created the test data according to the input and verified that the data were present
The test deleted all the test data and verified that the data were gone.
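A minimal NUnit sketch of that pattern, using a per-run prefix and an in-memory stand-in for the system under test (the real checks would go through Selenium or an API; names are illustrative):

using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class CreateUserTests
{
    // A unique prefix per run keeps this run's data apart from any earlier runs.
    private static readonly string RunPrefix = $"Test{DateTime.UtcNow:yyyyMMddHHmmss}_";

    // Stand-in for the real system under test.
    private readonly HashSet<string> _users = new HashSet<string>();

    [Test]
    public void CreateUser_RoundTrip()
    {
        var name = RunPrefix + "Pavel";

        // 1. Verify the data does not already exist in the system.
        Assert.That(_users.Contains(name), Is.False);

        // 2. Create the test data and verify it is present.
        _users.Add(name);
        Assert.That(_users.Contains(name), Is.True);

        // 3. Delete the test data and verify it is gone.
        _users.Remove(name);
        Assert.That(_users.Contains(name), Is.False);
    }
}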
But the best answer really is "it depends." Personally, I do not delete any test data from the system because:
The test environment is strictly separated from the production one
Test data can be useful later on during performance testing (e.g. downloading the list of users from the system)
So the real question which you should ask yourself is:
Does deleting test data at the end bring anything good to you?
And vice versa: What happens if the test data remain in the system?
BTW, if you feel that "the application will definitely break if there is too much nonsense/dummy data in it", you should definitely test that scenario. Imagine that your service becomes popular overnight (Charlie Sheen tweeting about using your page :) ) and millions of users want to register.
The approach taken in the company I work for is:
Spin up a dedicated test environment in cloud (AWS)
Kick off suite of tests
Each test would insert the data it requires
Once the test suite has completed, tear down the servers, including the DB
This way you have a fresh database each time the tests run, and therefore the only danger of bad data breaking a test is if two tests in the suite generate conflicting data.

Entity Framework Code First - Tests Overlapping Each Other

My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually they pass as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds & populates the DB. Both tests perform the same "act" step, which adds a handful of items to the DB (the same items), but what's unique is that each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run, so that everything stays isolated. However, the EF initialization stuff isn't working when I do that, because it creates a new DB rather than dropping & replacing the existing one (only the latter triggers the seeding).
Any ideas on how to address this? It seems like a matter of organizing my tests; I'm just not sure how best to go about it and was looking for input. I really don't want to have to run each test manually.
Use the test setup and teardown methods provided by your test framework: start a transaction in the test setup and roll it back in the test teardown (example for NUnit below). You can even put the setup and teardown methods into a base class for all tests; each test will then run in its own transaction, which is rolled back at the end of the test and puts the database back into its initial state.
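A minimal sketch of such a base class with NUnit and System.Transactions (names are illustrative; whether EF enlists in the ambient transaction depends on your provider and how the context/connection is created):

using System.Transactions;
using NUnit.Framework;

public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        // Everything the test does against the database is enlisted in this
        // ambient transaction.
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls the transaction back,
        // so the database returns to its initial state after every test.
        _scope.Dispose();
    }
}

Test fixtures that touch the database then simply inherit from TransactionalTestBase.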
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that asserts that there is exactly 1 Order in the database at the end of the test.
But you can also create a Delta Assertion by first checking how many Orders there are in the database at the start of the test method; then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.
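A minimal sketch of such a Delta Assertion with NUnit and a hypothetical Code First model (here the Order is added directly through EF for brevity; in a real test it would go through the SUT):

using System.Data.Entity;
using System.Linq;
using NUnit.Framework;

// Hypothetical Code First model, just enough to show the pattern.
public class Order { public int Id { get; set; } }
public class ShopContext : DbContext { public DbSet<Order> Orders { get; set; } }

public class OrderTests
{
    [Test]
    public void AddOrder_IncreasesOrderCountByOne()
    {
        using (var db = new ShopContext())
        {
            // Record the state at the start instead of assuming an empty database.
            int ordersAtStart = db.Orders.Count();

            // Act: add one Order.
            db.Orders.Add(new Order());
            db.SaveChanges();

            // Delta Assertion: check only the change relative to the starting count.
            Assert.That(db.Orders.Count(), Is.EqualTo(ordersAtStart + 1));
        }
    }
}

Because the assertion is relative, the test no longer cares what other tests (or the seed data) left behind.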