After Test Case/Suite/Plan Migration all Test Cases appear as "Active" - does not match source state - azure-devops-migration-tools

After performing a migration for a Team Area (Work Items, then Test configs), the feedback I have from my tester colleague is that every Test that has been migrated is sitting in an "Active" state, when a good number of these are "Passed", "Failed" or "In Progress" on the source platform.
Is this a limitation of the WorkItemMigrationConfig or of any of the Test configuration processors? Is this expected behaviour?
It is quite important to retain the outcome states of Tests that have been run historically.

This is the expected behaviour.
There is no way to migrate Test Runs (which is where that data comes from) to another environment. Test results data is lost during a migration.

Related

How to use "Requirement-based suite" (Azure DevOps) when testing same user story in multiple environments

I relate testing to the user stories by creating a requirement-based suite in ADO. When I do this, a beaker icon appears on the user story, indicating whether testing passed, failed, etc.
I've noticed that if you relate multiple requirement-based suites to the same user story, the newest test results overwrite what the beaker shows. For example, in my case, stories progress from lower environments to higher environments (in my instance: dev, QA (functional testing), UAT (regression testing), prod). I can't use the same requirement-based suite for both QA and UAT because I'd have to reset the tests, which would lose the results from the lower environment. So I have to create a new requirement-based suite for the higher environment and relate it to the same user story. When I do this, the new suite's results overwrite the results for the lower environment when looking at the beaker. In other words, if I have one test in QA and one in UAT, both related to the user story, the beaker will only reflect one test, not both.
What I think should happen is that the beaker shows the testing from both the lower and higher environments.
Am I doing this right?
I can reproduce this situation on my side; this could be by design. The beaker on the card will only display the outcome from the higher-configuration testing.

Workflow from development to testing and merge

I am trying to formalize the development workflow and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new when it comes to setting up processes, so it would be great to have feedback on it. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch as the developer's work. The E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves from states New => Under Testing => Success/Failure
Additionally, we could consider adding a capability in the CI system to automatically move the Test case to Success when the test case passes in the CI (this should be possible using the JIRA API; see the sketch after these steps). This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have been moved to Success, the user story can then be moved to Done.
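As a rough illustration of the optional CI step above, a status change can be triggered through Jira's REST transitions endpoint. This is only a sketch: the base URL, credentials, issue key and transition id are placeholders you would replace with values from your own instance.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

// Minimal sketch: move a Jira issue (e.g. a Testcase) to another status via
// the /transitions endpoint. Base URL, credentials, issue key and transition
// id below are placeholders for values from your own Jira instance.
class JiraTransitionSketch
{
    static async Task Main()
    {
        using var client = new HttpClient { BaseAddress = new Uri("https://your-domain.atlassian.net") };
        var token = Convert.ToBase64String(Encoding.UTF8.GetBytes("ci-user@example.com:API_TOKEN"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // "31" stands in for the id of the transition that moves the issue to Success;
        // the available ids can be listed with GET /rest/api/2/issue/{key}/transitions.
        var body = new StringContent("{\"transition\": {\"id\": \"31\"}}", Encoding.UTF8, "application/json");
        var response = await client.PostAsync("/rest/api/2/issue/TEST-123/transitions", body);
        response.EnsureSuccessStatusCode();
    }
}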
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will keep the QA and developer always in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states. You may have other outcomes like cancelled or duplicate. You can use the Resolution field for the outcomes. You could also create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, then you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would allow you to see more specifically where the work is in the flow and also capture the amount of time that is spent writing tests, how long test cases wait, and how much time is spent actually executing tests and on defect remediation.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equaling Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving everyone from Product to the Developer a glimpse of the testing progress of the story.
You can also create testing tasks and follow the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) which downloads a bunch of test data and checks that the new version works with it.
My problem is that I can't seem to figure out a way to show the test results in a sensible way. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say 6 branches and 6 different types of testing mean I need 36 testing jobs, and when I want to make a change, say to save more builds, I need to update all 36 by hand).
I've been looking at Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs be created / updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it just that the separation of the building and testing jobs like this is not recommended or is there some other method to allow the filtering of test results for a job based on build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't look at all the Jenkins job results for an overview, but would create a web page or something like that instead.
I wouldn't expect to need the full matrix of builds/tests; the combinations that are actually used will become clear from the use cases.

Selenium setup/teardown best practices - returning data to original state

[Test]
public void ChangingStateIsPersistedAfterSave()  // illustrative method name added for context
{
    // Confirm the starting value before editing
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("Illinois"));
    // Change the State dropdown and check the selection took effect
    _editUserPage.State.SelectedText = "New York";
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
    // Save and verify the change persisted
    _editUserPage.SaveChanges();
    Assert.That(_editUserPage.State.SelectedText, Is.EqualTo("New York"));
}
In my example above, I am changing the user's state from Illinois to New York; my question is: should I change the state back to the original value of Illinois at the end of the test?
I have roughly 20 other independent tests in the same file and I wanted to know what the best practice is for returning data to the original state. We are using setup/teardown for the entire test suite, just not within each individual test.
The best practice I have seen so far was this (a minimal sketch follows the list):
The test had one test data input (an Excel sheet)
Each run would add some prefix to the data (e.g. name Pavel => Test01_Pavel)
The test verified that such data did not already exist in the system
The test created test data according to the input and verified that the data were present
The test deleted all the test data and verified that they were gone
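A minimal NUnit-style sketch of that pattern, purely for illustration: the page object, its methods and the user name below are placeholders, not a real API.

[Test]
public void CreateAndCleanUpPrefixedUser()
{
    // "Test01_" is the run-specific prefix applied to the input data.
    var userName = "Test01_Pavel";

    Assert.That(_usersPage.Exists(userName), Is.False);    // data must not exist yet

    _usersPage.CreateUser(userName);                        // create the test data
    Assert.That(_usersPage.Exists(userName), Is.True);      // verify it is present

    _usersPage.DeleteUser(userName);                        // clean up
    Assert.That(_usersPage.Exists(userName), Is.False);     // verify the deletion
}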
But the best answer really is "it depends." Personally, I am not deleting any test data from the system, because:
The test environment is strictly separated from the production one
Test data can be useful later on during performance testing (e.g. downloading the list of users from the system)
So the real question which you should ask yourself is:
Does deleting test data at the end bring anything good to you?
And vice versa: What happens if the test data remain in the system?
BTW, if you feel that "the application will definitely break if there is too much nonsense/dummy data in it", you should definitely test that scenario. Imagine that your service becomes popular overnight (Charlie Sheen tweeting about using your page :) ) and millions of users want to register themselves.
The approach taken in the company I work for is:
Spin up a dedicated test environment in cloud (AWS)
Kick off suite of tests
Each test would insert the data it requires
Once the test suite has completed then tear down the servers, inc. db
This way you have a fresh database each time the tests run and therefore the only danger of bad data breaking a test is if 2 tests in the suite are generating conflicting data.

Entity Framework Code First - Tests Overlapping Each Other

My integration tests use a live DB that's generated using the EF initializers. When I run the tests individually they run as expected. However, when I run them all at once, I get a lot of failed tests.
I appear to have some overlap going on. For example, I have two tests that use the same setup method. This setup method builds & populates the DB. Both tests perform the same Act step, which adds a handful of items to the DB (the same items), but each test checks different calculations (instead of one big test that does a lot of things).
One way I could solve this is to do some trickery in the setup that creates a unique DB for each test that's run; that way everything stays isolated. However, the EF initialization isn't working when I do that, because it creates a new DB rather than dropping & replacing the existing one (only the latter triggers the seeding).
Ideas on how to address this? Seems like an organization issue with my tests... just not sure how best to go about it and I was looking for input. I really don't want to have to run each test manually.
Use the test setup and tear-down methods provided by your test framework: start a transaction in the test setup and roll it back in the test tear-down (example for NUnit below). You can even put the setup and tear-down methods in a base class for all tests; each test will then run in its own transaction, which is rolled back at the end of the test and returns the database to its initial state.
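A minimal sketch of such a base class, assuming NUnit and System.Transactions; the EF context enlists automatically as long as it opens its connection inside the ambient TransactionScope.

using System.Transactions;
using NUnit.Framework;

// Base class that wraps every test in a transaction and rolls it back
// afterwards, returning the database to its initial state.
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollbackTransaction()
    {
        // Disposing without calling Complete() rolls the transaction back.
        _scope.Dispose();
    }
}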
In addition to what Ladislav mentioned, you can also use what's called a Delta Assertion.
For example, suppose you test adding a new Order to the SUT.
You could create a test that Asserts that there is exactly 1 Order in the database at the end of the test.
But you can also create a Delta Assertion by first checking how many Orders there are in the database at the start of the test method. Then, after adding an Order to the SUT, you assert that there are NumberOfOrdersAtStart + 1 Orders in the database.
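A small sketch of that Delta Assertion with NUnit; MyDbContext, Orders and _orderService are placeholders for your own context, entity set and system under test.

using System.Linq;
using NUnit.Framework;

[TestFixture]
public class OrderDeltaAssertionTests
{
    [Test]
    public void AddingAnOrderIncreasesOrderCountByOne()
    {
        int ordersAtStart;
        using (var context = new MyDbContext())
        {
            // Capture the starting count instead of assuming an empty database.
            ordersAtStart = context.Orders.Count();
        }

        _orderService.AddOrder(new Order());   // _orderService stands in for the SUT

        using (var context = new MyDbContext())
        {
            // Compare against the delta, not an absolute number, so data left
            // behind by other tests cannot break this assertion.
            Assert.That(context.Orders.Count(), Is.EqualTo(ordersAtStart + 1));
        }
    }
}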