Workflow from development to testing and merge - testing

I am trying to formalize our development workflow, and here's the first draft. Suggestions on the process and any tweaks to optimize it are welcome; I am fairly new to setting up processes like this, so feedback would be great. P.S.: We are working on an AWS Serverless application.
Create an issue link type in JIRA - 'is tested by'. The link 'is tested by' has no function other than correctly displaying the relation when viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration test cases will be written in the same branch the developer works in. E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding the capability in the CI system to automatically move the Test case to Success when the test case passes in CI (this should be possible using the JIRA REST API; a rough sketch of such a step is shown after this list). This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
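For the optional CI automation mentioned above, here is a minimal sketch of what that step might look like, using the standard Jira REST API transitions endpoint. The base URL, credentials, issue key and transition ID are placeholders for illustration; the real transition ID can be looked up with a GET on the same endpoint.

```python
import os
import requests

# Minimal sketch: transition a Jira "Testcase" issue to Success from CI.
# The base URL, credentials, issue key and transition ID are placeholders;
# look up the real transition ID via GET /rest/api/2/issue/{key}/transitions.
JIRA_BASE_URL = os.environ["JIRA_BASE_URL"]                      # e.g. https://yourco.atlassian.net
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_API_TOKEN"])

def transition_issue(issue_key: str, transition_id: str) -> None:
    """Move a Jira issue through the given workflow transition."""
    resp = requests.post(
        f"{JIRA_BASE_URL}/rest/api/2/issue/{issue_key}/transitions",
        json={"transition": {"id": transition_id}},
        auth=AUTH,
        timeout=30,
    )
    resp.raise_for_status()

# Called from the CI pipeline after the linked tests pass, e.g.:
# transition_issue("TEST-123", "31")   # "31" = your 'Success' transition ID
```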
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1, adding the test cases. Working in the same branch will keep the QA and the developer always in sync, and should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into develop.
The feature branch will be reviewed as soon as the pull request is created by the developer, so that the review is not held up until the test cases have been written and passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.

A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states, and you may have other outcomes like Cancelled or Duplicate. You can use the Resolution field for the outcomes, or create a custom field for Success/Failure and decouple it from both the outcome and the status. Ideally you do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would let you see more specifically where the work is in the flow, and also capture how much time is spent writing tests, how long test cases wait, and how much time goes into actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed

AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, so everyone from Product to the Developer can see the testing progress of the story at a glance.
You can also create testing tasks and view the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

Related

Wanting to get rid of the Test Hub in TFS and integrate with Test Rail

Just wondering if there is a way to integrate TFS with TestRail so as to replace (i.e. get rid of entirely) the Test Hub within TFS and use TestRail to record Test Plans instead?
My concern with removing the Test Hub is whether TestRail could still reference IDs in Bugs and Stories within TFS, and vice versa.
You currently cannot remove the standard hubs in Visual Studio Team Services / TFS or replace them with something else. You can enable an extension that either adds its own functionality under that hub as a separate tab or adds another TestRail hub at the top level (if one exists), or you can write your own. Extensions currently cannot leave their sandbox to overwrite standard functionality.
There is nothing preventing any tool from keeping track of work item numbers, so as for the second part of your question, whether this would break any form of integration: that's unlikely.
If you are on TFS, you could try creating a custom process template that doesn't have the Test Case, Shared Step, Test Suite and Test Plan work item types; this will likely cripple the existing Test Hub functionality. In the on-prem version you can also customize the files on disk. I've never tried it, but you could probably hack the Test Hub away that way, although it would be a totally unsupported scenario.

Should I put my automated regression tests inside a web application?

Question
Is it worth building a web application front-end for my department's automated regression tests? I've searched quite a bit and I don't think anything like this exists. Basically, the web application would allow a user to specify a URL, expected inputs, expected outputs, and an expected return URL. On the back-end, a headless browser running on the server would test the scenario the user just defined. I've looked to see if something as simple as this exists but haven't had any luck: I've found lots of tools for programmatic operation of browser commands, but not a web front-end for testing another web application.
Background
My team has dedicated automated regression tests that the testers run on their local machines. The tests are written in Python, use some Selenium integration plugins, and take an Excel spreadsheet as input describing what to test. They are maintained by the QA department.
Problem
Nobody outside the QA team knows how extensive these regression tests are, because they exist only on individual laptops.
They have no central repository, and the dev team has no means of actively updating these tests as we build new features. We must leave it 100% up to the QA department.
The business analysts don't have access to the results of these tests. Because of all this, a lot of uncertainty exists around our automated testing, increasing reluctance to change things without instructing the QA team to perform full-scale manual regression tests...
This has led me to consider putting all of our Selenium tests in the cloud behind a user-friendly web front-end that anyone can use and access from anywhere. They could then easily create new tests using dropdown menus. Everyone, developers, testers, and business analysts, could see what's covered in a test sequence and update it as we add new features. I believe this would also make it easier to have Jenkins jobs trigger tests at timed intervals, if the web application exposed web service hooks for Jenkins... But I feel like perhaps I'm reinventing the wheel. Is what I'm proposing to build worth it?
Personally, I would not spend too much time creating a website to accept user input for building test scripts. Instead, I would spend that time creating a solid test framework and use Jenkins to trigger the tests.
You also need to consider the website's maintenance in the future. What happens if a new feature has to be added to the website? The QA/BA team will depend on the developers to add it.
I think it is better to use a keyword-driven framework, where you can write your entire test in a spreadsheet. (In my project, QA people who are not familiar with programming create test scripts with this approach.)
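As a rough illustration of the idea, here is a minimal keyword-driven runner sketched in Python with Selenium and openpyxl. The column layout (keyword | target | value) and the keyword names are invented for the example, not a prescribed format.

```python
# Minimal keyword-driven runner sketch: each spreadsheet row holds
# keyword | target | value and maps to a Selenium action. The columns
# and keyword names are made up for illustration.
from openpyxl import load_workbook
from selenium import webdriver
from selenium.webdriver.common.by import By

def run_sheet(path: str) -> None:
    driver = webdriver.Chrome()
    try:
        sheet = load_workbook(path).active
        for keyword, target, value in sheet.iter_rows(min_row=2, values_only=True):
            if keyword == "open":
                driver.get(target)
            elif keyword == "type":
                driver.find_element(By.CSS_SELECTOR, target).send_keys(value)
            elif keyword == "click":
                driver.find_element(By.CSS_SELECTOR, target).click()
            elif keyword == "assert_text":
                assert value in driver.find_element(By.CSS_SELECTOR, target).text
            else:
                raise ValueError(f"Unknown keyword: {keyword}")
    finally:
        driver.quit()

if __name__ == "__main__":
    run_sheet("regression_suite.xlsx")
```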
As Jenkins is a web-based application, anyone can trigger your automated regression tests, even the BAs (in my project, that is exactly what I have done). No technical skill is required. You can also pass parameters through Jenkins; a parameter can be anything from text to a file. So you can upload a file containing the steps to be executed to the Jenkins job, and the rest should be taken care of by your test framework.
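Triggering such a parameterized job remotely is also scriptable. Below is a sketch using Jenkins' standard buildWithParameters endpoint; the URL, job name, parameter names and credentials are placeholders, and a file parameter would need whatever upload mechanism your job's file-parameter configuration expects.

```python
import requests

# Sketch: trigger a parameterized Jenkins job remotely. URL, job name,
# parameter names and credentials are placeholders.
JENKINS_URL = "https://jenkins.example.com"
JOB = "regression-suite"
AUTH = ("automation-user", "api-token")   # Jenkins user + API token

resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
    params={"BRANCH": "develop", "SUITE": "smoke"},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()
print("Triggered; queue item:", resp.headers.get("Location"))
```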
You would definitely need a central repository; it is a must-have. You can take a look at VisualSVN Server. It is easy and free.
Keyword Driven framework using Selenium:
http://www.testautomationguru.com/keyword-driven-framework-for-localization-testing-using-selenium-webdriver/
Continuous regression & results:
http://www.testautomationguru.com/continuous-regression-testing-best-practises/
Smoke Test after each build:
http://www.testautomationguru.com/automated-smoke-test-best-practises/

Separating building and testing jobs in Jenkins

I have a build job which takes a parameter (say, which branch to build) and, when it completes, triggers a testing job (actually several jobs) which does some stuff like downloading a bunch of test data and checking that the new version works with the test data.
My problem is that I can't seem to figure out a way to show the test results sensibly. If I just use one testing job, the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion: say 6 branches and 6 different types of testing mean I need 36 testing jobs, and when I want to make a change (say, to keep more builds), I need to update all 36 by hand.
I've been looking at the Job Generator Plugin and ez-templates in the hope that I might be able to create and manage just the templates for the testing jobs and have the actual jobs created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is separating the building and testing jobs like this simply not recommended, or is there some other method for filtering a job's test results by build parameters that I haven't found yet?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't dig through all the Jenkins job results for an overview; I'd create a web page or something like that instead.
I wouldn't expect the full matrix of builds/tests; the combinations that are actually used will become clear from the use cases.
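For the "UpdateTestPage" step, one low-effort option is to pull per-branch summaries from Jenkins' JSON API and render them on that page. A sketch, assuming JUnit results are published (so the testReport endpoint exists) and with placeholder job names, URL and credentials:

```python
import requests

# Sketch: collect the latest JUnit summary for each branch's test job
# from Jenkins' JSON API. Job names, URL and credentials are placeholders,
# and /testReport assumes JUnit results are being published.
JENKINS_URL = "https://jenkins.example.com"
AUTH = ("reporter", "api-token")
TEST_JOBS = ["integrationtest-develop", "integrationtest-stable"]

def summary(job: str) -> str:
    url = f"{JENKINS_URL}/job/{job}/lastCompletedBuild/testReport/api/json"
    data = requests.get(url, auth=AUTH, timeout=30).json()
    return (f"{job}: {data['passCount']} passed, "
            f"{data['failCount']} failed, {data['skipCount']} skipped")

if __name__ == "__main__":
    for job in TEST_JOBS:
        print(summary(job))
```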

Jira as a test case execution tool

Is there a way to track test case execution & progress in Jira? I currently use the Test Management workflow, which gives me various phases like TestCaseReview, Pass, Fail, Invalid, ...
But is there a way to associate the test status with a specific run?
For instance, I would like to execute 100 tests with a specific filter for Release 1.0, then execute the same set of tests for Release 2.0 as well and track the status.
This was fairly simple in Zephyr (it gave us options to create test cycles and add tests), but sadly the plugin is not available for the OnDemand version of Jira.
So, I am looking to accomplish the same functionality with a vanilla version of Jira
Unfortunately I don't believe that what you want to do is possible in 'vanilla' JIRA.
The only practical way to do this in JIRA would be to create a 'swimlane' in JIRA for your release, create individual JIRA tickets for each test (or, given the large number of tests you have, for small groups of tests), and then move the tickets through the appropriate status values, either as vertical swimlanes or as sub-statuses within a swimlane.
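If you go the ticket-per-test route, creating the Release 2.0 copies can at least be scripted. A sketch using the Python jira client; the server URL, credentials, project key, issue type and labels below are purely illustrative:

```python
from jira import JIRA  # pip install jira

# Sketch: clone the test tickets labelled for Release 1.0 into a fresh
# set labelled for Release 2.0. Server, credentials, project key, issue
# type and label names are placeholders.
jira = JIRA(server="https://yourcompany.atlassian.net",
            basic_auth=("user@example.com", "api-token"))

templates = jira.search_issues(
    'project = QA AND issuetype = Task AND labels = "release-1.0-tests"',
    maxResults=False)

for tmpl in templates:
    jira.create_issue(fields={
        "project": {"key": "QA"},
        "issuetype": {"name": "Task"},
        "summary": f"[Release 2.0] {tmpl.fields.summary}",
        "description": tmpl.fields.description,
        "labels": ["release-2.0-tests"],
    })
```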
Sorry I can't suggest anything easier to manage.

How to verify lots of events in a reasonable way

I am new to software testing. Currently I need to test a medium-sized web application. We have just refactored our codebase and added a lot of event-logging logic to the existing code. The event-logging code writes to both the Windows Event Log and a SQL database table.
There are about 200 events. What approach should I take to test/verify this refactoring effectively and efficiently?
Thanks.
I would be tempted to implement unit tests for each of the events to make sure that, when an event occurs, the correct information is passed into your event-logging logic.
You could then trigger one event on the deployed site and verify the data is written to the database and event log, and have an acceptable level of confidence that the remaining events will be recorded correctly.
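As a rough illustration of such a per-event unit test (the module, function and field names below are hypothetical stand-ins for your real business code and logging entry point):

```python
import unittest
from unittest.mock import patch

# Hypothetical example: orders.place_order is the business code under
# test and orders.log_event is the logging entry point it should call.
import orders

class PlaceOrderEventTests(unittest.TestCase):
    @patch("orders.log_event")
    def test_order_placed_event_has_correct_fields(self, mock_log):
        orders.place_order(order_id=42, user="alice")

        # Verify the logging logic received exactly the data we expect.
        mock_log.assert_called_once_with(
            event_id="ORDER_PLACED",
            severity="INFO",
            details={"order_id": 42, "user": "alice"},
        )

if __name__ == "__main__":
    unittest.main()
```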
If unit testing isn't an option, then you will need to verify each event manually. I would alternate between checking the database and the event log, as there should be little risk of this part failing; that would mean 200 tests rather than 400 tests.
You could also partition the application into sensible sections and trigger a few events for each section to give you a reasonable level of confidence in the application.
The approach you take will really be determined by how long you have to test, what the cost would be if an event didn't get logged, and how well developed the logging logic is.
Hope this helps
I would have added tests before you did the refactoring; you don't know where you have already broken it :).
You say that it logs to the Event Viewer and the DB. I hope you have exposed the logging feature as an interface (see the sketch after this list), so that you can:
Extend it to log to some other device if needed
Mock it much more easily in tests
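A minimal sketch of that kind of interface, written in Python for consistency with the other examples here even though your stack may differ; all names are illustrative:

```python
from abc import ABC, abstractmethod

# Illustrative "logging behind an interface" sketch: calling code depends
# only on EventSink, so the Windows Event Log and SQL implementations can
# be swapped out or mocked in tests. All names are made up.
class EventSink(ABC):
    @abstractmethod
    def write(self, event_id: str, message: str) -> None: ...

class WindowsEventLogSink(EventSink):
    def write(self, event_id: str, message: str) -> None:
        ...  # call into the Windows Event Log here

class SqlTableSink(EventSink):
    def __init__(self, connection):
        self._connection = connection

    def write(self, event_id: str, message: str) -> None:
        cur = self._connection.cursor()
        cur.execute(
            "INSERT INTO event_log (event_id, message) VALUES (?, ?)",
            (event_id, message),
        )
        self._connection.commit()

class CompositeSink(EventSink):
    """Fan a single event out to every configured sink."""
    def __init__(self, *sinks: EventSink):
        self._sinks = sinks

    def write(self, event_id: str, message: str) -> None:
        for sink in self._sinks:
            sink.write(event_id, message)
```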
If you have 200 events to test, that's not going to be easy, to be honest. I don't think you can escape creating an equal number of tests for your 200 events.
I would do it this way:
Search for all the places where the logging interface is used, note the classes involved, and start with the critical paths first (that way you at least cover the critical ones).
Or you could start from the end, i.e. note down all the possible combinations of logs you are getting, perhaps pointing at stale data so that you know that if the input is the same, the output should be the same too. Then each time, regression test your new binaries against this data and you should get a similar number/level of logs.
This shouldn't be too difficult.
Pick a free automated web-testing tool like Watir (Ruby) or WatiN (.NET), or VS UI Test if you have it.
Create tests that cover the areas of the web application you expect/need to fire events, and examine the SQL DB after each test to see which events fired.
If the event stream is correct for the test, add a step to the test to verify that exactly that event stream was created in the DB.
This will give you a set of tests that will validate the eventing from any portion of your web site in a repeatable fashion.
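A sketch of one such test, using Python with Selenium and pyodbc rather than Watir/WatiN purely to match the other examples in this thread; the connection string, URL, selectors and table/column names are placeholders:

```python
import pyodbc
from selenium import webdriver
from selenium.webdriver.common.by import By

# Sketch: drive the UI, then assert that exactly the expected event stream
# was written to the SQL table. DSN, URL, selectors and table/column names
# are placeholders.
EXPECTED_EVENTS = ["USER_LOGGED_IN", "PROFILE_VIEWED"]

def events_since(cursor, marker_id):
    cursor.execute(
        "SELECT event_id FROM event_log WHERE id > ? ORDER BY id", marker_id)
    return [row.event_id for row in cursor.fetchall()]

def test_login_fires_expected_events():
    conn = pyodbc.connect("DSN=EventsDb")
    cur = conn.cursor()
    cur.execute("SELECT MAX(id) FROM event_log")
    marker = cur.fetchone()[0] or 0

    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.ID, "username").send_keys("tester")
        driver.find_element(By.ID, "password").send_keys("secret")
        driver.find_element(By.ID, "submit").click()
        driver.find_element(By.ID, "profile-link").click()
    finally:
        driver.quit()

    assert events_since(cur, marker) == EXPECTED_EVENTS
```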
The efficient & effective part of this approach is that it allows you to create only as many tests as you need to verify the app. You also do not need to recreate a unit-test approach with one test per event.
Automating the tests will allow you to re-execute them without additional effort, and this will really add up over the long haul.
This approach can also be taken with manual testing, but it will be tricky to get consistent & repeatable results, and re-testing will take nearly as long each time, since the testing uncovers defects that need to be fixed.
Note: while this will be the most effective & efficient way, it will not be exhaustive. There will likely be edge cases that get missed, but that can be said of nearly any test approach. Just add test cases until you get the coverage you need.
Hope this helps,
Chris