I have a build job which takes a parameter (say, which branch to build) that, when it completes, triggers a testing job (actually several jobs) which downloads a bunch of test data and checks that the new version works with it.
My problem is that I can't seem to find a way to show the test results sensibly. If I just use one testing job, then the test results for "stable" and "dodgy-future-branch" get mixed up, which isn't what I want. If I create a separate testing job for each branch that the build job understands, it quickly becomes unmanageable because of combinatorial explosion (say 6 branches and 6 different types of testing mean I need 36 testing jobs, and when I want to make a change, say to save more builds, I have to update all 36 by hand).
I've been looking at the Job Generator Plugin and ez-templates in the hope that I can create and manage just the templates for the testing jobs and have the actual jobs created/updated on the fly. I can't shake the feeling that this is so hard because my basic model is wrong. Is it simply that separating the building and testing jobs like this is not recommended, or is there some other method, which I haven't found yet, for filtering the test results of a job based on build parameters?
I would define a set of simple use cases:
Check in on development branch triggers build
Successful build triggers UpdateBuildPage
Successful build of development triggers IntegrationTest
Successful IntegrationTest triggers LoadTest
Successful IntegrationTest triggers UpdateTestPage
Successful LoadTest triggers UpdateTestPage
etc.
In particular, I wouldn't dig through all the Jenkins job results for an overview; instead I'd create a web page or something like that (sketched below).
I wouldn't expect the full matrix of builds and tests to be needed; the combinations that are actually used will become clear from the use cases.
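A minimal sketch of how such an overview could be generated, pulling the latest JUnit results for a few jobs from the Jenkins JSON API and printing a per-job summary. The Jenkins URL and job names are placeholders, and it assumes Node 18+ (for the built-in fetch) and that each job publishes JUnit results:

    // overview.ts - rough sketch: summarise the latest test results of a few Jenkins jobs.
    interface TestReport {
      passCount: number;
      failCount: number;
      skipCount: number;
    }

    const JENKINS_URL = "https://jenkins.example.com";                      // placeholder
    const JOBS = ["IntegrationTest-development", "LoadTest-development"];   // placeholder job names

    async function fetchReport(job: string): Promise<TestReport> {
      // "testReport/api/json" is the endpoint Jenkins exposes once the JUnit
      // publisher has archived results for the build.
      const res = await fetch(`${JENKINS_URL}/job/${job}/lastBuild/testReport/api/json`);
      if (!res.ok) throw new Error(`No test report for ${job}: ${res.status}`);
      return (await res.json()) as TestReport;
    }

    async function main() {
      for (const job of JOBS) {
        const r = await fetchReport(job);
        console.log(`${job}: ${r.passCount} passed, ${r.failCount} failed, ${r.skipCount} skipped`);
      }
    }

    main().catch((err) => { console.error(err); process.exit(1); });

From there it is a small step to render the same data as a static HTML page per branch.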
I have a manual test plan in Azure DevOps with a tree of suites that correspond to different functions in my app. Let's say it looks like this:
Now, I need one place where I can review the test results from the whole test plan run for a particular build, like acceptance tests.
There's no way to run multiple suites in one run, I guess; at least I didn't find such a possibility. Tests run suite by suite produce multiple test runs, which is understandable.
What I want to achieve is one link to all the test results for a specific build which I can pass on to the PM.
I am trying to formalize the development workflow and here's the first draft. Suggestions on the process and any tweaks for optimization are welcome. I am pretty new when it comes to setting up processes, so it would be great to have feedback. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration/e2e test cases will be written in the same branch as the developer's code. E2E test cases will be written in a separate branch, as it's a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves through the states New => Under Testing => Success/Failure.
Additionally, we could consider adding a capability in the CI system to automatically move the Test case to Success when the test case passes in CI (this should be possible using the JIRA API; see the sketch after these steps). This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
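For the optional CI automation mentioned above, a minimal sketch of what that call could look like, assuming a Node 18+ CI step (for the built-in fetch); the site URL, credentials, issue key and transition id are placeholders, and the transition id has to be looked up from your own workflow:

    // transition-testcase.ts - rough sketch: move a Jira "Testcase" issue to Success
    // from CI once the matching automated test has passed. All names are placeholders.
    const JIRA_URL = "https://your-domain.atlassian.net";   // placeholder
    const AUTH = Buffer.from(
      `${process.env.JIRA_USER}:${process.env.JIRA_API_TOKEN}`
    ).toString("base64");

    async function transitionIssue(issueKey: string, transitionId: string): Promise<void> {
      // POST /rest/api/2/issue/{key}/transitions moves the issue through its workflow.
      // List the valid transition ids first with GET /rest/api/2/issue/{key}/transitions.
      const res = await fetch(`${JIRA_URL}/rest/api/2/issue/${issueKey}/transitions`, {
        method: "POST",
        headers: {
          Authorization: `Basic ${AUTH}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ transition: { id: transitionId } }),
      });
      if (!res.ok) throw new Error(`Failed to transition ${issueKey}: ${res.status}`);
    }

    // Example: invoked by the CI job after the linked test has passed.
    transitionIssue("PROJ-123", "31").catch((err) => { console.error(err); process.exit(1); });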
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will keep the QA and the developer always in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into development.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success/Failure are outcomes rather than states; you may have other outcomes like Cancelled or Duplicate. You can use the Resolution field for the outcomes. You could also create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, then you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that, such as Ready for Testing. This would allow you to see more specifically where the work is in the flow, and also to capture how much time is spent writing tests, how long test cases wait, and how much time is spent actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed.
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equivalent to Ready for Testing).
The AIO Tests Jira panel shows the cases associated with the stories and their last execution status, giving a glimpse of the testing progress of the story, as shown below. It allows everyone, from Product to Developer, to see the testing status at a glance.
You can also create testing tasks and get a glimpse of the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.
As far as I know, TestCafe's default behaviour is to run tests in parallel.
Indeed, the browsers function accepts an array of browsers (which is cool).
What I would like to do, however, is quite different. I have fixtures based on areas of my portal (search, payment, etc.), and so I'd like to know if it's possible to run these tests from the CLI in parallel, as they are orthogonal.
The goal is, of course, to improve the execution time as the number of test cases grows.
On the other hand, I'd also like to catch failures, meaning that if a test run in parallel under a specific metadata filter fails, we would possibly like to stop the others too.
I am not using TestCafe's Docker image but our own custom one with just Firefox and Chrome installed, and we launch the tests in headless mode.
As a last point, it would be great if we could run these scenarios/metadata in parallel but somehow gather the reports together at the end of the test suite.
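For context, the area-based fixtures I mention are tagged with TestCafe metadata roughly like this (the names and URL are only illustrative):

    // search.test.ts - illustrative only: an area-tagged fixture that can later be
    // selected with --fixture-meta on the CLI or with a programmatic filter.
    import { Selector } from "testcafe";

    fixture("Search")
      .page("https://portal.example.com/search")    // placeholder URL
      .meta({ area: "search" });                    // metadata used for filtering

    test("search returns results", async (t) => {
      await t
        .typeText(Selector("#query"), "invoice")
        .click(Selector("#submit"))
        .expect(Selector(".result").count).gt(0);
    });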
I understand the question is not easy, especially because it involves both TestCafe and GitLab CI, but probably someone else has faced this problem too.
Thank you
If I understand you correctly, the behavior you described can be achieved by dividing the test execution among multiple CI jobs. For example, each CI job can test a particular area of your portal. For that, run TestCafe with your fixture/test metadata specified. Also, most CI systems allow you to cancel all other jobs in a pipeline if one of the jobs fails (unfortunately, GitLab hasn't released this feature yet).
On the other hand, you can use TestCafe's programmatic API: create multiple TestCafe runners, each running the desired subset of tests. However, at the end of the test execution, you'll need to merge the generated reports into one report manually. Check this answer to get an idea of how to create multiple runners.
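A rough sketch of that approach: each CI job could run a script like the one below with its area name as an argument, filtering tests by fixture metadata and writing a per-area JSON report that can be merged with the other areas' reports afterwards. The test path, browsers, the 'area' metadata key, and the existing reports/ directory are assumptions:

    // run-area.ts - rough sketch: run only the fixtures of one portal area and
    // write a JSON report for later merging. Assumes a reports/ directory exists.
    import createTestCafe from "testcafe";

    async function main() {
      const area = process.argv[2];                   // e.g. "search" or "payment"
      if (!area) throw new Error("Usage: node run-area.js <area>");

      const testcafe = await createTestCafe();
      try {
        const failed = await testcafe
          .createRunner()
          .src(["tests/"])                            // placeholder test path
          .browsers(["chrome:headless", "firefox:headless"])
          .filter((_test, _fixture, _path, _testMeta, fixtureMeta) => fixtureMeta.area === area)
          .reporter("json", `reports/${area}.json`)   // one report file per area
          .run();                                     // resolves to the number of failed tests

        process.exitCode = failed ? 1 : 0;            // non-zero exit lets the CI job fail the pipeline
      } finally {
        await testcafe.close();
      }
    }

    main().catch((err) => { console.error(err); process.exit(1); });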
Is there a way to track test case execution and progress in Jira? I currently use the Test Management workflow, which provides me with various phases like TestCaseReview, Pass, Fail, Invalid, ...
But is there a way to associate the test status with a specific run?
For instance, I would like to execute 100 tests with a specific filter for Release 1.0,
then execute the same set of tests for Release 2.0 as well and track the status.
This was fairly simple in Zephyr (it provided us options to create test cycles and add tests). But sadly, the plugin is not available for the OnDemand version of Jira.
So, I am looking to accomplish the same functionality with a vanilla version of Jira.
Unfortunately I don't believe that what you want to do is possible in 'vanilla' JIRA.
The only possible way to do this in JIRA would be to create a 'swimlane' in JIRA for your release, then create individual JIRA tickets for each test (or, given the large number of tests you have, for small groups of tests), and then migrate the tickets through appropriate status values, either as vertical swimlanes or as sub-statuses within a swimlane.
Sorry I can't suggest anything easier to manage.
I'm trying to configure a set of build configurations in TeamCity 6, and am trying to model a specific requirement in the cleanest possible way that TeamCity enables.
I have a set of acceptance tests (around 4-8 suites of tests grouped by the functional area of the system they pertain to) that I wish to run in parallel (I'll model them as build configurations so they can be distributed across a set of agents).
From my initial research, it seems that having an AcceptanceTests meta-build config that pulls in the set of individual acceptance test configs via snapshot dependencies should do the trick. Then all I have to do is say that my Commit build config should trigger AcceptanceTests and they'll all get pulled in. So, let's say I also have AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC.
So far, so good (I know I could also turn it around the other way and have the Commit config trigger AcceptanceSuiteA, AcceptanceSuiteB and AcceptanceSuiteC - the problem there is that I'd need to manually aggregate the results to determine the overall success of the acceptance tests as a whole).
The complicating bit is that while AcceptanceSuiteC just needs some Commit artifacts and can then live on its own, AcceptanceSuiteA and AcceptanceSuiteB need to:
DeploySite (let's say it takes 2 minutes and I can't afford to spin up a completely isolated one just for this run)
Run tests against the deployed site
The problem is that I need to be able to ensure that:
the website only gets configured once
The website does not get clobbered while the two suites are running
If I set up DeploySite as a build config and have AcceptanceSuiteA and AcceptanceSuiteB pull it in as a snapshot dependency, AFAICT:
a subsequent or parallel run of AcceptanceSuiteB could trigger another DeploySite which would clobber the deployment that AcceptanceSuiteA and/or AcceptanceSuiteB are in the middle of using.
While I can say "Limit the number of simultaneously running builds" to force only one to happen at a time, I need one at a time and none while the dependent pieces are still running.
Is there a way in TeamCity to model such a hierarchy?
EDIT: Ideas:
A crap solution is that DeploySite could set an 'in use' flag marker and then have the AcceptanceTests config clear that flag [after AcceptanceSuiteA and AcceptanceSuiteB have completed]. The problem then becomes one of having the next DeploySite down the pipeline wait until said gate has been opened again (doing a blocking wait within the build doesn't feel right - I want it to be flagged as 'not yet started' rather than looking like it's taking a long time to do something). However, this sort of "set a flag over here and have this bit check it" is the sort of mutable state / flakiness smell I'm trying to get away from.
EDIT 2: If I could programmatically alter the agent configuration, I could set Agent Requirements to require InUse=false, then set the flag when a deploy starts and clear it after the tests have run.
It seems you should go look on the JetBrains DevNet and YouTrack tracker first, and remember to use the magic word "clobber" in your search.
Then you install groovy-plug and use the StartBuildPrecondition facility:
To use the feature, add system.locks.readLock. or system.locks.writeLock. property to the build configuration.
The build with writeLock will only start when there are no builds running with read or write locks of the same name.
The build with readLock will only start when there are no builds running with write lock of the same name.
You can use those locks to manage the fact that the dependent configs 'read' and the DeploySite config 'writes' the shared item.
(This is not a fully productised solution, hence the tracker item remains open.)
EDIT: And I still don't know whether the lock should go under Build Parameters | System Properties, or what the exact name format should be - is it locks.writeLock.MYLOCKNAME (i.e., showing up in the config with the reference syntax %system.locks.writeLock.MYLOCKNAME%)?
Other puzzlers are: how does one manage giving read access to builds triggered by the completion of a writeLock build - does the lock get dropped until the next one picks it up (which would allow another writer in), or is it necessary to have something queue up the parent and child dependency at the same time?