As part of testing in the cloud, I plan to copy my test results (I happen to be using Gradle) as XML out of a short-lived container to an S3 bucket.
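For context, the copy step itself is simple. A minimal sketch, assuming Gradle's default XML output location and an illustrative bucket name and build id:

    # copy the JUnit XML produced by `gradle test` to S3, keyed by build
    aws s3 sync build/test-results/test/ "s3://my-test-results-bucket/results/${BUILD_ID}/" \
        --exclude "*" --include "*.xml"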
I am wondering what the best approach is for making use of the bucket full of test results, and whether anyone has ideas for how to get the most out of these results with the least custom code.
In the past I have handled this with XSLT to turn the XML into HTML, but that approach feels like more work than I'd like to take on.
Has anyone solved this issue for their own purposes and can share their approach?
As a side note... I could also copy the HTML reports from Gradle out to my S3 bucket, if someone has an idea about how to use the HTML instead.
Ideally, I would be able to see a sorted list of results and click on the most recent to see the test run details.
Disclaimer: I'm not a DevOps guy; I'll present a "developer's" point of view here.
The answer IMO depends on what exactly you'd like to do with all these reports.
In a nutshell, you can have text/XML reports from the test execution (I can't speak for Gradle, but in Maven the Surefire plugin creates these reports; I believe Gradle does the same).
You can also generate a good-looking site from the report information; there are tools for this as well, for example Allure Reports. Bottom line: some of the material to be shown can be generated in the build directory.
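For instance, with the Allure command-line tool, turning raw results into a browsable site is a one-liner (the paths here assume Allure's default results directory and are illustrative):

    # generate the HTML report site from the raw Allure results
    allure generate build/allure-results -o build/reports/allure --clean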
Copying the reports is as simple as copying a file, but the question is what you'd like to do with them once they're in S3.
In my understanding, it's a job for CI tools (like Jenkins) to run the tests during the build, or maybe to run a suite of automation tests against a deployed environment (in the cloud).
These CI tools run Gradle to build the project and run the tests, then they show test results per build, keeping the last N builds. They can also be integrated with tools like the aforementioned Allure and show the HTML report per build, as long as the build is stored in the CI tool.
So I'm not sure I understand why you would like to "run tests in the cloud" (from which I assume you're running the test suite when the artifact is being deployed).
As for possible ideas of what can be done with all these results:
You can create a site that shows the results across builds (again, this is already solved by CI tools like Jenkins)
You can store the results of the test runs in a database and provide some clever statistics (I personally don't see much benefit if the build is usually green and you're not in the process of recovering the project from flaky tests or something)
I'm trying to get reporting working for Karate DSL, and it's proven a challenge because my team uses CircleCI instead of Jenkins. Cucumber reporting seems to only work with Jenkins.
I've had a look at this documentation, here:
https://github.com/intuit/karate/tree/master/karate-demo#example-report
https://github.com/jenkinsci/cucumber-reports-plugin
I was wondering if there is a CircleCI-friendly equivalent you could recommend? It'd be even better if the reports could be generated in the terminal. It's going to be a hard sell to convince my team to change CI tools just so I can implement a test framework.
Thanks!
Here's what I suggest:
If you follow the demo / doc instructions, you will get the HTML reports in, say, target/cucumber-html-reports, and this is "pure Maven and Java" with no dependency on CircleCI at all so far.
Now all you need to do is somehow make these HTML reports accessible via the web. In Jenkins there is an HTML Publisher Plugin. I am not familiar with CircleCI, but a quick search suggests that there is a way to expose links to build artifacts.
Also note that when you follow the demo, Java JUnit XML reports will also be output to target/cucumber-reports. It looks like CircleCI has support for these, which means it should be able to derive the build pass/fail status and stats if configured right.
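As a hedged sketch (not from the Karate docs), a CircleCI config could publish both: store_test_results feeds the JUnit XML into CircleCI's pass/fail stats, and store_artifacts exposes the HTML report as a browsable link on the build's Artifacts tab. The paths follow the karate-demo defaults:

    version: 2.1
    jobs:
      build:
        docker:
          - image: cimg/openjdk:8.0
        steps:
          - checkout
          - run: mvn test                 # runs the Karate tests and writes both report formats
          - store_test_results:           # JUnit XML -> build status and stats in the UI
              path: target/cucumber-reports
          - store_artifacts:              # HTML reports -> browsable build artifacts
              path: target/cucumber-html-reports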
Also note that Karate now enables you to write custom reports: https://stackoverflow.com/a/66773839/143475
I'm trying to improve the testing process where I work, but without adjusting the structure.
What we have: VSTS, Selenium IDE, Testers who write test cases, but not code.
What I'd like to do is find a way to marry our TFS continuous integration with the Selenium tests we write. These are NOT the code-driven Selenium tests, but rather the IDE version where users click through and set assertions using the IDE (all are just UI tests). I know we can export those test plans as a .side file, but what I can't figure out is how to have our TFS server execute those as part of a deployment or build pipeline.
Ideally, developers/devops would setup projects in TFS from the onset with whatever solution makes sense to execute these Selenium .SIDE files, but afterwards, the testers would manage adding/modifying those tests cases elsewhere.
The real goal here is to not have testers writing code, or checking in code. Only writing these UI Selenium tests, but having TFS execute those as part of CI.
Researching this on the internet almost always leads me to something that requires testers to write code.
I don't think you can automate testing without code; at the least, you need a test project containing your automated tests.
Generally, in Azure DevOps, we use the Visual Studio Test task to run tests. This task supports the following test selection options:
Test assembly: Use this option to specify one or more test assemblies that contain your tests. You can optionally specify a filter criteria to select only specific tests.
Test plan: Use this option to run tests from your test plan that have an automated test method associated with it. To learn more about how to associate tests with a test case work item, see Associate automated tests with test cases.
Test run: Use this option when you are setting up an environment to run tests from test plans. This option should not be used when running tests in a continuous integration/continuous deployment (CI/CD) pipeline.
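For the first option, a VSTest@2 step in a YAML pipeline looks roughly like this (the assembly pattern and search folder are common defaults; adjust to your layout):

    - task: VSTest@2
      inputs:
        testSelector: 'testAssemblies'
        testAssemblyVer2: |
          **\*Tests*.dll
          !**\obj\**
        searchFolder: '$(System.DefaultWorkingDirectory)'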
This was a question that I had as well, and I think I found an imperfect but better solution.
I wasn't able to get my Selenium IDE tests running with Jenkins, but I was able to get them to run with TeamCity, another CI tool.
I created a build step like the following :
Runner type: Command Line
Working Directory: where the selenium IDE .side file is located
Run: Custom Script
With the build script content that I usually use to run my Selenium IDE tests, such as selenium-side-runner sidefile.side
I also added the following so I could output the results in JUnit or another format: --output-directory=results --output-format=junit
You can also add the following so the tests run headless (this only works in Chrome): -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"
Finally, I also use --filter to run one test suite at a time, but that is optional too. A combined invocation is sketched below.
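Putting those options together, the build step's script ends up as a single command along these lines (the file name, suite name, and output directory are examples):

    selenium-side-runner sidefile.side \
      --output-directory=results \
      --output-format=junit \
      --filter "MySuite" \
      -c "goog:chromeOptions.args=[--headless,--nogpu] browserName=chrome"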
I then used another build step to export the results to our test manager, Xray, but I think that is beyond the scope of this question.
The problem with this solution is that it still runs directly from a user's individual machine, but this can be worked around.
We are currently using Bamboo to build and test our software. Right now our build plans are just a bunch of tasks: execute this .bat, execute that .bat, etc., created with the Bamboo UI.
It happens that over months/years the build plan needs adjustments:
Parallelize jobs
Add extra jobs
Change some tasks
But this will break when we try to build an older version of the software: some scripts (called from a Bamboo task) do not exist in older versions.
At my previous employer we used Jenkins pipelines, where the content of the build and test was just a file present in the source repo.
Now with Bamboo it appears you can use Bamboo Specs. From what I read, you create a specs file, and when you run it, it creates the build plan. But I don't see how this caters for build plans changing over time (changing steps).
For example, the Bamboo Specs of develop are used to build all plan branches (e.g. pull requests). So if you want to change the build in a pull request, you first need to merge it into develop so that the Bamboo Spec of develop updates the build plan. It's not possible to test this before merging.
Question: how can you make scripted build plans in Bamboo, where every branch of develop can have a potentially different way of building?
We now have it set up as:
Buildplan 'Product A': plan branches: develop, release_x, release_y
Buildplan 'Product A PullRequest': plan branches: feature/*
Edit: supported in 7.0: https://confluence.atlassian.com/bamboo/enhanced-plan-branch-configuration-996709304.html
Old answer:
I found Atlassian documentation: https://jira.atlassian.com/browse/BAM-19620. They call it 'divergent plan branches'. It is not supported; there is a feature request.
As of 15-4-2019:
Atlassian Update – [11 April 2019]
Hi everyone,
Thank you for your votes and thoughts on this issue. We fully understand that many of you are dependent on this functionality.
After careful consideration, we've decided to prioritise [this feature] on the Bamboo roadmap. We hope to start development after our current projects are completed. Expect to hear an update on our progress within the next 6 months.
To learn more on how your suggestions are reviewed, see our updated workflow for server feature suggestions.
Kind regards,
Bamboo Team
Question: How can you make scripted build plans in Bamboo?
To make scripted build plans in Bamboo, you have to use Bamboo Specs. Since you are already familiar with Jenkins: Bamboo Specs work much like a Jenkinsfile by automating your pipeline. The benefit is that the spec lives in your source code, and the changes you make to this file automatically change your plan (pipeline) when a Bamboo build is triggered.
This is how I script build plans in bamboo:
I add my bamboo.yml file under the root of my repo. (Currently I actually use a git subtree and my Bamboo Specs live in there, but you don't have to do that; the link below walks through the simple approach.)
Link my repo to bamboo
Tell bamboo to scan for bamboo specs in the repo
Make commit and push
https://confluence.atlassian.com/bamboo/tutorial-bamboo-specs-yaml-stored-in-bitbucket-server-941616819.html
If I have to make changes to the plan in the future, I edit the bamboo specs file then commit and push.
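For a rough idea of the shape of such a file, here is a minimal sketch only: the project/plan keys and the build command are made up, and you should check the Bamboo Specs YAML reference for the exact schema:

    ---
    version: 2
    plan:
      project-key: PROD        # hypothetical project key
      key: PRODA               # hypothetical plan key
      name: Product A
    stages:
      - Build stage:
          jobs:
            - Build
    Build:
      tasks:
        - script:
            scripts:
              - ./gradlew clean build   # whatever builds your project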
I had the same problem and unfortunately had to make an unpleasant choice.
Backporting the build script
This is not necessarily feasible everywhere, but I managed to make it work somehow for my project.
The idea is: treat the build script as a C#/Java interface, or better as a contract.
As long as your branches do not introduce significant changes in how the software is built (e.g. your desktop app becomes a web app, or you switch from Ant to Gradle), you can handle this.
Assuming my application is always a web application to be released as a jar on JFrog Artifactory, I have identified the following steps that are common to all maintained versions:
Use javac to build the jar of all modules
Use gulp to build the Javascript resources
Run JUnit from the repository
Baptize 💒 the artifacts with a version number obtained with a tricky algorithm
Push the artifacts to JFrog Artifactory
So the idea is that I took my Ant build script and mostly rewrote it in order to do the same tasks on different versions of the application. I started making the changes from an older version, no longer maintained, as an exercise. In fact, my official Git branches look like release/x.y.z, where the semver is x.y.z.k and newer bugfix builds are built from the head of any x.y.z release.
So I took the release/3.10.0 branch and rewrote the Ant script. I am currently testing with a manually created Bamboo plan:
Stage: Compile
    ant clean ivy-retrieve compile jar  # builds the jar in a job
    ant gulp-install gulp-prod zip  # creates the JavaScript resources
Stage: Test
    ant run-junit
Manual Stage: Release
    ant baptize ivy-release  # tags the artifact using ${bamboo.jira.version} and pushes it to JFrog Artifactory
What I am going to do with YAML
Since the build script is the same, but specific tasks (e.g. the Java compiler version) may change between versions, I can create a single YAML script to rule them all.
I will then merge release/3.10.0 => release/3.10.1 => release/3.10.2 ... release/3.11.2, resolving the conflicts as I go.
Personal experience
Tonight I am struggling to make the JUnit tests work, as I also chose to backport my testing framework to the older version of the project. I accept that a few tests will fail, because older and non-maintained versions contain bugs. For me this is a way to prove that the system works.
Indeed, divergent plan branches are a great idea, but I am stuck with Bamboo 6 at my office.
I am having an issue getting Selenium end-to-end tests to work after an automated deployment using Visual Studio Team Services (VSTS).
I have a build working that generates a build artefact. This is triggered from VSTS but runs on an on-premises build server. I have a deployment working that deploys to an on-premises development web server. All of this works, including unit tests running after the build.
When I try to add testing after the deployment is when I run into the problem. The tests are to be run on the build server and point to the dev server website. The deployment has two phases: a deploy, and then an agent phase that runs a test assemblies task using the build agent on the build server. The problem seems to be that the test DLLs are not being included in the build artifact and so are never found when the test process runs. The deploy setup is as follows.
I have a Copy Files task before the Publish Artifact task in the build definition that seems to copy the files into the right place, but they are not included in the zip file artefact. I've looked at several websites and posts on here, but I still seem to be missing a vital bit of knowledge that will get this working.
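For reference, the pair of steps I mean would be roughly the following in YAML form (the Contents pattern is hypothetical; the point is that only what the Copy Files task stages into the artifact staging directory ends up in the published artifact):

    - task: CopyFiles@2
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)'
        Contents: '**\bin\$(BuildConfiguration)\**'   # must also match the test DLLs
        TargetFolder: '$(Build.ArtifactStagingDirectory)'
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'drop'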
Using the log did help, as recommended, so thanks for that.
I have managed to get this working. I separated the Selenium tests out into a separate solution and built that separately, creating its own build artefact. I then added this to the agent task in the deploy, and this worked. The only thing I need to sort out now is the correct search path to find the test DLLs; it's not quite as dynamic as I would like at the moment, but I can play tunes on that until I get it right.
I accept this is working around rather than solving the original problem, but needs and timescales must. I think moving the end-to-end UI tests out of the main solution makes sense anyway, but no doubt others may think differently.
Thanks for your help everyone
I am working on a DevOps project. I am running a test script with Selenium, and a log file is generated. How do I configure Jira to read the log file generated by Selenium? I want to go with an API approach but have been unable to do so. I am using Jenkins as the CI tool. Any suggestions?
Hmm. Generally it's a better approach to display your test results in Jenkins instead of creating issues for them automatically. You didn't mention what technology you use (Node.js, Java, ...), but typically you let your test runner generate a test results file that Jenkins can interpret, so it will display the results nicely. There are various Jenkins plugins that can help with that.
If you want to go a step further and still create issues automatically, you can script that in a separate step of your Jenkins job, using the Jira REST API and a scripting language of your choice. It just comes down to parsing the results file and letting your script create issues for the failures.
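For example, creating a single issue via the Jira REST API is one HTTP call. A bare-bones curl sketch (the URL, credentials, and project key are placeholders):

    curl -u user@example.com:api_token \
      -X POST \
      -H "Content-Type: application/json" \
      -d '{"fields": {"project": {"key": "PROJ"}, "summary": "Selenium test failure", "description": "Paste the relevant log excerpt here", "issuetype": {"name": "Bug"}}}' \
      https://yourcompany.atlassian.net/rest/api/2/issue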