I know I can include tests in the starter code and do automatic testing with Travis CI on GitHub pushes.
That said, I would prefer to keep the tests on the Travis CI side, hidden from the students. I am new to Travis CI; is this possible?
I would also like to report the test results to an external database. Is this possible?
If Travis CI cannot do it, are there any recommendations for alternative services, CI or otherwise? I would prefer something with GitHub integration.
I know I can catch the push event with a webhook, pull down the repository, and run the tests on a machine of my own. As usual, I would rather avoid building/maintaining one more system.
Since Travis CI does not give you any persistent storage, I added a custom script that pulls in the tests at build time. On completion, I post the results to Amazon S3; see Uploading Artifacts on Travis CI.
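A minimal sketch of that setup, assuming a Python course; the tests URL, token and bucket name are placeholders, and the credentials would live in encrypted Travis environment variables:

language: python
install:
  - pip install -r requirements.txt pytest awscli
before_script:
  # pull the hidden tests at build time (placeholder URL; $TESTS_TOKEN is an encrypted env var)
  - curl -H "Authorization: token $TESTS_TOKEN" -o hidden_tests.py https://example.com/hidden-tests/hidden_tests.py
script:
  - python -m pytest hidden_tests.py --junitxml=results.xml
after_script:
  # runs even if the tests fail; pushes the report to S3 (placeholder bucket, AWS keys as encrypted env vars)
  - aws s3 cp results.xml s3://my-results-bucket/$TRAVIS_REPO_SLUG/$TRAVIS_BUILD_NUMBER.xml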
We have a pretty big CI pipeline including E2E tests. We currently cannot use the GitLab feature of deploying to dynamic environments, because creating an environment still requires manual work. Therefore all E2E tests run against a single "E2E stage". I would like to add more E2E stages manually.
But here is the problem:
How can I deploy to one (currently unused) E2E stage, run the tests (in a different GitLab CI job), and lock that stage against other tests during that time?
I tried using three special runners with different tags ( ["e2e", "e2e-1"], ["e2e", "e2e-2"], ["e2e", "e2e-3"] ), but I could not tell GitLab to run all jobs of a pipeline on just one of these runners (chosen randomly, or better, whichever one is currently free). If I start the deployment on "e2e-3" and afterwards "e2e-1" is free for testing, it might pick that one, which will not have the correct artifact deployed. The tag-based attempt is sketched below.
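For reference, the tag-based attempt looks roughly like this (script names are placeholders); nothing here ties the test job to the same runner that the deploy job used:

stages:
  - deploy-e2e
  - e2e-test

deploy_to_e2e:
  stage: deploy-e2e
  tags:
    - e2e                  # any of the e2e-1 / e2e-2 / e2e-3 runners may pick this job up
  script:
    - ./deploy-to-e2e.sh   # placeholder deploy script

e2e_tests:
  stage: e2e-test
  tags:
    - e2e                  # may be picked up by a *different* e2e runner than the deploy job
  script:
    - ./run-e2e-tests.sh   # placeholder test script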
We are currently following a traditional waterfall model, with manual and automated tests in SIT and UAT environments. We are moving to Agile/DevOps, and I am working on a DevOps POC. Based on my research, DevOps is suited to CI and CD, meaning testing is automated and the pipeline is automated from Dev to Production. However, when we implement it, we want to do automatic code deployments to the different environments but stop the pipeline to conduct manual QA testing and manual UAT before the code is signed off for PROD deployment. If I use Jenkins, is it recommended to stop the pipeline for a few days until manual QA is completed and manual approval is given? How is manual testing accounted for in DevOps implementations? Any insights would be helpful.
CI and CD are engineering practices that enable teams to improve productivity.
They should be implemented step by step: first implement CI, then CD. Build your pipelines as you mature in your DevOps processes.
For example, leverage a Jenkins pipeline to first orchestrate the CI pipeline, in which the following is automated:
application build
unit testing
code coverage
The output of this stage is a set of binaries that are deployed to a binary repository like Nexus.
The next step after successful implementation of CI is CD: the process of auto-deploying artifacts from one environment to another. Say we need to deploy artifacts (binaries) to QA for testing. You can extend your CI pipeline to perform CD by moving artifacts from the DEV to the QA systems. Then stop there, since movement to the next environment should happen only when the manual testing records are approved. This means progressing to the next environment will be manually triggered. Hence, while planning to build a CD pipeline, chalk out the essential steps that should be automated and then progress step by step.
Once you are ready with automated tests and tools, you can complete your CD pipeline and automate the movement of artifacts from DEV to QA to NONPROD, and so on.
Having a pipeline blocked for days is certainly an anti-pattern. Here is one way to mitigate it -
Separate Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
Have a separate process routing the correct artifacts to environments (disclaimer: I'm biased towards the one we provide, https://relizahub.com, since I'm working on it; here is a video of how approvals are implemented: https://www.youtube.com/watch?v=PzdZjMby6Is)
So essentially, what happens is: you run a CI pipeline, which creates a deployment artifact. Then you have some approvals (manual and/or automated) which are recorded on this specific artifact. Then you have a separate deployment pipeline which picks the right artifact and does the deployment. This way all pipelines run quickly and you don't have to deal with pipeline runs that are stuck for a long time.
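As a generic illustration of that split in a YAML-based CI such as GitLab CI (job names and scripts are placeholders, and this is not tied to any tool mentioned above): the step towards the next environment can be a manual job, so the pipeline itself finishes quickly instead of sitting blocked while QA/UAT happens.

stages:
  - build
  - deploy-qa
  - deploy-prod

build:
  stage: build
  script:
    - ./build.sh          # placeholder: produces the deployment artifact
  artifacts:
    paths:
      - dist/

deploy_qa:
  stage: deploy-qa
  script:
    - ./deploy.sh qa      # placeholder deploy script

deploy_prod:
  stage: deploy-prod
  when: manual            # waits for a human trigger; the pipeline run is not left hanging
  script:
    - ./deploy.sh prod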
We are currently using Bamboo to build and test our software. Right now our build plans are just a bunch of tasks: execute this .bat, execute that .bat, etc., created with the Bamboo UI.
It happens that over months/years the build plan needs adjustments:
Parallelize jobs
Add extra jobs
Change some tasks
But this breaks when we try to build an older version of the software: some scripts (called from a Bamboo task) do not exist in the older versions.
At my previous employer we used Jenkins pipelines, where the build and test definition was just a file in the source repo.
Now, with Bamboo, it appears you can use Bamboo Specs. From what I read, you create a specs file and, when you run it, it creates the build plan. But I don't see how this caters for build plans that change over time (changing steps).
For example, the Bamboo Specs of develop are used to build all plan branches (e.g. pull requests). So if you want to change the build of a pull request, you first need to merge the change into develop so that the Bamboo Specs of develop update the build plan. It is not possible to test this before merging.
Question: How can you make scripted build plans in Bamboo, where every branch off develop can potentially build in a different way?
We now have it set up as:
Buildplan 'Product A': plan branches: develop, release_x, release_y
Buildplan 'Product A PullRequest': plan branches: feature/*
Edit: supported in 7.0: https://confluence.atlassian.com/bamboo/enhanced-plan-branch-configuration-996709304.html
Old answer:
I found the Atlassian issue for this: https://jira.atlassian.com/browse/BAM-19620. They call it 'divergent plan branches'. It is not supported yet; there is a feature request.
As of 15-4-2019:
Atlassian Update – [11 April 2019]
Hi everyone,
Thank you for your votes and thoughts on this issue. We fully understand that many of you are dependent on this functionality.
After careful consideration, we've decided to prioritise [this feature] on Bamboo roadmap. We hope to start development after our current projects are completed. Expect to hear an update on our progress within the next 6 months.
To learn more on how your suggestions are reviewed, see our updated workflow for server feature suggestions.
Kind regards,
Bamboo Team
Question: How can you make scripted build plans in Bamboo?
To make scripted build plans in Bamboo, you have to use Bamboo Specs. Since you are already familiar with Jenkins: Bamboo Specs work much like a Jenkinsfile by automating your pipeline. The benefit is that the specs live in your source code, and the changes you make to that file automatically change your plan (pipeline) when a Bamboo build is triggered.
This is how I script build plans in Bamboo:
Add a bamboo.yml file under the root of the repo. (Currently I use a git subtree and my Bamboo Specs live there, but you don't have to do this; the link below describes the simple approach.)
Link the repo to Bamboo.
Tell Bamboo to scan the repo for Bamboo Specs.
Commit and push.
https://confluence.atlassian.com/bamboo/tutorial-bamboo-specs-yaml-stored-in-bitbucket-server-941616819.html
If I have to make changes to the plan in the future, I edit the Bamboo Specs file, then commit and push.
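For illustration, a minimal bamboo.yml along the lines of the linked tutorial could look roughly like this sketch (project key, plan key and the build command are placeholders; check the tutorial for the exact schema of your Bamboo version):

---
version: 2
plan:
  project-key: PROJ       # placeholder project key
  key: BUILD              # placeholder plan key
  name: Product build
stages:
  - Build and test:
      jobs:
        - Default job
Default job:
  tasks:
    - script:
        - mvn clean verify   # placeholder build command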
I had the same problem and unfortunately had to make an unpleasant choice.
Backporting the build script
This is not necessarily feasible everywhere, but I managed to make it work somehow for my project.
The idea is: treat the build script as you would a C#/Java interface, or better, as a contract.
As long as your branches do not introduce significant changes in how the software is built (e.g. your desktop app becomes a web app, or you switch from Ant to Gradle), you can handle this.
Assuming my application is always a web application to be released as a jar on JFrog Artifactory, I have identified the following steps that are common to all maintained versions:
Use javac to build the jar of all modules
Use gulp to build the Javascript resources
Run JUnit from the repository
Baptize 💒 the artifacts with a version number obtained with a tricky algorithm
Push the artifacts to JFrog Artifactory
So the idea is that I took my Ant build script and mostly rewrote it so that it performs the same tasks on different versions of the application. I started making the changes from an older version, no longer maintained, as an exercise. In fact, my official Git branches look like release/x.y.z, where the semver is x.y.z.k and newer bugfix builds are built from the head of each x.y.z release.
So I took the release/3.10.0 branch and rewrote the Ant script. I am currently testing it with a manually created Bamboo plan:
Stage: Compile
ant clean ivy-retrieve compile jar #builds the jar in a job
ant gulp-install gulp-prod zip #creates javascript resources
Stage: Test
ant run-junit
Manual Stage: Release
ant baptize ivy-release #tags the artifact using ${bamboo.jira.version} and pushes to JFrog Artifactory
What I am going to do with Yaml
Since the build script is the same, but specific tasks (e.g. the Java compiler version) may change between versions, I can create a single YAML spec that rules all the versions.
I will then merge release/3.10.0 => release/3.10.1 => release/3.10.2 ... release/3.11.2, resolving the conflicts as I go.
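A sketch of how the manually created plan above might translate into such a YAML spec (plan keys are placeholders, and the exact stage options, e.g. manual: true, should be checked against your Bamboo version):

---
version: 2
plan:
  project-key: PROJ        # placeholder
  key: WEBAPP              # placeholder
  name: Web application
stages:
  - Compile:
      jobs:
        - Build jar and resources
  - Test:
      jobs:
        - Run JUnit
  - Release:
      manual: true         # mirrors the Manual Stage above
      jobs:
        - Publish to Artifactory
Build jar and resources:
  tasks:
    - script:
        - ant clean ivy-retrieve compile jar
        - ant gulp-install gulp-prod zip
Run JUnit:
  tasks:
    - script:
        - ant run-junit
Publish to Artifactory:
  tasks:
    - script:
        - ant baptize ivy-release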
Personal experience
Tonight I am struggling to make the JUnit tests work, as I also chose to backport my testing framework to the older version of the project. I accept that a few tests will fail, because older, non-maintained versions contain bugs. For me this is a way to prove that the system works.
Indeed, divergent plan branches are a great idea, but I am forced to use Bamboo 6 at my office.
I have been trying to find out how I should execute Selenium tests (Java) using GitLab CI.
I have created an automation framework and I am able to run the Maven project via Jenkins.
I want to run the same Maven project with the GitLab CI runner.
My code is available on Git, and I just need to trigger the execution whenever a developer checks in code.
Please help me with this setup; I have been trying to find a solution but couldn't figure one out.
I suggest you read about Jenkins and GitLab hooks here: https://docs.gitlab.com/ee/integration/jenkins.html
In general, these hooks "follow" any push you perform to your GitLab repository and run the desired build on it, including pulling the latest code.
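Alternatively, if you do want the GitLab runner to execute the Maven project directly on every push, a minimal .gitlab-ci.yml could look roughly like the sketch below. The Docker image tag is an assumption (pick one matching your JDK), and Selenium additionally needs a browser/driver reachable from the runner (e.g. a headless browser in the image or a Selenium Grid service):

run_selenium_tests:
  image: maven:3.8-openjdk-11   # assumed image tag; adjust to your JDK
  stage: test
  script:
    - mvn -B test               # runs the tests via Surefire on every push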
I have several projects on GitHub, and I'd like to link them to Jenkins in order to automate testing and improve code quality.
Is there any free online way to do it?
The Git plugin for Jenkins allows you to link a Jenkins job to a Git repository (including GitHub). Then use custom build steps to run your unit tests.