GitLab CI: deploy to one of x environments

We have a fairly big CI pipeline including E2E tests. We currently cannot use GitLab's dynamic environments feature, because creating an environment still requires manual work. Therefore all E2E tests run against a single "E2E stage". I would like to add more E2E stages manually.
But here is the problem:
How can I deploy to one (currently unused) E2E stage, run the tests (in a different GitLab CI job), and lock that stage against other tests for the duration?
I tried using three dedicated runners with different tags ( ["e2e", "e2e-1"], ["e2e", "e2e-2"], ["e2e", "e2e-3"] ), but I could not tell GitLab to run all jobs of a pipeline on only one of these runners (chosen randomly, or better yet, whichever is currently free). If the deployment runs on "e2e-3" and afterwards "e2e-1" is free for testing, GitLab might choose that one, which will not have the correct artifact deployed.
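One way to sketch this in `.gitlab-ci.yml` is to pass the slot name into the pipeline as a variable and use it both as a runner tag and as a `resource_group`. This is an assumption-laden sketch, not a confirmed solution: the `E2E_SLOT` variable and script names are illustrative, and it assumes GitLab 14.1+ (CI/CD variables allowed in runner `tags`) plus variable support in `resource_group`.

```yaml
stages:
  - deploy
  - test

# E2E_SLOT (e.g. "e2e-2") is set when the pipeline is triggered,
# so deploy and test land on the same runner/stage.
deploy-e2e:
  stage: deploy
  tags:
    - e2e
    - $E2E_SLOT              # requires GitLab 14.1+ (variables in tags)
  resource_group: $E2E_SLOT  # serializes jobs that target the same slot
  script:
    - ./deploy.sh "$E2E_SLOT"   # illustrative deployment script

run-e2e-tests:
  stage: test
  needs: [deploy-e2e]
  tags:
    - e2e
    - $E2E_SLOT
  resource_group: $E2E_SLOT
  script:
    - ./run-e2e-tests.sh        # illustrative test script
```

Note the caveat: `resource_group` serializes individual jobs, not whole pipelines, so another pipeline's deploy job could in principle grab the slot between this pipeline's deploy and test jobs; a strict whole-pipeline lock would still need an external locking step or one slot reserved per pipeline.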

Related

Manual Testing in Devops Pipeline

We are currently doing a traditional waterfall model, where we have manual and automation tests in SIT and UAT environments. We are moving to Agile/DevOps, and I am working on a DevOps POC. Based on my research, DevOps is suited to CI and CD, meaning testing is automated and the pipeline is automated from Dev to Production. However, when we implement it, we want automatic code deployments to the different environments but want to stop the pipeline to conduct manual QA testing and manual UAT before the code is signed off for PROD deployment. If I use Jenkins for DevOps, is it recommended to stop the pipeline for a few days until manual QA is completed and manual approval is done? How is manual testing accounted for in DevOps implementations? Any insights would be helpful.
CI and CD are engineering practices that enable teams to improve productivity.
They should be implemented step by step: first CI, then CD. So build your pipelines as you mature in your DevOps processes.
For example, leverage a Jenkins pipeline to first orchestrate the CI pipeline, in which the following are automated:
application build,
unit testing,
code coverage.
The output of this stage is a set of binaries that are deployed to a binary repository such as Nexus.
The next step after a successful CI implementation is CD: the process of auto-deploying artifacts from one environment to another. Say we need to deploy artifacts (binaries) to QA for testing. You can extend your CI pipeline to perform CD by moving artifacts from the DEV to the QA systems. Then stop there, since movement to the next environment happens only once the manual testing records are approved; progressing to the next environment is therefore manually triggered. Hence, when planning a CD pipeline, chalk out the essential steps that should be automated and then progress step by step.
Once you are ready with automated tests and tools, you complete your CD pipeline and automate the movement of artifacts through DEV, QA, NONPROD, etc.
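The progression described above can be sketched as a declarative Jenkinsfile with a manual gate. Stage names, the Maven commands, and the deployment script are illustrative assumptions; the `input` step is the standard Jenkins mechanism for pausing until a human approves.

```groovy
// Sketch of a Jenkins pipeline: CI stages, deploy to QA, then a manual gate.
pipeline {
    agent any
    stages {
        stage('Build & unit test') {
            steps {
                sh 'mvn -B clean verify'          // build, unit tests, coverage
            }
        }
        stage('Publish binaries') {
            steps {
                sh 'mvn -B deploy -DskipTests'    // push binaries to Nexus (illustrative)
            }
        }
        stage('Deploy to QA') {
            steps {
                sh './deploy.sh qa'               // illustrative deployment script
            }
        }
        stage('Approval') {
            steps {
                // Pipeline pauses here until someone confirms the manual test sign-off
                input message: 'Manual QA/UAT signed off?'
            }
        }
        stage('Deploy to NONPROD') {
            steps {
                sh './deploy.sh nonprod'
            }
        }
    }
}
```

Be aware that a bare `input` step holds an executor while waiting, which is one reason the answer below argues against blocking a pipeline for days.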
Having a pipeline blocked for days is certainly an anti-pattern. Here is one way to mitigate it:
Separate Continuous Integration (CI) and Continuous Deployment (CD) pipelines.
Have a separate process routing correct artifacts for environments (disclaimer: I'm biased towards the one we provide - https://relizahub.com, since I'm working on it; video how approvals are implemented - https://www.youtube.com/watch?v=PzdZjMby6Is)
So essentially, what happens is: you run a CI pipeline, which creates a deployment artifact. Then you have some approvals (manual and/or automated), which are recorded specifically against this artifact. Then a separate deployment pipeline picks the right artifact and does the deployment. This way all pipelines run quickly, and you don't have to deal with pipeline runs stuck for a long time.

E2E Test Automation workflow with GitLab CI/CD

I am to build a test automation system for E2E testing for a company. The product is React/Node.js based and runs in a cloud (Docker & Kubernetes). The code is stored in GitLab repositories, for which CI/CD pipelines are set up for test/lint/deployment.
I plan to use Jest for test orchestration and Selenium / Appium for the UI testing (the framework being in TypeScript), while creating a generator to test our proprietary backend interface.
My code is in a similar repository and will be containerized and uploaded to the test environment.
In my former workplaces we used TeamCity and similar tools to manage test sessions, but I cannot seem to find the perfect link between our already set-up GitLab CI/CD and the E2E testing framework.
I know it could be implemented as part of the pipeline, but to me that seems lacking (which may also be due to my inexperience).
Could you advise some tools/methods for handling test session management for system testing in such an environment?
(with a GUI where I can see the progress of all sessions and manage them: run / rerun / run on certain platforms only, etc.)

What are the advantages of running specflow selenium tests in release pipeline instead of build pipeline

I was executing specflow selenium tests in azure build pipeline which is working fine. But someone forced me to run these tests in the release pipeline instead of build pipelines by taking artifacts from the build pipeline.
I am not deploying any application to the server or any other machine. My release pipeline only runs the selenium tests.
I am wondering why should I create a release pipeline if I can do this is in build pipeline itself.
Running your Selenium tests in the build pipeline has the following disadvantages:
In most cases Selenium tests are much slower than, e.g., simple unit tests, which increases the duration of the build pipeline. But you want a fast build pipeline so you can continue with your work.
If your tests are not stable, you will break the build, which is IMHO a no-go.
But in some cases it makes sense to execute a small set of Selenium tests during the build pipeline (if not covered by other tests).
This makes sense if you have a big product or the build pipeline takes very long. You don't want to wait a few hours for a successful build, only to have all the tests in your release pipeline fail because some basic functionality does not work.
In continuous integration, the focus is on getting a good automated build out with basic build verification tests, while continuous deployment focuses heavily on testing and release management flows.
Typically you will run unit tests in your build workflow, and functional tests in your release workflow after your app is deployed (usually to a QA environment).
The official documentation also recommends running Selenium tests in the release pipeline.
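The build/release split described above can be sketched in Azure Pipelines multi-stage YAML, which is the modern analog of separate build and release pipelines. Solution, project, and artifact names here are illustrative assumptions, not anything from the question.

```yaml
# azure-pipelines.yml sketch: fast unit tests in Build,
# slow SpecFlow/Selenium tests in a later stage that consumes the artifact.
stages:
- stage: Build
  jobs:
  - job: BuildAndUnitTest
    steps:
    - script: dotnet build MySolution.sln --configuration Release
    - script: dotnet test tests/UnitTests/UnitTests.csproj   # fast tests only
    - publish: $(System.DefaultWorkingDirectory)/drop        # build output
      artifact: drop

- stage: FunctionalTests        # plays the role of the release pipeline
  dependsOn: Build
  jobs:
  - job: SeleniumSpecFlow
    steps:
    - download: current          # fetch the 'drop' artifact from the Build stage
      artifact: drop
    - script: dotnet test tests/SpecFlowTests/SpecFlowTests.csproj
```

The key design point is the same either way: the slow tests consume a published artifact instead of rebuilding, so an unstable Selenium run can never break the build itself.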

VeriFIX test automation in Jenkins

We are working with a client that uses VeriFIX to test their FIX message flow. While they have built up lots of tests in many suites, running them and collating the results is a manual process.
On the VeriFIX website it says
Incorporate tests into nightly builds using VeriFIX’s command-line script player.
but I cannot find any details on how to do it. Does anyone have any experience of running VeriFIX tests in a continuous integration server (ideally a Jenkins pipeline)?
Many thanks.
You can run VeriFIX playlists in batch mode from the command line:
"%VERIFIX_HOME%\verifixbatch\verifixbatch.exe" -version "FIX (x.y)" -playlist "myplaylist" -disablelogging "false"
If you received the user manual with your installation of veriFIX, the details of how to integrate with CI are in there.
To integrate veriFIX with Jenkins, you create batch files containing the tests and run those batch files as jobs in Jenkins.
The placement of your veriFIX installation is important. If veriFIX is on a user's machine, as is often the case, separate from the machine Jenkins resides on, there can be difficulties getting the tests to run.
With a centralised veriFIX install, things are much easier.
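Putting the two answers together, a minimal Jenkins integration might look like the sketch below: a declarative pipeline stage that runs the `verifixbatch` command from the earlier answer on a Windows agent. The agent label and playlist name are assumptions for illustration; the FIX version placeholder is kept as in the original command.

```groovy
// Sketch: run a veriFIX playlist from a Jenkins pipeline on the agent
// where veriFIX is installed (label 'verifix' is illustrative).
pipeline {
    agent { label 'verifix' }
    stages {
        stage('FIX regression') {
            steps {
                // Same batch invocation as shown above, wrapped in a bat step
                bat '"%VERIFIX_HOME%\\verifixbatch\\verifixbatch.exe" -version "FIX (x.y)" -playlist "myplaylist" -disablelogging "false"'
            }
        }
    }
}
```

Scheduling this job nightly (e.g. with a `cron` trigger) would give the "nightly builds" integration the VeriFIX website mentions; collating results would still depend on where verifixbatch writes its logs.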

Bamboo build with local and remote agents

I have a .NET Web API project with continuous integration set up using Bamboo. I am using SpecFlow tests, and some of them are tagged to run on a Bamboo remote agent because they are slow in nature. The other tests are supposed to run on multiple local agents. I have set up multiple stages in the Bamboo build plan so they run in parallel, with each stage set to run a specifically tagged test suite.
My question is: what is the general practice for setting up a Bamboo plan to run on multiple agents (local and remote), and how can I share one MSBuild output (DLLs and config) across multiple agents?
If you need to split the build and test phases, you usually have a Build stage with one job that produces an artifact containing the build output.
Then you create another stage and put several jobs in it. Those jobs can be configured to download the artifact produced by the Build stage and execute their tests against your build.
If you want some of your jobs to run on the remote agent, add a job requirement that only the remote agent can satisfy.