Suggested Configuration for executing K6 scripts on GitLab CI - gitlab-ci

I have been playing around with K6 performance tests on GitLab CI and I am wondering what is the best and recommended approach for setup.
The K6 docs and sample project define the .gitlab-ci.yml as follows:
before_script:
  - mkdir -p .k6-bin
  - |
    if [[ ! -f .k6-bin/k6 ]]; then
      curl -O -L https://github.com/loadimpact/k6/releases/download/v0.21.1/k6-v0.21.1-linux64.tar.gz;
      tar -xvzf k6-v0.21.1-linux64.tar.gz;
      mv k6-v0.21.1-linux64/k6 .k6-bin/k6;
    fi

cache:
  key: k6-bin
  paths:
    - .k6-bin

loadtest:
  stage: test
  script: .k6-bin/k6 run -o cloud loadtests/main.js
I found this to be quite verbose, especially when you consider that a prebuilt docker image is made available. This approach would require manual updates when new versions are released, and doesn't seem as clean as the following configuration, which I am currently using:
loadtest:
  stage: test
  image:
    name: loadimpact/k6:latest
    entrypoint: [""]
  script: k6 run ./loadtests/main.js
Both work exactly as expected, which is why I'm wondering whether the K6 team knows something I don't and deliberately doesn't recommend the use of their docker image?

Ah, I am one of the people on the k6 team and in this case you are absolutely right - the docker approach is the better one. We'll fix the documentation and the example repo - https://github.com/loadimpact/k6/issues/1196. I don't know why they advocated the other approach - it was probably an old copy-paste from another CI system that doesn't work as well with containers as GitLab CI does. Case in point, the actual k6 version used is super old - v0.21.1 was released on Jun 4 2018. Thanks for pointing this out, we'll fix the docs and example in the upcoming days, so for now stick with your gut instead of our obsolete docs!

Related

gitlab CI dependency availability between stages

I have 7 stages in my pipeline. I need ruby for 3 of the stages.
I have tried two different options:
Install ruby in each of the stages that requires it
Install ruby as part of the before_script section
Using before_script wastes time installing ruby in the 4 other stages that do not need it.
Is there a way to install dependencies as part of one stage and carry them forward for the rest of the stages?
Example YAML:
image: ubuntu:21.10

before_script:
  - apt update
  - apt install -y ruby-full
  - apt install -y python3.8

stages:
  - s1
  - s2
  - s3
  - s4

s1:
  stage: s1
  script: ruby s1.rb

s2:
  stage: s2
  script: ruby s2.rb

s3:
  stage: s3
  script: python3 s3.py

s4:
  stage: s4
  script: python3 s4.py
There are a few elements here to understand. Generally, every job starts with the same fresh environment. The only exceptions to this would be files passed through artifacts: or files restored from cache: configurations. Otherwise, actions performed in one job generally have no effect on any other jobs.
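For example, here is a minimal sketch (hypothetical job and file names) of passing a file from one stage to the next via artifacts::
build_thing:
  stage: s1
  script:
    - echo "built" > result.txt   # produce a file in the first stage
  artifacts:
    paths:
      - result.txt                # saved after the job and restored in later stages

use_thing:
  stage: s2
  script:
    - cat result.txt              # available here because it was uploaded as an artifact
By default, jobs in later stages automatically download artifacts produced by jobs in earlier stages.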
Using before_script wastes time installing ruby in the 4 other stages that do not need it.
It's also important to know that before_script can be set for each job independently. If one job doesn't need it, just override the before_script: key in that job.
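For example, a sketch using the stage names from the question (assuming the global before_script from the example YAML above): an empty list overrides the global key, so this job skips the ruby install entirely.
s3:
  stage: s3
  before_script: []   # override the global before_script; nothing to install here
  script: python3 s3.py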
Anyhow. There are a few ways you might optimize your build speed with respect to dependencies:
Docker image containing your dependencies
Typically, you would just use a ruby image as your image: for jobs requiring ruby. Usually an official image from Docker Hub will work, like ruby:3.1-alpine.
some_ruby_job:
  image: "ruby:3.1-alpine"
  script: # ruby is already available by default
    - echo "hello ruby"
    - ruby -v

some_other_job:
  image: alpine:latest
  script:
    - echo "this job does not need ruby"
Making a custom docker image
If your dependencies are very complex, you may even choose to create your own docker images and push them to the project's container registry so you can use the custom image with all your dependencies as your image:.
You could even build an image in one stage and use it as the image: in subsequent stages. This example uses docker caching with --cache-from to further speed up that process.
build:
  image: docker:19.03.12
  stage: .pre
  services:
    - docker:19.03.12-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME || true
    - docker build --cache-from $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME -t $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME .
    - docker push $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME

some_ruby_job:
  stage: test
  # This is the image that was built in the previous stage!
  image: $CI_REGISTRY/group/project/image:$CI_BRANCH_NAME
  script:
    - echo "all my dependencies are here!"
    - ruby -v
Caching
To further speed things along, you may also choose to cache your ruby dependencies (say, if you install gems as part of your job).
Something like:
some_ruby_job:
  stage: one
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
  # ...
That way the vendor/ruby directory is cached, which avoids the need to download the gems again in every stage.
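Note that gems only end up in vendor/ruby if bundler is configured to install them there; a minimal sketch, assuming Bundler 2.1+ is available in the job image:
some_ruby_job:
  script:
    - bundle config set --local path 'vendor/ruby'   # install gems into the cached directory
    - bundle install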
Cache policy
You can also speed up caching behavior in subsequent stages by setting the cache policy to pull (to avoid time spent uploading the cache after the job). In other words, only one job is responsible for generating the cache, the other jobs reuse the same cache.
ruby_jobs_in_future_stages:
  cache:
    key:
      files:
        - Gemfile.lock
    paths:
      - vendor/ruby
    policy: pull # only download the cache, don't upload it

Gitlab CI/CD: missing /usr/local/bin/gitlab-runner. Uploading artifacts is disabled

So I have a quite simple gitlab-ci.yml script:
test stage:
  stage: build
  artifacts:
    paths:
      - result/
  script:
    …
So the problem is when it gets to the “Uploading artifacts for successful job”, it prints “Missing /usr/local/bin/gitlab-runner. Uploading artifacts is disabled”.
I tried changing the owner and group of the gitlab-runner file to “gitlab-runner”, and even gave it 777 permissions, but nothing helped.
Any ideas where I’m wrong?
If you are using the latest GitLab version, gitlab-runner is installed at /usr/bin/gitlab-runner, but your job is trying to use /usr/local/bin/gitlab-runner.
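If you can't reinstall or reconfigure the runner right away, one possible workaround (a sketch; adjust the paths to wherever your binary actually lives) is to symlink the binary to the path the job expects:
# run on the host where gitlab-runner is installed
ln -s /usr/bin/gitlab-runner /usr/local/bin/gitlab-runner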

Gitlab CI / CD with Terraform + Python

On my Gitlab CI / CD, I have a terraform code that requires Python installed to use an external module.
When running terraform plan via Gitlab pipelines, I get the following error:
module.notify_slack.module.lambda.data.aws_caller_identity.current[0]: Refreshing state...

Error: can't find external program "python3"

  on .terraform/modules/notify_slack.lambda/terraform-aws-lambda-1.6.0/package.tf line 3, in data "external" "archive_prepare":
   3: data "external" "archive_prepare" {

ERROR: Job failed: exit code 1
What image do I need to use that contains Terraform and Python? Will I need to create my own docker image?
I know this is a bit of an old post, but I'll share my solution in case anyone else stumbles upon this problem too.
Choose an existing python image and install terraform manually - this seems to me to be the easiest solution, if pragmatism is important to you.
This is the relevant section of my .gitlab-ci.yml file:
default:
  image: python:latest
  before_script:
    - python -V  # Display version for debugging purposes only
    - apt-get update -y
    - apt-get install unzip wget -y
    - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - mv terraform /usr/local/bin/
    - terraform --version  # Display version for debugging purposes only
The TERRAFORM_VERSION environment variable was set up in the GitLab CI/CD settings; otherwise, just change it to the specific version of Terraform you want.
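If you would rather pin the version in the file itself, the same variable can be defined at the top of the .gitlab-ci.yml (the version number here is only an example):
variables:
  TERRAFORM_VERSION: "1.0.11"   # example value; use whichever release you need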
I was pleasantly surprised at the speed at which this installation takes place, as this clearly isn't an optimal way to do it - the best performing runner will probably use your own custom image with all of your required dependencies pre-installed - I'll leave you to decide whether it's worth it for your own purposes. Nonetheless, this solution doesn't appear to be prohibitively slow.

Creating a TravisCI config for automated testing of a Go application

I would like to create a .travis.yml config that performs the following:
fetches a Go application's source code from GitHub,
installs the other required libraries with go get,
attempts to build the Go application with go build,
runs tests with go test.
I'm new to Go application testing with TravisCI, therefore I would appreciate any help or examples someone could point me to.
Add a .travis.yml to the root of your repository
Connect your GitHub account to TravisCI
Flick the switch to run builds on commits and pull requests
Here's what I'm using for some of the Gorilla toolkit repos:
language: go
sudo: false

matrix:
  include:
    - go: 1.2
    - go: 1.3
    - go: 1.4
    - go: 1.5
    - go: 1.6
    - go: tip

install:
  - go get golang.org/x/tools/cmd/vet

script:
  - go get -t -v ./...
  - diff -u <(echo -n) <(gofmt -d .)  # fail if any file needs gofmt formatting
  - go tool vet .                     # run static analysis
  - go test -v -race ./...            # run tests with the race detector
(source: https://github.com/gorilla/csrf/blob/master/.travis.yml)

How do we use the 'variables' keyword in gitlab-ci.yml?

I am trying to make use of the variables: keyword documented in the Gitlab CI Documentation here:
FROM: https://docs.gitlab.com/ce/ci/yaml/README.html
variables

This feature requires gitlab-runner with version equal or greater than 0.5.0.

GitLab CI allows you to add to .gitlab-ci.yml variables that are set in the build environment. The variables are stored in the repository and are meant to store non-sensitive project configuration, i.e. RAILS_ENV or DATABASE_URL.

variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"

These variables can be later used in all executed commands and scripts.

The YAML-defined variables are also set to all created service containers, thus allowing to fine tune them.
When I attempt to use it, my builds do not run any stages and are marked successful anyway, a good sign of bad YAML. I pasted my gitlab-ci.yml contents into the LINT tool in the settings area and the output error is:
Status: syntax is incorrect
Error: variables job: unknown parameter PACKAGE_NAME
I'm using the same YAML syntax as the docs, however it will not work. I'm unable to find any open bugs related to this. Below are my current versions and a sanitized version of my gitlab-ci.yml.
Gitlab Version: 7.13.2 Omnibus
Gitlab Runner Version: 0.5.2
gitlab-ci.yml (Sanitized)
types:
  - test
  - build

variables:
  PACKAGE_NAME: "awesome-django-app"
  PACKAGE_SUMMARY: "Awesome webapp backend."
  MAJOR_RELEASE: "1"
  MINOR_RELEASE: "0"
  PATCH_LEVEL: "0dev"
  DEV_DB_URL: "db"
  DEV_SERVER: "pydev.example.com"
  PROD_SERVER: "pyprod.example.com"
  TEST_SERVER: "pytest.example.com"

envtest:
  type: test
  script:
    - ". ./testbuild.sh"
  tags:
    - python2.7
    - postgres
    - linux
  except:
    - tags

buildrpm:
  type: build
  script:
    - mkdir -p ~/rpmbuild/SOURCES
    - mkdir -p ~/rpmbuild/SPECS
    - mkdir -p ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL
    - cp $PACKAGE_NAME.spec ~/rpmbuild/SPECS/.
    - cp -r * ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL/.
    - cd ~/tarbuild
    - tar -zcf ~/rpmbuild/SOURCES/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL.tar.gz *
    - cd ~
    - rm -Rf ~/tarbuild
    - rpmlint -i ~/rpmbuild/SPECS/$PACKAGE_NAME.spec
    - echo $CI_BUILD_ID
    - 'rpmbuild -ba ~/rpmbuild/SPECS/$PACKAGE_NAME.spec \
      --define="_build_number $CI_BUILD_ID" \
      --define="_python_version_min 2.7" \
      --define="_version $MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL" \
      --define="_package_name $PACKAGE_NAME" \
      --define="_summary $SUMMARY"'
    - scp rpmbuild/RPMS/noarch/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL-$CI_BUILD_ID.noarch.rpm $DEV_SERVER:~/.
  tags:
    - python2.7
    - postgres
    - linux
    - rpm
  except:
    - tags
Question:
How do I use this value properly?
Additional Info:
Removing this section from the YAML file causes everything to work so the rest of the file is in working order. (Of course undefined variables lead to script errors...)
Even just reducing the variables for testing down to just PACKAGE_NAME causes the same break.
The original answer is no longer correct.
The original documentation now stands, and there are more ways as well: variables can be created from the GUI, the API, or by being defined in the .gitlab-ci.yml.
https://docs.gitlab.com/ce/ci/variables/README.html
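For reference, a minimal sketch of YAML-defined variables as supported in current GitLab versions (hypothetical names and values), showing both global and job-level scope:
variables:
  RAILS_ENV: "test"              # global: available to every job

deploy_job:
  variables:
    DEPLOY_TARGET: "staging"     # job-level: available only in this job
  script:
    - echo "$RAILS_ENV -> $DEPLOY_TARGET"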
While it is in the documentation, I do not believe that variables were included in the latest version of gitlab (7.13). The functionality to read variables out of the yaml files was brought in by a commit by ayufan 9 days ago.
Looking at the parser on the 7.13 stable branch, you can see that his contribution did not make it in. So assuming you're on 7.13 or earlier, I'm afraid we are out of luck. Since it is on master, I am fairly certain that we'll see it in the next release. Until then, we could either monkey patch, do a git pull if you're using the source directly, or just rely on the project variables until the next release.