In my GitLab CI/CD pipeline, I have Terraform code that requires Python to be installed in order to use an external module.
When running terraform plan via GitLab pipelines, I get the following error:
module.notify_slack.module.lambda.data.aws_caller_identity.current[0]: Refreshing state...
Error: can't find external program "python3"
on .terraform/modules/notify_slack.lambda/terraform-aws-lambda-1.6.0/package.tf line 3, in data "external" "archive_prepare":
3: data "external" "archive_prepare" {
ERROR: Job failed: exit code 1
What image do I need to use that contains both Terraform and Python? Will I need to create my own Docker image?
I know this is a bit of an old post, but I'll share my solution in case anyone else stumbles upon this problem too.
Choose an existing Python image and install Terraform manually - this seems to me to be the easiest solution if pragmatism is important to you.
This is the relevant section of my .gitlab-ci.yml file:
default:
  image: python:latest
  before_script:
    - python -V  # Display version for debugging purposes only
    - apt-get update -y
    - apt-get install unzip wget -y
    - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip
    - mv terraform /usr/local/bin/
    - terraform --version  # Display version for debugging purposes only
The TERRAFORM_VERSION environment variable was set up in the GitLab CI/CD settings; otherwise, just replace it with the specific version of Terraform you want.
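If you would rather pin it in the file itself, a minimal sketch (the version number below is only a placeholder - substitute whichever release you actually need):

variables:
  TERRAFORM_VERSION: "0.12.29"  # placeholder version, not a recommendation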
I was pleasantly surprised by the speed at which this installation takes place, since it clearly isn't an optimal way to do it - the best-performing approach would be your own custom image with all of your required dependencies pre-installed. I'll leave you to decide whether that's worth it for your purposes; nonetheless, this solution doesn't appear to be prohibitively slow.
I have been playing around with k6 performance tests on GitLab CI, and I am wondering what the best and recommended approach for the setup is.
The k6 docs and sample project define the .gitlab-ci.yml as follows:
before_script:
  - mkdir -p .k6-bin
  - |
    if [[ ! -f .k6-bin/k6 ]]; then
      curl -O -L https://github.com/loadimpact/k6/releases/download/v0.21.1/k6-v0.21.1-linux64.tar.gz;
      tar -xvzf k6-v0.21.1-linux64.tar.gz;
      mv k6-v0.21.1-linux64/k6 .k6-bin/k6;
    fi

cache:
  key: k6-bin
  paths:
    - .k6-bin

loadtest:
  stage: test
  script: .k6-bin/k6 run -o cloud loadtests/main.js
I found this quite verbose, especially when you consider that a prebuilt Docker image is made available. The above approach also requires manual updates when new versions are released, and doesn't seem as clean as the following configuration, which I am currently using:
loadtest:
  stage: test
  image:
    name: loadimpact/k6:latest
    entrypoint: [""]
  script: k6 run ./loadtests/main.js
Both work exactly as expected, which is why I'm wondering whether the k6 team knows something that makes them not recommend using their Docker image?
Ah, I am one of the people on the k6 team, and in this case you are absolutely right - the Docker approach is the better one. We'll fix the documentation and the example repo - https://github.com/loadimpact/k6/issues/1196. I don't know why they advocated the other approach - it was probably an old copy-paste from another CI system that doesn't work as well with containers as GitLab CI does. Case in point, the k6 version used there is very old: v0.21.1 was released on Jun 4, 2018. Thanks for pointing this out; we'll fix the docs and example in the upcoming days, so for now stick with your gut instead of our obsolete docs!
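One extra tip if you go the image route: for reproducible runs, it is worth pinning the image tag rather than tracking latest. A minimal sketch (the tag below is only a placeholder - pin whichever release you test against):

loadtest:
  stage: test
  image:
    name: loadimpact/k6:0.26.2  # placeholder tag, not a recommendation
    entrypoint: [""]
  script: k6 run ./loadtests/main.js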
I have a very basic integration configured for GitLab CI, but it fails almost at the beginning, when it has to clone the code.
My integration is this:
image: node:latest

stages:
  - build
  - test

cache:
  paths:
    - node_modules/
    - dist/

build-prod:
  stage: build
  script:
    - npm install
    - npm run build-prod
  artifacts:
    paths:
      - node_modules/
      - dist/

test_with_karma:
  stage: test
  script: ng test
And the error that I get is this:
Running with gitlab-runner 11.7.0 (8bb608ff)
on fakehost 2eaf11ea
Using Docker executor with image node:latest ...
Pulling docker image node:latest ...
Using docker image sha256:8c67bfd7b95bdc535edc4a4144f5392b0f73efd6385fbcb47747d028d7059359 for node:latest ...
Running on runner-2eaf11ea-project-56-concurrent-0 via fakehost...
Cloning repository...
Cloning into '/builds/redacted/frontend'...
remote: You are not allowed to download code from this project.
fatal: unable to access 'https://gitlab-ci-token:xxxxxxxxxxxxxxxxxxxx@working-domain.com/redacted/frontend.git/': The requested URL returned error: 403
/bin/bash: line 65: cd: /builds/redacted/frontend: No such file or directory
ERROR: Job failed: exit code 1
What is the problem here?
Check if this is covered by gitlab-org/gitlab-ce issue 39469
YAY - it works for me. This problem seems to have multiple solutions.
The one that worked for me is #44855
To summarize: being an Administrator on GitLab does not mean you have the "access" to do whatever you want in GitLab.
The "unable to access" permission applies to the person who is logged into GitLab and running the job.
To fix the problem, the person/account running the job must be a member (Master) of the project.
This applies to private projects.
It is not necessary to make a private project public, even though that appears to fix the problem. GitLab suggests you must have HTTPS for the project to work, but you can use HTTP.
SOLUTION - add your account to the project, even if you are the Administrator.
And:
Conrad has described it correctly.
You need to have rights to the project to run a pipeline; however, as administrator, you can start any pipeline.
I've seen the case where a user who was an Admin in GitLab could push his commit from the command line, despite theoretically having no rights to the project - and the pipeline failed.
This inconsistency needs to be fixed: either an Admin user should not be able to push/start a pipeline without having rights to it, or they should automatically be granted all rights to all projects. I'd prefer the first, because it separates GitLab administration from project rights. Sometimes I prefer not having full rights, just like working as non-root under Linux.
I have a Singularity container that has been made for me (to run TensorFlow on Comet GPU nodes), but I need to modify the Keras install for my purposes.
I understand that .simg files are not editable (and that the writable .img format is deprecated), so the process of converting to an .img file, editing, and then converting back to .simg is discouraged:
sudo singularity build --writable development.img production.simg
## make changes
sudo singularity build production2.simg development.img
It seems to me the best way might be to extract the contents (say into a sandbox), edit them, and then rebuild the sandbox into an .simg image.
I know how to do the second conversion (singularity build new-sif sandbox), but how can I do the first?
I have tried the following, but the command never finishes:
sudo singularity build tf_gpu tensorflow-gpu.simg
WARNING: Authentication token file not found : Only pulls of public images will succeed
Build target already exists. Do you want to overwrite? [N/y] y
2018/10/12 08:39:54 bufio.Scanner: token too long
INFO: Starting build...
You can easily convert between a sandbox and a production build using the following:
sudo singularity build lolcow.sif docker://godlovedc/lolcow # pulls and builds an example container
sudo singularity build --sandbox lolcow_sandbox/ lolcow.sif # converts from container to a writable sandbox
sudo singularity build lolcow2 lolcow_sandbox/ # converts from sandbox to container
So, you can edit the sandbox and then rebuild accordingly.
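Applied to the container from the question, that would look something like this (file names taken from the question; the shell step is where you would adjust the Keras install):

sudo singularity build --sandbox tf_gpu_sandbox/ tensorflow-gpu.simg   # extract into a writable sandbox
sudo singularity shell --writable tf_gpu_sandbox/                      # make your changes inside, e.g. to keras
sudo singularity build tensorflow-gpu-patched.simg tf_gpu_sandbox/     # rebuild into a production image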
I am trying to make use of the variables: keyword documented in the GitLab CI documentation here:
FROM: https://docs.gitlab.com/ce/ci/yaml/README.html
variables
This feature requires gitlab-runner with version equal or greater than
0.5.0.
GitLab CI allows you to add to .gitlab-ci.yml variables that are set
in build environment. The variables are stored in repository and are
meant to store non-sensitive project configuration, ie. RAILS_ENV or
DATABASE_URL.
variables:
  DATABASE_URL: "postgres://postgres@postgres/my_database"
These variables can be later used in all executed commands and
scripts.
The YAML-defined variables are also set to all created service
containers, thus allowing to fine tune them.
When I attempt to use it, my builds do not run any stages and are marked successful anyway, a good sign of bad YAML. I pasted my gitlab-ci.yml contents into the LINT tool in the settings area and the output error is:
Status: syntax is incorrect
Error: variables job: unknown parameter PACKAGE_NAME
I'm using the same YAML syntax as the docs; however, it will not work. I'm unable to find any open bugs related to this. Below are my current versions and a sanitized version of my .gitlab-ci.yml.
GitLab version: 7.13.2 Omnibus
GitLab Runner version: 0.5.2
.gitlab-ci.yml (sanitized)
types:
  - test
  - build

variables:
  PACKAGE_NAME: "awesome-django-app"
  PACKAGE_SUMMARY: "Awesome webapp backend."
  MAJOR_RELEASE: "1"
  MINOR_RELEASE: "0"
  PATCH_LEVEL: "0dev"
  DEV_DB_URL: "db"
  DEV_SERVER: "pydev.example.com"
  PROD_SERVER: "pyprod.example.com"
  TEST_SERVER: "pytest.example.com"

envtest:
  type: test
  script:
    - ". ./testbuild.sh"
  tags:
    - python2.7
    - postgres
    - linux
  except:
    - tags

buildrpm:
  type: build
  script:
    - mkdir -p ~/rpmbuild/SOURCES
    - mkdir -p ~/rpmbuild/SPECS
    - mkdir -p ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL
    - cp $PACKAGE_NAME.spec ~/rpmbuild/SPECS/.
    - cp -r * ~/tarbuild/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL/.
    - cd ~/tarbuild
    - tar -zcf ~/rpmbuild/SOURCES/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL.tar.gz *
    - cd ~
    - rm -Rf ~/tarbuild
    - rpmlint -i ~/rpmbuild/SPECS/$PACKAGE_NAME.spec
    - echo $CI_BUILD_ID
    - 'rpmbuild -ba ~/rpmbuild/SPECS/$PACKAGE_NAME.spec \
       --define="_build_number $CI_BUILD_ID" \
       --define="_python_version_min 2.7" \
       --define="_version $MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL" \
       --define="_package_name $PACKAGE_NAME" \
       --define="_summary $PACKAGE_SUMMARY"'
    - scp rpmbuild/RPMS/noarch/$PACKAGE_NAME-$MAJOR_RELEASE.$MINOR_RELEASE.$PATCH_LEVEL-$CI_BUILD_ID.noarch.rpm $DEV_SERVER:~/.
  tags:
    - python2.7
    - postgres
    - linux
    - rpm
  except:
    - tags
Question:
How do I use this value properly?
Additional Info:
Removing this section from the YAML file causes everything to work, so the rest of the file is in working order. (Of course, the now-undefined variables lead to script errors...)
Even reducing the variables down to just PACKAGE_NAME for testing causes the same break.
The original answer is no longer correct.
The original documentation now stands, and there are more ways as well: variables can be created from the GUI, the API, or by being defined in .gitlab-ci.yml.
https://docs.gitlab.com/ce/ci/variables/README.html
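For reference, on a current GitLab the pattern from the question works as-is. A minimal sketch (the job name is made up for illustration):

variables:
  PACKAGE_NAME: "awesome-django-app"

print-name:
  script:
    - echo "$PACKAGE_NAME"  # prints awesome-django-app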
While it is in the documentation, I do not believe that variables were included in the latest version of GitLab (7.13). The functionality to read variables out of the YAML files was brought in by a commit by ayufan 9 days ago.
Looking at the parser on the 7.13 stable branch, you can see that this contribution did not make it in. So, assuming you're on 7.13 or earlier, I'm afraid you are out of luck. Since it is on master, I am fairly certain that we'll see it in the next release. Until then, you could either monkey-patch, do a git pull if you're using the source directly, or just rely on the project variables.
I am working on creating an automated unit testing system which will use Docker to test individual student assignments, written in Python, against a single unit test file.
I have created a website where students can upload their assignments, but I'm a little bit unsure as to how to get the automation with Docker working.
The workflow looks something like this:
A student uploads an assignment for marking
This is copied to a linux host which contains docker
The file sits here while it waits to be tested
So, say I had twenty students uploading their .py files, named with their unique student numbers, could I:
Create a Docker container which runs Ubuntu and Python
Copy the student file and unit test into this container
Run the unit test
Output the results as a text file
Copy this text file back to my webserver to display the results
Could somebody point me in the right direction to get started with this automation? I'm really just after some help on the Docker side of things, not on copying the files from my webserver to the Docker host.
Thanks.
Yes, it is possible to use Docker for that.
The Dockerfile would look like this:
FROM ubuntu
MAINTAINER xxx <user@example.org>
# update ubuntu repository
RUN DEBIAN_FRONTEND=noninteractive apt-get -y update
# install ubuntu packages
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install python python-pip
# install python requirements
RUN pip install ...
# define a mount point
VOLUME /student.py
# define command for this image
CMD ["python","/student.py"]
Now, you have to build this image with docker build -t student_test . (the trailing dot is the build context, i.e. the directory containing the Dockerfile).
To start the script and grab the output you can use:
docker run --volume /path/to/s12345.py:/student.py student_test > student_results_12345.txt
The --volume parameter is needed to mount a student script at the defined mount point. Also, you could start multiple containers at once - see the sketch below.
Note that the host path passed to --volume must be absolute; prefix a relative path with $PWD if needed.
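To batch this over all submissions, a minimal sketch (the directory layout and the file name test_assignment.py are assumptions; the test file is expected to import /student.py and call unittest.main()):

#!/bin/bash
mkdir -p results
for f in submissions/*.py; do
  id=$(basename "$f" .py)  # the file name doubles as the student number
  docker run --rm \
    --volume "$PWD/$f":/student.py \
    --volume "$PWD/test_assignment.py":/test_assignment.py \
    student_test python /test_assignment.py > "results/${id}.txt" 2>&1
done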
Check out the following project:
https://github.com/CenturyLinkLabs/buildpack-runner
It uses Heroku buildpacks to create a Docker image. Crazy, but a neat idea if you get it working.