I can hard-code shell scripts in a .yml file so that the GitLab runner executes them whenever someone pushes to the Git repo.
I would like to explore whether the .yml file can get data from an external source (e.g. a MySQL database) and use loops to generate the shell scripts,
something like this:
// this is a .yml file (pseudocode)
get some data
foreach ($data as $datum) {
deploy_dev:
  stage: deploy
  script:
    - echo "Deploy to server"
    - eval $(ssh-agent)
    - ssh-add ~/.ssh/key.pem
    - ssh -p22 root@xxxx "do something"
  environment:
    name: dev
    url: http://xxxx
  only:
    - dev
}
If it is possible, then I can update db data -> .yml -> dynamic stages for CI/CD.
Thanks.
Why don't you write a script (in any language) that does what you want and run just that script inside your CI? You can pass ENV vars to it if you need data from the pipeline.
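As a sketch of that idea (the data source and job fields are hypothetical), a small POSIX shell generator can emit one deploy job per row of data read from stdin; in practice the rows could come from something like mysql -N -e "SELECT name, url FROM envs":

```shell
# generate_pipeline: read "name url" pairs from stdin and emit one
# GitLab CI deploy job per row.
generate_pipeline() {
  while read -r env_name env_url; do
    cat <<EOF
deploy_${env_name}:
  stage: deploy
  script:
    - echo "Deploy to ${env_url}"
  environment:
    name: ${env_name}
    url: ${env_url}
EOF
  done
}
```

A job in an early stage can run this generator, publish the output as an artifact, and a trigger job can include that artifact as a dynamic child pipeline; updating the database rows then changes the generated jobs without touching the hand-written .yml.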
I have just begun implementing CI jobs with gitlab-ci and I'm trying to create a job template. Basically the jobs use the same image, tags, and script, where I use variables:
.job_e2e_template: &job_e2e
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner
  script:
    - selenium-side-runner -c "browserName=$JOB_BROWSER" --server http://${SE_EVENT_BUS_HOST}:${SELENIUM_HUB_PORT}/wd/hub --output-directory docker/selenium/out_$FOLDER_POSTFIX docker/selenium/tests/*.side;
And here is one of the jobs using this anchor:
test-chrome:
  <<: *job_e2e
  variables:
    JOB_BROWSER: "chrome"
    FOLDER_POSTFIX: "chrome"
  services:
    - selenium-hub
    - node-chrome
  artifacts:
    paths:
      - tests/
      - out_chrome/
I'd like this template to be more generic, and I was wondering whether I could also use variables in the services and artifacts sections, so I could add a few more lines to my template like this:
  services:
    - selenium-hub
    - node-$JOB_BROWSER
  artifacts:
    paths:
      - tests/
      - out_$JOB_BROWSER/
However, I cannot find any example of this, and the docs only mention using variables in scripts. I know that variables act like environment variables for jobs, but I'm not sure whether they can be used for other purposes.
Any suggestions?
Short answer: yes, you can. As described in this blog post, GitLab does a deep merge based on the keys.
You can see what your merged pipeline file looks like under CI/CD -> Editor -> View merged YAML.
If you want to modularize your pipeline even further, I would recommend using include instead of YAML anchors, so you can reuse your templates across different pipelines.
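As a minimal sketch of the include approach (the file name is hypothetical), the template lives in its own file and jobs pull it in with extends:

```yaml
# e2e-template.yml (hypothetical file name)
.job_e2e_template:
  stage: e2e-test
  tags:
    - test
  image: my_image_repo/siderunner

# .gitlab-ci.yml
include:
  - local: e2e-template.yml

test-chrome:
  extends: .job_e2e_template
  variables:
    JOB_BROWSER: "chrome"
```

Unlike anchors, included templates can also be referenced from other projects' pipelines.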
I am working on a deployment with Bitbucket and have some trouble using the variables. In Bitbucket I set, under
Repository Settings / Deployments / Staging:
SOME_VAR = foobar
The bitbucket-pipelines.yml looks like this:
...
definitions:
  steps:
    - step: &build-some-project
        name: 'Build Project'
        script:
          - echo "\$test='$SOME_VAR';" >> file.php
...
I am not able to pass the value of the variable into the file via echo in the script part, but why? I also put it in double quotes (like echo "\$test='"$SOME_VAR"';"), but the result is always $test='';, an empty string. The expected result is $test='foobar';. Does anybody know how to get this working? The variable is not a secured variable.
// edit
Repository variables do work. But I need the same variable with different values for each environment, and repository variables are accessible to any user with permission to push to the repo.
To make the script resolve the variable from the deployment environment, specify the deployment type in the step:
definitions:
  steps:
    - step: &build-some-project
        name: 'Build Project'
        deployment: Staging
        script:
          - echo "\$test='$SOME_VAR';" >> file.php
How do I set GitLab CI variables through a script, not just in the "variables" section of .gitlab-ci.yml? I want to set a variable in one job and use it in a different job.
There is currently no way in GitLab to pass environment variables between stages or jobs.
But there is a request for that: https://gitlab.com/gitlab-org/gitlab/-/issues/22638
Current workaround is to use artifacts - basically pass files.
We had a similar use case: get the Java app version from pom.xml and pass it to various jobs later in the pipeline.
How we did it in .gitlab-ci.yml:
stages:
  - prepare
  - package

variables:
  VARIABLES_FILE: ./variables.txt  # "." is required for images that have sh, not bash

get-version:
  stage: prepare
  script:
    - APP_VERSION=...
    - echo "export APP_VERSION=$APP_VERSION" > $VARIABLES_FILE
  artifacts:
    paths:
      - $VARIABLES_FILE

package:
  stage: package
  script:
    - source $VARIABLES_FILE
    - echo "Use env var APP_VERSION here as you like ..."
If you run a script you can set an environment variable:
export MY_VAR=the-value
Once the environment variable is set, it persists in the current shell environment.
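A quick illustration of that scope in plain POSIX shell:

```shell
export MY_VAR=the-value

# Child processes of this shell inherit the exported variable...
sh -c 'echo "child sees: $MY_VAR"'

# ...but a separate shell started later (e.g. another CI job) starts
# with a fresh environment and does not see it, which is why a file
# passed as an artifact is needed to cross job boundaries.
```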
Now for why you do not want to do that.
A tool like GitLab CI is meant to achieve repeatability in your artifacts. Consistency is what matters here. What happens if a second job has to pick up a variable from the first? Then you have multiple paths!
# CI is a sequence
first -> second -> third -> fourth -> ...

# not a graph
first -> second A -> third ...
      \-> second B -/
How did you get to third? If you have to debug third, which path do you test? If the build in third is broken, who is responsible: second A or second B?
If you need a variable, use it now, not later in another job or script. Whenever you want to write a longer sequence of commands, make it a script and execute the script!
You can use either artifacts or cache to achieve this; see the official documentation for more information on how they differ:
https://docs.gitlab.com/ee/ci/caching/#how-cache-is-different-from-artifacts
I'm trying to implement automation inside my GitLab project.
In order to perform a security scan, I would like to use ZAP to go through all the URLs present in the
project and scan them. It's clearly not possible to pass all the URLs manually, so I'm trying to find a way to make the test as automated as possible.
The problem is: how do I reach all the URLs present in the application?
I thought one way could be to pass them as "variables" in the YML file and use them as parameters in the ZAP command, something like the snippet below.
Is this a reasonable solution? Is there any other way to perform an automated scan inside a repository (without passing the URLs manually)?
Thanks
variables:
  OWASP_CONTAINER: $APP_NAME-$BUILD_ID-OWASP
  OWASP_IMAGE: "owasp/zap2docker-stable"
  OWASP_REPORT_DIR: "owasp-data"
  ZAP_API_PORT: "8090"
  PENTEST_IP: 'application:8080'

run penetration tests:
  stage: pen-tests
  image: docker:stable
  script:
    - docker exec $OWASP_CONTAINER zap-cli -v -p $ZAP_API_PORT active-scan http://$PENTEST_IP/html
You need to turn on a feature flag (FF_NETWORK_PER_BUILD) to enable a network per build. Then services can also reach each other (available since GitLab Runner 12.9). For more information see: https://docs.gitlab.com/runner/executors/docker.html#networking
Working example owasp zap job in GitLab CI:
owasp-zap:
  variables:
    FF_NETWORK_PER_BUILD: 1
  image: maven
  services:
    - selenium/standalone-chrome
    - name: owasp/zap2docker-weekly
      entrypoint: ['zap.sh', '-daemon', '-host', '0.0.0.0', '-port', '8080',
                   '-config', 'api.addrs.addr.name=.*', '-config', 'api.addrs.addr.regex=true', '-config', 'api.key=1234567890']
  script:
    - sleep 5
    - mvn clean test -Dbrowser=chrome -Dgrid_url=http://selenium-standalone-chrome:4444/wd/hub -Dproxy=http://owasp-zap2docker-weekly:8080
    - curl http://owasp-zap2docker-weekly:8080/OTHER/core/other/htmlreport/?apikey=1234567890 -o report.html
  artifacts:
    paths:
      - report.html
In a Jenkins job I need to execute shell commands on the Jenkins machine. I use the Execute shell step, where I get a value that then has to be used in another step, Execute shell script on remote host using SSH. I can't find a way to pass that value...
Here is my Jenkins job configuration:
I appreciate any help.
Content of the file which contains the script:
echo "VERSION=`/data/...`"> env.properties
Path of the properties file:
env.properties
In the shell script using SSH:
echo "... $VERSION"
Something like this works for the build; maybe it works here too.
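A minimal sketch of the hand-off (the version value is hypothetical; in the real job it comes from the /data/... command): the first Execute shell step writes a properties file, and a later step sources it before using the variable.

```shell
# First "Execute shell" step: capture the value into a properties file.
VERSION="1.2.3"  # hypothetical; replace with the real /data/... command
echo "VERSION=${VERSION}" > env.properties

# Later step (after the file has been transferred or injected):
# load the file and use the variable.
. ./env.properties
echo "Deploying version $VERSION"
```

With the EnvInject plugin, pointing "Properties File Path" at env.properties makes the variable available to subsequent build steps, including the SSH step.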