Pipeline passed without any changes - GitLab CI/CD

I'm a newcomer to GitLab CI/CD.
This is the .gitlab-ci.yml in my project, and I want to test how it works. I have registered a GitLab runner and that is OK, but my problem is that when I add a file to my project, the pipelines run and pass successfully, yet nothing changes in my project on the server.
What is the problem? It is not a dockerized project.
image: php:7.2

stages:
  - preparation
  - building

cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

composer:
  stage: preparation
  script:
    - mkdir test5
  when: manual

pull:
  stage: building
  script:
    - mkdir test6

CI/CD pipelines like GitLab CI/CD are nothing more than virtual environments, usually Docker containers, that operate on your code just as you do on your own host system.
Your mkdir operations definitely have an effect, but the changes remain inside the virtual environment because they are never pushed back to your remote repository. For this to work, you have to set up the repository from inside the CI/CD runner and commit and push to your repository, (again) just like you do from your own host system. To execute custom commands, GitLab CI/CD has the script parameter. I am sure reading this will get you up and running.
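For illustration, here is a minimal sketch of such a job. CI_PUSH_TOKEN is an assumed CI/CD variable holding a token with write access to the repository; the other variables are predefined by GitLab:

pull:
  stage: building
  script:
    # git does not track empty directories, so add a placeholder file
    - mkdir -p test6
    - touch test6/.gitkeep
    - git config user.email "ci-bot@example.com"   # hypothetical bot identity
    - git config user.name "CI Bot"
    - git add test6
    - git commit -m "Add test6 from CI"
    # push back to the source branch; -o ci.skip avoids triggering yet another pipeline
    - git push -o ci.skip "https://ci-bot:${CI_PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_REF_NAME}"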

Related

Gitlab CI/CD test whether pm2 start gives errors

So I am trying to make a CI/CD pipeline for the first time as I have no experience with DevOps at all, and I want to add pm2 to it, so I can test whether it is able to successfully start the API without issues. This way, I can ensure that the API works when being deployed.
The one issue I have is if I do pm2 start ecosystem.config.js --env development, how will the pipeline know whether this startup is successful or not?
This is currently the YAML file that I have:
image: node:10.19.0

services:
  - mongo:3.6.8
  - pm2:3.5.1

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install

test:
  stage: test
  script:
    - npm run lint
    - pm2 start ecosystem.config.js --env development
Any ideas on how to test the pm2 startup?
Thanks in advance!
UPDATE #1:
The reason I want to test the pm2 startup is that pm2 sets several process.env variables that my API requires in order to start.
Starting the API without those variables immediately gives errors, as it needs them to initialize connections to other APIs.
So is there a workaround to be able to start the API without an issue?
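One possible way to surface this in the pipeline (a sketch, not from the thread: pm2 start can exit with code 0 even if the app crashes right after boot, so the job checks the process list afterwards and fails explicitly):

test:
  stage: test
  script:
    - npm run lint
    - pm2 start ecosystem.config.js --env development
    # give the app a moment to boot, then fail the job if no process is online
    - sleep 5
    - pm2 jlist | grep -q '"status":"online"' || (pm2 logs --lines 50 --nostream; exit 1)

The required process.env values could then be supplied as GitLab CI/CD variables so the API can initialize its connections inside the job.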

How to download, unzip, and install BrowserStack Local in .gitlab-ci.yml

I'm trying to run automated tests via BrowserStack on a private server; the tests are executed on GitLab CI. Since it is a private server, I need to force the local parameter when executing tests. When running from my local PC, the following solution works perfectly:
Downloading the binary
Running the command ./BrowserStackLocal --key --force-local
I would like to do the same in the .gitlab-ci.yml file, but I don't know exactly how to achieve this (how to download, unzip, and install the BrowserStackLocal binary).
This is my .gitlab-ci.yml file right now:
stages:
  - e2e_testing

e2e_testing:
  image: node:10.15.3
  stage: e2e_testing
  variables:
    NODE_ENV: dev
  script:
    - apt-get update
    - apt-get install unzip
    - wget http://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local
    - npm ci
    - npm run test:browserstack
  only:
    - master
  tags:
    - docker
    - build
  artifacts:
    when: always
    paths:
      - reports/
You can execute the BrowserStack Local Binary through code using the Local Bindings for Node JS. Reference: https://github.com/browserstack/browserstack-local-nodejs
When using the Local Bindings, the Binary is automatically downloaded and initiated through code itself.
You could try executing the sample test: https://github.com/browserstack/browserstack-local-nodejs/blob/master/node-example.js from your Gitlab CI.
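Alternatively, if you prefer to keep the binary approach from the question, its --daemon mode starts the tunnel in the background and returns once it is established, instead of blocking the job in the foreground. A sketch based on the original script section (only the --daemon handling is new):

script:
  - apt-get update && apt-get install -y unzip wget
  - wget https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
  - unzip -o BrowserStackLocal-linux-x64.zip
  # --daemon start returns control once the tunnel is up, so the job can continue
  - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local --daemon start
  - npm ci
  - npm run test:browserstack
  # shut the tunnel down again at the end of the job
  - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --daemon stop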

Git push executes unwanted GitLab runner

I'm new to GitLab CI. I constructed a very simple YAML file just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install

cache:
  paths:
    - node_modules/

publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is continuously started (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I didn't even configure a Docker executor; I chose the shell one. What is going on? Please support.
When you registered the new runner, did you give it a tag?
If so, and it is e.g. my_tag, modify your YAML file and append:
publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag
otherwise the build will be picked up by a shared runner.
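For reference, a sketch of how the runner could be registered with that tag in the first place (classic gitlab-runner register flags; the URL and token are placeholders):

gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor shell \
  --tag-list my_tag

Disabling shared runners for the project in its CI/CD settings would also stop the docker-auto-scale runner from picking up the jobs.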

How to run the build on a local machine with drone.io

Does the build have to run on the drone.io server? Can I run the build locally? Since developers need to pass the build before pushing code to GitHub, I am looking for a way to run the build on the developer's local machine. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: mongo-test
    commands:
      - mvn test
It includes two Docker containers. How do I run the build against this file with Drone? I looked at the Drone CLI, but it doesn't work the way I expected.
@BradRydzewski's comment is the right answer.
To run builds locally you use drone exec. You can check the docs.
Extending on his answer, you must execute the command in the root of your local repo, exactly where your .drone.yml file is. If your build relies on secrets, you need to feed these secrets through the command line using the --secret or --secrets-file option.
When running a local build, there is no cloning step. Drone will use your local git workspace and mount it in the step containers. So, if you check out some other commit/branch during the execution of the local build, you will mess things up because Drone will see those changes. So don't update your local repo while the build is running.
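A sketch of what that looks like (the secret name and file are hypothetical, and the exact value syntax may vary between Drone versions):

# run from the repository root, next to .drone.yml
drone exec

# feed an individual secret, or several from a file
drone exec --secret API_TOKEN=abc123
drone exec --secrets-file .drone.secrets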

How to avoid reinstalling dependencies for each job in Gitlab CI

I'm using Gitlab CI 8.0 with gitlab-ci-multi-runner 0.6.0. I have a .gitlab-ci.yml file similar to the following:
before_script:
  - npm install

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
This works but it means the dependencies are installed independently before each test job. For a large project with many dependencies this adds a considerable overhead.
In Jenkins I would use one job to install dependencies then TAR them up and create a build artefact which is then copied to downstream jobs. Would something similar work with Gitlab CI? Is there a recommended approach?
Update: I now recommend using artifacts with a short expire_in. This is superior to cache because it only has to write the artifact once per pipeline whereas the cache is updated after every job. Also the cache is per runner so if you run your jobs in parallel on multiple runners it's not guaranteed to be populated, unlike artifacts which are stored centrally.
Gitlab CI 8.2 adds runner caching which lets you reuse files between builds. However I've found this to be very slow.
Instead I've implemented my own caching system using a bit of shell scripting:
before_script:
  # unique hash of required dependencies
  - PACKAGE_HASH=($(md5sum package.json))
  # path to cache file
  - DEPS_CACHE=/tmp/dependencies_${PACKAGE_HASH}.tar.gz
  # Check if cache file exists and if not, create it
  - if [ -f $DEPS_CACHE ]; then
      tar zxf $DEPS_CACHE;
    else
      npm install --quiet;
      tar zcf - ./node_modules > $DEPS_CACHE;
    fi
This will run before every job in your .gitlab-ci.yml and only install your dependencies if package.json has changed or the cache file is missing (e.g. first run, or file was manually deleted). Note that if you have several runners on different servers, they will each have their own cache file.
You may want to clear out the cache file on a regular basis in order to get the latest dependencies. We do this with the following cron entry:
@daily find /tmp/dependencies_* -mtime +1 -type f -delete
EDIT: This solution was recommended in 2016. In 2021, you might consider the caching docs instead.
A better approach these days is to make use of artifacts.
In the following example, the node_modules/ directory is immediately available to the lint job once the build stage has completed successfully.
build:
  stage: build
  script:
    - npm install -q
    - npm run build
  artifacts:
    paths:
      - node_modules/
    expire_in: 1 week

lint:
  stage: test
  script:
    - npm run lint
From docs:
cache: Use for temporary storage for project dependencies. Not useful for keeping intermediate build results, like jar or apk files. Cache was designed to be used to speed up invocations of subsequent runs of a given job, by keeping things like dependencies (e.g., npm packages, Go vendor packages, etc.) so they don’t have to be re-fetched from the public internet. While the cache can be abused to pass intermediate build results between stages, there may be cases where artifacts are a better fit.
artifacts: Use for stage results that will be passed between stages. Artifacts were designed to upload some compiled/generated bits of the build, and they can be fetched by any number of concurrent Runners. They are guaranteed to be available and are there to pass data between jobs. They are also exposed to be downloaded from the UI. Artifacts can only exist in directories relative to the build directory and specifying paths which don’t comply to this rule trigger an unintuitive and illogical error message (an enhancement is discussed at https://gitlab.com/gitlab-org/gitlab-ce/issues/15530 ). Artifacts need to be uploaded to the GitLab instance (not only the GitLab runner) before the next stage job(s) can start, so you need to evaluate carefully whether your bandwidth allows you to profit from parallelization with stages and shared artifacts before investing time in changes to the setup.
So, I use cache. When I don't need to update the cache (e.g. reusing the build folder in a test job), I use policy: pull (see here).
I prefer cache because it removes the files when the pipeline finishes.
Example
image: node

stages:
  - install
  - test
  - compile

cache:
  key: modules
  paths:
    - node_modules/

install:modules:
  stage: install
  cache:
    key: modules
    paths:
      - node_modules/
  after_script:
    - node -v && npm -v
  script:
    - npm i

test:
  stage: test
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  before_script:
    - node -v && npm -v
  script:
    - npm run test

compile:
  stage: compile
  cache:
    key: modules
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run build
I think it's not recommended, because all jobs of the same stage can be executed in parallel.
First, all jobs of build are executed in parallel.
If all jobs of build succeed, the test jobs are executed in parallel.
If all jobs of test succeed, the deploy jobs are executed in parallel.
If all jobs of deploy succeed, the commit is marked as success.
If any of the previous jobs fails, the commit is marked as failed and no jobs of a further stage are executed.
I have read that here:
http://doc.gitlab.com/ci/yaml/README.html
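To illustrate the order described above, a minimal sketch (job names and commands are made up):

stages:
  - build
  - test

build:a:
  stage: build
  script:
    - echo "building a"

build:b:
  stage: build        # runs in parallel with build:a
  script:
    - echo "building b"

test:all:
  stage: test         # starts only after both build jobs succeed
  script:
    - echo "testing"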
I solved the problem with a symbolic link to a folder outside the working directory. The solution looks like this:
# .gitlab-ci.yml
before_script:
  - New-Item -ItemType SymbolicLink -Path ".\node_modules" -Target "C:\GitLab-Runner\cache\node_modules"
  - yarn

after_script:
  - (Get-Item ".\node_modules").Delete()
I know this is a rather dirty solution, but it saves a lot of time in the build process and extends the life of the storage.
GitLab introduced caching to avoid redownloading dependencies for each job.
The following Node.js example is inspired by the caching documentation.
image: node:latest

# Cache modules in between jobs
cache:
  key: $CI_COMMIT_REF_SLUG
  paths:
    - .npm/

before_script:
  - npm ci --cache .npm --prefer-offline

server_tests:
  script: mocha

client_tests:
  script: karma start karma.conf.js
Note that the example uses npm ci. This command is like npm install, but designed to be used in automated environments. You can read more about npm ci in the documentation and the command line arguments you can pass.
For further information, check Caching in GitLab CI/CD and the cache keyword reference.