So I am building a CI/CD pipeline for the first time (I have no DevOps experience at all), and I want to add pm2 to it so I can test whether it can successfully start the API. This way I can ensure the API works when it is deployed.
The one issue I have: if I run pm2 start ecosystem.config.js --env development, how will the pipeline know whether the startup succeeded or not?
This is currently the YAML file that I have:
image: node:10.19.0

services:
  - mongo:3.6.8
  - pm2:3.5.1

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install

test:
  stage: test
  script:
    - npm run lint
    - pm2 start ecosystem.config.js --env development
Any ideas on how to test the pm2 startup?
Thanks in advance!
UPDATE #1:
The reason I want to test the pm2 startup is that pm2 sets several process.env variables that my API requires in order to start.
Starting the API without those variables immediately produces errors, as it needs them to initialize the connections to other APIs.
So is there a workaround that lets the API start without issues?
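One hedged way to make the test job fail when the app does not come up (a sketch, not a definitive recipe): pm2 start exits 0 even if the app crashes right after launch, so install pm2 inside the job rather than listing it as a service, start the app, wait briefly, and then inspect pm2 jlist. The 5-second delay and the assumption that every process should report "online" are mine, not from the original config.

```yaml
test:
  stage: test
  script:
    - npm run lint
    - npm install -g pm2                        # pm2 is a CLI tool, not a docker service image
    - pm2 start ecosystem.config.js --env development
    - sleep 5                                   # give the app time to crash on missing env vars
    # fail the job unless every pm2 process reports status "online"
    - |
      pm2 jlist | node -e '
        let d = "";
        process.stdin.on("data", c => d += c);
        process.stdin.on("end", () => {
          const bad = JSON.parse(d).filter(p => p.pm2_env.status !== "online");
          if (bad.length) {
            console.error("not online:", bad.map(p => p.name).join(", "));
            process.exit(1);
          }
        });'
```

If a process crashed, the node one-liner exits non-zero and GitLab marks the job failed; pm2 logs --nostream can be added before the exit to surface the startup errors in the job output.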
Related
I'm a newcomer to GitLab CI/CD.
This is the .gitlab-ci.yml in my project, and I want to test how it works. I have registered a GitLab runner, and that is fine, but my problem is: when I add a file to my project, the pipelines run and pass successfully, yet nothing changes in my project on the server.
What is the problem? It is not a dockerized project.
image: php:7.2

stages:
  - preparation
  - building

cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

composer:
  stage: preparation
  script:
    - mkdir test5
  when: manual

pull:
  stage: building
  script:
    - mkdir test6
CI/CD pipelines like GitLab CI/CD are nothing more than virtual environments, usually Docker containers, that operate on your code the same way you do on your own host system.
Your mkdir operations definitely have an effect, but the changes remain inside the virtual environment because they are never reflected back to your remote repository. For this to work, you have to set up Git inside the CI/CD runner and commit and push to your repository, just as you would from your own host system. To execute custom commands, GitLab CI/CD provides the script parameter. I am sure reading this will get you up and running.
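As a hedged sketch of what "commit from inside the runner" could look like (the bot identity and the PUSH_TOKEN CI/CD variable are assumptions, not part of the original config; the CI_* variables are standard GitLab predefined variables):

```yaml
pull:
  stage: building
  script:
    - mkdir test6 && touch test6/.gitkeep          # git does not track empty directories
    - git config user.email "ci-bot@example.com"   # hypothetical bot identity
    - git config user.name "CI Bot"
    - git add test6
    - git commit -m "Add test6 from CI" || echo "nothing to commit"
    # PUSH_TOKEN is an assumed project access token stored as a masked CI/CD variable
    - git push "https://ci:${PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_REF_NAME}"
```

If the actual goal is to change files on a deployment server rather than in the repository, a deploy step (e.g. rsync or ssh in the script) would be needed instead.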
My goal is to set up CI on GitLab. My project consists of 5 different services; the details are unimportant. I have tests which I currently run locally: npm run dev in one terminal and npm run test in a second, while in other terminals the remaining 4 services are run similarly with npm run dev / npm start. I am using the gitlab-runner SSH executor (although I would be open to trying others). I would like to be able to deploy the dev server and run my Jest tests in my pipeline with the dependent projects running. After the tests have completed, initiated from a single service, I would like the other dependent services to be brought down gracefully.
I have thought about launching npm run dev as a background process with nohup and using nc to wait for a response on the port, something like while ! nc -w 1 -z localhost 3000; do sleep 0.1; done. However, I don't think that would be best practice, and there is also the issue of killing the initial npm run dev process after the tests have run. I would use the same technique for bringing up the other dependent services.
Another idea was to use Jest's beforeAll() to set up the dev server and afterAll() to tear it down (along with the dependent services). Is there a method to programmatically deploy the Nuxt development server and the Express servers?
The question is: how should this best be achieved? By my suggested methods, or ones I haven't come across yet?
It would be good to use GitLab's pipeline triggers to deploy the other dependent services, but my question there would be: if I use the first method to run the dev servers in the background, how can I keep these up and running until the tests from the upstream pipeline finish?
Some extra bits:
in my package.json:
...
"scripts": {
"dev": "nuxt",
"test": "jest"
}
Example of .gitlab-ci.yml - this of course hangs at npm run dev.
deployNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    - npm run dev
    - npm run test
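A hedged variant of the job above that backgrounds the dev server so the job does not hang, waits for the port, and tears the server down afterwards. The port 3000, the 60-second timeout, and the pkill pattern are assumptions on my part, and nc must be available on the runner:

```yaml
deployNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    - nohup npm run dev > dev.log 2>&1 &       # background the dev server so the job continues
    - timeout 60 sh -c 'while ! nc -z localhost 3000; do sleep 0.5; done'
    - npm run test && RC=0 || RC=$?            # capture the test exit code before teardown
    - pkill -f nuxt || true                    # best-effort teardown; pattern is an assumption
    - exit ${RC:-0}
```

With the SSH executor all script lines run in one shell session, so the backgrounded process and the RC variable survive between lines; the explicit exit re-raises a test failure after cleanup.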
If I were to use pipeline API, it would look like:
stages:
  - Trigger-cross-projects
  - build

deployAuth:
  stage: Trigger-cross-projects
  tags:
    - test
  script:
    - "curl -X POST -F token=[token] -F ref=[ref_name] [pipeline api endpoint]"

deployNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    - npm run dev
    - npm run test
Architecture (separate git projects):
service 1 - nuxt (frontend) <---- this is where the tests are run
service 2 - express server (backend)
service 3 - authentication
service 4 - gateway
Relevant question and answer regarding starting processes with nohup (bear in mind I'm looking for best practice here, and for how to keep dependent services running during the tests): https://stackoverflow.com/a/56857336/3770935
I'm new to GitLab CI. I constructed a very simple YAML file just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install

cache:
  paths:
    - node_modules/

publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is continuously started (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I don't even have a Docker executor; I chose the shell one. What is going on? Please help.
When you registered the new runner, did you give it a tag?
If so, and the tag is e.g. my_tag, modify your YAML file and append a tags section to the job:
publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag
Otherwise the build will be picked up by a shared runner.
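For completeness, a hedged sketch of registering the shell runner with that tag from the AWS machine (the URL, token, and description are placeholders for your project's actual values):

```shell
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "shell" \
  --tag-list "my_tag" \
  --description "aws-shell-runner"
```

Jobs without a matching tags entry will still go to shared runners unless the runner is configured to pick up untagged jobs.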
Does the build have to run on the drone.io server? Can I run the build locally? Since developers need to pass the build before pushing code to GitHub, I am looking for a way to run the build on the developer's local machine. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: mongo-test
    commands:
      - mvn test
It includes two docker containers. How can I run a build against this file in drone? I looked at the drone CLI, but it doesn't work the way I expected.
@BradRydzewski's comment is the right answer.
To run builds locally you use drone exec. You can check the docs.
Extending his answer: you must execute the command in the root of your local repo, exactly where your .drone.yml file is. If your build relies on secrets, you need to feed these secrets through the command line using the --secret or --secrets-file option.
When running a local build, there is no cloning step. Drone will use your local git workspace and mount it in the step containers. So, if you check out some other commit/branch/whatever during the execution of the local build, you will mess things up because Drone will see those changes. So don't update your local repo while the build is running.
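Putting the answer together, a hedged sketch of a local run (the repo path and the secrets file name are placeholders; the flags are the ones mentioned above):

```shell
cd /path/to/your/repo          # the directory that contains .drone.yml
drone exec                     # runs the pipeline steps locally in Docker

# if the build relies on secrets, feed them in as described above
drone exec --secrets-file secrets.txt
```

Since there is no clone step, whatever state the working tree is in when drone exec starts is what the build sees.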
I have a docker image built for MongoDB testing. It can be found at zhaoyi0113/mongo-uat. When a docker container is started from this image, it creates a few MongoDB instances, which take a few minutes to start up. Now I want to run my integration test cases inside this container via Drone CI. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: zhaoyi0113/mongo-uat
    commands:
      - npm install
      - npm run integration
There are two steps in this pipeline: the first runs the unit tests of a Node.js project; the second, integration, runs the integration test cases in the MongoDB docker image.
When I run drone exec, I get an error: failed to connect to mongo instance. I think that is because the MongoDB instance needs a few minutes to start up. The commands npm install and npm run integration should run after the MongoDB instance has launched. How can I delay the build commands?
EDIT1
The image zhaoyi0113/mongo-uat contains a MongoDB environment and creates a few MongoDB instances. I can run docker run -d zhaoyi0113/mongo-uat to launch the container, and after that I can attach to it to see the MongoDB instances. I am not sure how Drone launches the docker container.
The recommended approach to integration testing is to place your service containers in the services section of the YAML [1][2].
Therefore, in order to start a Mongo service container, I would create the below YAML file. The Mongo service will start on the default port at 127.0.0.1 and be accessible from your pipeline containers.
pipeline:
  test:
    image: node
    commands:
      - npm install
      - npm run test
  integration:
    image: node
    commands:
      - npm run integration

services:
  mongo:
    image: mongo:3.0
This is the recommended approach for testing services like MySQL, Postgres, Mongo and more.
[1] http://readme.drone.io/usage/getting-started/#services
[2] http://readme.drone.io/usage/services-guide/
As a short addendum to Brad's answer: while the mongo service runs on 127.0.0.1 on the Drone host machine, it is not possible to reach the service at this IP from within the node app. To access the service, you would reference its service name (here: mongo).
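Building on that addendum, one hedged way to delay the integration commands until Mongo accepts connections (the 60-attempt budget and the use of a Node one-liner instead of nc, which may not be installed in the node image, are my assumptions; the hostname mongo comes from the service name):

```yaml
integration:
  image: node
  commands:
    # poll the "mongo" service on 27017 until it accepts TCP connections,
    # retrying once per second for up to 60 attempts before giving up
    - |
      node -e '
        const net = require("net");
        (function probe(left) {
          const s = net.connect(27017, "mongo", () => process.exit(0));
          s.on("error", () => {
            if (left <= 0) process.exit(1);
            setTimeout(() => probe(left - 1), 1000);
          });
        })(60);'
    - npm run integration
```

If the probe exhausts its attempts it exits non-zero, so the step fails before the integration tests ever run against an unreachable database.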