Run Vue dev server in GitLab CI with a microservice architecture (npm)

My goal is to set up CI on GitLab. My project consists of 5 different services; the details are unimportant. I have tests which I currently run locally by running npm run dev in one terminal and npm run test in a second, all whilst in other terminals the remaining 4 services are started similarly with npm run dev/npm start. I am using the gitlab-runner SSH executor (although I would be open to trying others). I would like to deploy the dev server and run my Jest tests in my pipeline with the dependent projects running. After the tests, initiated from a single service, have completed, I would like the dependent services to be brought down gracefully.
I have thought about issuing npm run dev as a background process with nohup and using nc to wait for a response on the port, something like while ! nc -w 1 -z localhost 3000; do sleep 0.1; done. However, I don't think that would be best practice, and there is also the issue of killing the initial npm run dev process after the tests have run. I would use the same technique for bringing up the other dependent services.
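Roughly, that approach inside a single job might look like the following sketch (the port, the log/pid file names and the after_script cleanup are just illustrations of the idea, not something I have in place):
testNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    # start the dev server in the background and remember its PID
    - nohup npm run dev > nuxt-dev.log 2>&1 &
    - echo $! > nuxt-dev.pid
    # block until the port answers, then run the tests
    - while ! nc -w 1 -z localhost 3000; do sleep 0.1; done
    - npm run test
  after_script:
    # runs even when the tests fail, so the server is always torn down
    - kill "$(cat nuxt-dev.pid)" || true   # may need to kill the whole process group instead
It works, but it is exactly the kind of ad hoc plumbing I am trying to avoid, and it would have to be repeated for every dependent service on its own port.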
Another idea was to use Jest beforeAll() to set up the dev server and afterAll() to tear it down (along with the dependent services). Is there a method to programmatically start the Nuxt development server and the Express servers?
The question is: how should this best be achieved? By my suggested methods, or by ones I haven't come across yet?
It would be good to use GitLab's pipeline triggers to deploy the other dependent services, but my question there would be: if I use the first method to run the dev servers in the background, how can I keep them up and running until the tests from the upstream pipeline finish?
Some extra bits:
in my package.json:
...
"scripts": {
"dev": "nuxt",
"test": "jest"
}
Example of .gitlab-ci.yml - this of course hangs at npm run dev.
deployNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    - npm run dev
    - npm run test
If I were to use pipeline API, it would look like:
stages:
  - Trigger-cross-projects
  - build

deployAuth:
  stage: Trigger-cross-projects
  tags:
    - test
  script:
    - "curl -X POST -F token=[token] -F ref=[ref_name] [pipeline api endpoint]"

deployNuxt:
  stage: build
  tags:
    - test
  before_script:
    - export PATH=$PATH:/home/[user]/.nvm/versions/node/v13.12.0/bin
  script:
    - npm install
    - npm run dev
    - npm run test
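For comparison, I believe newer GitLab versions can express the same cross-project trigger declaratively with the trigger keyword instead of the curl call. A rough sketch (the project path is a placeholder, and as far as I understand strategy: depend only makes the upstream job wait for the downstream pipeline to finish; it would not by itself keep that service's dev server running while my tests execute):
deployAuth:
  stage: Trigger-cross-projects
  trigger:
    project: group/auth-service   # placeholder project path
    branch: master
    strategy: depend              # wait for the downstream pipeline to complete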
Architecture (separate git projects):
service 1 - nuxt (frontend) <---- this is where the tests are run
service 2 - express server (backend)
service 3 - authentication
service 4 - gateway
Relevant question and answer regarding starting processes with nohup (bear in mind I'm looking for best practice here though, and how to keep dependent services running during tests): https://stackoverflow.com/a/56857336/3770935
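For completeness, one more option I have only considered on paper: if I switched to the Docker executor, the dependent services could be declared as services: containers, which GitLab starts before the job and tears down automatically afterwards. This assumes each dependent service publishes a Docker image (the image names below are made up) and that only the Nuxt frontend still has to be started inside the job itself:
test:
  stage: test
  services:
    - name: registry.example.com/backend:latest    # made-up image names
      alias: backend
    - name: registry.example.com/auth:latest
      alias: auth
    - name: registry.example.com/gateway:latest
      alias: gateway
  script:
    - npm install
    - nohup npm run dev > nuxt-dev.log 2>&1 &
    - while ! nc -w 1 -z localhost 3000; do sleep 0.1; done
    - npm run test
  # the service containers are stopped automatically when the job ends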

Related

How to create a Pipeline with Gitlab CI, just for the affected packages with Turborepo

So, I am in the process of integrating Turborepo into our Node.js (React, Next, Node) monorepo, which uses GitLab CI. The thing is, the example in the docs doesn't quite do what I want.
For reference, here is what they have in their docs:
image: node:latest

# To use Remote Caching, uncomment the next lines and follow the steps below.
# variables:
#   TURBO_TOKEN: $TURBO_TOKEN
#   TURBO_TEAM: $TURBO_TEAM

stages:
  - build

build:
  stage: build
  script:
    - npm install
    - npm run build
    - npm run test
We have a few stages beyond the ones in their example:
install
build
package
What I would ideally like is to use Turborepo and GitLab downstream pipelines as follows:
The install stage should run when the root package.json has changed.
The build and package stages should run just for the affected packages (i.e., if shared-lib is changed, then shared-lib should run, as well as its 2 consumers app-a and app-b, in parallel).
I read the docs and I can somehow make the downstream job run, but not just for the affected packages; instead it runs for all of them. The main problem is how to read the affected packages and their consumers, and run just those.
I read that with the latest version I can use the --dry commands to read those. But even assuming that works reliably (which, from my testing, it doesn't), how can I put those packages into downstream jobs in GitLab?
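What I have sketched so far (untested, and assuming --dry-run=json and --filter behave as documented, that origin/$CI_DEFAULT_BRANCH is fetchable, and that package names contain no scopes or slashes) is a generator job that writes a child pipeline with one job per affected package and triggers it as a dynamic child pipeline:
generate-pipeline:
  stage: install
  image: node:latest
  script:
    - npm install
    # list the packages affected relative to the default branch
    - >
      npx turbo run build --filter="...[origin/$CI_DEFAULT_BRANCH]" --dry-run=json
      | node -e "const d=JSON.parse(require('fs').readFileSync(0));console.log(d.packages.join('\n'))"
      > affected.txt
    # write one build job per affected package into a child pipeline definition
    - |
      echo "stages: [build]" > child-pipeline.yml
      while read -r pkg; do
        [ -z "$pkg" ] && continue
        {
          echo "build-$pkg:"
          echo "  stage: build"
          echo "  image: node:latest"
          echo "  script:"
          echo "    - npm install"
          echo "    - npx turbo run build --filter=$pkg"
        } >> child-pipeline.yml
      done < affected.txt
  artifacts:
    paths:
      - child-pipeline.yml

run-affected:
  stage: build
  trigger:
    include:
      - artifact: child-pipeline.yml
        job: generate-pipeline
    strategy: depend
But I don't know whether this is the intended way to combine Turborepo's affected detection with GitLab downstream jobs.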

Gitlab CI/CD test whether pm2 start gives errors

So I am trying to make a CI/CD pipeline for the first time, as I have no experience with DevOps at all, and I want to add pm2 to it so I can test whether it is able to start the API successfully. This way, I can ensure that the API works when it is deployed.
The one issue I have is: if I do pm2 start ecosystem.config.js --env development, how will the pipeline know whether this startup was successful or not?
This is currently the YAML file that I have:
image: node:10.19.0

services:
  - mongo:3.6.8
  - pm2:3.5.1

cache:
  paths:
    - node_modules/

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    - npm install

test:
  stage: test
  script:
    - npm run lint
    - pm2 start ecosystem.config.js --env development
Any ideas how you can test pm2 startup?
Thanks in advance!
UPDATE #1:
The reason I want to test the pm2 startup is that pm2 provides several process.env variables that my API requires in order to start.
Starting the API without those variables immediately gives errors, as it needs them to initialize the connections to other APIs.
So I don't know whether there is a workaround that allows the API to start without issues.
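The best idea I have so far is only an untested sketch: provide the required variables as GitLab CI/CD variables (Settings > CI/CD > Variables, so they end up in process.env during the job), start the app under pm2, and fail the job if pm2 does not report it as online or a health endpoint does not answer. The /health path, the port and the sleep are assumptions on my part:
test_startup:
  stage: test
  script:
    - npm install
    - npm install -g pm2
    - pm2 start ecosystem.config.js --env development
    - sleep 5
    # fail the job unless pm2 reports at least one process as online
    - pm2 jlist | node -e "const l=JSON.parse(require('fs').readFileSync(0)); process.exit(l.some(p => p.pm2_env.status === 'online') ? 0:1)"
    # optionally hit a health endpoint as well (path and port are assumptions)
    - curl --fail --retry 10 --retry-delay 3 --retry-connrefused http://localhost:3000/health
    - pm2 logs --nostream --lines 50
  after_script:
    - pm2 delete all || true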

Pipeline passed without any changes (GitLab CI/CD)

I'm a newcomer to GitLab CI/CD.
This is the .gitlab-ci.yml in my project, and I want to test how it works. I have registered a GitLab runner, which is fine, but my problem is that when I add a file to my project it runs the pipelines and they pass successfully, yet nothing changes in my project on the server.
What is the problem? It is not a dockerized project.
image: php:7.2

stages:
  - preparation
  - building

cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

composer:
  stage: preparation
  script:
    - mkdir test5
  when: manual

pull:
  stage: building
  script:
    - mkdir test6
CI/CD pipelines like GitLab CI/CD are nothing more than virtual environments, usually Docker containers, that operate on your code just as you do on your own host system.
Your mkdir operations definitely have an effect, but the changes remain inside that virtual environment because they are never reflected back to your remote repository (or deployed to your server). For this to work, you have to set up your repository from inside the CI/CD runner and commit to your repository, (again) just like you do from your own host system. To execute custom commands, GitLab CI/CD has the script parameter. I am sure reading up on this will get you up and running.
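As a rough sketch of that idea (and only a sketch): committing back from the runner could look like the job below, assuming a project access token with write permission is stored in a CI/CD variable; CI_PUSH_TOKEN is just a placeholder name I made up, not a built-in variable.
commit_back:
  stage: building
  script:
    - mkdir -p test6
    - touch test6/.gitkeep            # git does not track empty directories
    - git config user.email "ci@example.com"
    - git config user.name "GitLab CI"
    - git add test6/.gitkeep
    # only commit if something actually changed; [skip ci] avoids triggering another pipeline
    - git diff --cached --quiet || git commit -m "Add generated files [skip ci]"
    - git push "https://oauth2:${CI_PUSH_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_REF_NAME}"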

Gitlab continuous integration testing with Selenium

I am working on a project to build, test and deploy an application to the cloud using a .gitlab-ci.yml
1) Build the backend and frontend using pip install and npm install
build_backend:
  image: python
  stage: build
  script:
    - pip install -r requirements.txt
  artifacts:
    paths:
      - backend/

build_frontend:
  image: node
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - frontend
2) Run unit and functional tests using PyUnit and Python Selenium
test_unit:
  image: python
  stage: test
  script:
    - python -m unittest discover

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
3) Deploy to Google Cloud using the sdk
deploy:
  image: google/cloud-sdk
  stage: deploy
  environment:
    name: $CI_BUILD_REF_NAME
    url: https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
  script:
    - echo $GAE_KEY > /tmp/gae_key.json
    - gcloud config set project $GAE_PROJECT
    - gcloud auth activate-service-account --key-file /tmp/gae_key.json
    - gcloud --quiet app deploy --version $CI_BUILD_REF_SLUG --no-promote
  after_script:
    - rm /tmp/gae_key.json
This all runs perfectly, except that the Selenium tests are run against the deployed URL, not the current build:
python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
I need to have GitLab run three things simultaneously:
a) Selenium
b) The Python server with the application
c) The test script
Possible approaches to run the Python server:
Run it within the same script section as the test script somehow
Docker in Docker
Service
Any advice, or answers would be greatly appreciated!
I wrote a blog post on how I set up web tests for a PHP application. OK, it's PHP, but I guess something similar can be done for a Python project.
What I did was start a PHP development server from within the container that runs the web tests. Because of the artifacts, the development server can access the PHP files. I figure out the IP address of the container, and using this IP address the selenium/standalone-chrome container can connect back to the development server.
I created a simple demo project; you can check out the .gitlab-ci.yml file. Note that I pinned the selenium container to an old version; this was because of an issue with an old version of the PHP webdriver package, but today this isn't needed anymore.
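Translated to the Python setup in your question, the same idea might look roughly like this; the entry point, the port 8000 and the use of hostname -i to discover the job container's IP are assumptions on my side:
test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - pip install -r requirements.txt
    # start the application under test in the background
    - python backend/app.py &        # hypothetical entry point, assumed to listen on 0.0.0.0:8000
    # the selenium container has to reach this job's container by IP, not via localhost
    - export APP_HOST=$(hostname -i | awk '{print $1}')
    - while ! python -c "import socket; socket.create_connection(('127.0.0.1', 8000), 1)"; do sleep 0.5; done
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub http://$APP_HOST:8000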

How to run integration test inside a docker container in drone pipeline

I have a Docker image built for MongoDB testing. It can be found at zhaoyi0113/mongo-uat. When a container is started from this image, it creates a few MongoDB instances, which take a few minutes to start up. Now I want to run my integration test cases inside this container with Drone CI. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: zhaoyi0113/mongo-uat
    commands:
      - npm install
      - npm run integration
There are two steps in this pipeline: the first runs the unit tests of a Node.js project; the second, integration, runs the integration test cases in the MongoDB Docker image.
When I run drone exec, it fails with an error: failed to connect to mongo instance. I think that is because the MongoDB instance needs a few minutes to start up. The commands npm install and npm run integration should be run after the MongoDB instance has launched. How can I delay the build commands?
EDIT1
The image zhaoyi0113/mongo-uat provides the MongoDB environment. It creates a few MongoDB instances. I can run docker run -d zhaoyi0113/mongo-uat to launch the container, and after that I can attach to it to see the MongoDB instances. I am not sure how Drone launches the Docker container.
The recommended approach to integration testing is to place your service containers in the services section of the YAML [1][2].
Therefore, in order to start a Mongo service container, I would create the YAML file below. The Mongo service will start on the default port at 127.0.0.1 and be accessible from your pipeline containers.
pipeline:
  test:
    image: node
    commands:
      - npm install
      - npm run test
  integration:
    image: node
    commands:
      - npm run integration

services:
  mongo:
    image: mongo:3.0
This is the recommended approach for testing services like MySQL, Postgres, Mongo and more.
[1] http://readme.drone.io/usage/getting-started/#services
[2] http://readme.drone.io/usage/services-guide/
As a short addendum to Brad's answer: while the mongo service will run on 127.0.0.1 on the Drone host machine, it will not be possible to reach the service from this IP within the Node app. To access the service you would reference its service name (here: mongo).
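And to address the original concern that the MongoDB instances need time to start: one way (a sketch, not something I have verified against your image) is to poll the service's port from the integration step before running the tests, for example with bash's /dev/tcp so nothing extra has to be installed in the node image:
pipeline:
  integration:
    image: node
    commands:
      # poll the mongo service's default port before starting the tests
      - timeout 120 bash -c 'until (echo > /dev/tcp/mongo/27017) 2>/dev/null; do sleep 2; done'
      - npm install
      - npm run integration

services:
  mongo:
    image: mongo:3.0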