Vue, Cypress and GitLab CI/CD

I am currently trying to get my E2E tests running on GitLab with their CI/CD platform.
My issue at the moment is that I cannot get both my dev server and Cypress to run at the same time so that the E2E tests can execute.
Here is my current .gitlab-ci.yml file:
image: node

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

stages:
  - setup
  - test

setup:
  stage: setup
  image: cypress/base:10
  script:
    - npm ci
    # check Cypress binary path and cached versions
    # useful to make sure we are not carrying around old versions
    - npx cypress cache path
    - npx cypress cache list

cypress:
  stage: test
  image: cypress/base:10
  script:
    # I need to start a dev server here in the background
    - cypress run --record --key <my_key> --parallel
  artifacts:
    when: always
    paths:
      - cypress/videos/**/*.mp4
      - cypress/screenshots/**/*.png
    expire_in: 7 days

Cypress's official GitHub repository includes an example .gitlab-ci.yml for running Cypress in continuous integration.
It uses the command npm run start:ci & to run the dev server in the background.
So your .gitlab-ci.yml might look like this:
⋮
cypress:
  image: cypress/base:10
  stage: test
  script:
    - npm run start:ci & # start the server in the background
    - cypress run --record --key <my_key> --parallel
⋮

Or use this utility to start the server, wait for a URL to respond, then run the tests and shut down the server: https://github.com/bahmutov/start-server-and-test
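As a rough sketch with that utility (assuming your package.json defines a start:ci script and the dev server answers on port 8080 — both assumptions, adjust them to your app), the cypress job becomes:

cypress:
  stage: test
  image: cypress/base:10
  script:
    - npm ci
    # boots the server, waits for the URL, runs the quoted test command, then kills the server
    - npx start-server-and-test start:ci http://localhost:8080 'cypress run --record --key <my_key> --parallel'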

Related

How to run a script on a service container in GitLab CI

I'm doing E2E testing using Cypress in GitLab CI. I imported the database and the backend as services. Now I need to run an npm script on the backend to populate the DB. How do I do that?
.docker: &docker
  tags:
    - docker

t:test-server-mr:
  stage: test
  allow_failure: true
  before_script:
    - echo "Skipping global before script"
  image: node:12.16.1-stretch
  services:
    - name: registry.gitlab.com/registryname/backend/db:latest
      alias: database
    - name: registry.gitlab.com/registryname/backend/master
      alias: backend
  script:
    - npm install
    - npm run ci
  <<: [*docker]
You can add another entry under the script property:
script:
  - npm install
  - npm run ci
  - npm run init_my_db
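What init_my_db runs is up to you; the relevant detail is that GitLab exposes each service under its alias as a hostname. So a hypothetical seed script (the file name and flags here are invented for illustration) would target the database and backend aliases from the services block rather than localhost:

"scripts": {
  "init_my_db": "node scripts/seed.js --db-host database --api http://backend:3000"
}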

Run a dev server in a CI pipeline

I have a CI pipeline set up using GitHub Actions/Workflows, in which I want to run automated Cypress tests. However, I am having trouble working out how to run my dev server. Let me show you my pipeline:
name: Nuxt CI Pipeline
on:
  push:
    branches: [ CI-pipeline ]
  # pull_request:
  #   branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ 14.x ]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - name: Make envfile
        uses: SpicyPizza/create-envfile@v1
        with:
          envkey_ENV: staging
          file_name: .env
      - run: npm ci
      - run: npm run dev
      - run: |
          cd e2e
          ls -l
          npm ci
          npx cypress run
Now I want to spin up the dev server and run the tests against it (usually on port 3000). The problem is that when npm run dev is executed, the pipeline just waits there and doesn't move forward, which makes sense, as a dev server doesn't return like other commands do, so the job is stuck on it. My DevOps knowledge is bare minimum; can someone point out what I am missing?
I think this way of executing things is not ideal, especially since the Node server is also not killed correctly at the end. Using a helper package like start-server-and-test should do the trick for you:
npm install --save-dev start-server-and-test
While I'm not sure what exactly is behind the scripts in your package.json, it could look something like this in the end:

"scripts": {
  "start:ci": "<<start your dev server>>",
  "cy:run": "cypress run --browser chrome --headless",
  "cy:ci": "start-server-and-test start:ci http://localhost:3000 cy:run"
},
Then you can simply run this as a single command in your pipeline with npm run cy:ci. The script will take care of starting your dev server, waiting for the URL to become available, executing the tests, and shutting down the server once all tests have finished.
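Applied to the workflow from the question, the two blocking steps collapse into one (a sketch; it assumes the scripts above live in the root package.json, whereas the question keeps the Cypress project in an e2e/ subfolder, so paths may need adjusting):

      - run: npm ci
      # one step: starts the server, waits for http://localhost:3000, runs Cypress, then kills the server
      - run: npm run cy:ci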

How to download, unzip, and run BrowserStack Local in gitlab-ci.yml

I'm trying to run automated tests via BrowserStack on a private server; the tests are executed in GitLab CI. Since it is a private server, I need to force the local parameter when executing the tests. When running from my local PC, the following solution works perfectly:
Downloading the binary
Running the command ./BrowserStackLocal --key --force-local
I would like to do the same in the .gitlab-ci.yml file, but I don't know exactly how to achieve this (how to download, unzip, and run the BrowserStackLocal binary).
This is my .gitlab-ci.yml file right now:
stages:
  - e2e_testing

e2e_testing:
  image: node:10.15.3
  stage: e2e_testing
  variables:
    NODE_ENV: dev
  script:
    - apt-get update
    - apt-get install unzip
    - wget http://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local
    - npm ci
    - npm run test:browserstack
  only:
    - master
  tags:
    - docker
    - build
  artifacts:
    when: always
    paths:
      - reports/
You can execute the BrowserStack Local binary through code using the Local Bindings for Node.js. Reference: https://github.com/browserstack/browserstack-local-nodejs
When using the Local Bindings, the binary is automatically downloaded and started by the code itself.
You could try executing the sample test https://github.com/browserstack/browserstack-local-nodejs/blob/master/node-example.js from your GitLab CI.
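For reference, a minimal sketch of the bindings in code, following the package's README (the env variable name is taken from the question's config):

// npm install browserstack-local
const browserstack = require('browserstack-local');

const bsLocal = new browserstack.Local();

// "key" and "forceLocal" mirror the --key and --force-local CLI flags
bsLocal.start({ key: process.env.BROWSERSTACK_ACCESSKEY, forceLocal: true }, (error) => {
  if (error) throw error;
  console.log('BrowserStack Local running:', bsLocal.isRunning());

  // ...run the tests here, then shut the tunnel down
  bsLocal.stop(() => console.log('BrowserStack Local stopped'));
});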

How to do a single build with GitLab for a Vue application with multiple .env files

I have a simple .gitlab-ci.yml file that builds my Vue application. I build once and then deploy the dist folder to my various environments:
stages:
  - build
  - deploy_dev
  - deploy_stg
  - deploy_prd

build:
  image: node:latest # Pull Node image
  stage: build
  script:
    - npm install -g @vue/cli@latest
    - npm install
    - npm run build
  artifacts:
    expire_in: 2 weeks
    paths:
      - dist/

deploy_to_dev:
  image: python:latest
  stage: deploy_dev
  dependencies:
    - build
  only:
    - master # Only deploy master branch automatically to Dev
  script:
    - export AWS_ACCESS_KEY_ID=$DEV_AWS_ACCESS_ID
    - export AWS_SECRET_ACCESS_KEY=$DEV_AWS_ACCESS_KEY
    - pip install awscli # Install AWS CLI
    - aws s3 sync ./dist s3://$DEV_BUCKET
This all works great. However, I've now introduced some config and I build my app differently per environment; for 3 environments I have 3 different build commands. E.g., I have an .env.production file, so for a production build my command becomes:
npm run build -- --mode production
Is there any way to get around having a different build for each environment while still using the .env files based on a GitLab variable?
You should split your build job into one per environment and use the environment concept, with something like this for the dev and production envs:
.build_template: &build_template
  image: node:latest # Pull Node image
  script:
    - npm install -g @vue/cli@latest
    - npm install
    - npm run build -- --mode $CI_ENVIRONMENT_NAME

build_dev:
  stage: build_dev
  <<: *build_template
  environment:
    name: dev

build_prod:
  stage: build_prod
  <<: *build_template
  environment:
    name: production
In this snippet, I used anchors to avoid duplicate lines.
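Two caveats with that snippet: the custom stages still have to be declared, and Vue CLI loads .env.<mode> based on the --mode flag, so the environment names must line up with your .env file names (an environment named dev would need an .env.dev file, or rename the environment to match .env.development):

stages:
  - build_dev
  - build_prod

# environment name "production" => npm run build -- --mode production => loads .env.production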

Run jobs sequentially by environment using gitlab-ci

I'm looking for a solution to my problem: I want to run jobs sequentially by environment using gitlab-ci.
Build dev (manual launch) -----> create docker image dev (automatic if the dev build works)
Build staging (manual launch) ------> create docker image staging (automatic if the staging build works)
...
If the dev build fails, I don't want to create the dev docker image.
How can I do this? Moreover, is it possible to use a for loop to build each environment?
Here is what I currently do:
stages:
  - build
  - docker-image

dev:build:
  stage: build
  script:
    - npm install
    - npm run build:development
  when: manual

staging:build:
  stage: build
  script:
    - npm install
    - npm run build:staging
  when: manual

demo:build:
  stage: build
  script:
    - npm install
    - npm run build:demo
  when: manual

dev:image:
  stage: docker-image
  script:
    - docker build -t registry/project:dev --build-arg environment=development .
    #- docker push registry/project:dev
  when: on_success

staging:image:
  stage: docker-image
  script:
    - docker build -t registry/project:staging --build-arg environment=staging .
    #- docker push registry/project:staging
  when: manual

demo:image:
  stage: docker-image
  script:
    - docker build -t registry/project:demo --build-arg environment=demo .
    #- docker push registry/project:demo
  when: manual
Thank you very much
You should disallow failure for the manual jobs so the pipeline will not continue any further after an unsuccessful build.
Manual jobs are allowed to fail by default.
To disallow failure:
dev:build:
  stage: build
  script:
    - npm install
    - npm run build:development
  when: manual
  allow_failure: false
Note:
when: on_success
on your dev:image job is the default behavior anyway, so it does nothing there.
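On GitLab 12.2 and later, an alternative sketch (not part of the original answer) is to wire each image job to its own build job with needs, so each environment's chain runs independently and an image is only built once its own build has succeeded:

dev:image:
  stage: docker-image
  needs: ["dev:build"]  # runs right after dev:build succeeds, independent of the other builds
  script:
    - docker build -t registry/project:dev --build-arg environment=development .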