I am working on a project to build, test, and deploy an application to the cloud using a .gitlab-ci.yml file.
1) Build the backend and frontend using pip install and npm install
build_backend:
  image: python
  stage: build
  script:
    - pip install -r requirements.txt
  artifacts:
    paths:
      - backend/

build_frontend:
  image: node
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - frontend
2) Run unit and functional tests using PyUnit and Python Selenium
test_unit:
  image: python
  stage: test
  script:
    - python -m unittest discover

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
3) Deploy to Google Cloud using the SDK
deploy:
  image: google/cloud-sdk
  stage: deploy
  environment:
    name: $CI_BUILD_REF_NAME
    url: https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
  script:
    - echo $GAE_KEY > /tmp/gae_key.json
    - gcloud config set project $GAE_PROJECT
    - gcloud auth activate-service-account --key-file /tmp/gae_key.json
    - gcloud --quiet app deploy --version $CI_BUILD_REF_SLUG --no-promote
  after_script:
    - rm /tmp/gae_key.json
This all runs perfectly, except that the Selenium tests are run against the deployed URL, not the current build:
python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
I need to have GitLab run three things simultaneously:
a) Selenium
b) A Python server with the application
c) The test script
Possible approaches to run the Python server:
- Run it within the same terminal commands as the test script somehow
- Docker in Docker
- A service
Any advice or answers would be greatly appreciated!
I wrote a blog post on how I set up web tests for a PHP application. OK, PHP, but I guess something similar can be done for a Python project.
What I did was start a PHP development server from within the container that runs the web tests. Because of the artifacts, the development server can access the PHP files. I figure out the IP address of the container, and using this IP address the selenium/standalone-chrome container can connect back to the development server.
I created a simple demo project; you can check out its .gitlab-ci.yml file. Note that I pinned the selenium container to an old version; this was because of an issue with an old version of the PHP webdriver package, but today this isn't needed anymore.
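For a Python project, the equivalent might look roughly like the job below. This is only a sketch: python app.py as the entry point, port 5000, and hostname -i to discover the job container's IP are all assumptions you would adapt to your own project:

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - pip install -r requirements.txt
    # serve the current build in the background; "app.py" and port 5000 are placeholders
    - python app.py &
    # IP of this job container, which the selenium service can reach over the shared Docker network
    - export APP_IP=$(hostname -i)
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub http://$APP_IP:5000

Because the server is started with &, the script moves straight on to the tests; if the application needs a moment to boot, add a short sleep or a readiness poll before the test command.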
Related
I am currently trying to get my E2E tests running on GitLab with their CI/CD platform.
My issue at the moment is that I cannot get both my dev server and Cypress to run at the same time so that the E2E tests can run.
Here is my current .gitlab-ci.yml file:
image: node
variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"
cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules
stages:
  - setup
  - test
setup:
  stage: setup
  image: cypress/base:10
  script:
    - npm ci
    # check Cypress binary path and cached versions
    # useful to make sure we are not carrying around old versions
    - npx cypress cache path
    - npx cypress cache list
cypress:
  stage: test
  image: cypress/base:10
  script:
    # I need to start a dev server here in the background
    - cypress run --record --key <my_key> --parallel
  artifacts:
    when: always
    paths:
      - cypress/videos/**/*.mp4
      - cypress/screenshots/**/*.png
    expire_in: 7 days
On Cypress's official GitHub page, there is an example .gitlab-ci.yml for running Cypress in continuous integration.
It uses the command npm run start:ci & to run the dev server in the background.
So, your .gitlab-ci.yml might look like this:
⋮
cypress:
  image: cypress/base:10
  stage: test
  script:
    - npm run start:ci & # start the server in the background
    - cypress run --record --key <my_key> --parallel
⋮
Or use this utility, which starts the server, waits for a URL to respond, then runs the tests and shuts the server down: https://github.com/bahmutov/start-server-and-test
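With that utility, the cypress job's script can shrink to a single line. A sketch, assuming start-server-and-test is in your devDependencies, package.json defines a start:ci script, and the dev server answers on http://localhost:3000:

cypress:
  image: cypress/base:10
  stage: test
  script:
    # runs start:ci, waits until the URL responds, runs the quoted test command, then shuts the server down
    - npx start-server-and-test start:ci http://localhost:3000 "cypress run --record --key <my_key> --parallel"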
I'm trying to run automated tests via BrowserStack on a private server; the tests are executed on GitLab CI. Since it is a private server, I need to force the local parameter when executing tests. When running from my local PC, the following solution works perfectly:
Downloading the binary
Running the command ./BrowserStackLocal --key --force-local
I would like to do the same in the .gitlab-ci.yml file, but I don't know exactly how to achieve this (how to download, unzip, and run the BrowserStackLocal binary).
This is my .gitlab-ci.yml file right now:
stages:
  - e2e_testing
e2e_testing:
  image: node:10.15.3
  stage: e2e_testing
  variables:
    NODE_ENV: dev
  script:
    - apt-get update
    - apt-get install unzip
    - wget http://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local
    - npm ci
    - npm run test:browserstack
  only:
    - master
  tags:
    - docker
    - build
  artifacts:
    when: always
    paths:
      - reports/
You can execute the BrowserStack Local binary through code using the Local Bindings for Node.js. Reference: https://github.com/browserstack/browserstack-local-nodejs
When using the Local Bindings, the binary is automatically downloaded and started through the code itself.
You could try executing the sample test https://github.com/browserstack/browserstack-local-nodejs/blob/master/node-example.js from your GitLab CI.
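Alternatively, if you would rather keep downloading the binary inside the job as in your current file, a minimal sketch is to launch it in the background and give the tunnel a moment to establish (run in the foreground, ./BrowserStackLocal never exits, so the script would never reach npm ci). The rest of the job stays as in the question:

  script:
    - apt-get update
    - apt-get install -y unzip
    - wget http://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    # run the tunnel in the background so the job can continue to the tests
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local &
    # crude wait for the tunnel to come up; polling until it is ready would be more robust
    - sleep 10
    - npm ci
    - npm run test:browserstack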
I built a Vue.js Vuex user interface. It works perfectly (on my laptop). I want to deploy it on GitLab Pages.
I used the file described here (except that I upgraded the Node.js version):
build site:
  image: node:10.8
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:10.8
  stage: test
  script:
    - npm install --progress=false
    - npm run unit

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/your/project/path/
The job is marked as having run successfully in the pipeline. However, when I click on the Pages URL I get a 404 HTTP error code.
What am I missing?
I was facing a similar issue when I was trying to deploy my Vue.js application to GitLab Pages. After weeks of trial and error, I got it to work.
Looking at your script above, you're building the app, unit testing it, and trying to deploy it to an external server. If you need it on GitLab Pages as well, you have to use the pages job.
Here is my pages job for deploying a Vue.js app to GitLab Pages:
pages:
  image: node:latest
  stage: deploy
  script:
    - npm install --progress=false
    - npm run build
    - rm -rf public
    - mkdir public
    - cp -r dist/* public
  artifacts:
    expire_in: 1 week
    paths:
      - public
  only:
    - master
Hope this is what you're looking for.
You can deploy without the pipeline. In order for this to work, you first have to build your application for production. If you used Vue CLI, this is done by invoking the build command, e.g. npm run build.
This will generate a dist folder containing your assets. This is what you have to push to your repository. For example, look at my repository.
https://github.com/DanijelH/danijelh.github.io
And this is the page
https://danijelh.github.io/
I'm new to GitLab CI. I constructed a very simple YAML file just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I can see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install
cache:
  paths:
    - node_modules/
publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is continuously started (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I didn't even set up a Docker executor; I chose the shell one. What is going on? Please help.
When you registered the new runner, did you give it a tag?
If so, and the tag is e.g. my_tag, modify your YAML file and append:
publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag
Otherwise, the build will be picked up by a shared runner.
I have a Docker image built for MongoDB testing; it can be found at zhaoyi0113/mongo-uat. When a container is started from this image, it creates a few MongoDB instances, which take a few minutes to start up. Now I want to run my integration test cases inside this container via Drone CI. Below is my .drone.yml file:
pipeline:
  build:
    image: node:latest
    commands:
      - npm install
      - npm test
      - npm run eslint
  integration:
    image: zhaoyi0113/mongo-uat
    commands:
      - npm install
      - npm run integration
There are two steps in this pipeline: the first runs the unit tests in a Node.js project; the second one, integration, is used to run the integration test cases in the MongoDB Docker image.
When I run drone exec, I get an error: failed to connect to mongo instance. I think that is because the MongoDB instance needs a few minutes to start up. The commands npm install and npm run integration should be run after the MongoDB instance has launched. How can I delay the build commands?
EDIT1
The image zhaoyi0113/mongo-uat provides a MongoDB environment and creates a few MongoDB instances. I can run the command docker run -d zhaoyi0113/mongo-uat to launch this container; after that I can attach to it to see the MongoDB instances. I am not sure how Drone launches the Docker container.
The recommended approach to integration testing is to place your service containers in the services section of the YAML [1][2].
Therefore, in order to start a Mongo service container, I would create the YAML file below. The Mongo service will start on the default port at 127.0.0.1 and be accessible from your pipeline containers.
pipeline:
  test:
    image: node
    commands:
      - npm install
      - npm run test
  integration:
    image: node
    commands:
      - npm run integration

services:
  mongo:
    image: mongo:3.0
This is the recommended approach for testing services like MySQL, Postgres, Mongo and more.
[1] http://readme.drone.io/usage/getting-started/#services
[2] http://readme.drone.io/usage/services-guide/
As a short addendum to Brad's answer: while the mongo service will run on 127.0.0.1 on the Drone host machine, it will not be possible to reach the service from this IP within the Node app. To access the service, you would reference its service name (here: mongo).
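For example, the integration step could hand the service hostname to the test code through an environment variable (a sketch; MONGO_URL and the test database name are hypothetical, and your code would have to read the variable):

pipeline:
  integration:
    image: node
    environment:
      # "mongo" resolves to the service container declared under services
      - MONGO_URL=mongodb://mongo:27017/test
    commands:
      - npm run integration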