I'm trying to run automated tests via BrowserStack on a private server; the tests are executed on GitLab CI. Since it is a private server, I need the force-local parameter when executing the tests. When running from my local PC, the following solution works perfectly:
Downloading the binary
Running the command ./BrowserStackLocal --key --force-local
I would like to do the same in my .gitlab-ci.yml file, but I don't know exactly how to achieve this (how to download, unzip, and install the BrowserStackLocal binary).
This is my .gitlab-ci.yml file right now:
stages:
  - e2e_testing

e2e_testing:
  image: node:10.15.3
  stage: e2e_testing
  variables:
    NODE_ENV: dev
  script:
    - apt-get update
    - apt-get install unzip
    - wget http://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local
    - npm ci
    - npm run test:browserstack
  only:
    - master
  tags:
    - docker
    - build
  artifacts:
    when: always
    paths:
      - reports/
You can execute the BrowserStack Local binary through code using the Local Bindings for Node.js. Reference: https://github.com/browserstack/browserstack-local-nodejs
When using the Local Bindings, the binary is automatically downloaded and started by the code itself.
You could try executing the sample test https://github.com/browserstack/browserstack-local-nodejs/blob/master/node-example.js from your GitLab CI.
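Alternatively, if you want to keep the plain-binary approach from your current YAML, note that ./BrowserStackLocal runs in the foreground there, so the job would block on it and never reach npm ci. A minimal sketch of the job (assuming the binary's --daemon start/stop flags, which current versions support; check BrowserStack's documentation for your version):

e2e_testing:
  image: node:10.15.3
  stage: e2e_testing
  script:
    - apt-get update && apt-get install -y unzip
    - wget https://www.browserstack.com/browserstack-local/BrowserStackLocal-linux-x64.zip
    - unzip BrowserStackLocal-linux-x64.zip
    # --daemon start forks the tunnel into the background and returns once it is up
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --force-local --daemon start
    - npm ci
    - npm run test:browserstack
    # shut the tunnel down so the job can exit cleanly
    - ./BrowserStackLocal --key ${BROWSERSTACK_ACCESSKEY} --daemon stop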
I'm a newcomer to GitLab CI/CD.
This is the .gitlab-ci.yml in my project, and I want to test how it works. I have registered a GitLab runner, and that is OK, but my problem is this: when I add a file to my project, the pipelines run and pass successfully, yet nothing changes in my project on the server.
What is the problem? It is not a dockerized project.
image: php:7.2

stages:
  - preparation
  - building

cache:
  key: "$CI_JOB_NAME-$CI_COMMIT_REF_SLUG"

composer:
  stage: preparation
  script:
    - mkdir test5
  when: manual

pull:
  stage: building
  script:
    - mkdir test6
CI/CD pipelines like GitLab CI/CD are nothing more than virtual environments, usually Docker containers, that operate on your code just as you do on your own host system.
Your mkdir operations definitely have an effect, but the changes remain inside the virtual environment because they are never reflected back to your remote repository. For this to work, you have to set up your repository from inside the CI/CD runner and commit and push to it, just as you would from your own host system. To execute custom commands, GitLab CI/CD has the script parameter. I am sure reading its documentation will get you up and running.
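As an illustration, here is a minimal sketch of a job that commits a generated directory back to the repository. PROJECT_TOKEN is an assumption: a project access token with write_repository scope that you would create yourself and store as a masked CI/CD variable (CI_SERVER_HOST, CI_PROJECT_PATH, and CI_COMMIT_REF_NAME are predefined GitLab variables):

pull:
  stage: building
  script:
    - mkdir -p test6
    - touch test6/.gitkeep                        # git does not track empty directories
    - git config user.email "ci-bot@example.com"  # hypothetical bot identity
    - git config user.name "CI Bot"
    - git add test6
    - git commit -m "Add test6 from CI [skip ci]" || echo "nothing to commit"
    # push over HTTPS using the (assumed) PROJECT_TOKEN; [skip ci] above avoids a pipeline loop
    - git push "https://oauth2:${PROJECT_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_REF_NAME}"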
I am currently trying to get my E2E tests running on GitLab with their CI/CD platform.
My issue at the moment is that I cannot get both my dev server and Cypress to run at the same time so that the E2E tests can run.
Here is my current .gitlab-ci.yml file:
image: node

variables:
  npm_config_cache: "$CI_PROJECT_DIR/.npm"
  CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"

cache:
  key: ${CI_COMMIT_REF_SLUG}
  paths:
    - .npm
    - cache/Cypress
    - node_modules

stages:
  - setup
  - test

setup:
  stage: setup
  image: cypress/base:10
  script:
    - npm ci
    # check the Cypress binary path and cached versions
    # useful to make sure we are not carrying around old versions
    - npx cypress cache path
    - npx cypress cache list

cypress:
  stage: test
  image: cypress/base:10
  script:
    # I need to start a dev server here in the background
    - cypress run --record --key <my_key> --parallel
  artifacts:
    when: always
    paths:
      - cypress/videos/**/*.mp4
      - cypress/screenshots/**/*.png
    expire_in: 7 days
Cypress's official GitHub page has an example .gitlab-ci.yml for running Cypress in continuous integration.
It uses the command npm run start:ci & to run the dev server in the background.
So, your .gitlab-ci.yml might look like this:
⋮
cypress:
  image: cypress/base:10
  stage: test
  script:
    - npm run start:ci & # start the server in the background
    - cypress run --record --key <my_key> --parallel
⋮
Or use this utility, which starts the server, waits for a URL to respond, then runs the tests and shuts the server down: https://github.com/bahmutov/start-server-and-test
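For example, a sketch of the test job using that utility (the start:ci script name, port 3000, and a cy:run script wrapping cypress run are assumptions; adjust them to your package.json):

cypress:
  stage: test
  image: cypress/base:10
  script:
    - npm install --save-dev start-server-and-test
    # starts "npm run start:ci", waits until http://localhost:3000 responds,
    # runs "npm run cy:run", then shuts the server down again
    - npx start-server-and-test start:ci http://localhost:3000 cy:run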
I built a Vue.js/Vuex user interface. It works perfectly (on my laptop). I want to deploy it on GitLab Pages.
I used the file described here (except that I upgraded the Node.js version):
build site:
  image: node:10.8
  stage: build
  script:
    - npm install --progress=false
    - npm run build
  artifacts:
    expire_in: 1 week
    paths:
      - dist

unit test:
  image: node:10.8
  stage: test
  script:
    - npm install --progress=false
    - npm run unit

deploy:
  image: alpine
  stage: deploy
  script:
    - apk add --no-cache rsync openssh
    - mkdir -p ~/.ssh
    - echo "$SSH_PRIVATE_KEY" >> ~/.ssh/id_dsa
    - chmod 600 ~/.ssh/id_dsa
    - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
    - rsync -rav --delete dist/ user@server.com:/your/project/path/
The pipeline marks the job as having run successfully. However, when I click on the Pages URL I get a 404 HTTP error code.
What am I missing?
I was facing a similar issue when trying to deploy my Vue.js application to GitLab Pages. After weeks of trial and error, I got it to work.
Looking at your script above, you're building the app, unit testing it, and trying to deploy it to an external server. If you need it on GitLab Pages as well, you have to use the pages job.
Here is my pages job for deploying a Vue.js app to GitLab Pages:
pages:
  image: node:latest
  stage: deploy
  script:
    - npm install --progress=false
    - npm run build
    - rm -rf public
    - mkdir public
    - cp -r dist/* public
  artifacts:
    expire_in: 1 week
    paths:
      - public
  only:
    - master
Hope this is what you're looking for.
You can deploy without the pipeline. For this to work, you first have to build your application for production. If you used the Vue CLI, this is done by invoking the build command, e.g. npm run build.
This generates a dist folder containing your assets. That is what you have to push to your repository. For example, look at my repository.
https://github.com/DanijelH/danijelh.github.io
And this is the page
https://danijelh.github.io/
I'm new to GitLab CI. I constructed a very simple YAML just for test purposes. I configured a runner with the shell executor on my AWS machine and registered it properly. In Settings/Pipelines I can see the activated runner. When I push something to my repository, the following YAML should be executed:
before_script:
  - npm install

cache:
  paths:
    - node_modules/

publish:
  stage: deploy
  script:
    - node app.js
Instead, a completely different runner is started every time (whatever I change, even when I turn off the runner on my machine). It is the runner with ID #40786. In the logs I can read:
Running with gitlab-ci-multi-runner 9.5.0 (413da38)
on docker-auto-scale (e11ae361)
Using Docker executor with image ruby:2.1 ...
I don't even have a Docker executor; I chose the shell one. What is going on? Please support.
When you registered the new runner, did you give it a tag?
If so, and the tag is e.g. my_tag, modify your YAML file and append the tag:

publish:
  stage: deploy
  script:
    - node app.js
  tags:
    - my_tag

Otherwise the build will be picked up by a shared runner.
I am working on a project to build, test, and deploy an application to the cloud using a .gitlab-ci.yml file.
1) Build the backend and frontend using pip install and npm install
build_backend:
  image: python
  stage: build
  script:
    - pip install -r requirements.txt
  artifacts:
    paths:
      - backend/

build_frontend:
  image: node
  stage: build
  script:
    - npm install
    - npm run build
  artifacts:
    paths:
      - frontend
2) Run unit and functional tests using PyUnit and Python Selenium
test_unit:
  image: python
  stage: test
  script:
    - python -m unittest discover

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
3) Deploy to Google Cloud using the SDK
deploy:
  image: google/cloud-sdk
  stage: deploy
  environment:
    name: $CI_BUILD_REF_NAME
    url: https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
  script:
    - echo $GAE_KEY > /tmp/gae_key.json
    - gcloud config set project $GAE_PROJECT
    - gcloud auth activate-service-account --key-file /tmp/gae_key.json
    - gcloud --quiet app deploy --version $CI_BUILD_REF_SLUG --no-promote
  after_script:
    - rm /tmp/gae_key.json
This all runs perfectly, except that the Selenium tests run against the previously deployed URL, not the current build:
python tests/example.py http://selenium__standalone-chrome:4444/wd/hub https://$CI_BUILD_REF_SLUG-dot-$GAE_PROJECT.appspot.com
I need to have GitLab run three things simultaneously:
a) Selenium
b) A Python server with the application
c) The test script
Possible approaches to running the Python server:
Run it somehow within the same script commands as the test script
Docker in Docker
Service
Any advice, or answers would be greatly appreciated!
I wrote a blog post on how I set up web tests for a PHP application. OK, PHP, but I guess something similar can be done for a Python project.
What I did was start a PHP development server from within the container that runs the web tests. Because of the artifacts, the development server can access the PHP files. I then figure out the IP address of the container, and using this IP address the selenium/standalone-chrome container can connect back to the development server.
I created a simple demo project; you can check out its .gitlab-ci.yml file. Note that I pinned the Selenium container to an old version; this was because of an issue with an old version of the PHP webdriver package, and today it isn't needed anymore.
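Translated to the Python setup from the question, a rough sketch of that idea (app.py as the entry point, port 8000, and the hostname -i trick for the container IP are assumptions; the service and webdriver URL are taken from the question):

test_functional:
  image: python
  stage: test
  services:
    - selenium/standalone-chrome
  script:
    - pip install -r requirements.txt
    # start the application in the background inside this container
    - python app.py &
    # the selenium container must reach us by IP; "localhost" would point to itself
    - export APP_IP=$(hostname -i | awk '{print $1}')
    - sleep 5   # crude wait; poll the URL before testing in a real setup
    - python tests/example.py http://selenium__standalone-chrome:4444/wd/hub "http://${APP_IP}:8000"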