Run PHPUnit and JS tests simultaneously via CircleCI

I use CircleCI to run JS and PHP tests (Protractor / PHPUnit).
I would like to use parallelism to save time, but I don't know how to configure it. I have activated parallelism in the CircleCI settings (2 containers).
My current CircleCI configuration (circle.yml):
# Depends on app/config/parameters.circle.yml (Symfony parameters for Circle) and app/config/apache.circle (Apache configuration for Circle)
# Server configuration
machine:
  php:
    version: 5.4.21
  timezone:
    Europe/Paris
  hosts:
    bluegrey.circle.dev: 127.0.0.1
dependencies:
  pre:
    # SauceConnect (Angular)
    - wget https://saucelabs.com/downloads/sc-latest-linux.tar.gz
    - tar -xzf sc-latest-linux.tar.gz
    - ./bin/sc -u johnnyEvo -k xxx:
        background: true
        pwd: sc-*-linux
    # Install Protractor (Angular)
    - npm install -g protractor
    # Enable XDebug
    - sed -i 's/^;//' ~/.phpenv/versions/$(phpenv global)/etc/conf.d/xdebug.ini
    - echo "xdebug.max_nesting_level = 250" >> ~/.phpenv/versions/$(phpenv global)/etc/conf.d/xdebug.ini
    # Apache configuration
    - cp app/config/apache.circle /etc/apache2/sites-available
    - a2ensite apache.circle
    - sudo service apache2 restart
  override:
    # Composer
    - composer install --prefer-source --no-interaction
  post:
    # Assets
    - app/console assetic:dump
    # Parameters
    - cp app/config/parameters.circle.yml.dist app/config/parameters.yml
database:
  pre:
    # Database (test)
    - app/console doctrine:database:create --env=test --no-interaction
    - app/console doctrine:schema:update --force --env=test --no-interaction
    # Database (prod / Angular)
    - app/console doctrine:database:drop --no-interaction --force
    - app/console doctrine:database:create --no-interaction
    - app/console doctrine:schema:update --force --no-interaction
    # Fixtures
    - app/console doctrine:fixtures:load --no-interaction
test:
  pre:
    # Permissions so Protractor can browse the site
    - sudo setfacl -R -m u:www-data:rwx -m u:`whoami`:rwx app/cache app/logs app/sessions
    - sudo setfacl -dR -m u:www-data:rwx -m u:`whoami`:rwx app/cache app/logs app/sessions
  override:
    - php -d memory_limit=-1 bin/phpunit -c app
    - protractor angutest
Thank you
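(An aside on the XDebug step in configs like this: after sed uncomments xdebug.ini, the extra setting must be appended with >>; a single > would truncate the very file sed just fixed. A toy reproduction, using a temp file in place of the real xdebug.ini:)

```shell
#!/bin/sh
# Simulate the two-step xdebug.ini edit: uncomment, then append.
ini=$(mktemp)
printf ';zend_extension=xdebug.so\n' > "$ini"
sed -i 's/^;//' "$ini"                             # uncomment the extension line
echo "xdebug.max_nesting_level = 250" >> "$ini"    # >> appends; > would wipe the file
cat "$ini"   # prints the uncommented line plus the appended setting
```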

I'm one of the CircleCI devs.
The most straightforward way is to run the PHP tests on one container and the JS tests on another; if they have approximately similar runtimes, you'll get the benefit without having to split the test suites manually.
Something like the following would work in that case:
test:
  override:
    - case $CIRCLE_NODE_INDEX in 0) php -d memory_limit=-1 bin/phpunit -c app ;; 1) protractor angutest ;; esac:
        parallel: true
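To see why this works: CircleCI starts N identical containers and exports a distinct CIRCLE_NODE_INDEX (0..N-1) in each, so the same case statement routes each container to its own suite. A minimal sketch (the echo commands stand in for the real test runners):

```shell
#!/bin/sh
# Each CircleCI container sees a different CIRCLE_NODE_INDEX (0..N-1);
# the same command runs everywhere, but only one case arm fires per container.
CIRCLE_NODE_INDEX=${CIRCLE_NODE_INDEX:-0}
case $CIRCLE_NODE_INDEX in
  0) echo "node 0: run PHPUnit here" ;;
  1) echo "node 1: run Protractor here" ;;
esac
```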

Related

GitLab CI - Symfony4 - An exception occurred in driver: SQLSTATE[HY000] [2002] No such file or directory

Symfony 4.4, PHP 7.4.
My .gitlab-ci.yml:
image: php:7.4-cli
variables:
  APP_DOMAIN: $APP_DOMAIN
  MYSQL_ROOT_PASSWORD: $MYSQL_ROOT_PASSWORD # root
  MYSQL_DATABASE: $MYSQL_DATABASE # test
  MYSQL_USER: $MYSQL_USER # runner
  MYSQL_PASSWORD: $MYSQL_PASSWORD # password
  DB_HOST: $DB_HOST # mysql
  DATABASE_URL: $DATABASE_URL # DATABASE_URL=mysql://runner:password@mysql:3306/test
cache:
  paths:
    - vendor/
services:
  - name: mysql:5.7
    alias: mysql
before_script:
  - apt update -y
  - pecl install xdebug
  - docker-php-ext-enable xdebug
  - apt install -y libzip-dev zip
  - docker-php-ext-install pdo pdo_mysql zip
  - curl -sS https://getcomposer.org/installer | php
  - mv composer.phar /usr/local/bin/composer
  - composer install --prefer-dist --no-ansi --no-interaction --no-progress
stages:
  - test
  - static_analysis
phpunit-test:
  stage: test
  script:
    - php bin/console cache:clear --env=test
    - php bin/console doctrine:database:drop --if-exists --force --env=test
    - php bin/console doctrine:database:create --env=test
    - php bin/console doctrine:migrations:migrate --env=test --no-interaction
    #- php bin/console doctrine:fixtures:load --no-interaction --env=test
    - php bin/phpunit --coverage-text --colors=never
behat-test:
  stage: test
  script:
    - echo "I don't know how to run behat yet..."
phpstan:
  stage: static_analysis
  script: ./vendor/bin/phpstan analyse src tests --level=6
  dependencies:
    - phpunit-test
my .env.test
# define your env variables for the test env here
KERNEL_CLASS='App\Kernel'
APP_SECRET='$ecretf0rt3st'
SYMFONY_DEPRECATIONS_HELPER=999999
PANTHER_APP_ENV=panther
APP_ENV=test
DATABASE_URL=mysql://runner:password@mysql:3306/test
On Gitlab, during phpunit-test, at the command:
php bin/console doctrine:database:[drop|create] --env=test
the pipeline fails with:
In AbstractMySQLDriver.php line 112:
An exception occurred in driver: SQLSTATE[HY000] [2002] No such file or directory
In Exception.php line 18:
SQLSTATE[HY000] [2002] No such file or directory
In PDOConnection.php line 38:
SQLSTATE[HY000] [2002] No such file or directory
Why doesn't my MySQL image work? Is that the problem? How can I fix it?
I tried:
deleting APP_ENV and DATABASE_URL from .env.test
removing the 3306 port (DATABASE_URL=mysql://runner:password@mysql/test)
...
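(Editorial note: SQLSTATE[HY000] [2002] usually means PDO tried a local socket or an unreachable host; with a GitLab CI service, the host portion of the DSN must be the service alias — mysql here — not localhost. A rough shell check of which host a DSN actually points at, using the DSN shape from this question:)

```shell
#!/bin/sh
# Extract the host from a mysql:// DSN with plain parameter expansion.
DATABASE_URL='mysql://runner:password@mysql:3306/test'
host=${DATABASE_URL#*://}   # drop the scheme
host=${host#*@}             # drop user:password@
host=${host%%:*}            # drop :port and everything after it
host=${host%%/*}            # drop /dbname if no port was given
echo "$host"                # prints: mysql
```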
I made a mistake in the definition of the DATABASE_URL variable in GitLab!
It was (in GitLab -> Project -> Settings -> CI/CD -> Variables):
key:
DATABASE_URL
value:
DATABASE_URL=mysql://runner:password@mysql:3306/test
instead of (the solution):
key:
DATABASE_URL
value:
mysql://runner:password@mysql:3306/test
Sorry for the inconvenience
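A tiny guard like the following (names are illustrative) can catch this class of mistake early in before_script — the variable's value must be the bare DSN, never "KEY=value":

```shell
#!/bin/sh
# The CI variable should hold only the DSN; a leading "DATABASE_URL="
# means the key was pasted into the value field by mistake.
DATABASE_URL='DATABASE_URL=mysql://runner:password@mysql:3306/test'
case "$DATABASE_URL" in
  mysql://*)   echo "ok: looks like a DSN" ;;
  *=mysql://*) echo "error: the value still contains a KEY= prefix" ;;
esac
```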

gitlab-ci job not running script

I am new to GitLab CI and am trying a minimal Python application based on a GitLab template.
My .gitlab-ci.yml file is below:
# This file is a template, and might need editing before it works on your project.
# Official language image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/python/tags/
#image: python:latest
# Change pip's cache directory to be inside the project directory since we can
# only cache local items.
variables:
  PIP_CACHE_DIR: "$CI_PROJECT_DIR/.cache"
stages:
  - test
  - run
# Pip's cache doesn't store the python packages
# https://pip.pypa.io/en/stable/reference/pip_install/#caching
#
# If you want to also cache the installed packages, you have to install
# them in a virtualenv and cache it as well.
#cache:
#  paths:
#    - .cache/pip
#    - venv/
before_script:
  - python -V  # Print out python version for debugging
  #- pip install virtualenv
  #- virtualenv venv
  - python -m venv venv
  - venv/scripts/activate
job1:
  stage: test
  script:
    - python setup.py test
    #- pip install tox flake8  # you can also use tox
    #- tox -e py36,flake8
job2:
  stage: run
  script:
    - pip install wheel
    - python setup.py bdist_wheel
  artifacts:
    paths:
      - dist/*.whl
#pages:
#  script:
#    - pip install sphinx sphinx-rtd-theme
#    - cd doc ; make html
#    - mv build/html/ ../public/
#  artifacts:
#    paths:
#      - public
#  only:
#    - master
The jobs are visible in the GitLab web UI and they appear to run on my (Windows-based, shell executor) runner.
When I look at the output for the jobs, it appears that the actual script commands for each job aren't running at all.
Here's the output from job1:
Running with gitlab-runner 11.2.0 (35e8515d)
on GKUHN-L04 b0162458
Using Shell executor...
Running on GKUHN-L04...
Fetching changes...
Removing venv/
HEAD is now at 2484105 And agai..
From https://gitlab.analog.com/GKuhn/test_gitlab_ci
- [deleted] (none) -> origin/test_ci
fdd4216..cd618ba master -> origin/master
Checking out cd618ba9 as master...
Skipping Git submodules setup
$ python -V
Python 3.7.0
$ python -m venv venv
$ venv/scripts/activate
Job succeeded
And job2:
Running with gitlab-runner 11.2.0 (35e8515d)
on GKUHN-L04 b0162458
Using Shell executor...
Running on GKUHN-L04...
Fetching changes...
Removing venv/
HEAD is now at cd618ba updated .gitlab-ci.yml file
Checking out cd618ba9 as master...
Skipping Git submodules setup
$ python -V
Python 3.7.0
$ python -m venv venv
$ venv/scripts/activate
Uploading artifacts...
WARNING: dist/*.whl: no matching files
ERROR: No files to upload
Job succeeded
What am I doing wrong?
Turns out this is a bug on Windows with the shell executor:
https://gitlab.com/gitlab-org/gitlab-runner/issues/2730
And a duplicate of this question: Gitlab CI does not execute npm scripts
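(For the record, the linked issue boils down to: on a Windows cmd shell executor, invoking a .bat file such as venv/scripts/activate directly hands control to the batch script and never returns, so the remaining lines of the job are silently skipped. The usual workaround — a sketch, assuming a cmd-based runner — is to prefix batch invocations with call:)

```yaml
before_script:
  - python -V
  - python -m venv venv
  # "call" returns control to the job after the batch script finishes;
  # without it, everything after this line is skipped on cmd.
  - call venv\Scripts\activate.bat
```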

Run selenium in the host with docker and Capybara

I have Capybara tests in a Docker container. I use this to set up Selenium:
Capybara.register_driver :selenium do |app|
  require 'selenium/webdriver'
  Selenium::WebDriver::Firefox::Binary.path = ENV['FIREFOX_BINARY_PATH'] || Selenium::WebDriver::Firefox::Binary.path
  Capybara::Selenium::Driver.new(app, :browser => :firefox)
end
It works when we run the tests under xvfb, but I want to see the real browser while the tests are running, so I'm looking for a way to use the browser on the host.
I think it's possible to launch geckodriver on the host and share port 4444, but I haven't succeeded yet; Capybara launches a new geckodriver instance in the container each time.
What can I do?
Edit 1: more info
Here is all the config I have for Capybara:
#<Capybara::SessionConfig:0x0055ce67731a00
 @always_include_port=false,
 @app_host="http://domain-test.engagement.lvh.me:1300",
 @automatic_label_click=false,
 @automatic_reload=true,
 @default_host="http://www.example.com",
 @default_max_wait_time=5,
 @default_selector=:css,
 @enable_aria_label=false,
 @exact=false,
 @exact_text=false,
 @ignore_hidden_elements=true,
 @match=:smart,
 @raise_server_errors=true,
 @run_server=true,
 @save_path=#<Pathname:/app/tmp/capybara>,
 @server_errors=[StandardError],
 @server_host=nil,
 @server_port=1300,
 @visible_text_only=false,
 @wait_on_first_by_default=false>
Here is my docker-compose file :
version: '3'
services:
web:
build: .
command: rails s -b 0.0.0.0
working_dir: /app
volumes:
- .:/app
- ./tmp/bundle:/usr/local/bundle
- $SSH_AUTH_SOCK:/ssh-agent
environment:
- BUNDLE_JOBS=4
- SSH_AUTH_SOCK=/ssh-agent
- MONGO_HOST=mongo
- REDIS_HOST=redis
- MEMCACHE_HOST=memcache
ports:
- "80:3000"
- "1300:1300"
links:
- mongo
- redis
- memcache
mongo:
image: mongo:3.4.9
volumes:
- ~/data/mongo/db:/data/db
redis:
image: redis:2.8.17
volumes:
- ~/data/redis:/data
memcache:
image: memcached:1.5-alpine
And finally my Dockerfile :
FROM ruby:2.3.1
RUN apt-get update && apt-get install -y build-essential qt5-default \
libqt5webkit5-dev gstreamer1.0-plugins-base gstreamer1.0-tools gstreamer1.0-x \
xvfb rsync
ARG GECKODRIVER_VERSION=0.19.0
RUN wget --no-verbose -O /tmp/geckodriver.tar.gz https://github.com/mozilla/geckodriver/releases/download/v$GECKODRIVER_VERSION/geckodriver-v$GECKODRIVER_VERSION-linux64.tar.gz \
&& rm -rf /opt/geckodriver \
&& tar -C /opt -zxf /tmp/geckodriver.tar.gz \
&& rm /tmp/geckodriver.tar.gz \
&& mv /opt/geckodriver /opt/geckodriver-$GECKODRIVER_VERSION \
&& chmod 755 /opt/geckodriver-$GECKODRIVER_VERSION \
&& ln -fs /opt/geckodriver-$GECKODRIVER_VERSION /usr/bin/geckodriver
RUN apt-get install -y libgtk-3-dev \
&& wget --no-verbose https://ftp.mozilla.org/pub/firefox/releases/56.0/linux-x86_64/en-US/firefox-56.0.tar.bz2 \
&& tar -xjf firefox-56.0.tar.bz2 \
&& mv firefox /opt/firefox56 \
&& ln -s /opt/firefox56/firefox /usr/bin/firefox
ENV TZ Europe/Paris
RUN echo $TZ > /etc/timezone && \
apt-get update && apt-get install -y tzdata && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get clean
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 0C49F3730359A14518585931BC711F9BA15703C6 && \
echo "deb http://repo.mongodb.org/apt/debian jessie/mongodb-org/3.4 main" | tee /etc/apt/sources.list.d/mongodb-org-3.4.list && \
apt-get update && \
apt-get install -y mongodb-org
RUN gem install bundler
RUN mkdir /app
WORKDIR /app
In order to get Selenium to use a remote geckodriver instance you need to provide the url option to it.
Capybara.register_driver :selenium do |app|
  require 'selenium/webdriver'
  Capybara::Selenium::Driver.new(app, :browser => :firefox, url: 'http://<your ip as reachable from docker>:<port geckodriver is available on>')
end
This will then require you to run geckodriver on the machine you want Firefox to run on, possibly using the --binary option to specify where Firefox is located. It will also probably require setting Capybara.app_host (and possibly Capybara.always_include_port, depending on your exact configuration) so the browser requests are routed back to the app under test running on the Docker instance.
Another thing to consider is that the AUT will need to be bound to an interface on the docker instance which is reachable from the host. By default Capybara binds to the 127.0.0.1 interface which probably isn't reachable, so you can set Capybara.server = '0.0.0.0' to bind to all available interfaces, or specify the specific external interface.
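(On the host, that typically means starting geckodriver bound to an address the container can reach. The flags below are geckodriver's standard options; the Firefox path is an example and depends on your host:)

```shell
# Run on the host, not in the container. 0.0.0.0 makes geckodriver
# reachable from Docker; --binary points at the Firefox you want to watch.
geckodriver --host 0.0.0.0 --port 4444 --binary /usr/bin/firefox
```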

Protractor on travis-ci. Unable to connect to host localhost on port 7055 after 45000 ms

I'm trying to run Protractor (via grunt) on Travis CI. My tests run successfully when I run them locally, but I keep hitting the following error on Travis CI. Thanks in advance.
Here is my full project in case any other files are required:
https://github.com/crobby/oshinko-console/tree/travis-integration
$ grunt test-integration --baseUrl=https://${IP}
Running "protractor:default" (protractor) task
Starting selenium standalone server...
[launcher] Running 1 instances of WebDriver
Selenium standalone server started at http://10.10.20.130:40443/wd/hub
ERROR - Unable to start a WebDriver session.
/home/travis/build/crobby/oshinko-console/node_modules/selenium-webdriver/lib/atoms/error.js:113
var template = new Error(this.message);
UnknownError: Unable to connect to host localhost on port 7055 after 45000 ms.
My .travis.yml file looks like
sudo: required
## use node_js
language: node_js
node_js:
  - "6"
## home folder is /home/travis/build/radanalyticsio/oshinko-console
services:
  - docker
before_install:
  ## add insecure-registry and restart docker
  - pwd
  - sudo cat /etc/default/docker
  - sudo service docker stop
  - sudo sed -i -e 's/sock/sock --insecure-registry 172.30.0.0\/16/' /etc/default/docker
  - sudo cat /etc/default/docker
  - sudo service docker start
  - sudo service docker status
  ## chmod needs sudo, so all other commands are with sudo
  - sudo mkdir -p /home/travis/origin
  - sudo chmod -R 766 /home/travis/origin
  ## download oc 1.5.1 binary
  - sudo wget https://github.com/openshift/origin/releases/download/v1.5.1/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz -P /home/travis/origin
  - sudo ls -l /home/travis/origin
  - sudo tar -C /home/travis/origin -xvzf /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz
  - sudo ls -l /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit
  - sudo cp /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit/oc /home/travis/origin
  - sudo chmod -R 766 /home/travis/origin
  - sudo ls -l /home/travis/origin
  - sudo chmod -R +755 /home/travis/origin/*
  - sudo cp /home/travis/origin/oc /bin
  - sudo ls -l /bin
  - oc version
  - export PATH=$PATH:/home/travis/origin/
  - echo $PATH
  ## below cmd is important to get oc working in ubuntu
  - sudo docker run -v /:/rootfs -ti --rm --entrypoint=/bin/bash --privileged openshift/origin:v1.5.1 -c "mv /rootfs/bin/findmnt /rootfs/bin/findmnt.backup"
  - oc cluster up --host-config-dir=/home/travis/origin
  - sudo ls -l /home/travis/origin
  - oc cluster down
  ## get the latest release code
  - sudo cp dist/scripts/templates.js /home/travis/origin/master
  - sudo cp dist/scripts/scripts.js /home/travis/origin/master
  - sudo cp dist/styles/oshinko.css /home/travis/origin/master
  - sudo chmod -R 766 /home/travis/origin/master
  - sudo ls -l /home/travis/origin/master
  ## add changes to master-config.yaml
  - "sudo sed -i -e \"s/extensionScripts: null/extensionScripts:\\n - templates.js\\n - scripts.js/\" /home/travis/origin/master/master-config.yaml"
  - "sudo sed -i -e \"s/extensionStylesheets: null/extensionStylesheets:\\n - oshinko.css/\" /home/travis/origin/master/master-config.yaml"
  - sudo cat /home/travis/origin/master/master-config.yaml
  ## oc cluster up
  - oc cluster up --host-config-dir=/home/travis/origin --use-existing-config=true
  ## find IP:PORT of openshift
  - IPSTR=`oc status |grep server`
  - echo $IPSTR
  - IP=${IPSTR##*/}
  - echo ${IP}
install:
  - npm install grunt-cli -g
  - npm install
  - npm install -g protractor
  - node_modules/protractor/bin/webdriver-manager update
  - node_modules/protractor/bin/webdriver-manager status
  - cat node_modules/protractor/config.json
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
script:
  - echo ${IP}
  ## integration tests need headless setup
  - grunt test-integration --baseUrl=https://${IP}
notifications:
  email:
    on_success: never
    on_failure: never
It looks like I needed an older Firefox to be compatible with the other components I'm using.
The working .travis.yml is below.
sudo: required
## use node_js
language: node_js
node_js:
  - "6"
addons:
  firefox: "46.0"
## home folder is /home/travis/build/radanalyticsio/oshinko-console
services:
  - docker
before_install:
  ## add insecure-registry and restart docker
  - pwd
  - sudo cat /etc/default/docker
  - sudo service docker stop
  - sudo sed -i -e 's/sock/sock --insecure-registry 172.30.0.0\/16/' /etc/default/docker
  - sudo cat /etc/default/docker
  - sudo service docker start
  - sudo service docker status
  ## chmod needs sudo, so all other commands are with sudo
  - sudo mkdir -p /home/travis/origin
  - sudo chmod -R 766 /home/travis/origin
  ## download oc 1.5.1 binary
  - sudo wget https://github.com/openshift/origin/releases/download/v1.5.1/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz -P /home/travis/origin
  - sudo ls -l /home/travis/origin
  - sudo tar -C /home/travis/origin -xvzf /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit.tar.gz
  - sudo ls -l /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit
  - sudo cp /home/travis/origin/openshift-origin-client-tools-v1.5.1-7b451fc-linux-64bit/oc /home/travis/origin
  - sudo chmod -R 766 /home/travis/origin
  - sudo ls -l /home/travis/origin
  - sudo chmod -R +755 /home/travis/origin/*
  - sudo cp /home/travis/origin/oc /bin
  - sudo ls -l /bin
  - oc version
  - export PATH=$PATH:/home/travis/origin/
  - echo $PATH
  ## below cmd is important to get oc working in ubuntu
  - sudo docker run -v /:/rootfs -ti --rm --entrypoint=/bin/bash --privileged openshift/origin:v1.5.1 -c "mv /rootfs/bin/findmnt /rootfs/bin/findmnt.backup"
  - oc cluster up --host-config-dir=/home/travis/origin
  - sudo ls -l /home/travis/origin
  - oc cluster down
  ## get the latest release code
  - sudo cp dist/scripts/templates.js /home/travis/origin/master
  - sudo cp dist/scripts/scripts.js /home/travis/origin/master
  - sudo cp dist/styles/oshinko.css /home/travis/origin/master
  - sudo chmod -R 766 /home/travis/origin/master
  - sudo ls -l /home/travis/origin/master
  ## add changes to master-config.yaml
  - "sudo sed -i -e \"s/extensionScripts: null/extensionScripts:\\n - templates.js\\n - scripts.js/\" /home/travis/origin/master/master-config.yaml"
  - "sudo sed -i -e \"s/extensionStylesheets: null/extensionStylesheets:\\n - oshinko.css/\" /home/travis/origin/master/master-config.yaml"
  - sudo cat /home/travis/origin/master/master-config.yaml
  ## oc cluster up
  - oc cluster up --host-config-dir=/home/travis/origin --use-existing-config=true
  ## find IP:PORT of openshift
  - IPSTR=`oc status |grep server`
  - echo $IPSTR
  - IP=${IPSTR##*/}
  - echo ${IP}
install:
  - npm install grunt-cli -g
  - npm install
  - npm install grunt-protractor-runner@1.2.1
  - ./node_modules/grunt-protractor-runner/scripts/webdriver-manager-update
before_script:
  - export DISPLAY=:99.0
  - sh -e /etc/init.d/xvfb start
script:
  - echo ${IP}
  ## integration tests need headless setup
  - grunt test-integration --baseUrl=https://${IP}
notifications:
  email:
    on_success: never
    on_failure: never

Laravel Continuous Integration with Gitlab-runner in offline environment (CentOS 7)

I'm developing a website in a totally offline environment. I use gitlab-runner for CI, and the host is CentOS 7.
The problem is that gitlab-runner deploys the Laravel application as the gitlab-runner user, while Apache runs Laravel as the apache user.
I got Permission denied errors from Apache until I changed the ownership of the files. After that I get this error in the Apache log:
Uncaught UnexpectedValueException: The stream or file "storage/logs/laravel.log" could not be opened: failed to open stream: Permission denied
It seems that some vendor libraries such as Monolog want to write error or debug logs to storage/logs/laravel.log, but they get permission denied. :(
.gitlab-ci.yml
stages:
  - build
  - test
  - deploy
buildBash:
  stage: build
  script:
    - bash build.sh
testBash:
  stage: test
  script:
    - bash test.sh
deployBash:
  stage: deploy
  script:
    - sudo bash deploy.sh
build.sh
#!/bin/bash
set -xe
# creating env file from production file
cp .env.production .env
# initializing laravel
php artisan key:generate
php artisan config:cache
# database migration
php artisan migrate --force
deploy.sh
#!/bin/bash
# use a distinct variable name; overwriting the shell's special PWD variable is fragile
PUB=$(pwd)'/public'
STG=$(pwd)'/storage'
ln -s $PUB /var/www/html/public
chown -R apache:apache /var/www/html/public
chmod -R 755 /var/www/html/public
chmod -R 775 $STG
Am I using gitlab-runner correctly? How can I fix the permission denied error?
SELinux
I found the problem: as always, it was SELinux, and I had ignored it at the beginning.
What's the problem:
You can see the SELinux context of files with the ls -lZ command. By default, all files under /var/www get the httpd_sys_content_t type; the problem is that SELinux only allows Apache to read such files. You should change the context of storage and bootstrap/cache so they are writable.
There are four Apache context types:
httpd_sys_content_t: read-only directories and files
httpd_sys_rw_content_t: readable and writable directories and files used by Apache
httpd_log_t: used by Apache for log files and directories
httpd_cache_t: used by Apache for cache files and directories
What to do:
First, install policycoreutils-python for the extra management commands:
yum install -y policycoreutils-python
After installing policycoreutils-python, the semanage command is available, so you can change the file contexts like this:
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/storage(/.*)?"
semanage fcontext -a -t httpd_sys_rw_content_t "/var/www/html/laravel/bootstrap/cache(/.*)?"
Don't forget to apply the changes with restorecon:
restorecon -Rv /var/www/html/laravel/storage
restorecon -Rv /var/www/html/laravel/bootstrap/cache
The problem is solved. :)
ref: http://www.serverlab.ca/tutorials/linux/web-servers-linux/configuring-selinux-policies-for-apache-web-servers/