How to bind Jenkins build output with test results? - selenium

I'm setting up automated Protractor tests to run in a Docker container with the help of Jenkins, but I haven't been able to make the Jenkins build result reflect the testing outcome (if some test fails, the build should fail as well).
Importantly, all tests should still run, even if the first one fails.
The tests are initiated with docker-compose up --abort-on-container-exit and my docker-compose file looks like:
version: '2'
services:
  selenium:
    image: selenium/standalone-chrome
    ports:
      - 4444:4444
    volumes:
      - /dev/shm:/dev/shm
  protractor:
    volumes:
      - ./reporting:/assets/reporting
    image: protractor-test
    command: "dockerize -wait http://selenium:4444 -timeout 60m protractor /assets/conf.js"

Looks like your docker-compose command is returning exit code 0 no matter what.
How about using a Jasmine XUnit reporter to generate a test report, copying the generated XML report out of the container (using docker cp), and then publishing it with a Jenkins post-build action?
The job will be marked as failed if the XML is not present (meaning there was an error during the test run), or as unstable if any of the test assertions failed.
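A minimal sketch of what that could look like in the Protractor config, assuming the jasmine-reporters package is installed in the protractor-test image; the Selenium address and spec paths below are assumptions, and the save path matches the ./reporting bind mount from the compose file above:
// /assets/conf.js (sketch): write one JUnit-style XML file per spec
exports.config = {
  seleniumAddress: 'http://selenium:4444/wd/hub',  // assumed hub URL inside the compose network
  specs: ['specs/*.spec.js'],                      // hypothetical spec location
  onPrepare: function () {
    var reporters = require('jasmine-reporters');
    jasmine.getEnv().addReporter(new reporters.JUnitXmlReporter({
      savePath: '/assets/reporting',               // bind-mounted to ./reporting on the host
      consolidateAll: false
    }));
  }
};
Since ./reporting is already bind-mounted in the compose file, the docker cp step may not even be necessary; pointing a "Publish JUnit test result report" post-build action at reporting/*.xml would then mark the build unstable on assertion failures and failed when no report was produced.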

Related

docker-compose using cached file with pytest

I've configured IntelliJ to use Python via a stack I've defined in docker-compose, and I've configured my project to execute my pytest tests via docker-compose so that I can use the debugger. However, I've discovered that after the initial run, when I change my code and re-run my tests, pytest does not see my changes but instead executes a cached version of the code.
The only way I've discovered to get around this is to invoke the File menu option Invalidate Caches and Restart. This is annoying.
This is my compose file:
networks:
  app: {}
services:
  item-set-definitions:
    build:
      context: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
      target: build
    command:
      - /bin/bash
    image: item-sets:test
    networks:
      app: {}
    volumes:
      - source: /Users/kudrykma/ghoildex/kudrykma/analytics/sa-item-set-definitions
        target: /project
        type: bind
version: '3.9'
In the pytest Run configuration I've tried adding the --force-recreate option in the docker-compose "Command and options" field, but IntelliJ won't recognize it.
Does anyone know how I can configure IntelliJ not to cache any of my source files, so that pytest will see my changed code?
Thank you
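One way to tell an IDE cache apart from a stale container is to run the tests (or just inspect a file) through docker-compose directly. A sketch using the service name from the compose file above, assuming pytest is available in the image; the test path is hypothetical:
# Run pytest in a throwaway container against the bind-mounted source
docker-compose run --rm item-set-definitions pytest /project/tests
# Or recreate the service container from the command line, which is what
# --force-recreate would do from inside the IDE run configuration
docker-compose up --force-recreate item-set-definitions
If a freshly created container still runs the old code, the issue is likely with the bind mount or image build rather than an IntelliJ cache.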

Selenium side runner + chromedriver tests with docker not running

I am trying to get selenium side runner to run some tests using docker, to include in our CI.
I am able to run the tests locally on my machine by running:
selenium-side-runner C:\path-to-tests\tests-selenium.side
This is a Windows host.
I am trying to do the same using Docker locally, so that afterwards I can migrate this to our TeamCity.
First I am running the selenium server container:
docker run -d -p 4444:4444 --name chromedriver selenium/standalone-chrome:3.4.0
Afterwards I run the selenium side runner container:
docker run -v C:\path-to-tests:/sides --link chromedriver:chromedriver nixel2007/docker-selenium-side-runner
I have to link the containers, otherwise I get an error saying that the container can't connect to chromedriver:4444.
I also have to mount the volume where my tests are.
When I do this and run, I get the following error:
Test suite failed to run
WebDriverError: Unable to parse new session response
What am I missing here?
UPDATE:
I also tried different versions of the selenium/standalone-chrome container: selenium/standalone-chrome:3.4.0, selenium/standalone-chrome:3.141.59-xenon, and selenium/standalone-chrome:latest.
All fail with different errors.
SECOND UPDATE:
I have been able to get the tests to run, both locally and in TeamCity. One of the issues I am facing right now is that docker-compose seems to hang; I'm not sure whether this is container related or docker-compose related.
When I run the tests, the selenium-side-runner container exits with code 1 and I do not get back to the host console prompt; it waits forever for something to happen.
The error is this:
selenium_selenium-side-runner_1 exited with code 1
I have gotten the docker-compose file from here:
https://github.com/nixel2007/docker-selenium-side-runner/blob/master/docker-compose.yml
Any clues on what I might be missing?
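Since docker-compose keeps the stack (and the console) attached as long as any service is still running (here, the Selenium server), one thing worth trying is to stop everything once the runner exits and propagate its exit code. This is a sketch with a hypothetical service name for the runner, not taken from the linked compose file:
# Stop all services as soon as one container exits, and use the side
# runner's exit status as the exit status of docker-compose itself
docker-compose up --abort-on-container-exit --exit-code-from selenium-side-runner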

Using multiple runners in one gitlab-ci

I want to run a CI pipeline with 2 jobs:
one job will boot up a Docker image with the Docker runner and run the tests inside Docker,
the other will run under an SSH runner and pull the code onto a remote server.
Is it possible?
Yes, it's possible. You need to:
Register two GitLab Runners with the needed executors (docker and shell), each with a different tag (or at least one of them with a build tag).
Declare the matching tag for each job in your .gitlab-ci.yml.
Shell runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-shell-runner
Please enter the gitlab-ci tags for this runner (comma separated):
shell
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
shell
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
Docker runner registration:
[root@jsc00mca ~]# gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://example.com/
Please enter the gitlab-ci token for this runner:
1a2b3c
Please enter the gitlab-ci description for this runner:
[jsc00mca.example.com]: my-docker-runner
Please enter the gitlab-ci tags for this runner (comma separated):
docker
Whether to run untagged builds [true/false]:
[false]:
Whether to lock the Runner to current project [true/false]:
[true]:
Registering runner... succeeded runner=ajgHxcNz
Please enter the executor: virtualbox, docker+machine, kubernetes, docker, shell, ssh, docker-ssh+machine, docker-ssh, parallels:
docker
Please enter the default Docker image (e.g. ruby:2.1):
alpine:latest
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
.gitlab-ci.yml
buildWithShell:
  stage: build
  tags:
    - shell
  script:
    - echo 'Building with the shell executor...'
buildWithDocker:
  image: alpine:latest
  stage: build
  tags:
    - docker
  script:
    - echo 'Building with the docker executor...'
Yes, you can trigger different/mixed runners from a single gitlab-ci pipeline.
First you should register a shell runner on the target host and give it a tag (truncated):
$ gitlab-runner register
...
Please enter the gitlab-ci tags for this runner (comma separated):
my_shell_runner
...
Please enter the executor: virtualbox, docker+machine, docker-ssh+machine, docker, docker-ssh, parallels, shell, ssh:
shell
Within your gitlab-ci.yaml, something like this should work.
The 'test' job runs your test command in a docker container based on the image NAME_OF_IMAGE.
If that succeeds, the 'deploy' job chooses your shell runner based on the tag 'my_shell_runner' and will execute all commands within the script tag on the runner's host (truncated):
test:
  stage: test
  services:
    - docker:dind
  tags:
    - docker-executor
  script:
    - docker run --rm NAME_OF_IMAGE sh -c "TEST_COMMAND_TO_RUN"
deploy:
  stage: deploy
  tags:
    - my_shell_runner
  script:
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN
    - COMMAND_TO_RUN

Can't get tests to pass on Gitlab CI

I've been trying to get our tests to pass on our GitLab CI, but I can't. I'm using the stock pipelines config that comes with GitLab; all I've had to do is provide the GitLab YAML file to configure the CI.
This is what we're using:
image: maven:3.5.0-jdk-8-alpine
services:
  - postgres:latest
variables:
  POSTGRES_DB: my_test_db
  POSTGRES_USER: my_test_user
  POSTGRES_PASSWORD: ""
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository -Dorg.slf4j.simpleLogger.log.org.apache.maven.cli.transfer.Slf4jMavenTransferListener=WARN -Dorg.slf4j.simpleLogger.showDateTime=true -Djava.awt.headless=true"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
  ACTIVE_ENV: test
connect:
  image: postgres
  script:
    # official way to provide password to psql: http://www.postgresql.org/docs/9.3/static/libpq-envars.html
    - export PGPASSWORD=$POSTGRES_PASSWORD
    - psql -h "postgres" -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "SELECT 'OK' AS status;"
stages:
  - test
test:
  stage: test
  script:
    - "mvn -Denvironments=test -B db-migrator:migrate; mvn -Denvironments=test -DACTIVE_ENV=test -B test"
Everything works perfectly up to the point where the tests run. Then they all error out with similar messages:
383 [main] WARN org.javalite.activeweb.DBSpecHelper - no DB connections are configured, none opened
456 [main] WARN org.javalite.activeweb.DBSpecHelper - no DB connections are configured, none opened
Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 0.528 sec <<< FAILURE! - in app.models.RoleTest
validatePresenceOfUsers(app.models.RoleTest) Time elapsed: 0.071 sec <<< ERROR!
org.javalite.activejdbc.DBException: Failed to retrieve metadata from DB, connection: 'default' is not available
I have one database.properties file that is checked in and is for tests only (our dev and prod envs use JNDI). It looks like this:
test.driver=org.postgresql.Driver
test.username=my_test_user
test.password=
test.url=jdbc:postgresql://postgres/edv_test
Again, migrations run using all this exact same config. I just can't figure out why the tests won't run. I understand why it's saying there's no default db, but I don't get why it's not seeing the test settings and configuring that connection as expected.
Just so you know, the Maven environments flag, as in mvn test -Denvironments=test, only works for the DB-Migrator, not for the tests. Any JavaLite application, whether in standard running mode or under test, looks at ACTIVE_ENV; if it is not set, it assumes development. In test mode, it looks at the database.properties block development.test.xxx=yyy, as described in http://javalite.io/database_configuration#property-file-configuration.
Think of it as the "development" environment in "test" mode.
Additionally, DbConfig is not involved in tests, as database connections in tests get special treatment (rolled-back transactions); see http://javalite.io/testing_with_db_connection.
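Following that description, a sketch of how the checked-in database.properties could be keyed so the tests pick up the connection (values copied from the question's file; the keys use the development.test prefix the answer refers to):
# database.properties (sketch): "development" environment, "test" mode
development.test.driver=org.postgresql.Driver
development.test.username=my_test_user
development.test.password=
development.test.url=jdbc:postgresql://postgres/edv_test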

Gitlab CI Different executor per stage

Is it possible to have 2 stages in gitlab-ci.yml, one run with a Docker runner and the other run with shell?
Imagine I want to run the tests in a Docker container, but run the deploy stage in a shell locally.
Not exactly per stage, but you can have different jobs run by different runners using the tags configuration option, which should give you exactly what you want.
Add the tag docker to the Docker runner and the tag shell to the shell runner (either during runner creation or later in Project settings -> Runners). Then you can set the tags in your .gitlab-ci.yml file:
stages:
  - test
  - deploy
tests:
  stage: test
  tags:
    - docker
  script:
    - [test routine]
deployment:
  stage: deploy
  tags:
    - shell
  script:
    - [deployment routine]
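For completeness, a sketch of registering the two runners non-interactively with matching tags (the URL and registration token are placeholders):
gitlab-runner register --non-interactive --url https://example.com/ --registration-token TOKEN \
  --executor docker --docker-image alpine:latest --tag-list docker --description my-docker-runner
gitlab-runner register --non-interactive --url https://example.com/ --registration-token TOKEN \
  --executor shell --tag-list shell --description my-shell-runner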